Data Center Knowledge | News and analysis for the data center industry
Wednesday, September 2nd, 2015
12:00p
How to Manage Cloud Resources Wisely

The cloud isn’t perfect. There are still outages, challenges around replicating pieces of an environment, and even confusion around all the different kinds of services the cloud can provide today. Fortunately, the entire cloud model is becoming a bit easier to understand and deploy. Why? There are simply more use cases for such a powerful architecture.
Businesses of all sizes are quickly realizing that their direct competitive advantage may very well revolve around the capabilities of the cloud. However, with that in mind, what should organizations of various sizes do about physical resource requirements? What about infrastructure expansion? Most of all, what are the limitations of your cloud?
Let’s take a look at four important considerations for creating your own cloud architecture and understanding the environment around it.
- Physical resource requirements. Even in hybrid cloud environments, physical resource considerations must be made. Costs need to be kept in check, and planning around physical resource requirements is very important. By knowing and understanding all previously discussed cloud requirements, administrators can create a more solid plan around what their infrastructure will actually need. Remember, under- or over-provisioning resources can waste precious budgeted dollars. This is why it’s important to know what the cloud environment will be doing, how it will be accessed, and what workloads will be delivered. To maintain this level of understanding, organizations will have to use tools that provide visibility into their cloud-based infrastructure.
- Cloud data center limitations. Just because resources are located in the cloud does not mean there are endless amounts of them. This is where “out of sight, out of mind” can become a serious issue. When planning and balancing resources, it’s important to understand the limitations of a cloud deployment: how many users can a single physical host handle? What are the WAN link limits that could create network-related bottlenecks? Pay-as-you-go models can also get very pricey if administrators provision resources without really understanding their use or benefits. A good cloud resource balancing plan will include maximum capacity considerations for all vital cloud components, including WAN, storage, and physical server limitations. By knowing how much an environment can handle, IT managers can plan around these limitations and arrange for extra resources to become available when needed. There is a big difference between having resources sitting idle and deploying cloud resources only when the need arises.
- Infrastructure expansion. The goal of any organization is to grow and become more efficient in its business practices. IT plays a big role in this, and cloud computing can certainly help. When planning out and balancing resources, it’s important to plan for business expansion. With growth at the organizational level, IT managers must keep up with the demands placed on their environment. Instead of pointing users to an existing local data center, plans might be made to route new users to a cloud environment. This is a great example of where a private or hybrid cloud deployment can be very powerful. By using WAN-based resources, corporate IT staff can deliver entire centrally managed workloads to users all over the world. The best part is that these users don’t require a local data center to be fully productive. With infrastructure expansion comes a further need for visibility: the more granular the view into a cloud environment, the better the resource management. Agile cloud platforms are able to scale up and down as needed mainly because administrators know exactly what resources they have available. Effective cloud growth can only come with solid cloud resource balancing and distribution.
- Workload utilization. Cloud environments vary greatly. Some organizations are simply giving users access to a few applications, while others are delivering entire desktops via VDI. The workload type carries a lot of weight in how cloud resources are managed and distributed. Again, this is where visibility plays an important role. Depending on what is being pushed down to the end user, resource usage will have to be carefully monitored at the cloud level. This means constantly watching cloud usage spikes, how resources are provisioned, and where potential bottlenecks are occurring.
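The provisioning and visibility concerns above can be sketched in code. As a hedged illustration (the thresholds, host names, and utilization figures are hypothetical, not drawn from any specific monitoring product), a simple report might flag hosts whose provisioning looks wasteful or risky:

```python
# Illustrative thresholds; real environments would tune them per workload.
UNDER_UTILIZED = 0.20   # paying for mostly idle capacity
OVER_UTILIZED = 0.85    # risk of resource contention

def classify_hosts(utilization):
    """Group hosts by observed fraction of provisioned capacity in use."""
    report = {"under": [], "over": [], "ok": []}
    for host, used_fraction in sorted(utilization.items()):
        if used_fraction < UNDER_UTILIZED:
            report["under"].append(host)
        elif used_fraction > OVER_UTILIZED:
            report["over"].append(host)
        else:
            report["ok"].append(host)
    return report

# Hypothetical utilization snapshot from a visibility tool
sample = {"web-01": 0.12, "db-01": 0.91, "app-01": 0.55}
print(classify_hosts(sample))
```

The point of the sketch is the feedback loop, not the thresholds: without some such visibility data, under- and over-provisioning go unnoticed until the bill or the bottleneck arrives.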
There are many reasons to move to a cloud-based environment. Organizations looking to leverage cloud computing already have a certain goal in mind for what they want to accomplish. A mind-shift is occurring among IT administrators, from a single data center to a distributed, cloud-ready infrastructure. When properly sized and balanced, cloud technologies let organizations achieve their business objectives efficiently.

3:00p
Addressing the Top Five DCI Challenges

Brian Lavallée is Director of Product and Technology Solutions at Ciena.
According to recent survey data from IDC, enterprises continue to expand their use of the cloud. The study revealed that the number of enterprises that will connect their corporate networks to two or more cloud service providers will triple within the next two years from 29 to 86 percent. As these services become increasingly prevalent, the interconnection between provider data centers and customers’ data centers becomes critical, driving demand for seamless capacity, reliability, and flexibility.
To achieve efficient Data Center Interconnect (DCI), content, network, and hosting providers, along with traditional enterprises, must transform and modernize network operations. Here are five major challenges organizations will need to overcome in this process.
Distance
Data centers often require a minimum-latency connection to maintain proper operation of time-sensitive applications and precise synchronization between the server sending the information and the storage device saving it. When the data centers that need to be connected are physically far apart, latency increases as a function of the distance between the data centers housing content and applications and the end user’s equipment, such as smartphones.
Although choosing the shortest physical route can minimize fiber-induced latency, networking equipment must also keep hardware-induced latency to a minimum through proper design practices. Breakthroughs in Digital Signal Processing (DSP), latency-aware traffic routing, and optical transmission technologies have allowed networking equipment providers to introduce platforms capable of ensuring minimal latency without compromising speed or performance.
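The distance-to-latency relationship is easy to quantify. Light in optical fiber travels at roughly two-thirds of its vacuum speed (about 200,000 km/s, given a group refractive index near 1.5), so fiber alone adds on the order of 5 microseconds of one-way delay per kilometer. A short sketch, with illustrative route lengths:

```python
C_VACUUM_KM_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.5             # approximate group refractive index of silica fiber

def fiber_rtt_ms(route_km):
    """Round-trip fiber propagation delay in milliseconds (ignores equipment hops)."""
    one_way_s = route_km / (C_VACUUM_KM_S / FIBER_INDEX)
    return 2 * one_way_s * 1000

for km in (10, 100, 1000):
    print(f"{km:>5} km route: ~{fiber_rtt_ms(km):.2f} ms round trip")
```

A 100 km route costs about 1 ms round trip before any hardware-induced latency is counted, which is why both route selection and equipment design matter for time-sensitive applications.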
Bandwidth
Due to the rapid growth of video-centric content, the aggregate size of application data sets entering or leaving the data center can be very large—hundreds of gigabits, or even terabits. This requires the network connecting the data centers to be capable of providing reliable, high-capacity connections that can be scaled to address tremendous growth requirements in data center traffic. For example, advances in coherent optics have paved the way to successfully transmitting data at rates of 100 Gb/s and higher, over almost any distance, dramatically improving DCI performance.
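To put those rates in perspective, the raw serialization time for a large data set is simple arithmetic (the sizes here are illustrative, and real links lose some capacity to protocol overhead):

```python
def transfer_seconds(data_gigabits, link_gbps, efficiency=1.0):
    """Idealized time to move a data set over a link at a given rate."""
    return data_gigabits / (link_gbps * efficiency)

# 1 terabit = 1000 gigabits
print(transfer_seconds(1000, 100))  # 10.0 s on a fully utilized 100 Gb/s link
print(transfer_seconds(1000, 10))   # 100.0 s on a 10 Gb/s link
```

A tenfold jump in line rate shrinks a terabit replication window from minutes to seconds, which is the practical payoff of coherent 100 Gb/s-and-up optics for DCI.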
Security
Information stored in data centers, including financial transactions, personal records, and corporate data, is often business-critical and confidential, creating a requirement to ensure data center network connections are trusted, reliable, and secure—often requiring network encryption.
While encryption and stringent rules for access to stored data are widely deployed to protect against intrusions, advances in networking equipment can also deliver in-flight data encryption. This offers increased protection for data from the moment it leaves one data center to the moment it enters another over the interconnecting network. Encrypting at the transport layer of the network guarantees wire-speed encryption, ensuring the process neither reduces traffic throughput nor modifies the content.
Operations
Manual network operations are labor intensive, complex, slow, and can be highly error-prone. Minimizing manual operations by automating frequent and recurring tasks is an operational imperative. Turning up a connection between two data centers should be rapid and reliable, and managing this connection should not require ongoing manual operational tasks.
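The case for automation can be made concrete with a sketch. Everything here is hypothetical (the check functions are placeholders, not any vendor's API): a turn-up routine that runs the validation steps of a manual runbook and retries transient failures automatically:

```python
import time

# Placeholder checks standing in for what a manual turn-up runbook would verify.
def check_light_levels(link):
    """Stub for an optical power query; a real check would poll the platform."""
    return True

def check_loopback(link):
    """Stub for a loopback continuity test."""
    return True

CHECKS = [check_light_levels, check_loopback]

def turn_up_link(link, retries=3, delay_s=0.1):
    """Run every validation check in order, retrying transient failures."""
    for check in CHECKS:
        for _attempt in range(retries):
            if check(link):
                break
            time.sleep(delay_s)
        else:
            raise RuntimeError(f"{check.__name__} failed for {link}")
    return f"{link}: turned up"

print(turn_up_link("dc1-dc2"))
```

Encoding the checklist once means the connection between two data centers is validated the same way every time, which is precisely the reliability argument the section makes against error-prone manual operations.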
Optical networking platforms are meticulously designed and purpose-built for DCI applications. Advances in planning, ordering, and installation enable data centers to be interconnected faster. Full programmability allows data center operators to design and build applications for specific operational requirements. Vendors that embrace open networking principles and have expertise in providing tactical and strategic steps to modernize and monetize networks will help provide maximum business value. The addition of rich management tools allows DCI network operators to proactively and reactively maintain the ongoing health of the interconnecting network for the highest availability.
Cost
With traffic growth between data centers expected to approach 30 percent CAGR, network costs must grow at a much slower rate if a data center is to remain financially viable.

Costs simply cannot scale linearly alongside bandwidth growth. Instead, the industry is making advances in high-speed networking, including solutions that operate in a small footprint and connect data centers at the lowest possible cost per bit. Solutions that take up less space and reduce power consumption will also reduce operating costs.
At the same time, modularity advances enable the ability to scale to multiple terabits of transport capacity without hefty capital or operational investments. Data center operators demand ongoing reductions in electricity, cooling, and real-estate costs. Simpler product designs may also lower management, licensing, and training costs.
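The pressure behind that cost constraint is plain compound arithmetic. At 30 percent CAGR, traffic roughly triples in five years, so cost per bit must fall by a comparable factor just to hold total network spend flat (the five-year horizon is illustrative):

```python
def traffic_multiple(cagr, years):
    """Compound traffic growth factor over a number of years."""
    return (1 + cagr) ** years

def required_cost_per_bit_drop(cagr, years):
    """Fractional cost-per-bit reduction needed to keep total network cost flat."""
    return 1 - 1 / traffic_multiple(cagr, years)

growth = traffic_multiple(0.30, 5)
print(f"Traffic after 5 years at 30% CAGR: {growth:.2f}x")                 # 3.71x
print(f"Cost per bit must fall by {required_cost_per_bit_drop(0.30, 5):.0%}")  # 73%
```

In other words, a network whose cost per bit only halves over that period still sees its total transport bill grow, which is why footprint, power, and modularity advances all feed the same economic requirement.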
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:57p
Cloud Cruiser Launches App for Hybrid Cloud Usage Management 
This article originally appeared at The WHIR
Cloud Cruiser launched a new application on Monday to address the need for organizations running hybrid clouds to quickly and easily track and analyze usage, the company announced this week. CloudSmart-Now is a packaged offering designed to give businesses of all sizes a single-pane view of their total consumption of “Big 5” cloud services.
The solution provides out-of-the-box, pre-configured templates with built-in workflow to collect usage and cost data from AWS, Azure, Windows Azure Pack, VMware and OpenStack. It was designed and built from the ground up to give specific users insights across a broad set of hybrid clouds. Accurate, real-time reports identify IT waste, compare costs, and forecast future demand, all in one application, the company said.
“Hybrid cloud is simply a reality for most businesses. The single biggest area of improvement continues to be around efficiencies and reducing costs,” Fraser McKay, VP Products, Cloud Cruiser said in a statement. “We love the promise of low-cost cloud but it’s not unlimited and unless you track and measure, sprawl and anarchy quickly creep in. Our customers stay ahead and better forecast to meet business demands. We directly address this problem with a single solution to manage all hybrid cloud environments.”
Organizations can get CloudSmart-Now up and running in five days with four easy steps, Cloud Cruiser says, with pre-configured collectors, built-in data mapping to the business structure, report templates, and an automated workflow.
A survey released by Cloud Cruiser in July showed that 27 percent of IT professionals using cloud consider their tracking “poor” or even “horrible,” providing a ready market. Enterprise Cloud Cruiser 4 was released last October.
As cloud use matures, tracking and analytics become more of an organizational focus. Products like CloudSmart-Now, the analytics section CloudFlare launched in its Partner Portal for web hosts in July, and the new ecommerce monitoring solution from New Relic and Magento provide market segments with tailored options.
This first ran at http://www.thewhir.com/web-hosting-news/cloud-cruiser-launches-app-for-hybrid-cloud-usage-management

7:45p
CSC, AWS and Microsoft Score $108M Federal Government Cloud Contract 
This article originally appeared at The WHIR
CSC, Amazon Web Services and Microsoft Azure have won a major cloud computing contract with the Federal Aviation Administration (FAA), a deal valued at $108,992,884 that could reach up to $1 billion over 10 years.
According to an announcement by CSC on Monday, the federal cloud contract includes cloud services, data center consolidation and cloud migration services.
Under the agreement, CSC will consolidate FAA’s data centers and migrate its data and systems to a hybrid cloud environment, using the CSC Agility Platform cloud management tool for cloud deployment.
Data center consolidation has been a federal initiative for a number of years, but it has been fraught with problems, including agencies misreporting savings. A report last year by the Government Accountability Office estimated that agencies would save as much as $3.1 billion over a one-year period, but found that agencies’ own estimates fell well short of that figure.
“CSC and our alliance partners are demonstrating the unique value that we as a team can bring to deliver an innovative, next-gen IT cloud solution that drives the FAA’s mission forward,” CSC president and CEO Mike Lawrie said in a statement. “By coming together as we have, we are in a unique position to help meet the agency’s operational and budgetary challenges over the life of the program.”
Government cloud spending is expected to reach more than $5 billion in 2017, and hybrid cloud models have helped agencies move to the cloud while still maintaining control and security compliance.
“Government adoption of cloud computing for mission applications is accelerating rapidly, and we are pleased to help FAA’s transition to the cloud,” said Teresa Carlson, vice president of Worldwide Public Sector, Amazon Web Services, Inc. “With AWS’s security and compliance standards – like FedRAMP, ITAR and SRG – CSC will be able to rapidly enable FAA to realize the benefits of agility, cost savings, and flexibility.”
This first ran at http://www.thewhir.com/web-hosting-news/csc-aws-and-microsoft-score-108m-federal-government-cloud-contract

8:32p
Keystone NAP Taps into Fiber Networks from Sunesys and Comcast Business

Putting the finishing touches on a new data center facility in the Philadelphia area, Keystone NAP this week announced that its hosting facility is now connected to fiber optic networks from Sunesys and Comcast Business.
Shawn Carey, senior vice president of sales and marketing for Keystone NAP, said the deals will give IT organizations that use the company’s hosting services access to a major concentration of fiber optic networks running up and down the Amtrak rail corridor only a few miles from the company’s data center in Fairless Hills, Penn.
Keystone NAP, said Carey, decided to build a data center at that location because an electric grid existed there that was originally built for steel mills that operated in the region. With access to an abundance of power, Keystone NAP then lined up investors to acquire a 1,600-acre compound next to the electric grid.
Since then, Keystone NAP has been building out a data center facility that makes use of private, stackable modular data center vaults called KeyBlocks. Each KeyBlock comes with custom, redundant, conditioned, uninterruptible power ranging from 100kW all the way up to 400kW.
While that data center can serve the needs of IT organizations located almost anywhere in the Northeast, Carey noted that there are a lot of aging data center facilities in the Philadelphia region that experience frequent outages because of the legacy technologies installed in them. Rather than invest capital to upgrade those facilities, Keystone NAP is betting that many of those organizations will opt to treat IT infrastructure going forward as an operating expense.
“There’s been a lot of outages in Philadelphia that have cost businesses a lot of money,” said Carey. “A lot of IT organizations don’t have the ability to keep up with the service demands of their organizations.”
While competition for that business is naturally fierce, Carey said the number of modern data centers in the region is relatively small. The benefit to being in the Northeast is that the Fairless Hills facility is close to major networking hubs and sits on the border between Pennsylvania and New Jersey. That makes it relatively easy for IT staffs to visit whenever the need arises.
Next up, Carey said Keystone NAP will leverage its relationships with Sunesys, Comcast Business and other providers of fiber optic networks to provide customers with a managed set of network services that will span everything from the design of the network to its ongoing management delivered via the data center in Fairless Hills, Penn.
The degree to which IT organizations will actually migrate more aggressively toward reliance on hosting services remains to be seen. What is certain is that, given all the competition for capital within most organizations, there’s more interest than ever in treating IT as an operating expense.

10:19p
Who Needs Generators? Data Center Taps Directly into Grid for Power

Two years ago, the concept was a mere vision and gleam in the eyes of executives from Phoenix-based utility Salt River Project (SRP).
This week, its one-of-a-kind SRP DataStation, which eliminates the need for a generator or other backup power source, took center stage and began operations at an electric substation in Gilbert, Ariz., according to a press release.
Instead of relying on secondary sources of power, the DataStation will provide an unprecedented source of reliable power to a nearby BaseLayer modular data center connected directly to the electric grid. The facility receives power directly from a “bulk transmission” line designed to carry massive amounts of electricity over long distances.
DataStations eliminate the need for SRP, the largest provider of water and power to the Phoenix area, to build new power lines to serve new or existing colocation or enterprise data center facilities, which requires time and money and results in higher costs to SRP’s customers, according to a press release.
By moving data centers closer to the transmission source, instead of at the endpoints of the transmission system, SRP can provide reliable power and reduce infrastructure complexity, resulting in cost savings for customers looking to effectively and efficiently run their business.
SRP plans to locate DataStations near existing power stations with redundant, high-voltage power feeds and SRP’s diverse fiber optic cable network that spans 15 Valley cities and 1,800 miles.
These DataStations would then be populated with BaseLayer modular data centers, which will provide a growth model for data center expansion. The company says its technology has delivered a 19 percent ($200,000 per deployed megawatt) reduction in energy consumption compared with traditional raised-floor data center environments.
If the prototype performs as planned, SRP DataStations could be available for commercial placement of modular data centers in the near future. SRP has committed to powering the DataStation with 100 percent renewable energy through its Renewable Energy Credit (REC) program.
While some companies have found ways to avoid relying on generators for backup power, none have been able to depend directly on the grid alone. For example, eBay’s latest facility in Salt Lake City, Utah, is powered by fuel cells that convert natural gas to electricity and uses the local utility grid as backup.
Researchers at Microsoft have a proof of concept running in Wyoming for putting a data center module at the site of a waste treatment plant and using fuel cells to convert biogas into electricity for the data center.
After an initial pilot, the next phase of the project will involve deploying additional data center capacity in locations near existing electrical infrastructure across SRP’s 375-square-mile electric service territory.

11:14p
VMware Announces Project to Boost Windows 10

You’d never know there had ever been a speck of bad blood between VMware and Microsoft by the way the two rivals’ top dogs bantered during the second day of VMworld in San Francisco.
In fact, when VMware Executive VP Sanjay Poonen invited Microsoft Corporate VP of Windows Enterprise and Security Jim Alkove to join him on stage, it was the first time anyone from the Redmond, Wash.-based company had ever done so during the history of the conference, reported Business Line.
The reason behind the surprise invitation? Poonen announced Project A2, a new release that VMware believes will encourage businesses to upgrade to Windows 10 by giving them an easier way to roll it out and move all their apps from Windows 7 machines to new Windows 10 machines, according to our sister site Windows IT Pro.
Project A2 is a combination of VMware’s AirWatch device management service and its App Volumes application delivery technology. The new release provides support for deploying and managing both virtual as well as physical desktops.
VMware also believes that Windows 10 will be a hit with enterprises regardless, because it helps them move their apps off the device and to the cloud.
Microsoft recently expressed concern about the adoption of Windows 10 Enterprise. While a record-breaking 75 million upgrades or installations of Windows 10 took place during the first month of general availability, only 1.5 million were running the enterprise version.
The licensing of Windows 10 Enterprise represents a large portion of Microsoft’s bottom line, so if Project A2 does indeed encourage increased adoption, it could mark the start of a new, friendlier era between the two companies under Microsoft CEO Satya Nadella.
To read the entire post, go to: http://windowsitpro.com/vmware/vmworld-2015-keynote-day-2-microsoft-and-vmware-team-windows-10-management-and-vmware-release.