Data Center Knowledge | News and analysis for the data center industry
Thursday, October 31st, 2013
12:30p
Dispelling 3 Myths of Cloud Application Migrations
Jason Cumberland is vice president of SaaS Solutions for Dimension Data, a $5.8 billion global ICT solutions and services provider.
Migrating a traditional application to the cloud isn’t always as complex as it appears. Here are three of the most common misconceptions about moving from a dedicated server deployment to a cloud environment.
Myth 1: The cloud won’t support networking for multi-tier applications.
While software-defined networking (SDN) is undoubtedly white-hot in IT, the reality is that it is still a developing and unproven technology among enterprise buyers. One hurdle is the unforeseen risk that comes with new concepts such as security groups, which displace traditional enterprise security rules and firewall designs. For companies delivering SaaS or other enterprise applications, clients are likely to be uncomfortable with the security delivered through these designs. Case in point: this August, Amazon Web Services’ security group policies resulted in virtual machine connectivity losses, among other glitches. For end-users in industries such as banking, financial services and healthcare, this kind of conceptual and security risk can be a deal-breaker.
Moreover, any application written more than two or three years ago was almost certainly written to operate in a traditional 3-tier architecture with separate network segments for web, application and database servers. Generally, each of these tiers has its own firewall rules and load balancing profiles. Re-architecting all of this to function in a flat network can be a multi-year endeavor that isn’t worth the cost when you consider that there are already cloud environments today that support a traditional network architecture.
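To make that concrete, here is a minimal sketch in Python, with purely hypothetical tier names, subnets and ports, of the per-tier segmentation a traditional 3-tier application takes for granted; moving to a flat cloud network means re-expressing each of these implicit boundaries as explicit security-group rules.

```python
# Hypothetical sketch of the per-tier rules a traditional 3-tier app assumes.
# Tier names, CIDRs, ports and balancing policies are illustrative only.

TIER_RULES = {
    "web": {
        "subnet": "10.0.1.0/24",
        "allow_from": [("0.0.0.0/0", 443)],      # public HTTPS only
        "load_balancer": "round_robin",
    },
    "app": {
        "subnet": "10.0.2.0/24",
        "allow_from": [("10.0.1.0/24", 8080)],   # only the web tier
        "load_balancer": "least_connections",
    },
    "db": {
        "subnet": "10.0.3.0/24",
        "allow_from": [("10.0.2.0/24", 1433)],   # only the app tier
        "load_balancer": None,                   # clustered, not balanced
    },
}

def is_allowed(src_subnet, dst_tier, port):
    """Return True if traffic from src_subnet may reach dst_tier on port."""
    rules = TIER_RULES[dst_tier]["allow_from"]
    return any(src in (src_subnet, "0.0.0.0/0") and p == port for src, p in rules)

print(is_allowed("10.0.1.0/24", "app", 8080))  # True: web tier may reach app tier
print(is_allowed("0.0.0.0/0", "db", 1433))     # False: database is never public
```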
Lastly, many cloud vendors have chosen to implement a Layer 3 network topology, which leads to significantly lower performance than a traditional Layer 2, hardware-based network with reserved performance for each segment of the network. Is your application used to operating at wireline speeds and latency? What happens if you migrate that application to a cloud delivering 1/4 to 1/3 of Gigabit wireline speeds?
If you’re considering the move, you have two options: rewrite the application to deal with varying degrees of network latency and throughput, or choose a provider who is able to deliver this performance as a part of their standard platform.
Myth 2: My database won’t run in the cloud.
One to two years ago, the belief that databases couldn’t perform well in the public cloud was still valid, but much has changed in a short period of time. A handful of cloud providers now offer high-performance database options in the cloud. There are several ways to deliver databases in the cloud. Tiered storage offerings provide three disk speed options, with the highest tier designed specifically for transactional databases.
Companies like Zadara Storage even allow Microsoft SQL Server clustering in the cloud, which, in addition to enabling failover clustering, allows fully customizable RAID storage options (from SATA to SSD, and RAID 1 to RAID 10, with most options in between). When neither of the previous two options is ideal, an experienced service provider can easily build an integration path for dedicated physical servers into cloud environments.
Myth 3: The cloud isn’t reliable enough for my application.
Despite what the market may want you to believe, not all cloud platforms are inherently unreliable, and your application does not have to be designed to treat every server as disposable. It is not necessary to design around the possibility that any given deployment hands you a “bad” VM that you must automatically detect, delete and replace with a properly performing machine.
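For readers who have not seen it, the "disposable server" pattern being described looks roughly like the following Python sketch; the function names stand in for whatever cloud API is in use and are hypothetical, not taken from the article.

```python
# Minimal sketch (hypothetical names, no real cloud API) of the "disposable
# VM" pattern the author argues should not be mandatory: benchmark every new
# instance and discard it if it landed on a poorly performing host.
import random

def provision_vm():
    """Stand-in for a cloud API call that creates a new instance."""
    return {"id": f"vm-{random.randint(1000, 9999)}"}

def benchmark(vm):
    """Stand-in for a quick CPU/disk benchmark; higher scores are better."""
    return random.uniform(0.5, 1.5)

def destroy_vm(vm):
    """Stand-in for deleting an underperforming instance."""
    print(f"discarding {vm['id']}")

def provision_good_vm(min_score=0.9, max_attempts=5):
    """Keep provisioning until an instance meets the performance floor."""
    for _ in range(max_attempts):
        vm = provision_vm()
        if benchmark(vm) >= min_score:
            return vm
        destroy_vm(vm)
    raise RuntimeError("no acceptable VM found within the attempt budget")

print(provision_good_vm())
```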
Maybe this doesn’t have to be so hard. Enterprises often fear that they are not ready for cloud because they haven’t yet re-architected their applications to account for these perceived shortcomings in the public cloud. While these problems do exist on some platforms, they are not prevalent everywhere. Given the massive costs of an application re-write, vendor and platform selection are critical to ensure that your chosen platform supports the current state of your application without months or years of work required to make the leap.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:15p
GreenQloud Enters U.S. Market With Digital Fortress
GreenQloud, the cloud company committed to renewable power, has entered the U.S. market and announced its first infrastructure outside of Iceland. The company selects data center locations based on the availability of renewable power in order to curb the rapidly growing CO2 emissions generated by IT equipment. It found a match for its needs with colocation provider Digital Fortress in Seattle, where it will launch a new availability zone in the first quarter of 2014.
Customers will now be able to choose whether their data is geo-redundant across the U.S. and Iceland, or redundant only across the Iceland data center locations. GreenQloud will also offer the private and hybrid cloud solutions it already provides in Iceland.
“Providing compelling cloud solutions powered by renewable energy is core to GreenQloud’s mission of making the cloud easy-to-use, cost effective and green,” said Bala Kamallakharan, the CEO of GreenQloud. “Using Digital Fortress and creating a separate and independent operation in the US meets our stringent renewable energy requirements while enabling US-based companies - who require their cloud provider to have data center locations in the US - to utilize GreenQloud’s server hosting, storage and syncing solutions.”
Digital Fortress offers renewable green energy through Seattle City Light, along with low Power Usage Effectiveness (PUE) and a high availability of fiber and fiber providers – all instrumental factors in GreenQloud’s choice of Digital Fortress as its primary data center provider in the Pacific Northwest.
“GreenQloud is leading the cloud industry in adopting renewable energy resources to power the cloud,” said Paul Gerrard, CEO of Digital Fortress. “We are proud to be GreenQloud’s data center provider for their expansion into the Northwest to better service their US customer base.”
GreenQloud has a European availability zone spanning two 100 percent renewable energy-powered data centers in Iceland on the Verne Global campus. Founded in 2010, the company is privately funded by Icelandic investors and has won several government grants.
While cloud computing is more environmentally friendly than running these workloads in inefficient server closets, there is still a long way to go in making clouds greener. The biggest cloud, Amazon Web Services, currently has only two renewable energy options: the Oregon (US West) and AWS GovCloud (US) regions offer 100 percent carbon-free power.
1:45p
Cloudera Launches Update of Hadoop Platform
At the Strata + Hadoop World event in New York this week, Cloudera unveiled the fifth generation of its big data platform, Cloudera Enterprise, powered by Apache Hadoop 2. The release offers features and advancements that simplify storing, processing, analyzing and managing large structured and unstructured datasets.
“With Cloudera Enterprise 5, Cloudera has taken several important steps toward realizing its vision to transform Hadoop into an enterprise data hub for analytics,” said Tony Baer, Principal Analyst for Ovum. “Adding support for in-memory data tiering and user-defined functions are essential for delivering the kind of performance that enterprises expect from their analytic data platforms.”
Key advances in the new release include the ability to cache HDFS datasets in memory, support for user-defined functions in Cloudera Impala, advanced resource management, and centralized data auditing for Hadoop.
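As a rough illustration of the in-memory caching feature, the sketch below shows how an administrator might pin a hot dataset into HDFS's centralized cache from a script. The pool name and path are hypothetical, and the `hdfs cacheadmin` flags reflect the Hadoop 2 CLI as generally documented, so check your distribution's documentation before relying on them.

```python
# Illustrative sketch only: pinning a hot HDFS dataset into cluster memory
# via the `hdfs cacheadmin` CLI shipped with Hadoop 2. The pool name and
# path are hypothetical; verify flags against your distribution's docs.
import subprocess

def run(cmd):
    """Echo and run a shell command, raising if it fails."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create a cache pool, add a directive for a frequently scanned directory,
# then list directives to confirm the data is being cached in memory.
run(["hdfs", "cacheadmin", "-addPool", "hot_tables"])
run(["hdfs", "cacheadmin", "-addDirective", "-path", "/warehouse/clicks", "-pool", "hot_tables"])
run(["hdfs", "cacheadmin", "-listDirectives"])
```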
“Over the last five years, we have worked closely with enterprises around the world to help them capture the value in the data they have. Resoundingly, they have asked for a more secure, more reliable real-time data platform that streamlines their existing architectures and speeds up time to insight,” said Mike Olson, chairman and chief strategy officer, Cloudera. “The market has spoken and we are listening. The new capabilities introduced in Cloudera Enterprise 5 deliver the industry’s first Enterprise Data Hub.”
Expanded partner ecosystem
Cloudera also announced Cloudera Connect: Cloud, an expanded partner program to support deployment of Hadoop in public cloud environments. Leading cloud solution providers that have already joined the program include Verizon Enterprise Solutions, Savvis (a CenturyLink company), SoftLayer (an IBM company) and T-Systems. The program is designed to meet the growing needs of customer organizations looking to optimize Hadoop in cloud environments by offering flexibility in deployment, consumption and choice of vendor for their big data projects.
“The cloud will play a significant role in the future of the enterprise, and Cloudera and Hadoop are part of that future,” said Tim Stevens, vice president, Corporate and Business Development, Cloudera. “We remain committed to solving higher order big data problems for our customers that will enable them to ask bigger questions and derive more sophisticated insights from their data — wherever it lives — from a single, unified enterprise data management platform. Cloudera’s richly diverse partner ecosystem is helping to provide the innovation and variety that our customers require to solve all of their data challenges.”
2:15p
Verizon Tries Out Alternative Cooling Methods
A diagram of a new airflow pattern being tested by Verizon in an effort to improve its cooling efficiency. (Image: Verizon)
PALO ALTO, Calif. - Verizon Terremark is experimenting with a novel way to push hot and cold air around data center racks. Ben Stewart, senior vice president of facility engineering at the company, talked up the concept Tuesday at the 2013 Data Center Efficiency Summit.
Stewart calls the new approach virtual containment. Think of the concept as a new and potentially energy-saving way to push hot and cold air among racks in a data center.
Instead of having a downflow computer-room air handler (CRAH) on a raised floor running at a set speed and possibly not delivering enough cool air to a rack – or too much of it – Verizon is interested in finding a happy medium. Stewart opted for the term “Goldilocks zone” in his explanation.
“We can actually tune it to deliver exactly the right amount of cold air to this aisle and to this cabinet,” he said.
Researchers first tried putting temperature sensors in the cold aisles; whenever an aisle got too cold, an alarm went off and an employee adjusted the floor tiles to deliver just the right temperature. But such a manual process does not scale well. The company then developed a system to “dynamically balance the floor to achieve the Goldilocks zone, without the guy having to come out there,” Stewart said, resulting in savings of over $1 million a year with an eight-month payback period.
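Verizon has not published the details of that system, but conceptually it amounts to a simple closed control loop. The Python sketch below, with hypothetical sensor and damper functions and made-up setpoints, shows the general idea of nudging airflow until each cold aisle sits inside the desired temperature band.

```python
# Conceptual sketch only (not Verizon's implementation): keep each cold aisle
# inside a target temperature band by trimming airflow automatically.
# Sensor/damper functions, setpoints and aisle names are hypothetical.

SETPOINT_LOW_F = 65.0   # below this the aisle is over-cooled (wasted energy)
SETPOINT_HIGH_F = 75.0  # above this the aisle is under-cooled (thermal risk)

def read_aisle_temp(aisle):
    """Stand-in for polling a cold-aisle temperature sensor (degrees F)."""
    return 72.0  # placeholder reading

def adjust_damper(aisle, delta_percent):
    """Stand-in for opening (+) or closing (-) a variable floor damper."""
    print(f"{aisle}: damper {delta_percent:+.0f}%")

def balance_aisle(aisle):
    """One pass of the loop: nudge airflow toward the 'Goldilocks zone'."""
    temp = read_aisle_temp(aisle)
    if temp > SETPOINT_HIGH_F:
        adjust_damper(aisle, +10.0)   # deliver more cold air
    elif temp < SETPOINT_LOW_F:
        adjust_damper(aisle, -10.0)   # throttle back and save fan energy
    # inside the band: leave it alone

for aisle in ["cold-aisle-1", "cold-aisle-2"]:
    balance_aisle(aisle)
```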
Then the researchers looked at adding barriers to form hot and cold aisles, but the result was too complicated. The system in tests now is simpler, with cold air being pushed over several adjacent aisles and hot air being pulled down through the cabinets and sent back to the CRAH underneath the cabinets.
“All we have to worry about is cold air sneaking back down there,” Stewart said. “It’s something we’re playing with right now, just kind of on the drawing board.”
The experiment currently spans around 2,000 square feet, Stewart said after his presentation.
Broader Focus on Efficiency and Sustainability
The airflow experiments are part of a broader focus on data center energy management at Verizon. The company already lays claim to six solar power sites that generate 5.4 megawatts of electricity and 12 fuel-cell sites that produce 9.6 megawatts, according to figures Stewart shared in his presentation. The figures put the company ahead of some of its Fortune 500 peers.
But increasing reliance on green energy sources isn’t the only way to reduce environmental impact. Energy efficiency can also move the needle – thus Verizon’s recent work in the area.
The company has begun working more with free cooling, and its Terremark data center unit is a long-time proponent of using flywheels for uninterruptible power systems.
“Flywheels are said to be more energy efficient” than batteries in a data center’s uninterruptible power supply, Stewart said. “Where you really get your energy savings is you don’t have batteries. … We don’t have to keep the flywheels at 77 degrees. We don’t have to maintain specific humidities to maintain the flywheel.”
Implementing free cooling with outside air at data centers could be another way for Verizon to consume less energy in its operations.
“As the market starts to accept that we can raise temperatures in the data center – and as we all know, we certainly can – then we have more sites where we can actually use free cooling,” Stewart said.
2:40p
Hortonworks Certified for Spring XD
Hortonworks, RainStor and Appfluent made big data announcements at the Strata Conference + Hadoop World 2013 event in New York this week. The event conversation can be followed on Twitter under the hashtag #strataconf.
Hortonworks certified for Spring XD
Hortonworks and Pivotal announced that Spring for Apache Hadoop, which is bundled with Spring XD, has been certified with the Pivotal HD and Hortonworks HDP products. This certification enables Java developers to use modern, familiar tools to build big data applications that work across major Hadoop distributions without modification. Spring for Apache Hadoop (SHDP) aims to simplify the development of Hadoop-based applications by providing a consistent configuration model and API across a wide range of Hadoop ecosystem projects such as Pig, Hive and Cascading, in addition to extending Spring Batch for orchestrating Hadoop-based workflows.
“Spring XD connects big data apps to existing systems, as well as any new data source or data store – and will indeed appeal to the enterprise,” said Shaun Connolly, vice president, Corporate Strategy for Hortonworks.
Hortonworks also announced that its Hortonworks Data Platform (HDP) is now available for resale through HP. Built, integrated and tested by the core architects of Apache Hadoop, HDP includes the necessary components to help refine and explore new data sources, and find new business insights. HDP allows enterprise organizations to cost-effectively capture, process and share data in any format and at any scale. “This collaboration with HP will help customers and partners seamlessly incorporate Hadoop into their big data strategies and next-generation architectures,” said Shaun Connolly, vice president of corporate strategy, Hortonworks.
RainStor validates database with EMC
RainStor announced that it has successfully completed product testing and validated its database on EMC’s Isilon scale-out network-attached storage (NAS) running the Hadoop Distributed File System (HDFS). RainStor is already in production with customers running on Isilon, and by adding native Hadoop capabilities, customers gain further benefits and more flexible deployment options for big data initiatives.
The RainStor highly compressed file format is virtualized from the underlying storage layer and therefore behaves the same whether running on DAS or NAS. Because Hadoop data can now coexist on both DAS and NAS, customers can separate the compute layer from the storage layer and gain both scale efficiencies and query performance.
“Isilon running on RainStor provides high-impact use cases, including a compliance data archive for years of history reaching petabyte scale,” said Sam Grocott, Vice President, Marketing and Product Management, EMC Isilon Storage Division. “The one-two punch of RainStor and Hadoop on Isilon gives customers both performance and efficient scale with the added bonus of being easy to deploy and maintain. Case in point: RainStor and Isilon customer saw a 32X compression rate enabling efficient, predictable scale.”
Appfluent Visibility for Hadoop
Appfluent announced a new product, Appfluent Visibility for Hadoop. The product delivers detailed insight into user activity in Hive, including SQL statements, their performance and the data sets being used in Hadoop.
The solution also shows how those SQL statements correlate with the performance of associated MapReduce jobs and with system resource consumption.
“Hadoop has become a major disruptive force, with a rapidly growing number of large enterprises moving workload and data onto Hadoop to slash database infrastructure costs and extend analytic capabilities,” said Frank Gelbart, Chief Executive Officer of Appfluent. “Operations and development teams need a way to proactively discover causes for performance bottlenecks and respond to end-user issues. Our solution gives them the in-depth, actionable information needed to allow them to quickly diagnose problems and increase performance levels.”
3:30p
CSC Adds Cloud Management With ServiceMesh Acquisition
CSC announced that it will acquire enterprise cloud management provider ServiceMesh. The ServiceMesh capabilities will help CSC continue its transformation into a next-generation IT company that helps its clients migrate their applications into cloud computing environments.
“The future of next-generation IT infrastructure will involve a set of multiple clouds utilized simultaneously by enterprises,” explained CSC President and Chief Executive Officer Mike Lawrie. “ServiceMesh allows us to catalogue enterprise applications and orchestrate those applications dynamically to run in different clouds based on the characteristics of the applications. From our unique position as an independent global technology company, we will integrate those workloads for our clients through our portfolio of services and technologies.”
The ServiceMesh Agility Platform is an enterprise cloud management platform that automates the deployment and management of enterprise applications and platforms across private, public and hybrid cloud environments. The company counts among its customers some of the world’s largest enterprises in financial services, healthcare and other highly regulated industries.
In the past 20 months under new leadership, CSC has undertaken a significant strategic transformation to position itself for the high-growth segments of the technology solutions and services marketplace, including cloud computing, big data, cybersecurity and next-generation applications. Recently, CSC has acquired Infochimps and 42Six Solutions as well as partnered with industry leaders that can deliver scalability to CSC clients.
“Enterprise consumption habits have changed, as customers today want to move to a new IT model in which they can operate at a lower cost structure and get to market more quickly with their products and services,” said ServiceMesh Chief Executive Officer Eric Pulier. “Bringing ServiceMesh and CSC together creates a true leadership position in IT transformation.”
Existing customers of ServiceMesh will be able to leverage CSC’s global network of industry consultants, application developers, software engineers, cybersecurity experts and infrastructure professionals. ServiceMesh will have access to CSC’s global sales, marketing, delivery and solution development teams.
“The ServiceMesh Agility Platform is a foundational element of our cloud operating model,” said Commonwealth Bank of Australia’s Chief Information Officer Michael Harte. “Now the combination of ServiceMesh and CSC will further strengthen and accelerate enterprise IT transformations and drive significant value, not just from productivity gains and cost savings but, moreover, by providing an ecosystem for innovation across enterprises.”
4:00p
LINX to Open Exchange in Ashburn With DuPont Fabros
The London Internet Exchange will open a new internet exchange inside the ACC5 data center in Ashburn, Virginia. (Photo: DuPont Fabros)
The London Internet Exchange (LINX) will build and operate a member-owned Internet exchange in Ashburn, Virginia on DuPont Fabros Technology’s Ashburn Corporate Center campus. The deal with LINX supports DuPont Fabros’ plan to host an Open-IX certified Internet exchange in each of its four markets.
The LINX node will be housed at DFT’s ACC5 data center, becoming part of LINX NoVA, a member-owned internet exchange that spans multiple data centers. LINX has already announced nodes at EvoSwitch in Manassas and CoreSite in Reston, and is now hitting all the key campuses in Northern Virginia. This node is particularly notable because it is in Ashburn, literally next door to Equinix, which runs the leading commercial traffic exchange in the area.
The LINX node at DFT’s ACC5 data center is expected to be live before the end of 2013 and LINX will offer incentivized port pricing through the end of 2014.
“Ashburn has been high on our list of potential sites since the beginning of the LINX NoVA project,” said John Souter, CEO at LINX. “We are pleased to be partnering with DFT to build a LINX node in their Ashburn campus, which will contribute significantly to the number of networks able to join the LINX NoVA exchange.”
DFT’s Ashburn Corporate Campus has an underground conduit system that connects each of the company’s five data center facilities in Ashburn, which will grow to include ACC7 upon completion. There currently are 26 network carriers on DFT’s campus providing connectivity services.
“We are very excited to be partnering with LINX NoVA to provide our customers and the Internet community a neutral Internet exchange ready for Open-IX certification in the fiber- and content-rich Ashburn region,” said Vinay Nagpal, Director of Carrier Relations at DFT. “The LINX NoVA node will enhance network density at our Ashburn campus. In addition to providing our customers with additional connectivity and peering options, we anticipate the exchange to attract additional content, network and cable providers to our campus to connect to LINX for cost-effective peering and improved network performance.”
More Steam For the Open-IX Movement
In the European interconnection model, internet traffic exchanges are managed by the participants rather than by the colocation providers hosting the infrastructure. Until recently, this model had struggled to establish itself in the U.S.
Cue the recently formed Open Internet Exchange (Open-IX), which hopes to import the European model stateside. Open-IX is a member-owned, member-governed non-profit looking to drive the expansion of business-neutral internet exchanges, reducing complexity and cost and improving access within critical interconnection markets. The European approach seeks to extend an exchange across multiple data centers, offering participants a choice among providers and facilities.
The new exchange introduces an approach that has thrived in London, Amsterdam, Frankfurt and several Asian markets. It represents an alternative to the market leadership of Equinix, whose Ashburn campus is the focal point for interconnection activity in northern Virginia. LINX is now going to be right in the heart of it all: Ashburn, Virginia.
Other activity spurred by Open-IX includes German exchange DE-CIX entering the New York market and Digital Realty’s plans to support the model.
LINX currently connects over 480 networks from nearly 60 countries with more than 1,100 ports. In May, LINX announced plans to enter the U.S. market with an initial Northern Virginia node at the EvoSwitch data center in Manassas.