Data Center Knowledge | News and analysis for the data center industry
Friday, August 29th, 2014
| 12:12p |
VMware Launches Its Own Integrated OpenStack Distribution
VMware launched an integrated OpenStack solution as a way to augment its larger vision for the software-defined data center and architecture for building private, hybrid or public clouds.
VMware recognizes the open source movement around cloud operating systems and the increasingly diverse and dynamic set of technologies used by its customers. Like its other announcement this week, a partnership with Google, Docker and Pivotal on container adoption, VMware’s OpenStack play is another avenue for embracing the realities of other IT environments and extending VMware architecture within them.
VMware has been involved with the OpenStack Foundation for years and is currently the fourth-largest contributor to integrated (core) OpenStack projects. Through its 2012 acquisition of Nicira, VMware gained the Network Virtualization Platform, which has been at the foundation of many OpenStack production environments.
Foundation for OpenStack Clouds
The offering goes beyond the hypervisor, leveraging VMware’s software-defined data center technologies for compute, network, storage and management. VMware also says that operational cost savings will be achieved through quick setup times and full integration with VMware administration and management tools. Additionally, the company says its OpenStack distribution will help customers repatriate workloads that have been moved to the public cloud by creating a more developer-friendly, yet highly secure and reliable, private cloud environment.
“The software-defined data center provides an open approach for creating an agile, scalable data center to support both traditional and modern 3rd Platform applications,” said John Gilmartin, vice president and general manager, SDDC Suite Business Unit at VMware. “With the VMware Integrated OpenStack distribution, VMware will provide further choice in how customers can implement a software-defined data center. Regardless of the path a customer wants to take to build a software-defined data center, VMware infrastructure provides the proven foundation for developer-friendly, enterprise-class clouds.”
VMware says it has also partnered with OpenStack distributions including Canonical, HP, Mirantis, Piston, Red Hat and SUSE to ensure OpenStack offerings work well with VMware infrastructure. The company is running a qualified beta program for VMware Integrated OpenStack, with general availability expected in the first half of 2015.
“Creation of an on-site private cloud for agile software development and IT operations, or DevOps, is one of the primary enterprise use cases for OpenStack,” said Jay Lyman, Senior Analyst, Development, DevOps & Middleware at 451 Research. “To date, OpenStack implementations still require a great deal of technical expertise to deploy. By using a distribution such as VMware Integrated OpenStack rather than the DIY approach, a customer can have higher confidence that all the components will work together and will get the support they require when needed.”
| 12:30p |
IBM’s Cognitive Computing System Watson Available As A Cloud Service
IBM’s cognitive computing system Watson is now available as a cloud service.
Called Watson Discovery Advisor, the first Watson cloud service is for research teams needing to analyze large amounts of data to identify patterns and come up with research ideas. Watson enables researchers to accelerate the pace of scientific breakthroughs by discovering previously unknown connections in Big Data.
The move is a bid to commercialize Watson, which is famous for beating human contestants on Jeopardy three years ago. Since then, it has become smarter, with a 2,400 percent improvement in performance; it is also 24 times faster and 90 percent smaller - IBM has shrunk Watson from the size of a master bedroom to three stacked pizza boxes.
And this is just the beginning of its career in business and research applications. IBM says the next milestone in cognitive computing is accelerating scientific and industrial research. It’s a massive market, with the top 1,000 research and development companies spending more than $600 billion in 2013, according to Strategy&.
IBM’s Watson Discovery Advisor is designed to scale and accelerate discoveries by research teams. It reduces the time needed to test hypotheses and formulate conclusions that can advance their work.
Watson learns from data instead of being explicitly programmed to carry out instructions. It consists of a collection of algorithms and software running on IBM’s Power line of servers. Building on Watson’s ability to understand nuances in natural language, the Watson Discovery Advisor service can understand the language of science, such as how chemical compounds interact.
“We’re entering an extraordinary age of data-driven discovery,” said Mike Rhodin, senior vice president, IBM Watson Group. “Today’s announcement is a natural extension of Watson’s cognitive computing capability. We’re empowering researchers with a powerful tool which will help increase the impact of investments organizations make in R&D, leading to significant breakthroughs.”
Researchers and scientists from leading academic, pharmaceutical and other commercial research centers are starting to deploy IBM’s new Watson Discovery Advisor to rapidly analyze and test hypotheses using data in millions of scientific papers available in public databases.
“On average, a scientist might read between one and five research papers on a good day,” said Dr. Olivier Lichtarge, the principal investigator and professor of molecular and human genetics, biochemistry and molecular biology at Baylor College of Medicine. “To put this in perspective with p53, there are over 70,000 papers published on this protein. Even if I’m reading five papers a day, it could take me nearly 38 years to completely understand all of the research already available today on this protein. Watson has demonstrated the potential to accelerate the rate and the quality of breakthrough discoveries.”
Watson has applications in several domains. IBM provides a few potential areas where the service is applicable:
- Accelerate a medical researcher’s ability to develop life-saving treatments for diseases by synthesizing evidence and removing reliance on serendipity
- Enhance a financial analyst’s ability to provide proactive advice to clients
- Improve a lawyer’s merger and acquisition strategy with faster, more comprehensive due diligence and document analysis
- Accelerate a government analyst’s insight into security, intelligence, border protection and law enforcement guidance
- Create new food recipes. Chefs can use Watson to augment their creativity and expertise and help them discover recipes; the system learns the language of cooking and food by reading recipes, statistical, molecular and food-pairing theories, hedonic chemistry, and regional and cultural knowledge
| 1:00p |
Carter Validus Acquires Two IO Properties in Phoenix Area
The sale-leaseback model continues to see traction among data center operators and investors. In the latest example, Carter Validus Mission Critical REIT has acquired two properties from data center provider IO, which will continue to use the facilities through a long-term lease. Carter Validus paid $125 million to purchase the IO Phoenix and IO Scottsdale buildings, IO’s first two facilities.
The financial transaction will have no impact on operations in the two data centers, where IO will continue to support customers on a “business as usual” basis. IO also leases its properties in New Jersey and Singapore.
In a sale-leaseback deal, a property owner sells a property to an investor, while agreeing to continue to lease space in the building. The transaction generates cash for the former owner (now the tenant), and provides the new owner with steady rent. These deals are particularly attractive when the initial owner is a blue-chip company with a strong credit rating.
Carter Validus has a track record of buying properties owned by service providers, who then become the tenants and continue to operate their business. It has done three sale-leaseback deals with AT&T, and also has acquired buildings housing data centers operated by Internap, Peak 10, Atos and Equinix.
Focus on Core Competencies
The companies said the partnership allows both to focus on their core competencies. IO will continue to deliver colocation and cloud services across its global data center footprint, while landlord Carter Validus will gain revenue from the lease.
“This transaction is aligned with IO’s long-term strategic plan,” said Anthony Wanger, president of IO. “Partnering with Carter Validus allows IO to concentrate its focus on what we do best: Operating world-class IO data centers for our customers.”
As a real estate investment trust, Carter Validus Mission Critical REIT owns real estate and generates income from tenant leases. The company is focused on two sectors, data center and healthcare, citing societal trends that it believes will boost demand for data storage and outpatient healthcare.
“We are excited about adding IO to our roster of strong data center tenants in our portfolio and look forward to our continued relationship with them,” said Michael Seton, president and chief investment officer, Carter Validus Advisors, LLC. “The quality of these two assets will be a great addition to our growing portfolio of high quality mission critical assets.”
IO Phoenix is one of the world’s largest data centers, with more than 500,000 square feet of space, and is divided between traditional raised-floor colocation space and modular deployments using IO’s factory-built IO.Anywhere enclosures. IO Scottsdale, which was the company’s initial data center, is a 125,000 square foot facility. IO opened Scottsdale in 2007, followed by the Phoenix site in 2009.
| 1:59p |
Alibaba Continues To Expand Cloud Data Center Footprint
Chinese e-commerce and cloud services giant Alibaba is opening a fifth data center in Shenzhen in support of its cloud, AliCloud. The data center will house approximately 10,000 servers.
AliCloud posted $38 million in revenue last June, a very small portion of the company’s multibillion-dollar revenue (around $2.54 billion). However, it is a quickly growing segment, and the company continues to open data center locations to support it. As we reported in May, the company recently launched a data center in Beijing and another one in Hong Kong.
The services include cloud servers (called Elastic Computing Servers), storage, a relational database and a content delivery network, all on a pay-as-you-go basis, much like the offerings found in the service portfolios of its U.S. counterparts, such as Amazon Web Services and Google Cloud.
The data center will serve “large and small companies, financial institutions and other third parties in southern China,” the company said.
The cloud platform is called Apsara. It is built using Alibaba’s own proprietary technology that enables massive scalability. “A single Apsara cluster can be scaled up to 5,000 servers with 100 petabyte storage capacity and 100,000 CPU cores,” the company wrote in its SEC filings.
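For a rough sense of what those cluster figures imply, here is a quick back-of-the-envelope calculation. The per-server averages it prints are derived only from the totals Alibaba cites; they are illustrative, not published hardware specifications.

```python
# Rough per-server averages implied by Alibaba's stated Apsara cluster limits.
# These are simple averages for illustration, not actual hardware specifications.
SERVERS_PER_CLUSTER = 5_000
STORAGE_PB = 100
CPU_CORES = 100_000

storage_tb_per_server = STORAGE_PB * 1_000 / SERVERS_PER_CLUSTER  # 1 PB = 1,000 TB
cores_per_server = CPU_CORES / SERVERS_PER_CLUSTER

print(f"~{storage_tb_per_server:.0f} TB of storage per server")  # ~20 TB
print(f"~{cores_per_server:.0f} CPU cores per server")           # ~20 cores
```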
Earlier this year, the company developed an advanced proprietary technology stack to support its growing empire. The distributed system, living in data centers in China and Hong Kong, supports a multitude of cloud-based services, including rentable infrastructure resources and sophisticated Big Data analytics for marketers.
China is a massive untapped cloud market. While the likes of Amazon Web Services and Microsoft Azure are making their way into China, Alibaba represents the biggest local competition going forward.
An expansion into North America may also be on the horizon. Already a major competitor to U.S.-based e-commerce and cloud giants in China, Alibaba would bring another big player into the North American market if it makes the move.
Alibaba’s upcoming IPO is expected to be the biggest on U.S. markets since Facebook’s $16 billion float in 2012 and to yield between $15 billion and $20 billion for the company.
One of the biggest beneficiaries of the offering will be Yahoo, which owns about 23 percent of the company. Yahoo’s stake in Alibaba is the second largest after Japanese telco SoftBank’s 34 percent stake.
| 4:30p |
Six Critical Steps to Evolving Capacity Management
Your data center is growing. You have more applications, workloads and users accessing critical resources. Business demands continuously place new strains on data center resources, capacity and efficiency.
So, is this going to change anytime soon?
The reality around the modern data center is that it is the central hub for all major technologies and delivery platforms. Organizations are asking their data centers for greater density and lower-cost operation. As the data center continues to grow, how can an administrator continue to cram more users and resources into an already tight space? This is where managing capacity becomes a critical operational initiative.
Effective capacity management has become a critical differentiator for IT organizations. Those that can’t evolve their capacity management practice will continue to struggle with complexity and will have little insight into capacity sizing, the impact of changing demand, and the resulting service and application performance. Those that gain advanced capacity management capabilities, however, will be able to more effectively right-size investments, support key IT projects, and align resources with business objectives.
This white paper from CA takes a practical look at capacity management, outlining six key steps IT organizations can take to realize capacity management that delivers maximum value.
For most IT executives, there is a fundamental, common reality: demands are high while budgets and manpower are flat or declining. Within this context, capacity management—the process of aligning IT resources with current and emerging demands—is growing in importance. Capacity management is a key means by which IT teams can address their core operational objectives:
- Maximizing resource utilization in order to reduce investments and costs
- Addressing the increasingly pressing demand to support business agility
- Performing strategic IT infrastructure planning that guarantees capacity will be available when needed while providing acceptable service/application performance levels
- Enhancing budgeting accuracy and intelligence in order to more effectively manage expenses and new investments
- Strengthening service level agreement (SLA) definition and compliance to meet or exceed availability, performance and response time requirements
- Improving visibility and transparency in terms of how business users are consuming IT resources
Without comprehensive, effective capacity management, IT organizations are flying blind, which means IT teams have to resort to being reactive rather than proactive. Not only does this make it difficult to manage current infrastructure and capacity demands, but it also significantly hinders the organization’s ability to support emerging requirements and initiatives.
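As a simple aside (not from CA’s paper), the kind of headroom question capacity management is meant to answer can be sketched in a few lines of code. The utilization, growth rate and threshold below are illustrative assumptions, not figures or methods from the paper.

```python
# Minimal sketch of a capacity headroom check: given current utilization and a
# steady monthly growth rate, estimate how many months remain before a resource
# crosses a planning threshold. All numbers are illustrative assumptions.
import math

def months_until_threshold(current_util, monthly_growth, threshold=0.80):
    """Months until utilization exceeds the threshold, assuming compound growth."""
    if current_util >= threshold:
        return 0.0
    if monthly_growth <= 0:
        return math.inf
    return math.log(threshold / current_util) / math.log(1 + monthly_growth)

# Example: a storage pool at 55% utilization growing 4% per month.
print(f"{months_until_threshold(0.55, 0.04):.1f} months of headroom")  # ~9.5 months
```

The same calculation applies to CPU, memory, storage or network capacity; only the inputs change.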
To help address these capacity challenges – both long and short term – CA outlines six key steps that will help organizations advance their capacity management objectives.
- Step 1: Establish a unified view of component capacity management data
- Step 2: Establish application/service capacity management capabilities
- Step 3: Leverage scenario planning capabilities
- Step 4: Leverage business data
- Step 5: Leverage data from across the technology market
- Step 6: Implement continuous optimization and improvement
Capacity management is a critical endeavor, and its importance is only going to grow in the months and years ahead. Fundamentally, organizations need to get more out of their IT investments and services, and capacity management is the way to make that happen.
Download this white paper today to learn how, with the six steps outlined above, IT organizations can begin to establish the comprehensive, intelligent capacity management capabilities they need to more effectively address their operational and strategic objectives.
| 5:00p |
Friday Funny: Pick the Best Caption for 300 Pound Gorilla
Do you hear that? It’s the sound of a three-day vacation! Let’s start this Labor Day weekend right with our Data Center Knowledge Caption Contest.
Several great submissions came in for last week’s cartoon – now all we need is a winner! Help us out by scrolling down to vote.
Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon and we challenge our readers to submit a humorous and clever caption that fits the comedic situation. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon!
Take Our Poll
For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!
| 5:30p |
DCK Webinar: Why 1% Efficiency Matters
Join us Tuesday, September 9, at 2 p.m. Eastern for Why 1% Efficiency Matters. During this special one-hour webinar, Brad Thrash, Senior Product Manager at GE Critical Power, will discuss Total Cost of Ownership (TCO) and why it’s important during each phase of data center design.
The presentation will further highlight why stakeholders involved in data center UPS projects should employ a TCO model throughout the process, and ways to bring together the seemingly divergent goals of purchasing and operational teams.
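As a rough illustration of why a single point of UPS efficiency merits an hour of discussion, here is a back-of-the-envelope sketch. The load, efficiency values and electricity rate are assumptions made for the example, not figures from GE’s presentation.

```python
# Rough illustration of what a 1% UPS efficiency gain can be worth per year.
# All inputs are assumptions for the sake of the example.
IT_LOAD_KW = 1_000       # critical load carried by the UPS
RATE_PER_KWH = 0.10      # electricity price, $/kWh
HOURS_PER_YEAR = 8_760

def annual_ups_loss_cost(efficiency):
    """Annual cost of the energy lost inside the UPS at a given efficiency."""
    input_kw = IT_LOAD_KW / efficiency
    loss_kw = input_kw - IT_LOAD_KW
    return loss_kw * HOURS_PER_YEAR * RATE_PER_KWH

savings = annual_ups_loss_cost(0.94) - annual_ups_loss_cost(0.95)
print(f"Moving from 94% to 95% efficiency saves roughly ${savings:,.0f} per year")
# ~$9,800 per year for a 1 MW load, before counting the cooling needed to reject the extra heat
```

Scale the load or the electricity rate up and the same one-point gain grows proportionally, which is why it shows up so clearly in a TCO model.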
Webinar details
Title: Why 1% Efficiency Matters
Date: Tuesday, September 9, 2014
Time: 2 p.m. Eastern / 11 a.m. Pacific (duration: 60 minutes, including time for Q&A)
Moderator: Bill Kleyman
Register: Sign up for the webinar
Following the presentation there will be a Q&A session with your peers and industry experts.
About the speaker
Brad Thrash is a product manager for GE’s Critical Power business and is responsible for the AC power systems product line globally.
In his more than 25 years at GE, Brad has held leadership roles in various businesses, including GE’s Power Quality and Power Generation businesses. In these roles, he focused on application engineering, service engineering, sales and product management.
Brad holds a B.S. in mechanical engineering and is a licensed professional engineer. He is a member of the Institute of Electrical and Electronics Engineers (IEEE) and the American Society of Mechanical Engineers (ASME). Brad is also on the Power Sub Work Group of The Green Grid.
Sign up today and you will receive further instructions via email about the webinar. We invite you to join the conversation.
| 8:00p |
How the Internet May Be Taken Down
We’ve all seen some of the latest apocalyptic movies with pretty epic reasons for losing the Internet, electricity and other modern technologies. What’s interesting is that a lot of these scenarios are far-fetched and not entirely realistic.
So here is where we pose a really challenging question: how, in today’s world, can the Internet completely go down?
Before we get into “how” – we have to understand “what” the Internet really is. At a very high-level, the Internet is a vast interconnected network of data centers spanning the globe. These data centers have exchange points, protocols and routes that they have to follow. With every year that passes, the Internet becomes more and more resilient. Why? Because at this point, Internet communication is absolutely critical to the survival of our current society.
To really appreciate just how complex the Internet is, here is a map of the entire Internet network, in all of its glory.
Source: OPTE.org
We know the Internet is huge and that there are a lot of connections. So how can all of this fail? Well, there are a few ways.
Cutting the wires
Bringing down a couple – or even all – of the communications satellites would actually do little to cut Internet traffic. Yes, it would cause an amazing number of problems, but the Internet would most likely live on. At this point, roughly 99 percent of global Web traffic depends on deep-sea networks of fiber-optic cables that blanket the ocean floor like a nervous system. These are major tangible targets – creating very real choke points in the system.
Consider this: as much as three-fourths of the international communications between the Middle East and Europe have been carried by two undersea cables, SeaMeWe-4 and FLAG Telecom’s FLAG Europe-Asia cable. To make things movie-worthy, you can’t just cut the wires. Why? Because they’re designed to be repaired. However, a strategic strike that takes out the fiber-optic cables or damages an entire cable run would do the trick. Done at the choke points, it could disable or almost completely halt global Internet traffic.
Destroying the root servers
It’s much easier to type Google.com than 74.125.225.131. That’s what the DNS root servers make possible: they sit at the top of the system that resolves .com, .net and .org names to the correct IP addresses. If you take out these servers, the Internet will no longer recognize the alphabet when you type in an address.
Here’s the interesting part: there are “only” 13 root server addresses that anchor this system. Here’s the list of them. Effectively, if you take these servers down, the only way to “browse” the Internet would be with a physical piece of paper, a pen and a really good memory for numbers.
Here’s the other interesting part: take down these servers and IPv6 name resolution won’t work either. Phones, computers, businesses, everything would stop. The challenge with such an attack is that these servers are replicated and backed up hundreds of times over. Plus, with IPv6, how these data centers receive and process multiple IP addresses is changing as well. Still, a “mission impossible”-style attack in which backups are killed, replication is stopped and only the 13 servers remain could make a catastrophic outage possible.
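To make the role of name resolution concrete, here is a minimal lookup using Python’s standard library. It asks the local resolver, which in turn depends on the root and TLD server chain described above; the hostname is only an example.

```python
# A name-to-address lookup of the kind the DNS hierarchy makes possible.
# Without working DNS, users would be typing raw addresses like 74.125.225.131.
import socket

def resolve(hostname):
    """Return the unique (address family, address) pairs the resolver finds."""
    results = set()
    for family, _, _, _, sockaddr in socket.getaddrinfo(hostname, None):
        results.add((socket.AddressFamily(family).name, sockaddr[0]))
    return sorted(results)

for family, address in resolve("google.com"):
    print(family, address)  # prints IPv4 (AF_INET) and IPv6 (AF_INET6) answers where available
```

If the resolution chain were gone, the lookup above would simply fail, and every application that leans on it would be reduced to the pen-and-paper approach described earlier.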
Cyber warfare and politics
China, Iran, North Korea, Syria and a few other countries already have an “Internet kill switch.” We’ve seen entire countries go dark: when Syrian and Egyptian rebels were posting pictures of the conflicts, their governments simply flipped the “switch.” This is what happened:
Source: Akamai
Source: Renesys.com
What if the U.S. had this switch? What about the EU? What if there were secret programs (NSA-style) with complete control of the Internet from a kill-switch perspective? Here’s the interesting part: what if it broke? A country or governing body can take down the Internet, but what if it can’t bring it back up? What if a malicious group gains access to the kill switch and takes it down permanently? Even if you could eventually fix it, having the Internet go down for a few months would be absolutely devastating – especially if it happened on a global scale.