Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Wednesday, March 4th, 2015
1:00p
Equinix Expanding Global Data Center Footprint
New data centers in New York, Toronto, London, Singapore, and Australia—all key global financial and network interconnection hubs—will contribute close to a half-million square feet to the Equinix data center footprint this year. The company announced the global data center expansion program Wednesday morning.
It will bring the total to 10 million square feet for the global services provider, which promotes its connectivity and colocation space offerings as a way to distinguish itself from other companies in this arena.
“We’re the odd company in the sense that, although we’re a data center operator, we’re an interconnection company, and colo comes for a ride,” Jim Poole, vice president, Global Service Providers, at Equinix, said.
The connectivity value proposition grows stronger as workload delivery changes. Hybrid infrastructure has changed traffic patterns, and the WAN is changing along with them.
Poole said that as enterprises adapt colocation to hybrid and multi-cloud scenarios, companies are migrating larger back-office needs to Equinix data centers to cut down on data exchange between the proprietary environment and cloud providers.
Business Suites Coming to New York
Equinix is rolling out Business Suites at its new facility in New York, a product that sits between retail and wholesale colocation. These are dedicated suites, or as Poole explained, “standard domain in wholesale, but in wholesale, you have to operationalize it yourself.”
Business Suites are a response to customers whose connectivity needs are high but whose space requirements are larger than retail colocation is suited for.
The larger trend points to the need for fewer workloads to go down to the office. “The only thing that really needs to go down to the office is rendering data,” said Poole. “The other stuff can happen inside the data center. It’s why we concentrate on these network and cloud hubs.”
Just the second market in the U.S. to offer Business Suites, New York joins Ashburn, Virginia, and some locations in Europe and Asia-Pacific.
The Redwood City, California-based company also does a lot of business with financial services companies in New York, which stands to grow further. “More and more asset classes are going to electronic trading,” said Poole.
Building Big in Asia Pacific
Equinix is building a third data center on its Singapore campus, which will be its biggest facility in the Asia Pacific market. “Singapore is only second to Ashburn [Virginia] from an interconnection perspective,” said Poole.
Melbourne, Australia, is a relatively new market for Equinix. Its first data center there went live recently, with more than 30 customers taking space. “This expansion resulted at the request of many customers because it’s an economic center for Australia,” said Poole.
Building to Maintain Lead in Europe
The company will also open a sixth data center in Slough, U.K., first announced last year. Once the $80 million data center’s first phase of 85,000 square feet comes online, Equinix’s London campus will span close to 400,000 square feet.
Slough, a London suburb, is a big financial services hub for Equinix, especially considering that many of these firms are shifting from more power-challenged data centers in London to Slough, according to the company.
While the blockbuster merger between Interxion and TelecityGroup and NTT’s acquisition of e-shelter in Germany are changing the European data center market, Equinix had already gone from third to first over the last three to four years. Much of that rise was organic growth, apart from the key Ancotel acquisition in 2012. Poole attributes the increase to interconnection and a natural tendency for American multinationals to attract other multinationals.
Expanding in Canada’s Financial Center
In Toronto, the new Equinix data center is close to 151 Front Street, the hub of the city’s algorithmic trading activity. Equinix retrofitted an old factory there, though it was able to use the existing slab.
“In the case of Toronto, we found the right building, in a downtown area close to the existing hub, which was very attractive,” said Poole. “Toronto has many large data centers on the outskirts of the city. However, people like to expand as close [as possible] to the original for latency purposes.”
All five expansions are key strategic locations meant to capture the changing nature of IT and its changing connectivity needs.
“Cloud adoption is forcing traffic out of telecom buildings and to sites like ours,” said Poole. “At some point, there’s a tipping point: Do I want things talking in a WAN, or do I take the chatty bits and move them? [Network and Cloud Hubs] form a natural gravity, if you will, of enterprise.”
This enterprise trend is also contributing to the success of Equinix’s Performance Hub offering. “One of the first steps in that is WAN redesign,” said Poole.
Redesigning the network is an integral first step before enterprises are ready to move back-office systems, and that step alone can bring costs down significantly.
Connectivity is a large reason why The Wall Street Journal coined the term “the Internet’s Biggest Landlord” for Equinix and also the reason the company has historically been immune to market pricing weakness.
It’s about return on investment, said Poole. “The thing I say to service provider customers is, ‘If you’re not making money, you shouldn’t be here.’”
Equinix reported revenue of over $2.4 billion for 2014, up 14 percent from 2013.
4:30p
Big, Bad and Ugly: Challenges of Maintaining Quality in the Big Data Era
Sebastiao Correia is director of product development for data quality at Talend.
More than a decade ago, we entered an era of data deluge. One reason for this big data deluge is the steady decrease in the cost per gigabyte, which has made it possible to store more and more data for the same price. Another reason is the expansion of the Web, which has allowed everyone to create content and companies like Google, Yahoo, Facebook and others to collect increasing amounts of data. I’d like to explore some of the paradigm shifts caused by the data deluge and its impact on data quality.
The Birth of a Distributed Operating System
With the advent of the Hadoop Distributed File System (HDFS) and the resource manager called YARN, a distributed data platform was born. With HDFS, very large amounts of data can now be placed in a single virtual place and, with YARN, the processing of this data can be done by several engines such as SQL interactive engines, batch engines or real-time streaming engines.
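As a rough sketch of the “one store, several engines” idea (assuming a PySpark environment; the HDFS path and column names are hypothetical placeholders, not anything from this article), the same files can be queried both programmatically and through SQL without moving the data:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("one-store-several-engines").getOrCreate()

# One copy of the data sits in the distributed file system.
events = spark.read.parquet("hdfs:///datalake/raw/events")

# "Batch engine" style: a programmatic aggregation over the files in place.
daily_counts = events.groupBy("event_date").count()

# "SQL interactive engine" style: the very same data exposed through SQL.
events.createOrReplaceTempView("events")
top_users = spark.sql(
    "SELECT user_id, COUNT(*) AS n FROM events GROUP BY user_id ORDER BY n DESC LIMIT 10"
)

daily_counts.show()
top_users.show()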
The ability to store and process data in one location is an ideal framework for managing big data. But while data lakes are a tremendous step forward in helping companies leverage big data, they can also introduce several quality issues, as outlined in an article by Barry Devlin. In summary, as the old adage goes, “garbage in, garbage out”: being able to store petabytes of data does not guarantee that all of that information will be useful or usable.
A concept similar to the data lake that the industry is discussing is the data reservoir. The premise is to perform quality checks and data cleansing before inserting the data into the distributed system, so that the data is ready to use rather than raw.
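As a minimal sketch of that premise (again assuming PySpark; the paths, columns, and validation rules are hypothetical), a pre-ingest quality gate might let only validated records into the reservoir and route the rest to quarantine:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("reservoir-ingest").getOrCreate()

incoming = spark.read.json("hdfs:///staging/customers")

# Example quality gates: required field present and a crude email format check.
is_valid = (
    F.col("customer_id").isNotNull()
    & F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
)

clean = incoming.filter(is_valid)
rejects = incoming.filter(~is_valid)

# Only validated records enter the reservoir; everything else goes to quarantine for review.
clean.write.mode("append").parquet("hdfs:///reservoir/customers")
rejects.write.mode("append").parquet("hdfs:///quarantine/customers")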
Accessibility is one data quality dimension that benefits from the data lake or data reservoir concept. Hadoop makes data, even legacy data, accessible: everything can be stored in the data lake, so tapes and other dedicated storage systems, where accessibility was a known issue, are no longer required.
But distributed systems also have an intrinsic drawback, captured by the CAP theorem. The theorem states that, in the presence of a network partition, a distributed system cannot provide both data consistency and data availability. The Hadoop Distributed File System is a partition-tolerant system that guarantees consistency, so the availability dimension of data quality can’t always be guaranteed: data may not be accessible until all of its copies on different nodes are synchronized (consistent). Clearly, this is a major stumbling block for organizations that need to scale and want to act immediately on insights derived from their data.
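To make that trade-off concrete, here is a toy, in-memory illustration in Python (it is not how HDFS is implemented, just the consistency-over-availability choice in miniature): while its replicas disagree, the store refuses reads rather than serve possibly stale data.

class ConsistencyFirstStore:
    """A toy replicated key-value store that chooses consistency over availability."""

    def __init__(self, replicas=3):
        self.replicas = [dict() for _ in range(replicas)]
        self.partitioned = False  # simulates a network partition

    def write(self, key, value):
        self.replicas[0][key] = value   # the primary accepts the write
        if not self.partitioned:
            self._replicate(key)        # propagate to the other copies

    def _replicate(self, key):
        for replica in self.replicas[1:]:
            replica[key] = self.replicas[0][key]

    def read(self, key):
        values = {replica.get(key) for replica in self.replicas}
        if len(values) > 1:
            # The copies disagree: sacrifice availability to preserve consistency.
            raise RuntimeError("replicas not yet synchronized; read refused")
        return values.pop()


store = ConsistencyFirstStore()
store.write("row-1", "v1")
print(store.read("row-1"))      # "v1" -- all replicas agree

store.partitioned = True
store.write("row-1", "v2")      # replication cannot reach the other nodes
try:
    store.read("row-1")
except RuntimeError as err:
    print(err)                  # unavailable until the copies are synchronized again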
Colocation of Data and Processing
Before Hadoop, organizations analyzed data stored in a database by sending it out of the database to another tool or database. With Hadoop, the data remains in Hadoop: the processing algorithm is sent to the Hadoop MapReduce framework, where it can access the raw data directly. For data quality, this is a significant improvement, because you no longer need to extract data in order to profile it; you can work with the whole dataset rather than with samples or selections. In-place profiling combined with big data systems opens new doors for data quality. It’s even possible to imagine data cleansing processes that run inside the big data framework rather than outside it.
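As a minimal sketch of in-place profiling (assuming PySpark; the HDFS path is a hypothetical placeholder), basic quality metrics such as row count, nulls per column, and distinct values per column can be computed where the data lives, with nothing extracted to a separate tool:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("in-place-profiling").getOrCreate()

df = spark.read.parquet("hdfs:///datalake/raw/orders")

total_rows = df.count()

# One aggregate expression per metric, evaluated inside the cluster.
exprs = (
    [F.count(F.when(F.col(c).isNull(), c)).alias(f"{c}_nulls") for c in df.columns]
    + [F.countDistinct(c).alias(f"{c}_distinct") for c in df.columns]
)
profile = df.agg(*exprs)

print(f"rows: {total_rows}")
profile.show(truncate=False)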
Schema-on-Read
With traditional databases, the schema of the tables is predefined and fixed. Enforcing constraints with this kind of “schema-on-write” approach certainly helps improve data quality, as the system is safeguarded against data that doesn’t conform to the constraints. However, constraints are very often relaxed for one reason or another, and bad data can still enter the system. Big data systems such as HDFS take a different strategy: “schema-on-read.” This means there is no constraint on the data going into the system; the schema is defined as the data is being read, much like a “view” in a database. We may define several views on the same raw data, which makes the schema-on-read approach very flexible.
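As a rough illustration (assuming PySpark; the paths and fields are hypothetical), two different schemas can be applied at read time to the same raw files, each acting as a different “view” on the data:

from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("schema-on-read").getOrCreate()

# View 1: a narrow schema for billing, ignoring every other field in the files.
billing_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("currency", StringType()),
])
billing_view = spark.read.schema(billing_schema).json("hdfs:///datalake/raw/orders")

# View 2: a different schema over the very same raw files, for clickstream analysis.
clicks_schema = StructType([
    StructField("order_id", StringType()),
    StructField("referrer", StringType()),
    StructField("clicked_at", TimestampType()),
])
clicks_view = spark.read.schema(clicks_schema).json("hdfs:///datalake/raw/orders")

billing_view.show()
clicks_view.show()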
However, in terms of data quality, letting just any kind of data enter the system is probably not a viable solution. Accepting a variety of data formats requires a processing algorithm that defines an appropriate schema-on-read to serve the data, and as the input data grows more varied, the algorithm that parses, extracts and fixes it grows more complex, to the point where it becomes impossible to maintain.
Pushing this reasoning to its limits, some of the transformations executed by the algorithm can be seen as data quality transformations. Data quality then becomes a cornerstone of any big data management process, while the data governance team may have to manage “data quality services” and not only focus on data.
On the other hand, the data that is read through the “views” would still need to obey most of the standard data quality dimensions. A data governance team would also define data quality rules on this data retrieved from the views. It raises the question of the data lake versus the data reservoir. Indeed, the schema-on-read brings huge flexibility to data management, but controlling the quality and accuracy of data can then become extremely complex and difficult. There is a clear need to find the right compromise.
We see here that data quality is pervasive at all stages in Hadoop systems and not only involves the raw data, but also the transformations done in Hadoop on this data. This shows the importance of well-defined data governance programs when working with big data frameworks.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:34p
Alibaba Subsidiary to Open Cloud Data Center in Silicon Valley
Alibaba’s cloud services arm Aliyun is preparing to open its first overseas cloud data center in Silicon Valley. The Chinese tech giant is making a global push, and the data center is an important first step.
The data center in Santa Clara, California, announced Wednesday, will provide a variety of cloud computing services. It will initially focus on Chinese companies based in the U.S., with the plan to gradually expand services and products to international clients later this year.
Alibaba has a robust set of cloud services, including Amazon Web Services-like Infrastructure-as-a-Service offerings, as well as big data analytics software for online marketers, also delivered as a service.
While details are scarce, the company is likely to have taken some amount of wholesale data center space from one of the Valley’s big providers. As we reported earlier this month, there is a lot more demand for data center space in the area than there is supply, and much of that demand is coming from internet companies in Asia.
“Aliyun hopes to meet the needs of Chinese enterprises in the United States, and the ultimate objective of Aliyun is to bring cost-efficient and cutting-edge cloud computing services to benefit more clients outside China to boost their business development,” Ethan Sicheng Yu, Aliyun vice president, said in a statement.
Aliyun’s existing cloud data centers are in Hangzhou, Qingdao, Beijing, Shenzhen, and Hong Kong. Alibaba’s cloud division has been growing quickly, adding data centers in Beijing, Hong Kong, and Shenzhen last year. The company hinted at North American expansion last August.
As of the end of June 2014, 1.4 million customers were using Aliyun services directly or indirectly through independent software vendors. A recent report from research firm IDC placed Aliyun as the largest Infrastructure-as-a-Service provider in China with close to a quarter of the market.
During a recent Shopping Festival, Aliyun handled peak order creation volumes of 80,000 orders per second, a testament to its ability to scale and the company’s network.
The company also touted its cloud security, citing its successful defense of a Chinese gaming app company against what is believed to be the largest DDoS (Distributed Denial of Service) attack recorded on the mainland. The attack in late December lasted 14 hours, with peak attack traffic reaching 453.8 gigabits per second.
A recent report on DDoS from CDN provider Akamai found that China is the largest emerging market for DDoS, accounting for 20 percent of attack origination; the U.S. remains first in attack origination overall.
Alibaba shares are at their lowest point since the company’s blockbuster IPO in September 2014. Its rival JD.com reported better-than-expected results. Alibaba is also in the midst of a filing controversy and a “brushing” controversy: The Wall Street Journal reported that Alibaba merchants were paying people to pretend to be customers, a practice dubbed “brushing.”
8:22p
IBM’s SoftLayer to Launch OpenPOWER Cloud Servers
IBM SoftLayer will offer OpenPOWER-based servers as part of its bare-metal cloud portfolio. The cloud servers join SoftLayer’s growing stable of POWER-based systems. IBM and SoftLayer worked closely with fellow OpenPOWER Foundation members Tyan and Mellanox on the design.
The move combines the big OpenPOWER initiative at IBM with the crown jewel of its cloud play in SoftLayer. The new cloud servers will be based on IBM’s POWER8 chip architecture, an open server platform built to run Linux applications. The service will come online in the second quarter. SoftLayer first deployed Power Systems in 2014 to support the IBM Watson cloud portfolio.
OpenPOWER is specifically focusing on big data cloud needs and high performance use cases. The nature of modern workloads and applications is shifting, and POWER is being positioned as the new architecture to match that evolution.
SoftLayer has long had a bare-metal cloud server offering positioned for performance. Bare metal is what it is primarily known for, although the company offers a range of cloud infrastructure options. It recently added more granular billing (which we covered in conjunction with the opening of a Melbourne, Australia data center).
Tyan provides advanced server and workstation platforms, and Mellanox brings in its InfiniBand and Ethernet solutions expertise.
The OpenPOWER consortium was launched in 2013, with heavy hitters Mellanox, NVIDIA, Tyan, and Google as initial members. The consortium makes POWER IP licensable to others and has made POWER hardware and software available for open development.
Several of the larger cloud providers have thrown weight behind OpenPOWER. Rackspace — whose competition with SoftLayer extends back to the days before SoftLayer was gobbled up by IBM — is an official member. Rackspace is building an OpenPOWER-based Open Compute server, uniting two big open hardware initiatives.
French hosting company OVH is big on customization and is adopting OpenPOWER as part of its RunAbove service for developers building data-intensive applications.
However, the traction isn’t just in cloud.
“OpenPOWER is really built on the notion that we’re growing an ecosystem on the POWER architecture,” said Calista Redmond, IBM’s director of business development for OpenPOWER. “There is huge momentum for OpenPOWER as it turns from an IBM organization to seeing the ecosystem take off. The more ways we can make this available to the industry as well as the end user community the better.”
In the war between OpenPOWER and x86, OpenPOWER has long had a compelling sell in terms of performance, but Redmond believes it wins on total cost of ownership as well. “We’re acutely aware of cost, and cost has been one of the factors in the past. A box-to-box comparison is not the right measure. Total cost of ownership is. If you can consolidate 26 boxes to one, and you’re paying by the core, factor in transition costs, data center costs, we come out ahead, especially with analytics.”
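To make the “paying by the core” arithmetic concrete, here is a back-of-the-envelope sketch in Python. Every number is a hypothetical placeholder rather than an IBM or Linley Group figure; the only point it illustrates is that per-box price alone is not the right comparison once per-core software licensing, facilities, and transition costs are counted.

# All inputs are illustrative placeholders, not vendor pricing.
def three_year_tco(boxes, price_per_box, cores_per_box, sw_license_per_core_yr,
                   power_and_space_per_box_yr, migration_cost=0, years=3):
    hardware = boxes * price_per_box
    software = boxes * cores_per_box * sw_license_per_core_yr * years
    facilities = boxes * power_and_space_per_box_yr * years
    return hardware + software + facilities + migration_cost

# Scenario A: 26 commodity x86 boxes, no migration cost.
x86 = three_year_tco(boxes=26, price_per_box=8_000, cores_per_box=16,
                     sw_license_per_core_yr=1_000, power_and_space_per_box_yr=2_500)

# Scenario B: one larger POWER-class box consolidating the same workloads,
# including a one-time transition cost.
power = three_year_tco(boxes=1, price_per_box=60_000, cores_per_box=24,
                       sw_license_per_core_yr=1_000, power_and_space_per_box_yr=4_000,
                       migration_cost=50_000)

print(f"x86 scenario:   ${x86:,.0f}")
print(f"POWER scenario: ${power:,.0f}")

Under these made-up inputs the consolidated box comes out well ahead, but the conclusion can flip with different license and migration assumptions, which is exactly why Redmond argues for a TCO comparison rather than a box-to-box one.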
A recent report from the Linley Group (subscription required) examines the POWER architecture from both a performance and a TCO standpoint and makes a compelling argument.
Open projects and consortia that bring together a community of talent for a shared goal of advancing technology are primarily driving the innovation in cloud. OpenPOWER has a 100-plus-member open development community that continues to grow. Several big names are playing with the architecture and refining it.
Redmond also notes that OpenPOWER is getting closer to the Open Compute project, the Rackspace initiative being one example.
OpenPOWER is serious competition for the likes of Intel. Intel’s competition comes in the form of proprietary offerings from big vendors and from many of the larger web-scale properties, like Facebook, building servers to their own specifications.
IBM sold its commodity x86 server business to Lenovo and has been focusing resources on POWER8 and OpenPOWER since.
9:00p
Windows Server 2016 Leaks Preview Cloud-Optimized Nano Server
This article originally appeared at The WHIR
Early details about Windows Server 2016 were leaked over the weekend, including a plan to release a lightweight, cloud-optimized version, called Nano Server.
A leaked Microsoft slide deck said that Nano Server is a new “headless deployment option for Windows Server” and features “deep refactoring” focused on CloudOS infrastructure, born-in-the-cloud applications and containers.
Nano Server uses a zero-footprint model where server roles and optional features live outside of the server.
Nano Server will “eliminate the need to ever sit in front of a server,” according to the slides, reflecting a DevOps mindset to “treat servers like cattle, not pets.”
According to a report by Redmond Mag, Nano Server will be a smaller deployment option than the Server Core option that currently exists in Windows Server 2012 R2.
“Supposedly, using the Nano Server option will reduce the number of ‘critical’ and ‘important’ security bulletins that need to be applied, compared with Server Core,” the report said. “In addition, the number of reboots will be reduced compared with Server Core.”
The product will be available sometime in 2016, the report said. The test build of the server is due in the spring, ZDNet said.
This article originally appeared at http://www.thewhir.com/web-hosting-news/windows-server-2016-leaks-preview-cloud-optimized-nano-server
9:30p
Worldwide Server Sales Reach $50.9 Billion in 2014: Report
This article originally appeared at The WHIR
Worldwide server shipments and factory revenue increased in Q4 2014 on the strength of Chinese server investment, which lifted Asia/Pacific revenue by 15.8 percent, according to the latest IDC report. The Q4 2014 Worldwide Quarterly Server Tracker, released on Tuesday, also shows how the completion of IBM’s sale of its x86 server business to Lenovo last October disrupted the top vendor list.
Demand in China was the server market’s greatest strength during the past quarter; however, the EMEA, Japan, and U.S. regions all grew as well, at 1.2, 1.5, and 1.0 percent year over year, respectively. That brought worldwide gains to 2.8 percent in server shipments and 1.9 percent in factory revenue, compared with a year earlier.
Full year server sales in 2014 topped 2013 by 2.3 percent, rising to $50.9 billion, and unit shipments reached a record high of 9.2 million units, a 2.9 percent increase.
The server market continued to fragment along system-type lines in the past quarter. Volume system revenue increased by 4.9 percent to $10.8 billion, supported by the “continued expansion of x86-based hyper-scale server infrastructures,” while mid-range system demand increased 21.2 percent to $1.4 billion with the help of “enterprise investment in scalable systems for virtualization and consolidation,” the report says.
In contrast to those rosy numbers, high-end systems declined 17.2 percent ahead of a recently announced IBM z systems upgrade.
“IDC continues to see a server market that is largely about serving the needs of two separate and distinct sets of customer workloads. Traditional 2nd Platform workloads are driving the need for richly configured integrated systems aimed at driving significant levels of consolidation while also enabling advanced management and automation in the datacenter,” said Matt Eastwood, Group Vice President and General Manager, Enterprise Platforms at IDC. “These 2nd Platform workloads continue to represent a healthy profit pool for server vendors targeting virtual environments in both enterprise and service providers. At the same time, new 3rd Platform profit pools continue to develop fueled by the needs of fabric-based disaggregated software defined environments. This 3rd Platform market continues to witness a build-vs.-buy battle for next generation workloads, which are stateless, scale horizontally, and do not assume infrastructure resiliency.”
Among vendors, HP gained a huge market share edge when IBM, with which it had been virtually tied as the leading server vendor, dropped below Dell to 13.7 percent of the market. In Q4 2013, IBM had held a 26.8 percent market share, while HP slipped only slightly, from 26.9 to 26.8 percent.
Lenovo made up some, but not all, of the lost share from IBM, rising from 0.9 percent to 7.6 percent of the market. That gain put Lenovo fourth, ahead of Cisco’s 5.3 percent share.
The deal between IBM and Lenovo looks good for the latter based on the continuing strength of the market, in particular in the Chinese company’s own backyard. It also looks increasingly wise for IBM, though, which might have been shut out of large parts of the Chinese server market eventually anyway.
This article originally appeared at http://www.thewhir.com/web-hosting-news/worldwide-server-sales-reach-50-9-billion-2014-report
10:46p
Oracle Debuts New Ethernet Switches, Virtual Network Services
At Mobile World Congress in Barcelona this week, Oracle announced new Ethernet data center switches and the addition of virtual network services to Oracle SDN.
Targeted at software-defined data centers and clouds, and touting low latency with massive scalability, the new switches are optimized for Oracle engineered systems, servers, and storage, the company says, and are also used in its telco-focused Netra Modular Systems. Providing maximum port density, the new ES2-72 and ES2-64 switch models combine 10 Gbps and 40 Gbps ports with Oracle stack attributes such as routing and on-demand management through Oracle Fabric Manager. The tool can connect up to 1,000 servers and 16,000 private virtual interconnects, according to Oracle.
“The transition to software-defined data centers has created new networking requirements where high speed and low latency are must-haves for most applications,” said David Krozier, principal analyst at Ovum. “Oracle’s new ES2-64 and ES2-72 Ethernet switches ably address these requirements with impressive performance at what Oracle claims is a very competitive price.”
With a new iteration of data center switch hardware in place, the real spotlight is on Oracle SDN and the new virtual network services that unify InfiniBand and Ethernet fabrics. Oracle says the new services create a single virtual instance for network functions such as firewall, load balancer, router, VPN, and network address translation. Further appealing to software-defined data center and cloud providers, Oracle notes that the new virtual network services can be daisy-chained, so that either all of the services or a subset can be configured per tenant. Through a high-availability feature, two virtual network service instances can be provisioned in an active/standby configuration.
“Cloud-enabled data centers are only as fast or as agile as their networking allows, which makes the convergence of software-defined networking and network services a next logical step in the evolution of the software-defined data center,” said Raju Penumatcha, senior vice president, Netra Systems and Networking, Oracle. “Oracle’s new Ethernet switches and virtual network services in Oracle SDN help clear the way for enterprises to deploy key network services faster and gain high performance at the lowest cost.”