Data Center Knowledge | News and analysis for the data center industry
Monday, September 22nd, 2014
6:18a |
IBM Opens Largest U.S. Business Continuity Center in North Carolina IBM has officially unveiled its newest business continuity center in Research Triangle Park (Durham, North Carolina), which includes data center space and can act as a disaster recovery (DR) site. Active since July, the facility already has its first customer, Monitise, a mobile banking and payments company.
At about 4 million square feet, the facility represents the largest single-site investment in the country for IBM’s Business Continuity and Resiliency Services organization. The company has about 150 other “resiliency centers” around the world.
IBM will provide cloud and traditional data recovery services at the DR site, including services for its SoftLayer cloud customers.
Business continuity services are a major segment of the IT services market. Many providers of data center colocation, managed services and cloud infrastructure services offer business continuity, but IBM is one of the biggest players in the space.
Its new center has 72,000 square feet of raised-floor data center space, an 8,000-square-foot tape pool and 10 suites fully equipped for customer disaster recovery staff.
It is connected to the dedicated BCRS backbone. AT&T, Time Warner Telecom, Level 3 and Verizon Business provide network connectivity services at the site.
IBM is currently planning to build more resiliency centers in Mumbai and Izmir, Turkey, Mike Errity, IBM vice president of business continuity and resiliency services, told the Triangle Business Journal.
The company had a big presence in RTP well before it built the new center.
Opened in the late 50s, 7,000-acre RTP is one of the world’s largest research parks and the largest research park in the U.S. IBM is one of nearly 200 companies there.
About 14,000 full-time IBMers go to work at RTP.
In 2010 the company opened a $360 million data center there, providing 100,000 square feet of data center space. | 2:53p |
Certified Data Center Training Seminar October 2014 National Tour The International Data Center Authority (IDCA) will tour the U.S. throughout October for the IDCA National Tour.
The month-long tour, hosted by TechXact, will be a series of training events covering the IDCA’s international standard based on the Infinity Paradigm, a comprehensive framework the organization positions as filling the gaps left by outdated, incomplete legacy standards still in use today.
Attendees can choose certified training events related to their specialties, whether in infrastructure, operations, management or engineering. The training events will be led by industry leaders and attended by others looking to meet the international standard.
Individual events will be held in:
- Washington, D.C. on October 6 – 10
- Chicago on October 13 – 17
- San Francisco on October 20 – 24
- Dallas on October 27 – 31
For more information regarding the events, venues, or registration, visit www.techxact.com. | 4:39p |
Brocade Unveils OpenDaylight SDN Controller Vyatta Brocade has announced an OpenDaylight-based controller for software-defined networking.
Its commercially supported Vyatta controller, based on open source SDN software developed by the Linux Foundation’s OpenDaylight project, can be used to manage a wide range of physical and virtual network infrastructure components, such as switches, routers, firewalls, VPNs and load balancers. Deployed as a virtual machine, it supports all major hypervisors and non-Brocade network equipment.
Brocade already sells gear that supports OpenFlow, an open SDN protocol. OpenDaylight supports OpenFlow, but it is only one of several network management protocols the project addresses.
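Because the Vyatta controller is built on OpenDaylight, it inherits the project’s northbound REST interfaces. The sketch below is a minimal illustration of pulling the network topology from an OpenDaylight-based controller over RESTCONF; the address, port, credentials and endpoint reflect stock OpenDaylight defaults of that era and are assumptions here, so the Brocade product’s configuration may well differ.

```python
# Minimal sketch: query the topology held by an OpenDaylight-based controller
# over its RESTCONF northbound API. Assumes a stock Helium-era OpenDaylight
# install listening on port 8181 with default admin/admin credentials; the
# Brocade Vyatta Controller's address, port and authentication may differ.
import requests

CONTROLLER = "http://192.0.2.10:8181"   # hypothetical controller address
TOPOLOGY_URL = CONTROLLER + "/restconf/operational/network-topology:network-topology"

resp = requests.get(TOPOLOGY_URL, auth=("admin", "admin"),
                    headers={"Accept": "application/json"})
resp.raise_for_status()

# Walk the returned topology and list the nodes (switches, routers) and links
# the controller currently knows about.
for topology in resp.json().get("network-topology", {}).get("topology", []):
    print("Topology:", topology.get("topology-id"))
    for node in topology.get("node", []):
        print("  node:", node.get("node-id"))
    for link in topology.get("link", []):
        print("  link:", link.get("link-id"))
```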
Brocade rivals in the data center network market have been working toward similar goals. Cisco’s XNC controller supports OpenDaylight, and Juniper submitted OpenDaylight plugin code for its Contrail controller to the open source project in April.
Brocade’s OpenDaylight controller includes an open source development platform customers and third-party developers can use to build their own SDN functionality. They can retain rights to whatever they develop.
Brocade expects Vyatta to become available in November, along with its first SDN application, the Path Explorer. The app provides network topology awareness and path optimization.
Brocade’s second app for Vyatta will be Volumetric Traffic Management, slated for release early next year. The app is designed to help manage volumetric traffic attacks and legitimate “elephant flows” in data centers.
“The momentum behind the OpenDaylight Project is unlike anything else the networking industry has experienced and that is because the customer demand for an open, software-defined platform is louder than ever before,” said Neela Jacques, executive director of the OpenDaylight Project. “Brocade has been among the most active contributors and the Brocade Vyatta Controller is not only a testament of its commitment to the OpenDaylight Project, but to delivering open networking solutions.” | 5:09p |
Mesos Founding Father/Twitter Fail Whale Slayer Hindman Joins Mesosphere Benjamin Hindman has joined Mesosphere in its quest to build the distributed operating system for managing data center and cloud resources at web scale. Hindman, who comes to Mesosphere after four years at Twitter, was one of the original creators of Apache Mesos, the popular open source server cluster management software Mesosphere built its business on.
Mesos lets organizations manage a mix of distributed infrastructure resources like a single machine. It has strong adoption at web-scale companies with significant engineering talent.
The cluster management software powers Twitter’s data centers and is widely credited with making appearances of the notorious Twitter Fail Whale far less frequent. Mesosphere’s goal is to bring the same capabilities to organizations of any size, offering a variety of enterprise products based on the Mesos kernel, as well as commercial support.
“Today’s applications — not servers — should be the first class citizens in our data centers,” Hindman said. “And to accomplish that we need a new kind of distributed operating system, one that operates at the scale of the data center and cloud and that makes launching and running distributed applications as easy as launching and running applications on a personal computer or mobile device.”
Hindman will lead the design of the distributed operating system the company is building. He was previously a close advisor to Mesosphere, going back to its founding last year.
Mesosphere announced $10.5 million in funding this summer and has been assembling a team filled with distributed computing experts. Among other recent hires is Christos Kozyrakis, an associate professor of electrical engineering and computer science at Stanford University.
Mesos is a distributed systems kernel born out of UC Berkeley’s AMPLab about five years ago. Hindman was a PhD student at Berkeley at the time.
The software abstracts CPU, memory, storage and other compute resources on servers and cloud instances, enabling management of entire pools of resources as a single machine.
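To make the “pool of resources as a single machine” idea concrete, the sketch below submits an application to a Mesos-managed cluster through Marathon, a scheduler Mesosphere maintains on top of the Mesos kernel; Mesos then decides which servers in the pool actually run the instances. The endpoint, command and resource figures are illustrative assumptions, not details taken from any deployment described above.

```python
# Minimal sketch: launch an application on a Mesos-managed pool of servers by
# POSTing an app definition to Marathon, a scheduler that runs on top of the
# Mesos kernel. Mesos decides which machines actually run the four instances.
# The endpoint, command and resource figures are illustrative assumptions.
import requests

MARATHON = "http://192.0.2.20:8080"   # hypothetical Marathon endpoint

app = {
    "id": "/demo/web",
    "cmd": "python3 -m http.server 8000",  # any long-running command
    "cpus": 0.25,      # fractional CPUs drawn from the shared resource pool
    "mem": 128,        # megabytes of memory per instance
    "instances": 4,    # Mesos places the four copies anywhere in the cluster
}

resp = requests.post(MARATHON + "/v2/apps", json=app)
resp.raise_for_status()
print("Submitted app:", resp.json().get("id"))
```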
Mesosphere runs on top of the most popular server operating systems and cloud and Platform-as-a-Service environments already in use by enterprises, including OpenStack, Red Hat Linux, CoreOS, CentOS, Ubuntu, Cloud Foundry, OpenShift and similar technologies.
“Web-scale is no longer just the problem of hyper-growth web companies like Google, Twitter and Facebook,” said Florian Leibert, CEO and co-founder of Mesosphere. “Today’s applications have outgrown single-server approaches, and deploying hundreds of containers across thousands of cloud or data center resources without a lot of human intervention or management is the new requirement for the enterprise CIO. Mesosphere is the answer and Ben Hindman is the visionary technologist who saw this when he was a PhD student at Berkeley and then made it a reality at Twitter.”
Mesosphere is also collaborating with Google to bring the startup’s server cluster management software and Google’s open source Docker container management solution to the Google Cloud Platform. | 5:18p |
Understanding the Cost of Delivering a Service Zahl Limbuwala is co-founder and CEO of Romonet.
Throughout the history of the data center, understanding the true costs behind the technology has been an uphill struggle.
Recently, eBay and Facebook announced that they are taking steps to understand data center cost and efficiency. While this shows the issue is gaining attention, it doesn’t mean much to the average data center owner who can’t command the budget to take similar steps.
That said, unless organizations of all sizes and at all levels understand the Total Cost of Ownership (TCO) of their IT estate, they cannot begin to understand the real costs of delivering a service.
Key factors that must be taken into consideration
The increasing demand for IT, whether for business or consumer services, has resulted in more and more data centers springing up worldwide. However, a number of factors are combining into a single tipping point that could undermine these services.
First, IT services are increasingly seen as a commodity, meaning that users expect much the same costs regardless of who provides the service. Second, IT budgets are coming under greater scrutiny, meaning that IT departments need to justify more and more of their expenditure to the CFO. Finally, the pace of technological change has been so rapid that the demands placed on data centers themselves have changed. This means that continual investment is needed to guarantee performance.
Without understanding these factors, data center owners will be unable to ensure that they can justify and use the IT resources at their disposal.
Understanding the contribution from IT
An increased reliance on IT means additional investment in energy, storage and the necessary skills to ensure everything is running smoothly. It also means that IT is taking up more of the budget.
In general this would be fine; however, most organizations cannot identify exactly which business activities their IT spending supports. As long as the company as a whole is profitable, most see no need to identify which of their IT services are consuming the most resources and where money could be better allocated.
Other areas of the business are beginning to take note of this. One recent trend we have seen is that facilities departments are no longer willing to support the IT infrastructure and are asking for power consumption in data centers to be allocated to the IT budget.
Essentially, data center and IT spend can no longer be seen as a single monolithic cost that is separate from the rest of the organization. Instead, it needs to be an integral part of the overall budget and strategy.
Ensuring profitable growth for all
Understanding the TCO of a data center, and how each service operating from it contributes to that total, is key to ensuring that the business as a whole runs profitably. Only in this way can businesses truly understand the cost of delivering a service.
The sad fact is that data center owners have no easy way of predicting the costs of their data center, since they can only go by historic data gained after the fact.
While measurements like Power Usage Effectiveness (PUE) and other data from metering have helped data center operators understand efficiency, they give no real understanding of how this relates to TCO. Unless data center cost and efficiency are aligned with the goals of the business, measurement on its own is useless.
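As a rough illustration of that point (all figures below are invented, not drawn from any operator mentioned here): PUE is simply total facility energy divided by IT equipment energy, so two sites can report an identical PUE while their energy bills, let alone their TCO, differ widely.

```python
# Illustration only (all figures invented): PUE is total facility energy
# divided by IT equipment energy, so two sites can report an identical PUE
# while their energy bills, let alone their TCO, differ widely.

def pue(total_facility_kwh, it_equipment_kwh):
    """Power Usage Effectiveness = total facility energy / IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

sites = {
    # hypothetical annual energy figures (kWh) and local tariffs ($/kWh)
    "Site A": {"total_kwh": 15_000_000, "it_kwh": 10_000_000, "tariff": 0.06},
    "Site B": {"total_kwh": 15_000_000, "it_kwh": 10_000_000, "tariff": 0.14},
}

for name, s in sites.items():
    annual_energy_cost = s["total_kwh"] * s["tariff"]
    print(f"{name}: PUE {pue(s['total_kwh'], s['it_kwh']):.2f}, "
          f"energy cost ${annual_energy_cost:,.0f}/year")
# Both sites report a PUE of 1.50, yet Site B costs more than twice as much
# to power - a gap that PUE alone will never reveal.
```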
This is particularly true when data center owners are considering expanding or modernizing existing operations. The IT department has to justify new spend to a CFO, who may well be sceptical thanks to previous projections falling short of the mark and eroding confidence in any Return On Investment.
The problem is that without the right tools, accurate prediction is impossible, meaning organizations have no idea how a data center will perform until they have already built it. Numerous factors need to be considered such as ambient temperature, the distance to the local power supply, energy costs, taxes, data center design, and the hardware running inside. For example, a data center being built in Norway will have wildly different factors influencing its TCO than one built in Texas.
Eliminating the uncertainty
Data center owners and managers must be able to accurately predict performance of data centers to justify any future financial decisions. This is where predictive modelling comes in.
While organizations can measure how data centers are performing based on factors such as ambient temperature, data center design, cooling, and so on, predictive modelling is concerned with how data centers ‘should’ be performing. By modelling data center performance against these variables, organizations can understand how current performance stands in relation to the goals of their business and what they need to do to get the most from their investment. Organizations can also model a data center while it is still on the drawing board and estimate with confidence how much it will cost the business to operate, before they have allocated budget or committed to any construction.
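The sketch below is a toy version of that kind of pre-construction model, revisiting the Norway-versus-Texas comparison above. It is not Romonet’s methodology and uses no real figures; every coefficient and input is an invented assumption, and a production model would cover many more variables.

```python
# Toy pre-construction cost model in the spirit of the modelling described
# above, revisiting the Norway-versus-Texas comparison. This is NOT Romonet's
# methodology; every coefficient and input is an invented assumption, and a
# real model would cover many more variables (cooling design, taxes, water,
# hardware refresh cycles and so on).

HOURS_PER_YEAR = 8760

def predicted_annual_cost(it_load_kw, design_pue, energy_price_per_kwh,
                          capex, amortization_years, staff_cost_per_year):
    energy_kwh = it_load_kw * design_pue * HOURS_PER_YEAR
    energy_cost = energy_kwh * energy_price_per_kwh
    amortized_capex = capex / amortization_years
    return energy_cost + amortized_capex + staff_cost_per_year

norway = predicted_annual_cost(it_load_kw=2000, design_pue=1.15,
                               energy_price_per_kwh=0.05, capex=40_000_000,
                               amortization_years=15, staff_cost_per_year=1_500_000)
texas = predicted_annual_cost(it_load_kw=2000, design_pue=1.45,
                              energy_price_per_kwh=0.08, capex=30_000_000,
                              amortization_years=15, staff_cost_per_year=1_500_000)
print(f"Predicted annual cost - Norway: ${norway:,.0f}, Texas: ${texas:,.0f}")
```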
To get long-term value out of IT investments, businesses need to optimize all aspects of the data center estate. By understanding how the full architecture operates, data center owners and operators can predict the actual cost of IT decisions and eliminate uncertainty in the process.
The data center of the future
With the right tools organizations can identify where further energy savings can be made, predict performance based on real data, and understand the real costs of delivering a service.
Real understanding of TCO cannot be achieved by reactive methods. Organizations need to take proactive steps toward understanding the cost of operating their data centers and how this relates to TCO.
As the commoditization of IT continues and the data center market inevitably follows the same path, the operators that survive will be those that understand how their investment decisions will affect the wider business, and which of those decisions will come to fruition and bring them success in the future.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. | 5:54p |
Report: Microsoft to Launch South Korea Data Center Microsoft is close to a deal with government officials in Busan, South Korea, to build a data center in the country’s largest port city, the Korea Herald reported, citing anonymous government and industry sources.
Company CEO Satya Nadella is due to visit Busan on Tuesday, where he plans to meet with government officials to talk about joint investment plans in the country. The company is expected to sign a deal for establishing a Microsoft data center in the city in conjunction with his visit, the report said.
As it continues pursuing a leadership position in the worldwide cloud services market, Microsoft has been expanding data center capacity in the U.S. and abroad. High-growth Asian markets have been a particular focus for Microsoft and its competitors.
By either establishing fully fledged cloud data centers in major metropolitan areas or offering private network connectivity to its Azure cloud from colocation facilities, Microsoft improves the performance of its cloud services for local users.
Korean investment expected to be huge
The Herald cited an earlier statement issued by the office of Korean president Park Geun-hye, saying the data center project would result in $5.2 billion of investment in the country. Microsoft is planning to invest close to $450 million in the project over the first five years.
A site for the future Microsoft data center has already been identified. The facility will be built on an approximately 1.8 million-square-foot site near an existing data center operated by LG CNS, an IT services subsidiary of the Korean multinational LG Corp.
Microsoft representatives did not respond to a request for comment in time for publication.
Expanding in Asia and Europe
The most recent expansion of Microsoft’s Azure cloud infrastructure in Asia was announced in July. The company added private network connectivity services to Azure in Equinix data centers in Hong Kong and Singapore.
On Monday, the company also announced that Azure HDInsight (Hadoop delivered as a public cloud service) was available in public preview to customers in China.
Microsoft is considering more data center capacity expansion in Europe as well. Earlier this month, German news outlet Der Tagesspiegel reported that the company was considering building a data center in Germany. | 6:51p |
Digital Realty Launches Dublin Data Center Digital Realty Trust has brought online the first phase of its new four-building Profile Park data center campus in Dublin, Ireland.
The first phase has about 100,000 square feet of raised floor and 2 megawatts of critical power capacity. It is the only data center in Ireland with a Tier III design document certification by the Uptime Institute.
Over the years, Dublin has become one of the top destinations for massive data center construction projects. A major metropolis, it is close to mainland Europe and offers low taxes and a cool climate, attractive for data center operators who want to use free cooling.
Digital Realty’s EMEA and Asia Pacific general manager Bernard Geoghegan boasted about the new facility’s efficient cooling system during a recent launch reception for the site. “It also features an indirect air optimization system, which improves our clients’ data center energy efficiency by lowering PUE (Power Usage Effectiveness) and therefore their total cost of occupancy,” he said in a statement.
Microsoft, Google and Amazon have all built data centers in Dublin. Among data center providers besides Digital Realty, Interxion, TelecityGroup and SunGard operate in the market.
Digital Realty has not yet secured a tenant for the Profile Park facility, according to a spokesperson.
Site plan of Digital Realty’s Profile Park campus in Dublin. First phase of the four-building data center development is already online. (Image: Digital Realty Trust)
The San Francisco-based company has three more data centers in Dublin: a 120,000-square-foot multi-tenant facility in the Blanchardstown Corporate Park, a single-tenant 20,000-square-foot facility in the International Exchange Building and a 124,500-square-foot single-tenant data center in Clonshaugh.
Implementing an ambitious turnaround plan, Digital Realty has been selective about adding locations this year. The company has been focused on streamlining its massive global real estate portfolio by divesting non-core, underperforming properties and filling remaining space in other facilities. | 7:59p |
The Pirate Bay Spreads Infrastructure Across 21 VMs to Prevent Downtime 
This article originally appeared at The WHIR
Two years after switching its entire infrastructure to the cloud, The Pirate Bay now uses 21 virtual machines hosted at different providers to run its website, according to a report by TorrentFreak on Sunday.
A steady increase in traffic has forced The Pirate Bay to add four virtual machines over the past two years. Last year, torrent uploads to the website increased by 50 percent, with video being the most popular file type.
Eight of the VMs are used for serving the web pages while six other machines are used for searches.
According to the report, the site’s databases run on two VMs, and the remaining five VMs are used for load balancing, statistics, torrent storage and more.
The storage capacity is 620GB, which, as TorrentFreak points out, is relatively low considering the size of the site.
According to the report, The Pirate Bay hosts its VMs with “commercial cloud hosting providers, who have no clue that The Pirate Bay is among their customers. All traffic goes through the load balancer, which masks what the other VMs are doing. This also means that none of the IP-addresses of the cloud hosting providers are publicly linked to TPB.”
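TorrentFreak’s report does not describe the load balancer itself, so the sketch below only illustrates the general pattern it points to: a single public-facing proxy forwards requests to backend web VMs whose private addresses never appear in client traffic or public DNS. All addresses and ports are invented for the example.

```python
# Illustration only: a toy reverse proxy showing the general pattern the
# report describes - one public-facing front end that forwards requests to
# backend web VMs whose private addresses never appear in client traffic.
# The Pirate Bay has not published its setup; all addresses are invented.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Private addresses of the backend web VMs (hypothetical).
BACKENDS = ["http://10.0.0.11:8000", "http://10.0.0.12:8000"]
_round_robin = itertools.cycle(BACKENDS)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(_round_robin)            # simple round-robin balancing
        with urllib.request.urlopen(backend + self.path) as upstream:
            body = upstream.read()
            content_type = upstream.headers.get("Content-Type", "text/html")
        self.send_response(200)
        self.send_header("Content-Type", content_type)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)                  # clients only ever see the proxy

if __name__ == "__main__":
    # Only this listener is publicly exposed; the backends stay private.
    HTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()
```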
Because The Pirate Bay’s infrastructure is spread across providers, individual cloud servers can be disconnected, yet it would be relatively easy for the site to continue operating if that happened.
One of the bigger challenges for The Pirate Bay has been domains that have been seized by governments. Last year, The Pirate Bay had to switch its domain to an .AC ccTLD after pressure from Dutch anti-piracy group BREIN forced the domain registry of Sint Maarten to seize its .SX domain.
Because of this, The Pirate Bay has domains on standby so it can quickly switch should its domain be seized again.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/pirate-bay-spreads-infrastructure-across-21-virtual-machines-prevent-downtime | 8:30p |
Indian IT Services Firm Infosys Partners with Huawei to Deliver Enterprise Cloud Services 
This article originally appeared at The WHIR
Indian IT services provider Infosys has inked a deal with Huawei Technologies in order to bring cloud computing and big data services to its enterprise customers.
According to an announcement by the companies on Thursday, the agreement will see Infosys and Huawei jointly develop cloud, big data and communication solutions that bring together Huawei’s cloud infrastructure with Infosys’ service expertise.
The partnership comes as Huawei has committed to growing its revenue in the enterprise space. In March, Huawei set its sights on $10 billion in revenue from its enterprise division by 2017, indicating that cloud computing partnerships would help it achieve this goal.
Huawei and Infosys will build reference architectures for big data platforms on Huawei infrastructure for joint go-to-market efforts, according to a statement. Huawei’s customer contact technologies will also be integrated with communication services from Infosys. The companies will set up a joint lab in China to “enable better delivery in all areas of the partnership.”
“Infosys is focused on helping customers succeed with software and services to both grow their business in new ways and to achieve operational efficiencies,” Infosys CEO Dr. Vishal Sikka said. “We are really excited to partner with Huawei to bring the power of Huawei’s cloud infrastructure and communication technologies to our joint customers, so that they can efficiently achieve this dual objective.”
Infosys also announced extended partnership agreements with Microsoft and Hitachi to help enterprise customers migrate to Microsoft Azure public cloud and offer Hitachi data center transformation solutions, respectively.
Targeting the enterprise cloud opportunity has brought many companies together to develop joint solutions. Recently, IBM and Apple joined forces to develop mobile and cloud solutions for enterprises.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/indian-services-firm-infosys-partners-huawei-deliver-enterprise-cloud-services | 8:57p |
Report: Mysterious Espirit LLC Buys Former Condé Nast Data Center in Delaware The Delaware data center recently vacated by Advance Publications, a media company that owns numerous publishing outfits, including Condé Nast, has been sold to a mysterious company called Espirit LLC, The News Journal reported, citing property records.
The building was sold in August, and the news outfit estimated that the sale price was about $3 million, based on the amount of taxes the seller paid on the deal disclosed in New Castle County records.
Not much is known about Espirit, other than the fact that it is incorporated in Delaware. But Delaware, because of its regulatory environment, is the go-to state for incorporation in the U.S.
Condé Nast moved out of the data center and moved its entire infrastructure onto the Amazon Web Services cloud. The publisher puts out some of the most popular magazines in the country, such as Vanity Fair, Vogue, The New Yorker and Wired, and owns a number of major web properties, including Ars Technica and Reddit.
The Newark data center is about 70,000 square feet. Condé Nast CTO Joe Simon told CIO.com in July that the building had already been sold.
According to records reviewed by The News Journal, however, it was transferred to new owners in late August.
See an earlier Data Center Knowledge post that features a video of Condé Nast data center techs packing up servers during the move. | 9:00p |
Ericsson Buys Majority Stake in Ex-VMware CTO’s Startup Apcera Ericsson has acquired a majority stake in Apcera, a startup founded by former VMware CTO Derek Collison, which also unveiled its policy-driven enterprise application platform, Continuum, today. The platform, aimed at large enterprise IT shops, deploys, orchestrates and governs a diverse set of workloads, either on premises or in the cloud, assuring enterprise security and policy compliance from the start.
Ericsson’s investment will fund future operations and fuel growth. The size of the all-cash deal, expected to close in the fourth quarter, was not disclosed.
Apcera will retain its name and operate as a standalone company. Collison will remain CEO. The company plans to accelerate hiring immediately.
Collison, a former Google and VMware executive, founded Apcera in 2012 with investment from True Ventures, Kleiner Perkins Caufield & Byers, Rakuten, Andreessen Horowitz and Data Collective.
Policy-driven deployment platform
Apcera’s enterprise application platform aims to bridge the divide between developers and IT operations organizations. It speeds up deployment while integrating policy and security from the start.
“We believe the opportunity around the policy-IT-driven platform is accelerating to layers never seen before,” said Collison. “Continuum is meant and personally built to fill that gap. The three legs of the stool we provide are the ability to deploy, orchestrate and govern.”
During his time as VMware CTO (2009 to 2012) Collison played a key role in designing Cloud Foundry, the open source Platform-as-a-Service. While doing that, he saw an opportunity to go beyond traditional PaaS, he said.
His vision was to build a modern platform capable of deploying a diverse set of workloads from basic operating system to greenfield application and everything in between. “Any type of technology that is massively adopted needs to be extendable. Our technology is set up to provide that,” he said.
Continuum enables deployment of different types of workloads while presenting the proper layer of abstraction for each. “What we saw early on was that everyone is struggling to use assets more efficiently,” Collison said. “The notions of governance and compliance were being left behind, or people were band-aiding the problem.”
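The article does not detail Continuum’s policy syntax, so the sketch below is only a generic illustration of the “govern” leg Collison describes: declared policy is evaluated before a workload is admitted for deployment. None of it is Apcera code, and every rule and field is invented.

```python
# Generic illustration of the "govern" leg Collison describes: declared policy
# is evaluated before a workload is admitted for deployment. This is NOT
# Continuum's policy language or API - the article does not describe Apcera's
# syntax - and every rule and field below is invented.

POLICIES = {
    "allowed_regions": {"us-east", "eu-west"},  # where workloads may run
    "require_tls": True,                        # encryption in transit
    "max_memory_mb": 4096,                      # per-workload resource cap
}

def admit(workload):
    """Return True only if the workload satisfies every declared policy."""
    if workload["region"] not in POLICIES["allowed_regions"]:
        return False
    if POLICIES["require_tls"] and not workload.get("tls", False):
        return False
    if workload["memory_mb"] > POLICIES["max_memory_mb"]:
        return False
    return True

workload = {"name": "billing-api", "region": "us-east",
            "memory_mb": 2048, "tls": True}
print("deploy" if admit(workload) else "reject")
```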