Data Center Knowledge | News and analysis for the data center industry
Thursday, August 28th, 2014
11:30a
CentriLogic Acquires US Virgin Islands Data Center Provider Adveniat

Cloud, managed hosting and colocation provider CentriLogic has acquired U.S. Virgin Islands data center and hosting provider Adveniat. The deal represents CentriLogic’s third acquisition in the past 18 months. Terms of the deal were not disclosed.
CentriLogic has nine data centers in seven markets across Canada, the United States, the United Kingdom, Hong Kong and now the U.S. Virgin Islands. The acquisition of Adveniat and its data center footprint allows CentriLogic to better serve customers throughout the Caribbean and the Southern Hemisphere.
CentriLogic is currently serving customers within a section of this facility. Over the next two years, the plan is to expand the footprint within the facility by 10,000 square feet in two 5,000 square foot increments.
The total size of the data center is approximately 30,000 square feet. The facility has two redundant groups of four 650 kVA power generation plants and 36 hours of independent battery backup.
The Virgin Islands is an attractive hosting destination for e-commerce because it offers up to a 90 percent reduction in income tax liability on qualifying revenues, under incentives harmonized with U.S. Treasury Department regulations.
The U.S. Virgin Islands has a large concentration of unused bandwidth. The area connects South America, Central America and the Caribbean to North America and Europe.
Adveniat counts The University of the Virgin Islands Research and Technology Park (RTPark) as a customer. RTPark is an economic development entity chartered by the USVI Government with statutory authority to extend tax incentives to qualifying knowledge-based and e-commerce businesses. Customers can host in the USVI data center while also having the opportunity through RTPark to qualify for income tax reductions on revenues generated from e-commerce and digital media activities in the USVI.
“With a state-of-the-art data center and fiber connectivity in the USVI, we can now directly support customers who are looking for both reliable managed hosting solutions and significant income tax savings,” said Robert Offley, CentriLogic’s president and CEO. “We are also motivated to help fulfill the RTPark’s goal of attracting competitive and innovative e-commerce business to the region and helping develop the economies of its USVI Territories.”
CentriLogic is headquartered in Toronto, Ontario, Canada, and Rochester, New York. It previously acquired Dacentec and its 23,000 square foot data center in North Carolina, as well as Canadian managed hosting provider The Capris Group. The company earmarked $40 million for acquisitions in 2013 and has been rolling up small providers with big potential.
12:00p
VMware Boosts HP and VCE Partnerships to Grow Software-Defined Data Centers and Hybrid Clouds

VMware began its 2014 VMworld conference last Sunday with a day dedicated to partners, placing HP and VCE in the spotlight with expanded partnerships. The partnerships aim to speed adoption of the software-defined data center and hybrid clouds.
The announcements highlight refreshed strategies for integrating with new VMware product offerings and for empowering businesses with open, secure and agile software-defined technologies.
Hewlett Packard
VMware and HP announced an expansion of their partnership and new collaborative development initiatives focused on the software-defined data center and hybrid cloud. The two longtime partners say they hope the efforts will drive customer adoption of technologies in both areas, providing customers with flexibility and choice through support for open frameworks, as well as choice in the underlying data center infrastructure.
The companies also announced a joint federated networking solution that they say will provide a centralized view of, and automation and visibility into, physical and virtual data center networks. Specifically, the solution federates the HP Virtual Application Networks SDN Controller with the VMware NSX network virtualization platform.
HP ramped up its enterprise cloud efforts this year with a $1 billion investment in its Helion cloud services portfolio, which is underpinned by OpenStack. It also recently launched new open standards-based software-defined networking solutions to give cloud providers network-as-a-service offerings. To help simplify and push adoption of OpenStack, HP said it will support VMware vSphere in a future edition of its Helion OpenStack commercial distribution and will support VMware NSX network virtualization.
“VMware believes the software-defined data center is inevitable, and is teaming with HP to drive innovation and simplify adoption of open, software-defined technologies in the enterprise,” said Raghu Raghuram, executive vice president, SDDC Division at VMware. “Customers will be able to combine HP Helion OpenStack with VMware’s enterprise-class infrastructure including VMware vSphere and VMware NSX to achieve production-grade OpenStack deployments. Our integrated network virtualization and SDN solution will help customers achieve a completely new operational model for networking that enables data center operators to achieve orders of magnitude better agility and improved economics.”
VCE
VMware also announced an expansion of its partnership with VCE, including new collaborative development initiatives for delivering integrated management solutions for the hybrid cloud.
VCE, the nearly five-year-old alliance of VMware, EMC and Cisco, has evolved over the years but has always focused on converged infrastructure and cloud-based computing models. Under the expanded partnership, VCE will integrate its Vblock Systems with VMware vCloud Automation Center and VMware vCenter Operations Suite, and provide technology onramps to VMware vCloud Air.
The companies say the expanded partnership will join Vblock Systems with VMware vRealize cloud management software in a single system, giving customers the option to order Vblock Systems pre-installed with VMware vCloud Automation Center and vCenter Operations Suite, ready for vCloud Air. This gives customers a private cloud that is hybrid-ready, able to migrate workloads to VMware vCloud Air out of the box.
“VCE and VMware are pioneers in delivering exceptional value to customers looking to get the most from their data center investments,” said Todd Pavone, executive vice president, product strategy and development at VCE. “As the industry continues to shift toward hybrid cloud computing models, we have a significant opportunity to help customers bridge their Vblock-based private cloud environments to VMware vCloud Air with simplified, built-in VMware cloud management and onramps that enable seamless workload migration between clouds. We’re excited to further expand our relationship with VMware, and we believe this new collaboration can unify the tremendous benefits of converged infrastructure and the public cloud, streamlining data center operations and disaster recovery for our mutual customers.”
12:32p
Nutanix Raises $140 Million at $2 Billion Valuation

Web-scale converged infrastructure provider Nutanix announced that it has closed a $140 million Series E financing round, bringing its total funding to date to $312 million. The latest round was led by two unnamed premier Boston-based public market investors. Funds will be used to invest in sales, research and development, customer support and marketing.
CEO and co-founder Dheeraj Pandey feels that Nutanix is at war with EMC and others, and that the money raised will add not just technology muscle but also world-class sales, marketing, distribution and packaging muscle. EMC’s VMware validated the opportunity and the enterprise’s progression toward converged infrastructure with its EVO: RAIL hyper-converged infrastructure offering this week. Nutanix is the disruptor, and a popular one, in what it sees as a more than $50 billion addressable market.
The financing round implies a valuation of more than $2 billion for the five-year-old startup, which also completed a $101 million funding round at the start of this year. At the time, Pandey said that round would be the last private funding before an IPO. In a blog post Wednesday, however, he said the company had raised “an IPO-like amount, at an IPO-like valuation, in a private round with institutional investors who typically buy at IPO time.”
“The convergence of servers, storage and networking in the datacenter has created one of the largest business opportunities in enterprise technology, and Nutanix is at the epicenter of this transformation,” said Dheeraj Pandey. “We are proud of the progress we have made, and are confident in capitalizing on the enormous opportunity that lies ahead of us. We recognize the importance of building relationships with leading public market investors, and are honored to welcome them as partners in driving the long-term success of our Company.”
The tremendous success and international growth the company has seen this year validate the large market opportunity Nutanix is capitalizing on. Its approach to converged infrastructure is centered on intelligent software that ties together the various infrastructure layers and helps drive efficiency, flexibility and speed. Nutanix was granted a U.S. patent earlier this year for its software-defined storage architecture. The company also recently signed an OEM agreement with Dell to build the XC Web-scale Converged Appliance.
1:23p
KDDI Investing $270M on Two More TELEHOUSE Data Centers in Japan

Japanese telecom KDDI is investing $270 million to build two new TELEHOUSE data centers in Tokyo and Osaka. The Tokyo facility is expected to open in August 2015 and the Osaka facility in February 2016.
These additional facilities will take the total amount of global TELEHOUSE data center space to 371,000 square meters (3.99 million square feet) provided by 46 sites across 13 countries/territories and 24 major cities.
The facilities are in the two most popular data center hubs in Japan. Both facilities will offer high-density colocation services. The company said it is building to meet demand for housing enterprise private clouds as well as public cloud service providers. KDDI looks to accommodate leading multinationals expanding their portfolio in Japan.
The Osaka facility will be a 20-story building in the center of the city. Osaka is Japan’s second-largest economic center after Tokyo. Many Tokyo-based companies have branches in the Osaka area, and it is a popular site for disaster recovery and backup as well. The new data center will offer 700 racks’ worth of space.
Fellow telecom giant NTT has been building out its footprint in Osaka and Equinix is also in the area. Osaka has seen increased activity following the 2011 earthquake, as it is far south of the area affected.
The Tokyo data center is on an existing high-security Tama data center campus, located about 18.5 miles from the city center. The data center will offer 1,300 racks’ worth of space.
KDDI continues to pump money into the TELEHOUSE colocation brand. “The cornerstone of our Global Strategy is the ICT solutions business, centered on ‘TELEHOUSE,’” wrote KDDI President Takashi Tanaka. “We are positioning business in emerging economies and Asian consumer markets as the engine for expanding our scale of operations.”
KDDI recently made an approximately $230 million investment in the London Docklands, one of the UK’s most important telecommunications hubs.
2:00p
The Road to the Software-Defined Data Center

New virtual services are sweeping the modern data center and exciting data center administrators. Imagine having the ability to abstract vast amounts of resources and manage heterogeneous environments from one logical controller. New infrastructure components no longer care what type of hardware you’re using; they care about how you present resources. Software-defined technologies have come a long way, and these logical systems can introduce powerful optimizations for the data center platform as well as your business model.
But here’s the big question: how do you get there? What do you need to deploy to reach the SDDC state? The good news is that some of the latest technologies surrounding network, storage and compute are now making it easier to become a software-defined data center.
- Storage. Software-defined storage is a very real concept and technology. A number of solutions now let you completely abstract storage resources and point them to a virtual layer. Say you’re running the VMware vSphere hypervisor: you can deploy VMware Virtual SAN, which pools resources and lets you create a persistent storage tier at the hypervisor layer. Or you can use a third-party technology like Atlantis USX, integrating it with your hypervisor to pool SAN, NAS, RAM and any type of DAS (SSD, flash, SAS).
- Network. The way we process and control network traffic has come a long way, and we can now scale massive data centers with complex network routes and policies. The incorporation of software-defined networking (SDN) brings the cloud and data center conversation to a new level. VMware’s NSX solution, for example, allows you to provision, snapshot, delete and restore complex networks, and it can integrate with your existing network architecture, giving you complete network control from a VMware management platform (see the API sketch after this list). Cisco’s Application Centric Infrastructure (ACI) takes the concept of SDN to a whole new level: if you’re running a Cisco architecture, you can create application-level policies for both virtual and physical workloads, as well as automated security policies that span logical, physical and cloud networks.
- Compute. This is an interesting stop on the road. How can you software-define the compute layer? You’ll always have the hardware aspect; the big difference revolves around how the resources living on the hardware are delivered. Cisco’s UCS platform comes very close to creating a commodity-like platform. The idea with UCS is to create granular service and hardware profiles, essentially making blades and rack-mount servers interchangeable pieces within the architecture. You can have a chassis working until 8 p.m., at which point a policy kicks in to provision a new hardware profile to support an incoming set of users from a different time zone. From an administrative perspective, nothing needed to happen manually: hardware was dynamically reassigned, and the compute layer was powered by software-defined policies (a toy model of this idea follows the list).
- Data center. VMware has taken an interesting approach to creating a true software-defined data center. Its logic is to aggregate all data center processes and let the hypervisor and the appropriate management consoles control resources. Through network, storage, compute and management layers, the SDDC model allows administrators to control all aspects of a next-generation cloud data center. From within the management console, SDDC operations can include automated management, policy-driven services and better business-aware technology controls. The interesting piece is that through VMware’s platform you can seamlessly expand this from a private cloud into a hybrid cloud environment.
- Cloud automation and management. Several technologies let you take your data center platform and expand it easily into the cloud. Data center platforms often have very heterogeneous systems supporting complex application and business processes. Automation and management systems like OpenStack, CloudStack and Eucalyptus each offer ways to seamlessly expand data center resources into the cloud. These control layers allow the management of virtual machines, data points and entire IaaS models for your organization; this is the software layer that truly lets you manage your cloud architecture (a minimal provisioning sketch also follows this list).
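To make the networking item concrete, here is a minimal sketch of driving NSX-style network provisioning over a REST API from Python. It is illustrative only: the manager address and credentials are placeholders, and the endpoint path and XML payload are assumptions rather than VMware’s documented interface, so consult the NSX API guide before using anything like it.

```python
# Minimal sketch: creating a logical switch through an NSX-style REST API.
# The URL path and XML payload are illustrative assumptions, not the
# documented NSX API; check VMware's API guide for the real calls.
import requests

NSX_MANAGER = "https://nsx-manager.example.com"  # hypothetical manager address
AUTH = ("admin", "password")                     # placeholder credentials

def create_logical_switch(name, transport_zone_id):
    """POST a logical-switch definition to the (assumed) NSX endpoint."""
    payload = (
        "<virtualWireCreateSpec>"
        f"<name>{name}</name>"
        "<tenantId>demo</tenantId>"
        "</virtualWireCreateSpec>"
    )
    resp = requests.post(
        f"{NSX_MANAGER}/api/2.0/vdn/scopes/{transport_zone_id}/virtualwires",
        data=payload,
        headers={"Content-Type": "application/xml"},
        auth=AUTH,
        verify=False,  # lab-only: tolerate a self-signed certificate
    )
    resp.raise_for_status()
    return resp.text  # the identifier of the new logical switch

if __name__ == "__main__":
    print(create_logical_switch("web-tier", "vdnscope-1"))
```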
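The compute item is harder to show against a real API, so here is a toy Python model of the scheduling idea only: a policy function picks a hardware profile by time of day, and an apply step stands in for whatever the vendor’s management interface would actually do. Nothing here is the Cisco UCS API.

```python
# Toy model of the compute idea above: a policy reassigns a hardware profile
# to a chassis on a schedule, so blades become interchangeable capacity.
# This models the concept only; it is not the Cisco UCS API.
from datetime import datetime, time

PROFILES = {
    "day-shift":   {"vlan": 10, "boot": "esxi-cluster-a"},
    "night-shift": {"vlan": 20, "boot": "esxi-cluster-b"},
}

def profile_for(now: datetime) -> str:
    """After 8 p.m., serve the incoming time zone with a different profile."""
    return "night-shift" if now.time() >= time(20, 0) else "day-shift"

def apply_profile(chassis: str, name: str) -> None:
    # In a real system this call would reconfigure the hardware through the
    # vendor's management API; here we just record the intended state.
    print(f"{chassis}: applying profile {name!r} -> {PROFILES[name]}")

apply_profile("chassis-1", profile_for(datetime.now()))
```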
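Finally, for cloud automation, a minimal provisioning sketch using python-novaclient, the OpenStack compute client library. It assumes a reachable Keystone endpoint; the credentials, image and flavor names are placeholders.

```python
# Minimal sketch: booting a VM through OpenStack's compute API with
# python-novaclient. Credentials, endpoint and resource names are placeholders.
from novaclient import client

nova = client.Client(
    "2",                                       # compute API version
    "demo-user", "demo-password",              # placeholder credentials
    "demo-project",
    "http://keystone.example.com:5000/v2.0",   # hypothetical auth endpoint
)

# Pick an image and flavor, then ask Nova to schedule the instance.
image = nova.images.find(name="ubuntu-14.04")
flavor = nova.flavors.find(name="m1.small")
server = nova.servers.create(name="sddc-demo", image=image, flavor=flavor)
print(server.id, server.status)
```

The same pattern – authenticate, pick resources, ask the control layer to schedule them – is what systems like CloudStack and Eucalyptus expose through their own APIs.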
Putting it all together
Here’s the reality: there’s no one recipe for becoming a software-defined data center. Bits and pieces of virtual services and code can be deployed together or separately to achieve optimal data center performance. Many organizations are taking a leisurely stroll on the path to a more logically controlled data center. The beauty is that these technologies allow traditional data center technologies to live in parallel with next-generation SDDC platforms. In some cases it’s smart to start with just storage or just networking. Identify specific points of need within your organization and begin to apply SDDC technologies there. You’ll quickly notice that management becomes simpler and you regain control over quite a few resources. Furthermore, software-defined technologies make it much easier to interconnect with various cloud models.
Data center technologies will continue to evolve as hardware and software platforms become more interconnected. The logical aspect of the data center allows modern organizations to truly span their environments and connect with a variety of cloud resources. It’s no wonder the hybrid cloud model is becoming so prevalent. New services allow private data centers to expand directly into the cloud, all at the virtual control layer. When you bring network, storage and compute together on a virtual plane, you begin to see just how far you can take your own data center platform.
2:30p
Software-Defined Data Centers: What Lies Ahead?

The “software-defined” term, applied as a modifier to data centers, networks or storage, is growing in popularity. Software-defined solutions, like virtualization before them, can allow for a great deal of flexibility and efficiency in shared resources. The potential is great, but of course there are risks, too.
The software-defined concept is not complex: one must simplify switching, transport, storage and related infrastructure hardware, and then move “command and control” up to the application and services layer, according to Art Meierdirk, senior director of business services at INOC. An industry veteran with more than 35 years of telecommunications and data communications experience, Meierdirk will moderate a panel titled “Software Defined Data Centers – Next Steps for Storage and Networking” at Orlando Data Center World in October.
Data Center Knowledge asked him a few questions about software-defined data centers (SDDC), including software-defined storage (SDS) and software-defined networking (SDN).
There are many flavors to the set-up, especially when it comes to who maintains the software-defined system. “The command and control (automated control software) can be maintained by one or more entities such as network, data center or cloud services providers, and applications or service providers, in addition to enterprise businesses,” he noted. “All control some or all aspects of a business solution.”
Software-Defined Data Center Advantages and Drawbacks
Meierdirk said, “The software-defined data center is an environment in which all infrastructure is virtualized and delivered as a service and the control of this datacenter is entirely automated by software.” There are multiple advantages to using this approach.
He outlined unprecedented capabilities for business services, such as:
- More effective use of the IT and data center infrastructure, which reduces costs.
- Flexibility, bringing rapid deployment of new services with a shorter time to value.
- A standards-based architecture that avoids single original equipment manufacturer (OEM) limitations.
- A redundant, distributed and diverse architecture for business continuity.
- More control for businesses over all infrastructure, including interconnectivity.
- More complete integration of network, facilities and IT infrastructures.
“Yet,” he cautioned, “We are looking at ‘bleeding edge’ opportunities, the balance between pushing new opportunities for business with the risk of failed deployments.”
Issues could include:
- New security risks; these may be overstated, but the concern must be addressed.
- Proprietary implementations by original equipment manufacturers (OEMs) can delay standards.
- Standards and interoperability remain in a state of flux, which can paralyze investment.
- Considerable investment in legacy equipment raises questions about how to manage a new deployment.
- Software-defined solutions and virtualization are very new and complex; there will be a significant learning curve for deployment and support.
- For some regulated businesses, handing off network control and data storage/processing may not be allowed.
What Lies Ahead for this Trend?
“Perhaps in the future, ‘business-focused virtualization’ could allow the enterprise business to control its IT infrastructure, remote locations, data and computing, as well as interactions with its other service providers and customers, over a virtual solution for all aspects of its business,” Meierdirk said.
A comprehensive solution such as this requires software control, provided either by the enterprise itself or by another provider (such as a data center operator) as an overlay on top of carrier transport, with switching and routing controlled by the enterprise. “It is an exciting possibility and could be very fertile ground for an expansion of data center services,” he said.
In the future, data center infrastructure management (DCIM) offers several opportunities for software control:
- Capacity Monitoring and Management.
- Power – using software to monitor and manage power utilization and balancing, and scheduling optional operations to match energy costs to the best (peak vs. non-peak) hours.
- HVAC – using software to monitor and balance loads with the aim of reducing heating or cooling costs.
- Hardware – turning resources up or down as needed.
- Application/performance management and optimization – using software that interacts with applications to monitor performance and make network or application selections that improve it.
- Dynamic least-cost configuration for storage, computing, access and transport – using software to compare and select the best options in a dynamic environment (a toy selection sketch follows this list).
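As a rough illustration of that last bullet, here is a toy least-cost selection in Python; the providers, prices and latency figures are invented, and a real DCIM system would of course weigh far more variables.

```python
# Toy sketch of dynamic least-cost selection: given current (hypothetical)
# quotes for a unit of storage from several providers, pick the cheapest
# option that satisfies a latency constraint. All numbers are invented.
from dataclasses import dataclass

@dataclass
class StorageQuote:
    provider: str
    cost_per_gb_month: float   # dollars per GB-month
    latency_ms: float          # expected access latency

def pick_least_cost(quotes, max_latency_ms):
    """Return the cheapest quote meeting the latency requirement, or None."""
    eligible = [q for q in quotes if q.latency_ms <= max_latency_ms]
    return min(eligible, key=lambda q: q.cost_per_gb_month, default=None)

quotes = [
    StorageQuote("on-prem SAN", 0.12, 2.0),
    StorageQuote("cloud block storage", 0.05, 8.0),
    StorageQuote("cloud object storage", 0.02, 40.0),
]
best = pick_least_cost(quotes, max_latency_ms=10.0)
print(best.provider if best else "no option meets the constraint")
```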
Find out more on The Software Defined Data Center
Want to learn more? Attend the panel on “Software Defined Data Centers – Next Steps for Storage and Networking” at Orlando Data Center World, or dive into any of the other 20 topical trend sessions curated by Data Center Knowledge at the event. Visit our previous post on cooling, Cooling Trends: Innovative Economization Increases ROI.
Check out the conference details and register at the Orlando Data Center World conference page.
4:05p
Google Makes Two Big Cloud Transparency Moves

Google has made an independent security audit report and a compliance certification available to the public in order to assuage customer concerns over data protection in its cloud. The company has also started testing a status page for all services on the Cloud Platform.
Both documents are available on the company’s Enterprise Security site. The SOC 3 Type II audit report and updated ISO 27001 certification are meant to address security concerns over its Cloud Platform as well as Google Apps for Business and Education. A SOC 3 Type II audit examines controls at a service organization relevant to security, availability, processing integrity, confidentiality or privacy.
This is the first time the company has made details of an independent security audit and a security compliance certification available to the public. Its public cloud competitors Microsoft Azure and Amazon Web Services (AWS) have already made such documents available, and both have status pages for their clouds.
Such audits and certifications are ubiquitous in the data center industry. SOC 2 examines the details of data center testing and operational effectiveness, while SOC 3 is intended for public use and is the highest level of certification. A SOC 2 report includes the auditor’s testing and results, while a SOC 3 report provides a system description and the auditor’s opinion.
Cloud competitors AWS and Microsoft Azure both offer various security reports, including SOC 2 Type II and SOC 3 audits. Google is playing catch-up in this regard; then again, its cloud is also the newest entrant.
Google is also testing a status page for its cloud, joining the existing status pages for its Apps and its Platform as a Service. The status page is still experimental, but it will boost the cloud’s appeal once it formally launches. Status pages reflect availability across services and are key to good transparency.
It’s also key to host a status page outside of the cloud infrastructure itself – a lesson Salesforce.com learned in the mid-2000s when both its CRM service and its status page went down in the same outage.
Both AWS and Microsoft Azure have status pages. The major public cloud providers are increasing transparency on the whole to win over business.
4:50p
Microsoft Azure Cloud Integrates Kubernetes and Docker

Microsoft announced today that Kubernetes can be used to manage Docker containers on Microsoft Azure. The Azure team has also released the Azure Kubernetes Visualizer project, which makes it easier to experiment with and learn Kubernetes on Azure.
Microsoft Open Technologies promised to bring Docker and Kubernetes support to Microsoft Azure in July. Kubernetes helps manage the deployment of Docker workloads; Docker is an open-source engine that automates the deployment of any application as a portable, self-sufficient container that will run almost anywhere, including data centers and clouds. The combination of the two provides a commercial-grade, highly available and production-ready compute fabric. Docker and Kubernetes were also recently integrated into Google’s cloud.
As a result of Open Technologies’ work, it is now possible to use a single tool to manage Docker container clusters on Azure. With these latest contributions to the Kubernetes toolset, developers can transparently deploy and manage container clusters on Azure.
Key features included in this work:
- Build a container and publish it to Azure Storage
- Deploy an Azure cluster using container images from Azure Storage or the Docker Hub
- Configure an Azure cluster
- Update the Kubernetes application on an existing cluster
- Tear down an Azure cluster
The Visualizer project was built during a Microsoft hackathon.
“Our project was to build a visualization system for what Kubernetes is doing when managing Docker on Azure,” writes Madhan Arumugam Ramakrishnan, principal lead program manager, Azure Compute Runtime. “The goal was to visually demonstrate some Docker and Kubernetes concepts such as containers, pods, labels, minions, and replication controllers.”
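To ground the terms in that quote, here is a minimal sketch of the kind of object Kubernetes manages: a replication controller that keeps two labeled nginx pods running, posted to an API server from Python. The manifest shape follows the later v1 Kubernetes API rather than the beta API of this era, the server address is a placeholder, and a real cluster would require authentication.

```python
# Minimal sketch: asking a Kubernetes API server to keep two replicas of a
# labeled nginx pod running. Manifest shape follows the later v1 API; the
# server address is a placeholder and real clusters require authentication.
import json
import requests

API_SERVER = "http://kube-master.example.com:8080"  # hypothetical, no auth

controller = {
    "kind": "ReplicationController",
    "apiVersion": "v1",
    "metadata": {"name": "web-rc"},
    "spec": {
        "replicas": 2,                   # desired pod count
        "selector": {"app": "web"},      # label that ties pods to the controller
        "template": {                    # pod template the controller stamps out
            "metadata": {"labels": {"app": "web"}},
            "spec": {
                "containers": [
                    {"name": "web", "image": "nginx",
                     "ports": [{"containerPort": 80}]}
                ]
            },
        },
    },
}

resp = requests.post(
    f"{API_SERVER}/api/v1/namespaces/default/replicationcontrollers",
    data=json.dumps(controller),
    headers={"Content-Type": "application/json"},
)
resp.raise_for_status()
print("created:", resp.json()["metadata"]["name"])
```

Relationships like these – labels binding pods to a controller that schedules them onto minions – are what the Visualizer project sets out to show.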
5:30p
Data Center Jobs: CBRE

At the Data Center Jobs Board, we have a new job listing from CBRE, which is seeking an Owners Representative/Data Center Project Manager in The Dalles, Oregon.
The Owners Representative/Data Center Project Manager is responsible for creating and establishing the master project schedule, based on the critical path and key project milestones, then updating it and enforcing team compliance over the life of the project; creating and establishing the master project budget, including soft costs and hard costs, bonds, insurance, contingencies and allowances; and negotiating contracts, typically in concert with the client’s PM and legal counsel, including the engagement of design consultants and all GC business terms such as mark-ups, fees, insurance, labor rates and escalation. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
8:00p
IO Focusing on Service Providers to Drive Growth

About 25 miles outside the center of London, IO has begun construction on its newest data center. The IT infrastructure provider is building in Slough, the growing data center hub in the western suburbs of London, where it plans to begin operations in early 2015.
The Slough facility, which will serve as IO’s base of operations in Europe, also serves as a major transition point for the company.
“We will not build out any more data centers after London is finished,” said Jason Ferrara, vice president of marketing at IO. “Our go-to-market will be through service providers. It’s an easier way to get into new markets. Service providers and IO are a match made in heaven.”
The shift builds upon the “Powered by IO” strategy, launched in 2012 to build out a global network of partner sites using IO’s modules and data center management software. FORTRUST and CenturyLink are each deploying IO modules at scale to house customers’ IT gear. IO has also lined up key partners in its overseas markets, working with StarHub and TMI in Singapore, and Finnings in the UK.
A footprint across three continents
IO now has a global footprint of company-built data centers, including four facilities in the U.S. – two in the Phoenix market, and sites in New Jersey and Ohio – along with the presence in Singapore and the UK. IO will continue to sell space in these facilities, and work directly with large customers seeking on-site deployments, but will partner with service providers in new markets.
The FORTRUST and CenturyLink partnerships have demonstrated the strength of the IO partner model, and are driving demand for the company’s modular data centers, known as IO.Anywhere. “That’s where we see the uptick,” said Ferrara.
Service providers are an important customer segment for modular data centers. The standardized, factory-built units offer excellent speed to market, cost-effective use of capital, and allow providers to expand incrementally, with predictable costs for each new unit of capacity.
“The just-in-time delivery of the IO.Anywhere data modules enables Fortrust to add capacity in 200kW and 400kW increments within 60 days of customer demand,” said Rob McClary, Vice President of FORTRUST, in describing the success of its partnership.
Why service providers like modular designs
Modular capacity is a particularly attractive tool for service providers seeking to expand into secondary markets. A substantial portion of the third-party data center footprint is centered in six large markets – Silicon Valley, northern Virginia, greater New York, Chicago, Los Angeles and Dallas. These markets have become crowded with competing providers, while demand for Internet services is growing in cities that previously might not have supported the multi-tenant data center model. As a result, more providers are looking to new geographies to expand beyond the “Big Six.”
Incremental growth is attractive in these second-tier markets, where new entrants prefer to start small and build gradually, rather than incurring the up-front expense of a large data center build. This has boosted wholesale players like Compass Datacenters, which has found a niche in deploying factory-built infrastructure to service providers in secondary markets.
Focusing on service providers also offers favorable economics for IO, which can supply the modules and technology (including its popular IO.OS management software) while allowing partners to handle sales and marketing through small teams familiar with these local markets. Building deep relationships with growing providers holds the promise of regular repeat orders. IO says its partnership plans will also include professionals working with the data center industry, such as engineering firms.