Data Center Knowledge | News and analysis for the data center industry

Wednesday, June 17th, 2015

    1:07a
    IO Launches UK Data Center With Goldman Sachs as Customer

    IO has officially launched a data center in Slough, just outside of London, the company announced Tuesday. This is the Phoenix-based colocation provider’s second data center outside the US and its first UK data center.

    The facility’s first customer is Goldman Sachs. The investment-banking giant was also the first customer at IO’s Singapore data center, which came online in 2013.

    IO, whose differentiation strategy has been to invest in developing its own advanced data center technologies instead of simply providing power, cooling, network connectivity, and other basic colocation services, is once again going through a major change. The company is splitting its technology division off into a separate entity called Baselayer, with IO, the service provider, becoming a customer of Baselayer’s technology products.

    The cornerstones of that technology portfolio are data center modules similar in form factor to shipping containers and an advanced data center infrastructure management software suite. The company manufactures the modules at its own factory outside of Phoenix and ships them to data center sites around the world.

    Europe is one of the world’s biggest data center markets, and the UK is the biggest data center market in Europe. Building a facility in Slough is a way to capitalize on the high demand for data center services from the many companies that either have operations there or want to serve users in the metro area or in Europe in general.

    “We’re looking forward to helping existing customers expand their footprint into Europe, as well as attracting customers from the UK market,” Nigel Stevens, IO’s managing director for the UK, said in a statement.

    A big market also means lots of competition, and going in with an anchor tenant on board reduces the risk that comes with entering an already busy market. The modular approach reduces that risk further, since it allows the provider to bring only as much capacity online as is needed at any point in time instead of sinking a lot of investment into a high-capacity facility from day one.

    The service-provider landscape on the European data center market is undergoing some significant changes.

    Equinix, the world’s largest data center provider, recently placed a successful bid to acquire TelecityGroup, one of the biggest providers in Europe, blocking a previously planned merger between Telecity and Interxion, another European giant. The Equinix-Telecity transaction, if approved by regulators and completed, will make Equinix the number-one provider in Europe.

    Prior to that, NTT acquired German provider e-shelter, which gave the Japanese company an instant spot on the list of Europe’s top data center providers.

    IO’s UK data center measures more than 100,000 square feet. It is the first facility where the company has deployed a new type of Baselayer module called Eco. The modules are designed for maximum energy efficiency, using outside air for cooling, with the free-cooling system backed up by a traditional mechanical chilled-water cooling system.

    As it launches its offensive on the European market, IO maintains its focus on Asia Pacific. Founder and CEO George Slessman recently moved from Phoenix to Singapore to better focus on growing the company’s business in one of the world’s fastest-growing data center markets.

    3:00p
    Verilume Automates OpenStack Cloud, Hadoop Cluster Deployment

    Verilume, a startup founded by executives with extensive experience at financial services firms and EMC, has launched a cloud application service through which IT organizations can automate the deployment of OpenStack private clouds or Hadoop clusters inside their data centers.

    Leveraging hard-won experience gained at Fidelity, Goldman Sachs, and Morgan Stanley, Verilume co-founder Dan Petrozzo says the Verilume software suite is built around an automated cloud builder the company designed to instantiate complex software stacks quickly, making those technologies more accessible to the average IT organization.

    Capabilities of the Software-as-a-Service application include scheduled rollouts, self-service provisioning, and the ability to deploy software stacks on both new and legacy infrastructure. At a time when new technologies are rapidly emerging inside the data center, Petrozzo says, IT organizations shouldn’t have to hire dedicated experts just to deploy them.

    “There’s a lot of unprecedented innovation happening in the data center these days,” he says. “The problem organizations have is they don’t have enough expertise behind deploying it.”

    To complement its core automation software, Verilume has also developed Verilume Forecaster, which applies machine learning techniques to help IT organizations forecast near-term capacity requirements. Petrozzo says that while there are plenty of tools available to plan capacity requirements over an extended period of time, most IT organizations need analytics that allow them to plan for capacity utilization in near real time.
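
    As a rough illustration of the concept, a near-term capacity forecast can be as simple as fitting a trend to recent utilization samples and projecting it a day ahead. The sketch below is a generic example of that idea, not Forecaster’s actual model; the sample data, the 80 percent threshold, and the 24-hour horizon are assumptions made for the example.

        # Minimal near-term capacity forecasting sketch (generic illustration,
        # not Verilume Forecaster's model). Sample data and thresholds are
        # invented for the example.
        import numpy as np

        def forecast_utilization(samples, horizon=24):
            """Fit a linear trend to hourly utilization samples (percent)
            and project it 'horizon' hours ahead."""
            hours = np.arange(len(samples))
            slope, intercept = np.polyfit(hours, samples, 1)
            future = np.arange(len(samples), len(samples) + horizon)
            return np.clip(slope * future + intercept, 0, 100)

        # Last 48 hours of cluster CPU utilization (made-up numbers).
        recent = np.array([55 + 0.3 * h + 5 * np.sin(h / 4) for h in range(48)])
        projection = forecast_utilization(recent)
        if projection.max() > 80:
            print("Projected to exceed 80% utilization within 24 hours.")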

    In the next release of the company’s core application, Petrozzo says, Verilume will also extend its reach to include Infrastructure-as-a-Service platforms to enable IT organizations to have better control over hybrid cloud computing environments.

    One of the primary reasons so much software is being pushed into public clouds run by external service providers is that line-of-business units have developed the perception that internal IT organizations are resistant to change, when most of the time those organizations simply don’t have the tools required to embrace new technologies. By using a cloud application to automate the deployment process, IT organizations can not only reallocate staff to tasks that add more value to the business but also address criticisms about the level of agility inside IT, he says.

    That doesn’t necessarily mean that every new technology that comes down the pike should automatically be embraced. But it does mean that internal IT organizations now have the potential to embrace them once they actually determine their true business value.

    3:30p
    Choosing Colocation, Expansion or New Build for Increasing Data Center Assets 

    Anthony Walker is with TE Connectivity’s Business Development and Strategy Department.

    Data centers are no longer just a tool for doing business – they are now on the critical path that drives an organization’s objectives, growth and success. As data rates and usage continue to skyrocket with no end in sight, many data centers face the challenge of providing advanced services while being constrained by physical space, costly equipment upgrades and the need to connect over longer distances.

    Operators must weigh their options: build new facilities, expand current infrastructure, seek colocation, or pursue a combination of these approaches. The best upgrade path for a business will typically balance requirements against time and cost.

    For some, it’s a financial decision – such as whether to invest $100 million in a new manufacturing plant or in a new data center. Which is the more immediate need, and which will generate a faster return on investment? Also, how long can the current infrastructure hold up? A new data center can take many months to become operational. If the business has reached a point where its infrastructure is so stretched that it demands an immediate solution, moving into a colocation facility might be the best option. A colocation provider can also make additional servers available far more easily than an organization can build new facilities.

    Another consideration is the organization’s core business. Is it in the “data business” where revenue is generated by managing huge amounts of data? Or does data play a critical role in the operation of the business? If data is your business, owning your data center assets is important. Building a new data center or expanding existing facilities is only a matter of affordability. However, if data is simply a critical aspect of your core business, then colocation might be a faster, less costly option.

    Some businesses use a combination of corporate-owned data centers and colocation services to meet their data needs. For example, a large financial institution may use a colocation facility for its non-secure or non-strategic information, but still manage a very robust network of global data centers for its core business. Because data is so essential to its operation, the organization must control its own primary and disaster recovery facilities.

    There are several vertical markets – healthcare, hospitals, banking, government and federal agencies – whose data must adhere to strict regulations. In these organizations, colocation may not be an option. If data is sensitive by nature and must be protected, such as in military operations, a cloud-based solution does not offer the necessary security. Many government agencies must manage and control their own data for security reasons.

    For those who must own their data center assets, is it better to expand current infrastructure or build new facilities? Typically, an organization’s first inclination is to expand its current footprint if possible, and that is likely the less costly option. However, there are considerations and circumstances that can lead to building new facilities from the ground up.

    Power consumption is an increasingly pressing issue for data centers; some are said to consume enough power to light up a small town. Ensuring that sufficient power will be available for an expansion is therefore important. If your data center has been operating efficiently and the expansion plan will not jeopardize that operational efficiency, then expansion is generally the right solution.

    Do you have enough physical space available to expand your data center infrastructure? If not, you may have to change your data center strategy or consider building a new data center. Are there sufficient fiber assets to deliver the speeds and capacities your business requires? Is the environment conducive to expansion?

    Expansion is a workable solution for the majority of businesses that must control their own data center assets, and most of the issues involved in a data center expansion are easily navigated. However, a few external factors may also weigh on the decision. For instance, are there rebate programs or local tax exemption agreements that would bring down the expense of building a new data facility?

    These and other factors may change the entire financial equation for expansion versus a new data center. But life in the data center world is about saving time and becoming operational as quickly as possible, so, again, expanding current infrastructure is typically the better option. Regardless of which solution you choose – colocation, expansion or new build – it’s important to ensure your business and technology objectives are aligned to successfully move your organization into the future.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:43p
    RDMA Replaces TCP/IP in Linbit’s Data Replication Tool

    As more IT organizations start to replicate data between servers, Linbit announced this week that its software can now replicate data using remote direct memory access (RDMA).

    With the release of the company’s DRBD9 open source software, IT organizations can eliminate their reliance on TCP/IP for data replication between servers, says Greg Eckert, business development manager for Linbit.

    “Now we can bypass the CPU using RDMA,” he says. “A lot of organizations have come to realize that TCP/IP is a bottleneck when it comes to high-performance replication.”

    The data replication software running on servers is capable of connecting nodes in real time to ensure data availability, Eckert says, eliminating the need to use expensive replication software on storage systems. DRBD9 can interconnect more than 30 geographically distributed storage nodes across any network environment, according to him.

    Using PCIe storage combined with InfiniBand network cards, Linbit claims DRBD9 is 100 percent faster than replication over IP-based networks, while simultaneously reducing CPU load by 50 percent. To accomplish that, Eckert says, DRBD9 sits above the disk scheduler on a server to take control of the data replication process on Linux systems.
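
    For illustration, selecting the RDMA transport in DRBD9 is typically a matter of resource configuration. The sketch below is a rough example only, with placeholder hostnames, addresses, and block devices; the exact option names and layout should be checked against Linbit’s DRBD9 documentation.

        resource r0 {
            device      /dev/drbd0;
            disk        /dev/sdb1;
            meta-disk   internal;

            net {
                # use the RDMA transport instead of the default TCP transport
                transport rdma;
            }

            on node-a {
                node-id   0;
                address   10.0.0.1:7789;
            }
            on node-b {
                node-id   1;
                address   10.0.0.2:7789;
            }
        }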

    A new DRBD Manage application enables deployments in a matter of minutes and exposes a set of APIs to facilitate integration with other management frameworks, such as OpenStack.

    Not only does a software approach provide a more flexible way to replicate data that is divorced from any hardware upgrade cycle, but the Linbit software also doesn’t require IT organizations to buy commercial software licenses.

    Eckert says adoption of RDMA inside cloud computing environments makes clouds the first place likely to see adoption of DRBD9. But he adds that it’s only a matter of time before traditional enterprise IT organizations begin to rely on DRBD9, along with the DRBD Proxy software, to replicate data directly between servers, both within a data center and across distributed data centers.

    Replication software is a critical component of just about any disaster recovery strategy. The challenge that many organizations now face is that the amount of data that needs to be protected is growing in leaps and bounds. Keeping up not only with that volume of data, but also with the velocity at which that data is being created is tough.

    IT organizations may still need to rely on storage systems to replicate data in some instances. But in high-performance computing scenarios responsibility for replication appears to be shifting to the server.

    5:13p
    More Content, More Data Centers, and Better Interconnects

    The reality is that we are creating both more content and a lot more data. In fact, very soon, every business will become a digital entity. We are finding new ways to communicate and deliver rich content to end users and the modern business. Through it all, data center providers must focus on ensuring constant content delivery to users located anywhere, accessing data from any device at any time.

    Here’s an interesting whitepaper that discusses how the proliferation of new content not only requires more data centers but has also forced administrators to re-evaluate how all of this content is delivered. Most of all, they have to consider the tools and solutions available to help with the process. There are also new capabilities to consider, such as:

    • Faster data center interconnect expansions
    • Flexible client interface architectures including support for 10G/40G/100G networks
    • Direct API integrations for better control and automation

    However, most importantly, today’s content delivery architecture must be capable of web-scale capacities. This means utilizing tools which provide better global connectivity, improved content delivery, and better capacity control capabilities. Organizations are looking for even better ways to dynamically interconnect their data centers to deliver even more business productivity. Download this whitepaper today to learn about new web-scale capabilities around content delivery and data interconnect technologies.

    5:48p
    VMware Launches Identity Manager to Solve BYOD Risks

    This article originally appeared at The WHIR

    VMware has launched an Identity Manager service to centralize authorization for applications across different environments and devices. Hosted on-premises or in the vCloud Air public cloud, the service promises secure, convenient, one-touch access for what VMware calls “the mobile/cloud era.”

    Identity Manager works with AirWatch Adaptive Access to provide single sign-on access to web, mobile, SaaS, and legacy applications. It works across iOS and Android, and does not involve changing app code or “app wrapping,” so updates can be applied without extra steps.

    The service includes a token generator, but also integrates with existing identity providers. It comes with an enterprise app store, app usage analytics, and a conditional access policy engine.
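
    To make the conditional access idea concrete, the toy sketch below grants single sign-on only when the device profile satisfies a policy for the app’s sensitivity. It is a generic illustration of the concept, not VMware’s policy engine; the attribute names and rules are invented for the example.

        # Toy conditional access check (generic illustration, not VMware's
        # actual policy engine). Attributes and rules are invented here.
        from dataclasses import dataclass

        @dataclass
        class DeviceProfile:
            enrolled: bool    # registered with device management
            compliant: bool   # e.g. passcode set, not jailbroken
            network: str      # "corporate" or "external"

        def allow_sso(authenticated: bool, device: DeviceProfile, sensitivity: str) -> bool:
            """Grant SSO only if the user is authenticated and the device
            meets the policy for the app's sensitivity level."""
            if not authenticated or not device.enrolled:
                return False
            if sensitivity == "high":
                return device.compliant and device.network == "corporate"
            return device.compliant

        # A compliant but off-network device is denied access to a high-sensitivity app.
        print(allow_sso(True, DeviceProfile(True, True, "external"), "high"))  # False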

    BYOD risks, whether from misconfigured apps, prolonged app vulnerabilities, or the dark web, come with the flexibility and productivity benefits of “the mobile/cloud era.” Symantec launched new services earlier in June to address the same concerns.

    Creating device profiles and binding them to user identities allows an unlocked device to be used as a seamless authentication factor. “More importantly,” Kevin Strohmeyer, VMware’s senior director of product marketing for End-User Computing, says in a blog post, “because people have such a strong connection with their personal devices, it is less likely that they will lose control of them, and when they do, are more likely to report them stolen and/or initiate their own remote wipe to safeguard their personal information, as well as corporate information.”

    Identity Manager launched to VMware’s Blue and Yellow Management Suites, out of US data centers. The company plans to offer it as a stand-alone product, and also to add European and Asia Pacific regions in the third quarter. The service starts at $150 per year, and a free trial is available through AirWatch.

    This first ran at http://www.thewhir.com/web-hosting-news/vmware-launches-identity-manager-to-solve-byod-risks

    8:00p
    AT&T to Open Source Network Hardware, NFV Software

    AT&T was one of three giants sharing some details about the inner workings of their networks this morning at the Open Network Summit in Silicon Valley.

    While the other two – Google and Microsoft – build their data center networks to support online services like Google Search and Bing and to provide cloud application and infrastructure services such as Office 365, Azure, and Google Cloud Platform, much of AT&T’s engineering muscle goes into supporting things like its gigabit internet access service for consumers and businesses and its massive mobile-network user base.

    To keep up with growth of the user base and its service portfolio as well as rising connectivity speeds, the company last year changed its approach to network architecture from buying off-the-shelf gear from incumbent vendors like Cisco, Alcatel-Lucent, or Ericsson, to using low-cost “commodity” hardware and relying on software to manage the increasingly complex set of functions the network has to perform.

    This is the approach to network scale and complexity that internet giants like Google, Microsoft and Facebook have taken. Like them, telcos operate networks that span the globe, but telcos have also been adding to their portfolios cloud services similar to those Google and Microsoft provide, in order to leverage their network assets.

    Open Source Network Hardware and Software

    As it builds the new systems that support its services in its data centers and central offices around the world, AT&T relies a lot on open source software. In the process, however, it has also designed a lot of its own software and developed specifications for custom hardware underneath.

    Today, the company announced it will contribute some of those network hardware specs to the Open Compute Project, a Facebook-led open source hardware and data center design initiative. It will also open source a series of network software tools it has developed through open source network technology communities.

    When the hardware specs become open source, any hardware maker can come to AT&T with an offer to supply the boxes. Any other telco can also use those open specs and open source software to build their own data center network infrastructure in a similar fashion.

    Replacing Complex Appliances With Software

    AT&T’s gigabit home and enterprise internet service is called GigaPower. It’s extremely fast, but to deliver that speed to neighborhoods the company has had to install complex, expensive equipment in its central offices – devices like gigabit passive optical network optical line terminals, or GPON OLTs, John Donovan, SVP of technology and operations at AT&T, wrote in a blog post.

    Using Network Functions Virtualization (NFV), the company’s engineers plan to replace those appliances with software running on commodity servers and other hardware.

    Instead of simply replacing physical appliances with virtual ones, the team broke each of those devices out into its subsystems and upgraded and optimized each one. AT&T has done this with GPON OLTs, broadband network gateways, and Ethernet aggregation switches.

    Several individual line cards in each OLT are virtualized and run on a single media access control (MAC) card. This vOLT (virtual OLT) is the piece of hardware AT&T is open sourcing through Open Compute.

    “We’re inviting any white box hardware maker to build and sell them to us and also allowing others to build on the concept and design,” Donovan wrote. His team hopes to see prototypes by the end of the year and start trials and deployments in 2016.

    An Open Source Broadband Access System

    AT&T engineers are working with On.Lab, a non-profit open source network research foundation, to release an open source version of the software that enables vOLT. The software is called CORD, short for Central Office Re-architected as a Datacenter. Together, vOLT and CORD will constitute a complete open source system for enabling broadband access like AT&T’s GigaPower.

    The company also plans to open source a tool it uses to configure devices in its software-defined network through OpenDaylight, an open source SDN project under the Linux Foundation.
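
    AT&T has not published that tool yet, but the general pattern of configuring a device through an OpenDaylight controller can be sketched using the controller’s standard RESTCONF interface for registering a NETCONF-managed device. Everything below – controller URL, credentials, and device details – is a placeholder, and the exact resource path varies between OpenDaylight releases.

        # Rough sketch: register a NETCONF-managed device with an OpenDaylight
        # controller over RESTCONF. Generic illustration of the SDN configuration
        # pattern, not AT&T's tool. All URLs, names, and credentials are placeholders.
        import requests

        ODL = "http://odl-controller:8181"
        NODE = "example-router"
        payload = {
            "node": [{
                "node-id": NODE,
                "netconf-node-topology:host": "192.0.2.10",
                "netconf-node-topology:port": 830,
                "netconf-node-topology:username": "admin",
                "netconf-node-topology:password": "admin",
                "netconf-node-topology:tcp-only": False,
            }]
        }
        resp = requests.put(
            f"{ODL}/restconf/config/network-topology:network-topology/"
            f"topology/topology-netconf/node/{NODE}",
            json=payload,
            auth=("admin", "admin"),
        )
        resp.raise_for_status()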

    8:59p
    Cohesity Raises $70M to Unify Secondary Storage

    Emerging from stealth mode today to unveil a unified approach to managing all secondary storage, Cohesity also revealed that it has raised a total of $70 million in funding.

    Led by CEO Mohit Aron, who helped found Nutanix and led the development of the file system Google uses to manage its data, the company aims to bring to market a storage platform that not only unifies every workflow typically required of a secondary storage system but is also, it claims, infinitely scalable.

    “When it comes to secondary storage today there is lots of fragmentation,” Aron said. “We’re going to unify all those processes around an object-based distributed file system where there are no bottlenecks.”

    The platform is scheduled to be generally available later this year. Cohesity has already created an early access program under which pilot customers, such as Tribune Media and GS1 Canada, have deployed it.

    Aron promises the Cohesity Data Platform will eliminate the silos in secondary storage systems that make managing them so cost-prohibitive. Because there are now so many copies of data sets strewn across the data center, the total size of secondary storage winds up dwarfing the amount of primary storage most organizations have.

    Citing CSC Research, he noted that the amount of data IT organizations will need to manage is expected to be 44 times greater in 2020 than it was in 2009. Most of this data will be what Aron describes as “dark data,” or data that was used once or not at all.

    The Cohesity Data Platform is designed to dramatically reduce the amount of data stored in secondary storage systems by creating a programmable storage environment where all data is indexed using an object-based system, thereby eliminating the need to create multiple copies of the same data over and over again. Once that data is indexed, it then becomes a lot easier to apply analytics to better understand how that data is actually being used.
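
    The general idea of indexing data by content so that identical pieces are stored only once can be shown with a toy example. The sketch below is a generic content-addressing illustration, not Cohesity’s actual file system; the chunk size and object names are arbitrary.

        # Toy content-addressed store: identical chunks are kept once and
        # referenced by hash. Generic illustration of deduplication through
        # indexing, not Cohesity's implementation.
        import hashlib

        class ChunkStore:
            def __init__(self, chunk_size=4096):
                self.chunk_size = chunk_size
                self.chunks = {}   # hash -> chunk bytes (each unique chunk stored once)
                self.objects = {}  # object name -> ordered list of chunk hashes

            def put(self, name: str, data: bytes) -> None:
                refs = []
                for i in range(0, len(data), self.chunk_size):
                    chunk = data[i:i + self.chunk_size]
                    digest = hashlib.sha256(chunk).hexdigest()
                    self.chunks.setdefault(digest, chunk)  # duplicates add no new data
                    refs.append(digest)
                self.objects[name] = refs

            def get(self, name: str) -> bytes:
                return b"".join(self.chunks[h] for h in self.objects[name])

        store = ChunkStore()
        backup = b"x" * 10000
        store.put("backup-monday", backup)
        store.put("backup-tuesday", backup)  # second copy adds no new chunks
        print(len(store.chunks))             # number of unique chunks stored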

    Just as importantly, because that data is indexed, it becomes available on demand to all applications versus being stored in a way that makes large amounts of data not readily accessible to any application.

    In terms of the overall IT budget, spending on secondary storage both in and out of the cloud is spiraling out of control. As such, it’s only a matter of time before IT organizations are forced to embrace new approaches to managing storage to reduce those costs.

    Backed by $70 million in venture capital, Cohesity is clearly betting that it will be one of those preferred new options for managing secondary storage.

