Data Center Knowledge | News and analysis for the data center industry

Wednesday, June 10th, 2015

    12:00p
    H5 Acquires Ashburn Data Center, Seeks Wholesale Tenant

    H5 Data Centers has acquired a data center just a few blocks away from one of the largest data center clusters in the world in Ashburn, Virginia. The facility, previously owned by a company called DBT, sits on a five-acre parcel of land.

    The Ashburn data center site has been marketed in the past, but not much work has been done on the older industrial build. H5 plans to invest in transforming the building into a move-in ready powered shell or build-to-suit facility.

    The 70,000-square-foot data center is expandable to 100,000 square feet and has more than 8.5 MW of power available from Dominion Power. H5 is targeting a customer who needs about 5 MW of critical IT load. COO David Dunn said the company is already working with the utility company to increase the site’s power capacity.

    “We’re looking for a large, sophisticated user to take over, but we can also provide the capital and work to get it even more ready,” he said, adding that H5 has started to engineer the layout.

    The facility has good infrastructure, is outside the 500-year flood plain, and has high ceilings. A sample layout results in about 40,000 square feet of net usable space supporting about 1,500 cabinets, but there is a lot of flexibility, especially in terms of design. One reason is that the structure has no internal columns, which can be cumbersome to design around.
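
    Taken together, the quoted figures imply a fairly conventional power density. The quick back-of-envelope check below simply reuses the numbers above (5 MW of critical load, 40,000 square feet of usable space, 1,500 cabinets); it is illustrative arithmetic, not H5's engineering plan.

        # Back-of-envelope power density from the figures quoted above.
        # Illustrative only -- these are not H5's engineering numbers.
        critical_load_w = 5_000_000   # ~5 MW of critical IT load
        usable_sq_ft = 40_000         # net usable space in the sample layout
        cabinets = 1_500              # cabinets supported in that layout

        watts_per_sq_ft = critical_load_w / usable_sq_ft
        kw_per_cabinet = critical_load_w / cabinets / 1_000

        print(f"{watts_per_sq_ft:.0f} W/sq ft")    # -> 125 W/sq ft
        print(f"{kw_per_cabinet:.1f} kW/cabinet")  # -> 3.3 kW/cabinet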

    H5 is proposing an air-cooled design, which would use less water than water-based cooling approaches.

    Over the next three or four months, upgrades will be made internally and externally. These include making aesthetic improvements inside and outside, modernizing and reinforcing the walls, and improving the loading dock area, office space, lobby, and conference rooms.

    H5 has successfully employed the powered-shell model across the country. Dunn said that all current projects are substantially leased, and the company is looking to make further acquisitions.

    The new property puts the company in play in one of the biggest markets. While Northern Virginia is arguably a crowded market, H5 believes that the new property is unique in that there isn’t much powered-shell space available in the Ashburn data center market.

    “The supply has been pretty rational,” said Dunn. “We think the demand is strong. We consistently see new builds in Northern Virginia pre-leased.”

    A powered shell usually appeals to providers who are willing to invest in completing the build-out according to their needs, said Dunn. “We’re taking a risk, but we’re able to do a build-to-suit as well,” he said. “This is a nice sweet spot for a powered shell.”

    A likely tenant is a large cloud or colo provider with data center design and management know-how. Given the role Ashburn plays as a connectivity hub for all major web-scale providers, finding this type of customer isn’t likely to be difficult. H5 is willing to make a bet in terms of capital investment in upgrading the site.

    Some existing H5 wholesale customers include Peak 10 in Charlotte, North Carolina, and Level 3 in Seattle.

    The company is eyeing other locations for further expansion.

    “We continue to believe that bytes are going to continue to seek lower-cost markets as the volume of bytes goes up,” Dunn said. “Companies will seek to optimize their data center spend and look for storage and compute locations in lower-cost areas, or you may see us focus on the network.”

    H5 recently announced it is pumping $8 million into its Denver data center for efficiency upgrades.

    3:30p
    Surviving Your Data Center Migration

    Francis Miers is a Director at Automation Consultants.

    A data center migration can be a tough process for even a seasoned data center manager to navigate. Relocating software and hardware is not simple and requires an analysis of risk, the data center environment, compatibility, the network in general, and possible latency issues.

    Manage Risk

    Good data center migration planning is based on the levels of risk you can tolerate; the principal risks are prolonged downtime and loss of data. Levels of tolerated risk will normally depend on the business importance of the apps being hosted. At the extreme end would be an air traffic control operation that must be operational 24 hours a day, whereas at the lower end of the risk scale are internal company systems that can tolerate outages.

    For a mobile telecom operator, for example, the tiers might break down as follows (a rough illustration of how a planner could use them appears in the sketch after this list):

    • Mission critical systems – e.g. the network itself
    • Business critical systems – e.g. telecom operator’s website
    • Important systems – e.g. HR, meeting room booking
    • Brochureware systems – e.g. intranet
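
    To make the tiering actionable, a planner might record each system's tier together with the outage it can tolerate and the testing it requires, then drive the migration plan from that lookup. The tier names and example systems below follow the list above; the downtime budgets and test requirements are illustrative assumptions, not figures from the article.

        # Illustrative mapping of risk tiers to migration requirements.
        # Tier names and example systems follow the list above; the
        # downtime budgets and test flags are assumptions for illustration.
        MIGRATION_POLICY = {
            "mission critical": {"max_outage_hours": 0, "trial_migration": True, "full_test_suite": True},
            "business critical": {"max_outage_hours": 4, "trial_migration": True, "full_test_suite": True},
            "important": {"max_outage_hours": 24, "trial_migration": False, "full_test_suite": False},
            "brochureware": {"max_outage_hours": 72, "trial_migration": False, "full_test_suite": False},
        }

        SYSTEMS = {
            "mobile network core": "mission critical",
            "operator website": "business critical",
            "HR system": "important",
            "intranet": "brochureware",
        }

        for system, tier in SYSTEMS.items():
            policy = MIGRATION_POLICY[tier]
            print(f"{system}: {tier}, tolerates {policy['max_outage_hours']}h outage, "
                  f"trial migration: {policy['trial_migration']}")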

    The risk will also affect the budget required for the migration; you may need a test environment, dedicated data center staff and additional software licenses. You may need to performance test the new environment (in an ideal world you’ll run a complete set of functional and performance tests) to ensure your applications behave correctly.

    Disaster recovery is also a key consideration for mission critical systems. Such systems normally have a disaster recovery capability with active servers on another site (active/active). In a migration of a mission critical system, any disaster recovery facility has to be taken into consideration.

    Discovery

    As well as establishing business and technical risks, you need to understand what the existing environment consists of and how it interacts with other systems. Over time, knowledge of the full extent of an application’s components can be lost. Some reliable applications may have run without a significant pause for decades, and the person who designed and built them may have left the organization long ago. It is often necessary to spend time rediscovering all the components of legacy applications, including the hardware they’re installed on and the parts of the network they use. If all those components aren’t there post-migration, the applications might not work when you switch them on.

    Network tracing tools can prove invaluable for this discovery phase. Many systems will be in contact on a daily basis, but some will communicate rarely (once a month, for example). It is important to observe your network over a sustained period of time to ensure that you don’t miss these processes in action.
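
    One lightweight way to support this discovery phase is to aggregate connection records captured over a long observation window into a per-host dependency map, so that the job that only contacts a partner system once a month still shows up. The sketch below assumes a simple CSV export of connection records; real network tracing tools provide richer data and their own reporting.

        # Sketch: build a dependency map from connection records gathered
        # over a sustained observation window. The CSV format is an
        # assumption; real tracing tools export richer data.
        import csv
        from collections import defaultdict

        def build_dependency_map(path):
            """path: CSV with columns src_host, dst_host, dst_port."""
            deps = defaultdict(set)
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    deps[row["src_host"]].add((row["dst_host"], row["dst_port"]))
            return deps

        if __name__ == "__main__":
            for host, targets in sorted(build_dependency_map("connections.csv").items()):
                print(host, "->", sorted(targets))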

    Compatibility

    Legacy apps aren’t always a good fit for a new environment. MicroVAX minicomputers, for example, ceased to be manufactured in 2005 – despite this, they still figure prominently in many facilities.

    If your apps were made for MicroVAX systems or other older hardware, your migration poses an additional dilemma. There are two ways of dealing with it. The first is physical movement of the legacy hardware: unplugging and re-racking, quite literally a “lift and shift”.

    The alternative is moving them to modern hardware. In some cases, emulators might ease the process considerably; in others, the apps will require a complete rewrite. As technology moves forward, systems like MicroVAX will become harder and harder to support. Adjusting applications to modern hardware may incur short-term cost but eliminates the risks of using long-outdated hardware.

    Network

    Again, solid planning rules the day. Part of planning the migration of any app is thinking about how it will fit into the new network. Considerations include firewall settings, domains and trusts, and latency. For systems of medium to high business importance and/or technical risk, a trial migration in a test environment is advisable, so the system can be tested in its new location before going live.

    Latency

    Depending on the nature of an application, latency can be important or negligible; accessing a single file may take milliseconds, but if the application needs hundreds of files in sequence, the process could take minutes, which makes it unworkable. If an application is migrated thousands of miles away to another country, latency can become an issue, and additional server and virtualization resources may be required.
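
    A quick worked example shows how the delays add up; the round-trip time and request count below are assumed values for illustration, not figures from the article.

        # Illustrative latency arithmetic with assumed values.
        round_trip_ms = 40            # assumed WAN round trip after the move
        sequential_requests = 2_000   # app fetches files one at a time

        added_seconds = round_trip_ms * sequential_requests / 1_000
        print(f"Added delay: {added_seconds:.0f} s")  # -> 80 s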

    In any data center migration the main dangers are prolonged unplanned downtime and loss of data. The budget dedicated to the migration should reflect the damage that would be caused to the business if either were incurred. Good advance planning and thorough testing (commensurate with the business importance of the application) will help the migration go smoothly.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:35p
    Amazon to Buy Solar Energy for Virginia Data Centers

    Amazon will support construction and operation of an 80 MW solar farm in Accomack County, Virginia, which will be called Amazon Solar Farm US East. This is the company’s second big renewable power purchase agreement in the US this year following a wind deal in Indiana.

    Amazon Web Services, the company’s cloud services arm, is teaming with Community Energy to support the construction and operation of the solar farm, which is expected to start generating 170,000 MWh of solar power annually as early as October 2016.
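
    As a rough sanity check (not a figure from Amazon or Community Energy), 170,000 MWh per year from an 80 MW plant implies a capacity factor of about 24 percent, which is plausible for utility-scale solar:

        # Capacity-factor check using the figures quoted above.
        nameplate_mw = 80
        annual_mwh = 170_000
        hours_per_year = 8_760

        capacity_factor = annual_mwh / (nameplate_mw * hours_per_year)
        print(f"Capacity factor: {capacity_factor:.1%}")  # -> ~24.3%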

    Virginia is a big state for data centers but not solar farms. The Amazon project will be the largest solar farm there, with all energy generated delivered into the electrical grid that supplies AWS cloud data centers in the state.

    The bulk of Amazon data center capacity is in Virginia, served by Dominion Power, a utility whose fuel mix includes only 2 percent renewables, the rest coming from coal, nuclear, and gas-powered plants, according to Greenpeace. The future solar farm is about 200 miles away from the big data center cluster in Northern Virginia.

    Amazon Solar Farm US East will serve both existing and planned AWS data centers in the central and eastern US, said Jerry Hunter, vice president of infrastructure at AWS. The plant has the added benefit of working to increase the amount of renewable energy available in the Commonwealth of Virginia, he added.

    AWS has been a black sheep in Greenpeace’s roundup of cloud providers, called out for its use of dirty energy. The company made a commitment to 100 percent renewable energy usage for AWS in November 2014.

    The year kicked off with Amazon signing a long-term power purchase agreement in Indiana. The 150-megawatt wind farm, called the Amazon Web Services Wind Farm, is scheduled to come online early next year. It will generate about 500,000 MWh annually, according to AWS, which has a 13-year PPA with Pattern Energy Group, the project’s developer.

    Amazon also announced last month that it is piloting Tesla batteries in its US West region. Batteries are not only important for data center reliability, but are also enablers for the efficient application of renewable power. One of the biggest barriers to widespread adoption of wind and solar energy is intermittency of generation, which can be addressed with efficient energy storage.

    Virginia is home base for AWS US East, the largest region for the cloud services business, which makes $6 billion in revenue per year. It’s also home to three edge locations and a burgeoning GovCloud Region built specifically for government needs.

    Two Amazon data center construction projects are in the works in the region. A developer is reportedly building a massive data center for it in Ashburn. The construction project was recently in the news after it caught on fire. Another proposed Amazon data center in Haymarket has run into opposition from a group of residents over Dominion’s plans for construction of a new power line.

    The new solar farm is a step toward meeting the company’s commitment to use 100 percent renewable energy for its data centers. But given the company’s sheer size and presence in Northern Virginia, there is still a long way to go.

    What might be interesting to see is what influence, if any, AWS will have on Dominion Power, the utility giant serving one of the largest data center clusters in the world. As massive web-scale players make big investments in renewable energy, the multi-tenant data center industry would certainly leverage the resulting infrastructure.

    Data centers are big customers, and the customer is always right. Google and Apple lobbied Duke Energy in North Carolina, convincing the largest utility in the US to pump $500 million into renewable energy.

    Virginia Governor Terry McAuliffe saw the news as very positive for the state at large. “Amazon’s new solar project will create good jobs on the Eastern Shore and generate more clean, renewable energy to fuel the new Virginia economy,” McAuliffe said in a statement. “I look forward to working with Amazon and Accomack to get this project online as we continue our efforts to make Virginia a global leader in the renewable energy sector.”

    5:00p
    HGST Launches 10TB Drive for Users With Deep Archive Needs

    Raising the bar yet again on the maximum capacity of a single hard drive, HGST, a Western Digital company, announced an enterprise-class HelioSeal 10TB hard drive aimed at applications with deep archive needs.

    The new Ultrastar Archive Ha10 drive is the company’s third helium-based drive. HGST says it uses the second generation HelioSeal platform and shingled magnetic recording (SMR) to achieve maximum density.

    HGST believes host-managed SMR as a core technology is the future of its HelioSeal line, with greater density achieved in the same footprint through overlapping, or “shingling,” the data tracks on top of each other. After previewing the 10TB drive last year, HGST says customer feedback revealed that active archive applications are already sequential, creating the ideal environment for SMR hard drives to thrive. The company also estimates that active archive/deep archive applications generate 20-35 percent of the data being stored today.
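
    The reason sequential, archive-style workloads suit SMR is that overlapping tracks are grouped into zones that are written in order; overwriting data in the middle of a zone means everything shingled after it has to be rewritten, a cost the host must manage in host-managed SMR. The toy model below only illustrates that constraint; it is a conceptual sketch, not HGST's firmware or the real zoned-storage interface.

        # Toy model of an SMR zone: appends are cheap, in-place
        # overwrites force a rewrite of the rest of the zone.
        # Conceptual sketch only, not a real zoned block device API.
        class SmrZone:
            def __init__(self, capacity_blocks):
                self.capacity = capacity_blocks
                self.blocks = []           # data written so far, in order
                self.rewritten_blocks = 0  # cost counter for illustration

            def append(self, data):
                if len(self.blocks) >= self.capacity:
                    raise IOError("zone full; reset it or open a new zone")
                self.blocks.append(data)

            def overwrite(self, index, data):
                # Shingling: every block after `index` must be rewritten.
                tail = self.blocks[index + 1:]
                self.blocks = self.blocks[:index] + [data] + tail
                self.rewritten_blocks += len(tail)

        zone = SmrZone(capacity_blocks=256)
        for i in range(100):
            zone.append(f"archive-object-{i}")  # sequential: no extra work
        zone.overwrite(10, "updated-object")    # random update: 89 blocks rewritten
        print(zone.rewritten_blocks)            # -> 89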

    “By layering SMR on top of helium, we are enabling massively scalable TCO-driven storage solutions with the performance and durability necessary for the long-term retention of archived data,” Brendan Collins, vice president of product marketing at HGST, said in a statement. “Making SMR design investments today minimizes incremental efforts for future SMR solutions and gives our customers a time-to-market advantage for all current and future high-capacity HDDs in the market.”

    With a 5-year warranty and 2,000,000-hour MTBF (mean time between failures), HGST positions the drive as purpose-built for cloud providers, online backup providers, and others with deep archive needs where data is only read occasionally.

    HGST says it is working with driver vendors for HBA support, and will provide an open source Software Development Kit for Linux application development. The company says roll-out of the Ha10 will initially focus on cloud and OEM storage customers.

    5:27p
    Cisco Beefs Up ACI, While Improving Non-ACI SDN Capabilities in Nexus

    Cisco has upgraded the operating system used across its data center switches to enhance software-defined networking capabilities across its full range of offerings.

    In addition, the company unveiled Nexus 3200 top-of-rack switches for next-generation 10G, 25G, 40G, 50G, and 100G cloud data center networks; they will be available in the third quarter. The company has also created Virtual Topology System (VTS), a data center SDN overlay provisioning and management system for Nexus switches.

    Cisco made the announcements today at the Cisco Live 2015 conference in San Diego.

    Locked in a fierce battle with rivals looking to usurp the company’s dominance of the data center network market using open source networking software, Cisco has been taking great pains to wrap its proprietary networking technology with a set of programmable open interfaces. By providing higher levels of interoperability, Cisco maintains it is preserving customer choice while also preserving its ability to deliver innovative products and services. Thus far, it says it has signed 2,655 customers to deploy Nexus switches running its Application Centric Infrastructure (ACI) software for data center SDN.

    VTS supports the BGP EVPN control plane for managing VXLAN overlays and can be integrated with cloud management systems, such as OpenStack. In fact, Cisco has already developed a VTS plug-in for VMware vSphere environments and expects partners and customers to be able to easily develop plug-ins for any number of management frameworks.

    VXLAN overlays are a different approach to SDN from Cisco’s proprietary ACI software. By supporting both, Cisco is trying to ensure it doesn’t lock itself out of the growing open-SDN market.
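
    For context, a VXLAN overlay carries the original Ethernet frame inside a UDP packet with an 8-byte VXLAN header whose key field is a 24-bit VXLAN network identifier (VNI). The snippet below builds that header as defined in RFC 7348; it is a protocol illustration, not code from Cisco's VTS or ACI.

        # Minimal VXLAN header per RFC 7348 (protocol illustration only).
        # 8 bytes: flags (I bit set), 24 reserved bits, 24-bit VNI, 8 reserved bits.
        import struct

        def vxlan_header(vni: int) -> bytes:
            if not 0 <= vni < 2**24:
                raise ValueError("VNI must fit in 24 bits")
            flags = 0x08 << 24            # I flag: the VNI field is valid
            return struct.pack("!II", flags, vni << 8)

        print(vxlan_header(5000).hex())   # -> 0800000000138800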

    Jacob Jensen, senior director of product management for the Cisco Insieme business unit, said ACI, which is at the core of Cisco’s SDN platform, can now also be integrated with Microsoft Azure and System Center software along with object stores typically used by cloud service providers.

    “This is an extension of our existing partnership with Microsoft,” said Jensen. “Microsoft System Center is now integrated with the Cisco APIC controller.”

    Other new capabilities include a software development kit for building ACI applications along with tools for setting up disaster recovery capabilities between Nexus switches that can be as much as 150 kilometers apart.

    Finally, Cisco has added heat maps, capacity planning, and simplified troubleshooting tools to ACI and recruited CliQr, a provider of application dependency mapping and application deployment automation software, into the ACI ecosystem.

    Given the relatively slow rate of change inside enterprise networks, it will take years for the battle for data center SDN market share to play out. But like most incumbents faced with challengers, Cisco will surely use all the technology and pricing muscle at its disposal to try and maintain its current market dominance.

    6:30p
    Cisco Invites Cloud Application Developers Into Intercloud Fray

    Cisco announced several initiatives focused on cloud application development, including a new marketplace and APIs for cloud services.

    The company’s Intercloud focus has been connecting enterprises with cloud service providers. Now, it’s adding an applications and development focus into the mix, connecting all three groups (service providers, app providers, and consumers) with one another, all atop Intercloud. The marketplace lists best-of-breed applications that work within the Intercloud vision, while the new APIs help developers build apps for what Cisco calls the hyper-distributed IT world.

    The company has built out a partner network of cloud service providers and is now building a network of cloud application service providers that work on the same Cisco fabric. The marketplace will act as the front end of this ecosystem.

    The aim is to give enterprise consumers a list of vetted, best-of-breed apps that can freely move across Cisco’s Intercloud network, bringing their individual security policies along. The marketplace launches with 40 application partners, two examples being enterprise Hadoop players MapR and Hortonworks.

    “This is a curated marketplace,” said Michael Riegel, Cisco’s vice president of Internet of Everything. “It consists of key apps that they need for problems they need to solve.”

    To stimulate application development within Intercloud, the APIs help developers build for hyper-distributed IT and Big Data; in other words, they make applications good candidates for Cisco’s larger vision. Cisco is also developing APIs for its own products, such as its Cloud Services Router (CSR).

    “Not only will we have APIs for network and control, we’ll also make our own cloud services,” said Riegel.

    The idea of siloed enterprises is going away as we enter the age of hyper-distributed IT. End-user businesses tap into an ever-increasing number of cloud services and data sources; most customers think they use five to 10 cloud services, when the reality is closer to 675, according to Cisco.

    Sprawl is what the company is really trying to solve, said Riegel. The cloud services flowing in and out of the enterprise are growing by 17 percent annually. Shadow IT isn’t something that’s going to happen in the future; it’s something that is already happening. The average CIO has a better handle on what’s occurring, estimating that their business uses approximately 40 cloud services rather than the five to 10 cited by other executives. However, 40 is still a long way from 675.

    Coupled with the explosion of connected devices, Cisco is seeing a hyper explosion of applications, data, and things at the edge of the network. Its Intercloud strategy is about connecting this distributed world. The initial focus was connecting cloud service providers and becoming a cloud of clouds. Now, Cisco is delving deeper into applications.

    Cisco owned the network in the early days of the internet, and its strategy with cloud is to play a similar role. As enterprises become hyper-distributed, the network will need to support enterprise policies in a different way.

    “IT today is filled with disparate islands of clouds,” said Riegel. “A tightly knit ecosystem of trusted partners – as Cisco has engaged – provides far greater infrastructure and application choice, greater industry expertise and better geographic coverage than any single vendor could.”

    In the backdrop, connectivity is skyrocketing, said Riegel: devices are coming online at a rate of about 300,000 an hour, on the way to 50 billion connected things by 2020, along with 2 billion more connected people. This growth in connectivity is creating a world defined by hyper-distribution, he said.

    Cisco has 350 data centers in 50 countries, according to Riegel, and adding resellers and cloud builders will continue to be a big focus. Across those 350 data centers, partners like Telstra are building cloud capabilities designed for seamless interconnectivity between clouds. By solving connectivity issues in the hyper-distributed IT world, Cisco is essentially connecting all the clouds in its fold.

    Cisco this week also announced it has upgraded the operating system used across its data center switches to enhance software-defined networking capabilities. Ahead of Cisco Live, the company announced its acquisition of Piston Cloud Computing to boost its capabilities around private OpenStack clouds.

    8:16p
    Cloudera Snags Google Infrastructure Ace for Top Engineering Role

    Cloudera, an enterprise analytic data management provider powered by Apache Hadoop, has named former Google infrastructure expert Daniel Sturman vice president of engineering. Sturman, who was previously compute lead at Google, will direct development efforts at Cloudera, including accelerating hybrid cloud adoption and deepening partner relations and technical solutions, the company announced this week.

    Sturman has a combination of database and distributed compute platform experience and comes from a company whose work laid the early foundations for Hadoop.

    “Daniel brings to Cloudera an ideal blend of database and distributed compute platform experience,” Cloudera CEO Tom Reilly said in a press release. “The original design concepts for Apache Hadoop and many of the ensuing innovative analytic projects originate from Google.”

    Cloudera has a rich ecosystem, including tight relations with Intel, the company’s largest strategic shareholder. In 2014, Cloudera raised $160 million in a round that included Google Ventures, the internet giant’s venture-capital arm.

    Sturman was previously engineering lead at Google, a company with one of the biggest data center networks on the planet. During his eight years working on Google infrastructure, Sturman ensured that the underlying compute systems for all Google workloads were reliable, predictable, and efficient both on-premise and in the cloud, according to Cloudera.

    He was responsible for several Google Cloud Platform products, including Google Compute Engine, Google App Engine Platform-as-a-Service, Kubernetes, and Google Container Engine.

    Prior to Google, he headed development for IBM’s DB2 relational database products and identified emerging technology trends for IBM’s software group.

    “We are excited to have a Google technology leader who understands modern architectures and the art of turning data into insight take our platform to the next level,” said Reilly. “His background leading teams in running one of the world’s largest fleets of computers, his understanding of open source, distributed computing, and cloud platforms, coupled with his successful track record of delivering enterprise-grade software will bring immense value to Cloudera as we continue our high growth and deploy transformative Big Data applications in both traditional enterprises and the cloud.”

    In a statement, Sturman said the world was standing on the brink of incredible changes in the way value is extracted from data.

    8:25p
    Fixing Facilities Management and Increasing Data Center Efficiency

    Your data center now hosts a variety of different workloads, numerous remote users, and powerful multi-tenant architectures. This new kind of data center has to keep up with market demands and stay ahead of the competition. In working with a number of data center administrators and managers, we have found that one big challenge is creating a coherent facilities and infrastructure management environment.

    One way around this obstacle is to partner with a number of different leading solution providers. However, there are issues with this option as well. When you work with too many partners and vendors, “finger-pointing” can become a serious concern. It can cause a “disconnect” among people that leads to slower issue resolution times and increases in downtime, waste, and overall management costs.

    So, what’s causing this disconnect? The short answer is a lack of centralization and standardization around critical management systems and functions.

    A closer look, however, reveals a misalignment between the data center and the business it supports. In this white paper, we explore three pillars of CFM that help ensure this alignment: people, process, and centralized management.

    The right alignment of these pillars allows management to view all facilities functions in a “single pane of glass.” When you have this kind of centralized control, you begin to remove information silos, create new economies of scale, and achieve a lower price for a far higher standard of facilities control. Download this white paper today to learn more about the three pillars of CFM, how they can help you better align your data center with the business, and how you can work more effectively with key partners and vendors.

    8:30p
    RAND and Juniper Networks Report Finds Cybersecurity Costs to Rise 38 Percent by 2025


    This article originally appeared at The WHIR

    A new cybersecurity risk assessment model released by Juniper Networks and RAND Corp. on Wednesday finds that cybersecurity costs are on track to rise 38 percent by 2025 and that major cost reductions could be achieved by eliminating software vulnerabilities. Cutting software and application vulnerabilities in half would reduce cybersecurity costs by 25 percent.

    The “heuristic economic model” created by RAND identifies key factors and decisions that influence the cost of cyber-risk to organizations. Although the study interviewed only 18 chief information security officers, the examination was in-depth and drew on the capabilities of the RAND National Security Research Division, which conducts research and analysis for the US defense, foreign policy, homeland security, and intelligence communities.

    With the cost of data breaches and cybercrime expected to top $2 trillion by 2019, any research that attempts to nail down the cost factors involved with cybersecurity is important.

    “The security industry has struggled to understand the dynamics that influence the true cost of security risks to business,” said Sherry Ryan, chief information security officer, Juniper Networks. “Through Juniper Networks’ work with the RAND Corporation, we hope to bring new perspectives and insights to this continuous challenge. What’s clear is that in order for organizations to turn the table on attackers, they need to orient their thinking and investments toward managing risks in addition to threats.”

    Another factor that can help decrease cybersecurity costs is investment in employees. “Companies can benefit greatly in making people-centric security investments, such as technologies that help automate security management and processes, advanced security training for employees, and hiring additional security staff,” the company said.

    The RAND model finds that organizations with very high levels of security diligence can curb costs of managing security risks by 19 percent in the first year and 28 percent by the tenth year, as compared to organizations with very low diligence.
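
    To make those headline percentages concrete, the arithmetic below applies them to an assumed $10 million annual security budget; the baseline figure and the way the effects are applied are illustrative assumptions, not part of the RAND model.

        # Applying the percentages quoted above to an assumed $10M baseline.
        # The baseline and the way effects are applied are assumptions
        # for illustration, not part of the RAND model itself.
        baseline_spend = 10_000_000               # assumed current annual spend
        cost_2025 = baseline_spend * 1.38         # 38% rise by 2025

        halved_vulnerabilities = cost_2025 * (1 - 0.25)  # halving vulnerabilities: -25%
        high_diligence_year10 = cost_2025 * (1 - 0.28)   # very high diligence by year ten: -28%

        print(f"2025 projected spend:            ${cost_2025:,.0f}")
        print(f"  with half the vulnerabilities: ${halved_vulnerabilities:,.0f}")
        print(f"  with very high diligence:      ${high_diligence_year10:,.0f}")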

    Cybersecurity has been a huge focus over the last several months. Even with the funding and capabilities of national governments behind them, both the Japanese and US governments suffered major breaches in the last week. The president has had a strong focus on cybersecurity so far this year, with $14 billion for cybersecurity in the 2016 budget proposal, the ability to impose sanctions on cyberattackers, an executive order to promote threat sharing, and the establishment of a dedicated cyberthreat center.

    This first ran at http://www.thewhir.com/web-hosting-news/rand-and-juniper-networks-report-finds-cybersecurity-costs-to-rise-38-percent-by-2025

