Data Center Knowledge | News and analysis for the data center industry
 

Thursday, January 22nd, 2015

    1:00p
    Data Center SDN Startup Pluribus Raises $50M

    Pluribus Networks, a Palo Alto, California-based data center SDN startup, has closed a $50 million Series D funding round led by Temasek Holdings, a huge Singapore investment company with a total portfolio value of $177 billion.

    Pluribus’ flagship product, Netvisor, creates a virtual network over a group of bare-metal hardware switches, forming a switching fabric that abstracts the physical resources and presents logical virtual switches to applications.

    Pluribus is one of numerous startups pitching the technological approaches to data center infrastructure pioneered by the so-called Web-scale operators – the likes of Google, Facebook, and Twitter – to smaller companies with much smaller data center requirements. SDN, or software-defined networking, makes networks more agile by automating network configuration tasks that have traditionally been done manually and taken a long time to complete.

    In addition to data center SDN functionality, companies like Pluribus are offering the benefit of independence from traditional network vendors like Cisco that sell switches that work only with Cisco software. Pluribus and its competitors, startups such as Cumulus Networks and Big Switch Networks, make network operating systems and management software that can be deployed on any bare metal hardware.

    Last year, there were multiple signs that traditional network hardware “incumbents” were reacting to the trend. In December, Juniper Networks announced plans to ship a switch that will support any network OS. Dell has a line of switches that ship with Cumulus, Big Switch, or Midokura software.

    “Hardware independence coupled with a secure and programmable SDN platform, offering scale, automated configuration, operational simplicity, and open standards are key working group requirements being addressed by the Pluribus Netvisor Operating System,” Nick Lippis, co-chairman of the Open Networking User Group, said in a statement. ONUG is an SDN think tank of sorts, which has some heavyweight end users on its member roster, including IT execs from Citi, UBS, Bank of America, Fidelity Investments, FedEx, Pfizer, and Gap, among others.

    All of the company’s existing investors participated in the latest Pluribus funding round. They include New Enterprise Associates, Menlo Ventures, Mohr Davidow, and AME Cloud Ventures, the fund led by Yahoo! co-founder Jerry Yang, who was also the first investor in Pluribus.

    Other new investors were Ericsson and Netwech, a turnkey data center infrastructure provider in Asia.

    4:00p
    Bitcoin Gets Liquid: BitFury Buys Immersion Cooling Specialist

    One of Bitcoin’s biggest players is turning to immersion cooling to address the shifting economics of cryptocurrency mining. Bitcoin hardware specialist BitFury Group said today that it will acquire Allied Control, a startup known for designing a high-density bitcoin mine in a Hong Kong skyscraper.

    The deal is a vote of confidence in immersion cooling, in which high-density hardware is dunked into fluids similar to mineral oil. BitFury’s move suggests other mining players may also examine liquid cooling as a tool to slash operating costs following a price crash, which has altered the economics of bitcoin and caused a shakeout in the mining sector.

    The acquisition may also boost the use of data center containers to allow bitcoin miners to shift capacity to areas with cheaper power costs and renewable energy sources.

    BitFury is a leading maker of specialized semiconductors for Bitcoin transaction processing (“mining”), known as Application Specific Integrated Circuits (ASICs). Last year BitFury raised $20 million in venture funding to roll out a global data center network, including facilities in Finland, Iceland and the Republic of Georgia.

    Allied Control creates extreme-density data centers for high performance computing. Last year it expanded into bitcoin, creating tanks filled with Novec, a liquid cooling fluid made by 3M. Each tank houses densely packed boards of ASICs. As the chips generate heat, the Novec boils off, removing the heat as it changes from liquid to gas. The system is extremely efficient, with one client reporting a Power Usage Effectiveness (PUE) of 1.02.
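
    For readers unfamiliar with the metric, PUE is simply total facility power divided by the power delivered to IT equipment, so a PUE of 1.02 means cooling and other overhead add only about 2 percent on top of the IT load. A minimal sketch of the arithmetic, using illustrative numbers rather than figures from the article:

        def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
            """Power Usage Effectiveness: total facility power / IT equipment power."""
            return total_facility_kw / it_equipment_kw

        # Illustrative numbers only: a 1,000 kW IT load with 20 kW of
        # cooling and electrical overhead yields the kind of PUE reported here.
        print(pue(total_facility_kw=1020.0, it_equipment_kw=1000.0))  # -> 1.02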

    “This acquisition will enable us to substantially increase the energy efficiency of our data centers and speed up deployment of our new ASIC chip, allowing us to lower overall capital expenditure,” said Valery Vavilov, CEO of BitFury. “In addition, it provides an opportunity for us to enter new markets such as HPC, using the experience of the Allied Control team. The use of immersion cooling will provide BitFury with flexibility when choosing locations for our data centers.”


    Liquid cooling solution boils inside an Allied Control immersion cooling tank, dissipating heat from the bitcoin ASIC boards visible under the surface. (Photo: Allied Control)

    BitFury says that using immersion cooling will reduce operating expenses on data center maintenance as well as lower its PUE. It has been working with Allied Control on a proof of concept (POC) for its immersion-cooled data center. “The results, so far, are very promising and we are looking forward to significantly scale up soon,” the company said.

    The shift to immersion cooling reflects a sharper focus on data center efficiency by industrial mining operations, whose profit margins have been squeezed by the recent collapse in the price of bitcoin. After soaring as high as $1,100 in late 2013, the value of a bitcoin has plunged to about $230. This has had a huge impact on bitcoin cloud mining, with some firms shutting down or halting payouts to customers.

    In addition to supporting extreme power density, immersion cooling has the potential to slash the cost of data center infrastructure, allowing users to operate servers without a raised floor, computer room air conditioning (CRAC) units or chillers. It also allows ASICs to operate without fans, which are typically among the largest components of a bitcoin mining rig.

    Rapid Refresh Rate

    Allied Control’s immersion cooling tanks are ideal for bitcoin mining because they support rapid hardware refresh cycles. The backplanes housing the ASICs are designed to support multiple generations of chips.

    This is of particular interest to BitFury, which has updated its ASIC design three times over the past two years to keep pace with the arms race in bitcoin hardware. In 2013 bitcoins were mined using CPUs and GPUs, followed briefly by FPGAs (Field-Programmable Gate Arrays), and then the launch of custom ASICs.

    4:30p
    The End of the Public Cloud’s Reign

    Keao is responsible for marketing at 365 Data Centers and has led global marketing as the CMO at Dimension Data’s Cloud Business, OpSource, Reliance Globalcom and Yipes.

    The public cloud is not for everyone, and that fact is bringing an end to the public cloud’s reign as we know it. The public cloud’s massive growth in popularity was a bubble built on public Infrastructure-as-a-Service (IaaS) being promoted as the ultimate IT solution for most, if not all, companies.

    Unfortunately for the proponents of public cloud omnipotence, demand for undifferentiated or unsupported IaaS seems to be waning, with cloud-savvy businesses seeking a combination of managed cloud services, private cloud, or colocation rather than pure public cloud solutions.

    For evidence, look no further than the price wars between the behemoths of the industry as the commoditization of public cloud services reflects buyers’ unwillingness to pay extra for undifferentiated services. Or look at Rackspace abandoning private and public cloud in favor of managed cloud.

    The Need for an Alternative Approach

    As businesses ventured into the public cloud with high hopes that it would solve all their IT woes – and do so at a considerably lower cost – they soon discovered that mission-critical functions requiring high availability and performance ran more successfully in a hybrid model that includes private cloud or colocation. These realizations have led some organizations to return to, or augment, their public cloud deployments with private cloud or colocation for applications that require more performance, control, management, or security. The reality is that companies need a hybrid solution to ease into the public cloud, or back out of it, while still controlling the mission-critical parts of IT and the applications that matter most to them and their clients. There are many reasons why a hybrid approach is more practical for a variety of companies.

    Protecting Precious Possessions

    A company’s most valuable asset is often its data. But data is a particularly vulnerable resource, always at risk of loss or corruption through attack, natural disaster, theft, power outage, or malware. Relinquishing control over data and IT resources can be a major downside of using the public cloud. As such, data security is often one of the most pressing questions for IT managers when companies consider alternative data-storage options.

    In the public cloud, the infrastructure is out of the company’s hands and, in many cases, geographically far away. Getting in touch with someone in customer care who is willing and able to address technical problems in a timely manner has proven to be another concern.

    With colocation, the customer retains control over the operation and maintenance of their servers and hardware while the data center provides the required infrastructure including space, power, bandwidth, cooling, security, and redundant systems. It gives companies increased control over the environment while delivering the benefits of the cloud and the added services they need.

    Many data centers offer managed services that can be leveraged to monitor and manage IT infrastructure from the IT manager’s desktop. Managed colocation is offered as a complete turnkey solution in which the customer owns the hardware but the data center manages all aspects of its operation, from updates and systems integration to troubleshooting. The key benefit is that skilled specialists manage each aspect of the IT infrastructure, and that it is managed locally by people who are accessible when an urgent issue needs immediate attention. Issues such as downtime that may be perceived as a small blip by a large and distant IaaS provider can have disastrous consequences for businesses that rely on those services.

    Growing Impact of Proximity

    For the most part, public cloud IaaS was built on the model of having extremely large, centralized facilities that take advantage of economies of scale to make for easier management by the IaaS provider as well as improved profitability for them.

    For this reason, public cloud facilities are generally located in cities with low-cost access to space and power. However, mission-critical applications that require storage and data access need extremely low latency to function and integrate properly with other systems. This typically means that the data storage and the infrastructure need to be physically located close to one another. Beyond roughly 100 miles, even a fiber-optic connection adds enough latency that certain applications cannot achieve the throughput they need to run properly.
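
    As a rough sanity check on that distance figure, light in fiber travels at only about two-thirds the speed of light in vacuum, so propagation delay can be estimated directly from distance. A minimal sketch with an assumed velocity factor (the ~0.68 figure is a common approximation, not a number from the article):

        # Estimate round-trip propagation delay over optical fiber.
        C_VACUUM_KM_PER_MS = 299_792.458 / 1000   # speed of light, km per millisecond
        FIBER_VELOCITY_FACTOR = 0.68              # assumed: light in fiber travels at ~2/3 of c

        def round_trip_ms(distance_km: float) -> float:
            """Round-trip delay over fiber, ignoring switching and queuing."""
            one_way = distance_km / (C_VACUUM_KM_PER_MS * FIBER_VELOCITY_FACTOR)
            return 2 * one_way

        print(round(round_trip_ms(161), 2))   # ~100 miles -> roughly 1.6 ms round trip

    Even a couple of milliseconds per round trip compounds quickly for chatty, transaction-heavy applications, which is the crux of the proximity argument.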

    Proximity becomes even more important with the advent of new technology at the network edge that will deliver far higher bandwidth to businesses and consumers. Whether it’s mobile’s 4G LTE, high-speed cable or fiber to the home, network throughput at the edge will be increasing by about 10 times in the not-so-distant future. Delivering a good subscriber experience for content or cloud services at the edge will become increasingly difficult from geographically distant locations.

    Further, the economics of transporting 10 times more data at the edge could easily mean several times that amount will need to traverse the backbone, and it will no longer make economic sense for carriers to support this model. Current peering agreements will likely break down under the strain. In this case, the benefit of improved local service, security, control, and management of your data and cloud assets will pale in comparison to the cold hard reality faced by carriers: they will simply not be able to support massive increases in traffic across their backbones.

    What Does it All Mean?

    Taken together, these trends mean that a more distributed architecture will be necessary going forward. Cloud and data center facilities in former so-called Tier 2 and Tier 3 metro areas will become increasingly important as the edge transforms and local content and service delivery becomes necessary.

    Consumers of public cloud IaaS from the behemoths will surely appreciate the price reductions. However, lower prices do not always equate to better value. For businesses running mission-critical applications that require always-on uptime and high performance to deliver what their clients demand, a local, hybrid cloud and colocation architecture will ultimately be the best choice.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:00p
    Explosion of Research Data Drives “Tipping Point” for IT Facilities

    The scientific community explores the world around it, whether large or small, from the cosmos and its origins to things smaller than the human eye can perceive, such as cells and the human genome. These represent the two ends of the spectrum of scientific inquiry, and now more than ever, both extremes (and many inquiries in between) require more and more scientific computing and storage.

    This phenomenon is driving IT facilities design at many academic institutions, according to James Cuff, Assistant Dean for Research Computing in the Faculty of Arts and Sciences at Harvard University. Cuff, who previously worked on the Human Genome Project at the Wellcome Trust Sanger Institute, will be one of the keynote speakers at the spring Data Center World Global Conference 2015 in Las Vegas, where he will discuss the current and future state of the data center, which is being called on to do more work and do it more efficiently.

    The pain points are common to most data center managers: it is not only academics who are experiencing the strong uptick in demand for compute and storage, but enterprises, service providers, government organizations, and the like as well.

    “Data centers are the backbone of civilization,” Cuff said in a phone interview from his office in Cambridge. “Basic science is being done through computing. We have researchers who are modeling the early universe.” He added that a new microscope has come online that produces 3 terabytes of data per hour. (Talk about a storage challenge!) For Harvard, Cuff said, scientific computing power went over time from core counts in the “hundreds” to more than 60,000 CPUs today. Its storage is currently about 15 petabytes.

    Looking Ahead Leads to Collaboration

    Seeing the ever-expanding need for compute and storage, as well as the scientific inquiry driving it, Harvard, along with four other research institutions — Boston University, MIT, Northeastern University, and the University of Massachusetts — collaborated on a new data center facility in Western Massachusetts called the Massachusetts Green High Performance Computing Center (MGHPCC). The facility, run by a non-profit owned by all five universities, is located where hydroelectric power is available. The data center was built to provide 10 megawatts but has deployed 5 MW so far.

    The key to consolidating disparate computing resources into a more efficient shared facility was to “build trust with the community,” Cuff said.

    James Cuff

    “It goes back to the old days. With mainframes, the scientific community had to share,” he said. “Then the PC blew the doors off it. Until researchers found that one computer was not enough to get everything done. Then, they networked machines together.” The next tipping point has arrived, as the number of networked machines has grown significantly and now requires specialized power, cooling, and monitoring. This led to the consolidation of computing resources and to sharing computers again. The process included starting slowly with one area, Life Sciences, and “walking across the quad with computers under our arms at times,” Cuff said. New equipment was also used as an enticement for researchers to move toward consolidation. Today, computing resources are outlined in faculty offer-of-employment letters.

    The ability to provide a new facility and make it more energy efficient was also very appealing to the group of institutions. “In Cambridge, energy comes from coal, oil, and non-renewable resources,” Cuff said. “In Holyoke, it was an old mill town with a massive dam.”

    The MGHPCC is located in Holyoke, Mass., to take advantage of renewable power.

    The other benefit of the site was its connection to Route 90 (known as the Mass Pike), which has a high-speed fiber-optic network running along it. “The different universities just use different wavelengths of light on the fiber,” Cuff explained.

    Currently, Cuff uses three flavors of facility to meet the computing needs of the faculty and staff:

    • Service provider site in Cambridge/Boston – a high-reliability site for “the crown jewels” (energy price: 15 to 16 cents per kilowatt-hour)
    • MGHPCC in Holyoke – “cheap and cheerful” and less reliable (only about 20 percent of the power is on uninterruptible power supply), but with a great amount of capacity and full access to the machines for users (energy price: 8 to 9 cents per kWh); Harvard runs pods of 22 to 24 racks with hot-aisle containment
    • Public clouds such as AWS – for instant, easy, or temporary workloads

    This setup allows the academic team to take advantage of the fact that some computing requirements are transient. Workloads can be sent to the location that suits them, deployed where and when it is most cost-efficient to run them. It should be noted that this works in research because some jobs are not especially time-sensitive.
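
    As a conceptual illustration of that placement logic (a sketch only; the site names, costs, and rules below are assumptions for illustration, not Harvard’s actual scheduling code):

        # Hypothetical tiered workload placement by criticality, transience, and energy cost.
        SITES = {
            "boston_colo":  {"cents_per_kwh": 16, "reliability": "high"},
            "mghpcc":       {"cents_per_kwh": 9,  "reliability": "medium"},
            "public_cloud": {"cents_per_kwh": None, "reliability": "high"},  # pay per use
        }

        def place_workload(mission_critical: bool, transient: bool) -> str:
            """Crown jewels stay at the premium site, bursty jobs go to cloud, the rest to cheap power."""
            if mission_critical:
                return "boston_colo"
            if transient:
                return "public_cloud"
            return "mghpcc"

        print(place_workload(mission_critical=False, transient=False))  # -> mghpcc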

    “We have tiered storage, including EMC gear and file systems that are built in house. So we get both vendor support and internal support on storage,” he said.

    Monitoring, Power Management and Orchestration

    “New chipsets are allowing for throttling of energy use during compute cycles, so a job that would run for two months could be cut back to go for two months and a week. That would make an energy difference,” Cuff explained. He is now actively watching power usage through rack-level monitoring. Previously, he was not as aware of energy usage. “Facilities paid the power bill. I was not incentivized to conserve. The CIOs should get the energy bills handed to them,” he said.
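
    The arithmetic behind that trade-off is simple: energy is power multiplied by time, so a modest reduction in power draw can save energy overall even though the job runs a bit longer. A minimal sketch with assumed figures (not numbers from the interview):

        # Assumed for illustration: throttling cuts power draw by 15 percent
        # while stretching a two-month job by one week.
        baseline_kw, baseline_days = 10.0, 60
        throttled_kw, throttled_days = 8.5, 67

        baseline_kwh = baseline_kw * baseline_days * 24
        throttled_kwh = throttled_kw * throttled_days * 24
        print(f"baseline:  {baseline_kwh:,.0f} kWh")    # 14,400 kWh
        print(f"throttled: {throttled_kwh:,.0f} kWh")   # 13,668 kWh, despite the longer run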

    Currently, the use of the MGHPCC allows Harvard to set standardization on vendor platforms and management tools. “We use Puppet for orchestration layer,” Cuff said. “We’d be dead in the water without orchestration software. We have an army of machines that all look like their friends, and if there is one that is different we can identify it quickly.”

    What Lies in the Future

    “We used to say that we were growing by 200 kilowatts every six months,” he said. Now, he has a monthly meeting and he’s asked “how many racks” will be added in a given month.

    “There’s a steep curve,” he said, “Once adoption happens, things start to pick up speed. We have now added the School of Engineering and the School of Public Health. In nine months, we will be looking at our next stage of design. The MGHPCC is about 50 percent occupied, I expect we will need about 40K more CPUs and the storage requirement is expected to grow as well.”

    To hear more about the high performance computing facility that Harvard is using and more case studies of the science it supports, attend Cuff’s keynote session at spring Data Center World Global Conference in Las Vegas. Learn more and register at the Data Center World website.

    6:30p
    CenturyLink Data Center in Silicon Valley Gets New Landlord

    Westcore Properties has bought a property in Sunnyvale, California, that includes a three-story office building and a one-story CenturyLink data center. The telco and data center services company is the data center’s sole tenant.

    The facility was one of 17 data centers CenturyLink added to its portfolio when it acquired Qwest Communications in 2010.

    Westcore, a San Diego, California-based commercial real estate firm, bought the 165,000 square foot property from WTA Kifer, a real estate LLC based in Palo Alto, for $52.7 million.

    A CenturyLink spokesman told us CenturyLink’s operations in the data center have not been affected by the acquisition. The data center tenant recently signed a five-year renewal on the lease, Victoria Grether, Westcore’s vice president of acquisitions and asset management, told Silicon Valley Business Journal.

    Marc Brutten, Westcore chairman and founder, said the company now owns 3.5 million square feet of commercial space in the San Francisco Bay Area.

    “We will be undertaking a major renovation of the office complex, which will help us to attract strong tenants,” he said in a statement. “We anticipate meaningful demand for space throughout the property once it is completed in the middle of this year.”

    According to Business Journal, one of the highest-profile deals Westcore has done recently was an acquisition of a former printing plant of the San Francisco Chronicle. The real estate company tore the building down to replace it with a manufacturing and distribution building, which, once complete, will be sold to Terreno Realty Corp.

    8:30p
    Federal Agencies Using Open Source Solutions More Satisfied with Cloud Security: MeriTalk


    This article originally appeared at The WHIR

    Seventy-five percent of federal IT workers want to move more services to the cloud, but are held back by data control concerns, according to a survey released this week by MeriTalk. According to “Cloud Without the Commitment,” only 53 percent of federal IT workers rate their cloud experience as very successful, the same number as are being held back by fear of long-term contracts.

    MeriTalk surveyed 150 federal IT professionals in December for Cisco and Red Hat, and found serious interest and serious concerns about committing to cloud.

    Those agencies using open source solutions are reporting much higher levels of satisfaction with cloud data security and agility, with over 20 percent higher satisfaction levels for both among open source users.

    “Open source is not only driving much of the technology innovation in cloud, it is also enabling government agencies to answer their questions about cloud portability and integration,” says Mike Byrd, senior director, Government Channel Sales, Red Hat. “In this way, it is not surprising to me that the survey respondents who have embraced open source reported greater cloud success.”

    The survey also strongly indicates the most successful path to federal cloud adoption. Sixty-five percent are not completing an appropriate workload analysis, and 60 percent are not developing a cost model, while 56 percent have worked with a consultant and found the experience “very helpful.”

    The top barriers to further migration are integration concerns at 58 percent, followed by the inability to actually migrate the data from existing legacy systems (57 percent), and data mobility once it is in the cloud (54 percent).

    Additionally, an estimated 32 percent of data cannot be moved to the cloud because of security or data sovereignty issues, and 23 percent are not comfortable storing sensitive data with even FedRAMP-certified providers.

    Federal workers concerned about cloud security already have a higher standard to refer to, developed for the Department of Defense and released last week.

    Cloud adoption has continued at federal agencies despite the misgivings, supported by Cloud First and FedRAMP. Nineteen percent said their agency delivers a quarter or more of its services from the cloud, led by email and web hosting.

    The survey authors conclude that federal IT workers should start small, and build successful cloud use models, and that they should carefully consider provider options and explore open source options that deliver the needed flexibility.

    An August MeriTalk survey showed a lack of confidence in federal data center reliability among federal IT workers.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/federal-agencies-using-open-source-solutions-satisfied-cloud-security-meritalk

    9:00p
    GoDaddy Patches Vulnerability That Could Allow Hackers to Hijack Customer Domains


    This article originally appeared at The WHIR

    A vulnerability that could allow GoDaddy customer domains to be taken over has been patched. The vulnerability was discovered by independent security engineer Dylan Saccomanni on Saturday, and fixed within 48 hours.

    Saccomanni published a post on Sunday detailing the vulnerability. It meant that if an attacker successfully lured a GoDaddy customer to a site hosting an attack, the attacker could edit nameservers or change other DNS management settings and take over the customer’s domain.

    After discovering the vulnerability, Saccomanni made a series of attempts to notify GoDaddy. Finding that he could not reach GoDaddy through a couple of email addresses typically monitored within the industry, and that Google searches and phoning support did not provide the contact he was looking for, Saccomanni reached the company publicly through Twitter.

    Cross-site request forgery, or CSRF, is characterized by Threatpost as “a chronic web application vulnerability,” and Saccomanni told Threatpost that it wouldn’t be difficult to exploit.
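
    For context, CSRF works because the victim’s browser automatically attaches valid session credentials to a request forged by another site; the standard defense is a per-session token the attacker cannot read or guess. A minimal, generic sketch of that check in Python (purely illustrative, not GoDaddy’s code):

        import hmac
        import secrets
        from typing import Optional

        def new_csrf_token() -> str:
            """Unguessable token embedded in the site's own forms and tied to the session."""
            return secrets.token_urlsafe(32)

        def request_is_legitimate(session_token: str, submitted_token: Optional[str]) -> bool:
            """Reject state-changing requests that do not echo the session's token."""
            if submitted_token is None:
                return False  # a cross-site forgery cannot read or supply the token
            return hmac.compare_digest(session_token, submitted_token)

        token = new_csrf_token()
        print(request_is_legitimate(token, None))    # forged request -> False
        print(request_is_legitimate(token, token))   # genuine form submission -> True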

    “A user could have a domain de facto taken over in several ways. If nameservers are changed, an attacker changes the domain’s nameservers (which dictates what server has control of DNS settings for that domain) over to his own nameservers, immediately having full and complete control,” Saccomanni said. “If DNS settings are changed, he simply points the victim’s domain towards an IP address under his control. If the auto-renew function is changed, the attacker will try to rely on a user forgetting to renew their domain purchase for a relatively high-profile domain, then buy it as soon as it expires.”

    Saccomanni expressed frustration at the difficulty he had reaching GoDaddy and at the fact that the company would not let him speak directly to a security engineer.

    Microsoft and Google have accused each other of self-serving security policies following Google’s publication of a zero-day Windows vulnerability before it was patched, despite Microsoft’s request that the disclosure be delayed.

    The WHIR contacted GoDaddy for comment but had not heard back as of publication.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/godaddy-patches-vulnerability-allow-hackers-hijack-customer-domains

    9:40p
    IBM and Top Linux Distros Team Up to Drag x86 Workloads Onto Power

    IBM, a long-time supporter of Linux, has been working with major distributions of the popular open source operating system to make it easier for users to port applications written for x86 servers onto IBM Power Systems.

    IBM got out of the x86 server business just last year, after it sold its System x unit to China’s Lenovo for $2.3 billion. The company has since been placing a lot of focus on growing its business around the Power processor architecture and servers built on it.

    The way IBM and the big Linux distros – Canonical (the company behind Ubuntu), Red Hat, and SUSE – are tackling the portability problem has to do with the way server platforms treat data stored in memory. Most Linux software is written for the x86 architecture, which uses the “little endian” approach to storing bytes in memory. The alternative is “big endian,” which has traditionally been used by mainframes and IBM’s Power architecture.
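
    For readers unfamiliar with the distinction, endianness is simply the order in which the bytes of a multi-byte value are stored. A quick sketch using Python’s struct module shows how the same 32-bit integer is laid out under each convention; software that assumes one layout while reading raw binary data produced under the other will misinterpret the value, which is why byte order matters for porting:

        import struct

        value = 0x0A0B0C0D

        little = struct.pack("<I", value)   # x86-style: least significant byte first
        big    = struct.pack(">I", value)   # traditional Power/mainframe order

        print(little.hex())  # 0d0c0b0a
        print(big.hex())     # 0a0b0c0d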

    Power supports both endian modes, but until recently, the major Linux distros written for Power supported only big endian. That has now changed. Canonical was first to launch a release of Ubuntu Server to support little endian on Power (Ubuntu Server 14.04) in April of last year, and SUSE rolled one out in October (SUSE Linux Enterprise 12).

    Red Hat released its Enterprise Linux 7.1 Beta, the company’s first distro to support little endian on Power, in December, and today IBM announced availability of that release through its Power Development Platform and at its innovation and client centers worldwide. The move essentially makes it easier for software vendors, enterprise developers, and individual developers to try it. IBM’s Power Development Platform gives developers free access to test Power servers over the Internet.

    “With these resources, ISVs, in-house and open source developers have an opportunity to access and test the beta on their own, with additional toolkits created specifically for our Power community,” Doug Balog, general manager for Power Systems at IBM, wrote in a Thursday blog post.

    IBM first announced that its hardware, software, and services would support Linux in 2000. Around the same time, the company committed to investing $1 billion in the open source project to make it more palatable for enterprise users. IBM Power5, launched in 2004, was the first Power system to support the 64-bit Linux kernel.

    But the Power team at IBM has recently cranked up its focus on Linux. In 2013, the company committed another $1 billion to development of new Linux and other open source tech for Power Systems.

