Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, August 5th, 2015

    12:00p
    Why Should Data Center Operators Care About Open Source?

    Software developers obviously love open source. They get to collaborate, build on top of work already done by others instead of constantly building from scratch, and add features they need to existing solutions. Innovation often happens faster in open source communities than it does behind closed doors of corporate development departments.

    While software runs in data centers, the data center manager’s job doesn’t usually extend much beyond ensuring there is enough IT, power, and cooling capacity to support the application workloads and that the systems are configured and secured properly. But it isn’t going to stay this way forever.

    As software development tools evolve, and as businesses increasingly look to software as the main way to grow their value, more software is going to keep coming down the pipeline, and it’s going to come more frequently, requiring unprecedented levels of agility on data center managers’ part.

    This is why DevOps (shorthand for development and operations) is such a hot space. DevOps is about maintaining a constant feedback loop between developers and infrastructure operators to ensure new software that’s being written can be deployed quickly and efficiently, and that it can scale. IT automation and software-driven data center management in general are a big part of the DevOps philosophy and its great enablers, and a lot of important innovation in this space happens in open source communities.

    Data Center Manager of the Future is a Software Developer

    The data center manager of the future is an expert in both operations and software development. Software-enabled IT automation is what makes services at global scale possible, and enterprises that want to grow value through technology have to have a good grasp of the technologies that make this kind of infrastructure work.

    Google, which pioneered much of today’s thinking around operating services at global scale, doesn’t have sys admins. It has what it calls site reliability engineers, essentially software developers responsible for automating the infrastructure behind the Google services they are assigned to, Geng Li, CTO of enterprise infrastructure at Google, said while sitting on a panel at a recent industry event in San Francisco.

    Data center managers should stay abreast of what is happening in the open source communities created around software for data centers and participate in those communities, Jim Zemlin, executive director of the Linux Foundation, said, while sitting on a different panel at the DCD Internet conference in late July. The panel’s theme was the role of open source in the data center world.

    “I think open source is on the right side of history,” Zemlin said. The overall trend is toward collaborative development, and open source efforts lead this trend, because there is simply too much software that needs to be written for any single organization to write it on its own, he explained. It needs to be written because business leaders look at IT as a key growth driver.

    “The reality is that in order for business value to be driven out of IT, we need to get developers extremely productive and get rapid time to production,” he said, adding that this is something every CEO he talks to wants nowadays.

    Fighting Vendor Lock-In With Open Source

    Open source is also a way to fight the ever-dreaded vendor lock-in. By open sourcing its server designs through the Open Compute Project, Facebook opened the door for numerous hardware suppliers outside the typical roster of go-to data center vendors to compete with those incumbents for the same deals.

    Now that there are several compute, storage, and network hardware designs available publicly through OCP, there’s also a lot of work happening in the area of software, specifically network management.

    There are multiple open source efforts to drive standardization of network management software, and the incumbent networking technology vendors have joined some of them, demonstrating not only that they see these efforts as a threat to their market dominance but also that they see their progress as inevitable. Vendors usually participate in open development efforts to be able to influence their direction and to ensure that whatever technology comes out of them is compatible with their own products.

    One big example is the Linux Foundation’s OpenDaylight, an initiative that has created an open source SDN controller. All major network vendors, including Cisco, HP, Dell, Juniper, Arista, and Huawei, are the initiative’s sponsors. Neela Jacques, OpenDaylight’s executive director, said such efforts exemplify how standardization is moving from standards organizations, with their constant infighting, to collaborative open source projects.
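
    For a sense of what consuming a controller like OpenDaylight actually involves, here is a minimal sketch of querying its northbound REST (RESTCONF) interface for the topology the controller has learned. The address, port, path, and default credentials are assumptions based on OpenDaylight releases of that period and will vary by deployment; this is an illustration, not the project’s documented quick start.

        import requests

        # Assumed controller address, default RESTCONF port, and stock credentials --
        # placeholders to adapt to your own OpenDaylight deployment.
        ODL = "http://odl.example.com:8181"
        AUTH = ("admin", "admin")

        # The operational data store reflects what the controller has actually
        # learned from the switches it manages (nodes, links, topologies).
        resp = requests.get(
            ODL + "/restconf/operational/network-topology:network-topology",
            auth=AUTH,
            headers={"Accept": "application/json"},
        )
        resp.raise_for_status()

        for topo in resp.json()["network-topology"]["topology"]:
            print(topo.get("topology-id"), "-", len(topo.get("node", [])), "nodes")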

    But to participate in open source efforts and take advantage of them, the user needs to work harder than they may be used to. Open source organizations don’t have salespeople that look for end users. It’s up to the user to do the leg work. “It takes more work to consume it,” Jacques said. “It takes more work to understand it.”

    The good part, however, is that there isn’t a salesperson or a product manager between the user and the creator of the technology, he added. Open source projects provide direct lines of communication between users and developers.

    Freedom From the “Box Industry”

    Another major open SDN effort is the Open Networking Foundation, backed by a group of large network users, including Google, Facebook, Microsoft, Yahoo, Verizon, NTT, Goldman Sachs, and Deutsche Telekom.

    ONF’s biggest accomplishment so far is the creation of OpenFlow, a standard that separates the control plane from the forwarding plane in network switches and allows the control plane to run on a regular x86 server. The two functions have traditionally come locked together in proprietary vendor boxes, with the vendor in effect dictating the design and management of the user’s networking environment.

    SDN enables the transition from the “box-based” networking ecosystem to a compute- and software-based one, ONF Executive Director Dan Pitt said. It serves “to free the network operator from the box industry,” he said.
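
    To make the control/forwarding split concrete, the sketch below uses the open source Ryu controller framework (not mentioned in the article) to run a trivial control plane on a plain server: when an OpenFlow 1.3 switch connects, the application installs a single catch-all flow entry that floods traffic. It is a minimal illustration of a controller programming a switch’s forwarding plane, not a production design.

        from ryu.base import app_manager
        from ryu.controller import ofp_event
        from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
        from ryu.ofproto import ofproto_v1_3

        class FloodEverything(app_manager.RyuApp):
            """The simplest possible control plane: tell the switch to flood all traffic."""
            OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

            @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
            def on_switch_connect(self, ev):
                dp = ev.msg.datapath              # the switch that just connected
                parser = dp.ofproto_parser
                ofp = dp.ofproto
                # Install one catch-all flow entry: match everything, flood it.
                match = parser.OFPMatch()
                actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
                inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
                dp.send_msg(parser.OFPFlowMod(datapath=dp, priority=0,
                                              match=match, instructions=inst))

    Run with ryu-manager against any OpenFlow 1.3 switch (or an emulator such as Mininet), and the forwarding behavior of the box is decided entirely by software running on an x86 server.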

    The Price You Pay

    And that’s essentially what’s at the core of open source – shifting control over the direction of technology from vendors to users. Don’t want the box? Use open source and build your own or have somebody do it for you. Of course it’s never that simple. Salesforce, for example, in transitioning to a web-scale-style infrastructure, found that far from everything it needs is available through open source communities. The company’s engineers are building a lot of the tools they need to make the transition in-house and hoping to open source some of them eventually.

    But that’s the trade-off. Like with all other things in life, taking more control means taking on more responsibility and a lot more work.

    3:00p
    The Importance of Scalability and Cost of Data Center Solutions

    Chris Alberding is the Vice President of Product Management for FairPoint.

    Two of the most influential factors in evaluating data center solutions are scalability and cost. These two have the greatest potential to impact data center performance, and they are also key reasons businesses migrate to a service provider’s data center colocation facility.

    Scalability – How Data Center Solutions Support Seamless Growth

    When a business builds its own on-premises data center, it faces the challenging task of determining the right size. The facility must meet current requirements, but it must also be able to address future capacity needs. But how do you rightsize a data center? How far into the future should your data center serve your business?

    When designing an in-house data center, cost control and performance depend on not over- or under-building the facility. If you over-build, resources will be wasted – resources that could be used to grow your business.

    Idle capacity is not only costly for your business, but you may end up with obsolete technology by the time you need to use it. For example, if you build a data center with an expected lifespan of 10 years, the extra capacity you’ve built to accommodate future growth may become outdated after five years. Not being able to leverage the advances in energy consumption, performance and other capabilities can put your operation at a distinct disadvantage.

    Under-building a data center has its own challenges and can be even more costly. If you run out of capacity sooner than you planned, you’ll be looking at a huge capital expense to expand your existing facility’s footprint.

    Because of the major issues from over- or under-building, having the ability to scale data center operations quickly, easily and economically is a priority for many businesses. To leverage the most flexible solution, businesses typically select a data center colocation model.

    Turning to a data center colocation provider allows you to “pay-as-you-grow.” You can add or contract your leased space as needed and only pay for what you’re using today – no idle or insufficient capacity. You eliminate all facility-related issues and can maximize the value of your IT investment.

    Cost – How Data Center Solutions Eliminate Capital Expenses

    Because of its many benefits, data center colocation has experienced growing demand in recent years. A major driver of this growth is the lower cost of data center solutions.

    Data center colocation eliminates the capital costs associated with housing and protecting mission-critical systems. In fact, many companies today question whether an in-house data center is a good investment.

    The data center colocation model shifts capital expenses to operating expenses. Building an in-house data center requires a substantial capital outlay. In comparison, data center colocation involves a predictable, monthly operating expense. When you forgo building your own data center, the huge savings can be used instead to help fund your company’s growth. Data center colocation’s cost predictability also improves budgeting and resource allocation.
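
    The capex-to-opex shift is easy to picture with a back-of-the-envelope comparison. Every figure in the sketch below is a hypothetical assumption for illustration only; it is not data from the article or from any provider.

        # Hypothetical build-vs-colocate comparison -- all numbers are assumptions.
        YEARS = 10

        build_capex = 8_000_000           # assumed up-front construction cost
        build_opex_per_year = 400_000     # assumed staffing, power, maintenance
        colo_lease_per_month = 70_000     # assumed lease for equivalent capacity

        build_total = build_capex + build_opex_per_year * YEARS
        colo_total = colo_lease_per_month * 12 * YEARS

        print(f"Build: ${build_capex:,} up front, ${build_total:,} over {YEARS} years")
        print(f"Colo:  $0 up front, ${colo_total:,} over {YEARS} years")

    The point is structural rather than numerical: building concentrates cost in an up-front capital outlay sized against a forecast, while colocation spreads a predictable monthly expense that can grow or shrink with actual demand.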

    In addition to being costly, the “build-it-yourself” approach creates uncertainty. As discussed previously, predicting and planning for future growth is extremely difficult. Construction projects are also notorious for costing more and taking longer than originally planned. Plus, you may end up with insufficient capacity to meet your needs over the life of the data center.

    Why can data center colocation providers offer cheaper data center solutions? Primarily because they can exploit economies of scale and build facilities for much less than many businesses. Given the magnitude of their data center operations, service providers may have more access to investment capital.

    They also have greater purchasing power on energy, connectivity and infrastructure hardware. For example, the size of their operation may enable them to negotiate discounted rates from utilities. Because power is the largest cost component in a data center, any savings can have a significant impact on overall operations.

    In addition, data center colocation providers have specialized expertise. With highly trained staff in all areas of data center management on board, they can run extremely efficient operations. A well-maintained data center greatly reduces the risk of costly downtime.

    Data center solutions can have a tremendous impact on scalability and cost. By only paying for what you need, your business can avoid the growing pains and capital expenses associated with maintaining an in-house data center. You simply lease more data center colocation space whenever you need it – no need to wait months or years for a complex and costly construction project to be completed.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:45p
    New DMTF Server Management Standard Supports JSON, REST APIs

    In what has the potential to become a boon for IT infrastructure management, the Distributed Management Task Force this week released a standard for server management based on the JavaScript Object Notation (JSON) file format and a simple REST API. The task force promises the standard will improve performance, functionality, scalability, and security in data center and systems management.

    “Everything is now starting to converge around REST APIs and JSON,” DMTF president Jeff Hilland said. “We think this is going to be a turning point in IT management.”

    The server management standard, called Redfish 1.0, was created by the Scalable Platforms Management Group within the DMTF. Its development was led by Broadcom, Dell, Emerson, HP, Intel, Lenovo, Microsoft, Supermicro, and VMware, with additional support from AMI, Oracle, Fujitsu, Huawei, Mellanox, and Seagate.

    Redfish 1.0 is intended to replace the Intelligent Platform Management Interface, which wound up being extended in proprietary ways by every vendor that adopted it, Hilland said. The new standard is designed not only to make it clearer how a vendor has implemented it, but also to provide a format through which data stored in a spreadsheet can easily be uploaded into a systems management application that supports JSON and REST APIs.
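
    As a rough illustration of what the JSON/REST approach looks like in practice, the sketch below walks a Redfish service root and prints basic facts about each managed server. The management controller address and credentials are hypothetical placeholders, and real deployments typically use HTTPS with proper certificates and session tokens rather than basic authentication.

        import requests

        BMC = "https://bmc.example.com"   # hypothetical management controller address
        AUTH = ("admin", "password")      # hypothetical credentials

        def get(path):
            # /redfish/v1/ is the Redfish service root; every resource is plain JSON.
            return requests.get(BMC + path, auth=AUTH, verify=False).json()

        root = get("/redfish/v1/")
        systems = get(root["Systems"]["@odata.id"])

        for member in systems["Members"]:
            system = get(member["@odata.id"])
            print(system.get("Model"), system.get("SerialNumber"), system.get("PowerState"))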

    In addition, he noted, Microsoft has embraced REST APIs and JSON within its PowerShell scripting tools that are widely used within IT organizations running Windows.

    Hilland said the code used to create Redfish 1.0 is a subset of the code that was used to create the OData standard, an implementation of REST APIs that providers of application integration software are starting to adopt as a standard mechanism for sharing data.

    In terms of using REST APIs for systems management, Hilland said, Redfish 1.0 represents only the “tip of the iceberg.” The DMTF will be making additional use of REST APIs to make firmware across a wide variety of IT infrastructure types available to providers of systems management applications.

    The end result should not only eliminate the need for each provider of systems management software to develop agents for every class of IT infrastructure; it should also make the overall weight of those applications substantially less.

    Hilland noted that given the widespread support for REST APIs that already exists, the DMTF views Redfish 1.0 as an approach to systems management that most vendors will readily support. In the short term, however, the DMTF is focusing on traditional servers in order to avoid having to “boil the ocean” all at once.

    6:05p
    Officials Approve Data Center and Cogen Plant Project in Delaware

    A data center and co-generation plant might be coming to Delaware after all. The Middletown Town Council has unanimously approved plans for a data center and natural gas power plant, according to local news reports.

    That feeling of déjà vu is due to a previous, failed data center and co-gen project in Delaware by The Data Centers LLC (TDC). The Middletown project is unrelated, with different owners, but it might have to overcome the bad blood lingering from that earlier effort.

    Steve Lewandowski of Cabe Associates, project manager for landowner and developer Mautom LLC, confirmed at a Town Council meeting in June that the project’s ownership is not the same.

    The Data Centers project was proposed for the University of Delaware’s science and technology campus. Like TDC’s project, the Mautom one has its opponents, albeit on a much smaller scale. What sets the projects apart is the overall scope and how important the power plant piece is to the project’s revenue.

    The Middletown project is for a $250 million, 40 MW data center and a 52.5 MW power plant. TDC’s proposed project was for a $1.3 billion, 280 MW power plant, which opponents argued was far more than a data center needed and therefore amounted to a power plant in the guise of a data center co-gen facility.

    TDC planned to sell excess power back to the grid, and the University of Delaware decided this piece of the project was too central. Delaware Online reported that the Middletown plant would also return power to the grid via the Delaware Municipal Electric Corporation; however, this is not central to the project’s business model.

    Opponents of the TDC project mobilized, and the university put the final nail in the coffin by terminating the developer’s land lease last year. TDC briefly shopped the project around to other states but later got bogged down in multiple lawsuits.

    The Middletown project isn’t in the clear yet, although the council’s approval is a big step forward. Opponents argue the power plant will add air pollution. Two petitions opposing the project gathered a total of 530 signatures, local media reported. However, the project also has more supporters than the previous one did; they say it will bring much-needed business to the town.

    Given the failure of the previous attempt, the new developer is being proactively transparent and is offering experts to make sure the proposal clears all code requirements.

    A data center and co-gen plant is a fundamentally sound idea, but largely uncharted territory. While data centers are often met with open arms due to their positive economic effects, TDC’s experience showed that adding an unusual power source to the mix can kill a data center project.

    6:23p
    DreamHost Anchor Tenant in ViaWest’s Oregon Data Center

    ViaWest has signed cloud and web hosting provider DreamHost as the anchor tenant in its new Oregon data center. ViaWest opened the 140,000 square foot data center earlier this month.

    Securing an anchor tenant is a crucial step for any new data center build, marking the turning point when the investment starts to generate income. Companies often hold off on building a facility until they secure an anchor tenant, but so-called “speculative” builds are also common.

    Los Angeles-based DreamHost is shifting its data center strategy from operating data centers on its own to colocation services. It is consolidating two of its data centers in California into the ViaWest facility, outsourcing data center management to the provider to reduce overhead.

    “We wanted to focus our internal resources on developing products instead of maintaining our data centers,” said Patrick Lane, VP of data center operations at DreamHost, in a statement. The company believes it will be able to achieve significant cost savings in the move.

    DreamHost has over 400,000 customers and hosts 1.5 million websites and 750,000 WordPress installations. It’s also big on OpenStack, having birthed several notable open source projects. Ceph, the software-defined storage system, and Inktank, the company behind it that was later acquired by Red Hat, both came out of DreamHost, as did Akanda, a network virtualization solution for OpenStack.

    ViaWest continues to expand under its parent Shaw Communications, the Canadian telco that acquired it for $1.2 billion last year as a keystone of the growth strategy for its data center business.

    The Hillsboro area in Oregon, where ViaWest’s new Brookwood data center is located, is known as Silicon Forest due to a high concentration of technology companies and lots of data centers.

    DreamHost is also the anchor tenant in RagingWire’s first data center in Ashburn, Virginia.

    6:37p
    Toshiba, SanDisk Unveil 256 Gigabit 48-Layer 3D NAND Chip

    Toshiba and SanDisk announced their first 3-bit-per-cell (X3) 48-layer 3D NAND chip targeted for production.

    The announcement rounds out most of the recent product news for 3D NAND technology, following Samsung’s new 3D V-NAND and Intel and Micron’s introduction of 3D XPoint, a new non-volatile memory category. Korea’s SK Hynix is expected to launch a 3D NAND product later this year. SanDisk also announced that it has reached an agreement with SK Hynix regarding trade secret litigation, modifying and extending their intellectual property licensing arrangement.

    The new Toshiba and SanDisk chip is based on Bit Cost Scalable (BiCS) flash, a non-volatile, three-dimensional flash technology that Toshiba has been developing for some time. The companies recently announced 48-layer NAND chips with 128 Gbit (16 GB) of capacity; the new chip doubles that, to 256 Gbit (32 GB).
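
    The capacity figures are just a bits-to-bytes conversion, checked below; the 512 GB drive size used for the die count is a hypothetical example, not a product the companies announced.

        # Bits-to-bytes check of the stated die capacities, plus an illustrative
        # die count for a hypothetical drive size.
        for gigabits in (128, 256):
            print(f"{gigabits} Gbit per die = {gigabits // 8} GB per die")

        drive_gb = 512                                   # hypothetical SSD capacity
        print(drive_gb // (256 // 8), "of the new dies per drive")   # 16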

    SanDisk expects the new chip to be used in a wide variety of applications across consumer, client, mobile, and enterprise products. SanDisk selected this TLC (triple-level cell) die for 3D NAND mass production, which is slated to begin soon at Toshiba’s new Fab 2 facility in Yokkaichi, Japan. Products are expected to ship in the first half of 2016.

    6:59p
    Gauging the Length of a Data Center’s Useful Life

    Perhaps the single most difficult thing about building a data center is trying to forecast what the IT needs of the organization might actually be 20 years into the future.

    At the Data Center World conference in National Harbor, Maryland, this September, Jim Bolinder, senior manager of technical operations for Nu Skin Enterprises, a manufacturer of skin care products that are distributed in 54 countries, will describe the factors the company considered before embarking on building a data center that can be expanded to 10,000 square feet.

    Like almost every organization that makes extensive use of IT to drive its business, Nu Skin wrestled with whether to build a data center, use a colocation facility, or move to the cloud when it began this project in 2013, Bolinder said. Ultimately, the company opted to build its own facility for financial and logistical reasons, one that ideally would still be functioning 25 years down the road.

    At present, Nu Skin is only using about 4,200 square feet of the raised-floor space in the facility, and given recent advances in energy efficiency across the board in IT infrastructure, it’s not exactly clear when Nu Skin might need to use all 10,000 square feet.

    “In 2013 we were building out a data center when the first blade servers came to market,” said Bolinder. “Since then, there’s been a lot of advances in terms of … energy usage.”

    Bolinder said IT organizations need to see over the horizon as much as possible to anticipate, for example, how much video might impact their future IT infrastructure needs.

    Naturally, building a data center designed to still be running two decades or more from now requires a lot of educated guesswork. But without making the effort to think that far out today, it’s almost a foregone conclusion that the value of data center investments being made now will diminish over time.

    For more information, sign up for Data Center World National Harbor, which will convene in National Harbor, Maryland, on September 20-23, 2015, and attend Jim’s session titled “A Case Study on Nu Skin’s Innovation Center.”

    7:07p
    GE Unveils Predix, Its Industrial Strength Cloud

    Multinational conglomerate GE announced Predix Cloud, a cloud service built specifically for industrial machine data and analytics. Predix will be commercially available in 2016, with GE migrating its software and analytics to Predix in the fourth quarter, the company said in a statement.

    Predix is a Cloud Foundry-based Platform-as-a-Service meant for aviation, energy, healthcare, and transportation needs. GE first announced an industrial cloud platform in 2013, when it also invested $105 million in Pivotal, the EMC subsidiary that at the time led Cloud Foundry, the open source PaaS project.

    Predix is tuned for industrial data with tools like asset connectivity, machine data support, and “industrial-grade” security and compliance, according to the company.

    The cloud provides asset connectivity, meant to handle the growing industrial Internet of Things. Machines produce different types of data, which consumer clouds aren’t built to handle, according to GE. Predix is also designed to streamline governance and drive down compliance costs by leveraging GE’s expertise across more than 60 regulatory areas.
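
    GE has not published the interface details here, so as a generic illustration of “asset connectivity,” the sketch below shows an industrial asset posting a JSON telemetry reading to a cloud ingestion endpoint. The URL, token, and payload fields are hypothetical placeholders and are not Predix’s actual API.

        import time

        import requests

        # Hypothetical ingestion endpoint and token -- not GE's actual Predix API.
        INGEST_URL = "https://ingest.example.com/v1/timeseries"
        TOKEN = "replace-with-a-real-token"

        # One telemetry reading from a hypothetical industrial asset.
        reading = {
            "asset_id": "turbine-0042",
            "timestamp_ms": int(time.time() * 1000),
            "metrics": {"bearing_temp_c": 71.4, "vibration_mm_s": 2.3},
        }

        resp = requests.post(
            INGEST_URL,
            json=reading,                          # serialized as JSON on the wire
            headers={"Authorization": "Bearer " + TOKEN},
            timeout=10,
        )
        print(resp.status_code)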

    There have been a few “community cloud” offerings on the market, such as Veeva for healthcare and life science, or government clouds like AWS’ GovCloud. Predix is a “gated community” for the industrial internet, which is generating data twice as quickly as any other sector, according to GE. Industrial sector investment in infrastructure is expected to top $60 trillion over the next 15 years.

    GE isn’t known as a cloud provider per se, although it had $4 billion in software revenues in 2014 and is projected to hit $6 billion for this year.

    “GE’s Predix Cloud will unlock an industrial app economy that delivers more value to machines, fleets, and factories and enables a thriving developer community to collaborate and rapidly deploy industrial applications in a highly protected environment,” said Harel Kodesh, VP and general manager of Predix at GE Software, in a statement.

