Data Center Knowledge | News and analysis for the data center industry

Monday, February 8th, 2016

    1:00p
    Open Source DCIM Software Project Combats Spreadsheet-Based Data Center Management

    You cannot improve what you cannot measure. It may be a persuasive argument, but it hasn’t been persuasive enough to deliver the explosion in adoption of Data Center Infrastructure Management tools vendors were promising a few years ago, when DCIM software was considered a big “emerging” market.

    While few doubt its usefulness today, many companies have found it to be very expensive to implement. As a result, adoption has been growing, but perhaps not as quickly as the hype would lead you to expect several years ago.

    But what if at least the tools themselves came free? You’d still have to pay up to get them running in your data centers, but you wouldn’t have to pay for software licenses. Most DCIM software vendors charge based on the size of your footprint, so you pay more as your infrastructure scales.

    That free option does exist: openDCIM, an open source project born at one of the data centers supporting the US Department of Energy’s national labs. Its original creator, Scott Milliken, manages the Oak Ridge National Lab data center in Oak Ridge, Tennessee.

    Now on version 4.1, which came out last month, openDCIM has been deployed in production at “hundreds of data centers,” Milliken said. In addition to the ORNL data center he manages, they include data centers that support numerous universities and research facilities, as well as private enterprise data centers. NASA, the Israel Institute of Technology, the National Human Genome Research Institute, Red Hat, AT&T, and DirecTV Latin America are among the examples.

    The project’s number-one goal is to take away “the excuse for anybody to ever track their data center inventory using a spreadsheet or word processing document again,” according to its website.

    There’s a misconception in the industry that DCIM is only necessary for companies that own and operate their data centers. In Milliken’s opinion, there’s a lot of value to using DCIM even if you’re only monitoring a single cage in a colocation facility.

    As more and more data center capacity is outsourced to colocation providers, it’s going to become more and more important for colo customers to be able to manage that capacity intelligently. Colo providers increasingly provide DCIM capabilities to their customers as a service, but not all of them do.

    Milliken will speak about the importance of managing your colocation data center environment and about openDCIM at the Data Center World conference in Las Vegas in March. Visit the event website to register or to learn more.

    openDCIM isn’t the kind of open source project where the user has to spend many developer hours to turn core source code into a usable solution. The bulk of time spent on any DCIM software implementation usually goes to entering inventory data, but other than that, deployment of openDCIM doesn’t take long.

    “This is the complete solution in terms of the data center asset management,” Milliken said. “You can go from download to running in 30 minutes.”

    The project’s focus so far has been on data center capacity management. “There are a lot of things that the commercial packages do, especially when it comes to building control, that openDCIM stays away from,” he said. “openDCIM doesn’t really do management of any facility systems at all.”

    The latest release features an improved user interface and an API, so that it can be integrated with other systems. While it’s too early to decide what the focus will be for the next release, one potential upcoming feature is compatibility with devices that use Modbus, the communication protocol widely used by industrial machines.
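
    To give a sense of how the new API could be used for integration, here is a minimal sketch that pulls the device inventory over HTTP with Python’s requests library. The base URL, authentication headers, endpoint path, and response fields are assumptions made for illustration; consult the openDCIM documentation for the actual API.

```python
# Minimal sketch: query an openDCIM instance's REST API for its device inventory.
# The endpoint path, auth headers, and response fields below are assumptions
# for illustration only -- check the openDCIM documentation for the real API.
import requests

BASE_URL = "https://dcim.example.com/api/v1"          # hypothetical openDCIM instance
HEADERS = {"UserID": "apiuser", "APIKey": "secret"}   # assumed auth scheme

def list_devices():
    """Fetch the device inventory as JSON and print label and cabinet for each item."""
    resp = requests.get(f"{BASE_URL}/device", headers=HEADERS, timeout=10)
    resp.raise_for_status()
    for device in resp.json().get("device", []):      # assumed response key
        print(device.get("Label"), device.get("Cabinet"))

if __name__ == "__main__":
    list_devices()
```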

    The addition of Modbus would greatly expand the range of devices openDCIM can monitor. “All your industrial controls are going to speak [it],” Milliken said.
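
    For a sense of what Modbus support would involve, the sketch below hand-builds a Modbus TCP “read holding registers” request over a plain socket, which is the kind of polling a DCIM tool would do against power meters or sensor gateways. The host, unit ID, and register addresses are placeholders, and error handling is omitted.

```python
# Minimal sketch of a Modbus TCP "read holding registers" (function 0x03) request.
# Host, port, unit ID, and register addresses are placeholders.
import socket
import struct

def read_holding_registers(host, unit_id=1, start=0, count=2, port=502):
    # PDU: function code 0x03, starting register address, number of registers
    pdu = struct.pack(">BHH", 0x03, start, count)
    # MBAP header: transaction id, protocol id (0 = Modbus), remaining length, unit id
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=5) as sock:
        sock.sendall(mbap + pdu)
        resp = sock.recv(256)
    # Response: 7-byte MBAP header, function code, byte count, then register data
    byte_count = resp[8]
    return struct.unpack(f">{byte_count // 2}H", resp[9:9 + byte_count])

# Example with a placeholder address:
# print(read_holding_registers("192.0.2.10"))
```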

    There have been three regular contributors to the open source project, including Milliken, but about a dozen people have contributed code during the four years that it’s been in existence.

    Want to learn more? Join ORNL’s Scott Milliken and 1,300 of your peers at Data Center World Global 2016, March 14-18, in Las Vegas, NV, for a real-world, “get it done” approach to converging efficiency, resiliency and agility for data center leadership in the digital enterprise. More details on the Data Center World website.

    6:54p
    What’s Behind Docker’s Host OS Change?

    Docker containers are designed to be tiny and portable. The operating system that hosts them should be, too. That, presumably, is why Docker appears to be replacing Ubuntu with the lightweight Alpine Linux OS as the open source host environment for Docker apps.

    Late last month, a (self-identified) Docker employee reported in a discussion thread that the company has “hired Natanael Copa, the awesome creator of Alpine Linux and are in the process of switching the Docker official image library from Ubuntu to Alpine.” That would mean that Ubuntu, Canonical’s open source operating system, would no longer be the official host environment for Docker images.

    Docker would still work perfectly fine on Ubuntu, of course. Ubuntu would just no longer be the default.

    The employee suggested that the switch would benefit Docker developers and users because Alpine is a more minimalist operating system. Like Ubuntu, it is based on the Linux kernel and GNU utilities, but it ships with many fewer programs by default. Since Docker containers don’t need most of the software that is built into Ubuntu, the change would reduce overhead.
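
    To make the overhead difference concrete, a minimal sketch like the one below, using the Docker SDK for Python (docker-py) against a local Docker daemon, pulls both base images and compares their on-disk sizes. The specific tags are only examples.

```python
# Minimal sketch comparing base image sizes with the Docker SDK for Python (docker-py).
# Assumes a local Docker daemon and network access; the tags are examples only.
import docker

client = docker.from_env()

for name, tag in (("alpine", "3.3"), ("ubuntu", "14.04")):
    image = client.images.pull(name, tag=tag)
    size_mb = image.attrs["Size"] / 1024 / 1024
    print(f"{name}:{tag:<7} {size_mb:8.1f} MB")
```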

    In responses to the employee’s report, some writers noted that there are advantages to using a heavier-weight GNU/Linux distribution for Docker. In particular, they wrote, it has more tools available than a minimalist system like Alpine. Plus, there’s no reason you can’t strip Ubuntu or a similar system down to the bare essentials if you just want to run Docker on it.

    Practically speaking, the potential change is unlikely to have much of an impact on the way people distribute or use Docker containers. Most administrators will probably just use whichever GNU/Linux distribution they like best, customized however they prefer, as the host environment for Docker.

    Still, the report — which has yet to be confirmed by a Docker executive — suggests that Docker wants to maintain as slim a profile as possible. The company’s emphasis seems to be on cutting back everything to a minimum, within containers and the host environment alike — a trend consistent with its recent embrace of Unikernels, another way to deploy apps with very little overhead.

    8:11p
    Google and Level 3 Interconnect Network Backbones

    Network peering is one of the wonkier corners of the internet and data center infrastructure world, but the performance of internet services, be they online video or cloud infrastructure, depends greatly on how efficiently the companies behind those services, and the service providers they rely on, exchange traffic with others.

    Google recently struck a new peering deal with Level 3 Communications using a fairly new type of arrangement the carrier has been advocating for.

    It’s called “bit mile,” and it essentially means that each partner carries the same amount of the other’s data over the same distance. If Jack and Jane each have five apples to bring to their respective grandmothers, but Jack lives closer to Jane’s grandmother and Jane lives closer to Jack’s grandmother, they can agree to deliver the apples to each other’s grandmothers, but only an equal number of apples and only over equal distances. If Jack happens to carry more apples than agreed, or over a longer distance, Jane has to compensate him.

    Level 3 operates one of the world’s largest internet backbones, but Google has also built out a formidable global backbone to deliver its services. Google’s backbone interconnects its massive data centers around the world and delivers traffic from those data centers to its 70-plus edge locations, where the content is cached for delivery to end users.

    Read more: How Edge Data Center Providers are Changing the Internet’s Geography

    Now, the two have agreed to help each other deliver traffic more efficiently under a bit-mile deal, with no money changing hands. The agreement also means the two operators’ networks will interconnect in more locations than currently, improving overall performance of the internet.

    Google has similar agreements with other carriers, but hasn’t used bit mile until recently. “We have similar, settlement-free arrangements with other carriers, though bit mile is an emerging standard in interconnect,” Kamran Sistanizadeh, VP of network operations at Google, said in an email.

    “Direct interconnection between Web services and internet service providers is a win-win for all parties,” he said. “ISPs save on traffic costs, and end viewers experience improved performance, like reduced buffering delay for video.”

    A bit mile is a unit of measurement: the number of bits carried, multiplied by the distance over which they are carried. Partners calculate how many bit miles they have delivered for each other and make routing changes to keep each side’s share roughly equal.
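
    As a rough sketch of that bookkeeping, assuming each flow’s volume and carry distance are known (the numbers below are made up):

```python
# Minimal sketch of bit-mile bookkeeping between two peering partners.
# Traffic volumes (bits) and carry distances (miles) are made-up examples.

def bit_miles(flows):
    """Sum volume-times-distance over a list of (bits, miles) flows."""
    return sum(bits * miles for bits, miles in flows)

# Hypothetical flows each partner carried on behalf of the other
carried_by_a = [(2e12, 800), (5e11, 1500)]   # bits, miles
carried_by_b = [(1e12, 1200), (9e11, 1100)]

balance = bit_miles(carried_by_a) - bit_miles(carried_by_b)
print(f"A carried {balance:+.3e} more bit-miles than B")
# A positive balance means routes get shifted (or B compensates A) to even things out.
```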

    In addition to wanting better efficiency, Level 3 has been advocating for bit mile as an alternative way to structure peering agreements with Internet Service Providers that deliver traffic to end users. In the traditional ratio-based system, a last-mile ISP will agree to serve x amount of the backbone operator’s traffic to its end users in exchange for being able to send y amount of traffic upstream, via the backbone operator’s network.

    A typical ratio is six to one: the last-mile ISP delivers six megabytes of data to its customers for every megabyte its customers send out of its network. Partners have to compensate each other for any traffic sent beyond the agreed-upon ratio.
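
    A quick sketch of how out-of-ratio traffic might be identified under such an agreement, with the ratio and traffic volumes purely illustrative:

```python
# Minimal sketch of checking a 6:1 peering ratio. All numbers are illustrative.
AGREED_RATIO = 6.0

downstream_tb = 420.0   # traffic the last-mile ISP delivers to its end users
upstream_tb = 50.0      # traffic its customers send back upstream

actual_ratio = downstream_tb / upstream_tb
out_of_ratio_tb = max(0.0, downstream_tb - AGREED_RATIO * upstream_tb)
print(f"actual ratio {actual_ratio:.1f}:1, out-of-ratio traffic {out_of_ratio_tb:.0f} TB")
# 420 / 50 = 8.4:1, so 420 - 6 * 50 = 120 TB falls outside the agreed ratio.
```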

    Lately, these agreements have become less and less favorable for backbone operators like Level 3, since the proliferation of digital content streaming means a lot more traffic flows downstream to the end users than the other way, leaving backbone operators liable for more and more out-of-ratio traffic and paying tolls to the last-mile ISPs.

    8:33p
    Build, Colo, or Cloud? Five Steps to Help You Decide

    Tim Kittila, PE, is Director of Data Center Practice for Parallel Technologies

    On a weekly basis, I get asked, “Should we continue with or expand our corporate data center, or should we move to a colocation facility or move to the cloud?” My response is always an emphatic “yes!”

    It might seem like a flippant response to such a big question, but the best solution is likely a combination of these options. The data center strategy question really becomes: how do we analyze, rationalize, and leverage all three alternatives for the best outcome? The reality is that every business is different, and a one-size-fits-all approach (build a data center, colocate, or go to the cloud) is rarely the right answer for all of a company’s applications.

    When our team is engaged with a new client to develop their data center strategy, we begin with a front-end assessment to determine the company’s goals, objectives, and reliability needs. We then look closely at where they are today and where they are going in the future. This requires working with multiple groups, from facilities to IT to executives, to really understand their data center requirements. To gain clarity on objectives, align solutions with a mission critical data center strategy, and ensure the client is investing their money wisely, it is critical to begin with this assessment.

    Yet in the fast-paced technology industry, far too many organizations are guilty of not taking the time to “slow down” before they “speed up.” Another obstacle to slowing down is a lack of cross-departmental communication, largely between the IT and facilities teams. This communication is crucial.

    As part of the “slow down,” it is imperative that IT is included in the discussions of what truly drives the data center needs. The facilities, IT, and data center teams must all be on the same page and aligned with corporate objectives. The true goal is to bring all the key stakeholders to the table to rightsize the data center and ensure it meets each team’s goals and objectives. This is no easy endeavor, but the “slow down” approach ultimately allows customers to “speed up” in the long run.

    To best determine what combination of corporate data center, colocation facility or cloud-based solution is best for the company, it’s important to consider five critical areas:

    Corporate Goals and Objectives

    The first step is to understand the company’s business purpose, mission, and vision. In banking, transportation, manufacturing, healthcare, and other industries, we hear repeatedly that at their core these companies are really technology companies. Technology is a critical part of business strategy, and the data center is the heartbeat of all that technology. A lack of understanding of the business goals and objectives can lead to missed expectations for the data center. Discussions around company expansion or potential mergers and acquisitions should also be included in the assessment’s discovery phase. In short, understanding the goals and objectives of the organization is critical to determining the best approach for the data center. Knowledge of the company’s purpose and plans allows us to visualize how the company will leverage and use the data center now and in the future.

    IT Infrastructure and Application Requirements

    The next important step is to collaborate with IT to understand the data center IT assets (infrastructure and applications) that currently support the business, as well as the future plans for the company. Understanding the current IT environment and what IT is planning for the next 3-5 years is crucial. This warrants a discovery of all IT assets and applications supported by the data center. It is necessary to collaborate with IT to ensure server, storage, and network requirements are identified. At the end of the day, the data center exists to support the IT infrastructure, which supports the applications and related data.

    Every organization has numerous priorities, and they almost always differ in importance to the overall business operations. Software applications should be reviewed and categorized by their relative importance to business operations. It is also important to understand their SLA requirements and their dependencies on other applications. For instance, if an application requires low latency and high uptime, that may have an important impact on how many and which telecommunication carriers can best support it. These discoveries may influence where a specific application is located to best serve the company’s needs.
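
    One hypothetical way to capture such a categorization in a working document, with tier names, uptime targets, and dependencies invented purely for illustration:

```python
# Minimal sketch of cataloging applications by business importance and SLA.
# Tier numbers, uptime targets, latencies, and dependencies are placeholders.
from dataclasses import dataclass, field

@dataclass
class Application:
    name: str
    tier: int                 # 1 = mission critical ... 3 = best effort
    uptime_target: float      # required availability, e.g. 0.9999
    max_latency_ms: int
    depends_on: list = field(default_factory=list)

portfolio = [
    Application("order-processing", tier=1, uptime_target=0.9999, max_latency_ms=20,
                depends_on=["payments-db"]),
    Application("payments-db", tier=1, uptime_target=0.9999, max_latency_ms=10),
    Application("dev-test-env", tier=3, uptime_target=0.99, max_latency_ms=200),
]

# Group by tier so placement (own data center, colo, or cloud) can be weighed per tier
for app in sorted(portfolio, key=lambda a: a.tier):
    print(app.tier, app.name, app.uptime_target, app.depends_on)
```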

    Another area that needs evaluation is storage requirements. It is common practice to generously provision storage for application development and test environments. However, while very important, these environments are not production applications and thus have different requirements. We’ve found that engaging IT management in the discussion will yield valuable insight into overall storage requirements.

    Risk and Reliability Requirements

    Identifying the risk and reliability requirements is about managing to the lowest common denominator. There are several key risk and reliability areas to consider.

    A significant consideration in gathering risk requirements is to clearly understand the company’s need to adhere to government or industry compliance requirements, for instance HIPAA in healthcare or PCI in banking. These are significant requirements, and it is important to determine which data center strategy will meet them. When making the decision, it is important to remember that outsourcing the data center is not the same as outsourcing the compliance risk.

    One area of risk touched upon earlier is the relative importance of applications. Consider, for instance, a group of applications that are deemed critical to the organization and whose collective downtime cost is significant. These will have different data center uptime requirements than less critical applications. Calculating the “cost of downtime” is an important exercise during this process: the cost of downtime should be compared against the cost to mitigate the downtime risk.
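
    A back-of-the-envelope sketch of that comparison, with all figures made up for illustration:

```python
# Minimal sketch of weighing annual downtime cost against the cost to mitigate it.
# All dollar figures and hour counts are made-up examples.
cost_per_hour = 50_000            # revenue and productivity lost per hour of outage
expected_outage_hours = 8         # expected annual downtime at current reliability
mitigated_outage_hours = 1        # expected annual downtime after added redundancy
annual_mitigation_cost = 250_000  # amortized cost of the extra redundancy

downtime_cost = cost_per_hour * expected_outage_hours       # $400,000
residual_cost = cost_per_hour * mitigated_outage_hours      # $50,000
net_benefit = (downtime_cost - residual_cost) - annual_mitigation_cost
print(f"Net annual benefit of mitigation: ${net_benefit:,.0f}")   # $100,000
```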

    For the most part, downtime risk can be mitigated through expertly engineered data centers. Redundancy can be engineered throughout the data center infrastructure systems so that if one system fails, a redundant system handles the workload. Building construction appropriate to the geographic location, and the impact of natural hazards on the potential site, also need to be considered. However, increasing reliability and site robustness comes at significant cost and needs careful consideration regardless of whether the facility is a colocation data center or a privately owned corporate data center. It is important to strike a delicate balance between risk and cost.

    Too many times, we see data center clients casually specify a level of reliability that is not aligned with the business risk. Understanding risk in the requirements phase helps formulate the basis of design for the proper level of reliability and costs to properly balance the risk and cost equation.

    Space, Power, and Cooling Requirements

    With a current and future IT infrastructure model, growth projections, and an understanding of reliability requirements in place, the next step is to determine how much power and space are needed to support the infrastructure. This is critical because power and space are the main cost drivers for colocation or a data center build. It’s also important to determine how the power needs might change over time to support the company’s growth projections. Without taking the time to understand these critical variables, negotiating a colocation agreement or sizing a data center build is really just a guess, with a very high probability that it’s wrong. Whether the applications and related IT infrastructure reside in a corporate data center or a shared colocation facility, these steps are critical to determining the right strategy for your business needs.
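
    A simplified sizing sketch along these lines, with rack counts, per-rack densities, and growth rate assumed purely for illustration:

```python
# Minimal sketch of projecting power and space needs from rack counts.
# Rack counts, per-rack densities, and the growth rate are assumptions.
racks_today = 20
kw_per_rack = 6.0        # average IT load per rack
sqft_per_rack = 30       # rack footprint plus aisle and support space
annual_growth = 0.15     # projected infrastructure growth rate

for year in range(6):
    racks = racks_today * (1 + annual_growth) ** year
    print(f"year {year}: ~{racks:4.0f} racks, "
          f"~{racks * kw_per_rack:5.0f} kW IT load, "
          f"~{racks * sqft_per_rack:6.0f} sq ft")
```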

    Decision Framework

    Once the data center requirements are agreed upon by all the stakeholders, it is time to move to the decision framework phase. The data center needs, combined with cost, risk, and time, aid in making clear decisions. To do this, we use a consolidated scorecard to determine which scenario best serves the company’s goals and objectives. With each option there are always tradeoffs that need careful consideration. The scorecard exercise helps the client understand the tradeoffs between risk, cost, and time, and provides a sound framework for making the right decision for the right reasons.
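
    A minimal sketch of such a consolidated scorecard, with the criteria weights and scores invented purely for illustration:

```python
# Minimal sketch of a weighted scorecard for build vs. colo vs. cloud.
# Criteria weights and 1-5 scores (higher is better) are invented for illustration.
weights = {"cost": 0.30, "risk": 0.30, "time_to_deploy": 0.20, "scalability": 0.20}

scores = {
    "build": {"cost": 2, "risk": 5, "time_to_deploy": 1, "scalability": 2},
    "colo":  {"cost": 3, "risk": 4, "time_to_deploy": 3, "scalability": 3},
    "cloud": {"cost": 4, "risk": 3, "time_to_deploy": 5, "scalability": 5},
}

for option, criteria in scores.items():
    total = sum(weights[c] * s for c, s in criteria.items())
    print(f"{option:>5}: {total:.2f}")
# The highest weighted total is a starting point for discussion, not the final answer.
```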

    With the many data center options available today, it is critical to understand your core business, IT, and facilities requirements to make informed, mission critical decisions. We suspect that in the end, some of your applications will move to the cloud, other mission critical applications will remain in your own data center and some will migrate to a colocation data center. The challenge is not answering “yes” to the daunting question of “build, colo or cloud.” The challenge is to be able to know how and why to employ each data center option. Regardless of the data center strategy chosen, slowing down to gather foundational requirements is key to making the right data center decision, and ultimately speeding up.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    10:06p
    Cisco Acquires Jasper in $1.4B IoT Move


    By The WHIR

    Cisco announced last week that it will acquire Jasper Technologies, a Silicon Valley-based provider of a cloud Internet of Things service platform that calls itself the “on switch” for IoT.

    The Jasper platform is used by many of the world’s largest enterprises and service providers, allowing them to connect devices ranging from cars to pacemakers to jet engines over cellular networks. It also provides a Software as a Service (SaaS) platform for managing devices.

    The acquisition, in which Cisco will pay $1.4 billion in cash and assumed equity awards, and additional retention-based incentives, is Cisco’s biggest purchase since it bought cybersecurity company Sourcefire in 2013 for around $2.5 billion.

    The acquisition is expected to close by April 2016.

    Jasper CEO Jahangir Mohammed will run the new IoT Software Business Unit under Cisco’s IoT and Collaboration Technology Group.

    Cisco expects the acquisition to enable it to provide a complete IoT solution that is interoperable across devices and works with IoT service providers, application developers and an ecosystem of partners.

    It will add new features to the Jasper IoT service platform including enterprise Wi-Fi, security for connected devices, and advanced analytics to better manage device usage.

    “IoT has become a business imperative across the globe. Enterprises in every industry need integrated solutions that give them complete visibility and control over their connected services, while also being simple to implement, manage and scale,” said Jahangir Mohammed, Jasper Chief Executive Officer. “By coming together, Jasper and Cisco will help mobile operators and enterprises accelerate their IoT success.”

    Read more: Five Factors Data Centers Must Contend with in the IoT Era

    IoT presents many new and unusual challenges, especially around connectivity and security, because it represents a whole new target for malicious hackers. Some major tech companies are focusing their efforts on building platforms that make it easier and safer to connect devices. Many are also preparing their users and integrators to use their platforms for IoT. Amazon Web Services, for instance, recently added the Internet of Things (IoT) as one of its AWS Partner Network Competencies.

    According to Cisco’s new Global Mobile Data Traffic Forecast, which covers 2015 to 2020, mobile data traffic will increase eight-fold over this period. The research anticipates 5.5 billion mobile users, or 70 percent of the global population, by 2020, connecting more people in the developing world. And the opportunities for global development through IoT are enormous, according to a separate report from Cisco and the UN’s International Telecommunication Union.
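
    For context, an eight-fold increase over those five years works out to roughly a 52 percent compound annual growth rate:

```python
# Quick check: an eight-fold increase over five years as a compound annual growth rate.
growth_factor = 8.0
years = 5
cagr = growth_factor ** (1 / years) - 1
print(f"compound annual growth rate: {cagr:.1%}")   # about 51.6%
```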

    Interest in IoT is strong and companies like Cisco are spending heavily to be at the forefront of this movement to connect the world’s devices.

    This first ran at http://www.thewhir.com/web-hosting-news/ciscos-1-4b-jasper-technologies-acquisition-signals-major-iot-move

