Data Center Knowledge | News and analysis for the data center industry

Monday, November 3rd, 2014

    1:00p
    Amerimar and Newby Going Shopping for Carrier Hotels

    Amerimar Enterprises has acquired a data center hub in Chicago and will own and operate it along with telecom industry veteran Hunter Newby, the companies said. The building at 717 South Wells is the fourth project that Amerimar and Newby have teamed on, and there are more to come.

    Newby and Amerimar have acquired three previous telecom buildings – 325 Hudson Street in New York, 1102 Grand in Kansas City, and 401 North Broad in Philadelphia. The buildings are major intersections for Internet traffic in these cities, earning the name “carrier hotels,” as they can house dozens of ISPs, telcos, and network service providers.

    After working with a number of financial backers, Amerimar and Newby have teamed with Boston-based Abrams Capital on the acquisitions in Philly and Chicago. Newby says more deals are likely in the near future.

    “These guys want to put a lot of money to work in a short period of time,” said Newby. “So we’re going shopping for carrier hotels.”

    Major pickup in Chicago development

    The purchase of 717 South Wells is the latest sign of investor activity in the downtown Chicago market, where several new data center buildings are under development. The property is a 10-story, 100,000-square-foot building in a fiber-dense area of Chicago, and current tenants include carriers, service providers, and enterprise customers. The location is attractive to telecom businesses, as it supplies a gateway to both local and long-haul fiber in the region.

    “717 South Wells Street is an excellent addition to our growing carrier hotel platform,” said Newby. “Chicago is not only a major junction point for the north-south and east-west domestic fiber routes in the Midwest, but it is also a nexus for multiple international networks, making it a global gateway and therefore strategic for us and our customers.”

    Newby’s career has been marked by his prowess developing interconnection facilities known as meet-me rooms where providers can make physical network connections inside a multi-tenant building. He built one of the early meet-me room success stories as an executive at Telx, which got its start at the 60 Hudson Street carrier hotel in New York and later acquired 56 Marietta Street, the leading network hub in Atlanta.

    Meet-me room as redevelopment tool

    Newby is now revisiting that model, working with Amerimar to buy connected buildings and develop meet-me rooms to enhance their tenant lineup. Amerimar CEO Gerald Marshall was keen to work with him on carrier hotel deals, and in 2012 the companies bought the 325 Hudson Street property in Manhattan.

    “It made me want to get back in the game,” said Newby, who has also been building a national carrier-neutral network as CEO of Allied Fiber. “To me, it’s always about the fiber. I know how to create value there.”

    That’s the game plan at 401 North Broad, which is the primary connectivity hub in the Philadelphia market. “It’s a global subsea interconnection point, but it’s behind the times because its landlord never built a meet-me room,” said Newby. “There have been a couple of colo providers in the building, but they’ve never focused on (interconnection) the way I do. The tenants in the building are screaming for an MMR.”

    That’s also the plan in Chicago, where Amerimar will begin immediate redevelopment work at the property, including construction of a new meet-me room to draw additional network operators to the building. The first phase of the meet-me room is scheduled to open in early 2015.

    The Chicago market is seeing a pickup in development action, driven mainly by limited space at the city’s dominant carrier hotel, 300 East Cermak. QTS Realty, CenterPoint, McHugh Construction, and Ascent Corp. are all trying to develop data center properties in downtown Chicago.

    But who’s selling?

    In Abrams Capital, Newby and Amerimar have a partner with a low profile but deep pockets.

    The group’s interest in further acquisitions comes at an interesting time for the data center sector. One of the leading owners of Internet gateway properties is Digital Realty Trust, which owns key carrier hotels in Dallas, Seattle, Phoenix, St. Louis, Santa Clara, and San Francisco. Digital Realty recently announced plans to prune its portfolio and sell some “non-core” properties.

    Digital didn’t define “non-core,” but that category isn’t likely to include highly connected buildings in major markets. Telecom buildings with access to long-haul fiber but no owner-operated interconnection facility, however, are likely to be of interest to Newby and his partners.

    “If a carrier hotel building is lacking a building-owned meet-me room, it needs one,” said Newby.

    4:00p
    Linux Foundation: Open Source is Eating the Software World

    PARIS – In every sector of the technology world there is now an open source project that is defining that particular technology. Software drives value in nearly every industry, and open source projects are where most of that value comes from.

    That’s according to Jim Zemlin, executive director of the Linux Foundation and one of Monday’s keynote speakers at this week’s OpenStack summit in Paris – the first in Europe. “Open source is really eating the software world,” Zemlin said, adapting the famous phrase from a 2011 Wall Street Journal op-ed by venture capitalist Marc Andreessen titled “Software Is Eating the World.”

    There is a wholesale shift in the enterprise software world from using a little bit of open source code here and there to an 80-20 split, where 80 is the open source portion, he said. The reason for the shift is quite simple: software has become a way for an enterprise to add value, and open source is the best way to use a lot of software. “There is too much software being written for any organization to write that software on their own,” Zemlin explained.

    Managing external R&D: a new necessity

    The world’s top tech companies collectively spend tens of billions of dollars on R&D, and the bulk of their code comes from outside the organization, he said. This has created a new job category: managing external R&D. Companies like Google, HP and NEC, among others, all have people who are dedicated to managing open source software development.

    Now that it has become the dominant form of enterprise software development, it’s important for each company to understand how to pick the right projects, how to do “social coding,” how to integrate open source code into its own environment, and how to contribute back to the central project. “Open source collaboration really requires a new set of skills,” Zemlin said.

    Big users partner on method to open source madness

    In addition to individual companies dedicating resources to managing their relationships with open source technologies and communities, there is an ongoing attempt to centralize systematic management of numerous open source technologies from the end user’s perspective. Called TODO (Talk Openly, Develop Openly), the project was started by a handful of high-tech heavyweights, including Google, Facebook, Twitter, Box, GitHub, and a few others, in September.

    The companies created TODO to function as a clearinghouse of sorts for open source software. Details of its mission were fuzzy in its early days, but the founders said it would do things like create documentation for certain open source technologies to make them easier for others to deploy. A key part of TODO’s plan is to only get involved with open source technologies one of the members has used in production. Anyone looking at a software solution the organization lists will have the confidence that the solution has been used by a major company.

    The next open source blockbuster

    It is important to think about open source systematically, and having staff who are dedicated to that task is crucial if a company wants to be in a position to leverage the next big thing in open source, which today is OpenStack, Zemlin said. “OpenStack, without question, is a blockbuster,” he said.

    Jonathan Bryce, executive director of the OpenStack Foundation, said the open source cloud architecture is popular because it speaks to the desire for choice in the IT shop and among the IT shop’s customers – the developers. It gives IT departments the freedom of choosing the hardware to deploy their clouds on, and through APIs it gives the developers flexibility in configuring their environments.

    There is a lot of interest in OpenStack in Europe, if the summit’s attendance is any indication. Bryce said the foundation sold more tickets for the Paris summit than ever, admitting, however, that the city’s popularity probably had a lot to do with it. Thierry Carrez, the foundation’s director of engineering, said half of the attendees were European (compared to about 10 percent at the previous summit in Atlanta). Judging by a show of hands during the keynote, the overwhelming majority of attendees were at an OpenStack summit for the first time.

    4:30p
    Integrating Physical Layer Management Systems into Today’s Networks

    Damon DeBenedictis has had a 17-year career at TE Connectivity, managing copper and fiber product portfolios that have led to market-changing technologies for data centers, office networks, and broadcast networks.

    Physical layer management (PLM) systems provide complete visibility into the physical state of the network at any given time, but integrating such a system into a network and its business processes may seem like a complex project. Where do you start? When do you integrate PLM, and how do you do it? In this article, we’ll look at PLM and some key considerations for integrating a PLM system into a network.

    Breaking down a PLM system

    A PLM system is a tool that network managers use to access and catalogue real-time status information about their physical layer networks. PLM systems bring layer 1 to the same visibility as layers 2-7 by including intelligent connectors on patch cords and intelligent ports on patch panels. The solution software reports the state of every network connection: whether or not it is connected, how much bandwidth a circuit can carry, and the type of circuit (e.g., Cat5/6 Ethernet or single- or multi-mode fiber). The PLM system also provides circuit mapping, alarming, and reporting.
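
    To make that kind of data concrete, here is a small, purely illustrative Python sketch of the per-connection record a PLM system might expose. The field names are assumptions for illustration, not any vendor’s actual schema.

        from dataclasses import dataclass

        @dataclass
        class PortStatus:
            # Hypothetical fields; real PLM products define their own schemas.
            panel_id: str              # intelligent patch panel identifier
            port_number: int           # port on that panel
            connected: bool            # is a patch cord physically seated?
            circuit_type: str          # e.g. "cat6", "singlemode-fiber"
            max_bandwidth_gbps: float  # rated capacity of the circuit

        example = PortStatus("MDF-A-03", 17, True, "singlemode-fiber", 10.0)
        print(example)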

    Areas of consideration prior to integration

    The key opportunity for implementing a PLM system arises when there is a new data center or data center expansion project. This is the time to consider PLM.

    There are two basic ways to integrate a PLM system into a network:

    1. Use the PLM system’s own application and database;
    2. Use a middleware API in the PLM system to integrate its output into an existing network management system.

    The decision about which route to take depends on the network manager’s tolerance for using an additional management system on top of the others he or she is already using, and whether or not it’s worth the effort to adopt a new system.

    Two ways to integrate: the pros and cons of both

    The advantage of using the PLM system’s own application and database is that it manages the entire physical layer: mapping circuits, issuing work orders, reserving ports for new connections, reporting on circuit and patch panel inventories, and other functions. However, using a new application may require some duplication of effort as the manager compares the PLM system’s output with the output of other management systems. In addition, the PLM application will require changes to employee workflows as a new work order system is integrated.

    With the middleware approach, the manager need not change anything about employee workflows. However, the value of the input is limited to what the target management system can accept. For example, if the management system doesn’t understand the network at the patch cord level, then patch cord status and locations will not be available to the network manager.
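
    As a rough illustration of the middleware approach, the sketch below polls a hypothetical PLM REST API and forwards only the fields an existing management system can accept. Every endpoint path and field name here is an assumption made for illustration; a real PLM vendor’s middleware API and the target NMS would define their own interfaces.

        # Minimal sketch of the middleware-style integration described above.
        # All endpoint paths and field names are hypothetical placeholders.
        import requests

        PLM_API = "https://plm.example.net/api/v1"   # hypothetical PLM middleware API
        NMS_API = "https://nms.example.net/api"      # hypothetical existing NMS

        def sync_physical_layer():
            # Pull the current state of every monitored port from the PLM system.
            ports = requests.get(PLM_API + "/ports", timeout=10).json()
            for port in ports:
                # Forward only the fields the target NMS can model. If the NMS has
                # no notion of patch cords, that detail is simply dropped here,
                # which is the limitation noted above.
                payload = {
                    "panel": port["panel_id"],
                    "port": port["port_number"],
                    "connected": port["connected"],
                    "circuit_type": port.get("circuit_type", "unknown"),
                }
                requests.post(NMS_API + "/physical-ports", json=payload, timeout=10)

        if __name__ == "__main__":
            sync_physical_layer()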

    Choosing between the two, what’s right for you?

    One key to deciding between the application and middleware approaches is to determine whether the existing work order and documentation systems are working well. Large carriers use existing or homegrown software tools to manage their networks. Frequently, these systems include work order management systems that automatically email work orders to the appropriate technicians. In smaller organizations, however, network documentation may be done manually in spreadsheets. Either way, these tools depend on manual data entry, which is error-prone and very labor-intensive.

    If a company has a robust work order management system and simply wants to add awareness of the physical network to its suite of tools, then integrating PLM middleware into an existing management system is the way to go. But for companies that struggle with work order management, using the PLM application will be well worth whatever changes must take place in employee workflows.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:00p
    Top 10 Data Center Stories, October 2014

    From Salesforce rolling out its cloud-based big data analytics service to Shutterfly deploying 1,000 cabinets at the SUPERNAP data center in Las Vegas, here are the 10 most read articles on Data Center Knowledge during the month of October. Enjoy!

    Salesforce Gets Into Cloud Business Intelligence Market – After about two years of engineering in stealth mode, Salesforce rolled out Wave, its cloud-based big data analytics service. This will be the biggest announcement at the company’s three-day Dreamforce conference kicking off in San Francisco today.

    Dell Becomes Bitcoin Mining Data Center Provider – Companies often end up with more data center space than they need, sometimes following a change in strategy, sometimes after an IT refresh, and sometimes for other reasons. Dell has found a creative way to deal with excess capacity at one of its Quincy, Washington, data centers.

    Super-Sizing Solar Power for Data Centers – Traveling east from Princeton, drivers can catch a brief glimpse of the panels, which are hidden by a series of high berms. It’s only when you walk around the edge of these grassy mounds of earth that the massive scale of the solar energy generation system is revealed.

    Explaining the Uptime Institute’s Tier Classification System – Uptime Institute’s Tier Classification System for data centers is approaching the two decade mark. Since its creation in the mid-1990s, the system has evolved from a shared industry terminology into the global standard for third-party validation of data center critical infrastructure.

    Shutterfly Deploys 1,000 Cabinets at Switch SUPERNAP – In one of the largest colocation deals ever, Shutterfly Inc. is deploying 1,000 cabinets at the SUPERNAP data centers in Las Vegas. The photo editing and sharing service has signed a seven-year contract for colocation and connectivity services with Switch, which operates the huge – and growing – SUPERNAP campus.

    How is a Mega Data Center Different from a Massive One? – What is a mega data center and how is it different from a massive or a large one? What is the difference between a small data center and a mini data center, and what does the expression “high density” really mean?

    Digital Realty Taking its Medicine – On its earnings call toward the end of the month we will likely hear some details about the properties Digital Realty Trust (NYSE: DLR), one of the world’s largest data center real estate companies, is going to sell.

    Cloud Reboot Causes Cold Sweat at Netflix – Another tale has emerged from the great server reboot of 2014 to apply a Xen security patch that affected major cloud providers, including Amazon Web Services and Rackspace. Netflix, an AWS customer, lost 218 database servers during the reboot but managed to stay online.

    Is Colo Going the Way of the Shopping Mall? – Colo providers and hosting companies that own and operate their own data centers should think hard about diversifying their revenue streams, or about selling and getting out of the business altogether while the going is good.

    EMC Buys OpenStack Cloud Builder Cloudscaling – EMC has acquired Cloudscaling, a company that sets up private OpenStack clouds on customers’ choice of hardware in their own data centers, integrated with major public clouds such as Amazon Web Services and Google Compute Engine.


    5:52p
    BMW and Time Warner Stand Up OpenStack Clouds

    PARIS – OpenStack has widened the pool of companies that can stand up cloud environments for enterprises and service providers beyond the handful of big vendors with proprietary cloud architectures – the likes of Amazon Web Services, Microsoft Azure or VMware. It has also enabled some of the more technologically advanced (as well as adventurous) enterprises to build their own clouds.

    Two of the latest high-profile converts to the open source cloud architecture are BMW and Time Warner Cable. Both have recently stood up OpenStack clouds but went about it in very different ways.

    Stefan Lenz, data center and IT infrastructure manager at BMW, likes OpenStack for two reasons: it drives standardization, and it discourages technology mooching, where the end user develops something to customize a vendor’s solution for its own needs and the vendor then profits by licensing that innovation to others. The latter has happened to BMW in multiple instances, Lenz said.

    BMW was one of the end users the OpenStack Foundation highlighted during Monday’s keynote at the OpenStack summit in Paris – its first in Europe – and Lenz was on stage to tell the German automaker’s OpenStack story.

    The standardization the open source project is driving in the cloud computing space is important to BMW – a company that likes stability. The OpenStack API and the data model used to describe cloud and virtual instances are becoming an industry standard. “When anyone develops on that, it will be stable,” Lenz said.

    BMW not all in just yet

    Because BMW likes stability, however, his team is not using OpenStack for critical applications. “We are not going to put really highly productive workloads on this at the moment,” he said, emphasizing “at the moment.”

    For now, OpenStack can be a part of the toolset the company is using, but it needs to do more development on it before delegating some heavy lifting to the open source system. “We need more stability still … but it doesn’t prevent us to use it right now just as it is,” Lenz said.

    There is also a very quick release cycle, with big changes from release to release, which is bothersome to his team. There have been about 30 major releases since Austin, the first release, which came out in 2010. There were nine this year alone, culminating in the current stable release, Juno, which came out in October.

    Fixing past mistakes

    OpenStack is a way for BMW to solve some problems it has been having with a private cloud system the company’s engineers built in 2011 on their own. Over the past several years, the team has built what Lenz called a “global data center.”

    Through “brutal” infrastructure standardization, his team was able to create a centrally managed system that lives in four geographically distant data centers. Every facility has the same servers, set up exactly the same, he said.

    They have become much more cost efficient, but they realized even back in 2011 that there is a limit to how much you can benefit from standardization. So, they created their own piece of software in Python and called it private cloud.

    That internal private cloud has not worked as well as they hoped it would, however, and OpenStack became the answer, Lenz said.

    Time Warner doesn’t hold back

    BMW’s IT team is an example of a company doing OpenStack alone, but there are numerous vendors competing to help customers deploy OpenStack clouds. One of the users that went the vendor-assisted-OpenStack route is Time Warner Cable, the second-largest cable provider in the U.S.

    Matt Haines, vice president of cloud engineering and operations at Time Warner, was on stage Monday morning to talk about his team’s experience with the open source cloud technology. The company has about 15 million customers and provides TV, broadband, phone, and business services.

    Haines’ team wanted its OpenStack cloud to live in multiple data centers but have a single global identity management system; they wanted automated resource deployment, a high-availability and disaster recovery control plane, geographically redundant object storage, and overall operational maturity to provide enterprise applications at service-provider scale.

    And they wanted it done in six months. The project started in January, and the goal was to be up and running by July.

    Cisco and SwiftStack get the job

    The vendors they chose to help were Cisco and SwiftStack, a startup with a software-defined object storage controller based on Swift, the object storage component of OpenStack.

    Haines did not specify what he was using Cisco for, but it has a range of professional services designed to help customers stand up OpenStack clouds. The IT giant also has a piece of software that streamlines installation of OpenStack on its Unified Computing System servers and a plug-in for integrating Neutron, the software-defined network component of OpenStack, with its Nexus switches.

    By the end of the first quarter, Time Warner had the Havana release of OpenStack running in two data centers and was iterating toward Icehouse, the subsequent release. By the end of Q2, the cloud was in production, Haines said.

    The deployment was not without problems, and the team had to make some big network-wide changes. The two biggest were moving from a flat network architecture to one that could use the latest SDN support in Icehouse, and switching from a legacy Fibre Channel SAN platform to one based on Ceph, the open source storage platform that runs on commodity hardware.

    Time Warner’s team working on its OpenStack deployment is now about 15 people strong. The company has invested in hardening and maturing the platform and contributed to a number of OpenStack projects.

    Going forward, Haines’ team is looking to add DNS, load balancing, monitoring, orchestration, database, and messaging capabilities.


    7:00p
    CyrusOne Pre-Leases Large Portion Of N. Virginia Data Center Opening

    CyrusOne has pre-sold a third of the first phase of its upcoming Northern Virginia data center. An undisclosed Fortune 50 company has pre-leased 12,000 square feet of colocation space, with first right of refusal for another 9,000.

    CyrusOne broke ground in the very busy Northern Virginia market last April. This is the first pre-sold space in the upcoming mega-campus. At full build, the 14-acre site will accommodate a shell of approximately 400,000 square feet, with up to 240,000 square feet of colocation space.

    The first phase will consist of 30,000 square feet with up to 12 megawatts of critical load in a 125,000-square-foot building.

    Northern Virginia is set to overtake New York as the largest data center market by 2015. With over 5 million square feet of space and over 3 million more in development, there was some question as to whether demand could absorb the immense amount of capacity set to come online in the coming years. So far it has been selling like hotcakes.

    This bodes particularly well for CyrusOne, which is a new entrant to the Northern Virginia market. All of the established players have reported strong sales, and CyrusOne is strong out of the gate. The company is able to tap an already sizable customer base for its Northern Virginia offering.

    “This is a current tenant in one of our other data center facilities that appreciates the scalability and flexibility we can deliver, and has chosen to expand their footprint with us in Northern Virginia,” said Tesh Durvasula, chief commercial officer, CyrusOne. “CyrusOne’s scalable Massively Modular engineering design approach, with its ability to support future infrastructure growth, as well as our exceptional service delivery levels were key contributors to the decision.”

    This is the second deal over 10 megawatts to occur in the market in the past few days. DuPont Fabros revealed that an existing customer had subleased the entire 13 megawatts of capacity at its ACC4 data center in Ashburn, Virginia, that was recently vacated by Yahoo.

    RagingWire also has a Northern Virginia data center on the way and has announced a strong sales pipeline. The Equinix campus, known for its connectivity, continues to do extremely well, with the company citing private links as its fastest-growing business segment. CoreSite has expressed a lot of optimism about the market ahead of opening its second facility.

    8:00p
    Cloud Automation: What You Need to Know and Why It’s Important

    Cloud computing is boldly going where no other system has gone before. With so many organizations moving to some type of cloud platform, providers are finding more ways to create a truly automated cloud environment. We haven’t quite reached that point yet, but we’re getting closer.

    Within the concept of cloud automation and orchestration are several important layers all working together. Starting at the data center level, the automation process includes technologies which, when combined, can produce some pretty powerful cloud infrastructures.

    So, what are those cloud automation layers and why are they important?

    Cloud automation

    • What you need to know. True cloud orchestration is being driven by open-source technologies. Unfortunately, there isn’t a lot of standardization here yet, but the technology and the ecosystem around it are developing very fast. For example, open-source IT automation tools such as Puppet from Puppet Labs or Chef from Opscode are now being used to automate management functions that previously required a lot of manual intervention. This heavy lifting used to be part of the administrator’s job; now it can be completely off-loaded.
    • Why it’s important. Yes, it’s a newer concept, but that certainly hasn’t stopped very large organizations from diving into the development and testing pool. Many are using platforms like CloudStack, OpenStack, and even OpenNebula; BMW and Time Warner Cable, for example, are two of the latest converts to OpenStack. Throw big data into the mix, tie in data management solutions like MapR or Cloudera, and you’ve got a very interesting field developing. Remember, your data is critical, and building cloud automation services to replicate, secure, and quantify that information is essential to staying competitive.

    Server provisioning/automation

    • What you need to know. Everyone is doing it. You should too. Many environments which work with cloud computing pretty much have to adopt a high-density computing model. Blade systems have come really, really far. So much so that you can basically insert a blade into, for example, a UCS chassis and let the hardware profile services do the rest. You are literally creating a puzzle-piece data center capable of amazing agility.
    • Why it’s important. In a world of maximum efficiency, data center administrators simply don’t have the time to configure and deploy blades on an individual basis. With pre-built templates, administrators control entire data center blade deployments from one central console. Essentially, you’re reducing IT management while still improving workload delivery.

    Virtualization/application automation

    • What you need to know. This has been the savior for many cloud systems. Providers which host a cloud environment fully understand the dynamic nature of the platform. Anything can change in a second. Whether a server needs more resources or an application is getting pegged, circumstances can change in a heartbeat.
    • Why it’s important. Automating the delivery of applications and virtual servers can be a real life saver. Provisioning services are capable of spinning up VMs within seconds and are now also starting to allow users to connect to new resources (a short provisioning sketch follows this list). Working with intelligent load-balancing technologies, you’re able to provision and de-provision entire workloads on demand. Take VMware, for example, which was faced with a dramatic increase in the volume of data, storage requirements, and the need for better management. To combat this, and as part of its software-defined data center strategy, VMware turned to Puppet Labs (and invested an additional $30 million in the company) to help manage IT resources much more efficiently.
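
    As a rough sketch of what API-driven provisioning looks like in practice, the example below boots a VM through the OpenStack Compute API using the python-novaclient library of that era. The credentials, image, and flavor names are placeholders, and a production setup would also wire in load balancing and orchestration.

        # Minimal sketch: boot a VM through the OpenStack Compute API using
        # python-novaclient. Credentials, image and flavor names below are
        # placeholders, not real values.
        from novaclient import client

        nova = client.Client("2", "demo_user", "demo_password", "demo_project",
                             auth_url="http://controller:5000/v2.0")

        image = nova.images.find(name="ubuntu-14.04")   # placeholder image name
        flavor = nova.flavors.find(name="m1.small")     # placeholder flavor name

        # The cloud's scheduler picks a host; a load balancer or orchestration
        # layer would then decide when to send traffic to the new instance.
        server = nova.servers.create(name="web-autoscale-01",
                                     image=image, flavor=flavor)
        print(server.id, server.status)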

    The cost of moving to a cloud platform is shrinking, which means more organizations are capable of adopting some type of cloud model. Whether this is a migration to Office 365 or a full data center-based cloud deployment, the business drivers to move to the cloud are growing, and cloud providers are looking everywhere to increase efficiency. One great way to do this is a structured and layered cloud automation platform. Look for this technology to continue to develop – from the data center all the way to the cloud.

    8:30p
    Samsung Acquires Proximal Data to Grow SSD Business


    This article originally appeared at The WHIR

    Samsung announced on Sunday that it has acquired Proximal Data, a provider of server-side caching software. The terms of the deal have not been disclosed.

    With the acquisition, Samsung plans to expand its SSD business in the server and data center markets.

    According to the announcement, San Diego-based Proximal Data has been marketing AutoCache software, which is a virtual cache storage solution that increases VM density and performance by eliminating I/O bottlenecks.

    AutoCache was recognized at the Flash Memory Summit as a Best of Show award winner in the category of Most Innovative Flash Memory Technology in 2012.

    Hosting providers have been adding SSD caching technology to their offerings in order to give customers faster-loading websites.

    “We are delighted to have Proximal Data join us, which will enable us to enhance our competence in SSD-based software for server systems,” SVP of the Memory R&D Team at Samsung Semiconductor Bob Brennan said. “With this acquisition, we will be able to further expand our SSD business in the server and data center markets, while continuing to provide the most advanced SSD solutions to customers.”

    Samsung’s acquisition of Proximal Data builds on its 2012 acquisition of NVELO, a creator of SSD caching software. Samsung has added the software to its branded SSDs since 2013.

    “Proximal Data sees tremendous value in being part of Samsung Electronics, the world leader in flash storage,” Proximal Data founder and CEO Rory Bolt said. “We are excited at the opportunity to enhance our AutoCache, as well as to create revolutionary new products in enterprise storage, the potential of which will be greatly improved with access to the full capabilities of Samsung.”

    Last year, SanDisk acquired enterprise SSD developer SMART Storage Systems for around $307 million.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/samsung-acquires-proximal-data-grow-ssd-business

    9:00p
    Mirantis OpenStack 6.0 Released in Technical Preview, Offering New OpenStack Juno Features


    This article originally appeared at The WHIR

    Mirantis has released the Technical Preview of Mirantis OpenStack 6.0, an OpenStack distribution that incorporates the latest features in OpenStack Juno, which is, itself, only a few days old.

    According to a Wednesday announcement from Mirantis, its latest distribution includes the advanced network functions virtualization and big data features from Juno.

    The Preview offers a peek at the new distribution and gives Mirantis a way to gather feedback. The Preview cannot be used in production, there is no upgrade or update path to or from earlier and later releases, and there is no technical support for issues specific to the Preview.

    With its latest version, however, Mirantis OpenStack continues to provide an out-of-the-box distribution for developers and operators who want to get started with OpenStack without the vendor lock-in sometimes associated with other distributions, according to Mirantis.

    Mirantis OpenStack features native support for Apache Hadoop, which is being widely used in the processing of Big Data, with the Sahara project allowing users to easily provision Hadoop clusters.

    This latest release also features improvements around NFV. For instance, Fuel, the OpenStack deployment and management tool, uses Modular Layer 2 (ML2) plug-ins, which allow OpenStack Networking to utilize a variety of layer 2 networking technologies and streamline support for new ones. ML2 plug-in packages can be developed without modifying the Fuel core, which simplifies third-party development of new components for Fuel and allows plug-ins to be interchanged more easily.
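
    To illustrate the plug-in idea, here is a deliberately schematic, standalone Python sketch of the general shape of an ML2 mechanism driver. In a real deployment the class would subclass Neutron’s ML2 driver interface rather than stand alone, and the simplified method name and the fake switch client below are assumptions made purely for illustration.

        # Schematic only: the rough shape of an ML2 mechanism driver. A real
        # plug-in would subclass Neutron's ML2 driver interface instead of
        # standing alone; the method name and FakeSwitchClient are illustrative.

        class FakeSwitchClient:
            """Stand-in for a vendor switch API (e.g., a Nexus-style device)."""
            def configure_vlan(self, switch_port, vlan_id):
                print("Would configure VLAN %d on %s" % (vlan_id, switch_port))

        class ExampleMechanismDriver:
            def initialize(self):
                # Called once at startup: connect to the backend switch gear.
                self.switch = FakeSwitchClient()

            def create_port_postcommit(self, port_id, switch_port, vlan_id):
                # After the port is committed to the Neutron database, push the
                # matching layer 2 configuration down to the switch.
                self.switch.configure_vlan(switch_port, vlan_id)

        driver = ExampleMechanismDriver()
        driver.initialize()
        driver.create_port_postcommit("port-1234", "Ethernet1/10", 101)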

    “Mirantis makes OpenStack simple,” Charlie O’Leary said in a statement. O’Leary is a DevOps engineer at Pixafy, a web development and software company that specializes in Magento e-commerce, and that has used Mirantis OpenStack as the basis for a cluster capable of supporting at least 1,000 virtual machines. “Fuel, combined with Mirantis’ 24×7 support, is phenomenal. It gives us a great high-level view of our environment in one place, and the way it manages our overall infrastructure makes things easier on our team.”

    Mirantis, however, isn’t alone in its adoption of Juno features. Last week, HP, the largest overall contributor to Juno, launched updates to its OpenStack-based commercial cloud platform, HP Helion, that include Juno.

    Mirantis has around 100 engineers working on upstream contributions to OpenStack, and, having recently received significant financial backing, has announced plans to double its contributions in 2015, with a focus on ease of use and operation.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/mirantis-openstack-6-0-released-technical-preview-offering-new-openstack-juno-features

