Data Center Knowledge | News and analysis for the data center industry
 

Monday, March 16th, 2015

    12:00p
    Facebook Makes Open Source Networking a Reality

    SAN JOSE, Calif. – Unlike past years, when most of the focus was on servers, the data center network took center stage at the Open Compute Summit in Silicon Valley earlier this month.

    Facebook and a number of other data center end users and vendors announced network technology contributions to the Open Compute Project, the Facebook-led open source data center and hardware design community.

    If somebody wants to build a custom data center network using open source hardware and software, they now have access to just about every part of the stack, save for some software programming work that is still required. Key parts of that stack are components of the network Facebook’s engineers have built for the company’s own use.

    “All the pieces are available,” Najam Ahmad, Facebook’s director of network engineering, said. “It took us about a year and a half to get here, but I’m now really excited that we’ve done all the base platform work, and I see that momentum building.”

    That remaining programming work is not trivial, however: the user would still have to build their own network protocols on top, he said.

    Ahmad’s team has changed the way Facebook networks in its data centers are built, and the company has contributed some of those innovations to the open source community – the same way it has opened up its server design specs.

    Enter Open Source Network Hardware

    In February, Facebook announced the Six Pack, its latest network switch that will enable its new network fabric, and said it would contribute the spec to OCP. The first Facebook data center where the fabric is implemented is the company’s Altoona, Iowa, facility, launched last November.

    The Six Pack is not currently running in Facebook data centers at scale. The new switches are being tested in production in several parts of the infrastructure, Ahmad said.

    The Facebook network switch that is already running at scale is the top-of-rack switch called Wedge, which the company announced in June of last year. At this month’s summit in San Jose, Facebook said it would contribute the Wedge spec to OCP as well.

    Not only will the spec be available, but there’s also already a vendor that will sell Wedge switches. They will be available from the Taiwanese network equipment maker Accton Technology and its channel partners.

    Najam Ahmad, director of network engineering, Facebook (Photo: Facebook)

    Managing Switches Like Servers

    Facebook will also contribute a portion of FBOSS, the set of software applications for managing Wedge switches, to the open source project, making network hardware and network management software Facebook designed available for public consumption.

    Even though FBOSS stands for Facebook Open Switching System, it is not an operating system. It is a set of apps that can be run on a standard Linux OS, Adam Simpkins, a Facebook software engineer, explained in a blog post.

    The point is to make switches less like traditional switches and more like servers: a Facebook switch is essentially a server that runs FBOSS software to perform the functions of a network switch.
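
    To make “managing a switch like a server” concrete, here is a minimal sketch of the idea in Python, assuming a hypothetical Linux-based top-of-rack switch reachable over SSH; the host name and the specific commands are illustrative assumptions, not part of Facebook’s published tooling.

        #!/usr/bin/env python3
        """Illustrative only: poll a Linux-based switch with the same tooling
        used for servers. The host name and commands are hypothetical."""

        import json
        import subprocess

        SWITCH = "rack12-tor.example.net"  # hypothetical top-of-rack switch running Linux

        def run_on_switch(command: str) -> str:
            """Run a shell command on the switch over SSH, exactly as one would on a server."""
            result = subprocess.run(
                ["ssh", SWITCH, command],
                capture_output=True, text=True, check=True,
            )
            return result.stdout

        if __name__ == "__main__":
            # Standard Linux tooling (iproute2, uptime) works on a Linux-based
            # switch just as it does on any server in the fleet.
            links = json.loads(run_on_switch("ip -json link show"))
            down = [link["ifname"] for link in links if "UP" not in link.get("flags", [])]
            print(f"{SWITCH}: {len(links)} interfaces, down: {down or 'none'}")
            print(run_on_switch("uptime").strip())

    Nothing in the sketch is switch-specific: the same inventory, monitoring, and automation scripts a server team already runs can be pointed at the switch.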

    This way, Facebook data centers no longer need a network management team that sits in its own silo, separately from the team that manages servers. The same team can now manage both, widening the pool of people that can manage the entire infrastructure, Ahmad said.

    The company is contributing the FBOSS agent, which it uses to manage Wedge switches, as well as OpenBMC, which provides management capabilities for power, environmentals, and other system-level parameters.

    Facebook’s Six Pack switch is a 7RU chassis that includes eight of its Wedge switches and two fabric cards (Photo: Facebook)

    APIs Make Open Source FBOSS Possible

    Switch hardware and management software on their own are not enough for a data center network. The FBOSS agent doesn’t manage the switches directly. It communicates with switching ASICs (the hardware circuits that perform packet forwarding in switches) through SDKs, or software development kits.

    Network vendors have traditionally kept ASIC specs and SDKs closed, but now several ASIC vendors are beginning to open them up, starting with Broadcom, which has released OpenNSL APIs that FBOSS can use to program the Broadcom ASICs on Wedge switches.

    Release of OpenNSL is what made release of FBOSS possible. “We couldn’t open source FBOSS if [it] talked directly to the SDK, because then we would be releasing Broadcom’s proprietary information,” Ahmad said. “OpenNSL talks to the SDK, and that’s Broadcom’s problem. Not our problem.”
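
    The layering Ahmad describes is easier to see in code. The sketch below is purely conceptual: the class and method names are hypothetical stand-ins for an open API layer (OpenNSL-like) and the proprietary SDK beneath it, not the actual FBOSS or OpenNSL interfaces.

        """Conceptual sketch of the layering described above -- all names are hypothetical.

            agent (open source) -> open API (OpenNSL-like) -> vendor's proprietary SDK -> ASIC

        Only the lowest layer touches proprietary code, so everything above it
        can be published without exposing the vendor's intellectual property.
        """

        from abc import ABC, abstractmethod


        class OpenSwitchAPI(ABC):
            """Stands in for an open, publishable switch-programming API."""

            @abstractmethod
            def add_route(self, prefix: str, next_hop: str) -> None: ...


        class VendorASICDriver(OpenSwitchAPI):
            """Implements the open API by calling the vendor's closed SDK (not shown)."""

            def add_route(self, prefix: str, next_hop: str) -> None:
                # e.g. self._sdk.program_l3_entry(prefix, next_hop)  # proprietary call
                print(f"programming ASIC: {prefix} via {next_hop}")


        class ForwardingAgent:
            """Stands in for the open source agent: it only ever sees the open API."""

            def __init__(self, hw: OpenSwitchAPI) -> None:
                self.hw = hw

            def install_routes(self, routes: dict[str, str]) -> None:
                for prefix, next_hop in routes.items():
                    self.hw.add_route(prefix, next_hop)


        if __name__ == "__main__":
            ForwardingAgent(VendorASICDriver()).install_routes({"10.0.0.0/8": "192.168.1.1"})

    Because the agent depends only on the open interface, the vendor can change the driver and SDK underneath without the agent changing, which is the separation that makes open sourcing the agent possible.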

    Non-FBOSS Options Available

    But FBOSS isn’t the only option for management software. A company called Big Switch contributed its Linux-based network operating system to OCP this month.

    Earlier, a company called Cumulus Networks, which also has a Linux OS for open switches, contributed ONIE (Open Network Install Environment) to the project. ONIE enables installation of any network OS on open switches so they can be managed like Linux servers.

    So users now have a choice between running Wedge switches with a Linux OS by Cumulus or Big Switch, or running FBOSS and OpenBMC.

    But they are not limited to using this software on Facebook’s Wedge switches. So-called “incumbent” network vendors have been announcing “open” switch products since early last year.

    They include Dell, Juniper, and, most recently, HP. Dell is offering a choice between Big Switch, Cumulus, and its own network OS. Juniper is planning to start shipping switches that support any open network OS sometime this year. HP’s first open switches – also manufactured by Accton – will ship with Cumulus software.

    Disaggregation Unlocks Potential

    Disaggregation between network hardware and network software is exactly the idea Ahmad and his team at Facebook had in mind when they started the networking group within OCP in 2013. This is the first time such disaggregation has occurred in data center networking, and OCP has had a lot to do with it.

    “We wanted to disaggregate the network appliance, because the network appliance was very much a black box,” Ahmad said. “And a black-box environment doesn’t work.”

    In the server world, you know what chip is being used, and what other hardware components are inside. You can install an OS of your choice, and you have the flexibility to program and customize it to your needs.

    That kind of flexibility has been simply impossible with networking hardware. “You get what you get, and the only thing you can do with it is whatever protocols they have implemented; whatever [Command Line Interface] is available; and that just doesn’t scale at our size, and for most people,” Ahmad said.

    Facebook’s engineers needed the flexibility, and when off-the-shelf products didn’t offer it, they had the drive and the resources to design what they needed themselves. Other large-scale data center operators, such as Google, Amazon, and Microsoft, have taken the same approach.

    Pent-Up Demand for Open Hardware

    The difference in Facebook’s approach was OCP – the idea that if you open source some of the pieces you create for your own use, you can spur an entire ecosystem of vendors and users who aren’t satisfied with business-as-usual. The rate of the OCP ecosystem’s growth indicates there was quite a bit of pent-up demand for that level of flexibility in the market.

    OCP has reached a point where it isn’t just Facebook making contributions anymore. The list of data center end users who are active in OCP now includes the likes of Apple, Microsoft, Goldman Sachs, and Fidelity Investments.

    The list of active vendors now extends beyond the Asian design manufacturers that supported the project from the start. It now has names like Cisco, Juniper, HP, Dell, Emerson Network Power, and Schneider Electric.

    It turns out the market wanted more openness and disaggregation, and the data center vendor establishment has reacted. There was a need for major changes in the way the hardware market worked, and OCP gave the push that was needed to get the ball rolling.

    3:30p
    Strategies for Evaluating Data Center Aisle Containment

    Todd Boucher is the Principal and Founder of Leading Edge Design Group (@ledesigngroup), a critical infrastructure firm that specializes in designing, building, and maintaining Data Center, LED Lighting, and Information and Communications Technology systems.

    A challenge for data center customers is finding a way to leverage existing cooling infrastructure to support ongoing technology upgrades. As technology is refreshed, consolidated, and virtualized, per-rack density (in kilowatts, or kW, per rack) increases, and the capacity of legacy cooling infrastructure is often reached or exceeded.

    For example, a legacy data center is typically configured with a raised floor and perimeter Computer Room Air Conditioner (CRAC) units. The CRAC units are downflow – supplying cool air down into the raised floor plenum – with an open return. In general, this design could effectively support an average of 3kW per rack of IT load. In today’s data center, technology requirements are driving per rack densities far beyond 3kW per rack and customers are seeking an effective way to support their technology upgrades without a major data center renovation.

    Data Center Aisle Containment has emerged as an effective strategy for increasing an existing data center’s capacity to support higher density rack loads. The issue for many customers is determining how to implement aisle containment in their data center and which product, strategy, and project plan are right for them. Vendors of aisle containment systems can offer advice, but customers are concerned that this advice is limited to the product set the vendor can provide.

    Evaluating Aisle Containment Solutions

    A persistent debate in the data center industry concerns the benefits of hot aisle containment systems (HACS) versus cold aisle containment systems (CACS). The reality is that both containment strategies are effective. Customers need to first understand the characteristics of their existing facility and then use that profile to determine the right aisle containment strategy for their environment.

    Understand the Importance of Return Air Temperature

    The primary function of aisle containment is to physically separate the air streams in the data center, ensuring that cool supply air is delivered to the IT equipment inlet (cold aisle) without mixing with the hot IT equipment exhaust air (hot aisle). Creating this separation will help eliminate hot spots and improve the consistency of supply temperature across the vertical face of the IT enclosure.

    More importantly, separating data center air streams with aisle containment systems can increase the usable capacity of your CRAC units. A common mistake data center operators make is assuming that a 30-ton CRAC unit is providing 30 tons of usable capacity in the data center. In most existing data centers, the usable capacity of a CRAC unit (the actual cooling capacity being delivered) is reduced by the temperature of the air returning to the unit. In other words, you can only expect to get 30 tons of usable capacity at specific design conditions. Examining the capacity data for a chilled water CRAC unit illustrates this reduction:

    Sample CRAC Unit Capacity Data (Based on 45°F Entering Water)


    In that data, the full capacity of the sample CRAC unit (149kW) is achieved when 80°F return air reaches the unit. However, if that return air temperature is reduced by 5°F, the output of the CRAC unit decreases by 15 percent. If the return air temperature is decreased by 8°F, the capacity of the CRAC unit decreases by 24 percent.
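
    To make the arithmetic concrete, the short sketch below reproduces the derating just described using only the figures cited for the sample unit (149kW at 80°F return air, roughly 15 percent less at 75°F and 24 percent less at 72°F); it is an illustration of those cited data points, not a general CRAC performance model.

        # Illustrative arithmetic using the sample CRAC figures cited above.
        # The percentages apply only to that sample unit at 45°F entering water.

        FULL_CAPACITY_KW = 149                      # usable capacity at 80°F return air
        DERATING = {80: 0.00, 75: 0.15, 72: 0.24}   # fractional capacity loss vs. return air temp (°F)

        for return_temp_f, loss in sorted(DERATING.items(), reverse=True):
            usable_kw = FULL_CAPACITY_KW * (1 - loss)
            print(f"{return_temp_f}°F return air -> ~{usable_kw:.0f} kW usable "
                  f"({loss:.0%} below full capacity)")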

    By implementing aisle containment, we can increase the return air temperature to data center CRAC units, thus increasing the usable cooling capacity in the data center space. Before evaluating aisle containment, it is important for data center operators to measure their existing return temperature conditions; doing so provides a perspective on the operation of the current cooling system and a critical benchmark to improve upon through an aisle containment implementation.

    Evaluate your Rack/Row Orientation and Profile

    The existing rack/row orientation in your data center and its symmetry will dictate the complexity of implementing aisle containment. For example, if you have a homogenous data center with a standardized rack profile, you will have a number of options for aisle containment systems. The likelihood that these solutions will be “out-of-the-box” from aisle containment vendors is high. However, if you have a more heterogeneous data center with varying rack heights, depths, and widths, it is important to understand that more customization will be required. This is especially true if your data center rack footprint is fixed (i.e., rack locations cannot be adjusted for greater symmetry).

    Data center owners should understand which rack/row profile their data center falls into prior to beginning the aisle containment evaluation process. If your data center has a heterogeneous rack profile, it is important to determine how much (if any) footprint reconfiguration you are willing to undertake to support a containment project. Providing this information to prospective vendors will ensure that you receive system proposals that are accurate and able to be implemented in your current data center environment.

    Aisle containment implementations offer significant benefits to data center operators looking to support higher density IT equipment, regain capacity from their cooling system, and extend the lifecycle of their existing facility. Hot Aisle Containment Systems (HACS) and Cold Aisle Containment Systems (CACS) are both effective strategies; the correct aisle containment solution for a customer’s data center should be developed through a detailed review of the customer’s requirements and existing data center conditions.

    Part two of this article will be published Friday, March 20.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:00p
    China Has Hacked Every Major US Corporation, Former NSA Head Says

    This article originally appeared at The WHIR

    In a speech at the University of Missouri on Thursday, former NSA director Mike McConnell revealed that China has been playing a major role in security breaches at US companies.

    McConnell said that the malware used by Chinese hackers allows them to access information at will and steal critical information including business plans and blueprints. An indictment in May 2014 accused Chinese hackers of stealing proprietary information from a nuclear power plant, steel plant, and solar energy company.

    “The Chinese have penetrated every major corporation of any consequence in the United States and taken information,” said McConnell in the speech. “We’ve never, ever not found Chinese malware.” Chinese hackers are suspected of several major data breaches. In August 2014, Chinese hackers were suspected in a medical records breach exposing over 4.5 million patients at Community Health Systems. In November, China was accused of a breach at the USPS exposing data on some 800,000 government employees.

    The implications for the US economy could be huge but may not be seen for years. McConnell said that during the Bush administration the Chinese government employed as many as 100,000 dedicated hackers, while the US has that many spies in total. He said the Chinese government was stealing “planning information for advanced concepts, windmills, automobiles, airplanes, space ships, manufacturing design, software.”

    “Several hacking experts who consult companies on cybersecurity backed up the idea that Chinese hacking is widespread. But they doubt every single major US company has been broken into by Chinese government operatives. For example, since 2012, consulting firm EY has found evidence that China hacked into several well-known companies, including a major US medical research facility that conducts clinical trials and a large heavy equipment manufacturer,” according to CNN. “But EY consultant Chip Tsantes said the Chinese haven’t burglarized every firm. ‘I can’t say that’s true for every single one,’ he said. ‘If that was true, the Chinese would have the formula for Coke, and they don’t.’”

    Relations with China regarding technology have become even more strained over the last couple of months. The Chinese government recently proposed a bill that would require technology firms to hand over source code and encryption keys to the government. The proposal came as China removed several large US IT companies, including Cisco and Apple, from a list of approved government suppliers.

    New banking rules already in place in China require that 75 percent of technology products used by banks be classified as “secure and controllable” by 2019. As part of receiving this designation, technology companies have to turn over source code to the government.

    Not surprisingly, these ideas have been met with much resistance. Backlash from the US has caused the Chinese government to reconsider at least one bill: on Friday it announced that the third reading of a counter-terrorism bill is on hold.

    This story originally appeared at http://www.thewhir.com/web-hosting-news/china-hacked-every-major-us-corporation-former-nsa-head-says

    6:03p
    Sentinel Pitches Lower-Reliability Data Center Service at Lower Cost

    Sentinel Data Centers said it is now offering a data center service option that does not include as much backup infrastructure equipment as its traditional high-availability design, aimed at applications with lower reliability needs than mission-critical workloads.

    The service, called Transmission, comes at a lower price point than the full high-reliability option. But that doesn’t mean it is unreliable.

    In Sentinel’s Somerset, New Jersey, data center, the new service receives power from redundant high-voltage utility feeds, which the company said have not gone down since it commissioned the facility in 2011. The time frame includes Hurricane Sandy in October 2012 and Hurricane Irene in August 2011.

    Besides New Jersey, Sentinel is offering Transmission in its Durham, North Carolina, data center as well.

    It is fairly uncommon for a data center service provider to accommodate multiple levels of redundancy in a single facility. Supporting such optionality is a difficult engineering task and has business-strategy implications for the provider.

    Data center providers’ business margin is in assurances of reliability, and a lower-reliability product means lower revenue per square foot.

    Still, providers do increasingly create mixed-reliability environments to satisfy market demand. Companies that operate Bitcoin mining infrastructure are one type of user that needs low-reliability colocation options.

    Another example is IT lab space. Silicon Valley data center provider Vantage Data Centers, for example, recently closed a wholesale lease for a large lab within one of the data centers on its campus. Since office space in the Valley has gotten very expensive, the customer looked to the data center provider to accommodate its lab space instead of using an office building, as is customary.

    Sentinel’s data centers are large. The Somerset facility is 430,000 square feet, and the Durham one is 420,000 square feet.

    6:19p
    Security Breaches, Data Loss, Outages: The Bad Side of Cloud

    As a big supporter of cloud computing, I never find this an easy topic to discuss. However, security concerns will always be present as threats continue to rise. Let me give you an example. As soon as the whole Heartbleed topic arose, our organization began fielding calls from various IT shops asking for remediation, fixes, and patches. The crazy part was that not all OpenSSL systems were impacted. Many pre-version 1 OpenSSL systems were safe. Many others faced the challenge of correcting and fixing this serious vulnerability. Cisco, Juniper, F5, and many others were actively deploying fixes to ensure that their systems stayed safe. Numerous social media giants – Facebook and LinkedIn, for example – also had to deal with OpenSSL issues. Furthermore, even though they stated that patches had been deployed, these giants asked all of their users to reset their passwords… just in case.

    Shared dependencies on common cryptography libraries allow for standardization around security protocols. However, they can also cause issues like Heartbleed, where a number of large providers are impacted by the same very serious flaw. Although cloud computing is a powerful platform, it can certainly have its “cloudier” days.

    Although we’ve come a long way with cloud design, there are still some concerns and issues to overcome. A cloud environment has so many moving parts that sometimes not all of the pieces fit together entirely well. In looking at cloud computing, consider some of the following:

    • Cloud and security. This is still absolutely an issue. In fact, it’s a growing issue. Arbor Networks’ 9th annual Worldwide Infrastructure Security Report illustrates this point very clearly, with the largest reported DDoS attack in 2013 clocking in at 309 Gbps. As cloud computing becomes more popular, it will become the target of more malicious attacks. No single environment is safe, and every infrastructure must be controlled with set policies in place. Heartbleed is a perfect example of how a number of massive cloud organizations can be impacted through a standardized security structure.
    • Dealing with data loss. Allowing users to get into the cloud is one thing. Accessing applications through a cloud model is a powerful way to allow end users to work remotely. However, what happens when users start uploading files to the cloud? Healthcare is a great example where data loss can be extremely costly. A recent report from the Health Information Trust Alliance (HITRUST) paints a clear picture of the ramifications of a data breach. Over recent years, the numbers around healthcare data breaches have been quite sobering (a quick arithmetic check of these figures follows the list below):
      • Total Breaches: 495
      • Total Records: 21.12 million
      • Total Cost: $4.1 billion
      • Average Size: 42,659 records
      • Average Cost: $8.27 million
      • Average Time to Identify: 84.78 days
      • Average Time to Notify: 68.31 days
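
    As a rough sanity check, the per-breach averages multiplied by the 495 breaches reproduce the reported totals to within rounding. The small sketch below uses only the figures listed above:

        # Rough consistency check of the HITRUST figures cited above.
        breaches = 495
        avg_records = 42_659        # records per breach
        avg_cost_musd = 8.27        # million USD per breach

        total_records_m = breaches * avg_records / 1e6
        total_cost_busd = breaches * avg_cost_musd / 1e3

        print(f"~{total_records_m:.2f} million records (reported: 21.12 million)")
        print(f"~${total_cost_busd:.1f} billion total cost (reported: $4.1 billion)")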

    Many organizations don’t have a Data Loss Prevention (DLP) plan in place. This means that a user, even non-maliciously, might post information or upload a file that contains sensitive data.

    • Cloud outages. No entity is 100% safe from some type of disaster or emergency. In fact, a powerful storm in June 2012 knocked out an entire Amazon data center. What was hosted in that data center? Amazon Web Services. All AWS customers hosted in that facility were effectively down, and cloud-centric companies like Instagram, Netflix, and Pinterest lost production capability for over six hours. To paint a clearer picture, a recent study by the International Working Group on Cloud Computing Resiliency found that since 2007, about 568 hours of downtime have been logged across 13 major cloud providers, costing customers about $72 million so far.

    So what do you do? If you look at the responses from folks like Facebook, Google, and even LinkedIn, you’ll see proactive actions that address the issue immediately and set in motion plans to prevent problems like this moving forward. You can never predict the future, especially not in IT or security. But you can be vigilant and ready for things like this to happen.

    New proactive security solutions like virtual security appliances give you the ability to deploy agile, powerful, and intelligent security systems anywhere within your infrastructure. The other big part is that these security platforms can be service-oriented. This means you can monitor specific network nodes and data points within a very distributed environment.

    For now, cloud computing has done a good job of staying out of the spotlight when it comes to major security issues. Yes, Dropbox might accidentally delete a few of your files, or some source code becomes exposed. But the reality is that a public cloud environment hasn’t really ever experienced a massive data breach. Ask yourself this question: what would happen if AWS lost 80 million records, as in the very recent Anthem breach? The conversation around public cloud security would certainly shift quickly. But the reality is that it hasn’t happened. Maybe this gives us hope that cloud architecture is being designed in such a way that data is properly segregated, networks are well designed, and the proper border security technologies are in place. It all sounds great; but the key is to never become complacent. As more organizations move to a cloud-based model, advanced persistent threats may follow.

    8:20p
    Sidus Acquisition Gives ByteGrid Compliant Cloud Services

    Data centers and colocation used to be strictly a real estate and power game. However, in an effort to differentiate themselves from the pack, even those companies with pure facilities roots are expanding deeper into services and providing more than just space and power.

    This is part of the rationale behind ByteGrid’s acquisition of Sidus, a managed cloud services provider with a focus on compliance. The deal was announced Monday. Although financial terms were not disclosed, the move should pay big dividends.

    Having started solely in wholesale, ByteGrid moved into retail colocation offerings and is now looking to provide more cloud and hands-on services up the stack. It’s a similar story to those told by other providers outside of core markets. Offering specialization opens up the potential customer base and appeals to the types of customers in emerging and secondary markets.

    Sidus offers compliant cloud hosting, managed hosting and IT regulatory consulting services. The consulting services will help ByteGrid approach the market from a different angle. Currently operating out of three data centers, Sidus will continue to serve its existing customers as well as bring more managed hosting across ByteGrid’s 750,000 square feet of data centers.

    “We started off real-estate centric with a focus on wholesale colocation trying to penetrate secondary markets and meet demand,” said ByteGrid CEO Ken Parent. “As we’ve gotten into it, we find it’s a very viable story. But we also find that in these markets, there’s a huge SMB element that’s hugely underserved. We’re combining a first-rate data center platform and services.”

    Headquartered in Annapolis, Maryland, Sidus is the second hosted and cloud services provider acquisition for the former wholesale-only data center provider. Last year ByteGrid acquired NetRiver, a West Coast company strong in network services.

    The Sidus acquisition is squarely centered on compliant cloud services. It gives ByteGrid a more thorough managed hosting platform to deploy across its footprint. As one way to strengthen the company, ByteGrid named former Sidus CEO Mark Powell president of its cloud services division.

    ByteGrid President Manuel Mencia said that the company will take the compliant hosting cloud product that does well in healthcare and roll it out to a more horizontal field.

    Parent said that Sidus has had many colocation opportunities and requests that the company couldn’t serve, while ByteGrid often had to partner for the requested services piece. Mencia said customers want the company to take on a bigger role and more responsibility. Many, in fact, eventually find it easier to hand over infrastructure management tasks to a trusted provider and get out of a capex model entirely.

    Sidus’ Maryland location especially appealed to ByteGrid because it touches two of the biggest market opportunities: healthcare and government. In addition to its bread-and-butter healthcare customer base, Sidus is involved in the Federal Risk and Authorization Management Program (FedRAMP), with solid footing in the government space.

    Maryland is a much smaller data center market living in the shadow of nearby Ashburn. Providers need a compelling sales message to compete with large neighbors like DuPont Fabros, RagingWire, Digital Realty, and Equinix. It’s hard to differentiate on a facilities basis and nearly impossible to compete on interconnection. Offering services is a big way to stay unique, so it doesn’t become a competition between wholesale in Maryland and wholesale in Ashburn.

    “Another driver these days is the increasing separation in the market between the top end and the middle of the pack,” said Structure Research Managing Director Philbert Shih. “With scale more crucial it is harder for small to mid-sized providers to compete without diversification and differentiation. In response, they get away from the scale game and pure play colocation and move into hosting and cloud services.”

    ByteGrid believes Maryland is a $500 million data center market growing at a 10 percent compound annual growth rate. The state excels in life sciences and biotech as well as government, arenas where customers often seek more than just wholesale space.

    8:30p
    vSphere 6, VMware’s ‘One Cloud’ Strategy Centerpiece, Enters Availability

    VMware’s “one cloud, any application, any device” strategy is afoot. The company on Monday announced general availability of VMware vSphere 6, its flagship suite of software tools for building clouds, along with its own OpenStack flavor and VMware Virtual SAN 6.

    VMware’s vSphere 6 acts as a wider hub for a mixed cloud infrastructure. While each product saw enhancements, the biggest changes involve several features and products all tied into the bigger hybrid picture.

    The aim is to provide a consistent environment across all cloud setups, be it private, public, or hybrid, in support of both modern and traditional applications. The marketing revolves around the “one cloud, any application, any device” adage.

    Initially announced in February, VMware vSphere 6 has been enhanced in a variety of ways, including new features—more than 650, according to the company—and deeper integrations. Its most touted feature is Virtual Volumes, a set of storage APIs for hooking in third-party storage arrays.

    Virtual Volumes also provides dynamic provisioning of capacity and data services for each virtual machine and makes it simpler to manage storage infrastructure. So far, VV has seen solid storage vendor support, so it will probably work with what you have. HP, IBM, NetApp, and Fujitsu have delivered vSphere Virtual Volumes-enabled products.

    Expect additional Virtual Volumes-enabled products in the second half of the year from Atlantis Computing, Dell, Hitachi Data Systems, NEC, NexGen, Pure Storage, Symantec, and Tintri.

    Additionally, the maximum numbers of hosts, memory, and virtual machines have been greatly expanded in vSphere 6. A cluster can support up to 64 hosts and 8,000 virtual machines, while a single vSphere hypervisor can support up to 480 physical CPUs, 12TB of RAM, and 1,000 virtual machines. vSphere 6 is also optimized for VMware Horizon 6, the company’s Virtual Desktop Infrastructure offering.
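
    As a quick way to reason about those limits, the sketch below checks a hypothetical cluster design against the maximums quoted above; the planned figures are invented for illustration, and the limits reflect only the numbers cited in this article.

        # Check a hypothetical cluster plan against the vSphere 6 maximums cited above.
        VSPHERE6_LIMITS = {
            "hosts_per_cluster": 64,
            "vms_per_cluster": 8000,
            "vms_per_host": 1000,   # per single hypervisor
        }

        plan = {"hosts_per_cluster": 48, "vms_per_cluster": 6000, "vms_per_host": 125}  # hypothetical

        for key, planned in plan.items():
            limit = VSPHERE6_LIMITS[key]
            status = "ok" if planned <= limit else "EXCEEDS LIMIT"
            print(f"{key}: planned {planned} / max {limit} -> {status}")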

    Other enhancements cover fault tolerance, high availability, and vMotion. Migrating and moving workloads is simpler, a key capability for taking advantage of a hybrid cloud environment.

    vSphere also received a facelift, with a better web-based user interface.

    Customers also benefit from VMware Integrated OpenStack, free for buyers of vSphere. It is a full OpenStack distribution with open APIs for accessing VMware infrastructure, and VMware packages, tests and supports all components of the distribution.

    Another vSphere enhancement is VMware Virtual SAN 6, a storage platform for virtual machines released in February. The new release introduced an all-flash architecture, snapshots, and rack awareness to protect against complete rack failures. The company claims version 6 has double the scalability and up to 4.5 times greater performance compared with the previous release.

    VMware vCloud Suite 6 integrates vSphere 6 with vRealize Automation 6.2 (cloud automation software formerly known as vCloud Automation Center) and vRealize Operations 6 (automating operations management) to deliver private cloud based on a software-defined data center architecture.

    The newest vCloud Suite release introduced automated virtual infrastructure cost and consumption reporting based on capabilities delivered within VMware vRealize Business 6 Standard.

    Finally, VMware vSphere with Operations Management 6 is an integrated platform and management solution. It provides predictive analytics to simplify infrastructure management, and automated recommendations and remediation capabilities.

    VMware customer MLB Network discussed its evolution in a press release.

    “Virtualization has completely changed how our broadcast IT interfaces with our infrastructure and allows us to scale to meet our business needs,” said Tab Butler, director of media management and post-production at MLB Network. “We have virtualized and automated our post-production workflows and infrastructure with VMware vSphere, helping us increase the speed and delivery of content while scaling our services to our end-user clients. We anticipate VMware vSphere 6 will further extend and enhance the performance and availability of our business-critical applications.”

    8:57p
    VCE Launches Converged Infrastructure With Scale-Out Capabilities

    VCE’s converged infrastructure offerings have expanded with a new line of appliances called VxBlock Systems. It features software-defined networking platforms, with a choice of either VMware’s NSX or Cisco’s ACI. The architecture in general has been tuned to scale out and scale up, with better management of multiple appliances.

    Converged infrastructure’s initial purpose was to consolidate multiple assets into one box. Now vendors are focused on providing better control and flexibility through software. Software-defined networking and scaling abilities make VxBlock more flexible and feasible for modern big data workloads or cloud infrastructure.

    Vision Intelligent Operations management software has been enhanced to better deal with multiple VxBlock systems, and in version 3.0 Intelligent Operations can define multiple VCE systems as a single pool of resources. Previously, each VCE appliance was viewed and treated individually.
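
    Conceptually, treating several appliances as a single pool simply means aggregating their capacities behind one view. The toy sketch below illustrates that idea; the appliance names and capacity figures are invented for the example, and this is not VCE’s Vision software.

        # Toy illustration of pooling several converged appliances into one resource view.
        # Appliance names and capacity figures are invented for the example.
        from dataclasses import dataclass

        @dataclass
        class Appliance:
            name: str
            cpu_cores: int
            memory_gb: int

        fleet = [
            Appliance("vxblock-01", cpu_cores=512, memory_gb=8192),
            Appliance("vxblock-02", cpu_cores=512, memory_gb=8192),
            Appliance("vxblock-03", cpu_cores=256, memory_gb=4096),
        ]

        pool = {
            "cpu_cores": sum(a.cpu_cores for a in fleet),
            "memory_gb": sum(a.memory_gb for a in fleet),
        }
        print(f"single pool across {len(fleet)} appliances: {pool}")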

    In essence, Intelligent Operations meshes multiple converged appliances with the ability to monitor the whole with unified intelligence.

    “VCE’s mission since inception has been to free IT organizations from expending resources and efforts managing infrastructure so they could deliver next-generation services to their end users,” said Praveen Akkiraju, CEO of VCE, in a press release. Akkiraju said the new set of platforms and solutions is just the first phase of major expansion for the portfolio.

    In the same release, Mark Bowker, senior analyst for the Enterprise Strategy Group, said the enhancement provides “an open-door design and ways for IT professionals to invest in VCE-enabled, software-defined data center solutions that provide strategic longevity.”

    As converged infrastructure becomes hyper-converged, the lines blur—even within EMC-owned companies, such as VMware’s similar offering EVO:Rail. The only real difference between converged infrastructure and a modular data center at this point is the inclusion of environmental controls and a container.

    VCE was born as a joint venture between EMC, VMware, and Cisco, but EMC recently took complete control of the company. Cisco now does its own converged infrastructure as well as partnering on it, and there’s also growing competition from the likes of SimpliVity, which recently raised a massive funding round.

