Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, May 26th, 2015

    4:34p
    EMC’s $1.2B Virtustream Acquisition Takes It Deeper Into Services

    One EMC official recently said it wasn’t “your father’s EMC,” and the company has now made a big acquisition that illustrates the point. The IT giant is acquiring cloud service provider Virtustream for $1.2 billion in a bid to strengthen its cloud services offerings. Virtustream will operate as EMC’s managed cloud services arm after the transaction closes.

    EMC continues to distance itself from the image of a legacy enterprise storage company. It recently open sourced a key software-defined storage controller called ViPR and released a free edition of the converged infrastructure software ScaleIO. Both of those moves suggest it sees its value in services and not necessarily in technology alone. With the Virtustream acquisition, it moves even deeper into services.

    Virtustream was founded in 2009. The company is focused on enterprise cloud in particular, meaning it provides hybrid cloud solutions with an emphasis on moving and managing mission-critical and production applications, with all the security, compliance, and performance SLAs required by enterprises.

    Virtustream’s stack is based on its xStream cloud management software for private, public, and hybrid clouds powered by Virtustream µVM technology.

    Activist investor Elliott Management has likely played a role in EMC’s evolution. Elliott and EMC have what could be called a tough-love relationship, albeit a healthy one. The two have feuded over what to do with VMware in particular: Elliott has pushed to spin it off, while EMC wants to retain its 80 percent controlling stake.

    The back-and-forth between company and investor is ongoing, but the two are playing nice. Pressure from EMC’s fifth-largest shareholder and the addition of two Elliott-backed members to its board are likely working behind the scenes to push the tech giant’s evolution.

    The transition is a necessary one, and the Virtustream acquisition lays the foundation for things to come. EMC’s sales growth has slowed for two consecutive quarters, and all of its fellow technology giants are going hard into services to offset the decline in traditional technology sales.

    The acquisition is expected to close in the third quarter and will have no material impact on earnings results until 2016, according to EMC.

    4:53p
    Equinix Expanding Two Hong Kong Data Centers

    Equinix is expanding two of its Hong Kong data centers, planning to invest more than $40 million in a third phase at its HK2 facility and a ninth phase at HK1. The expansion adds room for close to 1,200 cabinets in Hong Kong.

    This is the latest in a series of expansions across Asia Pacific, all part of the colocation giant’s wider global expansion effort. Other recent expansions in the region include building a $100 million data center in Sydney, its fifth in Australia, following the opening of a Melbourne facility late last year. The company also opened its largest facility in Singapore, and in Asia Pacific as a whole, in March.

    The expansion in HK2 is the bigger of the two, adding 900 cabinets and bringing total capacity to 2,350, while the additional phase in HK1 will add 275.

    There’s demand for data centers both from within Hong Kong — Equinix cites demand from online media and video content providers in particular — and from international customers. Hong Kong and Singapore are considered the two main entry points into mainland China and the Asia Pacific market in general.

    Hong Kong has high internet penetration, with 80 percent of the population connected and active online. More than two-thirds are active on social media, and a similar share use mobile platforms. Beyond end users, enterprises in general are migrating to the cloud in one form or another. Research firm IDC predicts the cloud market will grow 23 percent in 2015.

    A high internet penetration rate means digital and media content companies want to serve users from close proximity with low network latency, and Equinix’s Hong Kong data centers are network-dense. Its media and content customer base grew 16 percent year over year in 2014, according to the company.

    “With the strong momentum of cloud and content companies deploying in Hong Kong, as well as data center services demand from worldwide customers including many in China, it was a clear strategic business decision to expand our presence in Hong Kong,” said Alex Tam, managing director, Equinix Hong Kong. “The investment in HK1 and HK2 further positions Hong Kong as an important regional hub, not only for financial services firms but for cloud and content companies as well.”

    6:17p
    Hadoop and Big Data Storage: The Challenge of Overcoming the Science Project

    Andrew Warfield is the CTO of Coho Data and an associate professor of computer science at the University of British Columbia.

    This is Part I of a two-part series. Come back to read Part II next week.

    About two years ago, I started talking to Fortune 500 companies about their use of tools like Apache Hadoop and Spark to deal with big data in their organizations. I specifically sought out these large enterprises because I expected they would all have huge deployments, sophisticated analytics apps, and teams taking full advantage of data at scale.

    As the CTO of an enterprise infrastructure startup, I wanted to understand how these large-scale big data deployments were integrating with existing enterprise IT, especially from a storage perspective, and to get a sense of the pain points.

    What I found was quite surprising. With the exception of a small number of large installs — and there were some of these, with thousands of compute nodes in at least two cases — the use of big data tools in most of the large organizations I met with had a number of similar properties:

    Many Small Big Data Clusters

    The deployment of analytics tools inside enterprises has been incredibly organic. In many situations, when I asked to talk to the “big data owner,” I wound up with a list of people, each of whom ran an 8-to-12-node cluster. Organizational IT owners and CIOs have referred to this as “analytics sprawl,” and several IT directors jokingly mentioned that the packaging and delivery of Cloudera CDH in Docker packages was making it “too easy” for people to stand up new ad hoc clusters. They had the sense that this sprawl of small clusters was actually accelerating within their companies.

    Non-standard Installs, Even on Standard Distributions

    The well-known big data distributions, especially Cloudera and Hortonworks, are broadly deployed in these small clusters, as they do a great job of combining a wide set of analytics tools into a single documented and manageable environment. Interestingly, these distributions are generally used as a “base image,” into which all sorts of other tools are hand-installed. As an example, customizations for ETL (extract, transform, load) — pulling data out of existing enterprise data sources — are common. So are additions of new analytics engines (H2O, Naiad, and several graph analytics tools) that aren’t included in standard distributions. The software ecosystem around big data is moving so fast that developers are actively trying out new things and extending these standard distributions with additional tools. While agile, this makes it difficult to deploy and maintain a single central cluster for an entire organization.

    Inefficiencies and Reinvention

    Whether large or small scale, analytics environments are typically being deployed as completely separate silos alongside traditional IT, in their own racks and on their own switches. Data is bulk copied out of enterprise storage and into HDFS, jobs are run, and then results are copied back out of HDFS to enterprise storage. Separate compute infrastructure is being deployed to run analytics jobs, resulting in inefficiency and an effective doubling of both capital and operational costs. Finally, business continuity concerns, such as the availability of clusters and the protection of data, are being solved by installing physically duplicate clusters at multiple sites, each performing the exact same computation.
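    To make the copy-in, compute, copy-out pattern concrete, here is a minimal sketch of the kind of glue script that often sits around these silos. It assumes the standard hdfs and spark-submit command-line tools are available on an edge node; the paths, job script, and NAS mount are hypothetical examples, not anything from a specific deployment.

        # Sketch of the silo workflow described above: copy data into HDFS,
        # run the analytics job, then copy the results back out.
        # Paths and job names below are hypothetical.
        import subprocess

        def run(cmd):
            """Run a shell command and fail loudly if it returns non-zero."""
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # 1. Bulk-copy a nightly export out of enterprise storage (an NFS mount) into HDFS.
        run(["hdfs", "dfs", "-mkdir", "-p", "/ingest/sales/2015-05-26"])
        run(["hdfs", "dfs", "-put", "-f",
             "/mnt/enterprise-nas/exports/sales.csv", "/ingest/sales/2015-05-26/"])

        # 2. Run the analytics job against the copy that now lives in the HDFS silo.
        run(["spark-submit", "--master", "yarn", "sales_rollup.py",
             "/ingest/sales/2015-05-26", "/results/sales/2015-05-26"])

        # 3. Copy results back out of HDFS to enterprise storage for downstream consumers.
        run(["hdfs", "dfs", "-get",
             "/results/sales/2015-05-26/part-*", "/mnt/enterprise-nas/reports/"])

    Every byte in that flow exists at least twice, which is exactly the capital and operational doubling described above.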

    More Than One Way to Build Big Data

    It’s important to point out that none of these things are necessarily wrong: As companies are in the early stages of exploring big data tools, it makes complete sense that things happen in an organic and grassroots manner. However, as these tools start to bear fruit and become critical parts of business logic, their operational needs change quickly. It was remarkable to me that in many of my conversations — both with analytics cluster owners and with traditional IT owners — the state of big data in the organization was described as “a bit of a science project”; not in a necessarily negative way, but certainly as a way of characterizing the isolated and ad hoc nature of cluster deployments.

    As a result of all this, one of the most significant challenges facing enterprise IT teams today is how to efficiently support and enable the “science” of big data, while providing the confidence and maturity of more traditional (and often better understood) infrastructure services. Big data needs to become a reliable and repeatable product offering – not to mention efficient and affordable – within IT organizations in the same way that storage, virtual machine hosting, databases and related infrastructure services are today.

    From Science Project to Data Science Product

    So how do we get there? One thing to make clear is that this isn’t simply a matter of choosing an appropriate big data distribution. The fluidity of big data software stacks and the hands-on nature of development practices are not going to change any time soon.

    For big data science projects to evolve into viable, efficient solutions, it will take as much rethinking from vendors as it will from the companies deploying these solutions. This evolution is happening quite rapidly as vendors provide infrastructure solutions that bridge the gap between web-scale approaches and traditional data center architectures.

    I’m eager to see how these changes allow companies of all shapes and sizes to further leverage big data to grow their businesses and better serve and understand customers.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    7:12p
    VMware Doubles Size of EVO:RAIL Clusters

    VMware has doubled the number of virtual appliances that can be configured using VMware EVO:RAIL software, going from four to eight, for a total of 32 nodes in a single cluster.
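    The figures imply four nodes per appliance, so the cluster math is a simple multiplication; a quick illustrative sanity check:

        # Each EVO:RAIL appliance contributes four nodes, per the figures above.
        NODES_PER_APPLIANCE = 4
        for appliances in (4, 8):
            print(appliances, "appliances ->", appliances * NODES_PER_APPLIANCE, "nodes per cluster")
        # 4 appliances -> 16 nodes per cluster
        # 8 appliances -> 32 nodes per cluster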

    Mornay van der Walt, vice president of the EVO:RAIL Group for VMware, says that over time EVO:RAIL will fundamentally shift the focus of the data center away from servers to instances of hyperconverged virtual appliances.

    “Rather than thinking in the context of servers, we’re trying to get people to embrace appliances,” said van der Walt. “The goal is to be able to scale out using a set of repeatable, reliable processes.”

    In the third quarter of this year, van der Walt says, VMware will extend the size of an EVO:RAIL cluster by making use of the recently unveiled VMware vSphere 6 virtualization and VMware VSAN 6 platforms underneath it to provide even higher levels of scale.

    Fundamentally, he says, EVO:RAIL changes the delivery and consumption model surrounding IT infrastructure in the data center. Such appliances are not only simpler to scale out, but they also provide a non-disruptive method for introducing patches and updates into the environment. As a result, the overall cost of maintaining data center environments drops significantly.

    EVO:RAIL appliances are actually the building blocks that IT organizations will rely on to make the shift to deploying software-defined data centers, according to him.

    Of course, as with any technology shift, there is a learning curve involved in making the transition to EVO:RAIL. The worst thing IT organizations can do, says van der Walt, is treat it as if it were a server.

    IT organizations need to make sure the network is configured properly before installing it. But once they get past that initial deployment, each appliance can either be configured to be automatically included in an existing cluster or be deployed as the foundational element of a new EVO:RAIL cluster.

    VMware is not the only vendor painting a picture of a bright SDDC future. But in terms of IT management expertise, VMware is clearly among those that are furthest along. The challenge facing IT organizations is determining not only the rate at which they want to make the transition to an SDDC world, but also how much they want to rely on a single vendor to get them there.

    8:09p
    Firefighters Put Out Fire at Future Apple Data Center in Arizona

    Emergency crews Tuesday morning put out a fire on the roof of a building in Mesa, Arizona, slated to become an Apple data center.

    It took about 30 minutes to put the fire out, Forrest Smith, deputy chief and public information officer at Mesa Fire and Medical, said. There were no reports of injuries, but between 40 and 50 people were evacuated from the building, he added.

    The cause of the fire was undetermined as of early afternoon Tuesday, but investigators were leaning toward a problem with solar panels on the building’s roof. “We are suspecting it may have been some solar panels,” Smith said.

    The fire did not spread to the building’s interior, but there was a partial roof collapse over the loading-dock area, he said.

    About 100 firefighters were on the scene from Mesa and surrounding communities.

    It is unclear how far along the data center build-out at the site has progressed. Apple representatives did not respond to a request for comment.

    In February, Apple and local officials announced the company’s plans to spend $2 billion on converting the former 1.3 million-square-foot manufacturing plant into a data center. They said the entire building would be powered by renewable energy, including solar.

    The bulk of the solar energy in the mix would not come from rooftop panels, however. It would be supplied by a solar farm being built nearby.

    The building used to be occupied by GT Advanced Technologies, a former supplier of materials for Apple’s smartphone screens that filed for bankruptcy last year. A legal dispute between Apple and GT ended with a settlement last December.

    In January, a roof fire broke out at the construction site of an Amazon Web Services data center in Ashburn, Virginia.

    10:44p
    Ciena Intros Data Center Interconnect for Web Scale

    Responding to the demand created by massive growth in inter-data-center traffic, Ciena introduced Waveserver, a new data center interconnect platform powered by its WaveLogic3 Extreme chipset and new web-scale software. With the Waveserver’s compact design and Ciena’s new Emulation Cloud prototyping environment, the company is prepared to address the global DCI market, which, according to Ovum, reached $2.5 billion in revenue in 2014.

    When compared to available competing platforms, Ciena says, the new Waveserver provides 60 percent more capacity per rack unit and nearly 20 terabits per fiber. Meant for metro DCI applications that carriers, content providers, and data center providers need, the new stackable interconnect system delivers a reliable high-capacity link to interconnect multiple data centers within a metro area, according to the company.

    Able to support a mix of 10, 40, and 100GbE clients, the system provides a total of 800G of input and output in just one rack unit, made up of 400G of client capacity plus 400G of line capacity, Ciena says. The company says the Waveserver can deliver 19.2 terabits per fiber pair, using 96 channels of 200G.
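    The headline figures are easy to verify with back-of-the-envelope arithmetic; the numbers below are simply those quoted by Ciena, restated as a calculation:

        # Per-fiber-pair capacity: 96 channels at 200 Gbps each.
        channels_per_fiber_pair = 96
        gbps_per_channel = 200
        print(channels_per_fiber_pair * gbps_per_channel / 1000, "Tb/s per fiber pair")  # 19.2

        # Per-rack-unit I/O: 400G of client capacity plus 400G of line capacity.
        client_capacity_g = 400
        line_capacity_g = 400
        print(client_capacity_g + line_capacity_g, "G total I/O per rack unit")  # 800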

    Ciena was able to leverage experience from clients using its 6500 Packet-Optical platform, such as Equinix, to build Waveserver. Jay Pabley, vice president of global network engineering at Equinix, said, “Ciena’s Waveserver will enable us to provide more capacity and more efficient data center interconnect, using significantly less space and power, which will help us to cost-effectively service our customers and rapidly grow our web-scale network.”

    With its new DCI platform, Ciena also launched the Emulation Cloud open application development environment to cater to DevOps teams. The sandbox allows anyone to create, test, and fine-tune customized web-scale applications in a hosted cloud environment. Ciena also notes that the Waveserver utilizes its OPn architecture approach and a set of open APIs with which network providers can program Waveserver manually or remotely to quickly establish a connection via any smart device.

    10:59p
    Next-Generation Cloud and the Power of the Data Center

    With so many new types of cloud services emerging, people are getting their share of the cloud from all sorts of vendors. New types of content delivery architecture, cloud models, and even the emergence of IoT/IoE are all re-shaping how data center resources are being used. In working with the cloud, many administrators forget where these services originate and where the cloud is really housed.

    By the same token, data centers all over the world are pushing hard to catch up with the demands of the current market. Let’s face it: it’s no wonder so many data center providers are saying “It’s good to be in the data center business right now.”

    A recent Cisco report points out that an important traffic enabler in the rapid expansion of cloud computing is the increase in data center virtualization, which provides services that are flexible, fast to deploy, and efficient. By 2018, more than three-fourths of all workloads will be processed in the cloud. Additional trends influencing the growth of cloud computing include the widespread adoption of multiple devices combined with increasing user expectations to access applications and content anytime, from anywhere, over any network. To address these rising user demands, cloud-based services such as consumer cloud storage are gaining momentum. By 2018, more than 50 percent of the consumer internet population will be using personal cloud storage.

    In their efforts to catch up with today’s on-demand industry, data centers have had to adapt to new workloads and greater bandwidth requirements, and to explore infrastructure multi-tenancy.

    • SDN/NFV Creating New Ways to Network. This is all about network efficiency. By literally virtualizing the networking layer, data centers have been able to create highly connected environments spanning the globe. What we’re able to do with layer 2-7 devices now is pretty amazing. Plus, our ability to create hundreds and even thousands of virtual connections, all from one network controller, further enhances cloud connectivity. Logical network segmentation has allowed data centers to thrive by providing dedicated services from intelligent switching technologies. Furthermore, new networking architectures allow administrators to truly understand the DNA of their data centers. In turn, they can create powerful automation policies, set better QoS standards, and even improve network security.
    • Consolidation and Cloud. High-density computing has played a big role in the data center’s movement to the cloud. New kinds of hyperconverged systems and unified computing platforms are creating highly efficient and highly scalable environments. With advanced virtualization now available, we’re able to fit even more users, desktops, and applications per server. This type of multi-tenancy simplifies the rack environment and allows for easier management. It also helps organizations become greener, which brings us to the next point.
    • Going Greener. In moving to cloud platforms and providing cloud services, the data center environment became a focal point for new resource demands. So data centers began to deploy more efficient technologies to support more users and run more economically. That economic pressure also paves the way for greener technologies. Given the current focus on the environmental effects of data centers in today’s “green” culture, many data center providers are taking a closer look at ways to improve their cooling and power efficiency. For example, data centers are replacing constant-speed pumps and fans in their cooling plants with variable-frequency motors that can more accurately match cooling supply to demand. They’re also investing in smart, automated ways to configure and operate their cooling plants in response to data floor and outside temperatures and humidity. Google is a great example of this. Its fleet-wide PUE has dropped significantly since the company first started reporting numbers in 2008. The trailing twelve-month energy-weighted average PUE for all Google data centers is 1.12, making them among the most efficient in the world (a quick worked example of the PUE calculation follows this list).
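    PUE (power usage effectiveness) is simply total facility energy divided by the energy delivered to IT equipment, so a lower number means less overhead. A minimal worked example, using made-up meter readings rather than any figures from Google:

        # PUE = total facility energy / IT equipment energy.
        # The sample numbers below are illustrative only.
        def pue(total_facility_kwh, it_equipment_kwh):
            return total_facility_kwh / it_equipment_kwh

        # A facility drawing 1,120 MWh overall while its IT load consumes 1,000 MWh
        # reports a PUE of 1.12, the same level as the fleet-wide average cited above.
        print(pue(1120000, 1000000))  # 1.12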

    New market demands mean new kinds of opportunities for those data center providers that can keep up. Cisco points out that many enterprises will adopt a hybrid approach to cloud as they transition some workloads from internally managed private clouds to externally managed public clouds. All three cloud service delivery models (IaaS, PaaS, and SaaS) will continue to grow as more and more businesses realize the benefits of moving to a cloud environment. Here’s the thing: there’s going to be even more emphasis on the data center’s impact on the cloud. Just take a look at what’s happening with mobility, IT consumerization, and, of course, big data. This means that as the market and the cloud evolve, your data center architecture will be the underlying foundation for it all. The winners will be the ones who can evolve with demand and continue to provide an architecture that is economical, scalable, and, in many cases, environmentally friendly as well.

