Data Center Knowledge | News and analysis for the data center industry

Monday, January 5th, 2015

    1:00p
    What Pundits Think: Cloud Predictions 2015

    While some trends in the cloud market in 2015 will signal maturity – consolidation and cloud service portfolio pruning, for example – others are shifts driven by forces outside the market itself. Examples of the latter include a growing focus on Asia and the effect of new data privacy legislation around the world on cloud decisions.

    As a follow-up to the list of data center predictions we posted in December, here are some of the most interesting cloud predictions for this year from IDC, 451 Research, IO, and Dell.

    1) Data Privacy Laws to Shape Cloud Decisions

    Revelations about wide-ranging government spying on electronic communications around the world – which emerged from document leaks by former U.S. National Security Agency contractor Edward Snowden beginning in 2013 – have put data privacy in the spotlight, prompting a wave of new legislative initiatives aimed at protecting it.

    According to the market research firm IDC, the IT world will really feel the results of this in 2015. This year, 65 percent of criteria for placing enterprise cloud workloads around the world will be shaped by organizations trying to comply with data privacy legislation, IDC analysts said in one of their 2015 cloud predictions.

    2) Enterprises Gradually Grow Comfortable with Open Source

    Do not expect an enterprise bonanza for open source technologies this year, but there is a gradual shift toward more adoption, according to IDC. The analysts predict that 20 percent of enterprises will see enough value in community-driven open source standards and frameworks to adopt them strategically by 2017.

    Some of the most prominent examples of open source projects enjoying strong enterprise momentum are the cloud architecture OpenStack, the Platform-as-a-Service project Cloud Foundry, the application container standard Docker, and the software-defined networking standard OpenFlow.

    3) Personalized IT Automation on Horizon

    The past couple of years saw the rise of IT automation, with IT organizations increasingly adopting the DevOps approach, in which infrastructure configuration is automated to support a continuous software release schedule.

    Over time, however, automation is going to get more granular. About one-quarter of IT organizations will support a “consumer tier” by 2017, according to IDC, which means workers will be able to develop their own personal automation.

    4) Fewer IaaS Offerings

    The roll-out of new IaaS offerings and providers over the past two or three years was unrelenting. IDC now expects providers to make wholesale changes to their service portfolios: the analysts said 75 percent of IaaS provider offerings will be redesigned, rebranded, or phased out over the next 12 to 24 months.

    5) Massive Service Provider Consolidation

    A massive multi-year consolidation process among data center, colocation, and cloud service providers will start this year, predicts George Slessman, CEO and co-founder of the Phoenix-based colocation service provider IO. Consolidation is nothing new to any of these business verticals, but the process Slessman is talking about will culminate in the formation of a “global data center and application logistic network and system.”

    6) Applications’ New Home: Mix of Cloud and Colo

    More than 90 percent of all apps created in 2015 will run in cloud or “hybrid colocation” environments, according to Slessman.

    Lots of colocation providers have been pursuing the demand for hybrid solutions, where customers can combine their own servers with cloud services. Some of them, companies like CenturyLink and, to a lesser extent, IO, have developed their own cloud services. Others, like Equinix and Telx, have made their data centers hubs for access to cloud services by lots of different providers.

    7) Internet’s Center Will Move to Asia

    Chinese Internet giant Alibaba, which went public in 2014, will become the largest company in the world one year after the IPO, Slessman said. That will represent “the definitive move of the Internet’s center to Asia.” Alibaba will do for Asia what Intel did for Silicon Valley, he said.

    Alibaba operates an e-commerce website similar to Amazon.com, but, like the Seattle, Washington-based giant, it also has a cloud services business, called AliCloud. It operates data centers in China and Hong Kong and provides services ranging from raw cloud compute to sophisticated cloud-based data analytics.

    8) Cloud Security Spending Will Remain High

    Security spending is up again and will keep climbing in 2015, another market research company, 451 Research, predicted. Mergers and acquisitions, IPOs, venture capital, and private equity funding in cloud security will continue at or near record levels, the analysts said.

    While good for the industry, the trend only means that along with new developments in IT, new vulnerabilities will appear that will demand new security products. “Security is in large part reactive,” 451 analysts said in one of their six cloud predictions for 2015, adding that security generally follows growth in IT and mobility about two years behind.

    9) Private Cloud Agility Will Sink In

    Agility is one of the main benefits of cloud, and in 2015 the agility of private cloud infrastructure will finally sink in with CIOs, according to Dell. The ability to remove converged infrastructure modules as needed will make private clouds an even more agile and attractive option for them.

    10) Value of Specialized IT Knowledge Will Shrink

    As more and more IT is outsourced, companies will rely less on admins with highly specialized certifications, according to Dell. Convergence and virtualization have enabled companies to focus more on what they can do with all the IT resources available to them as a whole, rather than spending a lot of resources on each individual piece of infrastructure.

    2:30p
    Social Year in Review: The 10 Most Shared Data Center Stories of 2014

    While the likes of Facebook, Google, Apple and others were busy making headlines in 2014, our readers were busy sharing the news. Here’s a look back at the 10 most shared articles on Data Center Knowledge during 2014. Enjoy!

    Facebook Turned Off Entire Data Center to Test Resiliency – A few months ago, Facebook added a whole new dimension to the idea of an infrastructure stress test. The company shut down one of its data centers in its entirety to see how the safeguards it had put in place for such incidents performed in action.

    Ten of the Strangest Data Center Outages – Every once in a while, utility power goes out and the backup systems fail, or a technician makes a mistake, and the data center goes down. While outages have become less frequent, as the industry’s practices continuously improve, things still occasionally go wrong.

    Hurricane Sandy was a freak of nature that caused floods in many basements with critical infrastructure, some bringing down data centers in those buildings for days.

    Google Dumps MapReduce in Favor of New Hyper-Scale Analytics System – Google has abandoned MapReduce, the system for running data analytics jobs spread across many servers that the company developed and later open sourced, in favor of a new cloud analytics system it has built called Cloud Dataflow.

    Facebook Breaks the Network Switch Up Into Modules – After it customized servers and storage to optimize for its applications and to enable its developers to roll out new software features at lightning speeds, networking switches were the remaining component of Facebook’s infrastructure that was a “black box,” with tightly coupled vendor-designed proprietary software and hardware.

    IBM SoftLayer London Data Center Close to Launch – In January, IBM SoftLayer, Big Blue’s cloud services division, laid out a plan to open 15 new data centers as part of a $1.2 billion global investment to strengthen its cloud play around the world. Today the company announced a big step in that plan: a data center in London set to launch in early July that will satisfy customers that care about keeping their data within UK borders.

    Fire at Bitcoin Mine Destroys Equipment – Reports of a fire in a warehouse-style facility in Thailand have raised tough questions about the wisdom of hosting expensive bitcoin mining equipment in low-cost warehouse facilities.

    Photo of the devastation being circulated on social media (Reddit, BitcoinTalk)

    Google Using Machine Learning to Boost Data Center Efficiency – Google is using machine learning and artificial intelligence to wring even more efficiency out of its mighty data centers.

    Apple To Build Third Solar Farm Near North Carolina Data Center – Apple is planning a third solar farm in the vicinity of its Maiden, North Carolina, data center.

    One of Apple’s solar farms in North Carolina. (Image: Apple)

    The SUPERNAP Goes Global, as Switch Adds International Partners – The SUPERNAP is going global. Colocation pioneer Switch has formed a joint venture to launch an international expansion, teaming with Accelero Capital Holdings and Orascom TMT Investments (OTMTI) to build SUPERNAP data centers around the world.

    The Home Data Center: Man Cave for the Internet Age – In the ultimate manifestation of the “server hugger” who wants to be close to their equipment, a number of hobbyists and IT professionals have set up data centers in their homes, creating server rooms in garages, basements, and home offices.

    4:30p
    IT as Business’ Secret Sauce and Other 2015 Predictions

    John Matthews is the CIO of ExtraHop, the global leader in real-time wire data analytics for IT intelligence and business operations.

    Technology now infiltrates every aspect of business operations. Whether it’s managing human resources, analyzing marketing efforts, enabling a multichannel retail strategy, or delivering instant access to patient records and medical resources, organizations are deeply reliant on technology resources.

    As a CIO, I’ve spent my 20-year career watching information technology inexorably expand its reach across organizations. I know how powerful these technologies can be in advancing business objectives, but I also know that poor management of IT operations and resources can be a tremendous hindrance.

    In 2015, expect to see not only continued growth of IT resources dedicated to business operations, but rapid adoption of solutions aimed at helping IT itself become a strategic weapon to help the business do business better. Here are a few of my 2015 predictions.

    IT Gets Smart About Operations with Multisource Data Analytics

    Agent data has long been one of the key sources of insight for IT operations teams looking to ensure the performance and availability of business-critical applications. Likewise, network performance monitoring (NPM) solutions that monitor NetFlow and sFlow provide a similar view at the network layer. But these approaches have long had their challenges. Servers throw out a deluge of alerts, and sifting through that data for any meaningful insight requires that you be either very smart or very lucky. Trying to sift through it in real time? That’s an even bigger hurdle.

    In 2015, expect to see real investment by enterprise IT in analytics technologies for a broader array of IT data sets, including machine data and wire data. The reality is that no single lens is going to provide IT with the insight it needs to turn itself into a well-oiled machine. You need multiple data sets that provide not only different points of view, but context and correlation for each other as well.
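
    To make the idea of correlation across data sets concrete, here is a minimal, self-contained Python sketch. The event records, field names, and time window are hypothetical illustrations, not any vendor's actual schema or product behavior.

        # Hypothetical sketch: correlating "machine data" (server log events)
        # with "wire data" (observed network transactions) by host and time.
        from datetime import datetime, timedelta

        log_events = [  # machine data: alerts emitted by servers
            {"ts": datetime(2015, 1, 5, 10, 0, 12), "host": "app01", "msg": "GC pause 2.1s"},
            {"ts": datetime(2015, 1, 5, 10, 3, 40), "host": "db01", "msg": "slow query"},
        ]
        wire_records = [  # wire data: transactions seen on the network
            {"ts": datetime(2015, 1, 5, 10, 0, 14), "host": "app01", "latency_ms": 2300},
            {"ts": datetime(2015, 1, 5, 10, 5, 1), "host": "app02", "latency_ms": 45},
        ]

        def correlate(logs, wire, window=timedelta(seconds=30)):
            """Pair each network transaction with log events on the same host
            that occurred within `window` of it, so each data set gives the
            other context."""
            pairs = []
            for w in wire:
                for e in logs:
                    if e["host"] == w["host"] and abs(e["ts"] - w["ts"]) <= window:
                        pairs.append((w, e))
            return pairs

        for w, e in correlate(log_events, wire_records):
            print(w["host"], w["latency_ms"], "ms transaction near log event:", e["msg"])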

    Integration Mission Critical for Enterprise IT

    The Internet of Things (IoT) and the move toward a digital business model are changing enterprise IT architectures. This means application integration, data integration, and the integration of both applications and data are quickly becoming top priorities for enterprises.

    The need to have applications that seamlessly interact with other applications and data that can be leveraged across multiple platforms is going to dominate buying decisions and budget priorities in 2015. Expect to see IT stakeholders focusing on vendors that embrace this shift with solutions that streamline and enable broad integration with other applications and datasets.

    Big Data Gets Much-Needed Definition

    Just a few short years ago, the term “cloud” meant any number of things depending on the person using it. As the market matured, segmentation brought greater clarity, partitioning cloud into broad IaaS, SaaS, and PaaS categories. Big data is on the precipice of a similar segmentation. Right now, the term is used so broadly and applied to so many things that it has become unwieldy. In 2015, expect to see greater clarity emerge around big data as it increasingly becomes an umbrella term encompassing a few well-defined segments.

    M&M Security Model Gets Data Boost

    Unless your head was firmly embedded in the sand in 2014, you know that security has become a big problem for both IT and the business. As if Heartbleed and Shellshock weren’t heart-attack- and shock-inducing enough, significant hacks of major businesses should have put every IT operations and security professional on notice: what you’re doing probably isn’t enough. If 2015 is the year when IT truly has its coming-out party as the secret sauce of the business, security will have to be a major component of newer, more robust IT operations architectures.

    The old “M&M” model of security – a hard exterior protecting the soft interior – is clearly insufficient. Even adding an extra “M” to that equation – in this case, for machine data (log files) – won’t be enough. Perimeter defenses are being breached, and system self-reporting can too easily be compromised.

    In 2015, expect to see greater demand among enterprise IT and security teams for pervasive monitoring and anomaly detection systems that provide visibility into the IT infrastructure. In order to meet this demand, entrenched security vendors will also seek out technology partnerships that allow them to offer best-of-breed monitoring capabilities.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:37p
    Report: BlackRock’s New York State Data Center Construction Set to Start

    The $80 million data center construction project planned by asset management firm BlackRock in Amherst, New York, is scheduled to start before the end of February, Buffalo Business First reported. The project and Amherst site selection were officially announced in July.

    Amherst was chosen because of various tax incentives available in the area. It will be one of the largest data center projects and private sector-driven developments the area has seen.

    Tax breaks are a common way state and local governments lure data center construction projects, viewed as effective boosts to local economies.

    The 31,000 square foot building will be located in CrossPoint Business Park, home to a growing cluster of financial institutions backed by good fiber and a reliable power grid. The region also features low-cost hydropower.

    The incentive package saves BlackRock $6.8 million in sales tax, $2.15 million in property tax, and about $165,000 in mortgage recording taxes. BlackRock is spending close to $16 million in new-build expenses in addition to $1.3 million for the land, and over $16 million for non-manufacturing equipment.

    The multinational corporation reports assets of more than $4.32 trillion through its global investment and risk management services and has more than 11,000 employees around the world.

    7:27p
    Report: Google Plans $66M Taiwan Data Center Investment

    Google is planning to invest an additional $66.4 million into its data center in Taiwan’s Changhua County, one of two Google data centers in Asia.

    A government investment commission announced in late December that it had approved the Mountain View, California-based Internet giant’s investment proposal, Focus Taiwan reported. Local press reported in October that the company was planning a major data center expansion at the site.

    Google announced plans to build data centers in Taiwan, Singapore, and Hong Kong in September 2011. The company later scrapped its Hong Kong plans, but both Singapore and Taiwan facilities came online in December 2013.

    The Google data center in Taiwan, sitting on 15 hectares of land the company bought in 2011, initially represented a $600 million investment.

    One of the unique features of the facility, operated by about 60 full-time employees, is its cooling and thermal energy storage system. It chills water at night, when outside temperatures are generally lower, and stores it in insulated tanks to keep it cool, according to Google.

    Water from the tanks is circulated through the data center’s cooling system during the day, which minimizes the amount of energy required for mechanical cooling. This was the first data center in the company’s fleet to use such a system.

    Google said it chose Taiwan for one of its two Asian data centers because the country is a high-tech hub with reliable infrastructure. Its government supports innovation and foreign investment and fosters an “accommodating regulatory environment.”

    In addition to the two facilities in Asia, Google has six data centers in the U.S., one in South America (Chile), and four in Europe.

    7:59p
    Understanding Application Containers and OS-Level Virtualization

    Let’s imagine for a minute that you have a commonly used virtual hosting environment. Within that environment you have to securely segment your physical resources between lots of users. The users must be segmented on the virtual environment and have their own “virtual space.”

    Now, to manage these users and their respective resources, you deploy a powerful tool that allows the kernel of the operating system to have multiple user space instances, all isolated and segmented from each other. These user space instances, also known as containers, allow the user within the container to experience operations as if they’re working on their own dedicated server. The administrator with overall rights to these containers can then set policies around resource management, interoperability with other containers within the operating system, and required security parameters. From there, the same administrator can manage as well as monitor these containers and even set automation policies around dynamic load-balancing between various nodes.

    All of this defines operating system-level virtualization.
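
    As a concrete (if simplified) illustration of the resource policies described above, the following Python sketch uses the Linux cgroup v1 filesystem directly to cap the memory available to a group of processes. It assumes a Linux host with cgroups mounted at /sys/fs/cgroup and root privileges; real container platforms wrap this kind of kernel plumbing for you, and the group name and limit here are arbitrary examples.

        # Simplified sketch: enforcing a per-"container" memory policy via the
        # Linux cgroup v1 filesystem (requires Linux, cgroup v1, and root).
        import os

        CGROUP = "/sys/fs/cgroup/memory/demo_container"  # arbitrary group name

        def create_memory_limited_group(limit_bytes, pid):
            os.makedirs(CGROUP, exist_ok=True)
            # Cap the memory available to every process placed in this group.
            with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
                f.write(str(limit_bytes))
            # Move the target process (and, from now on, its children) into the group.
            with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
                f.write(str(pid))

        if __name__ == "__main__":
            # Limit the current process to 256 MB of memory.
            create_memory_limited_group(256 * 1024 * 1024, os.getpid())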

    Now let’s dive a bit further. Operating system-level virtualization is a great tool to create powerfully isolated multitenant environments. However, there are some new challenges organizations are facing when working with containers and operating system-level virtualization:

    • What if you have an extremely large number of containers, all requiring various virtual machine resources?
    • What if you need to better automate and control the deployment and management of these containers?
    • How do you create a container platform capable of running on more than just a single Linux server?
    • How do you deploy a solution capable of operating on premises, in a public or private cloud, and everywhere in between?

    We can look at a very specific example of where application containers are making a powerful impact. Technologies like Docker are now adding a new level of abstraction, as well as automation, to operating system-level virtualization on Linux servers. Docker implements isolation features using kernel cgroups, allowing independent containers to run within a single Linux instance. When working with a large number of containers spanning many different nodes, this helps eliminate the additional overhead of starting new virtual machines.
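
    For a sense of what that looks like in practice, here is a minimal sketch using the Docker SDK for Python. The image name and limit values are arbitrary examples; under the hood, Docker translates these limits into cgroup settings like the ones shown earlier.

        # Minimal sketch: starting an isolated container with explicit CPU and
        # memory limits via the Docker SDK for Python ("pip install docker").
        # Assumes a local Docker daemon; image and limits are arbitrary examples.
        import docker

        client = docker.from_env()

        container = client.containers.run(
            "ubuntu:14.04",      # base image
            "sleep 300",         # workload to run inside the container
            detach=True,
            name="demo-worker",
            mem_limit="256m",    # memory cgroup limit
            cpu_period=100000,   # CPU cgroup: 100ms scheduling period...
            cpu_quota=50000,     # ...with at most 50ms of CPU time (half a core)
        )

        print(container.id, container.status)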

    With all of this in mind, it’s important to understand that more organizations are deploying workloads built on Linux servers. These workloads are running large databases, mining operations, big data engines, and much more. On these Linux servers, container utilization is also increasing. Here are some ways a platform like Docker can help:

    • Greater container controls. Application containers help abstract the operating system-level virtualization process. This gives administrators greater control over provisioning services, tighter security and process restrictions, and even more intelligent resource isolation. The other big aspect is allowing and controlling how containers spanning various systems share resources.
    • Creating distributed systems. Platforms like Docker allow administrators to manage containers, their tasks, running services, and other processes across a distributed, multi-node system. When there is a large system in place, Docker allows for a “resource-on-demand” environment, where each node can obtain the resources it needs, right when it needs them. From there, you begin to integrate with systems requiring large amounts of scale and resources, such as MongoDB. With that in mind, big data platforms now span a number of different, highly distributed nodes. These nodes may be located in a private data center, within public clouds, or at a service provider (a sketch of this multi-node pattern follows this list). How do you take your containers and integrate them with the cloud?
    • Integration with cloud and beyond. In June, Microsoft Azure added support for Docker containers on Linux VMs, enabling the broad ecosystem of “Dockerized” Linux applications to run within the Azure cloud. With even more cloud utilization, container systems using Docker can also be integrated with platforms like Chef, Puppet, OpenStack, and AWS. Even Red Hat recently announced plans to incorporate advanced Linux tools such as systemd and SELinux into Docker. All of these tools allow you to span your container system beyond your own data center. New capabilities allow you to create your own hybrid cloud container ecosystem spanning your data center and, for example, AWS.
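
    The multi-node pattern referenced in the list above might look something like the following sketch, which drives the Docker remote API on several hosts from one management script. The node addresses, port, and image are hypothetical, and each node would need its Docker API exposed and properly secured for this to work.

        # Hypothetical sketch: managing containers across several nodes by
        # pointing Docker SDK clients at each host's (secured) remote API.
        import docker

        NODES = ["tcp://node1.example.com:2376", "tcp://node2.example.com:2376"]

        def deploy_everywhere(image, command=None):
            """Start the same containerized task on every node in the pool."""
            for url in NODES:
                client = docker.DockerClient(base_url=url)
                container = client.containers.run(image, command, detach=True)
                print(url, "started", container.short_id)

        deploy_everywhere("mongo:latest")  # e.g., one MongoDB container per node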

    Docker and other open source projects continue to abstract operating system-level virtualization and are allowing Linux-based workloads to be better distributed. Technologies like Docker help pave the way for container management and automation. In fact, recognizing that so many environments run a heavy mix of Linux and Windows servers, Microsoft plans to integrate and support Docker containers in the next wave of Windows Server, making Docker-based solutions available across both Windows Server and Linux. If you’re working with a container-based solution running on Linux servers, start taking a look at how application containers can help evolve your ecosystem.

    9:53p
    Video Tour: Google Data Center in South Carolina

    Providing a rare look inside its data center operations, Google recently posted a video describing its data center in Berkeley County, South Carolina, including descriptions of the facility’s cooling system and security measures.

    The company announced it would build the South Carolina data center in 2007. Including an expansion project in 2013, Google’s total investment in the site amounts to $1.2 billion.

    Google spends tons of money on data center infrastructure. Just this morning we reported on a $66 million data center expansion the company is planning in Taiwan.

    If you don’t have the five and a half minutes to watch it, some of the more interesting things (besides the usual foosball tables and techs riding scooters in server aisles) in the Google data center video are:

    • Onion layers of security clearance: Very few Google employees have access to its data center campuses. But even if you can enter the campus, there are different levels of clearance required the closer you get to the heart of the facility. You need separate clearances to enter the building, to enter the corridor that leads to the data center, and then to enter the data center itself.
    • Intrusion detection via laser beams: The most secure areas of the data center have underfloor laser beams that will detect movement.
    • Unusual cooling system: Google’s vice president of data centers Joe Kava says in the video that in the six or so years that he’s been with the company, it has changed its data center cooling technology at least five times. The latest cooling system in the South Carolina data center features copper coils at the top of hot aisles that circulate cold water. Hot exhaust air rises to the top and goes through the coils, warming the water, which is then pushed to cooling towers outside before returning.
    • Hard drives destroyed by wood chipper: Google has explained its hard drive destruction processes before. In this most recent video, Kava says old hard drives that cannot be verifiably wiped clean are crushed and then shredded to bits by an industrial-grade wood chipper.

    And here’s the video tour of the Google data center in South Carolina:

    10:00p
    New Information Revealed Around Amazon’s New C4 Cloud Instances

    This article originally appeared at The WHIR

    It appears that information about Amazon Web Services’ upcoming “compute-optimized” C4 cloud instances was unintentionally revealed in an AWS RSS feed update.

    According to VentureBeat, a blog post announcing the launch of C4 went out in an RSS feed but was quickly taken down.

    This post showed a pricing structure and the volume of dedicated Elastic Block Storage (or EBS) throughput available for C4 instances, which had not yet been revealed publicly.

    Instances range from the c4.large instance at 11.6 cents per hour on-demand with 500 Mbps dedicated EBS throughput to the c4.8xlarge instance at $1.848 per hour with 4 Gbps EBS throughput.
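
    For scale, a back-of-the-envelope calculation from those leaked on-demand rates (assuming a 730-hour month of continuous use and ignoring any other charges) works out as follows:

        # Rough monthly cost from the leaked on-demand rates, assuming a
        # 730-hour month of continuous use; no other AWS charges included.
        HOURS_PER_MONTH = 730

        rates = {
            "c4.large":   0.116,  # $/hour, 500 Mbps dedicated EBS throughput
            "c4.8xlarge": 1.848,  # $/hour, 4 Gbps dedicated EBS throughput
        }

        for instance, hourly in rates.items():
            print(instance, "about $%.2f per month" % (hourly * HOURS_PER_MONTH))
        # c4.large   -> about $84.68 per month
        # c4.8xlarge -> about $1349.04 per month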

    Originally announced in November with no concrete launch date or pricing structure, Amazon C4 is the company’s ultra-high performance EC2 instance type designed for compute-bound workloads. This could mean top-end website hosting, online gaming, simulation, risk analysis, and rendering. The new instances are based on the Intel Xeon E5-2666 v3 processor specifically designed for EC2.

    C4 instances will essentially offer for compute what Amazon has already provided for memory and storage with its memory-optimized and storage-optimized instances.
