Data Center Knowledge | News and analysis for the data center industry
 

Monday, June 15th, 2015

    9:00a
    IBM Makes Huge Apache Spark Commitment

    IBM has made a huge commitment to what it called the most significant open source project of the next decade: Apache Spark. Spark is an open source processing engine built around speed, ease of use, and sophisticated analytics. IBM is donating its SystemML machine learning technology to the Spark ecosystem, incorporating Spark extensively into its own offerings, and committing significant resources to Spark-related projects.

    IBM sees Apache Spark as the analytics operating system of the future and is investing to grow the nascent Spark into a mature platform, according to Joel Horwitz, director of Portfolio Marketing at IBM. The company is dedicating 3,500 researchers and developers to work on Spark-related projects at more than a dozen labs worldwide. IBM also hopes to educate more than one million data scientists and data engineers on Spark.

    Spark is a fast and general engine for big data processing, with built-in modules for streaming, SQL, machine learning and graph processing. IBM’s donated technology advances machine learning in Spark, while the rest of the commitment advances Spark as a whole.
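
    As a rough illustration of those built-in modules, the short sketch below uses PySpark's SQL support to register a small in-memory dataset and query it. The column names and values are made up for the example, and the era-appropriate SQLContext API is assumed.

        from pyspark import SparkContext
        from pyspark.sql import SQLContext, Row

        # Minimal sketch of Spark's SQL module (illustrative data, local mode only).
        sc = SparkContext("local[*]", "spark-sql-sketch")
        sqlContext = SQLContext(sc)

        rows = sc.parallelize([Row(sensor="a", temp=21.5), Row(sensor="b", temp=35.0)])
        df = sqlContext.createDataFrame(rows)

        df.registerTempTable("readings")    # expose the DataFrame to SQL queries
        sqlContext.sql("SELECT sensor, temp FROM readings WHERE temp > 30").show()

        sc.stop()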

    “IBM is investing in the Apache Spark core processing technology because the market of intelligent applications represents a huge opportunity, ranging from the Internet of Things (IoT) to the digital, connected, and social needs that are transforming businesses everywhere,” said Horwitz.

    IBM will also embed Spark into its analytics and commerce platforms and offer Spark as a Service on Bluemix. The addition of Spark as a Service on Bluemix, IBM’s Platform-as-a-Service, enables developers to quickly load data, model it, and derive a predictive artifact to use in intelligent applications.
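
    The Bluemix service itself isn't shown here, but the load-model-predict loop it describes looks roughly like the following MLlib sketch; the training points are toy values, not real data, and the era-appropriate RDD-based MLlib API is assumed.

        from pyspark import SparkContext
        from pyspark.mllib.regression import LabeledPoint
        from pyspark.mllib.classification import LogisticRegressionWithSGD

        # Load data, model it, and derive a predictive artifact (toy example, local mode).
        sc = SparkContext("local[*]", "predictive-artifact-sketch")

        training = sc.parallelize([
            LabeledPoint(0.0, [0.0, 1.0]),
            LabeledPoint(1.0, [1.0, 0.0]),
            LabeledPoint(1.0, [0.9, 0.1]),
        ])

        model = LogisticRegressionWithSGD.train(training, iterations=50)
        print(model.predict([1.0, 0.0]))    # the trained model is the "predictive artifact"

        sc.stop()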

    Spark is considered an alternative to MapReduce, a technology that suffered a blow last year when Google said MapReduce was no longer sufficient for its needs. The Google news was widely misread as a blow to Hadoop, since MapReduce and Hadoop were originally inseparable. In fact, Hadoop is very much alive and thriving, and Spark is one of the most promising technologies in the ecosystem, with the potential to supplant the more complex MapReduce.

    The battle isn’t just Spark versus MapReduce. There are also other solutions like Apache Tez and Apache Flink in the mix. However, Spark is undergoing a meteoric rise.

    The creators of Spark formed Databricks, a company that offers a cloud platform built around the open source cluster computing framework. IBM also announced it will collaborate with Databricks to advance Spark’s machine learning capabilities.

    IBM isn’t the only big technology company that sees Spark’s potential. Dell and Cloudera brought a Spark-powered in-memory processing appliance to market last year, and earlier this year Piston Cloud Computing (since acquired by Cisco) expanded its support beyond private OpenStack to include Spark and other frameworks. MapR supports Spark, Rackspace offers Spark on bare metal, and many others have gone to market with Spark-focused offerings.

    “We believe strongly in the power of open source as the basis to build value for clients, and are fully committed to Spark as a foundational technology platform for accelerating innovation and driving analytics across every business in a fundamental way,” said Beth Smith, General Manager, Analytics Platform, IBM Analytics. “Our clients will benefit as we help them embrace Spark to advance their own data strategies to drive business transformation and competitive differentiation.”

    IBM has partnerships with AMPLab, DataCamp, MetiStream, Galvanize, and the Big Data University MOOC to educate data scientists and engineers on Spark.

     

    12:00p
    New Rules for Moving to the Cloud … or Not

    Matt Gerber is CEO at Digital Fortress.

    Let’s face it: the issues we were talking about a couple of years ago, security and reliability, are no longer barriers to the public cloud. That’s not to say there aren’t risks in the cloud, just as there are inside a company’s walls, but cloud providers are generally well prepared to defend against the next wave of hackers and other threats. AWS, for one, continues to offer new protections, such as encryption as a service. Third-party security products and services are also on the rise, and research firms predict strong growth for cloud security, with Synergy Research reporting 20 percent growth for the sector in 2014. Meanwhile, many mid-sized company CIOs and IT managers acknowledge that AWS and Microsoft can do a better job of running a secure and reliable data center. Assessing whether a workload should move to the cloud now depends on a different set of factors: regulatory requirements, demand, processing needs, internal skills and culture, cost, and finally, corporate finance considerations.
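
    As one concrete example of the encryption-as-a-service capability mentioned above, the sketch below calls AWS KMS through boto3. The key alias and payload are placeholders; a real customer master key must already exist in the account.

        import boto3

        # Encrypt and decrypt a small payload with AWS KMS (encryption as a service).
        # "alias/example-key" is a placeholder; substitute a real key ID or alias.
        kms = boto3.client("kms", region_name="us-east-1")

        encrypted = kms.encrypt(KeyId="alias/example-key",
                                Plaintext=b"customer record 42")
        ciphertext = encrypted["CiphertextBlob"]

        decrypted = kms.decrypt(CiphertextBlob=ciphertext)
        assert decrypted["Plaintext"] == b"customer record 42"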

    Regulatory and Compliance

    When examining infrastructure hosting options, a company should first weigh any auditing or compliance requirements, especially if violating them results in hefty penalties. Banks and financial services companies, healthcare and life sciences organizations, and other enterprises capturing sensitive information about consumers must comply with a host of specific rules affecting how and where data is handled. Industry regulations such as HIPAA, PCI and the European Union Data Protection Directive require certifications from cloud and hosting providers, and may preclude use of the public cloud altogether. The onus is on the customer to ensure that cloud and hosting providers follow recommended practices for compliance. In the case of an audit or electronic discovery warrant, a company may need to provide access to data wherever it’s hosted, within days. The scope of regulation with which a company must comply will help determine the optimal choice of hosting partner.

    Demand and Processing

    The sage advice has always been that if a workload has widely varying, unpredictable demand, go to the cloud, where it’s easier and cheaper to scale up and down at a moment’s notice. Another scenario made for the cloud is apps and services that are predictably unpredictable, such as a business with the bulk of its sales occurring during holidays or on a seasonal basis. However, in a steady-state heavy data processing or high-performance computing environment with a continuous and predictable workload, such as processing stock trades or managing a 24x7 manufacturing floor, it can be more cost-effective to buy hardware to run the application. A general rule is that for consistent workloads, a fixed infrastructure with a fixed cost may deliver better economics, while applications that swing between minimal and high activity will likely run more cost-effectively in the on-demand environment of the cloud. These decisions can be complicated and may result in a back-and-forth strategy when they aren’t studied carefully at the beginning. Take Zynga, the online gaming company known for the popular Facebook game FarmVille. The company began with its own data center; then, when FarmVille went into the stratosphere, it moved everything over to AWS to manage the load. In 2011, Zynga decided to build its own data center again, this time customized for the specific needs of its users. Today, the company has decided to move back to the cloud and AWS, given the high cost of managing the data centers. One could argue that in a predictably unpredictable industry like gaming, it makes sense to look at cloud before investing in infrastructure.
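
    A back-of-the-envelope version of that rule of thumb is sketched below. Every number in it is an assumption chosen only to show where the break-even point can land, not a quote from any provider.

        # Fixed infrastructure vs. on-demand cloud for the same capacity (all figures assumed).
        fixed_monthly_cost = 12000.0      # amortized hardware, colocation, power, staff
        cloud_hourly_rate = 0.50          # assumed per server-hour, on demand
        servers_needed = 40
        hours_per_month = 730

        def cloud_monthly_cost(utilization):
            """Cloud bill if servers run only the fraction of the month they are needed."""
            return cloud_hourly_rate * servers_needed * hours_per_month * utilization

        for utilization in (0.25, 0.50, 1.00):
            print(utilization, cloud_monthly_cost(utilization), fixed_monthly_cost)
        # At full utilization the cloud bill (about $14,600) exceeds the fixed cost,
        # while spiky workloads at 25-50% utilization come in well under it.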

    Skills and Culture

    The more we learn about the cloud, the more we know it’s not a simple shift for a larger company with an established IT department. It requires new workflows, different tools and management processes, and effective business collaboration. Take the time to conduct an internal assessment of the attitudes, willingness and skill sets needed for moving to the cloud. Without buy-in from the top ranks, moving to the cloud is a grim proposition. This tale of failed change management has played out over many different phases of IT in years past; remember the multimillion-dollar ERP failures? IT leaders have a job to do in building awareness and providing reachable career paths. There’s also the investment of time (and patience) needed to retrain or rehire people; if that doesn’t go well and delays the launch of competitive business strategies, the CIO is the one who loses her job. Another option is to bring in an outside team to fill the skills gap, but that also requires a proper fit: contractors must be able to interact successfully with the internal team. Moving from traditional to cloud infrastructure is exciting but can be painful; companies may need to take the process in steps by first moving to a hybrid environment.

    Capex vs. Opex

    Depending on the industry and business stage, companies may have an orientation toward capital versus operational expenses, which can affect whether going to the cloud is viable. If a company’s cost of capital is low and it is capital-expenditure-oriented, it will be more inclined to purchase technology rather than rent it. If a company’s cost of capital is high, or there are other balance sheet considerations, it very often makes sense to shift purchases to operating expenses, and cloud is a way to do that with IT infrastructure. There are, of course, other decision factors that might sway companies toward the cloud, such as on-demand (pay-as-you-go) pricing, avoidance of unpredictable maintenance and hardware support costs, and “hidden” data center costs such as energy bills.
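
    To make the cost-of-capital point concrete, here is a small discounted-cash-flow sketch. The purchase price, annual cloud spend, and discount rates are assumptions for illustration only.

        # Compare an up-front purchase with five years of cloud opex, discounted at the
        # company's cost of capital (all figures assumed for illustration).
        capex_purchase = 500000.0
        annual_cloud_opex = 140000.0
        years = 5

        def present_value_of_opex(rate):
            """Present value of renting equivalent capacity for the full period."""
            return sum(annual_cloud_opex / (1 + rate) ** year
                       for year in range(1, years + 1))

        print(round(present_value_of_opex(0.04)))   # ~623,000: cheap capital favors buying
        print(round(present_value_of_opex(0.15)))   # ~469,000: costly capital favors renting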

    There’s no one-size-fits-all cloud strategy – and there probably never will be. Companies are deploying combinations of private cloud, on-premise, hybrid cloud and public cloud to reach their goals and manage risks. Consider, too, that a strategy which makes sense today may not make sense in a year or two – particularly if your business is young and growing quickly or in the middle of a merger or reinvention. It’s always smart to reassess plans and providers annually to make sure that your infrastructure matches business and customer needs.

     

     

    1:22p
    Facebook To Submit Plans For $220M Data Center In Ireland

    A rumored Facebook data center in Ireland is now very much a reality, the Irish Times reported. The social media giant announced Monday that it will invest $220 million in an energy-efficient data center and will submit plans to Meath County Council.

    Facebook has applied for planning permission to build the Ireland data center. It will be the company’s second European data center, joining an energy-efficient facility in Lulea, Sweden. The new data center will be located in Meath, approximately a 30-minute drive northwest of Dublin, where Facebook’s European headquarters is located.

    “Ireland has been a home for Facebook since 2007 and today’s planning application demonstrates our continued interest to invest in Ireland,” said Facebook director of data center strategic engineering Rachel Peterson in a statement. “We hope to build an innovative, environmentally friendly data center that will help us continue to connect people in Ireland and around the world – while supporting local job creation and Ireland’s successful technology economy. We look forward to continuing our conversations with the Clonee community in coming weeks.”

    Dublin is quickly becoming a major tech hub, home to several major tech companies. A low corporate tax rate, a fair climate, and a growing tech presence have put the country in a favorable position to keep winning data centers. As more data centers cluster there, the talent pool and the available shared resources grow.

    Apple recently revealed plans for a $1 billion project in Athenry, Ireland, close to Galway. Apple is also investing in renewable energy projects there and across its footprint. Microsoft has a data center in Dublin, as do Google and Amazon. Digital Realty Trust launched a new facility in Dublin last year, and TelecityGroup is also present.

    Technology companies have created a substantial number of jobs in Ireland, and Facebook already has job listings for the new location. Construction will employ hundreds, while estimates of full-time jobs range from 40 to 100. The facility will bring Facebook’s total jobs in the country over the 1,000 mark.

    More than 1.3 billion people use Facebook worldwide, and the company has had to scale its data centers accordingly. Because it has faced unique infrastructure challenges, it has also been innovating at the data center level.

    5:00p
    Mesosphere Partners with Typesafe on Spark

    Looking to make it much simpler to deploy Apache Spark in-memory computing clusters in production environments, Mesosphere today announced a partnership with Typesafe to provide support for an instance of Spark that can be deployed on top of the Mesosphere Data Center Operating System (DCOS) running in the Amazon Web Services (AWS) cloud.

    Matt Trifiro, senior vice president of marketing for Mesosphere, says the goal is to enable IT organizations to deploy Spark in a few minutes. The Mesosphere DCOS itself can be downloaded and installed in about the same amount of time, says Trifiro.

    “Using a single command, Spark can now be deployed on Docker containers on AWS,” says Trifiro. “We’re also working on an implementation that can be deployed on premise.”
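
    Mesosphere's one-command install isn't reproduced here, but once a Mesos-managed cluster such as DCOS is up, a Spark application targets it simply by pointing at the Mesos master, roughly as in the sketch below. The ZooKeeper address is a placeholder.

        from pyspark import SparkConf, SparkContext

        # Sketch of running a Spark job against a Mesos-managed cluster (the kind of
        # environment DCOS provisions). The master URL below is a placeholder.
        conf = (SparkConf()
                .setAppName("spark-on-mesos-sketch")
                .setMaster("mesos://zk://10.0.0.1:2181/mesos")
                .set("spark.executor.memory", "2g"))

        sc = SparkContext(conf=conf)
        print(sc.parallelize(range(1000)).sum())    # trivial job to confirm the cluster responds
        sc.stop()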

    Mesosphere is partnering with Typesafe because Spark is written in Scala, a programming language that runs on the Java Virtual Machine and is commercially backed by Typesafe. Mesosphere has also had its implementation of Spark certified by Databricks, the company founded by Spark’s creators. For its part, Mesosphere began shipping a commercially supported version of its software, along with a community edition that runs on AWS, earlier this month.

    While interest in Spark as a foundation for processing Big Data analytics applications in the cloud is high, the amount of expertise IT organizations have with frameworks such as Spark is often limited. By making use of AWS and Mesosphere, Trifiro says organizations can at the very least begin developing Spark applications in the cloud and then determine where they might want to deploy them in a production environment later.

    Spark itself is emerging as a faster alternative to the MapReduce programming construct originally developed for Hadoop. Closely associated with Hadoop, Spark itself does not store any data. Instead, data is processed in memory then stored back in the Hadoop cluster from which it was originally pulled.
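
    That division of labor, Hadoop holding the data while Spark holds it in memory only while working on it, is visible even in a trivial job. The HDFS paths below are placeholders.

        from pyspark import SparkContext

        # Read from the Hadoop cluster, work on the data in memory, write results back.
        sc = SparkContext(appName="hdfs-roundtrip-sketch")

        lines = sc.textFile("hdfs://namenode:8020/logs/2015-06-15/*.log")
        errors = lines.filter(lambda line: "ERROR" in line).cache()   # held in memory only

        print(errors.count())                                         # fast in-memory pass
        errors.saveAsTextFile("hdfs://namenode:8020/reports/errors-2015-06-15")

        sc.stop()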

    Trifiro notes that Mesosphere and Spark share a common University of California at Berkeley heritage that has generated a number of open source technologies, all aimed at solving the challenges of deploying and managing applications at scale.

    In general, platforms such as Mesosphere are taking advantage of advances in IT automation to greatly simplify the provisioning and orchestration not only of IT infrastructure but also of the application frameworks that invoke the APIs those platforms expose. The end result is a simpler overall IT infrastructure environment at a time when application frameworks themselves are becoming more distributed than ever.

     

    5:30p
    With Plans to Double Business Cloud Revenue by 2018, Deutsche Telekom Extends Huawei Partnership


    This article originally appeared at The WHIR

    German telecommunications provider Deutsche Telekom is eyeing partnerships to help the company double its cloud revenue with business customers by 2018.

    On Monday, Deutsche Telekom announced the extension of its existing partnership with Huawei, improving its hardware and solutions expertise as part of a broader effort to step up cloud activities across the company and compete with Google and AWS.

    The companies announced their initial cooperation, involving private cloud services, at CeBIT in March. This extended agreement combines “know-how and cutting-edge technology in the public cloud area,” said Haibo Zhang, president of the Huawei Deutsche Telekom Key Account Department.

    “At Deutsche Telekom, we want to grow by more than 20 percent each year in the field of cloud platforms, and to become the leading provider for businesses in Europe,” said Dr. Ferri Abolhassan, head of the IT Division at T-Systems, the business customer division of Deutsche Telekom. “Last year, revenue from cloud solutions, in particular in the highly secure private cloud, increased by double-figure percentage points at T-Systems alone. The market for services from the public cloud – infrastructure, platforms and applications – that can be accessed through the public Internet promises further growth. In conjunction with partners, Deutsche Telekom plans to pit itself more strongly against the Internet corporations Google and Amazon in future. To achieve this, the departments within Deutsche Telekom’s segments are now stepping up their cloud activities across the Group.”

    Deutsche Telekom’s T-Systems currently has more than 2.6 million SAP users in the cloud. Its European cloud services are offered from data centers in Germany to comply with the country’s data protection laws, which impose strict controls on geographical location of stored data.

    With major growth plans in the pipeline for T-Systems, it is no surprise that Deutsche Telekom shot down rumors that it would sell the subsidiary. According to Telecompaper, T-Systems CEO Richard Clemens told German newspaper Handelsblatt that the unit would continue with its restructuring and previously announced job cuts.

    This first ran at http://www.thewhir.com/web-hosting-news/with-plans-to-double-business-cloud-revenue-by-2018-deutsche-telekom-extends-huawei-partnership

    7:00p
    Weekly DCIM News Roundup: June 12

    St. Louis hosting company Hostrian selects No Limits DCIM software for its new data center, the data center RFID market is estimated to be worth $1.8 billion by 2020, and 451 Research analyst Rhonda Ascierto has an updated map depicting current DCIM vendors.

    1. Hostrian selects No Limits Software RaMP DCIM solution. St. Louis-based hosting and managed services company Hostrian has selected the RaMP DCIM software from No Limits Software for its new downtown St. Louis data center.
    2. Data Center RFID market worth $1.8 billion by 2020. A new Markets and Markets report estimates that the data center RFID market, a technology featured in many DCIM solutions, will grow from $391.4 million in 2015 to $1,890.5 million by 2020 (the growth rate this implies is worked out in the short sketch after this list).
    3. 451 Research – DCIM Suppliers map. 451 Research data center tech analyst Rhonda Ascierto has a great new ‘metro map’ of DCIM suppliers.
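
    For context, the report's 2015 and 2020 figures imply roughly 37 percent compound annual growth; the arithmetic, using only the two numbers quoted above, is:

        # Implied compound annual growth rate from $391.4M (2015) to $1,890.5M (2020).
        start, end, years = 391.4, 1890.5, 5
        cagr = (end / start) ** (1.0 / years) - 1
        print(round(cagr * 100, 1))    # roughly 37.0 percent per year
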
    7:11p
    Schneider Electric Targets Edge Computing With New Micro Data Center Portfolio

    Schneider Electric has launched a new micro data center portfolio for edge computing applications. The micro data center infrastructure portfolio makes it easy and cost-effective to add data center capacity in nearby metros as needed, according to the company. The micro data centers are available now in North America and will launch globally later this year.

    Prefabricated data centers have been around for a while. There are modular data centers, containerized data centers, and now the emerging category of micro data centers. Micro data centers are similar in scope to shipping-container-based systems but come in a smaller package, focused on a modular row or an individual rack that can be remotely managed.

    Schneider acquired AST Modular in 2014 to boost its modular data center offerings. Prefabricated data centers have historically seen adoption in areas where full data center construction doesn’t make sense; one example is an AST deployment in Haiti in 2013. Now the company is targeting the wider opportunity presented by the explosion of the Internet of Things and data applications, and the need to serve users locally. The portfolio isn’t meant for remote, hard-to-reach places, but for IT rooms and office environments that want to serve locally. There is also a “rugged” edition meant for atypical locations.

    Completely engineered to order, the micro data center infrastructure solutions include the physical enclosure, UPS, PDU, cooling, software, environmental monitoring and security, all tested, assembled and packaged at a Schneider Electric facility and then shipped together. The single-rack offering is called SmartBunker and the multi-rack offering SmartShelter; they are customizable to specific needs and come in four packages targeting specific situations:

    • SmartBunker SX is the traditional package, meant for IT rooms.
    • SmartBunker CX is optimized for office environments.
    • SmartBunker FX is “ruggedized” for any environment.
    • SmartShelter is similar to SmartBunker FX, but is a multi-rack solution.

    These standard, repeatable designs provide simplified management, high levels of security, and reliability via standardization and factory testing, according to Schneider Electric.

    There is rising data center demand in non-core markets, and a rise in edge data center providers targeting these needs. Many secondary and tertiary cities are underserved when it comes to data centers, and this is exacerbated by the growth of content and the need for local delivery. Schneider is attempting to capture some of this demand with its micro data centers, a more mass-market version of its other prefab offerings.

    Schneider Electric is addressing the latency, bandwidth and processing speed challenges customers face with the growth of connected devices and data applications, said Dave Johnson, senior vice president, Data Center Solutions, Schneider Electric, in a release.

    “We are already seeing the emergence of edge applications in retail and industrial applications, and we believe the need for edge computing will only grow as the Internet of Things expands into commercial applications,” said Johnson.

    Micro data centers are not new; however, Schneider has created a standardized, repeatable framework, said David Cappuccio, vice president, distinguished analyst and chief of research for the Infrastructure teams at Gartner.

    “Localized or micro data centers are a fact of life, but by applying a self-contained, scalable and remotely managed solution and process, CIOs can reduce costs, improve agility, and introduce new levels of compliance and service continuity,” said Cappuccio in a Gartner report titled “Apply a Self-Contained Solution to Micro Data Centers”.

     

     

    8:01p
    AppFormix Unveils Continuous Infrastructure Monitoring Software

    Looking to help IT organizations get ahead of what will almost certainly be large amounts of Docker container sprawl, AppFormix has emerged from stealth with namesake IT infrastructure monitoring software that keeps track of both virtual machines and Docker containers.

    Fresh off raising $7 million in additional funding, AppFormix CEO Sumeet Singh says the company’s tool is designed to continuously monitor how any piece of IT infrastructure is being used, on premises and in the cloud.

    “IT organizations are going to need granular, real-time control of their data centers,” says Singh. “They need to see, for example, what kind of I/O access a specific virtual machine is getting.”

    While developers have flocked to Docker containers as a simpler way to provision applications, IT operations teams will soon be confronted with Docker containers running on both physical and virtual servers. A single physical server may host as many as 100 Docker containers, compared with perhaps 20 to 25 virtual machines today. Managing all those containers from an IT operations perspective is likely to be a major challenge for most IT organizations.
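
    The article doesn't describe AppFormix's own interface, but the kind of per-container telemetry that becomes necessary at that density can be sketched with the Docker SDK for Python; the exact stats fields returned vary by Docker version and platform.

        import docker

        # One snapshot of memory and block-I/O usage for every running container on a host.
        client = docker.from_env()

        for container in client.containers.list():
            stats = container.stats(stream=False)              # single sample, not a stream
            memory = stats.get("memory_stats", {}).get("usage", 0)
            blkio = stats.get("blkio_stats", {}).get("io_service_bytes_recursive") or []
            io_bytes = sum(entry.get("value", 0) for entry in blkio)
            print(container.name, memory, io_bytes)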

    Of course, at present most IT organizations are running Docker containers on top of virtual machines or in Platform-as-a-Service (PaaS) environments, because of both security concerns and the simple fact that they don’t have any tools to manage them.

    AppFormix is compatible with the OpenStack, Kubernetes and Mesos management frameworks for orchestrating virtual machines and containers, and Singh says it makes use of REST application programming interfaces (APIs) to provide the visibility into next-generation data center environments that IT organizations will need in order to adjust to constantly changing application requirements. As these environments grow denser, says Singh, IT contention issues will be more problematic than ever.

    With or without the permission of IT operations teams, developers are voting with their feet in large numbers to embrace Docker containers. However, none of that means the huge number of applications currently running on virtual machines is going to go away anytime soon.

    IT organizations will obviously need to be able to respond to this changing application landscape by either embracing new tools or waiting for the existing tools they have to be upgraded to support containers running on both virtual and physical servers.

    The AppFormix tool is currently in beta and, Singh says, is scheduled to be available in the third quarter.

     

