Data Center Knowledge | News and analysis for the data center industry

Tuesday, January 27th, 2015

    1:00p
    As Microsoft’s Cloud Business Grows, so Does its Data Center Spend

    As Microsoft continues to push growth of its cloud services business, company executives expect to continue increasing investment in Microsoft data centers to support those services.

    Cloud services are a very small portion of the company’s overall revenue picture today, but Microsoft CEO Satya Nadella and CFO Amy Hood spent the bulk of Monday’s earnings call talking about this part of the business, illustrating just how important it is to Microsoft’s future.

    The company’s revenue for the second quarter of fiscal 2015 was $26.5 billion, up 8 percent year over year. Its quarterly earnings per share were $0.71, down 9 percent.

    Cloud and consumer devices are two areas that are crucial to Microsoft’s strategy. The company plans to invest more in data centers and computer systems for research and development, among other things, in support of its cloud and devices businesses.

    Microsoft’s commercial cloud revenue, which includes Office 365, Azure, and Dynamics CRM Online, grew 114 percent, or $696 million, year over year. This business segment is growing quickly: last quarter was the sixth consecutive quarter of triple-digit growth, Nadella said, and its annual run rate is now $5.5 billion.

    Microsoft Data Center Spend Up 24 Percent

    As the cloud business grows, however, so does the cost of running it. The cost of revenue for commercial cloud and enterprise services went up $328 million, or 24 percent, year over year in the second quarter, according to a company SEC filing. The increase was primarily due to higher data center and other infrastructure expenses.

    Microsoft owns and operates its own data centers and leases from commercial data center providers. Most recently, it announced plans to establish Azure data centers in Australia and India.

    The company has been expanding capacity in the U.S. as well. In 2014, for example, it leased 6 megawatts of data center capacity from DuPont Fabros Technology in Santa Clara, California, and 13.65 megawatts from Yahoo in Ashburn, Virginia (also in a DFT-owned facility), according to North American Data Centers, a commercial real estate company.

    Existing Install Base as Advantage

    While well behind Amazon Web Services in on-demand cloud infrastructure services market share, Microsoft believes it has a strong competitive position because of the number of servers in enterprise data centers running Windows Server. Many enterprises want a combination of in-house data center resources and cloud, and Microsoft claims it is easier to extend customers’ existing on-premises Windows environments with Azure infrastructure hosted in its own data centers.

    VMware is going for a similar angle with its vCloud Air services, promising customers easy and seamless integration of their in-house VMware environments with cloud services provided by VMware out of colocation data centers around the world.

    The other two big players in public cloud, AWS and Google, don’t have the benefit of a huge existing install base in enterprise data centers.

    As Nadella pointed out on Monday’s call, companies often engage with Azure Infrastructure-as-a-Service first and add more products from the menu as they go along. They may, for example, move a VM onto IaaS, and later decide to build a mobile app using data from that VM and the Azure Platform-as-a-Service.

    Analytics, Machine Learning as Cloud Growth Drivers

    Microsoft is also investing a lot in “advanced data analytics and machine learning driven capabilities that improve with more customer adoption and usage of the cloud,” Nadella said. It has been acquiring companies that specialize in these areas. Just this month, it announced acquisition of Revolution Analytics, a statistical computing and predictive analytics specialist, and Equivio, a provider of compliance solutions driven by machine learning technology.

    Microsoft’s bottom line continues to be impacted by the restructuring plan it announced in July 2014 and by the integration of Nokia. Restructuring and integration expenses for the most recent complete quarter were $243 million, a negative impact of $0.02 per share.

    1:00p
    MapR to Offer Free Hadoop Training to Close Big Data Skills Gap

    MapR will provide free Apache Hadoop training to help address what it says is a gap in Hadoop skills in the workforce. The company, one of the leading Hadoop distribution providers, has developed an on-demand curriculum and ramped up hiring and build-out of its education services team.

    Hadoop is a popular open source framework for distributed storage and processing of large data sets. MapR thinks 2015 will be the year of enlightenment when it comes to Hadoop, its chief marketing officer Jack Norris said.

    “The journey that we see is people are moving from batch orientation and thinking of Hadoop as back-office stuff to people doing real-time applications and automated adjustments to impact the business as it happens,” he said.

    There’s an obvious benefit for MapR in providing Hadoop training, as it will get those starting on their Hadoop journeys interested in the MapR platform. However, the program is about Hadoop education and essentials, not MapR-specific education, according to Norris.

    “With Hadoop, the knowledge transfer and the education is a major hurdle,” he said. “MapR is in good position [to provide education]. We get 90 percent of revenue from software licensing, so we’re not heavily reliant on services.”

    Free Hadoop training benefits the larger community, since labor supply is a key constraint on adoption of the framework. The company’s goal is to train 10,000 people on big data skills for free this year. Given an average of three courses per person and a cost of $1,750 per course, this equates to a roughly $50 million in-kind contribution to the industry.
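
    The arithmetic behind that figure is easy to verify. A quick back-of-envelope sketch in Python, using only the numbers quoted above (trainee count, courses per person, and per-course price):

        # Back-of-envelope check of MapR's stated in-kind contribution,
        # using only the figures quoted in the article.
        trainees = 10_000          # people MapR aims to train for free in 2015
        courses_per_person = 3     # average number of courses per trainee
        cost_per_course = 1_750    # list price per course, in dollars

        contribution = trainees * courses_per_person * cost_per_course
        print(f"${contribution:,}")  # $52,500,000 -- roughly the $50 million cited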

    A variety of quizzes and interactive lab examples will be used to teach three paths:

    • Developer path: building applications on Hadoop
    • Data analyst path: deriving insights from massive amounts of data
    • Administrator path: architecting, deploying, and managing a Hadoop cluster

    The curriculum also illustrates typical applications and use cases. “How do you think about Hadoop? At one end of the spectrum, it’s about using Hadoop to offload and collect data from existing apps and existing processes as well as new data sources,” said Norris. “At the other end of the spectrum, what’s the elegant algorithm to improve my fraud detection, and there’s a whole lot in between.”

    The training program readies people for several different certifications if they choose to go that route. Certifications are administered by third parties rather than MapR, so they are neither free nor required.

    In the past few years, there have been several funding rounds for Hadoop-focused companies like MapR, Cloudera, Altiscale, Pepperdata, and Hortonworks, which also recently held its initial public offering. These companies are well funded, but broader education is needed to help the workforce catch up with the technology.

    4:30p
    Leveraging a Global CDN or Building Data Centers: Which Should Your Enterprise Choose?

    Sharon Bell is the Director of Marketing for CDNetworks, a global CDN provider that helps online businesses reach and delight their audiences around the world.

    How do enterprises scale globally online? In general, they either build localized data centers in target regions or leverage a global content delivery network (CDN).

    The decision requires weighing available resources against specific goals. For example, some goals might be to accelerate dynamic content, establish e-commerce in Asia, reduce latency or Time to Interact (TTI) for users in Europe, or decrease global data management and security costs. To scale an enterprise online successfully, the following essential resources must be examined, measured, and properly allocated.

    Budget

    A data center network is expensive to build and maintain, costing about $1.6 million to construct a 1,000-square-foot data center. Ongoing operating costs amount to about 65 percent of the cost of the build. Other costs include mandated fire protection, power and cooling supply, finding an adequate location and acquiring building permits, a specialized general contractor, physical security, and labor, including an expanded IT staff.

    Outsourcing to a CDN provider does not require as many upfront costs. Although the staff needed to maintain a CDN from the customer’s standpoint is drastically reduced, extra expenses may be required to train current IT staff to remotely access the CDN through the provider’s cloud portal. The ongoing costs (and features) for CDN services vary from provider to provider, but they typically involve a monthly fee based on the amount of traffic. The expense also depends on the specific content acceleration services employed. Sites with mostly static content will be cheaper to optimize globally on a CDN. While static content may be cheaper to accelerate, adding an application delivery network (ADN) to accelerate dynamic content provides a more engaging user experience and may lead to wider site use.

    Time

    Time as a resource has many facets when it comes to content acceleration options. Time to ROI can take a few years for a data center build; however, developing infrastructure in-house rather than using a CDN gives significant control over the way content is distributed. That control over decisions, including where to place the data centers, can yield peace of mind that content is delivered as efficiently as possible with minimal latency. In that sense, the data center is only as good as the enterprise behind it. With a CDN, the network and infrastructure are already built, so time to ROI is considerably shorter when looking to reach a country the chosen CDN supports.

    Site performance KPIs like Time to First Byte (TTFB) and TTI in the target regions are needed to decide whether to build data centers or leverage a CDN. Basics such as general latency due to the target’s distance from the origin, network topologies, peering points, and the type of content to accelerate are essential factors to consider. The question only the enterprise itself can answer is: based on the variables unique to it, could its own data center or an existing CDN get content there faster, and is that method reliable?
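
    Prototyping a TTFB measurement is straightforward. Below is a minimal sketch, assuming Python and the third-party requests library are available, that compares approximate TTFB for a hypothetical origin server and a hypothetical CDN edge URL; real-world numbers would come from synthetic tests or real-user monitoring in each target region.

        # Rough TTFB comparison between two endpoints (illustrative only).
        # requests' elapsed attribute measures the time from sending the request
        # until the response headers arrive -- a reasonable TTFB approximation.
        import requests

        URLS = {
            "origin": "https://origin.example.com/",   # hypothetical origin server
            "cdn":    "https://cdn.example.com/",      # hypothetical CDN edge
        }

        for name, url in URLS.items():
            resp = requests.get(url, stream=True, timeout=10)
            print(f"{name}: TTFB ~ {resp.elapsed.total_seconds() * 1000:.0f} ms")
            resp.close()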

    Scalability

    Both a CDN and an enterprise’s own data center network can scale. As discussed above, however, a global CDN can help an enterprise scale quicker, since the network is already built and has the ability to handle spikes in traffic. Conversely, some CDNs may have more difficulty optimizing sites with certain SSL security and/or SPDY functionality than native data centers.

    No matter which content acceleration route is taken, the site itself should be optimized for acceleration. Using CSS sprites, enabling HTTP compression, leveraging browser caching, and similar techniques will ensure content doesn’t take unnecessary trips or add TTI for unneeded components. The larger the scale, the greater the likelihood that revenue will be lost to slow load times. Speaking of large scale, e-commerce titan Amazon witnessed a sales drop of 1 percent for every 100ms of website loading delay.
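
    That rule of thumb translates directly into a rough cost estimate. A short sketch using the 1-percent-per-100ms figure cited above; the revenue and delay values are hypothetical:

        # Illustrative estimate of revenue lost to load-time delay, using the
        # 1%-per-100ms rule of thumb cited above. Input figures are hypothetical.
        def estimated_loss(annual_revenue: float, added_delay_ms: float) -> float:
            """Return estimated annual revenue lost to the added page delay."""
            return annual_revenue * 0.01 * (added_delay_ms / 100.0)

        print(estimated_loss(annual_revenue=50_000_000, added_delay_ms=300))
        # -> 1500000.0, i.e. $1.5M of a hypothetical $50M online business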

    Server Security

    The level of security an enterprise maintains is also essential, both to protect it against threats and to earn consumer or client trust in the site. Established CDN infrastructure provides protection against large-scale DDoS attacks, downtime and site crashes, and data loss. On the other hand, running its own network gives the enterprise a thorough, historical understanding of what is required to operate and protect its best interests.

    Though the enterprise would still have insight into how its data is handled on a CDN, deciding not to outsource keeps control close and, theoretically, keeps security measures on a tight leash. The question is whether the enterprise can adequately maintain the security of its servers in a different geographic region, with that region’s unique security and financial threats.

    Cultural Liaisons

    If entering a region with a different culture and/or mother tongue, the enterprise needs cultural liaisons to communicate effectively with its audience. If liaisons are not already on staff or under contract, they are an extra expense required to understand how to approach the market, as well as local regulations, on behalf of the enterprise.

    When electing to build a data center, employing cultural resources and building good ties with the target region should be considered, and the budget must include the cost of liaisons as well as a local network of influencers. Some CDN providers offer cultural liaison services as part of content and/or application delivery acceleration. This can appeal to enterprises because of the providers’ pre-established ties with local governments and insider knowledge of how to acquire the appropriate permits and licenses.

    Which Should Your Enterprise Choose?

    Choosing whether to leverage a global CDN or build a data center network is a balancing act. How an enterprise weighs the time it takes to get up and running (CDN) against its desired level of retained control (data center) is at the core of the decision. All resources should be weighed, budget items must be listed, and the time value of money (TVM) should be discussed. There is no formula for the right decision; however, thorough research with the goal in mind will help an enterprise determine whether it is best to build a data center network or leverage a global CDN.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:38p
    Facebook Outage Affects Sites That Used Social Network’s Log-In System

    Facebook, Instagram and other high-profile sites using Facebook’s log-in system suffered outages lasting around 45 minutes to an hour Monday evening Pacific Time.

    Dynatrace analysis showed that over 7,500 websites were impacted by the Facebook outage because they were using Facebook as a third-party service.

    A group called Lizard Squad claimed responsibility for launching a Distributed Denial of Service (DDoS) attack that took down or slowed many sites using the Facebook authentication system. Facebook denied these claims. The company blamed a configuration change for the outage.

    “[It] was not the result of a third-party attack but instead occurred after we introduced a change that affected our configuration systems,” a Facebook spokeswoman told The Wall Street Journal.

    Examples of online services affected by the Facebook outage included dating app Tinder and AOL Instant Messenger (AIM). Facebook-owned Instagram was also down; Facebook migrated Instagram’s infrastructure from Amazon Web Services into its own data centers after it bought the company.

    Twitter was extremely active during the outage, with commentary coming from Lizard Squad itself, among others.

    Lizard Squad should be familiar to most following its cyberattack on Sony’s PlayStation videogame servers in December. The group also claimed responsibility for hacking the Malaysia Airlines website.

    A Facebook spokesperson said, “We moved quickly to fix the problem, and both services are back to 100 percent for everyone.”

    Facebook Vice President of Engineering Jay Parikh tweeted something with NSFW language about how hard it is to serve more than 1.4 billion people, but later deleted it. Parikh’s team spends a lot of time and resources on resiliency architecture and tests. One of those tests involved shutting down an entire data center to see if services would stay up. They did.

    For a look into how Facebook manages so many users, check out a recent DCK article about web caching at Facebook.

    6:31p
    Could Microsoft Cosmos Challenge Hadoop?

    A new Microsoft data crunching framework is set to launch on the company’s Azure cloud, according to a report from Redmond pundit Mary Jo Foley on ZDNet. Dubbed Cosmos, it’s a potential competitor to Hadoop and, eventually, to Google’s homegrown Dataflow.

    Microsoft Cosmos is used extensively within the company to aggregate data from every major service into a shared pool. These services include Azure, Skype, and search engine Bing.

    It is similar to MapReduce, the heart of Hadoop, in that it uses a structured query interface. However, Cosmos adds support for directed acyclic graphs (DAGs), a way of modeling how different kinds of information connect. The DAG approach is said to reduce the time and effort involved in complex analysis and potentially improve performance. Cosmos may also have a stream-processing component, based on Foley’s claims. A close contemporary and competitor offering a similar combination of functionality and benefits is Apache Spark.
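
    The DAG idea itself is easy to illustrate. Below is a minimal, hypothetical Python sketch — not Cosmos’s actual interface — showing how processing steps can be modeled as a directed acyclic graph and ordered so that each step runs only after the steps it depends on (the graphlib module requires Python 3.9 or later):

        # A tiny DAG of data-processing steps (hypothetical, not Cosmos's API).
        # Each step maps to the set of steps it depends on; a topological sort
        # yields a valid execution order.
        from graphlib import TopologicalSorter

        pipeline = {
            "ingest_logs":     set(),                            # no dependencies
            "ingest_profiles": set(),
            "clean":           {"ingest_logs"},
            "join_profiles":   {"clean", "ingest_profiles"},
            "aggregate":       {"join_profiles"},
            "report":          {"aggregate"},
        }

        order = list(TopologicalSorter(pipeline).static_order())
        print(order)
        # e.g. ['ingest_logs', 'ingest_profiles', 'clean', 'join_profiles',
        #       'aggregate', 'report']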

    Spark enables in-memory analytics and is supported in Mesosphere, which treats the data center as one big computer. Spark creator Databricks recently unveiled a Spark-as-a-Service offering.

    Over 5,000 of the company’s engineers, as well as several businesses, use Microsoft Cosmos. Foley suggests it’s ready for wider release.

    Prior to Cosmos, Microsoft developed another homegrown alternative to Hadoop’s batch processing platform; work on it continued until 2011, and it too was hailed as a potential Hadoop challenger.

    Another potential challenger is Google’s Dataflow, the historical and real-time data analytics system that replaced MapReduce inside Google. Google has dubbed Dataflow the next evolution of Hadoop ecosystem technologies. Dataflow is believed to be undergoing commercialization after internal successes.

    Hadoop momentum has been building rapidly over the past several years.

    It should be noted that Google continues to support Hadoop financially through its investment arm. Numerous services based on different Hadoop distributions are available on the Google Cloud Platform, including, most recently, official support of Hortonworks.

    Microsoft Cosmos might end up either as competitor or as complementary to Hadoop, depending on how the company chooses to move.

    Microsoft CEO Satya Nadella outlined the company’s path to deliver a platform for ambient intelligence at a past customer event, stressing a “data culture.”

    “The era of ambient intelligence has begun, and we are delivering a platform that allows companies of any size to create a data culture and ensure insights reach every individual in every organization,” Nadella said in April, prior to launching a Data Platform and Internet of Things Service.

    6:55p
    Cologix Qualified for Data Center Tax Break in Minnesota

    Colocation provider Cologix’s new 28,000-square-foot MIN3 data center has qualified for Minnesota’s sales tax incentive program. The data center provider and its customers stand to benefit from tax rebates.

    Minnesota has one of the more aggressive data center tax break programs among U.S. states. To qualify for the incentives, a facility must be 25,000 square feet or larger, and its operator must commit to investing $30 million in the first four years. Qualifying facilities are exempt from sales tax on IT gear, cooling and power equipment, energy use, and software for 20 years.
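
    As a quick illustration of the two thresholds described above, a hypothetical eligibility check might look like the sketch below; the actual program has additional requirements beyond square footage and investment, so this is illustrative only.

        # Hypothetical check of the two thresholds named in the article.
        # The real incentive program has further rules; illustrative only.
        def meets_thresholds(square_feet: float, investment_first_4yrs: float) -> bool:
            return square_feet >= 25_000 and investment_first_4yrs >= 30_000_000

        print(meets_thresholds(28_000, 35_000_000))  # True  (investment figure is made up)
        print(meets_thresholds(20_000, 50_000_000))  # False (facility too small)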

    The state also does not tax personal property, utilities, or Internet access, among other things.

    The Department of Employment and Economic Development’s sales tax program recently lowered the threshold to qualify. Compass Data Centers, a wholesale data center provider that focuses on second-tier data center markets, also recently qualified for the DEED tax break. Compass leases its Minnesota data center to CenturyLink.

    Minnesota’s data center market has been burgeoning over the past few years. Players besides Compass and Cologix include Stream Data Centers, ViaWest, DataBank (which has a 20-megawatt data center planned), Digital Realty, OneNeck IT, and Zayo’s zColo.

    Many enterprises in emerging markets like Minnesota are looking to outsource data center infrastructure to colocation providers. The trend has spurred the birth of several new colocation players.

    The benefits of colocation include shedding the cost of building and operating companies’ own facilities or server closets, leveraging economies of scale, having top-notch floor space and features, and sharing the cost of enterprise-grade security and other capabilities.

    In a statement, Mike Hemphill, Minnesota general manager at Cologix, said the data center tax break is a further incentive for enterprises to leverage colocation. Such customers regularly spend $50,000 to $100,000 per cabinet to replace legacy equipment.

    The qualifying Cologix data center is located in downtown Minneapolis, in one of the state’s most connected buildings, a carrier hotel known as the 511 Building. The company recently expanded within the building. Cologix acquired the Minnesota Gateway facility in the carrier hotel in 2012 and operates the building’s Meet-Me Room.

    “Minnesota enterprises are increasingly responding to the benefits of the colocation model to support their IT needs, especially where data centers are close to home, highly connected and highly redundant. We designed our new data center with these customers in mind,” Hemphill said.

    7:16p
    DoE Lab in California Sheds 26 Facilities, Rooms in Data Center Consolidation

    The U.S. Department of Energy’s Lawrence Livermore National Laboratory (LLNL), in Livermore, California, has closed 26 of its 60 data centers, resulting in big annual savings. The average size of the data centers was 1,000 square feet. Lawrence Livermore defines a data center as a space of 500 square feet or greater.

    Data centers are massive consumers of energy. A recent study by the Natural Resources Defense Council found that U.S. data centers together consume around 3 percent of the country’s energy, an estimated 91 billion kilowatt-hours in 2013, a figure expected to increase by roughly 47 billion kilowatt-hours by 2020.

    Despite that massive projected increase in overall consumption, LLNL was able to reduce its own energy demand through data center consolidation, earning a Department of Energy Sustainability Award. LLNL is aiming for an average Power Usage Effectiveness (PUE) of 1.4.
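
    PUE is the ratio of total facility energy to the energy delivered to IT equipment, so a PUE of 1.4 means 1.4 kWh drawn from the grid for every 1 kWh that reaches the servers. A minimal sketch of the calculation, with illustrative figures:

        # PUE = total facility energy / IT equipment energy (illustrative figures).
        def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
            return total_facility_kwh / it_equipment_kwh

        print(round(pue(total_facility_kwh=1_400_000, it_equipment_kwh=1_000_000), 2))
        # -> 1.4, i.e. 0.4 kWh of cooling, power distribution, and other overhead
        #    for every 1 kWh delivered to IT gear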

    LLNL eliminated close to 130 physical servers and created a private cloud of 140 virtual servers. Its enterprise data center space now totals 15,620 square feet, housing 2,500 mission-critical science, engineering, computational research, and business computing systems.

    LLNL is saving $300,000 a year on energy bills and $40,000 on maintenance costs. It is also saving money by eliminating the need to install and maintain the electrical meters the DoE’s sustainability program requires for monitoring overall power usage; DoE has required that all LLNL data centers be metered by the end of the fiscal year. Avoiding meters for the closed facilities saved close to $350,000 and avoided more than $10 million in expenditures.

    Redundant equipment, such as air handlers and cooling systems, can be repurposed for new facilities.

    The consolidation program began in 2011 and used LLNL’s High Performance Computing Strategic Facility Plan as a guide. This initial consolidation was called “the low-hanging fruit” in a statement, and LLNL is now moving to its next phase under the DoE’s Better Buildings Challenge. The laboratory has committed to reducing the energy intensity of its data centers by at least 20 percent within 10 years.

    “Now we need to take a more institutional approach and to conduct an education campaign to show the advantages of consolidating into a professionally managed data center,” said Doug East, LLNL chief information officer, in a statement. “Centralizing equipment in efficient data centers makes business sense and has the potential to save research programs money — resources that could be redirected to science.”

    “Our institutional HPC needs will grow in the future and it is important for the Lab to have a Data Center Sustainability Plan to ensure we manage our computing resources in an efficient and cost effective way,” said Anna Maria Bailey, LLNL’s HPC facilities manager, in the release.

    Many enterprises and institutions of all ilks are undergoing consolidation programs, and consolidation is occurring at nearly all federal agencies.

    The Navy recently awarded a consolidation contract to CGI Federal, the company responsible for the initially botched Healthcare.gov launch. Much of the federal effort has been driven by the Federal Data Center Consolidation Initiative.

    Canada’s government is also in the process of consolidating data centers.

    Data center consolidation initiatives are also occurring in the private sector, as they promise big savings and the benefit of a smaller carbon footprint. British bank Barclays was a recent example.

    8:30p
    Lenovo Launches Partner Program for Cloud and Managed Service Providers


    This article originally appeared at The WHIR

    Lenovo launched a new partner program for cloud and managed service providers on Tuesday. The program is a result of the transition of the IBM System x server business to Lenovo, which acquired it from IBM as part of the $2.3 billion x86 server deal announced last January.

    In October, IBM System x became part of Lenovo, and the ThinkServer brand and System x are now part of the Lenovo Enterprise Server Group.

    The partner program will offer service providers access to discounted systems directly from distributors, business development funds that can be used for certifications to improve technical skills, and financing plans that help service providers improve profitability and cash flow, Lenovo said.

    The program is currently available to partners in North America, but Lenovo said it plans to roll out the program worldwide in the coming months.

    “Managed services spending is expected to triple in the next few years, and with the recent integration of the IBM System x portfolio, Lenovo is well positioned to offer beneficial programs to new and existing providers,” Lenovo Enterprise Business Group vice president and general manager Brian Hamel said. “This new program will provide approved service providers with a simple, efficient and profitable means to deliver high-value services to customers.”

    With Lenovo’s Kickstart program, service providers can defer payments for up to 120 days, and the Rent & Grow program aligns IT payments with usage. A Trade-In program is also available.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/lenovo-launches-partner-program-cloud-managed-service-providers

    9:00p
    CloudBees Snags $23.5M in Series D Funding to Meet Growing Demand for Jenkins


    This article originally appeared at The WHIR

    CloudBees announced a $23.5 million financing round on Tuesday. The company plans to use the funding to continue the growth it has experienced “from demand for its continuous delivery solutions based on Jenkins CI, the leading open source continuous integration (CI) platform.”

    “With Jenkins’ global dominance as the hub of continuous delivery for enterprises, CloudBees sees explosive demand in 2015 for its Jenkins-based solutions,” the company said in a press release.

    The company is supported by previous investor Lightspeed Venture Partners, along with Matrix Partners, Verizon Ventures, and Blue Cloud Ventures. Less than a year ago, CloudBees received $11.2 million in a round led by Verizon. With the latest funding round, the company has raised a total of just under $50 million in less than five years.

    CloudBees founder and chief executive officer Sacha Labourey said the funding will allow the company to capture more market share and “further cement our position as the continuous delivery leader.” It plans to use the capital to support sales, marketing, and development.

    “We’ve been with CloudBees since the early days and we continue to be impressed with the way this world class team delivers on its market vision,” Lightspeed Venture Partners general partner John Vrionis said. “The business is scaling quickly and we believe there is a massive opportunity to build a franchise company given all the momentum in continuous delivery.”

    In September, CloudBees repositioned itself as “The Enterprise Jenkins Company” and dropped its runtime PaaS, closing the RUN@cloud platform that was used by between 300 and 500 customers.

    In 2013, CloudBees began offering Continuous Cloud Delivery (CCD), a service to accelerate application delivery and meet the high-frequency update requirements of web and mobile applications. CCD offers a way for development teams to quickly push application changes and deploy the resulting code to production, all via the cloud.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/cloudbees-snags-23-5m-series-d-funding-meet-growing-demand-jenkins

