Data Center Knowledge | News and analysis for the data center industry
Wednesday, July 8th, 2015
12:00p
Emerson’s Data Center Business to Gain Agility Post Spin-Off
By spinning off its data center and telco infrastructure business, Emerson Electric will create a company that can react faster to changing market demands while also freeing itself of the drag on overall revenue and profit the Network Power business segment created last year.
The company’s leadership has been talking about divesting underperforming or non-core businesses for some time, but Network Power, its data center business, is no insignificant side project. While it was the only segment that did not grow revenue or profit last year, it still accounted for about one-fifth of the company’s overall revenue, so the spin-off plan, announced last week, is likely about more than simply shedding an underperforming business unit.
An Emerson spokesman said the company would not provide any information about its plans beyond the official announcement, because it was “early in the process.”
In a Changing Market, Agility is a Must
The need to be more agile is crucial for a vendor in today’s data center market. Like all of its competitors, Emerson Network Power has felt the impact of more compute capacity being deployed in commercial data center provider facilities and hyperscale data centers than in traditional enterprise facilities, said Rhonda Ascierto, research director for data center technologies at 451 Research.
Resiliency, the function that has until recently been delegated solely to redundant power and cooling systems, is increasingly being handled by software. This trend has also placed a strain on sales growth for vendors like Emerson, whose power and cooling gear is a common sight in data centers around the world.
Emerson’s data center business has a substantial software play as well, but in that space, agility is even more crucial. It’s important for a DCIM software vendor to be able to partner and integrate with vendors that sell other building and IT systems management software, Ascierto said. Network Power may find it easier to make such partnerships as a stand-alone entity rather than as part of a 125-year-old industrial giant.
Opportunity Outside the Data Center
Network Power will be more agile, but it will also have more market opportunity as a stand-alone entity, according to 451.
One big option is leveraging the backend of its DCIM software platform to expand into adjacent sectors, such as smart buildings and smart cities, Ascierto said. Schneider Electric, one of Emerson’s biggest competitors, has versions of its StruxureWare software suite for data center operation, as well as for building, plant, and supply chain operation, among other uses.
A similar adjacent-market opportunity is there for Emerson’s precision-cooling products, which can be targeted at non-data-center commercial buildings, Ascierto said.
The stand-alone company will also be freer to make acquisitions, which, in 451’s view, will most likely be software acquisitions aimed at those adjacent markets.
Tumultuous Year for Network Power
The Network Power segment’s revenue and profit have declined since 2012. Last year, however, the drop was especially steep, due mostly to the sale of its connectivity solutions business and the embedded computing and power business.
The data center business segment made about $5 billion in sales in 2014, down 18 percent. Its earnings were $460 million, or 17 percent lower than 2013 earnings. Emerson’s total revenue for the year was $24.54 billion, down 1 percent year over year.
Last year, the company also took a substantial impairment charge due to lackluster performance of Chloride, a European supplier of data center uninterruptible power supplies, cooling products, and services that sells into Europe, the Middle East, and Africa. Emerson attributed the write-off to weak economic conditions in Western Europe.
The impairment charge was $508 million, or about one-third of what Emerson paid for Chloride to outbid its Swiss competitor ABB, which was preparing to acquire the London-based vendor. Analysts at the time saw the move as a way to prevent ABB from grabbing a big chunk of the EMEA region’s data center infrastructure market.
Hyperscale Project Makes Positive Impact
A large hyperscale data center project and an increase in UPS sales lifted the data center portion of Network Power slightly. Those gains, however, were offset by a drop in thermal management and infrastructure product sales.
Emerson did not specify which hyperscale data center project had such an impact on its results, but the company did announce in May 2014 that it had supplied more than 250 modules to build the second building on Facebook’s data center campus in Luleå, Sweden. The order included power skids, evaporative air handlers, a water treatment plant, data center superstructure solutions, and control systems.
An innovative project, the modular data center marked the first time Facebook relied so extensively on modules manufactured off-site and shipped to the location for quick assembly. The goal was to shrink the time it takes to build a data center dramatically, by about 50 percent.
Fundamentals Not Likely to Change
Other than doubling down on software, partnerships, and expansion into adjacent markets, 451 does not expect much to change for Network Power or its existing customers. The segment has always had its own global distribution and partner channel which should remain in place.
It will probably keep its headquarters in Columbus, Ohio, as well as its manufacturing facilities, which have operated independently from Emerson Electric, Ascierto said. It is likely that the new stand-alone entity will get a different name.

2:54p
DigitalOcean Sharpens Developer Focus with $83 Million Series B Funding Round 
This article originally appeared at The WHIR
DigitalOcean is putting rumors around additional funding to rest on Wednesday, announcing the close of an $83 million Series B funding round led by Access Industries, with participation from Andreessen Horowitz.
Last week Business Insider reported that DigitalOcean secured $83 million in additional funding, building on a $37.2 million Series A round led by Andreessen Horowitz last March.
Growing and expanding aggressively, DigitalOcean is now used by more than 500,000 developers, who have deployed more than six million cloud servers on its platform. The company has grown significantly since its launch in 2012, becoming the second-largest cloud hosting company behind AWS.
In an interview with the WHIR in January, DigitalOcean CEO and co-founder Ben Uretsky explained how its focus on developers gives it an edge over AWS. “Where we saw an opportunity for DigitalOcean was to be the exact opposite of what AWS has become. As an individual developer, you don’t have the same resources to figure out the complex ecosystem that AWS provides…not to mention, Amazon wants to lock you into their ecosystem as much as possible, so they brand everything under their own terminology. So, all-in-all, you walk away extremely confused and frustrated. And that’s where we saw the opportunity.”
By targeting developers who need to spin up cloud servers and scale them quickly, DigitalOcean has done well, and it plans to continue supporting its core users by expanding its feature set beyond the virtual servers it calls Droplets.
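Much of that developer appeal comes down to how quickly a server can be created. As a rough illustration, a Droplet can be provisioned with a single authenticated call to DigitalOcean’s public v2 API; the token, name, region, size, and image values in the sketch below are placeholders, not a recommended configuration.

```python
# Sketch: creating a Droplet via DigitalOcean's v2 API with Python's requests library.
# All concrete values (token, name, region, size, image) are placeholders.
import requests

API_TOKEN = "your-api-token"  # generated in the DigitalOcean control panel

resp = requests.post(
    "https://api.digitalocean.com/v2/droplets",
    headers={
        "Authorization": f"Bearer {API_TOKEN}",
        "Content-Type": "application/json",
    },
    json={
        "name": "example-droplet",    # hypothetical server name
        "region": "nyc3",             # one of DigitalOcean's regions
        "size": "512mb",              # the smallest Droplet size at the time
        "image": "ubuntu-14-04-x64",  # a stock OS image slug
    },
)
resp.raise_for_status()
print(resp.json()["droplet"]["id"])   # ID of the newly created Droplet
```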
“We are laser-focused on empowering the developer community,” Mitch Wainer, co-founder and CMO at DigitalOcean, said in a statement. “This capital infusion enables us to expand our world-class engineering team so we can continue to offer the best infrastructure experience in the industry.”
Access Industries reiterated DigitalOcean’s commitment to the developer community in its statement announcing the funding.
“Technology represents the foundation on which today’s greatest companies are built,” said Pueo Keffer, head of venture capital at Access Industries. “DigitalOcean is uniquely positioned to grow with the developer community, which creates and maintains this very foundation.”
DigitalOcean has data centers in NYC, San Francisco, Singapore, London and Amsterdam. It opened a data center in Germany in April.
This first ran at: http://www.thewhir.com/web-hosting-news/digitalocean-sharpens-developer-focus-with-83-million-series-b-funding-round

3:00p
Why Hybrid Cloud Won’t Slow Down
Toby Owen is the VP of Products for Peer 1 Hosting.
It’s no surprise that hybrid cloud adoption is growing. In fact, it is expected to triple by 2018 as companies seek to realize better elasticity, availability and security, all at a manageable price in their hybrid environments.
This growth isn’t all about the technology, though. There are some strong market influences that have contributed to an uptick in the use of hybrid cloud computing – namely, businesses’ price preferences, more sophisticated technology components, and the growth of big data.
A Custom Price for Custom Needs
According to research conducted by Vanson Bourne, cost has consistently ranked as a top factor for IT decision makers in their cloud investments. That likely doesn’t come as a surprise to many; after all, businesses can boost profits by increasing revenue or cutting costs, and the latter is generally easier to sell to company leadership due to a faster turnaround time.
Cloud computing lowers costs by moving workloads to cloud environments so companies no longer need to invest in their own infrastructure; they simply pay for the compute power and RAM they use. That has helped propel cloud uptake overall.
Hybrid clouds take those cost benefits a step further by allowing companies to build a cloud environment a la carte, choosing everything from the OS to the firewall to the load balancer. By customizing every piece of the hybrid environment, companies have complete control over the pricing as well. As a result, they may find significant cost reductions by moving away from the more “cookie cutter” public cloud approach. With so much emphasis on price in the enterprise world, this has generated huge interest in hybrid cloud.
Building a Cohesive Hybrid Cloud Takes Time
Beyond cost, the technology that makes up hybrid cloud environments has also become more sophisticated, attracting enterprise users.
For example, OpenStack, one of the most widely used open source cloud platforms, now includes Security Assertion Markup Language (SAML), offering single sign-on for web properties. This allows businesses to more easily and seamlessly conduct B2B and B2C transactions without fear of the data being exposed.
Many hosting providers are also facilitating cloud federations with trusted cloud environments to help IT decision makers maintain security throughout their hybrid clouds. For instance, by integrating SAML into OpenStack-powered clouds, businesses can now more easily federate with other cloud environments, ultimately giving cloud buyers more hybrid options.
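To make the federation point concrete, here is a rough sketch of how a SAML attribute mapping might be registered with OpenStack Keystone’s OS-FEDERATION API. It is illustrative only: the endpoint, token, group ID, mapping name, and attribute names are all placeholders, and real deployments configure this differently.

```python
# Sketch: registering a SAML attribute mapping with Keystone's OS-FEDERATION API.
# Endpoint, token, group ID, and attribute names are placeholders.
import requests

KEYSTONE = "https://keystone.example.com:5000/v3"  # hypothetical Keystone endpoint
ADMIN_TOKEN = "replace-with-admin-token"

# Map the identity provider's username assertion onto a local ephemeral user
# and place federated users into a pre-created group that carries their roles.
mapping = {
    "mapping": {
        "rules": [
            {
                "local": [
                    {"user": {"name": "{0}"}},
                    {"group": {"id": "federated-users-group-id"}},  # placeholder group
                ],
                "remote": [
                    {"type": "openstack_user"}  # assertion attribute name depends on the IdP
                ],
            }
        ]
    }
}

resp = requests.put(
    f"{KEYSTONE}/OS-FEDERATION/mappings/saml_mapping",
    headers={"X-Auth-Token": ADMIN_TOKEN},
    json=mapping,
)
resp.raise_for_status()
```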
Finally, advanced hybrid cloud solutions that incorporate many advanced offerings from infrastructure and software providers are coming onto the market. Because these environments incorporate more sophisticated components, including disaster recovery, firewalls, bare metal and virtual servers, self-service online portals, and HPC capabilities, hosting providers are able to bundle high-performing, enterprise-grade hybrid cloud solutions that meet even the most demanding business requirements. With more use cases and wider applications, hybrid cloud is a natural solution for businesses of all sizes.
Hybrid is the Perfect Match for Big Data
Beyond the capabilities and price of hybrid cloud, companies are facing more and more situations where pure public or private cloud environments are not well-suited. Take big data, for example. Gartner predicts that by 2020, big data will be used to reinvent, digitize, or altogether eliminate 80 percent of the business processes and products of a decade earlier.
Contending with vast amounts of data requires that companies invest in a cloud solution that can quickly and reliably process large data sets. But, can a pure public or pure private environment handle that much data? Many companies would say no, as the public cloud is seen as too unreliable to successfully process and transfer big data, while private clouds limit scaling and availability.
Hybrid cloud therefore offers a “best of both worlds” solution for the growing demands of big data. And, as big data pushes the limits of today’s infrastructure, more and more companies are opting for hybrid cloud environments to make the most of their data.
The Best of Both Worlds
Although hybrid cloud hasn’t yet overtaken public or private environments in adoption, interest in it isn’t slowing down. Hybrid cloud capabilities are quickly expanding to tackle concerns about IT costs and capabilities, and many decision makers now see it as a far more attractive offering than they did even a year ago. At a lower cost than on-premises infrastructure, hybrid cloud has become a natural fit for companies across verticals and around the world.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

4:44p
United Airlines Grounds Flights Due to IT Issues
United Airlines’ US flights were grounded for over an hour due to “automation issues,” according to the US Federal Aviation Administration.
The airline issued a statement saying a network connectivity issue kept flights grounded; however, little more information has been made available so far. The ground stop began around 8:30 am EDT Wednesday and was lifted around 9:50 am. The airline is still looking into the issue, and flight delays have continued through the day.
A lack of “proper dispatch information” kept flights grounded for less than an hour last month as well. In 2012, issues with the merger between United and Continental were blamed for significant systems outages in August and again in November.
Issues at an individual airline often have a butterfly effect, disrupting air travel well beyond that airline.
IT issues disrupting air travel are a periodic occurrence. As we wait for more information on United’s debacle, let’s take a look back at a few incidents from recent history:
IBM generator failure causes New Zealand airline chaos
An IBM generator failure caused chaos in 2009. Key services were crippled for Air New Zealand, prompting the airline’s CEO to publicly chastise IBM’s CEO. Around 10,000 passengers were grounded.
The problem occurred during planned maintenance in IBM’s Newton data center in Auckland. Generator failure dropped power to parts of the data center, including the mainframe operations supporting Air New Zealand’s ticketing.
Cable cut in Midwest hobbles Alaska Airlines
A Sprint cable cut in Wisconsin hobbled Alaska Airlines’ connection to Sabre, a reservation and ticketing system. It was a good example of how something as simple as a fiber optic cable cut can be felt thousands of miles away.
IT woes ground American Airlines
An outage in a key reservations system kept flights grounded for a day. An early tweet from the company blamed the downtime on problems with the Sabre reservations network but was later corrected to say the issue was with the airline’s ability to access Sabre — a network issue. Sabre said it was up and running fine with all other airlines.
“Leap Second Bug”
This entry also appeared on our strangest outages countdown. A leap second is a one-second adjustment occasionally applied to Coordinated Universal Time (UTC) to account for variations in the Earth’s rotation speed. The latest leap second happened June 30 this year.
A leap-second bug in 2012 caused computer problems with the Amadeus airline reservation system, triggering long lines and traveler delays at airports across Australia. More than 400 Qantas flights around Australia were delayed by at least two hours as staff switched to manual check-ins. The outage at Amadeus, one of the world’s major reservation systems, lasted about an hour but had a longer impact on air travelers and airline staff.
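The failure mode is easy to reproduce: the inserted second is written as 23:59:60, a time of day most software simply cannot represent, so any system handed such a timestamp (or a clock that repeats a second) has to special-case it. A minimal Python illustration:

```python
# Why software trips over a leap second: the inserted second (23:59:60)
# has no valid representation in most time libraries.
from datetime import datetime, timezone

try:
    # The leap second added at the end of June 30, 2015 (23:59:60 UTC)
    datetime(2015, 6, 30, 23, 59, 60, tzinfo=timezone.utc)
except ValueError as err:
    # datetime only accepts seconds 0-59, so the leap second cannot be
    # stored directly; systems must smear, repeat, or otherwise handle it.
    print("cannot represent the leap second:", err)
```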
For more of a feel-good story, also check out Amerijet’s discussion of modernizing its IT at a Data Center World conference put on by our sister company AFCOM. Senior director of technology Jennifer Torlone spoke about where the company was when she arrived on the scene and how it ultimately modernized its systems.

6:45p
“Technical Issue” Halts Trading on NYSE
Operators of the New York Stock Exchange suspended trading Wednesday morning because of an unspecified technical issue. NYSE, owned by Intercontinental Exchange (ICE), said the outage was not caused by a cyber attack.
The company has not provided any details about the issue, saying on Twitter it was “doing our utmost to produce a swift resolution and will be providing further updates as soon as we can.”
It suspended trading across all symbols at 11:32 am ET. NYSE President Thomas Farley told CNBC he expected the market to reopen by 2:45 pm or 3 pm ET.
This is the second high-profile IT infrastructure outage today. United Airlines grounded all flights this morning due to a network-connectivity issue.
NYSE-listed securities continued trading on other exchanges. The issue did not affect NYSE Amex and Arca options, the company said in a status update.
“We’re currently experiencing a technical issue that we’re working to resolve as quickly as possible,” a company spokesperson said in an emailed statement. “We will be providing further updates as soon as we can, and are doing our utmost to produce a swift resolution, communicate thoroughly and transparently, and ensure a timely and orderly market re-open.”
NYSE’s primary data centers are in Mahwah, New Jersey, and Basildon, UK. The facilities house infrastructure that supports trading engines as well as IT equipment owned by other financial-services companies that lease space there to be as close to the trading engines as possible to reduce latency of their communications with the markets.
Besides primary data centers, exchange operators also usually have backup facilities.
Because of robust backup infrastructure, stock exchange outages are rare. The most recent high-profile one happened in 2013, when trading on Nasdaq was halted, reportedly because of software issues traced to infrastructure problems at NYSE’s Arca exchange.
In 2008, before ICE’s merger with NYSE Euronext, commodities trading stopped on ICE because of a power outage in a data center.
Trading on the London Stock Exchange was suspended in 2009 because of unspecified technical issues.

7:20p
New Technologies And Strategies To Enhance Data Center Efficiency
As the most progressive data center in the Rocky Mountain region, and a leader in its industry, Fortrust saw an opportunity to provide an outstanding example of how a data center should operate and contribute to a sustainable environment. Together as an organization, it created the “Fortrust Sustainability (Green) Initiative” in 2011, a strategic plan to reduce its energy consumption and carbon footprint while encouraging its employees, customers, vendors, and partners to join in the effort to improve its impact on the planet, the industry, and the community.
In this white paper, we explore how Fortrust changed its operations to become more “green” and implemented new technologies and strategies to further its leadership in the data center industry. The ability to deploy modular technology alongside raised-floor space to match customers’ of-the-moment computing demands has transformed Fortrust into a hybrid data center, which is inherently more efficient than traditional, one-size-fits-all facilities.
Download the white paper today to read it in full and see Fortrust’s three-part process for making its operations more green, reducing energy consumption and its carbon footprint.

8:43p
Confluent Raises $24M to Commercialize LinkedIn-Developed Apache Kafka
Confluent closed a $24 million Series B to help it commercialize Apache Kafka, a real-time data streaming technology. Founded by the creators of Kafka, Confluent was spun out of LinkedIn in order to commercialize the open source technology.
Kafka makes all types of data available for stream processing and is battle-tested. At LinkedIn, Kafka was used to handle over 800 billion messages across real-time messaging, alerting, and other services.
The developers open sourced the project at LinkedIn and donated it to the Apache Software Foundation. Confluent founder and CEO Jay Kreps said it initially found its way to big Silicon Valley technology companies, eventually spreading to virtually every industry.
There’s a lot of talk of the next-generation data-driven business, a business capable of pulling in all sorts of data and acting on it instantly or near instantly. However, the “data fluid” business has a mess of integration woes to get over. Confluent believes Kafka is the answer: it can act as a hub for the flood of disparate data.
“The problem we solve is the proliferation of systems and the resulting mess,” said Kreps. “Different types of data result in multiple point-to-point connected systems. We connect all these systems together. Anything you pump into Kafka is available for stream processing.”
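In practice that hub model looks roughly like the sketch below, which uses the third-party kafka-python client: one system publishes events to a topic, and any number of downstream systems consume the same stream independently. The broker address, topic name, and event fields are placeholders, not anything specific to LinkedIn or Confluent.

```python
# Sketch of Kafka as a data hub using the kafka-python client.
# Broker address, topic name, and event contents are placeholders.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
# Any source system "pumps" its events into a topic...
producer.send("page-views", {"user": "alice", "page": "/pricing"})
producer.flush()

# ...and every interested downstream system reads the stream on its own schedule.
consumer = KafkaConsumer(
    "page-views",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop iterating if no new messages arrive
)
for message in consumer:
    print(json.loads(message.value))
```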
It can be difficult to integrate your data in a single place for analysis, according to Kreps, let alone make it available for real-time processing. It was initially created to handle disparate systems, with the real-time processing part only a hypothesis in the early days.
“We had several different systems, log aggregation, old-fashioned messaging, etc. Each sucked in a different way: one would have no scalability or have weak data delivery guarantees. A lot of it was batch jobs.” Kafka was not only able to handle disparate databases, it was capable of handling real-time data at scale.
LinkedIn’s needs are not unique to big social networks; they mirror what’s happening across the industry. Two trends have caused Kafka to take off, according to Kreps.
“Companies are moving towards more diverse infrastructure setups,” he said. “It’s no longer one size fits all, and they’re using data systems of different types, shuffling data back and forth.”
“At the same time, there’s an embracing of ‘activity data’,” said Kreps. “That’s data that shows what’s happening in my business right now: sensor data, device data coming from mobile, or the Industrial Internet of Things. Web companies have this data – but it’s become increasingly universal.”
LinkedIn, Netflix, Twitter, Uber, PayPal, Apple, and Salesforce all use Kafka, but its appeal isn’t limited to the big web-scale companies. Everyone from enterprises down to startups building unique real-time data functionality uses it extensively. Confluent is building a platform around the popular open source project.
Where Confluent adds value is in turning the open source project into a platform for diverse data. While Kreps said Kafka is ready to go and is already used for some of the biggest mission-critical data workloads, Confluent will continue to build out features around it.
The round was led by Index Ventures, with participation from prior investor Benchmark Ventures, which led the company’s $7 million Series A. Confluent will use the funding to build out security and management features around the platform and to extend the number of databases and systems it can plug into, investing in connectors, security hardening, and deeper management and monitoring tools, in addition to commercial support.
There’s a big Internet of Things play, as well as an opportunity to act as a stream data hub for enterprises moving to microservices. The trend in both the consumer and business worlds is toward diverse streams of data. Kafka also acts as a pipeline into Hadoop and works well with offline systems. Kreps compares it to the data warehouses of old, tuned for modern data needs.

10:08p
CommScope Focuses on User Experience in Latest DCIM Software Release
CommScope’s latest release of the iTRACS data center infrastructure management platform focuses on making it simple to use. The latest iteration has a new interactive interface called iTRACS SimpleView and better role-based management to provide privacy and security.
Vendors in the DCIM software space are in a race to offer a suite with the most complete and rich functionality. Unfortunately, this evolution often adds complexity to what is already viewed as complex. CommScope is focusing on usability with the 4.1 update.
SimpleView is a visualization of the data center that lets a user view, manage, and interact with the physical ecosystem. It’s a user interface enhancement that shows the data center infrastructure in a browser. It has highlighting, filtering, tracing, and other capabilities.
Role-based management has been enhanced to make iTRACS easier to use in a colocation or multi-tenant setting. By managing roles, the platform can actively restrict access to infrastructure so people only see what they’re supposed to see. The company said it provides a secure partition between clients in multi-tenant environments.
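The underlying idea is simple to illustrate. The sketch below is a generic, hypothetical role-based visibility filter, not CommScope’s iTRACS API: a user’s role and tenant determine which assets the platform will even return.

```python
# Generic, hypothetical sketch of role-based visibility in a multi-tenant
# DCIM tool; not CommScope's iTRACS API.
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    tenant: str

@dataclass
class User:
    name: str
    tenant: str
    role: str  # "operator" sees the whole facility; "tenant_admin" sees only its own racks

ASSETS = [
    Asset("Rack A01", tenant="acme"),
    Asset("Rack B07", tenant="globex"),
]

def visible_assets(user: User) -> list:
    """Return only the assets this user's role entitles them to see."""
    if user.role == "operator":
        return ASSETS  # facility staff see everything
    return [a for a in ASSETS if a.tenant == user.tenant]  # tenants see only their own

print([a.name for a in visible_assets(User("bob", tenant="acme", role="tenant_admin"))])
# ['Rack A01']
```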
While looks aren’t everything, if DCIM software becomes simpler to use and understand through visualization, that’s a win. Since SimpleView is web-based, the data center can be viewed on a multitude of devices.
“We’re streamlining how you use the software,” said William Bloomstein. “It gives a complete look at the power chain in a browser-based interface that simplifies the experience.”
At the end of the day, all the data and metrics in the world are useless if people don’t use them. DCIM implementations fail if people treat the software as an occasional snapshot rather than an ongoing tool, or if they can’t see the information they specifically want and need to see. The 4.1 release makes the software friendlier to a wider audience and more applicable in multi-tenant settings.
Furthering the goal of getting people to use DCIM, 4.1 has better collaborative capabilities, allowing users to share and filter reports. Browser-based automatic scheduling features are meant to speed up insights. The enhanced Reporting Viewer lets users filter reports, making it easier to drill down, isolate, and leverage vital information, according to the company.