Data Center Knowledge | News and analysis for the data center industry
Thursday, January 2nd, 2014
12:42p |
PTC Grows Internet of Things Platform with ThingWorx Acquisition PTC announced it has acquired ThingWorx, creators of a platform for building and running applications for the Internet of Things (IoT), for approximately $112 million, plus a possible earn-out of up to $18 million. The acquisition extends PTC’s strategy by accelerating its ability to support manufacturers creating connected products. As part of PTC, ThingWorx will continue to help customers in a wide range of industries seeking to leverage the IoT – including telecommunications, utilities, medical devices, agriculture, and transportation – as well as an emerging partner network of IoT-enabled service providers.
A recent McKinsey research report estimates that the Internet of Things has a potential economic impact of $2.7 trillion to $6.2 trillion annually. PTC will use the ThingWorx platform to speed the creation of high-value IoT applications that support manufacturers’ service strategies, such as predictive maintenance and system monitoring.
“All aspects of our strategy to date have centered on helping manufacturing companies transform how they create and service smart, connected products,” said PTC president and CEO Jim Heppelmann. “For manufacturers today, it is clear to us that improved service strategies and service delivery is the near-term ‘killer app’ for the Internet of Things and this opportunity has guided our strategy for some time. With this acquisition, PTC now possesses an innovation platform that will allow us to accelerate how we help our customers capitalize on the market opportunity that the IoT presents.”
Earlier in the month, PTC announced an agreement with GE Intelligent Platforms (GE), under which GE will resell the PTC Manufacturing Process Management software in combination with GE’s manufacturing execution system (MES) software. The two companies kicked off a series of joint sales, marketing, services, support, and product development initiatives designed to collaboratively meet demand in this fast-growing market.
“At ThingWorx, we share PTC’s vision for helping organizations fundamentally leverage the connected world,” said Russell Fadel, CEO and co-founder, ThingWorx. “We believe all industries, but especially manufacturing, will be transformed in the Internet of Things era. We are excited to pursue this broad set of opportunities with the resources and proven solution portfolio that PTC provides.”
At the 2014 Consumer Electronics Show (CES) next week in Las Vegas, expect the Internet of Things / Internet of Everything to be a big theme. There are conference tracks for IoT, wearables, and MEMS (MicroElectroMechanical Systems). Whether it’s called the Internet of Things, the Internet of Everything (Cisco), the Internet of Customers (Salesforce.com), or the Internet of Nouns (Greylock Partners) – 2014 is sure to be a breakout year. | 1:25p |
QTS is Top Performer in 2013 as Data Center Stocks Lag the Market  Here’s a look at the data center winners and losers on Wall Street in 2013.
Wall Street loved IPOs in 2013, and that trend extended to the data center sector. QTS Realty and CyrusOne were the top performers among publicly-held data center companies for 2013 after going public earlier in the year.
But they were the best of a weak bunch. The sector trailed the broader market, as investors cooled on data center stocks amid growing cloud competition and debates about the valuation of service providers.
QTS Realty (QTS) ended the year at $24.78, an 18 percent improvement on the $21 pricing of its IPO in October. Shares of QTS jumped in late December after analysts from Morgan Stanley recommended the stock. CyrusOne (CONE) had its IPO in January at $19 a share, and closed the year 17.5 percent higher at $22.33, recording strong sales along the way. The two newcomers edged out CoreSite Realty (COR), which had the best showing among incumbent data center REITs with a 16.4 percent gain for the year.
Mediocre Gains as Wall Street Advances
Despite those gains, it was a year many data center investors might like to forget, as even the sector’s top performers trailed the broader market. The Dow Jones Industrial Average gained 26.5 percent for the year, while the S&P 500 improved by 29.6 percent and the NASDAQ soared 38.3 percent.
That weak performance marked a departure from recent years, when the data center sector outperformed the market with strong gains in both 2011 and 2012. By early 2013, the lofty valuations for data center and cloud computing companies came under scrutiny on Wall Street, including high-profile debates over the performance of two industry bellwethers, Rackspace Hosting (RAX) and Digital Realty Trust (DLR).
Rackspace had been in the vanguard of providers gaining from interest in cloud computing, soaring 37 percent in 2011 and 73 percent in 2012. But in February Rackspace shares slid 20 percent after the company’s earnings raised concerns that the rate of adoption for cloud computing services may be moderating. The company was then the focus of a critical writeup in Barron’s asserting that RAX was overvalued. Subsequent earnings stumbles left Rackspace shares down 47 percent for the year, making it the sector’s worst performer.
In May, hedge fund Highfields Capital Management asserted that investors should short shares of Digital Realty, saying the huge data center developer was understating the future investment in facilities that would be required to support its enterprise customers. Digital Realty said Highfields was “mischaracterizing and drawing inaccurate conclusions” from its disclosures, but the debate focused Wall Street’s attention on data center maintenance costs. In October Digital Realty lowered its revenue guidance for the coming year, saying enterprise tenants were deploying new data center space more slowly than expected. That triggered a selloff in data center stocks, which helped push DLR shares to a decline of 27 percent for the year.
As for fourth quarter performance, the Data Center Investor chart tracks closely with the full-year showings: QTS and CyrusOne led the pack, while Rackspace trailed.
 | 2:32p |
Unifying Data Center Operations with True DCIM Suvish Viswanathan is the senior analyst, unified IT at ManageEngine, a division of Zoho Corp. You can reach him on LinkedIn or follow his tweets at @suvishv. This is the last part of a three-part series.
In the second post in this series, we looked at the evolution of data center asset management and the degree to which it has evolved in parallel with traditional IT management. Ultimately, if you adopt a service-oriented management focus, the goal of both management efforts is the same — to enable the optimal delivery of a service to the end user. That said, the IT and facilities management worlds should not need to operate in parallel. Instead, they need to operate as one, in a truly integrated manner.
That’s what DCIM — data center infrastructure management — should be all about.
Now, before you go saying “Ah, DCIM — it is rubbish” (as, in fact, a journalist said to me just the other day), let me distinguish what I’m talking about from the DCIM that everyone else is talking about (which, I agree, is rubbish).
Unfortunately, DCIM has become one of those marketplace buzzwords with no standard definition. I recently saw an article mentioning that more than 80 vendors claim to offer DCIM solutions. The problem is that most of them don’t. They may offer an IT or facilities management product that facilitates the management of one part of the data center infrastructure, but that’s a far cry from the kind of integrated DCIM solution that today’s fast-paced businesses need.
The Shape of a Truly Integrated DCIM Solution
Data center management will never be performed efficiently if the IT infrastructure and facilities infrastructure are managed separately. Can you imagine your blade servers running in a room that has warmed to 90°F? Should you really feel comfortable about the ongoing availability of your business-critical applications if you don’t know that the diesel tank fueling your backup generator is only 10 percent full? Is it really possible to ensure the security of your infrastructure and the critical data it processes without a proper sensing mechanism in place?
When viewed through the lens of service delivery, all the assets in your data center are connected, and your ability to monitor and manage them needs to be equally as interconnected. A true DCIM must be able to do the following:
Collect Data. The data center is full of data collection nodes: IT systems collecting performance data in real time from servers, switches, data storage systems and more — as well as facilities infrastructure systems collecting data about rack temperatures, power consumption, backup generator fuel tank levels and more. These systems rely less and less on an agent-based approach to reporting, so a DCIM solution must be able to collect data using a wide range of common communications protocols — from SNMP, WMI, SSH and the like for IT assets to Modbus, BACnet, LonWorks and others for the facilities infrastructure assets.
The data capture features of DCIM need to support more than real-time infrastructure monitoring, too. The DCIM system must be able to reach deep into the broader infrastructure to pull granular data from individual pieces of equipment for planning and forecasting purposes.
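Conceptually, this collection layer reduces to protocol-agnostic polling. Below is a minimal sketch in Python, assuming hypothetical Collector classes and device records; a production system would back each collector with a real protocol library (pysnmp for SNMP or pymodbus for Modbus are common choices, though neither is named in this series):

```python
# Minimal sketch of a protocol-agnostic collection layer. The Collector
# classes and device records are hypothetical; a real deployment would
# back each collector with an actual protocol library.
from abc import ABC, abstractmethod

class Collector(ABC):
    """One collector per protocol; each returns a uniform reading dict."""
    @abstractmethod
    def poll(self, device: dict) -> dict: ...

class SnmpCollector(Collector):
    def poll(self, device):
        # Placeholder: a real version would issue SNMP GETs against
        # device["address"] for the OIDs the device exposes.
        return {"device": device["name"], "protocol": "snmp", "values": {}}

class ModbusCollector(Collector):
    def poll(self, device):
        # Placeholder: a real version would read holding registers
        # (rack temperature, generator fuel level) over Modbus/TCP.
        return {"device": device["name"], "protocol": "modbus", "values": {}}

COLLECTORS = {"snmp": SnmpCollector(), "modbus": ModbusCollector()}

def poll_all(inventory):
    """Route each device to the collector for its protocol."""
    return [COLLECTORS[d["protocol"]].poll(d) for d in inventory]

readings = poll_all([
    {"name": "core-switch-1", "protocol": "snmp", "address": "10.0.0.1"},
    {"name": "crac-unit-3", "protocol": "modbus", "address": "10.0.1.7"},
])
print(readings)
```

The same uniform reading format then serves both real-time monitoring and the longer-horizon planning data described above.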
Provide analytical support. Ultimately, the point of collecting data is to subject it to analysis and correlation, so a DCIM system needs a powerful analytical component. From a data center management standpoint, the analytical engine can facilitate decisions. These can be programmatic decisions, as when an alert might prompt the automated transfer of virtual machines from one server to another or automatically increase the airflow within a certain set of racks because of a sudden spike in CPU temperatures. Or, they can be strategic decisions taken by a committee, as when planners view DCIM data for environmental trends, application performance patterns or the broader user experience.
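To make the programmatic path concrete, here is a minimal sketch of the temperature-spike response described above; the thresholds and the two action functions are invented for illustration:

```python
# Hypothetical sketch of a programmatic decision: a CPU-temperature
# spike in a rack triggers an airflow increase, and a sustained breach
# escalates to an automated VM migration. All values are illustrative.
SPIKE_F = 95.0       # alert threshold, degrees Fahrenheit
SUSTAINED_F = 105.0  # escalation threshold

def increase_airflow(rack_id):
    print(f"Raising CRAC output for rack {rack_id}")

def migrate_vms(rack_id):
    print(f"Evacuating virtual machines from rack {rack_id}")

def evaluate(reading):
    temp, rack = reading["cpu_temp_f"], reading["rack"]
    if temp >= SUSTAINED_F:
        migrate_vms(rack)        # escalated, but still fully automated
    elif temp >= SPIKE_F:
        increase_airflow(rack)   # tactical, programmatic response

evaluate({"rack": "R12", "cpu_temp_f": 97.2})  # -> raises airflow
```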
Accommodate the operator. A DCIM solution that can monitor and manage a wide range of assets — but only if those assets have been built by the same vendor that built the DCIM solution — is a non-starter. The days of a monolithic, single-vendor infrastructure are long past. In fact, just the opposite is true: The whole notion of the “data center” itself is becoming more and more fluid. If the data center is where an organization runs its mission critical applications and manages the delivery of the user experience, then parts of that data center may be in the cloud. Parts of that data center may reside in physically non-contiguous locations. And decisions about future data center elements may be governed as much by time-to-service delivery as physical location.
An integrated DCIM solution must accommodate a wide range of systems, tools, protocols and standards. It needs to be able to pick up alerts from different assets in the data center and send them to the appropriate authority (via email, SMS or whatever mechanism is preferred by the enterprise). All the elements in the infrastructure need to expose their APIs so that the management tools can understand and interact with them. This gives data center managers the flexibility they need to expand in the ways that are best for their business (flexibility that vendor lock-in never allows).
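A minimal sketch of that severity-based routing might look like the following; the channel table, addresses and send functions are hypothetical stand-ins for whatever mechanism the enterprise prefers:

```python
# Sketch of severity-based alert routing (email vs. SMS). The routing
# table and send functions are illustrative placeholders.
def send_email(addr, msg):
    print(f"EMAIL to {addr}: {msg}")

def send_sms(number, msg):
    print(f"SMS to {number}: {msg}")

ROUTES = {
    "info":     [("email", "noc@example.com")],
    "critical": [("email", "oncall@example.com"), ("sms", "+15550100")],
}

def dispatch(severity, message):
    for channel, target in ROUTES.get(severity, []):
        (send_email if channel == "email" else send_sms)(target, message)

dispatch("critical", "Generator fuel below 10% in DC-East")
```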
Control and Automate. Today’s data centers are enormously complex. Some management issues need human oversight; others do not. A truly integrated DCIM solution can help you manage your resources so that issues that do require human intervention are flagged and escalated accordingly. The solution needs to be able to contact the person with the right skills, the right authority and the right access. It needs to be able to alert that person in a manner that is in keeping both with the severity of the issue and the policies and procedures of the organization itself.
For those issues that do not require human intervention, the DCIM must be able to handle them programmatically through various workflow automations. This enables you to focus your (highly intelligent, creative and skilled) human resources on the strategic management tasks that can enhance business productivity, the end-user experience or some other area that matters more to the enterprise.
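Putting those two points together, a triage loop can decide which path an issue takes. The sketch below assumes an invented runbook of known issues with automated remediations and a simple on-call table; everything without a runbook entry is escalated to a person:

```python
# Minimal triage sketch: issues with an automated remediation run
# without human involvement; everything else is escalated to the
# matching on-call engineer. All names here are illustrative.
RUNBOOK = {
    "disk_full": lambda issue: print(f"Auto-purging logs on {issue['host']}"),
}

ONCALL = {"network": "alice", "power": "bob", "default": "noc-duty"}

def triage(issue):
    if issue["type"] in RUNBOOK:
        RUNBOOK[issue["type"]](issue)          # handled programmatically
    else:
        owner = ONCALL.get(issue["category"], ONCALL["default"])
        print(f"Escalating {issue['type']} to {owner}")

triage({"type": "disk_full", "host": "db-04", "category": "storage"})
triage({"type": "ups_fault", "host": "pdu-2", "category": "power"})
```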
Manage inventory centrally. Asset management is a major pain point in the data center, but a truly integrated DCIM solution can eliminate this pain through an automated asset discovery engine.
Such an engine would provide capabilities to crawl the data center infrastructure and discover all the devices and services involved — then feed those discoveries into a centralized repository such as a configuration management database (CMDB). Such a database would not be a mere manifest of detected devices, systems and services, though; for this database to be truly useful, it must enable data center managers to understand the relationships between the devices, systems and services. Thus, if a data center manager were planning a project to swap out a row of batteries, for example, the CMDB could let the manager know precisely which servers this row of batteries is backing up as well as precisely which mission-critical applications and services are running on those servers.
The practical impact of any asset change could be readily seen if this kind of DCIM were in place. It’s a hyperconnected world in the data center, which is why we need a truly integrated DCIM tool to handle it.
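To make that battery-swap example concrete, the sketch below models the CMDB as a dependency graph and walks it to find everything downstream of an asset; the topology is invented for illustration:

```python
# Sketch of the battery-swap impact query: the CMDB as a dependency
# graph, walked to find all downstream assets. Topology is hypothetical.
DEPENDS_ON = {
    "battery-row-7": ["server-21", "server-22"],
    "server-21": ["crm-app"],
    "server-22": ["billing-app", "crm-app"],
}

def impacted(asset, seen=None):
    """Return every device, system and service downstream of asset."""
    seen = seen or set()
    for child in DEPENDS_ON.get(asset, []):
        if child not in seen:
            seen.add(child)
            impacted(child, seen)
    return seen

print(sorted(impacted("battery-row-7")))
# -> ['billing-app', 'crm-app', 'server-21', 'server-22']
```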
The Utility of Metrics
Finally, you’ll note that I have not mentioned any of the metrics we usually discuss when talking about data center management. Historically, many people have described data center management in terms of total cost of ownership (TCO), power usage effectiveness (PUE), data center infrastructure efficiency (DCIE) and other metrics. These are important metrics insofar as they can help a data center manager monitor and understand the data center from an environmental perspective. The green IT initiative is important, and failure to monitor with an eye toward the data center’s carbon footprint will have a significantly negative impact on both the company’s tax bill and public image.
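For reference, these metrics are simple ratios. The sketch below works through PUE and DCIE using their standard definitions (PUE is total facility power divided by IT equipment power; DCIE is the reciprocal, expressed as a percentage); the power figures are hypothetical:

```python
# Worked example of PUE and DCIE using their standard definitions.
# The sample power figures are hypothetical.
total_facility_kw = 1500.0  # IT load plus cooling, distribution, lighting
it_equipment_kw = 1000.0

pue = total_facility_kw / it_equipment_kw           # 1.50
dcie = it_equipment_kw / total_facility_kw * 100    # 66.7 percent

print(f"PUE = {pue:.2f}, DCIE = {dcie:.1f}%")
```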
However, these metrics provide only a fragmented view of overall data center performance. Data center infrastructure management needs to transcend that fragmented view. The data center is the nerve center of business today, and it needs to be managed with the organization’s service delivery goals in mind. There are human, resource and environmental components that we need to balance and manage effectively. Only by taking an approach that unifies, integrates and consolidates all these elements can we manage the entire data center in a manner consistent with our broader service delivery goals.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 7:06p |
Open-IX Accepting Applications for Data Center Members  Open-IX is now accepting applications for membership from data centers and exchange point operators.
Open-IX is driving a significant paradigm shift in the interconnection and Internet exchange world, importing the European interconnection model stateside. It is creating a network of member-governed Internet exchange points housed in multiple neutral data center facilities, in contrast to the privatized approach that has been dominant in North America. After several announcements by European exchanges, Open-IX is now officially accepting applications for North American Data Center and Internet Exchange Point (IXP) Certifications.
“After opening up the application process for the NoVA market, demand for other regions and interest in achieving OIX certification has been tremendous,” said Martin Hannigan, co-founder and Treasurer of Open-IX Association.
To become an OIX-certified data center or IXP, a company must submit an application that includes a deposit and processing fee. The OIX executive board reviews each application, and once an application is accepted, the data center or IXP is required to complete a compliance report, which verifies that the member adheres to the OIX technical requirements.
“In addition to applying for OIX certification, industry members who are interested in Internet Exchange Points and data center interconnection and infrastructure, interconnect engineering, or research can also join the OIX as members,” said Hannigan. “Membership is $50 per year and provides the opportunity to have input on shaping the operations, engineering and future of interconnecting networks.”
OIX was formed in 2013 as a non-profit, neutral body of volunteers from the Internet community with the common goal of creating standards for Internet connectivity, resiliency, interconnection, security and cost. European exchanges have come flooding into North America, and OIX standards are already being adopted by LINX, AMS-IX, Digital Realty, DuPont Fabros, RagingWire, and many other data center and IX providers.
The Summer of Open-IX
There were several high profile launches on the part of data center providers following the group’s formation, including EvoSwitch and Digital Realty jumping in and launching initiatives in August.
Also following the momentum of Open-IX, the London Internet Exchange (LINX) revealed plans to open an Exchange at three data centers in northern Virginia, which began operations today. “There has been a lot of attention worldwide for what we’re building here,” said LINX CEO John Souter. “Our members called on LINX to build neutral, member-governed and distributed Internet Exchanges in the most important US markets where IP traffic is exchanged, and that is precisely what we are doing here in northern Virginia”.
Meanwhile, the Amsterdam Internet Exchange (AMS-IX) opened an Internet exchange in New York in November, with DuPont Fabros, Sabey Data Centers and 325 Hudson. Frankfurt-based DE-CIX also entered New York.
“We are excited to see the official launch of the Open IX initiative come to fruition,” said John Sarkis, Vice President of Carrier and Connectivity Operations for Digital Realty. “This is a testament to the hard work and dedication of many members of the Internet community and solidifies a revolutionary change for the industry.” | 7:59p |
Server Farm Realty Adds Second Chicago Data Center Additional data center inventory is coming to the suburban Chicago market. Server Farm Realty has acquired a second facility at 800 and 810 Jorie Boulevard in Oak Brook, Illinois, a western suburb of Chicago. The acquisition greatly strengthens Server Farm Realty’s position in the Chicagoland market, with the new facility complementing its downtown data center at 840 South Canal.
The Oak Brook facility is 66 percent leased, and it offers prospective customers direct access to numerous existing fiber carriers.
“In addition to being one of the best connected facilities in the Chicago market thanks to its expansive ecosystem of existing technology tenants, the Oak Brook data center features close proximity to robust underground telecoms fiber, as well as optimal power and electrical service,” said Avner Papouchado, President of Server Farm Realty. “Backed by SFR’s deep technical capabilities, the region’s dense fiber optic network, and solid infrastructure, the facility provides tenants with the expertise, connectivity and framework necessary to succeed in today’s saturated technology market.”
The buildings at 800 and 810 Jorie were formerly known as the Oak Brook Technology Center, a complex of two mid-rise office buildings totaling 193,688 square feet of rentable space on 13.49 acres of land.
A Bridge to the Future
The complex was originally built as the corporate headquarters of the Chicago Bridge & Iron Company (CB&I). The two-story (64,295 square foot) building at 810 and the three-story (129,393 square foot) building at 800 Jorie Blvd. were converted into multi-tenant office buildings after CB&I relocated to the southwest suburb of Plainfield, Illinois.
Server Farm Realty opened its first Chicago facility at 840 South Canal Street with anchor tenant Peerless Network in June of last year. 840 South Canal is a 450,000 square foot facility that previously housed a General Electric factory and served as Northern Trust’s data center and operations hub. Server Farm Realty says it has invested more than $220 million to redevelop and transform the building into a Tier III data center.
“Securing 800 and 810 Jorie Blvd. was strategic in enabling Server Farm Realty to expand its reach throughout the Chicagoland market from our downtown location at 840 South Canal,” adds Mitch Kralis, Vice President of Real Estate for Server Farm Realty. “The acquisition also allows our organization to simultaneously fortify its portfolio with in-demand, high-quality services to fulfill the growing technology needs of our existing and new customers alike.”
Server Farm owns and operates an expanding portfolio of data center properties spanning eight facilities with more than 1.5 million square feet.
The sale of the Oak Brook facility was brokered by Transwestern Managing Directors Gary Nussbaum and Thomas Gorman, as well as Senior Associate David Matheis.
The colocation market in Chicago appears to be strong going into the new year. This comes on the heels of a Digital Realty expansion; Digital Realty bought a campus in Franklin Park back in 2012. The suburban Chicago market is already home to a major data center for Microsoft, as well as facilities for Equinix (EQIX), DuPont Fabros Technology (DFT), Ascent Corp. and Latisys.
Other recent expansions in the Chicago market include NetSource, which is targeting proximity trading, and Continuum, which is planning a new data center. Telx also recently added capacity at 350 East Cermak.