Data Center Knowledge | News and analysis for the data center industry
Tuesday, October 14th, 2014
11:30a |
AWS Cloud Management Startup 2nd Watch Raises $10M, Names New CEO 2nd Watch, a provider of cloud management software for Amazon Web Services, has extended its 2013 Series C funding round by $10 million and announced a new CEO. Tech veteran Doug Schneider replaces founder Kris Bliesner, who will continue to lead the company’s technical direction as CTO.
Top Tier Capital Partners led the new investment, with participation from existing investors Madrona Venture Group and Columbia Capital. 2nd Watch has raised more than $37 million in venture capital since 2010.
The company’s cloud-native platform helps large organizations make the transition to the cloud and manage mission-critical applications, computing and data there. The company says it has effectively moved more than 2 megawatts of enterprise data center capacity into the AWS cloud across more than 300 migrations.
The new funds will expand its sales, marketing and product development efforts and finance its geographic expansion ambitions.
The appointment of a new CEO and the additional funding may indicate that the company has reached a certain level of maturity. Technology-minded founder-CEOs often step aside to make room for experienced, business-growth-minded executives once their companies reach a certain point in their development.
2nd Watch says it has seen a major increase in bookings recently, and the company is part of a select group of AWS Premier Consulting Partners, which puts it in the company of enterprise IT services giants such as Accenture, Capgemini, Infosys and Wipro.
“Large companies moving beyond initial cloud infrastructure deployments to far more sizable commitments need partners with experience in strategically approaching the shift and then managing these projects,” Schneider said.
He was previously president of hosting provider Verio, which he helped take public prior to its acquisition by NTT Communications for more than $5 billion. His twenty-year record includes leading several companies toward acquisition, starting with Colorado Internet Services (acquired by Verio), followed by the CEO position at AllCall (sold to Nextel). All of his tenures led to acquisitions, save for the most recent, as acting executive vice president and general manager at Melbourne IT.
“He’s a skilled leader with considerable business-building acumen who has founded, guided and grown several successful enterprise companies over the past two decades,” Bliesner said about his company’s new CEO. “2nd Watch and its customers are in great hands with Doug at the helm.” | 3:30p |
Big Data Remakes the Data Center Mike Wronski is Vice President of Systems Engineering and Customer Success with StrataCloud
Your company has spent the past year or two investigating how the business and IT can support Big Data initiatives across key areas such as predictive demand, customer service, and R&D. But Big Data has a big role to play in remaking your data center, too.
It’s high time for the data-driven data center. That may sound like an oxymoron, but the fact is, data centers need an overhaul.
Despite cloud computing, many data centers are snake pits of complexity. A survey by Symantec Corporation found that pervasive use of cloud computing, virtualization and mobile technologies may diminish investments in blade servers and other modernization technologies meant to simplify the data center.
Growth in data volumes and business applications combined with high user expectations for speed and uptime place added pressures on data center managers. Tackling these issues to make data centers more responsive and efficient begins with a more precise, data-driven approach to IT operations management, one founded in Big Data technologies and strategies.
These four tenets can help simplify operations, save money and improve performance and user experience for the business:
One: storage of real-time machine generated data
IT operations data is produced at increasingly high rates within the data center from all corners of the business and the Web. This real-time machine data includes performance data from components such as servers, storage and networking equipment as well as application response times, click to action, and load times.
Access to data from these different platform components is now easier thanks to modern APIs, virtualization, and software-defined infrastructure. Data center operators have ample opportunity to make use of these diverse data types through Big Data analytics tools and thereby gain powerful insight into operations.
A massive increase in raw operational data demands new technologies and strategies for storing it. Even though hard disk storage costs have declined over the years, those cost declines haven’t kept pace with the growth of data production that some IT organizations are experiencing.
Fortunately, Big Data platforms specializing in compression and deduplication are becoming more available to help with the cost and management challenge. A large portion of the data available is time-series performance data (metric, measurement, and timestamp). To obtain highly accurate analytics, the original raw data must be stored for longer periods of time and referenced frequently, making general-purpose databases and storage schemes a poor choice for managing it.
To further reduce storage costs, IT organizations should choose a platform with a storage model that scales out horizontally across many smaller nodes. This balances queries across nodes, reducing response time and enabling intelligent analysis of the raw data.
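To make the scale-out idea concrete, here is a minimal sketch in plain Python of sharding raw time-series samples across storage nodes by hashing the metric key. It is not modeled on any particular product; the node count, schema, and metric names are all hypothetical.

```python
import hashlib
import time
from collections import defaultdict

# Hypothetical cluster of storage nodes. In a real platform these would be
# separate machines or processes, not in-memory dictionaries.
NUM_NODES = 4
nodes = [defaultdict(list) for _ in range(NUM_NODES)]

def node_for(metric_key):
    """Pick the owning storage node deterministically from the metric key."""
    digest = hashlib.md5(metric_key.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_NODES

def write_sample(metric_key, value, ts=None):
    """Append a raw (timestamp, value) pair to the node that owns this metric."""
    ts = ts if ts is not None else time.time()
    nodes[node_for(metric_key)][metric_key].append((ts, value))

def read_series(metric_key):
    """Read the raw series back from the owning node only."""
    return nodes[node_for(metric_key)][metric_key]

# Raw samples from different components land on (possibly) different nodes,
# so reads and analytics can be spread across the cluster.
write_sample("host01.cpu.util", 0.62)
write_sample("array02.read.latency_ms", 4.1)
print(read_series("host01.cpu.util"))
```

Keeping the raw (timestamp, value) pairs, rather than pre-averaged rollups, is what preserves the accuracy the article argues for.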
Two: predictive modeling
Many companies count on accurate forecasting to better execute on business goals, and that advantage doesn’t stop with customer-facing business challenges. Predictive models are also important in the data center and may cover resource utilization, resource demand, failure, and problem prediction. These models surface possible issues before they become real problems and play a critical role in procurement and capacity planning.
However, providing good models means having sufficient data on which to base the models. Since storage of granular data is challenging, a shortcut is to water down the data into averages, instead of using the original raw data. Yet doing this usually results in predictions with a high margin of error.
To provide more relevant predictive modeling, the data behind the models must be collected frequently and from across the application stack. This enables IT organizations to accurately predict when applications will have issues and to optimize resources on the fly for both demand and cost.
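As a small illustration of why frequently collected raw samples matter, the sketch below fits a simple linear trend to hypothetical CPU-utilization readings and projects when an assumed 80 percent threshold would be crossed. A real predictive model would be far more sophisticated; the data and threshold here are made up.

```python
def linear_fit(points):
    """Ordinary least-squares fit over (x, y) points; returns (slope, intercept)."""
    n = len(points)
    sum_x = sum(x for x, _ in points)
    sum_y = sum(y for _, y in points)
    sum_xx = sum(x * x for x, _ in points)
    sum_xy = sum(x * y for x, y in points)
    slope = (n * sum_xy - sum_x * sum_y) / (n * sum_xx - sum_x ** 2)
    intercept = (sum_y - slope * sum_x) / n
    return slope, intercept

# Hypothetical raw samples: (hour, CPU utilization as a fraction).
samples = [(0, 0.41), (1, 0.44), (2, 0.46), (3, 0.50), (4, 0.53), (5, 0.57)]
slope, intercept = linear_fit(samples)

THRESHOLD = 0.80  # assumed capacity-planning trigger
hours_until_threshold = (THRESHOLD - intercept) / slope
print(f"Projected to hit {THRESHOLD:.0%} utilization in ~{hours_until_threshold:.1f} hours")
```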
Three: cross-stack visualization of business applications
IT organizations still typically operate in silos such as virtualization, compute, storage, networking and applications. While each organization generates plenty of usable data, often using their own preferred tools, real value comes from merging the data in context of the applications. Therefore, cross-stack visualization requires integrating data from all hardware and software involved in running the applications.
Consider the exercise of judging the capacity needed to add 500 new virtual machines. Increases in storage, network, and CPU are all required, but without correlating them you may miss an important point: the storage layer also consumes network capacity, so network capacity must grow substantially more than the VM traffic alone suggests. Without cross-stack analytics giving the full picture, operations teams can wind up chasing contention problems at the network layer. With cross-stack visibility, it’s possible to quickly eliminate areas that are not the source of the problem, which usually results in faster time to resolution.
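A back-of-envelope version of that 500-VM exercise, with every figure below purely illustrative rather than taken from the article, shows how ignoring storage traffic understates the network requirement:

```python
# Purely illustrative figures for the 500-VM capacity exercise; substitute
# measured values from your own cross-stack data.
new_vms = 500
app_traffic_per_vm_mbps = 25        # assumed front-end/application traffic per VM
storage_traffic_per_vm_mbps = 40    # assumed IP-storage (iSCSI/NFS) traffic per VM
replication_overhead = 1.5          # assumed factor for storage replication/rebuilds

app_traffic = new_vms * app_traffic_per_vm_mbps
storage_traffic = new_vms * storage_traffic_per_vm_mbps * replication_overhead

print(f"Application traffic alone: {app_traffic / 1000:.1f} Gbps")
print(f"With storage over the network: {(app_traffic + storage_traffic) / 1000:.1f} Gbps")
```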
To get started on cross-stack visibility, encourage teams to share and store data centrally. Groups can continue to use their domain-specific management tools but allow those tools to push data into a central Big Data repository.
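One lightweight way to start, sketched below with hypothetical tool and application names, is to have each domain tool normalize its samples into a shared schema before pushing them to the central repository, so cross-stack queries can simply filter on the application tag.

```python
import time

# A stand-in for the central Big Data repository; in practice this would be
# a message bus or an ingestion API, not a Python list.
central_repo = []

def push(source_tool, application, metric, value, ts=None):
    """Normalize a sample from any domain tool into one shared schema."""
    central_repo.append({
        "ts": ts if ts is not None else time.time(),
        "source": source_tool,       # the silo's own monitoring tool
        "application": application,  # the business application the metric belongs to
        "metric": metric,
        "value": value,
    })

# Hypothetical samples from three silos, all tagged with the same application.
push("hypervisor-monitor", "order-entry", "vm.cpu.util", 0.72)
push("storage-monitor", "order-entry", "lun.latency_ms", 6.3)
push("network-monitor", "order-entry", "port.util", 0.55)

# A cross-stack view is now a simple filter on the application tag.
order_entry = [s for s in central_repo if s["application"] == "order-entry"]
print(f"{len(order_entry)} cross-stack samples for order-entry")
```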
Four: distributed in-memory analytics
The value of real-time intelligence is clear, but getting there is not easy due to the volume of data streaming into the organization. Traditionally, IT has performed analysis in batch mode, yet that’s not viable with today’s virtualized data center where decisions need to be made at any point in time.
Distributed in-memory analytics entails keeping relevant portions of data in-memory and performing the analytics as new data arrives or is aged off. The concept is similar to distributed storage, and helps improve the efficiency and speed of analytics programs by splitting large tasks into subtasks for calculation, which can be combined with other results later. The in-memory component is just as important. When data is ready and available in-memory, it can be acted upon immediately. Alternatively, data is available on disk (slow storage) and thus any operation or calculation is dependent on the time to load the data into memory. With large data sets, the load time can cause a material impact on the time to make calculations. When near-real-time is the goal, in-memory analytics is the only option.
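A toy version of that split-then-combine pattern, using Python’s multiprocessing module on data already held in memory (the workload and chunk size are arbitrary), looks like this:

```python
from multiprocessing import Pool

def partial_stats(chunk):
    """Compute a partial aggregate over one in-memory chunk of samples."""
    return sum(chunk), len(chunk), max(chunk)

def combine(partials):
    """Combine the per-chunk results into a final answer."""
    total = sum(s for s, _, _ in partials)
    count = sum(n for _, n, _ in partials)
    peak = max(m for _, _, m in partials)
    return total / count, peak

if __name__ == "__main__":
    # Hypothetical latency samples already resident in memory.
    samples = [float(i % 97) / 10.0 for i in range(1_000_000)]
    chunk_size = 250_000
    chunks = [samples[i:i + chunk_size] for i in range(0, len(samples), chunk_size)]

    # Each subtask works on its own chunk; results are combined afterward.
    with Pool(processes=4) as pool:
        partials = pool.map(partial_stats, chunks)

    mean_latency, peak_latency = combine(partials)
    print(f"mean={mean_latency:.2f} ms, peak={peak_latency:.1f} ms")
```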
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 4:00p |
Five Essential DCIM Use Cases to Unite IT and Facilities The modern data center faces so many new kinds of demands that it is hard to keep up with the expansion. New workloads, applications, and even devices are connecting and requesting data center resources.
It’s important to understand that the data center is a shared resource that needs a shared management approach. Yet IT and facilities often rely on disjointed tools, data and processes, which fragments rather than unites data center management. This not only hampers day-to-day operations but also disrupts ongoing planning and optimization.
Data Center Infrastructure Management (DCIM) helps IT and facilities escape their silos and take a more integrated approach. It transcends disparate protocols, multiple data points and a wide range of performance metrics to unify and simplify every facet of the data center—starting with the physical infrastructure.
To help organizations plan their DCIM roadmap, this paper from CA explores five essential DCIM use cases and outcomes. From better asset utilization and faster provisioning to maintaining availability, greater efficiency and smarter capacity planning, we look at how DCIM can close the operational and optimization gap for both IT and facilities.
These use cases include:
- Asset Management – Can inventory, maintenance, warranty and financial information be recorded for both IT and facility hardware assets?
- Provisioning – Can the solution provide device auto-placement as well as connecting power, network, and storage ports?
- Availability – Does the solution enable integration with other IT management systems such as incident management, service desk, workload automation, and change management?
- Energy Efficiency – Can metrics such as PUE, DCiE, SI-EER and others be calculated out of the box, and does the solution also allow you to easily create custom metrics? (The sketch after this list shows how the two most common of these metrics are computed.)
- Capacity Planning – Does the solution provide advanced predictive analysis to better understand capacity utilization and to reallocate and consolidate resources as needed?
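For reference, the two most common of those efficiency metrics reduce to simple ratios of facility power to IT power. The meter readings in this sketch are made up:

```python
# Made-up meter readings in kilowatts.
total_facility_power_kw = 1_800   # everything the facility draws: IT, cooling, losses
it_equipment_power_kw = 1_200     # servers, storage, and network gear only

pue = total_facility_power_kw / it_equipment_power_kw         # Power Usage Effectiveness
dcie = it_equipment_power_kw / total_facility_power_kw * 100  # DCiE, as a percentage

print(f"PUE  = {pue:.2f}")    # 1.50 here; lower is better, 1.0 is the ideal
print(f"DCiE = {dcie:.0f}%")  # 67% here; higher is better
```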
As 451 Research states: “The best-run data centers are those where managers have accurate and meaningful information about their data center’s assets, resource use and operational status—from the lowest level of the facility infrastructure to the higher echelons of the IT stack. This information enables them to plan, forecast and manage; to make decisions based on real-time data; and to use automated systems with confidence.”
Download this whitepaper today to learn how, with the right DCIM solution, organizations can tap into ongoing efficiency and financial savings. Engineers will no longer inadvertently compromise power capacity in ways that may impact revenue generation. And your staff will not unnecessarily recommend investment in additional power and cooling resources, or even new facilities, because they will know whether there is existing untapped capacity.
With a DCIM solution, your entire IT staff can bridge the operational and optimization divide in the data center. | 4:55p |
Lenovo and VMware to Develop, Market Joint Infrastructure Solutions Lenovo and VMware have entered a technology partnership around software-defined data center infrastructure, with plans that include validated cloud infrastructure and storage solutions built from both vendors’ products.
The two will jointly define and market the solutions, which will consist of Lenovo hardware and VMware software. Private and hybrid cloud solutions are on the slate, as well as software-defined storage based on Lenovo’s newly acquired x86 server line and VMware’s Virtual SAN software.
The partnership is a carryover from Lenovo’s $2.3 billion acquisition of IBM’s x86 server business, closed in September. The development relationship between IBM’s System x and VMware goes back 16 years.
Lenovo has also worked with VMware in the past, most recently validating vSphere with Operations Management, a virtualization platform that offers insight into IT capacity and performance, on its enterprise-grade ThinkServer machines. The two have also collaborated on network virtualization gateways and integrated traffic management solutions, as well as virtual desktop services combining Lenovo eXFlash technology with VMware’s Horizon virtual desktop infrastructure platform.
“With the combination of VMware and Lenovo solution, we are empowering organizations with powerful automation, agility and flexibility in their IT infrastructure,” said Raghu Raghuram, executive vice president of the Software-Defined Data Center Division at VMware.
“By leveraging the strengths and respective geographic reach of our teams, we see significant synergy in delivering end-to-end designed-for-cloud infrastructures to our customers and their service providers,” said Adalio Sanchez, senior vice president, Enterprise Systems Group, Lenovo. | 5:44p |
VMware to Launch Cloud Data Center in Germany VMware plans to launch a cloud data center in Germany that will host infrastructure for customers who are either obligated to keep their data within the country’s borders by law or worried their data will be unsafe if hosted outside of the country.
The company announced the plan at its VMworld 2014 Europe conference in Barcelona Tuesday. It expects to bring the data center online in 2015.
VMware unveiled the cloud service, now called vCloud Air (previously vCloud Hybrid Service), last year. The company leases space from colocation providers in various locations for the public cloud side of the service, but its own staff operates the data centers.
vCloud’s promise is seamless integration of customers’ existing VMware environments in their own data centers with the public cloud infrastructure hosted in those facilities: an instant hybrid cloud with a single set of policies across in-house and VMware data centers.
“As we continue to expand VMware vCloud Air into new markets, with more services than ever before, we are only just scratching the surface of what the service will become,” Bill Fathers, executive vice president and general manager of VMware’s Cloud Services business unit, said in a statement.
Location matters, and it matters more in Germany
To compete in an already crowded public cloud market, however, VMware has to expand the physical footprint of its cloud. Companies like Amazon Web Services and Microsoft Azure operate massive data center facilities around the world and have been at it for years.
VMware has vCloud Air data centers in California, Nevada, Texas, Virginia and New Jersey. It also has one location in the U.K. and another one in Japan.
Physical location of cloud infrastructure affects performance of a service for users, depending on how close or far they are from the data center. Companies in some industries, such as financial services and healthcare, have always been compelled by law to keep their data within their countries’ borders.
Physical location of a customer’s data, however, has become a much bigger concern since the Snowden revelations, and Germany is one of the countries where that concern runs strongest.
Both Amazon and Microsoft have made plans to build cloud data centers in Germany in recent months, according to reports. T-Systems has made data sovereignty a central message in promoting its newly completed data center in Biere. | 6:30p |
US Government Faces Cybersecurity Risk Due to Faulty Cloud Contracts 
This article originally appeared at The WHIR
Federal agencies are putting sensitive data at risk, according to a report released to the public on Thursday by the Council of the Inspectors General on Integrity and Efficiency’s (CIGIE) IT Committee. The report selected 77 commercial cloud contracts for review after 19 Offices of Inspector General (OIGs) shared testing results. Based on the OIG reports, there were 348 commercial cloud contracts with a total value of about $12 billion.
Although most commercial cloud contracts included some of the required items, not a single one included all of them. Over three-quarters of the contracts failed to meet FedRAMP standards, which have been required since June 5 of this year. FedRAMP establishes a risk-based approach, including standardized security requirements, for federal agencies adopting and using cloud services.
As more government agencies move services to the cloud under cloud-first initiatives in the US, Australia and the UK, providers able to easily adhere to federal guidelines and mitigate security concerns will have a clear advantage.
“FedRAMP’s purpose is to ensure that cloud-based services have an adequate information security program that addresses the specific characteristics of cloud computing and provides the level of security necessary to protect government information,” according to the CIGIE report. “The failure of the cloud system to address and meet FedRAMP security controls increases the risk that Federal program data may be compromised, intercepted, or lost, which could expose the data to unauthorized parties.”
With recent cybersecurity breaches at huge companies such as JP Morgan, Target, Home Depot, Kmart and Dairy Queen, the public is becoming more aware of the risk of hackers and malware putting their private data in danger.
In addition to putting agencies at a security risk, the faulty contracts may also cause the government to spend more taxpayer money. The CIGIE stated, “Furthermore, because 42 contracts, totaling approximately $317 million, did not include detailed SLAs specifying how a provider’s performance was to be measured, reported, or monitored, the agencies are not able to ensure that CSPs meet adequate service levels, which increases the risk that agencies could misspend or ineffectively use Government funds.”
The report also found that nearly half of the agencies did not have a clear picture of which cloud services were being used. “Without accurate and complete inventories, the agencies involved do not know the extent to which their data reside outside their own information system boundaries and are subject to the inherent risks of cloud systems,” the report stated. The lack of complete inventories was attributed to manual reporting (human error) and to agencies not applying a consistent definition of cloud computing.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/us-government-faces-cybersecurity-risk-due-faulty-cloud-contracts | 7:07p |
SAP and IBM Roll Out SAP Business Suite and HANA on SoftLayer Cloud Enterprise technology giants IBM and SAP have partnered to provide the latter’s business software and in-memory data analytics platform on top of the former’s cloud infrastructure.
SAP’s Business Suite and HANA, its high-performance analytics solution, will now be available as a service offered through the IBM SoftLayer cloud.
The deal gives SAP IBM’s global cloud data center footprint, tripling the German software giant’s capacity to deliver its services in the cloud. IBM gains another valuable enterprise offering and a potential source of new revenue for its growing cloud business.
SAP’s business apps and HANA cloud services are also available and certified on Amazon Web Services, a major competitor to IBM SoftLayer. The IBM cloud is arguably more tuned to big enterprises, however, emphasizing security, transparency and control over where data resides.
SAP and IBM have a longstanding relationship, but until now the partnership mainly covered SAP running on premises on IBM systems or in pre-production cloud environments. The new services are production-ready implementations of the business suite, with real-time analytics added through HANA’s in-memory computing capabilities.
There are also integration benefits. IBM is pushing an open-standards-based approach to its cloud to create a foundation for easier integration of existing technology investments with new workloads.
“Our secure, open, hybrid enterprise cloud platform will enable SAP clients to support new ways to work in an era shaped by big data, mobile and social,” IBM CEO Ginni Rometty said in a statement.
Momentum strong for IBM cloud
IBM’s cloud business has seen strong momentum recently. The company now counts nearly all of the top 50 companies in the Fortune 500 as cloud customers, and industry analysts have ranked it high among other providers in the space.
IDC rated its cloud the top offering in six of the eight major industries covered in a recent study, and it finished in the top three for the other two. Synergy Research Group ranks IBM second behind Microsoft’s cloud business in year-over-year growth, at 80 percent.
IBM’s investment in cloud runs into the tens of billions of dollars. Its cloud initiative really took off following $7 billion in key acquisitions.
The $2 billion acquisition of SoftLayer served as the cornerstone. The company followed the deal with a $1.2 billion investment to expand its SoftLayer footprint this year in several major markets, the latest being Melbourne, Australia.
It has committed $1 billion to IBM Bluemix, its Platform-as-a-Service, and has made significant investments in commercializing Watson, its cognitive computing technology.
So far, revenue growth in cloud has been solid. In Q2 2014, IBM’s cloud revenue was up over 50 percent, with its “as-a-service” business doubling once again. SoftLayer contributed about one point to Global Technology Services (GTS) revenue growth in the same quarter.
For everything delivered as a service, the annual run rate was up nearly 100 percent year over year in the second quarter, to $2.8 billion. Software-as-a-Service offerings grew by nearly 40 percent. | 8:31p |
With DCN, HP Wants to Enable Virtual Multi-Data Center Networks HP introduced a new network virtualization solution in its bid to capture software-defined networking market share. Dubbed Distributed Cloud Networking, it is designed to automatically deploy secure virtual networks across distributed infrastructure.
While virtualized servers and storage are ubiquitous, the network largely remains in its legacy state, and there is a race among vendors old and new to bring it into the modern age. Vendors like HP, Cisco and VMware, as well as startups like Cumulus, Pica8 and Arista, are focusing on virtualizing the data center network to make it as flexible as storage and compute already are today.
SDN in general is still fairly complex. HP’s DCN is for configuring, managing, and optimizing virtual network topologies across many data centers in a variety of cloud configurations. The target audience is service providers and large organizations, with a starting price of $65,585 for a single instance.
DCN aims to simplify the creation and deployment of virtual networks across distributed infrastructure; the company says it cuts the time to deploy secure virtual networks from months to minutes.
Everything is managed from a central location, regardless of whether the environment incorporates private, public or hybrid cloud infrastructure. It also helps with Network Function Virtualization, enabling communications services over a fully automated multi-data center environment through an open architecture.
“Customers are looking for ways to upgrade their networks to better focus on building business and incorporating new technologies that adapt to rapidly changing demands,” said Antonio Neri, senior vice president and general manager, Servers and Networking, HP. “Distributed Cloud Networking allows customers to seamlessly work across their distributed environment, removing the need to manually reconfigure the network and offering a more efficient infrastructure at reduced costs.”
DCN includes:
- HP Virtualized Service Directory: Refines service design and integrates with customer service policies to manage users, compute, and network resources
- HP Distributed Services Controller: The control plane of the data center network, managed and controlled from a centralized location
- HP Distributed Virtual Routing and Switching: Based on Open vSwitch, this serves as a virtual endpoint for network services. It immediately detects changes and triggers the network connectivity an application needs
HP recently split into two companies, one focused on consumer products and the other on the enterprise. | 9:00p |
Super-Sizing Solar Power for Data Centers EAST WINDSOR, N.J. - Traveling east from Princeton, drivers can catch a brief glimpse of the panels, which are hidden by a series of high berms. It’s only when you walk around the edge of these grassy mounds of earth that the massive scale of the solar energy generation system is revealed.
And what a sight it is. The solar farm stretches nearly to the horizon, with blue and gray-green photovoltaic panels blanketing nearly 50 acres of New Jersey countryside. The system provides energy for the nearby QTS Princeton data center campus, with more than 57,000 solar panels generating up to 14.1 megawatts of power. That’s more than enough to supply the daytime energy needs of McGraw-Hill’s electronic publishing operation, currently the sole tenant at the data center.
The QTS Princeton solar array symbolizes a new phase in the use of renewable energy in data centers. Massive arrays can now provide tens of megawatts of solar power for companies that can afford the land and the expense. As a handful of players pursue on-site solar farms, other cloud builders are opting for power purchasing agreements that subsidize new wind farms or tapping landfills for biofuels that can power fuel cells.
Scaling up for renewable energy
The use of solar power in data centers has come a long way since 2005, when AISO built the first fully solar-powered data center. The California hosting firm used 120 photovoltaic panels to provide all the power for a 2,000-square-foot data hall.
Solar power hasn’t been widely used in data centers because a very large installation of photovoltaic solar panels is required to produce even a fraction of the energy required by most data centers. An all-solar facility would either need to stay small or use thousands of solar panels deployed across dozens of acres of land.
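A rough back-of-envelope calculation makes the point. All figures below are assumptions for illustration (a typical circa-2014 panel rating and a generic capacity factor), not numbers from any specific facility:

```python
# All figures are rough assumptions for illustration only.
it_load_kw = 1_000       # a modest 1 MW data center IT load
panel_rating_w = 250     # nameplate rating of a typical circa-2014 panel
capacity_factor = 0.20   # average output over day/night cycles and weather
panel_area_m2 = 1.6      # approximate area of one panel

avg_output_per_panel_w = panel_rating_w * capacity_factor
panels_needed = (it_load_kw * 1000) / avg_output_per_panel_w
# Panel area only; row spacing and access typically multiply land use several-fold.
acres = panels_needed * panel_area_m2 / 4046.86  # square meters per acre

print(f"~{panels_needed:,.0f} panels, roughly {acres:.0f} acres of panel area alone")
```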
Until recently, that type of large-scale solar array seemed impractical. Data center companies, under pressure from environmental groups like Greenpeace, opted instead for on-site arrays in the 100-to-200-kilowatt range that generated enough electricity to power office space within a facility. Companies adopting this approach included Facebook, Emerson and Cisco, among others.
In 2011 McGraw-Hill announced its ambitious plans for its East Windsor data center. The $60 million facility was built to support the data center on McGraw-Hill’s nearby campus, which powers its Standard & Poor’s investment ratings, energy pricing services from Platts, and the Connect learning platform for higher education.
The company cited its focus on sustainable business practices as the motivation for the solar farm. By using the sun to power its data center during the day, McGraw-Hill said it achieved the same environmental impact as eliminating the carbon output for 1,580 homes or nearly 2,500 vehicles.
 An aerial view reveals the full scope of the massive solar array in East Windsor, N.J. (Photo: McGraw-Hill)
As large as it is, the QTS installation isn’t even the largest of the new solar farms. Apple has built two 20-megawatt solar arrays near its campus in Maiden, North Carolina, and plans similar large solar fields to support its new server farm in Reno, Nevada.
Even at cloud scale, solar power is a part-time solution — it’s only available when the sun is shining. Since most data centers are online around the clock, a solar-driven facility will need alternate power. When the sun goes down, QTS switches over to grid power from the local utility, Jersey Central Power & Light.