Data Center Knowledge | News and analysis for the data center industry
Thursday, June 26th, 2014
12:30p
CloudPhysics Raises $15M for Predictive Analytics in Virtual Data Center Management
The data analytics space continues to enjoy a healthy investment environment. Most recently, CloudPhysics, a startup that applies analytics to data center management, picked up a $15 million Series C funding round. The firm's focus is predictive management of virtual data centers.
Virtualized environments are becoming larger, denser and more complex than ever before. More integrations with other systems in the IT stack now exist, involving subtle interdependencies. Managing these environments has become very complex, and applying Big Data analytics to the problem is a novel approach being promoted by a number of startups and big companies, including Google.
CloudPhysics said the money will go toward improving its Software-as-a-Service solution, which leverages Big Data to uncover hidden operational hazards before problems emerge and to identify efficiency improvements in virtualized IT operations management. The company says it helps customers run virtualized data centers the way Google runs its facilities.
The round was led by venture capital firm Jafco Ventures with participation from the company’s existing investors Kleiner Perkins Caufield & Byers and Mayfield Fund. CloudPhysics has raised $27.5 million to date, including its previous $10 million round.
“This is a very exciting time for CloudPhysics as it brings to market a fundamentally new approach to IT operations management,” said Jeb Miller, general partner at Jafco Ventures and CloudPhysics board member. “Through Big Data analytics, CloudPhysics is shifting the focus of infrastructure management from reactive to predictive, enabling more intelligent and cost-effective IT.”
Founded in 2011, CloudPhysics counts VMware co-founder Diane Greene among its backers. The company has attracted customers across a wide swath of industries, including privately held investment firm Thiel Capital, Australian IaaS provider ZettaGrid and global healthcare company Sanofi.
New release tackles storage
Along with the funding, the company announced Storage Analytics with Smart Alerts, an updated offering aimed at preempting storage problems. Smart Alerts is a new feature that shifts the focus of alerting from reactive to predictive.
Storage is considered a major pain point in the virtual data center. CloudPhysics uses a global dataset to examine metadata and trends from thousands of data centers and bakes these learnings into its analytics solution.
The technology evaluates all objects in the virtual data center against certain criteria (latency, duration, outstanding IOs, IOPS, etc.) and triggers Smart Alerts based on thresholds derived dynamically from patterns and trends observed across the company's global dataset. The performance and capacity analytics run across the entire virtual infrastructure.
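How dynamically derived thresholds might work is easy to sketch. The toy example below is a hypothetical illustration, not CloudPhysics' actual method: it derives an alert threshold from fleet-wide latency samples (mean plus three standard deviations) and flags the local datastores that exceed it. All names and numbers are made up.

```python
# Hypothetical sketch of fleet-derived alert thresholds -- one way
# "thresholds derived dynamically from patterns and trends observed
# across a global dataset" could work in principle.
import statistics

def fleet_threshold(fleet_latencies_ms, k=3.0):
    """Derive an alert threshold from latency samples collected
    across many data centers: mean plus k standard deviations."""
    mean = statistics.mean(fleet_latencies_ms)
    stdev = statistics.stdev(fleet_latencies_ms)
    return mean + k * stdev

def smart_alerts(datastores, threshold_ms):
    """Flag any local datastore whose observed latency exceeds
    the dynamically derived threshold."""
    return [name for name, latency in datastores.items()
            if latency > threshold_ms]

# Illustrative numbers only.
fleet = [4.1, 5.0, 3.8, 4.6, 5.2, 4.4, 4.9, 4.2]
threshold = fleet_threshold(fleet)          # about 6 ms here
local = {"datastore-01": 4.3, "datastore-02": 9.7}
print(smart_alerts(local, threshold))       # ['datastore-02']
```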
Storage is a common pain point
“Storage continues to be an extremely painful resource to manage in virtual environments, and the tools available today fail to provide the visibility, much less the intelligence, to help IT fully understand and easily control its most expensive data center resource,” said John Blumenthal, co-founder and CEO of CloudPhysics. “With our new release, we’ve doubled down on storage, providing storage-focused analytics that cut through the layers of complexity and provide answers IT teams need to prevent storage-induced downtime, optimize capacity and keep their virtual data centers healthy and operating with Google-like operational efficiency.”
The new release lets customers immediately diagnose storage performance culprits, reducing the time it takes to find and fix problems, according to the company. It also helps eliminate storage-related downtime, which accounts for 58 percent of all downtime, CloudPhysics claims.
Bernd Harzog, analyst at The Virtualization Practice, notes the storage problem: “IT Ops teams managing virtualized infrastructures face a continuous onslaught of allegations that something is slow and that it is their fault. It turns out in more than 90 percent of the cases where the infrastructure actually is at fault, the culprit is storage.”
Storage Analytics proactively manages datastores with deep health checks, ensuring proper setup and preventing waste. It determines the efficacy of SSD caching on virtual workloads prior to a purchase, and it creates custom storage analytics and reporting for interactive exploration, root cause analysis and ongoing management.
12:30p
The Five Year Plan Your Network Needs
Michael Bushong is the vice president of marketing at Plexxi.
Like everything from starting a business to building a house, true success doesn't happen overnight. It takes years of planning to build a stable structure that will last. The same is true for your network.
Keeping up with the growing demands of today's overloaded data centers requires tough conditioning to keep your network in its best shape. Cisco's 2013 Global Cloud Index report suggests that data center traffic will triple by 2017, and that 76 percent of that traffic will be server-to-server traffic within the data center. With this in mind, many networks are already behind.
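To put that forecast in perspective: assuming the tripling happens over the report's five-year window (2012 to 2017, an assumption about the exact span), the implied compound annual growth rate is roughly 25 percent, as this quick check shows.

```python
# Back-of-the-envelope check on the Cisco forecast cited above,
# assuming "triple by 2017" means 3x growth over five years.
growth_factor = 3.0
years = 5
cagr = growth_factor ** (1.0 / years) - 1
print("Implied annual traffic growth: {:.1%}".format(cagr))
# Implied annual traffic growth: 24.6%
```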
Revamping a data center network requires IT decision makers to step back, see the long-term potential and prepare for the growth and obstacles along the way. With one-, three- and five-year mile markers, consider this five-year plan, which every network team should apply to make sure its network can grow with demand in a linear fashion.
Year One: build a foundation for capacity
As companies build out their data centers, Year One is typically marked by an acceleration of capacity growth. The drivers of capacity growth have been well-documented: video and rich media driving higher volumes of traffic, mobility leading to an explosion in the number of devices and virtualization enabling higher compute utilization within the data center. Whatever the driver for a business, the result is an increasingly steep capacity growth ramp.
Given the predicted growth rate, the data center architect must add capacity across all resources. But more capacity requires more money, which can be limiting and is usually pegged to the previous year's budget. Companies that need to add capacity to meet this demand must therefore contend with the Year One problem year after year, and the industry's continued expansion only raises the bar for an efficient, capable data center.
Although this problem is recurring, a foundation that can adjust to significant capacity changes will make it easier to address each time.
Year Three: assess and limit your operational expenses
After establishing a foundation that can adapt to increased capacity demand, the next step, in Year Three, is to determine how to handle looming operational expenses. The corporate gymnastics required to solve the recurring capacity problem of Year One leave IT leaders facing an even more daunting challenge in Year Three: even if IT vendors gave equipment away at no cost, many companies could not use it for lack of operational budget to manage the devices. For some companies, this issue has already taken root. And while the situation may be uncommon, it speaks to the crippling effects of IT sprawl when operating expenses are left unmitigated.
Operational expense comes in various forms, including environmental costs, management, and integration and orchestration. It is well established that operating expense outpaces capital expense over the life of infrastructure. Several approaches are designed to reduce operational cost:
- Fabric-based data center solutions
- Software-defined Networking (SDN)
- Development and Operations (DevOps)
The combination of options is unique to each network infrastructure. Don’t wait until the problem arises in Year Three to address it. Be proactive and solve this issue before it becomes one.
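To see why operational expense deserves this attention, consider a quick illustration with hypothetical numbers: even a modest recurring operational cost overtakes a one-time capital outlay within a few years.

```python
# Hypothetical numbers only: cumulative opex overtakes a one-time
# capex purchase partway through the infrastructure's life.
capex = 100_000          # one-time hardware purchase (hypothetical)
opex_per_year = 35_000   # power, cooling, management, integration (hypothetical)

for year in range(1, 6):
    cumulative_opex = opex_per_year * year
    print("Year {}: capex={:>8,}  cumulative opex={:>8,}".format(
        year, capex, cumulative_opex))
# By Year 3, cumulative opex (105,000) already exceeds the capex.
```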
Year Five: scale your network
Now that you have a foundation to move past the recurring capacity problem of Year One, and enough leftover manpower and budget to address the longer-term operating costs from Year Three, you've reached Year Five. But imagine successfully navigating that gauntlet only to find out it was all for naught. Further out on the horizon, beyond the point where most companies even consider real strategies or budgets, the Year Five problem looms. And that far out, the stakes are highest.
Data center traffic is not just growing; it's exploding. And architectural scaling goes well beyond merely providing capacity. If buyers aren't certain that an architecture will ultimately prove scalable for large networks, they run the risk of outgrowing their chosen path. Such an outcome would be disastrous, as future plans could require wholesale replacement of some or all existing gear.
Success in Year Five is contingent on the building blocks of Year One and Year Three, so be aware of this when making decisions with Year Five in mind. While some of the capabilities (or even platforms) might not be necessary during Year One and Year Three, any plan ought to consider long-term growth trajectories. Recognize that growth rates are only accelerating. The meaningful question for architects everywhere is how to transition from today's reality to tomorrow's promise.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
2:00p
Why Real-Time Data Center Intelligence Pays Off
How do you control something you can't monitor and don't see? Modern data center requirements have pushed well beyond that question: real-time monitoring is critical not only for the health of your infrastructure, but for the success of your business model.
Capacity planning is still one of the most challenging aspects of building a data center due to the complexity and number of variables. Proactive capacity management ensures optimal availability of four critical data center resources: rack space, power, cooling and network connectivity. All four of these must be in balance for the data center to function most efficiently in terms of operations, resources and associated costs. Putting in place a holistic capacity plan prior to building a data center is a best practice that goes far to ensure optimal operations.
IT and facility managers face a host of infrastructure challenges, with capacity issues at the top of the list:
- Stranded/lost capacity/fragmentation
- Running out of data center resources
- Finding optimal space for critical business assets
- Requirement for CapEx spending to address capacity issues
In this whitepaper from Panduit, we learn that the only real way to control these challenges is to have real-time, proactive visibility into all critical data center operations and resources. Without data center infrastructure management (DCIM) solutions, many data center managers cannot access data center intelligence to understand and improve capacity utilization, to determine if there is stranded capacity, or to proactively provision capacity when adding floor space or building a new data center.
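A small sketch makes the stranded-capacity problem concrete. The numbers below are hypothetical, not taken from the whitepaper: the room exhausts its power budget before it exhausts its rack space, leaving rack units that cannot be used.

```python
# Minimal sketch of stranded capacity, with hypothetical numbers:
# power runs out before rack space does.
rack_units_total = 42 * 20   # 20 racks of 42U each (hypothetical)
power_budget_kw = 160.0      # room-level power budget (hypothetical)

server_u = 2                 # each server occupies 2U
server_kw = 0.5              # and draws 0.5 kW

servers_by_space = rack_units_total // server_u       # 420 fit physically
servers_by_power = int(power_budget_kw / server_kw)   # 320 fit electrically

deployable = min(servers_by_space, servers_by_power)
stranded_u = rack_units_total - deployable * server_u
print("Deployable servers:", deployable)   # 320 -- power is the constraint
print("Stranded rack units:", stranded_u)  # 200U of space with no power to use it
```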
Download this whitepaper today to learn how data center capacity management helps optimize four areas at the same time:
- Rack space
- Network Connectivity
- Power
- Cooling
Using the Panduit SmartZone solution, data center administrators can address a company's power and energy usage challenges, capacity constraints and environmental issues, gaining the tools and information needed to make intelligent decisions about its data center operations.
Remember, as your business and IT needs evolve, it will be critical to have the right tools in place to help your organization stay agile and extensible.
4:11p
Dell Rolls Out Mid-Tier Storage Arrays, Accelerated Flash Appliances
Dell announced a new series of full-featured enterprise data storage arrays, a bolstering of its software-defined storage ecosystem and plans to broaden its portfolio with new converged server and storage appliances. The announcements were made at the U.S. Dell Forum event in Miami earlier this week.
Advanced features at lower cost
The new Dell Storage SC4000 Series arrays are based on Dell’s SC8000 platform and feature dual redundant controllers, 24 internal drives and eight ports of 8Gb Fibre Channel network access. The SC4020, the first entry in the SC4000 series, is a 24-drive 2U array that can scale up to 413 TB of raw capacity. Designed to provide the same advanced capabilities as enterprise-class arrays, the new series is able to support Fibre Channel and iSCSI connectivity.
Looking to be a blend of price and performance, the new array series is engineered to offer what some of the larger SANs do, but at a mid-sized SAN entry point. While it is an independent offering, it leverages technology from Dell’s Compellent and EqualLogic product lines. The SC4000 series features the same intelligent data placement capability as the Compellent SC8000, as well as write-optimized flash (SLC) and read-optimized flash (MLC) drives.
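Intelligent data placement comes down to matching block activity to the right tier of media. The toy policy below is a hypothetical illustration of the idea, not Dell's actual placement algorithm; the thresholds and block names are made up.

```python
# Toy tiering policy: hot writes to SLC flash, hot reads to MLC
# flash, everything else to spinning disk. Hypothetical thresholds.
def place_block(writes_per_hour, reads_per_hour):
    if writes_per_hour > 100:
        return "SLC (write-optimized flash)"
    if reads_per_hour > 100:
        return "MLC (read-optimized flash)"
    return "spinning disk"

workloads = {"db-log": (500, 50), "catalog": (5, 900), "archive": (0, 2)}
for block, (writes, reads) in workloads.items():
    print(block, "->", place_block(writes, reads))
# db-log -> SLC, catalog -> MLC, archive -> spinning disk
```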
“Dell has excelled historically as an innovator that brings full-featured storage capabilities to lower price points, helping the vast majority of storage customers to benefit from advanced features previously only available to the largest storage deployments,” said Arun Taneja, founder and president of market research firm Taneja Group. “With the Dell Storage SC4000 Series, Dell brings all of these capabilities, such as intelligent tiering, thin provisioning and snapshots, to a new breed of customers.”
The Dell Storage SC4000 Series will be available worldwide during the third quarter of 2014.
Workload-specific enterprise appliances
Dell also announced several appliances and integrated systems that target specific workloads and application optimization opportunities. They build on recent joint partner solutions and reference architectures with Oracle, Red Hat, Microsoft and SAP, including SAP HANA.
In a new agreement with Nutanix, Dell plans to offer customers the Dell XC Series of Web-scale Converged Appliances, which combine compute, storage and networking into a single offering, powered by Nutanix software. Dell's approach to software-defined storage is to integrate with open source platforms and leading hypervisor vendors to provide a variety of solutions that address various business requirements.
The company also introduced the Dell Acceleration Appliance for databases, a pre-built, pre-integrated appliance designed to accelerate leading database environments including MySQL, Sybase, Microsoft SQL and MongoDB. This solution includes Dell PowerEdge Servers, Dell Storage and Dell Networking, with application acceleration technology from Fusion-io to improve database performance. An integrated systems appliance was also introduced for Oracle 12c databases to help customers migrate to and accelerate current Oracle 12c environments.
Dell is working with Cloudera and Intel to launch new Dell In-Memory Appliances for Cloudera Enterprise, aimed at accelerating Hadoop deployments. The appliances join Cloudera Enterprise with Intel's performance- and security-optimized chipset and ScaleMP's Versatile SMP architecture, which aggregates multiple x86 servers into a single virtual machine to create large memory pools for in-memory processing.
4:30p
Microsoft's Hosted Flavor of Exchange Takes Tuesday Off Sick
A network problem took down Microsoft's hosted version of Exchange for nine hours on Tuesday.
The number of affected users hasn't been disclosed, but social media tracked the outage closely, with many customers making their frustrations vocal. The outage is a black eye for Microsoft's own hosted flavor of Exchange and for its larger Office 365 hosted apps business suite.
The outage hit the service Microsoft hosts itself, which competes with Google's apps suite. The company has many service provider partners offering hosted Exchange, but they were not affected.
Many Exchange users have been looking to move away from self-hosted, on-premises Exchange as comfort with outsourcing grows. However, outages such as this one make it much harder to convince businesses to make the switch.
While no outages are acceptable, this one hit at the worst possible time: during U.S. work hours on Tuesday, starting at 9 a.m. Eastern and finally resolving around 6 p.m. “Engineers have identified an issue in which a portion of capacity that facilitates connectivity to Exchange Online services has entered into a degraded state,” according to the admin status page. A community message board shows the extent of customer frustrations.
Perhaps the biggest black eye came from a perceived lack of communication on Microsoft's part. Transparency and frequent updates are necessary when dealing with an outage to minimize customer anger, a lesson many service providers have learned over the years.
Exchange Online is sold both as a standalone service and as part of the hosted business apps suite Office 365. This isn't the first outage for Microsoft's hosted email offerings.
A network device was also identified as the culprit in an Azure outage in 2012.
5:00p
Open Source NoSQL Database Firm Couchbase Raises $60M
NoSQL open source database software startup Couchbase has raised $60 million in funding. The company, built around an open source technology, has had a year of stellar growth, with sales increasing 400 percent and its workforce doubling in size. The funding will help sustain this growth through continued product investment, expanded regional tech support and scaled-up marketing and sales teams.
The Big Data market can be segmented into two halves: operational data management, where Couchbase and NoSQL vendors play, and analytical data processing, which has been led by Hadoop vendors – although Google’s abandonment of MapReduce is sure to shake up the Hadoop world.
Operational data management is accelerating with increasing enterprise adoption of NoSQL. The Mountain View, California-based startup already counts several household names as customers, including Beats Music, Cisco, Comcast, Disney, eBay/PayPal, Neiman Marcus, Orbitz, Sky, Tencent and Verizon.
The Series E was led by two new investors, WestSummit and Accel Growth Fund, and brings the total amount of Couchbase funding to $115 million. All existing venture capital investors also participated.
Couchbase is in a great position to disrupt the database market, said Kevin Efrusy, partner at investor Accel Partners. “It is building momentum with Oracle replacements and demonstrating continued competitive success over other NoSQL vendors.”
IDC predicts the market for big data will reach $16.1 billion in 2014, growing six times faster than the overall IT market.
“Global adoption of NoSQL for operational big data initiatives is just beginning,” said Bob Wiederhold, CEO of Couchbase. He said his company continued to win over customers from Oracle and MongoDB.
Couchbase's flagship product is Couchbase Server. Architected to consolidate multiple layers of functionality, it can support multiple Big Data use cases: it aims to be as simple as a key-value database, as flexible as a document database, and to deliver the read speeds of an in-memory database.
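Those three properties are easier to see side by side. The sketch below uses plain Python stand-ins rather than the real Couchbase SDK; the keys, fields and values are all hypothetical.

```python
# Toy illustration of the access patterns described above, using a
# plain dict as a stand-in bucket -- not the actual Couchbase SDK.
import json

store = {}  # the "bucket"

# Key-value simplicity: set and get an opaque value by key.
store["user::1001"] = json.dumps({"name": "Ada", "city": "London"})
print(json.loads(store["user::1001"])["name"])  # Ada

# Document flexibility: values are JSON documents, so fields can be
# added on the fly without a fixed schema.
doc = json.loads(store["user::1001"])
doc["plan"] = "premium"
store["user::1001"] = json.dumps(doc)

# In-memory speed: this working set lives entirely in RAM, which is
# the property an in-memory-style read path provides.
```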
In order to capitalize on the growth of mobile, the company launched Couchbase Mobile, an end-to-end operational big data management platform that supports both cloud and edge-based (the end devices) computing.
Investor Accel Growth III is a new $1 billion fund that focuses on investing in later-stage companies driving important new trends in technology adoption. The other new investor, WestSummit, has partners located across the U.S. and China and is particularly excited about Couchbase’s Asia Pacific potential.
Raymond Yang, co-founder and managing partner at WestSummit, will join the Couchbase Board of Directors. “China, and broader Asia, represent a huge market, and we believe Couchbase is well positioned to take leading share of this opportunity. As an investor and board member, I’m looking forward to supporting their APAC growth.”
8:00p
Emerson Launches Larger Ultra-Silent Liebert Coil Condensers for Data Center Cooling
Emerson Network Power has launched extended-size versions of its ultra-silent outdoor coil condenser for data center cooling systems. The product line previously included one- and two-fan units that ranged from 28kW to 40kW in capacity. Emerson now also offers three- and four-fan options, and the upper capacity range has been extended to 220kW.
Data centers are loud creatures, both inside and out. AC condensers are only one source of the noise, joining a chorus of other gear, such as generators and exhaust fans. Many areas, such as the New York metro, have condenser noise limits written into building codes that data center operators must comply with, which makes turning down the volume of a condenser a worthy engineering challenge.
In its normal mode of operation, Emerson's Liebert MC condenser is not a whole lot quieter than alternatives (between 2.5 and 5.5 decibels lower, according to the vendor), but it is something. It also has a Quiet-Line mode, which throttles the unit's fan speed and lowers the volume by 10 to 20 dBA.
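Because decibels are logarithmic, those reductions are larger than they sound. The quick calculation below shows the sound-power ratios involved; the rule of thumb that a 10 dB drop is perceived as roughly half as loud is general acoustics, not an Emerson claim.

```python
# Decibels are logarithmic: a reduction of d dB cuts sound power
# by a factor of 10^(d/10).
def power_ratio(db_reduction):
    return 10 ** (db_reduction / 10.0)

for db in (2.5, 5.5, 10, 20):
    print("{:>4} dB quieter -> {:.1f}x less sound power".format(
        db, power_ratio(db)))
# 2.5 dB ~ 1.8x, 5.5 dB ~ 3.5x, 10 dB = 10x, 20 dB = 100x.
```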
Using Emerson’s Liebert iCOM, the control system for the company’s cooling products, customers can flip between the modes to adjust noise levels based on time of day.
Liebert MC is built for use with Emerson’s Liebert CRV in-row cooling system, as well as Liebert Challenger 3000, DS and DSE precision cooling products.
The vendor is also boasting the unit’s efficiency, claiming 50-percent higher energy efficiency than traditional fin and tube condensers (Emerson sells those too). It gets its efficiency from its microchannel coil technology and from variable-speed fans, which can adjust cooling capacity based on IT demand (although many experts in the industry have been skeptical about the potential of adjusting cooling load dynamically as IT load fluctuates).
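The variable-speed fan claim tracks with the fan affinity laws, under which fan power scales roughly with the cube of fan speed. The calculation below is general engineering math, independent of Emerson's own figures.

```python
# Fan affinity laws: power scales roughly with the cube of speed,
# so even modest speed reductions at partial load save real energy.
def relative_fan_power(speed_fraction):
    return speed_fraction ** 3

for pct in (100, 80, 60):
    frac = pct / 100.0
    print("{}% speed -> {:.0f}% power".format(pct, 100 * relative_fan_power(frac)))
# 100% -> 100%, 80% -> 51%, 60% -> 22%.
```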
Emerson says its “IT system-matched heat rejection” reduces energy costs and operational expenses.
John Peter Valiulis, North America marketing vice president for Emerson’s cooling products, said, “The Liebert MC condenser is a major technology breakthrough in an area of data center thermal management that is often overlooked.”
The company’s other outdoor condensers are a fan-based Liebert PFH and a Liebert Fin/Tube unit.
Updated: The original post has been updated to include the recent additions to the product line.
9:54p
Belden Adds Cormant's DCIM to Product Portfolio
Belden, a vendor best known in the data center industry for its cabling products, has added a data center infrastructure management (DCIM) component to its product portfolio by partnering with DCIM software company Cormant. The two vendors will market and sell joint solutions.
DCIM is considered a growing market that has yet to hit its stride, and many vendors that sell into data centers have been seeking and finding different ways in. They’ve done it through acquisitions, partnerships, in-house development and various combinations thereof.
In a recent example, CommScope, a major Belden competitor in many areas, bought DCIM company iTRACS to get into the market.
The Cormant-CS DCIM solution monitors infrastructure and environmental conditions, power and data connectivity, and has an asset-management component. The monitoring takes place in real time.
It supports tablets, which data center techs can use to scan barcodes on the IT boxes on their data center floor to keep track of the assets. The platform's capabilities, illustrated in the sketch after this list, include:
- Asset management of network and server equipment and components for inventory and deployment planning
- Connectivity visibility to see where equipment is located, what it's connected to and how it's connected
- Monitoring and reporting of power, environmental factors, network capacity and trends
- Planning and forecasting with full view of data center floor plans and rack-level detail, historical data analysis and “what-if” scenarios
- Work flow management for efficient deployment, reconfigurations and recording of test data
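As a rough illustration of how those capabilities hang together, here is what a single DCIM asset record might look like. The field names are hypothetical, not Cormant-CS's actual schema.

```python
# Hypothetical shape of a DCIM asset record: identity, location,
# connectivity and live environmental readings in one structure.
asset = {
    "barcode": "BLD-00412",   # scanned from the device on the floor
    "type": "switch",
    "location": {"room": "DC1", "rack": "R12", "u_position": 38},
    "connections": [
        {"port": "ge-0/0/1", "to": "server-0042:eth0", "cable": "patch-7781"},
    ],
    "readings": {"inlet_temp_c": 24.5, "power_w": 310},
}

# A tablet-based barcode scan would look the record up by its tag:
inventory = {asset["barcode"]: asset}
print(inventory["BLD-00412"]["location"]["rack"])  # R12
```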
The system already integrates with a lineup of Belden wares, including traceable patch cords, intelligent power distribution units, cabinet access control systems and its heat containment system for enclosures. The companies plan to offer solutions that combine DCIM with Belden products.
Paul Goodison, CEO of Cormant, said the partnership had already won a number of customers. “By combining Belden’s leadership in data connectivity solutions with Cormant’s pure-play expertise in DCIM, we offer a world-leading combined solution to customers,” he said.
Cormant rebranded its DCIM product in 2013, changing the name from CableSolve to Cormant-CS to make sure people don't confuse it with a cabling-oriented solution.
10:00p
Lessig's Mayday Super PAC Scales in Google's Cloud
The online fundraising campaign for Lawrence Lessig's Super PAC to end all Super PACs kicked into gear on May 1, and on May 2 the site started crashing. The small server hosting what was then a simple WordPress website couldn't handle the influx of donations and had scaling issues.
Thanks to Lessig’s reputation in the developer world and support for his Mayday PAC vision, however, a group of volunteers that included top engineering talent from all over the country quickly came together to resuscitate the site, which would go on to raise more than $1 million over a period of two weeks.
Using a combination of public cloud infrastructure and open source developer tools, the team within a week replaced a simple blog-style site with a full-fledged web application that, in addition to collecting money and hosting promotional material, enables users to create their own pledge pages.
A Super PAC-killing Super PAC
Mayday PAC’s goal is to raise millions in cash to support contenders in congressional races who pledge to fight for campaign finance reform. Super PACs (Political Action Committees) can raise unlimited amounts of cash and spend it to support political causes as long as they don’t give any of it directly to candidates and their campaigns.
Made possible by decisions in two 2010 court cases (Citizens United v. Federal Election Commission and Speechnow.org v. Federal Election Commission), Super PACs have served as a loophole that corporations, interest groups and rich individuals can use to affect election outcomes through unrestricted spending on political advertising.
Mayday’s goal is to exploit this loophole in order to close it. Lessig and company’s goal is to raise $12 million and use it to “make fundamental reform the key issue in five congressional races” in November, according to the Super PAC’s website.
Volunteer firefighters
Before the fundraising effort could cross its first $1 million threshold, however, the organization needed to fix some fundamental scaling issues. Before Lessig had a chance to send out a call for help on Twitter, a number of concerned engineers were already knocking on the door, offering to volunteer and fix the site, recalled JT Olds, one of those volunteers.
Olds is an indistinguished engineer at Space Monkey, a distributed storage startup competing with the likes of Dropbox. On May 1 he was simply excited about Mayday and made a donation, and on May 2 he saw that the site had started to have issues and offered to help.
A replacement site with backend hosted on Google App Engine (a Platform-as-a-Service product) was already in the works, being built by a volunteer from Google. Meanwhile, the WordPress site had to stay up, so the team that quickly formed (Olds included) decided to get a “beefy” Media Temple server to host it. “That weekend we ended up putting up a new architecture for the website, and over the course of the next week, we rewrote the site to be hosted by Google,” Olds said.
Besides the limited horsepower of the server it was hosted on, the WordPress site had a problem caused by a plugin it was using to interface with Stripe, an online credit card payment processing service. The team eventually got hold of the developer who wrote the plugin, along with a WordPress developer, and fixed it. Olds said the site would probably have worked fine on the Media Temple server until the end of May (when the initial $1 million campaign was closing), but the decision had already been made to move it into the cloud.
By that time there was already an active developer community chatting about the issue online and the majority was clamoring for migrating the site onto App Engine. The team ultimately chose Google’s cloud because the faction of people pushing for it was simply bigger than the other faction. “It was sort of a social decision of where we went with the code,” Olds said. But there were a few technological and cost advantages to going that route as well.
Pay-per-use auto scaling cloud a cheaper option
Cloud was less expensive. Aaron Lifshin, another volunteer who has since become Mayday CTO, said the Media Temple server cost about $750 per month. Now that everything is in the cloud, “it costs us a dollar a day or something, two dollars a day sometimes,” depending on demand. Once the re-architected site was launched, the Media Temple server was retired.
With Media Temple, Mayday was paying for anticipated peak demand all the time, while App Engine (like other public cloud infrastructure services) scales up and down automatically, and Google charges only for the capacity the site actually uses, Olds explained. There are also reliability benefits to App Engine, which offers replication across multiple VM instances, all automatically load-balanced.
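The article's own figures make the economics plain; the comparison below simply multiplies them out, assuming a 30-day month.

```python
# Cost comparison using the figures quoted above.
media_temple_per_month = 750.0                     # flat fee, sized for peak
cloud_per_day_low, cloud_per_day_high = 1.0, 2.0   # quoted pay-per-use range

print("Dedicated server: ${:.0f}/month".format(media_temple_per_month))
print("Cloud (usage-based): ${:.0f}-${:.0f}/month".format(
    cloud_per_day_low * 30, cloud_per_day_high * 30))
# $30-60/month vs. $750/month: a 92-96 percent reduction, because
# idle peak capacity is no longer billed around the clock.
```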
The site’s backend services – a payment service, the service that lets people create their own pledge pages and authentication via Facebook, Google or Twitter accounts – are all now hosted in Google’s cloud. The landing pages are hosted on GitHub Pages, a GitHub service that hosts websites using code stored in its popular open source code repository.
Open source to the rescue
Both Lifshin and Olds said the Mayday site was saved so quickly by high-caliber engineers who worked for free because many engineers care deeply about open source and support Lessig. The Harvard Law School professor is a well-known political activist who in addition to fighting for campaign finance reform devotes a lot of time to the issues of copyright and net neutrality. One of the founders of Creative Commons, he is a big supporter of the open source movement.
True to form, the Mayday infrastructure team deposited all of the code written for the site into GitHub. “We were very interested in making sure that all of the code written for this project was open source,” Olds said.
Rough going in second round, Valley bigwigs on board
Currently in the middle of its second, much more ambitious fundraising round, Mayday, while now running on a modern cloud architecture, is struggling to hit its $5 million goal by July 4. Donors had pledged about $1.4 million as of Wednesday afternoon, nine days before the deadline.
The Super PAC has recently managed to secure support from a number of Silicon Valley heavyweights. Apple co-founder Steve Wozniak, Union Square Ventures managing partner Fred Wilson, PayPal co-founder Peter Thiel and LinkedIn co-founder Reid Hoffman have all joined the campaign, Reuters reported earlier this week.