Data Center Knowledge | News and analysis for the data center industry
Monday, September 15th, 2014
11:00a | Cloud Cost Management Startup Cloudyn Raises $4M

Cloud cost management and monitoring is a crowded space, but Cloudyn believes it offers unique value that will set it apart from the pack. The company has secured $4 million in its first institutional funding round, which will go toward expanding its market reach and the breadth of its product portfolio.
The round was led by Titanium Investments, with existing investor RDSeed also participating.
Cloudyn CEO Sharon Wagner said the company has a technological edge in its ability to work across multiple clouds and a particular strength in hybrid cloud environments.
Once a user provides credentials, Cloudyn begins analyzing usage. “We look for optimal cost and performance,” Wagner said. “We look at resources between clouds, data centers. Forty-eight hours after we gather information, we provide a multi-dimensional view of performance and cost, allowing [the customer] to slice and dice. After 10 days of collecting information, we start to provide actionable recommendations. The first option is to optimize internally, then we also provide comparison and recommendations for workload migration and transition among clouds.”
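Cloudyn has not published its data model, but to make the “slice and dice” idea concrete, here is a minimal sketch of a multi-dimensional cost view over a handful of made-up usage records. All field names and figures are illustrative assumptions, not Cloudyn’s schema.

```python
# Illustrative only: a toy "slice and dice" over multi-cloud cost records.
# Field names and numbers are assumptions, not Cloudyn's actual data model.
import pandas as pd

records = pd.DataFrame([
    {"provider": "aws", "region": "us-east-1",   "service": "compute", "hours": 720, "cost_usd": 412.50},
    {"provider": "aws", "region": "us-east-1",   "service": "storage", "hours": 720, "cost_usd": 96.20},
    {"provider": "gcp", "region": "us-central1", "service": "compute", "hours": 500, "cost_usd": 260.00},
    {"provider": "softlayer", "region": "dal05", "service": "compute", "hours": 720, "cost_usd": 330.75},
])

# Slice by any combination of dimensions, e.g. total cost per provider and service.
print(records.groupby(["provider", "service"])["cost_usd"].sum())

# Dice further: effective cost per instance-hour by provider, a crude efficiency signal.
by_provider = records.groupby("provider").agg(
    total_cost=("cost_usd", "sum"), total_hours=("hours", "sum")
)
by_provider["usd_per_hour"] = by_provider["total_cost"] / by_provider["total_hours"]
print(by_provider.round(3))
```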
Cloudyn started with cloud cost management for Amazon Web Services customers, like many companies in the space do, but has continuously added support for other clouds, such as Google Cloud Platform, Rackspace and IBM SoftLayer. Wagner said the next big additions will be Microsoft Azure and VMware’s public cloud offering. “What we hear from clients is that Azure is taking off seriously with enterprises,” he said.
The company’s biggest install base (about 70 percent) is on AWS, which comes as no surprise given the cloud provider’s maturity. Cloudyn monitors approximately 8 percent of all instances on AWS, according to Wagner.
About 20 percent of Cloudyn’s customers run on GCE and the rest on an OpenStack cloud. Rackspace’s cloud is OpenStack-based, and SoftLayer, while not built on OpenStack, supports the open source cloud architecture.
In total, Wagner said, the company has over 2,400 clients, most of whom have significant cloud footprints of 250 concurrent instances and above.
Cloudyn provides visibility into hybrid and multi-cloud environments but will expand the breadth of what it can show users, according to Wagner. Expect the company to introduce more granular cost visibility, down to the cost of delivering the application itself. This is something that will appeal to CIOs looking to understand the true cost to deliver a specific service.
Cloudyn has doubled its revenue every quarter for the last six quarters, resulting in 400 percent year-over-year growth, according to the company. Currently a team of 17, Cloudyn will also use the new funding to expand its staff.
12:00p | Vantage Pitches ‘OCP-Ready’ Colocation Space

Saying you don’t have to be Facebook to take advantage of the open source hardware the social network’s engineers have designed for it, Vantage Data Centers is pitching its Santa Clara V2 facility in Silicon Valley to customers interested in using the stripped-down, or “vanity-free,” gear to power their applications.
Vantage has become a member of the Open Compute Project, the Facebook-led open source hardware design initiative. Other data center providers that support the project include Rackspace and IO, both of which have built public cloud services using the designs.
Vantage’s interest doesn’t lie in cloud services, however. Its roots are in selling wholesale data center space, but recently the company has been moving toward what it calls a “wholo” model, a middle ground between wholesale and colocation. The provider lets companies start with a 0.5-megawatt deployment and grow into wholesale eventually. Its competitors, such as Digital Realty Trust and DuPont Fabros Technology, have been introducing similar changes.
Model changes also include becoming more hands-on with the customer’s architecture. A traditional wholesale customer taking down 3 megawatts usually knows what they want, while smaller customers often need some guidance. It is this latter group that Vantage’s “OCP-ready” pitch is aimed at.
Pitching Open Compute to the mid-size market
“For mid-size enterprises with IT capacity of 100-200 racks, migrating [to OCP] would absolutely be a benefit,” Chris Yetman, senior vice president of operations at Vantage, said, suggesting the transition can be done during a routine IT refresh. “Everybody does tech refreshes. With OCP, they will run the same availability for less power.” He believes mid-market enterprises can realize the cost, efficiency and scalability benefits of Open Compute. “The basics behind design are to make a data center efficient,” he said. “Whether you’re using exact [OCP] designs or principles, this is the future and the right direction to head.”
An Open Compute server runs at higher power conversion efficiency than a non-OCP one and has more efficient 60-millimeter fans, which adjust speed automatically based on inlet air temperature, according to OCP. The main design principles behind OCP also include getting rid of the central uninterruptible power supply and using 480V electrical distribution to reduce the energy lost during the multiple conversion steps electricity goes through in a typical data center.
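The arithmetic behind that argument is easy to sketch. The per-stage efficiencies below are illustrative assumptions rather than OCP or Vantage figures, but they show how losses compound across conversion stages and why removing stages helps.

```python
# Back-of-the-envelope comparison of end-to-end power-conversion efficiency.
# Per-stage efficiencies are illustrative assumptions, not measured values.

def chain_efficiency(stages):
    """Multiply per-stage efficiencies to get the fraction of utility power reaching servers."""
    eff = 1.0
    for _name, e in stages:
        eff *= e
    return eff

traditional = [
    ("utility transformer", 0.98),
    ("double-conversion UPS", 0.92),
    ("PDU transformer, 480V to 208V", 0.97),
    ("server power supply", 0.90),
]

ocp_style = [
    ("utility transformer", 0.98),
    ("480V distribution, no central UPS", 0.99),
    ("high-efficiency rack power supply", 0.95),
]

for label, chain in (("Traditional chain", traditional), ("OCP-style chain", ocp_style)):
    print(f"{label}: {chain_efficiency(chain):.1%} of utility power reaches the servers")
```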
Not having to buy UPS units and a simpler electrical supply system may make for a lower cost of deployment (in UPS-less design, the UPS is replaced by a massive battery cabinet). The UPS-less design is optional for Open Compute server deployments. Whether the hardware itself is cheaper to buy for a small-scale customer is questionable, since the number of suppliers that produce OCP server motherboards is limited.
“We’ll work with you on OCP design,” Yetman said. “And if you’re on UPS, you can do OCP minus the battery. If you’re putting in new stuff, you can say, I don’t want the UPS. We have powered shells, and we’ll leave it out.”
Unnecessary redundancy in best practices?
While Yetman believes Open Compute is the way of the future, he said the shift will not happen overnight. Most data center managers are skeptical, since UPS systems and infrastructure redundancy in the name of uptime are long-standing common practices. As a result, data centers have cooling systems sized to at least N plus 20 percent and electrical systems where there are two of everything.
“It’s an insane amount of redundancy,” Yetman said. “This is an example of where OCP can turn the industry on its head. If you look at the Open Rack specification … you simplify the amount of power supplies that you use. I call it ‘nearly 2N benefit.’” The Open Compute standard brings higher power to the rack itself and splits that power at the rack. All this is optional at Vantage, however. The provider uses UPS systems and will continue to use them, he said.
What makes a colo OCP-ready?
OCP has put out a spec document for deploying its hardware in colos, offering three options: retrofitting standard 19-inch racks, using the single Open Rack design (which has the same footprint as a standard rack) or using the Open Rack triplet design.
Going the single Open Rack route requires that the data center floor can support a rack that weighs more than 2,200 pounds when fully loaded. The rack also needs a 208V line-to-line power distribution unit (PDU).
The triplet design can include a battery cabinet, which can be used instead of traditional UPS-based power backup.

A fully loaded triplet rack can weigh more than 5,000 pounds, so users and operators should make sure the data center floor can support the weight. Users going for the battery-cabinet option should also account for the weight of the cabinet, which comes to more than 2,220 pounds when fully loaded.
Each triplet rack also needs three 208V PDUs. If a battery cabinet is involved, a 208V variation of the battery cabinet and the OCP power supply is required as well.
Retrofitting a standard rack for Open Compute servers requires riveting on some shelves and side panels from an OCP triplet, installing 208V line-to-line power strips and adding new power cords to connect the server power supplies to the new power strips. If the facility supports AC backup power, top-of-rack switches can be powered with 208V. If not, they can use 48V DC power from the OCP battery cabinet, in which case the switches need to have a DC power option.
Facebook did this retrofit at one of its data centers, and the process took about three months from design to deployment.
It will take a handful of early movers
“We’re saying let’s not get in the way of this thing; we’re saying ‘what is it that you want to accomplish?’” Yetman said, adding that the company is working with potential customers who are considering the OCP option. “We understand that not everyone can change overnight. This doesn’t happen in a day, but some people are early movers. Bring in the less pretty looking rack and see which one you like. See which one doesn’t lose a server when the power supply goes out.”
12:30p | Disaster Recovery: Strong People Bring Stronger Results

Charles Browning is Senior Vice President of Operations at vXchnge, where he is responsible for operations and site management.
Disasters – whether caused by Mother Nature or human error – are lurking around every corner and the outcomes can be catastrophic to an ill-prepared company. Unfortunately, when disaster strikes, the first place business performance is affected is at the data center.
According to Uptime Institute, 70 percent of data center outages are directly attributed to human error. And while a solid plan is essential, it’s only as good as the people who support it. A strong team can make all the difference in whether or not downtime at the data center goes unnoticed. Focusing on the people behind the maintenance, power and communication in your data center can help lessen the risk of downtime.
Skilled team members, maintenance
A 2014 study from Ponemon Institute found that the average cost of an unplanned data center outage is $7,900 per minute, a 41 percent increase since 2010. On average, outages last 86 minutes, bringing the average cost of a downtime incident to a jolting $690,000. To avoid this hefty and unnecessary cost, safeguarding your data center with an experienced team is the first step to take.
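As a quick sanity check, and a template for plugging in your own figures, the arithmetic is simple; note that the product of the two quoted averages lands a little under the study’s separately quoted per-incident figure of roughly $690,000.

```python
# Rough outage-cost estimate using the Ponemon averages quoted above.
# Swap in your own per-minute cost and expected outage duration.
cost_per_minute = 7_900    # USD per minute of downtime (2014 Ponemon average)
duration_minutes = 86      # average reported outage length

estimated_cost = cost_per_minute * duration_minutes
print(f"Estimated cost of one outage: ${estimated_cost:,.0f}")  # about $679,400
```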
Skilled team members help keep a data center running effectively. Each member should demonstrate disaster recovery experience, possess a “what if” attitude, and tackle problems assertively without placing blame on third-party vendors or network providers.
Equally important as the in-house team is the team that supports the data center. They, too, need to be well-versed and have proven experience, not only in their specific field but in disaster recovery. From equipment manufacturers to network providers to power and energy engineers, these supporting companies should have specialized individuals who can deliver solutions quickly when issues arise.
For example, when a natural disaster strikes, the data center needs the individuals who engineered the equipment to be present, not a hired contractor who is unfamiliar with the technology or the facility. While such a contractor may be adequate, they most likely do not know the ins and outs of your specific data center.
Keep your eye on the power source
At any data center the most important element is the power. Monthly testing and planning sessions need to occur in order to keep things running properly. That way, if disaster strikes, the action plan is stable and each team member knows their role in restoring power.
The most imperative plans in any data center are the power and cooling plans. Elite power and cooling processes help businesses keep costs down and guarantee performance. When a data center’s power source is suddenly cut off, backup generators are key; they help restore power almost instantly.
However, simply having generators isn’t enough. There must also be a plan to test the backup generators on a monthly basis. This ensures that generators are up to speed and that the backup equipment can handle the takeover if need be. It may seem obvious, but testing outcomes should also be a focus, so that any steps needed to fix a potential problem can be taken before it’s too late.
Communication is always key
For any disaster recovery plan to work effectively, communication is key. A one-line diagram of the facility’s infrastructure is highly recommended, so that the team, as well as the businesses within the data center, can quickly identify the specific cause of a problem. Without this type of diagram, it becomes difficult to pinpoint what is causing an issue when one arises, and without knowing the cause, it is impossible to fix it, resulting in longer downtime and interrupted service.
Around-the-clock support from live experts is becoming an industry norm for data centers. These people should also know the specific data center well, so there is no further miscommunication or delay.
Falling victim to disaster
Without adequate brainpower fueling its plans and procedures, even the most comprehensive data center can fall victim to an unforeseen disaster. Focusing on the people behind the maintenance, power and communication will surely help lessen the risk of downtime at a data center, saving companies both money and unwanted stress.
Whether it’s adding in new elements such as 24/7 local, live expert support or a monthly reissuing of the one-line diagram to all team members, there are always new ways to make sure data centers can weather the storm.
What are some other elements that can help? Leave your comments below!
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:26p | 2015 Technology Convergence Conference

TELADATA’s 2015 Technology Convergence Conference (TCC) will be held February 26 at the Santa Clara Convention Center Mission City Ballroom in Santa Clara, California.
The TCC is a one-day educational conference where IT, Facilities, and Data Center professionals and executives come together to learn from each other in a unique collaborative setting.
More details will be published as they become available. Visit TELADATA’s website for more information.
To view additional events, return to the Data Center Knowledge Events Calendar.
4:30p | Dimension Data Intros Managed Services Across Global Footprint

South Africa’s Dimension Data, a $6 billion IT and communications solutions provider, has deployed globally standardized managed services across its data centers, built atop its automation platform. The portfolio manages servers, storage and networks for on-premise, cloud and hybrid data centers.
Dimension Data announced it would quadruple the size of its data center business to $4 billion by 2016. Becoming a managed services provider is the next step toward that goal.
Founded in 1983, the company has operations in more than 58 countries on all continents (save for Antarctica). It is a subsidiary of the NTT Group and has access to NTT’s significant global data center assets.
It also recently acquired Nexus, expanding operations in the U.S. by 40 percent.
Steve Joubert, executive for data centers at Dimension, said, “Even with the advent of cloud, Technology Business Research Inc. recently reported that 70 percent of private cloud adopters will utilize third parties to manage their environments.”
Joubert said with the new services Dimension automates routine transactional and knowledge work across network, server and storage for both on-premise and cloud environments.
The SLA-backed cloud services support multi-vendor environments, including those by Cisco, EMC, HP, Dell, VCE, Microsoft, Red Hat, VMware, Citrix and NetApp. Dimension has built a client portal that provides business insights for C-level executives.
5:30p | Data Center Jobs: Data Foundry, Inc.

At the Data Center Jobs Board, we have a new job listing from Data Foundry, Inc, which is seeking a Facilities Technician – Night Shift in Austin, Texas.
The Facilities Technician – Night Shift is responsible for solving problems using analytical, technical, and organizational abilities, understanding and documenting facilities infrastructure, regular monitoring of HVAC equipment (CRACs, CRAHs, Chillers, Pumps, RTUs, etc.), responding to facilities-based alerts and problems, producing weekly and monthly reports, and tracking and trending operational characteristics. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
6:07p | UPDATED: Facebook, Google and Others Partner to Make Open Source Easier

Updated with more details about the organization’s plans and criteria for joining.
Facebook has teamed up with Google, Twitter, Box, Github and a handful of other companies on a collaboration to “make open source easier for everyone.”
Most of the companies involved in the initiative not only use open source software, but also produce a ton of it, building tools to manage their own infrastructure and contributing the code to the open source community. The other initial participants are Dropbox, Khan Academy, Stripe, Square and Walmart Labs.
The organization will function as a clearinghouse of sorts. While there is a lot of open source software out there, not all of it is high quality and not all of it is regularly maintained. The open source projects the new organization gets involved with will be ones whose software has been deployed in production at one of the companies on the list.
Jay Parikh, global head of engineering at Facebook, announced the program at the company’s @Scale conference in San Francisco. The conference’s theme is the challenges of designing and operating data center infrastructure at web scale.
The new program, called TODO (talk openly, develop openly), aims to address challenges companies like Facebook and its partners face when using open source software. Facebook did not provide any specifics about TODO because it is in its very early stages – only a couple of weeks old, according to Parikh.
As James Pearce, head of open source at Facebook, put it in a blog post, “We want to run better, more impactful open source programs in our own companies; we want to make it easier for people to consume the technologies we open source; and we want to help create a roadmap for companies that want to create their open source programs but aren’t sure how to proceed.”
One of the first and easiest tasks for the group will be creating a set of best practices around open source tools members have built or used. A project’s presence on TODO’s list is a statement that it has been used successfully by one of the companies, Pearce said in a press Q&A session. In a way, it will act as a seal of approval, he said.
While sharing of knowledge about open source tools between engineers at companies like Facebook and Google already happens ad hoc, TODO is also an opportunity for companies that are involved in open source but don’t have the same contact networks to join those conversations.
Of course, not everybody can join. At a minimum, a member company will have a dedicated open source office, Pearce said. “If you’re a one- or two-man company doing an iPhone app, [you're] probably not there yet,” he said.
A website dedicated to the TODO program has been launched: http://todogroup.org/
8:12p | Piston’s New TCO Calculator Shows the True Price of an OpenStack Cloud
This article originally appeared at The WHIR
In conjunction with the release of Piston OpenStack 3.5, which includes improvements around security and operational savings, Piston has introduced the Piston OpenStack TCO Calculator, designed to estimate the total cost of a Piston OpenStack private cloud and compare it with the cost of running on a public cloud.
An IT administrator simply needs to provide the online TCO Calculator with their estimated private cloud requirements and preferred hardware vendors. Based on this data, the free calculator provides a custom projection of the total cost of a Piston OpenStack private cloud amortized monthly, as well as the option of comparing this estimated cost to what it would cost on Amazon Web Services.
Shawn Madden, product manager at Piston Cloud Computing, told the WHIR that there have been instances where companies have paid upwards of $100,000 per month to AWS, and found much better economics through deploying an on-premise cloud.
Madden said the TCO Calculator helps customers understand the potential savings based on their particular needs. “It shows you how much your Piston OpenStack cloud is going to cost when you buy the hardware and the software, and things like that,” he said. “And it also compares what Piston OpenStack would look like compared to AWS – and where that ‘sweet spot’ is when it becomes cheaper to own your own on-premise cloud.”
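Piston hasn’t published the calculator’s internals, but the shape of the comparison is straightforward: amortize hardware and software over their useful life, add operating costs, and compare the fixed monthly total against a usage-based public cloud bill. The sketch below is a generic illustration with made-up numbers, not Piston’s model.

```python
# Generic private-vs-public cloud TCO sketch. All figures are illustrative
# assumptions, not Piston's numbers; they only demonstrate the "sweet spot" idea.

def private_cloud_monthly(hardware_capex, software_annual, ops_annual,
                          amortization_months=36):
    """Fixed monthly cost: hardware amortized over its life, plus software and ops."""
    return (hardware_capex / amortization_months
            + software_annual / 12
            + ops_annual / 12)

def break_even_instances(private_monthly, public_cost_per_instance_month):
    """Instance count above which the private cloud becomes cheaper."""
    return private_monthly / public_cost_per_instance_month

private = private_cloud_monthly(
    hardware_capex=600_000,   # assumed server and network spend
    software_annual=120_000,  # assumed licensing and support
    ops_annual=180_000,       # assumed power, space and staff
)
public_per_instance = 180     # assumed blended monthly cost of a comparable public cloud instance

print(f"Private cloud: ${private:,.0f} per month (fixed)")
print(f"Break-even at roughly {break_even_instances(private, public_per_instance):.0f} steadily used instances")
```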
Piston is designed to fully automate the orchestration of an entire private cloud environment on x86 servers, making them into a pool of elastic and scalable computing resources. It also includes many AWS-like features that make it easier to transition from Amazon’s public cloud to their own OpenStack private cloud, including features for Big Data applications. With the market for OpenStack set to reach $1.7 billion by 2016, Piston’s TCO calculator is just one of the tools that will help companies justify setting up an OpenStack cloud to contribute to this growth.
Piston co-founder and CTO Joshua McKenty said the reason that Piston is often compared to AWS is tied to the development of OpenStack itself. McKenty played a crucial role in the early stages of OpenStack as the Technical Architect of NASA’s Nebula Cloud Computing Platform and the OpenStack compute components, and continues to be a member of the OpenStack Foundation board.
“When we built OpenStack at NASA, the original mandate was to figure out how the agency could take advantage of cloud computing, and really the only public cloud we looked at at the time was AWS,” McKenty said. Among the reasons AWS wasn’t chosen were security concerns, performance, capacity and sizing concerns, and overall cost.
“We actually had a TCO calculator at NASA that we had to build because we were providing services to the White House,” he said. “We had to build it under government full-cost accounting regulations. So, we had really granular understanding of what it costs to run a private cloud. And so the TCO calculator we’ve done at Piston is based on that original model.”
He said that at the launch of OpenStack, its main competitors were basically AWS on one side and VMware on the other. “All of these various service providers were using OpenStack to fight with Amazon in public clouds, or fight against VMware in private clouds. And so the cost of ownership calculations have always been done in those two directions – on those two bases.”
Even with public cloud providers in a price war, there are many instances where private clouds can actually be more cost-effective to run – in addition to the various other advantages of running a private cloud. This TCO calculator provides IT departments with a way to quantify the price difference between public and private cloud options in order for them to make a more informed decision on where their cloud belongs.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/pistons-new-tco-calculator-shows-true-price-openstack-cloud
10:22p | Lobbied by Google, Apple, Duke Pumps $500M Into Renewable Energy in N. Carolina

Duke Energy, the largest power utility in the U.S., announced it will acquire and construct three solar facilities in North Carolina and has signed five power purchase agreements with solar energy generation developers in the state.
The commitment represents a $500 million investment in renewable energy in one of the biggest data center markets in the country. It is also the single largest investment in renewable energy Duke has ever made.
Owners and operators of large data centers in North Carolina, including Google, Apple and Facebook, have been pushing Duke for more renewable options over the past several years.
Google has continuously expanded its data center campus in Lenoir and said it was going to use its purchasing power to jump-start a renewable energy program for Duke Energy last year. It looks like its efforts may have paid off.
Apple has committed publicly to keeping its operations powered entirely by renewable energy, and the company reached that goal across all of its data centers in 2013.
Apple recently got a green light to build out its third solar farm in Maiden, North Carolina, a 100-acre 17.5 megawatt plant.
Duke’s latest move suggests a lot has changed since 2013. Commenting on Apple’s announcement last year, Gary Cook, senior IT analyst at Greenpeace, insisted that Apple “still [had] major roadblocks” to meeting its 100-percent clean energy commitment in the state, where Duke was “intent on blocking wind and solar energy from entering the grid.”
Duke’s eight projects will have total capacity of 278 megawatts. The three solar facilities will have the capacity to generate 128 megawatts of power total, and the power-purchase agreements represent 150 megawatts of generation capacity in sum.
“This is Duke Energy’s largest single announcement for solar power and represents a 60-percent increase in the amount of solar power for our North Carolina customers,” said Rob Caldwell, senior vice president of distributed energy resources at Duke.
The three solar facilities are in Bladen, Duplin and Wilson counties. Bladen will consist of 23 MW, developed by Tangent Energy Solutions; Duplin will be 65 MW, developed by Strata Solar; and Wilson will be 40 MW, developed by HelioSage Energy.
The power purchase agreements are sprinkled across different locations around the state:
- 48 MW – Bladen County (developed by Innovative Solar Systems)
- 48 MW – Richmond County (developed by FLS Energy)
- 20 MW – Scotland County (developed by Birdseye Renewable Energy)
- 19 MW – Cleveland County (developed by Birdseye Renewable Energy)
- 15 MW – Beaufort County (developed by Element Power US)
In addition to the five power purchase agreements, Duke has signed 33 other agreements in North Carolina this year for projects totaling 109 megawatts of capacity.
11:00p | Facebook Turned Off Entire Data Center to Test Resiliency

A few months ago, Facebook added a whole new dimension to the idea of an infrastructure stress test. The company shut down one of its data centers in its entirety to see how the safeguards it had put in place for such incidents performed in action.
Jay Parikh, global head of engineering at Facebook, talked about the exercise in his keynote presentation at the company’s @Scale conference in San Francisco Monday.
“This is not a small thing,” he said. “This is tens of megawatts of power that basically we turned off for an entire day to test how our systems were going to actually respond.”
He didn’t specify which of Facebook’s data centers was shut down. It has its own facilities in Oregon, Iowa, North Carolina and Sweden, and leases wholesale data center space in California and Virginia.
The company did run some “fire drills” prior to the test to prepare, and while some were skeptical that the team would actually pull the plug, it was important that it did. “We turned the entire region off,” Parikh said.
And the prep work paid off. “It was actually pretty boring for us,” he said.
Not everything worked 100 percent, and the team did put some improvements on the roadmap. But the overall system persevered, the applications stayed up, and Parikh’s team is planning to continue such stress tests.
An exercise like this falls under one of the key tenets of engineering at Facebook: embracing failure, Parikh said. Facebook encourages its engineers to take big risks – without being reckless – and doesn’t punish those who take them and fail.
“We don’t squash those,” Parikh said. There are precautions taken to minimize the consequences of failure, and the team spends a lot of energy on analyzing causes of failure and being able to recover quickly.