Data Center Knowledge | News and analysis for the data center industry
Wednesday, July 30th, 2014
12:00p | Long-Term DARPA Project Yields Inter-Cloud Connectivity Breakthrough
IBM Research, together with AT&T and Applied Communication Sciences (ACS), announced the first cloud technology breakthrough of the long-term DARPA CORONET project. The proof-of-concept technology reduces set-up times for cloud-to-cloud connectivity from days to seconds.
IBM and AT&T started the project way back in 2006. Funded by the DARPA CORONET program, the technology is of great interest to the U.S. Department of Defense. However, one of the program’s goals is to make this technology available commercially.
The companies claim this technology could one day lead to sub-second provisioning time with IP and next-generation optical networking equipment. It enables elastic bandwidth between clouds at high connection-request rates using intelligent cloud data center orchestrators, instead of requiring static provisioning for peak demand.
AT&T was responsible for developing the overall networking architecture for this concept. IBM provided the cloud platform and intelligent cloud data center orchestration technologies to support dynamic provisioning. ACS contributed expertise in network management and innovations in optical-layer routing and signaling as part of the overall cloud networking architecture.
The technology uses intelligence in the cloud to:
- Request bandwidth from pools of network connectivity when needed by an application
- Release it back when it’s no longer needed
- Dynamically connect various cloud networks on the fly — within seconds — when there is a need to share data or resources, or provision a real-time connection to a back-up cloud in the event of a disaster or other demand spike
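To make the request/release cycle above concrete, here is a minimal, purely illustrative Python sketch. The BandwidthPool controller and its method names are invented for this example; the article does not describe the project's actual APIs.

```python
class BandwidthPool:
    """Toy stand-in for an intelligent optical-network controller
    that carves elastic bandwidth out of a shared pool."""

    def __init__(self, capacity_gbps):
        self.free = capacity_gbps

    def request(self, gbps):
        # Allocate capacity on demand instead of statically
        # provisioning for peak load.
        if gbps > self.free:
            raise RuntimeError("bandwidth pool exhausted")
        self.free -= gbps
        return gbps

    def release(self, gbps):
        # Return the capacity to the pool once the transfer is done.
        self.free += gbps


def cloud_to_cloud_transfer(pool, payload, gbps=10):
    """Request bandwidth, move the data, release the bandwidth:
    seconds of setup rather than days of manual provisioning."""
    circuit = pool.request(gbps)
    try:
        print(f"sending {len(payload)} bytes over a {circuit} Gb/s circuit")
    finally:
        pool.release(circuit)


cloud_to_cloud_transfer(BandwidthPool(capacity_gbps=100), b"disaster-recovery blob")
```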
“The program was visionary in anticipating the convergence of cloud computing and networking and in setting aggressive requirements for network performance in support of cloud services,” said Ann Von Lehmen, the ACS program lead.
Developed for defense but has commercial applications
The original program goal was to enable affordable and fast bandwidth on demand between clouds, ensuring the survival of cloud networks in the event of multiple, system-wide failures. If disaster struck, the technology would connect cloud computing networks on the fly to immediately share resources and computing power in order to keep the Internet and the government running.
This prototype was implemented on OpenStack, an open-source cloud-computing platform for public and private clouds, elastically provisioning WAN connectivity and placing virtual machines between two clouds for the purpose of load balancing virtual network functions. The use of flexible, on-demand bandwidth for cloud applications, such as load balancing, remote data center backup operation and elastic scaling of workload provides the potential for major cost savings and operational efficiency for both cloud service providers and carriers.
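For readers curious what the cloud side of such a prototype might look like, below is a hedged sketch using the openstacksdk Python library. The cloud names, image, flavor and network IDs are placeholders, the load metric is deliberately naive, and the elastic WAN request is stubbed out, since no public OpenStack API provides it.

```python
import openstack  # pip install openstacksdk

# Two clouds assumed to be defined in the local clouds.yaml
# (hypothetical names used purely for illustration).
clouds = [openstack.connect(cloud="cloud-a"),
          openstack.connect(cloud="cloud-b")]

def provision_wan(src, dst, gbps):
    """Placeholder for a CORONET-style elastic bandwidth request;
    the real prototype's interface is not public."""
    print(f"requesting {gbps} Gb/s between {src} and {dst}")

def place_vm(name, image_id, flavor_id, network_id):
    # Naive load-balancing placement: boot the VM in whichever
    # cloud is currently running fewer servers.
    target = min(clouds, key=lambda c: sum(1 for _ in c.compute.servers()))
    provision_wan("cloud-a", "cloud-b", gbps=10)
    server = target.compute.create_server(
        name=name, image_id=image_id, flavor_id=flavor_id,
        networks=[{"uuid": network_id}])
    return target.compute.wait_for_server(server)
```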
“This technology not only represents a new ability to scale Big Data workloads and cloud computing resources in a single environment, but the elastic bandwidth model removes the inefficiency in consumption versus cost for cloud-to-cloud connectivity,” said Douglas Freimuth, IBM Research senior technical staff member and master inventor. “IBM Research brought a unique understanding of both cloud environments and networking infrastructures, which made us an ideal collaborator for this project.”
12:30p | Dell to Resell Schneider UPS, Racks, PDUs
Dell and Schneider Electric subsidiary APC have deepened their longstanding relationship. Dell will sell APC-engineered and Dell-branded Smart-UPS units and racks, along with a wider assortment of APC-branded UPS systems and PDUs. The partnership brings tighter integration between the two companies’ products.
The relationship between the two companies goes back 22 years, starting with APC providing power protection for Dell servers and networking equipment.
“Dell’s IT-savvy customers are well-acquainted with the APC history of quality in IT power protection, as well as Schneider Electric’s reputation as experts in data center physical infrastructure,” said Michael Maiello, senior vice president of Schneider’s Home and Business Networks unit. “This agreement will expand our twenty-two-year relationship with Dell to a more strategic level and provide fully integrated technology solutions for data centers and small to medium-sized business IT environments.”
The partnership allows Dell to serve a wide swath of customer types with co-branded gear. Smart-UPS is generally used in small-office, network-closet and server-room scenarios. However, Dell will sell the full suite of Schneider UPS products, racks and PDUs found in bigger data center facilities.
“As network, compute, storage and management continue to converge on a common architecture, it’s critical that we support our customers by providing a sound physical infrastructure – whether it’s for the IT office or the enterprise data center,” said Mike Arterbury, general manager of enterprise infrastructure at Dell. “We’ve worked closely with APC by Schneider Electric for many years, and together with APC and our other partners, we provide a reliable physical infrastructure to support the needs of business.”
2:00p | Data Center Jobs: McKinstry
At the Data Center Jobs Board, we have a new job listing from McKinstry, which is seeking an Account Executive – Data Center Maintenance in Seattle, Washington.
The Account Executive – Data Center Maintenance is responsible for initiating and developing consultative relationships with potential and existing clients in one or more industry verticals; establishing productive, professional relationships with key personnel in assigned and targeted client accounts; leading solution development efforts that best address client needs, coordinating the involvement of technical, operational and management resources to meet account performance objectives and client expectations; and proactively assessing, clarifying and validating client needs on an ongoing basis. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
4:00p | SSD Enhancements: Protecting Data Integrity and Improving Responsiveness
Doug Rollins is a principal SSD systems marketing engineer at Micron Technology, Inc., who holds 13 U.S. patents and is an active member of the Storage Networking Industry Association (SNIA).
Part 1 of this article looked at techniques used to ensure data integrity and optimize performance of enterprise SSDs. This second part takes a look at two additional enterprise-grade SSD features: one is aimed at ensuring data integrity in the event of a sudden power loss; the other helps to increase responsiveness through better background operations management.
Enterprise-class data path protection
Enterprise-class data path protection (eDPP) refers to mechanisms employed inside an enterprise-grade SSD that protect the host data inside the SSD controller. eDPP can be implemented by storing additional information (metadata) along with user data to help ensure that the SSD returns the exact data requested.
Note that data and metadata are very different: data is written and/or read by the host; the logical block address (LBA) for that data (or the location associated with it) is called metadata, which literally means “data about data.” One can think of metadata this way: if the host data was a building, the LBA metadata would be that building’s street address.
As shown in Figure 3, when the host writes data to the SSD, two key elements combine: the actual data to be written and the LBA from which it came. All data written to the SSD has an LBA associated with it—just like every building has a street address.
 Figure 3: LBA Embedding and Checking
By logically combining the host LBA with the associated data before writing the data to the NAND device, the source LBA can be read, checked, and verified when the data is read by the host. This second-level embedding and checking helps guard against data mismatches.
Furthermore, embedding the host LBA as metadata enables more robust power-loss protection. SSDs that use a lookup table to locate data (matching the requested LBA to the NAND page that contains the data) can rebuild that table when power is restored simply by reading the embedded LBA metadata.
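A toy model may help make the embedding, checking and rebuild behavior concrete. Everything below (the page layout, the CRC choice, the table format) is invented for illustration and is not any vendor's actual implementation.

```python
import zlib

class ToySSD:
    """Toy model of eDPP: each NAND page stores the host LBA and a
    checksum as metadata alongside the data, so reads can be verified
    and the LBA lookup table can be rebuilt after a power loss."""

    def __init__(self, num_pages=1024):
        self.nand = [None] * num_pages  # simulated physical pages
        self.l2p = {}                   # LBA -> page table (lives in DRAM)
        self.next_free = 0

    def write(self, lba, data):
        page = self.next_free
        self.next_free += 1
        # Embed the source LBA and a CRC with the data (metadata).
        self.nand[page] = (lba, zlib.crc32(data), data)
        self.l2p[lba] = page

    def read(self, lba):
        stored_lba, crc, data = self.nand[self.l2p[lba]]
        # Second-level check: the embedded LBA must match the request.
        assert stored_lba == lba, "data/address mismatch detected"
        assert zlib.crc32(data) == crc, "bit error detected"
        return data

    def rebuild_after_power_loss(self):
        # The DRAM table is gone; rebuild it by scanning the embedded
        # LBA metadata in every written page.
        self.l2p = {meta[0]: page
                    for page, meta in enumerate(self.nand) if meta}

ssd = ToySSD()
ssd.write(42, b"hello")
ssd.l2p = {}                     # simulate a sudden power loss
ssd.rebuild_after_power_loss()
assert ssd.read(42) == b"hello"
```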
In addition to host LBA checking and embedding, eDPP provides other data path protection methods like memory protection ECC (MPECC).
Figure 4 shows how MPECC protects the host data by adding ECC coverage as it enters the DRAM on the SSD. This additional MPECC then follows the host data through the SSD and helps detect and correct bit errors along the way.
Figure 4: eDPP Memory Protection ECC
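As a crude illustration of the idea behind MPECC, the sketch below uses triple redundancy with majority voting as a stand-in for real ECC bits. Actual implementations use far more space-efficient codes, but the shape of the flow is similar: protection is added as data enters DRAM and checked as it leaves.

```python
def dram_store(byte):
    # Store three copies: a deliberately wasteful stand-in for the
    # ECC bits real MPECC adds as data enters the SSD's DRAM.
    return [byte, byte, byte]

def dram_load(copies):
    # Majority voting corrects any single corrupted copy, much as
    # real ECC corrects single-bit errors in flight.
    return max(set(copies), key=copies.count)

protected = dram_store(0b10110010)
protected[1] ^= 0b00000100       # simulate a bit flipping in DRAM
assert dram_load(protected) == 0b10110010
```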
Background task management
Background tasks are housekeeping measures performed inside the SSD, such as wear management and power-loss protection routines. In some SSDs, background tasks can get in the way of servicing host I/O, which can cause unpredictable responses at the application level. Well-managed background tasks in enterprise-grade SSDs, by contrast, enable faster application response times because they don’t get in the way of processing the intensive host I/O seen in the data center. Optimal enterprise-grade SSD design strikes a balance that results in substantially lower maximum WRITE latency at the application level.
Two examples of such background tasks are copying any data mapping structures resident in the SSD’s DRAM into the NAND (a process that reduces startup time after a sudden power loss) and wear management.
Enterprise-grade SSDs offer more efficient internal data structures and manage them more granularly, executing background tasks in smaller chunks and ensuring that the operations can be effectively mixed with host I/O processing to optimize both.
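The sketch below illustrates the general idea with invented details: a background task broken into a generator of small chunks, plus a firmware-style loop that always lets pending host commands run ahead of housekeeping. Real firmware blends the two streams more cleverly, but the effect on worst-case latency is the point.

```python
from collections import deque

def wear_management_chunks():
    """A background task split into small units of work."""
    for block in range(4):
        yield f"moved hot data out of NAND block {block}"

def firmware_loop(host_queue, background):
    """Toy scheduler: host I/O is always serviced first; one small
    background chunk runs only when the host queue is empty, so
    housekeeping never stalls a burst of host requests."""
    background_done = False
    while host_queue or not background_done:
        if host_queue:
            print("host I/O:", host_queue.popleft())
        else:
            try:
                print("background:", next(background))
            except StopIteration:
                background_done = True

firmware_loop(deque(["read LBA 7", "write LBA 9"]), wear_management_chunks())
```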
Figure 5 shows how wear management can be performed at a very granular level. Managing and moving active data in small chunks facilitates seamless blending with the servicing of host I/O requests. The vertical columns represent NAND blocks and the green dotted area shows the smallest data unit that can be moved to manage wear.
 Figure 5: Granular Wear Leveling
Figure 6 shows an example of a data mapping structure stored in the DRAM of an SSD. In an enterprise SSD, this structure is often logically divided into smaller chunks (8 kilobytes in this example) so that it can be copied to the NAND device more efficiently by interspersing the background operation with host I/O requests. This technique offers better overall performance and superior latency reduction.
 Figure 6: Multiple, Smaller Structures in an Enterprise SSD
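For illustration, a chunked flush of that mapping structure might look like the following; the 8 KB chunk size comes from the figure, while nand_program is a hypothetical placeholder for the actual NAND write.

```python
CHUNK = 8 * 1024  # 8-kilobyte pieces, as in the Figure 6 example

def nand_program(chunk):
    pass  # placeholder for programming one piece into NAND

def flush_mapping_table(table_bytes):
    """Copy the DRAM-resident mapping structure to NAND in small
    chunks, yielding between chunks so host I/O can be serviced in
    the gaps instead of stalling behind one monolithic copy."""
    for offset in range(0, len(table_bytes), CHUNK):
        nand_program(table_bytes[offset:offset + CHUNK])
        yield offset  # firmware would service host commands here

for _ in flush_mapping_table(bytes(64 * 1024)):
    pass  # eight small copies instead of one 64 KB stall
```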
Summary
Enterprise-grade SSDs may look similar to client SSDs on the outside, but their internal operations tend to be very different. This two-part article introduced:
- Dynamic read tuning
- NAND die-level redundancy
- Enterprise-class data path protection
- Optimal background task management
This article also discussed how the key design features of an enterprise-grade SSD can be very specialized. When selecting SSDs for your data center operations, it is imperative that these features and other design elements are well understood by your SSD supplier. Be sure that they can explain how the design works—including each critical element—and how their enterprise-grade design differs from client-grade designs. In short, be a persistent and informed customer before you sign the purchase order.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:08p | Data Center Battery Monitoring Firm Canara Raises $4.25M
Canara, a provider of monitoring and predictive analytics for data center backup power systems, has raised $4.25 million in equity. The round was provided by Columbia Capital and will be used to implement a strategic growth plan that CEO Tom Mertz and chief business officer Steve Manos (both appointed recently) have put in place.
Batteries are often at fault in data center outages when they don’t kick in as expected. The San Rafael, California-based company’s devices continuously analyze data center backup batteries to predict when they are going to die, helping avoid unexpected battery failure. The value proposition is risk mitigation, higher efficiency and more reliable backup power systems.
Mertz, a former exec at QTS, joined in June and brought Manos aboard earlier this month. The new leadership has an aggressive growth plan, which the equity round will support.
Manos leads business development, marketing and other revenue-generation efforts. He is well known in the industry from his time at Lee Technologies, a data center management services company acquired by Schneider in 2011, and from his frequent speaking engagements.
The combination of the two new executives may make Canara more visible in the industry.
The funding will go toward aggressive sales growth objectives, stronger partner relationships and improvements to the product itself, Mertz said. “This is a major expansion of Columbia Capital’s investment in Canara, which is a great vote of support for our growth plan and a reflection of how bullish Columbia is about the growing market for predictive data center analytics.”
“Every company that operates a data center should be looking at these solutions in order to protect their investment and maximize uptime,” said Patrick Hendy, partner at Columbia Capital and a Canara board member. “Organizations that utilize Canara’s solutions rave about how indispensable they become, and Tom’s growth plan will make these products a vital part of more organizations’ data center management systems.”
9:49p | CoreSite Sells Chunk of Silicon Valley Space to China Telecom Customer
CoreSite Realty Corp. has made an unusual deal with China Telecom to provide data center space to one of the partner’s “premier customers.” This is the first time the U.S. data center provider has made such a deal, selling space through a partner.
The unnamed client is taking capacity at one of CoreSite’s Santa Clara, California, data centers. The provider did not disclose the size of the deal, but in its second-quarter earnings report, delivered earlier this month, it mentioned that one customer had taken down 26,500 square feet at its SV3 facility, which is located in Santa Clara.
The average rate for data center leases CoreSite signed with tenants in the second quarter was about $160 per square foot.
The customer provides Web and mobile marketplace solutions, and the expansion to Silicon Valley is meant to boost its ability to deliver its solutions in the U.S. Joe Han, president of China Telecom Americas, said it was critical to find a data center provider that would provide a scalable data center solution for the client’s expansion.
“In addition to having the right data center infrastructure in Silicon Valley, CoreSite was able to meet our stringent security and technical requirements, and provide vital network connections,” Han said.
CoreSite has five data centers in Silicon Valley, including the Santa Clara campus called Coronado. The company operates about 860,000 square feet of data center space across the five facilities and says it can expand its footprint in the region to as much as 1.3 million square feet.
As of the end of June, about 85 percent of finished data center space in CoreSite’s national portfolio had been leased out.
10:30p | Codero Hosting Launches Proactive Managed Hosting Service
This article originally appeared at The WHIR
Codero Hosting announced on Tuesday that it has launched a new Proactive Managed Hosting service, which manages all IT infrastructure up to the application layer, including the data center infrastructure, networks, networking devices, and more.
Codero’s new managed hosting service allows customers to hand off the management of hybrid hosting, elastic block storage, private or public cloud, dedicated servers, network devices, and more.
According to Codero, customers have been using Proactive Managed Hosting in beta for nearly six months. So far, feedback has been positive as customers report significant savings.
Earlier this year, Codero received $8 million in financing to deploy data centers in the US and Europe and to expand its hosting portfolio.
“We created Codero Proactive Managed Hosting in response to customer and industry requests for a more integrated service that takes advantage of the automation benefits of our existing on-demand hosting platform,” said Emil Sayegh, CEO of Codero Hosting. “Internet-dependent businesses can now rely on Codero Proactive Managed Hosting in tandem with all of our other offerings for an unmatched curated service powered by a dedicated team of experts. Customers get a fundamentally better and more cost-effective way to manage their IT infrastructure so they can instead focus on business.”
Customers of Proactive Managed Hosting have access to a dedicated team of engineers, acting as an extension to their internal IT departments.
Automation is another focus of Codero’s new managed hosting offering. Customers benefit from improved monitoring and provisioning tools.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/codero-hosting-launches-proactive-managed-hosting-service
11:00p | Bitcoin Mining Heavyweight Takes Space at CenturyLink Data Centers
Bitcoin mining infrastructure has very quickly (over the course of only the past couple of years) grown into a substantial market niche for data center providers. It is a tricky business, however, and not every provider can make the math of selling a fairly bare-bones space-and-power product to Bitcoin mining companies work.
CenturyLink Technology Solutions has become the latest provider to take a crack at this market. The company recently signed with Austin-based CoinTerra, a major player in the Bitcoin-mining-infrastructure ecosystem, as a tenant starting with four of its data centers.
According to CoinTerra, about 15 percent of the global Bitcoin network runs on its hardware. The company’s CEO Ravi Iyengar said this estimate was based on the number of mining servers it has shipped to customers or deployed in its own data centers and the total mining performance of the network.
CoinTerra’s main revenue stream is the Bitcoin value mined by its own servers, but the company also provides mining as a service to others and sells mining hardware. It currently has about eight data center locations, all in North America, including the four it has leased from CenturyLink.
Location, power requirements kept close to vest
Iyengar was reluctant to disclose the names of CoinTerra’s data center providers other than CenturyLink and C7 Data Centers, citing security concerns. Companies like those two are safer to name because they operate in multiple locations, so it is impossible to deduce which facilities the mining servers live in. The other providers, he said, were single-location data center companies.
Iyengar also declined to disclose the exact amount of power the company’s infrastructure uses or has contracted for with CenturyLink, saying only that it was north of 10 megawatts but that it would be north of 20 megawatts “very soon.”
Mining capacity is a trade secret in the world of mining companies, and the total amount of power a company consumes could easily be used by a competitor to calculate, with reasonable accuracy, how many servers it is running, Iyengar explained.
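A back-of-the-envelope calculation shows why that number is guarded; the per-unit wattage below is an assumption for illustration, not a figure from the article.

```python
# If a competitor knew the site power, fleet size falls out directly.
site_power_mw = 10          # "north of 10 megawatts"
watts_per_miner = 2000      # assumed draw of a single mining unit
miners = site_power_mw * 1_000_000 / watts_per_miner
print(f"roughly {miners:,.0f} mining units")  # ~5,000
```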
Very unusual data center tenants
At least $600 million is expected to be spent on Bitcoin mining infrastructure in the second half of this year, according to a recent estimate. That is a huge opportunity for data center providers, but not an opportunity all of them are in a position to capture.
Around-the-clock uptime is not as critical to mining operations as it is to most typical data center customers. A few seconds of downtime don’t really make a big difference, Iyengar said.
For mining companies like CoinTerra the most important attributes of a data center are extreme power densities and short deployment timelines. Since data center providers draw the bulk of their profit margins from providing redundant, reliable infrastructure, they cannot necessarily keep those margins and remain attractive to mining firms.
C7 CEO Wes Swenson told us in an earlier interview that his company has designated space in its data center portfolio designed specifically for Bitcoin. It is high-density, low-reliability space, and the provider offers it with no service-level agreements.
CenturyLink took a different approach with CoinTerra – its first Bitcoin customer. Drew Leonard, the provider’s vice president of colocation product management, said the company managed to sell CoinTerra a fairly standard product, including an SLA for cooling (a guarantee to maintain a certain ambient temperature).
Iyengar said CenturyLink did have to modify the solution to a certain extent. “They all had to customize their solution to reduce the cost,” he said of his data center providers. The traditional approach was too expensive for providers to be able to offer the operational cost level CoinTerra had asked for.
‘Move at Bitcoin speed’
Ultimately, CenturyLink won the contract because it was able to accommodate the power densities, the cooling efficiency and the extremely tight deployment deadlines CoinTerra required. “They’ve shown that they can move at Bitcoin speed, and that’s what we like about them,” Iyengar said.
CenturyLink installs the equipment for the customer. It stood up one of the environments for CoinTerra within one week of receiving the shipment of TerraMiner boxes. Speed of deployment is crucial, since, as Leonard put it, every day a server sits on a loading dock is a day of lost revenue for the mining company.
CoinTerra was a good match for CenturyLink because of the provider’s traditional focus on high power density in data center design. “The reason this type of customer is ideal is because they are very dense,” Leonard said. “They don’t require a lot of space. We can put a lot of power in it.”
Current-generation TerraMiners can take anywhere between 20 kW and 25 kW per rack, Iyengar said. “Mining machines consume more power than any of the traditional servers, by a big margin.”
Besides power density, however, cooling the load efficiently is also a big deal for this kind of equipment. The better it is cooled, the more Bitcoin value it can generate. “The way you cool them basically allows the machines to perform better,” Iyengar said. “Cooling directly impacts the revenue.”
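One simple way to quantify that relationship is through facility overhead: within a fixed utility feed, every watt spent on cooling is a watt unavailable to the miners. The figures below are illustrative assumptions only, not numbers from CoinTerra.

```python
feed_mw = 20.0                        # assumed total utility feed
for pue in (1.5, 1.2, 1.05):          # cooling-efficiency scenarios
    mining_mw = feed_mw / pue         # power left for hash-generating load
    print(f"PUE {pue}: {mining_mw:.1f} MW available for mining")
```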
Because cooling is so important to the business, CoinTerra is exploring unorthodox options, such as immersion cooling. Some vendors, such as Green Revolution Cooling, offer solutions that submerge server motherboards in dielectric fluid to take advantage of the higher cooling efficiency of liquid versus air as the medium.
In the near future, CoinTerra may be using CenturyLink’s data centers to house such immersion-cooling systems. The next generation of TerraMiner technology will feature a mix of air and immersion cooling, Iyengar said.
Big enough for own data center
As it grows, CoinTerra is expanding its data center footprint aggressively. In addition to using service providers, its data center strategy also includes construction of its own dedicated facilities. The company is currently fitting out a data center in Canada that will provide more than 20 megawatts of power for bitcoin mining, Iyengar said.
It is also exploring a build-out of similar scale in the U.S., potentially contracting with CenturyLink, he said, but that project is still in very early stages, and no decisions have been made.
CoinTerra is the first customer in the Bitcoin ecosystem for CenturyLink, but the public announcement of the deal indicates that the provider is interested in doing more business in this space. “We are seeing a definite up-swing in the trend and the opportunities with some of these companies,” Leonard said.
It is too early to tell whether the niche represents a major growth opportunity for a provider like CenturyLink, however. “There’s an opportunity there,” he said. “Whether it’s major growth or not, it really depends.”
Visit the dedicated Bitcoin data center market section on Data Center Knowledge for more coverage of this space.