Data Center Knowledge | News and analysis for the data center industry
Thursday, December 5th, 2013
1:00p
Data Center Jobs: Atos
At the Data Center Jobs Board, we have a new job listing from Atos, which is seeking a Datacenter Technician in Boydton, Virginia.
The Datacenter Technician is responsible for monitoring and controlling daily service ticket activity, customer calls and service levels; resolving technical problems with hardware, software and connectivity; installing, upgrading and replacing server, device and network components as needed; supporting Windows Server operating systems (2003/2008); providing networking support (protocols, troubleshooting connectivity); performing physical hardware audits; handling hardware warranty and RMA requests; and coordinating deliveries and transport requests. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
1:27p
Equinix Opens Osaka Data Center
Equinix opens its Osaka, Japan data center, SunGard finds its third-quarter sweet spot with customer success stories, and Online Tech is set to exhibit its HIPAA-compliant products at the mHealth Summit in Washington, D.C.
Equinix opens Osaka data center. Equinix (EQIX) has announced the opening of its data center in Osaka, Japan, called OS1. The OS1 data center was opened in partnership with K-Opticom, one of the largest access providers in the Osaka/Kansai area, and Kanden Energy Solutions (KENES), with support from O-BIC, an Osaka government agency. As a carrier-neutral data center, OS1 will give multinational telecommunications companies the ability to interconnect with multiple local network services through K-Opticom. Given the prevalence of earthquakes in Japan, OS1 has preventative measures in place against power supply failures and floods: its power systems have built-in redundancy and backup generators in the event of a local utility failure, and the structure is built to withstand environmental disasters. OS1 also features a sophisticated seismic-resistance system designed to handle the maximum movement anticipated from large earthquakes. “The opening of our first data center in Osaka with the support of K-Opticom, KENES and O-BIC provides our customers around the globe with a diversity of networks and connectivity options in Osaka, driving synergies with our Tokyo operations,” said Kei Furuta, managing director of Equinix, Japan. “With the two key hubs in Japan, cloud, content and network providers, as well as financial services firms, can access the broadest choice of networks in the region, enabling them to rapidly and cost-effectively expand their business, while better serving their international customers.”
SunGard sees third quarter sweet spot. SunGard Availability Services announced that it saw an uptick in customer migrations across several lines of business in the third quarter, including managed services, cloud and disaster recovery services. Advanced Energy (AE), a global leader in reliable power conversion solutions, turned to SunGard Availability Services to provide full support for 13 different SAP landscapes covering all AE processes within a private cloud environment, along with disaster recovery. SunGard assisted the company in its full migration to the SAP landscape, which resulted in a 30 percent performance improvement in the cloud. Jonas Fitness, a leading provider of enterprise management software and billing solutions to a variety of industries, was seeking a partner to help position the company as a trusted provider of ‘software for life.’ SunGard Availability Services’ solution for Jonas comprises production managed hosting with replication to a disaster recovery environment, the SunGard Managed Recovery Program and consulting services for mainframe data migration.
Online Tech to exhibit at 2013 mHealth Summit. Online Tech announced that it will exhibit its HIPAA-compliant products at the upcoming 2013 mHealth Summit. Customers instaRounds and DocView Solutions host in Online Tech’s enterprise-class, fully encrypted cloud, and will be guests of the company at the mHealth Summit. instaRounds lets physicians communicate over HIPAA-compliant servers, share call schedules, follow appointments and coordinate cross coverage, and gives them a mobile patient sign-out application. instaRounds CEO Kurian Thott said future growth potential includes extending its use to pediatrics, urology, trauma and other specialties. Thott chose Online Tech as his hosting provider because it was HIPAA, PCI and SOX audited and his data would be encrypted throughout the process. “They took care of compliance so we didn’t have to spend a lot of time, effort or money proving it.”
1:30p
Clustering in the Cloud: Has the Holy Grail Arrived?
Noam Shendar is VP of Business Development at Zadara Storage, a provider of enterprise Storage as a Service (STaaS) solutions. He has more than 15 years of experience with enterprise technologies, including at LSI Corporation, MIPS Technologies, entertainment technology startup iBlast, and Intel.
NOAM SHENDAR, Zadara Storage
Cloud economics are so compelling that more data center managers are evaluating what additional applications make sense in the cloud, whether in their own private cloud, a hybrid option or a public cloud from Amazon Web Services or a service provider. Yet there are reasons why proven enterprise-class features are considered as such: they deliver, reliably, against agreed-upon SLAs.
After growing accustomed to them, data center managers are loath to give them up.
Enterprise Storage
This is particularly true with traditional enterprise storage system features. One such feature is clustering, the standard enterprise method for achieving high availability in mission-critical enterprise applications. Clustering works by having multiple servers run the same application, so that the failure of any one server will not cause downtime (the other servers “pick up the slack” for the failed server). It is de rigueur for databases, and given that so much of enterprise computing runs on a database, it is effectively on the punch list of features without which most enterprise applications just can’t move to the cloud.
To date, leveraging clustering in the cloud has required that IT teams rewrite legacy applications specifically for cloud deployment because the storage system constrains the database for one or both of the following reasons:
(1) The storage is too slow, requiring the application to be broken up into parallel processes, each running on slow storage but in aggregate producing sufficient performance.
(2) The storage lacks certain capabilities, such as volume sharing or protocol support for NFS or CIFS, which are commonly required by legacy applications.
Certainly very few IT groups have time for this added work, and so applications requiring clustering have been forced to stay out of the cloud, or to use “managed hosting” options where the service provider creates a private setup for the customer using dedicated equipment. This is expensive and rigid, requiring long lead times to modify and multi-year commitments.
Enter Software-Defined Storage
The software-defined storage movement is changing all of the above, and rapidly. Select third-party solutions can create clusters in the cloud despite the usual networking limitations – the lack of IP multicast, of iSCSI persistent reservations, and of the ability to present a single volume to multiple cloud servers/instances.
Without these capabilities, even basic file sharing for collaborative work such as CAD, CAM and other shared workloads would not be possible. For most legacy applications (SQL Server, Exchange, Oracle), clustering is standard, and moving these clustered applications to the cloud while leaving the clustering aspects behind is not an option.
There are also novel options made possible by new software-defined approaches to intelligently sharing screaming-fast storage hardware among multiple customers, without issues of either privacy or performance.
To support clustering, cloud-based storage approaches need a punch list of features:
- Application and/or OS with clustering support (e.g., Red Hat Failover Cluster or Windows Failover Cluster).
- Volume sharing, so that the same volume can be mounted to all the servers in the cluster, instead of allowing just one-volume-to-one-server attachment, as is typically found in cloud storage solutions.
- Support for SCSI Persistent Reservations, so that the servers avoid modifying the same data simultaneously. Essentially, each server can temporarily lock other servers out of a data region on which it is working – a feature that is common in on-premises storage systems but only emerging in the cloud.
- IP multicast or Layer 2 communication support among the servers in the cluster, in order for the servers to ascertain each other’s health (a minimal heartbeat sketch follows this list).
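To make that last item concrete, here is a minimal Python sketch of the kind of multicast heartbeat that lets cluster nodes watch each other’s health. The group address, port and three-interval failure policy are illustrative assumptions, not details of any particular failover cluster product.

```python
# heartbeat.py - illustrative multicast heartbeat for cluster health checks.
# Assumption: group address, port and timeout policy are invented for this sketch.
import socket
import struct
import sys
import time

GROUP = "239.255.42.99"  # hypothetical administratively scoped multicast group
PORT = 5007
INTERVAL = 1.0           # seconds between heartbeats

def sender(node_name: str) -> None:
    """Announce this node's liveness to the cluster multicast group."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    while True:
        sock.sendto(node_name.encode(), (GROUP, PORT))
        time.sleep(INTERVAL)

def listener() -> None:
    """Track peers' heartbeats; a peer silent for 3 intervals is presumed down."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    membership = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)
    sock.settimeout(INTERVAL)
    last_seen = {}
    while True:
        try:
            data, _addr = sock.recvfrom(1024)
            last_seen[data.decode()] = time.time()
        except socket.timeout:
            pass  # no heartbeat this interval; fall through to the check below
        now = time.time()
        for node, stamp in list(last_seen.items()):
            if now - stamp > 3 * INTERVAL:
                print(f"node {node} missed heartbeats; initiating failover")
                del last_seen[node]

if __name__ == "__main__":
    # Run "python heartbeat.py node1" on each node, "python heartbeat.py" to watch.
    sender(sys.argv[1]) if len(sys.argv) > 1 else listener()
```

In a public cloud that blocks multicast, the same liveness exchange has to be rebuilt over unicast or delegated to the storage layer, which is precisely the gap the software-defined solutions described above fill.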
Moreover, the storage needs to scale up in performance as the number of servers in the cluster grows and as each server produces more I/O – a challenge in itself, since these applications are typically high-IOPS in nature to begin with.
Since mid-2013, Deloitte Consulting’s Infrastructure and Operations hosting business unit has used Dimension Data’s servers and IP multicast feature, along with a Storage as a Service (STaaS) solution that Dimension Data provides from Zadara Storage and that supports native SQL clustering. In addition to using Zadara to provide over 10 TB of storage to numerous clients, Deloitte’s Managed Analytics SaaS offering uses Zadara to deliver information and insight into the cloud environments Deloitte hosts for clients, helping those companies run their IT infrastructures more efficiently.
With several of its clients holding SLAs at four-nines or five-nines levels – clients in demanding sectors such as retail, inventory management and patient claims, where there is little room for cloud-induced system hiccups – Deloitte would typically have taken a hybrid approach, placing the web and application tiers in the cloud while keeping physical servers on site, connected to clustered storage resources. But doing so meant maintenance time and sunk storage costs: Deloitte would have been forced to buy allotments of storage – usually more than it needed – if and when growth occurred, and to physically connect new resources to existing ones as its network scaled.
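For context on what those SLAs actually demand, the arithmetic is unforgiving; a quick back-of-envelope calculation (ours, not Deloitte’s):

```python
# nines.py - allowed downtime per year at "four nines" and "five nines".
MINUTES_PER_YEAR = 365.25 * 24 * 60

for availability in (0.9999, 0.99999):
    allowed = (1 - availability) * MINUTES_PER_YEAR
    print(f"{availability:.5%} availability allows {allowed:.1f} minutes of downtime/year")
# 99.99% availability allows ~52.6 minutes of downtime per year;
# 99.999% allows ~5.3 minutes per year.
```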
By clustering in the cloud rather than on physical resources, Michael Hayes, master specialist in Deloitte Consulting’s Infrastructure & Operations group, said Deloitte gained a nimbleness in meeting client needs that was otherwise impossible with physical resources that had to be purchased and installed on site as customers’ needs grew.
As formerly impossible capabilities for enterprise computing, such as clustering, become possible, the Holy Grail of the cloud – where all IT resources are flexible, performant and economical, even at high availability and at scale – is becoming a reality. Data center managers should take notice: they no longer have to compromise between the must-have features that dictate staying on premises and the management requirements of running an optimal operation.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
2:00p
Peak Teams With Telx to Expand Footprint, Customer Base
The raised-floor area inside a Telx data center.
Data center and interconnection specialist Telx and channel-focused cloud provider Peak (formerly PeakColo) have formed a symbiotic partnership. Peak will expand into two Telx data centers: SCL2 in Santa Clara, California, and ATL1 at 56 Marietta Street in Atlanta, Georgia. This extends Peak’s infrastructure footprint and gives Telx a cloud partner for key facilities. The newest Peak cloud nodes will be available in January 2014.
“If you think about our customer base, which is service providers, telecoms and VARs, Telx is in almost perfect alignment,” said Luke Norris, CEO of Peak. “Telx has these customers, so there’s almost perfect synergy from the start.”
Telx and Peak are coordinating on developing cloud, connectivity, managed services and colocation solutions for new and existing clients. Peak is the first named partner in Telx’s Enablement Program, cementing a partnership that strengthens Peak’s cloud presence throughout North America and broadens Telx’s portfolio of products for its clients. The partnership has been in the works for about a year.
Data center providers have to get into cloud, either by building it themselves – potentially cannibalizing business or stepping on customers’ toes – or through smart partnerships such as the Telx-Peak agreement. Telx clients will be able to plug into Peak’s platform to use a customizable set of cloud services, managed storage and backup services, all while leveraging Telx’s colocation space and expansive access to both regional and global networks.
“By locating in Telx data centers, we can offer our mutual clients an award winning platform, excellent latency, reliability, scalability, and connectivity to other enterprises and cloud providers,” said Norris. “Our relationship with Telx offers a number of benefits to customers, including a variety of architectural options. Customers will be able to enjoy the benefits of hybrid solutions that include enterprise data centers, hosted private clouds and colocation-based resources, Peak cloud services, and tethering to third party cloud providers.”
Peak has been on a tear, posting tremendous growth for the last few years. Its 100 percent channel-centric cloud approach has been a winning formula, and is part of the reason it changed its old, somewhat misleading name of PeakColo. The company discussed its positioning with Data Center Knowledge last April.
Telx continues to boost its connectivity offerings and to form smart partnerships with cloud providers, giving its customers a myriad of options to work with. The ecosystem is growing.
“Peak represents the latest in a broad range of cloud service providers that have selected Telx, ranging from IaaS to SaaS (Software as a Service) providers,” said Chris Downie, CEO of Telx. “We are servicing some of the world’s largest network service providers and enterprises across a wide range of business verticals, such as digital media and entertainment, social networks, video streaming, financial services, healthcare, retail, other cloud companies, and startups. The diversity and ability to interact with hundreds of businesses in our Telx ecosystem creates and provides growth opportunities for all participants across our 20 data centers from coast to coast.”
Telx’s 20 highly interconnected data centers, which offer hundreds of network service providers and tens of thousands of connections, combined with Peak’s leadership in cloud computing services, give customers a unique value proposition for interactive applications with uncertain demand.
3:30p
Intel, Webtrends Host With ViaWest in Oregon
ViaWest powers a Top500 supercomputer and an expansion for Webtrends in Oregon, Compass Raleigh-Durham receives LEED Gold, and FORTRUST is awarded the Uptime Institute’s Management & Operations Stamp of Approval in Denver.
ViaWest helps a Top500 supercomputer and Webtrends expand. ViaWest announced that Webtrends is expanding its global SaaS operations to ViaWest’s Data Center Complex in Hillsboro, Oregon. The 80,000 square foot Hillsboro facility is SSAE 16 audited and features 8.5 megawatts of backup power. “We handle billions of transactions a day, and reliability, availability and security are primary considerations for Webtrends and our customers,” said Bruce Kenny, Executive Vice President of Product at Webtrends. “ViaWest provides world-class operations with proactive compliance with SSAE 16/SOC 1, 2 and 3 and overall security. This and their partnering model were crucial factors in our choosing them as our data center partner. We are pleased to expand with ViaWest as they share our fanatical focus on being the very best stewards of our customers’ digital assets.”
ViaWest also announced a partnership with Intel to house Intel’s most efficient and densely packed supercomputer in ViaWest’s Hillsboro data center. The ‘Cherry Creek’ supercomputer recently debuted at number 400 on the Top500 list of the world’s most powerful computer systems. “As Oregon’s leading colocation provider, ViaWest is proud to support Intel’s innovation efforts,” said Jim Linkous, Regional Vice President of Sales and General Manager for ViaWest. “Our technical team was very excited to support this supercomputer configuration by delivering 140 kW (kilowatts) of power and cooling – a powerhouse configuration that no other provider had the power or cooling systems to deliver,” he added. The supercomputer utilizes 74 kW of power spread over two standard cabinets.
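Those figures imply an unusually high power density; a quick illustrative calculation from the numbers above (our arithmetic, not a ViaWest figure, and the comparison density is a rough industry rule of thumb):

```python
# density.py - per-cabinet density implied by the Cherry Creek figures above.
total_kw = 74.0   # power drawn by the supercomputer, per the article
cabinets = 2      # spread over two standard cabinets, per the article

per_cabinet_kw = total_kw / cabinets
print(f"{per_cabinet_kw:.0f} kW per cabinet")  # 37 kW per cabinet, versus
# the roughly 4-8 kW a typical colocation cabinet of this era was
# provisioned for (assumed rule of thumb, not a figure from the article).
```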
Compass Raleigh-Durham receives LEED Gold. Compass Datacenters announced it has been awarded the prestigious LEED Gold certification for its data center in the Raleigh-Durham area. The certification was awarded by the U.S. Green Building Council (USGBC) and verified by the Green Building Certification Institute (GBCI). Completed earlier this year, the 22,000 square foot facility is located in Research Triangle Park (RTP) in North Carolina, and it was built using sustainable engineering and construction practices that adhere to the USGBC’s standards for materials use, water use, energy efficiency and other sustainability criteria. “I am a long-time proponent of LEED certification for data centers, going back to my involvement years ago with some of the very first data centers that ever earned approval from USGBC, and I am proud that Compass has earned the first of many LEED certifications for the dedicated data center facilities we build for customers,” said Chris Crosby, CEO of Compass Datacenters. “LEED has tremendous value for the data center industry because the principles LEED is based upon are the same principles that are so important to us as data center professionals: stewardship of materials, capital efficiency, proven best practices for operations, resourcefulness, and on and on.”
FORTRUST receives Uptime’s Management & Operations Stamp of Approval. FORTRUST announced that it has received the Management & Operations (M&O) Stamp of Approval from the Uptime Institute for the company’s Denver, Colorado data center. The M&O Stamp of Approval validates the site’s critical facilities management and operations practices and gives third-party assurance that site management satisfies industry-recognized criteria for 24×7 uptime. “The M&O Stamp of Approval substantiates FORTRUST’s High-Availability Service Delivery Model and the ongoing commitment to excellence of our management and operations teams,” said Rob McClary, Senior Vice President and General Manager at FORTRUST. “This is the basis of our organizational and operations strategies that have provided the results our customers have become accustomed to for well over 12 years. The engagement conducted by Uptime Institute was both a collaborative and thorough assessment of our Management and Operations organization.”
3:45p
Bitcoin Mining Arms Race Boosting Interest in Liquid Cooling
Densely packed chips performing Bitcoin calculations are immersed in cooling fluid, which bubbles as it boils, removing the heat from the ultra-high-density “mining” operation. (Photo: Allied Control)
There’s a computing arms race going on in the world of Bitcoin. Interest in the digital crypto-currency is driving the development of specialized hardware chips, which are selling out almost as fast as they can be built. This is boosting interest in data centers using immersion cooling, in which high-density hardware is dunked into fluids similar to mineral oil.
This new frontier in high-performance computing can be seen in Hong Kong, where a bitcoin mining company called ASICMiner has created an unusual data center. Within the facility, rows of rack-mounted tanks are filled with Novec, an engineered cooling fluid created by 3M. Inside each tank, densely packed boards of ASICs (Application Specific Integrated Circuits) run constantly as they crunch data for creating and tracking Bitcoin. As the chips generate heat, the Novec boils off, removing the heat as it changes from liquid to gas.
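The physics of that boil-off is easy to estimate. A minimal sketch of the energy balance, assuming a round-number latent heat of vaporization of 100 kJ/kg for the fluid (an assumption for illustration, not a 3M datasheet value):

```python
# boiloff.py - two-phase cooling: fluid vaporized per second for a heat load.
LATENT_HEAT_J_PER_KG = 100_000  # assumed ~100 kJ/kg; not a Novec datasheet value

def boiloff_rate_kg_per_s(heat_load_watts: float) -> float:
    """Mass of fluid that must vaporize each second to absorb the heat load."""
    return heat_load_watts / LATENT_HEAT_J_PER_KG

if __name__ == "__main__":
    # A hypothetical 10 kW tank boils off about 0.1 kg of fluid per second.
    # In a sealed system the vapor condenses on a coil and drips back down,
    # so the fluid is recycled rather than consumed.
    print(f"{boiloff_rate_kg_per_s(10_000):.2f} kg/s")
```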
These systems are part of a huge computing network driving global payment processing for Bitcoin, which uses processing power to verify transactions. Participants in the network – which includes individuals, corporations and mining collectives – are rewarded with the issuance of new bitcoins, which happens about every eight minutes. There’s a wrinkle: Over time the algorithms make it progressively harder to earn new bitcoins. If processing power remained static, it would take much longer to generate new bitcoins. The solution: more computing power!
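Under the hood, that processing power is spent on proof of work: miners hash a candidate block header over and over, varying a nonce, until the double-SHA-256 result falls below a network-set target, and the network shrinks the target as total hash power grows, which is what makes new bitcoins progressively harder to earn. A minimal Python sketch, using a toy header and difficulty rather than real Bitcoin consensus data:

```python
# pow_sketch.py - toy proof-of-work loop in the style of Bitcoin mining.
import hashlib

def mine(header: bytes, difficulty_bits: int) -> int:
    """Find a nonce whose double-SHA-256 hash falls below the target."""
    target = 1 << (256 - difficulty_bits)  # more difficulty bits => smaller target
    nonce = 0
    while True:
        payload = header + nonce.to_bytes(8, "little")
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

if __name__ == "__main__":
    # 20 difficulty bits means roughly a million hashes on average; the real
    # network retargets difficulty so blocks keep arriving on schedule even
    # as total hash power explodes.
    print(mine(b"toy block header", 20))
```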
Bitcoin mining can be done with CPUs or GPUs, but most serious players have graduated to specialized chips such as FPGAs (Field Programmable Gate Arrays) or ASICs that can be optimized for this very specific workload. This has led to the emergence of a new class of vendors selling custom hardware for bitcoin mining.
There are two key elements in bitcoin mining: computing power and electric power. Miners must make the most of both resources.
Hardware Power
The state of bitcoin mining hardware is perhaps best expressed by Ravi Iyengar, who left a position as lead CPU architect at Samsung to launch CoinTerra, a startup designing custom ASIC hardware for the bitcoin market. “I’ve been in arms races throughout my career – AMD, ARM, Intel,” Iyengar told Reuters. “But none of them match the intensity of Bitcoin mining. Each month in Bitcoin mining is like a year.”
Austin-based CoinTerra launched in August after closing a $1.5 million seed round of financing. It quickly sold out its first batch of units, and a similar sellout was reported by HashFast, a Bay Area startup featuring alumni of Xerox PARC and Engine Yard. Both CoinTerra and HashFast are building ASICs featuring state-of-the-art 28nm chip designs. Other early leaders in the Bitcoin hardware market include Avalon, KnCMiner, BitFury and Butterfly Labs.
The performance benchmark for Bitcoin hardware is gigahashes (GH) per second. The hash rate is the number of bitcoin hash calculations the hardware can perform every second. Tools have sprung up to help aspiring miners evaluate hardware performance and the economics of mining, including The Genesis Block and Decentralized Hashing.
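Those hash rates plug directly into mining economics: on average one block is found per difficulty × 2³² hashes, so expected revenue scales linearly with hash rate while electricity is the dominant cost. A back-of-envelope Python sketch with hypothetical late-2013 numbers (the hash rate, difficulty, power draw and electricity price below are illustrative assumptions):

```python
# mining_econ.py - back-of-envelope bitcoin mining economics.
def btc_per_day(hashrate_ghs: float, difficulty: float, block_reward: float = 25.0) -> float:
    """Expected bitcoins per day: one block per difficulty * 2**32 hashes on average."""
    hashes_per_day = hashrate_ghs * 1e9 * 86_400
    blocks_per_day = hashes_per_day / (difficulty * 2**32)
    return blocks_per_day * block_reward

def power_cost_per_day(watts: float, usd_per_kwh: float) -> float:
    """Daily electricity cost of running the rig."""
    return watts / 1000 * 24 * usd_per_kwh

if __name__ == "__main__":
    # Hypothetical late-2013 rig: a 500 GH/s ASIC drawing 400 W at $0.10/kWh.
    earned = btc_per_day(hashrate_ghs=500, difficulty=7.0e8)
    cost = power_cost_per_day(watts=400, usd_per_kwh=0.10)
    print(f"expected {earned:.3f} BTC/day against ${cost:.2f}/day in power")
```

Because network difficulty keeps climbing, the expected BTC per day for fixed hardware decays over time, which is what drives the relentless hardware refresh cycle described above.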
4:30p
Pivotal and Capgemini Form Strategic Partnership
At its Data Science Summit, Pivotal announced a new strategic partnership with consulting, technology and outsourcing services company Capgemini. The partnership will combine Capgemini’s expertise in business solutions for big data and analytics with Pivotal’s data platform technologies.
Together the companies will innovate around big data and analytics, starting with the “Business Data Lake,” a new approach to combining big data volumes from new sources with legacy data to deliver business-relevant analytics capabilities on a robust platform. The Business Data Lake transforms how information is leveraged within the enterprise, moving away from a single centralized view. It enables a broad base of business users to create their own personal perspectives on all data: structured and unstructured, stored and streamed, and from inside and outside their organization. The end result is agile, relevant analytical insight for a broad community of business users, with real-time technology integrating those insights directly into business processes.
“As we enter a new era of information-centric computing, companies must co-innovate to solve tough problems for their customers,” said Pivotal CEO Paul Maritz. “This is why we are working with Capgemini, a market leader in business analytics, around the Business Data Lake. This new offer represents our belief that the future of information insight within enterprises requires a new operating model, as data volumes increase and real-time intelligent response becomes a necessity of doing business.”
Capgemini and Pivotal are working together to establish a dedicated Pivotal Center of Excellence (CoE) within Capgemini’s Business Information Management (BIM) center in India, which will scale to 500 dedicated Pivotal product experts by 2015. The CoE has access to over 8,000 information management practitioners and 6,000 Java developers.
The Data Science Summit was held this week in Redwood City, California. The event conversation can be followed on Twitter via the hashtag #Datasciencesummit.