Data Center Knowledge | News and analysis for the data center industry
Friday, August 22nd, 2014
1:56p
IBM and DESY Develop Big Data Architecture For Science
IBM and Deutsches Elektronen-Synchrotron (DESY) are joining forces to speed up the management and storage of massive volumes of X-ray data.
Together they are building a Big Data and Analytics architecture using IBM software-defined technology. It will handle more than 20 gigabytes per second of data at peak performance and help scientists worldwide gain faster insights into the atomic structure of novel semiconductors, catalysts, biological cells and other samples.
DESY develops, builds and operates large particle accelerators used to investigate the structure of matter. It operates a 1.7-mile-long "super microscope" called the PETRA III accelerator. PETRA III accelerates electrically charged particles to nearly the speed of light, sending them through a tight magnetic slalom course to generate brilliant X-ray radiation.
This synchrotron radiation is used by more than 2,000 scientists each year to examine the internal structure of a variety of materials at atomic resolution. The challenge lies in storing and handling the huge volumes of X-ray data these experiments produce.
DESY is tackling the Big Data challenge with the help of IBM Research and IBM's software-defined storage technology, Elastic Storage. Elastic Storage will provide scientists with high-speed access to growing volumes of research data and can easily scale to store and handle the more than 20 gigabytes of data flowing every second from PETRA III.
The architecture will also allow DESY to develop an open ecosystem for research and offer analysis as a service, as well as cloud solutions to users worldwide.
“IBM’s software-defined storage technologies can provide DESY the scalability, speed and agility it requires to morph into a real-time analytics service provider,” said Jamie Thomas, General Manager Storage and Software Defined Systems, IBM. “IBM will take the experience gained at DESY and transfer it to other fields of data intensive science such as astronomy, climate research and geophysics and design storage architectures for the analysis of data generated by distributed detectors and sensors.”
The system's scalability will also support DESY and a number of international partners currently building the European XFEL X-ray laser, a research light source that will generate even more data.
“We expect about 100 petabytes per year from the European XFEL,” said Dr. Volker Gülzow, head of DESY IT. That is comparable to the yearly data volume produced at the world’s largest particle accelerator, the Large Hadron Collider (LHC) at the research center CERN in Geneva.
“A typical detector generates a data stream of about 5 gigabits per second, which is about the data volume of one complete CD-ROM per second,” Gülzow said. “And at PETRA III we do not have just one detector, but 14 beamlines equipped with many detectors, and they are currently being extended to 24. All this Big Data must be stored and handled reliably.”
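Those figures are easy to sanity-check. Here is a minimal back-of-the-envelope sketch in Python, assuming a 700 MB CD-ROM and decimal (SI) units, neither of which is specified in the article:

    # Back-of-the-envelope check of the rates quoted above.
    GIGABIT_BYTES = 1e9 / 8          # bytes in one gigabit
    CD_ROM_BYTES = 700e6             # assumed CD-ROM capacity (~700 MB)

    detector = 5 * GIGABIT_BYTES     # ~5 Gbit/s per detector, per Gülzow
    print(detector / CD_ROM_BYTES)   # -> ~0.89 CD-ROMs per second

    SECONDS_PER_YEAR = 365 * 24 * 3600
    xfel_year_bytes = 100e15         # ~100 PB/year expected from European XFEL
    print(xfel_year_bytes / SECONDS_PER_YEAR / 1e9)  # -> ~3.2 GB/s average

The averages line up: a sustained 3.2 GB/s from the XFEL sits comfortably under the 20 GB/s peak the PETRA III architecture is designed to handle.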
DESY has sites in Hamburg and Zeuthen in Germany and hosts 3,000 scientists from over 40 countries each year.

2:29p
Couchbase Partnerships Bring Scalable NoSQL to the Cloud
NoSQL open source database software provider Couchbase announced new partnerships with CumuLogic, ElasticBox and Cloudsoft to help businesses deploy Couchbase Server to a cloud of choice, whether on-premises, public or hybrid.
The expansion of Couchbase’s cloud deployment ecosystem opens the door for customers to automate the process of provisioning Couchbase Server to Amazon Web Services, HP Cloud, IBM SoftLayer, VMware, OpenStack, and more.
Couchbase recently closed a $60 million financing round and is coming off a year that saw 400 percent growth in sales. The company believes it is in the right place at the right time, citing Gartner forecasts that project a 17.7 percent growth rate for public cloud services through 2016 and nearly half of large enterprises moving to hybrid cloud deployments. For the company, this underscores the importance of enabling deployment of its software across all cloud-based environments.
Rod Hamlin, vice president of business development at Couchbase, says the addition of these new partners “ensures customers can find the right option for their business to deploy Couchbase Server to their cloud of choice, whether it’s on-premises, public or hybrid, and power their applications with the highest performing and most scalable NoSQL database.”
Couchbase as a service
Couchbase and Database-as-a-Service (DBaaS) software provider CumuLogic announced a partnership to bring developers seamless access to Couchbase Server “as a service” via on-premises, public, and hybrid cloud infrastructure. CumuLogic VP of Product Strategy Chip Childers says Couchbase is particularly strong in the mobile application developer community, and that DBaaS gives developers the databases they want and gives operations teams a common platform for operations and management on whatever infrastructure they choose.
The partnership with ElasticBox will provide a ‘Couchbase box’ solution, an encapsulated architecture (box) delivered as a service. ElasticBox describes its box technology as reusable application components, which can include configuration management, runtimes and platforms, that are stacked together to build applications that can be shared with coworkers and deployed repeatedly anywhere. Couchbase boxes will help IT teams build and deploy applications and speed up application delivery.
“If Big Data’s promise is being turned into reality by technologies like NoSQL, then ElasticBox is bringing out the capabilities of the cloud to help businesses create, deploy and manage internal applications,” said Ravi Srivatsav, CEO and cofounder of ElasticBox. “ElasticBox will continue to deeply integrate with top technologies, like Couchbase’s robust NoSQL database, as we bring even more power and capabilities to the platform.”
Cloudsoft joined the Couchbase partner ecosystem by adding support for Couchbase Server, giving users the ability to create composite blueprints to model, deploy and manage their Couchbase applications both on-premises and in the cloud. Built on the open source Apache Brooklyn project, Cloudsoft has contributed support to the open source Couchbase server in the past and says it is actively working with Couchbase to add Couchbase Mobile support.
CumuLogic, ElasticBox and Cloudsoft join a practical who’s who of the Internet among Couchbase partners. Besides listing major Internet companies as customers, Couchbase counts Amazon Web Services, RightScale, VMware, Windows Azure, HP Cloud Services, Red Hat’s OpenShift and Eucalyptus among its cloud deployment partners.

3:02p
CyrusOne Expands Austin Data Center
CyrusOne expanded its Austin II data center with 5,000 square feet of white space in data hall two. The Austin II facility is approximately 65,000 square feet and is the first phase of a potential four-phase data center campus totaling 290,000 square feet.
CyrusOne continues to expand in Austin and across Texas to meet demand. It recently added a combined 102,000 square feet and 12 megawatts to its Carrollton and Houston data centers, and it acquired 22 acres of land in Austin earlier this year, following land acquisitions in San Antonio and Houston. Nationally, it is also building data centers in Chandler, Ariz., and Northern Virginia.
“Based on current and projected customer demand, we made the decision to add more square footage to our Austin II facility,” said John Hatem, senior vice president, data center design and construction, CyrusOne. “Staying true to our Massively Modular design philosophy, we want to ensure our facilities can scale as our customers’ requirements grow.”
Austin customers also have access to CyrusOne’s Texas IX platform, which provides interconnection to other CyrusOne data centers in Texas and beyond. The Texas IX platform currently links more than a dozen of CyrusOne’s enterprise facilities and third-party locations in multiple metropolitan markets.
CyrusOne operates 25 carrier-neutral data centers across the United States, Europe and Asia. It touts more than 645 customers, including nine of the Fortune 20 and around 135 of the Fortune 1000.

4:01p
Friday Funny Caption Contest: 300 Pound Gorilla
Although it’s somewhat hard to believe, the weekend is upon us again. Let’s have a little fun this Friday afternoon with a brand new Kip and Gary!
Diane Alber, the Arizona artist who created Kip and Gary, has a new cartoon for Data Center Knowledge’s cartoon caption contest. We challenge you to submit a humorous and clever caption that fits the comedic situation. Please add your entry in the comments below. Then, next week, our readers will vote for the best submission.
Here’s what Diane had to say about this week’s cartoon: “Kip and Gary knew their next colo tenant was going to be a gorilla. I’m just not sure they were expecting this!”
Congratulations to the last cartoon winner, Jon, who won with, “If I can’t work at Google, at least I can make this place look like Google!”
For more cartoons on DCK, see our Humor Channel. For more of Diane’s work, visit Kip and Gary’s website.
5:57p
Peak 10 Expands Cincinnati Data Center
Data center, cloud and managed services provider Peak 10 added 5,000 square feet to its Greater Cincinnati data center, 3,800 square feet of it net usable data center space. The expansion brings its total footprint in the West Chester, Ohio, facility to 27,000 square feet.
Peak 10 was acquired by GI Partners earlier this year in a deal believed to be worth between $800 million and $900 million. The acquisition is helping fuel aggressive but calculated expansion plans. Peak 10 recently broke ground on a 60,000-square-foot data center in Tampa, Florida, and on Phase 1 of its 70,000-square-foot facility in Alpharetta, Georgia. The provider has continued to add capacity in its core markets while expanding selectively into new ones.
The West Chester facility opened in October 2008. In addition to the expansion, the company already has a contract for another 22,000 square feet of data center space in the adjacent building, to open sometime in 2015.
The provider often hires executives with local knowledge to run its facilities.
“Our growth is a result of strong demand for our secure infrastructure, cloud and managed services, particularly from companies in heavily regulated industries, such as finance and healthcare, who have stringent compliance, security and availability requirements,” said Dan Doerflein, vice president and general manager for Peak 10, overseeing Cincinnati operations. “We look forward to even more growth in the Greater Cincinnati region, as we continue to support the IT and business operations of local businesses, and expand our partner community.”
Peak 10’s product strategy has evolved with the times, offering a mix of colocation, managed hosting and, most recently, cloud. It provides tailored solutions, often winning a piece of a customer’s IT infrastructure and growing the relationship over time.
Founded in 2000, Peak 10 operates 23 data centers in 10 markets, primarily in the southeastern U.S. The Charlotte, North Carolina-based company touts over 2,500 customers.

8:04p
Microsoft Azure Launches New Services Aimed at Developers 
This article originally appeared at TheWHIR.
Microsoft is seeking to improve Azure’s appeal to developers with a set of new services and updates, the company announced Thursday. New services include DocumentDB and Search, while HDInsight support for Apache HBase has been made generally available and VM Depot images have been added to the Azure gallery.
Azure also added support for SQL Server AlwaysOn, Web Site WebJobs, and the API Management REST API, which allows nearly any management operation in the API Management portal to be accessed programmatically.
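As a rough illustration of what “accessed programmatically” means here, the sketch below issues one such REST call with Python’s requests library. The endpoint path, API version and token are hypothetical placeholders, not the documented Azure contract; consult the API Management reference for the real routes.

    import requests

    # Hypothetical placeholders: not the documented Azure endpoint or version.
    SUBSCRIPTION = "<subscription-id>"
    URL = (f"https://management.example.net/subscriptions/{SUBSCRIPTION}"
           "/apis?api-version=2014-08-01")

    resp = requests.get(URL, headers={"Authorization": "Bearer <access-token>"})
    for api in resp.json().get("value", []):   # assumed response shape
        print(api.get("name"))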
“One of the core tenets of Microsoft Azure’s strategy is to build an open platform that gives developers flexibility in building applications using technologies of their choice,” Azure director of product marketing Vibhor Kapoor said. “Today we are excited to announce new services and updates that affirm our commitment to enabling developers to build innovations how they want.”
The announcement also includes Azure availability for credit card purchase and deployment in 51 new countries.
DocumentDB is a managed NoSQL document database-as-a-service with “query processing and transaction semantics common to relational database systems.” It offers an additional NoSQL choice for leveraging user-generated data across multiple platforms, concurrent versions, or applications.
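For a feel for the document model, here is a minimal sketch of storing and querying schemaless JSON in a DocumentDB-style store. The endpoint, key and header names are hypothetical placeholders rather than the actual DocumentDB REST contract; only the SQL-over-JSON query style reflects the announcement.

    import requests

    # Hypothetical endpoint and credentials, for illustration only.
    BASE = "https://myaccount.example.net/dbs/appdb/colls/users"
    HEADERS = {"Authorization": "<master-key>", "Content-Type": "application/json"}

    # Store a schemaless JSON document.
    doc = {"id": "u42", "name": "Ada", "devices": ["phone", "tablet"]}
    requests.post(f"{BASE}/docs", json=doc, headers=HEADERS)

    # Query it back with a SQL-like syntax over the JSON documents.
    query = {"query": "SELECT * FROM users u WHERE u.name = 'Ada'"}
    resp = requests.post(f"{BASE}/docs", json=query,
                         headers={**HEADERS, "x-ms-is-query": "true"})  # hypothetical header
    print(resp.json())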
With Azure Search, developers can increase and decrease queries per second and document count as load changes to increase the cost effectiveness of searches. DocumentDB and Search are now available in the Azure Preview portal.
Azure expanded its network for private connections in July. As the platform expands and adds new features, it will need to keep the service online to maintain the confidence of developers and other customers; Azure suffered several service outages in August.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/microsoft-azure-launches-new-services-aimed-developers

8:05p
Docker CEO: Docker’s Impact on Data Center Industry Will Be Huge
Docker came on fast, unusually fast for the IT infrastructure world.
Launched as an open source project only about 18 months ago, the technology, which packages an application in a way that makes it portable across different data center and cloud environments, now enjoys support from the likes of Google, IBM, Microsoft and Red Hat, among many others.
Docker the company officially announced itself in June, launching the first production-ready release of its software. Earlier this month, a report citing anonymous sources said the company was close to completing a funding round of $40 million to $75 million.
If its current momentum holds, Docker and a handful of other like-minded startups are poised to overhaul the way developers build applications and the way IT infrastructure admins serve them.
We caught up with Docker CEO Ben Golub recently to hear his thoughts on the effect companies like his may have on the data center industry and on the role he sees Docker playing in the world of enterprise IT.
Data Center Knowledge: What impact do you expect Docker to have on the data center industry?
Ben Golub: I think there are going to be huge impacts across the data center industry. It’ll change how people think about virtualization; how they think about networking; how they think about storage. [It will] certainly drive significantly greater efficiencies. You can get 20x to 80x greater density using Docker than you could by making every application a full VM.
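That 20x-80x range depends heavily on workload, but simple arithmetic shows how dropping the per-VM guest OS drives it. A rough sketch with assumed, not quoted, overhead figures:

    # Illustrative arithmetic only: the per-instance overheads are assumptions,
    # not figures from Docker. The point is that removing the guest OS changes
    # density by more than an order of magnitude for small services.
    host_ram_gb = 256
    app_gb = 0.04           # assumed footprint of one small service (~40 MB)

    vm_overhead_gb = 1.0    # assumed guest OS + hypervisor cost per VM
    ctr_overhead_gb = 0.005 # assumed per-container cost (~5 MB)

    vms = int(host_ram_gb / (vm_overhead_gb + app_gb))          # ~246
    containers = int(host_ram_gb / (ctr_overhead_gb + app_gb))  # ~5688
    print(containers / vms)                                     # -> ~23x denser

With a larger application footprint the ratio shrinks, and with a smaller one it grows, which is roughly how a range as wide as 20x to 80x arises.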
DCK: What should users consider hardware-wise when using Docker?
BG: Docker has been approved and tested on 64-bit architectures, so pretty much all you need is a 64-bit server running some sufficiently modern Linux kernel, and Docker will run. The hardware isn’t a restriction. There are people who are doing things with ARM chips and Docker, and there are people that are doing it with Power systems. We haven’t tested those, but we suspect that Docker architectures will expand just the same way that Docker operating systems will expand.
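A minimal sketch of the kind of pre-flight check Golub describes, assuming a 64-bit machine and a reasonably recent kernel. Early Docker documentation cited kernel 3.8 or later, but treat the exact floor here as an assumption:

    import platform

    # Check the two requirements mentioned: 64-bit hardware, modern Linux kernel.
    arch = platform.machine()        # e.g. 'x86_64'
    kernel = platform.release()      # e.g. '3.13.0-24-generic'
    major, minor = (int(x) for x in kernel.split(".")[:2])

    print("64-bit:", arch in ("x86_64", "aarch64"))
    print("kernel new enough:", (major, minor) >= (3, 8))  # assumed 3.8 floor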
DCK: What does support from big companies, such as Google, Microsoft and IBM, mean for Docker?
BG: Docker has, in a very short amount of time, 17 months since we launched the open source project, become mainstream. And I think everybody is recognizing that Docker will be a fundamental and disruptive force in how applications are built, shipped and run.
DCK: How is Docker different from other Linux container technologies, such as Red Hat’s?
BG: Linux containers are a low-level component. Until Docker came around some people used containers but the use was very much restricted to large organizations, like Google, that had specialized teams and training. But the containers weren’t portable between different environments. With Docker we’ve made containers easy for everybody to use; we’ve made them portable between environments; we made them exceptionally lightweight and we built up a huge ecosystem around that.
DCK: Why did you go the open source route?
BG: We went open source because we thought that for Docker to succeed we wanted a huge ecosystem to grow up around us. We wanted Docker to work well with all the products that are above us in the stack, which includes a lot of open source tools like Chef and Puppet and Salt and Ansible, as well as everything below us. The Linux stack, OpenStack, every major cloud provider, etc. So being open was really the only option for us.
DCK: What makes Docker attractive for traditional enterprises?
BG: There are really two main use cases. There’s one, which is improving the software development lifecycle. And the other is making it much easier to scale, move across clouds in production.
It used to take weeks or even longer to go from the time a developer developed an application to the time it went through QA test, staging and production, and generally it would break multiple times along the way because of incompatibilities between different environments. With Docker now, you go to places like eBay or Gilt, and they’ll tell you that it takes minutes rather than weeks. The developer commits a change to source; the application is Dockerized automatically; that Docker container goes through whatever automated test system they want, and 90 percent of the time it goes directly to production. The 10 percent of the time it fails, it’s clear whether it’s inside the container and the developer needs to fix something, or it’s outside the container and ops needs to fix something.
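For a concrete picture of that flow, here is a minimal sketch that drives the standard docker CLI from Python. The image name, registry and test command are hypothetical placeholders, not eBay’s or Gilt’s actual pipeline:

    import subprocess

    IMAGE = "registry.example.com/myapp:abc123"  # hypothetical registry/tag

    def sh(*cmd):
        subprocess.run(cmd, check=True)  # raise and stop the pipeline on failure

    # 1. "Dockerize" the committed source: build an image from its Dockerfile.
    sh("docker", "build", "-t", IMAGE, ".")

    # 2. Run the automated test suite inside the freshly built container.
    sh("docker", "run", "--rm", IMAGE, "pytest")  # hypothetical test command

    # 3. On success, ship the exact same artifact that will run in production.
    sh("docker", "push", IMAGE)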
DCK: Are there applications for which using Docker doesn’t make sense?
BG: Right now we only support Linux applications. There’s a huge world of non-Linux applications for which it won’t make sense. We are [eventually] going to have non-Linux support. Not this year, but next year. It’s on the roadmap.
Often people look at Docker as a replacement for VMs, and Docker doesn’t do certain things that VMs do well, like let you take a Windows application and run it on a Linux box or vice versa. That’s totally not something you’d want to use Docker for today.
And Docker does not really support things where you need to freeze the state and live migrate. That’s coming over time, but people are often finding that with Docker it’s so fast and cheap to create and destroy containers that it can really change the way they think about state and the way they do applications.