Data Center Knowledge | News and analysis for the data center industry
Wednesday, December 4th, 2013
12:30p
Liquid Web Launches Storm Private Cloud

Hosting provider Liquid Web has introduced a quick way for an organization to get a private cloud up and running. Storm Private Cloud allows a user to purchase a server and split it into multiple virtual instances, creating a private cloud environment.
“It’s a bare metal server on the Storm platform that can be partitioned, and moved between our public and private clouds,” said Liquid Web spokesman Cale Sauter. Storm Private Cloud originally launched with select private Beta customers and is now widely available.
The product will appeal to several customer types:
- Businesses looking for new ways to partition separate hosting environments
- Cloud resellers
- Developers looking for efficiency and control
- Those interested in cloud but desiring the security of a dedicated server
The development was inspired by direct customer feedback. “Answering our customers’ demands by delivering the ability to deploy customized cloud slices within single servers allows us to offer a level of private cloud functionality we’ve been looking forward to introducing for quite some time,” said Liquid Web CTO Siena Fath-Azam.
Storm Private Cloud users can move instances between the public and private clouds and can customize each instance’s RAM, disk size, number of cores and more.
Liquid Web has data centers in Lansing, Michigan, and Phoenix, Arizona, and is in the process of opening a European location in response to growth.

1:30p
How to Move Bigger Data into the Cloud

Nicos Vekiarides, CEO of integrated cloud storage provider TwinStrata
It’s no secret that storage needs will continue to grow in the upcoming years. In fact, research from IDC projects installed raw storage to exceed 7 zettabytes (each zettabyte is 1 million petabytes) by 2017 as part of a staggering 16 ZB digital universe. What this means for IT organizations is an increased strain managing growing storage capacities. Further exacerbating matters are regulatory requirements extending data lifetimes to 10 years or more and effectively requiring data to remain accessible online for that extended period of time.
While cloud storage alone does not necessarily meet all of IT’s growing storage capacity needs, adding cloud-integrated storage to existing NAS or SAN environments makes it relatively easy to relocate data securely to a cloud provider, eliminating the need to maintain an ever-expanding on-premise storage infrastructure.
Offloading storage to the cloud may sound good on paper, but storing large amounts of data presents at least one up-front challenge: getting the initial upload to the cloud.
Importing Data to the Cloud
A little math can compute how long it takes to upload a large amount of data across a WAN. For instance, if you have an uplink speed of 100Mbit/sec, you should be able to push nearly 1TB per day. Before becoming too comfortable with that figure, consider that it is theoretical and does not account for other users sharing the WAN link, hops/latencies or other overhead that can slow down throughput. Even at a theoretical maximum of 1TB/day, 100TB of data may take over 3 months to upload – a relatively long and cumbersome process.
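To put the arithmetic in concrete form, here is a minimal sketch (in TypeScript, with assumed figures) of the same best-case calculation; like the estimate above, it ignores shared links, latency and protocol overhead:

```typescript
// Theoretical best-case upload time for a data set over a WAN link.
function daysToUpload(dataSetTB: number, linkMbps: number): number {
  const bitsPerTB = 8e12;                       // 1 TB = 10^12 bytes = 8 x 10^12 bits
  const bitsPerDay = linkMbps * 1e6 * 86_400;   // link speed x seconds per day
  return (dataSetTB * bitsPerTB) / bitsPerDay;
}

// A 100 Mbit/s uplink moves a little under 1 TB per day in the best case,
// so a 100 TB data set needs on the order of three months.
console.log(daysToUpload(1, 100).toFixed(2));   // ~0.93 days per TB
console.log(daysToUpload(100, 100).toFixed(0)); // ~93 days for 100 TB
```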
In instances where network bandwidth is not the best option for the initial upload, consider shipping your data to the cloud via a cloud provider import service (as offered by Google and AWS). You can ship disks containing your data directly to the cloud providers, who can load the data in one of their data centers or at a high-bandwidth access point with zero impact to your network. For large data sets, this can be the difference between weeks and months versus days to get data into the cloud.
What about Security?
While best practices dictate that corporate data should be encrypted at-rest in the cloud, security is sometimes a forgotten aspect of the import process. Transporting unencrypted storage can easily become the weak link in an otherwise tightly secured cloud environment.
Ideally, a data import process ought to follow the same practice of encrypting data at rest prior to transporting it. Would you ever hand your unencrypted data to a stranger? Well, whether the data is handled by a “trusted” shipping/transport company or a cloud provider, there is no reason to leave a window open for a breach by an unauthorized or unknown party.
Look for an import process that encrypts and encapsulates data into object format stored in the cloud prior to shipping the data from your premises. Following the same security practice used for storing data online in the cloud eliminates having to make any security compromises during the import.
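The article does not prescribe a particular tool, but as a rough sketch of the “encrypt before it leaves the building” practice, here is what encrypting an export object with Node’s built-in crypto module (AES-256-GCM, key held on premises) might look like; the file names and key handling are placeholders:

```typescript
import { createCipheriv, randomBytes } from "crypto";
import { readFileSync, writeFileSync } from "fs";

// Encrypt a file before it is copied to export media and shipped.
// Only the ciphertext, IV and auth tag leave the premises; the key does not.
function encryptForExport(inputPath: string, outputPath: string, key: Buffer): void {
  const iv = randomBytes(12);                        // unique IV per object
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ciphertext = Buffer.concat([cipher.update(readFileSync(inputPath)), cipher.final()]);
  const authTag = cipher.getAuthTag();               // integrity check for later decryption
  writeFileSync(outputPath, Buffer.concat([iv, authTag, ciphertext]));
}

// In practice the key would come from a key-management system, not be generated ad hoc.
const key = randomBytes(32);
encryptForExport("backup.img", "backup.img.enc", key);
```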
The Bottom Line
Cloud storage has become a viable alternative for both storing data online and protecting data by copying it offsite. While the process of loading large data sets into the cloud may seem ambitious and cumbersome, cloud import processes can significantly reduce the time requirements as well as the potential network impact.
An import process that follows best practices around security can provide a rapid data upload with the appropriate level of security that your corporate data demands.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

2:30p
PayPal and Groupon Go All In With Node.js

SAN FRANCISCO — Developers at staid technology companies such as eBay and PayPal have seen the perks of developing with the hip server-side JavaScript runtime Node.js. Many have watched it speed application development and lower memory utilization in comparison with more traditional programming languages.
“Developers can actually develop their applications quite fast,” said eBay’s senior director of platforms, Jigar Desai, during a panel at the Node Summit on Tuesday. With Node, developers can get programs into production “in days rather than weeks,” he said. That’s the biggest justification he sees for the movement toward Node.
Node.js is a server-side JavaScript runtime that is used to build fast, scalable network applications. An open source project, Node has been maintained and supported by cloud computing company Joyent, which this week offered paid support for Node.
Discoveries about the advantages of Node over, say, Java or Ruby on Rails have only made developers more eager to use it at work. That’s certainly been the case at PayPal.
“We’ve had to slow people down because they’re so excited to get into Node,” said Bill Scott, senior director of user interface engineering at the payment company. “… No one goes into Node application development until me, myself, my team and the engineering team approve those people. It’s not IT governance for it, but I don’t want the wrong types of developers working in Node.”
At PayPal, engineers built and ran the same application in Java and in Node, Scott said. As a result of the experiments, the company has made a major shift. “We don’t want to do Java anymore,” Scott said. “We want to do Node. This is much faster.” Nowadays, he said, every application is being built on Node.
Next to Scott and Desai on the panel, Groupon’s Sri Viswanath explained the rationale and benefits of moving from Ruby on Rails to Node across the entire company, over a period of about six months.
Three and a half years into Groupon’s lifecycle, life had become too complicated. It was time to explore alternatives to Ruby on Rails. It helped that Node executes JavaScript, which Viswanath characterized as relatively safe. High speed and agility were high priorities, and Node offered those qualities in application prototypes.
“Node.js is like a Ferrari,” Viswanath said. In his simile, developers drive at 200 mph, only to crash and trigger an explosion. But at least they can move at high speed.
Under Groupon’s old Ruby on Rails stack, it took three months just to change the color appearing across the website, he said. “After we moved to Node.js, it took us literally a week.”
The best part for Viswanath? Node.js still has not arrived at version 1.0, even almost five years since its original release. So presumably it has room to get even better.
At LinkedIn, too, speed of development and execution were important, but so was the use of physical resources. Code in Node.js “ran really efficiently in terms of its I/O utilization, and its memory utilization is really low,” said Kiran Prasad, senior director of mobile engineering at the social network. Those characteristics are important, because these days, he said, “you’re not really CPU-bound anymore. You’re memory bound and I/O bound.”
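As a generic illustration of that point (not code from any of the companies on the panel), a Node server hands each file read off to the event loop and keeps accepting requests while the I/O completes, which is why a single process stays memory- and I/O-efficient under load:

```typescript
import { createServer } from "http";
import { readFile } from "fs";

// One single-threaded process serves many concurrent requests: each file read
// is non-blocking, so the process never sits idle waiting on disk I/O.
const server = createServer((req, res) => {
  readFile("page.html", (err, data) => {
    if (err) {
      res.writeHead(500);
      res.end("error");
      return;
    }
    res.writeHead(200, { "Content-Type": "text/html" });
    res.end(data);
  });
});

server.listen(8080, () => console.log("listening on :8080"));
```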
Desai felt confident in calling Node “probably 1.5 times better” than Java, “but not 100 times better.” One issue: processes like template rendering can be quite CPU-intensive in Node.js.
“On the scalability side, there is probably more work to do,” he added. Instead of going all in, Desai talked about giving developers choices – such as Java, PHP or Node – and proceeding with caution, slowly, with Node.
Nevertheless, Desai expects that it will become more and more widely adopted in software stacks in the years to come.
“The way I see it, we have very thin front-end web applications, and we have layer services on the back-end side,” he said. “It’s almost emerging as a theme — we’re going to use more and more Node.js as front-end technology and Java or Scala (for an) asynchronous back end. That’s where I’m seeing the trend and curiosity of developers moving in that direction.”

3:00p
Red Hat Launches OpenShift Enterprise 2 PaaS Platform

Red Hat releases version 2 of its OpenShift Enterprise PaaS, Joyent launches enterprise support for Node.js, and HP Propel helps IT organizations become service brokers.
Red Hat launches OpenShift Enterprise 2. Red Hat (RHT) announced the general availability of OpenShift Enterprise 2, the latest version of its on-premise private Platform-as-a-Service (PaaS) offering. The new version offers data center infrastructure integration, an advanced administration console, support for even more programming languages, and new collaboration capabilities. With expanded global availability, a wider range of developers can realize the benefits of private PaaS technology for their cloud deployments. “PaaS represents the fastest growing segment of cloud computing, and Red Hat offers the industry’s only full suite of open source PaaS solutions for both public and private PaaS,” said Ashesh Badani, general manager, Cloud and OpenShift at Red Hat. “OpenShift Enterprise 2 extends this leadership and delivers what users want – an application-driven enterprise – by making PaaS even easier to consume.”
Joyent offers enterprise-grade Node.js expertise. Joyent announced at Node Summit 2013 its Node.js Core Support offering, which provides direct access to severity-one support for enterprise-grade Node.js applications. Node.js has earned the respect of enterprise cloud developers and is increasingly relied upon by companies like Uber, Dow Jones, Yahoo, PayPal, and eBay for replacing their web front ends and building API servers. Joyent’s new service gives developers who are building production-grade Node.js applications access to Joyent’s years of experience in running its own public cloud. “Node.js has been a fundamental design choice for all of our products — The Joyent Public Cloud and its related components, including the customer portal and our Joyent Manta Storage Service, as well as our Joyent SmartDataCenter Private Cloud software. We learned invaluable lessons while building these large distributed systems, and this new support offering reflects that expertise,” said Bryan Cantrill, senior vice president of engineering at Joyent. “As corporate stewards of Node.js, it is our duty to encourage and support developers who are building business applications in Node.js. That is how you build a very successful, collective ecosystem around an open source project.” This new enterprise-grade stand-alone Node.js support offering is in addition to Joyent’s current full-stack Node.js Support, which provides Joyent Public and Private Cloud customers technical support for running and operationalizing an entire Node.js application and technology stack.
HP Launches Propel cloud-based solution. HP (HPQ) announced Propel, a cloud-based service solution that enables IT organizations to deliver self-service capabilities to end users. The new solution lets IT transform into a broker, with a self-service portal, catalog and exchange to deliver services within an organization with greater speed, flexibility, responsiveness and efficiency. Available on both desktop and mobile platforms, the free version of HP Propel includes a standard service catalog and the HP Propel Knowledge Management solution, which accelerates user self-service with immediate access to needed information. Clients also can integrate their on-premises service management solutions through the HP-hosted Propel Service Exchange. “As our business becomes more complex, we believe that open framework and consumerized user experience are central to enabling our self-service strategy aimed to improve performance and cost control,” said Edgar Aschenbrenner, chief information officer, E.ON. “The implementation of HP Propel service portal is a key step towards transforming our IT organization into a strictly service oriented multi-supplier environment.”

4:25p
Activists Target Water Supply for NSA Data Center

The NSA data center in Bluffdale, Utah. (Photo by swilsonmc via Wikipedia)
An activist group is mounting an effort to force the state of Utah to shut off the water supply to the National Security Agency’s huge data center in Bluffdale, Utah. The OffNow coalition wants to pass state legislation that “would ban Utah from participating with or assisting the NSA in any way in the implementation of their warrantless spying program.”
The effort appears to be a political longshot, but gained widespread visibility this week when it was featured on USNews, Time and The Drudge Report.
The OffNow coalition is targeting what it calls the “Achilles heel” of the NSA data center – the fact that it will require large volumes of water (more than 1 million gallons a day by some estimates) to cool its servers. The NSA purchases its water from the city of Bluffdale, which has a wholesale supply agreement with the Jordan Valley River Conservancy District. OffNow notes that the Jordan Valley district is a subdivision of the state of Utah, which presents an opportunity to use state-level legislation to pursue a denial of water service to the NSA.
The group is invoking a legal principle known as “anti-commandeering,” which holds that the federal government doesn’t have the authority to force the states (or local communities) to carry out federal laws or regulatory programs. OffNow says a Utah legislator has agreed to introduce its bill, but is not yet willing to come forward until the legislation is finalized.
To date, most of the political influence in the region has been wielded to support the NSA data center project, including an agreement under which Bluffdale will sell the NSA water at below-market prices to boost economic development in the town. The NSA business provides enough revenue for Bluffdale to build extensive water infrastructure, which will allow it to open up new land for commercial development.
Without the NSA revenue, it would have been 15 years before Bluffdale could have afforded to bring water to that area, Bluffdale City Manager Mark Reid told the Salt Lake Tribune. “It got us a ton of infrastructure, water infrastructure, in places we wouldn’t have it,” Reid said.
The enormous volume of water required to cool high-density server farms is making water management a growing priority for data center operators. The move to cloud computing is concentrating enormous computing power in mega-data centers containing hundreds of thousands of servers. In many designs, all the heat from those servers is managed through cooling towers, where hot waste water from the data center is cooled, with the heat being removed through evaporation. Most of the water that remains is returned to the data center cooling system, while some is drained out of the system to remove any sediment, a process known as blowdown.
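As a rough back-of-the-envelope sketch (assumed figures, not from the article), the makeup water a cooling tower draws is dominated by evaporation, plus the blowdown fraction described above; a hypothetical heat load in the tens of megawatts lands in the range of the Bluffdale estimates:

```typescript
// Back-of-the-envelope makeup-water estimate for an evaporatively cooled facility.
// All constants are approximations; real towers vary with climate and design.
const LATENT_HEAT_KJ_PER_KG = 2260;  // heat removed per kg of water evaporated
const LITERS_PER_GALLON = 3.785;

function makeupWaterGalPerDay(heatLoadMW: number, cyclesOfConcentration: number): number {
  const evapKgPerSec = (heatLoadMW * 1000) / LATENT_HEAT_KJ_PER_KG;   // ~1 kg per liter
  const evapGalPerDay = (evapKgPerSec * 86_400) / LITERS_PER_GALLON;
  // Blowdown drains part of the circulating water to limit sediment build-up.
  const blowdownGalPerDay = evapGalPerDay / (cyclesOfConcentration - 1);
  return evapGalPerDay + blowdownGalPerDay;
}

// A hypothetical 75 MW heat load at 4 cycles of concentration works out to
// roughly a million gallons per day, the scale cited for the Bluffdale site.
console.log(Math.round(makeupWaterGalPerDay(75, 4)));
```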
As the scale of these huge facilities has increased, data center operators have begun working with local municipalities, water utilities and sewage authorities to reduce their impact on local potable water supplies and sewer capacity.

7:28p
Cologix Increases Credit Facility To $165 Million

Cologix cabinets inside the company’s data center at the Infomart in Dallas. (Photo: Rich Miller)
Cologix has armed itself with a substantial credit facility to fuel further acquisition and expansion. The data center and interconnection company has increased its credit facility to $165 million, with amended terms to enable further growth initiatives. The credit facility also provides an option to flex further through a $50 million term loan. Cologix said the credit facility was “substantially oversubscribed,” indicating strong lender interest in participating.
The company has been busy in 2013. It acquired the JAX meet me room in Jacksonville, expanded into a new facility in Vancouver, and opened a second site at the Dallas INFOMART. It’s also been busy expanding its interconnection product to new markets. The credit facility will allow the company to keep this momentum going into 2014.
“Since inception, Cologix has focused on building a fully integrated, network-neutral, interconnection-focused colocation platform through acquisition and expansion,” said Brian Cox, Chief Financial Officer, Cologix. “We are pleased with the market’s response and look forward to putting the capital to work for the benefit of our customers, employees and investors. It was most pleasing to have all four existing banks participate in the new facility and we are delighted to welcome the five new lenders to the group.”
RBC Capital Markets and TD Securities are lead arrangers for the transaction with Royal Bank of Canada serving as Administrative Agent and TD Securities serving as Co-Syndication Agent. The remainder of the syndicate includes Scotia Bank, Capital Source, ING, JPMorgan Chase, Bank of America, CIT, and Raymond James.
“RBC Capital Markets has been Cologix’s lead arranger since 2011 and we are pleased with the strong level of lender interest for this transaction,” said Scott Johnson, Managing Director, RBC Capital Markets. “Furthermore, the debt upsizing positions the company well for both organic growth opportunities and potential acquisitions.”
TeliaSonera Deploys PoPs in Jacksonville and Dallas
The company additionally announced that international carrier TeliaSonera has deployed Points-of-Presence (PoPs) in Cologix’s 421 West Church Street data center in Jacksonville, FL, and at the Dallas INFOMART. This is an expansion of a relationship that started in Toronto.
“Through our relationship with TeliaSonera in Toronto, we have seen firsthand their impressive growth and we are committed to validating their choice to trust Cologix to support their growth in both Dallas and Jacksonville,” stated Jay Newman, Chief Sales Officer, Cologix. “TeliaSonera’s expansion into Jacksonville also reinforces its position as an emerging and dynamic Latin American as well as North American interconnection market.”
Cologix is headquartered in Denver, Colorado, with facilities in Dallas, Jacksonville, Minneapolis, Montreal, Toronto and Vancouver. With more than 330 network choices and sixteen prime interconnection locations, Cologix currently serves over 600 carrier, managed services, cloud, media, content, financial services and enterprise customers.