Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, October 16th, 2013

    Time Event
    11:30a
    Network News: CenturyLink, Cisco, Juniper

    CenturyLink and Cisco help power ultra-fast 100 Gbps networks, and Juniper and Thursby Software partner on a secure authentication solution for government agencies.

    CenturyLink provides 100 Gbps to Colorado customer

    CenturyLink (CTL) announced that it has launched its first dedicated 100 Gigabit per second (Gbps) data networking circuit for a CenturyLink business customer through a direct connection to the company’s recently upgraded 100 Gbps nationwide network. Colorado company DigitalGlobe will use CenturyLink’s 100 Gbps Optical Wavelength Service to launch a new product allowing its customers direct access to satellite mapping data. The 100 Gbps circuit allows DigitalGlobe to transfer massive amounts of data at lightning speed from its headquarters in Longmont, Colorado, to its offsite data center, where its customers will be able to access the data remotely. “This is a significant day for the industry and for the state of Colorado. CenturyLink’s deployment of commercial 100 Gbps service gives businesses bandwidth for the big data applications they need to excel,” said Scott Russell, vice president and general manager for CenturyLink in Denver and Northern Colorado. “Our speed and capacity upgrades provide DigitalGlobe with an end-to-end optical network that improves efficiency and productivity. Simply put, 100 Gbps is the best the industry can offer.”

    Russian Telecom launches Cisco 100G Network

    Cisco announced that Russian TransTelecom (TTK) has launched its 100G Ultra Long Haul dense wavelength-division multiplexing (DWDM) network in the St. Petersburg-Moscow and Moscow-Chelyabinsk-Yekaterinburg segments. The coherent optical technology increases the backbone network capacity from 40 to 100 gigabits per second per optical carrier. Launch of the segments, with a total length of over 3,000 km, completes the first stage of the company’s project to create a national 100G ultra long haul DWDM network. Cisco optical solutions have made it possible to create the world’s longest unregenerated data transmission segment in commercial landline networks: 2,600 km, from Moscow to Yekaterinburg. “The ULH DWDM technology is currently the most advanced in the world,” said Artem Kudryavtsev, president of TTK. “It will increase the total throughput of TTK’s launched network sections by six times. Its implementation will guarantee new connection and data transfer quality for our corporate customers. TTK’s private customers can count on consistently high Internet connection speed even with rapid subscriber growth and a steady increase in TransTeleCom’s subscriber traffic. We are currently implementing the technology in other network segments.”

    Juniper and Thursby Software partner 

    Juniper Networks (JNPR) announced it has partnered with Thursby Software to enable government agencies to use smartcard authentication on Apple iOS devices. Through the integration of Juniper’s Junos Pulse Secure Access Service with Thursby’s PKard software and card reader hardware, government employees can now use the same smartcards in use today for all levels of authentication — both physical and online — to connect to private or carrier mobile networks through their iPhones or iPads. Use of smartcards as a means of authentication has been mandated and incorporated into service by many government agencies and ministries around the world, and the partnership allows Juniper to address those same needs for other nations’ agencies and ministries. “Providing a strong authentication option for government employees to securely connect to networks with their smartphones is another example of Juniper’s leadership in network innovation,” said Brian Roach, vice president, federal at Juniper Networks. “Juniper has provided secure, remote connectivity via mobile devices to enterprises for years, and now with Thursby, government employees have the option to securely connect to networks from their iOS device of choice.”

    1:12p
    Flash Storage: The Smart Way to Deploy Cost-Effectively

    Alan Atkinson is vice president and general manager of Dell Storage.

    ALAN ATKINSON
    Dell

    The exponential growth of data and the need to access it put huge demands on storage infrastructures. According to IDC, businesses’ storage demands are growing in excess of 50 percent a year while their total available storage capacity is growing at barely half that rate, presumably due to cost constraints.

    Meanwhile, virtualization and the explosion of I/O-intensive applications for big data analytics, database transactions, and more have given IT leaders and their end users an appetite for higher levels of storage performance. CIOs have to choose between performance and cost, and increasingly, those choices are becoming more challenging to call.

    Between Moore’s Law and technologies like virtualization, cloud and flash memory, the IT industry is consumed by a “new, better, best” mentality, with a tendency to discard “old” technology, even though it continues to provide useful service.

    Unfortunately, a complete “rip ‘n replace” strategy can be more expensive and a lot more complex. It makes the most sense to add new technology, like flash and tiering, to an existing infrastructure to address new demands, improve performance or lower costs for existing applications.

    Technologies like flash, or solid-state memory, and tiering have evolved to the point where, when combined, they are proven cost-efficient for the data center. They provide the speed, agility, flexibility and cost-efficiencies that are the alternative to a full “rip and replace.” And, when deployed intelligently, users can get flash performance at the price of disk today.

    The Fastest Storage Medium

    Flash, which comes in a variety of flavors, is the new high-performance leader, with each flavor delivering significant advantages over high-end hard disk drives (HDDs). Disk is not going to disappear anytime soon, but it will be relegated to less demanding work over time.

    Flash benefits range from cost for performance – the dollar per IOPS (input/output operations per second) is over seven times cheaper for flash than for HDDs – to lowering rack space and power consumption costs. Beyond the infrastructure, flash also boosts employee productivity and helps meet business critical SLAs.

    When IOPS is the key metric, flash clearly outperforms disk. While a conventional 15k HDD can deliver approximately 200 IOPS, a single SSD can provide thousands of IOPS in the same form factor.
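    The IOPS figures above translate directly into drive counts. As a rough sizing sketch, with the caveat that the 20,000 IOPS per-SSD figure is an illustrative assumption (real numbers vary widely by model and workload):

```python
# Rough sizing sketch: how many drives it takes to reach a target IOPS level.
# The ~200 IOPS figure for a 15k HDD comes from the article; the 20,000 IOPS
# per-SSD figure is an illustrative assumption.
import math

HDD_IOPS = 200      # approximate random IOPS for a 15k RPM hard drive
SSD_IOPS = 20_000   # assumed per-SSD IOPS, for illustration only

def drives_needed(target_iops: int, iops_per_drive: int) -> int:
    """Minimum drive count to satisfy a target IOPS load (ignores RAID overhead)."""
    return math.ceil(target_iops / iops_per_drive)

target = 100_000  # e.g. a busy transactional database
print(drives_needed(target, HDD_IOPS))  # 500 HDDs
print(drives_needed(target, SSD_IOPS))  # 5 SSDs
```

    The two-orders-of-magnitude gap in spindle count is also where flash recoups rack space and power costs.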

    The two most popular flash technologies are single-level cell (SLC) and multi-level cell (MLC, including enterprise-class eMLC). SLC offers roughly ten times the write endurance of MLC and about three times its sequential write speed, with comparable sequential read performance, but at more than four times the cost. Flash storage also comes in various form factors and is being deployed in all-flash models, in hybrid models that mix flash and HDDs, and inside servers (e.g., PCIe cards).
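    Those SLC-versus-MLC ratios can be captured as data to see why the price premium can still pay off. Only the ratios come from the text; the MLC baseline values below are illustrative assumptions:

```python
# SLC-vs-MLC trade-offs as cited above: ~10x endurance, ~3x sequential write,
# more than 4x the cost. The MLC baseline numbers are illustrative assumptions.
MLC = {"endurance_cycles": 10_000, "seq_write_mbps": 150, "cost_per_gb": 1.00}
SLC = {
    "endurance_cycles": MLC["endurance_cycles"] * 10,  # ~10x more persistent
    "seq_write_mbps": MLC["seq_write_mbps"] * 3,       # ~3x faster sequential write
    "cost_per_gb": MLC["cost_per_gb"] * 4,             # more than 4x the cost
}

# Despite the price premium, SLC's cost per unit of write endurance is lower,
# which is why tiered designs reserve it for write-heavy data.
slc_cost_per_cycle = SLC["cost_per_gb"] / SLC["endurance_cycles"]
mlc_cost_per_cycle = MLC["cost_per_gb"] / MLC["endurance_cycles"]
print(slc_cost_per_cycle < mlc_cost_per_cycle)  # True
```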

    Enterprise adoption is growing, with 30 percent of enterprises already using solid-state storage and another 32 percent planning to deploy it. Forrester expects that flash will become ubiquitous in transaction-heavy environments, not just performance-sensitive ones, in the near future.

    Disk’s Day is Not Done

    Though all-flash storage excels in high-performance use cases, disk and hybrid systems will continue to serve major roles in data centers. Disk will be around for at least another decade and will continue to complement flash-based storage for applications like databases and email and for supporting virtual server environments.

    While all-flash arrays compare favorably to enterprise arrays built on high-performance hard drives, on cost per gigabyte they do not compare favorably to high-capacity hard drives.

    Unstructured data growth dictates the need for dense, affordable bulk storage of less critical data, which disk drives support most affordably. Unless thousands of people are accessing the same file at the same time, as in a web front-end application, hard drives still make sense.

    Tiering Delivers the Best of Both Worlds

    Tiering enables CIOs to seamlessly bridge the price/performance chasm and assign data and applications to the most appropriate storage medium. It involves assigning different categories of data to different types of storage media to ensure optimal performance and the lowest total cost.

    Tiering can be considered the equivalent of an automated workflow that knows which “packages” of data require “immediate express delivery” and places them on that first tier (in this instance, flash) and which “packages” of data can be safely stored on less expensive and slightly slower second tiers. Not only can tiering allocate data between the different media, but innovative vendors have developed the capability to also automatically allocate data across write-intensive SLC and read-intensive MLC SSDs. It improves performance for data-intensive applications and workloads in a high-performance storage solution that can achieve over 300,000 IOPS.
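    The automated "workflow" analogy above can be sketched as a simple placement policy: hot, write-heavy data lands on SLC, warm read-mostly data on MLC, and cold data on disk. The thresholds and tier names below are illustrative assumptions, not any vendor's actual algorithm:

```python
# A toy version of an automated tiering policy: each volume is assigned to a
# tier by recent activity. Thresholds and names are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Volume:
    name: str
    iops_last_day: int   # observed access rate
    write_heavy: bool    # mostly writes vs. mostly reads

def assign_tier(vol: Volume) -> str:
    if vol.write_heavy and vol.iops_last_day > 5_000:
        return "slc_ssd"  # "immediate express delivery": hot, write-intensive data
    if vol.iops_last_day > 500:
        return "mlc_ssd"  # warm, read-mostly data on the cheaper flash tier
    return "hdd"          # cold data on dense, inexpensive disk

print(assign_tier(Volume("oltp-db", 40_000, True)))   # slc_ssd
print(assign_tier(Volume("reports", 2_000, False)))   # mlc_ssd
print(assign_tier(Volume("archive", 10, False)))      # hdd
```

    A production array applies logic like this continuously and at sub-volume granularity, migrating blocks rather than whole volumes, but the placement decision is the same in spirit.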

    The ability for a storage array to automatically tier across multiple SSD drive types is new and quite revolutionary, with numerous advantages. While many available flash arrays rely on write-intensive SLC drives, a balance of MLC and SLC offers customers better overall cost for performance. Overall flash reliability increases when an array uses the more vulnerable MLC tier mostly for reads, and the capacity of the more expensive SLC tier can be kept to a minimum, just large enough to handle inbound write traffic. As a result, this model dramatically reduces the overall cost of implementing flash.

    The attraction of all-flash arrays is the predictable nature of performance. Users do not have to worry about a tier or cache miss causing data to be served from hard disk, but, without tiering, this comes at a high cost.

    To be clear, all-flash storage, with or without tiering, does not solve every storage challenge on its own. Vendors offering only all-flash arrays today typically lack the full enterprise-class features (e.g., advanced replication, replays and management) and industry integrations that more established vendors offer. There are also significant cost savings in adding new capabilities to a user’s existing storage environment, avoiding the costs associated with the “rip ‘n replace” strategy mentioned earlier.

    A storage infrastructure that can easily morph into a hybrid array, one that mixes SLC, MLC and disk, can further reduce costs and increase capacity, offering a much lower price per GB than all-flash arrays while providing the performance of flash. As a result, users are able to get flash performance when it’s needed, at a price comparable to an all-disk solution.

    While flash is growing in adoption, the real value is in tiering that optimizes every application and every volume for the best combination of price and performance. It gives users the best of both worlds: data is written to the fastest tier on SLC drives and, as it ages, is automatically moved to MLC drives and eventually to slower, much less expensive traditional HDDs.

    By taking this innovative approach and adding new capabilities to their existing storage infrastructure, users are finding they can get the flash performance they need when they need it, and have improved storage performance while staying closer to the cost of a disk solution.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    Coho Data Emerges From Stealth With SDN Storage Appliance

    Coho Data this week introduced its DataStream appliance to take advantage of SDN and PCIe flash technology to accelerate storage. (Photo: Coho Data)

    Emerging from stealth mode Tuesday, Andreessen Horowitz-backed startup Coho Data unveiled an SDN storage appliance called Coho DataStream, which packages key elements of the public cloud – including pay-as-you-grow economics, rapid scaling and high resiliency – into an on-premise solution. Coho offers a software-defined storage model that combines commodity hardware with software tuned for the latest high-performance PCIe flash technologies.

    “As we saw with Nicira and networking, Coho Data is shaking up how storage has traditionally been delivered; instead of focusing on the storage hardware, it’s all about using software to extract the best performance from commodity hardware,” said Andreessen Horowitz general partner Peter Levine. “I’m thrilled to work with the founding team, comprised of veteran product visionaries and architects, to usher in storage for the cloud generation.”

    Coho Data’s founders built the DataStream to address underlying scale and performance limitations in traditional storage architectures. It starts with a cloud-inspired storage model that combines scale-out software on commodity hardware, and adds a storage stack optimized for PCIe flash and the use of software-defined networking (SDN) to embed storage intelligence in the network.

    Redesigning the Storage Stack

    “Even as new advancements in flash memory have come on the scene, the storage industry has remained stagnant, relying on a 30-year-old architecture to deliver performance and accessibility,” said Ramana Jonnala, CEO of Coho Data. “By redesigning the storage stack itself, we have taken the best ideas from public cloud-based architectures and improved them for demanding on-premise datacenters. DataStream uses sophisticated software to take advantage of flash in such a way that it can be used for all applications, not just the top tier ones. Whether data is in the public or private cloud, it should be on a storage architecture that can meet the scalability and performance today’s cloud generation companies need, at the pricing they demand.”

    The Coho DataStream provides up to 180K IOPS per 2U appliance and can be scaled linearly, delivering twice the price/performance of all-flash arrays. Starting at $2.50/GB before deduplication and compression, businesses can begin with a single Coho DataStream chassis of 40TB and grow incrementally as needed, without the big upfront expenditure of traditional arrays. The Coho DataStream product line is currently installed in large enterprise private cloud environments and will be generally available later this year.
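    Taking the quoted figures at face value, the entry cost works out simply. This sketch assumes decimal units (1 TB = 1,000 GB), as storage vendors typically quote:

```python
# Back-of-envelope entry cost for a single chassis, using the article's figures
# ($2.50/GB before deduplication and compression, 40TB starting chassis).
price_per_gb = 2.50
chassis_gb = 40 * 1000           # 40TB chassis in GB (decimal units assumed)
print(chassis_gb * price_per_gb) # 100000.0 -> roughly $100,000 to start
```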

    “Coho Data is delivering a promising and highly innovative approach that enables enterprises to provide internal storage services in powerful, scalable, and efficient ways that challenge the notion that the public cloud is the only route to economy and flexibility,” said Mark Peters, Senior Analyst, Enterprise Strategy Group. “ESG Lab’s testing showed that Coho Data users will not only get predictable performance as they grow their environment using the product’s pay-as-you-grow model, but will also enjoy eliminating many of the configuration and management challenges that are endemic with traditional monolithic storage.”

    3:09p
    Media Temple, GoDaddy Talk Acquisition, Growth and Culture

    Brought to you by The WHIR.

    When Demian Sellfors co-founded Media Temple in 1998, he had planned a three-year exit strategy. Fifteen years later, GoDaddy has acquired (mt) Media Temple for an undisclosed amount, and (mt) CEO Sellfors says the outcome of his journey is nothing like what he expected it to be.

    “This is not at all what I had in mind originally. I like to joke that I’m on year fifteen of the three year plan. I was hoping that I could get this done much sooner but as you can probably imagine I’m a very patient person, and I woke up every day as if I was building a 100 year old company,” Sellfors says. “I also never would have thought that I would sell my company to GoDaddy. It wasn’t until I met Blake [Irving, GoDaddy CEO], and Jeff [King, SVP and GM Hosting, GoDaddy], and some of the other GoDaddy team that I realized the company was indeed absolutely transforming.”

    It’s a transformation that has been months in the making. When Irving took the helm at GoDaddy nine months ago, the company was in need of a makeover. As one of the largest hosting companies in the world, GoDaddy was obviously doing many things right, but alienated customers, particularly women, with its aggressive marketing campaigns.

    “When the market you serve is 48 percent women, you have to change the way you speak to that audience,” Irving says. Part of that shift was changing the messaging in its TV commercials to focus on its small business users. The focus on SMBs became central to its growth strategy, and (mt) marks its sixth acquisition in 15 months.

    “[GoDaddy has] changed its strategy, it is changing its thinking, its advertising, and I can tell you with a straight face that this is not the same company that people are used to,” Sellfors says. “The fact that it’s going to keep Media Temple autonomous and independent, the fact that we are going to be able to stand on a bigger stage and take our philosophy to a greater number of people is just absolutely fantastic.”

    According to Irving, operating Media Temple as a separate business is a critical part in making this acquisition successful. (mt) will continue to operate out of Los Angeles, and its 225 employees will stay on board. The only big change is the departure of Virb, a website builder that (mt) acquired but never fully integrated, which is being bought back by its original founders with investments from Sellfors and (mt) co-founder John Carey. 

    “People chose Media Temple. They didn’t choose GoDaddy. They chose Media Temple for a reason and we are not going to disrupt that in any way, shape or form,” Irving says. “In fact, what we will do is provide investment to help it grow that value proposition globally. That’s our goal and that’s our commitment to the company.”

    While Irving says the cultural synergies between GoDaddy and Media Temple were “startling,” there are also notable differences, including the way they serve developers.

    “We really see ourselves as developers building solutions for other developers,” Media Temple president Russell Reeder says. “One of the big ‘ah-has’ for Media Temple has been going through the due diligence process and looking behind the covers of Go Daddy. We are very impressed. There’s nothing to be apologetic about by being acquired by the largest hosting company in the world. The amount of tech-savvy individuals, and the solutions they have behind the scenes, it was really eye opening.”

    When the news of the acquisition broke on Tuesday, many (mt) customers expressed concern about supporting GoDaddy. Tweets about shooting elephants and half-naked GoDaddy girls abounded. Those stereotypes don’t align with where GoDaddy is today, according to Sellfors.

    “When you look at the change that GoDaddy has been able to make over the last few months, it’s amazing,” Sellfors says. “The acquisition, the changing of the brand, the new management team…there are people that are holding on to things that are years old. I think that [Media Temple] is lucky enough that we’ve been able to see under the curtain a bit more than others, but I think people will be very impressed as they see GoDaddy grow.”

    So where will that growth take GoDaddy? Beginning in 2014, GoDaddy will extend its services to Latin America, adding 60 new markets and 30 languages.

    “We’re definitely looking forward to the growth that GoDaddy can provide us,” Reeder says. “We’ve grown traditionally over the past 15 years through word-of-mouth. We’ve had great growth but now being part of GoDaddy we really see that we’re going to be able to offer more technical solutions to those web designers and web professionals globally.”

    Original article published at: http://www.thewhir.com/web-hosting-news/media-temple-godaddy-talk-acquisition-growth-and-culture

    6:00p
    First Look: Facebook’s Oregon Cold Storage Facility

    Facebook’s cold storage systems incorporate Western Digital hard drives. (Photo: Jordan Novet)

    PRINEVILLE, Ore. - Last Thursday, Facebook began migrating data – primarily pictures – to its newly constructed cold storage facility, within walking distance from its two huge data halls containing thousands of servers. On Tuesday, it opened the facility to reporters, providing a look at a hyperscale implementation of custom-built long-term storage.

    The new building can be likened to an attic for cost-effectively housing data that hasn’t been accessed in a while – and might never be again – but still ought to be kept around.

    “What’s really different about this place … (is that it’s intended for the) storage of data over the long term,” said site director Chuck Goolsbee, as he entered the new building on Facebook’s campus in the middle of the Beaver State.

    As Facebook users add hundreds of millions of new photos to the site every day, the company is responding by treating different kinds of data in different ways. Not every company can afford to create custom designs to meet its needs, but it could be that the tiered approach will become more popular as data proliferates, not least because Facebook has revealed many aspects of the cold storage through the Open Compute Project.

    The racks that executives showed off were largely clustered together in a small portion of the available square footage inside the building. The few that were on site sat several feet away from one another. But this configuration won’t last. Over time, Facebook will add racks full of cold storage on either side of those currently in place, and the new racks will connect to the switches inside these initial racks. After all, the hard disks are connected to a couple of nodes in each rack, leaving plenty of spare ports on the switches to fill.


    Specifications for Facebook’s cold storage, shown here at the company’s data center in Prineville, Ore., have been made public through the Open Compute Project. (Photo: Jordan Novet)

    Each disk in the cold storage gear can hold 4 terabytes of data, and each 2U system contains two levels of 15 disks. In other words, each unit can handle 120 terabytes. A rack could hold 16 of these storage systems, allowing for 2 petabytes of cold storage in a rack.
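    The capacity arithmetic in that paragraph checks out:

```python
# Cold storage capacity math from the figures above: 4TB disks, two levels of
# 15 disks per 2U system, and 16 systems per rack.
disk_tb = 4
disks_per_system = 2 * 15            # two levels of 15 disks each
system_tb = disk_tb * disks_per_system
rack_tb = system_tb * 16             # a rack holds 16 of these 2U systems
print(system_tb)                     # 120 TB per 2U system
print(rack_tb / 1000)                # 1.92 PB per rack (the article rounds to 2)
```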

    The disks are rarely spinning – perhaps one will be at any given time. As a result, these racks use less power than Open Compute racks filled with servers, and there’s “no need to put electrical infrastructure on each row,” Goolsbee said. Instead, power can be distributed across multiple racks.

    Goolsbee doesn’t think Facebook is the only company with good reason to consider implementing highly efficient tiers of storage. Other webscale companies could be inclined to do so, if they haven’t already. Web companies with smaller scale will follow suit, then enterprises and eventually governments, he said.

    Then again, wholesale copying might not be the game plan.

    “It’s not about who builds this, but what can you build from these basic blocks. (Companies) can build exactly what they need,” said Goolsbee, who previously ran the digital.forest colocation facility in Seattle.


    Chuck Goolsbee, site director of the Facebook Prineville data center campus, with one of the company’s new cold storage racks. (Photo: Jordan Novet)

    While adoption of Facebook’s take on cold storage might not come overnight, companies can access the designs right now: the cold storage specifications for the Open Vault storage gear are available through the Open Compute Project.

    Facebook director of infrastructure Jason Taylor called for flash for cold storage earlier this year. Even so, there was no flash-based cold storage in the new building. “It’s just not ready for cold storage, for obvious reasons,” Goolsbee said. But given Taylor’s recent comments, that could change in the years to come, as the price of flash per gigabyte gets closer and closer to that of hard disk storage.

    Cold storage won’t be installed at every Facebook data center, just at the Prineville site and the one in Forest City, N.C., a spokesman said. And that should be fine for now. Facebook is projecting that the cold-storage building will be filled to capacity by 2017, Goolsbee said.

    6:45p
    Apple Quietly Builds Its Prineville Data Center

    Apple’s data center in Prineville, Ore. (Photo: Jordan Novet)

    PRINEVILLE, Ore. - Apple has said little in public about its data center here, leaving even local officials wondering what’s inside and how the site could expand in the future.

    During a Tuesday tour of Facebook’s new cold-storage facility at its site across the highway, reporters caught a glimpse of the Apple data center, which is expected to support the company’s iCloud service.

    The Prineville campus is part of a major expansion of Apple’s data center capacity that began with the opening of a 500,000 square foot data center in Maiden, North Carolina, in 2010. The company is building new server farms in Prineville and Reno, Nevada, and is rumored to be planning another in Hong Kong.

    While crews bring infrastructure into the Prineville data center itself, they are also constructing a power substation nearby.


    Apple is constructing a power substation south of its data center in Prineville, Ore. (Photo: Jordan Novet)

    Plans originally filed with local jurisdictions showed berms that would hide the initial Apple building. But the large data center near the first building can easily be seen from the highway just north of the site.


    Apple is building a data center south of Facebook’s in Prineville, Ore. (Photo: Jordan Novet)

    The site can also be seen from above. Aerial photos on file with county planners show a long corridor, or spine, connected with offices as well as two data halls (referred to as “modules”) for housing servers and other computing infrastructure. The company could add six more modules on to the spine of the building, and the entire structure could well be replicated just south of the current large building.


    Aerial view of Apple’s data center in Prineville, Ore. (Photo: Jordan Novet, courtesy of Crook County)

    It’s unclear how long it will be before Apple decides to expand its land ownership and data center square footage in Prineville. But to make sure the small city is ready to host sudden expansion — perhaps a data center or renewable energy generation source — city of Prineville officials are now considering the expansion of the city’s urban growth boundary to incorporate a 96-acre lot directly to the east of Apple’s 160-acre site. That move, along with a zone change, could help the city succeed in large-scale economic development, “particularly expansion of the Apple Data Center site,” according to a city ordinance.


    The city of Prineville is discussing the idea of expanding its urban growth boundary to include a 96-acre lot next to Apple’s site. (Photo: Jordan Novet, courtesy of Crook County)

    9:16p
    Cloud Channel Summit

    The Cloud Channel Summit will offer participants an opportunity to focus on building successful alliances in the cloud. The one-day event will be held November 4 at the Computer History Museum in Mountain View, CA.

    Leading cloud vendors will be on hand to talk about enlisting channel partners to enhance their solutions, especially with an eye toward meeting the needs of geographic and vertical markets.

    The Cloud Channel Summit aims to provide a valuable meeting place for channel and partner development executives to share industry best practices and establish new relationships. The event organizers plan to draw key channel and business development executives from major Cloud vendors, VARs, ISVs, SIs, hosting companies, and other service providers.

    For more information, visit the Cloud Channel Summit site. DCK readers will get a discounted price for the event when registering through this Cloud Channel Summit link.

    Venue

    Computer History Museum
    1401 N. Shoreline Blvd
    Mountain View, CA 94043

    For more events, please return to the Data Center Knowledge Events Calendar.

