Data Center Knowledge | News and analysis for the data center industry

Monday, September 8th, 2014

    12:30p
    Hey, You, Get Off My Cloud!

    Chris Nolan is director of engineering at 2nd Watch.

    When it comes to choosing a public or private cloud architecture, the debate still revolves around the concerns of the past: privacy and security. Yet in its recent Magic Quadrant report for Cloud Infrastructure as a Service, Gartner suggests that the public cloud has come a long way.

    Various surveys show that enterprises are increasing their adoption of both forms of cloud computing. Many large organizations seem to favor using both public and private clouds together, although one survey found that nearly 30 percent of organizations were using public cloud only for their infrastructure outsourcing needs. The incredible economics and agility possible in the public cloud are clearly influencing CIO spending today.

    So long fear, hello strategy

    While strong security is still a major requirement for enterprise cloud computing, CIOs aren't wringing their hands over what could happen in the cloud. The debate has shifted from fear to strategy. IT leaders know that they must be active partners in meeting the challenge, and fortunately, they have many more third-party tools and services available to them today, which reduces the need to hire an arsenal of cloud experts.

    The major providers, such as AWS and Google, can help make a company's cloud as secure as it needs to be, even when complying with stringent regulations such as HIPAA in healthcare. AWS has what it calls a "shared security model": AWS secures the data center and protects against DDoS attacks, while giving companies the tools to create a highly secure environment with all the access controls and governance capabilities they need.

    Big companies, tech giants and choices

    Certainly, there are still compelling use cases for private cloud computing technologies and services. Large companies with rooms full of legacy hardware want to amortize the equipment before it reaches end-of-life. That’s sensible, and often leads toward a private cloud architecture, at least as an interim strategy.

    Major technology-centric companies with big wallets that desire full control of their infrastructure, such as Google, Facebook, Twitter and Yahoo!, gravitate toward setting up and managing their own private clouds. Not all big technology companies feel this way.

    Consider Netflix, the world's largest Internet video streaming company, which now runs 100 percent on AWS. There is every reason to believe that public cloud services will continue to improve and meet customer needs for security, governance, performance, flexibility and pricing. According to Gartner's report, AWS ranked miles above all of the other providers on vision and ability to execute.

    Risk management and security: addressing the concerns

    A new way of managing your infrastructure requires a new way of managing risk and security; however, legacy tools are not yet up to the challenge.

    A survey by CloudPassage found that whether using a private or public cloud, companies struggled to apply legacy security products effectively to cloud infrastructure. Among the top challenges cited were compliance, a lack of cloud-specific security functions, and the inability to work across different clouds and data centers. The one security challenge that was much more common for public clouds was a lack of integration with management and automation tools. To address these concerns, many newer vendors are developing tools and services, often open source, that are built for managing, monitoring and securing cloud applications.

    Even though private and public clouds are growing along similar trajectories, it's smart to carefully evaluate a private cloud decision, since it will be more expensive (with the customer typically owning the hardware) and will require more management and support from your team.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    DevOps IT Automation Software Chef Goes Freemium

    Chef, the company behind the popular IT automation software for DevOps, has merged the open source version of its software and the one with premium features built for enterprises into a single open source code base.

    Until now, if a company wanted to use premium Chef features, such as multi-tenancy and access control, it had to buy a separate software package. Now, anybody can simply download the open source package and turn those features on or off as needed, paying accordingly.

    But that's for large enterprises. Smaller users, those with deployments of 25 servers or fewer, can use the enterprise features free of charge.

    “That’s all now come together as a single downloadable thing,” Chef CEO Barry Crist said in an interview. “It’s [all] open source and it ships with premium features that are accessible through an API.”

    There are both business and technological reasons for merging the two versions of Chef into a single code base. Both reasons have to do with ease of transition between the two.

    “Rather than being free-to-premium, it was more free or premium,” Crist said. In other words, the free version was not a gateway drug to the paid one as the company’s leadership had hoped.

    Chef CTO Adam Jacob said it was also hard for a customer technologically to upgrade from free to premium or to stop using the premium features once they had started. “Once you had them, it was hard to get rid of them, and if you didn’t have them it was hard to migrate to them,” he said.

    Keeping the two separate was also not healthy for the progress of either. When the open source version improved, the enterprise version would lag behind, and vice versa, Jacob explained.

    In addition to open sourcing Chef's enterprise features, the latest release – Chef 12 – includes new high-availability options, policy replication between servers, data centers or different cloud environments, workflow automation for Docker containers, and integration with Windows PowerShell, VMware vSphere and vCloud Air.

    For the past several months, Chef has also included an analytics platform, which logs activity on the Chef server and serves as a starting point for development of future audit and compliance features.

    Jacob founded Seattle-based Chef (originally Opscode) together with tech entrepreneur Jesse Robbins in 2008. Many of the world’s largest companies, including the likes of Facebook, Disney, Nordstrom, General Electric, Fidelity Investments and Goldman Sachs, use its software to automate server configuration in their data centers.
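    Chef's core idea is declaring the desired state of a server and letting the tool converge the machine to that state on every run. Chef recipes themselves are written in a Ruby DSL; the sketch below is not Chef code, just a minimal Python illustration of the same idempotent check-then-converge pattern. The resource list, file path and use of dpkg/apt-get are hypothetical and assume a Debian-style host.

        # Not Chef: a minimal, hypothetical illustration of the idempotent
        # desired-state pattern that configuration management tools are built around.
        import os
        import subprocess

        desired_state = [
            {"type": "package", "name": "nginx"},
            {"type": "file", "path": "/tmp/motd", "content": "managed by config tool\n"},
        ]

        def converge(resource):
            if resource["type"] == "package":
                # Install only if the package is missing (idempotent).
                installed = subprocess.run(["dpkg", "-s", resource["name"]],
                                           capture_output=True).returncode == 0
                if not installed:
                    subprocess.run(["apt-get", "install", "-y", resource["name"]], check=True)
            elif resource["type"] == "file":
                # Rewrite the file only if its content differs from the desired content.
                path, content = resource["path"], resource["content"]
                current = open(path).read() if os.path.exists(path) else None
                if current != content:
                    with open(path, "w") as f:
                        f.write(content)

        for res in desired_state:
            converge(res)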

    Chef is one of the major forces behind the so-called DevOps movement, which is essentially about bringing IT and developers closer together and building tools to make roll-out of new software features faster.

    While DevOps principles have been employed extensively by web-scale companies, such as Facebook, which constantly deploy new features, there is now pressure on more traditional enterprises to adopt the same approach.

    The enterprise space is responsible for most of Chef’s revenue today. “Seventy-five percent of our revenue now comes from the enterprise, and they want premium features; they want a throat to choke; they want someone to be there,” Crist said. “But, interestingly, increasingly they also want open source, because they want the code.”

    Forward-thinking enterprises want to be able to customize their software tools for their environments.

    1:30p
    Cray Packs Extreme GPU Power Into Latest CS-Storm Supercomputer

    Cray launched the Cray CS-Storm, a high-density accelerator compute system chock-full of NVIDIA Tesla GPUs and delivering peak performance of more than 11 teraflops per node. Based on the Cray CS300 cluster supercomputer, the new CS-Storm is a powerful and dense system, packing an eight-to-two ratio of GPUs to CPUs.

    The new system makes a 250-teraflop rack possible: 22 2U servers in a 48U rack, housing a total of 176 NVIDIA Tesla K40 GPU accelerators. To take advantage of this dense GPU environment, Cray has specifically tuned its CS programming environment and tools for GPU computing performance. The CS-Storm is targeted at specific HPC markets and needs that can justify the compute power it offers. With an NVIDIA K40 GPU costing around $4,000, a fully loaded 2U CS-Storm would run $32,000 in GPUs alone, and a rack would be around $704,000 — and that is just for the GPU chips.
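    The cost and density figures above are simple multiplication. A minimal back-of-envelope sketch in Python, using the article's numbers (the roughly $4,000 K40 price is an approximation):

        # Back-of-envelope check of the CS-Storm figures quoted above.
        GPUS_PER_NODE = 8          # eight-to-two GPU-to-CPU ratio in a 2U node
        NODES_PER_RACK = 22        # 22 2U servers in a 48U rack
        GPU_PRICE_USD = 4_000      # approximate price of one NVIDIA Tesla K40

        gpus_per_rack = GPUS_PER_NODE * NODES_PER_RACK      # 176 GPUs
        gpu_cost_per_node = GPUS_PER_NODE * GPU_PRICE_USD   # $32,000
        gpu_cost_per_rack = gpus_per_rack * GPU_PRICE_USD   # $704,000

        print(f"GPUs per rack:     {gpus_per_rack}")
        print(f"GPU cost per node: ${gpu_cost_per_node:,}")
        print(f"GPU cost per rack: ${gpu_cost_per_rack:,}")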

    Barry Bolding, Cray’s vice president of marketing and business development, said, “The Cray CS-Storm is built to meet the most demanding compute requirements for production scalability, while also delivering a lower total-cost-of-ownership for customers with accelerator workload environments. With the combination of an extremely efficient cooling infrastructure, Cray’s high-productivity cluster software environment and powerful NVIDIA K40 accelerators, the Cray CS-Storm is designed to be a production workhorse for accelerator-based applications in important areas such as seismic simulation, machine learning and scientific computing.”

    The CS-Storm is based on the air-cooled Cray CS300 system and features Intel Xeon E5-2600 v2 processors and the Cray Advanced Cluster Engine cluster management software. Cray says the CS300 series of supercomputer clusters is available with air- or liquid-cooled architectures. The CS-Storm system is designed for HPC workloads in the defense, oil and gas, media and entertainment and business intelligence sectors.

    4:40p
    Intel’s Latest 22nm Xeon for Data Centers Out of the Gate

    Intel has launched the Xeon E5-2600 v3 processor family, formerly code-named Grantley-EP, optimized and accelerated for the modern data center. In the “tick-tock” product cycle that Intel follows, the latest edition (tock) of the two-year-old E5-2600 portfolio brings the Haswell microarchitecture to the 22-nanometer Xeon. The E5-2600 v2 chips have accounted for more than 80 percent of Intel’s server chip segments in recent quarters.

    For the first time, Xeon chips for compute, storage and network workloads feature a single architecture. This creates efficiencies in hardware development and opens doors for simplified operating models.

    The company made the announcement in conjunction with the Intel Developer Forum in San Francisco, which kicked off Monday.

    Eighteen cores, DDR4 memory and PCI Express 3… but wait, there’s more!

    The EP in Grantley-EP stands for Efficient Performance, and Intel has packed a variety of enhancements into the chip, as well as extra ingredients to speed integrated components. Intel has increased the core count on the v3 line to up to 18 cores and added support for PCI Express 3.0 and four channels of DDR4 memory. Balancing performance and efficiency, Intel is offering 29 products across the E5-2600 v3 line, with six- or eight-core basic configurations, eight- or 12-core low-power options, 10- to 12-core advanced options and 10 segment-optimized SKUs ranging all the way up to 18 cores.

    Making more custom SKUs than ever

    Diane Bryant, senior vice president and general manager of Intel's Data Center Group, said the number of custom SKUs the company has either committed to developing or is already shipping to certain customers stands at 20. Each of these chips is "custom to a specific customer to meet their specific workload needs," she said at a press conference in San Francisco Monday.

    One of these customers is Microsoft. Kushagra Vaid, general manager of server engineering at Microsoft, joined Bryant on stage Monday to talk briefly about the CPU customization Intel has done for the company.

    For Microsoft, the chipmaker has designed accelerated encryption, faster compression and a larger number of cores than available in off-the-shelf Xeon SKUs.

    With the Haswell microarchitecture, Intel now also offers per-core P-states (PCPS), which adjust voltage up and down independently for each core, delivering up to a 36 percent reduction in CPU power while allowing greater flexibility for workloads, according to the company. A voltage regulator is now integrated into the die, which enables faster transitions between power states and works with the operating system, which orchestrates how cores are used.
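    One rough way to see per-core frequency control in action is to read the Linux cpufreq interface for each core. The sketch below assumes a Linux host that exposes /sys/devices/system/cpu/*/cpufreq; it only observes per-core frequencies and is not Intel's PCPS mechanism itself.

        # Observe per-core CPU frequencies via the Linux cpufreq sysfs interface.
        # Values in scaling_cur_freq are reported in kHz.
        import glob

        paths = sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_cur_freq"))
        for path in paths:
            core = path.split("/")[5]              # e.g. "cpu0"
            with open(path) as f:
                khz = int(f.read().strip())
            print(f"{core}: {khz / 1000:.0f} MHz")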

    Intel's cache monitoring capability is also a big deal in the v3 line: cache QoS provides information about how individual VMs use the cache, enabling IT automation to make better decisions about utilization. Intel notes that much of the performance improvement over the last generation, as much as 90 percent, comes from Advanced Vector Extensions 2.0 (AVX2). To help separate workloads and optimize processing for each independently, Intel Turbo Boost Technology 2.0 automatically allows cores running AVX2 code to exceed the rated AVX base frequencies when they are operating below power, current and temperature specification limits.
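    Those AVX2 gains reach most software through vectorized libraries and compilers rather than hand-written intrinsics. As a loose illustration, the snippet below times a scalar Python loop against a NumPy dot product; whether NumPy actually uses AVX2 depends on how NumPy and its BLAS were built, so treat this only as a demonstration of why wide vector units matter.

        # Loose illustration: vectorized math (which can exploit SIMD units such
        # as AVX2, depending on the NumPy/BLAS build) vs. a scalar Python loop.
        import time
        import numpy as np

        n = 2_000_000
        a = np.random.rand(n)
        b = np.random.rand(n)

        t0 = time.perf_counter()
        scalar = sum(x * y for x, y in zip(a, b))   # scalar, interpreted loop
        t1 = time.perf_counter()
        vectorized = float(a @ b)                   # vectorized dot product
        t2 = time.perf_counter()

        print(f"python loop: {t1 - t0:.3f}s, numpy dot: {t2 - t1:.4f}s")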

    Extended ingredients

    Also launching at the 2014 IDF are the server reference architecture for Intel's Open Network Platform and the Intel Ethernet Controller XL710, code-named Fortville. The XL710 is part of the converged network adapter family and delivers 40GbE in the form of four 10GbE ports or up to two 40GbE ports. Networking is another key focus area in Intel's approach to driving performance and efficiency into all areas of the data center; Intel recently laid out $650 million for LSI's networking business.

    Storage updates

    Intel sees a number of new market opportunities for the v3 Xeon processor beyond the targeted enterprise, cloud, storage, HPC and communications segments. One of the many optimization approaches for the v3 family is what Intel describes as evolving the data center rack into a composable set of pooled and disaggregated resources. With infrastructure driven by application requirements, this rack scale architecture exposes hardware attributes upward to the provisioning and management layer, optimizing all aspects for the software-defined data center.

    The future

    Next-generation 14-nanometer Broadwell chips are already ramping up; they run quieter and cooler, delivering equal performance while requiring less power. This will help fuel Intel's huge efforts outside of the server – in mobile, wearable and other Internet of Things endeavors. Intel's willingness to customize architectures and processor solutions for high-volume, large cloud players will help keep demand high as well.

    At a recent investor event, Intel CEO Brian Krzanich confirmed that the 10nm Cannonlake microarchitecture, due out in 2016, will not need to use extreme ultraviolet (EUV) lithography. Krzanich also mentioned that the new E5-2600 v3 chips are compatible with DDR3, DDR4 and two 'other' memory technologies. There has been speculation that Intel may be getting into the memory business.

    5:30p
    Expedient Breaks Ground in Columbus Market

    Colocation, managed services and cloud provider Expedient has broken ground on a 60,000-square-foot, $52 million data center in Dublin, Ohio (just outside Columbus). Once completed, the facility will add 18 megawatts and triple Expedient’s footprint in the Columbus market. The first phase is expected to open in the second half of 2015.

    The new facility will bring Expedient’s overall footprint to 275,000 square feet across 10 data centers. The company is building the facility from the ground up in three phases. The first phase will be 28,000 square feet with 15,000 square feet of net usable space supporting 575 cabinets and will cost in excess of $22 million.
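    The phase-one figures above imply some simple ratios. A quick back-of-envelope sketch using the article's numbers (the $22 million is a floor, since the article says "in excess of"):

        # Rough ratios from the quoted phase-one figures; all values approximate.
        NET_USABLE_SQFT = 15_000
        CABINETS = 575
        PHASE1_COST_USD = 22_000_000   # "in excess of $22 million"

        print(f"Net usable space per cabinet: {NET_USABLE_SQFT / CABINETS:.0f} sq ft")
        print(f"Phase-one cost per net usable sq ft: ${PHASE1_COST_USD / NET_USABLE_SQFT:,.0f}")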

    Expedient entered the Columbus area in 2011 with its Upper Arlington, Ohio, data center, an existing facility it purchased and upgraded, then expanded in 2013. It is now growing its market footprint with a greenfield build.

    “In just a few short years since coming to Columbus in 2011, the market has become core to our business strategy,” said Shawn McGorry, Expedient president and chief operating officer. “Thanks to the support of our customers, the great partnerships we’ve enjoyed in the community and of course, the hard work of our employees, we are delighted to expand our commitment to the mid-Ohio region.”

    Local officials welcomed the construction project. City of Dublin Director of Development Dana McDaniel said, "Expedient's announcement to expand in the City of Dublin is a clear indication of our growing ability to meet not only business demands for reliable infrastructure but also for access to a highly-qualified workforce needed to support the new technologies hosted within these facilities."

    Bryan Smith, Expedient’s regional vice president, said the company was fortunate in its site-selection process to find a location directly adjacent to an electrical substation and fourteen connectivity service providers with fiber optic infrastructure running through the property.

    Expedient also expanded its Pittsburgh data center this year.

     

    6:24p
    Microsoft Buys Into US-Brazil Submarine Cable by Seaborn

    Microsoft has agreed to buy capacity on a fiber optic submarine cable that a company called Seaborn Networks is building between the U.S. and Brazil.

    Microsoft’s commitment assures the Seabras-1 cable system, in the works for several years, will be built. Once completed, the capacity will enable Microsoft to provide higher-performance services in Brazil and the rest of Latin America.

    Larry Schwartz, CEO of Seaborn, said Microsoft would be the “foundational customer” for the system. “With their full participation in the system, it is clear that Microsoft is highly committed to delivering the best cloud experience and infrastructure in Brazil and all of Latin America,” he said in a prepared statement.

    The cable will link New York and São Paulo. Seaborn expects to complete it in 2016.

    It will be the first cable system linking the two countries directly. Other submarine U.S.-Brazil links – there are about five – go through intermediate landing points in Bermuda and the Caribbean.

    Seabras-1 will be a six-fiber-pair system with maximum capacity of 60 Tbps. Earlier this month, Seaborn announced that it had secured agreements for backhaul connectivity from landing stations in New York and São Paulo.
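    The headline capacity breaks down to a simple per-fiber-pair figure; a quick check using the numbers above:

        # Design capacity per fiber pair for the Seabras-1 system.
        TOTAL_CAPACITY_TBPS = 60
        FIBER_PAIRS = 6
        print(f"{TOTAL_CAPACITY_TBPS / FIBER_PAIRS:.0f} Tbps per fiber pair")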

    Microsoft has been expanding its cloud services business aggressively, and Brazil is one of the world’s fastest-growing IT services markets. The company launched a brand new Brazil South availability region for its Azure cloud services in June.

    Google, one of Microsoft’s biggest competitors in the cloud market, has also been investing in submarine cable capacity. The latest was Google’s participation in FASTER, a five-company effort to build a trans-Pacific cable system that will link major cities on the U.S. west coast to two coastal locations in Japan.

    7:16p
    CenturyLink Wants to Acquire Rackspace: Bloomberg Report


    This article originally appeared at The WHIR

    CenturyLink is in talks to acquire cloud provider Rackspace, according to a report by Bloomberg on Monday. Citing anonymous sources familiar with the matter, the report said that odds of the deal going through are “less than 50 percent unless Rackspace is willing to take payment in stock or enter a joint venture.”

    Rackspace hired Morgan Stanley in May to help evaluate its options, and this is not the first time CenturyLink's name has come up as a possible buyer. In June, Citigroup speculated that the telco would acquire Rackspace because it would gain exposure to OpenStack, although Rackspace could be difficult to fold into CenturyLink's existing portfolio, which includes Savvis and Tier 3.

    Rackspace and CenturyLink have both declined to comment on the rumor.

    Rackspace’s stock has fallen around 53 percent from a record closing price of $79.24 in January 2013. The stock closed at $37.24 on Sept. 5.
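    The roughly 53 percent figure follows directly from the two closing prices; a quick sanity check:

        # Decline from the record close to the Sept. 5 close.
        record_close = 79.24   # January 2013
        recent_close = 37.24   # Sept. 5, 2014
        decline = (record_close - recent_close) / record_close
        print(f"Decline from record close: {decline:.1%}")   # ~53.0%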

    Rackspace has taken a hit in terms of market share in the US as AWS, Google and Microsoft have grown their cloud offerings. A recent report by Gartner showed Rackspace is leading in Europe’s cloud managed hosting market, though. Its European presence could be complementary to CenturyLink’s European operations via Savvis, which has grown over the past couple of years with a new data center in London and increased connectivity.

    Private cloud could also be an area of possible synergies, as CenturyLink Technology Solutions (formerly Savvis) launched a new private cloud service in August. Rackspace's private cloud offering is based on OpenStack.

    Earlier this year, CenturyLink slashed its public cloud prices and added new support bundles. As part of the upgrades to support, CenturyLink introduced Cloud Technical Service Engineers, who provide day-to-day support, proactive advisory services and account management, a service designed to help enterprises make the most out of their cloud. Rackspace has always focused on differentiating its cloud services through its Fanatical Support, so support could be another area of interest for CenturyLink.

    At this point, the deal is pure speculation, but it does seem to indicate that there is still interest in finding Rackspace a buyer.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/centurylink-wants-acquire-rackspace-bloomberg-report

    7:23p
    Digital Realty’s APAC Man Kumar Resigns

    Kris Kumar, who has run Digital Realty Trust's business in Asia Pacific for the past two years, has left the company.

    Digital Realty announced earlier this month the appointment of Bernard Geoghegan as Kumar's replacement. Geoghegan is an internal hire who until recently oversaw the company's business in the EMEA region.

    This is Geoghegan’s second run at Digital Realty. He originally joined the company in 2006 and stayed until 2010, when he left to work as executive vice president of the data center services division of the European service provider Colt. He rejoined Digital Realty as EMEA managing director last year.

    The company has four data center properties in APAC – two in Australia, one in Singapore and one in Hong Kong. Its Singapore site is the largest of the four, measuring about 370,500 square feet total.

    The 200,000-square-foot Hong Kong data center is second-largest. The Melbourne facility is about 100,000 square feet, and the Sydney one is about 90,000 square feet.

    Kumar's is the third recent departure of a senior Digital Realty team member. In August, solutions director Michael Siteman left for M-Theory Group, a boutique IT solutions firm in Los Angeles, and Rebecca Brese, who ran customer service and quality control, went to Compass Data Centers, a mid-market data center provider founded by another Digital Realty alumnus, Chris Crosby.

    Kumar had been senior vice president and regional head of Asia Pacific for Digital Realty since March 2012. He was vice president of corporate development and APAC regional head for two years prior to that.

    Digital Realty, the world's largest wholesale data center developer and landlord, has been undergoing big changes since the sudden departure of its founding CEO, Michael Foust, in March. The company has not yet selected a successor and is being steered by its CFO, William Stein, in the interim.

    On its first-quarter earnings call, Digital Realty leadership announced some strategic changes, which included diversifying its product portfolio with more hands-on services than its traditional space-and-power offerings and selling off non-core and underperforming properties.

    It reported year-over-year revenue growth for the second quarter and a slight drop in earnings per share. Its management also said they had made progress in identifying properties to divest.

    8:00p
    The Bitcoin Arms Race Shifts to the Cloud

    For Josh Garza, the path to the future of cryptocurrency mining runs through the cloud.

    Garza is the CEO of GAW Miners, which has retooled to track the rapid evolution of the Bitcoin market, shifting its business from the physical to the virtual. After building one of the largest online retail stores for cryptocurrency mining hardware, GAW Miners has bought up data center capacity and launched a flurry of new “Hashlets” providing cloud-based data crunching power to Bitcoin enthusiasts.

    GAW Miners’ move reflects a new reality: the Bitcoin technology arms race is shifting from the processor to the data center, with hardware vendors pointing their businesses towards the cloud. This week KnCMiner said it was getting out of retail sales and unveiled a new cloud mining service powered by its new data center in Lulea, Sweden, while Bitmain announced a new mining platform operating from a large facility in China.

    ASIC refresh cycles drive rapid change

    These companies make specialized chips known as ASICs (Application Specific Integrated Circuits) that process transactions for virtual currencies. Over the past 18 months they’ve released a series of powerful mining rigs, creating a rapid refresh cycle that has shortened the useful life of earlier models. After a turbulent period in which ASIC vendors focused on retail sales, many are now installing their hardware in data centers and selling mining capacity, known as “hashing power.”
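    As a rough illustration of what "hashing power" is spent on, the sketch below runs a toy SHA-256 proof-of-work search in Python. It is a simplification of Bitcoin's real scheme (which double-hashes an 80-byte block header against a difficulty target), but the brute-force structure is the same, which is why specialized ASICs so thoroughly outpace general-purpose hardware at this task.

        # Toy proof-of-work: find a nonce whose SHA-256 hash starts with a given
        # number of zero hex digits. Illustrative only; not Bitcoin's exact scheme.
        import hashlib

        def mine(header, difficulty=4):
            prefix = "0" * difficulty
            nonce = 0
            while True:
                digest = hashlib.sha256(header + str(nonce).encode()).hexdigest()
                if digest.startswith(prefix):
                    return nonce, digest
                nonce += 1

        nonce, digest = mine(b"example block header")
        print(f"nonce={nonce} hash={digest}")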

    “I saw that shipping hardware to people wasn’t going to last forever, so I worked on a plan to migrate our business to a cloud model,” said Garza.

    When GAW launched its Hashlets cloud offering on August 15, the volume of orders was large enough to briefly knock the Shopify e-commerce network offline. “We had a huge amount of sales,” said Garza. “We had to slow down because we were running out of data center capacity.”

    Bucking industry trends

    GAW Miners has made waves by bucking prevailing trends in the Bitcoin sector, where hardware is often delivered late, and sometimes not at all. Garza, a veteran of the telecom industry, believes GAW Miners can set a new standard for business practices, as well as serve as a consolidator in a fragmented industry.

    “I got into (mining) as a hobby, as a lot of people do,” said Garza. “I ran into some unsavory companies and lost money on a couple of deals. I just wanted to create an alternative so people could do business with confidence. And it kind of exploded.”

    Rather than the standard practice of booking pre-paid orders for future equipment deliveries, GAW Miners has sold ASICs from inventory, delivered products promptly and offered responsive customer service. The company did $50,000 in sales on its first day of business, and has scaled up from there. Garza says the company is on pace for $150 million in annual sales.

    Growing through acquisition

    GAW Miners initially focused on mining equipment for “altcoins” such as Litecoin and Dogecoin, which use the Scrypt algorithm to process transactions, rather than the SHA-256 algorithm used by Bitcoin. But the company’s Bitcoin business is growing, as reflected in Garza’s recent purchase of the BTC.com domain name for more than $1 million. Last month GAW Miners paid $8 million to acquire a controlling interest in ZenMiner, a cryptocurrency hosting service that housed some of GAW’s hardware customers.

    The deal was significant because it allowed GAW to expand its capacity for cloud hashing sales. When it launched its Hashlet product, offering a return on investment in about two months, customers quickly bought up the available capacity.
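    A two-month payback implies a simple break-even relationship between a Hashlet's price and its daily net payout. The numbers below are entirely hypothetical (the article gives neither pricing nor payout rates); the point is only the calculation:

        # Hypothetical break-even sketch; price and payout are illustrative only.
        hashlet_price_usd = 16.00       # assumed purchase price
        daily_net_payout_usd = 0.27     # assumed payout after fees

        payback_days = hashlet_price_usd / daily_net_payout_usd
        print(f"Payback period: {payback_days:.0f} days (~{payback_days / 30:.1f} months)")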

