Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, September 23rd, 2014

    1:00p
    365 Data Centers Gets Into Cloud Storage Business

    365 Data Centers is transforming from a colocation provider into a colocation and cloud services provider. Today it took the first step in that transformation, announcing a new local enterprise cloud storage service the company is launching across its entire footprint of 17 data centers.

    The company recently raised $16 million in a Series B funding round and secured a $55 million credit facility, which it said it would use for service portfolio and market expansion. The cloud storage launch is the first major expansion of that portfolio.

    With the new service the company is hoping to cater to customers who are looking for hybrid infrastructure options. The storage-as-a-service offering will include self-service provisioning and pay-for-what-you-use pricing.

    365 tapped Zadara to create the offering. Zadara provides an open platform based on x86 servers, using OpenStack Cinder and Nova for volume management and orchestration, as well as OpenStack Keystone Identity Management.

    The service can be privately accessed from within 365 facilities or from a customer’s own data center. Customers who are not using the company’s colocation services can connect to 365 Cloud Storage via metro Ethernet or metro fiber.

    After a customer deploys a physical connection, 365 provides the network-side information needed to configure a private VLAN into the customer’s environment and gives access to the management console. From the console, customers can provision storage on demand, pool drives into RAID groups, create volumes and attach servers to those volumes.
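    Because the underlying platform is OpenStack-based, the console workflow maps onto ordinary Cinder volume operations. The sketch below is a minimal illustration of such a request using the python-cinderclient library; the credentials, Keystone endpoint and volume type are hypothetical, and 365 has not published a customer-facing API, so this only shows the OpenStack layer the service is built on.

        # Illustrative only: a Cinder-backed volume request with python-cinderclient.
        # Credentials, endpoint and volume type below are hypothetical placeholders.
        from cinderclient import client

        cinder = client.Client(
            '2',                                      # Cinder v2 API
            'demo-user', 'demo-password',             # Keystone credentials (hypothetical)
            'demo-project',
            'https://keystone.example.com:5000/v2.0',
        )

        # Provision a 500 GB volume on a hypothetical SSD tier.
        volume = cinder.volumes.create(size=500, name='qa-datastore', volume_type='ssd')
        print(volume.id, volume.status)

        # Attaching the volume to a server would then go through Nova, e.g.
        # nova.volumes.create_server_volume(server_id, volume.id, '/dev/vdb')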

    Features:

    • Selection of dedicated SATA, SAS and SSD drives
    • Support for both SAN block storage and NAS file storage (NFS/SMB)
    • 10Gbps and 1Gbps private network access via cross connect or metro fiber
    • Redundancy support for customized RAID configurations
    • Encryption of data at rest and in flight
    • Built-in data protection through snapshots, cloning and remote mirroring
    • Scalability to hundreds of terabytes per storage volume
    • Integration with Amazon Web Services and Microsoft Azure environments
    • Pay-for-use pricing as low as 5 cents per GB per month.

    Keao Caindec, chief marketing officer at 365, said public cloud providers have not been able to “crack the code” in the cloud storage market for enterprises because they require customers to move applications and data into their clouds. “We believe businesses should be able to connect their enterprise applications and servers to local on-demand storage to support development, testing, QA and production environments securely.”

    365 Data Centers’ name comes from the iconic 365 Main colocation facility in San Francisco, now owned by Digital Realty Trust. In 2012, 365 Main co-founders Chris Dolan and James McGrath resurrected the brand by acquiring 16 data centers from Equinix. Equinix was looking to divest some facilities it got through its Switch & Data acquisition.

    The company’s strategy is to serve small and mid-size businesses in second-tier markets. “We’re looking to expand into more emerging U.S. markets with strong support by the local government and businesses to drive job growth and build technology hubs,” Caindec said.

    365 eschews traditional annual colocation contracts in favor of month-to-month agreements.

    1:00p
    Redis Labs: We Have 3,000 Paying Cloud In-Memory NoSQL Customers

    Redis Labs has quickly reached 3,000 paying customers for its fully managed, utility-style in-memory NoSQL database service, which runs across multiple clouds and data centers and handles all management, scaling and availability for users.

    Open source Redis delivers high performance because it runs in RAM but, like many open source tools, it is difficult to manage and scale in production.

    Redis Labs said developers using its Redis Cloud and Memcached Cloud services have created more than 60,000 database instances since the company announced general availability in 2013.

    Customers include Bleacher Report, a sports site that sees more than 80 million visitors a month, container startup Docker, HotelTonight and Scopely.

    The company also said it has more than 17,000 free customers using its cloud database for small-scale deployments. Many of these customers graduate to the paid service, Redis Labs CEO and co-founder Ofer Bengal said.

    Redis started in 2009 and counts Twitter, Pinterest, Tumblr and GitHub among early adopters. Snapchat is among more recent users.

    Redis Labs does very well with verticals like multiplayer games and online advertising, where processes and decisions need to occur fast on the database front.

    “We developed a very extensive technology that overcomes the limitations of open source Redis,” Bengal said. “We offer enterprise-class Redis. We built our first product line which was Database-as-a-Service, offering Redis in a fully managed service in multiple clouds.”

    Redis Labs scales everything behind the scenes, adding resources as needed, unbeknownst to the customer. It also offers high availability. Users can create in-memory replicas in and across data centers and clouds.
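    From an application’s point of view, a managed endpoint like this behaves as a single Redis server. The snippet below is a minimal sketch using the standard redis-py client; the hostname, port and password are placeholders rather than real Redis Cloud values.

        # Minimal sketch: pointing an application at a hosted Redis endpoint.
        # Host, port and password are placeholders, not actual Redis Cloud values.
        import redis

        r = redis.Redis(
            host='redis-12345.cloud.example.com',   # hypothetical endpoint
            port=12345,
            password='s3cret',
        )

        # Typical cache/session use; the provider handles sharding, replication
        # and failover behind this single endpoint.
        r.set('session:42', 'serialized-user-profile', ex=3600)  # expires in one hour
        print(r.get('session:42'))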

    “If a node fails, we immediately switch and serve the application from a replica,” Bengal said. “We had 150 cases of node failures over the last year, and five complete data center outages without any loss of data. Customers didn’t even know something happened because it didn’t affect them.”

    The service is offered in Amazon Web Services, Microsoft Azure, Google Cloud Platform and IBM SoftLayer. Bengal said the service is also offered through several channel partners, most of them Platform-as-a-Service providers like Salesforce.com’s Heroku, IBM’s BlueMix and Pivotal.

    The service is in 10 different cloud regions and 15 data centers. “We operate 28 clusters with hundreds of nodes,” said Bengal.

    “Today, the world has changed in terms of how to use databases,” he said. “A few years back a company would choose one database. Today’s applications use multiple databases.”

    The biggest knock against Redis is that it can be expensive, since it stores data in RAM rather than on disk. The benefits and performance needs have nonetheless driven adoption.

    Redis Labs has raised $13 million in venture funding from Bain Capital Ventures, Carmel Ventures and others.

    3:44p
    Improving Electrical Efficiency in Your Data Center

    David Wright, account manager at WhoIsHostingThis.com, is a passionate, self-proclaimed tech geek who loves all things IT.

    Continuous monitoring of your data center efficiency is essential if you want to reduce your power costs. Reducing electrical consumption will allow your data center to be more sustainable and increase ROI.

    This article is aimed at a wide variety of companies (or departments within companies) that want to learn how to manage data in a sustainable, ethical and efficient way by saving energy and at the same time increasing their ROI.

    The increasing need for sustainability and efficiency

    Data centers serve a wide variety of sectors, including education, charities, the public sector and research organizations. There are also some interesting and unusual facilities, such as Bahnhof’s data center located in a former military bunker in Sweden.

    Many factors are pushing data centers to become more sustainable and to run more efficiently in terms of power usage. Business demand, cost and ROI, environmental pressure and security concerns are just a few of them. These issues are targeted by governments across the world, by industry groups such as The Green Grid and by the European Union.

    The first step toward controlling a data center’s energy use is to gain a good understanding of how the facility consumes energy. The best way to do this is to measure consumption using energy efficiency metrics.

    Determining your data center’s energy efficiency level will allow you to come up with a strategy. Benchmarking can be done using energy efficiency metrics. Below we’ll take a look at two of the most widely accepted benchmarking practices.

    Power Usage Effectiveness

    The most common energy efficiency metric is called Power Usage Effectiveness, widely known as PUE.

    PUE is determined by using the following formula:

    PUE = Total Facility Power/IT Equipment Power

    Total Facility Power is the power measured at the utility meter. The IT Equipment Power includes all the actual load of IT equipment such as workstations, servers, storage, switches, printers and other service delivery equipment.

    PUE values typically fall on a scale from 1 to 4, with values close to 1 indicating a very efficient facility (nearly all power reaches the IT equipment) and values approaching 4 a very inefficient one.

    PUE is a green computing principle promoted by The Green Grid, a global organization based in the US which aims to develop and promote data center energy efficiency.

    The Green Grid is a non-profit association of technology providers, end users, facility architects, utilities companies and policy makers. They all work together toward improving the resource efficiency of data centers and information technology all around the world. The Green Grid is famous for creating efficiency metrics, PUE being their biggest hit so far.

    Data Center Infrastructure Efficiency

    Data Center Infrastructure Efficiency, more commonly known as DCIE, is another popular metric used to benchmark energy efficiency of a data center. It is used by many data centers throughout the world.

    The most obvious difference between PUE and DCIE is that the latter is expressed as a percentage rather than a number. The higher the percentage the more efficient the data center.

    DCIE can be worked out by using the following formula:

    DCIE = IT Equipment Power/Total Facility Power x 100%

    The definitions of IT Equipment Power and Total Facility Power given above apply to DCIE as well.

    Here’s a quick example that will help you understand how to work out your data center energy efficiency by using the two metrics explained above:

    Total Facility Power = 320 kW
    IT Equipment Power = 100 kW

    PUE = 320/100 = 3.2
    DCIE = 100/320 x 100% = 31.25%
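
    Both metrics are simple enough to compute directly. The short Python sketch below reproduces the example figures above; the function names are ours, for illustration only.

        # Illustrative helpers for the two benchmarking metrics discussed above.

        def pue(total_facility_kw, it_equipment_kw):
            """Power Usage Effectiveness: total facility power / IT equipment power."""
            return total_facility_kw / it_equipment_kw

        def dcie(total_facility_kw, it_equipment_kw):
            """Data Center Infrastructure Efficiency, as a percentage (higher is better)."""
            return it_equipment_kw / total_facility_kw * 100

        print(pue(320, 100))    # 3.2   -> plenty of room for improvement
        print(dcie(320, 100))   # 31.25 -> less than a third of the power reaches IT gear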

    You can now calculate your data center’s electrical efficiency and then use the spectrum below to determine whether or not you need to improve it.

    [Figure: PUE/DCIE efficiency spectrum]

    We hope you found this article useful. Please feel free to share your thoughts in the comments section below.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:21p
    Survey: Efficiency Investment Drives Facilities and IT Convergence

    Data center energy efficiency investment is expanding and is both a major driver and benefit of the convergence of facilities and IT in data center management. Three quarters of top decision makers in a recent survey commissioned by Schneider Electric indicated they have invested in energy efficiency programs in the past year and more than half project this investment to increase next year.

    The results suggest business leaders are seeing a financial return on the convergence of IT and operational technology programs in data center management, particularly when it comes to efficiency.

    Redshift Research conducted the survey on Schneider’s behalf, polling about 300 respondents employed at companies with at least $50 million in revenue who hold decision-making roles in facilities management, operations management, technology management, supply chain management or energy management.

    Convergence leads to efficiency

    More than half of respondents cited the convergence trend in data center management as having the biggest impact on their business today, and 61 percent said energy efficiency was the biggest benefit of convergence. Byproducts of increased efficiency include cost reductions (48 percent) and optimized business processes (43 percent).

    The survey also acknowledged that this convergence trend comes with its own set of challenges, including more complex technology management (55 percent), security (54 percent) and conflict between IT and operations staff (47 percent).

    “If you boil OT and IT down to the data center it’s about facilities and IT,” said Domenic Alcaro, vice president, mission critical services and software, Schneider Electric. “There’s that clear historical divide. The systems and people don’t communicate and they treat it as fiefdoms. For quite a few years there’s been this mantra that facilities and IT needs to come together, but you need tools to do so.”

    Schneider pitches DCIM as convergence enabler

    Schneider has a vested interest in promoting such convergence, pitching its data center infrastructure management (DCIM) software suite as a set of tools for managing a converged system.

    Alcaro admits that the DCIM hype has been out of control. There’s still a lot of work to be done on Return On Investment (ROI) models.

    But a balance is returning, with efficiency now weighed alongside availability and reliability. There is investment on both the hardware and software sides of the house, with software providing a unified view into operations.

    Operators take efficiency more seriously now

    Alcaro, a longtime veteran who came to Schneider via the APC acquisition, gave anecdotal evidence for changes in the sentiment toward energy efficiency.

    In the late nineties, APC acquired an uninterruptible power supply company with efficient UPS. Alcaro said the company approached customers with that message and saw that they only cared about availability and reliability. “It’s important, but it was too much the leading thought.”

    A data center draws at least 10 to 20 times more power per square foot than office space does. “Six years ago I saw a true interest in efficiency, but it peaked out in terms of hype. Now we’re at a point where people are investing. It would not be happening were there not a financial impact.”

    4:35p
    Cisco Gets Hands on Network Memory IP With Memoir Systems Acquisition

    Picking up the pace on acquisitions this year, Cisco announced its intent to acquire Memoir Systems, a developer of semiconductor memory intellectual property and tools that enable ASIC vendors to build programmable network switches at ever-increasing speeds. The news comes one week after Cisco announced the acquisition of OpenStack cloud player Metacloud.

    Memoir technology is already part of Cisco’s Nexus 9000 switches and its “algorithmic memory” is the IP prize, allowing Cisco to leverage it in future products, advance ASIC innovations and improve memory capabilities and performance.

    Memory 2.0

    Cisco networking products are nothing without extreme performance, as businesses demand higher port density (feeds) and faster line rates (speeds) of 40Gb/s, 100Gb/s and beyond. Integrating faster processor speeds, parallel architectures and multicore processors certainly helps, but with much of what Cisco products do living in memory, the need to stay on pace with speeds, programming requirements and port density requires that memory in ASICs keep up.

    Cisco Senior Vice President of Business Development Hilton Romanski notes in a blog post that Memoir licenses its soft-logic IP, which speeds up memory access by up to 10 times and also reduces the overall footprint this memory takes up in typical switch ASICs. By Memoir’s own description, its patented Algorithmic Memory uses the power of algorithms to increase the performance of existing embedded memory macros – up to 10X more Memory Operations Per Second (MOPS) — and lowers area and power consumption.

    The Memoir technology allows the development of switch and router ASICs with speeds, feeds and costs typically not possible with traditional physical memory design techniques.

    Old Cisco ties

    Santa Clara, California-based Memoir develops a portfolio of products to serve networking, general computing and mobile needs. Its Renaissance products have been licensed to numerous companies, including IBM.

    Co-founders Da Chuang and Sundar Iyer are both former Cisco employees, having co-led its network memory group. Cisco acquired Iyer’s Nemo Systems, which specialized in memory algorithms, in 2005.

    Memoir will fall under Cisco’s Insieme business unit. The companies expect to complete the acquisition in the first quarter of Cisco’s fiscal 2015.

    6:11p
    Feds Shut Down Alleged Bitcoin Server Scam in Missouri

    A federal court shut down Butterfly Labs, a Missouri-based seller of Bitcoin mining servers, on Tuesday. The company was accused of taking money from customers and either not shipping the equipment at all or shipping it so late that it was outdated and useless.

    The court shut the company down at the request of the Federal Trade Commission, which said Butterfly had charged customers “thousands of dollars,” but failed to provide the Bitcoin miners until they were “as effective as a room heater,” according to an FTC announcement.

    Jessica Rich, director of the FTC’s Bureau of Consumer Protection, said the commission was pleased with the court’s decision to grant its request. “We often see that when a new and little-understood opportunity like Bitcoin presents itself, scammers will find ways to capitalize on the public’s excitement and interest,” she said in a statement.

    Big server business, niche for data center providers

    Sales of Bitcoin mining computers have grown into a big business, but new generations of ever more powerful miners come out frequently because of the way virtual currency mining works. As more computing power joins the network, the cryptographic puzzles the servers need to crunch through get harder, requiring increasingly powerful processors.
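    Concretely, a miner searches for a block header whose double SHA-256 hash falls below a network-set target; as more hashing power joins the network, the target tightens and older hardware quickly becomes unprofitable. Below is a minimal sketch of that check, for illustration only, not a working miner.

        # Minimal sketch of Bitcoin's proof-of-work check (illustration, not a miner).
        import hashlib

        def meets_target(block_header, target):
            """A header wins if its double SHA-256 hash, read as a little-endian
            integer, is at or below the current network target."""
            digest = hashlib.sha256(hashlib.sha256(block_header).digest()).digest()
            return int.from_bytes(digest, 'little') <= target

        # Miners iterate the nonce field of the 80-byte header and hash each variant.
        # When the network lowers the target (raises difficulty), yesterday's hardware
        # needs proportionally longer to find a winning header.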

    These are simple but high-octane machines, and because they have so much processing power, they require a lot of cooling, which has spawned a growing niche in the data center market. Some colocation providers have set aside space just to serve mining customers, who need high power densities and a lot of cooling, but not necessarily the high levels of reliability needed by typical mission-critical customers.

    Broken promises

    The FTC complaint alleges that Butterfly charged customers anywhere between about $150 and $30,000 for one of its BitForce Bitcoin miners, depending on the model. In some cases the servers never came, and in others they were obsolete or near-obsolete by the time the company shipped them.

    Butterfly started advertising the product in June 2012, saying it was in the final stages of development. The company promised to deliver the machines by October of that year but had not delivered a single one by April of the following year.

    As of September 2013, more than 20,000 customers who had paid in full did not receive their BitForce servers, according to the FTC complaint.

    The story repeated itself with Monarch, Butterfly’s next-generation Bitcoin miner, which the company started advertising in August 2013 but had not shipped a single machine as of August of this year.

    The company frequently failed to provide refunds to the affected customers, according to the allegations, and its reps were unreachable.

    Mining services not rendered

    In addition to selling servers, Butterfly also offered mining services, on which it also failed to deliver to paying customers, according to the FTC. It was charging $10 per gigahash of processing power per year for the service.

    A gigahash is a unit of measurement of Bitcoin mining compute power. The FTC referenced an estimate that to generate a significant amount of Bitcoin one would need about 1,000 gigahashes per year.

    The company said it would use the Monarch cards to provide its “cloud mining” services out of its Kansas City data center as soon as the cards “come off the production line.”

    7:29p
    How to Prevent AWS, Azure or Google from Eating your Lunch? Work with Them


    This article originally appeared at The WHIR

    Many service providers feel like the big guys (Amazon, Google and Microsoft) are a threat. The question is what can you do about it? Do you work with them or compete against them, possibly using a federated approach?

    One example of cloud federation in the hosting industry is OnApp, which has championed the idea that service providers need to band together against Amazon and the other giants. It believes federation is the answer for cloud computing, in the way Uber is for transportation or Airbnb is for accommodations.

    Is federation the answer?

    Federation isn’t a growth strategy for service providers or a model like Airbnb. Instead we believe its only valid use case is as a short term approach to using spare capacity. Federation will lead to commoditization because you are only able to compete at the lowest level: the infrastructure itself. The spare capacity is likely to be the capacity the service provider can’t utilize so it may be the least cost effective. The opportunity is what you put on top of the infrastructure, how you add value through additional services, applications and functionality. Differentiation is at the heart of this and where service providers can, and will, compete.

    OnApp made an interesting purchase with SolusVM, adding the IaaS platform to its user base and platform. Federated cloud based on pooling unused capacity is an interesting model, but is it the answer or just a part of the industry consolidation process?

    Let’s assume you accept that this is going to happen: do you join the OnApp federation and extend your business a little, or move to where the market will ultimately go? There are many things to like about OnApp. It would be more compelling if it supported the large cloud vendors, though that would not suit the reasons many providers joined. Many like the idea of selling excess capacity, but if most participants are sellers, then ultimately there is a lack of demand.

    David Meyer from Gigaom explains this well in his latest article, “OnApp buys SolusVM to boost the demand side of its federated marketplace”:

    London-based OnApp has had an interesting history. It started off providing software for ISPs and old-school hosting outfits to start offering cloud services. Once it had over a thousand of these companies on board, it started federating their spare content delivery network (CDN), storage and compute capacity – first so that they could supply each other through an internal marketplace, and then so they could supply cloud customers through OnApp portals like CDN.net and Cloud.net and through other new virtual service providers.

    Problem is, there’s more spare capacity on offer than the other providers in the federation want, and no pure-play virtual service providers have popped up yet. “We’ve got oodles of supply in the marketplace and we’ve got some demand,” OnApp CEO Ditlev Bredahl told me. “We do around 1,000 buys and sells a week in the marketplace, but to be honest I was hoping to do about 10,000 by now.”

    However, according to OnApp, most of these providers focus their efforts on squeezing what they can out of their own capacity before trying to buy in more through the marketplace.

    We don’t believe that most service providers will put excess capacity into the marketplace at fire sale prices like an airline does to fill empty seats on a plane. There is little evidence to support this direction currently.

    Hyperscale vendors’ strategies for growth

    Here are some of the strategies the hyperscale vendors use to drive growth and win key markets:

    1. Google

    Google is offering early-stage startups that meet certain criteria $100,000 worth of services available on the Google Cloud Platform, which includes everything from Infrastructure and Platform-as-a-Service to Database-as-a-Service and APIs for a handful of application services.

    2. AWS has a Spot Market Price for Cloud Services

    The spot price can run at an 80-percent-plus discount to the regular price and is based on AWS offering spare capacity to the spot market, much like an airline selling unsold seats. Given AWS’s superior ability to manage big data, it is ideally placed to price and manage this, as would Google or Microsoft. (A brief spot-request sketch follows this list.)

    3. Microsoft Azure

    Microsoft uses term software license contracts and Azure bundles to give customers huge discounts.
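
    Returning to the AWS spot market mentioned in item 2, the sketch below shows what a spot request looks like through the boto3 SDK. The bid price, AMI ID and instance type are hypothetical; this is an outline of the mechanism rather than a recommendation.

        # Illustrative spot request via boto3; bid price and AMI ID are hypothetical.
        import boto3

        ec2 = boto3.client('ec2', region_name='us-east-1')

        response = ec2.request_spot_instances(
            SpotPrice='0.05',                 # bid well below the on-demand rate
            InstanceCount=1,
            Type='one-time',
            LaunchSpecification={
                'ImageId': 'ami-12345678',    # hypothetical AMI
                'InstanceType': 'm3.medium',
            },
        )
        print(response['SpotInstanceRequests'][0]['State'])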

    Can you build a good business with one or more of the hyperscale partners?

    All these vendors need partners to provide consulting, services, add-ons, API integrations and much more.

    If you understand a cloud vendor’s strategy, and that strategy is well articulated in the market, then you can build a good business working with that vendor.

    Here’s a checklist of what to look for from a good cloud vendor partner:

    • Do they understand what they do best and what their partners do best? Is this well articulated in the market?
    • Do they value partners or really only value end user customers?
    • Do they provide a roadmap or guidelines on future product offerings?
    • Do they provide an API for all their services?
    • Do they allow you to manage billing either directly or via API?
    • Can you earn a margin for reselling their services?
    • How is channel conflict managed?
    • Is there a co-operative marketing program?
    • Are there programs to utilize services “in-house” at special rates?
    • Are there programs to teach you how to make money selling their services?

    If you feel that the majority of these are answered positively, it is likely you can build a good business with one or more of the hyperscale partners.

    Craig Deveson can be reached via Cloud Manager, Twitter or email.

    This article originally appeared at: http://www.thewhir.com/blog/prevent-aws-azure-google-eating-lunch-work

    7:46p
    French Cloud Firm Goes to Canada for ‘Patriot Act-Free’ Location

    Clever Cloud, a Paris-based Platform-as-a-Service startup, has added a North America region to its cloud by establishing a data center in Montreal. The company said it chose Canada over the U.S. because of the U.S. government’s digital surveillance practices.

    “We care about our customers’ privacy,” Clément Nivolle, head of marketing at Clever Cloud, wrote in a blog post announcing the new data center location. “We have selected this location because it is Patriot Act-free, and Canada has IP laws to protect your data.”

    The Patriot Act, which then-President George W. Bush signed into law in 2001 in the wake of the 9/11 terrorist attacks and whose key provisions President Barack Obama extended in 2011, is credited with giving the National Security Agency a green light for the broad mass surveillance practices that have come under criticism worldwide since former agency contractor Edward Snowden leaked information about the surveillance programs to the press last year.

    U.S. tech companies have been vocal about the damage NSA surveillance does to their ability to compete in the global market. Cloud service providers, which store customer data in data centers around the world, are especially vulnerable to this damage.

    Microsoft has been one of the most vociferous service providers. Brad Smith, the company’s general counsel and executive vice president of legal and corporate affairs, wrote that the reports that resulted from Snowden’s disclosures have created widespread concern around the world about data privacy, which has the potential to hamper further adoption of cloud services. “After all, people won’t use technology they don’t trust,” he wrote.

    Clever Cloud’s decision to go into Montreal is a small example of a real impact on the data center market. The company used a Canadian data center services provider called Netelligent for its expansion into North America.

    While small PaaS providers like Clever Cloud do not take a lot of space, the expansion still represents a deal potentially lost by a U.S. data center provider.

    8:37p
    OpenStack Silicon Valley 2014: Video Interview with Jonathan Bryce

    Last week at the 2014 OpenStack Silicon Valley conference, the WHIR spoke to Jonathan Bryce, Executive Director of the OpenStack Foundation. Bryce talked to WHIR reporter David Hamilton about some of the challenges and opportunities with OpenStack, and how web hosting and service providers fit into the growing OpenStack ecosystem.

    8:58p
    Local Telco StarHub Moves Into IO’s Singapore Data Center

    Singapore’s StarHub will use IO’s data center modules and data center platform in the Arizona company’s Singapore data center as part of the infrastructure that supports its portfolio of enterprise network solutions. The IO-powered offerings will be called “StarHub Data Center powered by IO.”

    StarHub provides connectivity and high-speed Internet, security, media and video management solutions and managed network services to enterprises. Using IO’s modules will allow it to deploy infrastructure in support of its enterprise hosted offerings quickly, while the platform will be used to manage and operate the infrastructure.

    IO has done well with its modules and continues to innovate when it comes to the software-defined data center. The company also does well in Singapore, which acts as headquarters of its Asia Pacific operation.

    IO opened a data center in Singapore in October 2013. Last year, it also released a video of some of its modules rolling through the streets in support of its anchor customer there, investment bank Goldman Sachs.

    IO calls the combination of its data center modules and its data center operating system, IO.OS, “Intelligent Control.” The setup allows operators to quickly deploy data centers in any location using the modules, while the built-in platform analyzes and monitors performance metrics across the IT equipment and supporting infrastructure.

    StarHub will provide customers with dashboards for services such as Low Latency Network, Managed Services, Distributed Denial of Service Protect, Content Delivery Network and Web Application Firewall. StarHub said several companies in the financial, media and IT sectors are already using its service.

    “We are very pleased that StarHub trusts IO Singapore to deliver the most technologically advanced data center services and solutions to their customers,” said George Slessman, IO CEO and product architect.

