Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, July 31st, 2013

    12:00p
    Data Center Jobs: ViaWest

    At the Data Center Jobs Board, we have a new job listing from ViaWest, Inc., which is seeking a Regional Data Center Manager in Chaska, Minnesota.

    The Regional Data Center Manager is responsible for managing facilities staff to deliver expected service levels to customers within the prescribed budget, managing team schedule to ensure customer support and facility coverage, serving as an operational leader in the region, coordinating work assignments among facilities staff, vendors, and contractors, reviewing work orders to ensure that assignments are completed, reviewing price quotes for the procurement of parts, services, and labor for projects, developing and maintaining positive relationships with customers, and responding to problems in a tactful and expedient manner. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    1:49p
    NVIDIA Acquires The Portland Group

    NVIDIA (NVDA) announced that it has acquired The Portland Group (PGI), an independent supplier of high performance computing compilers and tools. Working closely with the group over the past several years, NVIDIA has expanded its GPU presence in the HPC market, and PGI has an extensive history of innovation in compiler technology for Intel, IBM, Linux, OpenMP, GPGPU and ARM platforms. PGI was a wholly-owned subsidiary of semiconductor manufacturer STMicroelectronics.

    PGI and its staff will continue to operate under the PGI flag, developing OpenACC, CUDA Fortran and CUDA x86 compilers for multicore x86 and GPGPUs. PGI offers NVIDIA Fortran and C compilers with a high-level programming framework geared to accelerators, which are a growing key component of HPC systems. With a strong foothold in GPUs and now compilers, could interconnect or other acquisitions be in NVIDIA’s future?

    NVIDIA held its Asia GPU Technology Conference in Tokyo on Tuesday. The event is held in collaboration with the Tokyo Institute of Technology. The company is also launching its new Android-based SHIELD gaming and entertainment portable, which includes an NVIDIA Tegra 4 mobile processor and a 5-inch HD retinal touchscreen.

    SuperMicro SuperBlades

    At the GPU Technology Conference, SuperMicro announced new GPU SuperServers with support for NVIDIA Tesla K10, K20 and K20X Kepler accelerators. GPU-accelerated computing solutions featured at the event include the 12x GPU 4U, 4-node FatTwin; rackmount SuperServers, including the new 1U 3x GPU SYS-1027GR-TRT2+ featuring 512GB of memory in 16x DIMM slots; and ultra high-density GPU SuperBlade solutions delivering up to 256 TFLOPS in 42U of space.

    A new SBI-7127RG3 Blade supports 3x NVIDIA Tesla K20X SXM form factor GPUs, dual Intel Xeon E5-2600 series processors, up to 256GB memory and onboard BMC for IPMI 2.0 support. A new SBI-7127RG-E Blade supports 2x GPUs, dual Intel Xeon E5-2600 series processors, up to 256GB memory, 1x SSD or 1x SATA-DOM, and onboard BMC for IPMI 2.0 support.

    “Supermicro’s expanding line of GPU blade, server and workstation solutions is unrivaled in the marketplace and focus is placed on maximizing performance per watt, per dollar, per square foot for any application,” said Charles Liang, President and CEO of Supermicro. “We cover the widest range of performance requirements with single or multi GPU workstations and servers, twelve GPU FatTwin, and now support for up to thirty NVIDIA Tesla K20X GPU accelerators and twenty CPUs in our new 7U GPU SuperBlade. Close engineering collaboration with NVIDIA ensures our customers are enabled with the latest technologies and most optimized platforms for enterprise-class GPU computing.”

    2:05p
    AMD Adds New Embedded Low-Power Chip

    AMD announced a new low-power Accelerated Processing Unit (APU) in the G-Series SoC family. The new GX-210JA APU, a full System-on-Chip (SoC) design, uses one-third less energy than the previous low-power Embedded G-Series SOC at 6 watts maximum thermal design power (TDP), and approximately 3 watts expected average power.

    “The advance of APU processor design, the Surround Computing era, and ‘The Internet of Things’ has created the demand for embedded devices that are low power but also offer excellent compute and graphics performance,” said Arun Iyengar, vice president and general manager, AMD Embedded Systems. “AMD Embedded G-Series SOC products offer unparalleled compute, graphics and I/O integration, resulting in fewer board components, low-power use, and reduced complexity and overhead cost. The new GX-210JA operates at an average of approximately 3 watts, enabling a new generation of fan-less designs for content-rich, multimedia and traditional workload processing.”

    The GX-210JA is part of the AMD Embedded G-Series SOC processor family, which features ECC memory support, industrial temperature ranges, discrete-class AMD Radeon GPU and an integrated I/O controller. The new GX-210JA is currently shipping.

    “AMD multi-core APUs have played a key role in powering our latest cloud client platforms with excellent performance in an extremely compact and efficient form factor,” said Kiran Rao, director of Hardware Platforms, Dell Wyse. 

    “As the newest dual-core member of the AMD Embedded G-Series SOC family, the AMD GX-210JA offers the right level of performance, low-energy use, I/O integration and operating system support, plus a small footprint that should further simplify build requirements,” Rao noted.

    2:25p
    Cisco and NetApp Expand FlexPod Portfolio

    Cisco (CSCO) and NetApp (NTAP) announced that the two companies have broadened the FlexPod portfolio — a converged infrastructure of compute, network and storage — with new validated designs across the entire lineup. Highlighting the effort is the introduction of FlexPod Datacenter for core enterprise data centers and service providers, and FlexPod Express for medium-sized businesses and branch offices. A new FlexPod Select for data-intensive workloads is also available.

    The FlexPod portfolio combines NetApp storage systems, Cisco Unified Computing System servers, and Cisco Nexus fabric into a single, flexible architecture. FlexPod solutions are validated and tested to reduce risk and increase IT efficiency.

    FlexPod Select is the first family in the FlexPod portfolio to address targeted workloads. It combines NetApp E-Series and FAS storage systems, Cisco UCS C-Series servers, Cisco Nexus switches, and Cisco management software into an architecture that shortens time to insight and accelerates time to value. A validated FlexPod Select with Hadoop comes in two configurations – one with Cloudera’s distribution including Apache Hadoop and one with the Hortonworks Data Platform.

    Through a shared vision of a unified data center, NetApp and Cisco have rapidly grown FlexPod. Since its launch in 2010, FlexPod has grown to more than 2,400 customers and 900 channel partners across more than 35 countries.

    “Cisco and NetApp are committed to constant innovation in our joint FlexPod platform to address the business needs of our mutual customers,” said Jim McHugh, vice president, Unified Computing Marketing, Cisco. “This portfolio expansion delivers broader flexibility across the unified data center for an open, scalable, multi-cloud infrastructure that can now also support some of the world’s largest datasets.”

    The FlexPod Datacenter solutions now feature the Cisco Nexus 7000 Series Switch, giving scale to the FlexPod platform with 10 Gigabit Ethernet, and support for up to 768 10 Gigabit Ethernet ports. The Cisco Nexus 7000 FlexPod configuration enables end-to-end Fibre Channel over Ethernet (FCoE), delivering a unified Ethernet fabric, and provides DCI capabilities for multi–data center deployments.

    5:30p
    Buyer Beware: Considerations Before Purchasing Data Protection
    There are many major areas to consider before purchasing a data protection solution, as well as issues that affect businesses on multiple levels, including total cost of ownership and time spent on administration, maintenance, support and recovery.

    Faced with many data protection solutions, an enterprise IT person can find the challenge of selecting the right one for his or her organization daunting. Jarrett Potts, director of strategic marketing for STORServer, a provider of data backup solutions for the mid-market, wrote a series on the eleven items to consider before purchasing. We bring them all together in this post as a resource for our readers.

    Eleven Points to Consider Before Buying a Data Protection Solution

    Over the course of three posts, Jarrett examines 11 major items that must be considered before purchasing a data protection solution as well as the issues affecting businesses on multiple levels, including total cost of ownership (TCO) and time spent on administration, maintenance, support and recovery.

    More Points to Consider Before Buying a Data Protection Solution

    When selecting a data protection solution, it’s important to pick a product that’s easy to use. This column also explains why different data should be treated differently, how to eliminate the burden of virtual machine backups, and why all the talk shouldn’t focus on deduplication.

    Points to Consider Before Buying a Data Protection Solution

    This installment covers how making the right licensing decision can save you money, how to scale data protection, why to set different policies for different data and the role of unified recovery management.

    6:30p
    Data Center Jobs: eSite Systems

    At the Data Center Jobs Board, we have a new job listing from eSite Systems, LLC, which is seeking an Electrical Sales Position/Mechanical Sales Position in Plymouth Meeting, Pa.

    The Electrical Sales Position/Mechanical Sales Position is responsible for drawings and specification review; equipment selection and sizing; and quotation preparation. Sales Engineer will be required to forecast sales opportunities, understand and communicate competitor activities, and work with sales management from the product lines we represent. Candidate will be expected to spend 50 percent of time out of office on customer visits. All travel will be in local tri-state area (NJ, PA, and DE). To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    6:45p
    What Does Cloud Computing 2.0 Look Like?

    After a 19-year career with HP that included a six-year stint running Enterprise Architecture for HP.com, as well as being a founding member of HP’s public cloud efforts, Pete Johnson joined ProfitBricks in February 2013 as Senior Director of Cloud Platform Evangelism. You can follow him on his ProfitBricks blog.

    Pete Johnson, ProfitBricks

    There’s been a lot of coverage in the tech press lately about “per minute” billing of cloud services, which pushes the envelope on flexibility and may be putting pressure on Amazon to do the same. But what’s next? It’s fair to say that, after seven years of cloud computing, we’ve seen what Cloud Computing 1.0 is about. While better than traditional hosting, it’s still not all it could be. Not by a long shot.

    What does Cloud Computing 2.0 look like? Here are some ideas:

    1: Choose # of CPU cores, RAM, and amount of disk space independently

    How cloudy is it really when your IaaS provider makes you pick from a list of cookie-cutter sizes that make life easier for them instead of more flexible for you? Really, think about it. How can a service provider dictate what is right for your app or database? With most IaaS providers today, it’s like buying a car when you want the leather seats but they’re only available in a package that also includes a sunroof you don’t want. Why use an IaaS platform that makes you pay for resources you don’t need? How very 1.0!

    If you’re using a public cloud provider today, go through the following experiment:

    Pick one of your larger servers and look at its CPU utilization. Then look at its memory utilization. Finally, look at how much ephemeral (temporary) disk space you are actually using and divide it by the amount you had to pay for when you selected that instance size. Add up the three percentages and divide by 3, one for each dimension of your server, then subtract the result from 100 percent. That’s the percentage of your money that you’re wasting on that VM. Cloud Computing 2.0 allows you to embrace flexibility and pay for exactly what you use.
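    To make the arithmetic concrete, here is a minimal sketch of that experiment in Python. The utilization figures are hypothetical placeholders, not measurements from any particular provider.

        # Back-of-the-envelope version of the experiment above. Replace the sample
        # numbers with readings from your provider's monitoring console.

        def vm_waste(cpu_util, mem_util, disk_used_gb, disk_paid_gb):
            """Fraction of spend wasted on a fixed-size VM instance."""
            disk_util = disk_used_gb / disk_paid_gb
            avg_util = (cpu_util + mem_util + disk_util) / 3  # average of the 3 dimensions
            return 1 - avg_util  # the unused share of what you are paying for

        # Hypothetical example: 40% CPU, 55% RAM, 120 GB used of a bundled 250 GB
        waste = vm_waste(0.40, 0.55, 120, 250)
        print(f"Roughly {waste:.0%} of this VM's cost buys capacity you never touch.")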

    Find an IaaS provider that lets you select the number of CPU cores, RAM, and amount of block storage disk space independently from one another. That way, you can size your system to your specific needs instead of trying to take your square peg of a workload and wedge it into a round hole of an instance size.
    Plus, what about when your workload changes?

    2: Scaling has two dimensions, we just all forgot about the vertical

    The ability to hot swap memory, which you could add without interrupting a running server, has been around for a very long time and the concept of scaling vertically by adding resources to an existing server is hardly new. So why, when we all started moving workloads to cloud, did we all forget about this as an option? The answer is simple: first generation clouds like Amazon can’t do it, that’s why. Cloud 1.0 providers forced customers to scale horizontally – ideal for their profits, but not for the apps and the folks that manage them.

    Why limit yourself to a provider that only allows you to scale by adding more ill-fitting instances to your collection of virtual machines? Plenty of workloads (hello, traditional relational databases) benefit more from simply adding CPU cores or memory to an existing system than from adding more instances. Does using a single scaling dimension make sense when you can double your possibilities?

    Second generation IaaS providers realize this and include vertical scaling without a reboot as a standard feature of their core offerings.

    3: Better and more consistent performance through dedicated resources

    Here’s a common scenario in a first-generation cloud. First, launch five VM instances. Then, perform benchmark testing on all five. Throw four away and keep the one good one.

    Why do people do this? Because over-provisioning (putting more virtual CPUs than there are actual CPUs on a physical server) and a wild assortment of mis-matched commodity hardware lead to inconsistent performance in first generation IaaS.
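    As a rough illustration of why over-provisioning makes benchmark results so noisy, here is a toy sketch; the host core counts and vCPU allocations below are invented for the example, not figures from any real provider.

        # Toy model of CPU over-provisioning: the more vCPUs a provider sells per
        # physical core, the less of a core each VM can count on under contention.

        def oversubscription(vcpus_sold, physical_cores):
            """vCPUs promised to tenants per physical core on the host."""
            return vcpus_sold / physical_cores

        # Five hypothetical hosts that five freshly launched test VMs might land on
        hosts = [(64, 32), (48, 32), (96, 32), (40, 32), (128, 32)]  # (vCPUs sold, cores)

        for i, (sold, cores) in enumerate(hosts, 1):
            ratio = oversubscription(sold, cores)
            print(f"VM {i}: {ratio:.1f}:1 oversubscription -> "
                  f"about 1/{ratio:.1f} of a core in the worst case")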

    We’re told to code around this or be creative in our deployment tools, but should we really settle for that? A second-generation cloud is more creative in its virtual resource provisioning.

    Dedicating CPU cores and RAM to a specific VM, drawn from pools of higher-quality hardware, can be achieved using better virtualization techniques than Cloud 1.0 providers use today. That means better and more consistent performance for customers.

    4: Ease-of-use: It should look like Visio

    When you design an application architecture and the machines that comprise it, what do you do? Most people use a tool like PowerPoint or Visio to graphically represent components and use connective lines to show their network connections or data flow. So why do all the major IaaS providers still use lists of items in tables with check boxes and make you mentally connect them? Instead of forcing people to visualize components, just represent them visually.

    Cloud Computing 1.0’s core audience was the developer, who is trained to think of the world as a set of abstract concepts that can be mentally linked together. With global IT spend at roughly $4 trillion and public cloud revenues at around $4 billion, capturing a meaningful chunk of the other 99.9 percent of the available market means catering to a broader audience. Cloud 2.0 doesn’t ask people to make mental connections; it shows them in an easy-to-use graphical user interface. In fact, we’ve seen this before if you think about the kind of person who used an Apple IIe versus those who flocked to a Macintosh.

    Why Cloud Computing 2.0’s Time is Now

    VCRs got replaced by DVRs and streaming. Windows, not DOS, put a computer on every desktop and in every household. You don’t “Lycos” or “Alta Vista” anybody – you “Google” them. We’ve seen this pattern time and time again, where a first generation product creates a new, unimaginable marketplace but it always gets improved upon.

    1.0 is rarely the endgame. What we are sure to see in the years to come, and maybe even sooner, is an improvement in the features available in the public cloud. Per-minute billing is a great start, but more flexible instance sizes, live vertical scaling without a reboot, better and more consistent performance, and improved ease of use through graphical tools are among the features that Cloud Computing 2.0 promises to bring us.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    7:00p
    Nebraska, Iowa Lock Horns Again on $200 Million Project Oasis

    It looks like Nebraska and Iowa are squaring off again. The two Midwestern states have battled often over major data center projects, most recently the Facebook “Project Catapult” server farm that will be built in Altoona, Iowa.

    Now the economic development agency in Nebraska is once again pursuing a large project, this time with the codename “Project Oasis.” Omaha.com has the details on the mystery company’s interest in a site near Springfield in Sarpy County.

    Nebraska has been a finalist in large projects for Google, Microsoft and Facebook, all of which wound up going to Iowa. The exception has been Yahoo, which built a major production data center outside of Omaha. It looks like this latest project will also shape up as an Iowa-Nebraska showdown, as a large “codename” company was scouting sites earlier this year in Altoona, Iowa, near the new Facebook site. The codename? Project Oasis.

    Large companies conducting data center site selection searches commonly scout multiple states in order to have several options. Working with multiple states also gives the prospect negotiating leverage, as it seeks incentives from each state and then pits the economic development pitches against one another.

    If “Project Oasis” sounds familiar, it’s because the codename has been used before, in a 2010 project by Verizon Communications, which was weighing a huge build in upstate New York on the shores of Lake Ontario.

    That Verizon project didn’t work out, as it met with resistance from local residents, and Verizon opted to buy rather than build as it acquired Terremark for $1.4 billion. But you don’t suppose they’d just recycle codenames? Hmmmm …

    In any event, we’ll continue to track this project in whichever state it lands.

    7:15p
    DataBank Moves Up the Stack, Adds Managed Services

    The interior of DataBank’s new North Dallas data center, featuring a data hall on the left and office space and conference rooms on the right. (Photo: DataBank)

    Custom data center and colocation provider DataBank is moving up the stack a bit, announcing new managed services and Data Center Infrastructure Management (DCIM) capabilities. The managed services are part of the next phase in their strategy, along with an ongoing geographic expansion.

    The new suite of service offerings includes security services and enhanced client visibility into colocated IT equipment housed within the company’s facilities. The rollout includes an expansion of data collection and monitoring with the addition of DCIM, “IT Stack” monitoring tools, and security solutions for threat management, log management, scanning, and Distributed Denial of Service (DDoS) mitigation.

    “DataBank delivers the very highest-level infrastructure and environments in the markets we serve, and layering in these new services was a logical extension to our portfolio,” said Mike Gentry, VP of Operations at DataBank. “We are responding to customer demand and have moved forward with a number of best-in-class service partners to aid in the delivery of one-stop solution capabilities to enhance our end-user experience.”

    The DataBank building once served as the main Federal Reserve Bank office in Dallas. The building’s sturdy infrastructure, including generous floor loads and ceiling heights, provided an unusual opportunity to build the new economy in the footprint of the old. It also has abundant power capacity, residing on an extremely resilient portion of the Dallas power grid known as the “700 Network,” and it has more power than space: the building was using just 4 megawatts of the 15 available when it was 85 percent full.

    After eight years and nearly filling up 130,000 square feet of data center space in the original DataBank building, the company began extending its model – first in the Dallas metroplex, and then in other cities around the US. To that end, it opened a new data center in Richardson, Texas, and acquired VeriSpace, a data center provider in St. Paul, Minnesota.

    Private equity firm Avista Capital acquired a majority interest in DataBank last year and also sold the 400 South Akard Street property to Digital Realty Trust in a sale-leaseback transaction. That relationship also led to the Richardson, Texas facility: DataBank leased a powered shell building from Digital Realty at its Digital Dallas campus in Richardson.

    7:45p
    Top Ten Data Center Stories, July 2013
    Data Center Knowledge readers in July enjoyed our stories that included numbers – the number of servers at Microsoft and investment numbers from Google.

    During the month of July, Microsoft news dominated our readers’ attention, with a story that Microsoft now has 1 million servers topping our list of most popular articles. Another article about Google’s ongoing data center investment also garnered interest. Other topics this month included the immersion data center, flooding in Toronto impacting data centers, and how the HIPAA final rule affects data center owners and cloud providers. Here are the most viewed stories on Data Center Knowledge for the month of July 2013, ranked by page views. Enjoy!

    Ballmer: Microsoft has 1 Million Servers – July 15 – Microsoft now has more than 1 million servers in its data centers, according to CEO Steve Ballmer, who confirmed the number during his keynote address during last week’s Worldwide Partner Conference.

    Google’s Data Center Building Boom Continues: $1.6 Billion Investment in 3 Months – July 19 – Google’s extraordinary data center building boom continues to drive its spending, as the company invested a record $1.6 billion in its data centers in the second quarter of 2013.

    The Immersion Data Center: The New Frontier of High-Density Computing – July 1 – Geoscience specialist CGG has filled an entire data center with tanks of servers submerged in a liquid coolant similar to mineral oil. Here’s a look at this unique data center in Houston and its implementation of immersion cooling technology from Green Revolution Cooling.

    Toronto Flooding KOs Data Center Cooling Systems – July 9 – A massive rainstorm caused widespread flooding and power outages in Toronto, which created challenges for some tenants at the city’s largest data center hub. 151 Front Street maintained power, but experienced problems with cooling systems.

    What the HIPAA Final Rule Means for Data Centers and Cloud Providers – July 9 – Data centers and cloud providers servicing the health care industry should take particular note that the Final Rule of HIPAA (that went into effect in March) clarifies that they are officially considered “business associates” under HIPAA and must therefore comply with all applicable privacy and security requirements. Matthew Fischer of the law firm Sedgwick, LLP, explains what data centers and their subcontractors need to do to be in compliance with HIPAA.

    SolidFire Raises $31 Million, Says Its SSD is Now Cheaper Than Disk – July 25 – SolidFire announced it has raised $31 million in funding led by Samsung Ventures. It also released the SF9010, which takes advantage of the rapid advances in the flash market and crosses a major cost threshold.

    365 Main Acquires Data Center in Bay Area – July 11 – Data center operator 365 Main has expanded its presence in the San Francisco Bay Area with the acquisition of an Evocative, Inc. facility in Emeryville, Calif.

    SoftLayer Becomes Part of IBM’s SmartCloud – July 8 – As IBM closes its acquisition of SoftLayer, the companies outline their synergies and highlight a recent win with a “born on the cloud” customer that represents the constituency IBM hopes to capture with the cloud deal.

    Number of U.S. Government IT Facilities Rises to 7,000 – July 25 – The number of IT facilities included in the Federal Data Center Consolidation Initiative (FDCCI) continues to grow. The number, which started at 432 in 1999, grew to 3,000 last year and has now exploded to nearly 7,000. The culprit: server closets.

    GI Partners Buys LA Telecom Hub One Wilshire – July 18 – Private equity firm GI Partners has acquired One Wilshire, the leading carrier hotel in Los Angeles and one of the most wired buildings in the world. The reported deal price of $437 million reinforces the premium value of data center real estate.

    Stay current on Data Center Knowledge’s data center news by subscribing to our RSS feed and daily e-mail updates, or by following us on Twitter or Facebook. DCK is now on Google+ and Pinterest.


    8:00p
    What The Inside of an NSA Data Center Looks Like

    There’s been intense interest lately in the new National Security Agency data center in Bluffdale, Utah and its capabilities. The NSA isn’t likely to allow the world a look inside the new Utah data center anytime soon. But that hasn’t always been the case. A 2001 Discovery Channel documentary provided an unprecedented look inside the NSA’s facility in Fort Meade, Maryland, including the agency’s supercomputing facility.

    The systems were housed in a specially designed facility on the second floor of the two-story building, while the first floor was dedicated to massive cooling units featuring 8,000 tons of water-chilled Fluorinert, a liquid cooling agent used to bring down the temperature of electronic components. The facility provided an early example of the two-tier design that separates the IT equipment from mechanical and electrical infrastructure, an approach commonly seen today.

    The agency’s cryptologists rely on these computers’ power and speed to make and break codes. At the time, one of the agency’s most powerful supercomputers was a system from the Thinking Machines Corporation that is highlighted in this video excerpt from the documentary. This video runs 1:36.

    Other excerpts from this broadcast are available on the National Geographic web site. For additional data center videos, check out our DCK video archive and the Data Center Videos channel on YouTube.

    8:15p
    Report: QTS Planning to Go Public in IPO

    Rows of cabinets fill the huge QTS (Quality Technology Services) data center in Suwanee, Georgia. The company is reportedly planning an IPO. (Photo: QTS)

    Speculation about imminent public offerings and acquisitions among leading providers is a constant in the data center business. It looks like one of those long-rumored IPOs may be closer to materializing.

    Bloomberg reported Tuesday that QTS (Quality Technology Services) is planning to pursue an IPO, citing “people with knowledge of the matter,” who say the company is working with Goldman Sachs and Jefferies Group on the offering.

    QTS declined comment. “As a privately held company, QTS does not discuss rumors or speculation about future financial transactions or events,” the company said. It made a similar statement earlier this year in response to reports that QTS was seeking to convert to a real estate investment trust (REIT), a move that could make the company more attractive to investors. The IRS is currently reviewing its rules for REIT conversions for data centers and other newer real estate property classes.

    Bloomberg said QTS has made a confidential filing with U.S. securities regulators and may release information to investors prior to an IPO later this year.  This process would be similar to that followed by Interxion prior to its public offering in 2011.

    A National Footprint Built Via Acquisitions

    QTS (Quality Technology Services) was founded in 2005, and has grown from a single facility in Kansas to a national chain operating more than 3.8 million square feet of data center space, including several of the largest facilities in the industry. QTS is the leading provider in the Atlanta market, where it operates a huge data center downtown and also has a major data center in the suburb of Suwanee. The company also has data centers in Miami; Richmond, Va.; Jersey City, N.J.; Dallas; Sacramento, Calif.; Santa Clara, Calif.; and three facilities in Kansas.

    QTS is part of the Quality Group of Companies, founded in 1962 by James Williams and is now headed by his son, Chad Williams. QTS has grown through acquisitions, and in 2009 announced a $150 million investment from private equity firm General Atlantic, a veteran player in the Internet infrastructure space. Earlier this year the company boosted its credit line to $575 million, and acquired a massive facility in Dallas as well as the former Herakles data center in Sacramento.

    QTS is distinctive in that it provides a broad spectrum of data center services, including wholesale data center space, retail colocation, and managed hosting. Its large floorplates in Atlanta, Richmond and Dallas provide QTS with unusual flexibility in configuring space and provisioning power for its customers.

    Earlier this week the company introduced a website failover service, in which it will help customers monitor response time and configure DNS settings to manage a smooth transition in the event of an outage or sluggish response.

    Here’s a look at QTS’ data center footprint. (Image: map of QTS data center locations)

    8:27p
    Cloud Price Wars: ProfitBricks Slashes Prices By Half

    Cloud computing company ProfitBricks will double its capacity at the SuperNAP in Las Vegas, along with slashing its pricing. Here’s a look at the massive power spine at the SuperNAP, which has experienced strong growth for its cloud ecosystem. (Photo: Switch)

    ProfitBricks has decided to skip the incremental cloud computing price cuts seen at leading players, and slash its pricing in half in a bid to make itself the clear price performance leader in the Infrastructure as a Service (IaaS) sector. The company was already competitively priced, so this means instance prices are roughly 50 percent lower than major cloud providers like Amazon Web Services and Rackspace.

    “Cloud computing pricing is inflated,” said Andreas Gauger, co-founder of ProfitBricks. “The industry didn’t adjust the price according to the savings that they actually have in their costs. There’s talk of a price war going on, but the truth is the pricing is high, the margins are high.”

    “We are cutting our pricing of cores and RAM in half,” Gauger said. “It makes us the price performance leader in the space.”

    Is this really a drastic move, or is ProfitBricks just keeping the other cloud providers honest? At the very least, it may put ProfitBricks on the map for cost-conscious cloud consumers, and raise ProfitBricks’ profile in the cloud business. Gauger indicates that the company’s margins were high, and that margins have been adjusting slower than the cost. “The new pricing is roughly the same as the gross margin in 2011,” he said.

    A Page From the 1&1 Playbook?

    Gauger and the founding team of ProfitBricks previously co-founded shared hosting provider 1&1 Internet, which reveals a bit of the strategy behind the tactic. 1&1 Internet is a mass market/shared hosting juggernaut that was very aggressive during the hosting price wars several years ago. 1&1’s pricing undercut the competition, and it worked out for the company. However, there weren’t very many winners during the hosting price wars, with several providers struggling, folding, or being gobbled up in consolidation plays.

    Part of the reason the aggressive pricing worked for 1&1 was the sheer scale of the company: it could put pressure on its pricing and rely on the long tail. The truth is that, as with cloud, the profit margins on shared hosting are really good.

    ProfitBricks will now be 45 percent to 66 percent less expensive than Amazon EC2, and 43 percent to 80 percent cheaper than Rackspace cloud instances. ProfitBricks doesn’t offer reserved instances, an Amazon feature that can slash cloud costs to even lower levels.

    ProfitBricks is seen as a premium cloud, offering beefy hardware, an ultra-fast InfiniBand network and a strong focus on a user-friendly management interface. ProfitBricks released a benchmark study from Cloud Spectator indicating its cloud runs at speeds twice that of Amazon Web Services and other major vendors.

    “We’re twice as fast and half as expensive,” said Gauger.

    Doubling Capacity in U.S.

    ProfitBricks also disclosed that it is doubling capacity in the U.S. and Germany. The company houses its U.S. infrastructure at the SuperNAP in Las Vegas, so the announcement doubles as a win for that facility as well.

    Today’s pricing move will no doubt rekindle talks of commoditization of cloud. Based on Amazon’s public pricing, ProfitBricks customers save at least 45 percent in a one-to-one comparison. For example, an Amazon M1 Medium instance with 1 core, 3.75GB of RAM and 250GB of block storage is $0.155 per hour or $111.40 per month. A similar instance on ProfitBricks costs $0.0856 per hour or $61.65 per month.
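    For readers who want to check the math, here is a minimal sketch of that comparison in Python; it assumes a roughly 720-hour (30-day) billing month, which approximately reproduces the monthly figures quoted above.

        # Reproducing the per-instance comparison cited above.
        HOURS_PER_MONTH = 720  # assumed 30-day billing month

        aws_m1_medium = 0.155          # $/hr: 1 core, 3.75 GB RAM, 250 GB block storage
        profitbricks_similar = 0.0856  # $/hr for a comparable ProfitBricks instance

        def monthly(rate):
            return rate * HOURS_PER_MONTH

        savings = 1 - profitbricks_similar / aws_m1_medium
        print(f"AWS M1 Medium:  ${monthly(aws_m1_medium):.2f}/month")
        print(f"ProfitBricks:   ${monthly(profitbricks_similar):.2f}/month")
        print(f"Savings:        {savings:.0%}")  # about 45 percent, as the article notes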

    While the company was founded in 2010, its founders have been at the hosting and cloud game for many years, previously building 1&1 into a top competitor. HP veteran Pete Johnson joined the ProfitBricks team earlier this year; Johnson ran Enterprise Architecture for HP.com and was a founding member of HP’s public cloud team.

    A typical event in the cloud world these past few years is that a market leader like Amazon shaves a few pennies off its price and the rest of the competitors either follow suit or note that they’re already cheaper. ProfitBricks has skipped a few chapters ahead here.

    “It’s shadow boxing,” said Gauger. “(The price cuts) are going too slowly. We have to speed it up.”

    The pricing will likely be attractive to CIOs and CTOs trying to justify the cost savings during this upcoming budget season. “It’s always surprised me that companies can outgrow the public cloud,” said Gauger. “There is no real reason for that to happen other than price, and now we’ve removed that barrier as well.”

