Data Center Knowledge | News and analysis for the data center industry
 

Friday, January 18th, 2013

    7:30a
    Infinity SDC Opens Slough Data Center

    UK data center operator Infinity SDC announced that it has opened its Slough data center facility, in the heart of the UK’s IT corridor. Slough is a key communications hub connecting the M4 corridor with the rest of the world.

    As Infinity’s first purpose-built data center, it was developed with real estate investment trust company SEGRO, and provides 92,000 square feet of technical space. The facility is supported by 34 MVA of power and has 250 kW data halls. It provides connectivity options to multiple ISPs and a large choice of telecommunications carriers, including BT, Virgin Media, Colt, Zayo, Level 3, Easynet and Geo Networks.

    “We are delighted that our new facility in Slough is complete and ready for clients to use,” said Stuart Sutton, Infinity CEO. “This site offers customers a whole range of dedicated, shared and modular co-location solutions to meet their exacting needs in a low risk area with low fibre latency to London.

    “Infinity Slough will provide much needed data centre capacity in the Thames Valley area, home to many of Europe’s leading IT and computing companies and a hub of national significance,” Sutton added. “The area already has an extremely high concentration of communications companies, and it will become increasingly significant in the future due to initiatives such as Crossrail. Infinity Slough is a fantastic addition to our portfolio of data centres in and around London, further strengthening and extending our offer to clients.”

    12:30p
    Data Center Jobs: Online Tech Seeking Managers

    At the Data Center Jobs Board, we have two new job listings from Online Tech, LLC, which is seeking an Infrastructure Manager and a Client Services Manager in Ann Arbor, Michigan.

    The Infrastructure Manager is responsible for maintaining and reporting on uptime objectives; handling prospect and client interaction during sales visits, deployments and infrastructure incidents; working with outside independent auditors to ensure the company adheres to its procedures; identifying process improvements for audits, clients and general operational excellence; and working closely with the Client Services Manager to ensure successful deployments. To view full details and apply, see job listing details.

    The Client Services Manager is responsible for measuring, analyzing and reporting on NPS (Net Promoter Score) data; designing and managing the client onboarding process; designing and managing the client help desk process; recruiting, training and retaining 24/7 staff for deployments and the help desk; coordinating client communications for maintenance windows, incidents, regular operations and updates; and working with product developers to create and maintain deployment punch lists for new products. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    1:00p
    Facebook Builds Exabyte Data Centers for Cold Storage

    Jay Parikh, VP of Infrastructure Engineering at Facebook, presents the company’s “cold storage” methodology, which the social media giant uses to store user photos. (Photo by Colleen Miller.)

    What do you do with an exabyte of digital photos that are rarely accessed? That was the challenge facing Jay Parikh and the storage team at Facebook.

    The answer? A dedicated data center at its Prineville, Oregon campus that houses older photos in a separate “cold storage” system, dramatically slashing the cost of storing and serving these files. The facility has no generators or UPS systems, but can house up to an exabyte of data.

    Facebook stores more than 240 billion photos, with users uploading an additional 350 million new photos every single day. To house those photos, Facebook’s data center team deploys 7 petabytes of storage gear every month.

    But not all of that photo data is created equal. An analysis of Facebook’s traffic found that 82 percent of traffic was focused on just 8 percent of photos. “Big data needs to be dissected to understand access patterns,” said Parikh, the Vice President of Infrastructure Engineering at Facebook.

    Tiered Storage, With a Twist

    The answer was a tiered storage solution that could meet the needs of Facebook’s 1 billion users. Tiered storage is a strategy that organizes stored data into categories based on priority – typically hot, warm and cold storage – and then assigns the data to different types of storage media to reduce costs. Rarely used data is typically shifted to cheaper hardware or tape archives, a move that saves money but often comes with a tradeoff: those archives may not be available instantaneously. As an example, Amazon’s new Glacier cold storage is cheap, but it takes 3 to 5 hours to retrieve files.

    That wouldn’t work for Facebook, whose users want to see their photos immediately. “We need to have cold storage, but a fast user experience,” said Parikh, who discussed the project this week at the Open Compute Summit. “And we don’t want to use any more power than needed.”

    Facebook developed software that categorizes photos and shifts them between the three storage tiers. The savings are captured through dedicated hardware that can store more photos while using less energy.
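    A minimal sketch of what such a categorization policy might look like, classifying a photo by access recency and frequency. This is illustrative only, not Facebook’s actual software; the tier names, thresholds and signals are assumptions.

        from datetime import datetime, timedelta

        # Illustrative thresholds; a real policy would weigh many more signals.
        HOT_WINDOW = timedelta(days=7)
        WARM_WINDOW = timedelta(days=90)

        def assign_tier(last_access: datetime, weekly_reads: int, now: datetime) -> str:
            """Classify a photo into the hot, warm or cold storage tier."""
            age = now - last_access
            if age <= HOT_WINDOW or weekly_reads > 100:
                return "hot"    # frequently served: cache / fast disk
            if age <= WARM_WINDOW or weekly_reads > 0:
                return "warm"   # occasionally served: standard storage
            return "cold"       # rarely served: high-density, low-power archive

        # A three-year-old photo nobody has opened this week lands in cold storage.
        print(assign_tier(datetime(2010, 1, 18), weekly_reads=0, now=datetime(2013, 1, 18)))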

    Last year Facebook built a 62,000 square foot data center on its Prineville campus for its cold storage system. The facility can hold 500 racks, each storing 2 petabytes of data, for a total of 1 exabyte of cold storage. Similar facilities will be built at Facebook’s data center campuses in North Carolina and Sweden, Parikh said.

    The cold storage data center has no generators or uninterruptible power supply (UPS), with all redundancy handled at the software level. It also uses computer room air conditioners (CRACs) instead of the penthouse-style free cooling system employed in the adjacent production data centers in Prineville.

    More Storage, Less Power

    Most importantly, each rack uses just 2 kilowatts of power instead of the 8 kilowatts drawn by a standard Facebook storage rack. Yet Parikh said each cold storage rack will be able to store eight times the volume of data of a standard rack.

    How does it manage this? The hardware itself is not radically different, but uses a technology called shingled magnetic recording that features partial overlapping of contiguous tracks, thereby squeezing more data tracks per inch.
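    Taken together, those figures imply a large jump in data stored per watt. A back-of-the-envelope check using only the numbers cited above (2 kW vs. 8 kW per rack, eight times the data):

        standard_rack_kw, cold_rack_kw = 8.0, 2.0   # rack power draw, as cited
        capacity_ratio = 8.0                        # cold rack holds 8x the data

        # 8x the data at 1/4 the power works out to ~32x more data per watt.
        data_per_watt_gain = capacity_ratio * (standard_rack_kw / cold_rack_kw)
        print(data_per_watt_gain)   # 32.0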

    Parikh said the system is architected so that different “chunks” of image data don’t share the same power supply or top-of-rack switch, avoiding a single point of failure that could lose data. And if a user deletes a photo, it is deleted from cold storage as well.
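    A minimal sketch of the kind of placement constraint Parikh describes: reject any rack that would put two copies of a chunk behind the same power supply or top-of-rack switch. The data structures and field names here are illustrative assumptions, not Facebook’s actual system.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class Rack:
            rack_id: str
            power_supply: str   # shared power feed identifier
            tor_switch: str     # top-of-rack switch identifier

        def violates_failure_domain(candidate: Rack, holders: list) -> bool:
            """True if the candidate rack shares a power supply or ToR switch
            with a rack that already holds a copy of this chunk."""
            return any(candidate.power_supply == r.power_supply or
                       candidate.tor_switch == r.tor_switch for r in holders)

        placed = [Rack("r1", power_supply="psu-A", tor_switch="tor-1")]
        print(violates_failure_domain(Rack("r2", "psu-A", "tor-2"), placed))  # True: shared PSU
        print(violates_failure_domain(Rack("r3", "psu-B", "tor-2"), placed))  # False: independent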

    Not many companies face storage challenges at the kind of scale seen at Facebook. But Parikh believes more companies will be confronting these massive storage issues.

    “Our big data challenges that we face today will be your big data challenges tomorrow,” he said. “We need to keep coming up with advanced solutions to our storage problems. The most important innovations are the problems people solve before the scale of the problem emerges. I believe big data is one of those problems. And we won’t keep up unless we work together.”

    1:30p
    Supercomputing & Efficiency: November 2012 Exascalar, Part II

    Winston Saunders has worked at Intel for nearly two decades and currently leads server and data center efficiency initiatives. Winston is a graduate of UC Berkeley and the University of Washington. You can find him online at “Winston on Energy” on Twitter.

    Winston Saunders
    Intel

    This blog post builds on the previously published Exascalar Results from November 2012, which give insights into the Exascalar analysis of the November 2012 Green500 list. In Part II, I’ll look into some trends that highlight the “necessary but not sufficient” dependency of supercomputing leadership on efficiency.

    Trends Over Time

    The first trend is the time development of Exascalar. Overlaying the current Exascalar plot with the one from June 2012 emphasizes changes, as shown in the figure below.

    Click to enlarge graphic.

    The most remarkable new systems are in the low-performance, high-efficiency regime. In this case, the leading efficiency system is Beacon, at the National Institute for Computational Sciences/University of Tennessee, based on the Xeon Phi architecture. Recall that the June 2011 Exascalar analysis showed a similar efficiency revolution being launched with the BlueGene architecture. So watch this space.

    Looking solely at the ordinal Green500 and Top500 rankings conceals the vital role efficiency leadership plays in Exascalar. The number one Exascalar computer is first in performance and third in efficiency, but the number two Exascalar system is second in performance and 29th in efficiency.

    One way to look at this is a histogram of the efficiency and performance scalars of the Exascalar systems, as shown below. Recall that the performance and efficiency scalars are just the negative logarithm of performance and efficiency normalized to the equivalent of 10^18 flops in 20 megawatts. Each graph shows two curves, one for just the Top10 Exascalar systems and one for the entire population. Both curves are normalized to the size of their populations for display purposes.
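    Following that definition, here is a short sketch of how the two scalars can be computed: each is the negative base-10 logarithm of a system’s performance or efficiency relative to the exascale reference of 10^18 flops in 20 MW (i.e., 50 Gflops per watt). The example system below is illustrative, not taken from the list.

        import math

        EXA_PERF = 1e18              # exascale performance reference, flops
        EXA_EFF = 1e18 / 20e6        # 10^18 flops in 20 MW = 5e10 flops per watt

        def performance_scalar(perf_flops):
            return -math.log10(perf_flops / EXA_PERF)

        def efficiency_scalar(flops_per_watt):
            return -math.log10(flops_per_watt / EXA_EFF)

        # Illustrative system: 10 petaflops at 2 gigaflops per watt
        print(performance_scalar(10e15))   # 2.0  -> two decades short of exascale performance
        print(efficiency_scalar(2e9))      # ~1.4 -> about 1.4 decades short on efficiency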

    Click to enlarge graphic.

    The “big” observation is that the range of system efficiency for the Top 10 systems is strongly biased toward the upper end of the distribution. The performance distribution for the Top 10 spans a greater range and is “flatter.” Looking at the whole Top500 population, there appears to be a deeper systematic difference between the distributions of performance and efficiency data. (If there’s interest – just request it in the comments section below – I can blog about it.)

    The last trend of interest is the historical trend of Exascalar. The top score didn’t change much, but the overall trend remains consistent. The top Exascalar “best fit” trend is consistent with a factor-of-ten improvement in efficiency or performance every 2.7 years. This is equivalent to a doubling time of t2 = 0.8 years!
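    The doubling time follows directly from that ten-fold trend: a factor of ten every 2.7 years doubles every 2.7 × log10(2) ≈ 0.8 years.

        import math

        years_per_tenfold = 2.7
        doubling_time = years_per_tenfold * math.log10(2)
        print(round(doubling_time, 2))   # 0.81 years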

    The median trend diverges from the other two trend lines in the figure. There are several possible explanations, but it is likely related to the rapid pace of innovation among top-end systems (80% of the Top 10 Exascalar systems are new since 2011) or the rate of replacement for systems already in operation.

    Click to enlarge graphic.

    This latter point might be worth some scrutiny. At the base of the “Exascalar triangle,” systems with 10^8 Mflops (100 teraflops) of performance range in power consumption from roughly 2.0 MW to about 20 kW. That’s roughly a $2.0 million difference in annual electricity expenses for the same performance.
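    That dollar figure checks out under a typical electricity rate. A quick check assuming roughly $0.115 per kWh (the rate is my assumption, not from the post):

        high_mw, low_mw = 2.0, 0.02      # 2.0 MW vs. 20 kW, as cited
        rate_per_kwh = 0.115             # assumed $/kWh; varies by region and contract
        hours_per_year = 8760

        delta_kw = (high_mw - low_mw) * 1000
        annual_cost_delta = delta_kw * hours_per_year * rate_per_kwh
        print(round(annual_cost_delta / 1e6, 2))   # ~1.99 -> roughly $2 million per year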

    As always, your thoughts and comments are appreciated.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:30p
    “Zero Dark Thirty” Optimized With Avere Systems

    Some of the visual effects for the new movie “Zero Dark Thirty” were provided by Image Engine, which has saved money by using storage from Avere Systems.

    Network attached storage optimization provider Avere Systems announced that Image Engine, a company specializing in visual effects for feature films, has tapped Avere to dramatically reduce costs as part of its new storage architecture.

    Storage costs were significantly reduced for Image Engine with a cluster of Avere FXT 3500 Edge filers. Image Engine has been involved in feature films such as Zero Dark Thirty, Battleship, District 9, and The Twilight Saga: Eclipse and Breaking Dawn.

    “Our continued success has required us to do a greater amount of rendering which puts significant pressure on the servers and systems,” said Gino Del Rosario, head of technology, Image Engine. “By leveraging Avere, we are able to scale as well as send and route massively large files without any challenges. The Avere architecture separates storage performance from storage capacity allowing us more flexibility to address studio needs in a compartmentalized manner – something our previous solution addressed in a totally different way.”

    All VFX software currently used by Image Engine, including all 3D and 2D applications, runs through the Avere Edge filer cluster. One of the key features that has proven helpful for Image Engine is the analytics provided by the graphical user interface, which enables the company to monitor performance in order to pinpoint hotspots, IOPS and latency trends.

    Image Engine announced it has provided over 300 shots for Zero Dark Thirty, which has been nominated for a 2013 Visual Effects Society Award for Outstanding Visual Effects in a Feature Motion Picture. “High-end ‘invisible’ effects such as digital environments and hard surface animation have become increasingly important core competencies at Image Engine,” said Steve Garrad, Visual Effects Executive Producer. “A film like Zero Dark Thirty relies upon Image Engine for highly nuanced, photorealistic effects, and I think the results speak for themselves.”

    3:00p
    Networking News: Carney Named CEO at Brocade

    Here’s a roundup of some of this week’s headlines from the network sector:

    Brocade names Lloyd Carney as CEO. On Monday Brocade (BRCD) announced that its Board of Directors unanimously appointed networking industry veteran Lloyd Carney to the position of chief executive officer, effective immediately. With nearly 30 years in the high-tech industry, Carney has held a number of senior leadership positions at various networking and semiconductor companies. Most recently he was CEO and a board member at Xsigo Systems. “I believe Brocade is poised to leverage its heritage of strong innovation and significantly disrupt the status quo in the data-networking industry,” said Carney. “There are profound changes happening across high tech today and Brocade has a great opportunity to lead that transformation through differentiated products and customer focus. Success here will accelerate profitable growth for our company and drive further value for our shareholders. I am very excited and honored to lead Brocade at this time.” Carney succeeds Michael Klayko, who resigned in August after serving as CEO since 2005.

    Level 3 selected by NATO. Level 3 Communications (LVLT) announced that it has signed a contract with the NATO Communications and Information Agency (NCIA) to install and maintain an IP Virtual Private Network to be used by the NATO-Russia Council Cooperative Airspace Initiative (NRC CAI). The aim of this initiative is to enhance airspace transparency between NATO and Russia, identify any suspicious aircraft activities, and provide early warnings through the monitoring of NATO-Russian airspace. As part of this project, the Level 3 network will connect a range of NATO monitoring facilities and enable real-time display and observation of commercial airspace activities. “The Level 3 network and our IP VPN services are designed for performance, security and productivity, and we are pleased to be selected by NATO to provide IP VPN-based infrastructure between the key locations of the CAI network,” said James Heard, Level 3 regional president of EMEA. “With our services, NATO will be able to exchange air traffic information and direct voice coordination with the security and reliability they need.”

    Alcatel-Lucent and Reliance enter billion-dollar long-term contract. India’s Reliance Communications and Alcatel-Lucent (ALU) have announced an end-to-end network managed services contract aimed at delivering a superior customer experience in Eastern and Southern India up to 2020. Extending an existing relationship between the two companies, the contract is valued at over $1 billion and will deliver world-class, seamless voice and data communications services to Reliance customers. Alcatel-Lucent will bring independent wireless and wireline teams together into a single network management organization, allowing Reliance Communications to strengthen its focus on growing its business. “We are happy to announce our new partnership with Alcatel-Lucent, which is a transformative leap from the limited scope and vision of traditional outsourcing of services,” said Gurdeep Singh, Chief Executive Officer, Wireless Business at Reliance Communications. “This will enable Reliance Communications to take the lead in offering next generation telecom solutions that will meet and exceed the expectations of our customers, and help them to transit from voice-led usage to a seamless data experience across multiple devices and platforms.”

    Comcast selects Ciena for 100G. Ciena (CIEN) announced that it is providing next-generation 100G coherent optical technology to Comcast Cable for use in its core network. The deployment will help Comcast meet growing customer demand for high-capacity services and applications, such as HD video, mobile data and cloud computing. It also will help Comcast more easily evolve its network and create energy efficiencies by reducing the number of network components, including optical regenerators, required in its network. Ciena is also providing Comcast with network management system capabilities and a range of professional services via its Ciena Specialist Services portfolio. “Adding Ciena’s WaveLogic 3 to our already installed 6500 Packet-Optical Platforms lets us leverage that investment to deliver more content, faster Internet speeds and enable new cloud-based applications for our customers, while also providing future core 400G scalability,” said Kevin McElearney, senior vice president, Network Engineering, at Comcast Cable. “As we scale beyond our 100G network, Ciena’s coherent optical technology will help us to deliver on our ongoing commitment to maintain and operate a high-performance, feature-rich advanced network.”

    3:30p
    Data Center Links: GoGrid, Phoenix NAP, Peak 10

    Here’s our review of some of this week’s noteworthy links for the data center industry:

    Phoenix NAP selected by Main Advantage Technology. Phoenix NAP announced that Main Advantage Technology, a leading IT provider, has selected the data center for its hosting needs. “We succeed as a Managed Service Provider and IT consultants by tailoring the most reliable and effective solutions for each of our clients,” said Scott Barclay, President of Main Advantage. “There are great benefits to virtualization and colocation, and large enterprises have been enjoying those benefits for some time now. We needed the right mix of cloud services, physical servers, and data center services to bring those same benefits to our small-business clients. Phoenix NAP provides all that and more. The projects we’ve built around Phoenix NAP solutions have been stunning successes, and we have plans for many more.” The company will leverage the Phoenix NAP Secured Cloud to assist its clients with their IT needs and to support mobile workforces without the need for large capital expenditures.

    GoGrid provides cloud for ScaleArc iDB. GoGrid announced the availability of ScaleArc iDB in the GoGrid cloud. By transparently deploying ScaleArc iDB with their cloud or hybrid cloud infrastructure, GoGrid’s customers can take advantage of real-time visibility and analytics to solve problems in minutes, instant horizontal scaling, up to 24x faster query response times, and the ability to change database architecture on-the-fly without modifying applications. ScaleArc iDB is a Layer 7 SQL Traffic Engine that inserts transparently between applications and databases and provides SQL-aware load balancing and high availability. It can be used for MySQL or Microsoft SQL Server instances hosted on GoGrid’s cloud or hybrid cloud infrastructure. “Combined with GoGrid’s cloud infrastructure, ScaleArc iDB can solve relational database issues that have plagued the industry for decades,” said Varun Singh, founder and CEO of ScaleArc. “GoGrid customers can now take their MySQL or MS SQL Server environments to the next level of performance combined with the flexibility and scalability of the GoGrid cloud.”
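    Because the engine sits transparently between the application and its databases, the typical application-side change is simply pointing the connection at the proxy endpoint rather than at a database host. A hedged sketch with placeholder hostnames and credentials (not GoGrid or ScaleArc specifics):

        import mysql.connector  # pip install mysql-connector-python

        # Connect to the SQL traffic engine instead of a specific MySQL server;
        # the engine load-balances queries across the database pool behind it.
        conn = mysql.connector.connect(
            host="idb-proxy.example.internal",  # placeholder proxy endpoint
            port=3306,
            user="app_user",
            password="change-me",
            database="orders",
        )
        cur = conn.cursor()
        cur.execute("SELECT COUNT(*) FROM orders")
        print(cur.fetchone())
        conn.close()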

    Peak 10 validated for PCI Compliance. Peak 10 announced that its data centers and cloud infrastructure have been validated for PCI DSS 2.0 Level 1 compliance. The company recently underwent a rigorous audit by an independent Qualified Security Assessor (QSA) to ensure that it meets the best practices and security controls needed to keep credit card data safe and secure during transit, processing and storage. The Payment Card Industry Data Security Standard (PCI DSS) is a proprietary information security standard for organizations that handle cardholder information. In addition to the PCI DSS audit, Peak 10 successfully completed annual company-wide compliance audits for SSAE 16 (Statement on Standards for Attestation Engagements 16), the Health Insurance Portability and Accountability Act (HIPAA) and the Health Information Technology for Economic and Clinical Health (HITECH) Act for its data center and cloud infrastructure operations. “While customers remain responsible for many aspects of the compliance of their technologies and applications, their use of Peak 10’s cloud infrastructure can help meet many of the requirements for compliance with PCI DSS and HIPAA/HITECH,” said David Kidd, Peak 10’s director of quality assurance. “The successful completion of this most recent series of audits is part of our continued commitment to maintaining a well-governed, high-quality IT service environment.”

    4:00p
    Friday Funny: Vote for the Best Caption

    It’s Friday and time for some chuckles! The end of the work week should always be capped off with a few laughs and a few pints.

    Please take a moment to vote on the caption suggestions for our latest cartoon about “hitting the links” by the “data center.” (Vote below!)

    The caption contest works like this: We provide the cartoon (drawn by Diane Alber, our fav data center cartoonist) and you, our readers, submit the captions. We then choose finalists and the readers vote for the funniest suggestion. The winner will receive their caption in a signed print by Diane.

    Check out more of Diane’s cartoons at her website KipandGary.com.

    Click to enlarge graphic.

    Take Our Poll

    For the previous cartoons on DCK, see our Humor Channel.

    4:34p
    CoreSite Plans Major Data Center in Northern NJ

    The server hall of a data center operated by CoreSite, which is building a new data center in Secaucus, New Jersey.

    CoreSite Realty has purchased a 280,000 square foot building in Secaucus, New Jersey for a new data center, and expects to invest $65 million to buy the facility and redevelop the initial phase of 65,000 square feet of data center space.

    The facility, which will be dubbed NY2, is the company’s first data center facility in New Jersey, and a sign of continuing activity in the northern NJ market. CoreSite has a site in New York at 32 Avenue of the Americas, and the Secaucus facility will mark an important expansion for the provider.

    CoreSite is under contract to acquire the building, with the acquisition expected to close in early February. The 280,000 square foot facility sits on 10 acres of land, which allows additional data center development as the market demands. At full build-out, CoreSite expects it will offer 19 critical megawatts of capacity. Construction will start in Q1 2013, with turn-key capacity expected to be available in Q4 2013.

    CoreSite intends to ensure the availability of high-capacity and high-speed lit services as well as a robust dark-fiber tether between NY2 and CoreSite’s NY1 location at 32 Avenue of the Americas in Manhattan, enabling CoreSite to provide seamless interconnection across its New York campus.

    CoreSite in Building Mode

    The company has been aggressively building out data center campuses across America. Focused on network-centric and cloud-oriented applications, these data center campuses are network-dense.

    “CoreSite’s entry into Secaucus is an important step in the execution of our strategy to extend our U.S. platform supporting latency-sensitive customer applications in network-dense, cloud-enabled data center campuses,” said Tom Ray, President and Chief Executive Officer, CoreSite. “Our New York campus is designed to meet performance-sensitive customer requirements supported by our location at the nexus of robust, protected, low-latency network rings serving Manhattan as well as global cable routes to Chicago, Frankfurt, London, and Brazil. Additionally, customers are able to connect directly to service nodes for Amazon Web Services Direct Connect.”

    The Secaucus facility follows the launch of CoreSite’s previously announced 15th data center, located in Reston, Virginia. CoreSite’s national platform spans nine U.S. markets and includes more than 275 carriers and service providers and more than 15,000 interconnections.

    Direct Connections Support Cloud, Financials

    The availability of direct connections to high speed networks in NY2 will be of particular interest to financial firms looking to reduce latency and improve performance. Three network service providers have pre-committed to serve NY2: CoreSite partners Sidera Networks, Zayo, and Seaborn Networks, each of which provides high-performance network support to the financial services, cloud and network communities.

    “The new CoreSite data center in New Jersey fits perfectly with Sidera’s growth strategy,” said Clint Heiden, President, Sidera Networks. “This expansion gives CoreSite customers immediate access to over 40 financial exchanges and the Sidera Xtreme Ultra-Low Latency Network.”

    Open Cloud Exchange

    In addition to the new facility, the company also announced an Open Cloud Exchange, an offering designed to deliver a range of cloud services to customers. The Exchange will offer best-of-breed partnerships and services from a broad range of providers. It capitalizes on demand for hybrid infrastructures, letting enterprises, Managed Service Providers (MSPs) and Systems Integrators (SIs) in CoreSite facilities connect directly, via a single resource, to the cloud service providers of their choice. This provides customers with flexible options to securely and easily connect to all types of cloud offerings.

    “We’re building the industry’s premier home for cloud services,” said Jarrett Appleby, COO, CoreSite. “With networks—the oxygen for cloud services—as the foundation, adding the industry’s leading cloud providers will create best-in-class scalability, management, automation, software, and many-to-many exchange capability. The Open Cloud Exchange offers our customers enormous provider flexibility, guaranteed performance, real-time monitoring, and easy management of cloud infrastructure services.”

    The initial four best-of-breed partners in Open Cloud Exchange are CENX, Rightscale, RiverMeadow Software and Brocade.

    • CENX will provide its CENX Automated Ethernet Lifecycle Management software specially designed for CoreSite’s Open Cloud Exchange, enabling easy, single sign-on management of Layer 2 cloud infrastructure services and full MEF CE 2.0 compatibility.
    • RightScale will provide its platform for deploying and managing business-critical applications across public, private, and hybrid clouds. RightScale offers efficient configuration, monitoring, automation, and governance of cloud computing infrastructure and applications.
    • RiverMeadow Software will deliver its automated cloud onboarding SaaS developed specifically for migrating servers and workloads into and between Carrier Service Provider Clouds.
    • Brocade will provide the hardware infrastructure and switching logic at the heart of the Open Cloud Exchange.

    Planned future enhancements include the ability to connect to providers across multiple CoreSite locations within the same metro area; connections between customers and providers in various on-net buildings throughout the country; and a choice among numerous software and services providers to support performance-sensitive customer applications through a marketplace portal.

    The service is available immediately in seven campuses: Los Angeles, San Francisco Bay Area, Chicago, New York, Northern Virginia, Boston, and Washington, DC.

    In addition to this monster of a facility from CoreSite, Northern New Jersey has been no stranger to activity these last few months. Internap announced a 100,000 square foot build in Secaucus last October, its third in the NY Metro region, to address growing demand. With its supply of data center space in northern New Jersey running low, Digital Realty recently announced construction in Clifton. Last July, DCK reported that the appeal of the New Jersey market might be widening for a variety of reasons.

    7:22p
    Google Invests $600 Million to Expand in South Carolina

    The pond on Google’s data center complex in Berkeley County, SC, where the company has committed to another $600 million investment in its data center campus. (Photo: Connie Zhou for Google)

    Google is dropping another $600 million on data center construction, and South Carolina is very happy about it. The company held a groundbreaking ceremony in Berkeley County, S.C. this morning to announce that it will expand its operations at the Mt. Holly Commerce Park in Berkeley County. The additional $600 million brings Google’s total investment at the site to over $1.2 billion.

    The data center in Berkeley County houses thousands of servers to support services such as Google search, Gmail, Google+ and YouTube. As demand for Google’s services grows, the company must ramp up data center capacity to meet it.

    “Today’s announcement is another big win for South Carolina,” said Governor Nikki Haley in a release. “We celebrate Google’s decision to grow its footprint in Berkeley County with a $600 million investment. When a world-class company like Google decides to expand in the Palmetto State, it shows we are providing the sort of business environment that helps foster success.”

    A lot of states aggressively pursue data center business through various tax incentives because data centers are oftentimes a boon for the local economy.

    Building Upon Initial Phase

    “South Carolina and the Berkeley County community are great places in which to work and grow,” said Data Center Operations Manager Eric Wages. “When Google first announced plans to come to Berkeley County in 2007, we were attracted to not only the energy infrastructure, developable land and available workforce, but also the extraordinary team from the local community that made us feel welcome. Today’s announcement is just a continuation of our investment in the state. Google is proud to call Berkeley County home.”

    Google first announced plans for a South Carolina data center in 2007, making an initial investment of $600 million to get the center up and running. In November 2010, Google announced plans to construct a second building at the site, which is now serving traffic.

    The boon to South Carolina extends beyond data centers. Google is also involved in supporting science and mathematics programs in local schools. Since 2008, it has awarded more than $885,000 in grants to local schools and nonprofits. It also has helped implement a free, downtown Wi-Fi network in Goose Creek.

    Investing in the Community

    “Google has been a great partner, exceeding expectations when the data center was first proposed,” said Berkeley County Supervisor Dan Davis. “They have invested capital, created good jobs and more importantly partnered with local businesses to help them do business better.”

    “When our community came together to develop this business park, we wanted to attract leading companies that would establish deep roots and grow,” said S.C. Sen. Paul Campbell. “Google’s expansion is an example of how Berkeley County can serve the needs of the world’s most innovative and dynamic companies. I hope Google’s growth here prompts other growing businesses to put down roots here.”

    Google spends a lot on infrastructure. That is an understatement. Here was a look at Google’s spending up to Q3 2012. Google also recently announced it is investing in more wind power in Iowa and is spending $300 million to expand its data center operations there, bringing its total infrastructure investment in Iowa past $1 billion. The company also doubled the size of its Pryor, Oklahoma facility in April 2012. This is all news from roughly the past year, mind you. DCK attempted to provide a FAQ about Google data centers last May.

    7:43p
    CyrusOne Completes IPO, Shares Trade Higher on NASDAQ

    A screen shot of a NASDAQ screen showing the share price of CyrusOne shortly after it began trading this morning. (Source: CyrusOne)

    CyrusOne has completed its IPO, pricing its initial sale of shares at $19, above the projected range of $16 to $18, indicating healthy demand for shares of the colocation provider. Shares of CONE rose as they commenced trading on the NASDAQ, topping $21.50 early before settling back to about $21 a share (up roughly 10 percent) by mid-afternoon.

    CyrusOne operates telecom Cincinnati Bell’s data center business, and has raised $313.5 million in its IPO. Cincinnati Bell hoped to raise at least $264 million, so all signs so far point to a successful IPO. The banks managing the deal may buy another 2.5 million shares at the IPO price, adding to the total amount raised.

    CyrusOne will trade on the NASDAQ stock exchange under the ticker symbol CONE. Upon completion of the offering, Cincinnati Bell will own 71.6 percent of CyrusOne through its holdings of common stock and its interest in the CyrusOne LP limited partnership, which are exchangeable for shares of common stock of CyrusOne.

    Cincinnati Bell acquired CyrusOne in 2010 for $525 million, seeing colocation as a potential growth engine. The deal has paid off handsomely, and the IPO for CyrusOne could allow Cincinnati Bell to benefit from investor interest in the data center and cloud computing sector, while shifting significant capital expenses off the telecom company’s balance sheet.

