Data Center Knowledge | News and analysis for the data center industry
Thursday, December 13th, 2012
| Time | Event |
| 12:30p |
Data Center Jobs: U.S. Securities & Exchange Commission
At the Data Center Jobs Board, we have a new job listing from the U.S. Securities and Exchange Commission, which is seeking a Data Center Engineer in Washington, DC.
The Data Center Engineer provides comprehensive management, support and engineering of CA Spectrum, eHealth, NetQoS and NetIQ AppManager products, including designing and implementing architectural changes, establishing configuration management, and scheduling, monitoring and allocating resources as necessary; provides highly specialized technical advice on the configuration and implementation of metric-gathering tools for IT facility equipment; analyzes the metrics and technical configurations and decides on the best alternatives; supports the capacity and availability program for SEC infrastructure; and leads the measurement of Key Performance Indicators for Event Management to drive efficiency and cost savings. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
| 1:00p |
Green House Data Expands to New Jersey
An aerial view of the massive solar power array on the roof of the DuPont Fabros NJ1 data center in Piscataway, New Jersey. Green House Data is the newest company to set up a data center in the building. (Photo: DuPont Fabros Technology.)
Green House Data, a Wyoming-based provider of colocation and cloud hosting services, has opened its first East Coast data center, the company said today. Green House will house its infrastructure with Net2EZ inside the NJ1 wholesale data center in Piscataway, New Jersey, operated by DuPont Fabros Technology. Net2EZ leases wholesale space within NJ1, which it uses to offer colocation services to its customers.
The new facility will expand Green House’s data center footprint, which includes facilities in Cheyenne, Wyoming and Portland, Oregon. The company is focused on providing environmentally sustainable hosting environments, and bills its cloud computing offering as “the world’s greenest cloud.” The company cited the 2 megawatt solar array covering the massive roof of the DuPont Fabros facility as consistent with its emphasis on environmental stewardship.
The move extends the Green House Data cloud across the United States from coast to coast, improving access from North America and Europe and providing cloud infrastructure-as-a-service options to organizations that need low-latency, high-performance access to their applications at all times.
“The Green House Data cloud now blankets the entire nation,” said Shawn Mills, president of Green House Data. “We are proud to give our customers a geographically diverse cloud that is highly available, load balanced across locations, and provides additional failover in the case of an unforeseen event.”
“Because we are able to provide multiple, nationwide access points, our customers are able to reach their customers faster, no matter where they are in North America or the Eurozone,” said Mills.
Green House is referring to its NJ1 facility as its “Newark” facility, even though Piscataway is about 30 miles from Newark. The NJ1 facility has earned Gold certification under the LEED standard for green buildings, and DuPont Fabros says it operates with a Power Usage Effectiveness (PUE) below 1.3.
| 2:30p |
ProfitBricks Brings Vision for Next Generation of IaaS
ProfitBricks CEO Achim Weiss in front of a diagram of the company’s InfiniBand data center network. (Images: ProfitBricks)
There’s a lot of talk across the industry about the promise of the software defined data center. ProfitBricks is one provider that believes it is on the forefront of this movement, but also believes in the power of brawny cores and a fast network to ensure its software delivers on that promise.
ProfitBricks seeks to differentiate itself from what it calls first-generation IaaS. It is a cloud provider that touts the ability to provide both vertical and horizontal scale, flexibility in the network, and a data center design tool with an interface that makes building a virtual data center a fairly easy and straightforward endeavor. ProfitBricks calls itself the “second generation of cloud infrastructure,” and it has been growing at a quick clip since launching earlier this year.
ProfitBricks was founded in 2010 by Achim Weiss and Andreas Gauger, previous cofounders of 1&1 Internet, with funding from the founders and United Internet. It launched in the U.S. last September.
“We spent around 2 years in development of making it work correctly at high speed,” said ProfitBricks USA CEO Bob Rizika. “There’s 14 guys just doing kernel modifications.”
The end product is a virtual data center (VDC) offering that provides dedicated cores and bigger, customizable instances. It offers free software-defined networks, firewalls, doubly redundant storage, and 24/7 personal support from engineer-level system administrators.
Scaling up Staff, Customers
ProfitBricks had about 1,000 customers as of late November. The company started with 30-40 engineers; that number is now up to 110 in Germany and almost 30 in the US. ProfitBricks has beefed up its marketing organization and is in the process of hiring an evangelist. It announced about a month ago that MIT, Stanford and CalTech are using its infrastructure, and it is looking at areas where it can partner.
ProfitBricks has targeted customers that need high performance, and has been attracting a combination of ISPs, software vendors, gaming startups, and big data customers. It has also done well with customers running older applications, which often cannot scale horizontally. “A lot of applications have unique network requirements and custom sized (CPU/RAM) hardware requirements; the cloud seems like a scary place to these folks,” said Rizika. “We provide the flexibility they need.”
What exactly differentiates ProfitBricks from other cloud providers? Rizika compares what they do to Amazon Web Services.
“AWS has a lack of customized instances, with AWS you have to buy set server instances,” he said. “Forced horizontal scale and lack of security made a lot of guys sit on the sidelines.” The company touts its network flexibility, with user-defined network configurations, as a key differentiator.
“This is second-gen infrastructure, for the data center crowd,” said Rizika. “The goal is delivering the true data center. ProfitBricks allows the design of servers and clusters exactly the way the customer wants. You can have different size servers in different areas. At the end of the day you apply it all at a patch panel.”
Software Tools Gaining Notice
Another key part of ProfitBricks’ offering is the Data Center Designer, its software management tool, which has won some fans among early customers. “The Data Center Designer allows our team to focus on our product and not on learning and implementing a string of acronyms and complex setup configurations,” said ProfitBricks customer and USpin founder Ethan Bagley.
The company says that its use of InfiniBand truly allows it to separate itself from the cloud pack and create true vertical scaling. Customers can scale up to 48GB of RAM (a limit the company disclosed is about to increase).
“If you look at the top 15 providers, we are the first or second least expensive,” said Rizika. “We’re looking to democratize the whole industry.”
When asked if he’s worried about the commoditization of cloud computing, and whether price wars would negatively affect ProfitBricks, Rizika was clear that, although the company is competitive in terms of price, it competes “solely on technology features and value, not price.”
“This is just a new level of pricing,” said Rizika. “I don’t want people to think there’s hidden stuff. It doesn’t matter if a user uses it for a day, a month, or a year.”
| 3:00p |
CoSentry Opens Lenexa, Kansas Facility
Colocation provider CoSentry has opened a new carrier-neutral data center in Lenexa, Kansas. The 60,000 square foot facility has 20,000 square feet of raised floor space, or room for around 1,000 48” cabinets. The facility is fed by three 2.5 megawatt power feeds from KCPL, and has N+1 electrical service. There are three 2.5 megawatt diesel generators and a minimum of 24 hours of diesel fuel capacity on site. There are also three UPS systems with 6.6 MVA of redundant load capacity.
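As a rough aside on what N+1 redundancy implies for usable capacity, here is a small Python sketch. It is a generic illustration, not CoSentry’s published design math, and it assumes “N+1” means one unit is held in reserve:

```python
# Illustrative only: generic N+1 sizing arithmetic, assuming one unit is held in reserve.
def n_plus_1_capacity_mw(unit_count: int, unit_capacity_mw: float) -> float:
    """Usable capacity when one unit is reserved for redundancy."""
    if unit_count < 2:
        raise ValueError("N+1 requires at least two units")
    return (unit_count - 1) * unit_capacity_mw

# Three 2.5 MW generators, as described for the Lenexa site:
print(n_plus_1_capacity_mw(3, 2.5))  # 5.0 MW of load supportable with one generator spare
```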
The facility was opened in a ceremony attended by 200, including Lenexa mayor Michael Boehm. Manny Quevedo, VP of Corporate Development, explains: “With the opening of our new data center in Lenexa, KS, CoSentry is continuing its track record of offering flexible, secure, onshore data center solutions across the country.”
The facility is dubbed a “six nines” data center – meaning it can support 99.9999% uptime – one-upping the traditional five nines often seen in the industry. Its concrete walls are rated for 200 mile per hour winds, and there is 24/7 onsite staff.
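For context on what the extra nine buys, here is a quick back-of-the-envelope Python check (a hypothetical helper, not from CoSentry) of the downtime each availability level allows per year:

```python
# Allowable annual downtime at a given availability level (hypothetical helper).
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_minutes_per_year(availability: float) -> float:
    return MINUTES_PER_YEAR * (1 - availability)

print(round(downtime_minutes_per_year(0.99999), 2))   # five nines: ~5.26 minutes per year
print(round(downtime_minutes_per_year(0.999999), 2))  # six nines: ~0.53 minutes (about 32 seconds) per year
```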
It’s one of the only facilities in the region to support VCE’s Vblock technology, which the company says has already led to demand. Capacity is priced by cabinet, by circuit or by power usage. CoSentry offers colocation ranging from a few rack units to multiple cabinets or an entire suite. The company offers managed services as well.
The facility has a dual cooling system design. The initial system consists of air-cooled chillers with integral economizers (in an N+1 configuration) and high-efficiency computer room air handling units with N+1 capacity and dual-path chilled water piping. The second chilled water system is a high-efficiency water-cooled evaporative chiller plant, designed to be implemented as UPS load grows beyond 50% of total capacity. Once it is added, either system can be used to cool the data center floor, based on the time of year. The overall configuration will provide two completely separate chiller plant systems, each capable of supporting the load.
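To make that switchover policy concrete, here is a minimal Python sketch of the logic. The 50 percent UPS-load trigger and the either-plant-by-season behavior come from the description above; the specific season-to-plant mapping is an assumption for illustration, not CoSentry’s actual control scheme:

```python
# Minimal sketch of the dual-plant cooling policy described above (illustrative only).
def select_chiller_plant(ups_load_fraction: float, month: int) -> str:
    # The second, water-cooled evaporative plant is only added once UPS load
    # grows beyond 50% of total capacity.
    second_plant_installed = ups_load_fraction > 0.50
    if not second_plant_installed:
        return "air-cooled chillers with integral economizers (N+1)"
    # With both plants available, either can carry the floor based on time of year.
    # Assumed mapping: economizer plant in cool months, evaporative plant otherwise.
    if month in (11, 12, 1, 2, 3):
        return "air-cooled chillers with integral economizers (N+1)"
    return "water-cooled evaporative chiller plant"

print(select_chiller_plant(0.35, 7))  # below the 50% trigger: initial plant only
print(select_chiller_plant(0.60, 7))  # above the trigger, in July: evaporative plant (assumed)
```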
Silicon Prairie
There’s been somewhat of a tech boom in Kansas. Also in Lenexa is mass-market hosting giant 1&1 Internet, which houses its primary U.S. infrastructure there. So why Kansas?
The state of Kansas has aggressive business incentive programs to attract and grow business within the state. These incentives can offset up to 100% of state income taxes, sales taxes, payroll withholding taxes and certain property taxes for up to 10 years. In addition, Kansas has job training program incentives for businesses adding employees in the state.
The Kansas City region also attracted single-tenant data centers in the wake of the 9/11 terror attacks in New York, which heightened awareness of the need for out-of-region disaster recovery, as well as the need to address scenarios in which air travel is unavailable. This gave pause to providers with a “bi-coastal” backup plan, convincing more enterprises and financial services firms to consider locations in low-disaster zones in the middle of the country.
There are a number of emerging tech startups there as well; the wider region is often referred to as the Silicon Prairie, which comprises parts of Nebraska, Iowa, South Dakota, Kansas, Minnesota, North Dakota, and Missouri. CoSentry has facilities in Kansas City, MO, Lenexa, KS, Sioux Falls, SD, Papillion, NE and Omaha, NE, putting it right in the heart of the, er, heartland.
| 3:24p |
Retrofitting Cold-aisle Cocooning Doesn’t Mean Massive Disruption
Mark Hirst, product manager for Cannon Technologies’ T4 Data Centre Solutions, is a data center design expert with a background in electronic control systems and industrial networks.
 MARK HIRST
Cannon Technologies
Working around infrastructure that has evolved over time makes retrofitting hot/cold aisle containment a challenge. Multiple data and network cable runs, cooling pipes and mismatched cabinets mean many solutions will not work effectively. This column looks at the options available to those who want containment, but are not sure if their environment can handle it.
What is Hot/Cold Aisle Containment?
Hot/cold aisle containment is an approach that encloses either the input or output side of a row of cabinets in the data center. The goal is to effectively control the air on that side of the cabinet to ensure optimal cooling performance.
With hot aisle containment, the exhaust air from the cabinet is contained and drawn away from the cabinets. Cold aisle containment regulates the air to be injected into the front of the hardware.
In both cases, the ultimate goal is to prevent different temperatures of air from mixing. This means that cooling of the data center is effective and the power requirements to cool can, themselves, be contained and managed.
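A simple, hypothetical calculation (not from Hirst’s column) illustrates why preventing that mixing matters: if a fraction of hot exhaust recirculates into the cold aisle, the effective server intake temperature rises as a weighted average of supply and exhaust air, which in turn forces colder, more expensive supply setpoints.

```python
# Hypothetical illustration: effective intake temperature when some exhaust air
# recirculates into the cold aisle (simple weighted average of the two streams).
def intake_temp_c(supply_c: float, exhaust_c: float, recirculation_fraction: float) -> float:
    return (1 - recirculation_fraction) * supply_c + recirculation_fraction * exhaust_c

print(intake_temp_c(18.0, 35.0, 0.0))  # full containment: servers see the 18.0 C supply air
print(intake_temp_c(18.0, 35.0, 0.2))  # 20% recirculation: intake rises to 21.4 C
```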
Challenges
Over time, all environments evolve. The most common changes in a data center tend to be around cabling and pipe work. What was once a controlled and well-ordered environment may now be a case of cable runs (power and network) being installed in an ad hoc way. In a well-run data center, it is not unreasonable to assume this would be properly managed, but the longer it has been since the last major refit, the greater the likelihood of unmanaged cable chaos.
The introduction of high-density, heat-generating hardware such as blade systems has seen greater use of water-based cooling. This requires changes to the racks and the addition of water pipes. These make enclosing a rack difficult, as many solutions need to have pipework holes cut into them. The other challenge is that you cannot simply drill a hole: the retrofit will not include disconnecting and reconnecting pipes to run them through the holes.
These are not the only challenges. Just as the type of hardware in the cabinets has evolved, so have the cabinets themselves. What started out as a row of uniformly sized and straight racks may now be a mix of different depths, widths and heights. This is common in environments where there are large amounts of storage present, as storage arrays are not always based on traditional rack sizes.
Cabinet design can also introduce other issues. If the cabinet has raised feet for leveling, something often seen with water-based solutions, there may be backwash of air under the cabinets. There may be gaps in the cabinets, either down the sides or where there is missing equipment. These should already be covered by blanking plates. The reality in many data centers, however, is that there will be missing plates, which allows hot and cold air to mix.
The floor also needs attention. Structurally, there may be a need to make some changes to accommodate the weight of any containment system. This applies not just to the raised floor but to the actual floor on which the data center sits. The evolution of data centers and changes to equipment are rarely checked against floor loads. Adding more weight through a containment system is an opportunity to validate those loads.
Floor tiles degrade over time. They get broken, and replacements may not be the right size or have the right size of hole. No air containment system can be effective if there are areas where leaks can occur.
Prerequisites
It would be naïve to assume that retrofitting hot/cold aisle containment will not require some potential changes to the existing configuration. However, there are very few prerequisites to address:
1. Weights and floors, as mentioned above.
2. Each enclosure should ideally line up height-wise with its counterpart across the aisle. Don’t worry about small gaps; we will deal with those later.
3. The height of each pair of enclosures should ideally be the same. However, there are ways around this, within reason. A height difference of a few inches can be managed easily. A difference of two or three feet or more is increasingly common in older environments. Whilst most containment solutions could not cope with this, we have designed our retrofit solution specifically for such “Manhattan Skylines,” which are highly prevalent in many older data centers and where a cost-effective upgrade path to containment can significantly extend the useful life of the existing racks, data cabling and M&E infrastructure.
4. Normally, each row must line up to present an even face to the aisle that is being contained, in order to create an airtight seal.
The prerequisites may require a little planning inside the data center and, in the most extreme case, a little moving of cabinets to get the best fit. Again, it is possible, as we have done with our own retrofit system, to design a solution for situations where it is not reasonable to move cabinets to create an even line along the containment aisle.
| 3:30p |
Data Center Jobs: Google Seeking Engineers
At the Data Center Jobs Board, we have five new job listings from Google, which is seeking a Mechanical/Systems Engineer; a Mechanical Engineer; a Data Center Facilities Mechanical Engineer; a Systems Engineer, Platforms; and an Electrical Engineer, Data Center R&D, in Mountain View, California.
The Mechanical/Systems Engineer is responsible for contributing to overall system architecture from chip to chiller, with the ability to make interdisciplinary tradeoffs to optimize TCO of the overall compute infrastructure; developing holistic mechanical designs from package to data center level, optimizing for cost and efficiency; designing and specifying thermal management hardware, including material selection, heat sinks, heat exchangers, air movers, pumps, and all supporting equipment; developing appropriate specifications and test procedures as necessary to ensure desired reliability and performance of electronic equipment; and mentoring junior engineers in all aspects of concept development, analysis, design, and specification. To view full details and apply, see job listing details.
The Mechanical Engineer is responsible for identifying and scoping projects around innovation opportunities as well as technology risks; developing and executing designs for complex server systems backed by thorough engineering analysis; delivering robust solutions for tasks such as blind mating of high-density interconnects, inject/eject mechanisms and large server racks; and taking responsibility for productizing large-scale networked computer systems – including requirements gathering, concept generation and detailed design, analysis and verification testing, cross-functional reviews and manufacturing release. To view full details and apply, see job listing details.
The Data Center Facilities Mechanical Engineer is responsible for designing and defining the mechanical system design requirements and interfaces to plant infrastructure between the various internal stakeholders and the data center design/build team; supporting the PM in gathering internal mechanical system design requirements and delivering solutions that both satisfy the customer’s needs and meet plant interface requirements; providing conceptual design drawings as part of the design requirements and interface spec development; providing mechanical system analysis as required to support design concepts; maintaining all data center-related mechanical system design requirement and interface documents; and managing and complying with the change control process. To view full details and apply, see job listing details.
The Systems Engineer, Platforms is responsible for design-for-operability (DFO) reviews of all Platforms products intended to run as services; performing analyses both before and after service launch to establish quantitative models for service performance and degradation; establishing service level objectives (SLOs) that define successful performance; integrating new Platforms services into the existing environment, reusing tools and procedures where appropriate; recommending tools for monitoring, control, and automation to enable these services to function at large scale and with minimal human effort and error; and performing data analysis to assess service performance, identify weaknesses, and solve operational problems. To view full details and apply, see job listing details.
The Electrical Engineer, Data Center R&D is responsible for researching new designs, materials, technologies and construction methods for data center equipment and facilities; carrying new design concepts through exploration and development and into mass production; defining data center system-level architectures, specifying performance requirements, and creating accurate conceptual design and programming requirements documentation; and producing analyses that ensure designs satisfy requirements, including predicted performance and power quality concerns. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
| 6:45p |
Data Center Fun For Charity
It’s that time of the year. Visions of sugar plums start dancing in the heads of data center and IT staff. Being good neighbors to their communities and the world, data center teams are helping others and spreading good cheer.
Recently, cloud provider Rackspace gathered 200 employees filled with holiday spirit and put together a seasonal musical video. Shot at the San Antonio HQ, it includes many “Rackers” in holiday garb! The number of views of the video was tied to donations to charity through Rackspace Gives Back. As of this morning, the video had met its goal of a $10,000 donation and continues to “rack up” views. (LOL)
In a combination of social networking and charity, Emerson Network Power will donate $1 (up to $5,000) for every new Twitter follower and new Facebook fan in December to One Laptop per Child, an organization that provides rugged, low-cost, low-power, connected laptops to the world’s most underprivileged children.
For the Bah! Humbugs! and Nervous Nellies who are not ready for so much good cheer and are hunkering down for the Zombie Apocalypse, there’s a great blog post from Datacave that includes tips on surviving the Dec. 21 end-of-the-world event and features a photo of their Humvee.
Meanwhile, “Ho, Ho, Ho!” Happy Holidays to all!
| 7:59p |
Big Leases for DuPont Fabros in Santa Clara and Chicago
An aerial view of the new DuPont Fabros data center in Santa Clara, Calif.
Wholesale data center provider DuPont Fabros announced today that it has closed significant leases in multiple locations. After a few recent deals, the Chicago CH1 data center is now 100 percent leased, and in Silicon Valley, the Santa Clara SC1 data center is 75 percent leased.
Chicagoland’s CH1
Located in Elk Grove Village, IL, CH1 was constructed in two phases totaling 485,000 gross square feet, with 231,000 square feet of raised floor and 36.4 megawatts (MW) of critical load. CH1 now has nine tenants with a weighted average lease term of 9.9 years and will achieve a 13 percent unlevered GAAP return on invested capital. One tenant has the right to relinquish 2.6MW by notification in the first quarter of 2013 if it chooses, but that had not happened as of the company’s release.
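As an aside on how a figure like the 9.9-year weighted average lease term is typically derived, here is a small Python illustration with made-up tenant numbers. The release does not disclose the weighting basis (megawatts, square footage or rent), so the megawatt weighting below is an assumption:

```python
# Hypothetical illustration of a weighted average lease term; tenant figures are invented.
def weighted_avg_lease_term(leases):
    """leases: list of (leased_megawatts, remaining_term_years) tuples."""
    total_mw = sum(mw for mw, _ in leases)
    return sum(mw * term for mw, term in leases) / total_mw

example = [(10.0, 12.0), (5.0, 8.0), (2.6, 5.0)]  # invented tenants: MW leased, years remaining
print(round(weighted_avg_lease_term(example), 1))  # -> 9.8 years
```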
Two new leases brought the facility to 100 percent leased. The first, in Phase I, is for 0.43MW and is with an existing financial tenant in the company’s New Jersey facility. The second, in Phase II, is for 2.6MW from an undisclosed existing customer in Northern Virginia. The 2.6MW lease will commence in three phases: the first, for 1.3MW, in Q1 2013, with the remaining 1.3MW commencing in equal parts in Q3 and Q4 of 2013.
Silicon Valley’s SC1
Located in Santa Clara, CA, SC1 saw the company sign a massive lease for 5.69MW with an existing tenant, jumping Phase I from 44 percent to 75 percent leased. The existing tenant recently renewed a lease at ACC3 in Ashburn, VA, and is also one of the customers leasing in Chicago. The new lease will be rolled out in three steps: 2.28MW in Q1 2013, 2.28MW in Q2, and 1.13MW in Q4 2013. The weighted average lease term in this facility is 8.8 years.
Other Recent Leasing Activity
Just this November, the company pre-leased 4.33 megawatts of space in Virginia, fully leasing ACC6 in Ashburn, Virginia. ACC6 is a massive facility of 262,000 square feet that was constructed in two phases of 130,000 square feet each, with 26MW of critical load. Phase I was delivered in September 2011, and Phase II is expected January 1, 2013. Just last April, the company filled up 18 megawatts in that same facility.
To follow DuPont Fabros news, bookmark DCK’s DFT Channel.