Data Center Knowledge | News and analysis for the data center industry

Wednesday, September 17th, 2014

    12:00p
    Exec: Resurrecting Healthcare.gov Meant Dealing With Bureaucracy, Incompetence, Politics

    NEW YORK - The U.S. government’s IT infrastructure is broken, and it’s hurting the country. Applying basic engineering skills that are common at Internet companies could change that situation. But how do you convince technologists to adopt this difficult calling?

    If you’re Mickey Dickerson, you share a compelling story of how engineering saved the day on something that matters. Dickerson, the executive credited with getting the Healthcare.gov website back on its feet, brought his message to the world’s leading gathering of web performance specialists.

    At the O’Reilly Velocity New York conference, Dickerson challenged the audience to get involved in fixing the U.S. government’s IT infrastructure.

    “Engineering is critical to the functioning of society right now,” said Dickerson, who heads the new U.S. Digital Service. “We have thousands of engineers working on picture-sharing apps when we already have dozens of picture-sharing apps.

    “These are all big problems that need the attention of people like you. These problems are important, and fixable, but you have to choose to take them on. This is real life. This is your country.”

    Engineering nightmares, but fixable ones?

    Dickerson’s experience in resuscitating the healthcare site reflects both sides of the government IT experience – the project-killing bureaucracy and incompetence, along with the potential to make a huge difference on technology that can affect people’s benefits and lives.

    Until last Oct. 22, Dickerson was an engineering manager at Google. That’s when he became the head of the “tech surge” of Silicon Valley specialists tasked with rescuing the Healthcare.gov web site, which at that point had been largely offline since it crashed upon its launch three weeks earlier.

    What he found when he arrived at the Healthcare.gov offices in Herndon, Virginia, was shocking – especially so for the audience at Velocity, which is focused on web automation.

    “There was literally no dashboard,” said Dickerson. “There was no place to find out whether the site was up or down, except for watching CNN. Obviously, you’re dead in the water if you can’t see what’s going on with the site.”

    Fifty-five contractors were involved in the Healthcare.gov site, but none were tasked with keeping the site online so people could use it, he said.

    “Amazingly, there was no sense of urgency, because this was just like any other government project,” said Dickerson. “Government IT contracts fail all the time. There was almost no place where we could point to a decision and say we’d made the right one.”

    Politics drives deadline

    Dickerson had no idea how long it would take to get the site operational. Political realities forced the creation of a deadline, so the Obama administration announced that the site would be up and running by Dec. 1.

    The good news, Dickerson said, was that the site was so busted that implementing basic best practices could make a huge difference. “We created monitoring, and set up a war room,” said Dickerson. “We made it like Mission Control, with 100 people from all these contractors meeting twice a day, and I’d ask ‘what’s wrong today?’ And we’d identify who could fix it.”
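
    The kind of basic monitoring the team put in place can start out very simple: poll a handful of endpoints on a schedule and record status and latency. Below is a minimal sketch of that idea in Python, using only the standard library; the endpoint URLs are hypothetical placeholders, not Healthcare.gov pages, and this is not the team’s actual tooling.

        # Minimal uptime/latency probe: a sketch of the kind of basic
        # monitoring described above, not the team's actual tooling.
        # The endpoint URLs are hypothetical placeholders.
        import time
        import urllib.request

        ENDPOINTS = [
            "https://www.example.gov/",       # hypothetical front page
            "https://www.example.gov/login",  # hypothetical login endpoint
        ]

        def probe(url, timeout=10):
            """Return (HTTP status or None, elapsed seconds) for one request."""
            start = time.monotonic()
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    return resp.status, time.monotonic() - start
            except Exception:
                return None, time.monotonic() - start

        if __name__ == "__main__":
            while True:
                for url in ENDPOINTS:
                    status, elapsed = probe(url)
                    state = "UP" if status == 200 else "DOWN"
                    print(f"{time.strftime('%H:%M:%S')} {state} {url} "
                          f"status={status} latency={elapsed:.2f}s")
                time.sleep(60)  # poll once a minute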

    By Dec. 1, the site worked well enough that the media – the key arbiter in the effort to meet the deadline – decided that the team had met its task. By the April 1 deadline for enrollment in plans through the Affordable Care Act, more than 8.1 million people had registered, well beyond the initial goal of 7 million.

    “We didn’t expect to fix this,” said Dickerson. “We just gave it our best shot, because somebody had to. Most of this was labor-intensive, but not very hard.”

    Making a difference

    After months of 17-hour days, seven days a week, Dickerson returned home. That’s when the gravity of his experience sunk in.

    “I went back to my regular job, and tried real hard to care,” he said. “I realized that the impact [the healthcare.gov repairs had] on the country was totally disproportional to anything else I would ever do at my job.”

    So he went back to Washington and became administrator of the U.S. Digital Service, an effort to create a full-time team to take the techniques that worked with Healthcare.gov and apply them more broadly.

    Dickerson encouraged engineers to consider the U.S. Digital Service or the 18F unit of the General Services Administration, which is focused on transforming federal IT. There are also opportunities for government contractors, who are trying to get their arms around the changes in web operations, as well as some promising startups.

    The call to service reflects O’Reilly’s interest in the potential for technology to transform government. Founder and CEO Tim O’Reilly is a board member of Code for America, which builds open source technology for government use.

    12:30p
    Ohio Offers Amazon $81M Data Center Tax Break

    State of Ohio officials have offered Amazon $81 million in tax breaks in exchange for committing to build a $1.1 billion data center in the state, Reuters reported citing public records.

    As Data Center Knowledge reported earlier this month, officials in the city of Dublin, Ohio, are considering another incentive to lure the project. The town may soon offer Amazon a free 70-acre property, worth about $7 million, if it commits to building the data center there. Dublin’s City Council is expected to vote on the land incentive next week.

    Both state and local incentives are directed toward Vadata, an Amazon subsidiary.

    Amazon Web Services, the company’s Infrastructure-as-a-Service provider, currently has 10 data center locations, four of them in the U.S.

    When we reached out to Amazon to comment on the Dublin incentive, a company spokesperson did not comment on that specific project, but said AWS was always on the look-out for new data center locations.

    The incentives are tied to certain deliverables. The state tax break, for example, is in exchange for a promise not only to build the data center but to create at least 120 jobs. If Dublin approves the land-transfer offer, it will be tied to a commitment to start construction within the first year after the deal closes and to build at least 750,000 square feet before the end of 2024.

    3:26p
    Cisco Buys OpenStack Private Cloud Provider Metacloud

    Cisco has agreed to buy Pasadena, California-based Metacloud, a private OpenStack cloud provider, adding yet another facet to the Cisco cloud strategy. Metacloud employees will join Cisco’s Cloud Infrastructure and Managed Services organization. The acquisition is expected to close in the first quarter of fiscal year 2015.

    Metacloud offers turnkey private OpenStack clouds to enterprises that can be hosted or deployed on premises. The company recently raised $15 million and its install base has been growing quickly.

    Metacloud claims its solution is around 35 to 40 percent cheaper than public cloud on average, plus it comes with the advantages of a private solution, such as reliability and security.

    The acquisition also comes with Metacloud’s recently launched hosted service. The company started out running and managing production-ready private OpenStack clouds on customers’ premises. The Infrastructure-as-a-Service business is a small but growing part of its revenue and gives Cisco a managed private OpenStack offering. So far Metacloud has used Internap as its data center provider for the offering.

    Metacloud has built a turnkey, scalable OpenStack solution that has been growing in popularity among enterprises. The company will be folded into Cisco’s cloud strategy, which Cisco dubs “Intercloud”: a cloud of clouds focused on enabling hybrid cloud use in the enterprise. Cisco’s roots are in the network, so its strategy centers on connecting all the pieces.

    Cisco recently said it would invest $1 billion in its cloud play. All major enterprise IT incumbents, such as HP, IBM and VMware, have been increasing cloud investment. These pledges promise to accelerate major acquisition and consolidation trends in the cloud space. Consolidation is already beginning to occur, as this acquisition and HP’s recently announced acquisition of Eucalyptus confirm.

    OpenStack continues to land on the radars of the major tech giants. Cisco already has its sights on OpenStack, with its OpenStack@Cisco team busy launching OpenStack-ready versions of Cisco infrastructure. The company recently teamed up with Red Hat on an integrated infrastructure solution for OpenStack.

    It would be beneficial for Cisco to retain the Metacloud team, as the founders have a strong pedigree. Co-founder and CEO Sean Lynch previously worked on Ticketmaster’s infrastructure engineering team, ultimately running global operations for a company with $9 billion in annualized revenue. Co-founder and president Steve Curry was a founding member of Yahoo’s storage operations team, responsible for hundreds of petabytes of online storage, backup data and media management.

    “Cloud computing has dramatically changed the IT landscape,” said Hilton Romanski, senior vice president, Cisco Corporate Development. “To enable greater business agility and lower costs, organizations are shifting from an on-premise IT structure to hybrid IT – a mix of private cloud, public cloud and on-premise applications. The resulting silos present a challenge to IT administrators, as choice, visibility, data sovereignty and protection in this world of many clouds requires an open platform. We believe Metacloud’s technology will play a critical role in enabling our customers to experience a seamless journey to a new world of many clouds, providing choice, flexibility, and data governance.”

    3:30p
    T5 Kicks Off Portland Data Center Campus Construction

    Wholesale data center provider T5 Data Centers has kicked off construction of a large two-building data center campus in Hillsboro, Oregon.

    One of the two buildings has been pre-leased by an unnamed Fortune 100 company, and the Atlanta-based data center developer did not disclose its size. The second building, however, will be a 110,000-square-foot, 9-megawatt data center. The company expects both to be ready for occupancy next summer.

    The company is building T5@Portland to LEED Silver standards. The design includes a free cooling system that will take advantage of the cool Pacific Northwest climate.

    Hillsboro (less than 20 miles west of Portland) is an active data center market, and the surrounding area is referred to as “Silicon Forest” because of its concentration of technology companies.

    Wholesale players Digital Realty Trust and Fortune Data Centers have data centers there, as do retail colo providers Telx and ViaWest. Among big-name data center end users in town are Intel, NetApp and Adobe.

    T5 announced its intentions to build the Hillsboro campus in 2012, when it acquired a 15-acre property there. The plans were likely on the backburner until the developer found an anchor tenant.

    When it made the announcement originally, the company said the campus would have 200,000 square feet of wholesale data center space total. Design of the campus was still being finalized at that time, however.

    T5 is also selling data center space in the New York, North Carolina, Atlanta, Dallas and Colorado markets.

    3:30p
    DDoS Attacks: Why Hosting Providers Need to Take Action

    Dave Larson, Chief Technology Officer and Vice President, Product at Corero Network Security

    With no shortage of distributed denial-of-service (DDoS) attacks dominating the news headlines, many businesses have been quick to question whether they are well protected by their current DDoS mitigation strategy and are turning to their cloud and hosting providers for answers.

    Unfortunately, the sheer size and scale of hosting and data center operators’ network infrastructures, combined with their massive customer bases, presents an attractive attack surface: multiple entry points and significant aggregate bandwidth act as a conduit for damaging, disruptive DDoS attacks. As enterprises increasingly rely on hosted critical infrastructure or services, they place themselves at even greater risk from these devastating cyber threats, even as an indirect target.

    The indirect target: secondhand DDoS

    The multi-tenant nature of cloud-based data centers can be less than forgiving for unsuspecting tenants. A volumetric DDoS attack against one tenant can lead to disastrous repercussions for others: a domino effect of latency issues, service degradation and potentially long-lasting service outages.

    The excessive amount of malicious traffic bombarding a single tenant during a volumetric DDoS attack can have adverse effects on other tenants, as well as on the overall data center operation. In fact, it is becoming more common for an attack on a single tenant or service to choke the shared infrastructure and bandwidth completely, resulting in the entire data center being taken offline or severely slowed: so-called secondhand DDoS.

    A crude defense against DDoS attacks

    Black-holing, or black-hole routing, is a common but crude defense against DDoS attacks that is intended to mitigate secondhand DDoS. With this approach, the cloud or hosting provider blocks all packets destined for a domain by advertising a null route for the IP address(es) under attack.

    There are a number of problems with this approach to DDoS defense. Most notable is the situation where multiple tenants share a public IP address range: all customers associated with the range under attack lose service, regardless of whether they were a specific target of the attack. In effect, the data center operator has finished the attacker’s job by completely DoS’ing its own customers.

    Furthermore, injecting null routes is a manual process that requires human analysts, workflow and approvals, which increases the time it takes to respond to an attack and leaves all tenants of the shared data center suffering the consequences for extended periods, potentially hours.
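
    For illustration, “injecting a null route” commonly means advertising a /32 host route for the victim address tagged with the well-known BLACKHOLE community (65535:666, RFC 7999) so upstream routers discard the traffic. The sketch below is a rough, hypothetical example of automating that announcement through an ExaBGP-style process API that reads commands from a script’s standard output; the addresses are documentation prefixes. Note that automating this blunt instrument does not change its downside: everything behind the null-routed prefix still goes dark.

        # Hypothetical sketch of remotely triggered black-hole (RTBH) routing:
        # announce a /32 host route for the victim IP, tagged with the
        # well-known BLACKHOLE community (65535:666, RFC 7999), via an
        # ExaBGP-style process API that reads announcements from stdout.
        # Addresses are documentation prefixes, not real targets.
        import sys
        import time

        VICTIM_IP = "203.0.113.10"  # hypothetical tenant address under attack
        NEXT_HOP = "192.0.2.1"      # hypothetical discard next-hop

        def announce_blackhole(ip, next_hop):
            # Routers configured to honor the BLACKHOLE community will drop
            # all traffic destined for this prefix once the route propagates.
            sys.stdout.write(
                f"announce route {ip}/32 next-hop {next_hop} community [65535:666]\n"
            )
            sys.stdout.flush()

        if __name__ == "__main__":
            announce_blackhole(VICTIM_IP, NEXT_HOP)
            while True:
                time.sleep(60)  # stay alive so the route is not withdrawn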

    DDoS attacks becoming increasingly painful

    The growing dependence on the Internet makes the impact of successful DDoS attacks – financial and otherwise – increasingly painful for service providers, enterprises, and government agencies. And newer, more powerful DDoS tools promise to unleash even more destructive attacks in the months and years to come.

    Enterprises that rely on hosted infrastructure or services need to start asking their hosting or data center providers the tough questions about how they will be protected when a DDoS attack strikes. As we’ve seen on numerous occasions, hosted customers simply rely on their provider to ‘take care of the attacks’ when they occur, without fully understanding the ramifications of turning a blind eye to this type of malicious behavior.

    Here are three key steps for providers to consider to better protect their own infrastructure, and that of their customers:

    • Eliminate the delay between the moment a traditional monitoring device detects a threat and generates an alert and the moment an operator is able to respond. Deploying appliances that both monitor and mitigate DDoS threats automatically reduces the initial attack impact from hours to seconds (see the sketch after this list). Your mitigation solution should also integrate real-time reporting, alerts and events with back-end OSS infrastructure, providing fast reaction times and the visibility needed to understand the threat condition and proactively improve DDoS defenses.
    • Deploy your DDoS mitigation inline. If you currently rely on out-of-band devices to scrub traffic, add inline threat detection equipment that can inspect, analyze and respond to DDoS threats in real time.
    • Invest in a DDoS mitigation solution that is architected to never drop good traffic. Providers should not let security equipment become a bottleneck in delivering hosted services; legitimate traffic should always pass uninterrupted, a “do no harm” approach to DDoS defense.
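
    As a minimal illustration of the first point, the sketch below flags destinations whose traffic rate exceeds a baseline within a short window, the kind of decision an automated appliance makes in seconds rather than hours. The flow-record format, threshold and synthetic data are assumptions for illustration, not a description of any vendor’s detection logic.

        # Sketch of automated, threshold-based volumetric-attack detection.
        # Assumes a stream of (timestamp, destination IP, bytes) flow records
        # from some hypothetical collector; thresholds are illustrative.
        from collections import defaultdict

        WINDOW_SECONDS = 10
        THRESHOLD_BPS = 500_000_000  # assumed 500 Mbps per-destination baseline

        def detect(flow_records):
            """Yield (window_start, dst_ip, bps) for destinations over threshold."""
            buckets = defaultdict(int)  # (window_start, dst_ip) -> total bytes
            for ts, dst_ip, nbytes in flow_records:
                window = int(ts) - int(ts) % WINDOW_SECONDS
                buckets[(window, dst_ip)] += nbytes
            for (window, dst_ip), total in sorted(buckets.items()):
                bps = total * 8 / WINDOW_SECONDS
                if bps > THRESHOLD_BPS:
                    yield window, dst_ip, bps

        if __name__ == "__main__":
            # Synthetic example: one destination receives roughly 800 Mbps.
            records = [(i, "203.0.113.10", 100_000_000) for i in range(10)]
            records += [(i, "198.51.100.7", 1_000) for i in range(10)]
            for window, ip, bps in detect(records):
                print(f"window={window} dst={ip} {bps / 1e6:.0f} Mbps over threshold")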

    Enterprises rely on their providers to ensure availability and, ultimately, protection against DDoS attacks and other cyber threats. With a comprehensive first line of defense against DDoS attacks deployed, you are protecting your customers from damaging volumetric threats directed at, or originating from within, your networks.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:30p
    Data Center Jobs: Amazon.com

    At the Data Center Jobs Board, we have a new job listing from Amazon.com, which is seeking an Electrical Design Engineer in Garden City, New York.

    The Electrical Design Engineer is responsible for working with internal teams to understand user requirements; participating in site selection reviews; producing data center electrical designs and collaborating with other disciplines to create a construction document set; creating designs that meet or exceed quality requirements and fall within budget; working with regional vendors and manufacturers to specify the appropriate electrical equipment; and working with local utilities to understand and define site utility requirements. To view full details and apply, see the job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    4:00p
    Yahoo Japan Deploys Brocade’s Ethernet Fabric to Support Hadoop Infrastructure

    Japan’s largest Internet company, Yahoo Japan Corporation, has deployed Brocade’s Ethernet fabric solution as the network foundation for an enterprise-wide Hadoop-based Big Data project.

    Big Data is a key strategic initiative for Yahoo Japan. Hadoop is used to analyze vast amounts of information from its portal to help improve the overall quality of its services, as well as to create new ones. Hadoop is also used to support e-commerce and generate personalized, relevant content recommendations for users.

    The new automated network infrastructure connects hundreds of servers supporting thousands of nodes dedicated to Hadoop analytics across the enterprise.

    The Brocade VDX Ethernet Fabric modular and fixed-port switches replace a legacy Spanning Tree Protocol-based network. Several Ethernet fabric solutions were tested, and Brocade was selected for its performance, low latency and deep buffers, as well as its automated deployment and operational simplicity. The fabric could handle the Big Data workload and eliminated manual configuration.

    The legacy network wasn’t suitable for the new Hadoop design because of data throughput limitations and the inability to easily add capacity. Along with growing operational complexity, that put application performance and availability at risk.

    The new Hadoop environment is able to scale out with additional nodes quickly and cost-effectively and supports “bursty” high-speed communications between nodes, according to Yahoo Japan.

    “In addition to delivering the high-performance needed to support enormous traffic processing for our new Hadoop environment, the Brocade VCS Fabric has allowed us to reduce the time spent on network deployment and operation by more than half, which has significantly lowered operational costs,” said Kenya Murakoshi, Yahoo Japan’s manager of data center network, infrastructure engineering department, site operation division. “Compared to other fabric technologies, the simplicity of Brocade VCS fabrics is incredible, allowing us to quickly scale our network with tremendous flexibility.”

    High-density Brocade VDX 8770 modular chassis switches are used as aggregation switches that integrate a host of racks to create fabrics between the switches. The VDX chassis are connected to Brocade MLXe core routers using the switches’ high bandwidth multi-chassis trunking (MCT) feature. The company has also deployed fixed-port Brocade VDX switches to build fabrics for smaller Hadoop infrastructures used for other projects at Yahoo Japan group companies.

    4:30p
    Azure Websites Now Scales Better

    Microsoft’s Azure Websites can now scale above 10 instances. The service also now supports integration with customers’ virtual networks, the company announced this week.

    The ability to easily scale above 10 instances makes Websites applicable to a larger range of use cases. The virtual network feature grants a website access to resources running on a virtual network (VNET), including web services or databases running on Azure Virtual Machines.

    While clouds like Amazon Web Services require at least some development knowledge to navigate, Microsoft is positioning Azure, and more specifically Azure Websites, to be easy for most users.

    Microsoft continues to make mass market-type enhancements to its cloud, lowering the barrier to entry to start using Azure. This somewhat threatens the traditional mass-market hosting space in a way that bare-bones developer clouds do not.

    “Instead of buying new servers, you simply drag your Instance slider to get more machines,” wrote Erez Benari, program manager, Azure Web Sites. “Instead of having to deploy and configure the additional machines, Azure Websites ensures your data and apps are available from all instances immediately. The sizable array of 10 instances available in regular hosting plans is more than most customers will ever need, but for some larger customers it isn’t always enough to deal with high-traffic sites or Web services. If this is a situation that you find yourself in, we are happy to accommodate.”

    The virtual network connection capabilities complement the Hybrid Connections capability. Hybrid Connections offer the ability to access a remote application, and the Hybrid Connections agent can be deployed in any network, connecting back to Azure. “This provides an ability to access application endpoints in multiple networks and does not depend on configuring a VNET to do so,” wrote Chris Compy, senior program manager at Azure Websites. “Virtual Network gives access to all the resources in the VNET and does not require installation of an agent to do so.”

    Through a new user interface, users can connect to a pre-existing Azure VNET or create a new one. For Azure Websites Virtual Network integration to work, users must have a dynamic routing gateway and have Point to Site enabled.

    The virtual network feature is being released in preview and is currently available only at the Standard tier. Standard tier web hosting plans can have up to five networks connected while a website can only be connected to one network. Several websites can be connected to the same network.

    Microsoft recently launched a slew of services to appeal to developers (via the WHIR).

    5:00p
    HP Builds Load Testing Service on Amazon Web Services Cloud

    Adding to its performance testing suite, Hewlett-Packard announced StormRunner, a new solution that provides a cloud-based platform for application quality testing and delivery. HP said StormRunner will join the existing LoadRunner and Performance Center testing solutions in the suite.

    The new performance testing service is currently delivered via Amazon Web Services, but the company plans to extend it to its OpenStack cloud called Helion in the future.

    Catering to agile development teams, HP said StormRunner addresses the unique needs of agile testing by allowing test scripts to be reused across its performance testing solutions and by using StormRunner Load to scale tests dynamically.

    The Software-as-a-Service delivery model for StormRunner means quick performance-testing setup and the ability to scale from one tester to more than one million geographically distributed web and mobile users. Genefa Murphy, senior director of go-to-market strategy for application delivery management at HP Software, said, “HP StormRunner makes use of Amazon Web Services and, in the future, the HP Helion cloud based on the OpenStack cloud management framework to give developers access to infrastructure resources on demand.”

    “As enterprises continue to migrate applications and solutions to the cloud, they need to ensure that the performance of their applications will not degrade as the volume of users increases,” said Raffi Margaliot, general manager, Application Delivery Management, HP Software. “HP StormRunner Load is designed specifically to help Agile teams deliver scalable, high-performing cloud-based modern apps while also helping them capitalize on their existing investments in HP.”
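
    For readers unfamiliar with what a load-testing service automates, the sketch below shows the core pattern in plain Python: fire many concurrent requests at a target and summarize the latency distribution. It is a generic illustration, not the StormRunner API; the target URL, concurrency and request count are assumptions, and a hosted service scales this same pattern out to huge numbers of geographically distributed users.

        # Generic load-generation sketch (not the StormRunner API): send
        # concurrent requests at a hypothetical target and report latency.
        import time
        import urllib.request
        from concurrent.futures import ThreadPoolExecutor

        TARGET_URL = "http://localhost:8080/"  # hypothetical app under test
        CONCURRENCY = 20
        REQUESTS = 200

        def one_request(_):
            start = time.monotonic()
            try:
                with urllib.request.urlopen(TARGET_URL, timeout=10) as resp:
                    resp.read()
                    ok = 200 <= resp.status < 300
            except Exception:
                ok = False
            return ok, time.monotonic() - start

        if __name__ == "__main__":
            with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
                results = list(pool.map(one_request, range(REQUESTS)))
            latencies = sorted(elapsed for _, elapsed in results)
            errors = sum(1 for ok, _ in results if not ok)
            median = latencies[len(latencies) // 2]
            p95 = latencies[int(len(latencies) * 0.95)]
            print(f"requests={REQUESTS} errors={errors} "
                  f"median={median * 1000:.0f}ms p95={p95 * 1000:.0f}ms")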

    5:58p
    Amazon to Recycle Westin Data Center Heat in Seattle Offices

    Amazon is planning to use heat generated by data centers in Seattle’s Westin Building – one of the West Coast’s biggest network hubs – to heat office space on its future corporate campus nearby.

    Construction of the Seattle-based company’s massive high-rise campus in the city’s Denny Triangle neighborhood is underway. On Monday, Amazon’s real estate subsidiary Acorn Development received the first green light from the Seattle City Council for the planned waste heat recycling system.

    Exhaust air from server farms can reach temperatures above 100 F, and there are a number of examples around the world where this energy is recycled to heat office space. Examples include a Telecity data center in France, a Telehouse data center in the U.K. and IBM data centers in Finland and Switzerland.

    Read the Data Center Knowledge special report on data centers that recycle waste heat

    Together with a company called Eco District, Acorn applied for a permit to build an underground pipe system that will carry warm water from data centers at the Westin building to the Amazon campus. The plans include carrying cool water back to the data centers after the heat has been extracted, which may potentially lower energy consumption of the data center cooling systems.
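
    As a rough back-of-envelope illustration of why such a loop is worth building (the flow rate and temperatures below are assumptions, not figures from the Westin/Amazon plan): the recoverable heat equals the water mass flow times water’s specific heat times the temperature drop across the heat exchanger.

        # Back-of-envelope heat-recovery estimate for a warm-water loop.
        # All inputs are illustrative assumptions, not project figures.
        FLOW_LPS = 30.0    # assumed loop flow rate, liters per second
        SUPPLY_C = 29.0    # assumed warm-water supply temperature, deg C
        RETURN_C = 18.0    # assumed return temperature after heat extraction
        CP_WATER = 4186.0  # specific heat of water, J/(kg*K)

        mass_flow_kg_s = FLOW_LPS * 1.0  # ~1 kg per liter of water
        heat_watts = mass_flow_kg_s * CP_WATER * (SUPPLY_C - RETURN_C)
        print(f"Recoverable heat: {heat_watts / 1e6:.2f} MW")  # ~1.38 MW here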

    According to GeekWire, which broke the story, Eco District is a company formed by Clise Properties, owner of the 34-story Westin building. San Francisco-based Digital Realty Trust bought a 49-percent stake in the building in 2006, entering a joint venture with Clise.

    In addition to data center space, the skyscraper, built in the early 80s, is home to the Seattle Internet Exchange, the largest non-commercial member-governed Internet exchange in the U.S. Participants peering on the exchange include Yahoo, Amazon, Twitter, Microsoft, Netflix, Google, Facebook, Telx, Peer 1, PeakColo and Zayo, among many others.

    Monday’s vote is a step forward but not a final approval for the project. Amazon and company still have to flesh out the design and submit another “more formal” proposal to the council, according to GeekWire.

    There are hopes among local officials that the project will serve as a launching pad for an extensive district heating system that recycles waste heat.

    City Council member Mike O’Brien told GeekWire that the Westin building generated more heat than the three Amazon buildings will need. “I see this project as a first step toward what I hope to be a district wide energy system that we can build off this as a catalyst,” O’Brien was quoted as saying.

    There are also other sources of waste heat the city has identified for potential district heating. One of them is a sewer line building owners can tap into.

    6:37p
    Equinix Launches Precision Time Stamping Service for Traders

    Equinix has teamed up with Perseus Telecom to offer a turnkey time stamping service for electronic trades its customers conduct on infrastructure hosted in its data centers.

    Certified time stamps with sub-nanosecond accuracy provide an objective measure by which to award trades that occurred first. In the high-stakes, precision trading world, accuracy is key.

    “In finance, an accurate time stamp is the difference between winning or losing a trade,” said Barry Smith, managing director, global capital markets, Equinix. “By making High Precision Time available through our data centers, our customers have a compelling alternative for streamlining their trade compliance activities and reducing operating costs, enabling them to apply those resources elsewhere.”

    Time-stamping all trades to National Institute of Standards and Technology (NIST) time is a regulatory imperative. Equinix and Perseus have simplified the task through High Precision Time, a new service that offers financial firms a standardized method of time-stamping trades globally.

    Perseus deployed the risk-compliant time service in major financial Equinix data centers, allowing customers to connect to the NIST time scale in Boulder, Colorado. The data centers are in Chicago (CH1), Frankfurt (FR2), London (LD4), New York (NY4) and Tokyo (TY3).

    The alternative to the turnkey service is connecting to NIST through GPS or ingesting time via NTP (Network Time Protocol) over the Internet. Both methods are susceptible to disruption or malicious attack.
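
    For comparison, ingesting time over NTP from NIST’s public servers takes only a few lines of code, which is precisely why it is attractive and precisely why it inherits the open Internet’s exposure to disruption. A minimal sketch using the third-party ntplib package (an assumption for illustration; it is not part of the Equinix/Perseus service):

        # Sketch of fetching time over NTP from NIST's public service, the
        # commodity alternative contrasted with the in-data-center feed.
        # Requires the third-party ntplib package: pip install ntplib
        from datetime import datetime, timezone

        import ntplib

        client = ntplib.NTPClient()
        response = client.request("time.nist.gov", version=3)

        print("offset from local clock (s):", response.offset)
        print("round-trip delay (s):", response.delay)
        print("server time:", datetime.fromtimestamp(response.tx_time, tz=timezone.utc))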

    Operating costs eliminated by the service include the purchase of stratum 1 atomic clock infrastructure, network costs for connecting to NIST and the human capital required to man systems and match trades on the back end.

    Time stamping is also available for other industries that need it, like online gaming and application hosts.

    Equinix does well with financial services, offering an ecosystem of more than 150 exchanges and trading platforms and 480 buy- and sell-side firms who are all potential customers of Perseus’ service. Similarly, customers coming from Perseus have access to more than 4,500 companies on Platform Equinix, across more than 100 data centers.

    The two companies have a longstanding relationship, including providing ultra-fast wireless and wireline trades. Perseus also offers an ultra-low-latency (67.68-millisecond roundtrip delay), trans-Atlantic wireless connection between Equinix’s NY4 and FR2 International Business Exchange (IBX) data centers. The wireless connection bests the ultra-fast fiber connection, also operated by Perseus through Equinix data centers, by over five milliseconds.

    6:38p
    Deutsche Telekom Denies Allegations of NSA, GCHQ Breach

    This article originally appeared at The WHIR

    Documents provided to German newspaper Der Spiegel by NSA whistleblower Edward Snowden show that the NSA and GCHQ breached Deutsche Telekom and local telecommunications provider Netcologne, the paper reported Sunday. Deutsche Telekom responded quickly, saying that its investigation has discovered no breach so far.

    According to Der Spiegel, the Five Eyes’ “Treasure Map” program, which was unveiled to the public by the New York Times last November, indicates that both companies have been breached. Because Netcologne does not operate outside of Germany, any breach would likely have come from inside the country, Der Spiegel speculates. If so, then only German law would apply and bringing criminal charges would be much simpler than if it were attacked from another country.

    While Deutsche Telekom has found no evidence of a breach, it did not dismiss the report. “It would be completely unacceptable if a foreign intelligence agency were to gain access to our network,” a company spokesman said in a statement on Sunday.

    “We are looking into every indication of possible manipulations but have not yet found any hint of that in our investigations so far,” the spokesman said. “We’re working closely with IT specialists and have also contacted German security authorities.”

    GCHQ allegedly hacked Belgacom in July 2013, and reports that the NSA bugged German chancellor Angela Merkel’s cell phone for years preceded consideration of a separate European communications network.

    Deutsche Telekom opened a large data center in Germany earlier this year to attract German customers concerned about the security of their data. If these latest reports are true, those data security efforts may be for nought.

    US providers have lost international revenue due to spying and data collection fears, and stand to lose billions in the next three to five years, according to a July study.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/deutsche-telekom-denies-allegations-nsa-gchq-breach

    8:56p
    Schneider Ships Its Most Powerful Perimeter Cooling Units Ever

    Starting in early September, Schneider Electric began shipping the two latest models in its line of indoor perimeter data center cooling units, called Uniflair LE.

    Manufactured in Italy, these are serious AC units for big data centers, such as colo and cloud provider facilities. At about 200 kW, the units are second in capacity only to Schneider’s EcoBreeze line of indirect evaporative air handling units.

    Aside from being the highest-capacity perimeter cooling units the company has ever made, the HDCV 4500 and HDCV 5000 come with a number of standard and optional features, some of which are firsts for the French vendor. List price of the 4500 model is about $50,000, and the 5000 costs about $53,000.

    Underfloor fan modules

    The biggest first in both of the latest data center cooling products is an underfloor fan module. “Instead of fans being up inside the cooling unit, they’re placed in a separate fan module below the raised floor,” Joe Capes, business development director for Schneider’s cooling line of business in the Americas, said.

    The fans are blowing cold air directly into the plenum under the raised floor. The benefit is a lower overall air pressure drop as a result of the unit’s operation, which means the fans can spin slower and use less energy.
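
    The energy math behind that claim follows from the fan affinity laws: airflow scales roughly linearly with fan speed, while fan power scales roughly with the cube of speed, so a modest speed reduction enabled by a lower pressure drop buys an outsized power saving. A quick worked example (the baseline power and speed steps are illustrative assumptions, not Schneider figures):

        # Fan affinity laws: power scales with (speed ratio) cubed.
        # Baseline power and speed steps are illustrative assumptions.
        BASELINE_POWER_KW = 10.0  # assumed fan power at 100% speed

        for speed_pct in (100, 90, 80, 70):
            power_kw = BASELINE_POWER_KW * (speed_pct / 100) ** 3
            print(f"{speed_pct:>3}% speed -> {power_kw:4.1f} kW")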

    To make the system scalable, Schneider offers the fan modules separately from the cooling unit. This way, any number of modules can be installed under the floor when the data center is being built but they don’t all have to be activated until later in the facility’s life, when the operator adds more cooling units to expand capacity.

    “You can actually place all the fan modules into their recesses on day one,” Capes said.

    An optional efficiency boost

    If you want more efficiency, you can add an optional high-efficiency filter plenum for about $5,000. “It’s essentially a filter box which mounts to the top of the HDCV unit, and it uses a low-pressure-drop filter arrangement to enable a pretty dramatic improvement in efficiency,” Capes said.

    The 20-inch-high box reduces fan power consumption by about 500 W when added instead of the standard MERV 8 filter. Again, lower pressure drop means lower fan speed, which is paramount in data center cooling if you care about energy consumption. An AC unit consumes the most power when its fans are running at 80 percent of their ability or higher, Capes explained.

    Slab coil for maximum cooling

    The main feature that makes the new units and other products in Schneider’s Uniflair LE data center cooling line efficient, however, is the cooling coil design.

    Instead of the traditional A- or V-shape coil, Schneider uses a slab coil, which is exactly what it sounds like. The slab shape has a large flat cooling surface area and also results in a lower pressure drop, Capes explained.

    Schneider offers Blue Fin coating for its coils as a standard feature, while its competitors offer it as an add-on, Capes said. The chemical coating helps prevent corrosion and adverse effects of condensation.

    Dual power supplies, tiny UPS for controller

    In addition to efficiency features, the new units come with a few resiliency options.

    One is a dual power supply with an automatic transfer switch. Schneider has been offering this as a custom feature for about one year, and has now turned it into an off-the-shelf option. The feature is a must in data centers built to Uptime Institute Tier III or Tier IV standards, Capes said.

    Another resiliency option is an ultra-capacitor backup power option for the unit controller. It’s essentially a tiny uninterruptible power supply just for the cooling unit’s controller to speed up the process of bringing the unit back to operational state in case of a power outage.

    “It kind of keeps the brains alive,” Capes said about the option, which costs about $350.

    Such cooling units are not usually put on UPS backup, so when there is an outage, they shut down and stay down for however long it takes the operator to start the backup generators. The ultra-capacitor essentially provides onboard reserve power to keep the controller alive so it’s ready to bring up the cooling unit as soon as the generators kick in.
