Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, August 20th, 2014

    11:00a
    CenturyLink Unveils Private Cloud Product

    CenturyLink Technology Solutions has launched a new private cloud service, offering to set up elastic private IT infrastructure for customers in any of its 57 data centers around the world.

    Hybrid cloud – a setup where a company hosts its mission-critical applications and data in a private cloud while using public cloud resources for less sensitive workloads or for capacity bursting – is widely viewed as the way forward for enterprises adopting the cloud model for IT infrastructure.

    That is the market CenturyLink is looking to capture. Its private cloud instances will be federated with its public cloud. Users will be able to manage infrastructure consisting of both through a single interface.

    “It’s a private version of our public cloud offering, meaning it goes through the same 21-day cycle for new updates and features, is updated and managed using the same CenturyLink platform and can be put into any of our 57 data centers around the world,” a company spokesman wrote in an email.

    The private cloud offering has a much wider geographic reach than the company’s public cloud, which is hosted in 11 locations.

    CenturyLink has had a private cloud offering before, gained through its acquisition of Savvis in 2011. Originally called Symphony Dedicated in Savvis’ product lineup, it was renamed Dedicated Cloud – a product that still exists and that the company still supports.

    The company has been going after the enterprise cloud market with a vengeance ever since it acquired Savvis. It has addressed competition from the big public cloud players, such as Amazon, Microsoft and Google, by slashing rates after the recent burst of “me-too” price cuts by the three giants.

    It has also been competing with mid-size players like itself – companies such as Rackspace and IBM’s SoftLayer business – by expanding its managed services portfolio.

    Another big part of CenturyLink’s strategy in the cloud has been fostering talent and solutions that make developers’ work in the cloud easier. These capabilities received a major boost last year, when the company bought Platform-as-a-Service players AppFog and Tier 3.

    Just last week, CenturyLink announced Panamax, a brainchild of AppFog founder Lucas Carlson, who is now its chief innovation officer. Panamax is a solution for deploying and managing applications that consist of multiple Docker containers in the cloud.

    12:00p
    The Ten Most Common Cooling Mistakes Data Center Operators Make

    While data center operators are generally a lot better at cooling management than they were ten years ago, many facilities still have issues that either prevent them from using their full capacity or cause them to waste energy.

    Lars Strong, senior engineer at Upsite Technologies, a data center cooling specialist, says the ultimate goal in airflow management is to have better control of cooling temperature set points for IT air intake, while minimizing the volume of air you’re delivering to the data hall.
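    A rough back-of-the-envelope sketch shows why minimizing supply air volume matters. The numbers and the standard sensible-heat relation below are general HVAC practice, not figures from Strong: the airflow needed to carry a given IT load drops as the allowed intake-to-exhaust temperature rise widens.

```python
# Sensible-heat airflow sizing sketch (illustrative only), using the
# standard relation Q[BTU/h] = 1.08 * CFM * deltaT[deg F].

def required_cfm(it_load_kw: float, delta_t_f: float) -> float:
    """Airflow (CFM) needed to absorb it_load_kw of heat at a delta_t_f rise."""
    btu_per_hour = it_load_kw * 3412.0  # 1 kW is roughly 3412 BTU/h
    return btu_per_hour / (1.08 * delta_t_f)

# A wider allowed temperature rise means less air has to be moved:
print(round(required_cfm(100, 20)))  # 100 kW at a 20 F rise: ~15796 CFM
print(round(required_cfm(100, 30)))  # same load at a 30 F rise: ~10531 CFM
```

The same load handled at a higher delta-T needs about a third less air, which is exactly the headroom that good containment and sealing buy.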

    We asked Strong and Wally Phelps, director of engineering at AdaptivCool, another company that specializes in thermal management in data centers, to list some of the most common issues they see in data centers they visit. Here is what they said:

    1. Phantom leakage: This is leakage of cold air from the plenum under the raised floor into adjacent spaces or into support columns. Phelps says such breaches are fairly common and cause everything from loss of pressure in the IT environment to allowing warm, dusty or humid air from elsewhere to enter. The only way to avoid this problem is to go under the raised floor, inspect the perimeter and support columns and seal any holes you may find.

    2. Too many perforated tiles: There is absolutely no reason to have perforated tiles in hot aisles and whitespace areas. It is a waste of cooling capacity. It is also possible to have too many perforated tiles on the intake side of the racks. One red flag is lower-than-normal air temperature at the top of IT racks, Phelps said.

    3. Unsealed raised-floor openings: While many data center operators have made an effort to seal cable openings and other holes in their raised floors, very few have actually finished the job, Strong says. The holes that are left over can cause a lot of cold air to escape into areas where it is unneeded. One particularly important place to look for unsealed openings is under electrical gear, such as power distribution units or remote power panels.

    4. Poor rack sealing: Putting blanking panels in empty rack spaces is about as common-sense as airflow management gets, yet not everybody does it. In some cabinet designs, the space between the mounting rails and the sides of the cabinet is not sealed. An operator who cares about efficiency will seal those openings, as well as potential openings under the cabinet, Strong says.

    5. Poorly calibrated temperature and humidity sensors: Sometimes vendors ship uncalibrated sensors, and sometimes calibration may go out of whack over time. This leads to ill-managed cooling units working against each other. Strong recommends that operators check both temperature and relative-humidity sensors for calibration every six months and adjust them if necessary.

    6. CRACs fighting for humidity control: Another good way to pit two CRACs against each other is to return air at different temperatures to adjacent units. The CRACs then get different humidity readings, and one ends up humidifying the air while the other is dehumidifying it. Fixing this problem takes some finesse in understanding the psychrometric chart and setting humidity control points thoughtfully, Phelps says.
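    A quick psychrometric sketch illustrates the conflict. The temperatures below are assumed for illustration, and the Magnus approximation stands in for a full psychrometric chart: two CRACs receiving air with the same moisture content (same dew point) but different return temperatures will report different relative humidity, so one may humidify while its neighbor dehumidifies.

```python
import math

def saturation_vp(temp_c: float) -> float:
    """Saturation vapor pressure in hPa (Magnus approximation)."""
    return 6.112 * math.exp(17.62 * temp_c / (243.12 + temp_c))

def relative_humidity(temp_c: float, dew_point_c: float) -> float:
    """RH (%) of air at temp_c whose dew point is dew_point_c."""
    return 100.0 * saturation_vp(dew_point_c) / saturation_vp(temp_c)

dew_point = 12.0  # same absolute moisture reaching both units (assumed)
print(round(relative_humidity(24.0, dew_point)))  # cooler return air: ~47% RH
print(round(relative_humidity(30.0, dew_point)))  # warmer return air: ~33% RH
```

Same moisture, same room – yet one unit reads 47 percent and the other 33 percent, which is why controlling to dew point (or coordinating set points) beats independent RH control.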

    7. Less is more: Many data center operators overdo it with cooling capacity. If there is more cooling than needed and no way to keep redundant CRACs off safely, the entire cooling scheme is compromised, since too many units are running in their low-efficiency state. This often happens when underfloor cooling temperature is high and certain racks are hard to keep cool, and a typical response from the operator is to bring more cooling units online. While counterintuitive, the correct response is actually running fewer CRACs at higher load, Phelps says.

    8. Empty cabinet spaces: This is another fix that is obvious but, for some reason, not applied by everyone. When one or more cabinet spaces are left empty, the airflow balance gets skewed, leading to recirculation of exhaust air into the cold aisle or loss of cool air from the cold aisle, Strong says. The condition naturally leads to a cooling scheme that overcools and supplies more air than is really necessary to compensate for the losses.

    9. Poor rack layout: Ideally, you want to place racks in long hot-aisle/cold-aisle configuration, with main CRACs placed at each end of the rows, Phelps says. Having a small island of racks with no particular orientation does not help anybody. Neither does orienting racks front to back or orienting CRACs in the same direction as the IT rows.

    10. Not giving cooling management the respect it deserves: As Strong puts it, failing to consider the benefits of improving the way you manage cooling leaves an operator with stranded capacity and higher operating cost. Benefits from a simple thing like installing blanking panels can cascade, but they are often overlooked. In extreme cases, a well-managed data center cooling system can even defer an expansion or a new build.

    12:30p
    Blue Harbour Holdings Discloses 6.4 Percent Stake in Rackspace


    This article originally appeared at The WHIR

    Investment firm Blue Harbour Holdings announced on Monday that it owns 6.4 percent of Rackspace Hosting, or around 9.1 million shares.

    Blue Harbour Group, formed in 2004, manages around $3 billion of capital and its current portfolio companies include Akamai Technologies, Tribune Company and Allscripts Healthcare Solutions.

    Blue Harbour held a 2.5 percent stake in Rackspace in June and disclosed the increase to 6.4 percent in its recent regulatory filing.

    The news of the investment comes as Rackspace has been considering M&A options, hiring Morgan Stanley to lead it through the process. Last month, it was rumored that Rackspace was looking at going private, an idea that gained “significant traction” among its board members.

    “Rackspace has built an incredibly valuable business with excellent long-term prospects that we believe are not reflected in the company’s current share price,” Todd R. Marcy, a managing director of Blue Harbour said in a statement. “We believe the Board and management team are committed to closing this significant valuation gap. We look forward to a constructive, ongoing dialogue on the various alternatives Rackspace has to unlock and deliver meaningful shareholder value, whether as a standalone independent entity or pursuant to the current strategic review process.”

    Recently Rackspace announced $441 million in revenue in Q2 2014. Rackspace shares closed Monday at $31.82.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/blue-harbour-holdings-discloses-6-4-percent-stake-rackspace

    1:24p
    Software-Defined Networking: Beyond the Hype

    Patrick Hubbard is Head Geek and Senior Technical Product Marketing Manager at SolarWinds.

    For quite some time, software-defined networking (SDN) has been a white whale for many IT pros, representing a highly buzzed-about technology they can only aspire to implement . . . someday.

    However, the arrival of VMware NSX in the data center and Cisco’s Application Centric Infrastructure (ACI) is finally turning SDN from hype into bits in the data center.

    Although VMware NSX and Cisco ACI take very different approaches to SDN – VMware focusing on network virtualization and Cisco ACI on putting the application at the center of the network – these technologies have the potential to completely transform the network management strategies we know today.

    What makes SDN unique?

    Before diving into today’s SDN landscape, it’s important to understand exactly what SDN is, and what makes this approach to networking so unique.

    A game-changing technology, the software-defined approach to networking takes control away from manual configurations in individual hardware devices and shifts it to a distributed model with a software application called the controller at its heart.

    This approach automates many of the ways networks are managed, providing network pros with the flexibility, programmability and agility to manage ever more complex and rapidly changing environments.
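    In practice, the "programmability" described above usually means pushing declarative policy to the central controller over a REST API instead of configuring each switch by hand. The sketch below is hypothetical – the endpoint, field names and `switch-01` identifier are invented for illustration and do not match any specific controller – but real controllers such as OpenDaylight and ONOS expose APIs along these general lines.

```python
import json
import urllib.request

def build_flow_rule(switch_id: str, dst_ip: str, out_port: int) -> dict:
    """Build a declarative flow rule: send traffic for dst_ip out of out_port.
    (Field names are illustrative, not a real controller schema.)"""
    return {
        "switch": switch_id,
        "match": {"ipv4_dst": dst_ip},
        "actions": [{"type": "OUTPUT", "port": out_port}],
        "priority": 100,
    }

def push_flow_rule(controller_url: str, rule: dict) -> urllib.request.Request:
    """Prepare a POST of the rule to a (hypothetical) controller endpoint."""
    return urllib.request.Request(
        f"{controller_url}/flows",
        data=json.dumps(rule).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )  # urllib.request.urlopen(...) would send it to a live controller

rule = build_flow_rule("switch-01", "10.0.0.5", 3)
request = push_flow_rule("http://controller.example:8181", rule)
```

The point is the shift in workflow: the rule is data, versionable and testable, and the controller – not an engineer at a CLI – reconciles it across the fabric.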

    While the benefits of SDN are clear, there’s still a healthy amount of criticism and doubt that SDN can be applied in a real-world IT environment. In fact, there are two camps of IT pros: those who are fully on board with SDN and its benefits and eager to implement it, and those with reasonable skepticism about how it will work in their data centers. Regardless of which camp you’re in, you can no longer ignore SDN.

    To come up to speed on SDN and how it can provide value to networks, there are a few key considerations you can take into account to ensure you’re not left in the dust now that SDN’s time has finally arrived.

    Building a sufficient knowledge base

    With any new technology implementation, there’s a level of education needed to understand how to use that technology. SDN is no different. There are a number of vendor-specific certifications from Cisco, VMware and HP that networking pros can pursue to build their knowledge base. These certifications include, but are not necessarily limited to:

    Cisco
    - Business Application Engineer
    - Network Application Developer
    - Network Programmability Designer
    - Network Programmability Support

    VMware
    - VMware Certified Associate, Network Virtualization (VCA-NV)
    - Free VMware Network Virtualization course related to the VCA-NV certification

    HP
    - HP ASE – SDN Application Developer V1

    Testing the SDN waters

    After building a sufficient knowledge base on SDN, the next step will be to work with SDN in an IT environment, making sure it meets the business’ needs, as well as those of the IT department. By creating a test center, you can take an experimental approach and begin implementing SDN in small stages. This approach will allow you to assess the value of SDN for your business and learn how to make SDN work for you.

    Ensuring the price is right for your budget

    In today’s business landscape, the needs of the IT department have become the needs of the business. So, with any new software deployment, you will need to make a business case for SDN in order to land the budget and resources needed to begin an installation. Understanding how SDN works in your environment and the value and cost savings it can bring to the business will arm you with the intelligence needed to make a sound business case for SDN.

    Securing SDN

    New and groundbreaking technologies tend to focus on innovation rather than practicality, so security is typically the last feature added to any revolutionary new piece of software. In other words, even though security is becoming increasingly important, it is often prioritized late in the game. But with SDN making as much as 75 percent of network and security configuration changes, the business risk of data breaches increases greatly without sufficient oversight. To curb this, you will have to be just as proactive and cautious about security as the admins working on legacy infrastructure.

    It’s only a matter of time before SDN becomes a mainstream technology. It’s time to get on board with SDN by educating yourself on the technology, gaining an understanding of how it can work for both the IT department and the business and how to make it secure.

    If you’re a network administrator, you should take a lesson from your systems counterparts – some of whom struggled a bit to catch up when virtualization adoption began in earnest. By understanding how today’s developments signal SDN’s imminent arrival, IT pros can prepare to optimize their networks for software-defined environments.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:59p
    Benioff, Luczo Join VCs in $35M Round for PernixData

    Closing out a record fiscal year, Silicon Valley storage virtualization startup PernixData announced it has closed an oversubscribed funding round, raising $35 million in a combination of venture capital and individual investments from Salesforce.com CEO Marc Benioff, Silver Lake partner Jim Davidson and Seagate CEO Steve Luczo. Menlo Ventures led the Series C with participation from previous investors Kleiner Perkins Caufield & Byers, Lightspeed Ventures, Lane Bess, Mark Leslie and John Thompson.

    Funding from institutions and industry luminaries alike helps validate PernixData’s storage acceleration software and business strategy. The company closed its first full fiscal year this summer, and CEO Poojan Kumar said it saw 42 percent quarter-over-quarter revenue growth and a record year for booked revenue.

    The company describes its flagship FVP software as taking advantage of both server flash and RAM as a platform for storage acceleration with unique features like write acceleration, clustering and installation within the hypervisor. It further augments virtualized environments and addresses storage I/O bottlenecks with a Flash Hypervisor to allow server flash to be aggregated across a virtualized data center.

    The company has partnered with leading flash vendors and almost 250 resellers around the world, including a global resale agreement with Dell. PernixData claims that in April of this year FVP became the first acceleration solution to run on both flash and RAM and to support any file, block or direct attached storage.

    PernixData co-founders Kumar and Satyam Vaghani founded the company in 2012 after careers at VMware. FVP software currently only supports the VMware hypervisor.

    Mark Siegel, managing director at Menlo Ventures, has joined the PernixData board in an advisory role. The company’s executive team has expanded, joined by former Dell manager Mike Arterbury as vice president of business development.

    “We are witnessing a fundamental shift in storage design where performance is decoupled from capacity using PernixData FVP software and server side flash,” Seagate CEO Luczo said. “This brings unparalleled performance and scalability to virtualized applications and makes flash technology a strategic component of tomorrow’s software defined data center.”

    4:33p
    Global Switch Completes First Phase of New Sydney Data Center

    Global Switch launched the first phase of Sydney East, its tenth data center. The facility is built on the provider’s campus next to its existing Sydney West data center, adjacent to the city’s Central Business District (CBD).

    The two data centers that are part of the project are expected to reach $300 million in cost, with the entire campus providing 67 megawatts of utility power and about 721,182 square feet of gross space.

    Sydney East will provide about 280,000 square feet of space and 32 megawatts of utility power when all three stages are complete. Sydney West next door is one of the largest data centers in the region, with about 450,000 square feet of space and 35 megawatts of power.

    The nearby CBD is home to some of the largest Australian companies and acts as an Asia Pacific headquarters to many international companies. The financial services industry is particularly active in the area.

    Many of these companies look to use colocation and leverage the cloud, and Global Switch offers them a sizable, well-connected campus.

    The new data center offers high-density power solutions as well as energy-efficient design features. It uses free cooling, reconfigurable power and cooling distribution and a green roof for thermal protection and solar glare reduction. The company will seek LEED Gold certification for the data center from the U.S. Green Building Council.

    Sydney East has direct access to the network-dense Sydney West facility. This includes more than 50 network and cloud providers, including Amazon Direct Connect. The addition of Sydney East means the campus provides four diverse entry points and six meet-me rooms.

    Australia has seen healthy cloud adoption. Data center growth follows in support.

    Damon Reid, managing director at Global Switch, said, “Our location on the edge of the CBD, which is an important characteristic across our portfolio, is a key driver and is proving very attractive to both service providers and end users.”

    Global Switch is building up its infrastructure to serve the larger Asia Pacific region, as well as for international companies looking to serve the region.

    “Global Switch Sydney is a core node not just for Sydney and Australia but in terms of connectivity and services, a key hub for the Asia Pacific region as a whole,” said Global Switch CEO and Executive Chairman John Corcoran. “With our data center in Singapore also having significant sub-sea cable system infrastructure present and our new data center in Hong Kong underway, Global Switch is developing a truly large-scale data center platform serving the world’s largest companies.”

    8:00p
    European Telco Viatel to Spend €125M on Data Centers, Fiber, Cloud

    European telco Viatel, part of the Digiweb Group, is investing €125 million in a major expansion of its fiber infrastructure, cloud services and data center capacity, together with funding partner Proventus Capital Partners. The company may also expand through potential acquisitions.

    Based in Dublin, Ireland, Viatel is one of the largest owners of transcontinental duct and fiber across Europe. It offers voice and managed services and already operates an extensive data center footprint. Like most telcos, it is looking to data center services as a big area of growth and said it will look at acquisitions that further those ambitions.

    CEO Colm Piercy said the company is also expanding sales, technical, product development and delivery teams in London, Paris, Amsterdam and Frankfurt.

    “This is an exciting time for Viatel,” he said. “While we already connect 150 data centers and thousands of multinational enterprises and organizations across western Europe, this investment will now enable us to enhance our services and to extend our reach further and deeper within Europe, and to connect new locations in the USA, Canada, the Middle East and Asia.”

    The company owns its infrastructure, delivering data, cloud, managed services and voice solutions. With a projected €100 million revenue run rate by year end 2014, this is a significant investment for the 200-employee company.

    Viatel is a preferred connectivity partner of London Stock Exchange. It is directly connected to the largest content distribution networks, Internet peering exchanges, public cloud platforms and Internet companies globally.

