Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, December 2nd, 2014

    1:00p
    IO to Split into Two Separate Companies

    IO, the Phoenix-based data center provider best known for its modular data center containers, is going to be split into two companies. One, called IO, will continue operating as a data center provider, while the other, called BaseLayer, will be a technology vendor, selling data center containers and its IO.OS data center infrastructure management software.

    The split will allow each of the companies to better focus on its core competency, said George Slessman, the company’s current CEO who will continue as CEO of the new IO after it is separated. He expects the separation process to be complete in the first quarter of 2015.

    Management made the decision to split the company in the summer, when they decided not to pursue an IPO, he explained. IO filed for a public offering in September 2013. But the company decided against it because of softness in the high-tech stock market. “We were not satisfied with economics of the transaction,” Slessman said.

    IO the Colo Company Will Stay the Course

    IO will be a BaseLayer customer and continue to develop and operate data centers and provide both containerized and traditional raised-floor data center space. The company has large facilities in Phoenix, nearby Scottsdale, Edison, New Jersey, and Singapore, as well as a smaller data center in Ohio. A Slough, U.K., facility is in the works.

    IO has a beta Infrastructure-as-a-Service offering for its colocation customers. The cloud is built using Open Compute hardware and OpenStack software.

    It is not an Internet-facing cloud, as the company is not trying to compete with the large public clouds like Microsoft Azure or Amazon Web Services, Slessman said. The first full general release of the product with availability zones in Phoenix and Edison is slated for mid-January.

    BaseLayer to Flex Engineering Muscle

    IO’s current CTO William Slessman (George Slessman’s brother) will be CEO of BaseLayer. The company will focus on engineering, development, and production of data center containers and DCIM software.

    It has data center modules for installation inside warehouses – the ones IO has been leasing space in – and “edge” modules, which can be installed outdoors in a location of the customer’s choice.

    BaseLayer will be headquartered at the campus where IO’s current container factory is. That is where the company produces its modular data center containers before they are shipped to one of its warehouses or other customer locations.

    Two High-Growth Companies

    According to George Slessman, if you were to look at IO’s growth separately from assets that will become BaseLayer, the company has grown at an average of 50 percent per year since its inception in 2007. For BaseLayer, the average growth rate has been around 100 percent since 2012, he said.

    The company’s leadership hopes that separately, the two firms will see accelerated growth.

    Without mentioning specific numbers, Slessman said IO was a “nine-figure business,” and BaseLayer was expected to become a “nine-figure business” coming out of 2015.

    IO to Focus on Expanding in Asia

    The post-separation IO will focus on expanding in Asia. The company has been experiencing a faster rate of take-up at its Singapore facility than it has seen anywhere else, Slessman said.

    Within the region, IO is planning to aim primarily at India and China. The latter is an especially attractive market.

    “The center of the Internet moved to China when Alibaba went public,” Slessman said, referring to September’s float by the Chinese Internet giant, which according to Reuters was the biggest IPO in history. He himself plans to relocate to Singapore in January.

    Asia already has the largest number of Internet users in the world. The region has the world’s largest population but only about a 35 percent Internet penetration rate, according to Internet World Stats. Internet use in Asia has grown about 1,100 percent since 2000.

    1:00p
    HP Opens Up Converged Infrastructure for Use with Cisco Switches

    Among the avalanche of new data center hardware products HP unveiled at its Discover conference in Barcelona Tuesday was the company’s first converged system that supports top-of-rack switches by Cisco.

    Until now, HP’s all-in-one cloud infrastructure packages worked only with HP’s networking hardware. HP has been a fierce competitor of Cisco and made a lot of noise four years ago announcing that it would replace all Cisco gear in its data centers with its own products.

    But because Cisco switches are so pervasive in the world’s data centers, the company has decided to give customers the option of an HP converged system that includes Cisco gear, Brent Allen, group manager for HP Converged Systems, said.

    The new ConvergedSystem 700 is designed to support deployments of traditional enterprise workloads on top of a scalable Infrastructure-as-a-Service platform. Like all converged infrastructure systems, it is an integrated package that includes servers, storage, networking, and management software.

    IT vendors have been pitching converged systems as a way to unify disparate teams that manage enterprise infrastructure and streamline IT processes. “We tend to still see a lot of IT shops looking at their environments in terms of IT silos,” Allen said.

    Helion-Friendly Converged System

    HP rolled out another converged infrastructure system on Tuesday. Called Helion CloudSystem CS200-HC, it integrates with Helion, HP’s OpenStack-based cloud architecture, for easy hybrid cloud deployments.

    HP announced Helion in May along with unveiling a plan to invest $1 billion in all things cloud. The Helion initiative includes a public cloud operated by HP, as well as hosted and on-premise private cloud services.

    Helion also includes HP’s new Platform-as-a-Service offering based on the open source Cloud Foundry PaaS.

    New Servers, Storage, Software Unveiled

    Also at this week’s Discover, HP announced new Integrity Superdome X and NonStop X servers, 3PAR storage arrays that combine Flash and disk storage and integrate with its StoreOnce backup platform, as well as updates to various pieces of its IT management software.

    2:00p
    Uptime Institute to Evaluate All CenturyLink Data Centers for Operations Certification

    CenturyLink has achieved Uptime Institute Tier certification for several of its data centers, and now the company is turning focus toward certification for data center operations. CenturyLink will be the first provider to attempt to receive the Uptime Institute Management and Operations Stamp of Approval for its entire portfolio of data centers.

    Uptime will evaluate CenturyLink’s close to sixty data centers in a stringent process expected to take several years. The provider will undergo intensive audits that scrutinize every aspect of how it manages and operates its data centers.

    Other early recipients of the three-year-old M&O Stamp of Approval include Fortune Data Centers, Equinix, and Colt. All three were part of an eleven-member M&O Coalition that developed the assessment criteria for the Management and Operations Program and Site Assessment Service. No provider has pursued the stamp as extensively as CenturyLink.

    The data center operations designation gives third-party assurance that site management satisfies industry-recognized criteria for 24/7 uptime. It’s a big investment and commitment on the part of CenturyLink. The end result is that customers gain peace of mind about operations, not just the facility itself.

    The audit takes into consideration everything from the processes for servicing equipment and investment in training to effectiveness of its communications to staff and subcontractors. “Each individual site has its own process, each site gets audited,” said Matt Stansberry, Uptime’s director of content and publications.

    “It’s all about operational excellence,” said Drew Leonard, vice president of colocation services at CenturyLink. “We’ve stood on that for a very long time as an operator. We’ve established a history of uptime that is born out of the way we operate, train – on the methods and practices and procedures.”

    People are Greatest Threat to Uptime

    The M&O stamp across the entire footprint will speak to enterprises increasingly looking to multi-tenant colocation facilities as part of their data center strategy.

    “Our customers are in multiple facilities,” said Leonard. “Part of what we want to achieve as a provider is standard operations for quality facilities. Last year we’ve made a commitment to Tier III for Design of Facilities. That step solidified and validated what we were doing. This is the next step. The biggest risk the data center has is the people who operate it. Even the most trained individual can make a mistake at any time.”

    Uptime said the leading cause of data center failures is operations-related.

    “Folks used to say people caused 70 percent of outages, but it’s really more like 100 percent,” said Stansberry. “Most of the outages aren’t from equipment failure; it’s about core planning.”

    New Certification Gaining Traction With Providers

    The Management and Operations certification is fairly new for Uptime, first introduced in 2011 through a coalition of multi-tenant and enterprise data center operators that worked to develop a protocol. The Institute has long certified physical facilities, and the new certification expanded its focus to operations.

    Uptime has traditionally done well with enterprise data centers in terms of facilities certification but has increasingly certified multi-tenant provider facilities. Providers recognize value in certification as it gives customers peace of mind when it comes to the facility. The CenturyLink deal will help its M&O Stamp of Approval gain additional traction in the multi-tenant space for data center operations assessment.

    The M&O Stamp of Approval is an outcome-based guideline that looks at operations, developed based on analyzing the root cause of 20 years of outages in high performance data centers. Uptime has a rich knowledge base and maintains a history of member outages to help others learn from past mistakes.

    CenturyLink most recently achieved Tier III certification in Minnesota and Toronto for Design and Constructed Facility.

    4:30p
    Blurred Boundaries: Hidden Data Center Savings

    Jeff Klaus is the General Manager of Data Center Manager (DCM) Solutions, at Intel Corporation.

    Every disruptive technology in the data center forces IT teams to rethink the related practices and approaches. Virtualization, for example, led to new resource provisioning practices and service delivery models.

    Cloud technologies and services are driving similar change. Data center managers have many choices for service delivery, and workloads can be more easily shifted between the available compute resources distributed across both private and public data centers.

    Among the benefits stemming from this agility, new approaches for lowering data center energy costs have many organizations considering cloud alternatives.

    Shifting Workloads to Lower Energy Costs

    Every data center service and resource has an associated power and cooling cost. Energy, therefore, should be a factor in capacity planning and service deployment decisions. But many companies do not leverage all of the energy-related data available to them – and without this knowledge, it’s challenging to make sense of the information generated by servers, power distribution equipment, airflow and cooling units, and other smart gear.

    That’s why holistic energy management is essential to optimizing power usage across the data center. IT and facilities teams can rely on user-friendly consoles, such as graphical thermal and power maps of the data center, to gain a complete picture of the patterns that correlate workloads and activity levels to power consumption and dissipated heat. Specific services and workloads can also be profiled, and logged data helps build a historical database to establish and analyze temperature patterns. Having one cohesive view of energy consumption also reduces the need to rely on less accurate theoretical models, manufacturer specifications, or manual measurements that are time consuming and quickly out of date.

    A Case for Cloud Computing

    This makes the case for cloud computing as a means to manage energy costs. Knowing how workload shifting will decrease the energy requirements for one site and increase them for another makes it possible to factor in the different utility rates and implement the most energy-efficient scheduling. Within a private cloud, workloads can be mapped to available resources at the location with the lowest energy rates at the time of the service request. Public cloud services can be considered, with the cost comparison taking into account the change to the in-house energy costs.

    From a technology standpoint, any company can achieve this level of visibility and use it to take advantage of the cheapest energy rates across its data center sites. Almost every data center is tied to at least one other site for disaster recovery, and distributed data centers are common for a variety of reasons. Add to this scenario all of the domestic and offshore regions where Infrastructure-as-a-Service is booming, and businesses have the opportunity to tap into global compute resources that leverage lower-cost power or sit in areas where infrastructure providers can pass through cost savings from government subsidies.
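    The site-selection logic this section describes can be sketched in a few lines: given each site's current utility rate and spare capacity, place the workload at the cheapest viable location. The site names, rates, and capacity figures below are hypothetical placeholders, not real utility prices.

```python
# Pick the data center site with the lowest current energy rate
# that still has capacity for the workload. All sites and rates
# here are hypothetical placeholders.

def cheapest_site(sites, required_kw):
    """Return the name of the viable site with the lowest $/kWh rate."""
    viable = [s for s in sites if s["free_kw"] >= required_kw]
    if not viable:
        raise ValueError("no site has enough free capacity")
    return min(viable, key=lambda s: s["rate_usd_per_kwh"])["name"]

sites = [
    {"name": "phoenix",   "rate_usd_per_kwh": 0.07, "free_kw": 120},
    {"name": "edison",    "rate_usd_per_kwh": 0.11, "free_kw": 300},
    {"name": "singapore", "rate_usd_per_kwh": 0.09, "free_kw": 40},
]

print(cheapest_site(sites, required_kw=80))  # phoenix wins on rate
```

    A real scheduler would pull live utility tariffs and telemetry rather than static numbers, but the decision at each service request reduces to this comparison.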

    Other Benefits of Fine-Grained Visibility

    For the workloads that remain in the company’s data centers, increased visibility also arms data center managers with knowledge that can drive down the associated energy costs. Energy management solutions, especially those that include at-a-glance dashboards, make it easy to identify idle servers. Since these servers still draw approximately 60 percent of their maximum power requirements, identifying them can help adjust server provisioning and workload balancing to drive up utilization.
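    The idle-server math above lends itself to a simple telemetry check: if idle machines draw roughly 60 percent of maximum power, anything reading near that floor is a candidate for consolidation. The readings and the 65 percent threshold below are illustrative assumptions, not vendor figures.

```python
# Flag servers whose measured draw sits near the ~60% idle floor,
# suggesting they are powered on but doing little useful work.
# Readings and the 65% threshold are illustrative assumptions.

IDLE_THRESHOLD = 0.65  # fraction of max power, just above the ~60% idle floor

def likely_idle(servers):
    """Return names of servers drawing under IDLE_THRESHOLD of max power."""
    return [
        name for name, (watts, max_watts) in servers.items()
        if watts / max_watts < IDLE_THRESHOLD
    ]

readings = {
    "web-01": (310, 500),  # 62% of max: probably idle
    "db-01":  (450, 500),  # 90% of max: busy
    "web-02": (305, 500),  # 61% of max: probably idle
}

print(likely_idle(readings))
```

    In practice a dashboard would combine power draw with CPU and network counters before recommending consolidation, but power telemetry alone already narrows the list.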

    Hot spots can also be identified. Knowing which servers or racks are consistently running hot can allow adjustments to the airflow handlers, cooling systems, or workloads to bring the temperature down before any equipment is damaged or services disrupted.

    Visibility of the thermal patterns can be put to use for adjusting the ambient temperature in a data center. Every degree that temperature is raised equates to a significant reduction in cooling costs. Therefore, many data centers operate at higher ambient temperatures today, especially since modern data center equipment providers warrant equipment for operation at the higher temperatures.

    6:02p
    CyrusOne Brings Northern Virginia Data Center Online

    CyrusOne’s Northern Virginia data center is now up and running. The first 30,000 square feet of colocation space has been commissioned within seven months of the company breaking ground on the first 125,000 square foot building in Sterling. The building will have up to 12 megawatts of critical load.

    The data center is CyrusOne’s first foray into the Northern Virginia market, home to one of the world’s largest clusters of data center real estate.

    CyrusOne is a major player with a footprint of 1 million square feet of space in over 25 data centers across the U.S., Europe, and Asia. Much of its capacity is concentrated in Texas, Ohio, and Phoenix.

    There is a big opportunity to cross-sell the Virginia space to those existing customers, as well as to new potential customers, since Northern Virginia is a major Internet traffic hub, with an estimated 70 percent of the country’s traffic passing through the region.

    A third of the new phase was pre-leased last month to a Fortune 50 company. The unnamed company is an existing CyrusOne customer.

    At full build, the 14-acre site is expected to accommodate a shell of approximately 400,000 square feet with up to 240,000 square feet of colocation space, 36,000 square feet of Class A office space, and up to 48 megawatts of critical load.

    “By expanding our footprint on the East Coast, we can better meet the expectations of our future and existing customers in this region,” Tesh Durvasula, chief commercial officer at CyrusOne, said in a statement.

    The Northern Virginia market continues to see new builds, customer activity, and healthy growth on top of more than 5.1 million square feet of existing data center space.

    RagingWire also has a Northern Virginia data center in the works and has announced a strong sales pipeline there. The Equinix campus does extremely well; it is known for its connectivity, and the company has cited private links as its fastest-growing business segment.

    CoreSite expressed a lot of optimism about the market ahead of the opening of its second facility. DuPont Fabros revealed that an existing customer subleased the 13 megawatts of capacity vacated by Yahoo at its ACC4 data center in Ashburn.

    7:51p
    Amazon Simplifies Discounts on Reserved Instances

    Amazon Web Services has simplified the way it offers cloud discounts to EC2 users that reserve cloud compute capacity in advance.

    Instead of providing different levels of discounts depending on how heavily the reserved instances are used, AWS now offers a single type of reserved instance discount. Under the new model, the size of the discount varies depending on whether the user pays for all reserved capacity upfront, pays partially upfront, or pays nothing upfront.

    Users can reserve cloud VMs for one year or three years. Savings range from about 30 percent to 75 percent, depending on instance specs and length of the reservation.

    AWS has been offering discounted reserved instances since 2009. Its major rivals in the public cloud market, Google and Microsoft, have each taken a different approach to commitment discounts.

    Google doesn’t offer cloud discounts on reserved instances on its Compute Engine. Instead, it offers a “sustained use” discount, which is applied automatically once a user runs an instance for longer than 25 percent of a billing cycle. If you use an instance throughout the entire billing cycle, your net discount on it will be 30 percent.
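    The sustained-use mechanics can be modeled as tiered incremental rates: the first quarter of the billing cycle bills at full price, with each subsequent quarter progressively cheaper. The specific tier rates below are an assumption chosen to be consistent with the figures cited here (no discount at 25 percent usage, a 30 percent net discount for a full cycle), not Google's published price list.

```python
# Model a sustained-use discount: each successive quarter of the
# billing cycle is billed at a lower incremental rate. Tier rates
# are assumptions consistent with the net figures in the article.

TIERS = [(0.25, 1.00), (0.25, 0.80), (0.25, 0.60), (0.25, 0.40)]

def sustained_use_multiplier(usage_fraction):
    """Average price multiplier for running a VM usage_fraction of the cycle."""
    billed, remaining = 0.0, usage_fraction
    for width, rate in TIERS:
        used = min(width, remaining)
        billed += used * rate
        remaining -= used
        if remaining <= 0:
            break
    return billed / usage_fraction

print(round(1 - sustained_use_multiplier(1.0), 2))   # full cycle: 30% net discount
print(round(1 - sustained_use_multiplier(0.25), 2))  # quarter cycle: no discount yet
```

    The appeal of this scheme is that the user does nothing: the discount accrues automatically with usage, where a reserved instance requires an explicit commitment in advance.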

    Google introduced sustained use discounts earlier this year. It was a much simpler discount system than Amazon’s reserved instance model.

    Microsoft Azure announced discounted commitment plans in 2013 but stopped offering them earlier this year. It still provides the discount (20 percent to about 30 percent) to users that subscribed before it nixed the plan.

    Big cloud providers have been battling it out by continuously slashing the rates they charge their users. Rewarding users who make upfront commitments to their services plays a role in that battle, but also has another purpose.

    Cloud discounts on reserved instances are a way to court long-term users to a type of service whose biggest appeal is the ability to use it temporarily and pay only for what you use, as an alternative to making heavy investments in hardware and data centers.

    Here’s a breakdown of the new model from Tuesday’s blog post by AWS Chief Evangelist Jeff Barr:

    • All Upfront – You pay for the entire Reserved Instance term (one or three years) with one upfront payment and get the best effective hourly price when compared to On-Demand.
    • Partial Upfront – You pay for a portion of the Reserved Instance upfront, and then pay for the remainder over the course of the one or three year term. This option balances the RI payments between upfront and hourly.
    • No Upfront – You pay nothing upfront but commit to pay for the Reserved Instance over the course of the Reserved Instance term, with discounts (typically about 30%) when compared to On-Demand. This option is offered with a one year term.
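    The trade-off among the three options reduces to an effective hourly price: amortize any upfront payment over the hours in the term and add the hourly rate. The dollar figures below are made-up placeholders for illustration, not actual AWS prices.

```python
# Compare effective hourly prices for the three reserved instance
# payment options against On-Demand. All dollar figures are made-up
# placeholders, not real AWS pricing.

HOURS_PER_YEAR = 8766  # average hours per year, accounting for leap years

def effective_hourly(upfront, hourly, term_years):
    """Amortize the upfront payment over the term and add the hourly rate."""
    hours = HOURS_PER_YEAR * term_years
    return upfront / hours + hourly

on_demand  = 0.100                              # $/hour, no commitment
all_up     = effective_hourly(525.0, 0.000, 1)  # everything paid upfront
partial_up = effective_hourly(260.0, 0.035, 1)  # split upfront/hourly
no_up      = effective_hourly(0.0,   0.070, 1)  # commit, pay hourly only

for label, price in [("All Upfront", all_up), ("Partial Upfront", partial_up),
                     ("No Upfront", no_up), ("On-Demand", on_demand)]:
    print(f"{label:16s} ${price:.4f}/hour")
```

    With these placeholder numbers the ordering matches the model Barr describes: All Upfront is cheapest per hour, Partial Upfront sits in the middle, and No Upfront still lands roughly 30 percent below On-Demand.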
    8:30p
    FBI Investigates Hack into Sony Pictures Corporate Network


    This article originally appeared at The WHIR

    A hack of Sony Pictures Entertainment (SPE) corporate network last week appears to have resulted in a major data breach, and the FBI confirmed it is investigating. Several movie titles, most of them yet to be released, have appeared on file-sharing websites, according to the Associated Press, though a direct connection to the hack has not yet been established.

    Portions of the internal network at SPE were knocked offline after employees received an image which said they had been “Hacked by #GOP,” which is reported to stand for “Guardians of Peace.” An image reported to be the one seen by employees has been posted to Imgur. That image includes claims that the hack had yielded “all your Internal data” (sic) including “secrets and top secrets,” and threatens to release data. It also says “We continue till our request be met” (sic), suggesting that the malicious actors’ hacking skills far outpace their English language ones. Corporate email services remained down over the weekend.

    Sony responded in a statement to Variety. “Sony Pictures continues to work through issues related to what was clearly a cyber attack last week. The company has restored a number of important services to ensure ongoing business continuity and is working closely with law enforcement officials to investigate the matter.”

    The yet-to-be-released films Annie, Still Alice, To Write Love on Her Arms, and Mr. Turner, as well as Fury, which is currently in theaters, have all appeared online since the attack. Other data speculated to have been lost to the hackers includes employee passwords, salary and compensation information for thousands of employees, including executives, and even pirated content possibly downloaded by an employee.

    The leading suspect in the media, if not in the FBI investigation, is North Korea. Sony is set to release The Interview on Christmas Day, a movie depicting a CIA plot to assassinate North Korean dictator Kim Jong Un. North Korea called the movie an “act of war” and promised “stern” and “merciless” retaliation. Variety reports that this is only one of several possibilities being investigated.

    The FBI is joined in the investigation by Mandiant. Mandiant’s credentials in tracking international hackers were reinforced in 2013 when it identified China’s “Unit 61398” as a cyber-espionage group responsible for numerous attacks.

    The Sony Entertainment Network and PlayStation Network were knocked offline by a DDoS attack in August, with a group called “Lizard Squad” taking responsibility and issuing a bomb threat. Not only has Lizard Squad not been identified and charged, it also took down Xbox LIVE on Monday, according to GameZone.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/fbi-investigates-hack-sony-pictures-corporate-network

    9:00p
    FireEye Discovers Hackers Targeting Wall Street to Access Insider Trading Information


    This article originally appeared at The WHIR

    Hackers familiar with the financial industry have been attacking over 100 companies in the healthcare, pharmaceutical and investment banking sectors since mid-2013. A report released by FireEye on Monday says the group it calls FIN4 focuses on gaining access to accounts of individuals that have access to insider information.

    This intimate, non-public knowledge of publicly traded companies could be used to provide an unfair advantage in the stock market, and trading on it is illegal outside of insider trading regulated by the Securities and Exchange Commission (SEC).

    Hacks on US financial institutions have been prevalent this year. In October, JP Morgan reported that over 76 million customer accounts were exposed through a security breach that had gone undetected for months. In August, the FBI, NSA, and US Secret Service investigated hacks at five US banks. Just last week, Sony Pictures Entertainment was hacked by a group using malware; the company hired FireEye’s Mandiant incident response team to clean up after the incident.

    Rather than using malware, as was the case in some of the other hacks this year, FIN4 gains access to the email accounts of their targets. The group targets executive management, legal counsel, researchers and others in advisory roles. FireEye believes the hackers focus on individuals that may have information about publicly traded companies that could affect the stock price. The hackers are particularly interested in mergers and acquisitions.

    Two-thirds of the over 100 targets are healthcare and pharmaceutical companies. Half of the targets are in the already volatile biotechnology sector.

    “We believe FIN4 heavily targets healthcare and pharmaceutical companies as stocks in these industries can move dramatically in response to news of clinical trial results, regulatory decisions, or safety and legal issues,” said the report. “In fact, many high-profile insider trading cases involve the pharmaceutical sector.”

    The hackers are using various techniques to steal passwords.

    “The group frequently employs M&A-themed and SEC-themed lures with Visual Basic for Applications (VBA) macros implemented to steal the usernames and passwords of these key individuals,” the report said. “Additionally, FIN4 has included links to fake Outlook Web App (OWA) login pages designed to capture the user’s credentials. Once equipped with the credentials, FIN4 then has access to real-time email communications—and presumably insight into potential deals and their timing.”

    Once the email account is compromised, the group uses it to send detailed phishing emails to contacts, using wording that indicates the group is familiar with the financial industry and likely includes native English speakers.

    Intimate knowledge of industry-specific language and of how acquisitions work gave FIN4 success in reaching potential targets despite the caution and security measures typical of these industries. “In several of our investigations, FIN4 targeted multiple parties involved in a business deal, including law firms, consultants, and public companies. In one instance, FIN4 appeared to leverage its previously-acquired access to email accounts at an advisory firm (“Advisory Firm A”) to collect data during a potential acquisition of one of Advisory Firm A’s clients (“Public Company A”).”

    The group even uses Outlook rules to delete incoming email containing the words “hack”, “phish”, and “malware” on a compromised account. This measure may prevent the target from receiving email from colleagues who suspect a breach.
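    The effect of such a rule can be illustrated with a trivial keyword filter; the message list here is hypothetical, and a real Outlook rule would act server-side on incoming mail rather than on a list in memory.

```python
# Illustrate the effect of a mail rule that silently deletes any
# incoming message mentioning the intrusion. Messages are hypothetical.

SUPPRESSED = ("hack", "phish", "malware")

def surviving_messages(inbox):
    """Return messages that a keyword-deletion rule would let through."""
    return [
        msg for msg in inbox
        if not any(word in msg.lower() for word in SUPPRESSED)
    ]

inbox = [
    "Quarterly results attached",
    "URGENT: we may have been hacked",
    "IT notice: phishing attempt reported",
    "Lunch on Friday?",
]

print(surviving_messages(inbox))
```

    The victim sees only the routine messages; any warning from a colleague that mentions the breach by name simply never arrives.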

    Hackers have been in the news a lot this year with Russia and China suspected in many cases. An unknown government using advanced hacking spyware attacked Russia and Saudi Arabia in November. Perhaps in response to growing threats, FireEye partnered with SingTel in October to strengthen security in the APAC region, an area highly targeted by hackers.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/fireeye-discovers-hackers-targeting-wall-street-access-insider-trading-information

