Data Center Knowledge | News and analysis for the data center industry

Tuesday, June 17th, 2014

    11:00a
    From Alimama to Apsara, Alibaba Operates a Powerful Proprietary Cloud

    Chinese e-commerce and cloud services giant Alibaba has developed an advanced proprietary technology stack to support its empire. The distributed system, living in data centers in China and Hong Kong, supports a multitude of cloud-based services, including rentable infrastructure resources and sophisticated Big Data analytics for marketers.

    As it prepares for its blockbuster IPO on the U.S. market, through which the company is expected to raise up to $20 billion, Alibaba has been filing papers with the U.S. Securities and Exchange Commission that reveal a few interesting details about its infrastructure stack.

    Apsara: Alibaba’s cloud platform

    The company’s cloud provides a distributed computing infrastructure to support its e-commerce ecosystem, serving its own platform, its affiliates and Alipay, the online payment provider for Internet businesses that Alibaba spun off in 2011. The services include cloud servers (called Elastic Computing Servers), storage, relational databases and a content delivery network, all on a pay-as-you-go basis, much like the offerings found in the service portfolios of its U.S. counterparts, such as Amazon Web Services or Google’s cloud.

    The company says its primary cloud user base consists of mobile app developers, Internet gaming and online platforms, e-commerce and Internet finance firms and system integrators.

    The cloud platform is called Apsara. It is built on Alibaba’s own proprietary technology, which enables massive scalability. “A single Apsara cluster can be scaled up to 5,000 servers with 100 petabyte storage capacity and 100,000 CPU cores,” the company wrote in the SEC documents.
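
    For a rough sense of scale, a back-of-the-envelope calculation based on those figures (and assuming resources are spread evenly across servers, which the filing does not state) works out to about 20 CPU cores and 20 TB of storage per server at the cluster’s stated maximum:

        # Back-of-the-envelope scale of a maxed-out Apsara cluster, using the SEC figures above.
        # Assumes an even spread across servers, which the filing does not claim.
        servers = 5_000
        cpu_cores = 100_000
        storage_pb = 100

        cores_per_server = cpu_cores / servers                 # 20 cores per server
        storage_tb_per_server = storage_pb * 1_000 / servers   # 20 TB per server
        print(f"{cores_per_server:.0f} cores, {storage_tb_per_server:.0f} TB of storage per server")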

    Alibaba provides cloud infrastructure services through a subsidiary called Aliyun, which has been expanding data center capacity this year. As we reported in May, the company has recently launched a data center in Beijing and another one in Hong Kong.

    Alimama: Big on Big Data

    The technology stack includes sophisticated data-science capabilities, including deep learning, high-volume batch processing and multi-variable and multi-dimensional real-time analytics. These capabilities are used by Alibaba’s search and online marketing products, and by its small-and-medium-business loan operation for credit profiling and risk management.

    Called Alimama, the deep-learning system uses proprietary algorithms to judge advertising quality provided by publishers and predict click-through and conversion rates for marketers. It runs on an Apsara cluster of servers that can analyze terabytes of data points and model tens of billions of ad impressions.

    Alibaba’s distributed relational database management system, called OceanBase, can be scaled to hundreds of nodes. Also built on proprietary technology, it supports transaction processing on the company’s marketplaces.

    Forced to censor own customers

    While the past 12 months of revelations about the U.S. National Security Agency’s alleged backdoor electronic surveillance have U.S. service providers worried about adverse effects on their businesses, Chinese service providers have a whole different set of problems: they are required by the government to police their own customers by censoring their content.

    In the documents, Alibaba revealed that Chinese law requires it to monitor all websites hosted on its servers for content the government may find inappropriate.

    Alibaba is required to monitor “for items or content deemed to be socially destabilizing, obscene, superstitious or defamatory, as well as items, content or services that are illegal to sell online or otherwise in other jurisdictions in which we operate our marketplaces, and promptly take appropriate action with respect to such items, content or services.”

    Needless to say, the vague definition of illegal content puts the company at constant risk of liability. “It may be difficult to determine the type of content that may result in liability to us, and if we are found to be liable, we may be subject to fines, have our relevant business operation licenses revoked, or be prevented from operating our websites or mobile interfaces in China,” the documents read.

    11:30a
    VMware Cloud Services Chief Bill Fathers Joins Telx Board

    Bill Fathers, the man in charge of VMware’s cloud services business, has joined the board of directors of Telx, one of the biggest colocation and interconnection service providers in the U.S.

    As Telx and its peers in the data center provider space race to win business from cloud companies, it is advantageous to have a board member like Fathers, who runs the cloud services unit at one of the most important companies in the enterprise cloud space.

    According to a Telx statement, Fathers will advise the company on sales, marketing, product development and operations. “His [Fathers’] deep understanding of our business will help continue driving Telx’s success as a premier data center solution provider that fuels infrastructure, interconnection and business progress for our customers,” Telx CEO Chris Downie said.

    Companies like Telx have taken the role of marketplace enablers for the ecosystem of cloud service providers and their customers. Telx has traditionally emphasized its role as an operator of hubs where network operators, service providers and end users interconnect.

    Being a host and interconnection enabler for a diverse group of players puts it in a good position to build a cloud ecosystem. Telx’s competitor Equinix has devised a similar model.

    VMware’s vCloud Hybrid Service, an Infrastructure-as-a-Service offering geared toward enterprises, fits the profile of a provider that would use Telx facilities to reach customers. The company rolled the service out in May 2013 and said it would use colocation providers to host the infrastructure to support it, but has been hesitant to say publicly who those providers were.

    The company is offering seamless integration between customers’ own VMware vSphere environments and its public cloud, including automatic extension of their networking environment onto the VMware-operated infrastructure in colocation data centers.

    Downie became permanent Telx CEO only recently. He stepped in as interim chief executive after the company’s previous CEO, Eric Shepcaro, passed away last May, and was later named his permanent replacement.

    12:00p
    Power Assure Expands DCIM Capabilities With App Performance Monitoring

    Power Assure has added Application Performance Monitoring (APM) to its EM/5 monitoring solution, which has recently been integrated with IT service management software by ServiceNow.

    The addition of APM drives deeper insights into the application stack in addition to IT and facilities monitoring. It helps identify and correlate relationships between applications and devices.

    Power Assure provides software-defined power solutions and tracking that mitigate data center power risk. The integration with ServiceNow extended the data center infrastructure management solution’s capabilities.

    EM/5 is a scalable, multi-tenant hosted data center monitoring solution that feeds real-time and historical metrics into the ServiceNow environment. The addition of APM makes it particularly useful for performance-sensitive customer-facing applications.

    With the addition of APM, EM/5 now tracks key application performance indicators, such as response time, throughput and transaction counts. It adds granular monitoring across all tiers of the application environment, giving administrators a single place to manage everything from power all the way down to application components.

    “Tracking application performance, particularly of customer facing systems, has become vitally important for companies across virtually all industry sectors,” said Pete Malcolm, president and CEO of Power Assure. “By providing it as an integral part of our EM/5 solution, ServiceNow customers can now view and analyze IT, facilities and application metrics all in one place, making root cause analysis and rectification of problems much faster, and in many cases, before they are noticeable by users.”

    IT and facilities monitoring covers physical and virtual resources, but application performance monitoring tracks the performance of the applications residing on that infrastructure. Slow applications frustrate users, and in the case of e-commerce apps, that frustration often leads to abandonment and lost revenue.

    By tracking response times, EM/5 can correlate bad performance with IT metrics such as CPU utilization. By examining transaction throughput, users can identify faults in back-end components, such as a slow database. The APM also provides trend analysis for better capacity planning and forecasting.
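
    The underlying idea, correlating an application metric against an infrastructure metric, can be sketched in a few lines. The snippet below is a hypothetical illustration, not Power Assure’s implementation or API; the metric names and sample values are invented:

        # Hypothetical illustration of correlating response time with CPU utilization.
        # Not Power Assure's API; sample values are invented for demonstration.
        from statistics import correlation  # Pearson correlation, Python 3.10+

        response_time_ms = [120, 135, 180, 240, 310, 150, 125]        # per-minute APM samples
        cpu_utilization = [0.42, 0.48, 0.61, 0.79, 0.93, 0.52, 0.44]  # matching IT telemetry

        # A coefficient near 1.0 suggests slow responses track CPU saturation,
        # pointing root-cause analysis at the compute tier rather than, say, the database.
        r = correlation(response_time_ms, cpu_utilization)
        print(f"response time vs. CPU utilization: r = {r:.2f}")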

    12:30p
    Location, Location – Why a Data Center Hub May Not Be The Best Solution

    Robert Williams is technical director at Custodian Data Centre and a specialist in data centre design; he is the primary author of this post. Kate Baker, business strategist, co-authored it.

    With fast network routes to Europe and the rest of the world, London has historically been the natural home for data centers looking to compete in a global market.

    However, with building space for expansion limited, data centers are often built on small footprints of land, towering upwards into the London skyline. Whilst these sites can house many servers and offer low network latency, many have limited power availability and therefore low capacity per cabinet.

    Beyond power, the physical demands

    In addition to the power challenge, companies choosing to colocate in a data center hub are positioning themselves in an area more likely to be targeted by terrorist activity. This is a growing concern for companies looking for disaster recovery sites that will meet auditors’ stringent specifications. Data centers with open space around them also attract auditors, as that space allows secure physical fencing to be placed at an optimal distance from the infrastructure.

    Companies also need to locate disaster recovery at a site that is far enough away not to be impacted by outages in major cities, yet close enough for staff to reach it from their offices within a reasonable time frame.

    The sensible option would be to look at regional sites. However, network topology and customer needs often mean that out-of-town locations, whilst offering better security and access to power, are unable to cater to companies that use latency-sensitive applications or require fast transport links.

    Identifying the ideal location

    Companies looking for active replication, low latency, power availability and a secure location should be looking carefully at where to locate their servers. They need inner London connectivity in a safe outer London location. The geographic position needs to allow companies to enjoy all the benefits of a London data center plus attractive power capabilities and a secure location.

    In terms of connectivity, some facilities would argue that it is best to avoid London entirely for complete peace of mind; however, it is extremely unlikely that the whole city would be taken down. A data center with diverse geographic routing options enables companies to take full advantage of a major city’s or the capital’s infrastructure without the premium costs associated with being in the center.

    Another important benefit of out-of-city locations is ample parking and secure unloading areas. Additionally, many regional sites are quicker to reach than sites that require travel across congested city networks, and they often sit close to motorway networks or easily reachable airports.

    Facilities in regional locations can tap into a wider range of green technologies, using systems such as fresh air cooling or hot and cold aisle containment. With cleaner air and lower average temperatures than intensely built-up areas, regional locations provide optimal conditions for filtration systems and reduce the need to manage temperatures with chillers.

    The ideal location for a data center is one that is circa 80 km from a city location, secure and not in a data center hub, with access to power and the ability to handle the requirements of an active-active data center solution. Many large organizations that have traditionally looked at London sites now find that auditors stipulate they base some of their colocation facilities outside the M25, with strong levels of connectivity.
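
    As a rough illustration of why a site roughly 80 km out can still support latency-sensitive, active-active replication: light in optical fiber travels at about 200,000 km/s (roughly 5 microseconds per kilometer), so the distance itself adds well under a millisecond of round-trip delay. The figures below are assumptions for illustration; real fiber routes run longer than the straight-line distance and add equipment latency.

        # Rough propagation-delay estimate for an out-of-town site (illustrative assumptions only).
        distance_km = 80                 # the circa-80 km distance discussed above
        fiber_speed_km_per_s = 200_000   # light travels at roughly 2/3 of c in optical fiber
        route_factor = 1.5               # assumed: fiber paths run longer than the straight-line distance

        one_way_ms = distance_km * route_factor / fiber_speed_km_per_s * 1_000
        print(f"one-way propagation: {one_way_ms:.2f} ms, round trip: {2 * one_way_ms:.2f} ms")
        # roughly 0.6 ms one way and 1.2 ms round trip, before any switching or routing overhead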

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    How Intelligent Asset Management Directly Impacts Your Data Center ROI

    There’s no way you can intelligently control your complex data center platform without some type of proactive monitoring solution. The distributed nature of the modern data center has created new types of demands around resource utilization and control.

    Here’s the bottom line: As companies continue to absorb the flow of electronic data in astronomical quantities, greater attention is being paid to the infrastructure that holds the data and the escalating costs of its management.

    Traditionally, CFOs have only calculated the investments in IT hardware, software and related necessities, including networking, mobile, telecommunications and services that are critical in maintaining a higher-performing data center. Now, with growing issues in data security, compliance assurance, sustainability and capacity planning, businesses are under pressure to deploy effective management systems to reduce costs while identifying and eliminating inefficiencies. The data center has become a strategic advantage, and differentiator, in most industries.

    With that in mind, it’s important to understand exactly what can make a data center fail. In this whitepaper from RF Code, we learn that 73 percent of data center downtime is caused by human error. Manual asset management processes, and the problems they cause, can also drive higher costs, including:

    • Loss of data
    • Physical IT asset inventory collection
    • Physical IT asset inventory reconciliation
    • Manually locating misplaced IT assets
    • Manual IT asset repository updates
    • Replacement of misplaced IT assets
    • Penalties for late return of leased IT assets

    Asset management means more than simply knowing what the company has purchased. As the need for data centers grows, propelled by the massive growth in consumer-driven data, it is essential that efficient and fiscally responsible processes are in place to accurately manage IT expenditure and company exposure. There are a number of direct and indirect benefits to understanding and intelligently controlling your infrastructure assets.

    Download this whitepaper today to learn about how device intelligence also gives IT executives several cost-saving opportunities to make informed strategic decisions on data center operation and purchasing, capacity planning, human resource deployment, and outsourcing. Leveraging these opportunities will elevate IT’s ability to meet high-level directives to lower costs and increase efficiency — today and into the future. Remember, your ultimate goal is to better manage your IT infrastructure while directly aligning with your corporate goals.

    4:00p
    Cloud Provider DigitalOcean Rolls Out IPv6 in Singapore

    Cloud infrastructure service provider DigitalOcean is getting ready for the great IPv6 switch, citing customer demand. With the supply of IPv4 running short worldwide, the company has decided to launch IPv6 in Singapore first, followed by a wider rollout.

    DigitalOcean differentiates itself by providing simple deployment options to get up and running. It’s an approach targeted at developers, startups and small businesses. The company launched in 2012, and despite being a latecomer to the cloud scene, it has had exponential growth. It said IPv6 was its most requested feature.

    IPv6 allows for much more address space, and having both IPv4 and IPv6 addresses will allow the team to see what users want from the feature and how they use it, as well as transition smoothly to native IPv6 networking. IPv4 can support about 4.29 billion addresses. While that seemed tremendous when IPv4 was created, the limit is restrictive today. The 128-bit IPv6 protocol uses hexadecimal notation and can support 2^128 addresses.
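
    For concreteness, the two address-space sizes and IPv6’s hexadecimal notation can be checked with Python’s standard ipaddress module; the sample address below uses the 2001:db8:: documentation prefix, chosen purely for illustration:

        import ipaddress

        print(2 ** 32)    # IPv4: 4,294,967,296 (~4.29 billion) addresses
        print(2 ** 128)   # IPv6: roughly 3.4e38 addresses

        # IPv6 addresses are written as eight groups of hexadecimal digits;
        # 2001:db8::1 sits in the documentation prefix and is used only as an example.
        addr = ipaddress.ip_address("2001:db8::1")
        print(addr.exploded)   # 2001:0db8:0000:0000:0000:0000:0000:0001
        print(addr.version)    # 6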

    Mitch Wainer, DigitalOcean’s chief marketing officer and co-founder, said, “IPv4 is running out throughout the world. A lot of the developers out there are starting to prepare and transition to IPv6. We’ve also discovered that [Asia Pacific] has the highest adoption rate with IPv6, a reason we’re rolling it out in Singapore first.”

    Stateside, the company says it has had to buy IPv4 space through brokerages due to limited supply. The American Registry for Internet Numbers (ARIN) is down to the final /8 (around 16 million addresses) and has moved into Phase Four, the final phase, of its IPv4 countdown plan. “The reality is that we’re buying our IP addresses now, and they’re costing $8 per IP, which is crazy,” Wainer said. “We’re spending hundreds of thousands on IP addresses. Europe is totally depleted. The number in Asia has depleted. The US is depleted, if not close.”

    San Francisco is up next for DigitalOcean’s IPv6 rollout.

    DigitalOcean expanded with a launch in Singapore in an Equinix data center last February, and in that short time it says it has seen tremendous uptake there. It has also recently landed $37.2 million in funding. “It’s just the beginning for us. There’s a lot more to come with the new round of funding, allowing us to scale tremendously,” said Wainer. “We’re at 90 employees at this moment, 115 by the end of this year.”

    4:30p
    SanDisk Buys Server Flash Vendor Fusion-io for $1.1B

    SanDisk has agreed to acquire Fusion-io, vendor of flash storage cards for enterprise-grade servers, for about $1.1 billion in cash.

    Fusion-io gains the experience and global scale of SanDisk, while SanDisk gains the hyperscale data center and Fortune 100 customers from Fusion-io, as well as a rich enterprise flash product portfolio.

    For more than 25 years, SanDisk has dominated the consumer, mobile and OEM markets for flash storage, but it has only recently attacked the data center market.

    In 2012 the company established SanDisk Ventures to invest in entrepreneurs and investors who share its vision for memory storage, cloud, data center, mobile, wearables and big data. SanDisk invested in Whiptail, which was successfully sold to Cisco, acquired enterprise SSD company SMART Storage Systems, and invested in flash-based network attached storage (NAS) solution provider Panzura.

    “Fusion-io will accelerate our efforts to enable the flash-transformed data center, helping companies better manage increasingly heavy data workloads at a lower total cost of ownership,” said Sanjay Mehrotra, SanDisk president and CEO.  “Customers will benefit from the addition of Fusion-io’s leading PCIe solutions to SanDisk’s vertically integrated business model.”

    Fusion-io has acquired a number of businesses itself over the years. The company went public in 2011 and had its founding members, including CEO David Flynn, depart last year. While its customer base and product portfolio continued to expand, the company logged five straight quarterly losses, with a 2013 accumulated deficit of $108.8 million. At the time Fusion-io went public its valuation was as high as $1.48 billion.

    “This transaction represents a compelling opportunity for Fusion-io’s employees, customers and shareholders,” said Shane Robison, chairman and CEO of Fusion-io. “Fusion-io’s innovative hardware and software solutions will be augmented by SanDisk’s worldwide scale and vertical integration, enabling a combined company that can offer an even more compelling value proposition for customers and partners.”

    5:13p
    Level 3 to Acquire tw telecom for $5.7B

    Level 3 Communications, the global network backbone giant, announced an agreement to buy U.S. metro connectivity service provider tw telecom for about $5.7 billion in cash and stock value.

    If U.S. trade regulators approve the transaction, Broomfield, Colorado-based Level 3 will gain tw telecom’s substantial metro footprint, which it says has little overlap with its own metro assets in the U.S. It will nearly double the giant’s metro fiber assets in North America.

    This is the biggest transaction in Level 3’s long string of acquisitions that extends back for more than a decade. The company has built a global empire by buying up domestic and foreign competitors.

    Level 3’s most recent acquisition that’s close in scale to the tw telecom deal was its $3 billion takeover of backbone operator Global Crossing in 2011.

    The tw telecom acquisition will add assets in nearly 80 North American markets to the 120 markets Level 3 currently serves in the region. The giant serves about 40 markets in Europe, the Middle East and Africa and about 15 in Latin America.

    The deal will add 24,300 metro fiber route miles in North America to the 27,000 miles Level 3 already has in the region.
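
    Taken together with the “nearly double” claim above, the route-mile figures check out; a quick calculation (using only the numbers reported here) puts the combined footprint at roughly 1.9 times Level 3’s current North American metro fiber:

        # Quick check of the "nearly double" claim using the figures reported above.
        existing_miles = 27_000
        added_miles = 24_300
        total_miles = existing_miles + added_miles
        print(total_miles, round(total_miles / existing_miles, 2))   # 51300 route miles, about 1.9x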

    Jeff Storey, Level 3 president and CEO, said the strategic acquisition would boost the company’s ability to gain market share. “The transaction further solidifies Level 3’s position as a premier global communications provider to the enterprise, government and carrier market, combining tw telecom’s extensive local operations and assets in North America with Level 3’s global assets and capabilities,” he said in a statement.

    Consolidation, metro assets to save $200M

    Level 3 CFO Sunit Patel told the Wall Street Journal that the company expected to see a substantial reduction in operational costs as a result of the acquisition – to the tune of $200 million a year.

    The company expects to save on connectivity it has to buy from metro providers to connect its corporate customers’ facilities to its network. Owning tw telecom’s metro assets will be responsible for more than half of the expected savings, Patel said.

    Another reason for the projected savings is general stabilization of bandwidth prices over the past several years as the telco industry consolidates.

    Some of the most recent examples of the consolidation trend include the pending takeover of Time Warner Cable by Comcast, and AT&T’s bid for DirecTV.

    Level 3 struggling to make a profit

    Despite high gains in stock value, Level 3 has been struggling to make a profit, reporting losses year after year. The company reported $109 million in losses for 2013 on revenue of about $1.6 billion.

    It has not made a profit since 1998, according to the Wall Street Journal, but its stock has about doubled in value over the past 12 months.

    6:00p
    Oracle Launches x86 Sun Servers Designed for Database In-Memory Option

    The latest in Oracle’s line of workload-specific enterprise-class servers are the Sun Server X4-4 and X4-8, a four- and an eight-socket server, respectively. These are the first to include elastic computing features, adapting to workload demands in real time, and, in line with the company’s hardware tradition, they are engineered to integrate tightly with its software.

    The X4-4 is designed for business intelligence and analytics workloads as well as server consolidation. It’s for any application that requires large-memory-footprint virtual machines and runs real-time analytics. The X4-8 is what the company calls the best choice for the Oracle Database In-Memory option and scale-up applications.

    It allows even more of the database to be memory-optimized. The company launched its in-memory database product earlier this month.

    Oracle worked with Intel to define a processor, the Intel Xeon E7-8895 v2, to be used in the new systems.

    It combines the capabilities of three different Xeon products into a single processor. “Oracle’s new four- and eight-socket servers can simplify the process of determining system configurations and purchasing while allowing dynamic repurposing of assets,” said Shannon Poulin, vice president and general manager of the Datacenter Marketing Group at Intel.

    Enhancements have also been made to the system BIOS, Oracle Solaris and Oracle Linux, which allow the system to dynamically clock up to faster speeds. This type of capability is attractive to customers like stock market trading companies, which need the servers for active daytime trading activity and nighttime stock portfolio processing.

    “The Sun Server X4-4 and Sun Server X4-8 further Oracle’s goal of simplifying IT and significantly reducing operating expenses for our customers by delivering products that are best for Oracle Database In-Memory Option and business analytics,” said Ali Alasti, senior vice president of hardware development at Oracle. “Through close collaboration with Intel, we are the first to announce servers based on the new Intel Xeon E7-8895 v2 processors and the first with unique capabilities that allow customers to dynamically address different workloads in real time.”

    Specs: the Sun Server X4-8 supports 120 cores (or 240 threads), 6 TB of memory and 9.6 TB of HDD capacity or 3.2 TB of SSD capacity, contains 16 PCIe Gen 3 I/O expansion slots and allows for up to 6.4 TB of Sun Flash Accelerator F80 PCIe cards. It is also the most dense x86 server, with its 5U chassis allowing 60% higher rack-level core and DIMM slot density than the competition.
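
    For a rough feel of that density claim, the per-rack-unit arithmetic below uses the stated chassis figures and assumes a standard 42U rack; the competitive baseline behind the 60% figure is not specified here:

        # Rough rack-density arithmetic for the X4-8 figures above.
        # Assumes a standard 42U rack; the competitive baseline for the 60% claim is not given.
        cores_per_chassis = 120
        chassis_height_u = 5
        rack_height_u = 42

        cores_per_u = cores_per_chassis / chassis_height_u      # 24 cores per rack unit
        chassis_per_rack = rack_height_u // chassis_height_u    # 8 chassis fit in a 42U rack
        print(f"{cores_per_u:.0f} cores/U, {chassis_per_rack * cores_per_chassis} cores per full rack")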
