Data Center Knowledge | News and analysis for the data center industry
 

Thursday, November 20th, 2014

    1:00p
    Iceotope Advances its Liquid Cooling Tech With New Design

    NEW ORLEANS, La. - Cooling specialist Iceotope has introduced a new version of its liquid cooling solution, a design known as PetaGen that supports extreme levels of power density and energy efficiency. The new design, which was introduced at the SC14 supercomputing conference, is the culmination of a five-year journey from its debut to production-ready systems.

    UK-based Iceotope is among a handful of vendors offering liquid cooling solutions for the data center market, seeking to help companies retool their infrastructures to handle the demands of cloud computing and big data crunching. Earlier this year, the company’s technology received a major vote of confidence in the form of a $10 million investment from Aster Capital and Ombu Group, along with a strategic sponsorship with global energy conglomerate Schneider Electric.

    Iceotope and other vendors in the space are fueling a renaissance of sorts for liquid cooling in data centers. The idea of bringing liquid coolant directly to the source of heat in the server took a back seat to air cooling in the 1980s, but it has recently seen growing interest as processors get ever more powerful and data center operators look for ways to reduce spending on power-hungry mechanical cooling systems.

    In Iceotope’s approach, each server motherboard is completely immersed in a sealed bath of liquid coolant, which passively transfers heat away from the electronics. The system is nearly silent and requires no cooling outside the cabinet, which could allow data center operators to eliminate expensive room-level cooling systems.

    Angling for HPC Users

    In unveiling PetaGen at SC14, Iceotope is pitching its offering to end users focused on high performance computing (HPC), where extremely dense compute loads are ideal for non-traditional liquid cooling solutions. PetaGen is available in several form factors, supporting up to 60 kW of IT load housed in 72 blade servers powered by Intel Xeon chips. The company says the design can achieve a partial Power Usage Effectiveness (pPUE) of less than 1.1.
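
    For context, partial PUE compares the total power drawn within a defined boundary (here, the cabinet and its cooling loop) with the power drawn by the IT equipment alone. A back-of-the-envelope reading of the vendor's figures, assuming a fully loaded 60 kW cabinet and a pPUE of exactly 1.1, looks like this:

        pPUE = (P_IT + P_cooling) / P_IT
        1.1  = (60 kW + P_cooling) / 60 kW   =>   P_cooling = 6 kW

    In other words, the claim amounts to roughly 6 kW or less of cooling overhead inside the PetaGen boundary for a fully loaded cabinet.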

    Iceotope founder and Chief Visionary Officer Peter Hopton says PetaGen is designed to cool the high-end processors that are increasingly being used for big data and cloud workloads.

    “Some of these top-end chips are, for lack of a better term, un-coolable for other liquid cooling systems,” said Hopton. “PetaGen was designed to eliminate waste from IT and our new system has the capacity to help solve some serious problems caused by and facing the digital economy today.”

    An Opportunity in Heat Recycling

    Iceotope’s design uses a heat exchanger that transfers heat from the cooling liquid (3M’s Novec) to a water-filled cooling loop, which can operate with water as warm as 113 degrees F (45 degrees C). After the heat exchange, this water is hot enough to be useful in district heating systems that can warm greenhouses, buildings and even homes. The company sees this as a better option than the cooling towers used in air-cooled systems, which can require large volumes of water.
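
    To give a rough sense of scale (illustrative numbers, not Iceotope's specifications): if a fully loaded 60 kW cabinet rejects essentially all of its heat into the water loop and the loop is allowed a 10-degree Celsius temperature rise, the required water flow follows from the usual heat-transfer relation:

        Q = m_dot * c_p * delta_T
        m_dot = 60,000 W / (4,186 J/(kg*K) * 10 K) ≈ 1.4 kg/s, or roughly 86 liters per minute

    That modest flow of 45-degree water is what a district heating loop would draw on, rather than rejecting the heat through a cooling tower.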

    “Most people don’t realize that modern IT use, along with today’s ‘always-on, always-connected’ culture, is hugely damaging in terms of its environmental impact,” said Hopton. “We need to re-evaluate how we use IT now and how we design, build and operate the data center.”

    Liquid cooling allows users to save on space and energy and to support higher densities. But it also tends to have higher up-front costs than traditional air cooling and has seen only light adoption in traditional data centers, where new technologies gain traction slowly. The HPC sector, however, lives at the cutting edge of compute loads and is pushing the limits of density and cooling capacity.

    The first generation of Iceotope’s technology was installed at the University of Leeds in early 2012 and at the Poznan Supercomputing and Networking Centre in Poland.

    1:00p
    Facebook’s Open Source Virtual Machine HHVM Stabilized

    Open source virtual machine project HHVM (HipHop Virtual Machine) has made a breakthrough. Facebook and WP Engine, which provides a WordPress-based content management platform, have enabled HHVM and PHP to run side by side, making HHVM more feasible for production. While the news will be of interest to developers, HHVM’s maturity is something the industry at large should take into account.

    Facebook, a company that continues to innovate up and down the stack from data center to application, originally created the open source project. HHVM is all about speeding up PHP execution. PHP is the foundation of Facebook and is heavily used across the Internet and within enterprises.

    HHVM itself isn’t widely deployed or even widely known. It is extremely fast, but it has not been considered production-ready because it sometimes requires restarting. For the first time, PHP and HHVM can run side by side on the same server, which improves speed, performance, and stability in production environments.

    WP Engine’s latest release, called Mercury, leverages the new capability.

    “HHVM is extremely fast running WordPress,” said Tomas Puig, head of labs at WP Engine. “Getting it ready and stable is truly an accomplishment.”

    WP Engine has over 23,000 customers, ranging from startups to Fortune 500 companies. Modern websites with dynamic content running on HHVM can see average response times improve by 560 percent, the company said.

    Mercury comes out of Alpha Labs, a team at WP Engine dedicated to technical invention both inside and outside the company.

    With Mercury, Puig said, WP Engine has achieved true data center redundancy using HHVM. The larger implication of running HHVM and PHP on the same server is that HHVM is now ready and stable enough for wider adoption.

    Early on in the HHVM project, the system would sometimes require rebooting, which led to the belief that it wasn’t stable enough for production environments. In Mercury, PHP kicks in instantly to handle the load whenever HHVM requires resetting.

    Puig said the Facebook team has been very responsive with the open source project. “We built the system in prototype and met up with them,” he said. “We felt we should be getting more performance, so they helped us tune the system for speed. They’ll close bugs. The Facebook team for open source is excellent.”

    “The enhanced speed and performance by HHVM in production environments is an obvious boon for PHP developers and the WP Engine Labs team has done an impressive job in democratizing HHVM for the open-source community,” said Paul Tarjan, head of Open Source for HHVM at Facebook.

    HHVM is designed to execute programs written in Hack and PHP, using just-in-time compilation to run PHP code significantly faster than the standard PHP 5.5 runtime.

    The HHVM team’s goal is to be able to run all existing PHP code out in the wild. It can already run the top 20 GitHub PHP frameworks out of the box.

    2:00p
    Latisys Signs First Tenant for Chicago Data Center Expansion

    Latisys has signed the first customer for its upcoming Chicago data center expansion. NORC at the University of Chicago has signed an early occupancy agreement. NORC is a large independent social science research organization.

    NORC is making the jump from operating its own data centers to Latisys, consolidating two existing on-premises data centers. The organization is moving its entire compute infrastructure, including its Secure Data Enclave, a statistical processing environment that enables customers to run big data applications and solutions.

    Latisys announced the 3.6-megawatt, 25,000-square-foot Oak Brook expansion in October. The 146,000-square-foot Chicago data center campus serves as the company’s Midwest hub.

    Technology is changing, and NORC is changing in response, according to its CIO, Ron Jurek. It’s not necessarily about big data but about diverse data: more smart devices, more sensors, and billions of connected devices are leading to an explosion of data.

    NORC’s business is growing alongside that explosion of data streams, since its purpose is to help clients make informed decisions based on the data and determine what is most relevant. Latisys provides room for that growth.

    Given the sensitive nature of its data and the recent security breaches at big-box retailers, security played an important role in the decision.

    “We did an analysis of a number of organizations,” said Jurek. “It ran the gamut, from mature to up and coming facilities. Security was definitely one of the drivers that made us look to Latisys. We also looked at services provided, things that might relate to our organization.”

    There continues to be a myth that multi-tenant data centers are not right for the most security-conscious organizations, when in actuality moving to one is often a major upgrade.

    Jurek said the business is growing and needs to expand rapidly. “Safety, flexibility and reliability, not having to worry about the environmental and facilities, all were part of the decision.”

    It’s also an example of a downtown Chicago business that chose the surrounding suburbs instead of the city proper. The suburbs were once used primarily for disaster recovery, but a tight supply downtown has started forcing production environments outside the city.

    “We did a comprehensive look within the city, suburbs, and we looked for companies with facilities outside of Chicagoland area,” said Jurek. “Disasters don’t call you up, so we looked at organizations with redundancy. Locations within the city were just a little too close to our business operations.”

    Still, Chicago is a primary data center market with lots of capacity under construction, particularly downtown. The Chicago market is seeing a pickup in development action, driven mainly by limited space at the city’s dominant carrier hotel, 300 East Cermak. QTS Realty, CenterPoint, McHugh Construction, and Ascent Corp. are all trying to develop data center properties in downtown Chicago. Industry veteran Hunter Newby and Amerimar recently acquired a data center hub at 717 South Wells.

    4:00p
    Photo Tour: New Facebook Data Center in Iowa

    Earlier this month, Facebook announced the launch of its newest massive data center in Altoona, Iowa, its third company-owned data center site in the U.S. and fourth globally.

    Companies like Facebook, Google, Microsoft, and Yahoo design and build what are known as “web-scale” data centers. These are some of the most cutting-edge facilities, both in terms of the IT architecture they house and the energy efficiency of the electrical and mechanical systems that support the hardware. They are also visually stunning.

    The company has shared some photos of the new facility, the first one to use a whole new network architecture its engineers designed to run its applications. Here is a look at the new Facebook data center:

    Entrance to Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    Facebook’s Altoona data center, building two. (Photo: © 2014 Jacob Sharp Photography)

    Building distribution frame room at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    Building distribution frame room at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    Building distribution frame room at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    A technician at work in a data hall at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    A look down a cold aisle at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    Entrance to a hot aisle in a data hall at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    Hot aisles in a data hall at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    Data hall at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    A technician at work in a data hall at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    A technician at work in a data hall at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    Cafe at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    The game room at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    Front lobby at Facebook’s Altoona data center. (Photo: © 2014 Jacob Sharp Photography)

    4:30p
    Six Crucial Attributes of a High-Performance In-Memory Architecture

    Fabien Sanglier, Principal Architect, Software AG Government Solutions, a leading software provider for the federal government.

    The drop in memory prices continues to increase the popularity of in-memory computing technology. But while local memory is very fast, it can also be volatile. If not architected properly, a scaled-out application’s in-memory data can easily become inconsistent.

    The move from disk-based to memory-based data architectures requires a robust in-memory data management architecture that delivers high-speed, low-latency access to terabytes of data, while maintaining capabilities previously provided by “the disk,” such as data consistency, durability, high availability, fault tolerance, monitoring and management.

    Here are six of the most important concerns to address when evaluating in-memory data management solutions.

    From Disk-Based to Memory-Based: Six Areas of Consideration

    Predictable, Extremely Low Latency. Working with data in machine memory is orders of magnitude faster than moving it over a network or getting it from a disk. This speed advantage is critical for real-time data processing at the scale of big data. However, Java garbage collection is an Achilles’ heel when it comes to using large amounts of in-memory data. While terabytes of RAM are available on today’s commodity servers, it’s important to keep in mind that Java applications can only use a few gigabytes of that before long, unpredictable garbage collection pauses cause application slowdowns.

    Look for in-memory management solutions that can manage terabytes of data without suffering from garbage collection pauses.
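
    One common way around the garbage collection ceiling is to keep the bulk of the data off the Java heap entirely, so the collector never has to scan it. The sketch below is a deliberately minimal illustration of that idea using a direct ByteBuffer; the class and method names are invented for the example, and real products shard across many buffers and handle eviction, resizing and fragmentation, none of which is shown here.

        import java.nio.ByteBuffer;
        import java.util.HashMap;
        import java.util.Map;

        // Minimal sketch of an off-heap value store. Values live in a single
        // direct ByteBuffer allocated outside the garbage-collected heap, so the
        // collector only ever scans the small on-heap index, not the data itself.
        public class OffHeapStore {
            private final ByteBuffer arena;                           // off-heap memory region
            private final Map<String, int[]> index = new HashMap<>(); // key -> {offset, length}

            public OffHeapStore(int capacityBytes) {
                this.arena = ByteBuffer.allocateDirect(capacityBytes);
            }

            public void put(String key, byte[] value) {
                int offset = arena.position();
                arena.put(value);                                     // copy the bytes off-heap
                index.put(key, new int[] { offset, value.length });
            }

            public byte[] get(String key) {
                int[] loc = index.get(key);
                if (loc == null) {
                    return null;
                }
                byte[] out = new byte[loc[1]];
                ByteBuffer view = arena.duplicate();                  // same memory, independent position
                view.position(loc[0]);
                view.get(out);                                        // copy the bytes back on-heap
                return out;
            }
        }

    The point is simply that the index (small and on-heap) is all the garbage collector sees, while the gigabytes of values live in memory it ignores.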

    Easy Scaling with Minimal Server Footprint. Scaling to terabytes of in-memory data should be easy and shouldn’t require the cost and complexity of dozens of servers and hundreds of Virtual Machines. Your in-memory management solution should be able to scale up as much as possible on each machine so that you’re not saddled with managing and monitoring a 100-node data grid. By fully utilizing the RAM on each server, you can dramatically reduce not only hardware costs but also personnel costs associated with monitoring large server networks.

    Fault Tolerance and High Availability. Mission-critical applications demand fault tolerance and high availability. The volatile nature of in-memory data requires a data management solution that delivers five nines (99.999 percent) uptime with no data loss and no single points of failure.

    Distributed In-Memory Stores with Data Consistency Guarantees. With the rise of in-memory data management as a crucial piece of big data architectures, organizations increasingly rely on having tens of terabytes of data accessible for real-time, mission-critical decisions. Multiple applications (and instances of those applications) will need to tap in-memory stores that are distributed across multiple servers. Thus, in-memory architectures must ensure the consistency and durability of critical data across that array. Ideally, you’ll have flexibility in choosing the appropriate level of consistency guarantees, from eventual and strong consistency up to transactional consistency.
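
    As a purely hypothetical sketch of what that flexibility might look like in application code (the enum, class and field names are invented for illustration and are not any vendor’s API), a store could let each data set declare the guarantee it needs:

        // Hypothetical per-dataset consistency configuration.
        public class ConsistencyConfigSketch {

            enum Consistency { EVENTUAL, STRONG, TRANSACTIONAL }

            static final class DataSetConfig {
                final String name;
                final Consistency consistency;

                DataSetConfig(String name, Consistency consistency) {
                    this.name = name;
                    this.consistency = consistency;
                }
            }

            public static void main(String[] args) {
                // Reference data can usually tolerate eventual consistency;
                // financial state typically cannot.
                DataSetConfig catalog  = new DataSetConfig("product-catalog", Consistency.EVENTUAL);
                DataSetConfig balances = new DataSetConfig("account-balances", Consistency.TRANSACTIONAL);

                System.out.println(catalog.name + " -> " + catalog.consistency);
                System.out.println(balances.name + " -> " + balances.consistency);
            }
        }

    The design choice the sketch gestures at is paying the latency cost of stronger guarantees only where the data actually demands it.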

    Fast Restartability. In-memory architectures must allow machines to be brought back online quickly after maintenance or other outages. Systems designed to back up and restore only a few gigabytes of in-memory data often exhibit pathological behavior around startup, backup, and restore as data sizes grow much larger. In particular, recreating a terabyte-sized in-memory store can take days if fast restartability is not a tested feature. Hundreds of terabytes? Make that weeks.

    Advanced In-Memory Monitoring and Management Tools. In dynamic, large-scale application deployments, visibility and management capabilities are critical to optimizing performance and reacting to changing conditions. Control over where critical data is and how it is accessed by application instances gives operators the edge they need to anticipate and respond to significant events like load spikes, I/O bottlenecks or network and hardware failures before they become problems. Your in-memory architecture should be supplemented with a clear dashboard for understanding up-to-the-millisecond performance of in-memory stores, along with easy-to-use tools for configuring in-memory data sets.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:29p
    IBM Rolls Out Dedicated Version of Bluemix PaaS

    IBM has announced a dedicated version of Bluemix, its Platform-as-a-Service offering based on Cloud Foundry, the open source PaaS led by Pivotal. Bluemix Dedicated is a single-tenant version of the developer service, hosted on dedicated hardware in a SoftLayer data center.

    The benefits are greater control and security in cloud-based application development. It will appeal to those who are especially conscious of security and compliance. A new private API catalog has also been introduced for connecting to on-premises data securely.

    IBM is courting enterprise developers with cloud needs here. A dedicated version means customers can now work with more sensitive data on the Bluemix platform safely. It will appeal to enterprises making cloud transitions and to those with hybrid cloud aspirations. Gartner has said that nearly half of enterprises will use a combination of public and private cloud by the end of 2017.

    Until recently, the Cloud Foundry project was overseen solely by Pivotal, an EMC-controlled software company. In February, however, Pivotal created a non-profit foundation to drive the project and invited other companies, such as IBM, Rackspace, and VMware (also an EMC subsidiary), to sponsor and steer it.

    Cloud Foundry has enjoyed widespread support from the industry, with many heralding it as the OpenStack, or sometimes even the Linux, of PaaS.

    The private API catalog also speaks to security needs. The APIs allow developers to securely and seamlessly connect data from backend systems of record to systems of engagement like mobile and social applications.

    IBM has invested a lot in Bluemix since it entered general availability earlier this year. It recently added several capabilities, including Watson APIs, allowing developers to tap into cognitive computing to turn apps into intelligent mini Watsons. It added Internet-of-Things functionality as well, a single service that enables someone to connect a device to the Internet, have it generate data, store that data and present it through an application the user has built on the Bluemix platform.

    Behind Bluemix is the SoftLayer infrastructure, which has itself seen $1.2 billion in investment and several data center openings this year.

    In a statement, Steve Robinson, general manager for IBM Cloud Platform Services, said, “With Bluemix Dedicated now available in our global cloud center network, IBM is adding another on-ramp to the cloud for developers to move quickly and innovate but do so in a model that maintains the necessary levels of security and control.”

    The offering has a built-in private network and will initially offer runtime capability along with a core set of services. The capabilities will expand going forward. The initial services are Cloudant’s Database-as-a-Service, Data Caching to improve the speed and responsiveness of apps, and runtimes so developers can run apps in the coding language of their choice.

    The company’s Bluemix Garage, a network of tech hubs for working with Bluemix, has also expanded to Canary Wharf Group’s Level39 in the U.K., a large European accelerator space for financial, retail, and future-cities tech companies. The Bluemix catalog is expanding into the London data center as well.

    IBM’s approach to cloud has been to tap into its other complementary services and go above and beyond providing raw resources. It has a sizable footing in the enterprise and expanded greatly when it acquired SoftLayer, which formed the cornerstone of its cloud. Even though it’s based on open source technology, Bluemix is a unique PaaS because it gives access to Watson and IoT capabilities.

    Despite a big focus on enterprise cloud, IBM is also courting startups with developer-friendly features and a recent cloud credit and mentorship program.

    6:30p
    Controlling Big Data and the Cloud – A Look at Object Storage and HP

    Explosive data growth, expansion of big data and unstructured data, and the pervasiveness of mobile devices continually pressure traditional file and block storage architectures. Businesses are exploring emerging storage architectures in hopes of providing cost-effective storage solutions that keep up with capacity growth, while also providing service-level agreements (SLAs) to meet business and customer requirements.

    But what kind of system can handle this much data? What type of server, storage, and hardware platform can control and manage open-source data and cloud systems like Ceph and OpenStack?

    Object storage software solutions are designed to run on industry-standard server platforms, offering lower infrastructure costs and scalability beyond the capacity points of typical file server storage subsystems. In this whitepaper, you’ll learn how HP ProLiant servers create a comprehensive and cost-effective object storage solution to address an organization’s next-generation scale-out storage and business needs.

    What are the benefits?

    • Supports petabyte scale and billions of objects
    • Lowers upfront solution investment and total cost of ownership (TCO) per gigabyte
    • Provides enterprise-class infrastructure monitoring and management
    • Does not require cluster software licensing as the cluster is scaled
    • Uses open source software, minimizing vendor lock-in concerns and increasing flexibility of hardware and software choice
    • Integrates directly with open source technologies like Ceph and OpenStack

    Download this whitepaper today to learn how the combination of HP, OpenStack, Ceph and object storage creates a powerful model capable of data elasticity, information control and direct cloud integration. Most of all, learn how to create an architecture that enables a next-generation storage solution, freeing your storage environment from traditional limitations.

    7:35p
    Dept. of Energy Gives AMD $32M Grant to Push Exascale Computing Forward

    AMD announced that it was awarded more than $32 million in research grants for exascale computing research projects associated with the U.S. Department of Energy’s FastForward 2 program. Jointly funded by the DOE and the National Nuclear Security Administration, the FastForward 2 grants will fund research that targets exascale applications for AMD Accelerated Processing Units based on the open standard Heterogeneous System Architecture.

    The FastForward 2 funds were announced as part of a larger $425 million allotment that U.S. Secretary of Energy Ernest Moniz unveiled to help bolster U.S. efforts in exascale computing.

    This is the third time the DOE has awarded AMD an exascale computing research grant. Another recipient is NVIDIA. Intel has received such funding in the past as well.

    IBM recently secured a $300 million DOE contract for two new supercomputers using its OpenPOWER processor architecture. The department has 14 supercomputers on the latest release of the Top500 list of fastest computing systems in the world, including one in the second spot: Titan — a Cray XK7 with AMD Opteron 6274 processors.

    AMD says it will conduct research for an integrated exascale node architecture based on its HSA-enabled APUs. Through collaborative efforts it will also help define a new standard for memory interfaces that meets the needs of future-generation memory devices, including non-volatile memory and processing-in-memory (PIM) architectures.

    8:32p
    Microsoft Says Config. Change Caused Azure Outage

    The Microsoft Azure team has published a post-mortem on the widespread outage the cloud suffered Wednesday, which affected about 20 services in most availability zones around the world.

    A configuration change meant to make Blob storage (Azure’s cloud storage service for unstructured data) perform better unexpectedly sent Blob front ends “into an infinite loop,” Jason Zander, a vice president of the Microsoft Azure team, wrote in a blog post. The front ends could not take on more traffic and caused problems for other services that used Blob.

    The last major Azure outage happened in August, when services were interrupted completely in multiple regions, and issues persisted for more than 12 hours. Multiple regions went down earlier that month as well.

    Problems during configuration changes are often cited as causes of cloud outages. While companies test updates extensively before rolling them out globally, updates sometimes behave unpredictably when deployed at scale, as the Azure outage illustrated.

    In September, Facebook went down briefly after its engineers applied an infrastructure configuration change.

    As a result of the Blob bug, Azure services were interrupted in six availability zones in the U.S., two in Europe, and four in Asia. Affected services included everything from Azure Storage to virtual machines, backup, machine learning, and HDInsight, Microsoft’s cloud-based Hadoop distribution.

    The Azure team’s standard protocol dictates that changes are applied in small batches, but this update was made quickly across most regions, which Zander described as an “operational error.”
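
    The “small batches” discipline Zander refers to is simple to state, even if it is hard to enforce in tooling. The sketch below is an illustrative outline of the idea, not Microsoft’s actual deployment system; the interfaces are invented for the example. A change is applied to one batch of clusters at a time, health is verified, and the rollout halts before touching the rest of the fleet if anything regresses.

        import java.util.List;

        // Illustrative outline of batched ("flighted") deployment: apply a change
        // to one small batch of clusters at a time, check health, and stop before
        // exposing the rest of the fleet if a regression appears.
        public class StagedRollout {

            interface Cluster {
                void apply(String change);
                boolean healthy();
            }

            // Returns true only if the change reached every batch without a regression.
            static boolean rollOut(String change, List<List<Cluster>> batches) {
                for (List<Cluster> batch : batches) {
                    for (Cluster cluster : batch) {
                        cluster.apply(change);
                    }
                    for (Cluster cluster : batch) {
                        if (!cluster.healthy()) {
                            return false;          // halt: only this batch was exposed
                        }
                    }
                }
                return true;
            }
        }

    The operational error described in the post-mortem amounts to skipping the per-batch health gate and pushing the change across most regions at once.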

    The bug showed itself during an upgrade that was supposed to improve performance of the storage service. The team had been testing the upgrade on some production clusters in the weeks running up to the mass deployment, but the issue did not surface during testing, Zander wrote.

    Once the issue was discovered, the team rolled the update back, but storage frontends needed to be restarted. Performance issues persisted for close to 11 hours on Wednesday.

    Because the outage had such a wide-ranging impact on the Azure infrastructure, the team could not communicate with its users through the cloud’s Service Health Dashboard, resorting to Twitter and other social media instead.

    Zander outlined four steps his team would take to prevent a repeat occurrence of such incidents:

    • Ensure that the deployment tools enforce the standard protocol of applying production changes in incremental batches.
    • Improve the recovery methods to minimize the time to recovery.
    • Fix the infinite loop bug in the CPU reduction improvement from the Blob Front-Ends before it is rolled out into production.
    • Improve Service Health Dashboard Infrastructure and protocols.

    9:00p
    Verizon Launches Marketplace as One-Stop Cloud Shop for Enterprises


    This article originally appeared at The WHIR

    Verizon has announced its globally available Cloud Marketplace this week to provide a one-stop shop for enterprises on Verizon Cloud. Verizon Cloud Marketplace will speed innovation cycles with easy, predictable cloud service and application deployments, the company says.

    Initially it will feature pre-built cloud-based services from AppDynamics, Hitachi Data Systems, Juniper Networks, pfSense and Tervela. Verizon will also sell a range of onboarding and consulting services to help customers effectively leverage the cloud.

    “Verizon Cloud Marketplace is all about simplifying and streamlining migration to the cloud, and enterprises using Verizon Cloud will now have access to a growing number of industry-leading cloud-based applications required to power their businesses in the digital age,” said Siki Giunta, senior vice president of cloud services, Verizon Enterprise Solutions.

    The marketplace is available now for public cloud and virtual private cloud reserved-performance deployments, with no-cost and bring-your-own-license pricing models at launch and metered billing options to follow in 2015. Verizon is promising support featuring smooth handoffs to vendor teams for product-specific questions.

    New third-party virtual appliance and SaaS applications as well as a range of features will be rolled out in the coming months.

    Verizon also launched a revamped IaaS platform based on CloudStack and a modified Xen hypervisor a month ago. Giunta told Business Cloud News that the platform is targeted at developers, who will also presumably be the main customers of the new Cloud Marketplace.

    Verizon also launched Secure Cloud Interconnect in April to connect enterprise clouds from different services.

    Cloud marketplaces are becoming a staple of customer-facing service providers. Ingram Micro brought its marketplace to Mexico this week, and Dell launched its marketplace to public beta earlier in November. IBM joined the long list of providers with a cloud marketplace in April.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/verizon-launches-global-cloud-marketplace

     

    9:30p
    China Ramps Up Censorship as it Hosts World Internet Conference


    This article originally appeared at The WHIR

    China is taking down its firewall in just one city for three days while it hosts the World Internet Conference. Unfortunately, this is temporary, and comes at a time when China seems to otherwise be increasing censorship.

    On Tuesday, the Guardian reported that Chinese Internet censorship watchdog Greatfire.org was being targeted, which may be the reason that Verizon’s EdgeCast is now blocked.

    On Monday, Verizon wrote in a blog post that it is being targeted by China. “We have been hearing from our CDN and Monitoring partners throughout the industry and our own customers that more sites, CDNs and networks are being filtered or blocked by the Great Firewall of China,” Verizon said in the blog post. “This week we’ve seen the filtering escalate with an increasing number of popular web properties impacted and even one of our many domains being partially blocked…with no rhyme or reason as to why.”

    To access sites that would otherwise be unavailable due to censorship, Greatfire.org used Edgecast to host “mirror sites” that redirect users. “This was a deliberate attack on the websites that we have mirrored,” Charlie Smith, the pseudonym of a Greatfire.org co-founder, told the Guardian via email.

    With 630 million Internet users and growing mobile Internet subscriptions and mobile ecommerce across the APAC region, China is hardly a market service providers can ignore. Comments made at the conference by Chinese regulators seem to support the notion that censorship may actually increase as they strive to expand business opportunities on the Internet (such as the success of Alibaba). The Wall Street Journal reported that “Chinese Internet regulators and executives are using [the conference] as a platform to assert ascendancy of Internet service that is carefully filtered, highly advanced and hugely profitable.”

    The firewall was also lifted earlier this month for the Asia-Pacific Economic Cooperation summit. Internet access during these conferences allows access to sites such as Facebook and Twitter, which are normally blocked.

    The irony of holding a world Internet conference in a country that censors the Internet is not lost on observers. “Surely, officials must recognize that it would be absurd for something called the World Internet Conference to have online content restrictions imposed by one country,” Duncan Clark, chairman of BDA China, a Beijing-based consultant to technology companies, told Bloomberg. “China wants to establish a ‘great power’ relationship with the US on Internet governance, and Beijing will be increasingly vocal in attempting to shape global development.”

    For a country that wants to be involved in Internet governance, China has done a lot besides censorship to damage its image. It is suspected in several recent hacks of the US State Department, the United States Postal Service, and banking institutions. However, Beijing did just make three arrests in the recent WireLurker malware attacks.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/china-ramps-censorship-hosts-world-internet-conference

    11:51p
    Rackspace Chairman Buys $2.5M of Company Stock

    Rackspace Chairman Graham Weston has bought $2.5 million worth of the cloud and managed services company’s stock, saying the move illustrates his confidence in the company’s success.

    “My willingness to invest in Rackspace expresses my belief in the company’s future,” Weston said in a statement. “I believe we can be the trusted partner to the rising wave of businesses who need help managing their cloud.”

    Weston, one of the company’s founders, held 13.2 percent of outstanding Rackspace stock prior to the purchase, which closed Tuesday. He bought the additional stock on the open market at about $43 per share on average – a total of 58,480 shares.

    The price was close to the high end of the stock’s range that day, which was also the worst day for the company’s stock this week. Rackspace shares continued to climb Wednesday and Thursday.

    Weston also filed a plan to spend another $2.5 million on the company’s stock over the next year, “as part of his individual investment strategy,” according to a company statement.

    The company made the announcement after markets closed on Thursday, and its stock was up 1.2 points (2.71 percent) in after-hours trading.

    It reported solid third-quarter results earlier this month. Sales were up 18 percent year over year (Q3 revenue was $460 million), and earnings per share were $0.18 – up from $0.11 reported for the third quarter of 2013.

    Windcrest, Texas-based Rackspace started as a hosting company and gradually shifted its focus to being an Infrastructure-as-a-Service cloud provider. As more and more companies joined the IaaS market, including giants Microsoft and Google, Rackspace shifted focus once again, blending its reputation for “fanatical support” with cloud services and billing itself as a “managed cloud” provider. It is now going after customers who want cloud but also need a lot of assistance in managing their cloud infrastructure and applications.

    Rackspace has always stood out among cloud and hosting providers by being extremely hands-on with its customers and by loudly advertising its focus on support. It has also been at the forefront of several important technological advancements in cloud in recent years.

    The company was heavily involved in the creation of OpenStack, the open source cloud architecture that is now enjoying widespread industry support and deployment. Rackspace was the first to build a commercial public cloud service using OpenStack.

    The company was also one of the early adopters of hardware built using open source specs developed by Facebook for its own use and released into the public domain through the Open Compute Project. Rackspace’s OpenStack cloud runs on customized Open Compute servers.

