Data Center Knowledge | News and analysis for the data center industry
 

Monday, October 6th, 2014

    12:00p
    Algolia Gets Closer to California Users

    Algolia gives a website its own Google-like search engine, with millisecond-speed responses and search-as-you-type capabilities.

    It has a lot of competitors, including giants like Microsoft Azure and Amazon Web Services, as well as a multitude of startups. There is also the open source Elasticsearch project. Competition is tough, and after the quality of results, speed is perhaps the most important factor that makes a search product stand out.

    And speed is what brought Algolia, which has offices in Paris and San Francisco, to San Jose, California, where it recently took some space at Equinix’s SV3 data center through the hosting company LeaseWeb. This is the startup’s first data center on the West Coast. Its servers also live on the East Coast, as well as in Europe and Asia.

    “We have an important user base [in California] and we want to reduce latency of search as much as possible,” Julien Lemoine, Algolia CTO and co-founder, explained.

    Algolia’s entire hardware and software stack is optimized to keep that response time within a few milliseconds, and once you optimize that, the only way to make it even faster is to get physically closer to the users.
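
    The physics backs that up: once the engine itself answers in a few milliseconds, propagation delay over fiber dominates what a distant user experiences. A rough Python sketch, where the distances and the two-thirds-of-c fiber factor are illustrative assumptions rather than Algolia’s measurements:

        # Rough propagation-delay estimate: why serving Bay Area users from
        # San Jose beats serving them from a US-East site once the software
        # stack is already fast. Distances and fiber factor are assumptions.
        SPEED_OF_LIGHT_KM_S = 300_000   # vacuum, km/s
        FIBER_FACTOR = 2 / 3            # light travels at roughly 2/3 c in fiber

        def round_trip_ms(distance_km: float) -> float:
            """Best-case round trip over fiber, ignoring routing and queuing."""
            one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
            return 2 * one_way_s * 1000

        print("San Jose to a Bay Area user (~50 km):   %.2f ms" % round_trip_ms(50))
        print("US-East to a Bay Area user (~4,000 km): %.2f ms" % round_trip_ms(4000))

    Under those assumptions the cross-country round trip alone is roughly 40 milliseconds, an order of magnitude more than the engine's own response time.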

    LeaseWeb is also hosting the company’s infrastructure on the East Coast and in Asia. In Europe, Algolia is using OVH. It is currently using only half a rack in San Jose, but scales weekly, according to Lemoine. “We already have several racks in US-East,” he wrote in an email.

    The company has an interesting way of scaling its infrastructure. It never adds capacity in just one location, always deploying servers in sets of three across three different availability zones. The three servers are perfect clones of each other.

    One Algolia bare-metal host server has:

    • Intel Xeon E5-2643v2
    • 128GB of RAM
    • Three high-endurance 400GB SSDs (1.2TB per host)
    • 1Gbps connectivity in and out

    Algolia developed its own protocol that replicates data across the hosts and keeps them synchronized using the Raft consensus algorithm. Raft is also used by etcd, a key-value store for keeping servers in a cluster synchronized. Built by CoreOS, a hot San Francisco web-scale infrastructure startup, open source etcd is part of Google’s Kubernetes (its open source Docker container management software) and Pivotal’s open source Platform-as-a-Service system Cloud Foundry.
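
    Algolia has not published the protocol itself, but the consistency model described is the one Raft-backed stores such as etcd expose. A minimal, hypothetical sketch using the community python-etcd3 client against a three-node cluster (endpoints and key names are placeholders, not Algolia’s code):

        # Minimal sketch of Raft-backed coordination using etcd, the CoreOS
        # project mentioned above, via the community python-etcd3 client.
        # Endpoints and key names are placeholders; not Algolia's protocol.
        import etcd3

        # Connect to one member of a three-node etcd cluster; etcd's Raft log
        # replicates every write to the other members.
        client = etcd3.client(host="10.0.0.1", port=2379)

        # The write is acknowledged only after a majority (2 of 3) commits it.
        client.put("/cluster/index-version", "2014-10-06T12:00:00Z")

        # Reading from any member returns the same committed value.
        value, metadata = client.get("/cluster/index-version")
        print(value.decode())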

    All customer data is stored in memory and replicated across all three hosts, but there is enough SSD capacity reserved for each customer to accommodate 10 times the size of their data for re-indexing.
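
    Read at face value, that 10x re-indexing reserve implies a rough ceiling on how much indexed customer data a single host can carry. A quick calculation from the hardware figures above (an illustration, not an Algolia specification):

        # Back-of-the-envelope bound implied by the 10x re-indexing reserve,
        # using the per-host hardware quoted above and ignoring overhead.
        ssd_capacity_gb = 3 * 400    # three 400GB SSDs per host
        reindex_factor = 10          # reserve 10x each customer's data size

        ceiling_gb = ssd_capacity_gb / reindex_factor
        print(f"SSD per host: {ssd_capacity_gb} GB")
        print(f"Rough ceiling on indexed data per host: {ceiling_gb:.0f} GB")

    That works out to roughly 120GB per host, which lines up with the 128GB of RAM, since customer data is held in memory.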

    Algolia’s secret sauce, its search engine, sits on an nginx (pronounced “engine-x”) web server, running as a C++ module. “All the backend technology was developed from the ground up with performance in mind,” Lemoine wrote.

    3:30p
    Rebooting Deduplication in Your Next-Generation Data Center

    Casey Burns, product marketing manager at Quantum, has extensive experience in the storage industry with a professional focus in data deduplication, virtualization and cloud.

    The data center is evolving as IT departments grapple with virtualization, cloud, and the shift from traditional data protection to emerging policy-based approaches to storage. These trends are converging to make a complex mess of data management, despite their promise to streamline operations, scale IT resources and lead businesses to smarter, data-driven decision making.

    As data centers have become more dynamic and difficult to manage, and as data workflows have become less linear, companies are facing a new reality: their deduplication strategies need a reboot. That includes rethinking what to expect from a purpose-built backup appliance.

    Trends impacting deduplication and the new data center

    A number of key business trends are driving the need to reboot deduplication, but there are three that stand out. First, businesses are increasingly tied to a variety of cloud architectures and need to back up to these new environments effectively. Second, IT departments must now cope with even greater volumes of unstructured data (and the associated metadata) and need to scale backup resources accordingly, while separating data suitable for dedupe from data that isn't. Third, effectively prioritizing historical data for agile storage and retrieval has become increasingly critical to daily operations.

    Together these trends have organizations coming to depend upon purpose-built storage to handle the complex nature of their industry-specific business challenges, including the workflows that govern the movement of their data. Some may take deduplication for granted, but it makes things like disaster recovery viable for even modest implementations. The value of deduplication will continue to grow in next-generation data centers, and the solutions offering the lowest OPEX will have the greatest chance for success.

    Deduplication workflow considerations

    There are a number of considerations that determine how deduplication should fit into an organization’s modern data center and workflows. However, there is no silver-bullet technology to rein in data center complexity. The type of data, its content, and the frequency of access required all need to be evaluated in order to find the best deduplication solution.

    Virtual machines (VMs), for example, force many backup applications to work within more dynamic, virtualized workflows that they are ill-equipped to handle. This data type must be managed differently from traditional data.

    Handling unstructured data growth is also becoming a vital part of any data protection strategy, whether it’s archiving video content for future re-monetization, offloading static data from primary storage, or building a private cloud infrastructure. Due to the scale and access requirements of storing this data, traditional backup simply won’t work.

    Tiered storage technology, paired with deduplication, can help organizations align the value of their data with appropriate storage costs by applying the right technology at the right point in time. Taken a step further, many organizations, responding to data center complexity, are now turning to a proactive data management model based on tiered storage, including backup tiers and active archive tiers that encompass smart data movement fitting their unique workflows.

    Five key qualities to look for in deduplication

    With these considerations in mind, here are the five qualities to look for in a modern deduplication solution.

    1. Purpose-Built Backup Appliances: Next-generation data centers have a level of complexity that demands deduplication appliances purpose-built for the task. Designed to work with a full range of backup applications, they are typically easy to install, offer the highest performance, and serve as a disk target for backup servers. Gartner predicts that by 2018, 50 percent of applications with high change rates will be backed up directly to deduplication target appliances, bypassing the backup server, up from 10 percent today*.
    2. Variable-Length vs. Fixed-Block Deduplication: Deduplication is not a pedestrian exercise. Software solutions have typically adopted a fixed-block approach because it is the least compute-intensive, but it generally doesn’t provide the maximum amount of data reduction. Variable-length deduplication is more resource-intensive, but it minimizes disk storage as data grows and is the most efficient data reduction available, providing maximum disk storage density. Organizations that are a good fit for variable-length deduplication over a fixed-block approach include companies experiencing fast data growth, remote offices and virtualized environments. Variable-length deduplication can also cut network traffic, which is key for replication and disaster recovery (a toy comparison of the two chunking approaches follows this list).
    3. Scalability: Pay-as-you-grow scalability provides simple, predictable and easy-to-install storage capacity. This allows users to increase capacity by simply entering a license key, with no other on-site installation needed. Physical and virtual appliances are now available that scale from 1TB to over 500TB using a capacity-on-demand approach, allowing customers to add capacity with minimal or no downtime. The benefit of this approach is that users can avoid overprovisioning their storage and purchasing more capacity than they need.
    4. Monitoring and Reporting: Deduplication is hard to do well without the right management tools. Proactive monitoring and reporting of deduplication functions is often overlooked, but it enables precise business decision making and helps speed resolution time. Ideally, a data center manager should be able to monitor backups from their mobile device. Advanced reporting capabilities can free IT staff to focus on more strategic projects rather than managing backups.
    5. Security: Organizations are increasingly scrutinizing the security of their data at every step of the workflow, in an attempt to eliminate vulnerabilities before they are exploited. Hardware-based, military-grade AES 256-bit encryption of data at rest and in motion provides security without sacrificing performance. Be aware that software-based encryption approaches can often incur a performance penalty.
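
    To make the fixed-block versus variable-length distinction above concrete, here is a toy Python sketch (a simplified rolling-checksum chunker, not Quantum’s implementation; production systems typically use Rabin fingerprinting or similar) showing why content-defined boundaries keep deduplicating after a small insertion shifts the rest of a backup stream, while fixed blocks do not:

        # Toy comparison of fixed-block vs. variable-length (content-defined)
        # deduplication. Simplified illustration only; not Quantum's code.
        import hashlib
        import random

        def fixed_chunks(data: bytes, size: int = 4096):
            """Split data into fixed-size blocks."""
            return [data[i:i + size] for i in range(0, len(data), size)]

        def variable_chunks(data: bytes, window: int = 48, mask: int = 0x3FF, min_len: int = 512):
            """Cut a chunk where a rolling sum over the last `window` bytes
            matches `mask`, so boundaries follow content, not absolute offsets."""
            chunks, start, rolling = [], 0, 0
            for i, byte in enumerate(data):
                rolling += byte
                if i >= window:
                    rolling -= data[i - window]
                if i - start >= min_len and (rolling & mask) == mask:
                    chunks.append(data[start:i + 1])
                    start = i + 1
            if start < len(data):
                chunks.append(data[start:])
            return chunks

        def dedup_ratio(chunks):
            """Logical bytes divided by unique bytes actually stored."""
            unique = {hashlib.sha256(c).digest(): len(c) for c in chunks}
            return sum(len(c) for c in chunks) / sum(unique.values())

        # Two "backup generations": the same 200KB image, then the same image
        # with 8 bytes inserted near the front. Random payload is a stand-in.
        random.seed(0)
        base = bytes(random.getrandbits(8) for _ in range(200_000))
        edited = base[:100] + b"<<edit>>" + base[100:]

        print("fixed-block ratio:     %.2f" % dedup_ratio(fixed_chunks(base) + fixed_chunks(edited)))
        print("variable-length ratio: %.2f" % dedup_ratio(variable_chunks(base) + variable_chunks(edited)))

    The fixed-block run stores nearly everything twice because the insertion shifts every later block boundary, while the content-defined run realigns within a chunk and deduplicates the rest, which is the behavior that matters for fast-growing or frequently changing data sets.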

    Ready to reboot?

    The trends are clear: as data center volume and complexity increase, it’s critical to have not just deduplication, but smart deduplication that fits within an evolving data center and its workflows.

    *Gartner, Magic Quadrant for Deduplication Backup Target Appliances, 31 July, 2014

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:26p
    HP: A Split Identity No More

    HP will split into two companies, one focused on enterprise, the other on PCs and printers. The company said Monday the new Hewlett-Packard Enterprise will design and sell next-generation technology infrastructure, software and services, while HP Inc will be the consumer personal systems and printing company. Shareholders will own a stake in both businesses through a tax-free transaction in 2015.

    The enterprise company will focus on cloud, big data, security and mobility, and on the new generation of technology and deployment models being driven by these sectors. Cloud computing has brought a sea change to IT. A general move to cloud and hosted deployment models means legacy technology companies are forced to evolve, often to the displeasure of investors if they are publicly traded.

    Meg Whitman, who has until now been HP CEO, will be president and CEO of Hewlett-Packard Enterprise, and Dion Weisler, who has led the company’s printing and personal systems business as executive vice president, will be president and CEO of HP Inc. Whitman will be chairwoman of HP Inc’s board, and Weisler will be chairman of the Hewlett-Packard Enterprise board.

    Splitting into Hewlett-Packard Enterprise and HP Inc means one company’s results, and any investor displeasure with them, will not affect the other. As a single company, HP had two very distinct identities, which have now turned into two separate businesses, each focused on its own set of problems.

    More layoffs

    The impetus for the company split isn’t positive, however. HP is approaching the fourth year of a five-year turnaround plan.

    The company has once again increased its previously announced layoff plans by 5,000 employees, reaching a total of 55,000. The reduction target has been continuously raised since Whitman first announced her turnaround plan in 2012 and said 27,000 employees would be laid off or sent into early retirement as part of the turnaround.

    In her prepared remarks Whitman was as optimistic as ever. “Our work during the past three years has significantly strengthened our core businesses to the point where we can more aggressively go after the opportunities created by a rapidly changing market,” she said. “The decision to separate into two market-leading companies underscores our commitment to the turnaround plan.

    “It will provide each new company with the independence, focus, financial resources and flexibility they need to adapt quickly to market and customer dynamics, while generating long-term value for shareholders.”

    Multi-sided enterprise business

    Earlier this year the company consolidated its cloud play under the Helion brand. It built a cloud services platform, called Helion, based on the open source OpenStack architecture.

    In addition to its massive traditional server business, HP Enterprise also has a fairly new line of “microservers” called Moonshot. Last week, the company became the first vendor to start shipping servers powered by 64-bit ARM processors (used primarily in smartphones) under the Moonshot brand.

    There is also the 3PAR enterprise storage business and a multitude of IT management, big data analytics and security software products.

    “Over the past three years, we have reignited our innovation engine with breakthrough offerings for the enterprise like Apollo, Gen 9 and Moonshot servers, our 3PAR storage platform, our HP OneView management platform, our HP Helion Cloud and a host of software and services offerings in security, analytics and application transformation,” Whitman said. “Hewlett-Packard Enterprise will accelerate innovation across key next-generation areas of the portfolio.”

    6:02p
    Stulz to Build More Cooling Gear in Maryland

    Stulz Air Technology has finished a major expansion to its data center cooling equipment factory in Frederick, Maryland.

    The German company’s logo is one of the most common logos you’ll see on cooling units inside a data center. Stulz makes everything from precision cooling units to humidifiers and economizers and has been manufacturing in Frederick since 2001.

    While based in Hamburg, the data center cooling company is a true U.S. manufacturer. Everything it makes for the North American market it makes in the U.S.

    Stulz President Joerg Desler said it was common for companies to have most of their components produced overseas but stamp their products as “made in America” simply because they were assembled domestically.

    “At Stulz, we do our own engineering, metal stamping, electrical wiring, powder coating, controls programming, piping and testing,” Desler said in a statement. “To us, that is what ‘made in America’ means.”

    The recently finished $5 million expansion in Maryland includes the addition of a 41,000-square-foot building that will house a sheet metal fabrication area, a weld shop and two powder coat lines.

    The project also includes expanded space for a line of outdoor air handling units and large-capacity perimeter cooling systems in the older facility.

    Stulz expects to hire 25 permanent employees at the facility as a result of the expansion.

    One of the reasons the company expanded the factory was to be able to do more custom jobs for customers. “The mission critical cooling industry is becoming more sophisticated and customers are demanding a greater spectrum of custom and semi-custom solutions,” Desler said.

    6:58p
    Internap Expands New Jersey Data Center

    Internap has completed the second phase of its Secaucus, New Jersey, data center, doubling current space and power capacity. The data center’s first phase launched only in January of this year.

    The facility is key to Internap’s ambitions in the New York market, where the company says it offers higher power densities than competing facilities. Phase 2 adds another 13,000 square feet and 1.4 megawatts, with the equal-sized Phase 1 nearing capacity.

    Internap said some of its customers drew 15kW per rack. The average power draw in Secaucus (not provisioned or expected draw) is already upwards of 150 watts per square foot. “Based on the circuits in the installation queue, it will only go higher,” Michael Frank, vice president of data center services at Internap, said.
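
    For context on how those two figures relate, a 15kW rack spread over the gross floor area typically attributed to a cabinet and its share of aisle space works out to several times the facility-wide average. A quick conversion, where the roughly 30 square feet per rack is an illustrative assumption rather than an Internap figure:

        # Relating the cited per-rack draw to per-square-foot density. The
        # 30 sq ft of gross floor space per rack (cabinet plus aisle share)
        # is an illustrative assumption, not a figure from Internap.
        rack_draw_w = 15_000          # the 15kW racks cited above
        gross_sqft_per_rack = 30      # assumed cabinet + aisle allocation

        print("15kW rack over ~30 sq ft: %.0f W/sq ft" % (rack_draw_w / gross_sqft_per_rack))
        print("Facility-wide average cited above: 150 W/sq ft and climbing")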

    Average draw per square foot is up 25 percent across all of the company’s facilities. Some markets, such as Silicon Valley, are experiencing a 40 percent year-over-year jump.

    The Secaucus data center supports a suite of services that spans colocation, managed hosting and cloud computing.

    “The ability to hybridize IT infrastructure is another key offering that differentiates our data centers in the New York Metro market,” Frank wrote on Internap’s blog. “More and more customers are interested in using hybrid services within the data center, and we expect to see a larger number of AgileCLOUD customers in Phase 2 than we currently have in Phase 1.”

    Prologis built the property in 2012. Internap took occupancy in July 2013 and began building out its data center infrastructure. Internap has the ability to expand up to 22 megawatts and 60,000 square feet of technical space there.

    In addition to offering higher power densities, the facility is also at a higher elevation, a key consideration for an area of New Jersey that was hit by Superstorm Sandy a few years ago.

    In the New York Metro, Internap also has a full facility at 75 Broad Street in Manhattan. It also has been migrating customers out of its data center at 111 8th Avenue in Manhattan, as its lease with building owner Google is expiring at the end of this year.

    Internap has more than 1 million square feet of data center space, spread across 16 facilities in North America, where it houses infrastructure for customers, including Costco, HBO, JP Morgan Chase, Microsoft, Nokia, Amgen, Southwest Airlines, Delta and McGraw-Hill.

    7:30p
    South Korea, Hong Kong Provide Fastest Internet Speeds: Akamai Report


    This article originally appeared at The WHIR

    The global average internet connection speed increased by 21 percent from the first to the second quarter of 2014, according to Akamai’s newly released State of the Internet Report. The average speed passed the common definition of “broadband” for the first time, reaching 4.6 Mbps.

    The US increase was described as “modest to strong,” as 4k video readiness, meaning a 15 Mbps connection, more than doubled in 39 states, though actual broadband adoption rates grew in only half of US states.

    High-speed content streaming has become a huge proportion of internet traffic as well as a political issue as the FCC considers regulating net neutrality and internet slow lanes. In June, the FCC also issued a mixed report comparing telecoms’ actual connection speeds with their advertised speeds, in the midst of conflicts about bandwidth prioritization and accusations of throttling involving some of the same companies.

    According to the report, “all of the top 10 states had average connection speeds above the ‘high broadband’ threshold of 10 Mbps, as did 19 additional states across the country. The quarter-over-quarter trend was overwhelmingly positive, with four additional states joining Delaware, Washington, and Connecticut in having quarterly growth rates of 10 percent or more.”

    South Korea maintained its status as the country with the fastest average connection speed, while Hong Kong’s average speed moved into second place at 15.7 Mbps, followed by Switzerland and Japan. The average speed in the US was 11.4 Mbps, 14th in the world.

    Apple Mobile Safari continued to lead mobile traffic according to the report, with 49 percent of requests on all networks, and 36 percent on cellular networks.

    Among notable security observations, Port 80 passed Port 445 as the port most targeted by attack volume for the third time since Akamai began issuing reports in 2008. Attack traffic in Indonesia more than doubled to 15 percent, while China continued to lead attack traffic with 43 percent.

    The report also examines changes to the threat landscape such as the Heartbleed vulnerability. Its next report will likely include Shellshock, which Akamai itself has raced to deal with.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/south-korea-hong-kong-provide-fastest-internet-speeds-akamai-report

    8:10p
    Sentinel Kicks Off Massive New Jersey Expansion

    Sentinel Data Centers announced a plan to expand its already massive New Jersey data center in a project to add 200,000 square feet and 20 megawatts of power capacity.

    This will be a third phase of expansion at the NJ-1 site in Somerset, whose current 230,000-square-foot building is nearly full. The wholesale data center provider already signed lease contracts for some space in the upcoming third phase.

    New Jersey is one of the biggest and most active data center markets in the U.S. Manhattan’s high real estate and energy prices have created a bustling market across the Hudson.

    Internap, a retail data center services provider, made an announcement today that was very similar to Sentinel’s although much smaller in scale. Internap said it had expanded its Secaucus, New Jersey, data center with a new phase because its existing space in the facility was approaching capacity.

    Sentinel plans to bring the latest Somerset phase online in the second half of 2015.

    The company promotes the flexibility of its wholesale data center products as a differentiator. It builds out in modular building blocks with custom power density and resiliency levels for each customer suite.

    Todd Aaron, Sentinel co-president, said, “Flexibility remains a cornerstone of our offering and is central to ensuring our customers’ long-term success.”

    9:41p
    VCE Brings First All-Flash Vblock to Market

    VCE announced the first version of its Vblock converged infrastructure solution with all-flash storage Monday.

    Vblock System 540 consists of Cisco Unified Computing System servers and networking and EMC’s all-flash XtremIO storage arrays. It also supports Cisco’s Application Centric Infrastructure technology, the networking giant’s answer to open software-defined networking products.

    EMC got its all-flash product line through acquisition of Israel-based startup XtremIO for $430 million in 2012. As the cost of flash memory inches down, companies increasingly consider all-flash storage for application acceleration.

    All-solid-state arrays are most commonly used in online transaction processing, analytics and virtual desktop infrastructure, according to Gartner. For now, users deploy them for specific workloads, but there is growing interest in using the technology to support multiple applications.

    XtremIO got the fourth-highest product score in a Gartner ranking of all-solid-state products. SolidFire SF was at the top, followed by Pure Storage FA and Kaminario K2.

    SolidFire, Pure Storage and Kaminario are very young companies, however, compared to EMC. On its Magic Quadrant for solid-state arrays, EMC is ahead of everyone else in terms of ability to execute, while Pure Storage is top on the completeness-of-vision axis.

    IDC has forecasted that the market for all-solid-state arrays will reach $1.2 billion in revenue by 2015.

    Converged infrastructure line widened

    VCE, which Cisco and EMC formed in 2009, announced System 540 along with four other new versions of Vblock and “technology extensions” for the system.

    The new Vblock System 740 is the highest-performance model. It includes EMC’s Vmax 3 storage, which increases storage performance, the company said.

    There is also the new Vblock System 240 for mid-size business customers and remote offices. Its storage component is EMC’s entry-level VNX5200.

    Along with the new Vblock models, VCE added two Vblock-based integrated cloud management solutions: one for Cisco cloud management and the other for VMware cloud management. EMC-controlled VMware was one of the early investors in VCE.

    Cisco is offering its UCS Director software (management software for UCS infrastructure) as a cloud management solution. The company said users can provision virtual resources in an Infrastructure-as-a-Service manner using the Director.

    The VMware version of the solution comes with the software company’s vRealize Suite. The suite also enables cloud management, including automated provisioning and orchestration of private and hybrid cloud environments.

    10:00p
    Defense Dept. Mulling Cloud in Data Center Containers

    The U.S. Department of Defense is considering private government cloud services hosted in a data center container deployed on its premises.

    It is one of two options the DoD wants to evaluate as ways to integrate private cloud solutions by commercial service providers that will plug into its internal networks. The other option is to lease space in one of the department’s own data centers to the provider and host the cloud infrastructure there.

    The DoD has issued a request for information about the options. The government uses these requests as part of an information-gathering process, usually followed by a request for proposals, which contains a more detailed outline of what an agency needs and solicits specific product or service offers from vendors.

    While the two deployment models outlined in the document are fairly similar, the key difference is the physical separation of the containerized solution. The container would most likely be located close to a DoD data center it is serving and use that facility’s power and cooling resources.

    In practice, the government cloud solution would sit within the department’s network security perimeter regardless of the hardware’s physical location.

    The DoD is looking for cloud infrastructure services, including VM provisioning, object and block storage, and support services such as networking, identity, billing and resource management. The department has not specified the capacity of the private cloud, but the request seeks information about a “small” 10,000-VM configuration, a “medium” 50,000-VM configuration and a “large” 200,000-VM one.
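
    For a sense of scale, a rough host-count estimate for the three configurations under an assumed consolidation ratio looks like the sketch below; the 30 VMs per physical host is hypothetical, since the request specifies VM counts rather than hardware:

        # Rough sizing of the RFI's three configurations under an assumed
        # density of 30 VMs per physical host (hypothetical; the DoD request
        # specifies VM counts, not hardware or consolidation ratios).
        VMS_PER_HOST = 30

        for label, vms in (("small", 10_000), ("medium", 50_000), ("large", 200_000)):
            hosts = -(-vms // VMS_PER_HOST)   # ceiling division
            print(f"{label:>6}: {vms:>7,} VMs -> roughly {hosts:,} hosts")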

    There is a multitude of private cloud solution providers out there, but the pool can get narrow very quickly as providers have to go through the government’s strict security certification processes.

    Some providers, such as Phoenix-based IO, can provide both a container and a cloud infrastructure. IO builds its modules at a factory in Arizona and can ship them anywhere. It also has built a cloud services offering using the OpenStack architecture and Open Compute Project hardware.

    Vendors like HP (now Hewlett-Packard Enterprise) and Dell have both the containers and the IT infrastructure products to deliver the full package.

    Private cloud providers can also potentially partner with container makers to deliver the solution jointly.

