Data Center Knowledge | News and analysis for the data center industry

Friday, June 27th, 2014

    11:30a
    Microsoft Working on Re-configurable Processors to Accelerate Bing Search

    Balancing flexibility and a myriad of computational demands against the need for efficiency in web-scale data centers drove Microsoft data center researchers and their colleagues to examine hardware alternatives in a project known as Catapult. A paper delivered at the 41st International Symposium on Computer Architecture (ISCA) this month, titled “A Reconfigurable Fabric for Accelerating Large Scale Data Center Services,” describes an effort to combine programmable hardware and software, using field-programmable gate arrays (FPGAs) to deliver performance improvements of as much as 95 percent.

    Building and operating data centers that are both flexible and efficient at hyperscale has been a challenge for a select handful of companies. Five years ago, Google’s Luiz Andre Barroso and Urs Holzle published a paper that framed the data center as a computer, describing the company’s use of warehouse-scale machines in its approach to data center infrastructure.

    After announcing its $1.1 billion data center expansion in Iowa, Microsoft Director of Data Center Services Kevin Williams said that the new site would “house a Generation 5 facility incorporating fully-integrated software applications that are built as distributed systems into every aspect of the physical environment — from the server design to the building itself — and drive systems integration for greater reliability, scalability, efficiency and sustainability.”

    Catapult is a reconfigurable fabric embedded into each half-rack of 48 servers in the form of a small board, carrying a medium-sized FPGA and local DRAM, attached to every server. Using FPGAs lets Microsoft tailor programmable hardware specifically to its search algorithms. In the evaluation deployment outlined in the paper, Catapult’s nodes, linked by high-bandwidth connections, were tested on a collection of 1,632 servers to measure their efficacy in accelerating the workload of a production web-search service.
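
    The paper’s programming interface is not reproduced in this article, but the idea of cloud software partnering with programmable hardware can be sketched in a few lines. The Python sketch below is purely illustrative: the fpga_board driver object and its methods are hypothetical stand-ins, not Microsoft’s actual Catapult interface.

        # Illustrative only -- NOT the Catapult API. `fpga_board` is an imaginary
        # driver object standing in for the FPGA card attached to each server.
        def score_documents(query_features, documents, fpga_board=None):
            """Score candidate documents for a query, preferring hardware offload.

            If an FPGA board is attached to this server (as in Catapult, where every
            server in a 48-node half-rack carries one), the ranking stage runs there;
            otherwise the same function is computed in software.
            """
            if fpga_board is not None and fpga_board.is_healthy():
                # The hardware image can be changed by reconfiguring the FPGA rather
                # than swapping chips -- the flexibility Burger describes below.
                return fpga_board.run("rank_stage", query_features, documents)
            # Software fallback: a trivial dot-product ranker stands in for the
            # real scoring pipeline.
            return [sum(q * d for q, d in zip(query_features, doc)) for doc in documents]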

    Also making use of FPGAs to chase the hyperscale data center market, Intel recently said that it would produce a hybrid chip that bolts an FPGA onto its high-end Xeon E5 server chip. Specialized chips that can be programmed have enabled optimized workloads for things like bitcoin mining. Semiconductor company Altera makes reconfigurable logic with on-chip memory and DSP blocks for a software defined data center. The company announced it is working with Microsoft Research and Bing to accelerate portions of the web search engine.

    “We designed a platform that permits the software in the cloud, which is inherently programmable, to partner with programmable hardware,” said Microsoft Researcher Doug Burger. “You can move functions into custom hardware, but rather than burning them into fixed chips [application-specific integrated circuits], we map them to Altera FPGAs, which can run hardware designs but can be changed by reconfiguring the FPGA. We’ve demonstrated a ‘programmable hardware’ enhanced cloud, running smoothly and reliably at large scale.

    “This portends a future where systems are specialized dynamically by compiling a good chunk of demanding workloads into hardware,” Burger went on. “I would imagine that a decade hence, it will be common to compile applications into a mix of programmable hardware and programmable software.”

    If all goes well in research trials, Bing plans to roll out FPGA-accelerated servers to process customer searches in one of its data centers starting in early 2015.

    12:00p
    Instagram Migrates from Amazon’s Cloud into Facebook Data Centers

    Just as there are many reasons to move applications from internal data centers into the cloud, there are many reasons to move the other way. The recent migration of Instagram’s entire backend stack from Amazon Web Services’ public cloud into Facebook’s data centers was a good example of the latter.

    Along the way, Instagram engineers hit quite a few unexpected snags and had to think outside the box, devising a few sophisticated workarounds to make the move work. Their story is a good reminder that workload mobility remains a thorny challenge that cloud service providers, and the huge ecosystem of vendors building solutions for cloud infrastructure users, have yet to solve.

    Facebook founder and CEO Mark Zuckerberg indicated in 2012, following the social networking giant’s acquisition of the online photo-sharing service, that Instagram would eventually take advantage of Facebook’s engineering resources and infrastructure.

    Not as easy as it first seemed

    The team decided to move to make integration with internal Facebook systems easier and to be able to use all the tooling Facebook’s infrastructure engineering team had built to manage its large-scale server deployments. Following the acquisition, the engineering team found a number of integration points with Facebook’s infrastructure they thought could help accelerate product development and increase security.

    The migration project did not turn out to be as straightforward as one might expect. “The migration seemed simple enough at first: set up a secure connection between Amazon’s Elastic Compute Cloud (EC2) and a Facebook data center and migrate services across the gap piece by piece,” Instagram engineers Rick Branson, Pedro Cahauati and Nick Shortway wrote in a blog post published Thursday.

    Forced to move to private Amazon cloud first

    But they quickly learned that it was not quite so simple. The main problem at this first stage was that Facebook’s private IP space conflicted with EC2’s IP space. The solution was to move the stack into Amazon’s Virtual Private Cloud first and then migrate to Facebook using Amazon Direct Connect.

    Direct Connect is a service Amazon provides at colocation data centers that is essentially a direct private network pipe between a customer’s servers and Amazon’s public cloud. Targeted primarily at enterprises, it is designed to bypass the public Internet to avoid performance and security issues.

    “Amazon’s VPC offered the addressing flexibility necessary to avoid conflicts with Facebook’s private network,” the engineers wrote.
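
    Python’s standard ipaddress module makes the kind of address-space conflict described above easy to illustrate; the CIDR blocks below are invented for the example and are not Instagram’s or Facebook’s actual ranges.

        # Illustrative only: the CIDR blocks are invented for the example.
        import ipaddress

        ec2_private = ipaddress.ip_network("10.0.0.0/8")      # hypothetical EC2-Classic private range
        fb_private  = ipaddress.ip_network("10.64.0.0/10")    # hypothetical Facebook private range
        vpc_block   = ipaddress.ip_network("172.16.0.0/12")   # a VPC block chosen to avoid the clash

        print(ec2_private.overlaps(fb_private))  # True  -> routing between the two is ambiguous
        print(vpc_block.overlaps(fb_private))    # False -> VPC addressing sidesteps the conflict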

    EC2 not exactly best buds with Amazon’s VPC

    But moving applications from Amazon’s public cloud infrastructure into a private cloud is also not as simple as it sounds. Instagram had many thousands of EC2 instances running, with more spinning up daily. To minimize downtime and simplify operations as much as possible, the team wanted EC2 and VPC instances to act as instances on the same network – and therein lay the problem.

    “AWS does not provide a way of sharing security groups nor bridging private EC2 and VPC networks,” they wrote. “The only way to communicate between the two private networks is to use the public address space.” They took to Python and Zookeeper to write a “dynamic IP table manipulation daemon” called Neti, which provided the security group functionality they needed and a single address for every instance, regardless of which cloud it was running in.
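
    Neti’s source code is not excerpted in the blog post, but the pattern it describes, a daemon that watches ZooKeeper for instance registrations and rewrites local NAT rules so every instance answers at one stable overlay address, can be sketched as follows. The ZooKeeper layout, address fields and iptables rules here are assumptions for illustration, not Instagram’s actual implementation.

        # Minimal sketch of a "dynamic IP table manipulation daemon" in the spirit
        # of Neti; the ZooKeeper layout and overlay addressing are hypothetical.
        import json
        import subprocess

        from kazoo.client import KazooClient

        zk = KazooClient(hosts="zk1:2181,zk2:2181")
        zk.start()

        def add_nat_rule(overlay_ip, public_ip):
            # Send traffic addressed to the instance's stable overlay address to its
            # current public address, so callers never care which cloud it lives in.
            subprocess.check_call([
                "iptables", "-t", "nat", "-A", "OUTPUT",
                "-d", overlay_ip, "-j", "DNAT", "--to-destination", public_ip,
            ])

        @zk.ChildrenWatch("/instances")          # re-runs whenever instances come or go
        def rebuild(instance_names):
            subprocess.check_call(["iptables", "-t", "nat", "-F", "OUTPUT"])  # start clean
            for name in instance_names:
                data, _ = zk.get("/instances/" + name)
                info = json.loads(data.decode("utf-8"))
                add_nat_rule(info["overlay_ip"], info["public_ip"])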

    After about three weeks, the migration into private cloud was complete, which the three engineers claim was the fastest VPC migration of this scale ever. The stack was ready for departure to its next destination: Facebook data centers.

    Linux containers make custom tools portable

    This step of the process was made more complex because the Instagram team wanted to keep all the management tools it had built for its production systems while running on EC2. These were things like configuration management scripts, Chef for provisioning and a tool called Fabric, which did everything from application deployment to database master promotion.
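
    Fabric is an open source Python deployment library, and while Instagram’s actual fabfiles are not public, a task of the kind described, from pushing application code to promoting a database replica, would look roughly like this (hosts, paths and commands are placeholders):

        # Hedged sketch of Fabric 1.x tasks; hosts, paths and commands are
        # placeholders, not Instagram's real tooling.
        from fabric.api import env, run, sudo, settings, task

        env.hosts = ["app1.example.com", "app2.example.com"]   # placeholder app servers

        @task
        def deploy(version):
            """Check out a tagged release on every app server and restart the service."""
            run("cd /srv/app && git fetch --tags && git checkout " + version)
            sudo("service app restart")

        @task
        def promote_master(replica="db2.example.com"):
            """Promote a replica to database master; the trigger file is a placeholder."""
            with settings(host_string=replica):
                sudo("touch /var/lib/postgresql/promote.trigger")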

    To port the tools into Facebook’s highly customized Linux-based environment, the team enclosed all of its provisioning tools in Linux Containers, which is how they now run on Facebook’s homegrown servers. “Facebook provisioning tools are used to build the base system, and Chef runs inside the container to install and configure Instagram-specific software,” they wrote.
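
    The post does not show the container setup itself. A heavily simplified sketch of the pattern, a base system built by host-side provisioning tools with Chef then run inside a Linux Container to layer on Instagram-specific configuration, could look like the following; the container name, template and Chef invocation are assumptions, not Facebook’s actual pipeline.

        # Simplified illustration of "Chef inside a Linux Container"; the container
        # name, template and chef-client invocation are assumptions.
        import subprocess

        CONTAINER = "instagram-app"

        def sh(*cmd):
            subprocess.check_call(list(cmd))

        # 1. Base system: stands in for the host-side provisioning tools.
        sh("lxc-create", "-n", CONTAINER, "-t", "ubuntu")
        sh("lxc-start", "-n", CONTAINER, "-d")

        # 2. Application layer: run Chef inside the container to install and
        #    configure the service-specific software.
        sh("lxc-attach", "-n", CONTAINER, "--", "chef-client", "--once")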

    One migration wiser

    A project like this does not end without the team learning a thing or two, and the Instagram engineers walked away with a few takeaways: plan to change as little as possible to support the new environment; go for “crazy” ideas, such as Neti, because they might just work; build your own tools to avoid unexpected “curveballs”; and reuse familiar concepts and workflows to keep complexity to a minimum.

    12:30p
    Oracle Acquires MICROS for $5.3B

    Oracle has acquired integrated software and hardware solutions provider MICROS for about $5.3 billion. Columbia, Maryland-based MICROS provides the hospitality and retail industries with on-site or hosted end-to-end integrated hardware, software and services. Its industry-specific applications will complement Oracle’s business applications, technologies and cloud portfolio.

    This is the largest deal in Oracle’s acquisitions strategy since it bought Sun Microsystems in 2010 for $7.4 billion.

    “MICROS has been focused on helping the world’s leading brands in our target markets since we were founded in 1977, including running more than 330,000 sites across 180 countries today,” said Peter Altabef, president and CEO of MICROS. “In combination with Oracle, we expect to help accelerate our customers’ ability to innovate and differentiate their businesses by utilizing Oracle’s technologies, cloud solutions and scale.”

    MICROS solutions deployed at many hotel, food and beverage and retail sites around the world surely overlap with Oracle installations in many cases. Bringing MICROS software and services into the Oracle suite of vertical solutions will enhance its offerings and installed client base for those industries and will allow MICROS to continue expansion of cloud-based solutions and increase research and development efforts.

    “Oracle has successfully helped customers across multiple industries harness the power of cloud, mobile, social, Big Data and the Internet of Things to transform their businesses,” said Oracle President Mark Hurd. “We anticipate delivering compelling advantages to companies within the hospitality and retail industries with the acquisition of MICROS.”

    2:00p
    The Practical Science of Data Center Capacity Planning

    Your data center is growing, you have more users connecting, and your business continues to evolve. As the modern organization places more demands around the data center model, administrators must be aware of their resources and their utilization. A big part of that is planning out capacity for both today, and the future.

    As the need to balance current and future IT requirements against resource consumption becomes more urgent, the data center industry increasingly views capacity planning as a critical component of planning a new build or retrofit. Data center capacity planning can be a complex undertaking with far-reaching strategic and operational implications.

    DCD Intelligence and Server Technology have put together this white paper to share some industry insights and lessons on the practical steps that are needed to develop a successful power and capacity planning strategy.

    As the paper outlines, there are several common factors that may impact capacity planning (a simple back-of-the-envelope capacity sketch follows the list below). Those factors include:

    • Industry-specific rack density and optimization requirements
    • Power
    • Cooling
    • Business continuity and disaster recovery planning
    • Sustainability and ‘green’ performance
    • Budgeting
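
    As a concrete illustration of how those factors interact, consider a back-of-the-envelope budget that ties rack density, power, cooling overhead and growth together. Every number below is invented for the example and does not come from the white paper.

        # Back-of-the-envelope capacity budget; all figures are invented examples,
        # not data from the DCD Intelligence / Server Technology white paper.
        racks         = 100
        kw_per_rack   = 6.0      # assumed average IT load per rack
        pue           = 1.5      # assumed power usage effectiveness
        annual_growth = 0.15     # assumed yearly growth in IT load
        years         = 3

        it_load_kw       = racks * kw_per_rack
        facility_load_kw = it_load_kw * pue                        # IT load plus cooling and overhead
        future_it_kw     = it_load_kw * (1 + annual_growth) ** years

        print("IT load today:       %.0f kW" % it_load_kw)              # 600 kW
        print("Facility load today: %.0f kW" % facility_load_kw)        # 900 kW
        print("IT load in %d years:  %.0f kW" % (years, future_it_kw))  # ~913 kW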

    Through it all, you need to apply the practical science of capacity planning to build a truly effective data center model. Key emerging industry trends toward Data Center Infrastructure Management (DCIM) and Software Defined Data Centers (SDDC) demonstrate a continuing need to examine the balance between IT, communications and facilities management.

    Capacity planning brings together the key resource and output factors that define a data center’s reason for being commissioned and its means of fulfilling it. As critical resources become more expensive or scarce, the ability to plan for future capacity requirements becomes more critical.

    Download this whitepaper today to learn how the power draw of IT and communications equipment will continue to rise, creating exponential demand for the power needed to run and cool it, while the cost of power increases and its ready availability is threatened.

     

    3:40p
    Microsoft Brings Machine Learning as a Service to Azure Cloud

    Microsoft said it is launching a cloud-based machine learning service, called Azure ML, in July.

    The service can be used to create applications that predict future outcomes on the basis of historical data. It will help developers bake predictive analytics into applications, letting organizations use large amounts of data to bring the benefits of machine learning to a wider audience.

    Data analytics technology is increasingly used and looked to as a way to improve services. This kind of service was completely out of the realm of possibility pre-cloud, as the technology wasn’t there and the cost of infrastructure capable of doing it effectively was far too great for most companies. Bringing machine learning to cloud computing puts an extremely valuable tool within reach of thousands of developers.

    “Microsoft Azure Machine Learning, a fully-managed cloud service for building predictive analytics solutions, helps overcome the challenges most businesses have in deploying and using machine learning. How? By delivering a comprehensive machine learning service that has all the benefits of the cloud,” Joseph Sirosh, corporate vice president of Machine Learning at Microsoft, wrote in a blog post.

    Azure ML will bring together new analytics tools, powerful algorithms that Microsoft developed for products like Xbox and Bing, and the company’s many years of experience into one easy-to-use cloud offering. This knowledge, combined with the infrastructure of the Azure cloud, not only gives customers access to that accumulated expertise but also eliminates the high-cost hurdle.

    Azure ML will include a studio tool for business analysts to get started, an API for deployment, and an SDK for writing applications. The open source R language will be used to write applications. Some select Microsoft partners are using an early version of the service, and the public preview will be available in July.
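
    Microsoft had not published the final API details at the time of the announcement, but the general pattern for a deployed predictive web service, POSTing a JSON payload of feature values to a scoring endpoint with an API key, can be sketched as follows. The URL, key and input schema are placeholders, not the shipped Azure ML interface.

        # Hedged sketch of calling a deployed predictive web service over REST.
        # The endpoint URL, API key and input schema are placeholders.
        import requests

        ENDPOINT = "https://ml.example.com/workspaces/123/services/abc/score"  # placeholder
        API_KEY = "YOUR-API-KEY"                                               # placeholder

        payload = {
            "Inputs": {
                "input1": [
                    {"temperature": 21.5, "humidity": 0.43, "hour_of_day": 14}
                ]
            }
        }

        resp = requests.post(
            ENDPOINT,
            json=payload,
            headers={"Authorization": "Bearer " + API_KEY},
        )
        resp.raise_for_status()
        print(resp.json())   # e.g. a predicted value for each input row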

    Data analysis is complex, requiring many skills, time and talent. Automation and platforms like Azure ML leverage Big Data, which many are talking about, but few are making full use of.

    “The ease of implementation makes machine learning accessible to a larger number of investigators with various backgrounds, even non-data scientists,” said early adopter Bertrand Lasternas, of Carnegie Mellon University.

    For those that already have large teams dedicated to this, it reduces their time and expense.

    Azure ML will compete with the likes of IBM Watson. Google is also using Machine Learning in its own ways — one of which is to boost data center efficiency. Watson has enjoyed a number of applications, starting with destruction of regular folks like Ken Jennings on Jeopardy. IBM made it a cloud service, allowing companies to use structured and unstructured data to predict events like market opportunities and prevent fraud.

    “Azure ML offers a data science experience that is directly accessible to business analysts and domain experts, reducing complexity and broadening participation through better tooling,” noted early adopter Hans Kristiansen, of Capgemini.

    4:25p
    Online Tech Bets On Detroit Resurgence With New Data Center

    Online Tech seeks out high-end idle corporate assets in the Great Lakes region and turns them into enterprise-class infrastructure for the middle market. Its latest acquisition is a Westland, Michigan, data center in the Detroit metro area.

    It is a former Sprint-Nextel facility that the company has invested in and upgraded to provide premier colocation and cloud services to the area’s businesses. The 34,000 square foot data center is set to open August 1, with customers already touring the property.

    Detroit is not exactly known as a flourishing economy, but Online Tech, as well as demographic data, suggests a resurgence. The company has already had success in both Flint and Ann Arbor, Michigan, and in Detroit it is making another bet on a comeback.

    Online Tech now has five data centers in the Midwest, including the recently announced Indianapolis data center, and plans to continue expanding into new markets in the Great Lakes region.

    The Westland facility itself was quite attractive to Online Tech. “Sprint-Nextel built a heck of a building, a building within a building, really,” said Yan Ness, Online Tech’s co-CEO. “It wasn’t designed for multi-tenant, but that’s what we do.”

    He said there were lots of fiber providers coming into the facility, since it was a former enterprise data center, but it still needed some work to turn it into a multi-tenant building. The company invested in power infrastructure upgrades, added a raised floor, which the original data center did not have, and built a network operations center.

    “That’s what the market wants,” said Ness. “We have a process of what we need to do to these data centers. We also want customers to be comfortable, so we added amenities and offices.”

    Proven track record in the region

    Leveraging the prime but idle piece of corporate real estate at this time and in this place made great business sense to Online Tech’s leadership. “Our timing to be on the buy side and build side is the perfect time,” said Ness. “We look for supply-demand imbalance, and Detroit is a great opportunity.”

    The board and ownership group have been accommodating in letting Online Tech drive the deal because the company has performed very well in Ann Arbor and Flint. “They (the board and investors) were confident in what we did here in 2012 and brought in some deeper-pocket investors for this,” Ness said.

    “We worked really hard to prove the concepts and delivery models up here in Michigan. We’ve fine-tuned our ability to deliver.”

    Detroit underserved, misrepresented

    Detroit is often used to exemplify financial crisis, with most of America accustomed to seeing the city’s dilapidated and abandoned buildings on TV during the worst of the recession. However, a turnaround is occurring.

    “They’re pretty focused on the resurgence of Detroit,” Ness said. “Some of the demographic data is coming out to back the resurgence.”

    There is a fast-growing need for IT professionals (according to Ness there are tens of thousands of unfilled IT jobs), and a number of high-tech incubators and venture capital groups have sprung up. Detroit Venture Partners is an example of the latter. It includes investment from NBA superstar Earvin “Magic” Johnson, who has joined as general partner.

    Ness sees data centers as one of the former industrial powerhouse city’s resurgence opportunities. “Data centers are the indispensable infrastructure for today’s U.S. companies in the same way that large factories were in the 20th century and railroads were in the 19th century,” he said.

    Strength in compliance-heavy verticals

    Online Tech is strong in the regulated computing market, serving customers with HIPAA and PCI compliance needs. “Healthcare is a target for us, with a lot of activity in the southeast. There are some big state-of-the-art hospitals; Henry Ford Health is in Detroit,” he said.

    “We also do a lot of business with finance and insurance companies. All three require regulated computing because there’s HIPAA and credit card risks.”

    The company offers a range of infrastructure options, from cloud to three-rack deployments to 4,000 square foot data halls.

    5:00p
    Friday Funny: Pick the Best Caption for ‘Overhead Cabling’

    It’s Friday afternoon, the sun is shining and the weekend is showing its face! Let’s celebrate the last days of June with some Friday Funny humor!

    Several great submissions came in for last week’s cartoon so now all we need is a winner. Help us out by scrolling down to vote!

    Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon and we challenge our readers to submit a humorous and clever caption that fits the comedic situation. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon!

    Take Our Poll
    For more cartoons on DCK, see our Humor Channel. For more of Diane’s work, visit the Kip and Gary website.

    6:57p
    NVIDIA Pitches GPU-Assisted 64-Bit ARM SoCs for HPC Workloads

    NVIDIA announced that multiple server vendors are leveraging its GPU accelerators to launch the world’s first 64-bit ARM development systems for high performance computing. The company made the announcement at the International Supercomputing conference (ISC14) in Leipzig, Germany, earlier this week.

    “NVIDIA has built the industry’s most comprehensive accelerated computing platform — including servers, software, development tools, processors and related technologies — optimized for the HPC industry,” said Ian Buck, vice president of accelerated computing at NVIDIA. “GPUs are the enabling technology that allow server vendors to build HPC-class systems around flexible ARM64 processors. The result is new highly innovative computing solutions for HPC (High Performance Computing).”

    Door Into HPC for ARM server processors

    ARM chips are used predominantly in smartphones and embedded devices, but there has also been some adoption of the low-power chip architecture — licensed to manufacturers by the UK’s ARM Holdings — in the server market. Now, with NVIDIA’s CUDA parallel programming platform and its GPUs doing the heavy lifting, ARM64 is able to take on HPC-class workloads, according to the chip maker.

    Three vendors will incorporate the solution, which pairs Applied Micro’s X-Gene ARM64 SoC with NVIDIA Tesla K20 GPU accelerators; their development platforms are listed below. NVIDIA’s CUDA-accelerated scientific and engineering HPC applications can be recompiled for ARM64 systems to take advantage of the solution immediately; a minimal GPU-offload sketch follows the vendor list.

    • Cirrascale: RM1905D, a two-in-one 1U server ARM64 development platform with two Tesla K20 GPUs and one Applied Micro X-Gene 64-bit ARM SoC
    • E4 Computer Engineering: a low-power 3U, dual-motherboard server appliance with two Tesla K20 GPU accelerators
    • Eurotech: an ultra-high density, energy efficient and modular Aurora HPC server configuration, based on proprietary Brick Technology and featuring direct hot liquid cooling
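
    The significance for HPC developers is that the offload model itself does not change: the CUDA kernel runs on the Tesla GPU whether the host is x86 or ARM64, and the host-side code is simply recompiled for the new architecture. The minimal example below uses the open source PyCUDA bindings to illustrate that model; it is not drawn from NVIDIA’s ARM64 development kits.

        # Minimal GPU-offload example using PyCUDA; illustrative of the CUDA model,
        # not of the ARM64 development platforms specifically.
        import numpy as np
        import pycuda.autoinit                    # creates a CUDA context on the default GPU
        import pycuda.gpuarray as gpuarray
        from pycuda.compiler import SourceModule

        mod = SourceModule("""
        __global__ void saxpy(float a, float *x, float *y, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }
        """)
        saxpy = mod.get_function("saxpy")

        n = 1 << 20
        x = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))
        y = gpuarray.to_gpu(np.random.rand(n).astype(np.float32))

        saxpy(np.float32(2.0), x.gpudata, y.gpudata, np.int32(n),
              block=(256, 1, 1), grid=((n + 255) // 256, 1))

        print(y.get()[:5])   # results computed on the GPU, independent of host CPU type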

    Applied Micro debuted the X-Gene ARMv8 64-bit Server SoC at this year’s Open Compute Summit and last month demonstrated it running production software, the OpenStack Icehouse release on Ubuntu 14.04 LTS, in a KVM virtualized environment. At this week’s ISC14, Applied Micro announced that the ARM-based X-Gene SoC is finally ready, that development kits are available immediately and that silicon production is set to start soon.

    “The availability of accelerated 64-bit ARM servers is one of the most significant developments to hit the HPC market this year,” said Earl Joseph, IDC program vice president for HPC. “IDC believes there is substantial interest within the HPC community in evaluating GPU-accelerated 64-bit ARM systems for next-generation computing projects.”

    7:46p
    Contractor on Google’s Oregon Data Center Build Slapped With Labor Fine

    Construction workers on Google’s $600 million Oregon data center construction project didn’t get required rest times beyond their lunch breaks and their employer got slapped with a fine for it.

    But don’t worry, this is not a case of Google breaking its promise not to be evil. The culprit was steel contractor LPR Construction, of Loveland, Colorado, hired for the project in The Dalles.

    The state’s Bureau of Labor and Industries settled with the construction company accused of violating state wage and hour laws. LPR will pay a $20,000 fine after admitting the misdeed, Oregon Live reported.

    The state suspended an additional $8,000 in fines, which LPR will face if it continues to break labor laws. As part of the settlement, the company also agreed to submit reports from all of its work sites in Oregon over the next three years.

    A state investigation found 28 employees were not given proper breaks. The complaint was filed by an unnamed site visitor who claimed employees were working five to six days a week for 10-plus hours a day and were only given lunch breaks.

    While generally speaking this is considered a normal work week in the tech world, sitting at a computer inside an air-conditioned room, often with Doritos, Mountain Dew and several breaks on hand, is different from building steel structures in the high desert.

    LPR worked on the project between December 2013 and February 2014. This is one of the state’s larger meal and rest period settlements in recent years.

    The state’s wage and hour division typically receives about 1,300 meal and break complaints a year and 2,300 claims regarding unpaid wages, according to Oregon Live.

    The Dalles Project is a two-story 164,000 square foot building that joined two single-story 94,000 square foot data centers.

