Data Center Knowledge | News and analysis for the data center industry
 

Friday, March 22nd, 2013

    12:00p
    Apple Hits 100% Renewable Energy in its Data Centers

    An aerial view of one of Apple’s two major solar panel arrays in Maiden, North Carolina, which supply electricity to help support the power requirements for a nearby Apple data center. (Photo: Apple)

    In the wake of pressure from the environmental group Greenpeace, Apple said Thursday that it has achieved 100 percent renewable energy at all of its data centers, including facilities in North Carolina, Oregon, California and Nevada. The company is also using renewables to support office facilities in Austin, Elk Grove, Cork and Munich, as well as its Infinite Loop campus in Cupertino.

    The road to renewable energy was a formidable one. Apple doubled the size of an already huge solar array in North Carolina, buying another 100 acres of land to support the expansion. The two separate 100-acre solar arrays in Maiden, N.C. each produce 42 million kilowatt-hours (kWh) of energy annually. Apple also uses biogas from nearby landfills to power Bloom Energy Server fuel cells at its Maiden site.

    Although it has been secretive about the project itself, the company has been vocal about its plans to use renewable power exclusively at its new data center in Prineville, Oregon. That energy will come from a mix of sources, including wind, hydro, solar and geothermal power.

    Facebook also has a data center nearby in Prineville that uses an evaporative cooling system, in combination with the naturally moderate climate, to save on energy costs. Facebook initially faced heat from Greenpeace for using energy from PacifiCorp, which is derived largely from coal.

    Gary Cook, senior IT analyst at Greenpeace, called Apple out at an Uptime Symposium, saying that it and Facebook should “wield (its) power to alter the energy paradigm.” Apple has since stepped up in a big way. Since 2010, it has achieved a 114 percent increase in the use of renewable energy at corporate facilities worldwide, up to 70 percent overall from 35 percent.

    “Apple’s announcement shows that it has made real progress in its commitment to lead the way to a clean energy future,” Cook said in a statement Thursday. “Apple’s increased level of disclosure about its energy sources helps customers know that their iCloud will be powered by clean energy sources, not coal.”

    Cook insisted that Apple “still has major roadblocks” to meeting its 100 percent clean energy commitment in North Carolina, where he said electric utility Duke Energy “is intent on blocking wind and solar energy from entering the grid.” Greenpeace called on Apple to disclose more details on its plans for using renewable resources in all its data centers.

    See Apple’s environmental impact statement for details of its announcement.


    Apple has also deployed a 10 megawatt installation of fuel cells in Maiden. The Bloom Energy Servers use biogas from a nearby landfill to generate electricity to support Apple’s data center operations. (Photo: Apple)

    12:30p
    Why is Data Storage Such an Exciting Space?

    Srivibhavan (Vibhav) Balaram is the founder and CEO of CloudByte Inc. He is a general manager with more than 25 years of industry experience, including five years spent working in the United States with companies such as Hewlett-Packard, IBM and AT&T Bell Labs.

    VIBHAV BALARAM
    CloudByte

    For a while, the storage industry appeared to be fairly stable (read: little technology innovation), with consolidation around a few large players. Several smaller companies were bought out by larger ones – 3PAR by HP, Isilon by EMC, Compellent by Dell. However, in the last year we have seen renewed action in the space, with promising new start-ups dedicated to solving the storage problems of new-age data centers. So what exactly is the problem with legacy storage solutions in new-age data centers?

    Evolution of Storage Technology

    For better perspective, let’s start with a quick recap of how data storage technology has evolved. In the late 1990s and early 2000s, storage was first separated from the server to remove bottlenecks on data scalability and throughput. NAS (network-attached storage) and SANs (storage area networks) came into existence, Fibre Channel (FC) protocols were developed, and large-scale deployments followed. With a dedicated external controller (SAN) and a dedicated network (based on FC protocols), the new storage solutions provided data scalability, high availability, higher throughput for applications and centralized storage management.

    Server Virtualization and the Inadequacy of Legacy Solutions

    Legacy SAN/NAS-based storage solutions scaled well and proved adequate until the advent of server virtualization. With server virtualization, the number of applications grew rapidly, and external storage was now shared among multiple applications to manage costs. Here, the monolithic controller architecture of legacy solutions proved a misfit, as it resulted in noisy-neighbor issues within the shared storage. For example, if a backup operation was initiated for one application, other applications received lower storage access and eventually timed out. Further, storage could no longer be tuned for a particular workload, since applications with disparate workloads shared the same storage platform.
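
    To make the noisy-neighbor effect concrete, here is a minimal, purely illustrative Python sketch (not any vendor's code): a monolithic controller is modeled as one shared FIFO queue, and a bulk backup job floods that queue, inflating completion times for every other application. All numbers are made up.

    from collections import deque

    # Illustrative model only: one monolithic controller represented as a single
    # FIFO queue with a fixed service time per I/O request (made-up numbers).
    SERVICE_TIME_MS = 1.0

    def simulate(requests):
        """Push every request through the shared FIFO; return average completion time per app."""
        queue = deque(requests)
        clock = 0.0
        completion_times = {}
        while queue:
            app, _ = queue.popleft()
            clock += SERVICE_TIME_MS
            completion_times.setdefault(app, []).append(clock)
        return {app: sum(t) / len(t) for app, t in completion_times.items()}

    # Normal mix: three applications issue 100 interleaved requests each.
    normal = [(app, i) for i in range(100) for app in ("oltp", "mail", "web")]

    # A backup job dumps 5,000 requests into the same queue ahead of that mix.
    with_backup = [("backup", i) for i in range(5000)] + normal

    print("avg completion time (ms), normal mix :", simulate(normal))
    print("avg completion time (ms), with backup:", simulate(with_backup))
    # The three interactive apps now wait behind the entire backup burst --
    # the shared queue has no notion of per-application fairness.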

    Rising Costs and Nightmarish Management

    Legacy vendors attacked these issues through several workarounds, including faster controller CPUs and recommendations for additional memory dressed up in fancy acronyms. Though the workarounds helped to an extent, the brute-force way to guarantee storage quality of service (QoS) was to either ridiculously over-provision storage controllers (with utilization below 30-40 percent) or dedicate physical storage to performance-sensitive applications. Obviously, both approaches negated the very purpose of sharing storage and containing storage costs in virtualized environments, and storage costs relative to overall data center costs increased dramatically. Being hardware-based, legacy vendors saw little reason to change this situation. With dedicated storage for different workloads, data centers accumulated storage islands that were chronically underutilized. Soon, “LUN” management became a hot new skill – and a nightmare for storage administrators.
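
    The cost penalty of over-provisioning is easy to quantify. The back-of-the-envelope Python sketch below uses assumed numbers (not vendor figures) to show how sizing a controller for combined peak demand leaves most of the purchased capability idle at typical utilization.

    # Back-of-the-envelope illustration with assumed numbers, not vendor figures.
    apps = 10
    peak_iops_per_app = 20_000        # worst-case demand of each application
    avg_utilization = 0.35            # the 30-40 percent range cited above

    purchased = apps * peak_iops_per_app          # controller sized for combined peaks
    typically_used = purchased * avg_utilization

    print(f"IOPS capability purchased: {purchased:,}")
    print(f"IOPS typically used      : {typically_used:,.0f}")
    print(f"capability sitting idle  : {1 - avg_utilization:.0%}")
    # Roughly two-thirds of the controller spend exists only to absorb rare bursts.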

    The New-Age Storage Solutions

    With the advent of the cloud, today’s data centers typically run hundreds of VMs that require guaranteed storage access, performance and QoS. Given the inability of legacy solutions to scale in these virtualized environments, it was inevitable that a new breed of storage start-ups would crop up. Many of these start-ups chose to simplify the “nightmarish” management either by providing tools to observe and manage “hot LUNs” (a term for LUNs that serve demanding VMs) or by providing granular storage analytics on a per-VM basis. However, this management approach does not really cure the noisy-neighbor issue, and it leaves many of the other symptoms unresolved.
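
    As a rough illustration of that observe-and-manage approach (a generic Python sketch, not any specific vendor's tool), per-LUN I/O counters can be sampled and the outliers flagged as hot; the sample data and threshold rule here are invented.

    # Generic sketch of the "observe and manage" approach -- not any vendor's tool.
    # The IOPS samples and the 2x-mean threshold are invented for illustration.
    iops_by_lun = {
        "lun-vmstore-01": 1_200,
        "lun-vmstore-02": 18_500,   # a demanding VM lives behind this LUN
        "lun-backup-01": 900,
        "lun-logs-01": 2_300,
    }

    mean_iops = sum(iops_by_lun.values()) / len(iops_by_lun)
    hot_luns = {lun: v for lun, v in iops_by_lun.items() if v > 2 * mean_iops}

    print(f"mean IOPS across LUNs: {mean_iops:,.0f}")
    print("hot LUNs:", hot_luns)
    # An administrator can migrate or rebalance the VMs behind the flagged LUNs,
    # but the underlying contention between tenants remains.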

    Multi-tenant Storage Controllers

    There is a desperate need for solutions that attack the noisy-neighbor problem at its root cause – that is, by making storage controllers truly multi-tenant. These controllers should be able to isolate and dedicate storage resources to every application based on its performance demands. Here, storage endpoints (LUNs) are defined in terms of both capacity and performance (IOPS, throughput and latency). Such multi-tenant controllers can then guarantee storage QoS for every application directly from a shared storage platform.
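
    As a sketch of what such a per-application contract might look like (an illustrative Python model, not CloudByte's or any other vendor's actual implementation), each LUN carries a capacity plus a performance envelope, and the controller admits I/O against each tenant's own budget:

    from dataclasses import dataclass, field

    @dataclass
    class LunQos:
        """A storage endpoint defined by capacity *and* a performance envelope (illustrative)."""
        name: str
        capacity_gb: int
        max_iops: int           # per-interval IOPS ceiling the controller enforces
        max_mbps: int           # throughput ceiling
        target_latency_ms: float
        _iops_budget: int = field(init=False, default=0)

        def __post_init__(self):
            self._iops_budget = self.max_iops

        def admit(self, n_requests: int) -> bool:
            """Admit I/O only while this tenant's own budget lasts; throttle otherwise."""
            if n_requests <= self._iops_budget:
                self._iops_budget -= n_requests
                return True
            return False

        def refill(self):
            """Called by the controller at the start of each accounting interval."""
            self._iops_budget = self.max_iops

    # Two tenants sharing one platform, each with its own guarantee.
    oltp = LunQos("lun-oltp", capacity_gb=500, max_iops=10_000, max_mbps=400, target_latency_ms=5.0)
    backup = LunQos("lun-backup", capacity_gb=4_000, max_iops=1_000, max_mbps=100, target_latency_ms=50.0)

    print(backup.admit(5_000))   # False: the backup burst is throttled at its own ceiling...
    print(oltp.admit(8_000))     # True : ...while the OLTP tenant's IOPS remain untouched

    The design point is that throttling happens against each tenant's own envelope, so one application's burst can no longer consume the headroom promised to another.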

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:30p
    NVIDIA Conference: GPU Can Power Big Data Analytics

    The 2013 GPU Technology Conference is underway in San Jose this week, and NVIDIA shared how it is taking on the top trends in IT with GPU-powered big data analysis, accelerated virtual desktops, and a visual computing appliance. The event conversation can be followed on Twitter hashtag #GTC13.

    Keynoting at the start of the conference, NVIDIA (NVDA) CEO Jen-Hsun Huang presented breakthroughs in computer graphics and the NVIDIA GPU roadmap, shared GPU computing examples and updated participants on remote graphics and product announcements. Huang discussed the next two GPU architectures coming from NVIDIA: Maxwell, with unified virtual memory that lets GPU operations see CPU memory and vice versa, and Volta, which is more energy efficient and introduces a technology called stacked DRAM. Volta will address the problem of memory bandwidth by placing DRAM on the same silicon substrate as the GPU, achieving one terabyte per second of bandwidth.

    GPU-Powered Big Data Analytics

    NVIDIA demonstrated several case studies of how GPUs are being used to tackle big data analytics and advanced search for both consumer and commercial applications. Companies such as Shazam, Salesforce.com and Cortexica use GPUs to process massive data sets and complex algorithms for audio search, big data analytics and image recognition. Top music application Shazam uses GPU accelerators to rapidly search and identify songs from its 27-million-track database. Shazam is growing rapidly, with 10 million song searches a day, 2 million new users joining the service every week, and a database that has doubled over the last year.
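
    Neither NVIDIA nor Shazam published code with these case studies, but the general pattern is easy to sketch. The illustrative Python snippet below (assuming the open-source CuPy library and an NVIDIA GPU; it is not Shazam's algorithm) shows why scoring one query against millions of stored items is a natural fit for GPU acceleration:

    # Illustrative only -- NOT Shazam's actual algorithm. It simply shows why
    # "score one query against millions of stored items" maps naturally onto a GPU.
    # Assumes the CuPy library and an NVIDIA GPU are available.
    import numpy as np
    import cupy as cp

    n_tracks, dim = 1_000_000, 128                   # made-up catalogue dimensions
    catalogue = np.random.rand(n_tracks, dim).astype(np.float32)
    query = np.random.rand(dim).astype(np.float32)

    catalogue_gpu = cp.asarray(catalogue)            # copy the catalogue into GPU memory
    query_gpu = cp.asarray(query)

    scores = catalogue_gpu @ query_gpu               # one data-parallel dot product per track
    best_index = int(cp.argmax(scores))              # highest-scoring candidate

    print("best matching track index:", best_index)
    # On a CPU the same scan walks the rows across a handful of cores; on the GPU,
    # thousands of threads score rows concurrently, which is where the speed-up
    # for this class of workload comes from.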

    “GPUs enable us to handle our tremendous processing needs at a substantial cost savings, delivering twice the performance per dollar compared to a CPU-based system,” said Jason Titus, chief technology officer of Shazam Entertainment. “We are adding millions of video and foreign language audio tracks to our existing services, and GPU accelerators give us a way to achieve scalable growth.”

    Visual Computing Appliance

    NVIDIA introduced a visual computing appliance that enables businesses to deliver ultra-fast GPU performance to any Windows, Linux or Mac client on their network. The GRID Visual Computing Appliance (VCA) is a GPU-based system that runs complex applications, such as those from Adobe and Autodesk, and sends their graphics over the network to be displayed on a client computer. With the click of an icon, a virtual machine called a workspace can be created, giving a user a dedicated, high-performance GPU-backed environment. These workspaces can be added, deleted or reallocated as needed.

    “NVIDIA GRID VCA is the first product to provide businesses with convenient, on-demand visual computing,” said Jen-Hsun Huang, co-founder and chief executive officer, NVIDIA. “Design firms, film studios and other businesses can now give their creative teams access to graphics-intensive applications with un-compromised performance, flexibility and simplicity.”

    The 4U GRID VCA appliance houses 16 NVIDIA GPUs and GRID VGX software, providing NVIDIA Quadro-class graphics performance for up to 16 concurrent users, with low latency, high resolution and maximum interactivity. The appliance will be available in the United States in May.

    “We’ve had enormous success using remote GPU acceleration on our content-creation applications,” said James Fox, chief executive officer at Dawnrunner, a San Francisco-based film production company. “Thanks to NVIDIA GRID VCA, we don’t spend weeks configuring workstations and transcoding files and projects. Instead, we have more time to deliver a higher quality product for our customers. And we can take on new projects with tighter deadlines.”

    Companies embrace NVIDIA GRID

    NVIDIA announced that enterprises can now deliver GPU-accelerated virtual desktops and professional graphics applications from the cloud to any device, anytime, anywhere. Dell, HP and IBM are offering NVIDIA GRID-based servers, while Citrix, Microsoft and VMware are offering NVIDIA GRID-enabled software. The Dell PowerEdge R720, HP ProLiant WS460c and SL250, and IBM iDataPlex dx350 MR servers contain the NVIDIA GRID K1 and K2 boards.

    “Enterprises want to take advantage of the growing trends towards globalization and mobility by virtualizing desktops and applications so users can work from anywhere, anytime on any device — while enabling the company to secure its core IP,” said Bob Schultz, group vice president and general manager, Desktops and Apps at Citrix. “By leveraging NVIDIA GRID K1 and K2, combined with Citrix XenDesktop and Citrix XenApp with HDX technology, enterprises can deliver the most graphics-intensive applications to users who require rich, interactive experiences from any device.”

    NVIDIA GRID enterprise solutions use NVIDIA GRID VGX software, which unlocks the virtualization and remoting capabilities of NVIDIA GRID GPUs and is licensed by Citrix for use in XenDesktop, XenApp and XenServer; by VMware for use in vSphere and Horizon View; and by Microsoft for use in RemoteFX. The GRID K1 boards use four Kepler GPUs and 16GB of memory, while the K2 boards use two higher-end Kepler GPUs and 8GB of memory.

    “Enabling customers to virtualize their multiple workloads is a challenge Dell is committed to, and the NVIDIA GRID technology enables our solutions to be more powerful for design and graphics-intensive applications,” said Sally Stevens, vice president of Dell PowerEdge marketing. “Starting with the PowerEdge R720 server this month, and later including Dell Precision workstations and end-to-end Dell Desktop Virtualization Solutions (DVS) Enterprise stacks, Dell will offer a range of robust graphics-virtualized solutions, enabling new customer mobility and data security opportunities that accommodate a wide range of graphics performance requirements.”

    To keep up with Big Data news, bookmark DCK’s Big Data Channel. To stay updated on virtualization, check out our Virtualization Channel.

    2:00p
    European Supercomputer to Map Human Brain

    JuQueen, a new supercomputer recently unveiled at the Jülich Supercomputing Centre in Jülich, Germany, is six times faster than its predecessor while using one-sixth of the energy. The supercomputer is the fastest in Europe and capable of performing quadrillions of calculations per second. A group of doctors, computer scientists and other researchers is embarking on a 10-year project to use the computer’s capabilities to map the entire human brain – from individual cells to large brain regions. The video runs 2:20 minutes.

    For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    2:30p
    Friday Funny: Raised Floor Adventures

    It’s Friday and time for a few laughs. Toward that end, we run our caption contest on Fridays, with cartoons drawn by Diane Alber, our favorite data center cartoonist. Please visit Diane’s website, Kip and Gary, for more of her data center humor.

    First, we must announce the winner of the “Pot of Gold” cartoon: Congrats to Colton Brown of Dupont Fabros who submitted, “Don’t worry, Gary! He’s our new private equity investor!”

    This week we present “Raised Floor Adventures.” Diane writes, “I’ve heard that people have found all sorts of weird things under the raised floor. . .” Enter your caption suggestion below.

    The caption contest works like this: We provide the cartoon and you, our readers, submit the captions. We then choose finalists, and readers vote for the funniest suggestion.

    The winner will receive a print of the cartoon featuring his or her caption, signed by Diane.


     

    For the previous cartoons on DCK, see our Humor Channel.

    3:00p
    Time Lapse: Lone Mountain Data Center Build

    Ever wish you could speed up time? Well, ViaWest compressed time in this video of a data center build. ViaWest, a privately held data center, cloud computing and managed services provider, recently opened its Lone Mountain data center facility in North Las Vegas. This short time-lapse video shows the construction of the Lone Mountain facility, the newest data center in ViaWest’s fleet and a Tier IV design. The video runs 1:30 minutes.

    For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.

    3:00p
    Cloudera Partners with T-Systems on Cloud Analytics


    Enterprise Hadoop provider Cloudera announced a European partnership for delivering analytics as a service, and AMD’s SeaMicro has obtained CDH4 certification from Cloudera for its SM15000 server.

    Cloudera and T-Systems Partner

    Cloudera announced that it has reached a strategic agreement with European IT services provider T-Systems to deliver cloud-based data analytics solutions based on Cloudera’s Platform for Big Data. T-Systems is Deutsche Telekom’s corporate customer arm. Cloudera Enterprise, powered by Cloudera Impala RTQ for real-time analytics, will provide the data management infrastructure layer, enabling native data integration, visualization and analysis at scale. T-Systems will integrate Cloudera with its existing cloud computing infrastructure and deliver on its strategic vision of developing big data analytics solutions as a key element of its IT solutions portfolio.
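
    Neither company published code with the announcement, but for context, Impala exposes a standard SQL interface over data stored in Hadoop. Below is a minimal, hypothetical Python sketch of a client-side query, assuming the open-source impyla driver and placeholder host and table names:

    # Minimal sketch of querying Cloudera Impala from Python. The impyla client,
    # hostname and table are assumptions for illustration; they are not part of
    # the Cloudera/T-Systems announcement.
    from impala.dbapi import connect

    conn = connect(host="impala-coordinator.example.com", port=21050)
    cur = conn.cursor()

    # An interactive, SQL-on-Hadoop aggregation -- the kind of query Impala's
    # real-time (RTQ) positioning refers to.
    cur.execute("""
        SELECT customer_region, COUNT(*) AS orders, SUM(order_value) AS revenue
        FROM sales_events
        WHERE order_date >= '2013-01-01'
        GROUP BY customer_region
        ORDER BY revenue DESC
        LIMIT 10
    """)

    for row in cur.fetchall():
        print(row)

    cur.close()
    conn.close()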

    “Our customers don’t want to have to worry about the hardware and software for big data,” says Christian Wirth, Vice President BI & Big Data at T-Systems. “They don’t want technology, just a reliable service. We can offer precisely this — which is what makes our new offer with Cloudera so special.”

    “We are excited to be working with T-Systems, one of Europe’s foremost IT service providers and a trusted global leader in cloud-based business solutions for the enterprise,” said Tim Stevens, Vice President of Business and Corporate Development at Cloudera. “Leaders choose leaders to partner with and this partnership is further validation that Cloudera is the big data solutions leader that enterprises trust. T-Systems’ unique cloud-based application of our Platform for Big Data will enable unparalleled scalability for data management and analytics and offer a great way for enterprises to more easily leverage the power of Hadoop.”

    SeaMicro SM15000 certified for Cloudera Hadoop

    AMD announced that the SeaMicro SM15000 server is now certified for CDH4, Cloudera’s Distribution Including Apache Hadoop Version 4. With up to 512 processor cores and more than five petabytes of storage, the SM15000 is a power-efficient big data server platform. The SM15000 was released last year, featuring SeaMicro’s network fabric, which extends beyond the chassis to connect directly to massive disk arrays – putting 5 petabytes of storage in a 10-rack-unit system.

    “The CDH4 certification assures our customers that the SM15000 completed and passed strict testing and performance requirements,” said Tim Stevens, VP of Business and Corporate Development at Cloudera. “Leveraging the deep domain expertise and expanding knowledge base offered by Cloudera and the greater Cloudera Connect partner ecosystem, AMD can enable its customers to bypass the complexity associated with deploying and managing Hadoop and put their data to immediate use. We’re committed to helping enterprises achieve the most from their big data initiatives, and we’re pleased that AMD has completed certification of the SM15000 on CDH4.”

