Data Center Knowledge | News and analysis for the data center industry

Friday, March 7th, 2014

    12:15p
    Facebook Adopts IKEA-Style Pre-Fab Design for Expansion in Sweden

    An illustration of Facebook’s new data center design as it will appear in its first implementation in Luleå, Sweden. (Image: Facebook)

    Facebook has begun building a second huge server farm in Luleå, Sweden, and has totally reworked its data center design for the project. The new building will span 25,000 square meters (270,000 square feet) and will combine factory-built components with lean construction techniques.

    Significantly, it will also eliminate Facebook’s distinctive penthouse cooling system, which uses the entire second story of the building to process fresh air to cool its servers.

    The new approach to data center construction, which Data Center Knowledge described in detail last month, is known as a Rapid Deployment Data Center or RDDC. Facebook says this new approach will be more efficient, use less material and be faster to deploy.

    “Just as the great Swedish company Ikea revolutionized how furniture is designed and built, we hope that Luleå 2 will become a model for the next generation of data centers,” Facebook said in its announcement of the Luleå expansion.

    “We expect this new approach to data center design will enable us to construct and deploy new capacity twice as fast as our previous approach,” Facebook’s Marco Magarelli said in a blog post at the Open Compute Project. “We also believe it will prove to be much more site-agnostic and will greatly reduce the amount of material used in the construction. Our newly announced second building at our Luleå, Sweden, campus will be the first Facebook data center to be built to our RDDC design.”

    Chassis and Kits

    In a session at the recent Open Compute Summit, Magarelli discussed two new design concepts that Facebook has developed for future facilities. One is a modular “chassis” approach to construction: shipping large pre-fabricated building blocks that can be rapidly put together, much like Legos, to create a building. The second focuses on IKEA-style kits filled with lightweight parts that can be assembled on-site to create rows of racks and ducting inside a data hall.

    Magarelli writes that the design initiative began with a hackathon in Facebook’s labs in October 2012.

    “The first idea developed during the hack was employing pre-assembled steel frames 12 feet in width and 40 feet in length,” Magarelli wrote. “This is similar in concept to basing the assembly of a car on a chassis: build the frame, and then attach components to it on an assembly line. In this model, cable trays, power busways, containment panels, and even lighting are pre-installed in a factory.”

    “The second concept developed during the hack was inspired by the flat-pack assemblies made famous by Ikea. Our previous data center designs have called for a high capacity roof structure that carries the weight of all our distribution and our cooling penthouse; this type of construction requires a lot of work on lifts and assembly on site. Instead, as Ikea has done by packing all the components of a bookcase efficiently into one flat box, we sought to develop a concept where the walls of a data center would be panelized and could fit into standard modules that would be easily transportable to a site.

    “In this scheme, we employ standard building products like metal studs and preassembled unitized containment panels that are then erected onsite and are fully self-supporting. These panels are limited to a width of 8 feet to maximize the amount of material that can be shipped on a flatbed trailer without requiring special permits for wide loads.”

    See Magarelli’s blog post for additional details on the new design.

    12:30p
    CloudBees Raises $11 Million, Led by Verizon

    Platform as a Service provider CloudBees announces that Verizon Ventures led an $11.2 million financing round to grow its business, SAP advances its cloud strategy for SAP HANA, and Informatica launches Cloud Spring 2014 to simplify integration across cloud and hybrid environments.

    CloudBees raises $11 million. Enterprise Platform as a Service provider CloudBees announced that it has closed an $11.2 million Series C financing round, led by Verizon Ventures. The round also included existing investors Matrix Partners and Lightspeed Venture Partners, as well as new investor Blue Cloud Ventures, and brings total investment in CloudBees to $25.7 million. The new funds will be used to drive continued revenue growth by rolling out additional product capabilities, funding sales expansion and extending the reach of the CloudBees brand. “PaaS and continuous delivery are transforming the way enterprises create business applications and deliver value to the business by accelerating the way applications are built and deployed,” said Sacha Labourey, founder and chief executive officer of CloudBees. “CloudBees is at the center of this evolution and we are excited to have a group with the stature of Verizon Ventures lead our Series C investment round. We will invest in initiatives that continue to improve our platform and strengthen our go-to-market capabilities.”

    SAP advances cloud options, strategy for SAP HANA. SAP announced next steps for delivering on its cloud strategy for SAP HANA, with simplified pricing, deployment and accessibility options that herald a significant business shift to cloud-based consumption and pricing models. Customers can now choose from three offerings on SAP HANA Cloud Platform: SAP HANA AppServices, SAP HANA DBServices and SAP HANA Infrastructure Services. Customers buy through a consumption model and can either implement end-to-end platform use cases or add options as needed, such as predictive analytics, spatial processing and planning. With the new and enhanced offerings, startups, ISVs and customers can build new data-driven applications in the cloud. The platform-as-a-service (PaaS) offers in-memory infrastructure, database and application services to build, extend, deploy and run applications, and new offerings are available for a simplified trial and purchase experience in sizes ranging from 128 GB to 1 TB of memory on SAP HANA Marketplace.

    Informatica launches Cloud Spring. Informatica (INFA) announced Informatica Cloud Spring 2014, a new release of its award-winning cloud integration platform. “Informatica Cloud Spring 2014 changes the game for cloud integration with Informatica Cloud Designer,” said Ash Kulkarni, senior vice president and general manager, Data Integration, Informatica. “IT developers get a drag-and-drop palette for advanced integration scenarios and line-of-business applications users get a wizard-based, self-service interface – all in one unified design environment.” The new release introduces Informatica Cloud Designer, which provides powerful, self-service integration design capabilities directly in the cloud. These capabilities make it easier for business analysts, SaaS application administrators and other line-of-business (LOB) users to collaborate with each other and with IT to bring new applications and data to the business faster.

    1:34p
    HP and DreamWorks Bring Peabody & Sherman To Life

    HP Converged Infrastructure powered the 70 million render hours required to create the new DreamWorks movie “Mr. Peabody & Sherman.” (Image: DreamWorks)

    DreamWorks Animation (DWA) has once again collaborated with longtime partner HP (HPQ) to produce the studio’s 28th animated movie, Mr. Peabody & Sherman. DreamWorks relied on HP Converged Infrastructure, high-performance workstations, printing and cloud to power the innovative techniques used in making the film.

    The making of Mr. Peabody & Sherman required more than 70 million render hours, with servers processing an average of 500,000 render jobs per day. With 200 terabytes of storage holding more than 800 million data files, animators processed 118,000 individual computer-generated frames and 250 billion pixels to create the 82-minute film. A total of 15 percent of the animation was rendered using the cloud.
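
    A quick back-of-the-envelope check shows how those figures hang together. The frame rate and the production window below are our assumptions for scale, not DreamWorks-published numbers:

        # Sanity-checking the render statistics quoted above. Inputs come from
        # the article; FPS and the production window are assumptions.

        RUNTIME_MINUTES = 82
        FPS = 24                      # standard theatrical frame rate (assumption)
        FRAMES_REPORTED = 118_000
        RENDER_HOURS = 70_000_000

        # 82 min x 60 s x 24 fps = 118,080 -- consistent with the reported 118,000.
        print(RUNTIME_MINUTES * 60 * FPS)

        # Cumulative compute per final frame; frames are rendered many times
        # across revisions and layers, so this is aggregate, not a single pass.
        print(round(RENDER_HOURS / FRAMES_REPORTED))    # ~593 render hours per frame

        # Spread over a hypothetical 18-month production, that workload implies
        # roughly this many cores running around the clock:
        print(round(RENDER_HOURS / (18 * 30 * 24)))     # ~5,400 concurrent cores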

    “DreamWorks Animation’s long term relationship with HP has allowed us to spearhead the future of animated movie-making,” said Kate Swanborg, Head of Technology Communications and Strategic Alliances. “HP’s Converged Infrastructure technologies enabled our artists and engineers to deliver a film with spectacular, cutting-edge visuals for audiences to enjoy.”

    With studios in the U.S. and India, DreamWorks used HP Z800 and Z820 high-performance workstations in conjunction with HP DreamColor professional monitors that support 1 billion active colors, as well as HP ProLiant servers and scalable converged storage featuring HP 3PAR StoreServ Storage. DreamWorks also relies exclusively on HP Networking to provide secure, 24/7 availability, allowing creative teams to work from any location, on any movie, at any time.

    2:21p
    The Science Behind Successful Data Center Capacity Planning


    The dynamic nature of today’s business has placed new demands on the modern data center. Resource utilization is rising as cloud computing and IT consumerization proliferate. Organizations are now tasked with supporting more users and far more data, and all of it must be handled efficiently. In many data centers, a big part of the question revolves around capacity. The important concept to understand is that the term data center capacity is truly evolving: new technologies and solutions are abstracting services, creating new data center delivery models and optimizing overall management strategies.

    Emerging industry trends toward Data Center Infrastructure Management (DCIM) and the Software Defined Data Center (SDDC) demonstrate a continuing need to balance IT, communications and facilities management. Capacity planning brings together the resource and output factors that define why a data center was commissioned and how it fulfills that purpose. As critical resources become more expensive or scarce, planning for future capacity requirements becomes more critical.
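
    To make that concrete, here is a minimal forecasting sketch. The load, limit and growth figures are hypothetical illustrations, not drawn from the white paper discussed below:

        # Minimal capacity-forecast sketch. The figures are hypothetical; the
        # arithmetic is the generic compound-growth model planners use to
        # estimate when critical power runs out.
        import math

        current_load_kw = 600.0      # hypothetical current IT load
        critical_power_kw = 1000.0   # hypothetical facility limit
        annual_growth = 0.15         # hypothetical 15% load growth per year

        # Solve current * (1 + g)^t = limit for t.
        years = math.log(critical_power_kw / current_load_kw) / math.log(1 + annual_growth)
        print(f"power headroom exhausted in ~{years:.1f} years")   # ~3.7 years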

    Data center capacity planning can be a complex undertaking with far-reaching strategic and operational implications. This white paper from DCD Intelligence, sponsored by Server Technology, examines industry insights and lessons on the practical steps that are needed to develop a successful power and capacity planning strategy.


    As with any data center deployment, expansion or development project, considerations around current and future demands are absolutely critical. This white paper looks at key decision elements and helps create a logical map of data center capacity planning points, including:

    • Industry-specific regulations
    • Rack optimization
    • Power utilities
    • Cooling
    • Green issues and meeting corporate social responsibility commitments
    • Budgeting
    • Software tools in use

    The planning process never really stops. As your data center and organization evolve, new industry dynamics will influence how capacity is assessed. For example, it is already critical to assess the implications that cloud computing and advanced virtualization have on utilization and capacity planning. Consider this: multi-tenant cloud services are almost always cheaper than private or hybrid cloud environments because data center or lab resources can be shared and utilization increased. One research note indicated that IT equipment tends to draw 50 percent of its maximum power when idling, so ensuring servers are doing useful work rather than idling is a vital element in improving efficiency.
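
    A small sketch makes the point. The peak wattage is a hypothetical example; the 50 percent idle ratio is the figure cited above:

        # Why idle draw hurts efficiency: if a server pulls ~50% of peak power
        # while idle, energy per unit of work falls sharply as utilization rises.
        # The 500 W peak figure is hypothetical.

        PEAK_W = 500.0
        IDLE_W = 0.5 * PEAK_W   # 50% of maximum power at idle, per the research note

        def power_draw(utilization: float) -> float:
            """Linear power model between idle and peak (a common approximation)."""
            return IDLE_W + (PEAK_W - IDLE_W) * utilization

        for util in (0.1, 0.3, 0.6, 0.9):
            watts = power_draw(util)
            print(f"{util:.0%} utilized: {watts:.0f} W total, {watts / util:.0f} W per unit of work")
        # 10% utilization costs ~2,750 W per unit of work; 90% costs ~528 W --
        # a roughly fivefold efficiency gain from keeping machines busy.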

    Download this white paper today to understand just how much concepts around data center capacity planning have truly evolved. There are now numerous considerations around an ever-expanding data center model. Organizations are now building their business model directly around the data center – which makes intelligent capacity planning strategies even more important.

    2:26p
    CyrusOne Breaks Ground on Houston Hub for Energy Data Crunching

    An illustration of what the CyrusOne Houston West campus will look like upon completion. (Image: CyrusOne)

    IT infrastructure provider CyrusOne officially started construction of its third data center at its Houston West data center campus, which serves as a center of high-density computing power for the energy industry. The company broke ground Thursday on the project, which will include 428,000 square feet of raised floor and up to 96 megawatts of critical load. The third building on the campus will eventually boost Houston West to more than 1 million square feet.

    The facility will be constructed in two phases. Phase 1, with up to 48 megawatts of critical power load, will consist of a 321,000 square foot powered shell with 214,000 square feet of raised floor and 43,000 square feet of Class A office space. The first data hall will have 54,000 square feet of raised floor and is expected to be operational by early 2015.
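
    Those figures imply an unusually dense design. The watts-per-square-foot values below are derived from the article’s numbers, not published by CyrusOne:

        # Implied power density at the new Houston West building, derived from
        # the figures above (the W/sq ft values themselves are not published).

        phases = [("Phase 1", 48, 214_000), ("Full build", 96, 428_000)]
        for label, megawatts, sqft_raised_floor in phases:
            density = megawatts * 1_000_000 / sqft_raised_floor
            print(f"{label}: {density:.0f} W per sq ft of raised floor")
        # Both come out near 224 W/sq ft, well above the ~100-150 W/sq ft often
        # cited for general-purpose colocation -- consistent with the ultra-high
        # density HPC positioning described below.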

    “CyrusOne’s ability to deliver high-performance compute (HPC) solutions, including its ultra-high density HPC solutions for the oil and gas industry, has made the Houston West campus a preferred data center destination and geophysical computing center of excellence for the energy sector,” said Gary Wojtaszek, president and chief executive officer of CyrusOne. “Adding a third facility at the Houston West campus enables us to serve the growing number of customers from the energy industry as well as other industries who need low to ultra-high density colocation space for their mission-critical infrastructure.”

    Houston-based Kirksey Architecture designed the new facility, and kW Mission Critical Engineering is providing facility-engineering services.

    The facility will be built to 2N power redundancy. Customers will have access to the CyrusOne National IX, a data center platform that links a dozen of CyrusOne’s facilities across multiple metropolitan markets. CyrusOne was also recently the first provider to receive multi-site data center certification from the Open-IX Association.

    The company has 25 carrier-neutral facilities across the United States, Europe and Asia, and is building out aggressively this year, having also announced a 48 megawatt facility in Northern Virginia.

    2:30p
    DDN Advances Object Storage with Major Platform Updates

    DataDirect Networks (DDN) answers the unstructured data challenge with major updates to its advanced WOS object storage platform, and announces that it is joining the Active Archive Alliance.

    Major advances to WOS Object Storage

    DDN introduced a significant update to its WOS object storage platform, adding new software features. The WOS 360 full-spectrum object storage update adds a broad set of sophisticated new data protection, archive, collaboration and distribution capabilities designed specifically to meet the five key object storage requirements of customers today: scalability, accessibility, efficiency, reliability and performance.

    WOS 360 offers options for single-data-center or multi-data-center deployment for data protection and data sharing, as well as long-term archive retention with the introduction of the new WOS Archive Node hardware option. WOS was designed as a true object storage platform with a flat, single-layer address structure in which objects are stored in a contiguous group of blocks, so disk operations are minimized, performance is maximized and disks are used at full capacity. The platform provides local, replicated and globally distributed erasure coding.
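
    As a rough illustration of what a flat, single-layer address space means in practice, here is a toy sketch. It is illustrative only, not DDN’s WOS implementation or API:

        # Toy flat object store: one layer of object IDs mapped straight to
        # stored blobs, with no directory hierarchy to walk on each access.
        import uuid

        class FlatObjectStore:
            def __init__(self):
                self._objects = {}          # object ID -> stored byte blob

            def put(self, data: bytes) -> str:
                """Store a blob and return its system-assigned object ID."""
                oid = str(uuid.uuid4())
                self._objects[oid] = data   # one lookup, no path traversal
                return oid

            def get(self, oid: str) -> bytes:
                return self._objects[oid]

        store = FlatObjectStore()
        oid = store.put(b"seismic trace 42")
        assert store.get(oid) == b"seismic trace 42"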

    With the latest WOS update, DDN introduces Global ObjectAssure, a new kind of distributed erasure coding, designed for multi-site deployments where customers require very low-cost infrastructures for very large data sets. Global ObjectAssure, with local and global data protection capabilities, is designed for use cases in which low network overhead and high levels of data durability are key decision criteria. DDN is also introducing the new, low-cost WOS Archive Node. For organizations looking to build large scale active archives, the Archive Node in combination with Global ObjectAssure helps drive down costs by up to 20 percent.
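
    DDN does not disclose ObjectAssure’s coding parameters, so the shard counts in the sketch below are hypothetical, but they show the storage-overhead arithmetic that lets erasure-coded archives undercut replication:

        # Raw-capacity overhead: replication vs. k+m erasure coding. The shard
        # counts are hypothetical illustrations, not DDN's actual parameters.

        def overhead(data_shards: int, parity_shards: int) -> float:
            """Raw bytes stored per logical byte for a k+m erasure code."""
            return (data_shards + parity_shards) / data_shards

        print(f"3x replication:       {3.00:.2f}x raw per logical byte")
        print(f"8+3 erasure code:     {overhead(8, 3):.2f}x (tolerates 3 lost shards)")
        print(f"10+6 geo-distributed: {overhead(10, 6):.2f}x (tolerates 6 lost shards)")
        # Coded layouts tolerate multiple shard (or site) losses at roughly half
        # the raw capacity of triple replication -- the source of the cost
        # savings claimed for large-scale active archives.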

    “As more organizations re-evaluate their storage architectures to meet the demands of modern massive-scale environments, there is a rapid acceleration in the interest and deployment of object storage platforms,” said Molly Rector, Chief Marketing Officer at DDN. “Some of the largest data environments in the web, cloud and research fields are DDN WOS customers who are setting the roadmap for DDN product innovation. Our WOS 360 announcement today, which includes new software and the Archive Node hardware, reflects DDN’s commitment to listening to our customers’ business requirements and delivering technology to help solve their rapidly changing data storage needs.”

    DDN joins Active Archive Alliance

    Expanding its ecosystem for archive solutions, DDN announced that it is joining the Active Archive Alliance. Users expect vendor solutions to offer seamless interoperability and ease of use as data migrates across storage tiers. The Active Archive Alliance is the industry association users turn to for thought leadership and demonstrated vendor compatibility in long-term, cost-effective data retention solutions. DDN WOS and GRIDScaler technologies address active archive use cases and provide collaboration, access and tiered storage capabilities.

    “DDN offers tremendous expertise in high-performance, scale-out storage solutions,” said Molly Rector, Chief Marketing Officer at DDN. “As demand increases among our users to offer multiple storage tiers coupled with the ability to tune performance, access and data protection parameters, we look forward to collaborating with the Active Archive Alliance to define industry requirements and educate end users on best practices. Technology is evolving rapidly; DDN and the Active Archive Alliance are jointly committed to helping users access the best technologies to meet their data storage and management requirements.”

    3:01p
    Data Centers Go to College: New Master’s Degree Offered by SMU

    It’s hard to find skilled staff, but the education system might finally be catching on to the importance of the data center. Southern Methodist University (SMU) in Dallas will begin offering a master’s degree in data center systems engineering, while Northwestern University in Chicago is offering a graduate level course on data centers this semester.

    Two industry leaders are helping establish these efforts, as Compass Data Centers CEO Chris Crosby is helping SMU develop its curriculum, while Ubiquity Critical Environments CEO Sean Farney is teaching the course at Northwestern.

    It’s surprising there aren’t graduate programs tailored to the data center, considering that the workforce associated with data center operations tops 4 million, according to the U.S. Department of Labor. That number is growing, with an expected increase of 800,000 by 2016, and 2 million by 2018. Approximately 70 percent of these workers have a bachelor’s degree or higher. Thus far the primary college-level curriculum has been an online course from the Institute for Data Center Professionals at Marist College.

    Starting this fall, SMU will offer a new master’s degree in data center systems engineering. This is the first program in the United States to offer a multidisciplinary graduate degree specific to the data center. The program is focused on preparing professionals for a leadership role in this field as a technical contributor or manager.

    “Diverse Combination of Highly Specialized Skills”

    “Our society has become intimately linked to a variety of digital networks including social media, search engines, e-commerce, gaming and big data,” said Marc Christensen, Dean of the Lyle School. “Data center design is a fascinating challenge due to the millions of dollars lost per second of outage. The proper management and design of these datacenters require a diverse combination of highly specialized skills, and SMU Lyle is uniquely positioned to offer a degree that will connect all the needed technical disciplines.”

    Dallas is a fitting backdrop for the program, given the abundance of data centers located in or near the city. Approximately 50 data centers exist within the greater Dallas area.

    The Master of Science in Datacenter Systems Engineering is built around five core courses, broadly covering the industry. Elective specializations fall within three technical areas:

    • Facilities, infrastructure and subsystems
    • Data systems engineering and analytics
    • Computer networks, virtualization, security and cloud computing

    Enrollment is expected from current professionals in industry and government, as well as undergrads in engineering, science, mathematics and business preparing to enter the data center field for the first time.

    “A Long Unfulfilled Need”

    Compass CEO Crosby is volunteering his time to help build the curriculum and do some guest lecturing when classes start this fall. He is also helping SMU raise awareness of the program in the industry so that the inaugural class of graduate students is as strong as possible.

    “SMU’s Master of Science in Datacenter Systems Engineering program addresses a long-unfulfilled need in the datacenter industry,” said Crosby. “Its comprehensive, cross-disciplinary curriculum provides the breadth of knowledge professionals need for success in this complex industry with numerous interdependencies.”

    Data centers on the whole have hit prime time. The program is open to both full- and part-time graduate students and is available on the Dallas campus, as well as through the Bobby B. Lyle School of Engineering’s distance education program. All of the Lyle School courses are broadcast live, with recordings available to students afterward.

    More information about SMU’s Master of Science in Datacenter Systems Engineering is available at the SMU website or by contacting the Lyle Graduate Programs Office at 214-768-2002.

    Northwestern Examines Impact of Data Center Business

    The course at Northwestern looks at the data center’s impact on the world around us. Farney covers history, development, technology and financials.

    “Data centers are no longer just the purview of the data center technician,” said Farney. “We’re now in a ‘data centered’ economy. You can’t get through a day without somehow using a data center service.”

    Farney points out that 43 percent of the market cap of the top 50 global companies is related to data center infrastructure. The data center works in the background of everything we do on our phones, tablets, computers and more. The course examines data center outsourcing and consolidation and is targeted at first-year graduate students.

    “I have students ranging from highly technical to more business focused,” said Farney. “So I have to have a broad appeal. First, I define what is a data center – mechanical, electrical, big robust buildings. I talk about what they look like and what they do. Then I talk logically about data centers.”

    Farney believes it’s time to increase the focus on data centers in education.

    “We need to look at this industry in graduate programs,” said Farney. “There is a massive amount of capital spend going on in the industry. We are spending in the data center, it’s creating jobs; business schools should be paying attention going forward. I hope educational institutions will get on board with how important data centers are.”

    7:27p
    Friday Funny: The Luck O’ The Data Center

    It’s Friday and time for a bit o’ lucky humor before the work week ends. Diane Alber, the Arizona artist who created Kip and Gary, has a new cartoon for Data Center Knowledge’s cartoon caption contest. We challenge our readers to submit a humorous and clever caption that fits the comedic situation. Please add your entry in the comments below. Then, next week, our readers will vote for the best submission.

    Here are Diane’s thoughts on this week’s cartoon: “Sometimes you need a little luck when you’re managing a data center…”


    Hearty congrats to Mike from IO, who submitted last week’s winning caption, “I pictured virtual currency to be a bit less... heavy.”

    For the previous cartoons on DCK, see our Humor Channel. For more of Diane’s work, visit the Kip and Gary website.
