Data Center Knowledge | News and analysis for the data center industry
Monday, April 15th, 2013
12:09p
Data Center Jobs: GE Energy Management
At the Data Center Jobs Board, we have a new job listing from GE Energy Management, which is seeking a Critical Power, Technical Solutions Director in Baltimore, Maryland.
The Critical Power, Technical Solutions Director is responsible for proactively promoting and positioning GE early in the project phase to create value and influence; developing high-level strategic relationships with key decision makers at mission-critical electrical consultants; developing Critical Power solutions covering GE product lines such as UPS, paralleling switchgear, automatic transfer switches, power transformers, substations, switchgear, switchboards, busway, panelboards, power management/automation/software, and service solutions; driving the identification, prospecting and creation of new sales opportunities and pipeline using Salesforce.com and other commercial tools; and working closely with the regional sales and commercial operations teams on projects throughout the sales cycle, from opportunity identification to post-order close-out. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
12:28p
Teradata Brings More Speed, Power to Analytics Platform
Teradata (TDC) today introduced the Teradata Active Enterprise Data Warehouse 6700 platform, a new core analytic brain for big data sets, along with a new fabric-based hyper-speed nervous system. The new offerings expand the company’s Unified Data Architecture, which brings together Teradata, Teradata Aster, and Hadoop technology, as well as partner tools such as SUSE Linux, Intel Xeon processors and now Mellanox InfiniBand.
Active Enterprise Data Warehouse platform 6700
“With adoption of fabric-based computing, Teradata offers a high-speed, private analytic network with flexible configurations as the backbone of the entire Teradata Unified Data Architecture,” said Scott Gnau, president of Teradata Labs. “The speed of connection and robust management of fabric-based computing empowers our customers to take hyper-speed analytics and business insights to a new level.”
The new 6700 platform leverages the fabric-based computing benefits of its BYNET software on high-speed InfiniBand networking. BYNET is Teradata’s secret sauce, providing on-the-fly, in-query data sorting and unique point-to-point and broadcast communication functions. With Mellanox InfiniBand, communication is 20 times faster, bringing scalability, performance and reliability for processing large amounts of data. Workload performance has been boosted by 40 percent over the previous version by leveraging the Intel Xeon processor E5 family. The Teradata Active Enterprise Data Warehouse has up to eight times more memory per cabinet, with a typical system reaching multiple terabytes of memory. These system performance enhancements enable customers to do the same data warehouse work using up to 50 percent less energy than the Teradata platform of two years ago.
“Data analytics has become a competitive asset for today’s enterprises that enables quicker reaction and response to changing business conditions and consumer behavior,” said Motti Beck, director, Enterprise Market Development at Mellanox Technologies. “When the Teradata Unified Data Architecture is integrated with Mellanox’s end-to-end InfiniBand solutions as the fabric-based computing foundation, customers reap the benefits of data analytics performance and response time. Together with Teradata, we enable organizations to transform their data into a competitive business advantage.”
Teradata Enterprise Access for Hadoop
Teradata announced Enterprise Access for Hadoop to offer business analysts streamlined, self-service, cost-effective access to Apache Hadoop. As a part of the Unified Data Architecture, the new technology enables business analysts to reach through Teradata directly into Hadoop to find new business value from the analysis of big, diverse data. Offering Teradata Smart Loader for Hadoop and Teradata SQL-H, the solution features robust security, workload management, and comprehensive standard ANSI SQL support on Apache Hadoop.
“As a key component of the Unified Data Architecture, Hortonworks Data Platform provides a reliable Apache Hadoop distribution for enterprise users,” said Bob Page, vice president, products, Hortonworks. “The deep integration of Hortonworks Data Platform with both the Teradata data warehouse and Teradata Aster discovery platform provides users with easy access to data, while giving enterprise IT the advanced security it requires.”
Teradata Studio with Smart Loader for Hadoop enables point-and-click convenience for browsing and moving data between Teradata and Hadoop for analysis and self-service intelligence. Teradata SQL-H gives any user or application across the enterprise direct, on-the-fly access to data stored within Hadoop through ANSI SQL, leveraging the security, workload management, and performance of the Teradata data warehouse. The integrated data warehouse is a cornerstone of the Unified Data Architecture, enabling real-time delivery of intelligence to front-line decision-makers.
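To make the access pattern concrete, here is a minimal sketch of what “reaching through the warehouse into Hadoop” can look like from the analyst’s side: an ordinary ANSI SQL query issued from a client script, with part of the data resolved from Hadoop. The ODBC data source, schema, table and column names are hypothetical, and the exact way a Hadoop-backed table is exposed depends on the Teradata release and configuration.

```python
# Hypothetical sketch: an analyst-side script issuing plain ANSI SQL against the
# warehouse, where one of the tables is backed by data living in Hadoop.
# The DSN, schema and table names are assumptions, not Teradata documentation.
import pyodbc

conn = pyodbc.connect("DSN=teradata_dw")  # hypothetical ODBC data source name
cursor = conn.cursor()

# Join curated warehouse data (orders) with raw clickstream data surfaced from
# Hadoop, with no separate ETL step visible to the analyst.
cursor.execute("""
    SELECT o.customer_id,
           COUNT(c.page_url)  AS pages_viewed,
           SUM(o.order_total) AS revenue
    FROM   dw.orders o
    JOIN   hadoop_clickstream c   -- hypothetical Hadoop-backed table
           ON o.customer_id = c.customer_id
    GROUP  BY o.customer_id
""")

for customer_id, pages_viewed, revenue in cursor.fetchall():
    print(customer_id, pages_viewed, revenue)

conn.close()
```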
“Today’s announcement of Teradata Enterprise Access for Hadoop is another example of our aggressive commitment to building out the Teradata Unified Data Architecture,” said Scott Gnau, president of Teradata Labs. “Teradata Enterprise Access for Hadoop empowers organizations to dig deeply into files and data residing in Hadoop and combine the data with production business data for analyses – and action.”
12:56p
Sixth Key to Brokering IT Services Internally: Prove What You Delivered
Dick Benton, a principal consultant for GlassHouse Technologies, has worked with numerous Fortune 1000 clients in a wide range of industries to develop and execute business-aligned strategies for technology governance, cloud computing and disaster recovery.
In our last post, I outlined the fifth of seven key tips IT departments should follow if they want to begin building a better service strategy for their internal users: building the order process. That means developing an automated method for provisioning services via a Web console that can satisfy today’s on-demand consumers.
Measuring and Communicating Your Outcomes
This post covers how to prove what you delivered, because without metrics, monitoring and reporting that demonstrates you’ve fulfilled Service Level Agreements (SLAs), your service consumers and your management won’t know that you’ve met your commitments.
Service offerings and the subsequent signed SLAs will typically contain two types of service delivery metrics. The first group comes under quality of service and may include performance (e.g. IOPS), availability (scheduled hours) and reliability (number of nines). The second group covers protection attributes including operational recovery point and recovery time objectives, as well as disaster recovery point and recovery time objectives. Some organizations also include a historical recovery horizon and retrieval time as service attributes. Service offerings may also typically offer some level of compliance or security protection, and most importantly, the offerings should include the cost of the deployable resource unit of the service offering.
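As a concrete illustration of how the quality-of-service numbers relate to one another, the short sketch below converts measured downtime against scheduled hours into an availability percentage and the familiar “number of nines.” The figures are illustrative only and not drawn from any particular SLA.

```python
# Illustrative only: relate downtime, scheduled hours, availability and "nines".
import math

scheduled_hours = 24 * 30        # a 24x7 service over a 30-day month
downtime_minutes = 4.3           # total unscheduled downtime in the period

availability = 1 - (downtime_minutes / 60) / scheduled_hours
nines = -math.log10(1 - availability) if availability < 1 else float("inf")

print(f"Availability: {availability:.5%}")   # -> 99.99005%
print(f"Approximate nines: {nines:.1f}")     # -> about 4.0 nines
```

Protection attributes such as recovery point and recovery time objectives would be tracked the same way, with the measured values coming from backup and replication logs rather than uptime monitors.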
Determining KPIs
It is very important that the process to establish service offering metrics includes the very people who must execute to the key performance indicators (KPIs) around each service. The operations staff must strongly believe in its ability to deliver to the target metrics. This is not the time to allocate stretch goals. In fact, nothing is more detrimental to consumer satisfaction (and IT morale) than IT failing to meet a published goal. Initial metrics must be absolutely achievable, and operations people must believe that they have an excellent chance of meeting those targets. Once operations have settled in, and the bumps have been worked out, then the process of using upper and lower thresholds and tracking actuals within the desired ranges can start to drive improvements and a better service level for the next service catalog publication. This means IT is now visibly improving its service levels and thus consumer satisfaction.
Determining how to measure service attributes can require some creative thinking. You need a metric that can actually be captured and trended. The service indicators can be measured relatively easily for servers, storage and networks. However, operational protection service indicators can be more challenging. The dimension of time frame is also important. For example, will your metric offer a standard for a single point in time, a trend between upper and lower thresholds during the operational day or a standard at peak periods of the day? It is important to focus your choice of metrics on measures that the end consumer can understand and value. If you are going to differentiate between services based on such metrics, they need to be in “consumer speak” rather than “IT speak.” Formulating an appropriate policy on metrics, their time frame and their reporting should be a fundamental part of your service catalog.
Realistic Measurements
The prudent CIO will take steps to ensure that each of the attributes mentioned in the service offering (as detailed in the organization’s service catalog) can be empirically tracked, monitored and reported. These indicators should be established with target operations occurring between upper and lower thresholds. Using a single target metric instead of upper and lower thresholds can inhibit the ability to intelligently track performance for continuous improvement, and can result in a potentially demoralizing black-and-white picture for the operations team. In other words, you either made it or you didn’t. With a range of “acceptance” metrics, the IT organization can ensure their own “real” target is smack in the middle of the acceptable range, with consumer expectations set at the lower threshold. It is important to ensure that the end consumer perceives the lower end of the range as an acceptable service level for the resource they have purchased. This approach gives IT some wiggle room, while the system and the processes and people supporting it go through the changes needed to deliver effective services. More importantly, it also provides an incentive to rise above the target with service level improvements.
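Here is a minimal sketch of that threshold approach, with purely illustrative metric names and values: the lower threshold is the consumer-facing commitment, the internal target sits mid-range, and each actual reading is classified against the band rather than against a single pass/fail number.

```python
# Illustrative sketch: classify an actual reading against an acceptance band,
# with the internal target assumed to sit at the midpoint of the range.
def assess(metric, actual, lower, upper):
    internal_target = (lower + upper) / 2
    if actual < lower:
        status = "BREACH - below the committed service level"
    elif actual < internal_target:
        status = "OK - within band, below internal target"
    else:
        status = "GOOD - at or above internal target"
    return f"{metric}: actual={actual}, band=[{lower}, {upper}] -> {status}"

print(assess("Storage IOPS delivered", actual=9200, lower=8000, upper=12000))
print(assess("Availability (%)",       actual=99.2, lower=99.5, upper=99.95))
```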
Now, Measure!
Now that you know exactly what it is you are measuring and how the attributes will be measured, you have a specification for selecting an appropriate tool or tools to support your efforts. Unfortunately, finding the tools to produce the metrics can be a challenge. There are few, if any, that can work across the range of infrastructure and the vendors who provide it. Typically, more than one tool is required. Many organizations have chosen a preferred vendor and stick with that vendor’s native tools, while others have selected two or more third-party tools with the hope of staying viable as vendors constantly enhance and improve their products. However, at the end of the day, a simple combination of native tools and some creative scripting will provide all the basics you need.
Finally, the prudent CIO will develop and publish a monthly “score card” showing which divisions or departments are using which service offerings, how much those service offerings cost, and most importantly, how IT performed in meeting its service level objectives for the period and in comparison to the previous reporting period. This provides a foundation on which new relationships and behaviors can be based, with IT being able to empirically prove that they delivered what they promised, and in some cases, beat what they promised.
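As a rough idea of what that combination of native tools, scripting and a monthly score card could look like in practice, the sketch below rolls per-department readings up into cost and SLO attainment. Every department name, service offering, unit cost and reading here is invented for illustration; real input would come from whatever native or third-party tools the organization already runs.

```python
# Illustrative scorecard roll-up. Readings would normally be exported from the
# native monitoring tools mentioned above; these values are invented.
from collections import defaultdict

# (department, service offering, units consumed, SLO met this period?)
readings = [
    ("Finance",   "Gold Storage",   120, True),
    ("Finance",   "Silver Compute",  40, True),
    ("Marketing", "Silver Compute",  65, False),
]
unit_cost = {"Gold Storage": 25.0, "Silver Compute": 60.0}  # cost per unit

scorecard = defaultdict(lambda: {"cost": 0.0, "periods": 0, "met": 0})
for dept, offering, units, slo_met in readings:
    row = scorecard[dept]
    row["cost"] += units * unit_cost[offering]
    row["periods"] += 1
    row["met"] += int(slo_met)

for dept, row in sorted(scorecard.items()):
    attainment = 100 * row["met"] / row["periods"]
    print(f"{dept:<10} cost=${row['cost']:>8.2f}  SLO attainment={attainment:.0f}%")
```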
This is part of a seven-part series from Dick Benton of GlassHouse Technologies. See his first post of the series.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
2:30p
HP Converged Infrastructure Reference Architecture Design Guide
The modern data center environment has become the heart of almost any organization. Because of this, there is now a greater emphasis on making data center systems more efficient. This means eliminating complex, distributed resource platforms in favor of optimized converged infrastructures.
In HP’s Architecture Design Guide, you are able to see how information technology as a “service” has moved from concept to reality. Early adopters have already deployed major solutions, and it has become a standard objective for mainstream Information Technology (IT) architects and planners. HP has adopted the term “Converged Infrastructure” to describe how HP products and services can address this approach. This Reference Architecture guide provides a business and technical view of the adoption process.
Today’s IT demands span the data center, from capacity to technology, processes, people, and governance. In this technical guide, HP outlines how its Converged Infrastructure opens the door to new approaches, and can enable IT management to defer or avoid costly data center expansions. For example:
- Simplification: Collapse siloed, hierarchical, point-to-point infrastructure into an easily managed, energy-efficient, and re-usable set of resources.
- Enabling growth: Efficiently deploy new applications and services, with optimum utilization across servers, storage, networking, and power.
- On-demand delivery: Deliver applications and services through a common framework that can leverage on-premise, private cloud, and off-premise resources.
- Employee productivity: Move resources from operations to innovation by increasing automation of application, infrastructure, and facility management.
HP Converged Infrastructure enables organizations to achieve these goals while getting ahead of the growth curve and the cost curve. In working with data centers today, administrators must design environments capable of high density and efficient scalability.
[Image source: HP: The transformation to HP Converged Infrastructure]
This Reference Architecture Design Guide will outline all of the major components which fall into the HP Converged Infrastructure design. This includes:
- Virtual resource pools
- Working with FlexFabric
- Using the Matrix Operating Environment
Download HP’s Architecture Design Guide to learn how to create a more efficient data center platform. As the need for user density grows, it’s important to ensure that the right technologies are in place to help facilitate that expansion.
2:45p
The Rise of the Worker-Friendly Data Center
The recreation area at the new CyrusOne data center in Carrollton, Texas, features a spiral slide between stories, fitness machines, a rock climbing wall and a putting green. (Photo: Rich Miller)
CARROLLTON, Texas – As you walk through the new CyrusOne data center near Dallas, the tour winds through a modern cafeteria with a stylish eating area, an annex housing foosball tables, and a gaming nook with an advanced video game system. This opens onto one of the most distinctive spaces in the facility – a two-story recreation area featuring a climbing wall, putting green, and a spiral slide allowing a speedy trip from the fitness machines on the upper level.
The recreation area at CyrusOne reflects a new focus on the data center as a work space for busy professionals, complete with amenities to help them be more productive and unwind a bit. Data centers are designed primarily to house thousands of servers, but the nondescript concrete bunker of the past is giving way to campuses optimized for humans, complete with comfortable offices, conference rooms, theaters and gaming areas.
“We are pairing all our next generation facilities with industry-best office space so it’s a uniquely comfortable experience for our customers,” says Kevin Timmons, CTO at CyrusOne. “They will have ample room to relax, connect, or grab an espresso in an environment a short stroll away from their infrastructure.”
Differentiating Multi-Tenant Facilities
It’s a trend seen primarily in multi-tenant data centers, where customer amenities offer an opportunity to differentiate a facility in a competitive market. This has meant more attention to the needs of data center staff, a unique breed of workers that historically have had to labor in 100-degree hot aisles, work on laptop carts, and traverse man-traps and biometric security just to get to the restroom.
CyrusOne, which offers both colocation cages and wholesale suites, is among a growing number of companies seeking to create more comfortable working environments for data center staff.
- At Vantage Data Centers, office space and other customer amenities account for about 20 percent of the space in its new 60,000 square-foot facility in Santa Clara, Calif. In addition to two 20,000 square foot (3 megawatt) data halls, the building includes 12,000 square feet of conference rooms, kitchenettes, locker rooms with showers and Class A office space.
- The hallways of the RagingWire Enterprise facility in Ashburn, Virginia, are lined with original artwork, including some pieces created by RagingWire staffer Julie Bjorgum from recycled materials from the construction of the facility. The data center also features abundant office space and conference rooms, as well as a colorful break area and dining space, and a separate area for gaming and video.
- The SuperNAP in Las Vegas also features many visual flourishes usually seen in enterprise office space and includes a plush theater that is available for customer events.
- IO has nearly 80,000 square feet of office space at its huge IO Phoenix data center, which also includes meeting rooms and several amphitheaters.
CyrusOne’s Timmons says the focus on amenities was driven by demand from customers, who have quickly snapped up all the available office space in the company’s Texas facilities. When Timmons and his team set out to design new facilities in Phoenix and Dallas, they included a generous office component in each project.
The Phoenix facility features 96,000 square feet of Class A office space and conference rooms, complete with a glass atrium and facade. In Dallas, CyrusOne has 30,000 square feet of office space for its headquarters operations, and another 30,000 square feet of office space for customers, plus the cafeteria, conference rooms and recreation areas.
These customer-friendly flourishes have their greatest value in the premier data center markets – including Silicon Valley, northern Virginia and Dallas – where customers can choose between a number of service providers. But they also hold appeal for enterprises that operate data centers at their headquarters buildings or on a corporate campus, where the servers are within walking distance of office space for IT staff. These companies with on-premises data centers are a key target audience for multi-tenant data centers, and the availability of office space and amenities could ease the decision to shift gears to third-party facilities.
Here’s a look at some of the other amenities at the CyrusOne Dallas facility:
The data center features a cafeteria and dining area with a sleek modern design. (Photo: Rich Miller)
Not your ordinary vending machine: This unit is fully stocked with a variety of cables and connectors that customers may need, provided at cost by CyrusOne. (Photo: Rich Miller)
3:30p
How Intelligent Storage Controllers Have Revolutionized the Industry
The storage of data (and lots of it) is a continued business demand. The storage industry is evolving to keep pace.
The data center environment continues to evolve. Current market and business demands now revolve around cloud computing, more devices, and a focus on the end-user computing experience. Large or small, organizations depend on their infrastructure to stay operational. Within the data center, numerous technologies work together to deliver powerful services to other sites, branches and end users. A major part of this environment has always been the storage component.
Over the past few years, the storage controller has advanced far beyond a device that only handles storage needs. With more cloud and IT consumerization, managing data, space and future storage requirements has become a greater challenge. So, as other technologies evolved, storage did as well. With modern storage appliances, organizations are able to do much more than ever before. In effect, storage has helped revolutionize how we work with and control data. Remember, resources are still expensive, so why not deploy intelligent technologies that not only optimize those resources but also scale with demand?
- Logical storage segmentation/multi-tenancy. As organizations grow, many will develop regional departments or branch offices. In some cases, administrators once had to deploy a new storage controller to numerous locations, even if they needed just a bit of non-replicated storage. Now, modern controllers can be logically split to facilitate the delivery of “virtual storage” slices to various departments. Unlike simple storage provisioning, the branch administrator receives a graphical user interface (GUI) and a “virtual controller.” To them, it looks like they have their own physical unit. In reality, there is a main storage cluster which has multi-tenancy enabled. The primary admin can see all of these slices, but each branch administrator will only see the slice that they are provided. Those private instances can be controlled, configured, and deployed without impacting the main unit.
- Storage thin provisioning. Storage utilization and provisioning have always been a challenge for organizations. With virtualization and many more workloads being placed onto a shared storage environment, organizations needed a way to better control data. With that came thin provisioning, which allocates blocks of data on demand rather than allocating all the blocks up front (see the sketch after this list). In using this type of storage-optimized solution, administrators are able to eliminate almost all whitespace within the array. Not only does this help avoid poor utilization rates, sometimes as low as 10 to 15 percent, thin provisioning also improves storage capacity utilization efficiency. Effectively, organizations can acquire less storage capacity up front and then defer storage capacity upgrades in line with actual business usage. From an administrative perspective, this can reduce data center operating costs, like power usage and floor space, normally associated with keeping large amounts of unused disks spinning and operational.
- Connecting to the cloud. No core data center function can escape the demands of the cloud. This includes storage technologies. With more systems connecting into the cloud, storage technologies have adapted around virtualization, cloud computing, and even big data. There really isn’t any one major, cloud-related, storage advancement. Rather, numerous new features and technologies have surfaced which directly optimize, secure and manage cloud-based workloads. For example, solid-state and flash storage arrays have been growing in number when it comes to high IOPS workloads. Technologies like VDI require additional resources to allow hundreds and even thousands of desktops to operate optimally. Another example is geo-fencing data and storage. In creating regulatory compliant storage environments, organizations can now fully control where their data goes and where the borders are required. Not only does this help with file sharing, it helps companies control how their data lives in a public or private cloud scenario.
- Controlling big data. It really didn’t take long for storage vendors to jump on the “big data” bandwagon. The big picture here is that data, and the utilization of data, will continue to grow. Storage vendors like EMC and NetApp took proactive approaches in partnering and deploying intelligent systems capable of supporting big data initiatives. For example, the NetApp Open Solution for Hadoop delivers a ready-to-deploy, enterprise-class infrastructure for Hadoop so businesses can control and gain insights from their data. Furthermore, by partnering with server makers, storage vendors are now able to deploy validated reference architectures which provide reliable Hadoop clusters, seamless integration of Hadoop with existing infrastructure, and analysis of any kind of structured or unstructured data. From EMC’s perspective, its Isilon scale-ready platform for Hadoop combines Isilon scale-out network-attached storage (NAS) with EMC Greenplum HD. In working with these types of technologies, organizations are able to run a powerful data analytics engine on a flexible, efficient data storage platform.
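To make the thin-provisioning point above more tangible, here is a small, self-contained sketch of on-demand allocation from a shared pool: each volume advertises its full logical size, but physical capacity is consumed only as data is actually written. The sizes, department names and 10 TB pool are purely illustrative and not tied to any vendor’s array.

```python
# Illustrative model of thin provisioning: logical sizes are promised up front,
# physical blocks are drawn from the shared pool only on write.
class ThinPool:
    def __init__(self, physical_capacity_gb):
        self.physical_capacity = physical_capacity_gb
        self.allocated = 0                  # physical GB actually consumed
        self.volumes = {}                   # name -> [logical size, written]

    def create_volume(self, name, logical_size_gb):
        # Thick provisioning would deduct logical_size_gb here; thin does not.
        self.volumes[name] = [logical_size_gb, 0]

    def write(self, name, amount_gb):
        logical_size, written = self.volumes[name]
        if written + amount_gb > logical_size:
            raise ValueError("write exceeds the volume's logical size")
        if self.allocated + amount_gb > self.physical_capacity:
            raise RuntimeError("pool exhausted - time to add physical capacity")
        self.volumes[name][1] = written + amount_gb
        self.allocated += amount_gb

    def utilization_pct(self):
        return 100 * self.allocated / self.physical_capacity


pool = ThinPool(physical_capacity_gb=10_000)          # 10 TB of physical disk
for dept in ("finance", "engineering", "marketing"):
    pool.create_volume(dept, logical_size_gb=8_000)   # 24 TB promised in total

pool.write("finance", 1_200)
pool.write("engineering", 900)
print(f"Physical utilization: {pool.utilization_pct():.0f}%")  # 21% consumed
```

The “pool exhausted” error is the point at which capacity would actually be purchased, which is exactly the deferred spend the bullet above describes.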
With so many vendors pushing hard to advance the storage market, the above list can truly become much longer. Market trends clearly indicate growth in the consumer market as well as within the business organization. This means more end-points, many more users and a lot more data. Furthermore, high resource workloads demand smarter storage solutions which work to prevent bottlenecks.
In creating your data center, always plan around core components which are driving technological advancement. This means deploying scalable servers, solid networking components, and an intelligent storage system which can control growing data demands. As the market continues to push forward, administrators will need to work with storage solutions which meet business requirements both now and in the future.
For more on storage news and trends, bookmark our Storage Channel.
8:14p
Pica8 Launches Open Data Center Framework
As the Open Networking Summit 2013 gets underway this week in Santa Clara, Pica8 announced its Open Data Center Framework. The framework is designed to provide the essential building blocks toward an eventual transformation to programmable data center networks, including OpenFlow 1.2 and Open vSwitch. Designed for cloud and data center service providers, the framework is an extension of its Open Networking vision, blending the conceptual benefits of the server and conventional networking worlds.
“Server best practices are now also driving initiatives for networks, in particular, simplifying the planning and execution of upgrade processes,” said Seamus Crehan, President of Crehan Research. “Pica8’s framework for a programmable network strives to lay the foundation for an improved way to upgrade network devices, paralleling what is coined a ‘rip and replace’ model on the server side.”
The Pica8 Open Data Center Framework will continue to leverage SDN to develop components needed to manage and provision the network. OpenFlow 1.2 and Open vSwitch bring capabilities such as GRE tunneling for overlays, traffic engineering to optimize network resources and SDN-based network taps for ensuring application flow performance. Software release 1.7 leverages these resources, and is available immediately on all four of Pica8’s 1 GbE and 10 GbE open switches.
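For readers unfamiliar with the overlay piece, the sketch below shows generic, upstream Open vSwitch usage for adding a GRE tunnel port to a bridge; it is not Pica8-specific configuration, and the bridge name and remote tunnel endpoint are illustrative.

```python
# Generic Open vSwitch example (not Pica8-specific): create a bridge and attach
# a GRE port whose tunnel terminates on a remote switch or hypervisor.
import subprocess

def ovs(*args):
    """Run an ovs-vsctl command and fail loudly if it returns an error."""
    subprocess.run(["ovs-vsctl", *args], check=True)

ovs("add-br", "br0")                                   # overlay bridge
ovs("add-port", "br0", "gre0",
    "--", "set", "interface", "gre0",
    "type=gre", "options:remote_ip=192.0.2.10")        # far end of the tunnel
```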
“For many, utilizing SDN in their data center represents the future. And the proof in the proverbial pudding will be when managers can centrally define the application flows as needed so that applications run faster and more efficiently,” said Brad Casemore, Research Director, Datacenter Networks at IDC. “Pica8 is seeking to address this challenge, looking to provide IT shops with reduced operating costs while offering network managers greater control and flexibility.”
8:50p
Verizon Terremark Backs CloudStack and Xen
Here’s another big win for open standards: Verizon Terremark, the cloud computing arm of the huge telco, says it is investing in the Xen Project and CloudStack. These are the company’s first active investments in open cloud projects. While Verizon Terremark says it has long been supportive of open standards, it believes now is the right time to get formally involved in the open-standard ecosystem.
It’s an interesting revelation, given the company’s deep VMware roots. VMware was an investor in Terremark, which was one of the first companies to roll out a vCloud-based public cloud offering. So why the open source love now?
Verizon Terremark believes supporting open source programs is important because they increase the overall market acceptance of these platforms, thus allowing the company to provide additional choice to its customers. The bottom line: the company believes open standards are driving innovation, and it needs to be able to provide choice as hybrid cloud becomes the major play going forward.
Participating in CloudStack, Contributing to Xen
The company is endorsing the CloudStack project and actively participating in the community. With Xen, it is making a monetary contribution to the development project and joining the Linux Foundation as an advisory board member (the Linux Foundation is the new home of the Xen Project).
The investment grew out of the existing close relationship with Citrix, the company said. Citrix currently supports the Verizon Terremark portfolio of enterprise-class IT services. As open cloud wars are heating up, Verizon Terremark is a nice notch in the belt of CloudStack. There’s room for more than one open cloud standard, but there’s definitely a race to win support from the enterprise heavyweights.
Verizon Terremark says it is investing in technologies that allow it to bring high quality products to market, while also helping participate in the long term development of key components of the cloud service delivery platform.
“From our perspective, investing in open source technologies at this stage of market development makes sense because it accelerates sharing, technology and ecosystem growth and reduces development and go-to-market costs,” writes Chris Drumgoole, SVP Global Operations, in a company blog.
Focus on Security
On one hand, it’s surprising given Verizon Terremark’s history with VMware, but it makes sense given its relationship with Citrix. In terms of cloud, the company is largely identified as a VMware shop, focusing on security-centric verticals such as its large federal business.
However, even the most enterprise-centric companies are embracing open standards. Strategies are shifting to supporting hybrid infrastructures and both public and private cloud deployments, so companies can no longer focus solely on one type of cloud, but must instead enable cloud usage as a whole.
Verizon Terremark sees many benefits in supporting open standards, with Drumgoole listing some in his blog post:
- API, application and technology sharing – Open source virtualization platform capabilities and applications make it easier and faster to develop programs and reduce training and compliance costs for end users. Technology sharing leads to higher quality, more robust implementations.
- Ecosystem and market growth – Open standards allow developers to build rich systems of cooperating solutions which foster a market and encourage a higher level of adoption by businesses of all sizes as well as developers and consumers.
- Cost reductions – Standards lower the barrier to entry for new technology companies as well as service costs for established players. End users ultimately win with increased price competition and innovation.
The road forward is paved with open standards. All of the large OEMs hopping on OpenStack is one example of this mentality permeating throughout the industry. Verizon Terremark is spreading its chips, hedging its bets and committing to moving cloud forward in general, because of the potential it holds for its business. Although this is its first active investment in open cloud projects, it will definitely not be the last in terms of supporting the open source movement.