Data Center Knowledge | News and analysis for the data center industry
 

Thursday, April 24th, 2014

    11:30a
    T5 Lands Financial Customer in Atlanta Data Center

    T5 Data Centers continues to attract financial customers to its data centers. The company announced the signing of a new lease with an undisclosed New York-based financial services company for space in its Alpharetta, Georgia data center, dubbed T5@Atlanta.

    “T5 continues to attract discerning customers such as financial services companies and healthcare firms that need to maintain sensitive data, and address security as part of their own compliance requirements,” said Tim Bright, Senior Vice President of T5 Data Centers. “They come to T5 because of our reputation for reliable service, operational stability across our national portfolio, our willingness to customize our security, and power redundancy and resiliency. With backgrounds in Enterprise Data Center development, operations, and consulting, T5 approaches data center design differently by designing the kind of data center our clients would build themselves, even before we start customization.”

    T5@Atlanta is a 100,000 square foot, Tier III server-ready data center, with nearly 55,000 square feet of raised floor.  The critical IT power load installed is 6,000 kilowatts, expandable to 9,000. The facility consists of multiple data suite sizes and densities within a secure, bunkered facility. Each data hall is separated by slab-to-deck, fire-rated walls.

    One unique feature of the facility is its power redundancy. T5@Atlanta is fed by five separate substations, including two on Georgia Power’s “Hi-Reli” system for mission-critical users. The 25-kilovolt feeds from the substations are encased in concrete, and an automatic transfer switch is on site. Three additional substations can also feed the facility if necessary.

    T5 has seven sites across the United States in Atlanta, Los Angeles, Dallas, and Charlotte with new projects announced in Portland, New York, and Colorado. The company recently reported leasing success at its Dallas campus, where it has leased 15 megawatts of wholesale space. T5 most recently entered the New York market with a facility in Westchester. Each data center is purpose-built to give customers total control of their own dedicated hall.

    12:52p
    Growth in Data Center Industry Demands Ongoing Professional Development

     Tom Roberts is President of AFCOM, the leading association supporting the educational and professional development needs of data center professionals around the globe.

     There’s good news on the horizon for IT professionals: Corporate IT spending is expected to take off from 2014 to 2016, as companies adopt new technologies, according to research firm IDC.

    To implement these new tools in data centers, analysts say the U.S. IT technical workforce of roughly 3.9 million needs to grow by almost 50 percent within the next two years. North America is not alone; in the final quarter of 2013, there was a shortage of 4,600 IT professionals across Australia.

    Skills Shortage

    The lack of workers doesn’t refer to actual bodies, but people with the skills needed in today’s world. For example, professionals with knowledge of data warehousing, business intelligence and SAP are sorely needed “Down Under.” In the U.S. and Canada, cloud computing is going to require 2.7 million specialists by 2015.

    IDC also reports that the U.S. government’s $30 billion electronic health records (EHR) initiative will create an even greater shortage of health information technology workers over the next few years, beyond the 50,000 the U.S. government previously estimated.

    Big Data, considered the next frontier for innovation, competition and productivity, requires people with deep analytical skills as well as analysts who know how to use the data. There’s a shortage in this arena as well. Data center managers who have been in the profession for many years may not think they need to become experts in the above technologies, but their CEOs probably do.

    They have only a few options: ignore the new technologies emerging around them and risk being replaced by a newcomer or a retrained veteran; go back to school; or attend as many educational conferences as they can to become accredited in some of the topics just mentioned.

    Human Impacts of New Technologies

    According to the Cisco Global Cloud Index Study, “The growth of data and these new technologies affect not only IT systems and infrastructures, but also the practitioners that design, install, operate and manage them.  New job roles are emerging that require next-generation skill sets.”

    The study goes on to say that 60 percent of data center operators cite a lack of suitably qualified staff as one of the major issues they will face in the coming years.

    In AFCOM’s most recent State of the Industry survey, a few critical trends emerged involving the need for ongoing skill development for data center professionals specifically:

    • According to the AFCOM survey, when respondents were asked why their budget was increasing, the top answer, given by 30 percent, was “increased training and certifications.”
    • 56 percent indicated the need for more training in facilities management, 43 percent in operations and process management, and 39 percent pointed to network training.
    • When asked which skills required certification, 40 percent responded networking, 36 percent facilities management, and 35 percent systems design and analysis.

    The McKinsey Global Institute is predicting the possibility of a shortage of 1.5 million skilled and qualified data center managers by 2015.

    That shortage could be met through conferences like Data Center World – Global (which convenes next week), tapping into an industry association like AFCOM to help advance a data center career, or seeking certification from one of many reputable education providers.

    Whichever path they choose, one fact is clear: the data center industry demands more skilled workers. Opportunity or threat, the future is wide open for those who seek knowledge.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    12:54p
    How Amazon Stays On Top in the Cloud Wars

    Amazon Web Services was the first major player in cloud computing, and has maintained its clear leadership position as rivals like Microsoft and Google have poured billions of dollars into competing platforms.

    How has Amazon, a company with its roots in online retailing, managed to dominate one of the major tech battlegrounds of our era?

    The answer? Amazon has done many things right:

    • The company’s early adoption among developers gave it an unparalleled ecosystem of experimenters.
    • Requests for new features allowed AWS to maintain its lead in functionality.
    • Its focus on scale and data center automation has created an extremely efficient infrastructure, making it resistant to pricing pressures.
    • Through education and certification services, Amazon has cultivated a talent pool of workers skilled in their use of AWS and its services, expanding beyond its initial developer-focused base and into all market segments.

    The competition, however, is growing. New clouds are rising and prices are being aggressively cut. In order to understand Amazon’s ability to maintain its leadership, it is important to look at how it built its cloud empire.

    The Early Days: A New Business Model

    Many companies say they’ve been doing cloud computing for years. However, Amazon was the first to offer a true utility compute offering in 2006. This allowed it to gain an early lead.

    Companies like Rackspace soon followed into the market. Rackspace acquired online storage provider Jungle Disk and VPS host Slicehost in 2008 to form the basis of its cloud offering. The business models of the two companies were very different: Rackspace focused on providing the best customer experience, positioning its cloud as a premium product. The same tactic of “fanatical support” served Rackspace well during the mass-market hosting price wars, when it refused to join the race to the bottom on price. Rackspace has built itself a nice cloud business, but it still hasn’t challenged Amazon.

    In another corner, there was SoftLayer (now part of IBM), which  focused on dedicated servers, later rebranded as “bare-metal” servers. SoftLayer was innovating in terms of automation, shrinking the provisioning time and automating the purchasing of dedicated servers to the point where it was a utility, just like cloud. Under IBM, it now has the scale to achieve new heights. IBM/SoftLayer is building a nice cloud business, but it also hasn’t challenged Amazon.

    Of these three early pillars of cloud, Amazon offered the bare-bones utility compute and storage, the cloud that most resembled a utility, like electricity. It was a playground for developers and startups who wanted to experiment, which led to applications and businesses being built atop AWS. By 2007, Amazon already had 180,000 developers using AWS. It was this early foundation, as well as continued evolution, that kept the company on top.

    Building A Cloud Ecosystem

    The playground-like atmosphere, coupled with the fact that AWS was “bare bones,” meant a lot of developers were able to innovate and scale without the traditional hurdle of upfront cost. Amazon essentially opened up the platform and let others build functionality atop the raw infrastructure. Since these were the ‘Wild West’ days of cloud, several companies built businesses providing functionality on top of AWS.
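    To make the “utility” idea concrete, here is a minimal sketch, written with today’s boto3 SDK, of what renting raw compute programmatically looks like; the region, AMI ID, and instance type are hypothetical placeholders rather than details from the article.

        # Minimal sketch: raw compute consumed as a utility. Assumes AWS
        # credentials are configured; the AMI ID and region are hypothetical.
        import boto3

        ec2 = boto3.client("ec2", region_name="us-east-1")

        response = ec2.run_instances(
            ImageId="ami-00000000000000000",  # hypothetical machine image
            InstanceType="t3.micro",          # small pay-as-you-go instance
            MinCount=1,
            MaxCount=1,
        )

        instance_id = response["Instances"][0]["InstanceId"]
        print(f"Launched {instance_id}; billing stops once it is terminated.")

    No procurement and no build-out: capacity is requested with an API call and released the same way, which is what made the platform such an inviting playground.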

    One of the poster children of this movement was – and continues to be – RightScale. Letting techies go wild building businesses atop AWS in the early days meant that Amazon was the platform where many new technologies were rolled out, either through these third parties or by Amazon itself.

    Companies like Netflix, which relies 100 percent on AWS to this day, kicked the tires for larger purposes, creating their own tools like Chaos Monkey (which was open sourced in 2012). Chaos Monkey deliberately causes failures by randomly terminating instances, so that teams can verify their systems recover automatically rather than waiting for a real outage to expose weaknesses. This is just one example of how AWS was as resilient as you wanted it to be, depending on the hands that wielded it.
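    For illustration only, here is a minimal sketch of the chaos-testing idea: pick one instance at random from a group, terminate it, and confirm the service keeps running while the platform replaces the lost capacity. This is not Netflix’s actual tool; the auto-scaling group name and the use of boto3 are assumptions.

        # Conceptual chaos-testing sketch (not Chaos Monkey itself): terminate a
        # random instance in an auto-scaling group to prove the system recovers.
        import random
        import boto3

        asg = boto3.client("autoscaling", region_name="us-east-1")
        ec2 = boto3.client("ec2", region_name="us-east-1")

        GROUP = "web-tier-asg"  # hypothetical auto-scaling group name

        groups = asg.describe_auto_scaling_groups(AutoScalingGroupNames=[GROUP])
        instances = groups["AutoScalingGroups"][0]["Instances"]

        victim = random.choice(instances)["InstanceId"]
        print(f"Terminating {victim} to test resilience")
        ec2.terminate_instances(InstanceIds=[victim])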

    Other clouds had to catch up, either building or partnering for similar functionality and toolsets. The big push for cloud agnosticism meant that many third-party cloud-handling businesses (think scaling, cost control etc.) began to offer their services on other clouds. This was one factor that allowed Amazon to build out functionality without necessarily threatening its ecosystem; in order to survive, its ecosystem had to either do it better or extend across different clouds. AWS adding services forced these companies to expand the breadth of their offerings or perish.

    1:00p
    All-Flash Storage Provider SolidFire Opens AsiaPac HQ In Singapore

    Singapore continues to attract AsiaPac operations for U.S. tech companies. The latest is all-flash storage systems provider SolidFire, which has opened a new regional HQ to support a growing number of international customers. Within Asia-Pacific, more than 15 customers have already deployed SolidFire’s all-flash arrays.

    “Singapore’s robust infrastructure and strong connectivity to the rest of the region made it the optimal city to headquarter SolidFire’s Asia-Pacific operation,” said Tim Pitcher, SolidFire VP of International. “Through policies that welcome global business, Singapore has built a strong ecosystem of innovation that will be beneficial for SolidFire to join. Our expansion into Asia and our growing European presence allows us to quickly respond to and support growing product demand in these regions.”

    SolidFire’s expansion follows a year of solid growth, as well as several strategic partnerships with VMware, OpenStack, and Citrix. The company has landed several service provider customers including COLT, ViaWest, Internap, ServInt, SunGard, and more.

    The recently released Element OS Version 6 added key functionality and protocol support that expands the company’s appeal into the enterprise. New capabilities include support for Fibre Channel, Real-Time Replication, Mixed Node Clusters, and Integrated Backup and Restore.

    “Strong customer response, along with a growing enterprise demand for predictable, high-performance cloud storage infrastructures has accelerated our ability to expand our presence globally,” stated Dave Wright, Founder and CEO of SolidFire. “Each of these large scale enterprises is at a transition point. They are all defining their Next Generation Data Center and are benchmarking themselves against the operational and economic efficiencies of large scale public clouds. Storage is at the heart of that transformation – SolidFire is at the heart of that transition. We grew our teams close to 80% in 2013 and expect to continue that trend throughout 2014 as enterprise IT evolves. We have a call out to the best in the storage industry to join our team this year.”

    Asia-Pacific expansion will be driven out of SolidFire’s regional office in Singapore, led by Kris Day, Managing Director for Asia-Pacific.

    2:00p
    Creating Next-Gen Visibility in the Modern Data Center

    There have been some pretty amazing advancements around data center technologies. By turning to converged infrastructure solutions that combine computing, networking, storage, software, and orchestration products in a single package, organizations have been able to make huge strides toward highly efficient and optimized service delivery from their data center.

    Still, a major gap remains across all vendors of converged infrastructure solutions: real-time visibility into traffic and transaction flows. In this whitepaper, we learn how extending visibility throughout the data center, across all islands of IT including the virtual server and virtual network, is a simple matter of deploying right-sized Gigamon components, such as the high-density GigaVUE-HD4 or GigaVUE-HD8 fabric node, as a central aggregation point for SPAN and/or TAP ports.

    Here’s the cool factor: Gigamon’s Unified Visibility Fabric architecture is built on a very powerful framework. The multi-layer approach involves the following (a conceptual sketch follows the list):

    • Services Layer—Aggregation, filtering, replication, and intelligent packet modification which are the core functions of the GigaVUE fabric nodes
    • Management Layer—GigaVUE-FM delivers a central provisioning point across physical, virtual and other elements in today’s Software Defined Data Center architecture
    • Orchestration Layer—Offers programmability, automation, and tool integration for future advancements in data center technology
    • Applications Layer—Monitoring tools can perform more efficiently with applications like de-duplication, and in the future with intelligent flow-based sampling enabled by FlowVUE
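    The whitepaper itself contains no configuration examples, so the sketch below is only a generic illustration of the Services Layer idea: aggregate mirrored traffic, filter it, and replicate the interesting slices to the right monitoring tools. The rule structure and tool names are assumptions, not Gigamon’s API or syntax.

        # Generic model of a visibility-fabric services layer: filter aggregated,
        # mirrored packets and replicate matches to monitoring tools. Illustrative
        # only; this is not Gigamon's configuration syntax.
        from typing import Callable, Dict, List

        Packet = Dict[str, object]  # e.g. {"src": ..., "dst": ..., "port": 443, "vlan": 10}

        class ForwardingRule:
            def __init__(self, name: str, match: Callable[[Packet], bool], tools: List[str]):
                self.name = name
                self.match = match  # filter predicate applied to each packet
                self.tools = tools  # tool ports that get a copy of matching traffic

        RULES = [
            ForwardingRule("web-traffic", lambda p: p.get("port") in (80, 443), ["apm-probe"]),
            ForwardingRule("storage-vlan", lambda p: p.get("vlan") == 30, ["storage-analyzer"]),
            ForwardingRule("all-traffic", lambda p: True, ["ids-sensor"]),  # replication
        ]

        def distribute(packet: Packet) -> List[str]:
            """Return every tool port that should receive a copy of this packet."""
            return [tool for rule in RULES if rule.match(packet) for tool in rule.tools]

        print(distribute({"src": "10.0.0.5", "dst": "10.0.1.9", "port": 443, "vlan": 10}))
        # -> ['apm-probe', 'ids-sensor']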

    Download this whitepaper today to truly understand how a powerful visibility infrastructure can dynamically impact your environment. Direct benefits include:

    • Big data visibility
    • Creating the “4th layer” visibility standard
    • Delivering Visibility-as-a-Service
    • Integrating the Visibility Fabric
    • Creating next-generation data center visibility – for today and the future

    As your business evolves and grows, the data center model will become an even more critical component of your organization. The four-layer architecture that Gigamon describes follows the concepts of standard data center reference architectures, which today point every path of data center evolution toward transforming servers, networks, and storage into services that meet the needs of each business function within the enterprise.

    2:30p
    Tips for Preventing Data Center Downtime

    Let’s face facts: Your data center has become an integral part of your organization. In fact, many organizations are actively building entire business flow models around the capabilities of their data center platform.

    Downtime in the data center is therefore unacceptable; it can cost companies millions of dollars per hour in lost revenue. In this whitepaper from Gigamon, we learn how to ensure the efficient operation of the data center, reduce bottlenecks, prevent outages, and maintain security. To accomplish this, it is vital to carefully monitor and analyze all the traffic within the modern data center.

    It’s important to understand that the modern data center comprises switches, routers, firewalls, application servers, IP services (DNS, RADIUS, and LDAP), virtualized applications, and storage area networks. Customers often understand the need for monitoring, but do not monitor their data center as securely and efficiently as possible. So, what are some great ways to reduce downtime in the data center?
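    Before turning to the whitepaper’s tips, the sketch below shows, in deliberately simplified form, what “monitoring” a few of those building blocks can mean in practice. The hostnames and ports are hypothetical examples; a real deployment would feed results into alerting and trend analysis rather than printing them.

        # Minimal reachability check for a few of the services named above
        # (DNS, LDAP, an application server). Hostnames are hypothetical.
        import socket

        CHECKS = [
            ("dns",        "ns1.example.internal",  53),
            ("ldap",       "ldap.example.internal", 389),
            ("app-server", "app1.example.internal", 443),
        ]

        def tcp_check(host: str, port: int, timeout: float = 2.0) -> bool:
            try:
                with socket.create_connection((host, port), timeout=timeout):
                    return True
            except OSError:
                return False

        for name, host, port in CHECKS:
            status = "up" if tcp_check(host, port) else "DOWN"
            print(f"{name:<11} {host}:{port} {status}")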

    Download this white paper today to learn Gigamon’s approach to creating a more resilient data center platform. Tips include:

    • Building a secure traffic visibility fabric
    • Reducing the monitoring burden and increasing the effectiveness of existing tools
    • Quickly introducing new tools and monitoring new applications
    • Securing monitored data
    • Eliminating SPAN/mirror port contention

    Your data center will only become more important to your organization. The proliferation of cloud computing, virtualization and information in the data center has created new types of challenges for the IT environment. Downtime costs money. To offset this impact, it’s critical to deploy a powerful monitoring and data center visibility framework. With technologies like Gigamon, users will be able to better understand user experience, determine threat vulnerabilities, and maximize data center performance while lowering the total cost of management.

    3:00p
    HP Introduces XP7 Enterprise Storage

    HP introduces XP7 enterprise storage for mission critical applications, PernixData launches new features for its FVP virtualization software, Micron releases a new enterprise-class SATA SSD, and SanDisk announces 15 nanometer technology.

    HP introduces XP7 Enterprise Storage.  HP (HPQ) unveiled the XP7 enterprise-class storage disk array, with increased system density and ultra-high, flash-driven performance. The XP7 features 3 million IOPS (I/O per second) with sub-millisecond response times. The XP7 is able to consolidate up to 4.5 PB of internal storage onto a single array, and features a new disaster recovery capability with multiarray virtualization to prevent business disruption during or after online data migrations. It also secures data with Federal Information Processing Standard (FIPS) 140-2 encryption readiness. “Today’s mission-critical workloads cannot afford to forfeit performance or scalability for increased disaster recovery and data protection,” said David Scott, senior vice president and general manager, HP Storage. “With the introduction of HP XP7 Storage, we are supplying customers with a single system that is highly efficient, with features that can reduce the risk of data loss and provide nonstop data availability.”

    PernixData launches new FVP features. PernixData announced today the availability of new PernixData FVP features designed for ubiquitous storage acceleration. With the new version of FVP software, companies can accelerate any virtualized application using server RAM and/or flash, making it ideal for deployment in any server environment. New FVP features include the ability to virtualize any high-speed server resource, optimized storage through the decoupling of performance from capacity, and ‘metro clustering’ for synchronous replication between hosts for complete fault tolerance. “Storage is at an amazing inflection point where performance is being extricated from capacity to optimize application behavior and cost effectively scale IOPS, without disrupting years of investment in shared storage infrastructure,” said Satyam Vaghani, CTO and co-founder of PernixData. “By making FVP easy to deploy in any environment, and robust enough to accelerate any virtualized application using flash or RAM, PernixData is making decoupled storage architectures the de facto standard in virtual data centers.”

    Micron releases new SATA SSD. Micron (MU) announced a new enterprise-class solid state drive (SSD) designed specifically for data center storage platforms. The new M500DC SATA SSD addresses enterprise applications requiring greater data throughput due to rapid growth in mobile applications, cloud services and connected devices. It uses 20nm MLC NAND Flash technology and fifth-generation custom firmware, and integrates the Micron Extended Performance and Enhanced Reliability Technology (XPERT) feature suite. The XPERT architecture intelligently integrates the storage media and controller into a comprehensive architecture that meets the high workload demands of the data center by extending drive life, protecting data during power failures and ensuring overall data integrity. The M500DC is available in 120, 240, 480 and 800GB capacities. “System administrators are realizing that there is a need for an SSD that delivers more enterprise features than a client drive at a more affordable price than most enterprise drives,” said Greg Wong, founder and principal analyst at Forward Insights. “Products such as Micron’s M500DC SSD offer data centers the optimal balance of enterprise-class features, performance and price for demanding 24/7 enterprise workloads.”

    SanDisk announces 15 nm technology. SanDisk (SNDK) announced the availability of its 1Z-nanometer (15nm) technology, which will ramp on both two-bits-per-cell (X2) and three-bits-per-cell (X3) NAND flash memory architectures in the second half of 2014. The 15nm technology uses many advanced process innovations and cell-design solutions to scale the chips along both axes. SanDisk’s All-Bit-Line (ABL) architecture, which contains proprietary programming algorithms and multi-level data storage management schemes, has been implemented in the 1Z technology to deliver NAND flash solutions with no sacrifice in memory performance or reliability. “We are thrilled to continue our technology leadership with the industry’s most advanced flash memory process node, enabling us to deliver the world’s smallest and most cost effective 128 gigabit chips,” said Dr. Siva Sivaram, senior vice president, memory technology, SanDisk. “We are delighted that these new chips will allow us to further differentiate and expand our portfolio of NAND flash solutions.”

    5:00p
    Data Center Lighting and Efficiency

    As data center operators, engineers and architects consider different ways to make their data centers more efficient, lighting is often one of the last areas they consider. While lighting comprises only 3 to 5 percent of a data center’s energy load, it’s one of the easiest areas to address and one that can help take a data center with good Power Usage Effectiveness (PUE) to one with great PUE.
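    To see why even a 3 to 5 percent slice is worth chasing, here is a small worked example, using assumed, illustrative load figures rather than numbers from the whitepaper, of how trimming lighting load moves PUE.

        # Illustrative PUE arithmetic with assumed load figures.
        # PUE = total facility power / IT power, so cutting lighting (part of the
        # facility overhead) lowers PUE even though the IT load is unchanged.
        it_load_kw = 1000.0              # assumed IT load
        total_kw = 1500.0                # assumed total facility load -> PUE 1.50
        lighting_kw = 0.04 * total_kw    # lighting at ~4% of total load

        pue_before = total_kw / it_load_kw
        total_after = total_kw - 0.8 * lighting_kw  # lights off ~80% of the time
        pue_after = total_after / it_load_kw

        print(f"PUE before: {pue_before:.3f}")  # 1.500
        print(f"PUE after:  {pue_after:.3f}")   # 1.452

    A few hundredths of PUE may sound small, but, as the whitepaper argues, lighting is one of the cheapest places to find it.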

    With that in mind, one of the standard ways to optimize the delivery of a resource is to deliver it on demand: provide it when it is needed rather than supplying it continuously. The same applies to lighting. In this whitepaper from CommScope, we see how the most efficient way to provide light in a data center is to use it precisely when and where it is needed.

    This is effectively a lights-out data center approach. While many data center operators believe they run a lights-out facility, in actual practice they do not. In facilities with this type of policy, lights are typically switched on manually across a large swath of space whenever a technician needs to reach one small section of the site, such as a single aisle.

    Moving forward, the data center will have to operate as efficiently as possible, which means controlling lighting intelligently as well. Fortunately, there is an inexpensive tool for determining exactly how much lighting energy a data center uses: the HOBO U9 Light On/Off Data Logger.

    By placing the HOBO in the data center, an operator can track precisely how much lighting is used over a period of time. This empirical information can then guide data center operators in selecting the approach that works best for them.
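    As an example of that analysis, here is a minimal sketch of how on/off interval data could be turned into an energy estimate; the sample intervals, fixture count, and wattage are assumptions for illustration and do not reflect the logger’s actual export format.

        # Convert lights-on intervals logged over a sample week into a lighting
        # energy estimate. Sample data, fixture count and wattage are assumed.
        from datetime import datetime

        FMT = "%Y-%m-%d %H:%M"
        on_intervals = [  # (lights on, lights off) pairs
            ("2014-04-21 09:05", "2014-04-21 09:40"),
            ("2014-04-23 14:10", "2014-04-23 15:25"),
        ]
        fixtures = 200            # assumed fixtures in the monitored zone
        watts_per_fixture = 50.0  # assumed fixture wattage

        hours_on = sum(
            (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 3600
            for start, end in on_intervals
        )
        weekly_kwh = hours_on * fixtures * watts_per_fixture / 1000
        print(f"Lights on {hours_on:.2f} h this week -> {weekly_kwh:.1f} kWh "
              f"(about {weekly_kwh * 52:.0f} kWh/year at this rate)")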

    Download this whitepaper today to learn how, by combining sensors for motion, lighting, energy metering, and temperature in a single device, CommScope’s Redwood sensor can be deployed at each and every light fixture, creating a dense grid with coverage for every 100 square feet of building space. Here’s the cool part: this follow-me lighting and efficiency control method can scale and become distributed along with your systems.

    Basically, you’re deploying a solution that collects sensor information and extends it to other systems, delivering a platform that improves productivity, enhances efficiency and reduces energy costs.

