Data Center Knowledge | News and analysis for the data center industry
Thursday, February 27th, 2014
12:00p |
CoreSite is Newest Data Center Arrival in Secaucus
CoreSite CEO Tom Ray welcomes attendees Tuesday at the opening of the company’s newest data center in Secaucus, New Jersey. (Photo: Rich Miller)
SECAUCUS, N.J. - Many think of this North Jersey town as the gateway to the Meadowlands, nestled alongside the famous wetlands and its namesake sports complex. But for a growing number of companies, Secaucus is the gateway to the Internet.
CoreSite Realty is the latest arrival in Secaucus, located about five miles due west of lower Manhattan. On Tuesday the company opened the doors on its 283,000-square-foot facility, which is its second in the New York market, building on its original presence at 32 Avenue of the Americas in Manhattan.
The Secaucus site provides low-latency fiber access to major data centers in New York City, while offering significant savings on the cost of power. Industrial electric rates are about 10 cents per kilowatt hour in Secaucus, compared to 17 cents in Manhattan, according to CoreSite.
“We believe that not as many kilowatts (of data center capacity) need to be in Manhattan where it costs twice as much,” said Tom Ray, the CEO of CoreSite. “But they want to be near Manhattan. We feel there’s a great long-term opportunity here, and a pretty good short-term opportunity as well. Our first steps are very encouraging.”
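To put the rate spread Ray describes in perspective, here is a back-of-the-envelope comparison of the annual energy bill for a single megawatt of IT load. The numbers are our own illustration built only on the quoted rates, and they ignore cooling overhead (PUE):

```python
# Back-of-the-envelope annual energy cost for 1 MW of IT load at the
# quoted industrial rates. Illustrative only; ignores PUE overhead.
HOURS_PER_YEAR = 8760
RATES = {"Secaucus": 0.10, "Manhattan": 0.17}  # $/kWh, per CoreSite

it_load_kw = 1000  # a single 1 MW deployment

for market, rate in RATES.items():
    print(f"{market}: ${it_load_kw * HOURS_PER_YEAR * rate:,.0f}/year")

diff = it_load_kw * HOURS_PER_YEAR * (RATES["Manhattan"] - RATES["Secaucus"])
print(f"Difference: ${diff:,.0f}/year")
```

At those rates the spread alone comes to roughly $613,000 per megawatt per year, which is the economic argument behind locating capacity just outside Manhattan.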
A Growing Data Center Cluster
CoreSite has some familiar neighbors. Its new facility is around the corner from a new Internap data center and not far from a major campus for Equinix.
CoreSite has invested about $100 million to retrofit an existing building, a process that took about eight months. The first phase includes 65,000 square feet of data halls on the first floor of the building, where the electrical and cooling infrastructure also resides. The second floor provides space for seven data halls, which are each about 12,000 square feet and provide 1.5 megawatts of critical power for IT equipment.
The new building provides CoreSite with plenty of space to grow beyond the footprint of its original data center at 32 Avenue of the Americas, which opened in 2008. The company’s expansion arrives at a time of considerable movement in the New York area data center market, where the activity is driven by two developments – the aftermath of Superstorm Sandy and the shrinking footprint for data center space at the Google-owned carrier hotel at 111 8th Avenue.
Direct Connect
Last month CoreSite added Amazon Direct Connect in Secaucus, giving customers access to Amazon cloud services through a private, enterprise-grade network connection. AWS Direct Connect helps customers reduce bandwidth costs, improve network security and achieve more consistent network performance.
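For readers curious what provisioning such a link looks like on the AWS side, here is a minimal sketch using the boto3 library. The facility location code is a hypothetical placeholder; in practice you would look up the real code for the CoreSite Secaucus site first:

```python
# Minimal sketch: requesting an AWS Direct Connect cross-connect via boto3.
import boto3

dx = boto3.client("directconnect", region_name="us-east-1")

# List Direct Connect locations to find the code for the target facility.
for loc in dx.describe_locations()["locations"]:
    print(loc["locationCode"], "-", loc["locationName"])

# Request a dedicated 1 Gbps connection at the chosen facility.
# "CSNJ1" is a made-up code used purely for illustration.
connection = dx.create_connection(
    location="CSNJ1",
    bandwidth="1Gbps",
    connectionName="secaucus-to-aws",
)
print(connection["connectionState"])  # e.g. "requested"
```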
CoreSite now has 16 data centers in nine markets, with facilities spanning 2.5 million square feet of data center space. The company has been a strong performer on Wall Street, gaining 16 percent in 2013 after a 55 percent gain for its shares in 2012. Ray says CoreSite isn’t always flashy, but has benefited from its focus on execution.
“There’s no magic or silver bullet,” said Ray. “You pick a location that works, with service delivery that excels. We’re methodically executing our plan, and adding more inventory in a business that works.”
Later this week we’ll provide a look at the design of the new CoreSite Secaucus data center. In the meantime, here’s a look at an ice sculpture of the CoreSite logo that was featured during the reception after Tuesday’s opening.
12:30p |
Intel Adds Virtualization Platforms for Industrial Systems
At the Embedded World event in Nürnberg, Germany this week Intel (INTC) unveiled the Intel Industrial Solutions System Consolidation Series, providing an accelerated path to implement state-of-the-art industrial embedded systems. This is the first pre-integrated, pre-validated embedded virtualization product that allows customers to merge and manage multiple discrete systems into a single machine.
“More and more, the industrial sector is looking to technology for innovative ways to become even more efficient and competitive,” said Jim Robinson, general manager of Segments and Broad Market Division, Internet of Things Solutions Group at Intel. “By bringing together what have typically been multiple subsystems within industrial equipment into a single computing platform, Intel’s application-ready platform makes it easier and more affordable for OEMs, machine builders and system integrators to deliver consolidated, virtualized systems.”
The new series bundles an embedded computer with an Intel Core i7 processor and a pre-integrated virtualization software stack with Wind River Hypervisor. It is preconfigured to support three partitions running two instances of Wind River VxWorks for real-time applications and one instance of Wind River Linux 5.0 for non-real-time applications. Baosight, one of the largest system integrators in China, used the Intel Industrial Solutions System Consolidation Series to create iCentroGate, a secure data collection and communication product that merged the tasks of two separate devices into one CPU. Using the Intel solution, the company reported saving an estimated 60 percent of development time and 50 percent of development costs.
“The Intel Industrial Solutions System Consolidation Series is helping us provide a unique and innovative solution to our mainstream customers, which gives us a huge technological advantage,” said Dong Wensheng, general manager of Baosight’s R&D Division. “By starting with the Intel application-ready platform, our development cost and time have been reduced significantly.”
Intel also released a new version of Intel System Studio, a suite of software tools tailored to developers creating industrial embedded systems, including those using the Intel Industrial Solutions System Consolidation Series. The new software and tool suite provides highly optimized build and performance analysis tools to help ensure functional reliability throughout the system life cycle. Intel System Studio is available free of charge for a limited time if purchased with the Intel Industrial Solutions System Consolidation Series. System Studio is part of the Intel Developer Program for Internet of Things.
1:30p |
Elliott Raises Bid for Riverbed to $3.3 Billion
The banks are buzzing with billions, as mergers and acquisitions activity in the technology sector continues to accelerate. Elliott Management has raised its bid for Riverbed to $3.3 billion, and Intel invests in big data and cloud companies in China. All recent deals put together would still not total the massive deal last week, with Facebook (FB) acquiring mobile messaging company WhatsApp for $16 billion.
Elliott raises Riverbed bid to $3.3 billion
The New York Times reports that hedge fund Elliott Management has raised its bid for Riverbed Technology to more than $3.3 billion, and continued to criticize Riverbed for failing to begin a process to sell itself. In January Elliott offered approximately $3 billion, along with a written letter urging the board to either accept its buyout offer or seek other offers. The new offer raises January’s $19 per share bid to $21 a share, after Riverbed turned down the initial offer, saying it was inadequate. Riverbed’s continued resistance to Elliott stands in contrast to Juniper Networks, which announced a new integrated operating plan last week in response to Elliott.
“We believe shareholders, the actual owners of the company, should be outraged by the board’s behavior,” Jesse Cohn, the Elliott portfolio manager leading the campaign, wrote. “This behavior is inconsistent with the fiduciary responsibilities of a public company board, whose obligation is to maximize value for stockholders.” Riverbed said in a statement that it is considering Elliott’s new proposal. The company added that it is focused on its own turnaround strategy and is still open to buying back shares to bolster its stock price.
Intel Capital invests in mobile and cloud. Intel Capital announced that RF solutions provider Newlans has closed $15 million in Series B funding to accelerate the company’s development and commercialization of its Programmable Duplexer targeting front-end radio frequency modules for 4G LTE mobile devices and small cells.
“Newlans is pioneering new approaches to RF front end tuning,” said Stefan Wolff, Vice President, Intel’s Platform Engineering Group & COO of Intel’s Wireless Platform R&D. “They are truly redefining the way RF filtering is being implemented and we are pleased to see them apply this disruptive technology to mobile devices. We look forward to working together with Newlans to commercialize this critical technology.”
Last week Intel Capital announced investments in big data company Shanghai Yeapoo Information Technology, network storage provider BlueWhale, and China Cloud – a leading Chinese cloud infrastructure operator. “We’re proud to support BlueWhale, China Cloud and Yeapoo as they shape the cloud computing and big data ecosystems in China,” said Arvind Sodhani, executive vice president of Intel and president of Intel Capital. “Each company has demonstrated early success and we’re looking forward to helping them grow.”
1:30p |
Bringing DCIM Technology into Your Data Center
Lara Greden is a senior principal, strategy, at CA Technologies. Her previous post was titled, Using DCIM to Achieve Simplicity in the Face of Complexity. You can follow her on Twitter at @laragreden.
DCIM technology is core to Facebook’s approach to managing capacity and achieving efficiency in the data center. At the recent Open Compute Summit in San Jose, California, Tom Furlong (here’s a video link to his talk) spoke about how Facebook defined and implemented DCIM technology to accommodate 42 use cases.
As part of their process, Facebook conducted a Proof of Concept (POC) with third-party DCIM vendors to collect and rationalize thousands of data points in real-time, or near real-time, before investing in a platform.
While Facebook may have different criteria in how they approach DCIM, you may want to consider the following factors before deciding how to bring DCIM into your data center:
Evaluate Scalability
Ensure that the vendor’s DCIM technology has the scalability and architecture to meet the specific design of your data center or portfolio of data centers, whether owned or leased from a colo provider. You will be pulling on the order of hundreds or thousands of data points. Start with an evaluation of your requirements, and then task the vendors to demonstrate how they can achieve those requirements in your environment. Through product demonstrations, interviews, site visits or, if appropriate, a proof of concept, you want to make sure that the scalability and architecture will produce the results you require.
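As a quick illustration of the data volumes at stake, here is a back-of-the-envelope sizing calculation. Every number in it is an assumption chosen for illustration, not a vendor figure:

```python
# Illustrative sizing math for a DCIM polling workload (assumed numbers).
data_points = 5000          # sensors, breakers, fans, etc. being polled
poll_interval_s = 30        # one sample per point every 30 seconds
sample_size_bytes = 64      # rough per-sample storage cost with overhead

samples_per_day = data_points * (86400 / poll_interval_s)
storage_per_year_gb = samples_per_day * 365 * sample_size_bytes / 1e9

print(f"{samples_per_day:,.0f} samples/day")        # 14,400,000
print(f"~{storage_per_year_gb:,.0f} GB/year raw")   # ~336
```

Even a mid-sized monitoring footprint generates tens of millions of samples a day, which is why the platform’s ingest and storage architecture deserves scrutiny up front.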
Know It Will Integrate
One of your requirements is likely something along the lines of “gather data points from across the physical equipment and systems in our data center, including CRACs/CRAHs, variable speed fans, chillers, BMSs, branch circuit monitoring, PDUs, UPSs, generators, servers, sensors, and more.” Your data center uses equipment from a variety of vendors, so you want to be sure the solution will provide integration to meet your use cases, both now and in the future.
One of the aspects to consider is how the solution integrates with the physical data center. Verify that the data gateway supports the various device protocols and frameworks necessary, such as SNMP, XML, CSV, IPv6, BACnet, Modbus TCP/RTU, WMI, IPMI, EnergyWise and others. Also assess the requirements, if any, for installing, configuring, and maintaining additional hardware.
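As a minimal sketch of what protocol-level polling looks like, the snippet below reads a single SNMP value from a rack PDU using the pysnmp library. The host name and community string are placeholders:

```python
# Poll one SNMP value (sysUpTime) from a rack PDU -- the kind of device
# polling a DCIM data gateway performs at scale. Host and community
# string are placeholders for illustration.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

error_indication, error_status, error_index, var_binds = next(getCmd(
    SnmpEngine(),
    CommunityData("public", mpModel=1),               # SNMP v2c
    UdpTransportTarget(("pdu-01.example.net", 161)),
    ContextData(),
    ObjectType(ObjectIdentity("1.3.6.1.2.1.1.3.0")),  # sysUpTime.0
))

if error_indication:
    print("Polling failed:", error_indication)
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}")
```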
Another of your requirements is likely to be crafted around the need to integrate upstream with IT Management systems, such as the service desk, workload automation, change management, incident management, and native workflow solutions. Ask for concrete examples of how the DCIM solution will integrate with the systems, processes, and workflows you’ve defined in your requirements study.
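Upstream integration is frequently just a REST call. As one hedged example, the sketch below opens a ticket in a hypothetical service-desk API when a DCIM threshold alarm fires; the endpoint, token, and payload fields are invented for illustration, so substitute your service desk’s actual API:

```python
# Open a service-desk incident when a DCIM alarm fires. The endpoint,
# auth token, and payload fields are hypothetical placeholders.
import requests

alarm = {"device": "CRAC-07", "metric": "supply_air_temp_c", "value": 29.4}

response = requests.post(
    "https://servicedesk.example.com/api/v1/incidents",
    headers={"Authorization": "Bearer <token>"},
    json={
        "summary": f"{alarm['device']}: {alarm['metric']} at {alarm['value']}",
        "category": "data-center-facilities",
        "urgency": "high",
    },
    timeout=10,
)
response.raise_for_status()
print("Opened incident", response.json().get("id"))
```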
Refine Your Requirements Study
To ensure successful adoption of DCIM technology, your program design should be grounded in a deep understanding of the end users and end goals. As a result, you should approach DCIM from a program perspective instead of a project perspective, and understand which requirements will put you on the road to success. As with any large scale project, start with the use cases and requirements that will provide immediate value and build from there. Think of your requirements as a roadmap, and use the early learnings from the first phase to help refine requirements for subsequent phases.
DCIM first delivers clear value when managing data center systems manually is no longer cost effective. By bringing those data sets together, DCIM yields greater value through visualization of the combined data and intelligence based on analytics across it. DCIM often comes to the forefront when data center consolidation programs are underway, and it is essential when data center capacity management is a core functional discipline in your organization.
What Are Your Objectives?
As with any enabling technology, there is no single reason for deploying DCIM technology. For some, it is about capacity management. For others, it is about uptime and availability. For others still, it is about efficiency. But most likely, it is a combination of all of those mission-critical objectives. As your organization goes through the process of evaluating DCIM technology for your data centers, build on the wealth of lessons learned in the industry and give particular scrutiny to how DCIM can be fine-tuned to meet your business objectives.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:30p |
Six Considerations in Choosing a Colocation Provider
Choosing a colocation provider can be tough. There are so many new use cases driving moves into colocation that administrators are often left with more questions than answers. Questions include: What are the connectivity options between my site and the colo data center? How does my colocation provider ensure security and compliance? Can my colocation provider ensure optimal cooling and power capabilities? The reality is that working with a colocation provider will take some research.
Still, with the proliferation of cloud computing, an ever-expanding user base, and new requirements from today’s business world, expanding your infrastructure into a colocation facility absolutely makes sense.
This white paper from CenturyLink provides an overview of the six critical considerations required to make a colocation decision. They include:
- Breadth of Capabilities
- Data Center Location
- Connectivity Options
- Security & Compliance
- Support Services
- Power & Cooling
From physical location to network integration, there are important elements to consider when placing your hardware with a colocation company. Asking the right questions can ensure an optimal deployment. Any latencies or points of failure need to be eliminated or minimized to ensure the performance of your business applications. Redundant systems ensure your business will continue to operate and serve your customers, no matter what unforeseen events may arise. And having a secure environment protects your business from intrusions that can have a devastating impact on your business.
Remember, each business’s needs are slightly different, and you should bear in mind the operational dynamics that make your business unique. Download this white paper today to learn the details of the six considerations, which are relevant to all colocation environments that clients deploy. In creating your next-generation infrastructure, it’s critical to know what your organizational demands are, both today and in the future. Working with the right colocation provider can help your enterprise stay agile and scale to the needs of your evolving user base and enterprise.
2:00p |
CA Launches Management Cloud for Mobility
CA Technologies (CA) has introduced a comprehensive Management Cloud for Mobility. Delivered as a cloud service, the new offering consists of Enterprise Mobility Management (EMM) to manage and secure mobile devices, applications and content; Mobile DevOps to accelerate application development and deployment; and Enterprise Internet of Things (IoT) to enable the adoption of internet-connected devices.
“The explosion of new devices, applications, content and transactions in the mobile environment has opened new challenges for enterprises of all sizes, worldwide. The CA Management Cloud for Mobility will help enterprises transform the challenges of the new mobile economy into significant opportunity,” said Ram Varadarajan, general manager, New Business Innovation, CA Technologies. “With solutions that power innovation, drive productivity and accelerate the development of new mobile applications and offerings, CA is extending its leadership in IT management to help businesses capitalize on the next technological and business shift – mobility.”
Enhanced Mobile Experience
The EMM suite addresses management and security for four areas – devices, applications, content and email. Working across multiple development tools, languages and methodologies, Mobile DevOps makes it easier to build and test API-based mobile applications; gain insights into performance, user experience, and crash and log analytics; and automate and support these mobile applications when deployed onto millions of devices. Four new products are available within the Mobile DevOps suite.
“The growth of mobile devices and apps in the enterprise is blurring the lines between personal and corporate data and leading to new challenges for enterprises,” said Chris Hazelton, Research Director, Mobile & Wireless, 451 Research. “The EMM space is evolving as companies are looking for more complete offerings that support customers and productive mobile employees. Secure container technologies must provide enterprise scalability and comprehensive security across app development, delivery and support, all while preserving the familiar native look and feel of nearly any mobile device an employee chooses to carry.”
3:00p |
DataStax Enterprise 4.0 Gives In-Memory Option to Cassandra
DataStax, which focuses on enterprise implementations of Apache Cassandra, has announced version 4.0 of its database platform, adding a powerful new in-memory option. DataStax Enterprise 4.0 also features enterprise search enhancements.
“In order to scale and remain successful, organizations need the lightning performance that an in-memory database can offer,” said Robin Schumacher, vice president, products, DataStax. “DataStax Enterprise 4.0 is the first NoSQL database to combine an in-memory option with Cassandra’s always-on architecture, linear scalability and multi-datacenter support, giving businesses what they need to build and scale online applications with zero downtime.”
Objects created in-memory optimize performance and deliver increased speed for read operations. DataStax Enterprise 4.0 includes Cassandra 2.0 integration, which adds new features such as lightweight transactions and CQL enhancements that make it easier to migrate from an RDBMS. It also includes the latest version of DataStax’s visual monitoring and management solution, OpsCenter 4.1, which adds capacity planning and custom graphing enhancements.
DataStax says new search features in 4.0 help developers build applications more quickly, while enhanced internal cluster communications deliver faster search operations, even for thousands of concurrent requests. For developers, in-memory objects act as typical Cassandra tables, so they are completely transparent to applications and developers and have no learning curve. Administrators can decide whether to assign data to in-memory objects, spinning disks or SSDs, all in the same database cluster, making performance optimization easier than ever.
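As a rough sketch of how this looks to a developer, the snippet below uses the DataStax Python driver to create a table and run a Cassandra 2.0 lightweight transaction. The keyspace, contact point, and in-memory table option are our own illustration; the 'MemoryOnlyStrategy' compaction class reflects DataStax documentation of the period, but treat it as an assumption and confirm against your DSE version’s docs:

```python
# Sketch: Cassandra 2.0 lightweight transactions and a DSE in-memory
# table via the DataStax Python driver. Contact point, keyspace, and
# the compaction option are illustrative assumptions.
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect()
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS demo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")

# In DSE the in-memory option is enabled per table via a compaction class.
session.execute("""
    CREATE TABLE IF NOT EXISTS demo.sessions (
        user_id text PRIMARY KEY,
        token   text
    ) WITH compaction = {'class': 'MemoryOnlyStrategy'}
""")

# A lightweight transaction: insert only if the row does not exist yet.
result = session.execute(
    "INSERT INTO demo.sessions (user_id, token) VALUES (%s, %s) IF NOT EXISTS",
    ("u42", "abc123"),
)
print("applied:", result.one()[0])  # first column of an LWT result is [applied]
```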
“We protect our customers from being exploited online by compiling, comparing and analyzing data to conduct security operations and mitigate threats,” said Jason Atlas, vice president, engineering and technology, IID. “Database performance is critical in high-velocity environments like ours, and DataStax Enterprise’s new in-memory option will provide a valuable speed boost for our deployment.”
3:30p |
IBM Issues Watson Mobile Developer Challenge
The IBM Watson supercomputer.
At Mobile World Congress this week in Barcelona, IBM is encouraging mobile developers to create apps powered by Watson, Cisco’s Quantum virtualized packet core has passed portability testing, and Radware announces an SDN and NFV solution strategy for mobile carriers and service providers. The Mobile World Congress Twitter conversation can be followed at #MWC2014.
IBM launches Watson mobile developer challenge. IBM announced a global competition to encourage developers to create mobile consumer and business apps powered by its Watson supercomputer. The program, being driven by the newly formed IBM Watson Group, aims to encourage developers to spread cognitive computing apps into the marketplace. The challenge will encourage developers around the world to build sophisticated cognitive apps that can change the way consumers and businesses interact with data on their mobile devices. Through this initiative, mobile developers can take advantage of Watson’s ability to understand the complexities of human language, “read” millions of pages of data in seconds and improve its own performance by learning. “The power of Watson in the palm of your hand is a game-changing proposition, so we’re calling on mobile developers around the world to start building cognitive computing apps infused with Watson’s intelligence,” said Mike Rhodin, Senior Vice President, IBM Watson Group. “Imagine a new class of apps that deliver deep insights to consumers and business users instantly – wherever they are – over the cloud. It’s about changing the essence of decision making from ‘information at your fingertips’ to actual insights.”
Cisco Quantum vPC demonstrates portability. Cisco (CSCO) announced that its Cisco Quantum Virtualized Packet Core (Quantum vPC) has passed portability testing with Berlin-based European Advanced Networking Test Center (EANTC). This first-of-its-kind functionality test demonstrated that leading mobile Internet intelligence capabilities can be extended by service providers into their networks, allowing them to offer a range of new services such as machine-to-machine (M2M) and sponsored data. Tests performed included LTE (4G) and 3G radio-access technologies using IPv4 and IPv6 and simulating the equivalent of 10,000 end users. The EANTC tests confirmed that the virtualized evolved packet core (EPC) solution supports the same features and functions as the physical EPC, and that the Cisco Quantum vPC is completely hypervisor and hardware independent. “With all the discussions about virtualization in the industry these days, it is nice to see a vendor ready for independent verification,” said Carsten Rossenhövel, EANTC managing director. “We were able to verify Cisco’s claims that their new virtualized packet core is based on the same Cisco StarOS code and provides the same look and feel as their Cisco ASR 5000 Series.” Using a catalog of virtual functions such as the Cisco Quantum vPC and working in conjunction with the infrastructure, the Cisco Evolved Services Platform helps ensure the right type of experience for end users regardless of how or where they connect to the network.
Radware announces next generation SDN and NFV strategy for mobile. Radware (RDWR) announced a next-generation software-defined networking (SDN) and network functions virtualization (NFV) solution strategy for mobile carriers and service providers. Radware’s new solution strategy leverages SDN and NFV to pioneer an integrated security and application delivery framework that seamlessly enables comprehensive cyber defense and service delivery as network-wide native services. The new product line for mobile carriers and service providers includes new control-plane and data-plane solutions such as DefenseFlow, an SDN DDoS offering; Alteon NFV, an NFV-compliant ADC VNFC; and SteerFlow, a service-delivery control-plane application. “Mobile carriers and service providers can greatly benefit from Radware’s deep working relationships with the SDN and NFV ecosystems as the company has played an integral role in defining standards and technologies in this area from the ground up,” said David Aviv, vice president of Advanced Services. “Through Radware’s new solutions strategy, mobile operators can seamlessly integrate applications to automate virtual data center workflows, communicate with SDN controllers, scale by leveraging the NFV infrastructure, and much more.”
4:52p |
Dell Awarded $37 Million Deal With GSA and Homeland Security
Dell has been awarded a five-year, $37.1 million IT services contract in support of the General Services Administration. Dell was selected by the United States Department of Homeland Security (DHS) and the United States General Services Administration (GSA) to provide those agencies with IT services and solutions worth up to $22 billion. These contracts will allow Dell to help both government institutions operate more effectively and efficiently through the use of IT and IT services.
GSA Service Desk Support
Dell was one of a number of contractors awarded an Indefinite Delivery Indefinite Quantity (IDIQ) contract with a period of performance of 10 years and an aggregated ceiling value of more than $20 billion. Under this contract, Dell will help DHS develop, maintain and implement a full range of IT solutions to help the agency better accomplish its mission. The $37.1 million contract came through prime contractor AAC; Dell will deliver it through its consolidated service centers in Nashville, Tennessee, and Oklahoma City, Oklahoma, expanding the help desk support model it has successfully implemented for other government customers.
“First and foremost we are looking forward to working with these key government customers to leverage our core competencies and help them accomplish their missions more efficiently and effectively,” said George Newstrom, vice president and general manager, Dell Services Federal Government. “This is an unmistakable indication that Dell remains one of the premier providers of IT services to the U.S. government and we’re looking forward to building on the momentum we’ve created within our business recently.”
Dell to Build Government Cloud for NRC
Dell announced that it has been selected by the U.S. Nuclear Regulatory Commission (NRC) to build and install an on-premises, federal government compliant private cloud. Dell designed the cloud solution specifically for the NRC to help it meet its business needs: reducing IT costs, simplifying operations, providing new technology and capability to the NRC user community, and satisfying the Office of Management and Budget direction for each agency to establish a cloud instance. Through the private cloud deployment process, Dell Services Federal Government will help the NRC consolidate data centers, replace aging equipment and take advantage of modernized IT services to deliver improved IT performance and an enhanced customer experience.
“We’re focused on helping customers select the right cloud for their unique business needs, including security and compliance. The NRC is a perfect example as it has strict standards and requirements that its systems must support in order to carry out its vital regulatory oversight mission,” said George Newstrom, vice president, Dell Services Federal Group. “The Dell cloud solution is built to meet and exceed those requirements. We are, of course, humbled by the continued trust and confidence that NRC places in our end-to-end solutions and services and our team.”