Data Center Knowledge | News and analysis for the data center industry
Thursday, May 15th, 2014
12:00p | Sabey Earmarks Half of Manhattan Skyscraper for Office Space

Sabey Data Center Properties is planning to replace walls on many floors of the former Verizon building in Manhattan it bought three years ago with floor-to-ceiling windows and turn those floors into office space.
Since the acquisition, the Seattle, Washington-based data center developer and operator has marketed its east-Manhattan skyscraper primarily as data center space. Leasing some of the building as office space has always been in the plans – and there are already tenants with both data centers and offices there – but earmarking 15 floors of the 32-story building for “window-walling” signals an increased focus on offices.
While supply of fully built-out data center space in Manhattan is tight, there are highly connected buildings with lots of space (including Sabey’s) whose owners prefer to sign a lease before spending capital on building out data halls. There is also plenty of relatively low-cost data center space available in New Jersey – just across the Hudson River.
Office space on the island comes at a premium. Commercial real estate services firm CBRE reported in April that Manhattan landlords had increased prices on about 1.4 million square feet of available office space on the island because of high demand, sparked primarily by an influx of technology companies.
Connectivity and Nice Views
John Sabey, the company’s president, said the opportunity to provide offices in such a highly connected building in Manhattan was significant. There are many companies looking for office space on the island, and the building at 375 Pearl Street, with 1 million square feet of floor space total, has plenty to offer.
In addition to connectivity, robust mechanical and electrical infrastructure and the option to have office and data center space in one building, it offers stunning views, since there are no other buildings around that are this tall. “The views up and down the river and back into the city and … Midtown are quite spectacular,” Sabey said.
Most companies in the market for Manhattan offices are in the tech industry, including giants like Google and Microsoft, he said. There is also some interest from Manhattan’s traditional financial vertical. The building would also be suitable for health and life sciences companies.
While the landlord may turn up to 15 floors into office space, it is starting with less than half that. “We will do probably six or seven floors right away, but we have the ability to continue to install windows very quickly if a large user comes along,” Sabey said.
Market Will Decide
Ultimately, the market will decide the split between data center and office space in the building. A little over one floor (about 45,000 square feet) is built-out office space today, along with about 150,000 square feet of data center space – a lot of it occupied.
Sabey bought 375 Pearl in 2011 for $120 million in partnership with local developer Young Woo. The company said then it would upgrade power capacity from 18 to 40 megawatts and enhance network connectivity. The building was officially opened for business under its new name Intergate.Manhattan in 2013.
Tenants who have moved in since the opening include telco and managed services provider Windstream and Datagram, a New York-based hosting company whose previous data center in lower Manhattan experienced a prolonged outage after electrical infrastructure in the building was flooded in the aftermath of Hurricane Sandy in October 2012. Datagram now occupies both data center and office space at Intergate.Manhattan.

12:23p | Software-Defined Data Centers: The Next Big Thing or All Hype?

Matt Smith works for Dell and has a passion for learning and writing about technology. Outside of work he enjoys entrepreneurship, being with his family, and the outdoors.
Corporations love to get their hands on new technologies and proclaim them the next big thing or the “saving grace” of their networks and data centers. One of the newer developments along those lines is the move toward virtualization, and a leading trend in that movement is the software-defined data center (SDDC).
But what is all this “software defined” talk all about, and will it really provide an effective alternative for growing businesses?
In the most basic terms, an SDDC, like the other “software-defined” services and concepts, virtualizes the elements of its infrastructure and then makes the data center accessible through an API. The software-defined data center, specifically, encompasses software-defined networking, storage, automation, security and more. The goal is to make it easier to provision, manage and operate the low-level components, such as compute, networking and storage.
That means that when an issue arises, the IT manager can control, fix or respond to it without having to be in a certain place or deal with multiple pieces of hardware. Basically, the SDDC is driven by software, creating a more flexible and agile computing environment to address the ever-growing needs of the modern business.
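To make “accessible through an API” concrete, here is a minimal sketch of what provisioning through a software-defined control layer can look like. The sddc module and every call on it are hypothetical, invented for illustration; real platforms expose analogous compute, network and storage primitives.

```python
# Hypothetical SDDC management API -- for illustration only.
from sddc import Client  # not a real library

client = Client(endpoint="https://sddc.example.com/api", token="...")

# Compute, network and storage are all requested in software,
# with no hands-on hardware configuration:
vm = client.compute.create(name="app-01", vcpus=4, memory_gb=16)
net = client.network.create(name="app-tier", cidr="10.0.10.0/24")
vol = client.storage.create(name="app-data", size_gb=500)

client.network.attach(vm, net)
client.storage.attach(vm, vol)
```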
A Layer of Abstraction
A software-defined data center is designed to provide an additional layer of abstraction on top of the normal hardware infrastructure and public or private clouds. The idea of this kind of abstraction is to allow IT departments to define their own requirements and achieve the necessary levels of performance, security, and availability.
The notion that a data center could be abstracted into smaller units that could then be priced and sold as a utility has changed the way many companies deal with their networking and storage concerns. By separating – or abstracting – the physical devices from the way the company actually needs or wants to use its resources, these systems suddenly become more agile and flexible. It means it is possible to create a solution based on end-user requirements and better aligned with application and service demands.
Making the Transition
If your company is looking into, or is in the beginning stages of, a move to an SDDC, what do you need to know in order to make the transition easier? EMA analyst and blogger Torsten Volk outlines three major areas that will require some kind of investment in order to implement an SDDC:
- Capacity Management: You can’t provision resources that don’t exist. Your company will need to have enough capacity for all the necessary applications and services. This means that your company or IT department will need to be more “application aware” and really understand the requirements of every application workload.
- Multi-Virtualization and/or Cloud Management Platform: Due to the many vendors and technologies your company’s data center uses, you may need to implement a multi-virtualization and multi-cloud management platform to oversee everything.
In order to provide the most reliable services, most data centers use a mix of technologies from multiple vendors. Add to that the tendency of many businesses to jump on new technologies and cloud resources, and things get even more complex. An effective management platform and process is critical to get the most out of these resources.
- Configuration Management: Depending on what applications you need, you can be more efficient by adopting a “DevOps” mentality and implementing automatic provisioning (see the sketch after this list).
Many companies are aiming for more rapid and consistent deployment, which can add a lot of responsibility to the IT team. By automatically allocating resources through the right configuration management processes, you can make the entire process much easier and minimize some of the troubleshooting that normally comes later.
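As a concrete illustration of configuration-driven provisioning, here is a minimal Python sketch. The spec format and the provision() helper are placeholders for whatever platform API a shop actually uses, and the capacity guard ties back to the capacity-management point above.

```python
# Minimal sketch: provision workloads from a declarative spec, with a
# capacity check first (you can't provision resources that don't exist).
# The spec format and provision() are illustrative placeholders.
import yaml  # PyYAML

SPEC = """
workloads:
  - {name: web, instances: 4, vcpus: 2, memory_gb: 8}
  - {name: db,  instances: 2, vcpus: 8, memory_gb: 32}
"""

AVAILABLE_VCPUS = 64  # illustrative pool size

def provision(name, vcpus, memory_gb):
    # Placeholder: call your platform's API here (OpenStack, vSphere, ...)
    print("provisioning %s: %d vCPU / %d GB" % (name, vcpus, memory_gb))

config = yaml.safe_load(SPEC)
needed = sum(w["instances"] * w["vcpus"] for w in config["workloads"])
if needed > AVAILABLE_VCPUS:
    raise RuntimeError("capacity check failed: %d vCPUs needed" % needed)

for w in config["workloads"]:
    for i in range(w["instances"]):
        provision("%s-%02d" % (w["name"], i), w["vcpus"], w["memory_gb"])
```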

Be Aware of the Challenges
With any new technology, there are going to be challenges that may catch a company unaware. These are not always insurmountable problems, but their perceived difficulty could cause a company to rethink the value of an SDDC.
Four areas in particular present managers with the most difficulty:
- Gaining visibility across technology boundaries – Virtualization allows companies to bring together network, storage, computing, and application components. However, when all these elements are in a single place, it can blur some of the traditional boundaries, creating confusion for managers.
- Managing workload mobility – Workloads no longer need to stay in a single physical location, which has led to a number of benefits for some companies. Dealing with these rapid changes, though, can be a challenge if the IT team isn’t fully prepared.
- Making sure storage isn’t forgotten – Virtualization opens up a lot of new possibilities, but some companies are concerned that some of their storage could be left behind. Companies that have previously relied on disk-dependent storage may have to take special precautions to make sure nothing is lost when they switch to a software-defined environment.
- Virtualizing applications and leveraging automation with stability – While virtualization can make a company’s systems more agile and flexible, it must still balance that with the stability of its other processes. Companies must reconsider the way they manage and provision their resources and applications and make sure the switch does not lead to any unnecessary downtime.
A New Approach
All of these challenges mean that IT managers will need to rethink the way they approach their networking, storage, and security and tackle them as a whole. While many will hesitate to make the initial move due to the more challenging aspects of the SDDC, in the end it will help create a sleek, easier-to-manage, automated network.
When it’s done right, the abstraction from the hardware level can lead to an environment no longer constrained by the rigid limits of traditional data centers or by the need for specialized knowledge of those hardware components. A software-defined data center creates a lot of new opportunities, if the company is ready to take advantage of them.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

12:30p | Online Tech Acquires Indianapolis Data Center

Online Tech has acquired a data center in Indianapolis, its first facility outside of Michigan. The company has been working on an aggressive expansion plan to bring its compliance-focused cloud and hosting services to other parts of the country, and this is the first major step in executing that plan.
Online Tech’s core customer base is in healthcare and financial services. Indianapolis is a large city that the company believes is underserved.
“Indianapolis is the thirteenth largest city in the United States—even larger than San Francisco—and we believe its businesses are underserved by secure cloud computing providers,” said Mike Klein, Co-CEO of Online Tech. “The city’s large population of healthcare companies must ensure that all patient data remains safe and HIPAA compliant, which fits well with our healthcare IT focus and expertise. Indianapolis also has a growing community of financial, retail, e-commerce and software businesses.”
The 44,000-square-foot data center will support 3 megawatts of electrical load and 16,000 square feet of raised-floor space. The company expects to complete renovations in the third quarter.
The facility has two separate utility feeds, redundant power infrastructure and is rich in fiber. The purchase of the property and planned upgrades represent an investment of $10 million, which is also expected to create up to 25 permanent jobs for IT, sales and data center professionals in Indianapolis.
This is Online Tech’s fifth data center. The company has 100,000 square feet of space in four facilities in Michigan.
“This is our first facility outside of Michigan and is the latest in a series of sizable facility investments to expand our infrastructure, including a major new data center in metro Detroit and the expansion of our Mid-Michigan facility,” said Yan Ness, the other co-CEO. “We see demand for our services across the Great Lakes region and nationally, and our long-range growth plan includes expansion into other Great Lakes markets.”

1:00p | AMD and Canonical Claim OpenStack Performance Record

Need 168,000 virtual machines provisioned quickly? There’s an app for that. Well, a finely tuned collaboration, anyway, with AMD’s SeaMicro SM15000 and Canonical’s Ubuntu OpenStack. AMD announced that its SeaMicro SM15000 server set an industry benchmark record for hyperscale cloud computing with a demonstration that highlights how OpenStack can quickly and reliably provision on-demand computing services at scale.
OpenStack — the popular open source cloud infrastructure software — is one of the hottest new workloads, and hardware vendors are racing to make sure service providers and enterprises deploy it using their products.
The test provisioned 168,000 virtual machines on 576 physical hosts. The first 75,000 virtual machines were deployed in 6.5 hours, AMD said. The bare-metal servers, storage and networking were provisioned using MaaS (Metal as a Service), part of Ubuntu 14.04 LTS, and Canonical’s Ubuntu OpenStack.
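For a feel of what programmatic provisioning against an OpenStack endpoint looks like, here is a minimal sketch using the python-novaclient library of that era. Credentials, image and flavor names are placeholders, and this is not the tooling used in the benchmark, which was driven through MaaS and Ubuntu OpenStack.

```python
# Minimal sketch: boot a batch of instances via python-novaclient.
# All credentials and names below are placeholders.
from novaclient import client

nova = client.Client("2", "demo-user", "demo-password", "demo-project",
                     "http://controller:5000/v2.0")

image = nova.images.find(name="ubuntu-14.04")
flavor = nova.flavors.find(name="m1.small")

# A real at-scale run would parallelize this loop and poll each
# server until it reaches ACTIVE status instead of fire-and-forget.
for i in range(10):
    nova.servers.create(name="vm-%03d" % i, image=image, flavor=flavor)
```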
“This record validates that the SeaMicro SM15000 is well suited for massive OpenStack deployments,” said Dhiraj Mallick, corporate vice president and general manager at AMD’s Data Center Server Solutions division. “The combination of Ubuntu OpenStack and the SeaMicro SM15000 server provides the industry’s leading solution to build cloud infrastructure that is highly responsive and ideal for on-demand services.”
One Chassis to Deploy Them All
The SeaMicro server has always been about web-scale, dense, massive-capacity solutions with innovative architectural features such as its Freedom Fabric. In 10 rack units, an SM15000 server links 512 compute cores, 160 gigabits of I/O networking and more than five petabytes of storage with a 1.28-terabit-per-second Freedom Fabric. It supports next-generation AMD Opteron processors; Intel Xeon E3-1260L, E3-1265Lv2 and E3-1265Lv3 (Haswell) processors; and the Intel Atom N570.
The SM15000 recently received a 2014 Silver award from the Edison Awards in the Applied Technology, Research and Business Optimization category. Last year the company introduced the SeaMicro OpenStack blueprint, empowering OpenStack compute, storage and networking layers to take full advantage of the scalable SM15000 server.
Canonical’s Ubuntu OpenStack 14.04 provides SeaMicro SM15000 integration with support for the system’s RESTful API. The optimized Ubuntu OpenStack distribution, with its tools for provisioning, building and managing clouds, leverages the SM15000’s scale, fabric and speed to present a hyperscale cloud in a box.
Processor vendor AMD bought microserver maker SeaMicro in 2012 for about $330 million. The company said the acquisition was about SeaMicro’s Freedom Fabric technology more than anything else. It has continued selling microserver systems under the SeaMicro brand, however.

2:00p | Top 9 Mistakes in Data Center Planning

Your data center is at the heart of your organization, so it faces constant demand and capacity challenges. How do you effectively plan a data center deployment, upgrade or new build-out? Basically, how can you avoid major mistakes when entering the “build and expand” world?
In this white paper from Schneider Electric, we learn that the key lies in the methodology you use to design and build your data center facilities. All too often, companies base their plans on watts per square foot, cost to build per square foot, and tier level—criteria that may be misaligned with their overall business goals and risk profile. Poor planning leads to poor use of valuable capital and can increase operational expense.
While there are numerous consultants in the field to help you find your way, assessing all their ideas and input can be overwhelming. Organizations with critical capacity requirements in the 1-3 megawatt range are particularly exposed. Mid-size users’ needs are no less critical than those of mega users, but their internal technical expertise to drive proper expansion plans may be limited. The result is information overload from multiple sources, leading to confusion and poor decision-making.
So – what are these top mistakes? Let’s take a look:
- Mistake 1: Failure to take total cost of ownership (TCO) into account
- Mistake 2: Poor cost-to-build estimating
- Mistake 3: Improperly setting design criteria & performance characteristics
- Mistake 4: Selecting a site before design criteria are in place
- Mistake 5: Space planning before design criteria are in place
- Mistake 6: Designing into a dead-end
- Mistake 7: Misunderstanding PUE (see the worked example after this list)
- Mistake 8: Misunderstanding LEED certification
- Mistake 9: Overcomplicated designs
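The PUE point (Mistake 7) lends itself to a quick worked example: PUE is total facility energy divided by the energy delivered to IT equipment, and that overhead flows straight into the energy line of a TCO model. All figures below are illustrative, not from the white paper.

```python
# PUE = total facility energy / IT equipment energy. A PUE of 1.0 would
# mean every watt reaches IT gear; real facilities run higher because of
# cooling, power conversion and lighting. Figures are illustrative only.

def pue(total_facility_kw, it_load_kw):
    return total_facility_kw / it_load_kw

print(pue(total_facility_kw=3000, it_load_kw=2000))  # -> 1.5

# The same overhead drives the energy component of total cost of
# ownership: at a fixed IT load, annual energy cost scales with PUE.
it_load_kw = 2000
hours_per_year = 8760
price_per_kwh = 0.10  # illustrative utility rate, $/kWh

for p in (2.0, 1.5, 1.2):
    annual_cost = it_load_kw * p * hours_per_year * price_per_kwh
    print("PUE %.1f -> $%s per year" % (p, format(int(annual_cost), ",")))
```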
There will be many data center deployments that hit roadblocks and hurdles. You can avoid quite a few problems by avoiding these mistakes. Download this white paper today to see how, through proper planning using the TCO approach, you can create a data center facility that meets your organization’s performance goals and business needs today and tomorrow.

5:00p | DE-CIX Expands Into Two Telx Facilities in New York

Deutsche Commercial Internet Exchange’s U.S. subsidiary DE-CIX NY has installed exchange infrastructure in two Telx data centers in New York City, continuing its expansion in the North American market.
The data centers — NY1 and NY2 — are inside two of the most important interconnection facilities on the East Coast: 60 Hudson Street and 111 8th Avenue. Telx, which has built a reputation for providing top-tier connectivity options, has a sizable presence in both buildings.
DE-CIX NY was announced in September 2013 and has been expanding on the wave of the Open-IX movement. Open-IX, a non-profit industry organization, wants to create a new network of neutral, member-governed Internet exchange points (IXPs). This approach, often referred to as the “European model,” differs from the exchange model that currently dominates the U.S. market.
DE-CIX NY is a carrier- and data center-neutral Internet exchange distributed across several carrier hotels and data centers throughout the New York-New Jersey metro area. It is among several European-style internet exchanges building up their presence on American soil.
“When expanding an Internet exchange, we look to where our prospects and customers need us,” said Frank Orlowski, chief marketing officer for DE-CIX. “The New York-New Jersey metro area continues to act as a global hub for communications, and this agreement with Telx, which provides strategically located facilities in the heart of Manhattan, marks a milestone for our expansion in the region.”
DE-CIX will offer internet service providers, carriers and other Telx customers additional options for network management, expanded peering with multinational enterprises, more choices for low-latency data transfer, and business continuity via a single cross connect.
DE-CIX recently announced it signed Akamai, one of the leading providers of content delivery network services, for its exchange at 111 8th Avenue. The Telx data center in this building is called NY2. The building, owned by Google since 2011, is one of the world’s most wired. In addition to Telx, it houses data centers run by Digital Realty Trust and Equinix, among others.
Google has 500,000 square feet of office space in the building and also occupies space across the street in the Chelsea Market building.
Telx’s NY1 data center is located at 60 Hudson Street, another major nerve center for international communications.

7:17p | Data Center Jobs: McKinstry

At the Data Center Jobs Board, we have a new job listing from McKinstry, which is seeking a Lead Electric Critical Facility Engineer in Altoona, Iowa.
The Lead Electric Critical Facility Engineer is responsible for performing routine maintenance tasks in accordance with McKinstry Safety Policy and Procedures; inspecting buildings, grounds and equipment for unsafe or malfunctioning conditions; troubleshooting, evaluating and recommending electrical system upgrades; ordering parts and supplies for maintenance and repairs; soliciting proposals for outsourced work; working with vendors and contractors to ensure their work meets McKinstry and client standards; and performing all maintenance to ensure the highest level of efficiency without disruption to the business. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

10:20p | Rackspace Hires Morgan Stanley to Evaluate Acquisition or Partnership

Rackspace Hosting, one of the last major U.S. cloud infrastructure service providers that have not been swallowed by giants, is getting serious about a partnership with a bigger player or an outright acquisition.
The Windcrest, Texas-based company has hired Morgan Stanley to evaluate a number of proposals it has received. “In recent months, Rackspace has been approached by multiple parties who have expressed interest in exploring a strategic relationship with Rackspace, ranging from partnership to acquisition,” Rackspace representatives wrote in a filing with the U.S. Securities and Exchange Commission Thursday.
Morgan Stanley is on board to look at the existing proposals as well as other alternatives for advancing the company’s strategy.
A Rackspace spokesman did not provide any comment beyond the statements included in the filing. The company said it would not talk about the process publicly until its board made a decision on a specific partnership or transaction.
Stock Shoots Up After Announcement
The stock market welcomed the news from the company, whose shares have been on a steady decline for the past year and a half. Rackspace’s stock was trading at about $80 per share at one point during the first quarter of last year, but lately the company has been having a hard time reaching even half that amount.
After trading at close to $28 per share for most of the day Thursday, Rackspace’s stock shot up to more than $32 per share following the announcement.
Differentiating From Giants
The provider’s executives spent the bulk of the time allotted for their earnings call earlier this week talking about how different the company was from the cloud infrastructure services giants and how different its target market was from theirs.
They focused so much on differentiation messaging because those giants – namely Google, Amazon and Microsoft – had all dramatically slashed prices of their cloud services in March. Google cut Infrastructure-as-a-Service rates by 30 percent to 85 percent, depending on the kind of service. Amazon and Microsoft both followed almost immediately with announcements of cuts similar in magnitude.
Those announcements left many wondering how they were going to affect players like Rackspace. Amazon, Google and Microsoft all have large businesses outside of IaaS offerings, while for a company like Rackspace, IaaS is core.
On the earnings call, Rackspace CEO Graham Weston said the giants’ price cuts would not have as much of an effect as many would think, since Rackspace was after a different customer base. The big players cater to developers who want raw compute and storage resources deployed in the cloud, while Rackspace is after companies that choose to outsource cloud infrastructure deployment and management to an outside expert.
The Real Competition
Rackspace has other giants to compete with, however: giants who have bought up companies it used to compete with toe-to-toe. They include Verizon, which bought Terremark; CenturyLink, which bought Savvis; and IBM, which now owns SoftLayer.
Those are the companies Rackspace sees as its direct competition, its president Taylor Rhodes said on the earnings call this week.
Being acquired by or entering into a tight partnership with a company of similar caliber would give Rackspace more firepower to compete with those firms. The data center footprint it has built out and the technology those data centers house are far from trivial.
Rackspace engineers were on the original team that created OpenStack, the open source cloud infrastructure software that has since become the de facto standard cloud architecture alternative to proprietary clouds from the likes of Amazon. Rackspace reportedly has the largest production deployment of OpenStack in the world.
The company operates nine data centers in six markets, including Chicago, Dallas-Fort Worth, northern Virginia, London, Hong Kong and Sydney. Those facilities house more than 100,000 servers.