Data Center Knowledge | News and analysis for the data center industry
Wednesday, March 8th, 2017
| Time | Event |
| 1:00p |
N. Virginia Landgrab Continues: Next Amazon Data Center Campus? The competition for land with entitlements suitable for large campuses in the red-hot Northern Virginia data center market continues unabated.
Corporate Office Property Trust, a publicly traded REIT that’s built a lot of shell buildings for Amazon data centers in the region, appears to be in the process of entitling land for another data center campus. Northern Virginia is home to the largest cluster of Amazon Web Services data centers.
A rezoning application has been submitted to Loudoun County for a 141-acre parcel of land within the Route 28 Taxing District. The rezoning application for “Paragon Park” is consistent with 100 percent data center use and greater building density on the property, up to 1.0 FAR (floor area ratio).
The property is in the Broad Run Election District, on the east and west side of Pacific Blvd. (Route 1036), on the south side of West Severn Way (Route 1748), and on the north side of the W&OD Trail.
The Usual Suspects?
COPT Acquisitions of Columbia, Maryland, and Atlanta-based real estate agency Eugenia Investments submitted the application. COPT specializes in developing mission critical facilities for government agencies and defense contractors. However, its other significant business segment is developing data center shells in Northern Virginia, primarily for Amazon data centers (click images for larger maps):


Source: Jones Lang LaSalle
Last year, COPT contributed six of the Amazon data center shells into a 50/50 joint venture with San Francisco-based GI Partners, a data center private equity firm. According to the joint announcement:
“The venture acquired six of COPT’s existing, single-tenant, data center properties that contain a total of 962,000 square feet. The unconsolidated venture raised $60 million of 10-year mortgages that bear interest at 3.4% to finance approximately 40% of the value of the properties. GI Partners’ affiliate purchased its interest in the venture for approximately $44 million. COPT realized $104 million in proceeds from these transactions.”
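A quick back-of-the-envelope check shows how the quoted figures hang together. Note that the implied portfolio value below is an inference from the stated loan-to-value ratio, not a number given in the announcement:

```python
# Sanity check of the COPT/GI Partners joint venture figures quoted above.
# The implied portfolio value is an inference, not a stated fact.

mortgage = 60_000_000        # 10-year mortgages raised by the venture
loan_to_value = 0.40         # debt finances "approximately 40% of the value"

implied_value = mortgage / loan_to_value      # ~$150M portfolio value
implied_equity = implied_value - mortgage     # ~$90M total equity
gi_half_interest = implied_equity / 2         # ~$45M for a 50% stake

print(f"Implied portfolio value: ${implied_value:,.0f}")
print(f"GI Partners' 50% equity stake: ${gi_half_interest:,.0f}")
# GI's reported ~$44M purchase price lands close to the ~$45M this implies.
```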
It appears that COPT may be preparing the ground for the next crop of Amazon data centers in Loudoun County.
Equinix’s Ground Zero Expansion
Equinix recently purchased four parcels totaling 34.5 acres of land for expansion in Ashburn, Virginia, adjacent to its iconic main campus location, for $1 million per acre — a record high land price for the Northern Virginia data center market.

Equinix’s newly purchased plot in relation to its existing campuses (Map by Allen Tucker, JLL)
As the suitable properties in Loudoun County get more expensive, developers and users must increasingly look to higher-density development featuring multi-story data center designs.
Read more: Equinix Heats Up Data Center Alley’s Landgrab Rush
A typical single-story data center footprint might cover just 35 percent of the land. The high price of land in Loudoun County may begin to accelerate a trend toward two-story or three-story data center designs, which would help new development compete, since higher floor area ratios help reduce cost per square foot.
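The economics behind that trend can be sketched with some simple arithmetic. The 35 percent single-story coverage figure comes from the article; the parcel size below is a hypothetical round number, and the $100-per-square-foot land cost is the Santa Clara site figure cited later in this piece:

```python
# Illustrative only: how floor area ratio (FAR) affects land cost per built
# square foot on a fixed parcel. Parcel size is hypothetical.

SQFT_PER_ACRE = 43_560

acres = 100
land_cost_per_sqft = 100                 # ~$100/sq ft site cost cited for Santa Clara
land_sqft = acres * SQFT_PER_ACRE
land_cost = land_sqft * land_cost_per_sqft

single_story = 0.35 * land_sqft          # one floor covering 35% of the land
far_1 = 1.0 * land_sqft                  # FAR 1.0: floor area equals land area

print(f"Single-story floor area: {single_story:,.0f} sq ft")
print(f"FAR 1.0 floor area:      {far_1:,.0f} sq ft")
print(f"Land cost per built sq ft, single-story: ${land_cost / single_story:,.0f}")
print(f"Land cost per built sq ft, FAR 1.0:      ${land_cost / far_1:,.0f}")
# Higher FAR spreads the same land cost over more floor area, cutting land
# cost per built square foot (here, from roughly $286 down to $100).
```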
In markets with high barrier to entry, like Silicon Valley, existing industrial and office facilities are demolished to make way for multi-story data center designs. Data center developers are typically paying around $100 per square foot just for sites. In order to remain competitive, the latest phases built by CoreSite Realty and Vantage Data Centers in Santa Clara are four-story designs, with mechanical equipment on the first floor and three stories of data halls.
As available real estate in the Northern Virginia market gets tighter and tighter, we’re likely to see a similar trend there as well. | | 4:30p |
Microsoft Pledges to Use ARM Server Chips in Challenge to Intel By Dina Bass and Ian King (Bloomberg) — Microsoft Corp. is committing to use chips based on ARM Holdings Plc technology in the machines that run its cloud services, potentially imperiling Intel Corp.’s longtime dominance in the profitable market for data-center processors. Microsoft has developed a version of its Windows operating system for servers using ARM processors, working with Qualcomm Inc. and Cavium Inc. The software maker is now testing these chips for tasks like search, storage, machine learning and big data, said Jason Zander, vice president of Microsoft’s Azure cloud division. The company isn’t yet running the processors — known for being more power-efficient and offering more choice in vendors — in any customer-facing networks, and wouldn’t specify how widespread they eventually will be.
“It’s not deployed into production yet, but that is the next logical step,” Zander said in an interview. “This is a significant commitment on behalf of Microsoft. We wouldn’t even bring something to a conference if we didn’t think this was a committed project and something that’s part of our road map.”
Microsoft is planning to incorporate the ARM chips as it develops a new cloud server design, which it will discuss Wednesday at the Open Compute Project Summit in Santa Clara, California. The company is announcing new partners and components for the design, first unveiled last year, as it moves closer to putting the machines into its own data centers later this year. Because the design is open-source, meaning it’s freely available to be used and customized, other companies are also likely to use variations.
See also: Microsoft Said to Cut Purchases of HPE Servers for Cloud Service
Both the server design, called Project Olympus, and Microsoft’s work with ARM-based processors reflect the software maker’s push to use hardware innovations to cut costs, boost flexibility and stay competitive with Amazon.com Inc. and Alphabet Inc.’s Google, which also provide computing power, software and storage via the internet. While large cloud companies have moved toward greater use of unbranded servers, storage and networking gear, Intel chips have remained one of the sole big-name products widely in use. Microsoft’s work with ARM, in progress for several years, could pave the way for a real challenge to Intel, which controls more than 99 percent of the market for server chips.
Take a deep dive into Microsoft’s Project Olympus at Data Center World this April (that’s next month!), where Kushagra Vaid, general manager of Azure hardware infrastructure, will deliver a keynote on using the open source approach to cloud server design. More about the conference here.
While Intel is among companies making components to work with the Project Olympus design, ARM-chip makers such as Qualcomm and Cavium are also in the running, increasing the chance that other server customers will begin to use these processors. ARM, which licenses its chip designs to manufacturers, is owned by Japan’s Softbank Group Corp.
Any challenge to Intel’s dominance in server chips is a threat to its most profitable business and main revenue driver as demand for PC processors continues to shrink. The company’s Data Center Group turned $17.2 billion of sales into $7.5 billion of operating profit in 2016, and Intel has been running ads that say, “98 percent of the cloud runs on Intel.”
See also: Microsoft Joins Facebook’s Push to Disrupt Telco Infrastructure
Microsoft’s server spending decisions have the potential to impact suppliers’ bottom lines — its Azure service is No. 2 in cloud infrastructure behind Amazon, and it’s one of the biggest server buyers. Last month, computer maker Hewlett Packard Enterprise Co. reported disappointing quarterly revenue, citing “significantly lower demand” from a major customer. That client was Microsoft, people familiar with the matter said.
This isn’t the first time ARM manufacturers have taken aim at the server market. Other chipmakers have promised computer components — based on the ARM technology that dominates in mobile phones — that would loosen Intel’s stranglehold, yet none has succeeded. That may be changing this year as Qualcomm, one of the few companies that can rival Intel’s spending on research and design, begins offering its first server processor and as other chipmakers finally field long-promised chips capable of competing.
“This is a marathon, it’s not a sprint. I’m not starting to count the dollar bills any time soon,” said Anand Chandrasekher, a former Intel executive who heads Qualcomm’s server chip unit. “One day in a few years we will wake up and say, ‘this is pretty cool, when did that happen?’”
Intel didn’t immediately return requests for comment.
Microsoft will give an update on its work on Project Olympus today in a keynote speech by Kushagra Vaid, general manager of Azure Hardware Infrastructure at Microsoft, as part of a track of sessions on the Microsoft design at the conference. Partners including Qualcomm, Intel, Dell Technologies, Hewlett Packard Enterprise, Advanced Micro Devices Inc. and Samsung Electronics Co. are making chips, servers and components for use in the Microsoft design, said Vaid, who spent 11 years at Intel before joining Microsoft. | | 5:30p |
This Unusual Type of Data Center Lease Can Save You Millions John Heiderscheidt is a licensed attorney and a data center broker. He handles development and compliance for MDI Access, Inc.
The time has come. Your company needs to relocate its data center. You’ve been assigned to find the new site and reduce operational expenses (Op Ex) in the process. Your first instinct is to move to the cloud. “It’s saved others money,” you think to yourself. And, of course, your CFO has heard that too.
What you haven’t heard is that a cloud migration can be fraught with headaches. It requires careful planning and familiarity with hyperscale architecture. Cloud migration breeds staff apprehension. In some instances, it comes with the expense of re-architecting applications. By the time you factor in security vulnerabilities, the savings don’t seem all that compelling. Fortunately for your company, there’s another way to reduce Op Ex while remaining in an enterprise environment that offers you more control over your infrastructure: you can use the true value of your data center lease to subsidize your rental obligations.
See also: How to Survive a Cloud Meltdown
Whether you sign your next lease with a traditional data center landlord or a cloud infrastructure provider, you’re signing a lease. That lease creates an income stream that can be sold on the open market at a capitalization rate. In the traditional model, your lease conveys only the right to use the data center or cloud infrastructure in accordance with the written lease terms. Your landlord retains 100 percent of the income associated with the value of that lease. But what if you found a landlord willing to share a percentage of the value created by your lease?
Let’s look at the difference in Op Ex by reviewing two simple examples:
Ex. 1: Your company signs a 10-year lease with a data center provider at a rate of $125/kW per month with a power draw of 500kW. Your lease liability is $7.5 million. The value of that lease, at a cap rate of 7.5 percent, is $10 million. Your company receives none of that value.
Ex. 2: Your company signs a lease with a data center developer at a rate of $125/kW per month with a power draw of 500kW. Your lease liability is $7.5 million. The value of that lease, at a cap rate of 7.5 percent, is $10 million. The developer gives your company $1.5 million from the sale of your company’s lease, reducing your rent obligation to $6 million, or $100/kW.
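The arithmetic in both examples can be worked through explicitly. Two reading assumptions are needed to make the numbers reconcile: the $125/kW rate is monthly, and the cap rate is 7.5 percent (0.075):

```python
# The two lease examples above, worked through. Assumes the $125/kW rate
# is per month and the cap rate is 7.5 percent (0.075).

kw = 500                 # critical power draw
rate_per_kw = 125        # $/kW per month
years = 10
cap_rate = 0.075

annual_rent = kw * rate_per_kw * 12           # $750,000/year
lease_liability = annual_rent * years         # $7.5M over the 10-year term
lease_value = annual_rent / cap_rate          # $10M capitalized value

tenant_share = 1_500_000                      # developer's payment in Example 2
net_obligation = lease_liability - tenant_share
effective_rate = net_obligation / (kw * 12 * years)

print(f"Lease liability: ${lease_liability:,.0f}")
print(f"Lease value:     ${lease_value:,.0f}")
print(f"Net obligation:  ${net_obligation:,.0f} (${effective_rate:,.0f}/kW)")
```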
In example 2, your company lowered its Op Ex by 15 percent with just the stroke of a pen. It also found a data center that will be newly constructed. This means brand new infrastructure. It also means more input and control over design decisions than you ever thought possible from a traditional data center landlord. Best of all, none of the headaches that come with transitioning to a cloud platform. This is tenant equity participation, and it is the new frontier of reduced Op Ex in the data center industry.
Ideal candidates for tenant equity participation have a critical power requirement of 250 kW or higher and will be relocating in three years or less. If your company fits this mold and isn’t convinced the cloud is a serious migration alternative, start exploring the new frontier of Op Ex reduction today.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 6:00p |
AI for Everyone: Salesforce Einstein Wants to ‘Democratize’ Artificial Intelligence  Brought to You by Talkin’ Cloud
Hiring one data scientist could cost your organization around $100,000 per year, but with Salesforce Einstein, you may not need to budget for that role just yet.
On Tuesday, Salesforce Einstein – the CRM giant’s foray into baked-in artificial intelligence – reached general availability, months after Salesforce first unveiled Einstein at Dreamforce in October.
This week Salesforce unveiled more specifics around what AI can do for its customers and partners, providing case studies and customer perspectives at an event in its San Francisco office that was live streamed on its website.
“Now everyone has a data scientist,” a senior engineer at Salesforce told the audience of investors and customers on Tuesday, describing how Salesforce Einstein leverages data from its email, CRM, and social products that it has collected “from the very beginning.”
The idea behind Einstein, according to Jim Sinai, VP of Salesforce Einstein Marketing, is that Salesforce is “democratizing artificial intelligence and putting it inside of every single CRM feature so that everyone has a data scientist in their day.”
“We had to solve for the fact that every single one of our customers are different,” Sinai said in an interview with Talkin’ Cloud. “They use our functionality differently; they customize it to their business and their industry so the whole way of doing AI which is a very human-led process of data sampling, feature selection, model selection, etc. is not scalable. The core principle of AI is that it is automated machine learning. We’re actually using AI to build AI.”
Einstein Vision is the first platform feature to reach general availability; it is a set of APIs that will enable customers and partners to use image recognition in their CRM and applications.
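The general shape of such an image-recognition API call can be sketched as a REST request. The endpoint URL, field names, and model ID below are illustrative assumptions, not Salesforce’s actual API contract:

```python
# Hypothetical sketch of a REST prediction request to an image-recognition
# API like Einstein Vision. All endpoint and field names are assumptions.
import json
import urllib.request

def build_prediction_request(image_url: str, model_id: str, token: str):
    """Build (but do not send) a JSON prediction request for an image URL."""
    payload = json.dumps({"sampleLocation": image_url, "modelId": model_id})
    return urllib.request.Request(
        "https://api.example.com/v1/vision/predict",   # placeholder endpoint
        data=payload.encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_prediction_request(
    "https://example.com/shelf-photo.jpg", "ProductDetector", "ACCESS_TOKEN")
print(req.get_method(), req.full_url)
```

In practice the response would carry per-label confidence scores, which the application maps back into CRM records.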
“In the spectrum of AI one of the holy grails is helping computers understand and see images, and how to unlock information from the vast world of photography,” Sinai said. “How do companies take image recognition and put them into their customer relationship workflows, things like visual search, brand detection, and even product identification?”
Customers will be able to train Einstein Vision to recognize their own brand and products, he says. For example, Salesforce customer Coca-Cola can use Einstein to help retailers identify which products to restock. In a demo, Richard Socher, chief scientist at Salesforce, showed how a retailer could take a photo of a fridge with an iPad, and Einstein would identify how much inventory of each product there is, which products need to be reordered, and which products the retailer should add based on customer demand, then actually order the products, all from within a single application and experience.
The use of Einstein is not limited to big companies like Coca-Cola, though. A mom-and-pop roofing installer could use Einstein Vision to identify what type of roof a lead has from the address entered on a website lead form; from within Salesforce CRM, Einstein can pull up the lead’s house on Google Street View and identify whether the roof is sloped or not, Sinai said.
“We’re putting Einstein right into the base of the platform so what that means is partners can extend the power of Einstein to all their applications,” he said. “All these insights that Einstein is putting into all of these clouds are extensible to every application. If you are a pure sales cloud ISV you can take all the insights from Einstein and extend them into your application. If you’re a channel partner on Marketing Cloud or in Commerce Cloud you can help your customers get to more successful and more impactful ROI, faster.”
The possibilities with Einstein and AI are limitless, and much more accessible than other AI services, which may require significant investment in consultants and integrators, Sinai says.
“We have this unique problem that all of our features – whether it’s an opportunity or a marketing email feature – they’re all general features that are different for customers based on their industry, the size of their customer, how they customize it,” he said. “We’re able to learn from all that data and automatically select the right model that is going to lead the customer to the right result. And because we’re building this at the application-layer we can integrate it right into the UI, we can get the feedback from the users right away.”
With Einstein, Sales Cloud customers will be able to see a “Lead Score,” which evaluates leads based on top predictive factors. Sinai said that one of its customers of Sales Cloud is already seeing results.
Silverline, a consulting firm out of New York, is seeing 30 percent higher close rates with Einstein, Sinai says.
“Their reps are saving as much as two hours a week on tasks that are being automated by Einstein,” he says.
Salesforce, IBM Team Up in Artificial Intelligence
In the lead up to Tuesday’s event, Salesforce announced a partnership with IBM on Monday whereby they will deliver joint solutions that combine IBM Watson and Salesforce Einstein.
The partnership will initially see an integration of IBM Watson APIs into Salesforce to bring predictive insights from unstructured data with customer data delivered by Salesforce Einstein.
The partnership will also enable customers to bring together on-premise enterprise and cloud data with specialized integration products for Salesforce, according to a press release.
“Within a few years, every major decision—personal or business—will be made with the help of AI and cognitive technologies,” Ginni Rometty, chairman, president and chief executive officer, IBM said in a statement. “This year we expect Watson will touch one billion people—through everything from oncology and retail to tax preparation and cars. Now, with yesterday’s announcement, the power of Watson will serve the millions of Salesforce and Einstein customers and developers to provide an unprecedented understanding of customers.”
This article originally appeared on Talkin’ Cloud. | | 7:21p |
OCP Launches Marketplace for Open Source Data Center Hardware Open Compute Project, the open source data center and hardware design community Facebook founded six years ago, has launched an online marketplace where companies can shop for official OCP-accepted hardware as well as hardware “inspired” by specs and designs open sourced through the project.
While operators of massive, hyper-scale data centers, the likes of Facebook and Microsoft, have used OCP to source hardware that’s custom designed for their workloads and to drive down the cost of their data center hardware by having vendors compete to supply essentially the same products, OCP gear has not been easy to source for smaller data center operators who do not buy at the same volumes.
The OCP Marketplace appears to be an answer to that problem, which has been frequently cited as one of the biggest impediments to wide adoption of OCP hardware by enterprises. Long order delivery times have been another major barrier.
Read more: Why OCP Servers are Hard to Get for Enterprise IT Shops
OCP chairman, Jason Taylor, announced the OCP Marketplace at the organization’s annual US summit in Santa Clara, California, Wednesday. The marketplace currently lists 70 products that are ready to purchase from the vendors specified on the site, he said.
The range of vendors listed is rather limited at the moment: Hewlett Packard Enterprise, the Taiwanese hardware manufacturer Wiwynn, the network equipment maker Edgecore Networks, also based in Taiwan, US supplier Penguin Computing, and the Japanese systems integrator ITOCHU Techno-Solutions Corp.
The only OCP-accepted products listed now are five data center switches by Edgecore. The supplier also lists three other switches based on the 100GbE Wedge 100 switch designed by Facebook.
HPE lists five of its CloudLine servers, which are OCP-inspired. The line represents the company’s off-the-shelf play in commodity hardware for hyper-scale data centers.
There are also servers and storage systems by Wiwynn and HPE.
Explore the OCP Marketplace here |