Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Tuesday, May 2nd, 2017
12:00p
Micro-Data Centers Out in the Wild: How Dense is the Edge?
As places like factories, big stores, and logistics warehouses get increasingly instrumented and automated, demand for computing capacity to process the data those instruments generate in real time is growing, fueling the rise of micro-data centers, which provide a relatively simple way to extend the edge of the corporate network to these facilities.
This beast is different from the edge data centers built by companies like EdgeConneX: relatively large colocation facilities where the networks of content providers like Google and Netflix end and their traffic is picked up by last-mile ISPs, who carry it to their end users.
The micro data center on the factory floor, for example, is the edge of the network whose core is the enterprise data center, often located far away from the plant. Data collected from factory equipment gets processed at the edge, and only a small fraction of it is sent back to the core.
How big are these edge deployments, and what is their power density? By most industry accounts they will only grow in number, so those are important questions to ask. For answers, we turned to Alex Pope, director, unified infrastructure, at Vertiv (formerly Emerson Network Power), who’s seen firsthand many micro data center deployments by the company’s customers, under its previous and current brand. (Disclaimer: the company was still Emerson at the time of the interview.)
At least today, power densities in most edge deployments are similar to average densities in enterprise data centers. According to Pope, edge nodes in telco central offices tend to be higher-density, while edge computing infrastructure in office buildings, retail outlets, and manufacturing facilities ranges from 3kW to 6kW per cabinet, in rare cases reaching as high as 10kW.
Here’s a more detailed look at five micro-data center edge use cases:
In the Office
In offices, micro data centers run corporate applications locally and support connectivity to central enterprise server farms. Typical density is 2kW or 3kW per cabinet, and most deployments are single cabinets, which is low enough to require no dedicated cooling capacity. “It’s small enough that if you do the room right you don’t need active cooling,” Pope said.
His assessment is that we’re in the early days of micro-data centers in offices, and that this category of edge deployments will be the last to see substantial increases in compute requirements.
In the Store
Retail outlets, on the other hand, are growing their in-store computing capabilities. Pope and his colleagues often see multiple networks in a single store, with security systems, registers, and inventory management running on separate networks.
There’s also growing use of big data analytics in stores, as retailers try to identify shoppers in real time as they walk in and offer them discounts or advertise the products they’re most likely to purchase, making the brick-and-mortar shopping experience more like shopping online.
Some retailers will even provide more or less network bandwidth to your device based on how important a customer they think you are, Pope said. Someone who’s likely to make a big purchase will get faster WiFi, for example, while a child watching Netflix will see grainy video on their screen.
Probably the biggest growth driver for compute capacity in stores today is security, as retailers install more and more surveillance cameras that generate more and more data that needs to be stored and processed, he said.
While typical edge data center power density in these locations is 3kW to 6kW, Pope has seen deployments at 8kW to 10kW. Total capacity per site ranges from 40kW to 80kW.
In the Distribution Warehouse
Take one hop back on the travel path of a consumer product – to the distribution warehouse – and you’ll see more edge compute capacity. As retailers compete with Amazon, they’re stepping up their home delivery game, which means putting more distribution centers closer to where customers are (many former stores get converted to warehouses for this purpose).
And distribution centers need compute power to manage transactions, inventory, and shipping data to ensure you get your orders as quickly as possible. Onsite systems that manage this can total anywhere from 40kW to 200kW, with power densities in the 6kW to 8kW range.
On the Factory Floor
Shifting attention from the distribution warehouse to the manufacturing floor, you will see computing systems of density similar to that in retail outlets: 3kW to 6kW. There can be anywhere from five to 15 cabinets per plant, Pope said.
Factories often use a hub-and-spoke architecture, where distributed networking equipment around the campus collects sensor data and passes it along to a centralized computing room for processing. “Most of the production is run by those centralized compute rooms,” he said.
The Edge of the Telco Cloud
Computing rooms are also what many telco central offices are becoming. As telcos transform their networks to support virtual network services and automate network management using software, a telco network becomes a cloud, while the central office becomes a cloud edge node.
This is where you’ll see some of the higher-density micro-data center deployments, in the neighborhood of 8kW to 10kW per cabinet. Like retail distribution warehouse edge nodes, central office data centers range from 40kW to 200kW total per site.
Simple to Buy, Deploy, Expand
While there are energy efficiency benefits to these self-enclosed, bite-size bits of IT infrastructure, the main reason micro-data centers are popular across the aforementioned edge computing use cases is simplicity of procurement and deployment, Pope said. Customers pick from a menu of configurations and get a single product, including everything from power and cooling equipment to fire suppression.
Capacity of the edge nodes is expandable, and vendors design them with the expectation that customers may eventually want to add UPS battery capacity, cooling, or racks. Pope noted that he sees more customers planning for expansion at the edges of their networks. “We see that absolutely being part of the plan,” he said.
4:36p
Future Allen Campus Would Be CyrusOne’s Largest Texas Data Center
Although the 65-plus acres of land in Allen, Texas, is zoned for agriculture, CyrusOne doesn’t plan on growing anything there except its burgeoning data center business.
The Dallas Business Journal reported on Monday that the company is proposing to build what could be one of the largest data center campuses in Texas, just outside of Dallas.
According to the plans filed with the City of Allen Planning and Zoning Commission, “The Concept Plan shows three separate data center buildings to be constructed in three phases. The building on Phase I will be 350,950 square feet; the building on Phase II will be 619,100 square feet; and the building on Phase III will be 412,800 square feet, for a total of 1,382,850± square feet.”
The data center buildings have been designed with two floors and will have a maximum height of 56 feet. The assets will be protected by concrete walls measuring between eight and 20 feet.
See also: CyrusOne Expands North Texas Data Center
The vote by the commission is expected today. If approved, this will be the eleventh and largest facility owned and operated by CyrusOne in Texas.
Meanwhile, construction on the last two buildings in its Carrollton campus in North Texas is underway. Upon completion, the campus will accommodate up to 670,000 square feet of data center space and total about 80 MW. In the last 18 months alone, CyrusOne says it has doubled the number of people managing operations there.
Its other Texas campuses are located in Dallas, Austin, San Antonio, and Houston. The company also has data centers on both coasts of the US and in the Midwest. Its international footprint consists of data centers in Singapore and the UK.
5:01p
DartPoints to Expand Micro-Colocation Facilities to Five US Cities
DartPoints’ plan to create a national network of micro-colocation facilities designed and built by Schneider Electric in 100-kilowatt increments began in a tiny data center in Dallas.
A little over two years later, the colocation provider has announced much bigger news: It is branching out into five new US markets with these “private colo” facilities, all centrally managed by Schneider’s DCIM software called StruxureWare.
Expansion cities include: Phoenix; Las Vegas; St. Paul; Kansas City, Kansas; and the Greater New York area. Each facility will be based on the same edge model the companies tested in Dallas.
DartPoints said it has been able to deliver fully redundant 100kW systems, either on-premises within buildings or at colocation campuses, at what it claims are substantial cost savings.
See also: Micro-Data Centers Out in the Wild: How Dense is the Edge?
“When DartPoints first came to us, they were looking to bring a model to market that was different from the traditional colocation business, offering the same economies of scale as the larger data centers but with a smaller footprint,” Mike Hagan, a Schneider VP of sales, said in a statement.
According to Schneider, these data centers can be replicated anywhere in the US within 45 days.
“Our key business driver is going wherever the customer is located and bringing applications there to drive latency down and reduce cost,” Hugh Carspecken, CEO of DartPoints, said in a statement. “Schneider Electric’s edge solutions enable us to easily build to a user’s needs without the compatibility issues that full customization requires, while streamlining design and construction for quick deployment, flexibility and scale.”
DartPoints said it expects its revenue to double in 2017.
See also: DartPoints Plans National Network of Micro-colo Sites
5:30p
What to Consider When Using DBaaS Around the World
Ariel Maislos is CEO of Stratoscale.
It’s been a few years since “anything as a service” started gaining popularity. Since then, providers have improved the performance and reliability of these services, and the largest companies in the world have begun to adopt them. When it comes to Database as a Service (DBaaS), the reason is simple: Instead of setting up and taking care of the whole system by yourself, you can delegate the setup and ongoing care to someone else. If you decide to go with a cloud provider like AWS or Azure, you’re delegating tasks to some of the top experts in the world.
Although relocating your data is always a sensitive process, the fact is that out-of-the-box DBaaS solutions are more secure than out-of-the-box on-premises installations. With DBaaS, the OS is all set up, the firewall is up and running, and the database has the latest patches.
Types of Databases
There are two major database camps: relational databases and NoSQL databases. Relational databases are the most frequently relied upon because the technology is mature and proven, and familiar to most database managers. NoSQL databases have gained in popularity over the past decade. The benefit of using NoSQL databases is that you can easily store unstructured data without having to define tables up front. Although this can be extremely useful, it is also a tempting way for developers to simply store data that they’ll structure and index at a later date. While NoSQL databases are not at fault for that per se, they do enable developers to quickly accumulate a lot of technical debt.
Almost every cloud provider has its own set of relational and NoSQL databases as a service. When choosing which one to use, you should not only decide between relational, NoSQL or a combined approach, but also investigate which database model best suits current and future needs.
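To make the tradeoff concrete, here is a minimal sketch in Python, using the standard library’s sqlite3 module as a stand-in for a relational DBaaS and a plain JSON document as a stand-in for a NoSQL store; the table and field names are invented for illustration.

```python
# A minimal sketch of the schema tradeoff: relational storage demands a
# schema up front, while document storage accepts any shape immediately.
import json
import sqlite3

# Relational: the table structure must be declared before any data is stored.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, customer TEXT, total REAL)")
conn.execute("INSERT INTO orders (customer, total) VALUES (?, ?)", ("Acme", 99.5))

# Document-style: any shape can be stored right away, which is convenient
# but defers the structuring (and indexing) work to later.
order_doc = {"customer": "Acme", "total": 99.5, "notes": ["rush delivery"]}
serialized = json.dumps(order_doc)  # what a document store would persist

print(conn.execute("SELECT customer, total FROM orders").fetchall())
print(json.loads(serialized))
```

The convenience of the second path is exactly the technical-debt temptation described above: nothing forces the document’s structure to be decided before data accumulates.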
Benefits of DBaaS vs. Hosted Solutions
Before DBaaS became a viable alternative, hosted solutions were a must. Hosted solutions shouldn’t be thought of only as on-premises servers; they range from a single small virtual machine running MySQL to complex systems that combine your own infrastructure with rented infrastructure all over the world. The biggest difference between DBaaS and hosted solutions isn’t that one has an infrastructure and the other doesn’t, but who takes care of that infrastructure.
With hosted solutions, you are responsible for OS updates, DB updates, security, network and power outages, and software crashes, among other things. With DBaaS, on the other hand, these problems are delegated to IT professionals who work for your cloud provider. DBaaS does carry a price premium, but only if you ignore the maintenance, upgrade, and setup costs of hosted solutions. Once those costs are factored in, DBaaS should end up being cheaper, in addition to being more reliable. Another benefit of using DBaaS over hosted solutions is that you don’t have to take care of complex licensing: everything is included in the price.
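As a rough illustration of that cost argument, the following back-of-the-envelope comparison weighs a managed-database fee against a hosted instance plus administration labor. Every figure is a hypothetical placeholder, not a quote from any provider.

```python
# Hypothetical monthly figures; substitute your own quotes and salaries.
dbaas_monthly_fee = 1200.0      # managed instance, license included
hosted_instance_cost = 700.0    # VM rental or amortized hardware
hosted_license_cost = 250.0     # database license, per month
admin_hours_per_month = 10      # patching, upgrades, backups
admin_hourly_rate = 80.0

hosted_total = (hosted_instance_cost + hosted_license_cost
                + admin_hours_per_month * admin_hourly_rate)
print(f"Hosted: ${hosted_total:,.2f}/month vs DBaaS: ${dbaas_monthly_fee:,.2f}/month")
# With these assumptions the hosted option costs $1,750/month, so the
# apparent DBaaS premium disappears once maintenance labor is counted.
```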
Hybrid Solutions
There are scenarios when you want to combine your hosted databases with DBaaS. Some of the most common are:
- Partitioning your hosted database and keeping old data in DBaaS
- Using DBaaS as a read-only database to support an increased read load
- Migrating your hosted database to DBaaS in phases while keeping the whole database available to clients
When considering hybrid solutions, you need a certain amount of bandwidth for some of these scenarios, security precautions for both the hosted database and DBaaS, and a DBaaS that can support this kind of hybrid scenario.
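As one example, the read-only scenario in the list above can be supported by a small routing layer in the application. This is only a sketch with invented connection strings; a real deployment would lean on your database driver’s pooling and your engine’s replication features.

```python
# A minimal sketch of hybrid read/write routing: writes go to the hosted
# primary, plain reads go to a DBaaS replica to absorb the read load.
class HybridRouter:
    def __init__(self, primary_dsn: str, replica_dsn: str):
        self.primary_dsn = primary_dsn   # hosted, on-premises primary
        self.replica_dsn = replica_dsn   # DBaaS read-only replica

    def dsn_for(self, sql: str) -> str:
        # Anything that modifies data must hit the primary; plain SELECTs
        # can be served by the replica.
        is_read = sql.lstrip().upper().startswith("SELECT")
        return self.replica_dsn if is_read else self.primary_dsn

# The DSNs below are placeholders, not real endpoints.
router = HybridRouter("postgres://primary.internal/app",
                      "postgres://replica.dbaas.example/app")
print(router.dsn_for("SELECT * FROM orders"))         # routed to replica
print(router.dsn_for("UPDATE orders SET total = 0"))  # routed to primary
```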
With a DBaaS using DB engines from Oracle or Microsoft, you’ll get all the support you need for hybrid connections, replication, and other enterprise-grade features, though the price will be higher. There are cheaper DBaaS solutions that are good enough for many cases, but they won’t necessarily provide the flexibility you need.
Scaling Possibilities and Problems
Scaling is one of the most important reasons to migrate from a hosted solution to a DBaaS, and scaling is relatively easy to implement and use, up to a point. Scaling up and down is the easy part: you can increase and decrease the performance of your database with a click of the mouse, or on an automatic trigger or rule. The problem with this type of scaling is that you can’t scale up infinitely. There will come a time when your DBaaS is running on the best machine available from your cloud provider and you won’t be able to scale any higher.
At that point, it’s time to consider scaling out and scaling in. For stateless web apps, this is an easy process because they don’t need to remember any data, but for databases it requires restructuring the whole database, something that can’t be done in a couple of minutes. If your system requires you to scale, be sure to first scale up as much as possible and anticipate the moment when you’ll need to scale out, so that you can engage your DB experts to restructure the database and spread it over more than one instance. Even after restructuring everything, you won’t be able to scale out on demand the way you scale up, but you will be able to scale the individual instances up and down.
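To illustrate what that restructuring typically involves, here is a minimal sketch of hash-style partitioning in Python. The shard names and the integer customer key are assumptions for illustration; a production scheme would also need to handle resharding and uneven key distribution.

```python
# A trivial partitioning scheme: a deterministic mapping from a record key
# to one of several database instances. Instance names are placeholders.
SHARDS = ["db-shard-0", "db-shard-1", "db-shard-2"]

def shard_for(customer_id: int) -> str:
    # Every query for a given key must be routed to the same instance,
    # which is why restructuring an existing database takes real effort.
    return SHARDS[customer_id % len(SHARDS)]

for cid in (101, 102, 103):
    print(cid, "->", shard_for(cid))
# Each shard is now a smaller database that can still be scaled up and
# down independently, which is the end state described above.
```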
Multi-tenant Applications
If you’re developing a multi-tenant application, you should take extra care with the scaling and partitioning of your data.
Common design practices for placing tenant data follow three distinct models (a runnable sketch of the third model follows the list):
- Database-per-tenant. Each tenant has its own database. All tenant-specific data is confined to the tenant’s database and isolated from other tenants and their data.
- Shared database-sharded. Multiple tenants share one of the multiple databases. A distinct set of tenants is assigned to each database by using a partitioning strategy such as hash, range, or list partitioning. This data distribution strategy is often referred to as sharding.
- Shared database-single. A single, sometimes large, database contains the data for all tenants, whose rows are distinguished by a tenant ID column.
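As promised above, here is a runnable sketch of the shared database-single model, using Python’s built-in sqlite3 as a stand-in for any SQL DBaaS. The table, column, and tenant names are invented; the point is that the tenant ID filter is the only isolation boundary in this model.

```python
# Shared database, single schema: one table holds every tenant's rows,
# disambiguated by a tenant_id column.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (tenant_id TEXT, amount REAL)")
db.executemany("INSERT INTO invoices VALUES (?, ?)",
               [("tenant_a", 10.0), ("tenant_b", 20.0), ("tenant_a", 5.0)])

def invoices_for(tenant_id: str):
    # Scoping every query by tenant_id is the isolation boundary here;
    # forgetting the filter leaks one tenant's data to another.
    return db.execute("SELECT amount FROM invoices WHERE tenant_id = ?",
                      (tenant_id,)).fetchall()

print(invoices_for("tenant_a"))  # [(10.0,), (5.0,)]
```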
Designing a multi-tenant application is a process that takes time and warrants careful consideration. Some DBaaS solutions can help you with this process.
To provide every tenant with a database that meets your performance requirements, you need to either rent each database with a specific amount of resources or you need to scale it on demand. Of course, as mentioned earlier, it takes time to scale.
A nice approach to addressing this problem is to use “elastic pools” of databases. The idea is to rent a certain amount of resources and add a number of databases to that pool. Since the resources are already provisioned to you, databases can scale within the pool’s resources almost instantaneously. A multi-instance architecture like this will help you build a more flexible solution and save money.
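To see why pooling can save money, consider this rough, entirely hypothetical calculation; the per-tenant peaks, unit cost, and the 60 percent concurrency factor are placeholder assumptions that would have to come from your own telemetry.

```python
# Compare provisioning each tenant database for its own peak against
# sharing one pooled allocation sized for the combined, non-coincident peak.
tenant_peaks = [50, 40, 60, 30]   # per-tenant peak load, arbitrary units
cost_per_unit = 2.0               # monthly cost per provisioned unit

per_database = sum(tenant_peaks) * cost_per_unit  # each sized for its own peak
# Tenants rarely peak at the same time, so the pool can be sized for a
# fraction of the summed peaks (the 0.6 factor is an assumption).
pooled = sum(tenant_peaks) * 0.6 * cost_per_unit
print(f"Per-database: ${per_database:.0f}/mo, pooled: ${pooled:.0f}/mo")
```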
Minimizing Vendor Lock-in
Every cloud provider is trying to solve a problem for their clients and each one has a different idea of the ideal solution. As a result, you may end up using some services and features that are specific to only one cloud provider.
Cloud services rarely fail, but it does happen, and it’s up to you to assess how much risk you can accept. If you’re comfortable having your data stored with only one cloud provider, whether in a single region or spread all over the world, that’s fine. On the other hand, if you need to continue operating no matter what, you should distribute your services over different regions and different cloud providers, and perhaps even keep a backup hosted solution on-premises. To make this work, you’ll need to do a lot of planning and testing, and it will cost considerably more than using just one cloud provider – but you’ll have great redundancy.
Next Steps
DBaaS solutions are becoming more popular, secure and useful every day. If used properly, they can help you to save money on infrastructure and maintenance, while spreading your resources all over the world.
Nonetheless, there are some considerations, especially with respect to security, that you should weigh carefully before jumping on board this train. A good approach would be to start small and undertake the migration in phases using hybrid solutions; then decide whether you want to go all-in on DBaaS or keep the hybrid solution.
All major cloud providers have extensive documentation, tutorials, and tools to help you migrate your databases; still, it also makes sense to employ a company that resells the cloud through a partnership so they can assist you with the process and provide better insight.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
6:09p
Report: Shaw Looking to Sell Data Center Unit ViaWest
Canadian telco Shaw Communications is considering a sale of ViaWest, the data center services business it acquired for $1.2 billion just three years ago.
Shaw has retained Toronto-Dominion Bank to help it organize an auction for the business unit, for which it hopes to get significantly more than it paid, Reuters reported, citing anonymous sources. ViaWest operates about 30 data centers in the US.
The news comes the same week as two US telcos, separately, closed deals to sell off massive data center portfolios. Verizon Communications sold 29 data centers to colocation provider Equinix for $3.6 billion, and CenturyLink got $1.86 billion for 57 data centers sold to a group of private investors, who used the portfolio as the platform for a newly formed data center service provider called Cyxtera.
See also: Why Equinix is Buying Verizon Data Centers for $3.6B
Like Shaw, Verizon and CenturyLink bought into the data center business rather than building it organically. According to the Reuters article, analysts have ramped up pressure on Shaw to offload its data centers after the Verizon and CenturyLink deals were announced last year.
CenturyLink executives said the company was not willing to commit to the level of investment required to continue growing its data center business.
See also: Here’s What’s Next for CenturyLink’s Data Center Business
Both Verizon and CenturyLink have continued to provide various enterprise technology infrastructure services using the data center assets they’ve sold, turning from data center owners to customers of their respective asset buyers.
Shaw has recently been divesting non-core assets, investing instead in its mobile business, which competes with the wireless units of BCE, Rogers Communications, and Telus Corp. It used some proceeds from the sale of its media assets last year to pay for the C$1.6 billion acquisition of Wind Mobile, according to Reuters.
Catch up on recent developments at ViaWest on DCK
6:56p
AMD Stock on Course for Worst Day in More Than a Decade
By Ian King (Bloomberg) — Advanced Micro Devices Inc.’s stock is on course for its worst one-day decline in more than a decade after its second-quarter forecast deflated hopes that a new range of chips will take sales from Intel Corp.
The stock fell as much as 21 percent, the biggest intraday drop since January 2005, reversing the run-up it enjoyed this year. The company, which has spent months touting the capabilities of its new Zen design, reported sales that were in line with analysts’ estimates and profitability that fell short of some projections.
The first versions of Zen, called Ryzen, were on sale for about a month in the first quarter. The report Monday didn’t provide enough proof the product will meet expectations that fueled the quadrupling of Advanced Micro Devices’ shares last year.
See also: AMD Paves Road Back to Data Center with High-Performance Naples SoCs
Revenue in the current quarter will increase 17 percent, plus or minus 3 percent, from $984 million reported in the first quarter, the Sunnyvale, California-based company said in a statement. That indicates sales of about $1.15 billion, compared with the average analyst estimate of $1.12 billion, according to data compiled by Bloomberg. Gross margin, or the percentage of sales remaining after deducting costs of production, will be 33 percent, Advanced Micro Devices said.
“While the company appears to be making progress with its new Ryzen product, first-quarter results clearly fell short of high expectations the stock was reflecting,” wrote Srini Pajjuri, an analyst at Macquarie Capital Inc. “The window of opportunity may be closing as Intel is expected to ship its own new products in the coming months.”
Net loss in the period was $73 million, or 8 cents a share, compared with a loss of $109 million, or 14 cents, a year earlier. Sales rose 18 percent. That compared with an average estimate of a loss of 7 cents a share on revenue of $984 million.
AMD is the second-largest maker of graphics processors used in add-in cards for gamer PCs, behind Nvidia Corp. It’s also in the process of rolling out new designs for that market.
Zen designs, branded Ryzen, went on sale in the first quarter for high-end desktop computers. Other models for laptops and servers will follow later this year.
See also: AMD Reaches Deal With Alibaba to Use Chips in Cloud Service