Data Center Knowledge | News and analysis for the data center industry
Tuesday, June 16th, 2015
12:00p
Forsythe’s Massive Chicago Area Data Center Achieves Tier III Constructed Facility

Forsythe Data Centers has achieved Uptime Institute Tier III Certification of Constructed Facility ahead of next week’s grand opening of its suburban Chicago data center.
Located in Elk Grove Village, Illinois, the facility spans a massive 221,000 square feet with 30 MW of critical IT load. In addition to data center space, the new facility is home to Forsythe’s Technology Evaluation Center and the Forsythe Integration and Configuration Center, both differentiators according to the company. Forsythe helps clients plan, test, procure, build, migrate, house, manage, and even finance the infrastructure that goes inside their data center suites.
Given this is the company’s first foray into data center facility construction, the Tier III Constructed Facility achievement is all the more impressive. It is notably harder and more time-consuming to obtain Constructed Facility certification than Design Certification. The new facility is one of only two in the state to achieve the certification and one of fewer than 50 in the country.
“Any design can be defeated during construction and testing,” said Julian Kudritzki, COO of Uptime Institute. “Forsythe has achieved Tier III Certifications of Design Documents and Constructed Facility, demonstrating ‘truth in certification’ through rigorous planning, execution, and system testing.”
The Tier Certification of Constructed Facility process includes load-bank demonstrations under real-world conditions, according to Kudritzki.
Forsythe is not a new company in the data center space, despite this being its first foray into data center development. Founded in 1971, Forsythe is one of the largest independent IT integrators in North America, serving approximately 1,000 U.S. and Canada-based companies. It has a mature data center practice, specializing in data center migration assessments and optimization. That expertise reveals itself in what the company dubs Retail+.
The company offers private data center suites and several hands-on services. The suites range in size from 300 to 4,000 square feet, with 200 to 300 watts per square foot of power density. Each suite has dedicated power infrastructure with 2N UPS and dedicated cooling infrastructure with N+1 precision air conditioning. Generator backup is building-based and shared across suites; once a client reaches 4,000 square feet, generator support becomes private.
The suites place Forsythe’s offering somewhere between wholesale and retail colocation. Forsythe also offers the shorter contract lengths often seen in retail, along with several services that go beyond the typical colocation package.
Migration services a differentiator
“We see customers wanting more than ping, power and pipe,” said David Carlson, sales general manager, Forsythe Hosting Solutions. “Our big differentiator is we help them migrate. Forsythe is positioning as more than just a colocation conversation.”
In addition to providing colocation space, Forsythe said it is a “one-stop shop” for IT infrastructure and data center needs, with both remote hands and on-site support for strategy and migration. The company also offers managed hosting and security services, private cloud, business continuity and disaster recovery services.
This year is viewed by many in the industry as a tipping point for metro networks, driven by massive growth in data center and cloud traffic that is overtaxing today’s network infrastructure and exposing its inflexibility. That is why the company has also made a big investment in owning its own fiber into the city. The data center has a dedicated connection to 350 Cermak, the major carrier hotel in downtown Chicago.
A partnership with networking vendor Ciena was announced earlier this year. “Ciena provided us a little more flexibility and redundancy,” said Scott Bunnell, senior manager of network services at Forsythe. “The encryption model drove us towards Ciena. As much as security is important in the data center, connectivity out of the data center is just as important. Ciena’s secure modules give customers a little more security than somewhere else.”
Ciena’s Duncan Puller, vice president of Data Center and Cloud, also noted the growing importance of secure connectivity as the general move to as-a-service models accelerates. “What I found that’s really interesting, is more and more of the content and applications are proliferating and the data center is the point of confluence where everything comes together,” said Puller.
Ciena’s core business is in packet optical infrastructure, but Puller said the data center interconnect business is something Ciena has invested in more heavily in recent years.
Forsythe is also pursuing US Green Building Council LEED certification, SSAE 16 attestation for the data center’s controls, and ISO 9001:2008 certification for its quality management system. The company’s headquarters is in nearby Skokie, Illinois, where it also has a data center.

3:55p
How to Reduce Backup Costs with Energy-Smart Storage

There’s definitely been a boom in the amount of data that today’s businesses and modern data centers have to manage. With the proliferation of new devices, new ways to deliver content, and cloud computing, it’s clear that data consumption will continue to grow. Meanwhile, we’re seeing administrators struggle to control the flow of information, store it efficiently, and create cost-effective backup solutions.
In this whitepaper, we see that there’s more to implementing a backup solution than simply creating a strong backup strategy. It’s also about controlling the data within it, making it more resilient, and deploying it on disks that offer good energy utilization and high capacity. New kinds of tools allow backup storage solutions to operate at lower cost by integrating:
- Intelligent power management tools
- Greater reliability
- Less power usage
- Better data storage economics
- Easier overall system and array management
These tools also allow administrators to worry less about the data they’re storing and focus on creating better retention policies for the business. You can now work with array solutions that offer powerful anti-vibration technologies and even advanced cooling features. Download this whitepaper today to learn more about these tools and the direct benefits of working with intelligent, energy-smart, high-capacity storage arrays.

4:40p
Software Still Playing Catch-Up to Flash Memory Advancements

DeWayne Filppi is Director of Solution Architecture at GigaSpaces.
For as long as most can remember, applied computer storage architectures have been based on allocating volatile, expensive RAM and persistent, cheap disk. Recent trends in flash memory technology have provided more options for architectures seeking to optimize performance with an eye on cost. In the current marketplace, however, software solutions that capitalize on flash-based storage are lagging in development. Thus, software solutions that exploit flash storage offer a competitive edge.
A Bit of Storage History
The split of storage into fast/expensive and slow/cheap(er) is rooted in history. From the mid-1950s, RAM/core memory was the location for programs and working set data, whereas tape or drum might be the source and sink for persistent/secondary data. Disk drives arrived in the 1960s, replacing drums and relegating tape to archival storage. Since the ’60s, the disk drive has dominated secondary storage due to advancements in controller technology and density.

Over time, the cost/GB of disk drives has plummeted while device bandwidth has increased; however, latency (or random bandwidth) is ultimately constrained by the need to rotate and seek. Unfortunately, improvements in seek time and rotational speed have advanced slowly relative to interface speed and storage density.
The Arrival of Flash and What the Future Holds
Flash memory appeared in the 1980s and became commercially successful for small devices requiring removable storage (e.g., digital cameras). NAND flash memory, as a block-oriented, random-access, persistent, solid-state alternative to hard drives, was initially too expensive, and capacities were too small to compete broadly with disk drives. Since then, flash capacities have expanded and prices have dropped, making flash solid-state drives (SSDs) affordable in laptops, where their speed and low energy consumption are attractive.

As price/capacity measures continue falling, flash is displacing disk storage where high performance, durability and energy efficiency are valued more than bulk storage capacity. Some projections have SSD storage competitive with enterprise-class disk storage as early as 2017.
Other factors are beginning to tip the scales toward flash. For applications that value increased performance as much as raw storage capacity per dollar, flash wins. Applications like real-time analytics, market data, mobile, IoT, and real-time e-commerce value flash over disk. It is conceivable that flash will soon trump disk as the cost/GB differential continues to shrink.
The Current Reality—Hybrid Solutions
Despite the closing gap, we will be living in a hybrid storage world for some time. Flash memory does not replace disk storage; rather, it provides another “tier” of storage as part of a palette of options storage architects can use, optimizing for factors like latency, throughput, device cost, energy/cooling cost, capacity and durability.
The addition of another storage tier has increased system complexity. To minimize complexity, flash SSDs have typically imitated disk drives. This permits a standard file system interface to be presented to applications, including databases. This imitation carries a performance cost of between one and two orders of magnitude in sustained I/O operations per second (IOPS). To maximize performance, a native API must be used, which adds significant complexity compared to the standard file system interface.
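Because the drive’s translation layer makes an SSD look like any other block device, the same file-level code runs unchanged against flash and disk, which also makes the IOPS gap easy to observe. Below is a minimal micro-benchmark sketch in Python; the mount points are hypothetical placeholders, and absolute numbers will vary widely by device, file system, and queue depth.

```python
# Illustrative only: times small synchronous writes through the standard file
# interface. Run it against a flash-backed path and a disk-backed path to see
# the sustained-write gap the article describes.
import os
import time

def timed_sync_writes(path, count=1000, size=4096):
    buf = os.urandom(size)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(count):
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # force each write through to the device
    return count / (time.perf_counter() - start)  # rough write IOPS

# Hypothetical mount points; substitute real flash- and disk-backed paths.
for label, path in [("flash", "/mnt/ssd/iops_test.bin"),
                    ("disk", "/mnt/hdd/iops_test.bin")]:
    print(label, round(timed_sync_writes(path)), "sync writes/sec")
```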
To minimize complexity and maximize performance, application platforms and APIs must step in and provide seamless, high performance access to flash devices. At the API layer, cross-vendor APIs are being developed. At the software platform layer, data already abstracted as objects (and typically mapped to tables) can be mapped by a higher-level API to both database tables and flash storage.

The marriage of these two concepts — a portable flash API and a universal object mapping layer — makes the conceptual boundary between storage tiers disappear. The allocation of data becomes a deployment decision, not a programming decision. This simplifies data locality architecture and programming, seamlessly placing data that needs the immediacy of RAM, the speed and persistency of flash, and the bulk storage of disk in its proper place.
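As a minimal sketch of what “allocation as a deployment decision” might look like from the application’s point of view, consider the toy mapping layer below. The class names, tier labels, and configuration shape are hypothetical illustrations, not GigaSpaces’ or any other vendor’s actual API; the in-memory dictionaries stand in for real RAM, flash, and disk backends.

```python
from dataclasses import dataclass, field

@dataclass
class TierPolicy:
    # Maps an object type (e.g., a table or collection name) to a storage
    # tier chosen at deployment time: "ram", "flash", or "disk".
    placements: dict = field(default_factory=dict)
    default_tier: str = "disk"

    def tier_for(self, object_type):
        return self.placements.get(object_type, self.default_tier)

class ObjectStore:
    """Application code writes objects without naming a tier; the policy
    loaded from deployment configuration decides where each object lives."""
    def __init__(self, policy):
        self.policy = policy
        self.tiers = {"ram": {}, "flash": {}, "disk": {}}  # toy stand-ins

    def put(self, object_type, key, value):
        self.tiers[self.policy.tier_for(object_type)][key] = value

# Deployment configuration, not application code, decides that market ticks
# need the speed and persistence of flash while audit logs can live on disk.
policy = TierPolicy(placements={"market_tick": "flash", "audit_log": "disk"})
store = ObjectStore(policy)
store.put("market_tick", "AAPL:2015-06-16", b"last=127.17")
```

The point of the sketch is that moving “market_tick” data from flash to RAM would be a one-line policy change, with no edits to the code that calls put().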
Software Solutions Lag
So the good news is that flash storage capacity and affordability are improving. The bad news is that software systems are playing catch-up.
The basics are in place at the file-system level. The simplest approach to employing flash is to replace conventional disk drives with their flash equivalents. The Flash Translation Layer (FTL) provided by the drive presents a seamless integration point at the OS level. Another strategy is automated tiered storage, which uses flash as a persistent, least recently used (LRU) cache for conventional disk storage. Asynchronous processes periodically move cold data to disk and hot data to flash.
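As a rough sketch of that tiering idea, the toy class below keeps hot items in a flash “tier” managed as an LRU cache and demotes the least recently used items to a disk “tier” once the flash budget is exceeded. Both tiers are plain in-memory dictionaries, and demotion happens synchronously for brevity; a real system would use actual devices and move cold data asynchronously in the background.

```python
from collections import OrderedDict

class TieredStore:
    def __init__(self, flash_capacity):
        self.flash = OrderedDict()  # hot data, ordered by recency of use
        self.disk = {}              # cold data
        self.flash_capacity = flash_capacity

    def read(self, key):
        if key in self.flash:
            self.flash.move_to_end(key)      # mark as recently used
            return self.flash[key]
        value = self.disk[key]
        self._promote(key, value)            # hot again: pull back into flash
        return value

    def write(self, key, value):
        self._promote(key, value)

    def _promote(self, key, value):
        self.flash[key] = value
        self.flash.move_to_end(key)
        while len(self.flash) > self.flash_capacity:
            cold_key, cold_value = self.flash.popitem(last=False)  # evict LRU
            self.disk[cold_key] = cold_value  # demote cold data to disk

store = TieredStore(flash_capacity=2)
for k, v in [("a", b"1"), ("b", b"2"), ("c", b"3")]:
    store.write(k, v)
assert "a" in store.disk and "b" in store.flash and "c" in store.flash
```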
At the data-tier level, we see some consistency as well. All major vendors integrate flash drives/modules as an option and have native interfaces to flash devices. Another strategy is using high-bandwidth flash as a cache on the server node, while retaining disk storage in its traditional role. All these strategies give users access to flash technology using standard SQL methods.
Unfortunately, at the middle/processing tier, platform support for flash is inconsistent. The few middle-tier caches that can persist to disk can transition to flash easily. However, if you are seeking not a middle-tier cache but a data grid, only a few data grid vendors offer programmable flash integrations.
Conclusion
Flash is rapidly becoming a competitor to spinning disk, but software systems are behind. We have seen adoption at the file system and database layers, where easy payoffs were available, but uptake at the middle tier has been slower. Flash will continue eating away at the disk storage market, driven by the insatiable desire for ever-higher data velocity and low latency. There are competitive advantages for enterprises that capitalize on flash, but vendors delivering those capabilities are limited.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:07p
Legrand Acquires Raritan, DCIM Business Spun Off

The North American subsidiary of French electrical equipment vendor Legrand has agreed to acquire Raritan, a Somerset, New Jersey-based supplier of data center infrastructure products. Raritan’s data center infrastructure management (DCIM) software business will be spun off as an independent company, the companies announced Tuesday.
The new DCIM company will be called Sunbird Software. It will act as Raritan’s partner, chaired by Raritan founder and CEO Ching-I Hsu, according to a news release.
Completion of the spin-off will be contingent on completion of the acquisition, which the companies expect to happen within the next 30 to 60 days.
Raritan will operate as an independent business within Legrand. Financial terms of the transaction were not disclosed.
Visit the Data Center Knowledge DCIM InfoCenter for the latest DCIM news and in-depth guidance on DCIM solutions.

6:32p
IBM Launches Its First Cloud Data Center in Italy

IBM has opened its first cloud data center in Italy as part of an ongoing global infrastructure expansion effort. Located in Cornaredo, in the Province of Milan, it provides Italian customers with a local option for hosting data and applications in the IBM cloud and a low-latency connection to the wider global platform.
The 2.8 MW data center has room for about 11,000 servers. IBM said the data center would particularly appeal to local clients in regulated industries.
Italy is an emerging cloud market with over 30 percent year-over-year growth, according to the Polytechnic University of Milan’s Observatory of Cloud and ICT as a Service. Total market spend in the country is more than $1.3 billion.
IBM has been growing its SoftLayer-powered cloud data center empire rapidly. In Europe, it recently opened new facilities in France and Germany.
It has launched data centers in Canada, Mexico, and Japan. A data center in Chennai, India, is expected to come online soon.
The company has over 40 cloud data centers worldwide, 24 of which are SoftLayer-powered. The number of SoftLayer data centers has more than doubled since IBM made its $1.2 billion commitment to investing in cloud infrastructure last year.
With data centers in many of the major markets, the company is now looking to the most promising growth markets.
“The Italian IT sector is changing as startups and enterprises alike are increasingly turning to the cloud to optimize infrastructure, lower IT costs, create new revenue streams, and spur innovation,” Marc Jones, SoftLayer CTO, said in a press release.
“The Milan data center extends the unique capabilities of our global platform by providing a fast, local on-ramp to the cloud,” he said. “Customers have everything they need to quickly build out and test solutions that run the gamut from crunching Big Data to launching a mobile app globally.”

10:10p
Study Identifies Common Pain Points in Big Data Projects

While more than four out of five companies are taking on Big Data projects this year, almost all of them (92 percent) are still seeing obstacles in their implementation, according to a global study from CA Technologies.
Significant spending is occurring around Big Data, with this year shaping up as a major tipping point, but several pain points are being uncovered. Despite those pain points, almost 90 percent of organizations are seeing or anticipating increased revenue from these projects.
Companies are using Big Data analytics to better understand customers. Organizations are reporting or anticipating increased revenue thanks to improved competitive positioning, the ability to provide new products or services, and more effective marketing campaigns, with each factor cited by over 90 percent of respondents noting a positive effect.
The biggest obstacle is insufficient existing infrastructure (cited by over 30 percent of respondents), followed by organizational complexity, security and compliance concerns, lack of budget and resources, and lack of visibility into information and processes. All five major obstacles are pain points that Big Data platform providers have attempted to address in recent updates.
A workaround to insufficient existing infrastructure is the use of cloud. MapR and Hortonworks both updated their Hadoop platforms to better handle distributed clusters across any cloud infrastructure. However, with distributed infrastructure, security and compliance become more complicated, prompting enhancements on that front as well.
Organizational complexity and visibility issues are being addressed through better centralized administration, improved user interfaces, and customizable views that extend data insight beyond data scientists.
Spending on these projects is expected to increase from 18 to 25 percent over the next three years, according to the study. The amount of data organizations have to deal with is also increasing significantly, by an average of 16 percent over the last two years and a predicted rise of almost 25 percent in the next two years. Almost all respondents acknowledged that major investments are required for Big Data projects to work well, and more than half see scaling existing projects to address more data sources as a major priority.
11:32p
More than Half of Small Businesses Plan to Run Entire IT in Cloud by 2020: Report
This article originally appeared at The WHIR
Corporate adoption of cloud-only IT will more than double in the next two years to 26 percent, according to a survey released by cloud management, analytics, and security services company BetterCloud. The survey also found that over half of small and medium-sized businesses (SMBs) will run 100 percent of their IT in the cloud within five years.
BetterCloud surveyed 1,500 IT professionals representing organizations in 53 countries for its “Trends in Cloud IT” report, asking them about cloud adoption, cloud office systems, and application usage. The results show that rapid cloud adoption is expected, but just how rapid depends on several factors.
The survey found that 12 percent of companies currently run all of their IT in the cloud. By 2020, 62 percent are expected to run cloud-only IT, although the curve starts to level off at that point, and a minority of organizations believe they will never run all IT from the cloud.
Companies using Google Apps are more than twice as likely to be cloud-only at this point, and in five years are expected to lead by 17 percentage points (66 percent versus 49 percent). Other major factors in cloud-only adoption include a company’s size and age, with smaller, younger companies significantly more likely to adopt cloud and less likely to have legacy systems they have already invested in.
Almost all businesses (96 percent) less than five years old plan to run all IT from the cloud by 2026, while larger enterprises will only reach 50 percent cloud-only adoption in 2025.
The adoption of cloud applications follows a different pattern, interestingly. Enterprises are expected to go from 18 cloud applications today to 52 in 2017, a 185 percent increase, while SMBs expect an 86 percent increase.
“It’s expected for larger organizations to run more cloud applications, but it’s the acceleration of cloud app usage that is impressive,” BetterCloud CEO and founder David Politis said in a blog post on the survey’s first batch of data. “The data shows that we could be reaching a tipping point where enterprises are truly embracing cloud applications.”
Cloud service providers have geared offerings toward enticing SMB adoption while the window of opportunity in the market is wide open. 1&1 Internet and CodeGuard both launched solutions with lowered barriers to entry this month alone.
This first ran at http://www.thewhir.com/web-hosting-news/more-than-half-of-small-businesses-plan-to-run-entire-it-in-cloud-by-2020-report