Data Center Knowledge | News and analysis for the data center industry
Tuesday, October 6th, 2015
12:00p
Yahoo to Double Quincy Data Center Capacity Using Computing Coop Design

Yahoo announced plans to expand its already massive data center campus in a rural Central Washington town. The 300,000-square-foot expansion in Quincy, home to one of the largest data center clusters in the US, will effectively double Yahoo’s data center capacity there.
As a provider of digital media content and other web services, Yahoo, like its competitors, has to constantly expand its data center capacity to make sure it has the infrastructure in place to support new products and services and a growing user base.
Yahoo is using its unique Computing Coop data center design for the expansion, Mike Coleman, the company’s senior director of data center operations, said in an email. The shape of Yahoo data centers built with this design resembles the shape of a chicken coop, hence the name. In fact, when the company first introduced the design about five years ago, it was called the Yahoo Chicken Coop but was later renamed.
The shape maximizes the use of outside air for cooling and minimizes the need for electrical fans to pull air into the building. The coop was also designed to shrink the time it takes to bring new data center capacity online.
The announcement follows the launch of a massive Yahoo data center expansion in Lockport, New York. Lockport is a newer, East Coast counterpart to Yahoo’s Quincy campus. The company launched its first data center there – it was also the first place where Yahoo used the Computing Coop design – in 2010. It has since stuck with the design, using it, with slight modifications, for subsequent expansion.
Its other big US data center campus is in Omaha, Nebraska. Yahoo also owns and operates a data center in Singapore and leases space in numerous other US and international facilities.
Tax Breaks Fuel Data Center Construction
Earlier this year, state lawmakers approved an extension of tax breaks for data center operators which were set to expire. Companies that operate data centers in rural areas of the state are exempt from sales tax on purchases of IT and supporting infrastructure equipment.
“One of the reasons we decided to expand our data center in Quincy was because of the attractive rural tax incentives offered by Washington State,” Coleman said.
The state started providing tax exemptions for data centers in 2010, but the exemptions expired the following year. Washington Research Council, an economic development think tank, said the expiration resulted in a shift of data center construction to other states until the exemption was reinstated in 2012.
WRC estimated in a 2013 report, its latest on the subject, that companies had built about 2.5 million square feet of data center space in Central Washington between 2006, when data center construction in the region took off, and 2013. About 2 million square feet of that space was in Quincy, whose population is about 7,000.
Between 2006 and 2013, Yahoo, Microsoft, Sabey, Intuit, Dell, Server Farm Realty, and Vantage built data centers in the region, ranging from mid-size (Yahoo’s 45,000-square-foot facility in Wenatchee) to mega-scale (Microsoft’s 500,000-square-foot data center in Quincy).
What States Get in Return for Tax Breaks
States use tax incentives to attract data center projects as they do to attract other industries. A recent analysis by the Associated Press found that states issued about $1.5 billion in tax breaks for data center projects over the past 10 years. Data center tax breaks, however, have been controversial, because data centers aren’t massive job creators.
Making a “conservative” estimate, WRC said the 12 data centers built in Central Washington between 2006 and 2013 created about 480 direct jobs. Data centers also create indirect jobs, such as landscaping, maintenance, and security. The most important indirect jobs, according to the organization, stem from data centers’ purchases of electric power and water.
The report identifies numerous other indirect jobs that result from data center operations, which are more difficult to tie directly to data center projects, such as higher demand for services like restaurants, healthcare, and education.
Data centers, like other commercial or industrial construction projects, beef up the local tax base. Quincy, for example, grew its assessed value from $260 million in 2006 to $1 billion in 2009, despite a downward trend in residential property value in the state, according to WRC.
It’s Not All about Tax Breaks in Central Washington
Central Washington has other attractive attributes for data centers in addition to tax breaks. It is in a geologically stable area, and the climate is conducive to using outside air for free cooling, John Sabey, president at Sabey Data Centers, a major Seattle data center developer and provider, said.
There is also an abundance of hydro power, which, at about $0.025 per kWh, comes at some of the lowest rates in the “developed world,” he said.
Sabey’s Quincy data center is next door to Yahoo’s campus there. Its customers in Quincy include large banks, large technology companies, a movie studio, and a major Content Delivery Network provider, among others, Sabey said.
Coleman also said tax incentives were only one of the reasons Yahoo chose to expand in Quincy. Climate and affordable clean energy were important factors in the decision, as well as access to a skilled workforce.

3:00p
The Modern Data Center: Challenges and Innovations

James Young is the Director of CommScope’s Data Center Practice in Asia Pacific.
The modern data center is a complex place. The proliferation of mobile devices, such as tablets and smartphones, places ever-increasing pressure on IT departments and data centers. End users’ and customers’ expectations have never been higher, and the demand for data shows no sign of slowing down. Data center managers must manage all of these elements while also remaining efficient and keeping costs under control. So where does the data center go from here?
One thing I have noticed in the evolution of the modern data center is that the facilities are gaining importance as energy efficiency and IT management have come to the forefront. Maximizing an organization’s resources is vital, and that means delivering more to facilities and equipment without expending more on staffing. IDC forecasts that during the next two years, 25 percent of all large and mid-sized businesses will address the power and cooling facility mismatches in their data centers with new IT systems and put a 75 percent cap on data center space used. So, there again is the crucial challenge of doing more and innovating while keeping budgets and spend under control.
Where Does the Data Center Go Next?
At the heart of the data center evolution is IT’s rapid rate of change. If you examine enterprise data centers, then you might observe the ways that cloud computing and hyperscale innovations are displacing traditional enterprise systems, with new paradigms pioneered by innovators like Amazon and Google. With new options being developed, enterprises now have to chart strategies for cloud computing, including public, private or hybrid cloud.
Taking the Data Center Forward
The specific needs and challenges that the modern data center faces require working with the right tools and solutions. Modular, purpose-built data center infrastructure allows organizations to develop data center services based on need: when capacity rises and where capacity is needed. For example, we’ve observed in Singapore that most data centers operate slightly above 2.1 power usage effectiveness (PUE). This means that companies spend more on cooling their data centers than on powering the IT equipment itself. It is a simple challenge: drive efficiency without impacting operations. You want to drive PUE down toward approximately 1.06, regardless of where you need to operate, and reap huge energy savings while better serving customers. If done right, there is a positive environmental impact.
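To make the PUE arithmetic above concrete, here is a minimal sketch, assuming a hypothetical 1 MW IT load and the 2.1 and 1.06 PUE figures cited above:

```python
def total_facility_power_kw(it_load_kw: float, pue: float) -> float:
    """PUE is defined as total facility power divided by IT power."""
    return it_load_kw * pue

IT_LOAD_KW = 1_000  # hypothetical 1 MW of IT equipment

for pue in (2.1, 1.06):
    total = total_facility_power_kw(IT_LOAD_KW, pue)
    overhead = total - IT_LOAD_KW  # cooling, power delivery, and other non-IT loads
    print(f"PUE {pue}: total {total:,.0f} kW, non-IT overhead {overhead:,.0f} kW")

# At PUE 2.1 the non-IT overhead (1,100 kW) exceeds the IT load itself;
# at PUE 1.06 it drops to roughly 60 kW for the same IT load.
```

The gap between those two overhead figures is the energy saving the efficiency drive is chasing.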
Better PUE is a mandatory step in this process. The PUE journey continues, as evidenced by Amazon, which recently began harnessing wind power for its data centers. Modular data centers will play a major part in this PUE journey, thanks to more efficient use of energy and more flexible support for resiliency and compute density.
When it comes to capacity, however, it’s hard to be certain about how much capacity is going to be needed. Fortunately, there are multiple ways of measuring capacity.
At one time, capacity was measured by examining the amount of space, power and cooling utilized. Organizations are now looking at the data center as a “factory,” evaluating the amount of equipment available in it, how productive that kit is and where improvements and efficiencies can be made versus competitors. In the IT space, the biggest competitors – in my view – are the hyperscale cloud providers.
The Enterprise Versus the Hyperscale Provider
While it’s no surprise that the complex, multi-layered enterprise has more elements to manage, it is good to know that there are new tools available to help them do so. Data center infrastructure management (DCIM) solutions enable enterprises to measure the amount of work obtained out of every watt sent into the IT equipment and to use this data to drive new efficiencies. In 2013, the Ponemon Institute researched data center outages and found that facilities equipped with DCIM recovered from outages 85 percent faster.
More Data, Smarter Decisions
Diving deeper into the productivity of IT equipment empowers data center managers to make more informed decisions in real time and to optimize the investment in and use of IT resources. PUE targets should be continuously lowered, but facility improvement eventually reaches a limit given the data center’s basic design. Effective use of IT energy, enabled with DCIM tools, picks up where PUE management alone begins to fail. These tools and systems are essential for enterprises to remain competitive against their peers and against the major hyperscale players.
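As one illustration of the kind of work-per-watt metric that DCIM data makes possible, here is a minimal sketch; the field names and figures are hypothetical, not a particular DCIM product’s API:

```python
from dataclasses import dataclass

@dataclass
class ServerSample:
    """One DCIM-style reading: useful work done and power drawn over an interval."""
    transactions: int   # units of useful work completed (hypothetical metric)
    avg_watts: float    # average power draw during the interval
    hours: float        # length of the interval

def work_per_kwh(samples: list[ServerSample]) -> float:
    """Aggregate productivity: useful work delivered per kWh of IT energy."""
    total_work = sum(s.transactions for s in samples)
    total_kwh = sum(s.avg_watts * s.hours / 1_000 for s in samples)
    return total_work / total_kwh if total_kwh else 0.0

# Hypothetical readings for two servers over a 24-hour window
samples = [
    ServerSample(transactions=4_200_000, avg_watts=350, hours=24),
    ServerSample(transactions=1_100_000, avg_watts=320, hours=24),
]
print(f"{work_per_kwh(samples):,.0f} transactions per kWh")
```

Tracked over time, a ratio like this shows whether consolidation or hardware refresh actually raises useful output per unit of IT energy, which is exactly where PUE alone stops being informative.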
The mix of different operating models will likely always remain: some organizations, less dependent on the IT equipment they use, will choose to outsource all their IT or software needs to an expert third-party supplier, while others, for whom the data center is the core of their business, will choose the alternative path and keep it in house. Ultimately, the network remains vital for any of these strategies to succeed.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

4:21p
Gartner Symposium/ITxpo 2015: Day Two Keynote 
This post originally appeared at The Var Guy
By Charlene O'Hanlon
The second day of Gartner’s big IT event in Orlando, Florida, kicked off with a keynote discussion by IBM CEO Ginni Rometty, who declared the era of cognitive computing has begun.
“Everyone talks about being a digital company. But when everyone’s digital, what differentiates you?” she said. “The next big trend is the cognitive era. These are systems that understand unstructured data and reason and learn. And that’s important.”
To that end, IBM today announced at the Gartner Symposium/ITxpo 2015 it has formed a consulting organization “dedicated to helping clients realize the transformative value of cognitive business.”
The new practice will incorporate IBM’s Watson cognitive platform, which encompasses 28 engines underpinned by 50 technologies, Rometty said, as well as the company’s business analytics expertise. Machine learning, advanced analytics, data science and development will be a part of the cognitive experience, according to the company.
“Digital business plus digital intelligence is the new era of cognitive computing,” Rometty said.
During the keynote, Rometty also touched on IBM’s evolving role in the IT landscape, transforming itself as it strives to help its customers transform themselves. “Our job is to take your company from one era to the next,” she said.
“IBM’s journey is close to [the journey of] many people in the audience,” she said. “I believe it’s about how to continue to move from one era to the next. You have lots of assets already that you want to make more valuable. The winners will take those assets and new assets together and that’s what will make their business more differentiated, more competitive.”
Analytics will play a large role in that differentiation—something we’ve heard before. Cognitive computing, however, will take that differentiation one step further.
Rometty was quick to note cognitive computing will not replace the human element. Rather, analytics and cognitive will create more demand for data scientists and change the collective skill set of workers.
“Cognitive is about augmenting what man does, not replacing it. This is learning and dialogue,” she said. “With so much information today, it is impossible for you to keep up. We are coming into a time when this will allow us to do what we couldn’t do before, or do it better.
“We will go through a journey on this,” she continued. “Eventually what it will do from a skills perspective is everyone will have some sort of skill around data analytics. It will change how the world educates its professionals. And it will create whole new sets of jobs that allow you to enter markets you could not have done before. It will create jobs with reach, and individual jobs will become richer.”
One job in particular will see a definite change, Rometty believes—that of the developer.
“I believe folks will have to assemble, compose and integrate, and they will have to understand information in a broad way, so developers and data scientists will blend,” she said.
Rometty rounded out her discussion by noting her hope that the next generation’s first experience with IBM is through Watson.
“I hope Watson allows a child to learn to his or her fullest capacity,” she said.
This first ran at http://thevarguy.com/information-technology-events-and-conferences/100615/gartner-symposiumitxpo-2015-day-two-keynote

5:00p
Using Service Providers’ Native Cloud Management Tools

As we said last week in our guide to selecting cloud management tools, there are multiple sources for those tools, and one of them is the cloud service provider itself. Today we dig deeper into this option.
When working with a provider tool set, it’s important to take into consideration the type of provider you are working with. Hosting solutions offer a variety of benefits and options. Many times an organization will want the provider to manage their entire environment, from the hardware to hypervisor, and even up through the applications themselves. Other times, hosting providers will only manage the hardware and stop at the hypervisor level. Picking the right solution will depend on what your goals are.
What to Look for in Provider Tools
Provider tools for cloud management can be powerful additions to an existing monitoring and management system. Look to leverage the following:
- Hardware-level visibility: Depending on the type of cloud agreement, provider tools should have good visibility into the hardware layer. Cloud data centers live off of shared resources. Having visibility into how these resources are being used is important to maintaining cloud health.
- Hypervisor management: Oftentimes, provider tools will give the administrator the ability to see and manage their hypervisor. Again, depending on the type of agreement, hypervisor management can be an involved process or a fairly simple one. If the provider environment is only partially managed, look for tools that have visibility into each hypervisor on all physical hosts.
- Application control: With a provider-based design, some contracts ask for control over application sets being delivered over the cloud. In very specific designs, some provider tools will look at applications and help manage them. This is often true for Software-as-a-Service applications hosted in the cloud. Look for tools that give granular visibility into the application. This includes user count, licensing, and even updating or patching specific application modules.
- Automation: Application development lifecycle automation is a great feature that a provider can offer, supporting application development and other parts of the lifecycle. Other automation features include spinning up additional VMs to absorb growth in the user workload. Look for tools that fit your specific cloud model and ensure they can meet the needs of your automation procedures.
- User load-balancing: As part of the contract, tools built around an “economies of scale” approach help organizations spin up new VMs only as required, so resources do not sit dormant. Look for tools that integrate with your existing management tools and allow an administrator to adjust capacity to user count dynamically (a sketch of this kind of scaling decision follows this list).
- WAN utilization: When using a provider, monitoring WAN usage is always important. Most of the time, WAN utilization is monitored by the provider. Still, working with provider tools to gain visibility into bandwidth usage in the cloud can be powerful. If WAN optimization is part of the IT function, ensure that there is a provider feature capable of this type of visibility.
- Environment health metrics: Spanning multiple data centers, cloud environments create a truly distributed infrastructure which can become challenging to manage. When these environments are hosted, provider tools can be used to monitor the health of multiple end points, as long as they’re all under the same provider.
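As a simple illustration of the automation and load-balancing points above, here is a minimal sketch of a user-count-driven scaling decision; the thresholds, function names, and numbers are hypothetical, and a real implementation would call the provider’s own provisioning API:

```python
import math

# Hypothetical policy values; real numbers come from the provider contract and testing.
USERS_PER_VM = 200         # how many concurrent users one VM comfortably serves
SCALE_DOWN_HEADROOM = 0.3  # keep 30% spare capacity before releasing VMs

def desired_vm_count(current_users: int) -> int:
    """VMs needed to serve the current user count without dormant capacity."""
    return max(1, math.ceil(current_users / USERS_PER_VM))

def scaling_action(current_users: int, running_vms: int) -> str:
    """Decide whether to ask the provider to add or remove capacity."""
    target = desired_vm_count(current_users)
    if target > running_vms:
        return f"scale up: add {target - running_vms} VM(s)"
    # Only scale down when the surplus clearly exceeds the headroom buffer.
    if running_vms - target > running_vms * SCALE_DOWN_HEADROOM:
        return f"scale down: remove {running_vms - target} VM(s)"
    return "hold: capacity matches demand"

print(scaling_action(current_users=1_850, running_vms=8))  # scale up: add 2 VM(s)
print(scaling_action(current_users=400, running_vms=8))    # scale down: remove 6 VM(s)
```

The headroom buffer is the design choice worth noting: scaling down only when the surplus clearly exceeds it avoids thrashing when user counts hover near a VM boundary.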
Provider Tool Cautions
Service provider cloud management tools, for the most part, should act as a strong supplement to an existing tool set. Depending on the size and complexity of the environment, provider tools can only provide a limited view, since the provider’s ultimate goal is to manage the majority of the environment itself. With provider tools, administrators gain an extra layer of visibility, but it may not always be enough. Here are a few things to look out for:
- Over-reliance: Provider tools can be limited. Administrators should be aware of their functionality and where they fall short. In many cases, they should be used to complement existing tools already monitoring the cloud environment.
- Training: Just like third-party cloud management tools, a provider-native tool set is a new software package for administrators to learn. The better you understand the tool set, the better it can be leveraged.
- Limited visibility: As mentioned earlier, provider tools can be limited. It’s in these cases where understanding the full capabilities of that tool set can really help.
- Accessibility: There will be times when provider tools are only accessible through a portal or a web link. This may reduce their effectiveness if the data must be viewed locally or outside of the cloud environment.
- Management and configuration: Depending on the contract, providers may well limit the extent to which administrators can manage and configure the monitoring tools. Because the tools are provided as a service, additional configuration settings and customization may come at a price.
Regardless of the type of cloud infrastructure, administrators must always be prepared to manage their data center resources. As environments continue to evolve, it will be up to IT managers to know and understand the type of visibility required to keep their organization functioning properly. There are many types of tools out there. Planning around an environment and having a granular understanding of what that cloud infrastructure is trying to deliver will help dictate the right monitoring and management tool set to use. Tool sets must be able to adapt to the unique needs and business drivers of both IT and the business unit.

6:04p
Facebook to Build Third $200M North Carolina Data Center

Five years after construction crews broke ground on the first Facebook data center in Forest City, North Carolina, the company announced plans to construct a third data center building on the campus.
“We’re happy to announce that we’ll be constructing our third data center building in Forest City, which will represent an additional capital investment in excess of $200 million,” Keven McCammon, site manager for Facebook’s Forest City data center, wrote in a blog post.
The company has been on a data center construction tear this year, which indicates that its user base continues to grow quickly. It is expanding its data center campuses in Altoona, Iowa, and Prineville, Oregon, and kicking off construction of its first data center in the Dallas-Fort Worth region.
Construction of the first Facebook data center in Forest City started in late 2010. It was the second data center the company had designed and built on its own. The site currently has two primary data center buildings and a cold storage data center, a facility designed specifically to store files social network users don’t access frequently, such as old photos.
The third data center building will house the latest hardware designs to come out of the Open Compute Project, the Facebook-led open source hardware and data center design initiative. OCP started with server, storage, and electrical infrastructure designs, but later added networking switch hardware as Facebook started designing its own.
In a nod to support the company has received from local officials, McCammon wrote that the decision to build the first Facebook data center in Forest City was due in part to “the commitment and the vision of the community and leadership here in Rutherford County.”
The data center campus currently has 125 full-time employees, he said.
Facebook designed and built its first company-owned data center in Prineville, Oregon, shifting its infrastructure strategy from using mostly leased facilities. The first building in Prineville came online in 2011.
Today, it owns and operates data centers in North Carolina, Iowa, and Oregon, as well as in Sweden.

7:05p
Report: Tesla Battery Plant in Reno to Include Data Center

The massive Tesla Motors battery plant that’s currently under construction just outside of Reno, Nevada, will include a data center, adding to the data center cluster that’s forming east of the city.
The future $5 billion Tesla Gigafactory’s footprint will be 10 million square feet, some of which will be used to build a data center, the Reno Gazette-Journal reported, citing permits the electric-car company filed with local officials. The data center project will involve robotics, according to the publication.
Tesla has been picking up the pace at the site. It has applied for more than 20 construction permits in the last four months alone, as many as it applied for during its entire first year of construction in Reno.
The Reno site neighbors a rapidly growing Apple data center campus. Also nearby, Las Vegas data center provider Switch announced plans to build a $1 billion data center, which it claims will be the biggest in the world. eBay has signed up as anchor tenant for the Switch facility. Rackspace has been exploring the area as a potential location for its next data center.
While Tesla will be using the data center it is planning to build in Reno, it is also trying to break into the data center market as a vendor. Earlier this year, the company introduced an energy storage solution for buildings, announcing a pilot deployment at an Amazon Web Services data center on the West Coast.
Aiming to provide energy storage that helps companies use intermittent renewable energy sources, such as solar and wind, the solution isn’t only for data centers; Target, Enernoc, and Jackson Family Wines have also launched pilots. But data centers could become a primary category of end users if Tesla can demonstrate that the economics of the technology make sense for mission-critical facilities that not only consume a lot of power but also require high levels of reliability.

9:11p
IBM Watson Researchers: Carbon Nanotubes Could Power Future Computers 
This post originally appeared at The Var Guy
A group of IBM Watson researchers said they’ve discovered a new way to make transistors from materials other than silicon that can carry the electrical impulses required to drive high-performance computing.
IBM scientists said they found a new way to attach increasingly thin wires to carbon nanotubes without incurring the resistance and heat buildup that compromise the silicon chips currently serving as the bones and brains of computing.
IBM said the results of its research will be reported in the October 2 issue of Science magazine.
Silicon transistors, which carry information on a chip while getting smaller year after year, are fast approaching their physical limitations. With Moore’s Law (the observation that the number of transistors in a dense integrated circuit doubles approximately every two years) running out of steam, semiconductor researchers have spent years searching for a replacement.
IBM now believes it’s found something.
“These chip innovations are necessary to meet the emerging demands of cloud computing, Internet of Things and Big Data systems,” said Dario Gil, IBM Research Science & Technology vice president.
“As silicon technology nears its physical limits, new materials, devices and circuit architectures must be ready to deliver the advanced technologies that will be required by the Cognitive Computing era,” he said. “This breakthrough shows that computer chips made of carbon nanotubes will be able to power systems of the future sooner than the industry expected.”
In particular, carbon nanotube chips could exponentially improve high performance computing, enabling faster analysis of Big Data, increasing the power and battery life of mobile devices, data transmission of the Internet of Things, and enabling cloud data centers to deliver services more efficiently and economically, IBM said.
Previous IBM research showed that carbon nanotube transistors can operate as excellent switches at channel dimensions of less than ten nanometers, roughly 10,000 times thinner than a strand of human hair and less than half the size of today’s leading silicon technology.
IBM’s new contact approach overcomes the other major hurdle in incorporating carbon nanotubes into semiconductor devices, which could result in smaller chips with greater performance and lower power consumption.
This first ran at http://thevarguy.com/computer-technology-hardware-solutions-and-news/100615/ibm-watson-researchers-carbon-nanotubes-could-power-