Data Center Knowledge | News and analysis for the data center industry
Wednesday, March 2nd, 2016
1:28a
Do You Know the Real Cost of Your Data Center?

When data center operators examine data center cost, they generally look at high-level metrics, such as gigabytes of storage or Power Usage Effectiveness. These do matter, of course, but to get to the real cost, you have to zero in on lower-level components.
Do you know how much the flash drives on your servers cost? How about the CPUs or DRAM cards? A different vendor supplies each one of those components, and they make a big difference in total cost of ownership of every data center.
Web-scale data center operators like Google and Facebook learned this lesson long ago. For years, they have been re-examining each individual component of their IT gear, looking for ways to get it cheaper.
Enterprise data center operators can apply that wisdom too. One of the featured keynote speakers at the Data Center World Global conference in Las Vegas later this month is Amir Michael, a long-time web-scale infrastructure veteran, who spent years examining data center cost at both Google and Facebook.
In his session, Michael, CEO and co-founder of the analytics-driven data center management software startup Coolan, will examine each component of total data center cost, including power, facility, network, CPU, DRAM, flash, and hard drives, and share which of those components make the biggest difference in overall cost.
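To make that component-level view concrete, here is a minimal, hypothetical sketch of such a tally in Python. Every price, quantity, and category below is an invented placeholder, not a figure from the article or from Michael’s session:

```python
# Hypothetical component-level TCO tally. All numbers are invented
# placeholders; a real model would also amortize hardware over its
# useful life and price each vendor's parts separately.

server_components = {    # per-server hardware cost, USD (illustrative)
    "CPU": 2 * 2500,
    "DRAM": 16 * 120,
    "flash": 4 * 400,
    "hard drives": 8 * 250,
}

facility_per_server = {  # per-server share of facility cost, USD/year
    "power": 900,
    "cooling": 400,
    "network": 300,
}

num_servers = 5000
years = 4

capex = num_servers * sum(server_components.values())
opex = num_servers * years * sum(facility_per_server.values())
print(f"{years}-year TCO for {num_servers:,} servers: ${capex + opex:,}")
print(f"  hardware: ${capex:,}   facility/operations: ${opex:,}")
```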
Join Coolan’s Amir Michael and 1,300 of your peers at Data Center World Global 2016, March 14-18, in Las Vegas, NV, for a real-world, “get it done” approach to converging efficiency, resiliency and agility for data center leadership in the digital enterprise. More details on the Data Center World website.

11:00a
Switch Gets Tier IV for Second Las Vegas Data Center

Switch, operator of the big SuperNap data center campus in Las Vegas, has secured the Uptime Institute’s Tier IV Gold certification for a second data center on the campus, SuperNap 9.
SuperNap 9 and the previously certified SuperNap 8 facilities on Switch’s four-data center Las Vegas campus are two of about 30 constructed data centers in the world to have received Tier IV certification. Most of those certified facilities are single-user enterprise data centers, not multi-tenant colocation facilities like the SuperNaps.
Tier IV is the highest level in Uptime’s four-tier rating system for data center infrastructure reliability. The certifications, which require a lengthy and expensive facility examination by Uptime, have been a big part of Switch’s marketing strategy.
Read more: Data Center Design – Which Standards to Follow
Read more: Uptime Institute’s Tier Classification System Explained
Until last year, Switch focused on the Las Vegas market, expanding its campus there and attracting many big-name customers, such as eBay, Google, Amazon, Intel, HP, and Boeing, among others. But last year it went into expansion mode, announcing large construction projects in Reno, Nevada, and Grand Rapids, Michigan, as well as overseas, in Italy and Thailand.
Switch said its future data center outside of Reno, where eBay will be the anchor tenant, will be the world’s largest. The first building on campus will be 1.2 million square feet in size and provide 150MW of power capacity and 82,000 tons of cooling.
 Rendering of the planned Switch Tahoe Reno SuperNap data center campus (Image: Switch)
The Reno data center will neighbor the Tesla battery plant there and a big and expanding Apple data center campus.
Also last year, the company started investing in renewable energy for its data centers, making a commitment to powering its entire footprint with clean energy. Those efforts started with two solar energy contracts in Nevada, totaling 180MW.

1:00p
Equinix to Open New Data Centers on Four Continents

Equinix unveiled a plan for another phase of global expansion Wednesday. The data center colocation and interconnection giant will launch four new data centers across four continents in the coming months.
The new facilities in Dallas, São Paulo, Tokyo, and Sydney will add about 200,000 square feet of data center space total, or new capacity for about 4,000 server cabinets. Once the four sites are fully built out, Equinix’s global footprint will reach 14 million square feet of data center space.
The Redwood City, California-based company remains in aggressive expansion mode, following two major data center provider acquisitions last year – Telecity Group in Europe and Bit-isle in Japan – and the announcement of a big new data center construction project in Northern Virginia.
The company is courting cloud service providers, both Infrastructure-as-a-Service and Software-as-a-Service, and enterprise data center users that want to use cloud services. Equinix is reeling enterprises into its data centers by offering them access to a variety of cloud providers and the ability to connect to them using direct, private network links, bypassing the public internet.
Equinix’s latest global expansion, market by market:
- Tokyo: TY5 will be close to Equinix’s existing TY3 data center near Tokyo’s financial district. Both facilities are aimed at financial services customers.
- Dallas: One of the hottest data center markets in the US, Dallas-Fort Worth is a key network interconnection hub. Dallas is the heart of the internet in the southern US and an important gateway to Latin America. Equinix’s DA7 data center will primarily serve enterprise and telecommunications companies.
- São Paulo: Equinix’s SP2 data center here is almost full, and the upcoming SP3 facility will almost double its capacity in the market. Brazil is a growing IT outsourcing market, where more and more companies use colocation and cloud services.
- Sydney: The new SY4 data center in Sydney will be near the city’s central business district and provide access to Southern Cross Cable Head, a submarine cable system that interconnects Australia and other markets in the Asia-Pacific region.
In a statement, Equinix president and CEO Stephen Smith said global businesses were increasingly relying on interconnection to provide a rich user experience around the world. These trends will only accelerate as more data is stored in edge markets to support the Internet of Things.
“Our focus on continually expanding our global interconnection platform means that wherever you grow, we’ll be there,” Smith said.

5:52p
Utilities Brace for Unexpected Shocks to the System

Chris Collier is Vice President, Director of KernEDGE.
Although utility companies handle routine business with complex Enterprise Resource Planning (ERP) and customer relationship management (CRM) systems, they are often confronted with unexpected legislative, social, or financial demands. When this occurs, it can require swift changes to accounting and customer service procedures, as well as processing records for many thousands or even millions of customers to stay in compliance.
Whether utilities must suddenly accommodate Smart Meter opt-outs, Medical Baseline mandates, energy savings assistance, solar credits or LED customer upgrades, the required changes can be quite a shock to the system.
In such cases, comprehensive and accurate case management is required, whether by standard customer service document request or state Public Utilities Commission mandated programs.
As such, the primary challenge for utility companies is to effectively manage, document, and centralize an influx of new, inbound applications, as well as verify if applicants qualify for certain programs. Archiving of critical documents such as signed applications, change of address, complaints, bankruptcy claims, and power of attorney is often required. Proof of compliance is required by the PUC, other agencies, and auditors, as well as for legal protection.
Still, when unexpected challenges arise, utilities have no choice but to deal with them. This leaves utilities with limited and often unpalatable options. The first is to throw labor at the problem. But hiring new employees and setting up call centers only raises operating costs and does little to handle the next shock to the system.
The other option is to pay existing ERP or CRM providers to add on new modules or programming to accommodate the required changes. However, this is often an even more costly approach that can take many months and hundreds of thousands of dollars to implement. Since this approach only addresses the challenge at hand, additional costs are required the next time the utility faces an unforeseen challenge.
Fortunately, there are electronic document management solutions designed to address just such a scenario that are already being used by major utilities.
Shocks to the System
Utility companies can require a fast response to market changes for a variety of reasons that create accounting or customer service challenges:
When Smart Meter opt-outs became necessary due to consumer privacy concerns, one major energy utility was required by PUC mandate to give customers the ability to opt out within a 30-day timeframe. The utility not only had to manage a variety of forms, but also flag certain existing analog meters that must not be changed and schedule crews to replace Smart Meters with analog meters when consumers opted out.
In another example, when a Medical Baseline law mandated that utilities must not turn off power to consumers who depend on life-saving machines, even when bills go unpaid, the implementation had to be flawless to prevent potential litigation. Documenting their efforts to reach consumers by a variety of methods was required for compliance.
State-mandated Energy Savings Assistance programs can also require utilities to offer low-income customers a discounted utility rate based on a set of income criteria that must be verified.
In states and municipalities that mandate solar credits, the move to alternative energy can overwhelm both accounting and customer service with rooftop solar compatibility, charge back solar credit issues, and the like.
Faster, Cheaper Compliance
The ideal process for responding quickly to a utility market change would be to gather all the different document types into a centralized platform. The electronic document management solution would be flexible enough to import from multiple sources and create a workflow system that allows individual departments to easily follow documents wherever they go – for better management, control, service level tracking, and compliance.
Fortunately, the advent of secure Software-as-a-Service (SaaS) solutions, tailored to each utility’s unique workflow requirements, allows them to function seamlessly with existing ERP and CRM systems.
Because the software suite is offered as SaaS and usable by anyone with a browser, it does not require any capital investment, requires minimal IT attention, can be connected to existing ERP or CRM solutions, and can go from concept to implementation in less than a month.
In order to streamline utility companies’ handling of abrupt marketplace shocks, the software suite can be tailored to work seamlessly within the framework of each utility’s workflow and applications. This includes archiving and indexing all documents, so they are easily searchable with time stamps for every step of the process.
Unlike complex ERP software that must be programmed, the software suite is quickly adaptable to business process requirements, such as adding new offices, user groups, required authorizations, and government regulations. This flexibility is designed into the system because it is easily configured into an existing software engine as definable parameters, rather than programmed from scratch.
While the electronic document management software suite costs a fraction of ERP reprogramming and implementation, it is very secure, with built-in redundancies and backup power from Tier IV data centers.
For utility companies that must brace themselves for continual, unexpected market changes, the bottom line is that such cloud-based SaaS solutions will allow them to nimbly adapt and remain compliant at a fraction of the time and cost.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:01p
Tech Industry Employment Grew 3 Percent in 2015

By The VAR Guy
The US technology sector is in good shape, at least from an employment perspective, according to the new Cyberstates 2016 report released by the Computing Technology Industry Association (CompTIA). Nearly 200,000 new jobs were added in 2015, bringing the total number of US tech workers to over 6.7 million. This represents a year-over-year (YOY) increase of 3 percent, the highest growth rate the tech industry has seen in over a decade. Compare this to the 2.1 percent YOY national employment growth average for last year.
CompTIA found that over half of the growth was in IT services, which added 105,400 new tech jobs in 2015. This is good news for channel companies making investments in growing their services offerings. IT services drive much of the activity in the deployment, integration and management of many of tech’s hottest trends, including business process automation, data analytics and other business intelligence (BI) innovations.
Read more: Which Data Center Skills are in Demand Today
“Much of this growth can be attributed to the current trends in cloud computing, mobility, automation and social technologies that are reshaping businesses large and small,” said Tim Herbert, senior vice president, Research and Market Intelligence, CompTIA. “Momentum behind the Internet of Things (IoT) continues to grow, while the critical importance of cybersecurity shows no signs of abating.”
Though these technologies are poised to deliver exciting changes for businesses in all sectors, CompTIA acknowledges that few will be able to take full advantage of them without the help of the channel. Solution providers, MSPs, VARs, distributors and cloud service providers are among the organizations of the channel ecosystem the report says will play an integral role in supplying these emerging technologies and servicing customers’ ongoing needs.
The report also alludes to the growing need for IT consultative services, saying that organizations across sectors recognize that building digital workflows requires a new approach to software and equal attention paid to all layers of the stack.
On the wage front, CompTIA reports that on average, tech workers earned $105,400—more than double the US average private sector wage. Nearly 12 percent of all US private sector payroll can be attributed to the tech industry. Among all tech subsectors, IT services again took the lead in terms of highest payroll at $232,100.
At the state level, California led in employment, wages, payroll, and number of establishments. Texas, New York, and Massachusetts followed.
This first ran at http://thevarguy.com/var-guy/tech-industry-employment-grew-3-percent-2015-67-million

6:23p
The Life Cycle of a Data Center

Your data center is alive.
It is a living, breathing, and sometimes even growing entity that constantly must adapt to change. The length of its life depends on use, design, build, and operation.
Equipment will be replaced, changed, and may be modified to best equip your specific data center’s individual specification to balance the total cost of ownership with risk and redundancy measures.
Just as with a human being, the individual care and love you show your data center can lengthen the life of your partnership.
This topic, how best to utilize and tailor your data center to extend its life cycle, is addressed by Morrison Hershfield Critical Facilities Practice Lead Steven Shapiro in his upcoming Data Center World presentation, “The Life Cycle of a Data Center”.
“Life cycle cost, sometimes referred to as return on investment, at its simplest level, is the study of an infrastructure system or component for the data center that takes into account all of these issues to develop a clear and concise means to decide which systems or components to choose that will provide the lowest life cycle cost for the facility,” Shapiro explained.
“The lowest life cycle cost, or the shortest return on investment, is the best investment that can be made for the data center. These studies do not take into account the preferences of the operations staff, or the ease of operation or maintenance unless that ease directly translates into dollars and cents.”
Systems that may be subject to these types of evaluations range from general building construction to each of the mechanical, electrical, fire protection, and plumbing systems in the facility.
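As an illustration of the kind of study Shapiro describes, here is a simplified life cycle cost comparison in Python. Every figure is an invented placeholder, and a rigorous study would also discount future costs to present value:

```python
# Simplified life cycle cost comparison of two hypothetical cooling
# systems. All numbers are invented placeholders for illustration.

def life_cycle_cost(capex, annual_energy, annual_maintenance, years):
    """Total cost of buying and running a system over the study period."""
    return capex + years * (annual_energy + annual_maintenance)

years = 15
option_a = life_cycle_cost(capex=1_200_000, annual_energy=250_000,
                           annual_maintenance=40_000, years=years)
option_b = life_cycle_cost(capex=1_800_000, annual_energy=170_000,
                           annual_maintenance=55_000, years=years)

# Option B costs more up front but wins over 15 years because its
# lower energy bill compounds, which is exactly the kind of result
# a life cycle cost study is meant to surface.
print(f"Option A: ${option_a:,}   Option B: ${option_b:,}")
```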
By the end of the presentation, attendees will understand the life cycle of the most expensive facility in their portfolio: their data center.
Key questions that will be addressed, and key problems that will be solved in this session are:
- What decisions must be made to develop the basis of design for my data center?
- What impact do these decisions have?
- What is the Total Cost of Ownership for the various components of the data center infrastructure?
- How does maintenance impact my life cycle?
- How do I make my facility scalable?
- How do my initial decisions impact my ability to grow my facility?
- Does commissioning have a place in the life cycle of the data center?
- Is it different if I own my data center or if I am in a Colo?
Steven Shapiro will be presenting “The Life Cycle of a Data Center” Thursday, March 17th from 10:45 – 11:45am in the ‘Tradewinds AB’ room at Data Center World.
Shapiro will also be presenting “EPMS – What is it, do I need it, isn’t it DCIM?”, exploring the value an EPMS system provides when utilized within the data center for operations and forensic applications, on Tuesday, March 15th from 1:00 – 2:00pm in the ‘Islander Ballroom D’ at Data Center World.
Join Steven Shapiro and 1,300 of your peers at Data Center World Global 2016, March 14-18, in Las Vegas, NV, for a real-world, “get it done” approach to converging efficiency, resiliency and agility for data center leadership in the digital enterprise. More details on the Data Center World website.
This first ran at http://www.afcom.com/news/life-cycle-data-center/
7:03p
Amazon Will Ship Your Cloud Data to You … on a Truck

It takes a long time and a lot of expensive bandwidth to push 100 terabytes of data across a Wide Area Network.
Amazon’s answer to moving those kinds of data volumes from customer data centers to its cloud data centers has been to ship its customers high-capacity storage servers. The customer uploads their data to the server, which then gets shipped back to Amazon for upload to the cloud.
Amazon announced the service last year. Today, the company started offering the same service, but in reverse. If a customer has accumulated a lot of data in their AWS environment and wants to move it elsewhere, Amazon will put it on its Snowball data shipping servers and ship them to the customer.
It saves a lot of time. According to Amazon’s estimates, it can take more than 100 days to transfer 100TB of data over a WAN, while it takes less than one day to upload that much data to two Snowball appliances – each can currently hold 50TB. Add however long it takes to ship it from point A to point B, and the total time to move that much data is much shorter than 100 days.
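Here is a back-of-the-envelope sketch of that arithmetic in Python. The 100 Mbps effective WAN rate and the 10 Gbps local network are assumptions chosen to be roughly consistent with Amazon’s estimate, not figures from the article:

```python
# Back-of-the-envelope transfer-time math. Assumed link speeds; real
# throughput varies with link quality, contention, and protocol overhead.

TB = 10**12  # decimal terabyte, in bytes

def transfer_days(data_bytes, link_mbps):
    """Days to move data_bytes over a fully utilized link of link_mbps."""
    seconds = (data_bytes * 8) / (link_mbps * 10**6)
    return seconds / 86400

data = 100 * TB
print(f"WAN @ 100 Mbps: {transfer_days(data, 100):.0f} days")              # ~93 days
# Loading two 50 TB Snowballs in parallel over a 10 Gbps local network
# takes well under a day:
print(f"Local @ 2 x 10 Gbps: {transfer_days(data, 2 * 10_000):.2f} days")  # ~0.46 days
```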
A Snowball server comes in a rugged container and includes all the necessary cabling. Data on it is encrypted using encryption keys provided by the customer; the keys are not stored on the Snowball itself. A Kindle displaying the shipping label is attached to the container, and the label switches automatically every time the container is ready to be picked up to go to its next destination.
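For illustration, here is a minimal Python sketch of that client-side encryption model, using the cryptography package. This is an assumed workflow sketched for clarity, not Amazon’s actual Snowball client:

```python
# Sketch of client-side encryption with a customer-held key: only
# ciphertext is written to the appliance, and the key never ships
# with it. Requires the `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

customer_key = Fernet.generate_key()  # stays with the customer, never on the device
cipher = Fernet(customer_key)

payload = b"sensitive records bound for the appliance"
encrypted = cipher.encrypt(payload)   # this ciphertext goes onto the device

# At the destination, the same customer-held key decrypts the data:
assert cipher.decrypt(encrypted) == payload
```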
The new service determines automatically how many appliances will be needed to ship the customer’s cloud data. The data has to be stored on Amazon’s cloud storage service, S3.