Data Center Knowledge | News and analysis for the data center industry
Tuesday, July 22nd, 2014
12:30p
The Age of Wearable Data Engines

Suresh Sathyamurthy is the Sr. Director of Product Marketing and Communications at the Isilon Storage Division of EMC.
The speed and volume of mobile data have reached a new level. In its annual report last February, Cisco went on record with an eye-opening prediction: mobile data traffic will exceed 190 exabytes by 2018, 11 times more than in 2013. In turn, the number of mobile devices and connections is predicted to rise from seven billion in 2013 to 10 billion in 2018.
This is happening as our customers advance from smartphones to wearable devices like fitness trackers, smart watches and Google Glass. Apple's latest "Chicken Fat" ad also demonstrates how much data wearable devices can generate through a variety of exercises. What does this mean for those who purchase and manage storage in the enterprise?
To 190 exabytes and beyond: preparing for the impact
The explosion of mobile data growth creates new workloads that drive new storage architectures, ranging from traditional dual-controller designs to scale-out data lakes and hybrid cloud architectures. These architectures will also incorporate new components, such as flash and cloud tiers.
Wearable technology like Google Glass and fitness trackers eliminates the last barrier to unconscious data demands, as users instantly record images and data with no regard for storage space. This takes the consumer from a "conscious decision" to create and search for data to a nearly automatic, "subconscious" event.
Wearable devices also mean web servers have to be prepared for unpredictable traffic spikes. Social sites like Facebook and Instagram could see 10 times the volume of photos, videos and updates, along with sudden surges like the one triggered by the Ellen DeGeneres selfie at the Oscars.
With all that in mind, here are five ways wearable technology could impact your storage infrastructure and suggestions on how to prepare.
1. Larger volumes of cold vs. critical data, and knowing how to tell them apart

As the volume of information from wearable devices begins to snowball, the line between critical and cold data will start to blur. Understanding and identifying the differences will be important, whether you divide your infrastructure between expensive and inexpensive storage or tier data within a single storage infrastructure.
It's fair to assume that most of the data will fall into the non-critical category. IT will have to look at storage systems that not only hold a lot of data but can do it on the cheap, while still distinguishing between critical and cold data.
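To make that concrete, here is a minimal sketch of age-based tiering logic. The tier thresholds, the three-tier split and the "/data/uploads" path are hypothetical choices for illustration; a real policy would be driven by actual access patterns, cost models and the capabilities of the storage platform.

```python
import os
import time

# Hypothetical thresholds: tune to real access patterns and cost models.
HOT_DAYS = 30    # accessed within 30 days -> keep on the fast, expensive tier
COLD_DAYS = 180  # untouched for 180+ days -> candidate for cheap, dense storage

def classify(path):
    """Classify a file as hot, warm or cold by its last-access age."""
    age_days = (time.time() - os.stat(path).st_atime) / 86400
    if age_days < HOT_DAYS:
        return "hot"
    if age_days < COLD_DAYS:
        return "warm"
    return "cold"

def plan_moves(root):
    """Walk a directory tree and propose a target tier for each file."""
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            yield path, classify(path)

if __name__ == "__main__":
    # "/data/uploads" is a placeholder mount point.
    for path, tier in plan_moves("/data/uploads"):
        print(f"{tier:5s} {path}")
```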
2. Potential for a new level of analytics

Analytics will also play a big part in capitalizing on mobile data, a combination of human- and machine-generated data that will offer new insight into consumer behavior. Imagine knowing how many shoppers are in a store, what products they looked at, how long they looked at them, and whether or not it led to a purchase. Then imagine using that information to change product placements the very next day. While in-store cameras do much of this already, wearable devices will provide a new level of personalization.
3. Greater demand for geo-distributed data centers

Wearable device users could also affect the placement of data centers. As Google Glass adds a new level of mobility for recording data, data centers will have to be distributed around the world to minimize the distance between them and the person uploading images. A geo-distributed system will be necessary to keep up with customers who could be anywhere, at any time, effortlessly uploading or pulling down data.
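The physics behind geo-distribution is easy to put numbers on. This back-of-the-envelope sketch estimates best-case round-trip time over fiber, where light travels at roughly 200,000 km/s; the distances are illustrative, and real latency is higher once routing and protocol overhead are added.

```python
# Best-case round-trip time over fiber, ignoring routing, queuing and
# protocol overhead. Light travels through fiber at roughly 200,000 km/s.
FIBER_KM_PER_S = 200_000

def min_rtt_ms(distance_km):
    """Lower bound on round-trip time, in milliseconds."""
    return 2 * distance_km / FIBER_KM_PER_S * 1000

# Illustrative distances from a user to the nearest data center.
for label, km in [("same metro", 50),
                  ("same continent", 2000),
                  ("opposite side of the world", 15000)]:
    print(f"{label:27s} {min_rtt_ms(km):6.1f} ms minimum RTT")
```

At 15,000 km the floor is already 150 ms round trip, which is why a single central site cannot serve instant uploads from users who could be anywhere.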
4. More demand for multi-tenant storage infrastructure

Companies like Google, Facebook and Twitter, along with service providers, are already uniquely positioned to take advantage of the wave of wearable apps. Their storage infrastructures are in place, with multi-tenant environments that support developers as well as customers. As more Google Glass code is shared with developers, we'll see greater demand on service providers to provide access to their back-end infrastructure.
5. Changes in the way internal clients access and use the infrastructure

Organizations used to heavy text data access by employees can expect more video content to manage as wearable devices enter the workplace. The technology offers new avenues for employee training, records management and general communications. For example, if Glass is used by bank tellers, then, like a surveillance camera in the room, all data collected by the teller will be stored and possibly accessed later to resolve disputes.
While perceptions of how quickly consumers will adopt devices like Google Glass and fitness trackers are mixed, plenty of consumers can't wait to get their hands on them. And when consumer devices go from being carried to being worn, a convenience barrier will fall for users, creating new and rather inconvenient demands on IT. Regardless, businesses must be prepared for the changes that will be ushered in by wearables.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

1:00p
AccelOps Adds Machine Learning Feature in Latest IT Monitoring Release

AccelOps, whose sophisticated analytics engine monitors the health and security of a company's entire IT stack (from in-house data center to provider cloud), has launched the fourth release of its software, adding smarter anomaly detection capabilities and widening the range of systems on the network it is able to monitor.
The company also added support for Microsoft Hyper-V and Red Hat KVM hypervisors. The solution previously ran on VMware ESXi only.
Using analytics in IT operations is a recent trend, and a number of companies have emerged to help customers cope with the complexity of IT environments that now extend far beyond the on-premises data center.
AccelOps specializes in security, performance and compliance monitoring, offering all of the above on one screen. The basic idea is to analyze operational data generated by the customer’s IT infrastructure to identify trends and improve operations accordingly.
The new Statistical Anomaly Detection feature is based on a new algorithm that profiles traffic on all devices on the network and alerts admins when it detects anomalies.
Here’s the full list of additions in AccelOps 4:
- Statistical Anomaly Detection: AccelOps’ new machine-learning algorithm profiles traffic and metrics on all the devices on the network, triggering alerts when anomalies are detected or thresholds are reached.
- Acceleport: AccelOps now enables users to "tunnel in" between the AccelOps Collector and Supervisor to reach any server on the system, making it ideal for organizations with remote sites, managed service providers (MSPs) and managed security service providers (MSSPs).
- Watch List Creation: AccelOps automatically creates watch lists of users, websites or applications to keep an eye on.
- Synthetic Transaction Monitoring: Using Selenium, synthetic transaction monitoring records user interactions and replays them to identify problems on websites and applications before they affect end users or customers.
- Hypervisor Agnostic: AccelOps runs on VMware ESXi, and now on Microsoft Hyper-V and Red Hat KVM.
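AccelOps has not published the algorithm behind Statistical Anomaly Detection, so the sketch below is only a generic illustration of the baseline-and-deviation technique such features imply: profile a metric's recent history, then flag samples that sit several standard deviations from the mean. A production system would keep per-device, per-time-of-day baselines and suppress noisy alerts.

```python
from collections import deque
from statistics import mean, stdev

class AnomalyDetector:
    """Toy z-score detector: flag samples far from a rolling baseline."""

    def __init__(self, window=60, threshold=3.0):
        self.samples = deque(maxlen=window)  # rolling baseline window
        self.threshold = threshold           # alert at N standard deviations

    def observe(self, value):
        anomalous = False
        if len(self.samples) >= 10:  # need enough history to profile
            mu, sigma = mean(self.samples), stdev(self.samples)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.samples.append(value)
        return anomalous

# Steady traffic around 100 Mbps, then a sudden spike.
detector = AnomalyDetector()
for mbps in [98, 102, 99, 101, 97, 100, 103, 99, 100, 101, 102, 950]:
    if detector.observe(mbps):
        print(f"ALERT: traffic spike to {mbps} Mbps")
```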
Jason Brown, CTO at NetCuras, one of AccelOps' customers, said he had always loved the breadth of data the solution was able to collect and sift through. "In the new release, AccelOps adds some very useful troubleshooting tools, as well as system upgrades and dashboard customization capabilities," he said.

1:30p
Microsoft Adds Private Azure Connectivity in Five US Markets, Hong Kong and Singapore

Microsoft has added seven Equinix data centers to the list of locations from which customers can connect privately to its Azure cloud.
Five of the new locations are in the U.S. (Atlanta, Chicago, Dallas, New York and Seattle), and two are in Asia (Hong Kong and Singapore). This brings the total number of locations from which customers can connect directly to Microsoft data centers hosting Azure infrastructure to 10.
Benefits from such deals between cloud service providers and colocation companies are numerous.
They're good for colo companies because their data centers become more attractive as points from which enterprises can connect to public clouds without going over the insecure and unreliable public Internet. Public cloud providers benefit from access to the colo companies' customers and to the network carriers that provide connectivity services at their facilities.
Several other cloud and colocation providers have struck similar deals, aimed primarily at making public cloud more palatable for enterprises with strict security requirements and massive data repositories. Today, most Azure users connect to the service over the Internet, and traditional enterprises use public cloud services sparingly. Equinix and other colo players also offer direct access to Azure's largest competitor, Amazon Web Services.
Big step forward in ambitious expansion plan
In April, Microsoft announced a plan to expand ExpressRoute (the name of the cloud connectivity service) to 16 Equinix data centers around the world. It started with deployment in three locations – Silicon Valley, Washington, D.C., and London – and this week’s announcement is a major step forward in executing that plan.
Microsoft has struck a similar deal with London-based data center provider TelecityGroup, which will eventually make private Azure links available in 37 data centers across Europe. At the moment, the service is only available in Telecity’s Amsterdam facilities, with a London location coming up.
Interxion, another European colocation giant, is planning to host direct connectivity points for both Azure and AWS in its data centers as well.
New network carrier deals for Azure ExpressRoute
In addition to partnering with data center providers directly, cloud companies also partner with network carriers, which extend the geographic reach of these services to many more locations.
Microsoft has ExpressRoute deals with Level 3, Verizon, British Telecom and SingTel, and this week it announced new agreements with European carrier Orange Business Services and Japanese provider Internet Initiatives Japan.
“In the coming months, we will be able to onboard early adopter customers for these partners,” Ganesh Srinivasan, senior program manager for networking at Microsoft, wrote in a blog post announcing the new ExpressRoute locations and partnerships.
Microsoft expects availability of ExpressRoute in London via Level 3, Orange and Verizon in the near future. A SingTel service in Singapore is also coming soon, according to Microsoft.

2:00p
Data Center Jobs: ViaWest

At the Data Center Jobs Board, we have three new job listings from ViaWest, which is seeking a Data Center Engineer in Richardson, Texas; a Data Center Engineer in North Las Vegas, Nevada; and a Data Center Manager in North Las Vegas, Nevada.
Each Data Center Engineer is responsible for monitoring the building's HVAC, mechanical and electrical systems; performing preventive maintenance, site surveys, and replacement of electrical and mechanical equipment; reading and interpreting blueprints, engineering specifications, project plans and other technical documents; performing operation, installation and servicing of peripheral devices; assisting with equipment start-ups, repairs and overhauls; and preparing reports on facility performance. To view full details and apply, see job listing details: Texas or job listing details: Nevada.
The Data Center Manager is responsible for developing and maintaining positive relationships with clients; overseeing the scheduling, maintenance, and monitoring of all heating, ventilating, air conditioning, water, electric and other systems to ensure efficient operation; inspecting the facility and generating inspection reports; cultivating productive and proactive working relationships with property management and other tenants in order to jointly resolve issues; and building a strong team of technical experts to maintain infrastructure. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
5:02p
LeaseWeb Expands Asian Presence With Pacnet Data Center in Singapore

Amsterdam-based hosting provider LeaseWeb has opened a data center within Pacnet's CloudSpace II facility in Singapore. CloudSpace II opened earlier this year and is Pacnet's newest data center.
LeaseWeb is one of the world’s larger hosting providers with 60,000 physical servers under management worldwide. The new Singapore data center expands its reach in Asia-Pacific, joining an existing location at an Equinix facility in the country.
Pacnet lands a growing customer that is looking to expand further across Pacnet’s footprint. The agreement provides for future data center expansion of LeaseWeb Asia Pacific into other locations in the region, including Hong Kong, Sydney and Tokyo.
The data center is also the first location to host LeaseWeb’s bare-metal cloud services in the region.
The facility is connected to Pacnet's worldwide 4-terabit-per-second IP backbone, which comprises 52 points of presence (PoPs) and 33 Internet exchanges.
LeaseWeb’s customer base is predominantly European, but the company has been expanding globally, including a growing North American presence. Based in Haarlem in the Netherlands, LeaseWeb is part of the Ocom Group. It has several sister companies, including colo provider EvoSwitch, pan-European network provider FiberRing and modular data center player DataXenter.
LeaseWeb Managing Director of Asia Pacific Bas Winkel said the company had an ambitious growth plan throughout the region. “This will enable customers from Europe and the U.S. to easily enter Asia Pacific markets, supported by our high-performance infrastructure solutions,” he said.
LeaseWeb has a slew of infrastructure services, including virtual, bare-metal and private cloud, dedicated servers and shared hosting, as well as some retail colocation. It also provides managed services, such as backup, with service level guarantees.

5:33p
Big Switch Intros SDN Fabric to Bring Web-Scale Networking to Non-Web-Scale Customers

Big Switch Networks announced a new switching fabric it claims will bring the Web-scale networking practiced by the Internet giants to a broader customer base. Called Big Cloud Fabric, it combines Software Defined Networking (SDN) with bare-metal networking hardware.
Along with the new fabric solution, the company announced that it has secured its first million-dollar customer, which it did not name. Its products are now running in 16 data centers, and its partner roster has grown with the addition of A10 Networks, Canonical, Citrix, Dell, Fortinet, Hortonworks, Microsoft, Mirantis, Quanta, Red Hat and Riverbed.
“The innovators in data center networking over the last five years have primarily been the hyperscale players such as Google, Facebook, Microsoft, and Amazon,” Big Switch CEO Douglas Murray said in a statement. “Our mission is to bring this hyperscale design to data centers worldwide, enabling companies to achieve improved operational efficiency and delivering on the original promise of SDN.”
The product is based on the company's Switch Light operating system running on physical leaf and spine bare-metal switches. The fabric uses Broadcom's Trident II silicon and is designed for 10G and 40G connectivity.
Big Cloud Fabric will be offered in two editions: one using a purely physical leaf-and-spine fabric, and one that combines physical and virtual, where leaf switches, spine switches and vSwitches are all controlled by the Big Cloud Fabric controller.
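For readers unfamiliar with the leaf-and-spine design, its appeal to Web-scale operators comes down to simple arithmetic: every leaf switch uplinks to every spine switch, so any two hosts are exactly two hops apart and capacity grows by adding spines. The sketch below uses hypothetical port counts, not a Big Switch configuration.

```python
# Back-of-the-envelope leaf-and-spine sizing. Every leaf has one uplink
# to every spine, so host-to-host traffic crosses exactly two hops.

def fabric_summary(leaves, spines, host_ports_per_leaf,
                   host_gbps=10, uplink_gbps=40):
    hosts = leaves * host_ports_per_leaf
    uplink_capacity = spines * uplink_gbps           # per leaf, into the fabric
    host_capacity = host_ports_per_leaf * host_gbps  # per leaf, toward hosts
    oversubscription = host_capacity / uplink_capacity
    print(f"{hosts} hosts, {spines} equal-cost paths between any two leaves, "
          f"{oversubscription:.1f}:1 oversubscription at the leaf")

# Hypothetical pod: 16 leaves with 48 x 10G host ports, four 40G spines.
fabric_summary(leaves=16, spines=4, host_ports_per_leaf=48)
```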
Big Tap monitoring software updated
The company also announced the latest release of its network monitoring application, Big Tap. Version 4.0 adds support for distributed, remote data centers and a rich filtering feature set, including deeper packet matching applicable to tunneled packets, such as those found in mobile protocols and VXLAN.
It also provides additional compatible hardware choices, including Dell open networking hardware as well as Edge-corE switches.

6:10p
SoftBank Data Centers to Host VMware's IaaS Cloud in Japan

VMware and SoftBank Commerce and Service (a SoftBank Telecom subsidiary) have brought VMware's vCloud Hybrid Service to Japan, the first Asian market for the Palo Alto, California, company's Infrastructure-as-a-Service offering.
For its part, VMware will build, manage, operate and support the vCloud service and provide the primary route to market through its ecosystem of partners. SoftBank will contribute its data centers, network and a dedicated sales force, including more than 7,000 resellers in Japan.
The service was announced last year and has been available out of five data center sites in the U.S. and UK, three of them CenturyLink facilities. VMware has plans for further expansion in Asia Pacific, as well as EMEA, and a U.S. federal government offering is in the works.
The vCloud Hybrid service in Japan will launch as a beta program, slated for general availability in the fourth quarter. It will initially offer compute, storage, networking, data protection and disaster recovery, the company said.
VMware CEO Pat Gelsinger said, “VMware vCloud Hybrid Service is growing quickly in the U.S. and UK, and the capability we are talking about today addresses Japan’s data locality, privacy, security and sovereignty challenges. More such deployments will follow, each tailored to suit the needs of key markets in Asia Pacific.”
The joint venture between VMware and SoftBank is being billed as a service with the channel community in mind. vCloud Hybrid Service can now be sold the same way any other VMware product can. VMware says it has a network of almost 200 vCloud Service providers in Japan that operate clouds based on its software.
Next up: China
In addition to Japan, another Asian market VMware is going after is China. The company says it has been pursuing a strategy of teaming with key Chinese technology, distribution and service provider partners to produce Chinese solutions for Chinese organizations.
It is exploring a hybrid cloud service with the cloud computing branch of China Telecom, under the brand name CT E-Surfing Hybrid Cloud Services. China Telecom, according to VMware, serves the largest Internet user base in the country.

7:35p
GE Claims Breakthrough in Fuel-Cell Tech, Launches Fuel-Cell Subsidiary

General Electric unveiled a fuel-cell subsidiary called GE Fuel Cells and said it was building a new fuel-cell manufacturing plant in upstate New York. Fuel cells use natural gas or biogas to generate electricity and have seen some adoption in the data center space.
GE Fuel Cells is essentially a corporate-backed startup tasked with commercializing a new fuel-cell technology the company says may bring costs down substantially. GE said its scientists have achieved a breakthrough in solid oxide fuel-cell technology, which led to the formation of the new subsidiary and construction of the manufacturing plant.
Fuel cells generate energy by triggering a chemical reaction, and the designs used to date require costly materials to drive that reaction, according to the vendor. GE's new fuel cells use stainless steel in place of platinum and rare metals, which should bring the cost down.
Fuel cells are a cleaner alternative to coal-fired power plants and have been used in tandem with wind and solar farms. Apple has taken this approach to powering its data center in Maiden, North Carolina, and eBay’s latest data center in Utah relies entirely on fuel cells for power.
The two projects are exceptions, however. The majority of fuel cell deployments at data center sites have been experimental and supplementary to grid power.
“The cost challenges associated with the technology have stumped a lot of people for a long time,” said Johanna Wellington, advanced technology leader at GE Global Research and the head of GE’s fuel cell business. “But we made it work, and we made it work economically. It’s a game-changer.”
The announcement brings a heavyweight player to the fuel cell market. Of companies selling fuel cells into the data center space, the most successful has been Bloom Energy, whose products power the Apple and eBay facilities, and have been deployed at CenturyLink’s California data center.
Other companies with fuel-cell products for data centers include Hydrogenics, whose hydrogen-powered fuel cells are sold by CommScope as an alternative to diesel generators, and ClearEdge Power.

8:35p
SoftLayer Adds InfiniBand to Offer High Speed Networking Performance in the Cloud
This article originally appeared at The WHIR.
SoftLayer customers will benefit from faster connections between bare metal servers, as the company announced on Tuesday that it has added support for the InfiniBand networking architecture.
IBM, SoftLayer’s parent company, said that InfiniBand technology enables high network throughput and low latency between bare metal servers, making it ideal for high-performance computing applications in the cloud.
InfiniBand is an interconnect specification used by supercomputers to deliver up to 56Gb/s per link and carry multiple traffic types over a single connection. It is used mainly in high-performance computing, but also in big data, cloud, database, storage and Web 2.0 applications.
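Some quick arithmetic shows what 56Gb/s per link means for data-heavy workloads. The comparison below is illustrative only: it ignores encoding and protocol overhead, which reduce usable throughput on any link.

```python
# Rough time to move a dataset at various link rates, ignoring protocol
# overhead and encoding; real-world throughput is somewhat lower.

def transfer_minutes(dataset_gb, link_gbps):
    return dataset_gb * 8 / link_gbps / 60  # gigabytes -> gigabits -> minutes

DATASET_GB = 1000  # a 1 TB dataset
for name, gbps in [("1GbE", 1), ("10GbE", 10), ("FDR InfiniBand", 56)]:
    print(f"{name:15s} {transfer_minutes(DATASET_GB, gbps):6.1f} minutes")
```

Moving a terabyte drops from over two hours on gigabit Ethernet to under three minutes at InfiniBand rates, the kind of gap that matters for the high-performance computing workloads SoftLayer is targeting.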
“As more and more companies migrate their toughest workloads to the cloud, many are now demanding that vendors provide high speed networking performance to keep up,” SoftLayer CEO Lance Crosby said. “Our InfiniBand support is helping to push the technological envelope on what can be done on the cloud today. It showcases our innovation when collaborating with customers to help them solve complex business issues.”
Recently, the InfiniBand Trade Association said that InfiniBand is the interconnect most used by the world's fastest supercomputers, now connecting 222 of the TOP500 systems, or 44.4 percent. The TOP500 list is published twice a year and ranks the top supercomputers around the world.
InfiniBand technology has been deployed at a handful of cloud providers as they look for ways to address the networking speed bottleneck. Cambridge, Mass.-based ProfitBricks is another cloud service provider that leverages InfiniBand technology. It claims that in using InfiniBand, it is able to offer customers 80 times the connection speed of its competitors who use Ethernet connections.
InfiniBand is also used by Atlantic.Net, a cloud and hosting provider with data centers in Dallas and Florida.
As IBM looks for ways to spend its $1.5 billion cloud investment, upgrading SoftLayer technology and adding new services is a recent area of focus. Last month, SoftLayer launched a service that allows customers to establish a private connection between their existing IT infrastructure and their SoftLayer compute resources, called Direct Link.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/softlayer-adds-infiniband-offer-high-speed-networking-performance-cloud

9:00p
NIMBY and the Data Center: Lessons From the Battle of Newark

When you announce your next project, will it be greeted with praise or protest? That's an important question for data center developers in the wake of the collapse of a controversial project in Delaware.
On July 10, the University of Delaware terminated its lease with The Data Centers LLC, which had planned to build a large data center supported by a 279-megawatt combined heat and power (CHP) generation facility that would allow it to operate "off the grid." The project was heralded as a forward-looking data center cogeneration effort that would bring jobs and up to $1 billion in investment to the town of Newark.
The project was met with resistance by members of the local community, which coalesced into Newark Residents Against the Power Plant, a grass-roots group that made masterful use of social media to drive an alternate narrative: that the power plant was a threat, and developers were hiding critical facts about their plan. This scrutiny was a major factor in the university’s decision to back out of the project.
Lessons for future projects
It would be easy for the industry to view the “Battle of Newark” as an isolated incident involving a unique project, with limited relevance to other developments. But data center developers should pay close attention, as the Delaware debate introduced a new scenario: the data center as a political hot potato.
It’s instructive to examine the tactics adopted by NRAPP in mobilizing opposition to The Data Centers’ project:
- The group quickly deployed a web site and built a following through an e-newsletter, Facebook and Twitter.
- NRAPP tracked every development on its web site and social channels, posting 150 documents, hours of videos of local meetings and maps of the data center site and the surrounding neighborhood.
- It assigned “neighborhood captains” to distribute yard signs, T-shirts and door hangers and hand out flyers and fact sheets about the project.
- The group mobilized support from other organizations, including the Delaware Audubon Society, The Sierra Club, student groups, university faculty, the Delaware Coalition for Open Government and various environmental groups.
- NRAPP tapped the skills of professionals in its membership, using the Freedom of Information Act (FOIA) to gain access to documents, and filing a Superior Court lawsuit to oppose the project.
“We have proven that the very concept of the ‘done deal’ is now dead,” the group said in a statement. “The community’s voice is powerful in shaping our future. The significance of this effort extends beyond the power plant and will have a positive impact on our community for years to come. It also serves as a powerful example for other communities facing similar challenges.”
That last sentence is worth considering, as NRAPP has created a template for future efforts to mobilize opposition to data center projects. So how do developers avoid a protracted battle over future projects? There are several “lessons learned” from the Newark fiasco they should keep in mind.
‘Power plants’ are problematic
Data centers don’t usually freak people out. But power plants do.
There haven’t been many NIMBY (Not In My Backyard) disputes about data centers. Staffing is minimal, so they don’t place a burden on local traffic or schools. Noise and emissions can usually be contained on site. These projects boost the tax base and are seen as symbols of the new economy. That’s why many data center builds are announced by the governor at a press conference.
But when a power plant wants to move into the neighborhood, it can prompt a very different reaction. The on-site power project developed by The Data Centers was innovative for the data center industry. It’s fair to say that the developers expected their project to be welcomed as a data center. But the neighbors looked at the plans and saw a power plant.
This is a power-obsessed industry. On-site power is being included in more and more data centers, albeit rarely at the scale attempted in Newark. Large banks of diesel backup generators sometimes attract scrutiny, as was the case in Quincy, Washington. But this equipment is well known and understood by most homeowners.
Developers implementing new approaches to on-site power generation would do well to either build in remote areas or be prepared to take the time to explain the technology to local officials and residents. Which brings us to our next "lesson learned."