Data Center Knowledge | News and analysis for the data center industry
Wednesday, May 14th, 2014
12:00p
‘CDN Replacement’ Startup Instart Logic Raises $26M Series C
Instart Logic, a content delivery network startup, has closed an oversubscribed $26 million Series C funding round. The company, which has more than 10 patents pending, says its CDN services are tuned to the cloud, the proliferation of mobile devices and the last mile, billing itself as a “CDN replacement.”
The growth in mobile devices, wireless access and application size has created new bottlenecks in so-called “last-mile” networks, affecting application performance. Better and faster content delivery is becoming a more mainstream need, and the traditional CDN focus has been on addressing the “middle mile.”
Instart’s latest round was led by new investor Kleiner Perkins Caufield & Byers (KPCB) with participation from prior investors Andreessen Horowitz, Greylock Partners, Sutter Hill Ventures and Tenaya Capital.
“Application performance and user experience have never been a more strategic, competitive differentiator,” Matt Murphy, general partner at KPCB, said. “Instart Logic’s application streaming approach is fundamentally different, delivering dramatic results for their customers and making existing content delivery networks obsolete.”
Instart CEO Manav Mital said his technology had introduced some of the world’s most performance-obsessed companies to a new way of accelerating cloud applications in less than 12 months, and it was not going to stop there. “We’ve only scratched the surface of our technology potential, and we are pleased that KPCB is joining us to take our business into the next phase of growth.”
The company calls its technology “intelligent streaming.” Its cloud-client platform identifies which parts of a web page should be prioritized in the load process and puts them at the front of the line, which means a website or Software-as-a-Service provider’s customers can interact with the content sooner.
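Instart Logic has not published its implementation, but the prioritization idea can be illustrated with a small, hypothetical sketch: resources a visitor needs to start interacting with the page are delivered first, and everything else streams afterward. The resource categories, priorities and URLs below are assumptions for illustration only.

```python
# Toy illustration of load-order prioritization -- not Instart Logic's actual
# "intelligent streaming" technology. Lower priority number = delivered sooner.
RESOURCE_PRIORITY = {
    "critical-html": 0,      # the document skeleton the browser needs first
    "critical-css": 1,       # styles required to render above-the-fold content
    "above-fold-image": 2,   # images the visitor sees immediately
    "script": 3,             # interactivity can usually arrive slightly later
    "below-fold-image": 4,   # content the visitor has to scroll to reach
}

def delivery_order(resources):
    """Sort page resources so the visitor can interact with the page sooner."""
    return sorted(resources, key=lambda r: RESOURCE_PRIORITY.get(r["kind"], 99))

page = [
    {"url": "/footer-banner.jpg", "kind": "below-fold-image"},
    {"url": "/site.css", "kind": "critical-css"},
    {"url": "/hero.jpg", "kind": "above-fold-image"},
    {"url": "/index.html", "kind": "critical-html"},
]

print([r["url"] for r in delivery_order(page)])
# -> ['/index.html', '/site.css', '/hero.jpg', '/footer-banner.jpg']
```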
Recent additions include PCI compliance (extremely important for retailers and anyone else handling credit card information) and enhanced HTML streaming, features aimed at security and performance.
Bad performance can be just as harmful as an outage. Reports from analytics companies like Keynote and Compuware Gomez during the holiday seasons show that even slight latency often causes customers to abandon websites, and it is latency during key functions that most negatively affects the end-user experience.
Instart said in March it had grown 500 percent since its launch in the summer of 2013. Its core customer base consists of large and mid-sized companies serving millions to tens or hundreds of millions of page views each month, including Internet retailers, online travel and hospitality companies, multi-channel vendors, SaaS providers and online media and gaming companies.
Some of the most recognizable clients include Washington Post, Omni Hotels & Resorts, Volcom, Wine.com, Dollar Shave Club, New Relic, Kongregate and Gogobot.
The CDN market is a busy one, with companies like Akamai, Limelight Networks and Highwinds running ahead of the pack. Big Internet companies, such as Google and Amazon, provide CDN services of their own.
The last CDN provider to drive such a high level of investment in this market was Edgecast Networks, which secured $54 million in financing before being acquired by Verizon in November 2013. Verizon bought the company to further improve and increase its ability to meet exponential growth in online content and to broaden its portfolio of site acceleration services for digital enterprises.
Another player that is closer to Instart than the traditional CDN providers is CloudFlare, which has enjoyed great success for its site acceleration services, particularly among mass-market hosting providers and big websites.
12:30p
Red Hat Contributes ManageIQ Code to OpenStack
Red Hat announced it will contribute software it gained when it acquired ManageIQ in 2012 to the OpenStack community and will also provide integration and orchestration content for lab automation. The software company made the announcement at the OpenStack Summit this week in Atlanta, Ga.
This pairing of platform and content helps ease the building of development and testing clouds based on OpenStack, the popular open source cloud architecture and software. Additional content will be provided by a partner ecosystem joining Red Hat in the ManageIQ community, including AutoTrader Group, Booz Allen Hamilton, CiRBA and Chef.
ManageIQ is a solution for enterprise cloud management and automation and currently serves as the basis for Red Hat’s hybrid cloud management product CloudForms. That commercial product provides support for management of hybrid clouds built on OpenStack, VMware, KVM, Microsoft and Amazon Web Services technologies.
Red Hat, a company founded on open source principles, is adding a mature code base to OpenStack with its contribution of ManageIQ. The contribution continues the company’s investment in the cloud platform; Red Hat has been a leading corporate contributor of code to many OpenStack releases and to core technologies that complement the OpenStack ecosystem.
As with all software platforms, the key to fostering growth is attracting developers. By supplying valuable software and community resources to a collaborative group of open source developers, Red Hat is investing in solving cloud management challenges and strengthening its position across cloud, Linux, middleware, storage and virtualization technologies.
“Some industry players have focused on building an open source cloud infrastructure with OpenStack and selling expensive, proprietary management software on top of that,” Joe Fitzgerald, general manager of cloud management at Red Hat, said. “We believe the entire cloud should be open with no lock-in, so we are contributing this valuable code base to open up the management stack for the first time.”
Web-scale, cross-domain integration and agile, open platforms may all sound like cloud hype terms. However, this is really what Red Hat is helping to build and leverage – management capabilities for virtual, private and hybrid cloud infrastructures.
The ecosystem of partners that ManageIQ and Red Hat have built is also investing in the ManageIQ community and providing complementary integrated solutions to the greater project.
“Success in today’s digitally-driven economy means delighting customers, which is only possible by using web-scale IT to move at extreme speed and scale,” Ken Cheney, vice president of business development at Chef, said. “Red Hat’s ManageIQ Community brings together best-of-breed open source, cloud computing and IT automation technologies – all core components of web-scale IT – and offers them on a single, open platform.”
Red Hat expects to launch the new ManageIQ open source community in the coming weeks.
12:37p

Understanding Hadoop-as-a-Service Offerings
Raymie Stata is CEO and founder of Altiscale, Inc. Raymie previously served as Chief Technical Officer at Yahoo!, where he played an instrumental role in algorithmic search, display advertising and cloud computing. He also helped set Yahoo’s open source strategy and initiated its participation in the Apache Hadoop project.
Hadoop has clearly become the leading platform for big data analytics today. But in spite of its immense promise, not all organizations are ready or able to implement and maintain a successful Hadoop environment. As a result, the need for Hadoop, coupled with the lack of expertise in managing large, parallel systems, has given rise to a multitude of Hadoop-as-a-Service (HaaS) providers. HaaS providers present an outstanding opportunity for overwhelmed data center admins who need to incorporate Hadoop but don’t have the in-house resources or expertise to do so.
But what kind of HaaS provider do you need? The differences between service offerings are dramatic. HaaS providers offer a range of features and support, from basic access to Hadoop software and virtual machines, to preconfigured software in a “run it yourself” (RIY) environment, to full-service options that include job monitoring and tuning support.
Any evaluation of HaaS should take into account how well each of the services enables you to meet your business objectives while minimizing Hadoop and infrastructure management issues. Here are five criteria that help distinguish the variety of HaaS options.
HaaS should satisfy needs of both Data Scientists and Data Center Administrators
Data scientists spend significant amounts of time manipulating data, integrating data sets and applying statistical analyses. These users typically want a functionally rich and powerful environment. Ideally, data scientists should be able to run Hadoop YARN jobs through Hive, Pig, R, Mahout and other data science tools. Compute operations should be available the moment the data scientist logs into the service to begin work. Delays in starting clusters and reloading data are inefficient and unnecessary. “Always on” Hadoop services avoid the frustrating delays that occur when data scientists must deploy a cluster and load data from non-HDFS data stores before starting work.
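As a rough illustration of what “immediately available” means in practice, the hypothetical sketch below submits a HiveQL query the moment the analyst connects. It assumes the standard Hive command-line client is installed and that a `weblogs` table already lives in the cluster; both are assumptions, not details from the article.

```python
# Hypothetical sketch: on an "always on" HaaS cluster, analysis can start the moment
# the data scientist logs in -- no cluster spin-up, no data reload. Assumes the
# standard Hive CLI is on the PATH and a `weblogs` table already exists.
import subprocess

def run_hive(sql: str) -> str:
    """Run a HiveQL statement through the stock `hive` CLI and return its output."""
    result = subprocess.run(["hive", "-e", sql], capture_output=True, text=True, check=True)
    return result.stdout

if __name__ == "__main__":
    print(run_hive("SELECT page, COUNT(*) AS views FROM weblogs GROUP BY page LIMIT 10;"))
```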
For systems administrators, less is more. Their job typically entails a set of related management tasks, and management consoles should be streamlined to let them perform those tasks quickly and with a minimal number of steps. Parameters the administrator must configure should be exposed, while parameters managed by the HaaS provider should be hidden. Similarly, low-level monitoring details should be left to the HaaS provider; the administration interface should simply report on the overall health and SLA compliance of the service.
HaaS Should Store “Data at Rest” in HDFS
HDFS is the native storage layer for data in Hadoop. When data is persisted in other formats or stores, it must be loaded into HDFS before processing. Storing data persistently in HDFS avoids the delays and cost of translating data from another format into HDFS.
After initial data loads, users should not have to manage data in storage systems that are not native to Hadoop, or be required to move data into and out of HDFS as they work. HDFS is industry-tested to provide cost-effective, reliable storage at scale. It is optimized to work efficiently with MapReduce and YARN-based applications, is well suited to interactive use by analysts and data scientists, and is compatible with Hadoop’s growing ecosystem of third-party applications. HaaS solutions should offer “always on” HDFS so users can easily leverage these advantages.
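A minimal sketch of the “data at rest” workflow, assuming the standard `hdfs dfs` client and illustrative paths: data is loaded into HDFS once, and later sessions simply read what is already there.

```python
# Minimal sketch of keeping data "at rest" in HDFS: one initial load, after which jobs
# read directly from HDFS with no per-session staging. Paths are illustrative; assumes
# the standard `hdfs` command-line client is installed and configured.
import subprocess

def hdfs(*args: str) -> None:
    subprocess.run(["hdfs", "dfs", *args], check=True)

if __name__ == "__main__":
    hdfs("-mkdir", "-p", "/data/weblogs")                        # create the target directory
    hdfs("-put", "-f", "weblogs-2014-05.csv", "/data/weblogs/")  # one-time load into HDFS
    hdfs("-ls", "/data/weblogs")                                 # later sessions just read what is here
```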
HaaS Should Provide Elasticity
Elasticity should be a central consideration when evaluating HaaS providers. In particular, consider how transparently the service handles changing demands for compute and storage resources. For example, Hadoop jobs can generate interim results that must be stored temporarily. Does the HaaS transparently expand and contract storage without system administrator intervention? If not, Hadoop administrators may need to be on call to adjust storage parameters or risk delaying jobs.
Also consider how well the HaaS manages workloads. Environments that support both production jobs and ad hoc analysis by data scientists will experience a wide range of mixed workloads. How easily does the service adjust to these varying workloads? Can it effectively manage YARN capacity and related CPU capacity?
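One concrete way to see whether a shared cluster is absorbing mixed workloads is to watch the YARN ResourceManager’s cluster metrics. The sketch below polls its REST API; the host name is a placeholder, and a fully managed HaaS would normally do this monitoring for you.

```python
# Hypothetical sketch: poll the YARN ResourceManager REST API to see whether the
# cluster still has schedulable capacity for mixed workloads. The host name is a
# placeholder; a managed HaaS provider would typically handle this for you.
import requests

RM_METRICS_URL = "http://resourcemanager.example.com:8088/ws/v1/cluster/metrics"

def cluster_headroom() -> dict:
    metrics = requests.get(RM_METRICS_URL, timeout=5).json()["clusterMetrics"]
    return {
        "apps_pending": metrics["appsPending"],                # jobs waiting for capacity
        "available_mb": metrics["availableMB"],                # memory still schedulable
        "available_vcores": metrics["availableVirtualCores"],  # CPU still schedulable
    }

if __name__ == "__main__":
    headroom = cluster_headroom()
    if headroom["apps_pending"] > 0 and headroom["available_mb"] == 0:
        print("Cluster is saturated; elastic capacity or better YARN queue limits are needed.")
    else:
        print("Cluster has headroom:", headroom)
```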
12:55p

Datapipe’s Platform to Analyze Enterprise Usage of AWS
Datapipe is a service provider that views Amazon Web Services not as competition, but as an advantage. Today, the company introduced Datapipe Cloud Analytics for AWS, a managed service offering that uses data-mining and intelligence tools to analyze AWS usage patterns across the enterprise.
The security-centric hosting provider offers everything from traditional and cloud hosting to a suite of managed services. The company has increasingly offered managed services around AWS driven by customer demand.
“A large set of hybrid-type clients want the benefits of cloud-based usage in Amazon,” Craig Sowell, senior vice president of marketing at Datapipe, said. “Where we find great demand is with the security- and compliance-minded and with general enterprise. They want to continue to adopt AWS, but want a partner to help them do that.”
This is an enhanced offering, building on a previous service called Cloud Reports, which was based on Datapipe’s acquisition of Newvem in September 2013. Newvem provides analytics and workload management for AWS and serves as the foundation for the new offering.
“This is the next generation of capability,” Sowell said about the new offering. “We rebranded to Cloud Analytics because [there] is a business-intelligence engine at the heart of it.”
There has been an increased push in the market to provide more detailed analytics around cloud infrastructure, by both cloud providers and third-party tool vendors such as Newvem (prior to its acquisition) and Cloudability. A big difference here, apart from functionality, is that Datapipe’s offering is a managed service.
“We’ve reached a tipping point in the industry and traditional service management tools are no longer enough to enable IT operators to optimize all of their AWS services in a cost-effective and scalable manner,” Datapipe CEO Robb Allen said. “By combining the Datapipe Cloud Analytics platform with our industry-leading managed services, we’re helping our customers tackle the operational and governance challenges of successfully managing their public cloud deployments.”
At the core of the Datapipe Cloud Analytics service is a Big Data Engine that connects to a client’s AWS account via a secure, non-invasive API to collect and analyze raw usage metrics in near-real time, along with cost and billing statements that provide a detailed, granular view of cost data. The Datapipe Cloud Analytics platform then layers on sophisticated insights, analytics, recommendations and tools to help optimize cloud usage and costs.
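Datapipe has not published the internals of its Big Data Engine, but the kind of raw usage metric such a service collects can be illustrated with a small sketch that pulls average EC2 CPU utilization from CloudWatch via boto3. The region and instance ID are placeholders, and this is an illustration only, not Datapipe’s implementation.

```python
# Illustrative sketch of collecting a raw AWS usage metric: average EC2 CPU utilization
# over the last hour, fetched from CloudWatch with boto3. Region and instance ID are
# placeholders; credentials are assumed to be configured in the environment.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def avg_cpu(instance_id: str) -> float:
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,                # five-minute buckets
        Statistics=["Average"],
    )
    points = stats["Datapoints"]
    return sum(p["Average"] for p in points) / len(points) if points else 0.0

if __name__ == "__main__":
    print(f"Average CPU over the last hour: {avg_cpu('i-0123456789abcdef0'):.1f}%")
```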
The service provides visualization and control for 100 percent of existing AWS services, according to the vendor.
There are some new capabilities around cost, giving a more robust view of spending in the dashboard environment. The service adds functionality around business groups and governance for cost-allocation reports and chargebacks.
The service also simplifies and makes recommendations on disaster recovery best practices and provides one-click access to AWS-savvy Datapipe support engineers around the clock.
There are also new features around security, such as monitoring of security group configurations, as well as cost optimization through reserved-instance analysis.
1:30p

CyrusOne Breaks Ground on Second Facility in Phoenix
CyrusOne has broken ground on the second data center at its campus in Chandler, Ariz., just outside of Phoenix. At full build, the building will have 60,000 square feet of data center space and up to 12 megawatts of power. A recent large contract at the existing facility on campus — a 41,000-square-foot deal — necessitated speeding up construction of the second facility.
Kevin Timmons, the company’s CTO, said the deal was with a large technology company but did not name the customer. “Demand is so strong that we are moving up our plans to build this second facility by nearly a year,” he said.
CyrusOne has been on a building tear in general, expanding in several Texas markets, as well as planning a sizeable data center in Northern Virginia’s data center alley.
The expansion will add to the more than 77,500 square feet of space already commissioned in Chandler, and the company will have room for seven more data centers on the campus going forward. CyrusOne purchased the 57-acre parcel of land that houses its Phoenix campus in 2011, breaking ground in May 2012.
Driven by the need to store rainwater, the company employed an unusual roof design with the first Chandler facility.
The Arizona market began as a popular disaster recovery choice for companies in Southern California before growing into a booming market in its own right over the years. The new data center will be aimed at Fortune 1000 companies looking to locate in an area considered one of the safest in the country, free from earthquakes, tornadoes and hurricanes.
“Arizona is proud of its reputation as one of the most business-friendly data center locations in the country,” Arizona State Representative Jeff Dial said. “More than just low rates of seismic activity and other natural disasters, we offer CyrusOne and its customers a robust fiber network, competitive electricity rates, low-latency connectivity to West Coast cities and a highly educated technology workforce.”
2:00p
As Software-Defined Storage Gains, Is Physical Storage in Trouble?
As the evolution of the data center and cloud continues, IT shops are continuously looking for ways to make their infrastructure operate more efficiently. We saw this happen at the server level with virtualization, and many other physical aspects of the modern data center have begun to be abstracted as well. It happened with networking, at the compute layer and now storage.
Software-defined technology is much more than just an IT buzz term. It’s a new way to control resources on a truly distributed plane. The ability to abstract powerful physical components into logical services and features can help a cloud platform scale and become more robust. It also allows the data center to control key resources more efficiently. One of those resources, which was beginning to sprawl physically quite a bit, was storage. Storage admins would have to buy bigger controllers, more disks and additional shelves just to keep up with modern data and cloud demands. Something had to give.
The rise of software-defined storage has gotten a lot of people excited, so much so that several players have already dived right into the SDS pool.
Nutanix, for example, was recently granted a patent for its software-defined storage solution. The patent provides clarity as to how software-defined storage solutions are optimally designed and implemented, detailing how a system of distributed nodes (servers) provides high-performance shared storage to virtual machines (VMs) by using a “Controller VM” that runs on each node. Ultimately, this presents the entire data center with powerful, scalable technologies.
With all of that in mind, why should the traditional storage infrastructure worry? Well, there are some real reasons as to why SDS is already making an impact on today’s storage ecosystem.
- Software-defined storage is agnostic. SDS doesn’t really care what storage you have sitting in the back end. It could be DAS, FC, FCoE or iSCSI. The virtual service or appliance simply asks you to point the storage repository at the SDS VM, and it will do the rest. The storage can be spinning disk or a flash array. Applications and data requests are then passed to the appropriate storage pool. The beauty is the intelligence, control and scale that can be achieved with SDS: effectively, administrators gain unified control over a heterogeneous storage environment (see the sketch after this list).
- More logical control, fewer physical requirements. You’re essentially offloading much of the controller functionality onto virtual appliances. This means storage controllers can be a bit smaller and use less disk. Intelligent storage controls, routing and optimization can all happen at the virtual level while still interacting with a number of different underlying storage platforms. This logical layer can span multiple data centers and aggregate various storage environments under one roof. From there, administrators are able to better control storage resources and optimize utilization without having to buy additional disks.
- Making storage smarter and more distributed. The ability to control all storage components from a virtual machine has quite a few benefits. One of these is the ability to create a directly extensible storage infrastructure. With a virtual storage controller layer using SDS, you’re able to aggregate your storage environment and then distribute it from data center to cloud. Ultimately, SDS platforms won’t care which hypervisor you’re using or which physical controllers you have; they only want to be presented with the appropriate resources. From there, the VMs will be able to communicate with one another while still living on heterogeneous platforms.
- Say hello to commodity storage. This is a big one. A major conversation point in the industry is the boom in commodity hardware usage. This is happening at the server level and at other data center levels as well. Big shops like Google are already building much of their own hardware as well as services. With the concepts around software-defined storage, there isn’t anything stopping anyone from buying a few bare-metal servers and filling them up with their own spinning disk and flash storage. From there, they can deploy an SDS solution to manage all of these disks and resources. In fact, you can replicate the same methodology over several data centers and cloud points. These distributed locations could all be running commodity storage and hardware while being controlled by a single SDS layer. That’s pretty powerful stuff, and certainly potentially disruptive to the traditional storage methodology.
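As a rough illustration of the “agnostic” and “unified control” ideas referenced in the first bullet above, the toy sketch below registers heterogeneous backend pools behind one logical layer and places volumes on whichever pool has room. It is not any vendor’s SDS implementation, and the placement policy is deliberately naive.

```python
# Toy sketch of a software-defined storage layer: heterogeneous backends (DAS, iSCSI,
# flash, ...) are registered as pools, and callers provision volumes without caring
# which backend serves them. Illustrative only, not any vendor's implementation.
from dataclasses import dataclass, field

@dataclass
class Pool:
    name: str
    kind: str            # e.g. "iSCSI", "DAS", "flash"
    capacity_gb: int
    used_gb: int = 0

    def free_gb(self) -> int:
        return self.capacity_gb - self.used_gb

@dataclass
class SoftwareDefinedStorage:
    pools: list = field(default_factory=list)

    def register(self, pool: Pool) -> None:
        self.pools.append(pool)                            # backend type is invisible to callers

    def provision(self, size_gb: int) -> str:
        pool = max(self.pools, key=lambda p: p.free_gb())  # naive placement: most free space wins
        if pool.free_gb() < size_gb:
            raise RuntimeError("no pool has enough free capacity")
        pool.used_gb += size_gb
        return f"{size_gb} GB volume placed on {pool.name} ({pool.kind})"

sds = SoftwareDefinedStorage()
sds.register(Pool("array-1", kind="iSCSI", capacity_gb=10_000))
sds.register(Pool("local-ssd", kind="flash", capacity_gb=2_000))
print(sds.provision(500))   # -> "500 GB volume placed on array-1 (iSCSI)"
```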
Before anyone in the storage community gets nervous, we need to remember that some of the big players are already shipping their own versions of software-defined storage. NetApp delivers software-defined storage with clustered Data ONTAP, OnCommand management and FlexArray software. EMC’s ViPR technology probably comes the closest to SDS among the big storage vendors; its model introduces a lightweight, software-only product that transforms existing storage into a simple, extensible and open platform.
Despite these advancements, many IT shops are already re-evaluating their storage situation. Before they purchase a new controller or an extra set of shelves, administrators are looking at the software-defined option. Regardless of the chosen direction, one thing is for sure: storage is evolving, very rapidly, to meet the demands of the user and the cloud.
Data is becoming much more critical in an ever-connected world. Storage environments now need to be smarter, easier to manage and highly efficient. In some cases, this might mean introducing a software-defined storage platform that manages the rest of your storage environment. Either way, examine your storage infrastructure carefully and make sure it aligns directly with your business vision.
6:00p

Microsoft Funds Academic Data Center Efficiency Research Projects
Microsoft has dished out $160,000 in grants to four academic research projects focused on data center energy efficiency.
Research teams from four different U.S. universities each received $40,000 from the Redmond, Wash., tech giant to look into resource-efficient cloud computing, improving data center efficiency and cost through software reliability analysis, provisioning cooling systems and microgrids.
While compute, network and storage technologies progress by leaps and bounds, innovation in the mechanical and electrical infrastructure technologies used in the buildings that house the hardware moves very slowly. Nearly half of all the energy a typical data center receives is either used by the building’s cooling systems or lost as the energy goes through multiple conversions along the electrical chain.
The data center research teams that received the grants come from Stanford University, Carnegie Mellon University, Rutgers University and South Dakota State University.
Microsoft awarded the grants as part of its annual Software Engineering Innovation Foundation program, which usually supports software engineering research teams around the world. This is the first year the company included data center innovation and energy efficiency in the program, Sean James, senior research program manager for Microsoft’s Global Foundation Services division, wrote in a blog post announcing the grants.
The company requests research teams to submit proposals for grants each year. Out of 100-plus proposals submitted in the latest round, 12 received the funds.
Work With Academia Yields Big Ideas
Like many technology companies, Microsoft partners with academia on an ongoing basis, using the research community as a source of innovation.
“As Microsoft continues to find ways to transform the energy supply chain toward greater efficiency and reduced environmental impact, we have seen that driving innovation in energy often requires a close partnership between industry and academia,” James wrote.
Microsoft’s recent project to test the idea of installing a fuel cell directly into the IT rack as a power source for the hardware came out of the company’s collaboration with the National Fuel Cell Research Center at University of California, Irvine, for example.
In November 2013, the company published a white paper outlining a design in which fuel cells that convert methane into electricity are installed directly in the rack. The approach has the potential to reduce consumption of coal energy and increase energy efficiency by eliminating much of the electrical infrastructure equipment that sits between the utility substation and the IT rack at a typical data center.
This research project is a piece of a greater vision – something Microsoft researchers call “Data Plant.” The idea is to install data center modules at water treatment plants or landfills, where methane is abundant, and to use methane to fuel computing.
Microsoft has already deployed a proof-of-concept data plant at a water-treatment facility in Cheyenne, Wyoming.
6:44p

Safeguarding Equipment While Reducing Power Consumption
As organizations place more infrastructure into the modern data center, they can begin to run into power and control challenges. As the cost of power continues to rise and the demand for computing capacity grows at an unprecedented rate, balancing the cost of cooling equipment against the need for uninterrupted uptime presents a constant challenge.
Here’s something to consider: studies show that raising air intake temperatures by just 1 degree Fahrenheit (about 0.6 degrees Celsius) can reduce a data center’s annual power expenditure by 2 percent to as much as 5 percent. Clearly, increasing temperatures is a sure path to ongoing power savings that enhance your organization’s bottom line, year after year. But determining the “right” amount of cooling in the data center can seem nearly impossible: if a data center is too hot, equipment reliability suffers; over-cool it, and energy bills can needlessly skyrocket. It is a fine line, but one that must be addressed if the maximum value of IT as a strategic advantage is to be realized. This whitepaper from RF Code offers direct examples of the need for better power controls and monitoring within the data center.
It’s important to understand that efficiency measures do not have to impact the performance of your data center. As the whitepaper outlines, there are several key factors to consider when looking at power consumption and utilization. For example:
- Learning How to Keep your Gear Safe While Still Reducing Power Usage
- Understanding How to Increase Temperatures and Decrease Costs without Risking Equipment Failure
- The Power of Deploying Wire-Free Sensors from RF Code: Real-Time Environmental Visibility in the Data Center
- Taking Rack Cooling Index (RCI) & Return Temperature Index (RTI) into Consideration: Simple Metrics for Measuring Efficiency
- Following ASHRAE Guidelines & Achieving Data Center Efficiency with RF Code
Download this whitepaper today to learn how power controls can deliver direct cost benefits and infrastructure improvements. For example, a typical 8,000-square-foot data center has an annual power expenditure of about $1.6 million. Raising the air-intake temperature set point just 2 degrees results in annual savings of at least $64,000. By following ASHRAE’s latest guidelines, many data centers can increase temperatures by 10 or more degrees without risking equipment failure, delivering massive savings directly to the organization’s bottom line.
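The quoted savings follow from the 2-percent-per-degree estimate cited earlier in this post; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the quoted savings, using the low end (2 percent per
# degree Fahrenheit) of the range cited earlier in this post. Illustrative only.
annual_power_spend = 1_600_000   # dollars: typical 8,000 sq ft data center, per the whitepaper
savings_per_degree = 0.02        # 2 percent per degree F (low end of the 2-5 percent range)
degrees_raised = 2

savings = annual_power_spend * savings_per_degree * degrees_raised
print(f"Estimated annual savings: ${savings:,.0f}")   # -> Estimated annual savings: $64,000
```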
7:00p

CenterPoint Plans 12-Story Data Center Next to Chicago Data Hub
Chicago officials have approved plans for CenterPoint Properties Trust to build a 12-story data center next door to the city’s primary Internet data hub at 350 East Cermak. Industry sources say interconnection specialist Equinix has signed on as the anchor tenant for the project, which will provide much-needed expansion space for companies seeking access to the hundreds of networks doing business at 350 East Cermak, which is owned by Digital Realty Trust.
On April 24 the Chicago Plan Commission approved a redevelopment plan to create an entertainment and business district near McCormick Place, the convention center on the Chicago lakefront. The project, spearheaded by the Metropolitan Pier and Exposition Authority, includes a new basketball arena for DePaul University, a 1,200-room Marriott hotel, retail space and the new server farm. The plans for the data center call for office space and below-grade parking, in addition to mission-critical space.
The Chicago Tribune reported that some neighbors expressed concern during community meetings about noise from generators at the proposed data center. Developers say the presence of the Marriott next door will provide an incentive for proper sound management, and CenterPoint said it plans to install sound-absorbing barriers.
Limited Space in Downtown Chicago
Downtown Chicago has a limited supply of finished data center space, creating a challenge for companies seeking a large footprint.
“The Chicago area is one of the tightest data center markets in the country,” commercial real estate brokers from Avison Young write in a recent update. “There is less than 2 MW of wholesale space in downtown Chicago, commanding some of the highest pricing in the U.S. It may be one of the only markets in the country where rental rates will increase during 2014.”
Equinix is one of the largest tenants at 350 East Cermak, the 1.1-million-square-foot former printing plant that is now 100 percent leased for wholesale space. As the industry’s largest provider of interconnection services, Equinix would be an attractive anchor for a new project, serving the role of a “magnet.”
A number of developers are pursuing new data center projects in downtown Chicago, including Ascent Corp. and perhaps QTS Realty. Existing providers include Digital Capital (725 S. Wells) and Server Farm Realty.
Space in the Suburbs
While space remains limited downtown, there’s more capacity coming online in the western suburbs, where a cluster of data centers have sprung up to meet demand for space near Chicago. Avison Young says Digital Realty has leased the entire first phase of its Digital Chicago project in Franklin Park. Digital Realty has reportedly signed leases with seven tenants, including a 1.5 megawatt deal with SingleHop hosting.
The three-building, 22-acre Digital Chicago data center campus will have the capacity to accommodate up to twenty-nine 1.125 MW Turn-Key Flex data center PODs, or roughly 32.6 MW of IT load.
Other companies with new projects or expansions in suburban Chicago include DuPont Fabros Technology, Forsythe Technology, Latisys, ByteGrid, Server Farm Realty and Continuum.