Data Center Knowledge | News and analysis for the data center industry
Monday, October 13th, 2014
Salesforce Gets Into Cloud Business Intelligence Market
After about two years of engineering in stealth mode, Salesforce rolled out Wave, its cloud-based big data analytics service. This will be the biggest announcement at the company’s three-day Dreamforce conference kicking off in San Francisco today.
Salesforce is pitching Wave as an Orbitz or a Travelocity for business intelligence. Just like travel sites enable users to sift through flight information from around the world in seconds to find the flights they need, Wave enables users to quickly and easily answer questions about their business, according to Salesforce.
Someone who oversees call centers for a company, for example, can look at a visualization of call time statistics from different facilities and correlate this data with call center costs and sales results.
A salesperson using Salesforce’s customer relationship management service can decide which products to sell to which customers by combining demographic data with purchasing patterns over a period of time. A marketing exec can make decisions about a campaign by analyzing customer reaction to trends or products.
A growth spurt in cloud BI market
There is a preponderance of legacy on-premise business intelligence tools, provided by the likes of Teradata, IBM and Oracle. There is also a small but growing segment of cloud business intelligence services, where Salesforce is now trying to carve out a place for itself.
The San Francisco-based company, best known for its CRM software, is not alone among business software giants taking a crack at the cloud BI market.
Oracle announced its Business Intelligence Cloud Service at its OpenWorld conference in San Francisco in September. SAP has its Business Objects BI OnDemand service, including BI for Salesforce.
This week’s announcement is significant because it will further expand the market for cloud BI tools, Dave Vesset, program vice president for business analytics and big data at IDC, said. The market has been around for a while now, so it’s hard to do anything remarkably different from a technology standpoint, but Salesforce stepping in is in itself a significant development, he said.
 Salesforce’s Wave works across platforms and has extensive mobile capabilities. (Image: Salesforce)
Analytics for the rest of us
Until now, the company has largely relied on partners – such as SAP – for analytics and had some basic business intelligence functionality of its own on the CRM platform. But it wasn’t enough to provide modern interactive data and predictive analysis, Vesset said.
The big opportunity for Salesforce will be selling Wave to its existing customers who until now either didn’t have advanced business intelligence capabilities or were using third-party products. There are always customers out there that prefer to get as much functionality as possible from a single vendor, and that will be the primary audience for Wave, Vesset said.
“It’s not a replacement for an enterprise data warehouse, at least for now, and I don’t think it ever will be,” he said. “I don’t think it’s their intention.”
Pretty much all large companies use some type of a business intelligence tool already and have dedicated business analysts who know how to use those tools. But there are also people in customer service, sales and marketing, who use analytics tools primarily by putting in a request with the analysts and IT departments. That can be a lengthy process, and services like Wave aim at eliminating it altogether by abstracting the complexity and exposing user-friendly graphic interfaces that are easy to learn.
Built to handle data’s fluid nature
Stephanie Buscemi, senior vice president for Salesforce Analytics Cloud, said this accessibility for non-technical users was one of three things that differentiated the new service from legacy analytics solutions. The other two were its extensive usability on mobile devices and its focus on providing a platform.
As a platform, Wave can be integrated with a variety of data sources, such as enterprise resource planning or CRM, and developers can use APIs to build custom applications that incorporate Wave functionality.
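As a rough illustration of what building against such a platform can look like, here is a minimal Python sketch of an application pulling an aggregate from a cloud analytics service over REST. The endpoint, token handling, payload shape and field names are hypothetical placeholders for illustration only; this is not Salesforce’s documented Wave API.

```python
# Minimal sketch of an app querying a cloud analytics API.
# The endpoint, token and query shape are hypothetical placeholders.
import requests

API_BASE = "https://analytics.example.com/api/v1"   # hypothetical endpoint
TOKEN = "..."                                        # OAuth token obtained elsewhere

def query_dataset(dataset_id, measure, group_by):
    """Ask the analytics service to aggregate a measure grouped by a dimension."""
    resp = requests.post(
        f"{API_BASE}/query",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"dataset": dataset_id, "measure": measure, "groupBy": group_by},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["rows"]

# e.g., average call handle time per facility, to be joined later with cost data
rows = query_dataset("call_center_stats", "avg(handle_time)", "facility")
for row in rows:
    print(row["facility"], row["avg_handle_time"])
```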
“It’s about the fluidity and the ever-changing nature of data,” Buscemi said.
A major data center expansion
Part of the engineering effort for Wave was adding a lot of capacity in Salesforce’s data centers. “This is a huge, huge expansion of all our data centers,” she said. “A lot more pods and much more capacity.”
A pod at Salesforce is a standardized package of hardware and the minimum increment by which the company expands IT capacity. According to Buscemi, the company used the same standard pod architecture to add capacity in support of Wave as it uses for other services. “That’s kind of core to the whole model [at] Salesforce,” she said.
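To make the pod idea concrete, the toy sketch below sizes an expansion in whole-pod increments. The rack and power figures are made-up assumptions, not Salesforce’s actual pod specification.

```python
# Toy illustration of "pod" sizing: capacity is added only in whole-pod
# increments. The pod specs below are made-up numbers, not Salesforce's.
import math

POD_RACKS = 40          # hypothetical racks per standard pod
POD_POWER_KW = 400      # hypothetical critical power per pod

def pods_needed(required_racks, required_kw):
    """Return the number of standard pods needed to cover both constraints."""
    by_racks = math.ceil(required_racks / POD_RACKS)
    by_power = math.ceil(required_kw / POD_POWER_KW)
    return max(by_racks, by_power)

print(pods_needed(required_racks=130, required_kw=1500))  # -> 4
```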
Two-tiered subscription pricing
The company has not disclosed pricing for Wave, but Buscemi said it will charge for it on a subscription basis like it charges for other products. There will be two types of licenses, explorer and builder, that will be priced differently. Detailed pricing information will be available when the product goes into general availability on October 20, she said.
Key Components of Data Center Optimization
Prashant Baweja is currently working as a consultant with Infosys Ltd.
With increasing competition and high expectations from customers, businesses are working diligently to provide the best possible value at the lowest possible price for their customers. One way to achieve this is by optimizing the costs a company incurs across its business units.
IT infrastructure makes up a large part of a company’s costs, and the biggest chunk can be traced to the data center. As those costs rise, effort and focus need to go toward optimizing the data center. Companies that place an emphasis on data center optimization will see a variety of benefits, including:
- Increased ROI
- Increased infrastructure utilization, increased virtualization and storage efficiency
- Lower operating cost
When looking at ways to optimize a data center, there are a few key areas of consideration.
Data center location. A well thought out location will help reduce capital and operational costs for businesses. There are a few factors to take into account when determining your data center site:
- Geographical location
- Electricity
- Telecommunication infrastructure
- Tax rates
- Construction
- Human resources
Data center operations: in-house or outsource. You can keep your operations in-house or choose to outsource them; there are pros and cons to both. In recent years, however, the trend toward outsourcing operations has increased because of its cost effectiveness.
Another aspect of operations which helps data center optimization is the introduction of Data Center Infrastructure Management (DCIM) tools. With these tools, redundancy can be avoided and better value can be achieved.
Scalability, growth plans. Scalability and growth are important factors when determining data center and IT strategies. Without proper planning and consideration, optimization will be difficult. Modularity goes hand-in-hand with scalability and, when applied to data center facilities, can reap huge benefits for the business.
Data center disaster recovery strategy. Data center disaster recovery hasn’t always been associated with optimization. However, because these sites sit idle for long periods of time, they present a great avenue for firms to optimize their costs.
Storage and backup strategy. Storage options and your backup strategy can play an important role in reducing costs for your business. Retention of data and archiving are major areas where optimization can be done and better value can be provided to customers.
Data center energy consumption. Energy consumption in your data center is also a key consideration. For example, how efficiently your building is cooled will have a huge impact on cost. And it’s not just the facility – remember to consider the efficiency of the devices within.
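To see why cooling efficiency matters so much, here is a back-of-envelope calculation using PUE (power usage effectiveness, the ratio of total facility power to IT power). All of the figures below are illustrative assumptions, not data from any particular facility.

```python
# Back-of-envelope PUE and energy-cost estimate; the figures are illustrative.
IT_LOAD_KW = 1000           # power drawn by servers, storage, network gear
FACILITY_OVERHEAD_KW = 600  # cooling, power distribution losses, lighting
PRICE_PER_KWH = 0.08        # illustrative utility rate in $/kWh

pue = (IT_LOAD_KW + FACILITY_OVERHEAD_KW) / IT_LOAD_KW        # 1.6
annual_kwh = (IT_LOAD_KW + FACILITY_OVERHEAD_KW) * 24 * 365   # total annual draw
annual_cost = annual_kwh * PRICE_PER_KWH

print(f"PUE: {pue:.2f}")
print(f"Annual energy cost: ${annual_cost:,.0f}")
# Improving cooling so overhead drops to 400 kW cuts PUE to 1.4 and saves
# roughly 200 kW * 8760 h * $0.08, about $140,000 per year.
```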
Human resources. A major factor to consider (and one that is often overlooked by businesses) is human resources. The people involved in monitoring, servicing, maintaining and operating the data center can be key to identifying the points from which errors arise.
Additionally, proper training and education need to take place in order to facilitate and solve problems faster. Outlining processes and creating a proper knowledge repository will increase efficiency and keep costs down.
IT operations. Another crucial area to consider is application provisioning, roll-out and support processes at the corporate level. Tweaking these areas and making them more efficient will optimize infrastructure usage and positively impact your data center.
Above are a few areas of consideration when looking to optimize your data center. A well thought out plan and a thorough discussion with stakeholders should provide optimal results to your company.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Data Center Trends Featured at Orlando Data Center World
Keeping up with the ever-evolving technology and facilities trends in the data center industry can sometimes be challenging. To that end, data center managers and staff attend industry conferences to gain critical knowledge.
This year, Data Center Knowledge is helping with that effort by curating a group of sessions on the latest trends at the Orlando Data Center World event that begins next week.
Since August, we have reported on some of the featured trends by talking with panelists and speakers. In case you missed the posts, here’s a recap. Enjoy!
Register Now for Orlando Data Center World
Orlando Data Center World kicks off on October 19, featuring sessions, networking and more, including the 20 industry trends curated by Data Center Knowledge. Check out the conference and register at the Orlando Data Center World conference page.
QTS Brings Mega Data Center in Dallas-Fort Worth Online
QTS Realty Trust is officially opening its latest mega data center in the Dallas-Fort Worth market this week. The facility is a retrofitted former semiconductor plant in Las Colinas, between Dallas and Fort Worth, which the company bought in early 2013. Extensive work has gone into repurposing it into a 700,000-square-foot, 140-megawatt behemoth of a data center.
QTS pre-sold about 26,000 square feet in the facility before opening the doors, much of it to existing customers on both coasts looking for a location in the middle of the country, company officials said.
Rich in fiber-optic infrastructure, the Dallas-Fort Worth area is a key Internet hub, which makes it a desirable location to colocate in. It also has power rates that are lower than the national average.
QTS continues to diversify its regional mix with the new data center. Once primarily known for its massive facility in Atlanta, its success was tied too closely to that market alone. But recently, the provider has been diversifying its footprint using the same model in various key markets.
“Our mega data center strategy combined with our redevelopment strategy has us excited by this asset,” said QTS Chief Operating Officer, Operations and Development, Jim Reinhart. “It’s an infrastructure-rich facility with great power and fiber. It’s also possible to double the size to 1.4 million square feet down the road with enough critical power to support.”
The company likes to acquire extremely large buildings with good power infrastructure for pennies on the dollar and refurbish them as premier data centers. In addition to Dallas, it bought the former Sun-Times Plant in Chicago and McGraw Hill’s former New Jersey data center.
It entered northern California in 2013 with the acquisition of Herakles and its 92,000-square-foot facility in Sacramento.
The company now owns and operates close to 4 million square feet of data center space.
Differentiating in a crowded market
The Dallas data center market is diverse. In downtown Dallas there is the Infomart (now Infomart Data Centers, following its merger with Fortune Data Centers). Most providers are just north of Dallas, while the new QTS facility sits just north of them.
Other providers in the wider DFW metro include Digital Realty, CyrusOne, Equinix, T5, Databank, retail colocation startup Patronus, and several others. Cloud providers Rackspace and IBM’s SoftLayer both have roots in the area.
QTS leadership hopes the sheer size of the data center and broad product capabilities will be two points of differentiation for it in the crowded Dallas data center market. Resting between Dallas and Fort Worth and close to DFW airport, it will appeal to both major business hubs, they said.
QTS offers clients hybrid environments. The product portfolio is called the 3 Cs, comprising everything from cloud and managed services to wholesale deals over 10 megawatts.
“Customers don’t have a one size that fits all when it comes to infrastructure,” said Dan Bennewitz, QTS chief operating officer, sales and marketing. “We can accommodate any need. Customers can see the pathway to whatever their business model has in store.”
A separate organization devoted to compliance allows it to go after specific verticals like healthcare and financials. QTS is currently in the FedRAMP certification process, which will increase its appeal in the federal space, helping the Dallas and Richmond facilities in particular.
Startup Zoomdata Raises $17M to Push Real-Time “Sharpening” Approach to Big Data
Big data analytics startup Zoomdata announced a $17 million Series B funding round led by Accel Partners. The two-year-old Reston, Virginia-based startup specializes in providing what it describes as extremely fast big data exploration and visualizations.
The company has patented a big data technology it describes as real-time data “sharpening,” which provides an instant approximated sketch of query results and then streams continuous updates to the visualization.
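Zoomdata’s patented implementation is not public, but the general idea of progressive refinement can be sketched as follows: return a rough aggregate from an initial sample immediately, then keep improving the estimate as more data is scanned. The sketch below is a generic illustration of that pattern, not the company’s algorithm.

```python
# Generic sketch of progressive refinement ("sharpening" in spirit only):
# emit a rough aggregate from a small sample right away, then refine it
# as more of the data streams in.
import random

def sharpened_average(values, chunk_size=10_000):
    """Yield successively better estimates of the mean as data is scanned."""
    shuffled = list(values)
    random.shuffle(shuffled)            # random order so the first chunk is a fair sample
    total, count = 0.0, 0
    for i in range(0, len(shuffled), chunk_size):
        chunk = shuffled[i:i + chunk_size]
        total += sum(chunk)
        count += len(chunk)
        yield total / count             # estimate sharpens with each chunk

data = [random.gauss(100, 15) for _ in range(100_000)]
for estimate in sharpened_average(data):
    print(f"current estimate: {estimate:.2f}")   # could drive a live-updating chart
```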
Existing investors from Zoomdata’s $4.1 million Series A in 2013, including NEA, Columbus Nova Technology Partners, Razor’s Edge Ventures and B7m, joined the latest round. The company said it will use the funds to expand its sales and marketing teams and accelerate product development.
Its big data technology comes as part of a platform that includes a suite of pre-built connectors to popular data sources, such as Hadoop, Cloudera Impala, Redshift, MongoDB, Apache Spark and many others. Zoomdata says it has 20 enterprise customers, including a major accounting firm that uses its product to give customers real-time access to interact with and explore the latest retail pricing trends from a MongoDB datastore.
On the presentation side Zoomdata visualizations are offered in an HTML5 touch-first interface, where the company says users can explore billions of records in seconds via a web browser on PCs, smartphones and tablets.
Zoomdata founder and CEO Justin Langseth said, “With the rapid adoption of modern Hadoop, NoSQL and Spark datastores, we saw an opportunity to disrupt the legacy market for enterprise and embedded reporting, dashboarding and analytics with a powerful visual platform designed for the business user. We are making it faster and easier to interact with big data through two key innovations: our patented micro-query technology, combined with our stream processing engine, allows Zoomdata to render big data into compelling visual views within seconds, tapping directly into both historical and real-time data across both legacy and modern data stores.”
Seagate Leads $15M Series B for Hybrid Storage Startup Reduxio
Israeli hybrid storage startup Reduxio announced it has completed a $15 million Series B financing round led by Seagate. Existing investors Intel Capital, Carmel Ventures, and Jerusalem Venture Partners joined the round.
Reduxio’s storage operating system actively distributes data across multiple storage tiers, and the company has patents pending for inline deduplication, compression, and data recovery. The company’s Tier-X hybrid storage system integrates SSDs and disks into a multi-tier storage pool and uses algorithms to identify and automatically relocate blocks of data between tiers to optimize for performance.
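Reduxio has not published the details of its tiering algorithms, but a generic heat-based policy of the kind described above can be sketched as follows: track access counts per block and keep the hottest blocks on flash. The capacity figure and data structures are illustrative assumptions, not the Tier-X implementation.

```python
# Generic sketch of heat-based tiering: hot blocks migrate to flash,
# cold blocks stay on disk. Not Reduxio's actual algorithm.
from collections import Counter

SSD_CAPACITY_BLOCKS = 1000      # hypothetical flash-tier capacity

class TieringPolicy:
    def __init__(self):
        self.access_counts = Counter()   # block_id -> recent access count
        self.on_ssd = set()              # blocks currently on the flash tier

    def record_access(self, block_id):
        self.access_counts[block_id] += 1

    def rebalance(self):
        """Keep the most frequently accessed blocks on the SSD tier."""
        hottest = {b for b, _ in self.access_counts.most_common(SSD_CAPACITY_BLOCKS)}
        promote = hottest - self.on_ssd      # move up to flash
        demote = self.on_ssd - hottest       # move down to disk
        self.on_ssd = hottest
        return promote, demote
```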
Reduxio CEO Mark Weiner was previously CEO at Exanet, a storage vendor that was acquired by Dell. Prior to that he led StorAge Networking, which was sold to LSI.
John Williams, who was recently appointed as president at Reduxio, came from NetApp, prior to which he held various roles at F5 Networks and 3Com.
“This investment underscores our continued commitment to further strengthen our position as a storage solutions leader from components to systems to services,” Rocky Pimentel, president of the Global Markets and Customers division at Seagate, said in a statement. “Reduxio has built an architecture that can truly leverage the capabilities of hard disk drives, solid state storage, and future non-volatile technologies together in a single system. We believe that systems that successfully integrate multiple media types can deliver compelling price/performance and reliability benefits and will have a unique position in the market.”
SDN Startup Fiber Mountain Emerges from Stealth
Networking startup Fiber Mountain emerged from stealth at Interop earlier this month, introducing a suite of optical switch technologies.
The Cheshire, Connecticut-based company refers to its technological approach as “Glass Core” — software-controlled fiber optic connectivity that emulates benefits of direct-attached connectivity from any port to any other port in the data center, be it on a server, a storage device, a switch or a router.
Fiber Mountain founder and CEO M.H. Raza said the technology is different from other software-defined network solutions in that it provides a physical fiber path between end points rather than a virtual one. Raza describes it as connectivity virtualization: fiber-optic cables that can be programmed in software.
The Glass Core is accompanied by the vendor’s Alpine Orchestration System (AOS), a single-pane intelligent software application that can discover and visually present every device and connection within a data center network. With AOS, the network layer allows Programmable Light Paths (PLPs) to be created, connecting any two points across the glass core, whether at 10 Gbps, 40 Gbps or 100 Gbps.
Fiber Mountain’s claim to cost savings comes from using PLPs instead of passing packets from switch to switch, thus eliminating the wasted processing power and cooling capacity. Instead of sending all traffic to a core switch and back down, which adds to latency, its approach avoids as much packet processing as possible.
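Conceptually, a programmable light path behaves like a software-managed patch panel: the controller records a direct port-to-port cross-connect instead of forwarding packets hop by hop through a core switch. The sketch below illustrates that idea only; the class and port names are hypothetical, and this is not Fiber Mountain’s AOS API.

```python
# Conceptual sketch of a "programmable light path": the controller keeps a
# port-to-port cross-connect map instead of hop-by-hop packet switching.
class OpticalCrossConnect:
    def __init__(self):
        self.paths = {}    # port -> port, programmed in software

    def create_light_path(self, port_a, port_b):
        """Patch two endpoints straight through the glass core."""
        if port_a in self.paths or port_b in self.paths:
            raise ValueError("port already in use")
        self.paths[port_a] = port_b
        self.paths[port_b] = port_a

    def tear_down(self, port):
        """Release both ends of an existing light path."""
        other = self.paths.pop(port)
        del self.paths[other]

fabric = OpticalCrossConnect()
fabric.create_light_path("server-42:eth0", "storage-07:fc1")  # no core-switch hop
```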
The possibilities expand even further once adoption of silicon photonics in servers increases.
“For too long, the answer to exponentially expanding network traffic caused by bandwidth-hungry applications has been bigger, more complex hardware in the data center,” Raza said. “This has resulted in huge capital expenditures and operational issues related to size, complexity and power consumption of the data center’s expanding architecture.
“This is simply not sustainable. Fiber Mountain has taken an approach to helping enterprises simplify the network, and manage growth through software-based intelligence accompanied by a reduction in hardware proliferation, allowing data centers to reach hyperscale at a fraction of the cost.”
Fiber Mountain offers a series of physical-layer fiber products that make up the fiber optic connectivity fabric. The AllPath SDN devices are primary to the Glass Core topology, and the Alpine Connect fiber products comprise cross-connect panels, standard modules, and custom modules.
Fiber Mountain offers an AOS turn-key appliance as well, with integrated AOS application software. The company says its architecture will also work with white box top-of-rack switches.
Explaining the Community Cloud
Cloud computing continues to steamroll ahead as more organizations adopt the platform, but to be clear, what we’re really seeing is an increase in how organizations are utilizing the Internet. Marketing terms aside, there are more resources out there, better underlying systems for support and a greater need to distribute data.
When cloud computing first emerged, we had some distinct models to work with. Hybrid became the more popular option among enterprises. But the challenge was connecting a public and private instance together to form a robust and secure cloud environment. Now, technology has come far enough that creating that link is much easier. There are direct partnerships between private cloud companies and public cloud providers to create more secure and robust connections. This unification of technologies through APIs, connectors, and virtualization has created more markets and services around cloud delivery methodologies.
There is clear growth in the amount of traffic being pushed through the modern data center. Why? The user, the business, and most of all the technology have all evolved. The current generation revolves around a new “on-demand” lifestyle where workloads and information must be available anytime, anywhere and on any device. Mobility has become the new normal, and cloud computing is the engine to deliver all of this next-generation content.
The community cloud
As the cloud has grown up, three major models have arisen:
- Public Cloud.
- Hybrid Cloud.
- Private Cloud.
Got those three? Good. Because now there is a fourth option gaining some traction in the IT world. Several organizations have begun looking at and working with community cloud platforms. Think of it as a public cloud environment, but with the security, privacy, and even regulatory compliance of a private cloud.
A community cloud is a multi-tenant platform which allows several companies to work on the same platform, given that they have similar needs and concerns.
- One example of using a community cloud would be to test-drive some high-end security products or even test out some features of a public cloud environment. This is great for organizations that are driven by compliance and regulatory measures. Government, healthcare, and some regulated private industries are leveraging the added security features within a community cloud environment. Instead of just provisioning space in a public cloud, organizations can test and work on a cloud platform which is secure, “dedicated,” and even compliant with certain regulations. The really interesting part is that with a community cloud, the presence can be either onsite or offsite.
- Or, as another example, several organizations may require a specific application that resides on one set of cloud servers. Instead of giving each organization its own server in the cloud for this app, the hosting company allows multiple customers to connect into its environment and logically segments their sessions. Each customer is still using the same hardware as everyone else, but everyone is hitting these servers with the same purpose — to access that one application — which is what makes it a community cloud (see the sketch after this list).
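A toy sketch of that logical segmentation: every request is scoped to the calling tenant, even though all tenants share the same servers and the same application. The record shapes and tenant names below are hypothetical.

```python
# Toy illustration of logical segmentation on shared infrastructure:
# all tenants share the hardware, but every query is scoped to one tenant.
RECORDS = [
    {"tenant": "hospital-a", "patient": "1001", "status": "admitted"},
    {"tenant": "hospital-b", "patient": "2002", "status": "discharged"},
]

def fetch_records(tenant_id):
    """Return only the rows belonging to the requesting tenant."""
    return [r for r in RECORDS if r["tenant"] == tenant_id]

print(fetch_records("hospital-a"))   # hospital-b's data is never visible
```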
The reality here is that as technology and cloud-based tools expand, there will be more uses for some type of cloud-hosted architecture. Several large cloud providers have already created some type of community cloud offering. There are small and big benefits to working with a certain type of cloud model. The bottom line is that the diversity in cloud computing offerings allows organizations and engineers to find pieces of the cloud that can help enable their business and practice.
How to Shut Down a Legacy Service without Losing Customers 
This article originally appeared at The WHIR
In the context of a competitive fight over market share, it was an understandable strategic decision by EMC to shut down its Atmos cloud storage service back in 2010. EMC has survived and thrived, and in 2014 a series of companies made the same decision. Cloud storage service closures were announced in March, April, and June by AVG, Canonical, and Dell respectively.
Closing down a legacy service means turning down revenue, and if customers are frustrated during the process of migrating off a service that is being shut down, the reputation of the company’s core, continuing services could suffer.
This process is hardly new, however, and some well-established hosting and network service companies have been through it, and come out stronger. In fact, most of the longest established companies have dropped services along the way.
“In the late 90s it was kind of all things to all people,” New York Internet CEO Phil Koblence says in an interview with the WHIR. Back then NYI provided a range of services related to development, email, and even hardware procurement, which no longer make sense in its product portfolio.
“You can’t be in this business 18 years without having transitioned a lot…the only thing that’s the same is the person on the other end of the phone.”
A Range of Benefits
If closing a service means turning down money and taking on risk, then why do it? One reason is to eliminate the cost of providing legacy services, which can increase even as the service moves towards obsolescence.
“The goal is not only to offer relevant services to customers, but also making sure that we don’t bog down our technical support staff with some of these legacy services that while they might be noisy on the help desk, don’t necessarily provide our core customers with the solutions that they need,” Koblence says.
He gives the example of a managed email service formerly offered by NYI, which made sense in the past, but is no longer relevant to the managed colocation and large-scale managed hosting customers it serves.
The core customers of AVG, Canonical, and Dell likewise probably did not rely on those companies’ cloud storage services the way they do on their core offerings. These three companies will likely not miss their cloud storage divisions, and their customers won’t either, if their transitions are properly taken care of.
Putting Murphy’s Law into Practice
Coordinating a transition on a custom time frame between a legacy service provider, a new service provider, and a customer is bound to be a drawn-out process, Koblence says. He has learned from experience to expect the unexpected.
“What we try to do is offer an almost limitless transition period because everything is reputation in this business.” That transition period includes “months and months of notice periods and access to both systems,” he says.
Dell is using an extended notice period to prepare customers for the closure of its DataSafe service. The closure was announced in June 2014, but will take place officially in June 2015. Canonical’s April announcement gave customers until the end of July to retrieve their data, a lead of four months. AVG gave five months when it closed LiveKive.
The amount of notice necessary to maintain positive customer relations will vary with the provider, service, and customer. The other elements of the transition will vary similarly, making for what Koblence describes as a painstaking process.
“The same is true for eliminating services or transitioning services in any way,” he says. “There’s a lot of psychology that goes into it in terms of basically coaching the customer through the transition process.”
Reputation Uber Alles
“If the goal of these transitions is to make sure that the customer is left with a better experience overall; it’s important that we make sure that the logistics of the migration is not something that makes them uncomfortable,” Koblence says.
All those pains are worth taking if doing so means maintaining a strong service reputation while shedding an unprofitable service. The weakest service in a company’s portfolio may affect customers’ perception of its core services. Assisting customers in transitioning to a provider that specializes in that specific service can improve the overall customer experience.
Reputation can not only be protected by making a transition comfortable, it can be grown into brand strength with the increased focus on core services. When Cloudbees announced its closure of the RUN@cloud PaaS in September, it simultaneously rebranded as “the Enterprise Jenkins Company” to reflect its commitment to the open source continuous delivery software.
As demand for continuous delivery services increases, Cloudbees is positioned to leverage its experience, focus, and branding related to Jenkins, but it will have to ensure smooth transitions for the few hundred customers affected by its closure of RUN@cloud.
After all, Koblence reminds us: “you’re only as good as your last tech support experience.”
This article originally appeared at: http://www.thewhir.com/web-hosting-news/shut-legacy-service-without-losing-cusotmers
EMC Buys OpenStack Cloud Builder Cloudscaling
EMC has acquired Cloudscaling, a company that sets up private OpenStack clouds on hardware of customers’ choice in their own data centers, with integration into major public clouds such as Amazon Web Services and Google Compute Engine.
OpenStack is the most popular open source cloud architecture, and EMC is the latest incumbent IT vendor to join the ecosystem in a major way. Until now, its OpenStack play consisted of support for the architecture in its ViPR storage management software.
Vendors like HP, IBM, Cisco and Oracle already have wide-ranging cloud strategies that revolve around OpenStack.
EMC spokesman Dave Farmer said the company made the acquisition to increase the variety of cloud technologies it supports. “To further extend our breadth of cloud platform support, including VMware and others, EMC has signed a definitive agreement to acquire Cloudscaling,” he wrote in an email.
Cloudscaling’s solution is targeted at enterprises. The company loudly advertises the fact that its private clouds sit behind customers’ firewalls, which is appealing to security- and compliance-conscious enterprises.
As such, it is a competing product to the proprietary cloud technology by VMware, in which EMC owns an 80-percent stake. VMware is going after the enterprise cloud market in a big way, and its hybrid cloud offering, called vCloud Air, competes directly with the type of hybrid cloud infrastructure Cloudscaling is pushing.
That EMC has made an acquisition in the OpenStack space regardless of VMware’s cloud strategy speaks to the level of importance the open source cloud technology has in the eyes of major enterprise IT vendors.
There also have been rumors that EMC leadership was considering parting with its stake in VMware.
EMC did not disclose the size of the Cloudscaling acquisition, but Bloomberg reported it was less than $50 million, citing anonymous sources. Farmer said the deal was “in the final run-up to closing.”