Data Center Knowledge | News and analysis for the data center industry

Monday, July 14th, 2014
ByteGrid Buys NetRiver, Expands to West Coast, Hooks Clients With Incentives on East Coast

Data center developer and service provider ByteGrid has bought data center provider NetRiver, whose 45,000 square foot data center in Lynnwood, Washington (just outside of Seattle), will be its first property on the West Coast.
The deal, whose terms were not disclosed, also gives the provider — which until now has operated data centers only in the Washington, D.C., Atlanta, Cleveland and Chicago markets — a managed services capability it plans to extend across all of its locations.
McLean, Virginia-based ByteGrid generally goes after markets that are neither first-tier nor second-tier. “We’re focused on markets that are underserved — not necessarily Tier 2 markets — but those with high demand,” the company’s CEO Ken Parent said.
He refers to them as “mid-market” areas.
Seattle is a fairly active data center market with a high concentration of high-tech companies, including Microsoft, headquartered in nearby Redmond, and Amazon. Data center providers serving the region include Equinix, Digital Realty Trust, Internap and Zayo, among others.
The biggest wholesale data center developer and provider in the region is Seattle-based Sabey Data Centers.
According to ByteGrid, NetRiver is a dominant player in the “north end of Seattle.” In addition to colocation and managed services, it provides cloud hosting. It offers its colocation customers metered power billing and a choice from a variety of power redundancy levels.
As part of the deal, ByteGrid acquired leasehold rights of a smaller data center in Spokane, Washington (about 280 miles east of Seattle), which serves as a disaster-recovery base for Seattle customers.
ByteGrid said it will expand capacity within the 4.5 megawatt NetRiver data center and has a long-term plan to expand on other portions of the property.
Maryland introduces incentives for data center customers
As ByteGrid tackles the opposite coast, it also announced a potential boost to its East Coast business: the state of Maryland has created a new economic development fund that will provide up to $12 million in energy rebates for data center customers. The fund is intended to attract new businesses to Montgomery County, where ByteGrid operates a data center.
Steve Silverman, county director of business and economic development, said the program was structured to draw companies that will generate revenue. “When these companies come in, it produces personal property tax.”
Leveling the playing field with Virginia
The aim is to make energy costs in Maryland more competitive with Virginia. Energy rates in Maryland are roughly one-third higher than in the neighboring state.
Maryland deregulated its energy market around 1999, along with many other states, hoping providers would come in and competition would drive down the cost of energy.
“Ironically, Virginia, viewed [as] more business friendly, regulates energy supply,” said Silverman. “The states that regulate have been able to keep power costs down.”
ByteGrid’s 214,000 square foot data center in Silver Spring, Maryland, is just outside of Washington, D.C. The campus has 92,000 square feet of data center space, an 80,000 square foot hard shell next to the main facility for custom build-outs and a tape storage facility for long-term storage, which is important for enterprises that regulators require to retain data for long periods.
Large defense contractor Lockheed Martin is one publicly disclosed customer. Another is the U.S. Department of Labor, which consolidated its infrastructure into the facility.
Parent said the provider just landed a new high-profile financial services firm as another tenant.
“We are extremely pleased to now offer this incentive grant to our data center customers in Silver Spring, making our colocation and enterprise data center offerings even more attractive,” he said.
Five Most Interesting Things for Data Center Pros in Box’s Pre-IPO Filing

There are always a few interesting data center tidbits in the documents high-profile firms hand to the SEC before they go public. Digging through those documents to find them, however, is not something a data center manager or an IT ops person necessarily has time to do, so we do it for you. The biggest upcoming IPO everybody in the high-tech world is talking about is Box’s, so here they are: the five things in the cloud storage and collaboration firm’s recent SEC filing that data center pros would find most interesting (well… at least the five things we found most interesting):
1. Data center capacity planning is a real headache for Box. Right-sizing the infrastructure so that you don’t spend too much but can still handle a growth spurt is an age-old problem in the data center world, and Box is not an exception.
It just so happens that CEO Aaron Levie and CTO Dylan Smith chose a business that is so capital-intensive that it’s extremely difficult to have a healthy profit margin (as we mentioned in last week’s coverage of Box’s recent $150 million funding round). Much of that capital goes to hardware and the data center resources to house it, and any miscalculation in capacity planning will have a sizable impact on operating results.
In the company’s own words: “If we overestimate the demand for our cloud-based storage service and therefore secure excess data center capacity, our operating margins could be reduced. If we underestimate our data center capacity requirements, we may not be able to service the expanding needs of new and existing customers and may be required to limit new customer acquisition, which would impair our revenue growth.”
The problem here is that right now just a few customers use Box to organize all of their files. Cloud storage and collaboration tools, however, are a relatively new way of doing things for enterprises, and it is very possible that more of them will look to Box for a complete content storage solution. While that would be welcome news for a company like Box, an inability of its infrastructure to absorb the growth would be disastrous for the business.
2. Box does not own any data centers. Partially because it needs the flexibility to expand capacity quickly when it needs to, Box’s current data center strategy relies entirely on commercial data center providers: “Colocation allows us to quickly expand capacity geographically using data centers and networking providers.” The company does, however, either own or lease all the servers, networking and storage gear it houses in those data centers and uses only its own employees to manage the infrastructure.
There are two primary Box data centers in northern California and a disaster recovery site in Las Vegas (3.6 megawatts total across the three sites). The company has not made a secret of the fact that its primary data center provider is Equinix (a copy of a contract with the provider, included in the package of documents submitted to the SEC, is further proof).
Another data center provider contract on the list is with Switch Communications, the operator of massive data centers in Las Vegas and more likely than not the provider that hosts Box’s disaster recovery site.
3. Box has been able to beat its uptime SLA. At the disaster recovery site in Las Vegas, the company uses a third-party storage backup solution. But that is only one part of the meticulously engineered infrastructure that delivers more uptime than the company promises to clients. Box’s uptime SLA is 99.90 percent, and its average monthly uptime for the 12 months ending with January of this year was 99.93 percent.
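To put those percentages in context, a quick back-of-the-envelope calculation (illustrative only, assuming a 30-day month) shows how much downtime each figure allows:

```python
# Illustrative: downtime permitted per 30-day month at a given uptime percentage.

def monthly_downtime_minutes(uptime_pct, days=30):
    """Minutes of downtime permitted in a month at the given uptime percentage."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (1 - uptime_pct / 100)

sla = monthly_downtime_minutes(99.90)     # Box's contractual SLA
actual = monthly_downtime_minutes(99.93)  # Box's reported 12-month average

print(f"99.90% SLA allows {sla:.1f} min/month of downtime")
print(f"99.93% actual implies {actual:.1f} min/month")
```

In other words, the SLA permits roughly 43 minutes of downtime a month, while Box’s reported average works out to about 30 minutes.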
The design uses a redundant network infrastructure and server clusters designed in a way that ensures a cluster stays up even if individual nodes fail. The servers are deployed in high-availability pairs, and the entire environment is replicated at the disaster recovery site. “Customer files are backed up to another location that would be out of any disaster zone due to proximity,” the company’s reps write in the SEC documents.
To protect against attackers, Box uses Secure Sockets Layer (SSL) encryption for data in transit and 256-bit Advanced Encryption Standard (AES) for files at rest. It provides customer IT admins with detailed logs of uploads, downloads, previews and deletions of their content, with all the associated metadata.
4. Third-party-verified performance lead over competition. In addition to those three sites, Box has a number of edge locations elsewhere around the U.S. and around the world hosted by Equinix. In 2012, it announced an expanded relationship with the provider, adding locations in Chicago, Ashburn, Amsterdam, Sydney, Hong Kong and Tokyo. These locations made up what the company calls the Box Accelerator, providing faster service to customers in the surrounding regions.
Box has since added Portland, New York, Sao Paulo and Singapore to the Accelerator network. Stefan Apitz, senior vice president of operations at Box, told ZDNet that wherever Equinix does not have a data center, the company uses Amazon Web Services to place a Point of Presence for the Accelerator network.
 Box’s global Accelerator Network lives in Equinix and Amazon data centers. (Image: Box)
Neustar, an Internet analytics company, has validated Box’s claims that the network provides 2.7-times-faster average upload speeds than the closest competitor across all locations. It’s hard to say who Box considers to be its closest competitor, but the other leaders in the space are Citrix, EMC and Dropbox, to name a few.
When a customer uploads a file, Box’s proprietary routing technology, which runs on each node in the infrastructure, finds the fastest path for the data to take.
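Box has not published how that routing technology works. A minimal sketch of one common approach — steering each upload to the entry point with the lowest measured latency — might look like this (the node names and latency figures below are hypothetical, not Box’s):

```python
# Hypothetical sketch of latency-based entry-point selection, in the spirit of
# what an upload-acceleration network might do. Real systems would measure
# round-trip times continuously; here the measurements are hard-coded.

def fastest_path(latency_ms_by_node):
    """Return the node with the lowest measured latency to the client."""
    return min(latency_ms_by_node, key=latency_ms_by_node.get)

# Hypothetical measurements from one client to three Accelerator nodes:
measurements = {"tokyo": 42.0, "hong-kong": 35.5, "sydney": 110.0}
print(fastest_path(measurements))  # hong-kong, the lowest RTT of the three
```

Production routing would weigh more than raw RTT (load, packet loss, link cost), but latency-based selection is the core of the idea.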
5. Schneider Electric is one of Box’s biggest customers. As of the end of January, Box had 25 million users and provided services to more than 225,000 organizations, but one customer that would be of particular interest to someone in the world of data centers is Schneider Electric. One would be hard pressed to find a data center that does not use at least some Schneider gear.
The French energy management behemoth has nearly 70,000 employees using Box — a deployment that has grown from the initial 2,000-person one in 2012. Schneider stores more than 20.2 terabytes of data on Box in a centrally managed, collaborative content store. Content is shared internally across divisions and numerous subsidiaries as well as externally with partners and customers.
The Role of Product Leadership in Data Center Strategy

Callum Wallace is a consultant in the TMT practice at Berwick Partners (an Odgers Berndtson company).
Rarely a day goes by in the data center sector without a new vendor springing up in the market or an existing business expanding. The reasons are clear: the market is buoyant and there is money to be made.
As a Search Consultant, I regularly work with data center teams and investors to help shape their leadership teams as they embark on new strategies or ventures. I have been lucky to work with a range of organizations from investors to startups and beyond.
I have observed that what really makes companies stand out, and ultimately succeed, is the quality of their product. Unfortunately, product management and marketing are often overlooked when building teams in the data center sector.
Build it, they may not come
Conventional thinking dictates that it is very difficult to differentiate a data center offering (outside of wholesale/retail colocation). The strategy often boils down to finding a site with good location, connectivity and power, and then executing the selling of that space better than your competitors – ‘whoever shouts loudest wins.’
However, the proliferation of data centers and the desire for clients to consume the services in a myriad of ways means that you can no longer follow the “build it and they will come” strategy.
Regardless of sector, all customers require minimum levels of security, flexibility, price and uptime. Once those minimums are met, clients’ primary needs can differ quite significantly:
- Media Sector: scalability, flexibility, agility
- Public Sector: ultra-high level security (physical and virtual)
- Banks: ultra-low latency
- Charities: green credentials and value
The options are endless, and despite extensive crossover between sectors, building a ‘standard’ facility and selling one size fits all is becoming commercially challenging. Making the glove fit the hand therefore becomes the real key to long-term data center growth and profitability. This responsibility falls to the owner of the product.
Standing out in the crowd
Take Amazon Web Services (AWS), for example. It has a sales and marketing team that is undeniably good, but set against the extraordinary, exponential growth of its offering, that team is not spectacularly better than the market norm. The differentiator is simply that AWS had a significantly better offering than anyone else for its target market. While competitors built indistinguishable products and crack sales teams, Amazon’s product gurus swallowed the market through product vision and engineering wizardry.
Reinforcing the importance of the product
Data center vendors are getting into increasingly complex services and architectures such as on demand services, systems management, hardware (flash storage, converged infrastructure) and software (automation, DCIM, orchestration).
The broadening of the data center sector is reinforcing the importance of the product role in data center leadership teams, as the offering must factor in a highly complex matrix of available technology and client desires. The product owner should therefore interface with every part of the business to create a cohesive strategy and roadmap:
- CEO: who are our target markets, what do they want to buy, why do they want to buy it and can we deliver that?
- CFO: investment cost and revenue predictions (particularly important around annuity services).
- CMO: working to understand the target markets and delivering marketing strategies that appeal to those verticals. Exceptional vertical marketing (driven by product) is often the difference between opening an opportunity and winning/losing the deal.
- COO: can we build what our target customers need?
- Head of Sales: if we build this, can we sell it and how quickly?
Once a clear vision and roadmap has been defined, it becomes easy for the business to align behind this strategy. A clear and definable strategy will inevitably aid the on-going health of the business.
One size does not fit all
Larger companies may wish to hire a product director; for smaller companies this may not be possible. In that scenario I would advocate clearly assigning responsibility for the product to a single person and empowering them within the organization so they can be held accountable for delivering the roadmap and vision.
To be clear, I am not advocating that all data center vendors have to verticalize the market and build sector-specific solutions. Depending on location, the availability of cheap sustainable power and good connectivity, the best strategy may be to build a vanilla solution for all. This, however, should only be done if the product owner has reached that conclusion through deep scrutiny of the situation.
Ultimately, the sector is expanding at a rapid pace and demand continues to grow. Nevertheless, if your product or offering cannot be differentiated from competitors’, you are in an unenviable position.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Data Center Jobs: CBRE

At the Data Center Jobs Board, we have a new job listing from CBRE, which is seeking a Sr. Project Manager (Data Center and Labs) in San Jose, California.
The Sr. Project Manager (Data Center and Labs) is responsible for managing all facets of project management (budget, schedule, procurement, quality and risk) for individual real estate projects, including planning, design, construction, occupancy and closeout. The role requires the ability to read, understand and apply standard to complex documents affecting real estate projects, including but not limited to agreements/contracts, leases, work letters, project charters, surveys and drawings. The manager interfaces directly with clients to define project requirements and prepares the scope of work, project delivery resource requirements, cost estimate and budget, cash flow, work plan schedule and milestones, quality control, and risk identification.
To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
Microsoft Touts Partner Momentum, Launches Azure Certification for Software Vendors

Kicking off its Worldwide Partner Conference on Monday, which takes place this week in Washington, D.C., Microsoft reported continuing partner momentum and growth of its Azure cloud services business.
Azure is being used by more than 57 percent of the Fortune 500, and an average of 8,000 customers are signing up each week, the company said. By the end of the year, the cloud service will be available in 16 regions worldwide.
Microsoft also introduced a certification program called Microsoft Azure Certified for Virtual Machines, which gives software vendors the ability to label their products as certified to run on Azure and to list them in the Azure Gallery. Early program members include Oracle, SAP, Azul Systems, Bitnami, Riverbed Technologies and Barracuda.
“This new logo certification program will give Microsoft partners new opportunities to promote and sell their applications and services on Azure,” wrote Bob Baker, director of cloud and enterprise partner marketing at Microsoft.
Baker also mentioned that Microsoft Open Technologies’ VM Depot now hosts more than 1,100 open source machine images that can be deployed on Azure, and that the Azure Data Marketplace hosts over 800 applications. The library of images and the number of applications keep growing, making Azure a more flexible tool for partners to leverage when building out their portfolios.
The company also rolled out the Microsoft Cloud Solution Provider Program. Partners in this program will be able to directly provision customer subscriptions and provide one monthly bill for both their own and Microsoft services. They will also directly manage their customer subscriptions with in-product tools in the Partner Admin Center and own the technical support relationship.
The partnership conference demonstrates that Azure Cloud is integrating into the partner ecosystem quite nicely. These conferences used to be focused around SharePoint and Exchange resellers, but Microsoft is now enabling wider portfolios from infrastructure to apps as well as a way for partners to market their applications via the Azure marketplace.
InMage acquired for Azure business continuity
Last week, in the run-up to the conference, Microsoft announced the acquisition of InMage, a developer of disaster recovery systems whose technology will be used to bring better business continuity capabilities to Azure. Terms of the deal were not disclosed.
InMage will enhance backup, replication and quick data and application recovery capabilities in case of a system failure.
“This acquisition will accelerate our strategy to provide hybrid cloud business continuity solutions for any customer IT environment, be it Windows or Linux, physical or virtualized on Hyper-V, VMware or others,” wrote Takeshi Numoto, corporate vice president of cloud and enterprise marketing at Microsoft. “This will make Azure the ideal destination for disaster recovery for virtually every enterprise server in the world. As VMware customers explore their options to permanently migrate their applications to the cloud, this will also provide a great on-ramp.”
InMage Scout technology is being integrated into the Azure Site Recovery service. It collects data changes from production servers in memory, as they occur and before they are written to disk, and sends them to a software appliance called the InMage Scout Server.
This reduces I/O load on production servers and eases backup with granular recovery of data. Microsoft had already announced plans to enable data migration to Azure with Scout, and InMage brings in a lot of technology around managing data.
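The description amounts to change-data-capture: intercept writes as they happen, queue them in memory and forward them to a replication target off the production disk path. A rough, hypothetical sketch of the pattern (not InMage’s actual implementation):

```python
# Hypothetical sketch of in-memory write capture and forwarding, loosely modeled
# on the description of InMage Scout: changes are journaled as they occur and
# shipped to a replication target, keeping the production disk path untouched.

class ChangeJournal:
    def __init__(self):
        self.pending = []  # changes captured in memory, not yet shipped

    def capture_write(self, block_id, data):
        """Record a write as it happens, before it reaches disk."""
        self.pending.append((block_id, data))

    def ship_to_target(self, target):
        """Drain the journal to the replication target (a Scout-like server)."""
        shipped = len(self.pending)
        for block_id, data in self.pending:
            target[block_id] = data  # target stands in for the recovery-site copy
        self.pending.clear()
        return shipped

journal = ChangeJournal()
journal.capture_write(7, b"orders table page")
journal.capture_write(8, b"index page")
replica = {}
print(journal.ship_to_target(replica))  # 2 changes shipped
```

Because the capture happens in memory and shipping is batched, the production server never does extra disk reads for replication, which is the I/O saving the article describes.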
IBM SoftLayer: One Year After the Acquisition

IBM is one year into its SoftLayer acquisition, which has transformed its cloud offerings and helped form their backbone. IBM spent $2 billion to anchor its cloud portfolio and has since racked up clients, integrated numerous services and expanded its infrastructure footprint. Today, on the deal’s anniversary, IBM is launching a range of new enterprise cloud services based on the SoftLayer infrastructure.
IBM also detailed some of the new services in the works, some that have been integrated since the acquisition and a few targets for the rest of its multi-billion-dollar investment in growth of the cloud business. The sum total of achievements for the 12 months shows good progress.
Thousands of companies have migrated to SoftLayer in the past year. “Cloud is mainstream, large organizations are using it,” said George Karidis, COO at SoftLayer. “A few years ago it was the three guys with the dream; now it’s the largest banks in the world. They want the credibility of an IBM with the very reliable combination of the SoftLayer platform.”
Nearly half of IBM’s top 100 strategic outsourcing clients — including some of the world’s largest enterprises — are already implementing cloud solutions with IBM as they transition to a hybrid cloud model.
The company’s hybrid proposition means clients can maintain on-premises control of key applications and data while moving other workloads, like systems of engagement, to the cloud for quick access to data, expansion of new services and cost reductions. It means keeping secure back-end services alongside dynamic front-end services in the cloud, leveraging both security and agility.
New services on Bluemix and cloud marketplace
In addition to offering more than 300 services within the IBM cloud marketplace based on SoftLayer, IBM is releasing several new options via its Bluemix Platform-as-a-Service offering and the cloud marketplace:
- Watson Engagement Advisor allows organizations to gain timely and actionable insights from Big Data, transforming the client experience through natural conversational interactions with the system, which gets smarter the more it is used. Watson runs on IBM Power Systems, powered by its POWER8 processors, integrated into SoftLayer’s cloud infrastructure.
- Watson Developer Cloud on SoftLayer gives third-party developers, entrepreneurs, academics and system integrators access to Watson’s cognitive capabilities for the products and services they bring to market.
- Aspera high-speed transfer technology is now also available on SoftLayer, enabling users to move large unstructured and structured data quickly and securely. IBM acquired Aspera to speed big data file transfers in December 2013.
- Elastic Storage (a code name) is a new software-defined storage-as-a-service offering built on SoftLayer. It provides organizations with high-speed access to large volumes of data and seamless data management between on-premises infrastructure and the cloud.
- Jumpgate allows for interoperability between clouds by providing compatibility between the OpenStack API and a provider’s proprietary API.
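Compatibility layers like Jumpgate typically work by mapping standard API operations onto provider-specific ones. A toy sketch of the idea (the endpoint names and mapping table below are invented for illustration, not Jumpgate’s actual mappings):

```python
# Toy illustration of the API-translation idea behind a compatibility layer
# such as Jumpgate: accept an OpenStack-style operation and rewrite it as a
# provider-native call. The mapping table here is invented, not real.

OPENSTACK_TO_NATIVE = {
    "POST /v2/servers": "createInstance",
    "DELETE /v2/servers": "deleteInstance",
    "GET /v2/servers": "listInstances",
}

def translate(openstack_call):
    """Map an OpenStack-style call to a hypothetical provider-native operation."""
    native = OPENSTACK_TO_NATIVE.get(openstack_call)
    if native is None:
        raise ValueError(f"no translation for {openstack_call!r}")
    return native

print(translate("POST /v2/servers"))  # createInstance
```

The value of such a layer is that tools written against the standard API work unchanged; only the translation table knows about the provider’s proprietary interface.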
On the marketplace there are data and analytics offerings, as well as SoftLayer services such as multi-enterprise Relationship Management Software-as-a-Service, which connects and manages shared business processes across a variety of communities; Time Series Database, which connects applications to the Internet of Things; and Analytics Warehouse, which provides an agile platform for data warehousing and analytics.
SoftLayer has expanded hourly billing for bare-metal servers, bringing the pay-as-you-go benefits of virtual server consumption to dedicated resources. Bare-metal servers provide the increased performance and privacy that many enterprises desire.
Another addition is Cloud Modular Management, a fully automated service management system to help companies to more easily govern new cloud application environments. It enables companies to pick the services they want to manage on their own or have IBM manage for them.
Billions of investment in cloud
In addition to the $2 billion SoftLayer acquisition, the company has devoted:
- $1.2 billion to expand SoftLayer’s global data center footprint to 40 locations covering every major geography and financial center by 2015, opening additional SoftLayer facilities in Melbourne, Toronto and Washington, D.C., in the third quarter of 2014. London officially opened today, and the company recently added a location in Hong Kong. Karidis said more expansion is coming in Europe, with Paris and Germany in the fall, and there are also plans for Sydney, Brazil and China. “There’s great customer traction and they’ve invested more money than we could have,” said Karidis. “We have been expanding dramatically in the US, and adding capacity in all our US markets. We’re adding capacity in Dallas, San Jose, Seattle and Virginia.”
- $1 billion to launch a Watson business unit, with Watson running on SoftLayer
- $1 billion to establish a cloud PaaS (Bluemix) to help developers build cloud applications running on SoftLayer. IBM is extending its Bluemix developer platform on SoftLayer to more than 50 services and claims that the PaaS is now the largest deployment of Cloud Foundry, the open source project it is built on.
- The launch of the IBM Cloud marketplace and cloud acquisitions including Cloudant, Silverpop and Aspera.
The company says enterprises are moving core operations to IBM Cloud and SoftLayer. Some recent customers include Macy’s, Whirlpool, Daimler subsidiary moovel Gmbh and Sicoss Group.
In addition to new clients, more than 1,000 business partners have signed on to offer services on SoftLayer. They include global players, such as Avnet, Arrow Electronics and Ingram Micro, as well as cloud-based services and solution providers, such as Mirantis, Assimil8, Silverstring, Clipcard, SilverSky and Cnetric Enterprise Solutions.
“We are through all the hard parts of the integration,” said Karidis. “The team that’s here is excited and motivated. I don’t know if the integration will ever be done – we’re one of the few acquisitions that has not gone away and has remained its own brand. We still wear SoftLayer t-shirts and carry SoftLayer business cards; the only difference is it now says ‘an IBM company.’ The brand still exists, all of those pieces are still SoftLayer. We do what we have to do to build our business, and the rest of IBM works to make it work on our software. Watson is a great example.”
“In its first year, SoftLayer has proven to be a pivotal acquisition for IBM Cloud,” said Erich Clementi, senior vice president, IBM Global Technology Services. “SoftLayer has quickly become the foundation of IBM’s cloud portfolio, anchoring our infrastructure, platform and software-as-a-service offerings.”