Data Center Knowledge | News and analysis for the data center industry
Monday, February 18th, 2013
12:30p
SoftLayer and the Intricacies of Asia-Pac Expansion
A look at some of the densely populated racks within the new SoftLayer data center in Singapore.
Asia has become the hot expansion market for data center service providers. The region’s infrastructure growth trails its surging Internet population, which is why companies including Google, Amazon, Rackspace, Digital Realty and Equinix have all been expanding in the region.
But expansion decisions aren’t simple in the Asia-Pacific region, as providers must weigh multiple variables in audience and operating environment before investing millions of dollars in a new location. An interesting case study is provided by SoftLayer Technologies, one of the world’s largest hosting providers.
Global expansion has been a priority for SoftLayer in recent years. The company is based in Dallas, but 40 percent of its customers are outside the U.S. While the customer base has long been international, the company’s infrastructure has only recently started to extend beyond the U.S. Its quest to expand reveals the challenges and rewards many service providers face when choosing where to expand in Asia-Pac.
More Infrastructure in Asia-Pacific
The company has a data center in Singapore, and added network locations in Tokyo and Hong Kong in 2011. “Opening a data center in Singapore gave us an opportunity to do a few things in the region,” said Mark Quigley, SoftLayer’s Director of International Operations. “It’s a place for American companies to house their infrastructure.”
While the company has made significant advancements in its Asia-Pac business, Quigley discussed the unique challenges in setting up shop in Japan in particular. Quigley spent a lot of time in Asia during 2012 in order to help learn the culture and shape SoftLayer’s operations in the region.
He finds that in Japan, the culture breeds two contrasting business realities that create challenges and opportunities for companies like SoftLayer: Japan is insular and Japan is global.
Rapid Global App Deployment
Japan is insular because IT purchases there are made through either Japanese firms or foreign firms that have spent decades building trust and reputation. It’s hard for an outsider to establish a business quickly, and the process of getting established can be time-consuming and expensive. “A difficult part would be trying to figure out the telephony support,” said Quigley. “Asian business culture tends to value face time.”
However, as Quigley points out, Japanese businesses also have a huge need for global reach.
“The capital investment required to go global is negligible compared to their forebears, because they don’t need to build factories or put elaborate logistics operations in place anymore,” said Quigley. “Today, a Japanese company with a SaaS solution, a game or a social media experience can successfully share it with the world in a matter of minutes or hours at minimal cost, and that’s where SoftLayer is able to immediately serve the Japanese market.”
That’s why SoftLayer isn’t yet planning to open an office in Japan. It has a network location in Tokyo, an existing customer base, and a number of relationships with partners like Parallels and Citrix, who already have solid footing there. It will continue to seek the right partnerships to grow its business in the region.
Doing Your Homework
Right now, the company is doing its due diligence and seeking to understand the market in Japan. This reflects SoftLayer’s methodical approach to expansion decisions.
“It doesn’t make sense to make a push. It takes time,” said Quigley. “There’s Japanese business culture to contend with. We’ve attracted some Japanese customers, but we haven’t marketed to the Japanese audience yet. If we were going to make a full-on push, there’d have to be significant changes.”
One promising development is that Japanese companies are becoming more comfortable shifting their IT infrastructure from on-premises facilities to third-party providers.
“From a technology adoption perspective, we have in-country companies like KDDI, NTT, that are actively providing colocation services, dedicated hosting services, and have started cloud infrastructure (i.e. Nifty cloud by Fujitsu),” said Quigley. “All are seeing that Japanese companies are starting to outsource their infrastructure.”
SoftLayer wants its network within 40ms of everyone on the planet, and the Japan PoP is part of that goal, allowing it to “cross both ponds” and connect U.S. data centers with international facilities. The Japanese market is a great opportunity, but as Quigley discovered, it’s a tough nut to crack. Penetrating the market takes time and commitment, and it remains almost paradoxically both insular and global.
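For a rough sense of what a 40ms target implies geographically, here is a back-of-envelope sketch. It assumes light propagates through fiber at roughly 200 km per millisecond and ignores routing, queuing and last-mile overhead; whether the 40ms figure is one-way or round-trip is not specified, so both readings are shown. These assumptions are ours, not SoftLayer’s published methodology.

```python
# Back-of-envelope reach estimate for a latency target.
# Assumptions (not from the article): ~200 km per millisecond
# propagation in fiber; no routing, queuing or last-mile overhead.

FIBER_KM_PER_MS = 200.0  # approximate speed of light in glass

def max_reach_km(latency_budget_ms: float, round_trip: bool = True) -> float:
    """Greatest fiber distance reachable within the latency budget."""
    one_way_ms = latency_budget_ms / 2 if round_trip else latency_budget_ms
    return one_way_ms * FIBER_KM_PER_MS

if __name__ == "__main__":
    budget = 40.0  # ms, the target cited above
    print(f"If 40 ms is round-trip: ~{max_reach_km(budget, True):,.0f} km per PoP")
    print(f"If 40 ms is one-way:    ~{max_reach_km(budget, False):,.0f} km per PoP")
    # Real fiber paths are longer than great-circle distance, so a
    # provider typically needs PoPs well inside these ceilings.
```

Either way, the arithmetic shows why a handful of U.S. facilities cannot cover Asia-Pacific on their own, and why PoPs in Tokyo, Hong Kong and Singapore matter.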
1:00p
Data Center Jobs: Geist
At the Data Center Jobs Board, we have a new job listing from Geist, which is seeking a Regional Sales Manager – NW US in Sacramento, California.
The Regional Sales Manager – NW US is responsible for developing a business plan and sales strategy that ensure attainment of company sales goals and profitability, and for the performance and development of channel partners. The role also involves initiating and coordinating action plans to penetrate new markets, participating in trade shows within the assigned geographic region, traveling to other markets in the region as the business grows, and providing timely, accurate, competitive pricing on all completed prospect applications submitted for pricing and approval, while striving to maintain maximum profit margin. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
1:30p
Whatever Happened to High Availability?
Kai Dupke is Sr. Product Manager, SUSE LLC, a pioneer in open source software and enterprise Linux.
You don’t hear a lot about high availability (HA) these days, what with all the media attention focused on cloud computing. Five years ago, high availability and clustering were a big part of the IT conversation. These days, not so much. But high availability is still a key part of the IT narrative, whether you hear about it or not.
High availability has been lost in the din about cloud computing because HA was never part of the cloud story’s expectations. IT shops looking at cloud computing are seeking the benefits of agility and lower cost instead.
Application development on the UNIX and Linux platforms traditionally took the stance that the infrastructure would shoulder most, if not all, of the high availability (HA) responsibilities. The storage layer would provide RAID arrays, the networking layer multiple redundant network configurations, and the operating system HA features that would ensure maximum uptime for the application.
There is some HA workload at the application layer, of course: support for clustering is one way application developers have been able to incorporate HA features.
High Availability Still in Infrastructure Layers
Even as enterprise customers move to a more virtualized infrastructure, such as private clouds or virtual data centers, HA is still very much centered at the infrastructure layers, not at the application layer. There may be some HA support at the virtual layers, naturally, but that’s still part of the infrastructure narrative.
Listening to the public cloud story, however, you get a much different tale. In public clouds, the expectation of the infrastructure layer is not as high as it was with legacy systems. It’s more of a commodity, get-what-you-pay-for mentality when it comes to the infrastructure, so application developers have to take the only path open to them: build HA functionality into their applications.
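To make that contrast concrete, here is a minimal sketch of what pushing HA into the application can look like: the client itself retries against a list of redundant endpoints instead of relying on the infrastructure to fail over. The endpoint names and retry policy are hypothetical illustrations, not any particular cloud provider’s API.

```python
# Minimal sketch of application-level failover: the application, not
# the infrastructure, is responsible for surviving an outage.
# The endpoints and policy below are hypothetical examples.
import urllib.request
import urllib.error

REPLICAS = [
    "https://service-us-east.example.com/api/health",
    "https://service-us-west.example.com/api/health",
    "https://service-eu.example.com/api/health",
]

def fetch_with_failover(urls, timeout_s: float = 2.0) -> bytes:
    """Try each replica in order; raise only if every one fails."""
    last_error = None
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=timeout_s) as resp:
                return resp.read()
        except (urllib.error.URLError, OSError) as exc:
            last_error = exc  # remember the failure, try the next replica
    raise RuntimeError(f"all replicas failed, last error: {last_error}")

if __name__ == "__main__":
    try:
        print("got", len(fetch_with_failover(REPLICAS)), "bytes")
    except RuntimeError as err:
        print(err)
```

Every team that writes this kind of plumbing is duplicating work the infrastructure used to shoulder, which is exactly the extra cost described below.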
This is not to tear down the public cloud; the flexibility and cost structure of the public cloud are part of why it works for so many organizations. Plus, there is the very real logistical challenge of trying to apply HA principles to a public cloud. As Japan learned to its dismay in 2011, supporting public clouds en masse with that kind of resilience is beyond current technology.
Cloud Doesn’t Work For Everything
But HA is still a necessary part of IT, because not every IT department needs all of its services out in the cloud.
First, there are the very real costs of migrating to the cloud. Because clouds today do not provide HA, customers are asked to rewrite their applications: because the cloud is missing a crucial feature, customers have to take action and spend money to do something the infrastructure should do anyway.
It’s been cool to watch marketing departments turn this additional workload into a benefit. It’s like selling a car without a steering wheel. “Bring your own wheel, and make sure no one uses your car in the meantime, because there is no wheel installed,” is how some companies are selling the cloud.
The fact is, the biggest inhibitor for cloud computing is the lack of infrastructure support needed by many business applications. This is complicated by the fact that most of these are third-party applications, not even built by the companies using them.
To obtain the benefits of HA in the cloud, you could argue that these third-party vendors should open up the setup of, and access to, their applications. That sounds good on paper, but it means every third-party vendor would end up creating its own way of doing this, multiplying the effort of getting HA at the application level.
What’s the answer?
2:00p
Smart Routing Speeds Infrastructure for Digital Ad Firm
Cloud computing, often depicted as a vast treasure trove of seemingly limitless resources, is not a “one-size-fits-all” proposition. While there is certainly a wide array of needs that cloud computing can meet, some businesses are finding that they need a more customized set of infrastructure components to really maximize their business.
OwnerIQ, a digital advertising intelligence and media buying company, is an example of an organization that started in the cloud but moved to the managed services provider Internap for a number of reasons. OwnerIQ is privately held and based in Boston, and is backed by area investors: Kepha Partners, Atlas Venture, Common Angels, Egan-Managed Capital, Massachusetts Technology Development Corporation and Longworth Venture Partners.
“OwnerIQ needed a distributed footprint for hosting. Performance, low latency and ability to handle large data transactions were all very important to OwnerIQ,” said Raj Dutt, Senior Vice President of Technology, Internap.
Managed Hosting Still A Strong Solution
“Big data is associated with the cloud, but cloud doesn’t always make sense, depending on the customer’s requirements,” he said. Internap also provides cloud services, so Dutt’s perspective is not driven by “cloud-knocking.”
“We have an ad platform that powers a digital ad solution,” said Eric Mabley, co-founder and executive VP of ad platforms at OwnerIQ. “We work directly with brands and their agencies. We work with the digital agency of record, brand planners, media buyers.”
Mabley started the company in the mid-2000s. At first, the company’s mission-critical technology was hosted with a different service provider who was more “rack and stack,” he said. It’s all at Internap now, because of the networking capabilities and the cost savings, he said.
Multiple Infrastructure Needs
OwnerIQ’s business has two parts, Mabley said: a data marketplace, where retailers and manufacturers share audience behavior data that is analyzed and leveraged for digital advertising opportunities (which the company calls “ownership targeting”), and media purchasing through national advertising exchanges from publishers and other media companies.
The data marketplace and the media buying function each have their own unique infrastructure needs, he said.
On the transactional side of the house – working in the data marketplace, analyzing big data for audience behaviors with data sets from major retailers and manufacturers such as Cisco, Green Mountain Coffee or RCA – there is a need for storage space and compute cycles. The audience data is kept in data warehouses sitting near each other. But the data doesn’t just sit there.
“We are not in the business of data storage. We are in the business of data analysis,” Mabley said. For example, audience behavior from a visitor to a web site selling high-end appliances might indicate that person owns gourmet kitchen appliances. That might be an attractive audience target to a company selling life insurance products. So, one manufacturer’s data set might yield an advertising audience profile that can be reached through digital advertising from another company.
Mabley said, “We distill that data and use it to steer us when we see an advertising opportunity. We need horsepower to churn through the data and push the data around.” Dutt, who worked closely with the OwnerIQ team, noted that OwnerIQ had requirements for low internal latency to meet the big data analysis needs.
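OwnerIQ doesn’t describe its algorithms, but the general idea behind “ownership targeting” can be sketched as a simple mapping from observed shopping behavior to audience segments that another advertiser might want to buy. The categories, segment names and rules below are hypothetical illustrations, not OwnerIQ’s actual taxonomy or methodology.

```python
# Simplified sketch of "ownership targeting": map observed shopping
# behavior to audience segments another advertiser might buy.
# The categories, segments and rules are hypothetical examples.
from collections import defaultdict

# Hypothetical rules: browsed product category -> inferred ownership segments
SEGMENT_RULES = {
    "gourmet-kitchen-appliances": ["high-end-appliance-owner", "premium-household"],
    "baby-strollers": ["new-parent"],
    "riding-lawn-mowers": ["homeowner-large-lot"],
}

def build_audiences(page_views):
    """page_views: iterable of (visitor_id, product_category) pairs."""
    audiences = defaultdict(set)
    for visitor_id, category in page_views:
        for segment in SEGMENT_RULES.get(category, []):
            audiences[segment].add(visitor_id)
    return audiences

if __name__ == "__main__":
    views = [
        ("v1", "gourmet-kitchen-appliances"),
        ("v2", "baby-strollers"),
        ("v1", "riding-lawn-mowers"),
    ]
    for segment, visitors in sorted(build_audiences(views).items()):
        print(segment, "->", sorted(visitors))
```

At OwnerIQ’s scale the same idea runs over data sets from major retailers and manufacturers, which is why the compute horsepower and low internal latency Dutt mentions matter.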
To participate in the ad-buying auctions hosted on the East and West Coasts and in Chicago, which present tens of billions of opportunities across the U.S. and Canada each day, OwnerIQ needed bicoastal hosting, a global content delivery network (CDN) and access to Internap’s IP network.
Low Latency Important in Ad Buying
Speed equals business advantage. It comes down to fractions of milliseconds when participating in ad purchasing auctions, not unlike trading in the world’s equity markets, according to Mabley.
“It’s where a 1/20 of a millisecond is important to take advantage of an auction price in a timely manner, or we won’t get a bite at the apple,” Mabley said. “We have 50 servers across Internap optimized for traffic latency and a 99.9% success rate participating in auctions.”
What Internap has to offer, which speeds the ability to purchase in a fast-paced auction environment, is its patented routing solution, Managed Internet Route Optimizer (MIRO). MIRO is a route-optimization technology that takes a smart approach to routing traffic across the Web.
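MIRO itself is proprietary and its internals aren’t described here, but the underlying idea of latency-aware path selection can be sketched very simply: measure candidate routes and prefer the fastest reachable one. The sketch below uses TCP connect time as a crude latency probe and hypothetical exchange endpoints; it illustrates the general technique, not Internap’s algorithm.

```python
# Simplified sketch of latency-aware path selection: probe candidate
# endpoints and prefer the fastest reachable one. This illustrates the
# general idea only; it is not Internap's MIRO algorithm.
import socket
import time

def connect_time_ms(host: str, port: int = 443, timeout_s: float = 1.0):
    """Rough latency estimate: time to complete a TCP handshake."""
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=timeout_s):
            return (time.perf_counter() - start) * 1000.0
    except OSError:
        return None  # unreachable within the timeout

def pick_fastest(hosts):
    """Return (host, latency_ms) for the lowest-latency reachable host."""
    measured = [(h, connect_time_ms(h)) for h in hosts]
    reachable = [(h, ms) for h, ms in measured if ms is not None]
    return min(reachable, key=lambda pair: pair[1]) if reachable else None

if __name__ == "__main__":
    # Hypothetical bidding endpoints on each coast and in Chicago.
    candidates = ["east.exchange.example.com",
                  "west.exchange.example.com",
                  "chi.exchange.example.com"]
    print(pick_fastest(candidates))
```

In production, route optimization runs continuously across many paths rather than per request, but the principle is the same: measure, then prefer the fastest.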
So as Dutt said, Internap is “hitting it from all angles.”
Mabley noted, “We had experience with different providers, including leveraging the cloud. We found that networking was a bottleneck.”
For more on managed hosting, bookmark Data Center Knowledge’s Managed Hosting Channel.
4:00p
Best of the Data Center Blogs: Feb. 18
Here’s a roundup of some interesting items we came across this week in our reading of data center industry blogs.
How Energy Efficiency Efforts Can Spell Trouble When the Power Goes Out - From the Schneider Electric Power blog: “A number of important trends are helping companies save lots of money on electric bills by making their data centers more efficient. While this is certainly a worthy and justified endeavor, it does not come without risk – namely, the risk of trouble should the power go out. IT equipment is typically backed up by uninterruptible power supplies (UPSs) which supply power until generators come on-line following the loss of utility power. Cooling system components, however, are not typically connected to a UPS; some may not even be connected to the backup generators. The result is the air temperature in the data center may rise quickly following a power failure.”
Ethernet Congestion: Drop It or Pause It - At the Data Center Overlords, an entertaining post on networking: “Congestion happens. You try to put a 10 pound (soy-based vegan) ham in a 5 pound bag, it just ain’t gonna work. And in the topsy-turvey world of data center switches, what do we do to mitigate congestion?”
Have Data Centers Become Passe? - From Chris Crosby at CompassPoints: “When does the novelty wear off something? Certainly not if you’re the first to own the next big thing, whatever that thing may be. But how many other people have to own one before it’s no big deal anymore? … Despite my lack of bona fides as a follower of conspicuous consumption, I will say that I think data centers have crossed over into the land of everyone has one of those.”
Building the Next-Generation Media Enterprise - From the Equinix Interconnections blog: “Increasingly, media companies are using digital connections with suppliers, vendors, services providers, digital platforms and distribution partners to innovate, collaborate and become more efficient. Internally, within organisations, cloud, SaaS and other offerings are transforming the workflows and internal operations of media enterprises.”