Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Wednesday, August 26th, 2015
| Time | Event |
| 12:00p |
How Edge Data Center Providers are Changing the Internet’s Geography
As people watch more and more of their video online rather than on cable TV or satellite services, and as businesses consume more and more cloud services instead of buying hardware boxes and software licenses, the physical nature of internet infrastructure is changing. Whether your drug is Bloodline on Netflix or Duck Dynasty on Amazon Prime, the web content companies go to great lengths to make sure you can binge in high def. The same goes for cloud service providers, for whom performance on the user’s end means everything in a busy, competitive market.
In recent years, the big push has been to improve the quality of these high-bandwidth web services to users outside of the top metros like New York, Los Angeles, or San Francisco. And the best way to do it has been caching the most popular content or web-application data on servers closer to the so-called “tier-two markets,” places like Phoenix, Minneapolis, or St. Paul. This push has created a whole new category of data center service providers that call their facilities “edge data centers.” These are facilities that quite literally extend the “edge” of the internet further from the traditional internet hubs in places like New York, Northern Virginia, Dallas, or Silicon Valley.
Examples of companies that describe themselves as edge data center providers include EdgeConneX, vXchnge, and 365 Data Centers. Building something that can be truly called an “edge data center” requires a different set of considerations than building your standard colocation facility. It’s about creating interconnection ecosystems in cities away from the traditional core markets.
EdgeConneX went from zero data centers two years ago to two dozen and counting today, and vXchnge bought eight SunGard data centers in tier-two markets in one go earlier this year. Both moves illustrate just how quickly demand for edge data centers is growing.
The Edge is a Place
Ultimately, location is the main way for companies like EdgeConneX to differentiate from the big colo players like Equinix or Interxion. Edge data center providers are essentially building in tier-two markets what Equinix and its rivals have built in the big core markets: hubs where all the players in the long chain of delivering content or services to customers interconnect and exchange traffic. These hubs are where most of the internet has lived and grown for the bulk of its existence, and edge data center companies are building smaller hubs in places that don’t already have them but are becoming increasingly bandwidth-hungry.
Take for example CloudFlare, a Content Delivery Network and internet security services company. To host its infrastructure in all the top markets, CloudFlare uses the big colos, including Equinix, Interxion, and TelecityGroup, which Equinix recently acquired. But it also recently became an EdgeConneX customer, because EdgeConneX could extend its network to places where the likes of Equinix don’t have data centers, Joshua Motta, director of special projects at CloudFlare, said.
You don’t see Equinix facilities in places like Phoenix, Las Vegas, or Minneapolis, he said. The distinction isn’t as clear-cut as it may seem at first, however. While it’s true that Equinix doesn’t have facilities in those three markets specifically, it does have presence in other cities that can be considered second-tier markets: places like Boston, Denver, and Philadelphia. But the size of Equinix data centers in those markets is very small compared to its marquee facilities in Silicon Valley, New York, London, or Frankfurt. Compare, for example, the nearly half a million square feet across seven Equinix data centers in the New York metro to 13,000 square feet in a single facility in Boston, and you’ll get the idea.
Transport Versus Peering: A Question of Cost
Boston is also an example of a market without a good data center option for CloudFlare, Motta said. There are plenty of data centers in town, and there is even an internet exchange, but there isn’t a data center where transit providers (companies that carry traffic over long distances), access networks (the home or business internet service providers), and content companies come together and interconnect, he said. It’s possible that this problem is specific to CloudFlare, he warned, since the company prefers not to pay for peering with access networks.
| 1:00p |
IBM Launches Blue Box Private OpenStack Cloud Services on SoftLayer
Less than three months after buying the private cloud company Blue Box, IBM announced today that it has ported the private cloud software it gained via the acquisition to the IBM SoftLayer cloud.
“Blue Box is now available in 40 plus data centers around the world,” Angel Diaz, VP of cloud architecture and technology at IBM, said. “We think the fact that it took us less than 90 days to make the port shows how flexible Blue Box really is.”
Based on an implementation of OpenStack, the managed hosted service removes all the complexity associated with deploying, updating, and managing an instance of OpenStack from the internal IT organization, he explained.
As an open source framework, OpenStack has the potential to reduce the licensing costs associated with commercial IT management software. But as a technology, OpenStack is still relatively immature.
Most IT organizations don’t have the engineering resources to deploy it, much less manage it on their own. For that reason, Diaz said, some organizations will take advantage of the shift to OpenStack to rely more on external IT service providers.
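To make that division of labor concrete, here is a minimal sketch, in Python with the openstacksdk library, of what an internal team's day-to-day interaction with an OpenStack cloud can look like once a provider handles deployment, updates, and upgrades. This is a generic illustration, not Blue Box's or IBM's specific tooling; the cloud entry name, image, and flavor below are assumptions.

```python
import openstack

# Credentials come from a clouds.yaml entry or environment variables;
# "managed-openstack" is a hypothetical cloud name.
conn = openstack.connect(cloud="managed-openstack")

# Launch a VM through the standard Compute (Nova) API. The same call works
# whether the control plane runs on premises or in a provider's data center.
image = conn.compute.find_image("ubuntu-14.04")   # hypothetical image name
flavor = conn.compute.find_flavor("m1.small")     # hypothetical flavor name
server = conn.compute.create_server(
    name="demo-vm", image_id=image.id, flavor_id=flavor.id
)
server = conn.compute.wait_for_server(server)
print(server.status)
```

The point of the sketch is the boundary it implies: the provider keeps Nova, Neutron, and the rest of the control plane healthy and current, while the IT organization consumes the same APIs it would use against any other OpenStack cloud.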
Nevertheless, organizations have the option of deploying the Blue Box private cloud on-premises or in the SoftLayer cloud, and development work on OpenStack is clearly ongoing. As a result, OpenStack should be simpler to master this time next year, as both the modules that make it up become more robust and the automation frameworks surrounding it become more sophisticated.
In the case of IBM, Diaz said, the company is squarely focused on not only expanding the number of use cases for OpenStack, but also improving overall scalability and interoperability. IBM is specifically focused on improving the robustness of the Neutron networking software that comes with OpenStack and concretely demonstrating the level of interoperability that needs to exist between various implementations of the open source framework.
After all, without that interoperability, OpenStack becomes more of a joint research and development project that does more to benefit vendors than it does an IT organization.
Obviously, OpenStack has a ways to go before achieving mass adoption. IT organizations, said Diaz, have made it clear they don’t want all their workloads running in a multi-tenant public cloud. As for any gaps that currently exist between OpenStack and rival commercial software platforms, Diaz said, with over 500 IBM developers working on OpenStack, those gaps will be closed very quickly.
| 3:00p |
The Future Is Now: Digital Transformation Manufacturers Can’t Afford to Miss
Sivakumar Gurupackiam is Vice President of Discrete Manufacturing at NTT DATA, Inc.
As manufacturing organizations become increasingly global, their ecosystems grow more complex: the marketplace is being redefined, service models are changing, and new symbiotic business relationships are being established.
For multinational manufacturers, these challenges have to be tackled while balancing the needs of a complex global supply chain against local and regional market needs. Regulatory challenges, disruptive new digital business models, and continuously evolving products that leverage technology to enable and differentiate businesses all threaten established market leadership.
Given these challenges, it is incumbent upon manufacturing leaders to introduce lean and agile technologies to optimize operations, drive innovation, and comply with a growing number of regulatory changes – all while meeting customer demands. With manufacturing organizations now expected to be agile and responsive to customer and consumer demands and other market forces, it is essential that manufacturers meet the digital business imperative while maximizing returns from their legacy systems.
How Can Technology Help Me Transform My Business?
We find ourselves in an incredibly dynamic phase of technology evolution, one that affects everything from people’s “anywhere, anytime” lifestyles to their expectations of full transparency, to a degree that can even topple a government. While legacy systems and earlier technology were adequate to run yesterday’s business, they may not provide the agility, visibility, and competitiveness required in today’s environment. The impact and application of digital technologies varies greatly from one industry to another; within manufacturing, the same can be said for the different positions along the supply and demand chain.
While we have all heard technology buzzwords such as social, mobile, cloud, big data, and the Internet of Things, manufacturers now need to think about how to incorporate these new technologies. It is important to understand what these technologies are, but it is even more important to know how they can be used to transform operations. Once this is understood, manufacturers can leverage these technologies as building blocks to differentiate themselves and, in some cases, disrupt their competitors’ businesses.
Incorporating a digital business strategy can provide numerous opportunities and advantages, including: consistent and improved customer experience; seamless automated connectivity between systems, things and people to enable better service; and the ability to make better decisions and transform manufacturing practices based on a myriad of data sources. To strategize, manufacturing leaders should find the best approach for incorporating new technologies while remaining competitive, and tailor their recommendations for improvement for both short- and long-term needs.
Challenges Impeding Transformation
Organizations are often deterred from embarking on this transformation by traditional challenges and conflicting priorities such as legacy system architecture, ever-tighter budgets, shrinking resources, and the stereotypical “IT inertia” perceived by business leaders. While new market entrants can build their digital business solutions from the ground up with flexible, cost-effective, and advanced technologies, most organizations must maintain core legacy systems to keep the business running.
Too often, large-scale IT projects fail to deliver the promised ROI within the established timeline, making them increasingly difficult to justify to executives. While this internal struggle continues, IT improvements remain crucial to ongoing innovation. Quite often the definition of “value” or “benefit” is itself debated as IT and the business search for common ground.
Sound Strategy and Modernization – A Formula for Success!
It is vital that IT leaders make the business case for modern IT strategies to the board and C-level executives, who need greater visibility and engagement to understand and approve upgrades. Fortunately, the need to evolve technology platforms while satisfying business requirements can be met with thoughtful planning. Companies can and should also invest in IT solutions that improve business performance and time to market.
To resolve these challenges and embark on a constructive, meaningful digital strategy for future growth, IT and business organizations should reach a shared understanding of how best to adopt digital technologies in their environment. Digital transformation is a necessary investment that can be ignored only at the cost of future obsolescence. Understanding that IT infrastructure improvements are a major investment, companies should prioritize lower-cost projects first to establish processes that will enable a smooth transition to future, more substantial investments and reduce the stereotypical “IT inertia” seen by business leaders.
Organizations should strike the right long-term balance between digital transformation and legacy maintenance. Partnering with the right service provider also brings a third-party perspective and experience, helping build a strategy that is the right fit for the organization.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
| 3:58p |
Storage Startup Scality Closes $45 Million Series D Funding With Sights on 2017 IPO 
This article originally appeared at The WHIR
Object-based storage startup Scality has completed a $45 million Series D funding round, bringing the total investment in Scality to $80 million.
In its announcement, Scality said it will use the funds to continue growing its North American sales force, expand internationally, and actively support its resellers. The company said this will position it for an initial public offering in 2017.
Scality’s last round of funding was in July 2013, when it raised $22 million. Since then, Scality has increased its revenue four-fold, grown its staff from 42 to 160, and announced major customers including Deluxe, RTL II, and Phoenix.
Scality’s primary product, Scality RING, is storage software that enables customers to build petabyte-scale storage infrastructures using industry-standard x86 servers, regardless of the vendor. It abstracts the underlying hardware to create a single pool of storage.
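The article does not spell out the interfaces Scality RING exposes, but as a generic illustration of how an application talks to a single pool of object storage, here is a minimal Python sketch against an S3-compatible endpoint. The endpoint URL, credentials, and bucket name are assumptions for illustration, not Scality specifics.

```python
import boto3

# Hypothetical S3-compatible gateway in front of the storage pool;
# endpoint, credentials, and bucket are illustrative assumptions.
s3 = boto3.client(
    "s3",
    endpoint_url="https://storage.example.com",
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Write and read an object by key. The client never needs to know which
# x86 server or disk in the pool actually holds the data.
s3.put_object(Bucket="archive", Key="reports/2015-08.pdf", Body=b"...")
obj = s3.get_object(Bucket="archive", Key="reports/2015-08.pdf")
print(obj["ContentLength"])
```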
More than half of the server market now resells Scality RING software thanks to reseller agreements with HP, Dell and other server manufacturers.
Scality RING is used for a variety of capacity-driven workloads such as cloud services, video, and enterprise archiving. For instance, Libero, Italy’s largest email provider, deployed an email storage solution using Scality RING on Dell servers in 2011. And seven of the top 20 telecommunications companies use Scality RING for mission-critical applications.
Scality is based in San Francisco but has a large international user base. Scality significantly expanded its European and Asia Pacific presence in 2014. It opened a Japanese subsidiary, Scality KK, in March of this year to address the company’s growing Japanese customer base, and opened another office in Singapore.
“Based on our extensive analysis, Scality is a leader in object-based storage – a market that represents $21.6 billion in 2015, and is growing 21.5 percent annually,” stated Ashish Nadkarni, Research Director at IDC. “Scality is unique in how its solution blends the file and object technologies that are required to bridge new and existing customer needs. This new funding round is further validation of Scality’s strength in the market and strong value proposition.”
The latest investment round saw new investor BroadBand Tower, Inc. join existing investors Menlo Ventures, IDInvest, the Digital Ambition Fund, Iris Capital, Omnes Capital, and Galileo Partners. The Series D funding round was completed with participation from 65 percent of Scality’s employees, who will have an equity stake in the company they’re growing.
This first ran at http://www.thewhir.com/web-hosting-news/storage-startup-scality-closes-45-million-series-d-funding-with-sights-on-2017-ipo
| 5:57p |
Intel Pumps More Cash Into Big Data Startup BlueData
Intel has for the third time invested in BlueData, a Silicon Valley startup that makes it easier for companies to stand up and operate Big Data infrastructure, such as Apache Hadoop and Spark, in their own data centers or in the cloud. The two companies have also entered into a commercial partnership.
The processor maker led the recent $20 million Series C funding round. BlueData, founded by a group of VMware alumni, has now raised a total of $39 million.
Companies want to use Big Data analytics to derive value from all the data they and their customers generate, but technologies like Hadoop remain difficult to implement, which is why companies like BlueData are gaining traction.
We explained how BlueData’s platform Epic makes Hadoop easier in this profile.
As Intel brings new technologies into the data center, such as new memory tech or new CPUs, BlueData’s job is to make sure its platform can take advantage of that innovation, BlueData CEO Kumar Sreekanti said.
The involvement of Intel, which designs server platforms for the majority of the world’s data centers, is important to the quality and adoption of new technologies such as BlueData’s. It was the collaboration between Intel and VMware that made server virtualization the success it became, said Sreekanti, a former VMware VP of research and development.
The effectiveness of the new Big Data platforms, like Spark, will depend in a big way on CPU innovation, be it encryption or in-memory computing.
On the marketing side of the relationship, Intel will help promote BlueData and cross-sell the platform to its customers, according to Sreekanti.
While it continues to invest in R&D, the startup will use the latest funding round primarily to get the word out and sell. “Majority of the focus is building the sales and marketing effort,” Sreekanti said.
Some of the early adopters of Epic include Comcast, Orange, and Symantec. The company is in talks with numerous other prospective customers, Sreekanti said, but he could not disclose their names.
| 6:43p |
Google Declares its Cloud Container Engine Ready for Production
About 10 months after first announcing the managed version of Kubernetes, its open source Linux container management system, Google today announced that the cloud service, called Google Container Engine, is now production-ready.
Google is very likely the world’s largest user of Linux containers. From Search to Gmail, every single Google service runs in Linux containers, Craig McLuckie, a Google product manager, wrote in a blog post. The company spins up more than 2 billion containers across its global data center infrastructure every week.
“Container Engine represents the best of our experience with containers,” McLuckie wrote.
While Linux containers have been around for years, they have become extremely popular over the last two to three years thanks to Docker, an open source project backed by a company of the same name.
Google and its cloud rivals Amazon Web Services and Microsoft Azure have all embraced the push to take containers into the mainstream and introduced cloud services around the technology.
Docker has created a standard for container images and a system that makes it easy to use them. The promise of Linux containers is more efficient utilization of data center resources and a much easier time writing applications for any type of infrastructure – on-premise or in the cloud, inside VMs or on bare-metal servers.
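As a small illustration of that "easy to use" point, here is a minimal sketch using the Docker SDK for Python. It shows the generic Docker workflow rather than anything specific to the cloud providers mentioned above, and the image name and command are illustrative assumptions.

```python
import docker

# Connect to the local Docker daemon using environment defaults.
client = docker.from_env()

# Pull a public image in the standard Docker image format and run a container
# from it. The same image runs unchanged on a laptop, a bare-metal server,
# or a cloud VM. "alpine:latest" and the command are illustrative.
output = client.containers.run(
    "alpine:latest", ["echo", "hello from a container"], remove=True
)
print(output.decode())
```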
Others have since proposed container standards as alternatives to Docker’s, but after about a year of back-and-forth, many of the companies in the Linux container ecosystem have found some common ground.
Container Engine takes the usability angle even further, offering fully managed container clusters in the cloud, without the user having to worry about things like availability or software updates. All the developer has to do is define what kind of CPU and memory a container needs, how many replicas it should have, and its “keepalive” policy, and the cloud service sets up the infrastructure accordingly.
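Under the hood this is the Kubernetes declarative model. As a hedged sketch of what such a definition can look like, here is a small example using the Kubernetes Python client; the image, resource figures, and replica count are illustrative assumptions, and Container Engine users would more commonly express the same thing as a YAML manifest or a kubectl command.

```python
from kubernetes import client, config

def create_web_deployment():
    # Load credentials from the local kubeconfig, e.g. one written by
    # `gcloud container clusters get-credentials`.
    config.load_kube_config()

    container = client.V1Container(
        name="web",
        image="nginx:1.9",  # hypothetical image
        resources=client.V1ResourceRequirements(
            # The CPU and memory the container needs.
            requests={"cpu": "250m", "memory": "256Mi"},
        ),
    )
    template = client.V1PodTemplateSpec(
        metadata=client.V1ObjectMeta(labels={"app": "web"}),
        # restart_policy is the "keepalive" knob: always restart failed containers.
        spec=client.V1PodSpec(containers=[container], restart_policy="Always"),
    )
    deployment = client.V1Deployment(
        api_version="apps/v1",
        kind="Deployment",
        metadata=client.V1ObjectMeta(name="web"),
        spec=client.V1DeploymentSpec(
            replicas=3,  # how many copies of the container to keep running
            selector=client.V1LabelSelector(match_labels={"app": "web"}),
            template=template,
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="default", body=deployment
    )

if __name__ == "__main__":
    create_web_deployment()
```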
Red Hat, Microsoft, IBM, VMware, and Mirantis, a leading OpenStack cloud infrastructure company, have all been integrating Kubernetes into their platforms. According to McLuckie, this means users will be able to move their containers between Container Engine and on-premise environments based on any of those platforms.
| 8:51p |
Rackspace Email, Hosted Exchange Back to Normal After Data Center Issue 
This article originally appeared at The WHIR
Rackspace is reporting that its email services are back to normal after a network connectivity issue impacted its Email and Hosted Exchange services Wednesday morning.
According to the Rackspace status page, the issues occurred from approximately 9:20 am to 11:10 am CDT. Customers of Rackspace Email and Hosted Exchange in its IAD data center in Ashburn, Virginia were impacted.
Rackspace said its engineers rerouted Hosted Exchange traffic to its Chicago data center at around 9:36 am CDT to mitigate impact to customers.
The company reported its email systems were back to normal as of 2:03 pm CDT. Engineers are continuing to monitor the environment.
Cloud Office Control Panel also experienced residual issues due to delays in database synchronization for changes made during the time of impact, Rackspace said. “Customer initiated changes through the control panel may take up to an hour to replicate,” the company said.
On Twitter, Rackspace told some users to expect mail to trickle in as there is “a lot of mail queued to be delivered.”
Rackspace has a 100 percent SLA and offers customers credits in the event of downtime. Customers should refer to the company’s Mail Hosting Services Terms and Conditions, which state that customers can request a credit through their control panel within 7 days following the end of the downtime.
Recently, Rackspace was named as a leader in Gartner’s “Magic Quadrant for Cloud-Enabled Managed Hosting” in both North America and Europe.
This first ran at http://www.thewhir.com/web-hosting-news/rackspace-email-and-hosted-exchange-back-to-normal-after-data-center-issue