Data Center Knowledge | News and analysis for the data center industry
Tuesday, October 7th, 2014
| Time | Event |
| 12:00p |
BIME Out With Latest Release of Business Intelligence SaaS Cloud business intelligence provider BIME has released version 6 of its platform after 12 months of development, built and architected to run entirely in the cloud.
Business intelligence remains one of the last bastions of the on-premises software world. The space is ruled by enterprise software from the likes of SAP and IBM. While many BI apps are now offered as Software-as-a-Service, the SaaS counterparts to on-premises apps are generally viewed as underpowered. BIME, however, believes BI can be done in the cloud without compromise.
BIME CEO Rachel Delacour said some psychological barriers to performing BI in the cloud remain, but in terms of technology, SaaS BI has surpassed on-premises offerings. The goal is to be able to perform sophisticated BI while keeping it simple to use. “As a SaaS vendor, it’s your responsibility to mask complexity,” said Delacour.
BIME allows users to consume and connect data sets across the web and to query them and build dashboards on tablets and mobile devices. Because it runs in the cloud, there is no need to invest in servers to upload and refine data. It looks like a consumer app but is a fully functioning BI solution.
The platform can tap into a wealth of cloud application data in addition to on-premises data. A drag-and-drop interface allows a user to connect to big data in the cloud, such as Google BigQuery and Amazon Redshift. It connects to consumer web apps like Dropbox or Facebook and business web apps like Salesforce.com, and it can connect to on-premises data securely. “If you told me a few years ago that I would be able to look at billions of rows across different technologies, I would have been skeptical,” said Delacour.
It connects to these data sources over a secure protocol. For on-premises data, BIME uses a proxy reachable from the cloud. “We have created a way where we are not obliged to connect the data where it resides, while we are able to ask to work in a sort of delegated mode, sending queries to the database,” she said.
Alternatively, customers can duplicate on-premise data and bring it to the cloud. “It’s about providing options.”
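The delegated mode Delacour describes, in which data stays where it resides and the cloud service forwards queries to it through a proxy, can be pictured with a minimal sketch. The class, table and query below are hypothetical illustrations rather than BIME’s actual implementation; SQLite simply stands in for whatever database a customer runs on premises.

```python
import sqlite3

class DelegatedQueryProxy:
    """Hypothetical sketch of a delegated-query mode: the cloud BI service
    sends SQL to a proxy sitting next to the on-premises database, and only
    the result rows travel back; the raw tables never leave the premises."""

    def __init__(self, connection):
        self.conn = connection  # stands in for any on-premises database

    def run(self, sql, params=()):
        # Execute the query locally and return only the (usually aggregated) results
        return self.conn.execute(sql, params).fetchall()

# Demo with an in-memory SQLite database standing in for on-premises data
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE sales (region TEXT, revenue REAL);
    INSERT INTO sales VALUES ('EMEA', 120.0), ('EMEA', 80.0), ('APAC', 50.0);
""")

proxy = DelegatedQueryProxy(db)
print(proxy.run("SELECT region, SUM(revenue) FROM sales GROUP BY region"))
# prints aggregated rows, e.g. [('APAC', 50.0), ('EMEA', 200.0)]
```

Only the aggregated result set crosses the wire, which is the point of the delegated approach compared with copying the data into the cloud.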
Architecting for the cloud offers several advantages, according to BIME. As a SaaS vendor, the company has access to usage patterns that on-prem BI vendors don’t, allowing it to further refine the application. “We are able to iterate and understand so much more quickly than on-prem BI vendors,” said Delacour.
For customers, performing BI in a SaaS model means there is no financial risk, opening up BI to a larger swath of companies.
Delacour’s roots are as a cost controller. In that role, she used several BI tools and found them frustrating. “We need to bring BI to the cloud to make it modern, to take advantage and leverage multi-tenancy,” she said. “Just because you are on the cloud, you can’t be a simple pre-packaged solution. That’s not BI. You need the same level of features of traditional on-prem BI. We enable customers to connect the data where it is.”
“The core of our V6 — one of the most complex JavaScript applications ever built — ensures that BIME users will always be on the edge of what is possible to run in a browser,” said CTO and co-founder Nicolas Raspal. “If you know how to build a presentation, you can use the entire web as your data warehouse, create compelling visualizations and share insights.” | | 12:00p |
Merger Creates National Data Center Provider Fortune Data Centers and the Dallas Infomart have merged, combining resources and assets into a single company that will operate as Infomart Data Centers. The merger creates a national player with presence in some of the largest markets under a moniker that reflects one of the most famous technology buildings in the country, the Dallas Infomart.
The company also announced it is opening a fourth data center in Ashburn, Virginia, next year. It recently bought the facility from AOL, which had used it as a data center.
ASB Real Estate Fund wholly owns both companies, so no buildings are changing hands. Combined, the company will operate 2.2 million square feet of wholesale data center and office space. It has more than 100 megawatts of available power across Dallas, Ashburn, San Jose, California, and Hillsboro, Oregon. Fortune’s management team will lead the company.
ASB is a $5 billion real estate investment trust that is part of Washington, D.C.-based ASB Capital Management, one of America’s oldest and largest institutional money management firms.
The Infomart is one of the largest, most interconnected buildings in the country. With more than 1.6 million square feet of space, it is home to major colocation providers, including Equinix (the largest tenant), ViaWest, Cologix and NTT America, to name a few.
“Five years ago, it was about being a trusted provider with a good track record — now those are table stakes,” John Sheputis, formerly CEO of Fortune and now CEO of Infomart, said. “What’s happening is, I’m seeing more RFPs (Requests For Proposals) asking ‘where else are you?’ and ‘can you grow with me?’ This is driven by the changing nature of market demand. This is a nationally-branded, consistent service across geographies.”
The company is planning to bring its new data center in Ashburn’s data center alley online in 2015. It plans to upgrade power infrastructure at the former AOL facility for higher densities.
“Hats off to the people who designed it,” said Sheputis. “If you’ve been in that market for a while, there are a few people that made data center alley what it is, and AOL was one. It’s a beautiful, well-engineered building, but density is low. We will do a substantial renovation to increase energy density.”
Fortune’s data centers are in San Jose and Hillsboro, just outside of Portland. The Hillsboro area is known as Silicon Forest due to its high concentration of technology companies.
“Our customers wanted us in more locations and now we are in four of the most important markets in the country,” said Sheputis. | | 12:30p |
Altiscale Makes Hadoop Easier for SQL Pros Altiscale, a Hadoop cloud startup founded by former Yahoo CTO Raymie Stata, has added one of the most important capabilities in today’s Hadoop market to its cloud offering – the ability to use SQL to run queries on data stored in Hadoop.
SQL is one of the most popular languages used to manage and analyze structured data, and Hadoop is one of the most popular frameworks for storing and processing unstructured data. There are a lot more people who know SQL than there are people who know Hadoop, however, which is why many businesses built around Hadoop have introduced the SQL-on-Hadoop capability.
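For a sense of what SQL-on-Hadoop looks like to a SQL professional, here is a minimal sketch that issues ordinary SQL against a HiveServer2 endpoint using the PyHive client. The host, credentials, table and query are hypothetical placeholders, and the article does not specify which SQL engine Altiscale actually exposes.

```python
# Minimal SQL-on-Hadoop sketch using the PyHive client against HiveServer2.
# Hostname, credentials, table and query are hypothetical placeholders.
from pyhive import hive

conn = hive.connect(host="hadoop-gateway.example.com", port=10000, username="analyst")
cursor = conn.cursor()

# Plain SQL, even though the data lives in HDFS and the query is executed
# by the Hadoop cluster rather than a traditional relational database.
cursor.execute("""
    SELECT product, COUNT(*) AS orders
    FROM clickstream_events
    WHERE event_type = 'purchase'
    GROUP BY product
    ORDER BY orders DESC
    LIMIT 10
""")

for product, orders in cursor.fetchall():
    print(product, orders)
```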
Altiscale provides a fully hosted and managed Hadoop cloud service that requires no knowledge on the user’s part of the infrastructure underneath the open source framework, Steve Kishi, the company’s vice president of product management, said. The startup leans heavily on this aspect in trying to differentiate itself in the active Hadoop market.
“We provide Hadoop as a SaaS (Software-as-a-Service) service at the end of the day,” Kishi said. “Our customers don’t have to think about infrastructure.”
Competitors, such as Amazon Web Services’ Elastic MapReduce, require users to install and operate Hadoop on cloud infrastructure.
Another Hadoop cloud startup, Qubole, offers its service on top of the AWS cloud but can also set up on-premise Hadoop clusters with elastic capacity provisioning at customers’ own data centers.
Altiscale has its own data center infrastructure to run the service on, pitching it as Hadoop out of the box. “That’s pretty critical in this space because of the complexities of Hadoop,” Kishi said.
He describes the startup’s billing model as similar to a modern cellphone subscription plan. Customers pay for resources they use.
If you signed up for 10 TB of HDFS storage capacity and 10,000 task hours of compute, for example, you would pay about $2,500 per month, Kishi said.
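As a back-of-the-envelope illustration of how such a usage-based bill might be computed, the sketch below meters storage and compute separately. The per-unit rates are invented placeholders chosen only so the quoted example lands near $2,500; Altiscale’s actual rate card is not given in the article.

```python
# Hypothetical usage-based billing sketch; the rates below are placeholders,
# not Altiscale's published pricing.
STORAGE_RATE_PER_GB_MONTH = 0.10    # assumed USD per GB of reserved HDFS capacity
COMPUTE_RATE_PER_TASK_HOUR = 0.15   # assumed USD per task hour of compute

def monthly_bill(hdfs_tb, task_hours):
    storage_cost = hdfs_tb * 1024 * STORAGE_RATE_PER_GB_MONTH
    compute_cost = task_hours * COMPUTE_RATE_PER_TASK_HOUR
    return storage_cost + compute_cost

# The example from the article: 10 TB of HDFS capacity and 10,000 task hours
print(monthly_bill(10, 10_000))  # 2524.0, in the neighborhood of $2,500
```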
Data center provider as channel partner
Altiscale uses colocation providers to host its infrastructure in one facility on the East Coast and two on the West Coast. The company has a symbiotic relationship with Carpathia Hosting, its East Coast data center provider, which has referred some of its colocation customers to Altiscale.
“We’re finding that data centers can be a very good channel for us,” Kishi said. “We’re finding it to be very profitable.”
Some of Altiscale’s bigger customer wins came as a result of Carpathia referrals. Carpathia benefits by offering its customers a Hadoop-as-a-Service provider whose infrastructure they can connect to directly in the data center.
“We can set up direct connects between racks inside the data center and transfer data more efficiently that way,” Kishi said. | | 1:05p |
AMS-IX Comes to CME’s Chicago Data Center AMS-IX subsidiary AMS-IX USA has lit up an Internet exchange in the CME Group’s data center at 350 E. Cermak in Chicago.
The non-profit Amsterdam Internet Exchange operator has been expanding in the U.S. aggressively since it first entered the market earlier this year. It has been expanding as part of the Open-IX initiative, meant to create alternatives to the largest Internet exchanges in the country, most of which are operated and controlled by Equinix.
Moving into the data center owned by CME, operator of the Chicago Mercantile Exchange, is a major win for AMS-IX. The Internet exchange now gets to sell to CME’s content and financial services customers that lease colocation space there.
The building, owned by Digital Realty Trust, is also one of the most important carrier hotels in the country and home to numerous other data center providers, including Equinix and Telx, both major AMS-IX USA competitors.
After launching a distributed exchange in three data centers in the New York market – two in Manhattan and one in Piscataway, New Jersey – AMS-IX USA lit up a PoP at Digital Realty’s 365 Main facility in San Francisco in September. The big benefit of a distributed exchange is that users in different data centers can make peering agreements with each other to reduce inter-data center transit costs.
Each distributed AMS-IX exchange is limited to a single metro. The three New York locations are one exchange. The San Francisco PoP is going to be linked to another one at a CoreSite data center in San Jose. AMS-IX said it was also working with another data center provider in the Chicago market to establish a second PoP in the key Midwestern metro.
“Tenants of CME’s Cermak Hosting Facility benefit from reduced transit costs by peering directly with AMS-IX Chicago,” AMS-IX CEO Job Witteman said in a statement. “This means they also have the ability to peer with parties that are connected to the AMS-IX Chicago from another data center location.”
The role of the Open-IX Foundation, originally started by a group of big network users such as Netflix and Google, is standardization. The foundation has released a set of certification requirements for Internet exchanges and for the data centers that host them. Certified exchanges hosted by certified data centers can link to each other to become distributed exchanges.
AMS-IX Chicago is “compliant” with the Open-IX standards, meaning the company considers it compliant but it has not actually gone through the certification process. Its Bay Area exchange has the same status. | | 3:30p |
New Post-Data-Center Model Takes Shape in the Cloud Rick Braddy is Founder, President and CEO of SoftNAS, the former Chief Technology Officer of the Citrix Systems XenApp and XenDesktop group and a former Group Architect with BMC Software.
Millions of words have been written in recent years about how the cloud revolution is changing the way that computing at almost every level is being done.
But I was particularly struck by the analogy drawn by Nicholas Carr in his book, “The Big Switch: Rewiring the World, from Edison to Google.”
As Carr explains, businesses in need of electrical power in the late 1800s were required to build and manage their own power plants. But by the 1900s, most businesses had moved away from power plants and taken advantage of commercial power distribution.
Turning to today, Carr points out that the Internet is analogous to the power distribution system, because it’s an information distribution system that’s rewiring how data is accessed and delivered.
So, from my perspective, cloud-based Infrastructure-as-a-Service (IaaS) and Software-as-a-Service (SaaS) are the new application and IT infrastructure “power plants,” which replace the need for most traditional data centers.
Powering the cloud with agile solutions
Indeed, the new post-data-center model that’s currently taking shape and taking hold in the cloud depends on IaaS, which enables companies to rent the infrastructure needed to power IT systems, data and applications in a truly flexible and cost-efficient way.
Best-of-breed commercial applications are also available, thanks to the SaaS rental model, which eliminates the need for software development, installation and maintenance in many cases.
Taken together, both IaaS and SaaS provide more agile solutions, particularly when compared to the traditional in-house approach, which has typically involved a protracted, expensive procurement and deployment cycle.
Virtualization the ‘secret sauce’ for IaaS
This new post-data-center model makes tremendous sense for several reasons. But the key reason, in my view, is that virtualization has changed how IT infrastructure is organized; in the process, this has helped pave the way for IaaS to become a broad market reality.
More specifically and technically, application workloads and data now run on top of the virtualization layer, and they no longer require direct bonding to the underlying hardware.
To put it simply, virtualization is the “secret sauce” for IaaS.
Virtualizing commercial applications occurred at the presentation and desktop layers first (e.g., Citrix); this was followed by virtualization of server workloads (e.g., VMware); and virtualization of data and data delivery is now taking place. Once everything is virtualized – applications, compute workloads and data – companies will be able to deploy a technology strategy that unites IT and the business, because it will meet all but the most demanding and unusual business needs. I say “most” because supercomputing will still require dedicated hardware.
Beyond traditional, capital-intensive data centers
Bearing all this in mind, I believe it’s time for CIOs to stop justifying their traditional – and extremely capital-intensive – data centers.
And, to be fully successful and effective, they need to shift their technology emphasis away from in-house IT infrastructure and toward the IaaS and SaaS cloud rental model.
The new day-in and day-out goal absolutely must be delivering “IT-as-a-Service.”
That means leveraging a combination of IaaS and SaaS; shifting focus in order to meet service-level agreements (SLAs), recovery point objectives (RPOs) and recovery time objectives (RTOs); reducing IT project backlogs; and providing better IT services, rather than just running a series of expensive and outdated IT “power plants.”
The ultimate objective for IT in the 21st century is no longer just keeping the lights on. Instead, it’s to add critical and much-needed strategic value to the business.
That said, it’s going to be increasingly difficult to grow the top or bottom lines unless CIOs are at least willing to take the first step with a hybrid cloud that’s more than an off-site backup and business continuity target – one that runs the company’s B2B, end-user apps and even virtual desktops in the cloud.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 4:00p |
Open Compute and the Enterprise Ever wish you could start with a clean slate and build your data center from the ground up with the most optimized equipment? Well, a few years back, Facebook engineers did just that – designing and building their own servers, power supplies, server racks and battery backup systems. The result was the Facebook hyperscale data center in Prineville, Oregon. Then, in 2011, they open sourced the specs for the hardware.
This move, based on the open source methodology used for years in software development projects such as Linux, has grown into an open source hardware movement and a foundation called the Open Compute Project. Facebook was joined by numerous tech companies that wanted to strip away the fluff and get to the bare bones of the hardware.
This month at the Orlando Data Center World conference, Craig Finch, Principal Consultant at Rootwork InfoTech, will participate in a panel called “Open Compute: Not Just Hyperscale Anymore.” Data Center Knowledge asked him a few questions about Open Compute in the enterprise space.
“Enterprises today are facing problems that the hyperscale operators faced several years ago,” Finch said. “Therefore, it is wise to consider the solutions adopted at hyperscale. Major players have committed to the Open Compute Project, indicating that it has the potential to lower the total cost of ownership for large-scale IT deployments.”
Advantages of using Open Compute hardware
Finch explained, “In theory, Open Compute hardware should be less expensive. Hardware would become a commodity, with identical, interchangeable hardware available from several vendors who compete only on price. Open Compute hardware is also simpler and more standardized, which should cause fewer compatibility problems with hardware and drivers.” Facebook recently said that Open Compute has saved them billions of dollars.
Servers seem to be the most popular implementation for Open Compute. “So far, the Open Compute server seems to be the most widely available component,” Finch said. “However, there may be more room for OCP to have a major impact in the storage market. It is already possible to buy an enterprise-grade “white box” server at a competitive price, but existing enterprise storage solutions tend to be proprietary and very expensive.”
The future of the adoption of Open Compute Project in the enterprise
“The size and number of companies joining the effort are not as important as what those companies provide. It is critically important that major operating system and hypervisor vendors (e.g. Citrix, Microsoft, Red Hat, Suse, Ubuntu, VMWare) commit to supporting OCP hardware. If the operating system or hypervisor has mature, stable drivers for OCP hardware, enterprises will start deploying OCP hardware if they see a cost advantage,” Finch said.
Discuss trends in Open Compute
Want to explore more on this topic? Attend the session “Open Compute: Not Just Hyperscale Anymore” or dive into any of the other 20 topical sessions on industry trends curated by Data Center Knowledge at the event. Visit our previous post, Data Centers and Cloud: A Perfect Storm.
Check out the conference details and register at Orlando Data Center World conference page. | | 4:29p |
Supercomputing Conference 2013 (SC13) The Supercomputing Conference 2013 (SC13) will be held November 17 – 22 at the Colorado Convention Center in Denver, Colorado.
Spotlighting the most advanced scientific and technical applications in the world, SC13 will bring together the international supercomputing community for an exceptional program of technical papers, tutorials and timely research posters.
The SC13 Exhibition Hall will feature exhibits of the latest and greatest technologies from industry, academia and government research organizations; many of these technologies will be seen for the first time in Denver.
For more information – including speakers, sponsors, registration and more – follow this link.
To view additional events, return to the Data Center Knowledge Events Calendar.
| | 4:36p |
IBM SoftLayer Brings Melbourne Data Center Online IBM has officially launched the first SoftLayer cloud data center in Australia. The company first announced the plan to launch operations out of a Digital Realty Trust facility in Melbourne in August.
The site replicates SoftLayer cloud data centers elsewhere around the globe. It has capacity for more than 15,000 physical servers and will provide 10Gbps network connections to the SoftLayer cloud.
IBM continues its $1.2 billion global investment to expand its physical cloud footprint. The plan calls for cloud data centers in all major geographies and financial centers. This launch follows recent launches in Toronto, London and Hong Kong, all with an initial capacity for 15,000 physical servers and room to grow within Digital Realty facilities.
Still left on the global tour are China; Washington, D.C., and Dallas (for federal customers); India; and Mexico City. The company plans to expand in the Middle East and Africa in 2015.
The facility provides Australians with in-country data residency as well as a faster way to reach the cloud locally. It builds out SoftLayer’s Asia Pacific presence further and complements data centers in Hong Kong and Singapore. Asia Pacific customers also gain redundancy options within the region.
“Melbourne is our first data center location in Australia and a significant milestone for SoftLayer,” Lance Crosby, SoftLayer CEO, said. “We can now bring all the benefits and advantages of SoftLayer’s cloud platform to customers in country or to customers looking for an Australian location.”
One customer using services out of the new facility is Atmail, a Queensland-based email-messaging platform company with about 4,500 customers worldwide. “SoftLayer’s new location in Australia is huge for us,” said Mark Phillips, vice president of global sales at Atmail. “Data sovereignty issues are top-of-mind for many of our customers in Australia, so the ability to now move data to this particular region is very advantageous to us.”
Other SoftLayer customers in the region include Fluccs, Rightship, Loft Group, HotelsCombined, Digital Market Square, Bugwolf, Cartesian and Portland Software. | | 5:09p |
Docker Buys Testing Tools Startup Koality Docker, whose platform automates application deployment on any in-house data center or cloud infrastructure using Docker containers, has acquired Koality, a small startup with a software testing platform for continuous integration.
Koality has worked with Docker extensively in the past. Both the team and the product complement Docker. The acquisition furthers Docker’s strategy to have a full suite that helps developers take an application from development to deployment in production and brings new talent on board.
The Koality team and technology will be integrated into the development efforts behind Docker Hub Enterprise, a product initiative that will allow enterprise IT teams working on distributed applications to collaborate on their modular components, while maintaining them in a private software repository behind their firewall.
Launched as an open source project less than two years ago, Docker the company brought its first production-ready commercial release only in June, but its influence on the data center is already palpable. CEO Ben Golub discussed the company’s impact with Data Center Knowledge recently. Docker raised a $40 million Series C in September to help it further develop its products and ecosystem.
Koality’s software is popular within the Docker community. It is used to significantly shrink continuous integration cycles. The technology provides users with an intuitive toolset that simplifies workflow complexity around managing software versioning, upgrading and updating.
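As a generic illustration of the kind of container-based continuous-integration step being described (not Koality’s actual product or API), the sketch below builds an image from a repository’s Dockerfile and runs the test suite inside a throwaway container, so every change is verified in a clean, reproducible environment.

```python
# Generic container-based CI step, sketched with the standard docker CLI.
# Image tag, repo path and test command are hypothetical placeholders.
import subprocess
import sys

def run_ci(repo_dir=".", image_tag="myapp-ci:latest"):
    # Build an image from the repository's Dockerfile
    subprocess.run(["docker", "build", "-t", image_tag, repo_dir], check=True)
    # Run the tests in a disposable container; --rm removes it afterward
    result = subprocess.run(["docker", "run", "--rm", image_tag, "python", "-m", "pytest"])
    return result.returncode  # non-zero means the change should be marked as failed

if __name__ == "__main__":
    sys.exit(run_ci())
```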
“There’s been an obvious synergy between Docker and Koality since our first meeting; it was clear how much we could accomplish together,” said Jonathan Chu, CEO and co-founder of Koality. “The entire team is looking forward to using our technology and expertise to help fuel the Docker movement – one that will shape the future of application development.”
The capabilities will allow Docker to form more services around its popular open source Docker containers and help the company monetize the technology. The company said that eliminating these convoluted workflows has spurred application innovation, even in highly regulated industries that require transparent processes to ensure that rigid security and compliance guidelines are met.
“As we surveyed our ecosystem to see how our technology partners were helping enterprises build distributed applications, Koality stood out in terms of its flexibility to integrate in with existing toolchains and processes while delivering practical, intuitive solutions around the application development lifecycle,” Golub said. “As an added bonus, the members of the Koality team came out of Palantir and also acquired a sensitivity for the issues facing some of the most security conscious organizations in the world.”
Docker Hub is the company’s centralized hub of Docker resources. It has more than 40,000 community-contributed “Dockerized” applications available to those using the open platform. Content in Docker Hub has tripled over the past three months, the company said.
Another recent bid to extend capabilities and talent came last July, when Docker acqui-hired Orchard Labs, a two-man operation behind an open source container orchestration tool called Fig and a hosted Docker solution. | | 5:23p |
Rackspace to Provide Fanatical Support for Google Apps Rackspace is now providing its trademark Fanatical Support for Google Apps. The service provider will resell and support the Google for Work app portfolio. It already provides support for Microsoft business software.
Rackspace has competed with Google on the productivity apps front, having acquired cloud-based applications such as Webmail, and it offers collaboration tools and storage, much like Google. But the company said it didn’t see Google as a competitor. The move is about making options available, and about Rackspace’s key differentiator in the cloud space: support. The company hopes to compete with commoditized public clouds through support, which is its biggest value add to the equation.
“Rackspace strives each day to provide a world-class service experience on top of best-in-class technologies, and our new managed service offering of Google Apps for Work is a prime example,” said Taylor Rhodes, president and CEO of Rackspace. “Companies want and need a trustworthy provider to offer technical expertise and support to help them succeed with their cloud applications. This partnership is an exciting step forward in advancing the future of cloud-based office productivity and collaboration.”
Rackspace does support many other vendor technologies from the likes of VMware, Microsoft, Cisco and EMC.
The partnership with Google reflects a larger trend of hosting providers moving up the stack to offer other business services, such as email and productivity applications. Rackspace will handle migration, deployment and account management services for customers purchasing Google Apps for Work.
The apps recently underwent a rebranding to help drive business adoption. The suite includes Gmail, Google’s cloud storage service Google Drive, Hangouts and office productivity applications.
“This is an ideal time for Rackspace to announce support of Google Apps for Work, as SMBs are looking to the cloud now more than ever before to support the need for office and collaborative applications that can be accessed from anywhere,” said Chris Chute, research vice president, Global SMB Cloud and Mobility Practice at IDC. “Rackspace’s focus on making SMB businesses successful through its managed cloud services and Fanatical Support team is an advantage for smaller customers who many times require this type of assistance in order to take advantage of the cloud productivity market.”
The Fanatical Support option has been added to Google Apps for Work purchased directly from Google, from Rackspace or from Google channel partners. It is not available through the Rackspace channel, but might be added in the future. | | 5:32p |
IoT Networking Consortium Membership Jumps The Open Interconnect Consortium, an industry association focused on networking for the Internet of Things, announced that its membership has grown to 32 companies.
The OIC seeks to define connectivity requirements to improve interoperability between billions of devices making up the IoT. This standard will be an open specification that anyone can implement and will be easy for developers to use.
The standard will include IP protection and branding for certified devices (via compliance testing) and service-level interoperability. There will be an open source implementation of the standard.
Open consortiums currently rule the roost in all things internet infrastructure, with organizations popping up across the entire stack to help drive innovation through openness.
OIC is a young organization, founded in July. The five founding companies were Atmel, Dell, Intel, Samsung Electronics and Wind River.
The OIC board comprises a diverse leadership drawn from companies that span a broad range of industry verticals, with employees from Samsung, Intel, MediaTek and Cisco.
New member companies include Acer, ActnerLab, Allion, Aepona, Cisco, Cryptosoft Ltd, Eyeball Networks, Global Channel Resource, Gluu, IIOT Foundation, InFocus, Laplink Software, Mashery, McAfee, MediaTek, Metago, NewAer, Nitero, OSS Nokalva Inc., Realtek Semiconductor Corp., Remo Software, Roost, SmartThings, Samsung Electro-Mechanics, Thug Design, VMC and Zula.
“We are following a proven path of innovation with the OIC by encouraging industry-wide collaboration, and our board members represent our commitment to provide a standard across a broad range of market sectors facing challenges from emerging IoT technology trends,” said Jong-Deok Choi, executive vice president and deputy head of Software R&D Center at Samsung and OIC president. “Our leadership and growing membership will create a single standard to solve connectivity and interoperability challenges in order to support the billions of connected devices coming online.” | | 6:00p |
Vantage Closes Wholesale Deal in Santa Clara MarkLogic has selected Vantage Data Centers for high-density wholesale data center space in Santa Clara, California. MarkLogic is an enterprise NoSQL database platform company that sells to large enterprises. Terms of the lease were not disclosed, but it is a wholesale deal with room for MarkLogic to expand further in the data center as the company grows. The company is currently migrating to Vantage.
MarkLogic offers a schema-agnostic Enterprise NoSQL database coupled with powerful search and flexible application services. It sees its chief competition as Oracle.
“MarkLogic’s data center needs are entirely to support our engineering process,” said Jeff Thomas, senior director of IT and facilities at MarkLogic. “Every year we turn out a major release of our product.”
Thomas said that the company had outgrown its previous retail-colocation data center after using it for eight years.
MarkLogic is a prime example of a retail colocation customer “graduating” to wholesale. Vantage continues to go after these types of customers looking to make the switch; the retail-to-wholesale angle is something it has pitched before.
“We chose Vantage to ensure our infrastructure could quickly adapt to meet the rapidly changing needs of our business,” said Thomas. “Vantage runs a highly efficient facility and has a strong reputation for excellent customer service. Most importantly, we now have the ability to scale quickly within a single data center footprint at Vantage.”
Vantage’s campuses in Silicon Valley, California, and Quincy, Washington, include four enterprise-grade data centers totaling over 100MW of potential capacity. | | 6:30p |
Spire Healthcare Moves to Telehouse North in London Telehouse helped Spire Healthcare with an extensive data center migration leading up to Spire’s recent $1 billion IPO. Telehouse recently completed HIPAA compliance and has been targeting the healthcare vertical. Spire represents a major win in that space.
The goal was to set up a centralized ICT infrastructure in the UK within Telehouse’s data center. Spire migrated infrastructure to Telehouse North, including a ‘UK First’ core infrastructure for Spire’s new client management software.
The process of moving its infrastructure was completed in 18 months. The companies say that the new infrastructure resulted in a 50 percent reduction in lead-time for Spire and better client satisfaction.
“We saw Telehouse as the best provider to ensure our data was secure,” said Phil Peplow, head of IT at Spire. “Additionally, the exceptional level of service Telehouse provides is a huge factor in transforming what could have been a difficult move into a very smooth transition.”
Telehouse North is a seven-floor steel and concrete construction with 104,600 square feet of floor space. It benefits from an on-site primary substation at the nearby Telehouse Docklands campus.
Originally opened in 1990, Telehouse North is the primary home of the London Internet Exchange (LINX). The building has rich connectivity, with over 100 carriers present. | | 8:35p |
SolidFire Raises $82 Million in Series D Funding
This article originally appeared at The WHIR
Flash storage company SolidFire announced Tuesday that it has secured $82 million in series D funding, bringing its total funding to $150 million. SolidFire also announced a product line expansion to include enterprise storage solutions for less than $100,000.
SolidFire’s strategy is to expand its market by lowering the entry price to its flagship SF Series cloud-scale flash arrays. The new SF2405 starts at 35TB of effective capacity and 200,000 predictable IOPS, while the SF4805 doubles the SF2405’s density and delivers SolidFire’s lowest price per gigabyte.
The funding round was led by new investor Greenspring Associates, as well as a major sovereign wealth fund. Current investors NEA, Novak Biddle, Samsung Ventures and Valhalla Partners also participated.
“Just as flash has disrupted the legacy disk storage market, SolidFire continues to disrupt the all-flash array market by delivering a storage platform that goes far beyond the basics of raw flash performance,” SolidFire’s founder and CEO Dave Wright said. “Additional funding allows us to continue to extend SolidFire’s technical advantages over the competition and will deepen our sales, marketing and channel enablement to meet the growing global demand for SolidFire’s leading all-flash storage architecture.”
SolidFire’s revenue has grown 50 percent quarter over quarter in 2014, after 700 percent growth in 2013 as flash has swept through the cloud market. The company also more than doubled its staff in the past year and is hiring again. Efforts in several of the areas mentioned by Wright can already be seen in news from SolidFire over the past year.
On the technical side, the company launched a new block storage solution for OpenStack clouds in May, and to better address global demand it announced a new Asia-Pacific headquarters in Singapore in April. SolidFire also partnered with ServInt in January to upgrade ServInt’s VPS offering.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/solidfire-raises-82-million-series-d-funding |