Data Center Knowledge | News and analysis for the data center industry
Tuesday, June 9th, 2015
8:00a
Verne Improves Data Center Connectivity in Iceland With Level 3 Deal
Moving to significantly expand the breadth and scope of networking services available in its data center facilities in Iceland, Verne Global today announced a strategic alliance with carrier Level 3 Communications to improve data center connectivity at its campus.
Under terms of the alliance, Verne Global CTO Tate Cantrell said, the provider of hosting facilities primarily in Europe will now be able to offer customers access to Tier I carrier services.
“We also have a 45-acre campus on a former NATO base that enables submarine cables to terminate directly in our data center,” said Cantrell. “But this alliance represents a new stage of connectivity being added to our campus.”
Sundeep Samra, product strategy manager for data center and content delivery networks at Level 3, said the carrier’s data center connectivity services will include everything from CDN capabilities to complete managed network services for Verne customers.
Verne raised $98 million to expand its Iceland data center campus early this year.
The company has been making the case for hosting applications in its data center facilities because of access to inexpensive hydroelectric power provided by a vast network of dams initially built to support aluminum smelting, and a cool climate that enables use of free cooling year-round. According to Cantrell, access to those energy resources enables Verne to guarantee low-cost energy contracts for as long as 20 years.
While Iceland itself is subject to volcanic activity, the NATO facility is built on the side of the island facing west. This means volcanic ash from any eruption generally blows east toward Europe, rather than toward Verne’s data centers.
Because Iceland is part of Europe, Verne's data centers are certified to meet European security and privacy standards, Cantrell said. Most customers there seek out Verne for power that is less expensive than what is typically available in the rest of the continent.
Data sovereignty regulations are also making Verne a more attractive option for companies looking to expand into European markets, said Cantrell. He also noted that regulations such as the US Patriot Act tend to push European companies looking to do business in the US toward data centers in close proximity to North America but outside the reach of US jurisdiction.
Of course, Iceland is not the only country where data centers operate in and around the Arctic Circle; but it does have a local government that is highly committed to providing low tax rates for investments in strategic industries such as data centers. The degree to which Verne can convert all those attributes into a thriving hosting industry remains to be seen. However, with support from a Tier I carrier such as Level 3, the odds of making that transition have just improved.

12:00p
MapR Improves Real-Time Analytics With Latest Hadoop Release
MapR has released version 5.0 of its Hadoop distribution with enhancements around powering real-time analytics for business, security, and self-service data exploration.
There are several new features in the major platform upgrade aimed at providing a single platform for a variety of data needs. The company laid the groundwork across several incremental releases, improving distributed systems capabilities and introducing new functionality like Apache Spark and Drill. The 5.0 release brings that work together and extends it with key integrations such as Elasticsearch.
New auto-provisioning templates have also been introduced to speed deployment of Hadoop clusters on infrastructure of choice, whether it be in-house, by a service provider, or in private or public cloud.
While big data is often used to identify historical trends, there is increasing demand for real-time analytics and the ability to tune a business in response to real-time data. Many Hadoop players have improved their platforms with a focus on real-time needs. MapR's platform is meant to handle both kinds of workloads side by side on a single platform.
“The theme in 5.0 is around data agility and helping companies respond faster with informed data,” said Jack Norris, chief marketing officer for MapR. “A key aspect of that is real-time applications, as well as bringing agility in terms of administrators. All of this happens in the backdrop while maintaining a secure environment.”
The 5.0 release extends the MapR Real-time, Reliable Data Transport framework, used in the MapR-DB Table Replication capability, to deliver and synchronize data in real time to external compute engines. The MapR-DB capabilities are similar to an enterprise-grade HBase used for high-scale, low-latency real-time applications.
The first supported external compute engine is Elasticsearch, which enables synchronized full-text search indices automatically without writing custom code. Elastic raised $70 million last year and has been on a tear, winning big names such as Comcast, eBay, Facebook, Mayo Clinic, Foursquare, SoundCloud and Tinder; and connecting deeper with the Hadoop space through a connector.
“We are pleased to be working with MapR on integrating its real-time delivery framework with Elasticsearch,” said Jobi George, global partner director at Elastic, in a press release. “Customers want search indices automatically synchronized with the latest data updates. The MapR architecture makes this easier for application developers who need to let their end users search for data almost immediately after it is updated.”
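To make the "no custom code" point concrete, the snippet below is a rough sketch of the manual indexing step such an integration automates: pushing an updated record into Elasticsearch with the elasticsearch-py client. The endpoint, index name, and document fields are hypothetical, and parameter names vary somewhat across client versions.

```python
from elasticsearch import Elasticsearch

# Hypothetical cluster endpoint and index name.
es = Elasticsearch(["http://localhost:9200"])

doc = {
    "customer_id": "cust-1001",     # illustrative fields only
    "city": "Reykjavik",
    "last_order_total": 214.50,
}

# Without an auto-sync framework, application code has to push every
# update into the search index itself and keep it consistent with the
# system of record.
es.index(index="customer-profiles", id=doc["customer_id"], body=doc)

# Query the freshly indexed data with a simple full-text match.
hits = es.search(index="customer-profiles",
                 body={"query": {"match": {"city": "Reykjavik"}}})
print(hits["hits"]["total"])
```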
MapR 5.0 also includes comprehensive security auditing, Apache Drill support, and the latest Hadoop 2.7 and YARN features.
That auditing covers all data accesses via JSON log files, enabling extensive reporting, validation, and quick analysis with Apache Drill.
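Because the audit trail is plain JSON, it can also be inspected with ordinary tooling in addition to being queried through Drill. The short sketch below uses hypothetical field names (the actual MapR audit schema may differ) to show the kind of quick per-user summary such logs make possible.

```python
import json
from collections import Counter

# Hypothetical audit records in JSON-lines form; field names are illustrative only.
sample_log = """
{"timestamp": "2015-06-09T12:01:44Z", "uid": "alice", "operation": "GETATTR", "resource": "/data/claims/2015.json", "status": 0}
{"timestamp": "2015-06-09T12:01:45Z", "uid": "bob",   "operation": "READ",    "resource": "/data/claims/2015.json", "status": 0}
{"timestamp": "2015-06-09T12:02:10Z", "uid": "alice", "operation": "DELETE",  "resource": "/data/claims/old.json",  "status": 13}
""".strip()

records = [json.loads(line) for line in sample_log.splitlines()]

# Count operations per user -- the kind of quick validation that a
# SQL-on-JSON engine such as Apache Drill could also run directly on the files.
per_user = Counter((r["uid"], r["operation"]) for r in records)
denied = [r for r in records if r["status"] != 0]

for (uid, op), n in sorted(per_user.items()):
    print(f"{uid:8} {op:8} {n}")
print(f"access failures: {len(denied)}")
```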
Organizations are increasingly deploying multiple applications on a single Hadoop cluster, said Norris, adding that one in five MapR customers deploy more than 50 separate applications on a single cluster. The latest MapR release auto synchronizes storage, database, and search indices to support complex, real-time applications.
The enhancements follow several others, such as better clustering across distributed systems, the addition of Apache Spark for real-time processing, and recently added support for Apache Drill for self-service data exploration. A Drill Views feature has also been added, providing secure, field-level access to data in files so that specific analysts can analyze only the data they are authorized to see.
The company rolled out on-demand training for Hadoop earlier this year, which saw over 20,000 participants.

1:00p
Edge Data Center Firm 365 Sells Dallas, N. Virginia Facilities
Colocation service provider 365 Data Centers has sold two data centers from its extensive asset portfolio and upgraded power and cooling systems to increase capacity in several other facilities.
The company’s business model centers on providing data center capacity close to densely populated areas for companies that need to deliver digital content to end users. These are so-called “edge data centers,” where web-content publishers or cloud service providers cache content to speed up its delivery to their users. Demand for edge data center capacity is on the rise, and a handful of companies have chosen to pursue the opportunity as a business strategy.
365’s focus is on Tier II US markets. The company avoids the biggest data center markets in the country, such as New York, Dallas, and Silicon Valley, where it would be competing with the data center provider industry’s largest players.
It sold its data centers in Dallas and Reston, Virginia, both of which are considered Tier I markets. Colocation and managed services provider InfoRelay Online Systems bought the Reston facility. 365 did not say who its Dallas data center went to.
Both facilities are relatively small. The Dallas data center is about 13,000 square feet, and the Reston one is just over 11,000 square feet.
The edge data centers that underwent infrastructure upgrades are in Cleveland, Detroit, and Buffalo, New York. The company also recently announced it had upgraded its Nashville facility, home to the city’s first internet exchange NashIX.
The current iteration of 365 was formed in 2012, when the company raised funding and acquired 16 data centers from Equinix, which wanted to get rid of some facilities it had gained through its acquisition of rival Switch & Data. The company was then called 365 Main, named after the big San Francisco colocation data center and carrier hotel now owned and operated by Digital Realty.
Prior to the acquisition of the Equinix facilities and its relaunch, 365 had been dormant for about two years, following the sale of its previous portfolio to Digital Realty.
365 raised a $55 million credit facility in September 2014.

3:30p
Embrace Hadoop for Handling Your Sensitive Data
Dale Kim is the Director of Industry Solutions at MapR.
If you've followed the big data buzz in the last few years, then you are probably familiar with Apache Hadoop and its growing popularity. You might know it as a great system for cost-effectively running large-scale analytics and processing. Hadoop has evolved significantly from its early days, when it was used for internet search indexing, into a framework that is valuable for many different applications and a wide range of enterprises.
Some of the more popular uses for Hadoop today include data warehouse optimization, fraud/anomaly detection, recommendation engines, and clickstream analysis, all of which can involve personally identifiable information (PII) such as social security numbers and credit card numbers. How does a framework built for search indexing become applicable to vastly different use cases? Hadoop's versatility is enhanced by the many open-source projects added over the years, including Apache HBase, Apache Hive, Apache Mahout, and Apache Pig. Those help on a functional level, but what about security?
Due to Hadoop's growing proliferation, its security capabilities have come under scrutiny lately. Questions have been raised about whether Hadoop's security is ready for production use; the suggestion that it is not is an unfortunate mischaracterization. If organizations new to Hadoop hear that there is significant risk of exposure, they will likely delay their adoption. In the meantime, enterprises continue to face serious big data challenges while resorting to other solutions that might not adequately address their issues.
The question is not whether Hadoop is ready for secure environments. It already runs in some of the most security-conscious organizations in the world, in financial services, healthcare, and government. You can find numerous case studies on the internet. The real issue is identifying the right approach for your specific environment.
In some deployment models, organizations fence off a Hadoop cluster with firewalls and other network protection schemes and only allow trusted users to access it. This is the most basic type of implementation that does not necessarily depend on specific security capabilities in Hadoop. As an extension to this, a model can prohibit direct login to the cluster servers, and users are given data access via edge nodes combined with basic Hadoop security controls. In a more sophisticated approach, native Hadoop security controls are implemented to give access to more users while ensuring any data access is performed by authorized users. In still more advanced environments, Hadoop security capabilities are fully deployed in conjunction with monitoring and analytical tools on Hadoop clusters to detect and prevent intrusion and other rogue activities.
The fact that organizations are using Hadoop on sensitive data today strongly supports Hadoop's legitimacy. Therefore, it is worthwhile to pursue a deeper understanding of its security capabilities by talking to Hadoop vendors and third-party security providers, just as organizations should do for any new deployment. Document what matters to you first, and then look for specific features that support those priorities. They should largely mirror the requirements you currently have in your other enterprise systems.
What capabilities are available in Hadoop? First of all, authentication is always required for secure data, and there's Kerberos integration as a start, along with alternate and enhanced authentication capabilities from some Hadoop and third-party vendors. Second, authorization, or access controls, in Hadoop can grant and deny permissions for accessing specific data. Third, auditing can be done in a variety of ways in Hadoop to handle business requirements such as analyzing user behavior and achieving regulatory compliance. Finally, encryption is supported, though it is an often misunderstood capability because it is sometimes misused as a means of access control. Rather, it should be used to protect data in motion (data sent over the network) and data at rest, so that data remains protected even if physical storage devices are stolen. A specific area of encryption for data at rest is obfuscating sensitive elements in files, essentially making the data non-sensitive while retaining analytical value. This type of encryption is handled by a variety of third-party vendors for Hadoop.
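As a minimal illustration of the authentication point, the sketch below makes a Kerberos-authenticated call to Hadoop's WebHDFS REST interface over HTTPS, which also touches the data-in-motion concern. It assumes the requests and requests-kerberos Python libraries, an existing Kerberos ticket (for example from kinit), and a hypothetical NameNode host, directory, and CA bundle path; it is not specific to any one Hadoop distribution.

```python
import requests
from requests_kerberos import HTTPKerberosAuth, REQUIRED

# Hypothetical NameNode endpoint and directory; HTTPS keeps the request
# and response encrypted in motion.
NAMENODE = "https://namenode.example.com:50470"
PATH = "/data/claims"

# A valid Kerberos ticket is assumed to exist already (e.g. via `kinit`);
# the client authenticates with it rather than with a password.
auth = HTTPKerberosAuth(mutual_authentication=REQUIRED)

resp = requests.get(
    f"{NAMENODE}/webhdfs/v1{PATH}",
    params={"op": "LISTSTATUS"},            # standard WebHDFS operation
    auth=auth,
    verify="/etc/security/ca-bundle.pem",   # hypothetical CA bundle path
)
resp.raise_for_status()

# Print owner and permission bits -- the authorization side of the story.
for entry in resp.json()["FileStatuses"]["FileStatus"]:
    print(entry["pathSuffix"], entry["owner"], entry["permission"])
```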
There are several options for handling access control within Hadoop, and one challenge today is that no universal standard exists. This means you must do a bit more investigation to determine what option is right for you. Some technologies take a “build-it-as-you-go” or “follow-the-data” approach, and some take a data-centric approach. Fortunately, this lack of standards should not deter you because the various approaches simply mean different levels of people and processes need to be applied to data security. That’s no different than the practices we’ve applied to other enterprise systems.
More than anything, organizations should be comfortable that production environments with sensitive data are already running on Hadoop, and the security capabilities are only getting better.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

5:08p
Mesosphere's Data Center OS Now in GA, Including Free Version
Mesosphere has launched its Data Center Operating System into general availability and announced a free community edition of the software available in the cloud, alongside an enterprise edition that can be deployed anywhere.
Based on the open source Apache Mesos project, the data center OS abstracts and manages all IT systems holistically to help IT organizations dramatically increase utilization rates of their infrastructure, Mesosphere CEO Florian Leibert said.
“Today data centers are made up of overprovisioned silos,” he said. “We can get utilization rates in those environments to 80 to 85 percent.” Most utilization rates inside data centers today still hover around 20 percent, according to Leibert.
A public beta of Mesosphere came out in May as a cloud service available via Amazon Web Services and Microsoft Azure.
Originally developed at the University of California at Berkeley, open source Mesos has been employed at web-scale companies such as Twitter and Airbnb. At its core it enables IT organizations to treat server, storage, and networking resources as a single pool of logical resources that can all be addressed via a common set of APIs.
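As a rough sketch of what "a common set of APIs" over pooled resources can look like in practice, the example below submits an application definition to Marathon, the scheduler Mesosphere builds on top of Mesos, via its REST API. The endpoint, app id, and resource figures are hypothetical assumptions for illustration.

```python
import json
import requests

# Hypothetical Marathon endpoint inside a Mesos cluster.
MARATHON = "http://marathon.example.com:8080"

# The app definition declares only what the workload needs; Mesos decides
# which machines in the shared pool actually run the three instances.
app = {
    "id": "/web/frontend",                  # hypothetical application id
    "cmd": "python3 -m http.server 8000",
    "cpus": 0.25,                           # fraction of a CPU per instance
    "mem": 128,                             # MB of memory per instance
    "instances": 3,
}

resp = requests.post(MARATHON + "/v2/apps",
                     data=json.dumps(app),
                     headers={"Content-Type": "application/json"})
print(resp.status_code, resp.json().get("id"))
```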
The basic idea behind the data center OS is to not only manage IT infrastructure holistically, Leibert explained, but to apply analytics to make sure that workloads run at the optimal time. For example, most Hadoop jobs can be run overnight because they are batch-oriented, which should free up IT infrastructure for transaction processing applications.
The challenge that IT organizations face today is that every application workload tends to have IT infrastructure resources allocated in isolation, which results in a huge amount of overprovisioned resources.
The biggest challenge facing Mesosphere and other companies attempting to drive various types of software-defined data center architectures, however, may not be the technology itself, but rather the way data centers are organized.
IT teams are generally organized around server, storage, and networking disciplines that often operate in a semi-autonomous manner. One of the objectives behind making a free community edition available, Leibert said, is to give those IT organizations an opportunity to gain some firsthand knowledge of how a data center operating system platform will alter the way they manage their existing processes.
Arguably, inefficiencies in the data center make shifting to some form of SDDC architecture all but inevitable. After all, IT organizations can't afford to keep throwing IT staff at scalability challenges. Not as clear, however, is what form of SDDC will ultimately take hold as IT organizations begin to make this transition in larger and larger numbers.

5:37p
iomart Acquires London Public Cloud Consultancy SystemsUp for £12.5M
This article originally appeared at The WHIR
UK cloud company iomart has acquired public cloud consultancy SystemsUp, a company that offers design and delivery of public cloud services, for £12.5 million (around $19.1 million).
Based in London, SystemsUp is a self-described specialized “IT technical consultancy dealing solely with service providers as trusted partners.” Its partners include Google, AWS and Microsoft. According to its website, its skills include system management, virtualization, business continuity, storage and security.
In an announcement on Monday, iomart said that it has paid £9.0 million in cash as an initial consideration. A further contingent consideration is due in respect of delivery of revenues, the company said, and is estimated to be between £1.0 million and £3.5 million.
“The market for cloud computing is becoming incredibly complex and the demand for public cloud services is increasing at pace,” Angus MacSween, CEO of iomart, said in a statement. “With the acquisition of SystemsUp, iomart has broadened its ability to engage at a strategic level and act as a trusted advisor on cloud strategy to organisations wanting to create the right blend of cloud services, both public and private, to fit their requirements.”
Filling the role of trusted advisor has been an important one for cloud service providers to take on in recent years as competition heats up. Customers with specific needs want to work with service providers that can help them understand the best approach to building their cloud infrastructure.
SystemsUp also has G-cloud designation from Google and is an authorized government partner to Amazon Web Services. A recent survey showed that more UK government agencies would adopt cloud services if cloud service providers could convince them of improved service delivery and reduced costs. A consultancy like SystemsUp could help government agencies in the UK decide the right cloud adoption approach for their workloads.
SystemsUp “will retain its role as an impartial, agnostic, expert consultancy on public cloud strategy” as part of iomart, the company said.
“We have already built up a significant reputation and expertise in helping organizations use public cloud to drive down IT costs and improve efficiency,” SystemsUp managing director Peter Burgess said. “As part of iomart we can leverage their award winning Managed Services offerings to deepen and widen our toolset to deliver a broader set of cloud services, alongside continuing to deliver the strategic advice and deployment of complex large public and private sector cloud projects.”
Iomart acquired London-based hosting and cloud company ServerSpace for £4.25 million in December 2014.
This first ran at http://www.thewhir.com/web-hosting-news/iomart-acquires-london-public-cloud-consultancy-systemsup-for-12-5m

6:19p
Data Center and Rack Power Requirements: the Hybrid Approach
The modern data center is rapidly evolving into an architecture that requires better uptime and more efficient resource utilization. One of the biggest recent shifts involves new requirements around rack and power agility. Data center administrators are continuously looking for ways to better control how power flows into their racks and how well that power is managed.
A key factor to consider for power management is the type of power transfer switch being used. When you examine a modern rack, you’ll commonly see one of the following switches: relay-based Automatic Transfer Switches (ATS) or Static Transfer Switches (STS). However, both of these technologies come with inherent design flaws which can have a negative impact on both performance and reliability.
In this whitepaper, we explore how data centers can overcome the design challenges of ATS and STS with a new hybrid power transfer switch. Hybrid switches combine elements of both approaches to deliver better economics and faster transfer times of 4 to 8 ms. They also incorporate a new kind of rack power control mechanism that helps eliminate the electrical arcing that can lead to transfer failure.
Download this whitepaper today to learn more about this new hybrid technology, which is directly impacting your data center and rack ecosystem.

7:34p
Nutanix Intros Hyper Converged Infrastructure Management Platform, Own Hypervisor
At its inaugural .NEXT conference today Nutanix announced a management platform for its hyper converged infrastructure alongside a much-anticipated implementation of its own hypervisor that’s based on the open source Kernel-based Virtual Machine software.
In addition, the company announced it has split its product portfolio into Nutanix Acropolis infrastructure software and Nutanix Prism management software. Included in the Acropolis platform is a new App Mobility Fabric software that enables IT organizations to convert one hypervisor into another, which Nutanix expects will be used to convert commercial hypervisors, such as VMware's or Microsoft's, to its new KVM implementation that has been hardened for enterprise application environments.
“We want customers to be able to convert hypervisors seamlessly,” Greg Smith, senior director of product and technical marketing for Nutanix, said. “We want the entire infrastructure stack, including virtualization, to become invisible.”
Abstracting various elements of IT infrastructure to be able to manage them all as a single system is a hot topic these days. Also today, a company called Mesosphere launched into general availability its Data Center Operating System, which, as the name implies, treats the entire data center as a single computer.
The degree to which that actually occurs inside any IT organization is likely to vary. But Nutanix has been at the forefront of a shift towards hyper converged infrastructure platforms, where servers and storage are all physically and logically integrated with one another.
To that end the Acropolis Distributed Storage Fabric, included in Acropolis, can now mount volumes as in-guest iSCSI storage for applications with specific storage protocol requirements, such as Microsoft Exchange.
At its core, Nutanix says, the Acropolis platform is more efficient from a storage perspective than other platforms because it includes support for erasure coding storage optimization, while Nutanix EC-X technology reduces the storage required for data replication by up to 75 percent compared to traditional data mirroring technologies.
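To see how a figure like "up to 75 percent" can arise, here is a back-of-the-envelope comparison of protection overhead for 3-way mirroring versus a hypothetical 4+2 erasure-coding scheme. The scheme and the comparison basis are assumptions; the article does not spell out the math behind the claim.

```python
# Hypothetical comparison: 3-way mirroring vs. a 4+2 erasure-coding scheme.
data_tb = 100.0                              # usable data to protect, in TB

# 3-way replication stores every block three times.
replication_raw = data_tb * 3
replication_overhead = replication_raw - data_tb   # capacity spent purely on protection

# Erasure coding with 4 data fragments + 2 parity fragments per stripe.
k, m = 4, 2
ec_raw = data_tb * (k + m) / k
ec_overhead = ec_raw - data_tb

print(f"3-way mirroring: {replication_raw:.0f} TB raw, {replication_overhead:.0f} TB protection overhead")
print(f"erasure 4+2:     {ec_raw:.0f} TB raw, {ec_overhead:.0f} TB protection overhead")
print(f"protection overhead reduced by {100 * (1 - ec_overhead / replication_overhead):.0f}%")  # 75% in this case
```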
The end goal is to elevate the IT management team in a way that enables them to service application requirements on demand without having to manually manage each component of the IT infrastructure environment individually, Smith said.
Providing its own hypervisor based on open source software also goes a long way toward reducing the costs created by relying on commercial hypervisors.
Thanks mainly to that simplification, market research firms such as Technology Business Research are now forecasting that the market for converged infrastructure systems will reach over $19 billion by 2018. What portion of the overall IT infrastructure market that number will represent remains to be seen.
Nutanix, with help from a reseller agreement struck with Dell, has jumped out to an early lead in the hyper converged infrastructure category. But over time it’s not clear whether hyper converged infrastructure is going to dominate the data center or prove to be a momentary trend, once software-only approaches to providing the same capabilities become more robust.