Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, May 19th, 2015

    12:00p
    Two Big Steps Towards OpenStack Interoperability

    Two big steps towards OpenStack interoperability were taken at the OpenStack Summit in Vancouver this week. The first is the introduction of vendor certification, built on the DefCore committee's work defining the core of OpenStack. The second is that OpenStack Identity Federation is now supported by more than 30 cloud vendors. ID Federation provides an identity management framework across OpenStack clouds, making multi-cloud and hybrid cloud scenarios easier.

    Combined with the work the DefCore committee is doing, ID Federation becomes a one-two punch in the bid to create a worldwide OpenStack cloud. Defining the core of OpenStack and certifying OpenStack offerings means unparalleled interoperability between OpenStack clouds, and ID Federation means unparalleled accessibility between OpenStack clouds worldwide.

    The DefCore committee was initially established to define what it means to be an OpenStack cloud and has now introduced a certification program. The program includes a set of interoperability tests for products that want to qualify as “OpenStack Powered”.

    ID Federation was first introduced in OpenStack Kilo. It grew out of work Rackspace performed with CERN, the European Organization for Nuclear Research and home of the Large Hadron Collider (LHC). Rackspace is CERN's cloud partner.

    CERN sits at the center of a gigantic, interconnected network of cloud sites that are not run by the same entity – think of universities and their clouds connected through CERN. “The universities want to use native authentication systems and connect,” said Van Lindberg, VP of Legal at Rackspace and OpenStack board member. “It is one of the largest private clouds in the world. We worked with them to create a means to federate identity.”

    ID Federation means clouds like CERN's can share a common framework for access regardless of where a user resides. As DefCore better defines the standards and tests for what it means to be an OpenStack cloud, OpenStack will become better standardized worldwide, extending the federation possibilities.
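
    Under the hood, Keystone federation works by mapping attributes asserted by an external identity provider onto local users and groups. The snippet below is a minimal sketch of such a mapping, expressed as a Python dictionary in Keystone's mapping-rule format; the group ID and the use of the REMOTE_USER attribute are illustrative placeholders rather than values from the CERN or Rackspace deployments, and a real cloud would register the rules through Keystone's OS-FEDERATION API.

    ```python
    import json

    # Minimal sketch of a Keystone federation mapping: identities asserted by an
    # external identity provider (e.g., a university's IdP) are mapped onto a
    # local group in the hosting cloud, which carries the role assignments.
    # The group ID and remote attribute below are illustrative placeholders.
    mapping = {
        "rules": [
            {
                "remote": [
                    {"type": "REMOTE_USER"}          # attribute asserted by the IdP
                ],
                "local": [
                    {"user": {"name": "{0}"}},       # reuse the asserted username
                    {"group": {"id": "FEDERATED_GROUP_ID"}}
                ],
            }
        ]
    }

    print(json.dumps(mapping, indent=2))
    ```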

    DefCore was formed with the aim of defining the “OpenStack Core,” as chartered by the bylaws and guided by governance. DefCore is still a fledgling effort, according to Lindberg. “I think that it represents the real future of where OpenStack is going,” he said. “For the first time we have an executable specification, or what it starts to mean to be an OpenStack cloud.”

    Decoupling the ability to use a cloud from the vendor providing it means an application runs just fine no matter which data center it is in. “It all runs on one cloud, that is why this is so significant,” said Lindberg. “All those companies have more capabilities, more bandwidth to compete with Amazon.”

    The DefCore Committee was formed during the OpenStack Summit in Hong Kong. Lindberg said defining OpenStack’s core has been tricky. “For a year and a half, the difficulty was deciding how we were going to decide,” he said. “There were lots of ways to create a system that benefits one party more than others, and we don’t want that.”

    After settling the process, DefCore worked on implementation. A draft specification was released by the OpenStack Paris Summit, and at a board meeting in March the committee released the full specification.

    “We have not only the theoretical framework but the seed,” said Lindberg. “Is DefCore as big as we want it to be? No. Not by a longshot. It will take two or three years. I believe that we will get to a place where we have a growing body of core functionality. This core functionality just works everywhere – I believe we will start to see a time where people will use OpenStack cloud owned by different people, in relatively transparent regions similar to AWS regions.”

    Lindberg also believes we will see huge amounts of innovation and work at higher levels in the stack once the core is defined. “All of these companies will try to solve it so they can say they do it better than anyone else.”

    Several cloud providers will announce certification this week.

    12:10p
    MapR Hadoop Distribution Now Shipping With Apache Drill

    Apache Drill 1.0 is now shipping with the MapR Apache Hadoop distribution. Apache Drill is a schema-free SQL engine for big data that opens up self-service data exploration to a wider audience.

    Drill makes self-service SQL analytics available without requiring pre-defined schemas. It eliminates the dependence on IT to get schemas ready prior to exploration – the tool works directly on the data, with no need to process and transform it into a table-like structure first.
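
    As an example of what that looks like in practice, Drill can run SQL directly against a raw JSON file over its REST API. The sketch below is illustrative only: it assumes Drill's default REST port and dfs storage plugin, and the host and file path are placeholders.

    ```python
    import requests

    # Illustrative sketch: query a raw JSON file in place through Drill's REST
    # API, with no schema defined up front. Host, port and file path are
    # placeholders; 8047 is Drill's default web/REST port.
    DRILL_URL = "http://localhost:8047/query.json"

    payload = {
        "queryType": "SQL",
        "query": (
            "SELECT t.user_id, t.event "
            "FROM dfs.`/data/clickstream/2015-05-19.json` t "
            "LIMIT 10"
        ),
    }

    resp = requests.post(DRILL_URL, json=payload)
    resp.raise_for_status()

    for row in resp.json().get("rows", []):
        print(row)
    ```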

    The big pillars of big data are massive analytics at scale, real-time data and interactive querying on the data. “What Apache Drill represents is the interactive query,” said Jack Norris, chief marketing officer, MapR. “While there have been different SQL on Hadoop offerings, what makes this so powerful is it’s a schema free engine.”

    Drill interacts with data both in legacy transaction systems and new data sources, such as Internet of Things (IoT) sensors, web click-streams and other semi-structured data. It supports popular business intelligence (BI) and data visualization tools.

    Data volumes are growing fast, and this is becoming a big issue: customers need to analyze the data but can't see what's in it until a structure has been set up. Drill frees the data for self-service exploration.

    “The availability of Apache Drill in the MapR Distribution is a major milestone for the SQL-on-Hadoop project, which is significant in delivering real-time insights from complex data formats without requiring any data preparation,” said Matt Aslett, research director, data platforms and analytics, 451 Research in a press release. “Apache Drill is an example of MapR collaborating with others as part of the Apache development process on new technologies to expand the Hadoop portfolio.”

    Drill also includes granular security and governance controls required for multi-tenant data lakes or enterprise data hubs.

    “If you’re going to have data exploration being able to provide security is imperative,” said Norris. “As you look at servicing the broad population and moving it into different use cases, often there’s a security aspect. The ability for granular security control is a big deal. The same file can be accessed by different users, and users can access different portions with different permissions.”

    MapR growth has doubled year over year, according to Norris. Perhaps more importantly, the company is seeing acceleration in mission-critical and real-time deployments, with several customers running multiple use cases on a single cluster.

    “That really points to the journey we’ve seen. Many companies start with a cluster for data scientists and experimental use, and then move into production use and real time applications. This growth isn’t about reporting or asking bigger questions, it’s about companies impacting business as it happens. We recognized this at the beginning for production and real-time uses.”

    The company rolled out on-demand training for Hadoop earlier this year, which saw over 20,000 participants.

    12:51p
    Report: Amazon’s Massive $1.1bn Ohio Data Center Project Swaps Townships

    There has been heated competition in Ohio to land a $1.1 billion Amazon Web Services data center project, which was recently reported to be split across Hilliard, Dublin and Orange Township. Columbus Business First reports that Amazon subsidiary Vadata has withdrawn an application to rezone 75 acres in Delaware County’s Orange Township in favor of New Albany.

    Several counties have been vying for the massive project, listed under Amazon subsidiary Vadata. Last week, the New Albany council unanimously approved tax incentives in a bid to attract part of the project. Architect Mark Ford notified Orange Township in a letter on Monday that Project Sandstone’s application is being formally withdrawn. Amazon officials aren’t commenting.

    The Orange Township location is surrounded by homes, and the Columbus Dispatch reports that many residents were concerned about noise from the data centers.

    Amazon is no stranger to Not In My Backyard (NIMBY) opposition; it is also embroiled in a controversy in Northern Virginia over a proposed Dominion power line that would feed a planned data center.

    The tax package offered by New Albany most likely tipped the scales, although Orange Township had offered Amazon a healthy 15-year, 100 percent property tax abatement.

    Dublin offered Amazon land valued at $6.8 million and performance incentives worth up to $500,000 over ten years. Hilliard offered a real estate tax abatement valued at $5.4 million, wage tax rebates and permit fee waivers. Construction in Hilliard has reportedly begun.

    This is in addition to incentives from the Ohio Tax Credit Authority, worth an estimated $81 million.

    The New Albany facility will employ 25 with an annual payroll of $2 million. Amazon could potentially invest $300 million of the planned $1.1 billion project in New Albany. Hilliard legislation suggests its portion of the project will also be a $300 million investment.

    A city community development director said that the New Albany data center will include a 150,000-square-foot building on 68 acres north of state Route 161 and east of Beech Road in New Albany’s International Personal Care and Beauty Campus.

    2:00p
    The Evolution of the Data Center: Shifting from the “Model T”

    Sureel Choksi is President and Chief Executive Officer at Vantage Data Centers. Follow Sureel on Twitter: @sureelc.

    The evolution of the data center is accelerating. The standard builds and designs of 10 or 15 years ago are rapidly giving way to more flexible models that can match an increasingly sophisticated customer base. This change is not unlike the trajectory of the Model T Ford and its cookie-cutter design, which was revolutionary in its era but eventually could not keep up with the pace of society or provide consumers the full spectrum of what they wanted in a vehicle. What does this mean for the data center industry? It means change is underway, and change propelled by smart innovation can be a very good thing.

    New Era, New Definitions

    If you know your way around data centers, you’ll have noticed something very interesting over the past few years. Remember when the wholesale market was defined as a megawatt and up? Now you see demand on the customer side for wholesale deals that start around 500 kW. I’ve even seen reports that define wholesale as 250 kW and above. The once-standard breakpoint of 1 MW is no longer relevant.

    These definitions are changing in part because customers have an increasingly sophisticated understanding of their needs and what they want from the market. Businesses are much savvier now when it comes to data infrastructure strategy, and they have to be in order to stay competitive. The Model T “one-size-fits-all” norm has become a hindrance to companies that want to keep pace with, and even get in front of, tech revolutions like Big Data and cloud. Data explosion, cloud services, and virtualization are some of the key drivers behind what has become a complex data center market.

    Total Cost of Ownership – The Road to Efficient Models

    In conjunction with Big Data, we’ve seen the proliferation of mobile devices, a steadily increasing adoption of social media platforms, machine-to-machine sensors that drive smart cities, and, of course, the Internet of Things (IoT). Every one of these technological advances generates data, and at rates that have no precedent in history.

    The physical infrastructure of a data center has become more expensive in the sense that you need much more of it to house all the data that’s growing exponentially year by year. How does a company grapple with that expense?

    Smart businesses today approach their data infrastructure needs as an exercise in efficiency and cost. Location plays a big role in this, as does power—access to it and the quantity needed to optimize the data center. Businesses today should be asking themselves, “How effective is my power usage?” “Am I using more redundancy than I need?” “Is my data center set up to grow as my business grows?”
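
    One common way to put a number on the first of those questions is power usage effectiveness (PUE), the ratio of total facility power to the power that actually reaches IT equipment. The short sketch below is purely illustrative, and the sample figures are made up.

    ```python
    def pue(total_facility_kw: float, it_load_kw: float) -> float:
        """Power Usage Effectiveness: total facility power / IT equipment power.

        A PUE of 1.0 would mean every watt reaches IT gear; real facilities run
        higher, with cooling and power distribution making up the overhead.
        """
        return total_facility_kw / it_load_kw

    # Illustrative numbers only: the facility draws 1,500 kW, of which
    # 1,000 kW reaches servers, storage and network equipment.
    print(f"PUE: {pue(1500.0, 1000.0):.2f}")  # -> PUE: 1.50
    ```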

    What is a Flexible Data Center?

    One company may want to run its legacy software on servers with redundant backup power, in close proximity to its corporate offices. At the same time, it may prefer to run new applications in the cloud via a direct connection from the local data center, and to run a beta test on an aisle of local servers without any costly redundancy at all.

    Another company may want to outfit its data center with new Open Compute Project racks and servers—hardware intended to drive efficiency through higher power density and reduced hardware redundancy.

    And yet another company may want to hold to a traditional model for its existing data infrastructure, but ensure that it has space to grow into a range of new models.

    Flexibility is just that—meeting customers’ needs and helping them build and run data centers that fit what they do.

    In the old-fashioned model of a data center it was difficult, if not impossible, to provide flexible, mixed-power-use spaces to customers. You could have any “color” data center you desired as long as it was Model T black. Today, providing tailored solutions to each company isn’t just a bespoke, nice-to-have strategy; it’s imperative.

    Data centers have to embrace the spirit of innovation with intelligence and agility. Otherwise, your data center business is likely to one day be preserved in a historical museum with freeze frame images of the past—like the Model T.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    Maxta Updates Hyper-Converged Platform

    Looking to make it simpler to manage hyper-converged systems, Maxta today unveiled MxInsight, a set of tools that provides historical reporting, enhanced policy management, and quality of service (QoS) management for the storage devices embedded in the platform.

    Maxta also announced support for Kilo, the latest update to the OpenStack cloud management framework. The company says it has developed both Nova and Cinder drivers for OpenStack, fully integrated with Kilo’s features.
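
    From a tenant's point of view, a storage backend such as Maxta's sits behind the standard Cinder API, so provisioning looks the same as on any other OpenStack cloud. The sketch below is a generic illustration using the openstacksdk library rather than anything Maxta-specific; "my-cloud" is a placeholder for an entry in a local clouds.yaml file.

    ```python
    import openstack

    # Generic illustration: create a block storage volume through the standard
    # Cinder API. Whichever driver backs the volume is invisible to the tenant
    # making this call. "my-cloud" names an entry in a local clouds.yaml file.
    conn = openstack.connect(cloud="my-cloud")

    volume = conn.block_storage.create_volume(size=10, name="demo-volume")
    print(f"Requested volume {volume.id}, status: {volume.status}")
    ```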

    Finally, Maxta unfurled MxCloudConnect, a remote call home capability that can be used to proactively deliver maintenance and support notifications.

    Maxta CEO Yoram Novick says the platform, which is based on Intel processors, is unique in that it can run any hypervisor in a way that allows IT organizations to scale compute and storage up or out independently of one another.

    “What’s been missing is the storage side of the equation,” says Novick. “Maxta is the missing link for the software defined data center in a box.”

    MxInsight extends those capabilities by providing insights into both real-time performance and capacity utilization via a unified user interface, while at the same time making it possible for IT administrators to set policies such as replication factors, data rebuild priority, and compute/storage affinity at the virtual machine level.

    Locked in battle with system vendors that are several times its size, Maxta is making a case for maintaining server and storage independence in a hyper-converged world. Instead of having to pre-configure systems and hope for the best, Novick says the Maxta platform can be more easily adjusted to support the compute and storage requirements of application workloads that tend to change over time.

    Market research firms such as Technology Business Research (TBR) are forecasting that a major shift toward these types of platforms is underway. By 2018, TBR estimates, converged infrastructure will be worth $19 billion.

    Whether those predictions will pan out, of course, remains to be seen. Maxta, which counts Intel among its investors, is clearly betting not only that they will, but that the shift to hyper-converged systems inside the data center will lead to traditional server vendors being upended by relative upstarts such as Maxta.

    Naturally, it may take a few years yet for the shift to hyper-converged systems to become a new everyday reality inside the data center, especially when you consider how entrenched systems and storage administrators currently are in their respective domains. But as is often the case, a better economic model for deploying integrated sets of server and storage resources may yet wind up changing roles inside the data center, whether administrators like it or not.

    5:00p
    IBM Extends OpenStack Reach

    At the OpenStack Summit, IBM announced it is extending the reach of its OpenStack support to include an implementation running in beta as a service on the IBM SoftLayer cloud.

    Angel Diaz, vice president of cloud architecture and technology for IBM, says this latest implementation of OpenStack is designed to complement existing IBM support for deploying the open source cloud management framework on premise and in dedicated hosting environments.

    IBM also announced today that it is working with Intel to extend OpenStack to take advantage of the Intel Trusted Execution Technology (TXT) to provide hardware monitoring and security controls.

    IBM also took the opportunity afforded by the conference to claim some bragging rights. A survey of 2,255 business and technology decision-makers conducted by Forrester Research finds that, compared to its competitors, twice as many organizations either use or plan to use IBM as their primary hosted private cloud platform, and nearly twice as many firms use or plan to use IBM when implementing multiple-vendor cloud solutions.

    “There are going to be multiple clouds that need to be connected,” said Diaz. “There is no single cloud.”

    Diaz adds that another reason IBM has garnered so much OpenStack momentum is that customers have come to view its OpenStack distribution as a gateway to a variety of higher-value-added cloud services such as the IBM Watson Cloud.

    But as vendors continue to rally around OpenStack, integrating those clouds becomes easier. At present there are multiple distributions of OpenStack, but Diaz says that providing access to a common set of application programming interfaces greatly simplifies integration challenges across heterogeneous clouds.
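
    That common API surface is what makes multi-cloud tooling practical: the same client code can talk to different OpenStack clouds simply by switching connection profiles. The sketch below is a generic illustration using openstacksdk; the cloud names are placeholders for entries in a local clouds.yaml file.

    ```python
    import openstack

    # Generic illustration: identical code lists servers on two different
    # OpenStack clouds. "private-cloud" and "hosted-cloud" are placeholder
    # names for entries in a local clouds.yaml file.
    for cloud_name in ("private-cloud", "hosted-cloud"):
        conn = openstack.connect(cloud=cloud_name)
        print(f"Servers on {cloud_name}:")
        for server in conn.compute.servers():
            print(f"  {server.name} ({server.status})")
    ```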

    For that reason, IBM has contributed hundreds of developers to the OpenStack project, who have participated in 11,676 code reviews, implemented 68 blueprints and fixed 520 bugs across a total of 232,382 lines of code. IBM also claims that one of its most important contributions to OpenStack has been the development of the RefStack-client compliance testing tool.

    While interest in OpenStack is obviously strong, the amount of OpenStack code running in production environments remains comparatively small. But given the amount of momentum behind OpenStack it’s clearly only a matter of time before OpenStack gets deployed in production environments. The only thing not as clear is to what degree OpenStack will supplant existing proprietary frameworks or simply be deployed alongside them. In either case the amount of IT effort required to bring OpenStack into those production environments will be substantial.

    6:00p
    How DDoS Has Evolved Into New Threats Against the Data Center

    Today’s business world is becoming ever more reliant on the data center. With more workloads, more endpoints, and a lot more data, demands for resources and efficient technologies continue to grow. The data center has become the heart of any modern organization. With virtualization and cloud computing at the helm, many are saying that it’s great to be in the data center business. That may be the case from the infrastructure side, but we can never forget that the more people move to a given platform, the bigger a target it becomes.

    Cloud computing has given rise to many new types of services for organizations. These include hosting options, data center extensions and even new disaster recovery strategies. With the increase in cloud utilization comes the very real increase in security threats.

    There’s little doubt that as the size, frequency and complexity of distributed denial of service (DDoS) attacks continue to rise, hosting and cloud service providers must have solutions in place to protect the availability of their infrastructure and services. There are three specific types of attack that attackers can use to bring a system to a halt:

    • Volumetric Attacks
    • TCP State-Exhaustion Attacks
    • Application-Layer Attacks

    A recent Arbor Networks security report illustrates how attacks are growing in size, complexity and frequency:

    • Use of reflection/amplification to launch massive attacks: The largest reported attack in 2014 was 400 Gbps, with other large reported events at 300, 200 and 170 Gbps, and a further six respondents reported events above the 100 Gbps threshold. Ten years ago, the largest attack was 8 Gbps.
    • Multi-vector and application-layer DDoS attacks are becoming ubiquitous: 90 percent of respondents reported application-layer attacks, and 42 percent experienced multi-vector attacks that combine volumetric, application-layer and state-exhaustion techniques within a single sustained attack.
    • DDoS attack frequency is on the rise: In 2013, just over one quarter of respondents indicated they had seen more than 21 attacks per month; in 2014, that figure nearly doubled to 38 percent.

    “Arbor has been conducting the Worldwide Infrastructure Security Report survey for the last 10 years and we have had the privilege of tracking the evolution of the Internet and its uses from the early adoption of online content to today’s hyper connected society,” said Arbor Networks Director of Solutions Architects Darren Anstee. “In 2004, the corporate world was on watch for self-propagating worms like Slammer and Blaster that devastated networks the year before; and data breaches were most likely carried out by employees who had direct access to data files. Today, organizations have a much wider and more sophisticated range of threats to worry about, and a much broader attack surface to defend. The business impact of a successful attack or breach can be devastating – the stakes are much higher now.”

    So, what do you do in these types of situations? You get smart and fight back. A new term has been circulating in the industry: next-generation security. These platforms are much more than physical boxes sitting in the data center; there has been a leap in security technologies, with advanced engines doing far deeper inspection than a regular UTM firewall would.

    • Virtual Appliances and Security Virtualization. No longer bound by the physical aspect, virtual appliances can roam between physical hosts and have visibility into more parts of the network. Plus, they’re easier to manage from an agility perspective. Administrators can also dedicate a virtual security appliance to a specific function, meaning an appliance can reside departmentally, performing a certain type of service for that team – something that would be much more expensive to do with a physical device. Furthermore, new kinds of tools allow you to integrate security directly into storage repositories and even virtual machines, so data can be scanned before it ever hits a VM. These are advanced virtual security mechanisms that enable a new level of infrastructure security.
    • New Cross-Device Security Engines. Advanced deep-scanning engines like data-loss prevention (DLP), intrusion detection/prevention services (IDS/IPS), and even device interrogation help lock down an environment. Intelligent network monitoring algorithms let administrators control what data flows in and out of the environment. These new engines also help control the various consumer devices trying to enter the environment. Here’s the reality: you can now control any IP-based device that attempts to connect to your network. This kind of granular control automates the security process, especially when you have many different devices connecting into your data center.
    • Create a network which acts as a sensor and an enforcer. The three kinds of attacks mentioned earlier are absolutely impacting modern network and data center architecture. This is where next-generation security technologies must meet the rest of your data center architecture. That means having intelligent policies and monitors running at the edge – and on your network. Is a specific port on a core switch experiencing a burst? Is an application sitting internally suddenly getting anomalous traffic? Are there malformed packets hitting a service or site? An intelligent security and networking architecture will sense these kinds of attacks and then enforce appropriate policies to stop the traffic and prevent any damage (a minimal sketch of this kind of burst check follows this list). Today, there is a lot of intelligence available that spans from the blade, through your network, and out to the edge.
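
    The sketch below illustrates, in the simplest possible terms, what treating the network as a sensor can mean: compare a per-port traffic counter against a smoothed baseline and flag sudden bursts. It is a toy example with made-up numbers, not a description of any vendor's product.

    ```python
    # Toy illustration of burst detection on a per-port traffic counter: keep an
    # exponentially weighted moving average (EWMA) as a baseline and flag samples
    # that exceed it by a large factor. All numbers are made up.

    ALPHA = 0.2          # EWMA smoothing factor
    BURST_FACTOR = 3.0   # flag samples more than 3x the smoothed baseline

    def detect_bursts(samples_mbps, alpha=ALPHA, factor=BURST_FACTOR):
        baseline = samples_mbps[0]
        alerts = []
        for minute, rate in enumerate(samples_mbps[1:], start=1):
            if rate > factor * baseline:
                alerts.append((minute, rate, baseline))
            baseline = alpha * rate + (1 - alpha) * baseline
        return alerts

    # Made-up per-minute throughput on one switch port (Mbps).
    traffic = [120, 130, 125, 140, 135, 900, 950, 130, 128]
    for minute, rate, baseline in detect_bursts(traffic):
        print(f"minute {minute}: {rate} Mbps vs baseline {baseline:.0f} Mbps - possible flood")
    ```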

    As organizations continue to grow their cloud presence, security administrators need to look at other options to help them protect their internal environments as well as their cloud infrastructure. The reality is that security will be an ever-evolving challenge for data centers and organizations of all sizes. As more environments go digital, the threat vectors will continue to evolve. By enabling new kinds of security strategies throughout the entire architecture, you’ll be able to proactively prevent and stop new kinds of advanced attacks.

    7:00p
    KEMP Unveils Faster Application Delivery Controller

    Moving up in the application delivery controller (ADC) weight class, KEMP Technologies is making use of the latest generation of Intel processors to launch an ADC appliance that can provide up to 30 Gbps of application throughput and 30,000 SSL transactions per second.

    Christopher Baker, product marketing manager for KEMP Technologies, says the LoadMaster 5000 and 8000 series ADCs make use of multiple 10G Ethernet interfaces to push KEMP Technologies into the higher end of the data center market for the first time.

    “Most people think of us in terms of being able to support Windows workloads,” says Baker. “These offerings will move us into larger data center environments running Oracle and SAP software.”

    Baker says the latest Intel Xeon processors enable KEMP to eliminate the need to invest in field-programmable gate arrays (FPGAs), reducing costs, while embedding SSL encryption processing, high-capacity intrusion protection and detection software, software-defined networking (SDN) functionality and Web Application Firewall (WAF) software all on the same ADC.

    The LoadMaster 5000 and 8000 series, Baker claims, are also the only ADCs currently able to provide adaptive traffic steering via direct integration with SDN controllers. While there are not many SDN controllers deployed just yet, Baker notes that it’s just a matter of time before SDNs and ADCs become natural extensions of one another.

    Baker says that the advent of faster Intel processors is making it possible for KEMP to add more functions on top of the ADC as part of an effort to consolidate the number of appliances that need to be deployed inside a data center. Of course, providers of ADCs are not the only providers of appliances with similar ambitions. Vendors that manufacture firewalls, for example, are taking advantage of faster processors to add more functionality to their platforms.

    In the meantime, the data center is becoming home to a mix of physical and virtual ADC appliances. Baker says IT organizations will choose between the two depending on the amount of congestion and attributes of the application workloads being run inside any segment of the data center.

    Clearly, ADCs have come a long way from the days when IT organizations relied primarily on earlier generations of load balancers to distribute application workloads across the data center. While many data centers still rely on a basic load balancer, the rise of more sophisticated ADCs makes it possible to deploy application workloads spanning hundreds, sometimes even thousands, of virtual machines and switches at true web scale.
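
    For contrast, the sketch below shows the kind of basic round-robin distribution those earlier load balancers performed; modern ADCs layer SSL offload, health checks, application awareness and traffic steering on top of this simple idea. The backend addresses are placeholders.

    ```python
    import itertools

    # Toy illustration of round-robin load balancing, the baseline behavior that
    # ADCs build on with SSL offload, health checks and traffic steering.
    # Backend addresses are placeholders.
    BACKENDS = ["10.0.0.11:8080", "10.0.0.12:8080", "10.0.0.13:8080"]

    pool = itertools.cycle(BACKENDS)

    def pick_backend():
        """Return the next backend in strict rotation."""
        return next(pool)

    for request_id in range(6):
        print(f"request {request_id} -> {pick_backend()}")
    ```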

    7:24p
    GE Launches Digital Wind Farm with Cloud Infrastructure to Boost Production by 20 Percent


    This article originally appeared at The WHIR

    The GE Digital Wind Farm uses cloud technology to boost energy production by up to 20 percent. The company announced Tuesday that it is pairing digital infrastructure with turbines in a new Digital Wind Farm package it says could help generate up to $50 billion in value.

    GE’s Wind PowerUp technology, introduced 18 months ago, improves turbine efficiency, which creates the profitability increase for each turbine. “We envision that we are going to develop many more apps to come, as we work with more and more customers,” Anne McEntree, head of GE’s renewable energy unit, told TheStreet. “No one is even thinking about doing what we are launching today, the way we are coupling industrials and big data.”

    Greener technology utilizing existing infrastructure is becoming of greater interest to the industry. Last year researchers at Stanford found that current algorithms were causing 80 percent of CPU power to go unutilized. Apple and Google have both worked to increase the efficiency of data centers to make them more sustainable. Data centers account for two percent of all energy consumption in the US and that number is likely to rise as the appetite for cloud services grows.

    GE said that it will utilize the existing power grid more efficiently with the GE Digital Wind Farm by using the cloud along with advanced turbines. “Once the turbines are built, their embedded sensors are connected and the data gathered from them is analyzed in real time with GE’s Predix software, which allows operators to monitor performance from data across turbines, farms or even entire industry fleets,” according to the press release. “The data provides information on temperature, turbine misalignments or vibrations that can affect performance.”

    The system gathers data about performance and over time uses analytics to become more predictive and more effectively run the turbines. “Big data is worthless without the insight to take action, and our vision for the industry is to use today’s data to predict tomorrow’s outcomes,” said Steve Bolze, president and CEO of GE Power & Water. “By harnessing the full power of the Industrial Internet, we can create a world where wind farms learn, adapt and perform better tomorrow than they do today.”
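
    Predix itself is proprietary, but the underlying idea of using historical sensor data to anticipate problems can be illustrated generically. The sketch below fits a simple linear trend to a made-up vibration series and estimates when the trend would cross an alarm threshold; it is not GE's method, just an illustration of predictive monitoring.

    ```python
    import numpy as np

    # Generic illustration of predictive monitoring (not GE's method): fit a
    # linear trend to recent vibration readings and estimate when the trend
    # would cross an alarm threshold. All numbers are made up.
    THRESHOLD = 7.0  # mm/s, hypothetical vibration alarm level

    readings = np.array([3.1, 3.3, 3.2, 3.6, 3.9, 4.1, 4.4, 4.8, 5.1, 5.5])
    hours = np.arange(len(readings))

    slope, intercept = np.polyfit(hours, readings, 1)  # least-squares trend line

    if slope > 0:
        hours_to_threshold = (THRESHOLD - readings[-1]) / slope
        print(f"Vibration rising about {slope:.2f} mm/s per hour; "
              f"alarm threshold reached in roughly {hours_to_threshold:.0f} hours.")
    else:
        print("No rising trend detected.")
    ```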

    This first ran at: http://www.thewhir.com/web-hosting-news/ge-launches-digital-wind-farm-with-cloud-infrastructure-to-boost-production-by-20-percent

