Data Center Knowledge | News and analysis for the data center industry - Industry's Journal
 

Thursday, April 7th, 2016

    Time Event
    9:00a
    The ‘Smarter’ Software-Defined Data Center

    John M. Hawkins is VP of Marketing and Communications for vXchnge.

    If you take a look around, everything is getting “smarter.” There are smartphones, smart watches, smart cars, and the list of smart devices is growing rapidly. As consumers, we expect more and more smart devices.

    Recently, I went into an electronics store and asked if they had a smart device I could put on my dog’s collar to check how much exercise he was getting. Moments after I asked the question, the store clerk looked at me as though I was from Mars and quickly replied, “We don’t have anything like that, but it’s a good idea.” While the clerk wasn’t aware of a device to track my pet’s activity, I was able to determine, with a quick Google search on my smartphone, that there are devices out there to track my dog’s activity.

    In fact, there are more smart devices and electronic gadgets on Earth than people. However, just creating the device and producing the data isn’t enough. We need to collect and process the data into information, which is what the Internet-of-Things paradigm is all about. Given the current state of IoT maturity and where we want it to be, software design patterns will need to be expanded and developed for security, application programming frameworks, and information data models, to name a few necessary improvements. That, in turn, means we will need smarter software-defined data centers to support these new architectures and features.

    In a recent survey of data center decision makers conducted by vXchnge, a majority (93.14 percent) of respondents believe software will define the data center of the future, and 77 percent plan to move to a software-defined data center (SDDC) within the next five years.

    Consumers will be accustomed to smart everything. That means we will need data centers that can support these new software models, and for that to happen, data centers must evolve. It’s critical that the data center’s capabilities are in place to support the new paradigms. So, how do we get smarter in order to keep up?

    A Software-Defined Data Center (or SDDC) is a SMARTER data center that supports the demands of future compute and application deployment models such as the Internet of Things, cloud, Platform-as-a-Service, Software-as-a-Service, and other models on the verge of becoming mainstream.

    Consider the evolution of the telephone for example. We once, not too long ago, used rotary or button-push wall phones to “reach out and touch someone”. Today we use smart phones, and in particular, mobile apps on a daily basis. The practical use of mobile apps is not something we could have considered back in the wall-phone days. Today, we tap on our glass device and access virtual buttons that take us to the information we want, regardless of where in the world it is.

    We can be smarter about the vision for tomorrow if we can better understand what’s happening today and plot our next steps. The software-defined data center offers advanced features and functionality, real-time analysis and reporting, and on-demand remote access. A true SDDC provides remote visibility into the data center just as if you were there in person. As devices and infrastructures get smarter, the SDDC provides a smarter way to interface with all of the moving parts. The software-defined data center runs on a platform used to integrate and orchestrate all the “smart data center” devices being developed.

    Interestingly enough, in the data center decision maker survey conducted by vXchnge, 81 percent agree that having customers in close proximity to their data is “very” or “extremely” important, and 89 percent of respondents used cloud for their compute, storage, and network needs.

    This data leads me to believe that the two key use cases the SDDC will need to enable are the edge and hybrid cloud. The proximity requirement and the heavy use of cloud are leading indicators that, rather than a full shift to a single cloud model, hybrid cloud is where we will see more growth. Data centers located in what were once cornfields, where power is cheap, might not be the only direction the cloud goes. Of course, there are merits to that model. But the data suggests the cloud model may need to shift to support these functional requirements, which means location will be important.

    We can also expect to see many other exciting advancements with the SDDC as our data centers become smarter. Not only will they be able to capture metrics, they will give us the ability to make better decisions. The software-defined data center will provide information – not just data – about security, IT audits, asset management, data center infrastructure management, and capacity and lifecycle management, to name a few.
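    The "data versus information" distinction above can be illustrated with a minimal sketch: raw power readings per cabinet (data) are reduced to a capacity decision (information). The threshold, cabinet names, and readings below are invented for illustration, not drawn from any particular SDDC product.

```python
# Hypothetical sketch: turn raw per-cabinet power readings (data) into a
# capacity decision (information), as an SDDC's analytics layer might.
# The 8 kW budget and the readings are illustrative assumptions.
CABINET_LIMIT_KW = 8.0

def capacity_report(readings_kw):
    """Return utilization for cabinets running above 80% of their power budget."""
    return {cab: kw / CABINET_LIMIT_KW
            for cab, kw in readings_kw.items()
            if kw / CABINET_LIMIT_KW > 0.8}

readings = {"cab-01": 7.2, "cab-02": 3.1, "cab-03": 6.8}
print(capacity_report(readings))
```

    The point of the sketch is the reduction step: an operator does not act on thousands of sensor samples, but on the short list of cabinets that need attention.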

    All of these leading indicators and trends suggest that the Software-Defined Data Center era is upon us, and it’s time to embrace the change. Is your data center providing the services and support that will deliver the outcomes your business demands?

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.


    10:00a
    Rackspace and Equinix Partner for Managed OpenStack

    With its transition from one-time powerhouse of the cloud platform space to data center management provider now complete, Rackspace announced Thursday the availability of a fully managed service option for customers deploying OpenStack cloud platforms, whether on their own premises or in any data center, anywhere.

    In a development that underscores just how dramatic a change of direction this is for Rackspace historically speaking, colocation leader Equinix has stepped in to provide an option for hosting Rackspace-managed OpenStack platforms through its worldwide locations.

    “We are taking the OpenStack product and delivering it in pre-engineered, cabinet-level solutions that not only allow us to provide the OpenStack layer, but (also) an OpEx model,” explained Darrin Hanson, vice president and general manager of Rackspace’s OpenStack private cloud operations, in an interview with Datacenter Knowledge. “We will own the assets; we will basically host it back to you in your own facility, and all you’re responsible for as a customer is the space and the power in the data center, and making sure it meets the minimum guidelines.”

    Your Place or Theirs

    Rackspace’s move is the latest step in a path set in motion in 2014 that has since culminated in managed service offerings for customers of VMware, Amazon, and Microsoft Azure; and just last February, customers with Red Hat OpenStack. A move to provide similar managed services for customers on Google Cloud may not be far behind, especially considering that just Wednesday, Google and Rackspace announced their joint development of Open Compute Project specifications for a 48V open rack using IBM Power9 processors.

    Yet the most extraordinary element in this turnaround story is the inclusion of Equinix. As Rackspace told us, Equinix will be its preferred third-party provider for colocation. And while Hanson noted that customers will be responsible for space and power, they’ll have an OpEx option available for those as well.

    “Rackspace will take care of all the data center and connectivity management in that solution as well,” explained Ryan Yard, Rackspace’s director of solution engineering, speaking with Datacenter Knowledge. “On a customer site, with this solution, they’d have to manage the power and some of the connectivity. In an Equinix solution, Rackspace would manage that power via Equinix and then manage the connectivity. So we’re extending all of our sites and capabilities into all of the Equinix sites.”

    This brings Equinix’ Cloud Exchange squarely into the picture. Just this week, Equinix announced it is supercharging its Cloud Exchange service with the addition of Data Hub — a way to bring customer data closer to the edge of connections with over 500 SaaS providers. Thursday’s announcement means that customers will now be able to effectively hire Rackspace to build cloud platforms that connect Cloud Exchange member services, using open APIs, to applications built on PaaS platforms on top of OpenStack. For example, Cloud Foundry and Red Hat’s OpenShift provide those applications to their own customers as services, according to Rackspace.

    “Rackspace chose Equinix facilities to deploy this new service,” said Sean Iraca, Equinix senior director of cloud innovation, in a note to Datacenter Knowledge, “because of the breadth of cloud services available inside Equinix and on the Equinix Cloud Exchange, as well as the global reach of the platform.”

    We asked Iraca about his company’s view of the difficulties Rackspace says its customers have had with deploying and managing OpenStack in the past. “OpenStack can be complex, and Equinix provides the best environment, together with services from Rackspace, to run an OpenStack deployment,” Iraca responded.

    Would an organization that prefers not to manage its own infrastructure fit the profile of a company with developers willing to build custom apps and services, for a PaaS platform running on OpenStack — one they’d probably have to manage themselves?

    “Usually if you are using OpenStack, it means you want to orchestrate infrastructure through the OpenStack APIs in addition to PaaS,” responds Gary Chen, IDC’s research manager for Software-Defined Compute. “Most OpenStack apps are custom developed, so (the customer) would have developers on staff. Most are Web-based or partially/fully cloud-native designs, which run on OpenStack well. They could be using PaaS systems, though the customer would have to install and manage that layer.”

    But an app that’s very fast-paced and susceptible to rapid change — for instance, one developed using continuous integration / continuous deployment (CI/CD) — would be something you’d find from an organization that’s focused on the product rather than its own infrastructure, said Chen.

    So an organization choosing to devote humanpower and resources to differentiated, customer-facing services could very well fit the profile Rackspace is looking for: one whose growth strategy does not include devoting resources to undifferentiated infrastructure.
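    Chen's point about orchestrating infrastructure through the OpenStack APIs can be made concrete with a minimal Heat template (Heat is OpenStack's orchestration project): resources are declared once and the platform converges on them. The image, flavor, and network names below are placeholders for illustration, not part of Rackspace's offering.

```yaml
heat_template_version: 2015-10-15

description: >
  Minimal illustrative Heat template: one server and one Cinder volume,
  declared declaratively and wired together by the orchestration engine.
  Image, flavor, and network names are placeholders.

resources:
  app_server:
    type: OS::Nova::Server
    properties:
      image: ubuntu-14.04       # placeholder image name
      flavor: m1.small          # placeholder flavor
      networks:
        - network: private      # placeholder network

  app_volume:
    type: OS::Cinder::Volume
    properties:
      size: 10                  # volume size in GB

  volume_attachment:
    type: OS::Cinder::VolumeAttachment
    properties:
      instance_uuid: { get_resource: app_server }
      volume_id: { get_resource: app_volume }
```

    A customer with developers on staff, in Chen's profile, would typically maintain templates like this alongside their application code, while a managed provider runs the OpenStack control plane underneath.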

    Rackspace will remain the single point-of-contact for customers choosing the Equinix option, said Rackspace. However, according to Equinix’ Iraca, “customers should be perceived as ‘joint’ customers of Rackspace and Equinix.”

    What Changes?

    “OpenStack has always been the first product that Rackspace delivered as-a-service, outside of our data center,” said Rackspace’s Hanson.

    “Up to this point, what that has looked like is a design session where we helped you understand what your OpenStack cloud needs to look like, and then if you decided to deploy it in your own data center, our level of consultation was limited to the bill of materials, the network configuration, and instructions or side-by-side assistance with standing up your environment after you worked with your procurement teams to lay out all the capital, buy or repurpose the gear from your existing inventory, and get it all configured. And if it’s in your data center, then we provided support remotely, and from a monitoring perspective, for just the OpenStack layer.”

    There was a lot of hand-holding involved, and plenty of opportunity for Rackspace to demonstrate its “fanatical” devotion to leading customers through the morass. Under the new arrangement, as Yard said, Rackspace will go so far as to assist customers in procuring hardware that meets the agreement’s specifications, for hosting within customers’ own data centers or within Equinix locations.

    For organizations seeking to deploy their own services for the first time on cloud platforms, the adoption process for OpenStack has been notoriously difficult. As one user complained on OpenStack’s own forums, “What other software requires the person installing it to do so much work?”

    “OpenStack is a collection of projects; (it’s) not inherently a product,” explained Rackspace’s Yard, in the collection’s defense. “You have various members of the community whom I think of as ‘curators,’ who take different projects and put them together, but they still allow for the customer to configure that project in any way they see fit. I think that’s where, in my mind, the problem lies.”

    For a small, static private cloud deployment, he said, the platform doesn’t really provide too many operational challenges... up until the time you want to scale, update, or patch it. “Rackspace’s opinion is that this ‘-as-a-service’ model is a prescription for how to deploy and set up OpenStack at scale.”

    Hardware

    Rackspace gave Datacenter Knowledge an early peek at those hardware specifications. There will be two main “flavors” of cabinets, we were told — “HPE” and “Open Compute.”

    “HPE-flavor” cabinets will consist of 22 servers, networked using a Cisco top-of-rack switch, a Cisco firewall, and F5 Networks load balancers. The “compute” version will feature two 12-core processors, 256 GB of memory, and twelve 960 GB SSDs. By comparison, the “storage” version will feature two 6-core processors and 128 GB of memory, with the slack taken up by ten 4 TB HDDs and ten 960 GB SSDs.
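    As a back-of-the-envelope check on those specs, the raw capacity of a cabinet built entirely from "storage"-flavor servers works out to roughly a petabyte. This assumes, purely for illustration, that all 22 servers in the cabinet use the storage configuration (Rackspace did not say how mixed cabinets are composed):

```python
# Rough raw-capacity estimate for an all-"storage"-flavor HPE cabinet:
# 22 servers, each with ten 4 TB HDDs and ten 960 GB SSDs (per the specs
# above). All figures in GB to keep the arithmetic exact.
SERVERS_PER_CABINET = 22
HDD_GB_PER_SERVER = 10 * 4000   # ten 4 TB HDDs
SSD_GB_PER_SERVER = 10 * 960    # ten 960 GB SSDs

per_server_gb = HDD_GB_PER_SERVER + SSD_GB_PER_SERVER
cabinet_gb = SERVERS_PER_CABINET * per_server_gb
print(per_server_gb, "GB per server;", cabinet_gb, "GB raw per cabinet")
# 49,600 GB per server; 1,091,200 GB (about 1.1 PB raw) per cabinet
```

    Usable capacity would of course be lower once replication or erasure coding for Swift and Cinder is factored in.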

    This configuration, said Yard, enables support for object storage (via OpenStack Swift), block storage (OpenStack Cinder), as well as a converged array based on SAS, simultaneously within a single cabinet.

    “Open Compute-flavor” cabinets are slightly denser, featuring 24 servers built using Quanta’s F06D chassis — which, the manufacturer says, follows the latest Open Compute 3.0 specifications. It utilizes Quanta’s “hyper-converged infrastructure,” which the company premiered last year in an effort to address the needs of OCP-compliant customers. In that design, compute and storage resources are co-resident. Buildouts, we’re told, will be slightly different than for “HPE-flavor.”

    Red Hat — which began certification of its own version of OpenStack when managed by Rackspace — had yet to make a statement about the possibility of certifying OpenStack Everywhere deployments at the time of this filing. A Rackspace spokesperson told Datacenter Knowledge that Red Hat will not be certifying deployments, at least at this stage.


    4:19p
    Volkswagen Picks Mirantis for OpenStack Private Cloud
    By Talkin' Cloud


    OpenStack startup Mirantis announced that Volkswagen Group has selected Mirantis OpenStack as the platform for the private cloud that will support its internal and consumer-facing applications.

    Volkswagen selected Mirantis after a “comprehensive and rigorous selection process” that showed Mirantis as the “most stable and fastest to implement among industry OpenStack distributions.”

    Volkswagen told Business Insider that it is not planning to rip-and-replace existing infrastructure but rather “surround-and-drown it over time.” The company plans to run all net new applications across its 12 VW brands on the new OpenStack cloud, eventually replacing legacy systems.

    As many reports have noted, the win is a blow to OpenStack providers Red Hat and VMware. Mirantis’ revenue saw a surge in late 2015 and early 2016 with enterprise wins, including VW, but also a major US bank with a 1,000-node installation and a financial information firm. The company is now reportedly planning for a transition to a public company.

    “As the automotive industry shifts to the service economy, Volkswagen is poised for agile software innovation,” Mario Müller, Volkswagen VP IT Infrastructure said in a statement. “The team at Mirantis gives us a robust, hardened distribution, deep technical expertise, a commitment to the OpenStack community, and the ability to drive cloud transformation at Volkswagen. Mirantis OpenStack is the engine that lets Volkswagen’s developers build and deliver software faster.”

    “OpenStack is the open source cloud standard offering companies a fast path to cloud innovation,” Mirantis SVP, Marque Teegardin said. “It is our privilege to partner with Europe’s largest automaker and we are thrilled to support them as they use the software to out-innovate competitors and expand their business on a global scale.”

    Original post published at: http://talkincloud.com/cloud-computing-and-open-source/volkswagen-picks-mirantis-openstack-private-cloud

    4:39p
    Jeff Bezos: AWS Will Break $10 Billion this Year
    By WindowsITPro


    Jeff Bezos is bullish on the cloud, pegging AWS’ sales for this year at $10 billion in a recent letter to shareholders. But he said there was a surprising source of that success.

    “One area where I think we are especially distinctive is failure. I believe we are the best place in the world to fail (we have plenty of practice!), and failure and invention are inseparable twins,” Bezos wrote. “To invent you have to experiment, and if you know in advance that it’s going to work, it’s not an experiment. Most large organizations embrace the idea of invention, but are not willing to suffer the string of failed experiments necessary to get there.”

    AWS is not one of those failures: Bezos touted customers like GE, Johnson & Johnson, and Pinterest while noting that AWS now encompasses 70 different services for compute, storage, databases, analytics, mobile, Internet of Things, and enterprise applications. That includes things like Amazon’s barebones WorkMail offering (read how it compares to Microsoft’s and Google’s offerings) as well as its monstrously successful database, storage, and server offerings.

    Bezos also noted that the company’s cloud offerings had focused aggressively on cutting prices, with 51 price cuts over the years. Those cuts haven’t kept big names like Apple and Dropbox from fleeing for cheaper pastures, but for many businesses, particularly fast-growing enterprises, Amazon is still the default cloud option.

    Bezos wrote he expected that to remain true for some time — and that Amazon had the potential to capture even more data over time.

    “As the team continues their rapid pace of innovation, we’ll offer more and more capabilities to let builders build unfettered, it will get easier and easier to collect, store and analyze data, we’ll continue to add more geographic locations, and we’ll continue to see growth in mobile and ‘connected’ device applications,” he wrote. “Over time, it’s likely that most companies will choose not to run their own data centers, opting for the cloud instead.”

    Original article published at: http://windowsitpro.com/cloud/jeff-bezos-aws-will-break-10-billion-year-driven-amazons-failures

    8:20p
    Microsoft Cloud App Security Hits General Availability
    By Talkin' Cloud


    Microsoft is putting its Adallom acquisition to good use this week as it launches general availability of Microsoft Cloud App Security.

    According to an announcement on Wednesday, its Microsoft Cloud App Security service helps companies “design and enforce a process for securing cloud usage” and start controlling cloud app security via policy within minutes.

    Microsoft Cloud App Security will address the growing threat of Shadow IT. According to recent data from Microsoft, on average, each employee uses 17 cloud apps, but many organizations have no idea what is in use or whether these apps meet their security or compliance requirements. What’s more, in 91 percent of organizations, employees grant their personal accounts access to the organization’s cloud storage.

    A separate study earlier this year by Blue Coat’s Elastica Cloud Threat Labs team found that sensitive data leaks can cost an average organization up to $1.9 million, highlighting the financial impact of Shadow IT.

    According to Microsoft, the product has two main components: “discovery of cloud usage in the company using log-based traffic analysis and granular control for sanctioned apps leveraging API-based integration.”
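    The first of those two components, log-based discovery, can be sketched in a few lines: parse traffic logs and tally which known cloud apps each user reaches. The log format, the app catalog, and the function name below are invented for illustration; they are not Microsoft's schema or API.

```python
from collections import defaultdict

# Hypothetical sketch of log-based cloud-app discovery: given proxy-log
# lines of the form "user,domain", record which known cloud apps each
# user touched. The app catalog and log format are illustrative only.
CLOUD_APPS = {"dropbox.com", "box.com", "drive.google.com"}

def discover(log_lines):
    usage = defaultdict(set)
    for line in log_lines:
        user, domain = line.strip().split(",")
        if domain in CLOUD_APPS:          # ignore non-cloud traffic
            usage[user].add(domain)
    return usage

logs = ["alice,dropbox.com", "alice,box.com",
        "bob,intranet.local", "bob,drive.google.com"]
print({user: sorted(apps) for user, apps in discover(logs).items()})
```

    A real product layers risk scoring and policy enforcement on top of exactly this kind of inventory, which is what makes previously invisible Shadow IT usage actionable.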

    In a blog post announcing Microsoft Cloud App Security, Microsoft said the product includes:

    App Discovery: Cloud App Security identifies all cloud applications in your network—from all devices—and provides risk scoring and ongoing risk assessment and analytics

    Data Control: With special focus on sanctioned apps, you can set granular controls and policies for data sharing and loss prevention (DLP) leveraging API-based integration. You can use either out-of-the box policies or build and customize your own

    Threat Protection: Cloud App Security provides threat protection for your cloud applications leveraging user behavioral analytics and anomaly detection

    Original post published at http://talkincloud.com/cloud-computing/microsoft-cloud-app-security-hits-general-availability

