Data Center Knowledge | News and analysis for the data center industry - Industry's Journal
 

Friday, June 16th, 2017

    12:00p
    BMW’s Connected-Car Data Platform to Run in IBM’s Cloud

    BMW is planning to use IBM’s cloud infrastructure to deploy its new platform for collecting data from connected cars. IBM announced it’s become a “pilot partner” with the German automaker this week.

    Specifically, it’s partnering on the automaker’s CarData project, announced at the end of May at the “Mobility of Tomorrow” automotive event in Berlin and billed as the first OEM open data platform.

    CarData promises to be a new way of accessing the reams of data — mileage, average fuel consumption, event data, service information and the like — that is collected by an automobile’s onboard computers. Under the plan, the data is encrypted and transmitted to BMW’s secure servers, utilizing a SIM card permanently installed in the vehicle for enhanced security. With the user’s consent, the data can then be made available to third-party service providers, such as insurance companies and repair shops.

    Providing the data center infrastructure to collect and analyze connected-car data is a nascent use case for cloud providers that’s bound to become a big market in the near future, especially as self-driving vehicles begin to arrive in volume. Just this past March, Japanese telco NTT Communications partnered with Toyota to build a global data center network for connected-car data and to research the best ways to design such a network. Ford is building a $200 million data center in Flat Rock, Michigan, specifically in response to the deluge of data it expects connected cars to generate.

    BMW board member Peter Schwarzenbauer told the crowd at the Berlin auto show:

    “BMW CarData will take the connectivity of our vehicles to a new dimension. Our BMW ConnectedDrive customers will be able to take advantage of new, innovative, and customized third-party services in a quick and easy manner. Protecting vehicle data is part of our understanding of premium in the highly-connected vehicle. This is what customers expect from us. In this way, we are allowing customers to decide what happens with their data. This is precisely the philosophy behind BMW CarData.”

    Buzzwords aside, it seems like a good idea.

    About 8.5 million BMW vehicles already have the built-in SIM card required to use the system. All that’s needed to gain free use of the feature is to register through the company’s online ConnectedDrive portal. After that, the car’s owner decides with whom and how to share their data “with the click of a mouse.” Service providers using the system must be registered with BMW CarData, and they will only receive the data they need to perform the authorized service. All data transfers are encrypted.
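    The consent model described above — each registered provider receives only the data it needs for its authorized service — can be sketched as a simple field-level filter. This is a hypothetical illustration; the field names, provider names, and function are invented for the sketch and are not BMW’s actual CarData API.

    ```python
    # Hypothetical sketch of CarData-style consent filtering.
    # Telemetry fields and provider names are invented for illustration.
    TELEMETRY = {
        "mileage": 48210,
        "avg_fuel_consumption": 6.4,   # l/100km
        "service_due": "2017-09-01",
        "event_data": ["low_tire_pressure"],
    }

    # The owner grants each registered provider access to specific fields only.
    CONSENTS = {
        "insurer_a": {"mileage", "event_data"},
        "repair_shop_b": {"service_due", "mileage"},
    }

    def share_with(provider: str) -> dict:
        """Return only the fields the owner has authorized for this provider."""
        allowed = CONSENTS.get(provider, set())
        return {k: v for k, v in TELEMETRY.items() if k in allowed}
    ```

    An unregistered provider simply receives nothing, which mirrors the opt-in posture the article describes.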

    “For customers, CarData means security, transparency and control over data from your own car, combined with the many benefits of customized services,” Schwarzenbauer said.

    Eventually, BMW sees CarData moving into other areas, such as the infotainment and smart home arenas.

    The connected-car platform will utilize IBM’s Platform-as-a-Service called Bluemix to securely and rapidly transfer encrypted vehicle data to these customer-authorized third parties, while giving them the intelligence and cloud tools needed to build customized offerings.

    “We will integrate Bluemix with the data APIs from BMW CarData,” an IBM spokesperson explained to Data Center Connection. “This will enable third parties to develop new services on top of this by using the full catalog of 150 plus microservices in IBM Bluemix. This is including the Watson services which are available in Bluemix.”

    Big Blue evidently sees this as opening a door of opportunity, as the company says it will also act as a neutral server for extended vehicle access that will include vehicles from automakers other than BMW.

    “The concept of a neutral server fosters innovation by establishing a single point of contact for multiple parties to access vehicle data from various manufacturers, thereby reducing integration cost whilst ensuring fair competition,” said Dirk Wollschlaeger, a general manager with IBM’s global automotive department.

    3:00p
    Have Your Scale, and Object Too

    David Flynn is CTO of Primary Data.

    The IT department has one of the toughest jobs in the enterprise. While maintaining application performance today, IT teams are increasingly being asked to handle the massive data growth coming tomorrow. Cloud and scale-out storage are top of mind for most IT teams facing these challenges. In fact, Gartner’s 2017 Strategic Roadmap for Storage predicts that “by 2021, more than 80 percent of enterprise unstructured data will be stored in scale-out file system and object storage systems in enterprise and cloud data centers, an increase from 30 percent today.”

    Rapid data growth is straining IT budgets, fueling this remarkably fast adoption rate and pushing enterprises to transition to the cloud even sooner. A metadata engine enables enterprises to deploy a more powerful scale-out file system that can seamlessly integrate with on-premises object or public cloud storage. This can help enterprises adopt scale-out systems and the cloud much more rapidly, at far less cost, and with much less risk. Let’s take a closer look at how.

    Tomorrow’s Scale-Out System on Your Existing Hardware

    As Gartner notes, many enterprises are keen to transition from standard NAS systems to a scale-out NAS platform to manage the rise in unstructured data, but are waiting to find a solution that can meet both short-term and long-term needs. Long-term capabilities enterprises are looking for include: massive scalability of performance and capacity; the ability to tier data across clusters with different services and performance capabilities, according to policy for flexibility and cost-optimization; and the ability to tier data across resources from different vendors or even commodity x86 servers to accelerate the adoption of the latest innovations.

    Metadata engine software can meet these performance, scalability, and manageability requirements. It does this by decoupling the rigid architectural relationship between applications and storage, which delivers several key benefits. First, it enables data to be managed and moved across all storage, rather than within a single storage silo. This allows the right data to be placed on the right storage at the right time to meet data’s performance, protection, and price needs. Second, it moves metadata (control) operations out of the data path to enable parallel I/O, which improves scalability dramatically to support billions of files. Finally, deploying a metadata engine for data management is fast and easy, as only metadata needs to be migrated into the system—data is instantly assimilated, in place, without application impact. This capability turns complex migrations that might take weeks or months into a simple, non-disruptive process that can be completed in minutes.
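    The core idea — a catalog that maps each file in a global namespace to whatever storage currently holds it, so data can move without applications noticing — can be sketched in a few lines. This is a conceptual toy, not Primary Data’s implementation; all class, tier, and path names are invented.

    ```python
    # Toy sketch of the metadata-engine concept: a catalog maps stable
    # namespace paths to their current storage location, so placement can
    # change per policy while applications keep using the same path.
    class MetadataEngine:
        def __init__(self):
            self.catalog = {}  # path -> (tier, physical location)

        def assimilate(self, path, tier, location):
            # Existing data is taken over "in place": only metadata is
            # recorded; no data copy is needed to bring it under management.
            self.catalog[path] = (tier, location)

        def place(self, path, new_tier, new_location):
            # Policy-driven move: the namespace entry stays stable and only
            # the mapping changes (the data copy itself happens out of band).
            self.catalog[path] = (new_tier, new_location)

        def resolve(self, path):
            # A control (metadata) operation; the actual data I/O then goes
            # straight to the storage, keeping metadata out of the data path.
            return self.catalog[path]

    engine = MetadataEngine()
    engine.assimilate("/projects/report.docx", "nas", "nas01:/vol1/report.docx")
    engine.place("/projects/report.docx", "cloud", "s3://archive/report.docx")
    ```

    The split between `resolve` (control path) and direct data access is what permits the parallel I/O and in-place assimilation described above.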

    Making Order of Unstructured Data Chaos

    Beyond making NAS highly performant, efficient and scalable, a metadata engine’s ability to assimilate different types of storage resources into a global namespace extends to object and cloud storage to ensure enterprises will be well-equipped for the coming onslaught of unstructured data. With a metadata engine, admins control whether data moves to on-premises object stores (for example, for compliance purposes) or to one or multiple clouds (for example, to different availability zones for DR) based on the business’s objectives for data. In addition, data on object/cloud stores is still visible as files.

    This offers several benefits: first, managing files versus objects is more intuitive; second, files retrieved are instantly usable, without the need to modify applications to use object data; and finally, data can be retrieved at a file-granular level, making an active archive use case more cost-effective by minimizing bandwidth charges. For example, without a metadata engine with file-level access, if a company needed to restore a single file from a large backup bundle, it would need to pay the bandwidth charge to move the entire backup bundle on premises and then rehydrate the bundle to restore the file. If the bundle contained video and audio files, these bandwidth charges could be significant. A metadata engine maintains access to data in the cloud as files and can retrieve just the file that is needed.
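    The bandwidth argument above is easy to put numbers on. The figures here are assumptions for illustration only: a $0.09/GB egress rate (a typical public-cloud price of the period, not a quoted one), a 500 GB backup bundle, and a single 0.5 GB file to restore.

    ```python
    # Back-of-the-envelope comparison of whole-bundle vs. file-granular
    # retrieval. All figures are illustrative assumptions, not quoted prices.
    EGRESS_PER_GB = 0.09   # assumed cloud egress rate, $/GB

    bundle_gb = 500.0      # size of the whole backup bundle
    file_gb = 0.5          # the single file actually needed

    # Without file-level access: pull the entire bundle, then rehydrate it.
    cost_whole_bundle = bundle_gb * EGRESS_PER_GB

    # With file-granular retrieval: pull only the file itself.
    cost_single_file = file_gb * EGRESS_PER_GB
    ```

    Under these assumptions the whole-bundle restore costs a thousand times more in egress than fetching the one file that was needed.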

    A metadata engine gives enterprises powerful capabilities that enable IT to meet the challenges of rapid growth of diverse data, including an explosion in the volume of unstructured data. It does so by transforming new and existing NAS resources into an immensely powerful scale-out platform that integrates seamlessly with on-premises object stores and the cloud. These capabilities enable IT to achieve the flexibility, agility, and cost saving benefits of the software-defined data center years sooner to keep their business ahead of the pack. Indeed, as a saying often attributed to Socrates goes, “the secret to change is to focus all of your energy not on fighting the old, but on building the new.”

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
    3:30p
    What IT Pros Need to Know About Multi-Cloud Security

    Brought to you by IT Pro

    Having workloads distributed across multiple clouds and on-premises is the reality for most enterprise IT today. According to research by Enterprise Strategy Group, 75 percent of current public cloud infrastructure customers use multiple cloud service providers. A multi-cloud approach has a range of benefits, but it also presents significant challenges when it comes to security.

    Security in a multi-cloud world looks a lot different from the days of securing virtual machines, HashiCorp co-founder and co-CTO Armon Dadgar said in an interview with ITPro.

    “Our view of security is it needs a new approach from what we’re used to,” he said. “Traditionally, if we go back to the VM world, the approach was sort of what we call a castle and moat. You have your four walls of your data center, there’s a single ingress or egress point, and that’s where we’re going to stack all of our security middleware.”

    At this point it was assumed that the internal network was a high-trust environment, and that inside of those four walls, everything was safe. “The problem with that assumption is we got sort of sloppy,” Dadgar said, storing customer data in plaintext and having “database credentials just strewn about everywhere.”

    Of course, IT pros can no longer assume that this is the case, and must take a different approach, particularly in a multi-cloud environment.

    Cloud connectors, APIs create more entry points for hackers

    “Now many of these organizations don’t have one data center. They don’t even have one cloud,” he said. “They may be spanning multiple clouds and within each cloud they have multiple regions, and all of these things are connected through a complex series of VPN tunnels or direct connects where the data centers are connected together on fiber lines, those things are probably tied back to their corporate HQ and the VPN back there. It’s truly a complex network topology where traffic can sort of come from anywhere.”

    Dadgar is one of the founders of HashiCorp, which launched in 2012 with the goal of revolutionizing data center management. Its range of tools – which the company has open sourced – manage physical and virtual machines, Windows and Linux, and SaaS and IaaS, according to its website. One of these tools, called Vault, “secures, stores, and tightly controls access to tokens, passwords, certificates, API keys, and other secrets in modern computing.”

    Dadgar sees Vault as one of the newer tools that security pros are looking to in place of middleware, but it’s not just technology that needs to change in a multi-cloud environment.

    “Security practitioners are trying to figure out how to bring security to [a] Wild West situation,” Dadgar said, noting that the approach of the security professional has changed as they need to work closely with developers and operators.

    “Security people are being pulled more intimately into [the] application delivery process as well as having to totally recast the set of tools they use, and take more of a service provider approach as opposed to a sort of invisible hand,” he said. “Security has to have a seat at the table, developers and operators have to be aware of it, and there’s a necessary tooling change.”

    These changes include ensuring that data is encrypted both at rest and in transit, and taking a hygienic approach to secret management, he said.
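    The “hygienic” secret management Dadgar describes — the opposite of the plaintext, credentials-strewn-everywhere pattern quoted earlier — can be sketched minimally: secrets are injected at runtime rather than hardcoded, and a missing secret is a hard failure rather than a silent fallback. This sketch stands in the secret store with environment variables purely for illustration; in practice a tool such as Vault would provision them.

    ```python
    import os

    # Minimal sketch of hygienic secret handling (illustrative only):
    # credentials are fetched at runtime, never hardcoded, and absence
    # is a hard error rather than a fallback to a default password.
    def get_secret(name: str) -> str:
        value = os.environ.get(name)
        if value is None:
            raise RuntimeError(f"secret {name!r} not provisioned")
        return value

    # In a real deployment this value would be injected by a secret
    # manager; it is set inline here only so the sketch is runnable.
    os.environ["DB_PASSWORD"] = "example-only"
    password = get_secret("DB_PASSWORD")
    ```

    The fail-closed behavior is the point: a misconfigured deployment stops loudly instead of running with sloppy defaults.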

    “One of the things that kind of protected us in the old world was it was a lot more obvious when you were making a mistake when you physically had to rack and stack servers and move cables around,” Dadgar said. “Now that we’re in the cloud world and everything is an API, it’s not so obvious what’s happening. If I make a slight change to configuration it’s not necessarily obvious that this is bypassing my firewall.”

    One example of this is the recent OneLogin security breach, in which customer data was compromised and hackers were able to decrypt encrypted data. OneLogin, a San Francisco-based provider of single sign-on and identity management for cloud-based applications, said “a threat actor obtained access to a set of AWS keys and used them to access the AWS API from an intermediate host with another, smaller service provider in the US.”

    In a post-mortem on its blog, OneLogin said, “Through the AWS API, the actor created several instances in our infrastructure to do reconnaissance. OneLogin staff was alerted of unusual database activity around 9 am PST and within minutes shut down the affected instance as well as the AWS keys that were used to create it.”

    Security common sense, policies still have a place in multi-cloud

    David Gildea has worked in IT for 17 years, and is the founder and CEO of CloudRanger, a SaaS-based server management provider based in Ireland. He said that enterprises often don’t know that they must take the same precautions with cloud vendors as they do with their data center and on-premises IT. Part of this is ensuring that the vendors they work with have just enough access to get their job done, he said.

    “If you have access to this one tool and it gets compromised then it’s a huge problem for enterprises,” Gildea said.

    Part of the problem that he sees is that enterprises don’t have the right security policies in place when they enter the cloud, and then the problem is perpetuated as more workloads are spun up across clouds.

    “What happens over and over again is you get this proof of concept that turns into a production application and there are no standards or policies set at the very beginning so things start off on a bad footing and that spreads to other clouds,” he said.

    Along with the lack of security policies there is also a lack of testing, Gildea said.

    “What we see [is that] things just aren’t tested; business continuity, for example. Everyone has backups in some way, shape, or form, but what happens is it’s not tested. There is a great assumption there that the cloud is going to do everything for you.”

    This article originally appeared on IT Pro.

    4:12p
    Gartner: Amazon Isn’t Only E-Commerce Giant with Cloud Prowess

    Brought to you by IT Pro

    The cloud services arm of Chinese e-commerce giant Alibaba is emerging as a company to watch in the Infrastructure as a Service market in the U.S., according to the latest Magic Quadrant for IaaS report from Gartner.

    While Amazon Web Services maintains its runaway lead, followed by Microsoft and then Google, Alibaba Cloud is ranked fourth in terms of ability to execute.

    This is the first time Alibaba Cloud has been included on the Magic Quadrant for cloud IaaS, according to CNBC.

    Like Amazon, Alibaba’s primary business is e-commerce; according to a recent report by Bloomberg, Alibaba’s e-commerce business makes up more than 82 percent of sales. But Alibaba’s cloud business is growing at almost twice the rate of its core business, reaching almost $1 billion last year from $63 million in 2012.
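    The growth figures above imply a striking compound rate, which is easy to check. This assumes “last year” means 2016 (four years after 2012) and treats the reported revenues as round numbers.

    ```python
    # Sanity check on the cloud revenue growth cited above: ~$63M in 2012
    # to ~$1B in 2016 implies roughly doubling every year. The year span
    # and round figures are assumptions drawn from the article.
    start_revenue = 63e6   # 2012
    end_revenue = 1e9      # "last year" (assumed 2016)
    years = 4

    cagr = (end_revenue / start_revenue) ** (1 / years) - 1
    ```

    The result works out to a compound annual growth rate of roughly 100 percent, consistent with the claim that the cloud business is growing far faster than Alibaba’s core e-commerce business.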

    Alibaba launched its cloud in the U.S. last year, and has been adding data centers to grow its infrastructure footprint. This week it announced at a conference in Shanghai plans to open data centers in India and Indonesia by March 31, 2018.

    According to Gartner, Alibaba’s international offering does not have the same features or level of performance as its China offering, which could hurt its reputation and growth prospects in the U.S. In addition, it “has very little in the way of unique differentiation compared to other hyperscale providers,” taking “liberal inspiration from competitors when developing service capabilities and branding.”

    Also, customers in the U.S. could be hesitant to select Alibaba because it is a Chinese company, Gartner says, and there are concerns around controversial legislation such as China’s Cybersecurity Law, which went into force this month.

    Gartner said that AWS is most commonly used for strategic, organization-wide adoption, while Microsoft is seen as more of a strategic enterprise cloud partner, according to a report by ZDNet. Google, on the other hand, is a good option for cloud-native companies.

    An April report from Synergy Research said that in Q1 2017, Microsoft, Google and Alibaba achieved annual growth rates exceeding 80 percent.

    This article originally appeared on IT Pro.

