Data Center Knowledge | News and analysis for the data center industry

Thursday, July 23rd, 2015

    12:00p
    The IoT Standards Wars

    For the Internet of Things to reach its full potential, a single communications standard is needed. A range of devices and “things” need to be able to communicate with the cloud and, perhaps more importantly, with one another. Several early standards and frameworks have emerged in an attempt to define how that interconnection will work.

    The problem is that consortia, foundations, and standards are multiplying, and these consortia are, somewhat paradoxically, competing to be the most open and interoperable.

    The Internet of Things requires an agreed-upon communications standard so that the information generated by devices can be shared and cross-pollinated to create new and useful cross-functionality. Data isn’t as useful when it exists in silos.

    A single standard will enable all these devices from different manufacturers to communicate with one another and with clouds, both public and proprietary, and to do it securely and privately. The Open Interconnect Consortium and AllSeen Alliance both want there to be a single standard for IoT. The problem is they don’t necessarily agree with one another, and each insists that it should be the standard.

    OIC and AllSeen are two of the more recently active competing standards among several. Both recently announced impressive membership gains and market traction.

    While they are both member organizations, OIC was born out of Intel, and AllSeen was born out of Qualcomm. Both insist that their way is the right way. So who will be VHS and who will be Betamax?

    Recent membership gains for AllSeen include IBM and Pivotal, while OIC added IBM and National Instruments. Not only are there multiple consortia, the member companies also frequently join multiple consortia.

    “We’re seeing companies join multiple consortia and placing multiple bets, evaluating from the inside,” said Gary Martz, a member of the OIC marketing team and Intel product manager.

    Both Martz and AllSeen senior director Philip DesAutels believe that one standard will eventually emerge.

    OIC, the younger of the two, has seen a hockey-stick growth curve with its membership and just released its 0.9.1 specification. The core 1.0 specification will be submitted for review at the end of August.

    AllSeen’s framework is the open-source AllJoyn. “AllJoyn’s goal as a software project is to create and maintain and deliver production-ready code,” said DesAutels. “It’s a very mature framework that has been around for about five years and is in tens of millions of products.”

    DesAutels added that AllJoyn will likely hit a billion devices in about a week, since the open source code is bundled into Windows 10.

    Where AllSeen and OIC Disagree

    AllSeen focuses on device-to-device communication, starting in the home and extending outward. OIC grew out of the business applicability of IoT and is extending into the consumer space.

    AllSeen is different, said DesAutels, because it’s completely open, will never require a specific vendor’s implementation, and focuses on product-to-product communication without requiring a cloud in the middle.

    Martz said the right approach is a combination of open source and industry specifications. Intel’s interest in IoT standards is simply creating markets, he said.

    “Open source will help speed solutions to market, while the industry specification means we can go to other standards bodies and make liaison agreements,” said Martz. “The consumer space can be quicker and looser about security, privacy, and authentication. The enterprise space needs a different approach.”

    So who’s right?

    IoT and Data Centers

    Data center service providers care about IoT because more devices mean more data, more bandwidth, and ultimately more backend infrastructure. However, both IoT standards organizations believe there are misconceptions out there.

    Martz said that while large quantities of data are being collected, the impact on data centers is misrepresented. The storage footprint is very small for the kinds of data being generated. However, he added that data centers really benefit when it comes to manipulating and working with the data through analytics.

    AllSeen’s Philip DesAutels believes the data creation angle is a misconception altogether.

    “It’s about device-to-device and product communications,” said DesAutels. “We want things in your house to talk through APIs with one another and interact in safe ways. This orchestration requires things to talk locally, openly, through a robust protocol,” as opposed to constantly reporting back to the data center.

    DesAutels gives an IoT-enabled lightbulb as an example. A consumer doesn’t want to turn on that lightbulb by opening up a smartphone, having that smartphone talk to a cloud on the opposite coast, which then remotely turns on the light in a few seconds.

    “That’s not a world that anybody wants,” he said. “That’s a reflection of where we were several years ago with IoT.”

    Instead, IoT’s value is about what he dubs “accidental orchestration.” As more devices are IoT-enabled and standardized, they will talk to other devices for cool, useful functionality, much of which hasn’t even been dreamed of yet. A large part of the future of IoT cuts out the data center.

    A small example of accidental orchestration, said DesAutels, is when he starts watching a movie on Hulu: communication occurs locally between devices to dim the lights in the room automatically – no need for a cloud in the middle.

    Another example could be a link from the fridge to an automatic water control to shut off the water main and send a text message if there’s a potential flooding problem; flooding is the number-one home insurance claim.

    “Accidental orchestration is those kinds of thoughtful use cases where pieces stitch together that matter,” said DesAutels.
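
    To make the pattern concrete, here is a toy sketch in Python of how such local, device-to-device orchestration might look. It assumes a hypothetical in-home event bus and made-up event names; it is not the AllJoyn API, just an illustration of devices reacting to one another without a cloud round-trip.

        # A toy local event bus illustrating "accidental orchestration":
        # devices publish events on the home network and other devices react
        # locally, with no cloud in the middle. The bus, event names, and
        # handlers are hypothetical; this is not AllJoyn code.
        from collections import defaultdict

        class LocalBus:
            def __init__(self):
                self.subscribers = defaultdict(list)

            def subscribe(self, event, handler):
                self.subscribers[event].append(handler)

            def publish(self, event, **payload):
                for handler in self.subscribers[event]:
                    handler(**payload)

        bus = LocalBus()

        # Rule 1: when any media player starts playback, dim the room lights.
        bus.subscribe("media.playback_started",
                      lambda room, **_: print(f"Dimming lights in {room} to 20%"))

        # Rule 2: when a leak is detected, shut the water main and send a text.
        def handle_leak(location, **_):
            print(f"Closing water main (leak detected at {location})")
            print("Texting homeowner: possible flooding")

        bus.subscribe("water.leak_detected", handle_leak)

        # Simulated device events, all handled on the local network.
        bus.publish("media.playback_started", room="living room", source="Hulu")
        bus.publish("water.leak_detected", location="kitchen fridge line")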

    Accidental orchestration doesn’t cut the data center out of the IoT world altogether.

    “Now if I was the company making that bulb, I have some problems to deal with,” said DesAutels. “I have an ERP to track manufacturing, and I have to keep track of serial numbers, etc. If I sell you a couple of lightbulbs, you activate them, and I have to provision devices, warranty, service, and support; I have to look at devices over time, measuring performance over lifetime, usage, in a way that’s associated with you that respects your privacy and has no gaping security holes.”

    There will be a ton of activity generated on the backend to make IoT seamless from a user perspective. There will also be frequent communication with data clouds for uses like telemetry in cars or a thermostat getting weather information.

    AllSeen isn’t solely device-to-device focused, said DesAutels. It has a gateway agent, a bridging technology that connects the local network to the world. The gateway agent also performs device management and is the gatekeeper for uses like telemetry.

    There is also a device system bridge, contributed by Microsoft. “The world is filled with other networks,” said DesAutels. The device system bridge provides connectors for proprietary, specialized systems, such as BACnet and EnOcean.

    There’s a bit of foundation fatigue out there. There are foundations for everything. It can be frustrating when there are several competing IoT standards with ultimately the same aim; in this case, making sure the Internet of Things works and everything can communicate through overarching IoT standards.

    3:00p
    At Facebook Data Centers, New Protocol Helps Add Servers Faster

    Things move fast at Facebook. The company has been known to have new engineers write and deploy new features to the platform on their first day on the job.

    Combined with the rate of growth of its user base, this kind of application-development speed requires a level of nimbleness on the part of the social network’s infrastructure team that’s rarely seen in enterprise data centers. Shortening deployment time is a constantly moving target for both software and data center capacity at Facebook.

    The company has developed a blocks-of-Legos approach to building out its data centers to shorten construction time, and a lot of the hardware design work that’s now open source through the Open Compute Project has to do with shortening the time it takes to mount a new server onto a rack.

    One of the Facebook data center team’s latest projects had to do with shrinking the time between the point a server is physically installed in a data center and the point it comes online and starts crunching numbers. The project was to switch from an old implementation of the Dynamic Host Configuration Protocol to a new one.

    When a new device is connected to a network, DHCP is used to assign it an IP address from a defined range configured for that network. It’s a small but important function in IT infrastructure management that has proven to make a big difference in the time it takes to deploy new servers in Facebook data centers.
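
    As a rough illustration of that function – a minimal conceptual sketch, not the code of ISC DHCP, Kea, or Facebook’s deployment – a DHCP server keeps a pool of addresses for each network and hands an unused one to each new device:

        # Conceptual sketch of DHCP address assignment: hand out an unused
        # address from the range configured for a network, and let a device
        # that asks again keep the address it already has. Illustrative only.
        import ipaddress

        class TinyDhcpPool:
            def __init__(self, cidr, first, last):
                hosts = list(ipaddress.ip_network(cidr).hosts())
                self.free = hosts[first - 1:last]   # e.g. .10 through .200
                self.leases = {}                    # MAC address -> IP address

            def request(self, mac):
                if mac not in self.leases:          # renewals keep their address
                    self.leases[mac] = self.free.pop(0)
                return self.leases[mac]

        pool = TinyDhcpPool("10.1.2.0/24", first=10, last=200)
        print(pool.request("0c:c4:7a:aa:bb:cc"))    # 10.1.2.10
        print(pool.request("0c:c4:7a:dd:ee:ff"))    # 10.1.2.11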

    The code for the previous implementation, called ISC DHCP (ISC stands for Internet Systems Consortium), has been around for close to 20 years. The new alternative, called Kea, is more appropriate for today’s IT, which, especially in the case of a company like Facebook, bears little resemblance to IT of 20 years ago.

    Facebook recently switched from ISC DHCP to Kea in its data centers and as a result has seen major improvements in the speed with which its sys admins are able to spin up new servers, according to a blog post by Angelo Failla, a Facebook production engineer.

    The company uses the protocol to install the operating system on new servers and to assign IP addresses to out-of-band management interfaces. These are interfaces sys admins use to monitor and manage servers remotely, whether or not they are switched on or have an OS installed.

    The old model was simply too slow for Facebook’s scale and pace of change. Techs working in its enormous data centers constantly add or replace things like network cards or entire servers, and every change could take up to three hours to propagate, slowing down repair times.

    ISC DHCP has been one of the biggest reasons it’s been so slow. To add or replace a part, the techs would load a static configuration file into DHCP servers, but the servers would have to restart to pick up the changes. “The DHCP servers were spending more time restarting to pick up the changes than serving actual traffic,” Failla wrote.

    With the help of Kea, the new setup stores things like host allocations and subnets in a central inventory system. When a DHCP server needs to be deployed or changed, the system grabs the configuration information from the inventory. This means there’s no longer a need to generate static configuration files and reload DHCP servers for changes to take effect.
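
    The sketch below illustrates that pattern in Python. It is not Facebook’s actual implementation – the production code is a hook written against Kea’s C++ hooks API – and the inventory endpoint, JSON fields, and function names here are invented for illustration.

        # Hypothetical sketch of config-less DHCP: instead of regenerating a
        # static config file and restarting the server, answer each request by
        # looking the device up in a central inventory. Endpoint and field
        # names are assumptions, not Facebook's or Kea's real interfaces.
        import json
        from urllib.request import urlopen

        INVENTORY_URL = "http://inventory.example.internal/api/host"  # assumed endpoint

        def lookup_assignment(mac):
            """Return (IP, subnet, boot image) for a device, straight from inventory."""
            with urlopen(f"{INVENTORY_URL}?mac={mac}") as resp:
                host = json.load(resp)
            return host["ip"], host["subnet"], host["boot_image"]

        def handle_dhcp_request(mac):
            ip, subnet, boot_image = lookup_assignment(mac)
            # Hand these back to the DHCP engine; no config reload is involved,
            # so an inventory change is visible to the very next request.
            return {"yiaddr": ip, "subnet": subnet, "filename": boot_image}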

    Facebook’s new DHCP application runs in Tupperware, which is the company’s own Linux container technology that’s similar to Google’s Borg, according to Failla.

    The old model was also less resilient. The company used to have two redundant DHCP servers per cluster of servers, but if both of them went down, the cluster suffered.

    The new approach is a virtual cluster of DHCP servers distributed across the network. They manage a common pool of IP addresses, and any virtual DHCP machine can assign an address to any other device on the network. This way, if local DHCP servers in a cluster fail, the system can recover faster.

    With the new stateless design, it takes one or two minutes to propagate changes in the system, Failla wrote, which is more than an incremental improvement from three hours.

    3:30p
    When Self-Service BI Actually Means Self-Service

    Chris Neumann is Founder and Chief Product Officer for DataHero.

    Ease-of-use is paramount for data analysis in today’s fast moving, data-driven business landscape. However, the assumption tied to Business Intelligence (BI) is that the tools are designed solely for analysts, and are rarely consumable by the typical business user. To debunk this perception, companies are largely advertising their solutions as being self-service. But how much credibility do these claims hold? How do we truly help the majority of the business population that needs to work with data, but aren’t trained to do so?

    The term “self-service” is most often used as marketing speak and fails to live up to the hype. Self-service should be an approach to data analysis that grants users the access and ability to work with the information they need to analyze without depleting IT resources and time.

    We live in an era where applications don’t require users to be experts – take Dropbox and SurveyMonkey as examples. Anyone can sign up, intuitively navigate these tools, and get value immediately. As a result, companies are able to significantly improve both internal operations and how they conduct business externally. Each person is able to accomplish more in a shorter period of time, because the services at hand don’t require training or a strong understanding of the technology.

    Despite this trend, data solutions have lagged, still requiring users to be analysts or data experts, even though everyone now has access to data from a variety of sources. Traditional BI solutions work with data stored in a data warehouse but ignore the data outside of the centralized system. The issue with this approach is that data not only lives in warehouses, but also in the services business users rely on every day – like Microsoft Excel, Google Analytics, Salesforce, HubSpot and more. Why not empower the “average Joe” to derive value from the data no matter where it sits? Digital marketers, sales associates and customer reps are only a snapshot of the professions that can use data to improve their jobs and make important data-driven decisions.

    Cloud software is overwhelmingly departmental in nature. Oftentimes, different departments’ requests for reports from IT may not take priority, particularly for smaller teams. When using traditional BI tools, it is difficult to justify the cost and time required for IT to pull data from cloud services to analyze specific data sets for an individual department.

    BI solutions need to rely less on human intervention, and more on automation to eliminate the common pain points that deter business users. Solutions that normalize and classify data automatically from disparate cloud services eliminate custom ETL (extract, transform, load) and ease the burden on IT. By automating ETL and leveraging machine learning classification engines, self-service data analytics becomes a reality. This, in turn, empowers everyday business users to make sense of their data.
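
    As a toy illustration of the kind of classification such engines perform – simple heuristics over sample values rather than any vendor’s machine-learned models, with made-up categories – consider:

        # Toy column classifier: guess what kind of data a column holds so it
        # can be charted without manual setup. Real self-service BI engines use
        # machine-learned classifiers; these regex heuristics only sketch the idea.
        import re

        def classify_column(values):
            sample = [str(v).strip() for v in values if str(v).strip()]
            if all(re.fullmatch(r"\d{4}-\d{2}-\d{2}", v) for v in sample):
                return "date"
            if all(re.fullmatch(r"[$€£]\s?-?[\d,]+(\.\d+)?", v) for v in sample):
                return "currency"
            if all(re.fullmatch(r"-?[\d,]+(\.\d+)?%?", v) for v in sample):
                return "number"
            if len(set(sample)) <= max(1, len(sample) // 2):
                return "category"              # few distinct values -> a dimension
            return "text"

        print(classify_column(["2015-07-01", "2015-07-02"]))      # date
        print(classify_column(["$1,200.50", "$980"]))             # currency
        print(classify_column(["West", "East", "West", "East"]))  # category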

    The more self-service tools on the market, the less time IT spends on responding to ad-hoc requests from individuals or departments. As a result, IT teams can focus their efforts on effective strategic IT planning and oversight of performance.

    In reality, data analysis shouldn’t be difficult. Most of us know the questions we want answered; what we need are solutions that are truly self-service. Luckily for businesses, solutions like Dropbox are paving the way for self-service to become more of the norm across all types of tools and applications. Now, it’s time for companies to view data analysis in the same light.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    Canadian Telecom Shaw Communications to Offer Cloud Services Out of New Data Center


    This article originally appeared at The WHIR

    Shaw Communications announced it will offer enterprise cloud and data management services out of a new data center in Calgary beginning this fall. Shaw will offer cloud hosting, colocation, and managed services from its new 40,000 square foot facility to provide a range of hybrid options for business customers.

    Shaw leveraged ViaWest in building the new data center, its first since it acquired the Colorado-based cloud and data center provider for $1.2 billion a year ago.

    “Businesses across the country are becoming increasingly reliant on more complex, and always-on applications to interact with their customers, which requires an increased level of IT resource costs and support,” said Ron McKenzie, Senior Vice President, Shaw Business. “The launch of our new Calgary data centre will allow us to help organizations meet these expanding IT needs by providing fully configurable hybrid solutions that will simplify functions, manage costs and help our customers grow their business.”

    The Calgary data center will operate with a projected power usage effectiveness of 1.4 and will feature high-density pods for demanding customer applications, cutting-edge power delivery equipment, energy-efficient cooling, onsite security with multiple security zones, and customer workstations and meeting rooms. It will be used to offer solutions within the Canadian market, but also to extend ViaWest’s footprint to benefit its US customers.

    Consumer telecommunication services have made up the majority of Shaw’s revenue in the past. More recently, ViaWest and, to a lesser extent, the Shaw Business unit have grown faster than the company’s consumer and media divisions, the Globe and Mail reports. ViaWest and Shaw Business make up roughly 15 percent of Shaw’s EBITDA, but the company reported a significant decrease in profit in June due to cord-cutting.

    ViaWest acquired security and compliance company AppliedTrust in June.

    This first ran at http://www.thewhir.com/web-hosting-news/canadian-telecom-shaw-communications-to-offer-cloud-services-out-of-new-data-center

    7:02p
    Intel, Rackspace to Build Beachhead for OpenStack’s Next Phase in Texas

    Recognizing the huge role Rackspace has played in the birth and development of OpenStack, Intel has partnered with the Windcrest, Texas-based data center and cloud services outfit on a new initiative to get OpenStack into more enterprise data centers – tens of thousands more.

    More and more enterprises have been dabbling in OpenStack, and some, including companies like PayPal, Walmart, and Bloomberg, have used it to stand up private clouds that run production workloads. But it’s not easy at this stage in OpenStack’s evolution.

    “Bloomberg is running a ton of services on OpenStack, but they spend a lot of time adding to the actual source [code] itself to make it enterprise-ready,” Jason Waxman, VP and general manager of Intel’s Cloud Platform Group, said on a call with reporters today.

    Today, the financial data and news giant’s use of OpenStack for customer-facing production services is limited to relatively small web services, John O’Connor, manager of data center operations at Bloomberg, said in an interview. The company uses OpenStack primarily for development, but the goal is to “aggressively move legacy applications in that direction,” he said.

    Large OpenStack Dev Center Coming to San Antonio

    OpenStack adoption by telcos and cloud service providers is further along than enterprise OpenStack adoption, and Rackspace operates the biggest OpenStack cloud of them all. To speed up the process of making the open source software more enterprise-friendly, Intel and Rackspace are planning to build an OpenStack Innovation Center in San Antonio, Texas, where a large dedicated OpenStack developer team will sit side-by-side with the Rackspace techs who operate a real-life OpenStack cloud day in and day out.

    “This is not yet another [OpenStack] distribution,” Waxman said. The developers will focus on adding enterprise features to the core open source code base and improving existing ones.

    Some of the center’s initial goals include improving scheduling and networking features, adding Linux container services, and fixing bugs in general.

    The most successful enterprise OpenStack clouds today scale to a few hundred nodes, according to Waxman. The initiative’s goal is to enable enterprise clouds that scale to thousands of nodes.

    Intel and Rackspace plan to stand up two 1,000-node server clusters that will be available to developers free of charge so they can test and validate OpenStack code. Expected to come online within the next six months, the environment will be the largest OpenStack development cloud in the world, Waxman said.

    Intel Says Cloud Adoption Not Fast Enough

    Intel’s investment in the center is part of a broader initiative Intel announced today to speed up adoption of cloud computing in general. The company estimates that half of the world’s applications run on cloud infrastructure now and expects 85 percent of them to run in the cloud by 2020, Diane Bryant, senior VP and general manager of Intel’s Data Center Group, said.

    “This transition to cloud is actually not happening fast enough,” she said. Impediments to growth include a rapid pace of innovation in the cloud software stack, too many options for things like hypervisors, orchestration, or developer environments, and numerous configuration choices within each of those options.

    It takes a high degree of expertise and weeks or months of work to stand up a cloud today with all the mandatory enterprise features like high availability, version control, and management features that are consistent with existing enterprise IT management systems.

    Intel’s goal is to eliminate all these impediments, and it plans to invest in companies, technology, industry collaboration, standards development, and market development to bring cloud computing into tens of thousands of data centers that don’t use it today, according to Bryant. “There will be a continued stream of investments that we will be making,” she said.

    Intel’s primary interest is to make sure cloud infrastructure of the future can take advantage of advanced features the company builds into its processors.

    The chip family that powers the majority of the world’s clouds today is Intel’s Xeon E5, which according to Waxman is in 95 percent of cloud servers deployed. There’s also growing interest in Xeon Phi processors for high-performance computing in the cloud, he said.

    Intel officials did not reveal the total size of investment under the initiative, saying only that it would be big. “We’re a big company so ‘big’ for us is big,” Waxman said.

    8:02p
    RiT Updates DCIM Software for End-to-End Network Management

    Infrastructure management software company RiT Technologies announced that its CenterMind software has been enhanced to provide end-to-end connectivity management for complex and dynamic networks. With the goal of enforcing best-practice policies, improving change management, and reducing operational costs, the new capabilities comply with the latest AIM (Automated Infrastructure Management) standard.

    RiT’s CenterMind DCIM software seeks to aid data center managers with asset management and energy efficiency through real-time monitoring, capacity planning, and provisioning tools, as well as a holistic view of the data center.

    RiT says the new edition of the software provides a detailed connectivity map to facilitate rapid execution of moves, adds, and changes with full information requirements to minimize errors and delays. The company said these features enable data center managers to quickly detect points of failure, prevent downtime, and identify underutilized equipment.

    RiT’s VP of products Kobi Haggay said the enhancements help “IT managers perform change management on virtual networks by taking the guesswork out of troubleshooting, maintenance, decommissioning, and executing new deployments.” He added that “in addition to introducing new resource efficiencies, connectivity management is critical for organizations to address the challenges of dynamic virtual environments.”

    10:28p
    Report: Second Alibaba Cloud Data Center Coming to US

    Ramping up its push into the US cloud services market, Chinese internet giant Alibaba’s cloud arm Aliyun is planning a second cloud data center on US soil. The announcement comes about four months after the company announced the launch of its first US data center in Silicon Valley.

    The global cloud service market is enormous and growing, and the biggest share of the revenue comes from customers in the US. One recent analyst estimate pegged total annual cloud-services revenue at more than $16 billion on a worldwide basis.

    Intel estimates that about half of applications deployed today run on some kind of cloud infrastructure. By 2020, the company expects 85 percent of applications to run in the cloud, which includes both public and private clouds.

    As the cloud market grows, it also drives a lot of business for data center service providers. Aliyun’s Silicon Valley expansion was in a service-provider data center, although the company did not disclose who that service provider was.

    In interviews, data center providers consistently name cloud as one of the biggest demand drivers for their services. Stewart Thompson, director of real estate for the Americas region at Equinix, for example, said the world’s largest data center service provider has had “a lot of success building out space for cloud providers.”

    Aliyun, China’s largest cloud service provider, plans to launch its second US cloud data center in several months as part of a global expansion plan, Li Jin, senior product director at Aliyun, said at a press conference in Beijing this week, according to a China Daily report. He did not disclose any more details about the plan, saying only that it also included opening cloud data centers in Singapore, Japan, and Europe.

    Another part of the plan is building out a global content delivery network. Aliyun already operates the largest CDN in China.

    Also on Wednesday, Aliyun announced a Data Protection Pact, which is a formal commitment to protecting customers’ data privacy.

    11:15p
    Trend Micro Adds Cloud, Data Center Security Platform to Azure Marketplace

    Security software vendor Trend Micro has made its Deep Security cloud and data center platform available on the Microsoft Azure Marketplace to help customers protect their Windows applications and services, our sister site Talkin’ Cloud reported.

    The addition is expected to give Microsoft customers an easy way to set up and deploy security for their cloud workloads and improve existing security options within Azure, according to the announcement.

    The entire article can be found at: http://talkincloud.com/cloud-computing-security/trend-micro-brings-deep-security-solution-azure-marketplace

