Data Center Knowledge | News and analysis for the data center industry

Friday, October 2nd, 2015

    12:00p
    Microsoft Intros Private Azure Government Cloud Connectivity

    Among the slew of announcements Microsoft made during its AzureCon event earlier this week was that ExpressRoute, the service that connects customers’ data centers directly and privately to Azure cloud servers, is now available to government agencies using Azure Government. Those are the cloud availability regions hosted in data center halls built specifically for government clients, isolated both physically and logically from the infrastructure that supports Azure services for the private sector.

    Direct network connectivity to cloud services is a quickly growing market, because it lets users take advantage of the infrastructure flexibility cloud provides without sacrificing application performance or increasing attack exposure, as they would by using cloud services over the public internet. Previously available to customers using Azure’s non-government availability regions, the service is now open to agencies that opt to host their cloud infrastructure in data centers designed specifically for government customers and staffed by personnel who undergo special security screening.

    Microsoft data centers that host Azure Government, launched in December 2014, are in Virginia and Iowa, but ExpressRoute service for the government cloud is available out of Equinix data centers in Chicago and Ashburn, Virginia. Direct connectivity between Equinix and Microsoft data centers and, if necessary, agencies’ own facilities is enabled by network carriers AT&T, Verizon, and Level 3.

    Microsoft chose Ashburn because of its proximity both to the company’s own Azure data center in Virginia and to Washington, D.C., John Harvey, director of business development for national cloud programs at Microsoft, said. Chicago is close to the Microsoft data center in Iowa, and he expects most ExpressRoute customers in Ashburn to use Chicago as the fail-over site.

    Gov. Cloud Traction with Local Law Enforcement

    Driven by a number of IT reform initiatives launched over the past five years, demand for cloud services among federal agencies is high. Microsoft, because of its long history as a vendor to the federal government, has gotten a lot of traction in that market, and so have Amazon and VMware.

    However, Azure Government is for all public-sector agencies, not just the federal government. Microsoft is working with Riverside County officials in California, for example, to migrate infrastructure from the county’s own data center to the government cloud.

    Together with Vievu, a body-worn camera vendor, Microsoft is in talks with the Oakland Police Department in California about setting up cloud infrastructure to transport and store video shot by police officers’ body cameras, Harvey said.

    Interest in police body cameras is on the rise, following recent police violence incidents around the US that sparked mass protests. If more police officers wear body cameras, however, police departments will have to make major investments in IT infrastructure to transport and store the video those cameras collect. Cloud infrastructure, accessible through trustworthy private connections, can be an effective solution for them.

    “We are working with a number of agencies directly on body camera initiatives,” Harvey said.

    Generally, Microsoft has put a lot of effort into courting local law enforcement agencies as cloud customers. Earlier this year, the International Association of Chiefs of Police issued a formal recommendation that cloud storage services for all criminal-justice data, including video, comply with the Criminal Justice Information Services (CJIS) Security Policy devised by the FBI. Microsoft, in response, has made sure Azure Government complies with the FBI’s guidelines.

    Answers to the Big Cloud Questions

    Harvey expects most government agencies that use Azure Government to opt for ExpressRoute. Security is an important driver, but it’s also about performance: core enterprise applications simply don’t perform well enough when accessed over the public internet.

    “When you start talking about enterprise workloads, having that level of connectivity is going to allow you to do more interesting things,” he said.

    Brian Hoekelman, VP of business and cloud ecosystem development at Level 3, one of the network service providers enabling ExpressRoute for Azure, said enterprises usually start using Azure over the internet for development and testing, but once they’re ready to deploy in production, they turn to ExpressRoute. The dynamics are similar for AWS, which has a similar service called Direct Connect, he said.

    ExpressRoute takes more time and money to set up than simply provisioning VMs through a web browser. But “the performance benefits outweigh the flexibility of going over the internet,” Hoekelman said.

    And enterprises seem to be catching on. For service providers like Level 3 and Equinix, providing direct private links to public cloud services is a new and rapidly growing source of revenue. “From a [business] unit percentage growth perspective, it’s one of our fastest growing products,” Hoekelman said.

    Security and performance were the two most frequently cited impediments to adoption of public cloud services by enterprises about five years ago, when cloud hype was really picking up. The standard line was that while infrastructure elasticity and the pay-as-you-go model of public cloud were attractive, security and performance weren’t good enough for serious enterprise workloads in production; only for test and dev.

    With services like ExpressRoute, Direct Connect, or Google’s Carrier Interconnect, cloud service providers seem to have found a way to address those concerns. Customers don’t get quite the ease of provisioning VMs through a browser and paying with a credit card, since the initial setup takes some time and professional services, but what they get is close to that, and at the same time more secure and with more predictable performance.

    3:00p
    Report: Millions Wasted on Keeping IT Labs in On-Prem Data Centers

    When it comes to hosting services, there seems to be a natural inclination to think in terms of deploying production applications. But a survey of 150 IT professionals conducted by Spiceworks at the behest of Vantage Data Centers suggests that millions of dollars are being wasted on supporting labs located inside on-premises data centers.

    Vantage CEO Sureel Choksi said the primary issue appears to be a location bias that results in 94 percent of the IT professionals surveyed opting to support lab facilities in-house, even though those labs don’t require the same level of IT support as a production environment.

    “A lab doesn’t need the same level of resiliency as a production application environment,” said Choksi. “Companies can save a lot of money by taking a more hybrid approach to making use of hosting services.”

    Vantage, a Santa Clara, California, data center provider, has recently been marketing its facilities as IT lab space. Citing expensive office real estate in Silicon Valley, the company’s executives have said demand for lab space outside of corporate office buildings has been rising.

    Its efforts so far have paid off. Vantage has leased lab space to at least two tenants in Santa Clara: one is security software giant Symantec; the other’s name has not been disclosed. Both were large multi-megawatt deals.

    Respondents to the survey stated that challenges they experience when it comes to data center labs include management and maintenance (41 percent), the cost of infrastructure (39 percent), the physical space required (34 percent), and the time required to configure and deploy the labs (33 percent).

    For the most part, labs are used either to create applications or to test how various pieces of IT infrastructure should be integrated. Choksi said service providers like Vantage give organizations the option to outsource those lab facilities to a secure third-party data center without having to pay for all the power requirements of a typical production environment.

    Nearly two-thirds of respondents reported that their organizations’ R&D labs’ IT infrastructure runs in data center space shared with other application environments. In addition, 62 percent of respondents are deploying IT hardware for research and development in a commercial office, which is typically more expensive than using space provided by a hosting service provider.

    In fact, Choksi noted, many organizations not only move their research and development labs into more secure hosting facilities, they frequently relocate the research and development staff to those environments as well.

    Overall, survey respondents said the most important factors when considering a data center lab environment include security (62 percent), reliability (54 percent), ease of management (49 percent), flexibility (49 percent), TCO (47 percent), and ease of deployment (47 percent).

    3:30p
    Friday Funny: Hole in the Wall

    Gotta love construction at the office…

    Here’s how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon, and we challenge our readers to submit the funniest, most clever caption they think will be a fit. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon.

    Congratulations to Darrell, whose caption won the “Ceiling” edition of the contest. Darrell’s caption was: “How do we get down without spilling our coffee?”

    Lots of submissions came in for the “Server pileup” edition – now all we need is a winner. Help us out by submitting your vote below!

    Take Our Poll
    For previous cartoons on DCK, see our Humor Channel. And for more of Diane’s work, visit Kip and Gary’s website!
    4:00p
    Weekly DCIM Software News Update: October 2

    Geist advances its Environet Facility software to version 4.4; ZPE Systems launches NodeGrid 3.0; and Vapor IO says that it has partnered with Applied Micro to power its Edge Controller with 64-bit ARM chips.

    Geist releases version 4.4 of Environet Facility. Geist announced version 4.4 of its Environet Facility DCIM software, featuring a new Analytics engine and refreshed interface. The new release also contains an updated RESTful API, which provides a simple, reliable, and scalable solution for sharing information between multiple software systems.

    ZPE Systems launches NodeGrid 3.0. ZPE Systems announced the availability of version 3.0 of its NodeGrid software, featuring Zero-Touch Provisioning and Environmental and Power Monitoring built on NodeStash, which provides data collection, correlation, natural-language search, and dashboards. NodeStash continuously measures critical data points of all managed devices in a multi-vendor IT infrastructure.

    Vapor IO server management controller to be powered by ARM chips. Vapor IO announced that it has partnered with Applied Micro, whose 64-bit ARM processor will be the brain of the Vapor Edge Controller, a centralized, shared top-of-rack server management controller meant to replace the proprietary Baseboard Management Controller in each individual server in the rack.

    5:31p
    Unusual Malware May Infect IoT Devices to Protect Them: Symantec

    This article originally appeared at The WHIR

    Symantec has been tracking an unusual piece of malware that targets Internet of Things (IoT) devices. Called Linux.Wifatch, it appears to be used to secure the devices it infects rather than to recruit them for malicious activities.

    According to a blog post by Symantec on Thursday, most of Wifatch’s code is written in Perl and it targets several architectures, shipping its own static Perl interpreter for each of them. When a device is infected, it connects to a peer-to-peer network that distributes threat updates.

    What’s unusual, according to Symantec, is that the code does not ship any payloads used for malicious activities, such as DDoS attacks.

    Symantec recommends users reset an infected device to remove the Wifatch malware; however, devices could become infected again. Users should keep their devices’ software and firmware up to date and change default passwords.

    As the number of IoT devices grows, so does the variety of security threats. IoT will likely force changes in policy and security practices at most organizations: in a survey last year, 55 percent of IT decision makers at US SMBs said they expect new security threats, and the extension of existing threats to new devices, to be a major concern.

    Mario Ballano of Symantec said his team has been “monitoring Wifatch’s peer-to-peer network for a number of months” and has “yet to observe any malicious actions being carried out through it.”

    “Wifatch not only tries to prevent further access by killing the legitimate Telnet daemon, it also leaves a message in its place telling device owners to change passwords and update the firmware,” he said.

    The author chose not to obfuscate the Perl code, suggesting that they aren’t worried about others being able to inspect it.

    Although it does seem to be unlike most malware, Symantec said Linux.Wifatch is still a piece of code that infects a device without user consent. Symantec will continue to keep “a close eye on Linux.Wifatch and the activities of its mysterious creator.”

    It is estimated that Wifatch’s network includes tens of thousands of devices, with 32 percent of infected devices in China, and 16 percent in Brazil. Only 5 percent of infected devices are in the US.

    Development of IoT is more advanced in Asia Pacific, according to a recent report, with 26 percent of developers in APAC likely to be working on IoT projects, compared to developers in North America (22 percent).

    The vast majority (83 percent) of infected devices run on ARM architectures.

    This first ran at http://www.thewhir.com/web-hosting-news/unusual-malware-may-infect-iot-devices-to-protect-them-symantec

    5:42p
    Amazon Adds Open Source Elasticsearch Platform to AWS Cloud

    This post originally appeared at The Var Guy

    Elasticsearch, the open source, distributed big data analytics platform, is at the center of Amazon’s newest cloud service, Amazon Elasticsearch Service, which the company rolled out this week.

    Elasticsearch is a Java-based open source framework for searching textual documents at massive scale. It is designed to be highly scalable and compatible with cluster-based distributed-computing infrastructure.

    The platform also has a rich API and web interface integration, which makes it an obvious choice for Amazon in building its newest cloud service. Now, the company offers user-friendly Elasticsearch clusters through the AWS interface.

    “You can launch a scalable Elasticsearch cluster from the AWS Management Console in minutes, point your client at the cluster’s endpoint, and start to load, process, analyze, and visualize data shortly thereafter,” AWS Chief Evangelist Jeff Barr wrote in a blog post introducing the service.
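
    To make that concrete, here is a minimal sketch of the “point your client at the cluster’s endpoint” step, using the open source elasticsearch-py client. The endpoint URL, index name, and document fields are hypothetical placeholders, and the doc_type parameter reflects the Elasticsearch 1.x/2.x API that was current at the time.

        # Minimal sketch: connect to a (hypothetical) Amazon Elasticsearch
        # endpoint, index a document, and search for it.
        from elasticsearch import Elasticsearch

        # The endpoint URL comes from the AWS Management Console once the
        # cluster is launched; this one is a placeholder.
        es = Elasticsearch(["https://search-mydomain.us-east-1.es.amazonaws.com:443"])

        # refresh=True makes the document searchable immediately, since
        # Elasticsearch indexes are only near-real-time by default.
        es.index(index="articles", doc_type="post", id=1,
                 body={"title": "Elasticsearch on AWS"}, refresh=True)

        result = es.search(index="articles",
                           body={"query": {"match": {"title": "aws"}}})
        print(result["hits"]["total"])  # prints 1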

    Elasticsearch isn’t a new technology. It has been around since 2010. And Amazon isn’t the first company to offer convenient, cloud-based Elasticsearch clusters. Google also offers “Click to Deploy Elasticsearch” on Google Compute Engine.

    Still, Elasticsearch on AWS adds another product to Amazon’s portfolio of cloud offerings. It also makes it that much easier for organizations to take advantage of Elasticsearch, an open source technology whose relevance will only grow as big data becomes increasingly important to the market.

    This first ran at http://thevarguy.com/open-source-application-software-companies/100215/amazon-adds-open-source-elasticsearch-platform-aws-cloud

    7:14p
    How to Select the Right Cloud Management Tools

    Before you deploy a cloud, you need to decide which cloud management tools you will use. They come from multiple sources: some are included natively in virtualization suites, for example, while third-party tools promise single-pane-of-glass management across numerous distributed data centers. As we’ll examine later, each kind has pros and cons, and your cloud management tool choices should be informed by the needs of your specific workloads.

    As with any technology, the ability to monitor cloud infrastructure in conjunction with the other components that depend on it will dictate just how robust the environment can be. Private, public, and hybrid clouds may each require their own set of tools.

    But there are considerations common to all major cloud management tool sets. Whatever the infrastructure components involved, administrators must have clear visibility into their environment. Good tools and monitoring software should include the following features:

    • User count. At any time, an administrator must know how many users are accessing the cloud environment, which server these users are on, and which workloads they are accessing. This type of granular control allows IT administrators to properly balance and manage the server-to-user ratio. The only effective way to load-balance cloud servers is to know who is accessing them and in what number.
    • Resource management. Deep resource visibility comes on multiple levels. As discussed, it’s important to see how physical resources within the cloud are used. This also means viewing graphs, gathering statistical information, and planning for the future. Visibility and management revolve around an administrator’s ability to see which resources are available, and where they are allocated. Improper allocation can quickly become too expensive.
    • Alerts and alarms. A healthy environment with good cloud visibility includes alerts and alarms that catch issues proactively. By catching problems before they become outages, an organization can maintain higher levels of uptime. It is also important to be able to route each alert to the administrator responsible for the affected component (see the sketch after this list). If a storage alert goes out to a server admin, the response may not be as fast as it would have been had the alert gone out to a storage administrator.
    • Failover capabilities. With good visibility comes the ability to fail over cloud servers without creating user downtime. If an error or issue is caught, administrators can fail users over to a host capable of handling that volume. In many environments, this task can be automated. If a physical host goes down, the VMs residing on the host will be safely migrated and balanced among other available servers, with alerts sent out to appropriate parties.
    • Roles and privileges. Good visibility also means having roles and privileges built into the environment. This means that the storage team has access only to cloud-based storage components, and the virtualization team can have access to VM management. This isolation of roles creates effective audit trails. It also greatly reduces the risk that a team member will make the wrong changes to the system.
    • SLA considerations. When working with a third-party provider, visibility into service-level agreements is also critical. This means monitoring uptime and environment usage. Depending on the type of SLA, different metrics matter to the administrator; this might mean tracking the number of running VMs or measuring downtime against the agreed requirements.
    • Testing and maintenance. Just as with any other infrastructure, cloud environments require maintenance and testing. Tools that help administrators with server patching, updates, and other general maintenance tasks are valuable. Planning for testing of bandwidth or failover capabilities must also be in place.
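
    To make the alert-routing idea concrete, here is a minimal, self-contained Python sketch. Everything in it is hypothetical: the team addresses, metric names, and thresholds are placeholders, and a real deployment would pull metrics from a monitoring API and page teams through an alerting service rather than print to the console.

        # Hypothetical routing table: each alert category maps to the team
        # that should be notified about it.
        ALERT_ROUTES = {
            "storage": "storage-admins@example.com",
            "compute": "server-admins@example.com",
        }

        # Placeholder thresholds; real values depend on the workload.
        THRESHOLDS = {"cpu_percent": 90, "disk_percent": 85}

        def check_host(host_name, metrics):
            """Compare one host's metrics against thresholds, return alerts."""
            alerts = []
            if metrics.get("cpu_percent", 0) > THRESHOLDS["cpu_percent"]:
                alerts.append(("compute", f"{host_name}: CPU at {metrics['cpu_percent']}%"))
            if metrics.get("disk_percent", 0) > THRESHOLDS["disk_percent"]:
                alerts.append(("storage", f"{host_name}: disk at {metrics['disk_percent']}%"))
            return alerts

        def route(alerts):
            """Send each alert to the team that owns its category."""
            for category, message in alerts:
                recipient = ALERT_ROUTES.get(category, "noc@example.com")
                print(f"notify {recipient}: {message}")  # stand-in for a pager call

        # Example: an overloaded host triggers a compute alert only.
        route(check_host("cloud-host-01", {"cpu_percent": 95, "disk_percent": 40}))

    A side benefit of making the routing explicit is that the table doubles as a record of which team owns which component, which supports the roles-and-privileges point above.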

    Most of all, it’s important to make sure your cloud management tool set is directly aligned with your data center strategy and your business goals. Remember, your underlying infrastructure is the main driver for your entire business. Without the right management tools, your go-to-market strategy could suffer.

