Data Center Knowledge | News and analysis for the data center industry
 

Thursday, September 17th, 2015

    12:00p
    Ten Key Figures from Latest Progress Report on US Government IT Reform

    Estimated savings from US federal government IT reform initiatives of the last five years now run into the billions of dollars, but they remain far below the potential savings estimated by the Office of Management and Budget, the agency tasked with managing implementation of the reforms and tracking their progress.

    According to the latest report on the reforms by the Government Accountability Office, while a handful of agencies have made substantial progress optimizing the way they use IT infrastructure, most of the 26 agencies and departments that participate in the efforts are having trouble complying with the processes OMB has devised.

    Then-federal CIO Vivek Kundra launched the Federal Data Center Consolidation Initiative in 2010, recognizing that the government was spending too much money on its sprawling data center infrastructure. Today, OMB estimates the government can save about $3 billion from the initiative by the end of this year.

    Kundra followed FDCCI with a 25-point IT Reform Plan, which expanded reform beyond simply shuttering redundant data centers to things like optimizing IT acquisition, improving operational efficiency, using cloud services, sharing IT resources among agencies, and shortening release cycles. In other words, Kundra and his team wanted government IT to be more like corporate IT, with a more centralized IT organization.

    Another important initiative, called PortfolioStat, launched in March 2012. It requires agencies to take a more holistic look at their IT portfolios to identify and consolidate redundant systems and applications (rather than redundant facilities), while leaving core systems in place and sharing them.

    One month after the release of PortfolioStat, OMB formalized the sharing requirements in the IT Shared Services Strategy. This strategy requires agencies to share commodity IT resources, such as software licenses, email systems, and human resource systems. OMB estimated there were billions of dollars to be saved from sharing these resources.

    Here’s a summary breakdown of estimated savings from federal IT reform initiatives of the last five years:

    [Pie chart: estimated savings from federal IT and data center consolidation initiatives]

    Here are 10 key figures from the latest GAO report that paint a picture of the current state of federal government IT reforms:

    $80 billion:

    The estimated total that federal departments and agencies spend each year to meet their IT requirements.

    $3.6 billion:

    The estimated total that 24 of the 26 agencies taking part in OMB’s IT reform initiatives saved between fiscal 2011 and fiscal 2014 as a result of their participation. The two agencies that did not report any savings were NASA and the Office of Personnel Management.

    $2.5 billion:

    Estimated savings attributed to the departments of Defense, Homeland Security, and Treasury, and the Social Security Administration. That means just four agencies contributed close to 70 percent of the total savings reported by all 24 ($2.5 billion out of $3.6 billion is roughly 69 percent).

    $2 billion:

    Portion of the total savings attributed to government data center consolidation and optimization efforts.

    5:

    The number of agencies that implemented OMB’s guidance on submitting plans for reducing IT spending and reinvesting the dollars saved. That’s five out of the 27 agencies that were required to submit such plans.

    0:

    The number of agencies that tracked performance of their reinvestment efforts. The report highlighted four select agencies that documented proposed IT reinvestments of $350 million in the development of their fiscal 2014 budgets. They were the Social Security Administration and the departments of Education, Interior, and Labor. They weren’t the only agencies that submitted reinvestment plans. OMB selected them because they proposed to reinvest more than others.

    OMB’s “cut and reinvest” program started in 2012, requiring agencies to include in their fiscal 2014 budgets ways to reduce their IT spending by 10 percent and plans for reinvesting between half and all of the money saved. An agency spending $1 billion a year on IT, for example, would have to identify $100 million in cuts and propose reinvesting $50 million to $100 million of that amount.

    1:

    The number of agencies that met all requirements of OMB’s PortfolioStat initiative, designed to identify and consolidate redundant systems and applications. All agencies addressed four of the initiative’s seven requirements.

    $1.1 billion:

    Total amount OMB estimated agencies saved as a result of PortfolioStat in fiscal 2013 and 2014.

    1,690:

    Approximate number of government data centers agencies closed between February 2010, when FDCCI kicked off, and the end of May 2015, according to Data.gov.

    2,430:

    Approximate number of government data centers agencies expect to have closed between June 1 and the end of this month.

    Most Gov. “Data Centers” are Tiny

    The numbers of data centers closed and slated for closure may seem surprisingly high. That’s because, for the purposes of FDCCI, agencies count everything from a three-square-foot IT footprint to a 90,000-square-foot facility as a data center. Of the 4,000-plus “data centers” already closed or scheduled to close before the end of the month, about 2,700 have a gross area of 100 square feet or less, and about 1,000 are between 100 and 1,000 square feet. The largest facility on the list is a 90,000-square-foot Department of Defense data center, expected to close before September 30.

    3:00p
    C3 Launches Cloud Hosting Servers at Switch Data Center in Vegas


    This article originally appeared at The WHIR

    Cloud services provider Cloud Computing Concepts (C3) announced on Wednesday the general availability of its cloud hosting infrastructure in Las Vegas, located within the Switch SuperNap data center campus.

    According to the company’s announcement, the new Las Vegas location adds to its locations in Miami and New York City, strategically extending its cloud hosting capabilities across the US. C3 will offer its full suite of services via the Las Vegas facility.

    The new location will serve as a primary facility for customers on the West Coast, and a backup and replication facility for organizations on the East Coast. It will also serve as a point of presence for public and private network traffic.

    C3’s Las Vegas infrastructure features flash-based storage, 16Gbps multi-path Fibre Channel storage networking, micro-segmentation, and intelligent capacity management. It is fully interconnected with C3’s existing infrastructure in both New York and Miami.

    “Today is an exciting day for the team at C3, for our clients, and for our partners,” C3 CEO Rick Mancinelli said in a statement. “Today we extend our transport network and our hosting capabilities west and become a truly national Cloud Services Provider.”

    At the beginning of the year, Switch unveiled plans for its $1 billion, 3 million square foot SuperNap data center campus on 1,000 acres of land near Reno, Nevada.

    This first ran at http://www.thewhir.com/web-hosting-news/c3-launches-cloud-hosting-infrastructure-in-supernap-data-center

    3:30p
    Smart vs. Dumb OOB: How to Tell If Your Out-of-Band Management Strategy Passes the IQ Test

    Rick Stevenson is CEO of Opengear.

    Out-of-band (OOB) management of remotely located network infrastructure has an essential role to play in IT’s future. However, a certain complacency with OOB has hindered the full adoption of its most potent tools. While the history of IT certainly didn’t end in the ’90s, this is where many new OOB devices have their functionality frozen in time.

    Like a bewildered Fred Flintstone inexplicably entering the world of the Jetsons, a slew of newly released products still provide the same pre-Internet modem technology that users in the last century used to get onto bulletin board systems. While simplistic products like this can technically wear the name “OOB management solution,” I’m going to be a bit unkind and label them “Dumb OOB.” We live in the age of the cloud and at the dawn of the Internet of Things (IoT), where more advanced OOB solutions and true network intelligence are about to become downright essential.

    We’re a good bit into the 21st century, a time when distributed edge networks deliver key data and services through the cloud, and enterprise dependence on always-on connectivity means that downtime can be astronomically costly and damaging. This makes the resilience of distributed edge networks the lifeblood of an enterprise. Factor in the arrival of the IoT introducing everything from connected cars to smart online appliances, and it’s clear that soon even common household devices will require always-on connectivity to function optimally.

    What’s more, the latency of the connections between IoT devices and the network performing their data processing is key to their functionality. If the smoke detector in your smart home detects the beginning of a fire, you want that information processed and help summoned via the fastest connection available (not a dial-up modem), even when primary networks are down. The infrastructure supporting the smart home will require smarter OOB – and the resilience it provides – to ensure continuity and uptime for what will be mission critical applications.

    Given today’s integral connections between data centers and the cloud, these sites now depend heavily on Internet/WAN access devices, firewalls, routers, and switches, which can all strain under the demanding burdens of data throughput, cyber-attacks, firmware exploits, table overflows, adverse environmental conditions, etc. With Dumb OOB there’s no automated intelligence actively preventing issues from occurring.

    The Smart OOB Approach

    The Smart OOB approach is to include backup connectivity that protects business continuity at the hardware layer, with capabilities for automatic response, diagnosis, and repair using a suite of remediation utilities for common issues. In this way, issues are addressed before they become harmful outages. Think of it as having a virtual network admin staffing each remote site, trained to run recovery scripts and lessen the impact of both cyber sabotage and human error. Where resilience is absolutely critical and human personnel are unavailable, intelligent OOB systems can provide the needed assurances of uptime.

    When a router’s Internet/WAN connection fails, as can – and will – happen, admins using Dumb OOB are stuck entering commands over a high latency dial-up connection. The smarter alternative is OOB management that utilizes a high-speed cellular connection, ideally 4G LTE. Cellular is cheaper, easier to implement, and delivers bonus advantages such as across-the-board support for SMS, Internet access, and private network connectivity.
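
    To make that concrete, here is a minimal sketch, in Python, of the kind of watchdog logic a smarter OOB device might run: probe the primary WAN link, and after repeated failures move the default route to a cellular interface. The interface names, probe address, and thresholds are illustrative assumptions, not details of any particular product.

        import subprocess
        import time

        PROBE_HOST = "8.8.8.8"        # well-known address used purely as a reachability probe
        PRIMARY_IF = "eth0"           # assumed primary WAN interface
        CELLULAR_IF = "wwan0"         # assumed 4G LTE modem interface
        CHECK_INTERVAL_S = 30         # seconds between probes
        FAILURES_BEFORE_FAILOVER = 3  # avoid flapping on a single lost ping

        def link_is_up(interface: str) -> bool:
            """Return True if one ping sent out the given interface succeeds."""
            result = subprocess.run(
                ["ping", "-c", "1", "-W", "2", "-I", interface, PROBE_HOST],
                capture_output=True,
            )
            return result.returncode == 0

        def fail_over_to_cellular() -> None:
            """Point the default route at the cellular interface (Linux iproute2)."""
            subprocess.run(
                ["ip", "route", "replace", "default", "dev", CELLULAR_IF],
                check=True,
            )

        failures = 0
        while True:
            if link_is_up(PRIMARY_IF):
                failures = 0
            else:
                failures += 1
                if failures >= FAILURES_BEFORE_FAILOVER:
                    fail_over_to_cellular()
                    failures = 0
            time.sleep(CHECK_INTERVAL_S)

    The point is that this logic lives on the OOB appliance itself, so it keeps running precisely when the primary network does not.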

    Like a good doctor performing regular checkups in the practice of preventative medicine, smarter systems will monitor and log system health and environmental conditions to detect faults and perform repairs before failures can occur. Just as with humans, if a router is smoking, that’s not a good sign of its health (but a dumb doctor won’t ask). Smart monitoring of temperature, humidity, smoke, water, and more can create alerts so that admins can jump into action to mitigate the damage. Smart OOB gateways with on-board storage can log and back up the configuration states of routers, firewalls, and switches, allowing an admin to repair or upgrade those configurations remotely (whereas Dumb OOB will simply forget). Smarter systems can even automatically remediate issues when detected, preventing larger problems from arising in the first place.
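
    As an illustration of the configuration-backup idea, the following Python sketch (using the netmiko library) pulls running configurations from managed devices and writes them to local storage. The device inventory, credentials, and storage path are placeholder assumptions.

        from datetime import date
        from pathlib import Path

        from netmiko import ConnectHandler

        # Hypothetical inventory of devices reachable from the OOB gateway
        DEVICES = [
            {"device_type": "cisco_ios", "host": "192.0.2.10",
             "username": "backup", "password": "secret"},
        ]

        BACKUP_DIR = Path("/var/oob/config-backups")  # assumed on-board storage path

        def backup_all() -> None:
            """Save each device's running config as host-YYYY-MM-DD.cfg."""
            BACKUP_DIR.mkdir(parents=True, exist_ok=True)
            for device in DEVICES:
                conn = ConnectHandler(**device)
                config = conn.send_command("show running-config")
                conn.disconnect()
                (BACKUP_DIR / f"{device['host']}-{date.today()}.cfg").write_text(config)

        backup_all()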

    Take a case where environmental monitoring finds that the temperature in the rack is too high. This sensor reading could trigger an automatic response: the OOB device gracefully shuts down equipment and load-shares battery power to the most mission-critical hardware, then alerts a human admin, who accesses the equipment via an OOB connection and performs a remote repair, all without end users experiencing any downtime.
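
    A minimal sketch of that scenario might look like the following Python, assuming a Linux-style hwmon temperature sensor and key-based SSH access from the OOB gateway to each host; the threshold, sensor path, host names, and alert hook are all illustrative assumptions.

        import subprocess

        TEMP_THRESHOLD_C = 35.0
        SENSOR_PATH = "/sys/class/hwmon/hwmon0/temp1_input"  # assumed millidegree-Celsius sensor file

        # Shut down the least critical gear first, so mission-critical
        # hardware stays on battery power the longest.
        SHUTDOWN_ORDER = ["test-server-01", "build-server-01", "app-server-01"]

        def read_rack_temp_c() -> float:
            """Read the rack temperature in degrees Celsius."""
            with open(SENSOR_PATH) as f:
                return int(f.read().strip()) / 1000.0

        def alert_admin(message: str) -> None:
            """Stand-in for an SMS or email notification to a human admin."""
            print(f"ALERT: {message}")

        temp = read_rack_temp_c()
        if temp > TEMP_THRESHOLD_C:
            alert_admin(f"Rack temperature {temp:.1f}C exceeds {TEMP_THRESHOLD_C}C")
            for host in SHUTDOWN_ORDER:
                # Graceful, priority-ordered shutdown over SSH
                subprocess.run(["ssh", host, "sudo", "shutdown", "-h", "now"], check=False)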

    With robust OOB capabilities like this available, it would be, well, dumb for organizations reliant on network connectivity not to enjoy the network resilience that’s so vitally important, especially with the IoT arriving.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:51p
    Cannon T4 to Lease Prefab Modular Data Centers

    Offering customers a temporary solution to immediate data center capacity needs, Cannon T4 announced a leasing service for its modular data centers.

    The UK company is offering its 6-meter and 12-meter Globe Trotter data center containers for lease without upfront capital commitment, bringing a pay-as-you-go model to data center capacity that can be placed anywhere it is needed. The minimum commitment is one year. Globe Trotter is the transportable option in Cannon’s portfolio of modular data centers.

    The service has few direct equivalents. There are colocation service providers that offer space in modular data centers – IO is the most prominent example – but those modules are housed inside big facilities operated by the providers. Numerous vendors, including HP, Schneider Electric, and Cannon T4 itself, also sell modular data centers outright.

    Cannon T4’s new offering is aimed at companies with “medium-term capacity challenges,” the company said in a statement. It can be a way to expand capacity temporarily, while a more traditional brick-and-mortar data center is being built. It can also be used as a failover facility while the main data center is being worked on.

    The company is offering custom configuration options. The modules come with power, cooling, cable infrastructure, and standard 19-inch cabinets. If needed, Cannon T4 will pre-install servers and switches as well. Typical density is 4kW to 6kW per rack, but it can go up to 20kW, the company said.

    The box can withstand harsh environments, and its cooling system can work in temperatures from -46C to +58C. Users can choose between direct expansion cooling and direct or indirect free cooling options.

    Here are the full Cannon T4 Globe Trotter specs.

    6:31p
    Facebook Infrastructure Chief to Chair Open Compute Foundation, Succeed Frankovsky

    The board of directors of the Open Compute Project Foundation, the non-profit Facebook founded to champion open source data center and hardware design initiatives, has named a successor to Frank Frankovsky, who has chaired the board since it was created four years ago.

    Jason Taylor, Facebook’s VP of infrastructure, will become the new OCP Foundation board chairman in October. He will chair the board until October 2016, according to the foundation’s new rules, which establish a yearly chairmanship rotation among the companies represented on the board: Facebook, Goldman Sachs, Intel, Microsoft, and Rackspace.

    “Our hope is that this new approach will allow for each of these companies to bring their unique experience and perspectives to bear on the work we’re all doing together,” the organization said in a blog post.

    Frankovsky, a former VP of hardware design and supply chain at Facebook, has been the human face of OCP and its main evangelist since its start, and his departure is a major change for the foundation. He left Facebook last year to found and lead a cold-storage startup that used Blu-ray discs as the storage medium – a concept developed at Facebook – but stayed on the OCP Foundation’s board. His company, Optical Archive, was acquired by Sony earlier this year.

    Facebook started OCP in 2011, open sourcing the custom server specs and infrastructure designs used at its first Prineville, Oregon, data center – the first data center the social network designed and built for its own use. It relied on leased facilities before Prineville opened.

    The goal was to bring the open source software ethos to hardware design. Like Google, Amazon, and Microsoft, Facebook designs its own hardware and infrastructure management software to fit the needs of its scale – something incumbent hardware vendors weren’t making products for at the time.

    The approach has resulted in a lot of savings for Facebook, whose CEO Mark Zuckerberg said last year OCP had resulted in $1.2 billion in savings.

    Goldman Sachs was one of the early participants in OCP and has made numerous contributions to the project. The company has been involved in development of OCP hardware, firmware, and BIOS. Jon Stanley, VP at Goldman’s IT and services division, told us in an interview earlier this year that 70 percent of the company’s new server purchases this year would be OCP servers.

    Another major player in the financial services world that was involved with OCP from the early days was Fidelity Investments, which also has been considering an OCP deployment at scale.

    While the value of switching to OCP in typical enterprise IT shops has been questioned, the biggest of financial services firms have been actively looking for ways to take advantage of the approach.

    Read about Goldman’s, Fidelity’s, and other financial services giants’ OCP plans here

    Intel has been involved with OCP from the early days as well. It’s hard to develop server specs without the involvement of the company that designs the majority of server platforms running in data centers around the world. One of the most interesting OCP projects with Intel has been the disaggregated rack, where individual server components can be swapped out as needed, and where servers share common infrastructure resources, such as power supplies and cooling fans.

    Microsoft joined OCP more recently than other members of the board. The company became an official member early last year and also announced it had switched to a uniform strategy for server procurement for all of its cloud services. Its new server design specs are based on OCP designs.

    Rackspace is a long-time supporter of OCP. The OpenStack infrastructure that supports its cloud services – both cloud VMs and bare-metal cloud – is built on its version of OCP servers.

    6:41p
    SAP Looks to Move Beyond CRM in the Cloud


    This article originally ran at Talkin’ Cloud

    Customer relationship management applications led the way in establishing software-as-a-service as the first major class of cloud service, but for all intents and purposes CRM software is generally used simply to create a database of customers that makes it easier to manage sales teams. While there’s no doubt that CRM represents a billion-dollar software category, SAP this week started making a case for moving beyond it.

    With the launch of SAP hybris-as-a-service on the SAP HANA Cloud Platform, Carsten Thoma, president of the customer engagement and commerce business unit at SAP, said the time has come to transform the way the front office inside most organizations operates. In fact, Thoma said CRM as a concept is dying. In its place will rise an integrated set of applications in a category SAP now refers to as customer engagement and commerce.

    Whether that name sticks as a category remains to be seen. SAP is contending that the time has come to rethink how the front office operates altogether. As SAP sees it, the rise of the digital enterprise requires organizations to tightly integrate their sales, marketing, and service functions. The hybris software that SAP acquired in 2013 provides an omni-channel foundation for managing customer interactions that occur both physically and online. There’s no reason, for example, that sales and service people should not be immediately aware of all the touchpoints the organization as a whole has had with any given customer, said Thoma.

    While that sounds ideal from a customer engagement perspective, relatively few companies have the processes in place to support it. Inside most organizations today, sales, service, and marketing are separate fiefdoms. SAP is making the case for eliminating the data silos those separate groups create by using the SAP in-memory computing platform running as a cloud service. But getting there requires a conscious effort to focus on customer experience in a way that reorganizes the company under the auspices of, for example, a chief customer officer with a mandate from the CEO to fundamentally change the way the company operates.

    Arguably, it’s hard to make the case for doing that unless the solution providers helping SAP make it are themselves making that journey. For that reason, Brad Weatherly, a principal with Ernst & Young, says the global systems integrator has transformed the way its 60,000 customer-facing representatives engage customers, bringing the right people with the right skill sets together in a timely manner. To make that happen, Ernst & Young is using SAP hybris to turn marketing loose in a way that drives more opportunities for the solution provider, said Weatherly.

    Of course, this isn’t the first time SAP has moved to reinvent the way businesses operate in order to drive adoption of a new software category. Before the emergence of ERP suites, there were any number of best-of-breed applications that specifically addressed finance and the supply chain. While a handful of best-of-breed applications still exist in those categories, over the span of a decade SAP drove the adoption of ERP. How long it will take to drive CEC as a category is anybody’s guess. But it’s pretty clear at this point that SAP is gearing up to force the issue at the highest levels of the organization.

    This first ran at http://talkincloud.com/saas-software-service/sap-looks-move-beyond-crm-cloud

    7:24p
    Device42: DCIM Solution Designed by IT Professionals

    Founded in 2010 and headquartered in New Haven, CT, Device42 creates software for the data center infrastructure management (DCIM) market. Its software, also named Device42, includes comprehensive functionality from asset management to energy control. At the helm is founder and CEO Raj Jalan, who works very closely with co-founder and CTO Dr. Steve Shwartz.

    The Device42 solution was designed by IT professionals to enable data center operators and managers to do their jobs more effectively using a single, centralized software solution; one that is easy to own and operate.

    Device42 provides functionality found in these point-solution areas, among others:

    • DCIM – Data Center Infrastructure Management
    • IPAM – IP Address Management
    • ITAM – IT Asset Management
    • Power Monitoring and Control – power monitoring provides users with historical trends, real-time information, and client-configured power alerts
    • Software License Management – organizations can create and maintain a comprehensive, accurate profile of the software deployed on Windows and Linux machines across the entire IT infrastructure

    Customers of Device42 are located in 22 different countries and include Apple, Splunk, New Relic, Activision, Verizon, Cisco, Carnegie Mellon, Mercedes Benz, NCAA, Mayo Clinic, and others.

    Specifically, Device42 software addresses a broad range of functionality needs in the daily management of a corporation’s comprehensive IT ecosystem (a sketch of querying such an inventory through a REST API follows the list). These functionalities include:

    • Data center/IT asset & inventory management
    • IP address management
    • Password management and tracking
    • Patch panel cable management
    • Power and environmental monitoring and control
    • Hardware and software dependency mapping
    • Software license management
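
    As a rough illustration of what a single, unified console enables programmatically, here is a minimal Python sketch that pulls a device inventory over a DCIM REST API. The /api/1.0/devices/ path follows Device42’s documented REST pattern, but the appliance URL, credentials, and response fields shown here should be treated as assumptions.

        import requests

        BASE_URL = "https://device42.example.com"  # hypothetical appliance address
        AUTH = ("api_user", "api_password")        # placeholder credentials

        def list_devices() -> list:
            """Fetch the device inventory as a list of dictionaries."""
            resp = requests.get(f"{BASE_URL}/api/1.0/devices/", auth=AUTH)
            resp.raise_for_status()
            return resp.json().get("Devices", [])

        for device in list_devices():
            print(device.get("name"), device.get("ip_addresses"))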

    “Having worked many years managing data centers, I experienced, firsthand, the challenges of operating multiple, high-priced, disparate data center and IT infrastructure management solutions,” says Raj Jalan, CEO and founder of Device42. “That’s why we created Device42 software; software that is very affordable, easy to deploy and use, and comprehensive in its capabilities. Put simply, Device42 provides IT managers with functionality that dramatically helps them identify, visualize, and manage their IT infrastructure using a single, unified console. Now, Device42 is redefining the way businesses manage their physical and virtual IT environments and, more importantly, the results they achieve.”


    10:13p
    Alcatel-Lucent Acquires Mformation for IoT Security Smarts


    This post originally appeared at The Var Guy

    By Michael Cusanelli

    Alcatel-Lucent has acquired mobile and Internet of Things security platform provider Mformation as the French tech giant looks to solidify its grip on the growing mobile device security market.

    Mformation will be absorbed into Alcatel-Lucent’s IP Platforms organization so service providers and enterprises can gain access to a secure and scalable application-independent IoT security and control platform, according to the announcement. Alcatel-Lucent said it plans to market the platform for use in multiple industry verticals, including automotive, health care, utilities, manufacturing and the burgeoning digital home market.

    The financial details of the acquisition were not disclosed.

    “Our portfolio and our customer base are highly complementary to Alcatel-Lucent’s Internet of Things aspirations,” said Rakesh Kushwaha, founder and chief technology officer of Mformation, in a statement. “We look forward to leveraging our mobile and IoT security capabilities to enable customers to fully exploit the benefits of a connected world.”

    Mformation is a New Jersey-based IoT security and device management solution provider that currently has more than 20 million service provider customers worldwide. Alcatel-Lucent said the addition of Mformation’s platform to its portfolio will help it become a leading force in the mobile device management and security space, as the number of connected devices is predicted to exceed 70 billion by 2020.

    “As connected devices become an ever-increasing part of our social and economic fabric, we need to ensure that these devices, and the networks they navigate, are secure, reliable and efficient,” said Bhaskar Gorti, president of Alcatel-Lucent’s IP Platforms organization. “Mformation’s cloud-enabled IoT platform will enable our customers to rapidly deploy IoT services that can be trusted and managed efficiently.”

    Alcatel-Lucent was last in the news in April, when Nokia announced it would acquire the networking equipment maker for a whopping $16.6 billion in shares. The deal is still pending and is expected to close during the first six months of 2016.

    This first ran at http://thevarguy.com/information-technology-merger-and-acquistion-news/091715/alcatel-lucent-purchases-mformation-iot-security-s

