Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, June 24th, 2014

    11:30a
    Puppet Labs Certifies Network and Storage Vendors That Gel With its DevOps Tools

    Puppet Labs, which has seen rapid sales growth for its IT automation software as DevOps tools prove themselves in the enterprise, has launched Puppet Supported, a certification program that comes out of the gate with a number of leading vendors on board, including Arista Networks, Brocade, Cisco, Cumulus Networks, Dell, EMC, F5, Huawei and NetApp.

    Puppet is extending the benefits of automation to networking and storage by certifying Puppet Enterprise for platforms and devices. This promises to remove some bottlenecks, particularly around the network, and help spur the adoption of both automation and DevOps.

    The end goal is enabling the fully automated data center. Using Puppet Enterprise with network and storage devices brings the benefits of automation to those layers and folds them into the overall management of the data center.

    Working to pull SDN into DevOps

    The additional collaboration and certification will allow organizations to deploy software faster with fewer errors, as well as rapidly adapt to fast-changing business needs. While cloud and IT automation have dramatically reduced the time it takes to provision, in most cases networking and storage are still provisioned manually, creating bottlenecks.

    “One of the things we noticed, once the compute automation is sorted out, there’s still a bottleneck on the network and storage side,” said Puppet CIO Nigel Kersten. “You have to wait for that to be provisioned. Network automation hasn’t been nearly as adopted as systems managed on the compute side.

    “We’re seeing a whole bunch of terms like SDN (Software Defined Networking) and Application Centric Infrastructure (a Cisco concoction) out there. The fact you can pull in network and storage into one configuration allows people to take that app-centric approach.”

    Puppet is working with these vendors to make sure their platforms work well with its DevOps tools. The integrations will be delivered as modules once they have gone through rigorous testing.
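    Puppet's modules describe resources declaratively: the agent compares desired state against the device's actual state and applies only the difference, which is what makes repeated automated changes to switches and arrays safe. The Python sketch below is a minimal illustration of that idempotent, declarative pattern, not Puppet's DSL or any of the certified modules; the SwitchPort attributes and helper functions are hypothetical.

    ```python
    # Minimal sketch of the declarative, idempotent model Puppet applies to
    # resources. The port attributes and the fetch/apply helpers are hypothetical;
    # real Puppet modules express this in Puppet's own DSL against vendor APIs.

    DESIRED = {
        "port": "Ethernet1/1",
        "vlan": 120,
        "mtu": 9000,
        "enabled": True,
    }

    def fetch_current_state(port):
        """Hypothetical call to a switch API; returns the port's live settings."""
        return {"port": port, "vlan": 100, "mtu": 1500, "enabled": True}

    def apply_change(port, key, value):
        """Hypothetical call that pushes a single setting to the device."""
        print(f"{port}: set {key} -> {value}")

    def converge(desired):
        current = fetch_current_state(desired["port"])
        # Only the attributes that differ are touched, so re-running is a no-op
        # once the device matches the desired state.
        for key, value in desired.items():
            if current.get(key) != value:
                apply_change(desired["port"], key, value)

    if __name__ == "__main__":
        converge(DESIRED)   # first run applies the vlan and mtu changes
    ```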

    Puppet will test performance and scaling, as well as make sure there are no bugs. “We’re working with partners to take our framework and to take their code through it, making a painless out-of-the-box experience for customers,” said Kersten.

    Certification will expand reach and ease of use, which Kersten hopes will make Puppet appeal to a new crowd.

    Puppet benefits from big vendor APIs

    There is a lot of change occurring in the software-defined infrastructure space. Interestingly, areas that have been difficult to automate, such as networking, are pushing the big vendors to provide high-quality APIs, which helps companies like Puppet.

    “Vendors are getting more pressure to provide APIs and it means we can port automation,” said Kersten. “We expect to see this happen more and more in the open space as well, with OpenStack doing lots of great things.”

    A recent survey done by Puppet not only found that companies that invest in DevOps have higher IT performance, but also managed to link IT performance to better business performance. “We’ve shown that companies that invest in these practices actually outperform others,” said Kersten. “IT is not a cost center but a business driver.”

    Puppet is in a period of fast growth, expanding globally following a recent $40 million round. It has more than 80,000 registered users, and its software runs on about 10 million systems. A thriving community has contributed almost 2,500 modules, which are essentially reusable templates for configuring systems with Puppet.

    “The recent funding is about grabbing the opportunity, pouring fuel on the fire,” said Kersten. “We have a scalable business model, but we wanted to take it to the next level and push global expansion. It was a sign of confidence from investors.”

    12:00p
    Violin’s Latest Flash Array Features Enterprise Data Services

    With the launch of its Concerto 7000 all-flash array, Violin Memory is introducing comprehensive data services software featuring synchronous and asynchronous replication and stretch metro cluster capabilities.

    Data services have long been available from legacy mechanical-disk vendors, and the company has reproduced those services, which enterprises expect and rely on, in its all-flash arrays.

    The array’s business continuity features empower the enterprise to use flash for business continuity across geographically dispersed data centers with remote asynchronous replication, zero RPO (Recovery Point Objective) and RTO (Recovery Time Objective), and WAN-optimized replication. Other data services include storage snapshots, thin provisioning, LUN and capacity expansion, as well as advanced data protection and storage scaling.
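    Synchronous replication is what makes a zero RPO possible: a write is acknowledged only after the remote site has persisted it, so no acknowledged data can ever be lost, while asynchronous replication ships writes after the fact and trades a small window of potential data loss for distance and latency tolerance. The Python sketch below illustrates that distinction in the abstract; it is not Violin's implementation.

    ```python
    # Abstract sketch of synchronous vs. asynchronous replication semantics.
    # Both "sites" are plain in-memory dicts; real arrays replicate blocks over
    # a WAN with far more machinery (journaling, compression, failover).

    from collections import deque

    class SyncReplicatedVolume:
        """Zero RPO: the write is acknowledged only after both sites have it."""
        def __init__(self):
            self.local, self.remote = {}, {}

        def write(self, key, value):
            self.local[key] = value
            self.remote[key] = value      # remote commit happens before the ack
            return "ack"                  # caller knows both copies exist

    class AsyncReplicatedVolume:
        """Non-zero RPO: acknowledged writes may not have reached the remote site yet."""
        def __init__(self):
            self.local, self.remote = {}, {}
            self.pending = deque()        # WAN-optimized shipping happens later

        def write(self, key, value):
            self.local[key] = value
            self.pending.append((key, value))
            return "ack"                  # acked before the remote copy exists

        def drain(self):
            while self.pending:
                key, value = self.pending.popleft()
                self.remote[key] = value

    sync_vol, async_vol = SyncReplicatedVolume(), AsyncReplicatedVolume()
    sync_vol.write("lun0/blk42", b"data")
    async_vol.write("lun0/blk42", b"data")
    # If the primary site failed right now, sync_vol.remote has the block;
    # async_vol.remote does not until drain() runs.
    ```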

    “Violin has led the adoption of all-flash storage systems by both enterprises and service providers,” said Tim Stammers, senior analyst at 451 Group. “Now, Violin is driving flash usage to encompass a wider range of applications.”

    Designed for concurrent workloads

    The array can be used by a variety of workloads simultaneously while retaining high performance, a capability Violin attributes to its Flash Fabric Architecture. This is fourth-generation hardware from Violin, and it delivers more than 500,000 sustained IOPS at latencies under half a millisecond (500 microseconds) with a mixed, heavy workload, the company said.

    A 70TB array can fit in just 3RU, and raw capacity scales to 280TB in a fully configured 18RU. The solution draws 500 watts per rack unit.
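    Taken together, those figures work out to roughly 23 TB of raw flash per rack unit for the 3RU building block, about 15.5 TB per rack unit in the fully configured 18RU system, and on the order of 9 kW of power draw for the full configuration. The quick calculation below simply restates the vendor's numbers; it assumes nothing beyond them.

    ```python
    # Back-of-the-envelope density and power math from the figures quoted above.
    building_block_tb, building_block_ru = 70, 3
    full_config_tb, full_config_ru = 280, 18
    watts_per_ru = 500

    print(f"Building block density: {building_block_tb / building_block_ru:.1f} TB/RU")   # ~23.3
    print(f"Full configuration density: {full_config_tb / full_config_ru:.1f} TB/RU")     # ~15.6
    print(f"Full configuration power: {full_config_ru * watts_per_ru / 1000:.1f} kW")     # 9.0
    ```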

    The economics of all-flash storage lie not just in the cost per gigabyte, but in how the entire data center benefits from smaller footprints, more efficient storage and the reduced staffing required to maintain flash. Taken to its logical extreme, focusing solely on cost per gigabyte would lead one to conclude that an ‘all-tape’ data center is the optimal solution.

    Violin claims its architecture and data services will aid in avoiding over-provisioned storage, reduce the number of cores and servers required and reduce the amount of data center space, power and cooling required to support it.

    “The storage market has always been plagued by an overemphasis on dollar per gigabyte,” said Greg Wong, founder and principal analyst at Forward Insights. “Now it’s about the cost per data center. An all-flash array is a negative cost to the CIO when a solution brings together a one-of-a-kind hardware architecture and advanced software features to sustain enterprise workload performance while reducing hardware, software licenses and power.”

    Violin Memory went public last fall and lists Global 500 enterprises as customers. It has had thousands of deployments worldwide, with tens of petabytes delivered.

    12:30p
    What Kind of Cloud Buyer Are You?

    Matt Gerber is Executive Vice President of Sales and Marketing at 2nd Watch.

    If your company is experimenting by migrating a single application to the cloud, there’s no need to reinvent processes. But what if you are looking to move entire departments of users and applications to AWS or Azure? That can mean a full-scale transformation of IT operations.

    As with houses, vacations and cars, the more you spend, the more you’ll need to consider and plan around. Below, we talk about three basic levels of cloud engagement that are becoming commonplace in today’s market, with planning ideas at each level.

    Calculated cloud infrastructure buyer

    The calculated cloud infrastructure buyer has dipped, or is about to dip, a toe in the proverbial cloud water. They’re eager to experiment but not ready to fully commit. They often begin by acquiring enough capacity to move a single, “safe” app to the cloud, one that won’t bring the business down if it fails. They may migrate an application that’s suffering or is costing too much to support as usage grows. We know a Fortune 500 food products B2B company whose pricing application for restaurant customers was running slowly; customers were complaining. After moving the application to AWS, the company can ramp up resources quickly as needed, and it has dramatically improved response times.

    Preparation tips: Fortunately, as a calculated buyer, your cautious moves won’t require you to make major changes to your IT organization. Yet plan for how you will track the progress and success of the cloud application and make sure that it’s in compliance with any governance rules such as access and security. Someone in the IT department will need to oversee and report on your pilot projects.

    Market-driven cloud infrastructure buyer

    The market-driven cloud infrastructure buyer has many applications in the cloud. These applications consist of all of the “edge” apps such as the public website, marketing applications, account management, order processing, e-commerce or customer service. The cloud enables rapid adjustments, which is common in customer-facing applications, and it can also handle workloads that go up and down with no warning. We know a Fortune 500 consumer products company that moved from the calculated stage to the market-driven stage on AWS. The company now has over 50 web properties on Amazon today, a growing investment that helps them be responsive in the highly competitive consumer goods industry.

    Preparation tips: Get strategic about planning consumption, management and provisioning of your cloud footprint. Moving to the cloud is like building a house. Major cloud providers offer access to high-quality building materials – compute, storage, auto-scaling, VPC and more. Yet they don’t build the house for you. Consider whether you have the proper skills on hand to design your cloud, build it, and maintain and manage it for performance and reliability. If you don’t have the skill sets in-house, look for a cloud-focused services partner that can reliably deliver a full suite of services. Business and IT alignment around cloud strategy is also critical at this stage.

    All in cloud infrastructure buyer

    Are you a cloud groupie? Then you belong here. Your company is moving its entire data center to the cloud and is aggressively transitioning both edge and core apps to the cloud. We know of a Fortune 500 division that’s shutting down its data center operations this year and moving everything into AWS. Managers are seeing notable cost and agility benefits, and predict a day soon in which other divisions of the company will join them in the cloud.

    Preparation tips: You’ll need to make a fundamental change in how you manage IT when everything is running in the cloud. That shift entails having a comprehensive service catalog for users, embracing DevOps culture and practices, and viewing infrastructure as a commodity to leverage as needed. Create an internal cloud services brokerage group that manages vendor relationships, forecasts and plans consumption, tracks metrics and monitors cloud performance and user experience. A dedicated, full-time person inside the company responsible for directing cloud strategy and overseeing its management is a necessity.

    Likely migration to the cloud

    With cloud computing, so much has changed even over the last year. Companies are no longer considering whether to move to the cloud, but when, how and how much. As your company moves through the above levels of adoption, or versions of them, aim to move beyond a focus on security, governance and compliance. Your sights should be on a constant optimization of the performance, economics, usability and market benefits of your cloud infrastructure.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    Hitachi Data Systems Adds Cloud Tiering To Storage Platform

    Hitachi Data Systems Corp., a Hitachi subsidiary, announced a new set of technologies in the Hitachi Content Platform (HCP) portfolio designed to help employees work from anywhere on any device while letting IT optimize where the data resides, be it on-prem or in the cloud.

    The basic premise behind the HCP portfolio is that content should be mobilized while retaining security and adhering to data governance. There are three integrated pieces: HCP, the object store; HCP Anywhere, the file sync-and-share solution; and the Hitachi Data Ingestor (HDI).

    HDI is a file or cloud on-ramp, a file service for remote and branch offices, which now has simplified provisioning and management to get those offices up and running in minutes.

    The big update to HCP  introduces adaptive cloud tiering, which lets organizations move data to and from a choice of leading public clouds, including Google’s, Amazon’s and Microsoft’s, based on changes in demand and policies set by the organization. It’s meant to provide a balanced approach to security and cost by controlling what’s kept in-house and what’s stored in the public cloud.
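    The idea behind adaptive tiering is straightforward: the organization declares policies (for example, keep regulated data on-premises and push cold, non-sensitive objects to a cheaper public cloud tier) and the platform moves objects as their age or demand changes. The Python sketch below illustrates that kind of policy evaluation in the abstract; the object fields and thresholds are made up for illustration and have nothing to do with HCP's actual configuration.

    ```python
    # Abstract sketch of policy-driven cloud tiering decisions. Object metadata
    # fields and thresholds here are hypothetical, purely for illustration.

    from dataclasses import dataclass

    @dataclass
    class StoredObject:
        name: str
        days_since_access: int
        sensitive: bool          # e.g. regulated data that must stay in-house

    def choose_tier(obj, cold_after_days=90):
        """Return where the object should live under a simple two-rule policy."""
        if obj.sensitive:
            return "on-premises"                 # governance rule wins
        if obj.days_since_access > cold_after_days:
            return "public-cloud"                # cold data goes to the cheap tier
        return "on-premises"                     # hot data stays close to users

    objects = [
        StoredObject("q1-financials.xlsx", days_since_access=200, sensitive=True),
        StoredObject("marketing-video.mp4", days_since_access=365, sensitive=False),
        StoredObject("active-project.docx", days_since_access=2, sensitive=False),
    ]

    for obj in objects:
        print(f"{obj.name}: {choose_tier(obj)}")
    ```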

    There are new capabilities to synchronize data across multiple active sites for improved productivity. HCP Anywhere acts as a single point of control for user sync and share and for remote and branch office file services.

    “We’re enabling workforce mobility and secure hybrid cloud. IT can create policies to move and authorize data and the data remains encrypted,” said Tanya Loughlin, director of file, object and cloud product marketing at HDS.

    HDS has over 6,300 employees worldwide and its parent company Hitachi is massive, making $93.4 billion in revenue last year. The company sees opportunity in what it calls the Social Innovation Business.

    It aids in the transition from traditional IT to private and public clouds and provides the systems to enable a mobile workforce. HCP is a software-only offering developed entirely in-house by HDS.

    HDS’ original focus was storage hardware, but the business is now about evenly split between software and services on one side and hardware on the other. The company offers a spectrum of delivery, deployment and financial models.

    “Instead of buying outright CapEx, we offer things like onsite management and onsite without CapEx,” said Loughlin. “There are a number of service providers leveraging innovative models. They’re looking for a vendor to be a partner and that’s where these models were born.”

    The HCP portfolio is built on object storage and includes archive, backup-free and hybrid cloud storage on a single platform. IT organizations and cloud service providers can store, share, synchronize, protect, preserve, analyze and retrieve file data from a single system.

    2:00p
    A ScaleMatrix Case Study – Power Management, Uptime and Cloud

    Let’s look at the modern data center. At its core, it is the foundation for all of the key technologies the organization depends on. As more users, applications and workloads connect to the data center platform, the demands on its resources will only grow.

    Data centers are big business these days. Competition is fierce, and the options can be overwhelming and expensive. In this whitepaper and case study from Server Technology, we learn how ScaleMatrix provides a totally different kind of customer and IT experience to separate itself from other cloud and colocation providers in the market today.

    ScaleMatrix is a revolutionary new kind of data center, offering clients premier colocation, public cloud and private cloud services. Built from the ground up for the cloud and for high-bandwidth applications by data center owners and operators, it is designed to deliver 30 percent energy savings through a unique high-density approach and patent-pending infrastructure.

    Promising 100 percent uptime to its customers, ScaleMatrix needed reliable rack power distribution and constant visibility into all available power data. It needed accurate, reliable PDUs polled continuously for power, temperature and humidity through SPM (Server Technology's Sentry Power Manager), feeding a single-pane-of-glass dashboard that the NOC team monitors 24/7/365.
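    In practice that kind of monitoring boils down to a polling loop: query every PDU on a fixed interval, record power, temperature and humidity readings, and flag anything outside its threshold for the NOC dashboard. The sketch below shows the shape of such a loop in Python against a hypothetical read_pdu_metrics helper; it is not Server Technology's SPM API.

    ```python
    # Generic sketch of a PDU polling loop. read_pdu_metrics() is a hypothetical
    # stand-in for whatever interface the PDUs expose (SNMP, REST, etc.); it is
    # not Server Technology's SPM API.

    import time

    THRESHOLDS = {"watts": 4000, "temp_c": 30.0, "humidity_pct": 60.0}

    def read_pdu_metrics(pdu_name):
        """Hypothetical reading from one rack PDU."""
        return {"watts": 3200, "temp_c": 24.5, "humidity_pct": 45.0}

    def poll_once(pdu_names):
        alerts = []
        for name in pdu_names:
            metrics = read_pdu_metrics(name)
            for key, limit in THRESHOLDS.items():
                if metrics[key] > limit:
                    alerts.append(f"{name}: {key}={metrics[key]} exceeds {limit}")
        return alerts

    if __name__ == "__main__":
        pdus = ["rack-a1-pdu1", "rack-a1-pdu2"]
        for _ in range(3):                 # a real NOC loop would run continuously
            for alert in poll_once(pdus):
                print("ALERT:", alert)
            time.sleep(1)                  # production polling intervals are longer
    ```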

    Download this case study and whitepaper today to learn how ScaleMatrix directly impacts:

    • 100 percent Uptime Starting Day One
    • High Density Environment Optimization for Energy Efficiency
    • Power Monitoring and Management
    • Cloud Services and Colocation Facilities

    Your data center will continue to evolve. Just look at how many new types of workloads are being placed into the modern data center platform. The constantly connected, mobility-driven industry is creating more demands around rich content delivery. Through it all, uptime, resiliency and efficiency are critical components to a healthy data center. We can no longer afford downtime or outages.

    Now, find out what it takes to align your data center requirements with your organizational goals.

    5:00p
    Spanish Banking Giant Santander Builds $500M São Paulo Data Center

    Banking giant Banco Santander Brasil, a unit of Spain’s Santander Group, opened its first large data center in South America, which represents its latest massive investment in banking technology.

    The 1.1 billion reais ($493.2 million), 50-megawatt facility is located in Campinas, Brazil, about 60 miles away from São Paulo. The bank achieved Tier IV certification from the Uptime Institute for both design and facility for DC1 and DC2, a first for both Santander and the country.

    The data center took three years to plan and build. It triples the company’s physical data warehousing capacity (5 million gigabytes), improves monitoring systems and provides more space for server expansion, said Santander Brasil CEO Jesus Zabalza.

    The company initially announced an investment of $270 million for the technology center.

    The facility sits on a plot of land of about 10.8 million square feet. There is not much more detail available, since it is an enterprise facility of the kind companies usually prefer to keep under wraps, but given the numbers and the Tier IV certification, it is quite a substantial project.

    “This data center complies with the most demanding international standards and reflects the enormous confidence and potential that Santander has in Brazil,” Zabalza said.

    Santander has two data centers in Spain, one in the U.K. and one in Mexico. It has invested around $30 billion in Brazil since 1982.

    The group is the largest banking firm in the Eurozone by market value. It ranks as the number-three privately owned bank in Latin America’s largest country, according to Zabalza.

    The bank is the result of several mergers, particularly during the 1990s, that turned it into a massive powerhouse.

    The initial merger that kicked off the bank came in 1991, between Banco Central and Banco Hispanoamericano. The merger was not without its bumps; a number of unhappy former BCH executives took retirement with significant payouts.

    The company continued its acquisition spree in the 2000s, buying several banks. During the recent banking crisis it expanded into the U.S. by purchasing the 75.65 percent of Sovereign Bancorp it did not already own, paying about $3 a share for stock that had traded at around $40 before the crisis.

    The Bernie Madoff Ponzi scheme was said to have cost the bank €2.33 billion.

    São Paulo Governor Geraldo Alckmin praised the project and the Spanish bank’s role in Brazil.

    5:30p
    Enterprise Security Startup Tanium Gets $90M From Andreessen Horowitz

    Heavy-duty enterprise security management startup Tanium raised $90 million from Andreessen Horowitz in its first round of venture financing. This is the venture capital firm’s second-largest investment ever and a huge vote of confidence for the security firm.

    Aimed at big customers with big security needs (the Global 2000), Berkeley, California-based Tanium allows IT pros running highly distributed enterprise architectures to detect and mitigate damage from outages and cyber-attacks. Its edge is real-time visibility and access to data in seconds, rather than the hours, days or weeks it usually takes.

    It helps distribute and install updates and shut down processes or executables instantaneously.

    The enterprise security management company’s technology acts as the “central nervous system” for half of the Fortune 100, five of the top 10 largest global banks and four of the top 10 global retailers, it said. Founded in 2007, Tanium spent five years building and refining its technology and now has a completely new communications architecture that addresses challenges posed by modern enterprise networks.

    Tanium collects and processes billions of metrics across endpoints in real time and lets enterprises rapidly change the state of those endpoints. It allows proactive identification and fixing of operational issues.

    “Enterprise networks have grown exponentially during the last decade while the technology to secure and manage these systems has stagnated. The result is that simple, preventable issues [that] result in debilitating outages and attacks cannot be identified or managed until it’s too late,” said Orion Hindawi, co-founder and CTO of Tanium. “By completely rethinking how IT professionals manage, secure and maintain the end points in their network Tanium gives them the ability to interrogate, manage, update and secure their systems in a fraction of the time.”

    Andreessen Horowitz is a major investor in the tech space, and this sizeable round can be viewed as a major endorsement. The $950 million, Menlo Park-based venture capital firm was launched in 2009.

    Marc Andreessen and Ben Horowitz are the general partners. Most recently, the firm invested in Mesosphere, a startup with technology that aims to centralize management of IT resources across multiple data centers and clouds.

    With the latest bet on Tanium, there is perhaps a mini-theme emerging in its investments: centralized management and security across increasingly distributed computing systems.

    Tanium also announced the appointment of Steven Sinofsky, board partner at Andreessen Horowitz, to its board of directors.

    “The Tanium team has accomplished nothing short of a complete reinvention of how IT professionals manage, secure, and maintain the endpoints in their network: every node on the network can now be interrogated, managed, updated and secured, instantly from a browser,” Sinofsky said. “As an already successful and profitable company with dozens of customers in massive, mission-critical and global deployments, this type of innovative and inventive technology can only come about from a team with years of experience and a depth of understanding of the enterprise.”

    7:30p
    Intel’s Next-Gen Xeon Phi (Knights Landing) to Use Silicon Photonics

    Intel says it has re-architected a fundamental building block of high performance computing systems, announcing the next generation of its Xeon Phi coprocessor (code named Knights Landing) with Micron’s Gen2 Hybrid Memory Cube technology and a new interconnect technology called Omni Scale Fabric. This will be the first interconnect to take advantage of Intel’s silicon photonics technology.

    The company made the announcement at this week’s International Supercomputing Conference in Leipzig, Germany, claiming that the upcoming chip will be the first serious step in the pursuit of exascale computing.

    Powered by more than 60 HPC-enhanced Silvermont cores, the next-gen Intel Phi coprocessor is expected to deliver more than 3 TFLOPS of double-precision performance and three times the single-threaded performance of the current generation. Not due out for another year, it will be available either as a standalone processor mounted directly onto the motherboard or as a PCIe coprocessor card.
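    Those headline numbers are roughly what wide vector units across many modest cores imply. As a back-of-the-envelope check, and purely as an assumption for illustration, 64 cores each retiring 32 double-precision FLOPs per cycle (two 512-bit FMA units) at about 1.5 GHz would land just over 3 TFLOPS; Intel has not published core counts, clocks or vector configuration beyond the figures above.

    ```python
    # Back-of-the-envelope peak-FLOPS check. Every input here is an assumption
    # for illustration only; Intel's announcement gives just ">60 cores" and
    # ">3 TFLOPS double precision".
    cores = 64                 # assumed
    ghz = 1.5                  # assumed clock
    flops_per_cycle = 32       # assumed: two 512-bit FMA units x 8 doubles x 2 ops

    peak_tflops = cores * ghz * 1e9 * flops_per_cycle / 1e12
    print(f"Assumed peak: {peak_tflops:.2f} TFLOPS double precision")  # ~3.07
    ```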

    Intel is a CPU leader in HPC but trails Nvidia in coprocessors for supercomputers. Although Intel-based systems account for 85 percent of the world’s most powerful supercomputers on the June 2014 Top500 list, only 17 of them use Xeon Phi, versus 44 that use Nvidia coprocessors.

    “Intel is re-architecting the fundamental building block of HPC systems by integrating the Intel Omni Scale Fabric into Knights Landing, marking a significant inflection and milestone for the HPC industry,” said Charles Wuischpard, vice president and general manager of workstations and HPC at Intel. “Knights Landing will be the first true many-core processor to address today’s memory and I/O performance challenges.

    “It will allow programmers to leverage existing code and standard programming models to achieve significant performance gains on a wide set of applications. Its platform design, programming model and balanced performance makes it the first viable step towards exascale.”

    High-bandwidth memory

    To help alleviate I/O bottlenecks, the upcoming Intel Phi coprocessor will contain up to 16GB of high-bandwidth on-package memory, which Intel developed jointly with Micron. It delivers five times the bandwidth of DDR4 memory, as well as better energy efficiency and density than current GDDR-based memory.
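    For scale, consider a baseline of four channels of DDR4-2133 at roughly 17 GB/s per channel, an assumption for illustration rather than anything Intel specified: that is on the order of 68 GB/s, so five times DDR4 bandwidth would put the on-package memory in the region of several hundred gigabytes per second.

    ```python
    # Rough scale check for the "five times DDR4" claim. The DDR4 baseline here
    # (four channels of DDR4-2133, ~17 GB/s per channel) is an illustrative
    # assumption, not a figure from Intel's announcement.
    channels = 4
    gbps_per_channel = 17.0          # DDR4-2133: 2133 MT/s x 8 bytes per transfer

    ddr4_baseline = channels * gbps_per_channel          # ~68 GB/s
    on_package_estimate = 5 * ddr4_baseline              # ~340 GB/s
    print(f"DDR4 baseline: {ddr4_baseline:.0f} GB/s, 5x: {on_package_estimate:.0f} GB/s")
    ```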

    The on-package memory is based on Micron’s Gen2 Hybrid Memory Cube stacked DRAM technology.

    Supercharged by photonics

    The introduction of Intel’s silicon-photonics-based Omni Scale end-to-end interconnect will give a big boost to systems with Knights Landing processors when launched. It will be an inflection point in performance of HPC systems.

    Developed using a combination of intellectual property acquired from Cray and QLogic, along with Intel’s own in-house innovation, Omni Scale will include a full product line consisting of adapters, edge switches, director switch systems and open-source fabric management and software tools.

    The new interconnect is not InfiniBand, although it is compatible with it. Through a True Scale upgrade program, the current Intel True Scale Fabric can be migrated to Intel Omni Scale Fabric, so customers can transition to the new fabric technology without changes to their applications.

    Knights Landing coming to NERSC

    In April the National Energy Research Scientific Computing Center (NERSC) announced an HPC installation, planned for 2016, that will serve more than 5,000 users and support more than 700 extreme-scale science projects. It will take advantage of the next-gen Xeon Phi chips.

    “We are excited about our partnership with Cray and Intel to develop NERSC’s next supercomputer ‘Cori,’” said Sudip Dosanjh, NERSC director. “Cori will consist of over 9,300 Intel Knights Landing processors and will serve as an on-ramp to exascale for our users through an accessible programming model.

    “Our codes, which are often memory-bandwidth limited, will also greatly benefit from Knights Landing’s high speed on-package memory. We look forward to enabling new science that cannot be done on today’s supercomputers.”

    8:00p
    US Data Center Providers Neutral on Government Access to Customer Data Stored Overseas

    During a May meeting with a group of CIOs in Berlin, Microsoft General Counsel Brad Smith witnessed something unusual: a CIO for a German state came in carrying a copy of a legal decision. It was a magistrate’s decision in a federal court in New York on the question of whether the U.S. government could get unilateral access to data in a data center outside the U.S. with a warrant and without the knowledge of the data’s owner.

    The decision was in the government’s favor, and the CIO said that until it was changed, there wasn’t even a remote possibility that his organization would contemplate putting its data in any American company’s data center, Smith recalled. He told the story while presenting at the GigaOm Structure conference in San Francisco earlier this month.

    “At one level, that is part of what is at stake here,” Smith said about Microsoft’s current court battle with the government over access to a customer’s data stored in its data center in Dublin, Ireland. The legal opinion the German CIO brought to the meeting ordered Microsoft to comply with a U.S. law enforcement warrant, asking it to hand over the data.

    Microsoft has support in data privacy protection fight

    As Microsoft continues challenging the government on the issue, other U.S. tech giants have joined it. They include Apple, Cisco, Verizon and AT&T, among others, arguing that the power of a U.S. warrant does not extend beyond U.S. borders, and that includes data stored overseas by U.S. companies.

    U.S. technology companies’ ability to retain their historic leadership in the market is at stake, Smith said. If customers don’t trust them to protect their data, they simply will not use their products and services.

    Data center providers not taking sides

    There is another group of U.S. companies on whom the judge’s decision has direct impact: data center providers. So far, however, this group has generally chosen to remain uninvolved.

    Redwood City, California-based Equinix, the world’s largest data center colocation company, for example, has taken a neutral position. An Equinix spokesperson sent us a statement acknowledging that this was a crucial time for the industry, but chose not to comment on Microsoft’s case.

    “We’re in the business of enabling the Internet and ensuring that our customers can satisfy the most stringent data sovereignty requirements,” the statement read. “We work around the world to respect local regulations and our commitment is to stay apolitical in these matters.”

    The response of CenturyLink, a major colocation, hosting and cloud infrastructure service provider, was along similar lines: “U.S. law enforcement warrants extending to overseas operations, owned by U.S.-based companies, is an important issue and we are watching the case closely.”

    CenturyLink competitor Internap chose to pass on commenting altogether, and representatives of CyrusOne did not respond to a request for comment.

    Rackspace sides with Microsoft

    One exception was Rackspace, the Texas-based provider whose vice president and associate general counsel Perry Robinson wrote us a note saying the New York magistrate’s decision in Microsoft’s case was disappointing. “Rackspace believes that those laws that prohibit a U.S. law enforcement agency from executing a U.S. warrant to search physical property located in another country apply equally to digital property which is located in another country,” Robinson wrote.

    “We continue to operate under the belief that we are prohibited from accessing and disclosing customer data stored on servers or storage devices in our data centers without a properly issued, lawful request from a court with jurisdiction over both Rackspace and the data sought. We will oppose any court orders for customer data that does not adhere to the fourth amendment.”

    Not my data – not my problem

    It is possible that some data center providers that choose not to voice an official opinion on the matter are standing back because they do not own the servers or the data that sit in their data centers. Jelle Frank van der Zwet, global marketing manager at Interxion, a Netherlands-based colocation giant, said that while a provider may have an opinion on the matter, it does not necessarily have a role to play in it.

    In many countries in Europe, he said, laws do not make data center providers responsible for their customers’ data. “When it comes within our [realm of responsibility] is the moment a policeman knocks on the door and says ‘I need that server,’” he said.

    In such cases, local law will apply, depending on the country. “In Switzerland, under certain circumstances, we probably have the right to say ‘no’ to get into the data center,” van der Zwet said. “In other countries we just have to.”

    Key law is 30 years old

    In Microsoft’s case, however, the U.S. government is requesting data stored outside of the U.S. without regard for local laws. Microsoft would like to have the ability to stay neutral, to keep the issue between the government and the individual whose data the government is seeking, but it lost that ability when it was served with the warrant.

    Microsoft and its supporters are pushing for reform. The key piece of legislation, the Electronic Communications Privacy Act, was enacted in 1986 to address a very different technological era. The law did not foresee that people and businesses would eventually start storing their data outside their homes or offices.

    “If we’re going to retain the trust in U.S. technology, we need to answer some fundamental questions,” Smith said.

    The first question is whether the government can get access to a person’s data without their knowledge. Another question, one with direct impact on data center providers because it troubles enterprises, is what happens when there is no criminal case and the government serves a subpoena.

    “Our answer is serve the subpoena on the customer, not on the data center or the service provider,” Smith said. “Don’t put the data center provider in the middle and especially don’t do it in a context where the data center provider can’t even let the customer know.”

