Data Center Knowledge | News and analysis for the data center industry

Tuesday, March 15th, 2016

    2:15a
    Drones: Is the Airspace above Your Data Center Secure?

    Do you think the person who made this video broke the law?

    That’s the question Adam Ringle, a security and emergency services expert, asked the audience during his keynote Monday at the Data Center World Global conference in Las Vegas.

    The answer is no, but there is some subtlety here. While flying a drone over private property, like the Walmart data center in the video above, is not a criminal act, it is a violation of FAA rules. The FAA, or Federal Aviation Administration, has seen its airspace regulation duties expand greatly in recent years with the explosion in the use of drones.

    Ringle, who runs a security and emergency services consulting and training practice, was at Data Center World to educate data center operators about the potential security threats Unmanned Aerial Systems can pose to facilities they operate, what they can do to protect themselves from those threats, and how they can use the technology themselves to improve security.

    A $10,000 thermal camera attached to a drone can easily map what’s inside your data center from the air, he said. And that’s just one of the potential threats.

    According to Ringle, the known and common drone threats are:

    • Corporate espionage
    • Contraband delivery
    • Weaponized systems
    • Delivery of hazardous materials or explosives
    • Facility infiltration
    • Hacking and spying
    • Theft of data using a Raspberry Pi or other snooping device
    • Privacy issues
    • Airspace conflicts
    • Accidents
    • Mechanical failure

    As with any new technology, it takes some time for the law to catch up, and the legal framework for drone operation is still taking shape. The only case law on the subject dates back 70 years.

    United States v. Causby was a 1946 case, in which a farmer sued the government for trespassing, because low-flying military planes caused his chickens to throw themselves against walls and die. The court ruled that Causby was owed compensation, but the majority opinion also established that a landowner had control of “as much of the space above the ground as he can occupy or use in connection with the land.”

    “That is the only case law that exists on this topic,” Ringle said.

    The FAA, however, has already devised guidelines for consumer, commercial, and government operators of UASs. There are also different rules for different types and sizes of drones. Data center operators, he said, should get familiar with these rules and create policies for dealing with unwelcome visitors from above.

    If you see a drone flying above your data center but don’t see the pilot, for example, the operator is probably violating FAA rules. If you do see the pilot, they are obligated to show you their FAA documentation if you ask.

    The FAA also mandates that UASs stay a certain distance away from buildings. “Technically, they can’t fly it close to your building, period,” without certain FAA approvals, Ringle said.

    Some data center operators may also think about integrating the use of drones into their own security programs. There are drones, for example, that can bring down an unwanted UAS by dropping a net with weights on it, which disables the rotors.

    One important initiative is NoFlyZone, which maintains a database of GPS locations of private properties whose owners don’t want drones flying over them. Drone makers that participate program their drones to avoid those locations.

    Homeowners can sign up for free at www.noflyzone.org. Currently, the initiative is limited to homes, but a paid subscription model for businesses is in the works, Ringle said.
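    The avoidance mechanism NoFlyZone relies on is essentially geofencing: a participating drone compares its GPS position against a list of registered coordinates before flying on. The article does not describe the implementation, so the sketch below only illustrates the general idea; the coordinates, radii, and function names are invented for this example.

```python
import math

# Hypothetical no-fly entries: (latitude, longitude, radius in meters).
# A registry like NoFlyZone would distribute data of this kind to
# participating drone makers.
NO_FLY_ZONES = [
    (36.1699, -115.1398, 500.0),  # example coordinates only
]

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in meters (haversine formula)."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def position_allowed(lat, lon):
    """Return False if the position falls inside any registered no-fly zone."""
    return all(distance_m(lat, lon, zlat, zlon) > radius
               for zlat, zlon, radius in NO_FLY_ZONES)

print(position_allowed(36.1700, -115.1400))  # False: inside the example zone
print(position_allowed(40.7128, -74.0060))   # True: nowhere near it
```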

    Join 2,000 of your peers at Data Center World Global 2016, March 14-18, in Las Vegas for a real-world, “get it done” approach to converging efficiency, resiliency and agility for data center leadership in the digital enterprise. More details on the Data Center World website.

    4:01a
    Modular Data Center Firm IO Raises $505M

    IO, the colocation provider known for leasing out space inside data center modules housed in large warehouses it operates around the world, has secured $505 million in financing from Deutsche Bank and Macquarie Capital.

    The funds have allowed the company to consolidate US debt and buy two of the four properties its US data centers occupy, IO president Tony Wanger said.

    The announcement is the latest example in what has been a steady stream of investment in US data center providers this year.

    Earlier this month, modular data center startup Keystone NAP raised $15 million to fund its data center project in Pennsylvania. In February, Silicon Valley-based Vantage Data Centers announced a $300 million investment to help fund data center construction in California and Washington State.

    In January, Atlanta-based T5 Data Centers said it had raised $70 million to pay for data center construction in North Carolina.

    Echoing what representatives from other data center companies have said recently, Wanger said lenders are generally more knowledgeable about the data center sector today than they were even five years ago, so data center providers have easier access to capital than they did back then.

    The new IO is one of the two companies that resulted from a split of the old IO last year. The old IO was both a data center provider and a data center technology company, designing and building modular data centers and data center management software.

    The two businesses were separated, forming the new IO, a data center provider, and BaseLayer, a technology company. IO continues to provide data center space inside BaseLayer modules housed in its warehouses, acting as a BaseLayer customer.

    Read more: Modular Data Center Firm IO Split into Two Separate Companies

    Today, the company has two data centers in its native Phoenix market – one in Phoenix and the other in the Phoenix suburb of Scottsdale – one data center in Springboro, Ohio (just outside of Dayton), and another one in a former New York Times printing plant in Edison, New Jersey. The company also operates data centers in Slough (near London) and Singapore.

    Slough is the most recent market IO has entered. The data center was officially launched in June 2015, with Goldman Sachs as the anchor tenant. Goldman was also instrumental in IO’s expansion into Singapore in 2013, where it was the anchor tenant.

    The properties IO has bought with the help of its new financing are the two in the Phoenix market. The company also recently bought a third property there, a nine-acre piece of land adjacent to its Phoenix facility, where it plans to build a three-story data center.

    4:16p
    The Case for Composable Infrastructure

    Ric Lewis is SVP and GM of Data Center Infrastructure at HPE.

    After all that we’ve been hearing about software-defined everything and infrastructure as code for the past couple of years, CIOs could be forgiven for looking around and saying “Hey, it’s 2016, where’s my programmable data center?” The fact is, automation still hasn’t pushed very far into today’s enterprise IT. But the wait may be over with the arrival of a new category of infrastructure – composable infrastructure – that delivers many of the long-awaited benefits of programmability.

    “Composable” is actually not a bad word to describe it. Composable infrastructure is a single package comprising three things: a new kind of hardware, a software intelligence to control it, and a single API to interact with it. What’s new about the hardware is that it turns the core elements of the data center – compute, storage and networking fabric – into pools of resources that can easily be assembled or “composed” to fit the needs of any application. All three elements are designed from the ground up to work together as one. They can be deployed in any kind of operating environment, be it bare-metal, virtualized, or containerized.

    The native software intelligence provides a single management interface and handles complexity behind the scenes, making the whole system software-defined. And the API enables humans to communicate very simply and easily with the infrastructure, turning tasks like provisioning infrastructure for a new application into the equivalent of one click: a single line of code. The API also plugs into DevOps tools like Ansible, Chef, Docker and Puppet, so that developers can make the infrastructure dance using the tools they’re already familiar with.
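    To make the “single line of code” idea concrete, here is a minimal sketch of what provisioning through such an API could look like. It is written against a hypothetical REST endpoint; the URL, payload fields, and template name are assumptions for illustration, not any specific vendor’s documented interface.

```python
import requests

API = "https://composer.example.com/rest"        # hypothetical management endpoint
HEADERS = {"Authorization": "Bearer <token>",    # auth handling elided for brevity
           "Content-Type": "application/json"}

# The "one line of code": ask the infrastructure's software intelligence to
# compose resources for a new workload from an existing template.
resp = requests.post(f"{API}/server-profiles",
                     json={"templateUri": "/rest/server-profile-templates/web-tier",
                           "name": "web-tier-01"},
                     headers=HEADERS)
resp.raise_for_status()
print("Provisioning task started:", resp.json().get("taskUri"))
```

    The same kind of call could just as easily come from an Ansible playbook, a Chef recipe, or a Puppet manifest, since those tools ultimately talk to the same API.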

    That’s it. Sounds simple, and in essence it is, however dazzling the underlying technology may be. But that simplicity enables CIOs to address one of the biggest challenges they face today: how to find an efficient way to meet the infrastructure demands of the new breed of applications based on mobile, cloud, and big data technologies.

    Give Me Next-Gen Apps – But Don’t Touch Those Dials

    The proliferation of the new generation of apps is only going to accelerate. When you can make a few touches in an airline app on your smartphone and you’ve checked flight availability, seat positions, standby lists … you know you’re looking at the new face of IT. Indeed, for many of us, the new face of IT is the new face of business – any business. Consumers love it. CIOs love it too, and they love that as a result they’re getting called more often into the C-suite discussions that matter, around revenue, profit, and growth.

    At the same time, the rise of next-gen apps puts CIOs in a bind. It calls for super-flexible, development-friendly infrastructure that you can set up quickly and change easily and often. But how to provide that while continuing to ensure total reliability for the mission-critical, don’t-mess-with-the-dials applications – enterprise resource planning, databases, email – where constant change is the last thing you want?

    Until now the answer, often enough, has been to keep the usual on-premises setup for the traditional applications and turn to the big public cloud providers for the new breed. But IT leaders have a long list of reasons to keep data on-premises: security, compliance, performance, ease of data-sharing across applications. Cost is a factor too; you can easily run up big bills if you have a large amount of traffic going to the public cloud.

    Not Either/Or, But Both

    Composable infrastructure neatly resolves the dilemma by supporting both the traditional and the new environments. Here’s how it works. Take a mobile banking application, for example. The bank’s system has two components. A mobile back-end receives requests from the app on your phone and figures out what to do – transfer some money, show a balance. And behind that there’s a traditional database that keeps track of the accounts and does all the computation behind the scenes.

    With composable infrastructure, the bank can easily assign resources for both types of application. When you plug in compute, storage, or networking capacity, the infrastructure automatically discovers it and makes it available for any workload. To provision the mobile back-end, the bank selects a software template from a built-in library and assigns it via a single line of code. Let’s say it’s a containerized application that uses the open source tool Docker. The workload simply drops into the infrastructure at the right ratio of compute, storage, and networking resources. The resources scale independently and automatically.

    Deploying the traditional database application works the same way. The bank can run it in a different environment than the containerized mobile back-end – bare metal, virtualized, doesn’t matter. No need for any configuration; the infrastructure configures itself.
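    The following is a toy model, not any vendor’s implementation, of the idea this example describes: both the containerized mobile back-end and the traditional database are carved out of the same shared pools, each at its own ratio of compute, storage, and fabric. All numbers are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Pools:
    """Fluid resource pools that any workload can draw from (illustrative only)."""
    cores: int
    storage_tb: float
    fabric_gbps: int

def compose(pools: Pools, template: dict) -> dict:
    """Carve a workload's slice out of the shared pools per its template ratios."""
    if (template["cores"] > pools.cores
            or template["storage_tb"] > pools.storage_tb
            or template["fabric_gbps"] > pools.fabric_gbps):
        raise RuntimeError("insufficient capacity in the shared pools")
    pools.cores -= template["cores"]
    pools.storage_tb -= template["storage_tb"]
    pools.fabric_gbps -= template["fabric_gbps"]
    return dict(template)

pools = Pools(cores=512, storage_tb=200.0, fabric_gbps=400)

# Containerized mobile back-end: compute-heavy, light on storage.
mobile_backend = compose(pools, {"cores": 64, "storage_tb": 5.0, "fabric_gbps": 40})

# Traditional database: storage-heavy; whether it runs bare-metal or
# virtualized, it draws from the same pools.
database = compose(pools, {"cores": 32, "storage_tb": 60.0, "fabric_gbps": 20})

print(mobile_backend, database, pools)
```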

    If this sounds reminiscent of a cloud services portal, it should. Composability has the same infrastructure-as-code attractions for developers: they can just pull whatever storage and compute they need and get apps into production quickly without getting tied up in the details of infrastructure configuration.

    On the traditional IT side, composability offers more benefits in addition to the stability that’s needed for the bet-the-business legacy apps. It’s not unusual for companies to overprovision their traditional infrastructure by 70 percent or more because they want to be ready for peak loads. Composable infrastructure spreads spare capacity across all of the applications running in the data center and makes it instantly available, so it reduces cost immediately by reducing the need to overprovision.
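    A back-of-envelope calculation shows why pooling headroom helps. Only the 70 percent headroom figure comes from the paragraph above; the other numbers are assumptions for illustration. The point is that shared spare capacity sized for a few simultaneous peaks is much smaller than per-application headroom.

```python
apps = 10
steady_load = 100          # capacity units each app uses on average (assumed)
headroom = 0.70            # 70 percent per-app headroom, per the figure above

# Siloed approach: every application carries its own peak headroom.
siloed_total = apps * steady_load * (1 + headroom)

# Pooled approach: assume at most 3 applications peak at the same time,
# so one shared block of headroom covers the whole estate.
simultaneous_peaks = 3
pooled_total = apps * steady_load + simultaneous_peaks * steady_load * headroom

print(f"Siloed capacity: {siloed_total:.0f}")                     # 1700
print(f"Pooled capacity: {pooled_total:.0f}")                     # 1210
print(f"Reduction:       {1 - pooled_total / siloed_total:.0%}")  # ~29%
```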

    Standing up new hardware can be painfully slow in the traditional world. It can take close to a month from when a new box arrives until it’s actually usable, because of all the provisioning and configuring involved. With composability, it takes just that one line of code – three minutes.

    Composable infrastructure can be deployed incrementally, side-by-side with existing infrastructure, in a way that makes sense for the business. It can scale across multiple chassis or multiple racks. You can start with a pilot program, perhaps as part of the standard refresh cycle, to become familiar with the technology. As the concept gains traction in the market, vendors will be climbing on board in increasing numbers, and it’s important to know whether what they’re claiming as composability is the real thing – see the infographic for some pointers.

    As a description of a new way to arrange and orchestrate data center resources, the “composable” metaphor is pretty apt. At any rate, it’s one that’s about to become very familiar to IT leaders.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:23p
    Amazon Web Services Turns Ten Years Old
    By WindowsITPro

    It sounds like a work of fiction to say that a bookstore would grow to one day power much of the web’s critical infrastructure, but 10 years ago, that’s exactly what happened when Amazon launched S3, rewriting the rules around hosting and helping launch an ever-growing cloud industry.

    In the original 2006 press release, Amazon dubbed S3 “storage for the Internet,” and while it wasn’t the cheapest way to store a file, it took away many of the headaches that web developers and systems administrators faced when trying to make something reliably accessible.

    “Amazon S3 is based on the idea that quality Internet-based storage should be taken for granted,” said Andy Jassy, vice president of Amazon Web Services, at the time. “It helps free developers from worrying about where they are going to store data, whether it will be safe and secure, if it will be available when they need it, the costs associated with server maintenance, or whether they have enough storage available.”
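    For a sense of what “storage for the Internet” means in practice, here is a minimal example using today’s boto3 SDK; the bucket name is a placeholder and credentials are assumed to come from the environment. S3’s original 2006 interface was a plain REST/SOAP API, but the basic put/get model is the same.

```python
import boto3

s3 = boto3.client("s3")            # credentials resolved from the environment
BUCKET = "example-bucket-name"     # placeholder: bucket names are globally unique

# Store an object: no servers to run, no capacity to plan.
s3.put_object(Bucket=BUCKET, Key="notes/2016-03-14.txt",
              Body=b"Amazon S3 turns ten")

# Retrieve it later, from anywhere with the right credentials.
obj = s3.get_object(Bucket=BUCKET, Key="notes/2016-03-14.txt")
print(obj["Body"].read().decode("utf-8"))
```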

    Since then, Amazon has gone on to dominate the industry it helped create, although offerings from Rackspace and, now, Microsoft Azure mean that the space is anything but settled, particularly when it comes to the hybrid cloud setups preferred by many internal IT operations.

    Still, 10 years is quite a milestone in Internet time, and several key figures took a minute to evaluate the division’s growth and lessons learned.

    My favorite might be the 10 lessons that Werner Vogels shared, covering everything from the immortality of APIs to the power of an ecosystem that flourishes beyond your business’s walls.

    Amazon AWS might not be everywhere, but these days, it seems always just a hop or two away, which isn’t bad for a service that has been up and running for less time than American Idol.

    This first ran at http://windowsitpro.com/cloud/amazon-web-services-turns-ten-years-old

    6:15p
    Microsoft Continues Expanding Cloud Data Center Empire

    Microsoft announced another round of expansion of its already widespread global cloud data center footprint.

    The company announced two new cloud regions specifically to provide services to the US Department of Defense. A cloud region usually consists of one or more data centers.

    The WHIR, our sister site, has the full story:

    Microsoft Azure Beefs Up Government Cloud Credentials

    Microsoft also launched its new cloud region in Germany, which is hosted by its competitor Deutsche Telekom. DT is acting as a “data trustee,” a scheme devised to avoid a repeat of the scenario Microsoft is facing with its Dublin, Ireland, data center.

    The company has been locked in a legal battle with the US government over one user’s personal data stored in the Dublin facility. The government has ordered Microsoft to turn over the data, because it belongs to a person under criminal investigation, but the company is maintaining that the government’s jurisdiction does not extend to a data center overseas.

    Handing control of user data to a non-US company like DT is a way to avoid this situation altogether.

    Read more on our sister site, Windows IT Pro:

    Microsoft Opens Up Azure Cloud in Germany That Even It Can’t Access Easily

    11:44p
    Merger of Two Healthcare Giants Makes IT Transformation Inevitable

    How do you scale your IT infrastructure to three times its capacity while your budget stays about the same? This is the question the IT team at St. Joseph Health has been facing as the 16-facility hospital system, with $5 billion in annual revenue, goes through a merger with Providence Health and Services, announced last year.

    Their answer is a complete transformation of the healthcare system’s IT infrastructure that is a case study in transitioning legacy IT to the latest and greatest in what the market has to offer. The strategy includes all flavors of cloud – public and private, IaaS, PaaS, and SaaS – converged infrastructure, network virtualization, and DevOps-style IT automation.

    Robert Rice, VP of infrastructure and operations at St. Joseph, and Shawn Arcus, the organization’s enterprise data center manager, talked about the transition Tuesday at the Data Center World Global conference in Las Vegas.

    “We’re changing the very nature of what our data centers look like,” Rice said. “And we are pushing more and more to the cloud.”

    Their goal is to maintain a distributed infrastructure but be able to manage and scale it centrally. St. Joseph’s hospitals are in California, Texas, and New Mexico, but the addition of Providence expands the footprint to Oregon, Washington, Montana, and Alaska.

    They plan to implement the transition over the course of about two years.

    Today, the team is maintaining the legacy environment, while adding some Software-as-a-Service cloud applications, such as Salesforce, Office 365, and Workday.

    The stage taking place this year is the implementation of a centralized private cloud as well as regional cloud infrastructure where the hospitals are located. The centralized cloud is built on Vblock converged infrastructure systems and hosted at one of the Switch SuperNAP data centers in Las Vegas, which has been St. Joseph’s primary data center for some time now.

    They also standardized the non-converged server configurations in their data centers, going from 18 to three.

    The regional cloud is built on top of hyperconverged infrastructure appliances from Nutanix, which integrate compute and storage and run advanced storage management software. The Nutanix boxes have enabled a substantial reduction of the physical data center footprint in those regional locations, shrinking it from entire rows of IT equipment to a few rack units in each location, Rice said.

    The next phase of the transition, planned for next year, will be adding public cloud infrastructure services, such as Amazon Web Services, Microsoft Azure, and VMware’s vCloud Air, to take advantage of on-demand capacity.

    While these are all huge technological changes, Rice has not had to make the case for the technology itself or its security to all the hospital CEOs and CFOs in the organization. What he did have to do was illustrate the cost reductions that the transition made possible.

    The infrastructure is already more scalable and fluid – if latency allows, the team can, for example, decide to host an application in Texas, where data center costs are much lower than in Las Vegas – and it takes a much smaller team to manage that infrastructure.

    But the transition is not only technological. It is an operational, organizational, and technological change, which has been one of the hardest parts of this project. Rice had to convince top management at the hospitals, for example, to stop working with technology vendors (providers of healthcare-specific systems) whose products did not support the new IT model.

    This was difficult, but once the discussion turned to cost, he was able to illustrate that if they wanted to continue using those vendors, they would have to pay extra. “I gave a pretty convincing total cost of ownership picture,” he said.

