Data Center Knowledge | News and analysis for the data center industry

Thursday, May 21st, 2015

    12:00p
    Cloud and Collaboration – How the Data Center is Bringing People Closer Together

    It was a Wednesday morning when I received a call I had been eagerly waiting for. I had a great conversation with one of my closest friends. We talked technology, how he was doing, and some of the things he was working on. The call was crystal clear and our conversation ran uninterrupted. Earlier in the week, we exchanged emails and shared pictures via Facebook. We also had the chance to chat live for some time and exchange ideas and news from our respective corners of the world.

    The local time in Bagram, Afghanistan was 7pm and my friend, who is now a Major in the US Army, was taking a quick break from his night shift work.

    After our conversation and some thought, I realized how far we’ve come: this type of communication is now easily possible even in such remote locations. Only a few years ago, a conversation like this would have been much more difficult to conduct. On top of that, having the ability to chat, Skype, and even share content like pictures and videos over such distances is pretty amazing.

    Very recently, Cisco took a look at global average download and upload speeds. Here’s what they found:

    Key Results

    • The global average fixed download speed is 17.3 Mbps, and the global median fixed download speed is 11.1 Mbps.
    • The global average fixed upload speed is 8.8 Mbps, and the global median fixed upload speed is 3.8 Mbps.
    • The global average mobile download speed is 6.3 Mbps, and the global median mobile download speed is 4.8 Mbps.
    • The global average mobile upload speed is 2.6 Mbps, and the global median mobile upload speed is 1.3 Mbps.

    Regional Fixed Download and Upload Speeds

    • Average fixed download speeds: Western Europe leads with 20.0 Mbps, and Asia Pacific follows with 18.8 Mbps.
    • Average fixed upload speeds: Central and Eastern Europe leads with 12.2 Mbps, and Asia Pacific follows with 12.1 Mbps.

    Regional Average Mobile Download and Upload Speeds

    • Average mobile download speeds: North America leads with 10.1 Mbps, and Western Europe follows with 9.5 Mbps.
    • Average mobile upload speeds: Central and Eastern Europe leads with 4.9 Mbps, and North America follows with 4.3 Mbps.

    Network Latency

    • Global average fixed latency is 47 ms.
    • Asia Pacific leads in average fixed latency with 40 ms, and Western Europe closely follows with 46 ms.
    • Global average mobile latency is 198 ms.
    • North America leads in average mobile latency with 101 ms, and Western Europe follows with 113 ms.

    Take a moment to really understand those numbers. As it stands today, the global median fixed download speed is 11.1 Mbps. Just imagine what this will be a few years from now. This kind of speed, lower latency, and improved mobile communications are creating powerful new collaboration mechanisms.
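    To make those numbers concrete, here is a quick back-of-envelope calculation (a minimal Python sketch; the 100 MB file size is an illustrative assumption, not a figure from the Cisco study) showing how long a media transfer takes at the median speeds above:

    ```python
    # Back-of-envelope transfer times at the Cisco-reported median speeds.
    # Mbps is megabits per second, so convert megabytes to megabits (x8).

    def transfer_seconds(size_mb, speed_mbps):
        """Seconds to move size_mb megabytes at speed_mbps megabits/sec."""
        return size_mb * 8.0 / speed_mbps

    clip_mb = 100.0  # illustrative: a short HD video clip

    print("Fixed download (11.1 Mbps): %.0f s" % transfer_seconds(clip_mb, 11.1))
    print("Fixed upload    (3.8 Mbps): %.0f s" % transfer_seconds(clip_mb, 3.8))
    print("Mobile upload   (1.3 Mbps): %.0f s" % transfer_seconds(clip_mb, 1.3))
    ```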

    This kind of evolution is not only helping bridge the gap for many businesses around the world; it’s also helping bring people closer together.

    So, what happened?

    • Intelligent WAN Technologies. Site-to-site replication, better connectivity points, and more WAN optimization have all helped increase our ability to distribute data. We can now control protocols, prioritize specific workloads, and monitor global data distribution at a truly granular level. Our ability to create dedicated networks spanning the globe has been greatly enhanced over the past few years: direct communication links can now be set up between globally distributed data centers using high-bandwidth private lines. Furthermore, WAN optimization (WANOP) has been abstracted to run as a remote client or even a virtual appliance. Traffic shaping and optimization have reached new levels of control, and our ability to apply QoS policies and ensure better traffic flow has come a long way (see the sketch after this list).
    • Unified Communications and Next-Generation Collaboration. Telephony and general communications have progressed quite a bit. We can now deploy IP telephony as well as unified communications tools that allow for high-speed, secure video and voice conferencing all over the world. New types of virtual and physical branch exchanges can be deployed within secured data centers to enable even greater communications capabilities. Our ability to control new kinds of content is enabling an entirely new world of collaboration and communication. We’re securely delivering applications and data to many new devices, including IoE and IoT architectures. Furthermore, there is a lot of new intelligence within our collaboration systems: voice, video, and collaboration data can be tied to a specific WANOP policy that is easily geo-fenced and device-specific. This kind of granularity ensures an optimal user experience in a truly seamless fashion.
    • More Bandwidth, More Speed, Less Latency. Now that we have the backbone to support it all, fiber networks have been growing extremely rapidly, which means more bandwidth is becoming available all over the world. Private links between data centers can be set up where massive amounts of uninterrupted bandwidth can be used. Everything from cellular technology to data center connections has become more powerful. Our routing capabilities continue to increase as core Internet data centers adopt technologies capable of processing more traffic, faster. Trends show that WAN speed will only continue to improve for both businesses and consumers; just look at Google Fiber as an example. Our ability to control traffic flowing through the data center is improving as well: intelligent route, network, and switch platforms control the flow of data, and load balancing helps keep workloads stable while seamlessly passing users to the data center with the most optimal data repositories. All of this is being done in a more autonomous fashion, optimizing user experiences and enabling better data center resource control.
    • Better Data Center Resources and Cloud Controls. Picking up from the previous point, as the global infrastructure continues to expand, more resources will become available at those sites, including locations like the one where my friend was stationed. High-density equipment has already been deployed onsite, where efficient chassis and blade environments are capable of providing large amounts of throughput, resources, and computing power. These data centers are then securely connected to other global sites to facilitate communications and data distribution. Through it all, data center and cloud resource control is further enabled by hyperscale technologies, better optimization policies, and, of course, virtualization. There is a lot of intelligence being built into both the modern data center and the cloud: automation and orchestration controls allow administrators to dynamically provision and de-provision resources based on current and future demands. All of this creates better data center economics and improved data delivery.
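    As a small illustration of the QoS marking mentioned in the first bullet, the sketch below (a minimal Python example, not any vendor’s implementation; the destination host and port are placeholders) tags a UDP socket’s outbound traffic with the Expedited Forwarding DSCP value that voice flows typically carry, so shapers along the WAN path can classify and prioritize it:

    ```python
    # Minimal sketch: mark outbound UDP packets with DSCP EF (46), the
    # per-hop behavior commonly used for voice, so QoS policies along the
    # WAN path can prioritize this flow. Host and port are placeholders.
    import socket

    DSCP_EF = 46
    TOS_EF = DSCP_EF << 2  # DSCP sits in the upper six bits of the TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_EF)

    # Every datagram sent on this socket now carries the EF marking.
    sock.sendto(b"voice-frame", ("192.0.2.10", 5004))
    ```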

    I listed four examples where technology has played a big role in our ability to use everyday communication tools to explore the world. Of course, there are many other tools and solutions that further help us connect. However, it’s the simple things we can now use (Facebook, Skype, Gmail) that help us “conquer” distance and bring friends closer together. In today’s world of turmoil and uncertainty, having some small comforts of home is truly priceless. For those who are overseas, being able to take a little piece of home with them gives them just that little bit more to look forward to.

    There’s no doubt that communications, data center and WAN technologies will continue to evolve. But it’s not just about the business side of things. As technologists, it’s important to remember that we’re not only helping corporations – we’re also helping people.

    12:30p
    Taming Your Organization’s Unwieldy Data Jungle

    Tom Scearce is product marketing manager at Attachmate and Novell for the enterprise file management market.

    By 2020, research firm IDC predicts a 4,300 percent increase in worldwide data, with annual data creation reaching 40 zettabytes. To put that in perspective, 40 zettabytes is equivalent to 360 million years of Blu-ray-quality video, or the data created if every citizen posted three tweets per minute for the next 598,867 years.
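    Numbers at that scale are hard to picture, so here is a rough sanity check in Python (the video bitrate is an assumption chosen for illustration, not IDC’s exact input):

    ```python
    # Rough sanity check on the scale of 40 zettabytes. The ~25 Mbit/s
    # bitrate below is an assumed stand-in for Blu-ray-quality video.
    ZB = 10 ** 21                      # one zettabyte in bytes
    total_bytes = 40 * ZB

    bytes_per_sec = 25e6 / 8           # ~25 Mbit/s video stream
    seconds = total_bytes / bytes_per_sec
    years = seconds / (365.25 * 24 * 3600)
    print("Years of HD video: %.0f million" % (years / 1e6))  # ~400 million
    ```

    The result lands in the same ballpark as IDC’s 360-million-year figure, which is the point: the coming volume of data is staggering.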

    Does your enterprise have a plan for handling this data explosion?

    For most companies, the answer is no. Before files were digitized they were much more intrusive – if a stack of papers and file folders littered your work area and impeded your productivity, you probably felt compelled to organize. Today, all of this clutter has gone digital and lives on your organization’s file servers. But since these files are out of sight now, they’re also out of mind. So what was once a tidy garden of tabs, labels and manila folders is now an overgrown jungle of unstructured data.

    IT managers have ignored runaway storage growth largely because the marginal cost of more disk space is low, and they’re swamped with more urgent problems. But global trends like ubiquitous connectivity, widespread smartphone adoption and the drive to collaborate productively are making companies re-think the importance of files. As file creation accelerates, demand to access those files grows. It’s clear that enterprises need a plan to manage the coming “filepocalypse.”

    Accordingly, there’s been a surge of market activity around all things related to data and files. From file sync and share services to file networking tools, there are dozens, if not hundreds of products and services available today. In the file sync and share sector alone there are close to 100 companies, a number that is sure to increase. In addition to being overloaded with files, IT departments also have countless file-sharing tools to help sift through them. So, how can organizations tame these file jungles?

    The first step to an orderly filing system is managing file overload: identify all files, how and when they got there, and who owns them. It’s important to understand what’s sitting on your servers; there may be multiple copies of the same files, files that are no longer needed, or sensitive information needing special governance.

    You could deploy a small team of IT pros to sift through the entire file system, but that consumes time and money and leaves room for human error. A safer, more efficient approach is to automate file reporting so you can visualize the entire file system. This gives you clear insight into areas of risk and opportunity and reveals the actions you can take to make your file system more manageable.
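    As a starting point, automated file reporting can be as simple as walking the file tree and flagging duplicate and stale files. The sketch below is a minimal Python illustration of the idea (the root path and age threshold are placeholders), not a substitute for a full file-management product:

    ```python
    # Minimal file-audit sketch: group duplicate files by content hash and
    # flag files untouched for two years. Path and threshold are examples.
    import hashlib
    import os
    import time
    from collections import defaultdict

    ROOT = "/srv/fileshare"            # placeholder file server path
    STALE_SECS = 2 * 365 * 24 * 3600   # roughly two years

    hashes = defaultdict(list)
    stale = []
    now = time.time()

    for dirpath, _dirs, files in os.walk(ROOT):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if now - os.path.getmtime(path) > STALE_SECS:
                    stale.append(path)
                with open(path, "rb") as f:
                    digest = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue               # skip unreadable files
            hashes[digest].append(path)

    dupes = {h: p for h, p in hashes.items() if len(p) > 1}
    print("%d duplicate groups, %d stale files" % (len(dupes), len(stale)))
    ```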

    Of course, a well-managed file system is not a one-time achievement; it’s important that your files stay organized, or any cleanup effort is a waste of time. One of the best ways to maintain order is to tailor a storage policy to each employee’s role within the organization, even as roles change. For example, when a new hire starts, he or she automatically gains access to the files appropriate to that job function. Later, when that same employee changes roles or departments, access rights automatically adjust, ensuring that the system stays organized and files remain secure.
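    The core of such a role-driven policy can be modeled very simply; the sketch below (role names and share paths are invented for illustration) shows the idea of deriving access rights from an employee’s current role rather than assigning them by hand:

    ```python
    # Sketch of role-driven access: rights are derived from the employee's
    # current role, so a role change automatically adjusts access.
    # Role names and share paths are illustrative.
    ROLE_SHARES = {
        "sales":       ["/shares/sales", "/shares/common"],
        "engineering": ["/shares/eng", "/shares/common"],
        "finance":     ["/shares/finance", "/shares/common"],
    }

    def shares_for(role):
        return ROLE_SHARES.get(role, ["/shares/common"])

    employee = {"name": "Ada", "role": "sales"}
    print(employee["name"], "->", shares_for(employee["role"]))

    employee["role"] = "finance"   # role change: rights follow automatically
    print(employee["name"], "->", shares_for(employee["role"]))
    ```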

    Now that your file jungle has been tamed into an orderly and productive garden, you’re ready to wow your company’s mobile workers and the people with whom they share files. Look for solutions that give users the ease of use and productivity they need without compromising IT’s charter to secure and manage the underlying data. In the absence of a user-friendly option sanctioned by IT, employees will turn to public cloud-based services that may lack the security and data management features your firm requires. Choosing the right file-sharing solution will improve your return on investment.

    Thinking differently about file management and sharing may not seem like a top priority for your organization today. But those files contain one of your company’s most vital assets: information. With a new file management process, and a file sharing system that meets the needs of users and IT, that file jungle you tamed into a garden will soon be a sustainable farm that safely yields valuable crops for your VIP customers around the world.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    Nexenta Announces Availability of Open Source Software Defined Storage Platform NexentaEdge

    At the Vancouver OpenStack summit, software-defined storage company Nexenta announced the general availability of its NexentaEdge Block and Object Storage platform, as well as a strategic alliance agreement with Canonical and its Ubuntu OpenStack.

    NexentaEdge was launched late last summer as a scale-out object storage solution with high-performance block, Swift, and S3 object services. Running on Intel-powered commodity servers, the Open Source-driven Software-Defined Storage (OpenSDS) offering from Nexenta is optimized for big data, OpenStack clouds, and petabyte-scale, object-based active archives.

    Citing the disruptive power of those environments, Nexenta Chief Product Officer Thomas Cornely said that “OpenStack clouds in particular require both high performance block services and extremely scalable object repositories.” Cornely added that his company “is unique in its ability to concurrently meet these requirements and further drive down storage costs with inline deduplication and compression of all data at any scale.”

    Nexenta is reportedly close to being ready for an IPO in the next year or so and has taken investments from SanDisk in 2013 and from Dell last year.

    Nexenta also detailed its strategic alliance agreement with Canonical, which will allow Canonical’s Ubuntu OpenStack customers to benefit from Nexenta’s full spectrum of OpenSDS products for their large-scale private and hybrid clouds. The two companies said they will build solutions that benefit large-scale Ubuntu OpenStack deployments on OpenSDS instances. Nexenta also joined the Canonical Charm Partner Program and will support Canonical’s Juju service model.

    Helping to push the software-defined storage disruption toward greater adoption, Canonical launched Ubuntu Advantage Storage at the OpenStack summit, with metered pricing based on data stored, independent of total storage capacity deployed. The offering features NexentaEdge as well as Ceph, Swift, and SwiftStack. Canonical works with vendors to help customers leverage OpenStack clouds built on Ubuntu and the Canonical OpenStack distribution, and said that user-requested features are available in open source SDS technologies, such as scale-out management, RDMA transport enablement, support for SSD and NVMe acceleration, and flexible cache tiering.

    “OpenStack deployments come with radical storage requirements,” said Thomas Cornely, Chief Product Officer at Nexenta. “By partnering with Canonical on Ubuntu Advantage Storage, NexentaEdge delivers its streamlined architecture and unique functionality – inline deduplication, smart placement, and extremely high data integrity — in a production-ready, fully-supported package that addresses the storage needs of the most demanding cloud deployments while bringing new options to OpenStack administrators.”

    1:21p
    Digital Realty Opening $150m Singapore Data Center

    Digital Realty is investing $150 million in a second, 13.2-megawatt data center in Singapore to address growing demand for data center infrastructure in the city-state and interconnection needs in Asia Pacific. Delivery of the powered shell and an initial 3.2 megawatts is expected in late 2015.

    The company entered into a purchase agreement for a former 175,000-square-foot paper storage facility and is converting the non-technical building into a data center. Legal completion of the deal will occur sometime next month, although power and fiber work is already underway.

    Singapore is the network connectivity and business hub for Southeast Asia and one of the key hubs for the region at large. Data center demand is booming across all Asian business centers as foreign companies expand infrastructure to serve local customers, and as local companies grow together with the market.

    Located in the Northeast, the data center will connect via dark fiber to Digital Realty’s first data center, in the west of the city-state. The west is home to global Tier 1 carriers, the Singapore Internet Exchange and Digital Realty’s CloudConnect network.

    Digital Realty opened its existing Singapore data center in 2011 at the International Business Park. That facility is massive at 370,000 square feet and 30 megawatts of critical IT capacity.

    “Our first data center in Singapore marked Digital Realty’s entrance into one of Asia Pacific’s rapid expansion markets and enabled our global clients to enter the city-state and serve other regional markets with a trusted partner,” said Daryl Dunbar, Managing Director Asia Pacific at Digital Realty.

    Dunbar said the new data center expansion supports Singapore’s efforts in becoming a regional data center hub.

    Singapore is one of the fastest-growing Asia data center markets with many providers looking to set up shop in the country. Equinix’s recent Singapore data center is its largest in Asia, and CenturyLink recently announced the launch of a cloud data center there. IO has a massive presence in the country. A swath of cloud providers have also strategically entered Singapore in the last few months, including Atlantic.net, DigitalOcean, and Linode.

    Digital Realty said the new data center falls in line with the country’s Smart Nation Program, which encourages the sustainable supply of data centers to ensure sufficient future capacity. The building will also target Green Mark Platinum certification and will comply with the Monetary Authority of Singapore’s Threat and Vulnerability Risk Assessment Guidelines.

    “Aligned to our vision for Singapore to become the digital innovation capital of Asia, Digital Realty’s expansion will enable both local and international companies to build new digital capabilities and scale critical digital services in a cost effective and efficient manner,” said Kiren Kumar, Director of Information Communications and Media, Singapore Economic Development Board (EDB).

    3:00p
    Peak Hosting Taps Digital Realty For Two Data Centers

    Managed hosting provider Peak Hosting tapped Digital Realty to get two new data centers up and running quickly. To meet customer demand for geographically distributed locations, the company added data center PODs (Performance Optimized Data Suites) in Richardson, Texas, and Santa Clara, California. Each POD consists of 1.25 megawatts in a full 2N environment.

    Peak Hosting deployed both data centers in less than 60 days, according to the company. It first added 800 servers to the data center in Richardson, followed by a second phase of 400 servers less than a month later; that facility now holds 3,000 servers and counting. The initial deployment in Santa Clara consisted of 700 servers and associated infrastructure and took only 10 days from the loading docks to OS load.

    Peak Hosting now has a total of five data centers in the U.S. as well as one in Europe. The relationship with Digital Realty started with a data center on Digital Realty’s Ashburn campus.

    “We’re excited to support Peak Hosting as it expands, and we’re dedicated to ensuring Peak Hosting takes full advantage of Digital Realty’s recently expanded connectivity offerings, including richer direct-connect options, and easy access to cloud providers,” said Matt Miszewski, senior vice president of sales and marketing for Digital Realty in a press release.

    Peak Hosting offers what it calls Operations-as-a-Service managed hosting: customers select from a menu of services and locations, and the company builds and manages the environment for them. Its tagline is that it “takes care of everything but your code.”

    The company has placed an emphasis on time to delivery, which also happens to be Digital Realty’s specialty in turnkey data centers. Digital Realty’s CIO recently spoke to DCK about the emphasis on streamlining operations.

    “The industry average time to deploy a new data center suite is in the realm of four months,” said Jeffrey Papen, founder and CEO, Peak Hosting. “Our team blew that timeframe out of the water due to its experience and focus, and as a result met a new customer’s requirement for geographically dispersed infrastructure in specific cities on a very short timeline. It’s not that other hosting companies can’t deploy this quickly; it’s that they won’t.”


    3:00p
    Skyport Systems Launches Secure Server Platform

    Rising to the challenges associated with developing a truly secure system, Skyport Systems today announced the availability of a hyperconverged platform based on a distribution of SELinux (Security-Enhanced Linux).

    Based on hardened open source Xen hypervisors running on Intel processors that implement Intel Trusted Execution Technology, the Skyport Secure platform is unique, says Skyport Systems corporate vice president Doug Gourlay, in that not only are policies enforced on each workload running on the system, but Skyport itself also provides a managed service through which it validates and then continuously monitors each component of the system.

    Gourlay says Skyport Secure, which can be installed in 30 minutes, is designed to address segments of the server market where the security of the data center itself is absolutely essential. To provide that level of security, all hardware, firmware, and software components are validated at the point of manufacture. After validating the components, the system boots a fully whitelisted version of SELinux.

    “We’re pinning a workload to a specific virtual machine,” says Gourlay. “Then we create an image of all the components in the system to make sure none of them are ever replaced by something we didn’t validate.”
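    Conceptually, that validate-then-monitor loop is similar to checking each component against a manifest of known-good hashes. The sketch below is a simplified Python illustration of the concept (paths and digests are placeholders), not Skyport’s actual mechanism:

    ```python
    # Simplified illustration of component whitelisting: hash each component
    # and compare against a manifest captured at validation time.
    # Paths and digests below are placeholders, not Skyport's real values.
    import hashlib

    def sha256_of(path):
        try:
            with open(path, "rb") as f:
                return hashlib.sha256(f.read()).hexdigest()
        except OSError:
            return None  # a missing component also fails validation

    manifest = {
        "/boot/xen.gz": "expected-digest-placeholder",
        "/usr/sbin/monitord": "expected-digest-placeholder",
    }

    def verify(manifest):
        """Return components whose current hash differs from the manifest."""
        return [p for p, good in manifest.items() if sha256_of(p) != good]

    tampered = verify(manifest)
    if tampered:
        print("ALERT: unvalidated components:", tampered)
    ```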

    To make the platform accessible to a broader set of customers, pricing for Skyport Secure starts at $2,500 per month. At the end of a three-year cycle, Gourlay says, IT organizations can opt to upgrade to the next iteration of the Skyport Secure platform.

    By design, Gourlay says, SkySecure Server creates a synthetic operating environment and assumes a zero-trust posture regarding all network access. This approach creates a security perimeter around the server that no rootkit or malware can pass, says Gourlay. Via the cloud, Skyport Systems then provides a tamper-resistant audit log and certificate management system, along with visibility into all traffic flows and application interactions across each workload, he adds.

    Gourlay is the first to admit that commodity servers will continue to dominate the data center landscape for years to come. But by making secure servers more accessible, Gourlay says Skyport Systems can expand a market niche today that isn’t being properly served by the major server vendors.

    The reason they don’t serve this market, of course, says Gourlay, is that deploying and managing a secure operating system has historically been exceedingly difficult. The Skyport Systems approach to solving that problem relies not only on tighter code integration but also on continuous outside monitoring of the server in a way that limits any and all threats to its integrity.


    3:01p
    Dupont Fabros Signs Facebook To 7.5MW Lease – Facebook Now Leases Over 40MW On Ashburn Campus

    Dupont Fabros landed a sizable lease with Facebook for close to 7.5 megawatts of critical load in Ashburn, Virginia. Facebook is taking down close to 45,000 square feet in the new ACC7 facility, the biggest data center in Dupont Fabros’ portfolio.

    Facebook now leases over 40 megawatts of critical load across Dupont Fabros’ Ashburn campus. The new lease includes 4.46 MW in ACC7 Phase I, which commences immediately, and 2.97 MW in ACC7 Phase II. Facebook also leases nearly 36 megawatts of critical load at three other facilities on DFT’s Ashburn data center campus.

    Dupont Fabros is currently constructing a second phase in ACC7 consisting of 9 megawatts and 50,000 square feet of space, due to open in the fourth quarter. ACC7 is already 84 percent leased on a critical load basis.

    “We are gratified by Facebook’s confidence in DFT and our ability to provide continuous and highly efficient power and cooling to their computer servers and network equipment,” said Christopher Eldredge, president and chief executive officer, Dupont Fabros. “The expanded relationship with Facebook gives us the opportunity to customize leases with terms that suit the long-term goals of both companies.”

    The company reported positive leasing trends for the first quarter; notably, this single lease exceeds all of its first-quarter leasing activity. The contract was signed earlier this month. With the new lease, Dupont Fabros’ occupancy across its entire portfolio is 96 percent, up from 94 percent.

    As part of the ACC7 lease, Dupont Fabros and Facebook amended each of Facebook’s existing leases. Facebook has the right to individually decrease the term of the lease of each of nine computer rooms, each with 2.3 megawatts of available critical load, provided the aggregate reduction in lease terms does not exceed 67 months, or an average of approximately seven months per computer room.

    The amendments also extended the lease of one computer room totaling 2.28 megawatts of available critical load by six months and two computer rooms totaling 4.33 megawatts of available critical load by twelve months each.
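    The arithmetic behind those figures checks out; here is a quick verification of the reported numbers in Python:

    ```python
    # Quick check of the reported ACC7 lease figures.
    phase1_mw, phase2_mw = 4.46, 2.97
    print("New ACC7 lease: %.2f MW" % (phase1_mw + phase2_mw))
    # ~7.43 MW, consistent with "close to 7.5 megawatts"

    rooms, max_reduction_months = 9, 67.0
    print("Avg reduction per room: %.1f months"
          % (max_reduction_months / rooms))
    # ~7.4 months, matching "approximately seven months per computer room"
    ```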

    6:00p
    DataCore Adds OpenStack Support

    DataCore Software this week released an update to its storage software offerings that makes them compatible with OpenStack.

    The DataCore PSP2 update to the company’s SANsymphony-V10 and DataCore Virtual SAN software also includes tools to centrally control and manage end-to-end I/O performance, optimize flash memory, serve up virtual desktops, and automate deployments.

    Other new features include the ability to deduplicate and compress virtual disks in the background and the ability to integrate with backup and recovery software from Veeam.

    Augie Gonzalez, director of product marketing for DataCore, says OpenStack for the moment is being primarily adopted by larger enterprise IT organizations that are trying to build their own private clouds. The update allows those IT organizations running OpenStack to make use of both new and legacy storage infrastructure by adding support for the Cinder interfaces defined within OpenStack.
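    From the consumer’s side, once an administrator has mapped a volume type to the DataCore backend, requesting storage looks like any other Cinder call. The snippet below is a hypothetical sketch using python-cinderclient (the credentials, endpoint, and “datacore” volume type name are illustrative placeholders, not DataCore’s documented names):

    ```python
    # Hypothetical sketch: requesting a volume from a DataCore-backed
    # Cinder backend. Credentials, endpoint, and the 'datacore' volume
    # type are placeholders an administrator would have configured.
    from cinderclient import client

    cinder = client.Client('2', 'admin', 'secret', 'demo',
                           'http://keystone.example.com:5000/v2.0')

    vol = cinder.volumes.create(size=100,              # GB
                                name='db-volume',
                                volume_type='datacore')
    print(vol.id, vol.status)
    ```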

    “We’re seeing a lot of interest in OpenStack from large enterprises that need to support multiple hypervisors,” says Gonzalez. “But we didn’t think telling people they needed to buy all new storage to run OpenStack made a lot of sense.”

    Gonzalez says the rise of hyperconverged environments inside data centers validates the hardware-independent approach to storage management that DataCore has long advocated. In the case of DataCore Virtual SAN, for example, Gonzalez says the DataCore offerings can be deployed on both servers and storage arrays to create a hyperconverged environment that enables OpenStack to truly scale.

    The reason for that, says Gonzalez, is that DataCore software runs in RAM next to the processor rather than in flash. As a result, Gonzalez says, DataCore Virtual SAN is ten times faster than VMware Virtual SAN software. That capability is particularly important within the confines of a virtual desktop infrastructure (VDI) deployment, where I/O performance often determines the success or failure of the project, says Gonzalez.

    Whether it’s referred to as software-defined storage or hyperconverged infrastructure, Gonzalez says storage in general is moving away from expensive proprietary hardware. In its place is a new generation of storage systems based on standard x86 processors and commodity disks.

    In general, each iteration of x86 processors adds strength to that argument. While proprietary hardware systems can be made to run faster at any given moment, over the long haul Moore’s Law has consistently shown the advantage of general-purpose processors, provided, of course, there’s storage management software that can make efficient use of them.


    6:04p
    Net Neutrality, Government Surveillance, Data Centers – Things You are Afraid to Ask

    Ever wonder how U.S. government surveillance programs and the FCC ruling on net neutrality are impacting data centers? Join our panel of data center, telecommunications, and legal experts in a roundtable discussion about the changing nature of the relationship between the U.S. government and the data center industry. Register Now

    You’ve heard from the TV pundits. Now hear experts from your own industry discuss the data center implications of these current events.

    Meet the Panelists

    Christian Dawson
    Chairman & Co-Founder
    Internet Infrastructure Coalition

    William Dougherty
    SVP & CTO
    RagingWire Data Centers

    David Snead
    Internet Attorney & Co-Founder
    Internet Infrastructure Coalition

    Michael Wheeler
    Executive Vice President
    NTT Communications


