Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, April 8th, 2014

    2:22a
    Netsolus Mines New Niche in Bitcoin Data Centers

    NEW YORK – Many data center providers are testing the waters of the market for hosting Bitcoin hardware. Netsolus has jumped in with both feet, and quickly built a niche in hosting custom gear for virtual currency. Bitcoin customers represent 75 percent of the revenue for the Kansas City-based service provider, which is now building custom data centers for large industrial mining customers.

    “We’re going to be bringing close to 20 megawatts online,” said Bryan Ballard, the Chief Technology Officer for Netsolus, who discussed his company’s virtual currency hosting business Monday on a panel at the Inside Bitcoins conference in New York. “We’re seeing more of the industrial miners.”

    Netsolus is perhaps the best example of a data center provider riding the Bitcoin wave, and building a business focused on the needs of the virtual currency community. It’s made some adjustments to accommodate the requirements of mining hardware rigs, which feature custom chips called ASICs (Application Specific Integrated Circuits) that run constantly as they crunch data for creating and tracking bitcoins.
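    The workload those ASICs run is simple to describe: repeatedly hash a candidate block header with double SHA-256 until the result falls below a difficulty target. A minimal Python sketch of that proof-of-work loop (illustrative only; real miners work on actual 80-byte block headers at vastly higher difficulty):

```python
import hashlib

def double_sha256(data: bytes) -> bytes:
    """Bitcoin hashes block headers with two rounds of SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def mine(header: bytes, difficulty_bits: int, max_nonce: int = 2**20):
    """Search for a nonce whose double-SHA-256 hash falls below the target.
    Mining ASICs perform exactly this loop, billions of times per second."""
    target = 2 ** (256 - difficulty_bits)
    for nonce in range(max_nonce):
        digest = double_sha256(header + nonce.to_bytes(4, "little"))
        if int.from_bytes(digest, "big") < target:
            return nonce
    return None  # no valid nonce found in this range

nonce = mine(b"example block header", difficulty_bits=16)
```

    At a toy difficulty of 16 bits this finds a nonce in roughly 65,000 attempts on average; production ASICs run trillions of such hashes per second around the clock, which is why power density and cooling dominate the design of the facilities that host them.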

    Higher Density, Lower Redundancy

    The company’s new facilities will be very different from its primary data center in the Oak Tower, a former central office for AT&T in Kansas City. The Netsolus-built Bitcoin mines will be lower-tier facilities that forego traditional raised floors and redundant power infrastructure, opting instead for powered shells with slab floors filled with custom ASIC systems housed on shelving.

    Netsolus now has five data centers, including several custom Bitcoin mining sites totaling about 3 megawatts of power capacity. The company is discreet about the details of these expansion sites, and the clients like it that way.

    “We keep the locations secret,” said Ballard. “They all have codenames. We have 20 employees in our shop, and only three of them know where all the facilities are.”

    Boost From Butterfly

    Netsolus is a 13-year-old company that has historically focused on colocation and managed services. It discovered the Bitcoin market through its relationship with Butterfly Labs, a Kansas City company that builds custom ASIC hardware for mining virtual currency. Butterfly refers customers to Nimbus Mining, a cloud mining operation that hosts hardware with Netsolus and powers a chunk of its operations with Butterfly hardware.

    These cloud mining companies still require Tier III-level reliability. “There is a small subset of the Bitcoin community that is offering (hosted mining) contracts,” said Ballard. “We’re seeing a high uptick in the contract miners, but we expect that to subside over the next year. They need uptime, so they need the equivalent of enterprise-style requirements. The rest of the miners don’t need that.”

    Large-scale mining operations are focused on finding the lowest cost possible, which places a premium on cheap power and bare-bones infrastructure.

    “Reliability and uptime are not that big a problem,” said Josh Zerlan, VP of Product Development at Butterfly Labs, who was also a panelist at Inside Bitcoins. “You don’t need this huge infrastructure to support quadruple-nine uptime. If half the Bitcoin miners in the world go offline, that doesn’t stop the network. Density is the issue.”
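    The arithmetic behind Zerlan’s point is straightforward: each added “nine” of availability cuts permitted downtime by a factor of ten, and the infrastructure needed to chase the last nines is exactly what miners skip. A quick illustrative calculation:

```python
def downtime_minutes_per_year(nines: int) -> float:
    """Annual downtime permitted at an availability of N nines (e.g. 4 -> 99.99%)."""
    unavailability = 10 ** -nines
    return unavailability * 365 * 24 * 60

# "Quadruple-nine" uptime permits under an hour of downtime a year,
# while two nines -- acceptable for a mining rig -- permits several days.
four_nines = downtime_minutes_per_year(4)  # ~52.6 minutes/year
two_nines = downtime_minutes_per_year(2)   # ~5,256 minutes/year (~3.7 days)
```

    Every step up that ladder means more redundant power and cooling infrastructure, which is why bare-bones powered shells make economic sense for mining but not for enterprise hosting.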

    Different Criteria for Site Selection

    Ballard says this changes the site selection process for new Bitcoin mining facilities.

    “In traditional data centers, you’re trying to find the right confluence of fiber, power and bandwidth,” said Ballard. “Our bandwidth is negligible. It’s all about finding the right cost of power and finding enough shelving. We’re looking at old steel mills, and other sturdy facilities with good power.”

    As Netsolus continues to attract interest from large miners seeking capacity, it’s exploring ways to work with other data center providers to meet demand. One possible approach, Ballard says, is to team with wholesale data center providers with vacant powered shell space.

    In the meantime, Netsolus expects a continuing influx of smaller customers making the transition from mining Bitcoin at home. “They don’t expect to pay more than they pay for electricity to mine at home,” said Ballard. “So that’s tricky.”

    The tipping point for the homebrew mining crowd is cooling, which introduces a seasonal component.

    “Having a lot of mining hardware in your house isn’t a problem during the winter,” said Ballard. “But the weather is getting warm. This is like Game of Thrones, except it’s ‘Spring is Coming.’ We’ll get a big rush of people for spring.”

    1:00p
    Cisco Updates Cloud-Enabled Video Processing Service

    At the NAB (National Association of Broadcasters) show in Las Vegas this week, Cisco announced plans to launch Videoscape Virtualized Video Processing, Akamai announced that it has passed an MPAA security assessment, and Glassworks selected Mellanox and Pixit Media to help power visual effects in post-production. The NAB 2014 online conversation can be followed on the Twitter hashtag #NABShow.

    Cisco launches virtualized video processing. Advancing its Evolved Services Platform strategy, Cisco (CSCO) announced plans to virtualize and cloud-enable the video processing elements of its industry-leading Videoscape TV service delivery platform. Cisco also announced enhancements to the Videoscape AnyRes encoding solution to support full-frame-rate 4K/Ultra High Definition content for higher quality and the High Efficiency Video Coding (HEVC) standard for more optimized delivery. Cisco’s Videoscape Virtualized Video Processing (V2P) will comprise a virtualized video processing portal, a virtualized video orchestrator, and hardware and software including Videoscape AnyRes software, Cisco’s DCM and 9036 families of high-performance video processing platforms, and UCS blades. “Our customers need a radically simpler and more agile video processing infrastructure as they seek to deliver new video experiences, and stay ahead of the rapid growth in video processing options and formats,” said Joe Cozzolino, Senior Vice President, General Manager, Service Provider Video Infrastructure at Cisco. “Virtualized Video Processing enables our customers to focus on delivering better video services, faster and more cost effectively, and frees them from the burden of buying, configuring and re-configuring individual pieces of hardware. As the proven industry leader in cloud and virtualization, we are using these technologies to empower our customers with far greater agility in deploying compelling TV experiences, while reducing operational burdens.”

    Akamai cloud workflow completes MPAA security assessment. Akamai Technologies (AKAM) announced its cloud-based digital media workflow has passed the Motion Picture Association of America’s (MPAA) security assessment. As one of the first to undergo an MPAA security assessment, Akamai’s cloud workflow has the integrated digital rights management (DRM) element that is critical for premium content protection. The cloud-based workflow is intended to alleviate the physical resources and operational complexities often associated with in-house content preparation. In conjunction with the introduction of its secure workflow, Akamai announced its DRM partner program, designed to integrate a range of DRM providers and technologies into the Akamai workflow and offer customers an array of options to best meet their DRM requirements. The first two participants, Irdeto and BuyDRM, will offer Akamai customers a choice of Microsoft PlayReady and Adobe Access DRM technologies. “Meeting the rigorous criteria set forth by the MPAA is a meaningful achievement in Akamai’s ongoing commitment to drive innovation in online video and securely deliver the highest possible quality content at scale,” said Michael Fay, vice president, media products and operations, Akamai. “Content protection is crucial, and qualifying our workflow through the MPAA assessment adds further credence to Akamai’s goal of being the trusted provider for source and derivative content on the Internet. We plan to continue to evolve Akamai’s online media solutions so that our customers recognize the Akamai partnership as essential for providing the highest quality online experiences. Achieving this recognition requires not only the highest-performing, most reliable cloud storage and global delivery services, but also scalable, secure and consistently high-quality content preparation.”

    Mellanox and Pixit Media selected by Glassworks for visual effects. Mellanox Technologies (MLNX) and Pixit Media, provider of enterprise-ready storage, network and archive solutions, announced that Glassworks has selected a combined Mellanox and Pixit Media solution to power its visual effects (VFX) and post-production storage and networking infrastructure. Glassworks is using Mellanox’s end-to-end Virtual Protocol Interconnect (VPI) solution, which provides InfiniBand and Ethernet connectivity on the same wire, and Pixit Media’s PixStor technology to increase the performance and efficiency of its system without needing to add servers. The solution also future-proofs its infrastructure for 4K workflows. PixStor leverages Mellanox’s end-to-end FDR 56Gb/s InfiniBand and 40/56Gb/s Ethernet interconnect solutions with VPI technology to provide a seamless, high-performance rate of media data traffic from storage disk to creative application desktop. “The ability to share project data between creative departments without the need for multiple, proprietary storage systems will have a big impact on our business,” said Will Isaac, head of engineering at Glassworks. “Our creative staff has instant access to content, enabling quicker and more elegant project collaboration at a reduced cost. As an innovator in an evolving industry, we need to be ready to deliver 4K projects without any further investment in our network. It’s a win-win.”

    1:12p
    Enhanced Functionality on Branch Circuit Power Meters Reduces DCIM Cost

    Stephan Prueger is vice president of sales at TrendPoint Systems.

    Anyone who has deployed a sophisticated DCIM system knows that the return on investment (ROI) associated with capabilities like real-time monitoring, intelligent alerting and asset management makes this a worthwhile venture. As such, more data center operators are deploying DCIM solutions across their facilities to both reduce downtime and drive significant CapEx/OpEx cost savings. Yet, according to the Uptime Institute’s 2013 Data Center Industry Survey, 60 percent of advanced owners and operators cite cost as the primary barrier to DCIM adoption.

    This disconnect is a common one with enterprise software deployments. And while there are no silver bullets when it comes to rolling out DCIM, deploying devices with greater functionality including multi-protocol support, data logging, and an onboard web interface can greatly reduce the TCO of the overall solution.

    Multi-Protocol Support

    DCIM vendors tout the benefit of a “single pane of glass”: the ability to pull together several disparate applications. Application consolidation is a big part of the long-term ROI for facilities managers. But this may not be possible if the facility is loaded with devices that each support only a single, legacy or proprietary protocol.

    And while middleware vendors that sit between the device and the DCIM front end will claim the benefits of protocol conversion, it is far more efficient for devices to speak directly to the DCIM system. Direct communication can minimize total device counts in the facility, reduce single points of failure, and help eliminate multiple license fees for monitoring the same points.

    Data Logging/Backup

    Managing all the data generated by rapid polling becomes a big issue as device counts increase. JACE controllers and similar gateways are commonly used to aggregate data between the device and the DCIM solution. This isn’t necessary for devices with onboard data logging and backup, as they can handle the workload associated with measuring continuous data over frequent intervals without the need for additional hardware and software. The net result is the ability to scale your device infrastructure much more efficiently and cost-effectively.
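    Onboard logging of this kind is essentially a ring buffer of timestamped samples that the DCIM poller can drain on its own schedule, including backfilling a gap after losing connectivity. A hypothetical sketch (the class and method names are illustrative, not any vendor’s API):

```python
from collections import deque

class OnboardLogger:
    """Device-side data logging: a fixed-size ring buffer retains the most
    recent samples so the DCIM system can backfill after an outage."""

    def __init__(self, capacity: int = 10_000):
        # deque with maxlen silently discards the oldest sample when full
        self.samples = deque(maxlen=capacity)

    def record(self, timestamp: float, watts: float) -> None:
        self.samples.append((timestamp, watts))

    def since(self, cutoff: float):
        """Return samples newer than `cutoff` -- what a DCIM poller would
        request to fill a gap after reconnecting."""
        return [(t, w) for t, w in self.samples if t > cutoff]
```

    Because the device itself absorbs the high-frequency sampling, the DCIM system can poll at a relaxed interval without losing resolution, which is the scaling benefit described above.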

    Onboard Web Interface

    Managing adds, moves, and changes to the infrastructure becomes increasingly challenging as device counts increase. One way to mitigate the complexity is to deploy devices with an onboard web interface. This allows the device data to be read remotely from the network as opposed to requiring a direct connection. Validating DCIM data becomes much simpler, and any adds, moves, and changes in the device configuration can be managed through the DCIM interface which helps maintain the “single pane of glass” paradigm.
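    In practice, “reading device data remotely” usually means the meter exposes an HTTP endpoint that returns structured readings. A hypothetical example of what a DCIM integration might do (the URL and payload shape are invented for illustration; real meters differ by vendor):

```python
import json
from urllib.request import urlopen

# Hypothetical endpoint and payload layout -- actual meters vary by vendor.
METER_URL = "http://10.0.0.42/api/branch-circuits"

def parse_readings(payload: str) -> dict:
    """Map each branch circuit name to its current draw in amps."""
    data = json.loads(payload)
    return {c["name"]: c["amps"] for c in data["circuits"]}

def poll_meter(url: str = METER_URL) -> dict:
    """Fetch readings over the network -- no direct serial connection needed."""
    with urlopen(url, timeout=5) as resp:
        return parse_readings(resp.read().decode())
```

    Because the same interface serves both a human validating DCIM data in a browser and the DCIM poller itself, configuration changes can be made without a truck roll to the cabinet.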

    Implementing a DCIM solution isn’t a discrete event for facility and IT professionals. Instead, it is a continuous process of deployment and maintenance, constantly evolving over time. The key to maximizing ROI is to streamline those initiatives as much as possible. As such, utilizing devices with a high level of functionality can significantly reduce the time, cost, and complexity over the lifecycle of the project.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:30p
    Highlights From Inside Bitcoins New York

    NEW YORK – This week we’re exploring the latest trends in a relatively new class of customers for the data center industry – virtual currency miners.

    The growth of the Bitcoin business was on display Monday in New York, where MediaBistro hosted Inside Bitcoins NY. The event drew more than 2,000 people to the Jacob Javits Center, along with 42 exhibitors in the expo hall. That’s a big jump from last year’s first event in New York, which had 150 attendees and three sponsors.

    Inside Bitcoins provided an overview of the fast-moving landscape for Bitcoin hardware, with many booths featuring vendors of ASIC-based systems for “mining” – the process of creating bitcoins by verifying transactions on the distributed computing network. The event also drew interest from data center and hosting providers, including IO, Webair, Netsolus and CloudHashing. Here’s a look at some of the hardware and activity at the event.


    This mining rig from Hashware illustrates the density of custom Bitcoin equipment, with boards packed together tightly inside the chassis and large fans to generate airflow to cool the components. (Photo: Rich Miller)


    A staffer at the booth for Hashware, an Israeli ASIC specialist, discusses the company’s hardware with a delegate at the Inside Bitcoins conference. (Photo: Rich Miller)


    Ravi Iyengar, the CEO of Bitcoin hardware vendor CoinTerra, is a semiconductor industry veteran who is now focusing on the business opportunity in specialized mining ASICs. CoinTerra recently released its first batch of systems, which were pre-sold. (Photo: Rich Miller)


    CoinTerra’s TerraMiner hardware was on display at the company’s booth on the expo floor. (Photo: Rich Miller)


    The Inside Bitcoins conference from MediaBistro drew 2,000 virtual currency enthusiasts to the Jacob Javits Convention Center in New York, as well as 42 vendor exhibitors. (Photo: Rich Miller)

    2:00p
    Optimize Your Data Center: Enhancing the App Experience for End-Users

    The modern cloud platform has introduced new ways to compute, process information, and deliver rich content. Furthermore, we are creating a much more distributed data center model and user base. All of these factors are critical considerations for the data and application experience. You can have the best cloud platform out there, but if the user experience is poor, your entire model will suffer.

    During the entire application delivery conversation, there are several key points to understand. For example, when it comes to your application, can you define the clear difference between bandwidth and latency? Does bandwidth matter if you’re creating a powerful quality of experience (QoE) model? What about direct optimizations to the application’s delivery process? This white paper from Equinix explores the challenges that today’s CIOs face in regard to application performance, and how a more strategic data center approach, like distributing applications closer to your users, can help optimize application performance and reduce IT costs.
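    One way to make the bandwidth/latency distinction concrete: total delivery time is serialization time (set by bandwidth) plus round-trip cost (set by latency times protocol chattiness). A back-of-the-envelope model, not taken from the Equinix paper:

```python
def transfer_time(size_mb, bandwidth_mbps, rtt_ms, round_trips=1):
    """Total seconds to deliver a payload: serialization plus latency cost."""
    serialization_s = size_mb * 8 / bandwidth_mbps  # bits over link rate
    latency_s = round_trips * rtt_ms / 1000         # protocol round trips
    return serialization_s + latency_s

# 1 MB payload, 20 protocol round trips, 80 ms RTT:
slow_link = transfer_time(1, 100, 80, 20)   # 0.08 + 1.6  = 1.68 s
fast_link = transfer_time(1, 1000, 80, 20)  # 0.008 + 1.6 = 1.608 s
near_user = transfer_time(1, 100, 20, 20)   # 0.08 + 0.4  = 0.48 s
```

    For a chatty application, a tenfold bandwidth upgrade barely moves total time, while cutting the RTT by moving the application closer to its users cuts it dramatically; that is the core argument for a distributed deployment strategy.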

    According to Gartner’s global survey of 2,335 CIOs conducted in 2012, CIOs are increasingly being measured against a broad set of business priorities that includes:

    • Increasing Enterprise Growth
    • Attracting and Retaining New Customers
    • Reducing Enterprise Costs
    • Creating New Product and Service Effectiveness
    • Delivering Operational Results
    • Improving Efficiency
    • Improving Profitability
    • Attracting and Retaining the Workforce
    • Improving Marketing and Sales
    • Expanding into New Markets

    Meanwhile, technology growth trends are changing how enterprises conduct business. CIOs must deploy architectures that deliver value in these areas, while also scaling their infrastructure in anticipation of these trends. Deploying a truly scalable infrastructure revolves around your ability to adapt to dynamic business needs and ever-changing delivery models. The future of the compute model will truly revolve around data and applications.

    As outlined in the paper, there are five key steps to create better app performance and optimal end-user experience. These include:

    • Identify and locate your end-user communities
    • Characterize your applications
    • Optimize your network
    • Distribute your applications
    • Optimize service consumption

    Access to a modern, secure and network-dense commercial data center is becoming critical for enterprises as they evolve their IT infrastructure to meet future demands. Download this white paper today to see how critical it is to optimally deliver applications and to continuously provide an excellent QoE for the end-user.

    2:00p
    At NAB, Level 3 Launches Video Cloud Services

    At the NAB (National Association of Broadcasters) show this week in Las Vegas Level 3 Communications (LVLT) launched Video Cloud Services, and partnered with Elemental Technologies to demonstrate live video streaming to a Sony 4K television.

    Broadcast and Internet Video Simplified

    Level 3 announced Video Cloud Services, a cloud-based solution that moves, stores and delivers broadcast and Internet video on a global scale. The service combines Level 3’s content delivery, video broadcast and cloud storage capabilities to create a streamlined approach to global content distribution. The services comprise a full suite of IP-based video services designed to support content delivery throughout its life cycle, from signal acquisition to delivery to viewers. Services include live Vyvx broadcast video acquisition, encoding and transcoding, global delivery via the Level 3 CDN, as well as comprehensive, self-service analytics and reporting.

    “Level 3 has collaborated with many content-driven companies, and the one recurring theme we keep seeing is that these companies want a one-stop-shop for high-quality video transfer and storage – for both traditional and online media,” said Mark Taylor, vice president of Media and IP Services at Level 3. “Level 3’s Video Cloud Services simplify every aspect of video delivery, giving our customers the performance and global reach they need to not only deliver an unparalleled video experience, but also realize cost efficiencies.”

    “Ubiquitous networks, hyper-connected devices, and the ever-increasing consumer appetite for time, space and device-shifted, on-demand content have forced media companies to revisit their business models and technology capabilities. For a media company to survive and thrive in today’s environment, agility is critical,” said Mukul Krishna, global director of Digital Media at Frost & Sullivan. “Companies need to leverage the cloud to cost effectively and efficiently manage video throughout its lifecycle and deliver it measurably and intelligently across multiple screens. Media companies that do that well will be best poised for success. At such a time, the availability of Level 3’s Video Cloud Services is extremely timely, and the industry can benefit greatly from such services.”

    More information on the services can be found at www.level3videocloud.com, as well as demonstrations at the Level 3 NAB recreational vehicle parked on the patio of the Las Vegas Convention Center’s South Hall.

    4K Ultra HD video stream

    Level 3 also announced that it has teamed up with Elemental Technologies to conduct demonstrations of the world’s first real-time 4K Ultra HD video stream in MPEG-DASH (Dynamic Adaptive Streaming over HTTP) using high-efficiency video coding (HEVC). Level 3’s content delivery network provides the performance and scalability required to transmit the 4K video, which will be encoded and packaged by Elemental. The combination of these technologies gives programmers and pay-TV operators a turnkey, scalable way to deliver 4K content simultaneously to millions of viewers across the globe.

    “Level 3 is proud to play an integral role in the first demonstration of these exciting video broadcast technologies used together,” said Mark Taylor, vice president of Media and IP Services at Level 3. “What’s more, this is a live stream over our global CDN, which shows our ability to deliver this type of Ultra HD to millions of viewers across the world.”

    A NAB 2014 booth demonstration includes live playback of a cinematic 4K short captured by Sony Pictures Entertainment’s F55 camera with final rendering on a 65-inch Sony 4K Bravia TV.

    2:54p
    QTS: Atlanta Market Is Very Attractive

    Quality Technology Services, known by the acronym QTS, has a dominant position and plenty of room to grow in Atlanta. In aggregate, the company has a whopping 1.3 million square feet of space and 100 megawatts of critical power in Atlanta alone. The company operates huge facilities downtown and in nearby Suwanee.

    The 970,000 square-foot QTS Atlanta Metro Data Center is among the largest data centers in the world. Its scale allows it to offer a range of services from wholesale colocation to cloud and managed services.

    There were some reports that the company was building a new data center in Atlanta; however, company executives say this is false. “We have the land, but with the capacity we have currently, we feel very comfortable that we will continue to grow our business,” said Dan Bennewitz, COO of sales and marketing.

    Jeff Berson, QTS’ Chief Investment Officer, noted, “In aggregate, we have 740,000 [square feet of] raised floor, 40,000 of potential raised floor, and we have just under 300,000 square feet for development.”

    The official stance is the company isn’t building; it has room to grow within its facilities, and does own land for future expansion in Atlanta and beyond.

    The company uses its scale and its expertise in compliance as its key differentiators. Its scale allows it to offer everything from wholesale space to cloud and managed services within its facilities, making QTS an all-in-one provider: wholesale and retail colocation, as well as cloud and managed services. The services are dubbed C1, C2 and C3, with C1 being wholesale, C2 colocation, and C3 cloud and managed services.

    The company continues to attract customers from outside Atlanta. For C1 (wholesale), a majority of the clients are not from Atlanta; for C2, it’s about an even mix; C3 attracts a range of customers from all over. “We have some of the largest SaaS, Internet, and social media clients as our customers,” said Bennewitz.

    While the company does offer cloud, it’s not used by a large part of its customer base. “Our cloud and managed services are aimed at enterprises that need highly compliant, secure services with predictable costs via a multi-year contract,” said Bennewitz. “Our participation in the cloud market is very tiny. Cloud and managed services are 10 percent of overall revenue.”

    In addition to scale, the company believes its other big differentiator is compliance. “We invested in our own compliance office,” said Bennewitz. “It was a big investment. We want to integrate a compliance framework across the organization. Pursuing compliance in silos doesn’t work. So now we have an integrated framework; there’s overlap, commonalities. It differentiates us and helps us.” The company is currently working on FedRAMP compliance so it can better tackle the federal space. The biggest beneficiary of FedRAMP compliance would be its Richmond, Virginia facility.

    Customer Trends

    “There’s certainly a trend toward higher densities, but every customer is different,” said Bennewitz. “We can accommodate multiple densities. The flexibility that we bring by being able to offer customers different densities also means we can offer different levels of reliability.” Beyond just Atlanta, the company says the industry will continue to be heterogeneous; those that succeed will be able to handle the hybrid environment.

    “The other area where customers are becoming more educated is improvement in latency,” said Berson. “We’ve been a recipient and a beneficiary of the growth and maturation of connectivity.”

    “We really like the Atlanta market. It’s in the top 10 fastest growing, and it’s an attractive location in terms of power, skilled labor force and cost of living,” said Bennewitz. Over 75 percent of the Fortune 1000 have a presence in the Atlanta market. QTS has succeeded in not only attracting local businesses, but also raising the profile of the Atlanta market country-wide.

    The company went public in October, and completed an expansion in Atlanta last August, adding 70,000 square feet.

    5:00p
    Storage News: Avere Systems and Cleversafe Set Up Partnership

    Object-based storage provider Cleversafe and enterprise storage provider Avere Systems announced they have entered into a strategic partnership, enabling the delivery of data storage solutions with scalable performance. The combined solution is designed to accelerate access to data and applications in cloud environments for enterprise customers in industries such as oil and gas, media and entertainment.

    The two companies will integrate the Cleversafe object-based storage solution and Avere Cloud NAS technology. By combining Avere FlashCloud™ on FXT Series Edge filers with Cleversafe’s Dispersed Storage Network (dsNet) technology, companies can address the need for accelerated NAS operations performance while efficiently scaling to handle explosive data growth. The solution significantly improves total cost of ownership (TCO) by eliminating the need to buy additional storage infrastructure to support data growth.

    “At Avere, we know that the future of data storage is the cloud and Cleversafe is a partner that can help us take advantage of the opportunity it presents,” said Ron Bianchini, president and CEO of Avere Systems. “With our joint solution, customers making the move to the private cloud now have a familiar NAS interface that requires no changes to their existing applications, as well as an architecture that will scale in both performance and capacity.”

    “We are dedicated to helping our customers build a storage infrastructure that works across all of their storage needs,” said John Morris, CEO of Cleversafe. “The combined Cleversafe and Avere Systems solution allows us to provide our customers with a cost effective and highly reliable, scalable enterprise storage system for Web-scale data demands with greater performance and faster access to their data and applications.”

    Avere demonstrated scalable NFS performance on Cleversafe’s object storage technology using SPECsfs2008, the storage industry’s most popular NAS benchmark. A three-node FXT 3800 cluster achieved 180,394 ops/sec throughput and a minimal 0.89ms overall response time (ORT) with a Cleversafe system that was configured for more than five nines of availability and could be deployed across three geographically dispersed sites.

    Both companies issued press releases. Read Cleversafe’s announcement and the announcement from Avere Systems.

