Data Center Knowledge | News and analysis for the data center industry

Wednesday, January 14th, 2015

    1:40a
    CoinTerra Defaults on Debt, Countersues C7 Data Centers

    Bitcoin mining company CoinTerra has put its operations on hold as it awaits a response from its lenders on a restructuring proposal, according to CEO Ravi Iyengar.

    The company has halted its mining and payouts to customers in the wake of a dispute with C7 Data Centers, which has sued CoinTerra for $1.4 million in unpaid colocation fees, and is seeking $4 million in additional costs and contractual payments. CoinTerra disputes the allegations and says it has filed a counterclaim against C7.

    In an interview with Data Center Knowledge, Iyengar said CoinTerra’s fate was now in its lenders’ hands. “The whole operation of the company is waiting on the note holders making a decision,” he said.

    CoinTerra is in default on its senior notes as a result of the shutoff of its equipment at C7, according to Iyengar. The loan is secured by CoinTerra’s assets, including its Bitcoin mining equipment. As a result, CoinTerra was forced to turn off its mining operations at other data centers, including its presence at CenturyLink Technology Solutions. In July, Iyengar said CoinTerra had contracted for more than 10 megawatts of data center capacity with CenturyLink.

    A CenturyLink spokeswoman declined to comment on the service provider’s relationship with CoinTerra, citing confidentiality agreements.

    CoinTerra manufactures and sells bitcoin mining hardware, and it also hosts and manages mining hardware for customers as a service. Iyengar said it has updated its customers on the status of its operations and the default on its debt. CoinTerra has not made payouts to its cloud mining customers since Dec. 19.

    “The Note Holders are evaluating their options,” CoinTerra said in its customer notice. “Until this is resolved, CoinTerra will be unable to make further payments.”

    In its lawsuit, C7 says it provisioned about $12,000 a day in electricity from Rocky Mountain Power to support CoinTerra’s cabinets, but that CoinTerra became delinquent “almost immediately” after it began service in April 2014.

    In a statement, CoinTerra said it has filed a countersuit against C7.

    “CoinTerra, Inc. disputes the allegations in the complaint filed by C7 Data Centers, Inc. in Utah State court,” the company said. “CoinTerra has recently retained local counsel to address this dispute. Moreover, CoinTerra has filed a counterclaim against C7 in federal court in the District of Utah. CoinTerra intends to vigorously prosecute its claims against C7 while defending the claims levied by C7. Yet, CoinTerra is hopeful that the parties can resolve this matter quickly.”

    The statement did not specify which of C7’s claims were in dispute. Iyengar said the company’s comment on the C7 case would be limited to its statement.

    In assessing his company’s challenges, Iyengar said CoinTerra’s efforts have been undermined by “irrational” mining practices that have made it difficult to earn new bitcoins. The network is designed to adjust the difficulty of earning mining rewards based on the level of activity. As the price of bitcoin rises, the computing power on the network (known as hashrate) should increase. As the price declines, the hashrate should eventually moderate — at least in theory.

    Iyengar said this has not held true. The price of Bitcoin has been in steady decline since hitting a peak of $1,100 in late 2013, dipping to $225 this week. But as the price has moved lower, the network hashrate has continued to rise, reaching an all-time high of 358 petahashes last week.
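
    For context, the “self-correcting model” Iyengar refers to is Bitcoin’s difficulty retargeting rule: every 2,016 blocks, the network rescales mining difficulty so that blocks keep arriving roughly every ten minutes no matter how much hardware is hashing. A minimal Python sketch of that rule:

        # Sketch of Bitcoin's difficulty retargeting. Every 2016 blocks the
        # network compares how long those blocks actually took against the
        # two-week target and scales difficulty proportionally, clamped to
        # a 4x change in either direction.
        TARGET_SPAN = 2016 * 10 * 60  # 2016 blocks at ~10 minutes each, in seconds

        def retarget(old_difficulty, actual_span_seconds):
            span = min(max(actual_span_seconds, TARGET_SPAN / 4), TARGET_SPAN * 4)
            return old_difficulty * TARGET_SPAN / span

        # If hashrate doubles, 2016 blocks arrive in about half the target
        # time, and difficulty roughly doubles at the next adjustment:
        print(retarget(1.0, TARGET_SPAN / 2))  # -> ~2.0

    As long as the network hashrate keeps climbing, difficulty keeps ratcheting up — exactly the dynamic squeezing miners as the price falls.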

    “Somehow, the self-correcting model (for the bitcoin network) has not performed as expected,” said Iyengar. “The difficulty does not go down, but the price does not rise. One or the other has to give.

    “There is irrational mining going on in some parts of the world that defies ROI (return on investment),” he added. “People are mining when they are not making a profit. There are other factors in place. Perhaps it’s unfair access to power. As a result, the models we all assumed have collapsed.”

    1:00p
    What the Bitcoin Shakeout Means for Data Center Providers

    There’s a major shakeout underway in bitcoin cloud mining: some firms are shutting down or halting payouts to customers, while others are shifting their business models.

    The fallout is being felt by data center operators who leased space to large mining operations, prompting one provider to sue a bitcoin customer for millions of dollars in unpaid hosting costs.

    The turmoil is driven by an extended slump in the value of bitcoin and other virtual currencies. After soaring as high as $1,100 in late 2013, the price of bitcoin has plunged, hitting a low of about $225 on Tuesday.

    This decline has wreaked havoc with cloud mining operations, which lease processing power to users. Many of these companies operate their own warehouse-style computing facilities, but some lease data center space from wholesale providers.

    On Monday, C7 Data Centers filed suit against mining company CoinTerra, which has missed $1.4 million in payments. The suit seeks up to $5.4 million in damages, citing C7’s costs for provisioning power and the balance of CoinTerra’s contract. CoinTerra disputes the charges and says it has filed a counterclaim.

    CoinTerra CEO Ravi Iyengar told us the company has defaulted on its debt after C7 shut down CoinTerra’s infrastructure at its data centers. As a result, the bitcoin miner has halted operations and stopped payouts to its customers.

    CoinTerra is also a major customer of CenturyLink, which has leased more than 10 megawatts of data center capacity to the mining firm. Its servers at CenturyLink data centers have also been shut down.

    The Downside of Speculative Mining

    Bitcoin mining has always been a speculative business, offering a way to mint digital money with high-powered hardware by processing transactions, with financial rewards paid out in virtual currency (hence the “mining” nomenclature). The network is based on a public ledger known as the blockchain, with each transaction verified using cryptography.
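
    To make the “mining” nomenclature concrete: miners repeatedly hash a candidate block header, varying a nonce, until the double SHA-256 digest falls below a network-set target. A toy Python sketch of that search (real miners use specialized ASIC hardware to perform it trillions of times per second):

        import hashlib

        def mine(header, difficulty_bits):
            # Toy proof-of-work: find a nonce whose double-SHA256 digest of
            # header+nonce has at least `difficulty_bits` leading zero bits.
            target = 1 << (256 - difficulty_bits)
            nonce = 0
            while True:
                payload = header + nonce.to_bytes(8, "little")
                digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
                if int.from_bytes(digest, "big") < target:
                    return nonce
                nonce += 1

        # ~65,000 attempts on average at 16 bits of difficulty
        print(mine(b"example block header", 16))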

    The returns on mining fluctuate with the price of bitcoin and network activity. As the price of bitcoin has plunged, even industrial-scale mining operations are unable to cover their costs. On Monday, the second-largest bitcoin mining pool said it was temporarily suspending operations, saying mining had become unprofitable.
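
    The economics behind that decision are easy to sketch: a miner’s expected daily revenue is its share of network hashrate times the daily block subsidy, converted at the bitcoin price, and that revenue has to cover electricity before anything else. A back-of-the-envelope calculation; the price and network hashrate are those reported in this story, while the farm size, hardware efficiency, and power rate are illustrative assumptions:

        # Network figures from this story; farm figures are assumptions.
        btc_price = 225.0           # USD, this week's low
        network_hashrate = 358e15   # 358 petahashes/s, last week's record
        block_reward = 25.0         # BTC per block in early 2015
        blocks_per_day = 144        # one block per ~10 minutes

        farm_hashrate = 1e15        # assume a 1 PH/s operation
        farm_power_kw = 1500.0      # assume ~1.5 J/GH, i.e. 1.5 MW of load
        power_rate = 0.06           # assumed industrial rate, USD per kWh

        revenue = (farm_hashrate / network_hashrate) * blocks_per_day * block_reward * btc_price
        power_cost = farm_power_kw * 24 * power_rate
        print(f"daily revenue ${revenue:,.0f} vs. power bill ${power_cost:,.0f}")
        # -> roughly $2,262 in revenue against $2,160 for electricity alone,
        #    before hosting, hardware amortization, and staff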

    Another mining company that has leased significant data center space is CloudHashing, now owned by Peernova. Customers have complained that the company is mining fewer coins and that payouts have become erratic.

    “CloudHashing is very much operational,” said Emmanuel Abiodun, president of Peernova. “We have simply pointed our mining machines to a larger mining pool to increase the block discovery frequency which is a good thing for all customers. The growth of the bitcoin network made that decision absolutely necessary. Customers will now be paid on a weekly schedule.”

    Shifting Focus Amid Mining Challenges

    Peernova, like other companies in the ecosystem, has been changing its strategy. In December the company said it had raised $8.6 million in new funding and would use the money to accelerate its development of blockchain-based software products for enterprises. “A large amount of our efforts this year will be delivery of these applications,” Abiodun said.

    Another firm that has shifted its business focus is GAW Miners, which made a major push into cloud mining last summer. The company recently launched its own virtual currency called Paycoin, and some customers who continue to mine using GAW’s cloud-based “hashlets” say their maintenance fees can exceed the returns from mining.

    The turmoil has been felt most acutely among smaller players in cloud mining. Over the past month, a growing number of companies have either halted their cloud mining operations or curtailed payments to customers, including ZeusHash and PB Mining. In perhaps the most bizarre development, cloud mining service Hashie suspended operations and temporarily replaced its web site with an “alternate reality game.”

    For Data Centers, Credit Risk Comes Into Focus

    The chaos has thrown the risks of the bitcoin sector into stark relief. As we noted in July, the sector presents a challenge for the data center industry. As demand for bitcoin mining capacity soared in early 2014, some providers leased significant amounts of space and power to bitcoin specialists, while others passed on these deals, wary of the density requirements, economics, and potential credit risk.

    “Some investors don’t want to touch the Bitcoin industry,” data center industry veteran Mark MacAuley, a managing director at RampRate, said at the time. “The risk is outside the profile that’s palatable to some investors.”

    Credit risk is not a new issue for the data center industry, which was hard-hit by the dot-com bust and the collapse of speculative startups. The wave of dot-com bankruptcies rippled through the colocation sector, leading to the collapse of major players like Exodus Communications, AboveNet, and MCI WorldCom.

    As the industry recovered, data center providers tightened their credit standards and limited the scope of their construction projects, seeking to keep the supply of space in line with customer demand. After a lengthy period with few customer default problems, the industry’s appetite for risk has been tested by the bitcoin sector.


    4:00p
    Hortonworks, Talend Strengthen Stream Data Analytics

    Data integration solutions provider Talend has strengthened its partnership with Hadoop distribution provider Hortonworks. Support for Apache Storm, the latest extension to the Apache Hadoop ecosystem, has been added in Talend 5.6.

    Talend and Hortonworks engineers jointly integrated the Hortonworks Data Platform and Talend in order to make it easier to run data integration workloads natively within HDP.

    Apache Storm boosts the ability to perform distributed, real-time mining of high volumes of stream data. It was originally developed at Twitter to help the company handle hundreds of millions of tweets per day. If this (as the pundits have predicted) is the year of the Internet of Things, Storm is likely to be a big help in handling all the data generated by connected devices.
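
    Storm itself runs on the JVM, and Talend generates the integration code, but the core abstraction is simple: a “spout” emits an unbounded stream of tuples, and “bolts” filter, transform, or aggregate them as they arrive, rather than batch-processing data at rest. A minimal Python sketch of that spout/bolt pattern (an illustration of the concept, not Storm’s actual API):

        import random
        import time
        from collections import Counter

        def sensor_spout():
            # Stands in for a Storm spout: an endless stream of readings.
            while True:
                yield {"device": f"sensor-{random.randint(1, 5)}",
                       "temp_c": round(random.gauss(40.0, 5.0), 1),
                       "ts": time.time()}

        def alert_bolt(stream, threshold=50.0):
            # Stands in for a Storm bolt: counts and filters tuples on the fly.
            seen = Counter()
            for reading in stream:
                seen[reading["device"]] += 1
                if reading["temp_c"] > threshold:
                    yield {"alert": reading,
                           "readings_so_far": seen[reading["device"]]}

        for i, alert in enumerate(alert_bolt(sensor_spout())):
            print(alert)
            if i >= 4:  # demo only: stop after five alerts
                break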

    “The introduction of Storm addresses the rise of the Internet of Things data types that are coming off of machines and sensors,” said John Kreisa, vice president of strategic marketing at Hortonworks. “These are among the fastest growing data types across a range of industries like manufacturing and telecommunications, and Talend users will be able to more easily integrate these data types into their big data platforms like the Hortonworks Data Platform.”

    The deepened partnership centers on ingesting stream data. Besides IoT, other stream sources include financial transactions and web click-streams.

    Properly leveraging data in real time can be a huge competitive advantage. “There are pockets of smaller companies who are doing some disruptive things to compete with first adopters,” said Jack Norris, chief marketing officer for MapR. MapR, one of Hortonworks’ main competitors, was an early supporter of the Storm project, which was packaged into the MapR Hadoop distribution in December. Norris said the ability to automatically make adjustments on the fly and act immediately on incoming data is a huge advantage.

    Talend and Hortonworks have been working together since 2011, when Hortonworks was founded, and Talend is a Hortonworks Certified Technology Partner. “We’ve been working together ever since and are aligned around making it easier for customers to take advantage of big data,” said Kreisa. “With this new integration there is even more potential for customers to succeed with their big data projects.”

    “With faster data integration solutions such as Talend 5.6 and Apache Storm, users can run their analytics more quickly for real-time decision making, helping them achieve competitive advantages that they just weren’t able to do before,” said Ashley Stirrup, chief marketing officer, Talend.

    Talend, which raised $40 million in 2013, tackles the integration problems that come with big data efforts. Its software generates native Hadoop code and runs data transformations directly inside Hadoop.

    Hortonworks held an initial public offering (IPO) in December, making a strong debut. The company recently acquired XA Secure, a developer of security and governance tools, in a bid to strengthen the security of the platform.

    4:30p
    Overcoming Big Data’s Security Challenges with Strong Identity Management

    Matthew brings over 10 years of high technology sales, marketing and management experience to SSH Communications Security and is responsible for all revenue-generating operations.

    Big data is no longer a pipe dream. Organizations across all industries are sifting actionable insights from network data that grows faster each day. Ninety percent of the world’s data has been produced in the last two years, and hidden in all that data are insights on user behavior and market trends that could never have been gained otherwise. Even the White House has gotten in on the game, recently investing $200 million in big data research projects.

    As big data becomes more user-friendly, concerns arise around securing access to sensitive data sets and other areas of the network. These concerns must be addressed if organizations want to reap big data’s benefits without risking data breach.

    Securing M2M Identities

    To run big data analytics, large data sets are split up into more manageable portions and are then processed separately across a Hadoop cluster. They are then recombined to produce the desired analytics. The process is highly automated and involves a great deal of machine-to-machine (M2M) interaction across the cluster.
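
    That split-process-recombine flow is the MapReduce pattern behind classic Hadoop jobs. A minimal single-process Python sketch of the same idea (Hadoop distributes each phase across the cluster’s machines):

        from collections import defaultdict
        from itertools import chain

        documents = ["big data is here", "big data is big"]

        # Map: each split of the input is processed independently
        # (on separate cluster nodes in real Hadoop).
        mapped = chain.from_iterable(
            ((word, 1) for word in doc.split()) for doc in documents)

        # Shuffle: group the intermediate pairs by key.
        groups = defaultdict(list)
        for word, count in mapped:
            groups[word].append(count)

        # Reduce: recombine each group into the final result.
        print({word: sum(counts) for word, counts in groups.items()})
        # -> {'big': 3, 'data': 2, 'is': 2, 'here': 1}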

    Hadoop infrastructure contains several levels of authorization: access to the Hadoop cluster, inter-cluster communications and cluster access to the data sources. Many of these authorizations are based on Secure Shell keys, used within Hadoop because they are considered secure and have good support for automated M2M communication.

    Securing the identities that enable access into and across the big data environment is a critical priority, and it creates a significant challenge for those seeking to use big data analytics platforms like Hadoop. Some of the issues are straightforward:

    • Who sets up the authorizations to run big data analytics?
    • What happens to these authorizations when the person who set them up leaves the organization?
    • Is the level of access provided by the authorizations based on “need to know” security principles?
    • Who has access to the authorizations?
    • How are these authorizations managed?

    Big data is not the only technology dealing with such questions. They are becoming widespread across data centers as more and more business processes are automated. Over 80 percent of data center network communications are automated M2M transactions, with less than 20 percent associated with interactive user-to-machine (i.e., human) accounts. The emergence of big data as the next killer app raises the urgency of managing machine-based identities in a comprehensive way.

    The Risk Curve of Inaction

    High-profile data breaches involving the misuse of machine-based credentials underscore the real risk of ignoring M2M identities. While enterprises have made great progress in managing end-user identities, they have largely neglected to treat machine-based identities with the same level of care. The result is a widespread attack vector across the IT environment.

    Bringing centralized identity and access management to (potentially) millions of machine-based identities means implementing change on a running system. Migrating an environment without disrupting the business processes in flight is a complicated undertaking, so it is no wonder enterprises have been hesitant to take it on.

    Poor Key Management

    The current state of key management is often abysmal. To manage the authentication keys used to secure M2M communications, many system administrators use spreadsheets or write homegrown scripts for controlling distribution, monitoring and taking inventory of deployed keys. This approach allows many keys to fall through the cracks. There might not be regular scanning in place either, allowing unauthorized back doors to be added without the organization knowing.
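
    To give a sense of what even a first pass at inventory involves, here is a minimal sketch that lists the public keys granting access on a single host. The home-directory layout is an assumption, and a real deployment would need to run something like this fleet-wide (and handle key options, root accounts, and host keys) before any meaningful reconciliation could begin:

        import glob
        import os

        def inventory_authorized_keys(pattern="/home/*/.ssh/authorized_keys"):
            # Naive inventory of SSH public keys that grant access on this host.
            findings = []
            for path in glob.glob(pattern):
                account = path.split(os.sep)[2]  # /home/<account>/.ssh/...
                with open(path) as f:
                    for line in f:
                        line = line.strip()
                        if not line or line.startswith("#"):
                            continue
                        fields = line.split()
                        key_type = fields[0]
                        comment = fields[2] if len(fields) > 2 else "(no comment)"
                        findings.append((account, key_type, comment))
            return findings

        for account, key_type, comment in inventory_authorized_keys():
            print(f"{account}: {key_type} key, comment: {comment}")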

    A lack of centralized control over keys undermines efforts to stay compliant. The financial industry, for example, is bound by regulations requiring strict control over who has access to sensitive data. The recently strengthened PCI standards demand that any entity that accepts payment cards—banks, retailers, restaurants and healthcare providers alike—do likewise. Since these industries are making swift and decisive moves to bolster their big data strategies and capitalize on the wave of user-driven data, they are increasingly vulnerable to compliance failures and regulatory sanctions.

    Security Steps

    Organizations must acknowledge and confront these risks. The following best practices will get them started on the right foot:

    • IT staff rarely have visibility into where identities are stored, what information those identities are permitted to access and what business processes they’re supporting. Therefore, the first step is passive, non-invasive discovery.
    • The environment must be monitored to determine which identities are being actively used and which are not. Fortunately, in many enterprises, unused—and therefore unneeded—identities often comprise the vast majority. Once these unused identities are located and removed, the scope of the overall effort is significantly reduced.
    • The next step is centralized control over adding, changing and removing machine identities. This enables policy-based governance over how the identities are used, ensures no more unmanaged identities can be added and provides verifiable proof of compliance.
    • With visibility and control established, identities that are needed but are in violation of policy can be updated without disrupting ongoing business processes. Under central management, the privilege level assigned to that identity can be remediated.

    A Secure Strategy

    Big data is here to stay, along with fresh risks in data access control. M2M identity management is essential, but traditional manual IAM practices are inefficient and downright risky. Taking a complete inventory of all keys and following other best practices will save time and money while improving security and compliance. Because big data has increased access to sensitive information, organizations must take proactive measures to roll out a comprehensive and consistent identity and access management strategy.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:45p
    Treasure Data Raises $15M for Subscription Analytics Service

    Treasure Data has raised $15 million in a Series B funding round led by Scale Venture Partners, with participation from, among others, AME Cloud Ventures, a venture fund led by Yahoo! founder Jerry Yang. The startup provides subscription analytics as a service to customers who don’t want to invest in their own analytics software and hardware.

    Treasure Data bills itself as an alternative to Hadoop-based platforms or services. It manages and analyzes high-volume, high-velocity, semi-structured data generated by mobile applications, websites, applications running in the cloud, and network-connected devices (the Internet of Things).

    The company says its platform currently receives 400,000 data records per second. A single Treasure Data customer uploads approximately 12 billion data records per day.

    Venture capital continues to make its way into companies that offer solutions for storing and analyzing large amounts of data. This week alone, Basho announced a $25 million round, and MongoDB said it landed $80 million. Both are NoSQL database companies whose products enable widely distributed, highly scalable databases that store enormous amounts of data.

    A lot of money is going into technologies for ingesting, storing, and processing data generated by the Internet of Things, which has numerous implications for IT departments.

    Treasure Data’s analytics-as-a-service offering includes capabilities for data collection, storage, and SQL analysis. It enables companies to collect data in “near real-time,” then organizes and stores it for immediate use. Users can view the data in tables, analyze it with SQL, and view, store, or export query results.
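
    As a sketch of what that workflow looks like from the customer side, here is a query submitted through a Python client such as Treasure Data’s open source td-client package; the API key, database, and table names are hypothetical placeholders:

        import tdclient  # pip install td-client

        # Hypothetical example: count events per device from collected data.
        with tdclient.Client("YOUR_API_KEY") as client:
            job = client.query(
                "demo_db",
                "SELECT device_id, COUNT(1) AS events "
                "FROM sensor_events GROUP BY device_id",
                type="hive")
            job.wait()              # queries run asynchronously as jobs
            for row in job.result():
                print(row)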

    The startup claims more than 100 customers, including credit score company Equifax and smartwatch maker Pebble. Another customer, Pioneer, uses it for car telematics.

    “Companies have found the scale and velocity of IoT and big data to be very challenging,” Hiro Yoshikawa, founder and CEO of Treasure Data, said in a statement. “As a result, they spend far too much time and money building data infrastructures that serve only to distract them from their primary business objectives. We give our customers an easier way to manage massive data volumes while retaining flexible access to the raw data.”

    Andy Vitus, partner with Scale Venture Partners, will join the Treasure Data board of directors. All current board members, or their respective funds, also participated in the financing round.

    7:36p
    Google’s Stackdriver-Based Cloud Monitoring Now in Beta

    Google has announced beta availability of Google Cloud Monitoring, the fruit of the company’s acquisition of Stackdriver eight months ago. At its November cloud event, Google noted that Stackdriver had become the backbone of its cloud monitoring system.

    Stackdriver gives visibility into performance, capacity, and uptime of several Google cloud services, including App Engine, Compute Engine, and Cloud SQL. These are important capabilities for cloud users, especially those with complex environments and those using the DevOps model.

    The monitoring console provides a high-level overview of the health of the user’s environment, along with other key metrics. You can configure endpoint checks that send notifications when servers, APIs, or other resources become unavailable to end users. The service has native integration with common open source services such as MySQL, Nginx, Apache, MongoDB, and RabbitMQ, and a Cassandra plugin offers insight into the performance of the distributed key-value store.
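
    The endpoint-check idea itself is straightforward. The sketch below is generic Python, not the Cloud Monitoring API: probe a URL, then flag it when it is unreachable or slow, which is the kind of signal the service turns into notifications:

        import time
        import urllib.request

        def check_endpoint(url, timeout=5.0, slow_threshold=1.0):
            # Generic uptime probe: report DOWN/SLOW/OK for one endpoint.
            start = time.monotonic()
            try:
                with urllib.request.urlopen(url, timeout=timeout) as resp:
                    elapsed = time.monotonic() - start
                    if elapsed > slow_threshold:
                        return f"SLOW: {url} answered {resp.status} in {elapsed:.2f}s"
                    return f"OK: {url} answered {resp.status} in {elapsed:.2f}s"
            except OSError as exc:
                return f"DOWN: {url} unreachable ({exc})"

        print(check_endpoint("https://example.com/healthz"))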

    A pair of VMware alumni founded Boston-based Stackdriver. Prior to the acquisition, the company was primarily known for monitoring workloads on Amazon Web Services — Google’s biggest competitor in cloud services. Development has focused on Google’s platform since the acquisition, but the service still supports other clouds, such as Rackspace and AWS, and recently added support for AWS Kinesis.

    “We announced Stackdriver’s initial Google Cloud Platform integration at Google I/O in June 2014 and made the service available to a limited set of alpha users,” wrote Dan Belcher, Stackdriver co-founder and Google product manager on the Google blog. “Since then, the team has been working to make operations easier for Google Cloud Platform and Amazon Web Services customers, and hundreds of companies are now using the service for that purpose.”

    Similar moves by other cloud providers include Microsoft’s acquisition of GreenButton in May, which provides a dashboard for monitoring cloud applications. Several companies offer AWS cloud monitoring in addition to Amazon’s homegrown capabilities; firms like Cloudyn got their start monitoring AWS and later added support for Google’s cloud.

    A recent Piper Jaffray survey of 112 CIOs saw Google fall slightly out of favor, with only 7 percent choosing it as their preferred cloud provider for 2015, compared to 12 percent last year. However, Google was one of four providers CIOs said they expected to buy more services from. The drop is odd, perhaps due to a small sample size or the wrong audience: Google Cloud’s strength is largely with developers, a group whose power within organizations grows alongside DevOps adoption.

    8:26p
    Grand Ming Tops Out 15-Story Data Center Tower in Hong Kong

    Grand Ming Group has topped out a 15-story data center tower in Kwai Chung, Hong Kong. This is the second high rise the Hong Kong construction company has developed as a data center.

    The Asia Pacific region has some of the world’s fastest-growing data center markets, and Hong Kong is one of them. Many companies that use data center services are located in business centers like Hong Kong, Shanghai, and Singapore, and those hubs also serve customers in other parts of the region.

    Some of the most recent players to establish a data center presence in Hong Kong include IBM SoftLayer, which launched there in April; AliCloud, the cloud services arm of Chinese Internet giant Alibaba Group, which announced a Hong Kong data center in May; and LeaseWeb, which took space at a Pacnet facility in Hong Kong in December.

    All three are Infrastructure-as-a-Service providers, which is telling about the nature of demand in Hong Kong and the broader Asia Pacific region.

    Grand Ming’s new Hong Kong data center, called iTech Tower 2, has about 100,000 square feet of total space and can support up to 1,400 server racks. At full utilization, it will provide 6 megawatts of critical power.
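
    A quick check on those figures puts the facility’s average density in typical colocation territory:

        # Reported figures for iTech Tower 2.
        critical_power_kw = 6000   # 6 MW at full utilization
        racks = 1400
        gross_sqft = 100000

        print(f"{critical_power_kw / racks:.1f} kW per rack on average")              # ~4.3 kW
        print(f"{critical_power_kw * 1000 / gross_sqft:.0f} W per gross square foot")  # ~60 W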

    The company estimates its development costs to be about $88 million.

    In a statement, Grand Ming Chairman Chan Hung Ming thanked a data center facilitation unit in the government CIO’s office for help in getting proper approvals for the project. “The process in getting through all governmental approval is not simple because of such unprecedented case in using high-tier data center use on an industrial land development,” he said.

    Grand Ming finished the first chunk of data center space in its first data center building, called iTech Tower 1, in 2008. The 10-story building is located about two miles away from iTech Tower 2, in Tsuen Wan.

    9:30p
    OpenStack Network Startup PlumGrid Becomes Foundation Sponsor

    PlumGrid, a provider of virtual network infrastructure for OpenStack, has become a corporate sponsor of the OpenStack Foundation, which oversees the popular open source cloud software project. The company has long been a contributor to the project, but now its support is official.

    OpenStack is a widely supported and adopted package of software components for building and managing public and private clouds. PlumGrid has contributed code and reviews to the OpenStack network project Neutron, the image service Glance, the dashboard Horizon, and Nova, the compute component, among others.

    PlumGrid ONS for OpenStack is a software suite that enables scalable and secure virtual OpenStack network infrastructure. Service providers and enterprises deploy it to enable scale-out architecture and secure multi-tenancy.

    “We’re seeing enterprises and service providers move to [OpenStack] because they don’t want to be locked in,” said PlumGrid vice president of marketing Wendy Cartee last October.

    The OpenStack network software works with two popular enterprise distributions of the cloud software: those from Red Hat and Piston Cloud Computing.

    OpenStack continues to receive diverse support from companies ranging from startups like PlumGrid to giants like Cisco and Dell. Security company Symantec recently joined the foundation as a gold member.

    Started by Rackspace and NASA in 2010, OpenStack has matured substantially and will celebrate its fifth birthday with several mainstream production deployments.

    “Formalizing our support for the OpenStack Foundation is the next logical step in our commitment to the project,” Awais Nemat, co-founder and CEO of PlumGrid said in a statement. “Becoming a corporate sponsor demonstrates our commitment to helping harden and extend Neutron, while also offering our customers an ever-expanding knowledge base in delivering networking technology for high-performance OpenStack-powered cloud deployments.”

    Jonathan Bryce, executive director of the foundation, said corporate sponsors like PlumGrid help the open source project expand into the world of high availability applications. “The partnerships, technologies, and engineering expertise these sponsors bring to our community are critical to moving the project forward,” he said in a statement.

