Data Center Knowledge | News and analysis for the data center industry

Thursday, November 6th, 2014

    5:01a
    Shippable Raises $8M to Help Enterprises Use Docker Containers

    Shippable has raised an $8 million Series A, coming on the heels of an initial $2 million seed round in December.

    The company believes it has found a niche in the Docker application container space: the shipping itself. Its platform is an on-ramp to containers for enterprises, helping them simplify their software development workflows by providing continuous delivery in containerized form. The new round will go toward further development of the offering and toward marketing.

    Docker containers package an application and its dependencies into a single portable unit, so developers no longer have to worry about the nuances of infrastructure configuration, which translates into a quicker development cycle.
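
    For readers new to the concept, here is a minimal sketch (using the Docker SDK for Python, not Shippable’s tooling) of what that portability looks like in practice; the image and command are illustrative assumptions.

    ```python
    # Minimal sketch using the Docker SDK for Python (pip install docker).
    # Because the image carries the application and its dependencies, this
    # snippet behaves the same on a laptop, a CI worker, or a cloud instance.
    import docker

    client = docker.from_env()  # connect to the local Docker daemon

    output = client.containers.run(
        "python:3-slim",                                      # illustrative image
        ["python", "-c", "print('hello from a container')"],  # illustrative command
        remove=True,                                          # clean up after the run
    )
    print(output.decode().strip())
    ```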

    Shippable can abstract containerization itself and provide continuous delivery. It automates staging environments and helps teams identify bugs instantly. The solution was natively built on Docker and runs on Amazon Web Services.

    Continuous delivery is a popular software development practice that automates much of the testing, integration, and deployment process. Shippable’s idea is to provide an “ideal lab” that reduces an enterprise’s test and development lab footprint.

    The company said that traditional continuous integration tools only help automate unit tests. Shippable brings in automated functional testing with instant, on-demand test and development environments that are containerized replicas of production.
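
    As a rough sketch of that idea (not Shippable’s actual API), the snippet below spins up a disposable container standing in for a production replica, runs a simple functional check against it, and tears it down; the "myapp:latest" image name, port, and health endpoint are assumptions for illustration.

    ```python
    # Sketch of an on-demand, containerized test environment: start a
    # disposable copy of the application image, probe it, then discard it.
    import time
    import urllib.request

    import docker

    client = docker.from_env()

    # "myapp:latest" is a hypothetical application image listening on port 8080.
    container = client.containers.run(
        "myapp:latest",
        detach=True,
        ports={"8080/tcp": 8080},   # publish the app's port on the test host
    )
    try:
        time.sleep(3)               # crude wait for the service to start
        with urllib.request.urlopen("http://localhost:8080/health") as resp:
            assert resp.status == 200, "functional smoke test failed"
        print("smoke test passed")
    finally:
        container.stop()            # tear the environment down again
        container.remove()
    ```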

    Customers have been responding to this continuous integration and delivery message. After launching version 2.0 in September, the number of users jumped from 8,000 to 25,000, and the number of businesses giving Shippable a spin rose from 400 to 1,600.

    The round was raised preemptively, said co-founder and CEO Avi Cavale. A large portion of the December round remains, but the company is making a big engineering push ahead of exiting beta.

    Shippable is going after hybrid infrastructure needs, addressing enterprise pain points like governance and control over “shadow IT.” It is focused squarely on enterprises and helping them remove the roadblocks they have in their workflows.

    Cavale said that smoothing the entire delivery process is generally trickier inside the enterprise. “The deployment inside an enterprise behind the firewall is a lot more challenging,” he said. “Enterprises are using a lot of legacy products, which we had to take into account and build compatibility and integrations for. Using Docker helps quite a bit.”

    Enterprises are increasingly looking to DevOps in order to bring in more agile development. “DevOps is a culture, not a tool,” said Cavale. “We don’t go to them and say ‘we can help you do DevOps.’ We talk about how long it takes to ship a single change.” The major selling point behind the platform is making that “shipment” fast, easy, and secure.

    Customer growth is attributable to general interest in containers. The cloud giants are championing containers as part of their service offerings (most recently Google, which has used containers as an integral part of running its own infrastructure for years), and now enterprises are figuring out how to leverage the much-hyped technology effectively in practice.

    The company isn’t building its own source control system, so it can partner with the likes of GitHub, and Shippable can push into container hosting platforms. In terms of service providers, Cavale said they were a bit of a lagging indicator. “They tend to come in after enterprises,” he said.

    4:30p
    Undocumented Changes May Result in a Security Breach

    Michael Fimin, CEO and co-founder of Netwrix, is an accomplished expert in information security.

    The growing number of data leaks and security violations, such as the Heartbleed vulnerability and the Target, Home Depot, and eBay breaches, has shown how vulnerable IT infrastructures are and further highlights that companies of all sizes need to monitor changes in order to maintain stringent control over their IT.

    For a better understanding of the current situation within IT departments, Netwrix conducted its 2014 State of IT Changes survey, asking IT professionals whether they document changes made to their IT systems and how those changes affect security and business continuity.

    Worrying facts about system changes

    When talking about monitoring changes to IT systems, the first question is whether IT departments ensure that all changes are documented as they are made. The second is whether changes are monitored on a daily basis. Clearly, if you are unaware of what is happening in your IT infrastructure, you cannot respond quickly to the undesirable effects changes create, which may result in data leaks or interruptions to business continuity.

    Despite the fact that the majority of IT professionals understand that infrastructure must be continuously monitored, more than half of them still make undocumented changes to their IT systems. Moreover, 40 percent of organizations don’t even have formal IT change management controls in place.

    When it comes to mission-critical systems, a meaningful strategy for monitoring data access and user activity should be a top priority. However, very few organizations continuously audit their IT infrastructures to strengthen security and ensure business continuity. The survey found that 65 percent of IT professionals have made changes that caused services to stop, and such practices are commonplace even in large enterprises.

    How to secure your data

    Without a system that helps you easily identify any change made to system configurations, there is a risk that your organization will show up in the breaking news as yet another company that has suffered a massive security breach.

    Given that the majority of companies admit they lack adequate measures to prevent security breaches, below are key practices that will help strengthen the security of your IT environment and protect sensitive data.

    Monitor user accounts regularly. This includes controlling user permissions, the creation and deletion of user accounts, and continuous auditing of user activity. Where staff turnover is significant or employee permissions change frequently, the risk that someone ends up with redundant access rights grows.

    According to the Verizon 2014 Data Breach Investigations Report, 88 percent of security breaches result from privilege misuse. Change auditing of the IT infrastructure provides daily and on-demand reports as well as real-time alerts that help ensure permissions are adequate and access to sensitive data is limited to the people who have a business need for it.

    Keep your employees informed that their activity is being tracked. There is no need to hide the fact that you monitor user activity in order to secure sensitive data. Do the opposite: share anonymized reports with your employees and make sure they are aware of their responsibility to follow the company’s security policy.

    Detect breaches early on. Unfortunately, there is no secret remedy that prevents security violations from happening, which is why it is important to be as proactive as possible. Consider deploying a solution that notifies you when suspicious activity shows up. This decreases discovery time and provides the opportunity to take the necessary precautions before sensitive information is compromised on a large scale.
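
    As a toy illustration of that kind of change detection (a sketch only, not a description of Netwrix’s product), the script below records a hash baseline of files in a watched directory and flags anything that later differs; the paths are assumptions.

    ```python
    # Toy change-detection sketch: hash watched files, compare against a
    # saved baseline, and alert on anything new, modified, or removed.
    import hashlib
    import json
    import os

    BASELINE_FILE = "baseline.json"   # hypothetical location for the baseline
    WATCHED_DIR = "/etc"              # hypothetical directory to audit

    def snapshot(root):
        """Return {path: sha256 hex digest} for every readable file under root."""
        hashes = {}
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    with open(path, "rb") as f:
                        hashes[path] = hashlib.sha256(f.read()).hexdigest()
                except OSError:
                    continue          # skip unreadable files
        return hashes

    current = snapshot(WATCHED_DIR)

    if os.path.exists(BASELINE_FILE):
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
        changed = [p for p, h in current.items() if baseline.get(p) != h]  # new or modified
        removed = [p for p in baseline if p not in current]
        for path in changed + removed:
            print(f"ALERT: undocumented change detected: {path}")

    with open(BASELINE_FILE, "w") as f:
        json.dump(current, f)         # record the new baseline for the next run
    ```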

    Security breaches are almost inevitable and their frequency is growing. For this reason, organizations of all sizes need to reconsider their security policy. There are a number of solutions that will help minimize the consequences of security violations. The goal today is to ensure complete visibility across your entire IT infrastructure, to know who did what, when and where, and to track all changes in order to avoid malicious activity.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:27p
    Fire at Bitcoin Mine Destroys Equipment

    Reports of a fire in a warehouse-style facility in Thailand have raised tough questions about the wisdom of hosting expensive bitcoin mining equipment in low-cost warehouse facilities.

    Reports in the foreign press and on social networks suggest the fire last month may have destroyed hundreds of bitcoin mining rigs, with estimates of their value running into the hundreds of thousands and even millions of dollars. The fire illustrates the risks of operating high-density computing gear in low-tech environments and underscores the value of data centers that offer advanced electrical infrastructure and fire suppression systems.

    The fire in a Bangkok suburb reportedly destroyed three buildings that housed Bitcoin mining hardware. The mining operation was using new rigs from Spondoolies Tech, an Israeli firm specializing in custom ASICs (Application Specific Integrated Circuits) for the Bitcoin market.

    In postings on BitcoinTalk, officials from Spondoolies Tech said the cause of the fire is unknown, no one was hurt and that they believe the equipment was uninsured. The company said the “buildup was definitely not according to U.S. electric code.” Some commentators on social media speculated that electrical and network cabling may have been an issue, and photos of the facility show tangles of cabling.

    The bitcoin network infrastructure is split between data centers and no-frills hashing centers featuring high-density hardware and low-reliability power infrastructure, often housed in former warehouses. Many bitcoin entrepreneurs focus on building high-powered infrastructure at the cheapest price point possible. As industrial mining operations scale up, they are optimized for rapid changes in hardware and economics.

    But there are tradeoffs that accompany this single-minded focus on cost. Data center veteran Mark MacAuley has been publicly warning about the potential for this type of incident in low-budget mining facilities. “The nuances between a mining facility and a data center are more obvious than I thought,” MacAuley tweeted in September. “A data center cares if there is a fire, for example.” MacAuley warned that the design of some bitcoin mines created the potential for fires.

    Most traditional data centers feature fire suppression equipment. These typically involve advanced systems for detecting heat and smoke, including sensors using VESDA (Very Early Smoke Detection Apparatus). Early detection can allow data center operators to quickly extinguish a fire or smoke condition before it damages entire racks and rows of expensive equipment.

    When a fire occurs, data centers use several methodologies to extinguish it. These include “pre-action” sprinkler systems, in which water fills the sprinkler pipes only upon an alarm, reducing the potential for leaks in overhead pipes to cause water damage. Some data centers forgo water altogether, using gas-based fire prevention and “clean agents”: electrically non-conductive gases that don’t leave a residue. Most clean agents are hydrofluorocarbons (HFCs), which lack the ozone-depleting characteristics of older agents such as Halon gas, once popular in data centers.

    Will this incident prompt more bitcoin miners to consider hosting equipment in traditional data centers? It’s too early to say. Recent declines in the price of bitcoin have further tightened profit margins for miners. To remain competitive, industrial mining operations are making large investments in new hardware. Many businesses would seek to protect these investments with business insurance, but insurers will likely want the equipment hosted in facilities with proper fire detection and suppression equipment.

    6:00p
    Avere Systems Launches AWS-Compatible Virtual NAS

    Storage vendor Avere Systems has introduced a virtual NAS solution, a software-only version of its FXT Edge Filer series. With an update (version 4.5) to its AOS software, the virtual FXT filer can be purchased and deployed on Amazon Web Services server instances.

    A year ago Avere launched its Cloud NAS solution as a way to help enterprises approach cloud-scale economics by integrating existing storage systems with the cloud. Using its FlashCloud software, it integrated legacy NAS with Amazon S3 and Glacier services. The new virtual FXT Edge Filer goes a step further than Cloud NAS by operating directly in the compute cloud alongside the applications, for low-latency access to active data.

    Avere President and CEO Ron Bianchini said the virtual NAS solution will enable companies to take advantage of the flexibility and scale of cloud computing without radical changes to applications or storage infrastructure. “For many customers, this enables them to realize the promised benefits of the cloud,” he said in a statement.

    Avere noted that Virtual FXT can also be used for burst computing in the cloud at peak times with no hardware purchases or long-term commitment to software licenses, allowing companies to provision compute on a pay-as-you-go monthly basis.

    Bianchini pointed out in a blog post that enterprises continue to demand scalability and the ability to move data between locations and service providers. Meeting these demands requires something that Avere has continued to build out with its product offerings, connecting the dots between compute cloud, storage cloud, and on-premises storage.

    7:10p
    CenturyLink Makes Deploying Cloudera Hadoop Clusters on its Cloud Easy

    CenturyLink Technology Solutions has added a fast-track way to deploy Cloudera’s Hadoop distribution and management platform on its Infrastructure-as-a-Service cloud.

    The company introduced six new Cloudera cluster configuration options, or “blueprints,” to its portfolio. The key value of the blueprints, according to CenturyLink, is the automation of many of the steps usually required to stand up a Hadoop cluster running Cloudera.

    Deploying Cloudera across a four-node cluster, for example, requires more than 180 steps, Ben Brauer, senior manager of product marketing at CenturyLink, wrote in a blog post announcing the addition. Four nodes is CenturyLink’s basic configuration of a Cloudera cluster, and it takes a few mouse clicks to set up, but users can add more nodes quickly without having to configure each of them.

    “Automating a four-node cluster with Cloudera installed is no easy task,” Brauer wrote. “The sheer number of automated steps and their precise sequenced timing required some brain-twisting and a whole lot of architectural sophistication.”

    For high performance, users can deploy Hadoop clusters on the provider’s eight-CPU Hyperscale cloud instances, which perform better than comparable instances on the AWS or Rackspace clouds, CenturyLink said, citing a study by CloudHarmony, a company that performs independent cloud performance testing.

    [Chart: CloudHarmony performance comparison of AWS, Rackspace (RAX), and CenturyLink (CTL) cloud instances]

    The service is managed, which means CenturyLink will take care of things like software updates, patching, and cluster management.

    The company makes it easy to predict how much any particular blueprint configuration will cost per hour or per month. That predictability extends to the six Cloudera blueprints.

    As an example, one of the configurations, a four-instance cluster of Hyperscale cloud servers with 128 GB of memory, 3,748 GB of storage, Cloudera Manager and Hadoop JobTracker, will run about $8.80 per hour or about $6,300 per month.
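
    Those two figures line up if you assume roughly a 720-hour billing month, as the quick back-of-the-envelope check below shows (the 720-hour month is my assumption, not CenturyLink’s published formula).

    ```python
    # Back-of-the-envelope check on the quoted blueprint pricing.
    hourly_rate = 8.80           # dollars per hour for the example configuration
    hours_per_month = 24 * 30    # assumed ~720-hour billing month

    monthly_estimate = hourly_rate * hours_per_month
    print(f"~${monthly_estimate:,.0f} per month")  # ~$6,336, close to the quoted $6,300
    ```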

    Three of the configurations vary in cluster size, Hadoop components involved, and level of support and management. The other three are different configurations of additional nodes a cluster can be expanded with.

    The base configurations range in price from $2 per hour for the one-server Express option to about $11 per hour for the four-server Basic option that includes HBase, the open source non-relational database designed for big data applications.

    7:38p
    KnC Miner to Build Another 20MW for Bitcoin Mining in Sweden

    KnC Miner, a company that provides hosting services and builds software for bitcoin mining, has added 20 megawatts to what was previously going to be a 10 megawatt data center in Boden, Sweden, in the vicinity of a massive Facebook data center.

    The company said it decided to expand its data center capacity in Boden to a total of 30 megawatts and 160,000 square feet because of the abundance of hydropower in the area and a favorable business environment. It announced groundbreaking on its first build there only in February, and such a quick decision to add so much capacity indicates pent-up demand for bitcoin mining data center space.

    Bitcoin mines need a lot of power and cooling but not necessarily the level of reliability offered by traditional data center providers. Because traditional colo services are generally cost-prohibitive for mining companies, many of them have built and continue to build massive data centers designed specifically for their purposes.

    The availability of low-cost clean energy as a key factor in site selection for mining data centers is becoming a theme in this industry. The abundance of cheap hydropower has made Central Washington one of the hotbeds for mining operations.

    “Our Clear Sky farm in the polar region takes advantage of locally produced renewable hydropower and the surrounding arctic air to achieve industry-leading efficiency,” Sam Cole, a KnC co-founder, said in a statement.

    KnC’s and Facebook’s aren’t the only data center projects going on in Boden and the surrounding area. A company called Hydro66 is building a massive 500,000 square foot data center there, also citing cheap hydropower and a cool climate (for free cooling) as its reasons for building near the Arctic Circle.

    8:00p
    Using Open Source Solutions for Cloud-Ready Big Data Management

    Information interchange has reached all new levels. Now, much more than before, organizations are relying on large data sets to help them run, quantify, and grow their business. Just a few years ago, we were already working with large databases. Over the last couple of years, those demands have grown into gigabytes, terabytes, and petabytes. This data no longer resides in just one location. With cloud computing, it is truly distributed.

    More organizations will be placing their core business components within a data center and the cloud. Why? It simply makes sense. High-density computing and shared environments are the core structure of the modern data center. The big question becomes whether you want to manage it all yourself or have someone else do it. Remember, data center dependency is only slated to increase over the coming years.

    The reality really sets in when we look at the numerical information provided by some organizations:

    • IBM recently released a study showing that end users create over 2.5 quintillion bytes of data each day. The study also points out that more than 90 percent of all the data in the world has been created over the last couple of years.
    • Giants like Walmart face equally daunting challenges. With numerous stores all over the world, its IT systems have to process over 1 million customer transactions every hour. Furthermore, because of its size and the amount of product it carries, Walmart has to manage over 2.5 petabytes of data.

    This growth and reliance on data will be offloaded to the only platform that can handle these kinds of demands: the data center. Any growing organization must look at data center hosting options as a viable solution to an ever-evolving business and IT environment. Whether this means a cloud solution or a managed services option, the modern data center is the place that can support changing business needs and evolving IT solutions.

    Database administrators have been forced to find new and creative ways to manage and control this vast amount of information. The goal isn’t just to organize it but to be able to use the data to further help develop the business. In doing so, there are great open-source management options that large organizations should evaluate:

    • Apache HBase. This big data management platform is modeled after Google’s very powerful BigTable engine. An open-source, distributed database written in Java, HBase was designed to run on top of the already widely used Hadoop environment. As a powerful tool for managing large amounts of data, HBase was adopted by Facebook for its messaging platform.
    • Apache Hadoop. Apache Hadoop quickly became the standard in big data management. When it comes to open source management of large data sets, Hadoop is known as a workhorse for truly intensive distributed applications. The flexibility of the Hadoop platform allows it to run on commodity hardware and to integrate easily with structured, semi-structured, and even unstructured data sets.
    • MongoDB. This solid platform has been growing in popularity among organizations looking to gain control over their big data needs. MongoDB was originally created by 10gen, a company founded by DoubleClick alumni, and is now used by several companies as an integration piece for big data management. Built as an open-source NoSQL engine, it stores and processes structured data in a JSON-like document format (see the sketch after this list). Currently, organizations such as the New York Times, Craigslist, and a few others have adopted MongoDB to help them control big data sets.
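
    To make the “JSON-like” point concrete, here is a minimal sketch using the pymongo driver; the connection string, database, and collection names are illustrative assumptions, not anything drawn from the article.

    ```python
    # Minimal sketch of MongoDB's JSON-like document model via pymongo
    # (pip install pymongo); assumes a MongoDB server on localhost.
    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")  # hypothetical server
    db = client["retail"]                              # hypothetical database

    # Documents are stored as JSON-like (BSON) structures, so records with
    # different shapes can live in the same collection without a fixed schema.
    db.transactions.insert_one({
        "store": "Store #42",
        "items": [{"sku": "A-100", "qty": 2}, {"sku": "B-200", "qty": 1}],
        "total": 37.50,
    })

    # Query on nested fields inside the documents.
    for doc in db.transactions.find({"items.sku": "A-100"}):
        print(doc["store"], doc["total"])
    ```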

    Our new “data-on-demand” society has resulted in vast amounts of information being collected by major IT systems. Whether these are social media photos or international store transactions, the amount of good, quantifiable data is increasing. The only way to control this growth is to quickly deploy an efficient management solution. Remember, aside from being able to sort and organize the data, IT managers must be able to mine the information and make it work for the organization. I know there are a lot of other open-source big data options out there. Where have you seen success, and what have you been using?

    9:00p
    Dell Launches Public Beta of Cloud Marketplace


    This article originally appeared at The WHIR

    Hardware giant Dell is moving further into cloud brokerage with Dell Cloud Marketplace, whose official public beta launch was announced Wednesday at Dell World 2014. Dell will resell services from a multi-cloud ecosystem that includes partners Amazon, Google, Joyent, Docker, Delphix, and Pertino, with more companies to be added soon.

    Dell Cloud Marketplace has been rolling out as a multi-phase beta program since June, and the company believes it satisfies an enterprise need for cloud governance.

    “We think today, over 90 percent of cloud services that are in use are not officially sanctioned by IT, and IT has no visibility into them,” James Thomason, Dell Cloud Marketplace CTO, told ZDNet. “They’re unable to forecast their budget, they spend a lot of their time charging back random people’s credit card to IT budgets, and they don’t have governance, so they can’t guarantee all employee access to company systems and information.”

    The marketplace is an extension of Dell’s cloud manager, which was largely built by Enstratius before Dell acquired that company in May of last year. Dell had previously acquired Gale Technologies in 2012 to turn its multi-cloud management and automation software into Active System Manager.

    Dell’s cloud strategy has been moving towards consultancy and reselling partner offerings since it closed its own public cloud service in May of last year. Since then it has announced partner programs with Peer1 for managed cloud services in Canada, Red Hat for OpenStack private cloud solutions, and CenturyLink to offer a public cloud PaaS.

    Dell also closed its consumer cloud storage service Dell DataSafe in June.

    The new Cloud Marketplace is a natural extension of these and other partnerships, and reinforces Dell’s position as a cloud services vendor.

    IBM and Ingram Micro both launched cloud marketplaces earlier in 2014, and dedicated cloud broker ComputeNext raised $4 million in March to invest in making its cloud marketplace “the Expedia of Cloud Computing.”

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/public-cloud-spending-reach-127-billion-2018-report

    9:30p
    VMware to Offer Public Cloud in Australia through Telstra Partnership


    This article originally appeared at The WHIR

    VMware will offer its vCloud Air public cloud in Australia out of Telstra data centers early next year, the company announced on Tuesday. The infrastructure will be owned and operated by VMware and hosted within Australian telecom Telstra’s facilities, and could both reduce latency for Australian customers and address concerns about local cloud hosting and related compliance issues.

    Adding a regional presence in Australia makes the hybrid-focused vCloud Air a much more practical option for the Australian enterprise market, which is considered ripe for hybrid adoption.

    “We have seen strong uptake in Australia of public cloud services,” Telstra Global Enterprise and Services executive director Erez Yarkoni told ZDNet. “However, we are also seeing increasing interest from customers to identify different cloud strategies appropriate to their individual business needs. We are providing our customers with solutions that enabled multi-cloud environments, so they can optimise performance, cost, and flexibility across multiple applications and workloads.”

    With the initial presence in Telstra’s Melbourne data center, vCloud Air will be offered out of nine data centers, five of which are in the US. The company also announced the general availability of its hybrid public cloud in Japan, where one of vCloud Air’s other four data center footprints is located.

    “Many of our partners in other markets have experienced growth as a result of access to vCloud Air, while our customers have welcomed the versatility of a hybrid cloud platform that integrates seamlessly with their existing on-premises systems,” said VMware Cloud Services Business Unit executive vice president and general manager Bill Fathers. “We expect the same response in Australia as a critical mature market and bellwether for enterprise IT in the Asia-Pacific region, where we see persistently strong local interest in a more seamless approach to hybrid cloud.”

    The announcement coincides with the start of VMware’s Australian vForum partner event, the tenth year for the Sydney convention.

    Cloud providers seem to be taking part in a reverse land-rush, establishing data center footprints in large markets as rapidly as possible, partly to address worries about PRISM-style international spying. To that end, both Amazon and Microsoft have recently moved to add presences in Germany. Australia’s laws and public sentiment are much less protectionist, but security concerns may still add to the appeal for enterprises of having their cloud hosted in their home country.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/vmware-offer-public-cloud-australia-telstra-partnership
