Data Center Knowledge | News and analysis for the data center industry

Tuesday, August 5th, 2014

    12:00p
    Splunk Cuts Prices on Analytics-Based IT Management Cloud Service

    Splunk announced lower prices, a new free online sandbox and a 100-percent-uptime service level agreement (SLA) for Splunk Cloud, its enterprise-ready cloud service for machine data analytics. Splunk Cloud delivers the Splunk Enterprise product as a service, collecting and analyzing a continuous stream of data from enterprise applications, websites, servers and infrastructure.

    “Organizations cannot afford downtime on data platforms that monitor their applications, infrastructure and services,” Guido Schroeder, senior vice president of products at Splunk, said. “With Splunk Cloud, customers get best-in-class enterprise-ready reliability, an unequaled breadth of features and ease of use that enables rapid time to value.”

    To make it easier for potential customers to get acquainted with Splunk Cloud, the company launched a free Online Sandbox, which instantly enables a personal Splunk Cloud environment for evaluation. Splunk also reduced prices by 33 percent on its cloud service and added several new service plans.

    The cloud service features monitoring and alerting, role-based access controls, knowledge mapping, report acceleration, visibility across on-premises and cloud deployments, anomaly detection, pattern matching, high availability and REST APIs.
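    For readers curious what programmatic access looks like, below is a minimal sketch of running a search through the Splunk Enterprise REST API, which listens on the management port (8089). The host, credentials and search string are placeholders, and Splunk Cloud plans may expose or restrict this API differently, so treat this as illustrative rather than a recipe for the service described above.

    ```python
    import time
    import requests

    # Placeholder host and credentials -- substitute your own deployment's values.
    BASE = "https://splunk.example.com:8089"
    AUTH = ("admin", "changeme")

    # Create a search job via the REST API.
    job = requests.post(
        f"{BASE}/services/search/jobs",
        auth=AUTH,
        data={"search": "search index=main error | head 10", "output_mode": "json"},
        verify=False,  # self-signed certificates are common on the management port
    )
    sid = job.json()["sid"]

    # Poll until the search job finishes.
    while True:
        content = requests.get(
            f"{BASE}/services/search/jobs/{sid}",
            auth=AUTH, params={"output_mode": "json"}, verify=False,
        ).json()["entry"][0]["content"]
        if content["isDone"]:
            break
        time.sleep(1)

    # Fetch and print the results.
    results = requests.get(
        f"{BASE}/services/search/jobs/{sid}/results",
        auth=AUTH, params={"output_mode": "json"}, verify=False,
    ).json()
    for row in results.get("results", []):
        print(row.get("_raw", row))
    ```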

    Splunk was recently named a leader in Gartner’s 2014 Magic Quadrant for Security Information and Event Management. 

    Aaron Fulkerson, founder and CEO of MindTouch, a Splunk Cloud customer, said his company “tested several machine data analytics services on the market and none except Splunk Cloud could handle the rigors of serious enterprise demands. Our products are mission-critical to our customers, and the 100-percent uptime delivered by Splunk Cloud helps us meet our customers’ expectations to be online all the time.”

    2:21p
    Using Big Data to Optimize Business Operations

    Bhavesh Patel is Director of Marketing and Customer Support at ASCO Power Technologies, Florham Park, NJ, a business of Emerson Network Power.

    Business – and life in general – is becoming data-centric. In order to uphold reliability and preserve reputation, a data center must maintain an unimpeded flow of data at a level not anticipated even just a few years ago.

    To do that, a data center needs to refine how it values the data used to monitor its own operations, so that the flow of information generated in running the facility does not overwhelm IT and management capabilities.

    Shifting the focus

    Data centers would do well to shift emphasis from the volume, variety and velocity of the data they generate to monitor operations to how they can best use that data, mining it to optimize business insights and data center operation.

    Streaming data center operation information into clusters that talk to each other can help. Ideally, each cluster would not only collect data but also have local intelligence to determine what information to feed upstream.

    For example, a building could have the following clusters: power (including power-efficiency metrics to enable more efficient power distribution and monitoring of critical power), cooling (for optimized efficiency and control of the environment), safety and security, and facility management. Each cluster would have its own monitoring, measurement and control capabilities, but would feed only overview and status information to the others, with a building management system orchestrating policy decisions using the aggregated data.
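    As a rough illustration of that idea, the sketch below models clusters that keep raw readings local and push only summaries upstream. All names (Cluster, BuildingManagementSystem) and thresholds are hypothetical, not drawn from any particular product.

    ```python
    from dataclasses import dataclass, field
    from statistics import mean

    @dataclass
    class Cluster:
        name: str                      # e.g. "power", "cooling"
        readings: list = field(default_factory=list)
        alarm_threshold: float = 0.9   # assumed fraction of rated capacity

        def collect(self, value: float):
            self.readings.append(value)

        def upstream_summary(self) -> dict:
            """Local intelligence: forward only an overview, not raw data."""
            if not self.readings:
                return {"cluster": self.name, "status": "no-data"}
            return {
                "cluster": self.name,
                "avg_load": round(mean(self.readings), 3),
                "status": "alarm" if max(self.readings) > self.alarm_threshold else "ok",
            }

    class BuildingManagementSystem:
        """Orchestrates policy using aggregated overviews from each cluster."""
        def __init__(self, clusters):
            self.clusters = clusters

        def poll(self):
            summaries = [c.upstream_summary() for c in self.clusters]
            # Policy decision on aggregated data, e.g. react to a power alarm.
            if any(s["status"] == "alarm" for s in summaries):
                print("BMS: anomaly reported upstream ->", summaries)
            return summaries

    power, cooling = Cluster("power"), Cluster("cooling")
    for v in (0.72, 0.95, 0.81):
        power.collect(v)
    cooling.collect(0.60)
    BuildingManagementSystem([power, cooling]).poll()
    ```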

    Data center IT and management can then use the resulting analyses to change the behavior and practices that affect each clustered concern. With clusters talking to each other, data mining becomes more effective, and both the data center operator and the data center customer benefit from the enhanced knowledge.

    Redefining how we use data

    The strategy of using existing individual data points in newly networked ways is already in place in other industries. Examples include the auto industry, where vehicles use the data points they generate to improve safety and comfort; fleet management, where networked data enhances efficiency and monitors driver behavior; and a multi-location carwash business, where sensor data improves daily operations.

    In today’s cars, monitoring of data points is clustered under the cover (anti-lock braking system and automatic transmission), under the bonnet (automatic wiper control and engine management system), behind the dashboard (climate control), in the boot (parking aid), in the footwell (electric window and central locking), behind the central console (airbag control unit), and behind the glovebox (alarm and immobilizer).

    In fleet management, wireless GPS fleet tracking and diagnostic software solutions use data points that have long been collected, such as driving speed and total odometer distance, to monitor performance and generate in-depth performance data on every vehicle in the fleet. Management can know where each vehicle is and how and when it got there, and can receive alerts and reports that enable decisions that reduce fuel and maintenance costs, improve fleet efficiency, and even modify individual drivers' habits.

    The multi-location carwash business takes a similar approach. Sensors affixed to eight drums of carwash chemicals at each of eight locations feed dedicated software that monitors chemical levels throughout the day; previously, levels were checked weekly with a measuring stick dipped into each drum. With the new approach, management can pull reports at any time and react immediately to any levels that deviate from the expected norm.
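    The logic behind both the fleet and carwash examples is simple threshold monitoring over data points that were already being generated. A minimal sketch, with all names and numbers invented for illustration:

    ```python
    # Hypothetical check: compare each drum's daily chemical draw against an
    # expected consumption band and flag deviations (leak, misdosing, etc.).
    EXPECTED_DAILY_DRAW = 2.5   # gallons/day, assumed norm
    TOLERANCE = 0.5             # allowed deviation, also assumed

    def check_drum(location: str, drum: str,
                   level_yesterday: float, level_today: float):
        draw = level_yesterday - level_today
        if abs(draw - EXPECTED_DAILY_DRAW) > TOLERANCE:
            print(f"ALERT {location}/{drum}: daily draw {draw:.1f} gal "
                  f"deviates from expected {EXPECTED_DAILY_DRAW} gal")

    # One drum draining faster than normal triggers an immediate alert:
    check_drum("site-3", "drum-7", level_yesterday=40.0, level_today=36.2)
    ```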

    Reporting on the data center

    At data centers, data center infrastructure management (DCIM) systems and critical power management systems (CPMS) are increasingly popular ways to monitor and report on specific data points related to power generation and distribution. A CPMS could also interact intelligently with a data center building management system, which, in this increasingly data-centric world, could also be pulling data from other categories of gathered information, creating valuable, actionable intelligence in real time.

    As Big Data becomes more prevalent, there will be more ways to reap meaningful return on data, enabling a better ROI on data collection systems already in place or still on the horizon.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:57p
    RightScale’s Cloud Budget Management Software Goes Into General Availability

    Cloud portfolio management company RightScale has launched its Cloud Analytics product into general availability.

    It is a cost-management solution that works across multiple clouds. The company announced a beta version in November 2013.

    Built to help customers manage their cloud budgets, the platform provides usage and cost analysis. It also has some predictive capabilities, or “scenario planning” across major public and private clouds.

    Hassan Hosseini, RightScale product manager for Cloud Analytics, said customer response to November’s beta release had been positive. “In just a few months, more than 1,000 customers participated in the Cloud Analytics public beta program,” he said.

    Those customers include the VW Group's Audi of America division; BuildFax, a property data services company for the insurance industry; and MoneySuperMarket, a British price comparison website for financial services.

    BuildFax CTO Joe Emison said the solution helped address unpredictability of public-cloud costs. “This is due to both wanted expenses, from business growth, as well as unwanted expenses, like forgetting to shut down servers,” he said.

    “RightScale Cloud Analytics provides automated emails that give me the right level of detail to help me understand quickly whether I am on budget and identify any unexpected cloud usage.”

    RightScale has added numerous features since the beta release:

    Scheduled reports: Schedule automated daily, weekly and monthly cloud cost reports delivered to your email inbox. The reports reveal trends in cloud costs, highlighting potential issues before they significantly impact your bill. If there are unexpected changes, you can jump into your Cloud Analytics account directly from a link in the email to investigate further.

    Enhanced scenario builder: Enterprises can compare different instance types and clouds as they map out future cloud usage. Scenario Builder helps forecast the cost of moving applications to the cloud, the costs of using different clouds and resources, the costs associated with increased cloud usage and the impact of buying AWS Reserved Instances.

    Budget alerts: Cloud users, managers and finance teams can get alerted when budgets are exceeded or when cost overruns are projected based on current spend rates (a quick projection sketch follows this list).

    Identifying waste: Spot zombie cloud instances, meaning servers that were stranded during boot, often without the user's knowledge.

    Multi-account Reserved Instance visibility: AWS users can use their own consolidated billing structure to analyze RI usage across accounts. This enables AWS customers to make better decisions on RI purchases and ensure that they are using already-purchased RIs.
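    To make the budget-alert arithmetic concrete, here is a minimal sketch of projecting a month-end overrun from the current spend rate. The figures are invented, and straight-line extrapolation is just one plausible way to implement the idea; RightScale has not published its method.

    ```python
    from calendar import monthrange
    from datetime import date

    def projected_month_spend(spend_to_date: float, today: date) -> float:
        """Extrapolate month-to-date spend at the current daily run rate."""
        days_in_month = monthrange(today.year, today.month)[1]
        return spend_to_date / today.day * days_in_month

    # Example numbers only: $4,800 spent by August 12 against a $10,000 budget.
    budget = 10_000.00
    projection = projected_month_spend(4_800.00, date(2014, 8, 12))
    if projection > budget:
        print(f"Budget alert: projected ${projection:,.0f} exceeds ${budget:,.0f}")
    ```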

    5:13p
    Moogsoft Raises $11.3M for Web-scale IT Operations

    Moogsoft, a provider of “collaborative situation management” solutions for web-scale IT operations, has closed an $11.3 million Series B funding round led by new investor Wing Venture Capital. Existing investor Redpoint Ventures, which contributed to a $7 million round in 2013, also participated. The cow-themed company is looking to disrupt traditional DevOps operations with its Incident.MOOG platform.

    ITSM and DevOps: a collaborative situational approach

    Moogsoft says its Incident.MOOG offering is a new type of platform for ensuring the availability of today's continuous-delivery applications and infrastructure. Used by web-scale service providers and enterprises to create situational awareness, the platform detects incidents as they unfold and groups them into contextualized situations, creating a dynamic teaming environment for stakeholders to accelerate resolution and knowledge sharing.

    Using adaptive machine learning and socialized workflow, the company hopes its platform will benefit DevOps and IT operations by ditching old rule-based systems in favor of a new paradigm that brings value to existing service management resources.
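    Moogsoft has not published its algorithms here, but one generic way to turn raw alerts into “situations” without static rules is to group alerts that arrive close together in time. The sketch below is purely illustrative and is not Incident.MOOG's actual method:

    ```python
    from datetime import datetime, timedelta

    WINDOW = timedelta(minutes=5)  # assumed gap that separates situations

    def cluster_alerts(alerts):
        """alerts: list of (timestamp, host, message) tuples."""
        situations, current = [], []
        for alert in sorted(alerts, key=lambda a: a[0]):
            # Start a new situation when the time gap exceeds the window.
            if current and alert[0] - current[-1][0] > WINDOW:
                situations.append(current)
                current = []
            current.append(alert)
        if current:
            situations.append(current)
        return situations

    alerts = [
        (datetime(2014, 8, 5, 9, 0), "web-01", "latency spike"),
        (datetime(2014, 8, 5, 9, 2), "db-01", "replication lag"),
        (datetime(2014, 8, 5, 13, 30), "web-02", "disk full"),
    ]
    for i, situation in enumerate(cluster_alerts(alerts), 1):
        print(f"Situation {i}: {[host for _, host, _ in situation]}")
    ```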

    “Once in a while, a company invents a radically new approach to an increasingly strategic problem and disrupts an industry. Moogsoft is one of those companies,” said Peter Wagner, founding partner of Wing Venture Capital. “Its collaborative machine learning technology is already driving new efficiencies in the delivery of continuous availability to web-scale environments.”

    Company co-founders Phil Tee and Mike Silvey are industry veterans, having brought Micromuse Netcool to market in the 1990s; Micromuse was later acquired by IBM and folded into its Tivoli software suite. After only a few years in business, Moogsoft's product is being used by a number of service providers around the world, and the company has partnered with major vendors in the ITSM and DevOps sectors.

    The new investment will be used to fund geographic expansion and to continue innovating its platform.

    Moogsoft CEO Phil Tee said, “Peter and the Wing investment team bring invaluable domain knowledge and operational expertise to Moogsoft that will enable us to scale our business faster. We share a vision for how data, mobile and the cloud are transforming IT. Together, we are building something very big.”

    6:00p
    Video: Activist Blimp’s July 4th NSA Data Center Flyover

    On July 4th, a group of U.S. activist organizations flew an airship over the National Security Agency’s massive data center in Bluffdale, Utah, in protest of the agency’s overzealous data collection from the global communications networks.

    The group consisted of representatives from the Electronic Frontier Foundation, Greenpeace and the Tenth Amendment Center.

    The activists flew Greenpeace's A.E. Bates thermal airship over the facility – the same airship Greenpeace flew over Silicon Valley in April to praise Google and Facebook for their efforts to clean up the sources of power for their data centers, and to shame Twitter, Netflix, Amazon and Pinterest for not doing enough in that regard.

    On Tuesday, the EFF posted a video showing preparations for the NSA flyover and the flyover itself.

    7:00p
    Symantec and Kaspersky Labs Banned from China as Government Obliged to Choose Domestic Security Software


    This article originally appeared at The WHIR

    China is continuing to block foreign technology services from being accessible in the country, as US-based antivirus firm Symantec and Russian firm Kaspersky Labs have been added to a list of tech firms banned from China.

    According to a report by Business Insider on Monday, China’s government procurement agency has excluded Symantec and Kaspersky from its security software supplier list. There are no foreign security firms on the approved list of five security providers.

    The ban comes shortly after Chinese government officials raided four Microsoft offices. While the details around the raids remain scarce, Microsoft has had a strained relationship with the Chinese government since Edward Snowden’s disclosures revealed that Microsoft technology had been used to aid the NSA in cyberespionage.

    In June, censorship watchdog GreatFire reported that US-based cloud storage service Dropbox was blocked again, after becoming accessible in February for the first time since 2010.

    Some of the concern from the Chinese government seems to be that the software from security firms like Symantec could include backdoors or other hidden functionalities in order to enable the US to spy on China. Symantec told Bloomberg that it doesn’t do that.

    Kaspersky is currently investigating the government’s decision to keep it off the approved list.

    While security companies are the focus in the latest offensive by the Chinese government, US cloud companies as a whole stand to lose significant profits from foreign governments concerned about US spying programs. A study by the New America Foundation estimates the impact to be in the range of billions of dollars.

    As China is a market poised for significant cloud growth, being left out could have lasting effects on the success of US-based service providers, and go much deeper than lost profits.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/symantec-kaspersky-labs-banned-china-government-obliged-choose-domestic-security-software

    8:00p
    SuperNAP 8 Earns Tier IV Gold Status for Operations

    Uptime isn’t just about having a resilient facility. Human error is among the leading causes of data center outages, underlining the importance of a well-trained staff and operational best practices.

    The team at Switch in Las Vegas has now met the highest standards on both fronts. After earning Tier IV certification from the Uptime Institute earlier this year, Switch’s SuperNAP 8 has now added Tier IV Gold certification for Operational Sustainability.

    SuperNAP 8 is the newest facility on the Switch campus. The 350,000-square-foot facility was built using prefabricated modular components manufactured by Switch and features innovations in cooling and power distribution. In February it became the first colocation facility to earn Tier IV Construction certification from Uptime.

    Recognizing the human factor

    Uptime now says Switch has met its highest standards for managing its data centers. The Operational Sustainability standard recognizes the human factors in running a data center, as well as the design of the facility to meet fault-tolerance standards.

    The design and operations of SuperNAP 8 “uniquely expresses harmony of technology, staff, and operational processes, which is key to sustained availability,” said Lee Kirby, the CTO of the Uptime Institute.

    “Switch’s SUPERNAP 8, MOD1 is one of only four data centers around the world, and the first in the U.S. colocation market, to truly demonstrate Fault Tolerance throughout the design, construction, commissioning, and operations,” said Kirby. “Switch’s achievement of Tier IV Gold with a modular design and construction solution further speaks to that team’s sophistication.”

    The only other data centers to have earned Tier IV Gold status are the Telefonica data center in Spain, a US Bancorp site in Kansas, and a municipal facility serving the Province of Ontario in Canada. Switch becomes the first carrier-neutral colocation data center to earn the designation.

    Innovations in cooling, containment

    SuperNAP 8 is the latest design from Rob Roy, the CEO and founder of Switch and the company's principal inventor and chief engineer. It sits adjacent to SuperNAP 7, the 400,000-square-foot center that put Switch on the map. Roy has patented many of the design innovations at the facilities, including an advanced cooling system that can switch among six different cooling modes, and the T-SCIF heat containment system.

    Switch now has more than 1,000 customers, including more than 40 cloud computing companies and a dense concentration of network carriers.

    “I have seen countless data centers around the globe, and nothing comes close to the Switch SUPERNAP data centers,” said Peter Gross of Bloom Energy, a data center design pioneer as founder of EYP Mission Critical Facilities. “Rob Roy has revolutionized modern data center design and has created a technology ecosystem that is unrivaled in the industry. This achievement of Tier IV Gold from Uptime is incredibly well-deserved and thoroughly earned.”

    SuperNAP 8 was built using a modular approach known as SwitchMOD. Each MOD features two 10MVA power rooms and two separate data halls, each with a capacity of 800 cabinets. Some of the innovations in SuperNAP 8 include:

    • The Rotofly system, which uses 2,000 pounds of rotary flywheels to provide extended runtime for each HVAC unit. In the event of a power outage, this capability ensures that the cooling units will continue to move air through the data halls.
    • A steel framework known as the Black Iron Forest, which supports Switch's T-SCIF aisle containment system while also serving as thermal storage: the steel chills the air around it, helping to cool the room and providing a cushion during cooling failures.
    • SwitchSHIELD, a double-roof system that can protect the data center from wind speeds of up to 200 miles per hour.

    SUPERNAP 9 and 10 are currently under construction and will add capacity for 130 megawatts of power and up to 8,000 cabinets. “They are already 22 percent sold before construction is finished,” said Missy Young, EVP of Colocation at Switch.
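    A back-of-envelope check on those figures: if the full 130 megawatts served all 8,000 cabinets, the implied average density would be about 16 kW per cabinet. The article does not say how much of that power is critical IT load versus total facility power, so treat this as an upper bound.

    ```python
    # Back-of-envelope only: the article doesn't break the 130 MW down into
    # IT load vs. total facility power, so this is an upper-bound estimate.
    total_power_kw = 130 * 1_000   # 130 MW announced for SUPERNAP 9 and 10
    cabinets = 8_000               # maximum cabinet count announced
    print(f"~{total_power_kw / cabinets:.2f} kW per cabinet")  # ~16.25 kW
    ```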

