Data Center Knowledge | News and analysis for the data center industry
 

Monday, August 26th, 2013

    1:08p
    Birst Closes $38 Million For Cloud Business Intelligence

    Birst nets $38 million to fuel growth, Capgemini launches Elastic Analytics on Amazon’s cloud, and SAP and the NFL team up to power cloud-based player comparison tools.

    Birst closes $38 million funding round. Cloud business intelligence provider Birst announced it has completed a $38 million investment led by Sequoia Capital. All other existing investors participated in the round and were joined by new investors, including Northgate Capital. The new funding will be used to further expand internationally, grow sales and marketing, and invest in new product capabilities. The company continues to grow its partner and technology alliances and hopes to expand into new markets in Europe, Asia, and the Middle East. “The demand for BI continues to grow and the shift to the Cloud has become unstoppable,” said Brad Peters, CEO and co-founder of Birst. “We already have the industry-leading product. This investment furthers our ability to strengthen our solution and will allow us to offer it around the world.” “Companies are drowning in data, and Birst is the lifeline that allows them to surface the information that really matters,” said Doug Leone, partner at Sequoia Capital. “We’re excited to continue to partner in their success.”

    Capgemini launches Elastic Analytics. Capgemini announced Elastic Analytics, a new end-to-end business intelligence (BI) and big data analytics solution. Available via Amazon Web Services, the enterprise-ready, integrated solution includes the infrastructure, management, security, support, and maintenance to run analytics in a cloud environment. Elastic Analytics provides clients with an easily adaptable mix of technologies, sources, and solutions in a fraction of the time and cost of building a traditional business intelligence or big data solution. It uses existing ETL technologies and the AWS Hadoop-based service, Amazon Elastic MapReduce (EMR), to extract and merge data into highly optimized analytics engines. “Organizations are struggling with the deluge of data and the ability to rapidly respond to new demand for insight from their business users. Cloud offers a way to deliver solutions quickly, but BI and big data infrastructure is complex to set up,” explains Lanny Cohen, Global Chief Technology Officer at Capgemini. “Capgemini partners with the leading cloud providers globally, and most recently has worked closely with AWS to create ‘Elastic Analytics’, a new end-to-end Business Intelligence and Big Data Analytics solution available via AWS, to directly address this need.”
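    The extract-and-merge step described above relies on Hadoop’s map/reduce model. As a rough conceptual sketch only — not Capgemini’s actual pipeline; the record shapes and function names are invented for illustration — merging two data sources by key looks like this in plain Python:

```python
# Conceptual sketch of a map/reduce-style extract-and-merge, the kind of
# job a Hadoop-based service such as Amazon EMR runs at scale.
from itertools import groupby
from operator import itemgetter

def map_phase(records, source):
    """Emit (key, tagged record) pairs from one data source."""
    return [(rec["id"], (source, rec)) for rec in records]

def reduce_phase(pairs):
    """Group the pairs by key and merge every source's fields per key."""
    merged = {}
    for key, group in groupby(sorted(pairs, key=itemgetter(0)), key=itemgetter(0)):
        row = {}
        for _, (_, rec) in group:
            row.update(rec)
        merged[key] = row
    return merged
```

    In a real EMR job the map and reduce phases run in parallel across the cluster; the merged rows would then be loaded into the analytics engine.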

    SAP’s cloud-based analytics for NFL.com. SAP and the National Football League (NFL) launched an enhanced fantasy football analytics platform that offers real-time insights to help users make more informed, winning decisions. The player comparison tool, free at nfl.com/fantasy, lets fans forecast NFL player statistics for regular-season games and guides game-day decisions by factoring in statistical performance and intangibles such as weather, injuries, game location, player rest and more. “Integrating the software solutions and data storage expertise of SAP’s team with the knowledge and experience of the NFL Digital Media team has produced an extremely valuable fantasy football tool,” said Cory Mummery, senior director of Product for NFL Digital Media. “The player comparison tool, combined with our existing fantasy editorial content, social tools and player projections, provides users with an abundant suite of products to help them manage their fantasy teams.”

    1:47p
    How a Robot Can Simplify Data Center Management

    Srinivasa Vivek is a consultant at EMC India Center of Excellence.

    SRINIVASA VIVEK
    EMC

    With all the latest advancements in the cloud and virtualization markets, who needs a power-draining, expensive data center? Unfortunately, many organizations still carry this overhead. In new data center designs, capacity provisioning for ever-higher power requirements has always been an area of concern, with a lingering question of whether conventional room-conditioning systems can manage future information technology loads. Within existing data centers, computing capacity typically grows over time as IT requirements increase, driving up power and cooling requirements. Data center operators are challenged to provision adequate support infrastructure, or adapt the existing infrastructure, to meet future IT mission requirements while minimizing energy use.

    With increased reliance on data centers, full software and hardware robotics automation is no longer a question of “if,” but a matter of “when.” Simple RFID tags, laser and barcode identifiers can enable true data center automation. Data center automation and robotics technologies have come a very long way over the past decade. From the warehousing or inventory perspective, robots are equipped to sense location, put assets or inventory in order, and interact directly with human-created automation scenarios.

    Using Robots to Improve Efficiency

    As the demand for IT computing and network devices continues to increase, there is a growing need to remove the heat generated by the equipment effectively and efficiently. To improve efficiency, it makes sense to locate cooling (such as computer room air conditioners or computer room air handlers) close to the heat source; another approach is to place a heat-removal device, such as a hot-air chimney, close to the source.

    Now, with these techniques in place, can we really be sure that the amount of cooling delivered to each rack or piece of equipment is just right? A typical data center should run between 18 degrees C and 27 degrees C, per ASHRAE (the American Society of Heating, Refrigerating, and Air-Conditioning Engineers). The lower the temperature, the higher the energy consumption; the higher the temperature, the greater the risk to equipment availability, since temperature fluctuations can induce equipment failures. Accurate, responsive temperature monitoring and control has therefore become increasingly important, and running the data center at the optimum temperature is a challenging technical effort.
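    The trade-off above can be made concrete with a minimal range check. The 18–27 degrees C band follows the ASHRAE figures cited in the text; the function and status names are hypothetical, not part of any monitoring product:

```python
# Sketch: flag rack inlet temperatures outside the ASHRAE-recommended
# 18-27 degrees C operating band described above.
ASHRAE_MIN_C = 18.0
ASHRAE_MAX_C = 27.0

def classify_reading(temp_c):
    """Return 'ok', 'overcooled' (wasted energy), or 'overheated' (availability risk)."""
    if temp_c < ASHRAE_MIN_C:
        return "overcooled"
    if temp_c > ASHRAE_MAX_C:
        return "overheated"
    return "ok"

def out_of_band(readings):
    """Map rack name -> status for every rack outside the recommended band."""
    return {rack: classify_reading(t)
            for rack, t in readings.items()
            if classify_reading(t) != "ok"}
```

    A monitoring loop would feed live sensor readings through such a check and raise alerts on anything outside the band.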

    Some of the earliest work on data center energy monitoring and management was based on sensors installed at fixed locations. Another method used a hand-pushed mobile sensing station that accepts human input to record the data center layout while automatically recording a set of temperature and humidity readings.

    The quest for an alternative to these approaches led to the development of an inexpensive robot that automatically maps and navigates a data center; collects temperature, humidity, and other data; and feeds that data into a software tool for efficiency analysis and data center energy management. A team of EMC engineers brainstormed and came up with the idea of building a low-cost robotic platform to monitor environmental parameters in a data center. The prototype, or DC Robot as we call it (see below), consists of three sensors mounted on a vertical tube per ASHRAE recommendations on measurement points, and is also equipped with remote navigation capabilities through cameras and remote control software. The DC Robot collects temperature data using three digital sensors and relays it through a Wi-Fi access point for post-processing. An algorithm converts the temperature data into a thermal map, which can then be used to easily identify the hot and cold spots of the data center aisles, with location information.

    The DC Robot prototype. (Photo: EMC)
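    The post-processing step described above can be sketched roughly as follows. The sample format, grid granularity, and function names are invented for illustration — this is not EMC’s actual algorithm:

```python
# Sketch: turn location-tagged temperature samples collected by the robot
# into a simple thermal map, then pick out the hot and cold spots.
from collections import defaultdict

def build_thermal_map(samples):
    """samples: iterable of (aisle, position, temp_c) readings.
    Returns {(aisle, position): mean temperature}, averaging the readings
    taken at the different sensor heights for each location."""
    sums = defaultdict(lambda: [0.0, 0])
    for aisle, pos, temp in samples:
        cell = sums[(aisle, pos)]
        cell[0] += temp
        cell[1] += 1
    return {loc: total / count for loc, (total, count) in sums.items()}

def hot_and_cold_spots(thermal_map):
    """Return the locations of the hottest and coldest grid cells."""
    hottest = max(thermal_map, key=thermal_map.get)
    coldest = min(thermal_map, key=thermal_map.get)
    return hottest, coldest
```

    Plotting the resulting map as a color grid gives the per-aisle thermal view the article describes.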

    Fixing a simple cooling leak saves a lot of energy, and small adjustments to the data center’s cooling temperature can also yield large energy savings. Poor control and monitoring of conditions in a data center may shorten the life of the equipment; overheating can cause intermittent faults and, in extreme cases, catastrophic equipment failure. The cost in time, money, and lost business productivity can be considerable.

    Having an independent system like the DC Robot check on the air conditioning units and monitoring devices is a wise idea. It can also give a clear picture of conditions in different parts of the data center and, coupled with alerts, can give ample warning before conditions become critical or out of control. Robots become an extension of data center staff and managers and assist in keeping the data center environment running smoothly.

    Please note: Co-author of this article is Arun AT, Associate Consultant II at EMC India Center of Excellence.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:54p
    IBM Cloud Helps the Trains Run on Time in Central Europe

    Slovenian Railways has adopted IBM’s cloud solution to improve business operations and customer service for the Central European railway. (Photo: IBM)

    Slovenian Railways has adopted IBM’s cloud solution to improve business operations and customer service for the Central European railway. The railway is modernizing its IT infrastructure to handle the growing number of people and volume of goods it transports. The IBM cloud will enable better coordination among the different departments and enhanced services for customers, such as less waiting time for commuters, fewer train delays, and faster responses to customer queries. The contract was signed in December 2012 and the project is scheduled for completion by the end of 2013.

    The cloud helps centralize operations, giving a more holistic view of the railway’s business. It also brings the typical advantages of cloud, such as better control over costs and simpler handling of IT purchases and warranties, while helping the company meet all the necessary compliance requirements.

    “We aim to offer highest levels of services to our customer and to do so we need the best of technologies,” said Jovanovič Dragomir, CIO of Slovenian Railways. “IBM SmartCloud will provide us with a world class, reliable solution to centralize our IT service desk, control cost and leverage mobile solutions. Our employees and customers will benefit from IBM’s expertise and the power of the cloud, as we will improve our operations across the entire group.”

    Adopting a new cloud-based centralized IT system will give the company a more holistic view across all of its freight, passenger and logistics operations so it can more effectively maintain and manage the railway traffic across its network.

    IBM SmartCloud Control Desk and IBM Endpoint Manager, part of the IBM MobileFirst Management solutions, unite users from the affiliated Slovenian Railways companies under a uniform IT service desk with a single point of contact available across the entire organization. Through the common IT platform, railway officials can automate and manage systems in real time and help secure mobile devices and other “end points” such as servers. It also means that employees can bring the device of their choosing with security risks minimized.

    With a diverse range of businesses, from freight transport services to passenger and commuter rail to infrastructure management, the railway needs a flexible IT environment that can handle the varying requirements of each group as well as separate IT service desks.

    Slovenian Railways operates more than 1,200 km of railway track, transports 15.3 million passengers and carries 15.8 million tonnes of cargo each year.

    3:58p
    Opscode Names Barry Crist as CEO

    Opscode, which makes infrastructure automation tools, said today that Barry Crist has been appointed as CEO and Chairman of the company’s Board of Directors. Former CEO Mitch Hill is stepping down to address a personal health issue and will remain on the company’s board.

    Crist spent the last five months as Opscode’s Vice President, Enterprise, successfully leading enterprise customer engagement for the company. Before joining Opscode, Crist served as CEO of Likewise, Inc., where he built an open source, free-to-premium sales model that reached more than 100,000 organizations worldwide. Crist previously served as Vice President of the Application Management group at Mercury, and spent time in leadership positions at Network Associates (now McAfee) and Apple.

    “We’re rooting for Mitch as he focuses on his health and thankful for his tremendous leadership in building Opscode into a market leader,” said Crist. “We are in the midst of a massive change in the enterprise and we’re incredibly fortunate that Chef is at its epicenter. Opscode’s focus will be on empowering our customers to transform their businesses with Enterprise Chef.”

    Chef is an open source framework using repeatable code – organized as “recipes” and “cookbooks” – to automate the configuration and management process for virtual servers. Opscode said last week that Fortune 1000 companies now represent 60 percent of its sales, with the Fortune 1000 customer base having grown 150 percent in the past year.
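    Chef recipes themselves are written in a Ruby DSL. Purely to illustrate the idempotent, desired-state model behind “recipes” and “cookbooks” — this is not Chef’s actual API — a minimal sketch of the convergence idea might look like:

```python
# Illustrative sketch of the convergence model behind a "recipe": each
# resource declares a desired state, and applying the recipe a second
# time changes nothing (idempotence). Resource names are invented.
def converge(recipe, system_state):
    """Apply each resource only if the system is not already in the
    desired state. Returns the list of resources that actually changed."""
    changed = []
    for name, desired in recipe:
        if system_state.get(name) != desired:
            system_state[name] = desired  # the 'action' (install, start, ...)
            changed.append(name)
    return changed
```

    This repeat-until-converged property is what makes the same recipe safe to run across thousands of virtual servers.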

    “Under Mitch’s leadership Opscode has become one of the most promising and disruptive next-generation technology leaders,” said John Connors, Managing Partner at Ignition Partners. “Barry Crist’s broad experience and stellar track record combined with his unique expertise building a successful open source-based company makes him the right leader for the next generation of growth at Opscode.”

    Opscode also said today that co-founder Adam Jacob has been named Chief Dev Officer and Paul Edelhertz will now serve as Senior Vice President of Customer Operations, driving enterprise adoption and customer success with Enterprise Chef. Jacob previously served as the company’s Chief Customer Officer, while Edelhertz had been Vice President of the company’s Services and Community organizations.

    8:23p
    Key Executive Hires for IO, QTS

    It’s been a busy week for executive hires in the data center industry.

    IO announced that Michael Berry has joined the company as Chief Financial Officer. Berry will be responsible for financial planning and analysis, accounting, operations, treasury activities and investor relations, and will report to CEO and Product Architect, George Slessman.

    “I am very happy to have Mike join IO,” said Slessman. “His experience as a proven operational leader and technology executive will support IO’s growth and provide a solid financial and operational foundation for the organization.”

    Berry joins IO from SolarWinds, a publicly held international provider of IT management software with approximately $300 million in annual revenue, where he was Executive Vice President and CFO. Prior to SolarWinds, Berry was CFO at i2, a publicly held, international provider of supply chain software and services. Prior to his CFO experience, Mr. Berry served in various executive roles at The Reynolds and Reynolds Company, a provider of software and services to the retail automotive industry. He has also held executive management positions at Comdata Corporation and Travelers Express.

    Meanwhile, QTS has selected Jeffrey Berson as the company’s first chief investment officer. Berson will be responsible for QTS’ investment strategy, including business development, strategic projects and investor relations, as well as supporting the office of the CFO on capital markets activities. QTS recently filed plans for an initial public offering.

    Berson has more than 20 years of investment banking experience, much of it with the technology sector’s emerging infrastructure companies including data centers, managed services, enabling communications technologies and fiber providers. He has been involved in transactions aggregating more than $30 billion in value, including numerous public and private debt and equity transactions, convertible debt issues, merger and acquisition advisory assignments, and management and leveraged buy-out transactions.

    Berson joins QTS from UBS where he was a managing director in the Media and Communications Group focused primarily on the communications infrastructure and the telecommunications services sectors. He’s also previously served as head of the Communications Services Investment Banking Group at Oppenheimer & Co., managing director in the Telecom Group at Barclays Capital and held various positions within the investment banking department at CIBC World Markets.

    “Jeff’s extensive experience within the data center industry and managed services, combined with his business insight, equips him well to serve as QTS’ first chief investment officer,” said Chad Williams, chairman and chief executive officer – QTS. “These are exciting times for the growth of QTS and Jeff’s knowledge and experience will play a critical role.”

    8:45p
    The Elastic Cloud: Leading Cloud Stacks Shape API Conversations

    APIs (Application Programming Interfaces) and stack-based delivery models are a vital part of the cloud process.

    More organizations are finding great ways to use cloud computing to create a more elastic infrastructure. But as they boost their reliance on cloud computing, some organizations worry that their environment won’t be compatible with other platforms. So APIs (Application Programming Interfaces) and stack-based delivery models were introduced into the cloud computing technology matrix.

    These APIs are actually a vital part of the cloud process. Why? They help create direct cloud computing connections. Basically, they are pushing towards a more agnostic cloud platform. With more cloud deployments, data center administrators will need to find ways to extend, connect, or integrate their cloud environment with other services.
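    The “agnostic cloud platform” idea above can be sketched as an adapter layer: application code targets one interface, while per-provider adapters translate to each vendor’s native calls. The class and method names below are invented placeholders, not any real provider’s API:

```python
# Sketch of provider-agnostic cloud APIs: callers program against one
# abstract interface; swapping clouds means swapping the adapter only.
class CloudProvider:
    def launch_instance(self, size):
        raise NotImplementedError

class ProviderA(CloudProvider):
    def launch_instance(self, size):
        # Would call provider A's native API here.
        return f"providerA-vm-{size}"

class ProviderB(CloudProvider):
    def launch_instance(self, size):
        # Would call provider B's native API here.
        return f"providerB-node-{size}"

def deploy(provider, size="small"):
    """Application code stays identical regardless of the backing cloud."""
    return provider.launch_instance(size)
```

    This is the pattern behind cross-cloud toolkits: extending, connecting, or integrating environments becomes a matter of adding adapters rather than rewriting applications.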

    Today, there are three core areas where cloud computing can help: infrastructure, services, and applications.

    Within those categories, you’re able to place services like SaaS, PaaS, IaaS and so on. But what happens when we start delivering “Everything-as-a-Service?” What happens when the cloud model continues to grow and evolve, and new types of connections are required? Let’s take a look at three stack models and how they’re helping shape the cloud connectivity and API conversation.

    Apache CloudStack

    CloudStack has been growing in popularity with many different organizations. Originally developed by Cloud.com, CloudStack was purchased by Citrix and then released into the Apache Incubator program. From there, the first stable version was released in 2013. The platform is already compatible with hypervisors like KVM, vSphere, and XenServer. Apache CloudStack is an open-source cloud management platform designed for creating, controlling, and deploying various cloud services. Similar to the other stack-based models, CloudStack supports the Amazon AWS API model and many other APIs.

    The good: CloudStack 4.0.2 is the first stable release – but it’s very new, just five months old. Still, the latest version includes great features, like scaling storage independently of compute and letting VMs maintain their machine state without requiring compute changes. New security features now allow the administrator to create security zones across various regions. The overall deployment of CloudStack is smooth. In a typical setup, you would have one VM running the CloudStack Management Server and another VM acting as the actual cloud infrastructure. From a testing and deployment perspective, you could deploy the whole platform on one physical host.

    What to look for: Remember, the latest release of CloudStack is very new. The other challenge is that we have yet to see any major cloud provider adopt the platform. Finally, from an engineering perspective, some have pointed out challenges around the monolithic architecture and installation process; although simplified, the installation still requires a bit of knowledge. Still, the platform is being adopted by a few big players. During the summer of 2012, Datapipe announced that its global infrastructure would run on CloudStack, and other organizations like SunGard, Citrix, and WebMD have already adopted the CloudStack model.

    OpenStack

    With more than 200 companies adopting this platform, OpenStack is certainly one of the more popular cloud models out there. Currently managed by the OpenStack Foundation, OpenStack consists of multiple, interrelated stack-based parts. These components all tie together to create the OpenStack delivery model. Much like CloudStack, OpenStack is agnostic about the underlying hypervisor and infrastructure on which it runs, supporting platforms that include VMware, Citrix, and KVM.

    The good: Let’s face facts – OpenStack is, arguably, the most mature stack-based cloud control model out there. Furthermore, OpenStack’s adoption momentum has been very strong. The latest release, Havana, shows some pretty big improvements across all major components in the stack. The networking component (Neutron) allows administrators to do some pretty amazing things with their cloud model. Now, with direct integration with OpenFlow, Neutron allows for greater levels of multi-tenancy and cloud scaling by adopting various software-defined networking technologies into the stack. Furthermore, the networking framework around OpenStack has new services like intrusion detection services (IDS), various load-balancing features, firewall technologies and even a VPN that you can deploy and manage. Traffic and IP redirection is made easier – thus creating a stack platform capable of even greater resiliency and failover.

    Next: The Path Ahead for OpenStack, A Look at Eucalyptus

