Data Center Knowledge | News and analysis for the data center industry
Thursday, March 20th, 2014
Big Data: The New Crystal Ball for Deciphering NCAA March Madness
As March Madness kicks off in earnest today, data is the new crystal ball, playing a growing role in office pools and pundit prognostications. Big data scientists are using analytics to predict bids, and sponsoring competitions to master tournament bracketology.
Two examples: university business professors using SAS analytics software have accurately predicted the at-large teams in the NCAA tournament, and predictive analytics competition site Kaggle has teamed with Intel to launch March Machine Learning Mania, in which participants build analytical models to predict the outcome of the tournament.
Analytical Madness
More than a decade ago, professors Jay Coleman of the University of North Florida in Jacksonville, Allen Lynch of Mercer University in Macon, Georgia, and Mike DuMond of Charles River Associates and Florida State University in Tallahassee created the Dance Card – a formula designed to predict which teams will receive at-large bids to the NCAA Tournament (aka the Big Dance). For the 2014 bids announced recently, the Dance Card formula correctly predicted 35 of the 36 at-large bids. The model is a combined 108 of 110 over the last three years.
As a teaching tool for the professors’ students, the Dance Card analysis points to several significant factors that the Tournament Selection Committee weighs most heavily, including Rating Percentage Index, Sagarin rankings (USA Today), wins against top 25 teams, and other factors. In this video the professors discuss using SAS analytics to form the Dance Card formula and how the project came together.
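The professors' actual formula and coefficients are not reproduced here, but a minimal sketch of this style of model, assuming a simple logistic scoring of hypothetical team features (RPI rank, Sagarin rank, wins against top-25 teams), might look like the following. All weights and teams are purely illustrative, not the real Dance Card.

```python
import math

# Illustrative weights only -- not the professors' actual Dance Card
# coefficients, which they derive with SAS from historical selection data.
WEIGHTS = {
    "rpi_rank": -0.08,      # a better (lower) RPI rank raises the score
    "sagarin_rank": -0.05,  # a better (lower) Sagarin rank raises the score
    "top25_wins": 0.45,     # wins against top-25 opponents raise the score
}
INTERCEPT = 4.0

def at_large_probability(team):
    """Hypothetical probability of an at-large bid for one bubble team."""
    z = INTERCEPT + sum(WEIGHTS[key] * team[key] for key in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))  # logistic link

bubble_teams = [
    {"name": "Team A", "rpi_rank": 28, "sagarin_rank": 31, "top25_wins": 4},
    {"name": "Team B", "rpi_rank": 55, "sagarin_rank": 60, "top25_wins": 1},
]

for team in bubble_teams:
    print(f"{team['name']}: {at_large_probability(team):.2f}")
```

In the real Dance Card, the weights come from fitting years of selection committee decisions rather than being hand-picked, which is what lets it identify the factors the committee appears to weigh most heavily.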
Harnessing Machine Learning
Kaggle, the online platform for predictive modeling and analytics competitions, is running a contest that applies analytics to the NCAA tournament, called March Machine Learning Mania. Contestants are given nearly two decades of historical game data and challenged to turn that information into insight, building and testing models and then predicting the outcome of the 2014 tournament. The Intel (INTC)-sponsored challenge, which began in January, awards a $15,000 cash prize to the team with the most accurate predictions.
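As a rough illustration of the shape of such an entry (not any contestant's actual method), the sketch below assigns each matchup a win probability from a made-up power-rating gap and scores the predictions with log loss, the kind of probabilistic scoring rule these competitions typically use. The ratings and games are invented.

```python
import math

def win_probability(rating_a, rating_b, scale=10.0):
    """Hypothetical win probability for the first team from a rating gap."""
    return 1.0 / (1.0 + math.exp(-(rating_a - rating_b) / scale))

def log_loss(predictions):
    """Average log loss over (predicted probability, actual outcome) pairs."""
    eps = 1e-15  # clip probabilities to avoid log(0)
    total = 0.0
    for p, outcome in predictions:
        p = min(max(p, eps), 1.0 - eps)
        total += outcome * math.log(p) + (1 - outcome) * math.log(1.0 - p)
    return -total / len(predictions)

# Made-up power ratings (stand-ins for a model fit on historical results)
# and two made-up games: 1 means the first listed team won.
ratings = {"Team A": 92.0, "Team B": 84.5, "Team C": 88.0}
games = [("Team A", "Team B", 1), ("Team C", "Team A", 0)]

preds = [(win_probability(ratings[a], ratings[b]), result)
         for a, b, result in games]
print(f"log loss: {log_loss(preds):.3f}")
```

A lower log loss rewards entrants who are both accurate and well calibrated, which is why confident wrong picks are penalized so heavily.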
Media Madness
For more on the role of data analysis in predicting the NCAA tournament, see features by The Denver Post and FiveThirtyEight.
Structure 2014: The Cloud is the Computer
GigaOm Structure 2014 will convene June 18-19 at the Mission Bay Conference Center, University of California, in San Francisco.
The event is built on the premise that computing is now everywhere, and that understanding how to architect data centers, networks and applications for that reality is the challenge currently facing the technology and data center industry.
Register by April 11 and save $300.
Featured topics include:
- What the infrastructure powering tomorrow’s killer applications will look like.
- The use of cloud technologies to design, build and ship physical products.
- Do the terms public, private and hybrid cloud create a distinction without a difference?
- Can the vendors that powered the PC and client-server revolutions maintain their edge in the cloud computing era?
- What is the future of the internationalization of the cloud? What does the growth we’re seeing in China, Brazil and elsewhere tell us about the future?
Confirmed speakers will include:
- Andy Bechtolsheim, Founder, Chief Development Officer and Chairman, Arista Networks
- Diane M. Bryant, SVP, GM, Data Center Group, Intel
- Adrian Cockcroft, Technical Fellow, Battery Ventures
- Lance Crosby, CEO, SoftLayer, an IBM Company
- Sameer Dholakia, Group VP and GM, Cloud Platforms Group, Citrix
- Bill Fathers, SVP and GM Hybrid Cloud Services, VMware
- Urs Hölzle, Senior Vice President, Technical Infrastructure, Google Fellow, Google
- Chris Kemp, Founder, Nebula
- Vinod Khosla, Partner, Khosla Ventures
- Marten Mickos, CEO, Eucalyptus Systems
- Werner Vogels, VP and CTO, Amazon.com
- Scott Yara, President and Head of Products, Pivotal
Venue
Mission Bay Conference Center
1675 Owens Street, San Francisco, CA 94158
For further information and registration, visit the GigaOm Structure website.
For more events, return to the Data Center Knowledge Events Calendar.
Your Cloud Is Only as Secure as Your Provider
Organizations that are moving to the cloud are leveraging scalability, better connectivity, and modern delivery models for advanced workloads. As more data, applications and workloads traverse cloud infrastructure, cloud providers are creating new efficiencies, better data centers, and a more productive workforce. Unfortunately, this also creates new security targets.
As with any technology, the more popular the cloud becomes, the more security concerns grow. This holds absolutely true for your cloud environment and the data center provider you choose to work with. One of IT’s biggest balancing acts is making data easily available to authorized users while preventing everyone else from accessing those data assets.
With high-profile data security breaches splashed across headlines nearly every day, CIOs are understandably worried about protecting their data. And for IT leaders who are considering moving their business to the cloud, it is critical to ensure the provider they select has undertaken full and robust measures for physical and logical security.
In this white paper from QTS, you will quickly learn what you need to know before you select a data center provider or migrate to a colocation facility. As the paper outlines, research has shown that one of the biggest apprehension points about operating in the cloud is security. The public cloud in particular is a source of uncertainty for CIOs, but private and hybrid cloud services carry security concerns, too. While moving to the cloud carries a wide variety of benefits, such as enabling business agility, security remains a persistent concern.
Download this white paper today to learn the right questions to ask. Remember, modern cloud infrastructure is constantly evolving, which means your provider must be able to evolve as well. When selecting the right provider, there are some key questions to consider, including:
- How much experience do you have in data center services? And in what industries?
- Do you have experience in our industry with customers that have similar compliance needs?
- Where will my cloud data reside? Do you own your data centers, or do you lease from a third party?
- Do you have industry-leading physical and logical security? Describe technologies used and best practices for both types of security.
- Do you use industry standard methodologies like ITIL (Information Technology Infrastructure Library)?
- What is your security and data reliability track record?
- How fast could you recover in the event of a successful attack or disaster?
- How transparent are you with customers?
- Do you have a third party auditor to provide attestation of compliance?
The modern cloud platform will only continue to grow. More organizations are relying on the cloud to deliver part, or in some cases all, of their business model. When choosing a cloud partner, make sure to select a team that understands security, the cloud model, and how to deliver next-generation cloud services.
This white paper from QTS will help you sort the wheat from the chaff in the cloud services industry.
The Commoditization of Server Hardware
Jake Iskhakov is the Director of Sales & Marketing at ServerLIFT Corporation.
As the data center industry continues to grow, the server hardware market is undergoing a transformation. More companies appear to be moving away from brand-name hardware and single-vendor infrastructures in favor of commodity equipment that provides the same level of performance. A variety of factors are driving this trend, including the Open Compute Project, which continues to stimulate interest and participation in open source hardware design, as well as technological innovations that make facilities less dependent on any particular hardware.
Facility Convergence Helps Server Hardware Evolve
More companies are opting for wholesale data center leases while focusing on energy efficiency and server capacity. This trend is increasing the emphasis on the data center as an ecosystem, which promotes a “plug-n-play” environment. This type of optimized facility combines the best of each component, regardless of vendor, in order to establish the highest-performing facility possible.
In the past, customization at this level was available only to enterprises with the required resources. However, the rise of data center customers that need space and capacity but lack top-tier funds has created demand for facilities that offer tailored features and an eye toward the ecosystem. Additionally, compliance, certification and other industry-specific measures mean that different facilities have wildly divergent needs, so a one-size-fits-all approach may not work. The plug-n-play model, however, is built with these special requirements in mind.
The Role of Server Hardware in the Software-Defined Data Center
Software and virtualization occupy an ever-higher percentage of data center infrastructure. Physical servers remain crucial to facility functionality, but the nature of their relevance is changing. IBM, Lenovo and a number of other hardware providers are increasingly focused on the way their equipment functions within the larger facility, according to ITBusinessEdge.
While there will remain a need for specialized hardware, commodity server equipment can occupy a large segment of the market formerly reserved for vendor-specific assets. Ultimately, hardware infrastructure developers will have to cope with commoditization, either by recalibrating their efforts around commodity products or by forging partnerships with other data center ecosystem contributors.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Platfora Raises $38 Million to Grow Big Data Analytics
Venture capital continues to pour into big data companies, with big data analytics software company Platfora announcing a $38 million investment round. The round was led by Tenaya Capital, with participation from Citi Ventures, Cisco and Allegis Capital, as well as prior investors Andreessen Horowitz, Battery Ventures, Sutter Hill Ventures and In-Q-Tel.
The company will use the funds to invest aggressively in talent, technology and field organizations, positioning Platfora to lead organizations in a shift away from SQL-based databases to NoSQL technologies.
“The widespread adoption of big data infrastructure by mainstream enterprises presents a tremendous opportunity for analytics vendors. We believe Platfora’s unique intellectual property and its ability to help any company unlock new business opportunities from their data assets will resonate in the market,” said Brian Paul, Managing Director at Tenaya Capital.
Silicon Valley startup Platfora has now raised $65 million total, and lists customers like Disney, Comcast and Edmunds.com.
“Enterprises are awash in data but are struggling to gain useful insights. While intuition is critical in business, business executives readily admit that they are still making too many gut decisions because they cannot adequately access or analyze all of their data to make better informed decisions,” said Ben Werther, CEO at Platfora. “They cannot access or interpret these new large and heterogeneous big data sets fast enough. With our big data analytics platform, we’re helping organizations participate and win in the Fact-based Economy which we estimate will unlock $5 trillion in new value in the next 10 years through novel uses of big data.”
“For business analysts, the promise of Big Data Analytics can be very seductive. However, they are often hampered by a plethora of immature, emerging technologies that often require significant IT or programming skills,” said Dr. Barry Devlin, founder of 9sight Consulting. “The Holy Grail for Big Data Analytics is to provide information workers with direct data access and analytical capabilities at their desktops, through an integrated environment powerful enough to understand, manipulate, and analyze any data source no matter its age or structure. When line-of-business workers are given this capability and trained to use it wisely, companies experience a quickening effect—offering potential competitive advantages once unattainable or even unimaginable.”
SAP HANA Powers Operations Bundle To Fuel Big Data Insights
SAP has launched an intelligent business operations bundle to help enterprises infuse big data insights into their processes in real time, while HP has unveiled the HP ConvergedSystem for SAP HANA, a portfolio of integrated systems.
At the Gartner Business Process Management Summit in London this week, SAP launched the intelligent business operations bundle, designed to help organizations work smarter and respond faster to threats and opportunities. Enterprises struggle to maintain a single view of their processes and to use big data in a way that empowers users; the new bundle addresses this by letting organizations embed big data insights into their processes in real time.
“The full value of Big Data only comes when you embed its insights in your business processes,” said Sanjay Chikarmane, senior vice president and general manager, Global Technology Solutions, SAP. “With the new intelligent business operations offering, we aim to help organizations to make use of Big Data in real time to run their processes more efficiently and intelligently. This solution lets customers take advantage of the convergence of process technology and operational intelligence which is considered to be the next generation of business process management.”
The new offering is based on the SAP HANA platform and features SAP Operational Process Intelligence software, SAP NetWeaver Process Orchestration software, SAP Event Stream Processor, and SAP PowerDesigner software. It allows for complete upstream and downstream visibility across process documentation, implementation and analysis by sharing common standards-based process, data and interface models.
HP’s Fast Path to SAP HANA
HP (HPQ) unveiled the HP ConvergedSystem for SAP HANA, a portfolio of integrated systems that are purpose-built to deliver clients a fast path to value when using the SAP HANA platform. The new portfolio delivers clients the architecture to quickly deploy these next-generation data management platforms with systems that can easily scale to meet evolving business needs—from managing analytics and data warehousing workloads to running mission-critical business applications. Unifying the servers, storage, networking, software and services clients need to run their SAP HANA environment, these all-in-one systems are quick and easy to install, accelerating time to value.
Building on the experience of over 800 system implementations for SAP HANA, the new ConvergedSystem portfolio for SAP HANA includes the ConvergedSystem 500 for analytics and data warehousing workloads, ServiceGuard for SAP HANA, and “Project Kraken,” an incubated system that will enable clients to simplify their SAP HANA infrastructure and speed business operations with up to 12 terabytes of data in a single memory pool to power mission-critical business apps.
“Organizations are making long-term, strategic architectural bets for their data centers and data management platforms,” said Tom Joyce, senior vice president and general manager, Converged Systems, HP. “SAP HANA provides a catalyst for business transformation and HP has the architecture, expertise and vision to meet its infrastructure needs. HP is investing in delivering the infrastructure that clients need to meet requirements of environments running SAP HANA.”
Global Network News: Juniper and Level 3
Juniper Networks is selected by GTT Communications and EVA Air to help speed up and simplify their networks, and Level 3 begins construction of a new undersea cable to enhance global connectivity options for Cali, Colombia.
Juniper selected by GTT for 100GE global offering. Juniper Networks (JNPR) announced that GTT Communications has selected Juniper to help power its offering of 100GE capacity on a global scale. The GTT high-IQ network uses Juniper MX Series routers and the latest generation of high-density MPC4E line cards. As part of a corporate-wide initiative to simplify its operations, GTT standardized its network on the Junos OS, which has allowed it to streamline its entire network from edge to core. By using the automation capabilities of Junos OS, GTT can deploy new service offerings faster and automate the corresponding back-office operations to support and monetize those offerings in a timely fashion. “The growth of video and other bandwidth-intensive applications has made scaling our network capacity and optimizing provisioning lead-times absolutely critical elements when selecting our network infrastructure,” said Richard Steenbergen, chief technology officer at GTT Communications. “As a leading cloud networking service provider, we must innovate to keep pace with the increasing demand for low-cost bandwidth, while continuing to deliver the reliability and quality of service that our customers have come to expect. Standardizing on Juniper technology gives us a competitive edge, and the flexibility that we require to offer customized services in a crowded marketplace.”
Juniper also announced that EVA Airways has successfully deployed a high-IQ network to support its data center and campus operations based on Juniper’s high-performance Ethernet switches. Juniper’s next-generation switching infrastructure has enabled the airline to increase the performance, reliability and scalability of its network while simplifying operations and reducing costs. EVA Air also selected Juniper EX Series Ethernet Switches to replace all of its complex access network switches and legacy three-layer architecture within its headquarters complex, creating a more manageable, scalable and functional infrastructure.
Level 3 begins construction of undersea cable in Colombia. Level 3 Communications (LVLT) announced the construction of a new undersea cable connecting Colombia to its international network. The terrestrial segment of the cable in Colombia is being constructed in conjunction with EMCALI (Empresas Municipales de Cali), a state-owned utilities services company in Cali, Colombia. The new subsea route enhances Colombia’s international connectivity by adding a Pacific submarine cable and removing the reliance on traditional connectivity via the Caribbean coast. EMCALI will use capacity over Level 3’s undersea cable network to connect the country directly from Cali to major cities in the Americas, such as New York, Los Angeles, Mexico City, Santiago de Chile, Buenos Aires and São Paulo. “Our collaboration with Level 3 on this project fits perfectly into our strategic regional plan and focuses resources on each company’s core business. Additionally, the importance of the U.S.-Colombia FTA (Free Trade Agreement) and the Pacific Alliance make EMCALI the ideal partner to join with Level 3 on this project, highlighting EMCALI as a leader in providing telecommunications services for the region and countrywide,” said Oscar Pardo, managing director of EMCALI.
Level 3 also announced that it has signed an agreement with Cinemark Brazil to enhance the efficiency of Cinemark’s communications systems by providing the company with Internet services in São Paulo to support connectivity to its VPN (virtual private network) of 67 movie theater complexes, distributed across 35 Brazilian cities. “Level 3’s scalable, efficient and reliable IP network can provide our customers with fast and dedicated global access to business systems and applications,” said Marcos Malfatti, senior vice president of Sales for Level 3 in Brazil. “Level 3’s advanced Internet backbone provides Cinemark with a seamless experience, whether uploading or downloading information between its theaters, as well as the added assurance of dedicated network security, management and 24/7 technical support to help the future growth of its business.”
The Green Grid Unveils Energy Productivity Metric for Data Centers
The Green Grid has announced a new framework for measuring “useful work” in the data center. It is a metric five years in the making.
A global task force on data center efficiency announced agreement on standard approaches and reporting conventions for data center energy productivity (DCeP). It has been a long road to DCeP, as it has been difficult to reach agreement on a definition. The new approach will move industry reporting beyond the current Power Usage Effectiveness (PUE) benchmark, creating a more detailed metric that takes several other factors into consideration and applies measurement to the business itself.
“Overall, global data center traffic is estimated to grow threefold from 2012 to 2017, and although data centers are becoming more efficient, their total energy use is projected to grow,” said Deva Bodas, principal engineer and lead architect for Server Power Management at Intel Corporation and board member for The Green Grid. “With escalating demand for data center operations and rising energy costs, it is essential for data center owners and operators to monitor, assess and improve performance using energy efficiency and greenhouse gas emission metrics. This is why the recommendations of the taskforce are so important.”
DCeP is an equation that quantifies useful work that a data center produces based on the amount of energy it consumes. The Green Grid is no stranger to attempting to quantify data center efficiency, coming up with many of the standards used today, such as Power Usage Effectiveness (PUE), which compares a facility’s total power usage to the amount of power used by the IT equipment, revealing how much is lost in distribution and conversion.
DCeP allows an organization to define “useful work” as it applies to its business. For example, a retail business may use number of sales as the measure for useful work, while an online search company may use the number of searches completed.
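As a quick worked illustration of the two metrics described above, using invented numbers:

```python
# Worked example with invented numbers, following the definitions above.

# PUE: total facility energy divided by energy used by the IT equipment.
total_facility_kwh = 1_500_000   # annual energy drawn at the utility feed
it_equipment_kwh = 1_000_000     # annual energy delivered to IT equipment
pue = total_facility_kwh / it_equipment_kwh
print(f"PUE  = {pue:.2f}")       # 1.50: half a kWh of overhead per IT kWh

# DCeP: useful work produced divided by total energy consumed, where
# "useful work" is whatever unit the business chooses -- sales for a
# retailer, completed searches for a search company.
useful_work = 12_000_000         # e.g. completed transactions in the period
dcep = useful_work / total_facility_kwh
print(f"DCeP = {dcep:.1f} transactions per kWh")
```

Because each organization picks its own unit of useful work, DCeP is most meaningful for tracking a single facility over time rather than for comparing unrelated businesses.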
The Challenge of Defining Productivity
The Green Grid’s effort to develop a productivity metric has been complicated by the differences in online businesses and how they measure “useful work” in the data center. Green Grid first set its sights on this in 2009. In October 2012, the broader global task force organized by The Green Grid reached consensus on the use of Green Energy Coefficient (GEC), Energy Reuse Factor (ERF) and Carbon Usage Effectiveness (CUE) metrics. These metrics were in addition to guidelines and specific measurement protocols for PUE, perhaps the most standard metric in use today when measuring efficiency in the data center. But PUE only shows a small part of a larger picture.
“Productivity is difficult to measure in a heterogeneous environment,” said Mark Monroe, the CTO at DLB Associates and past executive director of The Green Grid, at the recent DataCenterDynamics Converged event in New York. “It’s apples and oranges and hammers. Everyone would like a generic ‘how am I doing’ metric, but it’s a basic measurement. The higher you get in the business stack, the less relevant the metric will be.”
Now the Green Grid and the broader task force have finally arrived at a flexible approach to this challenge of measuring productivity. DCeP is computed as useful work produced divided by total energy consumed by the data center. DCeP allows each user to define useful work as applicable to that user’s business, creating a custom metric that is meaningful for each environment. There is a chance that inconsistencies in data center comparisons will develop with this approach, but the intent is that over time, through use of the metrics and communication, the industry can harmonize the attributes and minimize inconsistency in comparisons.
Rising Profile for Data Centers
Data centers are an increasingly important part of most business operations, with escalating demand coupled with rising energy prices. Now more than ever it is important to assess and improve performance. A white paper discusses proper, uniform ways in which a data center can begin to measure Power Usage Effectiveness (PUE), Green Energy Coefficient (GEC), Energy Reuse Factor (ERF), and Carbon Usage Effectiveness (CUE).
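For orientation only, here is a minimal sketch of how these companion metrics are commonly computed, using invented figures and the widely cited Green Grid formulations; the task force white paper remains the authoritative reference for the measurement protocols.

```python
# Invented annual figures; the formulas follow the commonly cited
# Green Grid definitions -- see the task force white paper for the
# authoritative measurement protocols.
total_energy_kwh = 1_500_000    # total data center energy consumption
it_energy_kwh = 1_000_000       # energy delivered to the IT equipment
green_energy_kwh = 450_000      # energy sourced from certified green supply
reused_energy_kwh = 90_000      # waste energy reused outside the data center
co2_kg = 600_000                # CO2-equivalent emissions from energy use

gec = green_energy_kwh / total_energy_kwh    # Green Energy Coefficient
erf = reused_energy_kwh / total_energy_kwh   # Energy Reuse Factor
cue = co2_kg / it_energy_kwh                 # Carbon Usage Effectiveness

print(f"GEC = {gec:.2f}, ERF = {erf:.2f}, CUE = {cue:.2f} kgCO2e per IT kWh")
```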
DCeP is seen as a more complete measure of data center efficiency than Power Usage Effectiveness (PUE), which measures the power overhead between the utility feed and IT equipment, but doesn’t capture efficiency gains inside the data center in servers and storage.
The Green Grid has done exhaustive work in the field of metrics and is generally seen as the authority when it comes to measuring efficiency in the data center. The new metric for useful work, over 5 years in the making, will change the way we look at efficiency.
The global task force includes The Green Grid, the U.S. Department of Energy, the U.S. Environmental Protection Agency, the European Commission, Japan’s Ministry of Economy and Japan’s Green IT Promotion Council. With this fourth and final public memo, the task force concludes five years of work to harmonize directions designed to improve key energy efficiency metrics within data centers.
Data Center Knowledge Editor-in-Chief Rich Miller contributed to this story.