Data Center Knowledge | News and analysis for the data center industry
Friday, May 24th, 2013
12:00p
DOD Cloud Adoption Helps U.S. Troops Stay Connected

Soldiers at the Fort Stewart, Ga., Education Center work on class material and catch up on their e-mail. The U.S. Department of Defense’s adoption of the cloud has helped soldiers in the United States and deployed abroad stay connected with their families. (Photo: U.S. Army)
As the U.S. Department of Defense (DOD) adopts new types of globally distributed technologies, it is working to improve communications for servicemembers, both on the battlefield and with family at home.
While those of us in the corporate world may take for granted the ability to relay a message clearly between multiple parties, communications in the military’s IT world are often mission-critical.
The military’s shift in technology is happening at all levels and the DOD is fully embracing the new infrastructure push. Just two years after its inception, the Department of Defense Enterprise Email system has reached its one millionth user. This milestone means that the DOD Enterprise Email (DEE) is now one of the largest independent email systems in the world.
“For the war fighters, using DEE means wherever they are, they can use their email, whenever they need it. It is not necessary to start a new email account when you move or deploy. It is as mobile as the servicemember,” said Air Force Lt. Gen. Ronnie Hawkins, Director of the Defense Information Systems Agency (DISA).
With so many important users at any given time, the DOD and DISA are working to ensure optimal performance and maximum communication capability for U.S. troops. The landscape for the typical soldier or sailor has changed quite a bit. Just a few years ago, communicating with home meant a long wait and a short chat. Now, with better WAN connectivity and solid infrastructure, U.S. soldiers based all over the world can connect with friends and family through everyday consumer services: a soldier at Bagram Airbase can reach his or her loved ones via Skype, Facebook and even Gmail.
These are common tools that civilians use every day. Now, because of advancements in globally distributed infrastructure design, these same platforms can bring home a little closer to the people defending national interests abroad.
Here’s a look at some of the behind-the-scenes infrastructure projects that are helping to bolster cloud, Internet and WAN-based communications for the military.
Data Center Consolidation
The data center is changing to support more cloud, more data and a lot more users. The DOD quickly realized that it needed to update its data center infrastructure to keep pace. With almost 1,200 data centers, there was a direct need to consolidate and deploy better, more efficient technologies. In fact, DISA’s announcement of the DEE milestone indicates that in using DEE, the DOD is doing just that.
DEE saves the department millions of dollars by leveraging the buying power of the entire DOD. Enterprise services reduce costs by consolidating system hardware requirements and maintenance and by eliminating unnecessary, inefficient administration and resource allocation. That means the military services and defense organizations using enterprise services can save money on IT services and preserve more resources for their primary mission.
Unified computing systems, converged infrastructures and intelligent hardware components are finding their way into the DOD’s data center environment. These high-density environments can be locked down, diversified and used to support numerous different services. They support virtualization and even logical segmentation of workloads, which means administrators can granularly control how communications and data enter and leave their data centers.
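The segmentation idea can be illustrated with a toy model. The sketch below is purely hypothetical (the class, segment names and policy scheme are invented for illustration, not drawn from DISA’s actual stack): a single logical controller tracks which workloads belong to which segment, and traffic crosses segment boundaries only when a rule explicitly permits it.

```python
# Toy model of logical workload segmentation: one controller tracks which
# workloads belong to which segment and which segments may cross-communicate.
# All names and the policy scheme are invented for illustration.
class SegmentController:
    def __init__(self):
        self.segments = {}    # segment name -> set of workload names
        self.peering = set()  # unordered segment pairs allowed to talk

    def add(self, segment, workload):
        """Place a workload inside a logical segment."""
        self.segments.setdefault(segment, set()).add(workload)

    def allow(self, a, b):
        """Permit cross-communication between two segments."""
        self.peering.add(frozenset((a, b)))

    def can_talk(self, a, b):
        """Traffic within a segment is allowed; across segments only if peered."""
        return a == b or frozenset((a, b)) in self.peering

ctrl = SegmentController()
ctrl.add("email", "dee-frontend")
ctrl.add("logistics", "supply-db")
ctrl.allow("email", "logistics")

print(ctrl.can_talk("email", "logistics"))  # True: explicitly peered
print(ctrl.can_talk("email", "finance"))    # False: no peering rule exists
```

The useful property is the default-deny posture: anything not explicitly peered cannot communicate, which is the "granular control" the paragraph above describes.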
Improved Global Infrastructure
Directly related to these consolidation and efficiency efforts has been direct improvement in the global data center and communications infrastructure. What can be achieved now with intelligent switching technologies is truly impressive. One logical switch controller can deliver hundreds – even thousands – of virtual connections. These connections can be controlled individually and can cross-communicate when necessary. Virtual appliances can be deployed at various points in an environment, creating a highly agile system. Software-defined networking has helped the DOD create more redundancy on a global scale. Furthermore, the increase in bandwidth and direct communication links has improved the type of information that can be passed point-to-point, and the speed at which that data travels has greatly increased as well.

1:08p
Skyhigh Networks Gets $20 Million to Detect Rogue Clouds

Do you know where your corporate data is living? Some of it may reside in cloud services without your knowledge, due to the ease of moving files and workloads to services like Dropbox, Box and Amazon Web Services. A startup has developed software to help companies get their arms around this “shadow IT,” and investors like what they see.
Cloud visibility and control company Skyhigh Networks announced that it has received $20 million in Series B financing. Sequoia Capital led the round, with participation from existing investor Greylock Partners. Aaref Hilaly, partner at Sequoia Capital, has joined Greylock’s Asheem Chandna on Skyhigh Networks’ Board of Directors.
The Skyhigh Networks Cloud Services Manager is a multi-tenant service that discovers, analyzes, and controls cloud services in use within an organization. It will see all cloud services in use, identify anomalous behavior and opportunities for consolidating subscriptions, and enforce key security and usage policies. Skyhigh will use the new capital to expand its sales, marketing, and engineering teams to meet the increasing demand for its services and to extend its leadership in the cloud visibility and control market.
“The rapid spread of BYOD and cloud computing has led to vast numbers of cloud services being adopted, often with no involvement from corporate IT,” said Hilaly. “Skyhigh sets itself apart from other security companies by giving IT a unique ‘searchlight’ to find these cloud services, assess the risks involved in using them, and control the confidential data stored in them – all in a way that’s respectful of the end user. We are thrilled to partner with Rajiv and his exceptional team who have a long history of delighting customers with innovative products.”
Cupertino, California-based Skyhigh Networks was a finalist for the RSA Conference 2013 Most Innovative Company award and was recently named a “Cool Vendor” by Gartner. Its customers include Cisco, Equinix, and Torrance Memorial Medical Center.
“We had no comprehensive way of knowing which services were in use, where outgoing data was headed, and what risks these cloud services implied for our business,” said Steve Martino, vice president, Information Security, at Cisco. “The number of cloud providers we were using was definitely an eyebrow raiser. We knew there would be a number of them, but we were surprised by exactly how many showed up.”

1:27p
Data Center Jobs: CBRE Seeking Building Engineers

At the Data Center Jobs Board, we have three new job listings from CBRE, which is seeking a Sr. Building Engineer – Critical Systems in Alpharetta, Georgia; San Dimas, California; and Roseland, New Jersey.
The Sr. Building Engineer – Critical Systems is responsible for the day-to-day operation, maintenance, building rounds, critical equipment systems monitoring, and modification of assigned critical environment building systems. These may include: mechanical (including HVAC, computer room air conditioners, chillers, and plumbing); electrical (including UPS, DC battery systems, PDUs, generators, transfer switches and switchgear); fire detection and suppression; life safety; lighting; and building control systems. The role also covers operating, maintaining, monitoring, and performing preventive, predictive, and corrective maintenance on building equipment, including mechanical/HVAC/plumbing systems, electrical/cabling, fire detection and suppression, life safety, lighting, temperature control systems, building management systems, and digital systems. To view full details and apply, see the job listing details for Georgia, California, or New Jersey.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

2:00p
Optimizing Infrastructure for the Big Data V’s – Volume, Velocity and Variety

Patrick Lastennet is director of marketing & business development, financial services segment, for Interxion.
The use of big data is still in its early stages for many industries, but the financial services industry has been dealing with big data for years; it has already been managed and embedded into core financial processes. What used to be done in hours can now be done in minutes, thanks to advanced data processing capabilities applied to everything from capital market portfolio management applications to financial risk management. Before such advancements, data from previous days or weeks was analyzed to help re-strategize market approaches for the next day’s trading. Now, with more complex data analytics capabilities, financial firms can shorten that processing window and adjust strategies and trades in real time.
However, it’s not just the increasing volume of data sets that is of concern to financial firms. There’s also the velocity and variety of the data to consider. When pulling clusters of diverse databases together for both structured and unstructured data analysis, financial firms rely on having powerful processing speeds, especially as real-time insight is increasingly a key strategic factor in market analysis and trading strategies. But are financial institutions equipped with the proper infrastructure to effectively handle the three V’s of Big Data – volume, velocity and variety – and benefit from real-time data analysis?
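As a concrete illustration of the velocity dimension, the sketch below keeps only a fixed-size rolling window of the most recent prices and recomputes a simple volatility figure on every tick, rather than in an end-of-day batch. It is a minimal, hypothetical example (the class name, window size and tick values are all invented), not any firm’s actual risk engine.

```python
# Minimal sketch of per-tick ("velocity") risk analysis: keep only a fixed-size
# rolling window of recent prices and recompute volatility on every tick instead
# of in an end-of-day batch. Window size and tick values are invented.
from collections import deque
import statistics

class RollingRisk:
    def __init__(self, window=5):
        self.prices = deque(maxlen=window)  # oldest tick drops out automatically

    def update(self, price):
        """Ingest one tick and return the current windowed volatility."""
        self.prices.append(price)
        if len(self.prices) < 2:
            return 0.0
        return statistics.pstdev(self.prices)

risk = RollingRisk(window=5)
latest = 0.0
for tick in [100.0, 100.5, 99.8, 101.2, 100.9, 100.7]:
    latest = risk.update(tick)  # a real system would alert on thresholds here

print(round(latest, 3))
```

The same pattern generalizes from a standard deviation to any incremental risk metric; the key design choice is that memory and per-tick cost stay constant no matter how fast the stream arrives.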
Increasing the Value of Real-Time Operations
With real-time data analysis, financial institutions are better able to manage risk and alert customers to issues as they happen. A firm that can manage risk in real time not only achieves better trading performance but also stays on the right side of regulatory compliance. Such improvements can be seen on the consumer side in enhanced credit card transaction monitoring and fraud protection and prevention. But, on a larger scale, the most recognizable incident that would have benefited from better data analysis may have been the collapse of Lehman Brothers.
When Lehman Brothers went down, it was called the Pearl Harbor moment of the U.S. financial crisis. Yet it took the industry days to fully understand how firms were exposed to that kind of devastating risk. For every transaction made, it is imperative that financial firms understand the impact – or, in the extreme scenario, risk another “Lehman-esque” collapse. Today, with advancements in big data analysis and data processing, whenever a trader makes a trade, the risk management department can know the consequences in real time – that is, if the firm has the right infrastructure.
Optimizing Current Infrastructure
The crux of handling the volume, velocity and variety of big data in the financial sector lies in the underlying infrastructure. Many financial institutions’ critical systems still depend on legacy infrastructure. Yet to handle increasingly real-time operations, firms need to find a way to wean themselves off legacy systems and become more competitive and responsive to their own big data needs.
To address this issue, many financial institutions have implemented software-as-a-service (SaaS) applications that are accessible via the Internet. With such solutions, firms can collect data through a remote service without worrying about overloading their existing infrastructure. Beyond SaaS apps, other financial companies have addressed their infrastructure concerns by using open source software that allows them to simply plug their algorithms and trading policies into the system, leaving it to handle their increasingly demanding processing and data analysis tasks.
In reality, migrating off legacy infrastructure is a painful process. The time and expense required to handle such a process means the value of the switch must far outweigh the risks. Having a worthwhile business case is, therefore, key to instigating any massive infrastructure migration. Today, however, more and more financial firms are finding that big data analysis is impetus enough to make a strong business case and are using solutions like SaaS applications and open source software as stepping stones for complete migrations to ultimately leave their legacy infrastructure behind.
Integrating Social Data
While the velocity and variety of big data volumes from everyday trading transactions and market fluctuations may be enough of a catalyst for infrastructure migrations and optimization, now that social data is creeping into the mix, the business case becomes even more compelling.

2:07p
Friday Funny: What’s the Best Caption?

Happy Friday! We’re more exuberant than usual today here at Data Center Knowledge because we are heading into a three-day holiday weekend in the United States: the beginning of summer, with picnics, parades and vacations planned. Yay!
That means before we go, it’s time for a little data center humor. We run our caption contest on Fridays, with cartoons drawn by Diane Alber, our favorite data center cartoonist! Please visit Diane’s website, Kip and Gary, for more of her data center humor.
The caption contest works like this: we provide the cartoon and you, our readers, submit the captions. We then choose finalists, and readers vote for the funniest suggestion.
This week, we are voting on suggestions for Green Lighting in the Data Center. Please scroll down and vote!
For the previous cartoons on DCK, see our Humor Channel.

2:45p
Fortress Buys arvato to Boost Modular Capabilities

Fortress International Group consults on data center design, a field that is rapidly evolving, particularly with regard to modular architecture. This is part of the impetus behind the data center consulting and engineering specialist’s acquisition of the data center integration services business of arvato digital services for $1.5 million.
This acquisition expands Fortress’ capabilities so it can perform a full range of services both to the rack and inside the rack. The acquisition is expected to generate over $10 million of annualized revenue and be accretive to Fortress International Group’s results in the second half of 2013.
“We are very excited to expand our data center services offerings to include integration services of IT equipment,” said Anthony Angelini, CEO of Fortress, which is perhaps best known for its Total Site Solutions brand. “This acquisition is strategically significant and will be financially accretive to our business. Strategically, the acquired integration business is an ideal complement to our current offerings in both the traditional and modular data center markets. As the company evolves, our ability to perform a full range of services for data center customers both to the rack, and now inside the rack, will enhance the value proposition that our customers gain by trusting their data center requirements to us.”
About arvato
arvato provides custom rack layout design and configuration for large enterprise IT solutions consisting of large banks of computer servers, digital information storage and networking equipment, with custom cabling, power and cooling within data center racks and mobile or containerized data centers.
The business also includes testing and deployment, including onsite installation and network set-up of completed data center racks and mobile or containerized data centers. Additionally, the business provides configuration services including the configuration of IT equipment, which consists of loading applications or systems software, customizing memory or storage capacities, adding peripherals, and testing (including hardware power-up testing, diagnostics and software boot testing).
“Modular data center services represent an important and growing business for us, and, with this acquisition, we now offer an unmatched set of capabilities to the overall data center market and particularly the modular data center space,” said Angelini. “The transaction represents a key step along our strategic roadmap, and we are very excited about the team of people joining us as a result of the acquisition. We anticipate a number of synergies in both business development and cost savings as we integrate our sales teams, existing customers and management teams. The transaction further provides an enhanced end to end solution to both existing and potential customers.”
Fortress (FIGI) also said this week that it had secured a new credit facility through Bridge Bank to support the company’s growth strategy and provide financial flexibility. The facility provides a line of credit of up to $6 million over the next two years.

4:02p
Microsoft Will Back Xbox One With 300,000 Servers

Serious Server Density: Packed racks of servers in an IT-PAC at the Microsoft data center in Quincy, Washington. (Photo: Microsoft Corp.)
With this week’s unveiling of the new Xbox One gaming system, Microsoft is more talkative than usual about the infrastructure supporting its Xbox platform. The reason? Microsoft says the new console will be able to tap cloud resources to enhance the game experience. What will that mean for the infrastructure supporting the Xbox platform?
Servers. Lots of servers.
“When we launched Xbox Live in 2002, it was powered by 500 servers,” Microsoft’s Marc Whitten said in introducing the new platform. “With the advent of the 360, that had grown to over 3,000. Today, 15,000 servers power the modern Xbox Live experience. But this year, we will have more than 300,000 servers for Xbox One.”
Those servers will expand the Xbox network’s storage capacity, enabling users to store their saved games and entertainment in the cloud. But how will these cloud servers work with the console to deliver the Xbox One gaming experience? Matt Booty, general manager of Redmond Game Studios and Platforms, provides some answers in a discussion with Ars Technica. At the heart of the issue is “lag,” or latency – the delay seen in online connections as data moves between the server and the hardware in your home.
Booty says cloud assets will be used on “latency-insensitive computation” within games. “There are some things in a video game world that don’t necessarily need to be updated every frame or don’t change that much in reaction to what’s going on,” said Booty. “One example of that might be lighting,” he continued. “Let’s say you’re looking at a forest scene and you need to calculate the light coming through the trees, or you’re going through a battlefield and have very dense volumetric fog that’s hugging the terrain. Those things often involve some complicated up-front calculations when you enter that world, but they don’t necessarily have to be updated every frame. Those are perfect candidates for the console to offload that to the cloud—the cloud can do the heavy lifting, because you’ve got the ability to throw multiple devices at the problem in the cloud.” This has implications for how games for the new platform are designed.
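The pattern Booty describes – keep per-frame work local and push expensive, latency-insensitive jobs elsewhere – can be sketched in a few lines. The example below is a hypothetical stand-in (a thread pool plays the role of the cloud, and the lighting job is simulated with a sleep), not Microsoft’s actual Xbox One architecture: the frame loop keeps rendering with the last known lightmap until the offloaded calculation completes.

```python
# Hypothetical sketch of the offload pattern: per-frame work stays local while
# an expensive, latency-insensitive lighting job runs asynchronously (a thread
# pool stands in for the cloud; sleep() stands in for network + compute time).
# The frame loop keeps using the last known lightmap until the result lands.
from concurrent.futures import ThreadPoolExecutor
import time

def compute_lighting(scene_id):
    """Expensive calculation that does not need to finish this frame."""
    time.sleep(0.05)  # simulated round-trip and compute latency
    return f"lightmap-for-{scene_id}"

executor = ThreadPoolExecutor(max_workers=1)
pending = executor.submit(compute_lighting, "forest")  # fire and forget
lightmap = "default-lightmap"  # placeholder until the offloaded result arrives

for frame in range(10):
    if pending is not None and pending.done():
        lightmap = pending.result()  # swap in fresh lighting, no frame stall
        pending = None
    # ...latency-sensitive per-frame work (input, physics, rendering) here...
    time.sleep(0.01)

if pending is not None:  # keep the sketch deterministic for the demo
    lightmap = pending.result()
executor.shutdown()

print(lightmap)
```

The design point is that the frame loop never blocks on the slow job: a stale-but-valid result is acceptable for lighting or fog, which is exactly what makes such work a candidate for the cloud.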
So how does Microsoft scale from 15,000 to 300,000 servers? More data centers. This year we’ve profiled major expansions at three of Microsoft’s data center hubs, in Virginia, Ireland and Washington state. Check out those stories for a closer look at the “cloud end” of Microsoft’s online experience.

6:30p
Data Direct Networks Powers a 100-Petabyte Cloud

DataDirect Networks (DDN) announced that University College London (UCL) has selected DDN to provide up to 3,000 researchers with a safe and resilient storage solution that is expected to scale up to 100 petabytes.
The first phase of the UCL project for enabling researchers to share, reuse and preserve project-based research data will use DDN object storage technology to store up to 600TB of research data.
UCL sought to remove the burden of storing and preserving research data from individual researchers and, in doing so, lower the barriers to sharing and exploiting vital findings in order to improve research outcomes and tackle problems of global significance. With DDN’s WOS (Web Object Scaler) distributed object storage architecture, its GRIDScaler parallel file storage system and tight integration with the integrated Rule-Oriented Data System (iRODS), UCL forecasts it will save money – up to hundreds of thousands of UK pounds – by slashing the hardware, power and staffing costs, as well as the maintenance fees, associated with acquiring and maintaining personal data stores across 100 departments, institutes and research centers.
“We were very interested in building a relationship with a strong storage partner to fill our technology gap,” said Dr. J. Max Wilkinson, head of Research Data Services for University College London’s Information Services Division. “After a thorough assessment, DDN met our technical requirements and shared our data storage vision. In evaluating DDN, we agreed that the WOS solution had a simple proposition, was high performance and had low administration overhead.”

6:50p
Juniper Delivers Big Data Analytics Solution

Here’s a roundup of some of this week’s headlines from the big data analytics sector:
Juniper Network Analytics Suite. Juniper Networks (JNPR) unveiled the Junos Network Analytics suite, a family of next-generation big data analytics and network intelligence solutions that now includes the BizReflex and NetReflex products. Developed with help from big data analytics company Guavus, the new products give service providers a tool to optimize their routing network assets, increase revenue opportunities and attract and retain customers. BizReflex extracts and analyzes information from edge and core routers to allow operators to segment enterprise customers according to their respective value and price services accordingly, improving margins and customer retention. NetReflex gives operators more insight than previously possible into traffic patterns on the network, allowing network service providers to reduce costs through informed decision-making and improve the efficiency of their networks. “Our analytics solutions have been built from the ground up to unlock the value of network-generated data by dramatically increasing the speed and scale at which business insights can be delivered and better business decisions can be made,” said Anukool Lakhina, founder and CEO, Guavus. “We are pleased to be working with Juniper Networks to deliver a network analytics solution that allows customers to optimize their IP/MPLS assets for more efficient network operations, reduce costs and increase revenue.”
Splunk and Hortonworks form alliance. Splunk and Hortonworks announced a strategic alliance to enable organizations to gain operational intelligence using open source Apache Hadoop. The alliance ensures that organizations can take advantage of Splunk Enterprise and Hortonworks Data Platform (HDP) utilizing Splunk Hadoop Connect, which easily and reliably moves data between Splunk Enterprise and Hadoop. “Splunk Enterprise delivers more value as organizations integrate larger and more diverse datasets to the platform. Hortonworks, with its open source roots and ability to operate on Microsoft Windows Server, brings an entirely new data proposition to Splunk customers utilizing Hadoop,” said Bill Gaylord, senior vice president of business development, Splunk. “The integration of HDP and Splunk Hadoop Connect opens up exciting new data possibilities. For example, customers can easily use Splunk Enterprise to collect machine data from across the organization and deliver it to Hadoop for batch analytics. Likewise, the output of Hadoop jobs can be imported into Splunk Enterprise for rapid analysis and visualization.”
Blue Coat to acquire Solera Networks. Blue Coat Systems announced it has entered into an agreement to acquire big data security intelligence and analytics company Solera Networks. The Solera DeepSee platform will add industry-leading security analytics and forensic capabilities to the Blue Coat product portfolio, delivering an end-to-end security solution that includes protection, remediation and governance and gives enterprises complete visibility into the content and context of advanced targeted attacks. “Today’s approach to securing the enterprise is missing an essential element – the ability to defend, react and resolve security issues by efficiently mining a very large dataset of network history to gain previously unavailable insights. The future of the industry is moving beyond just blocking malware and stopping targeted attacks to also identifying and resolving the full scope of the attacks in real time,” said Greg Clark, CEO at Blue Coat Systems. “Retrospective capture and analytics are now an essential component of modern security architecture, and Solera has pioneered this field, creating a DVR for the network that records traffic and allows customers to easily mine that information.”