Data Center Knowledge | News and analysis for the data center industry
Tuesday, April 1st, 2014
12:00p
Data Center Jobs: VISA
At the Data Center Jobs Board, we have a new job listing from VISA, which is seeking a Chief Engineer in Littleton, Colorado.
The Chief Engineer is responsible for supporting service delivery organizations in designing engineering solutions for operational issues; leading the assessment of capacity and resource utilization of existing data center infrastructure; ensuring technical documentation of the environment is up to date and consistently maintained; reviewing existing data center design and infrastructure to identify gaps and plans for improvement; leading the research and evaluation of emerging technologies and industry and market trends to assist in project development and operational support activities and to define and evolve the data center strategy; identifying trends in infrastructure performance data to identify engineering enhancements; and handling vendor selection and technical evaluation. To view full details and apply, see the job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
12:25p
Cloud Security Solutions for Hybrid Clouds
Gilad Parann-Nissany is the founder and CEO of Porticor Cloud Security. He is a pioneer in the field of cloud computing who has built SaaS clouds, contributed to SAP products and created a cloud operating system. He writes about the importance of cloud encryption and encryption key management for PCI and HIPAA compliance. Gilad can be found on his blog.
When large enterprises move to a public infrastructure cloud (such as Amazon Web Services or others), it is a gradual and often carefully measured process. Large enterprises strive for 100 percent certainty that the migration process will not impact the business; therefore, they’ll usually start slowly, by migrating one application or process to the cloud.
This is where hybrid clouds kick in. Hybrid clouds offer (as their name suggests) a hybrid between on-premise and cloud infrastructure. But once even part of the business is in the cloud, the need arises for cloud security. As data is migrated away from the local “safe” data center, access to the information is no longer controlled by the enterprise, and different, cloud-oriented security measures must be considered.
Hybrid Cloud Example: Components and Risks
Let’s take a classic hybrid cloud example and dissect its components and risks. Recovery-as-a-Service (RaaS) contains an on-premise component, usually a physical appliance or an agent of some sort. Additionally, it requires a cloud component: the technology and capacity to allow an organization to recover from failure away from the data center.
The pros are many, most notably the ability to dramatically reduce costs by moving from a physical recovery solution to a pay-per-use cloud solution.
With the benefits come the risks. As soon as data is no longer in the hands of the enterprise, but rather resides in the cloud, cloud security (and in most cases, cloud compliance as well) becomes a primary concern. An enterprise must make cloud security a top priority to ensure that its data is as secure in the cloud as it was in the data center.
Cloud Security Best Practices for Hybrid Clouds
Cloud encryption is considered best practice and a “must-have” as part of any cloud security architecture. It allows for data segregation using mathematical walls instead of the physical walls of the data center world. But in fact, when it comes to cloud security, cloud encryption is the easy part.
The challenge is mostly with the encryption keys. Or, more accurately, who controls and manages your encryption keys? Would you trust your banker to hold on to your safety deposit box key? Probably not, and for a good reason! Same goes for cloud security best practice: never trust your cloud provider to manage the encryption keys for you. As with your safety deposit box, you, and only you, should own your key.
Recent cloud security advancements present an innovative approach to the key management issue using technologies such as split-key encryption. Going back to the banker metaphor, with split-key encryption the encryption key is split in half, allowing the customer to maintain control of the encryption key while, at the same time, hosting its most sensitive data in the cloud. Such technologies enable secure migration to the cloud and support hybrid use cases such as RaaS.
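To make the split-key idea concrete, here is a minimal sketch (in Python, purely illustrative and not Porticor’s implementation) of two-part key splitting: the customer keeps one share, the cloud service stores the other, and neither share alone reveals the data-encryption key.

```python
# Illustrative two-part key splitting (XOR secret sharing).
# NOT any vendor's actual implementation; names and sizes are hypothetical.
import os

KEY_BYTES = 32  # 256-bit data-encryption key


def split_key(master_key: bytes) -> tuple[bytes, bytes]:
    """Split a key into two shares; either share alone reveals nothing."""
    customer_share = os.urandom(len(master_key))  # held only by the customer
    provider_share = bytes(a ^ b for a, b in zip(master_key, customer_share))
    return customer_share, provider_share


def recombine(customer_share: bytes, provider_share: bytes) -> bytes:
    """Both shares are required to recover the original key."""
    return bytes(a ^ b for a, b in zip(customer_share, provider_share))


if __name__ == "__main__":
    master_key = os.urandom(KEY_BYTES)
    c_share, p_share = split_key(master_key)
    assert recombine(c_share, p_share) == master_key
    print("key recovered only when both shares are present")
```

Real products layer key wrapping, rotation and access controls on top of this, but the basic property is the same: without the customer’s share, data stored in the cloud cannot be decrypted.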
Don’t Delay Cloud Adoption Because of Cloud Security
We often fear the unknown. We have been inundated with opinions professing that the cloud is not secure. This is not necessarily the case. As our computing environments have advanced, so have the security protocols that protect them. By using the latest cloud security models, your data can be as secure in the cloud as it was in the data center (in my opinion, it is even more secure this way).
But, what about the breaches? Between the media storm caused by Snowden and the NSA and the one caused by the Target credit card hackers, we are led to think that everyone who operates in the cloud will meet their demise. Again, simply not the case. First of all, breaches happen in the physical world too – whether it is a stolen laptop or an employee “mole,” security breaches were not born in the cloud. In fact, migrating to the cloud with cloud security measures like split-key encryption ensures that fewer “hands” touch your data and therefore, reduces the access points.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:30p
Cloud Migration: Knowing When to Make the Move
Cloud computing carries a number of great benefits for organizations that need a new delivery platform. Cloud migration is upon us as information technology undergoes rapid change and organizations of all types begin to embrace the idea of moving computing infrastructure from on-premises to the cloud.
It is easy to understand why the cloud has taken off faster than any technology phenomenon in recent memory: it has the potential to reduce total cost of ownership (TCO) while enabling quicker responses to fast-moving markets and ever-changing customer needs.
Still, it’s not always quite so simple. A modern organization’s IT needs are growing more complex, making flexibility and scalability critical even as capital expenditures are constrained. In this white paper from QTS, we quickly learn how a cloud model can create a compelling argument for IT to be as efficient as possible while aligning with service providers that can help them do this. It’s critical for organizations to create operational excellence while still working to improve the business. This is especially the case as IT embraces (and in some cases is disrupted by) technologies that various lines of business bring into the enterprise.
Download this white paper today to learn about some key cloud migration factors. While the questions of “why” and perhaps “how” to move to the cloud are becoming better understood, the issue of when to move is less clear. CIOs at companies of all sizes, across all industries, are pondering when the right time to move to the cloud is and which applications and parts of their technical infrastructure should be included.
The cloud can enable many organizations to do great things with their infrastructure. New delivery platforms create direct optimizations around workload delivery and application access. There is clear maturity around many different cloud services. The key will be to understand which cloud model is optimal and when a move into the cloud is right. This type of decision comes from a number of factors, including business vision, strategic ability, corporate readiness and, of course, budgeting and finance.
There are obvious triggers for cloud migration, including the desire for fewer outages, product launches, technology refreshes, M&A transactions, or the need to otherwise overcome technological obsolescence. Still, in many cases, the decision of when to migrate is less clear-cut. When moving to a cloud platform, take the time to understand your own infrastructure and how a cloud model can help your organization both today and in the future.
1:42p
SanDisk Expands CloudSpeed Family of SSD Enterprise Storage
At the Interop and Cloud Connect IT conference in Las Vegas Monday, SanDisk (SNDK) announced four new additions to its CloudSpeed Serial ATA (SATA) product family of solid state drives (SSDs). The new SSDs are optimized for demanding transaction processing and content repository workloads in enterprise and cloud computing environments.
The next-generation CloudSpeed SATA SSDs from SanDisk utilize a 19 nanometer process. These MLC NAND flash SSDs are high performance and optimized for enterprise server and cloud computing environments. They come in capacities from 240GB to 960GB, feature a 6Gb/s SATA interface and provide data transfer rates of up to 450/400 MB per second sequential read/write and performance of up to 80K/15K IOPS random read/write.
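For a rough sense of how those random-I/O ratings compare with the sequential figures, the back-of-the-envelope conversion below assumes a 4KB I/O size, an assumption on our part since the announcement does not state the block size behind the IOPS numbers.

```python
# Back-of-the-envelope: convert the quoted random IOPS to MB/s,
# assuming 4 KB operations (block size is an assumption, not from the article).
READ_IOPS, WRITE_IOPS = 80_000, 15_000
IO_SIZE_KB = 4

read_mb_s = READ_IOPS * IO_SIZE_KB / 1024    # ~313 MB/s of random reads
write_mb_s = WRITE_IOPS * IO_SIZE_KB / 1024  # ~59 MB/s of random writes
print(f"random read ~{read_mb_s:.0f} MB/s vs. 450 MB/s sequential")
print(f"random write ~{write_mb_s:.0f} MB/s vs. 400 MB/s sequential")
```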
“Organizations in every industry are turning to SSDs to meet the challenges faced by growing volumes of data,” said John Scaramuzzo, Senior Vice President and General Manager, Enterprise Storage Solutions at SanDisk. “However, finding a solution that meets the performance and cost needs of your individual environment can be difficult. The CloudSpeed SATA SSD product family was designed to meet the needs of applications across the read-write spectrum, meaning organizations and server designers no longer need to make tradeoffs between system performance and cost.”
Range of Capabilities
The SanDisk CloudSpeed SATA product family features four SSDs spanning a range of endurance and performance capabilities. The CloudSpeed Extreme achieves 10 full drive writes per day (DWPD) and delivers up to 75K/25K IOPS of random read/write performance for write-intensive application workloads such as database logging and high performance computing. It supports up to 14.6 Petabytes Written (PBW) to the drive over its life. The CloudSpeed Ultra offers up to 3 full DWPD and is optimized for mixed-use application workloads such as Online Transaction Processing (OLTP), financial transactions, email/messaging, e-commerce, Virtual Desktop Infrastructure (VDI) and collaboration. The CloudSpeed Ascend is designed for read-intensive application workloads such as file servers, web-based applications and virtual tape libraries, with capacities ranging from 240GB to 960GB. The CloudSpeed Eco is an entry-level SSD for read-intensive workloads such as web servers, content repositories, photo sharing, media streaming and cloud computing.
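The DWPD and PBW figures are two views of the same endurance budget. As a rough cross-check, assuming an 800GB drive (within the quoted capacity range) and a five-year endurance term, both assumptions since the announcement quotes only the 10 DWPD and 14.6 PBW figures, the numbers line up:

```python
# Rough endurance cross-check: how DWPD relates to total Petabytes Written (PBW).
# The 800 GB capacity and 5-year term are illustrative assumptions;
# the article quotes 10 DWPD and 14.6 PBW without stating the inputs behind them.
capacity_gb = 800
dwpd = 10                  # full drive writes per day
term_days = 5 * 365

total_pbw = capacity_gb * dwpd * term_days / 1_000_000
print(f"{total_pbw:.1f} PBW")  # -> 14.6 PBW, matching the quoted figure
```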
The CloudSpeed SSDs employ a combination of powerful error correction and detection technology, full data path protection, thermal monitoring, and data fail recovery of up to 1 NAND flash erase block. Additionally, SanDisk manufactures its own NAND and has direct control over the design, controller firmware, assembly, testing and supply chain, which delivers innovative products, enterprise-grade quality control and supply assurance. These enterprise SSDs are powered by SanDisk’s Guardian Technology Platform, a suite of enterprise features that include FlashGuard, DataGuard and EverGuard technologies. FlashGuard combines Aggregated Flash Management and Advanced Signal Processing to reliably extract significantly more life from cost-effective 19nm MLC flash, making it suitable for read-intensive application workloads.
“We are only at the early stages of becoming a data-driven society that wants instantaneous access to our information,” said Jeff Janukowicz, Research Director at IDC. “To deliver real-time or near real-time insight to drive business decisions, organizations worldwide will need to consider high performance storage solutions, such as SSDs, thereby driving demand for these solutions in the coming years.”
2:30p
HP Boosts SDN-Enabled Unified Networking Portfolio
At Interop Las Vegas this week, Hewlett-Packard (HPQ) announced new networking solutions designed to support mobility initiatives for businesses of all sizes. The new cloud-managed, software-defined networking (SDN)-enabled unified wired and wireless network solutions will help customers increase the agility of their networks while simplifying their management.
“Our customers are telling us that their networks are primarily cost centers that are too complex to manage, while they are at the same time facing increasing end-user demand for a better mobile experience,” said Bethany Mayer, senior vice president and general manager, Networking Business Unit and Network Functions Virtualization Business, HP. “With HP’s cloud-managed and SDN-enabled unified wired and wireless solutions, we are enabling IT organizations to provide an improved mobile user experience, as well as offering simplified management and investment protection.”
To address legacy network infrastructures, the HP Cloud Managed Network provides an easy-to-manage network solution for small and mid-sized businesses (SMBs) and distributed offices. The solution lowers total cost of ownership by reducing the need for onsite IT staff and lowering upfront costs by up to 30 percent with a pay-per-use cloud service model. Enabling organizations to support the growing population of mobile devices, the new HP 560 and 517 802.11ac wireless access points offer network agility. The HP 560 access points will also be OpenFlow-enabled, empowering customers to leverage SDN applications without having to rip and replace existing infrastructure.
A new HP 870 Unified Wired-WLAN Appliance provides reliable application performance for end users, supporting up to 30,000 devices. For midsize to large enterprises, the HP 850 Unified Wired-WLAN Appliance supports up to 10,000 devices. HP is supporting customers’ bring your own device (BYOD) initiatives by partnering with Citrix and MobileIron to offer best-in-class mobile device management (MDM) integration with the HP Intelligent Management Center (IMC) platform.
“Enterprises need adaptable, robust, easily deployed and secure wireless LAN solutions as more users are accessing the enterprise network via mobile devices while increasingly using these devices for mission-critical applications,” said Rohit Mehra, vice president, Network Infrastructure, IDC. “SDN-based and cloud managed wireless solutions will provide tools that offer network managers greater flexibility and scalability and, more importantly, programmability of networking resources to support the needs of their businesses.”
SDN-enabled Location Aware
Powered by technology developed at HP Labs, a new HP Location Aware SDN application locates any wireless-enabled device indoors with approximately two meter accuracy. HP Location Aware enables a wide range of new context-aware retail, asset management and security applications. This application will be integrated with the HP Virtual Application Networks (VAN) SDN controller and is designed to help businesses transform wireless LAN infrastructure into revenue-generating vehicles.
HP Communications and Media Solutions (CMS), leveraging big data analytics and Location Aware technology, has developed a proof-of-concept application called SmartShopper. Designed for both retailers and service providers, SmartShopper enables enterprises to tighten relationships with their customers and monetize the network by delivering real-time, location-based offers to customers’ smartphones. Using HP’s location- and context-aware Telco Big Data and Analytics technology, organizations can increase sales by personalizing the shopping experience and drive revenue through targeted promotion of relevant products.
3:00p
Sungard Availability Splits From SunGard, Rebrands
Sungard Availability Services is now a stand-alone company, with a new logo, new branding and a lower-case “g” in its name. As it splits from SunGard Data Systems, the company is asserting its independence while maintaining the continuity of the most familiar brand in disaster recovery.
SunGard announced in January that it would spin off its Availability Services business, which operates its data centers and disaster recovery business. The goal was to “bring greater clarity” to each company’s vision, SunGard said.
Both SunGard and SunGard Availability Services will continue to be privately owned by the consortium of private equity investment funds from Bain Capital, The Blackstone Group, Goldman Sachs, Kohlberg Kravis Roberts, Providence Equity Partners, Silver Lake and TPG, which acquired SunGard in a leveraged buy-out in August 2005.
Why A Spinoff?
So why spin off the unit to the same group of owners? Sungard Availability now has its own board of directors, and may have more flexibility to pursue partnerships and develop new offerings to compete in the business continuity arena, where it is facing a growing challenge from new cloud providers.
“Now that we are an independent firm, we have the flexibility to evolve our culture, our industry relationships and our investments to maximize our business and best serve customers,” said Andrew Stern, CEO of Sungard Availability.
That “greater clarity” could also apply to investors, as the spinoff creates a more focused entity should Sungard Availability seek buyers or pursue an IPO.
SunGard got its start in 1978, when it effectively invented the disaster recovery business, renting space at 401 North Broad Street in Philadelphia to house backup data for itself and 20 other Philadelphia-area companies. In 1983 Sun Oil spun off the business as SunGard – short for “Sun Guaranteed Access to Recovered Data.”
Sungard Availability Services now has 7,000 customers, annual revenues of approximately $1.4 billion and operations in 11 countries.
As part of its new identity, Sungard Availability is now using a lower-case “g” in its name as a differentiator from SunGard. Sungard AS also has a new logo designed to convey strength and dynamism. “A forward-leaning angle in the logo conveys progression and growth, while a triangle in the logo represents stability and the support that the company will continue to provide its customers,” Sungard said.
But the bottom-line messaging for Sungard remains the same: Always on, always available.
3:36p
Industry Vendors Form CLR4 100G Alliance for 100G Optics Specification
At Interop 2014 in Las Vegas this week, Intel (INTC), Arista Networks and many other key industry vendors announced the 100G CLR4 Alliance. With the goal of creating a new, open, multi-vendor 100G optics specification, the alliance brings together end customers, system companies and optical companies, and focuses on the market requirements of large data center customers.
Cost-Effective 100G Optics for the New Data Center
The data explosion coming from mobile, social, big data and the cloud brings into focus the pent-up demand for cost-effective 100G optics, which telco-focused 100G solutions to date have not addressed. The open specification will define a low-power 100G CWDM optical transceiver in the QSFP form factor, with a reach of up to 2 kilometers over duplex single-mode fiber. Preliminary supporters of the specification include Dell, eBay, HP, Brocade, Oracle, Ciena and many others.
The 100G CLR4 Alliance solution targets four primary goals: the smallest 100G QSFP form factor, a 75 percent reduction in fiber count, low power (<3.5 watts) at long distance (1-2 km), and high density, with 36 100G ports in one rack unit. The power, size and costs associated with 100G optical transceivers geared toward telecom players don’t meet the needs of large data center companies. The longer reach addresses the large gap that exists between 100 meters and 2 kilometers. The proliferation of 10G and 40G solutions has also brought about large and numerous form factors for 100G; data centers require the smallest form factor possible for maximum port density.
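The fiber-savings claim is easiest to see with a quick count. Assuming the comparison baseline is a parallel single-mode style link that uses eight fibers per 100G port (an assumption, as the article does not name the baseline), versus CLR4’s single duplex pair carrying four multiplexed wavelengths, the quoted 36-port rack unit works out as follows:

```python
# Illustrating the quoted 75% fiber reduction. The 8-fiber parallel baseline
# (4 transmit + 4 receive fibers per 100G port) is an assumed comparison point.
PORTS_PER_RU = 36                     # 100G ports per rack unit, from the article

fibers_parallel = PORTS_PER_RU * 8    # parallel single-mode style link
fibers_clr4 = PORTS_PER_RU * 2        # one duplex single-mode pair per port

reduction = 1 - fibers_clr4 / fibers_parallel
print(f"{fibers_parallel} vs {fibers_clr4} fibers per RU ({reduction:.0%} fewer)")
```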
“There are telecom centric optical transceivers today operating at 100Gbps, but their power, size and costs are non-starters for the new data centre. Thus, there is a huge gap that needs to be filled for reaches that span from say 100m to 2km. And that’s the problem we are trying to address here,” said Dr. Mario Paniccia, director of Photonics Research at Intel Labs.
“Arista is excited about the 100G CLR4 Alliance,” said Arista Networks founder Andy Bechtolsheim. “We need to accelerate the time-to-market for cost-effective, low-power 100G CLR4 QSFP form factor optics that address the 2km reach requirements of large data center customers. We believe an open multi-vendor effort is the best way to bring this to market.”
The alliance expects to move fast, with a preliminary spec due later this month and a published consensus spec next month. The group is open for anyone to participate in, with the goal of getting the industry to build and deploy 100G CLR4 products.
6:15p
FORTRUST Adds More Modular Capacity in Denver
Colocation provider FORTRUST has expanded the capacity of its Denver data center by deploying additional IO data center modules, the company said today.
FORTRUST has experienced increased client demand for space in its modular data centers, prompting the new deployment. FORTRUST now has more than 2.4 megawatts of capacity deployed via IO.Anywhere data modules, and said it expects to add more capacity in coming months.
By using IO modules, FORTRUST was able to boost its IT end-user capacity by 5.2 megawatts without having to expand the physical structure of the facility, while significantly lowering the data center’s Power Usage Effectiveness (PUE), a key measure of energy efficiency. FORTRUST also hosts modules in IO’s facilities in Phoenix and Edison, N.J.
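PUE itself is a simple ratio: total facility power divided by the power delivered to the IT equipment, with 1.0 as the theoretical ideal. The snippet below uses made-up numbers for illustration only, not FORTRUST’s measured figures.

```python
# Power Usage Effectiveness: total facility power / IT load power.
# The example values are hypothetical, chosen only to show the calculation.
def pue(total_facility_kw: float, it_load_kw: float) -> float:
    return total_facility_kw / it_load_kw

print(pue(total_facility_kw=3_000, it_load_kw=2_400))  # -> 1.25
```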
Accelerates Speed to Market
“We are the sole provider of the IO.Anywhere technology in Colorado, which further increases the demand we are seeing in this market,” said David Shepard, Senior Vice President of Sales and Marketing. “As a result, speed to market is crucial to our expansion. Through our advanced plug-n-play infrastructure, and IO’s industry-leading technology, we are able to deploy additional modules quickly, efficiently and in a standardized manner to meet these growing customer needs.”
IO has been a pioneer in the emerging market for modular data centers that are built in a factory using repeatable designs and can be shipped to either an IO data center or a customer premises. In addition to its IO.Anywhere modular technology, the company has also developed IO.OS software for managing complex data center infrastructures across multiple sites.
“FORTRUST’s decision two years ago to adopt a modular approach to data center design, construction, and delivery versus a traditional raised floor approach has proven to be a big success with our customers,” said Rob McClary, FORTRUST’s Senior Vice President and General Manager. “Our customers now enjoy increased per rack densities of greater than 20kW along with direct visibility into our DCIM systems for real-time power, temperature, and other pertinent information for their colocation environment. We are proud to remain the exclusive provider of the IO.Anywhere® technology in Colorado.”