Data Center Knowledge | News and analysis for the data center industry
Friday, October 11th, 2013
12:00p
The Zero-Client: The Next Generation in Client Computing
As IT consumerization accelerates and data centers become more powerful, services can be delivered to thin-client and "zero-client" devices.
The days of the PC as we know it are numbered. Corporations are already dealing with IT consumerization and demands around mobility, and the evolution of the data center has helped IT departments deliver more with far less. Modern data center platforms have become the home for many new technologies. With higher-density, multi-tenant computing, increased resiliency, and better overall resource utilization, more organizations are centralizing their entire business model around their data center platform. With better bandwidth and resource capabilities available, cloud computing and virtualization have accelerated that shift.
With that comes the next-generation end-point. What's the point of having big, resource-intensive machines sitting at every user's location? Why dedicate extra hours to repairs, maintenance and life-cycle management? Why create this extra work when the entire end-user experience can now be delivered directly from your data center down to a tiny end-point device?
In fact, virtualization and compute technologies have gone even further, allowing heavier, resource-intensive applications to run well inside the data center. For example, NVIDIA's GRID pass-through technology integrates directly with the hypervisor and allocates the GPU's full capabilities to a virtual machine – running on XenDesktop 7, for example. That virtual desktop can then be streamed down to a very small hardware footprint. GPU pass-through has been available for some time; the big difference now is better resource utilization for the virtual desktop and the ability to place more users per GPU.
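The article describes the Citrix/NVIDIA GRID stack; purely as a generic illustration of the same pass-through idea on an open-source stack (an assumption for illustration, not the setup described above), a KVM/libvirt host can hand a physical GPU's PCI address straight to a virtual desktop. The domain name and PCI address below are hypothetical.

```python
# Minimal sketch of GPU PCI pass-through on a KVM/libvirt host.
# Illustrates the general pass-through concept only; the NVIDIA GRID /
# XenDesktop stack mentioned above uses its own vendor tooling.
import libvirt

# Hypothetical PCI address of the GPU to hand to the guest.
HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("vdi-desktop-01")   # hypothetical virtual desktop name
# Persist the device assignment in the domain's configuration so it
# survives a restart of the virtual desktop.
dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
conn.close()
```

Full pass-through dedicates the whole card to one desktop; the higher per-GPU user densities mentioned above come from sharing the card, which relies on the vendor's vGPU layer rather than a simple assignment like this.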
Smaller, Faster End Points
In designing more efficient corporate environments, IT managers must look to end-points that are easier to manage, faster to deploy and require less overhead. The introduction of thin clients paved the way for a small, easy-to-control end-point. The challenge, in many cases, has been the price: these terminals would still cost between $300 and $400, and many IT managers would argue that the minor savings in management were outweighed by the performance gains a bigger PC might deliver. Still, as the IT infrastructure continued to evolve, virtual applications, desktops, and the data centers that support them all became much more efficient. And, as a result, the end-point evolved. Here's a look at where we're heading:
- Breaking the $100 barrier. What's the point of deploying an end-point if it isn't cost-effective? Zero-clients look to change that by breaking the $100 barrier. These devices will deliver workloads, connect to the central data center, and be easy to manage. Already we see devices at $150 and below. As data center resources become even more centralized and powerful, much of the processing will be offloaded to the data center, allowing the end-point to get even smaller and less expensive.
- Centralizing the data and the management. With faster network closets and better data delivery mechanisms, the end-point really doesn't need to be complex. With no moving parts and essentially just one main board, zero-clients act as direct terminals for delivering virtual workloads. All of the data is centrally managed and controlled, which means that if a device is lost, the data is still safe. Pushing new images and controlling versions also becomes simpler, and centralized management consoles allow full control and visibility over the end-point environment.
- Rip/replace methodology. It takes more time and money to replace a hardware component, reinstall software, or troubleshoot an issue at the end-point level than many may think. For $100, it’ll become standard practice to go to the end-point, unplug it, and put a new one in – making the workload available immediately after a network connection is established.
- Content redirection and management. A zero-client isn't just some weak little end-point. In fact, these devices are able to deliver HD content without lag. Furthermore, administrators can control whether the device processes some of the content locally or whether it is rendered at the data center. This kind of visibility into how traffic flows lets managers deliver an even more powerful end-user experience. The idea isn't to lock down or restrict the user; if these devices deliver a poor user experience, deployment will be a serious challenge. That's why next-generation end-points are designed to run efficiently and make the most of the bandwidth and resources they are given.
- Flexibility around security and compliance. The great part about zero-client computing is the flexibility around security and compliance. Not only is the data always held centrally, the end-point never retains the information. If a device is stolen, no data can be pulled from the machine. Administrators always centrally control the data, how it's delivered and where it's being accessed from. With visibility into the information flowing in and out of these zero-clients, security administrators can set better data loss prevention policies and keep a close watch on data flow.
Vendors like nComputing and Wyse are working hard to replace the big PC end-point with better and more efficient computing platforms. New chips, more bandwidth, and faster networks are all simplifying the end-point and enhancing the data delivery process. As cloud computing and virtualization continue to pick up steam, the end-point community will benefit. By creating an easy-to-manage end-point environment, managers can focus on improving the end-user experience without having to worry about the machine they're deploying. The ability to consistently deliver a fast, easy-to-access workload will create a more efficient (and happier) end-user.

12:30p
Taking Your Cloud Deployment to the Next Level
Aaron Patrick is Cloud Architect at Markley Group. He is an accomplished systems architect with more than a decade of experience in the information technology industry. Aaron leads the design, development and deployment of the company's cloud computing platform, featuring Infrastructure-as-a-Service (IaaS).
AARON PATRICK, Markley Group
In today’s IT landscape the benefits of cloud computing – flexibility, lower costs, higher productivity – are well understood. However, now that the term “cloud computing” is everywhere, companies need to work proactively to ensure they have put their business in the best possible position to succeed now and in the future.
A huge component of this success will depend on where your organization's cloud infrastructure is housed. The data center that your cloud calls home will have certain capabilities and features that may be the difference between your cloud keeping pace and falling behind. Uptime, network bandwidth and security are some of the most important aspects of the data center infrastructure that companies must take into account.
To succeed, however, you need to know what to look for. Below are some key variables that need to be taken into consideration when searching for a data center partner that can manage your cloud computing infrastructure:
Availability
One of the biggest reasons companies opt for the cloud is the increased flexibility it provides. No longer do employees need to be hardwired into a company's server to access data and complete critical functions. This flexibility also allows businesses to outsource their infrastructure needs and costs to a data center, so they do not need to maintain and staff the facility themselves.
However, this all relies on the data center's ability to ensure the cloud is active, accessible and doesn't suffer downtime that brings productivity to a screeching halt. Before agreeing to house your essential cloud infrastructure in any data center, take the time to ensure it is reliable and gives you the confidence that your cloud will be available whenever you need it. Examine the uptime rates for potential data centers and, if there have been outages in the past, look into what caused them. Repeated incidents can be a sign of a larger problem.
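To make those uptime rates concrete, it helps to translate an advertised availability percentage into the downtime it actually permits. A quick back-of-the-envelope calculation (the percentages below are just common SLA tiers, not any particular provider's commitment):

```python
# Convert an advertised uptime percentage into allowed downtime per year.
HOURS_PER_YEAR = 365 * 24

for uptime_pct in (99.9, 99.95, 99.99, 99.999):
    downtime_minutes = HOURS_PER_YEAR * 60 * (1 - uptime_pct / 100)
    print(f"{uptime_pct}% uptime allows about {downtime_minutes:.0f} minutes of downtime per year")
```

Roughly speaking, 99.9 percent still permits nearly nine hours of downtime a year, while 99.99 percent permits under an hour, so the gap between tiers is larger than the decimal places suggest.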
Disaster Recovery
Strong data center partners will also allow for increased redundancy and backup capability, so that your data would not be lost in the event of a disaster. The peace of mind that all of your valuable data would be recoverable is extremely important, especially in today's big data world. Research what kind of cloud backup solution the data center offers; backing up your information offsite is an added layer of protection against disaster.
Cross-Connection
Network bandwidth is an extremely important, if sometimes overlooked, factor in cloud performance. Companies should look for a data center with multiple network providers available. One option to consider is placing your infrastructure in a carrier hotel, a colocation facility where many carriers are physically present. With a large number of providers all under one roof, customers gain increased bandwidth options and network reliability.
Carrier hotels may also offer lower bandwidth costs, since competing providers in the same location drive the price down. Yet another bonus that some data centers provide is a direct cross-connect into the carrier's router itself. This kind of connection further lowers costs while increasing security, performance and reliability – all things you need to ensure your cloud has a strong infrastructure behind it.
Security
As previously mentioned, a cross-connect with a carrier's router can increase security for your cloud, but it is not the only way to improve it. Many companies store much of their most confidential and important information on their cloud servers, which makes security of the utmost concern for IT departments enabling or upgrading a deployment. Data centers, and specifically enterprise-class facilities, offer very robust security options for cloud infrastructure and provide 24/7/365 monitoring to ensure that the data stored there is safe.
Hybrid Cloud Deployment and the Next Level
These features cover some of the most important aspects of a hybrid cloud deployment and are worth weighing for IT departments looking to take their cloud strategy to the next level.
As the industry continues to evolve and change at a breakneck pace, companies need to make sure their data center will help their cloud infrastructure grow and thrive. By following these guidelines, IT departments will be able to find the right partner and ensure that they are not left behind.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

2:27p
Interxion To Build Ninth Frankfurt Data Center
Cabinets inside an Interxion data center. The company has announced its ninth facility in the Frankfurt market. (Photo: Interxion)
European data center service provider Interxion has continued to build its presence in Frankfurt, announcing plans for its ninth data center in that key European financial and data hub. The company has also joined a government-backed carbon footprint reduction program, and announced customer wins in London and Vienna.
Interxion (INXN) announced it will construct its ninth data center on the Frankfurt campus (FRA9) in response to customer demand. Set to open early next year, the facility will be built in a single phase and will provide approximately 800 square meters of equipped space and 1 megawatt of available power. Interxion will also accelerate the availability of the second 900 square meter phase of its announced FRA8 build.
“Demand for our products and services continues to be strong in Frankfurt, supported by our communities of interest approach and bolstered by the stable German economy,” said David Ruberg, Interxion’s Chief Executive Officer. “We are experiencing growth across multiple segments, including cloud providers, financial services, and digital media.”
Reducing carbon footprint
Interxion also announced that it has joined a government-backed program to help UK data center operators reduce their carbon footprint. Interxion will join Alquist's data center temperature monitoring pilot, supported by government funding. The company hopes that through the pilot scheme it will achieve significant CO2 reductions by installing Alquist's Celsius temperature monitoring system at its City of London data center.
“Being at the forefront of energy efficiency has always been a desire of Interxion and this pilot further reinforces this,” said Kevin Dean, Chief Marketing Officer at Interxion. “We look forward to being involved within the project and working with Alquist and the Government to drive efficiencies across the data centre industry.”
Customer Wins
Interxion announced that the Australian Securities Exchange (ASX) has chosen to host its ASX Net Global Point of Presence at Interxion's City of London data center. ASX Net Global will provide London- and European-based trading firms with cross-connect access to all of ASX's derivatives and equity products, particularly its flagship interest rate futures. In doing so, ASX is able to tap into the data center's financial community of more than 100 capital market participants, including investment firms, high-frequency trading firms, hedge funds, brokers and service providers.
“Interxion’s City of London DC has long been the home of London’s financial community and offers us a ready-made marketplace for our services,” said David Raper, ASX’s General Manager, Trading Services. “With European firms increasingly looking at Australian derivatives products for investment and risk management, it’s very beneficial to be able to offer them direct connectivity to ASX’s world-class offerings via our ASX Net Global PoP in Interxion. With ASX Net Global our customers can connect to us directly either within Interxion’s community or via our European network into our PoP at Interxion.”
Interxion also announced that the Vienna Stock Exchange has recently relocated one of its data centers to Interxion's Vienna data center campus. The Vienna facility was selected for its track record of security and reliability, its market-leading range of connectivity, and its position as a leading cloud and connectivity hub in Central and Eastern Europe.
“We are very pleased to welcome the Vienna Stock Exchange as a customer,” said Christian Studeny, Managing Director of Interxion Austria. “As with other important financial services providers, the company uses our Vienna data centre as a safe location for their applications and data. They benefit from our robust connectivity options to offer their customers more effective and efficient services. We see a strong increase in demand for our services from banks and market data providers.”

3:15p
TDS Acquires MSN Communications, Plans Colorado Data Center
Telephone and Data Systems, parent of TDS Telecommunications, announced the acquisition of MSN Communications for $40 million, as well as the construction of a Tier 3 data center southeast of the Denver metro area. Telephone and Data Systems has made multiple strategic acquisitions to better position its subsidiary, TDS Hosted and Managed Services, including OneNeck IT Services Corp. and VISI. The most recent acquisition and facility announcement boost its footprint and capabilities in Colorado.
TDS Hosted and Managed Services expects to break ground on the facility early next year. The data center will be built in five phases, each of up to 100,000 square feet. The engineering of the facility will mirror TDS HMS data center facilities in the Midwest. The company will deploy ReliaCloud in the data center, providing a local cloud solution alongside colocation services.
The new facility will complement the acquisition, as MSN Communications is also headquartered in Englewood, Colorado. The deal expands the portfolio of TDS Hosted and Managed Services (TDS HMS), while MSN adds colocation and cloud services to its own offerings.
“I believe this is the right time and the right company for MSN Communications to join forces with,” says Doug Schuck, CEO of MSN Communications. “It’s a tremendous moment in our company’s 20 year history. I’m proud of our employees for building a company that is now part of a Fortune 500 organization.”
Phil LaForge, president of TDS HMS, added, “Doug Schuck and the MSN Communications employees have built an IT services powerhouse acting as a trusted advisor to companies throughout the Rocky Mountain Region. By joining forces with TDS HMS, MSN Communications is adding colocation and cloud services to their already robust services catalog.”
MSN Communications generated annual revenues of $99 million in 2012. Its roots are as a value-added reseller, but it has added hosted and managed IT services that complement the TDS HMS portfolio of products.
The company has 104 employees and offers a range of products and services around IT infrastructure, including planning, engineering, procurement, installation and management. The workforce holds a slew of certifications, and the company has top partner status with several of its vendors, including Cisco, EMC, VMware, VCE, Appspace, NetApp and F5.
TDS Telecommunications Corp. manages the operations of TDS Hosted & Managed Services, LLC (TDS HMS), which consists of OneNeck IT Services Corp., Vital Support Systems, VISI Inc., and now MSN Communications. It is a fast-growing business unit, as hosted and cloud solutions have been growing in general, and the acquisition and the planned new facility give TDS HMS a significant boost.

5:24p
IBM Looks to Fix Network Constraints In Cloud
IBM has a patent that takes a stab at the noisy neighbor problem in cloud computing. The company developed and patented a method to dynamically manage network bandwidth within a cloud. IBM received U.S. Patent #8,352,953 for the new method in software-defined networking (SDN), which could lead to better system performance, efficiency and economy in the cloud. This could boost IBM's SoftLayer cloud efforts, as it offers a potential fix to network performance problems for cloud users.
“This is the type of investment in invention and innovation that is needed to be a leader in the competitive cloud computing market,” said Dennis Quan, vice president of strategy, IBM cloud services. “IBM inventors are focused on researching and developing new cloud computing technologies and techniques that will pave the way to leadership for IBM and its clients.”
The patented method, “Dynamically Provisioning Virtual Machines,” automatically decides the best way for users to access a cloud computing system based on the availability of network bandwidth. The invention calls for network resource management software that reads the management information base (MIB) of the network switch to determine how much bandwidth is being used by each IP address assigned to each VM within a compute node. When network bandwidth becomes constrained on one node, the system automatically reassigns some of the VMs to another node with spare network bandwidth.
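As a rough sketch of that idea (this is not IBM's patented implementation; the node names, link capacity, and threshold below are hypothetical, and a real system would pull per-VM counters from the switch MIB via SNMP rather than hard-coding them), the core rebalancing loop could look something like this:

```python
# Toy sketch: reassign VMs away from compute nodes whose uplink is saturated.
# Per-VM bandwidth figures would come from the switch's MIB in a real system;
# here they are hard-coded for illustration.
from dataclasses import dataclass, field
from typing import Dict, List

LINK_CAPACITY_MBPS = 10_000   # assumed per-node uplink capacity
REBALANCE_THRESHOLD = 0.85    # assumed utilization level that triggers a move

@dataclass
class Node:
    name: str
    vm_bandwidth_mbps: Dict[str, float] = field(default_factory=dict)

    @property
    def utilization(self) -> float:
        return sum(self.vm_bandwidth_mbps.values()) / LINK_CAPACITY_MBPS

def rebalance(nodes: List[Node]) -> None:
    """Move the heaviest VM off any saturated node to the least-loaded node."""
    for node in nodes:
        while node.utilization > REBALANCE_THRESHOLD and len(node.vm_bandwidth_mbps) > 1:
            vm, mbps = max(node.vm_bandwidth_mbps.items(), key=lambda kv: kv[1])
            target = min(nodes, key=lambda n: n.utilization)
            if target is node:
                break  # nowhere less loaded to move it
            node.vm_bandwidth_mbps.pop(vm)
            target.vm_bandwidth_mbps[vm] = mbps
            print(f"migrating {vm} ({mbps:.0f} Mbps) from {node.name} to {target.name}")

nodes = [
    Node("node-a", {"vm1": 4000, "vm2": 3500, "vm3": 2000}),  # 95% utilized
    Node("node-b", {"vm4": 800}),                              # 8% utilized
]
rebalance(nodes)
```

The patent covers placement and access decisions across a whole cloud; the sketch only shows the bandwidth-driven reassignment at its core.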
This will be ideal for online retailers faced with traffic spikes during the shopping season, news sites that see a surge after a major news event, sporting event sites, and any web company with unpredictable traffic, such as a startup whose launch gets media attention.
This is an alternate take on the majority focus in cloud: CPU and memory utilization and optimization. Network bandwidth is the last part of the equation to really get “cloudified,” and it is where performance most often gets impeded. It's not easily fixed by throwing up a bunch of additional virtual machines; at the very least, it is better fixed at the network layer. This is an old problem that also plagued shared hosters, and the industry has increasingly turned to software-defined networking to solve it.
The invention can be applied across various operating systems, including Linux, Windows, CentOS, and UNIX, and a variety of hardware platforms, including IBM System x racks and BladeCenter, PureFlex, and Power Systems.

8:09p
Friday Funny: Pick the Winning Caption for Our Spaghetti Cabling Cartoon
It's Friday and the weekend is in sight! It's time for some Friday fun with our weekly data center cartoon caption contest.
Please take a moment to vote on the caption suggestions for our latest cartoon, A Trying Situation, in which our friends Kip and Gary confront an old data center nemesis: spaghetti cabling. For more Kip and Gary cartoons, visit their website.
New to the caption contest? Here's how it works: we provide the cartoon and you, our readers, submit the captions. We then choose finalists, and readers vote for the funniest suggestion. The winner receives a hard-copy print with his or her caption included in the cartoon!
For the previous cartoons on DCK, see our Humor Channel.