Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, February 6th, 2013

    12:30p
    Cormant Rebrands Its DCIM Suite; Solves More Than Cables

    Data center infrastructure management (DCIM) provider Cormant has rebranded its DCIM suite from “CableSolve” to “Cormant-CS.” The impetus for the move is that “CableSolve” doesn’t convey all that the software does.

    “This product name change removes the incorrect assumption that our solution was primarily for managing cables,” said Paul Goodison, CEO of Cormant. “Establishing a standardized presentation of our core product offering within an enhanced content context will accompany the name change to further ensure Cormant continues to grow and lead the DCIM market.”

    During a transitional phase throughout 2013, Cormant will refer to its DCIM solution suite as “Cormant-CS (formerly CableSolve)” to maintain continuity.

    The company’s DCIM suite provides holistic network, IT infrastructure and connectivity management inside and outside the data center, a breadth the name “CableSolve” did not really reflect. The rename is part of a broader market communication effort that gives the veteran player more unified branding under Cormant-CS. The CableSolve name reflected the company’s strength in enterprise network connectivity and infrastructure management, but not its strength at documenting, tracking, monitoring and managing IT infrastructure in the “data center and beyond,” a phrase it uses to convey the holistic breadth of its solution.

    “As Cormant continues to grow our customer and partner bases, we seek to best reflect our capabilities while ensuring success for all involved,” said Cormant’s Business Development and Marketing Lead, Michael Phares.

    1:00p
    New Approaches to Data Center Consolidation

    Consolidation projects are plentiful with organizations trying every which way to save money on infrastructure costs. Companies are getting more creative in how they consolidate their environments. We’re not just talking about hardware any longer. Sure, virtualization still plays a big role in the consolidation effort, but there are plenty of other tools as well.

    Executives are tasking managers with looking at more ways they can run leaner and more efficiently. The challenge here isn’t just to scale the environment for today, but to build an infrastructure for tomorrow as well. So, let’s take a new look at the consolidation efforts by breaking the conversation down into three major components: Hardware, software, and the user.

    Each element has its own complexity and, if planned out properly, can be consolidated and run better and longer for an organization.

    Hardware

    The hardware conversation has evolved beyond just servers. We can open up the conversation in the comments section but aside from servers, we’re seeing a whole new level of hardware-based consolidation efforts.

    • Switches. Have you seen some of the intelligent switches coming out recently? These devices can do the work that 10 to 15 switches would have done in the past. We’ve now entered the world of network virtualization as well. Software-defined technologies help organizations enable capabilities such as Global Server Load Balancing and device interrogation. Furthermore, SDN has helped many companies create an even more robust disaster recovery and business continuity environment.
    • Storage. Multi-tenancy is becoming very big in the storage industry. Sometimes called storage virtualization, the idea is to intelligently carve up a storage controller and give numerous departments their own slice, utilizing the controller to its fullest capabilities. Why have multiple physical controllers when one can handle multiple departments?
    • Blade Chassis. HP, Cisco, Dell, IBM, and so on. All of these makers are creating more efficient and consolidated pieces of hardware. This has also helped improve wiring. The big conversation is around unified computing and high-density equipment. As more consolidation projects take place, administrators need to look at more efficient systems which are capable of scaling with the needs of the business.
    • Rack Management. Controlling rack space has become a big initiative for many organizations. Cooling, power, and airflow are all now talking points in trying to make an environment more dense and efficient. Many organizations are now deploying “green” initiatives to help them fight costs and help out the environment.

    Software

    The software conversation is evolving as well. The data center consolidation effort not only revolves around the hardware component, but the software element as well. Administrators are trying to use less software to accomplish more.

    Let me give you an example: management tools. In the past, native tools would be used to help support a given infrastructure. Now, software components and integral pieces of an environment are so diverse that native solutions just don’t cut it anymore. Why have six different management platforms when just one can work? So, vendors are switching from individual software offerings to plug-ins or management packs that tie into agnostic systems management tools. SCCM 2012 and SCOM 2012 (Microsoft System Center Configuration Manager and Operations Manager) are great examples where organizations can create packs to help manage their infrastructure under one roof – e.g. Veeam for VMware and Hyper-V, or Comtrade for Citrix. Furthermore, products from BMC and IBM are helping integrate systems into one management console.
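
    To make the “one console, many packs” approach concrete, here is a minimal, hypothetical Python sketch. The class names and packs are illustrative only and do not represent the actual SCOM, Veeam or Comtrade APIs:

        # Hypothetical sketch: one agnostic console, many vendor "packs".
        class ManagementPack:
            """A vendor-specific plug-in that knows how to poll one platform."""
            def __init__(self, name, collect):
                self.name = name
                self.collect = collect  # callable returning a dict of health metrics

        class Console:
            """The single, vendor-agnostic management console."""
            def __init__(self):
                self.packs = []

            def register(self, pack):
                self.packs.append(pack)

            def poll_all(self):
                # One pass gathers health data from every registered platform.
                return {pack.name: pack.collect() for pack in self.packs}

        console = Console()
        console.register(ManagementPack("hypervisor", lambda: {"vms_running": 120}))
        console.register(ManagementPack("vdi-farm", lambda: {"active_sessions": 800}))
        print(console.poll_all())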

    These efforts translate into fewer VMs and lower physical server demands – and thus a leaner data center.

    User (Density)

    The general user model has changed. As organizations grow, they have to place more users on more systems. But that’s not really efficient. So, IT managers have to find ways to increase user density while still keeping costs low. Virtualization is one way of doing it, cloud computing is another.

    A new management model has grown out of a new term: user virtualization. Organizations are finding ways to virtualize user settings and allow users to carry those settings, metaphorically, in their pockets. Doing so places fewer demands on underlying servers and core system components – and it creates happier end users, since their settings follow them at all times. By controlling the user layer, managers are better able to see what users are doing and how they are using core infrastructure components. As an organization grows, having user metrics and a fluid environment will help facilitate very efficient data center expansion.

    The really interesting part, as mentioned earlier, has been the creative ways that companies are consolidating their environments. Better rack management and more efficient data center build outs have also helped companies manage their environments more effectively. The data center is becoming more and more a central piece for any organization. Now, as IT goals begin to better align with business needs, administrators can help make the data center a revenue generating resource.

    1:30p
    Carrier Grade NAT – A Look at the Tradeoffs

    Owen DeLong is Director of Professional Services at Hurricane Electric, the world’s largest IPv6-native Internet backbone and leading colocation provider. Owen, who is also an IPv6 Evangelist, has more than 25 years of industry experience.

    Owen DeLong, Hurricane Electric

    Imagine a restaurant parking lot where each space is permanently assigned to a single patron’s car. This would be very counter-productive because you could never have a big enough parking lot and the vast majority of the lot would go unused the majority of the time. Since there’s only so much space and no way to make more, it’s a terrible waste of that space. Given limited space, it’s important to come up with a better solution. If things aren’t too crowded, we simply let customers park where there’s a space and the space is only tied up for that customer so long as they are at the restaurant. When someone leaves, another customer can use the space.

    However, what happens if the restaurant grows or the parking lot shrinks? Remember, we can’t create more space. Usually a combination of clever ways to park more vehicles in the same space is developed, and valets manage the insertion and removal of vehicles from those tighter spaces. Sometimes the valet may have to park in an alternate lot off-site. The valet maintains a translation table of keys to cars’ locations.

    Clearly, the ideal scenario would be if we could make more space. That’s what IPv6 does for us. In fact, if IPv6 addresses were parking spaces, IPv6 would literally allow us to park every car ever built in each parking lot and still have 4 billion more spaces for each and every car ever built in each parking lot.  Further, we have enough room for every single restaurant, building, house, apartment, condo, shack, shed, office, and any other structure, building, or tenant to have 65,536 such parking lots and still leave more than half of the space unpaved and unused.
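
    To put the parking-lot analogy in raw numbers, here is a quick back-of-the-envelope calculation (a sketch added for illustration, assuming the common practice of assigning a /48 per site and treating each /64 network as one “parking lot”):

        # Back-of-the-envelope comparison of the IPv4 and IPv6 address spaces.
        ipv4_total = 2 ** 32          # ~4.3 billion addresses
        ipv6_total = 2 ** 128         # ~3.4 x 10^38 addresses
        subnets_per_site = 2 ** 16    # /64 networks available inside a /48 site assignment

        print(f"IPv4 addresses:        {ipv4_total:,}")
        print(f"IPv6 addresses:        {ipv6_total:.3e}")
        print(f"IPv6-to-IPv4 ratio:    {ipv6_total // ipv4_total:.3e}")
        print(f"/64 subnets per /48:   {subnets_per_site:,}")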

    However, faced with having to cope with IPv4 for a few more years, we need to consider the finite space scenario in a little more detail. The first scenario above (permanent parking-space assignments) is akin to static IP addressing. It’s the most efficient way to always know where your car is, but it uses up space very quickly. The second scenario (park wherever you can find a space, but only in the one lot) is like DHCP (Dynamic Host Configuration Protocol), where addresses are allocated as needed. The third scenario (valet!) seems better still, because, at least from the diners’ perspective, there’s unlimited parking. However, valet in the Internet Protocol space is NAT (Network Address Translation) and it comes with downsides greater than waiting, tipping, and the occasional door ding.
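
    The valet’s key-to-location table maps directly onto a NAT device’s translation table. The following is a deliberately simplified sketch; real NAT implementations also track protocol, connection state and timers:

        # Simplified NAT translation table: many private (host, port) pairs
        # share one public address and are told apart by the public port.
        nat_table = {}       # (public_ip, public_port) -> (private_ip, private_port)
        next_port = 40000

        def translate_outbound(private_ip, private_port, public_ip="203.0.113.1"):
            """Allocate a public port for an outbound flow, like a valet taking the keys."""
            global next_port
            mapping = (public_ip, next_port)
            nat_table[mapping] = (private_ip, private_port)
            next_port += 1
            return mapping

        # Two inside hosts end up sharing the single public address:
        print(translate_outbound("192.168.1.10", 51515))   # ('203.0.113.1', 40000)
        print(translate_outbound("192.168.1.11", 51515))   # ('203.0.113.1', 40001)
        print(nat_table)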

    Times Are Changing

    We’ve been living with some of these tradeoffs and worked around others in traditional NAT for a long time, but now, the valets are having to get more creative as all the parking lots within running distance are getting full.

    The IETF defines Carrier-Grade NAT (CGN) as large-scale Network Address Translation (NAT) implemented by a service provider.  In most cases, CGN is implemented as a layer of IPv4-to-IPv4 NAT on top of the “traditional” NAT implemented at the subscriber side of the connection.

    There are significant challenges introduced by CGN that merit serious consideration by the service provider (and the subscriber). Traditional NAT is implemented on the subscriber side of the connection, which means that a single public IP address presented to the Internet uniquely identifies a subscriber.  A significant amount of internet infrastructure relies on this assumption; violating it is likely to affect law enforcement, civil litigation, geo-location and more. In CGN, this fundamental assumption is broken, so carriers that implement it will need the source address, the port number and the time in order to have any hope of identifying the subscriber behind a given Internet transaction. Presently, however, most servers do not log source-port numbers for incoming connections.

    If a CGN implementation used the same ad-hoc port allocation as traditional NAT, then logs of these dynamic port mappings would be needed for subscriber identification. For a small number of subscribers (8,000, say), nearly a terabyte per day of logging information could be generated.  As such, CGN is unlikely to make use of ad-hoc port allocation.  Instead, subscribers could be statically mapped to port ranges. Unfortunately, this approach has two disadvantages. First, ports (and by extension, addresses) must be held in reserve for customers who are not active. Second, the number of ports (and by extension, the number of concurrent transactions or sessions per subscriber) will be limited. In order to achieve significant address conservation, ISPs using CGN will be faced with very real tradeoffs between subscriber experience and address consumption.
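
    The logging-versus-static-ports tradeoff is easy to see with a little arithmetic. The per-subscriber activity, record size and oversubscription ratio below are illustrative assumptions chosen to match the magnitudes described above, not measurements from any real CGN deployment:

        # Option 1: ad-hoc (dynamic) mappings require logging every mapping event.
        subscribers = 8_000
        mappings_per_subscriber_per_day = 800_000   # assumed: roughly nine new flows per second
        bytes_per_log_record = 150                  # assumed record size
        log_bytes_per_day = subscribers * mappings_per_subscriber_per_day * bytes_per_log_record
        print(f"Dynamic-mapping log volume: ~{log_bytes_per_day / 1e12:.2f} TB/day")

        # Option 2: static port ranges need no per-mapping logs, but cap concurrency.
        usable_ports = 65_536 - 1_024               # ports above the well-known range
        subscribers_per_public_ip = 64              # assumed oversubscription ratio
        ports_per_subscriber = usable_ports // subscribers_per_public_ip
        print(f"Static range: {ports_per_subscriber} ports (max concurrent sessions) per subscriber")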

    Control Points

    The next key difference between traditional NAT and CGN is the control point. In traditional NAT, the subscriber controls the NAT device and can create static inbound mappings that allow bidirectional connectivity not necessarily initiated by the subscriber. More commonly, a slightly different form of inbound mapping, accomplished through protocols known as UPnP or NAT-PMP, is made by applications without direct user intervention. Neither of these mappings is supported by CGN, so applications dependent on them will break in most CGN scenarios. Susceptible applications include most forms of peer-to-peer applications, multiplayer online games, many VoIP-style services and some forms of instant messaging.

    As providers implement these NATs, they will likely be regionalized. For the provider, it is much simpler and more economical to locate a few large NAT centers in a small number of locations than to distribute these capabilities around their network.  Unfortunately, this means that instead of the current path from subscriber->provider->Internet, the path in a CGN environment is expanded to subscriber->provider->NAT Center->provider->Internet.

    Consider a subscriber in San Jose who wants to view a website in San Francisco. Today, his browser traffic follows a path that may go as far away as Sacramento (about 200 miles, San Jose->Sacramento->San Francisco). In the CGN world, however, the nearest NAT center may be Los Angeles or even Denver, expanding the path to about 600 miles in the case of Los Angeles and a little over 2,000 miles in the case of Denver, resulting in roughly a 3x to 10x increase in packet delay times.
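
    A rough sketch of why the longer path matters, using the route mileages above and the propagation speed of light in fiber (about two-thirds of c); real routes add queuing and serialization delay on top of this:

        # Rough round-trip propagation delay for the CGN detour example.
        FIBER_MILES_PER_MS = 124    # light in fiber covers roughly 124 miles per millisecond

        def rtt_ms(route_miles):
            """Out-and-back propagation only; ignores queuing and processing delay."""
            return 2 * route_miles / FIBER_MILES_PER_MS

        for label, miles in [("today, via Sacramento", 200),
                             ("CGN via Los Angeles", 600),
                             ("CGN via Denver", 2_000)]:
            print(f"{label:>22}: ~{rtt_ms(miles):.1f} ms round-trip propagation")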

    Finally, geolocation data for the public side of these NATs will likely reflect the location of the regional NAT center and not the subscriber’s actual location. Geolocation of IP addresses is already of dubious accuracy, but CGN will make it significantly worse. Imagine being in San Francisco and having a web site think you are looking for things near your current location (which it thinks is San Jose, Los Angeles, or even Denver).

    Is CGN an Effective Alternative to IPv6?

    Because of the way things have evolved, the question isn’t whether to implement CGN, but rather whether CGN is an effective – or even a viable – alternative to IPv6. On this question, two fundamental camps have emerged.

    The first camp views CGN as a necessary interim solution on the way to IPv6 transition in order to accommodate subscribers who can’t get public IPv4 addresses due to shortage. This camp will deploy CGN only when its IPv4 address space is nearly exhausted and when IPv6 is not an option, due to technological limitations of the subscriber, the destination, or even (in rare cases) the provider. This camp will have a strong incentive to encourage as many others as possible to deploy IPv6 technologies in order to reduce the load and minimize the required investment in CGN capabilities.

    2:15p
    Samsung Invests in Cloudant, Prepping for ‘Internet of Things’

    Mobile device usage is growing, and so are the number of apps and the volume of data associated with these devices. That’s why Samsung Venture Investment Corporation has made a strategic investment in database-as-a-service provider Cloudant. The funding will be used to advance research and development to further improve global data distribution technologies and mobile application data management.

    The reasoning behind the strategic investment is Samsung’s interest in the mobile world. The proliferation of mobile devices means the data associated with these devices is growing exponentially. Cloudant provides a highly scalable NoSQL DBaaS for storing, processing, transferring and managing the volume and complexity of this data. Cloudant is able to distribute application data across a global network of data centers while providing non-stop data access with low latency for its customers.

    “Samsung Ventures believes a globally distributed data layer and management of that data is especially critical for large enterprise businesses,” said Hyuk-Jeen Suh, Senior Investment Manager with Samsung Ventures America. “We felt that this is the right time to strategically invest in Cloudant to support the company’s vision to manage the proliferation of data to be created by, for example, mobile devices, machine-to-machine (M2M) technologies, and the ‘Internet of things’ in the future.”

    Cloudant provides global data distribution, mobile replication and synchronization (application developers build their back ends on the Cloudant Data Layer cloud database), monitoring and scalable performance. It serves government agencies, enterprises, and SMBs. It accelerates time-to-market and time-to-innovation because it frees developers from the mechanics of data management so they can focus exclusively on creating great applications.
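
    Because Cloudant’s data layer is CouchDB-compatible, cross-data-center replication can be driven over its HTTP API. The snippet below is a minimal sketch; the account URL, database names and credentials are placeholders, and a production setup would use Cloudant’s documented authentication rather than hard-coded values:

        # Sketch: ask a CouchDB-compatible service to replicate one database into another.
        import requests

        ACCOUNT = "https://example-account.cloudant.com"   # placeholder account URL
        AUTH = ("example_key", "example_password")          # placeholder credentials

        response = requests.post(
            f"{ACCOUNT}/_replicate",
            json={
                "source": f"{ACCOUNT}/app_data_us",
                "target": f"{ACCOUNT}/app_data_eu",
                "continuous": True,        # keep the copies in sync as new writes arrive
                "create_target": True,     # create the target database if it does not exist
            },
            auth=AUTH,
        )
        print(response.status_code, response.json())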

    “Cloudant looks forward to enhancing our research and development efforts, especially in the mobile technology platform, to accelerate our progress in the market for database-as-a-service,” said Cloudant CEO Derek Schoettle.

    Cloudant is privately held and backed by top-tier investors including Avalon Ventures, In-Q-Tel, Samsung Venture Investment Corporation, and Y Combinator.

    3:00p
    Massive New Data Center Will Bring Windows Azure to China

    Chinese Internet service provider 21Vianet announced that it has started construction of a new data center in the Daxing District of Beijing. The facility will be capable of holding 5,000 cabinets, making it the largest facility in China as measured by cabinet capacity. The 42,000 square meter (approximately 452,000 square foot) data center will be built in phases, with the first phase expected to be fully operational by the end of 2013. The data center will be wholly owned and operated by 21Vianet.

    “We are excited to announce the addition of this mega data center in Beijing, China,” said Josh Chen, Founder, Chairman and Chief Executive Officer at 21Vianet. “We believe this expanded facility will further bolster 21Vianet’s leading market position for hosting and management network services. In addition, we will also utilize this data center to power Microsoft’s premier commercial cloud services. We remain committed to providing our customers with the IT Infrastructure services required to grow their businesses and meet today’s increasingly complex and demanding networking requirements.”

    Late last year Microsoft signed a Memorandum of Understanding with the municipality of Shanghai and signed an agreement to license Microsoft technologies to 21Vianet, which will offer Windows Azure in China out of local data centers.

    “We believe that with our consistent expansion in data centers and network coverage as well as the rollout of our cloud platform, we are well-positioned to capture new opportunities to become China’s leader in the rapidly emerging market for cloud computing infrastructure services,” said Shang Hsiao, President and Chief Financial Officer at 21Vianet. “At this stage, we do not expect to raise our current guidance for capital expenditures for the full year of 2013, which remains contingent upon the finalization of our roll out schedule and financing strategy.”

    3:30p
    New IBM PureSystems Tuned For Big Data and Clouds
    IBM technician Steve Mallmann performs a quality check on a new IBM Power 740 Express system infused with the latest POWER7+ chip technology. IBM unveiled new Power Systems for SMBs and growth market companies tuned for big data and cloud computing. (Photo: IBM)

    IBM has announced major advances to its PureSystems family of integrated systems, directly targeting a big data technology and services market that IDC says will reach $16.9 billion by 2015. New PureSystems models help to remove the complexity of developing cloud-based services by making it easier to deploy and manage a secure cloud environment. While the demand for big data is high, many organizations do not have the resources or skills to embrace it.

    Powered by Netezza technology, the new PureData System for Analytics features 50 percent greater data capacity per rack and is able to crunch data 3x faster, according to IBM. The New York Stock Exchange (NYSE) relies on PureData System for Analytics to handle an enormous volume of data in its trading systems and identify and investigate trading anomalies faster and easier.

    “NYSE needs to store and analyze seven years of historical data and be able to search through approximately one terabyte of data per day, which amounts to hundreds in total,” said Emile Werr, head of product development, NYSE Big Data Group and global head of Enterprise Data Architecture and Identity Access Management for NYSE Euronext. “The PureData System for Analytics provides the scalability, simplicity and performance critical in being able to analyze our big data to deliver results eight hours faster than on the previous solution, which in our world is a game changer when you look at the impact on businesses every second that passes.”

    Tailored Cloud Options

    IBM is hoping to make the cloud simpler for all businesses to adopt through new cloud options tailored to the data center that allow businesses of all sizes to free up time and money. A smaller PureApplication System was announced, providing all of the infrastructure and management software necessary to develop new applications in the cloud or on-premises. With smaller configurations, this model opens up new opportunities for MSPs and in growth markets without sacrificing performance. The PureApplication System is now offered on POWER7+, allowing larger enterprises to better manage and maintain compute- and transaction-intensive applications across environments, including the cloud.

    “IBM PureApplication System with the POWER7+ architecture offers a greater level of stability and flexibility for our shared customers,” said David Landau, vice president product management, Manhattan Associates. “Most importantly, we expect to offer an even lower cost of ownership to our customers by optimizing the Manhattan Supply Chain Process Platform with the IBM PureApplication System.”

    In addition to third-party ISV patterns, new patterns for IBM software in an expanded patterns catalogue include those for mobile application management, application integration, asset management and social business. IBM has also announced “MSP Editions” for IBM PureFlex System and Flex Systems that provide an accelerated cloud deployment platform that is faster to implement, easier to manage, and more cost effective than MSPs building the platform themselves. A new SmartCloud Desktop Infrastructure offering enhances the overall quality and reliability of virtual desktops and enables IT managers to easily manage, secure and deploy virtual desktop solutions.

    3:30p
    The Migration Path from 10G to 40G and Beyond

    Data centers today require more throughput, more bandwidth and more resources to continue to deliver the type of performance required to maintain optimal business operations. As more demands have been placed on the data center, administrators have turned to fiber solutions to help them obtain the type of LAN bandwidth they require.

    Technologies revolving around virtualization, cloud computing and big data are requiring more throughput capabilities than ever before. One of the ways to deliver these resources is through high-density computing systems. In some heavy-utilization instances, 10Gb/s is just not enough. This is where administrators may run into the challenge of upgrading from 10Gb/s to 40Gb/s and beyond. This is where CommScope technologies can really help out.

    CommScope offers a variety of pre-terminated fiber solutions that utilize multi-fiber connectors to facilitate rapid deployment of fiber networks in data centers and other high-density environments. Within the SYSTIMAX brand, these solutions are InstaPATCH 360 and InstaPATCH Plus. The Uniprise solution is referred to as ReadyPATCH. In this white paper, CommScope guides the conversation around the mechanisms required to upgrade from a 10Gb/s infrastructure to 40Gb/s and even further if needed.

    With detailed drawings and descriptions, the white paper outlines various fiber deployment methodologies including the following:

    • Traditional two-fiber application channel with InstaPATCH 360
    • Two-fiber fan-out channel with InstaPATCH 360 fan-out cables
    • Optimized parallel transmission channel with InstaPATCH Plus/ReadyPATCH

    Download this detailed white paper to see how CommScope can help create a more robust network infrastructure by simplifying wiring and increasing bandwidth throughput. According to CommScope, for Ethernet networking speeds above 10Gb/s, the application standards specify parallel optics for multimode fiber networks. IEEE 802.3ba defines the transmission schemes for 40Gb/s and 100Gb/s. The interfaces for these higher speeds are based on the MPO connector. As such, it is a relatively simple process to upgrade a CommScope pre-terminated solution from 10Gb/s to 40Gb/s or even 100Gb/s.
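
    As a quick sanity check on the parallel-optics point (a sketch based on the IEEE 802.3ba lane structure rather than on the white paper itself), 40G and 100G over multimode fiber are built from multiple 10Gb/s lanes, each needing one transmit and one receive fiber on the MPO connector:

        # Fiber-count arithmetic for MPO-based parallel optics (IEEE 802.3ba).
        links = {
            "10GBASE-SR":    1,   # serial: one 10 Gb/s lane over a duplex fiber pair
            "40GBASE-SR4":   4,   # four parallel 10 Gb/s lanes
            "100GBASE-SR10": 10,  # ten parallel 10 Gb/s lanes
        }

        for name, lanes in links.items():
            fibers = 2 * lanes    # one transmit and one receive fiber per lane
            print(f"{name:>14}: {lanes:>2} x 10 Gb/s lanes, {fibers:>2} multimode fibers per link")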

    6:43p
    Data Center Jobs: ISS Facility Services

    At the Data Center Jobs Board, we have new job listings from ISS Facility Services, which is seeking a Critical Enviro/Data Center Operator in multiple locations.

    The Critical Enviro/Data Center Operator works in an operations & maintenance organization that provides a variety of O&M services within a 24/7/365 environment. The operator performs duties with constant awareness of the need to preserve the reliability of the critical load, provides O&M for building systems of other facilities as assigned, and carries out maintenance and operations duties personally or as a team member in a Data Center Operations & Maintenance environment.

    To view full details and apply, see Alpharetta, Georgia job listing details.

    Other positions for Critical Enviro/Data Center Operator are available in the following cities: Suwanee, Georgia; Austin, Texas; Colorado Springs, Colorado; and Houston, Texas.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

