Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, April 30th, 2014

    11:00a
    IO’s Jason Pfaff Named Data Center Manager of the Year

    Jason Pfaff, vice president, North America DCaaS at IO, received the “Data Center Manager of the Year” award at the Data Center World Global Conference in Las Vegas, Nev., Tuesday.

    This week’s conference, organized by AFCOM, an association of data center professionals, has drawn more than 1,000 facilities, operations and IT pros from around the globe for networking and educational sessions.

    AFCOM President Tom Roberts said Pfaff had worked on the massive modular IO data center in Edison, NJ, and the award was well deserved. Other 2014 finalists were Dave Leonard, chief data center officer at ViaWest, and Robert McClary, senior vice president and general manager at FORTRUST.

    Pfaff has more than 15 years of data center-related experience and has been directly involved in the design, build, and operation of 1.5 million square feet of space and 175 megawatts of power. During his time at IO, he has been a pivotal leader in the company’s shift from traditional raised-floor builds to pre-fabricated modular deployments.

    Pfaff oversees all of IO’s North American operations, which equates to 1.3 million square feet across four sites, and was specifically responsible for the successful construction, commissioning and delivery of a 12-megawatt dedicated data center suite in only 90 days.

    Prior to joining IO, he worked for Digital Realty Trust and Sterling Network Exchange.

    11:30a
    ARIN: Final Phase of Countdown to Last IPv4 Address Begins

    It is no secret that the number of available IPv4 addresses has become critically low. The American Registry for Internet Numbers (ARIN) is down to its final /8 (around 16 million addresses) and has moved into Phase Four, the final phase, of its IPv4 countdown plan. This means the registry may no longer be able to fulfill all qualifying IPv4 requests.

    “In the early ’90s we realized that if things continued as they were we would be out of IPv4 addresses in a few years,” Owen DeLong, ARIN advisory board member and director at Hurricane Electric, said. “Things changed. Network Address Translation (NAT) was developed and Classless Inter-Domain Routing (CIDR) and some other technologies that allowed us to conserve addresses.”

    While those changes slowed address consumption down, they did not stop it. Today, every Regional Internet Registry (RIR) has developed an “austerity policy,” DeLong continued. Europe is more than one year into its austerity plan, and it has been more than two years for Asia Pacific. ARIN has 16 million addresses to go before its austerity policy goes into effect, and Latin America and Caribbean Network and Information Center (LACNIC) is close to triggering its plan.

    In Phase Four, ARIN will process all IPv4 requests on a “first-in-first-out” basis. Every request will undergo team review, and requests for /15 or larger will require department director approval, which may mean a longer turn-around.
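
    The prefix arithmetic behind these figures is easy to check. As a quick, hedged illustration (not from ARIN), the short Python sketch below uses the standard-library ipaddress module to show how a /8 maps to the roughly 16 million addresses ARIN has left, and how a /15, the request size that now triggers director review, maps to about 131,000 addresses. The example networks are arbitrary private ranges.

        # Illustrative only: count the addresses in a /8 and a /15.
        import ipaddress

        for prefix in ("10.0.0.0/8", "10.0.0.0/15"):
            net = ipaddress.ip_network(prefix)
            # num_addresses = 2 ** (32 - prefix_length) for IPv4
            print(f"{prefix}: {net.num_addresses:,} addresses")

        # Prints:
        # 10.0.0.0/8: 16,777,216 addresses
        # 10.0.0.0/15: 131,072 addresses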

    A Quick Primer

    Many Data Center Knowledge readers already know the difference between IPv4 and IPv6; if you do, go ahead and skip the next two paragraphs. For those who do not: IP addresses are core to the Internet. Similar to a phone number, an IP address identifies a device connected to the Internet, and the concept is fundamental to the way computers communicate with each other.

    Each IPv4 address has 32 bits, so the IPv4 address space can support about 4.29 billion addresses. While that seemed tremendous when IPv4 was created, the limit is restrictive today.

    An IPv6 address uses 128 bits, which means the protocol can support an exponentially larger number of addresses. Addresses are so long that they are written in hexadecimal notation rather than dotted decimal notation. IPv4 allows 2^32 possible addresses, while IPv6 allows 2^128.
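
    To make the scale concrete, here is a minimal, hedged sketch in Python (not part of the original article) that computes both address-space sizes and shows dotted-decimal versus hexadecimal notation side by side. The two sample addresses are arbitrary documentation-range examples.

        # Illustrative only: IPv4's 32-bit space vs. IPv6's 128-bit space.
        import ipaddress

        print(f"IPv4: 2**32  = {2**32:,} addresses")    # about 4.29 billion
        print(f"IPv6: 2**128 = {2**128:,} addresses")   # about 3.4 x 10**38

        v4 = ipaddress.ip_address("192.0.2.1")     # dotted decimal notation
        v6 = ipaddress.ip_address("2001:db8::1")   # hexadecimal notation
        print(v4, "=", int(v4))    # the 32-bit integer behind the address
        print(v6, "=", int(v6))    # the 128-bit integer behind the address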

    New Market Arising: IPv4 Brokers and Auctions

    In response to the shortage, a new industry of brokers and auction houses that deal in IPv4 addresses has arisen. Many IPv4 addresses have been assigned but are not necessarily in use, and these marketplaces list the IPv4 resources that are still available. In February, for example, a company called Hilco Streambank launched an auction marketplace that provides liquidity for IPv4 address sellers and connects them with buyers.

    Broker IPv4 Market Group believes potential legal issues in this highly regulated space make such auctions unworkable. An auction winner may never get approval to receive the addresses it has won, leaving the seller in limbo as well. Some bidders are illegitimate, and no contract terms are established other than pricing.

    Hence, brokers are stepping in to lend end-to-end IPv4 address transaction expertise, helping with marketing, sales, the transfer process and the financial aspects. IPv4 Market Group also provides legal and technical advice.

    Both auctions and brokerages are band-aids, however. The address space will run out, potentially causing the prices of IPv4 addresses to skyrocket and making a fast-track transition to IPv6 ever more urgent.

    DeLong is not a fan of either brokerages or auction houses. “I’m old-school in this regard,” he said. “I feel that the whole idea of treating address resources as a resale commodity is distasteful at best. These are community resources that [were] handed out without charge on the basis of actual need for the addresses. It’s pretty clear to anyone who was around in the early days that if you had addresses you no longer needed, you were expected to return them to the community for use elsewhere. I regard all of these monetized transfers as being more of a necessary evil to bridge a (hopefully) short-term gap rather than a desirable state of affairs.”

    IPv6 Switchover Progress Slow

    Progress on the switchover has not been as fast as it needs to be. “There’s good news and bad news here,” DeLong said. “The good news is that people are starting to pay more attention to this issue and adoption is accelerating. The bad news is that we really should have been at this point over a decade ago, and IPv6 should be almost fully deployed by now.”

    ARIN’s announcement of the final phase should serve as a wake-up call, he said.

     

    12:30p
    What Separates One Data Center from the Next? The People!

    Mike Bennett, VP of global data center acquisition and expansion at CenturyLink Technology Solutions EMEA

    It’s hard for one company to claim that its data centers are better than the competition’s down the road; the real differentiator is the people inside.

    Sure, connectivity, cooling and power are the fundamentals, but it’s the people that create the right conditions in which to effectively (or ineffectively) manage these factors. The data center staff can make a good data center great or a potentially great data center average.

    It’s common for data center providers to outsource the running of their facilities. They build the data center and have someone else manage it while their sales people sell it; in essence, they are more property companies than technology providers. It works for them, yet those brought in are limited in their ability to truly help evolve the offering. For one thing, they may only be on site for a three-to-five-year contract (and for even less time if they come in halfway through). The result is that there is little real incentive to make changes that will pay dividends in the future.

    People Can Make a Serious Difference

    Permanent staff, on the other hand, take real pride in the facility because they have a higher degree of personal investment and know they can make a serious difference; in other words, it feels like “their” data center. If they can think of a better way of doing something, they are able to see the solution through, and before you know it the processes are changed and rolled out globally.

    One example of how those ingrained in an organisation can make a difference: engineers within one of our data centers realized that if they adjusted the cooling to a more optimal setting, the intelligent fans that run in the chillers didn’t have to work as hard, and power was saved. The findings were quickly rolled out globally and the team rightly received due credit.

    The pride of knowing they could implement real change to benefit the future of the data center was a powerful motivator. When data center engineering staff can stay with their company for 25 years, even the smallest things are worth doing as they will feel the benefit down the line.

    Transferable Experiences

    This ability to constantly improve and evolve means wisdom from other mission-critical industries can be applied successfully to the data center. People from manufacturing backgrounds, for example, have brought a huge amount of talent and even trade secrets from what, on the surface, appears to be a completely different industry to the DC space.

    One example at CenturyLink Technology Solutions is an employee who joined a data center from a chocolate factory. The factory could never be switched off, as the chocolate and sugar would freeze and bring manufacturing to a halt for weeks. Never being able to have downtime made that person’s job mission critical and gave them some highly transferable insights.

    It’s not just about the quality of staff; it’s about enabling those talented individuals to constantly improve to the benefit of everyone involved.

    There is an important distinction between a data center and a professional data center operation, and more often than not that distinction is the people.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    How Flow Mapping Technology Enables Network Monitoring at 40Gbps and 100Gbps

    Networks are evolving from 1Gbps to 10Gbps and moving toward 40Gbps and 100Gbps. Traditionally, network monitoring and analysis were integrated after an infrastructure was deployed. On modern network pipes, monitoring and analysis have become absolutely critical.

    In this whitepaper from Gigamon, “Enabling Network Monitoring at 40Gbps and 100Gbps with Flow Mapping Technology,” we learn about the critical nature of network monitoring and analysis. We also learn how the Gigamon Traffic Visibility Fabric scales from just a few connections up to thousands, allowing traffic to be monitored and secured from a centralized network tool farm. Consider this: traffic aggregation is only one half of the solution behind the Traffic Visibility Fabric. The other half is an advanced filtering architecture called Flow Mapping.

    Here’s the bottom line: failure to analyze, monitor and secure network traffic can result in downtime, which can quickly cost organizations millions of dollars in lost revenue. To maintain network security, advanced persistent threats, cyber-attacks and data leaks must be combated and averted. At the same time, efficient network performance must be upheld to prevent bottlenecks and outages by monitoring bandwidth usage and application response time.

    Download this white paper today to learn how Flow Mapping starts with network ports and ends with tool ports, and how it is used to include or exclude traffic on connections. The paper outlines several direct benefits of using Flow Mapping technology in your network infrastructure. Flow Mapping’s capabilities include:

    • Sending only the packets on even source ports to local tool ports
    • Sending only packets matching a user-defined pattern match for a particular MPLS label to a local tool port
    • Discarding all traffic from a particular IP address
    • Sending only non-specific traffic to a local tool port using the Collector rule
    • Redirecting all traffic to IDS monitors regardless of any filters applied to network ports
    • Creating filter maps in advance for instant troubleshooting of specific scenarios
    • Temporarily troubleshooting situations where you want to see all traffic on a port without disturbing any other filter, cross-box filter, Flow Map or cross-box map already in place for the port

    The amount of traffic, workloads and users hitting your network will only continue to increase. As cloud and IT consumerization continue to impact the modern organization, IT shops will have to find ways to monitor, analyze, and optimize their network delivery methodology. Flow Mapping technology from Gigamon is a great way to allow efficient network monitoring for your future network needs and speeds.
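
    Flow Mapping itself is configured on Gigamon’s visibility hardware, but the rule model the bullet points above describe – ordered match rules on network ports, a drop action, and a collector rule that catches whatever nothing else matched – can be illustrated with a rough conceptual sketch. The Python below is hypothetical; the names and structures are invented for illustration and do not reflect Gigamon’s actual configuration syntax.

        # Hypothetical model of flow-map-style filtering: ordered rules match
        # packet attributes and steer matching traffic to a tool port or drop
        # it; a final collector catches traffic no other rule claimed.
        from dataclasses import dataclass
        from typing import Callable, Optional

        @dataclass
        class Packet:
            src_ip: str
            src_port: int
            mpls_label: Optional[int] = None

        @dataclass
        class Rule:
            match: Callable[[Packet], bool]   # predicate over packet attributes
            action: str                       # e.g. "tool-port-1" or "drop"

        rules = [
            Rule(lambda p: p.src_ip == "203.0.113.7", "drop"),     # discard one source IP
            Rule(lambda p: p.src_port % 2 == 0, "tool-port-1"),    # even source ports
            Rule(lambda p: p.mpls_label == 100, "tool-port-2"),    # one MPLS label
        ]
        COLLECTOR = "tool-port-collector"                          # everything else

        def map_packet(pkt: Packet) -> str:
            for rule in rules:            # first matching rule wins
                if rule.match(pkt):
                    return rule.action
            return COLLECTOR

        print(map_packet(Packet("198.51.100.9", 8080)))      # tool-port-1
        print(map_packet(Packet("203.0.113.7", 443)))        # drop
        print(map_packet(Packet("198.51.100.9", 443, 100)))  # tool-port-2
        print(map_packet(Packet("198.51.100.9", 443)))       # tool-port-collector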

    2:00p
    VMware Boosts Infrastructure as its Gaze Shifts to the Cloud

    VMware is looking to the cloud, and boosting its data center footprint in the process. It’s a shift that goes all the way down to the company’s business DNA, which has traditionally been that of a packaged software company.

    “Our second DNA is we will become a cloud service provider,” said Angelos Kottas, Director of Product Marketing, Hybrid Cloud Services for VMware. “We won’t get out of selling package software, but our first and primary route will become as-a-service delivery. The reason for this is that we can rapidly iterate technology. We don’t have annual or semi-annual release cycles this way. It also means that we will provide all of our offerings as a “for rent” option. It will extend to all services. There will be a key hook into the storage and availability portfolio. A cloud destination will be default for backup and disaster recovery.”

    VMware’s focus is on hybrid deployments – not necessarily the mixing of on-premises and cloud, but providing customers the choice and flexibility to deploy how they want, whether on-premises or in the cloud.

    The company has largely been known for targeting the large enterprise, but Kottas says that’s a misperception. “For the IaaS (Infrastructure-as-a-Service), we actually see the highest level of maturity happening in the mid market and commercial segment,” said Kottas. “I don’t mean the SOHO (Single Office/Home Office), but when you get into the mid market. To begin with, 5-10 virtual machines is the typical norm. Most customers don’t go all-in at once.”

    More Cloud = More Data Centers

    The company will most likely rely heavily on multi-tenant data center providers to expand its as-a-service offerings, much as it has done so far. “We are not at a scale where we buy or build our own data centers,” said Kottas. “Today our objective is not to drive down cost to operate. Our objective is to grow rapidly.”

    VMware has been relying on multi-tenant data center providers to extend its “as a service” offerings and VMware vCloud. The initial infrastructure was based in Las Vegas at the Switch SuperNAP facility. Everything was done out of Las Vegas up until general availability, which was launched in mid-2013. The company added data center space in Virginia and Santa Clara to cover the coasts, as well as Dallas to cover the middle part of the country. The first international expansion was in Slough, UK in February.

    The VMware vCloud IaaS offering is in five data centers today, with five more announced. There’s a second UK facility and a second New Jersey data center expected this quarter. A data center in Chicago is expected later in the year.

    The company is also going after government FedRAMP business and plans to open an additional two data centers dedicated to government needs: one in Virginia and one in Phoenix. The company has partnered with data center service providers who are strong in the federal space to address these needs (companies like Carpathia and CenturyLink).

    “We filed with the government at the tail end of February and we anticipate achieving FedRAMP in the 2nd half of 2014,” said Kottas. The company isn’t sticking to one provider, but rather picking the right fit depending on its needs.

    VMware has also indicated its intentions at broader international expansion. France, Germany, Japan and Australia are all in the cross-hairs. “In 2015, we’ll continue to look at expansion,” said Kottas.

    Why DaaS?

    Desktop as a Service (DaaS) seems to be the last bastion of the cloud transition. VMware is placing a heavy stake in the space.

    DaaS is not a new idea; arguably, thin clients were the same concept, but they went out of vogue. However, the changing face of the workforce and the increasingly outsourced nature of IT have DaaS positioned well for a large number of companies. Employees more frequently work remotely, and the devices they use are growing more diverse. If a company wants to get a set of contractors up and running quickly, DaaS is ideal. It also helps companies with compliance issues, or those that want to limit a subset of workers to certain functions.

    Like the IaaS service, DaaS is gaining the most traction in the mid-market. “We’re seeing DaaS succeed in the mid market as well,” said Danny Allan, senior director of end-user computing (EUC) at VMware. “It’s being adopted a lot in higher education, and state and local organizations where they have high degrees of concurrency. Healthcare is another good vertical.”

    Allan was previously CTO of Desktone, which became the foundation for VMware’s DaaS offering. He doesn’t envision the entire world moving to desktops as a service, but says demand is growing. “There’s been a huge inbound of requests,” said Allan.

    “The VMware position in general is the hybrid choice is a strategic imperative,” said Allan. “The fact that we can offer both form factors allows customers to choose to adopt cloud, or on-premises as needed.”

    Amazon Web Services recently entered the virtual desktop arena, raising the profile for Desktops as a Service. How does VMware stack up competitively? “It’s great that Amazon has come into the market and validated it a bit,” said Allan. “In terms of hybrid, the model that users want to choose is not just server based desktops. One of the ways we differentiate is that we develop the right workspace. It’s important not to restrict users into a single type of OS or Workspace. We believe in the flexibility of choice. Most organizations do not say ‘I’ll give the same type of desktop to everyone’.”

    DaaS hasn’t been deployed out of all of VMware’s data centers just yet, but Allan envisions a CDN-like approach to distribution. “We have CDN (Content Delivery Network) – we’re looking at DDN, Desktop Distribution Network,” said Allan. “Anywhere you are, you can consume the service. We support multi-data centers, we’re deployed in multiple data centers. But some of those nodes could be the customer site. Maybe the best place to reach DaaS is at the server closet at work, and at home, maybe it’s the data center.”

    3:41p
    Highlights from Data Center World

    The Data Center World Global Conference kicked off on Tuesday with a packed audience for a keynote session by industry veteran Scott Noteboom (now CEO of start-up LitBit). Attendees gathered at the Mirage Hotel and Casino in Las Vegas to network and attend sessions on a variety of facilities and data center management topics. Check out our photo feature at Highlights from Data Center World Global Conference.

    6:12p
    Judge: Microsoft Must Obey US Warrant Seeking Data Stored in Ireland

    Microsoft must turn over a customer’s personal data stored at a Dublin, Ireland, data center to the US government, a federal magistrate judge ruled earlier this month.

    Judge James Francis said companies like Microsoft and Google must comply with search warrants from US law enforcement agencies seeking customer data regardless of where that data is stored, Reuters reported.

    Documents related to the warrant, approved by Francis in December, are sealed, and it is unclear which agency is seeking the data. The request, however, includes the customer’s name, all emails they have sent and received, the time they have spent online, and any credit card numbers and bank accounts they may have used to make payments, according to news reports.

    Microsoft challenged the warrant, saying the government’s jurisdiction did not extend overseas. Francis struck the challenge down last week, ruling that requiring cooperation with foreign governments to secure such data would place too great a burden on the US government and would impede law enforcement efforts.

    Judge’s Decision Foreseen

    This was an expected first step in the Redmond, Washington-based company’s effort to push the government to “follow the letter of the law when they seek our customers’ private data in the future,” David Howard, Microsoft’s corporate vice president and deputy general counsel, wrote in a blog post. “When we filed this challenge we knew the path would need to start with a magistrate judge, and that we’d eventually have the opportunity to bring the issue to a US district court judge and probably to a federal court of appeals.”

    Just as the government cannot search a home outside US borders, it should not be able to search data stored overseas, Howard wrote.

    Countering this argument, Francis referred to a law called the Stored Communications Act, under which warrants seeking data function more like subpoenas, and subpoenas for information must be complied with regardless of where the information is stored.

    Internet Ignorance Common

    David Snead, an attorney and member of the Internet Infrastructure Coalition, said the judge’s ruling reflected a common misunderstanding of how the Internet works. “Too often, judges and other individuals believe that simply because a company is located in one country, they are not required to respect the laws of another,” he wrote in an email.

    “This is particularly true when data is located in multiple jurisdictions. It is well-established law that courts in one country cannot compel companies to violate the laws of another country.”

    Europe on Microsoft’s Side

    European officials have sided with Microsoft on the issue. Mina Andreeva, a European Commission spokesperson, told the BBC that companies operating in Europe have to play by European rules, regardless of where they are headquartered.

    “The European Parliament reinforced the principle that companies operating on the European market need to respect the European data protection rules – even if they are located in the US,” BBC quoted Andreeva as saying. “The commission’s position is that this data should not be directly accessed by or transferred to US law enforcement authorities outside formal channels of co-operation, such as the mutual legal assistance agreements or sectoral EU-US agreements authorizing such transfers.”

    Access by other means, she said, should only be provided in “clearly defined, exceptional and judicially reviewable situations.”

    6:52p
    When Authorities Knock on Data Center’s Door, Know the Law

    Disclosures of the National Security Agency’s digital surveillance programs, such as PRISM, by former NSA contractor Edward Snowden have already damaged the business of US companies providing services on the Internet, and these companies need to know the fundamental laws that govern access to the data they store before more damage is done, said David Snead, attorney and co-founder of the Internet Infrastructure Coalition.

    Snead delivered a presentation at the Data Center World conference in Las Vegas, Nev., Tuesday on the legal issues data center operators and service providers face when providing the US government access to data. His coalition, also referred to as the i2Coalition, describes itself as an organization that supports builders of “the nuts and bolts of the Internet.”

    Because of the way the network has developed, most Internet traffic travels through the US and Europe regardless of its origin or destination. Because US infrastructure is so central to the world’s traffic, NSA surveillance disclosures and US regulations have a big impact on the global Internet.

    Impact of PRISM Disclosures

    A lot of damage by the PRISM disclosures has already been done. Fifty-six percent of respondents to a post-Snowden Cloud Security Alliance survey said they were less likely to use US providers.

    The Alliance estimated potential losses to be between $21.5 billion and $35 billion.

    Two Acts Everyone Needs to Know

    Snead said there were two US laws that were fundamental to understanding the government’s rights to access data.

    The first one is the Communications Decency Act, which underpins the US communications infrastructure. “If there’s one statute to know, it’s this one,” he said.

    The act takes the responsibility for data and communications taking place on servers away from the operator of the data center housing those servers. Snead and the i2Coalition believe this is a bedrock statute, and they spend a lot of time defending it.

    The second law is the Electronic Communications Privacy Act (ECPA), enacted in 1986, which distinguishes between data with a privacy interest and data without one. It also makes data stored for 180 days or more accessible with a government subpoena rather than a warrant.

    “This statute undermines the US brand by creating exceptions to warrant requirements,” Snead said.

    Snead’s Tips

    Never turn over information in response to “courtesy subpoenas” unless required by law. Have procedures for employees so they are not complying with information requests just to be helpful.

    Make sure the lines between warrant and subpoena are clearly drawn.

    Snead recommends calling the FBI and inviting them for a tour to help them understand your business, so that when the time comes, they send you a warrant instead of coming in and taking an entire cage of equipment (which happens when FBI agents do not understand how the service provider business is run).

    Finally, providers need to understand how law enforcement can access data.

    6:58p
    HP Teams With Foxconn to Compete in Hyper-Scale Servers

    HP is turning to Taiwanese manufacturing giant Foxconn to try to gain traction in the market for hyper-scale servers. The companies today announced a joint venture that will develop servers targeted at the huge cloud builders, a market where HP has struggled to compete in recent years.

    Foxconn is best known as the manufacturer of many Apple products, including iPhones and iPads, and has worked with many American server vendors as well. That includes a lengthy relationship with HP, which will be expanded with the new joint venture.

    In turning to Foxconn, HP appears to be acknowledging that its challenges in the hyper-scale market won’t be solved by Project Moonshot, its ambitious in-house effort to develop low-power servers for the major cloud builders.

    Hot Competition in Hyper-Scale Market

    The Foxconn deal is HP’s latest effort to improve its competitive position in the booming cloud market, where Facebook, Google, Microsoft and others are buying tens of thousands of servers customized for cloud workloads. Much of that business has been won by contract manufacturers like Quanta, Wiwynn, Hyve and AMAX, which have captured market share with aggressive pricing.

    Those firms have gained momentum through the growth of the Open Compute Project (OCP), a movement founded by Facebook to design open source hardware for the hyper-scale market.

    HP is a member of OCP, but apparently sees the partnership with Foxconn as a more promising route to relevance in the hyper-scale market.

    “With the relentless demands for compute capabilities, customers and partners are rapidly moving to a New Style of IT that requires focused, scalable and high-volume system designs,” said Meg Whitman, president and CEO of HP. “This partnership reflects business model innovation in our server business, where the high-volume design and manufacturing expertise of Foxconn, combined with the compute and service leadership of HP, will enable us to deliver a game-changing offering in infrastructure economics.”

    What About Moonshot?

    The big question: what does this mean for Project Moonshot, HP’s highly touted in-house plan to innovate in the development of low-power, many-core servers for the hyper-scale market? It’s been two-and-a-half years since HP unveiled Moonshot, which initially focused on ARM chips from the now-defunct Calxeda but later expanded to include low-power chips from Intel, AMD, Texas Instruments and Applied Micro.

    HP’s announcement said the Foxconn server line will serve as a complement to its ProLiant servers, including those from the Moonshot initiative. HP said the Foxconn deal addresses the need for “a new approach to server design that brings together cloud solutions expertise, quick customer response and volume manufacturing.”

    “Cloud computing is radically changing the entire supply chain for the server market as customers place new demands on the breadth of design capability, value-oriented solutions and large-scale and global manufacturing capabilities,” said Terry Gou, founder and chairman, Foxconn. “In partnership with HP’s server leadership, we are embracing this new opportunity to change the industry, capture growth in this emerging market, and deliver end-to-end value as we expand our global leadership in design and manufacturing.”

