Data Center Knowledge | News and analysis for the data center industry

Tuesday, February 4th, 2014

    12:30p
    Reports: Apple is Building Its Own Content Delivery Network

    Apple is reportedly building its own content delivery network.

    Apple is building its own content delivery network (CDN) to deliver services to consumers, according to reports from Dan Rayburn and The Wall Street Journal.

    Rayburn, who closely tracks the content delivery market, reported Monday that Apple has formed a unit to build out a new CDN. Apple’s decision could have ramifications for two leading CDN providers, Akamai (AKAM) and Level 3 (LVLT), which currently provide Apple with content delivery services.

    “Since Apple is still in the build-out stage of their new CDN, it’s too early to know how this may impact Akamai and Level 3,” Rayburn wrote. “We don’t know what scale Apple wants to build their CDN out to, what region(s) of the world they want to have more control over and how quickly they can get it done. Clearly, Akamai is more at risk than Level 3 though as Akamai’s contract with Apple is worth a lot more and Level 3 could still sell Apple other services it needs for their build out, like IP transit, fiber, co-location and other products and services Akamai does not offer.”

    Rayburn noted that Apple’s efforts track similar moves by Google and Facebook to exert greater control over their networks.

    Later Monday, the Journal reported that Apple “is signing long-term deals to lock up bandwidth and hiring more networking experts.” In September Apple hired Lauren Provo, who ran the peering program at Comcast.

    2:00p
    Top 10 Ways to Improve Your Cloud Career and IT Skill Set

    New data center demands are creating a wide array of new types of specialists.

    It’s early 2014, and what have you done to progress your IT career? Technology is moving at a scorching pace. The latest Cisco Global Cloud Index report projects that annual global cloud IP traffic will reach 5.3 zettabytes by the end of 2017. Put another way, by 2017 global cloud IP traffic will reach 443 exabytes per month (up from 98 exabytes per month in 2012).
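
    As a quick sanity check on those projections (my own back-of-the-envelope arithmetic, not a figure from the Cisco report), the monthly and annual numbers are consistent:

        # 443 exabytes/month should roughly equal 5.3 zettabytes/year
        # (1 zettabyte = 1,000 exabytes).
        monthly_eb = 443                    # projected EB/month in 2017
        annual_zb = monthly_eb * 12 / 1000  # convert to ZB/year
        print(round(annual_zb, 2))          # 5.32 -- consistent with 5.3 ZB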

    All of this activity creates new types of positions. New data center demands are creating a wide array of new types of specialists. Engineers become architects, programmers become cloud designers, and database administrators become data scientists! There are a lot of new and interesting options out there to help you push your career to the next level.

    To be successful in the IT and cloud arena, you’ll have to optimize your existing skill set. With that in mind, let’s take a look at 10 great ways to accomplish this.

    • Get Social, Get Noticed. It’s not just about reading books. Social media is a major part of the cloud computing revolution. We do it here @datacenter – and you should be getting involved as well. Not only does technology operate on a second-by-second basis, we now have media that capture these updates in real time. Get your voice heard, start learning from others, and introduce the very social aspect of cloud computing into your skill set.
    • Learn a New Technology; Diversify! Are you an engineer or a specialist? Don’t let that control the rest of your career. There are plenty of engineers who specialize in very specific technologies and are happy to continue doing so. If you’re ready to break out of that mold, start examining parallel technologies that relate to your existing skill set. Are you working with databases? Start looking at data intelligence and big data. Do you work in networking? Start exploring software-defined technologies and cloud-network abstraction. The point is that there are amazing options which let you stay in some comfort zone while still getting the opportunity to challenge yourself.
    • Learn The Language of Business. One of the best ways to get into management or into that lead role is to understand the business process. Learn the language of the executive. Understand that they’re trying to find business challenges and correlate that to technology. If you’re able to speak the language of business, you’ll be able to better relay your IT ideas to the right people. In some cases, this means taking a business course as it relates to technology. Moving forward, IT and the business process are going to be completely unified. More organizations are now building their entire business model around their data center. Take that into consideration.
    • Think Like an Architect – See The Big Picture. This actually takes a bit of mental practice. Thinking like an architect forces you to understand the entire problem, everything that it will impact, and how to design an all-encompassing solution. Here’s the challenge: you have to take IT, business and future fluctuations into consideration. From there, you’ll have to convey your ideas intelligently to a broad audience. This isn’t something you can do overnight, but it’s a valuable skill to pick up. Instead of jumping right into a solution, take a few minutes and think about the impact of a given challenge. Then, think of a solution that not only helps solve that problem, but helps create a better overall infrastructure as well. It’ll take some practice, but thinking like an architect will help you truly understand the big picture around technology and business.
    • Understand Group and Organizational Dynamics. A big piece of IT is collaboration and communication. It may come as little surprise, but some IT folks struggle to communicate their thoughts to people outside of the IT community. A big piece of becoming a leader in your IT space is the ability to convey your thoughts and ideas to a broad group of people. The other challenge is making sure they understand your message. In many cases this means presenting to executives and engineers. Here’s something that needs to be understood immediately: IT is no longer an independent function of the business organization. Rather, modern businesses are actively integrating technology functions directly into their enterprise model. This means more communication, more reliance on the data center, and a lot more demands from the IT department. Business no longer tells IT where to go. Now, IT and technologists have a direct say in the direction that a business must take.
    • Translate Business and Marketing into Real IT Solutions. There have been a lot of new terms thrown into the technology mix recently. Let’s look at a few: software-defined technologies (SDx), data center operating systems (DCOS), next-generation firewalls (NGFW), community cloud platforms, fog computing, and converged infrastructures. Yes, some of these are marketing terms designed to draw more attention to the vendor. But the truth is that there are very real technologies behind these marketing terms. Let’s look at software-defined networking (SDN). Here you have a technology which completely abstracts the networking layer and allows the administrator to deploy truly advanced global networking configurations and services. Learn about these new technologies, how they are defined and how they can impact your business. Many of these technologies are the foundation of the future cloud and data center model. (For a concrete taste of SDN programmability, see the sketch following this list.)
    • Don’t be Afraid to Speak Up (Both Virtually and Physically). Ask questions, participate in meetings, and make sure to be heard. Research issues and be able to back your thoughts with solid data and technological metrics. Learning to communicate in today’s IT world now spans the physical and virtual world. This means getting your thoughts across –intelligently – digitally as well as in person. The only way you’ll be able to implement new technologies that you feel can truly make an impact is by getting your voice heard.
    • Network, Network and Network! Not only is it about who you know, it’s about how many of those people you know. Networking has become a critical career path for the modern IT person. Conferences, local meet-ups, online groups, and social media are the new ways to create your career and technology network. Broaden your network group by incorporating people outside of your area of expertise. Let me give you an example. I was originally a cloud expert focusing on cloud technologies, virtualization, and workload delivery. Over the years, I’ve aligned a lot of my new training around the modern data center, optimization technologies, and how cloud directly integrates with data center operations. Networking with people who can enhance your area of business not only increases your IT knowledge, but can help you optimize your organization.
    • Start Thinking Outside and Beyond the “Data Center.” Get creative. There are so many supportive technologies out there to help optimize your environment. If you’re a storage expert, start thinking about data management and optimization. How can you increase density, improve I/O, or decrease space utilization? Here’s the challenge: how can you optimize your data center environment while still lowering operational costs? Understanding SDx technologies and the data center virtualization revolution can help you create new types of business strategies. Learn about new supportive technologies which directly optimize core data center functions.
    • Never Become Complacent. Technology is evolving at such a fast pace that waiting a day to learn about a new solution may already put you behind. Even if you’re a specialized engineer, learn about new trends that will impact what you do. Infrastructures age, systems become legacy, and new platforms are constantly being introduced. It’s more important than ever to diversify your skill set and to stay on top of the latest IT trends. In today’s age of such rapid evolution, it will always feel like you’re catching up to new technologies. Even so, being complacent and ignoring the fast pace of technological evolution can be very detrimental to an IT career. Never forget the importance of constant innovation and all of the new platforms that can support the next-generation computing model.
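
    To make the software-defined networking point above concrete, here is a minimal Python sketch of pushing a flow rule to an SDN controller over a REST API. The controller address, endpoint path and JSON fields are hypothetical placeholders rather than any specific vendor’s API; the idea is simply that the network gets reprogrammed through software calls instead of box-by-box console sessions.

        import json
        import requests  # third-party HTTP library (pip install requests)

        # Hypothetical controller endpoint -- substitute your controller's real API.
        CONTROLLER = "http://sdn-controller.example.com:8080"

        # A simple flow rule: send IPv4 traffic for one host out a specific port.
        flow_rule = {
            "switch": "00:00:00:00:00:00:00:01",  # datapath ID of the target switch
            "name": "allow-web-server",
            "priority": 100,
            "eth_type": "0x0800",                 # match IPv4 packets
            "ipv4_dst": "10.0.0.10",              # destination host
            "actions": "output=3",                # forward out switch port 3
        }

        # One HTTP call reprograms the switch -- no manual device configuration.
        resp = requests.post(CONTROLLER + "/flows", data=json.dumps(flow_rule))
        resp.raise_for_status()
        print("Flow rule installed:", resp.status_code)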

    It’s really a fascinating IT world right now. Users are utilizing more devices, there’s a lot more data traversing the cloud, and the future is bright for the modern data center and cloud platform. Already we’re seeing the data center become the home of everything. Looking ahead, it’s clear that cloud will only continue to increase in its usage. There will be more offerings, better solutions, and a lot more power from the cloud infrastructure.

    This means that there have to be people who can understand the demands of the business and translate that into intelligent IT solutions. Start to think about your career, where you want it to go and what fascinates you in the modern IT era. The beauty here is that there is a place for everyone in the future cloud and data center infrastructure.

    2:55p
    Splunk and Internet2 Join Forces for Higher Education

    Splunk (SPLK) and Internet2 announced a strategic agreement to bring Splunk software to hundreds of potential new higher-education customers. With a pre-negotiated contract, member universities can deploy Splunk and receive special pricing. As part of the Internet2 NET+ service validation process, executives and staff from several Internet2 institutions – including Baylor University, Cal Poly San Luis Obispo, Harvard University, North Dakota State University, University of Illinois, Indiana University and the University of Washington – worked together to define and tailor the offering for higher education.

    “Splunk software is helping to deliver transformational insights to Baylor University, and we are excited to be the first institution to purchase a Splunk Enterprise subscription through Internet2,” said Jon Allen, assistant vice president and chief information security officer, Baylor University. “Baylor standardized on Splunk Enterprise as a platform for machine data, with a heavy focus on our IT operations and security. In IT operations, Splunk software made an immediate difference by helping us to reduce downtime and resolve problems faster. In security, we know what is happening on all of our networks and can identify advanced threats and anomalies in real time. These are the kinds of insights that revolutionize efficiency.”

    “Splunk Enterprise is a must-have solution to improve operations for any higher education institution, and we are pleased to help our members take advantage of what Splunk software can provide through this agreement,” said Shelton Waggener, senior vice president, Internet2. “Through Internet2’s NET+ Splunk offering, universities will have uniquely simplified access to valuable insight from the real-time analysis of machine data. Our members manage some of the most complex IT infrastructures, with many already using Splunk software to enhance security, lower costs, increase IT efficiency and improve operations.”

    “Splunk is thrilled to be working with Internet2’s NET+ staff to deliver our leading platform for machine data to even more higher education institutions,” said Rob Reed, worldwide education evangelist, Splunk. “Splunk Enterprise can help to transform the way universities operate. Our customers consistently report that Splunk technology helps them to do much more with less, and we are delighted to make it even easier for other institutions to benefit from the power of Splunk Enterprise.”

    3:46p
    SaaS vs. On-Premise Monitoring: Know the Difference

    The cloud model continues to evolve as more organizations place their infrastructure into a cloud-based solution. Through all of this, there is still the very real need to monitor and proactively maintain a data center and cloud infrastructure. Because more businesses are directly leveraging cloud resources, it’s more important than ever to truly understand what is happening in your cloud environment.

    With that in mind – how do you know what the right option would be? There are on-premises monitoring solutions and ones totally based in the cloud. Still, the “SaaS vs. on-premises” equation is different for every technology. There are some solution types where a SaaS approach has special benefits that are not obvious at first. In this whitepaper from LogicMonitor, we take a look at why network and server monitoring can be particularly suited to a cloud-based model. Furthermore, this paper examines 9 key points around why a cloud-based monitoring solution can be the right fit. This includes:

    • No Single Point of Failure
    • Easier Monitoring of Hybrid Environments
    • Fast Deployment
    • More Efficient, Effective Support
    • Lower Capital Expense and TCO
    • No Vendor Lock-In
    • Higher Reliability
    • Scalability and Predictable Budgeting
    • Elevated Security

    As your organization continues to grow, you’ll have to evaluate new types of monitoring and management solutions. The data center continues to evolve as more cloud connections influence how resources are utilized. Controlling resource allocation and overall infrastructure performance heavily depends on the proactive nature of a good monitoring solution.

    Remember, this also means having optimal security. One of the great things about a SaaS monitoring solution is that all data collection happens from a lightweight collector which resides behind your firewall. The collector makes outgoing connections only over SSL, and accepts no connections from the network. Monitoring servers reside in locked cabinets inside SSAE 16 SOC 1 Type 2 data centers, manned 24x7x365, with ingress and egress secured by electronic keycards and biometric hand scans, high-resolution motion-sensitive video surveillance, fully redundant power and HVAC, and VESDA fire-threat detection and suppression. Strong internal application controls protect data even from “root” users.
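
    As a rough illustration of the outbound-only collector pattern described above (a generic sketch, not LogicMonitor’s actual implementation; the endpoint URL and payload are placeholders):

        import json
        import ssl
        import time
        import urllib.request

        # The collector initiates every connection itself, outbound over TLS.
        # It never listens on a port, so the firewall can block all inbound traffic.
        MONITORING_ENDPOINT = "https://monitoring.example.com/api/ingest"  # placeholder

        def collect_metrics():
            """Gather local measurements; a trivial placeholder payload here."""
            return {"host": "db-server-01", "cpu_pct": 42.0, "ts": time.time()}

        def push_metrics(metrics):
            ctx = ssl.create_default_context()  # verifies the server certificate
            req = urllib.request.Request(
                MONITORING_ENDPOINT,
                data=json.dumps(metrics).encode(),
                headers={"Content-Type": "application/json"},
            )
            with urllib.request.urlopen(req, context=ctx) as resp:
                return resp.status

        while True:
            push_metrics(collect_metrics())
            time.sleep(60)  # report once a minute, always outbound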

    Download this white paper today to learn how a SaaS monitoring solution can create true security, great resiliency, and complete visibility into your environment. With all of this in mind, utilizing a cloud-based SaaS monitoring environment can, in many cases, create a robust data center which will better support business operations.

    4:05p
    Encouraging Good Citizenship in the Data Centre

    Andy Huxtable is with Colocation Product Management at CenturyLink Technology Solutions EMEA.

    ANDY HUXTABLE
    CenturyLink Technology Solutions

    Ask not what your data centre can do for you, but what you can do for your data centre.

    Naturally, the data centre industry is a big user of energy, so those of us who work within the industry need to do all we can to maximise energy efficiency. However, colocation providers simply supply the power and cooling that their customers’ IT equipment demands; without this demand, data centres would require no power. It is therefore essential that customers understand the positive impact they can have on improving these multi-tenanted environments.

    Service providers invest huge amounts of time and money in designing complex, energy-efficient data centres to create the perfect home for an enterprise-class IT operation. Further investments are made in the creation of robust processes, and the training that goes with them, to ensure 100 percent availability of energy-efficient power and cooling delivered to the data centre floor. With colocation, however, maintaining energy efficiency from this point onwards becomes the customer’s responsibility.

    How the IT load is set up and maintained on the DC floor has a big impact on energy efficiency and is greatly affected by human factors: the right people doing the right job, knowledge of and adherence to DC best practices, and good operational processes and procedures. A provider of data centre space has a responsibility to make sure everyone within its data centre knows what they can do to be a good citizen.

    This involves helping customers get maximum efficiency from their equipment in order to help maintain PUE (Power Usage Effectiveness) and drive down operating costs – savings which can ultimately be passed on to all customers.
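
    For reference, PUE is simply total facility power divided by the power delivered to IT equipment, so wasted cooling shows up directly in the ratio. A small worked example in Python (the numbers are illustrative, not from any particular facility):

        def pue(total_facility_kw, it_equipment_kw):
            """Power Usage Effectiveness = total facility power / IT power."""
            return total_facility_kw / it_equipment_kw

        # 1,000 kW of IT load in a facility drawing 1,600 kW overall:
        print(pue(1600, 1000))  # 1.6 -- every IT watt carries 0.6 W of overhead

        # If blanking plates and aisle integrity cut cooling demand by 100 kW,
        # the same IT load needs only 1,500 kW overall:
        print(pue(1500, 1000))  # 1.5 -- a direct, measurable efficiency gain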

    Small Things Add Up

    It starts with the simplest things: positioning equipment correctly in racks, maintaining hot- and cold-aisle integrity, following best-practice cabling procedures so as not to impede airflow, and using blanking plates. These are all investments which providers need to make with each customer at a granular level to help drive efficiency on the ground. Multiply one well-cabled server by 1,000 and that small but important attention to detail adds up to a significant improvement in airflow.

    Customers should be helped at every level, and in deliberate detail, rather than given the go-ahead to run off and do their own thing, which can sometimes be to the detriment of fellow data centre citizens.

    Data centre management is a complex task that requires specialised skills across a wide number of domains, such as security, power distribution, networking, and hardware and software management. The experience of the on-site data centre staff can significantly improve the level of support on offer, so the best providers have skilled staff in all key data centre capabilities, and do not simply rely on vendor staff who come in to service the data centre systems. Engineers and technicians who are highly trained and certified, with extensive experience in data centre management, should provide dependable advice and guidance that result in improved uptime. Providers that offer managed hosting and cloud services are often better suited to help with colocation needs, as their data centre staff are trained and experienced in supporting complex environments.

    Standard, documented processes and practices should be in place for all data centre activities, such as maintenance and change management, as standardisation drives better performance all round.

    The Devil Is In The Details

    Choosing a provider that can give access to data centre design experts in the initial development stages of suite design can greatly improve energy efficiency by ensuring the use of optimal rack design and structured cabling systems. A good provider will deliver these services free of charge, as standard, helping reduce power consumption through good green-IT principles, which benefits everyone. A great provider will continue to support the customer well beyond the design phase and throughout the full life cycle of their contract term. Designs, technologies and even staff may need to change over time, but the need to apply industry best practices will never go away.

    Data Centres: A Hub For All

    Data centres aren’t just a hub of compute activity; they are a hub for human intelligence.

    Attention to detail in order to foster good citizenship goes right down to the fundamentals. Client staff will spend significant time at the data centre and therefore office workspaces, lounges and conference rooms need to be suitable for employees to connect laptops or take conference calls. These are not just amenities; they are vital in providing a working, comfortable environment to keep staff productive and engaged.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    7:26p
    ViaWest Enters Phoenix Market With New Data Center
    ViaWest continues its national expansion, opening up its first data center in Arizona.

    ViaWest has opened its Phoenix data center, the company’s first facility in Arizona. The colocation provider says it’s seeing tremendous growth in the region, driven by disaster recovery and relocations from the California market, in addition to demand from enterprises looking to locate their primary production applications there.

    The Phoenix facility, which will have more than 40,000 square feet of raised floor space, offers colocation, managed and cloud services, along with hybrid offerings to meet growing demand for flexible and scalable IT solutions. The environment is fully compliant with PCI, SOC, SSAE and HIPAA requirements.

    Located in a region at low risk of natural disasters, ViaWest’s Phoenix data center will include the following (a quick consistency check on these figures follows the list):

    • Multiple medium voltage (12.47kV class) utility power feeds
    • 10 MW redundant, fault tolerant diesel power generation capacity
    • 2(N+1) 5.4 MW redundant, fault tolerant UPS capacity
    • 250+ watts per square foot high-density capability
    • 500,000-gallon on-site chilled water storage for continuous cooling
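
    As that consistency check (my own arithmetic, not from ViaWest’s announcement), the stated power density and floor space line up with the generation capacity:

        # 250 W per square foot across 40,000 sq ft of raised floor:
        total_watts = 250 * 40_000
        print(total_watts / 1_000_000, "MW")  # 10.0 -- matches the 10 MW diesel capacity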

    Industry veteran Chris Parsons will lead the charge in Arizona. Parsons has 20 years of leadership experience and knows the Phoenix market well, with more than 10 years’ experience there. Before joining ViaWest, Parsons served as a Regional Vice President for Internap Network Services, where he was responsible for sales and revenue growth in the Western U.S. Prior to Internap, he spent two years with Qwest and 13 years with Cable and Wireless USA, where he held various leadership roles in both direct and indirect sales channels.

    “ViaWest’s broad portfolio of infrastructure solutions will be a welcome addition in the Phoenix market,” said Parsons. “We are the first local provider to offer a robust portfolio of IT infrastructure solutions, which scale to meet customers’ growth and compliance requirements. I am excited to join the ViaWest team and capitalize on this opportunity.”

    The company, which is headquartered in Colorado, has been busy expanding into new markets. Last year it announced plans to build in Minneapolis, as well as a Tier IV facility in Denver, and it built out a multi-tenant Tier IV facility in Las Vegas.

    There’s been a lot of activity in Arizona, spurred at least in part by recently passed state tax incentives for data centers.

    7:39p
    Microsoft Cloud Chief Nadella Takes Over as CEO

    This article originally appeared on The WHIR.

    Satya Nadella, former EVP of Microsoft’s Cloud and Enterprise group, has been officially chosen as Microsoft’s next CEO.

    Many had speculated that Nadella would succeed Steve Ballmer, who has been Microsoft’s CEO since January 2000. Ballmer announced in August 2013 that he would step down within 12 months.

    This could signal an increased emphasis at Microsoft on cloud computing for businesses and consumers. Microsoft has been very successful in generating revenue from its cloud and server ventures. In its most recent quarterly report, Microsoft said commercial cloud services revenue more than doubled, and Office 365 commercial seats and the number of Azure customers both grew by triple digits. SQL Server and System Center also posted double-digit revenue growth.

    Nadella has been instrumental in overseeing Microsoft’s server business in its transition toward a more agile, cloud-first business model, making it one of the company’s top-performing business units.

    SATYA NADELLA
    Microsoft

    According to Tuesday’s announcement, Microsoft founder Bill Gates will also hand over his role as Chairman of the Board of Directors to John Thompson, but will continue to provide technology and product direction input as the board’s “Founder and Technology Advisor.”

    In Nadella’s email to employees, he notes that innovation, rather than copying others, will be prioritized, and that the company will try to keep mobility and cloud at the center of its business.

    He wrote, “As we look forward, we must zero in on what Microsoft can uniquely contribute to the world. The opportunity ahead will require us to reimagine a lot of what we have done in the past for a mobile and cloud-first world, and do new things.”

    This article first appeared at: http://www.thewhir.com/web-hosting-news/microsoft-cloud-chief-satya-nadella-takes-ceo

    7:50p
    AMD, Dell See Opportunity in ARM Server Market

    A close look at the AMD Opteron ARM development kit on display at the Open Compute Summit last week. (Photo: Colleen Miller)

    SAN JOSE, Calif. – Just two months after the best-known ARM server vendor abruptly closed its doors, two leading technology companies are seeking to establish leadership in this emerging segment of the server market.

    Last week chipmaker AMD unveiled a new 64-bit ARM server chip, code-named “Seattle,” giving fresh momentum to the effort to adapt cellphone chips for the server market. Today Dell announced a new ARM-based server, serving notice that AMD isn’t the only large player focused on developing ARM hardware.

    The AMD Opteron A1100 Series will be fabricated using 28-nanometer process technology. AMD is offering a development platform, including an evaluation board and software suite, with servers to follow in March. AMD is contributing its ARM microserver design to the Open Compute Project, and says it will be compliant with the group’s “Group Hug” standard for a common motherboard.

    “At AMD, we will be the leaders in ARM CPUs and servers,” said Andrew Feldman, Corporate Vice President and General Manager at AMD. “I believe that by 2019, a quarter of the server market will be running on ARM.”

    Dell Unveils Proof-of-Concept Hardware

    Today Dell revealed a proof-of-concept for a 64-bit ARM server based on Applied Micro’s X-Gene 64-bit ARM technology. The unit was developed by Data Center Solutions, Dell’s hyperscale computing unit, and will be available in the company’s solutions center in Texas.

    “As the ARM server ecosystem is still developing, our focus has been on enabling developers and customers to create code and test performance with 64-bit ARM microservers in order to foster broad-based adoption,” said Stephen Rousset, Director of DCS Architecture at Dell.

    The server market has been awaiting the arrival of production-ready ARM servers for some time. Chip designs from UK-based ARM Holdings are widely used in iPhones, iPads and other mobile devices that offer computing power in a compact format. As the data center market has focused on energy efficiency, ARM servers have been hailed as a potential game-changer in the server arena, offering a low-energy alternative to traditional x86 servers.

    Slow Progress for ARM Ecosystem

    But adapting ARM chips for servers has been a slow-moving process. One of the leading players in this effort was Calxeda, which went through $100 million in funding seeking to develop ARM server hardware before shutting down in December.

    “I’ve gotten a lot of questions about the health of the ARM ecosystem,” said Frank Frankovsky, Chairman and President of the Open Compute Project. “ARM has been listening to our questions. I think the ARM ecosystem is continuing to gain momentum and strength. In the next six months we’ll see more higher performance parts available to challenge x86 across a broad spectrum of workloads. My personal opinion is that we’ll see production-ready hardware in late 2014 and implementation in 2015.”

    “We see this as a five-year revolution,” said Ian Drew, the Chief Marketing Officer for ARM. “We’ll see monolithic structures breaking down as the cloud becomes real.”

    Common Development Framework

    At the Open Compute Summit, ARM announced the Server Base System Architecture, a platform standard for ARMv8-A based 64-bit servers. The standard is designed to provide a common framework for the 17 companies building solutions atop the ARM architecture – a group that includes AppliedMicro, Cavium, Dell, HP, Texas Instruments, Broadcom, Canonical, Citrix, Red Hat and Linaro.

    “It was done so the apps guys don’t have to port things 17 times. They can port it once,” said ARM’s Drew. “It takes away the roadblocks that could stop this data center revolution.”

    At the Open Compute Summit, ARM’s Ian Drew announced the Server Base System Architecture, a platform standard for developing software for ARMv8-A based 64-bit servers. (Photo: Colleen Miller)

    “They are standardizing the way all software looks at ARM processors,” said AMD’s Feldman. “While some may complain that the ARM software ecosystem is not mature, I’d say we’re through our gangly growth years.”

    AMD believes interest in ARM servers remains strong. “Nobody’s told me they’re not interested in ARM,” said Feldman. “It’s always ‘when can I get parts.’ We will have parts in March. By June and July we’ll be fully ruggedized and battle-tested.”

    Feldman acknowledges that after several years of ARM being deployed in testing and development environments, the market wants to see production hardware.

    “The talking’s been done,” he said. “It’s time to get some design wins and deliver some product to customers. 2014 is that year.”

    Potential for Storage, Caching Workloads

    Where might ARM gain traction? Feldman predicts that storage is one area where ARM can excel, along with caching technologies like memcached and website hosting. “The sophisticated buyers are where you’ll see the adoption, because that’s where cost per compute matters,” he said. “Hyperscale will flock to ARM.”
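
    One reason caching workloads suit many small, low-power cores is that a memcached-style request is dominated by network I/O and memory lookups rather than heavy computation. A minimal sketch using the pymemcache client (one of several Python memcached clients; it assumes a memcached server running locally on the default port):

        from pymemcache.client.base import Client  # pip install pymemcache

        # Connect to a local memcached server.
        cache = Client(("localhost", 11211))

        # The hot path is a simple set/get loop: tiny CPU cost per request,
        # heavy on network and memory -- the profile where low-power cores shine.
        cache.set("user:1001:profile", b'{"name": "example"}', expire=300)
        print(cache.get("user:1001:profile"))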

    But there’s plenty of competition in the low-energy server arena, including new 14nm Atom chips from Intel.

    “I think it’s foolish to underestimate the power that Intel’s going to bring to bear,” said Feldman.

    Today Dell showed off a proof of concept of a 64-bit ARM server for the hyperscale market. (Photo: Dell)

