Data Center Knowledge | News and analysis for the data center industry
Tuesday, February 28th, 2017
1:00p |
Equinix Rolls out Its Home-Baked DCIM Software for Colo Customers
Equinix has launched IBX SmartView, the DCIM software it developed in-house, which the company said was in the works last year. The software is now available to customers in some of the company’s data centers, with a worldwide rollout expected later, the company announced Tuesday.
Because DCIM software has proven difficult to implement in enterprise data centers, customers are likely to welcome a solution whose deployment has been handled by the service provider. Colocation companies are prime targets for DCIM vendors, who have found it difficult to grow revenue in the enterprise market.
Some colo providers, such as Digital Realty Trust, have chosen to partner with vendors for their DCIM solutions, while others, such as Equinix, IO, and the French provider Etix, among others, have invested in developing their own tools.
IBX SmartView functionality includes alerts, real-time and trend data for things like temperature and humidity, as well as infrastructure operating status. Customers get a customized view of their footprint, including a unified view of building management system data from multiple Equinix locations.
The DCIM software is currently available through the Equinix Customer Portal, but the company is planning to provide APIs for integration with customers’ own tools and a mobile interface in future releases.
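Equinix has not yet published those APIs, so any integration code is necessarily speculative. As a rough illustration only, the Python sketch below polls a hypothetical DCIM-style REST endpoint for a cabinet’s temperature and humidity; the base URL, authentication scheme, and response fields are placeholders invented for this example, not the actual IBX SmartView interface.

# Hypothetical sketch of polling a DCIM-style REST API for environmental data.
# The endpoint, auth scheme, and response fields are assumptions for
# illustration; Equinix has not yet published the IBX SmartView APIs.
import requests

API_BASE = "https://api.example-colo.com/smartview/v1"  # placeholder URL
API_TOKEN = "YOUR_API_TOKEN"                            # placeholder credential

def fetch_cabinet_environment(ibx, cage, cabinet):
    """Return the latest temperature and humidity reading for one cabinet."""
    resp = requests.get(
        f"{API_BASE}/environment/{ibx}/{cage}/{cabinet}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        timeout=10,
    )
    resp.raise_for_status()
    data = resp.json()
    # Assumed response shape: {"temperatureC": 22.4, "humidityPct": 45.1}
    return data["temperatureC"], data["humidityPct"]

temp_c, humidity = fetch_cabinet_environment("DC11", "cage-101", "cab-07")
print(f"Cabinet cab-07: {temp_c} C, {humidity}% RH")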
See also: DCIM on a Budget: Data Center Optimization Without Breaking the Bank | 3:35p |
IT Workers Protest Layoffs, Offshoring of Jobs to India
Brought to you by MSPmentor
Nearly 80 IT workers at a California university were expected to protest Tuesday, on their final day on the job before turning their duties over to a third-party IT services firm from India.
The workers, members of the Communications Workers of America (CWA), were informed last summer that their positions at the University of California, San Francisco (UCSF) were being outsourced to the IT services firm, HCL.
Since then, the workers have been training their replacements via videoconference, and in person for a few foreign employees who were brought to the U.S. on H-1B visas.
“It is the first time a public university has ever offshored American information technology jobs, undermining its own mission to prepare students for high-tech careers,” the CWA said in a statement.
See also: How to Get a Data Center Job at Facebook
Training the replacements was among the conditions for receiving severance packages, union officials said.
The layoffs affect 48 full-time IT workers, 12 contract employees and 18 vendor contractors. Additionally, 18 vacant positions will not be filled.
In all, UCSF is slashing about 17 percent of its 565-person IT operation.
The five-year, $50 million contract with HCL is expected to save the university about $30 million during the period.
Union officials worry that the contract could be just the start of a wave of outsourcing of IT jobs from the University of California.
“The offshoring could soon spread beyond UCSF, as the HCL contract can be utilized by any of the 10 campuses in the UC system,” the CWA statement said.
This article originally appeared on MSPmentor. | 4:00p |
Meet Digital Bridge, a New Consolidator in the US Data Center Market
Marc Ganzi believes all the recent data center construction and acquisition activity is only the early innings, and that demand for physical space to house digital information will continue growing for the foreseeable future.
His company, Digital Bridge, followed its first foray into the data center business – the acquisition of DataBank last year – with the acquisition of C7 Data Centers and of individual sites in Cleveland and Pittsburgh from 365 Data Centers this year. There are also rumors that Digital Bridge is preparing to buy Silicon Valley’s wholesale data center heavyweight Vantage Data Centers, which also has a campus in Quincy, Washington.
The company has brought Michael Foust, founder and former CEO of Digital Realty Trust, on as DataBank’s chairman, while Jon Mauck, former CFO at IO, now leads acquisitions as DataBank’s chief investment officer.
Ganzi, a businessman and well-known polo club owner and player (his team won the US Open Polo Championship in 2009), is taking his cues from consumer appetite for digital products, which has driven demand for the telco infrastructure businesses his company owns and which he expects will continue fueling a thirst for data center real estate.
Whether or not it closes the Vantage deal – its representatives declined to comment on a recent Reuters report on the potential acquisition, referring to it as “marketplace rumors” – Digital Bridge will not stop there. “The data center space is actually in the early innings,” Ganzi says. “There’s still a fantastic opportunity to roll up the space and to create a platform of scale.”
He doesn’t see DataBank and Digital Bridge’s other holdings (three cell-tower companies and a mobile connectivity solutions company) as stand-alone businesses. There’s a synergy, and Ganzi believes that synergy will only improve as all the various elements of internet infrastructure continue converging.
He sees this convergence playing out in meetings with customers. “It’s not uncommon for us to have a meeting with a customer, and two of our CEOs will show up.” Its small-cell and tower teams will show up to a meeting with Verizon; a small-cell and a data center team will come to meet with Google.
Digital Bridge is building a holistic internet infrastructure play, primarily targeting big customers that need solutions extending from wireless networks down to the data centers that process and store the data traveling to and from those networks. Having multiple components of the communications delivery system is becoming vital for a business like Digital Bridge, and it’s not the only company doing it, Ganzi says, listing Zayo Group, Crown Castle, and CS&L REIT as examples of competitors who are already “walking the talk.”
Digital Bridge has devised a conservative acquisition strategy, which in broad terms consists of targeting facilities in markets with a supply-demand imbalance, ample network access, and close proximity to major interconnection points; sellers with high-quality credit; and tenants with long-term leases.
The company wants to finance these acquisitions in the long-term bond market and places a lot of emphasis on efficient financing, which Ganzi says is critical for data centers as an asset class. “If we can’t build a strong credit story around the property, we’re not going to be interested in buying that asset,” he says. “We’ve killed more deals than we’ve closed.”
If you’re a company that’s looking to roll up the data center market in the US, this is a good time and place to be. While last year set the record for data center acquisitions, analysts expect this year to be off the charts. Not only are there data center providers looking to sell, there are also enterprises that are moving out of corporate data centers into colocation facilities and selling those corporate assets, according to a recent report by the real-estate brokerage CBRE.
There are many private equity-backed data center companies with two to two dozen facilities out there, Ganzi says. There’s a big tranche of transactions on the table right now and a lot of speculation about assets coming to market in the near future. “It’s a pretty active M&A market right now,” he says. | 4:30p |
Idaho Considering Data Center Tax Breaks
State legislators in Idaho will consider a bill to add data center tax breaks to the state’s tax code, making it more attractive to companies looking to build server farms.
The Idaho Department of Commerce supports the measure, which it expects to make the state more competitive in attracting data centers. There are currently seven data centers in Idaho, a department official told the Associated Press. Service providers DataSite and Involta, as well as the FBI, among others, have data centers in the state.
Twenty other states have data center tax breaks in place today, according to the AP. State and local officials use them as incentives to attract the large construction projects, which create a burst of economic activity in local areas during construction and some long-term jobs once they’re complete, in addition to revenue from property taxes and sales taxes on equipment and energy purchases.
Idaho’s bill would offer sales tax rebates on server equipment, though a lot of expensive network, electrical, and mechanical infrastructure equipment also goes into a data center. State officials estimate that the bill would take $531,000 per year out of the state’s general fund. | 5:00p |
Will Facebook Renew Its Data Center Leases in Ashburn?
One of the biggest things DuPont Fabros Technology execs will be focused on this year is trying to ensure that Facebook doesn’t vacate its Northern Virginia data centers as some of its leases start expiring in 2018.
The social network giant has numerous leases in four of the wholesale data center provider’s Ashburn facilities, cumulatively representing north of 20 percent of the provider’s total annual rent income. The only customer responsible for a bigger portion of DuPont Fabros’s revenue is Microsoft, which contributes 25.4 percent.
“Renewal discussions are an important 2017 focus as our first lease expiration with Facebook occurs in mid-2018,” DuPont Fabros CEO Chris Eldredge said on the company’s earnings call. “The plan is to engage in discussions that lead to a successful renewal.”
More on DuPont Fabros’s Q4 and 2016 earnings: After Beating Its Own Leasing Record in 2016, DuPont Fabros Keeps Foot on Gas
Some of the Facebook data center leases in three older Ashburn facilities (ACC4, ACC5, and ACC6) are due to expire in 2018, some in 2019, and some in 2020 and 2021. The lease that’s up for renewal next year actually represents the smallest percentage of annual rent (2.2 percent).
Still, non-renewal would mean lowering profit guidance and possibly a negative impact on the data center REIT’s stock. Even if the social network does renew, DuPont Fabros’s profit from the Facebook leases would shrink – its leases at two of the facilities are well above current market rates – but not by as much as it would if Facebook walked away.
See also: LinkedIn Vacates Lots of Space at Equinix Data Centers
If the current market dynamics remain, chances are DuPont Fabros would not have too much trouble filling the space that may be vacated by Facebook next year. Northern Virginia is the hottest data center market in the country and one of the hottest in the world, and hyper-scale cloud providers, such as Microsoft, are hungry for capacity in key locations like that.
Asked whether it would be difficult for the provider to backfill space potentially vacated by Facebook, Jim Kerrigan, managing principal at the data center real-estate brokerage North American Data Centers, said, “Not at all. Demand is still strong right now.”
A Facebook spokesperson declined to comment.
See also: How to Get a Data Center Job at Facebook | 5:30p |
Cloudera Said to Choose Banks for IPO as Soon as This Year
By Alex Barinka (Bloomberg) — Cloudera Inc., the big-data company backed by Intel Corp., hired underwriters for an initial public offering that could come as soon as this year, people with knowledge of the matter said.
The company, based in Palo Alto, California, is eyeing a valuation of about $4.1 billion, said the people, in line with what it fetched in its last private round three years ago. Cloudera notified a number of firms this month that they’d been picked to lead the IPO, said the people, who asked not to be identified because the information is private.
After a quiet start to 2017, the U.S. market for technology IPOs will face its first test this week. Snap Inc., the maker of the disappearing-photo app, is seeking to raise as much as $3.2 billion in an IPO scheduled to price Wednesday that could value the company at as much as $18.5 billion.
Smaller enterprise-technology companies MuleSoft Inc. and Alteryx Inc. both filed to go public this month.
A representative for Cloudera didn’t respond to a request for comment.
The company creates tools and provides services centered on the open-source data analysis software Hadoop. Its technology helps wrangle massive amounts of data, analyze it, and use it to make decisions in real time. Cloudera competes with the likes of Hortonworks Inc. and MapR Technologies Inc.
It has raised upwards of $1 billion in private funding, including a $900 million round in March 2014. That injection included $740 million from Intel, as well as $160 million from investors T. Rowe Price Group Inc., Google Inc.’s venture arm and Michael Dell’s investment firm, MSD Capital LP. | 7:54p |
Tech Primer: Clarity on Containers
Kong Yang is Head Geek at SolarWinds.
Container ecosystems from the likes of Google, Docker, CoreOS, and Joyent are easily among the more intriguing IT innovations in the enterprise and cloud computing space today. In the past year, organizations across all major industries, from finance to e-commerce, took notice of containers as a cost-efficient, portable, and convenient means of building an application. The prospects gave organizations an exciting new model to compare and contrast with virtualization.
But for all the hype, many organizations and IT professionals still struggle to understand both the technology itself and how to take advantage of its unique benefits—especially as Docker, the market leader, begins to expand the use cases for containers into the stateful architectural landscape that is common with enterprise applications. This technology primer aims to provide clarity on containers and arm you with everything you need to know to successfully leverage this technology.
Containers 101: Back to Basics
To start, one of the main misconceptions about containers is that they are part and parcel replacements for virtual machines (VMs).
Despite some early enterprise adopters implementing them as such, that is not the case. In a nutshell, a container consists of an entire runtime environment—an application, its dependencies, libraries and other binaries, and the configuration files needed to run it—bundled into one package designed for lightweight, short-term use. When implemented correctly, containers enable much more agile and portable software development environments, abstracting the underlying servers and operating systems away from the application.
Virtualization, on the other hand, includes a hypervisor layer (whether Microsoft Hyper-V or VMware vSphere) that segregates virtual machines and their individual operating systems. Virtualization abstracts the resources of the underlying hardware infrastructure, consisting of servers and storage, so that VMs can draw on these pooled resources. VMs can take considerably longer than containers to prep, provision, and deploy, and they tend to stay in commission much longer than containers. As a result, VMs tend to have much longer application lifecycles.
Therefore, a key difference is that containers are not intended to be long-term environments the way VMs are; rather, they are designed to (ideally) be paired with microservices in order to do one thing very well and move on. With this in mind, let’s discuss some of their benefits.
First, as mentioned, containers spin up much more quickly and use less memory, ultimately leaving a smaller footprint on data center resources than traditional virtualization. This is important, as it enables process efficiency for the development team, which in turn leads to much shorter development and quality assurance testing cycles. With containers, a developer could write and quickly test code in two parallel container environments to understand how each performs and decide on the best code fork to take. Docker builds an image automatically by reading the specific set of instructions stored in the Dockerfile, a text file that contains all the commands needed to build a given image. Containers, in other words, should be ephemeral: they can be stopped, changed, and rebuilt with minimal setup and configuration.
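As a minimal sketch of that build-and-discard workflow, the Python example below uses the Docker SDK for Python (the docker package) to build an image from a local Dockerfile and run it as a throwaway container; the build path, image tag, and test command are placeholders, and a Dockerfile is assumed to exist in ./app.

# Minimal sketch of an ephemeral container workflow using the Docker SDK
# for Python (pip install docker). Path, tag, and command are placeholders;
# a Dockerfile is assumed to exist in ./app.
import docker

client = docker.from_env()  # connects to the local Docker daemon

# Build an image from the instructions in ./app/Dockerfile.
image, build_logs = client.images.build(path="./app", tag="demo-app:test")

# Run the image as a short-lived container; remove=True deletes the
# container as soon as the command exits, keeping the environment ephemeral.
output = client.containers.run("demo-app:test", command="pytest -q", remove=True)
print(output.decode())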
Containers can also support greater collaboration between multiple team members who are all contributing to a project. Version control and consistency of applications can be problematic with multiple team members working in their own virtual environments; think of all the different combinations of environment configurations. Containers, on the other hand, drive consistency in the deployment of an image, and combining this with a hub like GitHub allows for quick packaging and deployment of consistent, known-good images. The ability to quickly spin up mirror images of an application allows various members of the same development team to test and rework lines of code in flight, within disparate but consistent image environments that can ultimately synchronize and integrate more seamlessly.
Interestingly, Docker has begun evolving container technology to go beyond the typical test-dev model, and both deliver additional business value and reduce the barrier of consumption for enterprises. Docker engine runs on major desktop platforms like Windows, Linux, and Macintosh operating systems, which allows organizations to get experience with containers while demoing and testing a few use cases on laptops. Additionally, Docker’s recent acquisition of Infinit, a distributed storage vendor, emphasizes the company’s intention to expand and support enterprise needs. The integration of Infinit’s technology will allow developers to deploy stateful web architecture and legacy enterprise applications. This combination of technologies aims to influence organizations saddled with technology debt and legacy applications to adopt containers.
Virtualization or Containers: Which is Right for You?
So, how do you decide when to leverage containers? It starts with a fundamental understanding of your application architecture and its lifecycle—from development to production to retirement. Establishing this baseline will help you decide whether a given application is ideal for containers or better left on VMs.
An e-commerce site, for instance, might decide to transition from several VMs executing multiple functions to a container-based model in which the tiered “monolithic” application is broken down into several services distributed across public cloud or internal infrastructure. One container image would then be responsible for the application client, another container image for the web services, and so forth. These containers can be shipped to any number of host machines with identical configuration settings, so you can scale and drive consistency across your e-commerce site.
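A hedged sketch of that decomposition, again using the Docker SDK for Python: two hypothetical service tiers (a storefront client and a web-services API) are launched from separate images with identical, declarative configuration, so the same definition can be reused unchanged on any host. The image names, ports, and environment values are illustrative only.

# Illustrative sketch: running two tiers of a decomposed e-commerce app as
# separate containers from one declarative configuration. Image names,
# ports, and environment values are hypothetical.
import docker

client = docker.from_env()

SERVICES = {
    "storefront": {"image": "shop/storefront:1.0", "port": 8080},
    "web-api": {"image": "shop/web-api:1.0", "port": 9090},
}

containers = []
for name, spec in SERVICES.items():
    containers.append(
        client.containers.run(
            spec["image"],
            name=name,
            detach=True,                            # keep the service running
            ports={f"{spec['port']}/tcp": spec["port"]},
            environment={"APP_ENV": "staging"},     # same config on every host
        )
    )

for c in containers:
    print(c.name, c.status)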
However, even though some applications in your environment might be prime candidates to shift to containers, the cost of evolving people, processes, and technology remains a large obstacle for most organizations that currently support virtualization. Heavy investments in vSphere, Hyper-V, or KVM virtualization solutions, not to mention the accumulated technical expertise and processes that support them, are a key reason why businesses today are struggling to adopt container technology.
Despite this, organizations should look for opportunities to gain experience with container technology. As demonstrated by Docker’s expansion into more enterprise capabilities, containers can certainly begin to play a larger role in the modern data center, where web scale and mobile rule. The following best practices will help businesses better prepare themselves to work with and manage containers:
Getting There: Best Practices for Working with Containers
- Adopt strategically – As mentioned, there are a few barriers to adoption for many organizations (cost, technology debt, the need to build up operational expertise, etc.), so a move to integrate containers requires thoughtful consideration. To ease your way into containerization, your organization should look for low-hanging-fruit opportunities, which tend to be test-dev environments. You should aim to leverage Docker’s compatibility with Windows, Linux, and Macintosh OSs to gain experience with some of the simpler use cases, like normalizing development environments. This will help you better understand how containers could play a larger role in your organization’s delivery of more complex applications or workloads.
- Monitor as a discipline – To determine how best to integrate container technology into your existing environment, IT professionals must leverage a comprehensive monitoring tool that provides a single point of truth across the entire IT environment and application stack. The resulting performance and behavioral baseline supplies the data from which subject matter experts can determine which workloads are candidates for containers and which for VMs. At the end of the day, companies expect both performance guarantees and cost efficiency. The best way to meet this requirement is with monitoring tools that provide an understanding of how your applications change over time and track the actual requirements of each application and its workload (the sketch after this list shows one way to collect such a baseline for a container).
- Automate and orchestrate your application workflow – Containers aim to drive scalability and agility by normalizing the consistency of configurations and application delivery. Thus, automation and orchestration become key to successful container efficacy. Organizations leverage containers to automate the provisioning of resources and applications, whether to run a service in production or to run and test it beforehand, and to do it at web scale. Once you’ve reached this type of scale, you need to orchestrate the workload to take advantage of the collaboration efficiency between all development team members.
- A security state of mind – By sharing the same operating system kernel and associated system memory, containers are able to be extremely lightweight and easy to provision. However, this also means any user or service with root access to that kernel is able to see and access all containers sharing it. With the cadence of data breaches showing no sign of slowing down, organizations that choose to work with container technology will need to create a security framework and set of procedures that is consistently evaluated and updated to prevent attacks. Examples of these preventive measures include reducing the container attack surface and tightening user access control, as in the sketch below.
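As a concrete, if simplified, illustration of those last two practices, the Python sketch below (Docker SDK again) starts a container with a non-root user, a read-only root filesystem, all Linux capabilities dropped, and a memory cap, then grabs a one-off stats snapshot as a simple performance baseline. The image, command, and limits are placeholders rather than recommended values.

# Sketch: launching a hardened container and collecting a resource-usage
# snapshot as a monitoring baseline. Image, command, and limits are
# placeholders, not recommendations.
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine:3.19",
    command="sleep 300",
    detach=True,
    user="1000:1000",                     # run as a non-root user
    read_only=True,                       # read-only root filesystem
    cap_drop=["ALL"],                     # drop all Linux capabilities
    security_opt=["no-new-privileges"],   # block privilege escalation
    mem_limit="256m",                     # cap memory usage
    tmpfs={"/tmp": "size=64m"},           # writable scratch space only
)

# One-off stats snapshot (CPU, memory, network) for a performance baseline.
snapshot = container.stats(stream=False)
mem_bytes = snapshot["memory_stats"].get("usage", 0)
print(f"{container.name}: {mem_bytes / 1024 / 1024:.1f} MiB in use")

container.stop()
container.remove()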
Conclusion
In the year ahead, I expect IT departments will finally come to a greater understanding of container technology and how it can realistically and appropriately be used for IT operations alongside virtual infrastructure.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | 8:48p |
No Shortage of Twitter Snark as AWS Outage Disrupts the Internet
As with any major outage of a popular cloud service, the sarcasm floweth across the tweetosphere as Amazon Web Services engineers struggle to figure out what’s wrong with the infrastructure behind the company’s hugely popular cloud storage service, S3. The outage is affecting many other AWS services, such as Athena, Kinesis Firehose, Elastic MapReduce, and Simple Email Service, among others, as well as a host of online services run by other companies that rely on Amazon’s cloud.
The AWS outage started Tuesday morning, and many users learned about it on Twitter before they saw a notification on the AWS service health dashboard. As it turned out, some of the dashboard’s functionality also depends on S3, which is why notifications came late, Amazon later explained, also via Twitter.
The outage appears to be isolated to systems hosted in Amazon’s Northern Virginia data centers – the cloud giant’s biggest infrastructure cluster.
The long list of companies affected by the AWS outage includes Adobe, Atlassian, Business Insider, Docker, Expedia, GitLab, Coursera, Medium, Quora, Slack, Twilio, and the US Securities and Exchange Commission, among many others.
In a status update posted at 11:35 AM Pacific (the service health dashboard has since been fixed), AWS said it continued to experience high error rates with S3 in US-East-1, the region hosted in its Northern Virginia data centers. “We are working hard at repairing S3, believe we understand root cause, and are working on implementing what we believe will remediate the issue,” the update read.
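For teams whose applications depend on a single S3 region, one common mitigation (sketched below in Python with boto3) is to replicate critical objects to a bucket in a second region and fall back to that replica when the primary region is returning errors. The bucket names and regions here are hypothetical, and cross-region replication would have to be configured separately for the fallback to have anything to read.

# Sketch: reading from a cross-region replica bucket when the primary
# region is failing. Bucket names and regions are hypothetical; assumes
# S3 cross-region replication is already configured.
import boto3
from botocore.exceptions import ClientError, EndpointConnectionError

primary = boto3.client("s3", region_name="us-east-1")
fallback = boto3.client("s3", region_name="us-west-2")

def get_object_with_fallback(key):
    try:
        resp = primary.get_object(Bucket="myapp-assets-use1", Key=key)
    except (ClientError, EndpointConnectionError):
        # Primary region unavailable or erroring; try the replica bucket.
        resp = fallback.get_object(Bucket="myapp-assets-usw2", Key=key)
    return resp["Body"].read()

data = get_object_with_fallback("img/logo.png")
print(f"Fetched {len(data)} bytes")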
Here’s a sampling of the snark the AWS outage has inspired on Twitter: