Data Center Knowledge | News and analysis for the data center industry
Wednesday, February 11th, 2015
1:00p
ViaWest Building Massive Oregon Data Center

ViaWest is building a third data center in Oregon that will almost triple the size of its colocation footprint in the state. A healthy sales pipeline from both Oregon and California has the company building big.
ViaWest currently has around 50,000 square feet of sellable space in the area. The new facility, when fully built, will add close to 140,000 square feet of colocation space within a 210,000-square-foot building. In addition to being a much larger facility, the Hillsboro, Oregon, data center will feature the company’s latest design refinements for high quality and reliability.
“We build in different generations of data centers, and this data center will be our fourth ‘Generation 4’ data center,” said Dave Leonard, ViaWest’s chief data center officer. The other three are in Las Vegas, Minnesota, and Denver.
In the latest generation, the topology of the electrical system is more fault tolerant and the data center is configured in a way that is much more reliable. The company uses what it calls “Super-CRAH” units, which feature a patent-pending cooling technology it initially invented for the Lone Mountain data center in Las Vegas. “They are tuned to use free cooling, and each is tuned to the market it’s in,” said Leonard. Different location means different environment and different tuning.
The units are individually, rather than centrally, controlled, which makes them more reliable, according to Leonard.
Another refinement in ‘Gen 4’ is a phased approach to construction. ViaWest has made it easier to add capacity along the way, as opposed to building the facility all at once.
ViaWest continues to expand under its parent Shaw Communications, which acquired it for $1.2 billion last July as a keystone of its data center business. ViaWest has accelerated its expansion, making a bigger commitment with a much bigger facility in Oregon.
“We’re very confident because the sales pipeline from Oregon and California is huge,” said Leonard.
While ViaWest will employ around twenty people, the area will benefit from the number of jobs that retail colocation facilities indirectly create.
Many out-of-state companies that initially take colocation in Oregon develop a local office or presence to be close to their servers, Leonard explained. The types of customers that use retail colocation are very often high-touch and hands-on, desiring to be close to the servers.
“There’s an indirect pull-through,” he said. “We have quite a few customers that came to the area for colocation and now have substantial presence.”
Shaw and ViaWest have been a great fit so far, Leonard said. “We fill such a specific need for Shaw: we expand their growth capability into data [center] market, and they fill a need for us with permanent capital financing to support growth. Culturally, the two companies fit together like hand and glove.”
Market Booming Despite ‘Brand Tax’
Oregon’s data center scene is booming, and many businesses in California are interested in Oregon colocation. Oregon has lower taxes, more business-friendly regulations, and relatively low-cost power, as well as much lower earthquake risk than high-risk regions like the San Francisco Bay Area.
As the ViaWest project illustrates, growth has not been deterred by a controversial tax issue. While the state has a very favorable tax structure, a central assessment tax that factors brand value into a company’s property assessment is currently being reevaluated by legislators. The tax applies to large telecommunications companies but has not been applied to data centers.
However, uncertainty over whether it could be extended to others is causing concern among bigger brands and multinationals like Amazon and Google. It led Amazon to temporarily halt construction of its fifth data center in the state until the issue is resolved.
The Hillsboro area is known as Silicon Forest, due to a high concentration of technology companies. There are massive data centers by Amazon, Google, Facebook, and Apple. Other tech giants, including Intel, HP, IBM, Salesforce, Oracle, and Autodesk, have presence there as well.
Other recent data center projects in Hillsboro include construction by Comcast and T5 Data Centers. T5 is building a two-building campus in Hillsboro, one of them pre-leased to an undisclosed Fortune 100 company. T5 recently secured $55.5 million in credit for expansion.
Recently created national provider Infomart Data Centers is also present in Hillsboro.
There are submarine cable landing stations in six towns along the coast, one of which is Hillsboro, according to TeleGeography. Cables that land there offer direct connectivity to countries in the Asia Pacific region, Alaska, Northern California, and Southern California.

3:02p
TelecityGroup and Interxion Merge into Major European Data Center Provider

U.K.-based TelecityGroup and Netherlands-based Interxion are merging in a $2.2 billion deal that creates one big European data center provider. Telecity will structure the deal as an offer for Interxion, with London serving as the combined company’s primary home base.
The two companies are competitors in many markets and both have focused on attracting cloud service providers into their fold. Both are top players in Europe. Their combined forces make a formidable foe for the likes of Equinix.
Interxion has performed better, growing around 15 percent last year while Telecity grew around 7 percent, but Telecity is slightly bigger.
Interxion has 39 data centers and is valued at close to $2 billion. Telecity also has 39 data centers, spread across 11 European countries, and a market capitalization of over $2.6 billion.
The combination is expected to yield total cost and capital synergies estimated at about $680 million, which made the deal attractive to both boards.
Interxion Chief Executive David Ruberg will be appointed CEO of the combined group for a 12 month period. John Hughes will continue as Executive Chairman of Telecity until the deal is completed.
Telecity shareholders will own a slight majority of 55 percent while Interxion shareholders get 45 percent of the combined company.
The deal also pools capital for continued European expansion. A united front potentially means a more efficient approach to that expansion, with bigger builds.
Europe’s countries are all very disparate markets that require in-country builds to serve properly. It can be capital intensive to properly serve each market. Europe as a whole is expected to grow, particularly when it comes to cloud use.
Because each country is often very much its own market, the cloud landscape hasn’t consolidated into a handful of major players the way it has in North America. The provider landscape is fractured, with in-country specialists, integrators, and managed service providers winning this business. These make for great colocation customers and have been a focus of both companies.
Both companies have “cloud ecosystems” and focus on providing cloud connectivity.
Both companies have been the subject of takeover speculation at various points, most recently Telecity, with rumors that private equity firms were circling.
Both companies also like to acquire popular interconnection spots. Telecity’s strategy has been to buy such interconnection hubs in emerging markets; it acquired two smaller providers in Bulgaria and Poland to enter those markets in 2013.
Interxion recently acquired a well-connected interconnection hub in Marseille and expanded in Amsterdam and Frankfurt on the back of strong leasing activity.
With synergies come potential downsides. Until the merger closes, each company remains very much the other’s competitor and will keep fighting for the same business. Synergies also mean overlap, which can translate into operational inefficiency.
The two companies will have to evaluate their overall strategies and property portfolios, and in the short term they will operate as independent competitors.
“I believe that the combination of Interxion and Telecity represents an attractive value creation opportunity for our shareholders, with improved access to capital markets, reduced cost of capital and a strong balance sheet,” said Interxion Chairman John Baker in a release.
Data Center Knowledge will provide a full breakdown of the deal after further analysis.
4:30p
Data Center Building vs. Outsourcing: What’s Best For Your Business

Ernest Sampera is the Chief Marketing Officer of vXchnge, a carrier-neutral colocation services provider that helps improve the business performance of its customers.
When businesses are on the fast track and experiencing growth, they often find themselves in need of additional space for their data. Whether it’s adding applications for email, streaming, or other critical, resource-intensive workloads, businesses must decide whether to lease data center space or build an in-house storage infrastructure.
Whether looking to support critical applications or simply manage day-to-day operations, the IT needs of every company vary. Ultimately, the right decision comes down to what offers the best advantages and which strategy optimizes the organization’s data storage total cost of ownership (TCO).
There are a number of factors to consider when making the decision of whether to own or outsource a data center, including virtualization issues, different cloud computing environments and simply, the way companies handle different IT issues. It is important to analyze all options in order to make the most appropriate decision that best suits a business’ needs.
Building Your Own Data Center
For a build-out of an existing property, the estimated cost is around $200 per square foot to build a data center, according to Forrester. Additionally, having fiber installed can cost over $10,000 per mile simply to reach your location. Larger companies have the financial resources to cover construction and fiber costs and can handle the accompanying staffing and IT needs, including infrastructure maintenance, around-the-clock monitoring, and any additional cooling that may be required. This means that building an in-house data center may make the most sense in the long term for a larger company with expansive amounts of data.
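To see how those published figures add up, here is a back-of-the-envelope sketch in Python. The $200-per-square-foot and $10,000-per-mile numbers come from the estimates above, while the facility size and fiber distance are hypothetical examples, not figures from this article.

    # Back-of-the-envelope build-cost estimate using the figures cited above.
    # The facility size and fiber distance are hypothetical example inputs.

    COST_PER_SQ_FT = 200          # Forrester estimate for building out an existing property ($/sq ft)
    FIBER_COST_PER_MILE = 10_000  # cited cost to bring fiber to the site ($/mile)

    def estimated_build_cost(square_feet: float, fiber_miles: float) -> float:
        """Return a rough construction-plus-fiber estimate in dollars."""
        return square_feet * COST_PER_SQ_FT + fiber_miles * FIBER_COST_PER_MILE

    if __name__ == "__main__":
        # Example: a hypothetical 10,000 sq ft build-out, 3 miles from the nearest fiber route.
        cost = estimated_build_cost(square_feet=10_000, fiber_miles=3)
        print(f"Estimated build cost: ${cost:,.0f}")  # -> $2,030,000

Even this simplified arithmetic shows the up-front commitment dwarfs typical colocation deposits, which is the trade-off the rest of this piece weighs.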
After building its own data center, a business benefits from total control over its data, security, operations, and environment, which often creates a sense of security. The organization controls access to the entire premises, along with space, power, and temperature, and it can leverage the existing space to add cabinets. On the flip side, and an important financial factor to consider, once the data center is built, the space cannot expand or scale without purchasing and installing additional infrastructure.
Outsourcing to a Reliable Data Center Provider
Smaller companies, including those running cloud compute solutions, require a certain amount of data storage space in order to grow their business. A startup may need only a handful of cabinets, three or four, for example, yet depend completely on connectivity. In this case, the cost of building its own data center infrastructure is prohibitive.
Beyond the construction expense itself, there are multiple additional costs that are substantial. Outsourcing to a dedicated provider delivers many advantages for companies that cannot afford to spend a large chunk of their overhead on building such a facility.
Carrier-neutral connectivity: Certain data center providers offer carrier-neutral connectivity. By outsourcing, companies within the data center have the ability to choose the carrier service provider that best fits their business and financial needs.
Optimized total cost of ownership (TCO): Leasing in a facility carries a substantially lower up-front cost. Additionally, with the ability to choose from a number of carriers, no need to staff a data center, and no constant spending on equipment upkeep, businesses are left with a significantly lower TCO.
Scalability: As previously mentioned, building a data center means additional purchases and installations when it comes time to scale out your cabinets. When outsourcing, data center providers can seamlessly add cabinets, letting companies quickly and easily handle growing storage needs.
Reliability: With an expert team working around the clock and carrier networks present within the data center, data is processed efficiently, resulting in ultra-reliable performance.
Reach and redundancy: When outsourcing with multiple connectivity options, the potential for carrier failure is reduced, protecting critical applications and infrastructure performance. The option also provides different fiber or copper routes in order to deliver dependable services to different locations, extending reach and reducing latency.
Security: Dedicated data center providers supply 24/7 security staff along with biometric access controls at all entry points, giving customers peace of mind that their data is protected by highly advanced technology. Additionally, multiple layers of physical security around interconnect sites and cabinets protect the transfer of data.
What Works Best for Your Company?
For companies in need of a large deployment of cabinets, with the resources readily available and, according to Schneider Electric, a data center life expectancy of more than five years, building an in-house data center may make sense in the long term. For small to midsize companies and startups, the evidence points toward outsourcing data center needs.
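As a rough way to compare the two paths over that horizon, the sketch below runs a simplified build-versus-lease comparison. Apart from the $200-per-square-foot construction estimate and the five-year horizon echoed from above, every input (annual opex, cabinet count, monthly colocation price) is a hypothetical assumption, not data from this article.

    # Simplified build-vs-lease comparison over a planning horizon.
    # All inputs below are hypothetical; only the $200/sq ft build estimate and the
    # five-year horizon echo figures mentioned in the article.

    def build_tco(sq_ft: float, annual_opex: float, years: int, cost_per_sq_ft: float = 200) -> float:
        """Up-front construction plus ongoing staffing/power/maintenance."""
        return sq_ft * cost_per_sq_ft + annual_opex * years

    def lease_tco(monthly_per_cabinet: float, cabinets: int, years: int) -> float:
        """Recurring colocation fees only; the provider covers facility opex."""
        return monthly_per_cabinet * cabinets * 12 * years

    if __name__ == "__main__":
        years = 5
        build = build_tco(sq_ft=5_000, annual_opex=400_000, years=years)
        lease = lease_tco(monthly_per_cabinet=1_500, cabinets=10, years=years)
        print(f"Build TCO over {years} years: ${build:,.0f}")  # -> $3,000,000
        print(f"Lease TCO over {years} years: ${lease:,.0f}")  # -> $900,000

Plugging in real quotes for your own footprint is what ultimately tips the decision one way or the other.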
The bottom line is that no matter the size of a company, it is no longer viable to ignore the importance of having a reliable data center. So whether it is a large company with extensive financial resources or a smaller company, let the numbers and evidence guide the decision to determine whether to build or lease a data center.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
6:18p
Financial Services Firm Buys Stream’s Texas Data Center

Stream Data Centers has sold a 30,000-square-foot data center in Richardson, Texas, to a financial services firm. The company did not name the buyer, but Dallas Morning News reported it was an affiliate of TD Ameritrade, citing local tax records.
The buyer of the Texas data center retained Stream to provide facilities management services at the site. The data center’s critical power capacity is 7.2 megawatts.
This is what Stream calls a Private Data Center: purpose-built but smaller than the company’s single-tenant powered-shell product, and available as either a single- or multi-tenant facility. Stream has similar projects in the Dallas, Houston, San Antonio, Minneapolis, and Denver markets.
Richardson is part of the Dallas-Fort Worth data center market. Analysts at commercial real estate firm Jones Lang LaSalle characterized it as a market with a high rate of absorption of data center inventory in their 2014 North America data center report.
The report even noted a “small window” in 2014 when supply of data center space ran at a deficit, which has data center providers “racing” to build out space.
Drivers of the rising demand in this particular Texas data center market have been workforce growth, corporate headquarters relocations, and regional office expansion, according to JLL.
The insurance industry is responsible for the biggest portion of overall demand – 35 percent – followed by tech companies, financial services, and healthcare, in that order. Banking and financial services companies account for about 20 percent of demand in the market.

7:00p
Swiss File Hosting Site RapidShare to Shut Down Next Month
This article originally appeared at The WHIR
Popular file hosting site RapidShare announced on Tuesday that it will shut down at the end of March, and accounts will be automatically deleted on Mar. 31, 2015.
The company is encouraging users to back up their content prior to that date because they won’t have access to any data stored on the service after next month.
According to a report by TorrentFreak, RapidShare hasn’t given an official reason for its closure, but it’s suspected that the company’s efforts over the last few years to crack down on pirated content on its service may have something to do with it.
RapidShare was founded in 2002 in Switzerland and built itself into one of the most popular file-hosting services online. It became an attractive place to host copyright-infringing material, particularly because its servers were hosted outside of the US.
In recent years, RapidShare made a considerable effort to deter piracy and copyright infringement. In 2012, RapidShare changed its traffic policies, placing a daily cap on outbound traffic.
In 2013, RapidShare laid off 45 of its 60 employees as part of a cost-cutting measure to improve its financial situation.
A recent study noted that up to 78 percent of the material in cyberlockers infringes on copyrights, which suggests RapidShare was far from the only site of its kind storing infringing material.
Encryption is believed to at least partially address provider liability and user privacy. Mega, the encrypted cloud storage service founded by MegaUpload’s Kim Dotcom, is relying on this model to protect it from authorities that may target its service based on the type of content hosted by its users.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/swiss-file-hosting-site-rapidshare-shut-next-month

7:30p
Microsoft Issues Security Updates Addressing ‘JASBUG’ Vulnerability in Windows
This article originally appeared at The WHIR
Microsoft has issued updates that address a critical vulnerability known as “JASBUG” that affects all current versions of Windows and allows an attacker to take complete control of an affected system.
JASBUG is a flaw in how Group Policy receives and applies policy data when a domain-joined Windows client or server connects to a domain controller.
To exploit it, the attacker has to convince a victim with a domain-configured system to connect to a network controlled by the attacker. This allows them to take complete control of an affected system – letting them install programs, view, change, or delete data, and create new accounts with full user rights.
Microsoft released the patches as part of its “Patch Tuesday” release on Feb. 10, 2015.
JAS Global Advisors LLC (JAS) and simMachines uncovered the vulnerability while researching potential technical issues relating to the rollout of new Generic Top Level Domains (new gTLDs) on behalf of ICANN. Notably, JASBUG does not directly relate to ICANN’s New gTLD Program or to new TLDs in general.
JAS said the vulnerability was found “by applying ‘big data’ analytical techniques to very large (and relatively obscure) technical datasets,” which revealed unusual patterns. Data analytics from simMachines and JAS’ technical security expertise helped shed light on a fundamental design flaw that has been present in Windows systems for at least a decade.
JAS notes that devices outside of enterprise networks known as “roaming machines” could be especially vulnerable. “Roaming machines – domain-joined Windows devices that connect to corporate networks via the public Internet (e.g. from hotels and coffee shops) – are at heightened risk.”
Microsoft said the security update is “Critical” – the company’s most severe rating – for Windows Server 2003, Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, Windows Server 2012, Windows RT, Windows 8.1, Windows Server 2012 R2, and Windows RT 8.1.
Microsoft’s security updates address the vulnerability by improving how domain-configured systems connect to domain controllers prior to Group Policy accepting configuration data.
Microsoft, however, notes that it is not issuing a patch for Windows Server 2003, which it says doesn’t have the proper architecture to support the fix, and that building the fix for Windows Server 2003 would require re-architecting a very significant amount of the core operating system, which isn’t feasible.
JAS notes that JASBUG is a design problem, not an implementation problem like Heartbleed, Shellshock, Gotofail, or POODLE. That meant Microsoft had to re-engineer core components of its operating systems, with special attention to backwards compatibility and supported configurations, requiring added care and extensive regression testing.
Systems administrators responsible for administering Microsoft environments should immediately review the Microsoft documentation.
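For administrators who want a quick, scriptable sanity check after applying the updates, the sketch below reads the hardened UNC path policy values that Microsoft’s guidance for this update describes. It is a minimal illustration, not an official tool: the registry path and value names are assumptions drawn from Microsoft’s UNC Hardened Access documentation rather than from this article, and the official bulletin remains authoritative.

    # Hypothetical check: does this domain-joined Windows machine have hardened
    # UNC path policies configured for SYSVOL and NETLOGON? (Windows only.)
    # The registry location and value names are assumptions based on Microsoft's
    # UNC Hardened Access guidance; consult the official bulletin before relying on this.
    import winreg

    HARDENED_PATHS_KEY = r"SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths"

    def read_hardened_paths() -> dict:
        """Return configured hardened UNC paths and their settings, or {} if none."""
        paths = {}
        try:
            with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, HARDENED_PATHS_KEY) as key:
                index = 0
                while True:
                    try:
                        name, value, _ = winreg.EnumValue(key, index)
                        paths[name] = value
                        index += 1
                    except OSError:
                        break  # no more values under the key
        except FileNotFoundError:
            pass  # policy not configured at all
        return paths

    if __name__ == "__main__":
        configured = read_hardened_paths()
        for share in (r"\\*\SYSVOL", r"\\*\NETLOGON"):
            print(share, "->", configured.get(share, "NOT CONFIGURED"))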
This article originally appeared at: http://www.thewhir.com/web-hosting-news/micrsoft-issues-security-updates-addressing-jasbug-vulnerability-windows

8:00p
New ‘Six Pack’ Switch Powers Facebook Data Center Fabric

In Facebook data centers, the meaning of the words “six pack” no longer has anything to do with beer or abs.
During the Networking @Scale event at its Menlo Park, California, headquarters Wednesday, the company’s infrastructure engineers unveiled a new element of the next-generation Facebook data center fabric. Called Six Pack, the networking switch uses its six hot-swappable modules to enable any piece of IT gear in the data center to talk to any other piece of IT gear.
Facebook, like other operators of massive data centers – companies like Google or Amazon – designs almost all of its own hardware. At these companies’ scale, the approach is more effective and economical than buying off-the-shelf gear from the usual IT vendors, because they get exactly what they need, and because suppliers compete with each other for the big hardware purchases they make on a regular basis.
The Six Pack is a core element of the new network fabric that was unveiled last November. The design relies heavily on Facebook’s Wedge switch, the one its vice president of infrastructure engineering Jay Parikh previewed at GigaOm Structure in San Francisco in June 2014.
The seven-rack-unit chassis includes eight 16-port Wedge switches and two fabric cards. The ports are 40 Gigabit Ethernet. Top-of-rack switches aggregate connectivity in the racks, and Six Packs interconnect all the top-of-rack switches.
The fabric makes the network a lot more scalable than Facebook’s traditional approach, which was to build several massive server clusters in a data center and interconnect them. The more the individual clusters grew, the more congested the inter-cluster network links became, putting a hard limit on cluster size. There are no clusters in the new architecture.
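To make the any-to-any claim concrete, the toy model below sketches a simplified leaf-spine fabric in Python: every top-of-rack switch links to every fabric switch, so any pair of racks has as many equal-cost two-hop paths as there are fabric switches, and a per-flow hash spreads traffic across them. The switch counts and the hashing scheme are hypothetical illustrations, not Facebook’s actual topology or software.

    # Toy leaf-spine model illustrating why a fabric removes the cluster bottleneck:
    # every top-of-rack (leaf) switch has a link to every spine switch, so any pair
    # of racks has as many equal-cost paths as there are spines. Counts are hypothetical.
    import hashlib

    SPINES = [f"spine-{i}" for i in range(4)]   # stand-ins for Six Pack-class fabric switches
    LEAVES = [f"tor-{i}" for i in range(16)]    # top-of-rack switches, one per rack

    def equal_cost_paths(src_leaf: str, dst_leaf: str) -> list[tuple[str, str, str]]:
        """All two-hop paths between racks: source ToR -> some spine -> destination ToR."""
        return [(src_leaf, spine, dst_leaf) for spine in SPINES]

    def pick_path(src_leaf: str, dst_leaf: str, flow_id: str) -> tuple[str, str, str]:
        """ECMP-style choice: hash the flow so each flow sticks to one spine."""
        digest = hashlib.sha256(f"{src_leaf}{dst_leaf}{flow_id}".encode()).digest()
        paths = equal_cost_paths(src_leaf, dst_leaf)
        return paths[digest[0] % len(paths)]

    if __name__ == "__main__":
        print(len(equal_cost_paths("tor-0", "tor-9")), "equal-cost paths between tor-0 and tor-9")
        for flow in ("flow-a", "flow-b", "flow-c"):
            print(flow, "->", pick_path("tor-0", "tor-9", flow))

Adding capacity in such a topology means adding spine switches or fabric planes rather than growing and re-cabling a monolithic cluster, which is the scaling property the fabric design is after.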
Facebook is planning to make the Six Pack design open to the public through the Open Compute Project, its open source hardware and data center design initiative. Data center operators and vendors will be able to use the design or modify it to build their own switches and network fabrics.
The big thing about Wedge was disaggregation. Individual elements of the architecture could be mixed and matched and upgraded independent of each other. That aspect of the design was preserved in the Six Pack.
“We retained all the nice features that we had,” Yuval Bachar, Facebook hardware engineer, said. “Modularity of subcomponents, as well as up-the-stack disaggregation of software and hardware.”
Unlike traditional off-the-shelf network switches sold by Cisco and other big network vendors, Facebook’s homegrown switch hardware is not closely coupled with network software. The company has written its own Linux-based network operating system, called FBOSS, and adopted all of its Linux-based server management software tools for network management.
Facebook also took a different approach to software-defined networking than companies that sell commercial SDN tools. Instead of taking the control plane (the intelligence of the network) out of the switches and putting it into a separate controller, each switch in the fabric has its own control plane, Bachar explained.
There are no virtual network overlays. “We are using pure IP network,” he said. There are external controllers as well. “It’s a hybrid SDN, which we find to be very effective, because our switching units are completely independent.”
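The hybrid model Bachar describes can be pictured with a small conceptual sketch: each switch computes its own forwarding decisions locally, while an external controller only layers policy on top, for example draining a fabric switch for maintenance. This is an invented toy model for illustration only, not FBOSS or Facebook’s actual controller design.

    # Conceptual toy model of a "hybrid SDN": every switch keeps its own control plane
    # (local forwarding decisions), while an optional external controller pushes policy.
    # All names are invented for illustration.
    from dataclasses import dataclass, field

    @dataclass
    class Switch:
        name: str
        neighbors: list[str]                              # directly connected fabric switches
        drained: set[str] = field(default_factory=set)    # policy pushed by a controller

        def next_hops(self) -> list[str]:
            """Local, per-switch decision: spread traffic over all non-drained neighbors."""
            return [n for n in self.neighbors if n not in self.drained]

    class Controller:
        """External controller that only pushes policy; switches keep forwarding without it."""
        def drain(self, switches: list[Switch], target: str) -> None:
            for sw in switches:
                sw.drained.add(target)

    if __name__ == "__main__":
        tor = Switch("tor-0", neighbors=["spine-0", "spine-1", "spine-2", "spine-3"])
        print("before drain:", tor.next_hops())
        Controller().drain([tor], "spine-2")
        print("after drain: ", tor.next_hops())

The point of the split is resilience: even if the controller is unreachable, each switch still has everything it needs to keep forwarding traffic.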
The announcement doesn’t mean Facebook has replaced all the network gear in its data centers with the new systems. “Right now, we’re just starting our production environment deployment – both Wedge and Six Pack,” Matt Corddry, director of engineering at Facebook, said. The company usually tests new pieces of infrastructure by running some production traffic on them in multiple regions before full-blown implementation.