Data Center Knowledge | News and analysis for the data center industry

Tuesday, March 22nd, 2016

    12:00p
    Why OCP Servers are Hard to Get for Enterprise IT Shops

    This month, we focus on the open source data center. From innovation at every physical layer of the data center coming out of Facebook’s Open Compute Project to the revolution in the way developers treat IT infrastructure that’s being driven by application containers, open source is changing the data center throughout the entire stack. This March, we zero in on some of those changes to get a better understanding of the pervasive open source data center.

    Sometime in the early 2000s, Amir Michael responded to a Craigslist ad for a data center technician job at an unnamed company. He applied, and the company turned out to be Google. After years of fixing and then designing servers for Google data centers, Michael joined Facebook, which was at the time just beginning its conversion from a web company running on off-the-shelf gear in colocation data centers to one running an all-custom hyperscale infrastructure.

    He was one of the people who led those efforts at Facebook, designing servers, flying to Taiwan to negotiate with hardware manufacturers, and doing everything to make sure the world’s largest social network didn’t overspend on infrastructure. He later co-founded the Open Compute Project, the Facebook-led effort to apply the ethos of open source software to hardware and data center design.

    Amir Michael, founder and CEO, Coolan

    Today, he is the founder and CEO of Coolan, a startup whose software uses analytics to show companies how effective their choices of data center components are and helps them make more informed infrastructure buying and management decisions.

    We caught up with Michael last week after his keynote at the Data Center World Global conference in Las Vegas to talk about the problems of adoption of OCP hardware and data center design principles by traditional enterprise IT shops, and about the project’s overall progress in light of Google recently becoming a member, making Amazon the last major US-based hyperscale data center operator that has yet to join.

    Here’s the first of multiple parts of our interview with Michael.

    Data Center Knowledge: There has been a lot of talk about the importance of OCP to the world of traditional enterprise IT, but we haven’t seen much adoption of OCP servers in that space besides a handful of large companies, such as Goldman Sachs or Fidelity Investments. Is OCP really a compelling story for the smaller enterprise IT team?

    Amir Michael: The idea behind OCP is taking a lot of the best practices and pushing them into the rest of the data center market. When it comes to enterprise, that’s a challenge. A lot of them are still on standard solutions. The area of interest for OCP there is actually starting the conversations with them. If they are engaged – almost regardless of whether they’re buying OCP solutions or not – they’re going to start to ask the right questions of their vendors as well. Maybe the end result is that they end up buying OCP gear, which is great, but the important part is that they buy efficient gear. And it can be OCP gear, or maybe they go and ask their current vendors to go and build gear that has a lot of the same principles OCP has, and that’s a win as well.

    This may be what ultimately pushes these best practices further into the enterprise space and further into the vendors where it’s not acceptable to build inefficient solutions anymore. People don’t want that.

    OCP’s efforts to try and engage enterprises more through adoption of the motherboard [for standard 19-inch servers], making OCP systems that are easier for them to consume – I think it’s a great way of getting that conversation started. The systems don’t have all the same benefits as 100 percent pure OCP gear does – [the gear] that is powering Facebook’s or Microsoft’s data centers or whoever else – but I think that having that conversation piece at the table, whether or not they adopt it, is extremely important.

    3:00p
    What Does Open Source Software Mean to US Government?
    By The VAR Guy

    It may be five or ten years behind the curve, but the U.S. government has now declared its love for open source software — or what it calls open source software, at least.

    On March 10, the White House published a blog post announcing a new campaign to increase the federal government’s use of open source software. It has also drafted a policy document on source code, which is open for comment on GitHub.

    According to the government, the new drive toward open source reflects an effort to save money, avoid redundant efforts and foster easier collaboration. “We can save taxpayer dollars by avoiding duplicative custom software purchases and promote innovation and collaboration across Federal agencies,” the blog post said.

    Those have been the main talking points of the open source camp since the 1990s, of course. (Earlier, in the 1980s, the free-software movement arguably focused more on source code access as a moral obligation than as a utilitarian benefit.) The government is a little late to the party in recognizing them. But at least it has shown up.

    For open source supporters, the government’s promise to become more friendly toward open source is no doubt a good thing. But there are some important limitations worth noting.

    First, the blog post does not define open source in a clear way. It simply mentions sharing source code between federal agencies, and releasing “a portion” of federal source code to the public. That kind of sounds like Microsoft’s Shared Source program of the early 2000s, through which the company released assorted bits of source code, but not enough that they would actually be useful to anyone else. If the federal government truly wants to embrace open source, it should put all of its code on the table, not just a portion.

    Second, the only “open source” projects that the blog post mentions are ones that center around open data more than open code. The post links to several free websites, like the College Scorecard, which provide access to a large database of information. The interfaces for these sites may have been built with open source code, but that’s not the important thing about the sites. What matters are the reams of open, publicly accessible data that undergird them.

    In that respect, the government seems to be confusing open data with open source code. Yes, it’s very good to release large amounts of data to the public. But that is a very different thing from developing and sharing source code publicly. There is nothing special about the software that powers these websites. Any Web developer could recreate the same sites pretty easily given access to the underlying data.

    More engagement by the federal government with open source is not a bad thing, even if the government doesn’t get it totally right. Yet by failing both to define open source and to appreciate the significant difference between open source code and open datasets, the government is eroding the meaning of open source itself. If we start calling everything open source, even when we are not actually making much source code publicly accessible and distributable, open source stops being significant.

    Of course, that’s why it’s a good thing the government is accepting public comments on its open source policy.

    This first ran at http://thevarguy.com/open-source-application-software-companies/what-does-open-source-software-mean-us-federal-government

    3:30p
    The Complicated History of Cuba’s Internet

    As US President Barack Obama visits Cuba this week, the first time a sitting US president has visited the country since 1928, Talkin’ Cloud looks back at Cuba’s complicated history with telecommunications and the Internet.

    Cuba has a low Internet penetration rate of 30 percent, according to 2014 data from the International Telecommunication Union (ITU), up from 28 percent in 2013 and 14 percent in 2009.

    Why is it so low? For one, costs remain prohibitive for many Cubans, and the infrastructure is lacking. There are two state-run Internet Service Providers (ISPs), giving Cubans little choice for Internet connectivity and mobile phone services.

    Could improved relations with the US change that? On Monday, Obama announced that Google was working on bringing improved access to WiFi and broadband to Cuba.

    2008: Cuban government begins allowing Cubans to buy personal computers after nearly a decade-long ban

    2012: Government-owned telecommunications firm ETECSA eliminates fees for receiving phone calls within Cuba

    2013: ALBA-1, a 1,600 km high-speed undersea cable stretching between Cuba and Venezuela, is activated

    June 2013: Citizens are able to access the Internet through broadband connections to the new fiber-optic cable at 118 government-run “navigation halls”

    March 2014: Users can send and receive emails on their phones, but only with a .cu email account

    May 2014: Cuban authorities start to dismantle wired or WiFi-based LANs created by citizens in some Havana neighborhoods

    July 2014: French telecom Orange Digital Horizons signs secret deal with ETECSA to offer its services, products and prices to the local operator and share expertise

    December 2014: US President Barack Obama announces that the US will restore diplomatic relations with Cuba

    January 2015: Officials plan to open 136 more Internet access centers around the country by the end of 2015

    February 2015: ETECSA temporarily reduces the charge for Internet use at navigation halls and state-run cybercafes from $4.50 to $2 US per hour

    March 2015: US carrier IDT Corp reaches accord with ETECSA to provide direct international long distance calls

    April 2015: Cuban government pledges to expand home connections to 50 percent of the population of 11 million people, and mobile Internet connections to 60 percent by 2020

    July 2015: Cuban government opens 35 paid public WiFi hotspots; the lower rate of $2 US per hour takes longer-term effect beginning July 1

    March 2016: During US President Barack Obama’s historic visit to Cuba, he announces that Google is working on a deal to bring WiFi and broadband to Cuba

    This first ran at http://talkincloud.com/telco/cubas-complicated-internet-history-timeline

    5:23p
    Data Center Construction Update

    DuPont Fabros Entering Portland, Growing in Silicon Valley

    DuPont Fabros Technology has acquired a big parcel of land in Hillsboro, Oregon, an area with numerous data centers just outside of Portland. The company has also kicked off construction of the third phase of its Santa Clara, California, campus. The entire third phase has been pre-leased by a single customer, whose name DFT did not disclose.

    The company bought the nearly 50-acre parcel in Hillsboro for $11.2 million, and said it was part of the multi-market expansion plan it had been executing in recent months. It also announced a big data center construction project in Toronto and interest in entering the Phoenix market.

    An aerial view of a DuPont Fabros data center in Santa Clara, California. (Photo: DuPont Fabros Technology)

    The third phase of DFT’s Silicon Valley campus is a 16MW data center with about 64,000 square feet of data center space. The company expects the build-out to cost between $164 million and $170 million, with an estimated return on investment around 12 percent.

    Google’s $600M Oregon Project a Go

    Google’s data center campus in The Dalles, Oregon. (Photo: Google)

    Google has decided to go ahead with the $600 million data center expansion in The Dalles, Oregon, not far from its existing data center campus there, Oregon Live reported. The company has had a data center in The Dalles since 2006, attracted by the city’s combination of energy infrastructure, available land, and workforce.

    The company has invested $1.2 billion in its first data center there, located less than one mile away from the site of the future facility.

    Today, in addition to land and low energy prices, the state offers attractive tax incentives for data center operators. Google secured tax breaks for the project last year.

    Skybox Building Large in Dallas Market

    Rendering of the future Plano data center by Skybox (Photo: Skybox)

    Add one more big data center construction project to the Dallas-Fort Worth data center boom. Skybox Datacenters, a joint venture between two Dallas-based investment firms, this week announced a 150,000-square-foot, 20MW project in Plano.

    The company, backed by Rugen Street Capital and Bandera Ventures, wants to take advantage of the reportedly fast-growing demand for data center capacity in the Dallas market. The growth is fueled in part by recent expansion of corporate presence in the area by companies like Toyota, FedEx, JP Morgan Chase, Capital One, and Nokia, according to a statement by Plano Mayor Harry LaRosiliere.

    Skybox’s 21-acre site can support a contiguous building up to 350,000 square feet in size, the company said.

    Other recent expansions in the Dallas data center market:

    Equinix to Build New Dallas Data Center

    RagingWire Takes Its Massive-Scale, Luxury-Amenities Data Center Model to Texas

    Texas Colo with Efficient Data Center Cooling System Launched

    Compass Eyeing Dallas, Atlanta Data Center Markets

    Equinix to Build Fifth Brazil Data Center

    Inside Equinix’s SV5 data center in San Jose, California (Photo: Equinix)

    The company expects that its upcoming São Paulo facility, its fifth data center in Brazil, will have the capacity to support 2,800 IT cabinets, almost doubling its total inventory in the country.

    Equinix plans to spend $76 million on construction of the 13MW facility. The 200,000-plus-square-foot building will have about 90,000 square feet of data center space.

    It is one of four new data center construction projects on four continents the Redwood City, California-based colocation giant announced earlier this month. The others are in Dallas, Tokyo, and Sydney.

    11:11p
    How Can the Software-Defined Data Center Reach its True Potential?

    George Teixeira is CEO and Co-Founder of DataCore Software.

    In the software-defined data center (SDDC), all elements of the infrastructure, such as networking, compute, servers, and storage, are virtualized and delivered as a service. Virtualization at the server and storage levels is a critical component on the journey to an SDDC, since it enables greater productivity through software automation and agility while shielding users from the underlying complexity of the hardware.

    Today, applications are driving the enterprise – and these demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and the unpredictable demands of enterprise workloads. The problem is that in a world requiring near-instant response times and ever-faster access to business-critical data, the needs of tier 1 enterprise applications such as SQL, Oracle, and SAP databases have been largely unmet. For most data centers, the number one cause of these delays is the data storage infrastructure.

    Why? The major bottleneck has been I/O performance. Although most commodity servers already provide a wealth of powerful multiprocessor capability cost-effectively, those processors largely sit parked in idle mode, unexploited. This is because current systems still rely on device-level optimizations tied to specific disk and flash technologies and lack the software intelligence to fully harness these more powerful, multicore server architectures.

    While the virtual server revolution became the “killer app” that exploited CPU utilization and, to some degree, multicore capabilities, the downside is that virtualization and the move to greater server consolidation created a workload blender effect in which more and more application I/O workloads were concentrated and had to be scheduled on the same system. All of those VMs and their applications easily become bottlenecked going through a serialized “I/O straw.” As processors and memory have dramatically increased in speed, this I/O straw continues to throttle performance, especially for the critical business applications driving databases and online transaction workloads.
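    To make the serialized “I/O straw” concrete, here is a minimal Go sketch – purely illustrative, not any vendor’s implementation; the ioRequest type and handle function are invented for the example – in which eight simulated VM workloads funnel all of their I/O through a single dispatcher, so throughput is capped by that one consumer no matter how many cores the host has:

    ```go
    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    // ioRequest stands in for a block read or write issued by a VM's application.
    type ioRequest struct {
        vmID int
        lba  int64
    }

    // handle simulates a fixed service time for a single storage I/O.
    func handle(r ioRequest) {
        _ = r.lba // the payload is irrelevant here; only the service time matters
        time.Sleep(200 * time.Microsecond)
    }

    func main() {
        requests := make(chan ioRequest, 128)

        // Eight "VMs" generate I/O concurrently: the workload blender effect.
        var producers sync.WaitGroup
        for vm := 0; vm < 8; vm++ {
            producers.Add(1)
            go func(vm int) {
                defer producers.Done()
                for i := 0; i < 100; i++ {
                    requests <- ioRequest{vmID: vm, lba: int64(i)}
                }
            }(vm)
        }
        go func() { producers.Wait(); close(requests) }()

        // A single dispatcher drains the queue: the serialized "I/O straw".
        start := time.Now()
        served := 0
        for req := range requests {
            handle(req) // every request waits its turn behind this one consumer
            served++
        }
        fmt.Printf("served %d requests through one straw in %v\n", served, time.Since(start))
    }
    ```

    However many cores the machine has, the wall-clock time here is roughly 800 requests times the per-request service time, because only one request is ever in flight.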

    Many have tried to address the performance problem at the device level by adding solid-state storage (flash) to meet the increasing demands of enterprise applications, or by hard-wiring these fast devices to virtual machines (VMs) in hyper-converged systems. However, improving the performance of the storage media – which replacing spinning disks with flash attempts to do – only addresses one aspect of the I/O stack. Hard-wiring flash to VMs also contradicts the concept of virtualization, in which technology is elevated to a software-defined level above the hard-wired, physically aware level, and it adds complexity and vendor-specific lock-in between the hypervisor and device levels.

    Multi-core processors are up to the challenge. The primary element that is missing is software that can take advantage of the multicore/parallel processing infrastructure. Parallel I/O technology enables the I/O processing to be done separately from computation and in parallel to improve I/O performance by building on virtualization’s ability to decouple software advances from hardware innovations. This method uses software to drive parallel I/O across all of those CPU cores.

    Parallel I/O technology can schedule I/O from virtualization and application workloads effectively across readily available multicore server platforms. It can overcome the I/O bottleneck by harnessing the power of multicores to dramatically increase productivity, consolidate more workloads and reduce inefficient server sprawl. This will allow much greater cost savings and productivity by taking consolidation to the next level and allowing systems to do far more with less.
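    As a contrast to the serialized sketch above, the following sketch – again a simplified, assumption-laden illustration rather than a real product API – fans the same request stream out across one worker per CPU core, which is the general idea behind scheduling I/O in parallel across a multicore platform:

    ```go
    package main

    import (
        "fmt"
        "runtime"
        "sync"
        "time"
    )

    // ioRequest and handle are the same stand-ins used in the serialized sketch.
    type ioRequest struct {
        vmID int
        lba  int64
    }

    func handle(r ioRequest) {
        _ = r.vmID
        time.Sleep(200 * time.Microsecond) // fixed service time per simulated I/O
    }

    func main() {
        requests := make(chan ioRequest, 128)

        // The same eight "VMs" producing 100 requests each.
        var producers sync.WaitGroup
        for vm := 0; vm < 8; vm++ {
            producers.Add(1)
            go func(vm int) {
                defer producers.Done()
                for i := 0; i < 100; i++ {
                    requests <- ioRequest{vmID: vm, lba: int64(i)}
                }
            }(vm)
        }
        go func() { producers.Wait(); close(requests) }()

        // Parallel dispatch: one I/O worker per core drains the shared queue,
        // so many requests are in flight at once instead of strictly one at a time.
        workers := runtime.NumCPU()
        var wg sync.WaitGroup
        start := time.Now()
        for w := 0; w < workers; w++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for req := range requests {
                    handle(req)
                }
            }()
        }
        wg.Wait()
        fmt.Printf("served 800 requests across %d workers in %v\n", workers, time.Since(start))
    }
    ```

    With a fixed per-request service time, the parallel version finishes in roughly 1/N of the serialized version’s wall-clock time on an N-core host, which is the consolidation gain described here, stripped down to a toy model.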

    Parallel I/O is essentially like a multi-lane superhighway with “EZ pass” on all the lanes. It avoids the wait of queuing at a single toll booth, and it opens up the other cores (all the “lanes” in this analogy) for I/O distribution so that data can continue to flow back and forth between the application and the storage media at top speed.

    The effect is that more data flows through the same hardware infrastructure in the same amount of time than it would through legacy storage systems. The traditional three-tier infrastructure of servers, network, and storage benefits from storage systems that respond to and service I/O requests faster and can therefore support significantly more applications and workloads on the same platforms. The efficiency of a low-latency parallel architecture is potentially even more critical in hyper-converged architectures, which are a “shared-everything” infrastructure. If the storage software is more efficient in its use of computing resources, it returns more available processing power to the other processes running on the same platform.

    By taking full advantage of the processing power offered by multicore servers, parallel I/O technology acts as a key enabler for a true software-defined data center. This is because it avoids any special hard-wiring that impedes the benefits of virtualization, while unlocking the underlying hardware power to achieve a dramatic acceleration in I/O and storage performance – solving the I/O bottleneck problem and making the realization of software-defined data centers possible.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

