Data Center Knowledge | News and analysis for the data center industry
Tuesday, August 12th, 2014
| Time | Event |
| 12:00p |
Intel’s 14 nm Broadwell Chips Will Run Two Times Cooler than Current Gen Intel’s upcoming Broadwell processors, manufactured using the next-generation 14-nanometer process technology, will generate half the heat of their predecessors, the current 22 nm Haswell chips. The new chips will run cooler while performing at the same level and consuming less energy.
The company disclosed details of the Broadwell architecture for the first time Monday. The Intel Core M product line will lead to new form factors in computing, enabling systems that are not only cooler, but also thinner and quieter, it said.
The process technology has been qualified and 14-nanometer processors are already in volume production, according to a slide from a presentation by Intel senior fellow Mark Bohr. The company has equipped factories in Oregon and Arizona with 14 nm manufacturing equipment and plans to outfit a facility in Ireland next year.
Intel said some of the initial products powered by Core M will hit the market during this coming holiday shopping season. The availability through manufacturers will widen throughout the first half of 2015.
Intel’s 14 nm chips will eventually power the entire range of devices, including servers, personal computers and devices connected to the Internet of Things. The latter is a category of network-connected devices that are not the usual smartphones, servers, laptops or desktop computers – things like Google’s Nest smart thermostat or a monitoring sensor attached to a piece of equipment at a manufacturing plant.
The company is well ahead of competition on process technology. AMD’s smallest process technology is 28 nm, employed in producing chips based on its Jaguar architecture. IBM’s latest Power8 chips, which went into production only recently, are manufactured using 22 nm technology.
IBM recently announced a plan to invest $3 billion in research and development efforts to shrink process technology beyond 7 nm, but that is a very long-term goal, and the sum is small compared to the $10 billion Intel spends on processor R&D every year.
“We have … said that we hope to have 10 nm deployed approximately two years after we ship 14 nm and we’ve also said we have visibility to the 7 nm node,” an Intel spokesperson wrote us in an email commenting on IBM’s announcement. “On the other hand IBM has begun shipment of their equivalent of our 22 nm process.”
Another important distinction is that Intel is shipping 3D transistors (known as FinFETs) and IBM is not. Intel’s 14 nm chips are built using second-generation Tri-gate transistors, while others are still working to develop FinFET technology of their own.
Intel’s Tri-gate transistors wrap the gate around three sides of a vertical silicon fin (one on top and one on each side), which effectively triples the surface area electrons travel on without tripling the overall size of the chip.
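A rough bit of geometry shows where the tripling comes from: the conducting width of a tri-gate device is roughly the fin width plus twice the fin height. The dimensions in the sketch below are hypothetical, chosen only to illustrate the idea, not Intel’s actual 14 nm fin geometry.

```python
# Back-of-envelope sketch of why a tri-gate (FinFET-style) transistor roughly
# triples the conducting surface without tripling the footprint.
# All dimensions are hypothetical, chosen only to illustrate the geometry.

fin_width_nm = 8.0    # hypothetical fin width (also the planar footprint width)
fin_height_nm = 8.0   # hypothetical fin height

planar_width = fin_width_nm                       # planar device: gate touches the top only
trigate_width = fin_width_nm + 2 * fin_height_nm  # tri-gate: top plus two vertical sides

print(f"Planar effective gate width:   {planar_width:.0f} nm")
print(f"Tri-gate effective gate width: {trigate_width:.0f} nm")
print(f"Ratio: {trigate_width / planar_width:.1f}x for the same silicon footprint")
```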
Core M chips will be 30 percent thinner and half the size of their current-generation counterparts. Intel has introduced new power and thermal management features and reduced the amount of power the chips consume when idling by 60 percent.
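For a sense of why a cut in idle draw matters at scale, here is a back-of-envelope sketch. The fleet size, per-server idle draw, idle time and electricity price are all hypothetical assumptions, and applying the chip-level 60 percent figure to whole-server idle power is a deliberate simplification.

```python
# Hypothetical estimate of what a 60 percent cut in idle power draw could save
# across a server fleet. Every input is an assumption for illustration only.

servers = 10_000            # hypothetical fleet size
idle_watts = 100.0          # hypothetical whole-server idle draw, in watts
idle_fraction = 0.5         # hypothetical share of the year each server spends idle
price_per_kwh = 0.10        # hypothetical electricity price, USD per kWh
reduction = 0.60            # the 60 percent idle-power reduction claimed for the chips

idle_hours = 8760 * idle_fraction
kwh_before = servers * idle_watts * idle_hours / 1000.0
kwh_after = kwh_before * (1.0 - reduction)

print(f"Idle energy before: {kwh_before:,.0f} kWh/yr  (~${kwh_before * price_per_kwh:,.0f})")
print(f"Idle energy after:  {kwh_after:,.0f} kWh/yr  (~${kwh_after * price_per_kwh:,.0f})")
print(f"Hypothetical saving: {kwh_before - kwh_after:,.0f} kWh/yr")
```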
Idle server power consumption is a major energy sink for data centers. For example, one type of web server Facebook uses to run its application draws 60 watts when idle. This is at the low end of the range for the data center industry as a whole, since Facebook has one of the most optimized data center infrastructures in the world. | | 12:30p |
New US Digital Services Team Brings Private Sector Expertise to Government Tech Projects 
This article originally appeared at The WHIR
The White House announced on Monday that it is launching US Digital Services, a team that will work with other government agencies to upgrade technology infrastructure and make government websites more user-friendly – a move which is long overdue.
After the fumbled Healthcare.gov rollout, the US government is hoping that its US Digital Services team, led by Mikey Dickerson, will be able to ensure the success of digital services projects. Dickerson comes from the private sector, where he worked as an engineer, and helped fix Healthcare.gov. He will serve as the new Administrator of the US Digital Service and Deputy Federal Chief Information Officer.
The team will be responsible for establishing standards for the public sector that align with the private sector, and for identifying technology that will help the US government scale services effectively. Above all, the team will be expected to provide the accountability that ensures agencies see results.
The White House administration is currently seeking public comment on the Digital Service Playbook and the TechFAR Handbook on GitHub.
The Digital Services Playbook, available online, lays out 13 key “plays” drawn from successful private-sector best practices that, if followed, should help the government build effective digital services.
The TechFAR Handbook is a guide that explains how agencies can execute key plays in ways consistent with the Federal Acquisition Regulation. According to a statement, the guide will help agencies use existing authorities to procure development services in new ways that closely match the private sector.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/new-us-digital-services-team-brings-private-sector-expertise-government-tech-projects | | 1:00p |
CenturyLink Thinks ‘Dockerized’ Multi-Container Apps Shouldn’t Be a Pain in the Rear CenturyLink Technology Solutions has developed a software solution for managing applications that span multiple Docker containers and on Tuesday contributed it to the open source community.
The solution, called Panamax, essentially makes management of multi-container applications “easier for humans,” said Lucas Carlson, chief innovation officer at CenturyLink, who came to the company when it bought his startup AppFog.
“Right now, deploying … containerized applications is very easy for simple single-container applications,” he said. But once you venture beyond the single-container topology, there is suddenly a myriad of new technologies to learn – things like Fig, Mesos and etcd – and “the list keeps growing every other day.”
Everything a developer has to learn makes the barrier to entry for using Docker very high. The goal of Panamax is to lower that barrier with a set of standard practices and an elegant interface that lets users deploy containerized apps in any cloud, with those technologies working under the hood, without having to learn the ins and outs of each of them.
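To make the pain point concrete, here is a minimal sketch of the kind of hand-wiring a two-container application required at the time, driving the plain Docker command line from Python. The image names, container names and ports are illustrative assumptions, and the sketch says nothing about how Panamax itself works under the hood.

```python
# Minimal sketch of hand-wiring a two-container app (web + Redis) with the
# plain Docker CLI -- the kind of manual orchestration Panamax aims to hide.
# Image names, container names and ports are illustrative assumptions.
import subprocess

def run(cmd):
    """Run a shell command and fail loudly if it errors."""
    print("+", " ".join(cmd))
    subprocess.check_call(cmd)

# 1. Start a Redis container to act as the app's datastore.
run(["docker", "run", "-d", "--name", "app-redis", "redis"])

# 2. Start the web container, linked to Redis so the hostname "redis"
#    resolves inside it, and publish port 8080 on the host.
run([
    "docker", "run", "-d", "--name", "app-web",
    "--link", "app-redis:redis",
    "-p", "8080:80",
    "my-web-image",   # hypothetical application image
])

# Anything beyond this -- multiple hosts, restarts, service discovery --
# is where tools like Fig, Fleet, Mesos or etcd (and Panamax) come in.
```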
It is yet another big bet CenturyLink has placed on where the company thinks the future of cloud technology lies. Its other big bets include support for the open source Platform-as-a-Service technology Cloud Foundry and Microsoft’s .NET framework, which enables interoperability between pieces of code written in different languages.
A cloud installer
Panamax is a sophisticated “cloud installer” that can be used to install one of the open-source Docker-based Platform-as-a-Service solutions, such as Dokku, Flynn and Deis, or a Hadoop cluster, among other examples.
The solution is still in beta, but CenturyLink has already created blueprints for customers to deploy the software and try it out. Carlson said many of the provider’s own employees were evaluating it to see how it would fit into their application development lifecycles.
The plan is for Panamax to support orchestration technologies for Linux containers from a number of solutions already out there, such as Google’s Kubernetes, Red Hat’s GearD and Apache Mesos. Today, it only supports Fleet by the San Francisco startup CoreOS.
What Carlson considers the “killer feature” of Panamax, however, is the application template marketplace. The marketplace is a collection of templates for multi-container applications. What makes this feature especially powerful is that it enables users to share their application templates with others.
A company like Cloudera, for example, may create a 100-container Hadoop cluster and share it with the world, Carlson said.
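Panamax’s template format is not spelled out here, but conceptually a shareable multi-container template is just structured metadata describing which images to run and how they connect. The sketch below is a purely hypothetical illustration of that idea, not Panamax’s actual schema.

```python
# Purely hypothetical illustration of the idea behind a shareable
# multi-container application template: structured metadata naming the images
# to run and how they connect. This is NOT Panamax's actual format.

hadoop_template = {
    "name": "example-hadoop-cluster",
    "description": "Hypothetical three-container Hadoop sketch",
    "containers": [
        {"name": "namenode",  "image": "example/hadoop-namenode", "ports": ["50070:50070"]},
        {"name": "datanode1", "image": "example/hadoop-datanode", "links": ["namenode"]},
        {"name": "datanode2", "image": "example/hadoop-datanode", "links": ["namenode"]},
    ],
}

def describe(template):
    """Print a human-readable summary -- the kind of thing a marketplace
    listing would surface before a user deploys a template."""
    print(f"{template['name']}: {len(template['containers'])} containers")
    for c in template["containers"]:
        links = ", ".join(c.get("links", [])) or "none"
        print(f"  - {c['name']} ({c['image']}), links: {links}")

describe(hadoop_template)
```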
From dream to reality
He said Panamax had been a passion of his since he first imagined the concept about nine months ago. Improving the way people manage application containers is “pretty much a dream of mine,” he said.
“I had the idea for Panamax,”Carlson said. “I had the concept for what it would be and how it worked and all that.”
He hired 11 high-level programmers, engineers and designers to create a team dedicated to Panamax, and the team has been working on the platform for about six months now. These were all additional hires, separate from the CenturyLink developer team in Seattle that is also doing a lot of infrastructure management software development. That team came with the acquisition of public cloud provider Tier 3.
So far, Carlson is satisfied with the results. “We’re really proud of the quality of the source code,” he said.
Not only is the source code clean, but the user interface is elegant and useful, and documentation is very clear. A lot of resources went toward documentation because the team knew from the start that the code would be open source.
Innovation is a strategic initiative at CenturyLink
CenturyLink’s high level of investment in technology is strategic in nature. “CenturyLink is a company that is very committed to cloud technology,” Carlson said. “That’s one of our big strategic initiatives.”
The company wants to take part in the next technological step in cloud computing instead of catching up with innovation done by others, and Carlson’s role is key to fulfilling that goal. He has been a programmer for 20 years, built the Platform-as-a-Service company AppFog and written two books on development for O’Reilly Media.
“My career grew up with the cloud, and so when I look forward, the thing that is changing the cloud landscape the most right now is … Linux containers and Docker,” he said. “It’s redefining what cloud looks like in the coming years.”
Carlson’s new developer team is going to continue developing Panamax, but its work is not limited to the Docker management platform. “We’re going to continue to work on innovative next-generation technologies,” he said, but did not want to divulge much else.
“We are definitely actively working on fun stuff that is hopefully going to continue changing the world for the better,” he said. | | 1:00p |
Rackspace Builds Up DevOps Services Portfolio Rackspace has added a raft of new features to its DevOps automation services and kicked off a new DevOps advisory service to help customers adopt DevOps technologies and processes.
This is yet another step in the Texas company’s quest to differentiate itself from the rest of the cloud service provider market with a beefy portfolio of services customers can use alongside its cloud infrastructure resources.
Rackspace’s DevOps Automation service now includes support for Windows (it previously only supported Linux) and a catalog of canned development environment stacks that enable it to stand up DevOps environments for users within one hour. These “best practices” stacks include packages for Chef, Rails, Node.js, PHP, Tomcat and Python.
The company has stood up such stacks for customers in the past, but the process took much longer because every environment was created from scratch. Pre-designed templates now make it quick and easy.
DevOps Automation comes as an option at the highest tier of managed services Rackspace offers to its customers. The provider requires a minimum commitment of $5,000 for cloud services for a customer to use it.
The new DevOps Advisory Service is designed to help customers interested in DevOps bridge the gap between where they are now and a place where they are ready to start using it.
There is a need for such a service because the switch requires not only a change in the technologies a company employs but also a cultural change, said Prashanth Chandrasekar, general manager of Rackspace’s DevOps business segment.
Rackspace’s big bet on DevOps
Rackspace has placed a big bet on services around DevOps, which it unveiled late last year. The team on this side of the business consists of about 100 people, the majority of whom fulfill a customer service function, with the rest focused on product development, engineering, sales and marketing.
Chandrasekar said the company got into this business because it saw a lot of its customers wanting to adopt DevOps practices but lacked the knowledge and resources to do it. Rackspace also saw that there was a definitive set of tools companies doing DevOps were using, which meant there was an opportunity in building a services business that made it easier for newcomers to use those tools.
These were tools like Chef, New Relic, Graphite, Jenkins, RabbitMQ and MongoDB, among others.
DevOps has become a popular way to manage IT infrastructure to enable rapid, continuous roll-out of software features, but DevOps professionals are expensive and hard to retain, Chandrasekar said. “We decided that we wanted to offer this for our customers,” he said.
Mum’s the word on DevOps customer traction
Chandrasekar said the company did not want to reveal how many of its customers were using its DevOps services, sharing only that there was a lot of interest.
He also said that his business unit’s customer service scores were highest in the company, but that’s not an indicator of the size of the business. The fewer customers a business serves, the easier it is to keep them all satisfied.
One DevOps customer Rackspace has been parading is WePay, an online-payment processing startup that competes with the likes of eBay’s PayPal. Here’s a video of WePay CEO Bill Clerico talking about how the company uses Rackspace services at the provider’s recent Solve conference in San Francisco:
| | 1:18p |
Redefining System Architecture with Data at the Core Momchil Michailov is the CEO and Co-Founder of Sanbolic.
Data is the ultimate corporate asset. Yet most IT architecture starts with a discussion of hardware.
Data has many different forms, lifecycles, values and uses; and collecting, protecting, analyzing and making data available to support business processes is the core value of IT systems.
According to a CSC report, by 2020 over one-third of all data will live in or pass through the cloud, and data production will be 44 times greater than it was in 2009. Given that trajectory, IT systems need to be designed around the data, tuning performance, the level of data protection and the access profile for each workload. System architecture in today’s cloud era should be defined by the data it contains rather than the hardware that stores and makes it available.
Software-defined data platforms are drastically and rapidly changing the IT model. By abstracting the underlying hardware, and allowing data management and access to be defined workload by workload, data characteristics are now defining the infrastructure used, rather than vice versa. IT investment, therefore, needs to be better matched against the value of data to the business, while allowing increased flexibility and responsiveness.
The thorny road to software-defined infrastructure
There is no doubt that companies today need to run IT systems and applications to enable critical processes and business intelligence. Unfortunately, these core outcomes are delivered from the data in applications that run on obsolete storage systems. Storage devices currently represent the biggest portion of the IT budget and are also the most limiting factor in how IT is run – storage interoperability, shared visibility and management are among the key constraints.
Placing data in storage silos segments both the data and the servers that access it, hampering IT’s ability to run flexible and agile operations. Couple those challenges with the fact that today’s CIOs are under continued budget pressure while being expected to deliver and maintain more applications and services 10 times faster than ever before, and we can see why the promise of software-defined data platforms is enticing, yet still elusive to most enterprises.
A software-defined data platform, the building blocks to success
What today’s IT needs to focus on is ensuring the availability and scalability of the data placed in the storage arrays, and the infrastructure’s ability to enable workload elasticity while providing enterprise service level capabilities. The building blocks are:
Storage as a single unit: A key capability of storage and data management solutions going forward will be their ability to meet the storage needs of dynamic applications and place growing data on appropriate storage media while delivering the performance applications require in both physical and virtual environments. Storage is the vehicle that delivers the data, just as servers are the vehicles that deliver the applications.
Public cloud providers have shown there is no need to run storage the traditional way. Instead, they use a converged model in which server and storage sit in a single unit, with the ability to scale data out to meet application demands. This model is called hyper-converged, or Server SAN.
There are a few key attributes of this new hyper-converged storage that are of note:
- Better application performance, thanks to the CPU sitting close to the disk or flash for much faster input/output (IO);
- Dramatically reduced storage cost;
- Elimination of the proprietary, cost-intensive licensing of traditional storage and data management tools; and
- Server and application administrators put in charge of the complete IT stack for their applications.
Software-defined storage management: This hyper-converged infrastructure needs to be enabled by a complete software stack so administrators can modularly deploy storage and CPU capacity. This will eliminate the “forklift” storage update – the major upgrades or overhauls customers have had to undertake to adapt their infrastructure. Nimble software management tools will allow the non-disruptive swap of individual servers and storage components on an as-needed basis.
Dynamic scale-out of storage resources: Today, the performance of the storage, the services it provides and its cost govern the applications that run on it. Leveraging hyper-converged commodity infrastructure and layering advanced storage and data management services on top of it creates a new, shared infrastructure, eliminating the upfront storage cost. Enterprises can pick the right storage medium (flash, SSD, HDD) and, through software-provisioned storage volumes based on SLAs, offer both file and block access to avoid infrastructure silos and storage islands. Just as hypervisors allowed us to migrate applications and workloads to circumvent server tie-down, software-defined storage services decouple data from the underlying storage devices (a simple sketch of SLA-driven placement follows at the end of this section).
Seamless orchestration: True hyper-converged infrastructure provides storage and compute; however, to be fully operational, the data center of tomorrow also needs an orchestration layer.
A sophisticated orchestration layer allows organizations to migrate workloads across physical and virtual machines and place data in the right medium and location. By controlling and enabling infrastructure through software layers and orchestration, IT can focus on the economics, performance, SLA and availability of its workloads. The end result is hardware cost reduction, non-disruptive hardware upgrades, reduced management cost, tier-one capability, and the ability to span across data centers – on-prem and cloud infrastructure.
However, the most important benefit is the ability to scale out workloads and harness data, the most valuable business asset.
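To make the SLA-driven placement idea referenced above concrete, here is a minimal, purely illustrative sketch; the tier names, latency thresholds and relative costs are hypothetical assumptions rather than any vendor's implementation.

```python
# Hypothetical sketch of SLA-driven volume placement: software maps a
# workload's requirements to a storage medium instead of binding the workload
# to a specific array. Tier names, thresholds and costs are illustrative only.

TIERS = [
    # (tier name, latency in ms the tier can deliver, relative cost per GB)
    ("flash", 1.0, 10.0),
    ("ssd",   5.0,  4.0),
    ("hdd",  20.0,  1.0),
]

def place_volume(workload, required_latency_ms, size_gb):
    """Pick the cheapest tier that still meets the workload's latency SLA."""
    candidates = [t for t in TIERS if t[1] <= required_latency_ms]
    if not candidates:
        raise ValueError(f"No tier can meet {required_latency_ms} ms for {workload}")
    tier = min(candidates, key=lambda t: t[2])   # cheapest qualifying tier
    print(f"{workload}: {size_gb} GB on {tier[0]} (SLA {required_latency_ms} ms)")
    return tier[0]

place_volume("oltp-database", required_latency_ms=1.0,  size_gb=500)
place_volume("log-archive",   required_latency_ms=20.0, size_gb=5000)
```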
A modern system architecture
To meet the new demands of business, IT can no longer count on the tried and true. What today’s IT teams need is a modern system architecture that has data at its core. The next major frontier in IT will be the adoption of nimble platforms that will redefine how IT gets designed and delivered. We are on the brink of a major storage evolution that will transform how IT enables the business.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 5:46p |
IBM Opens SoftLayer Data Center in Toronto, Canada IBM has opened a SoftLayer data center in Toronto, Canada. It is the first SoftLayer facility in the country and the fifth of fifteen data centers planned for this year as part of a broad $1.2 billion investment program to expand SoftLayer’s cloud capacity. Total capacity in Toronto is more than 15,000 physical servers.
The facility is located in downtown Toronto on Front Street, within a Digital Realty Trust building. The launch follows recent data center openings in London and Hong Kong. The Toronto facility was built exactly like the other SoftLayer data centers, based on the POD (Performance Optimized Data Center) concept. It connects directly into SoftLayer’s global network via a network Point of Presence located in the city.
The new data center provides Canadians with a local presence that can deliver SoftLayer’s full portfolio of services. This includes bare metal and virtual servers, storage and networking. It appeals to those in need of in-country data residency, such as financial institutions, public sector organizations and many large enterprises.
SoftLayer already has a sizable Canadian customer base: more than 1,200. It has customers across financial services, insurance, retail and public sector organizations, as well as strong traction with startups through its Catalyst program, which provides promising startups with credits toward services.
“Toronto is the fourth-largest city in North America and a vital financial and technological hub—not only for the province of Ontario but for all of Canada,” said Lance Crosby, SoftLayer’s CEO. “We have hundreds of existing Canadian customers that can now have SoftLayer services deployed closer to home, and thousands of customers that will take advantage of the facility to get closer to end users in this market.”
IBM acquired SoftLayer for about $2 billion a little over a year ago. The acquisition formed the basis and infrastructure for its cloud play.
In addition to the $1.2 billion for new data centers, IBM is investing $1 billion to launch a Watson business unit and another $1 billion in its Bluemix Platform-as-a-Service. IBM has more than 30,000 clients around the world and has invested more than $7 billion in 17 acquisitions since 2007 to accelerate its cloud initiatives. | | 6:24p |
Cisco’s Midyear Security Report Warns of Lower-Profile Threats Cisco released its 2014 midyear security report at Black Hat USA. The report examines “weak links” in organizations that contribute to the threat landscape, such as outdated software, bad code, abandoned digital properties and user errors. These weak links enable exploits through methods such as DNS queries, exploit kits, amplification attacks and ransomware, among others.
The report examines threat intelligence and cybersecurity trends for the first half of 2014, based on observations of 16 large multinational organizations with more than $4 trillion in assets and revenues in excess of $300 billion. The big takeaway is that companies should not focus only on high-profile vulnerabilities while neglecting to tie up loose ends throughout the IT stack.
Focusing on headline vulnerabilities like the much-publicized Heartbleed allows malicious actors to escape detection as they attack low-profile legacy applications and infrastructure with known weaknesses.
Java remains the programming language most exploited by malicious actors. Java exploits rose to 93 percent of all Indicators of Compromise (IOCs) as of May 2014, up from 91 percent in November 2013.
The report notes an unusual uptick in malware within certain vertical markets. In the first half of 2014, media and publishing led the industry verticals, followed by the pharmaceutical and chemical industry and aviation. The most affected verticals by region were media and publishing in the Americas; food and beverage in EMEA; and insurance in Asia-Pacific, China, Japan and India.
The report names three main security insights tying enterprises to malicious traffic:
- Man-in-the-Browser attacks: Nearly 94 percent of customer networks observed in 2014 had traffic going to websites that host malware, issuing DNS requests for hostnames whose IP addresses are associated with the distribution of the Palevo, SpyEye and Zeus malware families, all of which incorporate man-in-the-browser (MiTB) functionality.
- Botnet hide and seek: Nearly 70 percent of networks issued DNS queries for dynamic DNS domains. This points to networks misused or compromised by botnets that use dynamic DNS to change IP addresses and evade detection and blacklisting; few legitimate outbound connections from enterprises would involve dynamic DNS domains. (A minimal detection sketch follows this list.)
- Encrypting stolen data: Nearly 44 percent of observed customer networks in 2014 issued DNS requests for sites and domains offering encrypted channel services such as VPN, SSH, SFTP, FTP and FTPS, which malicious actors use to cover their tracks by exfiltrating data over encrypted channels and avoiding detection.
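To illustrate the “botnet hide and seek” indicator in concrete terms, the sketch below flags DNS queries to dynamic DNS providers in a resolver log. The provider suffixes and log format are hypothetical assumptions and do not represent Cisco’s methodology.

```python
# Hypothetical sketch of flagging DNS queries to dynamic-DNS domains in
# resolver logs -- the "botnet hide and seek" indicator described above.
# The provider list and log format are illustrative, not Cisco's methodology.

DDNS_SUFFIXES = (".no-ip.example", ".dyndns.example", ".ddns.example")  # placeholder suffixes

def flag_ddns_queries(log_lines):
    """Yield (client, hostname) pairs for queries that hit a dynamic-DNS suffix."""
    for line in log_lines:
        # Assumed log format: "<client_ip> <queried_hostname>"
        client, _, hostname = line.strip().partition(" ")
        if hostname.lower().endswith(DDNS_SUFFIXES):
            yield client, hostname

sample_log = [
    "10.0.0.12 intranet.corp.example",
    "10.0.0.45 c2-node7.ddns.example",     # would be flagged for review
    "10.0.0.12 updates.vendor.example",
]

for client, host in flag_ddns_queries(sample_log):
    print(f"Review: {client} queried dynamic-DNS host {host}")
```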
On a positive note, the number of exploit kits has dropped by 87 percent since the alleged creator of the widely popular Blackhole exploit kit was arrested last year. No clear leader has yet emerged among the exploit kits observed since.
The full report is available in exchange for contact information on Cisco’s website. | | 7:01p |
Google and Others Building $300M Trans-Pacific Submarine Cable Google and five other companies are building FASTER, a new trans-Pacific cable system that will connect major U.S. west coast cities with two coastal locations in Japan at initial speeds of up to 60 terabits per second.
This is not Google’s first investment in undersea cables, and the rationale for FASTER is the same as it was in the past: it is all about the future of the Internet and laying the foundation for infrastructure that can keep pace with the network’s growth.
The consortium includes China Mobile International, China Telecom Global, Global Transit, KDDI and Singtel. NEC will act as system supplier.
The cable will extend to U.S. west coast hubs, including Los Angeles, San Francisco, Portland and Seattle. Google owns a facility in the Dalles, Oregon, which underwent a $600 million expansion last year.
Google’s vice president of technical infrastructure Urs Hölzle said the company is making the investment to make its products faster and more reliable.
The company previously invested in UNITY in 2008 and SJC (South-East Asia Japan Cable) in 2011. UNITY also linked the U.S. to Japan, with a comparatively smaller 3.3 Tb/s connection. SJC is a $400 million Southeast Asia-Japan cable that became operational last June and can handle 28 Tb/s.
Undersea cables are massive and tough, built to withstand their environment. While very resilient, they occasionally need to undergo repairs via submarine operators at depths of over a mile.
The major Japan earthquake in 2011 damaged multiple undersea cables. The damage had a modest impact, with network operators routing around the problem. The new cable will add some additional resiliency in addition to enhanced performance.
Google continues to see rapid growth in demand in Asia Pacific. The company began scouting data center locations in the region in 2007. In 2012, it built its first company-owned data center in Hong Kong, followed by data centers in Singapore and Taiwan. | | 7:47p |
LiquidWeb Among Companies Affected by Major Outage Across US Network Providers 
This article originally appeared at The WHIR
LiquidWeb customers in the U.S. experienced outages Tuesday morning as part of a widespread issue impacting major network providers including Comcast, AT&T, Time Warner and Verizon.
LiquidWeb kept customers in the loop on Twitter and said it was communicating with a number of major providers to get more information on the outages across the US. LiquidWeb is directing customers to downdetector.com to see which connectivity providers are affected.
While some customers asked LiquidWeb if the outage was connected to Monday night’s flooding in Michigan, where the company is based, it has confirmed that is not the case and that the issue reaches far beyond Michigan.
In a situation like this, LiquidWeb said it would typically reroute traffic, but it is being hindered by the number of providers experiencing outages. LiquidWeb public relations specialist Cale Sauter tells the WHIR in an email that its network team is looking at whether it can shift traffic to other providers, but because its most heavily used providers are affected, that is proving difficult.
“As ongoing issues aren’t directly related to any of our infrastructure, we are trying to get more information as well,” LiquidWeb tweeted. “We’re in the process of communicating with several providers and will relay any updates the second we receive them.”
LiquidWeb is telling customers on Twitter that it is in “one of the many US outage areas, causing issues for any of its customers using the affected networks regardless of their location.”
One LiquidWeb customer on Twitter said its VPS servers have been down for more than six hours.
The WHIR will update this story once we learn more about the outage.
Disclosure: LiquidWeb is the WHIR’s and Data Center Knowledge’s web hosting provider.
UPDATE 12 pm ET: Michigan-based Nexcess is experiencing issues as part of the major network outage. Winnipeg-based web host New Winnipeg is also reporting some problems.
This article originally appeared at: http://www.thewhir.com/web-hosting-news/liquidweb-among-companies-affected-major-outage-across-us-network-providers |