Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
Wednesday, November 23rd, 2016
| Time | Event |
| 12:15a |
Google’s Alabama Power Plant Conversion Project May Be Off-Schedule
An ambitious effort by Google to repurpose the Tennessee Valley Authority’s old Widows Creek power plant in Jackson County, Alabama, just across the Tennessee River from Chattanooga, does not appear to be proceeding on schedule. Travis Leder, a reporter for the local ABC-TV affiliate, may not have been the first to notice, but he was the first to point out that if Google intends to begin construction as scheduled during 2016, it has only one month left.
“We anticipate starting construction in 2016,” Google’s project page still reads. Leder’s reporting cites the president of the Bridgeport, Alabama, Chamber of Commerce as saying Google has not been in touch with him about an operating schedule since telling him it was working out details about electrical power.
“They just don’t tell us a lot,” he told Leder.
Google has yet to respond to Data Center Knowledge’s request for comment on the matter.
Electricity should not have been the project’s big problem. The Widows Creek Fossil Plant was a major coal-fired power production facility, so much of its electricity infrastructure is expected to be repurposed.
“Thanks to an arrangement with Tennessee Valley Authority, our electric utility, we’ll be able to scout new renewable energy projects and work with TVA to bring the power onto their electrical grid,” wrote Google senior manager for Data Center Energy Patrick Gammons, in a company blog post last year announcing the project. “Ultimately, this contributes to our goal of being powered by 100% renewable energy.”
But Leder’s reporting is just the tip of what would be considered a very long iceberg, if it weren’t coal black. The old power plant, now completely retired, was one of the world’s major polluters.
A decade ago, the State of North Carolina sued the TVA, claiming that pollution flowing over state lines caused a public nuisance. But a U.S. Fourth Circuit Court of Appeals ruling in 2010 found that the court system was too risky and problematic a place for states to settle the issue of who’s polluting whom.
“No matter how lofty the goal,” the Appeals Court’s decision read, “we are unwilling to sanction the least predictable and the most problematic method for resolving interstate emissions disputes, a method which would chaotically upend an entire body of clean air law and could all too easily redound to the detriment of the environment itself.”
But the lawsuit had already prompted an investigation into the extent to which the U.S. Environmental Protection Agency may have been missing vital clues that could help it protect public health. In their joint 2011 report [PDF], the environmental advocacy groups Earthjustice and the Sierra Club published the results of their scientific investigation, showing that water sampled from Stevenson, Alabama, in the vicinity of the Widows Creek plant, tested at more than 5,000 times the safety level for the carcinogen hexavalent chromium (Cr(VI)) recommended by the State of California (a standard not enforced in Alabama, but used here as a benchmark).
Cr(VI), you may recall, is the same chemical at the center of the case that propelled crusader Erin Brockovich to fame.
Despite the final ruling in the TVA’s favor, the existence of the dispute and the findings of the Earthjustice report had already added to the pressure on the TVA to shut down the Widows Creek facility.
But the plant being shut down does not mean a residual threat to the water supply is no longer present. And the TVA having won the lawsuit does not necessarily mean it will remain wholly liable for cleaning up whatever public health threat lingers in the local area after Google takes full possession of the facility.
| | 12:19a |
Cloud Server Provider Linode Adds Second Tokyo Facility
Citing a surge in Asia/Pacific customer activity, especially from Japan and China, virtual cloud and Web hosting provider Linode announced Monday it’s adding a second data center facility in Tokyo. And in the wake of another round of denial-of-service attacks targeting its Atlanta facility last September, on top of a significant round of attacks the previous December, Linode told Data Center Knowledge it is committing to switch the hypervisors hosting new virtual server instances from Xen to KVM, beginning with the opening of the Tokyo 2 facility.
“We were the target of some DDoS attacks earlier this year, and the Tokyo 2 transit configuration implements many safeguards which prevent or reduce the impact of attacks,” said Brett Kaplan, Linode’s data center operations manager, in a message to Data Center Knowledge Tuesday. “The amount of bandwidth we have now gives us headroom for when we are attacked again. We also have multiple diverse transit providers which can help in the event of DDoS attacks, congestion, and cable cuts.”
In a sign that it has seriously reconsidered its architecture to thwart new attacks — which Kaplan clearly accepts as a fact of life — Linode will no longer offer Xen as the hypervisor for its new servers. Until this week, however, customers in Japan did not have the option of moving their Linode instances from Xen to KVM. They can now do so, Kaplan told us, by migrating their instances to Tokyo 2. As an incentive, Linode is offering RAM upgrades to customers who make the shift.
Strategic Shift
Linode will be renting this space from a major Japanese data center provider. It’s promising better connectivity for the new facility, blending multiple transit providers — including NTT, Tata Communications, and PCCW — with settlement-free peers such as Japan’s BBIX exchange.
“With Tokyo 2, we have implemented a very robust transit platform,” Kaplan told us. “This gives us more bandwidth than we had at our Tokyo 1 facility while providing the same latency to countries in the region.
“We also have additional redundancy in the form of diverse transit providers. Should one transit provider go down or become too congested, we can route to one of our other providers. We have also built in plenty of headroom so we can easily add more bandwidth and transit providers down the line. . . One of the most important aspects of this project was to ensure that the new facility not only supports the high-density power standard we require, but also gives us plenty of room to grow from a space, power, and cooling perspective for many years to come. I can confidently say that Tokyo 2 meets all of those requirements.”
Linode’s business model is simple. It offers four types of virtual server instances, distinguished mainly by their memory sizes — 2 GB, 4 GB, 8 GB, and 12 GB. Compute cores, storage, and bandwidth are bundled proportionately with all four of these instance types. It charges flat rates of 1.5¢, 3¢, 6¢, and 12¢ per hour, respectively.
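To put those rates in everyday terms, here is a minimal back-of-the-envelope sketch in Python that turns the quoted hourly prices into approximate monthly figures. The roughly 730-hour month and the absence of any monthly billing cap are assumptions made purely for illustration, not a statement of Linode’s actual billing rules.

```python
# Back-of-the-envelope monthly costs for Linode's four instance types, using
# the hourly rates quoted above. The ~730-hour month and the absence of any
# monthly billing cap are assumptions made for illustration only, not a
# statement of Linode's actual billing rules.

HOURS_PER_MONTH = 730  # average hours in a month (assumption)

plans = {  # RAM in GB -> hourly rate in USD (figures from the article)
    2: 0.015,
    4: 0.03,
    8: 0.06,
    12: 0.12,
}

for ram_gb, hourly_rate in plans.items():
    monthly = hourly_rate * HOURS_PER_MONTH
    print(f"{ram_gb:>2} GB plan: ${hourly_rate:.3f}/hr, roughly ${monthly:,.2f}/month")
```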
Response and Resolve
When it suffered under the weight of DDoS attacks last year, Linode was, to its credit, unusually transparent and forthcoming about the preventive measures it took, including why some of those measures failed. For example, network engineer Alex Forster explained the use of “blackholing” as a method of taking specific IP addresses out of service in the event they’ve been targeted.
“Blackholing fails as an effective mitigator,” wrote Forster at the time, “under one obvious but important circumstance: when the IP that’s being targeted – say, some critical piece of infrastructure – can’t go offline without taking others down with it. Examples that usually come to mind are ‘servers of servers,’ like API endpoints or DNS servers, that make up the foundation of other infrastructure.”
As a result, he went on, it was particularly difficult for Linode to mitigate attacks against its own servers and against the infrastructure provided to it by its colocation providers.
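To make the trade-off concrete, here is a purely illustrative Python sketch of the decision Forster describes; it is not Linode’s tooling, and every name in it is hypothetical. The point is that blackholing is only acceptable when the targeted address isn’t a shared dependency that would take other services down with it.

```python
# Purely illustrative sketch of the trade-off described above; this is not
# Linode's tooling, and every name here is hypothetical. Null-routing
# ("blackholing") a targeted IP drops the attack traffic, but it also takes
# that IP offline for legitimate users, so it is only acceptable when the
# address is not a shared dependency such as a DNS server or API endpoint.

CRITICAL_ROLES = {"dns", "api-endpoint", "database-master"}

def should_blackhole(target_ip: str, role: str, under_attack: bool) -> bool:
    """Return True if null-routing target_ip is an acceptable mitigation."""
    if not under_attack:
        return False
    # Blackholing a shared piece of infrastructure would take other
    # services down with it -- exactly the failure mode Forster describes.
    return role not in CRITICAL_ROLES

print(should_blackhole("203.0.113.10", "customer-vps", under_attack=True))  # True
print(should_blackhole("203.0.113.53", "dns", under_attack=True))           # False
```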
With the target of those attacks arguably the network infrastructure, exactly what does changing the brand of hypervisor have to do with protecting against future attacks, or mitigating the effects of those attacks? On the surface, it seems like fortifying a city’s air defenses to protect against an attack from the sewers.
Yet there is precedent. In 2012, a research team at Canada’s Simon Fraser University studied the effects of a limited DDoS attack in a laboratory setting [PDF], using four different hypervisor configurations, including both Xen and KVM. They discovered that virtual machines running on these various platforms had differing response profiles when performing synthetic benchmark operations, both under normal circumstances and under attack.
In fairness, Xen and KVM each underperformed the other in different categories of the SFU tests. Yet it was clear that the researchers were able to leverage the open source nature of KVM, coupled with its unique architecture, to develop virtual I/O drivers that would at least attempt to mitigate the effects of overburdened traffic patterns.
We often talk about data center architectures needing to become adaptable to changing circumstances, and transparent about how they go about it. Perhaps we should count Linode as a case in point. | | 1:00p |
How to Reduce Costs and Improve Efficiency in Your IT Infrastructure
Shahin Pirooz is CTO of DataEndure.
There’s saving money, and there’s really saving a lot of money.
The distinction isn’t always clear to budget enthusiasts, who sometimes enjoy drilling a little too deep in pursuit of arbitrary cost-reduction percentages by making only the tiniest of trims here and there.
For instance, it’s not hard to thinly slice away at a few employee perks, maybe not send as many people to conferences/networking events, or switch from occasional catered employee lunches to less frequent potlucks. Some companies have considered minor cuts to hours, based on the hope that productivity won’t change drastically at 32 hours a week instead of 40, and they won’t have to pay as much in benefits.
But a smarter calculus is to look for significant savings and efficiencies in your data management solutions.
Because the IT world still has some “anything goes” elements to it, different technical service providers can offer a wide range of costs, from up-front fees to ongoing subscription fees.
Companies that venture into the cloud for storage and transfers also face all sorts of fee and organizational structures. So many different terms, costs, and features for every solution can make an “apples-to-apples” comparison hard to quantify; instead, many companies find themselves looking through a whole fruit bowl, deciding which option will work best and save the most money and time.
Try these concepts to improve efficiency and save big.
Move Unneeded Data Out of Tier 1
Do you have a lot of data that you don’t do anything with? More than likely the answer is yes, according to Veritas, which studied how much information customers use regularly and how much sits untouched. Its 2016 Data Genomics report concluded that a) the amount of data is exploding, and b) as a result, there’s plenty of unexpected clutter in just about everyone’s network environment. That wouldn’t be so terrible, except that much of this inactive data sits in a Tier 1 production environment, where it costs you an ongoing amount of money for storage and for active access.
The study showed that the biggest culprits are files like images, videos, and spreadsheets. They often sit at this tier because no one put much thought into them when they were archived, because they were archived automatically, or because no one monitors the use and types of files after a batch of work product has been archived.
While this seems like it could be a “that’s interesting” tidbit, the following numbers will make you say “that’s REALLY interesting.”
If 41 percent of the environment is stale – untouched in three years – that could cost about $20.5 million to preserve. If you work with a company to identify and evaluate what can be re-classified, deleted, or archived at a deeper and less active level, you can save an average of $2 million per year.
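A rough, hypothetical worked example helps show how that re-tiering math plays out. The roughly $20.5 million figure is the article’s (treated here as an annual Tier 1 cost, which is an assumption); the 10:1 tier cost ratio and the 10 percent “re-tierable” share are invented for illustration, and happen to land near the roughly $2 million average savings cited above.

```python
# Illustrative only: the $20.5M figure comes from the article (treated as an
# annual Tier 1 cost), while the tier cost ratio and re-tierable share are
# assumptions made up for this sketch.

stale_tier1_cost = 20_500_000   # annual cost of keeping the stale 41% on Tier 1 (article figure)
cheaper_tier_ratio = 0.10       # assumption: archive tier costs ~10% of Tier 1
retierable_share = 0.10         # assumption: share of stale data safe to move today

cost_today = stale_tier1_cost * retierable_share
cost_after = cost_today * cheaper_tier_ratio
savings = cost_today - cost_after

print(f"Re-tierable slice on Tier 1 today: ${cost_today:,.0f}/yr")
print(f"Same data on the cheaper tier:     ${cost_after:,.0f}/yr")
print(f"Estimated annual savings:          ${savings:,.0f}")
```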
Avoid the 3-Year ‘Tech Refresh’
Service plans usually sound like a nice touch when finalizing a long-term deal, especially when the vendor describes how bad things can get if something goes wrong – plus how good they can be if you buy the plan. This push to “help you sleep a little better at night without worrying” is thrown in for pretty much every purchase, from car shopping to insurance to enterprise storage sales.
In the latter area, the coverage often lasts for a term of 36 months. After that term passes without any major problems (knock on wood), the vendor may return with an offer to continue coverage, but at a significantly higher price, sometimes 300-400 percent higher.
At that point, rather than starting from scratch, companies face three options: continue the status quo and absorb the marked-up price into their budget, even if it means cuts in other areas; switch to a new system, perhaps from the same vendor under another three-year deal, and then transfer their data over in a big, messy, and expensive switch-over; or say “no thanks” and go without support (scary!).
A better, simpler option would be to work with partners that don’t require the traditional three-year cycle. An ongoing subscription, not unlike a mobile phone plan, can leave it up to the customer to decide if and when to upgrade. Customers can also be given tools to take care of some of their own data storage and maintenance. Costs remain steady throughout a product’s life cycle.
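A hypothetical side-by-side, sketched in Python below, shows why the renewal markup matters so much over two refresh cycles. Every dollar amount is invented for illustration; only the 300-400 percent markup range comes from the scenario described above.

```python
# Hypothetical comparison of the two models described above: a 36-month
# support term followed by a marked-up renewal, versus a flat subscription.
# Every dollar figure is invented for illustration; the 300-400 percent
# markup range comes from the text and is interpreted here as ~3.5x.

initial_support_per_year = 20_000   # assumed annual cost of the bundled 36-month plan
renewal_multiplier = 3.5            # assumed renewal multiple, per the markup described above
subscription_per_year = 30_000      # assumed flat subscription price

years = 6  # two refresh cycles' worth of time

term_plus_renewal = (initial_support_per_year * 3
                     + initial_support_per_year * renewal_multiplier * 3)
flat_subscription = subscription_per_year * years

print(f"Term + marked-up renewal over {years} years: ${term_plus_renewal:,.0f}")
print(f"Flat subscription over {years} years:        ${flat_subscription:,.0f}")
```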
Try Data Reduction Technologies
Many storage systems proclaim how much room you’ll have if you upgrade and how much better your system will perform, but these claims aren’t always honest or accurate. For instance, a 200 TB system may actually turn out to have 120 TB once everything is formatted and ready to use – still plenty of space, but not what was promised or promoted.
Instead, consider a class of storage system with inline data reduction and compression. This will not only deliver the full stated capacity, but even more: the complete 200 TB of storage you were pining for can effectively stretch as high as 500 TB.
This works out to be more storage capacity and better performance, rather than settling for less. Even better, sometimes these features are available at no extra cost.
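For the arithmetic-minded, the capacity figures above imply the following ratios; the raw, formatted, and effective numbers are the article’s, and the ratios are simple division rather than any vendor’s published specification.

```python
# Simple arithmetic on the capacity figures cited above. The raw, formatted,
# and effective numbers are the article's; the ratios are just division, not
# any vendor's published specification.

raw_tb = 200        # purchased capacity
formatted_tb = 120  # usable capacity after formatting and overhead
effective_tb = 500  # capacity after inline data reduction and compression

overhead_loss = 1 - formatted_tb / raw_tb
ratio_vs_raw = effective_tb / raw_tb
ratio_vs_formatted = effective_tb / formatted_tb

print(f"Formatting/overhead loss:          {overhead_loss:.0%}")
print(f"Effective capacity vs. raw:        {ratio_vs_raw:.1f}:1")
print(f"Effective capacity vs. formatted:  {ratio_vs_formatted:.1f}:1")
```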
Your Takeaway
Where storage solutions are concerned, it’s easy to do the minimum: maybe let auto-archiving sweep data away to someplace you’ll never look at it again. However, active companies often do want or need to access this data. Likewise, it’s easy to work with whatever tools you’re given rather than looking for smarter ways of organizing your active and less active records. Make sure you work with a company that can offer smart, affordable strategies customized to your organization’s current and future needs. | | 4:33p |
Facebook Reportedly Builds China Filter as Hurdles Linger
BLOOMBERG — Facebook Inc. is so keen to return to China that it built a tool that would geographically censor information in the country, according to the New York Times.
While that may help the Chinese government get comfortable with Facebook, the company’s re-entry may not happen for years, if at all, given licensing restrictions and other regulations that favor locally owned companies. China, which blocked the world’s largest social network in 2009, has few incentives to let it back in.
READ MORE: China Adopts Cybersecurity Law Despite Foreign Opposition
Chief Executive Officer Mark Zuckerberg visits China frequently, and yet the company is no closer to putting employees in a downtown Beijing office it leased in 2014, according to a person familiar with the matter. The company hasn’t been able to get a license to put workers there, even though they would be selling ads shown outside the country, not running a domestic social network, the person said. The ad sales work is currently done in Hong Kong. The person asked not to be identified discussing private matters.
“We have long said that we are interested in China, and are spending time understanding and learning more about the country,” a Facebook spokeswoman said in an e-mailed statement. “However, we have not made any decision on our approach to China. Our focus right now is on helping Chinese businesses and developers expand to new markets outside China by using our ad platform.” The company declined to comment on the New York Times report or its real estate interests.
While China represents the biggest untapped market for Facebook, information and web access in the country is strictly controlled and allowing the social network in would raise the risks that unwanted news and views would spread.
It also faces entrenched opposition. Tencent Holdings Ltd.’s WeChat has more than 800 million monthly active users tapping into similar services to those offered by Zuckerberg’s company.
China and Facebook aren’t engaged in ongoing talks about the conditions of a return, according to a separate person familiar with the matter who asked not to be identified as the matter is private. The ability to censor content would be a precondition, not the deciding factor, in any entry to the Chinese market, the person said.
The New York Times said Facebook’s tool would block content from appearing in the news feed. It would be provided to Chinese partners to help them censor content, the newspaper reported, citing unnamed current and former employees. Zuckerberg has supported and defended the effort, saying that it was better for Facebook to enable conversation in a country even if it’s not the full conversation, according to the Times.
The software is among many projects Facebook has initiated that may never be implemented, and it has not been used so far, according to the Times.
Facebook leased space in Beijing’s Fortune Financial Center in 2014, people familiar with the matter said at the time. The company’s name doesn’t appear on the office directory and several staff at the building said Wednesday the social network giant doesn’t currently operate an office there. | | 4:42p |
Report: SaaS Dominates Cloud Usage in India
Brought to You by The WHIR
Cloud adoption is rapidly increasing in India, with roughly two-thirds of organizations using cloud services having adopted them within the past two years, according to research released this week by the Cloud Security Alliance and Security-as-a-Service provider Instasafe Technologies. The organizations presented the State of Cloud Adoption & Security in India study at the 4th annual Cloud Security Alliance APAC Congress in Bangalore.
The report includes responses from over 200 CIOs in India, 62 percent of whom said their organization already uses cloud services.
SaaS is used by over 88 percent of respondents’ organizations, making it the most common type of cloud service adopted so far, ahead of IaaS (55 percent) and PaaS (45 percent). Among the 38 percent who have not adopted cloud services, almost three-quarters are considered potential users, and are in the exploratory phase.
Adoption seems to be a headlong rush for many organizations, with one-quarter reporting the use of 10 or more cloud applications, and another third using between 5 and 10 cloud applications.
Key barriers in the region include a lack of industry standards and, of course, perceived risks of data loss and security breaches. Cost effectiveness was actually tied with the lack of industry standards as the most commonly identified barrier, but the study authors seem to consider it primarily an awareness issue.
The Digital India program, announced in 2015, is intended to address infrastructure challenges, the study says, providing reason for optimism that cloud services will continue to boom in India.
India’s telecom authority blocked Facebook’s controversial Free Basics service from the country early this year, choosing not to trade zero-rating for faster internet penetration. GoDaddy added services in three languages to serve the growing Indian market this summer. EDC predicted a year ago that India will be the country with the most developers in the world by 2018.
This article was originally posted here at The Whir. | | 4:51p |
Microsoft Invests in Quantum Computing Division with Key Appointments
By WindowsITPro
Microsoft is making a significant investment in its quantum computing division with the addition of several new executives, including the appointment of long-time Microsoft executive Todd Holmdahl, who will lead the scientific and engineering effort.
Microsoft has also hired two leaders in the field of quantum computing, Leo Kouwenhoven and Charles Marcus. The company will soon bring on Matthias Troyer and David Reilly, two other leaders in the field.
The team will work to build a quantum computer while also creating the software that could run on it, Microsoft says in a blog post.
The investment comes months after Microsoft co-founder Bill Gates said that quantum cloud computing could arrive within as little as six years.
“Microsoft’s approach to building a quantum computer is based on a type of qubit – or unit of quantum information – called a topological qubit,” the company says. “Using qubits, researchers believe that quantum computers could very quickly process multiple solutions to a problem at the same time, rather than sequentially.”
Ultimately, Microsoft wants to “create dependable tools that scientists without a quantum background can use to solve some of the world’s most difficult problems… that could revolutionize industries such as medicine and materials science.”
Holmdahl previously worked on the development of the Xbox, Kinect, and HoloLens, while Marcus and Kouwenhoven are both academic researchers who have been working with Microsoft’s quantum team for years. Microsoft has provided funding for an “increasing share” of the topological qubit research in their labs.
Marcus says that without cooperation between scientists and engineers, a quantum economy will never be realized.
“I knew that to get over the hump and get to the point where you started to be able to create machines that have never existed before, it was necessary to change the way we did business,” Marcus said in a statement. “We need scientists, engineers of all sorts, technicians, programmers, all working on the same team.”
This article was originally posted here at WindowsITPro. | | 6:24p |
New UK IBM Cloud DCs ‘Infused with Intelligence,’ But Maybe Not for Its Own Ops
If cognitive intelligence algorithms are so great, how come the same service providers that offer them to their customers don’t take advantage of them for their own sake? These AI tools could potentially improve their automation, their responsiveness to traffic bottlenecks, and conceivably their power consumption.
When IBM announced this week its plans to expand its UK-based operations for IBM Cloud, adding four data centers across the country to its existing two, it characterized those new facilities as being “infused with cognitive intelligence.” In an earlier decade, that might have sounded too much like science fiction; today, it sounds more like a miracle ingredient for cleaning kitchen countertops. If these new facilities are indeed so “infused,” will IBM be able to leverage that infusion for the facilities’ own benefit?
In responses to Data Center Knowledge from London, which arrived early Friday morning, the company told us that cognitive intelligence is something that clients, specifically, are asking for. Furthermore, IBM said that clients — and only clients — will be leveraging these features, at least for now.
“IBM Cloud has a single platform that brings together IaaS with more than 150 APIs and services, that clients are asking for as we shift to Cloud as a platform for innovation,” stated the IBM spokesperson. “Clients want help to manage their data resources through our data centers, but we also offer options to scale their applications and build new ones infused with cognitive intelligence, blockchain services, IoT, data-as-a-service, and more.”
This is as far as IBM will go, for now, with respect to its own efforts to do the sort of thing software engineers call “eating their own dog food.” It is not alone in that respect: Google has recently bragged about the cognition-like capabilities of its own DeepMind project, and how it’s expanding the possibilities for real-world applications of neural networking. Last June, Google went so far as to boast it had applied DeepMind to its own data center power consumption models, reducing its cooling bill (by its own tally) by about 40 percent.
But when pressed further about its application of DeepMind — for example, in determining better patterns for automating the distribution of the many microservices that comprise its global computing network — all the company was able to do at the time was sit quietly and take notes. Its people may get back to us at some point.
It’s not as though Google, or IBM, or anyone else, is uninterested in the idea. In 2014, Google data center research engineer Jim Gao published a research paper [PDF] demonstrating that neural networks were particularly effective in predicting power usage effectiveness (PUE) ratios over extended periods of time, given historical data for prior power consumption.
Gao stated that the variables in these predictions could be adjusted for future conditions — for example, to determine whether using drycoolers to exchange heat with outdoor air during the winter months could be more effective at certain periods, and whether rising wet bulb temperatures have a negative impact on fan speeds at certain points in time.
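As a flavor of what such a model looks like in practice, here is a minimal sketch assuming synthetic data, hypothetical features, and a deliberately tiny network; it is not Gao’s model, which was trained on many real operating variables from Google’s historical data, but it follows the same pattern of fitting a neural network to operating conditions and then probing how predicted PUE shifts as one variable, such as wet-bulb temperature, changes.

```python
# A minimal sketch, assuming synthetic data, hypothetical features, and a
# deliberately tiny network; it is not Gao's model. It shows the same basic
# pattern: fit a neural network to operating conditions, then probe how
# predicted PUE shifts as one variable changes.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2_000

it_load_mw = rng.uniform(5, 25, n)    # IT load (assumed feature)
wet_bulb_c = rng.uniform(2, 24, n)    # outdoor wet-bulb temperature (assumed feature)
setpoint_c = rng.uniform(18, 27, n)   # cold-aisle setpoint (assumed feature)

# Synthetic "ground truth": PUE worsens as wet-bulb temperature rises and
# improves slightly with higher setpoints and higher utilization, plus noise.
pue = (1.12
       + 0.004 * wet_bulb_c
       - 0.003 * (setpoint_c - 18)
       - 0.001 * (it_load_mw - 5)
       + rng.normal(0, 0.005, n))

X = np.column_stack([it_load_mw, wet_bulb_c, setpoint_c])
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0),
)
model.fit(X, pue)

# Probe the fitted model: predicted PUE at a fixed load and setpoint, across
# a range of wet-bulb temperatures.
probe = np.column_stack([np.full(5, 15.0), np.linspace(5, 25, 5), np.full(5, 22.0)])
for wb, p in zip(probe[:, 1], model.predict(probe)):
    print(f"wet-bulb {wb:4.1f} C -> predicted PUE {p:.3f}")
```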
But Gao also embedded a small complaint at the front of his paper: budget outlays at Google and elsewhere for studying ways to reduce PUE were themselves being reduced. The reason: budget managers were concluding that they had already reached the point of diminishing returns in efforts to lower PUE further.
When we asked IBM to tell us what formulas it used in its conclusion that it needed to triple data center capacity for IBM Cloud in the UK, the only formula it provided came not from internal research, but from analysis firm IDC.
“Presently, client demand for cloud is soaring. Overall, the industry is moving into the new wave of cloud adoption – transitioning from costs savings to a platform for innovation,” reads IBM’s response to Data Center Knowledge.
“The need for cloud data centers in Europe continues to grow; IDC forecasts worldwide revenues from public cloud services will reach more than $195 billion in 2020. UK Cloud adoption rates have increased to 84 percent over the course of the last five years, according to Cloud Industry Forum. IBM’s new facilities will give users access to a broad array of server options, including bare metal servers, virtual servers, storage, and networking capabilities.”
If AI truly is becoming a competitive market, any cloud service provider would do itself a world of good by using its own data centers as its own test case.
Fareham, UK, the site of the first of four new IBM Cloud data centers in that country. [Creative Commons]
Few other details are known about IBM’s plans at this point, except that the first of its four new UK data centers will go online next month in Fareham, a western suburb of Portsmouth. The other three facilities are scheduled to go online during 2017, though their ultimate locations may have yet to be determined. Ark Data Centres will be the lease holder for the second new facility; it’s a joint venture partner with the UK Government’s own Crown Hosting.