Data Center Knowledge | News and analysis for the data center industry
Wednesday, March 29th, 2017
12:00p
Ford Plans $200M Data Center in Anticipation of Connected Car Data Explosion

Ford Motor Company announced a $200 million investment to build a new data center in Flat Rock—the second of two in Michigan—as it braces for an estimated 1,000 percent increase in data usage as automotive and computer technology become ever more connected.
The company formed Ford Smart Mobility a year ago to grow its leadership in the connectivity, manufacturing, and support of electric and driverless cars. And Ford would be hard-pressed to succeed in the digital revolution without adequate infrastructure to store, process, and integrate the 200 petabytes of data coming its way by 2021.
“Ford Smart Mobility and expanding into mobility services are significant growth opportunities,” President and CEO Mark Fields wrote in a post on the company’s website. “Our plan is to quickly become part of the growing transportation services market, which already accounts for $5.4 trillion in annual revenue.”
Last year, Ford began construction on a complete overhaul of its 64-year-old research and engineering campus near Detroit. Ford said it envisions having 30,000 employees on two Silicon Valley-like campuses in Dearborn, rather than scattered across the city in some 70 buildings. More than 7.5 million square feet — nearly triple the size of the Empire State Building — will be transformed.
The plan includes a new 700,000-square-foot design center to replace the sprawling product development center and historic design dome. The new campus, expected to be mostly complete within seven years, will be served by a network of autonomous vehicles, on-demand shuttles, and electric bikes, Ford said.
News of the investment in a second data center comes after Ford announced in January that it would spend $700 million on its Flat Rock Assembly plant to make it a facility for creating electric and autonomous vehicles—and add 700 new jobs—music to the ears of President Donald Trump.
The morning of Ford’s announcement—one that included another $1 billion in plant upgrades and another 130 jobs in its Michigan plants—Trump tweeted his delight at the news.
Of that $1 billion, $850 million will go toward upgrading its Michigan Assembly Plant to support the return of the Ford Ranger and Bronco. Ford will spend the other $150 million at its Romeo Engine Plant to expand capacity for engine components for several of its vehicles, creating or retaining 130 U.S. jobs.
That brings the total of investments by Ford in Michigan over just three months to $1.9 billion. Over five years, Ford has spent $12 billion on its U.S. plants and created nearly 28,000 jobs in the states.
Ford posted its second-best profit in history in 2016, with net income of $4.6 billion and an adjusted pre-tax company profit of $10.4 billion.

3:00p
Unused Oregon Prison May Be Converted to Data Center

A never-used $58 million prison built 14 years ago in Multnomah County, Oregon, is costing at least $300,000 a year to maintain, but it may finally be put to good use.
The Portland Tribune reported that Pacific Development Partners of Santa Monica, California, has offered the county $10 million—a fraction of what it cost to build the Wapato jail—for the property, which could become a data center.
Although officials say they are very interested in the offer, they want to proceed with caution until they know the firm and every use under consideration in detail. The county’s hesitation certainly seems warranted: the last time someone showed interest in the property, the deal fell apart after the supposed “developer” lied about his birthdate and, ultimately, his identity to the Portland Tribune.
The newspaper’s ensuing investigation found that the man, who had planned to turn the prison into an organic food production facility, had no proof that he had ever pulled off, or could ever pull off, a multi-billion-dollar real estate development deal. The paper also reported that he had been accused of theft and forgery but was never charged.
So, tip-toeing is certainly justified, and it stands to reason that, despite saying it wants to push the deal through as quickly as possible, the firm will have to jump through hoops to win the county’s trust and confidence.
Philip Zimmerman, a broker for the firm, said it is willing to agree not to turn the property into a private prison or a marijuana-growing facility. The latter is worth ruling out explicitly, since both recreational and medicinal pot are legal in Oregon.
Although nothing on the real estate company’s website suggests that either of those uses is in its wheelhouse, Pacific Development Partners has developed, redeveloped, or owned more than $1 billion in retail, office, multi-tenant, and industrial projects since 1985, specializing in buying properties and then leasing them to third parties.
“We’re not going to be doing any low-level atomic testing, either,” Zimmerman told the Portland Tribune. “We’re serious developers.”
If the firm does end up turning the old prison into a data center campus, a lack of demand is unlikely to leave it vacant. Lack of demand is exactly why the prison never opened in the first place: when construction began, the county had a definite need for a jail, but by the time it was finished, the jail population had declined, and the county no longer had the funds to operate it.
With its tax incentives for data centers, Oregon has become one of the most popular states for companies looking to build or colocate there. Amazon was the latest big company to announce a new data center in the state, bringing its total there to nine.
4:04p
Super Micro Looks to Server Design to Save Data Center Space

(Bloomberg) — Density matters to Super Micro Computer Inc.
To meet customer demands to take up less space, the server maker is reducing the number of cables inside its products and revamping their structure to make room for more processing power and memory. That puts the focus on improving designs for the computers that run networks, as the San Jose, California-based company seeks to boost performance without making the package bulkier, to win clients from Dell Inc. to Lenovo Group Ltd.
“All tier one companies are about design,” Chief Executive Officer Charles Liang said in an interview. “You will find our density 50 percent higher. We save customers lots of space.”
Fewer cables also mean better airflow. That, combined with shared adapters and fans, results in less wasted power. Liang said more than half of Super Micro’s servers meet the highest levels of power efficiency on the market.
See also: Meet Microsoft, the New Face of Open Source Data Center Hardware
Super Micro’s relationship with Apple Inc. has attracted more attention recently. Apple cut ties with the company due to security concerns with its servers, The Information reported last month. Liang said no “valid security concern” had been found.
But with another technology giant, Super Micro has forged a strong connection. Intel Corp., its main supplier of processors, recently installed more than 36,000 high-density units in its data center in Silicon Valley. The relationship gives Super Micro a preview of Intel’s new chips.
Super Micro Chief Financial Officer Howard Hideshima told analysts in January that when Intel launches a new product, its engineers work alongside a Super Micro crew, giving the latter an edge in preparing its next servers.
“Intel obviously has the best CPU, chipset and networking technology,” Liang said. “Super Micro has the expertise to design hardware for whatever computer systems. I feel we’re pretty lucky indeed.”
Shares of Super Micro have slumped 11 percent this year, partly on the reports of security issues. Of the seven analysts covering the stock, five recommend buying and the other two have hold ratings.
“All server companies work around Intel, and Super Micro holds a close position to it,” said Thomas Zhou, an analyst at research firm IDC. “That can help and limit its product line at the same time.”
Liang founded Super Micro 24 years ago, mainly to sell a much simpler product: motherboards, the main circuit boards at the heart of most personal computers. It now does assembly at eight plants worldwide.
About a third of the company’s 3,000 employees are engineers, responsible for everything from motherboards to chassis and power supplies. At its factory near Taipei’s main international airport, workers were busy putting together servers ranging from units no bigger than a PC to systems large enough to fill an elevator.
The plant produces 10,000 servers a month, contributing to Super Micro’s shipment of 1.6 million units a year. Liang said that number, which includes bare-bones units, would lift it into position as the No. 3 server maker globally.
“We grow at a much faster pace than our industry overall,” Liang said. “The goal is to grow market share.”

4:37p
Hybrid IT is Today’s Reality, Not the Future

Gerardo Dada is Vice President for SolarWinds.
Today’s business leaders place tremendous pressure on the IT function to align its technology to the latest business initiatives, to move faster, to maintain higher levels of uptime, and to invest in innovation, all while reducing costs where possible and minimizing risk.
At the center of all this are applications and data. Both of these elements help organizations define and differentiate themselves, are essential for the business to operate effectively, and often, are key to delivering value to users and customers. To adapt to these needs, IT organizations are removing barriers to consumption, simplifying processes through automation, and accelerating the rate of change.
As organizations undergo these transformations—implementing cloud, virtualization, analytics, digital experience management, etc.—that lay the new foundation for delivery of applications and data, IT professionals must be prepared to manage, secure, monitor, and remediate issues not only on-premises and in the cloud, but for both environments at once (hybrid IT).
Most organizations already have at least some of their infrastructure in the cloud, and they often use at least basic monitoring features—most likely the tools provided by the cloud service provider, which are largely tactical and infrastructure-centric. It is also common for organizations to use more than one cloud environment: a single IT team may be monitoring multiple clouds (for example, Amazon Web Services and Microsoft Azure), public and private clouds, and SaaS applications that also need monitoring (for example, Salesforce and Marketo).
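As a rough illustration of how provider-native monitoring stays tactical and per-cloud, here is a minimal sketch that pulls CPU utilization for a single EC2 instance from Amazon CloudWatch using boto3; the region and instance ID are hypothetical placeholders, and each additional cloud would need its own equivalent of this call.

    from datetime import datetime, timedelta, timezone

    import boto3  # AWS SDK for Python

    # CloudWatch client for one region; other clouds (e.g., Azure Monitor)
    # require entirely separate clients and query models.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Average CPU utilization over the last hour, in 5-minute buckets,
    # for a single (hypothetical) EC2 instance.
    now = datetime.now(timezone.utc)
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=now - timedelta(hours=1),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )

    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], f"{point['Average']:.1f}%")

Multiply that boilerplate by every provider and every SaaS application in use, and the case for a unified layer on top becomes clear.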
While discrete monitoring tools may cover the basics, there is a clear missed opportunity to improve IT efficiency and effectiveness by monitoring all cloud and on-premises environments together, creating holistic insight into everything IT is responsible for: applications, storage, databases, servers, and the network.
Such visibility across environments is essential to solving one of the biggest challenges organizations face with hybrid IT: deciding how to make the best use of the cloud. Important questions include: What workloads should we move to the cloud? What is the baseline resource consumption we should consider when provisioning resources? How will applications perform in the cloud relative to their on-premises performance? What are the likely resource contentions? What is the most resource-effective way to run a specific workload?
IT professionals should not assume that providing more cloud resources, larger instances, and faster databases will be the right answer to all performance questions—that’s often how sticker shock and technical issues arise. The cloud makes the correlation between performance, efficiency, and cost more evident.
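To make the baseline question above concrete, here is a small illustrative sketch, not from the article, that derives a provisioning baseline from historical utilization samples using a percentile rule; the sample data and the choice of percentile are assumptions.

    from statistics import quantiles

    def provisioning_baseline(samples, percentile=90):
        """Utilization level that covers `percentile` percent of samples.

        Sizing to a high percentile rather than the absolute peak avoids
        paying for capacity that is almost never used, per the
        over-provisioning caveat above.
        """
        cuts = quantiles(samples, n=100)  # 99 cut points: percentiles 1-99
        return cuts[percentile - 1]

    # Hypothetical hourly CPU-utilization samples (percent) for one workload,
    # including a single one-off spike to 90 percent.
    cpu_samples = [22, 25, 31, 28, 35, 40, 38, 90, 33, 29, 27,
                   36, 41, 30, 26, 24, 32, 34, 37, 23]
    print(f"p90 baseline: {provisioning_baseline(cpu_samples):.1f}% CPU")

With the one-off 90 percent spike set aside, the p90 baseline lands near 41 percent rather than at the peak, which is exactly the kind of data-driven sizing decision that needs monitoring behind it.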
Therefore, what’s missing from the puzzle is unified visibility across cloud, on-premises, and hybrid IT services. Here are some ways this “single point of truth,” made possible through end-to-end hybrid IT monitoring, is helpful:
- Using data to make provisioning decisions: At the end of the day, the cloud shouldn’t be used as a cost savings strategy. It’s inexpensive to get started and easy to provision additional services, but the bill can grow quickly. Without the right monitoring data to make good provisioning decisions, buyer’s remorse will surely follow.
- Working towards performance certainty: This one can be difficult, but with an experienced IT professional and a wealth of performance metrics available, the IT function can understand how its systems perform, why they perform that way, and what the performance drivers are. It will also gain a deeper understanding of systems optimization.
- Course correcting when needed: Some organizations may want to move applications back on-premises, whether for cost reasons, security, or poor performance in the cloud. Each of these problems can be caught early, before an incident, only through holistic monitoring of hybrid environments. A true hybrid system will use a mix of cloud and on-premises resources—good data provides the insight to find the optimal balance between the two.
Finally, here are some suggestions for getting started:
- Create an inventory of what’s being monitored: Most IT departments have a variety of monitoring tools for a number of different things. Are there applications in the cloud monitored by one tool? Are workloads hosted in a different data center covered by a separate tool? Before standardizing monitoring processes, organizations need to create an inventory of everything they currently monitor (or that needs monitoring) and the tools being used to do so.
- Focus on what matters (end-user performance): IT departments are ultimately accountable for the end-user experience. An application-centric mindset with end-user experience as a key metric can be a powerful force to align a traditionally siloed team behind a common goal towards which everyone contributes.
- Standardize all systems: This should be done for every workload, independent of what tool is being used, especially if an IT department is using multiple tools. It’s impossible to optimize what isn’t being measured, so it’s in every IT department’s best interest to create a standard set of monitoring processes. Determine what key metrics are needed from each system, the necessary alerts for each system, and what the actionable processes are; a minimal sketch of such normalization follows this list.
- Unify the view: IT departments should work towards having a comprehensive set of unified monitoring and management tools in order to ensure the performance of the entire application stack, from on-premises to the cloud.
- Adopt the discipline of monitoring: Monitoring has often been an afterthought: for most organizations, a necessary evil the IT department turns to when there are issues to remediate, often done with whatever pre-loaded software is at hand. Monitoring as a discipline, which places a greater emphasis on proactive work, is designed to help IT professionals escape the short-term, reactive mode of administration caused by ineffective, ad hoc practices and become more proactive and strategic about their digital transformations.
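As referenced in the standardization point above, here is a minimal, hypothetical sketch of normalizing readings from heterogeneous monitoring tools into one canonical schema with shared alert rules; the field names, sources, and thresholds are illustrative assumptions rather than any specific product’s model.

    from dataclasses import dataclass

    @dataclass
    class Metric:
        """One normalized observation, whichever tool produced it."""
        source: str             # e.g., "cloudwatch" or "on_prem_agent"
        system: str             # workload or host name
        name: str               # canonical metric name, e.g., "cpu_percent"
        value: float
        alert_threshold: float

        def breached(self) -> bool:
            return self.value >= self.alert_threshold

    # Hypothetical readings from two different tools, mapped into the same
    # schema so one set of alert rules applies across cloud and on-premises.
    readings = [
        Metric("cloudwatch", "web-frontend", "cpu_percent", 91.0, 85.0),
        Metric("on_prem_agent", "db-primary", "cpu_percent", 62.5, 85.0),
    ]

    for m in readings:
        status = "ALERT" if m.breached() else "ok"
        print(f"[{status}] {m.system} ({m.source}): {m.name}={m.value}")

The point is not the code itself but the shape: once every tool’s output is reduced to the same few fields, the unified view described above becomes a query rather than a re-engineering project.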
In summary, IT professionals should establish the practice of monitoring as a discipline to succeed in hybrid IT environments. As companies look to become more strategic and transform themselves into truly digital organizations, the onus falls on IT professionals to get them there. A unified approach to monitoring that turns data points from across infrastructure components and environments into actionable insights, coupled with the best practices above, can ultimately increase the effectiveness and efficiency of the organization and the business as a whole.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

6:11p
European Data Center Startup Etix Bets on R&D, Edge Markets

The word “unicorn” is not usually applied to data center colocation startups; it’s reserved for startups with $1 billion or higher valuations, usually software or hardware companies. But one European data center provider, thanks to its substantial investment in developing technology, has invoked the image of the mythical beast.
Last year, Luxembourg-based Etix Everywhere became one of 18 companies to participate in the Startup Europe Comes to Silicon Valley program, which named its participants Europe’s future unicorns. The program seeks to connect European tech companies with investors and potential enterprise customers in the US.
While the data center provider business is at its core a real estate business, the commoditization and consolidation the space has undergone in recent years mean a provider has to think hard about differentiation if it wants to compete with global giants like Equinix, Digital Realty Trust, or Global Switch.
For Etix, backed by private equity investors, the answer has been innovation in both technology and product. The company has been spending heavily on R&D, designing software, hardware, and of course data centers; it’s also created a variety of data center investment options for its customers.
Replicating a single data center design around the world and using pre-fabricated components to reduce time-to-market, it has built six facilities and has around 20 additional projects in various stages of development.
Much of Etix’s technology investment has gone toward automating data center management so that it requires as little human presence on-site as possible. Charles-Antoine Beyney, the company’s CEO and one of its two founders, said this kind of automation is a “prerequisite” to make data centers “commodity-proof.”

Backup generators outside an Etix data center (Photo: Etix Everywhere)
It’s also competing with leading providers on price, he added, although the company generally stays away from top-tier markets like London and Amsterdam, where those players (the likes of Equinix and Interxion) dominate. Etix will go into a market like that if a customer wants it to, but it does not target those locales proactively.
Its sweet spot is second- and third-tier markets like Edinburgh, Morocco, Angola, and Iran. “We are really there to provide some really localized services,” Beyney said.
Landing for One of Google’s Submarine Cables
One of Etix’s current projects is in Fortaleza, a city in northeastern Brazil that has a population of about 2.6 million but isn’t considered a top data center market in the country. There the company is building a landing station for two future submarine cables that will link Brazil, the US, and Angola.
The partially Google-funded Monet cable will connect a landing station in Boca Raton, Florida, to Etix’s landing station in Fortaleza and another one in Santos, Brazil. Also landing in Fortaleza will be the South Atlantic Cable System, which will stretch across the Atlantic Ocean to land in Luanda, Angola.
Both cables are being built by Angola Cables, which hired Etix to build the Fortaleza station and a data center close to it for the future cables’ customers.
The Monet cable is expected to come online this year, while SACS is slated for completion in the third quarter of 2018.
Read more: Here are the Submarine Cables Funded by Cloud Giants
R&D Focused on Lights-Out Data Centers
Etix is targeting locations that can be considered edge markets, and companies that commission data centers in those markets are often based elsewhere. That’s why Etix’s tech investment is focused on making it easy for its customers to manage their facilities remotely.
In addition to the typical infrastructure monitoring and management features, its custom DCIM (data center infrastructure management) software provides browser-based CCTV monitoring and data center maps that tell local technicians who have never been to a site exactly where inside the facility they need to go if there’s a problem.

Technical room inside an Etix data center (Photo: Etix Everywhere)
The software has access management capabilities enabled by an in-house-developed hardware lock controller with a built-in camera. Additionally, the company’s sizable R&D staff are developing computer-vision technology that can detect humans in the data center, recognize faces, and track a person’s movement through a facility.
When a remote tech is dispatched to a site, they are greeted by an authentication panel with a large screen, where they have the choice of using a QR code, an RFID tag, or a password to get inside. If they don’t have any of the above, they can place a video call to the centralized Etix NOC, which is staffed around the clock.
Startup Seeds Planted by Equinix Deal
Today Etix competes with Equinix, but the Redwood City, California-based colocation giant indirectly helped the company get its start.
Around the beginning of the decade, Beyney’s other company, a hosting provider called BSO Network Solution Group, was preparing to build a data center in Paris to serve its customers. It had secured land and permits in the city’s suburbs but was eventually approached by Equinix, which was also looking to build a data center in the market.
After some negotiation, Equinix paid BSO €15 million for the permits and contracts BSO had secured for the site, where Equinix ended up building what is known today as PA4, a 40MW, 250,000-square-foot data center. Cash from the sale provided seed funding for Etix.

Fire extinguishing system inside an Etix data center (Photo: Etix Everywhere)
Splitting Data Center Ownership with Clients
One way Etix differentiates from the likes of Equinix is by offering a variety of data center leasing and ownership structures to its customers. A client can lease a facility from the provider; they can buy a turnkey facility; they can also do a mix of both.
While having a data center customer invest in a facility alongside the provider has become increasingly common in recent years, offering the “co-investment” model as a standard product is not typical. Under the model, Etix splits the facility’s ownership equally with the customer.
The deal in Brazil with Angola Cables is a good example. The telco will own the landing station, while the colocation data center that will provide access to the cable system will be co-owned by Etix and Angola, Beyney said.
Today, Beyney is chasing a second round of funding, but he’s not worried about cash. Etix’s investments around the world are fully backed by banks because of its joint ventures with customers, he said. He also pointed to Etix’s deep-pocketed investors, New York-based Tiger Infrastructure Partners and Paris-based infraVia Capital Partners, which are providing tens of millions of euros in credit facilities.
The company’s focus is on delivering its construction projects and advancing its technology.

8:51p
IT Can Learn a Thing or Two from This $100M Email Phishing Case

Brought to you by IT Pro
A 48-year-old man from Lithuania has been charged with allegedly stealing more than $100 million from two multinational internet corporations through an email phishing scheme between 2013 and 2015.
The case is notable because the two corporations are a social media company and a technology company, both of which might have been expected not to fall victim to such a scheme. Law enforcement officials did not release the companies’ names.
The defendant, Evaldas Rimasauskas, of Vilnius, Lithuania, has been charged by federal prosecutors in the U.S. Attorney’s Office for the Southern District of New York with one count of wire fraud and three counts of money laundering, according to a March 21 announcement by the U.S. Department of Justice.
Rimasauskas allegedly set up a fake company that used the same name as a real computer hardware maker in Asia; the scheme involved tricking the two companies into wiring large amounts of money to the fake company, according to prosecutors.
“From half a world away, Evaldas Rimasauskas allegedly targeted multinational internet companies and tricked their agents and employees into wiring over $100 million to overseas bank accounts under his control,” acting U.S. Attorney Joon H. Kim said in a statement. “This case should serve as a wake-up call to all companies – even the most sophisticated – that they too can be victims of phishing attacks by cyber criminals. And this arrest should serve as a warning to all cyber criminals that we will work to track them down, wherever they are, to hold them accountable.”
A spokesman in the U.S. Attorney’s office declined to comment further on the case when asked by ITPro.com.
For IT security administrators, the case is a reminder of the need for vigilance against such attacks, according to several IT analysts who spoke with ITPro.com.
“Part of the problem is a reliance on email security technology that has not kept up with the shift in threat landscape to include hackers with increasing sophistication, nation-state connections and motivation by monetized cyber intrusions,” Neil Wynne, a secure business enablement analyst with Gartner, wrote in an email reply. “Attackers are easily bypassing these traditional prevention mechanisms.”
Business email attacks have been occurring with significantly higher frequency in recent years, said Wynne. “In this type of attack, a message is sent that doesn’t have any URLs or attachments but rather uses social engineering to exploit a vulnerability in the human recipient. Ultimately, the fact remains that human beings are the most vulnerable point of any information system.”
To battle these kinds of phishing attacks, IT security teams must take a multipronged approach that spans technical, procedural and educational controls, he wrote. “Newer technology can be deployed to thwart messages like this from landing in an inbox, but it still should be combined with procedural and educational improvements as well.”
A key tool in the security arsenal to fight such attacks is a secure email gateway (SEG), wrote Wynne. It should include anti-spam and signature-based antivirus; network sandboxing and/or content disarm and reconstruction (CDR) for advanced attachment-based threat defense; and rewriting and time-of-click analysis for advanced URL-based threat defense. It also should include detection for anomalies and display name spoofing and cousin domains as part of an advanced impostor-based threat defense (like Business Email Compromise). To satisfy corporate and regulatory policy requirements, it should also include data loss prevention (DLP) and encryption capabilities for outbound content, he wrote.
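As a hedged illustration of one control Wynne mentions, cousin-domain detection, here is a minimal sketch of how such a check might look; it is an assumption for illustration, not any vendor’s implementation, and real gateways combine many more signals. It flags sender domains within a small edit distance of a protected domain.

    def edit_distance(a: str, b: str) -> int:
        """Classic Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                  # deletion
                               cur[j - 1] + 1,               # insertion
                               prev[j - 1] + (ca != cb)))    # substitution
            prev = cur
        return prev[-1]

    PROTECTED_DOMAINS = {"example.com"}  # hypothetical protected domain

    def is_cousin_domain(sender_domain: str, max_distance: int = 2) -> bool:
        """Flag domains that nearly match a protected domain but aren't it."""
        return any(
            0 < edit_distance(sender_domain, legit) <= max_distance
            for legit in PROTECTED_DOMAINS
        )

    print(is_cousin_domain("examp1e.com"))  # True: one character swapped
    print(is_cousin_domain("example.com"))  # False: exact match, not an impostor

A lookalike such as “examp1e.com” would sail past a purely signature-based filter, which is why impostor detection is listed alongside, not instead of, the other gateway capabilities.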
Rob Enderle, principal analyst at research firm Enderle Group, said recurring training for employees about recognizing phishing attacks can also help reduce the problem “but, over time, people tend to start thinking it will never happen to them, reducing its effectiveness.”
Ultimately, companies “really need to address this with systems that prevent the activity, not just attempts at behavior modification,” said Enderle.
This article originally appeared on IT Pro.

11:51p
Extreme to Buy Brocade’s Data Center Network Business

Extreme Networks, a San Jose, California-based network technology vendor, has agreed to acquire Brocade’s data center networking business for $55 million after Broadcom’s acquisition of Brocade closes.
This is the latest in a series of acquisition deals Extreme has entered into over the last six months as it fleshes out its enterprise networking portfolio to include everything from edge networking to core data center products. It acquired the wireless LAN business of Zebra Technologies last October, and earlier this month it announced that Avaya had agreed to make it the lead bidder for Avaya’s networking business in an upcoming bankruptcy sale.
Brocade, also based in San Jose, sells switches, routers, and analytics software for data center networks. In recent years the company has placed a lot of focus on software-defined networks and building networking software that can run on commodity hardware.
Brocade CEO Lloyd Carney told Data Center Knowledge in an interview in 2015 that specialized data center network gear sold by the likes of Cisco and Juniper was on its way out, gradually ceding ground to commodity x86 hardware running networking software.
Singapore-based semiconductor maker Broadcom agreed to buy Brocade last November for $5.9 billion. The companies expect the sale of Brocade’s data center business to Extreme to close within sixty days after Broadcom’s acquisition of Brocade closes.
Extreme expects the Brocade assets it is buying to generate $230 million in annual revenue.
See also: Not Really a Bromance: Broadcom Wanted Brocade’s FC Storage