Data Center Knowledge | News and analysis for the data center industry
Thursday, April 6th, 2017
Radware Discovers BrickerBot, a New Permanent DoS Botnet
Cybersecurity firm Radware has discovered a new Permanent Denial-of-Service (PDoS) botnet, designed to render the victim’s hardware useless. Also known as “phlashing,” a PDoS attack can damage a system so badly that it requires replacement or reinstallation of hardware, the company said.
Named BrickerBot, this form of attack is becoming increasingly popular, Ron Winward, Radware security evangelist, said. He announced the discovery at the Data Center World conference taking place this week at the Los Angeles Convention Center.
By exploiting security flaws or bad configurations, PDoS can destroy the firmware and/or basic system functions. It is different from its well-known cousin, the DDoS attack, which overloads systems with requests meant to saturate resources through unintended usage.
“Upon successful access to the device,” said Winward, “the PDoS bot performed a series of Linux commands that would ultimately lead to corrupted storage, followed by commands to disrupt internet connectivity, device performance, and the wiping of all files on the device.”
Over a four-day period, Radware said, it recorded 1,895 PDoS attempts launched from several locations around the world. The bot’s sole purpose was to compromise IoT devices and corrupt their storage. Those attacks were stopped on March 20, the first day the bot was discovered. However, a second variant, BrickerBot 2, spotted the same day, is still active.
Similar to the exploit vector used by the Mirai botnet, which last October DDoSed 17 data centers of the DNS provider Dyn, BrickerBot used Telnet brute force to breach victims’ devices.
Botnets are not new, but they have evolved to take advantage of growing internet usage and the proliferation of connected mobile devices, which open up many more doors for botnets to travel through and damage computer systems.
Botnets are made up of networks of internet-connected “bots,” sometimes referred to as “zombies,” which are automated processes that execute pre-defined capabilities. A “botmaster” creates a botnet with the malicious intent of controlling a vast number of hosts. Most of the time, the hosts’ owners don’t even realize they’ve been infected. These global networks are huge, and their widespread distribution makes them one of today’s biggest cybersecurity threats.
Radware recommended taking the following precautions on its website:
- Change the device’s factory default credentials
- Disable Telnet access to the device
- Use network behavioral analysis to detect anomalies in traffic and combine it with automatic signature generation for protection (a simple illustration follows this list)
- Configure intrusion protection systems to block Telnet default credentials or reset Telnet connections, and use a signature to detect the provided command sequences
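The behavioral-analysis recommendation can be approximated even without a commercial product. The following Python sketch is a minimal illustration of the idea, not Radware's tooling: it counts Telnet (port 23) connection attempts per source address over a sliding window and flags sources whose attempt rate sits far above the baseline of the other sources. The window length, threshold, and event format are assumptions made for the example.

```python
from collections import defaultdict, deque
import statistics
import time

WINDOW_SECONDS = 60          # sliding window length (assumed)
ZSCORE_THRESHOLD = 3.0       # how far above baseline counts as anomalous (assumed)

class TelnetAnomalyDetector:
    """Toy behavioral detector: flags source IPs whose Telnet connection
    rate is far above the baseline of all sources in the current window."""

    def __init__(self):
        self.events = defaultdict(deque)   # src_ip -> timestamps of port-23 attempts

    def record(self, src_ip, timestamp=None):
        ts = timestamp if timestamp is not None else time.time()
        q = self.events[src_ip]
        q.append(ts)
        # Drop attempts that have fallen out of the window.
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()

    def anomalous_sources(self):
        counts = {ip: len(q) for ip, q in self.events.items() if q}
        if len(counts) < 3:
            return []                      # not enough sources for a baseline
        mean = statistics.mean(counts.values())
        stdev = statistics.pstdev(counts.values()) or 1.0
        return [ip for ip, c in counts.items()
                if (c - mean) / stdev > ZSCORE_THRESHOLD]

# Example use: feed it parsed firewall or flow records for destination port 23,
# then alert on (or automatically block) whatever anomalous_sources() returns.
```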
Moving to Hybrid Cloud: a Health Insurer’s Journey
When Mark Ross took over the management of IT infrastructure for Concordia Plan Services two years ago, it had a single point of failure with just one firewall employed, and nearly all data and every application—except payroll and benefits—resided on in-house servers.
Sounds like a typical health care services organization, right? The need to comply with all kinds of regulations, the concern that important data might be compromised or lost, and the all-around fear of losing control have kept, and still keep, many of them from using colocation, managed services, or cloud.
However, using just one firewall to guard the entire infrastructure against viruses, intrusions, and the like, from both inside and out, is neither typical nor safe.
Today, the provider of health plans to nearly 2.3 million members of Lutheran churches is operating in a truly hybrid environment: All business applications and data reside on either a public or private cloud; Concordia has a colocation site in St. Louis, Missouri, and a backup site in Omaha, Nebraska.
Ross, who spoke at the Data Center World conference in Los Angeles Wednesday, uses managed services to support a number of systems that would either be cost-prohibitive to keep in-house or require a skillset the company’s staff did not have.
The new and improved infrastructure also contains redundant firewalls and circuits that can failover from one to the other without any interruption. Plus, instead of having multiple desktops and laptops floating around the workplace, Ross chose to populate the organization with thin clients in order to keep things as clean as possible.
“We can blast out patches to all the workstations or isolate certain ones,” he explained.
The changes Ross made to the infrastructure increased security, reduced costs, enhanced customer experience, and laid the groundwork for the future success and growth of Concordia.
Here are three key things you need to know as you transition to a hybrid environment, according to Ross:
Know why you are implementing a hybrid cloud solution. Whether this is a hardware-based technology or software-service based technology, there is always a specific reason (or number of reasons) why your organization is implementing this new technology.
Know exactly what you need to meet the goal set out by implementing a cloud solution. Once your organization knows why it is implementing a hybrid environment, you can then go about finding exactly what is needed to accomplish this.
Involve all key stakeholders. The ready-to-install nature of cloud computing leads business users to feel they don’t need to involve IT specialists and other subject matter experts such as contractual or legal specialists who will thoroughly review the contract and understand the implications. That couldn’t be further from the truth.
Cloud Computing Moves to the Edge
By Ernest Sampera, Chief Marketing Officer, vXchnge
In a time when we all expect instant access to our personal and professional networks, it’s never been more important to have the right technology and strategies in place for supporting today’s advanced users, applications, and data. From business applications like ERPs and Salesforce to the ability to post to Facebook with zero lag time, decreased latency is becoming a “must-have” as business users and consumers demand new levels of efficiency and speed.
To remain competitive and meet the growing demand for more responsive services, IT departments are leveraging edge computing. Edge computing enables companies to put the right data in the right place at the right time, supporting fast and secure access. The result is an improved client experience and, oftentimes, a valuable strategic advantage.
Businesses are Shifting to the Edge
The transition to edge computing is being driven by three rapidly evolving, and often overlapping, dynamics: the growth of IoT, the pace of technology-empowered business, and evolving user expectations.
IoT usage is poised to explode, with over 50 billion things projected to be connected to the Internet by 2020. In fact, the IoT is the most commonly cited reason for a move to an edge computing architecture, as more than 80 percent of IT teams want their data centers to be more available and reliable to keep pace with IoT demands. Edge computing enables faster real-time analysis and lower costs for managing, analyzing and storing IoT data.
Today, almost every company in every industry sector needs near-instant data to be successful. Restaurant chains need to know where their food product is coming from, when it expires, and when it will arrive on their doorstep. A mistake in the supply chain could have consequences that range from losing a loyal customer to a food safety crisis that results in food-borne illness. Retail stores need to know what customers bought yesterday, how much they spent, and what they are looking to buy next. In the financial sector, milliseconds can make a dramatic difference for high-frequency trading algorithms. And, in healthcare, real-time patient information can be the difference between life and death. These scenarios require speed and scale to support latency-sensitive, machine-to-machine data.
When it comes to consumers, expectations are high, and brands must be prepared. Edge computing allows businesses with a geographically dispersed customer base to deliver the exceptional availability consumers demand, while also enabling data to be shared across the globe instantly. It also enables businesses with remote or branch offices to replicate cloud services locally, improving performance and productivity.
Moving to the Edge
According to a recent BI Intelligence report, the manufacturing, utility, energy and transportation industries are expected to adopt edge computing first, followed by smart cities, agriculture, healthcare and retail.
Seventy-nine percent of IT teams feel that having customers closer to their content is the most important benefit of a data center. Utilizing an edge data center in markets close to customers means companies can provide better service, with less physical distance and minimal latency.
When choosing an edge data center provider, organizations should look for providers committed to standards such as ISO 27001, HIPAA, or SSAE 16 Type II, depending on their particular industry. A data center that is certified can provide peace of mind to companies and their customers that their sensitive data, and ultimately their brand, is protected.
A Need for Speed
The decision to implement an edge computing architecture is typically driven by the need for location optimization, security, and most of all, speed.
The importance of speed to every business operation cannot be overstated. It’s no longer a competitive advantage; it’s a necessity. Today’s data management systems require the most immediate information to support “in the moment” decisions that can have an impact of millions of dollars on the bottom line. By bringing processing to the edge of the network, businesses reduce latency by prioritizing processing and lightening the load on the primary network, supporting better, faster decision-making.
Location optimization reduces data processing from minutes and hours to milliseconds and microseconds, as close to real time as you can currently get. Less physical distance translates to minimal latency and greater reliability. Allowing customers in Nashville to receive the same speed and level of service as those in New York is one example of what edge computing can enable.
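To make the distance argument concrete, here is a rough back-of-the-envelope estimate (my own illustration, not from the article). Light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200 km per millisecond, so round-trip propagation delay grows with every kilometer between the user and the data center. The distances below are approximate straight-line figures, and real-world latency is higher once routing and protocol overhead are added.

```python
# Rough propagation-delay estimate: fiber carries light at ~200,000 km/s,
# so each 100 km of one-way distance adds about 1 ms of round-trip time.
FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBER_KM_PER_MS

# Approximate straight-line distances (illustrative, not actual fiber routes):
print(f"Nashville -> New York (~1,200 km): ~{round_trip_ms(1200):.1f} ms RTT")
print(f"Nashville -> local edge site (~30 km): ~{round_trip_ms(30):.2f} ms RTT")
```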
While cloud computing won’t be slowing down anytime soon, edge computing is finding its place in IT architectures. Cloud computing and edge computing provide significant, yet different, benefits, and smart IT strategists will be sure to take full advantage of both.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
This Hacker Can Talk His Way inside a Data Center
When a credit bureau hired Kevin Mitnick’s company to test its security defenses, he went straight for the crown jewels. He decided he would try to get inside the bureau’s data center, physically, on his own two feet.
After spending the second half of the nineties in prison for a number of computer crimes, he did not quit hacking. Instead, the legendary former cybercriminal put together an entire team of hackers who break into organizations’ systems using his signature combination of in-person deceit (Mitnick is a top authority on social engineering) and technological exploits, offered as a service to help clients identify security holes.
This week, on stage at the Los Angeles Convention Center during the annual Data Center World conference, Mitnick demonstrated in real time an entire list of ways one could get proprietary and personal information, using both internet search skills and sophisticated technological exploits, from personal computers as well as corporate networks.
One of the tools he’s used is a device that reads the identification code from access badges made by HID, which are common in corporate offices and data centers. Once it reads the code, the badge can be easily cloned, giving the hacker the same physical access as the badge’s owner.
Mitnick had to clone two badges to get inside this particular client’s data center: one to get into the building and the other to get inside the data hall. He used social engineering (the art of manipulating people into disclosing valuable information) to get his hands on the first one.
Since the data center was inside an office building operated by a real estate company, he called and set up an appointment with a salesperson, pretending to be interested in leasing office space. During the tour, he casually asked how the company managed access control, and the salesperson showed him her badge. He asked to take a closer look, and she handed the badge to him, at which point he held it next to a leather planner he was holding, with the badge reading device inside. He only needed to hold the badge for a second to clone it.
Once the device reads the target’s badge, all it takes is holding a blank badge over it to transfer the code.
To get to a badge that would get him inside the actual bureau data center, Mitnick needed to clone one that belonged to a person that worked in the facility. He could already freely walk around the building, so he went into a men’s restroom that was closest to the data center and waited until he could stand at a stall next to one of the data center’s employees, at which point all he needed was to briefly get his planner close to the badge hanging on the target’s belt.
Of course, access badges will eventually give way to more advanced access-control technologies, such as biometric identification and facial recognition, but it will be a while before all legacy enterprise data centers upgrade their physical security systems with the latest and greatest fingerprint and iris scanners and machine-learning technology that can recognize whether a person in a CCTV video is supposed to be on the data center floor.
While finding a technological exploit to break into a system is just a matter of time for sophisticated hackers, people are still the weakest link in any cybersecurity scheme today, Mitnick said. “Human factor — that’s usually the easiest way in.”
Attacks that exploit that human factor – like the March 2016 spear-phishing email to John Podesta, former chairman of Hillary Clinton’s presidential campaign, that eventually put sensitive campaign emails into the hands of WikiLeaks – are a favorite type of exploit among cybercriminals.
“These attacks are very common and usually the easiest way in,” Mitnick said.
Beer, Bots and Broadcasts: Companies Start Using AI in the Cloud
Dina Bass and Mark Bergen (Bloomberg) — Back in October, Deschutes Brewery Inc.’s Brian Faivre was fermenting a batch of Obsidian Stout in a massive tank. Something was amiss; the beer wasn’t fermenting at the usual temperature. Luckily, a software system triggered a warning and he fixed the problem.
“We would have had to dump an entire batch,” the brewmaster said. When beer is your bottom line, that’s a calamity.
The software that spotted the temperature anomaly is from Microsoft Corp. and it’s a new type that uses a powerful form of artificial intelligence called machine learning. What makes it potentially revolutionary is that Deschutes rented the tool over the internet from Microsoft’s cloud-computing service.
Day to day, Deschutes uses the system to decide when to stop one part of the brewing process and begin another, saving time while producing better beer, the company says.
See also: This Data Center is Designed for Deep Learning
The Bend, Oregon-based brewer is among a growing number of enterprises using new combinations of AI tools and cloud services from Microsoft, Amazon.com Inc. and Alphabet Inc.’s Google. C-SPAN is using Amazon image-recognition to automatically identify who is in the government TV programs it broadcasts. Insurance company USAA is planning to use similar technology from Google to assess damage from car accidents and floods without sending in human insurance adjusters. The American Heart Association is using Amazon voice recognition to power a chat bot registering people for a charity walk in June.
AI software used to require thousands of processors and lots of power, so only the largest technology companies and research universities could afford to use it. An early Google system cost more than $1 million and used about 1,000 computers. Deschutes has no time for such technical feats. It invests mostly in brewing tanks, not data centers. Only when Microsoft, Amazon and Google began offering AI software over the internet in recent years did these ideas seem plausible. Amazon is the public cloud leader right now, but each company has its strengths. Democratizing access to powerful AI software is the latest battleground, and could decide which tech giant emerges as the ultimate winner in a cloud infrastructure market worth $25 billion this year, according to researcher IDC.
“There’s a new generation of applications that require a lot more intense data science and machine learning. There is a race for who is going to provide the tools for that,” said Diego Oppenheimer, chief executive officer of Algorithmia Inc., a startup that runs a marketplace for algorithms that do some of the same things as Microsoft, Amazon and Google’s technology. If the tools become widespread, they could transform work as more automation lets companies get more done with the same human workforce.
See also: Machine Learning Driving Up Data Center Power Density
C-SPAN, which runs three TV stations and five web channels, previously used a combination of closed-caption transcripts and manpower to determine when a new speaker started talking and who it was. It was so time-consuming, the network only tagged about half of the events it broadcast. C-SPAN began toying with Amazon’s image-recognition cloud service the same day it launched, said Alan Cloutier, technical manager for the network’s archives.
Now the network is using it to match all speakers against a database it maintains of 99,000 government officials. C-SPAN plans to enter all the data into a system that will let users search its website for things like Bernie Sanders’s healthcare speeches or all the times Devin Nunes mentions Russia.
As companies try to better analyze, optimize and predict everything from sales cycles to product development, they are trying AI techniques like deep learning, a type of machine learning that’s produced impressive results in recent years. IDC expects spending on such cognitive systems and AI to grow 55 percent a year for the next five years. The cloud-based portion of that should grow even faster, IDC analyst David Schubmehl said.
“In the fullness of time deep learning will be one of the most popular workloads on EC2,” said Matt Wood, Amazon Web Services’ general manager for deep learning and AI, referring to its flagship cloud service, Elastic Compute Cloud.
Pinterest Inc. uses Amazon’s image-recognition service to let users take a picture of an item — say a friend’s shoes — and see similar footwear. Schools in India and Tacoma, Washington, are using Microsoft’s Azure Machine Learning to predict which students may drop out, and farmers in India are using it to figure out when to plant peanut crops, based on monsoon data. Johnson & Johnson is using Google’s Jobs machine-learning algorithm to comb through candidates’ skills, preferences, seniority and location to match job seekers to the right roles.
Google is late to the public cloud business and is using its AI experience and massive computational resources to catch up. A new “Advanced Solutions Lab” lets outside companies participate in training sessions with machine-learning experts that Google runs for its own staff. USAA was first to participate, tapping Google engineers to help construct software for the financial-services company. Heather Cox, USAA’s chief technology officer, plans a multi-year deal with Google.
The three leaders in the public cloud today have also made capabilities like speech and image recognition available to customers who can design apps that hook into these AI features — Microsoft offers 25 different ones.
“You can build software that is cognitive — that can sense emotion and understand your intent, recognize speech or what’s in an image — and we provide all of that in the cloud so customers can use it as part of their software,” said Microsoft vice president Joseph Sirosh.
Amazon, in November, introduced similar tools. Rekognition tells users what’s in an image, Polly converts text to human-like speech and Lex — based on the company’s popular Alexa service — uses speech and text recognition for building conversational bots. It plans more this year.
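As an illustration of how these cloud AI services are consumed, the snippet below uses the AWS SDK for Python (boto3) to send an image stored in S3 to Rekognition's label-detection API and print what it finds. This is a generic sketch rather than C-SPAN's actual pipeline; the bucket and object names are placeholders, and the call assumes AWS credentials are already configured.

```python
import boto3

# Generic Rekognition example (placeholder bucket/key; credentials are assumed
# to be configured via the usual AWS environment or config files).
rekognition = boto3.client("rekognition", region_name="us-east-1")

response = rekognition.detect_labels(
    Image={"S3Object": {"Bucket": "example-media-bucket", "Name": "frame-001.jpg"}},
    MaxLabels=10,
    MinConfidence=80.0,
)

for label in response["Labels"]:
    print(f'{label["Name"]}: {label["Confidence"]:.1f}%')
```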
Chris Nicholson, CEO of AI company Skymind Inc., isn’t sure how large the market really is for AI in the cloud. The massive data sets some companies want to use are still mostly stored in house and it’s expensive and time-consuming to move them to the cloud. It’s easier to bring the AI algorithms to the data than the other way round, he said.
Amazon’s Wood disagrees, noting healthy demand for the company’s Snowball appliance for transferring large amounts of information to its data centers. Interest was so high that in November Amazon introduced an 18-wheeler truck called Snowmobile that can move 100 petabytes of data.
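The scale argument behind a truck-sized transfer appliance is easy to check with rough arithmetic (my own estimate, not Amazon's): moving 100 petabytes over even a dedicated 10 Gbps link, fully utilized, takes on the order of years.

```python
# Back-of-the-envelope: how long does 100 PB take over a network link?
TOTAL_BITS = 100 * 8 * 10**15             # 100 PB expressed in bits
LINK_BPS = 10 * 10**9                     # a dedicated 10 Gbps link, fully utilized

seconds = TOTAL_BITS / LINK_BPS
print(f"~{seconds / 86400:.0f} days, or ~{seconds / (86400 * 365):.1f} years")
# Roughly 926 days (~2.5 years) at best-case line rate, which is why
# physically shipping the data can be faster for transfers this large.
```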
Microsoft’s Sirosh said the cloud can be powerful for companies that don’t want to invest in the processing power to crunch the data needed for AI-based apps.
Take Norwegian power company eSmart Systems AS, which developed drones that photograph power lines. The company wrote its own algorithm to scan the images for locations that need repair. But it rents the massive computing power needed to run the software from Microsoft’s Azure cloud service, CEO Knut Johansen said.
As the market grows and competition intensifies, each vendor will play to their strengths.
“Google has the most credibility based on tools they have; Microsoft is the one that will actually be able to convince the enterprises to do it; and Amazon has the advantage in that most corporate data in the cloud is in AWS,” said Algorithmia’s Oppenheimer. “It’s anybody’s game.”
Equinix Exec: We Spent $17B on Data Centers, but Cloud Giants Spend Much More
Equinix, one of the world’s two largest data center providers, has spent $17 billion over its 18 years in business to expand its global data center empire, including construction and acquisitions, but cloud giants like Microsoft, Google, and Amazon each easily beat that number in two years’ time.
That’s according to Peter Ferris, chief evangelist at Equinix, who made the comparison during his keynote at Data Center World Wednesday. He offered the numbers to illustrate just how real the shift of IT infrastructure from on-premise data centers to the cloud is.
“Google, and Microsoft, and Amazon are each spending more than $10 billion dollars a year at this point building data centers,” he said. “Either they’re all crazy, or this transition is real and it will continue.”
The figure is a reasonable approximation. Cloud providers don’t disclose their exact data center spend publicly, but at least Google and Microsoft execs have recently shared approximate figures, and they’re similar.
Google’s trailing three-year capital investment is about $26.6 billion, Urs Hölzle, the company’s senior VP for technical infrastructure, said in his keynote address at the Google Cloud Next conference in San Francisco in March. The bulk of that investment is in infrastructure.
It’s not unusual for Microsoft to spend more than $2 billion on infrastructure in a single quarter.
They’re spending this money to bulk up data center capacity and network bandwidth to absorb enterprise application workloads, as companies increasingly rethink the necessity of owning and operating their own data centers, and Ferris has a front-row view of this shift.
Like many other data center service providers, Equinix has positioned its facilities as places where companies can connect their corporate networks directly to the networks of cloud giants. This option is more secure and provides better performance than connecting to the cloud through the public internet.
Reducing operational cost isn’t the only reason companies are shifting to the cloud. Every industry is undergoing a shift to digital tools and products, and the distributed nature of hyper-scale clouds makes it easier to get those digital tools and products to their end users.
Companies of all types have to rethink both their infrastructure and their business strategies in the digital world, Ferris said. “No matter what business you’re in, it’s conceivable that you will be disrupted.”
More Data Center World coverage:
This Hacker Can Talk His Way inside a Data Center
Radware Discovers BrickerBot, a New Permanent DoS Botnet
Microsoft: OCP Forced Hardware Makers to Rethink Interoperability
Finding the Sweet Spot for Your Data Center
Automation: Not Just for DevOps
Three Reasons Network Security Policy Management is a Big Deal
Guarding the Guards Themselves—The Truth Behind Security Devices
By Dennis Cox, Chief Product Officer, Ixia
We trust numerous security devices to protect our networks, and they have an exceedingly difficult job. The number of applications that now make up a network has become ridiculously high, and who knows what the latest cloud-based applications will bring? In today’s IT environment, security solutions still have to solve the security issues of the past and present while also being capable of predicting future issues. That is such a high bar to achieve — it’s nearly unachievable — but these devices can’t be blamed.
The answer, of course, is layering multiple solutions to ensure the widest breadth of protection possible. Unfortunately, this too has its own set of issues, such as the cost of managing multiple solutions. So what is the best way to make sure you pick the best device for your network?
All about the apps
At the end of the day, your company has the goal (most likely) of making money. Anything that isn’t a key part of the core business or doesn’t help deliver on this goal must be treated as secondary. With this in mind, you can outline a list of the top three applications in your network to better understand what those key facets are. For example, I would list my top three applications as Oracle for finance, salesforce.com for our sales process, and Office 365 for our messaging (e.g., email, Skype). There are many more, but those three are the cornerstones of the business — at least for the sake of this article. What are your top three? Write them down! You need to understand them, and don’t forget to keep them updated and to apply the vendors’ security updates.
Armed with this list, you can also ask the vendor(s) of your network or security devices how they perform with those applications. Let’s face it: if the vendor hasn’t tested with Oracle, do you want to use them? What about salesforce.com? Over 100,000 companies use salesforce.com; if your network equipment vendors aren’t testing with it, you might want to switch vendors. Ask them to share the results. Perhaps the best vendor is the one that shows you that they know how each application impacts their device and how it will perform.
The sum of all parts
Network security devices like firewalls, routers and switches are the sum of the parts that make them up. When you deploy Java in your network, you most likely check that it is up to date or get an alert when a new vulnerability is announced. This is pretty standard — Patch Tuesdays, automated updates and more all alert users to possible risk. But what about the components that make up your firewall? Would you know if the network processor or switch chip in that platform became an issue?
Rarely are those alerts made; in fact, rarely does the vendor that sold you the product even know about the vulnerability. This is especially true when we look at hardware. It is important to understand that security devices leverage third-party chips from all over the world, and those chips can house their own vulnerabilities.
To illustrate, consider what happens when you need a storage device. An engineer in operations or on the vendor’s hardware team picks one. He or she publishes the BOM (bill of materials), and the job is done. If a part goes end of sale or end of life, the BOM is updated with a replacement part. Other than that, the job is done. However, if a vulnerability is found in a switch chip, who updates the software on the chip for a customer? Generally nobody. That vulnerability gets to live for a very long time. To protect your organization, ask for a list of the components (at least the data path components) that make up the network devices you are using and make sure to note any possible issues. This also allows you to add the component vendors to your vulnerability alerts.
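One lightweight way to act on that advice is to keep the data-path component list in machine-readable form and cross-check it against the advisories you track. The sketch below is purely illustrative: the device names, part numbers, advisory entries, and matching logic are all invented for the example, and a real workflow would pull advisories from vendor bulletins or a CVE feed rather than a hard-coded dictionary.

```python
# Toy cross-check of device BOMs against tracked advisories.
# All part numbers and advisories here are fictional placeholders.
bom = {
    "edge-firewall-01": ["ACME-NPU-7000", "GenericSwitchChip-X2", "FlashCtrl-9"],
    "core-switch-03":   ["GenericSwitchChip-X2", "PHY-1G-rev4"],
}

advisories = {
    "GenericSwitchChip-X2": "2017-03: crafted L2 frames can wedge the forwarding engine",
    "FlashCtrl-9":          "2016-11: firmware update path lacks signature check",
}

def affected_devices(bom, advisories):
    """Return {device: [(component, advisory), ...]} for components with known issues."""
    hits = {}
    for device, components in bom.items():
        matches = [(c, advisories[c]) for c in components if c in advisories]
        if matches:
            hits[device] = matches
    return hits

for device, issues in affected_devices(bom, advisories).items():
    for component, note in issues:
        print(f"{device}: {component} -> {note}")
```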
Additionally, spread the love around a bit and make sure you don’t give an attacker a clear path. You don’t want every device using the same Ethernet chip from end to end, which is more common than you can imagine. It would mean a Layer 1 or 2 DDoS attack could really ruin your day.
Ultimately, you want your organization to be in a place where vulnerability won’t be your undoing because a security device that was supposed to protect you was the vector of compromise. Someone has to guard the guardians.
Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.