Data Center Knowledge | News and analysis for the data center industry
Wednesday, September 3rd, 2014
3:52p |
Top 10 Ways In-Memory Computing Can Revitalize Tech at Federal Agencies
Chris Steel is Chief Solutions Architect for Software AG Government Solutions, a leading software provider that helps federal agencies integrate and dramatically enhance the speed and scalability of their IT systems.
Until recently, it seemed that in-memory computing platforms were leveraged only by the most technologically savvy organizations. However, the value has become so obvious that many organizations, especially budget-strapped federal agencies, are racing toward adoption.
With IT experts agreeing that RAM is the new disk, in-memory computing is being seen as the secret to cost-effective modernization. As a result, more and more organizations are moving data into machine memory and out of disk-based stores and remote relational databases.
While still more prevalent in the commercial sector, the public sector is rapidly learning that if data resides right where it’s used – in the core processing unit where the application runs – several benefits arise.
Below are the top 10 reasons why federal agencies are embracing in-memory computing:
1. Blazingly fast speed: In-memory data is accessed in microseconds. That's real-time access to critical data, at least 100 times faster than retrieving data from a disk-based store accessed across the network.
2. Higher throughput: Significantly lower latency leads to dramatically higher throughput. Agencies that run high-volume transactions can use in-memory data to boost processing capacity without adding computing power.
3. Real-time processing: For some applications, such as fraud detection or network monitoring, delays of seconds or even milliseconds don't cut it. Acceptable performance requires real-time data access for ultra-fast processing.
4. Accelerated analytics: Why wait hours for a report of days-old data? With in-memory data, you can run analytics in real time for faster decision-making based on up-to-the-minute information.
5. Plunging memory prices: The past decade has seen a precipitous drop in the cost of RAM. When you can buy a 96GB server for less than $5,000, storing data in memory makes good fiscal and technical sense.
6. RAM-packed servers: Hardware makers are adding more memory to their boxes. Today's terabyte servers are sized to hold in memory the torrent of data coming from mobile devices, websites, sensors and other sources.
7. In-memory data store: An in-memory store can act as a central point of coordination, aggregating, distributing and providing instant access to your Big Data at memory speeds.
8. Easy for developers: There is no simpler way to store data than in its native format in memory. Most in-memory solutions are no longer database-specific. No complex APIs, libraries or interfaces are typically required, and there is no overhead from conversion into a relational or columnar format. There is even an enterprise version of Ehcache, Java's de facto standard caching library (see the sketch after this list).
9. Expected by users: In-memory data satisfies the "need-it-now" demands of consumers and business users, whether that's speedier searches, faster Web services or immediate access to more relevant information.
10. Game-changing for mission-critical applications and the agency: In-memory data creates unprecedented opportunities for innovation. Government organizations can transform how they access, analyze and act on data, building new capabilities that deliver top- and bottom-line benefits that directly support the mission. Get There Faster!
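To make item 8 concrete, here is a minimal sketch of writing and reading a cached object with the open source Ehcache 2.x Java API mentioned above. The cache name, key and value are invented for illustration, and the snippet simply assumes the Ehcache 2.x library is on the classpath; it is not tied to any vendor's enterprise edition.

```java
import net.sf.ehcache.Cache;
import net.sf.ehcache.CacheManager;
import net.sf.ehcache.Element;

public class InMemoryExample {
    public static void main(String[] args) {
        // Create a cache manager and an in-memory cache (the name is hypothetical).
        CacheManager manager = CacheManager.getInstance();
        manager.addCache("caseRecords");
        Cache cache = manager.getCache("caseRecords");

        // Store the value in its native Java form -- no relational or columnar conversion.
        cache.put(new Element("case-1001", "Application approved"));

        // Reads come straight from RAM on the machine running the application.
        Element hit = cache.get("case-1001");
        if (hit != null) {
            System.out.println(hit.getObjectValue());
        }

        manager.shutdown();
    }
}
```

The point is the lack of ceremony: a key, a value and a put/get pair, with the data living in the same JVM that runs the application.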
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:00p |
Compuware Going Private in $2.5B Deal
Compuware, the giant that develops software for large-scale enterprise IT, is going private through a purchase by private equity firm Thoma Bravo. The deal is expected to close in early 2015.
Thoma Bravo will acquire the Detroit company for $2.5 billion ($10.92 per share) in cash and stock. This is slightly lower than the $11 per share previously offered by hedge fund Elliott Management, which has been trying to acquire the company for about 18 months.
Compuware was founded in 1979 and its roots are in mainframes. The company has evolved over the years, particularly in terms of application performance management.
It acquired Gomez in 2009 for $295 million, extending its monitoring capabilities from data center to end-user. The largest Michigan-based tech company, it has been facing investor pressure to cut costs and reduce staff.
Large public technology companies continue to go private under investor pressure. The biggest example is Dell, whose shareholders approved a $25 billion buyout last year.
Compuware has been on a business optimization trajectory, having sold three of its non-core business units for $160 million this year.
Going private allows a company to transition faster and focus on growth.
Elliott Management issued a statement of support for the Thoma Bravo deal. The hedge fund was also involved in pushing BMC and Novell – two companies it held stakes in – to go private.
It participated in the privatization of Dell and proposed earlier this year that storage giant EMC spin off its 80-percent stake in VMware.
Elliott also recently offered $3 billion to acquire another application performance player called Riverbed.
5:21p |
Partners Pushing Data Center Space at Large Jersey Carrier Hotel
Owners of a 15-story telco and data center building in Jersey City, New Jersey, have restructured the team of companies managing the building’s data center space and providing services there.
Real estate developer LeFrak Commercial, telco consultants Hidalgo Communications and communications service provider Atlas Communications Technology will fortify and market the 100,000 square feet of data center floor in the building at 111 Town Square Place, also known as 111 Pavonia.
The carrier hotel, opened in the late 80s, was the first office building in Jersey City’s Newport community. It is fed by two 12-megawatt utility connections and offers low-latency connectivity to the region’s financial exchange infrastructure, according to LeFrak, the property’s managing agent.
LeFrak has chosen Atlas to provide fully managed service solutions in the building. The services will include engineering, design and support for tenants. The third partner, Hidalgo, will sell data center space.
Anthony Hidalgo, the company’s CEO, said, “We are excited to add 111TSP to our portfolio of data centers and will be rolling out a channel partner program very different from anything else that exists today.”
Another customer and partner of Hidalgo’s is Seattle-based wholesale data center giant Sabey Data Centers. Hidalgo provides turnkey communications solutions for Sabey and its clients and has been helping Sabey close deals with carriers for its massive Intergate.Manhattan building.
5:30p |
Enabling Alternating Phase Power Distribution at the Data Center Rack Layer
The modern data center is a complex engine processing the world’s most demanding workloads. Industry and user trends indicate that reliance on data center resources will only continue to grow, which means that requirements around efficiency and rack-level density will grow as well.
Through all of this, data center operators are continuously tasked with running a more efficient, more tightly controlled data center platform. But how can you accomplish this with so many new connections and high resource demands? How can you optimize the delivery of power to your most critical applications?
The demand for more power in the computer cabinet has led many data centers to upgrade to three-phase power distribution. Proper three-phase power distribution has traditionally meant dividing power into multiple branches within the rack PDU (Power Distribution Unit).
In this whitepaper from Server Technology we explore the advantages of a new, less common approach to PDU design: alternating each phase on a per-receptacle basis instead of a per-branch basis.
Here’s something to think about: the principles of three-phase power are not always well understood by the installer, whose only task is to power up the equipment being installed in the computer rack. Load balancing (matching the current draw on each phase) is critical in these applications for multiple reasons:
- If the three phases are not balanced, heat is generated, resulting in higher cooling costs
- Unbalanced loads lead to inefficiency and higher power bills
- High loads on a single phase mean a greater chance of tripping either a PDU or upstream breaker and losing power at the rack
Good practice in the data center is to install rack-mounted equipment so that the current draw is similar on each branch. This is relatively easy if the rack is filled with only one type of device. Unfortunately, this is often not the case. Mixed devices such as switches, storage devices, blade servers and different brands and types of 1U/2U/3U servers can create a crazy mesh of power cables in the back of the rack, which can inhibit airflow and add to the heat problems mentioned above.
So what can you do to improve power delivery? A solution to these issues is to use an alternating phase PDU. These specially designed PDUs alternate the phased power on a per-outlet basis instead of a per-branch basis.
Three-phase power distribution at the rack level has traditionally meant that power was divided into separate branches, and load balancing and cabling of these older PDU designs can be difficult.
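As a rough illustration of why alternating phases per outlet helps with load balancing, the sketch below compares per-phase current totals for a hypothetical 12-outlet rack PDU wired two ways: four consecutive outlets per branch versus phases alternated outlet by outlet. The outlet current values and the layout are invented for this example and are not taken from the Server Technology whitepaper.

```java
import java.util.Arrays;

public class PhaseBalanceSketch {
    // Hypothetical current draw (amps) of 12 devices, in the order they are
    // plugged in from outlet 1 upward: four heavy, four medium, four light.
    static final double[] OUTLET_AMPS = {4.0, 4.0, 4.0, 4.0, 1.5, 1.5, 1.5, 1.5, 0.8, 0.8, 0.8, 0.8};

    public static void main(String[] args) {
        // Per-branch PDU: outlets 1-4 on phase A, 5-8 on phase B, 9-12 on phase C.
        double[] perBranch = new double[3];
        for (int i = 0; i < OUTLET_AMPS.length; i++) {
            perBranch[i / 4] += OUTLET_AMPS[i];
        }

        // Alternating-phase PDU: outlet 1 on A, 2 on B, 3 on C, 4 on A, and so on.
        double[] alternating = new double[3];
        for (int i = 0; i < OUTLET_AMPS.length; i++) {
            alternating[i % 3] += OUTLET_AMPS[i];
        }

        System.out.println("Per-branch loads (A, B, C):  " + Arrays.toString(perBranch));
        System.out.println("Alternating loads (A, B, C): " + Arrays.toString(alternating));
        System.out.println("Per-branch spread:  " + spread(perBranch) + " amps");
        System.out.println("Alternating spread: " + spread(alternating) + " amps");
    }

    // Difference between the most and least loaded phases; smaller means better balanced.
    static double spread(double[] phases) {
        return Arrays.stream(phases).max().getAsDouble() - Arrays.stream(phases).min().getAsDouble();
    }
}
```

With the heaviest devices plugged into consecutive outlets, the per-branch layout piles them all onto one phase (16 amps versus 3.2 amps), while the alternating layout spreads them across all three phases and cuts the phase-to-phase spread from 12.8 amps to 3.2 amps in this made-up scenario.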
Download this whitepaper today to see how power distribution units are implementing alternating phased power on a per-receptacle basis. You’ll also quickly see how this provides tangible benefits in the form of simplified cabling, better airflow, better load balancing and greater efficiency – all of which ultimately lower the operational expenses of the data center.
5:47p |
Data Center and Cogen Plant Project’s Developer Weighing Maryland as Alternative to Delaware
The Data Centers, the company whose project to build a data center and an adjacent power plant in Delaware recently fell through due to public outrage, is now looking at alternative locations in Maryland.
The Baltimore Sun reports the company is looking at Cecil County and other places in the state, as well as other locations in Delaware and five other states. The company will need approval from the state’s Public Service Commission if it decides on a Maryland site.
TDC has an interesting project largely because of a planned cogeneration plant on site. The plan to build in Newark, Delaware, faced opposition from the surrounding town because of the plant.
Now, Cecil County in Maryland looks like the frontrunner, with Cecil officials open to the project. “Maryland promptly recognized the value of the jobs and tax revenues of the project,” said Bruce Myatt, CTO and executive vice president of infrastructure at TDC. “Cecil County has expressed a continued interest in the project over the last two years.”
The project already has the support of Maryland Governor Martin O’Malley, and it will remain the same at its new home.
TDC had originally planned to build a large data center supported by a 279-megawatt energy generation facility featuring combined heat and power (CHP) that would allow it to operate “off the grid” on a property owned by the University of Delaware. While heralded by some as a forward-looking data center cogeneration project, it also met sizable resistance from members of the local community.
The university terminated its lease with TDC on July 10th, following intense debate. The property is now being redeveloped as a science, tech and research campus.
“We have proven that the very concept of the ‘done deal’ is now dead,” the opposition group said in a statement following the terminated lease. “The community’s voice is powerful in shaping our future. The significance of this effort extends beyond the power plant and will have a positive impact on our community for years to come. It also serves as a powerful example for other communities facing similar challenges.”
The ordeal was an example of NIMBY (Not In My Backyard) in practice. Data centers have generally avoided NIMBY concerns, since they are seen as good for communities and as helping local tech scenes thrive.
The Data Centers is thinking outside the box with its cogeneration plant, and doing things differently proved to be a hot topic of debate in Newark. The developers expected their project to be welcomed as a data center. But the neighbors looked at the plans and saw a power plant.
While there was vocal opposition from the surrounding town in Newark, there was also support. The project is bound to bring a big investment to whichever community it ends up being built in.
8:02p |
Google Enterprise Rebrands as “Google for Work”
Google has rebranded its Google Enterprise collection of online tools as Google for Work. A decade ago the tech giant began offering business versions of its popular consumer products. It initially launched Gmail and search enterprise offerings and later expanded into its business application suite.
Under its new, simpler and less stodgy moniker, the group is charged with turning more existing consumer products into business ones. The name change better reflects the core audience and makes the offering seem more accessible to small and medium businesses.
Google for Work includes branded email, calendar, video calls, cloud storage, document editing and more. The company continues to build out its hosted application offerings.
Businesses continue to see consumer tech make its way into the workplace, often through individual employees introducing an application like Evernote into the fold. Google for Work focuses on secure mobile device usage rather than stemming the mobile trend.
The primary competitor is Microsoft’s hosted flavor of Office, Office 365. Open-Xchange is a large mass-market hosting partner with similar offerings, and Zoho also plays in the space. Hosted applications address an increasingly mobile workforce by providing access from anywhere and on any device.
“Work today is very different from 10 years ago,” wrote Eric Schmidt, Google’s executive chairman. “Cloud computing, once a new idea, is abundantly available, and collaboration is possible across offices, cities, countries and continents. Ideas can go from prototype to development to launch in a matter of days. Working from a computer, tablet or phone is no longer just a trend—it’s a reality.”
9:41p |
Infrastructure Change Brings Facebook Down
Facebook has issued a statement attributing the social network’s brief outage on Wednesday afternoon to an error that occurred during an infrastructure configuration change.
“Earlier today we encountered an error while making an infrastructure configuration change that briefly made it difficult for people to access Facebook,” a company spokesperson said in an emailed statement. “We immediately discovered the issue and fixed it, and everyone should now be able to connect.”
The outage lasted a little longer than 10 minutes, which was enough to ignite a flurry of sarcastic Tweets tagged #Facebookdown. The website’s other most recent outages happened in August and in June.
Facebook has a robust data center infrastructure, which it designs almost entirely using in-house engineering talent. It has data centers on both coasts of the U.S., as well as in Sweden. It also has some capacity deployed in the U.S. with wholesale data center providers.
Companies with “web-scale” data center infrastructure, such as Facebook, rely on software to make their IT systems resilient. This approach is different from the traditional enterprise approach of building layers of redundancy in mechanical and electrical data center infrastructure.
In a typical web-scale data center, a server cluster is put together in a way that ensures the cluster as a whole can maintain the workload when individual nodes within it go down.
Still, even this approach obviously does not ensure 100-percent application uptime. Online services like Facebook and Twitter do go down from time to time.
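The following is a toy sketch, not Facebook's actual system, of what that software-level resilience looks like in practice: requests are routed only to cluster nodes that currently pass a health check, so the loss of an individual node costs capacity rather than availability. The Node interface and the health check here are hypothetical.

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.stream.Collectors;

public class ClusterRouter {
    // A minimal, hypothetical view of a cluster member.
    interface Node {
        boolean isHealthy();            // e.g. the result of a periodic health probe
        String handle(String request);  // do the actual work
    }

    private final List<Node> cluster;

    public ClusterRouter(List<Node> cluster) {
        this.cluster = cluster;
    }

    public String route(String request) {
        // Keep only the nodes that currently pass their health check.
        List<Node> healthy = cluster.stream()
                .filter(Node::isHealthy)
                .collect(Collectors.toList());
        if (healthy.isEmpty()) {
            throw new IllegalStateException("No healthy nodes available");
        }
        // Pick one at random; a production system would use smarter load balancing.
        Node target = healthy.get(ThreadLocalRandom.current().nextInt(healthy.size()));
        return target.handle(request);
    }
}
```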
Facebook apologized for any inconvenience the issue may have caused and promised to investigate it thoroughly to prevent it from happening again in the future.
10:00p |
Google Building Quantum Computing Processors
Google has teamed up with a group of university scientists in California in an effort to build quantum information processors.
Instead of the binary “on” and “off” states of a transistor in processors today, a quantum chip can theoretically use a transistor equivalent that takes advantage of the unique ability of subatomic-size units called “qubits” to be in multiple states simultaneously.
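For reference, the standard way to write that ability down (generic quantum notation, not specific to Google's or D-Wave's hardware) is as a superposition of the two basis states:

$$|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle, \qquad |\alpha|^2 + |\beta|^2 = 1,$$

where a measurement returns 0 with probability $|\alpha|^2$ and 1 with probability $|\beta|^2$, and n such qubits can represent a superposition over all 2^n classical bit strings at once.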
Google has a research team dedicated to Quantum Artificial Intelligence. On Tuesday, the company announced that the team kicked off a hardware initiative to design and build quantum processors based on superconducting electronics.
The team has brought on board John Martinis, a University of California, Santa Barbara, professor, and his team of researchers collectively referred to as the Martinis Group. They are an award-winning group that has been focusing on development of high-fidelity superconducting quantum electronic components.
“With an integrated hardware group the Quantum AI team will now be able to implement and test new designs for quantum optimization and inference processors based on recent theoretical insights as well as our learnings from the D-Wave quantum annealing architecture,” Google representatives wrote in a blog post announcing the hardware initiative.
D-Wave is a Canadian company that has reportedly built the first commercially available quantum computer. Called D-Wave One, it runs on a 128-qubit chipset.
Google’s Quantum AI lab, launched last year, is based on D-Wave Two, the second-generation quantum computer powered by a 512-qubit chipset. The U.S. National Aeronautics and Space Administration (NASA) and the Universities Space Research Association participated in launching the lab.
Google said it would continue working with D-Wave scientists, experimenting with the company’s Vesuvius system at the NASA Ames Research Center in Mountain View, California. The company plans to upgrade Vesuvius to a 1,000-qubit processor, codenamed “Washington.”
The promise of quantum computing is to make computers vastly more powerful. The problem, however, is that even the best quantum-level hardware available today is unreliable.
The Martinis Group has come up with a way to arrange qubits in a way that makes the array of qubits a lot more stable than has been previously possible. The group’s paper describing the results was published in the scientific journal Nature in April.
Austin Fowler, a UCSB physicist who has laid much of the theoretical foundation underneath the Martinis Group’s work, said there were still more issues that needed to be resolved for quantum computing to gain commercial traction.
The error rate in the system his colleagues have proposed is still too high for the technology to become commercially viable, he said in a statement, explaining that the error rate should be below one percent. “If we can get one order of magnitude lower … our qubits could become commercially viable,” he was quoted as saying.
“There are more frequencies to worry about, and it’s certainly true that it’s more complex. However, the physics is no different.”