Data Center Knowledge | News and analysis for the data center industry
Wednesday, October 9th, 2013
11:47a | QTS Realty Goes Public Today

A look inside a QTS (Quality Technology Services) data center. QTS Realty plans to go public today on the New York Stock Exchange. (Photo: QTS)
Data center developer QTS Realty Trust is going public today, hoping to raise between $365 million and $422 million through an initial public offering on the New York Stock Exchange. The QTS offering will provide an indication of Wall Street’s appetite for data center IPOs, and be of keen interest to other companies in the sector that have filed to go public, including IO and hosting industry consolidator Endurance International.
In a regulatory filing last week, QTS Realty said it expects to sell 12.25 million shares of common stock at a price between $27 and $30 per share. At the midpoint of the proposed range, QTS Realty Trust would have a market value of about $995 million. The company plans to convert to a real estate investment trust (REIT) and list on the NYSE under the symbol QTS.
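For readers who want to sanity-check these figures, here is a minimal Python sketch of the arithmetic. The 12.25 million primary shares and the $27-$30 range are the filing figures reported above; the total share count is simply inferred from the roughly $995 million midpoint valuation and is not a figure from the filing itself.

```python
# Back-of-the-envelope check of the reported QTS IPO figures.
# Shares offered and price range are from the filing as reported above;
# the implied total share count is an inference, not a filing figure.

shares_offered = 12_250_000
price_low, price_high = 27.0, 30.0
midpoint_price = (price_low + price_high) / 2            # $28.50

gross_low = shares_offered * price_low                   # gross proceeds at the low end
gross_high = shares_offered * price_high                 # gross proceeds at the high end

reported_midpoint_valuation = 995_000_000
implied_total_shares = reported_midpoint_valuation / midpoint_price

print(f"Gross proceeds from the base offering: ${gross_low/1e6:.1f}M - ${gross_high/1e6:.1f}M")
print(f"Implied total shares outstanding at the midpoint: {implied_total_shares/1e6:.1f}M")
```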
The QTS offering arrives as Wall Street is receptive to IPOs. There have been 159 IPOs thus far in 2013, and 110 of those companies have seen their share prices move higher after going public, according to IPO Scoop.
The offering by QTS is a key step in the company’s ambitious growth strategy, which has focused on buying massive industrial facilities and adapting them for data center use. The company hopes to expand seven of its data centers across the country, investing up to $277 million to add more than 312,000 square feet of customer space in key markets over the next two years.
QTS operates 10 data centers in seven states offering 714,000 square feet of raised floor data center space and 390 megawatts of available utility power. The company reported revenue of $84.4 million in the first half of 2013, with net income of $7.1 million and funds from operations (FFO, a key benchmark for REITs) of $26.7 million. In 2012, the company had revenues of $157.6 million and FFO of $45.2 million.
12:30p | Cable Pathways: A Data Center Design Guide and Best Practices

Scott VanDenBerg is Technical Sales Specialist with Optical Cable Corporation.
Everyone has heard the saying: it’s the little things that make the biggest impact. This holds true especially when designing a data center. There are many important aspects to consider—from power and cooling requirements, to servers and hardware. Good cable pathway designers know that multiple products must work together to ensure a successful pathway from point A to point B. Let’s talk about a few of the key elements.
Pathways
Pathways allow the placement of data center trunk cables and cross-connect cables between racks and cabinets. Both overhead and underfloor pathways should be designed to support the weight of cables in the initial installation and to facilitate the addition of future cables. Planning for 90-degree bends, waterfall dropouts and other vertical support methods should be incorporated in the initial design to allow routing of cable without damage. Pathway products come in a number of different styles:
- A ladder rack is made of tubular steel and comes in sizes from 6” to 36” wide. The installation of a ladder rack is simple and requires little trade experience. Ladder racks come with many accessories such as 90-degree bends, waterfalls and cable retaining posts. These accessories allow the routing of cable without damage.
- A cable tray is a ladder rack with sides and may be steel, aluminum or fiberglass. These sides allow for a greater amount of cable to be supported. The maximum loading depth recognized by the NEC is 6”. The cable tray is designed to support both electrical and data cables and is typically more robust than a ladder rack. It requires a pre-design effort because it is not flexible to work with in the field.
- A basket tray is a cable tray designed for light duty applications. The basket tray is lightweight and easy to install; however, a certain level of experience is needed to install it properly. Many of the accessories that accompany ladder racks also accompany basket trays, to ensure proper bend radiuses are maintained and a proper transition to the equipment rack.
- An underfloor cable tray is a product used primarily in data centers. The concept is the same as the overhead support apparatus. However, when using underfloor cable tray systems, the air space may be a plenum air space, so all cables and patch cords would need to be plenum-rated.
Design and Installation Considerations for Cable Support Products
In order to support existing infrastructure, and plan for future growth, there are a number of key considerations that should be made throughout the design process and installation. Some important things to keep in mind include:
- Installation of overhead and underfloor supports should be done in a matrix type fashion that allows cables to be routed from point to point anywhere in the data center.
- Grounding and bonding is very important when installing any cabling support product. Be sure that all racks, cabinets, and pathway support products are properly bonded and the system is grounded.
- Allow room for future growth. All cable tray and ladder rack should be sized to accommodate at least 50% growth after the initial install (a simple fill check appears in the sketch after this list).
- Be very careful about stressing the cable. Always use sweeping 90-degree bends when transitioning between the pathway support and the racks or around corners.
- Be sure the heaviest cable is on the bottom of the tray or separated from the lighter cables. This will prevent the heavier cable from stressing the lightweight cables.
- Separate the copper cables from the fiber cables if possible.
- Avoid mounting any cable components in locations that block access to other equipment inside and outside the racks.
- Avoid routing pathways with copper cables near equipment that may generate high levels of electromagnetic interference. Avoid areas around power cords, fluorescent lights, building electrical cables and fire prevention components.
- Care must be used in the engineering process when choosing patch cable and pre-terminated fiber cable lengths.
- When utilizing pre-terminated cables, slack will always be a potential problem. If it is allowed to build up, it creates many problems such as clogged pathways, excessive weight overloading the supports, and reduced airflow.
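To make the growth and weight guidance above concrete, here is a minimal Python sketch of a tray fill check. The cable outside diameters, cable weights and tray width are illustrative assumptions rather than values from any standard or vendor; the 50 percent growth allowance and the 6-inch maximum loading depth are the guidelines discussed above, with the loading depth used here as a simplified stand-in for usable tray depth.

```python
import math

# Illustrative fill check for an overhead cable tray.
# Tray width, cable ODs and weights below are assumed example values,
# not figures from the NEC or any vendor datasheet.

tray_width_in = 12.0
max_loading_depth_in = 6.0          # maximum loading depth recognized by the NEC (per the article)
usable_area_sqin = tray_width_in * max_loading_depth_in

def cable_area_sqin(od_in: float) -> float:
    """Cross-sectional area of a single round cable."""
    return math.pi * (od_in / 2) ** 2

# Planned day-one cable counts (assumed examples).
day_one = {
    "cat6a_copper": {"count": 120, "od_in": 0.30, "lb_per_ft": 0.045},
    "fiber_trunk":  {"count": 24,  "od_in": 0.50, "lb_per_ft": 0.060},
}

used_area = sum(c["count"] * cable_area_sqin(c["od_in"]) for c in day_one.values())
fill_pct = 100 * used_area / usable_area_sqin

# The guideline above calls for sizing to accommodate at least 50% growth,
# i.e. leaving room for half again as many cables as installed on day one.
fits_growth = used_area * 1.5 <= usable_area_sqin

weight_lb_per_ft = sum(c["count"] * c["lb_per_ft"] for c in day_one.values())

print(f"Day-one fill: {fill_pct:.0f}% of the tray cross-section")
print(f"Room for 50% growth: {'yes' if fits_growth else 'no'}")
print(f"Approximate load: {weight_lb_per_ft:.1f} lb per foot of tray")
```

Running the sketch with these example values gives a low day-one fill with ample headroom; the same check flags a tray that would be overloaded once growth and cable weight are considered.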
Design and Installation of Horizontal and Vertical Wire Managers
Now that we have considered everything we need to support our cabling above and below the equipment racks and cabinets, we need to consider our cabling pathways in and around the cabinet or rack.
- Horizontal wire managers allow the neat and proper routing of patch and equipment cords from the switch/server to a patch panel. Density is very important in data center cabinets and racks so keep in mind how many rack spaces are being utilized with horizontal wire managers. Horizontal wire managers are available in many sizes from 1U to 4U. 1U and 2U heights are the most prevalent. They also come in depths from 1.75” to 6” deep. Cable management can be accomplished in both the front and rear of the rack with double-sided organizers. Depending on how much you want to hide the cables, horizontal wire managers can also come with or without doors/covers. They come in metal and plastic.
- A vertical wire manager provides a vertical pathway for cable within the rack or cabinet. It allows multiple horizontal wire managers to feed into a larger vertical pathway for the entire height of the rack. There can be as many as 20 horizontal wire managers feeding into one large vertical manager. Therefore, these vertical managers need to be large, with depths from 4 to 10 inches and widths up to 10 inches. These managers can come with waterfalls and spools to help manage multiple cables and to help with maintaining proper bend radius on copper and fiber cables.
- It’s important to make sure there is enough space to accommodate all the patch cords to avoid overfilling the wire managers. Overfilling wire managers will cause kinking in the patch cords and make it very difficult to perform moves, adds or changes. You should allow a minimum of 30 percent free space in the wire manager for growth.



Cable Selection
Outside diameter is the key to reducing cable fill in your cable tray and your cable management. Let’s look at the options available.
- Copper cables are more difficult due to their weight and large OD compared to fiber optic cables. Copper cables are typically used for inter- and intra-rack communications.
- Fiber optic cables offer options to reduce cable fill and can provide much greater bandwidth than copper.
- There are many types of fiber optic cables designed for data centers that will dramatically reduce cable fill in cable trays.
- Pre-Terminated fiber optic cables are also prevalent in data centers. They are used for many reasons including quality, dependability, and reduced installation time. Cable slack is hard to accommodate in data centers no matter where it is located—cable trays, vertical or horizontal cable managers. Every effort should be made to get the lengths right before they are installed.
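Because excess slack from pre-terminated cables clogs pathways and adds weight, it helps to budget lengths before ordering. Below is a minimal Python sketch of such a length budget; the routing distances, the 10 percent service-loop allowance and the 5-foot ordering increment are illustrative assumptions, not values from a standard or vendor.

```python
import math

# Illustrative length budget for a pre-terminated trunk between two cabinets.
# All distances, the service-loop allowance and the ordering increment are
# assumed example values.

horizontal_run_ft = 85      # tray distance between the two cabinet positions
drop_to_rack_ft = 7         # waterfall drop from the overhead tray into each cabinet
in_rack_routing_ft = 6      # wire-manager routing inside each cabinet
service_loop_pct = 0.10     # modest allowance for re-termination and future moves
increment_ft = 5            # pre-terminated assemblies often come in fixed length increments

raw_length_ft = horizontal_run_ft + 2 * (drop_to_rack_ft + in_rack_routing_ft)
with_allowance_ft = raw_length_ft * (1 + service_loop_pct)
order_length_ft = math.ceil(with_allowance_ft / increment_ft) * increment_ft

print(f"Calculated route length: {raw_length_ft} ft")
print(f"Suggested order length:  {order_length_ft} ft")
```

The point of the exercise is simply to measure the actual route rather than guess long: a few feet of deliberate allowance is manageable, while tens of feet of unplanned slack ends up coiled in trays and managers.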
Other Considerations
Cable management in the racks is as important as in the pathways. Waterfalls from the overhead cable supports into the vertical wire managers provide necessary strain relief. Spools that can be attached in the vertical wire manager help maintain bend radius for both copper and fiber cable. Also, Velcro cable supports are reusable and a safe way to secure the cable without damaging it.
There are many things that need to be considered when it comes to cables and pathways in a data center. One thing is for sure: data centers will continue to grow as technology continues to advance the way we live.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
1:00p | Proactive vs. Reactive: Assessing Risks in the Data Center

ORLANDO, Fla. - David Boston of Boston Consulting knows a lot about how to evaluate data center risk. At one point, he managed more than 100,000 square feet of space for GTE Data, a company that was later acquired by Verizon. But it was touring and assessing hundreds of data centers as a consultant that honed his eye. He discovered that each data center is unique, no matter how uniformly a company chooses to build them. All data centers have strengths and weaknesses. The most important assessment objective is to reveal risks that were not previously evident.
At Data Center World last week, he spoke about why data centers should perform risk assessments – not only on a reactive basis, but a proactive one. Currently, 90 percent of risk assessments are reactive, and the audience—reluctantly—revealed this to be true. “Pain generates immediate concern,” said Boston.
“No matter how well the data center is planned, we still find single points of failure in almost all assessments,” he said. While internal assessments can be effective in identifying some problems, “they tend to be effective in identifying what they know to look for,” said Boston. “Most have seen far fewer facilities and issues over their careers. Personal visits to several facilities add to best practices.”
Previously, Boston served as Program Director – Site Uptime Network at Uptime Institute. He says that data centers still see an average of one downtime event per year. Downtime is overwhelmingly caused by human error, which accounts for 60-80 percent of incidents. Boston stresses that processes are just as critical as equipment, and that often you cannot see problem points when it’s your own data center.
Those who do choose to proactively assess generally schedule reviews every 3-5 years.
There are three major reasons to seek out a risk assessment:
- Validate the function of infrastructure system capacities
- Identify single points of failure
- Determine which systems fail to match design objectives.
The benefits of an assessment extend beyond just prevention. They confirm that systems are being utilized as expected. A report will consist of an executive summary, recommended actions, as well as a list of commendable best practices. Assessors will often pick up on site details that aren’t immediately evident, such as effective color coding or unprotected breaker handles, to give two small examples. Boston also gave an example of something he once found that wasn’t immediately evident to the data center operator: the generators were specified without the option to change filters while in operation, which could have caused serious problems during an extended outage.
Boston provided a checklist of objectives assessors look for that extends beyond just systems, such as where responsibility falls at each point. “There’s a greater need for cooperation in the data center and defined expectations,” he added.
He recommended using a list of what to ask for with an outside risk assessment. “The schedule will vary based on what’s on the checklist,” said Boston. “Most facilities require on site review that lasts 2-4 days, with a subsequent report produced in 2-3 weeks.” These time frames depend on facility size and complexity.
“How much time it takes to recover is more important than how long you’re down,” Boston noted. Risk assessments help to have a master plan in place “to get the most bang for your buck.” Outside risk assessments help identify what data center operators might not be seeing right in front of them; outside assessors have more experience viewing a variety of facilities, and it is hard to see problems right in front of you without taking a few steps back from the situation.
As the consultant emphasized, there’s greater need for cooperation and defined expectations in the data center. Different people are in charge of different systems, and outside parties provide a holistic view of the data center.
1:30p | US Court Rules in Favor of AWS Over IBM in CIA Cloud Battle

Brought to you by The WHIR.
Amazon won a court battle Monday against IBM over a $600 million cloud computing contract awarded by the CIA.
In January 2013, AWS was successful in winning the cloud contract from the CIA, a decision that was later contested by IBM. Big Blue argued that the outcome of the bidding didn’t make sense because AWS’ contract would cost the CIA $54 million more than IBM’s proposal.
After a formal investigation by the US Government Accountability Office in June, it was recommended that the CIA re-evaluate parts of its decision. The CIA said that it ultimately went with AWS not for cost, but for its “superior technical solution.”
On Monday, IBM was downgraded as analysts warned that IBM’s traditional server business was in trouble and the shift to cloud and SaaS will “adversely impact all of IBM’s segments in some way,” according to a report by ZDNet.
Government contracts provide a huge opportunity for cloud providers, who, aside from enterprise clients, typically deal with accounts paid by smaller customers on a pay-as-you-go or monthly basis. The CIA contract win will give AWS some much-needed cred in the enterprise and government cloud sectors.
In September, Savvis won a $1.1 million, three-year contract for providing cloud hosting to the Federal Communications Commission.
Article originally published at: http://www.thewhir.com/web-hosting-news/us-court-rules-in-favor-of-aws-over-ibm-in-cia-cloud-battle
2:30p | In Norway, Fjord Keeps Financial Data Cool

Cabling trays run along the underground passages within the Green Mountain data center, a fjord-cooled facility in Stavanger, Norway. (Photo: Green Mountain)
Are you ready for fjord-cooled financial data? Norway’s largest bank, DNB, will house its primary IT operations inside the Green Mountain data center, running its servers in underground data bunkers that once stored ammunition for NATO. The Green Mountain facility is located on the island of Rennesoy, where it draws frigid water from an adjacent fjord to cool its data halls.
The move by DNB marks a major win for Green Mountain, as well as continued momentum for Nordic data centers that tap the region’s cool climate and renewable energy to support IT infrastructure. The contract spans over 21 years and is a multi-million NOK deal.
“DNB is one of Norway’s largest users of data centre space and a user that has especially tough requirements on all aspects of data centre quality, security and operations,” said Knut Molaug, Green Mountain’s CEO. “This agreement is an important break-through for us, even though we already have five important customers on-board and good traction in the market place. With this sixth client a new market segment opens for us. This contract establishes us as a key player in the data centre space for the financial services sector both in Norway and the rest of Europe.”
100 Meters Underground
Green Mountain is a 21,000 square meter (226,000 square foot) facility nestled along the shores of the island of Rennesoy, inside concrete buildings within caves carved out of the mountain that are more than 100 meters underground. The project is being developed by the investment arm of the Norwegian shipping firm Smedvig.
Green Mountain’s cooling system taps the fjord for a steady supply of water at 8 degrees C (46 degrees F), which is optimal for use in data center cooling systems. Chilled water is a key component of many data center cooling systems. This water is often supplied by chillers, large refrigeration units that require a hefty amount of electricity to operate. Eliminating the chillers will usually allow a data center to operate with lower energy bills than similar facilities using chillers. The facility is powered entirely by hydro-electric energy.
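As a rough illustration of why eliminating chillers matters, the Python sketch below compares annual cooling energy for an assumed IT load under an assumed chiller-plant efficiency versus an assumed fjord-water free-cooling loop that only runs pumps and heat exchangers. All of the numbers are illustrative assumptions, not Green Mountain or DNB figures.

```python
# Rough comparison of annual cooling energy: chiller plant vs. fjord free cooling.
# IT load, coefficient-of-performance (COP) values and the free-cooling figure are
# assumed illustrative values, not data published by Green Mountain or DNB.

it_load_kw = 1_000                 # assumed IT load to be cooled
hours_per_year = 8_760

chiller_cop = 4.0                  # assumed kW of heat removed per kW of electricity
free_cooling_cop = 20.0            # assumed effective COP with only pumps and heat exchangers

chiller_energy_mwh = it_load_kw / chiller_cop * hours_per_year / 1_000
free_cooling_energy_mwh = it_load_kw / free_cooling_cop * hours_per_year / 1_000

print(f"Chiller plant:  {chiller_energy_mwh:,.0f} MWh/year")
print(f"Free cooling:   {free_cooling_energy_mwh:,.0f} MWh/year")
print(f"Energy avoided: {chiller_energy_mwh - free_cooling_energy_mwh:,.0f} MWh/year")
```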
“We chose Green Mountain because it has everything we need and because of its green profile, high security, high availability and low power consumption,” said Liv Fiksdal, Group executive vice president IT and Operations in DNB. “In the financial markets stability is of essence, so a company like Green Mountain, which offers continuity in business and a predictable base line of operating costs, is perfect for the finance industry.”
A data hall within the Green Mountain data center, more than 100 meters below ground in Stavanger, Norway. (Photo: Green Mountain)
2:50p | The Pros and Cons of Underground Data Centers

The entrance to the Cavern Technologies data center in Lenexa, Kansas, which houses its customer servers more than 75 feet underground. The company is among a growing number of data bunkers storing data for high-security clients. (Photo: Cavern Technologies)
ORLANDO, Fla. - The data bunker industry is growing, as more customers seek out ultra-secure underground hosting for their IT operations. Operators of subterranean server farms say these environments are similar to above-ground facilities, but they often must address misperceptions about underground sites, many of which are housed in limestone mines.
The emergence of underground data centers was the focus of a session at last week’s Data Center World Fall conference, in which several experts discussed the advantages and challenges of underground data centers, and offered tips to consider when evaluating a data bunker.
“The underground data center space is experiencing rapid growth due to the efficiency and speed to market it offers,” said John Clune, the president of Cavern Technologies, which operates a data center in a limestone mine in Lenexa, Kansas. “One of the bigger challenges has been the perception of underground data centers. People are imagining a tight cubbyhole with a guy with a light on his helmet. The reality is that we’ve got 18 foot ceilings.”
Cavern Technologies is among a cluster of underground facilities in the Midwest, which also includes SubTropolis, The Mountain Complex and SpringNet Underground in Missouri; and the InfoBunker and U.S. Secure Hosting in Iowa.
Tips from Data Bunker Veterans
Not all underground data centers are created equal, and potential customers need to shop carefully and be mindful of the differences between traditional and underground facilities, according to architect Kerry Knott of Bell/Knott & Associates. Knott has worked on a number of underground business parks and data centers in Kansas and Missouri, and offers some insights into evaluating a data bunker.
“Data center buildouts are a good use for these kind of facilities,” said Knott. “Once the data center is built, if you take someone in there blindfolded, they’d never know they were underground. You’ve got the same equipment; it’s just in an underground facility.”
But there are some differences. Here are some pros and cons to consider with facilities built in limestone mines:
Speed to Market: Clune says Cavern was recently able to deploy 5,000 square feet of data center space for a client in just 60 days. “The speed to market is impressive in the underground,” said Clune. One factor is that there’s no need to build or adapt a shell, as the underground space has already been created and all that is needed is the framing and buildout of the data halls. Another benefit is permitting from local officials. “In every underground I’ve worked with, we have had a blanket permit” once the initial underground space is created, said Knott. “It’s one of the advantages of underground structures. That could be an 8 to 10 week savings.” Another benefit is that construction can continue year-round, with no weather delays.
Construction Costs: Underground data centers can also be cheaper, Knott said, since there’s no expense to construct a concrete shell. Subterranean structures also offer potential savings on disaster-proofing, especially in the Midwest. “To build a tornado-proof building above ground can cost an extra $100 a square foot,” said Knott, who added that customers often inquire about other types of disasters. “People are concerned about collapse, and they’re worried about earthquakes,” he said. “An underground space, unlike the building above ground, doesn’t move and doesn’t need to be reinforced. An earthquake doesn’t affect the enclosure at all, but you do have to brace the improvements.”
Facility History and Origin: Recently-built underground facilities are usually appropriate, but those that were mined in the 1960s and earlier may not be. “To be an acceptable space for a data center, it has to have been mined for commercial development,” says Knott. “The limestone has to be preserved in the proper thickness and have structural integrity. The room size is also important, because the columnar support will be rock columns that may be 25 to 30 feet in diameter.”
The size and placement of these columns impacts the technical space. “Optimizing the layout within the property is essential,” said Knott. “It’s tough to get 90-degree corners with underground columns, so you have to be creative, since almost all your equipment is square. With the restrictions of the columns and placement of the corridor, you have to work with what you have. It can be awkward if these are haphazardly shaped.”
Cooling and Ventilation: Underground spaces are naturally cool, but that doesn’t mean they’ll stay that way once you fill them with servers. “Heat rejection is the biggest concern and the biggest challenge,” said Knott. “Most underground spaces have their own fresh air and ventilation system, but that’s generally for comfort rather than the kind of heat we’re putting into the space with the data center. Your options are to drill (ventilation) holes up through the top or horizontally to the exterior.”
Placement of Mechanical Equipment: Some mechanical and electrical equipment requires ventilation and must be housed in an exterior yard. There are several options to address this, which customers must consider if their goal is disaster avoidance, as this equipment will be more exposed. “Generators and air-cooled chillers can be placed against an exterior wall or protected with an outside wall,” said Knott. “You can also build another underground chamber to house them.” Another issue to consider is fire suppression systems, and what happens with water in the event the system is ever discharged in part of the facility.
Staff Considerations: There won’t be any daylight in an underground data center, but that’s not different from many above-ground data centers, Knott says. A bigger concern for staff might be parking, as underground facilities can be large, and that sometimes means that parking areas are a significant distance from the data center.
 John Clune, President of Cavern Technologies, a Midwestern underground data center, talks about the pros and cons of underground data centers. While the underground temperature is a consistent 68 degrees, the data center engineers do have to accommodate for waste heat from servers and other gear. (Photo by Colleen Miller.)
One of the data halls at Cavern Technologies in Kansas, which offers few clues that the facility is 75 feet below ground. (Photo: Cavern Technologies)
3:00p | Optimizing Airflow Can Extend Data Center Life

Although there’s much discussion around increased efficiency, just how efficient is the new breed of cloud and enterprise data centers? According to the latest findings from the Uptime Institute, in audits of 45 data centers, the facilities averaged 3.9 times the amount of cooling capacity actually required by the demand of the associated IT load. Not very efficient.
Despite improvements in airflow management to reduce bypass airflow, surplus cooling capacity had increased from a factor of 2.6 found in a previous study conducted 10 years earlier, according to the report. The Uptime Institute defines the ratio of cooling capacity to required demand as the Cooling Capacity Factor (CCF); a CCF of 1 corresponds to a 10 percent surplus of supply over demand, which accommodates positively pressurizing the room and minimal ceiling and floor leakage. Furthermore, despite this excess cooling capacity, there were still hot spots. For facilities representative of this research sample, there are definitely opportunities for extending the life of the data center.
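As a minimal illustration of the ratio described above, the short Python sketch below computes a cooling capacity factor for an example room, assuming the definition given here (installed cooling capacity divided by IT load plus the 10 percent allowance). The capacity and load values are made up for illustration, and the report's exact methodology may differ.

```python
# Illustrative cooling capacity factor (CCF) calculation, following the definition
# described above: a CCF of 1 corresponds to a 10% surplus of cooling supply over
# IT demand. Capacity and load figures below are example values only.

running_cooling_capacity_kw = 1_170    # total rated capacity of running cooling units
it_load_kw = 300                       # measured critical IT load

ccf = running_cooling_capacity_kw / (it_load_kw * 1.10)

print(f"CCF = {ccf:.1f}")   # 3.5 for this example; the audited average in the study was 3.9
```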
This white paper from Chatsworth shows how planning an extension to the life of a data center – that appears to be out of cooling and power – is not merely a matter of eliminating hot spots and recapturing stranded capacity to supply a static environment. Remember, when data center managers are forecasting hitting a capacity wall, they are envisioning some continued growth to support the business’ mission critical activities. This growth is a combination of increased traffic, incremental applications and technology refreshes.
Download this white paper today to learn how optimum airflow management through effective containment can help create a longer life for your data center. With more demand on modern data center infrastructure, creating a plan around data center airflow optimization not only helps save on infrastructure dollars, it also improves the performance of your overall environment.
5:00p | Intel Expands Roadmap For An Internet of Things Platform

Intel revealed an expanded and accelerated strategy for an Internet of Things (IoT) platform, promoting its Quark and Atom processor families as well as introducing gateway solutions to extend its reach into emerging applications. Intel (INTC) announced its plans to enable intelligent devices, end-to-end analytics and connecting legacy devices to the cloud to drive business transformation.
“The Internet of Things consists of a wide range of Internet-connected devices, from a simple pedometer to a complex CT scanner,” said Ton Steenman, vice president and general manager of Intel’s Intelligent Systems Group. “The true value in the Internet of Things is realized when these intelligent devices communicate and share data with each other and the cloud, uncovering information and actionable insight that can transform business. As a leader in computing solutions from the device to the datacenter, Intel is focused on driving intelligence in new devices and gateways to help connect the billions of existing devices.”
Central to its IoT platform are the Intel Atom and Quark SoC processors. Intel offered its scalable roadmap of products to power devices at the edge of the network, from the energy-efficient Intel Quark SoC to the high-performance Intel Xeon processors. The addition of the low-power, small-core Intel Quark SoC X1000 will extend the company’s reach into new and rapidly growing IoT markets. The new product family features error-correcting code (ECC), industrial temperature range and integrated security. ECC delivers a high level of data integrity, reliability and system uptime for equipment required to run at all times. The new Intel Atom processor E3800 product family is ideally suited for digital signage applications.
Intelligent Gateways
Working together with its McAfee and Wind River companies, Intel is addressing the challenge of building a new family of intelligent gateway solutions that connect legacy systems and provide common interfaces and communication between devices and the cloud.
Targeting industrial, energy and transportation markets, this system of systems helps ensure that the data generated by devices and existing infrastructure can be shared securely between the cloud and intelligent devices for analysis. Leveraging the McAfee Embedded Control and the Wind River Intelligent Device Platform, a new family of intelligent gateway solutions from Intel provides integrated and pre-validated hardware and software. The first set of intelligent gateway solutions will feature versions based on the Intel Quark SoC X1000 and Intel Atom processor E3800 product family and will be available in the first quarter of 2014.
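To illustrate the gateway pattern being described (legacy devices on one side, cloud analytics on the other), here is a minimal Python sketch of a gateway loop that polls local devices and forwards batched readings to a cloud endpoint. The device-polling function and the endpoint URL are hypothetical placeholders; this is not Intel, Wind River or McAfee API code.

```python
import json
import time
import urllib.request

# Minimal sketch of the intelligent-gateway pattern: poll legacy devices locally,
# batch their readings, and forward them to a cloud analytics endpoint.
# read_legacy_device() and CLOUD_ENDPOINT are hypothetical placeholders.

CLOUD_ENDPOINT = "https://example.com/iot/ingest"   # placeholder, not a real service

def read_legacy_device(device_id: str) -> dict:
    """Stand-in for reading an existing device over Modbus, serial, etc."""
    return {"device": device_id, "temp_c": 21.5, "ts": time.time()}

def forward_batch(readings: list) -> None:
    """Push a batch of readings to the cloud over HTTPS."""
    body = json.dumps(readings).encode()
    req = urllib.request.Request(CLOUD_ENDPOINT, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)    # error handling and retries omitted for brevity

def gateway_loop(device_ids: list, interval_s: float = 60.0) -> None:
    while True:
        batch = [read_legacy_device(d) for d in device_ids]
        forward_batch(batch)
        time.sleep(interval_s)

# gateway_loop(["rooftop-unit-1", "rooftop-unit-2"])   # example invocation
```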
Daikin Applied (formerly Daikin McQuay and a wholly owned subsidiary of Daikin Industries, Ltd.) is using the Intel-based intelligent gateway solutions to deploy a complete end-to-end solution for commercial HVAC equipment. Intel is enabling Daikin Applied to connect its existing Rebel rooftop units and deliver data to the cloud that is then aggregated and analyzed. By using an integrated intelligent gateway solution, Daikin Applied is able to focus on rapidly deploying differentiated value-added services such as real-time HVAC unit performance, remote diagnostics, monitoring and control, advanced energy management, and third party content integration services to its customers.
6:00p | Big Data Analytics: Teradata Launches Aster Discovery Platform

Teradata (TDC) announced the Aster Discovery Platform – part of a new generation of solutions that includes Aster SQL-GR, a graph engine, and the Teradata SNAP Framework (Teradata Aster Seamless Network Analytic Processing Framework).
“Recent IDC research revealed that only 10 percent of organizations have the features and functionality needed to explore data and discover insights. There are clearly many opportunities for improvement,” said Dan Vesset, program vice president, Business Analytics and Big Data, IDC. “An integrated data discovery platform should be part of every organization’s portfolio. Without this capability, organizations have a real void in their business analytics strategy.”
Able to scale to millions of nodes or processing units, the Aster SQL-GR graph engine is unique because it is scalable, processes big data in parallel, and is not limited by system memory. It enables native processing of large-scale analytic graph queries and pre-built graph functions and can be used for customer churn, product affinity, fraud detection, and recommendation engines.
Teradata SNAP Framework enables multiple analytic engines and file stores to be seamlessly “snapped” together based on the customers’ tailored discovery needs. The tightly integrated components within the Teradata SNAP Framework empower users to delve deeply into data for new competitive insights by leveraging multiple analytical capabilities – like graph, MapReduce, text, statistical, time series, and SQL-based analytics.
“The innovative design of the Aster Discovery Platform will liberate data scientists around the world by reducing complexity, breaking down analytic silos, and magnifying analytic ability,” said Scott Gnau, president, Teradata Labs.
The power lies in performing one iterative SQL statement to invoke one or all of the SQL, MapReduce, text analytics, statistical analytics, or graph analytics capabilities in a single solution. The new capabilities will accelerate the discovery process and empower business users to visualize information in exciting and valuable ways. For example, retailers can search for specific low-profit-margin food items that influence the purchase of high-profit items.
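The retail example above is essentially a product-affinity question. As an illustration of the kind of analysis meant, here is a minimal pandas sketch that looks for low-margin items frequently appearing in the same baskets as high-margin items; it is generic Python, not Aster SQL or the SNAP Framework, and the tiny transaction table and margins are made up.

```python
import pandas as pd

# Generic illustration of the product-affinity question described above:
# which low-margin items show up in the same baskets as high-margin items?
# The transactions and margins below are made-up example data.

transactions = pd.DataFrame({
    "basket": [1, 1, 2, 2, 3, 3, 3, 4],
    "item":   ["milk", "steak", "milk", "bread", "milk", "steak", "wine", "bread"],
})
margins = {"milk": 0.02, "bread": 0.05, "steak": 0.30, "wine": 0.40}

low_margin = {i for i, m in margins.items() if m < 0.10}
high_margin = {i for i, m in margins.items() if m >= 0.25}

# Baskets that contain at least one high-margin item.
high_baskets = set(transactions.loc[transactions["item"].isin(high_margin), "basket"])

# For each low-margin item: share of its baskets that also contain a high-margin item.
affinity = (
    transactions[transactions["item"].isin(low_margin)]
    .groupby("item")["basket"]
    .agg(lambda s: s.isin(high_baskets).mean())
    .sort_values(ascending=False)
)
print(affinity)
```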
Teradata has added new pre-built functions that can be accessed from a common SQL framework. Aster Discovery Platform also extends in-database analytics and discovery with the integration of Fuzzy Logix’s 600-plus advanced analytic algorithms. The analytic functions further empower the data scientist and analyst to rapidly dig into data with the insightful, interactive visualizations delivered through a Web browser and common business intelligence tools.
“We use the Teradata Aster Discovery Platform to better understand our customers’ unique needs. The resulting insights help us update the Gilt Group website with the most relevant, personalized product offers,” said Geoff Guerdat, Director of Data Engineering, Gilt Group.
6:30p | Video Demo: Eaton 93PM UPS for Data Centers

Eaton recently introduced the 93PM uninterruptible power supply (UPS), which provides backup power for data centers. The 93PM is designed to work either at the perimeter of a data hall or as an in-row unit within an aisle containment system. To accommodate an in-row configuration, the 93PM has added the option of exhausting heat through the rear of the unit into the hot aisle. The fans on the top of the unit are placed at the back so that data center operators can also use a chimney system at the rear of the unit to remove air, and still have room for overhead electrical cabling. The 93PM also features an updated interface to provide more data, as well as Eaton’s Energy Saver System (ESS) to provide “eco mode” benefits of up to 99 percent energy efficiency. In this video, Eaton Product Manager Jason Anderson provides an overview of the 93PM and its features. This video runs about 4 minutes, 30 seconds.
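To put the efficiency claim in perspective, the short Python sketch below compares annual losses at an assumed load for the up-to-99-percent eco-mode figure cited above versus an assumed conventional double-conversion efficiency. The load and the 94 percent baseline are illustrative assumptions, not Eaton figures.

```python
# Rough comparison of annual UPS losses in eco mode vs. normal double conversion.
# The 99% figure is the eco-mode efficiency cited for the 93PM; the load and the
# 94% double-conversion baseline are illustrative assumptions, not Eaton data.

load_kw = 200
hours_per_year = 8_760

eta_eco = 0.99
eta_double_conversion = 0.94     # assumed baseline for comparison

def annual_loss_mwh(load_kw: float, efficiency: float) -> float:
    input_kw = load_kw / efficiency
    return (input_kw - load_kw) * hours_per_year / 1_000

loss_eco = annual_loss_mwh(load_kw, eta_eco)
loss_normal = annual_loss_mwh(load_kw, eta_double_conversion)

print(f"Eco mode losses:          {loss_eco:,.0f} MWh/year")
print(f"Double-conversion losses: {loss_normal:,.0f} MWh/year")
print(f"Difference:               {loss_normal - loss_eco:,.0f} MWh/year")
```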
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.