Data Center Knowledge | News and analysis for the data center industry

Thursday, December 19th, 2013

    12:30p
    Fusion-io Scales ioTurbine for Enterprise Caching

    A screen shot of ioTurbine software from Fusion-io. (Photo: Fusion-io).

    Fusion-io (FIO) announced the release of an ioTurbine caching software update that enhances performance and manageability for large-scale enterprise environments. The new ioTurbine software is also optimized for integration with central management and reporting solutions such as VMware vCenter, and it is tolerant of the widely distributed environments typical of regional or branch offices. The company also unveiled a new Fast Flash program to help customers evaluate the performance and business benefits of introducing caching to their data center systems.

    “Caching technology is a powerful tool for large-scale enterprises looking to add the value of flash performance without forklift SAN replacements,” said Lee Caswell, Fusion-io vice president, Virtualization Products Group. “With new one-click application caching and streamlined centralized management, ioTurbine intelligently directs application performance to server-side flash, while preserving the data protection and capacity benefits of shared storage. Powerful new caching algorithms redirect billions of operations from SANs to servers to make enterprise data centers more efficient.”
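
    Fusion-io has not published ioTurbine’s internals, but the pattern the quote describes, serving hot reads from server-side flash while the SAN remains the authoritative copy, is essentially a read cache with write-through. The minimal Python sketch below, in which every name is hypothetical rather than a Fusion-io API, illustrates why such a cache can redirect read operations away from the SAN without weakening data protection:

        # Minimal sketch of server-side read caching with write-through.
        # This is NOT ioTurbine's implementation, only an illustration of
        # the general pattern: hot reads are served from local flash, and
        # every write goes through to the SAN, which stays authoritative.
        from collections import OrderedDict

        class FlashReadCache:
            """LRU read cache standing in for a server-side flash tier."""

            def __init__(self, san, capacity_blocks):
                self.san = san                  # backing shared storage (dict-like)
                self.capacity = capacity_blocks
                self.cache = OrderedDict()      # block_id -> data, in LRU order

            def read(self, block_id):
                if block_id in self.cache:      # cache hit: no SAN operation
                    self.cache.move_to_end(block_id)
                    return self.cache[block_id]
                data = self.san[block_id]       # cache miss: one SAN read
                self._insert(block_id, data)
                return data

            def write(self, block_id, data):
                self.san[block_id] = data       # write-through: the SAN always
                self._insert(block_id, data)    # holds the authoritative copy

            def _insert(self, block_id, data):
                self.cache[block_id] = data
                self.cache.move_to_end(block_id)
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)  # evict least recently used

    Because every write lands on the SAN before it is cached, losing the flash cache (or the whole server) never loses data, which is how a read cache can preserve the data protection benefits of shared storage while absorbing most read operations.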

    The practice of caching across the data center is growing rapidly because server flash is less expensive than disk drives on a cost-per-operation basis. The cost reductions and performance benefits accumulate quickly for customers at scale. A leading Fusion-io healthcare customer reported that caching with ioTurbine enabled it to offload 1.6 billion operations from its SAN in a recent 30-day period.
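
    The cost-per-operation arithmetic is easy to make concrete. With purely illustrative prices and IOPS figures (assumptions for the sake of the example, not vendor data), a flash device costing ten times as much as a disk can still be roughly fifty times cheaper per operation:

        # Back-of-the-envelope cost-per-operation comparison. The prices
        # and IOPS figures are illustrative assumptions, not vendor data.
        disk_cost, disk_iops = 300.0, 200          # one 15K HDD: ~$300, ~200 IOPS
        flash_cost, flash_iops = 3000.0, 100_000   # one flash card: ~$3,000, ~100K IOPS

        print(f"disk:  ${disk_cost / disk_iops:.2f} per IOPS")    # $1.50 per IOPS
        print(f"flash: ${flash_cost / flash_iops:.3f} per IOPS")  # $0.030 per IOPS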

    The new Fast Flash program offers non-intrusive assessments of specific customer workloads, return-on-investment analysis, hands-on integration support, and volume pricing for companies adding caching to multiple servers. It is designed to help companies integrate caching solutions while learning from industry-leading flash memory and caching experts.

    “Caching can drive improved I/O efficiency and cost savings by ensuring that servers and storage each focus on their core respective competencies of performance and capacity,” said Mark Peters, senior analyst at the Enterprise Strategy Group. “These latest updates from Fusion-io add significant breakthroughs for caching at scale to ioTurbine’s real-world-tested solution, and extend its ability to not only deliver improved application performance but also work within constrained IT budgets.”

    1:30p
    Top 5 Data Center Power Monitoring Best Practices

    Jon Trout is Vice President of Engineering and Production at TrendPoint Systems.


    If you run a data center, you know the importance of power monitoring for maximizing uptime, increasing capacity and improving cost savings. As we often say in our business, “you can’t manage what you don’t measure,” and power monitoring gives operators greater transparency into the energy usage and overall efficiency of their data center operations.

    Even if your organization is well-informed about the value of branch circuit power monitoring, and you are committed to doing it right, you might be curious about the best ways to choose and implement a power monitoring system.

    Here are our company’s “top five” best practices:

    1. Think Flexibility: With all of the power distribution types (e.g., PDUs, panel boards, busway) that may be present in your one or multiple data centers, and the variety of vendors who supply these products (e.g., Schneider Electric, Siemens, Eaton, GE), the power monitoring products you install need to be able to interact effectively with all of them. Choosing a “platform” that covers the spectrum of power distribution types and vendors, as well as various amperage sizes and circuit configurations, will simplify deployments and streamline integration into software systems.

    2. Stay Adaptable: We’re seeing a rise in the number of data centers that use busway power because of the adaptable power distribution it provides. PDUs and panelboards are also frequently changed and modified to support the dynamic data center environment. Make sure that your power monitoring products can adapt when the power distribution shifts; that way you’ll save money on rebuilding or replacing the existing meters.

    3. Achieve Utility-Grade Accuracy: Most data center power meters claim accuracy within five percent of actual power utilization. Five percent may sound acceptable, but the best practice in power monitoring is to attain utility-grade accuracy, which is within one percent of the actual amount of power consumed. The reason? A utility-grade level of accuracy enables colocation and other data centers to rebill clients fairly for the cost of energy (a worked example follows this list).

    4. Avoid Non-standard Protocols: Does your data center use SNMP to communicate between systems? Modbus TCP? BACnet/IP? If the power monitoring meters don’t use standard communication protocols, it will be more time-consuming and costly to integrate them with DCIM or BMS systems. The best way to avoid this problem is to make sure that any metering system you purchase supports the right communication protocols for the software system you plan to use (a minimal polling sketch follows this list). Ideally, your metering platform should be able to support all your power distribution products, communicate easily with the software, and interact seamlessly with all of the other components of the data center.

    5. Look for Greater Device Functionality: Typical monitoring solutions require a complex and costly network of protocol conversions, middleware, and data interpretations to provide the operations and engineering teams with a comprehensive picture of power consumption in the facility. Features such as onboard Ethernet, onboard data logging, onboard alarming, and an accessible web interface can reduce the failure points and cost associated with a monitoring deployment.
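
    To make the accuracy argument in item 3 concrete, here is the promised worked example. The tenant load, billing period and tariff are illustrative assumptions:

        # Why metering accuracy matters for rebilling: the monthly billing
        # swing of a +/-5% meter versus a +/-1% meter on an assumed load.
        load_kw, hours, rate = 100.0, 730.0, 0.10  # 100 kW, ~1 month, $0.10/kWh
        true_bill = load_kw * hours * rate         # $7,300 actual energy cost

        for tolerance in (0.05, 0.01):
            swing = true_bill * tolerance
            print(f"+/-{tolerance:.0%} meter: bill may be off by ${swing:,.2f}/month")
        # +/-5% meter: bill may be off by $365.00/month
        # +/-1% meter: bill may be off by $73.00/month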
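
    And as a sketch of the standards-based integration in item 4, the snippet below polls a meter over Modbus TCP using the open-source pymodbus library. The IP address, unit ID, register address and scaling are hypothetical; a real deployment would take them from the meter vendor’s register map, and the exact call signature varies slightly between pymodbus releases:

        # Polling a hypothetical power meter over Modbus TCP with pymodbus.
        from pymodbus.client import ModbusTcpClient

        client = ModbusTcpClient("192.168.1.20", port=502)  # meter IP (assumed)
        client.connect()

        # Read two 16-bit holding registers at an assumed address and combine
        # them into one 32-bit value, per the (assumed) vendor register map.
        result = client.read_holding_registers(3000, count=2, slave=1)
        if not result.isError():
            raw = (result.registers[0] << 16) | result.registers[1]
            print(f"active power: {raw / 1000:.2f} kW")  # assumed scaling: W -> kW
        client.close()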

    Data center power monitoring systems can be a powerful tool for achieving a facility’s overall initiatives. Choosing a system that fits your facility’s current and future needs is only the first step, however. Implementation and integration of the power monitoring system are easy to overlook, but they play a big role in the effectiveness of the facility and the DCIM or BMS systems the meters complement. Flexibility, adaptability, accuracy, communications, and device functionality are the hallmarks of a successful power monitoring system.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    2:00p
    Amazon to Add China Cloud Computing Region in 2014

    This article originally appeared on TheWHIR.

    Amazon has struck a deal to provide cloud services for businesses in China as a pilot project starting in early 2014, the company announced on Wednesday.

    Two local Chinese partners are involved: ChinaNetCenter will provide data center space, and Sinnet will deliver the service, which will be used by a “select group” of China-based and multinational companies.

    Amazon signed a memorandum of understanding with the governments of Beijing and the northwestern Ningxia region, giving it access to the heavily regulated market. The Ningxia government not only approved the AWS pilot project, but is also a participant, as some of its public services will be delivered through the Amazon cloud.

    “This will help fully utilize data center and infrastructure resources in Beijing and Ningxia, and provide highly reliable and secure cloud services to millions of Chinese customers,” Yuan Jiajun, executive vice chairman of the Government of Ningxia Hui Nationality Autonomous Region, said in a statement.

    Amazon has already been providing cloud services to some Chinese customers, but those customers used internationally tailored services running on overseas infrastructure. They will now be able to store data in China, which should reduce latency.

    Existing Chinese AWS customers include smartphone company Xiaomi Inc. and biotech company Tiens Group Co. Ltd.

    “China represents an important long-term market segment,” said Andy Jassy, head of Amazon Web Services.

    By partnering with local players and gaining official approval, Amazon moves onto the home turf of giant ecommerce competitor Alibaba Group. Alibaba’s own cloud computing arm branched into cloud for financial services companies in late November, launching Ju Baopen to bring online banking to Chinese consumers.

    A Forrester report released in September predicted huge ecommerce growth in China over the next several years. Gaining access to the market, however, has been a challenge for foreign companies. Partnerships with Chinese companies that have already met regulatory requirements have been necessary for all major cloud providers moving into China.

    Microsoft gained access to the Chinese market for its Windows Azure platform by partnering with Chinese company 21Vianet earlier in the year. 21Vianet is also the local partner for IBM SmartCloud Enterprise+, in a deal unveiled almost simultaneously with the Amazon announcement.

    Original article published at: http://www.thewhir.com/web-hosting-news/amazon-add-china-cloud-computing-region-2014

    4:00p
    Akamai Enhances Web Performance Solutions

    Aimed at delivering an increasingly mobile, personalized and dynamic web experience, Akamai (AKAM) unveiled several important enhancements to the company’s flagship web experience solutions, which are designed to intelligently maximize site and application performance. These enhancements respond to industry trends, such as responsive web design and “bring-your-own-device” (BYOD), that have forced both online businesses and enterprises to confront new performance challenges.

    “Increasingly, our customers’ users are demanding a near instant web experience,” said Mike Afergan, senior vice president and general manager, Web Experience Business Unit, Akamai. “What’s more, our customers are looking to improve their control and visibility. It is our mission to bring to market the web experience solutions that will help our customers realize their goals in today’s fast moving environment. With these exciting new capabilities, we believe we are doing just that. We are excited to see what our customers and partners will do with these powerful tools.”

    The enhancements help customers overcome mobile, network and browser limitations that can hinder customer engagement and negatively impact business productivity. Further, new Akamai reports and tools provide greater visibility for companies looking to gain increased control over website optimization and application performance. The latest feature enhancements are available now across the company’s Ion and Terra Alta solution lines.

    “The key business benefit of Akamai’s web experience solutions is what they enable, not just what they deliver,” said Graham Benson, IT director of the UK online fashion retailer MandM Direct. “For example, the page load performance improvements delivered by Akamai Ion allowed us to implement a single, fully responsive, website design that serves mobiles, tablets and PC’s rather than needing to deliver specific bespoke sites targeted at the individual device types.

    “As a retailer, having a single site that is device-agnostic ensures a consistent user experience irrespective of the access device that the consumer chooses to use,” Benson continued. “This is very important to us. And from a marketing perspective, it increases business agility by reducing the maintenance and development windows. A single site is quicker, cheaper and easier to build/maintain than three separate ones. Finally, the implementation of the Akamai FEO technology toolset has meant that our internal IT team does not need to become browser optimization experts. Instead, they can concentrate on the site’s functional components, safe in the knowledge that Akamai will take care of the performance elements of the site.”

     

    9:00p
    I, Data Center: An Interview with a Robotics Professional

    Is this what comes to mind when someone says “robots in the data center”? Much has changed in robotics and in how it applies to data center automation.

    Today we conclude Data Center Knowledge’s three-part series on data center automation and the potential role of robotics.

    Scott Jackson, Senior Robotics Programmer at DevLinks, has worked in the IT field for many years. A robotics automation programmer and designer, he has begun to see action at the data center level when it comes to robotics integration. He has seen numerous projects, designed many types of robotics environments, and is a strong supporter of robotics as a supplement in the next-generation data center.

    Jackson recently sat down with us for a detailed conversation around robotics, automation technologies, and the future of the data center model. Here’s that chat.

    Bill: The conversation around robotics continues to heat up as more large shops begin to explore the idea. What are you seeing in the robotics world that’s influencing this conversation?

    Scott: Primarily, what I’m seeing drive this conversation is the new-found flexibility and adaptability in today’s industrial robots, along with historically low costs for automation integration. Factories have been automated on a large scale for the last 30 to 40 years or so. For the most part, they have simply been repeating the same process over and over, without deviation. Newer capabilities (such as vision, RFID, force sensing, networking, etc.) are being baked into pipeline robots as standard or low cost options. These new capabilities allow the robot to conform to changes in its environment, rather than relying on total compliance to a specified norm. It’s this kind of adaptability that would be necessary in a complex environment like a data center. After all, you’re not building cars. You’re handling thousands of components which aren’t necessarily located with a high tolerance. A robot needs to be able to adjust to that on the fly. We’re finally at that point in history where it’s become feasible to assign these tasks to a robot.

    Bill: That’s a very optimistic perspective. Plus, the notion that data centers can be complex is certainly very valid. Should data center administrators be learning about robotics? What sort of cross-training should happen?

    Scott: Yes, but only to a point. While I applaud anyone willing to tackle robotics integration on their own, I would highly recommend seeking the services of an expert systems integrator for a project as large as an entire data center integration. A working knowledge of the robots themselves and the system as a whole is a must for easy scheduled and preventative maintenance, but there is simply too much to consider when it comes to system design.

    Bill: Just like anything in technology, complexity is always a challenge that must be overcome. Still, automation, workflow orchestration, and data center optimization are all potential benefits that robotics can bring. Do you see a direct tie into management systems which help run data centers currently?

    Scott: Industrial automation has been integrated with plant-wide data recording and production management systems for quite some time. The communication capabilities already exist, so I see no obstacle that would prevent something like that. In fact, I would think that a system like this would be necessary to help balance the workflow on the automation to get the absolute most out of it.

    Bill: How have robotics advanced over the past 2 years? Have they become more intelligent, mobile, and easier to control?

    Scott: By far the biggest advancement in industrial robotics has been adaptable technologies like machine vision, force sensing and learning vibration control. These technologies are not “new” to robotics, but they have recently become affordable enough to replace old-fashioned, complicated and expensive mechanical automation. A side effect is that these new capabilities are opening up entirely new markets to robotics where automation had previously been cost-prohibitive. More people are putting cameras on robots than ever before, and that increased demand has also led to increased development, making integration even easier.

    Mobile robotics is making great strides too. Sensors used in mobile robots have come down significantly in price in the last couple of years, thanks primarily to the popularity of smartphones. Position and location sensors are being miniaturized and optimized for mass production. As a result, we’re going to start seeing smaller and cheaper mobile robots filling roles in the workplace very soon. Amazon’s Kiva is an often-cited example of low-impact mobile warehouse integration. I say low-impact because the system is quite literally the mobile units themselves and the shelves that they move around. It’s considerably easier to install than the traditional gantry-based robotic retrieval system that a lot of other “lights-out” warehouses employ. Also worth mentioning are Google’s efforts at self-driving vehicles and its recent acquisition of seven different robotics tech firms. It’s a safe bet that many more things than just automobiles will benefit from all of that development.

    11:10p
    ARM Chip Pioneer Calxeda Shuts Down

    Calxeda, which created a system-on-chip based on ARM technology, is restructuring.

    The move to adapt cell phone chips for low-power servers suffered a setback today as Calxeda, the startup that was a leading player in the market for servers based on ARM chips, suddenly ceased most of its operations.

    The company decided to wind down after it was unsuccessful in raising additional funding. Calxeda had previously raised $103 million from venture capitalists, including a $48 million round in 2010 and an additional $55 million in October 2012.

    “The refinancing fell through quite suddenly, and there wasn’t enough cash to buy us time to find and close additional investors,” said Karl Freund, the VP of Marketing for Calxeda. “We are pretty much shutting down. We will continue to support some customer projects in the hope that the company emerges from this in a smaller but meaningful form.”

    Calxeda was an early mover in the effort to adapt cell phone chips for use in servers. The company’s processors are based on technology from ARM Holdings, which has long been used in the iPhone and iPad. But Calxeda had a longer roadmap to production than competing mobile-to-data-center initiatives using Intel Atom chips, and only recently saw Calxeda-powered servers from Boston Limited enter production, with its technology remaining in the test-and-dev phase with other server OEMs.

    Calxeda recently announced its second-generation product line, the EnergyCore ECX-2000 family, targeting the private cloud market. It was also working on a 64-bit system-on-chip, a key step towards making Calxeda competitive in the hyperscale server market. But just 14 months after raising $55 million, it apparently had not made enough progress to continue the effort.

    “Carrying the load of industry pioneer has exceeded our ability to continue to operate as we had envisioned,” said Freund. “During this process, we remain committed to our customers’ success with ECX-2000 projects that are now underway.”

    “Over the last few years, Calxeda has been a driving force in the industry for low power server processors and fabric-based computing,” Freund added. “The concept of a fabric of ARM-based servers challenging the industry giants was not on anyone’s radar screen when we started this journey. Now it is a foregone conclusion that the industry will be transformed forever. Calxeda is proud of what we have accomplished, the partners who have collaborated with us, the investors who supported us, and the visionary customers who have encouraged us and inspired us along the way.”

    Investors included ARM, Austin Ventures, Vulcan Capital, Battery Ventures, Advanced Technology Investment Company (ATIC), Highland Capital, Texas Instruments and Flybridge Capital Partners.

