Data Center Knowledge | News and analysis for the data center industry
 

Friday, August 2nd, 2013

    11:20a
    GridGain Raises $10 Million To Grow In-Memory Computing

    GridGain secures $10 million to expand in-memory computing, the Wroclaw University Library and IBM preserve rare European documents, and Resona Bank selects a Teradata Active Enterprise Data Warehouse.

    GridGain secures $10 million. In-memory computing platform provider GridGain announced that it has closed $10 million in Series B venture financing. The round was led by new investor Almaz Capital, a global venture capital firm, with continued participation from previous investor RTP Ventures, the U.S. arm of ru-Net Holdings and one of the largest internet and technology investors in Russia. The funds will be used to expand sales, marketing and new product development to meet the growing need for in-memory computing (IMC) in big data environments. “GridGain is addressing a real need in a rapidly growing big data market. Due to this market growth, the company is gaining tremendous traction,” said Geoff Baehr, managing partner, Almaz Capital. “Almaz Capital’s goal has always been to seek and build long-term partnerships with passionate entrepreneurs who are eager to make a difference. GridGain’s management team has proven experience in in-memory computing as well as the vision and drive to transform the industry.”

    Wroclaw University turns to IBM big data solution. IBM and the Wroclaw University Library in Poland announced a national scientific project to preserve and digitize nearly 800,000 pages of distinctive European manuscripts, books, and maps dating back to the Middle Ages and rarely accessible to the public until now. Co-funded by the European Union through the European Regional Development Fund, the project will create the largest digital archive of medieval manuscripts and ancient geographical atlases in Poland. To address the big data challenge, the project will use IBM System x servers and storage to provide search and retrieval services for up to 300 terabytes of information. “The Wroclaw University Library’s mission is to protect, preserve and ensure broader access to Polish cultural heritage,” said Adam Zurek, Head of the Department of Scientific Documentation of Cultural Heritage, Wroclaw University Library. “We selected IBM to help us identify, choose and implement a solution in line with our goals of digitizing the library’s documents and making them available to the broader public online. Thanks to the IBM Smarter Computing solution, our library has enhanced its potential as an educational resource for students and researchers from Europe and all over the world.”

    Teradata selected by Resona Group. Teradata (TDC) announced that the Resona Group has deployed a Teradata Active Enterprise Data Warehouse to better serve customers and improve efficiency. The data warehouse will support strategic and real-time operational inquiries from 14,000 bank employees, 2,200 ATMs, 230 call center representatives, and all customers banking online. Using analytics, the Resona Group will be able to better develop new financial products and services, reach corporate and individual customers with timely marketing campaigns, improve efficiency, and reduce operational costs. “With the latest Teradata technology, Resona Bank will be able to more efficiently reach and serve its customers at call centers, through online banking and at bank branches,” said Yukihiko Yoshikawa, president, Teradata Japan. “We support Resona Bank’s leadership and vision; their data-driven sales and service strategy will enable them to achieve their growth initiatives.”

    12:27p
    Eucalyptus: Debate is Dissipating Customer Confusion Over Private Cloud

    The past week has seen vigorous debate about the path of the OpenStack cloud initiative. Eucalyptus CEO Marten Mickos thinks the current discussion around OpenStack and APIs has helped to draw the battle lines for private clouds.

    The current discussion began when Cloudscaling’s Randy Bias called on the OpenStack community to focus on compatibility with Amazon’s APIs. Tech blogger and Rackspace’s Startup Liaison Officer Robert Scoble responded, writing that amid conflicting philosophies about the way forward, the key is to focus on innovation. Scoble argued that API compatibility issues don’t drive startup decisions.

    How about a competitor’s point of view? Private cloud provider Eucalyptus Systems has always focused on compatibility with Amazon Web Services, and thinks the discussion is clearing up customer understanding of who does what.

    “OpenStack has decided that they do not want to follow Amazon on APIs,” said Mickos. “Randy Bias has forced the issue, and the reaction is a denial and dismissal.”

    Fighting Too Many Battles?

    Mickos disagrees with Scoble’s point that you can’t focus on both APIs and innovation. “I believe his thinking is flawed,” said Mickos. “You have to focus, but a way to focus is to follow the dominant design on APIs and just do it, and focus on innovation.”

    “OpenStack is fighting too many battles; trying to build an API, build a standard, and fight in both the public and private cloud space.”

    Eucalyptus – short for “Elastic Utility Computing Architecture for Linking Your Programs to Useful Systems” – is open source software for building Amazon-compatible private and hybrid clouds. Eucalyptus Systems is the commercial implementation of the software, which emerged from a research project at UC Santa Barbara.

    Mickos previously served as CEO of MySQL AB, the company behind the open source MySQL database, and has a perspective on what works in open source software.

    “With the most successful open source projects in the world, each one of them has been an open source implementation with a dominant design,” said Mickos. “MySQL was an open source implementation of SQL, Apache was HTTP, and WordPress is arguably an implementation of the blog. The blog is not a standard, it’s an agreed upon format, a dominant design.”

    API Wars?

    How important are compatible APIs? “It depends on how the world will play out,” said Mickos. “Eucalyptus will say you have to have compatibility. OpenStack will say you can move workloads if the APIs aren’t the same. You can argue that the APIs aren’t important. We think the convenience of those compatible APIs is important.”

    Mickos notes that 2012 wasn’t a great year for private cloud because of all the confusion around who was providing what. Discussions like the one that has played out this past week are helping to clear things up.

    “Now that they (OpenStack) have chosen their path, it has a positive effect,” said Mickos. “If you want Amazon compatibility, you go to Eucalyptus. It makes it easier for us to find our customers, and for OpenStack to find their customers.”

    The customer confusion is dissipating, Mickos says. Where each stack provider’s strengths lie and how it is positioned is becoming clearer to end users, thanks in part to these ongoing discussions. No one provider is one-size-fits-all, and no single choice is right for every customer; it depends on their needs. In a way, this means open source cloud computing projects are competing less against one another. The customer pie is more clearly split, with Eucalyptus specifically trying to carve out the portion of enterprises that want a private cloud that is AWS compatible.

    “We used to talk to so many customers that came to us for the wrong reason, now they come for the right reason,” said Mickos. “2012 was a bad year for the private cloud space. We had confusion. Now customers know where to go.”

    Mickos notes that one of the bigger Eucalyptus customers has been growing over 25 percent a month. “This is a Eucalyptus user with over 10,000 developers,” he said. “They can quickly test software and get it out. They’re doing it to make their R&D more efficient. We’re very happy.”

    Eucalyptus shipped version 3.3 in June. Going forward, expect the company to open up its object store to support more and more object store options. “We’re showing how you can mix and match the best components of open source. You can choose something that has more bells and whistles,” said Mickos.

    12:30p
    Data Acceleration Crosses the Big Data Chasm

    David Greenfield is Product Marketing Manager for Silver Peak, Inc.


    Big data presents organizations with significant opportunities, but many small- to medium-sized enterprises (SMEs) will need to overcome significant technical and bureaucratic challenges if they are to leverage the technology. The volume and velocity of data in a big data effort require IT to rethink how the company collects and shares information. Implementation costs, particularly consulting costs, are significant, and new expertise is needed to extract meaningful insights from the flood of data engulfing today’s businesses.

    Big Data, Big Challenges

    But make no mistake: while the highest profile big data cases have been the Facebooks of the world, companies of any size can gain from the technology. Gilt Groupe, a global fashion e-tailer, grew to half a billion dollars in sales in large part because it mined nearly five years’ worth of member data to develop targeted marketing campaigns. Every minute, Fab.com, a meeting place for designers and customers, combines data about a user’s purchase history, membership information and more to spot trends that drive business decisions.

    Brick-and-mortar companies may face their own challenges, but lack of information is not one of them. Between e-mail blasts, video feeds from security cameras, data from point-of-sale systems, and reports from inventory systems, most organizations generate enough data to populate a big data database. Gathering that information into a single location, however, is an enormous challenge.

    Shipping disks or tape to a central location for uploading into a big data database is not always feasible or desirable, and moving so much information across the corporate network is often impossible. It’s not just the lack of bandwidth that’s the issue. Even when hundreds of megabits connect sites, network latency and packet loss dramatically undermine actual throughput. A coast-to-coast, 100 Mbps connection, for example, will still be limited to just 5.24 Mbps per flow (assuming 100 ms latency and no packet loss). Should loss increase to just 0.1 percent, throughput drops to 3.69 Mbps per flow. (See “Party Poopers and Speed Junkies” and calculate the throughput on your network with this calculator.)
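    Those per-flow figures follow from two standard TCP limits: with no loss, throughput is capped at the receive window divided by the round-trip time; once packets are lost, the Mathis approximation, MSS / (RTT × √p), takes over. The back-of-envelope Python sketch below reproduces the article’s numbers, assuming a default 64 KB TCP window and a 1,460-byte MSS, values the article itself does not specify.

        # Back-of-envelope TCP throughput limits. The 64 KB window and
        # 1,460-byte MSS are assumptions; the article states only the
        # 100 ms latency and 0.1 percent loss figures.
        import math

        def window_limited_mbps(window_bytes, rtt_s):
            # No-loss ceiling: receive window / round-trip time
            return window_bytes * 8 / rtt_s / 1e6

        def loss_limited_mbps(mss_bytes, rtt_s, loss_rate):
            # Mathis approximation: throughput <= MSS / (RTT * sqrt(p))
            return mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate)) / 1e6

        rtt = 0.100  # 100 ms coast-to-coast round trip

        print(window_limited_mbps(64 * 1024, rtt))    # ~5.24 Mbps, no loss
        print(loss_limited_mbps(1460, rtt, 0.001))    # ~3.69 Mbps, 0.1% loss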

    Network limitations also pose challenges when accessing data. With most databases, users typically like to copy and work on the data on their local device, which again leads to replicating gigabytes of data across the network. Applying a similar practice to big data leads to soaring network costs, poor performance and user frustration. But, organizations cannot afford to restrict big data access to local users; limited employee access to and use of big data is a major reason for project failures.

    Inflated network costs, though, are only one part of big data’s price tag. Software and storage costs may be relatively small compared with traditional enterprise data warehouses, in part because of the use of Hadoop and other open source software packages and scale-out storage, but those costs often do not factor in industry and regulatory requirements for security, disaster recovery, and availability.

    Also missing from most calculations are personnel costs. Given the immaturity of today’s big data market, Gartner expects organizations to spend about 20 to 25 times the supply costs on consulting and integration services. (By contrast, in mature markets such as business intelligence systems, Gartner expects consulting services to run about three times supply revenue.) Ongoing personnel costs will likely remain even as the market matures: organizations will need to train or hire personnel to analyze big data. The “data scientist,” a combination of business intelligence (BI) analyst and statistician, is the hot new title for someone who mines these data sets for the insights that will automate and optimize business processes.

    Data Acceleration and Cloud Help Big Data

    Cloud computing is a perfect match for big data. Big data’s appetite for storage, compute power, complex database infrastructure, and sophisticated data processing capabilities is well served by offerings such as Amazon Web Services (AWS).

    AWS provides virtually unlimited Elastic Compute Cloud (EC2), Elastic Block Store (EBS) and Simple Storage Service (S3) capacity at low prices. It offers DynamoDB, a highly available distributed database service, and Elastic MapReduce, a managed platform for running a Hadoop-based analysis stack. These cost-effective resources and technologies empower businesses to build their own analytics within Amazon and gain deeper, richer insights into almost everything.
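    As a rough illustration of that workflow (staging raw data in S3, then pointing a managed Hadoop cluster at it via Elastic MapReduce), a minimal sketch using the current boto3 SDK might look like the following; the bucket name, file path, cluster name and instance settings are hypothetical placeholders, not values from the article.

        # Sketch: stage a local extract in S3, then launch a small EMR cluster
        # to process it. All names and sizes below are hypothetical examples.
        import boto3

        s3 = boto3.client("s3")
        s3.upload_file("exports/pos-2013-08-02.csv",    # local file (hypothetical)
                       "example-bigdata-staging",       # S3 bucket (hypothetical)
                       "raw/pos/2013-08-02.csv")        # object key

        emr = boto3.client("emr")
        cluster = emr.run_job_flow(
            Name="nightly-analytics",                   # hypothetical cluster name
            ReleaseLabel="emr-6.15.0",
            Applications=[{"Name": "Hadoop"}, {"Name": "Hive"}],
            Instances={
                "InstanceGroups": [
                    {"InstanceRole": "MASTER", "InstanceType": "m5.xlarge", "InstanceCount": 1},
                    {"InstanceRole": "CORE",   "InstanceType": "m5.xlarge", "InstanceCount": 2},
                ],
                "KeepJobFlowAliveWhenNoSteps": False,   # terminate when the job finishes
            },
            JobFlowRole="EMR_EC2_DefaultRole",
            ServiceRole="EMR_DefaultRole",
        )
        print("Started cluster:", cluster["JobFlowId"])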

    But a challenge still remains – how to get the data into the cloud or the company’s data center. Data acceleration software addresses that problem. By running as an instance on both ends of the link, data acceleration software can improve throughput by over 200x. Moving 100 GBytes of data, for example, can take just 6.2 minutes rather than 22 hours. Data acceleration does this by optimizing protocols to correct for latency, de-duplicating data to maximize the use of bandwidth and, in some cases, recovering lost packets on the fly without the retransmissions that undermine throughput. And since data acceleration software can be licensed by the hour, costs can be exceptionally low for use cases where large data volumes need to be moved once or infrequently.
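    The arithmetic behind those transfer times is easy to check. The short sketch below assumes an effective per-flow rate of about 10 Mbps, which is what the 22-hour baseline implies for 100 GBytes; the article’s 6.2-minute figure then corresponds to an acceleration factor a little above 200x.

        # Sanity check on the cited transfer times. The ~10 Mbps effective
        # baseline is an assumption implied by the 22-hour figure.
        data_bits = 100 * 8e9          # 100 GBytes expressed in bits

        baseline_bps    = 10e6         # ~10 Mbps effective per-flow throughput
        accelerated_bps = baseline_bps * 200

        print(data_bits / baseline_bps / 3600)     # ~22.2 hours unaccelerated
        print(data_bits / accelerated_bps / 60)    # ~6.7 minutes at exactly 200x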

    Data acceleration software is a critical step to almost any realistic, large-scale big data deployment. Whether deployed in the cloud or the enterprise, shortening the time to aggregate the data dramatically improves the value organizations see out of their big data deployments.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:27p
    Field of Streams: MLB Opens Omaha Data Center for Video Data

    The Scott Data Center in Nebraska is home to a new data center for MLB Advanced Media.

    Baseball is a game of statistics. At any baseball game, there’s always someone meticulously filling out the scorebook, keeping track of every pitch and every hit. A team will go through 162 of these games, not including the playoffs. Data is beautiful, and this is a data junkie’s haven.

    Now we can access the games online, streamed to a variety of devices like mobile phones and Roku set-top boxes. Every pitch is stored and accounted for. MLB Advanced Media provides the streaming and the stats for an immense number of ballgames. The company generates so much data that it is preparing to open a new data center in support of its services.

    The Scott Data Center in Omaha will be MLB’s first technological footprint outside of New York. And it’s a sizeable footprint at that, with MLB Advanced Media set to take 8,000 square feet of space, twice as much as the facility’s next biggest client. Level 3 opened 10,000 square feet of space at Scott last year, and MLB Advanced Media is the second tenant to sublease from Level 3. Level 3 also built MLB Advanced Media’s data center in New York and provides content delivery services to the company.

    In a single baseball season, MLB Advanced Media generates 1.5 million gigabytes of live and on-demand baseball video. Counting the video it streams for its growing roster of non-baseball clients as well, MLB Advanced Media creates and stores 6 million gigabytes of content each year.

    Sandy Drives Backup Requirement

    Scott Data Center – which Omaha.com points out is 200 miles from the nearest major league baseball franchise – will also house and back up everything that MLB Advanced Media stores and streams. This includes live and archived ballgames, optimized for viewing on a myriad of devices and operating systems, plus stats galore covering every pitch and every play. The amount of data from baseball alone is staggering: 30 franchises, each playing 162 games, 9 innings per game, 3 outs an inning, all sorts of different pitches, batting averages… it’s another example of data happening all around us.

    In addition to growing amounts of data, the company said that Hurricane Sandy was also a driver for a second location. MLB Advanced Media was one of the companies that famously hauled diesel fuel up several flights of stairs to a generator to stay online. “MLB Advanced Media already was looking for a second location, but Hurricane Sandy’s October 2012 assault on New York City brought home the need,” CEO Joe Inzerillo told Omaha.com.

    MLB Advanced Media doesn’t do just baseball. It also has other clients, streaming World Cup soccer matches and March Madness basketball games. It even serves non-sports clients like broadcaster Glenn Beck and Row 44, a provider of in-flight broadband and entertainment services. The company will stream more than 20,000 live events this year, and they will all reside in Omaha.

    “MLB’s stature – both its brand name and its extensive infrastructure and power requirements – is expected to strengthen Omaha’s position as a hub for data centers,” said Scott Data Center President Ken Moreano.

    1:51p
    Friday Funny: Vote for the Best Caption

    Happy Friday! Time for some chuckles about life in the data center. This week we vote on reader suggestions for the “Everybody in the Pool” cartoon drawn by Diane Alber, our favorite data center cartoonist!

    New to the caption contest? Here’s how it works: We provide the cartoon and you, our readers, submit the captions. We then choose finalists, and readers vote for the funniest suggestion. The winner receives a hard copy print with his or her caption included in the cartoon!


    For the previous cartoons on DCK, see our Humor Channel. Please visit Diane’s website Kip and Gary for more of her data center humor.

    2:48p
    CoSentry Refinances, Has $100 Million For Continued Expansion

    CoSentry has completed a refinancing of its existing credit facilities that provides the company with up to $100 million of capital for continued expansion and future growth initiatives.

    The big Midwest data center player will use the proceeds to fund the expansion of its Midlands Data Center in Omaha, NE, which will be commissioned in October 2013. The proceeds will also fund future expansion of both facilities and capabilities, as well as repay existing debt.

    Entities sub-advised by an affiliate of GSO Capital Partners LP (the credit business of Blackstone) provided the credit facility to Cosentry.

    “We are very pleased to establish this partnership with GSO Capital Partners as we enter a period of even more rapid expansion and growth than we have experienced historically,” said Mike Polcyn, CFO of Cosentry.  “Our clients and partners have relied on our ability to expand and innovate, and this new credit facility assures we will continue to meet their needs.”

    Cosentry currently operates a network of six data centers in the Midwest serving enterprise and mid-market clients. The company recently announced it was doubling capacity in Omaha, as well as naming Brad Hokamp as CEO.

    9:55p
    Major Outage for BlueHost, HostGator, HostMonster


    A Utah data center supporting some of the web hosting industry’s best known brands has suffered a networking failure, causing an extended service outage. The problems at a Provo, Utah facility operated by Endurance International Group have led to downtime for customers of BlueHost, HostGator and HostMonster.

    The outage began Friday morning, and as of 5:30 p.m. many sites continued to experience problems. Endurance said customers should not experience any loss of data. “The resources of our entire company are focused on the recovery, including our executive team, which is leading these efforts from our command center in Burlington, MA,” the company said.

    BlueHost, HostGator and HostMonster host nearly 5 million web sites between them. Endurance International is a hosting “roll-up” focused on the shared hosting industry; it acquires established brands and then ports them onto a common hosting platform.

    “We understand that your sites are your lifeblood, as well as the engine to our economy, and we have committed all company resources, nationwide to a swift resolution and full restoration,” said Ron LaSalvia, Chief Operating Officer of Endurance.

    Endurance has set up a dedicated web site at EnduranceResponds.com to keep customers informed. See Network issues Cause HostGator, BlueHost, HostMonster Outage at The WHIR for additional details.

