Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, May 12th, 2015

    Time Event
    10:00a
    MarkLogic Raises Over $100m For Enterprise NoSQL Database Platform

    Under the guidance of financial advisor Allen & Company, enterprise NoSQL database platform provider MarkLogic closed an oversubscribed $102 million round of funding to help it capture the operational database market.

    The funds will be used to accelerate global market growth, expand the company’s partner ecosystem, and continue development of its database platform. The company’s initial target for the round was $70 million. Total financing is now around $175 million.

    MarkLogic offers a schema-agnostic Enterprise NoSQL database coupled with powerful search and flexible application services. The company believes a generational shift is occurring in the database industry driven by big data and a changing IT landscape.

    Traditional database management systems were not built with today’s dynamic data in mind, or to integrate and access heterogeneous data that is the norm among global organizations, according to MarkLogic.

    There are several open source NoSQL technologies that have answered the call, however. MarkLogic believes these open source options lack essential enterprise features like data reliability, transactional consistency, security and availability.

    MarkLogic aims for the sweet spot between both groups. It sees the largest growth in the operational database market where traditional relational database vendors compete, with half of MarkLogic’s business last year derived from completing projects that started on Oracle.

    Research firm Gartner, which pegs the database management system market at $36 billion, placed MarkLogic in the Leaders Magic Quadrant for operational database management systems.

    MarkLogic touts 550 global enterprise and government customers. Six of the top 10 global banks use MarkLogic for transactional operations.

    Some big enterprise customers include Aetna, BBC, Broadridge Financial, Centers for Medicare and Medicaid Services (CMS), Dow Jones, Federal Aviation Administration (FAA), Hannover Re, McGraw-Hill Financial, National Archives and Records Administration (NARA), NBC Entertainment, U.S. Department of Agriculture (USDA), and the U.S. Navy.

    “Only MarkLogic can integrate, manage and operationalize both structured and unstructured data, solving a critical data challenge that relational technology was not designed to handle and delivering faster time to results for our customers,” said Gary Bloom, president and CEO of MarkLogic. “This funding further cements our path of growth as we prepare for the next chapter in the company’s history.”

    Wellington Management Company led the round, with participation from new investor Arrowpoint Partners and existing investors Northgate Capital, Sequoia Capital, Tenaya Capital, and Bloom.

    MarkLogic recently selected Vantage Data Centers for high density data center space in Santa Clara, California.

    There have been several recent, sizable funding rounds in the enterprise NoSQL database space this year. NoSQL database management system provider MongoDB recently raised $80 million. Couchbase recently raised $60 million and noted solid momentum and partnerships. Another NoSQL database startup worth mentioning is Basho, which announced a $25 million funding round.

    12:19p
    Vapor IO Challenges Top Of Rack Management With Open MistOS

    One of the biggest challenges in the data center industry is how to manage data center infrastructure. This is especially true at the rack. Each server vendor has a different approach, down to the way you handle server out-of-band management. Vapor IO hopes to eliminate what founder Cole Crawford calls “gratuitous differentiation” with Open MistOS (OMOS), a new Linux distribution that provides top-of-rack (TOR) management capabilities for data centers.

    Vapor challenged the data center aisle concept and is now challenging antiquated TOR management approaches with Open MistOS. OMOS works in conjunction with OpenDCRE (Data Center Runtime Environment) to form a complete out-of-band management suite.

    Vapor IO is tackling the rack itself. OMOS exposes the rack infrastructure through an API, and components can be mapped into MistOS for easier development of management, monitoring, and orchestration tools.

    The rack is the most important piece of the data center, but it’s also the dividing line between those who run the facility and those who run the servers, said Cole Crawford, founder of Vapor IO, co-founder of OpenStack, and executive director of the Open Compute Project Foundation.

    In the data center, you have facilities people and operations people. Both groups stop at the rack, and vendor IT equipment stops at the rack as well. “Out-of-band management has been dominated by gratuitous differentiation,” said Crawford. “Now, the rack is no longer a no man’s land.”

    By connecting the hardware and the Open MistOS operating system, it’s essentially the data center’s answer to DevOps. “Through this, we will create a unified fabric for a disaggregated data center,” said Crawford.

    MistOS sits alongside OpenDCRE (Data Center Runtime Environment) for a complete out-of-band solution built on a standard Linux kernel. DCRE is Vapor IO’s own data center infrastructure management and analytics system that includes hardware sensors and software. OpenDCRE, a combination of sensors, firmware, and a controller board, is the foundational element of that system and has been contributed to the Open Compute Project.

    Vapor CORE extends OpenDCRE functionality by introducing an intelligence layer on top of it. With Open MistOS, Vapor can now act as a gateway within the data center management network, mapping inbound connections to server serial consoles and enabling further development of the data center.

    “When it comes to orchestrating the workload, you need to look south then provide that information real-time northbound to the operating system itself,” said Crawford. “Now that you have a robust API for out of band, you have the ability to go north and integrate workloads.”

    Key features of Open MistOS include:

    • Vapor CORE API access: Open MistOS includes Vapor CORE API components for rapid deployment and integration with management, monitoring and provisioning services.
    • Bare metal support: Open MistOS includes support for bare metal provisioning and firmware updates through the device tree.
    • Automatic discovery of other Open MistOS TOR OOB management devices: Open MistOS includes a built-in discovery service that automatically detects other Open MistOS instances, allowing for quick and easy deployment, discovery and integration into the data center management fabric.
    • Docker and Open vSwitch support: Open MistOS brings Docker and Open vSwitch support on Raspberry Pi, allowing for TOR containerized deployment and management of applications.
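The rack-level API idea behind these features can be sketched in a few lines. This is a hedged illustration only: the base URL, endpoint layout, and JSON field below are hypothetical stand-ins for an OpenDCRE-style REST interface, not the actual Vapor IO API.

```python
import json

# Hypothetical OpenDCRE-style endpoint layout; the real paths and payloads
# may differ. This only illustrates talking to rack infrastructure over an API.
BASE_URL = "http://rack-controller.example.com:5000/opendcre/1.2"

def read_sensor_url(board_id: str, device_id: str,
                    device_type: str = "temperature") -> str:
    """Build the URL for reading a single rack sensor out of band."""
    return f"{BASE_URL}/read/{device_type}/{board_id}/{device_id}"

def parse_reading(raw_json: str) -> float:
    """Extract the sensor value from a (hypothetical) JSON response body."""
    return float(json.loads(raw_json)["temperature_c"])

# A response body a rack controller might return for the URL above:
sample = '{"temperature_c": 27.5}'
print(read_sensor_url("00000001", "0002"))
print(parse_reading(sample))
```

A monitoring or orchestration tool would issue an HTTP GET against the generated URL and feed the parsed value northbound, which is the integration pattern the Vapor CORE API is described as enabling.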

    In addition to Open MistOS, Vapor IO will sell a commercial version that will be available as an on-site service and as a hosted solution.

    The company will show off its hardware in demos at two partner booths at OpenStack Summit next week: StackVelocity, an OCP solution provider, and StackStorm. The demo will simulate a data center failure and recovery at both booths.

    “Companies are tired of investing in proprietary technologies that ultimately lock them into a platform for years,” said Crawford. “The industry has time and again shown its fervent appetite for open standards and interfaces that guarantee freedoms not offered by legacy vendors. With Open MistOS, we are guaranteeing that freedom.”

    1:00p
    BMC Extends IT Management Reach

    Looking to help IT operations teams extend their IT management reach well beyond their own data centers, BMC Software has released an update to BMC Cloud Lifecycle Management (CLM) that adds enhanced support for both the Amazon Web Services and Microsoft Azure platforms, as well as support for Docker containers and Platform-as-a-Service (PaaS) environments based on the open source Cloud Foundry project.

    Chris Stauber, vice president of product management and marketing at BMC, says version 4.5 of BMC CLM is intended to help internal IT operations regain control over shadow IT environments that have spiraled out of control in recent years.

    “There’s a storm of chaos hitting IT folks,” says Stauber. “We’re trying to make it easier for them to manage hybrid multi-tenant environments.”

    Other new additions to the management platform include advanced tools for detecting service disruptions and tighter integration between BMC CLM and BMC MyIT, a framework through which BMC enables end users to resolve many of their own IT issues without any intervention from the internal IT organization.

    At the moment, most IT environments are made up of a set of semi-autonomous platforms that are mostly managed in isolation. But as pressure to deliver a broad range of digital services that span multiple applications mounts, Stauber says IT organizations will need to unify the management of those platforms more than ever.

    In addition, as each of those platforms scales at a different rate, Stauber notes that most organizations simply can’t afford to keep throwing IT administrators at their IT application and infrastructure management problems.

    Ultimately, Stauber says modern IT organizations are looking to expose a set of defined services to users and developers either through a portal or as a set of well-defined application programming interfaces (APIs) that developers can invoke.

    At a time when the line between where one data center begins and another ends is getting blurry thanks to the rise of hybrid cloud computing, BMC sees an obvious need for a new generation of management tools.

    The good news is that most modern applications and IT infrastructure now expose APIs, inside and outside the cloud, that make it simpler for IT management frameworks such as BMC CLM to manage them. In fact, the real challenge from here may not be the technology at all, but rather getting the entire IT organization to standardize on a common set of tools, in place of the ones to which many individual administrators have, for one reason or another, become overly attached over time.

    2:00p
    Zynga Ditches Data Center Plans For AWS

    Casual and social gaming provider Zynga will shut its data centers and shift workloads back to Amazon, two years after the company spent $100 million to build out its own data centers, according to The Wall Street Journal.

    Zynga appeared to be a business that could potentially “outgrow” cloud, in the sense that once a company reaches a certain scale, it begins to make sense to build its own data centers.

    However, running your own data center is capital-intensive, and the nature of Zynga’s business made this a tricky proposition. Zynga didn’t grow as anticipated following its early bursts of growth, particularly outside the Facebook ecosystem.

    Online gaming is subject to unpredictable traffic spikes and drastic sways of traffic dependent on the release schedule. It’s often feast or famine for a game publisher, given the unpredictable nature of what gamers glom on to, and the often short lifespan of a title. The type of unpredictable workloads and growth Zynga faced are tailor-made for cloud.

    For Zynga, running its own data centers meant a mad scramble to provision servers and power during big releases, followed by lulls. That kind of wide variation is hard to accommodate on your own servers.

    Zynga CEO Mark Pincus said on an earnings call that there are a lot of strategic places for the company to have scale, but running its own data centers isn’t one of them. “We’re going to let Amazon do that,” he said.

    The company has always relied on AWS for some workloads, even at the height of using its own data centers. Zynga created software called zCloud that helped it shift workloads between Amazon’s servers and its own data center.

    Zynga disclosed its intentions to boost data center investment when it filed for its Initial Public Offering (IPO) in 2011. The company said it could operate its own data centers more cheaply than Amazon. Amazon’s economies of scale proved hard to beat.

    Zynga leased data center space from two wholesale data center providers, DuPont Fabros Technology and Digital Realty Trust. Another indication of the company’s close roots to Facebook – several of the facilities were adjacent to Facebook’s data center facilities.

    Zynga rode social media’s coattails to build a casual gaming empire with initial hits like FarmVille and Mafia Wars. Heavily dependent on Facebook users, Zynga suffered whenever Facebook made changes to the feed unfavorable to gaming content.

     

    3:30p
    Maximizing Insight From Your Monitoring

    Ryan Smith is a network engineer for Cervalis LLC, a premier provider of IT infrastructure and managed services.

    Monitoring is critical to any organization’s network. Most organizations have at least one platform keeping an eye on things. Some larger organizations may even have multiple monitoring platforms looking at different sections of the network. While monitoring is a widely accepted best practice, some companies don’t dedicate the time, money and resources required to gain the most insight from a very powerful tool.

    Monitoring is a simple game of polling. Among other things, monitoring systems ask devices in your network a wide array of questions. Is the interface up? How much traffic is on the interface? What’s the CPU doing? Where does memory utilization stand? Most monitoring platforms are capable of asking all of these questions and many more. The real differentiator comes down to how often they ask the question.

    Out of the box, many of these platforms will be able to poll your devices every five minutes. For most deployments, this seems to fit the bill nicely. However, when it comes to truly understanding what your network and servers are doing, quicker polling periods become mandatory.

    The three graphs below were all taken from the same circuit. The first uses a 10-minute polling period, the second a five-minute polling period, and the last, a one-minute polling period. At around 1:15 p.m. the one-minute polling period shows a clear spike in traffic to about 260Mbps. However, the five-minute polling period paints a different picture, catching the spike at around 210Mbps. The 10-minute polling period barely caught the spike at all. If your monitoring system is set to poll every five minutes or longer, you’re missing critical information.

    Figure 1 – 10-Minute Polling Period

    Figure 2 – Five-Minute Polling Period

    Figure 3 – One-Minute Polling Period
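The undersampling effect shown in the figures can be reproduced with synthetic data. The traffic series below is invented for illustration (it is not the article’s actual circuit data); averaging it into coarser polling buckets shows how a short spike shrinks as the polling period grows.

```python
# Synthetic one-minute traffic samples (Mbps): a flat baseline with a
# three-minute spike, standing in for the circuit shown in the figures.
traffic = [100] * 60
traffic[15:18] = [260, 255, 250]

def peak_at_polling_period(series, minutes):
    """Average consecutive samples into one reading per polling period,
    then return the highest reading the monitor would have recorded."""
    buckets = [series[i:i + minutes] for i in range(0, len(series), minutes)]
    return max(sum(b) / len(b) for b in buckets)

for period in (1, 5, 10):
    peak = peak_at_polling_period(traffic, period)
    print(f"{period:>2}-minute polling sees a peak of {peak:.1f} Mbps")
```

With this synthetic series, one-minute polling reports the true 260 Mbps peak, five-minute polling reports 193 Mbps, and ten-minute polling reports 146.5 Mbps, the same flattening pattern the three figures illustrate.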

    Of course, there is a downside to polling more often: the creation of more data points. Let’s suppose you monitor 10 circuits for inbound traffic, outbound traffic, inbound errors and outbound errors. That means every polling period would generate 40 data points that need to be processed and stored in your monitoring system—not to mention the processing power it takes to query them when it comes time to run reports. Assuming a 10-minute polling period would mean that every hour generates 240 data points, while a five-minute polling period would double that number to 480 data points, and a one-minute polling period would grow the number to 2400 data points. And with all that for just 10 circuits, you will quickly see how things can get out of control.
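The data-point arithmetic above generalizes to any fleet size. A quick sketch of the same calculation:

```python
CIRCUITS = 10
METRICS = 4  # inbound traffic, outbound traffic, inbound errors, outbound errors

def points_per_hour(polling_minutes: int) -> int:
    """Data points generated per hour for the fleet at a given polling period."""
    polls_per_hour = 60 // polling_minutes
    return CIRCUITS * METRICS * polls_per_hour

for period in (10, 5, 1):
    print(f"{period:>2}-minute polling: {points_per_hour(period)} points/hour")
```

This reproduces the figures in the paragraph: 240, 480, and 2,400 points per hour. Retention multiplies the problem; a year of one-minute polling on those 10 circuits is over 21 million data points.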

    To make matters worse, many companies fail to dedicate the right amount of computing resources to support their monitoring platforms. Oftentimes, the old company file server gets replaced and repurposed as a monitoring server, or, even more frightening, an old PC gets pulled off the shelf to play the role. Once this happens, it becomes impossible to give the monitoring software the hardware it needs to collect, process, and store all that polling information. What you’re left with is a system that works but doesn’t provide the best information.

    When you select a monitoring platform, focus not only on the software you want to run, but also on the hardware on which you’ll run it. Work closely with the software vendor and clearly define how many objects you need to monitor, what values you need to monitor for, and how often you want to poll. Finally, request both the minimum system requirements as well as the hardware requirements that are unique to your deployment.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:37p
    Barracuda Networks Launches 40G Load Balancer Appliance

    Barracuda Networks today unveiled a 40G load balancer appliance designed to offload network and security traffic from virtual machines.

    Designed to be deployed at the edge of the data center, the Barracuda Load Balancer Fast Distribution Controller (FDC) is based on technologies from Intel that enable IT organizations to better scale virtual data center environments.

    Sanjay Ramnath, senior product manager for Barracuda Networks, says the Barracuda Load Balancer FDC is optimized for data center environments that need high throughput to scale. By offloading the load balancing function from the hypervisor, Ramnath says it becomes possible to support data centers running hundreds of virtual machines.

    Priced starting at $20,000, the Barracuda Load Balancer FDC makes use of a combination of Intel processors and an Intel Data Plane Development Kit (DPDK) to direct requests to application workloads running on a particular server more efficiently.

    “Hypervisors on their own just can’t keep up with load balancing,” says Ramnath. “This appliance runs heads and shoulders above any other load balancer we have.”

    According to a report from Principled Technologies, a consulting firm contracted by Barracuda Networks, the Barracuda FDC provides up to 60.71 Gb/s of throughput and handled 9.99 million simultaneous web connections and 1.33 million connections per second.

    In addition, because the Barracuda Load Balancer FDC makes use of Intel processors, Ramnath says Barracuda has been able to install an instance of its firewall software directly on top of the appliance.

    Collectively, Ramnath says the Barracuda Load Balancer FDC is a new element of an overall application delivery network architecture that makes it possible to federate the management of load balancing across appliances, hypervisors and physical servers.

    In some ways, relying on a physical load balancer appliance is a return to an earlier model. With the rise of virtualization, many IT organizations moved the load balancing function onto a virtual machine. But as data center environments scale, Ramnath says the virtual switches inside hypervisors can’t process all the requests being made as efficiently as a dedicated appliance.

    As workloads become more distributed, Ramnath contends that IT organizations will increasingly be required to deploy load balancing capabilities at the edge of the data center. And given the number of connections that need to be managed, he says it won’t be long before 40G Ethernet connections are more the rule than the exception, as the number of physical and virtual servers deployed inside and outside the cloud continues to multiply.

     

    5:30p
    Fujitsu Upgrading Australian Data Centers Starting With Perth

    Fujitsu is upgrading its seven Australian data centers, starting with an upgrade, expected to cost around $8 million, to a Perth facility that was knocked out by a storm in February.

    The data center upgrades are part of the company’s 2025 roadmap. Fujitsu said it will seek Uptime Institute Tier IV Certification for its Malaga, Perth data center, currently rated Tier III by the Uptime Institute. If successful, it will be the first data center in Australia to achieve the distinction.

    The Uptime Institute said in a statement that it is not currently engaged on a Tier IV Certification project for Fujitsu Australia, but that it looks forward to supporting this project in the future.

    Fujitsu opened its first Australian data center in 2000, with the Perth data center opening in 2010. The company now operates 270,000 square feet in the country.

    The Perth data center is a logical first step given its recent outages. In February, the facility suffered two back-to-back incidents: a first outage that lasted an hour and a half, followed by a second failure in a control system. The upgrades will provide some peace of mind.

    “The Tier IV Certification process for Malaga will provide unprecedented guarantees of availability for all businesses that rely on cloud-based data,” said Mike Foster, chief executive officer of Fujitsu Australia. “Those data centers governed by Tier IV standards will give customers even greater confidence to move more mission-critical applications into ‘always on’ cloud infrastructure.”

    To say Perth is subject to harsh weather is an understatement. The year started with a heatwave that caused outside temperatures to rise to about 112 degrees F. Record-breaking temperatures were partly to blame in an iiNet data center being knocked offline.

    Australia has seen strong cloud adoption on the whole, and several technology companies are investing to capture the growing market.

    VMware and Microsoft’s Azure both recently expanded Australian footprints, and SAP is making a $150 million Australian Government cloud push (Australia’s government has a cloud-first mandate much like the United States). IBM/SoftLayer opened a data center in Melbourne as part of a global $1.2 billion expansion. Red Cloud is undergoing a massive Australian expansion via t4 modules, adding 1 million square feet of space, and Global Switch recently completed the first phase of a $300 million Sydney data center.

     

    7:00p
    US Demands Answers from China Over ‘Great Cannon’ Cyberattacks


    This article originally appeared at The WHIR

    The US is asking Chinese authorities to look into reports that China interfered with online content hosted outside of the country using its so-called Great Cannon, State Department spokesperson Jeff Rathke said on Friday.

    The Great Cannon is a tool that can hijack traffic to or from individual IP addresses and allows China to target foreign computers that communicate with any website based in China, according to a report by Reuters.

    Researchers in Toronto said the Great Cannon represents “a significant escalation in state-level information control: the normalization of widespread use of an attack tool to enforce censorship by weaponizing users.” It can manipulate international web traffic intended for Chinese web companies and redirect malicious traffic to US sites, Rathke said.

    “So the United States is committed to protecting the internet as an open platform on which all people can innovate, learn, organize, communicate, free from censorship or interference. And we believe a global, interoperable, secure, and reliable internet is essential to realizing this objective. And we view attacks by malicious cyber actors who target critical infrastructure or US companies and US consumers as threats to national security and to the economy of the United States,” Rathke said.

    “We have asked Chinese authorities to investigate this activity and provide us with the results of their investigation. At the same time, we’re working with all willing partners to enhance cyber security, promote norms of acceptable state behavior in cyber space, and to protect the principle of freedom of expression online.”

    The US response comes as Russia and China have signed a cybersecurity deal wherein the two countries agree not to conduct cyberattacks against each other. The pact also includes the two countries agreeing to “exchange information between law enforcement agencies, exchange technologies and ensure security of information infrastructure,” the Wall Street Journal reports.

    In March, the non-profit group GreatFire.org was hit by a sustained DDoS attack that drove 2,500 times its normal traffic. The organization said the attack “is an exhibition of censorship by brute force.”

    This first ran at http://www.thewhir.com/web-hosting-news/us-demands-answers-from-china-over-great-cannon-cyberattacks

    7:29p
    Negotiating the Alignment Between Business and Data Center Objectives

    Join us for a new webcast on Tuesday, May 19, 2015, at 2:00 p.m. ET to learn why it is more critical than ever to align your data center and business objectives.

    Modern demands and technology are continually driving innovation in businesses across all industries. With each advancement, more bandwidth, speed, and storage are required, taxing even the most nimble data centers. Every business is different; therefore, every IT strategy is different, too.

    This webcast will explore how different industries are changing the way they do business and how that directly impacts the data center. Learn what business drivers are shaping today’s network infrastructures and what you should care about next. Finally, you’ll get 5 key ways to make sure your data center is optimized for the technology of tomorrow, no matter your line of business.

    About the Presenter

    Tony Walker
    Data Center Business Development and Strategy
    TE Connectivity

    About the Moderator

    Bill Kleyman
    Director of Strategy and Innovation
    MTM Technologies

    Register Now

