Data Center Knowledge | News and analysis for the data center industry - Industr's Journal
 

Thursday, March 23rd, 2017

    Time Event
    2:00p
    Mashape, Creator of Top Open Source API Management Tool Kong, Raises $18M

    Mashape, a rising star in the quickly growing API management space, has raised its second funding round, led by Andreessen Horowitz, one of Silicon Valley’s top VCs investing in enterprise technology.

    Mashape’s new investor joins other big names in tech who have previously put money into the company, such as New Enterprise Associates, Alphabet executive chairman Eric Schmidt’s Innovation Endeavors, and Amazon founder and CEO Jeff Bezos.

    The Series B round is $18 million and comes four years after the startup’s A round, which was $6.5 million. Previous investors CRV and Index Ventures also participated in the latest round.

    Martin Casado, the Andreessen Horowitz partner who spearheaded the deal, has joined Mashape’s board. Casado is something of a superstar in the enterprise software world, especially in relation to data centers. He co-founded Nicira Networks, a pioneer in software-defined networking (SDN), which VMware acquired in 2012 for $1.26 billion. Nicira’s technology became a major part of the foundation of NSX, VMware’s now widely used network virtualization platform.

    Casado joined the VC firm, which was Nicira’s first institutional investor, about a year ago.

    APIs, or Application Programming Interfaces, essentially allow different pieces of software to connect and work with each other. You can make a PayPal money transfer straight from your mobile banking application, for example, because PayPal is connected to your bank’s app through an API.
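    To make the idea concrete, the request one app sends to another over an API is typically a small structured payload. The sketch below is purely illustrative; the field names and the transfer scenario are hypothetical, not PayPal’s actual API:

    ```python
    import json

    # Hypothetical payments API contract (illustrative only).
    # The banking app and the payments service agree on this request shape.
    def build_transfer_request(sender, recipient, amount_cents, currency="USD"):
        """Build the JSON body a banking app would POST to a payments API."""
        return {
            "sender": sender,
            "recipient": recipient,
            "amount": {"value": amount_cents, "currency": currency},
        }

    body = build_transfer_request("alice@example.com", "bob@example.com", 2500)
    print(json.dumps(body))
    ```

    The point is the contract: as long as both sides agree on this shape, the two applications can evolve independently behind it.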

    Used initially to provide mobile access to monolithic enterprise software codebases, the concept has evolved alongside the emergence of Docker containers, which allow individual micro-services to come together and form software solutions instead of single code monoliths, Augusto Marietti, Mashape co-founder and CEO, explained in an interview with Data Center Knowledge.

    A ride-hailing application like Uber or Lyft for example may consist of individual micro-services for billing, payments, mapping, notifications, and many others. Each service in a way is a separate application, while the old monolithic approach is to bake all the functionality into a single big app.

    Micro-services interconnect via APIs. Also relying on APIs are micro-functions, another recent concept, which break services up further into more granular functions. This is related to the idea of “serverless computing,” where cloud providers like Amazon abstract infrastructure away from the user entirely, providing various micro-functions as cloud services instead.
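    A micro-function in this model often amounts to a single handler invoked by the platform. A minimal sketch in the style of AWS Lambda’s Python handler convention; the event shape here is an assumption for illustration:

    ```python
    # A micro-function in the style of a serverless platform's Python handler.
    # The platform invokes it per request; no server process is managed by the user.
    def handler(event, context=None):
        """Respond to a single event (event fields are illustrative)."""
        name = event.get("name", "world")
        return {"statusCode": 200, "body": f"Hello, {name}"}
    ```

    An API gateway would typically map an HTTP route to this function, which is exactly where API management tools enter the picture.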

    Because companies now have so many APIs to deal with, there’s hunger for API management solutions, and the market is heating up. MuleSoft, an integration software provider whose Anypoint Platform competes with Mashape’s, held a successful IPO earlier this month; Oracle acquired API management startup Apiary in January; Google snapped up Apigee in September of last year.

    Mashape differentiates itself by focusing on APIs for this new micro-service and micro-function-driven world, while many of its competitors are providing API management that deals with the old enterprise software monoliths, according to Marietti.

    Market research firm Gartner has highlighted Mashape as one of the few API management platforms that are still independent, identifying the VC-funded startup as an attractive acquisition target.

    Mashape owes its popularity to Kong, the API management solution it open sourced in 2015. While there are numerous other open source API management tools, such as WSO2’s API Manager, Tyk, and Netflix’s Zuul, Kong appears to be the most popular one today.

    The company lists Spotify, Citibank, Giphy, and Rakuten as customers. It makes money by selling a closed-source enterprise platform based on Kong, which includes advanced features for large organizations, such as analytics, access control, security, and a developer portal, as well as support and service-level agreements.
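    In practice, Kong is configured through its admin API, which by default listens on port 8001. A sketch in Python, assuming a local Kong instance; the exact endpoint and field names vary across Kong versions, so treat this as illustrative:

    ```python
    import json
    import urllib.request

    KONG_ADMIN = "http://localhost:8001"  # Kong's default admin API address

    def register_service(name, upstream_url):
        """Build the request that registers an upstream service with Kong.

        Follows the shape of Kong's /services admin endpoint; field names
        can differ between Kong versions, so this is a sketch, not a spec.
        """
        body = json.dumps({"name": name, "url": upstream_url}).encode()
        return urllib.request.Request(
            f"{KONG_ADMIN}/services",
            data=body,
            headers={"Content-Type": "application/json"},
            method="POST",
        )

    req = register_service("billing", "http://billing.internal:8000")
    # urllib.request.urlopen(req) would send it to a running Kong instance.
    ```

    Routing, authentication, and rate limiting are then layered onto such services through Kong’s plugin mechanism, which is where the enterprise features mentioned above plug in.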

    Its software can be hosted in companies’ own enterprise data centers or in the cloud. Today about 50 percent of installs are on Amazon Web Services, 20 percent are split between Google Cloud Platform and Microsoft Azure, a small percentage resides in IBM Bluemix, Alibaba, and Red Hat OpenShift clouds, and a large chunk runs in on-premises enterprise data centers, Marietti said.

    3:00p
    Euro Bank Regulator Closely Watching Banks’ Move to Cloud

    (Bloomberg) — The European Central Bank is keeping a close watch on lenders’ increasing adoption of cloud computing technology, according to an official at the euro area’s top banking supervisor.

    “Outsourcing isn’t a bad word for us, but neither we nor banks can always easily monitor those activities,” said Francois-Louis Michaud, a deputy director general at the ECB who is reviewing banks’ use of the cloud. “They need to think about this balance between opportunities and risks as outsourcing opportunities increase.”

    Some of Europe’s banks are choosing outside companies to host data for them as they fight to raise profitability by cutting costs and developing innovative financial services. While banks know that information is among their most valuable and sensitive assets, the ECB has made evaluating such outsourcing practices one of its priorities for banking supervision this year.

    “When we visit banks for an on-site inspection, there are a range of actions the supervisors take afterwards,” he said. “We send a follow-up letter with recommendations we expect them to implement to eliminate any weaknesses we may find.”

    Cost Pressures

    An ECB survey found that banks spent 42 percent of their information technology budgets on outsourcing services last year, up from 36 percent in 2011, Michaud said. The cloud accounted for a “small piece,” at 1 percent of the total, he said.

    Still, cost-cutting and competitive pressures are increasingly pushing banks toward the cloud, which lets users store and process information at third-party data centers.

    “There isn’t all that much concentration in terms of the big cloud providers servicing the banks we oversee, although in some countries there can be more focus than others,” he said. “We don’t see any banks being really ahead of the pack, neither in their outsourcing strategies, nor regarding implementation.”

    5:04p
    Microsoft Probes Cause of Global Web Outage

    Brought to you by MSPmentor

    Microsoft technicians on Wednesday continued to search for the cause of a massive outage that disrupted user access to Office 365, Skype, Xbox Live and other online services, in some cases for more than 16 hours.

    The outage, which affected large swaths of the U.S. and Europe, was the second disruption of Microsoft’s online services this month, though a March 7 disruption lasted only about an hour.

    See also: AWS Outage that Broke the Internet Caused by Mistyped Command

    This week’s disruption began Tuesday at about 1:15 p.m. U.S. Eastern Time and was declared resolved at 5:50 a.m. ET the following day.

    “We’ve monitored the infrastructure and have confirmed that restarting the affected systems remediated impact,” Microsoft said on the Office 365 status page.

    The longest-running disruption involved the Office 365 OneDrive file-hosting service.

    “In some cases, after signing in to OneDrive, users were unable to access their content,” that status report said. “As the issue was intermittent in nature, users may have been able to reload the page or make another attempt successfully.”

    An initial attempt to restore OneDrive was unsuccessful.

    See also: How to Survive a Cloud Meltdown

    “We’ve determined that the previously resolved issue had some residual impact to the service configuration for OneDrive,” Microsoft said in a status update Tuesday afternoon. “We’re performing an analysis of the affected systems to determine what further steps are needed for full recovery.”

    At the height of the outage, those affected were unable to access Outlook email.

    “Users may be intermittently unable to sign in to the service,” that advisory said. “As the issue is intermittent in nature, users may be able to reload the page or make another attempt successfully.”

    It’s unclear precisely how, or whether, the outage was connected to a disruption of Microsoft’s Azure cloud that occurred during the same period Tuesday.

    “Between 17:30 and 18:55 UTC on 21 Mar 2017, a subset of Azure customers may have experienced intermittent login failures while authenticating with their Microsoft Accounts,” reads the advisory on the Azure status page.

    “This would have impacted the ability for customers to authenticate to their Azure management portal (https://portal.azure.com), PowerShell, or other workflows requiring Microsoft Account authentication,” it continued. “Customers authenticating with Azure Active Directory or organizational accounts were unaffected.”

    Microsoft deployed a patch to end the 85-minute outage and work continued today to figure out exactly what happened.

    “Engineers will continue to investigate to establish the full root cause and prevent future occurrences,” Microsoft said.

    This article originally appeared on MSPmentor.

    7:24p
    Sponsored: Creating Power Capacity Planning Best Practices – Starting With Your PDU

    Almost every organization is now experiencing a digital shift in which its data centers have become a direct part of the business. Today, cloud computing, converged infrastructure, and high-density workloads have all placed new types of capacity challenges on the modern data center. IDC says that worldwide spending on cloud services will grow at a 19.4% compound annual growth rate (CAGR), almost six times the rate of overall IT spending growth, from nearly $70 billion in 2015 to more than $141 billion in 2019. Data center operators today must plan around their distributed environments and ensure that capacity planning around power is done properly.

    According to research by The Green Grid into European data center usage, energy efficiency and operating costs are the most commonly reported areas of the data center requiring improvement. Furthermore, the difficulty of predicting future cost (43 percent) and the cost of refreshing hardware (37 percent) are cited as top challenges of developing resource-efficient data centers, along with the difficulty of meeting environmental targets (33 percent).

    So, when it comes to data center requirements and power distribution, how do you create best practices around capacity planning? And how do you incorporate intelligence into your power management architecture?

    With these challenges in mind, organizations must look to leaders in the power industry to help them achieve maximum data center efficiency and power utilization. Server Technology recently released its High Density Outlet Technology (HDOT) PDUs in Switched and Smart Per Outlet Power Sensing (POPS) architecture, which others refer to as “metered outlet.” This is the most feature-rich rackmount PDU Server Technology has ever developed, and the first time the company has packed its industry-leading features into a single rackmount PDU, making it one of the most advanced PDU solutions on the market. The HDOT POPS PDUs provide maximum flexibility, unparalleled uptime, and accurate capacity planning.

    On that note, let’s discuss capacity planning and density:

    • Capacity Planning Design. With POPS technology, this PDU can securely monitor power per individual outlet/device, including current, voltage, power (kW), apparent power, crest factor, accumulated energy, and power factor. Switched POPS technology provides the flexibility needed for all data centers and remote sites, including support for high-amperage and high-voltage power requirements, Branch Circuit Protection, and SNMP traps and email alerts with current monitoring. When paired with Sentry Power Manager (SPM), Server Technology’s award-winning power management solution, Switched POPS technology provides detailed power data within the cabinet.
    • Power Density Design. HDOT Alt-Phase provides better efficiency. The idea is simple: Stay Green. Save Green. This proprietary outlet design allows users to fill narrow or shallow racks with 36 to 54 devices using 36 to 54 outlets. Because the PDU is available through Build Your Own PDU online configurators, users can order a PDU with their desired outlet configuration, with the right outlets in the right place. With Alternating Phase technology, devices can be plugged in from top-to-bottom or bottom-to-top without disrupting phase and load balance. This allows for shorter cords, lowering cooling costs and simplifying cable inventory.
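    The per-outlet quantities listed above are related by standard AC power formulas: apparent power is voltage times current, and real power is apparent power scaled by the power factor. A minimal sketch; the sample values are illustrative, not Server Technology specifications:

    ```python
    def outlet_power(volts, amps, power_factor):
        """Derive the per-outlet quantities a metered-outlet PDU reports."""
        apparent_va = volts * amps            # apparent power, in volt-amperes
        real_w = apparent_va * power_factor   # real power, in watts
        return {"apparent_va": apparent_va, "real_w": real_w}

    # Example reading: 208 V outlet drawing 2.5 A at a 0.95 power factor.
    m = outlet_power(208.0, 2.5, 0.95)
    ```

    Tracking real rather than apparent power per outlet is what makes bill-back and accurate capacity forecasts possible.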

    Capacity Planning – Start with Power

    Planning for the growth of power usage relative to capacity is critical at all levels of the power chain; however, if the design is implemented well, capacity at the rack level can be predicted based on measurements of each piece of IT equipment.
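    As a sketch of that idea, per-device measurements can be summed against a derated circuit budget to predict remaining rack capacity. The 80 percent derating below is a common rule of thumb for branch circuits, an assumption here rather than a figure from the article:

    ```python
    def rack_headroom(device_watts, circuit_watts, derate=0.8):
        """Predict rack power headroom from per-device measurements.

        derate: branch circuits are commonly loaded to ~80% of their
        rating (a rule-of-thumb assumption, not a vendor figure).
        """
        used = sum(device_watts)
        budget = circuit_watts * derate
        return budget - used

    # Three measured devices on a circuit rated for 5,000 W.
    headroom = rack_headroom([300, 450, 250], 5000)
    ```

    With per-outlet measurements feeding such a calculation continuously, adding a server becomes a lookup rather than a guess.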

    When designing your own data center architecture, it’s important to look at use cases and to design around specific capacity planning requirements. Capacity planning is an absolute must, especially when creating efficient power environments, so consider how power is distributed between racks and throughout the data center. This is why a good PDU is so important.

    For example, the PDU architecture from Server Technology is perfect for customers who:

    • Have 36 to 54 devices in their cabinet
    • Need quick-turn PDUs available with a large number of outlet variations
    • Are looking at bill-back to departments
    • Are looking to minimize power drops
    • Need high-temperature PDUs
    • Need three-phase power in a common form factor

    Many data center operators have created a science out of maximizing server utilization and data center efficiency, contributing in a big way to the slow-down of the industry’s overall energy use. Today, data center providers are making investments in improvements that will positively impact the efficiency of their facilities infrastructure, as well as the power and data center capacity that supports their clients’ IT gear.

    With the HDOT in Switched and Smart POPS, customers no longer need to pick and choose a single solution based on their current needs. The HDOT POPS rackmount PDU addresses the three data center pain points: capacity planning, power density, and uptime.

    Moving forward, data center managers will need to work with their ecosystem to ensure the best possible performance and utilization. Data center sprawl is a real-world issue, and Server Technology PDUs are designed to help with next-generation requirements as well as enabling unparalleled capacity.

    This article was sponsored by Server Technology. Please visit their solutions page for more information. 

