Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, February 25th, 2015

    1:00p
    Optimizing the Entire Data Center Without Breaking the Budget

    We have more users, more workloads, and a lot more data traversing today's cloud and data center platforms, and the trends indicate that this growth will only continue. When it comes to data center growth and big data, there is something very important to understand: the value of data and information is much higher than it has ever been. With that in mind, content delivery, user optimization, and data quantification are critical aspects of the modern business.

    Through it all, we’re still being asked to optimize and have everything run as efficiently as possible. But what can you do to optimize and not break your budget? Well, there are some new technologies that can help take your data center to the next level.

    • The logical data center layer. There is so much virtualization out there that it can feel a bit overwhelming, but it also means you can likely find the right optimization technology for your environment. Looking to optimize your network? Network virtualization such as VMware NSX can help you avoid buying new gear. How about storage? Software-defined storage can completely abstract your storage layer and help you expand. Definitely take a look at the new SDx and virtual technologies out there that help you optimize existing resources. The great part is that you can use virtual appliances to optimize your data center, and unlike before, you can actually test drive these tools to see how well they work. If you like one, keep it and expand it into your data center. If not, removing it is nothing more than spinning down a VM.
    • Utilizing the hybrid cloud. Did you know that it's now much, much easier to extend your infrastructure into the cloud? OpenStack, CloudStack, Eucalyptus, and many others are making data centers far more extensible. Burst technologies and load balancing allow for a seamless transition between private and public resources. For example, the Eucalyptus environment provides direct API compatibility with numerous Amazon services, so you can bring the power of the public cloud directly to your organization. With features like auto-scaling and elastic load balancing, the Euca cloud allows for truly robust cloud infrastructure control. A hybrid cloud extension is a great way to use resources only when you need them, and the big difference is that it's much easier to do so now (see the sketch after this list).
    • Commodity hardware. Now, before anyone gets upset, I'm not telling you to replace your entire data center with white-box servers and gear. Unless you want to, of course. However, with software-defined controls and virtualization, the control layer has become logical. That means the underlying hardware matters much less, because the control layer can run on any hypervisor at any data center. It's something to think about when optimizing both hardware and software platforms. Let me give you a more specific example. In a recent DCK article, we outlined how Rackspace introduced dedicated servers that behave like cloud VMs. The offering, called OnMetal, provides cloud servers that are single-tenant, bare-metal systems. You can provision them in minutes via OpenStack, mix and match them with other virtual cloud servers, and customize performance delivery. Basically, you can design your servers around specific workload or application needs, including optimizations for memory, I/O, and compute. Pretty powerful stuff.
    • Asset management and monitoring. The distribution of the modern data center across branch offices and micro-clouds has created a bit of an asset issue. So how well are you tracking all of your gear? Proactively knowing what you have, everywhere, helps identify where gaps can be filled, and that goes for both hardware and software. Proactively monitoring resources from a logical layer helps control resource leaks, and monitoring from a visibility perspective is a great way to re-allocate resources when needed, which means you don't have to buy anything new just yet. Let me give you a specific example around cloud monitoring. CA Nimsoft Monitor offers pretty much every monitoring component you need to create cloud workload intelligence. Application monitoring support includes Apache systems, Citrix, IBM, Microsoft, SAP, and more. Plus, if you're working with an existing cloud infrastructure or management platform, Nimsoft integrates with Citrix CloudPlatform, FlexPod, Vblock, and even your own public/private cloud model. The list of supported monitoring targets spans servers, networks, storage, virtualization, and more. By understanding, monitoring, and managing your cloud and data center resources, you can make very intelligent decisions about your entire infrastructure.
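
    To make the hybrid cloud point above concrete: because Eucalyptus and similar stacks expose AWS-compatible APIs, the same tooling you already use against the public cloud can often be pointed at a private endpoint. Below is a minimal sketch using boto3 against a hypothetical AWS-compatible private cloud endpoint; the endpoint URL, credentials, image ID, and instance type are placeholders, not values from any real deployment.

    ```python
    # Minimal sketch: bursting capacity through an AWS-compatible API
    # (e.g., a Eucalyptus private cloud). Endpoint, credentials, image ID,
    # and instance type are hypothetical placeholders.
    import boto3

    ec2 = boto3.client(
        "ec2",
        endpoint_url="https://cloud.example.internal:8773/",  # private cloud endpoint (placeholder)
        region_name="private-1",
        aws_access_key_id="EXAMPLE_KEY",
        aws_secret_access_key="EXAMPLE_SECRET",
    )

    # Spin up extra capacity only when the on-premises pool is saturated.
    response = ec2.run_instances(
        ImageId="emi-12345678",   # machine image ID (placeholder)
        InstanceType="m1.large",
        MinCount=1,
        MaxCount=4,
    )

    for instance in response["Instances"]:
        print("Bursted instance:", instance["InstanceId"])
    ```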

    Your data center will only continue to evolve. Modern user and business demands are pushing data center technologies to new levels almost weekly, and the growth in data and the need to control, replicate, and use this information is just as critical. The really great part today is that you don't have to buy another piece of hardware just to optimize your data center. Next-gen technologies are now helping organizations create a much more efficient data center without breaking the budget.

    4:30p
    The Hybrid Cloud Integration Challenge

    Eddie Cole, vice president of engineering, Scribe Software.

    While moving to the cloud is becoming a common business decision, many organizations are still trying to figure out the best path to meet their business needs and IT infrastructure goals. The resulting discussions often add up to a hybrid approach where organizations tap the value of both public and private clouds for different services, which can lead to complex cloud silos that trap important data.

    When moving to a hybrid cloud, integration professionals must take four new considerations into account: understanding SaaS data policies and API limitations, implementing new strategies for moving data, mitigating identity and licensing challenges, and planning for new security risks in the cloud.

    No Backdoor to SaaS Applications

    Before beginning a hybrid cloud integration initiative, you will need to understand the data policies and API limitations of each SaaS vendor that you work with. No two are created equal, but two new challenges to plan around are API rate limiting and a lack of full CRUD operations on all entities.

    Unlike on-premise solutions, most SaaS applications implement some form of rate limiting, either through well-defined rules and policies or through poor performance exhibited under load. Your best bet for both cases is to discover the limitations as soon as possible and plan accordingly. The first case is usually easy to research, while the second can only be discovered by exercising the API.
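
    As a concrete illustration, here is a minimal sketch of exercising a SaaS endpoint while backing off when the API signals rate limiting. The endpoint URL is hypothetical, and the HTTP 429 status and Retry-After header are common conventions rather than a guarantee for any particular vendor.

    ```python
    # Minimal sketch: probing for rate limits by honoring HTTP 429 responses.
    # The endpoint URL is a placeholder; real SaaS APIs vary in how they signal limits.
    import time
    import requests

    def get_with_backoff(url, max_retries=5):
        delay = 1.0
        for attempt in range(max_retries):
            resp = requests.get(url, timeout=30)
            if resp.status_code != 429:          # not rate limited
                resp.raise_for_status()
                return resp.json()
            # Honor Retry-After if the vendor provides it; otherwise back off exponentially.
            delay = float(resp.headers.get("Retry-After", delay * 2))
            time.sleep(delay)
        raise RuntimeError("Rate limit never cleared after %d retries" % max_retries)

    records = get_with_backoff("https://api.example-saas.com/v1/contacts")
    ```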

    The other new big blocker is that not all APIs offer full CRUD operations on all of their entities. In fact, more often than not the API will only expose a subset of the data model you previously had access to from the backend. Even if the API exposes the entity you are looking for and the operation you want, you still need to make sure the entity contains all of the properties you need to integrate. Very often we find APIs that expose entities missing properties that are available in the application’s UI. These two issues are especially true for user-defined entities and properties. Many APIs either do not expose these custom types, or do so through an entirely different set of calls. Always be sure to know what data entities are available and what operations you can perform on them before you commit to any hybrid integration project.
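
    One practical way to verify coverage up front is to pull whatever schema or metadata the API exposes and diff it against the fields and operations your integration needs. The sketch below assumes a hypothetical metadata endpoint and response shape; many real SaaS APIs offer something similar (a describe or metadata call), but the details vary by vendor.

    ```python
    # Minimal sketch: checking that an API actually exposes the entity, fields,
    # and operations we plan to integrate. Endpoint and response shape are hypothetical.
    import requests

    REQUIRED_FIELDS = {"Id", "Email", "AccountId", "CustomRegion"}  # illustrative field names

    meta = requests.get("https://api.example-saas.com/v1/metadata/Contact", timeout=30).json()
    exposed_fields = {f["name"] for f in meta.get("fields", [])}
    exposed_ops = set(meta.get("operations", []))      # e.g. {"create", "read", "update"}

    missing_fields = REQUIRED_FIELDS - exposed_fields
    missing_ops = {"create", "read", "update", "delete"} - exposed_ops

    if missing_fields or missing_ops:
        print("Gaps to plan around:", missing_fields, missing_ops)
    ```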

    Limitations to Moving Data

    Especially when it comes to large data migrations, you will likely need to come up with new strategies for moving on-premise data to public/private cloud stores or SaaS applications. There are typically three strategies to choose from.

    Parallel Processing. This is usually the simplest strategy. As long as you are not running into a rate limiting issue, very often you can break large data migrations into separate processes that can be run in parallel. The trick is to understand how the entities you are migrating are related, and how your data itself can be segregated.
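
    A minimal sketch of that idea, assuming the source records can be partitioned into independent segments (here, by region) and that the hypothetical migrate_segment function handles one segment end to end:

    ```python
    # Minimal sketch: splitting a large migration into independent segments and
    # running them in parallel. migrate_segment and the segment keys are hypothetical.
    from concurrent.futures import ThreadPoolExecutor, as_completed

    def migrate_segment(region):
        # ... extract records for this region, transform, and load them ...
        return region, 0  # (segment, records migrated) placeholder

    segments = ["us-east", "us-west", "emea", "apac"]

    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(migrate_segment, seg) for seg in segments]
        for future in as_completed(futures):
            region, count = future.result()
            print(f"{region}: migrated {count} records")
    ```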

    Incremental Loading. Incremental loading is all about the “slow and steady wins the race” approach. Usually the time available for a given data migration is based on how long you can keep users off the system. By taking an incremental loading approach, data is moved based on change date from oldest to newest, so the two systems are eventually synced up to the point where the downtime for cutover depends only on the data change rate, not the size of your data.
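
    A minimal sketch of the incremental approach, assuming the source system can be queried by modification date; fetch_changed_since and load_batch are hypothetical helpers, and the checkpoint would normally be persisted between runs.

    ```python
    # Minimal sketch: incremental loading ordered by change date, oldest first.
    # fetch_changed_since and load_batch are hypothetical helpers.
    from datetime import datetime, timezone

    def fetch_changed_since(since, limit=500):
        # ... query source records with modified_at > since, ordered ascending ...
        return []  # placeholder

    def load_batch(records):
        # ... push the batch into the target SaaS application ...
        pass

    checkpoint = datetime(2015, 1, 1, tzinfo=timezone.utc)  # last synced change date

    while True:
        batch = fetch_changed_since(checkpoint)
        if not batch:
            break  # caught up; only the ongoing change rate remains for cutover
        load_batch(batch)
        checkpoint = max(r["modified_at"] for r in batch)
    ```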

    External Key Cross Referencing. This last strategy is perhaps the best way to optimize regardless of your approach. The concept involves building a cross reference of keys between the systems on a fast local data store, meaning that the integration never needs to perform costly searches on the slow and/or rate limited SaaS application to relate entities.
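
    A minimal sketch of an external key cross-reference kept in a fast local store (SQLite here), so the integration can translate local keys to SaaS IDs without searching the remote, rate-limited API. The file, table, and column names are illustrative.

    ```python
    # Minimal sketch: a local key cross-reference so lookups never hit the
    # slow/rate-limited SaaS API. Table and column names are illustrative.
    import sqlite3

    db = sqlite3.connect("xref.db")
    db.execute(
        "CREATE TABLE IF NOT EXISTS key_xref ("
        " local_id TEXT PRIMARY KEY, saas_id TEXT NOT NULL)"
    )

    def remember(local_id, saas_id):
        db.execute("INSERT OR REPLACE INTO key_xref VALUES (?, ?)", (local_id, saas_id))
        db.commit()

    def to_saas_id(local_id):
        row = db.execute(
            "SELECT saas_id FROM key_xref WHERE local_id = ?", (local_id,)
        ).fetchone()
        return row[0] if row else None

    remember("CRM-1001", "saas-003-abc")   # recorded when the record is first created
    print(to_saas_id("CRM-1001"))
    ```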

    Identity and Ownership Challenges

    In the on-premises world, you could simply add additional users as part of your migrations or integrations, so that the proper data could always be related to the proper principals on each side. Unfortunately, SaaS applications often pose licensing issues around adding users. My favorite solution is to add all the real SaaS users first, and then build an external cross-reference as mentioned above, which you can use to map your users between the systems. This approach performs very well and also supports mapping many users into a single SaaS user if needed (not that I think you might be looking to skimp on a few seats).
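
    The same cross-reference idea applied to users, as a minimal sketch; the mapping below is hypothetical and shows several on-premises accounts resolving to one licensed SaaS seat.

    ```python
    # Minimal sketch: mapping on-premises users to SaaS users via a cross-reference.
    # Several source users can share a single licensed SaaS seat if needed.
    USER_XREF = {
        "jsmith@corp.local":      "saas-user-001",
        "mjones@corp.local":      "saas-user-002",
        "svc-reports@corp.local": "saas-user-shared",   # service accounts share one seat
        "svc-billing@corp.local": "saas-user-shared",
    }

    def saas_owner(local_user):
        # Fall back to a designated integration user for anyone unmapped.
        return USER_XREF.get(local_user, "saas-user-integration")

    print(saas_owner("svc-billing@corp.local"))
    ```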

    Security Is a Top Concern

    Integration by its very nature is a security anti-pattern, as its ultimate goal is to expose secure data to other systems for consumption. Of course, any well-designed integration approach will only expose data to trusted systems in a secure manner. However, as data moves to the public cloud, where there is more risk, integration necessarily faces more obstacles to reach that secure data and to keep it secure once it's in process.

    Many SaaS applications allow both credential checking and IP whitelisting. If available, use both. If possible, avoid storing credentials, and if nothing else, store them encrypted. For encryption of either credentials or any data you might put across the wire, remember that encryption schemes age quickly; always keep your approach up to date.
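
    If you do have to store credentials, a minimal sketch with the Python cryptography package's Fernet recipe is shown below; in practice the key itself must live in a secrets manager or similar, never alongside the ciphertext, and the credential string here is a placeholder.

    ```python
    # Minimal sketch: encrypting stored credentials with the cryptography package.
    # Keep the key in a secrets manager, not next to the ciphertext.
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()          # store this separately and rotate it
    cipher = Fernet(key)

    token = cipher.encrypt(b"api-user:example-password")   # what you persist
    plaintext = cipher.decrypt(token)                       # only at the moment of use
    print(plaintext.decode())
    ```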

    The hybrid cloud offers enormous advantages for companies seeking the best of private and public clouds for scalability, price, control, and flexibility. While companies with sufficient manpower and expertise can follow the steps above to navigate these new integration challenges successfully, any business can achieve its hybrid cloud goals by enlisting systems integrators and/or leveraging third-party tools that specialize in cloud integration. Either approach works, but be sure to plan for the new challenges and realistically assess your company's available resources and timeline before embarking on a journey to the hybrid cloud.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    5:52p
    eBay Open Sources Pulsar, a Real-Time Analytics Framework

    Online auction giant eBay introduced an open source real-time analytics framework called Pulsar. eBay said it was using Pulsar in production at scale and was now making it available for others. Pulsar is licensed under the Apache 2.0 License and GNU General Public License version 2.0.

    Pulsar is an example of a wider bifurcation in how companies handle the massive amounts of data they now have access to: some needs are best served by batch processing, while others call for on-the-fly analysis. Pulsar was built in response to the real-time side of that split.

    The company uses Hadoop for batch processing, delegating real-time analysis of user interactions to Pulsar. Batch processing has been successfully used for user behavior analytics, but newer use cases demand collection and processing in near real time, within seconds, according to the company. Real-time analysis leads to better personalization, marketing, and fraud and bot detection.

    These real-time needs prompted the company to build its own Complex Event Processing framework. It was built to be fast, accurate, and flexible.

    Pulsar is capable of scaling to a million events per second, according to a company blog post. It has sub-second latency for event processing and delivery. There’s no cluster downtime during upgrades and topology updates, and it can be distributed across data centers using standard cloud infrastructure.

    Pulsar also includes a Java-based framework so developers can build other applications atop it.

    The Pulsar deployment architecture (Source: eBay tech blog)

    Pulsar uses an “SQL-like event processing language,” according to the blog post's authors, Sharad Murthy, eBay's corporate architect, and Tony Ng, the company's director of engineering. The language is used to collect and process user and business events in real time, providing key insights that systems can react to within seconds.

    Atop the CEP framework, the company implemented a real-time analytics pipeline that shows how the different parts work together. Some of the processing it performs includes enrichment, filtering and mutation, aggregation, and stateful processing.

    The pipeline can be integrated with different systems. Two examples given are sending events to a visual dashboard for real-time reporting and tying the pipeline to backend systems that can react when certain things happen.

    Developers can run SQL queries for analytic purposes. “In Pulsar, our approach is to treat the event stream like a database table,” said Murthy and Ng on the blog. “We apply SQL queries and annotations on live streams to extract summary data as events are moving.”
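
    Pulsar's own event processing language and syntax are documented in eBay's blog post. As a language-neutral illustration of the idea of treating a live stream like a table, here is a toy Python sketch that extracts a per-minute summary as events arrive; it is not Pulsar code, and the event shape is made up for the example.

    ```python
    # Toy illustration (not Pulsar code): treating an event stream like a table and
    # extracting a rolling per-minute summary as events flow by.
    from collections import Counter, defaultdict

    def summarize(events):
        """events: iterable of dicts like {"ts": 1424890000, "type": "bid", "site": "us"}"""
        per_minute = defaultdict(Counter)
        for event in events:
            minute = event["ts"] // 60            # bucket events into one-minute windows
            per_minute[minute][event["type"]] += 1
        return per_minute

    sample = [
        {"ts": 1424890000, "type": "bid", "site": "us"},
        {"ts": 1424890030, "type": "view", "site": "de"},
        {"ts": 1424890065, "type": "bid", "site": "us"},
    ]
    for minute, counts in sorted(summarize(sample).items()):
        print(minute, dict(counts))
    ```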

    eBay plans to include a dashboard and API for integrating with other services.

    eBay is smart when it comes to handling and visualizing data and understanding how it relates to the bigger picture. In 2013, the company unveiled its Digital Service Efficiency dashboard at the Green Grid Forum. DSE is a system of metrics that ties data center performance to business and transactional metrics. In short, it shows how turning one knob affects other parts of the infrastructure, and the dashboard sums it all up with a “miles per gallon” measurement for technical infrastructure.

    7:10p
    Dell, Nutanix Intro Next-Gen Converged Infrastructure Appliances

    Dell announced two new turnkey XC Series converged infrastructure appliances just months after launching the first generation of products. The vendor partnered on the appliances with Nutanix, catering to VDI (virtual desktop infrastructure) and virtualized server workloads in the enterprise.

    A focal point of Dell's software-defined storage portfolio, the new solutions offer more than 50 percent additional storage capacity and up to twice the rack density of the previous generation, Dell said. The end-to-end solutions combine Nutanix software with Dell hardware and global services and support, according to the company.

    The new XC630 and XC730xd are built on Dell's 13th-generation servers, which feature Intel Xeon E5-2600 v3 processors. Dell says that while the primary use for the XC line so far has been virtualization and virtual desktop infrastructure, a lower entry price and greater capacity and density should widen the range of potential workloads to things such as private cloud or big data installations. Nutanix OS adds an abstraction layer that detaches provisioning from physical hardware, along with other software-defined features.

    The market for software-defined storage platforms and products has certainly been vibrant in recent months: former VMware employees launched Springpath, a hardware-independent SDS platform; FalconStor and SUSE introduced new offerings; and IBM added an intelligent software layer with its Spectrum Storage software portfolio.

    Squeezing 16 terabytes into each rack unit, the XC630 is Dell's entry-level 1RU offering, packing more VDI users into half the space of previous editions; Dell says it also substantially lowers the price of entry to the portfolio. Able to support 60 percent more storage than previous generations, the new XC730xd appliance can hold up to 32 terabytes in just 2RU of space.

    The goal of the all-inclusive XC series from Dell and Nutanix is to ease the cost and complexity of deploying applications, while incrementally scaling as requirements demand it.

    7:56p
    Melbourne IT Acquires Australian Cloud Services Provider Uber Global


    This article originally appeared at The WHIR

    Melbourne IT has reached an agreement to acquire cloud services provider and domain registrar Uber Global for $15.5 million AU (approximately $12.2 million USD), the company announced Wednesday. In an announcement made via the Australian Stock Exchange, where it is listed, Melbourne IT reported the acquisition along with its 2014 financial results, and indicated its plans to continue growing its domains and hosting business.

    Uber Global, which bears no relation to the similarly named ride-sharing app, serves a customer base of more than 70,000, plus over 400 resellers, with domains, hosting and cloud applications, white-labelled business services through channel partners, and bespoke cloud solutions. The Uber website says the company is the country's third-largest cloud provider and counts 110,000 customers altogether.

    Uber Global operates the brands Uber Enterprise, Uber Wholesale, ilisys Web Hosting, and SmartyHost. The company has a forecasted EBITDA for the coming year of $2 million AU, and Melbourne IT figures that synergies will net it another half million once fully integrated in 2017.

    Melbourne IT Chairman Simon Jones said, “Melbourne IT’s Board sees the acquisition as a major opportunity to achieve additional scale and to strengthen the core domains and hosting business. It also aligns with the Board’s stated requirement that such acquisitions be earnings accretive from day one.”

    The acquisition is being funded out of existing cash reserves, which were bolstered by a year of major growth for Melbourne IT. The company reports that its financial year 2014 revenue was up 21 percent, and its net profit after tax rose 27 percent after normalization for its acquisition of Netregistry.

    Melbourne IT purchased Netregistry for over $50 million AU in early 2014, which increased its scale and the breadth of its operations.

    The profits and acquisitions mark a significant turnaround for Melbourne IT, which sold off its corporate domain name and online brand services divisions in 2013 in the wake of reporting disastrous financial results.

    The Australian government plans to ramp up investment in cloud services with the help of an advisory board, for which it named the first 49 industry members last week. While neither Melbourne IT nor Uber Global is on the list, that does not prevent them from competing for government contracts in the future, and more companies are expected to be added.

    While Melbourne IT has focussed largely on the SMB market in the past, it clearly has aspirations beyond its established strengths.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/melbourne-acquires-australian-cloud-services-provider-uber-global

    8:16p
    NaviSite Launches Second Silicon Valley Data Center

    Time Warner Cable-owned cloud and managed service provider NaviSite has brought online a new data center site within a Digital Realty facility in Santa Clara, California.

    It has had a data center in nearby San Jose for more than 10 years. The company now has nine data centers in the U.S. and U.K.

    NaviSite is focused on growing its enterprise hosting business, and the Santa Clara expansion is part of that effort. It leverages the robust network backbone operated by its parent company as differentiation for its services.

    This is the third data center NaviSite has leased from Digital. The two companies have also formed a sales alliance, meaning NaviSite will have access to other tenants in the data center provider's facilities. Digital has been making more and more of these alliances, realizing it could no longer sustain growth by focusing strictly on its traditional bare-bones wholesale data center services.

    NaviSite has taken 3.5 megawatts of capacity with Digital in the supply-constrained Silicon Valley data center market. Commercial real estate analysts say there is currently a lot more demand for data center space in the region than there is supply, prompting wholesale providers, such as CoreSite and Vantage, to race to build out additional capacity. Digital is going through a portfolio-optimization phase, focused on getting rid of non-core properties rather than building more.

    NaviSite used to provide colocation services but shifted focus about five years ago, getting out of the colo business and doubling down on managed services, including cloud and enterprise hosting.

    In 2011, NaviSite was acquired by Time Warner Cable for $230 million in an episode that was characteristic of the time. That same year, Verizon acquired Terremark, and CenturyLink acquired Savvis. Telcos were buying data center and cloud service providers to diversify their revenue streams and to compensate for shrinkage in traditional telco services. Such acquisitions still take place from time to time, but not as often; the last high-profile example was the acquisition of data center provider ViaWest by Canada's Shaw Communications in 2014.

    10:42p
    Carter Validus Buys 10MW Atlanta Data Center

    Carter Validus Mission Critical REIT is continuing its data center and hospital acquisition spree.

    The latest two deals were for properties in Georgia: a $56.7-million purchase of a data center in Alpharetta (an Atlanta suburb), and a $20.2-million purchase of a hospital in Savannah.

    The Atlanta data center is a 165,000-square-foot facility built in 1999 as a primary mission critical site for a financial services company whose name Carter Validus did not reveal. It has about 50,000 square feet of raised floor and access to 10 megawatts of power total.

    The medical facility in Savannah is a 40,000-square-foot 50-bed long-term acute care hospital.

    “The addition of both the Alpharetta Data Center II and the Landmark Hospital of Savannah properties is representative of our commitment to acquire high-quality, mission critical assets in the healthcare and data center industries,” John Carter, the REIT’s CEO, said in a statement.

    The company’s business strategy is simple: buy high-value properties that generate rent revenue. Some properties it buys from a landlord who doesn’t occupy the building, and some of its deals are “sale-leaseback” transactions, where the previous owner continues occupying the property as a tenant after having transferred ownership to the buyer.

    Recent examples of the latter include a purchase of two IO data centers in the Phoenix market last year and acquisition of an AT&T data center in the Nashville suburbs in 2013.

    During its four years of buying properties, the real estate investment trust has amassed about 20 data centers and 40 healthcare facilities around the U.S., a portfolio valued at nearly $2 billion.

