Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, May 20th, 2014

    9:30a
    SingleHop Raises $14.8M To Accelerate SMB Cloud Business

    SingleHop, provider of Infrastructure-as-a-Service and cloud-enabled managed hosting for SMBs, has secured $14.8 million in venture debt financing.

    The round brings the company’s total to $42 million in equity and debt capital, giving it the financial flexibility to continue growing the business and building out its technology. Its previous round, a $27.5 million Series A in 2012, was led by Battery Ventures.

    Venture debt financing in the latest round was from Silicon Valley Bank, with participation by Farnam Street Financial.

    Dennis Grunt, director of Silicon Valley Bank in Chicago, said, “While so many companies are finding it increasingly difficult to reap the benefits of cloud infrastructure without large IT departments to implement and maintain them, SingleHop has the technology and the commitment to customer success that makes a big difference for companies of any size.”

    SingleHop has grown quickly and invested heavily in its technology, particularly automation and next-generation hosting services. The company currently has more than 14,000 servers in production serving more than 4,000 customers globally.

    “This financing ensures we are well capitalized to grow our team and our services and well positioned to make strategic investments as we continue to cement our leadership position in the SMB space,” said SingleHop CEO Zak Boca. “We laid out a strategy in 2013 to introduce more services and build our channel program. We’ve done well with uptake and we’re hearing through our channel partners that there’s a strong desire for new services.”

    The money will go toward developing new services as well as potential technology or talent acquisitions.

    Historically, a large chunk of SingleHop’s customer base has consisted of resellers. “When we started it was a lot of hosting resellers, but it’s evolved into systems integrators and value-add resellers,” said Boca. “We’re selling with them now; we’re closer to the customer in these deals.” The company still has a good number of resellers and channel partners, however, totaling more than 500.

    It sees SMBs as its best opportunity. “SMBs are doing more complex things, and that’s a space we’ve always served and served well,” said Boca. “The key for us is that it’s so automated. We’re focused on introducing new features, new capabilities and services tailored to that SMB space. There’s a more sophisticated SMB customer, doing complex things in the data center.”

    SingleHop provides fully managed or self-service options, customizable dashboards and a proprietary automation platform called LEAP which allows access to systems from any device.

    It has four data centers, two in Chicago, one in Phoenix and one in Amsterdam. Interxion is SingleHop’s data center provider in Amsterdam, which the company chose because it offers a well-connected location with a skilled workforce and a significant number of quality network providers and data centers in the area.

    “We originally introduced Amsterdam to satisfy existing customers,” said Boca, “but we’ve also done really well in getting new business. We’re 8-10 months ahead of where we thought we’d be.”

     

     

    12:00p
    Among Major Tech Companies, Snapchat, AT&T and Comcast Do the Least to Protect User Data


    Transparency reports have become a new standard in the tech industry, according to an Electronic Frontier Foundation (EFF) report released recently. The fourth annual “Who Has Your Back” report shows that 20 out of 26 top technology companies published transparency reports revealing information about government requests for user data.

    Only seven of the companies published transparency reports last year, and while companies are legally barred from disclosing information about certain types of requests, such as National Security Letters, the EFF praises the practice for providing “a small but vital level of public transparency.”

    The EFF worked with data analysis company Silk to produce a tool for exploring corporate transparency reports, which breaks down data requests by company, country and compliance. The marked increase in data available provides a detailed picture of data requests globally.

    “The sunlight brought about by a year’s worth of Snowden leaks appears to have prompted dozens of companies to improve their policies when it comes to giving user data to the government,” said EFF Activism Director Rainey Reitman. “Our report charts objectively verifiable categories of how tech companies react when the government seeks user data, so users can make informed decisions about which companies they should trust with their information.”

    Nine of the companies in the report received the highest possible score of six stars, and six more received five out of six; they were not eligible for the sixth star only because they have not had to go to court on behalf of users.

    The top-scoring companies are Apple, CREDO Mobile, Dropbox, Facebook, Google, Microsoft, Yahoo, and the two companies that received top marks last year, Sonic and Twitter. The companies one court challenge away from a perfect score are LinkedIn, Pinterest, SpiderOak, Tumblr, Wickr and WordPress.

    Snapchat, AT&T and Comcast were singled out for their poor protection of user data as they handed over requested information to government agencies without warrants.

    The EFF and several of the companies covered in the report protested mass surveillance on Feb. 11 as part of “The Day We Fight Back.”

    The event took place shortly after President Obama placed limitations on government use of collected data, in measures characterized as disappointing by the EFF.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/eff-transparency-reports

    12:30p
    Data Storage Location: Four Things to Consider

    John Landy is chief security officer at Intralinks, a secure enterprise collaboration and file-sharing provider. Prior to joining Intralinks, John was chief architect at JPMorgan Invest, the online brokerage subsidiary of JPMorgan Chase.

    Deciding where to house corporate data requires an understanding of the relevant laws across countries, as well as a thorough risk analysis. Organizations might have legal or ethical obligations to protect data in one jurisdiction, while also facing legal requirements to turn that information over in another. In addition, the move to cloud-based data storage and processing has only added to jurisdictional concerns.

    There are four important factors organizations should review when selecting the physical location of their data repositories:

    1. Your data’s subject matter. The actual content of your data might render it subject to the jurisdiction of some government body, no matter where it is stored. For instance, under Massachusetts state law, personal information about residents of the Commonwealth is subject to the data breach notification law, regardless of whether the holder of that information has ties to the state or where the data is actually located.

    2. Your target location’s mutual legal assistance treaties (MLATs). These are binding agreements between countries by which one country’s agents can request another country’s assistance to obtain information to which they do not have direct physical or legal access.

    3. Transit patterns of traveling data. Information flowing through the Internet often passes through many countries, traveling the path of least congestion. Any country through which your data passes can claim jurisdiction, including countries where your traffic path may have been hijacked by hackers.

    4. Relation of data to your corporate headquarters. Governments of countries where your company has related interests may be able to gain access to data stored elsewhere. The laws of the country where the company or organization is headquartered may require access to information within its “custody.”

    Once these four criteria have been used to evaluate potential locations, organizations should explore the entire range of conceivable threats through a risk analysis. A government’s data monitoring/interception jurisdiction is an important consideration. The legal environment must be considered and weighed against other threats and factors.
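    One way to make that weighing explicit is a simple weighted scorecard. The sketch below is purely illustrative; the factor names, weights and ratings are assumptions chosen for demonstration, not part of any published methodology.

        # Illustrative weighted scorecard for comparing candidate data storage locations.
        # Factor names, weights and ratings are assumptions, not a published methodology.
        WEIGHTS = {
            "data_protection_enforcement": 0.35,
            "intelligence_law_constraints": 0.30,
            "mlat_exposure": 0.20,   # how easily foreign agencies can reach the data
            "transit_path_risk": 0.15,
        }

        def location_score(ratings):
            """ratings maps each factor to a value from 0.0 (worst) to 1.0 (best)."""
            return sum(WEIGHTS[f] * ratings.get(f, 0.0) for f in WEIGHTS)

        candidates = {
            "Country A": {"data_protection_enforcement": 0.9, "intelligence_law_constraints": 0.8,
                          "mlat_exposure": 0.6, "transit_path_risk": 0.7},
            "Country B": {"data_protection_enforcement": 0.4, "intelligence_law_constraints": 0.3,
                          "mlat_exposure": 0.5, "transit_path_risk": 0.6},
        }

        for name in sorted(candidates, key=lambda n: -location_score(candidates[n])):
            print(f"{name}: {location_score(candidates[name]):.2f}")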

    To begin this process, it’s important for organizations to know the laws, understand how governments can act on those laws, and not be misled by popular accounts or rumors. Further, data encryption in transit, regardless of location, is a must. Additionally, stored data should be secured with multi-factor encryption keys that are owned and controlled by the company and do not rest within any single source.
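    As an illustration of keys that do not rest within any single source, a data-encryption key can be split into shares held by separate custodians, so that no single party can decrypt anything alone. The sketch below uses a simple two-share XOR split; it is a toy example and our own assumption, not Intralinks’ implementation (production systems typically rely on hardware security modules or threshold schemes such as Shamir’s secret sharing).

        import secrets

        def split_key(key: bytes):
            """Split a key into two shares; both shares are required to rebuild it."""
            share_a = secrets.token_bytes(len(key))
            share_b = bytes(k ^ a for k, a in zip(key, share_a))
            return share_a, share_b

        def combine_shares(share_a: bytes, share_b: bytes) -> bytes:
            return bytes(a ^ b for a, b in zip(share_a, share_b))

        key = secrets.token_bytes(32)       # e.g. an AES-256 data-encryption key
        a, b = split_key(key)               # hand each share to a different custodian
        assert combine_shares(a, b) == key  # neither share alone reveals the key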

    Choosing where to locate your data shouldn’t be done in a vacuum. Cloud computing and remote storage can relieve many organizations of the technological burdens of understanding the mechanics that fall under the cloud, but this doesn’t relieve businesses from the burden of understanding the laws of the nations where these clouds operate.


    Global Locations: Who Makes the Grade?

    Below is a ranking of countries and regions based on their current MLAT status and legal protection stance on corporate data in transit and at rest. While a full breakdown of the data protection laws for every nation on earth would require volumes, this is a brief, selective overview. Many countries have laws that govern the general disclosure of information with enumerated exceptions for law enforcement.

    These nations receive a grade of “A” for their strong laws for intelligence and law enforcement activity, and strong data protection enforcement:

    • Canada
    • Switzerland
    • Spain
    • Brazil
    • Japan

    The countries below are solid “B” grade earners. Their governance laws weigh more heavily on the side of intelligence and law enforcement activity, but do offer some data protection enforcement:

    • The United States
    • The European Union (EU)
    • Australia
    • Germany
    • Mexico

    The “C” students have minimal rules for intelligence and law enforcement activity, as well as limited data protection enforcement:

    • India
    • Thailand
    • Sumatra
    • South Africa

    The following countries rate either a “D” or an “F” due to their negligible rules for intelligence and law enforcement activity, and non-existent data protection enforcement:

    • Russia
    • China
    • Pakistan
    • Saudi Arabia
    • Egypt
    • Libya
    • Hong Kong

    It’s important to note, however, that the United States is situated in a unique power position relative to other nations in terms of raw access to data. A majority of the world’s telecommunications traffic runs through switches and servers located in or controlled by U.S. interests. This explains why U.S. companies receive special treatment when dealing with European Union citizen data, for example.

    While the United States is a strong rule-of-law jurisdiction, the lack of clarity around extrajudicial processes for collecting data has muddied the waters. Meanwhile, Germany’s Anti-Terrorism Law provides its security architecture direct access to personal data. Consider this: under federal court order, authorities in Germany can legally deploy a “Federal Trojan” to infiltrate computer systems and obtain information without notifying the system owner.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:00p
    CenturyLink Launches Minnesota Data Center

    CenturyLink Technology Solutions has opened a data center in the Minneapolis-St. Paul metro built to support up to 6 megawatts and 100,000 square feet of raised floor, although the initial phase is 1.2 megawatts on 13,000 square feet.

    Located in Shakopee, Minnesota, a Minneapolis suburb, the data center combines CenturyLink’s existing “feet-in-the-street” local presence with the capabilities gained in its Savvis acquisition.

    The company used developer Compass Datacenters to build the facility. Initially, CenturyLink said the facility would be able to support 4.8 megawatts, but after working with Compass and analyzing the potential, this was recalculated to 6 megawatts. Construction of the facility began in October 2013.

    The company is offering hybrid cloud solutions that leverage network, colocation, managed hosting and managed services.

    David Meredith, senior vice president and general manager at CenturyLink Technology Solutions, said, “Our new Twin Cities data center will provide companies in industries such as retail, consumer brands, financial services, healthcare and media with an incredible, distinctive data center experience.”

    CenturyLink promised a big 2014 in terms of expansion, committing to adding 180,000 square feet and 20 megawatts of space to its global footprint. The company said it would expand its global data center presence in eight markets and invest in growing its CenturyLink Cloud network of public cloud data centers.

    CenturyLink has an existing data center in the Minneapolis-St. Paul region — a former Qwest facility that Savvis acquired before being itself gobbled up by CenturyLink. It is small by comparison, offering 375 kilowatts of power and 7,088 square feet of raised floor. The new data center offers much more room to grow and address demand.

    Minnesota – an active emerging market

    Minnesota continues to see a lot of data center construction activity. ViaWest recently opened the doors to a 70,000 square foot, 9 megawatt facility. Cologix has been expanding rapidly at 511 11th Avenue. Stream Data Centers is building a 75,000 square foot data center in a southwest suburb, which received Tier III certification for design documents from the Uptime Institute. Other providers include DataBank, Compass Datacenters (which partnered with CenturyLink on this build) and Digital Realty Trust.

    There is also a massive legacy data center on the market that used to house American Express’ infrastructure; it would require some upgrades to modernize the building.

    Minnesota is an emerging market, both for local enterprises and as a backup disaster recovery option. “The Shakopee facility’s Tier III rating and room for expansion, along with the CenturyLink IP backbone and global data center footprint, will appeal to local enterprises and may also put Minneapolis on the map as a backup-disaster recovery option for firms located elsewhere,” wrote 451 Research analyst Rick Kurtzbein in a May 13 report.

    CenturyLink operates more than 55 data centers with more than 2.5 million square feet of raised floor space worldwide.

    2:00p
    Your Operations Guide for Maximum Data Center Reliability

    As you build out your infrastructure, place more users on your servers and develop your cloud, reliability becomes a serious concern. Because technology is so critical to business processes, outages can be extremely costly.

    The data center industry spends significant time discussing infrastructure design and the Tier rating of a data center. With so much focus on the critical systems infrastructure (specifically the electrical and mechanical systems), it is easy to assume that design is the key to predicting a data center’s reliability. That way of thinking amounts to a “break-fix” philosophy for data center operations; ironically, a major reason data centers exist is that utilities are generally run as “break-fix” operations. The Tier rating of a data center is an indicator of distribution paths, capacities, redundancies, and approximately how many planned maintenance windows the IT end user can expect, assuming none of the maintenance is deferred. As the primary factor or predictor of reliability, however, the Tier rating alone falls short.

    Here is the important point to consider: Data center reliability is the combination of many factors of which infrastructure design is only one of many.

    People, processes, operations, maintenance, lifecycle, and risk mitigation strategies are also necessary in creating reliability. This eBook from Fortrust describes the strategies which work with any Tier rating and decrease the likelihood of unplanned downtime or outages in a data center.

    To gain a thorough understanding of maximum data center reliability, the eBook explores the following topics:

    • The Most Likely Causes of Unplanned Outages or Downtime
    • Human Error and Infrastructure Capacity Management
    • Maintenance and Lifecycle Strategy
    • Data Center Site Selection and Risk Mitigation Measures

    Download this eBook today to see how the key to preventing unplanned outages or downtime in the critical systems infrastructure is to focus the greatest attention and effort on the most likely causes of outages. Remember, not all data centers are designed, built, managed or operated alike. That is why following deployment, maintenance and management best practices can help keep your data center reliable and available.

    3:00p
    eBay Shifts to Water-Cooled Doors to Tame High-Density Loads

    Data center operators are seeking to pack more computing power into each square foot of space. As these users add more servers and increase the power density of their racks, it can alter the economics of how to best cool these environments.

    A case in point: eBay recently reconfigured a high-density server room in its Phoenix data center, switching from in-row air cooling units to water-chilled rear door cooling units from Motivair. These units, which cool server exhaust heat as it exits the rear of the rack, have been in use for years (see our 2009 video overview for an early example) but tend to be more expensive than traditional air cooling.

    As power density changes, so does the math driving investments in cooling, according to Dean Nelson, Vice President of Global Foundation Services at eBay. The data hall in Phoenix featured 16 rows of racks, each housing 30kW to 35kW of servers – well beyond the 4kW to 8kW seen in most enterprise data center racks. Cooling that equipment required six in-row air handlers in each row, meaning eBay had to sacrifice six rack positions for cooling gear.

    Switching to the rear-door units allowed eBay to recapture those six racks and fill them with servers, boosting compute capacity in the same footprint. The Motivair cooling doors are active units, with on-board fans to help move air through the rack and across the unit’s cooling coils. Some rear cooling doors are passive, saving energy by relying upon the fans on the servers for airflow. Nelson said that the high power densities in the eBay Phoenix installation required the active doors.

    Weighing Power Tradeoffs

    The Motivair doors use high-efficiency EC (electronically commutated) fans, which meant the increase in overall power usage was minimal (12.8kW versus 11kW for the in-row units). The system also uses less energy because it can use warm water, operating at a water temperature of 60 degrees rather than the 45 degrees seen in many chiller systems.

    “We thought the rear doors were expensive, but when you added the six racks back into the row, it paid for itself,” said Nelson.
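    The back-of-envelope math behind that statement looks roughly like the sketch below. The row count, rack-position count and fan figures come from the article; the per-rack density and the assumption that the fan figures are per row are ours, used only to illustrate the tradeoff.

        # Illustrative tradeoff: compute capacity recaptured vs. added fan power.
        rows = 16                        # rows of racks in the Phoenix data hall
        recaptured_racks_per_row = 6     # rack positions freed by removing in-row coolers
        assumed_kw_per_rack = 30         # low end of the stated 30-35 kW range (assumption)

        added_compute_kw = rows * recaptured_racks_per_row * assumed_kw_per_rack
        added_fan_kw = rows * (12.8 - 11.0)   # assumes the stated fan loads are per row

        print(f"Compute capacity recaptured: {added_compute_kw} kW")   # 2880 kW
        print(f"Additional cooling fan power: {added_fan_kw:.1f} kW")  # ~29 kW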

    The cost scenario also worked because eBay had pre-engineered the space to support water cooling. Nelson believes that major data centers are approaching the limits of conventional air cooling, and sees water cooling as critical to future advances in high-density equipment. eBay is using hot water cooling on the rooftop of its Phoenix data center, which has served as a testbed for new data center designs and technologies being implemented by the e-commerce giant.

    In the rooftop installation, eBay deployed data center containers from Dell that were able to use a water loop as warm as 87 degrees F and still keep servers running within their safe operating range. To make the system work at the higher water temperature, it was designed with an unusually tight “Delta T,” the difference between the temperature of the air at the server inlet and the temperature of the air as it exits the back of the rack.
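    The reason the Delta T matters follows from the basic heat-transfer relation Q = m * cp * dT: for a fixed heat load, a tighter (smaller) temperature rise across the rack requires proportionally more airflow. A quick illustrative calculation, using assumed numbers rather than figures from the article:

        # Airflow required to remove a given heat load at a given air-side Delta T.
        # Q = m_dot * c_p * delta_T  =>  m_dot = Q / (c_p * delta_T)
        CP_AIR = 1005.0    # J/(kg*K), specific heat of air
        RHO_AIR = 1.16     # kg/m^3, approximate air density at data center conditions

        def airflow_m3_per_s(heat_load_w: float, delta_t_c: float) -> float:
            mass_flow_kg_s = heat_load_w / (CP_AIR * delta_t_c)
            return mass_flow_kg_s / RHO_AIR

        rack_watts = 30_000.0              # assumed 30 kW rack
        for delta_t in (20.0, 10.0):       # a wide vs. a tight Delta T, in degrees C
            flow = airflow_m3_per_s(rack_watts, delta_t)
            print(f"Delta T {delta_t:4.1f} C -> {flow:.2f} m^3/s of airflow per rack")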

    The Phoenix facility also includes traditional indoor data halls, which is where the rear-door cooling units were installed.


    5:52p
    Data Foundry Expands With Equinix In Ashburn

    Colocation and managed services provider Data Foundry has expanded its footprint in the northern Virginia data center market, taking a business suite inside Equinix’s DC10 Ashburn facility. DC10 is unique on the Ashburn campus in that it is divided into dedicated suites rather than retail colocation racks and cages.

    This is an expansion of Data Foundry’s presence in Virginia, which it initially established in 2003. The new suite gives Data Foundry a well connected presence in the heart of Data Center Alley.

    “As a provider of global data center services, we either own and operate our own data centers or we deploy within the most secure, reliable and connected facilities in the world,” said Shane Menking, President and Chief Financial Officer of Data Foundry. “It’s exciting to expand our East Coast presence in response to existing customer demand and at the same time be able to serve companies based within the Virginia market.”

    There’s continuing growth in general in Virginia’s Data Center Alley. Eight million square feet of data center space is currently either built or under construction. Equinix’s initial campus in Ashburn has filled to the point where the company has filed plans to build a second mega-campus in the area.

    Equinix introduced Business Suites to capture the middle ground between retail and wholesale colocation. A recent tour of Equinix’s DC10 revealed that space is running short. Data Foundry was attracted to the site’s access to a high concentration of IT, telecommunications, biotech, federal government, and international organizations in the region. Equinix plays an integral part in connectivity in Ashburn.

    Data Foundry customer Apptix, which is headquartered in Herndon, is excited about the new facility nearby. Apptix offers business communication services such as MS Exchange email hosting, hosted VoIP, mobile Exchange hosting and SharePoint hosting. It will expand into Data Foundry’s new facility.

    “Having partnered with Data Foundry for several years in Austin, we’re excited that Data Foundry is expanding its capabilities within our Northern Virginia backyard,” said Peter Walther, Vice President of Business Operation at Apptix. “The proximity between our corporate headquarters in Herndon and their new Ashburn data center enables Apptix to more effectively leverage Data Foundry’s capabilities to enhance our service for our blue-chip channel partners and global customer base.”

    Data Foundry owns and operates four data centers in Texas and offers a suite of global colocation and managed services in data centers located in Ashburn, Los Angeles, Amsterdam and Hong Kong. Its presence in Texas, particularly Austin and Houston, is sizeable and growing. The company recently broke ground on a 350,000 square foot, 60 megawatt facility in Houston.

     

    7:00p
    Cisco sharpens SDN focus on enterprises, but will they accept lock-in?

    Cisco chose to lead the stream of news announcements from its big conference in San Francisco this week with Application Centric Infrastructure. ACI is the company’s vision for software-defined networking, proposed as an alternative to the two other main ways of automating network management: OpenFlow and virtual overlays.

    At Cisco Live! on Monday, the company announced it would release the Application Policy Infrastructure Controller (APIC), the centerpiece of an ACI-enabled system, this summer. It also said ACI would not be limited to its latest Nexus 9000 network switches: customers will be able to enable older-generation products (Nexus 2000, 3000, 5000, 6000 and 7000) to be managed by the APIC.

    Cisco said the management software for its server-and-network bundle Unified Computing System, called UCS Director, will support ACI. Converged-infrastructure stacks that combine UCS with NetApp storage (FlexPod) and with EMC storage (Vblock) will also be integrated with ACI.

    Third way to do SDN

    ACI, which Cisco initially announced one year ago, is yet another approach to SDN. SDN seeks to solve the problem of having to manually reconfigure the network every time an application expands in scale. The other two proposed solutions are OpenFlow, an open standard for communicating network configuration requirements, and virtual network overlays.

    In an OpenFlow system, applications program the network through an OpenFlow controller, which acts as an intermediary. This can be a problem at scale, where the controller creates a potential bottleneck.

    ACI is different because it disaggregates applications from the network. The application communicates its connectivity requirements (or policy) to the APIC and ACI-enabled network hardware; the network then self-configures to meet those requirements.
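    The practical difference is in what gets expressed. With OpenFlow, a controller programs individual forwarding rules; with a policy model like ACI’s, the operator declares the application’s intent and the fabric works out the configuration details. The snippet below is a hypothetical, vendor-neutral illustration of such a policy declaration; it is not the actual APIC object model or API.

        # Hypothetical declarative connectivity policy (illustration only, not Cisco's APIC schema).
        # The application states *what* it needs; the fabric decides *how* to configure itself.
        web_app_policy = {
            "application": "storefront",
            "tiers": {"web": {"scale": 20}, "app": {"scale": 10}, "db": {"scale": 3}},
            "contracts": [
                {"from": "web", "to": "app", "ports": [8080], "qos": "gold"},
                {"from": "app", "to": "db", "ports": [5432], "qos": "gold"},
                # anything not explicitly allowed is denied
            ],
        }

        def submit_policy(policy):
            """Stand-in for handing the declared intent to a policy controller."""
            print(f"Submitting policy for {policy['application']} "
                  f"({len(policy['contracts'])} contracts) to the fabric controller")

        submit_policy(web_app_policy)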

    Virtual network overlays are the third way of doing SDN. In an overlay-based system, applications express connectivity needs to a virtual network layer sitting on top of the physical network and network configuration is managed by an application control layer, which is also separate from the application.

    The biggest difference between ACI and overlays is that ACI forces you to use Cisco’s hardware only.

    ‘If you like it, put a ring on it’

    ACI is a proprietary technology that only works on Cisco’s networking gear, which may be a problem for enterprise data center managers, the majority of whom have a “dual-vendor” strategy for buying infrastructure components. By proposing that they lock themselves into Cisco for SDN, the networking giant may be shooting itself in the foot, since its competitors in the space are promoting the opposite approach: one where components from different vendors speak the same open protocols and can be used interchangeably.

    Dinesh Dutt, chief scientist at Cumulus Networks, a startup with a Linux-based network operating system for bare-metal switches, said all data center operators he has worked with preferred to have at least two vendors for every piece of technology. Dutt came to Cumulus in 2013 after 15 years at Cisco.

    Dutt’s colleague Shrijeet Mukherjee, vice president of engineering at Cumulus, said a dual-vendor strategy encourages flexibility and agility. Buying from a single vendor leads to big markups and may slow innovation. “If you’re already locked in, they can gouge you on price and don’t need to innovate, as you’re stuck with them,” he said.

    Dutt and Mukherjee do not necessarily buy into OpenFlow either, because of the potential scaling issues it can create. “The OpenFlow model means a whole bunch of competing ‘asks’ are being made of the controller, which becomes a central compiler which programs the network for global optimization,” Mukherjee wrote in an email. “This is hard to scale and make work in practice.”

    The most elegant solution, in his opinion, is the overlay approach: “The overlay solution is simple [and] does not need a sea change in how the network is done today and just fixes the problem that needs fixing.”

    Non-disruptive on-ramp crucial for enterprise install base

    Steve Garrison, VP of marketing at Pica8, said the prominent SDN startup saw Cisco as a “Johnny-come-lately” in the SDN space. After spending some time dismissing SDN, Cisco added support for OpenFlow and then acquired Insieme, the “spin-in” startup that developed its ACI vision.

    “In some ways, they finally caught up,” Garrison said, but because Cisco added OpenFlow support and kicked off ACI so late, it lost some market opportunity. The company has lost a lot of business in the service provider market, and the ACI strategy seems to be focused squarely on the enterprise data center.

    “We really see them very much protecting enterprise at this point,” Garrison said. This is why enabling ACI on the older Nexus switches is so important for Cisco strategically.

    Cisco’s big advantage with enterprise data centers is the trust it has built over the years. Data center managers who have relied on Cisco gear throughout their entire careers may not be averse to locking themselves in with Cisco for SDN.

    But, as Garrison explains, these are the customers who are the most averse to changing how they do things, which is why giving them an “on-ramp” to ACI by enabling it in their existing infrastructure is paramount. “For a customer who’s been buying into Cisco for a long time … the propensity to change is lower,” he said.

    Mainstream SDN is years away

    Garrison said the industry is still several years away from seeing mainstream adoption of SDN. “Here we are; Monday morning; who’s got time to do this stuff right now?” Cloud companies and service providers are early adopters because they are the only ones with the operational pains of scale and virtual-machine mobility that SDN is meant to address.

    Enterprises simply do not have those problems today – their environments are not as dynamic – but as more of them adopt private or hybrid cloud strategies, the automated dynamic network will become more relevant for them.

    7:35p
    Creating a Cloud Readiness Assessment

    It’s a lot easier to move your infrastructure into the cloud than to migrate everything back into a private data center. The idea is to make sure you deploy the right workloads and follow the correct deployment methodology throughout the entire process.

    When cloud computing started getting popular, organizations began pushing more of their environments into a public or hybrid cloud model. Although this was absolutely a great move by many of these businesses, some began to feel the pains of putting the wrong application or database into a public cloud. User, data, and workload proximity are critical, as is deploying the right workload against the proper type of cloud model.

    Before you migrate a workload into a colocation or public cloud provider space, there are some key infrastructure aspects to consider. One of the best ways to prep your entire organization for a potential cloud move is to utilize a cloud readiness assessment. Working with a cloud-ready partner can really help this process along. Here’s the challenge: every business and every data center is unique. However, the methodology around a readiness assessment can be standardized to some extent.

    That said, here are some key points to consider in a Cloud Readiness Assessment Project (a simple scoring sketch follows the list):

    • Your business model and goals. It’s hard to narrow this down in just one article, but the first thing to understand is your current business model and where your organization is headed. Are you planning aggressive expansion? Are you planning on taking on additional users or branches? Are you deploying a new type of application or product? Are there core reasons to move an application, data set or entire platform into the cloud? Through research and working with cloud and industry professionals, you’ll be able to create a business model that will scale from your current platform into the cloud. Here’s why this is important: ROI. Through your use-case and business model analysis, you may very well find that moving to a cloud platform does not make financial sense, or that you need a different approach.
    • Your user base. In today’s ever-evolving technology world, the end-user has become even more critical. The always-on generation is now demanding their data anywhere, anytime, and on any device. How capable will your cloud platform be to deliver this rich content to your end-users? How well can you ensure an optimal user experience in the cloud? During your assessment, take the time to do a very good user survey. Find out how they compute, which devices they use, and what resources they are accessing. The last thing you want to do is build a cloud platform without direct end-user input.
    • Your existing physical infrastructure. Are you sitting on new gear or are you overdue for a hardware refresh? All of this is part of the cloud assessment process. Your ability to replicate into the cloud will be directly impacted by your current underlying physical environment. The reality is simple: if your gear is extremely outdated, you may need to fix some in-house issues before moving into the cloud. A workload running on a certain type of physical system now may behave very differently in the cloud later. If your environment is pretty much new, consider various cloud options. In some cases, organizations ship their own servers into a cloud provider’s data center. The need to upgrade or implement new hardware requirements can definitely add to the bottom line of any cloud migration project.
    • Your existing logical infrastructure. We operate in a virtualized world. Software-defined technologies, advanced levels of virtualization, cloud computing, and mobility are all influencing our data center and business models. With that in mind, a cloud readiness assessment must cover both the physical and logical aspects of your environment. Are you already virtualizing your applications? How old are those apps? Can pieces of your environment even run on a cloud platform? For replication purposes, do you need to upgrade your own virtual systems? Beyond the physical aspect, working with the data side of your environment is going to be the most challenging. Applications, their dependencies, and the data associated with them are all important considerations during an assessment.
    • Selecting the optimal cloud option. The progression of cloud infrastructure offers an organization a number of options. Colocation, various cloud models, and even the hybrid approach are all viable for the modern business. The important piece is selecting the right option. To give you a realistic perspective, in some cases it makes sense to build out your own data center because your business model, user-base, and future business goals all require it. The point is that there are a number of options to work with.
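    The criteria above lend themselves to a simple, repeatable scorecard. The sketch below is a minimal illustration; the category names, weights and threshold are assumptions, not a standard methodology.

        # Minimal cloud readiness scorecard (categories, weights and threshold are illustrative).
        CATEGORIES = {
            "business_case": 0.25,            # ROI, growth plans, use-case fit
            "user_base": 0.20,                # devices, locations, experience requirements
            "physical_infrastructure": 0.20,  # hardware age, refresh cycle
            "logical_infrastructure": 0.20,   # virtualization level, app dependencies
            "cloud_model_fit": 0.15,          # colocation vs. public vs. hybrid suitability
        }

        def readiness_score(ratings: dict) -> float:
            """ratings maps each category to a value from 0 (not ready) to 5 (fully ready)."""
            return sum(CATEGORIES[c] * ratings.get(c, 0) for c in CATEGORIES)

        example = {"business_case": 4, "user_base": 3, "physical_infrastructure": 2,
                   "logical_infrastructure": 3, "cloud_model_fit": 4}

        score = readiness_score(example)      # weighted average on a 0-5 scale
        print(f"Readiness score: {score:.2f} / 5")
        print("Proceed to migration planning" if score >= 3.5 else "Remediate gaps first")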

    The cloud can be a powerful tool. Already, many organizations are building their business process around the capabilities of their technology platform. As always, any push towards a new infrastructure will require planning, and a good use-case analysis. In the case of cloud computing, running a cloud readiness assessment can save quite a few headaches in the future. Basically, you’ll be able to better understand your current capabilities and what the optimal type of infrastructure would be. Ultimately, this helps align your IT capabilities directly with the goals of your organization.

