Data Center Knowledge | News and analysis for the data center industry
 

Friday, February 22nd, 2013

    1:24p
    Migrating to the Cloud: Top 3 Best Practices

    Jake Robinson is a Solutions Architect at Bluelock. He is a VCP, a former CISSP, and a VMware vExpert. Jake’s specialties are infrastructure automation, virtualization, cloud computing, and security.

    JAKE ROBINSON
    Bluelock

    Working at an Infrastructure-as-a-Service provider, I see a lot of IaaS application migration. It happens in every direction: from physical servers to the cloud, from private cloud to public cloud, and from public cloud back to private cloud.

    Though it occurs often, migration shouldn’t be rushed. A poor migration strategy can be responsible for costly time delays, data loss and other roadblocks on your way to successfully modernizing your infrastructure.

    Each scenario is different based on your application, where you’re starting from, and where you’re going.

    Best Practice: Pick Your Migration Strategy.

    • Option 1: Just data migration. This is typically the right choice for Tier 1 and 2 applications. Even if you migrate your VM or vApp, its data is still going to be constantly changing, and a Tier 1 application can’t afford much downtime, so we’ll typically recommend invoking some sort of replication. Replication is a complex, detailed subject in itself, but the key to understanding it is to identify the size of the data, the rate of change, and the bandwidth between the source and target. As a general rule, if your rate of change is greater than or equal to your bandwidth, your migration will likely fail. That’s because the rate of change covers everything coming into the app, so the data keeps gaining gravity as changes arrive; bandwidth is the escape velocity required to get it off the ground, or migrate. You need enough bandwidth to “overtake” that rate of change (a quick feasibility check is sketched after this list).
    • Option 2: Machine replication. This is best for Tier 1 and 2 applications that can afford some downtime, and it involves migrating the entire stack. There is less configuring in this scenario, but more data to migrate. Option two is best if you’re moving to an internal private cloud, where you have plenty of bandwidth to move the whole stack around. It’s also worth noting the portability of VMware-based technology: VMware allows you to package the entire VM/vApp, the entire stack, into an OVF, which can then be transported anywhere if you’re already on a virtualized physical server.
    • Option 3: P2V migration. You typically see this for Tier 2 and 3 apps that are not already virtualized. The concept involves taking a physical app and virtualizing it. VMware offers a converter tool that does P2V, and it’s very easy to go from a physical server to a private cloud using P2V. P2V comes with an entirely different set of best practices, however, so do some extended research to make sure you have the latest updates, best practices and suggestions. In option three there is no replication, but those apps can be shipped off to a public cloud provider to run in the public cloud after being virtualized.
    • Option 4: Disaster Recovery. A final path some companies take is to treat the migration as a Disaster Recovery (DR) scenario: set up replication from one machine to another, replicate the entire stack from point A to point B, and then click the failover button.
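
    The rate-of-change rule in option one can be sanity-checked with simple arithmetic before you commit to a replication window. Below is a minimal sketch of such a feasibility check; the function, its parameters, and the sample numbers are illustrative assumptions, not output from any particular replication tool.

```python
# Hypothetical pre-migration sanity check: can replication ever converge?
# All names and sample values are illustrative assumptions.

def replication_feasible(data_size_gb, change_rate_mbps, bandwidth_mbps):
    """Return (feasible, est_seed_hours) for an initial replication pass.

    Replication can only converge if the link is faster than the rate at
    which the source data is changing (the "escape velocity" analogy above).
    """
    if change_rate_mbps >= bandwidth_mbps:
        return False, None

    # Effective throughput left over after keeping up with ongoing changes.
    effective_mbps = bandwidth_mbps - change_rate_mbps
    seed_seconds = (data_size_gb * 8 * 1024) / effective_mbps  # GB -> megabits
    return True, seed_seconds / 3600


if __name__ == "__main__":
    ok, hours = replication_feasible(data_size_gb=2000,    # 2 TB of app data
                                     change_rate_mbps=20,  # sustained change rate
                                     bandwidth_mbps=100)   # link to the target cloud
    if ok:
        print(f"Initial seed should take roughly {hours:.1f} hours")
    else:
        print("Change rate meets or exceeds bandwidth: replication will never catch up")
```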

    Now, let’s say you have identified the best vehicle and path for migrating your application. Before you actually get to work, there is still quite a bit of information to evaluate and incorporate.

    Best Practice: Understand the Gravity of Your Data.

    When moving Tier 1 applications from a physical data center to a private or public cloud, we have to take data gravity into account, and the data itself will be the weightiest part.

    There’s no easy way to shrink the data down, so you need to evaluate the weight of the data in the app you’re considering migrating. If you’re a high-transaction company, or if it’s a high-transaction application, there will be a lot of data to replicate; the data itself constitutes 99 percent of the application’s data gravity. A simple way to put a number on that weight is sketched below.
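
    A quick measurement beats guesswork here. The following is a minimal sketch, assuming a Unix-like filesystem layout, that totals the on-disk size of an application’s data directories; the paths are hypothetical placeholders.

```python
# Minimal sketch: total the on-disk size of an application's data
# directories so you can weigh the data against available bandwidth.
# The paths below are hypothetical placeholders, not a real layout.
import os

def directory_size_bytes(root):
    """Sum the size of all regular files under a directory tree."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.isfile(path):  # skips broken symlinks
                total += os.path.getsize(path)
    return total

app_data_paths = ["/var/lib/appdb", "/srv/app/uploads"]  # hypothetical
total_gb = sum(directory_size_bytes(p) for p in app_data_paths) / (1024 ** 3)
print(f"Data gravity to move: ~{total_gb:.1f} GB")
```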

    Another aspect that you should evaluate as part of your pre-migration plan is how connected your VM or vApp is to other apps. If you have a lot of applications tightly coupled to the application you want to migrate, the cloud might not be an option for that application, or at least not for that application on its own.

    Best Practice: Identify How Your Apps Are Connected.

    Does your application have data that other applications need to access quickly? If so, an “all or nothing” philosophy of migration is your best option. If you have an application that is tightly coupled to two or three others, you may be able to move them all to the cloud together. Because they are still tightly coupled, you won’t experience the latency that would occur if your cloud-hosted application needed to access a physical server to get the data it needs to run.

    Once you’ve identified how many apps are tied to the application you wish to migrate, the next step is to identify which of those applications will be sensitive to latency. How sensitive they are should factor into whether you migrate the app at all (a simple probe is sketched below).
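
    One lightweight way to gauge that sensitivity is to time a TCP connection to each dependent service from wherever the application will run, both before and after a trial move. The sketch below is a hypothetical probe; the hostnames and ports are placeholders for your own dependency map.

```python
# Hypothetical latency probe: time a TCP connect to each dependent service.
# Hostnames and ports are placeholders for your own dependency map.
import socket
import time

dependencies = [("crm.internal.example", 1433),
                ("reports.internal.example", 5432)]

for host, port in dependencies:
    start = time.perf_counter()
    try:
        with socket.create_connection((host, port), timeout=2):
            rtt_ms = (time.perf_counter() - start) * 1000
            print(f"{host}:{port} reachable, connect time {rtt_ms:.1f} ms")
    except OSError as exc:
        print(f"{host}:{port} unreachable: {exc}")
```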

    To be able to check this best practice off your list, be very sure you understand everything your application touches so you won’t be surprised later, post-migration.

    Each application, and migration strategy, is unique, so there is no detailed instruction manual that works for everyone.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    1:30p
    Data Center Jobs: RagingWire

    At the Data Center Jobs Board, we have a new job listing from RagingWire, which is seeking a Director of Critical Facilities Operations in Sacramento, California.

    The Director of Critical Facilities Operations is responsible for the strategic facility planning, daily operational oversight, and planned maintenance and repairs of RagingWire’s CA-based critical facilities. The position includes oversight of electrical and mechanical systems as well as fire/life safety, and the director is expected to have expertise in all areas of data center facility design, operations and maintenance. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

    2:01p
    Friday Funny: Shaking in the Data Center

    Happy Friday! You’ve made it to the end of the work week. Time for some data center levity.

    Each Friday, Data Center Knowledge features a cartoon drawn by Diane Alber, our fav data center cartoonist, and our readers suggest funny captions. Please visit Diane’s website Kip and Gary for more of her data center humor.

    The caption contest works like this: we provide the cartoon and you, our readers, submit the captions. We then choose finalists, and readers vote for their favorite.

    Congratulations to Jim Leach of RagingWire, whose caption “I like her, but I’m just not ready for a 2N relationship” won for the Valentine’s Day cartoon.

    This week Diane writes, “So the newest craze on YouTube is called the ‘Harlem Shake’ and I just love it! If you haven’t seen it yet, you soon will; it’s becoming as popular as the ‘plank.’ Anyway, the Harlem Shake is basically when a single person starts dancing all by themselves with a mask or helmet over their head, and then as soon as the bass drops the entire room starts going crazy. Well, I had to have Kip and Gary partake in the fun, and what better place than the tape storage room! The only problem is I don’t think Gary has any idea of what is going on. . .”

    Click to enlarge cartoon.

    For the previous cartoons on DCK, see our Humor Channel.

    4:49p
    VCE Launches Wave Of New Offerings

    VCE announced a large set of new software and hardware innovations, advancing its vision for converged infrastructure with new open management capabilities for the Vblock portfolio, Vblock Specialized Systems, and new Vblock Systems designed for branch offices and mid-sized data centers.

    VCE was formed by Cisco and EMC with investments from VMware and Intel with the goal of developing converged infrastructure and cloud computing models. The Vblock platform presents a prepackaged, integrated IT offering.

    “In three short years, VCE has pioneered and led the transition to converged infrastructure by delivering a unique customer experience,” said Praveen Akkiraju, CEO of VCE. “Our approach is driven by a commitment to bring simplicity and speed to data center deployments and operations. Today we take the next important steps in defining the next generation data center with the introduction of new VCE developed systems and software that converge data center operations for customers.”

    VCE announced several new Vblock systems, with the Vblock System 200 targeted at distributed environments and mid-sized data centers, and the Vblock System 100 for remote office/branch office deployments. Vblock System 300 and Vblock System 700 architectures have been enhanced to provide even greater configuration flexibility in optimizing compute, networking, and storage resources.

    VCE Vision Intelligent Operations is software that enables converged infrastructure management. Developed for the entire Vblock portfolio, the software dynamically informs customer management frameworks about Vblock Systems, integrating directly with VMware’s Virtualization and Cloud Management portfolio, as well as supporting an open API-enabled integration into other industry management toolsets.

    VCE is introducing the first of a series of systems that feature pre-installed core enterprise workloads. Vblock Specialized System SAP HANA has received SAP HANA Certification, enabling rapid and efficient deployment of this powerful in-memory computing and database platform with unsurpassed scale and performance.

    “IT Executives in the Wikibon community are modernizing their data centers and transforming operations to focus precious resources on adding sustainable business value,” said David Vellante, co-founder and Chief Analyst, Wikibon. “Converged infrastructure such as VCE’s Vblock Systems can dramatically speed deployments, simplify infrastructure management and reduce risk. CIOs will value the fact that VCE is extending convergence to management and operations of converged infrastructure.”

    $1 Billion run rate

    VCE partners Cisco (CSCO) and EMC announced that demand for the joint venture’s products and services has surpassed a $1 billion annual run rate, a milestone reached in the most recent quarter, and that VCE has shipped its 1,000th Vblock in its three-year history, solidifying VCE’s place as one of the fastest growing companies in IT industry history. Gartner has named VCE the leader in the integrated infrastructure segment with a commanding 57.4 percent market share.

    “In any major industry transition, technology innovation powers business productivity advances,” said Akkiraju. “The transformational improvements VCE brings to our customers by simplifying IT deployments and operations has earned their trust and loyalty, and in turn, fueled VCE’s rapid growth. Our deep experience accelerating customer data center modernization efforts positions us well to drive the next wave of innovation in converged infrastructure and converged data center operations.”

    6:00p
    Riverbed Launches New Whitewater Operating System

    Riverbed (RVBD) announced Whitewater Operating System (WWOS) version 2.1 and introduced larger virtual Whitewater appliances.

    Riverbed’s WWOS 2.1 adds support for Amazon Glacier storage and Google Cloud Storage, so customers have immediate access to recent backup data through the local cache. The high data durability offered by Amazon cloud storage services and the ability to access the data from any location with an Internet connection greatly improve an organization’s disaster recovery (DR) readiness.

    “Once created, most unstructured data is rarely accessed after 30-90 days. Leveraging the cloud for storing these data sets makes a lot of sense, particularly given the attractive prices of storage services designed for long-term retention, such as Amazon Glacier,” said Dan Iacono, research director in IDC’s storage practice. “The ability of cloud storage devices to cache locally and provide access to recent data provides real benefits from an operational cost perspective by avoiding unnecessary transfer costs from the cloud.”

    New virtual Whitewater appliances support local cache sizes of four or eight terabytes and integrate seamlessly with leading data protection applications as well as all popular cloud storage services. The new WWOS 2.1 includes new management capabilities that enable monitoring and administration of all Whitewater devices from a single console with one-click drill down into any appliance.

    “The features in WWOS 2.1 and the larger virtual appliances drastically change the economics of data protection,” said Ray Villeneuve, vice president of corporate development at Riverbed. “With our advanced, in-line deduplication and optimization technologies, Whitewater shrinks data stored in the cloud by up to 30 times on average. For example, Whitewater customers can now store up to 100 terabytes of backup data that is not regularly accessed in Amazon Glacier for as little as $2,500 per year. The operational cost savings and high data durability from cloud storage services improve disaster recovery readiness and will continue to rapidly accelerate the movement from tape-based and replicated disk systems to cloud storage.”
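
    A back-of-the-envelope estimate helps when weighing claims like these against your own data set. The sketch below is a minimal, hypothetical cost estimator: the 30:1 deduplication ratio echoes the quote above, while the per-GB-month price is a placeholder assumption to be replaced with your provider’s published archive pricing (retrieval and request fees are not modeled).

```python
# Back-of-the-envelope archive cost estimate. The deduplication ratio is the
# figure quoted above; the per-GB-month price is a placeholder assumption,
# not a published rate -- substitute your provider's current pricing.
def yearly_archive_cost(logical_tb, dedupe_ratio, price_per_gb_month):
    stored_gb = (logical_tb * 1024) / dedupe_ratio  # data actually kept in the cloud
    return stored_gb * price_per_gb_month * 12

cost = yearly_archive_cost(logical_tb=100,            # backup data before dedupe
                           dedupe_ratio=30,           # "up to 30 times" from the quote
                           price_per_gb_month=0.01)   # assumed archive price
print(f"Estimated yearly storage cost: ${cost:,.0f}")
```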

    6:15p
    Dell Leads $51 Million Funding Round For Storage Company

    Solid-state storage company Skyera announced it has closed $51.6 million in financing led by Dell Ventures, with participation from other strategic investors. Skyera, backed by key technology and financial partnerships, is positioned at the forefront of the hyper-growth solid-state storage sector.

    “Skyera offers innovative technology that is breaking new ground in enterprise solid-state storage systems, including controllers, memory and software,” said Marius Haas, President, Enterprise Solutions Group for Dell. “Dell continues to expand its growing enterprise systems portfolio to help our customers do more.  We are focused on changing the economics of storage and other systems for our customers by bringing high-end enterprise features to the broad mid-market and solving enterprise problems at a mid-range price point.”

    Enterprise solid-state storage systems from Skyera give applications fast performance, lower power consumption, high density and cost effectiveness. The investment will be used to accelerate the integration of the latest-generation flash technology and drive broader adoption of Skyera’s enterprise solid-state storage solutions. Last fall, the company introduced its SEOS solid-state operating system, which optimizes the hardware, storage and data management capabilities of its purpose-built Skyhawk enterprise storage system.

    According to Gartner Research, “The SSD appliance market, while nascent, is emerging as a compelling solution to deliver high performance with ultra-low latency, which is particularly attractive today in database/data warehousing, virtual desktop, high-performance computing and cloud storage environments.  While cost-effective flash-based hardware is essential, vendors most poised for success must possess a thorough optimization of data management software specific to the characteristics of flash memory to best exploit a market projected to grow from $393 million in 2012 to nearly $4.2 billion in 2016.”

    9:15p
    Going to the Cloud? Time to Make Security and Policy Decisions


    With cloud computing taking off at a very fast pace, some administrators are scrambling to jump into the technology. Unfortunately, many organizations are purchasing the right gear and deploying the right technologies, but still forgetting the policy creation process.

    The truth is that cloud computing is relatively new for many organizations. This means that companies looking to enter the cloud must be careful and avoid jumping in with both feet. Although every environment is unique, administrators must take the time to create a plan which will help them retain control over their cloud initiative.

    One big push for cloud computing has been the concept of “anytime, anywhere and any device.” This heavily revolves around allowing users to access their own devices while pulling data from corporate locations. Although this can be a powerful solution, there are some key points to remember when working with cloud computing policy creation:

    • Train the user. A positive cloud experience, many times, begins with the end-user. This is why when creating a Bring Your Own Device (BYOD) or mobile cloud computing initiative, it’s important to train the user. Simple workshops, booklets and training documentation can really help solidify a cloud deployment.
    • Create a new cloud-ready usage policy. Although the end-point may belong to the user, the data being delivered is still corporate-owned. This is where the separation of responsibilities and actions must take place. Users must still be aware of their actions when they are accessing internal company information. It’s important to create a policy which will separate personal and corporate data.
    • Start a stipend program. One of the greatest strengths of cloud computing is that it can eliminate the need to manage the end-point. Some estimates put the cost of managing a corporate desktop at between $3,000 and $5,000 over the life of the machine. Many organizations are creating stipend programs that let users purchase their own devices and take responsibility for the hardware, while the company delivers the entire workload via the cloud. This type of separation has helped many organizations reduce cost and maintain agility.
    • Provide a listing of approved devices. When creating a cloud policy, it’s important to work around approved and tested devices. If BYOD is the plan, test out a specific set of devices that are known to work with the corporate workload (a minimal allowlist check is sketched after this list).
    • Update the general computer usage policy. Almost every organization has a computer usage policy. With cloud integration, it’s time for an update. Devices are no longer sitting on the LAN; rather, they are now distributed anywhere in the world. This policy should have a subsection outlining usage requirements, considerations and responsibilities aimed at both the user and the organization. Having a structured usage policy can help avoid confusion as to who is managing what.
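
    An approved-device list doesn’t have to be elaborate; even a small machine-readable file that enrollment tooling can consult is a start. The sketch below is a minimal, hypothetical example; the device model names and the JSON layout are assumptions, not any particular MDM product’s format.

```python
# Minimal sketch of an approved-device check against a small allowlist.
# The model names and JSON layout are hypothetical; load the list from
# wherever your enrollment tooling keeps policy (a JSON file, MDM export, etc.).
import json

APPROVED_DEVICES_JSON = """
{
  "approved_models": ["ExampleTab 10", "ExamplePhone 5", "ExampleBook Pro"]
}
"""

def is_approved(model, policy_json=APPROVED_DEVICES_JSON):
    """Return True if the device model appears on the approved list."""
    policy = json.loads(policy_json)
    return model in policy["approved_models"]

print(is_approved("ExamplePhone 5"))   # True
print(is_approved("UnknownPhone X"))   # False
```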

    When working with cloud computing there are a few best practices to keep in mind:

    • Avoid a free-for-all! As mentioned earlier, it’s very important to have an approved device list. Many cloud environments hit serious snags when administrators take the concept of “any device” a little too seriously. Some phones or types of computers may just not work well with a given initiative. Have a solid plan as to which devices will be used and develop a plan around those end-points.
    • Create a management platform. Just like a localized data center, cloud platforms must be managed. Administrators should set up alerts and alarms and monitor their infrastructure just as they would anything else. Within the cloud, resources are finite and must be managed accordingly.
    • Monitor end-user experience. The success of the cloud will greatly depend on how good the end-user experience is. Administrators will need to keep an eye on content redirection, latency and throughput bottlenecks. Staying proactive will keep the environment running more smoothly (a simple response-time probe is sketched after this list).
    • Leverage replication. A major benefit of cloud computing is data agility. The advice is simple – don’t place all of your data in one basket. Cloud replication has been made easier with better bandwidth and more WAN tools. Higher uptime and DR can be maintained with little disruption to the user by having a replicated cloud environment. So, if your organization plans on heavily relying on the cloud, have a backup plan.
    • Always innovate. Cloud computing allows administrators to go beyond their physical walls. The ability to create new application platforms, more efficient delivery methodologies and a more powerful end-user experience are all benefits of cloud computing. Organizations should use the cloud to innovate and develop!
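
    End-user experience monitoring can start with something as simple as a periodic response-time check that raises an alert when latency crosses a threshold. The sketch below is a hypothetical probe; the URL and the threshold are placeholder assumptions.

```python
# Hypothetical end-user experience probe: time an HTTP request to a
# cloud-hosted service and flag it when latency crosses a threshold.
# The URL and threshold are placeholder assumptions.
import time
import urllib.request

SERVICE_URL = "https://app.example.com/health"  # hypothetical endpoint
LATENCY_THRESHOLD_MS = 500

def check_once(url=SERVICE_URL):
    start = time.perf_counter()
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            response.read()
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > LATENCY_THRESHOLD_MS:
            print(f"ALERT: {url} took {elapsed_ms:.0f} ms")
        else:
            print(f"OK: {url} responded in {elapsed_ms:.0f} ms")
    except OSError as exc:
        print(f"ALERT: {url} unreachable: {exc}")

if __name__ == "__main__":
    check_once()
```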

    Security considerations

    Cloud computing brings with it some new challenges for IT security professionals as they try to control the data that is being delivered down to the end-user. Although security policies will vary, administrators should consider the following when working with a new cloud initiative:

    • Develop a joint security plan and keep all teams involved. Cloud computing is not an independent technology. Quite the opposite – it relies on multiple infrastructure components to operate. When developing a cloud solution, all IT teams must participate in the process. Locking down storage, keeping an eye on the WAN, and ensuring the right policies are in place are jobs for the entire IT organization.
    • Control the cloud. Even after deployment, it’s important to continuously monitor and manage the cloud environment. This means having monitors and agents in place to keep an eye on all functions within the infrastructure. Staying proactive with your cloud initiative means that administrators will catch issues before they become major problems.
    • Evolve current security settings. Existing security policies don’t need to be thrown out. They do, however, need to change. Cloud computing takes data to the WAN and allows it to live and thrive over the Internet. Security settings (Active Directory, GPO, firewall rules, etc.) all need to be evaluated and tweaked to match the goals of the cloud initiative. Furthermore, start evaluating next-generation security technologies to make your cloud infrastructure even more robust.

    Having control over your cloud environment will heavily depend on the amount of time spent planning the deployment. There are many different verticals in the industry and many different ways to approach cloud policy creation. Remember, cloud computing isn’t just one singular platform; rather, it’s a combination of technologies all working together to deliver data and resources. When these technologies are properly planned out and aligned with the goals of the business, an organization can create the recipe for a powerful cloud computing environment.

    9:15p
    Data Center Jobs: Total Server Solutions

    At the Data Center Jobs Board, we have a new job listing from Total Server Solutions, which is seeking a Sales Agent in Atlanta, GA.

    The Sales Agent needs to have an understanding of network concepts and terminology, be able to work within a team environment, and be able to meet and exceed monthly sales quotas. More than two years’ experience in telecom/data center sales is required. To view full details and apply, see job listing details.

    Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.

