Data Center Knowledge | News and analysis for the data center industry

Tuesday, October 21st, 2014

    1:00p
    The Software-Defined Data Center: Translating Hype into Reality

    ORLANDO, Fla. - About 80 percent of the software being written today is coded to be cloud-ready and run on virtualized infrastructure. That’s according to Bina Hallman, vice president of storage systems management at IBM, who said the data is evidence of the looming transition to a software-defined infrastructure in modern data centers.

    This shift was the focus of a Monday panel session at the Data Center World conference in Orlando, where specialists in storage, networking and data center environments discussed how the hype around “software-defined everything” translates into real-world change inside data centers.

    “Software-defined data centers are a spectacular opportunity,” said Art Meierdirk, senior director of business services for INOC, which offers outsourcing services for network operations. “We’re moving toward a point where users can manage their businesses in ways that we wouldn’t have dreamed of 20 years ago.”

    Virtualization brings agility

    Software-defined technologies are driven by virtualization, an abstraction layer that uses hypervisors and virtual machines to organize and manage workloads in new ways. Provisioning virtual resources with software makes it easier to scale applications and use hardware efficiently. Software-defined networking holds the promise of reducing costs by shifting network management tasks to commodity servers rather than expensive switches.
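
    To make the “software-defined” idea concrete: instead of configuring each switch by hand, network behavior is expressed as match/action rules that controller software pushes out to the hardware. The sketch below is a generic, hypothetical illustration in Python; the rule format and switch name are invented, not any specific controller’s API:

        # Conceptual sketch: network policy as data, managed by controller software
        # rather than per-switch CLI commands. Rule format is illustrative only.
        flow_rules = [
            {"match": {"dst_ip": "10.0.2.0/24", "tcp_port": 443}, "action": "forward:port7"},
            {"match": {"dst_ip": "10.0.9.9/32"},                  "action": "drop"},
        ]

        def apply_rules(switch_id, rules):
            for rule in rules:
                # A real controller would push each rule to the switch over a
                # southbound protocol such as OpenFlow; here we only print it.
                print(f"{switch_id}: match={rule['match']} -> {rule['action']}")

        apply_rules("tor-switch-01", flow_rules)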

    It’s a new world, with major implications for infrastructure. Virtual machines make it easier to move workloads from one location to another, a capability that unlocks a world of possibilities. Companies can save money by shifting VMs between racks and even across data centers, seeking the cheapest and most efficient environment for a given workload or time of day.
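
    The cost-seeking placement described above boils down to a small optimization over candidate sites and time-of-day energy prices. Here is a minimal sketch, with invented site names and rates purely for illustration:

        # Hypothetical example: pick the cheapest site for a movable virtualized
        # workload, given assumed hourly energy prices at each location.
        SITES = {
            "east-dc": [0.09] * 7 + [0.14] * 12 + [0.09] * 5,   # 24 hourly $/kWh rates
            "west-dc": [0.11] * 24,
        }

        def cheapest_site(hour, draw_kw):
            """Return the site with the lowest energy cost for this workload right now."""
            return min(SITES, key=lambda s: SITES[s][hour] * draw_kw)

        print(cheapest_site(hour=14, draw_kw=6.0))   # "west-dc" while east-dc is at peak rates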

    “SDN will change the way we think about data centers,” said Aaron Rallo, CEO of TSO Logic, which makes capacity management software. “As my workloads start moving around, how do you keep track of them? There’s a lot of upside, but you need new tools and analytics.”

    As workloads move, the power and heat travel along with them, raising challenges for power distribution and cooling. As Rallo noted, this places a premium on communication between the IT team and the facilities staff, two groups that don’t always work in tandem within the data center.

    A transition for the data center

    Software-defined technologies can be implemented in compute, storage, and network. Some may be skeptical about the many buzzwords and futuristic visions, seeing software-defined as the IT equivalent of flying cars. But as these technologies develop and mature, panelists said, they will see increasing traction and companies will need to develop a roadmap for adoption.

    “How do you get an infrastructure in place that can run the traditional business, but take advantage of these opportunities in social, mobile and analytics?” Hallman asked. “We believe that open standards are the key to accelerating innovation.” The OpenStack cloud platform is one example of an open technology that can serve this function, she noted.

    In software-defined networking, the adoption process starts with small-scale implementations, according to Marco Alves, an engineer at SDN Essentials, which offers training services for network professionals.

    “A lot of businesses are already testing SDN technologies,” said Alves. “Nobody is talking about whether to implement an SDN solution. The discussion is really about what type of solution. To solve these issues, migration strategies are fundamental.”

    As companies refine their proof-of-concept installations, they will begin to deploy software-defined solutions in gradual steps.

    “When the change comes, it will be a phased process,” said Alves. “I don’t think anyone will do a forklift upgrade and replace their infrastructure.”

    Rallo agreed. “There’s a lot of steps and levels between where data centers are, and where they’re going,” he said.

    3:00p
    Mirantis Raises $100M to Push “Zero-Lock-In” OpenStack Distro

    OpenStack vendor Mirantis has landed a huge financing round it said will allow the company to double its engineering efforts. The $100 million Series B was led by Insight Venture Partners, with participation from August Capital, and existing investors Intel Capital, WestSummit Capital, Ericsson, and SAP.

    This financing round eclipses the two $10 million rounds the company has raised since it was founded three years ago, as Mirantis looks toward a potential IPO in 2016.

    In addition to increasing investments in OpenStack software development, Mirantis said it will step up plans to grow its open partner ecosystem and expand in Europe and the Asia-Pacific region.

    Mirantis President and CEO Adrian Ionel describes the company as a pure-play OpenStack provider whose mission is to empower an open cloud by distributing a zero-lock-in OpenStack solution. Ionel said the company’s “customers are seeing the value; we’ve gone from signing about $1 million in new business every month to $1 million every week.”

    Reporting a strong first half of the year, Mirantis noted that its OpenStack distribution was being used by 45 of the world’s largest telecom operators, a market that Mark Collier, chief operating officer of the OpenStack Foundation, said was ripe for disruption. Mirantis has also helped with a number of other large OpenStack installations, including Cisco, Comcast, DirecTV, Ericsson, Expedia, NASA, NTT Docomo, PayPal, Symantec, Samsung, WebEx and Workday.

    Mirantis has been a leading contributor to the OpenStack project for some time, as well as other areas such as the Open Platform for NFV (Network Function Virtualization) initiative, where it will help build a carrier-grade, integrated, open-source reference platform. The company has its own training program complete with OpenStack Professional certification. It also has a strong, global partner ecosystem, with many tight integrations with leading software and hardware vendors.

    Juno, the latest release of the OpenStack open source cloud software suite, came out last week.

    3:30p
    Ensuring Your Cloud Provider Meets Your Business Needs

    Christopher Stark is the founder, president and CEO of Cetrom Information Technology, Inc.

    Every company has its own needs, yet solution providers often try to force a square peg into a round hole by offering a “one size fits all” cloud solution. Companies that opt for this cloud model must adjust their processes and procedures to the capabilities of the system, while enduring the pain of knowing that the solution does little to address their business needs.

    If you own a small business, staying within a budget is an ongoing challenge, so finding a solution provider that can facilitate a migration to the cloud at the right price is a logical first step. A pay-as-you-go model offers extensive flexibility for the future. Since the price is based on what your organization actually uses, this model helps you and your service provider easily adjust to changes in your business and the surrounding market. Proceed with a budget in mind, but remember that cost should not deter you from finding the right solution.
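
    As a rough illustration of how a usage-based bill tracks the business, consider the following sketch; the rates and usage figures are invented, not any provider’s actual pricing:

        # Hypothetical pay-as-you-go bill: cost rises and falls with actual usage.
        RATES = {"vm_hour": 0.08, "gb_stored": 0.05, "gb_transferred": 0.09}   # assumed USD rates

        def monthly_bill(vm_hours, gb_stored, gb_transferred):
            return (vm_hours * RATES["vm_hour"]
                    + gb_stored * RATES["gb_stored"]
                    + gb_transferred * RATES["gb_transferred"])

        print(round(monthly_bill(2000, 500, 300), 2))   # busy month:  ~$212.00
        print(round(monthly_bill(1200, 500, 120), 2))   # slower month: ~$131.80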

    To ensure your company’s needs are met, the following tips should be considered when evaluating a cloud solution provider.

    The solution provider should tailor the cloud solution to your company’s specific needs. Every business utilizes applications relevant to its own industry, so selecting a cloud solution that does not support these applications negatively impacts productivity and the company’s bottom line. The service provider you choose should have the capability to design and implement a solution exclusively for your business, including its unique applications. Invest time in researching and interviewing prospective solution providers, or soliciting opinions of similar-sized companies. The time spent in this phase will be worth it when your company finds a cloud vendor that is able to design, develop and implement a cloud solution that keeps your business operations running smoothly and efficiently.

    The relationship with the solution provider should be a partnership. Because of the investment your company is making in migrating to the cloud, a certain amount of trust should be expected between the business and the service provider. This relationship needs to develop into a partnership that lasts through the duration of your contract. During the transition to the cloud, it is not enough for a solution provider to leave your company with the technology and instruction manuals, expecting your staff to complete the process. The cloud provider should be on-site to execute the migration and answer any questions your staff has about the new technology. When the migration is completed, the service provider must be readily available to address concerns and resolve any issues. With an effective partnership in place, your organization can operate with the confidence of knowing it has the necessary resources to achieve its business goals.

    Your solution provider should have an eye for the future. As your company strives to stay ahead of the curve in its industry, the solution provider must do the same. Constant innovation and a forward-thinking mentality help solution providers offer cutting-edge solutions and facilitate brainstorming sessions with clients. Leading vendors also look to their customers to understand ways to improve and enhance the products they offer. As the cloud provider continues to improve its solution, your business will reap the benefits of hosting data and applications on an up-to-date technology platform.

    Ultimately, the goal for any business is to identify a solution provider that meets and exceeds the needs and goals you already have in place. Selecting a provider that employs the “let’s see what makes sense for you” approach when developing a cloud solution will enable you and your staff to maintain peace of mind that your technology is working for you and shift your focus back to being productive and profitable.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    4:00p
    Failure is Not An Option: Lessons Learned from Disasters

    ORLANDO, Fla. - As a passenger boards an airplane, she notices a crack in the fuselage near the door, but doesn’t mention it to the flight crew.

    Sound improbable? It happened in April 1988 prior to the takeoff of Aloha Airlines flight 243. Minutes later, a section of the top of the airplane tore off at a height of 24,000 feet. A flight attendant was blown out of the plane and killed, and dozens were injured. Miraculously, the flight crew was able to land the plane.

    Why wouldn’t someone report an obvious threat to safety? Adrian Porter posed the question to a room full of data center professionals Monday at the Data Center World conference. Porter, the senior manager for data management at 1-800-CONTACTS, spoke about real-world disasters and the lessons they hold for disaster recovery.

    The consensus: The woman who spotted the crack in the fuselage assumed the airline must already know about it, and therefore it must be okay to fly.

    That’s not so improbable to imagine, Porter said, when you consider the number of warnings and alerts that IT professionals receive. Most don’t rise to the level of the Aloha incident. But there are plenty of examples of failing to note warning signs – like the fact that the Aloha jet designed for 34,000 takeoffs had already logged 89,000 flights, leading to metal fatigue.

    “Silence is not golden,” said Porter. “If you see something, don’t trust that someone else is taking care of it.”

    Overcommitment is dangerous

    On other occasions, stakeholders get emotionally and financially committed to a goal and find it difficult to heed warning signs. As an example, Porter cited the 1996 expeditions to Mount Everest, in which a sudden blizzard at high elevation led to the deaths of eight climbers.

    The climbers included teams from two expedition companies whose clients paid $65,000 for the Everest experience, and spent two months getting acclimated to the altitude. When a break in the weather offered an opportunity and the summit loomed, it became hard to turn back, even after the 2 p.m. deadline by which teams should begin their descent to find safety before nightfall.

    “It’s human nature to push forward,” said Porter. “But sometimes the safest thing is to roll back. Stick to the script. Rollbacks are better than catastrophes.”

    Sometimes disasters provide examples of grace under pressure. Porter cited the example of Apollo 13, which was crippled by an explosion that forced three astronauts to power down their command module and use the attached lunar lander as a “lifeboat” to stay alive. The story, which is well known from the film starring Tom Hanks, ends with their safe return to earth – but only after crews of engineers from Mission Control improvised procedures to preserve critical functions of the damaged spacecraft.

    Porter quoted NASA Flight Director Gene Kranz, who counseled his engineering team to focus and “work the problem” until a solution was found. “Failure,” Kranz noted, “is not an option.”

    4:31p
    Azure Data Center Comes to Australia as Microsoft Aims for Cloud Domination

    Microsoft is planning to launch a data center in Australia next week, which will support the first Azure cloud availability region in the country.

    The company plans to have 19 Azure regions by the end of the year – up from the current 15. The cloud’s other Asia-Pacific locations are in Japan, Singapore, and Hong Kong.

    Microsoft CEO Satya Nadella and Scott Guthrie, executive vice president of the company’s Cloud and Enterprise business, made the announcement during a company event in San Francisco Monday. Along with the new cloud region they announced numerous other products, all aimed at dominating the global cloud services market.

    As Guthrie put it in a blog post, Microsoft wants to deliver “the industry’s complete cloud – one for every business, every industry and every geography.”

    The company said in September it would launch Azure data centers in India. Anonymously sourced news reports have also revealed plans to expand its physical cloud footprint in Germany and South Korea.

    In addition to full-fledged data centers, Microsoft expands its cloud presence through agreements with colocation providers and network carriers, which offer private network links between their customers’ infrastructure and Azure data centers.

    Azure in a Dell box

    Guthrie and Nadella announced new Dell-made servers that come packed with all the Microsoft software necessary to deploy Azure-like cloud infrastructure in customers’ own data centers. Called Cloud Platform System, the servers combine Azure, Windows Server and System Center.

    The systems are meant to make it easier to stand up hybrid cloud infrastructure, which melds on-premise private cloud deployments with public Azure services. “The Microsoft Cloud Platform System, powered by Dell, is the next step in hybrid cloud and brings all our learnings running Azure to your data center,” Guthrie wrote.

    Hybrid cloud is an important section of the market for providers that want to serve security- and compliance-conscious enterprise customers. VMware, Microsoft’s major rival in the space, has an offering called vCloud Air (formerly vCloud Hybrid Service) that promises seamless integration of customers’ on-premise VMware environments with its public cloud services.

    CoreOS and Cloudera coming to Azure

    Another piece of Azure news was a partnership with Cloudera, an Intel-backed Hadoop distribution company, and with CoreOS, one of the cloud world’s hottest startups with a Linux-based operating system that enables companies to set up and operate clusters of commodity servers the same way web-scale data center operators like Google and Facebook do.

    CoreOS is now available as one of the types of OS images customers can deploy on Azure, and Cloudera’s data management platform, called Cloudera Enterprise, will be Azure-certified by the end of 2014, Microsoft expects.

    The news follows last week’s announcement of a partnership with Docker, another cloud-startup darling, which provides products built around its eponymous open source application container technology.

    New high-performance cloud VMs and cloud storage

    Finally, the execs announced a new VM flavor available on Azure, called the G-series, and premium cloud storage. Both are for workloads with high performance requirements. Microsoft said they were for customers with “the most demanding workloads in the cloud.”

    8:04p
    Clise Pitches 12-Story Seattle Data Center That Will Recycle Waste Heat

    A new 12-story data center has been proposed for the Denny Triangle neighborhood of Seattle, Washington. Clise Properties and Graphite Design Group submitted data center plans to the design review board for the upcoming development. The location is close to Amazon’s Rufus 2.0 campus and is currently a parking lot.

    Plans call for a 12-floor facility predominantly dedicated to data center equipment space. Early renderings show generator and UPS floors just above the entry and loading levels, topped by eight “typical” data center floors of about 11,000 square feet each. The project could be completed by early 2017 if approved.

    Clise is planning to recycle data center heat by piping hot air into heating systems of nearby office buildings, similar to the way Amazon is planning to use heat generated by data centers in the nearby Westin Building. Clise also developed the Westin property, which is one of the west coast’s biggest network hubs. Digital Realty Trust bought a 49-percent stake in the building in 2006, entering a joint venture with Clise.

    Amazon’s campus is visually distinct, with three globe-like structures. The plan is to use data center waste heat to warm the entire high-rise campus, which covers four blocks. Clise formed a company called Eco District together with McKinstry to design and build the system.

    Exhaust air from server farms can reach temperatures above 100 F. That heat is normally wasted and undesired. In the proposed data center waste heat recycling system, that heat is piped into office buildings, putting it to use and cutting heating bills.
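
    A back-of-the-envelope calculation shows why that exhaust heat is worth capturing. The figures below are illustrative assumptions, not numbers from the Clise or Amazon projects:

        # Heat carried by exhaust air: Q = rho * V * cp * dT
        # (air density ~1.2 kg/m^3, specific heat ~1.005 kJ/kg-K)
        def recoverable_heat_kw(airflow_m3_per_s, delta_t_c):
            rho, cp = 1.2, 1.005
            return rho * airflow_m3_per_s * cp * delta_t_c

        # e.g. 40 m^3/s of exhaust air delivered 15 C warmer than the office return air
        print(round(recoverable_heat_kw(40, 15)))   # ~724 kW available to the heating loop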

    Multiple companies have undertaken data center waste heat recycling projects, including Telecity and IBM, doing it in a variety of ways. A partial list is available here.

    Seattle has a thriving high-tech industry. As the home of Amazon and Microsoft, the area played a crucial role in the advent of cloud services. The proposed data center has a rich ecosystem of tech companies to tap as potential tenants. Although the project is still in the early design stages, there is already significant interest from potential future tenants, Clise Properties President and Chief Operating Officer Richard Stevenson told the Puget Sound Business Journal.

    The data center is planned for the corner of Sixth Avenue and Bell Street at 2229 6th Avenue, according to public records. It is southeast of Seattle Center, a 74-acre park, arts and entertainment center originally built for the 1962 World’s Fair.

    8:30p
    StackIQ Raises $6M for Web-Scale IT Automation and DevOps for Enterprises

    Web-scale IT automation software company StackIQ has completed a $6 million Series B funding round. The company, which targets the enterprise market, said it has added several Fortune 100 enterprises to the customer roster this year. The new funding will allow it to scale its operations and increase its marketing initiatives.

    StackIQ simplifies the deployment and operation of large server clusters and distributed applications within enterprise data centers. It helps enable DevOps in the enterprise by automating processes; DevOps refers to the close integration of software development and IT operations teams and practices.

    The company has appointed John Oh, formerly vice president of worldwide marketing for F5 Networks, to oversee its marketing efforts.

    Often within an enterprise, a handful of people are in charge of large, web-scale IT distributed across several data centers and clouds. StackIQ tries to make their lives easier by putting full-stack automation into their hands for building, deploying and managing distributed big data and cloud applications.
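
    In broad strokes, that kind of full-stack automation means describing the whole cluster declaratively and letting software drive every node through the same build steps. The sketch below is a generic illustration of the pattern, not StackIQ’s actual product API:

        # Hypothetical declarative cluster spec driving uniform provisioning.
        CLUSTER_SPEC = {
            "hadoop-master": {"count": 2,  "roles": ["namenode", "resourcemanager"]},
            "hadoop-worker": {"count": 40, "roles": ["datanode", "nodemanager"]},
        }

        def provision(node_type, index, roles):
            # A real tool would image the host and apply role-specific configuration;
            # this stub only records the intent.
            print(f"provisioning {node_type}-{index:03d} with roles {roles}")

        for node_type, spec in CLUSTER_SPEC.items():
            for i in range(spec["count"]):
                provision(node_type, i, spec["roles"])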

    Research firm Gartner predicts that web-scale IT will be an architectural approach found within half of global enterprises by 2017. This means there will be a growing need for tools like StackIQ to help manage these infrastructures and an increasing amount of DevOps.

    StackIQ has deep integration with all the major Hadoop distributions for automating big data tasks. It is also integrated with OpenStack, the open source cloud computing platform that has become a growing favorite among users of the DevOps approach.

    StackIQ released Cluster Manager at the beginning of the year. The release added cluster-wide automated configuration, orchestration and change management.

    Some recent StackIQ customer wins include one of the world’s largest communications and media companies, a major U.S. wireless carrier, a major automobile manufacturer and several financial services companies.

    “The convergence of cloud and big data are reshaping today’s enterprise data center. Highly distributed application architectures are now becoming the ‘new norm’ as customers are harvesting mountains of data to derive insights and drive competitive advantage,” said Sherman Chu, managing partner at Grayhawk Capital. “Underpinning this reality is the need for an infrastructure that scales easily. StackIQ is solving this daunting task through the industry’s only comprehensive cluster-aware automation solution for web-scale applications.”

    Venture capital continues to find its way to providers looking to help enterprises tackle web-scale IT. Mesosphere, which says it helps an enterprise treat distributed data centers and clouds like a single computer, raised $10.5 million this year. Moogsoft recently raised $11.6 million to help enterprises ditch rule-based systems through its “collaborative situation management” paradigm. CoreOS, whose Linux OS distribution can update simultaneously across massive server deployments, has also raised funding.

    New StackIQ funding participants include Grayhawk Capital, Keshif Ventures, DLA Piper, and OurCrowd. Existing investors Anthem Venture Partners and Avalon Ventures also participated.

    9:29p
    Outage of Vietnamese News Sites Caused by Country’s Largest Attack Ever

    This article originally appeared at The WHIR

    A joint investigation by leading Vietnamese web host VCCorp and police has determined that a massive outage last week was caused by an attack which is being called the largest ever cyberattack in Vietnam. VCCorp hosts numerous Vietnamese news sites, which were knocked offline on Monday October 13th, and gradually came back online over two days.

    During the outage websites displayed the message “Data center is experiencing problems. Please come back later,” according to Tuoi Tre News. The outage was initially blamed on a technical issue at the company’s data center, but the possibility of “third party” interference was acknowledged by VCCorp.

    The attack took down VCCorp-hosted news sites giadinh.net.vn, nld.com.vn, dantri.com.vn and others, as well as a number of VCCorp-operated news sites. Tran Quang Chien, managing director of local cybersecurity site SecurityDaily, said the attack was unusual in that it did not simply target user information, shut down platforms or change interfaces.

    “The hackers went much further this time by deliberately deleting all of VCCorp’s data in a way that would make it hard for the system administrators to retrieve the [data] loss,” Chien said, calling it “the largest cyberattack, with the most severe damage, ever in Vietnam.”

    Tuoi Tre cites unverified reports suggesting that the attack originated from within the company, and also says that experts consider most online service providers in Vietnam to be lax on internet security.

    Earlier this month FireEye and SingTel announced a partnership to address the APAC cybersecurity market.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/outage-vietnamese-news-sites-caused-countrys-largest-attack-ever

    10:00p
    CenturyLink Moves into Switch SUPERNAP in Vegas

    CenturyLink announced Tuesday that it has signed a data center services deal with Switch. CenturyLink will sell space in one of Switch’s Las Vegas facilities, while Switch customers will have access to the Monroe, Louisiana-based telco’s extensive data center services portfolio.

    The Switch SUPERNAP campus in Vegas is well known for its sleek design, massive size and ex-military security personnel. CenturyLink has been expanding its data center footprint ever since it doled out $2.5 billion to acquire Savvis in 2011.

    CenturyLink’s data center portfolio is now up to about 60 locations around the world. Among other recent expansions was the launch of its first data center in mainland China in September and the launch of its second site in Toronto in August.

    The company uses a mix of data center providers to expand its footprint. They include another recently opened location, inside an IO facility in Phoenix, and a Compass-built facility in Shakopee, Minnesota.

    While keeping colocation services at the core, the company has been aggressively pursuing a position as one of the technology leaders in cloud services. It has acquired an Infrastructure-as-a-Service company called Tier 3 and a Platform-as-a-Service company called AppFog, and has been investing in developing its offerings in both flavors of cloud. Earlier this month it opened a new development center in Seattle that will focus on integrating its varied portfolio of services onto a single unified platform.

    CenturyLink has significant network service business in Las Vegas, where it employs about 700 people.

    “Partnering with CenturyLink, a company that many of our customers already leverage for network services, will benefit the local Las Vegas community and extend a world-class product to the global CenturyLink customer base,” said Rob Roy, CEO and founder of Switch.

    This is the second high-profile customer win Switch has talked about publicly this month, following the announcement of a 1,000-cabinet deal with the online photo editing and sharing service Shutterfly.

    The Switch SUPERNAP campus has two buildings, together spanning more than 750,000 square feet. Switch is building another 600,000-square-foot facility adjacent to them.

    Its roughly 1,000 customers include eBay, Google, Cisco, VMware and Microsoft, which hosts its Xbox One cloud infrastructure there.
