 

Tuesday, May 23rd, 2017

    12:00p
    Digital Realty’s First Japan Data Center Comes Online

    Digital Realty Trust, a San Francisco-based data center REIT, has officially launched its first data center in Japan.

    Digital Osaka 1’s footprint measures more than 90,000 square feet; the facility provides 7.6 MW of power. It was fully leased at launch, and the company has already acquired an adjacent lot to continue building out what it expects will become a 27 MW data center campus.

    Osaka is a densely populated major financial hub in Asia, home to companies in a wide variety of industries as well as universities and other research and development organizations, Digital Realty said in a statement.

    The company announced it had bought a piece of land in Osaka for data center construction back in 2013. Last year, it announced that a major hyper-scale cloud provider had pre-leased the entire first phase of its first data center in Japan, becoming an anchor tenant for the campus.

    Citing a market research report, it also said Japan is now “one of the most sought-after markets for cloud data center locations” due to “strict data sovereignty laws and high customer demand.”

    On its first-quarter 2017 earnings call, Digital Realty execs said another company had leased the second phase, instantly bringing the REIT’s inventory in its new market to zero. The company hasn’t named either tenant.

    Andy Power, Digital Realty CFO, mentioned the purchase of additional land on the same call. “We won’t be bringing on incremental capacity in 2017 or early 2018 [in Osaka], but shortly thereafter, with that recent (land) acquisition,” he said.

    3:30p
    Survey: Non-IT Business Units Spend Own Budgets on Tech

    Non-IT departments and business lines within companies, such as finance, marketing, sales, and logistics, are playing an increasingly large role in technology purchasing decisions, including using their own budgets to buy technology.

    More than half (52 percent) of respondents to a recent survey by IT industry trade association CompTIA said they used business-unit budget to pay for technology purchases in 2016. Close to half (45 percent) said technology ideas came from people outside their IT organizations, and 36 percent said more executives are involved in making technology decisions.

    The association, which surveyed 675 US businesses in February, also found that non-IT units are increasingly hiring tech staff of their own, citing the need for specialized skills, faster response times, and better collaboration among the reasons.

    Carolyn April, senior director of industry analysis at CompTIA, said in a statement:

    “CIOs and information technology (IT) teams remain involved in the process, as their expertise and experience are valued. But business lines are clearly flexing their muscles. It’s another strong signal that technology has shifted from a supporting function for business to a strategic asset.”

    Business lines are primarily buying cloud-based software solutions, according to the association, which recommended that channel partners package their products and services differently. That means speaking “the language of business,” because these new technology buyers don’t necessarily want to know technical details.

    April:

    “Channel partners need to position themselves as consultants and service providers who can help customers make informed decisions about what they buy.”

    3:58p
    LinkedIn’s Data Center Standard Aims to Do What OCP Hasn’t Done

    While Facebook’s Open Compute Project has fomented a full-blown revolt against the largest American hardware vendors’ once-outsize influence on the hyper-scale data center market, by many accounts it has yet to make a meaningful impact in the smaller facilities that house the majority of the world’s IT infrastructure.

    OCP hardware has been difficult to source for companies that buy in much smaller volumes than its two biggest users, Facebook and Microsoft. And for operators who don’t want to redesign their data centers to support standard OCP requirements, the already slim choice of vendors selling OCP gear that fits into standard 19-inch data center racks narrows further.

    That’s the problem Open19, a new data center standard developed by LinkedIn, aims to solve. It promises a way to build out data centers that’s both compatible with traditional data center infrastructure and simple and quick enough to meet the servers-by-the-ton pace of hyper-scale data center operators.

    It will be a lot easier for companies to adopt Open19 “because they don’t need to change the basic infrastructure,” Yuval Bachar, LinkedIn’s principal engineer for global infrastructure architecture and strategy, said in an interview with Data Center Knowledge.

    Today, LinkedIn is launching a non-profit foundation in an effort to grow an ecosystem around its data center standard. And it’s recruited some heavyweight founding members – GE Digital, Hewlett Packard Enterprise, and the multinational electronics manufacturing giant Flex (formerly Flextronics) – in addition to the data center infrastructure startup Vapor IO.

    The Open19 Foundation’s charter is to “create project-based open hardware and software solutions for the data center industry.” Similar to the way the Open Compute Foundation (which oversees OCP) works, Open19 will accept intellectual property contributions from members, LinkedIn’s hardware spec being the first one.

    See also: Why OCP Servers are Hard to Get for Enterprise IT Shops

    Microsoft Keeps an Open Mind

    It’s unclear how complete Open19 is at the moment, or to what extent hardware built to the standard has been deployed at LinkedIn data centers. Bachar said the hardware has not yet reached production level.

    The cloud data center hardware team at Microsoft, which acquired LinkedIn last year, started standardizing on OCP across its entire global footprint in 2014, when the company joined the project. Its latest-generation cloud server design, still in the works, makes adjustments to ensure easier installation in colocation data centers around the world, including a 19-inch rack and a universal power distribution unit that supports multiple international power specs.

    Read more: Meet Microsoft, the New Face of Open Source Data Center Hardware

    It is unknown at this point whether Microsoft will eventually integrate LinkedIn’s data center infrastructure with its own, or whether it will decide it is advantageous to run the social network on the same type of hardware that runs the rest of its services.

    Kushagra Vaid, who oversees cloud hardware infrastructure at Microsoft, told us in March that the company was far from making a decision about LinkedIn’s data centers. “We haven’t really started talking about it,” he said. “We’re going on two clouds for now.”

    He added that there were elements of LinkedIn’s standard that he liked: “There are some good things in Open19.”

    Bachar said he could not comment on what Microsoft’s plans would be, saying his team was continuing to be focused on building an infrastructure that would improve performance for LinkedIn members. “For LinkedIn, this is the future of how we build our … data centers.”

    Bricks and Cages

    There are other key differences between OCP and Open19, beyond the form factor. Unlike OCP, LinkedIn’s standard doesn’t specify motherboard design, types of processors, network cards, and so on. It also doesn’t require that suppliers that want to sell Open19 gear open source their intellectual property.

    “When we built OCP, we built it as a community-led standards organization, where companies and individuals could donate intellectual property and have that intellectual property be innovated again,” Cole Crawford, Vapor IO founder and CEO and former executive director of the Open Compute Foundation, said in an interview with Data Center Knowledge.

    “Open19 is a standard in and of itself,” specifying a common chassis and network backplane but not the electronics inside, he went on. “Whatever exists inside of that chassis … that can be differentiated by OEM (Original Equipment Manufacturer), by an ODM (Original Design Manufacturer), with no [IP contribution] requirements at all.”

    Open19 describes a cage that can be installed in a standard rack and filled with standard “brick” servers of various widths and heights (half-width, full-width, single rack-unit height, double height). It also includes two power shelf options and a single network switch for every two cages.

    A data center technician can quickly screw the cage into a rack and slide brick servers in, without the need to connect power and network cables for every node.
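    For illustration only, here is a minimal Python sketch of the layout described above. The class names, capacity rule, and slot counts are hypothetical assumptions rather than part of the Open19 specification; the only details taken from the article are the brick form factors and the one-switch-per-two-cages arrangement.

        from dataclasses import dataclass, field
        from enum import Enum
        from typing import List

        class BrickWidth(Enum):
            HALF = 0.5   # two half-width bricks sit side by side in one slot
            FULL = 1.0

        class BrickHeight(Enum):
            SINGLE = 1   # single rack-unit height
            DOUBLE = 2   # double height

        @dataclass
        class Brick:
            """One Open19 'brick' server that slides into a cage."""
            model: str
            width: BrickWidth
            height: BrickHeight

            def footprint_ru(self) -> float:
                # Rack units of cage space this brick occupies.
                return self.height.value * self.width.value

        @dataclass
        class Cage:
            """A cage screwed into a standard 19-inch rack and filled with bricks."""
            capacity_ru: int                       # hypothetical usable space per cage
            bricks: List[Brick] = field(default_factory=list)

            def used_ru(self) -> float:
                return sum(b.footprint_ru() for b in self.bricks)

            def add(self, brick: Brick) -> None:
                if self.used_ru() + brick.footprint_ru() > self.capacity_ru:
                    raise ValueError("cage is full")
                self.bricks.append(brick)

        @dataclass
        class CagePair:
            """Per the description above, one network switch serves every two cages."""
            cages: List[Cage]
            shared_switch: str = "open19-switch"   # placeholder identifier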

    Standardizing All the Way to the Edge

    Another way Open19 stands out is by standardizing both core data centers and edge deployments, an increasingly important and growing part of the market. As digital services have to process more and more data to return near-real-time results, companies put computing infrastructure closer to where the data gets generated or where the end users are, places like factory floors, distribution warehouses, retail stores, wireless towers, and telco central offices.

    Edge is a key play for Vapor IO, whose Vapor Chamber and remote data center management software are designed for such deployments.

    Edge data centers are also key to GE Digital’s major play, its industrial internet platform Predix, which collects sensor data from things like jet engines and locomotives and analyzes it to predict failures, for example. Predix is a cloud platform for developers building industrial internet applications, and as such requires a highly distributed, global infrastructure. Different data center standards across suppliers and geographies have made the process of building this platform difficult, Darren Haas, VP of cloud engineering at GE Digital, said in a statement.

    “Predix extends our capabilities across all form factors — from the edge all the way through to the cloud,” he said. “We built Predix so developers can create software that moves between the various form factors, environments and regions, but we still wrestle with different standards and systems by node, region and vendor.”

    6:04p
    The Challenges of Implementing Microservices

    Wayne Gibbins is CCO at Wercker.

    The emerging combination of micro-service architectures, Docker containers, programmable infrastructure, cloud, and modern Continuous Delivery (CD) techniques has enabled a true paradigm shift for delivering business value through software development.

    Gartner sees the same shifts, and in its “2016 Hype Cycle for Application Architecture,” the research group suggested that the micro-service architectural style is at the “peak of inflated expectations.” Accordingly, there may be some difficult times ahead for organizations that choose to adopt microservices, but we believe that one of the primary first steps towards success with this new architectural style is the codification and validation of functional and nonfunctional requirements within a CD pipeline.

    Look Before You Leap: Visibility and the Codification of Requirements

    The adoption of agile and lean software development methodologies has been a reaction to the need for increased speed within the business. For organizations still struggling to adopt these approaches, techniques like Value Stream Mapping and Wardley Mapping can prove invaluable. In reality, many organizations don’t have a holistic picture of what it takes to get an initial business idea or functionality improvement through to the end user.

    It can also be tempting to blame one part of the organization for slowing delivery without really understanding the journey of a work item through the entire delivery system. One area that often gets blamed is Quality Assurance (QA), as the industry has tended to validate quality only after software has been designed and implemented (leaving QA to be the “gatekeepers” of quality), to rely on ineffective manual efforts, or to sacrifice this part of delivery entirely.

    We are strong believers in making work visible. A strong corollary of this need for visibility is the need to codify requirements, as this facilitates automation, which in turn enables speed. The desire of CEOs and CIOs to deliver value through software more rapidly, and with increased safety, must be built upon automation. Many companies are codifying and validating requirements using custom steps within their CD pipelines and workflows. Behavior-Driven Development (BDD) is very useful for forming and codifying business hypotheses and behavior. Automated performance and security testing – with specified ranges of acceptability codified into pipeline steps – should be considered table stakes in the world of modern software development.
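    As a concrete illustration of codifying a nonfunctional requirement as a pipeline step, the sketch below checks that the 95th-percentile latency of a service endpoint stays within an agreed budget and fails the build otherwise. The endpoint URL, sample size, and threshold are assumptions made for the example, not recommendations.

        import sys
        import time
        import urllib.request

        ENDPOINT = "http://staging.example.internal/health"  # hypothetical staging URL
        SAMPLES = 50
        P95_BUDGET_MS = 250.0  # the agreed range of acceptability, codified here

        def measure_once(url):
            """Time a single request to the endpoint, in milliseconds."""
            start = time.perf_counter()
            with urllib.request.urlopen(url, timeout=5) as resp:
                resp.read()
            return (time.perf_counter() - start) * 1000.0

        def main():
            latencies = sorted(measure_once(ENDPOINT) for _ in range(SAMPLES))
            p95 = latencies[int(0.95 * (len(latencies) - 1))]
            print("p95 latency: %.1f ms (budget %.1f ms)" % (p95, P95_BUDGET_MS))
            # A non-zero exit code fails this pipeline step and blocks promotion.
            return 0 if p95 <= P95_BUDGET_MS else 1

        if __name__ == "__main__":
            sys.exit(main())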

    Microservices: Macro Impact with People, Architecture and Ops

    Much has been said about the impact of Conway’s Law on the development of modern software, and indeed, the micro-service architectural style emerged out of organizational realignment at unicorn organizations like Netflix, Gilt Group, and Spotify. This process has been further refined into the “Inverse Conway Maneuver,” in which an organization deliberately manipulates structure and team alignment to promote an intended software architecture: forming small, cross-functional teams with well-defined business goals, backed by automated platform deployment and operational support, should result in the creation of a microservice-based system.

    Delivering a single microservice is not inherently difficult. The bigger issues arrive with integration. This is particularly challenging for enterprises, which are naturally tackling complex problems that require many components to work correctly together to deliver the promised value. During the first iteration of service-oriented architecture, classical SOA, the solution was to specify service functionality with the Web Services Description Language (WSDL), effectively providing a contract against which to test and validate integrations. Although a great idea, the reality of WSDL was that it could be overly restrictive, and there were compatibility issues across platforms.

    As the micro-service style has evolved, early proponents clearly took note of the integration challenges with SOA. Methodologies like Consumer-Driven Contracts (CDC) have emerged for the design and specification of services, with some even calling this approach “Test-Driven Design (TDD) for modern architecture”; service virtualization has evolved to promote isolated testing of services; and run-time validation is frequently performed in staging (or production), with canary deployments and synthetic transactions continually testing happy-path functionality and critical nonfunctional requirements.
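    To make the CDC idea concrete, here is a hand-rolled sketch of a consumer-driven contract check, independent of any particular contract-testing tool. The consumer team records the fields and types it depends on, and the provider's test suite verifies that a real (or stubbed) response still satisfies that expectation. The field names and sample payload are purely illustrative.

        # Contract published by the consumer team: field name -> expected Python type.
        CONSUMER_CONTRACT = {
            "order_id": str,
            "status": str,
            "total_cents": int,
        }

        def verify_contract(payload, contract):
            """Return a list of violations; an empty list means the provider passes."""
            violations = []
            for field, expected_type in contract.items():
                if field not in payload:
                    violations.append("missing field: %s" % field)
                elif not isinstance(payload[field], expected_type):
                    violations.append(
                        "field %r is %s, expected %s"
                        % (field, type(payload[field]).__name__, expected_type.__name__)
                    )
            return violations

        def test_provider_honours_consumer_contract():
            # In a real pipeline this payload would be fetched from the provider's API
            # as part of the provider's own CD run.
            provider_response = {"order_id": "A-1001", "status": "shipped", "total_cents": 4999}
            assert verify_contract(provider_response, CONSUMER_CONTRACT) == []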

    All of these methodologies can, and should, be integrated into your build pipeline. As software consultant Kevlin Henney famously suggested, the reason automobiles have brakes is so that they can travel faster. In the world of implementing microservices to deliver rapid value, the integration of automated validation into your continuous delivery pipeline is directly analogous: if something goes wrong, the system automatically hits the brakes on the deployment.
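    Below is a minimal sketch of that “automatic brakes” idea, assuming a canary deployment whose health is summarized by a single error-rate metric: poll the metric, and roll the release back if it ever exceeds a codified threshold. The threshold, observation window, and the two callables are assumptions; in practice they would be wired to your metrics system and deployment tooling.

        import time

        ERROR_RATE_THRESHOLD = 0.02   # roll back if more than 2% of canary requests fail
        OBSERVATION_WINDOWS = 5       # samples to take before promoting the canary
        SAMPLE_INTERVAL_SECONDS = 60

        def guard_canary(fetch_error_rate, rollback):
            """Return True if the canary may be promoted, False if it was rolled back.

            fetch_error_rate: callable returning the canary's current error rate (0..1).
            rollback: callable that reverts the deployment.
            Both are supplied by the caller, since they depend on local tooling."""
            for _ in range(OBSERVATION_WINDOWS):
                if fetch_error_rate() > ERROR_RATE_THRESHOLD:
                    rollback()   # the system hits the brakes automatically
                    return False
                time.sleep(SAMPLE_INTERVAL_SECONDS)
            return True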

    Summary

    The micro-service architectural style provides many benefits, but Gartner cautions that we are at the “peak of inflated expectations.” The hype around this approach should not be ignored; the risks can, however, be mitigated with a sensible and structured approach to implementing Continuous Delivery (CD) within your software delivery process. The successful implementation of microservices does not occur purely at the architectural level: attention must also be paid to fostering the correct organizational structure and developing a culture of hypothesis-driven development in order to realize the full benefits.

    Finally, validating the functional and nonfunctional requirements of a micro-service system is essential, both at the individual-service level and (arguably most importantly) at the integrated-system level. We strongly believe in the power of building effective continuous delivery pipelines and view this as a core component for organizations looking to embrace microservices and the modern “cloud-native” approach to delivering valuable software.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Penton.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

