Data Center Knowledge | News and analysis for the data center industry

Friday, June 23rd, 2017

    12:16p
    Foxconn Dangles $10 Billion Tech Investment to Create U.S. Jobs

    (Bloomberg) — Terry Gou is intent on building Foxconn Technology Group’s international footprint.

    The world’s largest maker of iPhones is readying $10 billion or more of investment across several U.S. states, starting with a decision by July on the location for a $7 billion display-making plant. Foxconn’s billionaire chairman also vowed to press on with a bid for Toshiba Corp.’s semiconductor business — a deal that could cost $27 billion and usher Foxconn into the memory chip business.

    The founder of Foxconn, whose main listed arm is Hon Hai Precision Industry Co., is now weighing the pros and cons of locating the factory in one of several U.S. states, including Wisconsin. Taiwan’s richest man said he’d been in dialog with the White House and various state governors — one of whom called him moments before he kicked off Thursday’s shareholders meeting. A cutting-edge plant in the Midwest would mark a victory for President Donald Trump and his effort to bring jobs back to America.

    “Our investment in the U.S. will focus on these states because they are the heart of the country’s manufacturing sector,” he told investors. “We are bringing the entire industrial chain back to the traditional manufacturing region of the U.S. That may include display making, semiconductor packaging and cloud-related technologies,” Gou told reporters later, without elaborating.

    Gou said the plant could be in one of six states without naming all of them. He later said Foxconn’s U.S. investments could be in seven states, naming Ohio, Pennsylvania, Michigan, Illinois, Wisconsin, Indiana and Texas. The company could end up creating “tens of thousands” of American jobs, Gou said without specifying a timeframe for the $10 billion in investment.

    Hon Hai, the world’s largest manufacturer of consumer electronics, has steadily expanded production outside of its main base in China and also invested in new spheres of technology. Gou promised investors he’ll continue the pursuit of Toshiba’s prized memory chip unit — even though Hon Hai has never been among the frontrunners and the Japanese company has stated its clear preference for a bid led by Bain Capital and local government-backed companies.

    The billionaire blasted government agencies for pushing an in-house deal, saying it was a “minority of bureaucrats” who are promoting a Japanese-led offer. Gou, who has been vocal about his desire to land the deal, didn’t go into details but reminded investors how Hon Hai beat out rivals for Sharp Corp. in 2016. Hon Hai is said to have considered offering as much as 3 trillion yen ($27 billion) to force Toshiba to take its bid seriously, and had been in talks to rope in allies such as Apple Inc. — its largest customer.

    “The Toshiba deal isn’t over. It is similar to Sharp’s story,” he said. “I believe we still have a big chance.”

    The billionaire however focused primarily on Hon Hai’s plans for the longer term. Apple’s main manufacturing partner, which does most production in China, makes everything from smartphones to PCs with a growing clout that has seen it courted by governments around the world.

    Gou promised to ramp up investment in the U.S., possibly helping with a rust-belt economic revival. Dubbed “Flying Eagle,” Foxconn’s plan to build a U.S. facility could create tens of thousands of American jobs during Trump’s first year in office. The company is considering a joint investment with Sharp, but details have yet to be hammered out.

    In the nearer term, Hon Hai’s shares are riding high as Apple prepares to unveil its latest iPhone — one of the most-anticipated devices of 2017. The shares closed little changed in Taipei after reaching a record earlier this week.

    Hon Hai reported first-quarter earnings short of estimates after a stronger Taiwan dollar squeezed profit in the lull before the new iPhone. That came after a year in which smartphone shipments grew at their slowest pace on record and PC demand continued to flounder. In 2016, Hon Hai’s sales fell 2.8 percent while net income rose just 1.2 percent. Gou said Thursday that revenue and profit this year would be better.

    Over the longer term, Gou is re-tooling Foxconn for the future, installing robots to offset rising labor costs in China. It’s also investing in emergent fields from virtual reality to artificial intelligence.

    Hon Hai makes a wide range of electronic devices from HP laptops and Xiaomi handsets to Sony PlayStation game consoles. But Apple is by far its most important client, yielding roughly half the company’s revenues.

    “Although our profit didn’t see a rapid expansion, we didn’t cut a cent in our annual R&D budget,” Gou said. “We have been accumulating some technologies that will drive long-term growth.”

    2:49p
    The Blockchain Revolution: Where’s the Disruption?

    A year or so ago, blockchains were the up-and-coming disruptors of technology.

    These days, if mentioned in that light, you’re liable to get a roll of the eyes and a reply that translates into “where’s the beef?” Unlike containers, which went from zero to 60 in a matter of months, blockchains might seem to be stuck in second gear. Sometimes, though, appearances are deceiving.

    For the uninitiated, a blockchain — the technology behind bitcoin — is a type of distributed database used to maintain a set of records, or blocks, each containing a time stamp and a link to the previous block. Once recorded, the data in a block can’t be altered without also altering every block that follows it in the chain — and importantly, access to a block can also be restricted.
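
    To make the chaining concrete, here is a minimal sketch (an illustration only, not bitcoin, Hyperledger or any production system; consensus, signatures and access control are all omitted) of how linking each block to the hash of the previous one makes tampering detectable:

    import hashlib
    import json
    import time

    def block_hash(block):
        # Hash the block's contents in a canonical (sorted-key) JSON form.
        payload = json.dumps(block, sort_keys=True).encode("utf-8")
        return hashlib.sha256(payload).hexdigest()

    def make_block(data, prev_hash):
        # Each block carries a time stamp, its data and the previous block's hash.
        return {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}

    def verify(chain):
        # Valid only if every block's stored prev_hash matches the recomputed
        # hash of the block before it.
        return all(cur["prev_hash"] == block_hash(prev)
                   for prev, cur in zip(chain, chain[1:]))

    chain = [make_block("genesis", prev_hash="0" * 64)]
    for record in ("shipment received", "payment cleared"):
        chain.append(make_block(record, prev_hash=block_hash(chain[-1])))

    print(verify(chain))            # True
    chain[1]["data"] = "tampered"   # alter an earlier block...
    print(verify(chain))            # ...and the chain no longer verifies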

    Eventually, say the soothsayers, blockchains will be used to solve a problem presented by the digital age. Before computers took over day-to-day business transactions, paper records were nearly impossible to alter in a way that would pass scrutiny by expert examiners. That’s because attempting to change a paper document always leaves evidence that can easily be discerned forensically. This hasn’t been true with digital technology, and blockchain offers a solution to that dilemma. That’s why the technology works for bitcoin and other cryptocurrencies. It’s also why IBM uses blockchain to monitor its supply chain.

    Despite last year’s media attention, when the Linux Foundation, backed by some of tech’s heaviest hitters, announced the open source blockchain project Hyperledger, this year blockchains seem to be off the public radar. Much of that is because the complexity of both the technology and its uses doesn’t lend itself to overnight adoption.

    “True blockchain-led transformation of business and government, we believe, is still many years away,” Marco Iansiti and Karim R. Lakhani recently wrote in the Harvard Business Review. “That’s because blockchain is not a ‘disruptive’ technology, which can attack a traditional business model with a lower-cost solution and overtake incumbent firms quickly. Blockchain is a foundational technology: It has the potential to create new foundations for our economic and social systems. But while the impact will be enormous, it will take decades for blockchain to seep into our economic and social infrastructure.”

    Good points, but it’s doubtful it will take decades. Although blockchain deployments are not yet responsible for taking up a lot of data center floor space, the adoption of the technology seems to be happening rapidly, if quietly.

    Earlier this month, Hewlett Packard Enterprise announced it had partnered with distributed database company R3 to bring its blockchain application, Corda, to HPE’s platform for high-volume, high-value workloads, Mission Critical Systems. In making the announcement, the company said it expects distributed-ledger deployments will be utilized in everything from IoT to hybrid cloud installations to the edge. Red Hat, a founding member of the Hyperledger project, is also offering the technology, primarily through its OpenShift Blockchain Initiative.

    Blockchain is also showing up in places that might be unexpected, such as General Electric’s cloud-based platform as a service, Predix. Originally designed for the collection and analysis of data from industrial equipment, Predix is used by a diverse range of industries and has expanded to become an “industrial IoT platform.” In late May, Swedish telecom Ericsson announced it had entered into a partnership with GE to integrate its Blockchain Data Integrity platform into Predix, where it will be made commercially available.

    As the name implies, the Ericsson product is designed to guarantee the integrity of data being sent and is expected to be especially useful for industries such as public utilities, transport, healthcare and aviation, where requirements for data integrity and verifiable trust must be met to achieve regulatory compliance.
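
    Ericsson has not published implementation details, but the general idea of verifiable data integrity can be sketched generically: the sender attaches a keyed digest to each record and the receiver recomputes it, so any alteration in transit is detected. The snippet below is a hypothetical illustration using Python’s standard hmac module, not the Blockchain Data Integrity product itself:

    import hashlib
    import hmac

    SHARED_KEY = b"replace-with-a-real-secret"   # illustrative placeholder, not a real credential

    def sign_record(record: bytes) -> str:
        # Sender side: keyed SHA-256 digest over the record.
        return hmac.new(SHARED_KEY, record, hashlib.sha256).hexdigest()

    def verify_record(record: bytes, digest: str) -> bool:
        # Receiver side: recompute and compare in constant time.
        return hmac.compare_digest(sign_record(record), digest)

    reading = b'{"sensor": "turbine-7", "vibration_mm_s": 4.2}'
    tag = sign_record(reading)

    print(verify_record(reading, tag))   # True: record arrived unmodified
    tampered = b'{"sensor": "turbine-7", "vibration_mm_s": 9.9}'
    print(verify_record(tampered, tag))  # False: any change breaks the digest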

    About the same time as the Ericsson announcement, a group of 25 European energy trading companies formed a consortium to conduct peer-to-peer trading in the wholesale energy market using a blockchain application, Enerchain, developed by Germany-based business-to-business solutions provider Ponton. In this case, the participants — which include Ponton — are sharing costs to develop a proof of concept that will include “a full-scale prototype which is integrated into participants’ existing trading infrastructure and supports a decentralized credit limit solution required for bilateral trading.”

    Right now, the technology is still young and needs to mature before it will be adopted by many of the industries it will benefit most. Eventually, banks and other financial institutions will adopt blockchains wholesale, as the technology is an exact match for their needs. That move will be made gingerly, however. The financial industry is notoriously risk averse, and the adoption will doubtlessly require a costly migration away from legacy practices and applications.

    The medical industry is already making moves to adopt blockchains for everything from protecting the privacy of medical records to guaranteeing the integrity of medical data, such as MRI results, sent over the internet. That move may already be in the works, utilizing platforms such as GE’s Predix.

    As blockchain technology matures and we learn more about best practices for its use, there might be something else to consider. Joichi Ito, Neha Narula and Robleh Ali point out in another Harvard Business Review article that in the early days of the internet there were competing standards in areas where now there is only one. Indeed, before the internet itself grew to prominence, there were several public networks, such as CompuServe, Prodigy and America Online, that acted like small, private internets.

    “Like the internet, in the early stages of development there are many competing technologies, so it’s important to specify which blockchain you’re talking about,” they wrote. “And, like the internet, blockchain technology is strongest when everyone is using the same network, so in the future we might all be talking about ‘the’ blockchain.”

    3:21p
    Packet, Qualcomm to Host World’s First 10nm Server Processor in Public Cloud for Developers

    Packet, a bare metal cloud for developers, announced that it will collaborate with Qualcomm Datacenter Technologies, Inc. to bring the 48-core Qualcomm Centriq 2400 processor, the latest in server architecture innovation, to its public cloud.

    The New York City-based company is currently showcasing its consumable cloud platform at Red Hat’s AnsibleFest conference in London, demonstrating open source tools such as Ansible, Terraform, Docker and Kubernetes — all running on Qualcomm Datacenter Technologies’ ARM architecture-based servers.
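
    Neither company has published the demo code, but provisioning an ARM-based server on Packet’s cloud at the time looked roughly like the sketch below, which assumes the packet-python client; the project ID, plan and facility values are placeholders chosen for illustration, not details from the announcement:

    # Rough sketch assuming the packet-python client (pip install packet-python).
    import packet

    manager = packet.Manager(auth_token="YOUR_PACKET_API_TOKEN")

    device = manager.create_device(
        project_id="your-project-id",       # placeholder
        hostname="armv8-demo-01",
        plan="baremetal_2a",                # assumed slug for an ARMv8 server plan
        facility="ewr1",                    # assumed facility code
        operating_system="ubuntu_16_04",
    )

    print(device.id, device.state)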

    The series of joint efforts will continue at Hashiconf (Austin), Open Source Summit North America (Los Angeles), and AnsibleFest (San Francisco).

    “We believe that innovative hardware will be a major contributor to improving application performance over the next few years. Qualcomm Datacenter Technologies is at the bleeding edge of this innovation with the world’s first 10nm server processor,” said Nathan Goulding, Packet’s SVP of Engineering. “With blazing-fast innovation occurring at all levels of software, the simple act of giving developers direct access to hardware is a massive, and very timely, opportunity.”

    Packet’s proprietary technology automates physical servers and networks to provide on-demand compute and connectivity, without the use of virtualization or multi-tenancy. The company, which supports both x86 and ARMv8 architectures, provides a global bare metal public cloud from locations in New York, Silicon Valley, Amsterdam, and Tokyo.

    “Our collaboration with Packet is the first step of a shared vision to provide an automated, unified experience that will enable users to access and develop directly on the Qualcomm Centriq 2400 chipset,” noted Elsie Wahlig, director of product management at Qualcomm Datacenter Technologies, Inc. “We’re thrilled to work with Packet to engage with more aspects of the open source community.”

    While an investment by SoftBank accelerated the company’s access to developments in the ARM server ecosystem, Packet has been active in the developer community since its founding in 2014.

    4:54p
    Google Will Stop Reading Your Emails for Gmail Ads

    (Bloomberg) — Google is stopping one of the most controversial advertising formats: ads inside Gmail that scan users’ email contents. The decision didn’t come from Google’s ad team, but from its cloud unit, which is angling to sign up more corporate customers.

    Alphabet Inc.’s Google Cloud sells a package of office software, called G Suite, that competes with market leader Microsoft Corp. Paying Gmail users never received the email-scanning ads shown in the free version of the program, but some business customers were confused by the distinction and its privacy implications, said Diane Greene, Google’s senior vice president of cloud. “What we’re going to do is make it unambiguous,” she said.

    Ads will continue to appear inside the free version of Gmail, as promoted messages. But instead of scanning a user’s email, the ads will now be targeted with other personal information Google already pulls from sources such as search and YouTube. Ads based on scanned email messages drew lawsuits and some of the most strident criticism the company faced in its early years, but offered marketers a much more targeted way to reach consumers.

    Greene’s ability to limit ads, Google’s lifeblood, shows her growing clout at the company. Since her arrival in late 2015, Google has poured investments into its cloud-computing and business software tools to catch up to Microsoft and Amazon.com Inc.

    Greene announced the changes on Friday in a blog post, where she wrote that G Suite has more than 3 million paying companies and had doubled its user base in the past year. Google announced those metrics in January. The company doesn’t break out sales for its cloud division.

    8:21p
    Planning for the New Windows Server Cadence

    The next version of Windows Server will let you run Linux containers using Hyper-V isolation (and connect to them with bash scripts), encrypt network segments on software-defined networks and deploy the Host Guardian Service as a Shielded VM rather than needing a three-node physical cluster.

    Data deduplication for ReFS means Storage Spaces Direct will have 50 percent better disk utilization. Multi-tenancy for Remote Desktop Services Hosting means you can host multiple clients from a single deployment, and virtual machine load balancing is now OS and application aware, so you can prioritize app performance as you balance the load between multiple hosts.

    However, if you want to get those features as quickly as possible, you need to consider whether you’re ready to deal with a new version of Windows Server in your data centers every six months.

    Even Azure is only now going through its update to Windows Server 2016 (some data centers have upgraded, and some haven’t, so only some VM instances support nested virtualization, for example). But the goal is for Azure to have a new release of Windows Server rolling out to its data centers within six months of it arriving. And with the new Windows Server lifecycle, that’s what Microsoft would like you to be considering in your own data centers – at least for some workloads.

    Faster Channels

    Like the Windows 10 client (and Office), Windows Server (both Standard and Datacenter SKUs) and System Center will now have two feature updates a year, in spring and fall (most likely in March and September, making the next two releases 1709 and 1803). Like the monthly quality updates, these updates will be cumulative.

    You’ll need Software Assurance or an MSDN subscription to get the Semi-Annual Channel, but the release cycle will cover both Nano Server and Server Core.

    If you don’t want twice-yearly updates, the Long Term Servicing Channel will be pretty much the familiar model of a new release every two or three years, with 10 years of support (16 if you pay for Premium Assurance), and that’s available for both Server Core and Server with Desktop Experience, which you’ll need for Remote Desktop and other apps that require a GUI. One thing that isn’t yet clear is whether LTSC for Windows Server will have the same silicon support policy as Windows 10 clients – which explicitly doesn’t support any software or silicon released after the LTSC version, so if you want to upgrade the CPU, you have to switch to a newer LTSC release. That would be a big change from the current Windows Server policy, and we look forward to Microsoft clarifying this.

    The first Semi-annual Channel release, coming in September, also marks some changes to Nano Server and Server Core. Although Nano Server currently supports a number of infrastructure roles, it’s rarely used for that; telemetry shows that the vast majority of Nano Server instances are for container scenarios – and in that role, customers want Nano to be even smaller. Because of that, Microsoft is removing the infrastructure roles, which will make images at least 50 percent smaller and improve startup times, so you’ll see better density and performance for containers. “Nano Server going forward will be about new modern pattern apps like containers, and it will also be optimized for .NET Core 2.0,” Microsoft Enterprise Cloud product marketing director Chris Van Wesep told Data Center Knowledge.

    Server Core will take over the infrastructure roles, and should be your default for traditional data center roles. There isn’t yet a full list of what will be removed, but the Server Core image may well get smaller as well. “If what you’re trying to do is make an optimized deployment for modern data center, you might not need the fax server role any more,” he suggested. “Let’s just make it the best for what it’s trying to be and not be everything to everybody, especially some of that old stuff.”

    You’ll want to use Server Core as the host for Nano Server (which means you’ll need Semi-annual Channel and SA), but it will also be relevant for running containers using Hyper-V isolation, which doesn’t require Nano Server. “Without any code changes, you can take legacy .NET apps that don’t have a GUI dependency, drop them into containers and deploy them on Windows Server and get the benefits of containerization, even with legacy patterns. You can save yourself some money and get yourself on a new platform.”

    You can mix and match servers with different channels in your infrastructure. “We expect most customers will find places in their organization where each model is more appropriate,” said van Wesep. But if you want to switch a server from LTSC to Semi-annual Channel (or vice versa) you’ll need to redeploy that server, so the way to deal with this is to pick the right model for your workloads.

    “If you have a workload that needs to innovate quickly, that you intend to move forward on a fairly regular basis, the SAC will be the better way of doing that,” suggested van Wesep. “That could be containers, but I also look at customers like hosting and service providers, who are more on the cutting edge of software defined data centers. If we’re putting new scale capabilities or clustering functionality into Hyper-V, they may not want to wait two years to get access to those.”

    Splitting releases like this makes sense for the diverse Windows Server market, Jim Gaynor, research analyst at Directions on Microsoft, told Data Center Knowledge. “Nano Server is turning into a container image component. That means it’s going to be changing quickly, not least because containerization is changing rapidly, so staying in the fast lane is a no brainer. Server Core SAC is for fast new features; Server Core LTSC is for your low-change roles like AD, DNS, file/print, and so on. If you want the fast-developing feature train for your container, VM, IIS or storage hosting, you go with Server Core SAC. This is a logical push for Server Core, since it’s where Microsoft wants people to be. For RDS, core incompatible Win32 server apps, and point and shoot orgs… there’s literally no change.”

    “LTSC is what you pick if you’re a set-it-and-forget-it shop that buys a Windows Server Standard or Datacenter license, without SA,” agrees Directions on Microsoft Research Analyst Wes Miller. But as you move into the faster cadence, whether it’s for infrastructure improvements or containers, you will need to take the licensing implications into account. “There’s now a higher likelihood that you’d need to have SA across all your user and device CALs, due to random pockets of servers in your organization needing SA.”

    “If you go back to Windows Server 2012 and 2012 R2,” van Wesep reminded us, “we had a one-year gap between them. Previously customers had been saying ‘it’s so frustrating that it takes you three years’, so we did one release faster – and people said ‘that’s way too fast for us to consume’. What we realized is that there really are different people that have different needs.”

    How Fast Can You Run?

    Having Windows, Windows Server and Office aligned like this also makes support simpler, van Wesep explained. “Every September and March you have a feature update and those updates will be in market for 18 months, so at any point in time you have three versions, and any of the three Office versions will work with any of the three client versions, and now Windows Server and System Center can participate in that.”
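
    The arithmetic behind that overlap is easy to sketch: with a release every March and September and 18 months of support for each, up to three Semi-Annual Channel releases are in support at any given time. The snippet below is a back-of-the-envelope illustration using hypothetical release dates, not an official support calculator:

    from datetime import date

    def add_months(d, months):
        # Minimal month arithmetic; the day is pinned to the 1st for simplicity.
        total = d.year * 12 + (d.month - 1) + months
        return date(total // 12, total % 12 + 1, 1)

    # Hypothetical Semi-Annual Channel releases, named YYMM as in 1709, 1803, ...
    releases = [date(2017, 9, 1), date(2018, 3, 1), date(2018, 9, 1), date(2019, 3, 1)]

    today = date(2018, 10, 1)   # pick any date to check
    supported = [r for r in releases
                 if r <= today < add_months(r, 18)]   # 18 months of support each

    print([f"{r:%y%m}" for r in supported])   # ['1709', '1803', '1809']: three in support at once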

    So far, so helpful. For some organizations and some workloads though, updating every six months – or even once a year, because you can choose to skip one Semi-Annual release and still be supported, though you can’t skip two in a row –  will be too fast a pace, and LTSC is the answer there. But if you’re adopting devops and continuous delivery and turning to containers to make that work, you want the frequent updates to Nano Server that will enable that – and you’ll already be moving at a fast enough pace that six-monthly updates will just become part of your on-going process. Some customers will also want the ‘latest and greatest’ features for infrastructure.

    Keeping Windows Server updated is also much less work than, say, upgrading the last of your Windows Server 2003 systems. “By incorporating things like mixed mode cluster updates the process of moving forward shouldn’t be nearly as painful as it’s been in the past,” van Wesep claimed, pointing out that “Containers are redeployed net new each time anyway. We think, for the workloads people will be using this for, the process of moving forward isn’t going to be as arduous as it was. It’s about decoupling the application from the OS from the underlying infrastructure; it’s ok to have different cadences of upgrades for all of those layers.”

    Getting a new version of Windows Server twice a year doesn’t turn Windows Server into any more of a subscription than it already is with Software Assurance. This new cadence is about innovating faster with Windows Server, especially for containers. As with Windows 10, this is about turning deployment from a large-scale project that consumes all your IT resources every few years to an on-going process where you try out Insider builds, pilot Semi Annual Channel releases for a few months, deploy them to the relevant servers, patch them monthly – and then start again a few months later.

    “Living properly in a channel-based (and in some situations container-based) world means organizations likely need to consider their model of deployment and servicing – and treat it as a process, not like an event,” says Directions on Microsoft’s Miller.

    8:37p
    Energy Department Awards $258 Million to Develop Exascale Supercomputers

    The Department of Energy (DOE) has awarded $258 million to six U.S. tech companies to build the country’s first exascale supercomputer – a move designed to help the United States regain its supercomputing dominance, but also to improve the nation’s economic competitiveness.

    An exascale supercomputer is 1,000 times faster than a one-petaflop system, or 50 times faster than the 20-petaflop systems available today, said Paul Messina, director of DOE’s Exascale Computing Project (ECP).
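
    Written out, the comparison is simple arithmetic (a quick back-of-the-envelope check of the figures above):

    EXAFLOP = 1e18    # floating-point operations per second for an exascale system
    PETAFLOP = 1e15

    print(EXAFLOP / PETAFLOP)          # 1000.0: 1,000x a one-petaflop system
    print(EXAFLOP / (20 * PETAFLOP))   # 50.0: 50x a 20-petaflop system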

    The research grants were awarded last week to AMD, Cray, Hewlett Packard Enterprise (HPE), IBM, Intel and Nvidia to accelerate research into exascale computing. With their help, the Energy Department wants to deploy its first exascale supercomputer by 2021 and supply the nation’s researchers and U.S. companies the computing power they need to accelerate scientific research and build better products, from military aircraft to wind turbines, Messina said.

    “It’s important to our national security and economic security,” he said.

    The $258 million in funding, which will be issued over a three-year period, pays for 60 percent of the initiative, called the ECP’s PathForward program. The six companies are expected to provide the remaining 40 percent of the funding, bringing the total investment for the project to about $430 million.
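
    As a quick check of those figures:

    doe_share = 258e6             # DOE's PathForward funding, 60 percent of the total
    total = doe_share / 0.60
    vendor_share = total - doe_share

    print(round(total / 1e6))         # 430: total investment, in millions of dollars
    print(round(vendor_share / 1e6))  # 172: the vendors' 40 percent share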

    Their goal is to develop hardware to make exascale a reality and that will require solving four key challenges: parallelism, memory and storage, reliability and energy consumption, Messina said. Their work will include the development of innovative memory architectures, higher-speed interconnects and faster computing power without consuming a lot of energy, he said.

    In fact, DOE wants to build an energy-efficient exascale system that requires only 20 to 30 megawatts.

    “It’s a big challenge to get to what I would call a practical, affordable, usable exascale system,” Messina said. “If you built an exascale system with today’s technology, you are talking about half a gigawatt. That’s a big bill.”
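
    Messina’s half-gigawatt figure follows from the energy efficiency of today’s machines. The sketch below assumes a ballpark of roughly 2 gigaflops per watt for current large systems (an assumption for illustration, not a figure from the article) and works out what efficiency the 20-to-30-megawatt target implies:

    EXAFLOPS = 1e18

    TODAYS_EFFICIENCY = 2e9   # assumed ~2 gigaflops per watt for today's large systems

    power_today_mw = EXAFLOPS / TODAYS_EFFICIENCY / 1e6
    print(power_today_mw)     # 500.0 MW, roughly the "half a gigawatt" Messina cites

    # Efficiency needed to hit DOE's 20-to-30 MW target:
    for target_mw in (20, 30):
        flops_per_watt = EXAFLOPS / (target_mw * 1e6)
        print(target_mw, flops_per_watt / 1e9)   # 20 -> 50.0, 30 -> ~33.3 gigaflops per watt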

    Analysts say the Energy Department’s funding announcement for exascale computing is critical, particularly as the U.S. has fallen behind China and Switzerland in the latest ranking of most powerful supercomputers.

    In the new Top500 list released this week, China topped the list for the fourth straight year. Its 93-petaflop Sunway TaihuLight and 33.9-petaflop Tianhe-2 systems are ranked No. 1 and No. 2 as the fastest supercomputers in the world, followed by Switzerland’s newly upgraded Piz Daint system at 19.6 petaflops. The U.S. slipped from No. 3 to No. 4 with its 17.6-petaflop Titan system at the Energy Department’s Oak Ridge National Laboratory.

    The U.S. is also racing against China and other countries to develop an exascale system. China, for example, reportedly expects to have an exascale system up and running by 2020. The U.S., for its part, is aggressively pursuing exascale supercomputing. Initially, in 2015, DOE planned to deliver an exascale system by 2023 or 2024. But in December, it accelerated the timeline and now plans to have a system in place by 2021.

    “The DOE’s PathForward program is a critical step in moving the country’s supercomputing capabilities forward,” said Addison Snell, CEO of Intersect360 Research. “Giving these grants to leading technology companies helps enable these advancements not only for leading national research centers, but also for industrial companies that will benefit from their own investments that follow, in areas like manufacturing, energy exploration, finance, and drug discovery.”

    Messina believes the DOE, in partnership with the six tech vendors, will reach its goal of developing an exascale system in four years. He doesn’t want a one-of-a-kind prototype built for a national laboratory. He wants the tech vendors to develop commercial products that other federal agencies and U.S. companies can purchase, so they too can take advantage of exascale.

    In fact, Messina has put in place an industry advisory council of 18 companies at ECP and said he will consult with the council to ensure the PathForward project delivers hardware and software that meet businesses’ needs.

    Vineeth Ram, HPE’s vice president of HPC and AI marketing, said HPE’s exascale efforts will revolve around its memory-driven computing effort, which focuses on memory rather than processing as the center of the computing platform.

    James Sexton, IBM fellow and director of Data Centric Systems, said IBM will collaborate closely with DOE on the design of an exascale system to ensure it meets the government’s requirements.

    “We need to get the performance in different ways,” he said. “We are working on processors, accelerators, the network, memory and storage – every single piece that goes into a system. We are investing in new ideas and new concepts to be able to get that performance increase.”

    While PathForward funds hardware research for exascale computing, ECP has also invested in software research. In September, ECP awarded $39.8 million to develop research applications that will run on exascale systems. Fifteen proposals, including ones on cancer research and improving 3D printing, were funded.

    In November, the organization awarded another $34 million to 25 research and academic organizations to develop software stacks for exascale systems, such as programming models and runtime libraries, and mathematical libraries and frameworks.

    “It’s a pretty tight timeline because there’s a lot of design and research and a lot of manufacturing to be done,” Messina said. “It takes a lot of new components and new software, but it’s certainly feasible.”

    “Although a national lab may be the first customer, it’s important that it’s a real commercial product that other federal agencies and commercial entities can buy,” he said.

    Tech executives say exascale will vastly improve artificial intelligence and speed research in all fields including in healthcare, where precision medicine for cancer can provide patients with more customized care.

    “Today you are limited by the complexity of the model you can use, but if you can make simulations more complex and solve them in half the time, you get better results,” said HPE’s Ram.

     
