Data Center Knowledge | News and analysis for the data center industry
 

Tuesday, August 29th, 2017

    12:00p
    Will Edge Computing Help the Server Market Bounce Back to Growth?

    If you follow the dynamics of the server market at all, you probably know that the size of the global market has been steadily shrinking in terms of revenue. For five consecutive quarters now, overall revenue from server sales has been slowly sliding down, a few percentage points at a time, according to Gartner.

    The analysts attribute this to growing use of cloud services, noting that the large-enterprise and small- and medium-business segments have not been growing for server vendors, while the hyper-scale data center segment has. Perhaps that explains why server shipments have been growing even as the revenue vendors collect from those shipments has been shrinking: hyper-scale operators buy undifferentiated commodity hardware at high volume, paying much less per unit than a typical enterprise would for off-the-shelf servers.

    But a few broader technology trends make some hopeful that the market will return to growth. Ravi Pendekanti, senior VP for server solutions product management and marketing at Dell EMC, who previously spent many years in senior product and marketing roles at Sun Microsystems, SGI, Juniper, and Oracle, says the next generation of technologies, such as self-driving cars and the Internet of Things, along with the commoditization of data center network hardware driven by software-defined networking, will drive the next wave of growth in the server market.

    Pendekanti joined me for the latest edition of The Data Center Podcast to talk about these and other trends in the market:

    Download or stream on SoundCloud


    3:00p
    Hurricane Harvey: Delivering Managed IT Services During a Catastrophe

    Brought to you by MSPmentor

    It’s being called a 500-year storm, and Houston, Texas-based Elevated Technologies finds itself smack in the middle of it.

    As early as the middle of last week, the managed services provider (MSP) began reaching out to clients, preparing the organizations as much as humanly possible for the fury of Hurricane Harvey.

    But despite the best-laid plans, the extent of damage to the state’s Gulf Coast still came as a shock.

    “No one really anticipated this,” Jason Rorie, founder of Elevated Technologies, told MSPmentor Monday.

    “I don’t think anyone really anticipated the kind of flooding that Houston would be subject to,” he added. “The city is basically underwater.”

    Task number one for the MSP’s five-member team was ensuring customers’ data would survive.

    “We started talking really on Wednesday when they projected where it would hit,” Rorie said. “First thing we did is make sure that all of our clients, all of their offsite backups were running successfully. We use Veeam and StorageCraft.”

    As the storm neared landfall on Friday afternoon, the MSP began instructing clients to shut down on-premises servers before those employees left for the day.

    “The flooding causes power outages and once your UPS (uninterruptible power supply) batteries drain and your servers crash, that’s when problems really start,” Rorie said.

    The hope is that once the storm passes, many of those servers can quickly be powered up and the businesses can get up and running.

    That’s assuming, of course, that the flooding is manageable and the boxes stay dry.

    “Most of our clients, even in a single story building, the servers are elevated,” Rorie said. “We don’t anticipate any servers being flooded.”

    But for some customers with 24/7/365 requirements, shutting down servers until the storm passes represents an unacceptable interruption to their businesses.

    One of Elevated Technologies’ customers is the Houston branch of a Dubai-based private jet-booking outfit that arranges global travel for top executives of major companies like Nike.

    “They were going to try to work through the storm,” Rorie said.

    “They said, ‘We get what you’re trying to do for us, but we’ll take a chance,’” Rorie said. “‘We’re going to roll the dice because we want to turn out as much work as we possibly can.’”

    A great many of the support tickets the MSP has received since the storm began involve folks seeking to solve VPN access problems so they can work from home.

    It’s a problem with which Elevated Technologies can empathize.

    Though the company’s office sits on the fourth floor of a building on the western side of the city, Rorie and all of his employees have been forced to work remotely.

    “I can’t even get out of my own neighborhood,” he said. “I’m talking to you from my garage, looking at the water rising in the driveway.”

    The company’s manpower took a further hit when one of the five employees lost power at his residence.

    Some reports suggest that as many as 40 percent of small businesses in Harvey’s path might never recover.

    That could have long-lasting consequences for the revenues of firms like Elevated Technologies.

    “That is a valid concern,” Rorie said.

    But by far his greatest worry stems from uncertainty about how much longer the storm will last, and how much more damage it’ll wreak on his customers before it’s all over.

    “There is so much more potential to do damage than has already been done,” Rorie said. “Until the flooding subsides and clients get back to work, we won’t know the full extent. My biggest fear is it doing much more damage (because) we’re going to have our hands full when everyone returns next week.”

    3:30p
    A Type of Computing: NYTimes Crossword Moves from AWS to Google App Engine

    Brought to you by IT Pro

    Online games can be compute-intensive, particularly first-person shooters or games with heavy graphics that require a lot of power. But apparently a crossword puzzle can put a strain on cloud architecture, too.

    The New York Times crossword – first introduced in print in 1942 – has grown into a suite of mobile apps and an interactive website with over 300,000 paid subscribers, and it has outgrown its infrastructure hosted in the Amazon Web Services (AWS) cloud.

    According to a blog post by NYTimes principal software engineer JP Robinson, the crossword’s backend systems were running on AWS with a LAMP-like architecture, but when the Times introduced the free daily mini crossword three years ago, the larger daily audience put significant strain on that architecture.

    See also: How the New York Times Handled an Unprecedented Election-Night Traffic Spike

    “As the crossword grew in popularity, our architecture started to hit its scaling limitations for handling game traffic. Due to the inelastic architecture of our AWS system, we needed to have the systems scaled up to handle our peak traffic at 10PM when the daily puzzle is published,” he said. “The system is generally at that peak traffic for only a few minutes a day, so this setup was very costly for the New York Times Games team. Luckily, we at The Times recently decided to move all product development to the Google Cloud Platform where a variety of tools awaited to help us move faster and save money.”

    “After shopping the Google product suite, we decided to rebuild our systems using Go, Google App Engine, Datastore, BigQuery, PubSub and Container Engine. I’ll discuss the architecture in greater detail in future posts but for now, I’m going to concentrate on App Engine, which is the core of our system.”
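    The crux of the change is elasticity: App Engine adds and removes instances of a stateless service as traffic rises and falls, rather than requiring capacity to sit provisioned around the clock for a peak that lasts only minutes. As a rough, hypothetical illustration (not the Times’ actual code; the route and payload are invented), the kind of stateless Go handler such a platform scales out looks something like this:

        // Rough illustration of a stateless Go HTTP service of the kind an
        // autoscaling platform such as App Engine can run many copies of.
        // Not the Times' code; the route and payload are invented.
        package main

        import (
            "encoding/json"
            "log"
            "net/http"
            "os"
        )

        // dailyPuzzle is a placeholder handler: it keeps no local state, so any
        // number of instances can serve it, and the platform can add or remove
        // instances as traffic rises and falls around the nightly publication spike.
        func dailyPuzzle(w http.ResponseWriter, r *http.Request) {
            w.Header().Set("Content-Type", "application/json")
            json.NewEncoder(w).Encode(map[string]string{
                "date":   "2017-08-29",
                "status": "published",
            })
        }

        func main() {
            http.HandleFunc("/v1/puzzle/daily", dailyPuzzle)

            // Many managed runtimes hand the listen port to the process via an
            // environment variable; fall back to 8080 for local runs.
            port := os.Getenv("PORT")
            if port == "" {
                port = "8080"
            }
            log.Fatal(http.ListenAndServe(":"+port, nil))
        }

    Because the handler holds no state of its own, any instance can answer any request, which is what lets the platform shrink the fleet back down once the brief nightly peak has passed.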

    While the blog post reads a bit like a Google advertisement, it is interesting to see the factors that go into a company’s decision to migrate from one cloud to another. You can read the full technical breakdown in the blog post.

    Google Cloud Platform introduced new network tiers last week that let enterprise users optimize for either performance or price.

    8:05p
    How Google’s Custom Security Chip Secures Servers at Boot

    Data centers these days are busy replacing expensive hardware solutions with “software-defined” everything, but the trend is the opposite when it comes to security. While software still prevails in keeping servers secure, hardware is often being added to the mix as another layer of protection, especially during the boot process, when a computer is vulnerable to dangers such as maliciously modified firmware.

    This trend started when UEFI — and Secure Boot — replaced BIOS on computers, and it was carried a step further when Google began including an additional custom-designed hardware security chip on all servers and peripherals in its data centers. In June, Hewlett Packard Enterprise followed suit, announcing it was joining the secured-by-hardware crowd by including its own custom chip on its Gen10 servers. Lenovo also includes a degree of security-on-a-chip technology on its line of servers through its XClarity Controller.

    There are several advantages to having security protections contained in chipsets separate from a server’s CPUs. Because they are isolated from the server’s main components, they are harder for an outside hacker who gets through a system’s defenses to find and penetrate. In addition, they can rely on read-only memory that is difficult or impossible to modify.

    See also: Here’s How Google Secures Its Cloud

    At its Cloud Next event in March, Google unveiled a custom security chip called Titan, which was likely the “official unveiling” of the security hardware we discussed on Data Center Knowledge in January. On Thursday, some Google Cloud Platform engineers posted a blog detailing how Titan works to make its data centers more secure.

    Titan consists of a secure application processor, a cryptographic co-processor, a hardware random number generator, a sophisticated key hierarchy, embedded static RAM (SRAM), embedded flash and a read-only memory block. According to Google, its main purpose is “to ensure that a machine boots from a known good state using verifiable code and establishes the hardware root of trust for cryptographic operations in our data centers.”

    This type of protection has grown increasingly important, not only to stop traditional black hats motivated by profit, but also to repel governments — both foreign and domestic — that have been successfully devising methods to exploit firmware vulnerabilities, sometimes in ways that can survive re-installation of an operating system.

    See also: Google Shares New Details About its TPU Machine Learning Chips

    Interestingly, the GCP engineers said that before verifying the validity of code on the host server, Titan runs something of a self-diagnostic to make sure it hasn’t been compromised:

    “Titan’s application processor immediately executes code from its embedded read-only memory when its host machine is powered up. The fabrication process lays down immutable code, known as the boot ROM, that is trusted implicitly and validated at every chip reset. Titan runs a Memory Built-In Self-Test every time the chip boots to ensure that all memory (including ROM) has not been tampered with. The next step is to load Titan’s firmware. Even though this firmware is embedded in the on-chip flash, the Titan boot ROM does not trust it blindly. Instead, the boot ROM verifies Titan’s firmware using public key cryptography, and mixes the identity of this verified code into Titan’s key hierarchy. Then, the boot ROM loads the verified firmware.”

    At this point, Titan verifies the contents of the host’s boot firmware using public key cryptography:

    “Holding the machine in reset while Titan cryptographically verifies the boot firmware provides us the first-instruction integrity property: we know what boot firmware and OS booted on our machine from the very first instruction. In fact, we even know which microcode patches may have been fetched before the boot firmware’s first instruction. Finally, the Google-verified boot firmware configures the machine and loads the boot loader, which subsequently verifies and loads the operating system.”
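    Stripped to its essentials, the chain Google describes is a repeated pattern: verify the next stage’s signature with a key trusted by the current stage, fold a measurement of the verified image into a running value that later keys can depend on, and only then hand over control. The sketch below illustrates that pattern in Go; it is not Google’s implementation, and the algorithm choices (Ed25519 signatures, SHA-256 measurements) and helper names are assumptions made for the example.

        // Sketch of a chained verified-boot check: verify a stage's signature,
        // then mix a measurement of the verified image into a running value
        // before loading the next stage. Not Google's code; the algorithm
        // choices (Ed25519, SHA-256) are illustrative assumptions.
        package main

        import (
            "crypto/ed25519"
            "crypto/sha256"
            "errors"
            "fmt"
        )

        // verifyStage checks an image against its signature using a public key
        // trusted by the previous stage (e.g. a key baked into the boot ROM).
        func verifyStage(trustedKey ed25519.PublicKey, image, sig []byte) error {
            if !ed25519.Verify(trustedKey, image, sig) {
                return errors.New("stage signature verification failed: refuse to boot")
            }
            return nil
        }

        // extendMeasurement folds the hash of a verified image into the running
        // measurement, so later keys depend on exactly what was booted.
        func extendMeasurement(current [32]byte, image []byte) [32]byte {
            h := sha256.New()
            h.Write(current[:])
            imgHash := sha256.Sum256(image)
            h.Write(imgHash[:])
            var out [32]byte
            copy(out[:], h.Sum(nil))
            return out
        }

        func main() {
            // Hypothetical inputs: on a real chip these come from ROM, on-chip
            // flash, and the host's boot firmware while the machine is held in reset.
            pub, priv, _ := ed25519.GenerateKey(nil)
            titanFirmware := []byte("titan firmware image")
            hostBootFirmware := []byte("host boot firmware image")

            fwSig := ed25519.Sign(priv, titanFirmware)
            hostSig := ed25519.Sign(priv, hostBootFirmware)

            var measurement [32]byte // starts from a fixed, known state

            for _, stage := range []struct {
                name  string
                image []byte
                sig   []byte
            }{
                {"titan-firmware", titanFirmware, fwSig},
                {"host-boot-firmware", hostBootFirmware, hostSig},
            } {
                if err := verifyStage(pub, stage.image, stage.sig); err != nil {
                    fmt.Println(stage.name+":", err)
                    return
                }
                measurement = extendMeasurement(measurement, stage.image)
                fmt.Printf("%s verified, measurement now %x\n", stage.name, measurement[:8])
            }
        }

    The property that matters is the ordering: nothing runs until something already trusted has checked its signature, and the final measurement reflects exactly which images passed those checks.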

    Google also explains how Titan gives each machine its own cryptographic identity:

    “The Titan chip manufacturing process generates unique keying material for each chip, and securely stores this material … into a registry database. The contents of this database are cryptographically protected using keys maintained in an offline quorum-based Titan Certification Authority (CA). Individual Titan chips can generate Certificate Signing Requests (CSRs) directed at the Titan CA, which … can verify the authenticity of the CSRs using the information in the registry database before issuing identity certificates.”

    According to Google, this process not only verifies the identity of the chips generating the CSRs, but verifies the firmware running on the chips as well.

    “This property enables remediation and allows us to fix bugs in Titan firmware, and issue certificates that can only be wielded by patched Titan chips.”
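    The device side of that identity flow is conventional public key infrastructure: generate a key pair, wrap the public key and a device identifier in a CSR, and let the CA decide whether to sign it. Below is a minimal sketch using Go’s standard library; the subject fields, the chip serial number, and the registry lookup mentioned in the comments are assumptions made for illustration, not details Google has published.

        // Sketch of the CSR side of a per-device identity flow: generate a key
        // pair, produce a certificate signing request naming the device, and
        // hand it to a CA that checks the request against a registry before
        // issuing a certificate. Illustrative only; field values are assumptions.
        package main

        import (
            "crypto/ecdsa"
            "crypto/elliptic"
            "crypto/rand"
            "crypto/x509"
            "crypto/x509/pkix"
            "encoding/pem"
            "fmt"
            "log"
        )

        func main() {
            // On a real chip the key material is generated and kept on-die.
            key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
            if err != nil {
                log.Fatal(err)
            }

            // Hypothetical subject: the chip's serial number as the CommonName.
            template := x509.CertificateRequest{
                Subject:            pkix.Name{CommonName: "titan-chip-000123"},
                SignatureAlgorithm: x509.ECDSAWithSHA256,
            }

            csrDER, err := x509.CreateCertificateRequest(rand.Reader, &template, key)
            if err != nil {
                log.Fatal(err)
            }

            // PEM-encode the CSR for transport to the (offline, quorum-based) CA,
            // which would look the serial number up in its registry before signing.
            csrPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: csrDER})
            fmt.Printf("%s", csrPEM)
        }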

    As security issues continue to grow, it is probably a safe bet that hardware protections such as this will soon become the norm.

    10:48p
    Five Challenges Companies Must Overcome to Make Use of All Their Data

    Joe Pasqua is EVP of Products for MarkLogic.

    Companies today know they need to fully and effectively leverage all data—including the increasing digitization of human communications and the data being generated by everything from light bulbs to smartphones. They know they must capture a wide variety of data, store it in a way that makes it accessible, and query it based on the rapidly changing needs of the business. They also know that they can’t get by with rigid, predetermined schemas. What they are finding, however, is that this is much easier said than done.

    What’s standing in their way? Many things, unfortunately; but there are five big challenges that companies must overcome in order to fully exploit their data, along with partner data and other external data sources.

    1. Inability to make use of multiple data types and formats. Data today comes in all shapes, sizes, and forms that must be processed and analyzed basically in real time. This includes data that does not fit neatly into the rows and columns of legacy relational database systems. What’s more, those different forms and types of data need to be used together seamlessly. Richly structured data, graph data, geospatial data, and unstructured data may all figure into a single query or transaction. 

    2. Slow pace of innovation based on legacy systems. Technology and business requirements are changing almost daily, and organizations need to innovate to stay competitive and compliant. Many companies today can barely deal with the data they have on hand, let alone what is coming in the future, such as IoT-generated data. When investing in innovation, they are often frustrated because they must contend with legacy systems, which hold many of the corporate data assets. These systems are an anchor that slows their progress and their ability to compete effectively.

    3. Proliferation of data silos in the enterprise. The rapid growth of all kinds of data, along with the growth in the number of services businesses provide to their customers, has created a proliferation of data silos in the enterprise. To better serve their customers, regulators, and themselves, businesses need to create a 360-degree view of their business objects, such as customers, products, or patients. But creating this holistic view has been an arduous and wildly expensive task. All the while, more data silos are being created. What’s worse, the data quality and governance of these views are often an afterthought, leading to bad results or even regulatory fines.

    4. The use of ETL and schema-first systems. Relational databases are the de facto standard for storing data in most organizations. Once a relational schema is populated, it is simple to query using SQL. Sounds great, but—and this is a big but—companies have to create the schema that queries will be issued against. Integrating all existing schemas (and possibly mainframe data and text content) requires a tremendous amount of time and coordination among business units, subject matter experts, and implementers. Then, once a model is finally settled on by the various stakeholders, data must be extracted from source systems, transformed to fit the new schema, and then loaded into it—a process referred to as ETL. Critical understanding can be lost in all of this translation, and it simply takes too long (six to 18 months on average). Moreover, it never ends. Data sources change. New sources are added. Different questions are posed. ETL keeps on taking, not giving.

    5. Lack of context. Perhaps the biggest problem companies have today is thinking they know what they don’t know. Data without context is useless. What does this data mean? How does it relate to other data? What is the provenance of the data? In what circumstances and with whom am I allowed to share it? In most cases the answers to these questions aren’t captured in the database. It might be in a developer’s head, or a design document, or an ETL script, or worse, all those places, but not consistently. Traditional databases aren’t focused on storing, managing, and querying this contextual metadata and typical ETL processes usually drop this information on the floor. Giving up on context means giving up on getting the most value from your data.

    So, what’s a company to do? Increasingly, companies are turning to multi-model databases. With a multi-model database, they can capture data’s context and store it with the data, providing maximum data agility and auditability – and essentially future-proofing the database system against any new type of data, shift in data paradigm or regulatory requirement that will inevitably come down the pike.
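    The core idea, keeping the context in the same envelope as the data rather than losing it in an ETL step, can be shown with a small sketch. The envelope shape below is an assumption made for the example, not any particular multi-model product’s API:

        // Sketch of the "store the context with the data" idea: each record is
        // saved as a self-describing envelope that carries its provenance and
        // sharing rules alongside the payload, loaded as-is with no up-front
        // relational schema. The envelope shape is an illustrative assumption.
        package main

        import (
            "encoding/json"
            "fmt"
            "time"
        )

        // Envelope wraps an arbitrary payload with the contextual metadata that
        // ETL pipelines typically drop.
        type Envelope struct {
            Payload     json.RawMessage `json:"payload"`      // the data, as-is (JSON here)
            Source      string          `json:"source"`       // where it came from
            IngestedAt  time.Time       `json:"ingested_at"`  // when it arrived
            ShareableTo []string        `json:"shareable_to"` // who may see it
        }

        func main() {
            // Two records with different shapes land in the same store, each
            // keeping its own context; no shared schema is agreed on up front.
            records := []Envelope{
                {
                    Payload:     json.RawMessage(`{"customer_id": 42, "name": "Acme Corp"}`),
                    Source:      "crm-export",
                    IngestedAt:  time.Now().UTC(),
                    ShareableTo: []string{"sales", "support"},
                },
                {
                    Payload:     json.RawMessage(`{"device": "sensor-7", "temp_c": 21.4}`),
                    Source:      "iot-gateway",
                    IngestedAt:  time.Now().UTC(),
                    ShareableTo: []string{"operations"},
                },
            }

            for _, r := range records {
                doc, _ := json.MarshalIndent(r, "", "  ")
                fmt.Println(string(doc))
            }
        }

    Because each record carries its own provenance and sharing rules, questions like “where did this come from?” and “who is allowed to see it?” can be answered by querying the store itself instead of chasing down an ETL script or a design document.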

    Companies considering a multi-model database platform should look for:

    ·   Native storage of multiple structures (structure-aware)

    ·   The ability to load data as-is (no schema required prior to loading)

    ·   The ability to index these different models efficiently

    ·   The ability to use all the models together seamlessly (composability)

    ·   Enterprise-class security and availability

    Of course, no shift in database technology is made lightly—many IT professionals have spent their entire careers in one technology. But if there were ever a time for companies to ensure that they can effectively collect, analyze, and leverage the data at their disposal, it’s now.

    Opinions expressed in the article above do not necessarily reflect the opinions of Data Center Knowledge and Informa.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

