Data Center Knowledge | News and analysis for the data center industry

Friday, April 8th, 2016

    5:22p
    Amazon Names Andy Jassy CEO of AWS
    By Talkin’ Cloud

    Amazon has promoted Andy Jassy to CEO of Amazon Web Services. Jassy previously served as senior vice president of AWS, the company’s fast-growing cloud division.

    The appointment comes as Amazon CEO Jeff Bezos said in a letter to shareholders that he expects AWS sales to hit $10 billion this year.

    In the letter Bezos notes how “Marketplace, Prime and AWS are examples of bold bets at Amazon that worked, and they’ve become [its] three big pillars.”

    “And as we’ve grown as a company and those three big pillars have also grown, we’ve decided it makes sense to change the titles of the leaders of those businesses—Jeff Wilke and Andy Jassy—to CEO Worldwide Consumer and CEO Amazon Web Services. This is not a reorganization but rather a recognition of the roles they’ve played for a while,” Amazon said in a blog post on Thursday.

    Andy Jassy joined Amazon in 1997 and was part of the formation of Amazon Web Services, “working as a ‘shadow’ to Bezos, a role somewhere between technical assistant and chief of staff,” according to a report by the Financial Times. Under his leadership, AWS brought in $2.4 billion in revenue in Q4 2015, a 69.37 percent year-over-year increase. For all of 2015, AWS revenue hit $7.88 billion.

    While Amazon’s announcement brushes the title change off as “recognition” of the role Jassy has played for a while, it also shows the company’s commitment to treating AWS as a separate business and supporting it as such. After all, the business is not without its critics, so it will need a CEO to answer them.

    For his part, Jassy told the Financial Times that the “public perception of Amazon is often out of sync with what is happening inside the company.”

    “We’re trying to build a business that outlasts all of us,” he said.

    Original post published at http://talkincloud.com/cloud-computing/amazon-names-andy-jassy-ceo-aws

    6:31p
    MeriTalk Study Sheds Light On Federal Data Center Initiatives

    A recently published MeriTalk survey sheds light on where an estimated $10 billion could be saved by moves to improve power consumption, capacity, physical footprint, speed, and security.

    The need to transition to more efficient federal IT solutions was underscored last month by a White House directive that freezes data center expansion and construction by federal agencies, a move expected to accelerate colocation and cloud deployments.

    If an agency wants to build a data center or expand an existing one, it must make the case that there is no better alternative, such as using cloud services, leasing colocation space, or using services shared with other agencies.

    Latest Cloud SLA Blueprint

    Federal agencies have traditionally had a bias toward on-premises IT solutions, despite a series of initiatives, including “Cloud First,” announced in 2011, and, most recently, the Data Center Optimization Initiative.

    Read more: Will Federal Data Center Construction Freeze Benefit Colocation Providers?

    On March 7, the Government Accountability Office (GAO) released a list of best-practice guidelines agencies should follow when creating service level agreements (SLAs) with cloud service providers:

    1. Specify roles and responsibilities.
    2. Define key terms.
    3. Define clear measures for performance.
    4. Specify how and when the agency has access to its own data and networks.
    5. Specify how the cloud service provider will monitor performance and when the agency will confirm that performance.
    6. Provide for disaster recovery planning and testing.
    7. Describe performance exception criteria.
    8. Specify how providers are measured for protecting data.
    9. Determine how the provider will notify the agency of a security breach.
    10. Specify the consequences for non-compliance with SLA performance measures.

    The GAO found that about one-third of the contracts it reviewed already fulfilled the 10 practices. The list was forwarded to the White House Office of Management and Budget (OMB) for inclusion in its cloud recommendations to federal agencies.
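    The ten practices lend themselves to a simple checklist. Here is a brief, hypothetical Python sketch of how an agency might record which practices a given cloud contract already covers; the practice keys, the scoring function, and the example contract are our own illustration, not part of any GAO tooling:

    ```python
    # Hypothetical sketch: encoding the GAO's ten SLA practices as a checklist
    # and scoring a reviewed cloud contract against them. The practice keys and
    # the example contract are illustrative only.

    GAO_SLA_PRACTICES = [
        "roles_and_responsibilities",
        "key_terms_defined",
        "clear_performance_measures",
        "agency_access_to_data_and_networks",
        "provider_performance_monitoring",
        "disaster_recovery_planning_and_testing",
        "performance_exception_criteria",
        "data_protection_measures",
        "security_breach_notification",
        "consequences_for_noncompliance",
    ]

    def score_contract(contract: dict) -> tuple[int, list[str]]:
        """Return how many practices a contract covers and which ones are missing."""
        missing = [p for p in GAO_SLA_PRACTICES if not contract.get(p, False)]
        return len(GAO_SLA_PRACTICES) - len(missing), missing

    # Example: a contract that addresses only a few of the practices.
    example_contract = {
        "roles_and_responsibilities": True,
        "key_terms_defined": True,
        "clear_performance_measures": False,
        # unlisted practices default to "not addressed"
    }

    covered, missing = score_contract(example_contract)
    print(f"Covers {covered}/10 practices; missing: {missing}")
    ```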

    MeriTalk ‘Flash-Forward’ Highlights

    The report focuses mainly on what needs to be accomplished by 2021, and on how likely agencies are to accomplish some of those tasks.

    Source: MeriTalk – “Flash-Forward” March 29, 2016

    Security: While answers to most questions varied widely, that was not the case for security, where 97 percent of responding managers felt there was a need to upgrade security.

    The universal concern about security could act as a tailwind, encouraging migration to FedRAMP-vetted colocation and cloud service providers. The prospect of cutting costs while simultaneously upgrading security and disaster recovery would seem to be a compelling combination.

    Mission Readiness: On the other hand, only 11 percent of federal IT managers felt that their data centers were “fully equipped” to meet their agency’s current mission demands. Perhaps the most telling statistic was that only five percent of these managers felt that current facilities would be sufficient to handle agency computing requirements in 2021.

    Cloud Adoption: A heartening trend was that managers foresee nearly doubling the share of systems in the cloud over the next five years, from 28 percent to 48 percent. However, according to the report, only 47 percent have established a leadership team, and only 42 percent have a formal vision for the future.

    Inadequate budgets, security, bandwidth and legacy systems were all mentioned as potential roadblocks. This may be why only 60 percent of federal IT managers believe they will be able to comply with “Cloud First” by 2021.

    A Slow Start

    Even the longest journey begins with a first step. Unfortunately, fewer than half of the agencies surveyed have taken even the basic steps necessary to accomplish the 2021 goals:

    • Established a leadership team – 47 percent.
    • Established a formal vision for the future – 42 percent.
    • Audited data center(s) to understand current capabilities and shortcomings – 39 percent.
    • Cataloged all aspects of the data center – 39 percent.
    • Developed case studies and/or ROI estimates around planned investments – 35 percent.
    • Leveraged big data to approximate future needs – 31 percent.

    When it came to estimating their chances of complying with the Federal Data Center Consolidation Initiative (FDCCI) by 2021, the difference between civilian and DoD agencies was negligible, with 55 percent and 53 percent affirmative responses, respectively.

     

    6:49p
    GitHub Forum Highlights Public Views on Open Source in U.S. Government

    By The VAR Guy

    What’s good about open source software, what are its limits and how should it be used in government? These are issues that the public is now debating vigorously in a new forum created by the U.S. government following its recent push to make more government-owned code open.

    The backstory: Last month, the federal government used GitHub to solicit public comments on draft guidelines that would require federal agencies to make more use of open source code. Among other requirements, the proposal would mandate that at least twenty percent of federally owned code be released as open source.

    The comments posted so far keep returning to a handful of themes:

    • A strong desire (expressed by a number of commenters) for the government to make code “open by default,” rather than requiring that only a certain percentage of it be open source. Part of the criticism here has to do with making the government go further in adopting open source. But also at stake is the problem that the requirement to open-source twenty percent of code is ambiguous: Does it mean twenty percent of an agency’s programs have to be open source, only twenty percent of its total code, or something else?
    • Demands to use the term “free software” as well as “open source” in reference to publicly available code. This concern reflects a familiar debate within the free and open source software community, but not one that has previously had much currency in government affairs.
    • The suggestion to adopt open standards when open source code itself is not practical (although this particular comment seems to express the belief that it is not feasible to use open source for complex software platforms, which is arguably not true at all).
    • The suggestion to clarify which types of licensing or public-domain status qualify software as open source. This is an issue on which no one is likely to agree totally. But it still could not hurt for the government to be more specific in explaining what it means license-wise when it writes about open source.
    • An urge to make sure code is not only open, but also secure. Some of the commentary on this suggestion seems to reflect a lingering sense among the public that software whose source code anyone can read could be more easily exploited by attackers.
    • A note that the Trans-Pacific Partnership (TPP), which makes it illegal for governments under certain circumstances to require that code be open source, seems to contradict both the spirit and the terms of the new federal proposal. The threat that the TPP poses to open source has received little press, but it’s a big issue, and discussion of the new government guidelines could help spur a healthier debate about this.

    To be sure, there’s much more to say about open source than what has appeared on GitHub so far. But this is a novel debate about open source software’s merits for government agencies. Whatever the final outcome of the new federal proposal, the GitHub forum has perhaps brought more attention to open source’s usefulness for the public at large than such software has received since the late 1990s.

    This post was first published at: http://thevarguy.com/open-source-application-software-companies/github-forum-highlights-public-views-open-source-us-gover.

    9:54p
    Latest OpenStack Release Advances ‘Intent-Based’ Configuration

    Resource orchestration may be the most important software-based technology to impact the management, facilitation, and even the design of data centers this year. The ability to drive server utilization and keep data center footprints small depends, in very large part, upon whether cloud infrastructure systems — today, mainly OpenStack — make optimum use of storage, compute, memory, and bandwidth resources throughout the data center, once they’ve all been pooled together.

    OpenStack’s resource orchestration component, called Heat and introduced with the “Havana” release back in October 2013, was the first to tackle automated orchestration of resources. But Heat originally relied on templates: effectively, scripted recipes describing how to spin up a requested resource on demand and the specific infrastructure that resource would require.

    “Heat provides an orchestration service for OpenStack clouds,” explained OpenStack Foundation Executive Director Jonathan Bryce, in a note to Datacenter Knowledge. “A template-driven workflow engine allows a cloud user to describe exactly the set of resources that are needed for an application, and Heat will deploy and auto-scale those resources.”

    Here’s the problem: in today’s virtualized data centers, all of these resource classes are variables. It is hard to automate deployment across that many variables without simple scripts evolving into forecasting engines.

    Desired State

    OpenStack began addressing this problem with last October’s “Liberty” release by introducing a feature that remains, even now, not especially well documented. Called Convergence, or Convergence Engine (and begging to be called something else), this new feature of Heat would be capable of orchestrating workloads in something closer to real time, by way of an “observe-and-notify” approach to monitoring performance.

    The idea is to have database tables represent the desired state of the workload or application: the properties it should exhibit, given the current conditions of the hosting platform and the OpenStack cluster as a whole. To accomplish this while continuing to support the existing templates approach, Heat’s Convergence maintains separate database tables for observed resource properties and desired properties, enabling changes to be made to both by way of remote procedure calls (RPCs) placed by monitoring software.
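    A rough way to picture that mechanism is sketched below. This is an illustration of the observe-and-notify idea, not Heat’s actual code; the resource names, the two dictionaries standing in for database tables, and the reconcile and on_notify functions are all assumptions made for the sketch.

    ```python
    # Illustrative sketch of the observed-vs-desired idea behind Heat's Convergence.
    # This is not Heat's code; the property names and reconcile logic are invented
    # purely to show the shape of the approach.

    desired_state = {
        "web_servers": 3,
        "db_servers": 2,
        "cache_servers": 1,
    }

    observed_state = {
        "web_servers": 2,   # e.g. one instance failed or has not finished booting
        "db_servers": 2,
        "cache_servers": 0,
    }

    def reconcile(desired: dict, observed: dict) -> dict:
        """Return the delta between what the template asks for and what exists."""
        return {
            resource: desired[resource] - observed.get(resource, 0)
            for resource in desired
            if desired[resource] != observed.get(resource, 0)
        }

    def on_notify(resource: str, count: int) -> None:
        """Stand-in for the RPC a monitoring agent would place to update the
        observed table; the engine reacts to such updates rather than
        re-running a whole template from scratch."""
        observed_state[resource] = count

    on_notify("web_servers", 2)
    print(reconcile(desired_state, observed_state))
    # {'web_servers': 1, 'cache_servers': 1}: the engine still needs to create
    # one web server and one caching server to converge on the desired state.
    ```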

    OpenStack’s engineers didn’t say so at the time, but their new system was well aligned with a property that telecommunications engineers have been desiring for quite some time: intent-based configuration. Put another way, if the orchestrator is capable of expressing the properties it needs a workload to exhibit, the “convergence” would be the process of reconciling desire with practicality. (Nearly every successful marriage is forged on this principle.) This way, the orchestrator responds to the intent of the configuration as best it can.

    “As an example, I could describe a Web application that requires 3 Web servers, 2 database servers, a caching server, and needs to run a post-install script to cluster the database servers together. Rather than running all of the calls independently, I can create a Heat template that describes the desired state and the Heat service will automatically deploy everything,” explained Bryce.
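    For readers unfamiliar with Heat, a heavily trimmed sketch of the kind of description Bryce is talking about appears below, written as the Python-dictionary equivalent of a HOT (Heat Orchestration Template) document. The image and flavor names, the post-install script path, and the commented python-heatclient call are placeholders and assumptions, not a tested deployment.

    ```python
    # Rough Python-dict equivalent of a HOT (Heat Orchestration Template) describing
    # the kind of desired state Bryce mentions: a group of web servers, database
    # servers, a caching server, and a post-install script. Image and flavor names
    # and the script path are placeholders.

    template = {
        "heat_template_version": "2015-10-15",
        "resources": {
            "web_group": {
                "type": "OS::Heat::ResourceGroup",
                "properties": {
                    "count": 3,
                    "resource_def": {
                        "type": "OS::Nova::Server",
                        "properties": {"image": "ubuntu-14.04", "flavor": "m1.small"},
                    },
                },
            },
            "db_group": {
                "type": "OS::Heat::ResourceGroup",
                "properties": {
                    "count": 2,
                    "resource_def": {
                        "type": "OS::Nova::Server",
                        "properties": {
                            "image": "ubuntu-14.04",
                            "flavor": "m1.medium",
                            # post-install script to cluster the databases (placeholder path)
                            "user_data": "#!/bin/bash\n/opt/setup/cluster-db.sh\n",
                        },
                    },
                },
            },
            "cache_server": {
                "type": "OS::Nova::Server",
                "properties": {"image": "ubuntu-14.04", "flavor": "m1.small"},
            },
        },
    }

    # Assumed typical usage with python-heatclient (an authenticated Keystone
    # session named `keystone_session` is presumed to exist):
    # from heatclient import client as heat_client
    # heat = heat_client.Client("1", session=keystone_session)
    # heat.stacks.create(stack_name="web-app", template=template)
    ```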

    The Telco Factor

    Up to now, telcos and big data center operators had relegated OpenStack to orchestrating “low-priority” workloads, such as periodic accounting and database control, as opposed to actual service delivery. Their complaint has been that OpenStack can’t scale to the speed and demands they require.

    The Mitaka release addresses that complaint directly, said the OpenStack Foundation’s Bryce. “The updates to Heat allow the Heat service itself to be clustered across multiple machines,” he told us. “This allows horizontal scaling across a cluster of servers that can break a Heat template up and run the steps in parallel. This provides better performance and scaling for executing orchestration workflows, as well as better reliability.”

    Yet an intent-based configuration system would be another major step in their direction. At the Open Networking Summit a few weeks ago in Santa Clara, CA, Huawei Chief Architect David Lenrow told attendees that intent-based architecture could be the catalyst that would drive multiple telcos together toward forging a common standard for northbound interfaces (NBI) — a way for applications, especially SDN, to contact network components using a common API.

    “What we’re really focusing on is intent-based networking,” said Lenrow, “and a different model for operating the network. We’re trying to establish that as this foundation for a common interface, and then get lots of major vendors and lots of major operators to work together and cause this interface to become widely deployed.” Once that deployment level reaches critical mass, he believes, both major and minor communications players will want to become involved simply because others in their space are doing so.

    So there’s a lot at stake in how soon OpenStack’s open source contributors can get cranking on Convergence. For instance, the success of AT&T’s hyper-accelerated effort to modernize its services and data centers by open sourcing its ECOMP service delivery platform may rest, among many other factors, on whether intent-based configuration is production-ready.

    ‘Use with Caution’

    While OpenStack Liberty brought Convergence formally into the production stack, release notes published at the time warned that the tool “has not been production tested and thus should be considered beta quality – use with caution.” That came as a confusing signal for some, who thought OpenStack had plenty of opportunity to perfect “beta-quality” code during the actual beta process.

    Last Thursday’s formal release into general availability of the “Mitaka” edition of OpenStack came with some warm reassurances that Convergence has been battle-tested now, and is ready for prime time. But as OpenStack Foundation Chief Operating Officer Mark Collier tells Datacenter Knowledge, Mitaka’s propagation into the space of deployed OpenStack platforms may not be immediate.

    “Upgrade urgency really depends on the individual user and their needs,” stated Collier in a note to DCK. “Now that we are on the 13th release, the compute, storage, and networking APIs, and code behind them, have been stable for several releases, so users certainly don’t feel forced to upgrade immediately. That said, one of the big improvements in the software is actually in the area of upgrades, to make those upgrades less painful.

    “Secondly, Mitaka brings a lot of improvements in ease of use both for operators and end users, so those are as big of a draw as features per se and are based on feedback from those companies operating Juno, Kilo, and Liberty — so many are eager to put them to use. The last nuance to understand is that many users rely on downstream commercial distributions that typically take a few weeks to produce the latest release so that’s part of the timing to keep in mind.”

    Rollout

    As Rackspace Senior Product Director Bryan Thompson told Datacenter Knowledge, Rackspace is currently concluding what it calls the “design process” for integrating OpenStack Mitaka into its Private Cloud services. As part of that process, it’s looking into the extent to which certain additions made to OpenStack some months earlier have matured — specifically, whether they’ve matured to the degree that Rackspace may consider them “fully supported.”

    “All major releases, including new features in those releases, are rolled out only after thorough testing, documentation and training to enable our teams to fully support these components for our customers,” wrote Thompson. “Major upgrades (e.g., Juno to Liberty) are typically done between three to six months after GA by the OpenStack Foundation, as we work through all of the processes to update tooling and augmenting components to the new OpenStack bits, complete thorough testing, and roll out training and documentation for our supporting teams. We typically introduce at least one ‘minor’ release within a given OpenStack series, where we will introduce new projects or extended features to our prescribed deployment of OpenStack, and critical updates to address any vulnerabilities and/or high-impact defects are performed as needed, in the form of revision releases after testing.”

    Bug fixes and security patches are considered critical, Thompson added, and can be rolled out quite soon; every other stage requires a thorough testing process. In effect, there are now two testing phases for OpenStack: the first conducted by the open source developers, and the second by implementers working with code that OpenStack itself has made generally available – even if it warns, from time to time, that the code may be only “beta-quality.”

    If you do the math, a feature being worked into OpenStack may take between 18 months and two years to merit “maturity.” Historically speaking, that’s actually a short period of time. And for data centers, two years of thorough testing may be an absolute requirement anyway. But for some customers in the telco field — especially AT&T, whose milestone dates for ECOMP still read “2017” — two years may be too long.
