Data Center Knowledge | News and analysis for the data center industry
 

Thursday, October 9th, 2014

    4:00p
    Magic Quadrant for Data Center Infrastructure Management Tools (DCIM)

    Your data center has become a critical component of your business. So much so that many business initiatives are actually planned around the capabilities of IT.

    The emergence of cloud, IT consumerization, and a lot more data has forced the data center to evolve and support new demands.

    Through it all, data center management and control sit at the top of an administrator’s task list. This is because power, energy consumption, and efficiency are all critical pieces of keeping a business running and a data center healthy.

    IT and business executives have realized that hundreds of thousands of dollars in energy and operational costs can be saved by improved physical infrastructure planning, by minor system reconfiguration, and by small process changes.

    The systems that allow management to capture these savings are modern data center physical infrastructure (i.e., power and cooling) management software tools. Legacy reporting systems, designed to support traditional data centers, are no longer adequate for agile data centers that need to manage constant capacity changes and dynamic loads.

    This new Gartner report reviews 17 vendors of Data Center Infrastructure Management (DCIM) solutions. Based on its own expert analysis and backed up by in-depth interviews with users in the space, Gartner has published the Magic Quadrant for Data Center Infrastructure Management Tools for the first time. The report brings insight and clarity to the fast-moving field of DCIM.

    Download this report now for your opportunity to:

    • Compare 17 DCIM vendors
    • Determine which DCIM technology providers meet your prioritized needs
    • Learn why Gartner recognizes Nlyte Software as a leader for DCIM tools based on its ability to execute and completeness of vision

    DCIM tools deliver the insights needed to provide ongoing value to the business: they enhance the lifecycle management of equipment, enable technology refreshes to be managed with little or no downtime, and provide meaningful advice to the business about its options when new workloads are needed to support the organization’s ongoing strategy.

    As the data center becomes more critical for the modern organization, key technologies like SDDC and DCIM will drive data center optimization. A truly efficient data center not only optimizes data control, it also positively impacts the end-user.

    5:10p
    iTRACS Improves Change Management, Integration in New DCIM Release

    iTRACS is out with version 3.4 of its Converged Physical Infrastructure Management (CPIM) software for data center management. The new version adds improved change management capabilities and more integration options. CPIM falls into the data center infrastructure management (DCIM) software category.

    Additions in the new release include a faster data import capability that can accelerate model-building. There is also better integration through improved asset reconciliation, which helps operators resolve data conflicts when combining data from multiple systems. Better asset reconciliation is especially useful in heterogeneous infrastructures.
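
    iTRACS has not published how its reconciliation engine works, but the general idea of resolving conflicts between overlapping asset records can be sketched in a few lines. The example below is purely illustrative (the function and sample records are hypothetical, not part of the CPIM API): records from two sources are merged by serial number, and fields that disagree are flagged for an operator to review.

        # Illustrative sketch of generic asset reconciliation, not the iTRACS CPIM API.
        # Records from two hypothetical sources (say, a CMDB export and a discovery scan)
        # are keyed by serial number; matching fields merge silently, conflicting fields
        # are collected for operator review, with the second source preferred by default.

        def merge_assets(source_a, source_b):
            merged, conflicts = {}, []
            for serial in set(source_a) | set(source_b):
                a, b = source_a.get(serial, {}), source_b.get(serial, {})
                record = {}
                for field in set(a) | set(b):
                    if field in a and field in b and a[field] != b[field]:
                        conflicts.append((serial, field, a[field], b[field]))
                    record[field] = b.get(field, a.get(field))
                merged[serial] = record
            return merged, conflicts

        cmdb = {"SN123": {"rack": "A01", "model": "R720"}}
        scan = {"SN123": {"rack": "A03", "model": "R720"}, "SN456": {"rack": "B07"}}
        assets, to_review = merge_assets(cmdb, scan)
        print(to_review)  # [('SN123', 'rack', 'A01', 'A03')]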

    There are also enhancements to operational monitoring with VMware and OSIsoft. The data center management software now collects, aggregates and analyzes more real-time data points on power, VMs and other key metrics.

    Change management has been optimized with improvements in commissioning and task management.

    “Whether it’s an enterprise or multi-tenant facility, data centers are increasingly leveraging DCIM to enhance agility and efficiency in response to the needs of their end-users,” said Anand Ekbote, vice president of infrastructure management at CommScope, which owns iTRACS. “The focus for data center operators usually revolves around speed and agility in relation to deploying new IT assets, getting actionable information on power and integrating DCIM with outside systems.”

    5:40p
    Bell Canada Lights Up Government Data Center in Ontario

    The Canadian government has opened a second data center in Ontario as part of its ongoing data center consolidation initiative, which aims to replace nearly 500 facilities with seven by 2020.

    The data center was launched by Shared Services Canada, an organization created in 2011 to execute the consolidation of government IT assets. SSC has closed 10 data centers to date and plans to close 47 more before the end of this year.

    The new data center is in Borden, Ontario, joining one in Gatineau, Quebec, in operation since November 2013. Bell Canada acts as the provider and manager for both. SSC’s plans include using both commercial providers and the government’s own data centers to consolidate government IT infrastructure.

    Through consolidation and other activities, the government expects to spend nearly C$100 million a year less on data center operations by 2020 than it does today.

    “Our Government is committed to providing the facilities and equipment that are needed to provide efficient, effective and secure services to Canadians, while respecting taxpayers’ dollars,” said Diane Finley, minister of public works and government services. “We are delivering on our commitment in the Economic Action Plan to modernize and standardize IT infrastructure across the entire government. Establishing these modern and efficient data centers will reduce costs, improve service and increase security.”

    The U.S. government is also undergoing a massive data center consolidation. The Government Accountability Office (GAO) has estimated that agencies can save as much as $3.1 billion through next year.

    6:04p
    Splunk Improves Scalability in Latest Splunk Enterprise Release

    Splunk introduced version 6.2 of its real-time operational intelligence platform for machine data at its .conf2014 event in Las Vegas this week. With more focus on enterprise features and functionality, the new release improves scalability of concurrent searches and eliminates shared storage requirements. Splunk Enterprise 6.2 is available as software for on-premises, cloud or hybrid deployments, or through Splunk Cloud.

    “Splunk Enterprise 6.2 gives easier, more intuitive analysis to casual and less technical users through enhanced automated discovery of valuable patterns in the machine data,” Guido Schroeder, Splunk senior vice president of products, said. “With improved scalability, elimination of shared storage requirements, and a new Distributed Management Console, Splunk Enterprise 6.2 will also drive greater efficiency for the thousands of organizations that rely on Splunk to gain operational intelligence.”

    The release features a new interface that guides users through previewing, on-boarding and preparation of machine data for downstream analysis. The Splunk Instant Pivot feature lets anyone perform analysis and create dashboards by pivoting directly from any search without knowing the Splunk Search Processing Language.

    Prebuilt Panels in Enterprise 6.2 enable faster dashboard creation by providing the ability to create, package and share reusable dashboard building blocks.

    The company also announced that Splunk and Amazon Web Services will deliver Hunk, its big data analytics product for Hadoop and NoSQL, priced on an hourly basis, directly from the Amazon Elastic MapReduce (Amazon EMR) console.

    In sync with its Enterprise 6.2 release, the company advanced Hunk to version 6.2 as well, with a similar goal of making the product more accessible to a range of professionals within the enterprise.

    New features in Hunk 6.2 include Amazon EMR Console 1-click purchase, an interactive Sandbox, data explorer, Hunk Apps for popular data stores, instant pivot, event pattern detection and prebuilt panels.

    6:17p
    CA Adds More Amazon Cloud Monitoring in UIM Software

    CA Technologies has released a new version of its Unified Infrastructure Management (UIM) software, the artist formerly known as Nimsoft Monitor. Release 8.0 includes more advanced analytics, alerting and cloud monitoring capabilities.

    CA has added support for more Amazon cloud services, including Relational Database Service (RDS), Elastic Block Store (EBS), ElastiCache and Simple Queue Service (SQS). UIM now also supports custom cloud monitoring metrics within Amazon Web Services for optimization of AWS-based applications and infrastructure.
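
    CA has not detailed how UIM ingests those custom metrics, but the AWS side of the picture is straightforward: applications publish their own metrics to CloudWatch, which a monitoring tool can then read back alongside the built-in service metrics. A minimal sketch of publishing one such custom metric with the boto3 SDK (the namespace, metric name and dimension values are hypothetical):

        import boto3

        # Publish a hypothetical application-level metric to CloudWatch.
        # A monitoring tool such as UIM could then poll it along with the
        # standard RDS, EBS, ElastiCache and SQS metrics.
        cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
        cloudwatch.put_metric_data(
            Namespace="MyApp/Checkout",  # hypothetical namespace
            MetricData=[{
                "MetricName": "OrderProcessingLatency",
                "Dimensions": [{"Name": "Environment", "Value": "production"}],
                "Value": 412.0,
                "Unit": "Milliseconds",
            }],
        )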

    Enhanced analytics in CA UIM aim to help companies handle performance issues before they make an impact. “Time to threshold” calculates and identifies threats to performance to provide early warnings. “Time over threshold” identifies real and persistent performance issues that need to be addressed.
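
    CA has not published the algorithms behind these analytics, but the two concepts are easy to illustrate. The sketch below is a rough approximation under simple assumptions, not CA UIM’s implementation: “time to threshold” extrapolates a linear trend over recent samples, and “time over threshold” only raises an alert when a breach persists across several consecutive samples.

        # Rough illustration of the two analytics concepts, not CA UIM's implementation.

        def time_to_threshold(samples, threshold, interval_s=60):
            """Fit a linear trend to evenly spaced samples and estimate the seconds
            remaining until the metric crosses `threshold` (None if it never will)."""
            n = len(samples)
            mean_x, mean_y = (n - 1) / 2.0, sum(samples) / float(n)
            slope = (sum((x - mean_x) * (y - mean_y) for x, y in enumerate(samples))
                     / sum((x - mean_x) ** 2 for x in range(n)))
            if samples[-1] >= threshold:
                return 0.0
            if slope <= 0:
                return None
            return (threshold - samples[-1]) / slope * interval_s

        def time_over_threshold(samples, threshold, min_breaches=3):
            """Alert only when the last `min_breaches` samples all exceed `threshold`,
            filtering out one-off spikes."""
            recent = samples[-min_breaches:]
            return len(recent) == min_breaches and all(v > threshold for v in recent)

        cpu = [55, 58, 61, 65, 70]             # percent utilization, one sample per minute
        print(time_to_threshold(cpu, 90))      # ~324 seconds until 90% at the current trend
        print(time_over_threshold(cpu, 60))    # True: the last three samples all exceed 60%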

    CA’s Unified Infrastructure Management Snap (formerly CA Nimsoft Monitor Snap) also has new features, including time-to-threshold analytics, historical alarm views and enhanced reporting.

    CA acquired Nimsoft in 2010. Around that time, the company went on a cloud-spending spree to jumpstart its cloud business, also acquiring Cassatt, NetQoS, Oblicore, and 3Tera, whose virtual private data center creation tools were initially marketed as grid computing.

    Nimsoft has found a home within UIM and the larger infrastructure management picture that CA is painting. CA can monitor both inside the data center (DCIM) and outside it (UIM), competing in two fields of play: DCIM providers don’t always extend monitoring to the cloud, and cloud monitoring rarely extends to DCIM-level facilities management. CA has both.

    7:00p
    ElasticHosts Expands US Cloud Presence with Miami and Dallas Data Centers


    This article originally appeared at The WHIR

    UK cloud server provider ElasticHosts has launched new data centers in Miami and Dallas, the company announced Thursday. ElasticHosts hopes the new points of presence will reduce latency for customers on the East Coast of the US while providing access to the Latin American market.

    In addition to the two new American data centers, ElasticHosts has expanded its Sydney data center. The Sydney and Dallas data centers are both Equinix facilities, building on an existing relationship.

    With the new capacity, ElasticHosts is also able to announce global availability of Elastic Containers. The auto-scaling and pay-for-use model of Elastic Containers makes the containerized cloud servers 50 percent cheaper for Linux users than virtual machines, according to the company.

    ElasticHosts launched Elastic Containers in April, calling the IaaS offering its flagship solution.

    The Miami data center provides connectivity to the hub of undersea cables that serve the US East Coast and rapidly emerging Latin American markets. IT spending in Latin America, led by Brazil, is expected to continue to increase substantially in 2015, according to Tech Pro Research. Brazil’s cloud services market is expected to surpass one billion dollars by 2017, according to an April Frost & Sullivan report.

    “This launch will allow us to accelerate our growth in the booming Latin American cloud market,” said Richard Davies, ElasticHosts CEO. “The new Miami DC allows us to expand into this market while also providing better support to our customers on the East Coast; further broadening our footprint across the US and worldwide.”

    ElasticHosts has been growing its network steadily, adding two new points of presence in early 2012 to serve North America and four more in early 2013 to expand its global availability.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/elastichosts-expands-us-cloud-presence-miami-dallas-data-centers

     

    7:30p
    Several Google Cloud Services Experience Wednesday Downtime


    This article originally appeared at The WHIR

    Google cloud services including Gmail, Google Hangouts, and Google Analytics experienced outages on Wednesday afternoon starting at 3 pm PT. Google email security service Postini and its cloud storage service were also affected.

    Google said that the Google Analytics outage impacted a majority of users, but the problem was fixed relatively quickly. By 3:21 pm PT, Google updated its Apps Status Dashboard to report that the issue with Analytics was fixed.

    The issues with Google Cloud Storage started at 2:30 pm PT, causing some users to experience “elevated errors and latency,” according to a post in the Google Cloud Storage forum.

    “The problem with Google Cloud Storage was resolved as of 15:55 PDT,” Google said in the forum post. “We apologize for any issues this may have caused to you or your users and thank you for your patience and continued support. Please rest assured that system reliability is a top priority at Google, and we are constantly working to improve the reliability of our systems. We will provide a more detailed analysis of this incident once we have completed our internal investigation.”

    Messages sent from Hangouts or Gmail during the outage may have been delayed, but the services should be back to normal now.

    Google was apologetic for the impact the service downtime had on its customers but has not released any details about the nature of the outages.

    While the downtime was addressed quickly, any amount of downtime can be disruptive to a workday for enterprises that depend on Google Apps. This week Rackspace launched managed Google Apps for Work to provide enhanced support and account management for Google Apps, including Gmail and Hangouts.

    This article originally appeared at: http://www.thewhir.com/web-hosting-news/several-google-cloud-services-experience-wednesday-downtime

    8:00p
    Rackspace Appoints New Security Chief

    Rackspace has appointed Brian Kelly, formerly executive director at Ernst & Young, as its new chief security officer.

    CSO is a key position in the data center provider world. The focus on security is especially acute for a company like Rackspace, which hosts infrastructure for companies in different parts of the world in its interconnected network of data centers.

    Most recently, Kelly oversaw the Security, Risk and Assurance practice at London-based EY, one of the world’s largest professional services multinationals. He’d been with the company since 2003, according to his LinkedIn profile.

    Kelly’s background, however, is in the federal government. A career military man, he spent 15 years in the U.S. Air Force, from the late ’70s through the mid-’90s, reaching the rank of lieutenant colonel, and later served two years as a consultant to the Department of Homeland Security.

    He held a number of positions between the military and EY, including senior roles at Trident Data Systems and Deloitte.

    At Rackspace, Kelly will report to the company’s chief operating officer Mark Roenigk. He will oversee all physical and virtual security efforts for both Rackspace and its customers.

    “Brian Kelly will be a strong addition to our leadership team as we collaborate with customers to address the growing cyber security challenges in today’s world,” Roenigk said in a statement.

    9:03p
    CyrusOne Raises $600M to Fund Growth

    CyrusOne has raised $600 million in debt capital, consisting of a $450 million credit facility and a $150 million loan.

    Access to a bigger war chest boosts the data center provider’s ability to increase its footprint in a capital-intensive market. CyrusOne, which went public last year, has been on an aggressive expansion trajectory this year.

    The credit facility replaces an existing secured $225 million credit facility with an unsecured one. The new facility is both bigger and carries a lower interest rate (1.7 percent, down from 3.25 percent).

    “We are pleased with the results of the transaction,” CyrusOne CFO Kimberly Sheehy said. “In moving to an unsecured structure, while significantly increasing the aggregate commitment, we have enhanced financial flexibility and the capacity to fund our growth at attractive interest rates.”

    So far in 2014, the company bought a large piece of land in northern Virginia, where it is building a 48-megawatt data center, kicked off multiple construction projects on its home turf in Texas and broke ground on a 12-megawatt facility in the Phoenix market.

    9:30p
    As Bitcoin Grows Mainstream, Data Center Provider Opportunity Widens

    As sharp price declines wreak havoc with the economics of Bitcoin mining, some entrepreneurs see a shift to transaction fees as the future of the virtual currency. This evolution came into focus this week, as the lead developer proposed revising Bitcoin’s core code to increase the volume of transactions on the network.

    A shift from mining rewards to transaction fees has implications for the data center industry, which could gain more business from industrial Bitcoin miners if the facilities supporting the network need to be enterprise-friendly. That would mark a shift from the current practice, in which the Bitcoin network infrastructure is split between data centers and no-frills hashing centers featuring high-density hardware and low-reliability power infrastructure, often housed in former warehouses.

    As marquee brands like PayPal and Dell embrace the virtual currency, the global Bitcoin miner network has plenty of data-crunching capacity. But are America’s largest merchants and payment processors comfortable trusting their customer experience to mining rigs sitting in warehouses halfway around the world?

    “My belief is that if you’re Michael Dell and your revenues from bitcoin become meaningful, you may want to know where your transactions are being processed,” said Dave Carlson, the CEO of MegaBigPower, one of North America’s largest mining operations. “You may not want to rely upon a mine in China. I believe there will be premium processing contracts, so merchants will know exactly where their transactions are processed.”

    Carlson believes e-commerce will create a major opportunity for dedicated Bitcoin mining operations. He expects to deploy at least 20 megawatts of capacity in central Washington state in coming months — and perhaps much more — as MegaBigPower introduces new hardware optimized for industrial-scale mining and energy efficiency.

    Shakeout for the mining sector?

    Carlson is a pioneer in industrial-scale mining, operating one of the sector’s largest Bitcoin mining facilities in a warehouse in central Washington. He’s seen the network’s processing power (hashrate) soar from about 2 petahashes per second last October to more than 300 petahashes per second, with much of that power controlled by multi-megawatt mines in areas like China, Sweden and Ukraine. The emergence of these large, well-financed Bitcoin miners has squeezed out many enthusiasts who once mined the cryptocurrency in bedrooms and garages.

    That shakeout will continue. In the last month, the price of Bitcoin has sunk more than 20 percent to about $325.


    An overview of the recent decline in the price of bitcoin. (Chart via CoinPrices.io)

    That decline has pressured the profit margins for mining operations and mining pools, which aggregate processing power from individuals and small groups. Carlson’s not alone in seeing the mining sector under pressure.

    “The current price drop has put a tight squeeze on miners big and small,” writes Scott Fargo, who tracks the mining industry for CryptoCoinNews. “It is most noticeable to smaller and medium sized mining operations that don’t have the reserves to keep running miners that could, in some cases, cost more to run in power than they now generate in income. Smaller miners have been getting out, and new miners are less likely to invest and start mining.”

    Carlson takes a long view of the Bitcoin opportunity. He cites the recent adoption of Bitcoin by Paypal for its merchant network as a sign of changes to come.

    “The real long-term opportunity is transaction fees and the processes and products around that,” said Carlson. “Transaction fees will rise with the criticality of the transaction requirements of the largest businesses.”

    The developers managing the Bitcoin protocol are also contemplating a future surge in transaction volume. Noting that the Bitcoin network can support only about seven transactions per second, Bitcoin Foundation Chief Scientist Gavin Andresen this week proposed a hard fork of the open source code to make the network more scalable.
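
    The seven-transactions-per-second ceiling falls out of simple arithmetic: the protocol caps blocks at 1 MB, targets one block roughly every ten minutes, and a typical transaction at the time ran on the order of 250 bytes (the average transaction size is an assumed rule of thumb; the other two figures are protocol parameters). A back-of-the-envelope check:

        # Back-of-the-envelope check of the ~7 transactions-per-second ceiling.
        # The 1 MB block cap and ~10-minute block interval are protocol parameters;
        # the 250-byte average transaction size is an assumed rule of thumb.

        MAX_BLOCK_BYTES = 1000000
        AVG_TX_BYTES = 250
        BLOCK_INTERVAL_S = 600

        tx_per_block = MAX_BLOCK_BYTES / float(AVG_TX_BYTES)   # ~4,000 transactions per block
        tx_per_second = tx_per_block / BLOCK_INTERVAL_S        # ~6.7 transactions per second
        print(round(tx_per_second, 1))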

    Shifting incentives

    The Bitcoin network provides a method to effectively mint digital money, using high-powered hardware to process transactions and earn financial rewards paid out in virtual currency (hence the “mining” nomenclature). The network is based on a public ledger known as the blockchain, with each transaction verified using cryptography. Incentives are offered for Internet users who dedicate processing power to the network, in the form of a “block reward” of 25 new bitcoins every 10 minutes or so.
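
    The “mining” in that description is a brute-force search: miners repeatedly hash a candidate block with a changing nonce until the result falls below a network-set target, and whoever finds a valid hash first earns the block reward. The toy sketch below captures only the shape of that search; real mining applies double SHA-256 to an 80-byte binary block header against an adjustable difficulty target, and the block contents here are a made-up placeholder.

        import hashlib

        # Toy proof-of-work search, illustrative only: real Bitcoin mining uses
        # double SHA-256 over a binary block header and a much harder target.
        def mine(block_data, difficulty_bits=16):
            """Find a nonce whose SHA-256 digest has `difficulty_bits` leading zero bits."""
            target = 1 << (256 - difficulty_bits)
            nonce = 0
            while True:
                digest = hashlib.sha256("{0}{1}".format(block_data, nonce).encode()).digest()
                if int.from_bytes(digest, "big") < target:
                    return nonce, digest.hex()
                nonce += 1

        nonce, digest = mine("previous-block-hash + pending transactions")
        print(nonce, digest)
        # The first miner to publish such a nonce collected, in 2014, the 25 BTC
        # block reward plus any transaction fees included in the block.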

