Data Center Knowledge | News and analysis for the data center industry
 

Friday, February 3rd, 2017

    12:10a
    Snap Signed Five-Year, $2B Deal to Use Google’s Cloud

    By Mark Bergen (Bloomberg) — Snap Inc. plans to spend $2 billion with Alphabet Inc. over the next five years to use Google’s cloud-computing services, according to Snap’s initial public offering filing. The social media firm also listed its dependency on Google as a key risk investors should consider before buying stock.

    Snap, owner of the Snapchat mobile app, said it relies on Google’s services for the “vast majority” of its computing, bandwidth and data-storage needs. “Any disruption of or interference with our use of the Google Cloud operation would negatively affect our operations and seriously harm our business,” the company wrote in the filing on Thursday.

    The two companies signed the agreement for the five-year deal in January. Under the terms, Snap is required to spend at least $400 million a year on Google cloud services, though a portion of these payments can be deferred during the first four years of the deal. In return for this commitment, Snap said it will get discounted pricing from Google, without being more specific.

    The deal is a big win for Google’s cloud division, which, under Diane Greene, is trying to catch leaders Amazon.com Inc. and Microsoft Corp. Snap was an early customer of Google’s cloud services, and the social media company remains one of its largest spenders, people familiar with the companies have said.

    See also: Google Ramped Up Data Center Spend in 2016

    4:00p
    DCIM on a Budget: Data Center Optimization Without Breaking the Bank

    Measurement and control are key to optimization, and that of course applies to data center optimization too. That’s what DCIM software provides: tools for measurement and control.

    While most data center managers agree that Data Center Infrastructure Management (DCIM) tools can help optimize data center operations, purchasing DCIM software is often a low budget priority. Fortunately, with some time and effort, there are ways to implement DCIM without the expense of off-the-shelf software. Here we’ll explore strategies organizations can use to optimize data center operations on a shoestring budget.

    In the first part of this two-part series, we’ll focus on the measuring aspect of DCIM. Specifically, we’ll discuss methods for collecting asset intelligence, benchmarking efficiency, and leveraging existing monitoring. In the second part, we’ll explore control and optimization.

    For guidance on DCIM software, visit DCK’s DCIM InfoCenter

    Collecting Asset Intelligence

    A first step in implementing any DCIM program is to get a firm understanding of data center assets by undertaking a thorough inventory process. Although this step is time-consuming and tedious, the benefits of collecting asset data will be realized immediately. The key is to collect the right data and record it in a user-friendly way. The best practice is to create a spreadsheet to ensure you are collecting the right asset data, and to classify it much as DCIM software would.

    Begin by creating a spreadsheet with five tabs labeled: Locations, Cabinets, Freestanding Equipment, Rack-Mounted Equipment, and Chassis-Mounted Equipment.

    • The Locations tab should have at least seven columns with the following headings: Country, State, County, City, Building, Floor, and Room. Although you may only focus on one data center, this location information ensures that if any data centers are added in the future, each will be uniquely identified.
    • Within the Cabinets tab, create column headings that include Room Name, Cabinet Name, Asset Tag, Make, Model, Generation, and Grid Location. Room Name should match one of the names listed in the Locations tab. If your data center doesn’t use asset tags on cabinets, or doesn’t have a raised floor or grid system, leave those fields blank when performing the inventory.
    • The Freestanding Equipment tab should also include Room Name, along with Name (name of the equipment), Serial Number, Asset Tag, Asset Type (such as server, storage, or network), Make, Model, Generation, and Grid Location.
    • The Rack-Mounted Equipment tab will be used to identify all devices mounted in server cabinets. The column headings to include are: Name, Serial Number, Asset Tag, Asset Type (such as server, storage, chassis, network, or power strip), Make, Model, Generation, Grid Location, Room Name, Cabinet, and U. If equipment is mounted vertically, such as power strips, record the U location as 0.
    • The Chassis-Mounted Equipment tab will identify all blades within chassis. The column headings will be the same as for Rack-Mounted Equipment, with a few exceptions: a column called Chassis Name should be included, and instead of a U position, the location of blades should be identified by slot location within the chassis.
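    If you prefer to generate the skeleton rather than type it, the tabs above can be scaffolded as header-only CSV files and imported as sheets. This is a minimal sketch; the file-naming scheme is an assumption, and the column lists simply mirror the tabs described above.

```python
import csv
import os
import tempfile

# Column layouts mirroring the five inventory tabs described above.
TABS = {
    "Locations": ["Country", "State", "County", "City", "Building", "Floor", "Room"],
    "Cabinets": ["Room Name", "Cabinet Name", "Asset Tag", "Make", "Model",
                 "Generation", "Grid Location"],
    "Freestanding Equipment": ["Room Name", "Name", "Serial Number", "Asset Tag",
                               "Asset Type", "Make", "Model", "Generation",
                               "Grid Location"],
    "Rack-Mounted Equipment": ["Name", "Serial Number", "Asset Tag", "Asset Type",
                               "Make", "Model", "Generation", "Grid Location",
                               "Room Name", "Cabinet", "U"],
    "Chassis-Mounted Equipment": ["Name", "Serial Number", "Asset Tag", "Asset Type",
                                  "Make", "Model", "Generation", "Grid Location",
                                  "Room Name", "Cabinet", "Chassis Name", "Slot"],
}

def write_inventory_skeleton(directory):
    """Write one header-only CSV per tab; import them as sheets later."""
    paths = []
    for tab, columns in TABS.items():
        path = os.path.join(directory, tab.replace(" ", "_").lower() + ".csv")
        with open(path, "w", newline="") as f:
            csv.writer(f).writerow(columns)
        paths.append(path)
    return paths

paths = write_inventory_skeleton(tempfile.mkdtemp())
```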

    As mentioned, collecting equipment data without DCIM software is grueling, but this data can be used to produce some very powerful information and reports. For example, armed with complete and accurate make, model, and generation data, companies can identify older generations of equipment that should be slated for tech refresh. Following are some additional ways this data can be leveraged.

    Power consumption per server cabinet can be estimated based on equipment make and model. Most IT equipment manufacturers include estimated average power draw within the “tech specs” on their websites. If only maximum power draw is listed, a fairly accurate average can be estimated by multiplying the max by 66 percent, or 0.66. Once all device power is added up within each cabinet, imbalances in power density are revealed. This is great information for identifying over-subscribed power circuits and/or possible cooling issues such as hot spots.
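    As a back-of-envelope sketch of that estimate, here is the 66-percent rule applied to one cabinet. The device list and wattages are hypothetical:

```python
# Rough per-cabinet power estimate from nameplate specs (all figures hypothetical).
MAX_TO_AVG = 0.66  # the article's rule of thumb: average ~ 66% of max draw

def estimated_avg_watts(max_watts):
    """Estimate average draw when only maximum draw is published."""
    return max_watts * MAX_TO_AVG

# (device, nameplate max watts) for one cabinet
cabinet = [("1U web server", 450), ("2U database server", 750), ("ToR switch", 150)]

total_avg_watts = sum(estimated_avg_watts(w) for _, w in cabinet)
print(f"Estimated average cabinet load: {total_avg_watts:.0f} W")
```

    Repeating this per cabinet and comparing the totals is what reveals the density imbalances mentioned above.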

    Although often overlooked, weight capacities are a critical part of data center capacity planning. Similar to power consumption, equipment weight can be found within tech specs. Adding up equipment weight along with the server cabinet weight can ensure total weight remains within floor load thresholds.
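    The weight check is the same kind of sum. A minimal sketch, with a hypothetical empty-cabinet weight and floor-load rating:

```python
# Floor-load sanity check; cabinet and limit figures are hypothetical.
CABINET_EMPTY_LBS = 300
FLOOR_LOAD_LIMIT_LBS = 1500  # rated load for the cabinet's footprint

equipment_lbs = [55, 40, 40, 25, 30]  # per-device weights from tech specs
total_lbs = CABINET_EMPTY_LBS + sum(equipment_lbs)
print(total_lbs, "lbs:", "OK" if total_lbs <= FLOOR_LOAD_LIMIT_LBS else "OVER LIMIT")
```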

    Reports showing open ports vs. used ports can be produced if patch panels are included while collecting inventory. The lead time between purchase and installation can be very long for new patch panels and trunk cabling. Having this port information provides the necessary warnings so that additional ports can be added well in advance and new equipment can be racked and cabled without delay.

    Equipment data can also be used to generate front views of server cabinets and racks instead of DCIM software visualizations. These views are a great way to see how full cabinets are. Within spreadsheets, cells can be stretched horizontally and reduced vertically to represent each server cabinet or rack U space. Cells can be filled in with colors and text to represent equipment occupying U spaces. Also, images of the equipment can be added to the cells for a more realistic representation.
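    The same elevation view can also be sketched outside a spreadsheet. Here is a minimal text rendering of a 12U rack from (name, bottom U, height in U) records; the device list is hypothetical:

```python
# Text "elevation" of one 12U rack; device list is hypothetical.
RACK_UNITS = 12
devices = [("srv-01", 1, 2), ("srv-02", 3, 1), ("sw-01", 12, 1)]  # (name, bottom U, height)

slots = {u: "" for u in range(1, RACK_UNITS + 1)}
for name, bottom, height in devices:
    for u in range(bottom, bottom + height):
        slots[u] = name

for u in range(RACK_UNITS, 0, -1):  # top of rack printed first
    print(f"U{u:02d} | {slots[u] or '(open)'}")
```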

    Benchmarking Data Center Efficiency

    Power usage is a window into your data center. Understanding usage patterns as well as identifying fluctuations can tell you a lot about the efficiency of your operations. By benchmarking and monitoring power you can identify areas of improvement as well as predict and prevent issues from occurring.

    The first step is to benchmark. If monitoring is connected to all power supplied to the data center, including what is needed for lights and cooling, reports showing data center efficiency can be created. Record the total power supplied to the data center (lighting, cooling, IT equipment, etc.), then record the power supplied exclusively to IT equipment. There are several energy efficiency metrics, such as PUE (Power Usage Effectiveness), or the newer Mechanical Load Component (MLC) and Electrical Loss Component (ELC). Whichever you choose, consistency matters most: calculating the same metric the same way over time shows whether the steps managers are taking actually improve the data center.
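    As a minimal sketch, PUE is just the ratio of those two recorded numbers; the readings below are hypothetical:

```python
def pue(total_facility_kw, it_load_kw):
    """Power Usage Effectiveness: total facility power over IT power (ideal = 1.0)."""
    return total_facility_kw / it_load_kw

# Hypothetical readings: 800 kW into the room, 500 kW of it reaching IT gear.
print(round(pue(800.0, 500.0), 2))
```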

    If power monitoring is non-existent or sporadic, there may be another way to show cost savings. Power utility bills are sent with kWh usage for each feed into a building. Ideally, the data center has at least two dedicated utility feeds, separate from the office-space power. Power usage will naturally fluctuate due to a variety of factors, such as outdoor weather, but kWh fluctuations in the data center can also be due to changes in IT load. As changes such as large decommissions or efficiency improvements are implemented, the effect may be shown on the following month’s utility bill. Annual cost savings can then be estimated by multiplying this drop in kWh by the utility rate.
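    The utility-bill arithmetic is simple enough to show directly. All inputs here are hypothetical:

```python
# Hypothetical figures: a decommission shaves 12,000 kWh off the monthly bill.
monthly_kwh_drop = 12_000
rate_per_kwh = 0.10  # $/kWh from the utility bill

annual_savings = monthly_kwh_drop * 12 * rate_per_kwh
print(f"Estimated annual savings: ${annual_savings:,.2f}")
```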

    For an example of how power monitoring information can improve operations, use it to project cost savings. Suppose you’re considering swapping out old servers for new ones in order to cut energy costs. A way to prove the gains before committing to a large-scale server refresh is to measure and record the power draw of just one of the older servers, then replace it with a new server and compare the two readings. The power difference can then be multiplied by the per-kWh utility rate to get an accurate daily, monthly, or annual savings per server. Total savings for the project can then be projected by multiplying that number by the number of servers being considered for replacement.
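    Under hypothetical inputs (the two measured draws, the utility rate, and the fleet size), that projection looks like this:

```python
# All inputs hypothetical: one measured old server vs. its measured replacement.
old_server_watts = 350.0
new_server_watts = 200.0
rate_per_kwh = 0.10          # $/kWh
servers_to_replace = 120

kwh_per_server_per_year = (old_server_watts - new_server_watts) / 1000 * 24 * 365
annual_project_savings = kwh_per_server_per_year * rate_per_kwh * servers_to_replace
print(f"Projected annual savings: ${annual_project_savings:,.2f}")
```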

    Leveraging Existing Monitoring

    Beyond power consumption monitoring, data centers often have a mixture of monitoring points waiting to be put to use. UPSs, CRAC units, PDUs, and power strips typically have unit controls that provide sensor data, alarms, and real-time monitoring data. DCIM software usually collects this data, but it can also be accessed simply by scrolling through the multiple pages of data shown on the equipment’s display screens. Usually, this data is also accessible through the network and internet via device IP addresses. If, for example, this data is available for PDUs, exact power loads on circuits feeding server cabinets and freestanding equipment would be known. Exact power loads would also be known for IT equipment plugged into “smart” power strips that have either metered or switched outlets.

    With some computer programming skills, relatively straightforward programs can be written to collect this monitoring point data and display it in a user-friendly format. This monitoring data can be refreshed frequently to provide up-to-the-minute readouts from several pieces of equipment. Reports and graphs showing historical trends, along with the impact to capacity as equipment gets added or removed, can be generated.
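    A minimal sketch of such a collector follows. It assumes, hypothetically, that each device exposes a JSON status page over HTTP; real PDUs and UPSs more often speak SNMP or Modbus, so the fetch step would change accordingly. The sample payloads stand in for live polling:

```python
import json
from urllib.request import urlopen

# Hypothetical: each device exposes a JSON status page at http://<ip>/status.
def fetch_status(ip):
    with urlopen(f"http://{ip}/status", timeout=5) as resp:
        return json.loads(resp.read())

def summarize(readings):
    """Collapse per-device readings into a minimal dashboard dict."""
    return {
        "total_load_kw": round(sum(r["load_kw"] for r in readings), 2),
        "alarms": [r["name"] for r in readings if r.get("alarm")],
    }

# Sample payloads in place of live polling:
sample = [
    {"name": "pdu-a1", "load_kw": 3.4, "alarm": False},
    {"name": "pdu-a2", "load_kw": 5.1, "alarm": True},
]
print(summarize(sample))
```

    Run on a schedule and appended to a log, the same summaries become the historical trend data mentioned above.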

    These monitoring points can be used to improve operations in multiple ways, such as finding hot spots. Smart power strips usually include environmental monitoring ports. External temperature sensors can be added and placed in cold aisles. With sensors placed in multiple racks, thorough and accurate temperature readings can be provided, which can enable data center managers to rebalance heat loads by relocating servers from cabinets that are too hot to cabinets that are underutilized. These power strip temp sensors can also save significant money if their readings show cabinets are over-cooled. If cabinets are in fact too cold, turning up the data center temperature can equate to huge cost savings by reducing energy needed to cool the room.
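    The hot-spot and over-cooling checks reduce to comparing inlet readings against a band. This sketch uses the ASHRAE-recommended 64.4–80.6°F range as thresholds; the cabinet names and sensor readings are hypothetical:

```python
# Flag cold-aisle inlet temperatures outside the ASHRAE-recommended
# 64.4-80.6 F band (sensor readings below are hypothetical).
HOT_F, COLD_F = 80.6, 64.4

inlet_temps_f = {"cab-01": 72.0, "cab-02": 84.2, "cab-03": 61.0}

hot_spots = sorted(c for c, t in inlet_temps_f.items() if t > HOT_F)
over_cooled = sorted(c for c, t in inlet_temps_f.items() if t < COLD_F)
print("hot:", hot_spots, "over-cooled:", over_cooled)
```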

    Conclusion

    Purchasing DCIM software may not be in the budget. However, significant steps can still be taken to optimize data center operations without capital expense. The first steps are obtaining a clear understanding of assets through spreadsheets, and benchmarking efficiency by measuring power through existing monitoring and/or equipment power estimates. With that data in hand, you can undertake strategies to control and optimize operations.

    About the Author: Tim Kittila is Director of Data Center Strategy at Parallel Technologies. In this role, Kittila oversees the company’s data center consulting and services to help companies with their data center, whether it is a privately-owned data center, colocation facility or a combination of the two. Earlier in his career at Parallel Technologies Kittila served as Director of Data Center Infrastructure Strategy and was responsible for data center design/build solutions and led the mechanical and electrical data center practice, including engineering assessments, design-build, construction project management and environmental monitoring. Before joining Parallel Technologies in 2010, he was vice president at Hypertect, a data center infrastructure company. Kittila earned his bachelor of science in mechanical engineering from Virginia Tech and holds a master’s degree in business from the University of Delaware’s Lerner School of Business.

    5:30p
    Friday Funny: Surprise Under the Raised Floor

     We may be out of floor space, but there’s always room for romance.

    Here’s the cartoon for this month’s Data Center Knowledge caption contest.

    This is how it works: Diane Alber, the Arizona artist who created Kip and Gary, creates a cartoon, and we challenge our readers to submit the funniest, most clever caption they think will be a fit. Then we ask our readers to vote for the best submission and the winner receives a signed print of the cartoon. Submit your caption for the cartoon above in the comments.

    Some good submissions came in for last month’s Cloud Gazing edition; all we need now is a winner. Help us out by submitting your vote below.

    What should the bubble say?

    Take Our Poll

    Stay current on Data Center Knowledge’s data center news by subscribing to our RSS feed and daily e-mail updates, or by following us on Twitter or Facebook or join our LinkedIn Group – Data Center Knowledge.

    7:38p
    Snap Future Shaped by Complex Ties to Google as Supplier, Rival

    Mark Bergen and Sarah Frier (Bloomberg) — Every year at its I/O conference, Google gives away stuff. Last May, the trinkets included a tiny pamphlet with a code that let attendees follow the company on Snapchat.

    At one point earlier that year, a much bigger alliance was in the works. The companies discussed a feature that would have let a Snapchat user point the app’s camera at an object, hold down her thumb briefly and get information about it from Google’s search results, according to people familiar with the situation. They asked not to be identified discussing private deliberations. Google and Snap representatives declined to comment.

    Pairing with Snapchat would have been a boon for Google, whose efforts in social networking have foundered, while giving Snapchat a needed early revenue boost. But the deal has yet to materialize. Instead, Snap has been developing its own search tools.

    The episode underscores the knotty courtship between Snap Inc. and Alphabet Inc.’s Google. As Snap prepares to go public, most of the attention will be on how many eyeballs and advertising dollars it can pry away from social media behemoth Facebook Inc. Yet Snap’s fate will also hinge on how it navigates its ties with Google, an investor, cloud-computing provider and potential adversary.

    The two are friendly. In 2011, some Google staff were feverishly scouting young companies to sign up for its new cloud-computing service. Google salesman Max Henderson found a small Southern California team working on an app with messages that disappeared. Evan Spiegel, Snap’s Chief Executive Officer, took the offer.

    As Snapchat exploded in popularity, Spiegel’s business with Google expanded apace. Google has grown its cloud unit fast in recent years, but Snap remains one of its top customers, people familiar with the companies have said.

    On Jan. 30, the companies inked a deal for Snap to spend $2 billion with Google’s cloud over five years, according to Snap’s public offering filing on Thursday. Snap also listed its dependency on Google as a risk factor. Snap’s cloud spending was likely a big reason its cost of revenue was $47 million higher than total sales in 2016. About $400 million of annual cloud expenses in coming years will put pressure on Snap to keep growing its advertising business, which grew seven-fold last year.

    Google’s cloud chief Diane Greene recently hosted Snap engineering executive Timothy Sehn on stage at Google’s Horizon conference as one of her marquee customers. “We would not have been able to do what we’ve been able to do without Google cloud,” Sehn said.

    CapitalG, Alphabet’s private-equity arm, disclosed an investment in Snap in November. That backing typically comes with access to other Google resources, from marketing advice to privacy expertise. Spiegel has cited Eric Schmidt, Alphabet’s chairman and former Google CEO, as a mentor.

    Still, Snap may quickly become a rival, particularly as it feels its first pressure from public shareholders. Snap’s grab for digital video marketing dollars pits it against Google’s YouTube. A similar pattern is playing out at Uber Technologies Inc., which counts Google as an investor but increasingly competes against the internet giant in maps and autonomous vehicles.

    Spiegel’s emphasis on making Snap a camera company presages rivalry on other fronts. In its filing, Snap told investors it plans to expand into additional hardware products beyond its video-capturing Spectacles. Some may even warrant regulation from the Food and Drug Administration, hinting at products using health sensors or advanced displays. Those are the sort of futuristic devices Google is also tinkering with.

    A bigger standoff could come in software. Snapchat users upload an inordinate amount of personal photos and videos with the app’s camera. In theory, Snap could tap this data to offer granular targeting information to advertisers.

    Search would play a big part in this. Last year, Snap hired top search engineer Wisam Dakka from Google and acquired a mobile search startup called Vurb. Snap could mine the search data and offer more targeted user information to advertisers, like Google does. Other companies have similar data, yet few have Snap’s obsessed users.

    “None of these companies are using it today to understand intent and people,” said Ankit Jain, vice president at digital research firm SimilarWeb. “That’s something Snapchat could do.”

    Snap could also start to use data-driven ads to spur more sales at physical retail stores, a big demand of marketers. The company has been testing internet-of-things prototypes including sensors and beacons, according to people familiar with the matter. Google has been working on similar projects for years, too.

    Building search tools and pursuing these other ambitious projects will likely require artificial intelligence skills. However, Snap’s head of research overseeing these efforts, Jia Li, left the company in November. She went to Google.

