Data Center Knowledge | News and analysis for the data center industry
 

Wednesday, May 4th, 2016

    12:00p
    Data Center Market Spotlight: New Jersey


    Our theme this month is site selection. From electricity costs and network infrastructure to the available pool of skilled workforce, data center site selection is one of the most complicated and important business decisions a company makes. Data center location affects everything from the cost of doing business and overall company agility to the quality of user experience. And, like every other aspect of the data center business, where companies choose to put their critical IT infrastructure and why is changing because of … you guessed it: the Cloud. This month, we’ll examine these trends more closely.

    From The Sopranos to Jersey Shore to Chris Christie, there’s always something that keeps New Jersey in the spotlight. The Garden State of course has a lot more to offer than gangsters, trashy reality TV, and flamboyant politicians, and one of those things is a sizable data center industry that’s been recently undergoing some changes.

    Once among the country’s most active wholesale data center markets, New Jersey is now seen as a place where retail colocation and other smaller-footprint services are going strong, while wholesale business in the traditional multi-megawatt sense is slowly drying up.

    At least some of the lull in demand for large wholesale deals in New Jersey can be attributed to a slowing demand from the financial services industry, which has historically driven the bulk of demand there, according to the commercial real estate firm Jones Lang LaSalle. But there’s also a broader trend that runs beyond New Jersey: fewer and fewer companies are in need of big wholesale-type data center deployments. The users that have been driving the recent spike in wholesale leasing are the largest cloud service providers, and they tend to cluster in specific markets, such as Northern Virginia and Silicon Valley. New Jersey is not one of those markets.

    “New Jersey has not been a very favorable market for wholesale,” Jabez Tan, senior analyst at Structure Research, which specializes in data center markets, said. “That’s been confirmed by a lot of players there.”

    What makes various towns in New Jersey attractive as data center locations is proximity to New York City. They are close enough to serve customers in the city but are also a lot cheaper to operate in. Another factor that makes the state attractive for data center operators is that there are lots of data centers there already. The industry’s clustering instincts play a big role in site selection, and places with lots of data centers attract more data centers.

    Wholesale Weakness

    While New Jersey is cheaper than New York, however, it is not cheaper than Northern Virginia, which has some of the world’s biggest data center clusters. Hence, a wholesale data center specialist like DuPont Fabros Technology has a sprawling and thriving data center campus in Northern Virginia while trying to get rid of a single facility in New Jersey it has struggled to fill.

    On DFT’s first-quarter earnings call, CEO Christopher Eldredge said there was no shortage of interested buyers for the company’s NJ1 data center in Piscataway. “We have now progressed through multiple rounds of bidding,” he said. “It’s not unrealistic to assume a third-quarter 2016 closing.”

    CoreSite, one of DFT’s chief rivals, has done well in New Jersey this year. Jersey was one of its strongest markets in terms of leasing activity in the first quarter, CEO Tom Ray said on the most recent earnings call.

    However, there is one important “but” to consider. CoreSite provides both wholesale and retail colocation services, and the strong Q1 in New Jersey is attributable to the latter rather than the former. In fact, Ray highlighted New Jersey as a particularly weak wholesale market.

    “Regarding the wholesale market segment, we continue to believe that balance between supply and demand remains favorable in the Bay Area, less favorable in New Jersey, and is more at equilibrium in our other markets,” he said.

    Of the 12 leases CoreSite executed in the first quarter in both New York and New Jersey data centers, only one was for 5,000 square feet. The rest were all under 1,000 square feet.

    Digital Realty is one of the biggest wholesale players in New Jersey, and all of its non-Telx wholesale facilities in the state are more than 90 percent occupied, but that’s largely because Digital hasn’t been expanding capacity there as quickly as it used to.

    Providers Confident in Demand for Smaller Deals

    Providers who aren’t after multi-megawatt deals appear to be chugging along nicely in New Jersey, and there’s even a new player that entered the market this year.

    Agile DataSites was officially unveiled in January and launched its first two data centers in New Jersey and Pennsylvania. The New Jersey facility, launched in March at the site of a former pharmaceutical lab in Princeton, is a 280,000-square-foot property with access to about 45MW of power. While ADS is marketing a mix of data center services, from wholesale to hosting and managed services, CEO Jeff Plank expects its sweet spot will be similar to the three deals with service providers it’s working on at the moment, which range from 80kW to 100kW.

    ADS is targeting IT and communications service providers, healthcare companies, and retailers. According to Plank, its strategy relies to a great extent on its service provider customers, especially those that help enterprises build hybrid infrastructure combining their own servers with cloud services; the company expects to benefit from the enterprise push to the cloud that the likes of Amazon, Microsoft, and Google have been promoting.

    IO, which has been in New Jersey since 2011, operating a massive data center facility in Edison that used to be a New York Times printing plant, has also seen the size of the deals on the market shrink. David Mettler, the company’s VP of sales and US market director, said even “the definition of wholesale has come down in terms of size of the deal.”

    There aren’t as many multi-megawatt deals in New Jersey as there used to be, and IO now considers a metered-power deal that’s “several hundred kilowatts” in capacity to be wholesale. And, multi-megawatt deals aside, activity in the market has been healthy, according to Mettler. “We’ve seen pretty good activity in New Jersey,” he said.

    2:54p
    The Impact of Block Sizes in a Data Center

    Pete Koehler is an engineer for PernixData.

    Guesswork is often the enemy of those responsible for data center design, operations, and optimization. Unknown variables lead to speculation, which inhibits predictability and often compromises success. In the world of storage, many mysteries still remain, unfortunately, with block sizes being one of the most prominent. While the concept of a block size is fairly simple, its impact on both storage performance and cost is profound. Yet, surprisingly, many enterprises lack the proper tools for measuring block sizes, let alone understanding them and using this information to optimize data center design.

    Let’s look at this topic in more detail to better understand what a block is and why it is so important to your storage and application environment.

    What is Block Size?

    Without diving deeper than necessary, a block is simply a chunk of data. In the context of storage I/O, it is a unit in a data stream: a read or a write from a single I/O operation. Block size refers to the payload size of that single unit. Some of the confusion about what a block is can be blamed on overlap in industry nomenclature. Commonly used terms like block sizes, cluster sizes, pages, latency, etc. come up in disparate conversations, but what is being referred to, how it is measured, and by whom often vary. In discussions of file systems, storage media characteristics, hypervisors, or operating systems, these terms are used interchangeably, yet they do not have a universal meaning.

    Most who are responsible for data center design and operation know the term as an asterisk on the performance specification sheet of a storage system, or as a configuration setting in a synthetic I/O generator. Performance specifications on a storage system are often the result of a synthetic test using the most favorable block size (often 4K or smaller) to maximize the number of IOPS the array can service. Synthetic I/O generators typically allow one to set this, but users often have no idea what the distribution of block sizes is across their workloads, or whether it is even possible to simulate that with synthetic I/O. The reality is that many applications draw a unique mix of block sizes at any given time, depending on the activity.

    The difficulty with understanding the impact of block sizes always comes back to one key issue: the lack of ability to view them and interpret their impact. This is quite surprising considering how many performance issues related to storage are ultimately tied to block sizes. Understanding such an important element of storage shouldn’t be so difficult.

    Why Does Block Size Matter?

    As mentioned earlier, a block is how much storage payload is sent in a single unit. The physics becomes obvious when you think about the size of a 4KB payload versus a 256KB payload (or even a 512KB payload). Since we refer to them as blocks, picture squares representing their relative capacities.

    Throughput is the product of IOPS and the block size of each I/O being sent or received. Since a 256KB block carries 64 times as much data as a 4K block, block size directly impacts throughput. In addition, the size and quantity of blocks affect bandwidth on the fabric and the amount of processing required on the servers, the network, and the storage environment. All of these items have a big impact on application performance.
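    To make the arithmetic concrete, here is a minimal back-of-envelope sketch in Python; the 10,000 IOPS figure and the block sizes are purely illustrative, not measurements from any particular system.

        # Back-of-envelope relationship between IOPS, block size, and throughput.
        # The IOPS figure and block sizes below are purely illustrative.

        def throughput_mbps(iops, block_size_kb):
            """Throughput in MB/s for a given IOPS rate and block size in KB."""
            return iops * block_size_kb / 1024.0

        # The same 10,000 IOPS translates into very different throughput
        # depending on the block size being serviced.
        for block_kb in (4, 64, 256):
            mbps = throughput_mbps(10000, block_kb)
            print("%3d KB blocks at 10,000 IOPS -> about %.0f MB/s" % (block_kb, mbps))
        # Prints roughly 39 MB/s, 625 MB/s, and 2500 MB/s respectively.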

    This variability in performance is more prominent with flash than with traditional spinning disk, and thus should be carefully observed when procuring an all-flash array or any other device using solid state storage. Reads are relatively easy for flash, but the methods used for writing to NAND flash can keep writes, especially large-block writes, from achieving the same performance as reads. Even a small number of large-block writes can trigger all sorts of activity on the flash devices that prevents effective performance from matching what is seen with smaller-block I/O. This volatility in performance is a surprise to just about everyone when they first see it.

    Block size can impact storage performance regardless of the type of storage architecture used. Whether it is a traditional SAN infrastructure or a distributed storage solution in a hyper-converged environment, the same factors and challenges remain. Storage systems may be optimized for block sizes that do not necessarily align with your workloads. This could be the result of design assumptions of the storage system or limits of its architecture. The abilities of storage solutions to cope with certain workload patterns vary greatly as well. The difference between a good storage system and a poor one often comes down to its ability to handle large block I/O. Insight into this information should be a part of the procurement, design, and operation of any environment.

    The Applications that Generate Blocks

    What makes the topic of block sizes so interesting are the operating systems, the applications, and the workloads that generate them. The block sizes are often dictated by the processes of the OS and the applications that are running in them.

    Contrary to what many might think, there is often a wide mix of block sizes in use at any given time on a single VM, and the mix can change dramatically by the second. These changes have a profound impact on the ability of the VM, and the infrastructure it lives on, to deliver the I/O in a timely manner. It’s not enough to know that perhaps 30 percent of the blocks are 64KB in size. One must understand how they are distributed over time, and how latencies or other attributes of those blocks of various sizes relate to each other.

    Traditional Methods Lack Visibility

    The traditional methods for viewing block sizes have been limited. They provide an incomplete picture of their impact – whether it be across the data center, or against a single workload. Below is a breakdown of some common methods for measuring block sizes, and a description as to why they are lacking:

    1. Kernel statistics courtesy of vscsiStats. This utility is a part of ESXi, and can be executed via the command line of an ESXi host. The utility provides a summary of block sizes for a given period of time, but suffers from a few significant problems.
    • Not ideal for anything but a very short snippet of time, against a specific VMDK.
    • Cannot present data in real-time. It is essentially a post-processing tool.
    • Not intended to show data over time. vscsiStats will show a sum total of I/O metrics for a given period of time, but only for a single sample period; it has no way to track this over time. One must script it to produce results for more than a single period (a minimal wrapper sketch appears after this breakdown).
    • No context. It treats the workload (actually, just the VMDK) in isolation, missing the context necessary to properly interpret the data.
    • No way to visually understand the data. This requires the use of other tools to help visualize the data.

    The result, especially at scale, is a very labor-intensive exercise that still yields an incomplete picture. It is extremely rare for an administrator to run through this exercise on even a single VM to understand its I/O characteristics.

    2. Storage array. This would be a vendor-specific “value add” feature that might present some simplified summary of data with regard to block sizes, but this too is an incomplete solution:
    • Not VM aware. Since most intelligence is lost the moment storage I/O leaves a host HBA, a storage array would have no idea what block sizes were associated with a VM, or what order they were delivered in.
    • Measuring at the wrong place. The array is simply the wrong place to measure the impact of block sizes. Think about all of the queues storage traffic must go through before writes are committed to the storage and reads are fetched (this also assumes no caching tiers exist outside the storage system). The desire would be to measure at a location that takes all of this into consideration: the hypervisor. Incidentally, this is why an array can show great performance in its own metrics while the VM suffers in observed latency. It speaks to the importance of measuring data at the correct location.
    • Unknown and possibly inconsistent method of measurement. Showing any block size information is not a storage array’s primary mission, and doesn’t necessarily provide the same method of measurement as where the I/O originates (the VM, and the host it lives on). Therefore, how it is measured, and how often it is measured is generally of low importance, and not disclosed.
    • Dependent on the storage array. If different types of storage are used in an environment, this doesn’t provide adequate coverage for all of the workloads.
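    To make the scripting burden of the first method concrete, below is a minimal sketch of a wrapper that samples the vscsiStats I/O-length histogram repeatedly over time. It is an illustration under stated assumptions, not a supported tool: the world group ID, output path, and sampling cadence are placeholders, and the flags used (-l, -s, -p ioLength, -x, -w) should be verified against the vscsiStats documentation for your ESXi build.

        # Minimal sketch: wrap vscsiStats in a loop so the I/O-length histogram
        # is captured repeatedly over time instead of once. Placeholder values
        # and flag names should be checked against your own ESXi host.
        import subprocess
        import time

        WORLD_GROUP_ID = "123456"   # placeholder; list real IDs with: vscsiStats -l
        SAMPLES = 12                # number of snapshots to capture
        INTERVAL_SECONDS = 300      # seconds between snapshots

        def run(args):
            """Run a vscsiStats command and return its text output."""
            return subprocess.check_output(args, universal_newlines=True)

        # Start statistics collection for the chosen VM (world group).
        run(["vscsiStats", "-s", "-w", WORLD_GROUP_ID])
        try:
            with open("/tmp/iolength_samples.txt", "a") as out:
                for _ in range(SAMPLES):
                    time.sleep(INTERVAL_SECONDS)
                    # Histograms are cumulative since -s; diff consecutive
                    # snapshots if per-interval distributions are needed.
                    histogram = run(["vscsiStats", "-p", "ioLength", "-w", WORLD_GROUP_ID])
                    out.write("--- sample at %s ---\n%s\n" % (time.ctime(), histogram))
        finally:
            # Stop collection so the host is not left gathering stats indefinitely.
            run(["vscsiStats", "-x", "-w", WORLD_GROUP_ID])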

    The hypervisor is an ideal control plane to analyze the data. It focuses on the results of the VMs without being dependent on nuances of in-guest metrics or a feature of a storage solution. It is inherently the ideal position in the data center for proper, holistic understanding of your environment.

    The Absence of Block Size in Data Center Design Exercises

    The flaw with many design exercises is that we assume we know what our assumptions are. Let’s consider the typical inputs for storage design, which include factors such as:

    • Peak IOPS and throughput
    • Read/write ratios
    • RAID penalties
    • Perhaps some physical latencies of components, if we want to get fancy

    Most who have designed or managed environments have gone through some variation of this exercise, followed by a little math to come up with the correct blend of disks, RAID levels, and fabric to support the desired performance. Known figures are used when they are available, and the rest are filled in with assumptions. And yet block sizes, and everything they impact, are nowhere to be found. Why? A lack of visibility and understanding.
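    As a hedged illustration of that “little math,” here is a short Python sketch of the classic sizing calculation, followed by the block-size-driven throughput term that is usually left out. Every input figure is invented for the example, and the RAID write penalty shown is the commonly cited RAID 5 value.

        # Classic back-of-envelope storage sizing, plus the block-size term
        # that is usually left out. All input figures are illustrative only.
        peak_iops = 20000           # peak front-end IOPS observed in monitoring
        read_ratio = 0.7            # 70 percent reads, 30 percent writes
        raid_write_penalty = 4      # commonly cited RAID 5 figure (RAID 10 ~2, RAID 6 ~6)

        # Back-end IOPS the disks must service once the write penalty is applied.
        backend_iops = (peak_iops * read_ratio
                        + peak_iops * (1 - read_ratio) * raid_write_penalty)
        print("Back-end IOPS required: %.0f" % backend_iops)   # 38000

        # The step block-size awareness adds: the same front-end IOPS implies
        # very different throughput and fabric load depending on block size.
        for block_kb in (4, 32, 256):
            mbps = peak_iops * block_kb / 1024.0
            print("At %d KB average blocks, %d IOPS is roughly %.0f MB/s"
                  % (block_kb, peak_iops, mbps))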

    An infrastructure only exists because of the need to run services and applications on it. Let those applications and workloads help tell you what type of storage fits your environment best. Not the other way around.

    Summary

    Proper visibility into, and understanding of, the distribution of block sizes across an environment pays dividends throughout its entire lifecycle. Understanding and accounting for block sizes in the design, operation, and optimization phases of the VM lifecycle leads to more predictable application delivery, possibly with a more affordable price tag.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    6:05p
    In Tight Tech Market, Investors Choose Quality over Flash
    By The VAR Guy


    According to analyst firm Gartner, worldwide information security spending reached $76.9 billion in 2015. Private investments into the space reflected that number, as investors pumped a jaw-dropping $3.3 billion into 229 cybersecurity deals last year, according to data from CB Insights. As businesses of all sizes and sectors begin to realize their vulnerability to hackers, security spending is expected to grow by well over 100 percent, to $170 billion by 2020. But will cybersecurity investments keep pace?

    2016 has now infamously become known as the Age of the Cockroach as investments in so-called “unicorns,” tech startups with a valuation of over $1 billion, come to a near halt. While unglamorous, the term points to a significant change in investor mindset.

    “It goes without saying that the funding climate has seen a marked shift from 2015 to 2016,” says Brian Ahern, CEO at Threat Stack, a cloud security firm that recently received $15.3 million in Series B funding. “Inflated security valuations over the past couple of years, compounded by an uncertain macro-economic climate, have made institutional investors more cautious as they contemplate new security investments.”

    Chris Lynch is a partner at Accomplice, one of the investment firms behind Threat Stack’s recent funding round. “There’s been an over-infusion of capital into the tech sector, which has created artificial valuations on a lot of these companies that aren’t building real products and services,” Lynch says. “They’re building things that, frankly, we don’t know if they’re going to work or not, but they’re getting tremendous valuations from this infusion of capital from so many different sources.”

    Ahern began previewing the company to large brand-name west coast institutional investors in the third quarter of 2015. “With business momentum building behind Threat Stack, the majority of the investors realized the appeal of the market but wanted to see quantifiable results through the end of 2015,” said Ahern. The company exceeded its 2015 financial commitments and kicked off the series B fundraising activities with high hopes. Then the market slowed. Then it slowed some more.

    Lynch wasn’t surprised when the markets tightened and investors began turning to “companies that maybe didn’t look the sexiest, but were doing their job and building their company as opposed to waving their arms and making a lot of noise.” Institutional investors became focused on establishing greater reserves or shifting investment dollars from startups to growth opportunities with more established reputations—our aforementioned cockroaches. Whatever the reason, one thing became clear. Investors weren’t being sold only on grand visions and vague strategies. They wanted hard numbers and quantifiable facts.

    The list of questions investors threw at Ahern went on and on. How large was the market? What is Threat Stack’s true differentiated approach? Was the sales model repeatable? What was the customer retention rate? Had the company delivered on its previous commitments? What was the quality of the leadership team? What was the company culture? Was growth sustainable?

    Ahern says the fundraising was more difficult than anyone anticipated, but that starting the process early and maintaining a focus on solid customer support and actual financial performance raised Threat Stack above competitors that may be more focused on raising money than building a great company. Now he, Lynch and Threat Stack’s other investors can focus on executing through the next couple of years while others try to scramble for funds in a suddenly tight market.

    “I think what you’re seeing and what this financing represents is a flight to quality,” Lynch says. “They’re getting funded in a time when it’s tough to get funded as a security company because there’s always room for quality.”

    This first ran at http://thevarguy.com/it-network-business-financing-solutions/tight-tech-market-investors-choose-quality-over-flash

    7:31p
    TierPoint’s Boston Data Center Has a New Landlord

    Lincoln Rackhouse, a division of a Dallas-based real estate firm called Lincoln Property Company, has acquired a colocation data center in Marlborough, Massachusetts, which is about 30 miles west of Boston.

    The building, which has both office and data center space, is fully leased. About 70 percent of it is leased to data center tenants, according to Lincoln.

    A Lincoln spokesman declined to disclose who the building’s data center tenants are, but colocation provider TierPoint lists the address (34 St. Martin Street) as the location of its MetroWest Boston data center. TierPoint is potentially not the only data center tenant at the site.

    Lincoln bought the property from RREF Real Estate, the spokesman, Warren Loftis, said in an email.

    Lincoln is a little-known company in the data center market, but it provides data center services all around the US. It helps companies source data center capacity from other providers but also owns several data center sites itself, including facilities in Texas, Colorado, North Carolina, New York, and now also in the Boston market.

    The Boston data center plugs into the area’s diverse fiber network, which is a key aspect of the property as a colocation site. It has access to multiple long-haul fiber routes and sits in proximity to other data centers and telco offices in the area.

    The building has about 130,000 square feet of data center space and more than 10MW of critical power capacity, Loftis said. In a brochure for its MetroWest Boston data center, TierPoint says it has 40,000 square feet of raised floor, which means the remaining 90,000 square feet is either leased to other tenants or is being held by TierPoint for future expansion.

    TierPoint is a quickly expanding data center provider, doing most of its expansion via acquisition. It acquires smaller-size data center players with facilities in secondary markets, such as Boston.

    TierPoint has acquired the data center business of the telco Windstream, Midwest data center provider Cosentry, Chicago data center provider AlteredScale and Florida provider CxP.

    Related: Markley Building Out 50MW Boston Metro Data Center

