Data Center Knowledge | News and analysis for the data center industry
Wednesday, February 5th, 2014
Looking Back: Google’s First Data Center
Fifteen years ago, Urs Hölzle made his first visit to a data center. Accompanied by Google co-founder Larry Page, Hölzle took a trip inside an Exodus Communications facility in Santa Clara, where the search engine had set up its first cage of equipment. Hölzle, who helped build Google’s huge global data center network, shares memories of those humble beginnings over at Google+.
“You couldn’t really ‘set foot’ in the first Google cage because it was tiny (7′x4′, 2.5 sqm) and filled with about 30 PCs on shelves,” Urs writes. “A1 through a24 were the main servers to build and serve the index and c1 through c4 were the crawl machines.”
Google co-founder Sergey Brin adds some detail in the comments: “We improvised our own external cases for the main storage drives including our own improvised ribbon cable to connect 7 drives at a time per machine,” Brin writes. “Of course, there is a reason people don’t normally use ribbon cables externally and ours got clipped on the edge while we ferried these contraptions into the cage. So late that night, desperate to get the machines up and running, Larry did a little miracle surgery to the cable with a twist tie. Incredibly it worked!”
There’s also an Exodus invoice for that first cage. Read the full post at Google+.
Creating Enterprise Data and Mobility Security
As more enterprise users take advantage of mobile devices, there’s a growing need for mobility security.
More end-users are bringing their own devices into the corporate setting to get their jobs done. In fact, some users now utilize three or more devices, all of which may have access to corporate data.
Furthermore, the numbers around just how much data passes through these devices paint a clear picture. According to the latest Cisco Visual Networking Index, “The increasing number of wireless devices that are accessing mobile networks worldwide is one of the primary contributors to traffic growth. Each year, several new devices in different form factors, and increased capabilities and intelligence, are being introduced in the market. By 2017, there will be 8.6 billion hand-held or personal mobile-ready devices and 1.7 billion machine-to-machine connections.”
 A growing number of wireless devices, including smartphones, tablets and laptops, are accessing mobile networks worldwide.
There are some inherent benefits to creating corporate mobility – productivity, worker happiness, less end-point management – but there are also many concerns. IT administrators are already responsible for many devices on their network. Now, there’s the potential that they have to monitor and manage even more.
The most efficient way to approach mobility is a well-planned deployment with good policies in place. Without a doubt, one of the first planning points will revolve around security and how best to manage it with so many devices being brought in.
Here’s the first mind-shift that has to happen: instead of trying to control the device, care more about the applications, workloads, data and experience being delivered to it. This way you create an optimal delivery methodology that is truly agnostic to the device itself. Still, security must be wrapped into these policies and around the workloads being delivered. To that end, here are some good ways to create mobility and data security.
- Use Enterprise/Mobility Management Platforms. The rise of the mobility revolution meant that there had to be a technology to help administrators manage both devices and the data flowing through them more efficiently. Working with these management platforms can have a lot of benefits for organizations allowing personal devices to connect to internal network components. Scanning for rooted or hacked devices and stopping access from malicious software are all MDM/EMM features. Furthermore, administrators can leverage granular control mechanisms for better visibility and manageability of end-point devices. If a device is lost or stolen, administrators have the option to remotely wipe only the corporate data or the entire device. Finally, these platforms can directly optimize how applications and other content are delivered to the user by creating adaptive orchestration policies.
- Lock down applications and workloads. A large part of the mobility and data control environment resides with various virtualization technologies. In creating a good mobility security policy, administrators have to find ways to lock down their applications, various data points and even desktops. By using next-generation technologies, administrators can limit access to all or even part of an application or workload. Above and beyond just controlling how the end-point accesses the environment, user and data controls should be deployed to better manage mobility-enabled devices.
- Deploy next-generation security. Enterprise security has come a long way. Physical firewalls are no longer the end-all security solution. Now, administrators can deploy specific security processes on dedicated virtual or physical devices. Working with next-generation security products, administrators are able to truly lock down access to their network.
For an enterprise mobility initiative, next-gen security can help with some of the following tasks:
- End-point device interrogation.
- Access based on the device, location, and user.
- Using application firewalls.
- Deploying virtual appliances as secondary checkpoints or isolated controllers for end-user personal devices.
- Deploying adaptive two-factor authentication methods driven by secure certificates.
- Data access monitoring.
- Data Leakage Prevention (DLP) and Intrusion Prevention/Detection Services (IPS/IDS).
The term “next-generation” security really focuses on the new types of IT initiatives currently being deployed by many organizations. Part of that includes mobility, device, and data management. Terminology aside, if you’ve purchased a network access controller, security appliance, or some type of gateway technology, chances are your device already has some next-generation security features built in. Use your appliances – both virtual and physical – to their fullest capabilities to deliver a truly powerful computing experience.
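To make the device, location and user checks above concrete, here is a minimal, hypothetical Python sketch of the kind of access-policy decision such a platform might apply before a personal device reaches corporate workloads. The field names, allowed platforms and allowed countries are illustrative assumptions, not any particular MDM/EMM product’s API.

```python
# Hypothetical policy-decision sketch: interrogate the end-point and decide
# whether it may reach corporate workloads. All fields and rules here are
# illustrative assumptions, not a specific vendor's schema.
from dataclasses import dataclass

@dataclass
class Device:
    user: str
    platform: str          # e.g. "ios", "android", "windows"
    rooted: bool           # rooted/jailbroken devices are refused outright
    country: str           # coarse location reported by the access gateway
    has_valid_cert: bool   # certificate-backed second authentication factor

ALLOWED_PLATFORMS = {"ios", "android", "windows"}
ALLOWED_COUNTRIES = {"US", "CA"}

def allow_access(device: Device) -> bool:
    """Return True only if the device passes posture, location and auth checks."""
    if device.rooted:
        return False                       # compromised end-point
    if device.platform not in ALLOWED_PLATFORMS:
        return False                       # unsupported device type
    if device.country not in ALLOWED_COUNTRIES:
        return False                       # access based on location
    return device.has_valid_cert           # adaptive, certificate-driven 2FA

print(allow_access(Device("alice", "ios", False, "US", True)))    # True
print(allow_access(Device("bob", "android", True, "US", True)))   # False: rooted
```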
Create Mobility and Data Usage Policies
An organization may have the best infrastructure in place for mobility; however, an uninformed user can still be a serious liability to manage. User empowerment and education have come a long way in the IT field, and many users are much savvier than they are given credit for. Even so, their use of corporate data on personal devices may still (accidentally, in most cases) put that data at risk. First of all, there needs to be a corporate mobility policy in place. In many instances, this is an extension of the existing computer usage policy. Users must know that although the devices they are using may be personal, the data they are viewing is still corporate-owned. Because of this, their data usage or even their working session may be monitored and controlled. Although visibility into the personal device will be limited by privacy regulations, all data accessed from the central data center may be monitored and user activity logged.
Creating a happier worker can have many different benefits. However, security and integrity of corporate data must be one of the top priorities. The beauty of today’s security technologies is that administrators are able to still deliver a powerful computing experience while locking down their infrastructure. When working with modern mobility trends, the main rule is simple: never allow a free-for-all to occur.
Although many devices may be allowed, IT administrators should still limit the types of devices they allow on their network. In many cases, to access corporate data, the end-user may need to install some client software. To ease management, IT should supply a list of supported hardware capable of running the client on various end-point platforms. In doing so, the user can still bring in their own devices and access the data, and IT will be able to secure and control the experience.
Cologix Acquires DataCenter.BZ in Columbus, Ohio
The equipment area inside the Datacenter.BZ facility in Columbus, Ohio, which has been acquired by Cologix. (Photo: Datacenter.BZ)
Interconnection and data center company Cologix has acquired DataCenter.BZ, including its 32,000 square foot facility in Columbus, Ohio. The acquisition fits into the company’s strategy of acquiring leading interconnection points in underserved markets, following similar deals in Minnesota and Jacksonville. The DataCenter.BZ facility in Columbus is a strong interconnection point, where customers have access to 30 network choices and the OHIO-IX within the Meet-Me Rooms. The entire DataCenter.BZ team will remain with Cologix.
“We are proud to welcome the DataCenter.BZ Team and its customers to the Cologix platform,” says Grant van Rooyen, President and Chief Executive Officer of Cologix. “DataCenter.BZ is a truly unique blend of an interconnect hub and an enterprise grade data center designed to Tier IV standards with high touch local support. That, combined with Columbus’s growing recognition as a vibrant and dynamic market, gives us great confidence in the benefits this acquisition will yield for the Cologix platform. Gordon Scherer, Paul Keinath and Michael Scherer, along with their team, have designed and built a business of a truly remarkable quality and we are thrilled to be partnering with them to rapidly expand the opportunity in Columbus and surrounding areas.”
This is Cologix’s eighth acquisition since the company was formed in 2011. The transaction includes five acres of land, all buildings and mission-critical facilities, more than 100 customers, significant metro conduit and dark fiber assets, and in excess of $5 million of run-rate EBITDA.
“This is an exciting day for DataCenter.BZ customers and the enterprise community in Central Ohio,” said Gordon Scherer, President, DataCenter.BZ. “Becoming part of the Cologix platform offers our customers seamless access to a unique North American footprint, confidence around continued investment in Columbus and a certainty that our core values of upholding the highest standards for customers will be maintained going forward.”
Columbus, Ohio is the 15th largest city in the U.S., but is still an emerging colocation market. It’s home to five Fortune 500 companies and was ranked among the Top 10 friendliest cities for small business by Thumbtack and the Kauffman Foundation.
Denver-based Cologix now operates network-neutral data centers in Columbus, Dallas, Jacksonville, Minneapolis, Montreal, Toronto and Vancouver. With the DataCenter.BZ acquisition, Cologix supports 700 customers, offering a choice of access to more than 350 unique networks. The company increased its credit facility to $165 million last December and is using that access to capital in part to fund its aggressive expansion plans. Last year also saw expansions at the Dallas Infomart as well as in Vancouver.
Amazon AWS: Understanding The User’s Role in a Shared Security Model
James Mascarenhas is executive director, cloud storage solutions, for Endpoint Vault.
Amazon Web Services, backed by a huge number of servers around the world, delivers computing (EC2), storage (S3) and other IT services using an Infrastructure as a Service (IaaS) model. These services are available to anyone who wishes to use AWS infrastructure to build their own independent virtual computing system.
Amazon secures its data centers with best security practices and compliance standards, and has developed a shared security model that relies on users to complete the security chain. AWS doesn’t have access to your virtual instance, and only you can manage and make changes to it; therefore, the virtual server becomes the responsibility of the owner. Some of the ways Amazon provides security to its data centers and to users are:
1) Concealed and classified data center locations, with round the clock security.
2) Access is limited to employees and contractors, who must pass multi-factor authentication before being granted physical or logical access to the data center.
3) Adherence to various compliance standards associated with security:
- FISMA, DIACAP, and FedRAMP, among a host of other standards.
4) Instance isolation from other virtual machines running on the same physical server.
Since physical access is highly restricted, these compliance standards have been made publicly available so customers can verify the security of Amazon’s data centers. These are just some of the security measures, among a host of others, that can be analyzed in depth by downloading the AWS security whitepapers.
Now that Amazon has played its part by taking care of the data center, the security ball comes to your court, and the measures you take will decide how vulnerable your virtual instance is going to be. The basic rule of thumb is to treat your virtual server/instance the same way you treat your on-premise server, except that in the cloud you don’t have to worry about the physical nature of the server, since Amazon has taken care of it.
The basic security measures that Amazon believes you should take are summarized below:
Account/Key Management
Use of MFA for the root account: The root account gives unlimited access to your AWS resources, and anyone with access to it may modify the resources associated with that account. Limit the use of the root account and instead create groups to access AWS resources. Account security can be further enhanced by Multi-Factor Authentication (MFA), which requires multiple authentication factors before authorizing the use of particular resources.
Create multiple groups and set permissions accordingly: Create different groups to manage and set policies based on the requirements of the group or individual. Even if you want someone to have full admin-level access, instead of handing over the root account you should create a group and add the specific user to it, so that the permission can easily be revoked when necessary without compromising instance security.
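As a concrete illustration of this group-based approach, here is a minimal sketch using the boto3 SDK (a tool of our choosing, not one named in the article); the group name “Admins” and the user “alice” are hypothetical.

```python
# Minimal sketch, assuming the boto3 SDK and valid AWS credentials: grant
# admin rights through a group rather than by sharing the root account.
import boto3

iam = boto3.client("iam")

# Create a group and attach a managed admin policy to it.
iam.create_group(GroupName="Admins")
iam.attach_group_policy(
    GroupName="Admins",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)

# Add an individual user to the group; membership can later be revoked
# without touching the root account or the instances themselves.
iam.create_user(UserName="alice")
iam.add_user_to_group(GroupName="Admins", UserName="alice")
```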
Patch Management
Routinely audit the different software and operating systems running in your virtual environment for potential security threats and lapses. Eliminate these loopholes by updating applications with the patches provided by the vendors. You can use tools that automate such processes for you; IBM Endpoint Manager is one good example for Windows, Linux, UNIX and Mac OS patch management.
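One hypothetical way to automate part of such an audit on a Linux instance is sketched below: it lists packages with pending updates on a Debian/Ubuntu system so they can be logged or fed into an alerting tool. The approach is our assumption, not something prescribed in the article.

```python
# Patch-audit sketch for a Debian/Ubuntu instance (assumption, not from the
# article): list upgradable packages so pending patches can be reviewed.
import subprocess

def pending_updates():
    # "apt list --upgradable" prints one line per package with a newer version.
    out = subprocess.run(
        ["apt", "list", "--upgradable"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line.split("/")[0] for line in out.splitlines() if "/" in line]

if __name__ == "__main__":
    for pkg in pending_updates():
        print("needs patch:", pkg)
```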
Securing data by use of encryption
You can upload already-encrypted files directly to the instance, so that only those with the decryption key will be able to decrypt the data. You can also use the server-side encryption features provided by Amazon to automate the encryption and decryption process: a file being uploaded is encrypted before it is saved in the data center and is decrypted automatically when you download the object. You should also consider encrypting data in transit using SSL for secure delivery of your content over the network.
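For illustration, here is a minimal boto3 sketch (our assumption, not code from the article) of Amazon’s server-side encryption for S3 objects; the bucket and key names are hypothetical.

```python
# Server-side encryption sketch with boto3: S3 encrypts the object at rest
# and decrypts it transparently on download. Bucket/key names are made up.
import boto3

s3 = boto3.client("s3")

with open("q1-report.csv", "rb") as f:
    s3.put_object(
        Bucket="example-corp-backups",
        Key="reports/q1-report.csv",
        Body=f,
        ServerSideEncryption="AES256",
    )

# The object comes back decrypted; boto3 talks to S3 over SSL/TLS by default,
# which covers the data-in-transit point as well.
obj = s3.get_object(Bucket="example-corp-backups", Key="reports/q1-report.csv")
data = obj["Body"].read()
```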
Access management
How content is made available to other people might create a security lapse for your digital assets. By default, Amazon sets everything to private, accessible only to the account holder, but you can override this and make specific data available to everyone or to only a specific set of people. So share data with those you trust, and if it is necessary to make a piece of data publicly accessible, make sure to take proper measures.
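One example of taking “proper measures” instead of making data public: generate a time-limited pre-signed URL for a private object. The sketch below uses boto3 with hypothetical bucket and key names.

```python
# Sharing sketch (assumption, not from the article): keep the S3 object
# private and hand out a pre-signed URL that expires after one hour.
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-corp-backups", "Key": "reports/q1-report.csv"},
    ExpiresIn=3600,  # seconds; after this the link stops working
)
print(url)
```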
VPN/Security Gateway from Your Site
You can boost security further by using a VPN to connect your virtual instances directly to your corporate site. A security gateway helps ensure that if something goes wrong with the Amazon servers, your data gets back to your local server safely and securely.
The rule of thumb is that Amazon will take care of the physical assets and you should take care of the logical assets of the instances you own. Amazon’s shared security philosophy states that the final security responsibility lies with the owner, not with Amazon. To put it another way: Amazon has built a shiny new car with plenty of security measures and, at the time of handing over the keys, tells you, “Buddy, the car is yours now, just drive carefully.”
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
Backblaze Completes 500 Petabyte Data Center
Rows of storage units inside the new Backblaze data center in the Sacramento market. (Photo: Backblaze)
Online backup provider Backblaze has completed a 500 petabyte data center at the SunGard Availability Services facility just outside of Sacramento, California, the company said in a blog post. After outgrowing its 40 petabytes of storage in a caged facility in Oakland, the company set out in 2012 to find a new home. After reviewing proposals from all over the nation, SunGard was selected, and the staff went to work installing the company’s signature Storage Pods. The data center also has SAS 70 Type II and ISO 9001 certifications and is PCI-DSS compliant.
The Sacramento data center has been quietly receiving customer data, and by September of last year all new customer accounts were being serviced there. Backblaze expects to store 500 petabytes of customer data at the new facility.
The extremely cost-efficient Backblaze Storage Pod 1.0 caught the interest of many in 2009, and the current Storage Pod 3.0 packs 180 terabytes into a redesigned 4U chassis with many upgraded components. Because Backblaze shared its design ideas for the Storage Pod architecture, other companies such as Netflix were inspired to design their own custom storage appliances. The complete story of Storage Pod 3.0 – the architecture, specs, and economics – is in this February 2013 blog post.
DataSite Sees Business in Boise, Retools HP Data Center
A look at the raised floor space in the east wing of the new DataSite Boise, Idaho facility.
DataSite is taking over a building in Boise that was originally built by the Bank of Idaho back in the 1980s. HP then acquired the site, doubling it in size and running it as a key data center for its operations. Boise was home to HP’s development of the LaserJet printer, and the company continues to be a big player in the area.
The data center is 60,000 square feet, which HP ran as two entirely separate data centers. The switch gear, generator, UPS plant and all of the cooling systems are redundant. DataSite saw the site as an opportunity to offer its brand of hybrid colocation. The facility’s power capacity is currently 1,650 kilowatts, but the company plans to boost that in phases, eventually expanding to four to five megawatts of power to support about 35,000 square feet of raised floor.
“In Boise, we have a unique opportunity,” said Jeff Burges, President of DataSite. “Boise is underserved today. There’s nobody doing colocation in Boise today like we do.”
Hybrid Provisioning Choices
DataSite’s hybrid colo offering provides customers with options in how they provision four key infrastructure components: the main utility (switch gear and utility service), the generator plant, the UPS plant and the cooling plant. This approach was developed to address users that didn’t want to be a typical colo customer, giving them the option to own and control the pieces they want, even paying for those pieces with their own money. It is the same strategy the company employs in its Marietta, Georgia data center. The company also has a site in Orlando, Florida.
“We do hybrid colocation and dedicated,” said Burges. “We melted those two systems together to provide a variety of tier ratings and custom configs. When we approach a customer, we can give them, for their entire installation or a single rack, a higher or lower tier rating. This enables them to get the benefits for some of their installation, but not other parts. With redundancy goes cost – the higher the redundancy, the higher the cost.”
The company looks for what it calls the “Trifecta of Value” in evaluating sites. The three key factors are:
- Inexpensive electricity rates: The company says rates are as low as 4.5 cents per kilowatt hour in Boise.
- Environmentally Friendly Power Sources: In the case of Boise, the majority of power is hydroelectric.
- Optimal Climate: Boise is a high desert climate at 2,700-foot elevation. It has more hours of the year at 45 degrees or below than just about any location. “You can run that free cooling for a remarkable number of hours,” said Burges. “We drive the effective power rate into the 2 (cent) range. The PUEs are low.”
Why Boise?
Boise has a growing tech scene. “The quality of technology is there, HP and Micron are there, there’s a tech university,” said Burges. “There’s a lot of tech startups. A lot of people call it a mini-Austin.”
The only real environmental risks in Boise are flood risks near the Boise River, but the data center isn’t in a flood zone. There’s also some fire risk due to the dry climate. However, most natural disasters leave Boise untouched.
“It’s an N+1 world, but many wish to have more, or less,” said Burges, referring to industry parlance for redundancy in key equipment. “We believe it’s an N+1. We market variety, but with most N+1, some buildings just miss ‘tier 3’ by missing an element or so. The end user community looks at those things they look to be most important. The world is a little easier on the colo operators. We see that the end user likes to be able to get that redundancy if they want.
“We will not be skittish about spreading people out in the building,” said Burges. “It was built as two data centers. We’re combining two into one for very high reliability. There’s plenty of space to build equipment rooms. We can do an entirely built-to-suit with the remaining space. We can do dedicated space. The way HP built out and operated it enables our customer-centric colo better than any of our buildings.”
“High density is here to stay, and this location will offer up to 400-500 watts per square foot,” said Rob Wilson, Director of Sales and Marketing, DataSite.
DataSite will refurbish the building over the next 45 days and open for business on April 1. The company has already begun talks on a couple of deals. “Something that’s missing in Boise is the higher-end environment, the space for critical load,” said Wilson. “They’re forced to build their own, so this really opens that market for having that environment.”
Cloudera Launches Enterprise Data Hub
Concurrent launches a big data focused solution for application performance management, Cloudera launches enterprise data hub versions tailored to big data needs, and Violin Memory names Kevin DeNuccio as CEO.
Cloudera launches enterprise data hub. Cloudera announced the availability of an enterprise data hub solution and unveiled new simplified product packaging. Cloudera Enterprise is now offered in three editions, aligned with how customers use Hadoop: Enterprise Data Hub, Enterprise Flex edition, and Enterprise Basic edition. A new addition to the Cloudera Enterprise family is support for Apache Spark, an open source, parallel data processing framework that complements Hadoop, making it easy to develop fast, unified Big Data applications that combine batch, streaming, and interactive analytics. “Fast Data is a new hotspot with Hadoop,” said Tony Baer, principal analyst, Ovum. “The same data explosion that generated demand for Big Data Analytics is also creating demand for making the data instantly actionable. Cloudera’s embrace of Spark is a major stake in the ground for promoting Fast Data workloads on Hadoop. Cloudera’s simplified pricing and broad inclusion of components will remove a major point of friction for customers seeking to explore the potential of Hadoop.”
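For readers unfamiliar with Spark, the short sketch below shows the flavor of a simple batch job run against data in HDFS using PySpark; it is a generic word-count illustration with a hypothetical path, not code from Cloudera’s announcement.

```python
# Generic PySpark batch-job sketch (illustrative only): count words in files
# stored on HDFS and write the totals back out.
from pyspark import SparkContext

sc = SparkContext(appName="word-count-sketch")

counts = (
    sc.textFile("hdfs:///data/articles/*.txt")    # read input from HDFS
      .flatMap(lambda line: line.split())         # split lines into words
      .map(lambda word: (word, 1))                # pair each word with a count
      .reduceByKey(lambda a, b: a + b)            # sum the counts per word
)

counts.saveAsTextFile("hdfs:///data/word-counts")
sc.stop()
```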
Violin Memory appoints Kevin A. DeNuccio as CEO. Violin Memory (VMEM) announced that its Board of Directors has appointed Kevin DeNuccio as president and chief executive officer. The previous CEO Howard A. Bain III will remain chairman of the board of directors. DeNuccio brings to Violin more than 25 years of executive and board experience building, managing and growing leading technology businesses such as Metaswitch Networks and Redback Networks. “Kevin DeNuccio has an outstanding track record within the technology industry of achieving profitable growth and creating significant value,” said Bain. “His experience transforming businesses by developing deep client and channel relationships, rapidly scaling operations and cultivating talent will be invaluable to Violin as we focus on building a strong future for all our stakeholders.”
Concurrent launches APM product for big data. Big data platform company Concurrent announced Driven, an application performance management (APM) product for big data applications. Driven is designed to address the pain points of enterprise application development and application performance management on Apache Hadoop. Offering key application metrics in real-time, Driven lets developers, data analysts, data scientists and operations isolate and resolve problems immediately. Driven is a free cloud service and is an integral part of the Cascading community, where users can collaborate across organizations and get help from community experts. “Driven is a powerful step forward in delivering on the full promise of connecting business with Big Data,” said Chris Wensel, founder and CTO at Concurrent. “Gone are the days when developers must dig through log files for clues to slow performance or failures of their data processing applications. The release of Driven further enables enterprise users to develop data oriented applications on Apache Hadoop in a more collaborative, streamlined fashion. Driven is the key to unlock enterprises’ ability to drive differentiation through data. There’s a lot more to come – this is only the beginning.”