Data Center Knowledge | News and analysis for the data center industry
Tuesday, February 12th, 2013
1:36p
ScaleBase SQL Software Gets a Sequel
Database scalability software provider ScaleBase announced version 2.0 of its Data Traffic Manager software. The software addresses three big challenges that next-gen applications face as adoption grows: scalability, availability and centralized management.
Version 2.0 of the Data Traffic Manager software is a major upgrade that gives application providers a robust way to virtualize their databases. By providing scale-out through advanced read/write splitting or data distribution in a single solution, ScaleBase says it is able to scale to an infinite number of users and increase performance without requiring any changes to an existing MySQL infrastructure.
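ScaleBase does not publish its routing internals in this announcement, but the core idea behind read/write splitting can be sketched in a few lines of Python: write statements go to the primary MySQL server, while reads are balanced across replicas. The class, DSN strings and round-robin policy below are illustrative assumptions, not ScaleBase's implementation.

```python
import itertools

class ReadWriteRouter:
    """Toy read/write splitter: writes go to the primary, reads to replicas."""

    WRITE_PREFIXES = ("insert", "update", "delete", "replace",
                      "create", "alter", "drop", "truncate")

    def __init__(self, primary_dsn, replica_dsns):
        self.primary_dsn = primary_dsn
        # Round-robin over replicas so read load is spread evenly (an assumption,
        # not necessarily how a commercial data traffic manager balances reads).
        self._replicas = itertools.cycle(replica_dsns)

    def route(self, sql):
        statement = sql.lstrip().lower()
        if statement.startswith(self.WRITE_PREFIXES):
            return self.primary_dsn        # writes always hit the primary
        return next(self._replicas)        # reads rotate across the replicas


if __name__ == "__main__":
    # Hypothetical connection strings for illustration only.
    router = ReadWriteRouter(
        primary_dsn="mysql://primary.example.com:3306/app",
        replica_dsns=["mysql://replica1.example.com:3306/app",
                      "mysql://replica2.example.com:3306/app"],
    )
    print(router.route("SELECT * FROM orders WHERE id = 42"))
    print(router.route("UPDATE orders SET status = 'shipped' WHERE id = 42"))
```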
“The enhancements in the new version are in direct response to what our customers and prospects are asking for: support for multiple apps, a more intuitive interface and easier deployment,” said Doron Levari, CTO and founder of ScaleBase. “We’re committed to giving app developers and DBAs everything they need to stay focused on app innovation, without having to worry about changing their infrastructure to scale out their apps.”
ScaleBase allows for central management and the ability to analyze complex, distributed database environments. Data Traffic Manager 2.0 supports replicated databases across geographical boundaries, with location-independent load balancing and failover.
ScaleBase is used by companies with some of the fastest-growing databases, including Mozilla. It detects database outages and transparently fails over to backup instances, ensuring application availability. In a breakout year last year, the company doubled in size, moved to new corporate headquarters, and raised $10.5 million in venture funding.
3:00p
Navigating Data Center Performance Challenges With KPIs (Part 1)
David Appelbaum is vice president of marketing at Sentilla Corporation, and has worked in software marketing roles at Borland, Oracle, Autonomy, Salesforce.com, BigFix, and Act-On.
Today’s data center is characterized by sprawling, complex and difficult-to-understand infrastructure that, once installed, never leaves. Data center professionals must address constant demands for new services and rapid data growth, along with escalating demand and availability requirements. To make smart decisions about their infrastructure, they need asset-level and multi-level visibility into what’s happening across all their data centers and colocation facilities. The path to effective infrastructure management begins with global visibility and metrics.
A survey of 5,000 data center professionals conducted in the second half of 2012 highlighted the Key Performance Indicators (KPIs) and metrics that data center professionals need for smart infrastructure decisions. It also tracked the kinds of tools respondents were using, and how effective those tools were at delivering meaningful KPIs.
Here is the first group of findings from this survey, with the rest to be unveiled in a second post coming soon.
Good Metrics are Essential for Flight
An experienced pilot is comfortable flying a small plane by sight in good weather. But, if you put that same pilot in a jumbo jet in foggy weather, he or she will need cockpit instrumentation, a flight plan and air traffic control support.
Today’s data center is more like the jumbo jet in the fog than the small plane. There are many moving parts. New technologies like virtualization add layers of abstraction to the data center. Applications, and especially data, grow exponentially. Power, space, and storage are expensive and finite, yet you need to keep all the business-critical applications running.
To plan for tomorrow’s data center infrastructure, you need visibility into what’s happening today. Relevant metrics show not only what’s happening, but also how it relates to other parts of the data center and to your costs, as the short sketch after this list illustrates:
- What’s the ongoing operational and management cost of one application versus another?
- What’s your data center’s power capacity versus utilization?
- Do you have enough compute, network, storage, power, and space to add this new application?
- Which applications aren’t using much of their capacity, and can you reclaim capacity by virtualizing them?
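To make those questions concrete, here is a minimal sketch of the kind of utilization KPIs behind them. The figures are hypothetical; in practice the inputs would come from your DCIM, monitoring or inventory tooling.

```python
def utilization(used, capacity):
    """Return utilization as a percentage of capacity."""
    return 100.0 * used / capacity

# Hypothetical inventory figures for a single facility.
resources = [
    # (name, used, capacity)
    ("Rack space (U)", 1680, 2400),
    ("Power (kW)", 540.0, 750.0),
    ("Storage (TB)", 310.0, 500.0),
]

for name, used, capacity in resources:
    pct = utilization(used, capacity)
    headroom = capacity - used
    print(f"{name}: {pct:.1f}% used, {headroom:g} remaining")
```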
In the absence of good information, you have to make decisions based on hunches, trends, and incomplete, disconnected data. The safest choice may be over-provisioning. This kind of ‘flying blind’ is a risky and expensive way to run a data center.
The Heart of the Problem: Capacity Limits
While over-provisioning might seem like the safest course of action for preventing performance and availability problems, it can also lead to serious capacity shortages in other places. For example, if you keep adding servers to an application, you may run out of rack space and hit power limitations.
The aforementioned 2012 survey indicates that many data centers are running into capacity constraints. Nearly a third of the respondents have used more than 70 percent of their available rack space.
Many face storage constraints as well – 40 percent of the respondents have used more than half of their available storage, and another 36 percent didn’t know their disk space utilization.
Many organizations find themselves in a cycle in which they’re constantly planning and building out new data centers – without having optimized the efficiency of the existing infrastructure.
Cloud technologies seem to offer an answer, but in reality cloud computing only moves the capacity problems from one location to another. To make intelligent decisions about what to move to the cloud, you need insight into application utilization, capacity and cost – because whether an application resides in your data center, a private hosted cloud or the public cloud, you’re still paying for capacity.
How Do You Address Capacity Constraints?
Different strategies are being employed to address the capacity constraints in the data center:
- Migrate applications to cloud resources (in either private or public clouds). Moving to something like Amazon Web Services may require application re-architecting.
- Consolidate applications, servers and even data centers to gain efficiencies. The 2012 survey shows that there’s still plenty of room for more server virtualization in most data centers. More than half of respondents had virtualized 50 percent or less of their data center applications.
To make smart decisions about where to invest your resources and efforts, you need insight into what’s happening in the data center today. This includes information about capacity, utilization, and cost. Stay tuned for Part 2 to see the rest of the findings.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
3:17p
Deal Roundup: VMware to Acquire Virsto
VMware, F5 Networks and Radware have announced acquisitions recently:
VMware to acquire Virsto. VMware (VMW) announced it has signed a definitive agreement to acquire Virsto Software, a provider of software that optimizes storage performance and utilization in virtual environments. In keeping with VMware’s software-defined data center mantra, Virsto will expand VMware’s storage portfolio and help address the increasing complexity and cost of storage within virtual and cloud environments. “VMware is committed to continuing to deliver software innovations that bring significant efficiencies to our customers while simplifying infrastructure and IT,” said John Gilmartin, vice president of storage and availability, VMware. “We believe that the acquisition of Virsto will accelerate our development of storage technologies, allowing our customers to greatly improve the efficiency and performance of storage in virtual infrastructure.”
F5 to acquire LineRate. F5 Networks (FFIV) announced it has agreed to acquire LineRate Systems, a developer of software defined networking (SDN) services. Through this acquisition, F5 gains access to LineRate’s layer 7+ networking services technology, intellectual property, and engineering talent. “F5’s vision is to enable application-layer SDN services for software defined data centers, providing our customers with unprecedented automation, orchestration, and control,” said Karl Triebes, EVP of Product Development and CTO at F5. “Extreme scale of applications, infrastructure, and services will be vital to SDN, core to F5, and the key to customers realizing the benefits of software defined data centers. LineRate’s programmable network capabilities and innovations in layer 7 will bolster our efforts to extend F5’s market leadership in these areas, and we are very pleased to welcome the LineRate team to F5.”
Radware acquires Strangeloop Networks. Radware (RDWR) announced that it has completed the acquisition of Strangeloop Networks, a leader in the Web performance acceleration domain. Strangeloop’s technology accelerates web applications using an advanced set of proprietary site treatments and is thus able to deliver page content in the most efficient way possible. Strangeloop’s products will continue to be sold and will now be offered under the Radware FastView brand name. “By accelerating the web application response time of an enterprise, Radware can enhance their business performance as well as employee productivity,” says Ilan Kinreich, chief operating officer, Radware. “This is an invaluable capability as we’ve seen an aggressive adoption of SaaS, mobile and cloud technologies where user response time is critical for business continuity. Additionally, online retailers and financial services are dependent on their customer-facing web applications. Therefore, speed and performance are two of the most critical factors in order to increase customer satisfaction, conversion rates and generate revenue.”
4:00p
Preventing a Botnet Attack on Your Data Center
The rise in Internet and cloud utilization has been directly proportional to the rise in WAN-based attacks. As more organizations utilize the power of the cloud, more attackers will search for targets to attack. Fueled by innovations like do-it-yourself botnet construction kits and rent-a-botnet business models, the growth of botnets has skyrocketed, and botnet products and services are now brazenly advertised and sold on the Internet. As many as one quarter of all personal computers may now be participating in a botnet, unknown to their owners. According to Arbor Networks and this white paper, it is believed that at its peak the now-defunct Mariposa botnet may have controlled up to 12 million zombies.
One of the major uses of botnets is to launch Distributed Denial of Service (DDoS) attacks, which are simultaneously executed from multiple infected hosts under the command and control of a botherder. The goal is to slow or take down the targeted domain, network infrastructure, web site or application so it can’t respond to legitimate requests. An attack may also have a secondary goal of installing malware that can steal valuable information from the infected system or enable further intrusions at a later date. The motives behind DDoS attacks range widely, from political hacktivism to competitive sabotage, and such attacks are on the rise.
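Real DDoS mitigation relies on dedicated appliances, flow telemetry and behavioral baselines, but the basic detection idea, flagging a source that sends far more requests in a short window than any legitimate client would, can be sketched as follows. The window length and threshold here are arbitrary assumptions for illustration.

```python
import time
from collections import defaultdict, deque

# Assumed values; production systems tune these from observed baselines.
WINDOW_SECONDS = 10
MAX_REQUESTS_PER_WINDOW = 200

_request_log = defaultdict(deque)   # source IP -> timestamps of recent requests

def is_suspicious(source_ip, now=None):
    """Flag a source that exceeds a simple per-window request threshold."""
    now = now or time.time()
    window = _request_log[source_ip]
    window.append(now)
    # Drop timestamps that have aged out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS_PER_WINDOW

if __name__ == "__main__":
    flagged = False
    for _ in range(250):                      # simulate a burst from one source
        flagged = is_suspicious("203.0.113.7")
    print("flagged after burst:", flagged)
```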

In 2010, 77 percent of respondents in Arbor’s annual survey experienced application-layer attacks, and such attacks represented 27 percent of all attack vectors.
Download this white paper to learn the intricate workings of a botnet. Because no single entity can analyze and gain expertise in all types of malware or threats, security organizations typically specialize in a specific type of threat. For example, ASERT specializes in the analysis of botnets and DDoS attacks and relies on other expert organizations in the security community for information regarding other types of threats. In return, Arbor openly publishes and shares its analysis and information with these other trusted security organizations. This white paper looks at how botnets and DDoS attacks have evolved over the years and where administrators must focus their attention moving forward. By partnering with the right security solutions, IT managers can create a secured environment capable of agility and growth.
5:12p
Video: LeaseWeb Migrates 3,000 Servers
What’s it like to move 3,000 servers into a new data center? As part of its expansion in the U.S. market, LeaseWeb recently migrated 100 server-filled racks into a new data center hall at the COPT6 facility in Manassas, Virginia. The move took place over three nights, with no downtime. The LeaseWeb team has put together a video providing a behind-the-scenes look at the process and how they organized the project, which involved significant work to prepare the data hall, move the servers, and get them back online. This video runs 5 minutes.
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.
6:00p
Data Center Jobs: ISS Facility Services
At the Data Center Jobs Board, we have two new job listings from ISS Facility Services, which is seeking a Critical Enviro/Data Center Chief Engineer in Clarksville, VA or Colorado Springs, CO.
The Critical Enviro/Data Center Chief Engineer is responsible for the Operations & Maintenance program of the data center as well as any non-data center space(s) assigned; performs supervisory functions common to a Critical Environment facility maintenance and utility plant operations organization; reviews and evaluates performance and ensures quality standards are being met; takes corrective action to resolve problems; and knows and understands the Owners’ building operations rules for the building(s) under his/her care and as provided by the Data Center (DC) Manager.
To view full details and apply, see Clarksville, VA job listing details.
To view full details and apply, see Colorado Springs, CO job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
6:27p
Eucalyptus: We’ll Be More Open Than Other Open Clouds
In a landscape with dueling open clouds, which is the most open? Cloud software specialist Eucalyptus sees pushing the boundaries of open clouds as an opportunity. This philosophy is driving changes at the company, which is sharpening its focus on the “open” roots of its academic origins. Eucalyptus has open sourced its courseware and training, making all those materials available for free.
“We’re extending our open model into professional services,” said CEO Marten Mickos. “Anyone can look at the source code, training material, documents that go around the code, everything. We realize that our competitors will look at it, but we’re happy to offer it to the world in order to better the product.”
Eucalyptus makes open source software for building private and hybrid clouds that are compatible with Amazon Web Services. It’s a niche that is increasingly competitive with the growth of OpenStack and CloudStack, which each have well-heeled patrons in Rackspace and Citrix, respectively.
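Because Eucalyptus exposes AWS-compatible APIs, standard AWS client libraries of that era can be pointed at a private Eucalyptus cloud simply by swapping the endpoint. The minimal sketch below uses the Python boto 2.x library; the host, port, path, credentials and image ID are placeholders, and the values for any real Eucalyptus installation will differ.

```python
import boto
from boto.ec2.regioninfo import RegionInfo

# Hypothetical endpoint: Eucalyptus 3.x typically served its EC2-compatible
# API on port 8773, but check your own cloud's host, port and path.
region = RegionInfo(name="eucalyptus", endpoint="ec2.cloud.example.com")
conn = boto.connect_ec2(
    aws_access_key_id="YOUR-ACCESS-KEY",        # placeholder credentials
    aws_secret_access_key="YOUR-SECRET-KEY",
    is_secure=False,
    port=8773,
    path="/services/Eucalyptus",
    region=region,
)

# The same boto calls used against AWS work here: list images, launch an instance.
images = conn.get_all_images()
print("images available:", len(images))

reservation = conn.run_instances("emi-12345678", instance_type="m1.small")
print("launched:", reservation.instances[0].id)
```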
Mickos sees the company’s new direction as a positive step. He says the beginning of 2012 was a little shaky as the Eucalyptus team invested significant time in new features for High Availability. But the company also tripled its engineering team and added a graphical user interface, a big step forward in making Eucalyptus user-friendly.
Shakeup in the Ranks
There’s been a bit of a shakeup in its ranks of late. Eucalyptus co-founder Rich Wolski will return to his academic roots, spending more time at University of California Santa Barbara. Red Hat veteran Said Ziouani, who was brought on to lead sales, has left the company. SVP of marketing David Butler departed last fall.
Why the decision to open source courseware and training documents? “There are a number of reasons we are making this shift, but the most important one is culture,” wrote Jason Eden, Senior Director of Professional Services at Eucalyptus, in a recent blog post. “If we truly are an open source company, does it make sense for us to develop closed-source intellectual property, tightly control access to that information, and use it primarily as a way to drive direct business unit revenue? There are certainly other very successful open-source software companies that do just that.
“In the end, we decided that if we were going to be an open-source company, we were going to go all-in,” Eden continued. “2013 will be the year of the Eucalyptus PS Experiment, where we challenge conventional wisdom about the value and place of professional services in a software company like Eucalyptus.”
Mickos: Revenue Model Still Works
While making Eucalyptus more open and easier to use has obvious benefits for users, the question remains: is there enough demand for its remaining high-touch support services?
“We do not see a problem in making money,” said Mickos. “Customers come and say ‘we need your support.’ They need day to day care, and they buy a subscription.”
Rival OpenStack continues its rise with public cloud providers and big corporations, while Eucalyptus is doing well with enterprises that use Amazon Web Services. “We are the complement to the leading public clouds,” said Mickos. “Amazon has 80 to 90 percent of the market. Not all Amazon customers will need us. Some are small or just looking for convenience, and some are so extreme that they’re not our audience. In the middle there’s a very attractive market. We very much have a home.”
Mickos believes that a focus on usability and the mid-market is the way forward for Eucalyptus in the open cloud ecosystem.
“One distinction between us and OpenStack and CloudStack is that we take a product approach,” said Mickos. “If you need 10 people to get an OpenStack cloud going, you need 2 to get a Eucalyptus cloud going. Cloud computing on premise is indeed very advanced, but there is a way to package it and deploy quickly. Eucalyptus is as straightforward as possible.”
Who is Running Clouds?
Mickos says Eucalyptus is seeing clear trends in its customers’ motivations for adopting cloud. “On the business side – if you’re thinking about a customer, why do customers run private clouds?” he said. “What’s the benefit they get? The first reason they deploy Eucalyptus is agility. They can deploy applications faster. We had a customer that used to do 4 tests a month (because of infrastructure constraints). Now they run tests 4,000 times a month. It allows them to spin up new tests and kill them.”
The second benefit is improvement in manageability. “They calculate the cost for servers per month, and it’s half (with Eucalyptus)” compared to traditional deployments, said Mickos. “The manpower needed is much lower. They don’t need 24×7 support, because they can just spin up if something goes down.”
The final and most meaningful long-term benefit is higher utilization. An example Mickos cited is Nokia Siemens, which he says has tripled its utilization thanks to Eucalyptus. “With virtualization, with one server they can now do what they used to do with three,” he said.
The company says it will continue to move toward openness and ease of use. Eucalyptus 3.3 (targeted for release in the second quarter of 2013) is focused on providing additional AWS-compatible services to enable scalable web application use cases. The three biggest new features and improvements under development for 3.3 are elastic load balancing, auto-scaling, and CloudWatch, a cloud resource and application monitoring service. Other features under development include resource tagging, expanded instance types, VMware vSphere 5.1 support, NetApp SAN storage adapter improvements, and the ability to perform maintenance on a node controller without interrupting applications or services running on the cloud.
6:45p
Inside the Cobalt Cheyenne Data Center
Photo caption: Some of the cabinets in the data hall at the new facility. Cooling is supplied by air-cooled CRAC units with integrated air-side economizers for maximum efficiency and flexibility.
Cobalt Data Centers has opened the doors on the first of two planned Tier 3-compliant data centers in Las Vegas. The company held a grand opening ceremony last week. In our photo feature, Closer Look: Cobalt Cheyenne Data Center, we provide a look inside the new data center.