Data Center Knowledge | News and analysis for the data center industry
Monday, March 23rd, 2015
| Time | Event |
| 10:00a |
Couchbase Rethinks NoSQL Database Architecture
Couchbase CEO Bob Wiederhold believes the company has achieved a breakthrough in the NoSQL database’s architecture. The upcoming Couchbase 4.0 changes the architecture so that three very different types of database tasks can run on separate sets of servers optimized for those tasks, rather than using the same set of servers (a single node) to accomplish all three core tasks: read/write, indexing, and querying.
The big benefit is much better performance for indexing and queries, making Couchbase better suited to large-scale, mission-critical deployments. Couchbase, and NoSQL in general, has been strong at read/write tasks, data storage, and processing, but when all three core types of database tasks run on a single node, either the node has to be extremely powerful or two of the three task types suffer reduced performance.
Each of the three types of database tasks places different demands on the underlying servers: read/write is memory-oriented, indexing is disk-intensive, and querying needs a lot of CPU horsepower.
“Enterprises are faced with a broad range of data processing requirements, for which they have traditionally relied on extending the relational model and, more recently, combined a variety of specialist NoSQL databases,” Matt Aslett, research director at 451 Research, said in an email. “Our research suggests that enterprises are making strategic investments in more agile, multi-model databases that serve a variety of needs. Couchbase’s multi-dimensional scaling appears to be an innovative, flexible approach to supporting a wider range of data processing workloads, and the ability to scale multiple workloads independently increases the likelihood that customers will use the database to support multiple workloads simultaneously.”
“Multi-dimensional scaling” is what Couchbase calls this capability. Wiederhold said it doesn’t have to be configured in advance.
“During runtime you can make changes in how you configure your system,” he said. “It’s possible to configure workloads on the fly.”
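To make the separation concrete, below is a minimal sketch of how dedicated data, index, and query nodes might be added to a cluster over the Couchbase admin REST API. The endpoint path, the service identifiers (kv for data, index, and n1ql for query), and all hosts and credentials are assumptions drawn from the publicly documented 4.x REST API, not details from the article.

```python
# Hedged sketch: dedicating Couchbase nodes to single services so that
# memory-bound read/write, disk-bound indexing, and CPU-bound querying
# can be scaled independently ("multi-dimensional scaling").
# Hosts, credentials, and the exact endpoint/parameter names are
# illustrative assumptions, not details taken from the article.
import requests

CLUSTER = "http://10.0.0.1:8091"          # hypothetical cluster-admin node
AUTH = ("Administrator", "password")      # hypothetical credentials

# One service per node: kv = data (read/write), index, n1ql = query.
new_nodes = {
    "10.0.0.2": "kv",      # memory-optimized server
    "10.0.0.3": "index",   # disk-optimized server
    "10.0.0.4": "n1ql",    # CPU-optimized server
}

for host, service in new_nodes.items():
    resp = requests.post(
        CLUSTER + "/controller/addNode",
        auth=AUTH,
        data={
            "hostname": host,
            "user": AUTH[0],
            "password": AUTH[1],
            "services": service,
        },
    )
    resp.raise_for_status()

# A rebalance (via /controller/rebalance or the web console) is still
# required before the new topology takes effect.
```

Because service assignment happens per node when nodes are added and rebalanced, capacity for any one workload can be grown without touching the others, which is the runtime flexibility Wiederhold describes.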
NoSQL adoption is occurring in three phases, according to Wiederhold. The first was grassroots adoption by developers and open source-friendly companies. Developers were experimenting with the technology and bringing it into companies for small projects. Wiederhold said the criterion for success in this phase was ease of development, and MongoDB was particularly strong here.
The second phase began around 2013, when enterprises came to believe NoSQL had evolved to the point where it was suitable for more mission-critical use cases, and operations teams became more involved in decisions. “It was a time of deep strategic evaluations. This is when Couchbase took off. We have very strong scalability and performance, differentiation against MongoDB.”
Beginning early next year, Wiederhold believes, the third phase will be in full gear: wide deployment of NoSQL. “Strategic evaluations are done, the technology is considered stable enough to be deployed widely. This is being made in the context of a much bigger replatforming that is taking place. Major parts of the infrastructure software stack are changing right now.”
Enterprise IT is undergoing an architectural evolution, with the move toward cloud shaping the next generation of infrastructure. Web-scale applications, mobile apps, and the Internet of Things all depend on doing things a little differently in terms of databases, and NoSQL is better suited to this world than relational databases.
“Relational databases have added features that appear to be similar to NoSQL, but they’re missing the point,” said Wiederhold. “They’re fundamentally different architectures and trade-offs.”
The 4.0 release will be out this summer and will include other enhancements, but Couchbase is currently focused on getting the word out about the new multi-dimensional scaling capability.
The company has other projects in the works as well. One example is N1QL (pronounced “Nickel”), a query language designed for ad-hoc, SQL-like querying of the JSON data in its NoSQL database; a brief illustrative query appears below.
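For a rough sense of what an ad-hoc N1QL query looks like from application code, here is a minimal sketch using the Couchbase Python SDK. The connection string, bucket name, document fields, and parameter value are hypothetical, and the N1QLQuery/n1ql_query calls reflect the 2.x Python client as an assumption rather than anything described in the article.

```python
# Minimal, hedged sketch of an ad-hoc N1QL ("Nickel") query from Python.
# Bucket name, fields, and hosts are placeholders; the SDK calls follow
# the Couchbase Python client 2.x as an assumption.
from couchbase.bucket import Bucket
from couchbase.n1ql import N1QLQuery

# Point at a node running the query service (see the scaling sketch above).
bucket = Bucket("couchbase://10.0.0.4/travel")

# SQL-like query over JSON documents, with a positional parameter instead
# of string concatenation.
query = N1QLQuery(
    "SELECT name, iata FROM `travel` WHERE type = 'airline' AND country = $1",
    "United States",
)

for row in bucket.n1ql_query(query):
    print(row["name"], row["iata"])
```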
| | 3:30p |
CFOs and the Many Flavors of Cloud
Rob Levine is responsible for developing CentriLogic’s global financial policies, including overseeing the organization’s international facilities acquisitions, procurement, logistics, and company acquisitions.
Three momentous trends have dominated IT planning in the past few years: cloud computing, mobile devices, and the consumerization of IT. Employees everywhere have taken charge, using personal devices and apps at work and provisioning cloud services without input from the IT department.
The emergence of Infrastructure-as-a-Service (IaaS) public cloud providers and hundreds of SaaS applications has indeed brought innovation and time-to-market benefits, yet without oversight, adoption of these technologies can backfire quickly. Pretty soon, a company is overspending, using multiple services for the same purpose, and exposing itself to data loss, security breaches, and integration issues.
Fundamental Issues to Consider
This is where the CFO comes into the game. Beyond business applications, CFOs need to understand the quickly changing world of IT infrastructure and outsourcing. The more CFOs know about cloud computing and hosting options, the more they can influence IT decisions and help the CIO avoid a scenario of integration chaos and waste. Here’s a rundown of the fundamental issues for CFOs to consider when they are involved in decisions on cloud and IT hosting:
1. Defining the business problem or need. IT professionals get excited about experimentation, but any decision ultimately rides on where the business needs to go and what the immediate and future requirements are. This often comes down to an application-level decision.
For example, when IT wants to use a hyper-scale public cloud provider for testing or development, it can save money and time compared with building out servers internally or signing an annual data center contract for a short-term project. But when the business is determining how to manage a core application that will store sensitive customer data or other proprietary information, a more secure, controlled environment with a private cloud or managed hosting provider may be the better business option, even if it means spending more.
2. Vendor options and cloud models. Most major IT infrastructure vendors say they do cloud, but make sure you understand their definition of that. Here’s what the CFO needs to know:
- Public cloud providers offer an on-demand, fully outsourced, pay-as-you-go model. Typically, there are no contracts involved, and you can spin resources up or down whenever needed. Support is limited, and security options may not be adequate for highly sensitive data or rigid compliance needs. Such services are quick to set up and popular with various business units, but using them comes with risks. For instance, many public cloud providers cannot guarantee geographic data residency and do not make failover and redundancy easy to accomplish.
- Private cloud refers to an outsourced or on-premises option in which servers are dedicated to your business – unlike shared public cloud resources – and which offers a higher degree of control for data protection and management. Private cloud offerings are typically more expensive than public cloud.
- A third option is a hybrid cloud setup, in which a company uses both public cloud and private cloud providers and technologies, a common option for large companies with applications that have vastly differing requirements. The downside is that companies need specific methods and tools for managing and coordinating multiple environments.
- Finally, a more traditional route is to colocate existing infrastructure and contract managed hosting services from IT vendors. In these arrangements, the company owns all the equipment but benefits from not having to manage it in its own data center. The company also gains 24x7 support, a fully redundant environment, custom development, and the comfort of a dedicated account manager. Note, however, that managed services can come with a Cadillac price.
3. Cost structures. When cloud computing went mass-market in 2011 and 2012, the discussions were slanted heavily toward saving money and launching time-sensitive projects without procurement delays. Yet hosting decisions have since shifted away from a pure cost evaluation. CFOs must balance security and compliance needs (such as the requirement to store certain data sets within the country of origin) and IT organization realities (do we have the right staff to set up and manage a public cloud infrastructure?) with cost and time to market.
By choosing one provider to handle multiple application needs – some providers can do it all – companies may benefit from economies of scale, third-party expertise and simpler vendor relationships. Yet there may be compelling reasons to use multiple providers: an IT staff with expertise in a particular public cloud can make a solid case for using that provider to run marketing and big data projects, while operational systems remain in-house or are outsourced to a private cloud provider.
Understand what compromises your company might be making by choosing the cheapest solution. Let’s say the CIO of a large financial institution is managing a $10 million IT budget. Perhaps only 2 percent of that budget relates to compliance, yet if that piece is handled incorrectly, the financial risk to the business could run to hundreds of millions of dollars from lost customer confidence, customer defections, and the legal and technical cost of cleaning up a data breach.
4. Working with the CIO. The relationship between the CFO and the CIO is more important than ever. The friction is natural: CIOs look at a hosting decision from a technological standpoint, whereas the CFO wears the money and risk-management hat. Marrying these two perspectives may sound like nirvana, but it is not impossible. It starts with the CFO’s level of interest and motivation to learn about the nuances of technologies and hosting options and to gain a deep understanding of the CIO’s perspective. By doing so, the entire business can benefit from a decision that balances cost, risk, business benefits and IT innovation. Infrastructure is everything to the modern business – and the CFO is smack in the middle of making these critical decisions.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library. | | 6:23p |
EdgeConneX Opening Three Florida Edge Data Centers
EdgeConneX recently announced it is building three new “edge data centers” in Tallahassee, Jacksonville, and Miami, Florida, all set to go live at the end of the second quarter.
EdgeConneX specializes in serving content providers and content delivery networks in underserved metro areas often overlooked by internet infrastructure companies. Its data centers host bandwidth-intensive, latency-sensitive content and applications at the edge of the network. The company has announced plans to add 10 new data centers this year but has been quiet about details.
Each edge data center is power-dense, scalable to more than 3.2 megawatts, and capable of delivering 20-plus kilowatts per rack. The company also offers wireless solutions, which is where its roots were before it expanded to an edge colocation focus.
The three Florida data centers are all in major population hubs, with one in the south of the state, another in the northeast, and the third in the Panhandle further north.
Tallahassee is the most overlooked metro of the trio. The Florida Panhandle has suffered several direct hits from hurricanes, which makes it less conducive to data center activity. However, Tallahassee is a fast-growing metro area in need of locally served content.
Miami is a major data center hub that also serves Latin America, and Jacksonville is a growing interconnection spot. Allied Fiber recently completed a Miami-to-Jacksonville fiber route; Cologix acquired Jacksonville player Colo5 last year. Data center provider Peak 10 also has a presence there.
“Florida’s information technology sector continues to grow, and companies like EdgeConneX are helping to diversify that sector,” Enterprise Florida President and CEO Bill Johnson said in a press release. “These data centers in Tallahassee, Jacksonville, and Miami will create important capital investments in those regions and further strengthen Florida’s position as a business super-state. Florida is the third-largest state for high-tech establishments, and nearly 250,000 Floridians are employed in the technology sector today.”
Clint Heiden, chief commercial officer at EdgeConneX, added: “The domestic demand for content has outgrown the internet infrastructure that exists today. EdgeConneX is rapidly building infrastructure in new markets to bring the internet local across the country, creating the Internet of Everywhere.”
Heiden also said the internet is served from one of nine peering points across the U.S. today, a setup that works fine for email but not for increasing content consumption in metros.
In October 2014, more than 72 million Americans streamed 7.1 billion minutes of sports content over smartphones, in addition to close to 80 million who streamed 8.6 billion minutes of content, according to Nielsen. Smartphone usage rose from 30 percent in 2010 to 75 percent by the end of 2014. | | 6:43p |
Indian Telecom Bharti Airtel Offers AWS Direct Connect 
This article originally appeared at The WHIR
Indian telecom Bharti Airtel announced on Monday that it has joined the AWS Partner Network and will offer AWS Direct Connect for customers to establish a dedicated network connection between on-premises infrastructure and AWS services.
Customers can establish a dedicated network connection between their network and one of the AWS Direct Connect locations. The connection can be partitioned into multiple virtual interfaces, according to AWS.
“Today, we are seeing more and more organizations embrace the benefits of hybrid network architectures and on-premise environments across the globe,” Bharti Airtel CEO Ajay Chitkara said. “In line with this market adoption, we are excited to strengthen Airtel’s cloud services portfolio by adding AWS to our growing list of cloud services providers. We are confident that this will help our global customers truly leverage the benefits of cloud, and further Airtel’s long-term commitment towards delivering the best technological capabilities for its customers.”
AWS Direct Connect reduces bandwidth costs and offers consistent network performance. The service provides 1 Gbps and 10 Gbps connections, and users can provision multiple connections if more capacity is required.
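For a sense of how such a link is provisioned programmatically, here is a minimal, hedged sketch using the boto3 "directconnect" client: it orders a 1 Gbps port and carves a private virtual interface out of it. The location code, connection name, VLAN, ASN, region, and virtual gateway ID are placeholder values, not details from the announcement.

```python
# Hedged sketch: ordering an AWS Direct Connect connection and creating a
# private virtual interface with boto3. All identifiers below (location
# code, names, VLAN, ASN, gateway ID) are placeholders for illustration.
import boto3

dx = boto3.client("directconnect", region_name="ap-south-1")  # placeholder region

# Request a dedicated 1 Gbps port at a Direct Connect location
# (10 Gbps is the other port speed the service offers).
connection = dx.create_connection(
    location="EXAMPLE1",                 # placeholder location code
    bandwidth="1Gbps",
    connectionName="airtel-onprem-link",
)

# Partition the physical connection into a private virtual interface that
# terminates on a VPC's virtual private gateway.
vif = dx.create_private_virtual_interface(
    connectionId=connection["connectionId"],
    newPrivateVirtualInterface={
        "virtualInterfaceName": "onprem-to-vpc",
        "vlan": 101,                        # placeholder VLAN tag
        "asn": 65000,                       # customer-side BGP ASN (private range)
        "virtualGatewayId": "vgw-0example", # placeholder VPC gateway
    },
)
print(vif["virtualInterfaceState"])
```

In practice the connection still has to be confirmed and cross-connected at the Direct Connect facility before the virtual interface comes up; the sketch only shows the API surface.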
“We are excited to be working with Airtel to bring the security and reliability of AWS Direct Connect to Amazon Web Services Inc. (AWS Inc.) customers across India,” said Bikram Singh Bedi, head of Amazon Web Services India. “By utilizing AWS Direct Connect, AWS Inc. customers are able to reduce network costs, increase bandwidth throughput and provide a more consistent network experience, helping Indian businesses of all sizes to rapidly expand their organizations.”
Airtel offers a number of cloud services to its customers in India, including Office 365.
This article originally appeared at http://www.thewhir.com/web-hosting-news/indian-telecom-bharti-airtel-offers-aws-direct-connect | | 7:15p |
Cisco Exec Leaves to Head Dell’s Data Center Tech Group
Paul Pérez, who until recently served on the leadership team for Cisco’s server products, will lead the Dell data center technologies business, Dell said Monday.
In the same announcement, the company also said Rory Read, former CEO of chipmaker AMD, has joined Dell as chief operating officer and president of worldwide commercial sales. Read left AMD last year, with his role taken over by the company’s then chief operating officer, Lisa Su.
At Cisco, Pérez was both vice president and general manager for the group responsible for the Silicon Valley IT giant’s Unified Computing System server line and CTO of its data center business group. He oversaw Cisco’s tech strategy for converged infrastructure, virtualization, and private-cloud automation.
While Pérez had been at Cisco for the past three and a half years, he is a long-time HP man. He spent close to thirty years working for the Palo Alto-based Silicon Valley legend, starting in the 80s as a designer working on the PA-RISC chip architecture (HP’s RISC processor line, later phased out in favor of Itanium) and eventually becoming chief technologist, first for HP StorageWorks and then for industry-standard servers and software.
As CTO of Dell’s enterprise solutions group, Pérez will help drive the company’s long-term enterprise-tech strategy. Both he and Read will report to the company’s chief commercial officer Marius Haas.
Read served as AMD CEO for more than three years before announcing his resignation in October 2014. He was president and chief operating officer for Lenovo for five years prior to joining AMD. Before that, he spent more than 20 years working for IBM.
Dell was the number-three server vendor in the world by revenue in 2014, according to IDC. HP was first, and IBM was second. Cisco was fourth, behind Dell. | | 8:40p |
Penn State Building $60M University Data Center
Penn State University has finalized plans for a second data center at its University Park campus. The projected total budget for the project is $58 million, with funding to come primarily from borrowing and from Hershey Medical Center and College of Medicine reserves.
Many individual servers across the campus will be consolidated into the new university data center. The school needed additional secure capacity for operational and administrative needs and said the data center will help it remain on the cutting edge of “cyberscience” by helping attract researchers, teachers, and students.
The new data center will have a 1.75-megawatt initial load, with an option to add another megawatt in support of the initial footprint. Future expansion to 8 megawatts is possible.
The data center joins another under construction at the Penn State Hershey Medical Center, where work began this year.
“The increased need for a secure and robust data center at University Park arose from the expanded use of technology to provide the best education and research possible,” Nicholas Jones, executive vice president and provost, told Penn State News.
The data center will have hot- and cold-aisle containment. The university evaluated a variety of cooling options and will employ a three-stage process: the first stage will use heat exchangers to circulate indoor air; the second will spray water on the heat exchangers to reduce the temperature of the incoming air; and the third will use mechanical cooling with compressors as required.
Other recent university data center projects include a modular data center by CommScope at the University of Montana. We recently looked at Florida Polytechnic University’s data center design for supercomputing. Emerson Network Power is building a research and development center on the University of Dayton campus in Ohio. |