Data Center Knowledge | News and analysis for the data center industry
Thursday, June 9th, 2016
3:50p | Meet Breogan, a Supercomputer Built to Disrupt Mexico's Market (Bloomberg)
On a gated residential street about an hour's drive south of Mexico City's main business district lives Breogan, a $350,000 computer that Alberto Alonso built to shake up the nation's stock market.
Alonso, 32, created the machine and the software it runs with a squad of eggheads including a trained atmospheric physicist and a robotics specialist. The algorithm the device uses automatically buys and sells shares when it spots an alluring trade, aiming to disrupt a market where business is still often done with a phone rather than a computer.
"Mexico still isn't a place for geeks, but give me a chance," Alonso, whose jeans and untucked shirt gave him a look closer to a grad student than a Wall Street trader, said in an interview.
Alonso is seeking to transform a market that trails the rest of the world when it comes to computer-driven stock trading. While the Bolsa Mexicana de Valores has actively courted algorithmic traders, the bourse is at least a decade behind the US in terms of automation, according to Larry Tabb, the founder of research firm Tabb Group.
Alonso, who is operating his computer in test mode with a limited number of stocks, says that a quirk in Mexico’s market means he’ll be able to mint money with his computer once he has it fully running. About a quarter of local trading volume stems from companies that also trade in other countries — including household names like Apple and Facebook — opening up an arbitrage opportunity for computers fast enough to spot discrepancies in stock prices denominated in pesos and dollars. In theory, any gap is a chance to make money on a bet the differential will narrow.
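To make the mechanics concrete, here is a minimal Python sketch of the kind of cross-listing check such a system might run. It is an illustration only, not Alonso's actual algorithm: the quotes and the cost threshold are invented, and only the 18.247 pesos-per-dollar rate comes from the article.

```python
# Hypothetical sketch of a dual-listing arbitrage check. All prices and the
# threshold are invented for illustration; this is not Alonso's algorithm.

def cross_listing_spread(px_mxn: float, px_usd: float, usdmxn: float) -> float:
    """Relative gap between the local peso quote and the dollar quote
    converted at the current exchange rate."""
    implied_mxn = px_usd * usdmxn              # dollar listing expressed in pesos
    return (px_mxn - implied_mxn) / implied_mxn

local_quote_mxn = 742.50    # hypothetical SIC price, in pesos
foreign_quote_usd = 40.25   # hypothetical US price for the same share, in dollars
fx_usdmxn = 18.247          # pesos per dollar (rate cited in the article)

spread = cross_listing_spread(local_quote_mxn, foreign_quote_usd, fx_usdmxn)

COST_THRESHOLD = 0.003      # placeholder for fees, slippage, and FX risk
if abs(spread) > COST_THRESHOLD:
    side = "sell local / buy foreign" if spread > 0 else "buy local / sell foreign"
    print(f"gap {spread:+.2%} -> {side}")
else:
    print(f"gap {spread:+.2%} -> no trade")
```

In practice the bet is that the gap narrows again, so the real work lies in execution speed and in modeling costs well enough to know which gaps are worth acting on.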
The trades become even more attractive in moments of high currency volatility, according to Alfredo Guillen, the chief operating officer of equities markets at Mexico’s stock exchange operator.
“The more volatility there is in the exchange rate, the more trading there is,” Guillen said. A gauge of swings in the peso surged to a four-year high in April. Mexico’s currency fell 0.7 percent to 18.247 per dollar at 10:50 a.m. in New York on Thursday. The nation’s benchmark IPC stock index declined 0.6 percent.
That corner of the market where global securities trade in Mexico is the Sistema Internacional de Cotizaciones, known as the SIC. While US-based investors participate, the platform allows Mexicans to buy global securities without opening an international account. The system has been around since 2003.
Using Breogan (pronounced “bray-oh-GAHN”) and algorithms to stimulate trading volume in the SIC beyond the most popular names would have a significant impact on the market, according to Irving Cortes, a friend of Alonso who used to work with him at the derivative-focused financial firm DerFin in Mexico City and still collaborates with him in his current role running CM Derivados.
“It’s an enormous opportunity,” he said, citing “a large quantity of old-school traders” in Mexico.
Alonso isn’t alone. Virtu Financial, a New York-based algorithmic firm that’s one of the top traders in the world, has served as a market maker on the SIC for years, according to a person familiar with the matter, who asked not to be identified citing a lack of authorization to comment publicly.
The number of stocks and exchange-traded funds listed on the SIC jumped 47 percent to 1,101 between 2011 and 2015, according to the exchange. Still, Alonso insists SIC trading is concentrated in too few companies. His strategy could broaden the action to more stocks, spurring more volume, he says. Providing liquidity for rarely traded shares by acting as a market maker on the SIC could bring in $100 million a year in revenue in 2017, he estimates.
“There’s an opportunity,” said Jorge Alegria, who previously served as head of market operations and derivatives for the Bolsa Mexicana de Valores and now works as a consultant on financial market structure in the US, Latin America and Asia. “All of this development, investigation and above all innovation is great for the Mexican market.”
The “innovation” in this case is Breogan. The machine, which Alonso says is like having 500 people watching the Mexican market for arbitrage opportunities at once, automatically locks in buy and sell orders. The system needs less than 70 milliseconds to execute a trade once it identifies the potential for profit.
Since late February, Breogan has brought in about 8.6 million pesos ($475,000) in revenue for Alonso’s firm, known as GACS. He has been ramping up the system slowly, only running the algorithm on about one-sixth of the names available in the SIC. He plans to invest another $50,000 in Breogan to boost processing power.
Much like his hand-picked team, Alonso’s metaphors extend beyond those typical of a trader. On his desk next to Breogan — which is named for a mythical Galician king — Alonso has a Magic: The Gathering card featuring Jace, the Mind Sculptor.
“He’s a Planeswalker,” Alonso said, explaining how Jace exists between two places, just like Breogan has feet in both Mexico and the US.
"Here we are scientists first, then business people," Alonso said in an interview while puffing Pall Mall XLs and sipping coffee from a Computer History Museum mug.
5:01p | How Server Power Supplies are Wasting Your Money
Overprovisioning, viewed in data center design and management as something between a best practice and a necessary evil, is built into the industry's collective psyche because the industry's core mission is to maintain uptime at all costs. If a data center team spends more than it really has to, it needs to improve efficiency; but if a data center goes down, somebody has failed to do their job.
Data center managers and designers overprovision everything from servers to facility power and cooling capacity. More often than not, they do it just in case demand unexpectedly spikes beyond the capacity they expect to need most of the time. The practice is common because few data center operators have made it a priority to measure and analyze actual demand over time. Without reliable historical usage data, overprovisioning is the only way to ensure you don't get caught off guard.
Chief Suspect: the Server Power Supply
Today, however, more and more tools are available that can help you extract that data and put it to use. The key is figuring out what to measure. Should it be CPU utilization? Kilowatts per rack? Temperature?
The best answer is all of the above and then some, but one data center management software startup suggests server power supplies are a good place to start. The company, named Coolan, recently measured power consumption by a cluster of servers at a customer’s data center and found a vast discrepancy between the amount of power the servers consumed and the amount of power their power supplies were rated for.
It's the latter number – so-called nameplate power – that is used in capacity planning to figure out how much power and cooling a facility will need. Overprovisioning at this basic component level leads to overprovisioning at the level of the entire facility: companies buy higher-capacity transformers, UPS systems, chillers, and so on than they will need.
The common practice in data center power provisioning is to assume that each system will operate at 80 percent of the maximum power its power supply is rated for, according to Coolan. Few systems ever run close to that once deployed.
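As a rough illustration of that planning math, here is a minimal sketch comparing capacity provisioned under the 80-percent-of-nameplate rule with measured draw. The wattages are hypothetical assumptions; only the 1,600-node cluster size comes from the case study below.

```python
# Minimal sketch of the provisioning gap described above. NAMEPLATE_W and
# MEASURED_W are invented; SERVERS matches the cluster size in the case study.

SERVERS = 1600
NAMEPLATE_W = 750        # assumed per-server power supply rating, watts
PLANNING_FACTOR = 0.80   # the 80%-of-nameplate planning assumption
MEASURED_W = 260         # assumed measured average draw per server, watts

provisioned_kw = SERVERS * NAMEPLATE_W * PLANNING_FACTOR / 1000
actual_kw = SERVERS * MEASURED_W / 1000

print(f"planned for : {provisioned_kw:,.0f} kW")
print(f"measured    : {actual_kw:,.0f} kW")
print(f"stranded    : {provisioned_kw - actual_kw:,.0f} kW "
      f"({1 - actual_kw / provisioned_kw:.0%} of planned capacity)")
```

Every kilowatt in that gap cascades upstream into larger transformers, UPS systems, and chillers than the load will ever require.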
A Case in Point
The customer, whom the startup could not name due to confidentiality agreements, is a cloud service provider, and the cluster that was analyzed consists of a diverse group of 1,600 nodes, including a range of HP and Dell servers. No single model accounts for even 500 of the systems.
More than 60 percent of the boxes in the cluster consumed about 35 percent of their power supplies’ nameplate power; the rest consumed under 20 percent. Considering that the sweet spot for maximum power-supply efficiency is between 40 and 80 percent, according to Coolan (see chart below), every single server in the cluster runs inefficiently and the facility infrastructure that supports it is grossly overprovisioned.
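To show how such a fleet breaks down, here is a hedged sketch that buckets servers by power-supply utilization (measured draw divided by nameplate) into the bands discussed above. The per-server figures are synthetic stand-ins shaped roughly like the distribution Coolan describes, not its raw data.

```python
# Bucket servers into power-supply utilization bands. The fleet below is a
# synthetic stand-in roughly shaped like the case study, not Coolan's data.
from collections import Counter

def psu_band(utilization: float) -> str:
    """Label a server by where its PSU utilization falls."""
    if utilization < 0.20:
        return "under 20% (inefficient)"
    if utilization < 0.40:
        return "20-40% (below the sweet spot)"
    if utilization <= 0.80:
        return "40-80% (efficient range)"
    return "over 80% (little headroom)"

# Roughly 60% of nodes near 35% of nameplate, the rest under 20%.
fleet_utilization = [0.35] * 1000 + [0.15] * 600

counts = Counter(psu_band(u) for u in fleet_utilization)
for band, n in sorted(counts.items()):
    print(f"{band:30s} {n:5d} servers ({n / len(fleet_utilization):.0%})")
```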
Because the customer is a large software developer, Coolan says it’s safe to assume many cloud-based service providers are in the same situation, overpaying for underutilized infrastructure.

Power supply utilization efficiency curve (Credit: Coolan)
How to Narrow the Gap?
So what are the action items here? Amir Michael, Coolan's co-founder and CEO, says ultimately the answer is anything that narrows the gap between the workload on every server and the power capacity supplied to it. The gap can be narrowed on the side of the workload itself, by loading the server more to get it into that efficient utilization range, or on the side of the power supply: in a new deployment, select lower-capacity power supplies (which are also cheaper); in an existing deployment, take a look at the way power-supply redundancy is configured.
Making adjustments on the power-supply side is often much easier than increasing server workload. "It's a challenge for them to actually load the boxes, and there are lots of companies trying to solve that problem," Michael said. Docker containers and server virtualization are the most straightforward ways to do it: the more VMs or containers run on a single server, the higher its overall workload. But it's not always that simple.
Changing the way redundant power supplies on a server are configured is much lower-hanging fruit. Two redundant supplies can be set up to share the load equally, which makes it very difficult to get either of them close to the optimal operating range. But you can change the configuration to have one power supply serve the entire load while the other is on hot standby. That will at least get you closer to an optimal utilization rate, Michael explained, adding that data center managers are seldom aware that they have the choice to change this configuration.
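The arithmetic behind that change is simple, and the minimal sketch below illustrates it with assumed wattages: with two supplies sharing the load, each runs at half the utilization a single active supply would see with its partner on hot standby.

```python
# Effect of redundant power-supply configuration on per-PSU utilization.
# LOAD_W and NAMEPLATE_W are assumed values for illustration.

def psu_utilization(load_w: float, nameplate_w: float, active_psus: int) -> float:
    """Utilization of each active supply for a given steady server load."""
    return load_w / (nameplate_w * active_psus)

LOAD_W = 300        # assumed steady-state server draw, watts
NAMEPLATE_W = 750   # assumed rating of each of the two redundant supplies

shared = psu_utilization(LOAD_W, NAMEPLATE_W, active_psus=2)   # load sharing
standby = psu_utilization(LOAD_W, NAMEPLATE_W, active_psus=1)  # one on hot standby

print(f"load sharing : each supply at {shared:.0%} of nameplate")
print(f"hot standby  : active supply at {standby:.0%} of nameplate")
```

With these numbers the active/standby configuration moves the working supply from 20 percent to 40 percent of nameplate, just inside the efficient range, without touching the workload.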
Wisdom of the Hyper Scale
Hyper-scale data center operators like Facebook and Google have been aware of the problem of underutilized power supplies for years. In a paper on data center power provisioning published in 2007, Google engineers highlighted a gap of 7 to 16 percent between achieved and theoretical peak power usage for groups of thousands of servers in the web giant's own data centers.
The data center rack Facebook designed and contributed to the Open Compute Project several years ago features a power shelf that's shared among servers. You can add or remove compute nodes, and increase or decrease power supply capacity independently of compute capacity, to get an optimal match.
Michael is deeply familiar with server design at Google and Facebook, as well as with OCP, which he co-founded. He spent years designing hardware at Google and later at Facebook.
Arm Yourself with Data
The best-case scenario is when a data center operator has spent some time tracking their servers' power usage and has a good idea of what they will need when they deploy their next cluster. Armed with real-life power usage data, they can select power supplies for that next cluster whose nameplate capacity is closer to actual use and design the supporting data center infrastructure accordingly. "Vendors have a whole host of power supplies you can choose for your system," Michael says.
Of course there’s little guarantee that your power demand per server will stay the same over time. As developers add more features, and as algorithms become more complex, server load per transaction or per user increases. “The load changes all the time,” Michael says. “People update their applications, they get more users.”
This is why it’s important to measure power consumption over periods of time that are long enough to expose patterns and trends. Having and using good data translates into real dollars and cents. Coolan estimated that the aforementioned customer would save $33,000 per year on energy costs alone, had the servers in its cluster operated within the power supplies’ efficient range. That’s not counting the money they would save by deploying lower-capacity electrical and mechanical equipment (assuming their cluster sits in their own data center and not in a colocation facility).
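For a sense of how an estimate like that can be built, here is a back-of-the-envelope sketch. The per-server draw, electricity price, and both efficiency figures are illustrative assumptions, not numbers from Coolan's study; only the 1,600-server cluster size comes from the article.

```python
# Rough estimate of the energy cost of running power supplies outside their
# efficient range. All inputs except SERVERS are assumptions for illustration.

SERVERS = 1600
IT_LOAD_W = 260           # assumed average DC-side draw per server, watts
HOURS_PER_YEAR = 8760
PRICE_PER_KWH = 0.10      # assumed electricity price, USD per kWh

EFF_BELOW_SWEET_SPOT = 0.85   # assumed PSU efficiency at low utilization
EFF_IN_SWEET_SPOT = 0.93      # assumed efficiency inside the 40-80% range

def annual_energy_cost(psu_efficiency: float) -> float:
    """Yearly cost of wall power for the cluster at a given PSU efficiency."""
    wall_power_kw = SERVERS * IT_LOAD_W / psu_efficiency / 1000
    return wall_power_kw * HOURS_PER_YEAR * PRICE_PER_KWH

savings = annual_energy_cost(EFF_BELOW_SWEET_SPOT) - annual_energy_cost(EFF_IN_SWEET_SPOT)
print(f"estimated annual saving from the efficient range: ${savings:,.0f}")
```

The point is not the exact figure but that a few percentage points of conversion efficiency, multiplied across a cluster and a year, add up to real money.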
Read more about Coolan's study in today's blog post on the company's site.
5:13p | Why Outsource Application and Database Management
John Hughes is Managing Partner for ManageForce Corporation.
Outsourcing is by no means a new idea, but outsourced database and application management have become more prominent in almost every type of industry. For the most part, that has to do with how today’s technology has broadened the possibilities and capabilities of remote management, making it much more flexible.
Executives still debate whether there are any tangible benefits to having a remote team handle certain responsibilities or tasks for a business. But the reality is that if you find the right people and identify the right areas to outsource, hiring a remote team will positively impact your bottom line. Outsourcing managed services can help reduce costs, tap into top-quality talent, and provide efficiency and flexibility, and, most importantly, it lets you and your team focus on what really matters in your business: the core tasks that drive meaningful growth.
1. Letting internal teams focus on core projects. The key to getting the most out of outsourced remote management is through finding the best balance of services across support areas, whether the need is for remote application management, database administration, or specialized functional support. Having a dedicated and reliable team will allow you to free internal resources to focus on more important projects.
Every business task falls into one of two categories: "chore" tasks or "core" tasks. "Chore" tasks encompass all the responsibilities that keep the business running, like managing infrastructure and maintaining optimal performance of databases. Though performing these tasks ensures that you can stay open for business, they don't distinguish your operations from the competition.
“Core” tasks are the responsibilities fundamental to business growth. They are the types of activities unique to your company that form part of the strategic plan to create a competitive advantage. As such, you would ideally keep these “core” areas internal, and as the main focus of your IT department.
So, if you can free your team from the “chore” support tasks by finding experts to manage those tasks, you will be able to dedicate more time and resources to “core” business projects.
2. Saving money by doing less and getting better results. Cost savings are usually the most tangible benefit of offloading "chore" tasks to focus on "core" tasks. When you find the right outsourced team for remote database and application management, you typically get more out of limited resources, such as on-demand access to senior-level experts, because you no longer have to spend the time or capital to find and hire that expertise yourself. The same is true of niche talent needed for a temporary service or task. Without the support of outsourced managed service providers, you would need to hire a consultant or add an employee to obtain this niche talent. But by right-sourcing in partnership with a managed service provider, you have access to the right person for the job whenever the need comes up.
Another way remote management helps cut costs is by providing game-changing technologies that would otherwise be an expensive capital expenditure to acquire. By tapping into remote managed services, your business can seamlessly tap into cloud database migration and support, virtualization of database environments, database server consolidation—just to name a few of the many other new features and tech that will become available over time.
3. Tapping into the best quality talent. When right-sourcing to a qualified database and application managed services provider, another big advantage is that you gain access to a full team of industry experts in every aspect of IT. For an IT team that is smaller or more limited in its resources (monitoring and management tools, consulting budgets, and so on), handling "chore" tasks internally is often too big a burden, which limits your ability to grow the team's expertise. But if you outsource these tasks to a qualified managed service provider, you can free your team to focus on more strategic activities, leverage the outsourced remote management team's expertise and specialized knowledge to find solutions and improvements for your unique needs, or both.
4. Around-the-clock flexibility results in more efficiency. A business needs to be prepared to encounter problems and inconveniences at all times. When it comes to technology, any setback can result in catastrophic consequences for the business. You need to be up and running to make money; nobody likes expensive downtime.
With remote management, you can provide IT support 24 hours a day, seven days a week, 365 days a year–no sick days, no vacations, no interruption. It doesn’t matter what time zone you are in; you can rest assured that an entire team dedicated to solving your problems will be available around-the-clock.
By taking the time to identify "chore" tasks that cost time and money but don't add growth to the business, an executive can make a huge positive impact on the bottom line. Not only will the business likely cut costs, but it will also have access to a large pool of industry experts, the most advanced technology, and around-the-clock service. The potential benefits are too great not to at least consider outsourced database management and remote application management, provided you can find the right team to collaborate with; doing so frees up your team to focus on strategic business planning and execution.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
5:52p | Fortinet Acquires AccelOps for $28M to Expand Security Fabric Strategy
By The VAR Guy
Fortinet has acquired AccelOps for $28 million in cash, effective immediately, according to an announcement made on Tuesday. The deal is expected to help Fortinet increase its ability to monitor and secure multi-vendor solutions via the company’s new Security Fabric strategy.
“The lack of a holistic view across organizations’ entire distributed, multi-vendor networks and the growing quantity and complexity of threat information create big-data security challenges,” said Ken Xie, founder, chairman and CEO of Fortinet, in a statement. “With the acquisition of AccelOps, Fortinet extends its Security Fabric to address these challenges by combining security and compliance monitoring with advanced analytics for multi-vendor security solutions, enabling automated and actionable security intelligence from IoT to the cloud.”
All of AccelOps' solutions, which now fall under the FortiSIEM moniker, will be incorporated into the Fortinet Security Fabric, according to the announcement. Fortinet also plans to integrate its FortiGuard Labs global threat intelligence and third-party feeds with AccelOps' next-generation security information and event management (SIEM) capabilities.
Additionally, AccelOps’ Security Operations Center and Network Operations Center capabilities will be utilized for Fortinet’s managed security service providers as well as the Fortinet Support Services. Fortinet also announced FortiCare 360 Support, a new subscription service that includes automated security and performance audits of customer infrastructure.
Fortinet could pay up to an additional $4 million in cash for AccelOps subject to the company’s future performance, according to Fortinet’s SEC filings.
“Our mission has always been to help our customers make security and compliance management as effortless and effective as possible,” said Partha Bhattacharya, founder and chief technology officer, AccelOps. “The synergies between AccelOps’s solutions and Fortinet’s Security Fabric vision and thought leadership will ensure that our customers are protected with the most scalable and proven global threat intelligence, security and performance analytics and compliance and control across all types of network environments with multiple security and networking vendor products.”
In other big acquisition news, SolarWinds purchased LOGICnow last week, giving the company access to more than 20,000 MSP partners worldwide. SolarWinds CEO Kevin B. Thompson said the acquisition will allow SolarWinds to offer the most complete range of IT solutions in the managed services provider (MSP) space.
This first ran at http://thevarguy.com/information-technology-merger-and-acquistion-news/fortinet-acquires-accelops-expand-security-fabric-
6:47p | Penton Technology Launches New Professional Education Platform
We're excited to share our new interactive Penton Technology Professional Education platform, which is designed to enrich the online education experience for technology professionals.
“Our new education platform allows tech professionals to enhance their careers, advance their skills and access critical education on-demand,” said Rod Trent, Education and Conferences Director for Penton Technology.
The contemporary site is easy to navigate on all mobile devices and desktops. Besides access to exclusive courses taught by seasoned pros, the platform features interactive material, including embedded videos, quizzes, photos and graphics. The courses have been designed to provide high-quality continuing education in a convenient and user-friendly format.
“We’re seeing significant growth in interest in our deep dive, peer-based training at both our conferences and our eLearning classes,” Trent added. “That’s why we are creating a professional development ecosystem that capitalizes on learning-focused content at our conferences, on our web sites and forums, and at new on-site learning events we’ll soon be rolling out across the country.”
To celebrate the debut of the new platform, built by Thought Industries, Penton Technology is discounting all on-demand classes by 20% through July 15. The expanded course offerings complement Penton Technology's onsite conference training.
This year IT/Dev Connections, which runs Oct. 10-13 at the ARIA Resort in Las Vegas, will present new on-site classes on various cloud services and Windows 10, and introduce on-site Microsoft certifications for the first time, including some that will not be available at Microsoft’s Ignite conference in September. There’s also an extensive array of new professional development workshops being offered at our upcoming Data Center World and HostingCon conferences.
We'd love to get your feedback on the new platform. Send it to Rod.Trent@penton.com.
9:58p | CME Chairman Threatens to Leave Chicago Data Center if Controversial Tax Passes
The executive chairman of CME Group, one of the world's largest exchange operators, is leaving no option off the table in his fight against a proposed tax on trades executed on exchanges in Illinois, up to and including an exit from the state altogether, which would mean moving CME's electronic trading infrastructure out of the data center in the Chicago suburbs that the company recently sold to CyrusOne.
“I would have no other chance but to move the business,” Terry Duffy, the executive chairman, told Bloomberg.
The proposal would impose a $1 or $2 tax on every trade. According to Duffy, it would force CME customers to stop buying and selling on the company's exchanges, including the Chicago Mercantile Exchange, because the deals would become uneconomic.
Other data centers outside of Illinois “would welcome me with open arms,” he said in an interview on Bloomberg Television Thursday.
The proposal is in early stages and “faces long odds of approval,” according to Bloomberg.
If the tax is passed and CME follows through with Duffy’s threats, its exit would hurt CyrusOne, which has been marketing the facility in Aurora, Illinois, to financial services companies, using physical proximity to CME’s trading engines as its core value proposition.
CME sold the facility to the Carrollton, Texas-based data center provider in a sale-leaseback transaction in March for $130 million and signed a 15-year lease for 72,000 square feet of data center space. Last month, CyrusOne announced major expansion plans at the site: a 500,000-square-foot building to make room for more customers, though it did not say when construction would start.