Data Center Knowledge | News and analysis for the data center industry
Thursday, February 6th, 2014
Data Center Jobs: Jones Lang LaSalle
At the Data Center Jobs Board, we have a new job listing from Jones Lang LaSalle, which is seeking an HR Coordinator in Roanoke, Texas.
The HR Coordinator is responsible for the day-to-day operation, maintenance and repair of systems and equipment that support a high availability data center. These systems include, but are not limited to, uninterruptible power supplies, backup electrical generators, fire suppression, EPO, leak detection, centrifugal chillers, cooling towers, pumping systems, automated electrical distribution systems, raised floor environments and monitoring systems. To view full details and apply, see job listing details.
Are you hiring for your data center? You can list your company’s job openings on the Data Center Jobs Board, and also track new openings via our jobs RSS feed.
Facebook Rethinks How It Builds Data Centers: Legos or IKEA?
Is this what Facebook’s data centers will look like in the future? This illustration depicts one of several new designs Facebook is developing.
SAN JOSE, Calif. – Is Facebook ready to ditch its penthouse? The company is working on ways to streamline its data center construction, but it hasn’t yet decided whether to go the Lego route or try the IKEA approach.
Facebook has built three massive data center campuses, but wants to accelerate the process and create a repeatable design for a “Rapid Deployment Data Center” that can work anywhere in the world. The goal is to effectively cut construction time in half.
“We’d like to deliver twice the amount of data space in the time it would normally take,” said Marco Magarelli of Facebook’s Strategic Engineering & Development team. “We wanted to find new ways of doing things faster and better. So we got together with some industry experts in data center design and lean construction approaches, like those often used in hospital buildings.”
In a session at last week’s Open Compute Summit, Magarelli discussed the outcome of this process, outlining two new design concepts that Facebook is considering for its future facilities. One design involves a modular approach to construction, shipping large pre-fabricated “building blocks” that can be rapidly put together, much like Legos, to create a building. The second design focuses on the use of IKEA-style kits filled with lightweight parts that can be assembled on-site to create rows of racks and ducting inside a data hall.
Both approaches would totally revamp the way Facebook cools its servers. The company currently employs a “penthouse” cooling system which uses the upper floor of the building as a large cooling plenum with multiple chambers for cooling, filtering and directing the fresh air used to cool the data center.
The new designs shift the cooling chambers to the perimeters of a single-story facility, dramatically shrinking the amount of real estate required for cooling.
“This opens up the possibility of removing structure,” said Magarelli. “Can we take away the penthouse?”
Marco Magarelli from Facebook’s Strategic Engineering & Development team discusses new approaches to data center construction during the Open Compute Summit last week in San Jose, Calif. (Photo: Colleen Miller)
Let’s take a closer look at both design concepts:
Vendor One: Chassis Approach
The first approach looked at new ways to package the infrastructure for Facebook’s data centers, seeking to transport components in large building blocks that could be assembled on-site. This concept has been widely used with modular data centers, which use factory-built “skids” of power and cooling equipment to supply the back-end infrastructure, and containerized data halls to house servers.
Facebook prefers a data hall to modules, so it sought ways to containerize elements of its server area. Working with a vendor, it first examined whether it could fully package its containment systems and overhead ductwork.
OpenDaylight Project Releases Software to Simplify SDN
During the OpenDaylight Summit in Santa Clara this week, IBM announced a new unified network controller based on OpenDaylight technology, and the OpenDaylight Project launched an open source platform to advance Software Defined Networks (SDN) and Network Functions Virtualization (NFV). The event conversation can be followed on Twitter via the hashtag #ODSummit.
OpenDaylight – Hydrogen Release
OpenDaylight is an open platform for network programmability to enable SDN and create a solid foundation for NFV for networks of any size and scale. The project announced its first open source software release, “Hydrogen,” at the conference. Hydrogen is the first simultaneous release of OpenDaylight, delivering three different editions to help a wide array of users get up and running as quickly as possible: the Base Edition, Virtualization Edition and Service Provider Edition.
“OpenDaylight formed with the goal of tackling one of IT’s toughest challenges: simplifying network management,” said David Meyer, Technical Steering Committee chair, OpenDaylight. “This first release is a great step forward and the community is already looking to build on its work to address a variety of additional capabilities and features in subsequent releases that are being discussed at the first OpenDaylight Summit this week.”
Key features included in each Hydrogen edition include a multi-protocol SDN controller Service Abstraction Layer (SAL), OpenFlow plugin, OpenFlow Protocol Library, Open vSwitch Database configuration and management protocol support, and Java-based NETCONF and YANG tooling for OpenDaylight projects.
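For readers who want to experiment, the sketch below shows one way a script might query a Hydrogen controller’s northbound REST interface from Python. The host, port, container name, endpoint path and admin/admin credentials are assumptions based on a default Hydrogen-era installation, not details from the announcement, so adjust them for a real deployment.

```python
import requests
from requests.auth import HTTPBasicAuth

# Assumed defaults for a local Hydrogen controller (hypothetical values;
# change host, port and credentials to match your own installation).
CONTROLLER = "http://localhost:8080"
AUTH = HTTPBasicAuth("admin", "admin")

def get_topology(container="default"):
    """Ask the controller's northbound REST API for the discovered topology."""
    url = f"{CONTROLLER}/controller/nb/v2/topology/{container}"
    resp = requests.get(url, auth=AUTH, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Print the raw JSON; the exact structure depends on the edition deployed.
    print(get_topology())
```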
“We are seeing new OpenDaylight implementations and solutions coming to the forefront every day,” said Neela Jacques, executive director, OpenDaylight. “All signs point to 2014 being a key year for the project as we continue to grow the community, build the architecture and engage with organizations and end users who want to accelerate the path to SDN and NFV.”
IBM Delivers Unified Network Controller
IBM introduced a new unified network controller based on OpenDaylight technology, designed to get organizations up and running quickly on Software Defined Networks. The new IBM Software Defined Network for Virtual Environments (SDN VE) includes open source components and interfaces from the OpenDaylight Project, as well as support for the OpenStack platform, which enables organizations to integrate their SDNs into private and public clouds.
“It generally takes days to re-provision a network,” said Robert M. Cannistra, Senior Professional Lecturer of Computer Science and Information Technology at Marist College in Poughkeepsie, N.Y. IBM has been working closely with the SDN Innovation Lab at the college for the past year. “The solution we’re developing with IBM and the SDN VE is designed to cut that down to under an hour or literally minutes by allowing a data center operator to move data and applications to a safe data center from a remote location using a tablet or smartphone.”
The SDN VE consists of the unified controller, virtual switches for creating overlays, gateways to non-SDN environments and open interfaces for application integration. SDN VE enables network administrators to achieve greater enterprise performance, scalability and security, and address ever-changing business needs by speeding up network provisioning from days to hours. IBM SDN VE availability is planned for this quarter.
“Our goal is to take advantage of the openness of the OpenDaylight platform and deliver that advantage to clients by collaborating with other developers to establish an ecosystem of interoperable network applications and services,” said Dr. Inder Gopal, IBM vice president of System Networking Development. “The cooperation needed to realize the benefits of SDN is only possible within an open framework, and IBM is pleased to provide a holistic solution for new and existing networks.”
TELUS Opens Highly Efficient Data Center In British Columbia, Canada
The exterior of the new TELUS data center in Kamloops, British Columbia. (Photo: TELUS)
Canadian telecom provider TELUS has opened a technologically advanced, energy-efficient data center in Kamloops, British Columbia. The facility will serve as the foundation of TELUS’ cloud computing services for thousands of Canadian businesses.
“The Kamloops Internet Data Centre will be the cornerstone of our national next-generation cloud computing services, handling complex data storage and offering unsurpassed connectivity, superior functionality, state-of-the-art security and industry-leading reliability to our clients,” said Lloyd Switzer, TELUS senior-vice president of Network Transformation. “The centre’s modular design and ability to expand to meet the growing demands of our clients is rooted in our passion for putting customers first in all we do, and positions TELUS to lead the industry in sustainable data centres.”
TELUS invested $75 million in the data center, part of $3 billion in infrastructure and facilities upgrades TELUS is making across British Columbia from 2012 through 2014. This builds upon the $29 billion TELUS has already invested in operations and technology throughout the province since 2000.
“Kamloops is the perfect location for this world-class centre due to its geography, climate, proximity to our networks and clean power, and the presence of a highly skilled workforce,” said Switzer. “The completion of one of the most environmentally sustainable data centres in the world is something all of us, especially the local community, should be very proud of.”
Advanced Cooling System
A key element of the data center’s efficient design is Skanska/Inertech’s advanced cooling system, which allows the center to consume 80 percent less electricity and 86 percent less water than typical data centers. “Effectively it’s a closed loop refrigerant system,” said Bryant Farland, senior vice president of Skanska’s Mission Critical Center of Excellence. “It takes advantage of the heat generated by the hot aisle, maintaining a temperature that’s appropriate.”
The data center will use outside air for cooling 99 percent of the year, and will require at most 40 hours of mechanical cooling per year. The company says this will keep more than 2,300 tons of carbon dioxide out of the atmosphere annually, equivalent to the emissions of 10,000 Canadian households (or 12 American ones … kidding). The facility is built to LEED standards and has received Tier III certification for design and construction from the Uptime Institute, making TELUS the only Canadian company to achieve this twice, in Kamloops and Rimouski, Quebec.
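As a rough sanity check, those figures are easy to reconcile: 40 hours of mechanical cooling out of the 8,760 hours in a year leaves well over 99 percent of the year on free cooling. A minimal sketch of the arithmetic:

```python
# Back-of-the-envelope check of the Kamloops free-cooling figures cited above.
HOURS_PER_YEAR = 8760
mechanical_cooling_hours = 40    # maximum cited by TELUS

free_cooling_fraction = (HOURS_PER_YEAR - mechanical_cooling_hours) / HOURS_PER_YEAR
print(f"Free cooling covers {free_cooling_fraction:.1%} of the year")   # ~99.5%
```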
“TELUS’ investment in our city supports that Kamloops is recognized as not only a green and sustainable community, but also as a centre of innovation for business,” said Kamloops Mayor Peter Milobar. “As this project comes to completion, we’d like to congratulate TELUS, and look forward to an ongoing relationship.”
The Skanska/Inertech Relationship
Skanska/Inertech’s advanced cooling system is a big reason the facility is so environmentally sound. Skanska has partnered with TELUS for several years, and together the companies have built impressive facilities.
“Three years ago we met with TELUS,” said Farland. “TELUS had sought different folks to respond to an RFP for a data center they were constructing. They advertised it as an open design-build construction. We emphasized the importance and ability the technology would have to drive operational efficiency. TELUS was at the forefront of embracing the technology. They became comfortable because of the performance guarantee.”
That performance guarantee was an annualized PUE of 1.15. “It’s almost unheard of, this is a real differentiator in the market for Skanska to stand behind this work,” said Farland.
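PUE (Power Usage Effectiveness) is simply total facility energy divided by the energy delivered to IT equipment, so a guarantee of 1.15 means roughly 15 cents of overhead (cooling, distribution losses, lighting) for every dollar of IT load. A minimal sketch, using hypothetical annual figures chosen only for illustration:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_equipment_kwh

# Hypothetical annual figures, for illustration only.
it_load_kwh = 10_000_000
overhead_kwh = 1_500_000   # cooling, power distribution losses, lighting, etc.

print(round(pue(it_load_kwh + overhead_kwh, it_load_kwh), 2))   # -> 1.15
```

At a PUE of 1.15, the non-IT overhead works out to 0.15/1.15, or about 13 percent of total facility energy.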
Originally there was a master agreement to develop two sites: the new Kamloops data center and one in Rimouski, Quebec. Based on the results, there’s a good chance the partnership will continue.
Skanska Focused on Efficiency
The facility is one prime example of how Skanska is helping to develop some of the most efficient facilities in the world.
“There’s some pretty wonderful things happening beyond what we’ve done here,” said Farland. “We recognized that refrigerant closed loop isn’t for everyone. There’s some spectacular air-side economization going on as well. One of the things we’ve done is invested in personnel and engineering teams within the organizations. We’ve done a pretty diverse set of projects. Optimizing all different kinds of mechanical solutions. It’s important to have that kind of diversification; you have to take into account those geographies one is building in.”
Skanska has done a lot of enterprise business, working with financial and government clients, but it’s seeing a lot more interest in the colocation world. “Multi-tenant data centers continue to be a point of emphasis,” said Farland. “We recently won a project with one of the largest colocation providers. We’re diversifying.”
The Kamloops Data Center created 75 permanent Canadian jobs, 25 of them in Kamloops, and 200 jobs during construction. TELUS will create more local construction jobs when additional modules are added to the center in coming years. The center was designed to be scalable.
Data Centers Should Trust Emergency Power to UL 1008 Listed Automatic Transfer Switches
Bhavesh Patel is director of marketing and customer support, ASCO Power Technologies, Florham Park, NJ, a business of Emerson Network Power.
When utility power at a data center fails, automatic switchover to an emergency backup power system that enables “business-as-usual” is the ideal scenario. But sometimes that doesn’t happen because the transfer switch itself fails.
While automatic transfer switches (ATSs) are the norm in data centers, they are not all the same. There are various standards to which automatic transfer switches can be certified.
For reliability and the greatest likelihood that a switchover to backup power will occur flawlessly, many experts in business-critical continuity recommend selecting an automatic transfer switch certified to UL 1008. UL 1008 Certification requires conforming to extremely rigorous industry-recognized requirements.
What Does Underwriters Lab Certification Mean?
Established by UL (Underwriters Laboratories) in 1970 to guard against transfer switch failures and resulting potential fires, UL 1008 is both a performance standard and a design and construction standard.
In a recent survey of executives at facilities with transfer switches, commissioned by ASCO, a business of Emerson Network Power, 20 percent of respondents reported at least one failure of a switch in the previous five years, and 42 percent of those failures left a facility without power. One-third of those respondents stated that the transfer switch failed completely and became totally non-operable. Age of the switch was not a determinant: the number of failed units less than five years old was the same as the number of failed units 15 or more years old.
Aimed at ensuring reliability and durability of operation, UL 1008 requires rigorous testing of transfer switches by an independent testing and certification agency. The stringent requirements include: withstand and close-on ratings (WCR) covering severe fault currents, bolted faults and short circuits; a test to ensure the device can carry rated currents; and endurance tests that specify the number of cycles transfer switches at each ampere level must complete while still performing their intended function. A typical transfer switch that has earned UL 1008 certification can transfer 6,000 times, with a minimum of 1,000 operations at 100 percent of the rated load.
To put those numbers in perspective, a UL-listed transfer switch that is tested monthly might be operated 15 or so times a year. Normal power outages may happen 7-10 times a year in a facility, when the switch also transfers power over to an on-site source. So the standard certainly has lots of leeway built in: over a 50-year period a particular transfer switch might be operated about 1,200 times, give or take.
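A quick calculation makes that headroom concrete. Using the figures above (roughly 15 test operations plus up to 10 outage transfers a year), a switch accumulates on the order of 1,200 operations over 50 years, a small fraction of the 6,000-cycle UL 1008 endurance figure. A minimal sketch:

```python
# Rough lifetime operation count for a transfer switch, using the figures above.
test_operations_per_year = 15      # monthly testing plus a few extra exercises
outage_transfers_per_year = 10     # upper end of the 7-10 normal outages cited
service_life_years = 50

lifetime_operations = service_life_years * (test_operations_per_year + outage_transfers_per_year)
print(lifetime_operations)         # -> 1250, versus the 6,000-cycle endurance test
```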
Two more advantages of installing a UL 1008 listed transfer switch are simpler inspection of code-required emergency power systems and clearly spelled-out construction standards for mechanical operation, both of which contribute to safe operation of the switch.
An ATC switch. The UL label is applied in the upper left-hand corner of the door.
If a manufacturer’s transfer switches have successfully passed the battery of tests, that manufacturer is permitted to label those switches with the UL mark. UL 1008 transfer switches are labeled as “non-automatic transfer switch,” “automatic transfer switch,” or “transfer and bypass isolation switch.” Literature that is associated with the transfer switch should state “UL tested” or “tested and certified by UL 1008.” The UL 1008 switch itself must have the UL inside a circle logo and the word “listed” along with the exact transfer switch words cited in the standard. In addition, each switch carries a code that identifies the manufacturer.
Certification Is Not Commonly Understood
Despite the importance of UL 1008 certification, many data center executives are unaware of any certifications of a transfer switch or why they should look for it. In fact, when asked to which UL standard their transfer switch(es) were certified given the choices UL 67, UL 98, UL 891, UL 1008, “others,” or “not sure,” 92 percent responded not sure.
When asked who ensured the transfer switch(es) were certified to the appropriate standard, 15 percent were not sure and 10 percent said themselves. Almost half the responses indicated it was the design engineer’s responsibility, and 47 percent felt it was the installing contractor’s responsibility. About one quarter of the responses chose “authorities who approved occupancy certificate.” (Responses added up to more than 100 percent because multiple answers were allowed.) Whoever has responsibility for transfer switch selection, it is important that the individual has extensive experience in the process. To avoid any misunderstanding, the UL 1008 requirement should be written into any master construction specifications.
In addition to installation of a UL 1008 certified transfer switch, it is also essential to ensure that proper periodic maintenance and testing are in place. Typically, a qualified service company should be contracted to regularly perform periodic inspection and maintenance on the transfer switch and associated equipment and make any repairs as needed. Indeed, transfer switches have moving parts that, if left in the same position for months or years, can seize up.
The service provider may be the original equipment manufacturer’s service organization or, more commonly, a third-party service provider. If possible, it is also a good idea to have in-house personnel with expertise on the switches and their maintenance.
Don’t take the quality of potential service providers for granted. The lowest bidder is not necessarily the best choice. One of the first questions is whether the service provider’s techs are trained by the manufacturer of the automatic transfer switch to work on that switch. Expertise on and familiarity with protocol for one manufacturer’s switch does not guarantee “easy-fit” comfort on another manufacturer’s switch. Beyond that, does the provider offer round-the-clock service? Will the tech arrive onsite with spare parts from the manufacturer or does the tech first have to survey the damage or problem and then call for parts?
Data center managers should know the type of transfer switch the data center’s emergency backup power is relying on. If an upgrade is warranted, make sure the design and specifying engineers select from an established manufacturer of UL 1008 listed switches suitable for the given application. The aim is for the transfer switch to operate flawlessly when called upon over the span of many years.
Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.
The Cisco Invicta Platform: Taking Convergence to a New Level 
The data center is changing. The proliferation of cloud computing, modern data center optimizations, and the vast increase in data traversing the WAN have placed new demands on infrastructure. Administrators are actively trying to increase density while still maintaining optimal performance. More organizations and users are entering the data center and requesting resources. Entire applications, desktops, workloads and data points are being delivered directly from the data center platform.
In fact, according to the latest Cisco Cloud Index Report, the data center virtualization and cloud computing transition is pretty significant:
- The ratio of workloads to non-virtualized traditional servers will increase from 1.7 in 2012 to 2.3 by 2017.
- The ratio of workloads to non-virtualized cloud servers will increase from 6.5 in 2012 to 16.7 by 2017.
- By 2017, nearly two-thirds of all workloads will be processed in the cloud.
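To make those ratios tangible, here is a minimal sketch of how many physical servers a hypothetical fleet of 1,000 workloads would require at each density. The fleet size is an assumption chosen purely for illustration.

```python
import math

workloads = 1000   # hypothetical fleet size, for illustration only

# Workloads per physical server, from the Cisco Cloud Index figures above.
densities = {
    "traditional servers, 2012": 1.7,
    "traditional servers, 2017": 2.3,
    "cloud servers, 2012": 6.5,
    "cloud servers, 2017": 16.7,
}

for label, workloads_per_server in densities.items():
    servers = math.ceil(workloads / workloads_per_server)
    print(f"{label}: ~{servers} physical servers")
```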
At this point, it’s safe to say that the data center is the central hub of all modern technology. Amidst all of this convergence, there’s been an evolution inside of the data center. Major technological vendors quickly realized that infrastructure density is key to ruling the cloud and data kingdom.
The idea behind convergence isn’t new. What is new and exciting are the types of features and solutions being placed into the modern converged infrastructure. The recent acquisition of Whiptail by Cisco was certainly exciting news. However, the interesting part has been the rapid pace of integration into the existing UCS platform and some of the new features already added. To better support next-generation data center and user demands, Cisco has done some interesting things to optimize its converged offering.
- Complete Network, Storage, Compute Integration. This is pretty significant. If you look under the hood of this platform, you’ll quickly realize that Cisco isn’t in the traditional “storage” business. In fact, it doesn’t want to be. Cisco is interested in creating a next-generation data center model by controlling and helping you manipulate how data traverses both your cloud and your internal resources. This means improving application, desktop, user, and even data performance within a converged platform. Basically, this model allows for a complete software defined solution. This can now happen at the server, network, and data layer where Cisco logically abstracts functionality of key services to allow for greater control of that virtual layer. Ultimately, this means apps launch faster, desktops are accessed quicker, and data can be delivered truly on demand to a distributed user base.
- The UCS Invicta Operating System. The idea here is actually the creation of that software-defined data layer. Basically, the Cisco Invicta OS is designed to use NAND flash memory to sustain high throughput, a high rate of I/O operations per second (IOPS), and ultra-low latency while overcoming the write performance and longevity challenges typically associated with multilevel-cell (MLC) flash memory. The Cisco UCS Invicta OS accelerates the delivery of enterprise data by 10 times while using flash memory to reduce energy costs in a platform that consumes less data center floor space. Furthermore, this OS integrates with the overall UCS platform to create that true software-defined layer. Logical abstraction of physical resources allows Cisco and this platform to create greater control over data, its flow, and how it impacts the overall environment. (The short sketch after this list illustrates how IOPS, I/O size and throughput relate.)
- UCS Director Integration. Here’s the really cool part: you don’t have a stand-alone data control appliance. Rather, you have a powerful data management platform that can directly integrate into the rest of your converged infrastructure. This means that administrators can use the Cisco UCS management suite (Cisco UCS Central Software, Manager, and Director) to deploy, organize, and monitor Cisco UCS resources to best meet their business requirements. Furthermore, organizations can create intelligent workflow policies around hardware and software profiles which live on the UCS converged platform. This means that policies can intelligently utilize new types of resources being presented by the Invicta infrastructure.
- Creating the Next-Generation Data Center. So much is run through the modern data center that optimization and proactive control is absolutely necessary. The Invicta platform was just an addition to an already powerful UCS infrastructure. In utilizing SDx technologies and by intelligently integrating key components into their platform, Cisco was able to make an environment capable of handling organizational needs both today, and in the near future.
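Following up on the IOPS point above: as a general rule (not an Invicta specification), the throughput implied by an IOPS figure depends on the I/O size, since throughput ≈ IOPS × I/O size. A minimal sketch with hypothetical numbers shows why small-block IOPS and large-block bandwidth are very different claims:

```python
def throughput_mb_s(iops: float, io_size_kb: float) -> float:
    """Throughput implied by an IOPS figure at a given I/O size."""
    return iops * io_size_kb / 1024

# Hypothetical figures for illustration only -- not Invicta specifications.
for iops in (250_000, 1_000_000):
    for io_kb in (4, 64):
        print(f"{iops:>9,} IOPS at {io_kb:>2} KB -> {throughput_mb_s(iops, io_kb):8,.0f} MB/s")
```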
Data center delivery goes far beyond just applications and desktops. By integrating a powerful NAND flash infrastructure and complete profile control, you’re able to do a lot more with your data center than ever before. This includes:
- Analytics and intelligence.
- Batch processing.
- Email archiving and data control.
- Online transaction processing optimization.
- Video compression and delivery.
- Complete workload delivery (VDI, applications).
- Database querying and loading.
- High-performance computing.
There are some pretty direct benefits when working with a converged system. Remember, vendors are actively trying to improve performance around several key factors within the data center model. In deploying these converged systems, you’re aiming to improve key data center functionality as well. This includes:
- Reduced power consumption
- Reduced overall energy costs
- Reduced data center floor space consumption
- Lower operating expenses
A good workflow automation and orchestration infrastructure can create a powerful data center engine. This environment would proactively work for you to optimize existing workloads and gauge the need for additional resources. Moving forward, creating an intelligent system which can work for you without administrator interaction will be the go-to model. Remember, we’re only increasing the number of devices that connect into the modern data center. So, when you create your corporate platform, look at convergence as a viable option for a next-gen-ready data center infrastructure.
DCK Video: Hyve Launches Front I/O Optimized Servers
Hyve Solutions unveiled its new front I/O optimized servers, called the Ambient Series, last week during the Open Compute Summit V in San Jose, California. On the floor of the exhibit hall, Data Center Knowledge took a tour of Hyve’s offerings with Steve Ichinaga, Senior Vice President and General Manager, Hyve Solutions. The video runs 5:28. More details after the jump.
Not only are the servers in the Hyve Solutions Ambient Series for OCP designed to have all inputs and outputs on the front end and to fit the OCP Open Rack specification, they are also optimized for a higher thermal envelope, so they can be efficiently cooled by the ambient air within a data center. The Hyve Solutions Ambient Series for OCP offers a variety of compute and storage options ranging from high capacity to high IOPS. The OCP form factor (at 21″ wide) allows for even greater storage density than the traditional 19″ Hyve Solutions Ambient Series products.
Additionally, new storage designs were rolled out including the Hyve Solutions OCP 1320 and Hyve Solutions 1316 Ambient Series products based on the Seagate Kinetic Open Storage Platform. These storage components leverage Ethernet fabric versus traditional HDD interfaces, allowing for lower cost, higher density and higher gains in overall efficiency. These prototypes will be available to qualifying OEMs for testing and evaluation. “Seagate’s Kinetic technology has enabled us to increase storage density, while maintaining an energy efficient profile, all of which are critical to our largest storage cloud customers,” said Ichinaga.
For additional video, check out our DCK video archive and the Data Center Videos channel on YouTube.
Microsoft Invests in Foursquare, Google and Cisco Cross-License Patents
Microsoft invests $15 million in Foursquare and licenses location data, and Google and Cisco agree to a broad cross-licensing agreement to avoid unnecessary patent lawsuits.
Microsoft invests $15 million in Foursquare. While most of the attention this week was on Satya Nadella becoming the next Microsoft CEO and the company’s future, Microsoft (MSFT) was investing in the cloud – putting $15 million into location app Foursquare. The four-year agreement makes Microsoft the largest licensee of the Foursquare places database, which has over 60 million entries and 5 billion check-ins. The investment from Microsoft closely follows a late 2013 Series D fundraising round that netted $35 million. Foursquare’s blog states, “when you use Microsoft devices powered by the Windows and Windows Phone operating systems and products like Bing, places will be enhanced by Foursquare – to provide contextually-aware experiences and the best recommendations of any service in the world.”
Google and Cisco cross-license patents. Cisco (CSCO) and Google (GOOG) announced that the two companies have entered into a long-term patent cross-licensing agreement covering a broad range of products and technologies. The agreement allows each company to extract significant value from its patent portfolio through a license to the other’s portfolio and by helping to reduce the risk of future litigation. Both Cisco and Google are members of the Coalition for Patent Fairness, a leading advocacy group for patent reform. “In today’s overly-litigious environment, cross-licensing is an effective way for technology companies to work together and help prevent unnecessary patent lawsuits,” said Dan Lang, Cisco’s Vice President of Intellectual Property. “This agreement is an important step in promoting innovation and assuring freedom of operation.”
Bitcoin Miners Building 10 Megawatt Data Center in Sweden
KnC Miner has started work on a 10-megawatt data center in Boden, Sweden, which will be filled with Bitcoin ASIC mining rigs similar to this one. (Photo: KnC)
Facebook has new neighbors in Sweden, and they’re building Bitcoin’s version of the Death Star – a 10 megawatt data center filled with high-powered computers mining for cryptocurrency.
Bitcoin mining equipment company KnC Miner has begun construction on its new facility in Boden, about 10 miles down the road from Facebook’s server farm in Lulea. The data center is being built in a facility previously used as a helicopter hangar for the Swedish armed forces. It will be retrofitted to house thousands of custom Bitcoin mining rigs built by KnC Miner, one of a host of new vendors that has emerged to serve the growing market for Bitcoin hardware.
KnC Miner is based in Stockholm, Sweden, and has established a leadership position in Bitcoin mining rigs powered by ASICs (application-specific integrated circuits) that crunch data to create and track bitcoins. The company says it has sold $75 million in hardware since June, with customers in 120 countries.
KnC Enters Cloud Mining Services
The new data center marks KnC’s entry into cloud mining services. KnC is the latest in a series of Bitcoin companies to announce multi-megawatt data centers for Bitcoin mining – the term for using data-crunching computers to earn newly issued virtual currency. Like hyperscale computing companies such as Facebook, Bitcoin companies are chasing cheap power to improve the profitability of running their power-hungry rigs.
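The draw of cheap power is easy to quantify: at 10 megawatts running around the clock, every cent per kilowatt-hour is worth close to a million dollars a year. A minimal sketch, using hypothetical electricity rates rather than anything KnC has disclosed:

```python
# Annual electricity cost for a 10 MW load at various power prices --
# a rough illustration of why Bitcoin miners chase cheap electricity.
FACILITY_MW = 10
HOURS_PER_YEAR = 8760

for usd_per_kwh in (0.04, 0.07, 0.10):   # hypothetical rates
    annual_cost = FACILITY_MW * 1000 * HOURS_PER_YEAR * usd_per_kwh
    print(f"${usd_per_kwh:.2f}/kWh -> ${annual_cost:,.0f} per year")
```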
“We searched all over the world for a suitable place to build the first of many of our own mega data centers, and to find the best location lying right here in our home country is fantastic,” said Sam Cole, one of the co-founders at KnC Miner. “Our highly advanced technology consumes a lot of energy, so for us it was imminent to find a production site with access to renewable yet stable and safe energy. We have had an incredible amount of support from The Node Pole representatives, local companies and the government here in Boden.”
The Node Pole initiative seeks to market Sweden as a destination for data center development, leveraging the region’s abundance of stable and competitively priced electricity from renewable energy.
KnC Miner started construction last week, and expects the facility to be fully operational within the next few months. Representatives of the Node Pole said that KnC is “already in discussions with local authorities regarding the establishment of even larger facilities in the local area later this spring.”