Data Center Knowledge | News and analysis for the data center industry

Monday, November 21st, 2016

    Time Event
    2:00p
    Finding the Right Data Center Model for Your Business

    James Young is Technical Director, Asia Pacific at CommScope.

    There’s no question: Data center needs change as business applications change direction, grow or die off. Managers can align their data center capacity with those needs, choosing to keep their own on-site capacity or to migrate all of their applications to a public, shared cloud. Some may also choose to rent space in a multi-tenant data center (MTDC), moving away from their own facilities. There’s also a happy medium in which businesses mix all of these alternatives into their own unique blended solution. So how do data center managers decide when and how to evolve their data centers? Naturally, there are pros and cons to each data center model.

    On-Site Data Centers

    In this scenario, managers control their data center at their own location. An on-site data center improves efficiencies for some business needs. Of course, it also carries maintenance requirements. With no crystal ball as to what will happen in the future, it is more difficult to scale this investment up or down: guess wrong, and the cost of this alternative can be much higher than that of the other choices.

    All of the activity around cloud computing makes it seem like on-site data centers are dinosaurs, but there are a couple of different reasons why companies choose to own their own data centers. For example, some companies have static requirements and perform a significant amount of processing on an ongoing basis, and they have invested in the data center capacity to do that. Over time, they might make changes to the data center, but it ends up being more expensive to go into a leased facility or into the cloud unless there’s a good reason to do it, such as changes in operations or technology.

    Insurance companies and banks, for example, have been using the same applications for a long time – they’re part of the organization’s core capabilities and are considered to be a strategic advantage. There are also companies that work with large data sets, like oil and gas firms. Moving those large data sets around to different locations – like into the cloud – is very expensive and time-consuming.

    Also, many enterprise workloads aren’t built or designed for cloud or can’t be virtualized, such as applications written in COBOL for mainframes. Changing those applications and replicating that software in a different environment is a very large and expensive undertaking. At some point, of course, legacy applications such as these will end up being rewritten to run in virtualized or cloud-native environments, when the needs justify the expense and risk of doing so. For example, financial services firms that run online transaction processing or actuarial applications will probably take quite a few years to rewrite them, given the higher risk of change and the complexity of their customized solutions.

    Cloud Computing

    From the enterprise perspective, the trend definitely appears to be toward increasing use of cloud over time. When an organization moves its workload to a cloud environment, it’s different than simply converting to virtual machines (VMs). Enterprise data centers support many different applications, and this can translate to hundreds or thousands of VMs. It’s difficult to manage this new virtualized environment while ensuring high security and availability. Adding automation and orchestration tools greatly enhances operational efficiency and can turn multiple VMs into a private cloud. Now resources are agile and available for whatever workloads the business may need to deploy.

    Private and public cloud environments differ in several ways. When a company chooses a private cloud environment, the company has the benefit of absolute control. Internal operating costs may be much less than the monthly charges for using a public cloud, depending on the way the data center services are used. Evolving to a private cloud is also a lot easier from a security and management perspective than moving to a public environment, and it is already common practice for most enterprises.

    The public cloud is a rental environment that offers many of the same facilities we see in a private cloud. Here, the next wave of applications is typically rewritten as cloud-native applications to run on specific types of public platforms. Public cloud moves companies completely out of the infrastructure business, and it might offer better security than small or mid-sized enterprises can manage on their own.

    But regulations can be an issue: the regulatory environment hasn’t necessarily caught up with what’s possible with public cloud. For some companies, there are several regulatory hurdles to overcome before moving to a public cloud. That’s why many banks, for example, often host private clouds in their own facilities.

    Another downside to public cloud is that there’s no standardization across public cloud platforms. Cloud vendors have unique ways of doing things. It can be difficult to change to another vendor down the road. Putting software into a different cloud environment can be troublesome and expensive. If you have very large datasets in the cloud, it can be hard and expensive to move them into another facility.

    Cloud also forces managers to give up a certain amount of control in terms of service level agreements. If a public cloud fails, managers have to be aware of backup and recovery plans. Once a manager has made a commitment to the public cloud, it may be impossible to move back. Will you still have the resources to manage a private environment? If you recovered your data, where would your backup applications run?

    Still, if your company isn’t using any differentiated platforms, chances are the cloud-based applications are good enough, and they may be easier for customers to access. But large enterprises that may have invested millions in developing proprietary applications, which they use to differentiate themselves, are not moving to the cloud in droves.

    MTDCs

    MTDCs offer the ability to pay for infrastructure as a utility rather than running it in your own on-premises environment. Even a complex enterprise resource planning (ERP) environment with millions of dollars tied up in customization might not justify building and owning a private data center. In that case, it may make sense to move it into a hosted facility and buy space, power, cooling and connectivity from the MTDC. And, since it is commonplace for major public cloud, service and content providers to also have a presence in MTDCs, enterprises can connect directly to them. This can significantly reduce latency, improve the user experience, and simplify planning by putting public and private clouds, along with carrier connectivity, under a single roof.

    Choosing Models

    Finally, a lot of companies are using more than one data center model. A company might use public cloud to enable self-service access to compute resources, for example, while it runs proprietary, business-critical applications in an on-site data center or an MTDC.

    In the end, different companies have different needs, and they use different data center models to meet them. There are several alternatives available today, and you can tailor your investment strategy and migration path to combine these approaches. Cloud is becoming more popular, MTDCs are becoming much more capable, and regulators are trusting off-premises solutions more. The trick is to use the best model for the task at hand.

    Industry Perspectives is a content channel at Data Center Knowledge highlighting thought leadership in the data center arena. See our guidelines and submission process for information on participating. View previously published Industry Perspectives in our Knowledge Library.

    3:31p
    Alibaba Cloud to Launch its First European, Middle East Data Centers as Part of Global Push
    Brought to You by The WHIR

    Alibaba’s cloud computing arm will close out the year with four new data centers in a move that Alibaba Cloud is calling a “major milestone” in the Chinese company’s global expansion.

    According to an announcement on Monday, the four data centers will open by the end of 2016 in the Middle East (Dubai), Europe, Australia and Japan, bringing its network to 14 locations around the world.

    Alibaba Cloud started the rollout on Monday with the launch of its data center in Dubai. Alibaba said that with the launch, it will be the first major global public cloud services provider to offer cloud services from a local data center in the Middle East. According to Gartner, the public cloud services market in the Middle East and North Africa region is projected to grow 18.3 percent in 2016 to $879.3 million, up from $743.1 million in 2015.

    “Alibaba Cloud has contributed significantly to China’s technology advancement, establishing critical commerce infrastructure to enable cross-border businesses, online marketplaces, payments, logistics, cloud computing and big data to work together seamlessly,” Simon Hu, President of Alibaba Cloud said in a statement. “We want to establish cloud computing as the digital foundation for the new global economy using the opportunities of cloud computing to empower businesses of all sizes across all markets.”

    The new locations offer access to various services such as data storage and analytics, and cloud security, the company says.

    In Alibaba’s most recent earnings report, cloud unit revenue jumped 130 percent to 1.5 billion yuan for the quarter. The division currently has 651,000 paying customers.

    Alibaba Cloud’s first European data center will be launched in partnership with Vodafone Germany in a facility based in Frankfurt.

    Expanding its footprint in Asia-Pacific, Alibaba Cloud is opening a data center in Sydney by the end of 2016. It will have a dedicated team based in Australia to help build partnerships with local technology companies.

    Finally, its Japanese data center is hosted by SB Cloud Corporation, a joint venture between Softbank and Alibaba Group.

    “The four new data centers will further expand Alibaba Cloud’s global ecosystem and footprint, allowing us to meet the increasing demand for secure and scalable cloud computing services from businesses and industries worldwide. The true potential of data-driven digital transformation will be seen through globalization and the opportunities brought by the new global economy will become a reality,” Sicheng Yu, Vice President of Alibaba Group and General Manager of Alibaba Cloud Global said.

    7:03p
    DDoS Target Dyn Becomes Another Feather in Oracle’s Cap

    In a move that appears to have been as much of a surprise for folks at Oracle as for anyone else, Dyn — the commercial DNS provider whose name entered the general public’s vocabulary last October, as the target of a massive distributed denial-of-service attack — has agreed to be acquired by Oracle for an undisclosed sum.

    Dyn’s principal service is its Internet Performance Management (IPM) platform, which offers large customers with global data center presence a means of dynamically steering their users to the most accessible points of presence.  Think of it like load balancing at a deeper level of infrastructure, and either a complement or an alternative to a CDN.
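
    To make the steering idea concrete, here is a minimal, hypothetical Python sketch (not Dyn’s actual logic) of how a DNS answer might be chosen from health-checked points of presence. The POPS table, hostnames and steer() function are invented for illustration.

        import random

        # Hypothetical health-check results per point of presence (PoP).
        POPS = {
            "us-east.example.net": {"ip": "192.0.2.10", "healthy": True, "latency_ms": 23},
            "eu-west.example.net": {"ip": "192.0.2.20", "healthy": True, "latency_ms": 41},
            "ap-sydney.example.net": {"ip": "192.0.2.30", "healthy": False, "latency_ms": None},
        }

        def steer(pops=POPS):
            """Return the IP of the healthiest, lowest-latency PoP.

            A real IPM platform would also weigh the resolver's geography, link
            cost and per-PoP load; this only captures the basic idea of steering
            traffic away from trouble.
            """
            candidates = [p for p in pops.values() if p["healthy"]]
            if not candidates:
                # Last resort: return any PoP rather than fail the lookup entirely.
                return random.choice(list(pops.values()))["ip"]
            return min(candidates, key=lambda p: p["latency_ms"])["ip"]

        print(steer())  # e.g. 192.0.2.10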

    But for general consumers, Dyn also operates a service that makes remote devices reachable from a browser — for example, letting a user directly access a video feed from the security camera on her front porch from any other browser. DynDNS makes components of a home network accessible from a Web address associated with one of Dyn’s own domain names.
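
    For a sense of how such a consumer service is driven from the home side, below is a rough Python sketch of the kind of HTTP update request dynamic-DNS clients have traditionally sent (the widely copied DynDNS2-style protocol). The endpoint URL, hostname, credentials and response format here are assumptions for illustration, not a statement of Dyn’s current API.

        import requests

        # Assumed DynDNS2-style update endpoint and credentials -- illustrative only.
        UPDATE_URL = "https://members.dyndns.org/nic/update"
        HOSTNAME = "myhouse.example-dyndns.net"  # hypothetical hostname
        USERNAME, PASSWORD = "user", "secret"

        def report_current_ip(new_ip: str) -> str:
            """Tell the dynamic-DNS service the home router's current public IP.

            Routers and small daemons typically send this whenever the ISP-assigned
            address changes, so the hostname keeps pointing at the right place.
            """
            resp = requests.get(
                UPDATE_URL,
                params={"hostname": HOSTNAME, "myip": new_ip},
                auth=(USERNAME, PASSWORD),
                timeout=10,
            )
            resp.raise_for_status()
            return resp.text.strip()  # conventionally "good <ip>" or "nochg <ip>"

        # Example: report_current_ip("203.0.113.7")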

    Looking through today’s presentation material Oracle compiled for investors [PDF], it’s clear that DynDNS was not the service that made Dyn appealing to Oracle.  Rather, Oracle perceives DNS management as a competitive alternative to CDN, giving its cloud services platform a value-add that other CSPs may not be able to match.

    “While Oracle already offers enterprise-class IaaS and PaaS for Internet applications and cloud service,” wrote Dyn’s Chief Strategy Officer Kyle York, in a company blog post this morning, “Managed DNS and its corresponding value added services are critical core components of being a full-stack cloud platform provider. Adding Dyn’s best-in-class DNS solution to Oracle cloud will provide enterprise customers a one-stop shop for infrastructure services.”

    It was Dyn’s consumer services that put its entire network in jeopardy last October, along with measurably slowing down the entire Internet worldwide.  As security journalist Brian Krebs was among the first to report, an open source strain of malware dubbed Mirai had targeted particular Internet of Things devices, especially cameras and DVRs.

    Those video devices included embedded firmware manufactured by Hangzhou XiongMai Technologies, which, for one reason or another, hard-coded the default administrator password in the devices’ firmware. Even for users who had followed instructions and changed their passwords, the default admin password was still operable.

    That enabled the malware to take control of the devices and turn them into launch points for attacks on Dyn’s servers. GitHub, Netflix, Reddit, SoundCloud, Spotify, Twitter, and major news sites such as CNBC experienced significant slowdowns as a result. Even some access to Amazon AWS became limited, although Amazon utilized backup DNS servers — a safety contingency evidently not all providers consider.
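
    The Amazon detail points to a simple mitigation: delegate a zone to more than one DNS provider. As a rough way to check exposure, the Python sketch below (which assumes the third-party dnspython package) counts how many distinct providers appear among a domain’s NS records.

        import dns.resolver  # third-party package: dnspython

        def ns_providers(domain: str) -> set:
            """Return the apparent DNS providers, grouped by nameserver parent domain."""
            providers = set()
            for record in dns.resolver.resolve(domain, "NS"):
                host = str(record.target).rstrip(".")
                # Group ns1.provider.com and ns2.provider.com under one provider.
                providers.add(".".join(host.split(".")[-2:]))
            return providers

        providers = ns_providers("example.com")
        if len(providers) < 2:
            print("Single DNS provider -- an outage there can take the whole domain down:", providers)
        else:
            print("Delegated across multiple providers:", providers)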

    The only plausible reason Dyn may have been the final target for the malware attack was to demonstrate the fragility of the Internet when enough pressure is applied to a single point.

    Nonetheless, the coverage of the attack in the popular press ended up casting Dyn in a dark light, alongside the fragility of Internet security as a whole, the uncertainty about IoT standards, and even the last set of election results. Although Dyn is not a publicly traded company, its growth plans may have been dependent on extending the trust it had attained with its existing customer base.

    The underlying message in Oracle’s requisite FAQ document [PDF] following today’s merger announcement was, don’t ask so many questions.  Both companies will continue to operate independently for their respective services.  But beyond that, the FAQ did not say much more, besides providing a link back to the document which linked to the FAQ.

    The deal will be subject to regulatory approval, which may not be forthcoming in the wake of staff changes at federal agencies.

    9:22p
    HPE Helps CIOs Navigate the Cloud with Managed Services for Azure
    Brought to You by The WHIR

    More than a year after Hewlett Packard Enterprise (HPE) said it would stop offering its own public cloud services, the company has launched a portfolio of managed services around Microsoft Azure to help CIOs navigate digital transformation.

    Building on the partnership with Microsoft that HPE announced last December, the managed services launched this week encompass design, deployment, delivery and daily operational support, delivered by the HPE Enterprise Services team.

    Led by Eugene O’Callaghan, vice president of workload and cloud at HPE, the division has a standard delivery team that handles onboarding and service fulfillment.

    “Our job is really to enable the CIO to be successful,” says O’Callaghan, who has been in the role overseeing its global practice for the past 18 months. “We are looking to partner with our CIO clients to help them deliver secure, reliable operations at a lower cost.”

    HPE Managed Services for Microsoft Azure include management of virtual server, storage and network infrastructure; service provisioning and de-provisioning as well as infrastructure configuration management; operations support, Active Directory management and OS patching services; and backup, recovery and security services.

    “Our clients are excited about this,” O’Callaghan says. “We want to enable our clients to focus on their business challenges. So they’re not always going to have the skills in house to deliver these functions and services.”

    O’Callaghan says that when CIOs approach HPE, they often already have a chosen partner for public cloud. Microsoft is HPE’s partner in this area, but HPE CEO Meg Whitman suggested last year that the company could look to other public cloud vendors, Amazon and Google, in the future. For now, however, HPE seems focused on Microsoft Azure because of its enterprise focus.

    Beyond its managed services, the HPE Enterprise Services division also provides advice and transformation services, such as migration or application transformation, extending to “a set of brokerage services to provide that integrated, hybrid environment across their entire estate to then the set of landing zones – the right hosting environment, the right infrastructure as a service environment for the appropriate workload,” he says.

    Some of the challenges that CIOs face in the cloud are strikingly similar to those they face in traditional on-premises environments – take IT sprawl, for example.

    “That’s a risk we’ve been helping our clients mitigate for some time in a traditional environment but the same risks are there in public cloud,” O’Callaghan says.
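
    As a trivial illustration of the kind of visibility that helps rein in cloud sprawl (not HPE’s tooling), the Python sketch below inventories every VM in an Azure subscription using the azure-identity and azure-mgmt-compute packages; the subscription ID is a placeholder.

        from collections import Counter

        from azure.identity import DefaultAzureCredential
        from azure.mgmt.compute import ComputeManagementClient

        SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder

        # Enumerate every VM in the subscription and tally them by region --
        # a first step toward spotting forgotten or duplicated workloads.
        compute = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)
        by_region = Counter()
        for vm in compute.virtual_machines.list_all():
            by_region[vm.location] += 1
            print(vm.name, vm.location, vm.hardware_profile.vm_size)

        print("VMs per region:", dict(by_region))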

    “We have a few clients who are completely in the public cloud but it’s not the majority of our clients,” he says. “The majority of our clients have a traditional estate, they have private cloud optimized and automated for the appropriate workloads, virtual private cloud, and some workloads in public cloud as well.”

    “Not many clients have the skills in-house to integrate across public, private and traditional and provide that single operational environment,” he says.

    HPE is providing an overview of its HPE Managed Services for Microsoft Azure (PDF).

    9:42p
    Facebook Experimental Drone Accident Subject of Safety Probe

    BLOOMBERG – A U.S. safety agency is investigating an accident involving a massive experimental drone Facebook Inc. is developing to bring the internet to remote areas of the world.

    No one was hurt in the incident, which came during the unmanned aircraft’s first test flight on June 28. It marks the latest hiccup in Facebook’s plans to wirelessly connect the world, following an explosion earlier this year that destroyed one of its satellites and political resistance to the service in India.

    The high-altitude drone, which has a wingspan wider than a Boeing Co. 737 and is powered by four electric engines, suffered a “structural failure” as it was coming in for a landing, according to a previously undisclosed investigation by the National Transportation Safety Board.

    “We were happy with the successful first test flight and were able to verify several performance models and components including aerodynamics, batteries, control systems and crew training, with no major unexpected results,” the company said in an e-mailed statement.

    While there had been no previous mention of the NTSB investigation or details about the incident, the company did say in a July 21 web post that the drone, called Aquila, had suffered a structural failure.

    ‘Substantial’ Damage

    The accident occurred at 7:43 a.m. local time near Yuma, Arizona, NTSB spokesman Peter Knudson said. The NTSB has classified the failure as an accident, meaning the damage was “substantial.” There was no damage on the ground, Knudson said.

    The flying wing is designed to eventually be solar-powered so it can remain aloft for long stretches. The social-media company is seeking to boost the percentage of people around the world who connect to the internet by leapfrogging ground-based infrastructure limitations.

    Company Chief Executive Officer Mark Zuckerberg said he was “deeply disappointed” when a SpaceX rocket explosion Sept. 1 destroyed a Facebook satellite that would have helped spread internet access across Africa.

    The company has also had political hurdles. In India, for example, Zuckerberg was surprised when people rejected the company’s offer of free web services that had Facebook at the center. Locals saw it as a poorly disguised land grab of the Indian internet market, instead of a charitable project.

    Interest in Indonesia

    Indonesian Vice President Jusuf Kalla spoke to Zuckerberg in recent days at the Asia-Pacific Economic Cooperation summit in Peru about using the Aquila drone to beam internet to remote parts of the country, the Jakarta Post reported.

    “If we make the right investments now, we can connect billions of people in the next decade and lead the way for our generation to do great things,” Zuckerberg said in a Facebook post from the summit on Saturday.

    Zuckerberg was so excited about the drone aircraft’s first flight that he flew to the test facility in Arizona early on June 28, according to an account in The Verge.

    In a web post after the flight, he said it was so successful it was extended from 30 to 96 minutes. “We gathered lots of data about our models and the aircraft structure — and after two years of development, it was emotional to see Aquila actually get off the ground,” Zuckerberg wrote.

    The accident was the second involving an unmanned aircraft designed to fly for long periods as a less expensive alternative to satellites. An Alphabet Inc. drone known as the Solara 50 was destroyed May 1, 2015, at a desert landing strip in New Mexico after experiencing control problems as it flew in a thermal updraft, according to the NTSB.

    Carbon Fiber

    The aircraft are made with the latest carbon-fiber technology in an attempt to make them as light as possible so they can stay aloft with minimum power.

    Facebook’s drone has a wingspan of 141 feet (43 meters) and weighs 900 pounds (408 kilograms). It has no traditional fuselage and is built almost entirely of thin, black wings. It flies slowly, using only the energy required to power three hair dryers, according to Facebook.
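
    For rough scale, assuming a typical hair dryer draws about 1.5 kW, three of them work out to roughly 3 × 1.5 kW ≈ 4.5 kW of cruise power.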

    Aquila is designed to fly for months at a time, using solar energy to replenish batteries at altitudes above 60,000 feet (18,288 meters). It will be equipped with a laser communications system that can deliver data 10 times faster than current technologies, Facebook said in a promotional video.

    The NTSB hasn’t yet released any of its preliminary findings on the extent of the damage or the potential causes of the failure.
