Huawei: AI and Data Protection Challenges and Responses with Data Regulators

Huawei calls for shared responsibility to enable a more reliable use of Artificial Intelligence.
Susan Fourtané

Every disruptive technology in the history of mankind has changed the world in one way or another. In the case of Artificial Intelligence (AI), industry analysts agree that AI has the potential to change every industry on Earth as well as every organization. Even though the industry does not yet have a unified definition of AI, its basic characteristics include handling complex goals, collecting and combining varying amounts of data, extracting information and knowledge, learning autonomously, and making automated decisions at varying levels.

The two main types of commercial-level AI applications are internal enterprise applications, which are used to improve work efficiency within the enterprise, and applications enabling vertical industries to improve automation, enhance capabilities, inspire innovation, or some combination of these.

We can distinguish four key roles in activities involving AI: 

  • Consumers/customers

  • Deployers

  • Solution providers

  • Data collectors

These characteristics, applications, and activities provide the basis for the governance of AI security and privacy protection, according to Huawei.

Bringing substantial opportunities and benefits is only one side of the coin, though. This exciting yet challenging general-purpose technology also has a dark side: challenges in security and privacy protection. According to Huawei, the healthy development of Artificial Intelligence relies on the governance of AI security and privacy protection.

Recently, Huawei and Beijing Normal University, together with data regulators in the EU, U.K., and Hong Kong, called on governments, standards organizations, and industries worldwide to join hands to advance AI data and privacy protection and trustworthiness, and to establish global data protection standards for AI. Huawei hopes to encourage broader discussion of the challenges AI poses to data protection in various regions, and to prompt further reflection and policy progress among regulators.

John Suffolk, Global Cyber Security and Privacy Officer at Huawei, introduced Huawei's privacy protection practices, including its governance of AI security and privacy protection, at the AI and Data Protection: Global Challenge, Global Response side event of the 41st International Conference of Data Protection and Privacy Commissioners (ICDPPC), themed Convergence and Connectivity: Raising Global Data Protection Standards in the Digital Age and hosted in Tirana, the capital of Albania, earlier this month.

He said that technology is rapidly moving beyond existing policies and legal frameworks, especially in the AI field. He also said that we should not wait for ICT development to force the formulation of policies and legal frameworks; instead, we should take immediate action to develop policies and legal frameworks that can maximize development opportunities and prevent unintended consequences.

Huawei proposed that "as a new momentum in the digital age, AI is in urgent need of addressing data and privacy protection issues and establishing global standards." The proposal has been widely recognized by industry leaders, who have also expressed their views on the challenges to AI privacy and data protection from the perspectives of policies, laws, and industry practices.


In its Thinking Ahead about AI Security and Privacy Protection: Protecting Personal Data and Advancing Technology Capabilities whitepaper (PDF), Huawei says that although AI is transforming numerous industries and having a profound impact on them, there is still no unified definition of it. The paper states the basic characteristics of AI as follows: 

  • Handling complex goals

  • Collecting and combining varying amounts of data

  • Extracting information and knowledge, and learning autonomously

  • Making automated decisions at different levels 

Can we all agree on an AI definition, please? 

  • According to the Ethically Aligned Design First Edition paper released by IEEE in 2019, AI can be defined as "the design, development, deployment, decommissioning, and adoption of autonomous or intelligent software when installed into other software and/or hardware systems that are able to exercise independent reasoning, decision-making, intention forming, and motivating skills according to self-defined principles."

  • The Ethics Guidelines for Trustworthy AI, released by the European Commission's High-Level Expert Group on Artificial Intelligence (AI HLEG), also in 2019, says that "Artificial Intelligence (AI) systems are software (and possibly also hardware) systems designed by humans that, given a complex goal, act in the physical or digital dimension by perceiving their environment through data acquisition, interpreting the collected structured or unstructured data, reasoning on the knowledge, or processing the information, derived from this data, and deciding the best action(s) to take to achieve the given goal. AI systems can either use symbolic rules or learn a numeric model, and they can also adapt their behavior by analyzing how the environment is affected by their previous actions."

  • The Artificial Intelligence Security White Paper, released by the China Academy of Information and Communications Technology (CAICT) in 2018, says that "AI enables intelligent machines or intelligent systems on machines. It studies and develops theories, methods, and technologies for simulating, extending, and expanding human intelligence, perceiving the environment, obtaining knowledge, and using knowledge to reach optimal results."

  • A Proposed Model AI Governance Framework, released by Singapore's Personal Data Protection Commission (PDPC) in 2019, says that "Artificial Intelligence (AI) refers to a set of technologies that seeks to simulate human traits such as knowledge, reasoning, problem solving, perception, learning, and planning."

AI challenges in security and privacy protection


According to the Thinking Ahead about AI Security and Privacy Protection: Protecting Personal Data and Advancing Technology Capabilities whitepaper (PDF), the security and privacy protection of AI development faces three broad challenges, each involving smaller ones. Industry standards and specifications indicate that technical reliability, societal applications, and legal requirements and responsibilities are the main challenges to AI security and privacy protection.

  • Technical reliability: Smaller challenges within this broad area include deep neural networks' (DNNs) lack of robustness, which leaves them susceptible to evasion attacks. The lack of transparency and explainability of these complex systems may infringe legal or regulatory requirements, such as the GDPR's rules on automated decision-making. In addition, the absence of comprehensive data security protection can result in data breaches, tampering, theft, and misuse of vast amounts of data. In autonomous driving, for example, evasion attacks can lead to traffic offenses and even trigger accidents (a minimal code sketch of such an attack follows this list). In healthcare, attackers can introduce significant errors into the dosages recommended by AI models by adding only a small amount of malicious data, with potentially serious consequences. The paper states that although the accuracy rate of AI-based cancer screening appears to be high, doctors agree with only about half of such results because they feel the results lack reasoning and logic, although this claim could be argued or investigated further.

  • Societal applications: Smaller challenges within this broader area include the lack of management and control over the purposes of AI, which in turn leads to AI being misused, and data quality issues that lead to biased and unfair judgments. A serious challenge arises when application developers and deployers with insufficient knowledge and capabilities misuse AI systems, causing serious security and privacy incidents. Case in point: facial recognition software incorrectly matching people's photos with criminals' mug shots, with a high false match rate due to improper parameter settings (see the second sketch after this list).

  • Legal requirements and responsibilities: Clearly defining the rights and responsibilities of stakeholders is not yet possible, since laws and regulations, such as rules on autonomous driving and algorithm accountability, are still lacking. A clear case: when an autonomous vehicle killed a pedestrian, it prompted discussions about the supervision and legal responsibilities of autonomous vehicles. Who should be held legally responsible, the machine or the human who programmed it? The question also arises when the autonomous system suffers a technical problem, say a broken wire, that is unrelated to its actual programming but causes a malfunction that sends the vehicle out of control.
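To make the evasion-attack risk concrete, here is a minimal sketch, assuming a toy linear classifier with randomly generated weights standing in for a trained sign detector; the model, weights, and inputs are invented for illustration and do not come from Huawei's whitepaper. A small, bounded change to every input feature, computed from the model's gradient, is enough to flip the decision:

```python
# Minimal evasion-attack (FGSM-style) sketch on a hypothetical linear model.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical trained weights of a linear "stop sign" detector:
# a score above 0 means the model reports "stop sign".
w = rng.normal(size=100)

def predict(x):
    return "stop sign" if x @ w > 0 else "no sign"

# A legitimate input, constructed so its clean score is exactly 1.0.
x = w / (w @ w)
print(predict(x))                  # -> "stop sign"

# FGSM-style evasion: step each feature against the gradient of the score.
# For a linear model, the gradient with respect to x is simply w.
epsilon = 0.05                     # per-feature perturbation budget
x_adv = x - epsilon * np.sign(w)

print(predict(x_adv))              # -> "no sign": the detector is evaded
print(np.abs(x_adv - x).max())     # each feature moved by at most 0.05
```

The same principle carries over to deep networks, where the gradient is obtained by backpropagation rather than read off directly, which is why the lack of robustness noted above is hard to engineer away.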
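The facial recognition example above largely comes down to a single parameter. The second sketch below, using invented similarity-score distributions for same-person and different-person pairs (not data from any real system), shows how a loosely set match threshold inflates the false match rate:

```python
# Minimal sketch: how the match threshold drives a face-recognition
# system's false match rate. Score distributions are invented.
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical similarity scores: higher means "more likely the same person".
genuine  = rng.normal(loc=0.80, scale=0.08, size=10_000)  # same-person pairs
impostor = rng.normal(loc=0.55, scale=0.10, size=10_000)  # different-person pairs

def rates(threshold):
    false_match = np.mean(impostor >= threshold)    # wrongly declared a match
    false_nonmatch = np.mean(genuine < threshold)   # wrongly rejected
    return false_match, false_nonmatch

for t in (0.60, 0.70, 0.80):
    fmr, fnmr = rates(t)
    print(f"threshold={t:.2f}  false match rate={fmr:.1%}  "
          f"false non-match rate={fnmr:.1%}")

# At a loose threshold like 0.60, roughly a third of impostor pairs are
# declared matches; tightening it trades false matches for false
# non-matches, which is why deployers must set this parameter deliberately.
```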

In order to clarify AI stakeholders' responsibilities for building a digital world, Huawei states: "We call on governments, standard organizations, end-users, and the industry as a whole to reach consensus and work together to develop new codes of conduct, standards, and laws specific to AI and its use cases. Huawei and its global partners will work together to review their tasks based on business scenarios, further clarify responsibilities and activities, and provide systematic reasoning and governance methods to jointly provide people with AI services that can ensure security and privacy."

According to Huawei, no single organization or company has sufficient resources to tackle the increasingly complex security and privacy risks and threats to AI.

Therefore, there is an urgent need for industry leaders to join forces. By working together to develop new targeted codes of conduct, defined standards, and new laws that enhance AI security and privacy protection, it will be possible to ensure the responsible and ethical deployment of AI in the future. AI is a shared responsibility. And this is just the beginning.
