Published: 15 November 2021

Towards an ethical regulation of artificial intelligence

Ms. Stephanie Attias, Founder and CEO of Regulate-Me.com, gives her take on the world of AI

Robots, cyborgs, automated cars… Do you remember when such futuristic developments were far beyond our wildest dreams? In 2021, artificial intelligence (AI) has become our new reality and our favourite tool to increase productivity. Tech innovators are pushing the boundaries of AI and these innovations are impacting the way we live our lives. We wake up in our smart homes, unlock our phones with biometric recognition, receive predictions on our search needs, listen to music recommended to us by machines according to our personal taste, use apps to find love and drive cars with integrated GPS systems that tell us where to go. We are at the dawn of a new age.

Welcome to the AI era! A smart reality filled with advantages, but one that needs to be monitored daily to avoid dangers in our personal lives and the manipulation of our human behaviour. Today, our digital society finds itself in a place it has never been before, a place where we need to determine what is real and what is not. AI opportunities and evolutions are endless, and this makes us wonder: How far is too far in AI? Will the European regulation of AI impact your business? How does data protection apply to AI systems, and will AI regulatory fines trump GDPR fines?

AI: How Far is Too Far?

Humanising machines to the point they feel life-like, automating job interviews, monitoring streets with drones, using robotic warfare, triggering medical treatments via automated machine decisions, conducting open surgery by robots... To what extent is technological evolution acceptable in our society?

The tech world is fascinating and scary at the same time. While opportunities and evolution are endless, technology can also come with major risks. Indeed, since AI systems can be created and used by people with both good and bad intentions, the biggest challenge will be to protect citizens against AI misuse. Legal, psychological and ethical issues will need to be resolved to maximise the benefits of AI technology while reducing risks.

While AI takes technology to the next level by pushing human ability and creativity into unknown territories, how far should we go? What the majority of the population sees as unacceptable is the idea of machine learning manipulating our emotions. Humans seem to be more comfortable with the idea of automating manual tasks involving precision and calculations, rather than tasks involving emotions, intuition and fundamental values. People like to stay in control: they like the idea of being empowered by technology, but worry as soon as they feel replaced by machines.

The main goal is therefore to align AI goals with human goals, so that humans don't eventually lose control. To this end, AI should be frequently verified and secured to ensure it remains ethical and continues to benefit our society. As stated by Amit Ray: "As more and more artificial intelligence is entering into the world, more and more emotional intelligence must enter into leadership." Indeed, while AI is empowering, it is not perfect (yet) and still needs to be checked for mistakes. Since the biggest harm could come from code or an algorithm that looks right but is in fact wrong, human oversight is key: to verify data, making sure it is free of bias and offences, and to check code for errors and licensing issues - before such information is published or entered into the AI system.

The European Commission’s Proposed AI Regulatory Framework

According to Margrethe Vestager, Executive Vice-President of the European Commission, "The EU is spearheading the development of new global norms to make sure AI can be trusted". Europe’s main focus is to encourage "Ethical Artificial Intelligence" and make sure adequate security, user safety, fundamental rights and ethics are being respected by technological innovations - and their creators.

On 21 April 2021, the European Commission published a proposal for an Artificial Intelligence legal framework (the new "AI Regulation"), accompanied by a proposal for a new Regulation on Machinery Products, which focuses on the safe integration of AI systems into machinery. In November 2021, the European Commission also launched a €3m ($3.5m) project, Robotics4EU, to boost the adoption of responsible robotics based on machine learning in Europe.

To understand the new rules proposed by the EU, let's define AI according to the EU, analyse whether this regulation could apply to your business, describe the obligations of high-risk AI system providers, and address issues relating to biometric identification.

What Falls Under The EU Definition of "Artificial Intelligence"?

The EU takes a broad approach to defining artificial intelligence, describing AI as: any software that can, following a set of human-defined inputs, generate outputs such as content, predictions, recommendations or decisions influencing the environments with which they interact. This means that a simple search, where the user inputs data into a search engine and receives machine output, could in itself be considered AI if this action is generated by a specific AI technique such as machine learning, logic- and knowledge-based approaches, or statistical approaches.

Will the EU Regulation of AI Apply to Your Business?

The proposed EU regulation on AI will apply to both providers and users, from public and private sectors, located inside and outside the EU, as long as the AI system is placed on the European market or as long as the system’s use affects people located in the EU. AI System Providers will have to provide details on how their system should be used, and users will have to use the AI system in a compliant way.

Since some technologies represent less risk than others, the EU has decided to create a framework for the use of AI following a risk-based approach, according to whether the AI poses unacceptable risk, high risk or low risk. According to the EU, low risk includes AI systems where there is a risk of manipulation. For example, if your business uses chatbots, specific transparency requirements will be imposed to limit the risk of algorithmic bias, and you will need to inform your users that they are interacting with a machine. According to the EU, a risk is deemed unacceptable, and should be banned, if it poses a clear threat to people's security, fundamental rights or social values. Some examples of unacceptable risk include: the use of AI to manipulate human behaviour, the exploitation of children's vulnerabilities, social scoring by governments, and live facial recognition in public spaces, though several exemptions exist for national security and other purposes.
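To make this tiering concrete, here is a minimal sketch in Python of how such a risk classification might look. The tier names follow the proposal's categories, but the use-case labels and the mapping itself are illustrative assumptions made for this example, not an official taxonomy:

# Hypothetical mapping of example AI use cases to the risk tiers described
# in the EU proposal. Labels and groupings are illustrative assumptions.
RISK_TIERS = {
    "unacceptable": {"social_scoring_by_government", "behaviour_manipulation",
                     "exploitation_of_children", "live_facial_recognition_public"},
    "high": {"biometric_identification", "recruitment_screening",
             "credit_scoring", "law_enforcement_tools"},
    "limited": {"chatbot", "deepfake_content"},
}

def classify_risk(use_case: str) -> str:
    # Return the risk tier for a use case, defaulting to minimal risk.
    for tier, use_cases in RISK_TIERS.items():
        if use_case in use_cases:
            return tier
    return "minimal"

print(classify_risk("chatbot"))                # limited: transparency duties apply
print(classify_risk("recruitment_screening"))  # high: conformity assessment needed

The point of the sketch is simply that obligations attach to the tier, not to the technology as such: the same underlying model could land in different tiers depending on its intended use.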

It is therefore important to determine how much risk your AI system represents. After defining unacceptable risk and low risk, let’s focus on high-risk AI systems and the obligations for their providers.

High-Risk AI Systems: What Are The Obligations For Providers?

What is considered high-risk AI according to the EU? The regulatory proposal defines high-risk AI systems as those creating an adverse impact on people's safety, social values and/or their fundamental rights. Some examples of high-risk AI include biometric identification, and AI decisions touching on important personal interests in areas such as recruitment, HR technology used to promote, manage or terminate employees, education, healthcare and law enforcement.

According to the European Commission, high-risk AI systems should not be banned, but they should provide "adequate risk assessment and mitigation systems", along with "appropriate human oversight measures to minimise risk." Therefore, before placing a high-risk AI system on the EU market, the proposed regulation requires companies providing AI in high-risk areas to demonstrate to regulators that their system complies with mandatory requirements for trusted AI systems. Such mandatory requirements include: proof of the AI system's safety, risk assessments, technical documentation, traceability explaining how the technology makes decisions, record keeping, data quantity and quality, transparency, human oversight, accuracy, robustness and cybersecurity.

Tip: The key for your business is to encourage technological innovation while safeguarding technology and data protection with constant human oversight. Make sure to remain transparent, and check that your system's algorithms are not overly complex and opaque, as this could be a problem for regulators. Even if your business represents only minimal risk, you are encouraged to create voluntary codes of conduct and remain compliant.

How Does the EU Proposal Address Remote Biometric Identification?

Covid-19 and remote work have significantly contributed to companies shifting to biometrics, instead of passwords, to secure their assets online. Under the new rules, all AI systems intended to be used for emotion recognition and remote biometric identification will be considered high risk and subject to a third-party conformity assessment, including documentation and human oversight requirements. Biometric identification can take different forms: it can be used for user authentication, for facial recognition for identity verification in financial institutions (including AML/KYC compliance), or for identity verification at border crossings to check a person's identity against their travel documents. However, the use of real-time remote biometric identification in public spaces continues to pose risks for fundamental rights, human dignity, private life, data protection and non-discrimination. Indeed, while such systems can reach a 99% accuracy rate, even 1% is extremely risky when the result leads to the suspicion of an innocent person. It is important to remember that the accuracy of facial recognition systems can vary significantly based on a wide range of factors: camera quality, light, distance, database, algorithm, and the subject's ethnicity, age or gender. That is why, in October 2021, the European Parliament passed a non-binding resolution calling for a ban on law enforcement use of facial recognition technology in public places, unless it is used to fight "serious" crime, such as kidnapping and terrorism. Europe's approach stands in contrast to that of the US, which continues to embrace and expand facial recognition even in light of studies showing the potential for ethnic, racial and gender bias.
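To see why even a 1% error rate matters at scale, consider a quick back-of-the-envelope calculation in Python. The daily scan volume below is an assumed figure for illustration, not a real deployment statistic:

# Even a 99%-accurate facial recognition system produces many erroneous
# results at scale. The scan volume is a hypothetical assumption.
scans_per_day = 100_000   # assumed daily scans in one public space
error_rate = 0.01         # the 1% error rate discussed above

erroneous_results = scans_per_day * error_rate
print(f"{erroneous_results:.0f} erroneous results per day")  # prints: 1000

A thousand misidentifications a day, each one a potential innocent suspect, illustrates why regulators treat this use case as high risk regardless of the headline accuracy figure.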

As you can see, data is crucial to AI systems! Data is to AI what food is to humans. High-quality data and data testing will help your business make sure its AI systems are accurate and non-discriminatory.

The Application of Data Protection to AI Systems

Artificial intelligence and data protection: how do they interact? It is no secret that in 2021, we are our data. Our data is used to program algorithms allowing machines to understand each of us, sometimes even better than we understand ourselves. AI is a broad notion referring to technologies or systems enabling a computer to perform tasks involving a simulation of human intelligence, including decision making or learning. Machines are fed data and learn by finding patterns in that data. However, to do so, the technology must collect voluminous amounts of data, known as "Big Data", much of it personal data. Once the machine finds a pattern, it can then use this pattern to create products which humans would never be able to create on their own.

Data is precious, and the way it is collected has changed in recent years, in part due to the EU's General Data Protection Regulation (GDPR) and other data protection laws worldwide. However, while GDPR approaches technology in a neutral way, covering "any processing of personal data regardless of the technique used to process such data," the European regulation on AI focuses on one technology only: artificial intelligence. The EU's legal framework for AI also complements GDPR by describing how to achieve fairness. GDPR states that fairness must be ensured, but not how. The EU regulation of AI clarifies this point by describing how to achieve fairness through: data governance, data specificity, bias monitoring, detection and correction, automatic logging of output, quality management systems and a strategy to achieve regulatory compliance. It is important to note that, unlike GDPR, the EU regulation of AI requires ongoing risk management for the entire life of the AI system and post-market surveillance, even after a product is placed on the market.
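To illustrate two of these fairness measures, bias monitoring and automatic logging of output, here is a minimal sketch in Python. The disparity threshold, group labels and decision data are all assumptions made for the example, not values prescribed by the regulation:

import logging
from collections import defaultdict

logging.basicConfig(level=logging.INFO)

def monitor_bias(decisions, max_disparity=0.2):
    # decisions: list of (group, approved) pairs, where approved is 0 or 1.
    # Every output is logged automatically; any group whose approval rate
    # deviates from the overall rate by more than max_disparity is flagged.
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        logging.info("output logged: group=%s approved=%s", group, approved)
        totals[group] += 1
        approvals[group] += approved
    overall = sum(approvals.values()) / sum(totals.values())
    for group in totals:
        rate = approvals[group] / totals[group]
        if abs(rate - overall) > max_disparity:
            print(f"Possible bias: group {group} approval rate {rate:.0%} "
                  f"vs overall {overall:.0%}")

# Hypothetical decision data: group A is approved far more often than group B.
monitor_bias([("A", 1), ("A", 1), ("A", 1), ("B", 0), ("B", 0), ("B", 1)])

A real compliance programme would of course go much further (tamper-proof logs, statistically sound fairness metrics, documented correction procedures), but the sketch shows the basic loop: log every output, measure outcomes per group, and flag disparities for human review.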

Tip: It is necessary for your business to establish an effective data policy for AI, even if you already have data protection measures in place pursuant to GDPR requirements. You can no longer "just assume" that because your business already collects data, you can pass this data to the AI developer for new applications. AI projects need historical data, and this requires several decisions to be made by your team: Which data is most important? How far back do you need to go with data collection? Should a proof of concept be used? Can data be "cleaned" for AI use? Making the right decisions will be crucial for your business, especially since an entirely new data policy might be required, along with new working practices to keep feeding your AI system with the right quality of data.

Since the AI regulation trumps GDPR when it comes to fines, our tip is to ensure you remain compliant at all times. Beware: violating the AI regulation could lead you to face fines reaching €30m, or 6% of your total worldwide annual turnover for the preceding financial year, whichever is greater! This is a significant increase compared to GDPR fines, which are already high (€20m or 4% of your annual turnover, whichever is greater).
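As a quick worked example of these "whichever is greater" formulas, here is a short Python snippet. The turnover figure is hypothetical:

def max_fine(turnover_eur, cap_eur, pct):
    # Applicable maximum fine: the fixed cap or the percentage of
    # worldwide annual turnover, whichever is greater.
    return max(cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # assumed €2bn worldwide annual turnover

ai_fine = max_fine(turnover, 30_000_000, 0.06)    # AI proposal: €30m or 6%
gdpr_fine = max_fine(turnover, 20_000_000, 0.04)  # GDPR: €20m or 4%

print(f"Maximum AI regulation fine: €{ai_fine:,.0f}")  # €120,000,000
print(f"Maximum GDPR fine: €{gdpr_fine:,.0f}")         # €80,000,000

For a business of this size, the percentage-based ceiling dominates the fixed cap, and the gap between the two regimes is a full €40m.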

Around the Regulatory AI Globe: What’s Next?

To conclude, we are seeing a global shift towards the adoption of regulatory measures relating to AI. While the EU awaits the adoption of the European Commission's AI proposals, the US is using AI in the federal government, has created a new National AI Initiative Office for federal coordination, and the White House issued its Guidance for Regulation of AI Applications in November 2020, establishing a framework for US federal agencies to assess potential regulatory and non-regulatory approaches to emerging AI. Further around the globe, China has also adopted AI regulations, with policies ensuring state control of Chinese companies and the storage of valuable data within the country. Finally, in the UK, data privacy and biometrics regulators are also seeking to improve, or even remove, employee monitoring via facial recognition and surveillance camera operations.

We are facing unprecedented times, and the outcomes of the coming years will entirely reshape the use of new technology. According to Eliezer Yudkowsky, AI theorist: "The greatest danger of artificial intelligence is that people conclude too early that they understand it". Stay Tuned.