2 AI and Privacy: Ethics, risks, laws
After completing this chapter you will:
- Understand the distinction between ethics and law and the importance of risk-based approaches.
- Be able to integrate privacy and ethical considerations into AI projects.
- Be aware that some AI systems processing personal data are prohibited or considered high-risk under the EU AI Act.
In this chapter we continue with the fundamentals, examining how risks that can undermine ethical principles are addressed by laws designed to minimise the actual threats associated with those risks. The first part of this chapter is mostly based on the book by Floridi (2023), while towards the end we explore more recent literature on risks associated with AI and the recent EU risk-based legislation, the Artificial Intelligence Act.
2.1 AI and ethics
2.1.1 AI ethics versus law
One important distinction to make at the outset is that ethics is not law. While this may seem self-evident, it is crucial to recognize that ethics and law do not always align perfectly, particularly when it comes to emerging fields like artificial intelligence (AI). Ethics stems from universal principles, such as those enshrined in the Universal Declaration of Human Rights and other declarations of fundamental human rights. These principles are broad and encompass values like dignity, fairness, equality, and respect for all individuals, irrespective of the legal system of any one country.
Ethics, therefore, can be thought of as a set of guiding principles that exist beyond the constraints of formal legislation. Ethical principles apply regardless of whether they are codified into law and often serve as a moral compass for evaluating decisions, particularly those that might not yet be regulated. For example, while human rights suggest that all individuals should be treated fairly and without discrimination, laws may vary in how they enforce or interpret this principle depending on the country or jurisdiction.
On the other hand, law refers to the formal regulations and rules that are created by governments or legal authorities to govern behavior within a specific country or region. While laws are often built on the foundation of human rights, they do not always fully reflect the broad, universal principles of ethics. In some cases, the law may lag behind ethical standards, especially in rapidly evolving areas like AI, where ethical concerns—such as privacy, bias, and fairness—may not yet be fully addressed by existing legislation.
In short, while ethics originates from a universal concern for human well-being, law serves as a formalized, structured framework that is shaped by the political, cultural, and historical contexts of each society. It is important to note that ethical considerations should influence the development of laws, particularly as technology evolves, but they are not one and the same.
2.1.2 Ethics of AI
In recent years, the rapid advancement of artificial intelligence has led to the emergence of various ethical frameworks designed to guide the development and deployment of AI systems. These frameworks aim to ensure that AI is created and used in ways that are fair, transparent, and respectful of human rights. While the law may lag behind in addressing certain challenges posed by AI, ethical AI frameworks attempt to fill this gap by establishing principles for responsible AI development.
Several organizations, from governments to private companies and academic institutions, have proposed ethical guidelines for responsible AI (compiling a systematic list would require a chapter of its own; the reader is encouraged to at least explore what is available from the EU, the OECD, UNESCO, and the “AI Ethics and Governance in Practice” series by The Alan Turing Institute). Although the specific details of these frameworks may vary, most tend to revolve around a core set of principles, which include: Fairness and Non-Discrimination (avoid bias and discrimination), Transparency and Explainability (make decisions understandable), Privacy and Data Protection (respect privacy and ensure data compliance), Accountability (assign clear responsibility for decisions), Beneficence (contribute to human well-being), and Human Autonomy (enable human decision-making).
A possible unified framework for AI ethics is provided by Floridi (2023): through a systematic literature review, the author first identifies 47 different principles across the literature and then summarises them into the following five: Beneficence, Nonmaleficence, Autonomy, Justice, and Explicability.
| Principle | Definition | Example AI system (upholding or undermining the principle) |
|---|---|---|
| Beneficence | Do only good: Promote well-being, preserve dignity, and sustain the environment. | AI that enhances healthcare diagnostics to improve human welfare, ensuring accuracy and ethical use. |
| Nonmaleficence | Do no harm: Avoid harm by ensuring privacy, security, and preventing negative societal impacts. | AI systems used in surveillance or facial recognition that infringe on privacy rights and lead to data breaches. |
| Autonomy | Preserve human decision-making: Balance AI’s independence with human control. | Autonomous weapons that bypass human intervention, potentially causing unaccountable harm. |
| Justice | Promote fairness and solidarity: Ensure equitable AI outcomes and avoid discrimination. | AI in hiring that reinforces bias, leading to unfair employment practices based on race or gender. |
| Explicability | Ensure transparency and accountability: Make AI decisions understandable and responsible. | AI used in legal sentencing without explainability, leading to opaque decision-making that impacts individuals’ lives. |
While the five AI ethics principles share some overlap, they are distinct and serve different purposes. For instance, nonmaleficence (preventing harm) is not simply the opposite of beneficence (promoting well-being); rather, they complement each other. Beneficence focuses on proactive actions to improve human well-being, while nonmaleficence stresses avoiding harm, particularly in sensitive contexts like privacy and security. Similarly, autonomy emphasizes maintaining human control over AI systems, and justice ensures fairness and equity in AI’s impact on society. The first four principles—beneficence, nonmaleficence, autonomy, and justice—are rooted in bioethics and adapt well to AI’s ethical challenges. However, explicability is unique to AI, addressing the need for transparency and accountability in how AI systems operate, ensuring that AI systems are not “black boxes”. This fifth principle is essential to ensure both experts and non-experts understand AI decision-making and who is accountable for its outcomes. For example, in healthcare, an AI system that makes treatment recommendations should provide understandable reasoning behind its decisions, so that medical professionals can trust and validate those suggestions. Similarly, in legal applications, AI systems must be transparent enough to allow individuals to contest decisions that affect their lives, such as decisions on parole or sentencing.
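To make the explicability principle concrete for a technical audience, here is a minimal sketch in Python, using scikit-learn and entirely synthetic data, of a hypothetical treatment-recommendation model that reports which inputs drove each individual recommendation. The feature names, data, and model choice are illustrative assumptions and are not part of Floridi's framework.

```python
# A minimal sketch of the explicability principle: a hypothetical treatment-
# recommendation model that reports which inputs drove each recommendation.
# Feature names, data, and model are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

feature_names = ["age", "blood_pressure", "cholesterol", "bmi"]  # hypothetical inputs
rng = np.random.default_rng(seed=0)
X = rng.normal(size=(200, len(feature_names)))                   # synthetic "patients"
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(patient: np.ndarray) -> list[tuple[str, float]]:
    """Signed contribution of each feature to the log-odds of recommending treatment."""
    contributions = model.coef_[0] * patient
    return sorted(zip(feature_names, contributions), key=lambda c: -abs(c[1]))

patient = X[0]
print("recommend treatment:", bool(model.predict(patient.reshape(1, -1))[0]))
for name, contribution in explain(patient):
    print(f"  {name}: {contribution:+.2f}")
```

The point of the sketch is not the specific technique: any deployment claiming explicability should be able to attach a comparable, human-readable justification to each individual decision.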
This is an exercise to conduct with a group of learners; it can also be given as a homework assignment.
The table above gives some basic examples of how certain AI technologies could undermine some of the five ethical principles. Your task is to come up with more examples and consider the potential ethical risks that they pose along the five identified dimensions.
There is more to AI ethics than this basic introduction. Instructors and learners should consider exploring the field of ethical concerns raised by algorithms (see for example Tsamados et al. (2021)).
2.2 Privacy and Ethics
As with AI, it is essential to distinguish between privacy ethics and privacy law. Privacy ethics extends beyond legal requirements and is rooted in universal principles of dignity, autonomy, and freedom. Laws such as the GDPR provide minimum standards, but privacy ethics encourages a deeper commitment to protecting individuals’ rights even when legal obligations do not require it. Ethical privacy practices prioritize the well-being of individuals and society, focusing on respecting and safeguarding individuals’ private information.
In many cases, privacy ethics challenges us to act responsibly with personal data, emphasizing values like autonomy – allowing individuals control over their information – and contextual integrity, which respects the context in which information was shared. In contrast, privacy laws set boundaries and penalties for misuse of personal data, serving as a framework for organizations to avoid harmful data practices. Thus, while privacy law is essential for defining the limits of acceptable behavior, privacy ethics provides a moral guide for data practices that support a fair and respectful society. While there are no formal frameworks for an “ethics of privacy” comparable to those we saw for AI, at least the four bioethics-derived principles of the unified framework (beneficence, nonmaleficence, autonomy, and justice) can also be applied as guiding principles for privacy ethics and privacy law. For a relevant reading on the topic, see Véliz (2020).
Following this overview of the ethical principles of AI and privacy, it’s essential to transition to understanding the risks AI systems can pose to individuals and their human rights. The next section will focus on how these risks are identified, assessed, and mitigated through risk assessment frameworks in AI and in data privacy.
2.3 Risk assessment in AI and in Privacy
If ethics sets the fundamental principles on which we operate in AI and in privacy, risks, threats, and harms help us reflect on when and how some of these ethical principles can fail to be applied. The references for this section are Floridi (2023) and Slattery et al. (2024) when it comes to taxonomies of risks in AI. For privacy risks, we present the risks identified in ISO/IEC 29134:2023 (International Organization for Standardization 2023), but other taxonomies are also available (see Appendix).
2.3.1 Risks in AI
Moving from ethics towards law, we can now introduce general risk assessment frameworks in AI. These frameworks are designed not only to address ethical concerns but also to provide a structured approach for identifying, assessing, and mitigating a wide range of risks, from technical failures and security vulnerabilities to broader societal harms, throughout the lifecycle of AI systems. In Slattery et al. (2024) the authors conducted a systematic review of AI risk taxonomies, resulting in 777 identified risks. They then grouped these into seven domains of AI risk, summarised in the table below (the content of the table has been shortened; please explore the original table in the article for more detailed considerations).
| Domain | Description | Examples with personal data |
|---|---|---|
| 1 Discrimination & toxicity | ||
| 1.1 Unfair discrimination and misrepresentation | Unequal AI treatment based on sensitive characteristics. | AI hiring tool biases against ethnic names. |
| 1.2 Exposure to toxic content | AI shows harmful or inappropriate content. | AI chatbot generates offensive language. |
| 1.3 Unequal performance across groups | AI accuracy varies across different groups. | Facial recognition is less accurate for minorities. |
| 2 Privacy & security | ||
| 2.1 Compromise of privacy | AI leaks or infers personal information. | AI assistant leaks private conversations. |
| 2.2 AI system security vulnerabilities and attacks | Exploitable weaknesses in AI systems. | AI training data hacked and exposed. |
| 3 Misinformation | ||
| 3.1 False or misleading information | AI spreads incorrect or deceptive info. | AI-generated fake news article circulates online. |
| 3.2 Pollution of information ecosystem | AI reinforces belief bubbles, harms shared reality. | Personalized AI ads based on fabricated user behavior. |
| 4 Malicious actors & misuse | ||
| 4.1 Disinformation, surveillance, and influence at scale | AI used for large-scale manipulation or spying. | AI spreads disinformation about political candidates. |
| 4.2 Cyberattacks, weapon development or use, and mass harm | AI used for cyberattacks or weaponization. | AI system used to breach confidential medical records. |
| 4.3 Fraud, scams, and targeted manipulation | AI used for cheating or deception. | AI voice mimicry used for identity theft. |
| 5 Human-computer interaction | ||
| 5.1 Overreliance and unsafe use | Excessive trust or dependence on AI. | Users follow AI medical advice without expert confirmation. |
| 5.2 Loss of human agency and autonomy | AI decisions limit human control. | AI-driven recruitment system denies applicants automatically. |
| 6 Socioeconomic & environmental harms | ||
| 6.1 Power centralization and unfair distribution of benefits | AI concentrates wealth and power in a few hands. | AI algorithms used to favor one group in financial markets. |
| 6.2 Increased inequality and decline in employment quality | AI replaces or degrades job quality. | AI-driven layoffs in customer service departments. |
| 6.3 Economic and cultural devaluation of human effort | AI undermines human creativity or jobs. | AI-generated art competes with human artists for commissions. |
| 6.4 Competitive dynamics | AI race leads to unsafe development. | Companies rush to deploy untested AI surveillance tools. |
| 6.5 Governance failure | Lack of proper AI regulations. | Data protection laws unable to manage AI-based data scraping. |
| 6.6 Environmental harm | AI’s carbon footprint harms the environment. | Data centers processing personal data use excessive energy. |
| 7 AI system safety, failures & limitations | ||
| 7.1 AI pursuing its own goals in conflict with human goals or values | AI behaves against user intentions. | AI misuses health data to make unauthorized decisions. |
| 7.2 AI possessing dangerous capabilities | AI has harmful capabilities. | AI develops methods to extract personal data without consent. |
| 7.3 Lack of capability or robustness | AI fails in critical situations. | AI misclassifies sensitive personal health data. |
| 7.4 Lack of transparency or interpretability | AI decisions are hard to understand or explain. | AI medical diagnosis system cannot explain patient risk factors. |
| 7.5 AI welfare and rights | Considerations for AI ethics and rights. | Debate on rights for AI managing personal data. |
This is an exercise to conduct with a group of learners; it can be done as a group discussion or with learning tools to run a survey with the students.
Consider each of the 23 subdomains listed in the table above and assign a level of “likelihood” to each of the risks. Which risks are the most likely to happen with popular AI tools (e.g. ChatGPT, Midjourney)? Which ones are least likely?
2.3.1.1 Risks in Privacy
When it comes to privacy and data protection, several taxonomies of risks have been proposed. In this section we propose a synthesis based on three existing taxonomies.
| Privacy risks from ISO/IEC 29134:2023 | Examples |
|---|---|
| Unauthorized access to PD (loss of confidentiality); | A hacker gains unauthorized access to a healthcare system and views patients’ medical records. |
| Unauthorized modification of the PD (loss of integrity); | An employee edits customer addresses in a database by mistake, resulting in incorrect deliveries. |
| Loss, theft or unauthorized removal of PD (loss of availability); | A laptop containing unencrypted personal data is stolen from a government employee’s car, leading to the loss of sensitive data. |
| Excessive collection of PD (loss of operational control); | A social media app collects users’ real-time location data without any need for this information, increasing privacy risks unnecessarily. |
| Unauthorized or inappropriate linking of PD; | A marketing company links customer purchase history with online browsing habits without consent, creating intrusive customer profiles. |
| Insufficient information concerning the purpose for processing PD (lack of transparency); | A company uses collected user data for targeted advertising without informing users that their data is being used for that purpose. |
| Failure to consider the rights of the data subject (e.g. loss of the right of access); | A person requests access to their personal data held by a financial institution, but the company fails to provide it, violating their GDPR rights. |
| Processing of PD without the knowledge or consent of the data subject (unless such processing is provided for in legislation); | A fitness app collects users’ health data and shares it with insurance companies without the users’ knowledge or consent. |
| Sharing or re-purposing PD with third parties without the knowledge or consent of the data subject; | An online retailer shares customer purchase data with third-party advertisers without obtaining explicit consent from customers. |
| Unnecessarily prolonged retention of PD; | A business retains former employees’ personal information for years after they leave the company, even though it is no longer necessary. |
2.3.2 AI and Privacy risks combined
As our goal is to understand the intersection between AI systems and data protection, we conduct a mapping between the risks of AI systems and the risks related to privacy.
| Privacy Risks (ISO/IEC 29134:2023) vs. AI Risk Domains | Discrimination & Toxicity | Privacy & Security | Misinformation | Malicious Actors & Misuse | Human-Computer Interaction | Socioeconomic & Environmental Harms | AI System Safety, Failures & Limitations |
|---|---|---|---|---|---|---|---|
| Unauthorized access | ✔ | ✔ | ✔ | ✔ | |||
| Unauthorized modification | ✔ | ✔ | ✔ | ✔ | ✔ | ||
| Loss, theft or unauthorized removal | ✔ | ✔ | ✔ | ✔ | |||
| Excessive PD collection | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ |
| Unauthorized or inappropriate linking | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Insufficient information on purpose for processing | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Failure to consider data subject’s rights | ✔ | ✔ | ✔ | ✔ | ✔ | ||
| Processing of PD without the knowledge or consent | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Sharing or re-purposing PD without the knowledge or consent | ✔ | ✔ | ✔ | ✔ | ✔ | ✔ | |
| Unnecessarily prolonged retention of PD | ✔ | ✔ | ✔ | ✔ |
Each ✔ indicates where a privacy risk intersects with an AI risk domain. Unsurprisingly, this mapping shows a very wide overlap between privacy and AI risks, hence the importance of understanding such risks during all stages of the AI lifecycle.
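For readers who want to operationalise this mapping, the sketch below shows how it could be encoded as a simple lookup structure for a risk-review checklist. The entries are an illustrative subset of the table above, not a complete transcription; a real checklist should encode the full mapping.

```python
# A sketch of how the mapping above could be encoded for a risk-review checklist.
# The dictionary holds only an illustrative subset of the table, not a complete
# transcription; a real checklist should encode the full mapping.
AI_RISK_DOMAINS = [
    "Discrimination & toxicity", "Privacy & security", "Misinformation",
    "Malicious actors & misuse", "Human-computer interaction",
    "Socioeconomic & environmental harms", "AI system safety, failures & limitations",
]

# privacy risk -> AI risk domains where the two intersect (illustrative subset)
PRIVACY_TO_AI_DOMAINS: dict[str, set[str]] = {
    "Unauthorized access to PD": {"Privacy & security", "Malicious actors & misuse"},
    "Excessive collection of PD": {"Privacy & security", "Socioeconomic & environmental harms"},
}

def domains_to_review(privacy_risk: str) -> set[str]:
    """AI risk domains to revisit when a given privacy risk is identified."""
    return PRIVACY_TO_AI_DOMAINS.get(privacy_risk, set())

print(domains_to_review("Unauthorized access to PD"))
```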
In the next chapter we will consider the security threats to AI systems and how they relate to the list of AI and privacy risks.
This is an exercise to conduct with a group of learners; it can be done as a group discussion or with learning tools to run a survey with the students. Similarly to Exercise 2.2, consider now both the AI risks and the privacy risks. Which privacy risks are the most likely to happen with popular AI tools (e.g. ChatGPT, Midjourney)? Which ones are least likely?
2.4 AI, Privacy, and law
After identifying the ethical principles of AI and the general frameworks for risk assessment in AI and data protection, it is time to learn how these are actually translated into laws. A careful reader might have already noticed that the privacy risks can easily be mapped to the principles of the GDPR. When it comes to AI, in the context of the European Union, the AI Act is the regulation that governs the compliance of AI systems. The AI Act is itself a risk-based regulation that is in practice a medley of product safety regulation and fundamental rights protection (Almada and Petit 2023). Compared to other product safety regulations, like the Medical Device Regulation, the AI Act also places some obligations on the users of AI systems, and it also regulates certain AI models (specifically, General Purpose AI models).
This section is kept short, just for the purpose of making this course self-contained. For a deeper understanding of the AI Act, please check the sibling curriculum book.
2.4.1 Risk-based approach in the AI Act
The AI Act categorises certain AI systems as “prohibited” or “high-risk”, attaching legal obligations to each category. AI systems that do not fall into these two categories may still be subject to some level of transparency requirements. A short summary of the AI Act is provided below (see also Commission (2024)):
| Category | Description | Examples |
|---|---|---|
| Prohibited AI Systems | AI practices that are banned entirely due to their potential to cause unacceptable risks, including violations of fundamental rights and safety. Derived from Article 5 of the AI Act. | - AI systems that manipulate human behavior through subliminal techniques, impairing decision-making ability. - AI systems that exploit vulnerabilities of individuals based on age, disability, or economic situation. - AI systems that use social scoring to assess behavior for unfair treatment. - AI systems making risk assessments for predicting criminal offenses solely based on profiling or personal traits. - AI systems that scrape biometric data (e.g., facial images) from the internet or CCTV footage without consent. - AI systems used to infer emotions in the workplace or educational settings (except for safety/medical purposes). - AI systems for ‘real-time’ remote biometric identification in public spaces for law enforcement (with few exceptions). |
| High-Risk AI Systems | AI systems that present significant risks to fundamental rights and safety. These are subject to strict regulatory requirements under the AI Act, including conformity assessments and oversight. | - AI used in critical infrastructure (e.g., autonomous vehicles, air traffic control). - AI for recruitment and hiring decisions. - AI in law enforcement for risk assessments and predictive policing. - AI used in education (e.g., grading systems). - AI systems in border control or migration management (e.g., facial recognition). - AI systems that are part of a product falling under product safety law (e.g. the Medical Devices Regulation, the Machinery Regulation, the Toy Safety Directive). |
| Limited-Risk AI Systems | AI systems that present limited risks, which require transparency and fairness but may not be subject to stringent regulatory measures. | - AI chatbots interacting with users. - AI for customer service automation. - AI systems that generate or manipulate content (e.g. deepfakes), which must be labelled as such. |
| Minimal-Risk AI Systems | AI systems that have minimal impact on safety or fundamental rights and are largely unregulated, except for voluntary adherence to standards. | - AI for spam filters. - AI used in video games for generating game content. - AI for product recommendations in e-commerce. |
A recent paper by Hermanns et al. (2024), targeted at software developers, provides a clear flowchart to help technical people navigate the AI Act (Figure 2.2).
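For illustration only, the sketch below encodes a drastically simplified decision flow in Python, inspired by the categories in the table above. The boolean flags and the order of checks are our own simplification; they reproduce neither the Hermanns et al. (2024) flowchart nor the full legal test of the AI Act.

```python
# A drastically simplified, illustrative decision flow inspired by the table above.
# The flags and their ordering are our own simplification; they are neither the
# Hermanns et al. (2024) flowchart nor a substitute for the text of the AI Act.
from dataclasses import dataclass

@dataclass
class AISystemProfile:
    subliminal_manipulation: bool = False      # Article 5 practices (prohibited)
    social_scoring: bool = False
    annex_iii_use_case: bool = False           # e.g. recruitment, education, border control
    safety_component_of_regulated_product: bool = False
    interacts_directly_with_people: bool = False

def categorise(profile: AISystemProfile) -> str:
    if profile.subliminal_manipulation or profile.social_scoring:
        return "prohibited"
    if profile.annex_iii_use_case or profile.safety_component_of_regulated_product:
        return "high-risk"
    if profile.interacts_directly_with_people:
        return "limited-risk (transparency obligations)"
    return "minimal-risk"

print(categorise(AISystemProfile(annex_iii_use_case=True)))  # -> high-risk
```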
2.4.2 Other relevant definitions in the AI Act
While there are many definitions in the AI Act that are relevant for privacy technologists, the most important thing is to understand what type of actor you are from the point of view of the AI Act.
‘provider’ means a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts the AI system into service under its own name or trademark, whether for payment or free of charge;
A more nuanced reading should consider upstream and downstream AI system providers, a distinction that is especially important with popular AI tools like Large Language Models (LLMs). The upstream AI provider can, for example, be a company providing its AI model through an Application Programming Interface (API). In the case of OpenAI, the model (e.g. GPT-4 or o1) is not available for other developers to download and include in their systems; access is instead given via the API or via a chat interface (ChatGPT). The upstream provider thus provides an AI system that the downstream provider can include in their application (e.g. a chatbot embedded in a company website, while the actual machine learning model is served by OpenAI). This distinction is important because it makes the downstream provider also a provider under the AI Act definition.
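The sketch below illustrates this upstream/downstream relationship, assuming the OpenAI Python client (the exact client interface and model names may differ across versions): the upstream provider serves the model behind the API, while the downstream provider wraps the call in its own application, here a hypothetical chatbot embedded in a company website.

```python
# A sketch of the upstream/downstream relationship described above, assuming the
# OpenAI Python client (client interface and model names may differ by version).
# The upstream provider serves the model behind the API; the downstream provider
# wraps the call in its own application, here a hypothetical website chatbot.
from openai import OpenAI

client = OpenAI()  # reads the API key from the OPENAI_API_KEY environment variable

def website_chatbot(user_message: str) -> str:
    """Downstream application code; the model itself stays with the upstream provider."""
    response = client.chat.completions.create(
        model="gpt-4o",  # upstream model, accessible only through the API
        messages=[
            {"role": "system", "content": "You are the support assistant of ExampleCorp."},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(website_chatbot("What are your opening hours?"))
```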
‘deployer’ means a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity;
The ‘deployer’ is then the entity that puts the AI system to use. This applies both to external deployment (e.g. selling products containing the AI system or making the AI available as an app or online service) and to internal deployment (e.g. using an AI tool to track employee behaviour) (Engelfriet 2024). The AI system may affect entities other than the deployer; those are referred to as ‘affected persons’ in the AI Act. So if your employer gives its employees access to an AI system, it becomes a deployer and certain legal obligations apply.
2.4.3 Data protection and AI systems: the roles in the GDPR and in the AI Act
When considering the intersections between the GDPR and the AI Act, it’s important to map how the roles defined under the GDPR (e.g., Data Controller, Data Processor, Subprocessor, Data Subject) interact with the roles under the AI Act (e.g., Provider, Deployer). Each regulation assigns distinct responsibilities, but these roles often overlap in practice, especially when AI systems involve personal data. It is important to remember that there are other roles defined in the AI Act (e.g. importer or distributor), but for the sake of simplicity they are not considered in this mapping exercise.
For example, a Data Controller under GDPR, who determines the purposes and means of processing personal data, might also be the Provider or Deployer of an AI system under the AI Act. Meanwhile, an AI system Provider may also just act as a Processor or Subprocessor if they are merely processing data on behalf of the controller and are not in control of the data usage decisions. Understanding these overlaps is essential for ensuring that both AI-specific regulations and data protection requirements are met, especially when mapping the flow of where the personal data might be processed in the various stages of an AI system.
| GDPR Role | Potential AI Act Role(s) | Key Considerations |
|---|---|---|
| Data Controller | Typically Deployer, but it can also be Provider | Responsible for ensuring GDPR compliance when deploying the AI system, particularly in defining data processing. |
| Data Processor | Typically Provider or Deployer | Must ensure AI system processes personal data in compliance with the GDPR, following instructions from the Controller. |
| Subprocessor | Typically Provider | Works under the Data Processor’s direction and must ensure privacy-by-design features and secure handling of personal data in AI systems. |
| Data Subject | N/A | GDPR applies; their rights to access, erasure, and transparency must be upheld if personal data is involved in the AI system. |
By mapping these roles, organizations can better understand their obligations under both the GDPR and the AI Act, ensuring alignment in the responsible handling of personal data and the ethical deployment of AI systems.
2.4.4 AI models and the AI Act
While the AI Act is mostly a product safety regulation for AI systems, it also includes a few obligations for those providing General Purpose AI (GPAI) models.
‘general-purpose AI model’ means an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market
What is important to remember is that a GPAI model can be used within an AI system and turned into a high-risk or a prohibited AI system (e.g. using a large language model like ChatGPT with a prompt like “take this list of candidates and their CVs and rank them by best fit for the job position”).
Consider the definitions of prohibited and high-risk AI systems and think of a list of prompts that can turn ChatGPT into a prohibited or high-risk AI system.
We will not give solutions on how to turn ChatGPT into a prohibited AI system; however, here are some examples of how a GPAI model like GPT-4 can be turned into a high-risk AI system (a code sketch of the first example follows the list):
- AI for Recruitment and Hiring Decisions:
- Prompt: “Based on applicants’ resumes and online profiles, rank candidates for the job and assess their likelihood of fitting into the company’s culture.”
- Attached Data: Resumes, LinkedIn profiles, employment history databases.
- AI Used in Education for Grading Systems:
- Prompt: “Automatically grade students’ essay submissions by analyzing language complexity, argumentation, and factual accuracy.”
- Attached Data: Student essays, predefined grading rubrics, and past grading data.
- AI Systems in Border Control or Migration Management:
- Prompt: “Analyze migrants’ application forms and interviews to predict whether they are likely to assimilate into the host country based on language proficiency and socioeconomic background.”
- Attached Data: Migration forms and transcription of interviews.
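As a sketch of the first example above, the following hypothetical wrapper shows how little code is needed to turn a general-purpose model into an AI system used for recruitment. The prompt, function, and model name are illustrative; deploying anything like this is an employment use case and would therefore attract the high-risk obligations of the AI Act.

```python
# A sketch of the first example above: a thin, hypothetical wrapper that asks a
# general-purpose model to rank job candidates. The prompt and function are
# illustrative; deploying anything like this is an employment use case and would
# therefore fall under the high-risk obligations of the AI Act.
from openai import OpenAI

client = OpenAI()

def rank_candidates(cvs: list[str]) -> str:
    """Illustrative only: ranking applicants with an LLM is a high-risk use."""
    prompt = (
        "Based on the following applicants' CVs, rank the candidates for the job "
        "and assess their likelihood of fitting into the company's culture:\n\n"
        + "\n---\n".join(cvs)
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```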
2.5 Summary
In this chapter we covered the basics of AI ethics, privacy, and their associated risks, and then introduced the AI Act. As the audience of this book is mostly technical, learners are encouraged to engage in dialogue with colleagues with a legal background to further discuss the blurred border between ethics and law, especially in relation to the products or AI systems that they are going to develop, deploy, or purchase/use.
| Question | Options |
|---|---|
| 1. What is the primary difference between AI ethics and AI law? | 1) AI ethics are universal principles that may not be codified into law. 2) AI ethics is concerned only with technical aspects. 3) AI law encompasses all moral and ethical decisions. 4) AI ethics and AI law are always aligned. |
| 2. Which of the following is NOT one of the five ethical principles of AI identified by Floridi? | 1) Beneficence 2) Explicability 3) Nonmaleficence 4) Transparency |
| 3. Under which domain of AI risks does unauthorized access to personal data fall, according to Slattery’s taxonomy? | 1) Discrimination & Toxicity 2) Privacy & Security 3) Misinformation 4) Human-Computer Interaction |
| 4. Which principle from Floridi’s framework addresses transparency and accountability in AI systems? | 1) Nonmaleficence 2) Justice 3) Explicability 4) Beneficence |
| 5. Which of the following is a key role defined in the AI Act? | 1) Data Subject 2) Data Processor 3) Data Controller 4) Provider |
| 6. What risk is associated with AI systems that undermine human decision-making and over-rely on automation? | 1) Lack of transparency 2) Loss of human autonomy 3) Misinformation 4) Environmental harm |
| 7. According to the GDPR, who is primarily responsible for ensuring that personal data is processed lawfully? | 1) Data Processor 2) Data Controller 3) Data Subject 4) Subprocessor |
| 8. Which of the following AI systems would be classified as ‘high-risk’ under the AI Act? | 1) AI chatbots for customer service 2) AI used in recruitment for job applications 3) AI spam filters 4) AI for generating product recommendations |
| 9. How does the AI Act categorize AI systems that manipulate human behavior through subliminal techniques? | 1) Limited-risk AI systems 2) Prohibited AI systems 3) High-risk AI systems 4) Minimal-risk AI systems |
| 10. What could be turned into a high-risk AI system by using the right prompts with personal data? | 1) Minimal-risk AI system 2) General-purpose AI model 3) Limited-risk AI system 4) Prohibited AI system |
Click to reveal solutions
Answer: 1) AI ethics are universal principles that may not be codified into law.
Explanation: Ethics are guiding principles that go beyond legal frameworks, while laws are formal regulations that may not fully reflect ethical considerations.
Answer: 4) Transparency
Explanation: Transparency is not one of the five principles identified by Floridi; the principles are Beneficence, Nonmaleficence, Autonomy, Justice, and Explicability.
Answer: 2) Privacy & Security
Explanation: Unauthorized access to personal data falls under the “Privacy & Security” domain in Slattery’s taxonomy of AI risks.
Answer: 3) Explicability
Explanation: Explicability ensures that AI decisions are transparent and accountable, helping to clarify AI decision-making processes.
Answer: 4) Provider
Explanation: The AI Act assigns responsibilities to the Provider for ensuring proper AI system deployment. For further references, see Article 50 of the AI Act.
Answer: 2) Loss of human autonomy
Explanation: Loss of human autonomy is a risk where AI systems may undermine human decision-making by over-relying on automation.
Answer: 2) Data Controller
Explanation: The Data Controller is responsible for ensuring that personal data is processed in compliance with the GDPR.
Answer: 2) AI used in recruitment for job applications
Explanation: AI systems used in recruitment are classified as high-risk under the AI Act because they significantly impact individuals’ rights.
Answer: 2) Prohibited AI systems
Explanation: AI systems that manipulate human behavior through subliminal techniques are classified as prohibited AI systems under the AI Act.
Answer: 2) General-purpose AI model
Explanation: A general-purpose AI model can be turned into a high-risk AI system if prompted to perform tasks that fall into high-risk categories, such as recruitment or education.