Conclusions and future directions

We hope this book has offered a structured and practical introduction to the intersection of artificial intelligence, cybersecurity, and GDPR-compliant data processing.

We began with the basic concepts of AI literacy, ethical considerations, and risk assessment, and then added further layers on data protection and cybersecurity, aiming to provide a comprehensive reference for learners (and teachers!) of these topics.

Secure AI development with personal data goes beyond technical compliance. It demands an ongoing, collaborative effort involving developers, data scientists, cybersecurity teams, legal experts, and the end users of such systems to achieve AI systems that are lawful and, above all, trustworthy.

We have covered privacy-enhancing technologies in the context of machine learning and explored the practical application of MLOps together with secure coding and deployment practices.

Sustainability

We cannot ignore the fact that AI systems, especially those requiring large amounts of resources during training or inference, can have significant environmental, financial, and operational impacts. Training modern AI models, particularly deep learning models with billions of parameters, is an energy-intensive process, and recent studies have highlighted the substantial carbon footprint of these models. Between 2012 and 2018, the computational resources required for leading-edge deep learning research grew 300,000-fold, leading to striking levels of energy consumption and carbon emissions (Schwartz et al. 2020). Running large AI models in production also draws significant power: one estimate suggested that each query to an AI system like ChatGPT consumes several times more energy than a typical Google search. While a full treatment of the societal and environmental aspects of AI is beyond the scope of this book, we should always ask ourselves whether AI is really the tool we need for the task at hand.
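One practical way to make this impact visible in your own projects is to measure it. Below is a minimal sketch using the open-source CodeCarbon package to estimate the emissions of a training run; it assumes CodeCarbon is installed (`pip install codecarbon`), the `train()` function is a hypothetical placeholder for your own training loop, and the reported numbers depend on your hardware and regional electricity mix.

```python
# Minimal sketch: estimating the CO2 emissions of a training run with CodeCarbon.
# Assumes `pip install codecarbon`; train() is a hypothetical placeholder.
from codecarbon import EmissionsTracker


def train():
    # Placeholder for your actual model training loop.
    return sum(i * i for i in range(10_000_000))


tracker = EmissionsTracker(project_name="example-training-run")
tracker.start()
try:
    train()
finally:
    # stop() returns the estimated emissions in kg of CO2-equivalent.
    emissions_kg = tracker.stop()

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2eq")
```

Logging such estimates alongside your usual experiment metrics makes the energy cost of model choices part of the conversation, rather than an afterthought.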

Where to go next?

As AI technologies continue to evolve, so too will the privacy and security challenges they present. So what should be monitored?

  • Emerging technologies: New trends such as agentic AI systems, machine unlearning, and the Model Context Protocol will introduce new regulatory, ethical, and technical challenges.
  • Growing threats: Adversarial attacks, data poisoning, and model inversion will require increasingly sophisticated threat detection, monitoring, and incident response strategies.
  • Regulatory evolution: The implementation and interpretation of the EU AI Act, alongside ongoing GDPR enforcement and new ISO standards, will continue to shape how AI systems must be built and maintained; parts of this book will surely need to be rewritten within a few years.

How can you stay informed, and what should you do next? We recommend at least the following:

  • Follow regulatory changes: Track updates from the EDPB, EDPS, ENISA, the European Commission, and national data protection authorities to see how the regulatory landscape around artificial intelligence is evolving.
  • Engage with professional communities: Join forums and special interest groups, and attend conferences focused on AI, data protection, and cybersecurity, with an eye on your own role in developing such systems.
  • Continuous learning: With this book we have barely scratched the surface; you will most likely want to pursue further training or certifications in GDPR, cybersecurity, MLOps, and AI ethics, and in general experiment with new technologies as they are released.
  • Contribute to open resources: Open-source tools drive much of the AI innovation we see today, so consider contributing back to the tools you use. You can also contribute to initiatives like OWASP’s AI Security and Privacy Guide!

Final thoughts

As AI continues to transform society, your role as a developer or cybersecurity professional carries immense responsibility. The systems you build affect individuals, institutions, and societies. By applying the knowledge, tools, and principles outlined in this book, you are not just building software; you are helping to shape the future of ethical, secure, and trustworthy AI. And if you are feeling extra motivated and inspired, please consider contributing back to this book by expanding it further.

Enrico Glerean, Helsinki, February 2025