Back to the Future: Thoughts from the World Summit AI 2025, Amsterdam

Text | Outi Loikkanen

This article reports from the World Summit AI 2025 held in Amsterdam under the theme “Back to the Future: It’s About Time.” The conference gathered global experts, researchers, and industry leaders to discuss artificial intelligence (AI) as both a transformative and disruptive force in society. The article highlights key discussions on ethics, learning, and creativity in AI.

A silver DeLorean DMC-12 sports car with its distinctive gull-wing doors open is parked outdoors.

The DeLorean DMC-12 pictured outside the conference. This is the iconic car transformed into a time machine by Doc Brown in the Back to the Future film trilogy. Picture by Outi Loikkanen.

Introduction

The 2025 World Summit AI (WSAI), organised by InspiredMinds at the Taets Art & Event Park in Amsterdam, positioned AI as a technology shaping not only industries but also societies and values. The event’s theme, “Back to the Future: It’s About Time,” captured the urgency of aligning technological progress with human purpose.

As a lecturer and project worker involved in the Saavuta Äly project, which aims to strengthen Non-Governmental Organisations' (NGOs') capabilities in using AI, I attended the summit to explore how current global AI debates can translate into practical teaching and training contexts. The conference offered many different perspectives—from critical analyses of AI power structures to creative and educational applications—that can both enrich lectures and help guide third-sector professionals in using AI responsibly and efficiently in their work.

Power, responsibility, and the global AI landscape

The opening keynote by Karen Hao, journalist and author, titled “Empire of AI: How Silicon Valley is Reshaping the World”, challenged participants to question narratives that frame AI as an inevitable path to wellbeing. Hao argued that companies such as OpenAI have accumulated significant wealth and power while framing that accumulation as technological progress, raising concerns that excessive corporate control could weaken democratic values.

This discussion provides a powerful case study for teaching critical AI literacy: encouraging students and professionals to evaluate who benefits from AI systems and who is excluded.

In contrast, John Abel from Google Cloud highlighted opportunities in “AI at scale,” stressing digital sovereignty and the importance of maintaining control over data while fostering innovation. His view underlined Europe’s growing commitment to responsible AI development—an important perspective for public and non-profit organisations navigating global technology ecosystems.

Ethics by design and the culture of trust

Ethical and governance perspectives were central throughout the summit. The Guardians of Tomorrow panel, featuring Mona de Boer (PwC Netherlands), Nicol Turner Lee (The Brookings Institution), and Wendell Wallach (Yale University), discussed building safety and accountability by design into AI systems. Their approach echoed the principle of “responsible by default,” which is increasingly relevant to European AI policies and NGO operations relying on third-party platforms.

Similarly, Rahul Pathak from AWS emphasised in his talk “Unlocking an AI Mindset” that successful AI adoption depends as much on cultural change as on technology. For educators, this highlights the need to integrate organisational readiness and ethics into AI training—moving beyond tool-based learning to a mindset of responsibility and transparency.

AI, learning, and inclusion

Education emerged as one of the most dynamic themes of the summit. Nikolaz Foucaud, Managing Director for Enterprise EMEA at Coursera, explored how generative AI is reshaping the global skills landscape, demanding new models of lifelong learning.

Equally insightful was Dr. Sharon Sochil Washington’s presentation from MASi, a multilingual AI-powered school initiative. She proposed viewing grammar not only as a linguistic but also as a social architecture, arguing that when AI “flattens grammar,” it also flattens culture. Her work demonstrates how inclusive AI design can bridge linguistic and cultural divides—a lesson especially relevant for NGOs operating across diverse communities.

Creativity, well-being, and public good

On the public sector side, Swaan Dekkers, AI Innovation Lead at the City of Amsterdam, presented how the municipality applies AI for urban well-being through responsible innovation and citizen engagement. These examples show AI’s potential as a driver of human-centered design rather than just automation.

A strong message of societal responsibility was also conveyed in Joe Depa’s presentation, “AI for Good: Delivering Impact with Data, Value and Trust.” As the Global Chief Innovation Officer at EY, Depa illustrated how AI can generate concrete benefits across sectors—from enabling life-changing surgeries to improving access to food and reducing pollution. He pointed out that meaningful social impact requires data integrity, shared value creation, and public trust at the core of AI strategies. This perspective aligns with current efforts in NGOs and education to ensure that technology not only increases efficiency but also advances collective well-being.

Further, Andrew Schroeder from Direct Relief led discussions on using AI to address humanitarian challenges, highlighting the tension between innovation and equity—an important balance for NGOs embracing digital tools.

Lessons for education and NGO practice

Across many different sessions, one main message was clear: AI transformation is as much about people as it is about technology. Several speakers reminded the audience that leading AI work means balancing competing goals—speed and responsibility, experimentation and trust—rather than trying to eliminate those tensions entirely. Good AI leadership requires moving fast, but also thinking carefully.

Some conference examples made this idea very concrete. In the AI Crisis Management Simulation by PwC, participants faced a scenario in which an AI-based recruitment tool produced biased results. The task was to find out what went wrong, communicate openly about the issue, and fix the system to make it fair again. The exercise showed that ethical AI work requires not only technical skills but also cooperation, communication, and responsibility.

Another good example came from the Co-creation Workshop run by Oracle, EY, and AWS. Mixed teams worked together to build quick prototypes for AI solutions to social or business challenges. Participants saw how data quality, prompt wording, and human supervision all influence the results. For teachers, this kind of workshop offers a good model for learning by doing—where students can test ideas, make mistakes, and reflect on the results together.

For NGOs, too, these lessons are highly relevant. Many organisations are now considering how to use AI to make their work easier while still keeping human contact and empathy at the centre. For example, an NGO could use AI to analyse feedback from service users or to help plan communication—but it needs to do this in a fair and transparent way.

Using case-based learning and ethical discussions in training helps staff learn to make careful, situation-based decisions. It also builds confidence to question technology when needed. The conference showed clearly that responsible use of AI is not only about algorithms or tools, but about people who have the courage and skills to use technology in a thoughtful and humane way.

Conclusion: Shaping an inclusive AI future

The World Summit AI 2025 highlighted that artificial intelligence is not solely a technological issue—it is a social, ethical, and educational one. The future of AI depends on cultivating human capacities for reflection, empathy, and collaboration.

For educators and NGOs alike, the conference offered both inspiration and responsibility: to ensure that AI becomes a tool for empowerment rather than exclusion, and that its benefits extend across communities. The challenge is not just to keep pace with technology, but to shape the utilization of AI towards a future that is inclusive, meaningful, and profoundly human.

The language editing and structure of this text have been improved using Copilot.

EU logo with the text “Partly funded by the European Union”.

The SaavutaÄly – AI Competence Supporting Civil Society in Transition project aims to strengthen AI skills and the ability to apply artificial intelligence in developing work and organizations within the third sector. It targets NGO employees and volunteers, helping them respond to changing competence needs in working life while supporting adaptability, innovation, and sustainable careers.

The project also promotes experimental use of AI to find new solutions for everyday work and improve accessibility. Funded by the Häme ELY Centre through the European Social Fund (ESF), the project runs from November 2024 to October 2026 in the Uusimaa region and is implemented by Laurea and Humak Universities of Applied Sciences.

References

  • Abel, J. 2025. AI at scale: Unlocking the next era of intelligence and innovation. Presentation at World Summit AI 2025, 8.10.2025. Amsterdam.
  • Boer, M., Turner Lee, N. & Wallach, W. 2025. Guardians of Tomorrow: Global Frameworks for AI Safety. Panel discussion at World Summit AI 2025, 8.10.2025. Amsterdam.
  • Dekkers, S. 2025. Responsible AI in the City of Amsterdam. Presentation at World Summit AI 2025, 8.10.2025. Amsterdam.
  • Depa, J. 2025. AI for Good: Delivering impact with data, value and trust. Presentation at World Summit AI 2025, 9.10.2025. Amsterdam.
  • Foucaud, N. 2025. The future of learning and work: Creating a skilled workforce for a changing world. Presentation at World Summit AI 2025, 8.10.2025. Amsterdam.
  • Hao, K. 2025. Empire of AI: How Silicon Valley Is Reshaping the World. Keynote presentation at World Summit AI 2025, 8.10.2025. Amsterdam.
  • Pathak, R. 2025. Unlocking an AI Mindset. Presentation at World Summit AI 2025, 8.10.2025. Amsterdam.
  • Schroeder, A. 2025. AI for Social Impact and Humanitarian Action. Panel discussion at World Summit AI 2025, 8.10.2025. Amsterdam.
  • Sochil Washington, S. 2025. Grammar as the Architecture of Understanding. Presentation at World Summit AI 2025, 8.10.2025. Amsterdam.
URN http://urn.fi/URN:NBN:fi-fe20251105105318
