In April 2025, Krems an der Donau, a historic city in Lower Austria, hosted the International Teaching Week of IMC Krems UAS. One of the week’s top events was a workshop that brought together international educators and experts to explore challenges in AI development, such as bias, transparency, and accountability. As AI continues to shape society, it is essential for educators to address these issues in their teaching. This article outlines the guiding principles for AI as background for ethical discussions and highlights the ethical challenges, and possible solutions, discussed in the workshop.
The background: guiding principles for AI
In recent years, there’s been growing concern about how artificial intelligence (AI) is used in our lives. To make sure AI is developed and used responsibly, the European Union and other international organizations have created important guidelines. In 2019, the European Commission’s High-Level Expert Group on Artificial Intelligence published the Ethics Guidelines for Trustworthy AI. These guidelines say that trustworthy AI should follow the law, be ethical, and work reliably throughout its use (European Commission 2019). The goal is to make sure AI supports people, keeps their data safe, is fair to everyone, and benefits both society and the environment.
Other global organizations have joined this effort too. The Organisation for Economic Co-operation and Development (OECD) introduced its own set of AI Principles. These encourage AI that is both innovative and respectful of human rights. The OECD highlights the importance of fairness, transparency, safety, and holding developers and users of AI accountable. It also encourages governments to invest in AI education and research, support inclusive technologies, and work together across countries to build safe and trustworthy AI (OECD 2024).
The most recent piece of European legislation on AI is the AI Act. This law sets rules based on how risky an AI system is. For example, AI systems that could seriously harm people, such as systems that manipulate users or rank people unfairly, are banned outright. AI used in important areas like healthcare or policing must follow strict safety rules. Other systems, like chatbots or tools that create deepfake videos, must tell users they are AI. Everyday tools like spam filters face few rules because they are considered low risk. The law applies even to companies outside the EU if they sell or use AI in Europe. It also sets special rules for companies that build powerful AI models, especially if those models could affect society in a big way (Future of Life Institute 2025).
Together, these guidelines and laws are meant to make sure AI is used in a way that helps people instead of harming them. They remind developers, companies, and governments that AI should be fair, clear, and safe for everyone.
Ethical AI workshop at IMC Krems
IMC Krems University of Applied Sciences focuses on four main areas: business, health, science, and technology (IMC Krems 2025). The recent International Teaching Week brought together participants from different fields and countries, making it a diverse and interesting event.
One of the week’s highlights was a workshop called “Ethics in AI: Challenges and Considerations.” It gathered educators from around the world to discuss key ethical issues in AI. Participants came from Canada, Denmark, Finland, Georgia, Ireland, Kazakhstan, Ukraine, and the USA. The goal was to help participants better understand AI ethics, explore real-life AI problems, and reflect on the responsibilities of educators.
During the workshop, participants discussed several key ethical topics in AI, such as bias and fairness, transparency, accountability, privacy, and inclusion. They explored ways to reduce bias in AI systems to ensure fairness for all users. The group also talked about how to make AI decisions easier to understand and trust. Another important topic was who is responsible for AI’s decisions and actions. Privacy and consent were also major concerns, with a focus on how AI systems can protect users’ rights and data. Finally, the workshop emphasized making sure AI benefits everyone in society, not just certain groups.
The event made clear the need not only for responsible AI development but also for responsible AI education, and it encouraged international collaboration among educators and experts.
Potential solutions for ethical AI and approaches to using AI in education
To tackle bias and fairness, participants discussed the importance of auditing training data and AI models, implementing human-in-the-loop decision-making processes, and defining clear evaluation criteria for AI systems. In terms of privacy and consent, the conversation emphasized the need for clear oversight mechanisms, the implementation of opt-out options for users, and ensuring data anonymization where necessary. To address concerns about AI-generated content, participants suggested labeling AI-produced materials, developing detection tools to verify authenticity, and setting ethical boundaries for generative content creation. They also highlighted the necessity of restricting AI-generated content to clearly defined and regulated purposes.
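To make the idea of auditing AI models a little more concrete, the short Python sketch below shows what a very simple fairness check could look like. It is a minimal illustration of one common metric, the demographic-parity gap (the difference in positive-prediction rates between groups), combined with a human-in-the-loop rule; the predictions, group labels, and the 0.2 review threshold are all hypothetical assumptions, not tools or values discussed in the workshop.

```python
# A minimal sketch of a fairness audit on binary model predictions.
# All data and the review threshold below are hypothetical, for
# illustration only.
from collections import defaultdict

def positive_rates(predictions, groups):
    """Share of positive (1) predictions per group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [positives, total]
    for pred, group in zip(predictions, groups):
        counts[group][0] += pred
        counts[group][1] += 1
    return {g: pos / total for g, (pos, total) in counts.items()}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rates(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = positive decision (e.g. approved).
preds  = [1, 0, 1, 1, 0, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print("Positive rates per group:", positive_rates(preds, groups))
gap = demographic_parity_gap(preds, groups)
print(f"Demographic parity gap: {gap:.2f}")

# Human-in-the-loop rule: if the gap exceeds a policy threshold,
# route the system's decisions to manual review.
REVIEW_THRESHOLD = 0.2  # assumed policy value
if gap > REVIEW_THRESHOLD:
    print("Gap exceeds threshold - flagging model for human review.")
```

A real audit would of course run on full prediction logs and combine several complementary fairness metrics, but even this small example shows how a measurable evaluation criterion and a human-in-the-loop escalation rule can work together.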
The most engaging part of the workshop was hearing how educators from different parts of the world are approaching AI-related issues in their teaching. For instance, in Kazakhstan, educators were teaching students how to use AI to enhance their own thinking, recognizing that AI will take over many of the tasks students are currently being trained for. However, university management saw this as potentially teaching students to cheat, so AI education had to be limited. In Ireland, the focus has been on setting policies and guidelines for how students may use AI. Students suspected of breaking these rules are required to meet with two lecturers, who ask questions about the work to determine whether it was done by the student or with the help of AI.
At Laurea, we also have guidelines for using AI. Artificial intelligence can be used to support learning and teaching, but its use must be transparent and ethical, and it must not replace the teacher’s role or violate privacy and data protection policies. Students must be informed about the principles and limitations of AI use, and they are always responsible for the content they submit, even if AI tools were used during the process. Any use of AI, whether in writing, image generation, or language editing, must be clearly mentioned and properly cited according to Laurea’s referencing guidelines (Laurea 2024).
Conclusion
The workshop on ethical AI provided valuable insights into the ethical challenges posed by artificial intelligence. By discussing key topics such as bias, fairness, transparency, and privacy, participants shared practical solutions for ensuring that AI technologies are developed and used responsibly. The event also highlighted the different approaches to AI education worldwide: while some regions embrace AI as a tool for enhancing learning, others are taking more cautious steps to regulate its use. This exchange of ideas underscored the important role of international collaboration in learning from others, advancing ethical AI, and making sure it serves the good of society.
References