AI tools are rapidly becoming a natural part of Research and Innovation activities, helping writers organise ideas, improve clarity, and save valuable time. Yet within Horizon Europe (HE) proposals, their use comes with clear responsibilities. The European Commission now explicitly allows the use of AI tools in proposal preparation, but with one crucial condition: applicants remain fully accountable for every word. Used wisely, AI sharpens arguments and streamlines writing. Used carelessly, it risks errors, bias, and plagiarism. The key lies in defining where human expertise ends and AI assistance begins.
The EU’s Position: Use It, but Own It
According to the Guidelines on the Responsible Use of Generative AI in Research developed by the European Research Area Forum under the European Commission (2024), researchers and applicants are encouraged to use AI tools in a transparent, ethical, and accountable manner. The guidelines acknowledge that AI can enhance research and proposal development but emphasise the importance of critical human oversight. Users are expected to verify all AI-generated outputs, ensure factual accuracy, avoid plagiarism, and respect intellectual property rights. Above all, they must remain fully responsible for the content they produce, even when assisted by AI.
The European Commission’s position is clear: using AI tools such as large language models in Horizon Europe proposal preparation is acceptable, provided that transparency and validation are ensured. Applicants should disclose whether and how AI was used, verify that all generated material is appropriate and correct, and be aware of the limitations of these tools, including bias and potential factual errors (European Commission, 2024).
These principles are consistent with the broader European regulatory framework established under the EU Artificial Intelligence Act (European Parliament and Council, 2024), which defines requirements for trustworthy AI systems and promotes transparency, human oversight, and accountability across all sectors. Together, these initiatives signal that the European Union supports innovation with AI, but only within a framework that safeguards ethics, accuracy, and responsibility.
Lessons from Practice: A Human–AI Partnership
Drawing from my own practical experience in writing Horizon Europe proposals, AI can be a powerful collaborator when used with care and discipline. The first and most important principle is to write precise prompts. The output from AI will only be as good as the question it is asked to solve. Clearly defining the role (“You are a Horizon Europe proposal writer”), the task (“Draft the methodology section based on this input”), and the context (“Research and Innovation Action focusing on technology development and cross-sector collaboration”) helps direct the AI toward relevant and meaningful results.
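As a minimal sketch, the role–task–context pattern described above can be captured in a small helper. The function name and output format here are illustrative assumptions, not part of any official guideline or tool:

```python
# Sketch of the role-task-context prompt pattern (illustrative only;
# the actual call to an AI tool is omitted and tool-specific).

def build_prompt(role: str, task: str, context: str) -> str:
    """Assemble a structured prompt from its three components."""
    return (
        f"Role: {role}\n"
        f"Task: {task}\n"
        f"Context: {context}"
    )

prompt = build_prompt(
    role="You are a Horizon Europe proposal writer",
    task="Draft the methodology section based on this input",
    context=("Research and Innovation Action focusing on technology "
             "development and cross-sector collaboration"),
)
print(prompt)
```

Keeping the three components explicit and separate makes prompts easier to review, reuse across sections, and refine between iterations.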
Equally important is to view AI as a partner for iteration rather than delegation. AI should be seen as a co-writer, not an author. Each text generated should be read critically, evaluated for fit and tone, and then rewritten or refined where necessary. The best outputs are achieved through multiple cycles of review, with the human writer remaining firmly in control of the narrative. This approach aligns with the principles of responsible innovation (Stilgoe et al., 2013) and the human-in-the-loop model of AI (Floridi, 2023), which both emphasise iterative oversight, transparency, and shared accountability between human and machine.
Another essential lesson is to base the AI’s work on your own validated material. Instead of asking AI tools to “search the web” or generate generic background text, provide them with your existing project descriptions, deliverables, or validated sources; this keeps the language consistent and aligned with your objectives. It also prevents the introduction of vague or speculative claims.
Ensuring the accuracy of references is equally crucial. AI tools often fabricate or distort citations, which can damage credibility if left unchecked. Every reference should therefore be verified manually to confirm its accuracy, existence, and relevance (European Commission, 2024). AI excels at structuring and improving clarity, but the scientific reasoning, data, and conceptual depth must come from human expertise.
Finally, transparency is key. The European Commission explicitly requests that applicants disclose the use of AI tools in Part B of the proposal. A short statement such as: “Sections of this proposal were drafted with the assistance of AI tools under the supervision of the coordinating team. All outputs were manually verified and edited,” demonstrates both compliance and ethical responsibility. This simple act of openness not only aligns with EC expectations but also reinforces credibility and trust.
Responsible Use Beyond Proposal Writing
The ethical application of AI does not stop at proposal preparation. The Horizon Europe Programme Guide (Point 18) stipulates that any AI-based system used or developed within a project must be technically robust, socially responsible, and explainable. Such systems should be reliable, accurate, and proportionate to the risks they pose, while safeguarding human integrity and preventing harm (European Commission, 2024a). These principles are fully aligned with the EU Artificial Intelligence Act, which sets legally binding requirements for risk management, transparency, data quality, and human oversight for AI systems placed on the EU market (European Parliament and Council, 2024). The Act reinforces that all AI-based tools—whether in research, innovation, or administration—must function safely, respect fundamental rights, and operate under human control.
The same ethical considerations extend to proposal evaluation. According to the Standard Briefing Slides for Horizon Europe Expert Evaluators (European Commission, 2024b), evaluators who use AI tools during assessments must ensure confidentiality and data protection, avoid over-reliance on automated systems, and remain fully responsible for their decisions. Confidential proposal content should never be uploaded into external AI tools.

These regulatory and ethical principles underline a broader point: responsible AI use is not only about compliance, but about cultivating trust and integrity throughout the research lifecycle. The same expectations resonate with national ethical frameworks such as that of the Finnish National Board on Research Integrity (TENK, 2023), which highlights transparency, accountability, and researcher responsibility as cornerstones of ethical conduct in AI-assisted research. In short, responsible AI practices in research and innovation are not only ethical obligations but also practical enablers of credibility, quality, and long-term trust in European science.
The Bottom Line
AI is transforming how we write, think, and collaborate, but it does not replace human reasoning, responsibility, or integrity. Within Horizon Europe, AI is best understood as an instrument for clarity, coherence, and efficiency – not for generating substance or scientific argumentation. Used responsibly, it enhances quality and productivity; used carelessly, it undermines credibility. The future of proposal writing will likely be hybrid, combining human expertise, strategic judgement, and creativity with AI’s speed and linguistic precision. In this evolving landscape, success will not depend on whether AI is used, but on how responsibly it is applied.
References
- European Commission. 2024, 20 March. Guidelines on the Responsible Use of Generative AI in Research developed by the European Research Area Forum. Directorate-General for Research and Innovation. Retrieved 7.10.2025.
- European Commission. 2024a. Horizon Europe Programme Guide (Point 18 – AI-based systems or techniques). Retrieved 7.10.2025.
- European Commission. 2024b. Standard Briefing Slides for Horizon Europe Expert Evaluators. Retrieved 7.10.2025.
- European Parliament and Council. 2024. Regulation (EU) 2024/1689 establishing harmonised rules on Artificial Intelligence (Artificial Intelligence Act). Official Journal of the European Union, L 202, 12 July 2024. Retrieved 7.10.2025.
- Floridi, L. 2023. The Ethics of Artificial Intelligence. Oxford University Press. Retrieved 24.10.2025.
- Stilgoe, J., Owen, R., & Macnaghten, P. 2013. Developing a framework for responsible innovation. Research Policy, 42(9), 1568–1580. Retrieved 24.10.2025.
- TENK. 2023. The Ethical Principles of Research with Human Participants and Ethical Review in the Human Sciences in Finland. Finnish National Board on Research Integrity. Retrieved 24.10.2025.
The language and structure of this text have been improved using Copilot.