On 8 May 2026, the European Commission published an update to the ERA Living Guidelines on the Responsible Use of Generative AI in Research – the third revision since the document first appeared.
The guidelines are non-binding, but in practice they set the standard for anyone seeking EU research funding: Horizon Europe, MSCA, ERC, and other programmes under the European Research Area. If your application or project report touches on artificial intelligence use, this document defines the rules.
The Commission describes the May 2026 revision as technical and targeted, but it introduces two new elements: guidance on third-party AI use during meetings, and the EU’s first official position on “hidden prompts” in AI systems.
What the ERA Living Guidelines are and why they matter
The European Commission developed the ERA Living Guidelines together with the ERA Forum, which includes EU member states, research organisations, and research stakeholders. The document addresses three audiences: researchers, research organisations, and research funding bodies.
The document is “living” because it is updated regularly as the technology develops. The first version appeared in 2023, the second in April 2025, and the third in May 2026.
These recommendations apply directly to anyone applying for or implementing projects under Horizon Europe and other EU programmes. So when your institutional partner or grant administrator asks about artificial intelligence use in your project, this document shapes the answer.
What changed in May 2026
The Commission describes the update as limited and technical – the core structure stays unchanged. But two new elements deserve attention.
First: third-party AI use during meetings and in information management. The guidelines now specifically address situations where a meeting participant or counterpart uses AI to record, process, or analyse a conversation. When a colleague feeds a meeting transcript into an AI tool, questions arise around confidentiality, intellectual property, and personal data protection. The guidelines call on researchers and organisations to account for these risks.
Second: hidden prompts. This is a new concept in an official EU document. Hidden prompts are instructions that sit inside an AI system, invisible to the end user. When a university or research organisation deploys its own artificial intelligence tool or licenses a corporate version of ChatGPT or a comparable product, it may configure the system’s behaviour through instructions the researcher never sees.
The guidelines now require organisations to tell researchers about these hidden configurations and explain how they influence system behaviour.
Six recommendations for researchers
The guidelines give researchers six core recommendations. Here is what each one means in practice.
1. Remain fully responsible for scientific output
You cannot list artificial intelligence as an author or co-author of a scientific work, because authorship implies responsibility and responsibility belongs to the human researcher. Everything AI generates in your research carries your signature.
2. Use AI transparently
If AI substantially influenced your results, disclose this in the methodology section. Record the tool name, version, date, and what you used it for. If AI only helped with proofreading, that counts as non-substantial use and you don’t need to disclose it. But if it analysed data or drafted portions of a literature review, that is substantial use and requires disclosure.
3. Protect confidentiality and intellectual property rights
Do not upload unpublished data, colleagues’ manuscripts, or sensitive information into public AI tools, because everything you enter into an external artificial intelligence system may end up in its training data. Check the tool’s terms of use before uploading anything.
4. Comply with applicable law
EU data protection law (GDPR) and copyright both apply to work with AI. When AI-generated text contains personal data, you as the researcher are responsible for GDPR compliance. Also note that AI output can infringe copyright if the model was trained on protected material.
5. Continuously develop your knowledge of AI tools
AI develops quickly, so the guidelines ask researchers to stay current with best practices and share experience with colleagues. Also consider environmental impact: choose the least resource-intensive tool for the task.
6. Avoid substantially using artificial intelligence in peer review and proposal evaluation
This applies to anyone reviewing grant applications or scientific manuscripts. Feeding someone else’s unpublished work into an external AI system creates the risk that the work ends up in that model’s training data. The guidelines explicitly ask reviewers to avoid this.
For research organisations and funders
The guidelines address more than just individual researchers. Organisations and funders each get their own recommendation blocks.
The guidelines ask research organisations (universities, research institutes, think tanks) to develop internal AI use policies and embed them in general research integrity codes, provide training for staff at all levels, monitor internal AI use and its implications, and tell staff about any hidden prompts in deployed AI systems.
For grant funding bodies – including the European Commission and national research agencies – the guidelines ask them to incorporate these principles into application and reporting requirements and promote transparent disclosure as standard practice.
Practical checklist for EU grant applicants
If you apply for or implement an EU grant, use this table as a quick reference.
| Situation | What the guidelines require |
|---|---|
| AI analysed data or wrote sections of your paper | Disclose in methodology: tool name, version, date, purpose |
| AI only corrected style or grammar | Non-substantial use – no disclosure needed |
| You want to list AI as an author | Not permitted: AI cannot be an author or co-author |
| You want to upload a colleague’s data to ChatGPT | You need consent and a legal basis; check the tool’s terms of use first |
| You are reviewing a grant proposal or manuscript | Avoid substantial AI use – you risk leaking unpublished work |
| A colleague records a meeting in an AI tool | Confidentiality and IP risk – clarify conditions and inform participants |
| Your university provides a corporate AI tool | Ask whether hidden prompts are in place and what they contain |
Four principles underlying the guidelines
These recommendations rest on four core principles from the European Code of Conduct for Research Integrity.
Reliability: you are responsible for the quality and reproducibility of your results, regardless of which tool you used to produce them.
Honesty: disclose your AI use transparently and explain what role it played in the research.
Respect: account for the technology’s limitations, its environmental impact, confidentiality, and the rights of other researchers.
Accountability: you bear responsibility for everything you publish under your name. You cannot delegate that responsibility to an AI system.
Source: European Commission, DG Research and Innovation, 8 May 2026. Full guidelines: ERA Living Guidelines (PDF).