Practical recommendations for the use of generative AI in our scientific work
Authors: Petr Keil, Arka Pal, Aditya Ganesh, Richard Bittman, Gabriel Ortega, Adam Uličný, Vojtěch Barták
Generative AI is increasingly being used for scientific writing, coding, content generation, analysis, teaching, and learning. It has reached a level where many of us are unsure how to use it properly, both in practical and ethical terms. On the 9th of October 2025 we held a seminar/workshop at our department where we discussed how to use, and not use, AI for our scientific work. We aimed to be concrete and practical in our recommendations. Here is what we came up with (without the use of AI):
- Always acknowledge the use of generative AI to your collaborators, students, supervisors, journal editors, and reviewers, at all levels of the scientific process. You should be specific about how and where you used AI: did you use it to generate ideas or code, to summarize literature, or to improve the language? Be open about it.
- Verify everything for meaning, logical consistency, and correct references. Also, only use AI for things that you are able to verify afterwards. If you cannot verify the output (e.g. when you let AI generate code in a language that you can’t read), then collaborate with someone who can, and make that collaborator an official part of your team (and a co-author). In other words, every AI output should be verified by a qualified team member.
- Validate AI output against existing resources such as the published literature or even Wikipedia. This can help you spot AI hallucinations and errors. Also, try different prompts for the same goal to see if the output is consistent (but beware that consistency does not guarantee correctness, as different AI tools can all give incorrect output).
- Try to write scientific text or code yourself from time to time, without the use of AI. This will help you avoid AI dementia and retain the basic skillset necessary for verification and validation.
- Collaboration is a good way to use AI: write prompts together, verify and validate outputs together, laugh about the AI hallucinations together. We worry that AI will reduce the need for collaboration between scientists with different skills; collaborative use of AI can help prevent that.
- Read. Don’t exclude actual reading (of papers, books) from your work. At least open every paper that you cite to verify that it really contains what AI suggests; while doing this, check the title, the abstract, and the trustworthiness/reputation of the journal and the authors. Properly read the most important papers and books that you cite.
- Learning. For learning, we recommend not relying solely on AI tools, but on a wider array of resources. On top of using AI, get intuition for a problem or a skill through videos, popular/summary texts, or an actual living person; read books and articles for more details and deeper insight; try writing and coding things on your own; leave the computer and go to the field to actually observe and physically manipulate things.
- Literature review. Don’t rely solely on AI for literature review. AI tools can summarize and suggest research papers, but they draw primarily on freely available sources. This means that key, high-quality studies behind paywalls may be missing, some included studies might come from unreliable journals, and the overviews you get may be incomplete or misleading. Use AI as a point of departure, e.g. for brainstorming topics or identifying keywords.
- Teaching. Make students defend, verify, explain, or validate their presentations, code, and writing to you. When giving them tasks, look for alternatives to private on-screen coding and writing, such as writing or coding on paper, coding in front of others, or presenting their creation live. When teaching, present things in a simple way and slowly enough, i.e. don’t give students a reason to rush and to use AI to summarize a complex topic. Teach them coding and writing basics.
- Security. Copy-pasting things into AI may be a risk, so think twice before pasting your material into an AI prompt. This is particularly critical with materials that have co-authors, materials with sensitive or personal information, or valuable new ideas. Use certified or government-backed tools, or local services. Don’t use DeepSeek. Be aware of security tools, e.g. the security-related options in ChatGPT, and use them. Beware of the “share” button of ChatGPT.
- Images. Beware that AI-generated images are generally perceived by others as ugly, even when they seem pretty to you.
- AI agents. An AI agent is a system that autonomously performs tasks by designing entire workflows with the tools available in your operating system. Validation of AI agents can be challenging, so always review intermediate outputs and ensure that all results are transparent, reproducible, and fully understood. Also keep detailed logs to document the workflow (see the sketch after this list). However, our experience with AI agents is so far limited, and this is something still to be discussed and assessed.
- The environmental impact of AI is massive! Beware of it, don’t waste its power, and don’t use AI frivolously. Test and choose the best AI tool for the task at hand, which can reduce the number of requests you submit. Keep the context window short. You can also compensate for your personal environmental impact from using AI, e.g. by running simple searches in a search engine such as Ecosia or a similar alternative.
- Don’t feel ashamed of using generative AI, but only if you follow the steps above ;-)
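To illustrate the logging advice from the AI agents point above, here is a minimal sketch in Python. The agent, the step names, and the log format are all hypothetical, not tied to any specific agent framework (real frameworks typically provide their own logging facilities); the point is only that every intermediate step gets written down for later review.

```python
# A minimal sketch of logging an AI agent's intermediate steps,
# assuming a hypothetical agent whose steps we can observe.
import json
from datetime import datetime, timezone
from pathlib import Path

LOG_FILE = Path("agent_run_log.jsonl")  # one JSON record per line

def log_step(step_name: str, prompt: str, output: str) -> None:
    """Append one intermediate agent step to the log for later review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step_name,
        "prompt": prompt,
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Example: record two steps of a hypothetical literature-search agent.
log_step("keyword_generation",
         "Suggest search keywords for beta diversity",
         "beta diversity; turnover; nestedness")
log_step("query_construction",
         "Build a Web of Science query from the keywords",
         'TS=("beta diversity" AND turnover)')
```

A plain JSON-lines file keeps each step human-readable and easy to diff, which supports the transparency and reproducibility requirement above.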
Other guidelines and resources:
- Charles University (CUNI)’s recommendations
- European Commission (EC) Guidelines on the responsible use of generative AI in research, developed by the European Research Area Forum
- American Institute of Mathematical Sciences’ Generative AI Guidelines
- Harvard University Information Technology’s Guidelines