This post is a bit different, but I think it is a necessary one…
I’ve noticed that more and more people are using AI on this platform. Unfortunately, many do so without a responsible standard. I want to be open and transparent about my own AI use because I dislike reading a text and immediately recognizing it was generated by AI. From that moment on, I can no longer be sure whether the ideas or arguments truly come from the person themselves. You might know this too.
For this reason, I thought about it and believe I owe my readers clarity on how and in which ways I use AI. Hence, this document explains my approach, which I consider good and responsible use. It is both an outline of my own practice and a suggestion for others who want to be transparent and avoid misuse.
These guidelines are informed by the University of Mannheim’s guidelines (“ChatGPT in Teaching”), as well as by the rules set by my professor at the Institute of Psychology where I study. While the following framework is based on my own opinion, it is a considered one.
Responsible Use Cases for AI
A key principle governs all these rules: AI is a tool to support you. It should complement your work, not replace it. This means you should already have the knowledge of what you want to write and how you want to write it. Writing must remain a process guided by your thoughts, your ideas, and your own mind.
This is important to me because writing is a form of learning. I want to understand a topic, think about it deeply, and build my own knowledge before I try to communicate what I have learned. In a way, writing is the process of structuring your own thoughts. If you rely too heavily on AI, you risk losing your grasp of your own text because you engage less with the topic you are writing about.
Importantly, struggling with a topic is an essential part of strengthening your knowledge. This is a key concept in cognitive science: the process of wrestling with ideas, thinking them through, and revisiting them is a normal part of learning that supports neuroplasticity. This is vital for writing because writing is self-expression; its goal isn't to produce a text about something you haven’t fully understood yourself, just for the sake of producing it.
Therefore, the use of AI should complement your thought process and your productive “struggle,” not displace it.
Following this framework, I think the following are "good" use cases for AI:
1. Editing
You write the text yourself first, and then use AI as an editor. Crucially, you think for yourself and produce a text, and then use AI to refine it. This can involve:
Correcting spelling, grammar, and syntax.
Improving awkward sentences so they sound more natural.
Light stylistic polishing.
A short tip: Sometimes I dictate my text first and then use AI to slightly refine it into a cleaner, written form. I find this particularly useful for turning long-winded ideas into natural, flowing text. It is also an excellent way to create a first draft through speaking, which often results in more authentic and natural writing. In all cases, the ideas, arguments, and core content remain entirely my own.
2. Translation
Another use case is using AI to assist with translation. For example, my first language is German. While it is easy for me to write about complex topics in German, expressing the same ideas in English is often more challenging. In some cases, I write a text in German and then use AI to help translate it into English so that it sounds as natural and fluent as the original.
3. Brainstorming and Learning
I treat AI like a conversation partner for exploring ideas, but my brain remains the active agent. This could involve:
Testing whether an argument works.
Getting feedback on structure, titles, or possible angles.
Refining ideas I already have.
Learning about new concepts, which I always verify myself.
Think of it like Sherlock Holmes talking to Dr. Watson. You start with an idea and engage in a dialogue, using AI to question and probe your text. This is an active process: you refine and improve your work through the exchange, rather than simply letting AI generate ideas. Crucially, Holmes (you) directs the thought process; he leads the search for truth, while Watson is complementary (sorry to all the Watson fans out there).
4. Research Support
If I don’t have much time, I sometimes use AI to help summarize longer papers or get straight to the main points. This should be used sparingly, because, as already mentioned, struggling with the material is important for understanding. However, when reviewing hundreds of papers for a single argument, it may not be necessary to master every detail. Importantly, never trust AI summaries blindly; always at least skim the cited papers to verify the output.
AI can also be useful for finding relevant studies, though I find that AI research summaries often miss the point or contain errors. In my opinion, any AI-assisted research should always start from one's own questions and thinking. Before you ask for studies, you should know what topic you want to explore.
What You Should Not Use AI For
Some of this is implied above, but it’s important to state it explicitly. Here is what I believe is irresponsible use of AI.
1. Skipping Output Review
Never trust the output of an AI without reviewing it carefully. Always check and, if necessary, correct the output before using it. AI may introduce errors, biases, or unintended meanings. Sometimes it even invents facts that are simply not true.
Always go through the output and correct it.
2. Letting AI Write the Text for You
Do not ask AI to produce a complete text before you have created your own draft. Don’t just copy the output and publish it. The arguments, structure, and ideas should originate from you.
To some extent it is fine to use AI for inspiration, but this should stay limited and complementary, as elaborated above.
If AI’s output sparks an idea, you should develop and write it in your own words.
Avoid crossing the line into copying or passing off AI’s work as your own.
3. Outsourcing Your Arguments
Your reasoning and argumentation should be entirely yours. AI should not create the logical backbone of your work. You should remain in control of your ideas and content.
4. Accepting Generic AI Language
AI often produces a flat, repetitive style, with patterns like “this isn’t X, this is Y” or unnecessarily long sentences packed with em dashes. If you use AI for editing, revise its suggestions so that your own voice, style, and “soul” remain in the text. This is what makes a text personal and recognizably yours. Also, slight changes in language can alter the point you wanted to make, so examine the output closely to ensure it truly represents your idea. Ideally, you state in your prompt up front that your style should be preserved even after editing.
Transparency Is Essential
The most important point, in my view, is to be clear about when and how you have used AI. Readers should not be left in a fog of uncertainty.
I believe it is fine to use AI, but you should specify exactly what your contribution was and what role AI played. In the past, I would add a short note at the end of a text stating whether and how I had used AI (e.g. for translation). After thinking about this more, I decided the best approach is to address it once, openly, in a dedicated post, which is why I created this one.
From now on, whenever I use AI, I will include a reference to this document at the end of my text as my AI Declaration. I hope this clarifies things for my readers and helps them avoid the "fog" of AI-generated content.
Final Thoughts
In my opinion, used well, AI can lower the barrier to writing, for example, by making editing faster. This can free you to focus on your ideas. But don’t let convenience turn into laziness. The goal is always to create something that is unmistakably yours.
You might have noticed that the line between “responsible” and “irresponsible” AI use is a continuum; it is. One must be mindful about its use. Before you use AI, ask yourself: Have I thought this through by myself? Do I really want to use AI for this, or could I do it on my own? The decision should be a deliberate one. It is important to still put in the work and to embrace the struggle.
Please note that these guidelines reflect only my opinion. Others may think differently and allow more or less AI use. Crucially, if one does use it, being transparent about how is, in my view, essential.
In summary: Think. Think for yourself. Don’t lose your “soul” in the text. Most importantly, disclose your AI use. Many readers cannot easily tell when a text is AI-generated. Clarity is key.
(I also used AI to refine this text, in the ways outlined above.)