Ethics, Copyright and Source Criticism
Reading time: approximately 9 minutes
Being able to create images from text is a technical skill. Doing it responsibly is a professional necessity. AI models are not neutral tools; they are shaped by the data they have been trained on and the choices their developers have made. In this moment, we take a step back from practical creation and focus on the three pillars of responsible use: copyright, bias and source criticism.
What You Will Learn
- The basics of copyright for AI-generated images.
- How to identify and counteract bias in AI models.
- Why source criticism is crucial even for images you create yourself.
1. Copyright: Who Owns the Image?
This is one of the most complex and debated questions in AI right now. The situation is still a legal gray zone, but some main principles are emerging:
- Human creation is the key: In many countries, including the USA and the member states of the EU, a work can only receive copyright protection if it is created by a human. An image generated solely by an AI, without significant human creative processing, often cannot be copyrighted by the user.
- What does this mean for you? You can generally use the images you create for your own non-commercial teaching, but you probably cannot claim exclusive ownership or sell them. The most important thing is to always read the terms of use for the specific service you are using. Some services give you broader rights than others.
- Training data ethics: A major debate concerns the fact that many models have been trained on billions of images from the internet without the consent of the artists. As an ethical guideline, avoid creating images "in the style of" currently living and working artists. Referring to artists who have long since passed away (like Rembrandt or Monet) is generally less problematic.
2. Bias: The AI Reflects Its Data
An AI model is an echo of the data it has been fed. Because the internet is full of stereotypes and uneven representation, the AI will inevitably reflect this.
- Examples of bias: If you write a prompt for "a professor", chances are you will get an image of an older white man. A prompt for "a nurse" will likely generate a woman. This is not because the AI is "evil", but because it has statistically seen the most images that reinforce these stereotypes.
- How you counteract bias: Be deliberately specific to create inclusive and representative material. Instead of "a group of engineers", write "a group of engineers with different ethnicities and genders collaborating at a drafting table". As a user, you have a unique opportunity and a responsibility to actively guide the AI towards creating images that reflect the diversity that exists in society.
3. Source Criticism for the Creator
Why should you be source-critical of an image you created yourself? The answer is that you did not create it from scratch; the AI did, and the AI is an unreliable source.
- The AI can be wrong: As we mentioned in moment 4, the AI can generate images with factual errors. A Roman legionary may have a medieval helmet, a diagram of photosynthesis may be incorrect, and a map may show invented cities.
- You are the publisher: When you use an AI-generated image in your lesson material, it is you who vouches for its content. You are the final editor and fact-checker.
- Basic rule: Never assume that an AI-generated image is factually accurate. Use your own expert knowledge to verify every detail that is meant to convey facts. Treat the AI as an extremely creative but unreliable assistant.
Next Steps
With this important ethical foundation in place, we are ready to look at the concrete tools. In the next moment, "Comparison of Tools: Midjourney, DALL-E 3, etc.", we go through some of the most popular platforms so you can choose the right tool for your needs.

