Security and Ethics: Navigating a More Reliable AI Landscape

Reading time: approx. 10 minutes

We have explored the enormous potential of ChatGPT-5, from its advanced reasoning to its ability to produce complex materials. But with great power comes great responsibility. Paradoxically, a more reliable AI places higher demands on our source criticism and forces us to reevaluate what knowledge, and cheating, mean.

This lesson gives you concrete strategies for handling these challenges and for guiding your students to become responsible and critical users of AI.

What you will learn

  • Why a more reliable AI requires more sophisticated source criticism.
  • Practical strategies for designing tasks that counteract cheating and promote learning.
  • How to establish clear and fair classroom rules for AI use.
  • Your role as a pedagogical leader in a world where AI is a given.

The Fundamentals: The dangerous "almost-truth"

Earlier AI models were often easy to expose when they were wrong. The answers were clumsy or obviously invented. The challenge with ChatGPT-5 is different. According to OpenAI's own evaluations, GPT-5 makes about 45% fewer factual errors than GPT-4o, and in Thinking mode about 80% fewer than o3, on fact benchmarks like LongFact/FactScore. This is a fantastic improvement, but it also creates a new risk: the subtle inaccuracy.

The errors that remain are often well-formulated, credible, and embedded in correct information, making them much harder to detect. Our task as educators is to equip students to identify these "almost-truths".

Part 1: Source Criticism 2.0 – Examining a Convincing AI

When the AI rarely gets obvious facts wrong, we must shift focus from pure fact-checking to examining reasoning, angles, and intentions.

Strategies to teach students:

  1. Triangulation is more important than ever: The principle is simple: always verify the AI's claims, especially those that are central to your work, against at least two independent and reliable sources (neither of which is another AI).

  2. Stress test the argument: One of the most effective methods is to ask the AI to take the opposite position.

    • Example: If a student has received a convincing answer that argues for a certain historical interpretation, they can follow up with: "Thank you. Now act as a critical historian and present the three strongest counterarguments against the interpretation you just gave me."

  3. Use "Personalities" to expose angles: Let students ask the same question to the Nerd, the Cynic, and the Robot. By comparing the answers, it becomes obvious how tone, word choice, and perspective can shape a "truth". (Available for Plus, Pro, and Team; initially applies to text mode and is selected in Settings → Personalization.)

Note that ChatGPT does not show its internal "chain of thought". Instead, ask the model to provide step-by-step explanations and let students critically examine them.

Part 2: Cheating or future? Redefining assessment

If a student can generate a polished essay in 30 seconds, then what are we actually assessing? The answer is that we must design tasks that are either "AI-proof" or, even better, "AI-inclusive".

Strategies for assessment:

  1. Focus on the process, not just the product: Let AI be part of the toolbox, but assess the human work.

    • Task: Let students use ChatGPT-5 to create a first draft of an argumentative text. The actual assessment then lies in their ability to improve, critique, fact-check, and build on the AI's text. Require them to submit a logbook or an annotated version in which they justify their changes.
  2. Value what AI cannot do: Design tasks that require uniquely human abilities.

    • Personal connection: "Compare the main character's dilemma with a situation you yourself have experienced."
    • Local observation: "Go out and observe the architecture on your street. Use AI to identify the styles, but then write your own reflection on how the buildings affect the atmosphere of the area."

Implementation in the classroom: Rules and approach

Be clear, be proactive, and create a dialogue with your students. Banning AI is rarely a sustainable path. Instead, create a simple and clear classroom policy.

Example of an AI traffic light:

  • RED LIGHT (Not allowed):

    • Copying an AI-generated text and submitting it as your own.
    • Using AI for a test or assignment where you have been explicitly told not to use it.
  • YELLOW LIGHT (Allowed, but must be reported):

    • Using AI to get feedback on your text (structure, language, arguments).
    • Using AI as a brainstorming partner to get ideas.
    • Using AI to summarize a long text.
    • Requirement: Describe exactly how AI was used (the prompt, the model version, and which parts the student changed).
  • GREEN LIGHT (Always allowed):

    • Using AI as a reference work to understand a word or a simple concept.
    • Using "Study Mode" to get help with a task. (Activation: Tools → 'Study and learn'. Available for Free, Plus, Pro, and Team; the Edu rollout is gradual.)

An important reminder about privacy

Keep in mind that shared student materials may be subject to both the school's and OpenAI's data protection policies. Use the temporary chats feature, or adjust the memory settings for the conversation, when handling potentially sensitive content.

Next steps

With a solid ethical foundation and practical classroom strategies in place, you are ready to integrate AI into your daily routine. In the next lesson, 'Integration in Your Daily Life: Connect ChatGPT-5 to Your Workflows', we focus on tools and techniques that save you time in your planning and administration.