Run AI Locally with Ollama: A Guide for School Staff

This course gives school staff practical knowledge to take control of their AI usage. We walk through the advantages of running AI models locally, from improved data security when handling sensitive information to the ability to work completely offline. You will learn to install and manage Ollama, a powerful tool that makes the process surprisingly simple. The core of the course is a hands-on walkthrough and benchmark of several popular AI models. We evaluate their strengths and weaknesses in a Swedish school context so you can choose the right tool for your specific tasks, whether that means creating materials, analyzing texts, generating code, or getting pedagogical support. Everything is tested on a standard computer (Ubuntu with an RTX 4070, 8 GB VRAM) to give a realistic picture of what is possible outside of cloud services.
Course modules
Discover why local AI models are a powerful and secure alternative to cloud-based services for educators.
Learn how to easily install and use Ollama to run an AI model on your own computer with a single command.
A transparent review of the testing process, including the exact questions and evaluation criteria used to compare the AI models.
A deep dive into Gemma3:12b, a model that excels in everything from factual knowledge to pedagogy and linguistic quality.
Get to know Qwen3:8b, a model that stands out for strong logical reasoning, excellent code generation, and high linguistic quality.
Llama3.1:8b is lightning fast and strong on facts, but its tendency to hallucinate and its weak pedagogical answers call for caution.
An analysis of DeepSeek-R1:8b, a model that can generate code but fails at basic facts and language.
Mistral:7b is a fast and popular model, but in our tests it proved to have fundamental flaws in logic and understanding.
A clear table and analysis that helps you choose the right local AI model for your specific needs and work tasks.
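To give a feel for how simple the workflow described above is, here is a minimal sketch of the install-and-run steps on Ubuntu. It uses Ollama's official install script and one of the model tags reviewed in the course; note that running the install script requires trusting its source, and the exact model tags assume they are still published in the Ollama library.

```shell
# Install Ollama on Ubuntu via the official install script
curl -fsSL https://ollama.com/install.sh | sh

# Download and chat with a reviewed model in one command
# (swap the tag for qwen3:8b, llama3.1:8b, deepseek-r1:8b, or mistral:7b)
ollama run gemma3:12b

# List the models you have downloaded locally
ollama list
```

Once a model is pulled, everything runs on your own machine: no prompt or document leaves the computer, which is the core data-security argument of the course.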
