Run AI Locally with Ollama: A Guide for School Staff

This course gives school staff the practical knowledge to take control of their AI usage. We go through the advantages of running AI models locally, from increased data security when handling sensitive information to the ability to work completely offline. You will learn to install and manage Ollama, a powerful tool that makes the process surprisingly simple. The core of the course is a hands-on walkthrough and benchmark of several popular AI models. We evaluate their strengths and weaknesses in a Swedish school context so that you can choose the right tool for your specific tasks, whether that involves creating materials, analyzing texts, generating code, or getting pedagogical support. Everything is tested on a standard computer (Ubuntu with an RTX 4070, 8 GB VRAM) to give a realistic picture of what is possible outside of cloud services.
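
To give a sense of how simple this is, here is a minimal example of what installation and a first model run can look like in a terminal on Ubuntu. The model tag gemma3:12b is one of the models tested in the course; check the Ollama library for the tags currently available.

    curl -fsSL https://ollama.com/install.sh | sh   # Ollama's official install script for Linux
    ollama run gemma3:12b                           # downloads the model on first run, then opens a chat prompt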

Modules in this course

1. Why Run AI Locally? Security, Cost, and Control

Discover why local AI models are a powerful and secure alternative to cloud-based services for educators.

2. What is Ollama? Your Toolkit for Local AI

Learn how to easily install and use Ollama to run an AI model on your own computer with a single command.

3. Test Methodology: How the Models Were Evaluated

A transparent review of the testing process, including the exact questions and evaluation criteria used to compare the AI models.

4. Model Focus: Gemma3 - Pedagogical and Linguistic Expert

A deep dive into Gemma3:12b, a model that excels in everything from factual knowledge to pedagogy and linguistic quality.

5. Model Focus: Qwen3 - Powerful for Reasoning and Code

Get to know Qwen3:8b, a model that stands out for its strong logical reasoning, excellent code generation, and high linguistic quality.

6. Model Focus: Llama3.1 - Fast but Unreliable

Llama3.1:8b is lightning fast and good with facts, but its tendency to hallucinate and its weak pedagogical performance call for caution.

7. Model Focus: DeepSeek - Uneven Profile with Serious Flaws

An analysis of DeepSeek-R1:8b, a model that can generate code but fails at basic facts and language.

8. Model Focus: Mistral - Fast but Fundamentally Flawed

Mistral:7b is a fast and popular model, but in our tests it proved to have fundamental flaws in logic and understanding.

9. Which Model Should I Choose? A Comparative Summary

A clear table and analysis that help you choose the right local AI model for your specific needs and work tasks.
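
If you want to reproduce the comparison on your own machine, the models covered in the course can be downloaded with Ollama using tags like the ones below. The tags reflect the versions named in this course and may change over time, so check the Ollama library for what is currently published.

    ollama pull gemma3:12b    # the pedagogical and linguistic all-rounder from module 4
    ollama pull qwen3:8b      # the reasoning and code model from module 5
    ollama list               # show downloaded models and their sizes
    ollama run qwen3:8b       # switch between downloaded models with a single command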