
Many schools have digital strategies, but few have a concrete plan for AI competence. At the same time, leading experts argue that the key to mastering AI lies not primarily in code, but in a deep understanding of language, culture, and human behavior.
The Core Argument: Humanists Understand AI
In a LinkedIn post on July 21, 2025 (https://www.linkedin.com/feed/update/urn:li:activity:7352761853527089152/), Wharton professor Ethan Mollick highlights a perspective that challenges the techno-centric view of artificial intelligence. He suggests that those trained in natural sciences and technology are often worse at getting AI to perform complex tasks than those with a background in humanities or social sciences.
The reason is that working effectively with large language models depends on the ability to formulate precise instructions, understand context, and evaluate nuance. Mollick writes: "But LLMs give both cultures [humanities and sciences] a chance to contribute in ways that haven't been possible for a long time."
While critics argue that a foundational technical understanding is indispensable for avoiding passive and uncritical use, Mollick's point is that the strategic potential of AI is unlocked only when the technology is combined with humanities knowledge.
From Insight to Strategy: Three Key Areas
For school leadership, this means the way competence is valued must be broadened. Humanities skills are no longer just a soft asset but a hard strategic advantage.
Responsible Procurement and Ethics
Purchasing AI tools without a thorough pedagogical and ethical analysis is a major risk. Staff with a humanities background are trained in source criticism and the analysis of complex texts, which is crucial for evaluating what inherent bias an AI service carries, whether it complies with data privacy regulations (such as GDPR), and what actual pedagogical value it holds beyond the sales pitch. The result is better investments and reduced risk.
Future-Proof Professional Development
Offering basic courses in new tools is not enough. A proactive organization invests in building a deeper, critical AI literacy. By prioritizing professional development for teachers in languages, history, and social studies, schools create internal experts who can lead the faculty in advanced and responsible AI use. This is a direct investment in the organization's long-term adaptability.
Drive Real Innovation
Technical innovation occurs when a tool solves a real problem in a new way. The most groundbreaking applications of AI in schools will likely not be technical but pedagogical. Seeing new possibilities for learning, assessment, and creativity requires the kind of problem-solving and inventiveness trained within the humanities.
Next Steps: A Three-Step Action Plan
Translating insight into action requires a clear plan. Here is a proposal for getting started:
Step 1: Inventory and Map
The principal or school head takes responsibility for identifying key individuals with specialized humanities competence within the organization. Simultaneously conduct a simple risk analysis: where are we most vulnerable to unethical or ineffective AI use?
Step 2: Establish and Formulate
A cross-functional working group of humanists and tech specialists drafts the school's first AI policy, at most one page. Focus on principles for use, not on specific tools.
Step 3: Test and Evaluate
The working group and selected teams choose a defined area, for example text production in grade 9, for a small-scale pilot. Measure the effect: did we save time, increase quality, and strengthen students' critical thinking?
Strategically integrating humanities competence into the school's AI work is ultimately about building a more robust, intelligent, and responsible organization: a school that achieves pedagogical sovereignty in the digital age.
