LORASTRAL | Open-Source Educational Language Model


Looking for the LORA app? Go to LORA (iOS & Android)

Lorastral-24B

Lorastral-24B is an open-source, child-friendly German language model fine-tuned for safe and engaging educational storytelling. It is optimized for ages 6–12 with age-aligned prompts and structured outputs.

Lorastral-24B is built on Mistral Small 24B and fine-tuned for readability and safety in educational contexts.

Features

  • Based on Mistral Small 24B (open-source foundation)
  • Age-adaptive outputs for primary school ages
  • Child-appropriate language optimized for clarity
  • Structured storytelling for learning through narratives
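
The age-adaptive behaviour above is driven by prompting. As an illustration only (the model card does not document an official prompt format, so the system-prompt wording, age bands, and function name below are assumptions), an age-aware prompt builder might look like this:

```python
# Hypothetical sketch: the prompt wording and age bands are assumptions,
# not the documented Lorastral-24B prompt format.
AGE_BANDS = {
    (6, 8): "Erkläre in sehr kurzen Sätzen und mit einfachen Wörtern.",
    (9, 12): "Erkläre in kurzen Sätzen; einzelne Fachwörter sind erlaubt.",
}

def build_prompt(topic: str, age: int) -> str:
    """Return a German system prompt tuned to the child's age."""
    for (low, high), style in AGE_BANDS.items():
        if low <= age <= high:
            return (
                f"Du bist LORA, ein kindgerechter Geschichtenerzähler. {style}\n"
                f"Thema: {topic}"
            )
    raise ValueError("age outside the supported range 6-12")
```

The resulting string would then be passed as the system prompt when querying the model.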

Readability Benchmarks

LORA outperforms leading models on readability benchmarks for explaining German terms, making its content more accessible to young learners. It is also the only model in this comparison that differentiates between age groups (6–8 and 8–10 years), delivering precisely tailored educational content:

Model                  Flesch Reading Ease ↑   Wiener Sachtextformel ↓   Avg Sentence Length ↓   Avg Word Length ↓
Lorastral-24B (LORA)   80.24                   2.70                      9.06                    1.39
Mistral-8B             71.70                   4.22                      14.92                   1.42
GPT-4o                 77.17                   3.09                      13.89                   1.37
Gemini 1.5 Pro         80.36                   2.73                      12.94                   1.34
Claude 3.5 Sonnet      44.34                   8.83                      42.29                   1.41

A higher Flesch Reading Ease score and a lower Wiener Sachtextformel score both indicate more readable text; LORA's strong results on both metrics confirm its readability for children.

🧒 Readability (per age group)

Flesch Reading Ease (German)

  • Grades 1–2: 77.99
  • Grades 3–4: 75.80
  • Grades 5–6: 76.84 → Above 70, suitable for CEFR A1–A2.

Wiener Sachtextformel

  • Grades 1–2: 3.13
  • Grades 3–4: 3.42
  • Grades 5–6: 3.28 → Below 5, age-appropriate.
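
Both metrics can be approximated directly from text. The sketch below uses Amstad's German adaptation of Flesch Reading Ease (FRE_de = 180 − ASL − 58.5 × ASW, with ASL = words per sentence and ASW = syllables per word) and the first Wiener Sachtextformel; the regex-based syllable counter is a crude assumption, so scores are approximate:

```python
import re

def syllables_de(word: str) -> int:
    # Crude estimate: count vowel groups (including umlauts); at least 1.
    return max(1, len(re.findall(r"[aeiouäöüy]+", word.lower())))

def _tokenize(text: str):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-zäöüß]+", text.lower())
    return sentences, words

def flesch_reading_ease_de(text: str) -> float:
    # Amstad's German formula: FRE_de = 180 - ASL - 58.5 * ASW
    sentences, words = _tokenize(text)
    asl = len(words) / len(sentences)                       # words per sentence
    asw = sum(syllables_de(w) for w in words) / len(words)  # syllables per word
    return 180 - asl - 58.5 * asw

def wiener_sachtextformel_1(text: str) -> float:
    # First Wiener Sachtextformel; approximates a German school grade level.
    sentences, words = _tokenize(text)
    ms = 100 * sum(syllables_de(w) >= 3 for w in words) / len(words)  # % words with 3+ syllables
    sl = len(words) / len(sentences)                                  # mean sentence length
    iw = 100 * sum(len(w) > 6 for w in words) / len(words)            # % words longer than 6 letters
    es = 100 * sum(syllables_de(w) == 1 for w in words) / len(words)  # % one-syllable words
    return 0.1935 * ms + 0.1672 * sl + 0.1297 * iw - 0.0327 * es - 0.875
```

Exact scores depend on tokenization and syllable counting, so a sketch like this will not reproduce the benchmark table precisely.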

📊 Benchmarks

The model was evaluated on standard zero-shot multiple-choice benchmarks using the Language Model Evaluation Harness (lm-eval-harness) [Gao et al., 2024]:

Task            Accuracy   ± SE       Normalized Accuracy   ± SE
ARC-Easy        85.40 %    ± 0.72 %   86.11 %               ± 0.71 %
ARC-Challenge   62.54 %    ± 1.41 %   66.21 %               ± 1.38 %
Winogrande      76.24 %    ± 1.20 %   n/a                   n/a

Founders

Dima Rubanov and Matthias Neumayer

At LORA, we recognize that gender bias in AI systems is a critical issue that must be addressed head-on. Many AI models reflect and even amplify real-world biases, including outdated gender stereotypes: language models may, for example, associate doctors and engineers with men while assigning roles like nurse and teacher to women. This bias stems from the human-generated data used to train AI. However, we believe AI also presents an opportunity to move beyond human biases and shape a more equitable future. That's why at LORA we are committed to developing trustworthy, ethical AI for children.

Core Team

Supported by

Fachakademie Sozialpädagogik München Mitte / TRUSTIFAI

Funded by

  • BMDW
  • AWS

🤝 Interested in Using Our API?

Want to integrate LORA's child-friendly AI capabilities into your educational application? We're currently in beta and looking for partners.

Contact Us for API Access