Produce language, don't just recognize it. Every exercise demands output, not passive consumption.
Traditional language learning focuses on recognition: pick the right answer, read a text, listen to audio. But recognizing language is not the same as being able to produce it.
YAKKI EDU flips the paradigm. Every exercise requires language production.
Based on the Output Hypothesis (Merrill Swain, 1985): producing language forces the brain to notice gaps in knowledge.
Vygotsky (1978): The Zone of Proximal Development
Each student exists between what they already know and what's still beyond reach. YAKKI operates precisely in this zone — challenging enough to learn, but not so hard you give up.
Implementation: Scaffolding Decay (sketched in code below)
Level 5: Full support (hints, 2 choices)
Level 4: 3 choices
Level 3: 4 choices
Level 2: Partial free answer
Level 1: Full free answer
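A minimal Kotlin sketch of how this decay schedule could be encoded; the type and function names are illustrative, not YAKKI's actual API.

```kotlin
// Hypothetical encoding of the decay schedule above; names are illustrative.
sealed interface Support
data class MultipleChoice(val options: Int, val hints: Boolean = false) : Support
data class FreeAnswer(val partial: Boolean) : Support // partial = some structure still given

fun supportFor(level: Int): Support = when (level) {
    5 -> MultipleChoice(options = 2, hints = true) // full support
    4 -> MultipleChoice(options = 3)
    3 -> MultipleChoice(options = 4)
    2 -> FreeAnswer(partial = true)                // partial free answer
    1 -> FreeAnswer(partial = false)               // full free answer
    else -> error("Unknown scaffolding level: $level")
}
```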
Bjork (1994)
The learning paradox Bjork calls "desirable difficulties": easy tasks feel good but create weak memories; challenging tasks feel frustrating but build lasting knowledge.
Implementation: 15-20% Error Rate
Too many correct answers? System increases difficulty. Too many errors? System adjusts down. Optimal friction maintained automatically.
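One way such a controller might work, assuming a rolling window over recent answers; the window size and one-step adjustments are placeholder values, not YAKKI's tuned ones.

```kotlin
// Hypothetical controller keeping the error rate in the 15-20% band.
// Lower level = less support = harder (matches the scaffolding scale above).
class DifficultyController(private var level: Int = 3, private val window: Int = 20) {
    private val results = ArrayDeque<Boolean>() // true = error

    fun record(wasError: Boolean): Int {
        results.addLast(wasError)
        if (results.size > window) results.removeFirst()
        if (results.size == window) {
            val errorRate = results.count { it }.toDouble() / window
            when {
                errorRate < 0.15 && level > 1 -> { level--; results.clear() } // too easy: harder
                errorRate > 0.20 && level < 5 -> { level++; results.clear() } // too hard: easier
            }
        }
        return level
    }
}
```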
Pedagogical Matrix
Errors are not failures — they're fuel for growth. YAKKI doesn't just count mistakes; it uses them to personalize every future session.
Implementation: 104 Grammar Rules
Each error maps to a specific rule (e.g., GR-TENSE-PRES-SIMPLE-001). The words involved go to the student's vocabulary list. 3 correct uses = mastery.
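In code, one such record might look like this; the field names are assumptions based on the description above.

```kotlin
// Sketch of an error record; field names are assumptions, not the real schema.
data class ErrorRecord(
    val word: String,         // vocabulary item involved in the error
    val ruleId: String,       // e.g. "GR-TENSE-PRES-SIMPLE-001"
    val context: String,      // sentence as the student produced it
    var correctUses: Int = 0, // incremented on each later correct use
) {
    val mastered: Boolean get() = correctUses >= 3 // 3 correct uses = mastery
}
```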
Natural Exposure
The student never knows which words are being practiced. They see normal text, unaware that specific vocabulary was selected based on their error matrix.
Implementation: LLM-Powered Content
Error words are included in prompts to Gemini 2.0 Flash. The AI generates content that naturally weaves in problem vocabulary. Exposure feels organic.
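A sketch of what that prompt assembly could look like; the template wording is an assumption, and the actual request to Gemini 2.0 Flash is out of scope here.

```kotlin
// Hypothetical prompt builder; the template text is an assumption.
fun buildReadingPrompt(level: String, topic: String, problemWords: List<String>): String =
    """
    Write a short reading text at CEFR level $level about $topic.
    Naturally include these words without drawing attention to them:
    ${problemWords.joinToString(", ")}.
    Keep roughly three familiar words for every challenging one.
    """.trimIndent()
```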
YAKKI EDU isn't a textbook with gamification. It's real games with learning built into the mechanics.
False friends, L1 interference
Match words that look similar but mean different things. The system knows your native language (Hebrew, Russian, Arabic) and selects traps specific to your L1.
magazine ≠ магазин (Russian: "shop")
accurate ≠ аккуратный (Russian: "neat, tidy")
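A trap table keyed by the student's L1 can be as simple as the sketch below; the Russian pairs are the two examples above, and entries for Hebrew and Arabic would follow the same shape.

```kotlin
// Illustrative trap table keyed by L1 code; contents are from the examples above.
val falseFriends: Map<String, List<Pair<String, String>>> = mapOf(
    "ru" to listOf(
        "magazine" to "магазин (shop, not a magazine)",
        "accurate" to "аккуратный (neat, not accurate)",
    ),
)

fun trapsFor(l1: String): List<Pair<String, String>> = falseFriends[l1].orEmpty()
```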
Word order, syntax
An intercepted message arrives scrambled. Rebuild the correct word order. Token physics: words can be dragged and "snap" into correct positions.
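Stripped of the drag-and-snap UI, the core loop is just shuffle and compare; this sketch assumes whitespace tokenization.

```kotlin
// Core of the game minus the UI: scramble a sentence, check a rebuild attempt.
fun scramble(sentence: String): List<String> {
    val tokens = sentence.split(" ")
    var shuffled = tokens.shuffled()
    while (shuffled == tokens && tokens.size > 1) shuffled = tokens.shuffled()
    return shuffled
}

fun isSolved(attempt: List<String>, original: String): Boolean =
    attempt == original.split(" ")
```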
Error detection, attention to detail
Grammar errors are hidden in the text. Find and fix them before time runs out. Faster discovery = more points.
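One plausible scoring curve for "faster = more points", with points decaying linearly toward the time limit; the constants are placeholders, not YAKKI's actual values.

```kotlin
// Assumed scoring curve: points decay linearly until the timer runs out.
fun pointsForFix(secondsElapsed: Int, limit: Int = 60, maxPoints: Int = 100): Int =
    if (secondsElapsed >= limit) 0 else maxPoints * (limit - secondsElapsed) / limit
```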
Fluency, precision, automaticity
Cloze exercises under time pressure. Missing word = target. Three shots (attempts) per target. Misses cost Precision Points (attempt loop sketched below).
Errors from Intel Reader auto-queue here
3 consecutive correct = mastery
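The three-shot mechanic in miniature; the 5-point miss penalty is an assumed value.

```kotlin
// Sketch of the three-shot cloze mechanic; the miss penalty is assumed.
fun runTarget(target: String, attempts: List<String>): Pair<Boolean, Int> {
    var penalty = 0
    for (answer in attempts.take(3)) { // three shots per target
        if (answer.equals(target, ignoreCase = true)) return true to penalty
        penalty += 5 // each miss costs Precision Points
    }
    return false to penalty // all shots missed: target stays in the queue
}
```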
Deep comprehension, thought formulation
This isn't just reading. It's a mission:
1. Receive Dossier
Text adapted to your level (A1-C2)
2. Answer Questions
Bloom's Taxonomy, levels 1-6
3. Dual Scoring
70% Meaning + 30% Form (see the formula below)
4. Error Tracking
Words & rules → matrix
1:3 Rule: For every challenging word, three familiar ones. Optimal comprehension without overload.
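Step 3's dual score reduces to one weighted line; inputs are normalized to the 0..1 range.

```kotlin
// The 70/30 dual score from step 3.
fun dualScore(meaning: Double, form: Double): Double = 0.7 * meaning + 0.3 * form

// Example: strong comprehension, shaky grammar still passes on meaning:
// dualScore(meaning = 0.9, form = 0.6) == 0.81
```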
Error recorded: word, rule ID (e.g., GR-ARTICLE-DEF-001), context ("I saw dog in park")
Saved to: student_vocabulary table
System fetches student's error matrix. Problem words included in LLM prompt. Gemini generates content targeting weak areas.
Student doesn't know which words are being practiced. They see regular text. Specific vocabulary woven in invisibly.
3 correct uses in varied contexts = word removed from error matrix. Learned, not just memorized.
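A sketch of that mastery rule, with a mutable map standing in for the student_vocabulary table; counting distinct sentences is one reading of "varied contexts".

```kotlin
// Mastery rule: 3 correct uses in distinct contexts clears the word.
fun recordCorrectUse(
    matrix: MutableMap<String, MutableSet<String>>,
    word: String,
    context: String,
) {
    val contexts = matrix.getOrPut(word) { mutableSetOf() }
    contexts.add(context)
    if (contexts.size >= 3) matrix.remove(word) // learned, not just memorized
}
```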
Every student is a vector in skill space. Progress is measured mathematically.
Benchmark vector: the ideal profile for each CEFR level. A B1 student should know Present Perfect at 80%, articles at 70%.
Delta vector: the student's deviation from the benchmark. Positive = strength, negative = weakness.
Convergence: when the Frobenius norm of the delta (its overall magnitude) approaches zero, the target level is reached (see the worked example and code below).
Student "Dima" (B1):
Present Perfect: +15% (above norm)
Articles: -25% (needs work)
Conditionals: +5% (on track)
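The same arithmetic in Kotlin, using Dima's deltas; the 60% conditionals benchmark is an assumption (the 80% and 70% figures come from the text), and the norm here is the plain Euclidean length of the delta vector.

```kotlin
import kotlin.math.sqrt

// Dima's profile worked through: delta = student - benchmark, per skill.
val b1Benchmark = mapOf("presentPerfect" to 0.80, "articles" to 0.70, "conditionals" to 0.60)
val dima        = mapOf("presentPerfect" to 0.95, "articles" to 0.45, "conditionals" to 0.65)

fun delta(student: Map<String, Double>, benchmark: Map<String, Double>): Map<String, Double> =
    benchmark.mapValues { (skill, target) -> (student[skill] ?: 0.0) - target }

fun norm(d: Map<String, Double>): Double = sqrt(d.values.sumOf { it * it })
// delta(dima, b1Benchmark) -> {+0.15, -0.25, +0.05}; norm ≈ 0.30, shrinking toward 0
```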
Intel Points
Quantity of work completed
Precision Points
Quality and accuracy
Recruit → Agent → Senior Agent → Handler → Director
Hidden achievements: "First Blood", "Grammar Master", "Speed Demon"
Mobile App
Kotlin + Jetpack Compose
Server
Rust + Axum + SQLite
AI Engine
Google Gemini 2.0 Flash
Architecture
Clean Architecture + MVVM
On-device processing where possible. Gemini Nano for offline (planned). Teachers see only their students. Students see only their own data.
Join our pilot program or download the app to experience the YAKKI method.