How I built an AI tool that reduces lesson planning time by 75% for 800,000+ Filipino public school teachers.
THE CHALLENGE
Filipino K-12 public school teachers spend 2-4 hours per week creating Daily Lesson Logs (DLLs)—a mandatory submission format required by the Department of Education. This administrative burden directly competes with actual teaching preparation time.
VALIDATED PAIN POINTS
Interview quotes (omitted here) came from a Grade 4 teacher in Quezon City, a Grade 2 teacher in Cebu, and a high school teacher in the Bicol Region.
MARKET ANALYSIS
| Solution | Critical Gap |
|---|---|
| Manual Creation | 2-4 hours per week; inconsistent quality |
| Template Libraries | Static; don't reflect new MATATAG curriculum |
| Paid SaaS Tools | ₱299-599/month; unaffordable for most teachers |
| ChatGPT/Generic AI | Requires prompt engineering; output doesn't match the DLL format |
DISCOVERY & VALIDATION
| Method | Participants | Key Focus |
|---|---|---|
| Teacher interviews | 8 teachers (Grades 1-10) | Workflow pain points, current solutions |
| Facebook group observation | 3 communities (~50K members) | Common complaints, shared resources |
| Competitive analysis | 6 existing tools | Pricing, features, format compliance |
| Curriculum document review | DepEd MATATAG guidelines | Required format, learning competencies |
| Finding | Product Implication |
|---|---|
| Teachers spend 70% of planning time on format, not content | Automate the structure; let teachers focus on customization |
| MATATAG curriculum (2024) invalidated existing templates | Build curriculum data directly into the product |
| ₱500/month is the absolute maximum teachers can spend | Must be free; monetize through ads if needed |
| Teachers don't trust cloud storage with their work | Local storage first; no account required |
| Many teachers use phones, not computers | Mobile-responsive is P0, not P1 |
| Filipino and Mother Tongue subjects require specific dialects | Support 12+ regional languages for generation |
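Two of these findings (no accounts, local storage first) define the persistence layer. A minimal sketch, assuming browser `localStorage` as the backend; the names (`SavedPlan`, `PLANS_KEY`, `savePlan`) are hypothetical, not the product's actual API:

```typescript
// Lesson plans persist entirely in the teacher's browser: no account,
// no server-side storage, one localStorage key holding a JSON array.

interface SavedPlan {
  id: string;
  grade: number;
  subject: string;
  content: string; // generated DLL body
}

// Minimal subset of the Web Storage interface, so the logic is
// testable outside a browser (window.localStorage satisfies it directly).
interface KVStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

const PLANS_KEY = "dll_plans";

export function loadPlans(store: KVStore): SavedPlan[] {
  const raw = store.getItem(PLANS_KEY);
  return raw ? (JSON.parse(raw) as SavedPlan[]) : [];
}

export function savePlan(store: KVStore, plan: SavedPlan): void {
  const plans = loadPlans(store);
  plans.push(plan);
  store.setItem(PLANS_KEY, JSON.stringify(plans));
}
```

Because nothing leaves the device, the privacy concern from the findings table is resolved by construction rather than by policy.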
PRIORITIZATION & SCOPE
| Feature | Rationale |
|---|---|
| AI-generated DLL matching DepEd format exactly | Core value proposition |
| All K-12 grade levels and subjects | Can't exclude any teacher segment |
| MATATAG curriculum compliance (Grades 1-5) | New curriculum = highest demand |
| Export to Word/PDF | Required for submission |
| Mobile-responsive interface | 60%+ teachers access via phone |
| Zero account requirement | Reduce friction to zero |
| Excluded Feature | Rationale |
|---|---|
| User accounts | Adds complexity; teachers expressed privacy concerns |
| Cloud sync | Local storage sufficient for MVP |
| Collaboration/sharing | Single-teacher use case is 95%+ of demand |
| Trade-off | Choice | Rationale |
|---|---|---|
| Build curriculum database vs. let AI infer competencies | Build comprehensive curriculum database | AI hallucination risk too high for official documents |
| Provide shared API vs. users bring their own key | Users provide their own free Gemini API key | Zero ongoing costs; Gemini free tier is generous enough |
| Native mobile app vs. responsive web | Responsive web only | Single codebase; no app store delays; instant updates |
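The bring-your-own-key trade-off can be sketched as a client-side request builder for Google's public Generative Language REST endpoint. The helper name, default model, and exact payload shape here are my assumptions, not the product's verified internals:

```typescript
// The teacher's own Gemini API key is sent directly from the browser to
// Google's API, so the product carries zero server-side inference cost.

const GEMINI_ENDPOINT =
  "https://generativelanguage.googleapis.com/v1beta/models";

export function buildGeminiRequest(
  apiKey: string,
  prompt: string,
  model = "gemini-1.5-flash",
) {
  return {
    // API key travels as a query parameter, per the REST API convention
    url: `${GEMINI_ENDPOINT}/${model}:generateContent?key=${encodeURIComponent(apiKey)}`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        contents: [{ parts: [{ text: prompt }] }],
      }),
    },
  };
}
```

The key never touches any server I operate; it goes from the teacher's browser straight to Google, which is what makes the zero-ongoing-cost model viable.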
EXECUTION & ITERATION
| Phase | Duration | Deliverable |
|---|---|---|
| Phase 1: Core Engine | Week 1-2 | AI generation + basic UI + single grade/subject |
| Phase 2: Curriculum Data | Week 2-3 | All grades, subjects, MATATAG competencies |
| Phase 3: Export & Polish | Week 4-5 | Word/PDF export, print formatting, mobile optimization |
| Phase 4: Production | Week 5-6 | SEO, analytics, deployment, monitoring |
| Testing Gate | Pass Criteria |
|---|---|
| Teacher acceptance | "I would submit this to my principal" |
| Structural accuracy | <2 structural errors per generated plan |
| Format reliability | <5% regeneration rate due to format issues |
| Feedback | Response |
|---|---|
| "The assessment section is too generic" | Added subject-specific assessment templates to prompt engineering |
| "I can't find my dialect" | Expanded Mother Tongue support to 12 regional languages |
| "Exported Word file loses table borders" | Rebuilt export using custom HTML-to-DOCX conversion with inline styles |
| "Plan doesn't match new Grade 3 MATATAG competencies" | Updated curriculum database from latest DepEd memorandum |
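The fix for "the assessment section is too generic" amounts to splicing a per-subject assessment template into the generation prompt instead of one generic instruction. A sketch with illustrative templates (the real templates and function names differ):

```typescript
// Subject-specific assessment instructions are looked up at prompt-build
// time; unknown subjects fall back to a generic formative-assessment line.

const ASSESSMENT_TEMPLATES: Record<string, string> = {
  Mathematics: "Include a 5-item worked-problem quiz with an answer key.",
  Science: "Include a hands-on observation task with a scoring rubric.",
  Filipino: "Include a comprehension passage with 3 open-ended questions.",
};

const DEFAULT_ASSESSMENT =
  "Include a formative assessment aligned to the lesson objectives.";

export function buildDllPrompt(
  grade: number,
  subject: string,
  competency: string,
): string {
  const assessment = ASSESSMENT_TEMPLATES[subject] ?? DEFAULT_ASSESSMENT;
  return [
    `Generate a DepEd Daily Lesson Log for Grade ${grade} ${subject}.`,
    `Target competency: ${competency}.`,
    `Assessment requirements: ${assessment}`,
  ].join("\n");
}
```

Keeping the templates in a plain lookup table means new subjects can be tuned from teacher feedback without touching the generation code.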
The MATATAG curriculum was new (2024), with limited structured data sources. I manually extracted learning competencies from DepEd PDF documents, validated them against teacher feedback, and structured them into a queryable format. This took 40% of total development time but eliminated the #1 teacher complaint about AI-generated plans.
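The queryable format can be sketched as a typed record store keyed by grade, subject, and quarter, so generation pulls competencies from vetted data rather than asking the model to recall them. The records and competency codes below are illustrative stand-ins, not actual DepEd entries:

```typescript
// Competencies extracted from DepEd PDFs live in a flat, filterable array;
// the lookup is the only thing the generation prompt ever sees.

interface Competency {
  grade: number;
  subject: string;
  quarter: 1 | 2 | 3 | 4;
  code: string; // DepEd-style competency code (illustrative)
  text: string;
}

const CURRICULUM: Competency[] = [
  { grade: 3, subject: "Science", quarter: 1, code: "S3-Ia-1",
    text: "Describe ways of sorting materials." },
  { grade: 3, subject: "Science", quarter: 1, code: "S3-Ic-2",
    text: "Classify solids, liquids, and gases." },
];

export function findCompetencies(
  grade: number,
  subject: string,
  quarter: number,
): Competency[] {
  return CURRICULUM.filter(
    (c) => c.grade === grade && c.subject === subject && c.quarter === quarter,
  );
}
```

Grounding generation in this table is what removes the hallucination risk flagged in the trade-offs: the model fills in activities around a competency, but never invents the competency itself.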
Teachers submit DLLs in Word format. Standard HTML-to-Word libraries produced broken tables. I implemented a custom export pipeline that preserves table structures, handles merged cells correctly, and maintains print formatting—validated by opening exports in Word 2016, 2019, and 365.
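The core of the export fix is that Word's HTML import ignores external stylesheets, so border and padding rules must be inlined onto every table and cell before conversion. A string-based sketch (a production pipeline would walk a real DOM; this regex version is illustrative only):

```typescript
// Inline the table styles Word needs: border-collapse on the table,
// explicit borders and padding on every cell that lacks a style attribute.

const CELL_STYLE = "border:1px solid #000;padding:4px;vertical-align:top;";

export function inlineTableStyles(html: string): string {
  return html
    .replace(
      /<table(?![^>]*style=)/g,
      '<table style="border-collapse:collapse;"',
    )
    .replace(/<(td|th)(?![^>]*style=)/g, `<$1 style="${CELL_STYLE}"`);
}
```

Styling each cell individually is what survives the trip through HTML-to-DOCX conversion and keeps the borders visible when the file is opened in Word.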
OUTCOMES & IMPACT
Key metrics:
- Time to first lesson plan
- Generation success rate
- Export completion rate
- Mobile usage share
| Expectation | Reality | Learning |
|---|---|---|
| Teachers would customize heavily after generation | Most export with minimal edits | AI output quality was higher than anticipated; simplify the editing UX |
| API key setup would be a friction point | Teachers complete setup quickly with the guide | Clear step-by-step instructions removed the barrier |
| Print would be primary export method | Word export is 3x more popular than print | Teachers share files digitally with supervisors more than printing |
KEY TAKEAWAYS
For education tools, "close enough" isn't acceptable—teachers face real consequences for format errors. Investing in curriculum data accuracy was the right call.
Every step I eliminated (accounts, payments, configuration) increased conversion. Teachers wanted to solve their problem, not learn a new tool.
Facebook teacher groups surfaced more authentic pain points in one week than formal interviews would have in a month.
The lesson plan generator isn't the end—the Word file teachers submit is. I should have prioritized export fidelity earlier.
I treated export as a Phase 3 concern; it should have been Phase 1. The export format defines the entire data structure.
Manual updates from DepEd PDFs don't scale. I'd invest in structured data extraction for ongoing curriculum changes.
I delayed analytics setup; earlier data would have informed feature prioritization faster.
Remote feedback missed UX friction I only caught watching a teacher use the tool in real-time.