Tubara Artificial Intelligence (AI) Policy
Version: 1.0
Effective Date: December 18, 2025
Next Review: February 18, 2026
Review Frequency: Bi-monthly (every two months), subject to change based on significant AI developments
Executive Summary
Tubara is committed to providing safe, accurate, and trustworthy educational technology for children. While artificial intelligence offers exciting possibilities for education, we will not implement AI-powered learning assessment features until independent scientific research demonstrates accuracy levels appropriate for use with children. This policy outlines our position on AI, our criteria for future adoption, and our commitment to transparency with parents and educators.
1.0 Policy Statement
1.1 Current Position
As of December 18, 2025, Tubara does not use artificial intelligence for:
❌ Assessing children's learning comprehension
❌ Generating or grading educational quizzes
❌ Making curriculum recommendations based on AI analysis
❌ Any feature that directly impacts a child's educational experience
1.2 Limited Backend Use
Tubara may use AI for:
✅ Backend analytics that do not affect children (e.g., aggregating usage patterns)
✅ Non-child-facing administrative features
✅ Internal research and development (not deployed to users)
All such uses are clearly documented and subject to the same accuracy standards.
2.0 Rationale
2.1 Child Safety First
Our primary responsibility is protecting children's educational development and emotional well-being. Current AI systems (as of December 2025) cannot guarantee the level of accuracy required for educational assessment of children.
Specific concerns:
- AI-generated quiz answers may be incorrect, giving false feedback to children
- Inaccurate assessment could damage a child's confidence or self-perception
- Wrong learning recommendations could waste educational time or create gaps in understanding
- AI "hallucinations" (confidently stated incorrect information) pose particular risks in educational contexts
2.2 Trust Protection
Tubara's value proposition is built on trust and safety. One inaccurate AI assessment could:
- Undermine parent trust in all Tubara features
- Damage our reputation as a safety-first platform
- Create legal and ethical liability
- Harm our relationship with the educational community
2.3 Accuracy Threshold
Current large language models (LLMs), including Claude, GPT-4, and others, typically include disclaimers such as:
"AI can make mistakes. Please double-check important information."
This level of uncertainty is incompatible with direct educational assessment of children. We require >98% verified accuracy before deploying AI in child-facing educational features.
2.4 Regulatory Prudence
The regulatory landscape for AI in education is evolving. By waiting for:
- Clear regulatory guidance
- Industry best practices
- Peer-reviewed validation
...we protect Tubara and our users from potential future compliance issues.
2.5 Competitive Differentiation
By taking a responsible, patient approach to AI adoption, we differentiate Tubara as:
- Trustworthy - We don't chase trends at children's expense
- Honest - We don't over-promise AI capabilities
- Safety-focused - Child well-being is never traded for feature velocity
- Evidence-based - We adopt technology when proven, not when hyped
3.0 AI Adoption Criteria
Tubara will consider implementing AI-powered learning assessment features when ALL of the following criteria are met:
3.1 Scientific Validation
✅ Peer-reviewed research published in reputable educational or computer science journals
✅ Studies demonstrate >98% accuracy in educational assessment tasks
✅ Research includes validation with children in relevant age groups (3-17)
✅ Studies conducted by independent researchers (not only AI companies)
3.2 Industry Best Practices
✅ Clear best practices established by major educational institutions
✅ Successful deployment by respected ed-tech companies (e.g., Khan Academy, Duolingo)
✅ Documented case studies showing positive educational outcomes
✅ No significant incidents or scandals involving AI educational assessment
3.3 Regulatory Clarity
✅ UK Department for Education guidance on AI in learning (if issued)
✅ Ofsted or relevant homeschool authority positions clarified
✅ COPPA compliance guidance for AI and children (if updated)
✅ ICO (Information Commissioner's Office) guidance on AI decision-making affecting children
3.4 Technical Safeguards Available
✅ Ability to implement human oversight layer (e.g., parent review of AI outputs)
✅ Confidence scoring and uncertainty quantification
✅ Audit trails for AI decisions
✅ Ability to explain AI reasoning to parents
✅ Fallback mechanisms if AI fails
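To make the safeguards above concrete, the following is a minimal sketch of how confidence scoring, a human-oversight layer, an audit trail, and a non-AI fallback could be combined in one gating step. All names, the 0.98 threshold, and the routing rules are illustrative assumptions, not deployed Tubara code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical threshold, echoing the >98% figure in Section 2.3.
CONFIDENCE_THRESHOLD = 0.98

@dataclass
class AuditEntry:
    """One audit-trail record for a single AI output (Section 3.4)."""
    timestamp: str
    confidence: float
    decision: str  # "parent_review" or "fallback"

@dataclass
class SafeguardedAssessor:
    """Sketch of a confidence-gated AI layer: every output is either
    routed to parent review (human oversight) or discarded in favour
    of a non-AI fallback, and every decision is logged."""
    audit_trail: list = field(default_factory=list)

    def review(self, confidence: float) -> str:
        if confidence >= CONFIDENCE_THRESHOLD:
            # Even high-confidence output goes to a parent first,
            # never directly to a child.
            decision = "parent_review"
        else:
            # Low confidence: drop the AI output entirely and use
            # the non-AI path (Section 6.0 alternatives).
            decision = "fallback"
        self.audit_trail.append(AuditEntry(
            timestamp=datetime.now(timezone.utc).isoformat(),
            confidence=confidence,
            decision=decision,
        ))
        return decision
```

The key design choice in this sketch is that there is no "auto-show to child" branch at all: the highest-trust path still ends at a parent, matching the policy's human-oversight requirement.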
3.5 Parent Community Readiness
✅ Survey data showing parent trust in AI educational assessment
✅ Demand from Tubara users for AI features
✅ Positive feedback from Alpha/Beta testing with parent oversight
✅ Educational community endorsement (teachers, homeschool associations)
4.0 Monitoring & Review Process
4.1 Bi-Monthly Reviews
Schedule: Every two months (January, March, May, July, September, November)
Review Team: Founder, Technical Lead, Educational Advisor (when appointed)
Review Agenda:
- Research monitoring: New AI accuracy studies published?
- Industry monitoring: Major AI adoptions or incidents in ed-tech?
- Regulatory monitoring: New guidance from educational authorities?
- Technical monitoring: Anthropic/OpenAI accuracy improvements?
- Community monitoring: Parent sentiment and demand?
- Criteria assessment: Any adoption criteria now met?
4.2 Trigger Events
Reviews may occur outside the bi-monthly schedule if any of these occur:
- Major AI accuracy breakthrough announced (e.g., >98% benchmark achieved)
- Significant regulatory guidance issued
- Major ed-tech company validates AI assessment with published results
- Major AI failure or scandal in ed-tech sector (requiring policy update)
- Significant user demand for AI features
4.3 Documentation
Each review will produce:
- Written summary of findings
- Updated assessment against adoption criteria
- Decision: Maintain policy / Update policy / Begin pilot program
- Rationale for decision
- Date of next scheduled review
5.0 What We Monitor
5.1 Academic Research
Databases: Google Scholar, arXiv, JSTOR, ACM Digital Library
Keywords: "AI education assessment accuracy," "LLM quiz generation," "AI learning analytics validation"
Focus: Peer-reviewed studies on AI accuracy in educational contexts
Target: Studies showing >98% accuracy in child-appropriate assessment
5.2 AI Provider Updates
Anthropic: Claude model improvements, benchmark scores, educational use cases
OpenAI: GPT model improvements, accuracy metrics, deployment studies
Google: Gemini developments, educational AI research
Focus: Official accuracy claims, third-party validation, education-specific benchmarks
5.3 Industry Developments
Khan Academy: AI tutoring rollout, validation studies, outcomes data
Duolingo: AI assessment accuracy, case studies
Educational institutions: University and school district AI pilots
Focus: Real-world deployment success, published accuracy data, user satisfaction
5.4 Regulatory Landscape
UK Department for Education: AI in schools guidance
Ofsted: AI in homeschooling / educational assessment
ICO: AI and children's data / automated decision-making
COPPA: Updates regarding AI and child-directed applications
Focus: Official guidance, compliance requirements, legal precedents
5.5 Community Sentiment
Parent surveys: Attitudes toward AI in education
Homeschool associations: Position statements on AI assessment
Educational forums: Teacher and parent discussions
Tubara users: Feature requests, concerns, feedback
Focus: Trust levels, demand, concerns, readiness
6.0 Alternative Approaches (Current Implementation)
While AI assessment is not yet deployed, Tubara provides valuable learning insights through:
6.1 Factual Data Analytics
- Watch time tracking (no AI interpretation)
- Subject category distribution (simple classification)
- Channel and video popularity (usage metrics)
- Learning session patterns (observational data)
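The analytics described above need nothing beyond plain counting. As a minimal sketch, with purely illustrative field names and data (not Tubara's actual schema), aggregating watch time per subject category looks like this:

```python
from collections import defaultdict

def summarize_sessions(sessions):
    """Total minutes watched per subject category.

    Pure arithmetic over recorded usage data: no AI interpretation,
    no inference about comprehension (Section 6.1)."""
    totals = defaultdict(int)  # category -> total minutes watched
    for s in sessions:
        totals[s["category"]] += s["minutes"]
    return dict(totals)

# Illustrative records only, not real user data.
sessions = [
    {"category": "science", "minutes": 12},
    {"category": "maths", "minutes": 8},
    {"category": "science", "minutes": 5},
]
```

The output is a factual summary a parent can interpret themselves, which is the point: the system reports what was watched, and the human draws the educational conclusions.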
6.2 Parent Observation Tools
- Structured reflection prompts
- Note-taking spaces for parent observations
- Conversation starters for parent-child discussions
- Milestone tracking (completion-based, not assessment-based)
6.3 Community Wisdom
- Parent-to-parent advice and methods
- Shared learning strategies
- Community-validated resources
- Peer support for educational assessment
These approaches provide value without AI accuracy risks.
7.0 Transparency Commitment
7.1 Public Disclosure
This AI Policy is:
✅ Published on Tubara's public website
✅ Accessible in our documentation
✅ Referenced in Terms of Service
✅ Mentioned in Privacy Policy
✅ Updated whenever changes occur
7.2 User Communication
When/if we decide to pilot AI features, we will:
- Announce plans to users in advance
- Explain what AI will do and its limitations
- Provide opt-in/opt-out mechanisms
- Clearly label AI-generated content
- Report accuracy metrics transparently
7.3 Marketing Honesty
We commit to:
- Never claiming AI capabilities we don't have
- Not using "AI" as a marketing buzzword without substance
- Honestly describing our technology stack
- Correcting any misperceptions about our use of AI
8.0 Future AI Adoption (When Criteria Met)
8.1 Phased Approach
If/when adoption criteria are met, we will implement a phased rollout:
Phase 1: Alpha Testing with Oversight
- Small group of volunteer families
- Parent review required before AI content shown to children
- Extensive feedback collection
- Accuracy measurement and validation
- Duration: 3-6 months
Phase 2: Beta with Safeguards
- Larger user group
- Optional AI features (opt-in only)
- Clear AI labeling
- Confidence scores displayed
- Human oversight available
- Duration: 3-6 months
Phase 3: General Availability (If Successful)
- Gradual rollout to all users
- Continued monitoring and quality control
- Parent controls (disable AI features)
- Ongoing accuracy reporting
8.2 Features Considered for Future AI Use
When safe and validated, AI could enhance:
- Video content summarization (for parents)
- Subject matter classification (backend analytics)
- Learning pattern insights (with confidence scores)
- Quiz generation (with mandatory parent review)
- Study recommendations (suggestive, not directive)
Always with:
- Clear AI disclosure
- Parent oversight options
- Accuracy metrics displayed
- Opt-out mechanisms
9.0 Accountability
9.1 Responsible Parties
Policy Owner: Founder/CEO
Review Team: Technical Lead, Educational Advisor (when appointed), Parent Advisory Board (when established)
Approval Authority: Founder/CEO must approve any changes to this policy
9.2 Violation Consequences
If AI is deployed without meeting the criteria in this policy:
- Immediate feature suspension
- User notification and apology
- Internal investigation
- Policy review and strengthening
- Public transparency report
9.3 User Feedback
Parents and educators can:
- Submit feedback on this policy via support@tubara.world
- Request policy reviews if they believe criteria are met
- Raise concerns about any AI use in Tubara
- Join our Parent Advisory Board (when established) to inform policy
10.0 Policy Updates
10.1 Version History
- Version: 1.0
- Date: December 18, 2025
- Approved By: Founder
- Changes: Initial policy creation
10.2 Amendment Process
This policy may be amended when:
- Bi-monthly review identifies need for changes
- Trigger event requires policy update
- Regulatory requirements change
- Adoption criteria are met (or modified)
- Community feedback warrants revision
All amendments require:
- Written rationale
- Founder approval
- Public notification (if material changes)
- Updated version published within 7 days
11.0 Contact Information
Questions about this policy?
Email: support@tubara.world
Subject: "AI Policy Inquiry"
We welcome:
- Questions about our position on AI
- Research or evidence supporting AI adoption
- Concerns about AI in educational technology
- Suggestions for policy improvements
We commit to responding within 5 business days.
12.0 Conclusion
Tubara's mission is to provide safe, parent-controlled educational video experiences for children. While artificial intelligence offers exciting possibilities for the future of education, we believe that child safety and educational accuracy must come before technological innovation. We will adopt AI when it's proven safe, not because it's trendy. This policy reflects our commitment to:
- Putting children first
- Respecting parent intelligence and judgment
- Making evidence-based decisions
- Building trust through transparency
- Admitting what we don't know
We look forward to the day when AI technology matures to the point where it can safely enhance children's learning. Until then, we focus on what works: parent curation, factual data, and human wisdom.
Document Prepared By: Tubara Product Team
Effective Date: December 18, 2025
Next Review: February 18, 2026
Status: DRAFT - Awaiting Founder Approval