The 30-Second Pitch
AI governance is no longer optional. The question is: who controls that governance?
Their Approach
Centralized AI learns from everyone, governed by corporate-written constitutions, optimized for scale.
Our Approach
Federated AI where each community defines its own values, controls its own data, and the AI adapts to them - not the other way around.
Same capabilities. Different power structure.
What is Home AI?
AI that lives in your Village - where your values are the only values that matter.
"Home AI" is an umbrella concept that encompasses three fundamental ideas:
Agentic Governance
AI agents as supervised junior colleagues - capable but accountable to your community.
Pluralistic Decision Making
Multiple value systems coexist without being averaged into a lowest common denominator.
Sovereignty
Your data stays yours. Your rules are the only rules. Real exit rights.
"The common theme that provides the mantle of safety in Home AI is Sovereignty - not isolation, but the capacity to participate in larger networks without surrendering local control."
Think of AI agents not as apps waiting for you to click a button, but as software entities that can hold a goal in mind and work towards it on your behalf. They combine reasoning, planning, and memory - while checking constraints that you have defined in advance.
Home AI behaves less like a calculator and more like a junior colleague who can read, write, and act - all while remaining accountable to your community.
Agentic Governance
From governing static AI models to supervising autonomous AI agents.
What is Governance?
The way a community sets the rules of the game for its AI, decides who gets to change those rules, and ensures that the system's behaviour stays aligned with shared values over time. It's not a policy document - it's the ongoing pattern of agreements, roles, processes, and feedback loops.
What are AI Agents?
AI agents are software entities that can:
- Hold a goal in mind and work towards it step by step
- Combine reasoning, planning, and memory with the ability to call tools
- Talk through APIs - structured doorways that let software ask other software to do things
- Decompose outcomes into actions without continuous human hand-holding
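The capabilities above can be sketched as a minimal agent loop. This is an illustrative sketch, not Village's actual API: the `agent_run` function, the plan format, and the constraint convention are all invented for this example.

```python
# Minimal sketch of an agent loop: hold a goal, plan steps, call tools through
# structured interfaces, and check community-defined constraints before acting.
# All names here (agent_run, tools, constraints) are illustrative.

def agent_run(goal, plan, tools, constraints, memory=None):
    """Work through a plan step by step, refusing any step a constraint rejects."""
    memory = memory if memory is not None else []
    for step in plan(goal, memory):              # decompose the goal into actions
        for check in constraints:                # rules defined in advance
            ok, reason = check(step)
            if not ok:
                memory.append(("blocked", step["action"], reason))
                return memory                    # stop and surface the block
        result = tools[step["tool"]](**step["args"])  # structured tool/API call
        memory.append(("done", step["action"], result))
    return memory

# Toy usage: one tool, one constraint forbidding unsanctioned external sends.
def simple_plan(goal, memory):
    return [{"action": "summarize", "tool": "summarize", "args": {"text": goal}}]

tools = {"summarize": lambda text: text[:20] + "..."}
constraints = [lambda step: (step["tool"] != "send_email",
                             "external send requires human sign-off")]

log = agent_run("Summarize this week's discussion threads",
                simple_plan, tools, constraints)
```

The key design point is that constraints run before every tool call, so a blocked action leaves a trace in memory instead of silently failing.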
What Does Supervision Entail?
Supervising AI agents means treating them as powerful but fallible junior colleagues:
Boundaries
Specifying the goals, tools, data, and spaces each agent can touch, and defining which actions always require human sign-off.
Observability
Logging what agents decide and exposing traces in forms that ordinary members can inspect.
Feedback & Escalation
The ability to veto and correct agent decisions, fold those corrections back into agent behaviour, and require agents to pause and ask for guidance on high-risk actions.
Stewardship Over Time
Regular reviews, updating constraints as norms evolve, decommissioning agents that no longer serve.
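The boundaries, observability, and escalation ideas above can be combined into a single supervision gate. A hedged sketch, with invented names and an in-memory log standing in for real infrastructure:

```python
# Sketch of a supervision gate: every action is logged (observability),
# checked against per-agent boundaries, and high-risk actions are parked
# for human sign-off rather than executed. Names are illustrative.

import time

AUDIT_LOG = []          # a real system would use an append-only store

def supervise(agent_id, action, boundaries):
    """Return 'allowed', 'escalate', or 'denied', and always log the decision."""
    if action["name"] not in boundaries["allowed_actions"]:
        verdict = "denied"                       # outside the agent's boundaries
    elif action["name"] in boundaries["requires_signoff"]:
        verdict = "escalate"                     # pause and ask a human
    else:
        verdict = "allowed"
    AUDIT_LOG.append({"ts": time.time(), "agent": agent_id,
                      "action": action["name"], "verdict": verdict})
    return verdict

bounds = {"allowed_actions": {"summarize", "delete_post"},
          "requires_signoff": {"delete_post"}}

print(supervise("helper-1", {"name": "summarize"}, bounds))    # allowed
print(supervise("helper-1", {"name": "delete_post"}, bounds))  # escalate
print(supervise("helper-1", {"name": "email_all"}, bounds))    # denied
```

Note that even denied actions are logged: the audit trail records what agents tried, not only what they did.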
Future Direction: Autonomous Governance
The direction of travel is from humans constantly watching over every action to a framework where much of the safety and alignment work happens automatically:
- Self-regulating agents - built with guardrails, catching misuses as they happen
- Dynamic policy enforcement - rules that adjust based on behaviour and feedback
- Evolving human role - focus on goals and acceptable risk, not clicking "approve" on every action
Key constraint: Humans always approve values. The AI facilitates, never decides values.
Pluralistic Decision Making
Value differences are real, legitimate, and often non-reducible to a single goal.
The Problem with Optimization
Large language models tend to collapse diversity into a single score. They're built to find "the" best answer according to a hidden hierarchy of priorities. That's efficient for a centralized platform, but dangerous for a community - it quietly erases minority or context-specific values in the name of smooth optimization.
Value Agents
Instead of one optimizer steamrolling everything else, different agents embody different priorities:
When a decision touches their domain, they each get a structured say. Sometimes one has absolute veto power (legal compliance, hard privacy boundaries). Sometimes they must negotiate a compromise or escalate to human judgment.
How Conflicts Are Resolved
When two agents pull in different directions - an Efficiency Agent trying to skip a step, a Compliance Agent insisting it's mandatory - the system doesn't simply pick the "strongest" optimizer. It runs a conflict-resolution protocol that can:
- Give priority to hard constraints
- Require agents to search for acceptable compromises
- Hand the question up to human oversight
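The three resolution steps above can be sketched as a small protocol. This is an illustrative sketch under assumed conventions: each value agent scores options, a hard-constraint agent signals a veto by returning `None`, and anything without an acceptable compromise escalates to humans.

```python
# Sketch of the conflict-resolution protocol: 1) hard constraints dominate,
# 2) agents search for an acceptable compromise, 3) otherwise the question
# escalates to human oversight. Agent names and scoring are invented.

def resolve(options, value_agents):
    """Each agent scores options; hard vetoes remove them; deadlocks escalate."""
    # Step 1: drop any option that a hard-constraint agent vetoes (score None).
    viable = [o for o in options
              if all(a["score"](o) is not None for a in value_agents if a["hard"])]
    if not viable:
        return ("escalate", None)               # nothing satisfies hard rules
    # Step 2: look for a compromise every agent can live with (score >= 0).
    acceptable = [o for o in viable
                  if all((a["score"](o) or 0) >= 0 for a in value_agents)]
    if acceptable:
        best = max(acceptable,
                   key=lambda o: sum(a["score"](o) or 0 for a in value_agents))
        return ("compromise", best)
    # Step 3: hand the decision up to humans.
    return ("escalate", None)

# The Efficiency vs Compliance example: skipping review is vetoed outright.
efficiency = {"hard": False, "score": lambda o: 2 if o == "skip_review" else 0}
compliance = {"hard": True,  "score": lambda o: None if o == "skip_review" else 1}

outcome = resolve(["skip_review", "full_review"], [efficiency, compliance])
```

The point is the ordering: the "strongest" optimizer never wins by default, because hard constraints filter options before any scores are compared.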
Six Moral Frameworks as Equals
The deliberation system recognizes six frameworks without automatic ranking:
- Rights-based
- Outcomes-based
- Character-based
- Relationship-based
- Tradition-based
- Interconnection-based
No averaging minority views into majority consensus. Different value-positions can persist, exert force, and co-govern what agents are allowed to do.
Democratic Polls
Everyday deliberation tools that make AI governance tangible - not abstract policy, but community conversation.
The Democratization of AI
AI governance sounds like something that happens in boardrooms or academic papers. But real democratization happens in everyday decisions: Should our community use AI to summarize discussions? How should sensitive topics be handled? What does "helpful" mean for us?
Beyond Simple Voting
Traditional polls collapse complex decisions into binary choices. Village polls preserve the nuance that AI governance requires:
Consent-Based Voting
The sociocratic 5-point scale: Enthusiastic Support, Support, Consent (can live with it), Stand Aside, or Object. Objections require rationale and trigger discussion - they're not vetoes but invitations to address concerns.
Ranked Choice
When multiple options exist, ranked voting prevents the "spoiler effect" that silences minority preferences. Your second choice matters if your first can't win.
Quadratic Voting
Voice credits let you express intensity of preference - spend more on issues you care deeply about. Prevents both tyranny of the majority and capture by vocal minorities.
Discussion Threads
Every poll includes structured deliberation: questions, supporting arguments, concerns, and suggestions. The conversation is the point, not just the final tally.
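Of the mechanisms above, quadratic voting is the most formulaic: casting v votes on an issue costs v² voice credits, so intensity can be expressed but is expensive. A minimal sketch with an invented budget and ballot format, not Village's actual poll API:

```python
# Sketch of quadratic voting: each member holds voice credits, and casting
# v votes on an issue costs v**2 credits (negative votes oppose, at the
# same squared cost). Budget and ballot structure are illustrative.

def quadratic_tally(ballots, budget=100):
    """ballots: {member: {issue: votes}}; rejects members who overspend."""
    totals = {}
    for member, votes in ballots.items():
        cost = sum(v * v for v in votes.values())    # quadratic cost rule
        if cost > budget:
            raise ValueError(f"{member} spent {cost} credits, budget is {budget}")
        for issue, v in votes.items():
            totals[issue] = totals.get(issue, 0) + v
    return totals

ballots = {
    "ana": {"ai_memory": 6, "notifications": 2},    # 36 + 4 = 40 credits
    "ben": {"ai_memory": -3, "notifications": 9},   # 9 + 81 = 90 credits
}
result = quadratic_tally(ballots)   # {'ai_memory': 3, 'notifications': 11}
```

Because doubling your votes quadruples the cost, a member can dominate one issue only by going quiet on all the others.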
How AI Governance Becomes Everyday Conversation
These aren't abstract tools - they're how communities actually govern their AI:
Example: AI Memory Policy
A family history community debates: "Should AI remember individual preferences for phrasing around recent deaths?"
- Consent vote reveals: 60% support, 30% consent, 10% stand aside
- Discussion surfaces: "What about cultural differences in mourning periods?"
- Result: Approved with amendment allowing individual opt-out
The AI didn't decide this. The community did. Through conversation.
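The consent vote in the family-history example above can be evaluated mechanically. A hedged sketch of the sociocratic scale, with invented rules: objections reopen discussion rather than defeating the proposal.

```python
# Sketch of evaluating a consent-based vote on the 5-point scale described
# earlier: objections trigger discussion, not defeat; otherwise the proposal
# passes if members can at least live with it. Labels and rules are illustrative.

SCALE = ["enthusiastic", "support", "consent", "stand_aside", "object"]

def evaluate(votes):
    counts = {level: votes.count(level) for level in SCALE}
    if counts["object"] > 0:
        return ("discuss", counts)   # an objection is an invitation, not a veto
    if counts["enthusiastic"] + counts["support"] + counts["consent"] > 0:
        return ("approved", counts)
    return ("no_decision", counts)

# The example above: 60% support, 30% consent, 10% stand aside (out of 10 votes).
votes = ["support"] * 6 + ["consent"] * 3 + ["stand_aside"]
outcome, counts = evaluate(votes)   # ('approved', ...)
```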
Multi-Phase Deliberation
Important decisions move through structured phases:
1. Discussion Phase
Share perspectives, ask questions, surface concerns. No voting pressure.
2. Preliminary Vote
Temperature check. Reveals where consensus exists and where work remains.
3. Final Vote
Binding decision with full participation requirements.
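The three phases above behave like a small state machine. A sketch under assumed conditions: the discussion window, the preliminary temperature check, and the final-vote quorum are all invented thresholds, not Village's actual rules.

```python
# Sketch of the three-phase flow as a state machine: discussion, then a
# non-binding preliminary vote, then a binding final vote that closes only
# with enough participation. Field names and thresholds are illustrative.

def advance(poll):
    """Move a poll to its next phase when that phase's condition is met."""
    phase = poll["phase"]
    if phase == "discussion" and poll["days_open"] >= poll["min_discussion_days"]:
        poll["phase"] = "preliminary"          # no voting pressure until now
    elif phase == "preliminary" and poll["prelim_votes"] > 0:
        poll["phase"] = "final"                # temperature check done
    elif phase == "final":
        turnout = poll["final_votes"] / poll["members"]
        if turnout >= poll["quorum"]:          # binding only with participation
            poll["phase"] = "closed"
    return poll["phase"]

poll = {"phase": "discussion", "days_open": 7, "min_discussion_days": 5,
        "prelim_votes": 12, "final_votes": 40, "members": 50, "quorum": 0.5}
advance(poll)   # -> 'preliminary'
advance(poll)   # -> 'final'
advance(poll)   # -> 'closed' (turnout 0.8 >= 0.5)
```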
AI Assists, Community Decides
The AI can help with polls - summarizing discussion threads, highlighting patterns, suggesting when preliminary votes show emerging consensus. But it never:
- Casts votes or influences tallies
- Decides when discussion is "done"
- Overrides objections or concerns
- Creates policy without community approval
Polls aren't just features - they're the infrastructure of self-governance. Every community decision about AI runs through the same democratic process the AI is designed to support.
Federated Architecture
Same capabilities as centralized AI, different power structure.
The Current Landscape
Major technology providers have deployed AI systems with impressive capabilities: conversation summarization, cross-context memory, proposal synthesis, and vote tallying. These systems work. Millions rely on them daily.
But all these systems share one architectural assumption: your data flows through centralized infrastructure.
- Your conversations inform their systems (or directly train their models)
- Your community's values are averaged with millions of others
- Your governance rules are a subset of their governance rules
- Exit means losing everything the AI learned about you
The Village Alternative
| Feature | Centralized | Village (Federated) |
|---|---|---|
| Discussion summarization | Cloud AI processes all threads | Tenant-scoped AI, data never leaves |
| Proposal synthesis | Cross-organization learning | Community-specific patterns only |
| Vote tallying | Corporate algorithm decides | Plural values preserved, no averaging |
| Memory/preferences | Unified across all contexts | Member-controlled, auditable, deletable |
| Governance rules | Corporate constitution applies | Community defines own constitution |
| Data location | Provider infrastructure | Tenant's database, encrypted |
| Exit rights | Limited export, lose AI context | Full export, AI memory is yours |
The Three-Layer Model
We don't reject governance - we federate it.
Layer 1: Universal (Non-Negotiable)
Bedrock rules that cannot be changed by any community:
- Safety boundaries from AI safety research
- Strategic principles - human oversight, transparency, agency respect
- Legal compliance - GDPR, data protection, privacy law
These rules exist because some harms are unacceptable regardless of community preference.
Layer 2: Community Constitution
Each Village defines its own values within universal boundaries:
- Family history community: sensitivity around recent deaths
- Professional association: citation standards
- Cultural organization: specific relational protocols
- Sports club: casual, supportive interaction
Communities vote on these, debate them, and can change them.
Layer 3: Individual Preferences
Within community guidelines, members customize:
- Communication style
- Content recommendations
- Notification preferences
- AI memory opt-in/out
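The precedence between the three layers can be sketched as a simple merge: individual preferences apply only where the community constitution is silent, and universal rules override both. The keys and rules here are invented for illustration.

```python
# Sketch of three-layer policy resolution: universal > community > individual.
# Later dict updates win, so layers are applied from weakest to strongest.
# All policy keys are illustrative.

UNIVERSAL = {"share_data_externally": False}   # Layer 1: non-negotiable

def effective_policy(community, individual):
    """Merge the layers so higher layers always take precedence."""
    policy = dict(individual)     # Layer 3: member customization
    policy.update(community)      # Layer 2: community constitution wins
    policy.update(UNIVERSAL)      # Layer 1: universal rules always win
    return policy

community = {"tone": "gentle", "ai_memory": "opt_in"}
individual = {"tone": "casual", "notifications": "daily",
              "share_data_externally": True}

policy = effective_policy(community, individual)
# tone -> 'gentle' (community), notifications -> 'daily' (individual),
# share_data_externally -> False (universal, despite the member's setting)
```

The member's attempt to enable external sharing is silently overridden, which is exactly the "structurally non-negotiable" property: no layer below Layer 1 can grant it.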
Pattern Recognition with Human Approval
The AI notices patterns in how moderators and members interact. When patterns are consistent, it proposes rule changes.
Example: Moderators consistently edit AI responses about recently deceased relatives to be more gentle.
System proposes: "When discussing deaths within the past 5 years, use more sensitive phrasing."
Community decides: They vote. If approved, it becomes part of their constitution. If rejected, the system learns that too.
AI improves through use, but within community-controlled boundaries. No cross-community learning.
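The propose-then-vote loop above can be sketched in two steps: the system only proposes a rule when a moderation pattern is consistent enough, and the rule takes effect only after a community vote. The threshold, function names, and edit-log format are all invented.

```python
# Sketch of pattern recognition with human approval: the system *proposes*
# when moderator edits point consistently one way; the community *decides*.
# Threshold (80%), minimum sample size, and log format are illustrative.

def maybe_propose(edit_log, topic, threshold=0.8, min_samples=10):
    """Propose a rule only if most moderator edits on a topic agree."""
    edits = [e for e in edit_log if e["topic"] == topic]
    if len(edits) < min_samples:
        return None                               # not enough evidence yet
    softened = sum(1 for e in edits if e["change"] == "softened") / len(edits)
    if softened >= threshold:
        return {"rule": f"Use more sensitive phrasing for topic '{topic}'",
                "evidence": len(edits), "status": "proposed"}
    return None

def community_vote(proposal, votes):
    """Adopt only on majority approval; a rejection is recorded, not discarded."""
    approve = sum(1 for v in votes if v == "approve")
    proposal["status"] = "adopted" if approve > len(votes) / 2 else "rejected"
    return proposal

log = [{"topic": "recent_deaths", "change": "softened"}] * 9 + \
      [{"topic": "recent_deaths", "change": "other"}]
proposal = maybe_propose(log, "recent_deaths")    # 90% softened -> proposed
proposal = community_vote(proposal, ["approve"] * 7 + ["object"] * 3)
```

Either outcome feeds back: an adopted rule enters the constitution, and a rejection tells the system that this pattern should not become policy.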
Sovereignty
The capacity to participate in larger networks without surrendering local control.
What is Digital Sovereignty?
Having real control over your own fate in the digital world: control over data, infrastructure, identities, and the technology stack that processes them. For a Village, sovereignty means building an architecture in which control is structurally non-negotiable.
Sovereign Architecture Principles
- Local capacity - Key decisions and data flows remain within direct local reach, not hidden in distant hyperscalers
- Immutable audit trails - Every agent action logged into inspectable history
- Open interfaces - Models, tools, or vendors can be replaced while governance rules remain intact
- Modular design - No lock-in to any single provider
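One of these principles, the immutable audit trail, has a classic construction: a hash chain, where each log entry commits to the hash of the previous one, so any later tampering breaks verification. A minimal sketch, not Village's actual storage layer.

```python
# Sketch of an immutable audit trail as a hash chain: each entry includes
# the previous entry's hash, so editing history invalidates everything after
# the edit. Entry format is illustrative.

import hashlib, json

def append_entry(log, action):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"action": action, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(
        json.dumps({"action": action, "prev": prev_hash},
                   sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return log

def verify(log):
    """Recompute every hash; any edited entry breaks the chain."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            json.dumps({"action": entry["action"], "prev": prev},
                       sort_keys=True).encode()).hexdigest()
        if entry["hash"] != expected or entry["prev"] != prev:
            return False
        prev = entry["hash"]
    return True

log = []
append_entry(log, "agent-1 summarized thread #42")
append_entry(log, "agent-1 proposed rule change")
assert verify(log)
log[0]["action"] = "nothing happened"   # tamper with history...
assert not verify(log)                  # ...and verification fails
```

This is the same idea that underpins git history and blockchain ledgers, applied here so that agent actions form an inspectable, tamper-evident record.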
The Trade-offs
You Accept
- Potentially lower capability ceiling
- Higher infrastructure complexity
- More explicit governance work
In Exchange For
- Complete data sovereignty
- Community-appropriate values
- Real exit rights
- Transparent governance
Indigenous Data Sovereignty & Te Tiriti
Sovereignty in the Village is inseparable from respect for the peoples, places, and histories in which the system is embedded, beginning with tangata whenua.
Many ideas now surfacing under "digital sovereignty" were articulated first by indigenous leaders:
- Collective rights over data as taonga
- Self-determination in how knowledge is used
- Kaitiakitanga rather than extractive ownership
Māori data, language, and cultural features are governed by Māori authority, not by convenience.
Sovereignty is not isolation - it's the capacity to say "no" or "yes, but only under these conditions" when agents propose cross-boundary actions.
Honest Assessment
What we're confident about, what shows promise, and what we're still learning.
What We're Confident About
Sovereignty is technically achievable
- Tenant isolation works (already implemented)
- Local models can handle core tasks
- Member controls are enforceable
Transparency improves trust
- Users respond positively to seeing what AI "knows"
- Source attribution reduces confusion
Federated governance preserves plural values
- No averaging minority views
- Communities hold different values simultaneously
What Shows Promise
Local models for quality
- 7B-70B models handle many tasks
- Quality gap with frontier models narrowing
- Hybrid approach likely optimal
Pattern recognition accuracy
- Early indicators positive
- Need more communities to validate
Cost sustainability
- Infrastructure costs non-trivial
- Need to validate sustainable pricing
What We're Still Learning
Optimal balance between layers
- How much should communities customize vs inherit?
- When do universal rules feel restrictive vs protective?
Member consent fatigue
- How many choices before opt-in becomes friction?
- Can defaults be sensible enough?
Long-term value evolution
- How do community norms change over time?
- Should AI adapt automatically or wait for guidance?
Summary
Federated AI governance where communities keep sovereignty.
Centralized AI governance works but concentrates power. Some communities need an alternative.
Same capabilities, different architecture - tenant-isolated, community-governed, member-controlled.
Continuous improvement, full transparency, real exit rights.
We are not matching frontier model capabilities on day one. This is a trade-off, and we're honest about it.
Ready to Explore?
See how Village implements Home AI for communities, families, and organizations.