Our Platform Values

A transparent declaration of the principles that guide My Digital Sovereignty Community

Our Foundational Commitment

This platform is built on pluralistic sovereignty: the belief that individuals, families, and communities have the right to self-governance according to their own values, traditions, and beliefs.

We Do NOT Believe In:

  • Universal moral hierarchies
  • One "correct" way to organize
  • Top-down value imposition
  • Ideological conformity

We DO Believe In:

  • Multiple legitimate value frameworks coexisting
  • Community self-determination
  • Respectful disagreement across difference
  • Protection of minority perspectives
1. Pluralistic Sovereignty

What This Means

Every community has the right to organize according to its own values, without interference from platform ideology or other communities' norms.

In Practice:

  • Groups choose their own governance rules
  • No platform-wide moral framework enforced
  • Different communities can operate with conflicting values
  • We facilitate coexistence, not conformity

What We Will NOT Do:

  • Impose religious or political ideology
  • Force communities to "tolerate" each other's internal practices
  • Automatically resolve value conflicts
  • Rank competing moral frameworks as "better" or "worse"

What We WILL Do:

  • Provide tools for groups to self-govern
  • Protect against inter-community interference
  • Support healthy discourse within value-diverse groups
  • Acknowledge our own (Western, tech-industry) biases and work to mitigate them
2. Radical Transparency

What This Means

Users deserve to know how the platform works, what data is collected, how decisions are made, and what values guide us.

In Practice:

  • Open-source governance framework (Tractatus)
  • Active governance rules always visible
  • Platform values publicly documented (this page)
  • Decision-making processes explained
  • Data collection and usage disclosed
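
Disclosures like these can be machine-readable as well as human-readable. As a minimal sketch (every field name below is an illustrative assumption, not our actual schema), active governance rules and data-collection practices might be published as a structured manifest that users and auditors can inspect and diff over time:

```python
import json

# Hypothetical transparency manifest: each active governance rule and
# data-collection practice published in a structured, auditable form.
transparency_manifest = {
    "governance_rules": [
        {"id": "rule-quiet-hours", "enabled": True,
         "rationale": "Reduce pressure to respond immediately."},
    ],
    "data_collected": [
        {"field": "story_text", "purpose": "core functionality",
         "retention_days": None},  # None = kept until the user deletes it
    ],
    "last_updated": "2025-01-01",
}

def publish_manifest(manifest: dict) -> str:
    """Serialize the manifest deterministically so changes show up
    cleanly in version control and external audits."""
    return json.dumps(manifest, indent=2, sort_keys=True)

print(publish_manifest(transparency_manifest))
```

Publishing this alongside the human-readable page keeps the two from drifting apart.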

What We Will NOT Do:

  • Hide monitoring or data collection
  • Make decisions behind closed doors without explanation
  • Pretend to be "neutral" when we have values
  • Obscure how AI systems influence user experience

What We WILL Do:

  • Document all governance rules and their rationale
  • Explain why features exist and what problems they solve
  • Acknowledge when we make mistakes or discover bias
  • Publish platform roadmap and decision criteria
  • Make code and governance systems auditable
3. User Agency & Consent

What This Means

Users control their data, their participation, and their group's governance approach. Opt-in, not opt-out. Exit over voice when needed.

In Practice:

  • Groups choose which governance rules to enable
  • Members can opt out of pattern analysis
  • Data portability (export your stories, take them elsewhere)
  • Easy account deletion (true right to be forgotten)
  • No dark patterns or manipulative design
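
To make the data-portability commitment concrete, here is a minimal sketch of story export in plain JSON rather than a proprietary format. The `Story` fields are illustrative assumptions, not our actual data model:

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical story record; field names are illustrative only.
@dataclass
class Story:
    author: str
    created: str          # ISO 8601 date string
    body: str
    visibility: str       # e.g. "group-only", "anonymous"

def export_stories(stories: list[Story]) -> str:
    """Serialize a user's stories to plain JSON so they can be taken
    to any other service: no proprietary format, no lock-in."""
    return json.dumps([asdict(s) for s in stories], indent=2)

stories = [Story("ana", "2025-01-01", "My first story", "group-only")]
print(export_stories(stories))
```

Any tool that reads JSON can consume the export, which is the point: leaving should be as easy as staying.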

What We Will NOT Do:

  • Force features on users "for their own good"
  • Make opting out difficult or shame-inducing
  • Lock data in proprietary formats
  • Use addictive design patterns
  • Assume we know better than users about their needs

What We WILL Do:

  • Provide informed consent mechanisms (GDPR-compliant)
  • Honor opt-out requests immediately
  • Support data export in standard formats
  • Design for users' sovereignty, not our retention metrics
  • Trust users to make their own choices
4. Epistemic Humility

What This Means

We don't have all the answers. Our systems have biases. Users have knowledge we lack. Uncertainty is honest.

In Practice:

  • Acknowledge Western/tech-industry bias in our design
  • Test with diverse users before claiming "this works"
  • Disclose confidence levels in AI-generated insights
  • Change course when evidence shows we're wrong
  • Value user expertise over platform assumptions
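
Disclosing confidence levels can be as simple as attaching a band to every AI-generated insight before it reaches a user. A hypothetical sketch (the function name and thresholds are arbitrary placeholders, not our production logic):

```python
def present_insight(text: str, confidence: float) -> str:
    """Label an AI-generated insight with its confidence band
    instead of presenting it as established fact."""
    if confidence >= 0.8:
        band = "high confidence"
    elif confidence >= 0.5:
        band = "moderate confidence"
    else:
        band = "low confidence - treat as a hypothesis"
    return f"[{band}] {text}"

print(present_insight("Participation dips on weekends.", 0.42))
```

Surfacing the band forces the system to say "we don't know" in exactly the cases where it doesn't.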

What We Will NOT Do:

  • Claim our governance framework is "objective" or "universal"
  • Ignore user feedback that contradicts our assumptions
  • Ship features as "finished" when they still need ongoing refinement
  • Pretend cultural bias doesn't exist in our systems
  • Assert certainty where we have hypotheses

What We WILL Do:

  • Mark experimental features clearly with badges
  • Solicit feedback from diverse communities
  • Publish failure analyses when things go wrong
  • Revise systems based on real-world use
  • Say "we don't know" when we don't know
5. Care for the Vulnerable

What This Means

Platform features and governance should protect those with less power, not amplify existing hierarchies.

In Practice:

  • Detect and surface overlooked voices (not just the most active members)
  • Monitor for patterns of exclusion or dismissal
  • Provide tools for vulnerable sharing (trauma-informed design)
  • Protect against harassment and brigading
  • Ensure accessibility for diverse abilities

What We Will NOT Do:

  • Privilege the loudest voices or those with the most time and resources
  • Ignore power imbalances within groups
  • Design only for typical users (ableism)
  • Treat all silence as consent
  • Make vulnerability punishable

What We WILL Do:

  • Provide anonymity options for sensitive contributions
  • Monitor participation balance across demographics
  • Build accessibility into design from the start (WCAG compliance)
  • Create safe spaces for trauma-informed sharing
  • Train moderators on power-aware facilitation
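
Monitoring participation balance does not require heavy machinery. One illustrative sketch, assuming contributions are tagged by author, of a metric a moderator dashboard might surface:

```python
from collections import Counter

def participation_share(contributions: list[str], top_n: int = 1) -> float:
    """Fraction of contributions made by the top_n most-active members.
    A high value suggests quieter voices may be getting crowded out."""
    counts = Counter(contributions)
    total = sum(counts.values())
    top = sum(n for _, n in counts.most_common(top_n))
    return top / total

posts = ["ana", "ana", "ana", "ben", "mia"]
share = participation_share(posts)
print(f"Top member share: {share:.0%}")
```

A metric like this is a prompt for human facilitation, not an automatic judgment about any member.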
6. Indigenous Data Sovereignty & Te Tiriti o Waitangi

What This Means

Digital sovereignty is not a new concept—it builds on centuries of indigenous peoples' struggles for self-determination. We acknowledge Te Tiriti o Waitangi (the Treaty of Waitangi) and indigenous leadership as foundational to sovereignty movements worldwide.

In Practice:

  • Follow CARE Principles: Collective benefit, Authority to control, Responsibility, Ethics
  • Acknowledge Te Mana Raraunga (Māori Data Sovereignty Network) research and frameworks
  • Apply peer-reviewed indigenous data governance standards
  • Seek authentic partnership with Māori organizations before implementing Te Reo or cultural features
  • Recognize that indigenous communities led sovereignty struggles long before "digital sovereignty" became a tech buzzword

What We Will NOT Do:

  • Engage in tokenism or performative acknowledgments without concrete action
  • Implement Māori language or cultural features without Māori approval and guidance
  • Claim credit for sovereignty concepts developed by indigenous leaders
  • Treat indigenous data sovereignty as optional or secondary
  • Proceed with features touching indigenous knowledge without cultural consultation

What We WILL Do:

  • Include a subtle, respectful footer acknowledgment of Te Tiriti on all platform pages
  • Maintain a resource directory linking to indigenous data sovereignty research
  • Wait until the platform is stable before approaching Māori organizations for partnership
  • When ready, request Māori review and approval of any Te Reo implementation
  • Honor the historical lineage of sovereignty movements in our documentation

Why This Matters

Privacy and sovereignty are not just technical problems—they're human rights issues with deep historical roots. Indigenous frameworks like the CARE Principles offer proven approaches to collective data governance that benefit all communities. Acknowledging this lineage honors the work of indigenous leaders and scholars who developed these concepts long before the tech industry adopted the language.