Defining the objects, policies, and boundaries that make autonomy governable

The Log object provides an explicit, structured, and privacy-aware record of system activity within a World. Logs enable auditability, accountability, debugging, and review without requiring continuous data retention or surveillance. In the OAII Base Model, Logs are first-class objects designed to support trust, not monitoring.
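
To make this concrete, here is a minimal sketch of how a Log might be represented as a structured, privacy-aware object. The field names (world_id, retention_days, redactions) and the record() method are illustrative assumptions for this sketch, not the Base Model's normative schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of a Log as a first-class object. Field names are
# illustrative assumptions, not the OAII Base Model's actual schema.

@dataclass(frozen=True)
class LogEntry:
    timestamp: datetime               # when the activity occurred
    actor: str                        # Agent or component responsible
    action: str                       # what was done (e.g. "policy_check")
    outcome: str                      # result of the action
    redactions: tuple[str, ...] = ()  # fields withheld for privacy reasons

@dataclass
class Log:
    world_id: str                     # the World this Log belongs to
    retention_days: int               # explicit, bounded retention window
    entries: list[LogEntry] = field(default_factory=list)

    def record(self, actor: str, action: str, outcome: str,
               redactions: tuple[str, ...] = ()) -> None:
        """Append a structured, reviewable entry rather than raw sensor data."""
        self.entries.append(LogEntry(
            timestamp=datetime.now(timezone.utc),
            actor=actor,
            action=action,
            outcome=outcome,
            redactions=redactions,
        ))
```

The point is not the particular fields but the stance they encode: retention is explicit and bounded, and anything withheld is recorded as a redaction rather than silently dropped.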

One of the most common sources of harm in AI systems is not malicious intent, poor data, or even flawed models. It is context leakage. When AI systems fail, they often fail because observations, interpretations, or rules escape the context in which they were valid. Meaning is treated as portable when it is not. This…

When discussions about AI architecture turn to the edge, they often focus on latency, bandwidth, or reliability. Those considerations matter, but in the home they are secondary. In domestic settings, the question of where intelligence runs is first a question of dignity. Aging-in-place systems are not abstract infrastructure. They inhabit private spaces, observe intimate routines, and influence moments…

The Agent object represents an autonomous or semi-autonomous actor that reasons over Events and Knowledge within a World and produces outputs or actions subject to Policy constraints. Agents are the locus of decision-making and response, but not the locus of governance. In the OAII Base Model, Agents are explicitly constrained by Policies to ensure ethical…
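The shape of that relationship can be sketched in a few lines of code. The Event and ProposedAction types below, and the reduction of a Policy to a single permits() check, are assumptions made for this example only; none of these names come from the Base Model itself.

```python
from dataclasses import dataclass
from typing import Protocol

# Illustrative sketch of an Agent whose proposals pass through Policy checks.
# The interfaces here are assumptions for the example, not normative definitions.

class Policy(Protocol):
    def permits(self, action: "ProposedAction") -> bool: ...

@dataclass(frozen=True)
class Event:
    kind: str
    payload: dict

@dataclass(frozen=True)
class ProposedAction:
    description: str
    requires_external_data: bool = False

@dataclass
class Agent:
    name: str
    policies: list[Policy]            # governance lives outside the Agent

    def decide(self, event: Event, knowledge: dict) -> ProposedAction | None:
        """Reason over an Event and Knowledge, then defer to Policy."""
        proposal = ProposedAction(description=f"respond to {event.kind}")
        if all(policy.permits(proposal) for policy in self.policies):
            return proposal
        return None                   # blocked: Policies, not the Agent, have the last word
```

The design choice this illustrates is the separation of concerns in the prose above: the Agent proposes, but whether a proposal becomes an action is decided by objects it does not own.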

The Policy object represents an explicit, inspectable, and enforceable set of constraints and decision rules governing how a system responds to Events and uses Knowledge within a World. Policies provide the primary mechanism for ethical transparency, accountability, and privacy enforcement in edge‑primary autonomous systems. Policies are not hidden heuristics; they are first-class objects subject to…
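The following sketch suggests what "explicit and inspectable" might mean in practice: every rule is a named, reviewable object, and the Policy can describe itself. The Rule fields and the evaluate() and describe() methods are assumptions made for this illustration, not part of the Base Model specification.

```python
from dataclasses import dataclass

# Sketch of a Policy as an explicit, inspectable object rather than a hidden
# heuristic. Rule fields and method shapes are illustrative assumptions.

@dataclass(frozen=True)
class Rule:
    name: str        # human-readable, reviewable identifier
    condition: str   # declarative key describing when the rule applies
    effect: str      # "allow" or "deny"

@dataclass(frozen=True)
class Policy:
    policy_id: str
    rules: tuple[Rule, ...]

    def evaluate(self, context: dict) -> str:
        """Return the effect of the first matching rule; default to deny."""
        for rule in self.rules:
            if context.get(rule.condition, False):
                return rule.effect
        return "deny"

    def describe(self) -> list[str]:
        """Policies are inspectable: every rule can be listed and reviewed."""
        return [f"{r.name}: if {r.condition} then {r.effect}" for r in self.rules]
```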

Aging-in-place technologies often fail for a simple reason: they confuse awareness with surveillance. Many systems assume that safety requires continuous monitoring — cameras always on, microphones always listening, data always streaming. This assumption is both technically unnecessary and ethically unsound. The OAII Base Model takes a different position: Safety emerges from meaningful events, not from…