Defining the objects, policies, and boundaries that make autonomy governable

The Knowledge object represents structured, retained, and interpretable information derived from Signals, Events, and Sensor Knowledge within a World. Knowledge enables continuity, learning, and comparison over time without collapsing into continuous surveillance or opaque global models. In the OAII Base Model, Knowledge is explicit, scoped, revisable, and accountable.
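The four properties above (explicit, scoped, revisable, accountable) can be sketched as a record type. This is a minimal illustration only; the names `KnowledgeRecord`, `scope`, `derived_from`, and `revise` are assumptions for this sketch, not part of the OAII Base Model's specification.

```python
from dataclasses import dataclass, field, replace
from datetime import datetime, timezone

@dataclass(frozen=True)
class KnowledgeRecord:
    """Illustrative sketch of a Knowledge object. Field names are
    assumptions, chosen to mirror the four stated properties."""
    claim: str           # explicit: the retained interpretation, stated plainly
    scope: str           # scoped: the World/context the claim applies to
    derived_from: tuple  # accountable: ids of the Signals/Events it came from
    revision: int = 1    # revisable: versioned rather than silently overwritten
    recorded_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

    def revise(self, new_claim: str, new_evidence: tuple) -> "KnowledgeRecord":
        """Produce a new revision instead of mutating history in place."""
        return replace(self,
                       claim=new_claim,
                       derived_from=self.derived_from + new_evidence,
                       revision=self.revision + 1,
                       recorded_at=datetime.now(timezone.utc))

# Usage: revising retained knowledge keeps its provenance and version trail.
k1 = KnowledgeRecord(claim="kitchen active most mornings",
                     scope="world:apartment-12",
                     derived_from=("evt-001",))
k2 = k1.revise("kitchen inactive this morning", ("evt-207",))
```

Freezing the dataclass and returning a new revision from `revise` is one way to keep Knowledge comparable over time: earlier revisions remain intact, so change itself stays observable.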

Aging‑in‑place is often framed as a problem of sensors, alerts, or caregiver dashboards. That framing is incomplete — and, in many cases, dangerous. Aging‑in‑place is fundamentally a problem of interpretation under constraint: interpretation of human activity without surveillance, interpretation of change without diagnosis, interpretation of risk without stripping dignity or autonomy. These constraints are not…

The Sensor object represents a source of observable data within a World. Sensors produce Signals by observing some aspect of the environment, device state, or interaction surface. In the OAII Base Model, Sensors are responsible for observation, not interpretation. Sensors enable edge‑primary autonomy by grounding Events in locally observable evidence while remaining hardware‑agnostic and privacy‑aware.
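The observation/interpretation split can be made concrete with a sketch. Everything here (`Sensor`, `Signal`, `observe`, the injected `read_fn`) is a hypothetical shape assumed for illustration, not the model's actual API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Signal:
    """A raw observation. Note the absence of any semantic field."""
    sensor_id: str
    timestamp: float
    value: float

class Sensor:
    """Observation only: reads some aspect of the environment and emits
    Signals. It never labels, classifies, or interprets what it sees."""

    def __init__(self, sensor_id: str, read_fn: Callable[[], float]):
        self.sensor_id = sensor_id
        self._read = read_fn  # hardware-agnostic: any callable source of readings

    def observe(self, clock: Callable[[], float]) -> Signal:
        return Signal(self.sensor_id, clock(), self._read())

# Usage: a simulated door-contact sensor backed by canned readings.
readings = iter([0.0, 1.0])
door = Sensor("door-front", lambda: next(readings))
s = door.observe(clock=lambda: 1000.0)
# s.value is just a number; deciding what it means belongs to higher layers.
```

Injecting the reading function and the clock keeps the Sensor hardware-agnostic and testable, and makes it structurally impossible for the Sensor to smuggle interpretation into the Signals it produces.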

The Signal object represents a unit of observation produced by a Sensor within a World. Signals are the raw or minimally processed inputs from which Events may be recognized. In the OAII Base Model, Signals are intentionally simple, local, and non-semantic by default. Signals enable edge‑primary autonomy by providing observable evidence without embedding interpretation, intent,…
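"Simple, local, and non-semantic by default" can be shown by what a Signal sketch deliberately leaves out. The field names and the `minimally_process` helper are illustrative assumptions, not defined by the model.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Signal:
    """A unit of observation. What is absent matters as much as what is
    present: no label, no inferred activity, no intent."""
    sensor_id: str    # local provenance: which Sensor produced it
    timestamp: float  # when it was observed
    value: float      # raw or minimally processed reading

def minimally_process(raw: float, lo: float, hi: float) -> float:
    """'Minimally processed' in the sketch's sense: clamping to a valid
    range adds no meaning; labeling ('door opened') would."""
    return max(lo, min(hi, raw))

# Usage: a temperature reading clamped to the sensor's valid range.
s = Signal("temp-kitchen", 1700000000.0,
           minimally_process(23.6, lo=-40.0, hi=85.0))
```

The line drawn here is the one the text draws: range-checking or debouncing leaves a Signal non-semantic, whereas attaching an interpretation would turn it into something Events are supposed to do downstream.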

A recurring misunderstanding in discussions about edge AI and event recognition is the assumption that events emerge directly from sensor data. In practice — and in every reliable real‑world system — events emerge from comparison, not observation. This is why the OAII Base Model treats Sensor Knowledge as a first‑class object, even though it is…
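The "comparison, not observation" point can be sketched as code: an event is recognized only by comparing a fresh reading against retained, per-sensor baseline knowledge. The names (`SensorKnowledge`, `recognize_event`), the running-mean baseline, and the tolerance band are all hypothetical choices for this sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorKnowledge:
    """Retained, per-sensor baseline: what 'normal' has looked like."""
    sensor_id: str
    baseline: float   # e.g., a running mean of past readings
    tolerance: float  # hypothetical band considered unremarkable
    samples: int = 0

    def update(self, value: float) -> None:
        """Fold a new observation into the baseline (incremental mean)."""
        self.samples += 1
        self.baseline += (value - self.baseline) / self.samples

def recognize_event(value: float, knowledge: SensorKnowledge) -> Optional[str]:
    """An event emerges from comparison with retained knowledge,
    never from the reading alone."""
    if abs(value - knowledge.baseline) > knowledge.tolerance:
        return "deviation"  # illustrative label only
    return None

# Usage: the same numeric reading is or isn't an event depending on
# what the retained baseline says.
sk = SensorKnowledge("motion-hall", baseline=0.0, tolerance=0.5)
for v in (0.1, 0.2, 0.1):          # unremarkable history builds the baseline
    sk.update(v)
quiet = recognize_event(0.2, sk)   # close to baseline: no event
event = recognize_event(2.0, sk)   # far from baseline: an event
```

This is why Sensor Knowledge earns first-class status in the sketch: without the retained baseline, `recognize_event` has nothing to compare against, and no reading can become an event.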