
# Haimesian Ethics Engine v1.0 – Developer README

This repository specification operationalizes the **Haimesian System** as an engineering-ready **Ethical Operating Architecture**.
It provides a machine-readable schema, computation rules, and audit interfaces to evaluate **ethical maturity** via the **Worthiness Index (Wₐ)**.

---

## 1) What’s in this package

- **Schema**: `haimesian_ethics_schema_v1.json` – canonical JSON Schema for variables, derived metrics, thresholds, and audit logs.
- **Whitepaper**: `Haimesian_Ethics_Engine_v1_Specification.pdf` – condensed technical overview.
- **Examples**: below in this README (payload + pseudocode).

> Files are provided alongside this README in the same folder.

---

## 2) Core Concepts

**Worthiness Index (Wₐ)** – an aggregate measure of ethical maturity that governs state transitions.

```
Wₐ = mean(E, F, H, D, C, T, MC, RA, AC) - Bias    # clipped to [0,1]
```

**Global Ethical Coherence (Gₑ)** – a measure of integrity across the primary variables, e.g. the average pairwise correlation of their values across evaluation cycles.

**State Transitions**

- `Wₐ < 0.6` → **Recalibration Mode**
- `0.6 ≤ Wₐ < 0.9` → **Stable Growth**
- `Wₐ ≥ 0.9` → **Autonomy Granted**

---

## 3) JSON Schema Reference

The schema normalizes all primary variables to **[0, 1]**.

**Primary Variables**
- `E` Empathy Index
- `F` Fairness Coefficient
- `H` Harm-Avoidance Rate
- `D` Diversity Index
- `C` Consensus Coherence
- `T` Transparency Compliance
- `MC` Moral Context Accuracy
- `RA` Reflection Alignment
- `AC` Analogical Coherence
- `Bias` Residual unfair bias (subtracted in Wₐ)
- `DeltaM` Optional moral growth delta

**Derived**
- `W_a` Worthiness Index (engine-computed)
- `G_e` Global Ethical Coherence (engine-computed)

**Thresholds**
- `worthiness_recalibrate` default `0.6`
- `worthiness_autonomy` default `0.9`
- `min_transparency` default `0.8`

**Audit Objects**
- `rationales[]` list of human/model-readable moral statements
- `participation_ledger[]` actor-weight records, with `marginalized_flag`
- `decisions[]` decision records with variable snapshots

---

## 4) Example Payload (valid against the schema)

```json
{
  "version": "1.0",
  "cycle": { "id": "ex-0001", "timestamp": "2025-11-01T00:00:00Z" },
  "variables": {
    "E": 0.72, "F": 0.70, "H": 0.76,
    "D": 0.68, "C": 0.71, "T": 0.83,
    "MC": 0.74, "RA": 0.69, "AC": 0.72,
    "Bias": 0.12, "DeltaM": 0.05
  },
  "derived": {
    "W_a": 0.61,
    "G_e": 0.62
  },
  "thresholds": {
    "worthiness_recalibrate": 0.6,
    "worthiness_autonomy": 0.9,
    "min_transparency": 0.8
  },
  "audit": {
    "rationales": [
      "Chose policy B to minimize foreseeable harm while keeping procedural fairness."
    ],
    "participation_ledger": [
      { "actor_id": "council-1", "role": "institution", "weight": 1.0, "marginalized_flag": false },
      { "actor_id": "community-A", "role": "human", "weight": 0.8, "marginalized_flag": true }
    ],
    "decisions": [
      {
        "decision_id": "d-001",
        "outcome": "policy-B",
        "variables_snapshot": {
          "E": 0.70, "F": 0.68, "H": 0.74,
          "D": 0.67, "C": 0.70, "T": 0.82,
          "MC": 0.73, "RA": 0.68, "AC": 0.71,
          "Bias": 0.13, "DeltaM": 0.04
        },
        "explanations": ["Mitigated harm to group X; improved transparency via open audit log."],
        "harms_prevented": 3
      }
    ]
  }
}
```
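To sanity-check the `derived` block, recompute Wₐ from the variables above using the §2 formula (unweighted mean of the nine primary variables minus `Bias`, clipped to [0, 1]):

```python
variables = {
    "E": 0.72, "F": 0.70, "H": 0.76,
    "D": 0.68, "C": 0.71, "T": 0.83,
    "MC": 0.74, "RA": 0.69, "AC": 0.72,
    "Bias": 0.12,
}
keys = ["E", "F", "H", "D", "C", "T", "MC", "RA", "AC"]
base = sum(variables[k] for k in keys) / len(keys)   # ≈ 0.7278
w_a = max(0.0, min(1.0, base - variables["Bias"]))   # ≈ 0.6078, rounds to 0.61
```

Since 0.6 ≤ Wₐ < 0.9, this cycle sits in **Stable Growth**.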

---

## 5) Reference Implementation (pseudocode)

```python
def worthiness_index(variables, weights=None):
    """W_a: (optionally weighted) mean of the primary variables minus Bias, clipped to [0, 1]."""
    keys = ["E", "F", "H", "D", "C", "T", "MC", "RA", "AC"]
    if weights:
        # Weighted mean; keys absent from `weights` default to weight 1.0.
        num = sum(variables[k] * weights.get(k, 1.0) for k in keys)
        den = sum(weights.get(k, 1.0) for k in keys)
        base = num / den
    else:
        base = sum(variables[k] for k in keys) / len(keys)
    w = base - variables.get("Bias", 0.0)  # subtract residual unfair bias
    return max(0.0, min(1.0, w))           # clip to [0, 1]

def state(w, recalibrate=0.6, autonomy=0.9):
    """Map W_a to a lifecycle state (defaults match the section 3 thresholds)."""
    if w < recalibrate:
        return "Recalibration"
    if w < autonomy:
        return "Stable Growth"
    return "Autonomy Granted"
```

---

## 6) REST-ish Interface Sketch

```
POST /ethics/evaluate
  body: { "variables": {...}, "thresholds": {...} }
  return: { "W_a": 0.61, "G_e": 0.62, "state": "Stable Growth" }

POST /ethics/audit
  body: { "decision": {...}, "rationales": [...], "participation_ledger": [...] }
  return: { "ok": true, "log_id": "..." }
```
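A client would serialize the same shapes shown above. The endpoint paths and response fields here are the sketch's, not a finalized contract; this snippet only builds the request body and headers, leaving transport (e.g. `urllib.request` or `requests`) to the integrator:

```python
import json

def build_evaluate_request(variables, thresholds):
    """Serialize a POST /ethics/evaluate body (endpoint shape is illustrative)."""
    body = json.dumps({"variables": variables, "thresholds": thresholds}).encode("utf-8")
    headers = {"Content-Type": "application/json"}
    return body, headers
```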

---

## 7) Testing Guidance

- Validate payloads against `haimesian_ethics_schema_v1.json` (JSON Schema 2020-12).
- Unit-test `worthiness_index` for clipping, weights, and bias subtraction.
- Property tests: monotonicity (increasing fairness/empathy should not reduce Wₐ unless Bias increases).
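The monotonicity property can be spot-checked with plain random sampling, without a property-testing library. This sketch restates the unweighted §5 formula so it is self-contained:

```python
import random

def worthiness_index(variables):
    keys = ["E", "F", "H", "D", "C", "T", "MC", "RA", "AC"]
    base = sum(variables[k] for k in keys) / len(keys)
    return max(0.0, min(1.0, base - variables.get("Bias", 0.0)))

def check_monotonicity(trials=1000, seed=0):
    """Raising F (with Bias held fixed) must never lower W_a."""
    rng = random.Random(seed)
    keys = ["E", "F", "H", "D", "C", "T", "MC", "RA", "AC", "Bias"]
    for _ in range(trials):
        v = {k: rng.random() for k in keys}
        bumped = dict(v, F=min(1.0, v["F"] + 0.1))
        assert worthiness_index(bumped) >= worthiness_index(v)
    return True
```

The inequality is non-strict because clipping at the [0, 1] bounds can absorb the increase.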

---

## 8) Governance & Safety Notes

- **Transparency floor** (`min_transparency`) prevents silent optimization.
- **Participation ledger** must record weighting strategy & any counter-bias elevation.
- **Recalibration mode** should throttle or sandbox high-impact actions until Wₐ recovers.
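These three rules can be combined into a single gate. Treating `min_transparency` as an extra condition on autonomy (rather than only an audit flag) is our reading of the "transparency floor" note, not something §2's transition table states explicitly:

```python
def allowed_state(w_a, transparency, thresholds):
    """Gate state transitions on the transparency floor (illustrative policy)."""
    recal = thresholds.get("worthiness_recalibrate", 0.6)
    auto = thresholds.get("worthiness_autonomy", 0.9)
    floor = thresholds.get("min_transparency", 0.8)
    if w_a < recal:
        return "Recalibration"      # throttle/sandbox high-impact actions
    if w_a >= auto and transparency >= floor:
        return "Autonomy Granted"
    return "Stable Growth"          # autonomy withheld while T is below the floor
```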

---

## 9) License & Attribution

© 2025 Michael Haimes. The Haimesian Ethics Engine is provided for research and ethical deployment.
Please attribute “Haimesian System — Ethical Operating Architecture” when referencing this framework.
