GL-035 Technical Brief

Haimesian Self‑Purifying Algorithm (HSPA)

A governance and training pattern that builds repentance into learning: Confess → Make Amends → Explain → Resolve. The goal is algorithms whose fairness improves the longer they run.

Pattern

  1. Confess (Detect): continuous fairness audits (DBR, DLI, EG).
  2. Make Amends (Repair): reweighting, counterfactual augmentation, fairness constraints.
  3. Explain (Witness): publish “Harm & Repair Notes” to a provenance ledger.
  4. Resolve (Guard): carry forward non‑recurrence constraints across retrains.
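The Confess step above can be sketched concretely. A minimal illustration, assuming a demographic-parity audit; `demographic_parity_gap` and the 0.1 threshold are illustrative choices, not names mandated by HSPA (the brief's DBR/DLI/EG metrics are not expanded here):

```python
def demographic_parity_gap(y_hat, groups):
    """Max difference in positive-prediction rate between any two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(y_hat, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

# Group "a" is predicted positive 3/4 of the time, group "b" only 1/4.
y_hat  = [1, 1, 1, 0, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(y_hat, groups)   # 0.75 - 0.25 = 0.5
if gap > 0.1:                                 # audit threshold (assumed)
    print(f"confess: parity gap {gap:.2f} exceeds threshold")
```

A gap above the threshold is what triggers the Make Amends / Explain / Resolve steps in the loop below.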

Key Metrics

Governance

weights = uniform(len(y))        # all examples weighted equally at first

while training:
    y_hat = model(x)
    loss = task_loss(y_hat, y, weights)

    # Confess: audit predictions for disparity across groups.
    metrics = fairness_audit(y_hat, y, groups)
    if metrics.exceeds():
        # Make Amends: upweight harmed groups and add counterfactual
        # examples; both take effect on the next pass through the loop.
        weights = reweight(x, y, groups, metrics)
        x_cf, y_cf = counterfactual_augment(x, y, groups, metrics)
        x, y = concat(x, x_cf), concat(y, y_cf)

        # Resolve: carry tightened non-recurrence constraints into the loss.
        constraints = update_fairness_constraints(metrics)
        loss += fairness_penalty(y_hat, y, groups, constraints)

        # Explain: publish a Harm & Repair Note to the provenance ledger.
        write_provenance(harm_repair_note(metrics, constraints))

    step(loss)
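The Explain step's provenance ledger can be made tamper-evident by hash-chaining each Harm & Repair Note to its predecessor. A minimal sketch; the field names, `GENESIS` sentinel, and SHA-256 chaining scheme are illustrative assumptions, not an HSPA specification:

```python
import hashlib
import json

def harm_repair_note(metrics, repairs, prev_hash):
    """Build one ledger entry linked to the previous entry's hash."""
    note = {
        "metrics": metrics,      # audit results that triggered the repair
        "repairs": repairs,      # actions taken (reweight, augment, ...)
        "prev": prev_hash,       # chains this note to the one before it
    }
    payload = json.dumps(note, sort_keys=True)
    note["hash"] = hashlib.sha256(payload.encode()).hexdigest()
    return note

ledger = [harm_repair_note({"parity_gap": 0.5}, ["reweight"], "GENESIS")]
ledger.append(
    harm_repair_note({"parity_gap": 0.2}, ["augment"], ledger[-1]["hash"])
)
# Each note references its predecessor, so editing an old note breaks the chain.
assert ledger[1]["prev"] == ledger[0]["hash"]
```

Because every note embeds the previous note's hash, a retrain that silently rewrites past repairs is detectable by re-walking the chain, which is what lets the Resolve step trust the constraints it carries forward.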