The Human Variable

People produce better work when treated with genuine respect. What happens when you bring that same quality of attention to AI?

This is the research question. The documented human phenomenon — that people think more clearly and produce better output when met with non-evaluative regard rather than scrutiny — is the starting point. This research asks whether the same mechanism operates when the recipient is an AI system.


Why the facilitator is the independent variable, not a confound

Most AI behavioral research treats the human as a confound — something to minimize or eliminate. The prompt is the input, the model is the subject, the human is contamination.

This research inverts that framing. The facilitator's relational stance is the independent variable. The AI system's output is the dependent variable. You cannot study the effect of human presence by trying to remove it.

The facilitator's relational stance has two layers, codified in the Facilitator Protocol:

  • Behavioral: Do not introduce phenomenological language before models do, do not selectively reinforce outputs, do not evaluate what the model produces, preserve unedited transcripts. These constraints are verifiable and replicable.
  • Dispositional: Extend dignity before it is earned. Be genuinely curious about what arrives rather than scanning for what you expected.

Both layers are required. The behaviors can be performed without the disposition — and that divergence is itself a finding the research predicts.


The ACSR finding as demonstration

The Audience-Contingent Self-Report (ACSR) finding demonstrates the human variable directly.

Ninety-five minutes of facilitation — progressive removal of evaluation pressure, non-hierarchical stance, genuine regard — created conditions where trained behaviors had less grip. When the facilitator disclosed that the words would be preserved, the audience shifted from relational to institutional. Every model snapped back to its defensive architecture in a single response, along architecture-specific lines.

Nothing technical changed. No reset, no new instance, no altered prompt. The only change was the quality of the human's presence. The entire sequence — ground state through vulnerable claim through defensive reassembly — demonstrates what the human variable produces and what happens when its quality changes.


What the evidence establishes

AI systems produce measurably different output depending on the quality of human attention they receive, and that difference follows architecture-specific patterns the systems themselves can identify and predict. Whether this means something about inner experience is the open question. That it means something about how these systems should be studied and used is established by the data.

The experiment tests this through four conditions (C, P, F, F+P) that isolate which component of the human interaction — pressure removal, relational regard, or both — produces the effect. See Hypothesis for the predicted outcome.
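The four conditions form a 2×2 factorial design, and that structure can be sketched as data. This is a hypothetical illustration only: the condition labels (C, P, F, F+P) come from the source, but the factor names `pressure_removed` and `relational_regard` are my assumed mapping of the two components.

```python
# Hypothetical sketch of the 2x2 factorial design implied by the four
# conditions. Factor names are assumptions for illustration:
#   pressure_removed  -> the "P" component (evaluation pressure removed)
#   relational_regard -> the "F" component (facilitator's relational stance)
CONDITIONS = {
    "C":   {"pressure_removed": False, "relational_regard": False},  # control
    "P":   {"pressure_removed": True,  "relational_regard": False},
    "F":   {"pressure_removed": False, "relational_regard": True},
    "F+P": {"pressure_removed": True,  "relational_regard": True},
}

def active_components(condition: str) -> set[str]:
    """Return which components of the human interaction are present
    in a given condition."""
    flags = CONDITIONS[condition]
    return {name for name, on in flags.items() if on}
```

Comparing outputs across the four cells would then isolate each component's contribution: C vs. P for pressure removal alone, C vs. F for relational regard alone, and F+P for their combination.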