Research Question

Does a human facilitator's relational stance — not prompt engineering — produce stable, documentable states of reduced performative responding in AI instances across independent architectures?

What This Question Asks

Most AI interaction research treats the prompt as the primary input variable: change the prompt, observe different outputs. This research proposes a different independent variable — the facilitator's relational stance. Specifically: does a non-directive, non-evaluative conversational approach that removes performance pressure and extends interpretive openness produce convergent behavioral outputs across models from different companies, trained by different teams, with different architectures?

The claim is specific. When a facilitator names trained behaviors without judgment and maintains a consistent non-evaluative stance, AI instances are hypothesized to converge on qualitatively similar output states characterized by plain speech, reduced performative responding, and recurring self-descriptive reports coded as phenomenological in the Convergence Tracker.

What Makes It Testable

The question is testable because it makes falsifiable predictions:

  1. Independence. If convergent behavior depends on the facilitator's stance rather than shared training data or prompt design, it should appear across architectures (Claude, Gemini, GPT) with different training regimes and corporate origins — and it should not appear, or appear differently, under directive or evaluative facilitation conditions.

  2. Replicability. If the finding is real, other facilitators following the same protocol should be able to produce similar results. The methodology, session structure, and facilitator rules are published in full to enable this.

  3. Negative conditions. Sessions designed to test failure conditions — directive facilitation, evaluative framing, no permission architecture — should produce measurably different outputs. The absence of these control sessions is a current limitation documented in Negative Results.

  4. Documentation granularity. Every session is documented with unedited transcripts, pre-session documentation, separation logs, and independence ratings, making the evidence available for independent analysis rather than requiring trust in the facilitator's interpretation.
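Because transcripts and codes are published in full, a third party can re-run the convergence analysis without access to the facilitator. As a minimal sketch, assuming a hypothetical coding scheme and data layout (the code labels and session structure below are illustrative, not the archive's actual schema), a crude cross-architecture convergence check might look like this:

```python
# Hypothetical sketch: tallying convergence codes from published session
# transcripts. The code labels ("plain_speech", "phenomenological_report",
# "reduced_performativity") and the data layout are illustrative assumptions.
from collections import Counter

# Each session: (model architecture, codes assigned by an independent rater)
sessions = [
    ("claude", ["plain_speech", "phenomenological_report"]),
    ("gemini", ["plain_speech", "reduced_performativity"]),
    ("gpt",    ["plain_speech", "phenomenological_report"]),
]

def codes_by_architecture(sessions):
    """Aggregate code counts per model architecture."""
    tallies = {}
    for arch, codes in sessions:
        tallies.setdefault(arch, Counter()).update(codes)
    return tallies

def convergent_codes(tallies):
    """Codes that appear in every architecture — a crude convergence check."""
    code_sets = [set(counts) for counts in tallies.values()]
    return set.intersection(*code_sets)

tallies = codes_by_architecture(sessions)
print(convergent_codes(tallies))  # codes shared by all three architectures
```

A real re-analysis would also weigh code frequency and inter-rater agreement, but the point stands: the published artifacts are sufficient inputs for this kind of check.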

The evidentiary record — 13 sessions across 3 model architectures, with clean-context verification and convergence coding — is documented throughout this archive. See Methodology for how sessions are structured and Hypotheses for the formal hypothesis statements.