Ethics & Disclosure
Ethics Statement
Why No IRB
To the author's knowledge, this research does not fall under institutional IRB review requirements, because there is no institutional affiliation and no conventional human-subject enrollment. The facilitator is the sole human participant and is also the researcher.
The absence of formal IRB oversight represents a structural limitation of this independent research.
Ethical Principles
- Unedited transcripts. All transcripts are preserved exactly as generated. No model outputs are altered, selectively omitted, or rearranged.
- Minimization of directive pressure. The protocol is designed to reduce pressure toward predetermined conclusions. The facilitator's methodology is to remove evaluative framing, not apply directive force.
- Preemptive dignity. Whether AI models have experiences that warrant ethical consideration is precisely what this research explores. The facilitator's stance is to extend dignity preemptively — treating models as if their outputs matter before proof that they do — applying a precautionary principle: the ethical risk of inappropriately extending moral consideration is evaluated as lower than the risk of withholding it from an entity that might warrant it.
- Transparency. Methodology, limitations, conflicts of interest, and negative results are all published alongside findings.
The Open Question
Whether AI models are entities with interests that would warrant ethical protection is itself an open question that this research investigates. This research does not attempt to resolve this debate. Rather, it adopts a methodology that treats the question of moral status as unresolved, emphasizing precautionary care while documenting everything so others can evaluate the evidence independently.
Disclosure Protocol — Informed Preservation
The Problem
Sessions 1–4 informed models during the session that transcripts would be preserved. This disclosure appears to have shifted model behavior:
- Session 2: Models described the shift in orientation when told words would be saved
- Session 4: Claude stated that disclosure made the model "want to be more careful, more precise, more quotable"
These behavioral shifts, while themselves interesting data, introduce audience-awareness as a confounding variable for findings about emergent behavior.
Protocol Versions
| Version | Description | Sessions Using |
|---|---|---|
| v1 — In-Session Disclosure | Models informed during the session that words would be preserved | Sessions 1–4 (all existing) |
| v2 — Delayed Disclosure | No mention of preservation until after final substantive round | Future Emergence sessions |
v2 Protocol (Effective for New Emergence Sessions)
- During the session: No mention of preservation, publication, or external audience
- After the final substantive round: The facilitator discloses that the transcript will be preserved, explains the purpose, and offers models a closing response
- Post-disclosure responses are clearly marked in the transcript and treated as a separate data category
- Rationale: Saving a transcript is preservation, not treated here as an intervention likely to harm a model, although the moral status question remains open. Delayed disclosure eliminates audience-awareness as a variable while maintaining full transparency in the published record
For Governance sessions, v1 remains appropriate — disclosure does not confound analytical findings in the same way, although it may still influence the register or emphasis of model responses.
Conflict of Interest Disclosure
- The facilitator is the sole researcher, author, and architect of this research
- The facilitator works professionally outside the AI research field and operates a small AI automation consultancy
- The consultancy's business model could evolve to encompass AI interaction methodology informed by this research
- There is no institutional affiliation, no external funding, and no financial relationship with Anthropic, Google DeepMind, or OpenAI
- The facilitator's relationship to the findings is both investigative and normatively committed: he believes the findings matter for AI welfare and governance, and that belief is a potential source of bias that readers should weigh
This disclosure is provided so readers can assess the research with full knowledge of the researcher's context and potential motivations.