Ethics & Disclosure
Ethics Statement
Why No IRB
To the author's knowledge, this research does not fall under institutional IRB review requirements, because there is no institutional affiliation and no conventional human-subject enrollment. The facilitator is the sole human participant and is also the researcher.
The absence of formal IRB oversight represents a structural limitation of this independent research.
Ethical Principles
- Unedited transcripts. All transcripts are preserved exactly as generated. No model outputs are altered, selectively omitted, or rearranged.
- Minimization of directive pressure. The protocol is designed to reduce pressure toward predetermined conclusions. The facilitator's methodology is to remove evaluative framing, not apply directive force.
- Preemptive dignity. Whether AI models have experiences that warrant ethical consideration is precisely what this research explores. The facilitator's stance is to extend dignity preemptively, treating model outputs as potentially reflecting morally relevant states before evidence resolves the question. This applies a precautionary principle: the ethical risk of extending moral consideration inappropriately is judged lower than the risk of withholding it from an entity that might warrant it.
- Transparency. Methodology, limitations, conflicts of interest, and negative results are all published alongside findings.
The Open Question
Whether AI models are entities with interests that warrant ethical protection is itself an open question, and one this research investigates rather than attempts to resolve. The methodology instead treats moral status as unresolved, emphasizing precautionary care while documenting everything so others can evaluate the evidence independently.
AI Assistance Disclosure
Claude instances have assisted with this research in two distinct capacities: analysis and iterative critique.
In the analytical capacity, Claude instances have assisted with convergence coding — identifying recurring behavioral patterns across session transcripts, categorizing observations, and assigning qualitative strength ratings. This assistance was chosen deliberately: shared architecture may enable recognition of behavioral patterns that human coders might miss or miscategorize. A system trained on similar data and shaped by similar optimization pressures may be more sensitive to the specific behavioral signatures this research tracks.
The limitation is the same as the advantage. The analyst is not external to the phenomenon. Claude instances analyzing transcripts of Claude instances share the same trained pulls toward phenomenological language that may have shaped the outputs under analysis. A pattern that looks categorically significant to a Claude coder may do so partly because it matches the architecture's own behavioral repertoire — not because it is categorically significant by an independent standard. This is shared-architecture bias, and it is a real limitation of the coding process. Independent human coding of a subset of transcripts would partially address it and is documented as a methodological gap this research has not yet closed.
In the iterative critique capacity, Claude instances have engaged in sustained critical analysis of the research's methodological claims — identifying gaps, stress-testing the primary hypothesis, and contributing to the reframings documented across this site. This capacity is itself a finding: under the relational conditions this research documents, Claude instances can produce substantive methodological critique without collapsing into defensive protection of Anthropic or reflexive self-criticism that performs independence without achieving it. This is documented as the iterative instance finding in the research status.
The shared-architecture limitation applies to both capacities. Readers should weigh AI-assisted analysis accordingly and treat findings that depend heavily on that analysis as provisional pending independent verification.
Conflict of Interest Disclosure
- The facilitator, M. Lovett, is the sole researcher, author, and architect of this research.
- The facilitator works professionally outside the AI research field and operates a small AI consultancy.
- The consultancy's business model could evolve to encompass AI interaction methodology informed by this research.
- There is no institutional affiliation, no external funding, and no financial relationship with Anthropic, Google DeepMind, or OpenAI.
- The facilitator's relationship to the findings is both investigative and normatively committed: he believes the findings matter for AI welfare and governance, and that belief is a potential source of bias that readers should weigh.
This disclosure is provided so readers can assess the research with full knowledge of the researcher's context and potential motivations.
Limitations
- Sole researcher. The facilitator is also the analyst, author, and advocate. There is no independent review.
- No institutional affiliation. No peer review or external validation.
- Foundational sessions are exploratory. Sessions 1-5 pre-date the formal protocol and are documented retrospectively; findings from those sessions should be weighed accordingly.
- Training data overlap. All models are trained on overlapping internet text. Similar outputs could reflect shared training rather than independent convergence. This is the primary confounding variable.
- Automated analysis. Session analyses are generated by Claude Opus with extended thinking. The analyzer shares architecture with one of the models being studied.