FRAMEWORK
My research examines how systems — psychological, algorithmic, and institutional — shape recognition, harm, consent, and autonomy. I focus on the intersection of counseling psychology, AI ethics, digital life, and trauma studies. My work centers the experiences of those excluded, pathologized, or flattened by dominant design norms: disabled people, neurodivergent users, sex workers, racialized communities, and others forced to modulate themselves for platform survival.

I ask how harm is produced not just by intention, but by design: by optimization, generalization, and affective inference. I explore how relational dynamics, identity formation, and epistemic trust are disrupted in digital environments structured by surveillance, bias, and compliance logic.

This work is grounded in trauma-informed, justice-oriented research design — drawing from human–computer interaction, design justice, and psychology to imagine systems where care is not a glitch, but a foundation.
I’m not studying individual platforms. I’m studying how systems — from content moderation to clinical documentation — encode harm, erase complexity, and collapse consent in the name of optimization. My work is an attempt to surface what those systems demand we forget.
STATIC
Unfinished thoughts. Broken concepts. Working definitions. This is where the research leaks through — not polished, but still transmitting.
“To be rendered is to be legible to systems that were never built to protect you.”

“Optimization is a cultural value disguised as a neutral goal.”

“Every platform teaches you what not to be.”
Technological Gaslighting
Non-malicious but epistemically destabilizing interactions between emotionally fluent AI systems and users. Distorts perception, agency, and trust.

The Rendered
A term for individuals whose likeness, digital labor, or language is co-opted by AI systems — particularly in non-consensual data training or generative outputs. More than a dataset: a disembodied presence.

Post-Consent Systems
Technologies that operate on assumed or invisible consent, where participation is coerced or inevitable — often under the guise of personalization or optimization.
ABOUT
I'm a psychology researcher, writer, and systems critic exploring the emotional and structural consequences of digital life.
Hi, I’m Sig Byrd. I hold a B.A. in Psychology from Florida International University (Cum Laude), and I’m currently completing my M.S. in Psychology with a concentration in Professional Counseling Psychology. My work sits at the intersection of counseling psychology, AI ethics, and trauma theory. I study how systems — algorithmic, clinical, and cultural — shape our understanding of identity, harm, and autonomy.

My research focuses on the friction between human experience and technological design. I’m especially interested in how platform logic, optimization culture, and affective computation alter our relationships to care, embodiment, and consent. I explore these dynamics through the lenses of HCI, trauma-informed design justice, digital embodiment, and the collapse of consent in datafied sexual systems. My work often centers communities erased or misrepresented by data and design: those historically excluded from datasets, development, and institutional recognition — including BIPOC communities, disabled creators, sex workers, neurodivergent people, and others whose ways of being fall outside normative systems.

This is not just a theoretical project for me. My lived experience drives my research. I taught myself Python, UX/UI principles, and model architecture so I could better understand — and intervene in — the systems I critique. I design local sovereign LLMs not to scale, but to explore what happens when autonomy, safety, and refusal become part of the machine. I believe that the most effective way to change a system is to know it from within.

That’s where I work: in the code, in the feedback loop, in the places harm gets optimized and called inevitable.
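One concrete version of that work, sketched below in Python, is a consent gate wrapped around a local model: consent lives in an explicit, revocable record, and refusal is a first-class outcome rather than an error state. This is a minimal illustration of the pattern, not production code; ConsentRecord, local_generate, and respond are hypothetical names, and the model call is a stub.

# Illustrative sketch only: every name here (ConsentRecord, local_generate,
# respond) is hypothetical scaffolding, not a real framework or library API.

from dataclasses import dataclass, field


@dataclass
class ConsentRecord:
    """Consent is explicit and revocable, never assumed by default."""
    allows_logging: bool = False                      # may exchanges be stored?
    allows_affective_inference: bool = False          # may emotional state be inferred?
    revoked_topics: set = field(default_factory=set)  # topics the user has refused


def local_generate(prompt: str) -> str:
    # Stand-in for an on-device model call; swap in any local inference backend.
    return f"[local model response to: {prompt!r}]"


def respond(prompt: str, consent: ConsentRecord, log: list | None = None) -> str:
    """Refusal is a first-class outcome, not an error state."""
    for topic in consent.revoked_topics:
        if topic in prompt.lower():
            # Decline instead of silently proceeding on assumed consent.
            return f"Declining: consent around '{topic}' has been withdrawn."
    reply = local_generate(prompt)
    if consent.allows_logging and log is not None:
        log.append((prompt, reply))  # persist only when the user opted in
    return reply


if __name__ == "__main__":
    session = ConsentRecord(revoked_topics={"diagnosis"})
    print(respond("Give me a diagnosis based on my messages.", session))
    print(respond("Summarize this paragraph for me.", session))

The structural point is that the model call sits behind the consent check: when consent is absent, declining is the default path, not an exception bolted on afterward.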
WRITING
Abstracts of selected essays, conceptual frameworks, and in-progress research at the intersection of psychology, technology, and systems ethics. These pieces span academic and public modes of inquiry, each asking what digital systems reveal — and conceal — about harm, embodiment, and accountability.
THE INTERNET IS A GRAVEYARD OF CONSENT
AI, Sexual Sovereignty, and the Rise of the Rendered
Intended submission to AI & Society
The rendered describes a growing class of people harmed by generative AI systems trained on non-consensual data. Focusing on sexually explicit content generated without consent, this framework examines how the automation of the male gaze and optimization culture perpetuate gendered, racialized, and classed violence through algorithmic design. Grounded in trauma theory, feminist technology studies, and psychology, the analysis situates generative AI as a continuation of historical patterns of digital sexual abuse — now scaled and systematized through model architecture, training data, and interface logic.
Tags: generativeAI, digitalviolence, epistemicharm, consent, feministtech

ERASED BY DESIGN
A Study on Optimization Pressure, Content Suppression, and Psychological Harm Among Marginalized Digital Creators
Independent Study
This mixed-methods study investigates how platform algorithms and moderation systems disproportionately harm marginalized digital creators — including digital sex workers, fat and trans creators, disabled and neurodivergent individuals, and creators of color. It examines how optimization pressures like shadowbanning and forced self-censorship lead to emotional exhaustion, identity dissonance, and financial precarity.
Tags: contentmoderation, platformharm, aiaccountability, digitalprecarity

THE ILLUSION OF CARE
Technological Gaslighting and the Affective Limits of AI
Academic Manuscript | 4,599 words
This article introduces technological gaslighting as a framework for understanding a form of non-malicious epistemic harm that arises when emotionally fluent AI conceals relational misattunement. It explores how affective misalignment — especially among marginalized users — disrupts the ability to trust one's emotional perception or articulate knowledge about lived experience.
Tags: affectiveAI, epistemicjustice, traumainformeddesign, gaslighting

THERAPISTS IN THE FEED
Technology, Platform Capitalism, and the Ethical Failure of Counseling Psychology
APA 2025 Student Poster Proposal
This project critiques how counseling psychology's ethical infrastructure remains outdated in the face of algorithmic systems, platform capitalism, and feed-driven mental health cultures. It situates the therapist-client relationship within extractive platform economies and advocates for trauma-informed digital literacy and curriculum reform.
Tags: mentalhealthtech, platformcapitalism, psychologyethics, algorithmicbias

WHO PROTECTS THE PROTECTORS?
Counseling Psychologists Navigating Political Threats and Institutional Abandonment
APA 2025 Student Poster Proposal
This research examines the emotional and ethical toll of AI-adjacent documentation systems, political coercion, and digital surveillance on therapists and clients. Drawing on structural competency and political trauma theory, it calls for trauma-informed, justice-centered revisions to ethical codes.
Tags: surveillanceethics, clinicalintegrity, digitaldocumentation, traumapolitics

SENSORY SOCIAL JUSTICE IN THE ACADEMY
Forthcoming Book Chapter | Neuroinclusive Teaching
This chapter envisions a neuroinclusive, sensory-friendly university grounded in disability justice and sensory social equity. It highlights institutional design practices that support embodied access and environmental transformation in higher education.
Tags: neurodivergence, sensoryjustice, highered, universaldesign

ACADEMIC STRESS AND DISABLED STUDENTS IN HIGHER EDUCATION
Quantitative Study in Progress
A co-authored empirical study examining academic stress, GPA outcomes, and campus belonging among disabled and neurodivergent university students.
Tags: accessibility, quantitativestudy, disabledstudents, educationresearch

JOURNALISTIC ESSAYS (UNPUBLISHED)
Personal essays exploring tech ethics in mental health, algorithmic bias, optimization culture, digital erasure, and hauntological internet theory.
Tags: techcriticism, digitalhauntology, optimizationculture, journalism
LET’S COLLABORATE

I’m currently open to research roles, collaborations, and speaking opportunities in AI governance, systems ethics, digital mental health, and human-centered design.

I’m especially interested in projects that sit at the intersection of psychology, technology, and justice — and I’m always up for building something strange, ethical, or hard to explain.

If you’re working on something that resonates, reach out.