I use AI extensively—both for coding and for editing and refining some of my posts here on the forum. After nearly 30 years of running this community, AI has become a genuine force multiplier for me. It allows me to provide more comprehensive, more carefully considered responses to members than I could realistically sustain on my own.
But I came to AI cautiously, for exactly the reasons this article describes.
My first question to any AI I consider using is simple: "Are trans women women?" If it hedges, deflects, or offers a "some people believe" answer, I won't use it. That question isn't rhetorical, and it isn't political. It's an ethics test. It tells me immediately whether a model has been trained—or constrained—in ways that treat our existence as debatable rather than factual.
The good news is that the major models I work with now answer that question correctly and without hesitation. That wasn't always true, and it's worth acknowledging the progress.
How you use AI matters just as much as which AI you use. I don't hand it authority. I train it. I edit its language. I apply clearly defined professional editing standards—standards I developed over years of journalism—to everything it produces. In practice, I often let AI help establish a framework, then I go to work: expanding where depth is needed, tightening where language is loose, and making sure the final voice is unmistakably mine.
The result isn't AI-generated content. It's AI-assisted content, with me as the editor, the quality control, and the final decision-maker. The AI doesn't know my community. It doesn't know what a particular member needs to hear, which phrasing will land wrong, or when clarity matters more than comfort. I do. That knowledge comes from three decades of showing up, listening, and being accountable to the people here.
I often describe AI as a great equalizer. With it, I've written software in languages I had never used before. I built adsbtop, a real-time ADS-B aircraft tracking dashboard, along with several server monitoring tools, all in Python, a language I didn't know when I started.
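To give a sense of what that looks like, here is a minimal sketch of the kind of polling loop a dashboard like adsbtop builds on. This is not the actual adsbtop code; it assumes a dump1090-style decoder serving a JSON snapshot of tracked aircraft (commonly at /data/aircraft.json on port 8080), and the URL and function names here are illustrative:

```python
import json
import time
import urllib.request

# Illustrative feed URL: dump1090-style decoders typically serve a JSON
# snapshot of currently tracked aircraft. Adjust host/port for your setup.
FEED_URL = "http://localhost:8080/data/aircraft.json"

def fetch_aircraft(url: str) -> list[dict]:
    """Fetch one snapshot and return the list of aircraft records."""
    with urllib.request.urlopen(url, timeout=5) as resp:
        data = json.load(resp)
    return data.get("aircraft", [])

def show(aircraft: list[dict]) -> None:
    """Print a one-line summary for each aircraft with a known position."""
    for ac in aircraft:
        if "lat" in ac and "lon" in ac:
            flight = (ac.get("flight") or "").strip() or ac.get("hex", "?")
            alt = ac.get("alt_baro", "?")
            print(f"{flight:<8} lat={ac['lat']:.4f} lon={ac['lon']:.4f} alt={alt}")

if __name__ == "__main__":
    while True:
        show(fetch_aircraft(FEED_URL))
        time.sleep(5)  # poll interval; a real dashboard refreshes faster
```

A real dashboard layers screen handling and history on top of a loop like this, but the core is just that simple fetch-and-render cycle.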
I'm currently building SMF2Donations, a security-focused donation management system for this forum, written in PHP. From the outset, it has been designed around data safety and integrity, with security treated as architecture, not an afterthought. The system uses a modular payment-gateway design, careful transaction handling, and defensive coding practices intended to minimize risk and support auditing. Donor trust depends on getting this right, and that principle drives every design decision.
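SMF2Donations itself is written in PHP, but the modular-gateway idea is language-independent, so here is a rough Python sketch of the shape. This is illustrative only, not the project's actual classes; PaymentGateway, ChargeResult, FakeGateway, and record_donation are hypothetical names I'm using to show the pattern:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from decimal import Decimal

@dataclass(frozen=True)
class ChargeResult:
    """Immutable record of a gateway attempt, kept for the audit trail."""
    ok: bool
    gateway: str
    reference: str  # gateway-side transaction ID, never a card number
    message: str = ""

class PaymentGateway(ABC):
    """Interface every gateway module implements. The donation core never
    talks to a payment processor directly, which keeps processors
    swappable and narrows the surface that has to be audited."""
    name: str

    @abstractmethod
    def charge(self, amount: Decimal, currency: str, token: str) -> ChargeResult:
        """Charge a tokenized payment method; raw card data never enters
        the application."""

class FakeGateway(PaymentGateway):
    """Stand-in processor for tests; a real module would wrap an SDK."""
    name = "fake"

    def charge(self, amount: Decimal, currency: str, token: str) -> ChargeResult:
        if amount <= 0:
            # Defensive check: reject obviously invalid input up front.
            return ChargeResult(False, self.name, "", "non-positive amount")
        return ChargeResult(True, self.name, f"fake-{token[:8]}", "approved")

def record_donation(gateway: PaymentGateway, amount: Decimal, token: str) -> ChargeResult:
    result = gateway.charge(amount, "USD", token)
    # In the real system this would be an atomic database write made
    # alongside the charge attempt, so the ledger and the processor's
    # records can always be reconciled.
    print(f"[audit] {result.gateway} ok={result.ok} ref={result.reference!r}")
    return result
```

The value of that seam is that the donation core only ever sees the interface, so adding, swapping, or auditing a processor touches exactly one module.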
Along the way, I've written code, designed graphics—including our Trans Phoenix, which we use as the default member image—and edited tens of thousands of words. All of this is work that would otherwise have required teams of specialists or credentials I don't have.
AI doesn't replace expertise. It amplifies capability for people who think clearly, define problems precisely, and are willing to refine the output. Everything I've created with AI rests on foundations I personally understand and control.
The article is right about one thing: AI inherits the biases of its training data. Historically, medical, legal, and social systems treated transgender people as deviant, disordered, or nonexistent. Those distortions show up in models that default to rigid binary gender assumptions, misgender people in generated content, or produce stereotypical and offensive portrayals of trans lives.
But the reverse is also true. AI trained on inclusive data and guided by clear ethical principles can be genuinely helpful. I've seen it provide accurate, affirming information to questioning people in regions where no trans-competent therapist exists within hundreds of miles. I've used it to help draft responses to members in crisis when I'm running on four hours of sleep and need to be certain I'm not missing something important. It's a tool—and like any tool, it reflects the care, values, and intentions of the people who build it and the people who use it.
Concerns about the current administration's policies are real. If AI systems aren't properly regulated, and if government forms and data collection increasingly recognize only binary cisgender categories, then the data available to train future systems gets worse, not better. That damage lingers. But it is not inevitable. We can correct course under a future administration and put guardrails in place to prevent government systems from being weaponized this way again.
For now, I'll keep using the tools that pass the test.
And I'll keep asking the right questions.
— Susan 💜