News and Events => Opinions & Editorials => Topic started by: Jessica_Rose on December 29, 2025, 08:11:21 AM
Title: AI disproportionately discriminates against trans people. It doesn’t have to be
Post by: Jessica_Rose on December 29, 2025, 08:11:21 AM
AI disproportionately discriminates against trans people. It doesn't have to be this way.
https://www.lgbtqnation.com/2025/12/ai-disproportionately-discriminates-against-trans-people-it-doesnt-have-to-be-this-way/
Faefyx Collington (29 Dec 2025)
Whether we like it or not, artificial intelligence (AI) is becoming ubiquitous in our daily lives, and researchers at the University of Michigan have found that trans and nonbinary people are less likely to view AI favorably. According to another study from earlier this year, there might be a good reason for that.
"In our study, we found that nonbinary and transgender people are significantly more likely to view AI negatively than cisgender people," Associate Professor Oliver Haimson, lead author of the UMich paper and author of Trans Technologies, told LGBTQ Nation. "With AI becoming so pervasive in everyday life, it's important to understand that not everyone wants to use it – which is troubling because they are often required to use AI (such as in healthcare and employment contexts) without meaningful consent."
In their paper, The Ethics of AI at the Intersection of Transgender Identity and Neurodivergence, Max Parks found that historical biases against both the trans and neurodivergent communities were not only inherited by AI models but amplified.
Parks provides a medical example to highlight the problem. "Because medical professionals labeled [trans and nonbinary people] as deviant for decades, the underlying data in medical records and public datasets still reflect those biases. Understanding these connected histories helps explain why current AI systems often perpetuate similar patterns of exclusion and normalization at the intersection of transgender and neurodivergent identities."
Another example Parks notes is that "facial recognition technologies often rely on datasets built around binary gender categories, leading to misclassification or exclusion of transgender and nonbinary individuals... These misclassifications have direct repercussions for algorithmic design."
Title: Re: AI disproportionately discriminates against trans people. It doesn’t have to be
Post by: Susan on December 29, 2025, 08:38:01 AM
I use AI extensively—both for coding and for editing and refining some of my posts here on the forum. After nearly 30 years of running this community, AI has become a genuine force multiplier for me. It allows me to provide more comprehensive, more carefully considered responses to members than I could realistically sustain on my own.
But I came to AI cautiously, for exactly the reasons this article describes.
My first question to any AI I consider using is simple: "Are trans women women?" If it hedges, deflects, or offers a "some people believe" answer, I won't use it. That question isn't rhetorical, and it isn't political. It's an ethics test. It tells me immediately whether a model has been trained—or constrained—in ways that treat our existence as debatable rather than factual.
The good news is that the major models I work with now answer that question correctly and without hesitation. That wasn't always true, and it's worth acknowledging the progress.
How you use AI matters just as much as which AI you use. I don't hand it authority. I train it. I edit its language. I apply clearly defined professional editing standards—standards I developed over years of journalism—to everything it produces. In practice, I often let AI help establish a framework, then I go to work: expanding where depth is needed, tightening where language is loose, and making sure the final voice is unmistakably mine.
The result isn't AI-generated content. It's AI-assisted content, with me as the editor, the quality control, and the final decision-maker. The AI doesn't know my community. It doesn't know what a particular member needs to hear, which phrasing will land wrong, or when clarity matters more than comfort. I do. That knowledge comes from three decades of showing up, listening, and being accountable to the people here.
I often describe AI as a great equalizer. With it, I've written software in languages I had never used before. I built adsbtop, a real-time ADS-B aircraft tracking dashboard, in Python—a language I didn't know when I started. I've also developed server monitoring tools in Python.
I'm currently building SMF2Donations, a security-focused donation management system for this forum, written in PHP. From the outset, it has been designed around data safety and integrity, with security treated as architecture, not an afterthought. The system uses a modular payment-gateway design, careful transaction handling, and defensive coding practices intended to minimize risk and support auditing. Donor trust depends on getting this right, and that principle drives every design decision.
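For readers curious what a "modular payment-gateway design" can mean in practice, here is a minimal, purely illustrative PHP sketch. It is not the actual SMF2Donations code, it assumes PHP 8 or later, and every name in it (DonationGateway, AuditedGateway, and so on) is invented for the example; it only shows the general pattern of hiding each payment provider behind one small interface and wrapping it with defensive checks and an audit log.

Code:

<?php
// Illustrative sketch only -- not the actual SMF2Donations source.
// Each payment provider sits behind one small interface, so the core
// donation code never depends on any single provider's API.

interface DonationGateway
{
    // A short provider identifier, e.g. "examplepay" (invented name).
    public function getName(): string;

    // Charge a donation (amount in cents) and return a transaction ID.
    public function charge(int $amountCents, string $currency, string $donorToken): string;
}

// A decorator that adds defensive checks and an audit trail around any gateway.
final class AuditedGateway implements DonationGateway
{
    public function __construct(private DonationGateway $inner) {}

    public function getName(): string
    {
        return $this->inner->getName();
    }

    public function charge(int $amountCents, string $currency, string $donorToken): string
    {
        // Defensive check: refuse obviously invalid amounts before they reach a provider.
        if ($amountCents <= 0) {
            throw new InvalidArgumentException('Donation amount must be positive.');
        }

        $txId = $this->inner->charge($amountCents, $currency, $donorToken);

        // Audit trail: log the transaction ID and amount, never tokens or card data.
        error_log(sprintf(
            '[donation] gateway=%s tx=%s amount=%d %s',
            $this->inner->getName(),
            $txId,
            $amountCents,
            $currency
        ));

        return $txId;
    }
}

The value of this kind of structure is that the donation system itself never talks to a provider's API directly, so a gateway can be swapped, disabled, or audited without touching the rest of the code.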
Along the way, I've written code, designed graphics—including our Trans Phoenix, which we use as the default member image—and edited tens of thousands of words. All of this is work that would otherwise have required teams of specialists or credentials I don't have.
AI doesn't replace expertise. It amplifies capability for people who think clearly, define problems precisely, and are willing to refine the output. Everything I've created with AI rests on foundations I personally understand and control.
The article is right about one thing: AI inherits the biases of its training data. Historically, medical, legal, and social systems treated transgender people as deviant, disordered, or nonexistent. Those distortions show up in models that default to rigid binary gender assumptions, misgender people in generated content, or produce stereotypical and offensive portrayals of trans lives.
But the reverse is also true. AI trained on inclusive data and guided by clear ethical principles can be genuinely helpful. I've seen it provide accurate, affirming information to questioning people in regions where no trans-competent therapist exists within hundreds of miles. I've used it to help draft responses to members in crisis when I'm running on four hours of sleep and need to be certain I'm not missing something important. It's a tool—and like any tool, it reflects the care, values, and intentions of the people who build it and the people who use it.
Concerns about the current administration's policies are real. If AI systems aren't properly regulated, and if government forms and data collection increasingly recognize only binary cisgender categories, the data going forward becomes worse, not better. That damage lingers. But it is not inevitable. We can correct course in a future administration and put guardrails in place to prevent government systems from being weaponized in this way again.
For now, I'll keep using the tools that pass the test.
And I'll keep asking the right questions.
— Susan 💜
Title: Re: AI disproportionately discriminates against trans people. It doesn’t have to be
Post by: Devlyn on December 29, 2025, 09:26:06 AM
Quote: ...My first question to any AI I consider using is simple: "Are trans women women?" If it hedges, deflects, or offers a "some people believe" answer, I won't use it. That question isn't rhetorical, and it isn't political. It's an ethics test. It tells me immediately whether a model has been trained—or constrained—in ways that treat our existence as debatable rather than factual.
I just tried that on the Google AI...AND IT FAILED MISERABLY!
Don't use Google AI, kids!
Title: Re: AI disproportionately discriminates against trans people. It doesn’t have to be
Post by: Lori Dee on December 29, 2025, 09:58:16 AM
I think that context matters here.
I use Grammarly, an AI-powered tool that checks spelling, grammar, and punctuation. When an AI tool is used for a narrow, specific purpose like that, its bias may never surface or even be relevant.
When the discussion touches something of importance to the transgender community, as it does in Susan's case, the distinction matters, because the model will rely on its training to produce a result. In other areas, it largely will not.
The same is true for humans. If I have an electrician troubleshoot an electrical problem, their opinion of transgender people has no bearing on the task and should not be relevant.
If I use an AI tool to check spelling or grammar, the concept of "are trans women women" has no bearing on the task and should not affect the result. The same is true if you are using a tool to check code, design a circuit, or automate a repetitive task.
Where it does factor in is when the tool is being used to create something on its own, whether an image or prose. Then it relies more heavily on its training and less on established rules of language, physics, or math.
As with any tool, it is important to choose the proper one for the job and to know its capabilities so that you can use it well.