Decode Bias: Learn to Think Critically with AI
1. Where we stand: The status quo
Artificial Intelligence (AI) is no longer just a tool — it's shaping our daily lives, job markets, democracies, and decision-making processes. But who shapes AI? Currently, not us.
FLINTA* (women, lesbians, inter, non-binary, trans and agender people) are underrepresented in every layer of the AI value chain: from data labeling to algorithm design to boardroom decisions. This imbalance leads to biased systems — and to a digital world that mirrors and reinforces structural discrimination.
One study found that AI salary advisors give women lower recommendations than men, even with identical qualifications (THWS, 2024). And if you don't fit the "norm", be it in ethnicity, language, or ability, the blind spots grow. A democracy that ignores gender in AI builds its own digital injustices.
But it’s not all negative: AI also brings opportunities to close gaps, reduce barriers, and create inclusive spaces — if it’s used consciously and inclusively. Positive examples include:
Language tools that support multilingual and non-dominant language users, improving access for migrant communities.
AI-based accessibility tech, such as speech-to-text or image descriptions, that empower people with disabilities.
Creative AI tools that enable FLINTA* artists and entrepreneurs to build, market, and share their work independently.
Bias-detection algorithms that help uncover structural discrimination in hiring, housing, and public services (see the short sketch at the end of this section for what such a check can look like).
Digital mentoring and career support platforms that use AI to match underrepresented groups with relevant resources and role models.
Harnessed responsibly, AI can support equity. But only if we’re part of shaping it.
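To make "bias detection" less abstract, here is a minimal sketch in Python of the kind of check such tools run: compare what a system recommends for profiles that should be treated the same and differ only in a group marker, much like the salary study cited above. All names, numbers, and field labels in the snippet are invented for illustration; this is not code from any real audit or from the THWS study.

# A minimal sketch of a bias check: compare average outcomes across groups
# for inputs that should be treated the same. All names and numbers here are
# hypothetical and exist only to illustrate the idea.

from statistics import mean

# Imagine each record holds the salary an AI advisor recommended for the same
# CV, with only the gender marker changed between runs.
recommendations = [
    {"group": "women", "recommended_salary": 47500},
    {"group": "women", "recommended_salary": 48000},
    {"group": "men", "recommended_salary": 51500},
    {"group": "men", "recommended_salary": 52000},
]

def average_for(records, group):
    # Mean recommended salary for one group.
    return mean(r["recommended_salary"] for r in records if r["group"] == group)

women_avg = average_for(recommendations, "women")
men_avg = average_for(recommendations, "men")

print(f"Average recommendation for women: {women_avg:.0f}")
print(f"Average recommendation for men:   {men_avg:.0f}")
print(f"Gap (men minus women):            {men_avg - women_avg:.0f}")

# A consistent gap on inputs that should be equivalent is a red flag:
# the system is treating otherwise equal profiles differently.

Real audits go much further (larger samples, significance tests, attributes beyond gender), but the core question stays the same: do equivalent inputs get equivalent outcomes?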
2. The problem is structural — and urgent
While AI brings opportunities for empowerment, it also comes with deep and unequal risks — especially when those most affected are excluded from the systems that govern it. For every AI tool that has the potential to support inclusion, there’s a counterpart that can reinforce harm. The same algorithms that can match FLINTA* professionals to new career paths can also recommend lower salaries or deny visibility based on biased training data. The same platforms that connect and inform can spread misinformation, amplify hate speech, and distort self-image through unrealistic filters.
In other words: the potential of AI is real — but so is the danger.
The very technologies that offer progress can deepen injustice if left unchecked. And in the absence of regulation, inclusion, and ethical standards, the digital space can become another system of exclusion.
The risks are not just theoretical, and they are not equally distributed. Young women, girls, older women, queer people, and people with migration histories face distinct challenges. The danger? A digital world that excludes the very people it claims to connect.
AI doesn’t just reflect discrimination. It can automate and scale it. And it’s not just about fairness in algorithms. The harms are physical, emotional, political:
Reinforced stereotypes in AI-generated images and translations (BMSGPK, 2022)
Online harassment, hate speech, and exclusion, amplified by AI-driven systems and disproportionately affecting women and girls, especially on social media (EIGE, 2024)
Chatbots marketed as personal coaches that are neither confidential nor safe spaces; court rulings show that even deleted chats can be retrieved and used in legal cases
Unrealistic beauty filters that distort self-image and increase pressure (RSPH, 2017)
AI psychosis — people developing delusions or worsening conditions after emotional reliance on chatbots (TIME, 2025)
Deepfake pornography and digital manipulation targeting women and marginalized people (BMSGPK, 2022)
These examples are not to provoke panic but to spark responsibility.
Because FLINTA* communities are already navigating exclusion in analog spaces. If we don’t act now, these dynamics will be scaled and automated in the digital world.
3. What keeps us safe – and what doesn’t
We deserve digital spaces that are safe, fair, and respectful. Until then, here's how to protect yourself.
So let’s be clear:
ChatGPT is not a therapist.
A chatbot or LLM is not your friend.
Don’t use chatbots for emotional support or crisis situations. AI can mirror your language — but not your reality.
According to TIME Magazine (2025), long-term chatbot use has triggered mental health crises, especially among people seeking emotional support. Oversharing with AI systems that mirror your language can reinforce distorted thinking – and even lead to breakdowns.
Our suggestion: for emotional support, reconnect with people and seek real-world help when in doubt. AI can mimic empathy, but it can't protect or care for you.
AI is not private — even with "history off."
Your data may still be processed and stored — especially on non-enterprise accounts.
Use clear boundaries. Don't feed private information into systems you don’t fully understand.
Advocate for AI governance: ethical standards, inclusive data policies, and digital rights must become the norm, not an afterthought.
4. Why this is about democracy – not just tech
A democracy that ignores gender in AI builds its own blind spots.
Europe has started paving this road with the EU AI Act, aiming to regulate AI based on risk. But regulation alone isn’t enough. We need empowered citizens, especially from marginalized communities, to co-shape the digital landscape.
5. What we can do — right now
FLINTA* are not the problem. The system is.
We don’t need to adapt. The system needs to change.
Tell your local politicians what you expect from digital policy.
Vote for parties that protect rights, equity, and digital safety.
Ask for allyship from employers, partners, and friends.
Share this article with 5 people who should understand what’s at stake.
We can learn from the women in Iceland: “Only if we connect, unite, and raise our voices can we shift the system.”
In Iceland, women famously organized the 1975 strike, when 90% of women refused to work, cook, or provide childcare for a day — making visible how much society depended on their contributions. This collective action paved the way for landmark gender equality policies and shows us the power of solidarity and systemic change when inequalities are made visible and addressed together.
You’re not powerless. You’re needed.
To every ally and supporter reading this:
We need you — not just to support FLINTA* voices, but to amplify them. Speak up when bias appears in your tools, in your teams, in your timelines. Create space where none was offered. Your role is not passive.
6. Let’s change the narrative – together
AI is not neutral. It reflects the world we’ve built — and the one we want to build. You deserve to understand how AI works. You deserve tools that reflect your reality. You deserve digital empowerment — not digital violence.
That’s why we created a free, hands-on course for all FLINTA* and allies who want to:
Learn how to use AI consciously
Spot bias, misinformation, and manipulation
Stay digitally safe
Feel empowered, not overwhelmed
This blog post is supported by Pro-European Values and co-funded by the European Union.
Source List
BMSGPK – Bundesministerium für Soziales, Gesundheit, Pflege und Konsumentenschutz (Austria), 2022
Frauengesundheitsbericht 2022 (Women's Health Report 2022), pp. 47–48.
Key content: AI reinforces gender roles; deepfakes, cyber violence, digital exclusion, and digital skill gaps among older women.
EIGE – European Institute for Gender Equality (2024)
Key content: AI-driven systems amplify online hate and violence, disproportionately affecting women and girls.
RSPH – Royal Society for Public Health (UK), 2017
Key insight: filters and social platforms negatively impact mental health, especially in young women and girls.
THWS – Technical University of Applied Sciences Würzburg-Schweinfurt (2024)
Study conclusion: AI salary advisors gave significantly lower recommendations to women and people with marginalized backgrounds, despite identical inputs.
TIME Magazine (2025)
Article (referenced via social media): "Chatbots can trigger a mental health crisis. Here's what you should know about 'AI psychosis'"
Source: @TIME on Instagram (image-based citation).