Éthique & IA

Project type

AI ethics research study

Date

February - June 2025

Context

This project is part of a collective research initiative addressing the ethical challenges associated with artificial intelligence. Based on a shared overarching theme, each team defined a specific research question to investigate.

Our work involved conducting in-depth research combining interviews, theoretical references, and critical analysis in order to identify precise ethical mechanisms related to our sub-theme. The project concludes with the proposal of a conceptual artifact as a speculative response to these findings.

Tools

Miro, Condens, Photoshop

Skills

Qualitative research, critical analysis, speculative design

Selected content

Study overview, key findings, and final conceptual artifact

Research question

Generative AI systems produce impressive images, yet deeply biased ones.

Our question: How can we raise awareness among younger audiences about the visual stereotypes produced by AI, and help them identify, understand, and challenge them?

Methodology

  • 12 semi-structured interviews with design students

  • Qualitative analysis and response categorization

  • Critical literature review (Haraway, Crawford, Benjamin, Bourdieu)

  • Monitoring of harmful uses and abuses (deepfakes, sexualization, aesthetic homogenization)

  • Collective synthesis and reframing of key insights

Key insights from the study

  1. Biases are visible… yet normalized.
    Participants can identify clichés (gender, skin color, sexualization), but often perceive them as “normal” or “inevitable.”

  2. AI aesthetics homogenize creation.
    Many students note that “everything looks the same.” Generated images tend to impose an “AI norm” that shapes and limits collective imaginaries.

  3. AI is perceived as neutral.
    A recurring misconception emerges: “If the AI generates it, it’s not really me.” This reflects a form of disengagement and an illusion of objectivity.

  4. Risks are poorly understood.
    There is an overreliance on filters and a lack of awareness of harmful uses (deepfakes, hyper-realistic illusions).

Design response: the DcodeIA educational kit

A hybrid toolkit designed to raise awareness of visual bias, develop critical thinking, and encourage more inclusive creative practices.

Kit components

  • Educational booklet

  • “Biased vs. inclusive” image comparisons

  • Collaborative “True / False” workshop

  • Inclusive prompt module

  • Facilitation materials for educators and mediators

How the system works

  1. Observe: identify visual cues (gendered postures, skin whitening, sexualization, aesthetic homogenization).

  2. Discuss: collectively analyze images and confront different perceptions.

  3. Understand: clearly explain how biases are formed within datasets.

  4. Create: generate images using guided prompts designed to avoid stereotypes.

  5. Share: compare outcomes and discuss creative choices.

Use case scenario

A design student is presented with two AI-generated portraits: one stereotypical, the other reworked. Guided by the booklet, they identify visible biases.

They then create their own image using the inclusive prompt module and discuss the result with their group.

The student leaves with a visual resource they can reuse in future projects.

Impact

  • Develops critical reading of AI-generated images

  • Makes invisible stereotypes visible

  • Encourages more conscious and responsible creative practices

  • Provides young people with tools to better understand AI-driven visual culture

Last updated: January 2026