Ethics & AI

Project type

Academic research project, qualitative study

Role

Group project (3 people)

Context

Project conducted over a full semester as part of a collective study initiated by the school, focusing on the ethical challenges of artificial intelligence. The entire cohort worked on the same overarching theme (biases, risks, misuses, implications); each group then defined its own research question by selecting a specific theme and sub-theme to investigate.

Objective

Conduct an in-depth research process combining interviews, theoretical references, and critical analysis, in order to identify the specific ethical mechanisms at play in our sub-theme and to conclude with a proposal for a conceptual artifact.

Date

February - June 2025

Selected content

Study overview, key findings, and final artifact

Research question

Generative AI systems produce impressive images, yet deeply biased ones.

Our question: How can we raise awareness among younger audiences of the visual stereotypes produced by AI, and help them identify, understand, and challenge these stereotypes?

Methodology

  • 12 semi-structured interviews with design students

  • Qualitative analysis and response categorization

  • Critical literature review (Haraway, Crawford, Benjamin, Bourdieu)

  • Monitoring of harmful uses and abuses (deepfakes, sexualization, aesthetic homogenization)

  • Collective synthesis and reframing of key insights

Key insights from the study

  1. Biases are visible… yet normalized.
    Participants are able to identify clichés (gender, skin color, sexualization), but often perceive them as “normal” or “inevitable”.

  2. AI aesthetics homogenize creation.
    Many students note that “everything looks the same.” Generated images tend to impose an “AI norm” that shapes and limits collective imaginaries.

  3. AI is perceived as neutral.
    A recurring misconception emerges: “If the AI generates it, it’s not really me.” This reflects a form of disengagement and an illusion of objectivity.

  4. Risks are poorly understood.
    Participants over-rely on filters and show little awareness of harmful uses (deepfakes, hyper-realistic illusions).

Design response: the DcodeIA educational kit

A hybrid toolkit designed to raise awareness of visual bias, develop critical thinking, and encourage more inclusive creative practices.

Kit components

  • Educational booklet

  • “Biased vs. inclusive” image comparisons

  • Collaborative “True / False” workshop

  • Inclusive prompt module

  • Facilitation materials for educators and mediators

How the system works

  1. Observe: identify visual cues (gendered postures, skin whitening, sexualization, aesthetic homogenization).

  2. Discuss: collectively analyze images and compare different perceptions.

  3. Understand: learn how biases form within training datasets (for example, a dataset in which most "engineer" images depict men will lead the model to reproduce that imbalance).

  4. Create: generate images using guided prompts designed to avoid stereotypes.

  5. Share: compare outcomes and discuss creative choices.

Use case scenario

A design student is presented with two AI-generated portraits: one stereotypical, the other reworked. Guided by the booklet, they identify visible biases.

They then create their own image using the inclusive prompt module and discuss the result with their group.

The student leaves with a visual resource they can reuse in future projects.

Impact

  • Develops critical reading of AI-generated images

  • Makes invisible stereotypes visible

  • Encourages more conscious and responsible creative practices

  • Provides young people with tools to better understand AI-driven visual culture

Last updated: January 2026