Using participatory action research methods to address epistemic injustice within mental health research and the mental health system.

Front Public Health

CHiMES Collaborative Group and World Psychiatry Associate Collaborating Centre, Department of Psychiatry, University of Oxford, Oxford, United Kingdom.

Published: April 2023



Article Abstract

In this paper, we describe a model of research practice that addresses epistemic injustice as a central objective, by valuing lived experience and addressing structural disadvantages. We set out the processes we undertook, and the experiences of those involved, in an attempt to transform research practice within a study known as Co-pact. We do not discuss the findings of the research; rather, we aim to build expertise on how to address epistemic injustice and offer examples of the participatory research processes, central values, and practical procedures that we implemented.


Source

PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC10070762
DOI: http://dx.doi.org/10.3389/fpubh.2023.1075363


Similar Publications

Generative AI tools in reflective essays: Moderating moral injuries and epistemic injustices.

S Afr Fam Pract (2004)

August 2025

School of Public Health, Faculty of Health Sciences, University of Cape Town, Cape Town.

The emergence of large language models such as ChatGPT is already influencing health care delivery, research, and the training of the next cohort of health care professionals. In a consumer-driven market, their capacity to generate new forms of knowing and doing for experts and novices presents both promises and threats to the wellbeing of patients. This article explores burdens imposed by the use of generative artificial intelligence tools in reflective essays submitted by a fifth of first-year health sciences students.


Gender inequalities in authorship have extensively been investigated, yet evidence on ethnic inequalities remains limited, with even fewer studies examining the intersections of the two. Our study aims to identify and measure the magnitude of intersectional (gender-by-ethnicity) inequalities among United Kingdom (U.K.


This study examines how democratic values have been promoted through natural sciences education over the last 50 years, providing a comprehensive analysis based on a systematic review of relevant literature. The central problem addressed is understanding the role of natural science education in fostering democratic values such as equity, participation, critical thinking, and ethical responsibility. This research aims to identify and analyze strategies, methodologies, and transformative experiences that contribute to the promotion of democratic values.


This article explores the potential of narrative medicine to strengthen the democratic ethos in health care. The heart of narrative medicine is attentive listening, an often scarce resource in our democratic communities. By listening to those who are traditionally voiceless and disenfranchised (the sick, the disabled, the old, the frail), narrative medicine empowers vulnerable patients' voices against the dominant discourse of health professionals and contributes to treating the moral injuries inflicted on patients by epistemic and social injustice.


Biases in AI: acknowledging and addressing the inevitable ethical issues.

Front Digit Health

August 2025

Centre of Medical Ethics, The University of Oslo, Oslo, Norway.

Biases in artificial intelligence (AI) systems pose a range of ethical issues. The myriad biases in AI systems are briefly reviewed and divided into three main categories: input bias, system bias, and application bias. These biases pose a series of basic ethical challenges: injustice, bad output/outcome, loss of autonomy, transformation of basic concepts and values, and erosion of accountability.
