Supporting Trustworthy AI Through Machine Unlearning.

Sci Eng Ethics

Department of Legal Studies, University of Bologna, Via Zamboni, 27/29, 40121, Bologna, Italy.

Published: September 2024



Article Abstract

Machine unlearning (MU) is often analyzed in terms of how it can facilitate the "right to be forgotten." In this commentary, we show that MU can support the OECD's five principles for trustworthy AI, which are influencing AI development and regulation worldwide. This makes it a promising tool to translate AI principles into practice. We also argue that the implementation of MU is not without ethical risks. To address these concerns and amplify the positive impact of MU, we offer policy recommendations across six categories to encourage the research and uptake of this potentially highly influential new technology.


Source
PMC: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC11390766
DOI: http://dx.doi.org/10.1007/s11948-024-00500-5


Similar Publications

Machine unlearning (MU) aims to eliminate information learned from specific training data, namely forgetting data, from a pretrained model. Currently, mainstream relabeling-based MU methods modify the forgetting data with incorrect labels and then fine-tune the model. While learning such incorrect information can indeed remove knowledge, the process is quite unnatural: the unlearning procedure undesirably reinforces the incorrect information and leads to over-forgetting.
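The relabel-then-fine-tune recipe described above can be sketched on a toy model. This is a minimal illustration only: the nearest-class-mean "classifier", the data, and names like `forget_set` are invented for the sketch and are not taken from the cited work.

```python
# Sketch of relabeling-based machine unlearning on a toy 1-D,
# nearest-class-mean classifier (all data and names are illustrative).

def fit(data):
    """'Train' by computing the mean feature value per class label."""
    sums, counts = {}, {}
    for x, y in data:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(means, x):
    """Predict the class whose mean is closest to x."""
    return min(means, key=lambda y: abs(x - means[y]))

# Training data: class 0 clusters near 0, class 1 near 10;
# (5.0, 0) sits near the decision boundary.
train = [(0.0, 0), (1.0, 0), (5.0, 0), (9.0, 1), (10.0, 1), (11.0, 1)]
model = fit(train)

# Forgetting data whose influence we want removed.
forget_set = [(5.0, 0)]

# Relabeling-based MU: assign an incorrect (flipped) label to the
# forgetting data, then "fine-tune" -- here, simply refit -- on the
# retained data plus the relabeled points.
relabeled = [(x, 1 - y) for x, y in forget_set]
retained = [p for p in train if p not in forget_set]
unlearned = fit(retained + relabeled)

print(predict(model, 5.0))      # before unlearning: 0
print(predict(unlearned, 5.0))  # after unlearning: 1
```

Note how the flipped label actively pushes the model toward the wrong class for the forgotten point rather than merely removing its influence; this is the "reinforcing incorrect information" effect the abstract criticizes as over-forgetting.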


Enhancing partition distinction: A contrastive policy to recommendation unlearning.

Neural Netw

October 2025

College of Cyber Security, Jinan University, Guangzhou 511443, Guangdong, China. Electronic address:

With the growing privacy and data contamination concerns in recommendation systems, recommendation unlearning, i.e., unlearning the impact of specific learned data, has garnered more attention.


Fast yet versatile machine unlearning for deep neural networks.

Neural Netw

October 2025

Jiangxi Qiushi Academy for Advanced Studies, Nanchang 330038, China.

In response to the growing concerns regarding data privacy, many countries and organizations have implemented corresponding laws and regulations, such as the General Data Protection Regulation (GDPR), to safeguard users' data privacy. Among these, the Right to Be Forgotten holds particular significance, signifying the necessity for data to be forgotten from improper use. Recently, researchers have integrated the concept of the Right to Be Forgotten into the field of machine learning, focusing on the unlearning of data from machine learning models.


Evidence-based personalised medicine in critical care: a framework for quantifying and applying individualised treatment effects in patients who are critically ill.

Lancet Respir Med

June 2025

Division of Respirology, Department of Medicine, University Health Network, Toronto, ON, Canada; Toronto General Hospital Research Institute, Toronto, ON, Canada; Interdepartmental Division of Critical Care Medicine, University of Toronto, Toronto, ON, Canada. Electronic address: ewan.goligher@utoro

Clinicians aim to provide treatments that will result in the best outcome for each patient. Ideally, treatment decisions are based on evidence from randomised clinical trials. Randomised trials conventionally report an aggregated difference in outcomes between patients in each group, known as an average treatment effect.
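The average treatment effect mentioned above is simply the difference in mean outcomes between the two trial arms. A minimal sketch, with invented outcome values rather than data from any actual trial:

```python
# Toy illustration of an average treatment effect (ATE).
# The outcome values below are invented for the sketch.

treated = [0.70, 0.65, 0.80]   # e.g. outcomes in the treatment arm
control = [0.60, 0.55, 0.65]   # outcomes in the control arm

def mean(xs):
    return sum(xs) / len(xs)

# The aggregated trial result: mean outcome difference between arms.
ate = mean(treated) - mean(control)
print(round(ate, 2))  # a single number that can mask individual variation
```

The single aggregated number is exactly what the framework above seeks to move beyond, toward individualized treatment-effect estimates.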


Recently, machine unlearning has emerged as a popular method for efficiently erasing the impact of personal data from machine learning (ML) models upon the data owner's removal request. However, few studies consider the security concerns that may arise in the unlearning process. In this article, we propose the first unlearning attack for regression learning, dubbed UnAR, which deliberately influences the predictive behavior of the target sample against regression learning models.
