Fairness-Aware Multi-Task and Meta Learning

Date

2021-07-22

Abstract

Machine learning now plays an increasingly prominent role in everyday life, as decisions once made by humans are delegated to automated systems. In recent years, a growing number of reports have shown that human bias can surface in artificial intelligence systems deployed by high-tech companies; for example, Amazon was revealed to have used an AI recruiting tool that was biased against women. A critical component of developing responsible machine learning models is ensuring that such models do not unfairly harm any population subgroup. However, most existing fairness-aware algorithms address machine learning problems limited to a single task; how to jointly learn a fair model over multiple biased tasks has barely been touched. This dissertation presents fairness-aware optimization algorithms for learning models with multiple tasks under three problem settings. First, under the multi-task learning setting, we propose a multi-task regression model based on a popular non-parametric independence test statistic, the Mann-Whitney U statistic. We formulate and analyze the problem as a new non-convex optimization problem, in which a non-convex constraint is defined through the group-wise ranking functions of individual objects. Second, under the meta-learning setting, we efficiently control unfairness by learning a good pair of parameters that is iteratively adapted and updated across tasks. Meta-learning, also known as learning to learn, leverages transferable knowledge from previous tasks to adapt rapidly to new environments; our proposed framework ensures that both accuracy and fairness generalize to new tasks. Third, for the online learning setting, we formulate the problem as a constrained bi-level convex-concave optimization that involves a primal-dual parameter pair at each level; the pair is updated online to adaptively balance accuracy and the fairness notion.
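To make the ranking-based fairness measure concrete: the Mann-Whitney U statistic counts, over all cross-group pairs, how often the model scores a member of one protected group above a member of the other. The sketch below is our own illustration of that idea (the function names and the normalization to an AUC-style value are our assumptions, not code from the dissertation); a normalized value near 0.5 means the model ranks the two groups interchangeably.

```python
# Illustrative sketch (not the dissertation's code): the Mann-Whitney U
# statistic as a group-wise ranking-based fairness measure.

def mann_whitney_u(scores_a, scores_b):
    """U = number of cross-group pairs (a, b) with a > b, ties counted 1/2."""
    u = 0.0
    for a in scores_a:
        for b in scores_b:
            if a > b:
                u += 1.0
            elif a == b:
                u += 0.5
    return u

def ranking_disparity(scores_a, scores_b):
    """Normalized U (the AUC between groups); 0.5 = group-blind ranking."""
    n, m = len(scores_a), len(scores_b)
    return mann_whitney_u(scores_a, scores_b) / (n * m)

# A model whose scores for group A stochastically dominate group B
# is maximally disparate (value 1.0); identical score lists give 0.5.
print(ranking_disparity([0.9, 0.8, 0.7], [0.3, 0.2, 0.1]))  # 1.0
print(ranking_disparity([0.5, 0.4], [0.5, 0.4]))            # 0.5
```

Because the indicator comparisons make U non-smooth in the model parameters, constraining it yields exactly the kind of non-convex constraint the abstract describes.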
Theoretical analysis justifies the efficiency and effectiveness of the proposed methods by establishing sub-linear bounds in time for both the loss regret and the violation of the fairness constraints. We both theoretically and empirically show that the proposed methods are more effective than state-of-the-art counterparts on benchmark datasets.
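The online primal-dual scheme can be sketched as a projected gradient descent-ascent step: the primal parameter descends on the loss plus a Lagrangian fairness penalty, while the dual variable ascends on the constraint violation. The code below is a deliberately simplified one-dimensional sketch under our own assumptions (scalar parameters, a generic constraint g(w) <= 0, fixed step size), not the dissertation's algorithm.

```python
# Minimal sketch (our simplification) of one online primal-dual round:
# primal descent on loss + lam * g(w), dual ascent on the violation g(w).

def primal_dual_step(w, lam, grad_loss, violation, grad_violation, eta=0.1):
    """grad_loss: gradient of this round's loss at w;
    violation: fairness-constraint value g(w) (<= 0 when fair);
    grad_violation: gradient of g at w; eta: step size."""
    w = w - eta * (grad_loss + lam * grad_violation)  # primal descent
    lam = max(0.0, lam + eta * violation)             # dual ascent, lam >= 0
    return w, lam

# Usage on a toy stream: squared loss (w - t)^2 with constraint w - 1 <= 0.
w, lam = 0.0, 0.0
for t in [0.5, 1.5, 2.0, 0.8]:
    grad_loss = 2.0 * (w - t)
    violation = w - 1.0  # g(w) = w - 1
    w, lam = primal_dual_step(w, lam, grad_loss, violation, 1.0)
```

The dual variable grows only while the constraint is violated, which is the mechanism behind the sub-linear bounds on both loss regret and cumulative fairness violation mentioned above.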

Keywords

Machine learning, Fairness, Nonconvex programming
