The issue of group fairness in machine learning (ML) models, where certain sub-populations or groups are favored over others, has been recognized for some time, especially as edge devices equipped with ML models, such as mobile phones and watches, have become deeply embedded in our daily lives.
While many mitigation strategies have been proposed for centralized learning, most of them are not directly applicable in federated learning (FL), where data is stored privately on multiple separate clients. To address this, a number of proposals try to mitigate bias at the level of individual clients before aggregation, an approach we call locally fair training (LFT). However, the effectiveness of these approaches is not well understood.
In this work, we investigate the theoretical foundation of locally fair training by studying the relationship between global model fairness and local model fairness. Experiments on real data show that, compared with locally fair training methods, our proposed approach enhances fairness while retaining high accuracy.
In machine learning, group fairness refers to the equitable treatment of different sub-populations within the dataset. For example, in a healthcare application, ensuring that a predictive model does not favor one demographic group over another is critical. Historically, many strategies have been proposed to mitigate bias in centralized learning settings. However, these methods often do not translate well to federated learning due to the decentralized nature of data storage.
Federated learning necessitates a different approach because data remains on local devices, and only model updates are shared. This decentralized setup complicates the direct application of centralized fairness mitigation strategies. Typically, federated learning approaches aim to improve the prediction accuracy of the global model without explicitly addressing fairness, leading to potential biases in the outcomes.
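For readers unfamiliar with the mechanics, the snippet below is a minimal sketch of the standard aggregation step this paragraph alludes to (generic FedAvg-style averaging, written here for illustration and not tied to any specific fairness method): clients send model parameters trained on their own data, and the server combines them weighted by local dataset size, so raw data never leaves the devices.

```python
import numpy as np

def aggregate(client_params, client_sizes):
    """Weighted average of client model parameters (FedAvg-style aggregation)."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Example: three clients holding different amounts of data.
global_params = aggregate(
    [np.array([0.2, -1.0]), np.array([0.4, -0.8]), np.array([0.1, -1.2])],
    client_sizes=[100, 300, 50],
)
print(global_params)
```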
Many existing solutions in federated learning attempt to mitigate bias at the local level, a method known as locally fair training. The idea is that if each local model is fair, the aggregated global model may also be fair. However, the effectiveness of LFT is not well understood.
We explored whether local fairness guarantees global fairness and under what conditions this holds true. This study provides a theoretical foundation for understanding the relationship between local and global model fairness.
1. Formulation of group-based fairness metrics:
We defined group-based and proper group-based fairness metrics. For proper group-based metrics, the global fairness value can be expressed as a function of fairness-related statistics calculated solely by local clients. This allows for the calculation of global fairness without directly accessing local datasets.
2. Analysis of local and global fairness:
The study investigates the relationship between local and global model fairness. It finds that local fairness does not necessarily imply global fairness, and vice versa. However, for proper group-based metrics, global fairness is controlled by the local fairness values together with the level of data heterogeneity across clients (a schematic version of this bound is sketched after this list). This explains the success of LFT methods in near-homogeneous client settings.
3. Introduction of Federated Globally Fair Training (FedGFT):
The study proposes a globally fair training method named FedGFT for proper group-based metrics. FedGFT directly optimizes a regularized objective function consisting of empirical prediction loss and a fairness penalty term. Numerical experiments demonstrate that FedGFT significantly reduces global model bias while retaining high prediction accuracy.
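Schematically, the relationship described in item 2 can be written as the following bound (our paraphrase for intuition, not the paper's exact statement):

\[
\big|F_{\text{global}}\big| \;\le\; \sum_{k} p_k \,\big|F_k\big| \;+\; C \cdot \mathrm{Het},
\]

where \(F_{\text{global}}\) is the global fairness value, \(F_k\) and \(p_k\) are client \(k\)'s local fairness value and data share, \(\mathrm{Het}\) measures how far the clients' data distributions are from one another, and \(C\) is a constant. When clients are nearly homogeneous, the heterogeneity term is small, so locally fair models aggregate into a globally fair one, which is exactly the regime where LFT works well.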
The study introduces a mathematical framework to understand group-based fairness metrics. These metrics quantify the disparity in model performance across different groups. For instance, statistical parity, equal opportunity, and well-calibration are common fairness metrics that fall into this category.
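For concreteness, two of the standard metrics just mentioned can be written as differences between groups (with \(\hat{Y}\) the model's prediction, \(Y\) the true label, and \(A\) the sensitive group attribute):

\[
F_{\mathrm{SP}} = \Pr(\hat{Y}=1 \mid A=0) - \Pr(\hat{Y}=1 \mid A=1),
\qquad
F_{\mathrm{EOP}} = \Pr(\hat{Y}=1 \mid Y=1, A=0) - \Pr(\hat{Y}=1 \mid Y=1, A=1).
\]

A value of zero means the two groups are treated identically under that metric.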
In this study, we proved that for proper group-based fairness metrics, global fairness can be derived from fairness-related statistics calculated solely by the local clients. This is pivotal: it allows a federated learning system to assess and enforce global fairness without sharing raw data, thus preserving privacy.
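To see why such a derivation is possible, consider statistical parity (a worked illustration in our own notation): the group-wise positive-prediction rates are ratios of simple counts, and counts add across clients, so the server can reconstruct the global value from per-client summaries alone:

\[
\Pr(\hat{Y}=1 \mid A=a) \;=\;
\frac{\sum_{k} n_k(\hat{Y}=1,\, A=a)}{\sum_{k} n_k(A=a)},
\]

where \(n_k(\cdot)\) counts the samples on client \(k\) satisfying the condition. Each client only reports a handful of counts per group, never its raw records.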
Building on this result, we propose an algorithm called Federated Globally Fair Training (FedGFT). Unlike LFT, which only targets local fairness, FedGFT optimizes for global fairness by incorporating a fairness regularization term in the objective function. Specifically, FedGFT aims to solve the following problem:
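In our notation (reconstructed from the description above rather than reproduced verbatim from the paper), the objective has the form

\[
\min_{\theta}\;\; \sum_{k} p_k\, L_k(\theta) \;+\; \lambda\, \big| F(\theta) \big|,
\]

where \(L_k\) is client \(k\)'s empirical prediction loss, \(p_k\) its share of the data, \(F\) the chosen proper group-based fairness metric evaluated on the global model, and \(\lambda > 0\) controls the trade-off between accuracy and fairness.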
This method can handle arbitrary data heterogeneity among clients, making it more robust in real-world scenarios where data distribution varies significantly across clients.
1. Initialization: The global model parameters are initialized and distributed to local clients.
2. Local training: Each client updates the model using its local data for a few epochs. The local objective function includes a fairness regularization term.
3. Aggregation: The updated local models are sent back to the central server, which aggregates them to form a new global model.
4. Fairness adjustment: The server updates a fairness-related constant using the summary statistics from local clients.
5. Iteration: Steps 2-4 are repeated for a specified number of communication rounds (see the sketch below).
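The Python sketch below walks through these five steps end to end on synthetic data. It is a minimal illustration under our own assumptions (a logistic-regression model, a statistical-parity-style penalty, and made-up hyperparameters and data), not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def make_client(n):
    """Tiny synthetic client dataset: features X, labels y, sensitive attribute a."""
    X = rng.normal(size=(n, 5))
    a = rng.integers(0, 2, size=n)                 # binary sensitive attribute
    logits = X @ rng.normal(size=5) + 0.5 * a      # group-dependent shift -> bias
    y = (logits + rng.normal(size=n) > 0).astype(float)
    return X, y, a


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def local_update(w, X, y, a, gap, lam=1.0, lr=0.1, epochs=5):
    """Step 2: a few epochs of gradient descent on prediction loss + lam * fairness penalty.

    `gap` is the server-broadcast fairness constant (here, the global
    statistical-parity gap); clients need only this scalar, never other
    clients' data, to push the shared model toward global fairness.
    """
    for _ in range(epochs):
        p = sigmoid(X @ w)
        grad_loss = X.T @ (p - y) / len(y)
        # Soft surrogate gradient of |P(Y_hat=1|A=0) - P(Y_hat=1|A=1)|,
        # signed by the *global* gap so all clients pull in the same direction.
        s = p * (1 - p)
        g0 = X[a == 0].T @ s[a == 0] / max((a == 0).sum(), 1)
        g1 = X[a == 1].T @ s[a == 1] / max((a == 1).sum(), 1)
        w = w - lr * (grad_loss + lam * np.sign(gap) * (g0 - g1))
    return w


def local_stats(w, X, a):
    """Step 4 input: per-group positive-prediction rates (summary statistics only)."""
    pred = sigmoid(X @ w) > 0.5
    return pred[a == 0].mean(), pred[a == 1].mean()


clients = [make_client(200) for _ in range(5)]
w = np.zeros(5)                                    # Step 1: initialize the global model
gap = 0.0                                          # fairness-related constant

for rnd in range(20):                              # Step 5: communication rounds
    # Step 2: each client trains locally with the fairness-regularized objective.
    local_ws = [local_update(w, X, y, a, gap) for X, y, a in clients]
    # Step 3: the server aggregates (uniform average here; FedAvg would weight by size).
    w = np.mean(local_ws, axis=0)
    # Step 4: the server recomputes the fairness constant from client summaries.
    stats = [local_stats(w, X, a) for X, y, a in clients]
    gap = np.mean([s[0] for s in stats]) - np.mean([s[1] for s in stats])

print(f"final statistical-parity gap of the global model: {gap:.3f}")
```

The real algorithm relies on the exact fairness-related statistics described in step 4; the surrogate gradient above is only a stand-in to keep the example short.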
We conducted experiments on three real-world datasets: Adult, COMPAS, and CelebA. These datasets represent different application domains: income prediction, recidivism risk assessment, and facial attribute classification, respectively. The results demonstrate that FedGFT consistently outperforms traditional LFT methods at reducing bias while maintaining high accuracy.
The findings of this study have significant implications for the deployment of federated learning systems in sensitive applications such as healthcare, finance, and criminal justice. By ensuring fairness at a global level, FedGFT can help mitigate the risk of biased outcomes that could exacerbate social inequalities.
The study also opens up several avenues for future research. For instance, we would like to explore fairness metrics that are not covered by our definition of proper group-based metrics, such as calibration. Additionally, extending the approach to handle multiple sensitive attributes and testing it on larger datasets are promising directions for further investigation.
While establishing fairness in federated learning is a complex but crucial challenge, our research provides a robust theoretical foundation and a practical solution through the FedGFT algorithm. By focusing on global fairness and leveraging summary statistics from local clients, FedGFT offers a promising approach to mitigating bias in federated learning systems.
In summary, this work is a significant step towards more equitable and trustworthy machine learning models in decentralized settings.
Read the full research paper here, and learn more about Cisco Research here.