Differential Privacy/Pseudonymization in Federated Learning for Medical Data
Mar 1, 2024 · 1 min read

A privacy-preserving machine learning system that implements differential privacy and pseudonymization techniques in TensorFlow Federated to protect sensitive patient data in collaborative medical research settings.
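Pseudonymization typically replaces direct identifiers with irreversible tokens before records leave a hospital. A minimal sketch using a keyed hash (HMAC-SHA256) follows; the function name, key handling, and hash choice are illustrative assumptions, not details taken from the project:

```python
import hmac
import hashlib

def pseudonymize(patient_id: str, secret_key: bytes) -> str:
    """Replace a direct identifier with a keyed hash token.

    Unlike a plain hash, the secret key prevents dictionary attacks
    over the small space of possible patient IDs.
    """
    return hmac.new(secret_key, patient_id.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical key for illustration; in practice it would live in a
# key-management service, never in source code.
key = b"example-secret-key"
token = pseudonymize("patient-0042", key)
```

The same ID always maps to the same token under a given key, so records can still be linked across sites for joint analysis, while a different or rotated key yields unlinkable tokens.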
This project demonstrates how to balance model performance with patient data security: it achieves substantial reductions in mean absolute error (MAE) while maintaining privacy safeguards aligned with HIPAA and GDPR requirements.
Key Achievements:
- Significantly reduced mean absolute error while preserving data privacy in federated environments
- Implemented differential privacy mechanisms for patient data protection
- Balanced model performance with stringent information security requirements
- Maintained adherence to HIPAA and GDPR compliance standards
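The differential privacy mechanism mentioned above can be sketched in plain Python with the classic Laplace mechanism: clip each value to a known range, then add noise calibrated to the query's sensitivity and a privacy budget epsilon. This is a generic illustration under stated assumptions (stdlib only, publicly known record count n), not the project's TensorFlow Federated implementation:

```python
import random

def laplace_noise(scale: float, rng: random.Random) -> float:
    # The difference of two iid Exponential(1/scale) draws is Laplace(0, scale).
    return rng.expovariate(1.0 / scale) - rng.expovariate(1.0 / scale)

def private_mean(values: list[float], lower: float, upper: float,
                 epsilon: float, rng: random.Random) -> float:
    """Epsilon-differentially-private mean of values clipped to [lower, upper].

    Changing one record moves the clipped sum by at most (upper - lower),
    so the mean's sensitivity is (upper - lower) / n when n is public.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon, rng)

# Large epsilon -> little noise; small epsilon -> strong privacy, more noise.
estimate = private_mean([1.0, 2.0, 3.0], 0.0, 10.0, epsilon=1.0,
                        rng=random.Random(0))
```

The same trade-off drives the MAE results above: tightening epsilon strengthens the privacy guarantee but injects more noise into the model updates, so tuning the budget is where performance and privacy are balanced.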
Technologies: TensorFlow Federated, Differential Privacy, Python, Privacy-Preserving ML
Affiliation: University of Washington