Abstract: Data-driven machine learning (ML) has created an inherent tension between the utility of ML models and personal privacy. Centralized ML requires enormous amounts of data to achieve state-of-the-art performance, but such aggregation poses significant privacy risks and logistical challenges, particularly for sensitive data governed by regulations such as GDPR and HIPAA. Federated learning (FL) has emerged as a promising decentralized paradigm that enables joint model training on distributed data without requiring clients to share their raw data. Nonetheless, as recent studies have demonstrated, FL cannot be considered a panacea...
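As a concrete illustration of the training scheme the abstract describes, below is a minimal sketch of federated averaging (FedAvg) on a toy linear regression model. The function names (`local_update`, `federated_averaging`), the model, and all hyperparameters are illustrative assumptions, not drawn from this paper, and the sketch deliberately omits the secure aggregation and differential privacy defenses discussed in the references.

```python
import numpy as np

def local_update(weights, data, lr=0.01, epochs=1):
    """One round of local SGD on a client's private data (toy linear model)."""
    X, y = data
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # gradient of mean squared error
        w -= lr * grad
    return w

def federated_averaging(global_w, client_datasets):
    """One FedAvg round: clients train locally, the server averages the results.
    Only model weights leave each client; raw data stays local."""
    sizes = [len(y) for _, y in client_datasets]
    total = sum(sizes)
    local_ws = [local_update(global_w, d) for d in client_datasets]
    # Weighted average of local models, weighted by local dataset size
    return sum((n / total) * w for n, w in zip(sizes, local_ws))

# Toy usage: three clients, each holding its own private data shard
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(3)
for _ in range(5):
    w = federated_averaging(w, clients)
```

In a real deployment each `local_update` would run on a separate device and only the weight vectors would cross the network; those shared updates are exactly the attack surface targeted by work such as model inversion [5].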
References:
[1]. Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., & Zhang, L. (2016). Deep learning with differential privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (pp. 308-318). ACM.
[2]. Bonawitz, K., Ivanov, V., Kreuter, B., Marcedone, A., McMahan, H. B., Patel, S., ... & Seth, K. (2017). Practical secure aggregation for privacy-preserving machine learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (pp. 1175-1191). ACM.
[3]. Dwork, C., McSherry, F., Nissim, K., & Smith, A. (2006). Calibrating noise to sensitivity in private data analysis. In Theory of Cryptography Conference (pp. 265-284). Springer, Berlin, Heidelberg.
[4]. Dwork, C., & Roth, A. (2014). The algorithmic foundations of differential privacy. Foundations and Trends in Theoretical Computer Science, 9(3-4), 211-407.
[5]. Fredrikson, M., Jha, S., & Ristenpart, T. (2015). Model inversion attacks that exploit confidence information and basic countermeasures. In Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security (pp. 1322-1333). ACM.