[1] “MELLODDY consortium.” https://cordis.europa.eu/project/rcn/223634/factsheet/en. Accessed: November 21, 2020.
[2] T. Yang, G. Andrew, H. Eichner, H. Sun, W. Li, N. Kong, D. Ramage, and F. Beaufays, “Applied federated learning: Improving Google keyboard query suggestions,” arXiv preprint arXiv:1812.02903, 2018.
[3] L. Muñoz-González, K. T. Co, and E. C. Lupu, “Byzantine-robust federated machine learning through adaptive model averaging,” arXiv preprint arXiv:1909.05125, 2019.
[4] M. Nasr, R. Shokri, and A. Houmansadr, “Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning,” in 2019 IEEE Symposium on Security and Privacy (SP), pp. 739–753, IEEE, 2019. doi: 10.1109/SP.2019.00065.
[5] R. Shokri, M. Stronati, C. Song, and V. Shmatikov, “Membership inference attacks against machine learning models,” in 2017 IEEE Symposium on Security and Privacy (SP), pp. 3–18, IEEE, 2017. doi: 10.1109/SP.2017.41.
[6] Z. He, T. Zhang, and R. B. Lee, “Model inversion attacks against collaborative inference,” in Proceedings of the 35th Annual Computer Security Applications Conference, pp. 148–162, 2019. doi: 10.1145/3359789.3359824.
[7] E. Bagdasaryan, A. Veit, Y. Hua, D. Estrin, and V. Shmatikov, “How to backdoor federated learning,” in International Conference on Artificial Intelligence and Statistics, pp. 2938–2948, PMLR, 2020.
[8] Apple Differential Privacy Team, “Learning with privacy at scale,” Apple Machine Learning Journal, vol. 1, no. 9, 2017.
[9] P. Kairouz, K. Bonawitz, and D. Ramage, “Discrete distribution estimation under local privacy,” arXiv preprint arXiv:1602.07387, 2016.
[10] E. Hesamifard, H. Takabi, and M. Ghasemi, “CryptoDL: Deep neural networks over encrypted data,” arXiv preprint arXiv:1711.05189, 2017.
[11] O. Goldreich, S. Micali, and A. Wigderson, “How to play any mental game, or a completeness theorem for protocols with honest majority,” in Providing Sound Foundations for Cryptography: On the Work of Shafi Goldwasser and Silvio Micali, pp. 307–328, 2019. doi: 10.1145/3335741.3335755.
[12] L. Song, R. Shokri, and P. Mittal, “Membership inference attacks against adversarially robust deep learning models,” in 2019 IEEE Security and Privacy Workshops (SPW), pp. 50–56, IEEE, 2019. doi: 10.1109/SPW.2019.00021.
[13] Y. Zhang, R. Jia, H. Pei, W. Wang, B. Li, and D. Song, “The secret revealer: Generative model-inversion attacks against deep neural networks,” arXiv preprint arXiv:1911.07135, 2019.
[14] N. Carlini, C. Liu, Ú. Erlingsson, J. Kos, and D. Song, “The secret sharer: Evaluating and testing unintended memorization in neural networks,” in 28th USENIX Security Symposium (USENIX Security 19), pp. 267–284, 2019.
[15] J. Geiping, H. Bauermeister, H. Dröge, and M. Moeller, “Inverting gradients – How easy is it to break privacy in federated learning?,” arXiv preprint arXiv:2003.14053, 2020.
[16] N. Papernot, S. Chien, S. Song, A. Thakurta, and Ú. Erlingsson, “Making the shoe fit: Architectures, initializations, and tuning for learning with privacy,” arXiv preprint arXiv:1907.13626, 2019.
[17] L. Lyu, H. Yu, and Q. Yang, “Threats to federated learning: A survey,” arXiv preprint arXiv:2003.02133, 2020.
[18] E. De Cristofaro, “An overview of privacy in machine learning,” arXiv preprint arXiv:2005.08679, 2020.
[19] Y. Kaya, S. Hong, and T. Dumitras, “On the effectiveness of regularization against membership inference attacks,” arXiv preprint arXiv:2006.05336, 2020.
[20] G. Kaissis, A. Ziller, J. Passerat-Palmbach, T. Ryffel, D. Usynin, A. Trask, I. Lima, J. Mancuso, F. Jungmann, M.-M. Steinborn, A. Saleh, M. Makowski, D. Rueckert, and R. Braren, “End-to-end privacy preserving deep learning on multi-institutional medical imaging,” Nature Machine Intelligence, May 2021. doi: 10.1038/s42256-021-00337-8.
[21] S. Hidano, T. Murakami, S. Katsumata, S. Kiyomoto, and G. Hanaoka, “Model inversion attacks for prediction systems: Without knowledge of non-sensitive attributes,” in 2017 15th Annual Conference on Privacy, Security and Trust (PST), pp. 115–11509, IEEE, 2017. doi: 10.1109/PST.2017.00023.
[22] Z. Wang, M. Song, Z. Zhang, Y. Song, Q. Wang, and H. Qi, “Beyond inferring class representatives: User-level privacy leakage from federated learning,” in IEEE INFOCOM 2019 – IEEE Conference on Computer Communications, pp. 2512–2520, IEEE, 2019.
[23] L. Zhu, Z. Liu, and S. Han, “Deep leakage from gradients,” in Advances in Neural Information Processing Systems, pp. 14747–14756, 2019.
[24] B. Zhao, K. R. Mopuri, and H. Bilen, “iDLG: Improved deep leakage from gradients,” arXiv preprint arXiv:2001.02610, 2020.
[25] T. Orekondy, S. J. Oh, Y. Zhang, B. Schiele, and M. Fritz, “Gradient-leaks: Understanding and controlling deanonymization in federated learning,” arXiv preprint arXiv:1805.05838, 2018.
[26] N. Papernot, A. Thakurta, S. Song, S. Chien, and Ú. Erlingsson, “Tempered sigmoid activations for deep learning with differential privacy,” arXiv preprint arXiv:2007.14191, 2020.
[27] B. Avent, J. Gonzalez, T. Diethe, A. Paleyes, and B. Balle, “Automatic discovery of privacy-utility Pareto fronts,” arXiv preprint arXiv:1905.10862, 2019.
[28] A. Chakraborty, M. Alam, V. Dey, A. Chattopadhyay, and D. Mukhopadhyay, “Adversarial attacks and defences: A survey,” arXiv preprint arXiv:1810.00069, 2018.
[29] X. Chen, C. Liu, B. Li, K. Lu, and D. Song, “Targeted backdoor attacks on deep learning systems using data poisoning,” arXiv preprint arXiv:1712.05526, 2017.
[30] M. Lecuyer, V. Atlidakis, R. Geambasu, D. Hsu, and S. Jana, “Certified robustness to adversarial examples with differential privacy,” in 2019 IEEE Symposium on Security and Privacy (SP), pp. 656–672, IEEE, 2019. doi: 10.1109/SP.2019.00044.
[31] M. Mozaffari-Kermani, S. Sur-Kolay, A. Raghunathan, and N. K. Jha, “Systematic poisoning attacks on and defenses for machine learning in healthcare,” IEEE Journal of Biomedical and Health Informatics, vol. 19, no. 6, pp. 1893–1905, 2014.
[32] G. A. Kaissis, M. R. Makowski, D. Rückert, and R. F. Braren, “Secure, privacy-preserving and federated machine learning in medical imaging,” Nature Machine Intelligence, pp. 1–7, 2020. doi: 10.1038/s42256-020-0186-1.
[33] S. G. Finlayson, J. D. Bowers, J. Ito, J. L. Zittrain, A. L. Beam, and I. S. Kohane, “Adversarial attacks on medical machine learning,” Science, vol. 363, no. 6433, pp. 1287–1289, 2019.
[34] M. Fredrikson, E. Lantz, S. Jha, S. Lin, D. Page, and T. Ristenpart, “Privacy in pharmacogenetics: An end-to-end case study of personalized warfarin dosing,” in 23rd USENIX Security Symposium (USENIX Security 14), pp. 17–32, 2014.
[35] P. Kairouz, H. B. McMahan, B. Avent, A. Bellet, M. Bennis, A. N. Bhagoji, K. Bonawitz, Z. Charles, G. Cormode, R. Cummings, et al., “Advances and open problems in federated learning,” arXiv preprint arXiv:1912.04977, 2019.
[36] A. Hard, K. Rao, R. Mathews, S. Ramaswamy, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, and D. Ramage, “Federated learning for mobile keyboard prediction,” arXiv preprint arXiv:1811.03604, 2018.
[37] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “Federated optimization in heterogeneous networks,” arXiv preprint arXiv:1812.06127, 2018.
[38] P. Vepakomma, O. Gupta, T. Swedish, and R. Raskar, “Split learning for health: Distributed deep learning without sharing raw patient data,” arXiv preprint arXiv:1812.00564, 2018.
[39] C. Dwork and A. Roth, “The algorithmic foundations of differential privacy,” Foundations and Trends® in Theoretical Computer Science, vol. 9, no. 3–4, pp. 211–407, 2013. doi: 10.1561/0400000042.
[40] M. Abadi, A. Chu, I. Goodfellow, H. B. McMahan, I. Mironov, K. Talwar, and L. Zhang, “Deep learning with differential privacy,” in Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, Oct. 2016. doi: 10.1145/2976749.2978318.
[41] “PyTorch-DP.” https://github.com/facebookresearch/pytorch-dp. Accessed: June 29, 2020.
[42] B. Kulynych and M. Yaghini, “mia: A library for running membership inference attacks against ML models,” Sept. 2018.
[43] E. Hesamifard, H. Takabi, M. Ghasemi, and R. N. Wright, “Privacy-preserving machine learning as a service,” Proceedings on Privacy Enhancing Technologies, vol. 2018, no. 3, pp. 123–142, 2018.
[44] Q. Li, Y. Diao, Q. Chen, and B. He, “Federated learning on non-IID data silos: An experimental study,” arXiv preprint arXiv:2102.02079, 2021.
[45] Y. Le Cun, L. D. Jackel, B. Boser, J. S. Denker, H. P. Graf, I. Guyon, D. Henderson, R. E. Howard, and W. Hubbard, “Handwritten digit recognition: Applications of neural network chips and automatic learning,” IEEE Communications Magazine, vol. 27, no. 11, pp. 41–46, 1989. doi: 10.1109/35.41400.
[46] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” in Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
[47] A. Hore and D. Ziou, “Image quality metrics: PSNR vs. SSIM,” in 2010 20th International Conference on Pattern Recognition, pp. 2366–2369, IEEE, 2010.
[48] D. S. Kermany, M. Goldbaum, W. Cai, C. C. Valentim, H. Liang, S. L. Baxter, A. McKeown, G. Yang, X. Wu, F. Yan, et al., “Identifying medical diagnoses and treatable diseases by image-based deep learning,” Cell, vol. 172, no. 5, pp. 1122–1131, 2018.
[49] S. Wagh, D. Gupta, and N. Chandran, “SecureNN: Efficient and private neural network training,” Cryptology ePrint Archive, Report 2018/442, 2018.
[50] T. Ryffel, D. Pointcheval, and F. Bach, “AriaNN: Low-interaction privacy-preserving deep learning via function secret sharing,” arXiv preprint arXiv:2006.04593, 2020.
[51] I. Chillotti, M. Joye, and P. Paillier, “Programmable bootstrapping enables efficient homomorphic inference of deep neural networks,” Cryptology ePrint Archive, Report 2021/091, 2021. https://eprint.iacr.org/2021/091. doi: 10.1007/978-3-030-78086-9_1.
[52] A. Salem, Y. Zhang, M. Humbert, P. Berrang, M. Fritz, and M. Backes, “ML-Leaks: Model and data independent membership inference attacks and defenses on machine learning models,” arXiv preprint arXiv:1806.01246, 2018.
[53] A. Sablayrolles, M. Douze, C. Schmid, Y. Ollivier, and H. Jégou, “White-box vs black-box: Bayes optimal strategies for membership inference,” in International Conference on Machine Learning, pp. 5558–5567, PMLR, 2019.
[54] M. A. Rahman, T. Rahman, R. Laganière, N. Mohammed, and Y. Wang, “Membership inference attack against differentially private deep learning model,” Transactions on Data Privacy, vol. 11, no. 1, pp. 61–79, 2018.
[55] S. Truex, L. Liu, M. E. Gursoy, L. Yu, and W. Wei, “Demystifying membership inference attacks in machine learning as a service,” IEEE Transactions on Services Computing, 2019.
[56] C. A. C. Choo, F. Tramer, N. Carlini, and N. Papernot, “Label-only membership inference attacks,” arXiv preprint arXiv:2007.14321, 2020.
[57] Y. Park and M. Kang, “Membership inference attacks against object detection models,” arXiv preprint arXiv:2001.04011, 2020.
[58] R. Gilad-Bachrach, N. Dowlin, K. Laine, K. Lauter, M. Naehrig, and J. Wernsing, “CryptoNets: Applying neural networks to encrypted data with high throughput and accuracy,” in International Conference on Machine Learning, pp. 201–210, 2016.
[59] B. D. Rouhani, M. S. Riazi, and F. Koushanfar, “DeepSecure: Scalable provably-secure deep learning,” in Proceedings of the 55th Annual Design Automation Conference, p. 2, ACM, 2018. doi: 10.1145/3195970.3196023.
[60] M. Barni, C. Orlandi, and A. Piva, “A privacy-preserving protocol for neural-network-based computation,” in Proceedings of the 8th Workshop on Multimedia and Security, pp. 146–151, ACM, 2006. doi: 10.1145/1161366.1161393.
[61] P. Mohassel and Y. Zhang, “SecureML: A system for scalable privacy-preserving machine learning,” in 2017 IEEE Symposium on Security and Privacy (SP), pp. 19–38, IEEE, 2017. doi: 10.1109/SP.2017.12.
[62] S. L. Garfinkel, J. M. Abowd, and S. Powazek, “Issues encountered deploying differential privacy,” in Proceedings of the 2018 Workshop on Privacy in the Electronic Society, pp. 133–137, 2018. doi: 10.1145/3267323.3268949.
[63] J. Lee and C. Clifton, “How much is enough? Choosing ε for differential privacy,” in International Conference on Information Security, pp. 325–340, Springer, 2011. doi: 10.1007/978-3-642-24861-0_22.
[64] T. Farrand, F. Mireshghallah, S. Singh, and A. Trask, “Neither private nor fair: Impact of data imbalance on utility and fairness in differential privacy,” arXiv preprint arXiv:2009.06389, 2020.
[65] J. Zhao, T. Wang, T. Bai, K.-Y. Lam, Z. Xu, S. Shi, X. Ren, X. Yang, Y. Liu, and H. Yu, “Reviewing and improving the Gaussian mechanism for differential privacy,” arXiv preprint arXiv:1911.12060, 2019.