Differentially Private Machine Learning: Theory, Algorithms, and Applications

Title: Differentially Private Machine Learning: Theory, Algorithms, and Applications

Speaker: Kamalika Chaudhuri, University of California San Diego, USA


Date: Wednesday, September 9
Time: 18:00 – 20:00 (Israel, UTC+03:00)
      16:00 – 18:00 (UK, UTC+01:00)
      11:00 – 13:00 (EDT, UTC-04:00)
      02:00 – 04:00 (AEST, UTC+10:00) – September 10

Abstract: Differential privacy has emerged as one of the de facto standards for measuring privacy risk when performing computations on sensitive data and disseminating the results. Algorithms that guarantee differential privacy are randomized, and this randomization comes at a cost in performance, or utility. Managing the privacy-utility tradeoff becomes easier with more data. Many machine learning algorithms can be made differentially private through the judicious introduction of randomization, usually in the form of noise, into the computation. In this tutorial, we will describe the basic framework of differential privacy, key mechanisms for guaranteeing privacy, and how to find differentially private approximations to several contemporary machine learning tools: convex optimization, Bayesian methods, and deep learning.
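To make the noise-addition idea concrete, below is a minimal sketch of the Laplace mechanism, one of the classic mechanisms for guaranteeing differential privacy. The function name, the counting query, and all parameter values are illustrative choices for this page, not taken from the talk itself.

    import numpy as np

    def laplace_mechanism(true_value, sensitivity, epsilon):
        # Release true_value with epsilon-differential privacy by adding
        # Laplace noise with scale = sensitivity / epsilon.
        return true_value + np.random.laplace(loc=0.0, scale=sensitivity / epsilon)

    # Hypothetical sensitive data: one bit per individual.
    data = np.random.binomial(1, 0.3, size=1000)

    # A counting query has global sensitivity 1, since adding or removing
    # one individual changes the count by at most 1.
    noisy_count = laplace_mechanism(data.sum(), sensitivity=1.0, epsilon=0.5)
    print(f"noisy count: {noisy_count:.1f}")

A smaller epsilon gives a stronger privacy guarantee but larger noise; because the noise scale is fixed while the true count grows with the dataset, the relative error shrinks with more data, which is the sense in which managing the privacy-utility tradeoff becomes easier with more data.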

Bio: Kamalika Chaudhuri received a Bachelor of Technology degree in Computer Science and Engineering from the Indian Institute of Technology, Kanpur, in 2002, and a PhD in Computer Science from the University of California, Berkeley, in 2007. After a postdoc at the Information Theory and Applications Center at UC San Diego, she joined the CSE department at UC San Diego as an assistant professor in July 2010. She received an NSF CAREER Award in 2013. Kamalika's research interests are in the foundations of trustworthy machine learning, including privacy-preserving machine learning and learning in the presence of adversaries.