Date & Time: 10th June 2020 at 15:15 UTC+0

Title: Privacy and security threats in social network ad targeting and delivery

Abstract: The enormous financial success of online advertising platforms is partially due to the precise targeting and delivery features they offer. Recently, such platforms have been criticized for allowing advertisers to discriminate against users belonging to sensitive groups, e.g., to exclude users of a certain race or gender from receiving their ads. In this talk, I discuss two threads of work in which we develop measurement methodologies in order to understand the extent of discrimination in ads on online platforms.

First, I examine ad targeting — the advertisers’ choices about whom they wish to bid on. Most platforms now allow advertisers to target users directly by uploading their personally identifiable information (PII), but it remains unclear which sources of PII ad platforms use for targeting. Focusing on Facebook, I first develop a methodology that uses Facebook’s aggregate statistics to determine whether a given piece of PII matches an active account. I then demonstrate that, even with all privacy settings enabled, phone numbers and email addresses added for security purposes (e.g., two-factor authentication), those provided to the Facebook Messenger app, and those included in friends’ uploaded contact databases are all used by Facebook to allow advertisers to target individual users.
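The core idea behind such a methodology — inferring whether a PII record matches an active account using only coarse aggregate statistics — can be sketched as follows. This is a minimal illustration, not the actual study: `MockAdPlatform` and its `size_estimate()` API are hypothetical stand-ins for a real advertiser interface that rounds audience sizes before reporting them.

```python
# Sketch: infer whether a candidate PII record matches an active account,
# given only rounded audience-size estimates. All names and data here are
# invented for illustration.

class MockAdPlatform:
    def __init__(self, active_accounts):
        self.active = set(active_accounts)  # PII records that match accounts

    def size_estimate(self, audience):
        """Return the matched-audience size rounded down to the nearest 10,
        mimicking platforms that only expose coarse statistics."""
        matched = sum(1 for record in audience if record in self.active)
        return (matched // 10) * 10

def pii_matches(platform, candidate, padding):
    """Test a candidate record by padding the audience with records known
    to match, so that one additional match crosses a rounding threshold."""
    base = platform.size_estimate(padding)
    with_candidate = platform.size_estimate(padding + [candidate])
    return with_candidate > base

accounts = [f"user{i}@example.com" for i in range(50)]
platform = MockAdPlatform(accounts)
padding = accounts[:9]  # 9 known matches: one more crosses the 10-boundary

print(pii_matches(platform, "user20@example.com", padding))  # True (has account)
print(pii_matches(platform, "nobody@example.com", padding))  # False (no account)
```

The trick is that a single binary fact (match or no match) becomes observable once the audience is padded to sit exactly at a rounding boundary.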

Second, I examine ad delivery — the platform’s choices about who should see an ad. I first develop a measurement methodology, built on Facebook’s advertiser interface, that isolates the influence of Facebook’s own delivery choices. I then demonstrate that ad delivery can be significantly skewed on Facebook, due to the platform’s own predictions about the “relevance” of ads to different groups of users. I show significant skew in delivery along gender and racial lines for “real” ads for employment and housing opportunities, despite neutral targeting parameters. These findings reveal previously unknown mechanisms that can lead to potentially discriminatory ad delivery, even when advertisers set their targeting parameters to be highly inclusive.
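To make the notion of “significantly skewed” delivery concrete, a simple way to test for skew along a binary attribute is to check whether the observed delivery fraction differs from 0.5 beyond sampling noise. This is a generic statistical sketch with invented impression counts, not the study’s actual methodology.

```python
# Sketch: test whether an ad's delivery is skewed along a binary attribute
# (e.g., gender), using a normal-approximation confidence interval.
import math

def delivery_skew(count_a, count_b, z=1.96):
    """Fraction of impressions delivered to group A, a 95% confidence
    interval, and whether the interval excludes 0.5 (equal delivery)."""
    n = count_a + count_b
    p = count_a / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    lo, hi = p - half_width, p + half_width
    return p, (lo, hi), not (lo <= 0.5 <= hi)

# Hypothetical counts for an ad with neutral targeting:
# 7,200 impressions to group A, 4,800 to group B.
p, ci, significant = delivery_skew(7200, 4800)
print(f"fraction to group A: {p:.2f}, "
      f"95% CI ({ci[0]:.3f}, {ci[1]:.3f}), skewed: {significant}")
```

With neutral targeting, any such deviation must come from the platform’s delivery decisions rather than from the advertiser’s choices.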

Bio: Alan Mislove is a Professor, and Associate Dean and Director of Undergraduate Programs, at the Khoury College of Computer Sciences at Northeastern University, which he joined in 2009. He received his B.A., M.S., and Ph.D. in computer science from Rice University in 2002, 2005, and 2009, respectively. Prof. Mislove’s research concerns distributed systems and networks, with a focus on using social networks to enhance the security, privacy, and efficiency of newly emerging systems. His work comprises over 50 peer-reviewed papers, has received over 11,000 citations, and has been supported by over $5M in grants from government agencies and industrial partners. He is a recipient of an NSF CAREER Award (2011), a Google Faculty Award (2012), a Facebook Secure the Internet grant (2018), the ACM SIGCOMM Test of Time Award (2017), the IETF Applied Networking Research Prize (2018, 2019), the USENIX Security Distinguished Paper Award (2017), the NDSS Distinguished Paper Award (2018), and the IEEE Cybersecurity Award for Innovation (2017). His work has been covered by the Wall Street Journal, the New York Times, and the CBS Evening News.