ProperData Seminar Series:

Privacy Frontiers

Fall 2023: ProperData is launching a new seminar series, Privacy Frontiers, to present the latest developments and trends in privacy and data transparency, featuring leading experts on both the technology and policy aspects of privacy.

The seminars will take place on the first Friday of every month (typically at 9:00 am PT) and will run throughout AY 2023–24. The seminar is open not only to ProperData members but also to the public. The format will be hybrid or Zoom-only. Zoom registration is required.


Friday, October 6

11:00 am – 12:00 pm PT

Leveraging Deep Learning to Understand Users’ Views about Privacy

This is a joint Computer Science/ProperData seminar.

Speaker: Nina Taft

Senior Staff Research Scientist (leads the Applied Privacy Research Group), Google

Host: Gene Tsudik (UCI)

Location: In person at UCI at Donald Bren Hall 6011 and on Zoom.

Zoom Registration: Please register here. [now closed]

Nina Taft: “Leveraging Deep Learning to Understand Users’ Views about Privacy”

Abstract: We will start with a brief overview of some of the work the Applied Privacy Research group at Google is engaged in. Then we will focus on text-analysis pipelines we’ve been developing to automatically extract privacy insights from smartphone app reviews. We designed a multi-stage methodology that leverages recent advances in NLP and LLMs to identify whether a review discusses a privacy-related topic, assign a two-level hierarchy of topic tags, summarize thematically similar privacy reviews, and assign emotion tags to each review. We’ll summarize our methodology for each of these steps and then present examples of what this analysis pipeline uncovers when applied to 600M app reviews. We share long-term trends and country-level comparisons, and uncover a surprising number of privacy-positive reviews. We will discuss how this approach to understanding user opinions about privacy can complement traditional user studies and surveys, and how it can be leveraged to provide actionable insights to third-party developers.

Bio: Nina Taft is a Principal Scientist/Director at Google, where she leads the Applied Privacy Research group. Nina received her PhD from UC Berkeley and has worked in industrial research labs since then – at SRI, Sprint Labs, Intel Berkeley Labs, and Technicolor Research before joining Google. For many years, Nina worked in the field of networking, focused on Internet traffic modeling, traffic matrix estimation, and intrusion detection. In 2017 she received the IEEE N2Women “Top 10 Women in Networking” award. In the last decade, she has been working on privacy enhancing technologies with a focus on applications of machine learning for privacy. She has been chair of the SIGCOMM, IMC, and PAM conferences, has published over 90 papers, and holds 10 patents.


Friday, November 3

9:00 am – 10:00 am PT

Security, Privacy, and Safety for AR/VR: The Next 10 Years

Speaker: Franziska Roesner

Associate Professor, Paul G. Allen School of Computer Science & Engineering, University of Washington

Host: David Choffnes (Northeastern)

Location: Virtual on Zoom.

Zoom Registration: Register here. [now closed]

Recording: Available here.

Franziska Roesner: “Security, Privacy, and Safety for AR/VR: The Next 10 Years”

Abstract: Augmented, virtual, and mixed reality technologies have reached the cusp of commercial viability. Though these technologies bring great potential benefits, they also raise new and serious computer security and privacy risks. For example, risks may arise for both AR/VR/MR input (due to the need for applications to continuously receive and process sensor data, posing privacy risks to users and bystanders) and for AR/VR/MR output (in which applications may, for instance, overlay distracting content on a user’s view of the real world). How should we design AR/VR/MR systems to mitigate these risks, enabling exciting future use cases while protecting the security, privacy, and safety of end users? I will discuss our lab’s past 10 years of research on this topic, and present challenges for the next 10 years.

Bio: Franziska (Franzi) Roesner is the Brett Helsel Associate Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where she co-directs the Security and Privacy Research Lab. Her research focuses broadly on computer security and privacy for end users of existing and emerging technologies. Her work has studied topics including online tracking and advertising, security and privacy for sensitive user groups, security and privacy in emerging augmented reality (AR) and IoT platforms, and online mis/disinformation. She is the recipient of a Consumer Reports Digital Lab Fellowship, an MIT Technology Review “Innovators Under 35” Award, an Emerging Leader Alumni Award from the University of Texas at Austin, a Google Security and Privacy Research Award, and an NSF CAREER Award. Her work has received paper awards or runners-up at USENIX Security, the IEEE Symposium on Security & Privacy, the ACM Internet Measurement Conference (IMC), and the ACM Web Conference, as well as Test of Time Awards at the USENIX Symposium on Networked Systems Design & Implementation (NSDI) and the IEEE Symposium on Security & Privacy. She serves on the USENIX Security and USENIX Enigma Steering Committees, and she previously served as part of the DARPA ISAT advisory group.


Friday, December 1

9:00 am – 10:00 am PT

Towards Transparency of the Algorithmically Mediated World

Speaker: Christo Wilson

Associate Professor, Khoury College of Computer Sciences, Northeastern University

Host: David Choffnes (Northeastern)

Location: Virtual on Zoom.

Zoom Registration: Register here. [now closed]

Recording: Available here.

Christo Wilson: “Towards Transparency of the Algorithmically Mediated World”

Abstract: In this talk, I explore how empirical studies can help us understand the algorithms that shape our lives. I present case studies investigating real-world systems for consequential harms, drawn from the domains of web search, social media, and online hiring. My findings demonstrate the power and promise of “algorithm auditing” techniques to improve transparency while also complicating prevailing narratives about algorithmic harms.

Bio: Christo Wilson is an Associate Professor in the Khoury College of Computer Sciences at Northeastern University. He is a founding member of the Cybersecurity and Privacy Institute at Northeastern and serves as Associate Dean of Undergraduate Programs in Khoury College. Professor Wilson’s research focuses on online security and privacy, with a specific interest in algorithmic auditing. His work is supported by the U.S. National Science Foundation, a Sloan Fellowship, the Mozilla Foundation, the Knight Foundation, the Russell Sage Foundation, the Democracy Fund, the Anti-Defamation League, the Data Transparency Lab, the European Commission, Google, Pymetrics, Northwestern University, Underwriters Laboratories, and Verisign Labs.


Friday, January 5

9:00 am – 10:00 am PT

Winter Recess, no seminar.


Friday, February 2

9:00 am – 10:00 am PT

Towards Regulated Security and Privacy in Emerging Computing Platforms

Speaker: Yuan Tian

Assistant Professor, Electrical and Computer Engineering, University of California, Los Angeles

Host: Athina Markopoulou (UCI)

Location: In person at UCI at ISEB 1200 and on Zoom.

Zoom Registration: Register here. [now closed]

Recording: Available here.

Yuan Tian: “Towards Regulated Security and Privacy in Emerging Computing Platforms”

Abstract: Computing is undergoing a significant shift. First, the explosive growth of the Internet of Things (IoT) enables users to interact with computing systems and physical environments in novel ways through perceptual interfaces. Second, machine learning algorithms collect vast amounts of data and make critical decisions on new computing systems. While these trends bring unprecedented functionality, they also drastically increase the number of untrusted algorithms, implementations, and interfaces, and the amount of private data they process, endangering user security and privacy. To address these security and privacy issues, regulations such as GDPR (General Data Protection Regulation) and CCPA (California Consumer Privacy Act) have gone into effect. However, a massive gap exists between the desired high-level security/privacy/ethical properties (from regulations, specifications, and users’ expectations) and low-level real implementations.

To bridge the gap, my work aims to 1) change how platform architects design secure systems, 2) assist developers by detecting security and privacy violations in implementations, and 3) build usable and scalable privacy-preserving systems. In this talk, I will present how my group designs principled solutions to ensure the security and privacy of emerging computing platforms. I will introduce two developer tools we built to detect security and privacy violations with machine-learning-augmented analysis. Using the tools, we found large numbers of GDPR violations in web plugins and security property violations in IoT messaging protocol implementations. Additionally, I will discuss our recent work on scalable privacy-preserving machine learning: the first privacy-preserving framework for modern machine learning models and data that performs all operations on GPUs.

Bio: Yuan Tian is an Assistant Professor of Electrical and Computer Engineering, Computer Science, and the Institute for Technology, Law and Policy (ITLP) at the University of California, Los Angeles. She was an Assistant Professor at the University of Virginia, and she obtained her Ph.D. from Carnegie Mellon University in 2017. Her research interests involve security and privacy and their interactions with computer systems, machine learning, and human-computer interaction. Her current research focuses on developing new computing platforms with strong security and privacy features, particularly in the Internet of Things and machine learning. Her work has real-world impact, as countermeasures and design changes have been integrated into platforms (such as Android, Chrome, Azure, and iOS) and have also informed the security recommendations of standards organizations such as the Internet Engineering Task Force (IETF). She is a recipient of the Okawa Foundation Award (2022), a Google Research Scholar Award (2021), a Facebook Research Award (2021), an NSF CAREER Award (2020), an NSF CRII Award (2019), and an Amazon AI Faculty Fellowship (2019). Her research has appeared in top-tier venues in security, machine learning, and systems. Her projects have been covered by media outlets such as IEEE Spectrum, Forbes, Fortune, Wired, and The Telegraph.


Friday, March 1

9:00 am – 10:00 am PT

No seminar.


Friday, April 5

9:00 am – 10:00 am PT

The Algorithmic Dead Hand

Speaker: Ari Waldman

Professor of Law and, by courtesy, Sociology, University of California, Irvine

Host: Athina Markopoulou (UCI)

Location: In person at UCI at ISEB 1200 and on Zoom.

Zoom Registration: Register Here. [now closed]

Ari Waldman: “The Algorithmic Dead Hand”

Abstract: Constitutional law scholars are familiar with so-called “dead hand” arguments. In short, the dead hand theory argues that current generations should not be bound or constrained by past generations; as Thomas Jefferson said in his early formulation of the argument, the dead hand of the past has no claim to our present and future. His full quote: “the earth belongs … to the living: … the dead have neither powers nor rights over it … . On similar ground it may be proved, that no society can make a perpetual constitution, or even a perpetual law. The earth belongs always to the living generation.” Originalists have often felt compelled to respond to “dead hand” arguments—as early as James Madison’s original response to Jefferson and as recently as junior scholars’ responses today—because they amount to a direct assault on the legitimacy of relying too much on the past to make decisions about the present and future.

In this project, I argue that algorithms and so-called “artificial intelligence,” a catch-all phrase that most often refers to a set of technologies in which machines are programmed and “trained” on data inputs to “learn” to reach certain conclusions, also raise dead hand concerns. Algorithms increase the influence of the past over the future: they rely on past data to train machines to do anything from writing poems (large language models) to allocating police resources in cities (“predictive policing”) to authenticating identity for government benefits (data matching). They rely on historical data to “train” machines to make decisions about people today and into the future. And they raise concerns similar to those raised by originalist approaches to interpreting constitutions. This talk will describe what I mean by the “algorithmic dead hand” and demonstrate how dead hand critiques challenge the legitimacy of using machine learning tools to make decisions about current and future generations wholesale.

Bio: Ari Ezra Waldman is professor of law and, by courtesy, professor of sociology at the University of California, Irvine. His research focuses on information economy governance and how law and technology affect marginalized populations. He earned his PhD in sociology at Columbia University, his JD at Harvard Law School, and his BA, magna cum laude, at Harvard College.


Friday, May 3

9:00 am – 10:00 am PT

Multiple Mechanisms for Polarization

Speaker: James Owen Weatherall

Professor, Department of Logic and Philosophy of Science, University of California, Irvine

Host: Athina Markopoulou (UCI)

Location: In person at UCI at ISEB 6610 and on Zoom.

Zoom Registration: Register Here.

James Owen Weatherall: “Multiple Mechanisms for Polarization”

Abstract: Recent modeling work has uncovered a number of mechanisms that are sufficient to cause belief polarization in societies. This talk will present several models of polarization and discuss ways they might be useful in thinking about real-world phenomena. It will also reflect on what it means that polarization can arise from so many different mechanisms, and how that complicates policy proposals intended to reduce polarization.

Bio: James Owen Weatherall is a Professor of Logic and Philosophy of Science at the University of California, Irvine. He is the author of three books, including most recently The Misinformation Age: How False Beliefs Spread, with Cailin O’Connor, published by Yale University Press.  


Friday, June 7

9:00 am – 10:00 am PT

Oana Goga

Speaker: Oana Goga

Chargée de Recherches (equivalent to a tenured faculty position), CNRS: Centre national de la recherche scientifique

Host: Athina Markopoulou (UCI)

Location: Virtual on Zoom.

Zoom Registration: Register Here.


Logistics

Zoom links will be sent by email to registered participants. You may also join our mailing list; see the sign-up button above.

For all other inquiries, please contact properdata@uci.edu.