Workshop on Multimodal Content Moderation (MMCM)


at the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)

Date: June 18, 2023

Venue: East 17, Vancouver Convention Center, Vancouver, Canada

WELCOME


Welcome to the 1st IEEE Workshop on Multimodal Content Moderation (MMCM) being held in conjunction with CVPR 2023!

Content moderation (CM) is a rapidly growing need in today's world, with high societal impact: automated CM systems can detect discrimination, violent acts, hate/toxicity, and much more across a variety of signals (visual, text/OCR, speech, audio, language, generated content, etc.). Leaving unsafe content on social platforms and devices, or serving it to users, can cause a variety of harmful consequences, including brand damage to institutions and public figures, erosion of trust in science and government, marginalization of minorities, geopolitical conflicts, suicidal thoughts, and more. Beyond user-generated content, content generated by powerful AI models such as DALL-E and GPT presents additional challenges to CM systems.

With the prevalence of multimedia social networking and online gaming, the problem of sensitive content detection and moderation is by nature multimodal. Moreover, content moderation is contextual and culturally multifaceted: different cultures, for example, have different conventions about gestures. This requires CM approaches to be not only multimodal, but also context-aware and culturally sensitive.

WHAT'S NEW


📢 Program announced! (Check Program)

📢 New dates for author notification and camera ready deadline. (Check Important Dates)

📢 Submission deadline extended! New deadline: March 22, 2023, 11:59:59 PM Pacific Time (Check Important Dates)

INVITED SPEAKERS & PANELISTS


Mevan Babakar

News and Information Credibility Lead, Google

Renée DiResta

Research Manager, Stanford University

Hany Farid

Professor, UC Berkeley

Tarleton Gillespie

Senior Principal Researcher, Microsoft Research

Dmitriy Karpman

CTO and Co-founder, Hive

Matt Lease

Professor, UT Austin

Mohammad Norouzi

Imagen Co-founder

Pietro Perona

Caltech, Amazon

CALL FOR PAPERS


This workshop intends to draw more visibility and interest to this challenging field, and to establish a platform that fosters in-depth idea exchange and collaboration. Authors are invited to submit original and innovative papers. We aim for a broad scope; topics of interest include but are not limited to:

  • Multimodal content moderation in image, video, audio/speech, and text;
  • Context-aware content moderation;
  • Datasets/benchmarks/metrics for content moderation;
  • Annotations for content moderation with ambiguous policies, perspectivism, noisy or disagreeing labels;
  • Content moderation for synthetic/generated data (image, video, audio, text); utilizing synthetic datasets;
  • Dealing with limited data for content moderation;
  • Continual & adversarial learning in content moderation services;
  • Explainability and interpretability of models;
  • Challenges of at-scale real-time content moderation needs vs. human-in-the-loop moderation;
  • Detecting misinformation;
  • Detecting/mitigating biases in content moderation;
  • Analyses of failures in content moderation.

Submission Link: https://cmt3.research.microsoft.com/MMCM2023/Submission/Index

Authors are required to submit full papers by the paper submission deadline. These are hard deadlines due to the tight timeline; no extensions will be given. Please note that due to the tight timeline to have accepted papers included in the CVPR proceedings, no supplemental materials or rebuttal will be accepted.

Papers are limited to eight pages, including figures and tables, in the CVPR style. Additional pages containing only cited references are allowed. Papers with more than eight pages (excluding references) or violating formatting specifications will be rejected without review. For more information on the submission instructions, templates, and policies (double-blind review, dual submissions, plagiarism, etc.), please consult the CVPR 2023 Author Guidelines webpage. Please abide by CVPR policies regarding conflict of interest, plagiarism, double-blind review, dual submissions, and attendance.

Accepted papers will be included in the CVPR proceedings, on IEEE Xplore, and on the CVF website. Authors will be required to transfer copyright to the IEEE for any papers published in the conference proceedings. At least one author is expected to attend the workshop and present the paper.

IMPORTANT DATES


Event | Date
Paper Submission Deadline | March 22, 2023, 11:59:59 PM Pacific Time (extended from March 20, 2023)
Final Decisions to Authors | April 10, 2023, 11:59:59 PM Pacific Time (previously April 2, 2023)
Camera Ready Deadline | April 14, 2023, 11:59:59 PM Pacific Time (previously April 8, 2023)


ORGANIZERS


Mei Chen

Principal Research Manager
Responsible & Open AI Research, Microsoft

Cristian Canton

Research Manager
Responsible AI, Meta

Davide Modolo

Research Manager
AWS AI Labs, Amazon

Maarten Sap

Assistant Professor
LTI, Carnegie Mellon University

Maria Zontak

Sr. Applied Scientist
Alexa Sensitive Content Intelligence, Amazon

Chris Bregler

Director / Principal Scientist
Google AI

PROGRAM


Time (PST) | Event and Title | Speaker(s)
08:30 - 08:45 | Opening Remarks and Logistics for the Day | Mei Chen, Microsoft
08:45 - 09:15 | Talk: Red teaming Generative AI Systems | Lama Ahmad, OpenAI; Pamela Mishkin, OpenAI
09:15 - 09:45 | Talk: Fact Checking 101 | Mevan Babakar, Google
09:45 - 10:15 | Talk: Content Moderation: Two Histories and Three Emerging Problems | Tarleton Gillespie, Microsoft
10:15 - 10:30 | Coffee Break
10:30 - 11:00 | Talk: Bias, Causality and Generative AI | Pietro Perona, Amazon
11:00 - 11:45 | Panel Discussion: Policy, Social Impact, Trust & Safety | Lama Ahmad, OpenAI; Pamela Mishkin, OpenAI; Mevan Babakar, Google; Renée DiResta, Stanford University; Tarleton Gillespie, Microsoft; Pietro Perona, Amazon
11:45 - 13:00 | Lunch Break
13:00 - 13:30 | Talk: Generative Media Unleashed: Advancing Media Generation with Safety in Mind | Mohammad Norouzi, Stealth Startup
13:30 - 14:00 | Talk: Data Collection for Content Moderation | Dmitriy Karpman, Hive AI
14:00 - 14:15 | Accepted Paper: CrisisHateMM: Multimodal Analysis of Directed and Undirected Hate Speech in Text-Embedded Images from Russia-Ukraine Conflict | Surendrabikram Thapa
14:15 - 14:30 | Accepted Paper: Prioritised Moderation for Online Advertising | Phanideep Gampa
14:30 - 15:00 | Talk: Understanding Health Risks for Content Moderators and Opportunities to Help | Matt Lease, UT Austin
15:00 - 15:30 | Coffee Break
15:30 - 16:00 | Talk: Building end-to-end content moderation pipelines in the real world | Todor Markov, OpenAI
16:00 - 16:30 | Talk: Disrupting Disinformation | Hany Farid, UC Berkeley
16:30 - 16:40 | Work-in-Progress Spotlight: Safety and Fairness for Content Moderation in Generative Models | Sarah Laszlo, Google
16:40 - 17:25 | Panel Discussion: Technology & Approach | Mohammad Norouzi, Stealth Startup; Dmitriy Karpman, Hive AI; Matt Lease, UT Austin; Todor Markov, OpenAI; Hany Farid, UC Berkeley
17:25 - 17:30 | Closing Remarks | Mei Chen, Microsoft


Red teaming Generative AI Systems

Speakers: Lama Ahmad, OpenAI and Pamela Mishkin, OpenAI

Abstract: As generative AI systems continue to evolve, it is crucial to rigorously evaluate their robustness, safety, and potential for misuse. In this talk, we will explore the application of red teaming methodologies to assess the vulnerabilities and limitations of these cutting-edge technologies. By simulating adversarial attacks and examining system responses, we aim to uncover latent risks and propose effective countermeasures to ensure the responsible deployment of generative AI systems in new domains and modalities.

Fact Checking 101

Speakers: Mevan Babakar, Google

Abstract: In this talk Mevan will be taking a deep dive into the world of fact checks and fact checking. Together we'll be exploring the real-world context: How many fact checkers are out there? How are they organised? How do they fit into the information ecosystem? We'll then look at how fact checking actually works on the ground: Is it an effective intervention? Does it change minds? How are fact checks actually made? And we'll end on the present-day challenges posed by specific examples of mis/disinformation, GenAI, and global data infrastructure. Together we'll explore the opportunities and limitations, and how these will affect the future of information credibility around the world.

Content Moderation: Two Histories and Three Emerging Problems

Speakers: Tarleton Gillespie, Microsoft

Abstract: The technical challenges of identifying toxic content are so immense, they can often eclipse the fact that identification is just one element of ‘content moderation’ as a much broader sociotechnical practice. Considering the broader historical context of content moderation helps to explain why moderation is so difficult, identify why good technical solutions don’t always make good social or political ones, and reframe what problems we’re even trying to solve. I will close by highlighting three problems that I hope will open provocative and challenging questions for those working on moderation as a technical problem.

Generative Media Unleashed: Advancing Media Generation with Safety in Mind

Speakers: Mohammad Norouzi, Stealth Startup

Abstract: This talk explores the exciting opportunities in diffusion models for image, video, and 3D generation. We'll dive into various generative media applications that will transform and expand the creative economy and help humans become more creative. At the same time, I will emphasize the vital importance of trust and safety to ensure responsible and ethical utilization of these powerful technologies. I'll put forward a number of proposals for addressing safety, specifically in the generative media space.

Data Collection for Content Moderation

Speakers: Dmitriy Karpman, Hive AI

Abstract: Data collection and curation is an integral, yet often overlooked, component of building content moderation systems. In this presentation we'll discuss optimizing data annotation, the effects of data quality and quantity on overall model performance, techniques for identifying and alleviating biases in models, and appropriate applications of synthetic data.

CrisisHateMM: Multimodal Analysis of Directed and Undirected Hate Speech in Text-Embedded Images from Russia-Ukraine Conflict

Speakers: Authors

Abstract: Text-embedded images are frequently used on social media to convey opinions and emotions, but they can also be a medium for disseminating hate speech, propaganda, and extremist ideologies. During the Russia-Ukraine war, both sides used text-embedded images extensively to spread propaganda and hate speech. To aid in moderating such content, this paper introduces CrisisHateMM, a novel multimodal dataset of over 4,700 text-embedded images from the Russia-Ukraine conflict, annotated for hate and non-hate speech. The hate speech is annotated for directed and undirected hate speech, with directed hate speech further annotated for individual, community, and organizational targets. We benchmark the dataset using unimodal and multimodal algorithms, providing insights into the effectiveness of different approaches for detecting hate speech in text-embedded images. Our results show that multimodal approaches outperform unimodal approaches in detecting hate speech, highlighting the importance of combining visual and textual features. This work provides a valuable resource for researchers and practitioners in automated content moderation and social media analysis. The CrisisHateMM dataset and codes are made publicly available here.
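The benchmark contrasts unimodal and multimodal approaches. As a rough illustration of the late-fusion idea only (not the paper's actual models), the sketch below concatenates hypothetical precomputed text and image embeddings and trains a single classifier; the embedding dimensions, random features, and logistic-regression head are all stand-ins for demonstration.

```python
# A minimal late-fusion sketch (illustrative only): the embeddings, dimensions,
# and logistic-regression head are stand-ins, not the paper's actual models.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)

# Stand-ins for precomputed embeddings of text-embedded images: in practice
# these would come from a text encoder (over OCR'd text) and an image encoder.
n_samples, text_dim, image_dim = 1000, 256, 512
text_emb = rng.normal(size=(n_samples, text_dim))
image_emb = rng.normal(size=(n_samples, image_dim))
labels = rng.integers(0, 2, size=n_samples)  # 1 = hate, 0 = non-hate

# Late fusion: concatenate the unimodal embeddings and train one classifier.
fused = np.concatenate([text_emb, image_emb], axis=1)
X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, test_size=0.2, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("fused F1:", f1_score(y_te, clf.predict(X_te)))
```

The same split can be reused to train text-only and image-only baselines for the unimodal comparison the abstract describes.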

Prioritised Moderation for Online Advertising

Speakers: Authors

Abstract: The online advertising industry aims to build a preference for a product over its competitors by making consumers aware of the product at internet scale. However, ads that violate applicable laws and location-specific regulations can have a serious business impact with legal implications. At the same time, customers are at risk of being exposed to egregious ads, resulting in a bad user experience. Due to the limited and costly human bandwidth, moderating ads at industry scale is a challenging task. At Amazon Advertising, we typically deal with ad moderation workflows where the ad distributions are skewed toward non-defective ads. It is desirable to increase the review time that human moderators spend on moderating genuinely defective ads. Hence, prioritisation of ads deemed defective for human moderation is crucial for effective utilisation of human bandwidth in the ad moderation workflow. To incorporate business knowledge and to better deal with possible overlaps between policies, we formulate this as a policy gradient ranking algorithm with custom scalar rewards. Our extensive experiments demonstrate that these techniques yield a substantial gain in the number of defective ads caught compared with various tabular classification algorithms, resulting in effective utilisation of human moderation bandwidth.
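To illustrate the general shape of reward-driven prioritisation (not the paper's actual formulation), the sketch below trains a linear softmax policy with a REINFORCE-style update, using a scalar reward of 1 whenever the ad surfaced for review turns out to be defective; the synthetic ad features, the reward design, and the linear policy are all assumptions made for demonstration.

```python
# A REINFORCE-style prioritisation sketch (illustrative only): the synthetic ad
# features, linear softmax policy, and 0/1 reward are assumptions, not the
# paper's actual formulation.
import numpy as np

rng = np.random.default_rng(0)
n_ads, n_features = 5000, 20
X = rng.normal(size=(n_ads, n_features))
w_true = rng.normal(size=n_features)
defective = (X @ w_true) > 6.0  # skewed: most ads are non-defective

w = np.zeros(n_features)  # policy parameters (scoring weights)
lr, batch = 0.05, 64

for step in range(2000):
    idx = rng.choice(n_ads, size=batch, replace=False)
    scores = X[idx] @ w
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    pick = rng.choice(batch, p=probs)  # sample one ad to send for human review
    reward = 1.0 if defective[idx][pick] else 0.0
    # REINFORCE update: reward * grad log pi(pick) for a softmax policy.
    w += lr * reward * (X[idx][pick] - probs @ X[idx])

ranked = np.argsort(-(X @ w))  # rank all ads by learned priority score
print("defective ads in top 100:", int(defective[ranked[:100]].sum()))
```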

Understanding Health Risks for Content Moderators and Opportunities to Help

Speakers: Matt Lease, UT Austin

Abstract: Social media platforms must detect a wide variety of unacceptable user-generated images and videos. Such detection is difficult to automate due to high accuracy requirements, continually changing content, and nuanced rules for what is and is not acceptable. Consequently, platforms rely in practice on a vast and largely invisible workforce of human moderators to filter such content when automated detection falls short. However, mounting evidence suggests that exposure to disturbing content can cause lasting psychological and emotional damage to moderators. Given this, what can be done to help reduce such impacts?
My talk will discuss two works in this vein. The first involves the design of blurring interfaces for reducing moderator exposure to disturbing content whilst preserving the ability to quickly and accurately flag it. We find that interactive blurring can reduce psychological impacts on workers without sacrificing moderation accuracy or speed (see demo at http://ir.ischool.utexas.edu/CM/demo/). Following this, I describe a broader analysis of the problem space, conducted in partnership with clinical psychologists responsible for wellness measurement and intervention in commercial moderation settings. This analysis spans both social and technological approaches, reviewing current best practices and identifying important directions for future work, as well as the need for greater academic-industry collaboration.

Building end-to-end content moderation pipelines in the real world

Speakers: Todor Markov, OpenAI

Abstract: In this talk, we explore a holistic approach to building a natural language classification system tailored for content moderation in real-world scenarios. We discuss the importance of crafting well-defined content taxonomies and labeling guidelines to ensure data quality, and detail the active learning pipeline developed to handle rare events effectively. We also examine various techniques used to enhance the model's robustness and prevent overfitting. We show that this approach generalizes to diverse content taxonomies and that the resulting classifiers can outperform standard off-the-shelf models in the context of content moderation.
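As a rough sketch of the rare-event active-learning idea described above (not OpenAI's actual pipeline), the following loop repeatedly trains a classifier on a small labeled set and pulls the most uncertain items from a large unlabeled pool for annotation; the synthetic embeddings, binary label, uncertainty-sampling strategy, and batch sizes are all illustrative assumptions.

```python
# An uncertainty-sampling active-learning sketch (illustrative only): the
# synthetic embeddings, rare binary label, and batch sizes are assumptions,
# not OpenAI's actual pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_pool, dim = 10000, 64
pool = rng.normal(size=(n_pool, dim))      # stand-in text embeddings
violating = pool[:, 0] > 1.5               # rare "policy-violating" class

labeled = list(rng.choice(n_pool, size=200, replace=False))  # small seed set
unlabeled = [i for i in range(n_pool) if i not in set(labeled)]

for rnd in range(5):
    clf = LogisticRegression(max_iter=1000, class_weight="balanced")
    clf.fit(pool[labeled], violating[labeled])
    # Score the unlabeled pool and send the most uncertain items to annotators.
    probs = clf.predict_proba(pool[unlabeled])[:, 1]
    query = [unlabeled[i] for i in np.argsort(np.abs(probs - 0.5))[:50]]
    labeled += query
    unlabeled = [i for i in unlabeled if i not in set(query)]
    print(f"round {rnd}: labeled={len(labeled)}, "
          f"positives found={int(violating[labeled].sum())}")
```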

Disrupting Disinformation

Speakers: Hany Farid, UC Berkeley

Abstract: We are awash in disinformation consisting of lies and conspiracies, with real-world implications ranging from horrific human rights violations to threats to our democracy and global public health. Although the internet is vast, the peddlers of disinformation appear to be more localized. I will describe a domain-level analysis for predicting whether a domain is complicit in distributing or amplifying disinformation. This process analyzes the underlying domain content and the hyperlinking connectivity between domains to predict whether a domain is peddling disinformation. These basic insights extend to an analysis of disinformation on Telegram and Twitter.
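As a toy illustration of combining domain content with hyperlink connectivity (not the speaker's actual method or features), the sketch below builds bag-of-words features from per-domain text, adds simple in/out-degree features from a link graph, and fits a classifier; the example domains, snippets, edges, and labels are entirely made up.

```python
# A toy domain-classification sketch (illustrative only): the domains, text
# snippets, link edges, labels, and degree-based graph features are made up,
# not the talk's actual features or model.
import numpy as np
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

domains = ["siteA.com", "siteB.com", "siteC.com", "siteD.com"]
texts = ["miracle cure suppressed by elites", "city council budget report",
         "secret plot behind the vaccine", "local weather and traffic updates"]
edges = [("siteA.com", "siteC.com"), ("siteC.com", "siteA.com"),
         ("siteB.com", "siteD.com")]
labels = np.array([1, 0, 1, 0])  # 1 = flagged as disinformation

# Content features: bag-of-words over each domain's text.
content = TfidfVectorizer().fit_transform(texts).toarray()

# Connectivity features: how many domains each site links to / is linked from.
g = nx.DiGraph(edges)
link_feats = np.array([[g.out_degree(d), g.in_degree(d)] for d in domains])

X = np.hstack([content, link_feats])
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(dict(zip(domains, clf.predict(X))))
```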

Safety and Fairness for Content Moderation in Generative Models

Speakers: Sarah Laszlo, Google

Abstract: With significant advances in generative AI, new technologies are rapidly being deployed with generative components. Generative models are typically trained on large datasets, resulting in model behaviors that can mimic the worst of the content in the training data. Responsible deployment of generative technologies requires content moderation strategies, such as safety input and output filters. Here, we provide a theoretical framework for conceptualizing responsible content moderation of text-to-image generative technologies, including a demonstration of how to empirically measure the constructs we enumerate. We define and distinguish the concepts of safety, fairness, and metric equity, and enumerate example harms that can arise in each domain. We then provide a demonstration of how the defined harms can be quantified. We conclude with a summary of how the style of harms quantification we demonstrate enables data-driven content moderation decisions.

INVITED SPEAKERS & PANELISTS BIO


Lama Ahmad leads the Researcher Access Program at OpenAI, which facilitates collaborative research on key areas related to the responsible deployment of AI and mitigating risks associated with such systems. Most recently, she co-led the external red teaming effort for the DALL-E 2 deployment. A member of the Deployment Planning team at OpenAI, Lama works on conducting analyses to prepare for safe and successful deployment of increasingly advanced AI.
Mevan is the News and Information Credibility Lead at Google, working to tackle misinformation globally and to support journalists and publishers around the world. Previously she was deputy CEO of Full Fact, the UK's independent fact checking charity, where she worked on the problems of mis/disinformation for seven years and founded Full Fact's automated fact checking team. Mevan was previously Interim CEO at Democracy Club, which empowers voters and everyday democracy in the UK. Mevan also sat on the board of the International Fact Checking Network, which oversees 300 fact checking organizations worldwide.
Renée DiResta is the Research Manager at the Stanford Internet Observatory. She investigates the spread of malign narratives across social networks and assists policymakers in understanding and responding to the problem. She has advised Congress, the State Department, and other academic, civic, and business organizations, and has studied disinformation and computational propaganda in the context of pseudoscience conspiracies, terrorism, and state-sponsored information warfare.
Hany Farid is a professor at the University of California, Berkeley with a joint appointment in electrical engineering & computer sciences and the School of Information. He is also a member of the Berkeley Artificial Intelligence Lab, Berkeley Institute for Data Science, Center for Innovation in Vision and Optics, Development Engineering, Vision Science Program, and is a senior faculty advisor for the Center for Long-Term Cybersecurity. His research focuses on digital forensics, forensic science, misinformation, image analysis, and human perception.
He received his undergraduate degree in computer science and applied mathematics from the University of Rochester in 1989, his M.S. in computer science from SUNY Albany, and his Ph.D. in computer science from the University of Pennsylvania in 1997. Following a two-year post-doctoral fellowship in brain and cognitive sciences at MIT, he joined the faculty at Dartmouth College in 1999 where he remained until 2019.
He is the recipient of an Alfred P. Sloan Fellowship and a John Simon Guggenheim Fellowship and is a fellow of the National Academy of Inventors.
Tarleton Gillespie is a Senior Principal Researcher at Microsoft Research New England, part of the Social Media Collective, Microsoft Research’s team of sociologists, anthropologists, and communication & media scholars studying the impact of sociotechnical systems on social and political life. Tarleton also retains an affiliated Associate Professor position with Cornell University, where he has been on the faculty for nearly two decades.
Tarleton's current work investigates how social media platforms and other algorithmic information systems shape public discourse. His latest book, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media (Yale University Press, 2018), examines how the content guidelines imposed by social media platforms set the terms for what counts as 'appropriate' user contributions, and asks how this private governance of cultural values has broader implications for freedom of expression and the character of public discourse. The book was a finalist for the 2019 PROSE Award from the Association of American Publishers (AAP).
Dmitriy Karpman is currently the Co-Founder and CTO at Hive. Dmitriy has also previously worked as a software engineering intern at Google and as the CTO and Co-Founder of Kiwi. Dmitriy has a wealth of experience in research, having worked as a research specialist at the Center for Geospatial Intelligence and as a research assistant at Washington University in St. Louis. Dmitriy received their Doctor of Philosophy in Computer Science from Stanford University. Dmitriy also holds a Bachelor of Science from the University of Missouri-Columbia in Computer Science, Mathematics, and Statistics.
Matthew Lease is an ACM Distinguished Member, an AAAI Senior Member, and an Amazon Scholar. His research integrates AI and human-computer interaction (HCI) techniques across the fields of crowdsourcing and human computation, information retrieval, and natural language processing. At the University of Texas at Austin, Lease is a Professor in the School of Information and a faculty founder and leader of Good Systems, a $20M, 8-year university Grand Challenge to develop responsible AI technologies. Lease received Early Career awards from DARPA, NSF, and IMLS, with paper awards at CIST 2022, ECIR 2019, and HCOMP 2016. He has served on the AAAI Human Computation (HCOMP) steering committee since 2017.
Pamela Mishkin is interested in how to make language models safe and fair, from both a technical and a policy perspective. Previously she led product management at The Whistle, a small start-up building tech tools for international human rights groups. Before that, she researched economic policy at the Federal Reserve Bank of New York and worked with the Department for Digital, Culture, Media and Sport in the UK on online advertising policy. She holds a BA in Computer Science and Math from Williams College and an MPhil in Technology Policy from the University of Cambridge (Herchel Smith Fellow).
Mohammad Norouzi is the co-founder of Imagen, a text-to-image diffusion model with an unprecedented degree of photorealism and a deep level of language understanding. He was a staff research scientist at Google Brain in Toronto. He is interested in developing simple and efficient machine learning algorithms that help solve challenging problems across a broad range of application domains including natural language processing and computer vision.
He joined the Google Brain team in Mountain View in January 2016 and moved to Toronto in January 2018. He completed his PhD in computer science at the University of Toronto in December 2015. His advisor was David Fleet, and he was supported by a Google PhD Fellowship in machine learning. His PhD thesis focused on scalable similarity search. He is from Iran, where he completed his undergraduate studies at Sharif University of Technology.
Todor is a Machine Learning Researcher at OpenAI on the Applied AI team. He has worked on fine-tuning and content moderation for the OpenAI API. He also previously worked on multi-agent deep reinforcement learning for hide-and-seek at OpenAI. He completed his B.S. in Symbolic Systems and M.S. in Statistics at Stanford University.
Professor Perona is currently interested in visual recognition, more specifically visual categorization. He is studying how machines can learn to recognize frogs, cars, faces and trees with minimal human supervision, and how machines can learn from human experts. His project 'Visipedia' has produced two smart device apps (iNaturalist and Merlin Bird ID) that anyone can use to recognize the species of plants and animals from a photograph.
In collaboration with Professors Anderson and Dickinson, Professor Perona is building vision systems and statistical techniques for measuring actions and activities in fruit flies and mice. This enables geneticists and neuroethologists to investigate the relationship between genes, brains, and behavior. Professor Perona is also interested in studying how humans perform visual tasks, such as searching and recognizing image content. One of his recent projects studies how to harness the visual ability of thousands of people on the web.

PROGRAM COMMITTEE


  • Christopher Clarke, PhD student, University of Michigan
  • Gaurav Mittal, Senior Researcher, Microsoft
  • J.P. Lewis, Staff Research Scientist, Google Research
  • Jay Patravali, Data & Applied Scientist II, Microsoft
  • Jialin Yuan, PhD Student, Oregon State University
  • Jiarui Cai, Applied Scientist, AWS AI Labs
  • Lan Wang, PhD Student, Michigan State University
  • Mahmoud Khademi, Researcher 2, Microsoft
  • Mamshad Nayeem Rizve, PhD Student, University of Central Florida
  • Matthew Hall, Principal Applied Scientist, Microsoft
  • Reid Pryzant, Senior Researcher, Microsoft
  • Rishi Madhok, Senior Applied Science Manager, Microsoft
  • Sandra Sajeev, Data Scientist 2, Microsoft
  • Sarah Laszlo, Staff Research Scientist, Google Research
  • Satarupa Guha, Applied Scientist II, Microsoft
  • Simon Baumgartner, Software Engineer, Google Research
  • Soumik Mandal, Applied Scientist, Amazon
  • Tobias Rohde, Applied Scientist II, Amazon
  • Xuhui Zhou, PhD student, Carnegie Mellon University
  • Ye Yu, Senior Software Engineer, Microsoft
  • Zhen Gao, Applied Scientist II, Amazon

Contact


If you have any questions, please feel free to reach out to us below.