What are bridging systems?
Bridging systems increase mutual understanding and trust across divides, creating space for productive conflict, deliberation, or cooperation.
Our goal in this working paper is to articulate a research and practice direction around bridging systems. You can read the full paper here. The abstract is below.
Divisiveness appears to be increasing in much of the world, leading to concern about political violence and a decreasing capacity to collaboratively address large-scale societal challenges. In this working paper we aim to articulate an interdisciplinary research and practice area focused around what we call bridging systems: systems which increase mutual understanding and trust across divides, creating space for productive conflict, deliberation, or cooperation. We give examples of bridging systems across three domains: recommender systems on social media, software for conducting civic forums, and human-facilitated group deliberation. We argue that these examples can be more meaningfully understood as processes for attention-allocation (as opposed to “content distribution” or “amplification”), and develop a corresponding framework to explore similarities — and opportunities for bridging — across these seemingly disparate domains. We focus particularly on the potential of bridging-based ranking to bring the benefits of offline bridging into spaces which are already governed by algorithms. Throughout, we suggest research directions that could improve our capacity to incorporate bridging into a world increasingly mediated by algorithms and artificial intelligence.
A final version of the paper will be published with the Knight First Amendment Institute at Columbia University, following its symposium Optimizing for What? Algorithmic Amplification and Society.
You might also be interested in the earlier policy paper on bridging-based ranking.
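To give a flavour of the idea, here is a minimal, hypothetical sketch of how bridging-based ranking differs from engagement-based ranking. All of the data, group labels, and the scoring rule below are illustrative assumptions, not the method described in either paper: the sketch simply scores an item by its *minimum* approval across two groups, so an item must resonate across the divide to rank highly.

```python
# Toy contrast between engagement-based and bridging-based ranking.
# The data, the two groups "A" and "B", and the min-based scoring rule
# are all hypothetical, chosen only to illustrate the concept.

def engagement_score(approvals):
    """Rank by total approvals, regardless of who they come from."""
    return sum(approvals.values())

def bridging_score(approvals):
    """Rank by the minimum approval across groups, so an item must
    receive support from every group to score highly."""
    return min(approvals.values())

# Approvals per item, split by two hypothetical groups A and B.
items = {
    "partisan_post": {"A": 90, "B": 2},   # popular with one group only
    "bridging_post": {"A": 40, "B": 35},  # moderately popular with both
}

top_by_engagement = max(items, key=lambda i: engagement_score(items[i]))
top_by_bridging = max(items, key=lambda i: bridging_score(items[i]))

print(top_by_engagement)  # partisan_post wins on raw engagement
print(top_by_bridging)    # bridging_post wins under the bridging rule
```

Real systems are far more involved (for example, modelling latent viewpoint dimensions rather than fixed groups), but the sketch captures the core shift: allocating attention to what earns approval across divides rather than to what generates the most engagement overall.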
- Washington Post: Social media can be polarizing. A new type of algorithm aims to change that. by Will Oremus (11 Jan 2023)
- Berkman Klein Center: bridging systems; border tech; meme power (12 Jan 2023)
- King’s College London: New approach to social media algorithms could counteract destructive polarisation (19 Jan 2023)
Aviv Ovadya is an affiliate at the Berkman Klein Center for Internet & Society at Harvard University (at the Institute for Rebooting Social Media), and a visiting scholar at the Leverhulme Centre for the Future of Intelligence at Cambridge University. This work began while he was a Technology and Public Purpose Fellow at the Harvard Kennedy School’s Belfer Center. He can be found at his website, as @metaviv on Twitter, on Mastodon, and via his newsletter.
Luke Thorburn is a researcher in the UKRI Centre for Doctoral Training in Safe and Trusted AI at King’s College London. He also co-authors the Understanding Recommenders project for the Center for Human-Compatible AI at the University of California, Berkeley and has worked with the newDemocracy Foundation on technology for convening deliberative mini-publics. He can be found at his website, as @LukeThorburn_ on Twitter, and on Mastodon.
We are putting together a working group to expand the paper and the broader work, adding more open problems and deeper explorations across disciplines. Please complete this form if you would be interested in contributing, or would simply like to be kept in the loop on further developments.
Any enquiries related to this work should be directed to the authors, whose email addresses are listed in the paper.