Platforms Can Optimize for Metrics Beyond Engagement
Some of the pushback comes from the fact that Meta is putting money toward the project. Although no external researchers are being paid, the University of Toronto has contracted with Meta to manage the university-based parts of the collaboration. The project has significant administrative and engineering costs, in part because we decided to protect research integrity by writing key parts of the code that Meta will run outside the company. This funding arrangement may have been more trouble than it was worth, but there's also no reason researchers should have to scrape together pennies or spend taxpayer money when working with the largest companies in the world to develop socially beneficial technology. In the future, third-party funders could support the academic and civil society side of platform research collaborations, as they sometimes have.
The problem with instinctive distrust of platforms is not that platforms are above criticism, but that blanket distrust blocks some of the most valuable work that can be done to make these systems less harmful, more beneficial, and more open. Many observers are placing their hopes in transparency, especially transparency required by law. The recently passed EU Digital Services Act requires platforms to make data available to qualified researchers, and a number of similar policy proposals have been introduced in the US Congress. Yet our work necessarily goes far beyond “data access.”
In our view, only an experiment that intervenes on a live platform can test the hypothesis that recommender systems can be oriented toward long-term positive outcomes, and only such an experiment can produce shareable technology for doing so. More than that, it’s unlikely that law alone can compel a company to engage in good faith on a complex project like this one; designing the core experiment took over a year and wouldn’t have been possible without the expertise of the Meta engineers who work with the platform’s technology daily. In any case, attempts to pass American laws ensuring researcher access to data have, so far, gone nowhere.
Yet collaborative experiments with public results are currently disincentivized. The answer isn’t to do technosocial research in secret—or worse, not at all—but to do it to higher ethical standards. Our experiment is being overseen by the University of Toronto’s human subjects research review board (IRB), which is recognized by all the other universities involved as meeting their ethics requirements. All of the users in our study will have given informed consent to participate, and will be paid for their time. We were happy to find champions within Meta who believe in open research.
This level of cooperation requires navigating complex expectations about what information can be shared, what should be shared, and what won’t be. We designed a novel approach to resolving disagreements about confidentiality. We received contractual guarantees that our research will result in a scientific publication meeting peer-review standards, and that the publication can’t be altered or held up for any reason other than legitimate privacy and confidentiality concerns. We also negotiated the freedom to talk publicly about our collaboration, and, in the event the project is halted, the freedom to disclose the reasons why. We’re pretty sure nobody has seen an agreement like this before in an academic-industry collaboration. It took time to design and negotiate this new way of doing research.