Trust Through Trickery: Platform Design and the Online Harassment Facing Journalists
Around the world, every day, journalists face harassment and intimidation online. This harassment includes impersonation and doxxing, as well as state-level censorship and intimidation, and it can take the form of one-off incidents, sustained campaigns, or both. Flawed privacy and security features leave journalists unprotected on the very platforms they must use to find, research, and publish stories.

The design of these platforms matters. Trust Through Trickery looks at design in the context of harassment. Inspired by dark patterns research, we ask: Do design elements lure users into a dangerous, false sense of trust online?
From 2019 to 2020, we led a research project on the online harassment of journalists with Elyse Voegeli and Vandikia Shukla at the Harvard Kennedy School, funded by the Open Technology Fund. Trust Through Trickery is a qualitative and quantitative research study on harassment in messaging apps and social networks. It also examines the ways design is implicated in that harassment, and which design elements amplify or create it.
We created one of the first interdisciplinary surveys of designers and journalists, meant to compare knowledge and awareness of design terms, dark patterns, and design processes. We ran two surveys, one with 230 designers and another with 81 journalists, and held two feature design workshops with 20 journalists in Mexico City affiliated with the Online News Association. We also interviewed 31 journalists in China, Hong Kong, Iran, Palestine, Malta, Guatemala, Afghanistan, the United Kingdom, Canada, the United States, Pakistan, India, Nigeria, Germany, Romania, and Mexico.
As a published report, Trust Through Trickery breaks down the design elements that foster trust and how various platforms define trust. It speaks to the disconnect between the definitions of trust used by those who build platforms and those who use them. For example, we found that journalists favored consistency and transparency over any other feature, and we proposed that both be built into any scaffolding for product design. Crucially, they are valuable only when applied to policies and tools, not to cosmetic interface changes. The report calls for strong, easy-to-understand privacy and moderation logic, clearer and more nuanced ways to report and block harassing content, and better privacy filters.
Check out the report here, read our article in Slate summarizing our findings, and see those findings echoed here in a report by PEN America.