Welcome!

I am a Postdoctoral Researcher at the Institute for Strategy, Technology and Organization (ISTO) at the Ludwig-Maximilians-Universität München (LMU Munich) and a PhD candidate at Télécom Paris/Center for Research in Economics and Statistics (CREST).

My research focuses on digital markets. In particular, I am interested in consumer behaviour and in firms' strategies to ensure quality under information asymmetry.

Research Interests: Digital Economics, Empirical Industrial Organization, Reputation Systems, Online Labor Markets

Email me at: chiara.belletti@lmu.de

Research

"Reputation Concerns and the End-Game Effect: When Reputation Works and When it Does Not"

with Elizaveta Pronkina and Michelangelo Rossi


Do reputation incentives work when sellers are about to exit an online marketplace? Using data from Airbnb, this paper examines how end-of-game considerations affect sellers' effort decisions. We take advantage of a regulation on short-term rentals in the City of Los Angeles to identify hosts who anticipated their imminent exit from the platform due to non-compliance with new exogenous eligibility rules. We measure hosts' effort with listings' ratings in effort-related categories such as check-in, cleanliness and communication. Our empirical strategy is twofold. First, we compare the changes in effort-related ratings to ratings on location for hosts who left the platform as a result of the regulation. Second, we conduct a difference-in-differences analysis, comparing the effort of hosts in the City of Los Angeles with that of hosts in neighboring cities. With both strategies, we document a statistically significant decrease in effort-related ratings in the last periods of a host's career. Our results suggest that end-of-game considerations affect the power of reputation systems as an incentive for sellers to exert effort.

Selected presentations: ZEW ICT Conference 2022, EARIE 2022, CESifo Area Conference 2022, Paris Digital Economics Conference 2024

"Moral Hazard in Micro-Tasking. Evidence from a Structural Model"

with Louis Pape


Crowd-sourcing platforms provide data used to train machine learning algorithms and artificial intelligence. However, a classical principal-agent problem, fostered by the low monetary rewards of outsourced tasks, limits the quality of the data produced on such platforms. This problem results from firms not monitoring the quality of the work done with sufficient frequency. In this paper we investigate the quality of data annotation on a crowd-sourcing platform by modelling the simultaneous demand and supply of effort on the platform. Our model considers the moderating impact of each platform side's expectations on the other side's choice. The equilibrium outcome, observed as rejection/validation decisions in the data, is derived through fulfilled rational expectations. We estimate our model with proprietary data from a leading micro-tasking platform and reveal that rejection rates underestimate the quality of executed tasks. Additionally, we simulate different counterfactual incentive schemes to induce higher-quality work. In partial equilibrium, we find that a wage penalty for workers with a rejected task could induce higher effort and require less monitoring from firms.

Selected presentations: AFREN 2023, ORG Seminar LMU 2023

"Crowd-sourcing AI Related Tasks: Insights from an Online Labor Platform"

with Ulrich Laitenberger and Paola Tubaro


This paper provides new descriptive evidence on how crowd-sourcing platforms are used to outsource AI-related jobs, mostly data training. Drawing on proprietary data from a leading commercial crowd-sourcing platform and exploiting data analysis techniques to identify AI-related tasks, the paper studies the demand volume and the content of AI-related jobs outsourced on the platform. It also investigates the behaviors adopted by requesters to ensure the quality of task execution. In exploring these different strategic dimensions of requesters' behaviour, we offer valuable insights for new outsourcing firms, elucidating the methods they can employ to ensure the collection of quality output. Furthermore, we advise the platform on the prevalent tools favored by their clients, which can be strengthened to enhance its attractiveness.

Selected presentations: INDL-6 Berlin 2023

Other Publications

Crowdworking in France and Germany, with Ulrich Laitenberger, Daniel Erdsiek, and Paola Tubaro. ZEW Expert Brief No. 21-09

CV

Teaching

Teaching Assistant: