Integrating Behavior, Text, and Networks to Forecast Online Participation

As online platforms increasingly rely on voluntary contributions—from open science to collaborative innovation—the ability to anticipate user engagement becomes both a scientific and practical priority. Yet predicting who will stay active, who will disengage, and why remains a complex challenge. Our recent paper, KEGNN: Knowledge-Enhanced Graph Neural Networks for User Engagement Prediction (Fan et al., International Conference on Multimedia Retrieval 2025), introduces a framework that addresses this gap by integrating behavioral, social, and semantic signals into a unified predictive model.
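To make the idea concrete, here is a minimal sketch (not the authors' code) of how behavioral, semantic, and social signals can be fused in a single engagement predictor: per-user behavioral and text features are encoded jointly, then smoothed over the social graph with one round of message passing. All names, dimensions, and the placeholder graph are illustrative assumptions; see the paper for the actual KEGNN architecture.

```python
# Illustrative sketch, assuming simple per-user feature vectors and a
# row-normalized social adjacency matrix; not the published KEGNN model.
import torch
import torch.nn as nn

class EngagementGNN(nn.Module):
    def __init__(self, d_behavior, d_text, d_hidden):
        super().__init__()
        # Jointly encode the concatenated behavioral + semantic features.
        self.encode = nn.Linear(d_behavior + d_text, d_hidden)
        # One round of mean aggregation over the social graph.
        self.message = nn.Linear(d_hidden, d_hidden)
        # Binary head: probability that the user stays engaged.
        self.head = nn.Linear(d_hidden, 1)

    def forward(self, x_behavior, x_text, adj):
        # adj: (n_users, n_users) row-normalized social adjacency matrix.
        h = torch.relu(self.encode(torch.cat([x_behavior, x_text], dim=-1)))
        h = torch.relu(h + self.message(adj @ h))  # add neighbors' influence
        return torch.sigmoid(self.head(h)).squeeze(-1)

n = 100
model = EngagementGNN(d_behavior=8, d_text=32, d_hidden=64)
adj = torch.eye(n)  # placeholder graph; substitute the real social network
p_engaged = model(torch.randn(n, 8), torch.randn(n, 32), adj)
```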

A Laboratory Ethnography at Scale: Lessons from 3,000 Synthetic Biology Teams

This new preprint is the result of a collaboration initiated during my postdoctoral stay at the Barabási lab in Boston, which I continued at the LPI as an affiliated professor. In this project, we introduce the synthetic biology competition iGEM as a model system for the Science of Science and Innovation, enabling large-scale “laboratory ethnography.” We collected and analyzed laboratory notebook data from 3,000 teams, a dataset we deposited on the open archive Zenodo. We highlight the organizational characteristics of teams (intra- and inter-team collaboration networks) that relate to learning and success in the competition. In particular, we show how teams overcome coordination costs as they grow in size, and how the inter-team collaboration network crystallizes over time, limiting peripheral teams’ access to relational capital. This work is currently funded by an ANR JCJC grant to collect field data and build network models of collaborations and performance.
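As an illustration of the kind of network analysis involved, the sketch below computes one possible proxy for this crystallization: the year-over-year overlap of the most central teams in the inter-team collaboration network. The data structures, the choice of degree centrality, and the core size k are assumptions made for the example, not the paper's exact method.

```python
# Hedged sketch: measure how stable the network's core is across years.
# A Jaccard overlap rising toward 1 suggests a crystallizing core that
# peripheral teams struggle to enter.
import networkx as nx

def top_teams(edges, k=20):
    """Return the k most central teams in one year's collaboration network."""
    G = nx.Graph()
    G.add_edges_from(edges)  # edges: (team_a, team_b) collaboration pairs
    central = nx.degree_centrality(G)
    return {t for t, _ in sorted(central.items(), key=lambda x: -x[1])[:k]}

def core_overlap(edges_by_year, k=20):
    """Jaccard overlap of the central core between consecutive years."""
    years = sorted(edges_by_year)
    cores = {y: top_teams(edges_by_year[y], k) for y in years}
    return {
        (y1, y2): len(cores[y1] & cores[y2]) / len(cores[y1] | cores[y2])
        for y1, y2 in zip(years, years[1:])
    }

# Toy input; real data would come from the parsed notebook/wiki records.
edges_by_year = {
    2014: [("TeamA", "TeamB"), ("TeamB", "TeamC")],
    2015: [("TeamA", "TeamB"), ("TeamA", "TeamC")],
}
print(core_overlap(edges_by_year, k=2))
```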

Community Review Systems in Science Funding

Resource allocation is crucial for the development of innovative projects in science and technology. In response to the urgency of the COVID-19 pandemic in 2020, we implemented an agile “community review” system with JOGL to quickly allocate micro-grants for the prototyping of innovative solutions. In this paper, published in F1000Research, we analyze the results of 7 review cycles. Implemented across 147 projects, the process is characterized by its speed (median duration of 10 days), its scalability (4 reviewers per project regardless of the total number of projects), and its robustness, measured by the preservation of the projects’ ranking order after the random removal of reviewers. Including applicants in the review process does not introduce significant bias: the correlation between their evaluations (r = 0.28) is similar to that observed for non-applicants and within traditional funding methods. The system also allows for agile improvement of proposals, promoting the implementation of successful early prototypes and the constructive revision of initially rejected projects. Overall, this work demonstrates the effectiveness of a frugal community review process for agile resource allocation in open innovation contexts.
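For readers who want to apply the robustness measure to their own review data, here is a minimal sketch under simplifying assumptions (synthetic uniform scores, projects ranked by mean review score): it compares the project ranking obtained from all reviews with rankings obtained after randomly removing reviewers, using a Spearman rank correlation. The score matrix, drop count, and trial count are illustrative, not the paper's exact protocol.

```python
# Hedged sketch of the rank-preservation robustness check described above.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
scores = rng.uniform(1, 10, size=(147, 4))  # 147 projects x 4 reviewers each

def robustness(scores, drop=1, n_trials=1000):
    """Mean Spearman correlation between project scores computed from all
    reviews and scores computed after removing `drop` random reviewers per
    project; values near 1 mean the ranking is robust to reviewer removal."""
    full = scores.mean(axis=1)
    rhos = []
    for _ in range(n_trials):
        perturbed = scores.copy()
        for i in range(scores.shape[0]):
            out = rng.choice(scores.shape[1], size=drop, replace=False)
            perturbed[i, out] = np.nan  # drop these reviewers' scores
        rho, _ = spearmanr(full, np.nanmean(perturbed, axis=1))
        rhos.append(rho)
    return float(np.mean(rhos))

print(robustness(scores, drop=1))
```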