
Dino Pedreschi

Responsible AI for social media governance – the GPAI experience

Social media platforms are one of the main channels through which AI systems influence people’s lives, and therefore countries and cultures. The number of social media users worldwide is currently estimated at around 5 billion, or roughly 60% of the world’s population. Crucially, the experience of a social media user, on any given platform, is heavily shaped by AI systems that run on that platform.

  • The central component of any social media system is a content feed of items presented to the user. Each user’s feed is curated by a recommender algorithm: an AI system that monitors the user’s interactions with platform content, learns what they like to engage with, and then gives them ‘more of the same’ (a minimal sketch of this feedback loop is given after this list). This ability to tailor content to users is what gives social media systems their vast appeal, but the mechanisms through which recommender algorithms learn raise important questions that remain to be answered. Our project has proposed concrete measures for the governance of recommender algorithms, based on the study of their effects on platform users.
  • Another pervasive AI influence on social media platforms is in their content moderation processes. Content moderation has to be supported by automated tools, given the huge volume of content that is posted, and AI content classifiers are the key tools that companies deploy (a second sketch below illustrates how such a classifier is trained). Again, there are concerns about the processes through which these classifiers are trained. Our project is trialling a mechanism for training classifiers outside of companies, in a semi-public domain, which we think may offer a better model for their governance.
  • A final way in which social media companies channel AI influences is through their role as content disseminators. The revolutionary advances in generative AI over the last few years allow ordinary citizens to create and distribute AI-generated content, through text generation tools like ChatGPT and image generation tools like MidJourney. Alongside the many productive uses of such tools, there are concerns that they will facilitate the production of disinformation, potentially destabilising political processes and other information ecosystems. Our project has made a proposal for the governance of generative AI tools, which has gained considerable traction within EU policymaking bodies and within the US Senate.
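For readers unfamiliar with engagement-driven recommendation, the sketch below illustrates the ‘more of the same’ loop described in the first bullet. It is a deliberately simplified illustration, not the algorithm of any real platform; the item structure, topic labels and weighting scheme are assumptions made purely for the example.

```python
# Illustrative sketch of an engagement-driven recommender ("more of the same").
# Not any real platform's algorithm; topics, weights and items are hypothetical.
from collections import defaultdict

def update_profile(profile, item_topics, engaged):
    """Shift the user's topic weights towards content they engaged with."""
    for topic in item_topics:
        profile[topic] += 1.0 if engaged else -0.1
    return profile

def rank_feed(profile, candidates):
    """Rank candidate items by overlap with the learned profile."""
    return sorted(candidates,
                  key=lambda item: sum(profile[t] for t in item["topics"]),
                  reverse=True)

# Hypothetical usage: a single engagement is enough to bias the next feed.
profile = defaultdict(float)
update_profile(profile, ["sports", "politics"], engaged=True)
feed = rank_feed(profile, [{"id": 1, "topics": ["cooking"]},
                           {"id": 2, "topics": ["politics"]}])
print([item["id"] for item in feed])  # -> [2, 1]: previously engaged topics rank first
```

The point of the sketch is that the only signal the loop optimises is past engagement, which is precisely why the effects of such algorithms on users are worth studying and governing.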
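The second bullet refers to AI content classifiers. As a point of reference, the sketch below shows the standard supervised-learning pattern behind such a classifier, here using scikit-learn with a handful of invented labelled posts; the project’s proposal concerns where and by whom this training is carried out, not this particular model.

```python
# Illustrative sketch of training a content-moderation classifier on labelled posts.
# The posts and labels are invented; real deployments use far larger datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "buy followers now, limited offer",      # 1 = violates policy
    "lovely sunset at the beach tonight",    # 0 = acceptable
    "click this link to win easy money",     # 1
    "great match last night, what a goal",   # 0
]
labels = [1, 0, 1, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(posts, labels)

print(classifier.predict(["win easy money, click here"]))  # likely [1]
```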

In this talk I will briefly report on the work and results of the GPAI project “Responsible AI for Social Media Governance”, which I co-lead with Alistair Knott (Victoria University of Wellington, NZ), along the three lines outlined above.

Dino Pedreschi

Dino Pedreschi is a professor of computer science at the University of Pisa and a member of GPAI – the Global Partnership on AI. He is a pioneering scientist in data science and artificial intelligence. He co-leads the Pisa KDD Lab – Knowledge Discovery and Data Mining Laboratory http://kdd.isti.cnr.it, a joint research initiative of the University of Pisa, Scuola Normale Superiore and the Italian National Research Council – CNR. His research contributions span big data analytics and mining, machine learning and AI, and their impact on society: human mobility and sustainable cities, social network analysis, complex social and economic systems, data ethics, bias and discrimination analysis, privacy-preserving data analytics, explainable AI, and the governance of AI. His scientific production has received more than 20K citations, with an h-index of 69 (source: Google Scholar, June 2024). He is currently shaping the research frontier of Human-centered Artificial Intelligence as a leading figure in the European network of research labs Humane-AI-Net (scientific director of “Social AI”) and a proponent of the research line on human-AI coevolution. He is a founder of SoBigData.eu, the European H2020 Research Infrastructure “Big Data Analytics and Social Mining Ecosystem” www.sobigdata.eu. Dino is currently Italy’s nominated expert in the Responsible AI working group of GPAI – the Global Partnership on AI, the director of the Italian National PhD Program in Artificial Intelligence, and the coordinator of the project “Human-centered AI” within the Next Generation EU Partnership “FAIR – Future AI Research”.