Summary: | <p>Sophisticated AI systems are increasingly ubiquitous. In many ways, we have already been affected by the rollout of AI systems into more and more areas of life, from insurance and law to healthcare and the media – often without really noticing. However, 2023 will likely prove a particularly critical moment in the history of AI. Since the public release of ChatGPT, a so-called Large Language Model (LLM), by the US start-up OpenAI in December 2022, we have witnessed the proliferation of a form of AI labelled ‘Generative AI’ for its ability to create seemingly everything from realistic text to images. ChatGPT reached 100 million users in just two months and has since been built into Microsoft’s Bing search engine. Various applications rely on the system, which is increasingly being integrated into other software, too. Meanwhile, the ‘AI race’ is heating up, with Google releasing its own chatbot and other technology companies vying for a share of the market by building and releasing their own models.<br />Powerful and technologically impressive as some of these developments are, they also raise important questions about their democratic impact. Until now, we could take for granted humans’ central role in shaping democratic deliberation and culture. But what does it mean for the future of democracy if humans are increasingly side-lined by AI? Does it matter if news articles, policy briefs, lobbying pieces, and entertainment are no longer created solely by humans? How will increasingly automated journalism and media culture affect democratic participation and deliberation? How can we protect democratic values, such as public deliberation and self-governance, in societies that stand to be reshaped by AI? And how might these new technologies be used to promote democratic values?</p>
<p>To investigate this situation and to gauge the opinions of experts and academics, the Balliol Interdisciplinary Institute project ‘Automating Democracy: Generative AI, Journalism, and the Future of Democracy’ convened a group of experts for a public symposium at Balliol College, Oxford, in collaboration with the Institute for Ethics in AI and the Oxford Internet Institute. The aim of the symposium, organised jointly by Dr Linda Eggert, an Early Career Fellow in Philosophy, and Felix M. Simon, a communication researcher and DPhil student at the Oxford Internet Institute, was to identify key issues in this space and to start a conversation among academics, industry experts, and the public about the questions outlined above. The symposium featured three panel discussions: ‘The Technology, Context, and Socioeconomics of LLMs,’ ‘How Generative AI is Impacting the News Media,’ and ‘Regulating Generative AI Democratically and Globally.’ <br />Speakers included leading experts on AI, the news, and democratic theory: Hannah Kirk, an AI researcher and DPhil student at the Oxford Internet Institute; Hal Hodson, a special projects writer and technology journalist at The Economist; Laura Ellis, the BBC’s Head of Technology Forecasting; Gary Rogers, co-founder of news agency RADAR and Senior Newsroom Strategy Consultant at Fathm; Dr Gemma Newlands, Departmental Research Lecturer in AI and Work at the Oxford Internet Institute; Polly Curtis, the Chief Executive of think tank Demos; Prof John Tasioulas, Director of the Institute for Ethics in AI and Professor of Ethics and Legal Philosophy at the University of Oxford; and Prof Hélène Landemore, Professor of Political Science at Yale University.</p>
<p>After briefly introducing and defining LLMs and Generative AI, this report provides a summary of the main themes that emerged during the symposium and outlines a list of open questions to be addressed in future research and discussions.</p>
|