Jean-François Savard (ENAP), Stany Nzobonimpa (ENAP)
This research investigates the bias, quality, and coherence of state-of-the-art Artificial Intelligence (AI)-driven language models. In this iteration of the project, the researchers will evaluate OpenAI’s ChatGPT model and its responses to prompts from a diverse group of respondents spanning North America, Europe, and Africa. In an era where AI-driven language models like ChatGPT increasingly influence decision-making and shape opinions, it is crucial to assess the advice and guidance they provide, particularly on issues of public interest.
The study will employ a large, non-representative sample of participants drawn from various contexts across the selected continents. The participants’ diverse geographical and cultural backgrounds will yield a rich and varied dataset for rigorous comparative analysis. The study will use advanced machine-learning techniques to assess the coherence and topical orientation of ChatGPT’s advice on the selected topics across the three continents, and comparative analysis will help identify regional variations in the model’s responses. Furthermore, the research will integrate theoretical insights from the field of public administration to provide a comprehensive evaluation of ChatGPT’s guidance. By scrutinizing how the language model navigates the nuances and complexities of these topics, the study aims to shed light on the potential societal impact of AI-driven chatbots on respondents’ decision-making processes. The findings will inform the development of AI-driven conversational agents and offer valuable insights for policymakers, educators, and society at large.