Q11 | 001 The Computer in Motion
Tracks
Burns - Theatre 1
Saturday, July 5, 2025 | 9:00 AM - 10:30 AM
Overview
Symposium talk
Lead presenting author(s)
Axel Matthey
University of Lausanne
Modeling in history: using LLMs to automatically produce diagrammatic models synthesizing Piketty's historiographical thesis on economic inequalities
Abstract - Symposia paper
This research integrates theoretical digital history with economic history. Employing Large Language Models (LLMs), we aim to automatically produce historiographical diagrams for analysis. Our experience with the manual production of historiographical diagrams suggests that LLMs might usefully support the automatic generation of such diagrams, which facilitate the visualization and understanding of complex historical narratives and of causal relationships between historical variables. Our initial exploration involved using Google’s Gemini 1.5 Pro and OpenAI’s GPT-4o to convert a concise historical article by Piketty, “A Historical Approach to Property, Inequality and Debt: Reflections on Capital in the 21st Century,” into a simplified causal diagram. LLMs have demonstrated remarkable capabilities in various domains, including understanding and generating code, translating languages, and producing varied creative text formats. We show that LLMs can be trained to analyze historical texts, identify causal relationships between concepts, and automatically generate corresponding diagrammatic models. This could significantly enhance our ability to visualize and comprehend complex historical narratives, making implicit connections explicit and facilitating further exploration and analysis. Historiographical theories explore the nature of historical inquiry, focusing on how historians represent and interpret the past: in this research, diagrams are considered as a means to enhance the communication, visualization, and understanding of these complex theories.
A/Prof Charles Luke Alan Stark
Assistant Professor
Western University
Computational Illegalism
Abstract - Symposia paper
In his analysis of the concept in his lectures on the development of the “punitive society,” Michel Foucault describes the eighteenth century as a period of “systematic illegalism,” including both lower-class or popular illegalism and “the illegalism of the privileged, who evade the law through status, tolerance, and exception” (Foucault 2015, 142). In this paper, I argue that illegalism has new utility as an analytic concept in the history of computing. Illegalism is a characteristic of both the business models and rhetorical positioning of many contemporary digital media firms. Indeed, such “computational illegalism” is so rife that commentators often seem to accept it as a necessary aspect of Silicon Valley innovation.
In this presentation, I describe illegalism as theorized by Foucault and others and develop a theory of platform illegalism grounded in the history of technical and business models for networked computing since the 1970s. This presentation is part of a larger project in which I document the prevalence of illegalism on the part of digital platforms in various arenas, focusing in particular on platform labor and generative AI; examine the range of responses to such illegalism from consumers, activists, and governments; and formulate recommendations regarding ways to account for platform illegalism in scholarly and activist responses as part of governance mechanisms for digitally mediated societies.
Dr Clare Kim
University of Illinois Chicago
The Datafied "Enemy," Computational Work, and Japanese American Incarceration during World War II
Abstract - Symposia paper
Following the attack on Pearl Harbor in December 1941, a series of U.S. presidential proclamations and executive orders authorized the legal designation and treatment of people of Japanese ancestry as “enemy aliens.” The designation of the US West Coast as military zones under Executive Order 9066 enabled the removal and subsequent incarceration of more than 120,000 Japanese Americans in internment camps. The problem of identifying, incarcerating, and managing Japanese enemy alien populations necessitated treating these military zones and spaces as information environments, where the classification of Japanese and Japanese American residents as enemy alien, citizen, or an alternative subject position could be adjudicated. This paper explores how conflict in the Pacific theater of World War II contoured the entanglements between computational work and Asians and Asian Americans residing in the U.S., recounting the setup of statistical laboratories established to track and manage Japanese American incarceration. It reveals how datafication practices were collapsed into and equated with bodies racialized as enemy aliens and a yellow peril, which paradoxically effaced other subject positions that Japanese Americans came to occupy at the time: in particular, the invisible labor they furnished to statistical work as technical experts themselves.
Dr Barbara Hof
University of Lausanne
Concluding discussions for HaPoC symposium
Presenting author(s)
Dr Ksenia Tatarchenko
Dr Arianna Borrelli
