Project Details: Truth, authenticity and record mediation are foundational concerns for historians. These concepts now need to be defined for others, such as Artificial Intelligence model developers who use historical data as the basis of their public models. This project aims to outline the ethics of developing Artificial Intelligence personas as representations of real historical people.
This project, History, Archives and Ethics (HAE), will take an interdisciplinary approach to the creation of AI models for educational and research purposes. HAE brings together experts in data, communication, philosophy, history and artificial intelligence, alongside archivists and librarians, to collaboratively create a trial Artificial Intelligence model using personal historical data. The interdisciplinary approach helps researchers to understand the requirements other experts have for creating models, how to use data ethically, and how best to share information with a public audience.
This project unites ten experts from a range of research and professional areas.
Co-CI Dr Lee-Talbot has secured participant agreement from the people listed below. Each of these contributors has declared support for AI innovation, scepticism towards it, or tentative acceptance. Meetings with HAE contributors provided an opportunity to identify who will participate in the project and in what capacity, as well as potential consensus issues, budget requests and desired outcomes for participation.
HAE currently consists of three teams:
– Data and Communication: Professor Richard Dazeley, Dr Wei Luo, Associate Professor Guangyan Huang, Associate Professor Toija Cinque.
– History and Philosophy: Associate Professor Patrick Stokes (CI), Dr Bart Ziino, Dr Deborah Lee-Talbot (Co-CI).
– Information Specialists: Antony Catrice (Deakin Archives), Kristen Thornton (Deakin Special Collections), Brad Adams (Deakin Arts and Education Librarian).
The Data and Communication team will advise on industry trends and connections, and will undertake model creation, including the AI prompts and programming needed to meet the objectives of the History and Philosophy and Information Specialist teams. The History and Philosophy team will offer the foundational discussion points for issues concerning the ethical use of data from deceased persons, using historiographical and philosophical approaches. The Information Specialist (IS) team will provide insight into researcher trends and legal requirements. The large number of contributors ensures both a diversity of perspectives and the continued progress of the project should some members be unable to contribute.
With the opportunity to meet and work together at three collaborative workshops, HAE will seek to establish ethical guidelines for a history-based AI project, with the potential to produce a journal article. Using data from Deakin’s institutional archive and Special Collections, the team will create, then interrogate, an AI model that presents complex, publicly available personal data to a public audience. The team will then develop guidelines for responsible AI that draws on historical data.
Topics for discussion at the workshops will address the current lack of connection between:
– data and communications specialists, who create the chatbots,
– historians and philosophers, who hold historical knowledge and ethical expertise, and
– information specialists, who understand user profiles, research methods, copyright and privacy.
This disconnect is currently a hurdle to the development of innovative and successful research and education AI models, and it needs to be explored further.
To create an AI model that uses historical data ethically, and that also entices users and researchers to use the Deakin collection and engage with history, we need material that is:
– multimedia (photos, videos or audio),
– manuscript-based (to construct a narrative, support keyword searches, and give users and researchers a point of entry into the rest of the collection), and
– open to the public (no privacy or copyright concerns).
Deakin collections will be used to create a pilot project for ethical assessment. From the Deakin institutional archive, we have identified baseline (simple), digitised data in the Charlesworth Collection. More complex data (featuring multiple individuals and Indigenous-Pacific Knowledge), which still needs to be digitised, is available in a recent acquisition by Deakin Special Collections (Clive Moore, Box 8) and will be used should the simple data be successfully, and ethically, used.
Research Problem
Researchers are increasingly positioned as either bystanders or active users of, and contributors to, Artificial Intelligence research and output. For example, the State Library of Queensland created Charlie, a Virtual Veteran of the First World War, in 2024. Charlie’s responses to my questions about “his” war experiences were drawn from publicly available archival data from the Australian War Memorial, TROVE and the State Library. The responses were composites of personal information taken from the letters and diaries of a range of participants. Such use of personal material demonstrates an urgent need to understand how personal historical data can be used ethically in an AI project.
This project was supported by incentive funding from Deakin University’s Science and Society Network.