This project uses HPC to detect and study historical discourses, focusing on eighteenth-century data. It contrasts with traditional historical scholarship, which builds its cases from a limited number of documents and then aims to generalize from them, and with some recent uses of "big data" in history that perform their analysis only at the aggregate level. This approach has the potential to uncover rich, previously unknown insights from historical corpora. HPC is instrumental to the project's workflows for the study of these corpora: HPC resources are crucial for storing, processing, and managing large data volumes; for building and deploying large and complex NLP models; and for providing the historian guiding the analysis with explanations of the results and efficiently adapting the existing workflow to the historian's instructions. In developing and implementing reusable workflows, the project will analyse a particular case study based on eighteenth-century data.
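To make the shape of such a workflow concrete, the following is a minimal, illustrative sketch only, not the project's actual code: it shows how per-document analysis of a large corpus might be parallelized across the cores of an HPC node. The directory layout, the toy keyword set, and the `detect_discourse_markers` function are all hypothetical placeholders standing in for the project's NLP models.

```python
# Illustrative sketch only: parallel per-document processing of a large corpus.
# Paths, keywords, and the scoring function are hypothetical placeholders.
import json
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

CORPUS_DIR = Path("corpus/eighteenth_century")    # assumed layout: one .txt per document
KEYWORDS = {"liberty", "commerce", "providence"}  # toy stand-in for a discourse model


def detect_discourse_markers(doc_path: Path) -> dict:
    """Score one document by counting occurrences of the toy keyword set."""
    text = doc_path.read_text(encoding="utf-8", errors="ignore").lower()
    counts = {kw: text.count(kw) for kw in KEYWORDS}
    return {"document": doc_path.name, "marker_counts": counts}


def main() -> None:
    docs = sorted(CORPUS_DIR.glob("*.txt"))
    # On an HPC node, the worker count would typically match the allocated cores
    # (e.g. taken from the SLURM_CPUS_PER_TASK environment variable).
    with ProcessPoolExecutor() as pool:
        results = list(pool.map(detect_discourse_markers, docs))
    Path("marker_counts.json").write_text(json.dumps(results, indent=2))


if __name__ == "__main__":
    main()
```

In an actual HPC deployment, the keyword counter would be replaced by the project's NLP models, and the per-document jobs would be distributed across cluster nodes via the scheduler rather than across the cores of a single machine.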