[2024] HAGARICE - Harnessing Argumentation Graphs in Augmented Reality for Immersive Co-Creation and Exploration (€15,000)
This project aims to develop tools for improved collaborative creation of argumentation graphs and for their exploitation in assisted reasoning. Existing online argumentation platforms often depict a debate as a directed graph, offering a visual representation that enhances human grasp of the arguments and their interconnections. These platforms have various applications in domains such as education and e-democracy, where they facilitate broad public involvement in the development of laws. However, several problems prevent their adoption for assisted reasoning. This project will provide solutions to these problems.
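Such debate graphs are commonly modelled as Dung-style abstract argumentation frameworks, where nodes are arguments and edges are attacks. A minimal illustrative sketch (the argument names and the choice of grounded semantics are assumptions for illustration, not the project's actual design):

```python
# Minimal abstract argumentation sketch: compute the grounded extension
# of a directed attack graph by iterating the characteristic function
# (the set of arguments defended by the current set) to a fixed point.

def grounded_extension(arguments, attacks):
    """arguments: set of names; attacks: set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    extension = set()
    while True:
        # An argument is defended if each of its attackers is itself
        # attacked by some member of the current extension.
        defended = {a for a in arguments
                    if all(any((e, b) in attacks for e in extension)
                           for b in attackers[a])}
        if defended == extension:
            return extension
        extension = defended

# Hypothetical debate: b attacks a, c attacks b; c is unattacked and defends a.
args = {"a", "b", "c"}
atts = {("b", "a"), ("c", "b")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']
```

The grounded extension is the most sceptical acceptable set of arguments; richer semantics (preferred, stable) follow the same graph representation.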
[2024-2026] SMARTER - Structured Multi-sourced AI Reasoning for Enhanced Decision-Making (€6,000)
In this project, we will instantiate a custom large language model based on Meta AI's Llama 2 70B model and train it on both synthetic natural language arguments and arguments extracted from existing platforms. We will perform training through LoRA (Low-Rank Adaptation), a fine-tuning method that freezes the pre-trained model weights and adds trainable rank decomposition matrices to each layer of the Transformer architecture, capturing task-specific features with far fewer trainable parameters. This synergy between argumentation graphs, knowledge graphs, and the LLM will be integrated into a decision-support system and evaluated in various decision scenarios.
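The LoRA idea described above can be sketched in a few lines. This is a toy illustration of the rank decomposition (dimensions and values are made up, and real implementations operate on Transformer weight tensors via libraries such as PEFT): a frozen weight matrix W is adapted through trainable low-rank factors B and A, giving the effective weight W + (alpha / r) * (B @ A).

```python
# Toy LoRA sketch: matrices as lists of lists, no external dependencies.
# W is frozen; only the low-rank factors A (r x d_in) and B (d_out x r)
# would receive gradient updates during fine-tuning.

def matmul(X, Y):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*Y)]
            for row in X]

def add_scaled(X, Y, scale):
    return [[x + scale * y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

d_out, d_in, r, alpha = 3, 4, 2, 4.0

W = [[0.1 * (i + j) for j in range(d_in)] for i in range(d_out)]  # frozen
A = [[0.01] * d_in for _ in range(r)]        # trainable
B = [[0.0] * r for _ in range(d_out)]        # trainable, zero-initialised

def effective_weight():
    return add_scaled(W, matmul(B, A), alpha / r)

# With B initialised to zero the adapter is a no-op, so fine-tuning
# starts exactly from the base model's behaviour.
assert effective_weight() == W
```

Because only A and B are trained, the number of trainable parameters is r * (d_in + d_out) per adapted matrix rather than d_in * d_out, which is what makes fine-tuning a 70B-parameter model tractable.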
[2023-2026] Achieving Self-directed Integrated Cancer Aftercare (ASICA) in melanoma: Developing and enhancing the intervention using evidence from the pilot study and emergent technologies (£421,293.28)
This project, funded through Cancer Research UK, is led by Prof. Peter Murchie and co-investigated by myself, Dr. Dewei Yi, and Dr. Julia Allan. It will optimize the existing ASICA platform for integration into UK National Health Service melanoma survivorship care. ASICA is a theory- and evidence-based digital intervention designed to improve the early detection of melanoma recurrences and second primaries by prompting and supporting people with melanoma to conduct regular total skin self-examination. The project has three work packages.
[2023-2024] ZEEFLEET Tool Feasibility Study (£19,929.35)
This project, funded through Innovate UK, is led by Prof. Nir Oren and co-investigated by myself and Dr. Andrew Starkey. In partnership with Better Environment and Transport (BEAT), the project will deliver a prototype design and supporting technical analysis to identify which AI and Operations Research techniques best support decisions around the specification and deployment of zero-emission specialist vehicles and recharging infrastructure, as well as the routing of vehicles at a realistic (e.g. city/authority) scale.
[2022] Verifying Building Code Regulations from Drawings (£5,000)
Currently, the process of ensuring compliance with building regulations demands substantial human effort. Our solution aims to leverage artificial intelligence to transform this process. Through the application of machine learning techniques, our goal is to develop a system capable of 'reading' architectural drawings and employing normative reasoning and argumentation theory to assess their compliance with building regulations. The envisioned tool promises to dramatically reduce the time required for validation, and this advancement stands to yield considerable resource savings for architectural firms by expediting the assessment of plans for regulatory adherence. To substantiate our approach, we plan to focus on a specific subset of regulations, initially targeting fire safety. This focused validation will serve as a proof of concept, mitigating the risks associated with commercializing this idea.
[2021] Verifying the Compliance of Argumentation Principles with Human Reasoning (£5,000)
The theory of computational argumentation has gained significant attention across various disciplines, including computer science, artificial intelligence, psychology, linguistics, and philosophy. It finds application in diverse fields such as medicine, law, agribusiness, and engineering design rationale, where it helps justify decisions made by AI technologies to end-users. These applications rely on specific techniques that adhere to argumentation principles, ensuring rational functioning. However, it remains uncertain whether, and to what extent, these principles align with human reasoning. Establishing this connection would bridge the gap between human reasoning and formal argumentation, leading to improved explanations for end-users and increased trust in AI applications. Through user experiments built on handcrafted examples, the project aims to evaluate whether people understand these argumentation principles, whether their reasoning aligns with them, and how the satisfaction of specific principles affects others.