In this practical, we will replicate the work of Ruiz-Dolz et al. on relation-based argument mining (RBAM) with natural language arguments.
Setting up and understanding the code
- Clone the GitHub repository and install the necessary libraries.
- Explore the datasets: what is the format? What do the `type` values correspond to? (See the loading sketch after this list.)
- Which files correspond to the US2016 dataset?
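If the data files are CSV-like, a quick way to answer these questions is to load one file and inspect it. The file path and column name below are assumptions to replace with the actual names found in the repository:

```python
import pandas as pd

# Hypothetical file name: use one of the dataset files found in the repository.
df = pd.read_csv("data/US2016_train.csv")

print(df.head())                   # inspect the columns (argument pairs, labels, ...)
print(df["type"].unique())         # distinct relation types (e.g. support / attack / none)
print(df["type"].value_counts())   # class distribution
```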
Reproducing the experiments
Note that this step can take a lot of time. Modifications to the code may be needed to speed up the experiment or to adapt it to your setting.
- Investigate `paper-experiments.py` and use the `distilbert` model.
- Train and test the model (a minimal fine-tuning sketch is given after this list).
- Compare your results with those reported in the paper: do you obtain similar scores?
- Repeat the experiments with the other models used in the paper.
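As a reference point, here is a minimal sketch of how such an argument-pair classifier can be fine-tuned with the Hugging Face transformers library; it is not the repository's own code. The file paths, the column names `arg1`, `arg2`, `type`, and the hyperparameters are assumptions to adapt to the actual data format.

```python
import numpy as np
import pandas as pd
from datasets import Dataset
from sklearn.metrics import f1_score
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

MODEL_NAME = "distilbert-base-uncased"

# Hypothetical file and column names: adapt to the actual dataset format.
train_df = pd.read_csv("data/US2016_train.csv")
test_df = pd.read_csv("data/US2016_test.csv")
labels = sorted(train_df["type"].unique())
label2id = {label: i for i, label in enumerate(labels)}

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def encode(batch):
    # Encode the two arguments of each pair as a single sequence-pair input.
    enc = tokenizer(batch["arg1"], batch["arg2"], truncation=True, max_length=256)
    enc["label"] = [label2id[t] for t in batch["type"]]
    return enc

train_ds = Dataset.from_pandas(train_df).map(encode, batched=True)
test_ds = Dataset.from_pandas(test_df).map(encode, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME, num_labels=len(labels))

def compute_metrics(eval_pred):
    # Macro F1 over the relation classes, to ease comparison with reported scores.
    y_pred = np.argmax(eval_pred.predictions, axis=-1)
    return {"macro_f1": f1_score(eval_pred.label_ids, y_pred, average="macro")}

args = TrainingArguments(
    output_dir="rbam-distilbert",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=test_ds, tokenizer=tokenizer,
                  compute_metrics=compute_metrics)
trainer.train()
print(trainer.evaluate())
```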
Going further
- Other datasets have been proposed for RBAM. Adapt them and evaluate the previous models on them, e.g., using the processed Kialo and microtexts datasets from E. Faugier (a conversion sketch is given below).
- Perform RBAM with LLMs. Use Ollama or LM Studio to download an LLM (e.g., Mistral 7B, Llama 3, etc.) and perform RBAM using few-shot learning (see the sketch below).
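For the dataset adaptation, the main work is to convert each corpus into the pair-plus-label format expected by the previous scripts. A minimal sketch, assuming that format is a CSV with `arg1`, `arg2`, and `type` columns (the source file and column names are placeholders):

```python
import pandas as pd

# Placeholder source file and column names: adapt to the actual Kialo / microtexts exports.
src = pd.read_csv("kialo_pairs.csv")
converted = pd.DataFrame({
    "arg1": src["parent_text"],   # the argument being supported / attacked
    "arg2": src["child_text"],    # the argument whose relation we classify
    "type": src["relation"],      # map labels if they differ (e.g. "pro" -> "support")
})
converted.to_csv("data/kialo_train.csv", index=False)
```

For the LLM variant, one possible setup is to query a locally running Ollama server through its HTTP API with a few-shot prompt. The prompt, the example pairs, and the model name below are illustrative choices, not taken from the paper or the datasets:

```python
import requests

# Few-shot prompt for relation classification; the examples are illustrative.
FEW_SHOT = """You are given two arguments. Answer with one word:
"support" if the second argument supports the first, "attack" if it attacks it.

Argument 1: We should invest in renewable energy.
Argument 2: Solar power has become cheaper than coal.
Relation: support

Argument 1: We should invest in renewable energy.
Argument 2: Wind turbines are unreliable and expensive to maintain.
Relation: attack

Argument 1: {arg1}
Argument 2: {arg2}
Relation:"""

def classify(arg1: str, arg2: str, model: str = "mistral") -> str:
    # Ollama serves a local HTTP API on port 11434 once the server is running
    # and the model has been pulled (e.g. with `ollama pull mistral`).
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model,
              "prompt": FEW_SHOT.format(arg1=arg1, arg2=arg2),
              "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"].strip().lower()

print(classify("We should ban cars from city centres.",
               "Fewer cars means cleaner air for residents."))
```

Running the predicted relations over a test split and scoring them against the gold `type` labels lets you compare the few-shot LLM directly with the fine-tuned transformer models.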