In this practical, we will replicate the work of Ruiz-Dolz et al. on Relation-based Argument Mining (RBAM) with natural language arguments.

Setting up and understanding the code

  1. Clone the GitHub repository and install the necessary libraries.
  2. Explore the datasets: what is their format? What do the type values correspond to? (See the sketch after this list.)
  3. Which files correspond to the US2016 dataset?
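
As a starting point, the sketch below loads one dataset file with pandas and inspects the distribution of the type values. The file path and column names are assumptions; adjust them to the files actually present in the cloned repository.

```python
# Hypothetical inspection of one dataset file; the path and column names are
# placeholders to be adapted to the repository's actual files.
import pandas as pd

df = pd.read_csv("data/US2016.csv")      # hypothetical path

print(df.shape)                          # number of argument pairs
print(df.columns.tolist())               # available columns
print(df.head())

# Distribution of the relation "type" values (e.g. attack / support / none).
print(df["type"].value_counts())
```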

Reproducing the experiments

Note that this step can take a lot of time. Modifications to the code may be needed to speed up the experiment or to adapt it to your setting.

  1. Investigate paper-experiments.py and use the distilbert model (see the sketch after this list).
  2. Train and test the model.
  3. Compare your results with those reported in the paper. Do you obtain similar results?
  4. Experiment with the other models used in the paper.
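
As a reference point, here is a condensed sketch of fine-tuning DistilBERT on argument pairs with Hugging Face Transformers. It is not the paper's exact pipeline: the split file names, the column names (arg1, arg2, type) and the number of labels are assumptions to be aligned with what paper-experiments.py actually does.

```python
# Hypothetical fine-tuning of DistilBERT as a sequence-pair classifier:
# (argument 1, argument 2) -> relation type.
import pandas as pd
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

MODEL = "distilbert-base-uncased"
NUM_LABELS = 3                                      # assumed: attack / support / none

# Placeholder split files; "type" must already contain integer class labels.
train_df = pd.read_csv("data/US2016_train.csv")
test_df = pd.read_csv("data/US2016_test.csv")

tokenizer = AutoTokenizer.from_pretrained(MODEL)

def encode(batch):
    # Encode the two arguments of each pair as a single sequence-pair input.
    return tokenizer(batch["arg1"], batch["arg2"], truncation=True, max_length=256)

train_ds = Dataset.from_pandas(train_df).map(encode, batched=True).rename_column("type", "labels")
test_ds = Dataset.from_pandas(test_df).map(encode, batched=True).rename_column("type", "labels")

model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=NUM_LABELS)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16)

trainer = Trainer(model=model, args=args, train_dataset=train_ds,
                  eval_dataset=test_ds, tokenizer=tokenizer)
trainer.train()
print(trainer.evaluate())   # reports loss; add a compute_metrics function for F1
```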

Going further

  1. Other datasets have been proposed for RBAM. Adapt them and evaluate the previous models on them, e.g., the processed Kialo and microtexts datasets from E. Faugier (see the first sketch after this list).
  2. Perform RBAM with LLMs. Use Ollama or LM Studio to download an LLM (e.g., Mistral 7B, Llama 3, etc.) and perform RBAM using few-shot learning (see the second sketch after this list).
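
For item 1, the adaptation mostly amounts to mapping the external files onto the same (arg1, arg2, type) schema used above. The source column names and label strings below are placeholders, since the exact layout of the processed Kialo and microtexts files may differ.

```python
# Hypothetical conversion of an external dataset to the (arg1, arg2, type) schema.
# Source column and label names are placeholders; check the processed files.
import pandas as pd

LABEL_MAP = {"attack": 0, "support": 1, "none": 2}   # assumed integer encoding

src = pd.read_csv("kialo_processed.csv")             # hypothetical file
converted = pd.DataFrame({
    "arg1": src["parent_text"],                      # placeholder column names
    "arg2": src["child_text"],
    "type": src["relation"].str.lower().map(LABEL_MAP),
})
converted.dropna(subset=["type"]).to_csv("data/kialo_pairs.csv", index=False)
```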
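For item 2, one possible setup is a few-shot prompt sent to a locally served model through Ollama's REST API. The model name, prompt wording and in-context examples below are illustrative assumptions, not a prescribed protocol.

```python
# Hypothetical few-shot RBAM with a local LLM served by Ollama
# (default endpoint http://localhost:11434).
import requests

FEW_SHOT = """Classify the relation of the second argument to the first one
as Attack, Support or None.

Argument 1: We should ban cars from city centres.
Argument 2: Air quality improves drastically when traffic is reduced.
Relation: Support

Argument 1: We should ban cars from city centres.
Argument 2: Local businesses depend on customers arriving by car.
Relation: Attack

Argument 1: {arg1}
Argument 2: {arg2}
Relation:"""

def classify(arg1, arg2, model="mistral"):
    # Send the filled-in few-shot prompt and return the model's answer.
    prompt = FEW_SHOT.format(arg1=arg1, arg2=arg2)
    r = requests.post("http://localhost:11434/api/generate",
                      json={"model": model, "prompt": prompt, "stream": False},
                      timeout=120)
    return r.json()["response"].strip()

print(classify("Taxes on sugar should be raised.",
               "Sugar taxes disproportionately hit low-income households."))
```

Running this over every pair in a test file and comparing the predicted labels with the gold type values gives a direct point of comparison with the fine-tuned transformer models.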