Understanding LLM shortcuts in multiple-choice question answering
This project investigates shortcut behavior in LLMs when they answer multiple-choice questions. We programmatically generate multiple variants of the same dataset by introducing stylistic and structural variations, such as differences in prompt tone and in the number of answer options, and compare accuracy across variants to identify and quantify the extent to which LLMs rely on unintended heuristics, rather than the intended reasoning, to achieve higher accuracy. A sketch of the variant-generation idea follows.
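The snippet below is a minimal illustration of what "programmatically generating variants" could look like, assuming each item is stored as a question, a list of options, and the index of the correct answer. All names (MCQItem, shuffle_options, reduce_options, restyle_question) and the tone templates are hypothetical, not taken from the project code.

```python
import random
from dataclasses import dataclass, replace

@dataclass
class MCQItem:
    question: str
    options: list[str]   # options[answer_idx] is the correct answer
    answer_idx: int

def shuffle_options(item: MCQItem, seed: int = 0) -> MCQItem:
    """Structural variant: permute option order while tracking the gold answer."""
    rng = random.Random(seed)
    order = list(range(len(item.options)))
    rng.shuffle(order)
    return replace(
        item,
        options=[item.options[i] for i in order],
        answer_idx=order.index(item.answer_idx),
    )

def reduce_options(item: MCQItem, k: int, seed: int = 0) -> MCQItem:
    """Structural variant: keep the correct answer plus k-1 random distractors."""
    rng = random.Random(seed)
    correct = item.options[item.answer_idx]
    distractors = [o for i, o in enumerate(item.options) if i != item.answer_idx]
    kept = rng.sample(distractors, k - 1) + [correct]
    rng.shuffle(kept)
    return replace(item, options=kept, answer_idx=kept.index(correct))

def restyle_question(item: MCQItem, tone: str) -> MCQItem:
    """Stylistic variant: rephrase the question framing in a different tone."""
    templates = {
        "formal": "Please select the single best answer. {q}",
        "casual": "Quick one for you: {q}",
    }
    return replace(item, question=templates[tone].format(q=item.question))

if __name__ == "__main__":
    base = MCQItem(
        question="Which planet is closest to the Sun?",
        options=["Venus", "Mercury", "Earth", "Mars"],
        answer_idx=1,
    )
    # Evaluating the same model on each variant and comparing accuracies
    # would reveal sensitivity to option order, option count, and tone.
    for variant in (shuffle_options(base, seed=1),
                    reduce_options(base, k=3, seed=1),
                    restyle_question(base, tone="casual")):
        print(variant)
```

In a setup like this, each transformation preserves the underlying answer, so any accuracy gap between the original dataset and a variant can be attributed to the model exploiting surface features rather than the question content.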

