Automatically Generating Reading Comprehension Questions to Target Specific Skills Using Soft-Prompts

It takes skilled teachers significant time and effort to create high-quality reading comprehension questions, often making it impractical to target a particular learner’s weaknesses. Statistical language models can write reading comprehension questions automatically, but targeting questions to build specific skills remains an open problem. We present a new language model that more reliably generates questions of the requested skill type, making the generated questions more skill-specific. Additionally, we propose a new automatic evaluation method that 1) is more closely aligned with real-world settings and 2) holistically considers the set of target questions for a context, better capturing the diversity of the generated question set. Using this new evaluation method, we show that our language model generates sets of questions more similar to the ones teachers write, outperforming baselines on two datasets.
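To make the soft-prompt idea in the title concrete, the sketch below shows one common way to condition a frozen sequence-to-sequence language model on a skill type: a small, learnable prompt matrix per skill is prepended to the context's token embeddings before generation. This is a hypothetical illustration, not the authors' implementation; the backbone model (t5-base), the skill taxonomy, and the prompt length are all assumptions for the example.

```python
# Minimal sketch of skill-conditioned soft-prompting (assumptions noted inline).
import torch
import torch.nn as nn
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_NAME = "t5-base"                              # assumed backbone; the talk does not specify one
SKILLS = ["literal", "inference", "vocabulary"]     # hypothetical skill taxonomy
PROMPT_LEN = 20                                     # hypothetical number of soft-prompt tokens per skill

tokenizer = T5Tokenizer.from_pretrained(MODEL_NAME)
model = T5ForConditionalGeneration.from_pretrained(MODEL_NAME)
for p in model.parameters():                        # freeze the backbone; only soft prompts would be trained
    p.requires_grad = False

embed_dim = model.get_input_embeddings().embedding_dim
# One trainable prompt matrix per skill type.
soft_prompts = nn.ParameterDict({
    skill: nn.Parameter(torch.randn(PROMPT_LEN, embed_dim) * 0.02)
    for skill in SKILLS
})

def generate_question(context: str, skill: str, max_new_tokens: int = 64) -> str:
    """Prepend the skill's soft prompt to the context embeddings and decode a question."""
    enc = tokenizer(context, return_tensors="pt", truncation=True)
    token_embeds = model.get_input_embeddings()(enc.input_ids)      # (1, T, d)
    prompt = soft_prompts[skill].unsqueeze(0)                        # (1, P, d)
    inputs_embeds = torch.cat([prompt, token_embeds], dim=1)         # (1, P+T, d)
    attention_mask = torch.cat(
        [torch.ones(1, PROMPT_LEN, dtype=enc.attention_mask.dtype),
         enc.attention_mask], dim=1)
    output_ids = model.generate(inputs_embeds=inputs_embeds,
                                attention_mask=attention_mask,
                                max_new_tokens=max_new_tokens)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

In this setup, only the per-skill prompt parameters are updated during training, so requesting a different skill at inference time amounts to swapping which prompt matrix is prepended to the context.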

Thursday, May 25, 2023

09:10 PDT
10:10 MDT
12:10 EDT
13:10 ADT
17:10 BST

Research Team

Spencer von der Ohe
MSc, Computing Science
University of Alberta

Dr. Alona Fyshe
Co-Lead, Computational Modelling
University of Alberta

Dhruv Mullick
University of Alberta