As we age, we adjust how we associate objects with one another. We examine aspects of age, animacy, and object similarity using an interpretable computational model trained to perform an odd-one-out-among-three task, together with the dataset of human responses used to train that model. The trained model contains a vector embedding for each object used in the task; these embeddings determine how each object relates to the others, and each embedding dimension corresponds to a human-identifiable category (e.g., “body-part related”). First, we use the model to select questions for an experiment comparing how children (age 6) and adults prioritize taxonomic versus thematic features when performing the odd-one-out task. Second, we examine what information is encoded when the model is constrained to very few dimensions. Finally, we compare which features of the model best explain the responses of adult respondents across different age groups.
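As a minimal sketch of the decision rule such an embedding model might use (the abstract does not specify it; here we assume, SPoSE-style, that pairwise similarity is the dot product of non-negative embedding vectors and that the model deterministically picks the most similar pair, leaving the third object as the odd one out):

```python
import numpy as np

def odd_one_out(embeddings, triplet):
    """Pick the odd one out from a triplet of object indices.

    Assumption (not from the paper text): similarity between two
    objects is the dot product of their embedding vectors; the model
    selects the most similar pair, so the remaining object is the
    odd one out.
    """
    i, j, k = triplet
    sims = {
        k: embeddings[i] @ embeddings[j],  # pair (i, j) leaves k out
        j: embeddings[i] @ embeddings[k],  # pair (i, k) leaves j out
        i: embeddings[j] @ embeddings[k],  # pair (j, k) leaves i out
    }
    # The odd one out is the object excluded from the most similar pair.
    return max(sims, key=sims.get)

# Toy example with two interpretable dimensions, e.g.
# ("animal-related", "tool-related") -- hypothetical labels.
emb = np.array([
    [0.9, 0.1],  # object 0: strongly animal-related
    [0.8, 0.2],  # object 1: strongly animal-related
    [0.1, 0.9],  # object 2: strongly tool-related
])
print(odd_one_out(emb, (0, 1, 2)))  # → 2
```

In probabilistic variants of this setup, the pair choice is instead sampled with probability proportional to the exponentiated similarities, which yields graded predictions comparable to human response distributions.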