CEMS 2026
Human-like concept networks derived from LLM embeddings
Concept networks have the potential to explain flexible semantic cognition. Unfortunately, constructing human-generated concept networks is slow and resource-intensive, which creates a methodological bottleneck. Here we investigate the possibility of using LLM embeddings to derive concept networks that are structurally similar to human-created networks. For each concept tested (e.g., chocolate), Gemini was used to generate 50 subtypes (e.g., milk chocolate, chocolate sauce), and an embedding was generated for each subtype. Concept networks were created by correlating feature vectors across these embeddings, and network measures (e.g., modularity) were extracted. We ran sweeps across multiple parameters (e.g., network threshold, embedding dimensionality) to optimize the correspondence between human- and LLM-generated concept networks. Results suggest that concept networks constructed from Gemini embeddings can successfully capture aspects of semantic structure. The ability to automate the creation of human-like concept networks opens the door to further exploration of the context-dependence and flexibility of conceptual meaning.
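The pipeline described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the embeddings are simulated stand-ins for Gemini outputs (two artificial clusters so the network has recoverable community structure), the threshold value is an arbitrary example of the swept parameter, and the partition passed to the modularity function is assumed known rather than detected.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for Gemini subtype embeddings: 50 subtypes x 256 dims,
# built as two artificial clusters so community structure exists by design.
n_per_cluster, dim = 25, 256
base_a = rng.normal(size=dim)
base_b = rng.normal(size=dim)
emb = np.vstack([
    base_a + 0.5 * rng.normal(size=(n_per_cluster, dim)),
    base_b + 0.5 * rng.normal(size=(n_per_cluster, dim)),
])

# Edge weights: Pearson correlation between subtype embedding vectors.
corr = np.corrcoef(emb)

# Binarize at a threshold -- one of the swept parameters in the abstract.
threshold = 0.5
adj = (corr > threshold).astype(int)
np.fill_diagonal(adj, 0)

def modularity(adj, labels):
    """Newman modularity Q of an undirected binary network for a given
    node partition: Q = (1/2m) * sum_ij (A_ij - k_i*k_j/2m) [c_i == c_j]."""
    m = adj.sum() / 2.0
    k = adj.sum(axis=1)
    same_community = labels[:, None] == labels[None, :]
    expected = np.outer(k, k) / (2.0 * m)
    return ((adj - expected) * same_community).sum() / (2.0 * m)

# Evaluate modularity against the known two-cluster partition.
labels = np.array([0] * n_per_cluster + [1] * n_per_cluster)
q = modularity(adj, labels)
print(f"modularity Q = {q:.3f}")
```

In practice one would replace the simulated matrix with real embedding vectors, sweep `threshold` and the embedding dimensionality, and compare the resulting network measures against those of human-generated networks.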