Language in Vivo vs. in Silico: Size Matters but Larger Language Models Still Do Not Comprehend Language on a Par with Humans
Dentella, Vittoria
Guenther, Fritz (Humboldt-Universität zu Berlin. Institut für Psychologie)
Leivada, Evelina (Universitat Autònoma de Barcelona. Departament de Filologia Catalana)

Date: 2024
Description: 22 p.
Abstract: Understanding the limits of language is a prerequisite for Large Language Models (LLMs) to act as theories of natural language. LLM performance in some language tasks presents both quantitative and qualitative differences from that of humans; however, it remains to be determined whether such differences are amenable to model size. This work investigates the critical role of model scaling, determining whether increases in size make up for such differences between humans and models. We test three LLMs from different families (Bard, 137 billion parameters; ChatGPT-3.5, 175 billion; ChatGPT-4, 1.5 trillion) on a grammaticality judgment task featuring anaphora, center embedding, comparatives, and negative polarity. N=1,200 judgments are collected and scored for accuracy, stability, and improvements in accuracy upon repeated presentation of a prompt. Results of the best-performing LLM, ChatGPT-4, are compared to results of n=80 humans on the same stimuli. We find that humans are overall less accurate than ChatGPT-4 (76% vs. 80% accuracy, respectively), but that this is due to ChatGPT-4 outperforming humans only in one task condition, namely on grammatical sentences. Additionally, ChatGPT-4 wavers more than humans in its answers (12.5% vs. 9.6% likelihood of an oscillating answer, respectively). Thus, while increased model size may lead to better performance, LLMs are still not sensitive to (un)grammaticality in the same way humans are. It seems possible but unlikely that scaling alone can fix this issue. We interpret these results by comparing language learning in vivo and in silico, identifying three critical differences concerning (i) the type of evidence, (ii) the poverty of the stimulus, and (iii) the occurrence of semantic hallucinations due to impenetrable linguistic reference.
Rights: This document is subject to a Creative Commons license. Total or partial reproduction, distribution, public communication of the work, and the creation of derivative works are permitted, even for commercial purposes, provided the authorship of the original work is acknowledged.
Language: English
Document: Working paper ; research ; published version
Subject: Large Language Models ; Grammaticality ; Language ; Scaling

DOI: 10.48550/arXiv.2404.14883



The record appears in these collections:
Research documents > Working papers

Record created 2025-04-05, last modified 2025-04-09


