LINGUIST List 36.2731

Mon Sep 15 2025

Calls: CIDL25 Workshop: Do LLMs Exhibit Natural-language Processing Cognitive Abilities? (Romania)

Editor for this issue: Valeriia Vyshnevetska <valeriia@linguistlist.org>



Date: 13-Sep-2025
From: Monica Vasileanu <colocviu.lingvistica.2025@gmail.com>
Subject: CIDL25 Workshop: Do LLMs Exhibit Natural-language Processing Cognitive Abilities?

Full Title: CIDL25 Workshop: Do LLMs Exhibit Natural-language Processing Cognitive Abilities?
Short Title: CIDL25

Date: 21-Nov-2025 - 22-Nov-2025
Location: Bucharest, Romania
Web Site: https://litere.ro/cidl-en/

Linguistic Field(s): Cognitive Science; Computational Linguistics; General Linguistics; Semantics; Syntax

Call Deadline: 25-Sep-2025

2nd Call for Papers:

Convenors: Andrei Mărășoiu, Sandra Brânzaru (Faculty of Philosophy), Alexandru Nicolae (Faculty of Letters)

We invite submissions to a workshop to be held within the 25th International Conference of the Department of Linguistics of the Faculty of Letters.

Questions we aim to explore:
- Do LLMs have metalinguistic abilities, i.e., can they generate theoretical linguistic analyses of language samples, for instance to identify whether a sentence is syntactically ambiguous? How well do they handle linguistic recursion tasks? Can they identify what type of recursion a sentence contains (adjectival, possessive, PP, etc.)? Can they draw a syntactic tree for it and add further layers of recursion?
- Do LLMs meet plausible criteria associated with linguistic metasemantic theories or mental metasemantic theories (cf. https://link.springer.com/article/10.1007/s11229-024-04723-8)?
- What would it mean for a Large Language Model to understand or acquire a language? What would it mean for it to meaningfully use a language or words? Is understanding language, or grasping the meaning of words, based on an innate structure? Does that structure need to be biological? Do LLMs need sensory grounding for language understanding?
- Are LLMs' outputs based on linguistic representations? Do LLMs have internal representations, and what would that mean? What kind of representations would these be? If LLMs do have them, how do they acquire them? Can connectionism account for this?
- Can LLMs actually tell us something about Universal Grammar? Are there alternative ways for acquiring language? Can LLMs learn new languages via identification of grammar rules within data patterns?

Abstracts should clearly state the research questions, approach, method, data and (expected) results. They should not exceed 300 words (including examples, excluding references) and should be sent to [email protected].

Call deadline: September 25, 2025.
Notification of acceptance: October 10, 2025.




Page Updated: 15-Sep-2025

