LINGUIST List 35.265

Mon Jan 22 2024

Calls: Agency and Intentions in Artificial Intelligence

Editor for this issue: Zackary Leech <zleech@linguistlist.org>



Date: 21-Jan-2024
From: Julie Goncharov <julie.goncharov@uni-goettingen.de>
Subject: Agency and Intentions in Artificial Intelligence

Full Title: Agency and Intentions in Artificial Intelligence
Short Title: AIAI

Date: 15-May-2024 - 17-May-2024
Location: University of Goettingen, Germany
Contact Person: Julie Goncharov
Meeting Email: [email protected]
Web Site: https://ail-workshop.github.io/aiai-conference/index.html

Linguistic Field(s): Applied Linguistics; Cognitive Science; Computational Linguistics; Discipline of Linguistics; Linguistic Theories; Philosophy of Language
Subject Language(s): English (eng)

Call Deadline: 11-Feb-2024

Meeting Description:

"Agency and Intentions in Artificial Intelligence" (AIAI) builds on the success of our workshop series "Agency and Intentions in Language" (AIL), which brings together scholars in theoretical linguistics, philosophy, and psychology who are interested in questions related to agency, intentions, reasoning about actions, and causation. AIAI aims at extending this interdisciplinary theoretical discussion of fundamental principles underlying human-human interaction to human-machine interaction, broadly construed.

Talk of “artificial intelligence” is everywhere. From its use in medical diagnosis to relationship chatbots, AI technology is improving rapidly in the diverse tasks it can perform, offering genuine benefits to human social life along with novel risks. With so much at stake, it is surprising that we have so little basic theoretical understanding of AI systems as unique agents that encode, or can be interpreted as encoding, intentional actions in communication with humans. The goal of this conference is to start a sober conversation about AI systems as agent-like collaborators.

In particular, we are interested in understanding whether (and how) the conceptual baggage that Large Language Models (LLMs) come with is similar to (or different from) the conceptual fundamentals that underlie human linguistic competence. LLMs are often used by humans in unique forms of request-making, collaboration, and problem-solving. This is the same range of tasks that has shaped the evolution of our faculty of language. We are now in an era in which these two language-related systems interact with each other.

We want to bring together theoretical linguists, philosophers, cognitive scientists, and computer scientists in a rich and multi-faceted discussion regarding conceptual representations of agency and intentions in LLMs and their connection to related representations based on human linguistic competence. The conference is interdisciplinary in nature. Rather than viewing the complexity of these topics as a challenge to productive conversation, we think of it as an opportunity to bring thinkers from diverse backgrounds together to share various tools, methods, theories, and perspectives on how to make sense of agency in non-human computational systems and their interaction with human agency. We do not expect all of the presenters at the conference to share the same methodological assumptions or research backgrounds, nor do we expect such congruence in our attendees. This allows all participants to benefit from seeing questions of agency and AI from new standpoints. Additionally, it encourages speakers and attendees to present their ideas and questions in clear and accessible ways so that, say, a linguist can effectively communicate their work to philosophers and cognitive scientists.

Keynote speakers
Andrey Kutuzov (University of Oslo)

Rick Nouwen (Utrecht University)

Anna Strasser (Berlin)

Important dates
November 13, 2023: Call for Papers opens

February 11, 2024: abstract submission deadline

March 11, 2024: notification of acceptance

May 15-17, 2024: conference dates

Scientific committee
Julie Goncharov (University of Göttingen)
Kyle Thompson (Harvey Mudd College)
Olga Kellert (Universidade da Coruña, Grupo LyS, contracted CL and AI scientist)
Hedde Zeijlstra (University of Göttingen)
Brian Keeley (Pitzer College)
Thomas Weskott (University of Göttingen)

2nd Call for Papers:

We cordially invite submissions from linguists, philosophers, cognitive scientists, and computer scientists exploring topics related to agency and intentions with respect to human linguistic competence and/or in AI systems. Some of the questions and topics within the scope of the conference are as follows:

Are AI systems, or LLMs in particular, unique kinds of agents? How should we understand the human propensity to treat them as such? Do AI systems and LLMs produce linguistic outputs that can be understood through the concepts of "intentional action" or "intentions"?
Are AI systems or LLMs unique language users? How can we best study, discuss, and engage with their linguistic outputs?
What semantic properties and conceptions can be attributed to outputs from LLMs or other AI systems? How similar are they to (or different from) the semantic properties and conceptions that we use for theorizing about human linguistic competence?
What do LLMs teach us about concepts themselves, specifically those related to agency, such as "intentions", "decision-making", "reasons", and "judgment"? Are there fundamental differences in the way "intentional action" is captured in human language as compared to how it is captured in LLMs?
Are LLMs participating in acts and expressions in similar ways to human agents? For example, do LLMs encode something like an “understanding” of concepts? Do they “refer” to things and ideas in their linguistic outputs? Are they “responding” to human requests and inquiries?
Are LLM concept vectors sufficiently grounded, i.e., are they connected in the right ways to the real world, to constitute certain semantic properties that human expressions possess?
How are specific ethical problems related to AI informed by the above questions about the linguistic capacities of AI systems? How might those ethical issues be better addressed?
Can cognitive scientific models of human thinking, agency, and decision-making benefit from studying LLMs? What can cognitive science tell us about how LLMs “process” information?
How do computer scientists think about the role of agency and intentions when developing LLMs?
The list of topics above is not exhaustive. At the heart of these topics is a drive to learn and discover more about AI systems as potential agents and decision-makers. While the conference is not directly focused on providing solutions to ethical problems in AI development, questions of ethics and moral responsibility both motivate the discussion and will be included in the conference. What AIAI will uniquely achieve, though, is an interdisciplinary conversation about the technical, philosophical, and linguistic features of the very AI systems that humans will continue to employ in ever more domains of social life. A new phase of AI is here, and we think this offers new opportunities and challenges for people in all areas of life. Our goal is to meet these opportunities and challenges through the unique theoretical perspectives offered by linguists, philosophers, computer scientists, and cognitive scientists. After all, it is impossible to take practical moral action in response to AI systems if we cannot make sense of what AI is, does, or intends.

For more information see https://ail-workshop.github.io/aiai-conference/call.html




Page Updated: 22-Jan-2024

