LINGUIST List 31.1552

Thu May 07 2020

Confs: Italian; Comp Ling, Ling Theories, Neuroling, Semantics, Syntax/Italy

Editor for this issue: Lauren Perkins

Date: 07-May-2020
From: Cristiano Chesi
Subject: Acceptability & Complexity evaluation task for Italian (AcCompl-it) for EVALITA 2020

Acceptability & Complexity evaluation task for Italian (AcCompl-it) for EVALITA 2020
Short Title: AcCompl-it

Date: 30-Nov-2020 - 03-Dec-2020
Location: Bologna, Italy
Contact: Cristiano Chesi
Contact Email:
Meeting URL:

Linguistic Field(s): Computational Linguistics; Linguistic Theories; Neurolinguistics; Semantics; Syntax

Subject Language(s): Italian

Meeting Description:

AcCompl-it is a task aimed at developing and evaluating methods for classifying Italian sentences according to their degree of Acceptability and Complexity, and/or for modelling specific phenomena related to linguistic complexity and/or acceptability. The task is of interest to communities focusing on psycholinguistics and theoretical linguistics, as well as to NLP-based applications such as natural language generation and complexity assessment.

Program Information:

Participants are invited to develop computational models capable of predicting Acceptability and/or Complexity scores for a set of sentences extracted from both naturalistic and synthetic corpora, including classic morphosyntactic manipulations tested in the psycholinguistic and formal linguistics literature.
The task is articulated into the following subtasks:
1. ACCEPT Subtask: providing an acceptability score on a 1-7 Likert scale for each sentence in the test set, along with an estimate of its standard error;
2. COMPL Subtask: providing a complexity score on a 1-7 Likert scale for each sentence in the test set, along with an estimate of its standard error;
3. OPEN Subtask: modeling linguistic phenomena correlated with the human ratings of sentence acceptability and/or complexity in the datasets provided.

The three subtasks are independent. Participants can decide to take part in just one of them, though we encourage participation in multiple subtasks, since the complexity metrics might be influenced by the grammatical status of an expression, and vice versa. A full specification of the tasks, the data, and the acceptability/complexity definitions will be provided on the task website.
In the first two subtasks, the reference metric will be a score on a 7-point Likert scale (1 = lowest, 7 = highest).
In the open subtask, the linguistic phenomena to be modelled will be freely chosen by each participant, who will build the model on the basis of the data in the training sets provided for the ACCEPT and COMPL Subtasks.
Participants are free to use external resources, but every resource used has to be described in detail in the final report.
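As a minimal sketch of the kind of output the ACCEPT and COMPL subtasks ask for, the snippet below aggregates per-annotator Likert ratings for a sentence into a mean score and its standard error (SE = sample standard deviation / sqrt(n)). The function name and the sample ratings are illustrative, not part of the official task specification:

```python
import math

def likert_summary(ratings):
    """Aggregate per-annotator Likert ratings (1-7) for one sentence
    into a mean score and its standard error (sd / sqrt(n))."""
    n = len(ratings)
    mean = sum(ratings) / n
    # Sample variance with Bessel's correction (n - 1 in the denominator)
    var = sum((r - mean) ** 2 for r in ratings) / (n - 1)
    return mean, math.sqrt(var) / math.sqrt(n)

# Hypothetical ratings from five annotators for one sentence
mean, se = likert_summary([5, 6, 5, 7, 6])
print(round(mean, 2), round(se, 3))  # 5.8 0.374
```

Participants' systems would predict such a (score, standard error) pair directly from the sentence text rather than from human ratings; this sketch only illustrates how the reference values can be derived from annotator judgments.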

All details about the task, data, and evaluation can be found on the task website.

Important Dates:
- 29 May 2020: development and distribution (on the Task Web Page) of the datasets for training and development
- 4 September 2020: development and distribution (on the Task Web Page) of the datasets for testing
- 4 - 24 September 2020: evaluation window and collection of participants' results
- 2 October 2020: assessment returned to participants
- 6 November 2020: technical reports due to organizers (camera-ready)
- 2-3 December 2020: final workshop in Bologna

Updates will be made available on the EVALITA 2020 website; check it often.

Pauli Brattico*
Dominique Brunato**
Cristiano Chesi*
Felice Dell'Orletta**
Simonetta Montemagni**
Giulia Venturi**
Roberto Zamparelli***

* NETS - IUSS Center for NEurocognition and Theoretical Syntax, IUSS Pavia, Italy
** Istituto di Linguistica Computazionale ''A. Zampolli'' (CNR), Pisa, Italy
*** Centro Interdipartimentale MEnte/Cervello - Dipartimento di Psicologia e Scienze Cognitive - Università di Trento, Italy

Join the Google group:!forum/accompl-it-evalita2020

And contact the organizers:

Page Updated: 07-May-2020