Control of Mixed-Initiative Discourse Through Meta-Locutionary Acts: A Computational Model
David Graham Novick
Committee: Sarah Douglas (chair), Stephen Fickas, Kent Stevens, Scott DeLancey
Dissertation Defense

Human-computer interaction typically displays single-initiative interaction in which either the computer or the human controls the conversation. The interaction is largely preplanned and depends on well-formed language. In contrast, human-human conversations are characterized by unpredictability, ungrammatical utterances, non-verbal expression, and mixed-initiative control in which the conversants take independent actions. Traditional natural-language systems are largely unable to handle these aspects of "feral" language. Yet human-human interaction is coherent for the participants; the conversants take turns, make interruptions, detect and cure misunderstandings, and resolve ambiguous references. How can these processes of control be modeled formally in a manner sufficient for use in computers?

Non-sentential aspects of conversation, such as nods, fragmentary utterances, and corrections, can be seen as reflecting control information for the interaction. Such actions by the conversants, based on the context of their interaction, determine the form of the conversation. In this view, ungrammaticality, for example, is not a problem but a guide to these "meta" acts. This dissertation develops a theory of "meta-locutionary" acts that explains these control processes. The theory extends speech-act theory to real-world conversational control and encompasses a taxonomy of meta-locutionary acts.
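As a minimal illustration of the idea, the sketch below maps non-sentential behaviors to control-bearing act types. The act labels and behavior names here are hypothetical placeholders chosen for exposition; the dissertation's actual taxonomy of meta-locutionary acts is not reproduced in this abstract.

```python
from enum import Enum

class MetaAct(Enum):
    """Hypothetical meta-locutionary act labels (illustrative only)."""
    TAKE_TURN = "take-turn"       # claim the conversational floor
    GIVE_TURN = "give-turn"       # yield the floor to the other conversant
    ACKNOWLEDGE = "acknowledge"   # signal understanding without a sentence
    REPAIR = "repair"             # detect and cure a misunderstanding

# Under this view, non-sentential behavior is a control signal, not noise:
# each observed behavior is read as performing a meta-locutionary act.
BEHAVIOR_TO_ACT = {
    "nod": MetaAct.ACKNOWLEDGE,
    "fragmentary utterance": MetaAct.REPAIR,
    "interruption": MetaAct.TAKE_TURN,
    "pause": MetaAct.GIVE_TURN,
}
```

The point of the mapping is the reinterpretation it encodes: a nod or an ungrammatical fragment is not a parsing failure but evidence of a control act.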

The theory of meta-locutionary acts was refined and validated by a protocol study and computational simulation. In the protocol study, subjects were given a cooperative problem-solving task. The conversants' interaction, both verbal and non-verbal, was transcribed as illocutionary and meta-locutionary acts. The computational model was developed using a rule-based system written in Prolog. The system represents the independent conversational knowledge of both conversants simultaneously, and can simulate their simultaneous action. Simulations of the protocol conversations using the computational model showed that meta-locutionary acts can control mixed-initiative discourse. The model's agents can, for example, take and give turns, and a single agent can simultaneously perform multiple acts with differing control effects. The simulations also confirmed that conversations need not be strictly planned. Rather, mixed-initiative interaction can be plausibly controlled by contextually determined operators.
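The control scheme described above can be sketched as follows. This is an assumed, simplified rendering in Python, not the dissertation's Prolog implementation: two agents share a conversational context, and contextually triggered operators fire turn-management acts rather than following a fixed plan.

```python
# Minimal sketch (assumed design): the shared context records who holds
# the floor; each cycle, the holder's give-turn operator and the other
# agent's take-turn operator fire, producing a trace of control acts.

def simulate(cycles: int = 2):
    context = {"floor": "A"}   # agent "A" initially holds the turn
    trace = []
    for _ in range(cycles):
        speaker = context["floor"]
        other = "B" if speaker == "A" else "A"
        # Operator 1: the current speaker yields the floor.
        trace.append((speaker, "give-turn"))
        # Operator 2: the other agent, seeing the floor free, takes it.
        trace.append((other, "take-turn"))
        context["floor"] = other
    return trace
```

Even this toy version exhibits the key property claimed for the model: the sequence of turns emerges from context-sensitive operators applied by independent agents, with no global plan of the conversation.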

This research has applications in natural language processing, user interface design, and multiple-agent artificial intelligence systems. The theory of meta-locutionary acts will integrate well with existing speech-act-based natural-language systems.