Biblio

Filters: Keyword is conversational agent
2020-07-16
Ciupe, Aurelia, Mititica, Doru Florin, Meza, Serban, Orza, Bogdan.  2019.  Learning Agile with Intelligent Conversational Agents. 2019 IEEE Global Engineering Education Conference (EDUCON). :1100–1107.

Conversational agents complement traditional teaching-learning instruments by proposing new designs for knowledge creation and learning analysis across organizational environments. Means of building a common educational background in both industry and academia are of interest for ensuring educational effectiveness and consistency. Such a context requires transferable practices and becomes the basis for Agile adoption in Higher Education, at both the curriculum and operational levels. The current work proposes a model for delivering Agile Scrum training through an assistive web-based conversational service, where analytics are collected to provide an overview of learners' knowledge paths. Besides its specific applicability in the Software Engineering (SE) industry, the model is intended to assist the academic SE curriculum. A user-acceptance test was carried out among 200 undergraduate students, and patterns of interaction were identified for two conversational strategies.

Pérez-Soler, Sara, Guerra, Esther, de Lara, Juan.  2019.  Flexible Modelling using Conversational Agents. 2019 ACM/IEEE 22nd International Conference on Model Driven Engineering Languages and Systems Companion (MODELS-C). :478–482.

The advances in natural language processing and the wide use of social networks have boosted the proliferation of chatbots. These are software services, typically embedded within a social network, that can be addressed using conversation through natural language. Many chatbots exist with different purposes, e.g., to book all kinds of services, to automate software engineering tasks, or for customer support. In previous work, we proposed the use of chatbots for domain-specific modelling within social networks. In this short paper, we report on the needs for flexible modelling that arise when modelling through conversation. In particular, we propose a process of meta-model relaxation to make modelling more flexible, followed by correction steps to make the model conform to its meta-model. The paper shows how this process is integrated within our conversational modelling framework and illustrates the approach with an example.

2019-12-16
McDermott, Christopher D., Jeannelle, Bastien, Isaacs, John P..  2019.  Towards a Conversational Agent for Threat Detection in the Internet of Things. 2019 International Conference on Cyber Situational Awareness, Data Analytics And Assessment (Cyber SA). :1–8.

A conversational agent to detect anomalous traffic in consumer IoT networks is presented. The agent accepts two inputs: user speech received by Amazon Alexa-enabled devices, and classified IDS logs stored in a DynamoDB table. Aural analysis is used to query the database of network traffic and respond accordingly. In doing so, this paper presents a solution to the problem of making consumers situationally aware when their IoT devices are infected and anomalous traffic has been detected. The proposed conversational agent addresses the issue of how to present network information to non-technical users for better comprehension, and improves awareness of threats derived from the Mirai botnet malware.
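The pipeline the abstract describes, classified IDS logs queried in response to user speech and answered aurally, can be approximated in a small sketch. The log format, labels, and summarising function below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch: turn classified IDS log entries into a spoken-style
# summary, as a voice assistant backed by an IDS log store might do.
from collections import Counter

def summarise_ids_logs(entries):
    """entries: list of dicts with 'device' and 'label' keys (assumed schema)."""
    anomalies = [e for e in entries if e["label"] != "benign"]
    if not anomalies:
        return "No anomalous traffic detected on your network."
    # Group anomalies by device so the response names the infected device.
    by_device = Counter(e["device"] for e in anomalies)
    parts = [f"{n} anomalous flows from {dev}" for dev, n in by_device.most_common()]
    return "Warning: " + "; ".join(parts) + "."

logs = [
    {"device": "smart-camera", "label": "mirai-scan"},
    {"device": "smart-camera", "label": "mirai-scan"},
    {"device": "thermostat", "label": "benign"},
]
print(summarise_ids_logs(logs))  # → Warning: 2 anomalous flows from smart-camera.
```

In a deployed system the summary string would be handed to the voice service as the spoken response; here it is simply printed.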

Xue, Zijun, Ko, Ting-Yu, Yuchen, Neo, Wu, Ming-Kuang Daniel, Hsieh, Chu-Cheng.  2018.  Isa: Intuit Smart Agent, A Neural-Based Agent-Assist Chatbot. 2018 IEEE International Conference on Data Mining Workshops (ICDMW). :1423–1428.
Hiring seasonal workers in call centers to provide customer service is a common practice in B2C companies. The quality of service delivered by both contracting and employee customer service agents depends heavily on the domain knowledge available to them. When observing the internal group messaging channels used by agents, we found that similar questions are often asked repetitively by different agents, especially from less experienced ones. The goal of our work is to leverage the promising advances in conversational AI to provide a chatbot-like mechanism for assisting agents in promptly resolving a customer's issue. In this paper, we develop a neural-based conversational solution that employs BiLSTM with attention mechanism and demonstrate how our system boosts the effectiveness of customer support agents. In addition, we discuss the design principles and the necessary considerations for our system. We then demonstrate how our system, named "Isa" (Intuit Smart Agent), can help customer service agents provide a high-quality customer experience by reducing customer wait time and by applying the knowledge accumulated from customer interactions in future applications.
DiPaola, Steve, Yalçin, Özge Nilay.  2019.  A multi-layer artificial intelligence and sensing based affective conversational embodied agent. 2019 8th International Conference on Affective Computing and Intelligent Interaction Workshops and Demos (ACIIW). :91–92.

Building natural and conversational virtual humans is a task of formidable complexity. We believe that, especially when building agents that affectively interact with biological humans in real time, a cognitive-science-based, multilayered sensing and artificial intelligence (AI) systems approach is needed. For this demo, we show a working version (through human interaction with it) of our modular system for a natural, conversational 3D virtual human using AI and sensing layers. These include sensing the human user via facial emotion recognition, voice stress, the semantic meaning of words, eye gaze, heart rate, and galvanic skin response. These inputs are combined with AI sensing and recognition of the environment using deep-learning natural language captioning or dense captioning. These are all processed by our AI avatar system, allowing for an affective and empathetic conversation using NLP topic-based dialogue and employing facial expressions, gestures, breath, eye gaze, and voice in two-way, back-and-forth conversations with a sensed human. Our lab has been building these systems in stages over the years.

Lopes, José, Robb, David A., Ahmad, Muneeb, Liu, Xingkun, Lohan, Katrin, Hastie, Helen.  2019.  Towards a Conversational Agent for Remote Robot-Human Teaming. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :548–549.

There are many challenges when it comes to deploying robots remotely, including lack of operator situation awareness and decreased trust. Here, we present a conversational agent embodied in a Furhat robot that can help with the deployment of such remote robots by facilitating teaming with varying levels of operator control.

Park, Chan Mi, Lee, Jung Yeon, Baek, Hyoung Woo, Lee, Hae-Sung, Lee, JeeHang, Kim, Jinwoo.  2019.  Lifespan Design of Conversational Agent with Growth and Regression Metaphor for the Natural Supervision on Robot Intelligence. 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI). :646–647.
Humans' direct supervision of a robot's erroneous behavior is crucial to enhance robot intelligence for `flawless' human-robot interaction. Motivating humans to engage more actively for this purpose is, however, difficult. To alleviate such strain, this research proposes a novel approach: a growth-and-regression metaphoric interaction design inspired by the communicative, intellectual, and social competence aspects of human developmental stages. We implemented the interaction design principle in a conversational agent combined with a set of synthetic sensors. Within this context, we aim to show that the agent successfully encourages online labeling activity in response to the faulty behavior of robots as a supervision process. A field study will be conducted to evaluate the efficacy of our proposal by measuring the annotation performance of real-time activity events in the wild. We expect to provide a more effective and practical means to supervise robots through a real-time data-labeling process for long-term use in human-robot interaction.
Pérez, Joaquín, Cerezo, Eva, Gallardo, Jesús, Serón, Francisco J..  2018.  Evaluating an ECA with a Cognitive-Affective Architecture. Proceedings of the XIX International Conference on Human Computer Interaction. :22:1–22:8.
In this paper, we present an embodied conversational agent (ECA) that includes a cognitive-affective architecture based on the Soar cognitive architecture and integrates an emotion model based on ALMA, which uses a three-layered model of emotions, mood, and personality from the point of view of both the user and the agent. These features allow modifying the behavior and personality of the agent to achieve a more realistic and believable interaction with the user. This ECA works as a virtual assistant that searches for information on Wikipedia and shows personalized results to the user. It is only a prototype, but it can be used to show some of the possibilities of the system. A first evaluation was conducted to prove these possibilities, with satisfactory results that also give guidance for some future work that can be done with this ECA.
Sannon, Shruti, Stoll, Brett, DiFranzo, Dominic, Jung, Malte, Bazarova, Natalya N..  2018.  How Personification and Interactivity Influence Stress-Related Disclosures to Conversational Agents. Companion of the 2018 ACM Conference on Computer Supported Cooperative Work and Social Computing. :285–288.
In this exploratory study, we examine how personification and interactivity may influence people's disclosures around sensitive topics, such as psychological stressors. Participants (N=441) shared a recent stressful experience with one of three agent interfaces: 1) a non-interactive, non-personified survey, 2) an interactive, non-personified chatbot, and 3) an interactive, personified chatbot. We coded these responses to examine how agent type influenced the nature of the stressor disclosed, and the intimacy and amount of disclosure. Participants discussed fewer home-life-related stressors, but more finance-related stressors and more chronic stressors overall, with the personified chatbot than with the other two agents. The personified chatbot was also twice as likely as the other agents to receive disclosures that contained very little detail. We discuss the role played by personification and interactivity in interactions with conversational agents, and implications for design.
Ruane, Elayne, Faure, Théo, Smith, Ross, Bean, Dan, Carson-Berndsen, Julie, Ventresque, Anthony.  2018.  BoTest: A Framework to Test the Quality of Conversational Agents Using Divergent Input Examples. Proceedings of the 23rd International Conference on Intelligent User Interfaces Companion. :64:1–64:2.
The quality of conversational agents is important, as users have high expectations. Consequently, poor interactions may lead to the user abandoning the system. In this paper, we propose a framework to test the quality of conversational agents. Our solution transforms working input that the conversational agent accurately recognises to generate divergent input examples that introduce complexity and stress the agent. As the divergent inputs are based on known utterances for which we have the 'normal' outputs, we can assess how robust the conversational agent is to variations in the input. To demonstrate our framework, we built ChitChatBot, a simple conversational agent capable of making casual conversation.
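The core idea, perturbing utterances the agent already handles and checking whether its output stays stable, can be sketched as follows. The perturbation rules (character swaps and deletions) and the toy keyword-spotting agent are illustrative assumptions, not the BoTest implementation.

```python
import random

def divergent_inputs(utterance, seed=0):
    """Generate simple divergent variants of a known-good utterance:
    adjacent-character swaps and dropped characters (illustrative rules)."""
    rng = random.Random(seed)  # seeded for reproducible test runs
    variants = []
    for _ in range(3):
        i = rng.randrange(len(utterance) - 1)
        chars = list(utterance)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]  # swap typo
        variants.append("".join(chars))
    for _ in range(3):
        i = rng.randrange(len(utterance))
        variants.append(utterance[:i] + utterance[i + 1:])  # deletion typo
    return variants

def robustness(agent, utterance, expected):
    """Fraction of divergent variants for which the agent still produces
    the 'normal' output known for the original utterance."""
    variants = divergent_inputs(utterance)
    return sum(agent(v) == expected for v in variants) / len(variants)

# Toy agent: keyword spotting, so it tolerates many typos.
toy_agent = lambda text: "greet" if "hello" in text.lower() else "fallback"
score = robustness(toy_agent, "hello there bot", "greet")
print(f"robust on {score:.0%} of divergent inputs")
```

A real harness would compare full agent responses rather than intent labels, but the robustness metric is the same.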
Karve, Shreya, Nagmal, Arati, Papalkar, Sahil, Deshpande, S. A..  2018.  Context Sensitive Conversational Agent Using DNN. 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA). :475–478.
We investigate a method of building a closed-domain intelligent conversational agent using deep neural networks. A conversational agent is a dialog system intended to converse with a human with a coherent structure. Our conversational agent uses a retrieval-based model that identifies the intent of the input user query and maps it to a knowledge base to return appropriate results. Human conversations are based on context, but existing conversational agents are context-insensitive. To overcome this limitation, our system uses a simple stack-based context identification and storage system. The conversational agent generates responses according to the current context of the conversation, allowing more human-like conversations.
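A stack-based context store of the kind the abstract describes might look like the following minimal sketch; the topic names, knowledge base, and matching rule are made up for illustration and stand in for the paper's DNN-based intent model.

```python
class ContextualAgent:
    """Retrieval-style agent with a stack of conversation contexts.
    New topics are pushed; follow-up queries are resolved against the
    top of the stack; 'back' pops to the previous context."""

    KNOWLEDGE = {  # hypothetical knowledge base: (context, intent) -> answer
        ("billing", "hours"): "Billing support is open 9-5.",
        ("shipping", "hours"): "Shipping desk is open 24/7.",
    }

    def __init__(self):
        self.context_stack = []

    def ask(self, query):
        words = [w.strip("?!.,") for w in query.lower().split()]
        for topic in ("billing", "shipping"):  # naive topic spotting
            if topic in words:
                self.context_stack.append(topic)
        if "back" in words and self.context_stack:
            self.context_stack.pop()
        context = self.context_stack[-1] if self.context_stack else None
        if "hours" in words and context:
            return self.KNOWLEDGE[(context, "hours")]
        return "Could you rephrase that?"

agent = ContextualAgent()
agent.ask("I have a billing question")          # pushes 'billing' context
print(agent.ask("what are your hours?"))        # resolved against 'billing'
```

The stack is what makes the elliptical follow-up ("what are your hours?") resolvable: the query alone names no topic, but the current context supplies it.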
Alam, Mehreen.  2018.  Neural Encoder-Decoder based Urdu Conversational Agent. 2018 9th IEEE Annual Ubiquitous Computing, Electronics Mobile Communication Conference (UEMCON). :901–905.
Conversational agents have very much become part of our lives since the renaissance of neural-network-based "neural conversational agents". Previously used manually annotated and rule-based methods lacked the scalability and generalization capabilities of neural conversational agents. A neural conversational agent has two parts: at one end, an encoder understands the question, while at the other end, a decoder prepares and outputs the corresponding answer to the question asked. Both parts are typically designed using a recurrent neural network and its variants, and trained in an end-to-end fashion. Although conversational agents for other languages have been developed, the Urdu language has seen very little progress in the building of conversational agents. In particular, recent state-of-the-art neural-network-based techniques have not yet been explored. In this paper, we design an attention-driven, deep encoder-decoder-based neural conversational agent for the Urdu language. Overall, we make the following contributions: we (i) create a dataset of 5000 question-answer pairs, and (ii) present a new deep encoder-decoder-based conversational agent for the Urdu language. For our work, we limit the knowledge base of our agent to general knowledge regarding Pakistan. Our best model achieves a BLEU score of 58 and gives syntactically and semantically correct answers in the majority of cases.
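The attention mechanism that such encoder-decoder agents rely on reduces to a soft weighting of encoder states by their similarity to the current decoder state. A minimal dot-product version, with toy vectors rather than the paper's trained RNN states, is:

```python
import math

def attention(decoder_state, encoder_states):
    """Dot-product attention: softmax over similarity scores between the
    decoder state and each encoder state, returning the attention weights
    and the weighted context vector."""
    scores = [sum(d * e for d, e in zip(decoder_state, enc))
              for enc in encoder_states]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    total = sum(exps)
    weights = [x / total for x in exps]
    context = [sum(w * enc[i] for w, enc in zip(weights, encoder_states))
               for i in range(len(decoder_state))]
    return weights, context

# Toy example: the decoder state is closest to the second encoder state,
# so that state receives the largest weight and dominates the context.
enc = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
weights, context = attention([0.0, 2.0], enc)
print(weights)
```

At each decoding step the context vector is concatenated with the decoder state before predicting the next output token; here only the weighting itself is shown.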
Fast, Ethan, Chen, Binbin, Mendelsohn, Julia, Bassen, Jonathan, Bernstein, Michael S..  2018.  Iris: A Conversational Agent for Complex Tasks. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. :473:1–473:12.
Today, most conversational agents are limited to simple tasks supported by standalone commands, such as getting directions or scheduling an appointment. To support more complex tasks, agents must be able to generalize from and combine the commands they already understand. This paper presents a new approach to designing conversational agents inspired by linguistic theory, where agents can execute complex requests interactively by combining commands through nested conversations. We demonstrate this approach in Iris, an agent that can perform open-ended data science tasks such as lexical analysis and predictive modeling. To power Iris, we have created a domain-specific language that transforms Python functions into combinable automata and regulates their combinations through a type system. Running a user study to examine the strengths and limitations of our approach, we find that data scientists completed a modeling task 2.6 times faster with Iris than with Jupyter Notebook.
2018-11-28
Hoshida, Masahiro, Tamura, Masahiko, Hayashi, Yugo.  2017.  Lexical Entrainment Toward Conversational Agents: An Experimental Study on Top-down Processing and Bottom-up Processing. Proceedings of the 5th International Conference on Human Agent Interaction. :189–194.

The purpose of this paper is to examine the influence of lexical entrainment while communicating with a conversational agent. We consider two types of cognitive information processing: top-down processing, which depends on prior knowledge, and bottom-up processing, which depends on one's partner's behavior. The two work in a mutually complementary way in interpersonal cognition. It was hypothesized that the agent's behavior determines which method of processing is engaged. We designed a word-choice task in which participants and the agent alternately described and selected pictures, and we manipulated two factors: first, the expectation about the agent's intelligence, induced by the experimenter's instructions, as top-down processing; second, the agent's behavior, varying the degree of intellectual impression, as bottom-up processing. The results show that people select words differently depending on the diversity of expressed behavior, thus supporting our hypothesis. The findings obtained in this study could bring about new guidelines for a human-to-agent language interface.

Suzanna, Sia Xin Yun, Anthony, Li Lianjie.  2017.  Hierarchical Module Classification in Mixed-Initiative Conversational Agent System. Proceedings of the 2017 ACM on Conference on Information and Knowledge Management. :2535–2538.

Our operational context is a task-oriented dialog system in which no single module satisfactorily addresses the range of conversational queries from humans. Such systems must be equipped with a range of technologies to address semantic, factual, task-oriented, and open-domain conversations using rule-based, semantic-web, traditional machine learning, and deep learning approaches. This raises two key challenges. First, the modules need to be managed and selected appropriately. Second, the complexity of troubleshooting such systems is high. We address these challenges with a mixed-initiative model that controls conversational logic through hierarchical classification. We also developed an interface to increase interpretability for operators and to aggregate module performance.

Zou, Shuai, Kuzushima, Kento, Mitake, Hironori, Hasegawa, Shoichi.  2017.  Conversational Agent Learning Natural Gaze and Motion of Multi-Party Conversation from Example. Proceedings of the 5th International Conference on Human Agent Interaction. :405–409.

Recent developments in robotics and virtual reality (VR) are making embodied agents familiar, and the social behaviors of embodied conversational agents are essential to creating mindful daily lives with conversational agents. In particular, natural nonverbal behaviors are required, such as gaze and gesture movement. We propose a novel method to create an agent with human-like gaze as a listener in multi-party conversation, using a Hidden Markov Model (HMM) to learn the behavior from real conversation examples. The model can generate gaze reactions according to users' gaze and utterances. We implemented an agent with the proposed method and created a VR environment in which to interact with the agent. The proposed agent reproduced several features of gaze behavior found in the example conversations. An impression survey showed that at least one group of participants felt the proposed agent was similar to a human and better than conventional methods.
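The HMM formulation in the abstract amounts to hidden gaze targets with transition and emission probabilities learned from example conversations. A toy Viterbi decode over assumed states and probabilities (illustrative numbers, not learned from data) shows the machinery:

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Most likely hidden gaze-state sequence for an observation sequence."""
    V = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    path = {s: [s] for s in states}
    for obs in observations[1:]:
        V.append({})
        new_path = {}
        for s in states:
            # Best predecessor for state s at this step.
            prob, prev = max(
                (V[-2][p] * trans_p[p][s] * emit_p[s][obs], p) for p in states)
            V[-1][s] = prob
            new_path[s] = path[prev] + [s]
        path = new_path
    best = max(V[-1], key=V[-1].get)
    return path[best]

# Assumed toy model: the listener agent tends to look at whoever is
# speaking, with some inertia in its gaze.
states = ["gaze_A", "gaze_B"]
start = {"gaze_A": 0.5, "gaze_B": 0.5}
trans = {"gaze_A": {"gaze_A": 0.7, "gaze_B": 0.3},
         "gaze_B": {"gaze_A": 0.3, "gaze_B": 0.7}}
emit = {"gaze_A": {"A_speaks": 0.8, "B_speaks": 0.2},
        "gaze_B": {"A_speaks": 0.2, "B_speaks": 0.8}}
print(viterbi(["A_speaks", "A_speaks", "B_speaks"], states, start, trans, emit))
```

In the paper's setting the probabilities would be estimated from annotated multi-party conversations and the decoded states would drive the agent's gaze animation; the decode here is only a worked example of the model class.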

Sandbank, Tommy, Shmueli-Scheuer, Michal, Herzig, Jonathan, Konopnicki, David, Shaul, Rottem.  2017.  EHCTool: Managing Emotional Hotspots for Conversational Agents. Proceedings of the 22nd International Conference on Intelligent User Interfaces Companion. :125–128.

Building conversational agents is becoming easier thanks to the profusion of designated platforms. Integrating emotional intelligence into such agents contributes to positive user satisfaction. Currently, this integration is implemented using calls to an emotion analysis service. In this demonstration, we present EHCTool, which aims to detect and notify the conversation designer about problematic conversation states where emotions are likely to be expressed by the user. Using its exploration view, the tool assists the designer in managing and defining appropriate responses in these cases.