
Deep Reinforcement Learning for Dialogue Generation

Jiwei Li1, Will Monroe1, Alan Ritter3, Michel Galley2, Jianfeng Gao2 and Dan Jurafsky1

1 Stanford University, Stanford, CA, USA
2 Microsoft Research, Redmond, WA, USA
3 Ohio State University, OH, USA
{jiweil,wmonroe4,jurafsky}@stanford.edu, [email protected]
{mgalley,jfgao}@microsoft.com

Abstract

Recent neural models of dialogue generation offer great promise for generating responses for conversational agents, but tend to be short-sighted, predicting utterances one at a time while ignoring their influence on future outcomes. Modeling the future direction of a dialogue is crucial to generating coherent, interesting dialogues, a need which led traditional NLP models of dialogue to draw on reinforcement learning. In this paper, we show how to integrate these goals, applying deep reinforcement learning to model future reward in chatbot dialogue. The model simulates dialogues between two virtual agents, using policy gradient methods to reward sequences that display three useful conversational properties: informativity, coherence, and ease of answering (related to forward-looking function). We evaluate our model on diversity, length, as well as with human judges, showing that the proposed algorithm generates more interactive responses and manages to foster a more sustained conversation in dialogue simulation. This work marks a first step towards learning a neural conversational model based on the long-term success of dialogues.

1 Introduction

Neural response generation (Li et al., 2015; Vinyals and Le, 2015; Luan et al., 2016; Wen et al., 2015; Shang et al., 2015; Yao et al., 2015; Xu et al., 2016; Wen et al., 2016; Li et al., 2016; Su et al., 2016) is of growing interest. The LSTM sequence-to-sequence (SEQ2SEQ) model (Sutskever et al., 2014) is one type of neural generation model that maximizes the probability of generating a response given the previous dialogue turn. This approach enables the incorporation of rich context when mapping between consecutive dialogue turns (Sordoni et al., 2015) in a way not possible, for example, with MT-based dialogue models (Ritter et al., 2011).

Despite the success of SEQ2SEQ models in dialogue generation, two problems emerge. First, SEQ2SEQ models are trained by predicting the next dialogue turn in a given conversational context using the maximum-likelihood estimation (MLE) objective function. However, it is not clear how well MLE approximates the real-world goal of chatbot development: teaching a machine to converse with humans, while providing interesting, diverse, and informative feedback that keeps users engaged. One concrete example is that SEQ2SEQ models tend to generate highly generic responses such as "I don't know" regardless of the input (Sordoni et al., 2015; Serban et al., 2015b; Serban et al., 2015c; Li et al., 2015). This can be ascribed to the high frequency of generic responses in the training set and their compatibility with a diverse range of conversational contexts. Clearly, "I don't know" is not a good action to take, since it closes the conversation down.

Another common problem, illustrated in Table 1 (the example in the bottom left), is when the system becomes stuck in an infinite loop of repetitive responses. This is due to MLE-based SEQ2SEQ models' inability to account for repetition. In example 2, the dialogue falls into an infinite loop after three turns, with both agents generating dull, generic utterances like "i don't know what you are talking about" and "you don't know what you are saying". Looking at the entire conversation, utterance (2), "i'm 16", turns out to be a bad action to take. While it is an informative and coherent response to utterance (1), which asks about age, it offers no way of continuing the conversation.1

1 A similar rule is often suggested in improvisational comedy: https://en.wikipedia.org/wiki/Yes,_and...

arXiv:1606.01541v3 [cs.CL] 25 Jun 2016


Baseline mutual information model (Li et al., 2015):
A: Where are you going ? (1)
B: I'm going to the restroom . (2)
A: See you later . (3)
B: See you later . (4)
A: See you later . (5)
B: See you later . (6)
...

A: How old are you ? (1)
B: I'm 16 . (2)
A: 16 ? (3)
B: I don't know what you are talking about . (4)
A: You don't know what you are saying. (5)
B: I don't know what you are talking about . (6)
A: You don't know what you are saying. (7)
...

Proposed reinforcement learning model:
A: Where are you going ? (1)
B: I'm going to the police station . (2)
A: I'll come with you . (3)
B: No, no, no, no, you're not going anywhere . (4)
A: Why? (5)
B: I need you to stay here. (6)
A: I don't know what you are talking about. (7)
...

A: How old are you ? (1)
B: I'm 16. Why are you asking ? (2)
A: I thought you were 12. (3)
B: What made you think so ? (4)
A: I don't know what you are talking about. (5)
B: You don't know what you are saying. (6)
...

Table 1: Left column: dialogue simulation between two agents using a 4-layer LSTM encoder-decoder trained on the OpenSubtitles dataset. The first turn (index 1) is input by the authors. Then the two agents take turns conversing, taking as input the other agent's prior generated turn. The output is generated using the mutual information model (Li et al., 2015), in which an N-best list is first obtained using beam search based on p(t|s) and reranked by linearly combining the backward probability p(s|t), where t and s respectively denote targets and sources. Right column: dialogue simulated using the proposed reinforcement learning model. The new model has more forward-looking utterances (questions like "Why are you asking?" and offers like "I'll come with you") and lasts longer before it falls into conversational black holes.

These challenges suggest we need a conversation framework that has the ability to (1) integrate developer-defined rewards that better mimic the true goal of chatbot development and (2) model the long-term influence of a generated response in an ongoing dialogue.

To achieve these goals, we draw on the insights of reinforcement learning, which have been widely applied in MDP and POMDP dialogue systems (see the Related Work section for details). We introduce a neural reinforcement learning (RL) generation method, which can optimize long-term rewards designed by system developers. Our model uses the encoder-decoder architecture as its backbone, and simulates conversation between two virtual agents to explore the space of possible actions while learning to maximize expected reward. We define simple heuristic approximations to rewards that characterize good conversations: good conversations are forward-looking (Allwood et al., 1992) or interactive (a turn suggests a following turn), informative, and coherent. The parameters of an encoder-decoder RNN define a policy over an infinite action space consisting of all possible utterances. The agent learns a policy by optimizing the long-term developer-defined reward from ongoing dialogue simulations using policy gradient methods (Williams, 1992), rather than the MLE objective defined in standard SEQ2SEQ models.

Our model thus integrates the power of SEQ2SEQ systems to learn compositional semantic meanings of utterances with the strengths of reinforcement learning in optimizing for long-term goals across a conversation. Experimental results (sampled results at the right panel of Table 1) demonstrate that our approach fosters a more sustained dialogue and manages to produce more interactive responses than standard SEQ2SEQ models trained using the MLE objective.

2 Related Work

Efforts to build statistical dialog systems fall into two major categories.

The first treats dialogue generation as a source-to-target transduction problem and learns mapping rules between input messages and responses from a massive amount of training data. Ritter et al. (2011) frame the response generation problem as a statistical machine translation (SMT) problem. Sordoni et al. (2015) improved Ritter et al.'s system by rescoring the outputs of a phrasal SMT-based conversation system with a neural model that incorporates prior context. Recent progress in SEQ2SEQ models has inspired several efforts (Vinyals and Le, 2015) to build end-to-end conversational systems that first apply an encoder to map a message to a distributed vector representing its semantics and then generate a response from the message vector. Serban et al. (2015a) propose a hierarchical neural model that captures dependencies over an extended conversation history. Li et al. (2015) propose mutual information between message and response as an alternative objective function in order to reduce the proportion of generic responses produced by SEQ2SEQ systems.

The other line of statistical research focuses on building task-oriented dialogue systems to solve domain-specific tasks. Efforts include statistical models such as Markov Decision Processes (MDPs) (Levin et al., 1997; Levin et al., 2000; Walker et al., 2003; Pieraccini et al., 2009), POMDP models (Young et al., 2010; Young et al., 2013), and models that statistically learn generation rules (Oh and Rudnicky, 2000; Ratnaparkhi, 2002; Banchs and Li, 2012; Nio et al., 2014). This dialogue literature thus widely applies reinforcement learning (Walker, 2000; Schatzmann et al., 2006; Gasic et al., 2013; Singh et al., 1999; Singh et al., 2000; Singh et al., 2002) to train dialogue policies. But task-oriented RL dialogue systems often rely on carefully limited dialogue parameters, or hand-built templates with state, action and reward signals designed by humans for each new domain, making the paradigm difficult to extend to open-domain scenarios.

Also relevant is prior work on reinforcement learning for language understanding, including learning from delayed reward signals by playing text-based games (Narasimhan et al., 2015; He et al., 2015), executing instructions in Windows help, and interacting with game environments (Branavan et al., 2011).

Our goal is to integrate these two paradigms, drawing on the advantages of both. We are thus inspired by recent work such as Wen et al. (2016), who train an end-to-end task-oriented dialogue system that links input representations to slot-value pairs in a database, and Su et al. (2016), who adopt a two-stage training procedure that combines reinforcement learning with neural generation models using gradient-based methods. Su et al. (2016) explored the combined model on tasks that involve interaction with real users and showed that reinforcement learning improves dialogue performance.

3 Reinforcement Learning for Open-Domain Dialogue

In this section, we describe in detail the components of the proposed RL model.

The learning system consists of two agents. We use p to denote sentences generated by the first agent and q to denote sentences generated by the second. The two agents take turns talking with each other. A dialogue can be represented as an alternating sequence of sentences generated by the two agents: p1, q1, p2, q2, ..., pi, qi. We view the generated sentences as actions that are taken according to a policy defined by an encoder-decoder recurrent neural network language model.

The parameters of the network are optimized to maximize expected future reward using policy search, as described in Section 4.3. Policy gradient methods are more appropriate for our scenario than Q-learning (Mnih et al., 2013), because we can initialize the encoder-decoder RNN using MLE parameters that already produce plausible responses, before changing the objective and tuning towards a policy that maximizes long-term reward. Q-learning, on the other hand, directly estimates the future expected reward of each action, which can differ from the MLE objective by orders of magnitude, making MLE parameters inappropriate for initialization. The components (states, actions, rewards, etc.) of our sequential decision problem are summarized in the following subsections.

3.1 Action

An action a is the dialogue utterance to generate. The action space is infinite since arbitrary-length sequences can be generated.

3.2 State

A state is denoted by the previous two dialogue turns [pi, qi]. The dialogue history is further transformed to a vector representation by feeding the concatenation of pi and qi into an LSTM encoder model as described in Li et al. (2015).


3.3 Policy

A policy takes the form of an LSTM encoder-decoder (i.e., pRL(pi+1|pi, qi)) and is defined by its parameters. Note that we use a stochastic representation of the policy (a probability distribution over actions given states). A deterministic policy would result in a discontinuous objective that is difficult to optimize using gradient-based methods.

3.4 Reward

r denotes the reward obtained for each action. In this subsection, we discuss the major factors that contribute to the success of a dialogue and describe how approximations to these factors can be operationalized in computable reward functions.

Ease of answering  A turn generated by a machine should be easy to respond to. This aspect of a turn is related to its forward-looking function: the constraints a turn places on the next turn (Schegloff and Sacks, 1973; Allwood et al., 1992). We propose to measure the ease of answering a generated turn by the negative log likelihood of responding to that utterance with a dull response. We manually constructed a list of dull responses S consisting of 8 turns such as "I don't know what you are talking about", "I have no idea", etc., that we and others have found to occur very frequently in SEQ2SEQ models of conversations. The reward function is given as follows:

r_1 = -\frac{1}{N_S} \sum_{s \in S} \frac{1}{N_s} \log p_{seq2seq}(s \mid a) \quad (1)

where N_S denotes the cardinality of S and N_s denotes the number of tokens in the dull response s. Although there are of course more ways to generate dull responses than the list can cover, many such responses are likely to fall into similar regions in the vector space computed by the model. A system less likely to generate utterances in the list is thus also less likely to generate other dull responses.

p_seq2seq represents the likelihood output by SEQ2SEQ models. It is worth noting that p_seq2seq is different from the stochastic policy function pRL(pi+1|pi, qi): the former is learned from the MLE objective of the SEQ2SEQ model, while the latter is the policy optimized for long-term future reward in the RL setting. r_1 is further scaled by the length of target S.
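The following is a minimal sketch (not from the paper) of how the ease-of-answering reward r_1 in Eq. 1 could be computed, assuming a hypothetical scorer seq2seq_log_prob(target_tokens, source_tokens) that returns log p_seq2seq(target | source) under the pretrained SEQ2SEQ model; the dull-response list here is only illustrative of the 8-turn list described above.

# Hypothetical dull-response list; the paper uses a manually constructed list of 8 turns.
DULL_RESPONSES = [
    "i don't know what you are talking about".split(),
    "i have no idea".split(),
]

def ease_of_answering_reward(action_tokens, seq2seq_log_prob, dull_responses=DULL_RESPONSES):
    """r1 (Eq. 1): negative average length-normalized log-likelihood of answering
    the action with a dull response."""
    total = 0.0
    for s in dull_responses:
        # 1/N_s: normalize by the number of tokens in the dull response s.
        total += seq2seq_log_prob(s, action_tokens) / len(s)
    # -1/N_S: average over the dull-response list S and negate, so actions that are
    # hard to answer dully receive a higher reward.
    return -total / len(dull_responses)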

Information Flow  We want each agent to contribute new information at each turn to keep the dialogue moving and avoid repetitive sequences. We therefore propose penalizing semantic similarity between consecutive turns from the same agent. Let h_{p_i} and h_{p_{i+1}} denote representations obtained from the encoder for two consecutive turns p_i and p_{i+1}. The reward is given by the negative log of the cosine similarity between them:

r_2 = -\log \cos(\theta) = -\log \frac{h_{p_i} \cdot h_{p_{i+1}}}{\lVert h_{p_i} \rVert \, \lVert h_{p_{i+1}} \rVert} \quad (2)
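A corresponding sketch for the information-flow reward r_2 in Eq. 2, assuming the encoder states are available as NumPy vectors (h_prev and h_curr are hypothetical names); the epsilon floor is an implementation choice to keep the logarithm defined, not something specified in the text.

import math
import numpy as np

def information_flow_reward(h_prev, h_curr, eps=1e-8):
    """r2 (Eq. 2): negative log cosine similarity between the encoder representations
    of two consecutive turns produced by the same agent."""
    cos = float(np.dot(h_prev, h_curr) /
                (np.linalg.norm(h_prev) * np.linalg.norm(h_curr) + eps))
    # Floor the similarity at eps so log() is defined for orthogonal or anti-correlated
    # turns (an assumption; the paper does not discuss this case).
    return -math.log(max(cos, eps))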

Semantic Coherence  We also need to measure the adequacy of responses to avoid situations in which the generated replies are highly rewarded but are ungrammatical or not coherent. We therefore consider the mutual information between the action a and previous turns in the history to ensure the generated responses are coherent and appropriate:

r_3 = \frac{1}{N_a} \log p_{seq2seq}(a \mid p_i, q_i) + \frac{1}{N_{q_i}} \log p^{backward}_{seq2seq}(q_i \mid a) \quad (3)

where p_seq2seq(a | p_i, q_i) denotes the probability of generating response a given the previous dialogue utterances [p_i, q_i], and p^backward_seq2seq(q_i | a) denotes the backward probability of generating the previous dialogue utterance q_i based on response a. p^backward_seq2seq is trained in a similar way as standard SEQ2SEQ models, with sources and targets swapped. Again, to control for the influence of target length, both log p_seq2seq(a | p_i, q_i) and log p^backward_seq2seq(q_i | a) are scaled by the lengths of their targets.

The final reward for action a is a weighted sum of the rewards discussed above:

r(a, [p_i, q_i]) = \lambda_1 r_1 + \lambda_2 r_2 + \lambda_3 r_3 \quad (4)

where λ1 + λ2 + λ3 = 1. We set λ1 = 0.25, λ2 = 0.25 and λ3 = 0.5. A reward is observed after the agent reaches the end of each sentence.
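A sketch of the semantic-coherence reward r_3 (Eq. 3) and the weighted combination of Eq. 4, assuming hypothetical scorers forward_log_prob and backward_log_prob for p_seq2seq(a | p_i, q_i) and p^backward_seq2seq(q_i | a); r_1 and r_2 would be computed as in the sketches above.

def semantic_coherence_reward(action_tokens, p_i, q_i, forward_log_prob, backward_log_prob):
    """r3 (Eq. 3): length-normalized forward and backward log-likelihoods linking the
    action to the dialogue history [p_i, q_i]."""
    fwd = forward_log_prob(action_tokens, p_i + q_i) / len(action_tokens)  # (1/N_a) log p(a | p_i, q_i)
    bwd = backward_log_prob(q_i, action_tokens) / len(q_i)                 # (1/N_{q_i}) log p_backward(q_i | a)
    return fwd + bwd

def total_reward(r1, r2, r3, lambdas=(0.25, 0.25, 0.5)):
    """Eq. 4: weighted sum of the three heuristic rewards, using the weights reported above."""
    return lambdas[0] * r1 + lambdas[1] * r2 + lambdas[2] * r3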

4 Simulation

The central idea behind our approach is to simulate the process of two virtual agents taking turns talking with each other, through which we can explore the state space and learn a policy pRL(pi+1|pi, qi) that leads to optimal expected reward.


4.1 Supervised Learning

For the first stage of training, we build on prior work on predicting a generated target sequence given dialogue history using the supervised SEQ2SEQ model (Vinyals and Le, 2015). Results from the supervised model will later be used for initialization.

We trained a SEQ2SEQ model with attention (Bahdanau et al., 2014) on the OpenSubtitles dataset, which consists of roughly 80 million source-target pairs. We treated each turn in the dataset as a target and the concatenation of the two previous sentences as the source input.

4.2 Mutual Information

Samples from SEQ2SEQ models are often dull and generic, e.g., "i don't know" (Li et al., 2015). We thus do not want to initialize the policy model using the pre-trained SEQ2SEQ model, because this would lead to a lack of diversity in the RL model's experiences. Li et al. (2015) showed that modeling mutual information between sources and targets significantly decreases the chance of generating dull responses and improves general response quality. We now show how to obtain an encoder-decoder model that generates maximum mutual information responses.

As illustrated in Li et al. (2015), direct decoding from Eq. 3 is infeasible since the second term requires the target sentence to be completely generated. Inspired by recent work on sequence-level learning (Ranzato et al., 2015), we treat the problem of generating a maximum mutual information response as a reinforcement learning problem in which a reward of mutual information value is observed when the model arrives at the end of a sequence.

Similar to Ranzato et al. (2015), we use policy gradient methods (Sutton et al., 1999; Williams, 1992) for optimization. We initialize the policy model pRL using a pre-trained pSEQ2SEQ(a|pi, qi) model. Given an input source [pi, qi], we generate a candidate list A = {a | a ∼ pRL}. For each generated candidate a, we obtain the mutual information score m(a, [pi, qi]) from the pre-trained pSEQ2SEQ(a|pi, qi) and p^backward_SEQ2SEQ(qi|a). This mutual information score is used as a reward and backpropagated to the encoder-decoder model, tailoring it to generate sequences with higher rewards. We refer the reader to Zaremba and Sutskever (2015) and Williams (1992) for details. The expected reward for a sequence is given by:

J(\theta) = \mathbb{E}\big[ m(a, [p_i, q_i]) \big] \quad (5)

The gradient is estimated using the likelihood ratio trick:

\nabla J(\theta) = m(a, [p_i, q_i]) \, \nabla \log p_{RL}(a \mid [p_i, q_i]) \quad (6)

We update the parameters in the encoder-decoder model using stochastic gradient descent. A curriculum learning strategy is adopted (Bengio et al., 2009), as in Ranzato et al. (2015): for every sequence of length T we use the MLE loss for the first L tokens and the REINFORCE algorithm for the remaining T − L tokens, gradually annealing the value of L to zero. A baseline strategy is employed to reduce the variance of the gradient estimate. For simplicity, we set the baseline b to a fixed value of −3, which approximates the average MMI reward score. The final gradient is:

\nabla J(\theta) = \nabla \log p_{RL}(a \mid [p_i, q_i]) \, \big[ m(a, [p_i, q_i]) - b \big] \quad (7)
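A hedged PyTorch-style sketch of a surrogate loss whose gradient matches Eq. 7, including the curriculum mixing described above; log_probs, mle_nll, and num_mle_tokens are hypothetical inputs (per-token log p_RL of the sampled sequence, per-token MLE negative log-likelihood of the reference, and the current value of L).

import torch

def mixed_reinforce_loss(log_probs, mle_nll, num_mle_tokens, reward, baseline=-3.0):
    """Surrogate loss: MLE on the first num_mle_tokens positions, REINFORCE with a
    fixed baseline on the rest. Minimizing it follows (reward - baseline) * grad log p_RL,
    i.e., the gradient in Eq. 7."""
    advantage = reward - baseline                      # m(a, [p_i, q_i]) - b, held constant w.r.t. theta
    rl_part = -log_probs[num_mle_tokens:].sum() * advantage
    mle_part = mle_nll[:num_mle_tokens].sum()
    return rl_part + mle_part

# Usage sketch: loss = mixed_reinforce_loss(...); loss.backward(); optimizer.step(),
# annealing num_mle_tokens toward zero over training.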

4.3 Dialogue Simulation between Two Agents

We simulate conversation between the two virtual agents and have them take turns talking with each other. The simulation proceeds as follows: at the initial step, a message from the training set is fed to the first agent. The agent encodes the input message to a vector representation and starts decoding to generate a response output. Combining the immediate output from the first agent with the dialogue history, the second agent updates the state by encoding the dialogue history into a representation and uses the decoder RNN to generate responses, which are subsequently fed back to the first agent, and the process is repeated.
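A compact sketch of the self-play loop just described; agent_respond and encode_state are hypothetical stand-ins for the shared encoder-decoder policy and the state encoder of Section 3.2.

def simulate_dialogue(initial_message, agent_respond, encode_state, max_turns=5):
    """Two virtual agents take turns: at each turn, the most recent (up to two)
    utterances are encoded into a state and the policy decodes a response, which is
    appended to the history and handed to the other agent."""
    history = [initial_message]
    for _ in range(max_turns):
        state = encode_state(history[-2:])   # previous two dialogue turns [p_i, q_i]
        response = agent_respond(state)
        history.append(response)
    return history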

Optimization  We initialize the policy model pRL with parameters from the mutual information model described in the previous subsection. We then use policy gradient methods to find parameters that lead to a large expected reward. For a sequence of actions {a_1, a_2, ..., a_T}, the reward function is given by:

L = \mathbb{E}\Big[ \sum_{i=1}^{T} R(a_i, [p_i, q_i]) \Big] \quad (8)


[Figure 1: Dialogue simulation between the two agents. The diagram shows an input message expanded, via alternating encode-decode steps, into several candidate responses at each simulated turn (Turn 1, Turn 2, ..., Turn n).]

where R(a_i, [p_i, q_i]) denotes the reward resulting from action a_i. Again, the REINFORCE algorithm (Williams, 1992) is used for gradient updates:

\nabla J(\theta) = \sum_{i} \nabla \log p(a_i \mid p_i, q_i) \sum_{i=1}^{T} R(a_i, [p_i, q_i]) \quad (9)

We refer readers to Williams (1992) and Glynn (1990) for more details. For each simulated instance, we generate 5 candidates for each dialogue turn. Since most of the generated responses in an N-best list are similar, differing only by punctuation or minor morphological variations, candidates are generated by sampling from the distribution with additional Gaussian noise to foster more diverse candidates.

4.4 Curriculum Learning

A curriculum learning strategy is again employed: we begin by simulating the dialogue for 2 turns, and gradually increase the number of simulated turns. We generate at most 5 turns, as the number of candidates to examine grows exponentially with the size of the candidate list. Five candidate responses are generated at each step of the simulation.

5 Experimental Results

In this section, we describe experimental results along with qualitative analysis. We evaluate dialogue generation systems using both human judgments and two automatic metrics: conversation length and diversity.

5.1 Dataset

The dialogue simulation requires high-quality initial inputs to feed to the agents. For example, an initial input of "why?" is undesirable, since it is unclear how the dialogue could proceed. We take a subset of 10 million messages from the OpenSubtitles dataset and extract 0.8 million sequences with the lowest likelihood of generating the response "i don't know what you are talking about", to ensure that initial inputs are easy to respond to.
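A sketch of this filtering step under stated assumptions: seq2seq_log_prob is the same hypothetical scorer as in the earlier reward sketches, and the 0.8 million cutoff follows the text.

def filter_easy_to_answer(sources, seq2seq_log_prob,
                          dull="i don't know what you are talking about", keep=800_000):
    """Keep the source messages least likely to be answered with the canonical dull
    response, so that simulations start from inputs that are easy to respond to."""
    dull_tokens = dull.split()
    scored = [(seq2seq_log_prob(dull_tokens, src.split()), src) for src in sources]
    scored.sort(key=lambda pair: pair[0])   # lowest dull-response likelihood first
    return [src for _, src in scored[:keep]]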

5.2 Automatic Evaluation

Evaluating dialogue systems is difficult. Metrics such as BLEU (Papineni et al., 2002) and perplexity have been widely used for dialogue quality evaluation (Li et al., 2015; Vinyals and Le, 2015; Sordoni et al., 2015), but it is widely debated how well these automatic metrics are correlated with true response quality (Liu et al., 2016; Galley et al., 2015). Since the goal of the proposed system is not to predict the highest-probability response, but rather the long-term success of the dialogue, we do not employ BLEU or perplexity for evaluation.2

2 We found that the RL model performs worse on BLEU score. On a random sample of 2,500 conversational pairs, single-reference BLEU scores for the RL model, the mutual information model, and the vanilla SEQ2SEQ model are respectively 1.28, 1.44, and 1.17. BLEU is highly correlated with perplexity in generation tasks. Since the RL model is trained based on future reward rather than MLE, it is not surprising that the RL-based models achieve a lower BLEU score.


Model                  # of simulated turns
SEQ2SEQ                2.68
mutual information     3.40
RL                     4.48

Table 2: The average number of simulated turns from the standard SEQ2SEQ model, the mutual information model, and the proposed RL model.

Length of the dialogue  The first metric we propose is the length of the simulated dialogue. We say a dialogue ends when one of the agents starts generating dull responses such as "i don't know"3 or two consecutive utterances from the same user are highly overlapping.4

The test set consists of 1,000 input messages. To reduce the risk of circular dialogues, we limit the number of simulated turns to be less than 8. Results are shown in Table 2. As can be seen, using mutual information leads to more sustained conversations between the two agents. The proposed RL model is first trained with the mutual information objective and thus benefits from it in addition to the RL objective. We observe that the RL model with dialogue simulation achieves the best evaluation score.
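A sketch of the termination rule behind this metric, combining the dull-response match and the word-overlap check from footnotes 3 and 4; the particular overlap normalization is an assumption, since the text only says the two utterances share more than 80 percent of their words.

def dialogue_ended(history, dull_responses, overlap_threshold=0.8):
    """End the simulated dialogue when the last utterance is a listed dull response, or
    when it overlaps heavily with the same agent's previous utterance (two turns back)."""
    last_tokens = history[-1].lower().split()
    if " ".join(last_tokens) in dull_responses:
        return True
    if len(history) >= 3 and last_tokens:
        prev_same_agent = set(history[-3].lower().split())
        overlap = len(set(last_tokens) & prev_same_agent) / len(set(last_tokens))
        return overlap > overlap_threshold
    return False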

Diversity  We report the degree of diversity by calculating the number of distinct unigrams and bigrams in the generated responses. The value is scaled by the total number of generated tokens to avoid favoring long sentences, as described in Li et al. (2015). The resulting metric is thus a type-token ratio for unigrams and bigrams.
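A sketch of this diversity metric (commonly called distinct-n); dividing by the total number of generated n-grams is one reading of "scaled by the total number of generated tokens".

def distinct_n(responses, n):
    """Type-token ratio for n-grams: unique n-grams across all generated responses
    divided by the total number of generated n-grams."""
    ngrams = []
    for response in responses:
        tokens = response.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / max(len(ngrams), 1)

# Usage sketch: distinct_n(generated, 1) and distinct_n(generated, 2) correspond to the
# unigram and bigram columns of Table 4.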

For both the standard SEQ2SEQ model and the proposed RL model, we use beam search with a beam size of 10 to generate a response to a given input message. For the mutual information model, we first generate n-best lists using pSEQ2SEQ(t|s) and then linearly re-rank them using pSEQ2SEQ(s|t). Results are presented in Table 4. We find that the proposed RL model generates more diverse outputs when compared against both the vanilla SEQ2SEQ model and the mutual information model.

3 We use a simple rule-matching method, with a list of 8 phrases that count as dull responses. Although this can lead to both false positives and false negatives, it works well in practice.

4 Two utterances are considered repetitive if they share more than 80 percent of their words.

Model                  Unigram    Bigram
SEQ2SEQ                0.0062     0.015
mutual information     0.011      0.031
RL                     0.017      0.041

Table 4: Diversity scores (type-token ratios) for the standard SEQ2SEQ model, the mutual information model, and the proposed RL model.

Human Evaluation  We explore three settings for human evaluation. The first setting is similar to the one described in Li et al. (2015): we employ crowdsourced judges to evaluate a random sample of 500 items. We present both an input message and the generated outputs to 3 judges and ask them to decide which of the two outputs is better (denoted as single-turn general quality). Ties are permitted. Identical strings are assigned the same score. We measure the improvement achieved by the RL model over the mutual information model by the mean difference in scores between the models.

For the second setting, judges are again presented with input messages and system outputs, but are asked to decide which of the two outputs is easier to respond to (denoted as single-turn ease to answer). Again we evaluate a random sample of 500 items, each being assigned to 3 judges.

For the third setting, judges are presented with simulated conversations between the two agents (denoted as multi-turn general quality). Each conversation consists of 5 turns. We evaluate 200 simulated conversations, each being assigned to 3 judges, who are asked to decide which of the simulated conversations is of higher quality.

Setting                         Gain    95% CI
single-turn general quality     0.02    [-0.02, 0.05]
single-turn ease to answer      0.12    [0.08, 0.16]
multi-turn general quality      0.17    [0.12, 0.22]

Table 5: RL gains over the mutual information system based on pairwise human judgments.

Results for human evaluation are shown in Table 5. The proposed RL system does not introduce a significant boost in single-turn response quality (95% CI [-0.02, 0.05]), which is in line with our expectation, as the RL model is not optimized to predict the next utterance, but rather to increase long-term reward. The RL system produces responses that are significantly easier to answer than the mutual information model, as demonstrated by the single-turn ease to answer setting, and also higher-quality multi-turn dialogues, as demonstrated by the multi-turn general quality setting.

Input Message                        Mutual Information Model                     RL
How old are you?                     I'm 16.                                      I'm 16, why are you asking?
What's your full name?               I have no idea.                              What's yours?
I don't want to go home tonight.     Really?                                      Why?
Do you have any feelings for me?     I don't know what you are talking about.    I don't want to hurt your feelings.
How much time do you have here?      Not long enough. Sorry, sir.                 Ten seconds.
Shall we get started?                Of course!                                   Yes. We've got a lot of work to do here.
Do you play football?                No, I don't.                                 Yes. I love football!
We'd have to talk to him.            I mean, he's a good guy.                     About what?
How come you never say it?           Because I don't want to hurt you.            I don't think it's a good idea to say it.

Table 3: Sampled responses generated by the mutual information model and the proposed RL model.

Qualitative Analysis and Discussion  We show a random sample of generated responses in Table 3 and simulated conversations in Table 1 at the beginning of the paper. From Table 3, we can see that the RL-based agent indeed generates more interactive responses than the other baselines. We also find that the RL model has a tendency to end a sentence with another question and hand the conversation over to the user. From Table 1, we observe that the RL model manages to produce more interactive and sustained conversations than the mutual information model.

During error analysis, we found that although we penalize repetitive utterances in consecutive turns, the dialogue sometimes enters a cycle with length greater than one, as shown in Table 6. This can be ascribed to the limited amount of conversational history we consider. Another issue is that the model sometimes starts a less relevant topic during the conversation. There is a tradeoff between relevance and repetitiveness, as manifested in the reward function we define in Eq. 4.

A: What's your name?
B: Daniel.
A: How old are you?
B: Twelve. What's your name?
A: Daniel.
B: How old are you?
A: Twelve. What's your name?
B: Daniel.
A: How old are you?
B: ...

Table 6: A simulated dialogue with a cycle longer than one.

The fundamental problem, of course, is that the manually defined reward function cannot possibly cover all the crucial aspects that define an ideal conversation. While the heuristic rewards that we defined are amenable to automatic calculation, and do capture some aspects of what makes a good conversation, ideally the system would instead receive real rewards from humans. Another problem with the current model is that we can only afford to explore a very small number of candidates and simulated turns, since the number of cases to consider grows exponentially.

6 Conclusion

We introduce a reinforcement learning framework for neural response generation that simulates dialogues between two agents, integrating the strengths of neural SEQ2SEQ systems and reinforcement learning for dialogue. Like earlier neural SEQ2SEQ models, our framework captures compositional models of the meaning of a dialogue turn and generates semantically appropriate responses. Like reinforcement learning dialogue systems, our framework is able to generate utterances that optimize future reward, successfully capturing global properties of a good conversation. Despite the fact that the model uses very simple, operationalizable heuristics for capturing these global properties, the framework generates more diverse, interactive responses that foster a more sustained conversation.


7 Acknowledgement

We would like to thank Chris Brockett, Bill Dolan, and other members of the NLP group at Microsoft Research for insightful comments and suggestions. We also want to thank Kelvin Guu, Percy Liang, Chris Manning, Sida Wang, Ziang Xie, and other members of the Stanford NLP group for useful discussions. Jiwei Li is supported by a Facebook Fellowship, which we gratefully acknowledge. This work is partially supported by NSF Award IIS-1514268. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF or Facebook.

References

Jens Allwood, Joakim Nivre, and Elisabeth Ahlsen. 1992. On the semantics and pragmatics of linguistic feedback. Journal of Semantics, 9:1–26.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.
Rafael E. Banchs and Haizhou Li. 2012. IRIS: a chat-oriented dialogue system based on the vector space model. In Proceedings of the ACL 2012 System Demonstrations, pages 37–42. Association for Computational Linguistics.
Yoshua Bengio, Jerome Louradour, Ronan Collobert, and Jason Weston. 2009. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 41–48. ACM.
S.R.K. Branavan, David Silver, and Regina Barzilay. 2011. Learning to win by reading manuals in a Monte-Carlo framework. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 268–277. Association for Computational Linguistics.
Michel Galley, Chris Brockett, Alessandro Sordoni, Yangfeng Ji, Michael Auli, Chris Quirk, Margaret Mitchell, Jianfeng Gao, and Bill Dolan. 2015. deltaBLEU: A discriminative metric for generation tasks with intrinsically diverse targets. arXiv preprint arXiv:1506.06863.
Milica Gasic, Catherine Breslin, Mike Henderson, Dongkyu Kim, Martin Szummer, Blaise Thomson, Pirros Tsiakoulis, and Steve Young. 2013. On-line policy optimisation of Bayesian spoken dialogue systems via human interaction. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8367–8371. IEEE.
Peter W. Glynn. 1990. Likelihood ratio gradient estimation for stochastic systems. Communications of the ACM, 33(10):75–84.
Ji He, Jianshu Chen, Xiaodong He, Jianfeng Gao, Lihong Li, Li Deng, and Mari Ostendorf. 2015. Deep reinforcement learning with an action space defined by natural language. arXiv preprint arXiv:1511.04636.
Esther Levin, Roberto Pieraccini, and Wieland Eckert. 1997. Learning dialogue strategies within the Markov decision process framework. In Automatic Speech Recognition and Understanding, 1997 IEEE Workshop on, pages 72–79. IEEE.
Esther Levin, Roberto Pieraccini, and Wieland Eckert. 2000. A stochastic model of human-machine interaction for learning dialog strategies. IEEE Transactions on Speech and Audio Processing, 8(1):11–23.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2015. A diversity-promoting objective function for neural conversation models. arXiv preprint arXiv:1510.03055.
Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155.
Chia-Wei Liu, Ryan Lowe, Iulian V. Serban, Michael Noseworthy, Laurent Charlin, and Joelle Pineau. 2016. How not to evaluate your dialogue system: An empirical study of unsupervised evaluation metrics for dialogue response generation. arXiv preprint arXiv:1603.08023.
Yi Luan, Yangfeng Ji, and Mari Ostendorf. 2016. LSTM based conversation models. arXiv preprint arXiv:1603.09457.
Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Alex Graves, Ioannis Antonoglou, Daan Wierstra, and Martin Riedmiller. 2013. Playing Atari with deep reinforcement learning. NIPS Deep Learning Workshop.
Karthik Narasimhan, Tejas Kulkarni, and Regina Barzilay. 2015. Language understanding for text-based games using deep reinforcement learning. arXiv preprint arXiv:1506.08941.
Lasguido Nio, Sakriani Sakti, Graham Neubig, Tomoki Toda, Mirna Adriani, and Satoshi Nakamura. 2014. Developing non-goal dialog system based on examples of drama television. In Natural Interaction with Robots, Knowbots and Smartphones, pages 355–361. Springer.
Alice H. Oh and Alexander I. Rudnicky. 2000. Stochastic language generation for spoken dialogue systems. In Proceedings of the 2000 ANLP/NAACL Workshop on Conversational Systems - Volume 3, pages 27–32. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Association for Computational Linguistics.
Roberto Pieraccini, David Suendermann, Krishna Dayanidhi, and Jackson Liscombe. 2009. Are we there yet? Research in commercial spoken dialog systems. In Text, Speech and Dialogue, pages 3–13. Springer.
Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level training with recurrent neural networks. arXiv preprint arXiv:1511.06732.
Adwait Ratnaparkhi. 2002. Trainable approaches to surface natural language generation and their application to conversational dialog systems. Computer Speech & Language, 16(3):435–455.
Alan Ritter, Colin Cherry, and William B. Dolan. 2011. Data-driven response generation in social media. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 583–593. Association for Computational Linguistics.
Jost Schatzmann, Karl Weilhammer, Matt Stuttle, and Steve Young. 2006. A survey of statistical user simulation techniques for reinforcement-learning of dialogue management strategies. The Knowledge Engineering Review, 21(02):97–126.
Emanuel A. Schegloff and Harvey Sacks. 1973. Opening up closings. Semiotica, 8(4):289–327.
Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015a. Building end-to-end dialogue systems using generative hierarchical neural network models. arXiv preprint arXiv:1507.04808.
Iulian V. Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2015b. Hierarchical neural network generative models for movie dialogues. arXiv preprint arXiv:1507.04808.
Iulian Vlad Serban, Ryan Lowe, Laurent Charlin, and Joelle Pineau. 2015c. A survey of available corpora for building data-driven dialogue systems. arXiv preprint arXiv:1512.05742.
Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. arXiv preprint arXiv:1503.02364.
Satinder P. Singh, Michael J. Kearns, Diane J. Litman, and Marilyn A. Walker. 1999. Reinforcement learning for spoken dialogue systems. In NIPS, pages 956–962.
Satinder Singh, Michael Kearns, Diane J. Litman, Marilyn A. Walker, et al. 2000. Empirical evaluation of a reinforcement learning spoken dialogue system. In AAAI/IAAI, pages 645–651.
Satinder Singh, Diane Litman, Michael Kearns, and Marilyn Walker. 2002. Optimizing dialogue management with reinforcement learning: Experiments with the NJFun system. Journal of Artificial Intelligence Research, pages 105–133.
Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714.
Pei-Hao Su, Milica Gasic, Nikola Mrksic, Lina Rojas-Barahona, Stefan Ultes, David Vandyke, Tsung-Hsien Wen, and Steve Young. 2016. Continuously learning neural dialogue management. arXiv preprint.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems, pages 3104–3112.
Richard S. Sutton, David A. McAllester, Satinder P. Singh, Yishay Mansour, et al. 1999. Policy gradient methods for reinforcement learning with function approximation. In NIPS, volume 99, pages 1057–1063.
Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.
Marilyn A. Walker, Rashmi Prasad, and Amanda Stent. 2003. A trainable generator for recommendations in multimodal dialog. In INTERSPEECH.
Marilyn A. Walker. 2000. An application of reinforcement learning to dialogue strategy selection in a spoken dialogue system for email. Journal of Artificial Intelligence Research, pages 387–416.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Pei-Hao Su, David Vandyke, and Steve Young. 2015. Semantically conditioned LSTM-based natural language generation for spoken dialogue systems. arXiv preprint arXiv:1508.01745.
Tsung-Hsien Wen, Milica Gasic, Nikola Mrksic, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, David Vandyke, and Steve Young. 2016. A network-based end-to-end trainable task-oriented dialogue system. arXiv preprint arXiv:1604.04562.
Ronald J. Williams. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8(3-4):229–256.
Zhen Xu, Bingquan Liu, Baoxun Wang, Chengjie Sun, and Xiaolong Wang. 2016. Incorporating loose-structured knowledge into LSTM with recall gate for conversation modeling. arXiv preprint arXiv:1605.05110.
Kaisheng Yao, Geoffrey Zweig, and Baolin Peng. 2015. Attention with intention for a neural network conversation model. arXiv preprint arXiv:1510.08565.
Steve Young, Milica Gasic, Simon Keizer, Francois Mairesse, Jost Schatzmann, Blaise Thomson, and Kai Yu. 2010. The hidden information state model: A practical framework for POMDP-based spoken dialogue management. Computer Speech & Language, 24(2):150–174.
Steve Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179.
Wojciech Zaremba and Ilya Sutskever. 2015. Reinforcement learning neural Turing machines. arXiv preprint arXiv:1505.00521.