Chatbot 8

About the project

This is the chatbot I have been developing for a while now - used in my Raspbinator and Nvidianator projects.

Project info

Difficulty: Moderate

Platforms: Raspberry Pi

Estimated time: 1 hour

License: GNU General Public License, version 3 or later (GPL3+)

Items used in this project

Hardware components

Raspberry Pi 3 Model B x 1

Software apps and online services

wit.ai

Story

Time to talk.

I thought I should give this its own specific post – this is the chatbot that I’ve used in my Raspbinator and Nvidianator projects. The GitHub repository linked below will be updated over time as I make improvements to it.

I’ve decided on the name Chatbot 8 – before I used GitHub I kept it on my Google Drive and increased the number with each iteration; the first one I was happy to use in the Raspbinator was iteration 8, and the name has kind of stuck.

Key Goals:

  • To make a bot that can respond to human input, learn, and return more organic responses over time.
  • To allow it to be trained from large text files such as movie scripts and conversation transcripts.
  • To make it easy to integrate into other projects.

The workings.

So here’s the code on my GitHub (linked in the Code section below).

I’ve made these chatbots to work with Raspberry Pi projects – thus everything here is based on Pis and the Raspbian OS.

There are a few dependencies: MongoDB for storage and the fuzzywuzzy library (linked below) for string matching.

Everything else should be included with the Python packages on Raspbian.

At a high level the logic of the system is as follows:

  • Bot says an initial “Hello”.
  • Human responds.
  • Bot stores the response to “Hello”, searches its database for anything it has said before that closely matches the human’s input, then brings up a result from a prior interaction.

By storing human responses in the bot’s Mongo database, tying them to things the bot has previously said, and then comparing each new input to those stored items to find appropriate responses, you can get some reasonably decent replies from the bot.

As an example: if the bot says “whats the weather like” and I type in “its raining outside”, it will store that response and tie it to that prompt. Now if someone else comes along and types “hows the weather”, it will search its database for close matches and find the previous prompt “whats the weather like”, at which point it will look up responses to that and find my reply “its raining outside”. So while it’s not really ‘thinking’ about its responses, it does end up coming back with some reasonable replies.
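To make that concrete, here’s a minimal sketch of the store-and-lookup idea using pymongo; the collection and field names are made up for illustration and aren’t Chatbot 8’s actual schema.

from pymongo import MongoClient  # assumes a MongoDB server is running locally

# Illustrative schema only, not the bot's real one.
db = MongoClient()["chatbot_sketch"]
responses = db["responses"]

def remember(bot_said, human_replied):
    """Tie the human's reply to whatever the bot had just said."""
    responses.insert_one({"prompt": bot_said, "reply": human_replied})

def recall(prompt):
    """Fetch replies previously given to this prompt (the real bot uses
    fuzzy matching to pick the closest stored prompt, not exact equality)."""
    return [doc["reply"] for doc in responses.find({"prompt": prompt})]

remember("whats the weather like", "its raining outside")
print(recall("whats the weather like"))  # ['its raining outside']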

It will first search for known inputs with a reasonably high accuracy, then if that fails it will drop to a medium accuracy and finally a low accuracy. I am currently using the fuzzywuzzy library (linked below) for comparing strings:

The levels of accuracy are:

  • fuzz.ratio(Str1.lower(),Str2.lower())
  • fuzz.partial_ratio(Str1.lower(),Str2.lower())
  • fuzz.token_set_ratio(Str1,Str2)

You can see how these functions work on the fuzzywuzzy site linked below – but generally the accuracy required degrades as it works through the functions. The threshold for each can also be adjusted.

So if the top ratio returned for the input string against the stored strings is below the threshold, it drops to the second function for a partial match and does the same; finally, if that fails, it moves on to the token set ratio match. That final one is good for strings of different sizes that share matching words.
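As a rough sketch of that tiered fallback (assuming fuzzywuzzy is installed; the thresholds and function below are illustrative, not the bot’s exact values):

from fuzzywuzzy import fuzz

# Illustrative thresholds only - the real bot's values may differ.
THRESHOLDS = [
    (fuzz.ratio, 80),           # high accuracy: near-exact match
    (fuzz.partial_ratio, 70),   # medium: match on a substring
    (fuzz.token_set_ratio, 60), # low: shared words, any length
]

def best_match(user_input, known_phrases):
    """Return the closest stored phrase, trying progressively looser matchers."""
    if not known_phrases:
        return None
    for matcher, threshold in THRESHOLDS:
        score, phrase = max(
            (matcher(user_input.lower(), p.lower()), p) for p in known_phrases
        )
        if score >= threshold:
            return phrase
    return None  # nothing matched at any accuracy level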

Now what happens if there are no matches at all? Before the bot responds it stores the input it has received into the database, and it also splits up every input and stores all the individual words. So when it can’t find a previous reply to your input, it has a 40% chance of generating a random sentence from these words and a 60% chance of picking a totally random complete sentence it already knows.
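A quick sketch of that guessing behaviour (the 40/60 split follows the description above; the function and variable names are just for illustration):

import random

def fallback_reply(known_words, known_sentences):
    """Used only when no stored reply matches the input."""
    # 40% of the time: build a random sentence from individually stored words.
    if known_words and random.random() < 0.4:
        length = random.randint(3, 8)  # arbitrary length, just for the sketch
        return " ".join(random.choice(known_words) for _ in range(length))
    # Otherwise: reuse a complete sentence the bot has already seen.
    return random.choice(known_sentences)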

Now you may be thinking that this causes the bot to talk a lot of nonsense – you’d be right. At first it will mostly just repeat what you say to it, but the more you talk to it and reply, the more it learns, and the random sentences it generates can sometimes actually make some degree of sense; when you reply to one of those, it then has a reference point for the next time it receives an input similar to what it just said.

Here’s another example:

If I say to the bot “I like cheese” and it has nothing in its database for this input but enough words to generate a random sentence, it could, as essentially a guess, come back with: “Hello television like usually”. Which of course doesn’t make sense, but if I then respond with “Yes I like television too” it stores that reply. Now say someone else comes along and types in “I usually watch television” – it will run that through the database, find it’s similar to what it said before (“Hello television like usually”) and find my response (“Yes I like television too”), giving the illusion of a real response.

It’s essentially learning from nothing: it will try its best to use prior experience and, as a last resort, guess – until it has learned enough that it doesn’t need to guess any more.

It’s also capable of maintaining conversations with multiple people – creating a new class instance for each person it talks to and recording their last responses along with the bot’s. So when you switch people using the ‘change_name’ command, or by passing a different name into the conversation function from an external program, it can carry on the conversation with a person it has already talked to in that session.
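Conceptually, that per-person tracking looks something like this (a hypothetical sketch, not Chatbot 8’s actual class):

class Conversation:
    """Holds per-person state: the last thing the bot said to them and
    the last thing they said back."""
    def __init__(self, name):
        self.name = name
        self.last_bot_line = None
        self.last_human_line = None

active_conversations = {}  # one entry per person in the current session

def get_conversation(humanid):
    """Reuse an existing conversation for a known name, or start a new one."""
    if humanid not in active_conversations:
        active_conversations[humanid] = Conversation(humanid)
    return active_conversations[humanid]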

Modular.

Recently I’ve added the ability for the chatbot to be imported into other programs, so they can send text to the bot and receive its output – with an easy interface that only requires an input string and a name. The bot itself then handles previous replies, its own responses and the conversation switching, and returns a response.

It can also be run independently for testing, making it easy to train and test even when it’s used as a component of another project – such as the integration with the STT/TTS and ML parts of The Nvidianator.

To use it, simply put the bot’s .py file in the same folder as the program it will be used with and use:

import bot_8 as chatbot

Once imported into the Python program, it can be interacted with using the command:

reply = chatbot.conversation(inputWords, humanid)

The input words are, of course, whatever the input is – this can come from a speech-to-text service such as wit.ai or from some other text input.

The humanid is the name of the person currently interacting with it.

With both of these inputs passed to the function it will return the reply as a string – which can then be used for further processing in the program that has imported the chatbot.
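For example, a host program might drive it like this (the names here are just placeholders):

import bot_8 as chatbot

# Two people talking to the same bot in one session; the bot keeps
# each conversation's context separate by name.
print(chatbot.conversation("Hello there", "alice"))
print(chatbot.conversation("How is the weather today?", "bob"))
print(chatbot.conversation("What do you like to watch?", "alice"))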

Training.

I’ve also added a training module – this is handy for loading in large text files such as film scripts or conversation transcripts, so the bot can be trained on existing data. I have tried it with the script for Metal Gear Solid and it works pretty well.

It works by scanning the file line by line and feeding the data into the bot – as each new line goes in, the bot code assigns it as a response to the prior line.

It has been programmed to filter out blank lines and lines that begin with non-alpha characters, and to split lines containing “:” – so in a script where names denote who is talking and various notes are scattered throughout, those lines should be skipped and the names removed. This is a bit messy at the moment and could do with some work, but it basically means you can chuck in a script and it (should) get through it neatly, picking out only the relevant speech text.
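The filtering amounts to something like this (a sketch of the idea; the real module’s rules may differ slightly):

def lines_to_train(path="learning.txt"):
    """Yield cleaned lines of dialogue from a script, in order.
    Each yielded line is then stored as a response to the one before it."""
    with open(path, encoding="utf-8") as script:
        for raw in script:
            line = raw.strip()
            if not line or not line[0].isalpha():
                continue  # skip blanks and lines starting with non-alpha characters
            if ":" in line:
                # e.g. "MERYL: Careful, I'm no rookie!!" -> keep only the speech
                line = line.split(":", 1)[1].strip()
            if line:
                yield line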

With the bot trained on a script, you can type in inputs from the game/movie/conversation and it will pretty reliably return the correct responses – with the Metal Gear Solid script above you can get all sorts of cool quotes out of it by typing in things the characters say.

For example, with the above MGS training data:

  • If I type “Are you a rookie?”
  • The bot responds with a quote from Meryl: “Careful, I’m no rookie!!”

And so on.

The training module can be called with the switch “-fresh”, which will erase the database and train from scratch; without this switch it will add to the existing database.

The training data needs to be in the same folder as the bot and be called "learning.txt".

There is also a deleteDB module that, when run, does what it says and clears the database.

In theory, if it were trained on a huge amount of normal human conversation it would come back with a great many organic responses; also, depending on what training input it receives, each bot could develop its own distinct personality.

Ongoing.

I’m going to keep improving this over time, so keep checking back on my GitHub for updates.

I am also working on packaging the chatbot up with Docker – so that it can be easily deployed with all of its dependencies, and so that separate persistent bots can run on the same machine without MongoDB even having to be installed on the OS itself.

Do you have any ideas to improve the bot? Let me know.

Also feel free to download it and try yourself – just be aware that starting from blank it will take a lot of training data/talking to it before it starts to make any sense.

See you all in the next project.

Code

GitHub – Chatbot 8

https://github.com/LordofBone/Chatbot_8

GitHub – fuzzywuzzy

https://github.com/seatgeek/fuzzywuzzy

Credits


314Reactor

Technology loving nerd with a passion for trying to bring SciFi to life.