Bot Engine

About the project

A customisable & expandable interface that combines my Chatbot with ML-based sentiment analysis and visual/audio output.

Project info

Difficulty: Moderate

Platforms: Raspberry Pi, TensorFlow

Estimated time: 2 hours

License: GNU General Public License, version 3 or later (GPL3+)

Items used in this project

Hardware components

Raspberry Pi 4 Model B x 1

Software apps and online services

TensorFlow
nltk
docker

Story

Hello

Hello

Hello

What is the Bot Engine?

So you may have seen my recent Chatbot update - I wanted to take this as a component and integrate it into something bigger, so I started work on something inspired by CABAL, the AI from Command & Conquer: Tiberian Sun.

I wanted to create a framework that could incorporate not only the chatbot's functions but also give it a face, a voice and the ability to analyse inputs using ML.

The basic idea is that someone could clone the code, get it set up, then customise it or build on top of it to make their own robot/AI interface.

So feel free to take the code/framework and make your own bots with it.

How is it used?

All taken from the GitHub page:

Essential setup + Windows/Linux x86-64

Ensure you have Python 3.9 installed (this is what I have been using for it so far).

Install Docker:

Windows - https://docs.docker.com/desktop/windows/install/
Linux - use the script under Chatbot_8/build:

install_docker_linux.sh

Then run:

sudo usermod -aG docker "$USER"

Then reboot.

Ensure you have git installed and get Bot Engine and the Chatbot with:

git clone --recursive https://github.com/LordofBone/Bot_Engine

And if there are any updates to the chatbot in future you should be able to grab the latest code with:

git submodule update --remote --recursive

On Linux/RPi you will also need to ensure an additional library is installed for psycopg2:

sudo apt-get install libpq-dev

Although if you build the Chatbot Docker container with the script under Chatbot_8/build/install_docker_linux.sh, this will install it for you.

Then set up a Python venv (https://docs.python-guide.org/dev/virtualenvs/) and install the requirements:

pip install -r requirements.txt
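
If you haven't used venvs before, creating and activating one is just:

python3 -m venv venv
source venv/bin/activate

(On Windows the activate script is venv\Scripts\activate instead.) Run the pip install above from inside the activated venv.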

Raspberry Pi OS 64-bit ARM specific setup

When it comes to installing this on a Raspberry Pi, at the time of writing you will need to do all of the above, but on the 64-bit version of Raspberry Pi OS (Bullseye) - https://downloads.raspberrypi.org/raspios_arm64/images/ - which you can install to an SD card with the imager: https://www.raspberrypi.com/software/

The requirements install above should still pull in everything else needed, but will stop at tensorflow and tensorflow-gpu.

This is where, at the moment, some manual intervention is required:

Go here and download the wheel for 2.7.0 Python 3 64 Bit ARM: https://github.com/Qengineering/TensorFlow-Raspberry-Pi_64-bit

Then while still in the venv made above:

export PIP_EXTRA_INDEX_URL=https://snapshots.linaro.org/ldcg/python-cache/
pip3 install tensorflow-2.7.0-cp39-cp39-linux_aarch64.whl

The extra index URL (https://snapshots.linaro.org/ldcg/python-cache/) is there to pull in tensorflow-io, which is required as per the issue here: https://github.com/tensorflow/io/issues/1441

Hopefully, in time, the proper tensorflow wheel will just be added to the Bullseye repo. I did try compiling TensorFlow for the 32-bit RPi OS, but it was a nightmare and I don't think it was ever going to work - the prior OS (Buster) also did not have the required PostgreSQL-dev-13 package in its repo.

eSpeak setup

The Bot Engine is designed to use eSpeak as its native TTS, but there is no reason why this cannot be changed to use something else.

Windows

Download the .msi installer from: https://github.com/espeak-ng/espeak-ng/releases

Add the installation location to your PATH, e.g.:

C:\Program Files\eSpeak NG

Linux (x86-64/ARM)

Should be as simple as:

sudo apt-get install espeak-ng python3-gst-1.0 espeak-ng-data libespeak-ng-dev

Then set the voice:

espeak-ng -v gmw/en

and testing with:

espeak-ng "Hello, how are you?" 2>/dev/null

Check the User Guide for more information on both of the above: https://github.com/espeak-ng/espeak-ng/blob/master/docs/guide.md
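
And if you want to drive eSpeak NG from Python rather than the shell, a minimal sketch (assuming the espeak-ng binary is on your PATH; this is not the Bot Engine's own voice controller code) is just a subprocess call:

import subprocess

def speak(text, voice="gmw/en"):
    # Shell out to the espeak-ng binary; check=True raises if the call fails
    subprocess.run(["espeak-ng", "-v", voice, text], check=True)

speak("Hello, how are you?")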

Setting up the Chatbot and Sentiment models

Windows/Linux x86-64

Go to the Chatbot_8/build directory again.

Windows - should be able to run:

build_postgresql_container.cmd

Input any name for the container, and enter 'chatbot' as the password and '5432' as the port if the script asks for them.

Linux - should be able to run:

build_postgresql_container_linux.sh <container_name> <postgres password (chatbot)> <port (5432)>

Raspberry Pi OS 64-bit ARM/Linux General

First ensure you have a PostgreSQL server set up as per the instructions of the Chatbot - https://github.com/LordofBone/Chatbot_8#docker-postgresql-installation. For Linux on a Raspberry Pi, because the 64-bit OS is required for TensorFlow, you will need to use the:

build_postgresql_container_linux.sh <container_name> <postgres password (chatbot)> <port (5432)>

script, rather than the '_pi' one, as the 64-bit OS has a different repo and doesn't require the workaround present there.

Also install Portainer using:

portainer_build_linux.sh

Which will allow you to go to localhost:9000 to administer Docker containers easily within Linux.

Running the training

Ensure the Docker container from above is running.

Drop a training file into the Chatbot data path (the same instructions for the training file for the chatbot apply here - https://github.com/LordofBone/Chatbot_8#training-the-bot):

Chatbot_8/data/training/

Run:

python train_ai.py

After that it will train the bot from the training data supplied, and also train a markovify model for the bot and put it under Chatbot_8/models/markovify/bot_1.
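
As an aside, if you want to poke at that markovify model directly afterwards, a minimal sketch (assuming the model was saved with markovify's standard JSON export - the actual file name under that folder may differ) would be:

import markovify

# Load a markovify model previously exported with model.to_json()
with open("Chatbot_8/models/markovify/bot_1/model.json") as f:
    model = markovify.Text.from_json(f.read())

# Generate a sentence (returns None if one can't be built within the overlap limits)
print(model.make_sentence())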

It will also run the sentiment training, which will download a set of various NLTK + TensorFlow datasets and models and then train on them.

The bot should now be installed, set up and ready to go.

Running the Bot Engine

Should be as simple as:

python launch_ai.py

How does it work?

The default GUI uses tkinter to load the gifs under 'images/' as well as take in text input from the user; it accesses other modules such as the emotion engine and voice controller to display different animations. There is some threading used here to get tkinter to do multiple things at once - run gifs, check the current state of the bot, and let users type while it is animating a gif and running the rest of the code under the hood. It could be considered a bit hacky and I'm not yet sure of the full ramifications of threading it out this way, but nothing has gone wrong yet.
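
For anyone wanting to replicate the pattern, here is a minimal sketch of the general approach (not the Bot Engine's actual GUI code): a worker thread feeds results to tkinter through a queue, and the main loop polls it with after(), since tkinter widgets must only be touched from the main thread:

import queue
import threading
import tkinter as tk

results = queue.Queue()

def worker():
    # Stand-in for the chatbot call; runs off the main thread so the GUI stays responsive
    results.put("reply from the bot")

root = tk.Tk()
label = tk.Label(root, text="thinking...")
label.pack()

def poll():
    # Poll the queue from the main thread and update the widget when a result arrives
    try:
        label.config(text=results.get_nowait())
    except queue.Empty:
        pass
    root.after(100, poll)

threading.Thread(target=worker, daemon=True).start()
poll()
root.mainloop()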

Also, as mentioned later, in the GUI config this can be switched, modified or turned off to run in a terminal only.

The voice controller uses espeak-ng by default, but this of course can be replaced with any TTS you wish. It can show when the bot is talking and play custom .wav files from 'audio/' as well as convert any text to speech.

The chatbot functions controller handles all interfacing with the chatbot module, allowing text to be put in and replies grabbed out. It also handles the PostgreSQL database dropping, attempting reconnects to the chatbot module so that when/if the DB comes back online the chatbot will work again without crashing the entire system.
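
A rough sketch of that kind of guard (hypothetical names - the real controller wraps the chatbot module rather than psycopg2 directly) might look like:

import time
import psycopg2

def bot_talk_safe(chatbot, text, retries=3):
    # Try the chatbot call; if the PostgreSQL connection has dropped,
    # back off and retry instead of letting the whole engine crash
    for attempt in range(retries):
        try:
            return chatbot.get_reply(text)  # hypothetical chatbot call
        except psycopg2.OperationalError:
            time.sleep(2 ** attempt)
    return "Database offline - please try again later."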

There is also an 'admin mode': if the words "admin mode access" are input into the core_systems input loop, special commands allow for re-training of the bot DB, muting the audio, etc.

The emotion controller uses NLTK Vader, NLTK Twitter analysis or TensorFlow analysis to ascertain the sentiment of an input sentence (a value between -1 and 1), which it feeds into a running average. This average determines the current status of the bot, which can then be used by other modules. The thresholds for positive/negative bot attitude are set under 'config/emotion_config.py', as is which engine is used.

It also loads a number of replies from the chatbot module and then chooses the reply whose sentiment value most closely matches its own averaged value. So if the current sentiment score is '-0.225' and, out of 10 sentences, one has a sentiment score of '0.222' with nothing else closer - it will pick that sentence.
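
As an illustration of that selection logic, here is a minimal sketch using NLTK's Vader compound score (one of the engines mentioned above) - the function names and window size are mine, not the Bot Engine's:

from collections import deque
from nltk.sentiment import SentimentIntensityAnalyzer  # needs nltk.download('vader_lexicon')

analyser = SentimentIntensityAnalyzer()
history = deque(maxlen=10)  # rolling window of recent sentiment scores

def update_mood(user_input):
    # Vader's 'compound' score is already in the -1 to 1 range
    history.append(analyser.polarity_scores(user_input)["compound"])
    return sum(history) / len(history)

def pick_reply(candidates, mood):
    # Pick the candidate reply whose sentiment sits closest to the current averaged mood
    return min(candidates, key=lambda s: abs(analyser.polarity_scores(s)["compound"] - mood))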

Essentially its replies should be affected by its current 'emotional' state, which is in turn determined by the inputs the bot receives.

So here is the default 'passive' look, which is shown when the average sentiment from inputs sits between the bottom and top thresholds:

Here's the default 'angry' look:

And finally, here is the 'happy' look:

Again, the above are all replaceable/customisable.

The training files for these are all under the 'ml/' folder; they will download the datasets, run the training and save the models, either by running:

python utils/sentiment_training_suite.py

Or by running the training suite that will do everything from sentiment analysis to training the chatbot:

python train_ai.py

I've found some interesting results can come from switching the Chatbot DB off and letting it generate sentences purely from the markovify model in conjunction with the emotion/sentiment analysis system. It can be rather spooky when it gives an oddly specific reply to something you've asked when it doesn't have any known responses from the database to choose from and is just generating sentences from the MK model.

How is it customisable?

Under the config folder there are a number of files:

emotion_config.py

You can configure which sentiment analysis engine is used here, as well as the number of previous sentiments to keep in memory to average over.

The thresholds for what counts as a positive and a negative mood are also set here, as well as the initial mood on startup.
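
To give a flavour of the kind of settings involved (illustrative names and values only - check the actual file for the real options):

# Hypothetical example values - not the real contents of emotion_config.py
sentiment_engine = "vader"     # or e.g. "twitter" / "tensorflow"
sentiment_memory = 10          # how many previous sentiment scores to average over
positive_threshold = 0.25      # averaged mood above this counts as 'happy'
negative_threshold = -0.25     # averaged mood below this counts as 'angry'
initial_mood = 0.0             # mood on startup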

gui_config.py

Config for window name, colour and whether the GUI is activated.

nltk_config.py

Contains locations for all the NLTK models and datasets.

tensorflow_config.py

Contains locations for all the Tensorflow models and datasets as well as training configuration such as epochs etc.

voice_config.py

Here audio can be switched on or off, and the bot's pitch and cadence can be adjusted.

Modifying the existing GUI

The bot can be modified by changing the configuration of the TTS or by replacing the TTS engine entirely.

The gifs under images/ can also all be replaced with anything you like - it should work with any animations of any length.

The online/training audio samples can also be changed under audio/, although at the moment only 'online.wav' is used, on launch of the bot.

Adding/changing an output interface

By default, the tkinter GUI is used to display an output of a face with animations, but it can be set to terminal mode by setting 'interface_mode' to 'TERM' rather than 'GUI' in config/machine_interface_config.py. This setting is then used under 'functions/core_systems.py':

def boot(self):
    if interface_mode == "GUI":
        gui_control = GUIController(self)
        gui_control.begin()
    elif interface_mode == "TERM":
        VoiceControllerAccess.play_online()
        term_control = TerminalController(self)
        term_control.talk_loop()
    elif interface_mode == "ROBOT":
        VoiceControllerAccess.play_online()
        term_control = TerminalController(self)
        term_control.talk_loop()

So it can be expanded with your own frontend: by passing the CoreSystem class into another class in a new module, the chatbot's functions can be accessed this way:

self.core_access.bot_talk_io("hello")

intro_words = self.core_access.bot_reply
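
For example, a bare-bones custom frontend along those lines might look like this (a sketch only - bot_talk_io and bot_reply are from the snippets above; the rest of the names are mine):

class MyFrontend:
    def __init__(self, core_access):
        # Takes the CoreSystem instance, the same way the built-in controllers do
        self.core_access = core_access

    def talk_loop(self):
        while True:
            self.core_access.bot_talk_io(input("> "))
            print(self.core_access.bot_reply)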

You can also import the EmotionEngineInterface class from functions.emotion_controller and VoiceControllerAccess from functions.voice_controller, and use them to access the emotional state of the bot and the voice controls. For example, 'VoiceControllerAccess.talking' can be used in a loop to keep an animation or robotics movement going while the audio is playing.

The actual TTS is handled in the emotion_controller.py module:

VoiceControllerAccess.tts(reply)

And this can be turned off with the audio config - config/voice_config.py

This is so the bot can still output audio when in other interface modes.

The emotion controller can be used to ascertain the current emotion of the bot, so it can be coded around to make faces with animations, robotics or anything else:

EmotionEngineInterface.get_emotion()
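
Putting those two interfaces together, a rough sketch of an animation/robotics hook might look like this (the gif names, the frame function and the return values of get_emotion() are assumptions on my part - check the modules for the real values):

import time
from functions.emotion_controller import EmotionEngineInterface
from functions.voice_controller import VoiceControllerAccess

def advance_frame(gif_name):
    # Stub - swap in your own animation or servo update here
    print("frame:", gif_name)

def animate_reply():
    # Assumes get_emotion() returns a simple state label
    gif = {"happy": "happy.gif", "angry": "angry.gif"}.get(
        EmotionEngineInterface.get_emotion(), "passive.gif")
    # Keep the mouth/servo animation going for as long as the TTS is still speaking
    while VoiceControllerAccess.talking:
        advance_frame(gif)
        time.sleep(0.05)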

With the above, the config, and training the chatbot on custom data, you can integrate the Bot Engine into any robotics system - or basically anything else.

What are some examples of it in action?

I have made a quick high-level demo here on my YouTube channel:


Depending on the dataset it can be a bit flaky with its responses, but overall it serves its purpose well enough there. I have found that it can easily be over-trained or under-trained, and you have to be careful what movie scripts/corpuses/conversations it has been trained on, otherwise it can come up with some really odd responses.

The video shows all 3 states of the bot, and the sometimes unusual ways it can interpret something you've said to it as positive or negative. This will change depending on whether you are using NLTK Vader, NLTK Twitter analysis or TensorFlow analysis; given some of the datasets from Twitter, how they have been marked as good or bad, and the different word weightings, it's understandable how it can misinterpret things.

The GIF animation stuff I am happy with; as mentioned above, that took a lot of work with threading to get it to do multiple things at once, as tkinter is generally suited to doing only one thing at a time.

The voice, as you can hear, is kind of spooky and robotic, but in future I'll look into more natural-sounding TTS systems as they come out and hopefully get one configured to sound more like CABAL from Tiberian Sun. Of course, this is just the default one included and it is meant to be customised.

What are the future plans?

The future plans are to expand the emotion engine to actually analyse the emotional intent of inputs and choose suitable outputs, rather than just analysing positive/negative statements.

On top of that, the Chatbot will be modified to hopefully produce better, more accurate and more meaningful responses to feed to the Bot Engine.

I also want to look into adding a proper 3D face with animations that depend on its mood, plus the ability to move its mouth along with the words it is speaking; this would be included as a module within the Bot Engine that could be used, modified or replaced entirely.

I also have plans to build in a speech-to-text system so that it can be talked to, as well as camera tech so it can see things. The sky's the limit really - I definitely intend to use it in the next iteration of my Terminator Skull.

I want to make the code a bit cleaner and easier to import into other systems. For instance, rather than passing the core_systems class into another class in order for that to access the chatbot functions, I want to make the core_systems module importable - so the entire engine can just be cloned into another project and cleanly integrated via imports.

Other than that, I will see what feedback I get on the project and where I can go from there. It has been an excellent learning exercise either way and has given me some tools and experience that will be useful in future.

You will also notice there is a 'deploy' folder in the code - this is where I intend to add in some scripts to automate the installation of the Bot Engine and make it less of a tricky process.

There is also a 'thoughts_processor' module under the functions folder that is currently in progress - this will be used in future to give the bot the ability to think on its own and formulate new thoughts based off of what it has been trained on, and possibly its mood. This will hopefully help build the bot up into a more interesting and interactive agent that seems a bit more 'alive'.

So feel free to grab the code, set it up, let me know if I've missed anything from the installation procedures (especially on RPi/Linux) and do something with it!

Goodbye

Goodbye

Goodbye

Code

Bot Engine code

Code for the Bot Engine; setup instructions for Windows x86-64 and Linux x86-64/Pi ARM 64 are in the readme.

Credits


314Reactor

Technology loving nerd with a passion for trying to bring SciFi to life.
