
Exailerate

About the project

Have you ever been with a group of friends or family and felt that the room has low energy? We all know how hard it can be to lift the mood of a single person, let alone a group. We propose a solution, ExAilerate, built with the Google AIY Vision Kit! ExAilerate learns the image preferences of each friend or family member, then plays back a slideshow tailored to the group to lift the mood.

Project info

Difficulty: Easy

Platforms: Raspberry Pi, AIY

Estimated time: 1 hour

License: Apache License 2.0 (Apache-2.0)

Items used in this project

Hardware components

Google AIY Vision Full Kit (includes Pi Zero WH, v1.1) x 1
Micro HDMI to HDMI adapter x 1
Ethernet hub and USB hub w/ Micro USB OTG connector x 1
MicroSD card with adapter, 16 GB (Class 10) x 1

Story

Building the Kit:

To build the kit, we followed the official AIY kit assembly guide found here: https://aiyprojects.withgoogle.com/vision#assembly-guide

We’ve taken pictures of the process which are included below.

Kit Contents

Assembled Kit

Choosing the necessary libraries:
  • While the AIY libraries contain everything needed for joy recognition, we needed another library to handle displaying the images for the user configuration and the final presentation.
  • The Tkinter library provides the functions we needed, including the ability to load all files in a directory as a slideshow and to cycle through the images either manually or on a timer.
  • To fully use the functionality we need from Tkinter, we also have to install the Python Imaging Library (Pillow) along with its ImageTk module.
  • All necessary dependencies can be installed with the command:

sudo apt-get install python3-pil python3-pil.imagetk

Using the AIY Face Detection library

To start, we have to import all of the necessary libraries.

import io
import os
import sys
import collections

from itertools import cycle
import tkinter as tk
from PIL import Image, ImageTk
from picamera import PiCamera

from aiy.board import Board
from aiy.leds import Color, Leds, Pattern, PrivacyLed
from aiy.toneplayer import TonePlayer
from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

Tkinter is used to display the slideshow, with the option to advance each slide on a trigger or on a timer. The pictures used for the slideshow (19 in total) come from the Pexels free stock image website.

# Displays all images contained within a given list.
# display_slides runs a timed slideshow of all loaded images; use run() to start.
# show_slides displays slides one at a time via the next() method.
class ImageViewer(tk.Tk):
    def __init__(self, image_files, x, y):
        print("Initializing Image View")
        tk.Tk.__init__(self)

        self.geometry('+{}+{}'.format(x, y))

        self.size = len(image_files)
        self.shown_total = -1
        self.pictures = cycle(image for image in image_files)
        self.picture_display = tk.Label(self)
        self.picture_display.pack()
        self.images = []
        self.return_name = ""
        self.bind('<Escape>', self.toggle_screen)

    def toggle_screen(self, event):
        self.attributes("-fullscreen", False)

    def show_slides(self):
        print("Showing Slides")
        self.shown_total += 1
        img_name = next(self.pictures)
        self.return_name = img_name
        image_pil = Image.open(img_name)

        self.images.append(ImageTk.PhotoImage(image_pil))

        self.picture_display.config(image=self.images[-1])

        self.title(img_name)

    def display_slides(self):
        img_name = next(self.pictures)
        image_pil = Image.open(img_name)

        self.images.append(ImageTk.PhotoImage(image_pil))

        self.picture_display.config(image=self.images[-1])

        self.title(img_name)
        # DELAY (milliseconds between slides) is a module constant defined in exAilerate.py
        self.after(DELAY, self.display_slides)

    def next(self):
        print("Next Slide")
        self.show_slides()
        self.run()

    def get_title(self):
        return self.return_name

    def display(self):
        self.mainloop()

    def run(self):
        self.update_idletasks()
        self.update()

    def is_finished(self):
        return self.size == self.shown_total

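Stripped of the GUI, the slide bookkeeping in ImageViewer boils down to itertools.cycle plus a counter compared against the list length. The stand-in below (SlideTracker is hypothetical, not part of exAilerate.py) isolates just that logic so it can be exercised without a display; note that because shown_total starts at -1 and is_finished checks for equality, the loop wraps around and shows the first image a second time before finishing:

```python
from itertools import cycle

# Hypothetical, GUI-free stand-in for ImageViewer's slide bookkeeping.
class SlideTracker:
    def __init__(self, image_files):
        self.size = len(image_files)      # number of distinct images
        self.shown_total = -1             # same starting value as ImageViewer
        self.pictures = cycle(image_files)
        self.return_name = ""

    def next(self):
        # Mirrors show_slides(): bump the counter and advance the cycle.
        self.shown_total += 1
        self.return_name = next(self.pictures)
        return self.return_name

    def is_finished(self):
        return self.size == self.shown_total

# Drive it the same way preference_config drives image_view.
tracker = SlideTracker(["a.jpg", "b.jpg", "c.jpg"])
shown = []
while not tracker.is_finished():
    shown.append(tracker.next())
```

After the loop, `shown` contains all three images plus one wrapped repeat of the first, which is the point at which the real program stops asking for emotion readings.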
We then create our functions which will use the AIY libraries.

# Detect whether joy is above or below the detection thresholds.
# run_inference, average_joy_score, and the sound constants are
# helpers defined in exAilerate.py.
def detect_emotion(model_loaded, joy_moving_average, joy_threshold_detector, animator, player):
    for faces, frame_size in run_inference(model_loaded):
        joy_score = joy_moving_average.send(average_joy_score(faces))
        animator.update_joy_score(joy_score)
        event = joy_threshold_detector.send(joy_score)
        if event == 'high':
            print('High joy detected.')
            player.play(JOY_SOUND)
            return "joy"
        elif event == 'low':
            print('Low joy detected.')
            player.play(SAD_SOUND)
            return "sad"
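detect_emotion drives two coroutine helpers, joy_moving_average and joy_threshold_detector, which is why preference_config primes each of them with send(None) before use. Their actual definitions live in exAilerate.py; the sketch below is a plausible reconstruction for illustration, not the project's exact code:

```python
import collections

# Sketch of a smoothing coroutine: each send(value) yields the mean of the
# last `size` values, damping single-frame spikes in the joy score.
def moving_average(size):
    window = collections.deque(maxlen=size)
    window.append((yield))  # priming send(None) runs to here
    while True:
        window.append((yield sum(window) / len(window)))

# Sketch of a crossing detector: yields 'high' or 'low' only when the value
# crosses a threshold, and None while it stays on one side.
def threshold_detector(low_threshold, high_threshold):
    event = None
    prev_value = None
    while True:
        value = yield event
        if prev_value is not None:
            if prev_value <= high_threshold < value:
                event = 'high'
            elif prev_value >= low_threshold > value:
                event = 'low'
            else:
                event = None
        prev_value = value

# Initialize exactly as preference_config does.
avg = moving_average(3)
avg.send(None)
detector = threshold_detector(0.2, 0.8)
detector.send(None)
```

Because both helpers are generators, state (the sliding window, the previous value) persists between the send() calls made on every camera frame.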

Once we are able to detect the user's emotion, we can create the function needed to save their image preferences.

# Test the user on each image and configure the user file for future use.
# Player, Animator, moving_average, threshold_detector, and the sound/score
# constants are helpers defined in exAilerate.py.
def preference_config(pref_file, image_view):

    # initialize all components
    leds = Leds()
    board = Board()
    player = Player(gpio=BUZZER_GPIO, bpm=10)
    animator = Animator(leds)

    camera = PiCamera(sensor_mode=4, resolution=(820, 616))

    # turn on privacy light
    leds.update(Leds.privacy_on(brightness=128))

    def model_loaded():
        player.play(MODEL_LOAD_SOUND)

    joy_moving_average = moving_average(10)
    joy_moving_average.send(None)  # Initialize.
    joy_threshold_detector = threshold_detector(JOY_SCORE_LOW, JOY_SCORE_HIGH)
    joy_threshold_detector.send(None)  # Initialize.

    # Cycle through all pictures in the image_view until all have been viewed.
    # Each new image is displayed after the user's emotion has been recognized
    # for the current image.
    while not image_view.is_finished():
        emotion = detect_emotion(model_loaded, joy_moving_average, joy_threshold_detector, animator, player)
        if "joy" in emotion:
            pref_file.write(image_view.get_title() + '\n')
        image_view.next()

    animator.shutdown()
    leds.update(Leds.privacy_off())

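Since preference_config writes one image filename per line for each image that scored "joy", party mode only has to read those lines back to rebuild a user's slideshow. A minimal sketch (load_preferences is a hypothetical helper, shown here for illustration; the real loading logic lives in exAilerate.py):

```python
import os
import tempfile

# Hypothetical helper: return the image paths saved by preference_config,
# one filename per line, skipping any blank lines.
def load_preferences(path):
    with open(path) as pref_file:
        return [line.strip() for line in pref_file if line.strip()]

# Demo with a throwaway preference file in the same one-path-per-line format.
demo_path = os.path.join(tempfile.mkdtemp(), "demo_user.txt")
with open(demo_path, "w") as f:
    f.write("party1.jpg\nparty2.jpg\n")
prefs = load_preferences(demo_path)
```

The resulting list can be handed straight to ImageViewer, since its constructor takes a list of image filenames.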
The full code can be found in the file exAilerate.py in our GitHub repository:

https://github.com/piela001/exAilerate

Running the program

Clone the repository

git clone https://github.com/piela001/exAilerate.git

Run the installer

./install.sh

Start the program

./exAilerate.py

Follow the command prompt to configure all new users. Once everyone is configured, run the program again to start PARTY MODE!

More details about our project, including our trial and error and the reasoning behind our choices, are available at:

https://docs.google.com/document/d/1w48Zfb2NK-u5amr86Q1gw50dpLKvsy14RsLNvOAnNL8/edit?usp=sharing

https://youtu.be/gw87xa_H2sk

Code

exAilerate

Credits


piela001

Software Engineer
