AIY - Quick Nature Guide

About the project

Quickly learn more about the nature around you with the AIY Vision Kit by Google in collaboration with the iNaturalist community.

Project info

Difficulty: Easy

Platforms: Raspberry Pi

Estimated time: 3 hours

License: GNU General Public License, version 3 or later (GPL3+)

Items used in this project

Hardware components

Google AIY Vision Full Kit - Includes Pi Zero WH - V1.1 x 1
USB Wall Charger - 5V, 1A (Black) x 1

Software apps and online services

Command Line - To stop the default demo and launch the project demo

Story

OVERVIEW

The main goal of this project is to retrieve quick information about the object detected via the AIY Vision Kit, as part of Google's AIY Vision and Voice Challenge Contest.

The project uses 3 Nature Explorer machine learning (ML) models, trained on photos contributed by the iNaturalist community, to recognise over 4000 different species ranging from birds and insects to plants.

A webpage is used to display the camera stream and the information retrieved from the iNaturalist Application Programming Interface (API).
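
For a rough idea of what that retrieval looks like, the sketch below uses pyinaturalist to look up a species by name. It is illustrative only: the query is hard-coded, whereas in the project it comes from the model's output, and on older pyinaturalist versions the import is from pyinaturalist.node_api instead.

    # Minimal sketch: fetch species details from the iNaturalist API.
    # The query is hard-coded for illustration; in the demo it would be
    # the name returned by a Nature Explorer model.
    from pyinaturalist import get_taxa  # older versions: pyinaturalist.node_api

    response = get_taxa(q='Danaus plexippus', rank='species')
    if response['results']:
        taxon = response['results'][0]
        print(taxon['name'])                       # scientific name
        print(taxon.get('preferred_common_name'))  # e.g. 'Monarch'
        print(taxon.get('wikipedia_url'))          # link for further reading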

REQUIREMENTS

  • AIY Vision Kit containing Nature Explorer models, Raspberry Pi Zero (RPI), Raspberry Pi Camera
  • A device connected to the same network as the AIY Vision Kit
  • pyinaturalist, a Python client for the iNaturalist API

GOALS

The main goal of this project is to retrieve more information about a detected object rather than only its name. The project aims to provide a smooth user experience, from taking a picture of the object to finding out details about it, with minimal user interaction.

Another goal of this project is to keep it simple.

Therefore, a simple HTTP server is used to display the camera stream, helping the user see what is within the camera frame and helping the Nature Explorer models retrieve the most accurate results.

Web frameworks would probably have made development a lot easier, for example for handling data between pages and transferring data between the web page and the RPI, but they would also require extra modules to be installed.

A web page is used so that the user does not have to set up external hardware to use the demo. The only external component needed is a Bluetooth device, which the user will already have in order to pair with the Vision Kit and use its existing demos. The web page is served at the IP address of the RPI, which can be found via the AIY app.

RUNNING THE DEMO

Turn on your Vision Kit by plugging the provided cable into a USB wall charger or a power bank.

Your Vision Kit is most likely running a script that starts by default as a service when the kit is turned on, such as the Joy Detector demo, which uses the RPI camera. To stop this service, run:

  • sudo systemctl stop joy_detection_demo

Required modules that are not built into Python 3 should already be pre-installed with the Vision Kit, except pyinaturalist. So, install pyinaturalist globally via pip3:

  • pip3 install pyinaturalist

Git clone the project repository:

  • git clone https://github.com/icapistrano/AIY_nature_guide.git

Go to the project folder and run web_server.py:

  • cd AIY_nature_guide
  • python3 web_server.py

Find the IP address of your RPI via the AIY app. Make sure you are connected to the same network as the RPI and go to http://your-pi-address:8000/. This was tested on Linux and Windows.

CODE EXPLANATION

This section will explain the key technologies and tools used in this project.

WEB STREAMING

Thankfully, streaming the camera feed has already been efficiently developed and properly documented in the picamera documentation. Here is the wonderful source code: PiCamera - Web Streaming.
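
In essence, that recipe records MJPEG frames into an in-memory buffer and relays them to each connected browser as a multipart HTTP stream. Below is a condensed sketch of the recipe (error handling and the HTML page are stripped out for brevity):

    # Condensed from the picamera web-streaming recipe: the camera writes
    # MJPEG frames into a shared buffer; each HTTP client waits on a
    # Condition and receives every new frame as a multipart chunk.
    import io
    import picamera
    import socketserver
    from http import server
    from threading import Condition

    class StreamingOutput(object):
        def __init__(self):
            self.frame = None
            self.buffer = io.BytesIO()
            self.condition = Condition()

        def write(self, buf):
            if buf.startswith(b'\xff\xd8'):  # JPEG magic: a new frame begins
                self.buffer.truncate()
                with self.condition:
                    self.frame = self.buffer.getvalue()
                    self.condition.notify_all()  # wake all waiting clients
                self.buffer.seek(0)
            return self.buffer.write(buf)

    class StreamingHandler(server.BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            self.send_header('Content-Type',
                             'multipart/x-mixed-replace; boundary=FRAME')
            self.end_headers()
            while True:
                with output.condition:
                    output.condition.wait()  # block until a fresh frame
                    frame = output.frame
                self.wfile.write(b'--FRAME\r\n')
                self.send_header('Content-Type', 'image/jpeg')
                self.send_header('Content-Length', len(frame))
                self.end_headers()
                self.wfile.write(frame)
                self.wfile.write(b'\r\n')

    class StreamingServer(socketserver.ThreadingMixIn, server.HTTPServer):
        allow_reuse_address = True
        daemon_threads = True

    output = StreamingOutput()
    with picamera.PiCamera(resolution='640x480', framerate=24) as camera:
        camera.start_recording(output, format='mjpeg')
        try:
            StreamingServer(('', 8000), StreamingHandler).serve_forever()
        finally:
            camera.stop_recording()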

Nature Explorer Models

The AIY Vision Kit provides multiple ML models to experiment with. Some demos can run on the video feed; however, the Nature Explorer models used for this project require a picture and a category (i.e. birds, insects, or plants) as parameters.

Therefore, the user has to press a button to select the category and a 'capture' button to take a picture, which then launches the object identifier with the chosen ML model in a new Python interpreter behind the scenes.
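
As an illustrative sketch of that hand-off (the script name and flags below are hypothetical, not the project's actual interface), the capture step could spawn the classifier with subprocess:

    # Hypothetical sketch: launch the classifier in a new Python
    # interpreter once the 'capture' button is pressed. 'classifier.py'
    # and its flags are placeholders, not the project's exact API.
    import subprocess

    def identify(image_path, category):
        # category is one of 'birds', 'insects' or 'plants'
        result = subprocess.run(
            ['python3', 'classifier.py',
             '--image', image_path, '--category', category],
            stdout=subprocess.PIPE, universal_newlines=True, check=True)
        return result.stdout  # e.g. the predicted species name

    # species = identify('capture.jpg', 'birds')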

Common Gateway Interface (CGI)

CGI allows web servers to execute application programs such as Python scripts. Specifically, this project uses CGI to handle the web user's requests in two ways:

  • Changing the LED colour and animation depending on which web button was pressed
  • Executing a Python script to retrieve the output of the ML model based on which web button was pressed and when it was pressed
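
To make this concrete, here is a minimal sketch of what such a CGI script could look like; Python's built-in http.server module can execute scripts like this via its CGIHTTPRequestHandler. The form field name 'button' is an assumption for illustration:

    #!/usr/bin/env python3
    # Hypothetical CGI script: read which web button was pressed from the
    # submitted form data and respond. The field name 'button' is an
    # assumption, not necessarily the project's actual form field.
    import cgi

    form = cgi.FieldStorage()
    button = form.getvalue('button', '')

    # A CGI script writes the HTTP headers, a blank line, then the body
    print('Content-Type: text/plain')
    print()
    if button == 'capture':
        print('Taking a picture and running the chosen model...')
    else:
        print('Category set to: ' + button)
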
Asynchronous JavaScript and XML (AJAX)

AJAX allows web pages to be updated asynchronously by exchanging data with the server behind the scenes. Specifically, this project uses AJAX to send form data without reloading the page.

Code

GitHub repo for AIY - Quick Nature Guide

Credits

icapistrano

I enjoy making things with code!
