Fart Monitoring Using Coral Dev Board Micro

About the project

A low-powered edge device that detects farts and keeps track of them using a TensorFlow model running on a Coral Edge TPU

Project info

Difficulty: Moderate

Platforms: Google, M5Stack, TensorFlow

Estimated time: 2 hours

License: MIT license (MIT)

Items used in this project

Hardware components

Generic 3000mAh Power Bank x 1
M5Stack M5StickC PLUS x 1
Google Coral Dev Board Micro x 1

Software apps and online services

Arduino IDE
TensorFlow
FreeRTOS (real-time operating system for microcontrollers)
Google Coralmicro software platform



According to Wikipedia:

Flatulence, in humans, is the expulsion of gas from the intestines via the anus, commonly referred to as farting. Flatus is the medical word for gas generated in the stomach or bowels. The scientific study of this area of medicine is termed flatology. In many societies, flatus is taboo.

In this project, I have built a proof of concept of a non-invasive, low-powered edge device that uses a microphone to keep track of fart sounds.

Hardware setup

We will be using a Coral Dev Board Micro which has an onboard PDM mono microphone and an ML accelerator (Coral Edge TPU coprocessor).

We will also use an M5StickC Plus development board to display the count on its LCD screen and sound a buzzer for the alert. One thing to keep in mind is that the Coral Dev Board Micro header pins are 1.8V-tolerant, except for the TX/RX pins, which are used as a UART connection to communicate with a computer for debugging. The Grove connector on the M5StickC Plus is repurposed as a serial connection. Since we are only interested in the inferencing results from the Coral Dev Board Micro, we can use just the TX and GND pins for one-way communication.

The connection diagram is given below.

Setup Development Environment

To start building apps with FreeRTOS, you need to download the Dev Board Micro source code and install some dependencies, as follows:

$ git clone --recurse-submodules -j8 https://github.com/google-coral/coralmicro
$ cd coralmicro
$ bash setup.sh
$ bash build.sh

Example Application

We will be using the existing examples/classify_audio application from the coralmicro repository, which recognizes various sounds heard by the onboard microphone using the YamNet TensorFlow model running on the Edge TPU. YamNet is an audio event classifier trained on the AudioSet-YouTube corpus: it predicts 521 audio event classes from the AudioSet ontology and employs the MobileNet v1 depthwise-separable convolution architecture.
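YamNet's numeric output indices map to human-readable labels through the class map CSV that ships with the model. As a rough sketch, a minimal parser for that file might look like the following, assuming the standard `index,mid,display_name` column layout of `yamnet_class_map.csv` (quoted fields containing commas are not handled here):

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Parse "index,mid,display_name" rows from a YamNet-style class map CSV
// into an id -> display name lookup. The header row is skipped.
std::map<int, std::string> ParseClassMap(const std::string& csv) {
  std::map<int, std::string> labels;
  std::istringstream in(csv);
  std::string line;
  std::getline(in, line);  // skip the header row
  while (std::getline(in, line)) {
    auto first = line.find(',');
    auto second = line.find(',', first + 1);
    if (first == std::string::npos || second == std::string::npos) continue;
    labels[std::stoi(line.substr(0, first))] = line.substr(second + 1);
  }
  return labels;
}
```

This is handy on the host side for turning a bare class ID printed by the example into a readable label (the `mid` values below are placeholders, not real ontology IDs).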

Compile and Build the application

All the example applications were already built while setting up the development environment above, but if we need to make changes, we must rebuild the application. For example, we have made some changes in the examples/classify_audio/classify_audio.cc file to toggle the onboard USER_LED when a fart is detected:

static bool enable;
for (const auto& c : results) {
  if (c.id == 55) {  // class ID 55 corresponds to the fart sound in the YamNet class map
    enable = !enable;
    LedSet(Led::kUser, enable);
  }
}
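Off the device, the same toggle behavior can be sketched in plain C++; the `Result` struct and `HandleResults` function here are stand-ins for the example's actual types, used only to illustrate the logic:

```cpp
#include <cassert>
#include <vector>

// Stand-in for one classification result: class index plus confidence score.
struct Result {
  int id;
  float score;
};

bool led_on = false;  // mirrors the static `enable` flag in the example

// Toggle the (simulated) user LED every time class 55 appears in the results.
void HandleResults(const std::vector<Result>& results) {
  for (const auto& c : results) {
    if (c.id == 55) {
      led_on = !led_on;
    }
  }
}
```

Note that a toggle means the LED alternates on and off across successive detections rather than lighting once per event, which is what makes it a visible blink on repeated farts.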

Now we can rebuild the application using the following command.

$ make -C build/examples/classify_audio

Deploy the firmware and model

Plug the board into your computer and verify that it is detected using the command below.

$ lsusb

Bus 020 Device 003: ID 18d1:9308 Google Inc. Coral Dev Board Micro Serial: 2e2f000e129fad00

To deploy the firmware and model together, execute the below command.

$ python3 scripts/flashtool.py -e classify_audio

If we have only changed the code, we can deploy just the firmware (skipping the model data) using the following command.

$ python3 scripts/flashtool.py -e classify_audio --nodata

When flashing completes, the board reboots and loads the app. The white Edge TPU LED turns on to indicate the Edge TPU is running.


To see the inferencing results, connect to the board's serial port with any terminal application at a baud rate of 115200. I am using the screen command as follows.

$ screen /dev/cu.usbmodem144101 115200

Yamnet preprocess time: 10ms, invoke time: 37ms, total: 47ms
No results

Yamnet preprocess time: 10ms, invoke time: 37ms, total: 47ms
55: 0.976562

The average digital signal processing (DSP) time is 10 ms and the inferencing time is 37 ms, so the device can reliably process roughly 20 inferences per second, which is considerably fast for such a low-powered device.

The following Arduino sketch is uploaded to the M5StickC Plus to read the inferencing results, display the counter, and sound the buzzer when a fart is detected. The code continuously looks for the YamNet class label 55 (fart).

#include <M5StickCPlus.h>

unsigned long count = 0;

void setup() {
  M5.begin();
  // Grove port: GPIO 32 (RX) receives from the Coral board's TX; GPIO 33 (TX) is unused
  Serial2.begin(115200, SERIAL_8N1, 32, 33);
  updateScreen();
}

void loop() {
  if (Serial2.available() > 0) {
    String s = Serial2.readStringUntil('\n');
    // A detection line looks like "55: 0.976562"
    if (s.startsWith("55: ")) {
      count++;
      updateScreen();
      buzz();
    }
  }
}

void updateScreen() {
  M5.Lcd.setCursor(10, 40);
  M5.Lcd.printf("FART Monitor");
  M5.Lcd.setCursor(10, 70);
  M5.Lcd.printf("Count: %lu", count);
}

void buzz() {
  // Short beep on the built-in buzzer
  M5.Beep.tone(4000);
  delay(200);
  M5.Beep.mute();
}
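The line-matching logic can be checked on a host machine in plain C++ before flashing; the `CountDetections` helper below is a hypothetical stand-in that mirrors the sketch's `startsWith("55: ")` check against captured serial output:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Count detection lines of the form "55: <score>" in captured serial output,
// mirroring the startsWith("55: ") check in the Arduino sketch.
int CountDetections(const std::vector<std::string>& lines) {
  int count = 0;
  for (const auto& line : lines) {
    if (line.rfind("55: ", 0) == 0) ++count;  // C++ equivalent of startsWith
  }
  return count;
}
```

Lines such as the "Yamnet preprocess time" status output or "No results" are ignored, so only actual detections increment the counter.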

Live Demo

I am not going to test the device on real farts, but there are plenty of YouTube clips with fart sounds we can use for the demo.


This project presents a quick walk-through of an embedded machine learning application built using the Coral Dev Board Micro. It is a low-powered and portable device that respects users' privacy by running the inferencing at the edge.

Schematics, diagrams and documents


Connection Diagram


Coralmicro repository


knaveen


Bioinformatician, Researcher, Programmer, Maker, Community contributor Machine Learning Tokyo

