Stereo System & Spectrum Analyzer Using the Nordic Audio DK

About the project

This project showcases two-channel BLE Audio spectrum visualization by building a DIY stereo system from nRF5340 Audio Development Kits.

Project info

Items used in this project

Hardware components

Bluetooth LE Audio development kit for the nRF5340 SoC x 3
Development Kit for the nRF5340 x 1
TFT Capacitive Touch Shield for Arduino x 1
TRRS 3.5mm Jack Breakout x 1
Generic Mini USB-powered Speakers x 1
ArduEz One Breadboard Shield x 1
Wire Jumper Male to Female 15cm 10-pack x 1

Software apps and online services

nRF Connect SDK https://www.nordicsemi.com/Products/Development-software/nrf-connect-sdk
Zephyr RTOS https://www.zephyrproject.org/
ARM CMSIS-DSP https://github.com/ARM-software/CMSIS-DSP

Story

From the very beginning, Bluetooth technology has proven itself to be the go-to solution for wireless audio. These days, Bluetooth audio devices are everywhere, from wireless speakers and vehicle infotainment systems to wireless earbuds. LE Audio, which operates on Bluetooth Low Energy (BLE), introduces a new high-quality, low-power audio codec called the Low Complexity Communication Codec (LC3). LC3 delivers high audio quality even at lower data rates than the standard SBC codec used in Bluetooth Classic implementations.

In this project, I am going to build a DIY stereo system that demonstrates audio playback over isochronous (ISO) channels using the LC3 codec, with the audio data converted to the frequency domain using the Fast Fourier Transform (FFT). "Isochronous" literally means "occurring at the same time"; in the context of BLE, it refers to time-sensitive data transmissions whose streams are rendered in a synchronized way across multiple receivers.

Hardware Setup

We are using Nordic Semiconductor's nRF5340 Audio Development Kit, the recommended platform for Bluetooth LE Audio products. It can function as a USB dongle that sends or receives audio data from a PC, and it can also act as a business headset, a broadcast receiver, or a True Wireless Stereo (TWS) earbud. The three main components of this DK are the nRF5340 SoC, the nPM1100 PMIC, and Cirrus Logic's CS47L63 audio DSP. The CS47L63's high-performance DAC and differential output driver are optimized for direct connection to an external headphone load.



This project requires 3 x Nordic nRF5340 Audio DKs: 1 x gateway and 2 x headset devices. The gateway receives stereo (2-channel) audio data from an external source, such as a computer or smartphone, over USB or I2S (Inter-IC Sound serial bus) and forwards it to the headsets. A headset is a receiver device that plays back the audio it gets from the gateway. Each headset receives mono (1-channel) audio data and plays it back on its own speaker.


We use a USB stereo speaker with a built-in amplifier in this project. This kind of speaker receives 2-channel (left/right) audio over a 3.5mm TRS (Tip-Ring-Sleeve) jack: the tip and ring usually carry the left and right mono channels respectively, and the sleeve is connected to ground.


The nRF5340 Audio DK outputs audio only to the left channel of its HEADPHONE jack, because of the mono hardware codec chip used on the development kits. We therefore need an external TRS (or TRRS) jack to combine the left and right mono channels, as shown in the schematic below.



We are using an nRF5340 DK with a breadboard shield to house the TRRS audio jack. 


Here the nRF5340 DK serves two purposes: the same connections pass the DAC outputs through to the TRRS audio jack feeding the USB speakers, and they also route the audio signals to the ADC pins, ADC0 and ADC2, for the FFT analysis.



To display the FFT spectrum, a TFT display shield is stacked on top of the breadboard shield.



Setup Development Environment

For the development work, I am using macOS, but the setup process is similar on all platforms. First, we need to download nRF Connect for Desktop from here:

https://www.nordicsemi.com/Software-and-tools/Development-Tools/nRF-Connect-for-desktop/Download.

nRF Connect for Desktop is a cross-platform tool that enables testing and development with the nRF5340 Audio DK. Please follow the installation guide at the link above. When the installation is complete, open the app, click on the Toolchain Manager, and install nRF Connect SDK v2.1.0.


By default, the SDK is installed in the /opt/nordic/ncs directory on macOS. After installation, click on Open Terminal, which opens a command-line terminal with all environment variables initialized so you can get started with development right away.



Audio DK Application

We will be using the example application located in the /opt/nordic/ncs/v2.1.0/nrf/applications/nrf5340_audio directory. Both device types, gateway and headset, use the same code base but are built into different firmware. The gateway and headsets both run in connected isochronous stream (CIS) mode, which is the application's default mode. In the default configuration, the gateway uses the USB serial port as the audio source. Since we will be using a smartphone as the audio source, we switch to the I2S serial connection by appending the following line to the prj.conf file.

CONFIG_AUDIO_SOURCE_I2S=y

To use I2S, we also need an audio jack cable to connect the audio source (smartphone) to the analog LINE IN on the development kit. The application workflow for the gateway and headsets is as follows.

Gateway

  1. The gateway receives audio data from the audio source over I2S. 
  2. Audio data is sent to the synchronization module and then encoded by the software codec. 
  3. The encoded audio data is sent to the Bluetooth LE Host. 
  4. The host sends the encoded audio data to the LE Audio Controller Subsystem for nRF53 on the network core.
  5. The subsystem forwards the audio data to the hardware radio and sends it to the headset devices.

Headsets

  1. The headsets receive the encoded audio data on their hardware radio on the network core side.
  2. The LE Audio Controller Subsystem for nRF53 running on each of the headsets sends the encoded audio data to the Bluetooth LE Host on the headsets’ application core.
  3. Audio data is sent to the stream control module and placed in a FIFO buffer.
  4. Audio data is sent from the FIFO buffer to the synchronization module.
  5. Audio data is decoded by the software codec.
  6. Decoded audio data is sent to the hardware audio output over I2S.

The nRF5340 Audio application supports only the LC3 software codec, which was developed specifically for LE Audio. To make the headset data path (steps 3 to 6 above) more concrete, the sketch below shows it in a heavily simplified form.
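This is an illustrative sketch only, not code from the nRF5340 Audio application: the frame layout, lc3_decode_frame() and i2s_output_write() are hypothetical stand-ins for the real stream-control, codec and I2S modules, and a real implementation would allocate frames from a memory pool rather than reuse a static buffer.

/* Illustrative sketch of the headset data path: FIFO -> decode -> I2S.
 * lc3_decode_frame() and i2s_output_write() are hypothetical stand-ins. */
#include <string.h>
#include <zephyr/kernel.h>
#include <zephyr/sys/util.h>

#define FRAME_BYTES 120   /* assumed encoded LC3 frame size */
#define PCM_SAMPLES 480   /* assumed 10 ms of mono PCM at 48 kHz */

struct enc_frame {
    void *fifo_reserved;          /* first word reserved for k_fifo */
    uint8_t data[FRAME_BYTES];
    size_t len;
};

static K_FIFO_DEFINE(encoded_fifo);

int lc3_decode_frame(const uint8_t *in, size_t len, int16_t *pcm_out);  /* hypothetical */
int i2s_output_write(const int16_t *pcm, size_t len_bytes);             /* hypothetical */

/* Step 3: stream control places each encoded frame received over the radio
 * into a FIFO buffer. */
void on_frame_received(const uint8_t *buf, size_t len)
{
    static struct enc_frame frame;

    frame.len = MIN(len, sizeof(frame.data));
    memcpy(frame.data, buf, frame.len);
    k_fifo_put(&encoded_fifo, &frame);
}

/* Steps 4-6: pull frames from the FIFO, decode them with the software codec
 * and hand the PCM samples to the I2S output. */
void playback_thread(void)
{
    static int16_t pcm[PCM_SAMPLES];

    while (1) {
        struct enc_frame *frame = k_fifo_get(&encoded_fifo, K_FOREVER);

        lc3_decode_frame(frame->data, frame->len, pcm);
        i2s_output_write(pcm, sizeof(pcm));
    }
}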

Build and flash the nRF5340 Audio DK firmware

The recommended way to build the application and program it to the Audio DKs is the buildprog.py Python script, located in the /opt/nordic/ncs/v2.1.0/nrf/applications/nrf5340_audio/tools/buildprog directory. The script automates selecting configuration files and building the different versions of the application. It relies on the settings defined in the nrf5340_audio_dk_devices.json file in the same directory. Find the serial number printed on the sticker of each nRF5340 Audio DK and update the corresponding nrf5340_audio_dk_snr entries in the JSON file as follows.


[
  {
    "nrf5340_audio_dk_snr": 1050173897,
    "nrf5340_audio_dk_dev": "headset",
    "channel": "left"
  },
  {
    "nrf5340_audio_dk_snr": 1050143142,
    "nrf5340_audio_dk_dev": "gateway",
    "channel": "NA"
  },
  {
    "nrf5340_audio_dk_snr": 1050138112,
    "nrf5340_audio_dk_dev": "headset",
    "channel": "right"
  }
]

To build the application for all three Audio DKs, execute the commands below from the application's root directory.

$ cd /opt/nordic/ncs/v2.1.0/nrf/applications/nrf5340_audio
$ python3 tools/buildprog/buildprog.py -c app -b debug -d both

The Audio DKs are programmed according to the serial numbers set in the JSON file. Make sure to connect the development kits to your PC over USB and turn them on using the POWER switch before you run the command below.

$ python3 tools/buildprog/buildprog.py -c both -b debug -d both -p

The nRF5340 DK Application

The nRF5340 DK samples the audio signals (the DAC outputs) on two ADC pins at 12-bit resolution. In the application, the CPU frequency is set to 128 MHz so the incoming ADC samples can be processed in real time.

NRF_CLOCK_S->HFCLKCTRL = (CLOCK_HFCLKCTRL_HCLK_Div1 << CLOCK_HFCLKCTRL_HCLK_Pos);

The complex FFT is calculated using the ARM CMSIS-DSP library, which provides optimized compute kernels for Cortex-M processor cores. A Fast Fourier Transform (FFT) is an algorithm that computes a sequence's discrete Fourier transform (DFT); Fourier analysis converts a signal from its original domain (often time or space) to a representation in the frequency domain. With a 512-point FFT, bin k corresponds to the frequency k * fs / 512, where fs is the effective ADC sampling rate. The resulting spectrum is displayed on the TFT LCD for each audio channel separately using LVGL (the Light and Versatile Graphics Library). A short CMSIS-DSP example follows, and the full application code is given after it.
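To see what the two CMSIS-DSP calls do in isolation, here is a minimal, self-contained sketch (independent of the project code) that feeds a synthetic 1 kHz sine, generated with an assumed 8 kHz sampling rate, through the same 512-point complex FFT and magnitude routines; the energy should land near bin 1000 * 512 / 8000 = 64.

/* Standalone CMSIS-DSP FFT demo (assumptions: 8 kHz sample rate, 1 kHz tone) */
#include <stdio.h>
#include <arm_math.h>
#include <arm_const_structs.h>

#define N  512
#define FS 8000.0f   /* assumed sampling rate for this example */

int fft_demo(void)
{
    static float32_t input[2 * N];   /* interleaved Re/Im, as arm_cfft_f32 expects */
    static float32_t mag[N];

    /* synthesize a 1 kHz sine as the real part, zero imaginary part */
    for (int i = 0; i < N; i++) {
        input[2 * i]     = arm_sin_f32(2.0f * PI * 1000.0f * i / FS);
        input[2 * i + 1] = 0.0f;
    }

    arm_cfft_f32(&arm_cfft_sR_f32_len512, input, 0, 1);   /* in-place complex FFT */
    arm_cmplx_mag_f32(input, mag, N);                     /* magnitude of each bin */

    /* the largest magnitude in the first half of the spectrum should sit near bin 64 */
    float32_t peak_val;
    uint32_t peak_bin;
    arm_max_f32(mag, N / 2, &peak_val, &peak_bin);
    printf("peak at bin %u (~%.0f Hz)\n", (unsigned)peak_bin, (double)(peak_bin * FS / N));

    return 0;
}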

#include <zephyr/zephyr.h>
#include <zephyr/device.h>
#include <zephyr/drivers/adc.h>
#include <zephyr/drivers/display.h>
#include <lvgl.h>
#include <hal/nrf_saadc.h>
#include <arm_math.h>
#include <arm_const_structs.h>

#define LOG_LEVEL CONFIG_LOG_DEFAULT_LEVEL
#include <zephyr/logging/log.h>
LOG_MODULE_REGISTER(app);

/* SAADC configuration: two single-ended channels, 12-bit resolution */
#define ADC_DEVICE_NAME       DT_INST(0, nordic_nrf_saadc)
#define ADC_RESOLUTION        12
#define ADC_GAIN              ADC_GAIN_1_6
#define ADC_REFERENCE         ADC_REF_INTERNAL
#define ADC_ACQUISITION_TIME  ADC_ACQ_TIME(ADC_ACQ_TIME_MICROSECONDS, 10)
#define ADC_1ST_CHANNEL_ID    0
#define ADC_1ST_CHANNEL_INPUT NRF_SAADC_INPUT_AIN0 /* Arduino naming: A0 */
#define ADC_2ND_CHANNEL_ID    2
#define ADC_2ND_CHANNEL_INPUT NRF_SAADC_INPUT_AIN2 /* Arduino naming: A2 */
#define ADC_MAX_VAL           ((1U << ADC_RESOLUTION) - 1U)

/* Display and FFT parameters */
#define DISP_WIDTH  320
#define DISP_HEIGHT 240
#define BUFFER_SIZE 512
#define WINDOW_SIZE 512
#define MAG_SCALE   400

static int16_t m_sample_buffer_1[BUFFER_SIZE];
static int16_t m_sample_buffer_2[BUFFER_SIZE];
static float32_t sample_buffer_norm_1[BUFFER_SIZE];
static float32_t sample_buffer_norm_2[BUFFER_SIZE];
static float32_t fftInput_1[WINDOW_SIZE * 2];   /* interleaved Re/Im */
static float32_t fftOutput_1[WINDOW_SIZE];
static float32_t fftInput_2[WINDOW_SIZE * 2];
static float32_t fftOutput_2[WINDOW_SIZE];

/* Bar colors for the spectrum display (one per frequency band) */
uint32_t colors[] = {
    0x24c4e6, 0x05a6fb, 0x0571fb, 0x053ffb, 0x0509fb, 0x3305fb, 0x6905fb,
    0x9705fb, 0xcd05fb, 0xfb05f7, 0xfb05c1, 0xfb058f, 0xfb055a, 0xfb0528,
    0xfb1505, 0xfb4a05, 0xfb7c05, 0xfbb205, 0xfbe405, 0xe0fb05, 0xaefb05,
    0x78fb05, 0x46fb05, 0x11fb05, 0x05fb2c, 0x05fb5d, 0x05fb93, 0x05fbc5,
    0x05fbfb, 0x05c9fb, 0x0593fb, 0x0584fb,
};
/* One spectrum (bars + peak lines) per audio channel */
typedef struct {
    lv_obj_t *obj;
    float values[WINDOW_SIZE];
    float peaks[WINDOW_SIZE];
    int mid_point;
} spectrum_t;

spectrum_t spectrum_1;
spectrum_t spectrum_2;

const struct device *adc_dev;

static const struct adc_channel_cfg m_1st_channel_cfg = {
    .gain = ADC_GAIN,
    .reference = ADC_REFERENCE,
    .acquisition_time = ADC_ACQUISITION_TIME,
    .channel_id = ADC_1ST_CHANNEL_ID,
#if defined(CONFIG_ADC_CONFIGURABLE_INPUTS)
    .input_positive = ADC_1ST_CHANNEL_INPUT,
#endif
};

static const struct adc_channel_cfg m_2nd_channel_cfg = {
    .gain = ADC_GAIN,
    .reference = ADC_REFERENCE,
    .acquisition_time = ADC_ACQUISITION_TIME,
    .channel_id = ADC_2ND_CHANNEL_ID,
#if defined(CONFIG_ADC_CONFIGURABLE_INPUTS)
    .input_positive = ADC_2ND_CHANNEL_INPUT,
#endif
};

/* Capture BUFFER_SIZE samples per adc_read() call */
const struct adc_sequence_options sequence_opts = {
    .interval_us = 0,
    .callback = NULL,
    .user_data = NULL,
    .extra_samplings = BUFFER_SIZE - 1,
};

static int adc_sample(void)
{
    int ret;

    const struct adc_sequence sequence_1 = {
        .options = &sequence_opts,
        .channels = BIT(ADC_1ST_CHANNEL_ID),
        .buffer = m_sample_buffer_1,
        .buffer_size = sizeof(m_sample_buffer_1),
        .resolution = ADC_RESOLUTION,
    };
    const struct adc_sequence sequence_2 = {
        .options = &sequence_opts,
        .channels = BIT(ADC_2ND_CHANNEL_ID),
        .buffer = m_sample_buffer_2,
        .buffer_size = sizeof(m_sample_buffer_2),
        .resolution = ADC_RESOLUTION,
    };

    if (!adc_dev) {
        return -1;
    }

    ret = adc_read(adc_dev, &sequence_1);
    //LOG_ERR("ADC [0] read err: %d", ret);
    ret = adc_read(adc_dev, &sequence_2);
    //LOG_ERR("ADC [2] read err: %d", ret);

    /* print the AIN0, AIN2 values */
    //for (int i = 0; i < BUFFER_SIZE; i++) {
    //    LOG_INF("%d, %d", m_sample_buffer_1[i], m_sample_buffer_2[i]);
    //}

    return ret;
}
static void spectrum_draw_event_cb(lv_event_t *e)
{
    lv_event_code_t code = lv_event_get_code(e);

    if (code == LV_EVENT_REFR_EXT_DRAW_SIZE) {
        lv_event_set_ext_draw_size(e, LV_VER_RES);
    } else if (code == LV_EVENT_COVER_CHECK) {
        lv_event_set_cover_res(e, LV_COVER_RES_NOT_COVER);
    } else if (code == LV_EVENT_DRAW_POST) {
        lv_obj_t *obj = lv_event_get_target(e);
        spectrum_t *spectrum = lv_event_get_user_data(e);
        lv_draw_ctx_t *draw_ctx = lv_event_get_draw_ctx(e);

        lv_opa_t opa = lv_obj_get_style_opa(obj, LV_PART_MAIN);
        if (opa < LV_OPA_MIN) return;

        lv_draw_rect_dsc_t draw_rect_dsc;
        lv_draw_rect_dsc_init(&draw_rect_dsc);
        draw_rect_dsc.bg_opa = LV_OPA_COVER;

        lv_draw_line_dsc_t draw_line_dsc;
        lv_draw_line_dsc_init(&draw_line_dsc);
        draw_line_dsc.width = 1;

        int x_step = (int)(DISP_WIDTH - 16) / (WINDOW_SIZE / 16);
        int bar_count = 1;

        /* skip the first 2 bins, average 4 bins per bar */
        for (int i = 2; i < WINDOW_SIZE / 4; i += 4) {
            float ave = 0;
            for (int j = 0; j < 4; j++) {
                ave += spectrum->values[i + j];
            }
            ave /= 4;
            int bar_value = MIN(125.0f, 0.25f * ave);

            ave = 0;
            for (int j = 0; j < 4; j++) {
                ave += spectrum->peaks[i + j];
            }
            ave /= 4;
            int peak_value = MIN(125.0f, 0.25f * ave);

            draw_rect_dsc.bg_color = lv_color_hex(colors[bar_count - 1]);

            /* 5 is the bar width, bar_value is the bar height */
            lv_area_t above_rect;
            above_rect.x1 = bar_count * x_step;
            above_rect.x2 = bar_count * x_step + 5;
            above_rect.y1 = spectrum->mid_point - (int)(bar_value / 2);
            above_rect.y2 = spectrum->mid_point;
            lv_draw_rect(draw_ctx, &draw_rect_dsc, &above_rect);

            lv_area_t below_rect;
            below_rect.x1 = bar_count * x_step;
            below_rect.x2 = bar_count * x_step + 5;
            below_rect.y1 = spectrum->mid_point;
            below_rect.y2 = spectrum->mid_point + (int)(bar_value / 2);
            lv_draw_rect(draw_ctx, &draw_rect_dsc, &below_rect);

            draw_line_dsc.color = lv_color_hex(colors[bar_count - 1]);

            /* upper peak line always 2 px above the bar */
            lv_point_t above_line[2];
            above_line[0].x = bar_count * x_step;
            above_line[0].y = spectrum->mid_point - (int)(peak_value / 2) - 2;
            above_line[1].x = bar_count * x_step + 6;
            above_line[1].y = spectrum->mid_point - (int)(peak_value / 2) - 2;
            lv_draw_line(draw_ctx, &draw_line_dsc, &above_line[0], &above_line[1]);

            /* lower peak line always 2 px below the bar */
            lv_point_t below_line[2];
            below_line[0].x = bar_count * x_step;
            below_line[0].y = spectrum->mid_point + (int)(peak_value / 2) + 2;
            below_line[1].x = bar_count * x_step + 6;
            below_line[1].y = spectrum->mid_point + (int)(peak_value / 2) + 2;
            lv_draw_line(draw_ctx, &draw_line_dsc, &below_line[0], &below_line[1]);

            bar_count++;
        }
    }
}
void create_spectrum_object(spectrum_t *spectrum)
{
    spectrum->obj = lv_obj_create(lv_scr_act());
    lv_obj_remove_style_all(spectrum->obj);
    lv_obj_refresh_ext_draw_size(spectrum->obj);
    lv_obj_set_size(spectrum->obj, DISP_WIDTH - 16, (DISP_HEIGHT - 16) / 2);
    lv_obj_set_pos(spectrum->obj, 16, spectrum->mid_point - 58);
    lv_obj_clear_flag(spectrum->obj, LV_OBJ_FLAG_CLICKABLE | LV_OBJ_FLAG_SCROLLABLE);
    lv_obj_add_event_cb(spectrum->obj, spectrum_draw_event_cb, LV_EVENT_ALL, spectrum);
}

/* Smooth the bar heights and let the peak markers decay slowly */
static void update_spectrum(spectrum_t *spectrum, float *magnitudes)
{
    for (int i = 0; i < WINDOW_SIZE; i++) {
        float mag = magnitudes[i] * MAG_SCALE;

        if (mag > spectrum->values[i]) {
            spectrum->values[i] = mag;
        } else {
            spectrum->values[i] = 0.7 * spectrum->values[i] + 0.3 * mag;
        }

        if (mag > spectrum->peaks[i]) {
            spectrum->peaks[i] = mag;
        } else {
            spectrum->peaks[i] = 0.95 * spectrum->peaks[i] + 0.05 * mag;
        }
    }
}
int main(void)
{
    /* Set CPU frequency to 128 MHz for real-time processing */
    NRF_CLOCK_S->HFCLKCTRL = (CLOCK_HFCLKCTRL_HCLK_Div1 << CLOCK_HFCLKCTRL_HCLK_Pos);

    adc_dev = DEVICE_DT_GET(ADC_DEVICE_NAME);
    if (!device_is_ready(adc_dev)) {
        LOG_ERR("ADC device not ready");
        return -1;
    }

    int err = adc_channel_setup(adc_dev, &m_1st_channel_cfg);
    err |= adc_channel_setup(adc_dev, &m_2nd_channel_cfg);
    if (err) {
        LOG_ERR("Error in adc setup: %d", err);
    }

    /* Trigger offset calibration.
     * As this generates a _DONE and _RESULT event,
     * the first result will be incorrect.
     */
    NRF_SAADC->TASKS_CALIBRATEOFFSET = 1;

    while (1) {
        err = adc_sample();
        if (err) {
            LOG_ERR("Error in adc sampling: %d", err);
        }

        /* normalize audio buffers to [0.0 - 1.0] */
        for (int i = 0; i < BUFFER_SIZE; i++) {
            sample_buffer_norm_1[i] = (m_sample_buffer_1[i] * 1.0f) / ADC_MAX_VAL;
            sample_buffer_norm_2[i] = (m_sample_buffer_2[i] * 1.0f) / ADC_MAX_VAL;
        }

        /* calculate the mean of each buffer (DC bias) */
        float32_t mean_1, mean_2;
        arm_mean_f32(sample_buffer_norm_1, BUFFER_SIZE, &mean_1);
        arm_mean_f32(sample_buffer_norm_2, BUFFER_SIZE, &mean_2);

        /* populate FFT inputs (interleaved Re/Im) after removing the DC bias */
        for (int i = 0; i < WINDOW_SIZE * 2; i += 2) {
            fftInput_1[i]     = sample_buffer_norm_1[i / 2] - mean_1; /* Re */
            fftInput_1[i + 1] = 0;                                    /* Im */
            fftInput_2[i]     = sample_buffer_norm_2[i / 2] - mean_2; /* Re */
            fftInput_2[i + 1] = 0;                                    /* Im */
            //printf("%f, %f\n", fftInput_1[i], fftInput_2[i]);
        }

        /* calculate the in-place complex FFT */
        arm_cfft_f32(&arm_cfft_sR_f32_len512, fftInput_1, 0, 1);
        arm_cfft_f32(&arm_cfft_sR_f32_len512, fftInput_2, 0, 1);

        /* calculate magnitudes */
        arm_cmplx_mag_f32(fftInput_1, fftOutput_1, WINDOW_SIZE);
        arm_cmplx_mag_f32(fftInput_2, fftOutput_2, WINDOW_SIZE);

        k_sleep(K_MSEC(1));
    }
}
void display_main(void *p1, void *p2, void *p3)
{
    ARG_UNUSED(p1);
    ARG_UNUSED(p2);
    ARG_UNUSED(p3);

    const struct device *display_dev = DEVICE_DT_GET(DT_CHOSEN(zephyr_display));

    if (!device_is_ready(display_dev)) {
        LOG_ERR("Device not ready, aborting.");
        while (1) {}
    }

    /* two spectra stacked vertically: top and bottom halves of the screen */
    spectrum_1.mid_point = (DISP_HEIGHT / 4) - 1;
    spectrum_2.mid_point = ((3 * DISP_HEIGHT) / 4) - 1;
    create_spectrum_object(&spectrum_1);
    create_spectrum_object(&spectrum_2);

    display_blanking_off(display_dev);

    while (1) {
        update_spectrum(&spectrum_1, fftOutput_1);
        update_spectrum(&spectrum_2, fftOutput_2);
        lv_obj_invalidate(spectrum_1.obj);
        lv_obj_invalidate(spectrum_2.obj);
        lv_task_handler();
        k_sleep(K_MSEC(1));
    }
}

K_THREAD_DEFINE(display_thread, 8192, display_main, NULL, NULL, NULL, 7, 0, 0);
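Note that, besides the source file above, the build needs the ADC, display/LVGL and CMSIS-DSP subsystems enabled in the project configuration. The exact prj.conf (and the board overlay for the TFT shield) is in the repository linked below; the following is only an illustrative subset of the kind of options involved, not the verbatim file.

# Illustrative prj.conf subset (assumptions; see the repository for the real file)
CONFIG_ADC=y
CONFIG_DISPLAY=y
CONFIG_LVGL=y
CONFIG_CMSIS_DSP=y
CONFIG_CMSIS_DSP_TRANSFORM=y
CONFIG_CMSIS_DSP_COMPLEXMATH=y
CONFIG_CMSIS_DSP_STATISTICS=y
CONFIG_FPU=y
CONFIG_LOG=y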

Build and flash the nRF5340 DK application firmware

Execute the commands below in a terminal opened from the Toolchain Manager, as described in the Setup Development Environment section.

$ git clone https://github.com/metanav/nRF5340_Audio_DK_stereo_player_spectrum_analyzer.git
$ cd nRF5340_Audio_DK_stereo_player_spectrum_analyzer
$ west build -b nrf5340dk_nrf5340_cpuapp --pristine

Connect the nRF5340 DK using a USB cable and execute the command below.

$ west flash

Once flashing completes successfully, the application starts running. In the working demo, the Audio DKs are powered by the LiPo batteries that come with the kits, while the USB speakers and the nRF5340 DK are powered by a power bank over USB.

Testing Demo

All assemblies are fitted inside a transparent plastic showcase to give the build the look and feel of a stereo player.

Final Demo

The live demonstration shows audio streaming from a smartphone connected to the Audio DK gateway device to the two Audio DK headsets. A real-time FFT spectrum for both channels is displayed on the TFT LCD, so the build doubles as a spectrum analyzer and makes it easy to monitor the synchronization of the two mono channels.


Conclusion

The nRF5340 Audio DK is a versatile development kit that contains everything needed to start LE Audio development. Its low-power, tightly synchronized audio streaming capabilities make it a good fit for many audio-based use cases.

Schematics, diagrams and documents

Schematics

Code

Code Repository

https://github.com/metanav/nRF5340_Audio_DK_stereo_player_spectrum_analyzer.git

Credits


knaveen

Bioinformatician, Researcher, Programmer, Maker, Community Contributor at Machine Learning Tokyo
