feat(freertos): Add examples showing basic freertos SMP usage and common APIs

This commit is contained in:
Xiaoyu Liu 2024-05-06 11:35:48 +08:00
parent ea2f512cb5
commit 2082ce09b6
13 changed files with 928 additions and 1 deletions

View File

@ -77,7 +77,16 @@ examples/system/flash_suspend:
temporary: true
reason: the other targets are not tested yet
examples/system/freertos:
examples/system/freertos/basic_freertos_smp_usage:
enable:
- if: IDF_TARGET == "esp32c3" or IDF_TARGET == "esp32s3"
reason: no target specific functionality, testing on a single core target and a multiple core target is sufficient
depends_components:
- freertos
- console
- esp_timer
examples/system/freertos/real_time_stats:
disable:
- if: IDF_TARGET != "esp32" and (NIGHTLY_RUN != "1" or IDF_TARGET == "linux")
reason: no target specific functionality, testing on a single target is sufficient

View File

@ -0,0 +1,8 @@
# For more information about build system see
# https://docs.espressif.com/projects/esp-idf/en/latest/api-guides/build-system.html
# The following five lines of boilerplate have to be in your project's
# CMakeLists in this exact order for cmake to work correctly
cmake_minimum_required(VERSION 3.16)
include($ENV{IDF_PATH}/tools/cmake/project.cmake)
project(basic_freertos_smp_usage)

View File

@ -0,0 +1,204 @@
| Supported Targets | ESP32-C3 | ESP32-S3 |
| ----------------- | -------- | -------- |
# FreeRTOS Basic API SMP Usage Example
(See the README.md file in the upper level 'examples' directory for more information about examples.)
FreeRTOS offers a rich set of communication objects and task notification mechanisms that facilitate interaction and synchronization between concurrent tasks. This example demonstrates how to use several common APIs, including task creation, queues, mutexes / spinlocks, and task notifications, within a Symmetric Multiprocessing (SMP) context.
## Contents of this example
Below is a short explanation of the files in the project folder.
```
├── CMakeLists.txt
├── main
│   ├── CMakeLists.txt
│   ├── basic_freertos_smp_usage.h
│   ├── basic_freertos_smp_usage.c
│   ├── create_task_example.c
│   ├── queue_example.c
│   ├── lock_example.c
│   ├── task_notify_example.c
│   └── batch_processing_example.c
├── pytest_smp_examples.py
└── README.md This is the file you are currently reading
```
This example includes 5 parts:
### Creating task example
The first part shows how to create tasks that can be pinned (running with affinity to a specific core) or unpinned (no particular affinity to any core) on ESP32 series CPU cores, using the API function `xTaskCreatePinnedToCore()`.
In this case, there are 4 tasks created in total:
* `pinned_task0_core0` task is created and pinned on core 0
* `pinned_task1_core0` task is also created and pinned on core 0
* `pinned_task2_core1` task is created and pinned on core 1
* `unpinned_task` task is the last one; it is unpinned, which means it can be scheduled to run on any core.
A task can be unpinned by setting the `xCoreID` field to `tskNO_AFFINITY` when calling `xTaskCreatePinnedToCore()`.
#### Example Output
In each task function, the API `esp_cpu_get_core_id()` is called to query the core on which the task is currently running. The console output should show that "pinned_task0_core0" and "pinned_task1_core0" run on core#0, "pinned_task2_core1" runs on core#1, and "unpinned_task" can run on either core#0 or core#1:
```
...
I (2123) create task example: task#0 is running on core#0
I (2133) create task example: task#1 is running on core#0
I (2133) create task example: task#2 is running on core#1
I (2153) create task example: task#3 is running on core#0
I (2283) create task example: task#0 is running on core#0
I (2293) create task example: task#1 is running on core#0
I (2313) create task example: task#2 is running on core#1
I (2323) create task example: task#3 is running on core#0
I (2453) create task example: task#0 is running on core#0
I (2463) create task example: task#1 is running on core#0
I (2483) create task example: task#3 is running on core#0
I (2483) create task example: task#2 is running on core#1
I (2623) create task example: task#0 is running on core#0
I (2633) create task example: task#1 is running on core#0
I (2643) create task example: task#3 is running on core#0
I (2653) create task example: task#2 is running on core#1
I (2793) create task example: task#0 is running on core#0
I (2803) create task example: task#1 is running on core#0
I (2803) create task example: task#3 is running on core#1
...
```
### Queue communication example
The second part shows how to use a FreeRTOS built-in queue to transmit data between tasks. In this example, one task sends a number to a message queue every 250 milliseconds by calling the API `xQueueGenericSend()`, and another task receives data from this queue by calling the API `xQueueReceive()`.
#### Example Output
The example should have the following console output:
```
I (1737813) queue example: sent data = 0
I (1737813) queue example: received data = 0
I (1738063) queue example: sent data = 1
I (1738063) queue example: received data = 1
I (1738313) queue example: sent data = 2
I (1738313) queue example: received data = 2
I (1738563) queue example: sent data = 3
I (1738563) queue example: received data = 3
I (1738813) queue example: sent data = 4
I (1738813) queue example: received data = 4
I (1739063) queue example: sent data = 5
I (1739063) queue example: received data = 5
I (1739313) queue example: sent data = 6
I (1739313) queue example: received data = 6
I (1739563) queue example: sent data = 7
I (1739563) queue example: received data = 7
I (1739813) queue example: sent data = 8
I (1739813) queue example: received data = 8
I (1740063) queue example: sent data = 9
I (1740063) queue example: received data = 9
I (1740313) queue example: sent data = 10
I (1740313) queue example: received data = 10
...
```
### Locks example
In the third part, a simple performance comparison between mutexes, spinlocks and atomic operations is presented, along with an example of using a mutex to protect a shared resource.
To highlight the performance differences, this example creates pairs of tasks that increment a shared counter protected, in turn, by a mutex, by a spinlock, and by declaring it as an atomic variable. Note: if this example runs on a single-core target, only 1 task of each type is created.
The results show that spinlocks are faster than mutexes because they do not trigger context switches, although they are CPU-intensive. Atomic operations are faster still, because they avoid entering and exiting critical sections altogether.
#### Example Output
The example should have the following console output:
```
I (5025) lock example: mutex task took 1562156 us on core1
I (5025) lock example: mutex task took 1567546 us on core0
I (7095) lock example: spinlock task took 73325 us on core0
I (7095) lock example: spinlock task took 68326 us on core1
I (9105) lock example: atomic task took 11806 us on core0
I (9105) lock example: atomic task took 6810 us on core1
I (10105) lock example: mutex task 0 created
I (10105) lock example: task0 read value = 0 on core #0
I (10105) lock example: mutex task 1 created
I (10605) lock example: task0 set value = 1
I (10605) lock example: task1 read value = 1 on core #1
I (11105) lock example: task1 set value = 2
I (11105) lock example: task0 read value = 2 on core #1
I (11605) lock example: task0 set value = 3
I (11605) lock example: task1 read value = 3 on core #1
I (12105) lock example: task1 set value = 4
I (12105) lock example: task0 read value = 4 on core #1
I (12605) lock example: task0 set value = 5
I (12605) lock example: task1 read value = 5 on core #1
I (13105) lock example: task1 set value = 6
...
```
### Task notification example
Two tasks communicate via the FreeRTOS task notification mechanism: one task sends notifications while the other receives them.
#### Example Output
The example should have the following console output:
```
I (392163) task notify example: send_task sends a notification
I (392163) task notify example: 1 tasks pending
I (392163) task notify example: rcv_task is processing this task notification
I (393163) task notify example: send_task sends a notification
I (393163) task notify example: 1 tasks pending
I (393163) task notify example: rcv_task is processing this task notification
I (394163) task notify example: send_task sends a notification
I (394163) task notify example: 1 tasks pending
I (394163) task notify example: rcv_task is processing this task notification
I (395163) task notify example: send_task sends a notification
I (395163) task notify example: 1 tasks pending
I (395163) task notify example: rcv_task is processing this task notification
I (396163) task notify example: send_task sends a notification
I (396163) task notify example: 1 tasks pending
I (396163) task notify example: rcv_task is processing this task notification
...
```
### Batch processing example
In the last part, queues, mutexes, and task notifications are combined to implement a realistic batch-processing workflow.
A task named **rcv_data_task** mimics receiving data that arrives at irregular intervals. Every time a data item is received, it is pushed into a queue and the received-item counter is incremented by 1; once the task has collected 5 data items, it sends a task notification to **proc_data_task** to process this batch of data from the queue. When the latter task finishes processing, it decreases the counter by 5. Because both tasks modify this global counter, the modification is protected by a mutex.
#### Example Output
The example should have the following console output:
```
I (2675163) batch processing example: enqueue data = 43
I (2675563) batch processing example: enqueue data = 29
I (2676013) batch processing example: enqueue data = 8
I (2676463) batch processing example: enqueue data = 56
I (2676873) batch processing example: enqueue data = 19
I (2676873) batch processing example: dequeue data = 43
I (2676873) batch processing example: dequeue data = 29
I (2676883) batch processing example: dequeue data = 8
I (2676883) batch processing example: dequeue data = 56
I (2676883) batch processing example: dequeue data = 19
I (2676893) batch processing example: decrease s_rcv_item_num to 0
I (2677413) batch processing example: enqueue data = 51
I (2677713) batch processing example: enqueue data = 5
I (2678243) batch processing example: enqueue data = 93
I (2678603) batch processing example: enqueue data = 66
I (2679213) batch processing example: enqueue data = 32
I (2679213) batch processing example: dequeue data = 51
I (2679213) batch processing example: dequeue data = 5
I (2679223) batch processing example: dequeue data = 93
I (2679223) batch processing example: dequeue data = 66
I (2679233) batch processing example: dequeue data = 32
I (2679233) batch processing example: decrease s_rcv_item_num to 0
...
```
## How to use this example
This example uses an interactive console component so that you can select the part you would like to run from the terminal. Type 'help' to get the list of commands; use the UP/DOWN arrows to navigate through command history; press TAB while typing a command name to auto-complete. For more information about the interactive console component, please refer to [console](../../console/README.md). The supported commands are:
* **help**: get the list of commands
* **create_task**: run the creating task example
* **queue**: run the queue example
* **lock**: run the locks example
* **task_notification**: run the task notification example
* **batch_processing**: run the batch processing example
Once a component starts running, it stops after about 5 seconds. To extend the running time, modify the value of the macro **COMP_LOOP_PERIOD** in the header file `basic_freertos_smp_usage.h`.

View File

@ -0,0 +1,9 @@
set(srcs "basic_freertos_smp_usage.c"
"create_task_example.c"
"queue_example.c"
"lock_example.c"
"task_notify_example.c"
"batch_processing_example.c")
idf_component_register(SRCS ${srcs}
INCLUDE_DIRS "."
PRIV_REQUIRES console esp_timer)

View File

@ -0,0 +1,96 @@
/*
* SPDX-FileCopyrightText: 2024 Espressif Systems (Shanghai) CO LTD
*
* SPDX-License-Identifier: Unlicense OR CC0-1.0
*/
#include "esp_console.h"
#include "basic_freertos_smp_usage.h"
#include "sdkconfig.h"
static void register_creating_task(void)
{
const esp_console_cmd_t creating_task_cmd = {
.command = "create_task",
.help = "Run the example that demonstrates how to create and run pinned and unpinned tasks",
.hint = NULL,
.func = &comp_creating_task_entry_func,
};
ESP_ERROR_CHECK(esp_console_cmd_register(&creating_task_cmd));
}
static void register_queue(void)
{
const esp_console_cmd_t queue_cmd = {
.command = "queue",
.help = "Run the example that demonstrates how to use queue to communicate between tasks",
.hint = NULL,
.func = &comp_queue_entry_func,
};
ESP_ERROR_CHECK(esp_console_cmd_register(&queue_cmd));
}
static void register_lock(void)
{
const esp_console_cmd_t lock_cmd = {
.command = "lock",
.help = "Run the example that demonstrates how to use mutex and spinlock to protect a shared resource",
.hint = NULL,
.func = &comp_lock_entry_func,
};
ESP_ERROR_CHECK(esp_console_cmd_register(&lock_cmd));
}
static void register_task_notification(void)
{
const esp_console_cmd_t task_notification_cmd = {
.command = "task_notification",
.help = "Run the example that demonstrates how to use task notifications to synchronize tasks",
.hint = NULL,
.func = &comp_task_notification_entry_func,
};
ESP_ERROR_CHECK(esp_console_cmd_register(&task_notification_cmd));
}
static void register_batch_proc_example(void)
{
const esp_console_cmd_t batch_proc_example_cmd = {
.command = "batch_processing",
.help = "Run the example that combines queue, mutex, task notification together",
.hint = NULL,
.func = &comp_batch_proc_example_entry_func,
};
ESP_ERROR_CHECK(esp_console_cmd_register(&batch_proc_example_cmd));
}
static void config_console(void)
{
esp_console_repl_t *repl = NULL;
esp_console_repl_config_t repl_config = ESP_CONSOLE_REPL_CONFIG_DEFAULT();
/* Prompt to be printed before each line.
* This can be customized, made dynamic, etc.
*/
repl_config.prompt = PROMPT_STR ">";
repl_config.max_cmdline_length = 1024;
esp_console_dev_uart_config_t uart_config = ESP_CONSOLE_DEV_UART_CONFIG_DEFAULT();
ESP_ERROR_CHECK(esp_console_new_repl_uart(&uart_config, &repl_config, &repl));
esp_console_register_help_command();
// register entry functions for each component
register_creating_task();
register_queue();
register_lock();
register_task_notification();
register_batch_proc_example();
ESP_ERROR_CHECK(esp_console_start_repl(repl));
printf("\n"
"Please type the component you would like to run.\n");
}
void app_main(void)
{
config_console();
}

View File

@ -0,0 +1,21 @@
/*
* SPDX-FileCopyrightText: 2024 Espressif Systems (Shanghai) CO LTD
*
* SPDX-License-Identifier: Unlicense OR CC0-1.0
*/
#pragma once
/*------------------------------------------------------------*/
/* Macros */
#define PROMPT_STR CONFIG_IDF_TARGET
#define TASK_PRIO_3 3
#define TASK_PRIO_2 2
#define COMP_LOOP_PERIOD 5000
#define SEM_CREATE_ERR_STR "semaphore creation failed"
#define QUEUE_CREATE_ERR_STR "queue creation failed"
int comp_creating_task_entry_func(int argc, char **argv);
int comp_queue_entry_func(int argc, char **argv);
int comp_lock_entry_func(int argc, char **argv);
int comp_task_notification_entry_func(int argc, char **argv);
int comp_batch_proc_example_entry_func(int argc, char **argv);

View File

@ -0,0 +1,122 @@
/*
* SPDX-FileCopyrightText: 2024 Espressif Systems (Shanghai) CO LTD
*
* SPDX-License-Identifier: Unlicense OR CC0-1.0
*/
#include "freertos/FreeRTOS.h"
#include "esp_log.h"
#include "basic_freertos_smp_usage.h"
#define DATA_BATCH_SIZE 5
static QueueHandle_t msg_queue;
static const uint8_t msg_queue_len = 10;
static SemaphoreHandle_t s_mutex; // mutex to protect shared resource "s_rcv_item_num"
static volatile int s_rcv_item_num; // received data item number
static volatile bool timed_out;
const static char *TAG = "batch processing example";
/* This example describes a realistic scenario with 2 tasks: one receives external data that arrives at irregular intervals,
and the other is responsible for processing the received data items. Every 5 data items form a batch
and are meant to be processed together. Once the receiving task obtains a data item, it increments a global variable
named s_rcv_item_num by 1, then pushes the data into a queue whose maximum size is 10; when s_rcv_item_num is not less
than 5, the receiving task sends a task notification to the processing task, which is blocked waiting for this signal to
proceed. The processing task dequeues up to 5 data items from the queue, processes them, and finally decreases s_rcv_item_num accordingly.
Please refer to README.md for more details.
*/
static void rcv_data_task(void *arg)
{
int random_delay_ms;
int data;
TaskHandle_t proc_data_task_hdl = (TaskHandle_t)arg;
while (!timed_out) {
// random delay to mimic this thread receives data irregularly
data = rand() % 100;
random_delay_ms = (rand() % 500 + 200);
vTaskDelay(random_delay_ms / portTICK_PERIOD_MS);
// increase receive item num by 1
if (xSemaphoreTake(s_mutex, portMAX_DELAY) == pdTRUE) {
s_rcv_item_num += 1;
xSemaphoreGive(s_mutex);
}
// enqueue the received data
(void)xQueueGenericSend(msg_queue, (void *)&data, portMAX_DELAY, queueSEND_TO_BACK);
ESP_LOGI(TAG, "enqueue data = %d", data);
// if s_rcv_item_num >= batch size, send task notification to proc thread to process them together
if (s_rcv_item_num >= DATA_BATCH_SIZE) {
xTaskNotifyGive(proc_data_task_hdl);
}
}
vTaskDelete(NULL);
}
static void proc_data_task(void *arg)
{
int rcv_data_buffer[DATA_BATCH_SIZE] ;
int rcv_item_num;
int data_idx;
while (!timed_out) {
// blocking wait for task notification
while (ulTaskNotifyTake(pdFALSE, portMAX_DELAY)) {
// every time this task receives notification, reset received data item number
rcv_item_num = 0;
for (data_idx = 0; data_idx < DATA_BATCH_SIZE; data_idx++) {
// keep reading message queue until it's empty
if (xQueueReceive(msg_queue, (void *)&rcv_data_buffer[data_idx], 0) == pdTRUE) {
ESP_LOGI(TAG, "dequeue data = %d", rcv_data_buffer[data_idx]);
rcv_item_num += 1;
} else {
break;
}
}
// mimic processing the data in the buffer, then clear it
for (data_idx = 0; data_idx < rcv_item_num; data_idx++) {
rcv_data_buffer[data_idx] = 0;
}
// decrease s_rcv_item_num by the number of items actually dequeued
if (xSemaphoreTake(s_mutex, portMAX_DELAY) == pdTRUE) {
s_rcv_item_num -= rcv_item_num;
xSemaphoreGive(s_mutex);
ESP_LOGI(TAG, "decrease s_rcv_item_num to %d", s_rcv_item_num);
}
}
}
vTaskDelete(NULL);
}
// batch processing example: demonstrate how to use task notification to implement batch processing
// use queue to transmit data between tasks, and use mutex to protect a shared global number
int comp_batch_proc_example_entry_func(int argc, char **argv)
{
timed_out = false;
s_mutex = xSemaphoreCreateMutex();
if (s_mutex == NULL) {
ESP_LOGE(TAG, SEM_CREATE_ERR_STR);
return 1;
}
msg_queue = xQueueGenericCreate(msg_queue_len, sizeof(int), queueQUEUE_TYPE_SET);
if (msg_queue == NULL) {
ESP_LOGE(TAG, QUEUE_CREATE_ERR_STR);
return 1;
}
TaskHandle_t proc_data_task_hdl;
xTaskCreatePinnedToCore(proc_data_task, "proc_data_task", 4096, NULL, TASK_PRIO_3, &proc_data_task_hdl, tskNO_AFFINITY);
xTaskCreatePinnedToCore(rcv_data_task, "rcv_data_task", 4096, proc_data_task_hdl, TASK_PRIO_3, NULL, tskNO_AFFINITY);
// time out and stop running after COMP_LOOP_PERIOD milliseconds
vTaskDelay(pdMS_TO_TICKS(COMP_LOOP_PERIOD));
timed_out = true;
// delay to let tasks finish the last loop
vTaskDelay(1500 / portTICK_PERIOD_MS);
return 0;
}

View File

@ -0,0 +1,62 @@
/*
* SPDX-FileCopyrightText: 2024 Espressif Systems (Shanghai) CO LTD
*
* SPDX-License-Identifier: Unlicense OR CC0-1.0
*/
#include "freertos/FreeRTOS.h"
#include "esp_log.h"
#include "basic_freertos_smp_usage.h"
#define SPIN_ITER 350000 //actual CPU cycles consumed will depend on compiler optimization
#define CORE0 0
// define xCoreID CORE1 as 1 only if this is a multi-core target, otherwise define it as tskNO_AFFINITY
#define CORE1 ((CONFIG_FREERTOS_NUMBER_OF_CORES > 1) ? 1 : tskNO_AFFINITY)
static volatile bool timed_out;
const static char *TAG = "create task example";
static void spin_iteration(int spin_iter_num)
{
for (int i = 0; i < spin_iter_num; i++) {
__asm__ __volatile__("NOP");
}
}
static void spin_task(void *arg)
{
// the task ID was passed by value through the void pointer argument, cast it back to int
int task_id = (int)arg;
ESP_LOGI(TAG, "created task#%d", task_id);
while (!timed_out) {
int core_id = esp_cpu_get_core_id();
ESP_LOGI(TAG, "task#%d is running on core#%d", task_id, core_id);
// consume some CPU cycles to keep Core#0 a little busy, so task3 has opportunity to be scheduled on Core#1
spin_iteration(SPIN_ITER);
vTaskDelay(pdMS_TO_TICKS(150));
}
vTaskDelete(NULL);
}
// Creating task example: show how to create pinned and unpinned tasks on CPU cores
int comp_creating_task_entry_func(int argc, char **argv)
{
timed_out = false;
// pin 2 tasks on same core and observe in-turn execution,
// and pin another task on the other core to observe "simultaneous" execution
int task_id0 = 0, task_id1 = 1, task_id2 = 2, task_id3 = 3;
xTaskCreatePinnedToCore(spin_task, "pinned_task0_core0", 4096, (void*)task_id0, TASK_PRIO_3, NULL, CORE0);
xTaskCreatePinnedToCore(spin_task, "pinned_task1_core0", 4096, (void*)task_id1, TASK_PRIO_3, NULL, CORE0);
xTaskCreatePinnedToCore(spin_task, "pinned_task2_core1", 4096, (void*)task_id2, TASK_PRIO_3, NULL, CORE1);
// Create an unpinned task with xCoreID = tskNO_AFFINITY, which can be scheduled on any core; the scheduler may move it between cores depending on the workload
xTaskCreatePinnedToCore(spin_task, "unpinned_task", 4096, (void*)task_id3, TASK_PRIO_2, NULL, tskNO_AFFINITY);
// time out and stop running after 5 seconds
vTaskDelay(pdMS_TO_TICKS(COMP_LOOP_PERIOD));
timed_out = true;
// delay to let tasks finish the last loop
vTaskDelay(500 / portTICK_PERIOD_MS);
return 0;
}

View File

@ -0,0 +1,171 @@
/*
* SPDX-FileCopyrightText: 2024 Espressif Systems (Shanghai) CO LTD
*
* SPDX-License-Identifier: Unlicense OR CC0-1.0
*/
#include <stdatomic.h>
#include "freertos/FreeRTOS.h"
#include "esp_log.h"
#include "esp_timer.h"
#include "basic_freertos_smp_usage.h"
#define SHARE_RES_THREAD_NUM 2
#define ITERATION_NUMBER 100000
// declare a static global integer as a protected shared resource that is accessible to multiple tasks
static volatile int s_global_num = 0;
static atomic_int s_atomic_global_num;
static SemaphoreHandle_t s_mutex;
static portMUX_TYPE s_spinlock = portMUX_INITIALIZER_UNLOCKED;
static volatile bool timed_out;
const static char *TAG = "lock example";
// Take a mutex to protect the shared resource. If mutex is already taken, this task will be blocked until it is available;
// when the mutex is available, FreeRTOS will reschedule this task and this task can further access the shared resource
static void inc_num_mutex_iter(void *arg)
{
int core_id = esp_cpu_get_core_id();
int64_t start_time, end_time, duration = 0;
start_time = esp_timer_get_time();
while (s_global_num < ITERATION_NUMBER) {
if (xSemaphoreTake(s_mutex, portMAX_DELAY) == pdTRUE) {
s_global_num++;
xSemaphoreGive(s_mutex);
}
}
end_time = esp_timer_get_time();
duration = end_time - start_time;
ESP_LOGI(TAG, "mutex task took %lld us on core%d", duration, core_id);
vTaskDelete(NULL);
}
// Enter a critical section, taking a spinlock to protect the shared resource. If the spinlock is already taken, this task busy-waits until it is available.
// In contrast to the mutex, interrupts are disabled inside a critical section, which means nothing can interrupt the task and the FreeRTOS scheduler
// will never run and reschedule it.
static void inc_num_spinlock_iter(void *arg)
{
int core_id = esp_cpu_get_core_id();
int64_t start_time, end_time, duration = 0;
start_time = esp_timer_get_time();
while (s_global_num < ITERATION_NUMBER) {
portENTER_CRITICAL(&s_spinlock);
s_global_num++;
portEXIT_CRITICAL(&s_spinlock);
}
end_time = esp_timer_get_time();
duration = end_time - start_time;
ESP_LOGI(TAG, "spinlock task took %lld us on core%d", duration, core_id);
vTaskDelete(NULL);
}
static void inc_num_atomic_iter(void *arg)
{
int core_id = esp_cpu_get_core_id();
int64_t start_time, end_time, duration = 0;
start_time = esp_timer_get_time();
while (atomic_load(&s_atomic_global_num) < ITERATION_NUMBER) {
atomic_fetch_add(&s_atomic_global_num, 1);
}
end_time = esp_timer_get_time();
duration = end_time - start_time;
ESP_LOGI(TAG, "atomic task took %lld us on core%d", duration, core_id);
vTaskDelete(NULL);
}
static void inc_num_mutex(void *arg)
{
int task_index = *(int*)arg;
ESP_LOGI(TAG, "mutex task %d created", task_index);
while (!timed_out) {
xSemaphoreTake(s_mutex, portMAX_DELAY);
int core_id = esp_cpu_get_core_id();
ESP_LOGI(TAG, "task%d read value = %d on core #%d", task_index, s_global_num, core_id);
s_global_num++;
// delay for 500 ms
vTaskDelay(pdMS_TO_TICKS(500));
xSemaphoreGive(s_mutex);
ESP_LOGI(TAG, "task%d set value = %d", task_index, s_global_num);
}
vTaskDelete(NULL);
}
/* Lock example: show how to use mutexes and spinlocks to protect shared resources
Firstly, a shared resource `s_global_num` is protected by a mutex, and 2 tasks running
the task function `inc_num_mutex_iter` take turns to access and increment this number.
Once the value reaches 100000, the elapsed time since the start is measured and
recorded, then both tasks delete themselves.
Next, `s_global_num` is reset and another 2 tasks, running the task function
`inc_num_spinlock_iter`, access and increment this shared resource until it reaches
100000, under the protection of a spinlock. These 2 tasks are expected to have
less time overhead than the previous 2 because a spinlock involves less context
switching.
After that, another 2 tasks are created to complete the same
job, but the shared resource is an atomic integer. They should have a shorter
running time than the spinlock tasks, because atomic operations are a lock-free technique
that saves the time of entering and exiting critical sections.
Note: if this example runs on a single core, only 1 task of each type is created.
Finally, the example shows the shared resource `s_global_num` protected by a mutex
and accessed by multiple tasks in turn. */
int comp_lock_entry_func(int argc, char **argv)
{
s_global_num = 0;
int thread_id;
int core_id;
timed_out = false;
// create mutex
s_mutex = xSemaphoreCreateMutex();
if (s_mutex == NULL) {
ESP_LOGE(TAG, SEM_CREATE_ERR_STR);
return 1;
}
// create 2 tasks accessing a shared resource protected by mutex
for (core_id = 0; core_id < CONFIG_FREERTOS_NUMBER_OF_CORES; core_id++) {
xTaskCreatePinnedToCore(inc_num_mutex_iter, NULL, 4096, NULL, TASK_PRIO_3, NULL, core_id);
}
// reset s_global_num
vTaskDelay(2000 / portTICK_PERIOD_MS);
s_global_num = 0;
// create 2 tasks accessing a shared resource protected by spinlock
for (core_id = 0; core_id < CONFIG_FREERTOS_NUMBER_OF_CORES; core_id++) {
xTaskCreatePinnedToCore(inc_num_spinlock_iter, NULL, 4096, NULL, TASK_PRIO_3, NULL, core_id);
}
vTaskDelay(2000 / portTICK_PERIOD_MS);
// create 2 tasks accessing an atomic shared resource
atomic_init(&s_atomic_global_num, 0);
for (core_id = 0; core_id < CONFIG_FREERTOS_NUMBER_OF_CORES; core_id++) {
xTaskCreatePinnedToCore(inc_num_atomic_iter, NULL, 4096, NULL, TASK_PRIO_3, NULL, core_id);
}
// reset s_global_num
vTaskDelay(1000 / portTICK_PERIOD_MS);
s_global_num = 0;
// create 2 tasks to increase a shared number in turn; give each task a stable
// pointer to its own index (passing &thread_id directly would race with the
// loop variable changing before the task reads it)
static int task_ids[SHARE_RES_THREAD_NUM];
for (thread_id = 0; thread_id < SHARE_RES_THREAD_NUM; thread_id++) {
task_ids[thread_id] = thread_id;
xTaskCreatePinnedToCore(inc_num_mutex, NULL, 4096, &task_ids[thread_id], TASK_PRIO_3, NULL, tskNO_AFFINITY);
}
// time out and stop running after 5 seconds
vTaskDelay(pdMS_TO_TICKS(COMP_LOOP_PERIOD));
timed_out = true;
// delay to let tasks finish the last loop
vTaskDelay(1500 / portTICK_PERIOD_MS);
return 0;
}

View File

@ -0,0 +1,71 @@
/*
* SPDX-FileCopyrightText: 2024 Espressif Systems (Shanghai) CO LTD
*
* SPDX-License-Identifier: Unlicense OR CC0-1.0
*/
#include "freertos/FreeRTOS.h"
#include "esp_log.h"
#include "basic_freertos_smp_usage.h"
static QueueHandle_t msg_queue;
static const uint8_t msg_queue_len = 40;
static volatile bool timed_out;
const static char *TAG = "queue example";
static void print_q_msg(void *arg)
{
int data; // data type must match the queue item type
int to_wait_ms = 1000; // maximum blocking wait time in milliseconds
const TickType_t xTicksToWait = pdMS_TO_TICKS(to_wait_ms);
while (!timed_out) {
if (xQueueReceive(msg_queue, (void *)&data, xTicksToWait) == pdTRUE) {
ESP_LOGI(TAG, "received data = %d", data);
} else {
ESP_LOGI(TAG, "Did not receive data in the past %d ms", to_wait_ms);
}
}
vTaskDelete(NULL);
}
static void send_q_msg(void *arg)
{
int sent_num = 0;
while (!timed_out) {
// Add the item to the back of the queue; block indefinitely if the queue is full
if (xQueueGenericSend(msg_queue, (void *)&sent_num, portMAX_DELAY, queueSEND_TO_BACK) != pdTRUE) {
ESP_LOGI(TAG, "Queue full\n");
}
ESP_LOGI(TAG, "sent data = %d", sent_num);
sent_num++;
// send an item every 250 ms
vTaskDelay(250 / portTICK_PERIOD_MS);
}
vTaskDelete(NULL);
}
// Queue example: illustrate how queues can be used to synchronize between tasks
int comp_queue_entry_func(int argc, char **argv)
{
timed_out = false;
msg_queue = xQueueGenericCreate(msg_queue_len, sizeof(int), queueQUEUE_TYPE_SET);
if (msg_queue == NULL) {
ESP_LOGE(TAG, QUEUE_CREATE_ERR_STR);
return 1;
}
xTaskCreatePinnedToCore(print_q_msg, "print_q_msg", 4096, NULL, TASK_PRIO_3, NULL, tskNO_AFFINITY);
xTaskCreatePinnedToCore(send_q_msg, "send_q_msg", 4096, NULL, TASK_PRIO_3, NULL, tskNO_AFFINITY);
// time out and stop running after 5 seconds
vTaskDelay(pdMS_TO_TICKS(COMP_LOOP_PERIOD));
timed_out = true;
// delay to let tasks finish the last loop
vTaskDelay(500 / portTICK_PERIOD_MS);
return 0;
}

View File

@ -0,0 +1,62 @@
/*
* SPDX-FileCopyrightText: 2024 Espressif Systems (Shanghai) CO LTD
*
* SPDX-License-Identifier: Unlicense OR CC0-1.0
*/
#include "freertos/FreeRTOS.h"
#include "esp_log.h"
#include "basic_freertos_smp_usage.h"
static volatile bool timed_out;
const static char *TAG = "task notify example";
/* In this example, one task waits for a synchronization signal from another task before it starts processing.
Task synchronization could also be achieved with `xSemaphoreTake`, but FreeRTOS suggests using task notifications
as a faster and more lightweight alternative.
*/
static void notification_rcv_func(void *arg)
{
int pending_notification_task_num;
while (!timed_out) {
pending_notification_task_num = ulTaskNotifyTake(pdTRUE, portMAX_DELAY);
ESP_LOGI(TAG, "%d tasks pending", pending_notification_task_num);
while (pending_notification_task_num > 0) {
// do something to process the received notification
ESP_LOGI(TAG, "rcv_task is processing this task notification");
pending_notification_task_num--;
}
}
vTaskDelete(NULL);
}
static void notification_send_func(void *arg)
{
TaskHandle_t rcv_task_hdl = (TaskHandle_t)arg;
// send a task notification every 1000 ms
while (!timed_out) {
xTaskNotifyGive(rcv_task_hdl);
ESP_LOGI(TAG, "send_task sends a notification");
vTaskDelay(1000 / portTICK_PERIOD_MS);
}
vTaskDelete(NULL);
}
int comp_task_notification_entry_func(int argc, char **argv)
{
timed_out = false;
TaskHandle_t rcv_task_hdl;
xTaskCreatePinnedToCore(notification_rcv_func, NULL, 8192, NULL, TASK_PRIO_3, &rcv_task_hdl, tskNO_AFFINITY);
xTaskCreatePinnedToCore(notification_send_func, NULL, 8192, rcv_task_hdl, TASK_PRIO_3, NULL, tskNO_AFFINITY);
// time out and stop running after 5 seconds
vTaskDelay(pdMS_TO_TICKS(COMP_LOOP_PERIOD));
timed_out = true;
// delay to let tasks finish the last loop
vTaskDelay(500 / portTICK_PERIOD_MS);
return 0;
}

View File

@ -0,0 +1,92 @@
# SPDX-FileCopyrightText: 2022-2024 Espressif Systems (Shanghai) CO LTD
# SPDX-License-Identifier: CC0-1.0
import pytest
from pytest_embedded_idf.dut import IdfDut
@pytest.mark.esp32c3
@pytest.mark.esp32s3
@pytest.mark.generic
def test_creating_task(
dut: IdfDut
) -> None:
dut.expect(r'esp32(?:[a-zA-Z]\d)?>')
# test creating_task
dut.write('create_task')
dut.expect('create task example: task#0 is running on core#0')
dut.expect('create task example: task#1 is running on core#0')
dut.expect(r'create task example: task#2 is running on core#\d')
dut.expect(r'create task example: task#3 is running on core#\d')
@pytest.mark.esp32c3
@pytest.mark.esp32s3
@pytest.mark.generic
def test_queue(
dut: IdfDut
) -> None:
dut.expect(r'esp32(?:[a-zA-Z]\d)?>')
# test queue tasks
verify_amount = 5
dut.write('queue')
dut.expect('queue example: sent data')
dut.expect('queue example: received data')
for _ in range(verify_amount):
data = int(dut.expect(r'queue example: sent data = (\d+)').group(1))
dut.expect('queue example: received data = ' + str(data))
@pytest.mark.esp32c3
@pytest.mark.esp32s3
@pytest.mark.generic
def test_locks(
dut: IdfDut
) -> None:
dut.expect(r'esp32(?:[a-zA-Z]\d)?>')
# test locks
dut.write('lock')
dut.expect(r'lock example: mutex task took \d+ us on core\d')
dut.expect(r'lock example: spinlock task took \d+ us on core\d')
dut.expect(r'lock example: atomic task took \d+ us on core\d')
dut.expect(r'task0 read value = 0 on core #\d')
dut.expect('task0 set value = 1')
dut.expect(r'task\d read value = 1 on core #\d')
dut.expect(r'task\d set value = 2')
dut.expect(r'task0 read value = 2 on core #\d')
@pytest.mark.esp32c3
@pytest.mark.esp32s3
@pytest.mark.generic
def test_task_notification(
dut: IdfDut
) -> None:
dut.expect(r'esp32(?:[a-zA-Z]\d)?>')
# test task notification
dut.write('task_notification')
dut.expect('task notify example: send_task sends a notification')
dut.expect('task notify example: 1 tasks pending')
dut.expect('task notify example: rcv_task is processing this task notification')
@pytest.mark.esp32c3
@pytest.mark.esp32s3
@pytest.mark.generic
def test_batch_proc_example(
dut: IdfDut
) -> None:
dut.expect(r'esp32(?:[a-zA-Z]\d)?>')
# test batch processing example
dut.write('batch_processing')
batch_size = 5
data_buf = [None] * batch_size
for i in range(batch_size):
res = dut.expect(r'batch processing example: enqueue data = (\d+)')
data_buf[i] = int(res.group(1)) if res else None
for i in range(batch_size):
expected_string = 'batch processing example: dequeue data = ' + str(data_buf[i])
dut.expect(expected_string)
dut.expect(r'batch processing example: decrease s_rcv_item_num to \d')