Memory To Memory - Multi Block
This sample uses the DMA Multi Block feature to move data from memory to memory.
When the total size of the data to transfer exceeds 65535, the Multi Block feature can be used.
The Multi Block feature transfers data through a block link list: a large data set is split into several link list items, so that each link list item carries no more than 65535 units of data.
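As a minimal sketch of the splitting arithmetic (the helper name and constant below are illustrative, not part of the sample sources), the number of link list items needed for a given total size can be computed like this:

#include <stdint.h>

#define MAX_BLOCK_SIZE 65535 /* upper limit of a single block transfer */

/* Illustrative helper: how many link list items are needed so that
 * each item carries at most MAX_BLOCK_SIZE units of data. */
static uint32_t lli_item_count(uint32_t total_size)
{
    return (total_size + MAX_BLOCK_SIZE - 1) / MAX_BLOCK_SIZE;
}

For example, a 100000-byte transfer with byte-wide data needs lli_item_count(100000) = 2 items: one of 65535 bytes and one of 34465 bytes.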
Requirements
The sample supports the following development kits:
Hardware Platforms | Board Name
---|---
RTL8752H HDK | RTL8752H EVB
For more information, please refer to Quick Start.
Configurations
The macros that can be configured in this sample are as follows:
GDMA_INTERRUPT_MODE: selects the DMA interrupt mode. The available values are:
- INT_TRANSFER: enables the GDMA_INT_Transfer interrupt.
- INT_BLOCK: enables the GDMA_INT_Block interrupt.
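The actual definitions live in the sample sources; a sketch of how this selection is typically wired up (the numeric values are assumptions for illustration):

#define INT_TRANSFER 0
#define INT_BLOCK    1

/* Select the DMA interrupt mode used by this sample. */
#define GDMA_INTERRUPT_MODE INT_TRANSFER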
Build and Download
The project paths of this sample are as follows:
Project file: board\evb\io_sample\GDMA\Mem2Mem_multi_block\mdk
Project file: board\evb\io_sample\GDMA\Mem2Mem_multi_block\gcc
Please follow the steps below to build and run the sample:
1. Open the project file.
2. Follow the steps in Build APP Image in Quick Start to build the target file.
3. After the build succeeds, the app bin app_MP_xxx.bin is generated under mdk\bin or gcc\bin.
4. Press the reset button to start running.
Test Verification
After the EVB resets and boots, the DMA starts moving data. After the data transfer completes, the transfer completion information is shown in the Debug Analyzer tool.
If INT_TRANSFER is enabled, the following log is printed:

[io_gdma] io_handle_gdma_msg: GDMA transfer data completion!
If INT_BLOCK is enabled, the following log is printed each time a block transfer completes; the total number of prints equals the value of GDMA_MULTIBLOCK_SIZE:

[io_gdma] io_handle_gdma_msg: GDMA block0 transfer data completion!
[io_gdma] io_handle_gdma_msg: GDMA block1 transfer data completion!
...
[io_gdma] io_handle_gdma_msg: GDMA blockx transfer data completion!
Remark
If the transferred data is detected to be wrong, the erroneous data information is shown in the Debug Analyzer tool.
Code Overview
This chapter is divided into the following parts:
Source Code Directory
Project directory: sdk\board\evb\io_sample\GDMA\Mem2Mem_multi_block
Source code directory: sdk\src\sample\io_sample\GDMA\Mem2Mem_multi_block
The project file code structure of this project is as follows:

└── Project: adc_continuous_gdma
    └── secure_only_app
        ├── include
        │   ├── app_define.h
        │   └── rom_uuid.h
        ├── cmsis          includes CMSIS header files and startup files
        │   ├── overlay_mgr.c
        │   ├── system_rtl876x.c
        │   └── startup_rtl876x.s
        ├── lib            includes all binary symbol files that user application is built on
        │   ├── rtl8752h_sdk.lib
        │   ├── gap_utils.lib
        │   └── ROM.lib
        ├── peripheral     includes all peripheral drivers and module code used by the application
        │   ├── rtl876x_rcc.c
        │   ├── rtl876x_nvic.c
        │   └── rtl876x_gdma.c
        ├── profile
        └── app            includes the ble_peripheral user application implementation
            ├── main.c
            ├── ancs.c
            ├── app.c
            ├── app_task.c
            └── io_gdma.c
Initialization
When the EVB resets and boots, the main() function is called and the following flow is executed:
int main(void)
{
extern uint32_t random_seed_value;
srand(random_seed_value);
board_init();
le_gap_init(APP_MAX_LINKS);
gap_lib_init();
app_le_gap_init();
app_le_profile_init();
pwr_mgr_init();
task_init();
os_sched_start();
return 0;
}
Remark
le_gap_init(), gap_lib_init(), app_le_gap_init(), and app_le_profile_init() are initialization functions related to the privacy management module; refer to the initialization flow described in LE Peripheral Privacy.
The peripheral-related initialization flow is as follows:
1. After os_sched_start() starts task scheduling, driver_init is executed in the app_main_task main task to initialize and configure the peripheral drivers.
2. In driver_init, driver_gdma_init is executed. This function initializes the DMA peripheral and includes the following flow (the LLI structure it fills is sketched after this list, followed by the full code):
   - For the basic DMA initialization, refer to the Initialization section of Memory to Memory - Single Block.
   - Set the source and destination address values in the LLI structure to be loaded automatically after each block transfer.
   - Enable Multi-block transfer.
   - Set the start address of the LLI-type structure used for the transfer.
   - Configure the source address, destination address, linked-list pointer, and control register in the LLI structure for each block transfer.
   - If INT_TRANSFER is enabled, enable the GDMA_INT_Transfer interrupt; if INT_BLOCK is enabled, enable the GDMA_INT_Block interrupt.
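For orientation before the code, each LLI entry mirrors the GDMA block-descriptor registers. A sketch of its layout, inferred from the fields the sample fills in (the authoritative definition is in the SDK's rtl876x_gdma.h):

typedef struct
{
    uint32_t SAR;      /* source address of this block */
    uint32_t DAR;      /* destination address of this block */
    uint32_t LLP;      /* address of the next LLI; 0 marks the last block */
    uint32_t CTL_LOW;  /* low 32 bits of the channel CTL register */
    uint32_t CTL_HIGH; /* high 32 bits of the CTL register: the block size */
} GDMA_LLIDef;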
void driver_gdma_init(void)
{
    uint32_t i, j = 0;

    /*--------------Initialize test buffer---------------------*/
    for (i = 0; i < GDMA_TRANSFER_SIZE; i++)
    {
        for (j = 0; j < GDMA_MULTIBLOCK_SIZE; j++)
        {
            GDMA_Send_Buffer[j][i] = (i + j) & 0xff;
            GDMA_Recv_Buffer[j][i] = 0;
        }
    }

    RCC_PeriphClockCmd(APBPeriph_GDMA, APBPeriph_GDMA_CLOCK, ENABLE);

    GDMA_InitTypeDef GDMA_InitStruct;
    GDMA_StructInit(&GDMA_InitStruct);
    GDMA_InitStruct.GDMA_ChannelNum          = GDMA_CHANNEL_NUM;
    GDMA_InitStruct.GDMA_DIR                 = GDMA_DIR_MemoryToMemory;
    GDMA_InitStruct.GDMA_BufferSize          = GDMA_TRANSFER_SIZE; //determine total transfer size
    GDMA_InitStruct.GDMA_SourceInc           = DMA_SourceInc_Inc;
    GDMA_InitStruct.GDMA_DestinationInc      = DMA_DestinationInc_Inc;
    GDMA_InitStruct.GDMA_SourceDataSize      = GDMA_DataSize_Byte;
    GDMA_InitStruct.GDMA_DestinationDataSize = GDMA_DataSize_Byte;
    GDMA_InitStruct.GDMA_SourceMsize         = GDMA_Msize_1;
    GDMA_InitStruct.GDMA_DestinationMsize    = GDMA_Msize_1;
    GDMA_InitStruct.GDMA_SourceAddr          = (uint32_t)GDMA_Send_Buffer;
    GDMA_InitStruct.GDMA_DestinationAddr     = (uint32_t)GDMA_Recv_Buffer;
    GDMA_InitStruct.GDMA_Multi_Block_Mode    = GDMA_MULTIBLOCK_MODE; //LLI_TRANSFER;
    GDMA_InitStruct.GDMA_Multi_Block_En      = 1;
    GDMA_InitStruct.GDMA_Multi_Block_Struct  = (uint32_t)GDMA_LLIStruct;

    for (uint32_t i = 0; i < GDMA_MULTIBLOCK_SIZE; i++)
    {
        if (i == GDMA_MULTIBLOCK_SIZE - 1)
        {
            GDMA_LLIStruct[i].SAR = (uint32_t)GDMA_Send_Buffer[i];
            GDMA_LLIStruct[i].DAR = (uint32_t)GDMA_Recv_Buffer[i];
            GDMA_LLIStruct[i].LLP = 0;
            /* Configure low 32 bit of CTL register */
            GDMA_LLIStruct[i].CTL_LOW = BIT(0)
                                        | (GDMA_InitStruct.GDMA_DestinationDataSize << 1)
                                        | (GDMA_InitStruct.GDMA_SourceDataSize << 4)
                                        | (GDMA_InitStruct.GDMA_DestinationInc << 7)
                                        | (GDMA_InitStruct.GDMA_SourceInc << 9)
                                        | (GDMA_InitStruct.GDMA_DestinationMsize << 11)
                                        | (GDMA_InitStruct.GDMA_SourceMsize << 14)
                                        | (GDMA_InitStruct.GDMA_DIR << 20);
            /* Configure high 32 bit of CTL register */
            GDMA_LLIStruct[i].CTL_HIGH = GDMA_InitStruct.GDMA_BufferSize;
        }
        else
        {
            GDMA_LLIStruct[i].SAR = (uint32_t)GDMA_Send_Buffer[i];
            GDMA_LLIStruct[i].DAR = (uint32_t)GDMA_Recv_Buffer[i];
            GDMA_LLIStruct[i].LLP = (uint32_t)&GDMA_LLIStruct[i + 1];
            /* Configure low 32 bit of CTL register */
            GDMA_LLIStruct[i].CTL_LOW = BIT(0)
                                        | (GDMA_InitStruct.GDMA_DestinationDataSize << 1)
                                        | (GDMA_InitStruct.GDMA_SourceDataSize << 4)
                                        | (GDMA_InitStruct.GDMA_DestinationInc << 7)
                                        | (GDMA_InitStruct.GDMA_SourceInc << 9)
                                        | (GDMA_InitStruct.GDMA_DestinationMsize << 11)
                                        | (GDMA_InitStruct.GDMA_SourceMsize << 14)
                                        | (GDMA_InitStruct.GDMA_DIR << 20)
                                        | (GDMA_InitStruct.GDMA_Multi_Block_Mode & LLP_SELECTED_BIT);
            /* Configure high 32 bit of CTL register */
            GDMA_LLIStruct[i].CTL_HIGH = GDMA_InitStruct.GDMA_BufferSize;
        }
    }
    GDMA_Init(GDMA_Channel, &GDMA_InitStruct);

    /* GDMA irq config */
    NVIC_InitTypeDef NVIC_InitStruct;
    NVIC_InitStruct.NVIC_IRQChannel         = GDMA_Channel_IRQn;
    NVIC_InitStruct.NVIC_IRQChannelCmd      = (FunctionalState)ENABLE;
    NVIC_InitStruct.NVIC_IRQChannelPriority = 3;
    NVIC_Init(&NVIC_InitStruct);

    /** Either the single-block transfer complete interrupt or the total transfer complete interrupt can be chosen.
      * Synchronized modifications are also required in GDMA_Channel_Handler if the single-block transfer interrupt is used.
      */
#if (GDMA_INTERRUPT_MODE == INT_TRANSFER)
    GDMA_INTConfig(GDMA_CHANNEL_NUM, GDMA_INT_Transfer, ENABLE);
#elif (GDMA_INTERRUPT_MODE == INT_BLOCK)
    GDMA_INTConfig(GDMA_CHANNEL_NUM, GDMA_INT_Block, ENABLE);
#endif
}
Functional Implementation
os_sched_start() is executed in the main function to start task scheduling. When the stack is ready, GDMA_Cmd() is executed in the app_handle_dev_state_evt function to start the DMA transfer.

void app_handle_dev_state_evt(T_GAP_DEV_STATE new_state, uint16_t cause)
{
    ...
    if (gap_dev_state.gap_init_state != new_state.gap_init_state)
    {
        if (new_state.gap_init_state == GAP_INIT_STATE_STACK_READY)
        {
            APP_PRINT_INFO0("GAP stack ready");
            /*stack ready*/
            GDMA_Cmd(GDMA_CHANNEL_NUM, ENABLE);
        }
    }
    ...
}
If INT_TRANSFER is enabled, the GDMA_INT_Transfer interrupt is triggered when the transfer completes, and the interrupt handler is entered:
1. Disable the DMA GDMA_INT_Transfer interrupt.
2. Define the message type IO_MSG_TYPE_GDMA and send the msg to the task; the message data is processed in the main task.
3. Clear the DMA GDMA_INT_Transfer interrupt pending bit.
4. Execute io_handle_gdma_msg: print the transfer completion information, check whether GDMA_Recv_Buffer matches GDMA_Send_Buffer, and print the erroneous data if they differ.
void GDMA_Channel_Handler(void)
{
    GDMA_INTConfig(GDMA_CHANNEL_NUM, GDMA_INT_Transfer, DISABLE);

    T_IO_MSG int_gdma_msg;
    int_gdma_msg.type = IO_MSG_TYPE_GDMA;
    int_gdma_msg.subtype = 0;
    if (false == app_send_msg_to_apptask(&int_gdma_msg))
    {
        APP_PRINT_ERROR0("[io_gdma] GDMA_Channel_Handler: Send int_gdma_msg failed!");
        //Add user code here!
        GDMA_ClearINTPendingBit(GDMA_CHANNEL_NUM, GDMA_INT_Transfer);
        return;
    }
    GDMA_ClearINTPendingBit(GDMA_CHANNEL_NUM, GDMA_INT_Transfer);
}

void io_handle_gdma_msg(T_IO_MSG *io_gdma_msg)
{
    APP_PRINT_INFO0("[io_gdma] io_handle_gdma_msg: GDMA transfer data completion!");
    for (uint32_t i = 0; i < GDMA_MULTIBLOCK_SIZE; i++)
    {
        for (uint32_t j = 0; j < GDMA_TRANSFER_SIZE; j++)
        {
            if (GDMA_Send_Buffer[i][j] != GDMA_Recv_Buffer[i][j])
            {
                APP_PRINT_INFO2("[io_gdma]io_handle_gdma_msg: Data transmission error! GDMA_Send_Buffer = %d, GDMA_Recv_Buffer = %d",
                                GDMA_Send_Buffer[i][j], GDMA_Recv_Buffer[i][j]);
            }
            GDMA_Recv_Buffer[i][j] = 0;
        }
    }
}
If INT_BLOCK is enabled, the GDMA_INT_Block interrupt is triggered each time a block transfer completes, and the interrupt handler is entered:
1. Disable the DMA GDMA_INT_Block interrupt.
2. Define the message type IO_MSG_TYPE_GDMA and send the msg to the task; the message data is processed in the main task.
3. Execute io_handle_gdma_msg: print the block transfer completion information, check whether GDMA_Recv_Buffer matches GDMA_Send_Buffer, and print the erroneous data if they differ.
4. Record the number of blocks transferred so far in GDMA_INT_Block_Counter. If the current block count is less than GDMA_MULTIBLOCK_SIZE, re-enable the DMA GDMA_INT_Block interrupt; otherwise clear the block counter, which indicates that the transfer is complete.
void GDMA_Channel_Handler(void)
{
    GDMA_INTConfig(GDMA_CHANNEL_NUM, GDMA_INT_Block, DISABLE);

    T_IO_MSG int_gdma_msg;
    int_gdma_msg.type = IO_MSG_TYPE_GDMA;
    int_gdma_msg.subtype = 0;
    int_gdma_msg.u.buf = (void *)&GDMA_INT_Block_Counter;
    if (false == app_send_msg_to_apptask(&int_gdma_msg))
    {
        APP_PRINT_ERROR0("[io_gdma]GDMA_Channel_Handler: Send int_gdma_msg failed!");
        //Add user code here!
        GDMA_ClearINTPendingBit(GDMA_CHANNEL_NUM, GDMA_INT_Block);
        return;
    }
    /* Clear the pending bit before the task re-enables the block interrupt. */
    GDMA_ClearINTPendingBit(GDMA_CHANNEL_NUM, GDMA_INT_Block);
}

void io_handle_gdma_msg(T_IO_MSG *io_gdma_msg)
{
    uint8_t *p_buf = io_gdma_msg->u.buf;

    APP_PRINT_INFO1("[io_gdma] io_handle_gdma_msg: GDMA block%d transfer data completion!", *p_buf);
    for (uint32_t j = 0; j < GDMA_TRANSFER_SIZE; j++)
    {
        if (GDMA_Send_Buffer[*p_buf][j] != GDMA_Recv_Buffer[*p_buf][j])
        {
            APP_PRINT_INFO2("[io_gdma]io_handle_gdma_msg: Data transmission error! GDMA_Send_Buffer = %d, GDMA_Recv_Buffer = %d",
                            GDMA_Send_Buffer[*p_buf][j], GDMA_Recv_Buffer[*p_buf][j]);
        }
        GDMA_Recv_Buffer[*p_buf][j] = 0;
    }

    GDMA_INT_Block_Counter++;
    if (GDMA_INT_Block_Counter < GDMA_MULTIBLOCK_SIZE)
    {
        GDMA_INTConfig(GDMA_CHANNEL_NUM, GDMA_INT_Block, ENABLE);
    }
    else
    {
        GDMA_INT_Block_Counter = 0;
    }
}
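The buffers and counter referenced above are file-scope variables in io_gdma.c. A sketch of plausible declarations (the sizes shown are assumptions for illustration; the actual values are set in the sample sources):

#define GDMA_TRANSFER_SIZE   1000 /* assumed: units per block, must not exceed 65535 */
#define GDMA_MULTIBLOCK_SIZE 6    /* assumed: number of blocks in the link list */

static uint8_t GDMA_Send_Buffer[GDMA_MULTIBLOCK_SIZE][GDMA_TRANSFER_SIZE];
static uint8_t GDMA_Recv_Buffer[GDMA_MULTIBLOCK_SIZE][GDMA_TRANSFER_SIZE];
static GDMA_LLIDef GDMA_LLIStruct[GDMA_MULTIBLOCK_SIZE];
static uint8_t GDMA_INT_Block_Counter = 0;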