Research on the Representation Mechanism and Monitoring Methods of Learners' Attentional States in Video Learning
Project Completion Report
Grant No.:
62007023
Funding Program:
Young Scientists Fund
Funding Amount:
CNY 240,000
Principal Investigator:
Zhongling Pi (皮忠玲)
Host Institution:
Discipline:
Educational Information Science and Technology
Completion Year:
2023
Approval Year:
2020
Project Status:
Completed
Project Participants:
Zhongling Pi (皮忠玲)
Chinese Abstract
Focused attention is an essential precondition for effective video learning, yet learners often fail to notice changes in their own attentional states. Although a learner's attentional state during video learning can be represented through clickstream behavior, eye-movement trajectories, and electroencephalogram (EEG) signals, each of these single-modality data sources has limitations as a representation. To address this problem, this project combines psychological experiments with hybrid neural network techniques to investigate the representation mechanism of learners' attentional states in video learning and to propose a representation model and classification method based on multimodal data fusion. The work comprises three parts: ① studying the representational forms of three modalities, namely learners' video clickstream behavior, eye-movement trajectories, and EEG signals, and building a framework model that represents learners' attentional states during video learning; ② constructing a multimodal attentional-state classifier using hybrid neural network techniques; and ③ developing a quantitative method for dynamically monitoring attentional states in large-scale video learning and validating it in teaching practice. The results reveal the representation mechanism of learners' attentional states in video learning, enrich the means of dynamically monitoring attentional states, and provide theoretical grounding and technical support for analyzing learners' learning states, evaluating the quality of instructional videos, and teaching effectively with video.
English Abstract
Focused attention is a precondition of effective learning from videos, yet learners find it difficult to detect changes in their own attentional states. Although learners' attentional states can be inferred from clickstream data, eye movements, and electroencephalogram (EEG) signals, each single-modality data source has limitations in representing those states. To address this problem, the project investigates the representational mechanism of learners' attentional states and develops a classification model that detects those states from multimodal data, using a series of psychological experiments and hybrid neural networks. It comprises three parts: ① examining the representational mechanism of learners' attentional states based on their clickstream, eye movements, and EEG signals while learning from videos; ② building and testing a classification model of learners' attentional states based on multimodal data via hybrid neural networks; and ③ developing an approach for dynamically detecting attentional states during video learning and testing its effectiveness. The results explain the representational mechanism of learners' attentional states, extend dynamic detection approaches, and provide theoretical bases and technological support for analyzing learners' learning states, evaluating video quality, and supporting effective learning from videos.
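To make the proposed approach concrete for a technical reader, the sketch below shows one way a late-fusion multimodal classifier over clickstream, eye-movement, and EEG features could be organized in PyTorch. It is a minimal illustration under assumed inputs, not the model actually built in this project: all feature dimensions, layer choices, and the binary attentive/inattentive labels are hypothetical.

```python
# Illustrative sketch only: a hypothetical multimodal fusion classifier for
# attentional-state detection. It is NOT the project's actual model; all
# dimensions, layers, and the two-state label scheme are assumptions.
import torch
import torch.nn as nn


class MultimodalAttentionClassifier(nn.Module):
    """Fuses clickstream, eye-movement, and EEG features into one prediction."""

    def __init__(self, click_dim=8, gaze_dim=16, eeg_channels=32, eeg_steps=128,
                 hidden=64, num_states=2):
        super().__init__()
        # Clickstream summary features (e.g. pause/seek counts) -> small MLP.
        self.click_net = nn.Sequential(nn.Linear(click_dim, hidden), nn.ReLU())
        # Eye-movement summary features (e.g. fixation statistics) -> small MLP.
        self.gaze_net = nn.Sequential(nn.Linear(gaze_dim, hidden), nn.ReLU())
        # EEG time series -> 1-D convolution over time, then a GRU
        # (a simple "hybrid" convolutional + recurrent pipeline).
        self.eeg_conv = nn.Conv1d(eeg_channels, hidden, kernel_size=7, padding=3)
        self.eeg_rnn = nn.GRU(hidden, hidden, batch_first=True)
        # Late fusion: concatenate the three modality embeddings and classify.
        self.head = nn.Linear(hidden * 3, num_states)

    def forward(self, click, gaze, eeg):
        # click: (B, click_dim); gaze: (B, gaze_dim); eeg: (B, eeg_channels, eeg_steps)
        c = self.click_net(click)
        g = self.gaze_net(gaze)
        e = torch.relu(self.eeg_conv(eeg)).transpose(1, 2)  # (B, eeg_steps, hidden)
        _, h = self.eeg_rnn(e)                               # final hidden state
        fused = torch.cat([c, g, h.squeeze(0)], dim=1)
        return self.head(fused)                              # logits over states


if __name__ == "__main__":
    model = MultimodalAttentionClassifier()
    logits = model(torch.randn(4, 8), torch.randn(4, 16), torch.randn(4, 32, 128))
    print(logits.shape)  # torch.Size([4, 2])
```

In this sketch the "hybrid" character is only suggested by pairing a convolutional front end with a recurrent layer for the EEG stream; the project's actual network design, features, and labeling scheme are reported in its publications rather than here.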
Journal Publications
DOI: 10.1016/j.lindif.2021.102055
Published: 2021-10
Journal: Learning and Individual Differences
Impact Factor: 3.6
Authors: Yang J.; Zhang Y.; Pi Z.; Xie Y.
Corresponding Author: Xie Y.
The emotional design of an instructor: body gestures do not boost the effects of facial expressions in video lectures
DOI: 10.1080/10494820.2022.2105898
Published: 2022
Journal: Interactive Learning Environments
Impact Factor: 5.4
Authors: Zhongling Pi; Renjia Liu; Hongjuan Ling; Xingyu Zhang; Shuo Wang; Xiying Li
Corresponding Author: Xiying Li
The mutual influence of an instructor's eye gaze and facial expression in video lectures
DOI: 10.1080/10494820.2021.1940213
Published: 2021-06-12
Journal: Interactive Learning Environments
Impact Factor: 5.4
Authors: Pi, Zhongling; Zhang, Yi; Yang, Jiumin
Corresponding Author: Yang, Jiumin
DOI: 10.1111/bjet.13316
Published: 2023-04
Journal: British Journal of Educational Technology
Impact Factor: --
Authors: Zhongling Pi; Xinru Zhang; Xingyu Zhang; Mingyi Gao; Xiying Li
Corresponding Author: Zhongling Pi; Xinru Zhang; Xingyu Zhang; Mingyi Gao; Xiying Li
Personalization and pauses in speech: Children’s learning performance via instructional videos
DOI: 10.1007/s12144-022-03497-x
Published: 2022-07
Journal: Current Psychology
Impact Factor: 2.8
Authors: Yang J.; Wu C.; Liu C.; Zhang Y.; Pi Z.
Corresponding Author: Pi Z.
Effects of Co-viewing on Learning from Videos and Construction of a Multimodal Prediction Model
  • Grant No.:
    62377035
  • Funding Program:
    General Program
  • Funding Amount:
    CNY 490,000
  • Approval Year:
    2023
  • Principal Investigator:
    Zhongling Pi (皮忠玲)
  • Host Institution: