EAGER: SaTC-EDU: Teaching Security in Undergraduate Artificial Intelligence Courses Using Transparency and Contextualization
Basic Information
- Award Number: 2041960
- Principal Investigator:
- Amount: $300,000
- Host Institution:
- Host Institution Country: United States
- Award Type: Standard Grant
- Fiscal Year: 2020
- Funding Country: United States
- Project Period: 2020-09-01 to 2023-12-31
- Status: Completed
- Source:
- Keywords:
Project Summary
This project will explore how to teach undergraduate computer science students about security in systems that use artificial intelligence (AI). This is important for educating a workforce that is knowledgeable about robust and trustworthy AI. The aim is to design an AI curriculum that will foster a security mindset for identifying vulnerabilities that could cause harm, whether through attacks by a malicious actor, or through perpetuating or amplifying social biases. The educational approach will focus on transparency and contextualization. Transparency involves making the inner workings of a system accessible to students, so they can understand which aspects of the system’s construction lead to its vulnerabilities. Contextualization involves situating AI techniques in real-world environments to understand their specific security implications. Contextualization is fundamental for teaching conventional security topics. For instance, accessing personal location data can serve a legitimate purpose in Google Maps, but is typically suspicious behavior in a game. The same piece of code may be used in each case, but its legitimacy is determined by its broader context. The team will conduct research on, and develop instructional materials and assessment tools for, integrating transparency and contextualization into the undergraduate AI curriculum. Since security in AI is a new area within computer science education research, the main goal is to develop initial designs for instruction and assessment that integrate transparency and contextualization at a level appropriate for undergraduates. The goal is to develop proof-of-concept instructional materials and techniques, and assessments for security concepts and skills in undergraduate AI courses. Instruction will be designed for four kinds of learning objectives. 
Students should: (1) know that AI systems can cause harms and are not immune to attacks; (2) be able to explain sources of vulnerabilities; (3) be able to identify vulnerabilities in a specific system, which could include attacking it; and (4) be able to defend an AI system by modifying it to mitigate threats. The team will identify AI topics in existing curricula that have security implications. The team will create tasks that illustrate the concrete security issues and conduct cognitive task analyses with experts in AI and security to see how they approach those problems. This process will yield the initial learning goals. The team will conduct an assessment survey on those goals with students who have taken the undergraduate AI course, to establish a baseline level of knowledge and elicit potential misconceptions. Based on the foundation that students have and the learning goals, the team will design initial instruction, iterating on the design through one-on-one think-alouds and small-group tutoring sessions with student participants. The team will test the instruction in a controlled experiment, comparing the AI plus security materials to AI-only materials, using pre- and post-tests to measure learning. Finally, a study using the designed instruction in an undergraduate course will illustrate how it works in a typical setting. This project is supported by a special initiative of the Secure and Trustworthy Cyberspace (SaTC) program to foster new, previously unexplored, collaborations between the fields of cybersecurity, artificial intelligence, and education. 
The SaTC program aligns with the Federal Cybersecurity Research and Development Strategic Plan and the National Privacy Research Strategy to protect and preserve the growing social and economic benefits of cyber systems while ensuring security and privacy. This award reflects NSF's statutory mission and has been deemed worthy of support through evaluation using the Foundation's intellectual merit and broader impacts review criteria.
Project Outcomes
- Journal articles: 3
- Monographs: 0
- Research awards: 0
- Conference papers: 0
- Patents: 0
Teaching Ethics in Computing: A Systematic Literature Review of ACM Computer Science Education Publications
- DOI: 10.1145/3634685
- Publication year: 2024
- Journal:
- Impact factor: 2.4
- Authors: Brown, Noelle; Xie, Benjamin; Sarder, Ella; Fiesler, Casey; Wiese, Eliane S.
- Corresponding author: Wiese, Eliane S.
The Shortest Path to Ethics in AI: An Integrated Assignment Where Human Concerns Guide Technical Decisions
- DOI: 10.1145/3501385.3543978
- Publication year: 2022
- Journal:
- Impact factor: 0
- Authors: Brown, Noelle; South, Koriann; Wiese, Eliane S.
- Corresponding author: Wiese, Eliane S.
Designing Ethically-Integrated Assignments: It’s Harder Than it Looks
- DOI: 10.1145/3568813.3600126
- Publication year: 2023
- Journal:
- Impact factor: 0
- Authors: Brown, Noelle; South, Koriann; Venkatasubramanian, Suresh; Wiese, Eliane S.
- Corresponding author: Wiese, Eliane S.
Other Publications by Eliane Wiese
Other Grants by Eliane Wiese
CRII: CHS: Improving Code Readability with Scalable Feedback on Students' Code Structure
- Award Number: 1948519
- Fiscal Year: 2020
- Amount: $300,000
- Award Type: Standard Grant
Similar International Grants
SaTC-EDU: EAGER: Developing metaverse-native security and privacy curricula for high school students
- Award Number: 2335807
- Fiscal Year: 2023
- Amount: $300,000
- Award Type: Standard Grant
Collaborative Research: EAGER: SaTC-EDU: Secure and Privacy-Preserving Adaptive Artificial Intelligence Curriculum Development for Cybersecurity
- Award Number: 2335624
- Fiscal Year: 2023
- Amount: $300,000
- Award Type: Standard Grant
EAGER: SaTC-EDU: Exploring Visualized and Explainable Artificial Intelligence to Improve Students’ Learning Experience in Digital Forensics Education
- Award Number: 2039289
- Fiscal Year: 2021
- Amount: $300,000
- Award Type: Standard Grant
Collaborative Research: EAGER: SaTC-EDU: Artificial Intelligence-Enhanced Cybersecurity: Workforce Needs and Barriers to Learning
- Award Number: 2113954
- Fiscal Year: 2021
- Amount: $300,000
- Award Type: Standard Grant
EAGER: SaTC-EDU: A Life-Cycle Approach for Artificial Intelligence-Based Cybersecurity Education
- Award Number: 2114680
- Fiscal Year: 2021
- Amount: $300,000
- Award Type: Standard Grant
EAGER: SaTC-EDU: Cybersecurity Education in the Age of Artificial Intelligence: A Novel Proactive and Collaborative Learning Paradigm
- Award Number: 2114974
- Fiscal Year: 2021
- Amount: $300,000
- Award Type: Standard Grant
EAGER: SaTC-EDU: Transformative Educational Approaches to Meld Artificial Intelligence and Cybersecurity Mindsets
- Award Number: 2115025
- Fiscal Year: 2021
- Amount: $300,000
- Award Type: Standard Grant
Collaborative Research: EAGER: SaTC-EDU: Learning Platform and Education Curriculum for Artificial Intelligence-Driven Socially-Relevant Cybersecurity
- Award Number: 2114936
- Fiscal Year: 2021
- Amount: $300,000
- Award Type: Standard Grant
Collaborative Research: EAGER: SaTC-EDU: Teaching High School Students about Cybersecurity and Artificial Intelligence Ethics via Empathy-Driven Hands-On Projects
- Award Number: 2114991
- Fiscal Year: 2021
- Amount: $300,000
- Award Type: Standard Grant
EAGER: SaTC-EDU: Exploring Visualized and Explainable Artificial Intelligence to Improve Students’ Learning Experience in Digital Forensics Education
- Award Number: 2039287
- Fiscal Year: 2021
- Amount: $300,000
- Award Type: Standard Grant