Symbol Processing System Modeled after Brains
Basic Information
- Grant Number: 13680438
- Principal Investigator:
- Amount: $19,800
- Host Institution:
- Host Institution Country: Japan
- Project Category: Grant-in-Aid for Scientific Research (C)
- Fiscal Year: 2001
- Funding Country: Japan
- Duration: 2001 to 2002
- Status: Completed
- Source:
- Keywords:
Project Summary
During the research we encountered puzzling experimental results implying that the representational capability of recurrent neural networks (RNNs) is more limited than usually believed. The results were puzzling because they exhibited, for example, learnability, although limited, together with unstable learned results. We investigated further to explain why RNN learning is possible at all and to find methods that circumvent the insufficient capability.
(a) If noise tolerance is required, general counters are not learnable, and therefore stacks are not learnable either. Based on this result, we proposed a single-turn counter, which can no longer count up once it has counted down, and showed constructively that the single-turn counter, and likewise any finite-turn counter, is implementable, whereas an infinite-turn counter is not. Consequently, an RNN can represent at most a finite state automaton with finite-turn counters, and the experimental results that appear to show the learnability of counters in fact show at most the learnability of finite-turn counters.
(b) Theoretically, a finite state automaton cannot be learned without a suitable learning bias, and for RNNs it is in general impossible to prove or disprove the equivalence of two learned automata. We proposed a new stochastic learning algorithm for RNNs whose computation units are classical perceptrons. A bias naturally introduced by the algorithm makes it possible to learn a finite state automaton. Since the state transitions represented by such an RNN form a finite space, we are guaranteed to obtain a finite-state-automaton representation of an RNN of this type. The algorithm converges with probability one whenever a solution exists, although the expected time to convergence may be infinite.
(c) We characterized the languages generated by a finite state automaton with finite- or single-turn counters. These languages form a hierarchical structure different from Chomsky's hierarchy.
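The single-turn counter of (a) is easy to illustrate. The sketch below is our own reconstruction, not the project's implementation, and all names are illustrative: a counter that may count up freely until its first decrement, after which counting up is permanently disabled. Paired with a trivial control loop it recognizes a^n b^n, a language that needs only one turn, while rejecting strings such as abab that would require turning more than once.

```python
class SingleTurnCounter:
    """Counter that can count up until the first count-down;
    after that one 'turn', counting up is permanently disabled."""

    def __init__(self):
        self.value = 0
        self.turned = False

    def inc(self):
        if self.turned:          # no counting up after the turn
            return False
        self.value += 1
        return True

    def dec(self):
        if self.value == 0:      # cannot go below zero
            return False
        self.turned = True       # the single turn has been used
        self.value -= 1
        return True


def accepts_anbn(s):
    """Recognize a^n b^n: count up on 'a', down on 'b', and accept
    iff every move is legal and the count ends at zero."""
    c = SingleTurnCounter()
    for ch in s:
        if ch == "a":
            ok = c.inc()
        elif ch == "b":
            ok = c.dec()
        else:
            return False
        if not ok:
            return False
    return c.value == 0
```

A finite-turn counter would replace the boolean flag with a turn budget; an infinite-turn counter, which the project shows RNNs cannot implement noise-tolerantly, would drop the restriction entirely.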
Project Outcomes
Journal Articles (10)
Monographs (0)
Research Awards (0)
Conference Papers (0)
Patents (0)
Akito Sakurai, Daisuke Hyodo: "Simple recurrent neural networks and random indexing". Proc. International Conference on Information Processing (2002)
Akito Sakurai: "A Fast and Convergent Stochastic MLP Learning Algorithm". International Journal of Neural Systems, 11, 573-584 (2001)
T. Harada, O. Araki, A. Sakurai: "Learning Context-Free Grammars with Recurrent Neural Networks". Proc. International Joint Conference on Neural Networks, 2602-2607 (2001)
Other Publications by SAKURAI Akito
Other Grants by SAKURAI Akito
A proposal of a structural mixture distribution model: its application and basic analysis
- Grant Number: 21500146
- Fiscal Year: 2009
- Amount: $19,800
- Project Category: Grant-in-Aid for Scientific Research (C)
Time series information processing by networking finite state neural networks
- Grant Number: 18500118
- Fiscal Year: 2006
- Amount: $19,800
- Project Category: Grant-in-Aid for Scientific Research (C)
Symbol Processing System Modeled after Brains
- Grant Number: 15500095
- Fiscal Year: 2003
- Amount: $19,800
- Project Category: Grant-in-Aid for Scientific Research (C)
Similar Overseas Grants
Collaborative Research: Conference: Large Language Models for Biological Discoveries (LLMs4Bio)
- Grant Number: 2411529
- Fiscal Year: 2024
- Amount: $19,800
- Project Category: Standard Grant
Collaborative Research: Conference: Large Language Models for Biological Discoveries (LLMs4Bio)
- Grant Number: 2411530
- Fiscal Year: 2024
- Amount: $19,800
- Project Category: Standard Grant
Investigating the potential for developing self-regulation in foreign language learners through the use of computer-based large language models and machine learning
- Grant Number: 24K04111
- Fiscal Year: 2024
- Amount: $19,800
- Project Category: Grant-in-Aid for Scientific Research (C)
CAREER: Symbolic Learning with Neural Language Models
- Grant Number: 2338833
- Fiscal Year: 2024
- Amount: $19,800
- Project Category: Continuing Grant
Multi-agent Self-improving of Large Language Models (LLMs)
- Grant Number: 2903811
- Fiscal Year: 2024
- Amount: $19,800
- Project Category: Studentship
Integrating Large Language Models for Long Horizon Task Planning in Multi-robot Scenarios
- Grant Number: 24K07399
- Fiscal Year: 2024
- Amount: $19,800
- Project Category: Grant-in-Aid for Scientific Research (C)
Tuning Large language models to read biological literature
- Grant Number: BB/Y514032/1
- Fiscal Year: 2024
- Amount: $19,800
- Project Category: Research Grant
CAREER: Regularizing Large Language Models for Safe and Reliable Program Generation
- Grant Number: 2340408
- Fiscal Year: 2024
- Amount: $19,800
- Project Category: Continuing Grant
Collaborative Research: SHF: Medium: Toward Understandability and Interpretability for Neural Language Models of Source Code
- Grant Number: 2423813
- Fiscal Year: 2024
- Amount: $19,800
- Project Category: Standard Grant
Conference: New horizons in language science: large language models, language structure, and the neural basis of language
- Grant Number: 2418125
- Fiscal Year: 2024
- Amount: $19,800
- Project Category: Standard Grant