Study on the cooperation mechanism of multiple learning machines and its dynamic behavior


Basic Information

Project Abstract

Ensemble learning algorithms, such as bagging and AdaBoost, try to improve on the performance of a weak learning machine by combining many weak learning machines; such algorithms have recently received considerable attention. We analyzed the dynamics of the generalization error of ensemble learning using statistical-mechanics methods within the framework of on-line learning. Within this framework, the overlap (direction cosine) between the teacher and the initial student weight vectors plays an important role. When the overlaps between the teacher and the students are homogeneous, a simple average of the student outputs can serve as the integration method for ensemble learning (bagging). From our analysis, we found that in the noiseless case the generalization error equals half that of a single linear perceptron when the number of linear perceptrons K becomes infinite. We also found that for finite K the generalization error converges to that of the infinite-K case as O(1/K), in both the noiseless and the noisy case. In the inhomogeneous case, the generalization error can be improved by introducing weights on the outputs of the learning machines (i.e., using a weighted average rather than a simple average), with the weights adapted to minimize the generalization error (parallel boosting).

In ensemble learning there is no interaction between the students. In mutual learning, by contrast, learning is performed between two students who have each learned from a teacher in advance. The knowledge each student obtained from the teacher is thus exchanged, which may improve the students' performance, and the interaction may mimic the integration mechanism of ensemble learning. We showed that mutual learning asymptotically converges to bagging. Moreover, in the limit as the step size goes to zero, the student with the larger initial overlap transiently passes through a state of parallel boosting during mutual learning.
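To make the "half of a single perceptron" and O(1/K) claims concrete, the bagging error can be decomposed directly (a sketch, assuming the standard large-N expression for a linear perceptron's generalization error on Gaussian inputs, $\varepsilon_k = \|B - J_k\|^2 / 2N$ for teacher $B$ and student $J_k$; the symbols $r_k$, $C$, $D$ below are introduced here for illustration, not taken from the project):

$$
\varepsilon_{\mathrm{bag}}
= \frac{1}{2N}\,\Bigl\|\frac{1}{K}\sum_{k=1}^{K} r_k\Bigr\|^2
= \frac{1}{K}\cdot\frac{D}{2N} + \Bigl(1 - \frac{1}{K}\Bigr)\frac{C}{2N},
\qquad r_k = B - J_k,
$$

where $D$ is the average of $\|r_k\|^2$ and $C$ the average cross-term $r_k \cdot r_l$ for $k \neq l$. If the students start as independent random vectors of the same norm as the teacher, with homogeneous (near-zero) teacher-student overlaps, the residuals share only the teacher component, so $C \approx D/2$; on-line updates driven by common examples contract all residuals by the same linear map, approximately preserving this ratio. Then

$$
\frac{\varepsilon_{\mathrm{bag}}}{\varepsilon_{\mathrm{single}}} \approx \frac{1}{2} + \frac{1}{2K},
$$

i.e., half the single-perceptron error as $K \to \infty$, approached as $O(1/K)$, consistent with the claims above.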
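The same behavior can be checked numerically. Below is a minimal sketch, not the project's analytical treatment: the dimension N, learning rate eta, step count, and the LMS-style on-line update are all illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 1000, 10      # input dimension and ensemble size (illustrative)
eta = 0.3            # learning rate (step size)
steps = 5 * N        # number of on-line examples

B = rng.standard_normal(N)        # teacher weight vector
J = rng.standard_normal((K, N))   # K students, homogeneous near-zero initial overlap

def gen_error(w):
    # Large-N generalization error of a linear perceptron on Gaussian inputs:
    # E[(B.x/sqrt(N) - w.x/sqrt(N))^2] / 2 = |B - w|^2 / (2N)
    return np.sum((B - w) ** 2) / (2 * N)

for _ in range(steps):
    x = rng.standard_normal(N)
    target = B @ x / np.sqrt(N)   # noiseless teacher output
    outs = J @ x / np.sqrt(N)     # outputs of all K students
    # Each student takes an independent gradient step on the common example.
    J += (eta / np.sqrt(N)) * np.outer(target - outs, x)

eps_single = np.mean([gen_error(w) for w in J])
eps_bag = gen_error(J.mean(axis=0))   # simple average of students = bagging
print(f"average single student: {eps_single:.4f}")
print(f"bagging ensemble:       {eps_bag:.4f}")
print(f"ratio: {eps_bag / eps_single:.3f} (decomposition above suggests ~{0.5 + 0.5 / K:.3f})")
```

Increasing K should push the ratio toward the 1/2 limit stated above, with the residual gap shrinking as O(1/K).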

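The convergence of mutual learning to bagging also has a transparent mechanism in the linear-perceptron case: when each student moves toward the other's output, the update is antisymmetric, so the sum of the two weight vectors is conserved while their difference decays, and both students drift to the simple average of their starting points. The sketch below illustrates this under the same illustrative assumptions as above (the initial vectors stand in for students that have already learned from a teacher to different degrees):

```python
import numpy as np

rng = np.random.default_rng(1)
N = 500          # input dimension (illustrative)
eta = 0.2        # step size
steps = 20 * N

# Two students assumed to have already learned from a teacher,
# ending up with different (inhomogeneous) overlaps.
J1 = rng.standard_normal(N)
J2 = rng.standard_normal(N)
bagging = (J1 + J2) / 2   # simple average of the initial students

for _ in range(steps):
    x = rng.standard_normal(N)
    o1 = J1 @ x / np.sqrt(N)
    o2 = J2 @ x / np.sqrt(N)
    # Mutual learning: each student moves toward the other's output.
    # The update is antisymmetric: J1 + J2 is exactly conserved,
    # while J1 - J2 contracts on every step.
    J1, J2 = (J1 + (eta / np.sqrt(N)) * (o2 - o1) * x,
              J2 + (eta / np.sqrt(N)) * (o1 - o2) * x)

print("relative distance of J1 from bagging:",
      np.linalg.norm(J1 - bagging) / np.linalg.norm(bagging))
print("relative distance of J2 from bagging:",
      np.linalg.norm(J2 - bagging) / np.linalg.norm(bagging))
```

Both distances shrink toward zero, i.e., mutual learning ends at the bagging weight vector; the transient pass through a parallel-boosting state described above appears for small step sizes and unequal initial overlaps, which this sketch does not attempt to reproduce.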
Project Outcomes

Journal articles (48)
Monographs (0)
Research awards (0)
Conference papers (0)
Patents (0)
線形ウィークラーナーによるアンサンブル学習の汎化誤差の解析
(Analysis of the generalization error of ensemble learning with linear weak learners)
Ensemble learning of linear perceptron: On-line learning theory
Analysis of ensemble learning using simple perceptrons based on online learning theory
  • DOI:
  • Publication date:
    2005
  • Journal:
  • Impact factor:
    0
  • Authors:
    Miyoshi, S.; Hara, K.; Okada, M.
  • Corresponding author:
    M.
Statistical mechanics of mutual learning with a latent teacher.
Analysis of ensemble learning using simple perceptrons based on on-line learning theory

Other Publications by HARA Kazuyuki

Prediction of Turbulence Temporal Evolution in PANTA by Long-Short Term Memory Network
  • DOI:
    10.1585/pfr.17.1201048
  • Publication date:
    2022
  • Journal:
  • Impact factor:
    0.8
  • Authors:
    AIZAWACARANZA Masaomi; SASAKI Makoto; MINAGAWA Hiroki; NAKAZAWA Yuuki; LIU Yoshitatsu; JAJIMA Yuki; KAWACHI Yuichi; ARAKAWA Hiroyuki; HARA Kazuyuki
  • Corresponding author:
    HARA Kazuyuki


Other Grants by HARA Kazuyuki

Another "Analytical Revolution": Psychoanalysis in a Conceptual History of Analysis
  • Grant number:
    23520096
  • Fiscal year:
    2011
  • Funding amount:
    $19,800
  • Grant category:
    Grant-in-Aid for Scientific Research (C)
IMPROVEMENT OF CONVERGENCE OF LEARNING OF MULTI-LAYER NEURAL NETWORKS AND APPLICATION FOR SEARCH ENGINE
  • Grant number:
    13680472
  • Fiscal year:
    2001
  • Funding amount:
    $19,800
  • Grant category:
    Grant-in-Aid for Scientific Research (C)