Research Presentations
Number of published items: 12

No.1
Presentation type Oral presentation (general)
Title Performance of pre-learned convolution neural networks applied to recognition of overlapping digits
Conference 2020 IEEE International Conference on Big Data and Smart Computing (BigComp)
Date 2020/02/20
URL
Abstract We analyzed the performance of pre-trained convolutional neural networks (CNNs), trained on a single-digit image dataset, when they were used to recognize images containing two overlapping digits. The pre-trained network was trained on the MNIST database, and the network architecture was the LeCun network. The overlapping-digit images were made from images in the MNIST database. Our goal was to clarify two issues: (1) can a network trained to recognize a single digit in an image classify an image that includes two overlapping digits without additional training on a dataset composed of overlapping-digit images? (2) Is a convolutional neural network capable of processing stereoscopic vision?
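
As a rough illustration of the evaluation described above, the sketch below (Python/NumPy; helper names are hypothetical, not the authors' code) composes overlapping-digit images from pairs of single-digit images and scores a pre-trained single-digit classifier on whether its two highest-scoring classes match the two digits present. The pixel-wise-maximum compositing rule and the `predict` interface are assumptions made only for illustration.

```python
import numpy as np

def overlap_digits(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Combine two single-digit images into one overlapping-digit image.

    The overlap is taken here as the pixel-wise maximum, which keeps both
    strokes visible; the paper's exact compositing rule may differ.
    """
    return np.maximum(img_a, img_b)

def top2_contains_both(probs: np.ndarray, label_a: int, label_b: int) -> bool:
    """Check whether the two highest-scoring classes are exactly the two true digits."""
    top2 = set(np.argsort(probs)[-2:])
    return top2 == {label_a, label_b}

def evaluate_overlap(predict, images, labels, n_pairs=1000, seed=0):
    """Score a single-digit classifier on randomly composed overlapping pairs.

    `predict` maps one image to a length-10 score vector; it stands in for the
    pre-trained network and is an assumption of this sketch.
    """
    rng = np.random.default_rng(seed)
    hits, attempts = 0, 0
    while attempts < n_pairs:
        i, j = rng.integers(0, len(images), size=2)
        if labels[i] == labels[j]:
            continue  # only use pairs of two different digits
        attempts += 1
        x = overlap_digits(images[i], images[j])
        if top2_contains_both(predict(x), labels[i], labels[j]):
            hits += 1
    return hits / n_pairs
```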

No.2
Presentation type Poster presentation
Title Empirical Study of Effect of Dropout in Online Learning
Conference 26th International Conference on Artificial Neural Networks
Date 2017/09/11
URL
Abstract We analyze the behavior of dropout used in online learning. Previously, we analyzed the behavior of dropout learning using the soft-committee machine [1]. In this work, we use a three-layer network that shows slow dynamics called a quasi-plateau. Quasi-plateaus are caused by singular subspaces of the hidden-to-output weights that do not exist in the soft-committee machine [2]. Fig. 1 shows the slow dynamics of a three-layer network trained with stochastic gradient descent (SGD; left) and with dropout (right) in a simulation. The overlap (R) shows the similarity of the teacher and student network weights. From the results, SGD converged slowly to a fixed point, indicated by the circle, and the hidden-to-output weights show that the network was in a quasi-plateau state. Dropout converged to a fixed point quickly, and the weights show that the network was not in a quasi-plateau state; therefore, dropout did not fall into a quasi-plateau. Dropout selects and neglects the hidden-unit weights of the student network in every learning iteration. We expect that applying dropout at a more intermittent interval may reduce this effect.
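
The kind of simulation summarized above can be sketched as follows (Python/NumPy). This is a simplified stand-in, not the authors' setup: tanh hidden units instead of the error function, a plain two-layer teacher-student pair, and the teacher-student overlap R reported as a cosine similarity; all parameter names and values are illustrative.

```python
import numpy as np

def simulate_dropout_online(N=100, K=2, M=2, eta=0.1, p_drop=0.5,
                            steps=50_000, seed=0):
    """Online learning of a two-layer student from a two-layer teacher,
    with dropout applied to the student's hidden units."""
    rng = np.random.default_rng(seed)
    B = rng.normal(size=(M, N))           # teacher input-to-hidden weights
    v_B = np.ones(M)                      # teacher hidden-to-output weights
    J = rng.normal(size=(K, N)) * 0.1     # student input-to-hidden weights
    v_J = rng.normal(size=K) * 0.1        # student hidden-to-output weights

    for _ in range(steps):
        x = rng.normal(size=N)
        t = v_B @ np.tanh(B @ x / np.sqrt(N))     # teacher output
        mask = rng.random(K) >= p_drop            # keep a unit with prob 1 - p_drop
        h = np.tanh(J @ x / np.sqrt(N)) * mask    # student hidden activities
        s = v_J @ h                               # student output
        delta = s - t
        # SGD on the squared error, only through the kept hidden units
        # (constant factors are absorbed into the learning rate)
        v_J -= eta / N * delta * h
        J -= (eta / N) * np.outer(delta * v_J * mask * (1.0 - h ** 2), x)

    # Overlap R: similarity of each student hidden weight vector to each teacher's
    R = (J @ B.T) / (np.linalg.norm(J, axis=1)[:, None]
                     * np.linalg.norm(B, axis=1)[None, :])
    return R

# Example: compare plain SGD (p_drop=0) with dropout (p_drop=0.5)
# print(simulate_dropout_online(p_drop=0.0))
# print(simulate_dropout_online(p_drop=0.5))
```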

No.3
Presentation type Oral presentation (general)
Title Proposal of a New Dropout Method and Analysis of Its Dynamics
Conference IEICE Technical Committee on Neurocomputing (NC)
Date 2016/01/29
URL
Abstract Learning with large networks that have many units, as in deep learning, makes the network large relative to the amount of data and is therefore prone to overfitting. Several regularization methods have been proposed to prevent overfitting, one of which is dropout. Dropout is a learning method in which some of the network's hidden units and inputs are randomly treated as absent during learning. Because dropout reduces the effective network size during learning, overfitting becomes less likely. On the other hand, when the number of inputs and the network size are appropriate for the problem, there is a phenomenon in which the symmetry between units is broken and the error becomes sufficiently small. Considering this symmetry breaking, we conjectured that dropout's effect of reducing the network size may make symmetry breaking easier to induce. In this work, we examine dropout as a learning method that facilitates symmetry breaking.

No.4
Presentation type Oral presentation (general)
Title Node Perturbation Learning of Soft-Committee Machines
Conference IEICE Technical Committee on Neurocomputing (NC)
Date 2016/01/29
URL
Abstract Node perturbation learning is a stochastic gradient method that estimates the gradient from the change in the error when a perturbation is added to a node's output. It has the advantage that it can be applied even to problems for which the objective function cannot be formulated. Previously, we applied node perturbation learning to a network composed of multiple simple perceptrons, analyzed the dynamics of learning using statistical-mechanical methods, and showed its effectiveness. In this report, we extend the network to a hierarchical structure, analyze how node perturbation learning is handled in that case and what its dynamics are through computer simulations, and show its effectiveness.
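
A minimal sketch of one node-perturbation update for a soft-committee machine is given below (Python/NumPy), assuming the perturbation is added to the hidden-unit outputs as described above; the constants and scalings are illustrative and do not reproduce the statistical-mechanics formulation used in the report.

```python
import numpy as np

def node_perturbation_step(J, x, t, eta=0.05, sigma=0.01, rng=None):
    """One online node-perturbation update for a soft-committee machine.

    The gradient is not computed analytically from the objective: Gaussian
    perturbations xi are added to the hidden-unit outputs, and the resulting
    change in the squared error is used as a stochastic gradient estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    K, N = J.shape
    u = J @ x / np.sqrt(N)                 # hidden-unit local fields
    h = np.tanh(u)                         # hidden-unit outputs
    s = h.sum()                            # soft-committee output (output weights fixed to 1)
    E = 0.5 * (s - t) ** 2                 # unperturbed error

    xi = rng.normal(scale=sigma, size=K)   # perturbation on each hidden output
    s_pert = (h + xi).sum()
    E_pert = 0.5 * (s_pert - t) ** 2       # perturbed error

    # Correlate the error change with the perturbation to estimate the gradient
    # with respect to the hidden outputs, then push it back through the units.
    g_est = (E_pert - E) / sigma ** 2 * xi
    J -= eta * np.outer(g_est * (1.0 - h ** 2), x) / np.sqrt(N)
    return J, E
```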

No.5
Presentation type Poster presentation
Title Dropout as an ensemble learning
Conference International Meeting on "High-dimensional Data Driven Science"
Date 2015/12/14
URL
Abstract Dropout is used in deep networks to act as a regularizer. However, dropout updates the hidden units in an asynchronous way, and this property may produce diversity in the hidden-unit activities. We propose a novel view of the function of dropout: acting as a form of ensemble learning.

No.6
Presentation type Poster presentation
Title Empirical Evaluation of Model Compression: When the True Model Is Known
Conference The 25th Annual Conference of the Japanese Neural Network Society
Date 2015/09/02
URL
Abstract We analyze the effectiveness of model compression using the teacher-student formulation. We assume that the true model, the redundant network, and the shallow network are all soft-committee machines. The results show that, when the hidden units of the true model are uncorrelated, the shallow network can compress the redundant network, and the performance of the shallow network is better than that of the deep network.
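
A compact sketch of the teacher-student compression setup described above (Python/NumPy; sizes and training schedule are illustrative assumptions, not the authors' experiment): labels are generated by a small true model, a redundant soft-committee machine is fitted to those labels, and a shallow machine is then fitted to the redundant network's outputs.

```python
import numpy as np

def train_on_targets(J, inputs, targets, eta=0.1, epochs=20):
    """Online SGD for a soft-committee machine fitted to given scalar targets."""
    K, N = J.shape
    for _ in range(epochs):
        for x, t in zip(inputs, targets):
            h = np.tanh(J @ x / np.sqrt(N))
            delta = h.sum() - t
            J -= eta / N * np.outer(delta * (1.0 - h ** 2), x)
    return J

rng = np.random.default_rng(0)
N, K_true, K_big, K_small = 50, 2, 10, 2
B = rng.normal(size=(K_true, N))                      # true model (teacher)
X = rng.normal(size=(2000, N))
y_true = np.tanh(B @ X.T / np.sqrt(N)).sum(axis=0)    # labels from the true model

# Step 1: fit a redundant (large) soft-committee machine to the true labels.
J_big = train_on_targets(rng.normal(size=(K_big, N)) * 0.1, X, y_true)

# Step 2 (model compression): fit a shallow/small machine to the redundant
# network's outputs rather than to the true labels.
y_big = np.tanh(J_big @ X.T / np.sqrt(N)).sum(axis=0)
J_small = train_on_targets(rng.normal(size=(K_small, N)) * 0.1, X, y_big)
```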

No.7
Presentation type Poster presentation
Title Does Dropout Accelerate Symmetry Breaking?
Conference The 25th Annual Conference of the Japanese Neural Network Society
Date 2015/09/02
URL
Abstract In multilayer networks, one problem is slow convergence due to plateaus that occur in learning processes using a gradient descent algorithm. The cause of this phenomenon is that the connection weights into the hidden units become similar, so the hidden units produce a common error. To break out of a plateau, the outputs of the hidden units must differ from each other, which requires the updates of the hidden-unit connection weights to be asymmetric. On the other hand, dropout is used in deep learning to regularize the connection weights and to avoid overfitting. This method updates the connection weights in an unbalanced way, which may be effective for breaking plateaus. We analyzed the effect of using dropout to break plateaus through computer simulations.

No.8
Presentation type Poster presentation
Title Analysis of Function of Rectified Linear Unit Used in Deep learning
Conference The International Joint Conference on Neural Networks
Date 2015/07/12
URL
Abstract Several proposed methods, including the auto-encoder, are being successfully used in various applications. Moreover, deep learning uses a multilayer network that consists of many layers, a huge number of units, and a huge amount of data. Thus, executing deep learning requires heavy computation, so it is usually carried out with parallel computation on many cores or many machines. Deep learning employs the gradient algorithm; however, this can trap the learning in saddle points or local minima. To avoid this difficulty, the rectified linear unit (ReLU) has been proposed to speed up the learning convergence. However, the reasons the convergence is sped up are not well understood. In this paper, we analyze the ReLU by using a simpler network called the soft-committee machine and clarify the reason for the speedup. We also train the network in an online manner. The soft-committee machine provides a good test bed for analyzing deep learning. The results provide some reasons for the speedup of convergence in deep learning.
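
The comparison underlying this analysis can be sketched as an online teacher-student simulation with a pluggable activation (Python/NumPy). This is an illustrative simulation only; the paper's analysis is statistical-mechanical, and the held-out error estimate here merely stands in for the generalization error.

```python
import numpy as np

def online_soft_committee(activation, d_activation, N=100, K=3, eta=0.05,
                          steps=100_000, seed=0):
    """Teacher-student online learning of a soft-committee machine with a
    pluggable hidden-unit activation, so ReLU and a sigmoidal unit can be
    compared under identical conditions."""
    rng = np.random.default_rng(seed)
    B = rng.normal(size=(K, N))                  # teacher weights
    J = rng.normal(size=(K, N)) * 0.1            # student weights
    for _ in range(steps):
        x = rng.normal(size=N)
        t = activation(B @ x / np.sqrt(N)).sum()           # teacher output
        u = J @ x / np.sqrt(N)
        delta = activation(u).sum() - t                     # output error
        J -= eta / N * np.outer(delta * d_activation(u), x) # online gradient step
    # Held-out estimate of the generalization error.
    X = rng.normal(size=(5000, N))
    err = 0.5 * np.mean((activation(J @ X.T / np.sqrt(N)).sum(0)
                         - activation(B @ X.T / np.sqrt(N)).sum(0)) ** 2)
    return err

relu = lambda u: np.maximum(u, 0.0)
d_relu = lambda u: (u > 0).astype(float)
# print(online_soft_committee(np.tanh, lambda u: 1 - np.tanh(u) ** 2))  # sigmoidal unit
# print(online_soft_committee(relu, d_relu))                            # ReLU unit
```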

No.9
Presentation type Poster presentation
Title Mutual Learning Using Nonlinear Perceptron
Conference Joint 7th International Conference on Soft Computing and Intelligent Systems and 15th International Symposium on Advanced Intelligent Systems
Date 2014/12/05
URL
Abstract We propose a mutual learning method using nonlinear perceptrons within the framework of online learning and analyze its validity using computer simulations. Mutual learning involving three or more students is fundamentally different from the two-student case with regard to the variety available when selecting a student to act as teacher. The proposed method consists of two learning steps: first, multiple students learn independently from a teacher, and second, the students learn from each other through mutual learning. Results showed that the mean squared error could be improved even though the teacher did not take part in the mutual learning.
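
A rough sketch of the two-stage procedure described above (Python/NumPy): each nonlinear perceptron (tanh output) first learns online from a common teacher, then the students continue by learning from one another, with the acting teacher drawn at random from the other students. The selection rule and all constants are assumptions of this sketch, not the paper's exact protocol.

```python
import numpy as np

def mutual_learning(N=100, S=3, eta=0.1, steps_teacher=5000,
                    steps_mutual=5000, seed=0):
    rng = np.random.default_rng(seed)
    B = rng.normal(size=N)                       # teacher weight vector
    J = rng.normal(size=(S, N)) * 0.1            # student weight vectors

    def out(w, x):                               # nonlinear perceptron output
        return np.tanh(w @ x / np.sqrt(N))

    # Stage 1: each student learns independently from the teacher.
    for _ in range(steps_teacher):
        x = rng.normal(size=N)
        t = out(B, x)
        for s in range(S):
            y = out(J[s], x)
            J[s] -= eta / N * (y - t) * (1.0 - y ** 2) * x

    # Stage 2: mutual learning; the teacher no longer provides labels.
    for _ in range(steps_mutual):
        x = rng.normal(size=N)
        for s in range(S):
            other = rng.choice([i for i in range(S) if i != s])  # acting teacher
            t = out(J[other], x)
            y = out(J[s], x)
            J[s] -= eta / N * (y - t) * (1.0 - y ** 2) * x

    # Mean squared error of each student against the true teacher.
    X = rng.normal(size=(2000, N))
    return [np.mean((np.tanh(J[s] @ X.T / np.sqrt(N))
                     - np.tanh(B @ X.T / np.sqrt(N))) ** 2) for s in range(S)]
```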

No.10
Presentation type Poster presentation
Title Improving the Convergence Property of Soft Committee Machines by Replacing Derivative with Truncated Gaussian Function
Conference The 24th International Conference on Artificial Neural Networks
Date 2014/09/17
URL
Abstract In online gradient descent learning, the local property of the derivative of the output function can cause slow convergence. This phenomenon, called a plateau, occurs in the learning process of a multilayer network. To improve the derivative term, we propose a simple method that replaces it with a truncated Gaussian function, which greatly increases the convergence speed. We then analyze a soft-committee machine trained by the proposed method and show how the method breaks a plateau. Results showed that the proposed method eventually breaks the symmetry between the hidden units.
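
The replacement of the derivative term can be sketched as follows (Python/NumPy with SciPy's erf). The true derivative of the erf output function is a narrow Gaussian; the sketch swaps in a wider Gaussian set to zero outside a window, with the width and cutoff values chosen only for illustration, not taken from the paper.

```python
import numpy as np
from scipy.special import erf

def g(u):
    """Hidden-unit output function commonly used in soft-committee machine analyses."""
    return erf(u / np.sqrt(2.0))

def g_prime(u):
    """True derivative of g: a narrow Gaussian, which shrinks the update for
    large local fields and contributes to plateaus."""
    return np.sqrt(2.0 / np.pi) * np.exp(-u ** 2 / 2.0)

def truncated_gaussian(u, width=2.0, cutoff=5.0):
    """Replacement for g': a wider Gaussian truncated to zero for |u| > cutoff.
    The width and cutoff are illustrative assumptions."""
    val = np.sqrt(2.0 / np.pi) * np.exp(-u ** 2 / (2.0 * width ** 2))
    return np.where(np.abs(u) <= cutoff, val, 0.0)

def online_step(J, B, x, eta=0.1, deriv=g_prime):
    """One online update of the student soft-committee machine J toward the
    teacher B; `deriv` selects the true derivative or its truncated-Gaussian
    replacement, so the two update rules can be compared directly."""
    N = J.shape[1]
    t = g(B @ x / np.sqrt(N)).sum()        # teacher output
    u = J @ x / np.sqrt(N)                 # student local fields
    delta = g(u).sum() - t                 # output error
    J -= eta / N * np.outer(delta * deriv(u), x)
    return J
```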

No.11
Presentation type Oral presentation (general)
Title Soft Committee Machine Using Simple Derivative Term
Conference The 13th International Conference on Artificial Intelligence and Soft Computing
Date 2014/06/02
URL
Abstract In online gradient descent learning, the local property of the derivative of the output function can cause slow convergence. This phenomenon, called a plateau, occurs in the learning process of a multilayer network. To improve the derivative term, we employ a method that replaces the derivative term with a constant, which greatly increases the relaxation speed. Moreover, we replace the derivative term with the second-order expansion of the derivative, and this breaks a plateau faster than the original method.

No.12
Presentation type Oral presentation (general)
Title Analysis of Dropout Learning Regarded as Ensemble Learning
Conference 25th International Conference on Artificial Neural Networks
Date 2016/09/06
URL
Abstract Deep learning is the state of the art in fields such as visual object recognition and speech recognition. This learning uses a large number of layers and a huge number of units and connections, so overfitting is a serious problem. To avoid this problem, dropout learning has been proposed. Dropout learning neglects some inputs and hidden units during the learning process with a probability p, and then the neglected inputs and hidden units are combined with the learned network to express the final output. We find that the process of combining the neglected hidden units with the learned network can be regarded as ensemble learning, so we analyze dropout learning from this point of view.
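
The ensemble reading of dropout described above can be illustrated as follows (Python/NumPy): the trained network's prediction is formed either by averaging many randomly thinned sub-networks (an explicit ensemble over dropout masks) or by a single pass with the hidden activities scaled by the keep probability. The two-layer tanh network and the scaling convention are assumptions of this sketch, not the paper's exact setup.

```python
import numpy as np

def forward(J, v, x, mask=None):
    """Two-layer network output; `mask` drops hidden units when given."""
    h = np.tanh(J @ x / np.sqrt(J.shape[1]))
    if mask is not None:
        h = h * mask
    return v @ h

def ensemble_prediction(J, v, x, p=0.5, n_samples=100, rng=None):
    """Ensemble view of dropout at prediction time: average the outputs of
    many randomly thinned sub-networks (Monte Carlo over dropout masks)."""
    rng = np.random.default_rng() if rng is None else rng
    K = J.shape[0]
    outs = [forward(J, v, x, mask=(rng.random(K) >= p).astype(float))
            for _ in range(n_samples)]
    return np.mean(outs)

def weight_scaling_prediction(J, v, x, p=0.5):
    """Single-pass approximation of the ensemble average: keep all hidden units
    but scale their contribution by the keep probability (1 - p)."""
    return forward(J, v, x) * (1.0 - p)
```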