TY - GEN
T1 - Deep Learning application to learn models in Cognitive Robotics
AU - Rodriguez-Jimenez, Ariel
AU - Becerra-Permuy, Jose
AU - Bellas-Bouza, Francisco
AU - Arias-Mendez, Esteban
N1 - Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - When an artificial neural network (ANN) must learn in real time, ordinary training methods (Batch Learning [1]) are not suitable. Convergence is difficult to achieve because the incoming data is almost always different, in our experiments there was not enough storage available to build a large dataset, and the ANN never stops its learning process. Online Learning [2] addresses these limitations: real-time learning of a cognitive model can be achieved using Deep Learning [3] with online training, and several additional techniques help to make this learning more efficient. The type of training used for an ANN depends on factors such as data availability, training time, and available hardware resources, and it can be either offline or online. In this article, online training is evaluated on a robot whose main characteristic is that it uses a Darwinian cognitive mechanism for its survival. The robot learns in real time, using deep artificial neural networks to predict the actions to be performed, training with the least possible storage space and in the shortest possible time without sacrificing the confidence of the deep artificial neural network. The training methods evaluated are Online Deep Learning, Online Deep Learning with memory, and Online Mini-Batch Deep Learning with memory.
AB - When an artificial neural network (ANN) must learn in real time, ordinary training methods (Batch Learning [1]) are not suitable. Convergence is difficult to achieve because the incoming data is almost always different, in our experiments there was not enough storage available to build a large dataset, and the ANN never stops its learning process. Online Learning [2] addresses these limitations: real-time learning of a cognitive model can be achieved using Deep Learning [3] with online training, and several additional techniques help to make this learning more efficient. The type of training used for an ANN depends on factors such as data availability, training time, and available hardware resources, and it can be either offline or online. In this article, online training is evaluated on a robot whose main characteristic is that it uses a Darwinian cognitive mechanism for its survival. The robot learns in real time, using deep artificial neural networks to predict the actions to be performed, training with the least possible storage space and in the shortest possible time without sacrificing the confidence of the deep artificial neural network. The training methods evaluated are Online Deep Learning, Online Deep Learning with memory, and Online Mini-Batch Deep Learning with memory.
KW - ADAGRAD optimizer
KW - ADAM optimizer
KW - batch training
KW - Baxter robot
KW - learning rate
KW - mini-batch training
KW - Multilevel Darwinist Brain (MDB)
KW - NEAT algorithm
KW - offline training
KW - online learning
KW - overfitting
KW - Stochastic Gradient Descent
KW - underfitting
UR - http://www.scopus.com/inward/record.url?scp=85087274915&partnerID=8YFLogxK
U2 - 10.1109/IWOBI47054.2019.9114531
DO - 10.1109/IWOBI47054.2019.9114531
M3 - Conference contribution
AN - SCOPUS:85087274915
T3 - IWOBI 2019 - IEEE International Work Conference on Bioinspired Intelligence, Proceedings
SP - 15
EP - 20
BT - IWOBI 2019 - IEEE International Work Conference on Bioinspired Intelligence, Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2019 IEEE International Work Conference on Bioinspired Intelligence, IWOBI 2019
Y2 - 3 July 2019 through 5 July 2019
ER -