A Noniterative Supervised On-Chip Training Circuitry for Reservoir Computing Systems


dc.contributor.author Galán-Prado, Fabio
dc.contributor.author Rosselló, Josep L.
dc.date.accessioned 2024-02-06T11:18:30Z
dc.identifier.uri http://hdl.handle.net/11201/164566
dc.description.abstract Artificial neural networks (ANNs) form a rapidly growing field, mainly because of their wide range of everyday applications such as pattern recognition and time series forecasting. In particular, reservoir computing (RC) stands out as a computational framework well suited to temporal/sequential data analysis. Direct on-silicon implementation of RC can minimize power consumption and maximize processing speed, which is especially relevant in edge intelligence applications where energy storage is severely restricted. Nevertheless, most RC hardware solutions in the literature perform the training process off-chip at the server level, increasing processing time and overall power dissipation. Some studies integrate both learning and inference on the same chip, but these works typically implement unsupervised learning (UL), with a lower expected accuracy than supervised learning (SL), or propose iterative solutions (with a correspondingly higher power consumption). The integration of RC systems combining inference with a fast noniterative SL method is therefore still an incipient field. In this article, we propose a noniterative SL methodology for RC systems that can be implemented in hardware either sequentially or fully in parallel. The proposal offers a considerable advantage in energy efficiency (EE) and processing speed compared to traditional off-chip methods. To validate the model, a cyclic echo state NN with on-chip learning capabilities for time series prediction has been implemented and tested on a field-programmable gate array (FPGA). In addition, a low-cost audio processing method is proposed that may be used to optimize the sound preprocessing steps.
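The noniterative supervised training summarized in the abstract — fitting a reservoir's readout in a single closed-form solve rather than by iteration — can be illustrated in software. The sketch below is a generic cyclic echo state network with a ridge-regression readout; the reservoir size, weights, and task are illustrative assumptions, not the paper's hardware circuit:

```python
import numpy as np

def run_reservoir(u, n_res=50, r=0.9, v=0.5, seed=0):
    """Drive a cyclic (ring-topology) echo state reservoir with input sequence u.

    Illustrative parameters: n_res neurons, cycle weight r, input scale v.
    """
    rng = np.random.default_rng(seed)
    # Cyclic reservoir: each neuron feeds only its ring successor with weight r.
    W = r * np.roll(np.eye(n_res), 1, axis=0)
    w_in = v * rng.choice([-1.0, 1.0], size=n_res)  # signed input weights
    x = np.zeros(n_res)
    states = np.empty((len(u), n_res))
    for t, ut in enumerate(u):
        x = np.tanh(W @ x + w_in * ut)
        states[t] = x
    return states

# Toy task: one-step-ahead prediction of a sine wave.
t = np.arange(400)
u = np.sin(0.1 * t)
X = run_reservoir(u[:-1])   # reservoir states
y = u[1:]                   # target: next input sample
washout = 50                # discard initial transient states

# Noniterative supervised training: one regularized least-squares solve.
A = X[washout:]
W_out = np.linalg.solve(A.T @ A + 1e-6 * np.eye(A.shape[1]), A.T @ y[washout:])

pred = A @ W_out
nrmse = np.sqrt(np.mean((pred - y[washout:]) ** 2)) / np.std(y[washout:])
```

The key point is that the only "training" is the single `np.linalg.solve` call: no gradient iterations are needed, which is what makes such readout fitting attractive for on-chip implementation.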
dc.format application/pdf
dc.relation.isformatof Postprint version of the document published at: https://doi.org/10.1109/TNNLS.2022.3201828
dc.relation.ispartof IEEE Transactions on Neural Networks and Learning Systems, 2022
dc.rights (c) IEEE, 2022
dc.subject.classification 53 - Physics
dc.subject.classification 62 - Engineering. Technology
dc.subject.other 53 - Physics
dc.subject.other 62 - Engineering. Technology in general
dc.title A Noniterative Supervised On-Chip Training Circuitry for Reservoir Computing Systems
dc.type info:eu-repo/semantics/article
dc.type info:eu-repo/semantics/acceptedVersion
dc.date.updated 2024-02-06T11:18:31Z
dc.date.embargoEndDate info:eu-repo/date/embargoEnd/2024-10-01
dc.embargo 2024-10-01
dc.subject.keywords edge computing
dc.subject.keywords max-plus algebra
dc.subject.keywords neuromorphic hardware
dc.subject.keywords reservoir computing
dc.subject.keywords tropical algebra
dc.rights.accessRights info:eu-repo/semantics/embargoedAccess
dc.identifier.doi https://doi.org/10.1109/TNNLS.2022.3201828

