Incremental extreme learning machine
This new theory shows that, in order for SLFNs to work as universal approximators, one may simply choose the input-to-hidden nodes at random; only the output weights linking the hidden layer to the output layer then need to be adjusted. In such SLFN implementations, the activation functions for additi...
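The scheme described above can be sketched in a few lines: hidden-node parameters are drawn at random and never trained, and the output weights are solved in closed form by least squares. This is a minimal illustrative sketch, not the thesis's own implementation; the function names `elm_train` and `elm_predict` and the choice of tanh activation are assumptions for the example.

```python
import numpy as np

def elm_train(X, y, n_hidden, rng):
    """Fit a single-hidden-layer feedforward network ELM-style.

    Input weights W and biases b are random and fixed; only the
    output weights beta are computed (via the pseudoinverse).
    """
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input-to-hidden weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer output matrix
    beta = np.linalg.pinv(H) @ y                  # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Example: fit y = sin(2*pi*x) with 50 random hidden nodes
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(100, 1))
y = np.sin(2 * np.pi * X[:, 0])
W, b, beta = elm_train(X, y, n_hidden=50, rng=rng)
mse = float(np.mean((elm_predict(X, W, b, beta) - y) ** 2))
```

Because the hidden layer is fixed, training reduces to one linear solve, which is what makes this family of methods fast compared with gradient-based training of all layers.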
Main author:
Other authors:
Format: Thesis
Published: 2008
Subjects:
Online access: https://hdl.handle.net/10356/3804