Abstract: Based on existing BP neural network algorithms, this paper proposes a new algorithm. Its basic principle is to choose an arbitrary set of free weights, obtain the hidden-layer weights by solving a system of linear equations, and combine the chosen free weights with the solved weights to form the required learned weights. The algorithm avoids the local-minimum and slow-convergence problems of the traditional method.
Keywords: neural network; MATLAB simulation; new BP algorithm
CLC number: TP183    Document code: A    Article ID: 1009-3044(2009)05-1197-02
A New Algorithm for BP Artificial Neural Networks
SONG Shao-yun, ZHONG-tao
(Yuxi Normal University, School of Information Technology and Engineering, Yuxi 653100, China)
Abstract: This paper presents a new BP neural network algorithm based on the existing algorithms. Its basic principle is to choose a set of free weights arbitrarily, obtain the hidden-layer weights by solving a system of linear equations, and combine the chosen free weights with the solved weights to obtain the required learned weights. The algorithm does not suffer from the local-minimum and slow-convergence problems of the traditional BP algorithm.
Key words: neural network; MATLAB simulation; new BP algorithm
1 Introduction
The BP artificial neural network adjusts the weights between the neurons of each layer by gradient descent, so that the error decreases steadily until the desired accuracy is reached. In essence, this computation is an iterative process, and iterative algorithms are generally sensitive to the choice of initial values: a well-chosen initial value leads to fast convergence, while a poorly chosen one leads to slow convergence or no convergence at all. Moreover, when gradient descent is used to adjust the weights, the problem of becoming trapped in a local minimum is unavoidable. Many papers have reported improvements to speed up convergence, such as variable learning rates and inertia (momentum) terms. However, because all of these methods rest on the same idea of iterative optimization, they cannot fundamentally eliminate the dependence on initial values or the local-minimum problem.
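The traditional scheme criticized above — gradient-descent weight updates with a momentum term — can be sketched as follows. This is a minimal illustrative NumPy implementation for a three-layer sigmoid network, not the paper's proposed method; the function name `train_bp` and all hyperparameter values are assumptions chosen for the example.

```python
import numpy as np

def train_bp(X, T, v=4, lr=0.5, momentum=0.9, epochs=3000, seed=0):
    """Classic batch BP: gradient descent with a momentum (inertia) term.

    X: (k, n) input samples; T: (k, m) teacher signals; v: hidden neurons.
    Hyperparameters are illustrative, not from the paper.
    """
    rng = np.random.default_rng(seed)
    n, m = X.shape[1], T.shape[1]
    W1 = rng.uniform(-1.0, 1.0, (n, v))   # input -> hidden weights
    W2 = rng.uniform(-1.0, 1.0, (v, m))   # hidden -> output weights
    dW1 = np.zeros_like(W1)               # previous steps, for momentum
    dW2 = np.zeros_like(W2)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sigmoid(X @ W1)               # hidden-layer outputs
        Y = sigmoid(H @ W2)               # network outputs
        # Backpropagate the error through the sigmoid derivatives.
        d2 = (Y - T) * Y * (1.0 - Y)
        d1 = (d2 @ W2.T) * H * (1.0 - H)
        # Momentum update: new step = -lr * gradient + momentum * old step.
        dW2 = -lr * (H.T @ d2) + momentum * dW2
        dW1 = -lr * (X.T @ d1) + momentum * dW1
        W1 += dW1
        W2 += dW2
    return W1, W2

# XOR example: whether training succeeds depends on the random initial
# weights, which is exactly the initial-value sensitivity discussed above.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)
W1, W2 = train_bp(X, T)
```

Note that nothing in this loop rules out oscillation or convergence to a local minimum; the momentum term only damps the iteration, which is why the introduction argues that such fixes cannot remove the dependence on initial values.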
2 BP Network Structure and Parameter Assumptions
Consider a BP network with a three-layer structure, as shown in Fig. 1. The network parameters are assumed as follows: 1) number of samples: k; 2) input nodes: n; 3) hidden-layer neurons: v; 4) output-layer neurons: m; 5) input vector: xp; 6) hidden-layer weighted-sum vector: nethp; 7) hidden-layer output vector: hp; 8) output-layer weighted-sum vector: netvp; 9) teacher vector: tp.
where p = 1, 2, 3, …, k. The weight matrices between the input layer and the hidden layer, and between the hidden layer and the output layer, are respectively: