
    "The wise see "intelligence" column, science series: in-depth exploration of neural networks reverse propagation (i)

    • Last Update: 2020-07-20
    • Source: Internet
    • Author: User
Highlights of the course: the series consists of four popular-science course modules that decipher artificial intelligence and deep learning, moving from the shallow to the deep. Each module contains two to three special lectures, one theme is updated every week, and the video content is concise and to the point. The course helps users quickly improve their knowledge and skills in both understanding and practice, so that they can explore and solve the problems they encounter in real deployment work and feel the charm of AI.

This course: In-depth exploration of neural networks (1). In previous issues we covered the "two propagations" of a neural network: forward propagation and back propagation. The forward-propagation material concluded in the previous issue; next, we continue studying back propagation with Mr. Jingzhi.

Back propagation (BP) is the most commonly used and most effective method for training artificial neural networks (ANNs). It first appeared in the 1970s, but it did not attract wide attention until Geoffrey Hinton and his co-authors published the paper Learning Representations by Back-Propagating Errors in 1986. Hinton, a British-born Canadian computer scientist and psychologist, has made great contributions to the field of neural networks: he is one of the inventors of the back-propagation algorithm and an active promoter of deep learning, and he is known as a father of neural networks and deep learning.

The key to implementing back propagation is the gradient descent algorithm. Gradient descent greatly speeds up the learning process, and it can be understood simply as follows: when walking down from the top of a mountain, at every step you pick the direction in which the slope descends most steeply.

Because we must repeatedly measure the deviation between the network's output and the actual value in order to adjust the parameters (the larger the deviation, the larger the adjustment), we need an error function (also called a loss function) that measures the error between the predicted values and the actual values over all samples in the training set. A common choice is the squared error

$$E = \frac{1}{2m}\sum_{i=1}^{m}\left(\hat{y}_i - y_i\right)^2,$$

where $\hat{y}_i$ is the predicted result, $y_i$ is the actual result, and $m$ is the number of training samples. This expression measures the error between the final predicted values and the actual values of all samples in the training set. Although it is written only in terms of the output layer's predictions, each predicted value depends on the parameters of all the earlier layers. If we do not want the network to mistake dogs for cats, we need to minimize this error function.

Gradient descent is one of the algorithms for minimizing the error function, and it is a commonly used optimization algorithm in ANN model training.
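To make the loss function and the gradient descent update concrete, here is a minimal sketch in Python. It is not from the original course; the toy data, learning rate, and iteration count are illustrative assumptions. It fits a one-parameter linear model by repeatedly stepping against the gradient of the squared-error loss:

```python
# Minimal sketch: gradient descent on a squared-error loss.
# The data, learning rate, and iteration count are illustrative assumptions.

# Toy training set: y is roughly 2*x, so the ideal weight is near 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 8.0]
m = len(xs)

def loss(w):
    """Squared-error loss E(w) = 1/(2m) * sum((w*x - y)^2)."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

def grad(w):
    """dE/dw = 1/m * sum((w*x - y) * x), obtained by differentiating E."""
    return sum((w * x - y) * x for x, y in zip(xs, ys)) / m

w = 0.0              # initial parameter value
learning_rate = 0.05
for step in range(100):
    w -= learning_rate * grad(w)   # move against the gradient

print(f"w = {w:.3f}, loss = {loss(w):.4f}")  # w converges near 2
```

The same update rule, parameter minus learning rate times gradient, is applied to every weight and bias of a multi-layer network; back propagation is simply the efficient way to obtain all of those gradients.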
Most deep learning models are optimized with gradient descent. Given a loss function over the model parameters, gradient descent starts from a set of initial parameter values and iteratively moves toward a set of parameter values that minimizes the loss. This iterative minimization is achieved with calculus: the parameters are changed gradually in the negative direction of the gradient. A typical application of gradient descent is linear regression; as the model iterates, the loss function gradually converges to its minimum. Because the gradient expresses the direction of the maximum rate of change of a function at a point, and it can be obtained by computing partial derivatives, the gradient descent method greatly speeds up the learning process.

In practice, we first need to examine how the weights and biases in the last layer affect the result. The influence of a weight or bias on the error function can be read off from the partial derivative of the error function $E$ with respect to that parameter, and these partial derivatives can be computed with the chain rule. Writing $z_i = w_i a_{i-1} + b_i$ for the weighted input of layer $i$ (where $a_{i-1}$ is the activation of the previous layer), the chain rule gives

$$\frac{\partial E}{\partial w_i} = \frac{\partial E}{\partial z_i}\,\frac{\partial z_i}{\partial w_i}, \qquad \frac{\partial E}{\partial b_i} = \frac{\partial E}{\partial z_i}\,\frac{\partial z_i}{\partial b_i}.$$

To obtain the unknown quantities in these expressions, the partial derivatives of $z_i$ with respect to $w_i$ and $b_i$ are computed ($\partial z_i/\partial w_i = a_{i-1}$ and $\partial z_i/\partial b_i = 1$). Then the partial derivative of the error function with respect to the weights and biases of each layer is computed backwards, layer by layer, and the adjusted weights and biases are updated by gradient descent, all the way back to the first layer. This process is called the back-propagation algorithm, or BP algorithm: it propagates the output-layer error backwards layer by layer and updates the network parameters by computing partial derivatives, thereby minimizing the error function so that the ANN produces the expected output.
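As an illustration of this backward, layer-by-layer computation, here is a minimal sketch, not taken from the course, of forward and backward propagation for a network with one hidden layer. The XOR task, sigmoid activation, layer sizes, and hyperparameters are all illustrative assumptions:

```python
import numpy as np

# Minimal back-propagation sketch for a one-hidden-layer network.
# Task, activation, layer sizes, and hyperparameters are assumptions.

rng = np.random.default_rng(0)

# Toy training set: XOR, a classic task a single layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)
m = len(X)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Weights and biases: 2 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(0.0, 1.0, (2, 4)); b1 = np.zeros((1, 4))
W2 = rng.normal(0.0, 1.0, (4, 1)); b2 = np.zeros((1, 1))

lr = 1.0
for step in range(10000):
    # Forward propagation: weighted inputs z, activations a.
    z1 = X @ W1 + b1; a1 = sigmoid(z1)
    z2 = a1 @ W2 + b2; a2 = sigmoid(z2)

    # Backward propagation (chain rule), starting at the output layer.
    # For the squared-error loss E = 1/(2m) * sum((a2 - y)^2):
    delta2 = (a2 - y) * a2 * (1.0 - a2)          # dE/dz2 (per sample)
    delta1 = (delta2 @ W2.T) * a1 * (1.0 - a1)   # dE/dz1 reuses delta2

    # dz/dW is the previous layer's activation, dz/db is 1 (see text).
    dW2 = a1.T @ delta2 / m; db2 = delta2.mean(axis=0, keepdims=True)
    dW1 = X.T @ delta1 / m;  db1 = delta1.mean(axis=0, keepdims=True)

    # Gradient descent: move every parameter against its gradient.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(np.round(a2.ravel(), 2))  # should move toward [0, 1, 1, 0]
```

Notice that delta1 reuses delta2: each layer's gradient is built from the gradient of the layer after it, which is why the error is said to be propagated backwards, and why one backward sweep yields every partial derivative at roughly the cost of a single extra pass through the network.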
Course abstract:
• 01 Decrypting artificial intelligence and deep learning: how artificial intelligence and deep learning operate; machine learning and learning tasks.
• 02 In-depth exploration of neural networks: using feedforward neural networks to introduce how a neural network operates and learns; forward propagation (1); forward propagation (2); back propagation (1).
• 03 Deep learning makes computers see: the structure of convolutional neural networks (CNNs) and the application of deep learning in machine vision.
• 04 How to equip deep learning with a memory mechanism: understanding recurrent neural networks; an introduction to long short-term memory networks (LSTM); applications of LSTM.

About the lecturer: senior lecturer of the deep learning business training course. His research covers machine vision and sensor signal processing; he develops deep-learning applications in the machine vision and sensor fields, designs machine vision systems for automatic product quality inspection, and has built applications for a range of business domains.

Bonus benefits: any question related to the course can be asked via the WeChat background or in the course group, and a special Q&A session in the fifth class will resolve remaining doubts. We study hard and make progress every day! Partners with experience in deep learning, employment, or teaching are welcome to join our lecturer team. Everyone who participates in the competition will receive a copy of "Deep Learning: Foundations and Practice", and rich rewards are waiting for you (for details, please consult the staff in the group). Skymind: do you want to explore deep-learning knowledge with more like-minded people, exchange AI industry knowledge with top experts, and receive the latest technical updates from the AI industry in time? Scan the QR code and join us.

About Mozi Salon: Mozi Salon is a large-scale public-welfare science popularization forum sponsored by the Shanghai Research Institute of the University of Science and Technology of China and co-organized by the Science and Technology Association of Pudong New Area, Shanghai, and the New Alumni Foundation of USTC. Held since 2016, it is committed to professional, authoritative, and in-depth popular-science salon activities: once a month, famous scientists from home and abroad are invited to talk about science. Its audience is the general public with a strong interest in science, and it strives to be a popular-science forum that anyone with a middle-school level of learning can follow to understand the most cutting-edge scientific information in the world. Follow Mozi Salon; we are waiting for you here. For authorization or cooperation, please contact the WeChat account micius-salon or mozi@ustc.edu.cn; reply "reprint" in the background to see the reprint policy for original articles.