Neural Networks for Machine Learning (3)
Published: 2019-06-14


Simple models of neurons

Idealized neurons

• To model things we have to idealize them (e.g. atoms)
– Idealization removes complicated details that are not essential for understanding the main principles.
– It allows us to apply mathematics and to make analogies to other, familiar systems.
– Once we understand the basic principles, it's easy to add complexity to make the model more faithful.
• It is often worth understanding models that are known to be wrong (but we must not forget that they are wrong!)
– E.g. neurons that communicate real values rather than discrete spikes of activity.
Linear neurons
• These are simple but computationally limited
– If we can make them learn we may get insight into more complicated neurons

Binary threshold neurons

McCulloch-Pitts (1943): influenced Von Neumann.
– First compute a weighted sum of the inputs.
– Then send out a fixed size spike of activity if the weighted sum exceeds a threshold.
– McCulloch and Pitts thought that each spike is like the truth value of a proposition, and that each neuron combines these truth values to compute the truth value of another proposition!


There are two equivalent ways to write the equations for a binary threshold neuron:
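The equations themselves appear to have been lost from this copy (they were likely an image in the original slide). Reconstructed here: in the first form, the weighted sum is compared against a threshold θ; in the second, the threshold is folded into a bias b = −θ and the sum is compared against zero.

```latex
z = \sum_i x_i w_i, \qquad
y = \begin{cases} 1 & \text{if } z \ge \theta \\ 0 & \text{otherwise} \end{cases}
```

Equivalently, with a bias $b = -\theta$:

```latex
z = b + \sum_i x_i w_i, \qquad
y = \begin{cases} 1 & \text{if } z \ge 0 \\ 0 & \text{otherwise} \end{cases}
```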

Rectified Linear Neurons

(sometimes called linear threshold neurons)
They compute a linear weighted sum of their inputs.
The output is a non-linear function of this total input.
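A minimal sketch of a rectified linear neuron in Python (the function names and example inputs are illustrative, not from the lecture):

```python
def relu_neuron(inputs, weights, bias):
    """Rectified linear neuron: a linear weighted sum of the
    inputs, passed through the non-linearity max(0, z)."""
    z = bias + sum(x * w for x, w in zip(inputs, weights))
    return max(0.0, z)

# Below zero total input the output is 0; above it, the output
# grows linearly with the total input.
print(relu_neuron([1.0, 2.0], [0.5, -1.0], 0.0))  # z = -1.5 -> 0.0
print(relu_neuron([1.0, 2.0], [0.5, 1.0], 0.0))   # z = 2.5  -> 2.5
```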


Sigmoid neurons (these neurons are used very often)
These give a real-valued output that is a smooth and bounded function of their total input.
– Typically they use the logistic function
– They have nice smooth derivatives: the derivatives change continuously, so the neurons are well behaved and make learning easy.
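A small sketch of the logistic function and its derivative (a standard identity, y(1 − y), not specific to this lecture):

```python
import math

def logistic(z):
    """Logistic function: a smooth, bounded output in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def logistic_derivative(z):
    """Derivative of the logistic: y * (1 - y). It is smooth and
    changes continuously, which is what makes gradient-based
    learning easy."""
    y = logistic(z)
    return y * (1.0 - y)

print(logistic(0.0))             # 0.5
print(logistic_derivative(0.0))  # 0.25 (the derivative's maximum)
```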


Stochastic binary neurons

These use the same equations as logistic units.
– But they treat the output of the logistic as the probability of producing a spike in a short time window.

Instead of outputting that probability as a real number, they actually make a probabilistic decision, so what they actually output is either a one or a zero. They are intrinsically random: they treat p as the probability of producing a one, not as a real-valued output.

• We can do a similar trick for rectified linear units:

– The output is treated as the Poisson rate for spikes.

So the rectified linear unit determines the rate, but intrinsic randomness in the unit determines when the spikes are actually produced.
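Both stochastic variants can be sketched in a few lines of Python. This is an illustration, not the lecture's code: the logistic output is sampled as a Bernoulli spike, and the ReLU output is used as the rate of a Poisson sample (drawn here with Knuth's method). The generator is seeded only for reproducibility.

```python
import math
import random

rng = random.Random(0)  # seeded so runs are reproducible

def logistic(z):
    return 1.0 / (1.0 + math.exp(-z))

def stochastic_binary_neuron(z):
    """Same equation as a logistic unit, but the output p is
    treated as the probability of a spike in a short time window:
    the neuron actually emits a 1 or a 0, chosen at random."""
    return 1 if rng.random() < logistic(z) else 0

def stochastic_relu_spikes(z):
    """Stochastic rectified linear unit: max(0, z) is treated as a
    Poisson rate, and intrinsic randomness determines how many
    spikes are actually produced (Knuth's sampling method)."""
    rate = max(0.0, z)
    threshold, k, p = math.exp(-rate), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

print(stochastic_binary_neuron(100.0))   # practically always 1
print(stochastic_binary_neuron(-100.0))  # practically always 0
```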

 

Reposted from: https://www.cnblogs.com/jinee/p/4472585.html
