Talk:Hopfield network

== "Currents" is confusing wording ==
At least for electrical engineers like me, currents means... well currents. In this article, it seems to be the wording for "state" or "current state". If a majority agrees, please change the wording, someone. <!-- Template:Unsigned IP --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/2003:E5:272D:89:683D:F2BE:862D:9DE|2003:E5:272D:89:683D:F2BE:862D:9DE]] ([[User talk:2003:E5:272D:89:683D:F2BE:862D:9DE#top|talk]]) 01:59, 18 June 2023 (UTC)</small> <!--Autosigned by SineBot-->

== Incomplete descriptions of Running ==
== Incomplete descriptions of Running ==
According to the description under the section Running, a node must be picked, after which the behavior is deterministic. This is a rather incomplete description. How is the behavior of the node defined after it is picked? I suppose it is updating or replacing the value s_i with it's activation a_i (according to the definition given earlier), but it would be better if this is stated explicitly. <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/140.78.124.18|140.78.124.18]] ([[User talk:140.78.124.18|talk]]) 17:06, 29 June 2009 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
According to the description under the section Running, a node must be picked, after which the behavior is deterministic. This is a rather incomplete description. How is the behavior of the node defined after it is picked? I suppose it is updating or replacing the value s_i with it's activation a_i (according to the definition given earlier), but it would be better if this is stated explicitly. <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/140.78.124.18|140.78.124.18]] ([[User talk:140.78.124.18|talk]]) 17:06, 29 June 2009 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
Line 6: Line 15:


My error - I misread the context of his statement. The condition of symmetric weights guarantees that following the update rule makes energy a monotonically decreasing
My error - I misread the context of his statement. The condition of symmetric weights guarantees that following the update rule makes energy a monotonically decreasing
function, which guarantees convergence to local minima, however, non-symmetric weights do not seem to impare the use of the network as a content-addressable memory system.
function, which guarantees convergence to local minima, however, non-symmetric weights do not seem to impare the use of the network as a content-addressable memory system. <small><span class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Corvi42|Corvi42]] ([[User talk:Corvi42|talk]] • [[Special:Contributions/Corvi42|contribs]]) </span></small><!-- Template:Unsigned -->
:With non-symmetric weights, the attractors can be periodic or chaotic, which makes it less usable for retrieving a memory. [[User:Dicklyon|Dicklyon]] ([[User talk:Dicklyon|talk]]) 16:10, 28 March 2014 (UTC)


== Correction - connections need not be symmetric! ==
== Correction - connections need not be symmetric! ==
Line 13: Line 23:
the "special case" of symmetric weights, but says that the network performs just as well with non-symmetric weights. Specifically
the "special case" of symmetric weights, but says that the network performs just as well with non-symmetric weights. Specifically
he says: "The flow in phase space produced by this model algorithm has the properties necessary for a content-addressable memory
he says: "The flow in phase space produced by this model algorithm has the properties necessary for a content-addressable memory
whether or not Tij is symmetric" (Hopfield, 1982, p. 2556) <small><span class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Corvi42|Corvi42]] ([[User talk:Corvi42|talk]] • [[Special:Contributions/Corvi42|contribs]]) </span></small><!-- Template:Unsigned -->
whether or not Tij is symmetric" (Hopfield, 1982, p. 2556)


:That's right. He says in the 1982 paper (and repeats in the ones from 1984 and 1986) that the weights should be more or less symmetric in order to converge. --[[User:Male1979|Ben]] <sup>[[User_talk:Male1979|T]]</sup>/<sub>[[Special:Contributions/Male1979|C]] </sub> 14:27, 5 July 2007 (UTC)
:That's right. He says in the 1982 paper (and repeats in the ones from 1984 and 1986) that the weights should be more or less symmetric in order to converge. --[[User:Male1979|Ben]] <sup>[[User_talk:Male1979|T]]</sup>/<sub>[[Special:Contributions/Male1979|C]] </sub> 14:27, 5 July 2007 (UTC)
Line 27: Line 37:
(I'll try after I've studied enough to understand the connection myself).
(I'll try after I've studied enough to understand the connection myself).


Cheers <small><span class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:157.82.246.144|157.82.246.144]] ([[User talk:157.82.246.144|talk]] • [[Special:Contributions/157.82.246.144|contribs]]) </span></small><!-- Template:Unsigned -->
Cheers


:The Ising model is a model of [[ferromagnetism]]. Atoms are bipolar (i.e. either positive or negative) and they have connections and local interactions of atoms can lead to some state transitions on a global level. They are the theoretical foundation of Hopfield Networks and Hopfield specifically mentions them and changes the atoms to [[artificial neuron|McCulloch-Pitts neurons]], i.e. he gives them a threshold. --[[User:Male1979|Ben]] <sup>[[User_talk:Male1979|T]]</sup>/<sub>[[Special:Contributions/Male1979|C]] </sub> 14:33, 5 July 2007 (UTC)
:The Ising model is a model of [[ferromagnetism]]. Atoms are bipolar (i.e. either positive or negative) and they have connections and local interactions of atoms can lead to some state transitions on a global level. They are the theoretical foundation of Hopfield Networks and Hopfield specifically mentions them and changes the atoms to [[artificial neuron|McCulloch-Pitts neurons]], i.e. he gives them a threshold. --[[User:Male1979|Ben]] <sup>[[User_talk:Male1979|T]]</sup>/<sub>[[Special:Contributions/Male1979|C]] </sub> 14:33, 5 July 2007 (UTC)

::See reference to Little in the main article. [[User:Msuzen|mcyp]] ([[User talk:Msuzen|talk]]) 20:38, 18 March 2021 (UTC)


==Definitions?==
==Definitions?==
Line 50: Line 62:
or summing over all <math>i</math> and <math>j</math>
or summing over all <math>i</math> and <math>j</math>
:<math>E = -\frac12\sum_{i,j}{w_{ij}{s_i}{s_j}}+\sum_i{\theta_i\ s_i}</math>
:<math>E = -\frac12\sum_{i,j}{w_{ij}{s_i}{s_j}}+\sum_i{\theta_i\ s_i}</math>
would fix the problem. But I'm not so confident to modify the main text. I'd appreciate if somebody could check it.
would fix the problem. But I'm not so confident to modify the main text. I'd appreciate if somebody could check it. <small><span class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:61.213.69.186|61.213.69.186]] ([[User talk:61.213.69.186|talk]] • [[Special:Contributions/61.213.69.186|contribs]]) </span></small><!-- Template:Unsigned -->

--
i agree, and i've changed it, (before looking here)
i agree, and i've changed it, (before looking here)
I T.A a neural networks course...
I T.A a neural networks course...
you can easily see this be derivating E w.r.t S_j to get h_j <small><span class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:128.139.226.37|128.139.226.37]] ([[User talk:128.139.226.37|talk]] • [[Special:Contributions/128.139.226.37|contribs]]) </span></small><!-- Template:Unsigned -->
you can easily see this be derivating E w.r.t S_j to get h_j


:right. --[[User:Male1979|Ben]] <sup>[[User_talk:Male1979|T]]</sup>/<sub>[[Special:Contributions/Male1979|C]] </sub> 14:38, 5 July 2007 (UTC)
:right. --[[User:Male1979|Ben]] <sup>[[User_talk:Male1979|T]]</sup>/<sub>[[Special:Contributions/Male1979|C]] </sub> 14:38, 5 July 2007 (UTC)
Line 62: Line 74:
I have an argument on energy function. Sometimes threshold is a more complicated function and we cannot easily incorporate it into Energy function. I mean as I have seen in [http://www.shef.ac.uk/psychology/gurney/notes/l5/l5.html "Associative memories - the Hopfield net"], it should not contain this term:
I have an argument on energy function. Sometimes threshold is a more complicated function and we cannot easily incorporate it into Energy function. I mean as I have seen in [http://www.shef.ac.uk/psychology/gurney/notes/l5/l5.html "Associative memories - the Hopfield net"], it should not contain this term:
:<math>\sum_i{\theta_i\ s_i}</math>
:<math>\sum_i{\theta_i\ s_i}</math>
Am I right? <small><span class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Amshali|Amshali]] ([[User talk:Amshali|talk]] • [[Special:Contributions/Amshali|contribs]]) </span></small><!-- Template:Unsigned -->
Am I right?
±±±±±±± <small>—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/131.123.28.88|131.123.28.88]] ([[User talk:131.123.28.88|talk]]) 19:57, 20 February 2008 (UTC)</small><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->


== Name of article ==
== Name of article ==
Line 102: Line 113:
So it would probably be missleading to link the two of them.
So it would probably be missleading to link the two of them.
--[[User:Toukip|Toukip]] ([[User talk:Toukip|talk]]) 04:28, 16 November 2010 (UTC)
--[[User:Toukip|Toukip]] ([[User talk:Toukip|talk]]) 04:28, 16 November 2010 (UTC)

:Also, the Hopfield net can use any kind of nonlinearity, not just a threshold. So saying a Hopfield net is a symmetric recurrent interconnection of perceptron units is not general enough. [[User:Dicklyon|Dicklyon]] ([[User talk:Dicklyon|talk]]) 16:19, 28 March 2014 (UTC)


== Picture ==
== Picture ==
Line 107: Line 120:
Arows in the picture (A Hopfield net with four nodes) have wrong direction.
Arows in the picture (A Hopfield net with four nodes) have wrong direction.
--[[User:Bojan PLOJ|Bojan PLOJ]] ([[User talk:Bojan PLOJ|talk]]) 11:37, 6 March 2010 (UTC)
--[[User:Bojan PLOJ|Bojan PLOJ]] ([[User talk:Bojan PLOJ|talk]]) 11:37, 6 March 2010 (UTC)

:Indeed, the picture seems very wrong to me, the connectivity does not seem right and it is clearly not the prototypical graph that is used for an Hopfield network. I'll try to modify that when I have time... [[User:Piero le fou|Piero le fou]] ([[User talk:Piero le fou|talk]]) 14:12, 28 March 2014 (UTC)

::The arrow direction is not the problem. The problem is that where lines converge, it's not clear that it means a summing node; some of the arrowheads mean weights, and some don't. It's an unusual way to draw it; usually weighting and summing are localized to the neuron. [[User:Dicklyon|Dicklyon]] ([[User talk:Dicklyon|talk]]) 15:44, 28 March 2014 (UTC)

:::The ''main'' problem with the illustration [[File:Hopfield-net.png]] is that it's completely unintelligible to the average interested Wikipedia reader. There is no discussion of the meanings and uses of the various symbols in it, not even a simple key to name those different symbols. [[User:Yahya Abdal-Aziz|yoyo]] ([[User talk:Yahya Abdal-Aziz|talk]]) 02:06, 26 November 2016 (UTC)


== Initializing the Network ==
== Initializing the Network ==
Line 113: Line 132:
[[User:Chekhov.seagull|Chekhov.seagull]] ([[User talk:Chekhov.seagull|talk]]) 19:29, 8 March 2010 (UTC)
[[User:Chekhov.seagull|Chekhov.seagull]] ([[User talk:Chekhov.seagull|talk]]) 19:29, 8 March 2010 (UTC)
: I can write about this. A Hopfield network is first of all trained with patterns that fix the weights. For initialisation, the user needs to set the values of the units to the input pattern. [[User:Mrazvan22|Razvan Marinescu]] ([[User talk:Mrazvan22|talk]]) 12:08, 12 January 2013 (UTC)
: I can write about this. A Hopfield network is first of all trained with patterns that fix the weights. For initialisation, the user needs to set the values of the units to the input pattern. [[User:Mrazvan22|Razvan Marinescu]] ([[User talk:Mrazvan22|talk]]) 12:08, 12 January 2013 (UTC)

== Inputs/outputs? ==
== Inputs/outputs? ==


Line 118: Line 138:
: This is true, but by no means the only difference. What make Hopfield Networks special are also the following features:
: This is true, but by no means the only difference. What make Hopfield Networks special are also the following features:
* They are are reccurent and converge to attractors
* They are are reccurent and converge to attractors
* The <small><span class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[User:Mrazvan22|Mrazvan22]] ([[User talk:Mrazvan22|talk]] • [[Special:Contributions/Mrazvan22|contribs]]) </span></small><!-- Template:Unsigned -->
* The

== New Results ==
What about this recent work: http://arxiv.org/abs/1206.2081 [[Special:Contributions/113.190.172.15|113.190.172.15]] ([[User talk:113.190.172.15|talk]]) 14:40, 19 September 2014 (UTC)SOC

== Training - Normalization not needed? ==

Currently the rule for learning is:
<math> w_{ij}=\frac{1}{n}\sum_{\mu=1}^{n}\epsilon_{i}^\mu \epsilon_{j}^\mu </math>
where <math>n</math> is the number of learnt patterns.

Why would the normalization (dividing by <math>n</math>) needed? Many don't use it, including Hopfield himself in the original 1982 paper (eq. 2 there).

It is only needed to avoid 'very large' weights when storing 'a lot' of memories. (a lot compared to the capacity of the actual computer's memory).

If true, I feel that the normalization can be removed for distilling the essence (and essential parts) of the algorithm, and be mentioned as an alternative, and not as the definition.

== Capacity ==
Using ~0.5N^2 weights (floats) to remember ~0.138N^2 bits does not seem to give an advantage (at least in size) maybe a discussion addressing performance would be helpful. <!-- Template:Unsigned IP --><small class="autosigned">—&nbsp;Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/84.75.58.165|84.75.58.165]] ([[User talk:84.75.58.165#top|talk]]) 15:49, 7 August 2017 (UTC)</small> <!--Autosigned by SineBot-->

== External links modified ==

Hello fellow Wikipedians,

I have just modified 2 external links on [[Hopfield network]]. Please take a moment to review [[special:diff/809017904|my edit]]. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit [[User:Cyberpower678/FaQs#InternetArchiveBot|this simple FaQ]] for additional information. I made the following changes:
*Added archive https://web.archive.org/web/20111005202201/http://www.tristanfletcher.co.uk/DLVHopfield.pdf to http://www.tristanfletcher.co.uk/DLVHopfield.pdf
*Added archive https://web.archive.org/web/20121025125326/http://gna.org/projects/neurallab/ to http://gna.org/projects/neurallab/

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

{{sourcecheck|checked=false|needhelp=}}

Cheers.—[[User:InternetArchiveBot|'''<span style="color:darkgrey;font-family:monospace">InternetArchiveBot</span>''']] <span style="color:green;font-family:Rockwell">([[User talk:InternetArchiveBot|Report bug]])</span> 17:12, 6 November 2017 (UTC)

== Self-promotion ==

The last section seems unrelated to the rest of the paper, being self-promotion for Liu, Qun; Mukhopadhyay, Supratik, the authors of the paper summarized in that section. [[User:EnricX|EnricX]] ([[User talk:EnricX|talk]]) 10:45, 1 December 2023 (UTC)

== Last section is a copy paste of a paper abstract ==

Last section seems to be copied from a fairly obscure paper, and contains phrases such as “in this paper”. It is relatively irrelevant to the rest. [[Special:Contributions/2600:1017:B80C:78A7:ED0A:AB76:4E91:EEDF|2600:1017:B80C:78A7:ED0A:AB76:4E91:EEDF]] ([[User talk:2600:1017:B80C:78A7:ED0A:AB76:4E91:EEDF|talk]]) 09:39, 30 December 2023 (UTC)

:Yes, it is just the abstract of the article. It certainly does not fit and probably does not belong. [[Special:Contributions/193.121.164.143|193.121.164.143]] ([[User talk:193.121.164.143|talk]]) 06:29, 2 January 2024 (UTC)

== Why is this low importance? ==

This model just wone its author a Nobel prize. How is this of low importance? [[User:Sprlzrd|Sprlzrd]] ([[User talk:Sprlzrd|talk]]) 15:13, 11 November 2024 (UTC)

Latest revision as of 14:08, 9 December 2024

"Currents" is confusing wording

At least for electrical engineers like me, "currents" means... well, currents. In this article, it seems to be the wording for "state" or "current state". If a majority agrees, please change the wording, someone. — Preceding unsigned comment added by 2003:E5:272D:89:683D:F2BE:862D:9DE (talk) 01:59, 18 June 2023 (UTC)

Incomplete descriptions of Running

According to the description under the section Running, a node must be picked, after which the behavior is deterministic. This is a rather incomplete description. How is the behavior of the node defined after it is picked? I suppose it is updating or replacing the value s_i with its activation a_i (according to the definition given earlier), but it would be better if this were stated explicitly. —Preceding unsigned comment added by 140.78.124.18 (talk) 17:06, 29 June 2009 (UTC)

I will re-write the running section and explicitly detail the process. Razvan Marinescu (talk) 12:08, 12 January 2013 (UTC)
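
For reference while that rewrite is pending, the usual convention (and, as far as I can tell, the one the article's definitions imply) is that the picked node replaces its state with its activation:

:<math>s_i \leftarrow \begin{cases} +1 & \text{if } \sum_j w_{ij} s_j \ge \theta_i, \\ -1 & \text{otherwise.} \end{cases}</math>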

Further Correction - symmetric weights

My error - I misread the context of his statement. The condition of symmetric weights guarantees that following the update rule makes energy a monotonically decreasing function, which guarantees convergence to local minima; however, non-symmetric weights do not seem to impair the use of the network as a content-addressable memory system. — Preceding unsigned comment added by Corvi42 (talkcontribs)

With non-symmetric weights, the attractors can be periodic or chaotic, which makes the network less usable for retrieving a memory. Dicklyon (talk) 16:10, 28 March 2014 (UTC)
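
A quick numerical illustration of the first point (my own sketch, not from the article; assumes zero thresholds and no self-connections):

<syntaxhighlight lang="python">
import numpy as np

# With symmetric weights and zero self-connections, the Hopfield energy
# E = -1/2 s^T W s never increases under asynchronous sign updates.
rng = np.random.default_rng(0)
n = 50
W = rng.normal(size=(n, n))
W = (W + W.T) / 2          # symmetrize
np.fill_diagonal(W, 0)     # no self-connections
s = rng.choice([-1, 1], size=n)

def energy(W, s):
    return -0.5 * s @ W @ s

prev = energy(W, s)
for _ in range(1000):
    i = rng.integers(n)                  # pick a random unit
    s[i] = 1 if W[i] @ s >= 0 else -1    # set it to the sign of its input
    e = energy(W, s)
    assert e <= prev + 1e-9              # energy is monotonically non-increasing
    prev = e
</syntaxhighlight>

Dropping the symmetrization line breaks the guarantee, which is consistent with Dicklyon's point about periodic or chaotic attractors.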

Correction - connections need not be symmetric!

If you refer to the original Hopfield paper (cited at the bottom of the page) he discusses the performance of networks with the "special case" of symmetric weights, but says that the network performs just as well with non-symmetric weights. Specifically he says: "The flow in phase space produced by this model algorithm has the properties necessary for a content-addressable memory whether or not Tij is symmetric" (Hopfield, 1982, p. 2556) — Preceding unsigned comment added by Corvi42 (talkcontribs)

That's right. He says in the 1982 paper (and repeats in the ones from 1984 and 1986) that the weights should be more or less symmetric in order to converge. --Ben T/C 14:27, 5 July 2007 (UTC)

Connection between Hopfield Net and Ising model?


Hello!

I had some classes this week which involved the definitions of Hopfield networks and Ising model, and came here to look for further information/links.

There is a link in this article to Ising model, but nothing is written in the article body that explains the connection between the two concepts; maybe someone could fill that gap in?

(I'll try after I've studied enough to understand the connection myself).

Cheers — Preceding unsigned comment added by 157.82.246.144 (talkcontribs)

The Ising model is a model of ferromagnetism. Atoms are bipolar (i.e. either positive or negative), they have connections, and local interactions of atoms can lead to state transitions on a global level. They are the theoretical foundation of Hopfield networks, and Hopfield specifically mentions them and changes the atoms to McCulloch-Pitts neurons, i.e. he gives them a threshold. --Ben T/C 14:33, 5 July 2007 (UTC)
See reference to Little in the main article. mcyp (talk) 20:38, 18 March 2021 (UTC)
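
To partially fill the gap noted above: formally, the Hopfield energy is an Ising Hamiltonian with couplings <math>J_{ij} = w_{ij}</math> and external fields <math>h_i = -\theta_i</math> (a sketch of the correspondence; sign conventions vary between sources):

:<math>E = -\frac12\sum_{i,j} w_{ij} s_i s_j + \sum_i \theta_i s_i \quad\longleftrightarrow\quad H = -\sum_{i<j} J_{ij}\,\sigma_i\sigma_j - \sum_i h_i\,\sigma_i</math>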

Definitions?


The relation between the a[i]'s and the s[i]'s is not clear. Are the a[i]'s just the updated values of the s[i]'s? In that case, why not call them both s[i]?

Another terminological matter: The article says

Hopfield nets can either have units that take on values of 1 or -1, or units that take on values of 1 or 0.

and goes on to give the updating rules in the two cases. This seems like too much attention to a trivial matter of scaling. I would suggest choosing one convention or the other for the article and then mentioning that the other convention is also used. --Macrakis 16:21, 15 August 2006 (UTC)

I also don't find it that important whether they are <math>\pm1</math> or <math>0/1</math>. But I find it important that units can also be continuous. Bipolar units are only one particular case studied in Hopfield's papers. --Ben T/C 14:37, 5 July 2007 (UTC)

I agree this is quite ambiguous, and the case with 0 and 1 is actually mathematically more complicated (for example, I believe the energy function would first of all be different and even more complicated ... -1 and 1 provides some desirable symmetry that simplifies mathematical calculations and proofs of convergence, training rules). I will change that to -1 and 1, explicitly say that the article uses this convention .... and only mention the other convention. Razvan Marinescu (talk) 12:18, 12 January 2013 (UTC)
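
For what it's worth, the two conventions are related by the affine change of variables <math>x_i = 2s_i - 1</math>, which maps <math>s_i\in\{0,1\}</math> to <math>x_i\in\{-1,+1\}</math>; substituting <math>s_j = (x_j+1)/2</math> into the threshold condition shows that a 0/1 network is just a bipolar network with halved weights and shifted thresholds:

:<math>\sum_j w_{ij} s_j \ge \theta_i \iff \sum_j \tfrac{w_{ij}}{2}\,x_j \ge \theta_i - \tfrac12\sum_j w_{ij}</math>

So, as suggested, nothing essential depends on the choice.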

Energy formula

Currently energy is written as:

:<math>E = -\frac12\sum_{i<j}{w_{ij}{s_i}{s_j}}+\sum_i{\theta_i\ s_i}</math>

I feel this is incorrect. Either removing 1/2

:<math>E = -\sum_{i<j}{w_{ij}{s_i}{s_j}}+\sum_i{\theta_i\ s_i}</math>

or summing over all <math>i</math> and <math>j</math>

:<math>E = -\frac12\sum_{i,j}{w_{ij}{s_i}{s_j}}+\sum_i{\theta_i\ s_i}</math>

would fix the problem. But I'm not so confident to modify the main text. I'd appreciate it if somebody could check it. — Preceding unsigned comment added by 61.213.69.186 (talk • contribs)

I agree, and I've changed it (before looking here). I T.A. a neural networks course... you can easily see this by differentiating E w.r.t. s_j to get h_j. — Preceding unsigned comment added by 128.139.226.37 (talk • contribs)

Right. --Ben T/C 14:38, 5 July 2007 (UTC)
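
For anyone verifying the fix, the underlying identity is just pair double-counting: with symmetric weights and zero diagonal,

:<math>\sum_{i<j} w_{ij} s_i s_j = \frac12\sum_{i,j} w_{ij} s_i s_j \qquad (w_{ij} = w_{ji},\ w_{ii} = 0)</math>

so the 1/2 belongs with the full double sum, not with the sum over pairs.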

Energy

I have an argument on the energy function. Sometimes the threshold is a more complicated function and we cannot easily incorporate it into the energy function. I mean, as I have seen in "Associative memories - the Hopfield net", it should not contain this term:

:<math>\sum_i{\theta_i\ s_i}</math>

Am I right? — Preceding unsigned comment added by Amshali (talkcontribs)

Name of article

Would it not be more encyclopedic for this article to be entitled Hopfield _Network_, rather than Hopfield _Net_? The reasons should be obvious. Opinions? 65.183.135.40 (talk) 06:21, 6 March 2008 (UTC)

Associative memory: Terminology and cross-refs

I am confused, and Wikipedia is currently not in a very helpful state regarding this:

Is "Content-addressable memory" synonymous to "associative memory" (as indicated here) or is it a term specifically describing a kind of memory hardware (as indicated in the article by that title)? Is a Hopfield network also an "auto-associative memory"?

For right or wrong, I felt an urge to make the following modifications:

1) I modified the crossref to "Content-addressable memory" to point to the more general "associative memory" (disambig); this I did because the article on CAM is too specific and (for the time being) not appropriate to reference from here. Depending on the correct answer to my confusion question, perhaps it is the CAM article that should be generalised to make it compatible with the ANN context?

2) I also added See also: Associative memory and Auto-associative memory.

195.60.183.2 (talk) 18:19, 17 July 2008 (UTC)

Training and Limitations


As it stands right now, the article doesn't say how one can train Hopfield nets to make particular inputs become local minima. It also doesn't say whether you can have two Hopfield nets with the same local minima but different "basins of attraction".

Also, I assume there is some limit to the number of minima that a network can be trained to store (I would guess something like the logarithm of the number of nodes, or something similar), but the article doesn't say anything about this. If there is in fact no limit, it would be good to mention that.

120.18.115.209 (talk) 06:39, 15 October 2009 (UTC)

Binary threshold unit = Perceptron

Perhaps the Binary threshold unit section can be merged with perceptron. —Preceding unsigned comment added by StookWagen (talkcontribs) 10:39, 2 December 2009 (UTC)

A perceptron and a Hopfield net differ by the shape of their network: the perceptron is feed-forward whereas Hopfield nets are recurrent. So it would probably be misleading to link the two of them. --Toukip (talk) 04:28, 16 November 2010 (UTC)

Also, the Hopfield net can use any kind of nonlinearity, not just a threshold. So saying a Hopfield net is a symmetric recurrent interconnection of perceptron units is not general enough. Dicklyon (talk) 16:19, 28 March 2014 (UTC)

Picture

Arrows in the picture (A Hopfield net with four nodes) have the wrong direction. --Bojan PLOJ (talk) 11:37, 6 March 2010 (UTC)

Indeed, the picture seems very wrong to me; the connectivity does not seem right and it is clearly not the prototypical graph that is used for a Hopfield network. I'll try to modify that when I have time... Piero le fou (talk) 14:12, 28 March 2014 (UTC)

The arrow direction is not the problem. The problem is that where lines converge, it's not clear that it means a summing node; some of the arrowheads mean weights, and some don't. It's an unusual way to draw it; usually weighting and summing are localized to the neuron. Dicklyon (talk) 15:44, 28 March 2014 (UTC)

The main problem with the illustration (Hopfield-net.png) is that it's completely unintelligible to the average interested Wikipedia reader. There is no discussion of the meanings and uses of the various symbols in it, not even a simple key to name those different symbols. yoyo (talk) 02:06, 26 November 2016 (UTC)

Initializing the Network

I tried using this article (and some other sources) to create a Hopfield net, but I found, in addition to the lack of notes on how to train the network, it is very unclear how the network ought to be initialized. Perhaps pseudocode describing the algorithms involved would be helpful? Chekhov.seagull (talk) 19:29, 8 March 2010 (UTC)

I can write about this. A Hopfield network is first of all trained with patterns that fix the weights. For initialisation, the user needs to set the values of the units to the input pattern. Razvan Marinescu (talk) 12:08, 12 January 2013 (UTC)
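
In the spirit of the pseudocode request, here is a minimal sketch of the usual train-then-recall procedure (my own illustration, assuming bipolar units, zero thresholds, and the Hebbian rule discussed elsewhere on this page; names are illustrative):

<syntaxhighlight lang="python">
import numpy as np

def train(patterns):
    """Hebbian weights from bipolar (+1/-1) patterns; no self-connections."""
    n_units = patterns.shape[1]
    W = np.zeros((n_units, n_units))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)
    return W / len(patterns)

def recall(W, probe, steps=1000, seed=None):
    """Initialize the units to the input pattern, then update asynchronously."""
    rng = np.random.default_rng(seed)
    state = probe.copy()
    for _ in range(steps):
        i = rng.integers(len(state))
        state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Store one pattern and recover it from a corrupted probe.
pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
W = train(pattern[None, :])
probe = pattern.copy()
probe[:2] *= -1                      # flip two bits
print(recall(W, probe, seed=0))      # converges back to the stored pattern
</syntaxhighlight>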

Inputs/outputs?

Is it correct to say that in a Hopfield net, unlike more general recurrent NNs, all nodes are both input and output nodes? It would be nice if the article said something explicit about this. 71.95.146.73 (talk) 18:36, 30 July 2011 (UTC)

This is true, but by no means the only difference. What makes Hopfield networks special are also the following features:
  • They are recurrent and converge to attractors
  • The — Preceding unsigned comment added by Mrazvan22 (talkcontribs)

New Results


What about this recent work: http://arxiv.org/abs/1206.2081 113.190.172.15 (talk) 14:40, 19 September 2014 (UTC)SOC

Training - Normalization not needed?

Currently the rule for learning is:

:<math>w_{ij}=\frac{1}{n}\sum_{\mu=1}^{n}\epsilon_{i}^\mu \epsilon_{j}^\mu</math>

where <math>n</math> is the number of learnt patterns.

Why would the normalization (dividing by <math>n</math>) be needed? Many don't use it, including Hopfield himself in the original 1982 paper (eq. 2 there).

It is only needed to avoid 'very large' weights when storing 'a lot' of memories (a lot compared to the capacity of the actual computer's memory).

If true, I feel that the normalization can be removed to distill the essence (and essential parts) of the algorithm, and be mentioned as an alternative rather than as the definition.
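
One observation in support of this: for the usual zero-threshold update rule, dividing all weights by <math>n</math> is a positive rescaling, so it cannot change the sign of any unit's input, and therefore changes neither the dynamics nor the fixed points:

:<math>\operatorname{sgn}\!\Big(\tfrac{1}{n}\sum_j w_{ij} s_j\Big) = \operatorname{sgn}\!\Big(\sum_j w_{ij} s_j\Big)</math>

With nonzero thresholds, though, the scaling does matter, since the field is compared against <math>\theta_i</math>.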

Capacity

Using ~0.5N^2 weights (floats) to remember ~0.138N^2 bits does not seem to give an advantage (at least in size); maybe a discussion addressing performance would be helpful. — Preceding unsigned comment added by 84.75.58.165 (talk) 15:49, 7 August 2017 (UTC)
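
To put a rough number on that (assuming 32-bit float weights; the capacity figure is the classic ~0.138N patterns of N bits each):

:<math>\frac{\tfrac12 N^2 \times 32\ \text{bits}}{0.138\,N^2\ \text{bits}} \approx 116</math>

i.e. roughly two orders of magnitude more storage than information retrieved, independent of N, so any advantage would indeed have to come from content-addressability rather than size.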

External links modified

Hello fellow Wikipedians,

I have just modified 2 external links on Hopfield network. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:

  • Added archive https://web.archive.org/web/20111005202201/http://www.tristanfletcher.co.uk/DLVHopfield.pdf to http://www.tristanfletcher.co.uk/DLVHopfield.pdf
  • Added archive https://web.archive.org/web/20121025125326/http://gna.org/projects/neurallab/ to http://gna.org/projects/neurallab/

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

This message was posted before February 2018. After February 2018, "External links modified" talk page sections are no longer generated or monitored by InternetArchiveBot. No special action is required regarding these talk page notices, other than regular verification using the archive tool instructions below. Editors have permission to delete these "External links modified" talk page sections if they want to de-clutter talk pages, but see the RfC before doing mass systematic removals. This message is updated dynamically through the template {{source check}} (last update: 5 June 2024).

  • If you have discovered URLs which were erroneously considered dead by the bot, you can report them with this tool.
  • If you found an error with any archives or the URLs themselves, you can fix them with this tool.

Cheers.—InternetArchiveBot (Report bug) 17:12, 6 November 2017 (UTC)

Self-promotion

The last section seems unrelated to the rest of the article, being self-promotion for Liu, Qun; Mukhopadhyay, Supratik, the authors of the paper summarized in that section. EnricX (talk) 10:45, 1 December 2023 (UTC)

Last section is a copy paste of a paper abstract

The last section seems to be copied from a fairly obscure paper, and contains phrases such as "in this paper". It is largely irrelevant to the rest. 2600:1017:B80C:78A7:ED0A:AB76:4E91:EEDF (talk) 09:39, 30 December 2023 (UTC)

Yes, it is just the abstract of the article. It certainly does not fit and probably does not belong. 193.121.164.143 (talk) 06:29, 2 January 2024 (UTC)

Why is this low importance?

This model just won its author a Nobel Prize. How is this of low importance? Sprlzrd (talk) 15:13, 11 November 2024 (UTC)