author    screwtape <screwtape@sdf.org>    2023-05-09 21:33:05 +0000
committer    screwtape <screwtape@sdf.org>    2023-05-09 21:33:05 +0000
commit    db3241c19e18458d1c4990bb4168fa3ed8afd4f3 (patch)
tree    aebe214d9f258259dd4720c5721ef1b92a81d0cf
parent    acb3e90c674ce9a55c94077e05bb952389a16c33 (diff)
download    binry-hop-book-db3241c19e18458d1c4990bb4168fa3ed8afd4f3.tar.gz
Introduce Krotov and Hopfield 2016
-rw-r--r--    01-autoassociative-memory.txt    8
1 file changed, 8 insertions, 0 deletions
diff --git a/01-autoassociative-memory.txt b/01-autoassociative-memory.txt
index e69de29..715133a 100644
--- a/01-autoassociative-memory.txt
+++ b/01-autoassociative-memory.txt
@@ -0,0 +1,8 @@
+* Autoassociative memory
+
+is where a partial or distorted memory cue retrieves a complete, previously stored memory. Our starting point here is the open-access conference paper Krotov & Hopfield 2016, Dense Associative Memory for Pattern Recognition.
+
+Hopfield networks are a single layer of binary threshold neurons, recurrently connected in a way that closely tracks some models of animal neurons. Their dynamics admit a Lyapunov (energy) function, which means that if the network runs long enough, it converges to a specific stored memory. The conference paper presents the mathematics for what are called modern Hopfield networks, which use rectified polynomial functions in the energy; as a result they can store very many memories relative to the number of neurons (capacity scaling roughly like N^(n-1) for a degree-n rectified polynomial, versus about 0.14N for the classical network). Intuitively, a higher-degree polynomial produces very different energy heights over relatively small linear distances between memories, so nearby memories sit in sharply separated basins. This is why rectified polynomials are used here rather than the rectified linear function otherwise popular in deep learning. (A minimal numerical sketch of the energy and update rule follows the diff.)
+
+sm0l networks are desirable for us.
+
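
To make the energy concrete, here is a minimal NumPy sketch of a dense associative memory, assuming binary +1/-1 patterns and the paper's rectified polynomial F(x) = x^n for x > 0 (0 otherwise). The names (F, energy, recall) and all parameter choices are mine for illustration, not from the paper or any library.

import numpy as np

def F(x, n=3):
    # Rectified polynomial: F(x) = x^n for x > 0, else 0.
    return np.where(x > 0, x ** n, 0.0)

def energy(patterns, state, n=3):
    # E(sigma) = -sum_mu F(xi^mu . sigma); patterns has shape (K, N), state (N,).
    return -F(patterns @ state, n).sum()

def recall(patterns, state, n=3, sweeps=5):
    # Asynchronous updates: set each bit to whichever sign gives the lower energy.
    state = state.copy()
    for _ in range(sweeps):
        for i in np.random.permutation(state.size):
            field = patterns @ state - patterns[:, i] * state[i]  # overlaps excluding bit i
            plus = F(field + patterns[:, i], n).sum()
            minus = F(field - patterns[:, i], n).sum()
            state[i] = 1 if plus >= minus else -1
    return state

# Hypothetical usage: store 20 random patterns in 64 neurons, then recall
# pattern 0 from a cue with a quarter of its bits flipped.
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(20, 64))
cue = patterns[0].copy()
cue[:16] *= -1
print((recall(patterns, cue) == patterns[0]).all())

Per the paper, the n = 2 polynomial (unrectified) case reduces to the standard Hopfield model; raising n is what buys the extra storage capacity.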