Hong Kong researchers have found that artificial intelligence, in the form of large language models (LLMs), possesses memory capabilities similar to those of humans.
According to a paper by computing experts from Hong Kong Polytechnic University, reasoning ability depends on several key factors: the knowledge learned, the specific input, and the ability to produce results aligned with that learned knowledge.
They state: “According to this definition, the memory capabilities of LLMs can also be considered a form of reasoning.”
Does AI memory work the same way as human memory?
Using datasets available on Hugging Face, the researchers had the models analyze and memorize thousands of Chinese poems. Several models were able to recall around 1,900 poems. In response to this, they said: “These results are remarkable. A human, without specialized memory training, would struggle to memorize even 100 poems under such conditions, whereas the LLMs were able to memorize almost 100% of the 2,000 poems.”
That said, the models showed some limitations when predicting the next part of a poem, making several mistakes, which the researchers attributed to the complex nature of the language. However, even when the predictions were not exact, the responses still followed correct linguistic conventions, which suggests a form of creativity and reasoning.
Hence the researchers describe the concept of AI memory as “Schrödinger’s memory.” The term is inspired by the quantum theory paradox in which an object’s state is indeterminate until observed.
For LLMs, they argue that memory can only be evaluated after a specific question is asked, similar to how human memory is assessed when responding to a particular query. For instance, humans may not be able to remember exactly how many poems they know but can generally recall a specific poem when asked.
The researchers explain that both the human brain and LLMs dynamically generate outputs based on inputs. Rooted in the Transformer model, the LLMs’ architecture could be seen as a simplified version of how the human brain operates.
OpenAI is among those exploring this idea. In February, ReadWrite reported that a “memory” feature was being integrated into ChatGPT, which would allow the AI to retain information about users over time.
In the same month, researchers at MIT found a way for AI chatbots to maintain nonstop conversations without crashing or slowing down by preserving initial data points in their memory.
Featured image: Ideogram