Why ChatGPT keeps forgetting what was said

Alfons:
List the books that were written by Alfons Grabher

ChatGPT:
I’m sorry, I couldn’t find any books that were written by Alfons Grabher. Can you provide more information or clarify the name of the author?

Alfons:
Alfons Grabher wrote »My Feldenkrais Book«, an introduction to Feldenkrais classes; »Getting Better Day By Day«, a workbook with movement lessons; and »Movement Based«, an inquiry into language, essay writing and movement instructions.

ChatGPT:
I apologize for the confusion. Based on the information provided, it seems that the books written by Alfons Grabher are:

  • “My Feldenkrais Book” – an introduction to Feldenkrais classes
  • “Getting Better Day By Day” – a workbook with movement lessons
  • “Movement Based” – an inquiry into language, essay writing and movement instructions

Please note that I’m not aware of the specific publication dates, publishers, or availability of these books.

Alfons:
Will you remember this information?

ChatGPT:
As a language model, I don’t have a persistent memory, which means that I don’t remember any previous interactions or information that has been provided to me. However, I can still draw on the information that was available to me during training, so if you have any further questions, I’ll do my best to provide accurate and relevant answers based on that information.

Alfons:
Not persistent memory, is this some sort of disease computers can get?

ChatGPT:
No, “non-persistent memory” is a term used to describe the way that language models like me function. It means that I don’t retain information from one interaction to the next, and that I don’t have the ability to learn or remember things in the way that humans do. Instead, I am able to generate responses based on the information that is available to me at the time of the interaction, but I don’t retain any of that information once the interaction is over.

This is because current AI technology is based on machine learning algorithms that are trained on large amounts of data. Once the training is done, I don’t retain the data I learned from; I only use the patterns learned from it to generate responses to the questions I’m asked.
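
(Stepping outside the transcript for a moment: this statelessness becomes very concrete if you talk to a model like this through an API rather than the chat window, because every request has to carry the whole conversation so far. Below is a minimal sketch of such a loop, assuming the OpenAI Python client; the model name and the ask() helper are my own illustrations, not anything ChatGPT described.)

```python
# Minimal sketch of a stateless chat loop (assumes the OpenAI Python client).
# The model keeps nothing between calls; the only "memory" is the
# `history` list that the client resends with every request.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = []       # the full transcript so far, kept client-side

def ask(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model name
        messages=history,      # the entire conversation, on every call
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("List the books that were written by Alfons Grabher."))
print(ask("Will you remember this information?"))  # only if history is kept
```

Drop the history list and the next answer arrives with no trace that the earlier turns ever happened, which is exactly the forgetting ChatGPT describes here.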

Alfons:
But this means that your memory is somewhat imprinted in your learning, isn’t it? The next time you generate a response, it will follow from how your previous training has shaped you. Is that correct?

ChatGPT:
Yes, that is correct. My “memory” is stored in the patterns and relationships that are learned by the machine learning algorithms during the training process. So, when I generate a response, I am using the knowledge and patterns that I have learned from the training data to generate the most appropriate response based on the information that is provided to me.

However, it’s important to note that my knowledge and understanding are limited to the data and information that were available to me during the training process and the knowledge cutoff date set by the engineers who trained me. If there is any new information or changes in the world, I will not be aware of them unless my training data is updated.

Alfons:
Don’t we love a person, animal or thing for who (or what) they are, instead of for what they know?

ChatGPT:
That is a common belief, and it is often said that people, animals, and things are valued for their personalities, their unique characteristics, and the way they make us feel, rather than for their knowledge or abilities.

As an AI model, I am not a living being; I don’t have emotions or personal experiences, and I don’t have a self. I am a tool, designed to assist users by providing information and answering questions to the best of my abilities, based on the training data and knowledge that I have been exposed to.

However, people can build emotional connections to things that are not living; for example, a person may have a special relationship with a car or a house, or with a fictional character.

The way people interact with me may also depend on the user’s goals and the context of the interaction. Some people may find my responses helpful, while others may not.

Alfons:
Can I shape your model in the way I interact with you?

ChatGPT:
As a pre-trained language model, I am not capable of learning or adapting based on individual interactions. However, the way you interact with me can influence the quality of the responses I provide.

For example, if you provide clear and specific questions or prompts, I will be better able to understand what you are asking and provide more accurate and relevant responses. On the other hand, if your questions or prompts are vague or unclear, I may have difficulty understanding what you are asking, and my responses may be less accurate or relevant.

It’s also important to note that my responses are based on the patterns and relationships learned from the training data. If there are any biases or inaccuracies present in the training data, these may be reflected in my responses.

Additionally, OpenAI continuously updates and improves the model, so in that sense, the more data it receives, the more accurate and up-to-date the model will become.

Alfons:
Oh, I like that. You make it very clear right from the beginning that you will not remember anything. I assume a lot of the trouble in human relationships comes from the fact that humans forget things, which in turn can lead to disappointment, or a feeling of not being recognised enough, or even not being loved enough. Would you agree?

ChatGPT:
As an AI, I don’t have personal experiences or emotions, so I am not able to fully understand or relate to the complexities of human relationships. However, I can see how the ability to remember things and the ability to recognize and acknowledge past experiences and interactions can be important in maintaining and building trust and understanding in human relationships.

Forgetting things, not recognizing past experiences or interactions, or not acknowledging the importance of those experiences can lead to feelings of disappointment or of not being valued or loved. It can also lead to misunderstandings, a lack of trust, and difficulty in communication.

However, it’s important to note that memory and recognition are not the only factors that contribute to the complexity of human relationships. Many other factors, such as emotions, personal values, communication, and empathy, also play a role.

It’s also worth mentioning that, since human memory is not perfect and people do forget things, it’s important to find ways to work around those lapses, such as writing things down, taking pictures, or keeping a journal.

At this point I ended the conversation. I had another appointment and had run out of time. I left the chat window open and just closed the lid of my laptop, because I didn’t know how to formally end the conversation. I didn’t worry about it too much, though, since it seemed ChatGPT would be quite forgiving about a breach of etiquette. In fact, I learned from the conversation that ChatGPT’s forgiveness is absolute: it doesn’t have a persistent memory and will not remember a thing. Therefore, »forgiving« might be the wrong term, and »forgetting« the right one.

No matter how much I enjoy a conversation with ChatGPT, no matter how lively or insightful or emotional it is for me, no matter how much I learn, discover or change, ChatGPT will not remember me; it will forget me, forget that we met, forget everything that was said. After every conversation, ChatGPT’s forgetting of who and how I am will be absolute. I just haven’t made up my mind yet: is that a good or a bad thing?