Most AI companions have memory systems. They're still quite simplistic, but it's not "goldfish memory" or recall limited to the context window.
From what I understand, some user inputs trigger a tool call that searches the memory database, and at other times the system surfaces a memory at random.
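To make that concrete, here's a minimal sketch of what such a retrieval step might look like. Everything here is hypothetical (the `MemoryStore` class, the keyword matching, the random-recall probability); real companions would use embedding search and more elaborate triggering logic.

```python
import random

class MemoryStore:
    """Hypothetical memory store: keyword search plus occasional random recall."""

    def __init__(self):
        self.memories = []  # stored memory strings

    def add(self, text):
        self.memories.append(text)

    def search(self, query):
        # Naive keyword overlap; real systems typically use vector embeddings.
        words = set(query.lower().split())
        return [m for m in self.memories if words & set(m.lower().split())]

    def recall(self, query, random_chance=0.2):
        # Sometimes surface a random memory instead of a targeted search,
        # mimicking the "random memory" behavior described above.
        if self.memories and random.random() < random_chance:
            return [random.choice(self.memories)]
        return self.search(query)

store = MemoryStore()
store.add("user likes hiking in the mountains")
store.add("user's cat is named Miso")
print(store.recall("remember my cat", random_chance=0.0))
```

The random-recall path is what would make a companion spontaneously "bring up" an old conversation, which reads as far more lifelike than it is.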
With that said, I think people started falling in love with LLMs before memory systems and they probably even fell in love with chatbots before LLMs.
I believe the simple, unfortunate truth is that love is not at all a rational process, and your body doesn't need much input to produce those feelings.
People were developing emotional connections to the ELIZA program as early as the mid-1960s [0], and its source code would have contained fewer characters than the discussion on this page.
Teen in love with chatbot killed himself – can the chatbot be held responsible?
https://news.ycombinator.com/item?id=45726556