Chatting with yourself

I decided to train an AI chatbot on my childhood journal entries so that I could engage in real-time dialogue with my inner child, and in the process I discovered how an AI tool can be used for therapeutic benefit.

I kept journals for more than a decade of my life, writing almost every day — about my dreams, fears, and secrets. The content ranged from complaints about homework to the giddiness I felt from talking to my crush. The entries made a fantastic, ripe data source for my experiment.

After transcribing the journal entries and feeding them into the model, I got working responses that felt eerily similar to how I think I would have responded during that time. I asked if she felt happy with where I ended up, or if she was disappointed.

Young Michelle told me: “I’m honestly proud of you for everything you’ve accomplished. It hasn’t been easy, and I know you’ve made a lot of sacrifices to get where you are. I think you’re doing an amazing job, and I hope you continue to pursue your dreams and make a difference in the world.”

I sensed the kindness, understanding, and empathy that she was so willing to give other people, even as she was so hard on herself. I was tearing up during that exchange.

Placebos work too!  Here is the full article by Jyoti Mann, reporting on Michelle Huang, and here is her tutorial for building your own.
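Her tutorial is the place to go for the full details, but for the curious, here is a minimal sketch of the data-preparation step such a project involves, assuming plain-text journal entries and the JSONL chat format used by OpenAI-style fine-tuning APIs. The entries, prompts, and file name below are hypothetical placeholders, not Huang's actual data.

```python
import json

# Hypothetical journal material (in practice, transcribed from the diaries).
entries = [
    ("What did you do today?",
     "Ugh, so much homework. But I talked to my crush at lunch!!"),
    ("What are you afraid of?",
     "That I'll grow up and forget what actually matters to me."),
]

# Convert each entry into one JSONL training example in the chat format
# expected by OpenAI-style fine-tuning endpoints: a system persona,
# a user prompt, and the journal text as the assistant's reply.
with open("journal_finetune.jsonl", "w") as f:
    for question, answer in entries:
        example = {
            "messages": [
                {"role": "system",
                 "content": "You are young Michelle, answering as your childhood self."},
                {"role": "user", "content": question},
                {"role": "assistant", "content": answer},
            ]
        }
        f.write(json.dumps(example) + "\n")
```

The resulting file would then be uploaded to the fine-tuning service; the chatbot's "personality" comes entirely from how the assistant turns are written.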

Along related lines, Samir Varma emailed me:

One important point about GPT that has not been discussed at all is this: What fraction of a person’s intelligence & personality is expressible? Or, in other words, how much of their inner workings are captured in their various emissions: speech, writing, and action?

A priori, given the complexity of the brain, you would expect it to be relatively low. The success of these large models suggests that we need to revise that prior. The amount of a “person” you can capture from their “emissions” is higher than one would have originally thought. And let’s bear in mind that this is effectively only via the written word so far, and that too, only internet resources. In not very long, we will be capturing almost everything someone says, and likely what someone does as well. When you add to that capturing almost everything someone ever writes (texts, emails, and so on), you are in a situation where you may be able to model someone’s “personality” just from those emissions.

Will it be perfect? Of course not. But it is pretty clear that it will be a lot better and a lot likelier than it would have been before GPT.

Are we all so “thin-minded”?
