I used an AI journal for a week. What I learned about myself surprised me.
I don't share my literal dreams, the ones that happen at night, with just anyone.
But in this case, I was testing the journaling app Rosebud, which is powered by artificial intelligence. After using Rosebud's voice-to-text feature to create a few daily run-of-the-mill entries (the app is not just for dreams), I decided to log a recurring dream that had recently eluded my own attempts at analysis.
I'll spare you the details; let's just say it involves my children leaving their belongings behind. In the dream, I keep trying to remember where the items have been tossed aside, but ultimately I forget them, too, at which point panic sets in.
I clicked on Rosebud's dream-specific journal entry, which asks you to share as many specifics as possible. I gave it a two-sentence description. Despite my limited recounting, Rosebud's line of questioning led me to an epiphany about parenting, responsibility, letting go, and trusting your child, all within three follow-up prompts. I was hooked.
The desktop version of Rosebud is a year old, but its mobile app launched just last week. So far in my testing, the journal app, billed as "guided self-discovery," has offered what AI technology has long promised: a tool for supporting human growth and creativity, with surprising efficiency and insight.
Rosebud's cofounders, software engineers Chrys Bader and Sean Dadashi, who met in a men's group, say Rosebud is designed to increase users' self-awareness, not to solve problems for them. It's also not pitched as therapy, but as an accessible complement to professional help. It costs $12.99 per month or $107.99 annually.
This is refreshing at a time when some AI products are positioned as robots to whom we can offload unique aspects of the human experience, like writing a letter on behalf of a child to their sports hero (see Google's ill-fated "Dear Sydney" ad).
Rosebud's philosophy of championing human-led, AI-enabled personal growth, combined with how quickly the app delivers meaningful insight, offers a compelling example of how AI journaling could transform people's self-awareness, potentially improving their well-being and mental health.
I do have some reservations, including the risk of becoming too attached to Rosebud, data safety and privacy issues, and what happens when users express suicidal thoughts or feelings. As the market for AI journaling apps grows, the best options will be products that meaningfully address these and other risks, putting safety above profit and growth. So far, Rosebud seems to be taking that challenge seriously.
AI journaling: The good and the great
Scientific research tells us that expressive writing, which you'll often find in personal journals, can improve people's well-being and mental health, though this effect isn't universal. When journaling works, it appears to be partly because the author is able to write without fear of judgment, but the science isn't conclusive yet.
There's limited research on the new phenomenon of AI journaling, and thus little understanding of who stands to benefit most from the practice, or what could go wrong. Rosebud has only customer survey data, and hasn't yet hired a chief medical or clinical officer, though the company partners with mental health professionals who provide feedback on how the platform responds to numerous scenarios.
This is all to say: If you sign up for Rosebud, or another AI journaling app, you're exploring uncharted territory and should proceed with reasonable caution.
Rosebud is built on large language model (LLM) technology, but has been tailored specifically to engage users in a way that should heighten their self-awareness. Rosebud's cofounders recently made Anthropic's Claude 3.5 Sonnet their primary LLM, after testing that platform head-to-head against OpenAI's ChatGPT.
Bader and Dadashi said they preferred the way Claude referenced memories provided by Rosebud's proprietary "awareness engine." Building on Claude, the pair have engineered Rosebud to serve up daily reflective prompts, starting with basic questions like, "What's your top priority today?" and "What was the highlight of your day?"
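To give a concrete sense of the architecture the cofounders describe, here is a minimal, hypothetical sketch of how a journaling app might wrap Claude with a reflective system prompt and a handful of retrieved "memories." The prompt wording, function names, and memory format are my own assumptions; Rosebud's actual awareness engine and prompts are proprietary.

```python
# Illustrative sketch only: how a journaling app *might* pair Claude with a
# reflective system prompt and retrieved "memories." Names and prompt text
# here are hypothetical, not Rosebud's implementation.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a reflective journaling companion. Do not solve the user's "
    "problem or give advice. Validate what they share, then ask one "
    "open-ended question that deepens their self-awareness."
)

def reflect(entry: str, memories: list[str]) -> str:
    """Send a journal entry plus prior context to Claude and return its reply."""
    context = "\n".join(f"- {m}" for m in memories)
    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",
        max_tokens=300,
        system=SYSTEM_PROMPT,
        messages=[{
            "role": "user",
            "content": f"Relevant past entries:\n{context}\n\nToday's entry:\n{entry}",
        }],
    )
    return response.content[0].text

print(reflect(
    "Work deadlines are colliding with my kid's school event tonight.",
    ["Last week the user felt guilty about missing family dinners."],
))
```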
There are separate journal prompts for dreams and gratitude. Plus, there are journals created by Rosebud-vetted experts on topics like "nervous system rebalancing" and positive psychology, and journals created by licensed therapists, based on evidence-based approaches like cognitive behavioral therapy.
Once you complete an entry, Rosebud can get into its groove. First comes a summary of what you just shared, laced with validations, minor insights, and new prompts. For example, if you're worried about work deadlines competing with parental obligations, Rosebud will say it understands, acknowledge that the situation is challenging, and may ask something like, "What would help you feel accomplished?" and "How do you envision your evening unfolding if both your goals and your child's needs are met?"
Without AI, it's easy to imagine an analog journal entry about the same scenario giving the writer a chance to at least prepare for the day ahead, and perhaps practice self-compassion or another coping skill. But with AI, there's a dynamism that can lead to the improved self-awareness Bader and Dadashi say they are so determined to provide.
That's partially because Rosebud has a choose-your-own-adventure element. Following its initial summary, you can ask the Rosebud app to do things like suggest some ideas, offer a different perspective, or "help me think through this."
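Under the hood, each of those options presumably just steers the model's next turn. Here is a rough sketch of how that mapping might look; the option names and instruction wording are my own guesses, not Rosebud's actual prompts.

```python
# Hypothetical mapping from follow-up options to model instructions; Rosebud's
# real prompts are not public, so treat this purely as illustration.
FOLLOW_UPS = {
    "suggest_ideas": "Offer two or three gentle, concrete ideas the user could try.",
    "different_perspective": "Reframe the situation from an angle the user hasn't considered.",
    "think_through": "Ask one question at a time to help the user reason this out themselves.",
    "go_deeper": "Ask a single open-ended question about the feeling underneath what was shared.",
}

def follow_up(conversation: list[dict], choice: str) -> list[dict]:
    """Append the chosen instruction as the next user turn in the chat history."""
    return conversation + [{"role": "user", "content": FOLLOW_UPS[choice]}]
```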
I've favored suggesting ideas and offering a different perspective, both of which have led to surprisingly useful tips and insights.
For one entry about trying to increase my exercise after a frustratingly long illness-induced break, the different-perspective feature posed a question I hadn't even considered: How would I feel about going into the day with a curiosity mindset, treating my exercise attempt as an experiment to see how my body reacted, rather than pressuring myself to achieve a certain outcome?
"Rather than trying to replace human thinking, [Rosebud] is trying to help you think more deeply, and feel more deeply."
I'm not joking when I say that I immediately wrote the idea on a sticky note so it would be visible throughout the day, because it was an excellent suggestion I didn't want to forget.
What's perhaps most valuable about Rosebud is that it doesn't try to problem-solve on users' behalf. Bader and Dadashi say this is intentional. The approach is drawn directly from their own experiences in individual therapy and men's groups, where they had to learn skills like active listening and validation, and to subdue the impulse to explain or give advice.
"Rather than trying to replace human thinking, [Rosebud] is trying to help you think more deeply, and feel more deeply," says Dadashi. "You can't grow if someone's telling you what to do, or giving you all the answers."
Areas for improvement in AI journaling
There's real promise in an AI journaling app like Rosebud. While it's no substitute for professional therapy, it could provide vital support for someone who can't access help, or just empower users with fresh insight about their feelings and experiences, in a low-cost, accessible way.
But it also comes with unique risks. While expressive writing is considered a low-risk activity, AI can introduce unexpected complications.
Dr. Johannes Eichstaedt, a computational social scientist and assistant professor in psychology at Stanford University, has generally positive views on the concept of AI journaling.
Yet he can see a potential downside if users become reliant on a platform like Rosebud, turning to it instead of a loved one, or becoming so engaged with the technology that they stop doing other important things, like exercise and sleep.
Bader and Dadashi say they're aware of this possibility. Even though the average user is on the platform for just 12 minutes a day, there have been extreme cases of "unhealthy use" clocking in at three or four hours per day. That's partly why there's now an undisclosed limit on the number of times a user can "go deeper" in their entry, but users rarely reach it. I tend to ask Rosebud to go deeper two or three times before finishing the entry.
But there are other considerations. Eichstaedt says these include the possibility that AI journaling apps may create an "echo chamber" for people with serious mental illnesses like obsessive-compulsive disorder and schizophrenia. He notes that because an AI assistant is typically instructed to be helpful, it may unwittingly reinforce harmful thought patterns or behaviors.
For someone with OCD, this could mean using an AI journaling app to seek reassurance about their compulsive anxiety; a professional, by contrast, would try to help a patient with OCD learn to cope in the absence of reassurance.
Bader and Dadashi have no easy answers for how to handle these dynamics, though they recognize the "ethical boundaries and limitations in addressing serious mental health conditions." For now, they've programmed Rosebud to detect patterns of overdependence and also to encourage users to seek mental health care, as appropriate.
This is the case for someone who expresses suicidal feelings, a scenario I personally tested using a separate account. Unfortunately, Claude's default setting is not to immediately offer specific crisis resources, so it took an additional response from me, specifically asking how to contact a hotline, to get those details. Prior to that, it first asked if I would be open to professional support, then gave me a generic list of options, like a primary care physician, a local mental health clinic, or a crisis line.
Bader and Dadashi say this is partly because the LLM doesn't know the user's location and doesn't want to share irrelevant contact information, like providing a U.S. crisis line for a resident of the United Kingdom. They added that they're working to improve Rosebud's ability to quickly give a suicidal user the right resources.
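To illustrate the location problem they describe, here is a small, hypothetical sketch of how an app could hold back a hotline number until it knows the user's country. The US and UK numbers are real public crisis lines; the routing logic and coverage are my own assumptions, not Rosebud's implementation.

```python
# Illustrative only: route a user to a region-appropriate crisis line once the
# country is known, and ask for the country otherwise. Coverage here is just
# an example; a real product would need a far more complete directory.
CRISIS_LINES = {
    "US": "988 Suicide and Crisis Lifeline: call or text 988",
    "GB": "Samaritans: call 116 123",
}

def crisis_resource(country_code: str | None) -> str:
    """Return a local hotline if the country is known; otherwise ask for it."""
    if country_code and country_code.upper() in CRISIS_LINES:
        return CRISIS_LINES[country_code.upper()]
    return ("I want to point you to a crisis line that serves your area. "
            "Could you tell me which country you're in?")
```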
Users should also understand that no internet-connected journal product is guaranteed safe from bad actors, no matter how well it's protected. Rosebud entries, for example, are encrypted in transit to the company's servers, and then stored in a highly rated third-party cloud. But they also end up on Anthropic's and OpenAI's servers, albeit anonymized, so the underlying models can respond to each prompt.
Currently, Rosebud's data is retained by Anthropic and OpenAI for 30 days, but Bader says the company is working with both to enable a zero-retention option. Neither LLM company trains its models on user data, and Rosebud doesn't, either. Nor does Rosebud sell user data to third parties.
Despite these measures, the data is still out of your hands once you enter it into Rosebud and could be a target of hackers eager to weaponize it. Because it doesn't handle patient data in a clinical setting, Rosebud isn't required to be compliant with the Health Insurance Portability and Accountability Act, the federal privacy law.
For some, these and other risks of AI journaling may be worth it. Personally, I'm finding it difficult to justify pausing my use given how much I've gained from Rosebud in one week. It's helped supercharge my mindfulness practice. I've set and achieved goals related to focus that have otherwise stymied me. It even managed to convince me to adopt a new mindset, which is no small thing for someone who reports on mental health for a living.
The real test will be if this journaling habit sticks for the long run, since we know that most people abandon mental health and well-being apps within weeks of starting them. My personal growth might plateau, too. But I actually look forward to finding out, because who knows what Rosebud will help me learn about myself along the way.
If you're feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can reach the 988 Suicide and Crisis Lifeline at 988; the Trans Lifeline at 877-565-8860; or the Trevor Project at 866-488-7386. Text "START" to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email info@nami.org. If you don't like the phone, consider using the 988 Suicide and Crisis Lifeline Chat at crisischat.org. Here is a list of international resources.