
Conversations that earn trust: how we’re designing personal finance AI at Enrich

Brent

What's the difference between AI that feels trustworthy and AI that you can actually trust?

Discussing our finances can often feel awkward. Even when we hear reassurances like "money isn't everything," personal finance is undeniably a loaded topic for most of us.

We often use finances as an indicator of accomplishment or as a measure of self-worth. We can find ourselves collapsing our rich, unique lives into fungible numbers that we might not feel so great about. Conversations about finance feel a lot like conversations about health: fine with a trusted professional, but not something we tend to bring up casually.

When money is tight, these conversations are even harder. Our finances become sources of doubt, stress, and fear. As a result, our biggest questions are the ones we're most afraid to bring up. This is especially true for the financially vulnerable households that Enrich aims to help: people navigating irregular income, benefits cliffs, or mounting debt. Existing finance apps aren't designed to support these sensitive conversations.

There is no quick, clean technical solution to this. At its core, this is a human and social challenge. But people are starting to explore vulnerable topics by turning to AI, and at Enrich we think this is an interesting opportunity.


When does chatting with an AI make sense?

There's growing evidence that people may sometimes find it easier to open up to AI. For example, one study found people perceived less fear of judgment when talking to chatbots compared to humans. Given how sensitive personal finance tends to feel, it's easy to imagine how reduced fear of judgment could help.

Another study found that chatbots' consistent availability and non-judgmental presence contribute to building user trust, particularly in service contexts. When someone is worried about a deposit or a bill, getting an answer right away—without fear of judgment—helps.

There is also evidence that chatbot use can provide some social benefit, but the picture is nuanced. Research from MIT found that companion chatbot use doesn't directly predict loneliness like we might expect; the relationship depends on individual factors like neuroticism and social attraction. But a larger 2025 study with nearly 1,000 participants found that higher daily usage correlated with worse psychosocial outcomes, while the benefits of voice-based chatbots diminished at high usage levels. So chat can be supportive, but it's important that it be offered thoughtfully, with care and boundaries.

Seeming trustworthy vs being trustworthy

These findings offer a glimpse into why people may be opening up to AI. But they also reveal something important: it's remarkably easy to design AI systems that feel trustworthy without actually being trustworthy.

The dynamics that reduce fear of judgment can be exploited. A chatbot can simulate warmth, respond instantly, and never push back. In doing so, it can create a false sense of safety that leads people to act on guidance they shouldn't trust. This is the dark pattern of trustworthiness: optimizing for user comfort and the perception of trust over actual user outcomes.

We think there's a meaningful difference between earning trust and manufacturing the feeling of trust. A system earns trust by being transparent about its limitations, by presenting verified information, by knowing when to hand off to a human. It earns trust through design choices that prioritize the user's actual wellbeing over engagement metrics.

This distinction shapes how we're building with AI at Enrich.


What a good finance conversation might look like

So what would it look like to have a genuinely trustworthy personal finance conversation with AI?

An ideal personal finance chatbot might:

  • patiently explain what is going on with our money and expenses
  • surface patterns, like "Here is what changed this week"
  • non-judgmentally discuss sensitive topics like debt and government resources
  • help us feel a little less alone while navigating complexity
  • welcome questions we wouldn't typically feel comfortable asking a human
  • help us understand scenarios and think through decisions… but without actually telling us what to do

Above all, an AI might offer true accessibility and tailored levels of clarity, all with reduced worry about stigma. A super promising combo—if designed with integrity.

What needs extra care

There must be clear boundaries, and this is where the trust-earning versus trust-seeming distinction really matters.

No matter how advanced the tech, a personal finance bot is still not a financial advisor. A trust-earning system must make this explicit. It does not imply authority, guess about benefit eligibility or tax situations, or offer prescriptive advice.

A finance chatbot should also not simulate emotional bonding. Research suggests that emotional bonding with AI can have negative effects over time, particularly for heavy users who may develop emotional dependence or experience reduced real-world socialization. A system that earns trust doesn't optimize for attachment. It actually maintains an appropriate distance.

There's also evidence that many chatbots are failing to meet basic ethical standards for sensitive conversations, including poor crisis management, reinforcing users' negative beliefs, and creating false impressions of empathy. And separate research found that AI chatbots responded appropriately to mental health scenarios only 60% of the time (versus 93% for licensed therapists). Meanwhile, there are cases where people have made poor financial decisions after taking AI guidance at face value.

Suffice it to say: a good bot should support a person. It should never replace their judgment or present itself as a stand-in for human help and connection.


The framing we are figuring out

Our design principles are oriented around earning trust, not signaling it.

  1. Humane, but not human. It's important that we design AI experiences to avoid anthropomorphism and false intimacy. The system is not a finance therapist or a friend; it's an interface to information. We will not exploit the parasocial dynamics that make chatbots feel like companions.
  2. The right kinds of inference. LLMs generate responses by inferring what to say next. In a sense, it's all fancy guesswork. Oftentimes those guesses are helpful! But we do not want guesses about numbers or account details. A trust-earning system doesn't blur the line between "generated plausible response" and "verified fact." So we've set up vetted tools for queries and presentation (see the sketch after this list).
  3. Guided reasoning instead of direct recommendations. Research on advice-taking suggests that decision-makers often prefer receiving information to help them think through options rather than being told what to choose. This preserves their autonomy while still improving accuracy. A trust-seeming system might offer confident recommendations because that feels more helpful. A trust-earning system supports the user's own reasoning process, because that's what actually helps. We aim to design a system that supports decision-making, but does not suggest the decisions.
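
To make the second principle concrete, here is a minimal sketch of the vetted-tools idea. The names (a Ledger stub, a get_spending_summary tool) are illustrative rather than our actual implementation, and the model call is stubbed out, but the shape is what matters: dollar figures come only from a verified query over the user's own data, and the language model's job is to explain those figures, never to supply its own.

```python
from dataclasses import dataclass


# Hypothetical stand-in for a verified data source, such as a store of
# transactions from linked accounts. Illustrative only, not Enrich's code.
@dataclass
class Ledger:
    transactions: list[tuple[str, float]]  # (category, amount)

    def spending_by_category(self) -> dict[str, float]:
        totals: dict[str, float] = {}
        for category, amount in self.transactions:
            totals[category] = totals.get(category, 0.0) + amount
        return totals


def get_spending_summary(ledger: Ledger) -> dict[str, float]:
    """A 'vetted tool': the only place dollar figures may come from."""
    return ledger.spending_by_category()


def answer_spending_question(ledger: Ledger, question: str) -> str:
    """Fetch verified numbers first; the model only narrates them."""
    facts = get_spending_summary(ledger)
    # In a real system this prompt would go to a chat model with the
    # instruction to explain the figures without inventing new numbers.
    prompt = (
        "Explain these verified figures plainly, and do not state any "
        "number that is not in the data.\n"
        f"Question: {question}\nData: {facts}"
    )
    return prompt  # returned as-is so the sketch runs without an LLM call


if __name__ == "__main__":
    ledger = Ledger(transactions=[("groceries", 212.40), ("rent", 1400.00),
                                  ("groceries", 38.15)])
    print(answer_spending_question(ledger, "What changed in my spending?"))
```

The same pattern generalizes: any number the chatbot shows has to trace back to a tool call, so a plausible-sounding guess never gets presented as a verified fact.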

What we're building toward

Used well, chat lowers the barrier to asking for help. It can help families understand their financial story and reduce the cognitive and emotional burden of navigating benefits and irregular income. It can make a stressful question feel like a simple, readily available conversation.

And because financial health is really about what happens in life rather than on a screen, we're also exploring how to proactively watch for changes and surface what matters early.

AI will not replace the human parts of financial life. But when designed with care, it can support people at moments when clarity and confidence matter most.

That is the work we are doing at Enrich. We are building tools that earn trust through transparency and verified information, and by respecting the person on the other side of the screen.