Case Study · Parleva

Designing for Confidence,
Not Completion

How I built Parleva, an AI conversation app that helps language learners start speaking in under a minute.

Challenge

Language learners often freeze in real conversations despite completing lessons, streaks, and drills.

Approach

A conversation-first AI practice system that removes placement tests, adapts live, and extends realistic scenarios beyond scripted exchanges.

Result

80% activation, 34-second median time to first conversation, 83% started within 2 minutes, 204 users returned for 2+ sessions.

Learning

The strongest signal was not activation alone. It was conversation depth.

Project Snapshot

At a glance

Role: Solo designer + builder
Platform: Web + Android
Scope: Strategy, brand, UX, AI behavior, implementation, analytics
Goal: Help learners start speaking quickly
Core metric: Meaningful conversation depth
Product: Parleva — AI language conversation practice app
Timeline: 2026 launch + early iteration
Users: Adult language learners wanting real conversation practice
Tools: Figma, Firebase, Google Play, Claude Code
Scale: 591 registered users · 861 sessions · 277 MAU

I led Parleva end-to-end as a solo product designer and builder, owning the full product loop from strategy and brand through design, AI behavior, implementation, analytics, and launch. Owning both design and implementation meant every decision had to work beyond the screen: across product strategy, engineering feasibility, AI behavior, measurement, and growth.

Metrics Snapshot

The data told a clear story

Fast start → meaningful conversation depth → repeat behavior.

Speed to Value
80%
Of registered users started at least one conversation
34s
Median time from signup to first message, among users who started a conversation
83%
Started speaking within 2 minutes, among users who started a conversation
Conversation Depth
232
Sessions reached 10+ turns — meaningful depth
50%
Of food & drink sessions reached 10+ turns
Repeat Behavior
204
Users returned for 2+ sessions — 52% of users with at least one session
277
Monthly active users — 861 total sessions across 591 registered users
The Problem

Most language apps measure what is easy to measure

Lessons completed. Streaks maintained. XP earned. Words recognized. Those signals can create the appearance of progress — but they do not always translate to the moment learners care about most:

Can I say something to a real person without freezing?

Most learners are not starting from zero. They may recognize words, pass quizzes, and complete modules. But when a real conversation becomes unpredictable, confidence disappears. The gap is not just knowledge. It is exposure.

Traditional language apps often treat conversation as something learners unlock after enough preparation. Parleva starts from a different belief:

If the goal is speaking, conversation should not be the reward at the end of learning. It should be the method.

Traditional language apps
  • Lessons
  • Quizzes
  • Streaks
  • Delayed speaking
  • Completion
  • Progress as points
Parleva
  • Scenarios
  • Realistic conversation
  • Calm consistency
  • Immediate speaking
  • Confidence
  • Progress as capability
Strategy & Brand Foundation

Before designing screens, I defined the foundation

The brand was built around a clear positioning idea: Practice real conversations. Build real confidence. That positioning shaped the product experience as much as the marketing.

1. Conversation over drills
2. Confidence over completion
3. Encouragement over evaluation
4. Calm consistency over streak pressure
5. Real-world usefulness over abstract progress
Avoid
  • "Complete your lesson"
  • "Keep your streak"
  • "You missed 3 days"
  • "Don't fall behind"
Use
  • "Start a conversation"
  • "Keep going"
  • "You're getting more comfortable"
  • "Try saying it this way"
Design Challenge

Three product questions shaped the architecture

Challenge 1

How do we get users into conversation before anxiety or friction builds?

Design implication: Remove setup friction
Product response: No placement test, scenario-first onboarding, and fast entry into the first exchange
Challenge 2

How do we keep conversations realistic without overwhelming learners?

Design implication: Balance realism with emotional safety
Product response: Adaptive difficulty, conversational hooks, lightweight suggestions, and one coaching signal at a time
Challenge 3

How do we motivate repeat practice without streak pressure or gamified guilt?

Design implication: Build intrinsic motivation
Product response: Calm UX, useful conversations, no XP, no streak anxiety, progress tied to real-world capability
System Design

Three systems working together

Parleva is not a language course with a conversation mode. It is a conversation-first practice product. A learner chooses a scenario and starts talking. The AI plays the role, adapts to the learner's ability, and provides focused coaching — no placement test, no XP, no lesson map required.

Scenario context + Learner output + Conversation rules + Difficulty signal = Next adaptive response

1 — Conversation Engine

Most language practice tools close the scene as soon as the task resolves. Parleva doesn't. The barista follows up. The stranger asks where you are from. The hotel receptionist adds a small complication. Each scenario is designed with conversational hooks that create realistic follow-ups, light complications, and opportunities to recover. The goal is productive discomfort — the kind that helps learners practice the part of conversation most apps avoid.

1. User message: gets the learner actively producing language
2. AI responds in role: preserves realism and scenario immersion
3. One coaching signal: helps without overwhelming
4. Suggested replies: prevents the blank-page moment
5. Next turn: keeps the learner in conversation
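As a minimal sketch of how the loop and the response formula above fit together, one turn could be assembled like this. The function name, rule wording, and prompt format are my own illustrations, not Parleva's actual implementation:

```python
# Hypothetical sketch: scenario context + learner output + conversation
# rules + difficulty signal combine into the instruction for the next
# adaptive response.
def build_prompt(scenario: str, rules: list[str], difficulty: str, user_message: str) -> str:
    # Everything the model needs to stay in role and adapt lives in one place.
    return "\n".join([
        f"Scenario: {scenario}",
        f"Difficulty: {difficulty}",
        "Rules: " + "; ".join(rules),
        f"Learner: {user_message}",
    ])

# Illustrative conversation rules mirroring the five-step loop above.
CONVERSATION_RULES = [
    "respond in role",
    "give at most one coaching signal (praise or correction, never both)",
    "offer safe / natural / stretch suggested replies",
    "end every turn with a follow-up hook",
]

prompt = build_prompt("Ordering at a cafe", CONVERSATION_RULES, "same", "Un cafe, por favor")
```

Keeping the rules as data rather than prose makes each behavior individually testable, which matters once the loop has to hold up across hundreds of scenarios.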

2 — Adaptive Intelligence

Parleva has no placement test. Placement tests create friction before value and can reinforce the anxiety the app is trying to reduce. Instead, Parleva calibrates through use. Every conversation generates a live signal based on the learner's actual output: message length, correctness, and language choice. The learner never has to declare their ability. They just start talking, and the product meets them where they are.

Learner output (word count + correctness + language choice) → easier / same / harder signal → level adjustment → next AI response

Calibration happens through use, not a placement test.
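A calibration signal like the one described could be sketched as follows. The thresholds, weights, and ten-level scale are illustrative assumptions, not Parleva's actual tuning:

```python
def difficulty_signal(word_count: int, correctness: float, target_lang_ratio: float) -> str:
    # Map one turn's output to an easier / same / harder signal.
    # Thresholds here are illustrative, not production values.
    score = 0
    score += 1 if word_count >= 8 else (-1 if word_count <= 3 else 0)
    score += 1 if correctness >= 0.9 else (-1 if correctness < 0.6 else 0)
    score += 1 if target_lang_ratio >= 0.8 else (-1 if target_lang_ratio < 0.5 else 0)
    if score >= 2:
        return "harder"
    if score <= -2:
        return "easier"
    return "same"

def adjust_level(level: int, signal: str) -> int:
    # Nudge the level one step at a time, clamped to a hypothetical 1-10 scale,
    # so a single strong or weak turn never causes a jarring jump.
    delta = {"harder": 1, "easier": -1, "same": 0}[signal]
    return max(1, min(10, level + delta))
```

Because the signal is recomputed every turn, the learner never declares a level; the product converges on one through use.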

3 — Motivation Model

Parleva has no XP, streak pressure, leaderboards, or "you missed a day" notifications. Speaking requires vulnerability. You have to be willing to sound imperfect, pause, get something wrong, and try again. The product needs to protect that psychological safety, not add pressure on top of it.

Gamified motivation
  • XP
  • Streaks
  • Loss aversion
  • Leaderboards
  • Pressure to return
Parleva motivation
  • Real-world practice
  • Calm consistency
  • Intrinsic confidence
  • Personal capability
  • Reason to return

Parleva does not ask: How do we make users feel bad for leaving?
It asks: How do we make each conversation feel worth coming back to?

Foundations path screen
Foundations — structured themed practice without streaks or XP
Dashboard screen
In seconds — start something new or pick up where you left off
Key UX Decisions

Five decisions that shaped the experience

No placement test

Calibration happens invisibly through conversation. Users reach value before being asked to define themselves.

Tradeoff: No explicit level upfront
Why it matters: Prioritizes speaking over classification

Conversation-first onboarding

No tutorial. Users pick a scenario and begin. The product teaches itself through use — before hesitation builds.

Tradeoff: Less upfront explanation
Why it matters: Lets the first conversation demonstrate the product

Minimal interface

The conversation is the interface. Feedback and support appear only when useful — not as a permanent dashboard.

Tradeoff: Less visible feature density
Why it matters: Keeps focus on speaking, not interface mechanics

One piece of feedback at a time

Either praise or correction — not both. One signal keeps the rhythm conversational and the learner moving forward.

Tradeoff: Fewer corrections per turn
Why it matters: Confidence over exhaustive feedback

Suggestions as confidence scaffolding

Suggested replies at safe, natural, and stretch levels prevent the blank-page moment where anxiety takes over.

Tradeoff: Can reduce originality if overused
Why it matters: Optional scaffolding without removing free response
Key Flows

Three moments that define the experience

First-time experience

Choose language → Choose scenario → AI opens in role → User responds → Calibration begins

Within a median of 34 seconds, the first conversation is underway. The first experience is not preparation — it is the product.

Conversation loop

Learner sends message → AI responds in character → One coaching signal → Suggestions appear → Conversation continues

The objective is not to complete a module. The objective is to stay in conversation.

Returning user

Open app → Choose scenario → Start again

Returning users are not greeted with guilt, streak loss, or a dashboard of missed activity. They return to scenarios and start again.

First conversation opening screen
Low-friction setup
Scenario selection drawer
Real-world contexts, not lessons
Category scenario list
Structure without forcing a curriculum
Positive feedback in conversation
Coaching appears inside the flow
Results

Early data showed the core experience was working

Speed to Value
80%
Of registered users started at least one conversation
34s
Median time from signup to first message, among users who started a conversation
83%
Started speaking within 2 minutes, among users who started a conversation

The decision to remove gates and get learners into the core behavior quickly was validated. For a product built around speaking confidence, the first meaningful moment is the first exchange. Parleva compressed that gap to under a minute.

Conversation Depth
232
Sessions reached 10+ turns
50%
Of food & drink sessions reached 10+ turns

I used 10+ turns as a proxy for meaningful conversation depth — it indicated the learner moved beyond testing the app and stayed long enough to adapt, recover, and continue. 232 sessions reaching that threshold showed the behavior happened repeatedly, not just once.
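A depth metric like this is straightforward to compute from session logs. The session shape below (`scenario`, `turns`) is a hypothetical simplification, not Parleva's actual analytics schema:

```python
def depth_metrics(sessions: list[dict], threshold: int = 10) -> dict:
    # Count sessions reaching the turn-depth threshold, overall and per
    # scenario. Each session is assumed to look like:
    #   {"scenario": "food & drink", "turns": 12}
    by_scenario: dict[str, tuple[int, int]] = {}
    for s in sessions:
        total, deep = by_scenario.get(s["scenario"], (0, 0))
        by_scenario[s["scenario"]] = (total + 1, deep + (1 if s["turns"] >= threshold else 0))
    return {
        "deep_sessions": sum(1 for s in sessions if s["turns"] >= threshold),
        "share_by_scenario": {k: deep / total for k, (total, deep) in by_scenario.items()},
    }
```

The per-scenario share is the interesting cut: it surfaces which contexts (like food & drink at 50%) hold learners in conversation, and which ones stall.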

Repeat Behavior
204
Users returned for 2+ sessions — 52% of users with at least one session
2.2
Average sessions per active user

The most important signal was not any single metric in isolation; it was the relationship between depth and return behavior. 204 users returned for multiple sessions, and active users averaged 2.2 sessions, suggesting that users who entered the experience often went beyond a single interaction. Speed helped them start, but depth helped them believe the experience was worth repeating.

Start quickly
Reach meaningful conversation depth
Return for another session
Build confidence through repetition
User profile screen showing progress and subscription
Live product — real user progress and subscription data
Early User Feedback

Users talked about confidence, not features

The strongest qualitative signals did not praise features. They described a shift in identity.

"There is no better way to learn language than through real conversation."

"Ideal for learners who want real conversational practice."

Tradeoffs

Senior-level judgment requires acknowledging what you gave up

No gamification vs. retention pressure

Removing streaks and XP removes one of the most reliable engagement mechanics in consumer apps. Parleva is betting that confidence-driven engagement is more aligned with its purpose than guilt-driven return behavior. That means the core conversation loop has to stand on its own.

Invisible intelligence vs. perceived simplicity

The adaptive level system does meaningful work in the background, but invisible systems can be hard for users to appreciate. If the AI adapts well, the experience simply feels natural. The design direction is to surface adaptation through subtle moments of recognition, not scores or dashboards.

Open conversation vs. structured learning

Parleva's open-ended model gives learners freedom, but some users still want direction. Too much structure would make Parleva feel like the apps it was designed to move beyond. The roadmap addresses this through lightweight learning paths that support conversation without replacing it.

AI flexibility vs. product consistency

LLMs are powerful because they can generate natural conversation. They are risky for the same reason. Parleva needed more than a prompt — it needed a behavior system defining rules for role consistency, correction boundaries, suggestion logic, and failure handling. The AI had to feel natural, but the product behavior had to remain designed.
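A behavior system of this kind can be expressed as explicit, checkable policy rather than prose buried in a single prompt. The structure and values below are a hypothetical sketch of the four rule areas named above, not Parleva's actual configuration:

```python
# Hypothetical behavior-rule set: role consistency, correction boundaries,
# suggestion logic, and failure handling as explicit data that prompts and
# tests can both be generated from.
BEHAVIOR_RULES = {
    "role_consistency": {
        "stay_in_character": True,
        "never_reveal_system_prompt": True,
    },
    "correction_boundaries": {
        "max_coaching_signals_per_turn": 1,
        "praise_or_correction": "one per turn, never both",
    },
    "suggestion_logic": {
        "levels": ["safe", "natural", "stretch"],
        "always_offered": True,
    },
    "failure_handling": {
        "on_off_topic": "gently steer back to the scenario",
        "on_empty_input": "offer suggestions instead of correcting",
    },
}
```

Centralizing the rules this way keeps the AI's flexibility bounded by designed behavior: the model generates the words, but the product decides the shape of every turn.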

What I Learned

Activation is not the same as confidence

Getting users into a conversation quickly matters. The 34-second median time to first conversation proved that onboarding was working. But the deeper challenge was helping users stay in the conversation long enough for the experience to become meaningful.

The strongest signal came from conversation depth. Once a session crossed into 10+ turns, it started to resemble what Parleva was designed for: not a lesson, but a real exchange.

The question became less: How do we get more people to start?
And more: How do we help more people stay long enough for the product to work?

That shifted the next iteration toward stronger scenario hooks, clearer suggestions, more natural voice pacing, better continuity, and post-session reflection.

If I were starting again, I would instrument conversation quality earlier — not just whether users started, but where they stalled, which prompts created momentum, and which scenarios helped beginners recover fastest.

AI-Augmented Workflow

AI was leverage, not authorship

How AI was used in this project

Claude Code and ChatGPT helped accelerate exploration, implementation support, prompt iteration, and edge-case testing. But the important design work was not "using AI."

It was directing it. That meant defining behavior rules, testing failure modes, evaluating output quality, tightening conversation patterns, and deciding what the product should and should not do.

The product judgment still had to come from me.

What's Next

Helping more users reach meaningful conversation depth

Next 1

Memory and Continuity

Remember what a user practiced and where they struggled — making each return feel more personal, not more complex.

Next 2

Progress That Feels Human

Reflect real-world capability — scenarios practiced, conversations completed, topics ready to revisit — not XP or abstract levels.

Next 3

Deeper Session Insight

A calm post-session synthesis: what went well, what stretched the learner, one thing to carry forward. A reflection, not a report card.

Parleva began with a simple belief. Early data showed that users could reach conversation quickly. The deeper signal showed that meaningful conversation depth was the path to return behavior. And the strongest user feedback pointed to something more important than engagement: a shift in identity.

"It helped me to have a voice."

Every design decision from here is a path back to that moment.