
Persona Builder
Developer tool for designing consistent personalities for conversational AI
Overview
I’m always pushing to innovate voice assistants in ways that leverage the great potential of our social brains. In 2021 I won $10,000 in a company-wide hackathon by proposing a developer tool that allows front-end voice developers to customize assistants using personality traits instead of code. By inviting humanized terms such as “agency” and “assertiveness” directly into the process of creating machines, I hoped to challenge developers to see their inventions more as social companions than as cold machinery.
Role: Project leader; built the prototype, organized and presented the business pitch
Timeline: 1 week (March 2021)
Team: Anthony Serravalle (dialog engineer), Shweta Naik (project manager), Richard Beaufort (NLG engineer), Jenny Zellman (UX design)
Award: Cerence Hackathon 1st place winner 2021
Background
When speaking to voice assistants, users tend to unconsciously assign human-like personalities to the AI. This assignment is based not only on the sound and tone of the system voice, but also on how the system makes decisions, what information it provides, and how it expresses that information. Research has shown that users have more positive interactions with voice systems whose personalities they perceive to be compatible with their own, and whose behaviors they perceive as adapting to the circumstances of the situated interaction.
The Problem
In the design and development of voice user interfaces, it is impractical to program one personality, or system persona, directly into the implementation of the system behavior – the core code. That approach often leads to inconsistent system personas and lacks the flexibility to dynamically modify the parts of the code that make up the system’s personality. The key problem is how to separate reusable core code, which remains the same regardless of the system persona, from the software elements that trigger an expression of personality traits. Without that separation, the result is:
Frustrated customers who want consistent end-user experiences
Expensive and time-consuming custom solutions involving tedious development cycles
Products that tend to leave voice design best practices behind
Solution
A tool which empowers developers to customize an assistant by personality traits

Description
There are many aspects of voice user interfaces (VUIs) that shape end users’ perception of the assistant’s personality. For example:
dialog actions (e.g., confirmation, clarification questions)
system outputs (e.g., spoken prompts, sounds, on-screen output)
text-to-speech generation (e.g., tone of voice)
For each expression of dialog behavior we assign values along a personality trait (e.g., brevity/verbosity, agency/dependence, friendliness/professionalism). For instance, in response to a user asking “When is my next meeting?”, a voice assistant could respond “At 3 pm,” or “Your next meeting is scheduled today at 3 pm with Bob and Mary.” The former expression falls low on the brevity/verbosity axis (verbosity: 1), while the latter corresponds to a higher value on the same axis (verbosity: 10).
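The trait-axis idea above can be sketched as a small selection function: each dialog action carries prompt variants annotated with trait values, and the runtime picks the variant closest to the persona’s setting. This is a minimal illustrative sketch, not the actual Persona Builder implementation; all names (PROMPT_VARIANTS, select_prompt, the meeting example) are hypothetical.

```python
# Hypothetical sketch: prompt variants annotated with a verbosity value (1-10).
# The core dialog code stays the same; only the persona settings change.
PROMPT_VARIANTS = {
    "meeting_query": [
        {"verbosity": 1, "prompt": "At {time}."},
        {"verbosity": 10,
         "prompt": "Your next meeting is scheduled today at {time} with {attendees}."},
    ],
}

def select_prompt(action, persona, **slots):
    """Pick the variant whose verbosity value is closest to the persona's."""
    variants = PROMPT_VARIANTS[action]
    best = min(variants, key=lambda v: abs(v["verbosity"] - persona["verbosity"]))
    return best["prompt"].format(**slots)

# A terse persona selects the brief expression:
terse = {"verbosity": 2}
print(select_prompt("meeting_query", terse, time="3 pm"))  # At 3 pm.
```

The point of the sketch is the separation the section above calls for: the dialog action (“answer a meeting query”) is reusable core code, while the persona dictionary is a data artifact a front-end developer can edit without touching that code.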
Inspiration
Something I noticed almost immediately when I started building assistants is that it’s actually very difficult to program in a personality. Writing prompts is often very tech-driven, and it’s surprisingly hard to code in the kind of personality I as a designer imagined when I first started. I’m a big sci-fi fan, and I’ve found it interesting that in shows and movies like Interstellar and Westworld, you can see the programmers of these futuristic bots adjusting “personality” characteristics.