How ‘Human’ Should They Be? In Short, Not Very

One of the great things about AI agents is that you can easily converse with the technology as if it were a person. Thanks to natural language processing, an agent understands, interprets, and generates human-sounding responses. This raises the question: How human-like should an agent be?

In short, not very. While agents may seem lifelike, designing them to feel overly human can frustrate users and lead to misplaced (and ultimately unmet) expectations.

Getting AI agent design right is paramount. It’s a brand-new category, so nobody has all the answers, but designers would do well to focus on the core principles of inclusivity, trustworthiness, and human-centricity. These principles help ensure there’s always a bold line between human and technology.

The market for agents is expected to hit $47 billion by 2030, reflecting the demand for technologies that enhance efficiency and productivity, augment human potential, and take action autonomously. Clearly, agents will be in high demand in customer-facing scenarios like service and marketing, as well as internally, where they help workers across organizations perform all manner of tasks. But designing them to act as tools, not fellow workers, is the key to successfully integrating them into the workflow.

When chatbots or agents try too hard to sound human, it can backfire. Imagine asking a serious question about your bank account or a medical test, and the agent responds very conversationally, with empathetic language. You may be led to believe it’s a real person; when you realize it’s actually a piece of technology, you may feel frustrated and lose trust. Do you want real answers or AI pretending to be your new best friend?

Businesses need to consider a range of guidelines when building an agent. Agents may sound human, but that doesn’t mean they should take on human characteristics, too. UX designers need to consider how to help users clearly distinguish between digital agents and humans, addressing questions like: Should they have a name or use pronouns? Should they be represented by an avatar or even a real person? Should they be able to understand and respond to emotional cues?

Yvonne Gando, senior director of UX at Salesforce, said businesses might be tempted to make the agent more relatable by giving it human traits. She encourages businesses to focus on competence over personality, and stresses the importance of maintaining the distinction between human and AI.

Agents should boost human potential, not pretend to be a “digital human.” Why? Giving an AI agent human-like qualities sets users up for unrealistic expectations. It can’t actually empathize or understand emotions. Plus, when we blur the line between human and machine, people may develop unnecessary emotional attachments or start leaning on agents in ways that could lead to ethical problems or misuse.

“Focus on what the user needs, not the AI doing it,” Gando said. 

Here are some suggested design principles for AI agents. 

AI agent design principle #1: Focus on the work, not the agent doing the work

The agent’s job is to carry out tasks, freeing up workers to tackle more complex issues and be more productive and efficient. The focus, then, should be on the tasks, activities, and outcomes a user can achieve through the agent, rather than on the agent itself. Before deciding to give your agent a gender or other human trait, ask yourself whether it’s a requirement for the agent to carry out the task. This keeps the focus on the user’s needs.

“When AI agents are given human-like personalities, users may begin to relate to them as they would to a person,” said Gando. “If the agent can’t meet those expectations or misunderstands a question, this often leads to disappointment and a loss of trust in the brand. It’s more effective to keep AI interactions helpful and approachable without creating a false sense of human connection.”

Avoid giving agents pronouns, because doing so could be construed as giving them equal weight to a human worker. Instead of an agent saying, “I wanted to give you these documents,” consider “Here are documents you might find helpful.”
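In practice, this is a copy decision that can be enforced in the response layer. Here is a minimal sketch in Python, with hypothetical function and variable names, of templating messages around the task output rather than around an agent “self”:

```python
# Minimal sketch (hypothetical names): phrase agent messages around the
# output the user needs, not around a first-person agent with intentions.

def format_delivery(item_description: str) -> str:
    """Return a task-centered message instead of first-person phrasing."""
    # Avoid: "I wanted to give you these documents."
    return f"Here are {item_description} you might find helpful."

print(format_delivery("the renewal documents"))
# -> "Here are the renewal documents you might find helpful."
```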

AI agent design principle #2: Always identify agents as such

AI agents are becoming so adept at communicating in natural language that it can be hard to know whether you’re talking with a person or a piece of technology. Three states — California, Utah, and Colorado — have enacted laws requiring disclosure of bot use in certain circumstances. Mandates aside, the best practice is to identify the use of AI immediately in any customer interaction.

“You should always start a conversation knowing that the entity you’re interacting with is AI,” said Gando. “There is a real loss of trust and transparency when the customer doesn’t know.”

Salesforce’s Agentforce, by default, alerts administrators and agent managers when they’re about to implement or use AI agents. Disclosures make sure the recipients of agent-generated emails know they were created and sent by AI. A non-removable, non-editable disclaimer below the signature line adds extra transparency.
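The mechanics above are specific to Agentforce, but the underlying pattern is simple to sketch: append the disclosure at the final assembly step, after any editable content, so nothing upstream can strip it. A minimal illustration with hypothetical names, not any product’s actual API:

```python
# Minimal sketch (hypothetical): the disclosure is appended at final
# assembly, below the signature, so earlier steps can't edit it out.

AI_DISCLOSURE = (
    "----\n"
    "This email was generated by an AI agent."
)

def finalize_agent_email(body: str, signature: str) -> str:
    """Assemble the outbound email with a fixed AI disclaimer at the end."""
    return f"{body}\n\n{signature}\n\n{AI_DISCLOSURE}"

print(finalize_agent_email("Your renewal is confirmed.", "Acme Renewals"))
```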

If an agent is stumped by a request or has outrun what it’s been programmed to do, it should hand the conversation off smoothly to a human. In that situation, the agent should clearly alert the customer to the transfer.

Einstein Service Agent, for example, automatically escalates more complicated service calls or questions to humans based on the parameters set by the company.
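A handoff rule like this usually comes down to a scope check plus a confidence threshold, both set by the company. A minimal sketch, with assumed thresholds and topic names rather than any product’s actual configuration:

```python
# Minimal sketch (hypothetical): escalate when the request is out of scope
# or confidence is low, and tell the customer about the transfer.

ESCALATION_THRESHOLD = 0.6                      # assumed, company-set
SUPPORTED_TOPICS = {"billing", "shipping", "returns"}

def handle_request(topic: str, confidence: float) -> str:
    if topic not in SUPPORTED_TOPICS or confidence < ESCALATION_THRESHOLD:
        # Clearly alert the customer before handing off to a human.
        return ("This request needs a human specialist. "
                "Transferring you to a support representative now.")
    return "One moment while this request is handled."

print(handle_request("mortgages", 0.9))   # out of scope -> human handoff
```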

AI agent design principle #3: Do not refer to agents as ‘digital workers’

Earlier this year, an HR technology company faced swift backlash when it announced that it would “give digital workers official employee records,” complete with onboarding, managers, and places on org charts.

The takeaway? Humans rightfully take umbrage at being equated with machines. As useful and integral to work as they will be, AI agents should not be elevated to human status. They should not be positioned as teammates or stakeholders, but rather as part of a workflow. AI agents are helpers, not employees: technology that elevates human potential while preserving the distinctiveness and value of human qualities.

Another tip: When labeling an agent, focus on the job function rather than the agent performing it. For example, it’s best practice to refer to an agent as “customer service” or “renewals” versus a “customer service rep” or “renewals manager.” 

AI agent design principle #4: Be inclusive, accessible, and on-brand

AI agents, especially those used in customer-facing situations, must reflect your brand’s unique voice and tone. A high-end retailer and a dollar store, for example, are both in the retail business but have vastly different audiences and expectations. Agents should be designed accordingly. 

Like all technology, agents should be inclusive and accessible, accommodating different ways of interacting. It’s critical to make sure users with disabilities can interact with AI effectively, whether through screen readers, keyboard navigation, or voice commands.

Salesforce has developed a series of ethical agent guidelines, which may be helpful. 

Offer both text and voice options: text responses for users who prefer or need text-based interactions, and voice for those who prefer or require audio. Use straightforward and unambiguous language that’s easy for users to understand. This clarity is especially important for users with cognitive disabilities or those using translation services.

Use inclusive and simple language, being careful to avoid biases, stereotypes, and jargon that would confuse or turn off users. 

“Be approachable,” said Gando. “If the agent is overly casual, you come across as not taking customers seriously. If it’s too formal, it comes across as clinical and stiff. There is a balance to strike.”  

To ensure you’re on the right track, consider establishing feedback loops where users can rate the agent’s effectiveness. This is one way the agent can continually learn and adapt.
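A feedback loop can be as lightweight as recording a per-conversation rating and reviewing the low-rated sessions. A minimal sketch, with assumed field names and a simple log file:

```python
# Minimal sketch (hypothetical): log one rating per conversation so
# low-rated sessions can be reviewed and used to improve the agent.

import json
from datetime import datetime, timezone

def record_feedback(conversation_id: str, rating: int, comment: str = "") -> None:
    """Append a feedback event (rating 1-5) to a JSONL log."""
    event = {
        "conversation_id": conversation_id,
        "rating": rating,              # 1 (poor) to 5 (great)
        "comment": comment,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open("agent_feedback.jsonl", "a") as f:
        f.write(json.dumps(event) + "\n")

record_feedback("conv-123", 2, "Didn't answer my billing question")
```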

Productivity over personality

Agents represent the next major leap in AI. They’re here to enhance productivity, not impersonate people. If you get too caught up in making them seem human, you risk disappointing your users and distracting them from the agent’s real potential. Agents don’t need names or personalities; they just need capabilities that drive business outcomes. 
