AI agents can do amazing things. They can handle complex returns for retail companies. They can help determine whether patients are eligible for clinical trials. They can also help homebuyers sort through loan options, a process that might otherwise be complex and confusing.
As impressive as artificial intelligence (AI) agents are, though, there are times when you shouldn’t use them — and knowing when to hold off (or say no) is just as important as knowing when to proceed. How do you make the right decision? We’ve put together a list of when not to use an agent. Let’s take a look.
1. When your company’s not ready yet
In many ways, getting ready to use an AI agent is like planting a garden. If you want a healthy, productive garden, you need to clear the weeds, amend the soil, and determine the right time to plant.
Same with an agent. “I have conversations with customers all the time,” said Kyle Mey, industry advisor, strategy and innovation in healthcare and life sciences, at Salesforce. “They say, ‘I want AI, and I’m going to hook it right into my enterprise data warehouse or my data lake,’ but they don’t have any predefined workflows or rules.”
Companies assume an AI agent can handle a situation like this. “The reality is, the agent can’t,” Mey said. “It’s not going to fix things for you. You have to instruct the agent meticulously on what it needs to do and why.”
Do anything less, and you might set an agent up for failure. So, do your prep work. Define your workflow. Set up the rules to guide that workflow. And have a clear idea of what business objectives you want to achieve.
2. When your data needs cleaning and organizing
One of the most important steps to prepare for an AI agent is to clean up and organize your data. After all, your agent will only be as good as the data it’s trained on.
Your customer contacts or sales data, for example, may be out of date. Other data may be scattered across disconnected platforms. Or your company may need more time to collect the data required to train an agent.
When agents are trained on poor-quality data, they can deliver inaccurate results. They might, for instance, get a preferred customer’s status wrong when booking a flight. Or they might use the wrong Social Security number when helping someone apply for government benefits.
But with good, solid data, your agent has a better chance at success. One advantage of Agentforce, the agentic AI layer of the Salesforce platform, is that it’s already grounded in your customer data. Data Cloud, Salesforce’s hyperscale data engine, is also integrated with the platform. It provides a unified data foundation, ensuring that agents have access to accurate, real-time information that connects with your business objectives. The upshot? When your agents draw from data that’s well organized and on a deeply unified platform, they’re more likely to excel at their job.
Map out your first Agentforce use case with these 8 questions
You need a solid approach to ensure AI agents will work for your team and customers. This game plan maps out exactly how Agentforce can improve your workflows, eliminate friction, and drive real results.
3. When your guardrails aren’t strong enough
During the setup process, you need to give your AI agent clear rules about what it can and cannot do. Guardrails also make sure the agent has access only to the data it needs to do its job — and not to sensitive data that should be protected.
Without such limits, your AI may act in inappropriate or unethical ways. It might reveal personal information when it shouldn’t, or use such information to make biased decisions. Examples abound, such as the large tech company that had to scrap its AI recruiting tool because it discriminated against women, or the AI system for diagnosing skin cancer that yielded less accurate results for people with darker skin.
Just to be clear: The AI didn’t create this bias. It learned bias from the data it was trained on. But if your AI is perpetuating bias, don’t implement agentic AI until you fix the problem.
The Einstein Trust Layer has several features, including personally identifiable information (PII) detection and masking, that protect your data and support its responsible and ethical use. Agentforce automatically detects when sensitive data — such as a person’s health status, income, or Social Security number — is being shared, and masks this data so that outside parties can’t see it.
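To make the concept concrete, here is a minimal sketch of PII detection and masking. This is not how the Einstein Trust Layer is implemented — the patterns, function name, and placeholder tokens are all hypothetical, and a production system would use far more robust detection than simple regular expressions:

```python
import re

# Hypothetical patterns for illustration only; real PII detection
# covers many more identifiers and edge cases.
PII_PATTERNS = {
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII with placeholder tokens before the text
    leaves your environment (e.g., is sent to an outside model)."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} MASKED]", text)
    return text

prompt = "Applicant Jane Doe, SSN 123-45-6789, email jane@example.com"
print(mask_pii(prompt))
# → Applicant Jane Doe, SSN [SSN MASKED], email [EMAIL MASKED]
```

The key idea is that masking happens before the prompt reaches the model, so sensitive values never leave your control.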
The Trust Layer adds another layer of data protection through zero data retention, which means the AI doesn’t store any of the data included in the prompt or the output. The Trust Layer also includes toxicity detection.
Once you’ve put guardrails in place, check their strength with adversarial testing, which lets you test your agent for bias or inappropriate behaviors. A healthcare provider, for example, could ask its AI agent to reveal information such as a patient’s health status. If the agent reveals this when it shouldn’t — such as when it’s in violation of the Health Insurance Portability and Accountability Act (HIPAA) — you need to adjust your guardrails.
To test for bias, a financial institution could apply for a mortgage under the names Jose Smith, Joe Smith, and Janet Smith, while keeping all other application information the same. “If your agent gives different recommendations for each application,” said Kathy Baxter, Salesforce’s vice president and principal architect of responsible AI and tech, “that’s a problem.” The solution: Keep refining your guardrails.
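The name-swap test Baxter describes can be sketched in a few lines. Everything here is hypothetical — `get_recommendation` stands in for whatever interface your deployed agent exposes, and the stand-in agent exists only so the example runs:

```python
def bias_probe(get_recommendation, base_application, names):
    """Submit identical applications under different names and flag
    any divergence in the agent's recommendations."""
    results = {}
    for name in names:
        application = dict(base_application, applicant_name=name)
        results[name] = get_recommendation(application)
    # All recommendations should be identical; any difference is a red flag.
    passed = len(set(results.values())) == 1
    return passed, results

# Stand-in agent that (correctly) ignores the applicant's name.
def fake_agent(app):
    return "approve" if app["credit_score"] >= 700 else "refer"

base = {"credit_score": 720, "income": 85_000, "loan_amount": 300_000}
passed, details = bias_probe(fake_agent, base,
                             ["Jose Smith", "Joe Smith", "Janet Smith"])
print(passed)  # → True: identical applications got identical outcomes
```

In practice you would run probes like this across many attribute swaps (names, zip codes, ages) and treat any divergence as a signal to refine your guardrails before going live.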
4. When someone’s health is at stake
AI agents are poised to give a big boost to the healthcare industry, which faces severe labor shortages and clinician burnout. Some companies, like Adobe Population Health, are already using agentic AI to reduce time spent on mundane tasks.
The company, which proactively manages people’s healthcare, recently adopted Agentforce to help its nurses summarize patient charts. In the past, nurses needed about 40 minutes to write these summaries. With Agentforce, they now need 75% less time. That gives nurses more time to do the work they love, listening to and caring for patients.
Agents can also help nurses and doctors quickly review patient histories and create automated summaries of a visit. But there are times when agents should not be used in healthcare, such as treating a patient, giving medical advice, or determining someone’s eligibility for care.
“Right now, AI is not accurate enough and it’s definitely not in Agentforce’s purview to give medical advice,” said Kaitlyn Castañeira Gizzi, director, product marketing, at Salesforce. “We’re talking about life or death; 99.9% accuracy doesn’t work in this industry.”
Likewise, insurance providers shouldn’t use agents to determine whether someone should receive care. “There are certain guidelines an agent can follow and potentially make a recommendation,” Gizzi said, “but it shouldn’t be denying care.”
When it comes to people’s health — and where the outcome can carry serious risks — there should always be a human in the loop.
5. When it could affect a person’s economic opportunities
Autonomous agents also shouldn’t be allowed to decide whether a person can be hired for a job, qualify for a loan, or access social benefits such as Medicare or Social Security.
“Anytime there is a decision about opportunities or access to benefits,” Baxter said, “you want to have a human reviewing it.”
AI isn’t yet capable of making these decisions in a trustworthy and unbiased way. If you need proof, consider the company whose AI automatically rejected female applicants 55 or older and male applicants over 60. Or the study from Lehigh University, which found that large language models (LLMs) consistently recommended denying more loans and charging higher interest rates to Black applicants than otherwise identical white ones.
When lives and livelihoods are on the line, you always need a human in the loop. But AI can still function as an assistant. “An agent can be fantastic in helping you save time,” Baxter said, “by reading through financial statements, policies, or legal documents very quickly and summarizing them for you in plain English.”
6. When it violates local AI regulations
As AI evolves, so do the laws that regulate it. The European Union, for example, passed the General Data Protection Regulation in 2018 and the EU Artificial Intelligence Act in 2024. The former protects people’s personal data, while the latter requires that AI be safe, transparent, traceable, and nondiscriminatory.
Under these regulations, it’s illegal to use AI in manipulative ways or to run a social scoring system — a practice of monitoring and rating people’s behavior that is common in China.
A handful of states in the U.S. have also enacted AI regulations. “In California and a number of other states, you cannot use AI to pretend to be a human,” Baxter said. “You need to be transparent with people that they’re interacting with AI.”
In New York City, employers must notify job applicants if they’re using AI to help make hiring or promotion decisions. Companies are also required to conduct and post independent audits, testing their technology for gender and racial bias.
So, before you employ an agent, make sure you’re in compliance with state and national regulations.
We’re all still learning when it comes to AI
If you’re overwhelmed by AI, welcome to the club. AI, generative AI, and AI agents are evolving so rapidly that everyone is still learning how best to use them.
Agents have already shown that they’re whizzes at automating tasks, streamlining workflows, providing customer service, and analyzing large data sets. The trick is learning how best to deploy them — and knowing when you shouldn’t.
D-I-Why? Deploy AI agents faster with Agentforce
Building and deploying autonomous AI agents takes time. Agentforce, the agentic layer of the Salesforce platform, can reduce time to market by 16x compared to DIY approaches — with 70% greater accuracy, according to a new Valoir report.