
The Rise of AI-Assisted Fraud in UX Research Recruiting

  • Writer: Adam Marguiles
  • Nov 21
  • 5 min read


I recently wrapped up a project for a client evaluating several email marketing platforms. At first, recruiting participants felt like a dream. We launched a screener survey and were flooded with responses within hours. Finally, an easy recruit, we thought. So many people were eager to speak with us and share their experiences. We hit our recruiting quota in record time, paused outreach, and celebrated the strong turnout. 


That excitement did not last long.


The first few interviews were great and gave us the kind of rich, detailed conversations we needed. Then a pattern started to emerge. Participants with perfectly tailored backgrounds at ideal companies began joining calls with their cameras off. That alone was not a dealbreaker, because plenty of people have privacy concerns, but what followed raised flags.


When we asked if they could turn on their camera, they suddenly had connection issues. When we asked if they could share their screen to walk us through their email setup, they were not allowed to show company information. That is not uncommon, so we moved on. The moment we asked them to describe their role or talk through how they used the platform, everything fell apart.


Their responses sounded like a job description. They were polished, generic, and vague enough to apply to almost anyone. Follow-up questions were met with long pauses, then another surface-level answer. It became clear that these people had never worked in email marketing. They had never opened the tool they claimed to be experts in. They were faking it for the incentive money.


By the end of the first week, the reality was obvious. The majority of our qualified participants were frauds.


We had stopped recruiting because we believed we were done. That meant a week of interviews wasted, a calendar for the following week full of more suspected fraudsters, and a tight project timeline we were now behind on. We had to restart recruiting from the beginning. It was a gut punch and a wake-up call.


It quickly became clear that this was not just bad luck. Something had shifted. AI tools like ChatGPT have made it possible for anyone to sound like an expert in a field they know nothing about. With many UX research studies offering attractive incentives, a new kind of participant fraud is growing fast. If researchers do not adapt, they risk wasting time, losing budget, and making product decisions based on false insights. Here is what is happening, how these fraudsters are doing it, and what UX researchers can do to protect their studies.



How AI is Helping Fraudsters Fake Their Way Into Studies

AI has made it incredibly easy for people to misrepresent themselves in order to qualify for paid research. Before AI, faking expertise required preparation and industry knowledge. Today, someone can ask a model to write a summary of a typical day for a Senior Email Marketing Manager, memorize a few lines, and instantly appear credible.

Here is how fraudsters are using AI to get through the process.

Before the interview

  • They use AI to craft the perfect screener survey answers.

  • They prompt tools with questions such as: What tools would a CRM Manager use day to day?

  • They ask for realistic-sounding work responsibilities, achievements, and terminology, then repeat it back during the interview.

During the interview

  • They keep the camera off so they can refer to AI generated responses in real time.

  • They type follow-up questions into AI during the call and read back the generated answers.

  • Their answers are often general because they have no personal context to draw from.

AI has essentially given fraudsters a live script that updates with each question. If you are not prepared, it can be surprisingly convincing at first.



Patterns and Red Flags to Watch For

You never want to enter a user interview assuming someone is lying. At the same time, there are consistent signs that often indicate the participant is not who they claim to be.

Behavioral Red Flags

  • Camera off throughout the interview, often with an excuse involving internet issues.

  • Very slow response times, often caused by waiting for AI to generate answers.

  • Answers sound like job descriptions or textbook explanations.

  • No personal stories, examples, shortcuts, or opinions.

  • Struggles with follow-up questions or contradicts earlier statements.

Profile Red Flags

  • Claims to work at a well known company in a role that almost always requires a LinkedIn presence, yet no profile exists.

  • LinkedIn exists but is very new, empty, or inconsistent with their story.

No LinkedIn profile for someone who claims to be a Manager at Shell is a red flag. No LinkedIn profile for someone who runs a small Etsy shop makes sense. Context matters.



How to Confirm Your Suspicion Without Being Confrontational

You can end an interview at any time, but it can feel uncomfortable to cut someone off if you are not fully sure. Sometimes it is worth asking one or two grounding questions that AI struggles to fake. These questions pull the participant out of memorized talking points and into real lived experience.

Here are a few approaches that help confirm your suspicion.

Ask for a recent personal experience

  • Can you tell me about the last time you used this tool in the past two weeks? What were you trying to do and what happened?

Real users immediately recall specific details. Fraudsters often cannot.

Ask for a personal hack or shortcut

  • What is one workaround or hack you use in this tool that saves you time?

AI rarely generates imperfect or very human answers. It tends to provide best practice advice rather than real world behavior.

Give choices that only practitioners understand

For example, if you are interviewing e-commerce marketers, you could ask:

  • When you run A/B tests, which part usually slows the process down the most for you?
    A: Getting enough traffic to reach significance
    B: Getting design or development resources
    C: Leadership asking to stop the test early

A real user will pick one instantly and share a story. A fraudster will guess or stall.



Why This Matters for UX Research

Participant fraud in research is not a small issue. It affects more than incentives and wasted hours. Poor quality participants lead to inaccurate insights. That means teams make product decisions based on false data. It slows down research cycles, damages trust, and can cause companies to invest in the wrong features or strategies.

If we want research to remain a trusted input for product decision making, we need to adapt to this new reality.



A Positive Path Forward

This problem is not going away, but it is manageable. Researchers can protect studies with better screeners, light identity checks, smarter interview tactics, and tools that help validate participants. Most importantly, this is a reminder that human centered research requires real people with real experiences. AI cannot replace that.
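On the screener side, some of this vetting can be partially automated. As a minimal sketch (not something from this project; the thresholds and field names are illustrative assumptions), a researcher who can export screener responses could flag "speeders" who finish implausibly fast and near-duplicate open-ended answers, since AI-drafted answers from multiple applicants often look suspiciously alike:

```python
from difflib import SequenceMatcher

# Illustrative thresholds -- tune these for your own screener.
SPEED_FLOOR_SECONDS = 60     # finishing faster than this suggests careless or scripted answers
SIMILARITY_CEILING = 0.85    # open-ended answers this similar to each other are suspicious

def flag_responses(rows):
    """Return the set of response ids that trip at least one red flag.

    Each row is a dict with 'id', 'seconds_to_complete', and 'open_answer'
    (hypothetical field names for an exported screener).
    """
    flagged = set()

    # Speeder check: real participants take time on open-ended questions.
    for row in rows:
        if float(row["seconds_to_complete"]) < SPEED_FLOOR_SECONDS:
            flagged.add(row["id"])

    # Near-duplicate check: compare every pair of open-ended answers.
    for i, a in enumerate(rows):
        for b in rows[i + 1:]:
            ratio = SequenceMatcher(None, a["open_answer"].lower(),
                                    b["open_answer"].lower()).ratio()
            if ratio > SIMILARITY_CEILING:
                flagged.update({a["id"], b["id"]})

    return flagged
```

A check like this only surfaces candidates for manual review; it is a filter, not proof of fraud on its own.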

By staying aware and adjusting our methods, we can keep our studies honest, our data high quality, and our research impactful. The goal is not to treat participants like criminals. The goal is to ensure we are speaking to the right people so we can build products that truly meet user needs.




