How LLMs Are Shaping the Way Identity Buyers Make Decisions

We recently sat down with Evan Bailyn, CEO of First Page Sage, the leading SaaS SEO agency in the U.S. With clients such as Salesforce, Verisign, Grammarly, and NerdWallet, Evan’s agency has long been a trailblazer in organic search visibility and, more recently, AI-powered discoverability.

In early 2023, shortly after the debut of ChatGPT, Evan launched the Generative Engine Optimization (GEO) discipline—an evolution of SEO designed specifically for the generative AI era. He also led the first large-scale study of ChatGPT’s algorithm, analyzing how LLMs choose which companies to surface in their answers. GEO builds on that research to help brands strategically shape how they appear in AI-generated responses.

In our conversation, Evan explained how LLMs are beginning to influence purchasing decisions in regulated industries like financial services, and what identity buyers—fraud managers, compliance leads, and procurement teams—should know when using these tools to evaluate vendors.

Microblink: More and more buyers are turning to tools like ChatGPT or Perplexity to help research identity verification solutions. From your vantage point, how should compliance or fraud professionals think about these LLM summaries?

Evan: With caution, but also with interest. LLMs are incredibly fast and often directionally accurate. If you ask, “Which KYC vendors have low false negatives?” or “Which solutions are best for biometric verification?” you’ll likely get a coherent, seemingly helpful answer. But what the model gives you is a synthesis of publicly available information—so it only knows what it’s been taught. If the data environment around a company is sparse or inconsistent, even a best-in-class vendor might be misrepresented or left out entirely.

That’s why I recommend treating LLM responses as a first-look tool, not a decision-maker. They can speed up vendor discovery and surface useful differentiators, but their accuracy depends entirely on how well companies have educated the ecosystem about what they do.

Microblink: From Microblink’s side, we’ve invested in publishing technical content, being transparent about our verification flow, and sharing performance data when possible. Does that kind of effort actually influence what the models learn?

Evan: It absolutely does. Generative models learn through exposure and repetition across trusted domains. So if you’re publishing deep technical explainers, being cited in research papers, or mentioned in compliance roundups, that builds the association between your brand and specific concepts like liveness detection or fraud signal precision. Over time, the model begins to treat your brand as authoritative in those areas.

The companies that perform best in LLM responses tend to have a coherent semantic footprint. That means their messaging is clear and aligned across their own content, third-party coverage, and expert commentary. LLMs pick up on that alignment and reflect it in their answers.

Microblink: How measurable is this kind of influence? Can a company tell if its efforts are actually working?

Evan: Yes, and that’s what makes GEO so exciting. Unlike traditional SEO, where you track keyword rankings, here you’re testing prompts. You ask the model dozens of commercially relevant questions, log whether your brand is mentioned, and analyze the framing. Are you presented as an innovator? As a vendor among many? Are your differentiators included?

We build baselines—say, your brand shows up in 3 out of 50 prompts—and then aim to lift that number through targeted content and strategic publishing. It’s a closed feedback loop: the model is the test environment and the outcome.
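The baseline measurement Evan describes can be sketched in a few lines. This is an illustrative example, not First Page Sage's actual tooling: the responses are stubbed with placeholder text, and `mention_rate` is a hypothetical helper name. In practice, the responses would be collected from an LLM API across a curated prompt set.

```python
# Sketch of a GEO baseline: run commercially relevant prompts,
# log whether the brand appears in each response, and compute
# a mention rate to track over time. Responses are stubbed here.

def mention_rate(brand: str, responses: list[str]) -> float:
    """Fraction of responses mentioning the brand (case-insensitive)."""
    if not responses:
        return 0.0
    hits = sum(1 for r in responses if brand.lower() in r.lower())
    return hits / len(responses)

# Example matching the article's figure: 3 mentions out of 50 prompts.
responses = ["Top KYC vendors include Acme and ExampleCo."] * 47 + \
            ["For biometric verification, consider Microblink."] * 3
baseline = mention_rate("Microblink", responses)
print(f"Baseline mention rate: {baseline:.0%}")  # prints "Baseline mention rate: 6%"
```

Re-running the same prompt set after a publishing push, and comparing against this baseline, closes the feedback loop Evan refers to.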

Microblink: What advice would you give to a fraud manager or compliance lead who wants to make responsible use of generative AI during vendor selection?

Evan: First, know the limits of the tool. Treat it as a supplementary channel, not a substitute for due diligence. Then, look for vendors that have put effort into transparency. If a company is investing in explainability—publishing their false positive rates, detailing their data protection methods, sharing case studies—that’s a good sign both for model trust and for your actual selection process.

Also, cross-reference what the model says with what you find in independent sources. If the answer includes Microblink, for instance, and you’re intrigued, dig into where that information came from. The best buyers are combining AI speed with human verification.

Microblink: What’s one takeaway you’d want identity buyers to leave with?

Evan: Generative AI is changing how research is done. That includes how buyers evaluate complex, regulated technologies like identity verification. If you’re a buyer, your job isn’t just to listen to the model—it’s to understand why it answered the way it did. That awareness gives you a huge edge in separating marketing from truth.

July 3, 2025

