Client-rendered React SPAs are killing your SEO and most agencies don't know it

Why public-facing React SPAs can disappear from search engines and how to fix it without a rewrite

We built a 198-page client-rendered React SPA with perfect Lighthouse scores and every SEO best practice we knew. Google indexed 20 pages in six months. This is the story of what went wrong, why HTML still matters in 2026, and the build-time prerendering fix that solved it.

14 min read | March 31, 2026
SEO · React · Web Development

The page Google never saw

We shipped 198 pages. Google indexed 20.

Not because the content was thin. Not because of a penalty. Not because we forgot a sitemap. Google simply could not see our website.

For six months, Google Search Console told us the same thing: 147 pages stuck in "Discovered - currently not indexed," another 13 labeled "Crawled - currently not indexed," and just 20 pages actually appearing in search results.

We had structured data. We had canonical tags. We had react-helmet-async injecting unique titles and descriptions on every page. We had a sitemap, an RSS feed, internal linking, and Open Graph tags. We checked every box on every SEO checklist we could find.

None of it mattered. Here is what we missed, why it happened, and exactly how we fixed it.

This is not a case against React itself. Next.js is React. Astro can render React components too. The problem is public-facing, client-rendered React SPAs that send almost no meaningful HTML on the first request.

What Google sees vs. what you see

Open your React SPA in a browser. You see a polished website with animations, images, blog articles, service pages. Everything looks perfect.

Now right-click, View Page Source.

You see this:

<!doctype html>
<html lang="en">
<head>
  <title>Your Full-Stack Digital Partner for Growth</title>
</head>
<body>
  <div id="root"></div>
  <script type="module" src="/src/main.tsx"></script>
</body>
</html>

An empty div and a JavaScript file. That is the entirety of what your server sends to every visitor, every crawler, every search engine, on every single URL.

When a regular browser loads this, JavaScript executes, React mounts, the router reads the URL, the correct component renders, and react-helmet-async swaps in the right title and meta tags. It all happens in milliseconds. You never notice.

Googlebot works differently.

How Googlebot processes your React site

Google crawls the web in two phases:

Phase 1: Crawl. Googlebot fetches the raw HTML. It reads the <title>, <meta> tags, headings, text content, and links. If the page is static HTML, this is enough. Google indexes it and moves on.

Phase 2: Render. If the page relies on JavaScript, Google adds it to a rendering queue. A separate service (the Web Rendering Service) eventually spins up a headless Chromium instance, executes the JavaScript, and extracts the rendered DOM. Google's own JavaScript SEO basics and Page indexing report documentation both point to this crawl-then-render split once you know where to look.

That rendering queue is the problem. Google processes billions of pages. JavaScript rendering is expensive. Your 198-page agency site is competing for rendering resources with every other JavaScript-heavy site on the internet.

The result for a React SPA:

  1. Googlebot fetches /blog/your-article
  2. Your server returns the same empty <div id="root"></div> for every URL
  3. Google sees no content, no unique title, no unique description
  4. The page enters the rendering queue
  5. Days, weeks, or months pass before rendering happens
  6. When rendering does happen, Google may still deprioritize the page because the initial crawl found nothing

This is why our Search Console showed "Discovered - currently not indexed" for 147 pages. Google found the URLs in our sitemap, looked at the HTML, saw an empty shell, and put them in a queue it was in no hurry to process.
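
You can sanity-check what that first crawl sees with a few lines of Node against a saved raw HTML response. This is an illustrative sketch, not part of our prerender script; the `auditRawHtml` helper and its 50-character threshold are assumptions:

```javascript
// Rough audit of a raw HTML string (what a crawler gets before any JS runs).
// The 50-character threshold for "empty shell" is an arbitrary illustration.
function auditRawHtml(html) {
  const title = (html.match(/<title[^>]*>([^<]*)<\/title>/i) || [])[1] || "";
  const body = (html.match(/<body[\s\S]*?>([\s\S]*)<\/body>/i) || [])[1] || "";
  // Drop scripts, strip tags, collapse whitespace: what text is really there?
  const text = body
    .replace(/<script[\s\S]*?<\/script>/gi, "")
    .replace(/<[^>]+>/g, " ")
    .replace(/\s+/g, " ")
    .trim();
  return {
    title,
    hasDescription: /<meta[^>]+name=["']description["']/i.test(html),
    visibleTextChars: text.length,
    looksLikeEmptyShell: text.length < 50,
  };
}
```

Run it against the view-source output of a few key pages; an SPA shell like the one above reports zero visible characters.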

Why HTML still matters in 2026

JavaScript frameworks exist because developers need them. Single-page applications deliver smooth transitions, state management, and fast in-app navigation. These are real benefits for the user experience.

But search engines, social media scrapers, many AI crawlers, and accessibility tools all process HTML first. Many never execute JavaScript at all.

When you share a link on LinkedIn, Slack, or iMessage, the preview card is generated from the raw HTML. If your og:title and og:description are injected by JavaScript, those services will show either nothing or your default fallback meta tags. The same link shared from a static HTML page shows the correct title, description, and image every time.
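
For those preview cards to work, tags like these must already be present in the HTML the server sends, not injected later by JavaScript. The values below are placeholders:

```html
<!-- Per-page, in the raw HTML response; all values are illustrative -->
<meta property="og:title" content="Your page-specific title" />
<meta property="og:description" content="One-sentence summary of this page" />
<meta property="og:image" content="https://example.com/og/this-page.png" />
<meta name="twitter:card" content="summary_large_image" />
```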

Accessibility readers process the initial HTML. RSS readers process the initial HTML. Prerender services process the initial HTML. Google's initial crawl processes the initial HTML.

The raw HTML in that first response, before any JavaScript runs, determines how the entire internet sees your site. If that HTML is empty, your site is invisible to most of the systems that drive traffic and discoverability.

This is not an argument against React or JavaScript frameworks. It is an argument for making sure your HTML is complete before JavaScript enhances it. That same assumption sits underneath our local business website development guide and our technical SEO optimization guide: crawlers have to receive meaningful HTML before any of the rest of your SEO work can matter.

Three things that silently killed our SEO

Beyond the empty HTML shell, we found three patterns in our codebase that compounded the problem.

1. Deferred rendering with IntersectionObserver

Every section below the hero was wrapped in a DeferredSection component. It used IntersectionObserver to only render content when the user scrolled near it — a good performance optimization for real users.

import { Suspense, useEffect, useRef, useState } from "react";

export default function DeferredSection({ children }) {
  const placeholderRef = useRef(null);
  const [shouldRender, setShouldRender] = useState(false);

  useEffect(() => {
    const observer = new IntersectionObserver(([entry]) => {
      if (entry.isIntersecting) setShouldRender(true);
    });
    observer.observe(placeholderRef.current);
    return () => observer.disconnect(); // clean up on unmount
  }, []);

  if (!shouldRender) {
    return <div ref={placeholderRef} style={{ minHeight: 600 }} />;
  }
  return <Suspense fallback={<Placeholder />}>{children}</Suspense>;
}

Google's renderer uses a fixed viewport. It does not scroll. Every section wrapped in this pattern was invisible to the crawler — replaced by empty placeholder divs. Google saw a page with a hero section and nothing else.

2. Scroll-triggered animations starting at opacity: 0

Our ScrollReveal component set every child element to opacity: 0 with a CSS transform. An IntersectionObserver would animate them to visible when scrolled into view.

<Tag style={{ opacity: 0, transform: 'translateY(30px)' }}>
  {children}
</Tag>

Even if Google's renderer somehow triggered these elements, the initial inline style of opacity: 0 meant the content was technically invisible. That can cause crawlers to treat the page as incomplete or de-emphasize the hidden content.

3. CSS keyframe animations starting from opacity: 0

The hero itself used CSS animations with animation-fill-mode: both:

@keyframes heroSlideDown {
  from { opacity: 0; transform: translateY(-40px); }
  to { opacity: 1; transform: translateY(0); }
}

CSS animations with fill-mode: both start from the from state. Before the animation plays, the element is invisible. If the rendering service captures the page before animations complete, the content appears hidden.
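
One blunt mitigation for all three opacity patterns is a global CSS override keyed to a prerender flag on the root element. The `html.prerender` class here is an assumption; any global flag the capture step can set would work:

```css
/* Assumed: the prerender pass adds class="prerender" to <html> before capture.
   This forces every animated element straight to its final, visible state. */
html.prerender *,
html.prerender *::before,
html.prerender *::after {
  animation: none !important;
  transition: none !important;
  opacity: 1 !important;
  transform: none !important;
}
```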

Each of these patterns is a reasonable performance or UX optimization on its own. Combined with a client-rendered SPA, they created a site that was nearly invisible to search engines.

How we fixed it: build-time prerendering

The solution was to generate static HTML for every route at build time. After vite build produces the JavaScript bundle, a post-build script:

  1. Reads every URL from the generated sitemap
  2. Spins up a local static file server pointing at the dist/ folder
  3. Launches a headless browser (Chromium)
  4. Visits each route, waits for React to render, and captures the full HTML
  5. Writes the rendered HTML back into dist/ so the hosting platform serves real content

vite build → dist/ (empty SPA shell)
prerender  → dist/ (198 static HTML pages with full content)
deploy     → each URL serves complete HTML + JS for interactivity

The key details that made this work:

Monkey-patching IntersectionObserver during prerendering. The prerender script replaces the browser's IntersectionObserver with a version that immediately reports every element as visible. This forces DeferredSection to render all children without scrolling.

await page.evaluateOnNewDocument(() => {
  window.__PRERENDER = true;
  window.IntersectionObserver = class {
    constructor(callback) { this._cb = callback; }
    observe(el) {
      this._cb([{ isIntersecting: true, target: el }], this);
    }
    unobserve() {}
    disconnect() {}
  };
});

Disabling opacity: 0 animations during prerendering. Components like ScrollReveal check a global flag and skip animation styles when prerendering:

const isPrerendering =
  typeof window !== "undefined" && window.__PRERENDER === true;

if (isPrerendering) {
  return <Tag className={className}>{children}</Tag>;
}
// Otherwise, render with opacity: 0 and scroll animation

Using networkidle0 wait strategy. The script waits until all network requests settle before capturing HTML. This ensures lazy-loaded chunks and data fetches complete.

Reusable browser tab pool. Instead of opening and closing a tab per page, the script creates 10 tabs upfront and reuses them across all 198 URLs. This cut prerender time significantly.
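
The tab pool is essentially a bounded-concurrency work queue. A framework-agnostic sketch follows; `runWithPool` is an illustrative name, and in the real script each lane would hold a reusable Puppeteer page, which is not shown here:

```javascript
// Processes all items with at most `size` workers running at once.
// Each lane pulls the next unclaimed index; JS's single thread makes
// the `next++` claim safe (no await between read and increment).
async function runWithPool(items, size, worker) {
  const results = new Array(items.length);
  let next = 0;
  async function lane() {
    while (next < items.length) {
      const i = next++;
      results[i] = await worker(items[i], i);
    }
  }
  await Promise.all(
    Array.from({ length: Math.min(size, items.length) }, lane)
  );
  return results;
}
```

Called with the sitemap URLs and a worker that renders one page, ten lanes keep ten tabs busy until all 198 routes are captured.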

Running Chromium on Vercel

If you deploy to Vercel, Puppeteer will not work out of the box during the build step. Vercel's build containers lack the system libraries Chromium needs (libnspr4.so, libatk-1.0.so, and others).

The fix is to use @sparticuz/chromium, a Chromium binary built for restricted environments. It bundles all required shared libraries and runs without root access.

async function launchBrowser() {
  if (process.env.VERCEL || process.env.CI) {
    const chromium = (await import("@sparticuz/chromium")).default;
    const puppeteerCore = (await import("puppeteer-core")).default;
    return puppeteerCore.launch({
      args: chromium.args,
      executablePath: await chromium.executablePath(),
      headless: true,
    });
  }

  // Locally: regular puppeteer works fine
  const puppeteer = (await import("puppeteer")).default;
  return puppeteer.launch({ headless: true });
}

Install both as dev dependencies:

npm install --save-dev puppeteer-core @sparticuz/chromium

Keep puppeteer for local development. The script detects the environment and uses the right binary.
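
Wiring the prerender pass into the build is a one-line change to package.json; the `scripts/prerender.mjs` path is a placeholder for wherever your script lives:

```json
{
  "scripts": {
    "build": "vite build && node scripts/prerender.mjs"
  }
}
```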

When to use a React SPA vs. Next.js vs. Astro

If you are an agency building client websites, the framework choice determines how much SEO work you will do later. For marketing sites, service pages, and content hubs, this is a rendering decision before it is a framework preference, whether you are shipping a brochure site, a location-page strategy, or a larger web development services engagement. Here is a direct comparison:

React SPA (Vite, Create React App)

  • Sends empty HTML to every crawler
  • Requires build-time prerendering or a prerender service to be indexable
  • Every page shares the same initial HTML response
  • Good for: internal dashboards, admin panels, authenticated apps — anything that does not need search engine visibility

Next.js (App Router)

  • Server-renders HTML by default (React Server Components)
  • Each page sends complete, unique HTML on first request
  • Built-in generateStaticParams for static generation at build time
  • Built-in generateMetadata for per-page SEO tags
  • Good for: marketing sites, blogs, e-commerce, any public-facing site

Astro

  • Ships zero JavaScript by default
  • Generates pure static HTML at build time
  • Supports React/Vue/Svelte components as interactive "islands"
  • Extremely fast page loads and excellent crawlability
  • Good for: content sites, blogs, documentation, landing pages

Static HTML

  • The fastest and most crawlable option
  • No build step, no framework, no JavaScript dependency
  • Search engines, social platforms, and most HTML-first fetchers read it cleanly
  • Good for: simple marketing pages, landing pages, sites with infrequent content changes

The decision matrix is simple:

  • If the page needs to appear in search results → the initial HTML must contain the content
  • If the page is behind authentication → SPA is fine
  • If you need both interactivity and SEO → use Next.js or Astro with islands
  • If performance and crawlability are the top priorities → static HTML or Astro

We built our 198-page site as a React SPA because we are a development agency and React is our core stack. That decision cost us six months of search visibility. The prerendering fix works, but if we were starting over, we would use Next.js or Astro for every public-facing page.

Pre-launch SEO checklist for SPAs

If you already have a React SPA in production and cannot migrate to a different framework, run through this checklist:

HTML delivery

  • View Page Source on every key page. Is the content visible without JavaScript?
  • Does each page have a unique <title> and <meta name="description"> in the raw HTML?
  • Is your structured data (JSON-LD) present in the raw HTML?
  • Are Open Graph tags present in the raw HTML (not injected by JavaScript)?

Content visibility

  • Are any sections deferred behind IntersectionObserver? Google does not scroll.
  • Do any elements start with opacity: 0 or display: none and rely on JavaScript to become visible?
  • Do CSS animations use fill-mode: both or fill-mode: backwards with a starting opacity of 0?
  • Are React.lazy() chunks loading fast enough for the renderer to capture them?

Technical SEO

  • Does every page have a <link rel="canonical"> in the raw HTML?
  • Is your sitemap submitted in Search Console and returning a 200 status?
  • Does the www to non-www redirect (or vice versa) work correctly for all paths?
  • Are 404 pages returning actual 404 status codes (not 200 with "not found" text)?

Prerendering (if implementing)

  • Are pre-rendered HTML files being served before the SPA fallback rewrite?
  • Does the prerender script handle all dynamic routes (blog slugs, service areas)?
  • Is IntersectionObserver being patched during prerendering?
  • Are animation-related opacity: 0 styles being stripped during prerendering?
  • Does the Chromium binary work in your CI/CD build environment?
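
On Vercel, the first checklist item works because files in the build output are matched before rewrites run, so a catch-all SPA fallback only fires when no pre-rendered file exists for the URL. A minimal vercel.json sketch:

```json
{
  "rewrites": [
    { "source": "/(.*)", "destination": "/index.html" }
  ]
}
```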

What happened after the fix

After deploying pre-rendered HTML for all 198 pages:

  • Pages previously stuck in "Discovered - currently not indexed" began moving to "Indexed" within days of resubmitting the sitemap
  • Social media link previews started showing correct titles, descriptions, and images instead of generic fallback text
  • Lighthouse SEO score stayed at 100 with no console errors from hydration mismatches
  • Page load performance was unaffected — users still get the same SPA experience with smooth transitions and animations

The fix was not a migration. We kept our React + Vite + React Router stack. We added a build step that generates the HTML Google needs, without changing how the site works for actual visitors.

Frequently asked questions

Does Google execute JavaScript?

Yes. Google's Web Rendering Service (WRS) uses a headless Chromium instance to render JavaScript-heavy pages. But rendering is a separate, delayed step. Pages enter a queue that can take days to weeks. Static HTML is usually much easier for Google to process on the first crawl because the content is already there.

Is react-helmet-async enough for SEO?

No, not on its own. react-helmet-async injects <title> and <meta> tags into the DOM after JavaScript executes. The raw HTML still contains your default fallback tags. Social media crawlers and the initial Google crawl see the fallback, not the page-specific tags.

Can I use a prerender service instead of build-time prerendering?

Yes. Services like Prerender.io detect crawler user agents and serve cached rendered HTML. The trade-off is cost (after the free tier) and a dependency on an external service. Build-time prerendering is free and self-contained.

Will prerendering break my SPA routing?

No. The pre-rendered HTML files are served for the initial page load. Once JavaScript boots, React takes over and client-side routing works normally. For hosting platforms like Vercel, static files are served first; if no static file matches the URL, the SPA fallback rewrite kicks in.

Should I migrate from React SPA to Next.js?

If you are starting a new project that needs search visibility, yes. Next.js handles server rendering, metadata, and static generation out of the box. If you have an existing SPA with hundreds of components, prerendering at build time is a faster path to indexability than a full rewrite.

What about Bing, social media crawlers, and AI chatbots?

Bing's crawler has a more limited JavaScript rendering budget than Google. Social media platforms (LinkedIn, X, Facebook) and messaging apps almost never execute JavaScript — they rely entirely on raw HTML meta tags. Many AI crawlers and answer-engine fetchers also appear to rely heavily on raw HTML. Pre-rendered HTML helps with all of these, not just Google.

Need Help Implementing This?

Our team at Luminous Digital Visions specializes in SEO, web development, and digital marketing. Let us help you achieve your business goals.

Get Free Consultation