
The Practical SEO Stack I Use for My Next.js Blog
A hands-on guide to JSON-LD structured data, breadcrumb schema, RSS feeds, dynamic sitemaps, and Open Graph fallback images.

Most SEO articles on the internet explain what SEO is.
They talk about keywords, search engines, and general optimization advice.
But when you actually start building a content site, you quickly realize something:
Theory doesn’t help much when you need real implementation.
When I built my own blog with Next.js, I didn’t want a vague checklist. I wanted a practical SEO foundation that works automatically for every post I publish.
So instead of experimenting randomly, I implemented a simple but reliable stack that handles the core SEO infrastructure of a blog:
JSON-LD structured data (BlogPosting)
Breadcrumb schema (BreadcrumbList)
RSS feed (/rss.xml)
Dynamic sitemap (/sitemap.xml)
Open Graph fallback image
This article walks through the exact setup I use in production.
If you’re building a content site with Next.js, this gives you a clean, scalable SEO base.
Search engines don’t interpret pages the same way humans do.
A human reads a page and understands context immediately.
A search engine tries to infer meaning from structure and signals.
If your site lacks structured signals, the crawler has to guess.
Adding things like:
structured data
sitemap endpoints
consistent metadata
RSS feeds
reduces ambiguity.
The result is:
clearer crawl signals
more reliable indexing
better preview cards when shared
higher click-through rate from search and social
Think of this stack less as “SEO tricks” and more as content infrastructure.
For every blog post page, I inject two structured data schemas:
BlogPosting — tells search engines that the page is an article
BreadcrumbList — defines page hierarchy
This improves how search engines understand page structure.
Example implementation:
```tsx
// app/blog/[slug]/page.tsx
import Script from "next/script";

const blogUrl = `https://blogs.sagarsangwan.dev/blog/${currentBlog.slug}`;
const fallbackImage = "https://blogs.sagarsangwan.dev/images/blog-og.png";
const ogImage = currentBlog.coverImage ?? fallbackImage;

// Derive word count and read time from the post body (HTML stripped)
const plainText = currentBlog.content
  .replace(/<[^>]+>/g, " ")
  .replace(/\s+/g, " ")
  .trim();
const wordCount = plainText ? plainText.split(" ").length : 0;
const readTimeMinutes = Math.max(1, Math.ceil(wordCount / 200));

const jsonLd = {
  "@context": "https://schema.org",
  "@type": "BlogPosting",
  headline: currentBlog.title,
  description: currentBlog.description ?? currentBlog.title,
  image: ogImage,
  datePublished: currentBlog.createdAt,
  dateModified: currentBlog.updatedAt,
  wordCount,
  timeRequired: `PT${readTimeMinutes}M`,
  articleSection: currentBlog.tags[0]?.name ?? "Engineering",
  inLanguage: "en-US",
  author: {
    "@type": "Person",
    name: "Sagar Sangwan",
    url: "https://www.sagarsangwan.dev",
  },
  mainEntityOfPage: {
    "@type": "WebPage",
    "@id": blogUrl,
  },
  keywords: currentBlog.tags.map((t) => t.name).join(", "),
};

const breadcrumbJsonLd = {
  "@context": "https://schema.org",
  "@type": "BreadcrumbList",
  itemListElement: [
    { "@type": "ListItem", position: 1, name: "Home", item: "https://blogs.sagarsangwan.dev/" },
    { "@type": "ListItem", position: 2, name: "Blogs", item: "https://blogs.sagarsangwan.dev/blog" },
    { "@type": "ListItem", position: 3, name: currentBlog.title, item: blogUrl },
  ],
};

return (
  <>
    <Script
      id={`jsonld-blog-${currentBlog.slug}`}
      type="application/ld+json"
      dangerouslySetInnerHTML={{
        // Escape "<" so user content can never close the script tag early
        __html: JSON.stringify(jsonLd).replace(/</g, "\\u003c"),
      }}
    />
    <Script
      id={`jsonld-breadcrumb-${currentBlog.slug}`}
      type="application/ld+json"
      dangerouslySetInnerHTML={{
        __html: JSON.stringify(breadcrumbJsonLd).replace(/</g, "\\u003c"),
      }}
    />
  </>
);
```

This ensures every blog post provides structured metadata directly to search engines.
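The read-time calculation is easy to get subtly wrong (empty content, stray whitespace), so here is the same logic as a standalone function you can unit-test. It mirrors the inline code above; the function name is mine, not part of the original component.

```typescript
// Sketch of the read-time logic: strip HTML, count words, assume ~200 wpm.
// readTimeMinutes is a hypothetical helper name, not from the original post.
function readTimeMinutes(html: string): number {
  const plain = html
    .replace(/<[^>]+>/g, " ") // drop tags
    .replace(/\s+/g, " ") // collapse whitespace
    .trim();
  const words = plain ? plain.split(" ").length : 0;
  return Math.max(1, Math.ceil(words / 200)); // never report 0 minutes
}
```

Note that even an empty post reports one minute, which keeps the JSON-LD timeRequired value valid.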
Social previews break surprisingly often.
The most common cause:
A post doesn’t have a cover image.
If that happens and you don’t define a fallback, the preview card on LinkedIn, Twitter, or WhatsApp may appear broken.
To avoid this, every post in my blog has a fallback Open Graph image.
```ts
// app/blog/[slug]/page.tsx (inside generateMetadata)
const fallbackImage = "https://blogs.sagarsangwan.dev/images/blog-og.png";
const ogImage = blog.coverImage ?? fallbackImage;

return {
  title: blog.title,
  description: blog.description ?? blog.title,
  alternates: {
    canonical: `https://blogs.sagarsangwan.dev/blog/${blog.slug}`,
  },
  openGraph: {
    type: "article",
    url: `https://blogs.sagarsangwan.dev/blog/${blog.slug}`,
    title: blog.title,
    description: blog.description ?? blog.title,
    images: [{ url: ogImage, width: 1200, height: 630, alt: blog.title }],
  },
  twitter: {
    card: "summary_large_image",
    title: blog.title,
    description: blog.description ?? blog.title,
    images: [ogImage],
  },
};
```

This guarantees that every page has a valid preview card.
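One caveat with the ?? operator used here: it only falls back on null and undefined. If coverImage can ever be an empty string (a common CMS default), the fallback never kicks in and the card still breaks. A slightly stricter guard, sketched with a hypothetical helper name:

```typescript
// Hypothetical helper: falls back on null, undefined, AND empty/blank strings,
// which plain "coverImage ?? fallback" would let through.
function pickOgImage(coverImage: string | null | undefined, fallback: string): string {
  return coverImage && coverImage.trim() !== "" ? coverImage : fallback;
}
```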
RSS is often dismissed as outdated.
But in practice it remains a stable distribution channel.
RSS feeds allow:
readers to subscribe through feed apps
automation tools to ingest content
other platforms to track updates
Instead of static files, I generate RSS dynamically from the database.
```ts
// app/rss.xml/route.ts
import { NextResponse } from "next/server";
import { desc } from "drizzle-orm";
import { db } from "@/drizzle/src/db";
import { blog } from "@/drizzle/src/db/schema";

export const revalidate = 600;

const BASE_URL = "https://blogs.sagarsangwan.dev";

// Escape characters that would otherwise produce invalid XML
const escapeXml = (s: string) =>
  s.replace(/&/g, "&amp;").replace(/</g, "&lt;").replace(/>/g, "&gt;");

export async function GET() {
  const posts = await db
    .select({
      title: blog.title,
      slug: blog.slug,
      description: blog.description,
      createdAt: blog.createdAt,
      updatedAt: blog.updatedAt,
    })
    .from(blog)
    .orderBy(desc(blog.createdAt));

  const items = posts
    .map((post) => {
      const url = `${BASE_URL}/blog/${post.slug}`;
      return `
    <item>
      <title>${escapeXml(post.title)}</title>
      <link>${url}</link>
      <guid isPermaLink="true">${url}</guid>
      <description>${escapeXml(post.description ?? "")}</description>
      <pubDate>${new Date(post.createdAt).toUTCString()}</pubDate>
    </item>`;
    })
    .join("");

  const rss = `<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Sagar Sangwan's Blog</title>
    <link>${BASE_URL}</link>
    <description>Web development and engineering notes</description>${items}
  </channel>
</rss>`;

  return new NextResponse(rss, {
    headers: { "Content-Type": "application/rss+xml" },
  });
}
```

This automatically updates whenever new content is added.
Search engines rely heavily on sitemaps.
Instead of manually maintaining one, Next.js can generate it dynamically from the database.
```ts
// app/sitemap.ts
import { MetadataRoute } from "next";
import { db } from "@/drizzle/src/db";
import { blog } from "@/drizzle/src/db/schema";

const BASE_URL = "https://blogs.sagarsangwan.dev";

export default async function sitemap(): Promise<MetadataRoute.Sitemap> {
  const blogs = await db
    .select({
      slug: blog.slug,
      updatedAt: blog.updatedAt,
    })
    .from(blog);

  const blogUrls = blogs.map((b) => ({
    url: `${BASE_URL}/blog/${b.slug}`,
    lastModified: b.updatedAt ? new Date(b.updatedAt) : new Date(),
    // "as const" keeps the literal type that MetadataRoute.Sitemap expects
    changeFrequency: "weekly" as const,
    priority: 0.8,
  }));

  return [
    {
      url: BASE_URL,
      lastModified: new Date(),
      changeFrequency: "daily" as const,
      priority: 1,
    },
    ...blogUrls,
  ];
}
```

This ensures every new article is automatically discoverable by crawlers.
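The mapping step in the sitemap is pure, so it can be pulled out and unit-tested separately from the database query. A sketch under that assumption (the helper name and PostRow type are mine, not from the original file):

```typescript
type PostRow = { slug: string; updatedAt: string | Date | null };

// Hypothetical pure helper mirroring the sitemap's map step: builds entries
// for blog posts, falling back to "now" when updatedAt is missing.
function buildBlogEntries(baseUrl: string, posts: PostRow[]) {
  return posts.map((p) => ({
    url: `${baseUrl}/blog/${p.slug}`,
    lastModified: p.updatedAt ? new Date(p.updatedAt) : new Date(),
    changeFrequency: "weekly" as const,
    priority: 0.8,
  }));
}
```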
One mistake many developers make is forgetting cache invalidation.
If your RSS feed or sitemap is cached and a new post is published, search engines may see outdated data.
The solution is to revalidate SEO endpoints after publishing.
```ts
// In the server action / route handler that publishes a post
import { revalidatePath, revalidateTag } from "next/cache";

revalidateTag("blog"); // revalidateTag takes a single tag argument
revalidatePath("/");
revalidatePath(`/blog/${created.slug}`);
revalidatePath("/rss.xml");
revalidatePath("/sitemap.xml");
```

This guarantees your SEO surfaces stay fresh in production.
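To avoid forgetting one of these surfaces, it helps to centralize the path list. A small sketch (seoPathsFor is a hypothetical helper, not from the original code); the real publish handler would loop over it and call revalidatePath on each entry:

```typescript
// Hypothetical helper: every SEO surface that must be refreshed when a post
// with the given slug is created or updated.
function seoPathsFor(slug: string): string[] {
  return ["/", `/blog/${slug}`, "/rss.xml", "/sitemap.xml"];
}

// In the publish handler (assumption, not shown in the original):
//   for (const path of seoPathsFor(post.slug)) revalidatePath(path);
```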
During implementation I ran into several issues that appear frequently in developer blogs.
Next.js statically analyzes route segment config, so revalidate must be a literal value:

```ts
export const revalidate = 600;
```

Avoid computed expressions in this position.
At some point a post will not include a cover image.
If you don’t define a fallback, preview cards break.
Breadcrumbs are simple to implement but provide clear hierarchy signals to search engines.
Dumping HTML into RSS often breaks feed readability.
Always strip tags or summarize content.
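A minimal sketch of that advice: build a plain-text excerpt for the RSS description, truncated at a word boundary. (rssExcerpt is my name for it; the original feed uses post.description directly.)

```typescript
// Hypothetical helper: strip tags, collapse whitespace, and truncate at a
// word boundary so feed readers get clean plain text instead of raw HTML.
function rssExcerpt(html: string, maxChars = 200): string {
  const plain = html.replace(/<[^>]+>/g, " ").replace(/\s+/g, " ").trim();
  if (plain.length <= maxChars) return plain;
  const cut = plain.slice(0, maxChars);
  const lastSpace = cut.lastIndexOf(" ");
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + "…";
}
```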
If you publish new posts but your SEO endpoints stay cached, the infrastructure technically exists, but it doesn’t actually work.
SEO for blogs isn’t magic.
It’s mostly about building reliable infrastructure once and letting it work for every article you publish.
With this setup you get:
structured semantics for search engines
clear page hierarchy
consistent social previews
automatic content discovery
Once this stack is in place, every new post benefits from it automatically.
And that’s the real goal of technical SEO:
systems that scale with your content.
🔥 Found this blog post helpful? 🔥
If you enjoyed this article and found it valuable, please show your support by clapping 👏 and subscribing to my blog for more in-depth insights on web development and Next.js!
Subscribe here: click me
🚀 Follow me on:
🌐 Website: sagarsangwan.dev
🐦 Twitter/X: @sagar sangwan
🔗 LinkedIn: Sagar Sangwan
📸 Instagram: @codingbysagar
▶️YouTube: @codingbysagar
Your encouragement helps me continue creating high-quality content that can assist you on your development journey. 🚀

Code. Write. Build. Explore. 💻✍️ Software developer by day, mechanical tinkerer by night. When I’m not shipping code or writing blogs, you’ll find me trekking up a mountain, whipping up a feast, or hitting the open road on two wheels. Life is better in high gear.
View more blogs by me CLICK HERE
