Using WordPress as a Headless CMS with Next.js and GraphQL
This post that you’re reading went through a journey to get here. It was written in WordPress on an installation at wp.adamfortuna.com, under my 17-year-old Dreamhost account that hosts a dozen sites for $7/month (including Hardcover’s and Minafi’s blogs).
This domain, adamfortuna.com, is a Next.js 13 site deployed on Vercel using Next’s new App directory. The Next.js app uses GraphQL to query WordPress for the latest posts and generates every page on this site ahead of time for super-speedy access.
This is the idea behind a “Headless CMS” (Content Management System). WordPress (the headless CMS) handles the post creation and this Next.js site handles the user-facing side.
Over the last decade, the Headless CMS approach has improved to the point where it’s now utterly amazing. This post will look at the different pieces needed and how they work together.
WordPress Setup
To start things off, you’ll need to install WordPress (duh). I use Dreamhost for hosting my WordPress sites, with CloudFlare handling the DNS side.
Dreamhost has a one-click installer that will do everything you need: create the MySQL database, install WordPress, set up your SSL certificates, all of it. It’s easier than running WordPress locally.
Once your site is set up, you can create an account. The most important setting you’ll need to change is under Settings -> General. Keep the WordPress Address set to the subdomain you’re hosting WordPress under, but change the Site Address to the domain you want to be the public face of your site.
This should work, but unfortunately, WordPress isn’t quite so smart. If you try to write a post you’ll receive non-stop web console errors like this:
Access to fetch at 'https://adamfortuna.com/wp-json/wp/v2/pages/6730?_locale=user' from origin 'https://wp.adamfortuna.com' has been blocked by CORS policy: Response to preflight request doesn't pass access control check: The value of the 'Access-Control-Allow-Origin' header in the response must not be the wildcard '*' when the request's credentials mode is 'include'.
The problem is that the WordPress admin uses the “Site Address” internally for a lot of things. To fix this, you’ll need to update the rest_url to use site_url() rather than home_url(). Open up the functions.php file in your WordPress theme and add these lines at the bottom.
// Point REST API requests at the WordPress Address (site_url) instead of
// the public Site Address (home_url) so admin requests stay same-origin.
add_filter('rest_url', function($url) {
  return str_replace(home_url(), site_url(), $url);
});
To update this file, you can FTP into your server or install a WordPress plugin like WP File Manager. After this is done, you should be able to write posts and use WordPress as expected.
WordPress API via GraphQL
The crux of all of this is being able to access your WordPress data from your Next.js (or Eleventy, etc) app. For this, you’ll need a few plugins:
- WPGraphQL – This is THE plugin. It’s amazing. Live it. Love it. It’ll create a GraphQL API for everything you need. You can decide to make the API public if you want to access it on the client side or limit it to authenticated users only. Since I’m generating everything on the server I’m using the authenticated API.
- JSON Basic Authentication – If you want to access raw excerpts, create comments, or fetch other restricted data, you’ll need to make authorized API calls. I’ve used this plugin on a few sites and it’s solid.
- WPGraphQL Smart Cache – Usually with WordPress you’d want to cache the pages generated. With a headless CMS you’ll want to cache the API calls.
Side note: Some parts of the API are only available if you’re authenticated. Post excerpts, for example, are only available in “rendered HTML” form unless you authenticate. If you need these for your meta description you’ll need to either parse the HTML or authenticate with the API.
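If you go the parsing route, a quick-and-dirty helper could look something like this (hypothetical, not something I’m actually using, since I authenticate):

// Hypothetical helper: strip the tags from a rendered-HTML excerpt so it can
// be used as a plain-text meta description without authenticating.
export const excerptToDescription = (renderedExcerpt: string): string =>
  renderedExcerpt
    .replace(/<[^>]+>/g, '') // drop HTML tags
    .replace(/\s+/g, ' ') // collapse whitespace
    .trim()
    .slice(0, 160) // keep it meta-description sized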
For my setup, I decided to restrict the GraphQL API to only authenticated users. I’m building everything on the server, so there’s no reason to leave it open to the world.
Before I added pagination, I wanted to get every post in a single API call. Unfortunately, WPGraphQL caps queries at 100 items by default. If you’d like to increase this, open up the functions.php file for your theme and add the following to the very bottom.
// Increase the graphql page limit
add_filter( 'graphql_connection_max_query_amount', function( $amount, $source, $args, $context, $info ) {
$amount = 1000; // increase post limit to 1000
return $amount;
}, 10, 5 );
After that, you should be able to fetch up to 1,000 posts at once!
You can head over to the GraphiQL IDE in the WordPress admin and try a query. Click on your avatar to run the query as your user.
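For example, here’s a minimal query I might use to sanity-check the new limit (a sketch; paste the query itself into GraphiQL, or use it with the fetch client covered in the next section — the field names follow WPGraphQL’s default schema):

// A minimal sketch to sanity-check the higher limit.
export const listAllPostSlugs = `
  query ListAllPostSlugs {
    posts(first: 1000) {
      nodes {
        slug
        title
      }
    }
  }
`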
Now it’s time to hit the API!
Using GraphQL from Next.js
You can pick and choose which GraphQL client you prefer. I used @apollo/client on Hardcover, but decided to use regular fetch for this site. Here’s my setup for how that works:
Side note: I’m somewhat new to TypeScript. We’re using it on Hardcover, but I’m far from an expert. If you notice ways this code could be improved, you can always email me, or message me on Mastodon at @[email protected].
src/lib/wordpressClient.ts
This is the client that’ll be used to make GraphQL calls. The hash creates a unique URL for each call so that Next.js can cache its results. Side note: the hash likely isn’t needed, but Next.js 13’s app features are still in alpha. I anticipate removing it once things are more stable.
import { Md5 } from 'ts-md5'
export const fetchClient = ({
url,
key,
query,
variables = {},
}: {
url: string
key: string
query: string
variables?: any
}) => {
const hash = Md5.hashStr(
JSON.stringify({
...{
url,
query,
key,
},
...variables,
}),
)
return fetch(`${url}#${hash}`, {
method: 'POST',
cache: 'force-cache',
next: {
revalidate: 60 * 60, // 1 hour
},
headers: {
'Content-Type': 'application/json',
Authorization: `Basic ${key}`,
},
body: JSON.stringify({
query,
variables,
}),
}).then((res) => res.json())
}
export const adamfortunaClient = ({ query, variables = {} }: { query: string; variables?: any }) => {
return fetchClient({
url: 'https://wp.adamfortuna.com/graphql',
key: String(process.env.WP_ADAMFORTUNA_TOKEN),
query,
variables,
})
}
The WP_ADAMFORTUNA_TOKEN environment variable is a Basic auth token generated from my username and password. I could’ve generated it here, but after a few failures, I decided to just save the token.
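If you do want to build it in code, it’s just a base64-encoded username:password pair. Something like this would work (WP_USER and WP_APP_PASSWORD are hypothetical environment variable names):

// Hypothetical sketch: build the Basic auth token from credentials instead of
// storing the pre-encoded value.
const basicToken = Buffer.from(
  `${process.env.WP_USER}:${process.env.WP_APP_PASSWORD}`,
).toString('base64')

// This is the value WP_ADAMFORTUNA_TOKEN holds, sent as `Basic ${basicToken}`.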
With this, I can make a call to the API from anywhere:
import { adamfortunaClient } from '@/lib/wordpressClient'
export const findWordpressPost = `
query GetWordPressPost($slug: String!) {
post: postBy(slug: $slug) {
title
}
}
`
adamfortunaClient({
query: findWordpressPost,
variables: {
slug: "codeschool"
}
})
The call to adamfortunaClient will return a promise, which can be awaited or used with a callback passed to .then().
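For example, inside an async server component or route handler, using the client and query from above:

import { adamfortunaClient } from '@/lib/wordpressClient'

// `findWordpressPost` is the query defined above.
const result = await adamfortunaClient({
  query: findWordpressPost,
  variables: { slug: 'codeschool' },
})

// With the `post:` alias in the query, the title comes back at result.data.post
console.log(result.data.post?.title)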
src/app/[slug]/page.tsx
This is the page you’re viewing right now. It has two responsibilities: getting the current post and preparing all possible paths.
import { notFound } from 'next/navigation'
import { Article } from '@/components/articles/Article'
import { getPostOrPageBySlug } from '@/queries/wordpress/getPostOrPageBySlug'
import { getRecentPosts } from '@/queries/wordpress/getRecentPosts'
interface PageProps {
params: {
slug: string
}
}
export default async function Page({ params: { slug } }: PageProps) {
const article = await getPostOrPageBySlug(slug)
if (!article) {
notFound()
}
return <Article article={article} />
}
export async function generateStaticParams() {
const { articles } = await getRecentPosts({
count: 1000
})
return articles
.map((article) => ({
slug: article.slug,
}))
}
That’s really it. There are three important parts to this page:
The call to generateStaticParams is only run on the server by Vercel when I deploy the site. It finds every article that needs a standalone page and returns all of their slugs. When Vercel deploys the app, it’ll create the page for each slug, hitting the API and caching the result.
The call to getPostOrPageBySlug (which I’ll show next) finds the actual article by its slug. I’m using “post or page” because I want to be able to create a page at a URL like /about or a blog post like /40 and have a single route for both of them.
Lastly, render the entire article using the <Article /> component. This outputs the article header, the body, author information, and webmentions.
src/queries/wordpress/getPostOrPageBySlug.ts
The last piece is the script that’ll set up the GraphQL calls and parse the response into the types your app expects. In my case, I created an Article type. I’m skipping over the parsePage and parsePost functions, but those just take the result from the GraphQL query and return an object.
import { adamfortunaClient, parsePage, parsePost } from '@/lib/wordpressClient'
export const findWordpressPost = `
query GetWordPressPost($slug: String!) {
post: postBy(slug: $slug) {
title
content
excerpt(format: RAW)
date
slug
featuredImage {
node {
sourceUrl
mediaDetails {
width
height
}
}
}
tags {
nodes {
name
slug
}
}
}
page: pageBy(uri: $slug) {
title
content
date
slug
featuredImage {
node {
sourceUrl
mediaDetails {
width
height
}
}
}
}
}
`
export const getPostOrPageBySlug = (slug: string) => {
return adamfortunaClient({
query: findWordpressPost,
variables: {
slug,
},
}).then((result) => {
if (!result.data.post && !result.data.page) {
return null
}
if (result.data.post) {
return parsePost(
{
...result.data.post,
project: 'adamfortuna',
},
true,
)
}
if (result.data.page) {
return parsePage({
...result.data.page,
project: 'adamfortuna',
})
}
return null
})
}
There’s a lot going on here. Most of it is just a regular GraphQL query. I’ve learned to love GraphQL for projects like this. I can specify exactly what I want the API to return, even nesting arrays and objects multiple levels deep.
The WPGraphQL WordPress plugin has a setting for query depth limits, which is handy if you’re making this API public.
One thing to note here: the Post is looked up by “slug”, while the Page is looked up by “uri”. This is by design. I’m in the process of moving over some old photo posts from my previous Middleman blog. This type of lookup allows pages like /photos/japan to be handled by this same query (although that needs a [...slug] catch-all route as the root route).
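For reference, a catch-all version would only differ slightly from the page above (a sketch, assuming the URL segments just need re-joining into a single WordPress uri):

// Hypothetical catch-all version at src/app/[...slug]/page.tsx.
import { notFound } from 'next/navigation'
import { Article } from '@/components/articles/Article'
import { getPostOrPageBySlug } from '@/queries/wordpress/getPostOrPageBySlug'

interface PageProps {
  params: {
    slug: string[]
  }
}

export default async function Page({ params: { slug } }: PageProps) {
  // ['photos', 'japan'] becomes 'photos/japan', which pageBy(uri:) can resolve
  const article = await getPostOrPageBySlug(slug.join('/'))
  if (!article) {
    notFound()
  }
  return <Article article={article} />
}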
src/app/[slug]/head.tsx
Next.js 13 has an amazing new addition: head files. These run during the same page load as your page and populate the <head> of the page. This is needed for the OpenGraph tags.
import { notFound } from 'next/navigation'
import { getPostOrPageBySlug } from '@/queries/wordpress/getPostOrPageBySlug'
import ArticleMetadata from '@/components/articles/ArticleMetadata'
import GlobalHead from '../globalHead'
const Head = async ({ params }: { params: { slug: string } }) => {
const article = await getPostOrPageBySlug(params.slug)
if (!article) {
notFound()
}
return (
<>
<title>{article.title}</title>
<GlobalHead />
<ArticleMetadata article={article} />
</>
)
}
export default Head
You’ll notice that both the page.tsx and head.tsx files use the same getPostOrPageBySlug query. This is where it becomes handy that Next.js caches the fetch call.
The head contains about what you’d expect:
src/app/globalHead.tsx
const GlobalHead = () => {
return (
<>
<meta name="viewport" content="width=device-width, initial-scale=1" />
<link href="/favicon.ico" rel="shortcut icon" />
<link rel="alternate" type="application/rss+xml" href="https://feeds.feedburner.com/adamfortuna" />
<link href="https://github.com/adamfortuna" rel="me" />
<link rel="webmention" href="https://wp.adamfortuna.com/wp-json/webmention/1.0/endpoint" />
<link rel="http://webmention.org/" href="https://wp.adamfortuna.com/wp-json/webmention/1.0/endpoint" />
</>
)
}
export default GlobalHead
src/components/articles/ArticleMetadata.tsx
And lastly the article metadata. I originally wanted to use next-seo for this. That module uses next/head to add the header info, which isn’t compatible with Next.js 13’s new app directory. Instead, we can do this manually.
import { Article } from '@/types'
const ArticleMetadata = ({ article }: { article: Article }) => (
<>
<meta property="og:title" content={article.title} />
<meta property="og:type" content="article" />
<meta property="og:url" content={`${process.env.NEXT_PUBLIC_URL}/${article.slug}`} />
{article.featuredImage && (
<>
<meta property="og:image" content={article.featuredImage.sourceUrl} />
{article.featuredImage.mediaDetails && (
<>
<meta property="og:image:width" content={String(article.featuredImage.mediaDetails.width)} />
<meta property="og:image:height" content={String(article.featuredImage.mediaDetails.height)} />
</>
)}
</>
)}
<meta property="og:description" content={article.excerpt || article.title} />
<meta name="description" content={article.excerpt || article.title} />
<meta property="article:published_time" content={article.date} />
<meta property="article:author" content="Adam Fortuna" />
<meta property="article:section" content="Technology" />
{article.tags && article.tags.length > 0 && (
<meta property="article:tag" content={article.tags?.map((t) => t.name).join(',')} />
)}
<meta name="twitter:card" content="summary" />
<meta name="twitter:creator" content="@adamfortuna" />
</>
)
export default ArticleMetadata
Some of this is hardcoded because this blog doesn’t have multiple authors. You could just as easily fetch author data from the WordPress API and fill that in here.
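If you did, the fields could come straight from the same post query; something like this (the field names are based on WPGraphQL’s default schema, so treat it as a sketch):

// Sketch of the extra fields the post query could request for real author data.
export const postAuthorFields = `
  author {
    node {
      name
      description
      avatar {
        url
      }
    }
  }
`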
Seeing it in Action
With this running, you can see the page you’re looking at right now! Since this blog will likely change over time, I’ll save a snapshot of what it looks like today:
And with that, we have a blog page on Next.js 13, statically generated and backed by a headless WordPress GraphQL API.
Webmentions
The last step of this journey (so far) was the question of how to handle comments. On my investing and FIRE blog I went with traditional comments. People could enter their name, email, and optionally a URL, and it’d show up as a comment. Super-basic, the same way comments have worked since the turn of the millennium.
But there’s a problem with comments. Much of the discussion around an article no longer takes place in the comments section. It takes place on Mastodon, Twitter, Facebook, and in links back to this post by other bloggers.
There are dozens of WordPress plugins and services that will scour social media for links and show pretty totals of “number of shares on Twitter”, or “Facebook likes” for the current post. Some of these involve putting tracking pixels on your site.
A Webmention is an attempt to capture each of these interactions. A like? That’s a webmention. A comment on social media? That’s a webmention. A retweet or repost? That’s a webmention.
Here’s a definition:
Webmention is an open web standard (W3C Recommendation) for conversations and interactions across the web, a powerful building block used for a growing distributed network of peer-to-peer comments, likes, reposts, and other responses across the web.
Indieweb Webmention page
Webmention Examples
I’ve been scouring bloggers’ sites to see how they use webmentions in different ways. Here are a few examples.
While this is still a relatively new standard, more and more blogs are implementing Webmentions. You can view a bunch of other implementations listed on the Webmention page on Indieweb.
What’s most exciting to me about webmentions is that they exist outside of social media. Links back to a blog post are also the most helpful thing a blogger can receive for SEO purposes. Using webmentions is a win-win.
Webmentions & WordPress
If you’re on WordPress or using it as a headless CMS, there’s a super handy wordpress-webmention plugin by Matthias Pfefferle that you can install. The work that this plugin does is honestly amazing. I’m very happy to not have needed to build this myself.
To understand how Webmentions work (from someone who just started wrapping my head around them), here are the three basic steps:
Sending Webmentions
When you publish a new post, your blogging software can automatically send out webmentions to all linked articles. Programmatically, you’ll need to fetch the content of each URL you link to and look for a header link like this:
<link rel="webmention" href="https://webmention.io/localghost.dev/webmention"/>
Your publishing software would find this, and then make an API call to this endpoint using the webmention standard parameters. That would let this blogger know that we’ve linked to them.
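In protocol terms, that request is just a form-encoded POST with source and target parameters. Here’s a rough sketch of the shape of the standard (not what the plugin does internally, and it ignores Link headers and relative endpoint URLs for brevity):

// Rough sketch of sending a webmention by hand.
const sendWebmention = async (source: string, target: string) => {
  // 1. Fetch the page we linked to and look for its advertised endpoint.
  const html = await fetch(target).then((res) => res.text())
  const match = html.match(/<link[^>]+rel=["']webmention["'][^>]+href=["']([^"']+)["']/i)
  if (!match) return // the target doesn't accept webmentions

  // 2. POST source and target to that endpoint, form-encoded per the spec.
  await fetch(match[1], {
    method: 'POST',
    headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
    body: new URLSearchParams({ source, target }).toString(),
  })
}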
If you’re using a static site generator like Jekyll, there are even gems available to make these requests for you.
In my case, the wordpress-webmention plugin does this automatically, which is kind of magic. Even though my WordPress is running at https://wp.adamfortuna.com, the plugin knows to use my site URL and references all links as https://adamfortuna.com. In other words, this is 100% handled by the plugin if you’re using WordPress.
Receiving Webmentions
If you take a look at the head tag of this page, you’ll see how I’m handling webmentions:
<link rel="webmention" href="https://wp.adamfortuna.com/wp-json/webmention/1.0/endpoint"/>
<link rel="http://webmention.org/" href="https://wp.adamfortuna.com/wp-json/webmention/1.0/endpoint"/>
Whenever someone tries to send me a webmention, their software will hit the Next.js front end and find these links back to WordPress. This endpoint is provided by the plugin and handles creating all of the webmentions. That’s it. It’s that easy.
When you receive a webmention it’ll show up as a comment in WordPress. This could be a “Like” if someone liked a post on Twitter or Mastodon that linked back to the article. It could be a “Repost” if someone retweeted or reposted the link. A “Mention” if this article was linked to in another post. Or a “Comment” in cases where there’s a direct link.
Like any other kind of comment, we can use Akismet to monitor for spam. The webmention plugin also has an allowlist for which places you want to always approve comments from.
Side note: You can also send and receive webmentions within your site. At the bottom of my Twitter Epitaph you can see it was linked to from my February 2023 Theme post. I’ll likely disable internal webmentions in the coming months after I verify everything is working.
Showing Webmentions
If you’re using WordPress as usual, the plugin provides settings for how to show your webmentions after a post. In my case though, I need to fetch these from the GraphQL API and show these in Next.js. That turned out to be a little more work than I expected.
The wordpress-webmention plugin adds additional data about the webmention that it’s able to fetch from the author. This includes the author’s name, avatar URL and a link to their website. Some of this can be extracted from an h-card on the calling page, but it’s not always guaranteed to be there.
To make the webmention data available through GraphQL, we need to add it to the GraphQL schema. WPGraphQL has a handy function called register_graphql_field() which can add additional data to existing types. In this case, I wanted to add the webmention URL, author URL, avatar, and name to the comment.
To do this, I added the following code to my theme’s functions.php (which was at /wp-content/themes/twentytwentytwo/functions.php).
add_action( 'graphql_register_types', function() {
register_graphql_object_type( 'Webmention', [
'description' => 'A Webmention outside of this site',
'fields' => [
'author_avatar' => [
'type' => 'String',
'description' => 'URL to an image of the author'
],
'author_url' => [
'type' => 'String',
'description' => 'URL associated with the author'
],
'webmention_source_url' => [
'type' => 'String',
'description' => 'The URL pointing to the URL on this site'
],
'webmention_target_url' => [
'type' => 'String',
'description' => 'The URL on this site associated with the Webmention'
],
]
]);
// Forked from https://github.com/wp-graphql/wp-graphql/issues/479
register_graphql_field( 'Comment', 'webmention', [
'type' => 'Webmention',
'resolve' => function( \WPGraphQL\Model\Comment $comment, $args, $depth ) {
// The resolve function for the field gets passed down the Object of the Type it is resolving on.
// Since this field is registered on the `Comment` Type, it will be passed down the Comment
// object (from the WPGraphQL Model Layer), and you can use that to
// resolve the field.
$webmention = [
'author_avatar' => get_comment_meta( $comment->comment_ID, 'avatar', true ),
'author_url' => get_comment_meta( $comment->comment_ID, 'url', true ),
'webmention_source_url' => get_comment_meta( $comment->comment_ID, 'webmention_source_url', true ),
'webmention_target_url' => get_comment_meta( $comment->comment_ID, 'webmention_target_url', true )
];
return $webmention;
}
] );
} );
Once this was added, my GraphQL API had webmention data!
The full GraphQL query I’m using to fetch these looks something like this:
export const findWordpressPost = `
query GetWordPressPost($slug: String!) {
post: postBy(slug: $slug) {
title
content
excerpt(format: RAW)
date
slug
commentStatus
featuredImage {
node {
sourceUrl
mediaDetails {
width
height
}
}
}
tags {
nodes {
name
slug
}
}
commentCount
comments(first: 1000) {
nodes {
type
databaseId
date
status
content(format: RAW)
webmention {
author_avatar
author_url
webmention_source_url
webmention_target_url
}
author {
node {
url
name
}
}
}
}
}
page: pageBy(uri: $slug) {
title
content
date
slug
featuredImage {
node {
sourceUrl
mediaDetails {
width
height
}
}
}
}
}
`
At this point, the webmention data is being saved and is available to the front end. Now we just have to use it!
I opted to group likes and reposts together and handle “mentions” and “comments” as standalone comments. For “mentions” I’m showing the first 280 characters from the page that linked to the post. I’d love to also show the title of that post, but for that, I’ll need to make some changes to the wordpress-webmention plugin to fetch and cache those in the database.
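On the Next.js side, that grouping is roughly this (a sketch; the exact type strings stored by the plugin are an assumption here):

// Sketch of the grouping; the `type` values (like, repost, mention, comment)
// are what I assume the webmention plugin stores as comment types.
const groupWebmentions = <T extends { type: string }>(comments: T[]) => ({
  reactions: comments.filter((c) => c.type === 'like' || c.type === 'repost'),
  standalone: comments.filter((c) => c.type === 'mention' || c.type === 'comment'),
})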
Each of the avatars links to the author’s URL, which in this case is usually their Mastodon profile. The Mastodon likes are sent as webmentions thanks to Bridgy. It checks my posts on social media, sees who’s liked, reposted, or replied, and pings WordPress with the webmention.
The end result of all of this is that I have my own database of webmentions saved. Another popular tool I’ve seen a bunch of bloggers use is Webmention.io. If you’re not using WordPress but want webmentions, that route looks a lot easier than implementing it yourself, though without total ownership of your data.
Current Takeaways
I’m extremely happy with this setup overall. I’ve used WordPress as a Headless CMS before, but it’s always felt like a second-class citizen to the hosted version. With WPGraphQL and a site URL pointing to my public site, that’s no longer the case.
Some of the steps were unexpected, like WordPress completely breaking when I changed my site URL and needing to add a rest_url filter.
Writing my own formatting for posts was expected. Since I’m not using any of WordPress’s styling, I need to style my own posts. For that I’m using Tailwind CSS’s typography plugin. I can add a class of “prose” on the main content and style everything inside it from my tailwind.config.js file. That allows styling every link element that shows up in a blog post from one place.
module.exports = {
plugins: [
require('@tailwindcss/typography'),
],
theme: {
extend: {
typography: (theme) => ({
DEFAULT: {
css: {
a: {
color: 'var(--tw-prose-links)',
fontWeight: theme('fontWeight.semibold'),
textDecoration: 'underline',
textDecorationColor: 'var(--tw-prose-underline)',
transitionProperty: 'color, text-decoration-color',
transitionDuration: theme('transitionDuration.150'),
transitionTimingFunction: theme('transitionTimingFunction.in-out'),
},
'a:hover': {
color: 'var(--tw-prose-links-hover)',
textDecorationColor: 'var(--tw-prose-underline-hover)',
},
},
},
}),
},
}
}
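Applying it is then roughly as simple as wrapping the WordPress-rendered HTML in a “prose” container (a sketch; my actual Article component does more than this):

// Minimal sketch: the "prose" class opts the post body into the typography
// plugin's styles defined in tailwind.config.js above.
const ArticleBody = ({ html }: { html: string }) => (
  <div className="prose" dangerouslySetInnerHTML={{ __html: html }} />
)

export default ArticleBody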
The wordpress-webmention plugin is amazing, and you should install it if you’re using WordPress and want to bring in more of the conversation from outside your blog.
Other than that, what I’m most excited about is being able to develop the Next.js site with a reliable CMS providing all of my data over GraphQL. I originally prototyped everything using Strapi as my headless CMS. It worked, but it wasn’t great for the most important piece: writing blog posts. WordPress’s editor has been top-notch since Gutenberg. Add to that the ability to upload images right in the editor and have them sent to a CDN, and it’s tough to beat.
My hope is to keep the WordPress side as boring as possible. I don’t need any plugins for the front end, which helps. I don’t anticipate adding much more to it other than for content organization or custom fields.
What’s Next?
This site is starting to feel like a digital garden again, growing with each little addition. Just this week I added pretty header and footer graphics.
I’m still working on migrating my old photo posts (like this one about my first trip to Japan) from the previous version of this blog to WordPress. For that, I’m using a combination of plugins: “PublishPress Series” for creating a series of posts and “modula” for image galleries. This’ll take a little work to implement, but I’ve already started (ex: /photos/japan). Managing photos in WordPress is a breath of fresh air after years of Markdown.
I’d like to update my Projects Page eventually. Currently, all of the data is stored in a single large JSON file, and the page is generated at deployment time. It’d be nice to move this into WordPress – perhaps using a new content type.
Another page I’m interested in redoing is my about page. It’s very basic right now, but I have an idea on how to make it a little more fun while incorporating some other important concepts of mine (like my beliefs and goals).
The code snippets here aren’t styled quite how I like. They’re being generated server-side by the WordPress “Code block with syntax highlighting rendered on the server” plugin, which is welcome. It inserts some CSS directly into the post, which allows the post to remain a server component for now. If I do make posts interactive (which I will eventually), I’ll have to switch the post body to a client component. Update: these are looking good now.
All of this has been a fun project so far. We’ve been debating whether we should try Next.js’s App directory on Hardcover, and at this point, I’d confidently say not yet. For starters, I struggled to cache GraphQL results with Apollo – which is a non-starter for one of the biggest benefits.
It also requires restructuring the entire page without using <context>s to share data. That will require an architectural change to make it work. We’ll keep thinking about ways to use it, but for now, the pages setup is working fine.
Eventually, I’d like to add books I’m reading and movies I’m watching. I doubt I’ll bring in much other personal data. I don’t plan to add my Mastodon posts here though; I plan to keep those separate.
Lastly, I’d love to do something fun and interactive here. I’m not sure yet what that’ll be but I’m keeping my eye out.
Header image generated using MidJourney with the prompt “WordPress, Headless CMS, Next.js, GraphQL, flat design, unreal, symbolic, [colorful]::2, light colors, white background, [digital garden]::2 –ar 3:2 –no text”.
Let's keep in touch
- Send me an email at [email protected]
- Follow me on Mastodon at @[email protected]
- Subscribe to my monthly newsletter
- Subscribe to my RSS feed