
### Optimize your Nuxt 4 app for speed in 2026. Learn rendering strategies, caching, bundle optimization, image performance, and hydration best practices.

Is your Nuxt 4 app as fast as it could be?
Here's the short answer: Nuxt 4 brings significant performance improvements out of the box. But knowing how to wield them properly makes the difference between a good app and a blazingly fast one (and let's be honest, few things are more satisfying than watching your Lighthouse score climb into the green).
This guide covers the most impactful optimizations you can make to your Nuxt 4 application.
You'll learn how to choose the right rendering strategy for each page and optimize your bundle size. You'll implement caching strategies and reduce hydration overhead. These aren't theoretical concepts. They're practical techniques you can apply today to make your app faster.
Performance matters more than ever in 2026. Core Web Vitals directly impact your search rankings, users expect instant responses, and your competitors are already optimizing their apps.
Let's make your app faster.
This guide assumes you're familiar with Vue 3 and Nuxt basics (pages, components, data fetching). New to Nuxt? Start with the official tutorial first.
Nuxt 4 launched in mid-2025 as a stability-focused release.
After a year of real-world testing, the team shipped thoughtful performance improvements that make your apps faster without any extra work. But understanding what changed helps you squeeze every last drop of speed out of these new features.
The new app/ directory is the biggest structural change in Nuxt 4.
All your application code now lives in this dedicated directory, isolated from root-level files like node_modules/, .git/, and server/. The flat structure still works if you prefer it, but the app/ directory is the new default.
nuxt-4-app/
  app/
    components/
    pages/
    layouts/
    composables/
  server/
  nuxt.config.ts
Why does this matter? Vite's file watcher can now focus on a smaller scope of files, giving you snappier Hot Module Replacement during development. The improvement is especially noticeable on Windows and Linux systems, where file watching has traditionally been painfully slow (if you've ever waited five seconds for a hot reload, you know the frustration).
This is the single most impactful performance win in Nuxt 4.
All calls to useFetch or useAsyncData with the same key now share the same data, error, and status refs. But the real improvement comes from how that data is stored.
Data is now returned as a shallowRef instead of a deep reactive ref.
// Nuxt 4 - shallow by default
const { data } = await useFetch('/api/products')
// Better performance for deeply nested objects
This dramatically slashes reactivity overhead because Vue doesn't need to watch every single property in your deeply nested objects. It only tracks the top-level reference, which is exactly how it should be for data you're just displaying.
For most API responses, this is exactly what you want. You fetch data and display it, rarely mutating it deeply.
When you do need deep reactivity (forms with nested fields, complex state with mutations), you can opt in:
// When you need deep reactivity:
const { data } = await useFetch('/api/user', { deep: true })
Nuxt 4 automatically shares payload (the data serialized from server to client during SSR) between prerendered pages.
When multiple components on the same page use useFetch or useAsyncData to fetch the same data, Nuxt deduplicates that data in the payload. For prerendered sites, this sharing also works across pages that fetch identical data. This eliminates duplicate data in your prerendered pages.
You get faster builds and a smaller bundle. This is most impactful on sites with shared data across many pages, like blogs with category listings or e-commerce sites with product catalogs. If you've ever prerendered a 500-page blog and watched the same navigation data get duplicated 500 times, you'll appreciate this one.
The new @nuxt/fonts module automatically self-hosts any font file.
No more external network requests to Google Fonts or other CDNs. Nuxt downloads the fonts during build and serves them from your own domain.
This is both a privacy and performance win. You eliminate the round-trip to third-party servers, fonts load faster, and you avoid the flash of unstyled text.
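Adopting it takes little configuration. A hedged sketch, assuming `@nuxt/fonts` is installed as a dependency (the `Inter` family and the `provider` option below are just examples, not requirements):

```typescript
// nuxt.config.ts (sketch; fonts referenced in your CSS are
// detected and self-hosted automatically, so the `fonts` block
// is optional fine-tuning)
export default defineNuxtConfig({
  modules: ['@nuxt/fonts'],
  fonts: {
    families: [
      // Downloaded at build time and served from your own domain
      { name: 'Inter', provider: 'google' }
    ]
  }
})
```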
Nuxt 4 uses separate TypeScript projects for different contexts. Your app code and server code each get their own project configuration, giving you more accurate type inference and faster language server performance.
It's an indirect performance benefit: you catch type errors earlier, before they become runtime bugs.
Now that you understand what Nuxt 4 gives you for free, let's look at how to optimize further by choosing the right rendering strategy for each page.
Nuxt 4 supports multiple rendering strategies (SSR, SSG, ISR, SWR, CSR... yes, it's an alphabet soup), and choosing the right one for each page can give you massive performance gains. You can even mix strategies within a single application using hybrid rendering.
This is Nuxt's default mode, and it's called universal rendering.
The server renders HTML on each request, the browser receives fully rendered content and displays it immediately, then Vue.js swoops in to hydrate the page (attaching event listeners and reactivity to the static HTML) and make it interactive.
// nuxt.config.ts
export default defineNuxtConfig({
  ssr: true, // This is the default
})
You get a fast initial load because users see content immediately, search engines can crawl fully rendered HTML, and your page works even if JavaScript fails to load.
But there's a tradeoff: each request requires server processing, you need to think about server/browser compatibility, and Time to First Byte is higher compared to static hosting.
Use SSR when you need real-time data, personalized content per user, or SEO-critical pages that update frequently.
Static site generation pre-renders all pages at build time.
You generate static HTML files and deploy them to a CDN. When users request a page, they get pre-rendered HTML instantly. Then Vue.js hydrates for interactivity.
# Generate static files at build time
pnpm dlx nuxi generate
This gives you maximum performance: no server processing needed, files can be hosted on edge locations worldwide, and you can handle unlimited traffic spikes.
The downside? Content is frozen at build time, changes require rebuilding the entire site, and build times grow with your site size.
Use SSG for blogs, documentation, marketing sites, and any content that doesn't change frequently.
ISR combines static performance with automatic updates.
Pages are initially generated at build time or on-demand and cached on CDN with a Time-To-Live. When the TTL expires, the page regenerates in the background while serving stale content.
// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    '/blog/**': { isr: 3600 },   // 1 hour
    '/products/**': { isr: 300 } // 5 minutes
  }
})
The benefit here is that you get CDN-cached static performance with fresh content. No full site rebuilds needed. It really is the best of both worlds, which is why ISR has become such a darling in the Jamstack community.
But ISR is only natively supported on certain platforms (Vercel and Netlify have the best integration), you need a CDN for proper testing, and cache invalidation can be complex.
So use ISR for e-commerce product pages, blog posts that occasionally update, and any content that changes periodically.
SWR extends ISR by adding server-side caching without requiring CDN support.
The first request generates and caches the page, and subsequent requests serve the cached version. When the TTL expires, stale content is served immediately while a new version generates in the background.
// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    '/api/**': { swr: 60 } // 1 minute cache
  }
})
You always get instant responses, users never wait for regeneration, and it works with any hosting (no CDN required).
The tradeoff is that users may see slightly outdated data, you need a cache invalidation strategy, and cached data lives on your server.
Use SWR for API responses, dashboard data, and any content where slight staleness is acceptable. DHH posted an interesting example of doing this for profile images.
Client-side rendering happens entirely in the browser (it's also known as SPA or Single Page Application).
The server sends a basic HTML shell with your JavaScript bundle, which downloads and executes. Then Vue.js renders the entire application in the browser.
// nuxt.config.ts
export default defineNuxtConfig({
  routeRules: {
    '/admin/**': { ssr: false }
  }
})
This simplifies deployment (no server-side rendering infrastructure), you get full JavaScript access immediately, and your server only serves static files.
But search engines may not index JavaScript-rendered content and users see a blank page until JavaScript loads.
So use CSR for admin dashboards, internal tools, and SaaS applications behind login.
You can mix and match strategies per route, which means your marketing pages can be statically generated for maximum speed while your dashboard uses SSR for fresh data and your admin panel skips server rendering entirely because nobody's going to find it through Google anyway.
This is where Nuxt 4 really shines.
export default defineNuxtConfig({
  routeRules: {
    // Marketing: Static
    '/': { prerender: true },
    '/features': { prerender: true },
    '/pricing': { prerender: true },

    // Blog: ISR with CDN caching
    '/blog/**': { isr: 3600 },

    // Products: SWR with medium cache
    '/products/**': { swr: 1800 },

    // Admin: Client-side only
    '/admin/**': { ssr: false }
  }
})
Your marketing pages load instantly from CDN. Blog posts update hourly. Product pages stay relatively fresh. And your admin area doesn't waste precious server cycles on SSR that nobody will ever see in a search result anyway.
Now that you understand rendering strategies, let's look at route rules in more detail and see how you can use them for fine-grained control.
Route rules give you granular control over each page: caching behavior, custom headers, redirects, and more. It's the most powerful optimization tool in Nuxt 4.
You can control cache behavior for each route.
export default defineNuxtConfig({
  routeRules: {
    '/api/posts': {
      cache: {
        maxAge: 300,       // 5 minutes fresh
        staleMaxAge: 3600, // 1 hour stale
        swr: true,
        name: 'posts-cache'
      }
    }
  }
})
The maxAge sets how long the cache is fresh. After that, staleMaxAge determines how long stale content can be served while regenerating (because serving something slightly old is almost always better than making users wait). And name helps you identify the cache later when debugging.
Set custom headers for specific routes.
export default defineNuxtConfig({
  routeRules: {
    '/api/**': {
      cors: true,
      headers: {
        'Cache-Control': 'public, max-age=3600'
      }
    }
  }
})
This is useful for CORS configuration, security headers, or cache control directives that differ from your default settings.
Redirects at the route rule level happen on the server.
export default defineNuxtConfig({
  routeRules: {
    '/old-blog/**': {
      redirect: { to: '/blog', statusCode: 301 }
    }
  }
})
Server-side redirects are faster than client-side redirects because users never download the old page, and search engines update their indexes correctly.
You can also set route rules directly in your page files.
First, enable the experimental feature:
// nuxt.config.ts
export default defineNuxtConfig({
  experimental: {
    inlineRouteRules: true
  }
})
Then use defineRouteRules in your pages:
<script setup lang="ts">
defineRouteRules({
swr: 3600 // Cache for 1 hour
})
const route = useRoute()
const { data: post } = await useFetch(`/api/blog/${route.params.slug}`)
</script>
<template>
<article>
<h1>{{ post.title }}</h1>
<div v-html="post.content"></div>
</article>
</template>
This keeps your caching strategy close to the component that needs it. You can see at a glance how each page is optimized, and you won't have to go hunting through your nuxt.config to remember what caching rules apply where.
Start with conservative TTLs and increase them over time.
You can always make caches longer. But if you start too aggressive, you'll have users complaining that they can't see their changes, and you'll be scrambling to figure out how to bust caches across your CDN.
Use wildcards carefully. They match all routes under that path. Test that your rules apply to the routes you expect.
Document your strategy. A comment explaining why you chose specific TTLs helps future you (and your team) understand the decisions.
Test in a production environment. ISR needs a CDN to work properly. Local testing won't give you accurate results.
Now let's look at optimizing what gets cached by reducing your bundle size.
Smaller bundles download faster, parse faster, and execute faster. This directly impacts Time to Interactive and is one of the most impactful optimizations for hydration performance.
Start by seeing what's actually lurking in your bundle.
Nuxt provides a built-in analyzer:
pnpm dlx nuxi analyze
This opens a visual representation of your bundle where large blocks represent large modules, showing what's taking up space in your final bundle.
Look for large dependencies that could be lazy-loaded, components that don't need to load immediately, and duplicate code that could be shared.
Add the Lazy prefix to components that aren't needed on initial render.
<script setup lang="ts">
const showModal = ref(false)
</script>
<template>
<div>
<button @click="showModal = true">Open Modal</button>
<!-- Only loads when showModal becomes true -->
<LazyModal v-if="showModal" @close="showModal = false" />
</div>
</template>
The modal component only downloads when users click the button, and until then, it's not in the bundle.
Lazy load modals, dialogs, below-the-fold content, heavy components like charts, editors, and maps, and anything behind user interactions.
Use dynamic imports for heavy libraries that are used conditionally.
export const useHeavyLibrary = async () => {
  // Only loads when called
  const { default: HeavyLib } = await import('heavy-library')
  return new HeavyLib()
}
Now you can use this composable in your components:
<script setup lang="ts">
const processData = async () => {
const lib = await useHeavyLibrary()
return lib.process(data)
}
</script>
The library only downloads when processData is called. Users who never use that feature never download it, which is exactly how things should work. Why punish everyone for a feature only 5% of users touch?
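One subtlety worth noting: the runtime caches the module itself after the first `import()`, but `new HeavyLib()` would otherwise run on every call. A sketch of reusing a single instance (`heavy-library` and its `process` API are placeholder names, and the `load` parameter stands in for `() => import('heavy-library')` so the example stays self-contained):

```typescript
// Sketch only: HeavyLib and `process` are hypothetical stand-ins
// for whatever heavy dependency you lazy load.
interface HeavyLib { process: (input: string) => string }

let instance: HeavyLib | null = null

export const useHeavyLibrary = async (
  load: () => Promise<{ default: new () => HeavyLib }>
): Promise<HeavyLib> => {
  // Later calls reuse the instance created on the first call
  if (instance) return instance
  const { default: Ctor } = await load() // module download happens here, once
  instance = new Ctor()
  return instance
}
```

The same idea applies whenever instantiation is expensive (parsing a large ruleset, building an index): pay the cost once, then hand out the shared instance.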
Import specific functions instead of entire libraries. This helps Vite's tree-shaking feature eliminate unused code.
// ❌ Bad: Loads entire lodash
import _ from 'lodash'
// ✅ Good: Only loads debounce
import { debounce } from 'lodash-es'
// ✅ Better: Smaller alternative
import { debounce } from 'radash'
The difference can be staggering. Lodash is around 70 kB. Importing just debounce from lodash-es is about 1.9 kB. That's a reduction of over 95% for the same functionality (and your users on spotty mobile connections will thank you).
Always check if your library supports tree-shaking. Use the ES module version when available.
If you only use a plugin on some pages, convert it to a composable:
// ❌ Bad: Plugin loaded on every page
// plugins/analytics.ts
export default defineNuxtPlugin(() => {
const analytics = initAnalytics()
return { provide: { analytics } }
})
Convert it to a composable:
// ✅ Good: Composable loaded only when used
// composables/useAnalytics.ts
export const useAnalytics = () => {
const analytics = useState('analytics', () => initAnalytics())
return analytics
}
Now analytics only loads on pages that call useAnalytics(). Your dashboard users aren't downloading analytics code they'll never execute. For browser-only libraries that can't be converted, use the .client.ts suffix to exclude them from SSR entirely.
Smaller bundles mean faster downloads and quicker hydration. Now let's look at caching on the server side.
Server caching reduces database queries, lowers API costs, and speeds up response times. Nitro (Nuxt's server engine that handles SSR, API routes, and caching) provides powerful caching utilities built right into Nuxt 4, and once you start using them, you'll wonder how you ever lived without them.
Wrap expensive API handlers with cachedEventHandler.
export default cachedEventHandler(async (event) => {
  // Expensive operation
  const posts = await $fetch('https://api.example.com/posts')
  return {
    posts,
    timestamp: new Date().toISOString()
  }
}, {
  maxAge: 60 * 60, // Cache for 1 hour
  name: 'posts-cache'
})
The first request executes the handler and caches the result. Subsequent requests within the hour get the cached response instantly, with no database query or external API call.
You can also cache per parameter by using custom cache keys with the getKey option.
import { getRouterParam } from 'h3'

export default cachedEventHandler(async (event) => {
  const id = getRouterParam(event, 'id')
  const post = await $fetch(`https://api.example.com/posts/${id}`)
  return post
}, {
  maxAge: 60 * 10, // 10 minutes
  getKey: (event) => `post-${getRouterParam(event, 'id')}`
})
Each post ID gets its own cache entry. Updating post 123 doesn't invalidate the cache for post 456.
This is the most elegant caching strategy.
export default cachedEventHandler(async (event) => {
  const products = await db.query('SELECT * FROM products')
  return {
    products,
    generatedAt: new Date().toISOString()
  }
}, {
  maxAge: 60,           // Cache fresh for 1 minute
  staleMaxAge: 60 * 60, // Serve stale for up to 1 hour
  swr: true,            // Enable background revalidation
})
Here's what each option does:
- maxAge: How long the cache is considered fresh (1 minute). During this time, cached responses are served with no revalidation.
- staleMaxAge: How long stale content can be served after maxAge expires (1 hour). This sets an upper limit on staleness.
- swr: Enables background revalidation. When true, stale content is served immediately while fresh content is generated in the background.
Without swr: true, stale content would still be served (up to staleMaxAge), but there would be no background revalidation. Users would have to wait for fresh content on the next request after the stale period.
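To make that lifecycle concrete, here's a small illustrative sketch (not Nitro's actual implementation) of how an entry's age maps onto the fresh, stale, and expired states:

```typescript
// Illustrative only: how maxAge, staleMaxAge, and swr interact
// for a single cache entry.
type Entry<T> = { value: T; storedAt: number } // storedAt in ms

function cacheState(
  entry: Entry<unknown>,
  now: number,         // current time in ms
  maxAge: number,      // seconds the entry counts as fresh
  staleMaxAge: number  // seconds stale content may still be served
): 'fresh' | 'stale' | 'expired' {
  const age = (now - entry.storedAt) / 1000
  if (age <= maxAge) return 'fresh'               // serve cached, no revalidation
  if (age <= maxAge + staleMaxAge) return 'stale' // serve cached; if swr, revalidate in background
  return 'expired'                                // must regenerate before responding
}

const entry = { value: 'cached page', storedAt: 0 }
console.log(cacheState(entry, 30_000, 60, 3600))    // 'fresh' (30s old)
console.log(cacheState(entry, 600_000, 60, 3600))   // 'stale' (10min old)
console.log(cacheState(entry, 4_000_000, 60, 3600)) // 'expired' (~67min old)
```

Requests in the stale window get the cached page instantly while regeneration happens off the request path; only requests after the full window pay the generation cost.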
If you have an expensive operation that you need to run multiple times or across multiple handlers, you can use the cachedFunction utility to cache the result.
export const getExpensiveData = cachedFunction(
  async (userId: string) => {
    console.log(`Calculating for user ${userId}...`)
    return await performExpensiveCalculation(userId)
  },
  {
    maxAge: 60 * 60,
    getKey: (userId) => `expensive-${userId}`
  }
)
Now both handlers use the same cached result:
// e.g. server/api/dashboard/[id].ts
export default defineEventHandler(async (event) => {
  const id = getRouterParam(event, 'id')
  const data = await getExpensiveData(id) // Cached!
  return { dashboard: data }
})

// e.g. server/api/report/[id].ts
export default defineEventHandler(async (event) => {
  const id = getRouterParam(event, 'id')
  const data = await getExpensiveData(id) // Same cache!
  return { report: data }
})
Both routes share the cache. So you only run the expensive calculation once per user per hour. Your CPU will thank you, and so will your cloud bill.
Optimizing images is one of the most impactful visual optimizations you can make. The @nuxt/image module makes this straightforward.
Install the official image module:
pnpm dlx nuxi module add image
Now you can use the NuxtImg and NuxtPicture components.
WebP and AVIF are the best image formats for small file sizes (they compress better than JPEG).
WebP files are typically 25-35% smaller than JPEG, and AVIF files are often around 50% smaller (though AVIF support is limited to modern browsers). Automatic format conversion requires a compatible image provider (like Cloudinary, imgix, or Vercel) or build-time processing with IPX.
<template>
<!-- Automatic WebP conversion -->
<NuxtImg
src="/images/product.jpg"
format="webp"
alt="Product"
width="400"
height="300"
/>
</template>
The image is automatically converted to WebP. Browsers that don't support WebP fall back to the original format.
Use NuxtPicture for automatic format selection:
<template>
<NuxtPicture
src="/images/hero.jpg"
:imgAttrs="{ alt: 'Hero image' }"
/>
</template>
This generates a <picture> element with multiple formats:
<picture>
  <source type="image/avif" srcset="/images/hero.avif">
  <source type="image/webp" srcset="/images/hero.webp">
  <img src="/images/hero.jpg" alt="Hero image">
</picture>
Browsers automatically choose the best format they support. Modern browsers get AVIF, slightly older ones get WebP, and ancient browsers gracefully fall back to good old JPEG. Everyone wins!
You can also generate multiple sizes automatically using the sizes prop.
<template>
<NuxtImg
src="/images/hero.jpg"
sizes="sm:100vw md:50vw lg:400px"
alt="Responsive hero"
/>
</template>
This tells the browser: use full width on small screens, half the viewport on medium screens, and 400px on large screens. Nuxt Image calculates the appropriate image widths based on these sizes and generates a srcset with multiple variants (including 2x versions for retina displays by default).
The browser then picks the best image based on viewport size and device pixel ratio.
Small screens get small images. Large screens get appropriately sized images. No wasted bandwidth, no angry users on metered connections.
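As a rough illustration of what gets generated (the actual URL format depends on your image provider), assembling a srcset from a list of widths looks something like this:

```typescript
// Illustrative sketch, not Nuxt Image's internal code: build width
// variants, including a 2x (retina) copy of each, as a srcset string.
function buildSrcset(src: string, widths: number[]): string {
  // Add the 2x variant for every width and drop duplicates
  const all = [...new Set(widths.flatMap((w) => [w, w * 2]))].sort((a, b) => a - b)
  // `?width=` is a placeholder query format; real URLs vary by provider
  return all.map((w) => `${src}?width=${w} ${w}w`).join(', ')
}

console.log(buildSrcset('/images/hero.jpg', [400]))
// → '/images/hero.jpg?width=400 400w, /images/hero.jpg?width=800 800w'
```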
Critical images should load immediately (they're often the Largest Contentful Paint element).
<template>
<!-- LCP element: Load immediately -->
<NuxtImg
src="/images/hero.jpg"
alt="Hero"
width="1200"
height="600"
format="webp"
loading="eager"
fetchpriority="high"
preload
/>
</template>
This is typically your hero image (the Largest Contentful Paint element). Setting loading="eager" and fetchpriority="high" tells the browser to prioritize this image.
The preload prop adds a preload link in the document head. This starts downloading the image as early as possible.
Images below the fold should lazy load.
<template>
<!-- Lazy load when near viewport -->
<NuxtImg
src="/images/feature.jpg"
alt="Feature"
width="600"
height="400"
format="webp"
loading="lazy"
fetchpriority="low"
/>
</template>
This image will now only download when it's about to enter the viewport. This reduces initial page weight and speeds up the first load. Your users probably won't scroll to see every image anyway, so why front-load all that data?
When you find yourself repeating the same image configuration across components, presets let you define it once and reuse it everywhere.
export default defineNuxtConfig({
  modules: ['@nuxt/image'],
  image: {
    presets: {
      avatar: {
        modifiers: {
          format: 'webp',
          width: 150,
          height: 150,
          fit: 'cover'
        }
      },
      hero: {
        modifiers: {
          format: 'webp',
          width: 1200,
          height: 600,
          quality: 90
        }
      }
    }
  }
})
Now your components stay clean:
<template>
<NuxtImg
src="/images/profile.jpg"
preset="avatar"
alt="User avatar"
/>
</template>
One prop instead of four. The preset applies all your modifiers automatically, and if you need to adjust your avatar size later, you change it in one place instead of hunting through dozens of components.
Images are optimized. Now let's reduce the JavaScript needed to make your app interactive.
Hydration is when Vue.js takes over server-rendered HTML. The server sends HTML, JavaScript downloads, Vue.js executes and attaches event listeners, and the page becomes interactive. It's a delicate dance between server and client, and when there's too much JavaScript involved, that dance turns into a slog.
Let's fix that.
This is the biggest win in Nuxt 4.
All data from useFetch and useAsyncData is now a shallowRef by default:
// Nuxt 4 default behavior
const { data } = await useFetch('/api/products')
// data is shallowRef - only top-level reactivity
Shallow refs don't track nested properties, making them dramatically faster for complex data structures. They use less memory and reduce hydration time.
Most API responses are immutable. You fetch data and display it, rarely mutating nested properties.
When you do need deep reactivity, you can opt in:
// When you need deep reactivity (rare):
const { data } = await useFetch('/api/user', { deep: true })
Use deep reactivity for forms where nested fields are edited, complex state with mutations, or interactive data tables with inline editing.
But for most data, shallow refs are faster.
You can use pick to select only the fields you need from the server response:
// ❌ Bad: Fetches all fields
const { data: user } = await useFetch('/api/user')
// ✅ Good: Only fetch what you need
const { data: user } = await useFetch('/api/user', {
pick: ['id', 'name', 'avatar']
})
The user object might have 20 fields. But if you only display the name and avatar, why transfer the other 18 fields? That's just extra bytes traveling across the wire, getting parsed by the browser, and sitting in memory for no reason.
Smaller payloads mean faster hydration (and faster requests).
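Conceptually, pick is just key filtering before the payload is serialized. A minimal sketch, using a hypothetical pickFields helper (not Nuxt's internal code):

```typescript
// Only the listed keys survive; everything else is dropped before
// the data crosses the wire.
function pickFields<T extends Record<string, unknown>, K extends keyof T>(
  obj: T,
  keys: K[]
): Pick<T, K> {
  const out = {} as Pick<T, K>
  for (const k of keys) out[k] = obj[k]
  return out
}

const user = { id: 1, name: 'Ada', avatar: '/a.png', email: 'a@x.io', role: 'admin' }
console.log(pickFields(user, ['id', 'name', 'avatar']))
// → { id: 1, name: 'Ada', avatar: '/a.png' }
```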
Not everything needs to hydrate immediately.
Below-the-fold content like carousels, sliders, and comment sections can wait until they're visible:
<template>
<div>
<LazyProductCarousel hydrate-on-visible />
<LazyReviewsSection hydrate-on-visible />
</div>
</template>
Search interfaces, modals, and dropdown menus don't need hydration until the user actually interacts with them:
<template>
<div>
<LazySearchBox hydrate-on-interaction />
<LazyFilterPanel hydrate-on-interaction />
</div>
</template>
And non-critical features like analytics and chat widgets can hydrate whenever the browser has a spare moment:
<template>
<div>
<LazyAnalyticsWidget hydrate-on-idle />
<LazyChatWidget hydrate-on-idle />
</div>
</template>
Server components skip hydration entirely (usually). They render on the server and send plain HTML: no JavaScript, no hydration, zero overhead.
First, enable the experimental feature:
export default defineNuxtConfig({
  experimental: {
    // Server components are also known as "islands"
    componentIslands: true
  }
})
Then, add the .server.vue suffix to any component you want to render only on the server:
components/
  BlogSidebar.server.vue
  NewsletterBox.server.vue
Here's a server component:
<template>
<aside>
<h3>Recent Posts</h3>
<ul>
<li v-for="post in recentPosts" :key="post.id">
<a :href="`/blog/${post.slug}`">{{ post.title }}</a>
</li>
</ul>
</aside>
</template>
<script setup lang="ts">
// Runs only on server, never hydrates
const { data: recentPosts } = await useFetch('/api/posts/recent')
</script>
Use it in your pages like any other component:
<template>
<div>
<!-- Regular component: hydrates -->
<InteractiveComments />
<!-- Server component: never hydrates, zero JS -->
<BlogSidebar />
</div>
</template>
The sidebar renders on the server and sends static HTML, so no JavaScript is sent to the client for it. The page hydrates faster because there's less JavaScript to execute.
If you want to get fancy, you can also add as much interactivity as you want to your server components, but we won't cover that here.
Now let's wrap up with a quick wins checklist.
Here are the most impactful optimizations you can make right now. Bookmark this section. You'll want to come back to it.
You can do these today.
- Run pnpm dlx nuxi analyze to identify bundle issues.
- Add the Lazy prefix to heavy components like modals and charts.
- Install @nuxt/image, then optimize your LCP image with eager loading and high priority.

These take a bit more time but are worth it.

- Add cachedEventHandler to expensive API routes.
- Keep fetched data shallow (avoid opting into deep: true unnecessarily).

For maximum performance, go deeper. These take more effort but the payoff can be substantial.
Nuxt 4 brings significant performance improvements out of the box. Shallow refs by default give you faster reactivity, the optimized directory structure speeds up development, better font handling eliminates external requests, and the shared payload system reduces build times.
But the real performance gains come from making deliberate choices.
Choose the right rendering strategy for each page and use hybrid rendering to optimize each route independently. Cache aggressively with route rules and Nitro, lazy load everything that's not immediately visible, and optimize images with @nuxt/image.
The biggest wins come from hybrid rendering, bundle optimization with lazy loading, smart caching with SWR and ISR, image optimization with modern formats, and reduced hydration with shallow refs and server components.
You've learned the techniques. Now it's time to put them into practice.
In Mastering Nuxt, you'll build a complete AI-powered chat application from scratch, and you'll implement every optimization from this guide along the way: hybrid rendering, caching strategies with Nitro, database integration with Prisma and Supabase, and production deployment.
It's the natural next step if you're serious about building snappy Nuxt applications (the kind where users actually notice the difference).
Performance isn't a one-time optimization. It's a mindset, a habit, a way of thinking about every line of code you write and every dependency you add. With Nuxt 4's powerful tools and the techniques in this guide, you're equipped to build blazingly fast applications your users will love.
