Performance optimization often feels like searching for a needle in a haystack. But sometimes, the biggest gains come from fixing a single, pervasive anti-pattern: the N+1 query problem.
## The Problem
While working on an e-sports platform, I noticed that tournament listing pages were taking 2-3 seconds to load. For a real-time competitive gaming platform, this was unacceptable.
The profiler revealed the culprit: we were making 150+ database queries for a single page load.
### What is N+1?
The N+1 query problem occurs when you fetch a list of items (1 query), then fetch related data for each item individually (N queries). It’s insidious because it works correctly—just slowly.
```javascript
// The problematic code
async function getTournaments() {
  // 1 query to get tournaments
  const tournaments = await Tournament.find({ status: 'active' });

  // N queries to get participant counts
  for (const tournament of tournaments) {
    tournament.participantCount = await Participant.countDocuments({
      tournamentId: tournament._id
    });
  }

  // N more queries to get organizer info
  for (const tournament of tournaments) {
    tournament.organizer = await User.findById(tournament.organizerId);
  }

  return tournaments;
}
```
With 50 tournaments, this code executes 101 queries (1 + 50 + 50).
## The Solution: Aggregation Pipelines
MongoDB’s aggregation pipeline can solve this in a single query:
```javascript
async function getTournaments() {
  return Tournament.aggregate([
    { $match: { status: 'active' } },

    // Join with participants collection
    {
      $lookup: {
        from: 'participants',
        localField: '_id',
        foreignField: 'tournamentId',
        as: 'participants'
      }
    },

    // Join with users collection for organizer
    {
      $lookup: {
        from: 'users',
        localField: 'organizerId',
        foreignField: '_id',
        as: 'organizerData'
      }
    },

    // Transform the result
    {
      $project: {
        name: 1,
        startDate: 1,
        prizePool: 1,
        participantCount: { $size: '$participants' },
        organizer: { $arrayElemAt: ['$organizerData', 0] }
      }
    },

    { $sort: { startDate: -1 } },
    { $limit: 50 }
  ]);
}
```
Result: 101 queries reduced to 1. Response time dropped from 2.3s to 180ms.
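The aggregation pipeline isn't the only way out, either. When a pipeline feels too heavyweight, the same N+1 can be collapsed into a fixed number of queries by batching the related lookups with `$in`. A minimal sketch, assuming the same `Tournament`, `Participant`, and `User` models as above (the function name is illustrative):

```typescript
// Alternative: 3 fixed queries instead of 1 + N + N, via $in batching
async function getTournamentsBatched() {
  const tournaments = await Tournament.find({ status: 'active' }).lean();
  const ids = tournaments.map((t) => t._id);
  const organizerIds = tournaments.map((t) => t.organizerId);

  // One query for all participant counts, grouped by tournament
  const counts = await Participant.aggregate([
    { $match: { tournamentId: { $in: ids } } },
    { $group: { _id: '$tournamentId', count: { $sum: 1 } } }
  ]);
  const countMap = new Map(counts.map((c) => [String(c._id), c.count]));

  // One query for all organizers
  const organizers = await User.find({ _id: { $in: organizerIds } }).lean();
  const organizerMap = new Map(organizers.map((u) => [String(u._id), u]));

  // Stitch the results together in memory
  return tournaments.map((t) => ({
    ...t,
    participantCount: countMap.get(String(t._id)) ?? 0,
    organizer: organizerMap.get(String(t.organizerId)) ?? null
  }));
}
```

Three queries no matter how many tournaments come back. The aggregation version is still the better fit here, since it pushes the joins and projection into the database, but `$in` batching is often the quicker retrofit.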
## Beyond N+1: Strategic Indexing
After fixing N+1 issues, the next bottleneck was query execution time. MongoDB was scanning entire collections instead of using indexes.
### Identifying Missing Indexes
Use `explain()` to analyze query execution:
```javascript
const explanation = await Tournament.find({ status: 'active' })
  .sort({ startDate: -1 })
  .explain('executionStats');

console.log(explanation.executionStats);
// {
//   totalDocsExamined: 15000,  // Scanned all documents!
//   totalKeysExamined: 0,      // No index used
//   executionTimeMillis: 450
// }
```
### Creating Compound Indexes
For queries that filter and sort, compound indexes are essential:
```javascript
// Create index for the common query pattern
tournamentSchema.index({ status: 1, startDate: -1 });

// After adding index:
// {
//   totalDocsExamined: 50,   // Only examined needed docs
//   totalKeysExamined: 50,   // Used index
//   executionTimeMillis: 3   // 150x faster!
// }
```
### Index Strategy Guidelines
- **Equality matches first**: Put fields used with `=` before range fields
- **Sort fields next**: Include fields used in `sort()`
- **Range fields last**: Fields used with `$gt`, `$lt`, etc.
```javascript
// For query: { status: 'active', region: 'EU' } sorted by startDate
// Optimal index:
{ status: 1, region: 1, startDate: -1 }
```
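To see all three slots of the equality-sort-range rule in one index, here is a sketch that extends the example with a hypothetical range filter on `prizePool` (not part of the original query):

```typescript
// Equality (status, region) -> Sort (startDate) -> Range (prizePool)
tournamentSchema.index({ status: 1, region: 1, startDate: -1, prizePool: 1 });

// This index can then satisfy a query like:
Tournament.find({
  status: 'active',            // equality - leading index fields
  region: 'EU',                // equality - leading index fields
  prizePool: { $gte: 10000 }   // range - trailing index field
}).sort({ startDate: -1 });    // sort - between equality and range
```

Note that the sort direction (`-1`) matches the index direction, so MongoDB can walk the index instead of sorting in memory.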
## Redis Caching Layer
Some data doesn’t change frequently but is requested constantly. Adding a Redis caching layer provided another 40% improvement:
```typescript
import Redis from 'ioredis';

const redis = new Redis();
const CACHE_TTL = 300; // 5 minutes

async function getTournaments(filters: TournamentFilters) {
  const cacheKey = `tournaments:${JSON.stringify(filters)}`;

  // Try cache first
  const cached = await redis.get(cacheKey);
  if (cached) {
    return JSON.parse(cached);
  }

  // Cache miss - query database
  const tournaments = await fetchFromDatabase(filters);

  // Store in cache
  await redis.setex(cacheKey, CACHE_TTL, JSON.stringify(tournaments));

  return tournaments;
}

// Invalidate cache when data changes
async function updateTournament(id: string, data: UpdateData) {
  await Tournament.findByIdAndUpdate(id, data);

  // Invalidate related cache keys
  const keys = await redis.keys('tournaments:*');
  if (keys.length > 0) {
    await redis.del(...keys);
  }
}
```
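One caveat with the invalidation above: `KEYS` is O(N) over the entire keyspace and blocks Redis while it runs, which can bite at scale. A safer sketch using ioredis's `scanStream`, which iterates with a cursor instead of blocking (the function name is illustrative):

```typescript
// Non-blocking invalidation: SCAN the keyspace in batches instead of KEYS
async function invalidateTournamentCache() {
  const stream = redis.scanStream({ match: 'tournaments:*', count: 100 });
  const pending: Promise<number>[] = [];

  stream.on('data', (keys: string[]) => {
    // Each 'data' event delivers a batch of matching keys
    if (keys.length > 0) {
      pending.push(redis.del(...keys));
    }
  });

  await new Promise<void>((resolve, reject) => {
    stream.on('end', resolve);
    stream.on('error', reject);
  });
  await Promise.all(pending);
}
```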
## Monitoring Query Performance
Prevention is better than cure. Set up monitoring to catch slow queries before they impact users; Mongoose's query middleware makes it easy to time every operation:
```javascript
import mongoose from 'mongoose';

// Log slow queries (> 100ms) via query middleware.
// Register the plugin before compiling any models.
mongoose.plugin((schema) => {
  schema.pre(/^find|^count|^update|^delete/, function () {
    this._startTime = Date.now();
  });

  schema.post(/^find|^count|^update|^delete/, function () {
    const duration = Date.now() - this._startTime;
    if (duration > 100) {
      console.warn('Slow query detected:', {
        collection: this.model.collection.name,
        method: this.op,
        query: this.getQuery(),
        duration: `${duration}ms`
      });
    }
  });
});
```
For production, send these metrics to your monitoring system (Datadog, New Relic, etc.).
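As one possibility, the slow-query handler above could emit a histogram instead of a console warning. A sketch using the hot-shots StatsD client, assuming a Datadog agent on the default StatsD port (the metric name and tags are illustrative):

```typescript
import StatsD from 'hot-shots';

// Assumes a local Datadog agent accepting StatsD metrics
const statsd = new StatsD({ prefix: 'app.mongo.' });

function reportQuery(collection: string, method: string, durationMs: number) {
  // A histogram gives you p95/p99 in the dashboard, not just averages
  statsd.histogram('query.duration', durationMs, [
    `collection:${collection}`,
    `method:${method}`
  ]);
}
```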
## The Results
After implementing these optimizations:
| Metric | Before | After | Improvement |
|---|---|---|---|
| Queries per request | 150+ | 1-3 | 98% reduction |
| Average response time | 2.3s | 85ms | 96% faster |
| P95 response time | 4.1s | 180ms | 96% faster |
| Lighthouse Performance | 45 | 92 | 65% improvement |
## Key Takeaways
- **Profile before optimizing**: Use database profilers and APM tools to identify actual bottlenecks.
- **N+1 is everywhere**: Any time you loop over results and make additional queries, you probably have an N+1 problem.
- **Indexes are not optional**: In production with real data volumes, missing indexes force full collection scans that get slower as data grows.
- **Cache strategically**: Not everything needs caching. Focus on frequently-read, rarely-changed data.
- **Monitor continuously**: Performance degrades over time as data grows. Set up alerts for slow queries.
## Conclusion
Database performance isn’t about clever tricks—it’s about understanding how databases work and respecting their constraints. The 65% Lighthouse improvement came not from a single magic fix, but from systematically eliminating inefficiencies.
Every millisecond matters. Your users—and your infrastructure costs—will thank you.