A comprehensive code review of Reiatsu uncovered critical security vulnerabilities and performance issues. This post chronicles the systematic fixes that took the framework from v1.1.0 to v1.2.1: memory leaks plugged, DoS vectors closed, type safety tightened, and more.

After building Reiatsu from first principles and getting it to a functional state, I decided it was time for a reality check. I ran a deep code review on the entire framework: every middleware, every utility, every line of code. The goal? Find weaknesses before production users did.
What I discovered was both humbling and educational. While the core architecture was solid, there were 30+ issues ranging from critical security vulnerabilities to subtle performance drains. This blog post is the story of how I systematically addressed them.
I categorized every issue into four priority levels (P0 through P3) and worked through them in order, starting with the critical fixes.
The Problem: The rate limiter stored client request counts in a Map, but never cleaned up expired entries. Over time, this would cause unbounded memory growth.
```typescript
// Before: Memory leak waiting to happen
const requestCounts = new Map<string, { count: number; resetTime: number }>();
// Map just keeps growing...
```
The Fix: Added a cleanup interval that runs every 60 seconds to remove expired entries:
```typescript
// Cleanup expired entries every 60 seconds to prevent memory leaks
setInterval(() => {
  const now = Date.now();
  for (const [key, data] of requestCounts.entries()) {
    if (now > data.resetTime) {
      requestCounts.delete(key);
    }
  }
}, 60000);
```
Impact: Prevents memory exhaustion in long-running servers with high traffic.
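For context, this is roughly how a fixed-window counter on top of that Map decides whether to let a request through. It's a sketch only; the isAllowed name and the limit, windowMs, and clientKey parameters are illustrative, not Reiatsu's actual API.

```typescript
// Sketch: fixed-window check against the same Map shape shown above.
// Names here (isAllowed, limit, windowMs, clientKey) are illustrative.
function isAllowed(clientKey: string, limit: number, windowMs: number): boolean {
  const now = Date.now();
  const entry = requestCounts.get(clientKey);

  // No entry yet, or the previous window expired: start a fresh window
  if (!entry || now > entry.resetTime) {
    requestCounts.set(clientKey, { count: 1, resetTime: now + windowMs });
    return true;
  }

  // Inside the current window: count the request and enforce the limit
  entry.count += 1;
  return entry.count <= limit;
}
```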
Similar Issue: The cache middleware had the same problem; TTL-based entries were never cleaned up.
The Fix: 30-second cleanup interval for expired cache entries:
```typescript
// Clean up expired cache entries every 30 seconds
setInterval(() => {
  const now = Date.now();
  for (const [key, entry] of cacheStore.entries()) {
    if (now > entry.expiresAt) {
      cacheStore.delete(key);
    }
  }
}, 30000);
```
The Problem: The body parser had no size limit. An attacker could send gigabytes of data and crash the server.
```typescript
// Before: Accept unlimited data
req.on("data", (chunk) => {
  rawBody += chunk.toString(); // No limit!
});
```
The Fix: Added a 10MB limit with proper cleanup:
```typescript
let size = 0;
let exceededLimit = false;
const MAX_SIZE = 10 * 1024 * 1024; // 10MB

req.on("data", (chunk) => {
  size += chunk.length;

  if (size > MAX_SIZE) {
    exceededLimit = true;
    return; // Stop processing
  }

  rawBody += chunk.toString();
});
```
Why not req.destroy()? Destroying the stream can cause race conditions. The exceededLimit flag lets us handle the error gracefully after the stream ends.
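To show how that flag might be consumed, here's a sketch of the matching "end" handler. Responding with 413 and going through ctx.res are assumptions about the surrounding parser, kept consistent with the other snippets in this post.

```typescript
// Sketch: decide after the stream ends instead of destroying it mid-flight.
// The 413 response here is an assumption about the parser's behavior.
req.on("end", () => {
  if (exceededLimit) {
    ctx.res.writeHead(413, { "Content-Type": "text/plain" });
    ctx.res.end("Payload Too Large");
    return;
  }
  // Safe to parse rawBody (JSON, form data, etc.) at this point
});
```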
The Problem: The timeout handler could fire after the request completed, causing errors and resource leaks.
The Fix: Always clear the timeout in a finally block:
```typescript
try {
  await Promise.race([next(), timeoutPromise]);
} finally {
  clearTimeout(timeoutId!); // Always cleanup
}
```
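If you're wondering where timeoutPromise and timeoutId come from, here's a sketch of how the surrounding middleware can be wired up. The (ctx, next) signature and the factory shape are assumptions for illustration, not Reiatsu's exact code.

```typescript
// Sketch: a timeout middleware factory, assuming a (ctx, next) signature.
const timeout = (ms: number) => async (ctx: Context, next: () => Promise<void>) => {
  let timeoutId: NodeJS.Timeout | undefined;

  // Rejects after `ms`; it only wins the race if the handler is too slow
  const timeoutPromise = new Promise<never>((_, reject) => {
    timeoutId = setTimeout(() => reject(new Error("Request timed out")), ms);
  });

  try {
    await Promise.race([next(), timeoutPromise]);
  } finally {
    clearTimeout(timeoutId); // Always cleanup, even if next() threw
  }
};
```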
The asyncHandler utility was just wrapping handlers in a try-catch and re-throwing, which is completely redundant with native async/await:
```typescript
// Before: Unnecessary wrapper
export const asyncHandler = (handler: Handler): Handler => {
  return async (ctx: Context) => {
    try {
      await handler(ctx);
    } catch (error) {
      throw error; // Why?!
    }
  };
};

// After: Simple pass-through with deprecation notice
/**
 * @deprecated This utility is redundant with native async/await.
 * Will be removed in v2.0.0. Use async handlers directly.
 */
export const asyncHandler = (handler: Handler): Handler => {
  return handler;
};
```
The original email regex was way too simple: /^[^\s@]+@[^\s@]+\.[^\s@]+$/
Problems: the old pattern accepted clearly malformed addresses, such as user..name@domain.com.
The new RFC 5322-compliant regex:
```typescript
const emailRegex =
  /^[a-zA-Z0-9.!#$%&'*+/=?^_`{|}~-]+@[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?(?:\.[a-zA-Z0-9](?:[a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?)*$/;
```
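A few spot checks against the new pattern (my own quick verification, not output from Reiatsu's test suite):

```typescript
emailRegex.test("user@example.com");               // true
emailRegex.test("first.last@mail.example.co.uk");  // true
emailRegex.test("no spaces@example.com");          // false: whitespace in the local part
emailRegex.test("user@-bad-.com");                 // false: domain label starts with a hyphen
```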
Also fixed a template literal bug where error messages were evaluated at definition time instead of validation time:
```typescript
// Before: ${len} evaluates immediately (wrong!)
min(len: number, msg = `Must be at least ${len} characters`)

// After: Lazy evaluation
min(len: number, msg?: string) {
  const message = msg || `Must be at least ${len} characters`;
  // ...
}
```
The origin: true preset reflects any origin, effectively disabling CORS protection. I added clear warnings:
```typescript
/**
 * @security WARNING: origin: true reflects ANY requesting origin,
 * effectively disabling CORS protection. Use only in development.
 */
development: {
  origin: true, // Reflects any origin
  credentials: true,
  methods: ["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"],
}
```
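The flip side is pinning the origin explicitly in production. This is only a sketch that mirrors the option shape above; the production preset name and the single-string origin value are assumptions, not Reiatsu's documented API.

```typescript
// Sketch: lock CORS to a known origin instead of reflecting the caller.
// The "production" preset name and single-string origin are assumptions.
production: {
  origin: "https://app.example.com", // Only this origin is allowed
  credentials: true,
  methods: ["GET", "POST", "PUT", "PATCH", "DELETE", "OPTIONS"],
}
```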
The Problem: Middleware added properties to Context via declaration merging, but there was no type-safe way to use them:
```typescript
// User has to just "know" these exist
ctx.user?.sub;
ctx.requestId;
ctx.files;
```
The Solution: Explicit interfaces that compose with Context:
```typescript
export interface AuthContext {
  isAuthenticated: boolean;
  user?: UserPayload;
}

export interface RequestIdContext {
  requestId: string;
}

// Usage with type safety!
const handler = (ctx: Context & AuthContext & RequestIdContext) => {
  console.log(ctx.user?.sub); // Type-safe!
  console.log(ctx.requestId); // Autocomplete works!
};
```
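And for the other half of the pattern, here's a hypothetical middleware that populates one of those properties. The signature and the use of node:crypto's randomUUID are illustrative, not Reiatsu's actual implementation.

```typescript
import { randomUUID } from "node:crypto";

// Hypothetical sketch: a middleware that attaches the property described
// by RequestIdContext. The (ctx, next) signature is an assumption.
const requestId = async (
  ctx: Context & Partial<RequestIdContext>,
  next: () => Promise<void>
) => {
  ctx.requestId = randomUUID(); // Downstream handlers see Context & RequestIdContext
  await next();
};
```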
I enhanced the static file middleware's error handling to distinguish between different failure modes:
```typescript
catch (err: any) {
  if (err.code === "ENOENT") {
    await next(); // File not found, try next middleware
  } else if (err.code === "EACCES" || err.code === "EPERM") {
    ctx.res.writeHead(403, { "Content-Type": "text/plain" });
    ctx.res.end("Forbidden");
  } else if (err.code === "EISDIR") {
    await next(); // Tried to read a directory
  } else {
    // Other I/O errors
    console.error("Static file error:", err);
    ctx.res.writeHead(500);
    ctx.res.end("Internal Server Error");
  }
}
```
The template engine uses new Function() (similar to eval()). I added validation and clear warnings:
```typescript
/**
 * @security WARNING: This function uses `new Function()` which is similar to `eval()`.
 * Only use with trusted template sources. Do NOT use with user-generated content.
 */
export function compile(template: string) {
  // Validate against obvious injection attempts
  if (
    template.includes("require(") ||
    template.includes("import(") ||
    template.includes("process.") ||
    template.includes("global.")
  ) {
    throw new Error("Template contains forbidden code patterns");
  }
  // ...
}
```
The logger was creating log objects unconditionally, even when they might not be used. I changed it to lazy creation:
```typescript
// Before: Always create the object
const logData: LogData = {
  /* ... */
};

// After: Create only when needed
const createLogData = (): LogData => ({
  requestId: ctx.requestId,
  method: method || "UNKNOWN",
  // ... only built when actually logging
});

console.log(formatIncomingRequest(createLogData(), config));
```
This reduces unnecessary object allocations on every request.
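The payoff is clearest once logging is conditional. A sketch, assuming a configurable level (the config.level check is illustrative, not an existing logger option):

```typescript
// Sketch: with a level check in front, the log object is never built
// for requests that won't be logged. `config.level` is an assumed option.
if (config.level !== "silent") {
  console.log(formatIncomingRequest(createLogData(), config));
}
```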
JavaScript has garbage collection, but that doesn't mean you can ignore memory management. Long-running servers with unbounded Map or Set structures will leak memory. Always ask: "When does this data structure get cleaned up?"
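One way to make that question answer itself is to wrap the pattern once and reuse it. A small sketch, not part of Reiatsu:

```typescript
// Sketch: a Map wrapper that owns its own sweep interval, so every consumer
// (rate limiter, cache, sessions) gets cleanup for free.
function createExpiringMap<V>(sweepMs: number) {
  const store = new Map<string, { value: V; expiresAt: number }>();

  const timer = setInterval(() => {
    const now = Date.now();
    for (const [key, entry] of store.entries()) {
      if (now > entry.expiresAt) store.delete(key);
    }
  }, sweepMs);
  timer.unref(); // Don't keep the process alive just for the sweeper

  return {
    set: (key: string, value: V, ttlMs: number) =>
      store.set(key, { value, expiresAt: Date.now() + ttlMs }),
    get: (key: string) => {
      const entry = store.get(key);
      return entry && Date.now() <= entry.expiresAt ? entry.value : undefined;
    },
    stop: () => clearInterval(timer),
  };
}
```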
Small oversights compound into critical vulnerabilities:
- origin: true = CORS bypass
- new Function() = code injection risk

Security isn't one big thing; it's hundreds of small things done right.
Spending time on proper TypeScript interfaces pays dividends. The Context & AuthContext pattern gives users compile-time checks and editor autocomplete for the properties middleware adds to the context.
Writing tests for the critical fixes revealed edge cases I hadn't considered. Tests aren't just for catching bugs; they're for understanding your own code.
Adding @security JSDoc warnings, deprecation notices, and usage examples makes the framework safer by default. Good documentation prevents mistakes before they happen.
The framework is in much better shape, but there's more to do: the remaining P2 issues, a set of P3 improvements, and a few longer-term architecture ideas.
Reiatsu v1.2.1 is live on npm:
```bash
npm install reiatsu
```
The security and performance improvements are transparent: existing code keeps working, just better and safer.
Check out the GitHub repo for the full source code and documentation.
This exercise reinforced something important: code review isn't about finding flaws; it's about continuous improvement. Every issue I fixed made me a better developer. Every test I wrote deepened my understanding.
Building from first principles means you own every line of code. That's both empowering and humbling. You learn by doing, breaking, fixing, and iterating.
If you're building your own framework or library, I highly recommend doing a systematic code review like this. Document everything. Fix issues in priority order. Write tests. Learn from the process.
The code will get better, and so will you.