
Easy in-memory caching for AWS Lambda
Problem
When an AWS Lambda function is invoked, AWS either creates a new execution environment or reuses an existing one: after a request has been processed, the environment is not purged immediately but kept alive for a while and reused for subsequent invocations. AWS therefore recommends taking advantage of this reuse to make Lambda functions more performant. But how does that work in practice?
Solution
Put simply, a Node.js Lambda consists of the handler function and the surrounding global (module-level) JavaScript context. The handler is executed on every invocation, so values local to it do not survive between calls and cannot serve as a cache. The handler does, however, have access to the global context: any variables or objects stored there remain available to subsequent invocations for as long as the execution environment exists. Combining this behavior with a dependency injection solution such as tsyringe gives you instance caching for database/HTTP clients "for free", and static configuration can be cached in memory just as easily.
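Before bringing in a DI container, the core mechanism can be shown with plain module scope. The following is a minimal sketch (the handler name, config shape, and loadConfig helper are illustrative, not part of any AWS API): anything declared outside the handler survives for as long as AWS reuses the execution environment.

```typescript
// Module scope: lives for the lifetime of the execution environment,
// i.e. it is shared across "warm" invocations of this Lambda.
let cachedConfig: { greeting: string } | undefined;

// Hypothetical expensive initialization (e.g. reading remote config).
function loadConfig(): { greeting: string } {
  return { greeting: "hello" };
}

// The handler runs on every invocation, but initializes only once:
// the first (cold) call fills the cache, later calls reuse it.
export async function handler(): Promise<{ greeting: string }> {
  if (!cachedConfig) {
    cachedConfig = loadConfig();
  }
  return cachedConfig;
}
```

Note that this cache is per execution environment: concurrent invocations may run in separate environments, each with its own copy.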
Example
// 1. A singleton client: created once per runtime, reused across invocations.
import "reflect-metadata"; // required once for tsyringe's decorators
import { singleton } from "tsyringe";

@singleton()
class HttpClient {}

// 2. A cache object registered with instanceCachingFactory: the factory
//    runs only once per runtime, so every injection sees the same object.
import { container, instanceCachingFactory } from "tsyringe";

export interface CoffeeCache {
  configs?: CoffeeConfig[]; // CoffeeConfig is defined elsewhere
}

container.register("CoffeeCache", {
  useFactory: instanceCachingFactory<CoffeeCache>(() => ({
    /* "configs" key must be filled later */
  })),
});

// 3. The handler class receives the cached instance via constructor injection.
import { inject } from "tsyringe";

class CoffeeHandler {
  constructor(@inject("CoffeeCache") private cache: CoffeeCache) {}
}
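To make the caching behavior above tangible without the library, here is a dependency-free sketch of the instance-caching-factory pattern (the names `instanceCaching` and `CoffeeCacheLike` are illustrative, not tsyringe APIs): the wrapped factory runs at most once, and every later resolve returns the same instance.

```typescript
// Dependency-free sketch of an instance-caching factory: the inner
// factory runs on the first resolve only; all later resolves return
// the same cached instance, mirroring tsyringe's instanceCachingFactory.
function instanceCaching<T>(factory: () => T): () => T {
  let instance: T | undefined;
  return () => {
    if (instance === undefined) {
      instance = factory(); // created on first resolve only
    }
    return instance;
  };
}

// Simplified stand-in for the CoffeeCache from the example above.
interface CoffeeCacheLike {
  configs?: string[];
}

// Every call to resolveCache() within this runtime yields the same object.
const resolveCache = instanceCaching<CoffeeCacheLike>(() => ({}));
```

Because the factory's closure lives in module scope, the cached instance survives exactly as long as the Lambda execution environment does.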

Further Aspects
- Documentation of AWS Lambda execution context: https://docs.aws.amazon.com/lambda/latest/dg/runtimes-context.html
- AWS Lambda best practices: https://docs.aws.amazon.com/lambda/latest/dg/best-practices.html
- tsyringe, the DI library used in the example: https://github.com/microsoft/tsyringe
---
Author: Robert Gruner / Software Engineer / Office Leipzig
Download Toilet Paper #147: Easy in-memory caching for AWS Lambda (PDF)
Want to write the next ToiletPaper? Apply at jambit!