Configure Piiano Vault for performance in large-scale systems
Learn how to configure Piiano Vault when you require high performance in large-scale systems
Vault supports a massive scale of hundreds of millions of records and thousands of requests per second. Configuring such systems is not covered by this guide as it requires a comprehensive understanding of the system's parameters. For such systems, talk to us. We're here to help you with the design.
The hosted and self-hosted implementations of Vault are performant for most systems without any configuration adjustments. However, for self-hosted installations, ensure you meet the hardware prerequisite requirements.
Out of the box, both options support:
- Storing hundreds of thousands of records
- Servicing tens of requests per second
- Average response time of under 10ms
This guide focuses on the configuration required for larger self-hosted systems with higher demands. The recommendations typically scale to tens of millions of records with hundreds of requests per second. For the hosted solution, contact us to tune your Vault.
Tuning for scale
Any performant system requires multiple instances of the Vault container. The number depends on your requirements, the number of zones you want to span, and several other factors. All Vault instances must be configured identically and connect to the same database.
Configuration: Increase the number of connections to the database. The default is 16 connections, which is recommended for a single core. Multiply this by the number of Vault instances. For example, for 4 instances, set the `PVAULT_DB_MAX_OPEN_CONNS` environment variable to 64.
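A minimal sketch of the resulting setting, assuming the variable is passed directly to each container (your deployment tooling may differ, and the original example may include additional variables):

```shell
# 16 connections (the single-core default) * 4 Vault instances = 64.
# Set identically on every Vault instance.
export PVAULT_DB_MAX_OPEN_CONNS=64
```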
Vault Instance: Use at least 4 cores and 8 GB of RAM, and adjust according to load. For example, choose an AWS instance type that meets these specifications.
Database: Use an instance type with more memory, such as 16 GB or 32 GB. Increase the number of cores from two to four, and adjust according to load, but only if required.
When running in AWS:
- Use AWS Aurora Postgres instead of AWS RDS Postgres.
- Use the `db.r5.xlarge` instance type or similar.
Increase the working memory parameter, `work_mem`, to 16MB, which works well for 1,000 concurrent requests:
- Set it with `SET work_mem TO '16MB';`
- Verify that the setting is applied with `SHOW work_mem;`
For AWS, this requires the creation of a custom parameter group to persist the change.
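For a self-managed Postgres outside AWS, one way to persist the setting is `ALTER SYSTEM` (standard Postgres, shown here as a sketch rather than Vault-specific guidance):

```sql
-- Persist work_mem across sessions on a self-managed Postgres;
-- on AWS, use a custom parameter group instead.
ALTER SYSTEM SET work_mem = '16MB';
SELECT pg_reload_conf();  -- reload the configuration without a restart
```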
When running in AWS, these parameters are automatically configured. In other cases, configure:
- `max_connections` to at least `PVAULT_DB_MAX_OPEN_CONNS` + 5 (ensures non-Vault users can still connect)
- `shared_buffers` to database total memory / 4
- `effective_cache_size` to database total memory * 0.75
- `maintenance_work_mem` to database total memory / 20
- `work_mem` to database total memory / (4 * `max_connections`)
- `max_worker_processes` to the database's number of CPUs
- `max_parallel_workers` to the database's number of CPUs
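To make the arithmetic concrete, here is a hypothetical `postgresql.conf` fragment for a self-managed database with 32 GB of memory, 8 CPUs, and `PVAULT_DB_MAX_OPEN_CONNS` set to 64; the sizing is an illustration of the rules above, not a recommendation:

```ini
max_connections = 69          # 64 + 5
shared_buffers = 8GB          # 32 GB / 4
effective_cache_size = 24GB   # 32 GB * 0.75
maintenance_work_mem = 1638MB # 32768 MB / 20
work_mem = 118MB              # 32768 MB / (4 * 69), rounded down
max_worker_processes = 8      # number of CPUs
max_parallel_workers = 8      # number of CPUs
```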
- Run the Vault with its internal checks option set to `false`. When set to `true`, some additional internal validations and checks slightly slow down the responses. These extra checks are for internal debugging purposes and are irrelevant to production use.
- With large numbers of requests per second, consider reducing the log level to cut logging overhead.