Configure Piiano Vault for performance in large-scale systems
Learn how to configure Piiano Vault when you require high performance in large-scale systems
Vault supports a massive scale of hundreds of millions of records and thousands of requests per second. Configuring such systems is not covered by this guide as it requires a comprehensive understanding of the system's parameters. For such systems, talk to us. We're here to help you with the design.
Baseline performance
The hosted and self-hosted implementations of Vault perform well for most systems without any configuration changes. However, for self-hosted installations, ensure you meet the hardware prerequisite requirements.
Out of the box, both options support:
- Storing hundreds of thousands of records
- Servicing tens of requests per second
- Average response time of under 10ms
This guide focuses on the configuration required for larger self-hosted systems with higher demands. The recommendations typically scale to tens of millions of records with hundreds of requests per second. For the hosted solution, contact us to tune your Vault.
Tuning for scale
Any performant system requires multiple instances of the Vault container. The number depends on your requirements, the number of zones you want to span, and several other factors. All Vault instances must be configured identically and connect to the same database.
Vault parameters
Configuration: Increase the number of connections to the database. The default is 16 connections, which is recommended for a single core. Multiply this by the number of Vault instances. For example, for 4 instances, add these environment variables:
PVAULT_DB_MAX_OPEN_CONNS=64
PVAULT_DB_MAX_IDLE_CONNS=64
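The calculation above can be sketched as a small shell snippet. The `INSTANCES` and `CONNS_PER_INSTANCE` variables are illustrative names, not Vault settings; only the two exported `PVAULT_*` variables are read by Vault:

```shell
# Sketch: derive the connection-pool settings for a fleet of Vault instances,
# assuming the default of 16 connections per single-core instance.
INSTANCES=4
CONNS_PER_INSTANCE=16
TOTAL=$((INSTANCES * CONNS_PER_INSTANCE))

# These are the environment variables Vault reads.
export PVAULT_DB_MAX_OPEN_CONNS="$TOTAL"
export PVAULT_DB_MAX_IDLE_CONNS="$TOTAL"
echo "PVAULT_DB_MAX_OPEN_CONNS=$PVAULT_DB_MAX_OPEN_CONNS"
```

Adjust `INSTANCES` to match your deployment; all instances must use the same values.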
Vault Instance: Use at least 4 cores and 8 GB of RAM and adjust according to load. For example, an appropriate AWS instance type is `c6gn.xlarge`.
Database parameters
Database hardware
Use a database instance type with more memory, such as 16 GB or 32 GB. Increase the number of cores from two to four and adjust according to load, but only if required.
AWS specific
When running in AWS:
- Use AWS Aurora Postgres instead of AWS RDS Postgres.
- Use the `db.r5.xlarge` instance type or similar.
Parameters
Increase the working memory parameter to 16MB, which works well for 1,000 concurrent requests:
- Set it with `SET work_mem TO '16MB';`
- Verify that the setting is applied with `SHOW work_mem;`

For AWS, this requires creating a custom parameter group to persist the change.
When running in AWS, these parameters are automatically configured. In other cases, configure:
- `max_connections` to at least `PVAULT_DB_MAX_OPEN_CONNS` + 5 (ensures non-Vault users can still connect)
- `shared_buffers` to database total memory / 4
- `effective_cache_size` to database total memory * 0.75
- `maintenance_work_mem` to database total memory / 20
- `work_mem` to database total memory / (4 * `max_connections`)
- `max_worker_processes` to the database's number of CPUs
- `max_parallel_workers` to the database's number of CPUs
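The sizing rules above can be computed as a quick shell sketch. The hardware figures (16 GB of memory, 4 vCPUs) and the pool size of 64 are illustrative assumptions; substitute your own:

```shell
# Sketch: compute recommended Postgres settings from the rules above,
# assuming a 16 GB / 4 vCPU database instance and PVAULT_DB_MAX_OPEN_CONNS=64.
TOTAL_MEM_MB=16384
CPUS=4
PVAULT_DB_MAX_OPEN_CONNS=64

MAX_CONNECTIONS=$((PVAULT_DB_MAX_OPEN_CONNS + 5))     # leave headroom for non-Vault users
SHARED_BUFFERS_MB=$((TOTAL_MEM_MB / 4))               # total memory / 4
EFFECTIVE_CACHE_SIZE_MB=$((TOTAL_MEM_MB * 3 / 4))     # total memory * 0.75
MAINTENANCE_WORK_MEM_MB=$((TOTAL_MEM_MB / 20))        # total memory / 20
WORK_MEM_MB=$((TOTAL_MEM_MB / (4 * MAX_CONNECTIONS))) # total memory / (4 * max_connections)

echo "max_connections=$MAX_CONNECTIONS"
echo "shared_buffers=${SHARED_BUFFERS_MB}MB"
echo "effective_cache_size=${EFFECTIVE_CACHE_SIZE_MB}MB"
echo "maintenance_work_mem=${MAINTENANCE_WORK_MEM_MB}MB"
echo "work_mem=${WORK_MEM_MB}MB"
echo "max_worker_processes=$CPUS"
echo "max_parallel_workers=$CPUS"
```

Apply the resulting values in `postgresql.conf` or, on AWS, through a custom parameter group.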
Other considerations
- Run the Vault with `PVAULT_DEVMODE` set to `false`. When set to `true`, additional internal validations and checks slightly slow down responses. These extra checks are for internal debugging purposes and are irrelevant to production use.
- With large numbers of requests per second, consider reducing the log level by setting `PVAULT_LOG_LEVEL` to `warn` instead of `info`.
- With large numbers of requests per second, it is recommended to limit the timeout of every request to 5 seconds by setting `PVAULT_SERVICE_TIMEOUT_SECONDS` to `5`. This limits the negative impact of any potentially slow query.
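As a minimal sketch, the three settings above can be exported together before starting the Vault container; this is not a complete production configuration:

```shell
# Sketch: production-oriented environment settings from the list above.
export PVAULT_DEVMODE=false               # disable internal debug validations
export PVAULT_LOG_LEVEL=warn              # reduce logging overhead under load
export PVAULT_SERVICE_TIMEOUT_SECONDS=5   # cap the impact of slow queries
echo "PVAULT_DEVMODE=$PVAULT_DEVMODE PVAULT_LOG_LEVEL=$PVAULT_LOG_LEVEL"
```

Pass the same variables to every Vault instance so the fleet behaves identically.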