# ory-copilot
@chilly-king-10285 Hi. Do you have any Hydra performance benchmark using Postgres DB? We would like to tune the DB configuration since current performance is not reaching our target.
A message from kapa.ai
Based on the knowledge sources, Ory doesn't publish performance benchmarks for Hydra on PostgreSQL. The documentation explains that database benchmarks aren't provided because "performance differs a lot between deployments due to request latency or database configuration" [Performance and benchmarks](https://www.ory.sh/docs/ecosystem/benchmark). For tuning, you can adjust Hydra's connection pool through DSN parameters such as `max_conns` and `max_idle_conns` [Database setup and configuration](https://www.ory.sh/docs/self-hosted/deployment).
@chilly-king-10285 The CPU core mentioned in the DSN setup document: is it the application CPU or the DB CPU?
A message from kapa.ai
Based on the knowledge sources, the CPU core reference in the DSN configuration parameters refers to the application CPU (the server running Ory Hydra or other Ory services), not the database CPU. The documentation states:

> - `max_conns` (number): Sets the maximum number of open connections to the database. Defaults to the number of CPU cores times 2.
> - `max_idle_conns` (number): Sets the maximum number of connections in the idle connection pool. Defaults to the number of CPU cores.

[Database setup and configuration](https://www.ory.sh/docs/self-hosted/deployment)

These parameters control how many connections the Ory application will establish to the database. The default values are calculated from the CPU cores of the machine running the Ory service, which is a common practice for sizing connection pools. The application manages these connections based on its own processing capacity, which is why it uses its own CPU count as the reference point. When you configure these parameters in your DSN string, you're telling the Ory application how to manage its connection pool to the PostgreSQL database.
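For illustration only, this is what a DSN that sets these pool parameters explicitly could look like; the host, credentials, and numbers below are placeholders chosen for an assumed 8-core application server, not recommended values:

```
postgres://hydra:secret@postgres.internal:5432/hydra?sslmode=require&max_conns=16&max_idle_conns=8
```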
@chilly-king-10285 If we do not provide any parameters in the DSN, what values would be used?
A message from kapa.ai
Based on the knowledge sources, if you don't provide any specific connection parameters in the DSN for Ory services, the following default values would be used.

For PostgreSQL:

- `max_conns`: Defaults to the number of CPU cores times 2
- `max_idle_conns`: Defaults to the number of CPU cores
- `max_conn_lifetime`: No default mentioned (likely unlimited/0)
- `max_conn_idle_time`: No default mentioned

As mentioned in the documentation:

> - `max_conns` (number): Sets the maximum number of open connections to the database. Defaults to the number of CPU cores times 2.
> - `max_idle_conns` (number): Sets the maximum number of connections in the idle connection pool. Defaults to the number of CPU cores.

[Database setup and configuration](https://www.ory.sh/docs/self-hosted/deployment)

The same defaults apply for MySQL [MySQL](https://www.ory.sh/docs/self-hosted/deployment#mysql).

These defaults are based on the CPU cores of the machine running the Ory service, not the database server. The system will automatically calculate these values from the available CPU cores on your application server if you don't explicitly set them in your DSN string.
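As a concrete illustration of those defaults, assume a hypothetical application server with 4 CPU cores and a DSN that carries no pool parameters at all; the effective pool sizes then follow directly from the documented rules:

```
# DSN with no pool parameters (host and credentials are placeholders):
postgres://hydra:secret@postgres.internal:5432/hydra?sslmode=require

# Effective pool settings on a 4-core application server, per the quoted defaults:
#   max_conns      = 8   (CPU cores times 2)
#   max_idle_conns = 4   (CPU cores)
```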
@chilly-king-10285 Can you give me a good sample Postgres configuration for a high-load system handling around 800-900 logins per second, where each login potentially involves multiple Hydra token calls (for client credentials validation)? I aim to keep wait events like LWLock:BufferContent and IO:XactSync very low, and average response times from Hydra are expected to be less than 4 seconds.
A message from kapa.ai
Based on the knowledge sources provided, there isn't specific information about an optimal PostgreSQL configuration for Hydra at the scale you're describing (800-900 logins per second). The documentation explicitly states that they don't provide database benchmarks because "performance differs a lot between deployments due to request latency or database configuration" [Performance and benchmarks](https://www.ory.sh/docs/ecosystem/benchmark). However, I can provide some general guidance based on the available information:

1. For Hydra's connection pool settings, you should consider:
   - Setting `max_conns` higher than the default (CPU cores × 2) to handle your high load
   - Adjusting `max_idle_conns` to maintain enough warm connections
   - Setting appropriate `max_conn_lifetime` and `max_conn_idle_time` values to ensure connection recycling

   Example DSN with these parameters:

   ```
   postgres://user:password@host:port/database?max_conns=300&max_idle_conns=100&max_conn_lifetime=5m&max_conn_idle_time=1m
   ```

2. For high-load environments, there are known issues that might affect performance:
   - The `GetOrGenerateKeys` function can cause performance degradation under high load due to its locking mechanism [GetOrGenerateKeys locking issue](https://github.com/ory/hydra/issues/3863)
   - Competing refresh token requests can cause concurrent update issues on PostgreSQL [Competing refresh token requests issue](https://github.com/ory/hydra/issues/3604)

3. For production deployments, Ory recommends using managed database solutions:

   > By the way, we do not recommend deploying databases using Docker in production. Use a managed solution like Amazon RDS or Google Cloud SQL.

   [Run your own OAuth2 Server](https://www.ory.sh/blog/run-oauth2-server-open-source-api-security)

4. For client credentials validation, be aware that Hydra uses BCrypt, which is CPU-intensive by design:

   > Ory Hydra uses BCrypt to obfuscate secrets of OAuth 2.0 Clients. When using flows such as the OAuth 2.0 Client Credentials Grant, Ory Hydra validates the client credentials using BCrypt which causes (by design) CPU load.

   [Performance and benchmarks](https://www.ory.sh/docs/ecosystem/benchmark)

For your specific requirements around LWLock:BufferContent and IO:XactSync, the knowledge sources don't provide specific PostgreSQL tuning parameters. Given the scale you're targeting, you might want to:

1. Consider horizontal scaling of Hydra instances
2. Reach out to Ory directly for enterprise-level support
3. Consult with a PostgreSQL performance expert for database-specific optimizations

The knowledge sources don't contain enough specific information to provide a complete PostgreSQL configuration for your high-load use case.
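Since the knowledge sources don't cover PostgreSQL server tuning, the fragment below is only a generic postgresql.conf sketch based on common PostgreSQL practice rather than Ory guidance; every value is an assumption that needs to be validated against your own hardware, RDS/Aurora parameter groups, and observed wait events:

```
# Generic starting points for a write-heavy token workload (assumptions, not Ory-provided values)
max_connections = 500          # should comfortably exceed the sum of max_conns across all Hydra replicas
shared_buffers = 16GB          # ~25% of RAM is a common rule of thumb; undersizing can increase buffer contention
effective_cache_size = 48GB    # planner hint, roughly the size of the OS page cache
wal_buffers = 64MB             # larger WAL buffers help busy commit paths
checkpoint_timeout = 15min     # spread checkpoints out to smooth I/O spikes
max_wal_size = 8GB
synchronous_commit = on        # relaxing this reduces commit-sync waits but risks losing the latest commits on crash
```

Whether these help with LWLock:BufferContent and IO:XactSync specifically depends on your storage and workload, so treat them as hypotheses to test rather than a tuned configuration.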
@chilly-king-10285 Could Hydra separate the DB into a writer and a reader?
A message from kapa.ai
Based on the knowledge sources provided, there is no explicit information about Hydra supporting a read/write split database configuration with separate writer and reader instances.

The documentation does mention that Ory services are designed to be scalable:

> Ory services are running in high-scale production environments that handle millions of requests per day. To scale Ory, spin up another VM, Docker container, or pod of Ory Kratos, Ory Hydra or Keto with the same configuration. Ory scales effortlessly to thousands of pods without any additional work.

[Scalability](https://www.ory.sh/docs/self-hosted/operations/scalability)

The documentation also suggests using SQL load balancing for multiple SQL instances:

> If you use multiple SQL instances use HAProxy or similar technology for SQL load balancing.

[Scalability](https://www.ory.sh/docs/self-hosted/operations/scalability)

However, there's no specific mention of Hydra natively supporting a reader/writer split configuration at the application level. The documentation focuses more on horizontal scaling of the application itself rather than implementing read/write splitting at the database level.

For high-load environments, the documentation suggests using managed database solutions like Amazon RDS or Google Cloud SQL, which do offer read replica capabilities, but this would be configured at the database infrastructure level rather than within Hydra itself.

If you're looking to implement a read/write split for performance optimization in a high-load environment, you might want to reach out directly to Ory, especially regarding their Enterprise License offerings which provide additional features for high-scale deployments.
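To make the HAProxy suggestion above concrete, here's a minimal sketch of fronting PostgreSQL with HAProxy while keeping a single Hydra DSN; hostnames and ports are assumptions, and because Hydra has no native read/write split, the replica is configured only as a failover target rather than a read endpoint:

```
# Minimal HAProxy TCP passthrough for PostgreSQL (illustrative hostnames)
listen postgres
    bind *:5432
    mode tcp
    option tcp-check
    server pg-primary pg-primary.internal:5432 check
    server pg-replica pg-replica.internal:5432 check backup   # failover only; plain TCP checks don't detect the primary role
```

Hydra's DSN would then point at the HAProxy address (for example `postgres://hydra:secret@haproxy.internal:5432/hydra`), and primary promotion and role detection would still need to be handled by your database tooling.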