# talk-keto
s
Hi, I already opened a discussion on Keto's GitHub, but I figured it might be easier and quicker to raise my problem directly with the community: https://github.com/ory/keto/discussions/1291. It's about Keto's response time when performing a check against an Ory Network Project, which is always over 200 ms
p
Hi @shy-postman-43226 This seems quite high since I see you are located in Italy. /cc @famous-art-85498 @refined-kangaroo-48640
s
Umh, aren't Ory Network clusters multi-region? Our infrastructure is deployed in AWS eu-west-1, and I fear the latency would not change when testing from the cloud... but I can give it a shot
r
Hey @shy-postman-43226. We’re very close to enabling Keto multi-region in production. Watch out for an announcement around that. Beyond that, improving latency for Keto and other APIs is a goal for Q2!
s
thank you! I'll wait for news then!
e
@refined-kangaroo-48640 is there a way to run Ory Keto as a sidecar, similar to how Aserto achieves its low latency?
r
Since the relation tuples are stored in our own database, a sidecar would incur even more latency: you typically need several round trips to the database to answer a Keto query.
We’re actively working on it 🙂
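To illustrate the point about multiple round trips, here is a toy Zanzibar-style check loop. The tuple layout (namespace, object, relation, subject) follows Keto's relation-tuple model, but the in-memory "database", the names, and the query counter are purely illustrative, not Keto's actual implementation:

```python
# Each tuple: (namespace, object, relation, subject). A subject is either a
# plain id ("user:alice") or a subject set ("groups:devs#member").
TUPLES = {
    ("files", "readme", "viewer", "groups:devs#member"),
    ("groups", "devs", "member", "user:alice"),
}

queries = 0  # counts simulated database round trips


def db_lookup(namespace, obj, relation):
    """One simulated round trip: all subjects for (namespace, object, relation)."""
    global queries
    queries += 1
    return [s for (ns, o, r, s) in TUPLES if (ns, o, r) == (namespace, obj, relation)]


def check(namespace, obj, relation, subject_id):
    """Is subject_id allowed? Expands subject sets recursively, one lookup each."""
    for subject in db_lookup(namespace, obj, relation):
        if subject == subject_id:
            return True
        if "#" in subject:  # subject set, e.g. "groups:devs#member" -> recurse
            ref, sub_relation = subject.split("#", 1)
            sub_ns, sub_obj = ref.split(":", 1)
            if check(sub_ns, sub_obj, sub_relation, subject_id):
                return True
    return False


allowed = check("files", "readme", "viewer", "user:alice")
print(allowed, queries)  # one check via group membership costs 2 round trips
```

Even this tiny example needs two lookups because "alice is a viewer" is only reachable through the group's `member` relation; a co-located sidecar that still queries a remote tuple store would pay that network cost on every hop.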
e
@refined-kangaroo-48640 latency in reads or latency in writes? Because in our scenario, updates happen rarely and reads need to be as fast as possible. So the Aserto model works well for us, given that the reads are low latency and the updates are eventually consistent. Like, we're not as concerned with Zookies / immediate consistency. That being said, OPL is a big plus. So I'm hoping that the previous functionality (regarding eventual consistency) is achievable now. If it's not available, specific roadmap dates would be good to know. Btw, this would be for a paid license.
r
We’re targeting optimized read latency first and foremost. Permission writes will generally take many times longer than reads (typically several hundred milliseconds for writes), but we’re thinking that’s OK.
Regrettably, I can’t share any definitive dates. But it’s in our planning for Q2 (which starts in a couple of days :))
e
Okay. But to be clear, is a sidecar possible with Ory?
r
Can you elaborate on what exactly you’re trying to achieve with a sidecar?
e
Sure. My hope is that a sidecar would give near-zero latency, since the consumer and provider are co-located in the same Kubernetes Pod.
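For reference, a self-hosted Keto sidecar along those lines might look like the sketch below. The image and ports follow open-source Keto's defaults (read API on 4466, write API on 4467); the Pod name, app image, env var, and ConfigMap are hypothetical, and this is not an Ory Network feature:

```yaml
# Illustrative only: self-hosted Keto running beside the app in one Pod,
# so permission checks go over localhost instead of the public network.
apiVersion: v1
kind: Pod
metadata:
  name: app-with-keto
spec:
  containers:
    - name: app
      image: my-app:latest              # hypothetical application image
      env:
        - name: KETO_READ_URL
          value: http://localhost:4466  # Keto's default read-API port
    - name: keto
      image: oryd/keto:latest
      args: ["serve", "--config", "/etc/keto/keto.yml"]
      ports:
        - containerPort: 4466           # read API
        - containerPort: 4467           # write API
      volumeMounts:
        - name: keto-config
          mountPath: /etc/keto
  volumes:
    - name: keto-config
      configMap:
        name: keto-config               # assumed ConfigMap holding keto.yml
```

Note this only removes the app-to-Keto network hop; as mentioned above, Keto's own trips to its tuple database would still dominate unless the data is local too.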
r
We’ve discussed this internally and have some cool ideas on how to improve latency substantially and maybe support a true sidecar model.