# talk-keto
l
I have an architectural question, on which any opinion is more than welcome. It is strongly related to Keto, but it is not a technical issue, so if this is not the appropriate channel, please let me know. 🙂 I have decided to use Keto for authorization, together with Oathkeeper as a decision API and Kratos for authentication. I have designed a modest application in a microservices style, and I’ve been able to protect the endpoints with all three of these Ory components. My question comes from this scenario: I have an endpoint `api.example.com/students`. In Keto, I check whether the logged-in user has the appropriate role to make a GET request to this endpoint, and if yes, Oathkeeper lets the request through. When the request reaches the service, the service is supposed to return the list of all students the logged-in user HAS ACCESS TO. Right now I’m not protecting this part through Keto, but I’d love to do so; the reason I don’t is that I’m not sure how. Should the `students` service communicate directly with Keto and ask for the full list of students the logged-in user has access to? Doesn’t that affect performance significantly?
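For reference, here is a minimal sketch of that per-endpoint check against Keto’s REST read API. It assumes Keto ≥ v0.6 listening on `localhost:4466`, and a `students` namespace with a `view` relation; all of those names are illustrative assumptions, not taken from the thread.

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// checkAccess asks Keto's read API whether `subject` has `relation` on
// `object` in `namespace`. The endpoint path follows Keto's REST read API;
// the namespace/relation/object names are illustrative assumptions.
func checkAccess(ketoReadURL, namespace, object, relation, subject string) (bool, error) {
	q := url.Values{}
	q.Set("namespace", namespace)
	q.Set("object", object)
	q.Set("relation", relation)
	q.Set("subject_id", subject)

	resp, err := http.Get(ketoReadURL + "/relation-tuples/check?" + q.Encode())
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()

	// Keto answers {"allowed": true|false}.
	var out struct {
		Allowed bool `json:"allowed"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return false, err
	}
	return out.Allowed, nil
}

func main() {
	allowed, err := checkAccess("http://localhost:4466", "students", "students-endpoint", "view", "user:alice")
	if err != nil {
		panic(err)
	}
	fmt.Println("allowed:", allowed)
}
```

In the setup described above, Oathkeeper would perform the equivalent call at the gateway, so the service itself only needs Keto for the finer-grained, per-student filtering.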
s
There is a small guide on how this can be done right now: https://www.ory.sh/docs/keto/guides/list-api-display-objects
But there are also open issues tracking improvements to this.
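As I read the linked guide, the idea is to ask Keto for every object the subject has the relevant relation on, then constrain the service’s own database query to those IDs (e.g. `WHERE id IN (...)`) instead of checking each student row individually. A sketch, assuming the same read API, namespace, and relation as above:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
)

// listAccessibleObjects pages through Keto's read API and collects every
// object the subject has the given relation on. The service can then use
// the returned IDs to filter its own query.
func listAccessibleObjects(ketoReadURL, namespace, relation, subject string) ([]string, error) {
	var objects []string
	pageToken := ""
	for {
		q := url.Values{}
		q.Set("namespace", namespace)
		q.Set("relation", relation)
		q.Set("subject_id", subject)
		if pageToken != "" {
			q.Set("page_token", pageToken)
		}
		resp, err := http.Get(ketoReadURL + "/relation-tuples?" + q.Encode())
		if err != nil {
			return nil, err
		}
		var out struct {
			RelationTuples []struct {
				Object string `json:"object"`
			} `json:"relation_tuples"`
			NextPageToken string `json:"next_page_token"`
		}
		err = json.NewDecoder(resp.Body).Decode(&out)
		resp.Body.Close()
		if err != nil {
			return nil, err
		}
		for _, t := range out.RelationTuples {
			objects = append(objects, t.Object)
		}
		if out.NextPageToken == "" {
			return objects, nil
		}
		pageToken = out.NextPageToken
	}
}

func main() {
	ids, err := listAccessibleObjects("http://localhost:4466", "students", "view", "user:alice")
	if err != nil {
		panic(err)
	}
	fmt.Println(ids)
}
```

That costs one round trip per page of tuples rather than one check per student, which is where the performance worry usually comes from.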
l
I’ve been looking into it, into some other articles, and into the presentations you and others gave at Ory Summit. Currently, our main concern is performance. I’ve been reading the brief documentation that exists on performance and complexity, but I’m still not sure, so I’m also planning to run a load test.
I tried to visualise one of the use cases I want to have in Keto. I have tested it and it works. However, I’m not sure whether I’ll run into latency problems in the future.
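To make such a use case concrete, the tuples behind it could be seeded through Keto’s write API. A sketch assuming the write API on `localhost:4467`; the `/admin/relation-tuples` path is the one in newer releases (older ones expose it as `/relation-tuples`), and the namespace, relation, and IDs are again my assumptions:

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// tuple is the JSON body Keto's write API expects for one relation tuple.
type tuple struct {
	Namespace string `json:"namespace"`
	Object    string `json:"object"`
	Relation  string `json:"relation"`
	SubjectID string `json:"subject_id"`
}

// writeTuple inserts one relation tuple, e.g. "user:alice may view student 42".
func writeTuple(ketoWriteURL string, t tuple) error {
	body, err := json.Marshal(t)
	if err != nil {
		return err
	}
	req, err := http.NewRequest(http.MethodPut, ketoWriteURL+"/admin/relation-tuples", bytes.NewReader(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode >= 300 {
		return fmt.Errorf("keto returned %s", resp.Status)
	}
	return nil
}

func main() {
	err := writeTuple("http://localhost:4467", tuple{
		Namespace: "students",
		Object:    "42",
		Relation:  "view",
		SubjectID: "user:alice",
	})
	fmt.Println(err)
}
```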
s
It would be awesome if you could run some load tests and share your results 👍 both what worked and what did not. Remember that latency depends heavily on your database as well, so ideally run the test at the expected req/s against a production-grade DB instance.
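For a quick starting point before reaching for a dedicated tool, a throwaway load test against the check endpoint can be as simple as the sketch below; the target URL, concurrency, and request count are all assumptions to adjust.

```go
package main

import (
	"fmt"
	"net/http"
	"sort"
	"sync"
	"time"
)

// Fires `total` check requests at Keto with `workers` concurrent goroutines
// and prints latency percentiles. Point `target` at whatever check your
// gateway actually performs.
func main() {
	const (
		workers = 50
		total   = 5000
	)
	target := "http://localhost:4466/relation-tuples/check" +
		"?namespace=students&object=42&relation=view&subject_id=user:alice"

	jobs := make(chan struct{}, total)
	for i := 0; i < total; i++ {
		jobs <- struct{}{}
	}
	close(jobs)

	var mu sync.Mutex
	latencies := make([]time.Duration, 0, total)

	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for range jobs {
				start := time.Now()
				resp, err := http.Get(target)
				if err != nil {
					continue // count only successful round trips
				}
				resp.Body.Close()
				mu.Lock()
				latencies = append(latencies, time.Since(start))
				mu.Unlock()
			}
		}()
	}
	wg.Wait()

	if len(latencies) == 0 {
		fmt.Println("no successful requests")
		return
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	for _, p := range []float64{0.50, 0.95, 0.99} {
		idx := int(p * float64(len(latencies)-1))
		fmt.Printf("p%.0f: %v\n", p*100, latencies[idx])
	}
}
```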