# talk-oathkeeper
q
Hi, currently running Oathkeeper in production Kubernetes. Unfortunately, Oathkeeper's memory usage seems to climb steadily until the pods are OOM-killed and restart. Is this a known issue, or is there some way of troubleshooting why this is occurring? Happy to say the other Ory services (Hydra, Kratos, Keto) have steady and minimal CPU/memory usage, but I am somewhat concerned about Oathkeeper's behavior.
f
Hi Michael
I have this config:
containers:
      - image: {{ .Values.image }}
        name: {{ .Chart.Name }}
        resources:
          requests:
            ephemeral-storage: "100Mi"
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "1000m"
            ephemeral-storage: "150Mi"
and no issues
s
do you have the latest version? and do you have tracing enabled? we had a few memory leaks caused by spans that were never closed
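For reference, tracing in Oathkeeper is controlled by the tracing block of its configuration file; if that block is absent or no provider is set, tracing is off. A rough sketch of what an enabled setup might look like, assuming the Jaeger provider; the service name and agent address below are placeholders, and exact field names can vary between versions:

tracing:
  provider: jaeger
  service_name: Ory Oathkeeper
  providers:
    jaeger:
      # placeholder agent address; point this at your own agent/collector
      local_agent_address: jaeger-agent.observability.svc.cluster.local:6831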
f
no tracing. version: v0.10.1
s
I was rather asking @quaint-pager-64027 who has problems 😅 but thx, always good to compare
f
hahaa 😁 Sorry, I didn’t notice ))
@steep-lamp-91158 let me know if you need more info from me 😉
q
Will check if we have tracing enabled. Have tried versions 0.38.25 and 0.40.1 but both seem to have the same issue.
We do not have tracing enabled at this time. We do frequently mutate credentials (OAuth ID token to JWT); not sure if that might be an issue.
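For context, that kind of mutation is usually expressed as a mutator chain on the access rule: a hydrator mutator calls an external API to enrich the session with extra claims, and an id_token mutator then signs the result into a JWT. A rough sketch, with a hypothetical hydration URL and claims template:

mutators:
  - handler: hydrator
    config:
      api:
        # hypothetical internal hydration endpoint
        url: https://claims-hydrator.internal/hydrate
  - handler: id_token
    config:
      # hypothetical audience; session claims can be templated in here as well
      claims: '{"aud": ["https://api.example.internal"]}'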
s
hmm, it is a bit hard to investigate, but I'd recommend creating an issue with all the details you can gather, if there isn't one yet
q
Resolved the issue. Ultimately it was happening due to constant cache misses caused by a random ordering of the claims returned by our hydration endpoint: since the order changed on every request, cached entries were never reused. Sorting the claims before returning them fixed this.
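In other words, the hydration endpoint returns the authentication session with the extra claims attached, and the cache key is evidently derived from that payload, so a non-deterministic key order defeats the cache. A rough sketch of a response body with claims emitted in a stable, sorted order, shown as YAML for readability (the endpoint actually exchanges JSON); field values are hypothetical:

subject: user-123
extra:
  # keys emitted in sorted order so the serialized payload, and therefore
  # the cache key, is identical on every request
  aud: https://api.example.internal
  email: alice@example.internal
  groups:
    - admins
    - developers
  iss: https://auth.example.internal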