# ory-selfhosting
l
I have this in the config:
```yaml
config:
  access_rules:
    repositories:
      - file:///etc/rules/access-rules.json
```
And I also added a custom resource, which I can see is picked up by the maester. But when I call the `/rules` API, I get the response:
```
[]
```
Anything I've missed?
w
hello there, just to be sure, are you deploying them using our charts?
l
yes, I am using the Helm charts you provide. I cloned the repo locally and the config looks like this:
```yaml
# -- Mode for oathkeeper controller
# -- Two possible modes are: controller or sidecar
global:
  ory:
    oathkeeper:
      maester:
        mode: controller

replicaCount: 1

image:
  repository: oryd/oathkeeper
  tag: v0.38.9-beta.1
  pullPolicy: Always

# -- Image pull secrets
imagePullSecrets: []
nameOverride: "oauthkeeper"
fullnameOverride: "oauthkeeper"

# -- If enabled, a demo deployment with exemplary access rules and JSON Web Key Secrets will be generated.
demo: false

# -- Configures the Kubernetes service
service:
  # -- Configures the Kubernetes service for the proxy port.
  proxy:
    # -- En-/disable the service
    enabled: false
    # -- The service type
    type: ClusterIP
    # -- The service port
    port: 4455
    # -- The service port name. Useful to set a custom service port name if it must follow a scheme (e.g. Istio)
    name: http
    # -- If you do want to specify annotations, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'annotations:'.
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    labels: {}
    #      If you do want to specify additional labels, uncomment the following
    #      lines, adjust them as necessary, and remove the curly braces after 'labels:'.
    #      e.g.  app: oathkeeper

  # -- Configures the Kubernetes service for the api port.
  api:
    # -- En-/disable the service
    enabled: true
    # -- The service type
    type: ClusterIP
    # -- The service port
    port: 4456
    # -- The service port name. Useful to set a custom service port name if it must follow a scheme (e.g. Istio)
    name: http
    # -- If you do want to specify annotations, uncomment the following
    # lines, adjust them as necessary, and remove the curly braces after 'annotations:'.
    annotations: {}
    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
    labels: {}
    #      If you do want to specify additional labels, uncomment the following
    #      lines, adjust them as necessary, and remove the curly braces after 'labels:'.
    #      e.g.  app: oathkeeper

# -- Configure ingress
ingress:
  # -- Configure ingress for the proxy port.
  proxy:
    # -- En-/Disable the proxy ingress.
    enabled: true
    className: ""
    annotations: {}
#     kubernetes.io/ingress.class: nginx
#     kubernetes.io/tls-acme: "true"
    hosts:
      - host: proxy.oathkeeper.localhost
        paths:
          - path: /
            pathType: ImplementationSpecific
#    tls: []
#        hosts:
#          - proxy.oathkeeper.local
#      - secretName: oathkeeper-proxy-example-tls
    # -- Configuration for custom default service. This service will be used to handle the response when the configured service in the Ingress rule does not have any active endpoints
    defaultBackend: {}
      # service:
      #   name: myservice
      #   port:
      #     number: 80

  api:
    # -- En-/Disable the api ingress.
    enabled: false
    className: ""
    annotations: {}
#      If you do want to specify annotations, uncomment the following
#      lines, adjust them as necessary, and remove the curly braces after 'annotations:'.
#      kubernetes.io/ingress.class: nginx
#      kubernetes.io/tls-acme: "true"
    hosts:
      - host: api.oathkeeper.localhost
        paths:
          - path: /
            pathType: ImplementationSpecific
#    tls: []
#      hosts:
#        - api.oathkeeper.local
#      - secretName: oathkeeper-api-example-tls

# -- Configure ORY Oathkeeper itself
oathkeeper:
  # -- The ORY Oathkeeper configuration. For a full list of available settings, check:
  #   <https://github.com/ory/oathkeeper/blob/master/docs/config.yaml>
  config:
    access_rules:
      repositories:
        - file:///etc/rules/access-rules.json
    authenticators:
      noop:
        enabled: true
      unauthorized:
        enabled: true
      bearer_token:
        enabled: true
        config:
          check_session_url: http://kratos:4455/session/whoami
          preserve_path: true
          extra_from: '@this'
          subject_from: 'identity.id'
          token_from:
            header: Authorization
    authorizers:
      allow:
        enabled: true
      deny:
        enabled: true
    mutators:
      header:
        enabled: true
        config:
          headers:
            X-User: "{{ print .Subject }}"
            # You could add some other headers, for example with data from the
            # session.
            # X-Some-Arbitrary-Data: "{{ print .Extra.some.arbitrary.data }}"
      noop:
        enabled: true
      id_token:
        enabled: true
        config:
          issuer_url: http://localhost:4455/
          jwks_url: http://api..../v1/jwks
#          claims:
#            - '{"customer-claim": "value"}'
    serve:
      proxy:
        port: 4455
      api:
        port: 4456
  # -- If set, uses the given JSON Web Key Set as the signing key for the ID Token Mutator.
  mutatorIdTokenJWKs: {}
  # -- If set, uses the given access rules.
  accessRules: {}
  # -- If you enable maester, the following value should be set to "false" to avoid overwriting
  # the rules generated by the CRDs. Additionally, the value "accessRules" shouldn't be
  # used as it will have no effect once "managedAccessRules" is disabled.
  managedAccessRules: false

secret:
  # -- switch to false to prevent creating the secret
  enabled: true
  # -- Provide custom name of existing secret, or custom name of secret to be created
  nameOverride: ""
  # nameOverride: "myCustomSecret"
  # -- Annotations to be added to secret. Annotations are added only when secret is being created. Existing secret will not be modified.
  secretAnnotations:
    # Create the secret before installation, and only then. This saves the secret from regenerating during an upgrade
    # pre-upgrade is needed to upgrade from 0.7.0 to newer. Can be deleted afterwards.
    helm.sh/hook-weight: "0"
    helm.sh/hook: "pre-install, pre-upgrade"
    helm.sh/hook-delete-policy: "before-hook-creation"
    helm.sh/resource-policy: "keep"

  # -- default mount path for the kubernetes secret
  mountPath: /etc/secrets
  # -- default filename of JWKS (mounted as secret)
  filename: mutator.id_token.jwks.json

deployment:
  resources: {}
  #  We usually recommend not to specify default resources and to leave this as a conscious
  #  choice for the user. This also increases chances charts run on environments with little
  #  resources, such as Minikube. If you do want to specify resources, uncomment the following
  #  lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  #  limits:
  #    cpu: 100m
  #    memory: 128Mi
  #  requests:
  #    cpu: 100m
  #    memory: 128Mi
  securityContext:
    capabilities:
      drop:
      - ALL
    readOnlyRootFilesystem: true
    runAsNonRoot: true
    runAsUser: 1000
    allowPrivilegeEscalation: false
    privileged: false

  # -- Specify the serviceAccountName value.
  # In some situations it is needed to provide specific permissions to Oathkeeper deployments,
  # for example when installing Oathkeeper on a cluster with a PodSecurityPolicy and Istio.
  # Uncomment if it is needed to provide a ServiceAccount for the Oathkeeper deployment.
  serviceAccount:
    # -- Specifies whether a service account should be created
    create: true
    # -- Annotations to add to the service account
    annotations: {}
    # -- The name of the service account to use. If not set and create is true, a name is generated using the fullname template
    name: ""

  # https://github.com/kubernetes/kubernetes/issues/57601
  automountServiceAccountToken: false

  # -- Node labels for pod assignment.
  nodeSelector: {}
  # If you do want to specify node labels, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'nodeSelector:'.
  #   foo: bar

  extraEnv: []

  # -- Extra volumes you can attach to the pod.
  extraVolumes: []
  # - name: my-volume
  #   secret:
  #     secretName: my-secret

  # -- Extra volume mounts, allows mounting the extraVolumes to the container.
  extraVolumeMounts: []
  # - name: my-volume
  #   mountPath: /etc/secrets/my-secret
  #   readOnly: true

  # -- Configuration for tracing providers. Only datadog is currently supported through this block.
  # If you need to use a different tracing provider, please manually set the configuration values
  # via "oathkeeper.config" or via "deployment.extraEnv".
  tracing:
    datadog:
      enabled: false

      # -- Sets the datadog DD_ENV environment variable. This value indicates the environment where oathkeeper is running.
      # Default value: "none".
      # env: production

      # -- Sets the datadog DD_VERSION environment variable. This value indicates the version that oathkeeper is running.
      # Default value: .Values.image.tag (i.e. the tag used for the docker image).
      # version: X.Y.Z

      # -- Sets the datadog DD_SERVICE environment variable. This value indicates the name of the service running.
      # Default value: "ory/oathkeeper".
      # service: ory/oathkeeper

      # -- Sets the datadog DD_AGENT_HOST environment variable. This value indicates the host address of the datadog agent.
      # If set to true, this configuration will automatically set DD_AGENT_HOST to the field "status.hostIP" of the pod.
      # Default value: false.
      # useHostIP: true

  # -- Configure node tolerations.
  tolerations: []

  labels: {}
  #      If you do want to specify additional labels, uncomment the following
  #      lines, adjust them as necessary, and remove the curly braces after 'labels:'.
  #      e.g.  type: app

  annotations: {}
  #      If you do want to specify annotations, uncomment the following
  #      lines, adjust them as necessary, and remove the curly braces after 'annotations:'.
  #      e.g.  sidecar.istio.io/rewriteAppHTTPProbers: "true"


# -- Configure node affinity
affinity: {}

# -- Configures controller setup
maester:
  enabled: true

# -- PodDisruptionBudget configuration
pdb:
  enabled: false
  spec:
    minAvailable: 1
```
and applied a custom rule:
```yaml
apiVersion: "oathkeeper.ory.sh/v1alpha1"
kind: Rule
metadata:
  name: customer-rules
spec:
  authenticators:
    - handler: bearer_token
  authorizer:
    handler: allow
  match:
    url: "<http://localhost:4456/cristi/test>"
    methods:
      - "GET"
w
I see, I will look over the values, that may take some time πŸ˜… you could try describing the rule and checking its status to see if it was applied correctly
and check the logs of the maester controller to see if it was reconciled
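for example, something like this (assuming the release lives in the `ory` namespace; adjust the maester deployment name to whatever `kubectl -n ory get deploy` shows):
```bash
# Check whether the Rule CR was accepted; Status.Validation.Valid should be true
kubectl -n ory describe rule customer-rules

# Check the maester controller logs for reconciliation messages
# (the deployment name "oathkeeper-maester" is an assumption, use your release's name)
kubectl -n ory logs deploy/oathkeeper-maester
```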
l
```
Name:         customer-rules
Namespace:    ory
Labels:       <none>
Annotations:  <none>
API Version:  oathkeeper.ory.sh/v1alpha1
Kind:         Rule
Metadata:
  Creation Timestamp:  2022-01-27T10:04:16Z
  Finalizers:
    finalizer.oathkeeper.ory.sh
  Generation:  8
  Managed Fields:
    API Version:  oathkeeper.ory.sh/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:annotations:
          .:
          f:kubectl.kubernetes.io/last-applied-configuration:
      f:spec:
        .:
        f:authenticators:
        f:authorizer:
          .:
          f:handler:
        f:match:
          .:
          f:methods:
          f:url:
    Manager:      kubectl-client-side-apply
    Operation:    Update
    Time:         2022-01-27T10:04:16Z
    API Version:  oathkeeper.ory.sh/v1alpha1
    Fields Type:  FieldsV1
    fieldsV1:
      f:metadata:
        f:finalizers:
          .:
          v:"finalizer.oathkeeper.ory.sh":
      f:status:
        .:
        f:validation:
          .:
          f:valid:
    Manager:         manager
    Operation:       Update
    Time:            2022-01-27T12:14:04Z
  Resource Version:  3525056
  UID:               6e6eed19-e2aa-47f7-a2e0-855ee663d8bf
Spec:
  Authenticators:
    Handler:  noop
  Authorizer:
    Handler:  allow
  Match:
    Methods:
      GET
    URL:  http://localhost:4456/cristi/test
Status:
  Validation:
    Valid:  true
Events:     <none>
```
```
2022-01-27T12:14:04.943Z    INFO    controllers.Rule    updating ConfigMap
2022-01-27T12:14:04.962Z    INFO    controllers.Rule    updating ConfigMap
```
so it seems to pick them up. Port-forwarding to oathkeeper and calling GET `http://localhost:4456/rules` gets me `[]`
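roughly like this, in case it matters (the api service name may differ in your setup, `kubectl -n ory get svc` lists it):
```bash
# Forward the oathkeeper api port locally and query the rules endpoint
# (the service name "oauthkeeper-api" assumes the fullnameOverride from the values above)
kubectl -n ory port-forward svc/oauthkeeper-api 4456:4456 &
curl http://localhost:4456/rules
```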
w
mhm, next step would be to check if the cm has the rules and then if they are picked up by oathkeeper
we use a name derivation to find the cm, you can check that you are looking at the right ConfigMap here: https://github.com/ory/k8s/blob/master/helm/charts/oathkeeper-maester/templates/deployment.yaml#L45
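something like this should dump what the maester wrote (assuming the derived name is `oathkeeper-rules` in the `ory` namespace):
```bash
# Print the access rules the maester reconciled into the ConfigMap
# (name and namespace are assumptions -- check the maester's --rulesConfigmap* flags if unsure)
kubectl -n ory get configmap oathkeeper-rules \
  -o jsonpath='{.data.access-rules\.json}'
```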
l
```
Name:         oathkeeper-rules
Namespace:    ory
Labels:       app.kubernetes.io/instance=oathkeeper
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=oathkeeper
              app.kubernetes.io/version=v0.38.9-beta.1
              helm.sh/chart=oathkeeper-0.21.5
Annotations:  meta.helm.sh/release-name: oathkeeper
              meta.helm.sh/release-namespace: ory

Data
====
access-rules.json:
----
[
  {
    "upstream": {
      "url": "",
      "preserve_host": false
    },
    "id": "customer-rules.ory",
    "match": {
      "url": "<http://localhost:4456/cristi/test>",
      "methods": [
        "GET"
      ]
    },
    "authenticators": [
      {
        "handler": "noop"
      }
    ],
    "authorizer": {
      "handler": "allow"
    },
    "mutators": [
      {
        "handler": "noop"
      }
    ]
  }
]

BinaryData
====

Events:  <none>
```
w
right, so the rules look OK, the CR was transformed
next step would be to exec into oathkeeper and check if the file on the pod matches the ConfigMap; k8s has a reconciliation time of about 2 min
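e.g. something like this (the deployment name is an assumption, use whatever your release created):
```bash
# Compare the rules file mounted in the pod with the ConfigMap content
# (deployment name "oauthkeeper" assumed from the fullnameOverride in the values above)
kubectl -n ory exec deploy/oauthkeeper -- cat /etc/rules/access-rules.json
```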
l
```
--rulesConfigmapName=oathkeeper-rules
--rulesConfigmapNamespace=ory
```
so those are right as well
w
it can take up to 2 min after the CM object is modified for the change to become visible in the pod
l
ah lol, it appeared after 2 minutes 😐. I am feeling embarrassed right now
sorry for this πŸ˜…
w
no problem πŸ˜„ it is something that is not well documented by k8s itself πŸ˜‰
that is why we also have the sidecar mode, it removes that limitation
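switching is just one value, e.g. something like this if you upgrade the release from the cloned chart (the chart path and values file name are assumptions about your local checkout):
```bash
# Redeploy with the maester running as a sidecar instead of a standalone controller
helm upgrade oathkeeper ./helm/charts/oathkeeper -n ory \
  -f values.yaml \
  --set global.ory.oathkeeper.maester.mode=sidecar
```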
l
so you're saying sidecar mode is better? I will test this, thank you for everything πŸ™‚
w
depends on the use-case. Sidecar is better if you care about timing
but it creates a new container for each pod, so it consumes much more resources