I created a GKE cluster using the GKE API. Below are the endpoint and payload I used.
API: https://container.googleapis.com/v1/{parent=projects/*/locations/*}/clusters
Method: POST
RequestBody:
{
  "cluster": {
    "name": "test",
    "masterAuth": {
      "clientCertificateConfig": {
        "issueClientCertificate": true
      }
    },
    ...
  },
  ...
}
NOTE: I'm creating the GKE cluster with masterAuth enabled by setting issueClientCertificate to true. After cluster creation, I built a kubeconfig on my local machine using the clusterCaCertificate, clientCertificate, and clientKey values returned by the GKE API, i.e., by describing the cluster.
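For reference, the kubeconfig I built looks roughly like this (the endpoint and the base64 certificate data are placeholders taken from the cluster describe output):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: test
  cluster:
    server: https://<CLUSTER_ENDPOINT>
    certificate-authority-data: <base64 clusterCaCertificate>
users:
- name: client
  user:
    client-certificate-data: <base64 clientCertificate>
    client-key-data: <base64 clientKey>
contexts:
- name: test
  context:
    cluster: test
    user: client
current-context: test
```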
Then I listed the nodes with 'kubectl get nodes', and the response was:
Error from server (Forbidden): nodes is forbidden: User "client" cannot list resource "nodes" in API group "" at the cluster scope
The clientCertificate returned by the GKE describe API has CN="client", but it should have been "admin". The certificate is generated by Google, and as a developer I could not find a way to set the CN. I cannot even access the cluster, so I cannot create a RoleBinding or anything similar for the user 'client'. Any idea how this can be resolved?
Take a look here for a workaround and for how the GKE engineering team is working on this. I took this from the GitHub report:
So, per the recommendation, I posted on the Kubernetes Engine bug tracker and it became this private issue:
https://issuetracker.google.com/u/1/issues/111101728 (feel free to reference it), which is equivalent to kubernetes/kubernetes#65400.
In a nutshell, the client cert has CN=client encoded, and the client user doesn't have any permissions. If you use masterAuth username/password (basic auth), then you can apply this YAML:
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: client-binding
subjects:
- kind: User
  name: client
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
This will give the user on the cert admin permissions. Additionally, to remove basic auth you can set username="" in the API, but this causes a master switch (a reboot) that takes about five extra minutes.
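If I read the API reference correctly, clearing the username goes through the cluster's setMasterAuth method, i.e. POST to https://container.googleapis.com/v1/projects/&lt;PROJECT&gt;/locations/&lt;LOCATION&gt;/clusters/&lt;CLUSTER&gt;:setMasterAuth (path segments are placeholders). A sketch of the request body:

```json
{
  "action": "SET_USERNAME",
  "update": {
    "username": ""
  }
}
```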
I have a GCP project with an Anthos cluster deployed in it.
If I am an admin of the Anthos cluster but not an Owner of the parent project, I have only read rights on Kubernetes and cannot create any resources. I am getting:
Error from server (Forbidden)
I've granted myself the "Kubernetes Engine Admin", "Kubernetes Engine Cluster Admin", and "Anthos Multi-cloud Admin" roles, but with no success. It seems like the "Owner" role is mandatory.
Also, my user is bound to ClusterRole/cluster-admin through ClusterRoleBinding/gke-multicloud-cluster-admin, but I apparently still need the IAM Owner role.
Is this by Anthos design, or am I missing something?
This was solved by granting myself these roles:
roles/gkehub.viewer
roles/gkehub.gatewayEditor
Now I can create Kubernetes resources even though I am not an Owner of the GCP project.
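For the record, the two roles can be granted with gcloud (project ID and member are placeholders):

```shell
gcloud projects add-iam-policy-binding my-project \
  --member="user:me@example.com" \
  --role="roles/gkehub.viewer"

gcloud projects add-iam-policy-binding my-project \
  --member="user:me@example.com" \
  --role="roles/gkehub.gatewayEditor"
```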
I have a self-hosted Elasticsearch cluster running in AWS EKS, and I'd like to set up OIDC authentication. I followed the instructions at https://www.elastic.co/guide/en/cloud/current/ec-secure-clusters-oidc.html#ec-oidc-client-secret
In the client-secret section, it says:
You'll need to add the client secret to the keystore
so I launched the ES cluster with basic authentication and added the secret to the keystore with the command elasticsearch-keystore add xpack.security.authc.realms.oidc.oidc-realm.rp.client_secret.
After that, I updated the ES YAML file to include the configuration:
xpack:
  security:
    authc:
      realms:
        oidc:
          oidc-realm-name:
            order: 2
            rp.client_id: "client-id"
            rp.response_type: "code"
            rp.redirect_uri: "<KIBANA_ENDPOINT_URL>/api/security/v1/oidc"
            op.issuer: "<check with your OpenID Connect Provider>"
            op.authorization_endpoint: "<check with your OpenID Connect Provider>"
            op.token_endpoint: "<check with your OpenID Connect Provider>"
            op.userinfo_endpoint: "<check with your OpenID Connect Provider>"
            op.jwkset_path: "<check with your OpenID Connect Provider>"
            claims.principal: sub
            claims.groups: "http://example.info/claims/groups"
Then I ran kubectl rollout restart to restart the pod, but got the error below when the Elasticsearch cluster came up:
java.lang.IllegalStateException: security initialization failed
Likely root cause: SettingsException[The configuration setting [xpack.security.authc.realms.oidc.oidc-realm.rp.client_secret] is required]
It seems that ES doesn't find the secret I added to the keystore.
Then I realised the keystore was lost when I ran rollout restart to apply the OIDC configuration. So my question is: what is the right way to set up OIDC for Elasticsearch on Kubernetes?
If you're using Helm for your deployment, the best way is to add it in the chart's values.
You'll need to create a Secret in your cluster, which an initContainer will add to the keystore.
More details are in the Helm chart README.
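As a sketch (the Secret name is an assumption, and the realm name matches the question's YAML): create a Kubernetes Secret whose key is the exact keystore setting name, so the chart's init container can load it into the keystore before Elasticsearch starts:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: oidc-client-secret
stringData:
  # The key must be the full keystore entry name Elasticsearch expects.
  xpack.security.authc.realms.oidc.oidc-realm-name.rp.client_secret: "<client-secret>"
```

Then reference it from the chart values, e.g. `keystore: [{secretName: oidc-client-secret}]`. This way the secret survives pod restarts, because the keystore is rebuilt from the Secret on every startup.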
I have 2 pods of Redmine deployed with Kubernetes. Because of this, session management is an issue and some users are unable to log in. So I came up with the idea of storing the cache of both pods centrally in a Redis server running in Kubernetes.
I am using the configuration below inside the Redmine pod, at /opt/bitnami/redmine/config/application.rb:
config.cache_store = :redis_store, {
  host: "redis-headless.redis-namespace", # service name of redis
  port: 6379,
  db: 0,
  password: "xyz",
  namespace: "redis-namespace"
}, {
  expires_in: 90.minutes
}
But this is not working as expected. I need help figuring out where I went wrong.
Redmine doesn't store any session data in its cache. Thus, configuring your two Redmines to use the same cache won't help.
By default Redmine stores the user sessions in a signed cookie sent to the user's browser without any server-local session storage. Since the session cookie is signed with a private key, you need to make sure that all installations using the same sessions also use the same application secret (and code and database).
Depending on how you have setup your Redmine, this secret is typically either stored in config/initializers/secret_token.rb or config/secrets.yml (relative to your Redmine installation directory). Make sure that you use the same secret here on both your Redmines.
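As an illustration (the token value is a placeholder): both pods must end up with an identical initializer, for example mounted from a shared Secret or ConfigMap rather than generated per pod:

```ruby
# config/initializers/secret_token.rb -- must be identical on every Redmine pod,
# otherwise cookies signed by one pod are rejected by the other.
RedmineApp::Application.config.secret_token = '<same-long-random-hex-on-all-pods>'
```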
In the legacy ACL system (pre-1.4), I was able to create ACL tokens using the API endpoint /v1/acl/update, passing an existing ID as a parameter in the payload, e.g.:
"ID": "##uuid",
This would create a token with that UUID in Consul.
In the new system, I cannot create a token and pass in an already-chosen ID for that token, either via the consul acl CLI or the ACL API. Any suggestions?
The only pre-assigned token I'm aware of that works is the bootstrap master token, which can be configured in acl.json at startup; Consul will use it to bootstrap the cluster and create the management token:
"tokens": {
  "master": "##uuid"
}
Note that the purpose here is the ability to recover from an outage. If I have 100 tokens in Consul and lose the cluster, how do I rebuild with the same tokens (which would be backed up somewhere)?
This was already raised in https://github.com/hashicorp/consul/issues/4977, and the targeted feature is included in the 1.4.4 release (date TBD).
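With that feature, restoring a token under a known ID should look roughly like this (field support is assumed per the linked issue; the UUIDs and policy name are placeholders), sent as the body of PUT /v1/acl/token:

```json
{
  "AccessorID": "<saved-accessor-uuid>",
  "SecretID": "<saved-secret-uuid>",
  "Description": "restored token",
  "Policies": [
    { "Name": "my-policy" }
  ]
}
```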
I am running SonarQube in Kubernetes and want to get metrics from the SonarQube pod into Prometheus. I added prometheus.io/scrape: "true" to the SonarQube service and can see the endpoint in the Prometheus dashboard, but it shows DOWN status, even though my pod is up and running.
Endpoint: http://sonar_ip:9000/metrics. I don't think SonarQube exposes metrics on the /metrics path, because running 'curl http://sonar_ip:9000/metrics' doesn't return a metrics list. Does the SonarQube pod expose any Prometheus metrics, and if so, on which path? Let me know if you need any further information.
SonarQube 9.3 added built-in support for Prometheus monitoring in all editions:
https://www.sonarqube.org/sonarqube-9-3/
Follow the latest docs on this:
https://docs.sonarqube.org/latest/instance-administration/monitoring/
Prometheus monitors your SonarQube instance by collecting metrics from the /api/monitoring/metrics endpoint.
Results are returned in OpenMetrics text format.
See Prometheus' documentation on exposition formats for more information on the OpenMetrics text format.
The endpoint requires authentication in one of the following ways:
Authorization: Bearer xxxx header: you can use a bearer token during a database upgrade and when SonarQube is fully operational. Define the bearer token in the sonar.properties file using the sonar.web.systemPasscode property.
X-Sonar-Passcode: xxxxx header: you can use X-Sonar-Passcode during a database upgrade and when SonarQube is fully operational. Define it in the sonar.properties file using the sonar.web.systemPasscode property.
username:password and JWT token: when SonarQube is fully operational, system admins logged in with local or delegated authentication can access the endpoint.
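Tying this back to the annotation-based discovery from the question, the pod annotations would then look something like this (assuming your Prometheus scrape config honors the common prometheus.io/path and prometheus.io/port annotations):

```yaml
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/path: "/api/monitoring/metrics"
    prometheus.io/port: "9000"
```

Note that the Authorization or X-Sonar-Passcode header itself cannot be carried by annotations; it has to be configured in the Prometheus scrape config (e.g. via the `authorization` section in recent Prometheus versions).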