I am trying to deploy a simple Angular app on k3s. I have installed gitlab-runner and have a GitLab service account with a Role as cluster-admin, so it is supposed to be able to do everything, but I can't get it to deploy:
rules:
- apiGroups:
  - ""
  resources:
  - '*'
  verbs:
  - '*'
I also tried specifically adding the verb 'apps' - no change in behavior
from server for: "deployment.yaml": deployments.apps "gitlab-master" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "deployments" in API group "apps" in the namespace "gitlab-managed-apps"
So far the only solution is to use the SA with gitlab-admin privileges...
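For reference, deployments live in the apps API group, which is exactly what the error message calls out, while apiGroups: [""] only covers the core group. A minimal sketch of a rule that also covers apps (an illustration of the idea, not a verified fix for this cluster):
rules:
- apiGroups:
  - ""
  - apps   # deployments.apps lives here, not in the core ("") group
  resources:
  - '*'
  verbs:
  - '*'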
I have just started with Kong API with One API.
I am able to run Kong locally using its official Docker image.
On the other side, I have a Spring Boot microservice running locally inside the same Docker engine.
Problem: what configuration is needed in the Kong YAML file so that I can connect to my Spring Boot microservice?
My Kong YAML file:
services:
- name: control-service-integration
  url: http://localhost:8080/
  plugins:
  - name: oneapi
    config:
      edgemicro_proxy: edgemicro_demo_v0
      add_application_id_header: true
      authentication:
        apikey:
          header_name: "x-api-key"
      upstream_auth:
        basic_auth:
          username: username
          password: password
  routes:
  - name: control-service-route
    request_buffering: false
    response_buffering: false
    paths:
    - /edgemicro-demo-v0
From the Kong OneAPI service I always get a 502 Bad Gateway error.
Let me know if any more information is required.
I found the solution for this.
In the above YAML:
services:
- name: control-service-integration
  url: http://localhost:8080/
replace the url value with http://host.docker.internal:8080/. After a lot of trial and error, I am finally able to connect to my app, which is running on the host.
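For clarity, the service entry ends up looking roughly like this (the rest of the file unchanged):
services:
- name: control-service-integration
  # host.docker.internal resolves to the host machine from inside the Kong container
  url: http://host.docker.internal:8080/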
I am following this guide to consume secrets: https://docs.spring.io/spring-cloud-kubernetes/docs/current/reference/html/index.html#secrets-propertysource.
It says, roughly:
1. Save the secrets.
2. Reference the secrets in the deployment.yml file:
containers:
- env:
  - name: DB_USERNAME
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-secret
        key: password
Then it says "You can select the Secrets to consume in a number of ways:" and gives three examples. However, without doing any of those steps I can already see the secrets in my environment perfectly. Furthermore, the operations in step 1 and step 2 work independently of Spring Boot (they save the secrets and expose them as environment variables).
My questions:
If I make the changes suggested in step 3, what changes/improvements does that bring for my container/app/pod?
Is there no way to avoid all the mapping in step 1 and just put all secrets in the environment?
They write -Dspring.cloud.kubernetes.secrets.paths=/etc/secrets to source all secrets; how did they know the secrets were in a folder under /etc/?
You can mount all env variables from the secret in the following way:
containers:
- name: app
  envFrom:
  - secretRef:
      name: db-secret
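To illustrate, assuming db-secret is a plain Opaque secret shaped like the sketch below (the values are made up), envFrom exposes one environment variable per key, named username and password, rather than the DB_USERNAME/DB_PASSWORD names from the explicit secretKeyRef mapping:
apiVersion: v1
kind: Secret
metadata:
  name: db-secret
type: Opaque
stringData:
  username: dbuser      # becomes the env var "username" via envFrom
  password: changeme    # becomes the env var "password" via envFrom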
As for where Spring gets secrets from: I'm not an expert in Spring, but it seems there is already an explanation in the link you provided:
When enabled, the Fabric8SecretsPropertySource looks up Kubernetes for Secrets from the following sources:
Reading recursively from secrets mounts
Named after the application (as defined by spring.application.name)
Matching some labels
So it takes secrets from the secrets mount (if you mount them as volumes). It also scans the Kubernetes API for secrets (I guess in the same namespace the app is running in). It can do this by using the Kubernetes serviceaccount token, which by default is always mounted into the pod. Whether that works depends on the Kubernetes RBAC permissions given to the pod's serviceaccount.
So it tries to find secrets via the Kubernetes API and match them against the application name or application labels.
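Regarding the /etc/secrets path from the guide: that path only exists if the secret is mounted as a volume at that location, and the spring.cloud.kubernetes.secrets.paths property just points Spring at it. A minimal sketch of such a mount, assuming the same db-secret as above and a container named app:
containers:
- name: app
  volumeMounts:
  - name: db-secret-volume
    mountPath: /etc/secrets      # matches -Dspring.cloud.kubernetes.secrets.paths=/etc/secrets
    readOnly: true
volumes:
- name: db-secret-volume
  secret:
    secretName: db-secret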
We're trying to deploy our Lambda using Serverless on Bitbucket Pipelines, but we're running into an issue when running the serverless config credentials command. This issue also happens in Docker containers and locally on our machines.
This is the command we're running:
serverless config credentials --stage staging --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
And it gives us the error:
Error: Profile default does not exist
The profile is defined in our serverless.yml file. If we rename the serverless file before running the command, it works, and we can then put the serverless.yml file back and successfully deploy.
e.g.
- mv serverless.yml serverless.old
- serverless config credentials --stage beta --provider aws --key $AWS_ACCESS_KEY --secret $AWS_ACCESS_SECRET
- mv serverless.old serverless.yml
We've tried adding the --profile default switch on there, but it makes no difference.
It's worth noting that this wasn't an issue until we started to use the SSM Parameter Store within the serverless file; the moment we added that, it started giving us the Profile default does not exist error.
serverless.yml (partial)
service: our-service
provider:
  name: aws
  runtime: nodejs12.x
  region: eu-west-1
  profile: default
  stage: ${opt:stage, 'dev'}
  iamRoleStatements:
    - Effect: 'Allow'
      Action: 'ssm:GetParameter'
      Resource:
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-dev'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-beta'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-staging'
        - 'arn:aws:ssm:eu-west-1:0000000000:parameter/our-service-launchdarkly-key-live'
    - Effect: 'Allow'
      Action: 'kms:Decrypt'
      Resource:
        - 'arn:aws:kms:eu-west-1:0000000000:key/alias/aws/ssm'
  environment:
    LAUNCH_DARKLY_SDK_KEY: ${self:custom.launchDarklySdkKey.${self:provider.stage}}
custom:
  stages:
    - dev
    - beta
    - staging
    - live
  launchDarklySdkKey:
    dev: ${ssm:/our-service-launchdarkly-key-dev~true}
    beta: ${ssm:/our-service-launchdarkly-key-beta~true}
    staging: ${ssm:/our-service-launchdarkly-key-staging~true}
    live: ${ssm:/our-service-launchdarkly-key-live~true}
plugins:
  - serverless-offline
  - serverless-stage-manager
...
TLDR: serverless config credentials only works when serverless.yml isn't present; otherwise it complains about the default profile not existing. This is only an issue when using the SSM Parameter Store in the serverless file.
The profile attribute in your serverless.yml refers to saved credentials in ~/.aws/credentials. If a [default] entry is not present in that file, Serverless will complain. I can think of two possible solutions to this:
1. Remove profile from your serverless.yml completely and use environment variables only (see the sketch after this answer).
2. Leave profile: default in your serverless.yml but set the credentials in ~/.aws/credentials like this:
[default]
aws_access_key_id=***************
aws_secret_access_key=***************
If you go with #2, you don't have to run serverless config credentials anymore.
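For option 1, the Serverless CLI picks up the standard AWS environment variables on its own, so serverless config credentials can be skipped entirely. A rough sketch of a Bitbucket Pipelines step, reusing the variable names from the question (the step layout itself is illustrative, not taken from the original pipeline):
- step:
    name: Deploy staging
    script:
      # Standard AWS SDK variables; the Serverless CLI reads these directly,
      # so no profile or credentials file is needed.
      - export AWS_ACCESS_KEY_ID=$AWS_ACCESS_KEY
      - export AWS_SECRET_ACCESS_KEY=$AWS_ACCESS_SECRET
      - serverless deploy --stage staging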
I am using the Serverless framework and AWS Lambda to deploy two functions with different path names (/message and /subscribe) to my subdomain at form.example.com.
I am using the serverless-domain-manager plugin for Serverless and successfully configured my domain for the /message function using serverless create_domain, but since I also needed to do that for /subscribe, I tried to follow the same process, received messages that the domain already existed, and caught an error: Error: Unable to create basepath mapping..
After flipping a configuration option (createRoute53Record: false) and re-running, it started to work, but now when I run sls deploy for my /message function I get the error message I used to see for /subscribe.
Error (from sls deploy):
layers:
None
Error --------------------------------------------------
Error: Unable to create basepath mapping.
For debugging logs, run again after setting the "SLS_DEBUG=*" environment variable.
Here is my config for the serverless-domain-manager:
plugins:
  - serverless-offline
  - serverless-domain-manager
custom:
  transactionDomain:
    dev: ${file(./local-keys.yml):transactionDomain}
    prod: ${ssm:mg-production-transaction-domain~true}
  newsletterDomain:
    dev: ${file(./local-keys.yml):newsletterDomain}
    prod: ${ssm:mg-production-newsletter-domain~true}
  apiKey:
    dev: ${file(./local-keys.yml):apiKey}
    prod: ${ssm:mg-production-api-key~true}
  customDomain:
    domainName: form.example.com
    certificateName: 'www.example.com' # sub-domain is included in the certificate
    stage: 'prod'
    createRoute53Record: true
Does this have to do with deploying two functions to the same domain? Is there a proper process to allow that to happen?
If you do not need API Gateway specific features, such as usage plans, you can put the two Lambdas behind an ALB with per-path routing.
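If that route is taken, the Serverless framework can attach Lambdas to an existing ALB listener via alb events, roughly like this (the listener ARN and handler names are placeholders, not taken from the question):
functions:
  message:
    handler: handler.message
    events:
      - alb:
          # placeholder ARN of an existing ALB listener
          listenerArn: arn:aws:elasticloadbalancing:eu-west-1:000000000000:listener/app/my-alb/abc123/def456
          priority: 1
          conditions:
            path: /message
  subscribe:
    handler: handler.subscribe
    events:
      - alb:
          listenerArn: arn:aws:elasticloadbalancing:eu-west-1:000000000000:listener/app/my-alb/abc123/def456
          priority: 2
          conditions:
            path: /subscribe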
I have a simple application that Gets and Puts information from a Datastore.
It works everywhere, but when I run it from inside the Kubernetes Engine cluster, I get this output:
Error from Get()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
Error from Put()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
I'm using the cloud.google.com/go/datastore package and the Go language.
I don't know why I'm getting this error since the application works everywhere else just fine.
Update:
Looking for an answer I found this comment on Google Groups:
In order to use Cloud Datastore from GCE, the instance needs to be
configured with a couple of extra scopes. These can't be added to
existing GCE instances, but you can create a new one with the
following Cloud SDK command:
gcloud compute instances create hello-datastore --project
--zone --scopes datastore userinfo-email
Would that mean I can't use Datastore from GKE by default?
Update 2:
I can see that when creating my cluster I didn't enable any permissions (most services are disabled by default). I suppose that's what's causing the issue.
Strangely, I can use Cloud SQL just fine even though it's disabled (using the cloudsql_proxy container).
So what I learnt in the process of debugging this issue was that:
During the creation of a Kubernetes Cluster you can specify permissions for the GCE nodes that will be created.
If you for example enable Datastore access on the cluster nodes during creation, you will be able to access Datastore directly from the Pods without having to set up anything else.
If your cluster node permissions are disabled for most things (default settings) like mine were, you will need to create an appropriate Service Account for each application that wants to use a GCP resource like Datastore.
Another alternative is to create a new node pool with the gcloud command, set the desired permission scopes and then migrate all deployments to the new node pool (rather tedious).
So at the end of the day I fixed the issue by creating a Service Account for my application, downloading the JSON authentication key, creating a Kubernetes secret which contains that key, and in the case of Datastore, I set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the mounted secret JSON key.
This way when my application starts, it checks if the GOOGLE_APPLICATION_CREDENTIALS variable is present, and authenticates Datastore API access based on the JSON key that the variable points to.
Deployment YAML snippet:
...
containers:
- image: foo
  name: foo
  env:
  - name: GOOGLE_APPLICATION_CREDENTIALS
    value: /auth/credentials.json
  volumeMounts:
  - name: foo-service-account
    mountPath: "/auth"
    readOnly: true
volumes:
- name: foo-service-account
  secret:
    secretName: foo-service-account
After struggling for some hours, I was also able to connect to Datastore. Here are my results, most of it from the Google docs:
Create Service Account
gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME]
Get full iam account name
gcloud iam service-accounts list
The result will look something like this:
[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Give owner access to the project for the service account
gcloud projects add-iam-policy-binding [PROJECT_NAME] --member serviceAccount:[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com --role roles/owner
Create key-file
gcloud iam service-accounts keys create mycredentials.json --iam-account [SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Create app-key Secret
kubectl create secret generic app-key --from-file=credentials.json=mycredentials.json
This app-key secret will then be mounted in the deployment.yaml
Edit deployment file
deployment.yaml:
...
spec:
  containers:
  - name: app
    image: eu.gcr.io/google_project_id/springapplication:v1
    volumeMounts:
    - name: google-cloud-key
      mountPath: /var/secrets/google
    env:
    - name: GOOGLE_APPLICATION_CREDENTIALS
      value: /var/secrets/google/credentials.json
    ports:
    - name: http-server
      containerPort: 8080
  volumes:
  - name: google-cloud-key
    secret:
      secretName: app-key
I was using a minimalistic Dockerfile like:
FROM scratch
ADD main /
EXPOSE 80
CMD ["/main"]
which kept my Go app in an indefinite "hanging" state when trying to connect to GCP Datastore. After lots of experimenting I figured out that the scratch image is missing things the Google Cloud library requires, most importantly CA certificates. Using this Dockerfile now works:
FROM golang:alpine
RUN apk add --no-cache ca-certificates
ADD main /
EXPOSE 80
CMD ["/main"]
It does not require me to provide the Google credentials environment variable. The library seems to automatically figure out where it is running (maybe from context.Background()?) and automatically uses the default service account that Google creates for you when you create your cluster on GKE.