Connecting to Google Cloud SQL using proxy -- Error 403: Insufficient Permission - go

EDIT:
I now think the issue is with my Golang pod communicating with the proxy pod via localhost, as in the second error message.
I added the service account credentials JSON file to my Docker image and pointed the GOOGLE_APPLICATION_CREDENTIALS environment variable at it. After doing that, using my-project:us-central1:my-instance as connName below works.
However, when I try using the DB_HOST environment variable in the container as connName, I still get the 404 error below.
ORIGINAL POST
I'm following this guide to connect to Google Cloud SQL from a pod on Kubernetes Engine. The pod is running two containers: one with the Cloud SQL proxy image and another with a Golang service to do the actual database queries.
I'm getting the following error when my Golang service tries to initiate a connection:
ensure that the Cloud SQL API is enabled for your project
(https://console.cloud.google.com/flows/enableapi?apiid=sqladmin).
Error during createEphemeral for
my-project:us-central1:my-instance: googleapi: Error 403:
Insufficient Permission, insufficientPermission
I've looked at a few threads here and elsewhere and here's what I've done so far:
Ensured the Cloud SQL API is in fact enabled.
Added the Editor role to the service account I'm using.
Removed and re-added the Cloud SQL Client role on the service account I'm using.
Verified the correct secrets were created, with the same namespace as the pods.
Below is a snippet of the Golang code I'm using, which was taken from here:
import (
    "log"
    "net/http"

    "github.com/GoogleCloudPlatform/cloudsql-proxy/proxy/dialers/mysql"
)

cfg := mysql.Cfg(connName, dbUser, dbPassword)
cfg.DBName = dbName
db, err := mysql.DialCfg(cfg)
if err != nil {
    log.Println(err)
    return c.NoContent(http.StatusInternalServerError)
}
connName is the same string that shows up in the error: my-project:us-central1:my-instance. I've tried changing it to 127.0.0.1:3306 instead, but then I get the error below:
ensure that the account has access to "127.0.0.1:3306" (and make sure
there's no typo in that name). Error during createEphemeral for
127.0.0.1:3306: googleapi: got HTTP response code 404 with body: Not Found
Also, here is a snippet of the yaml file I'm using to deploy the pods.
env:
  - name: DB_HOST
    value: 127.0.0.1:3306
  - name: DB_USER
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: username
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: cloudsql-db-credentials
        key: password
- name: cloudsql-proxy
  image: gcr.io/cloudsql-docker/gce-proxy:1.11
  command: ["/cloud_sql_proxy",
            "-instances=my-project:us-central1:my-instance=tcp:3306",
            "-credential_file=/secrets/cloudsql/credentials.json"]
  volumeMounts:
    - name: cloudsql-instance-credentials
      mountPath: /secrets/cloudsql
      readOnly: true
volumes:
  - name: cloudsql-instance-credentials
    secret:
      secretName: cloudsql-instance-credentials
I've also verified that the Cloud SQL proxy starts without issue:
2018/04/21 20:41:19 Listening on 127.0.0.1:3306 for my-project:us-central1:my-instance
2018/04/21 20:41:19 Ready for new connections
I'm not sure what else to try here. Any help is appreciated.

Hi, as you can read here [1]: "If your program is written in Go you can use the Cloud SQL Proxy as a library, avoiding the need to start the Proxy as a companion process." So your code is already using the proxy as a library, and there is no need to also run the proxy as a sidecar container in the pod. Note that the library dialer expects connName to be the instance connection name (my-project:us-central1:my-instance), which it looks up through the SQL Admin API; that is why passing 127.0.0.1:3306 produces the 404 (there is no instance by that name), and why missing OAuth scopes or IAM permissions on the credentials produce the 403.
If you still want to run the proxy as a sidecar container in the pod, use the companion-process approach with the plain MySQL driver, as described here [2] under "Companion Process":
import (
    "database/sql"
    "fmt"

    _ "github.com/go-sql-driver/mysql" // registers the "mysql" driver
)

dsn := fmt.Sprintf("%s:%s@tcp(%s)/%s",
    dbUser,
    dbPassword,
    "127.0.0.1:3306",
    dbName)
db, err := sql.Open("mysql", dsn)
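One caveat worth adding here: database/sql opens connections lazily, so sql.Open succeeding does not prove the proxy is reachable. A quick Ping at startup (a minimal sketch, using only database/sql and the standard "log" package) surfaces connection problems immediately:
// sql.Open only validates the DSN; force a round trip so a bad proxy
// address or bad credentials fail fast at startup.
if err := db.Ping(); err != nil {
    log.Fatalf("cannot reach the database through the proxy: %v", err)
}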
[1] https://github.com/GoogleCloudPlatform/cloudsql-proxy#to-use-inside-a-go-program
[2] https://cloud.google.com/sql/docs/mysql/connect-external-app#languages

Related

How to authenticate in REST request to MinIO

I am experimenting with MinIO and trying to send REST API calls directly to MinIO on port 9000. As far as I understand, authentication works the same way as with the Amazon S3 API - is that correct? Unfortunately, I am also new to S3.
Here are my questions:
What does a request header to MinIO look like?
I read that I also need a signature that needs to be calculated somehow. How is this calculation done?
I do my experiments on Windows 10 and run MinIO in a Docker Container. My experiments target "http://localhost:9000/"
So far I only get a 403 error for a GET request:
<?xml version="1.0" encoding="UTF-8"?>
<Error>
  <Code>AccessDenied</Code>
  <Message>Access Denied.</Message>
  <Resource>/</Resource>
  <RequestId>173BACCB19FAF4C4</RequestId>
  <HostId>d200d104-da55-44e2-a94d-ce68ee959272</HostId>
</Error>
I read through the S3 API Reference "https://docs.aws.amazon.com/pdfs/AmazonS3/latest/API/s3-api.pdf#Type_API_Reference" but, to be honest, I got lost.
Can someone please help me out?
You need to set authentication values.
In Postman:
URL: GET http://localhost:9099/{bucket name}/{file name}
Select the Authorization tab
Select Type: AWS Signature
Access Key: copy from the MinIO UI
Secret Key: copy from the MinIO UI
Service name: s3
[screenshot: Postman access]
In the MinIO browser, use Create Key to generate an Access Key / Secret Key pair:
[screenshot: minio browser]
Local Docker Compose file, save as docker-compose.yml:
version: "3"
services:
  minio-service:
    image: minio/minio:latest
    volumes:
      - ./storage/minio:/data
    ports:
      - "9000:9000"
      - "9099:9099"
    environment:
      MINIO_ROOT_USER: admin
      MINIO_ROOT_PASSWORD: admin-strong
    command: server --address ":9099" --console-address ":9000" /data
    restart: always # necessary since it's failing to start sometimes
Launching the container:
$ docker compose up
Console URL:
http://localhost:9000/
The credentials match those in docker-compose.yml:
user name: admin
password: admin-strong
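To answer the signature question from the other side: if you call MinIO from code instead of Postman, the SDK computes the AWS Signature Version 4 headers for you. Below is a minimal Go sketch using the minio-go v7 client; the endpoint and keys match the compose file above, while the bucket and object names are placeholder assumptions.
package main

import (
    "context"
    "io"
    "log"
    "os"

    "github.com/minio/minio-go/v7"
    "github.com/minio/minio-go/v7/pkg/credentials"
)

func main() {
    // The S3 API is served on :9099 in the compose file above.
    client, err := minio.New("localhost:9099", &minio.Options{
        Creds:  credentials.NewStaticV4("admin", "admin-strong", ""),
        Secure: false, // plain HTTP for a local container
    })
    if err != nil {
        log.Fatal(err)
    }

    // GetObject issues the signed GET request; the SDK calculates the
    // AWS Signature Version 4 for every call.
    obj, err := client.GetObject(context.Background(), "my-bucket", "my-file.txt", minio.GetObjectOptions{})
    if err != nil {
        log.Fatal(err)
    }
    defer obj.Close()

    if _, err := io.Copy(os.Stdout, obj); err != nil {
        log.Fatal(err)
    }
}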

How to configure Kong API to communicate with another Spring microservice

I have just started with Kong API with OneAPI.
I am able to run Kong API locally using its official Docker image.
On the other side, I have another Spring Boot microservice running locally inside the same Docker engine.
Problem: what configuration is needed in the Kong API YAML file so that I can connect to my Spring Boot microservice?
My Kong API YAML file:
services:
  - name: control-service-integration
    url: http://localhost:8080/
    plugins:
      - name: oneapi
        config:
          edgemicro_proxy: edgemicro_demo_v0
          add_application_id_header: true
          authentication:
            apikey:
              header_name: "x-api-key"
          upstream_auth:
            basic_auth:
              username: username
              password: password
    routes:
      - name: control-service-route
        request_buffering: false
        response_buffering: false
        paths:
          - /edgemicro-demo-v0
From the Kong OneAPI service I always get a 502 Bad Gateway error.
Let me know if any more information is required.
I found the solution for this.
In the above YAML:
services:
  - name: control-service-integration
    url: http://localhost:8080/
change the url value to http://host.docker.internal:8080/. Inside the Kong container, localhost refers to the container itself rather than the Docker host, so Kong could never reach the service; host.docker.internal resolves to the host machine. After a lot of trial and error I am now able to connect to my app running on the host.

Connect to dynamically created new cluster on GKE

I am using the cloud.google.com/go SDK to programmatically provision the GKE clusters with the required configuration.
I set the ClientCertificateConfig.IssueClientCertificate = true (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#ClientCertificateConfig).
After the cluster is provisioned, I use the cluster CA certificate, client certificate and client key returned for the same cluster (see https://pkg.go.dev/google.golang.org/genproto/googleapis/container/v1?tab=doc#MasterAuth). Now that I have the above 3 attributes, I try to generate the kubeconfig for this cluster (to be used later by helm).
Roughly, my kubeconfig looks something like this:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <base64_encoded_data>
    server: https://X.X.X.X
  name: gke_<project>_<location>_<name>
contexts:
- context:
    cluster: gke_<project>_<location>_<name>
    user: gke_<project>_<location>_<name>
  name: gke_<project>_<location>_<name>
current-context: gke_<project>_<location>_<name>
kind: Config
preferences: {}
users:
- name: gke_<project>_<location>_<name>
  user:
    client-certificate-data: <base64_encoded_data>
    client-key-data: <base64_encoded_data>
On running kubectl get nodes with above config I get the error:
Error from server (Forbidden): serviceaccounts is forbidden: User "client" cannot list resource "serviceaccounts" in API group "" at the cluster scope
Interestingly if I use the config generated by gcloud, the only change is in the user section:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
This configuration seems to work just fine. But as soon as I add client cert and client key data to it, it breaks:
user:
  auth-provider:
    config:
      cmd-args: config config-helper --format=json
      cmd-path: /Users/ishankhare/google-cloud-sdk/bin/gcloud
      expiry-key: '{.credential.token_expiry}'
      token-key: '{.credential.access_token}'
    name: gcp
  client-certificate-data: <base64_encoded_data>
  client-key-data: <base64_encoded_data>
I believe I'm missing some details related to RBAC but I'm not sure what. Will you be able to provide me with some info here?
Also, referring to this question, I've tried to rely on just a username-password combination first, using that to apply a new clusterrolebinding in the cluster. But I'm unable to use the username-password approach at all. I get the following error:
error: You must be logged in to the server (Unauthorized)
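A hedged note on the clusterrolebinding idea mentioned above: the client certificate issued by GKE authenticates you as the common name "client" (visible in the Forbidden error), and that user has no RBAC grants on a fresh cluster. While authenticated with the working gcloud-based kubeconfig, you can grant that user access, for example:
kubectl create clusterrolebinding client-admin --clusterrole=cluster-admin --user=client
After that, the certificate-based kubeconfig should be able to list cluster resources.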

Openshift/Kubernetes ssh Secret doesn't work with Camel SFTP component

Long story short --->
While passing an ssh key that is retrieved from a secret in OpenShift to the Apache Camel SFTP component, it is not able to connect to the server; whereas if I directly pass the path of the actual ssh-key file, without creating a secret, to the same component, it works just fine. The exception is: invalid key. I tried to read the key file in Java and pass it as a byte array via the privateKey parameter, but no luck. Passing the key as bytes does not seem to work by any means.
SFTP-COMPONENT Properties->
sftp:
  host: my.sftp.server
  port: 22
  fileDirectory: /to
  fileName: /app/home/file.txt
  username: sftp-user
  privateKeyFilePath: /var/run/secret/secret-volume/ssh-privatekey  # also tried the privateKey param with a byte array
  knownHostsFile: resource:classpath:keys/known_hosts
  binary: true
Application Detail:
I am using Openshift 3.11.
Developing Camel-SpringBoot Micro-Integration services configured with fabric8 and spring-cloud-kubernetes plugins for deployment.
I am creating the secret as:
oc secrets new-sshauth sshsecret --ssh-privatekey=$HOME/.ssh/id_rsa
I have tried to reference the secret from deployment.yml and bootstrap.yml.
Using it as an env variable with secretKeyRef->
deployment.yml->
- name: SSH_SECRET
  valueFrom:
    secretKeyRef:
      name: sshsecret
      key: ssh-privatekey
bootstrap.yml->
spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        enableApi: true
        name: sshsecret
Using it as a mounted volume->
deployment.yml->
volumeMounts:
  - mountPath: /var/run/secret/secret-volume
    name: secret-volume
volumes:
  - name: secret-volume
    secret:
      secretName: sshsecret
bootstrap.yml->
spring:
  cloud:
    kubernetes:
      secrets:
        enabled: true
        paths: /var/run/secret/secret-volume
Note: Once the service is deployed, I can see the mounted volume attached to the container; I can even bash into the pod, go to the same directory, and locate the private key, which is completely intact.
Any help will be appreciated. Ask me all questions you need to know to solve this.
It was a very bad mistake on my side. I was using privateKeyUri in the Camel SFTP component instead of privateKeyFile. I hadn't noticed this because I was always changing those SFTP parameters directly in the config map.
By the way, for those trying to implement a similar use case: use the second option, mounting the secret into a volume, and then refer to the volume path inside Camel. Don't use the secret as an ENV variable; that way you don't need to enable the secrets API inside bootstrap.yml.
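For reference, a hedged sketch of what the equivalent endpoint URI looks like with the file-based option (option names as in the config above; host, paths and user are the question's values):
sftp://my.sftp.server:22/to?username=sftp-user&privateKeyFile=/var/run/secret/secret-volume/ssh-privatekey&knownHostsFile=resource:classpath:keys/known_hosts&binary=true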
Thanks anyway, cheers!
Rito

Can't access Google Cloud Datastore from Google Kubernetes Engine cluster

I have a simple application that Gets and Puts information from a Datastore.
It works everywhere, but when I run it from inside the Kubernetes Engine cluster, I get this output:
Error from Get()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
Error from Put()
rpc error: code = PermissionDenied desc = Request had insufficient authentication scopes.
I'm using the cloud.google.com/go/datastore package and the Go language.
I don't know why I'm getting this error since the application works everywhere else just fine.
Update:
Looking for an answer I found this comment on Google Groups:
In order to use Cloud Datastore from GCE, the instance needs to be
configured with a couple of extra scopes. These can't be added to
existing GCE instances, but you can create a new one with the
following Cloud SDK command:
gcloud compute instances create hello-datastore --project
--zone --scopes datastore userinfo-email
Would that mean I can't use Datastore from GKE by default?
Update 2:
I can see that when creating my cluster I didn't enable any permissions (they are disabled for most services by default). I suppose that's what's causing the issue.
Strangely, I can use Cloud SQL just fine even though it's disabled (using the cloudsql_proxy container).
So what I learnt in the process of debugging this issue was that:
During the creation of a Kubernetes Cluster you can specify permissions for the GCE nodes that will be created.
If you for example enable Datastore access on the cluster nodes during creation, you will be able to access Datastore directly from the Pods without having to set up anything else.
If your cluster node permissions are disabled for most things (default settings) like mine were, you will need to create an appropriate Service Account for each application that wants to use a GCP resource like Datastore.
Another alternative is to create a new node pool with the gcloud command, set the desired permission scopes and then migrate all deployments to the new node pool (rather tedious).
So at the end of the day I fixed the issue by creating a Service Account for my application, downloading the JSON authentication key, creating a Kubernetes secret which contains that key, and in the case of Datastore, I set the GOOGLE_APPLICATION_CREDENTIALS environment variable to the path of the mounted secret JSON key.
This way when my application starts, it checks if the GOOGLE_APPLICATION_CREDENTIALS variable is present, and authenticates Datastore API access based on the JSON key that the variable points to.
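For reference, a minimal Go sketch of how the client library picks those credentials up (the project ID and function name are placeholders):
import (
    "context"
    "log"

    "cloud.google.com/go/datastore"
)

func newDatastoreClient() *datastore.Client {
    // NewClient uses Application Default Credentials: if
    // GOOGLE_APPLICATION_CREDENTIALS points at a service account JSON key,
    // that key is used; otherwise the library falls back to the node's
    // default service account and its (possibly insufficient) OAuth scopes.
    client, err := datastore.NewClient(context.Background(), "my-project")
    if err != nil {
        log.Fatalf("datastore.NewClient: %v", err)
    }
    return client
}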
Deployment YAML snippet:
...
containers:
  - image: foo
    name: foo
    env:
      - name: GOOGLE_APPLICATION_CREDENTIALS
        value: /auth/credentials.json
    volumeMounts:
      - name: foo-service-account
        mountPath: "/auth"
        readOnly: true
volumes:
  - name: foo-service-account
    secret:
      secretName: foo-service-account
After struggling for some hours, I was also able to connect to Datastore. Here are my results, most of it from the Google docs:
Create Service Account
gcloud iam service-accounts create [SERVICE_ACCOUNT_NAME]
Get full iam account name
gcloud iam service-accounts list
The result will look something like this:
[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Give owner access to the project for the service account
gcloud projects add-iam-policy-binding [PROJECT_NAME] --member serviceAccount:[SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com --role roles/owner
Create key-file
gcloud iam service-accounts keys create mycredentials.json --iam-account [SERVICE_ACCOUNT_NAME]@[PROJECT_NAME].iam.gserviceaccount.com
Create app-key Secret
kubectl create secret generic app-key --from-file=credentials.json=mycredentials.json
This app-key secret will then be mounted in the deployment.yaml
Edit deployment file
deployment.yaml:
...
spec:
  containers:
    - name: app
      image: eu.gcr.io/google_project_id/springapplication:v1
      volumeMounts:
        - name: google-cloud-key
          mountPath: /var/secrets/google
      env:
        - name: GOOGLE_APPLICATION_CREDENTIALS
          value: /var/secrets/google/credentials.json
      ports:
        - name: http-server
          containerPort: 8080
  volumes:
    - name: google-cloud-key
      secret:
        secretName: app-key
I was using a minimalistic Dockerfile like:
FROM scratch
ADD main /
EXPOSE 80
CMD ["/main"]
which kept my Go app in an indefinite "hanging" state when trying to connect to the GCP Datastore. After LOTS of playing around I figured out that the scratch Docker image is missing things the Google Cloud library needs, most likely the CA root certificates used to verify TLS connections to Google's APIs. Using this Dockerfile now works:
FROM golang:alpine
RUN apk add --no-cache ca-certificates
ADD main /
EXPOSE 80
CMD ["/main"]
It does not require me to provide the Google credentials environment variable. The library seems to automatically detect where it's running (via Application Default Credentials rather than context.Background()) and uses the default service account which Google creates for your cluster nodes on GKE.
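A hedged aside, not from the original answer: if you want to keep the tiny scratch base image, a multi-stage build that copies the certificate bundle into the final stage achieves the same thing, e.g. COPY --from=builder /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/ right before ADD main; the Go binary then has the CA roots it needs to verify TLS without pulling in the rest of Alpine.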
