Trying to curl on Rancher pod startup but unable to curl - bash

Currently trying to make an init container on Rancher which will send a curl to one of my services. I am having repeated issues trying to get this to work and I cannot pinpoint why. I am certain my yaml format is correct, and I am installing busybox so curl should be available for use.

You are missing the -c option for the shell, which tells it that it should read commands from the command line instead of from a file. Note that everything after -c must be passed as a single string:
sh -c 'curl -X POST ...'
So you have to put -c as the first container arg and the entire curl command as the second:
...
- args:
  - -c
  - curl -X POST ...
...
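Putting that together, a minimal initContainers sketch might look like the following. The service name and URL are hypothetical placeholders, and note that the stock busybox image ships wget but not curl, so an image that actually includes curl (for example curlimages/curl) is assumed here:

```yaml
initContainers:
  - name: notify-service
    image: curlimages/curl:latest   # assumption: any image that ships curl
    command: ["sh"]
    args:
      - -c
      # everything after -c must be ONE string
      - curl -X POST http://my-service:8080/ready
```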

Related

I'm getting error code 127 while creating a Jenkins pipeline; here is the script:

pg_dump -h 10.12.0.4 -U pet--rsmb--prod-l1--usr -w -c -f 2022-08-10t1228z-data.sql
/var/lib/jenkins/workspace/BACKUP-RSMB--POSTGRESQL#tmp/durable-510acc0f/script.sh: 1: /var/lib/jenkins/workspace/BACKUP-RSMB--POSTGRESQL#tmp/durable-510acc0f/script.sh: pg_dump: not found
Your error clearly indicates that the shell executor cannot find the pg_dump command. Either pg_dump is not installed on the Jenkins server, or its directory has not been added to the executable $PATH.
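Exit code 127 is specifically the shell's "command not found" status, which is a quick way to confirm this diagnosis. A small sketch (the PostgreSQL bin path in the comment is an assumption; locate yours with `pg_config --bindir`):

```shell
# 127 means the shell could not find the command at all:
sh -c 'definitely_not_a_real_command_xyz' 2>/dev/null
echo "exit status: $?"   # prints: exit status: 127

# The fix is to put pg_dump's directory on PATH in the pipeline step, e.g.:
# export PATH="$PATH:/usr/lib/postgresql/14/bin"   # path is an assumption
command -v pg_dump >/dev/null || echo "pg_dump still not on PATH"
```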

How to give flags and env variables in k3s Airgap installation

I am trying to convert my k3s installation script, currently run via curl -sfL:
curl -sfL https://get.k3s.io | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr="$cluster_cidr" --disable=traefik" sh -
to an airgap installation with the command
INSTALL_K3S_SKIP_DOWNLOAD=true /usr/local/bin/install.sh in a shell script.
I want to pass the same flags and env vars K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr=$cluster_cidr --disable=traefik" to the INSTALL_K3S_SKIP_DOWNLOAD installation the same way as in the topmost curl command.
The topmost curl command works, but after replacing curl with SKIP_DOWNLOAD it fails.
PS: This is failing to set the flags and variables:
INSTALL_K3S_SKIP_DOWNLOAD=true /usr/local/bin/install.sh | K3S_KUBECONFIG_MODE="644" INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr="$cluster_cidr" --disable=traefik" sh -
Can someone help me here?
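For what it's worth, environment assignments only affect a command when they prefix that command directly; piping one command's output into another (as the failing attempt does) cannot carry them across. A generic demonstration, followed by an untested sketch of how the airgap invocation would place the same variables:

```shell
# Assignments go BEFORE the command they should affect, not after a pipe:
GREETING="hello" sh -c 'echo "$GREETING"'   # prints: hello

# Applied to the question (sketch only, using the flags from the question):
# INSTALL_K3S_SKIP_DOWNLOAD=true \
# K3S_KUBECONFIG_MODE="644" \
# INSTALL_K3S_EXEC="--flannel-backend=none --cluster-cidr=$cluster_cidr --disable=traefik" \
# /usr/local/bin/install.sh
```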

Coturn AWS EC2 problems running

I'm trying to set up and run a coturn TURN server on my EC2 instance, which is running Ubuntu. I have installed the coturn package and am trying to run the server from the command line only; here is my command:
sudo turnserver -a -syslog -o -n -u [My_Username]:[My_Password] -f -p 3478 -L [AWS_Internal_IP] -X [AWS_External_IP] -r [AWS_External_IP] -v --no-dtls --no-tls -—no-cli
I get turnserver invalid option -- '?'
and the server does not run. Please help.
You should configure coturn in its config file (/etc/turnserver.conf).
The last argument in your call to coturn does not start with a double dash, but with a dash followed by an em-dash, which the option parser rejects as invalid.
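You can confirm the stray character by dumping the bytes of the argument; an em-dash (U+2014) encodes to e2 80 94 in UTF-8, while an ASCII dash is 2d:

```shell
# Plain ASCII double dash: every byte is 2d/ASCII
printf -- '--no-cli' | od -An -tx1

# Dash followed by an em-dash (written here as its octal UTF-8 escape):
# the output contains "e2 80 94", the em-dash, which getopt cannot parse
printf -- '-\342\200\224no-cli' | od -An -tx1
```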

JSON configuration of a Consul watch handler

I'd like to run a script via sudo as my Consul watch handler. I can run it with the command
consul watch -type key -key mykey sudo -u myaccount /scripts/myscript.sh
But I don't know how to define it in the JSON configuration. I've tried the below, but it does not work:
{
"watches":[{
"type":"key",
"key":"mykey",
"handler_type":"script",
"args":["sh","-c","sudo","-u","myaccount","/scripts/myscript.sh"]
}]
}
I am using Consul 1.5.2; this is the error:
[ERR] agent: Failed to run watch handler '[sh -c sudo -u myaccount /scripts/myscript.sh]': exit status 1
Can anyone tell me what's wrong with my json configuration?
I moved the sh -c and got it to work with:
"watches":[{
"type":"key",
"key":"mykey",
"handler_type":"script",
"args":["/bin/sudo","-u","consul","/bin/sh","-c","/home/testscript.sh"]
}]
The -c requires the script to be executable. Also, you need the correct sudo privileges. You might even remove the sh -c altogether when the script is executable.
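The executable-bit point can be demonstrated directly. This sketch uses a throwaway temp script rather than the paths above, so it runs anywhere:

```shell
# `sh -c` on a bare path execs that path, so the file needs the executable bit.
script=$(mktemp)          # mktemp creates the file mode 600: NOT executable
echo 'echo ran' > "$script"

sh -c "$script" 2>/dev/null || echo "failed with exit $?"   # permission denied, exit 126

chmod +x "$script"
sh -c "$script"           # prints: ran
rm -f "$script"
```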

docker-compose ignores DOCKER_HOST

I am attempting to run 3 Docker images, MySQL, Redis and a project of mine on Bash for Windows (WSL).
To do that I have to connect to the Docker engine running on Windows, specifically on tcp://localhost:2375. I have appended the following line to .bashrc:
export DOCKER_HOST=tcp://127.0.0.1:2375
I can successfully run docker commands like docker ps or docker run hello-world, but whenever I cd into my project directory and run
sudo docker-compose up --build to load the images and spin up the containers, I get an error:
ERROR: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
I know that if I use the -H argument I can supply the address, but I'd rather find a more permanent solution. For some reason docker-compose seems to ignore the DOCKER_HOST environment variable and I can't figure out why.
Your problem is sudo. It's a totally different program from your shell and doesn't pass on the exported environment unless you specifically tell it to. You can either add the following line to your /etc/sudoers (or /etc/sudoers.d/docker):
Defaults env_keep += DOCKER_HOST
Or you can just pass it directly to the command line:
sudo DOCKER_HOST=$DOCKER_HOST docker-compose up --build
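You can see the mechanism without sudo at all: sudo's default env_reset behaves much like launching the child in a cleaned environment, which is easy to simulate with env -i (an analogy, since the exact sudo behavior depends on your sudoers configuration):

```shell
export DOCKER_HOST=tcp://127.0.0.1:2375

# An ordinary child process inherits the exported variable:
sh -c 'echo "child sees: ${DOCKER_HOST:-<unset>}"'
# prints: child sees: tcp://127.0.0.1:2375

# A cleaned environment (roughly what sudo's env_reset does) loses it:
env -i sh -c 'echo "cleaned sees: ${DOCKER_HOST:-<unset>}"'
# prints: cleaned sees: <unset>
```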
Setting DOCKER_HOST tells every docker invocation on the command line to use the HTTP API instead of the default local socket.
By default the HTTP API is not turned on:
$ sudo cat /lib/systemd/system/docker.service | grep ExecStart
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
You can add -H tcp://127.0.0.1:2375 to turn on the HTTP API on localhost,
but usually you would turn on the API for remote servers with -H tcp://0.0.0.0:2375 (do this only behind a proper firewall),
so you need to change the line in /lib/systemd/system/docker.service to:
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375 --containerd=/run/containerd/containerd.sock
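After editing the unit file, systemd has to reload the definition and restart the daemon for the new ExecStart to take effect. A sketch of the usual follow-up (and, as an aside, a drop-in override tends to survive package upgrades better than editing the file under /lib directly):

```shell
# Reload unit definitions and restart the daemon:
sudo systemctl daemon-reload
sudo systemctl restart docker

# Alternative: create a drop-in override instead of editing the packaged unit,
#   sudo systemctl edit docker
# and place the ExecStart override in the editor that opens.
```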
