Team,
/bin/bash: line 5: ./repo/clone.sh: No such file or directory
I cannot run the file above, but I can cat it just fine. I have tried my best and am still looking, but no luck so far.
My requirement is to mount a bash script from a ConfigMap into a directory inside the container and run it to clone a repo, but I am getting the error message shown above.
CronJob
spec:
concurrencyPolicy: Allow
jobTemplate:
metadata:
spec:
template:
metadata:
spec:
containers:
- args:
- -c
- |
set -x
pwd && ls
ls -ltr /
cat /repo/clone.sh
./repo/clone.sh
pwd
command:
- /bin/bash
envFrom:
- configMapRef:
name: sonarscanner-configmap
image: artifactory.build.team.com/product-containers/user/sonarqube-scanner:4.7.0.2747
imagePullPolicy: IfNotPresent
name: sonarqube-sonarscanner
securityContext:
runAsUser: 0
volumeMounts:
- mountPath: /repo
name: repo-checkout
dnsPolicy: ClusterFirst
initContainers:
- args:
- -c
- cd /
command:
- /bin/sh
image: busybox
imagePullPolicy: IfNotPresent
name: clone-repo
securityContext:
privileged: true
volumeMounts:
- mountPath: /repo
name: repo-checkout
readOnly: true
restartPolicy: OnFailure
securityContext:
fsGroup: 0
volumes:
- configMap:
defaultMode: 420
name: product-configmap
name: repo-checkout
schedule: '*/1 * * * *'
ConfigMap
kind: ConfigMap
metadata:
apiVersion: v1
data:
clone.sh: |-
#!bin/bash
set -xe
apk add git curl
#Containers that fail to resolve repo url can use below step.
repo_url=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Name | cut -d: -f2)
repo_ip=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Address | cut -d: -f2)
if grep ${repo_url} /etc/hosts; then
echo "git dns entry exists locally"
else
echo "Adding dns entry for git inside container"
echo ${repo_ip} ${repo_url} >> /etc/hosts
fi
cd / && cat /etc/hosts && pwd
git clone "https://$RU:$RT#${CODE_REPO_URL}/r/a/${CODE_REPO_NAME}" && \
(cd "${CODE_REPO_NAME}" && mkdir -p .git/hooks && \
curl -Lo `git rev-parse --git-dir`/hooks/commit-msg \
https://$RU:$RT@${CODE_REPO_URL}/r/tools/hooks/commit-msg; \
chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
cd ${CODE_REPO_NAME}
pwd
Output of kubectl describe pod
Warning FailedCreatePodSandBox 1s kubelet, node1 Failed create pod sandbox: rpc error: code = Unknown desc = failed to start sandbox container for pod "sonarqube-cronjob-1670256720-fwv27": Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "process_linux.go:303: getting the final child's pid from pipe caused \"EOF\"": unknown
Pod logs
+ pwd
+ ls
/usr/src
+ ls -ltr /repo/clone.sh
lrwxrwxrwx 1 root root 15 Dec 5 16:26 /repo/clone.sh -> ..data/clone.sh
+ ls -ltr
total 60
.
drwxr-xr-x 2 root root 4096 Aug 9 08:58 sbin
drwx------ 2 root root 4096 Aug 9 08:58 root
drwxr-xr-x 2 root root 4096 Aug 9 08:58 mnt
drwxr-xr-x 5 root root 4096 Aug 9 08:58 media
drwxrwsrwx 3 root root 4096 Dec 5 16:12 repo <<<<< MY MOUNTED DIR
.
+ cat /repo/clone.sh
#!bin/bash
set -xe
apk add git curl
#Containers that fail to resolve repo url can use below step.
repo_url=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Name | cut -d: -f2)
repo_ip=$(nslookup ${CODE_REPO_URL} | grep Non -A 2 | grep Address | cut -d: -f2)
if grep ${repo_url} /etc/hosts; then
echo "git dns entry exists locally"
else
echo "Adding dns entry for git inside container"
echo ${repo_ip} ${repo_url} >> /etc/hosts
fi
cd / && cat /etc/hosts && pwd
git clone "https://$RU:$RT#${CODE_REPO_URL}/r/a/${CODE_REPO_NAME}" && \
(cd "${CODE_REPO_NAME}" && mkdir -p .git/hooks && \
curl -Lo `git rev-parse --git-dir`/hooks/commit-msg \
https://$RU:$RT@${CODE_REPO_URL}/r/tools/hooks/commit-msg; \
chmod +x `git rev-parse --git-dir`/hooks/commit-msg)
cd code_dir
+ ./repo/clone.sh
/bin/bash: line 5: ./repo/clone.sh: No such file or directory
+ pwd
/usr/src
The working directory is /usr/src (as your pwd output shows), not /, so the relative path ./repo/clone.sh resolves to /usr/src/repo/clone.sh, which does not exist.
If you want to source the script in the current bash process (the shorthand for source is the dot), add a space between the dot and an absolute path:
. /repo/clone.sh
If you want to execute it in a child process, remove the dot:
/repo/clone.sh
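As a minimal sketch of the options, assuming the script stays mounted at /repo/clone.sh. Note two details from your own manifests: a ConfigMap projected with defaultMode: 420 (octal 0644) is not executable, and the script's #!bin/bash shebang is missing its leading slash, which would itself produce "No such file or directory" when the file is executed directly.

. /repo/clone.sh        # source it in the current bash process (space after the dot)
/repo/clone.sh          # run it in a child process; needs the execute bit and a valid shebang
bash /repo/clone.sh     # run it via the interpreter; works without the execute bit or shebang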
I am trying to connect my Spring Boot microservices to an Elasticsearch service running in Docker. Due to the massive changes in Elasticsearch 8.x, I am not finding anything relevant, but I did find a good docker-compose.yml example that does NOT contain Logstash, but does have multi-node Elasticsearch and Kibana. I am able to docker compose up and log in with the password stored in the .env file. So far so good... but now I don't know how to get my Spring Boot 2.7 services to push logging statements to Elasticsearch so I can view them in Kibana.
Here is my docker-compose:
version: "2.2"
services:
setup:
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
user: "0"
command: >
bash -c '
if [ x${ELASTIC_PASSWORD} == x ]; then
echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
exit 1;
elif [ x${KIBANA_PASSWORD} == x ]; then
echo "Set the KIBANA_PASSWORD environment variable in the .env file";
exit 1;
fi;
if [ ! -f config/certs/ca.zip ]; then
echo "Creating CA";
bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
unzip config/certs/ca.zip -d config/certs;
fi;
if [ ! -f config/certs/certs.zip ]; then
echo "Creating certs";
echo -ne \
"instances:\n"\
" - name: es01\n"\
" dns:\n"\
" - es01\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: es02\n"\
" dns:\n"\
" - es02\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
" - name: es03\n"\
" dns:\n"\
" - es03\n"\
" - localhost\n"\
" ip:\n"\
" - 127.0.0.1\n"\
> config/certs/instances.yml;
bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
unzip config/certs/certs.zip -d config/certs;
fi;
echo "Setting file permissions"
chown -R root:root config/certs;
find . -type d -exec chmod 750 \{\} \;;
find . -type f -exec chmod 640 \{\} \;;
echo "Waiting for Elasticsearch availability";
until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
echo "Setting kibana_system password";
until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
echo "All done!";
'
healthcheck:
test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
interval: 1s
timeout: 5s
retries: 120
es01:
depends_on:
setup:
condition: service_healthy
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata01:/usr/share/elasticsearch/data
ports:
- ${ES_PORT}:9200
environment:
- node.name=es01
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es02,es03
- ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es01/es01.key
- xpack.security.http.ssl.certificate=certs/es01/es01.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es01/es01.key
- xpack.security.transport.ssl.certificate=certs/es01/es01.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
es02:
depends_on:
- es01
image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
volumes:
- certs:/usr/share/elasticsearch/config/certs
- esdata02:/usr/share/elasticsearch/data
environment:
- node.name=es02
- cluster.name=${CLUSTER_NAME}
- cluster.initial_master_nodes=es01,es02,es03
- discovery.seed_hosts=es01,es03
- bootstrap.memory_lock=true
- xpack.security.enabled=true
- xpack.security.http.ssl.enabled=true
- xpack.security.http.ssl.key=certs/es02/es02.key
- xpack.security.http.ssl.certificate=certs/es02/es02.crt
- xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.http.ssl.verification_mode=certificate
- xpack.security.transport.ssl.enabled=true
- xpack.security.transport.ssl.key=certs/es02/es02.key
- xpack.security.transport.ssl.certificate=certs/es02/es02.crt
- xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
- xpack.security.transport.ssl.verification_mode=certificate
- xpack.license.self_generated.type=${LICENSE}
mem_limit: ${MEM_LIMIT}
ulimits:
memlock:
soft: -1
hard: -1
healthcheck:
test:
[
"CMD-SHELL",
"curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
]
interval: 10s
timeout: 10s
retries: 120
I tried adding Logstash to this, but it fails because port 5000 seems to be already allocated. And from what I can tell, it's not needed explicitly anymore?
My question is: I have Kibana available on http://localhost:5601, but I could really use some help on how to get my logging output into Elasticsearch. I am using a pretty standard Spring Boot configuration. Naturally this will eventually be cloud deployed, so ideally a solution for both local and AWS can be found.
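As a first sanity check (not Spring-specific), any client outside the compose network needs the generated CA and the elastic credentials to reach this cluster. A hedged sketch, assuming the stack above is running, that ES_PORT and ELASTIC_PASSWORD are exported from the same .env file, and that the docker compose cp path is where the setup service writes the CA:

# copy the CA out of the es01 container, then verify TLS and basic auth from the host
docker compose cp es01:/usr/share/elasticsearch/config/certs/ca/ca.crt ./ca.crt
curl --cacert ./ca.crt -u "elastic:${ELASTIC_PASSWORD}" "https://localhost:${ES_PORT}"

Whatever shipper or Elasticsearch client the Spring Boot services end up using will need that same CA (or a truststore built from it) plus credentials.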
My aim is to deploy a container-labelling-webhook solution onto my AKS cluster using Flux CD v2. Once I have it operational, I want to roll it out to multiple clusters.
Command used to bootstrap the AKS cluster (the Flux installation, I mean):
flux bootstrap git --url=https://github.xxxxxx.com/user1/test-repo.git --username=$GITHUB_USER --password=$GITHUB_TOKEN --token-auth=true --path=clusters/my-cluster
✔ Kustomization reconciled successfully
► confirming components are healthy
✔ helm-controller: deployment ready
✔ kustomize-controller: deployment ready
✔ notification-controller: deployment ready
✔ source-controller: deployment ready
✔ all components are healthy
Now I am trying to deploy my Helm charts. Note that the Helm chart deployment by itself works fine, just not via Flux.
flux create source helm label-webhook --url https://github.xxxxxx.com/user1/test-repo/tree/main/chart --namespace label-webhook --cert-file=./tls/label-webhook.pem --key-file=./tls/label-webhook-key.pem --ca-file=./tls/ca.pem --verbose
✚ generating HelmRepository source
► applying secret with repository credentials
✔ authentication configured
► applying HelmRepository source
✔ source created
◎ waiting for HelmRepository source reconciliation
✗ failed to fetch Helm repository index: failed to cache index to temporary file: Get "https://github.xxxxxx.com/user1/test-repo/tree/main/chart/index.yaml": x509: certificate signed by unknown authority
I am generating certs with the process below:
cat << EOF > ca-config.json
{
"signing": {
"default": {
"expiry": "43830h"
},
"profiles": {
"default": {
"usages": ["signing", "key encipherment", "server auth", "client auth"],
"expiry": "43830h"
}
}
}
}
EOF
cat << EOF > ca-csr.json
{
"hosts": [
"cluster.local"
],
"key": {
"algo": "rsa",
"size": 4096
},
"names": [
{
"C": "AU",
"L": "Melbourne",
"O": "xxxxxx",
"OU": "Container Team",
"ST": "aks1-dev"
}
]
}
EOF
docker run -it --rm -v ${PWD}:/work -w /work debian bash
apt-get update && apt-get install -y curl &&
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64 -o /usr/local/bin/cfssl && \
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64 -o /usr/local/bin/cfssljson && \
chmod +x /usr/local/bin/cfssl && \
chmod +x /usr/local/bin/cfssljson
cfssl gencert -initca ca-csr.json | cfssljson -bare /tmp/ca
cfssl gencert \
-ca=/tmp/ca.pem \
-ca-key=/tmp/ca-key.pem \
-config=ca-config.json \
-hostname="mutation-label-webhook,mutation-label-webhook.label-webhook.svc.cluster.local,mutation-label-webhook.label-webhook.svc,localhost,127.0.0.1" \
-profile=default \
ca-csr.json | cfssljson -bare /tmp/label-webhook
root@91bc7986cb94:/work# ls -alrth /tmp/
total 32K
drwxr-xr-x 1 root root 4.0K Jul 29 04:42 ..
-rw-r--r-- 1 root root 2.0K Jul 29 04:43 ca.pem
-rw-r--r-- 1 root root 1.8K Jul 29 04:43 ca.csr
-rw------- 1 root root 3.2K Jul 29 04:43 ca-key.pem
-rw-r--r-- 1 root root 2.2K Jul 29 04:43 label-webhook.pem
-rw-r--r-- 1 root root 1.9K Jul 29 04:43 label-webhook.csr
-rw------- 1 root root 3.2K Jul 29 04:43 label-webhook-key.pem
drwxrwxrwt 1 root root 4.0K Jul 29 04:43 .
root@91bc7986cb94:/work#
root@83faa77cd5f6:/work# cp -apvf /tmp/* .
'/tmp/ca-key.pem' -> './ca-key.pem'
'/tmp/ca.csr' -> './ca.csr'
'/tmp/ca.pem' -> './ca.pem'
'/tmp/label-webhook-key.pem' -> './label-webhook-key.pem'
'/tmp/label-webhook.csr' -> './label-webhook.csr'
'/tmp/label-webhook.pem' -> './label-webhook.pem'
root@83faa77cd5f6:/work# pwd
/work
chmod -R 777 tls/
helm upgrade --install mutation chart --namespace label-webhook --create-namespace --set secret.cert=$(cat tls/label-webhook.pem | base64 | tr -d '\n') --set secret.key=$(cat tls/label-webhook-key.pem | base64 | tr -d '\n') --set secret.cabundle=$(openssl base64 -A <"tls/ca.pem")
I am totally confused as to how to get Flux working.
Flux doesn't trust the certificate presented by your Git server github.xxxxxx.com.
A quick workaround is to use the --insecure-skip-tls-verify flag, as described here: https://fluxcd.io/docs/cmd/flux_bootstrap_git/
Full command:
flux create source helm label-webhook --url https://github.xxxxxx.com/user1/test-repo/tree/main/chart --namespace label-webhook --cert-file=./tls/label-webhook.pem --key-file=./tls/label-webhook-key.pem --ca-file=./tls/ca.pem --verbose --insecure-skip-tls-verify
It's interesting that there wasn't a problem with the flux bootstrap git step, but that step probably just creates the configuration for the repository and does not establish a connection to it.
Whatever certificates you are generating have nothing to do with your Git server's TLS certificate. It seems you're doing some admission-webhook magic, but the certs you generate there have nothing in common with the github.xxxxxx.com certificate, so there is no need to specify them in the --ca-file flag.
The permanent solution is to get the CA certificate that signed github.xxxxxx.com, so you need to ask the administrators of the Git server to provide you with that CA file and specify it in the --ca-file flag, not the one you created for your webhook experiments.
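If you want to see which CA you need before asking, a hedged sketch (openssl prints the chain the server presents; git-ca.pem is a hypothetical filename for the issuing CA once you have it):

# inspect the certificate chain presented by the internal Git server
openssl s_client -showcerts -connect github.xxxxxx.com:443 </dev/null

# once the issuing CA is saved as git-ca.pem, point Flux at it instead of the webhook CA
flux create source helm label-webhook \
  --url https://github.xxxxxx.com/user1/test-repo/tree/main/chart \
  --namespace label-webhook \
  --ca-file=./git-ca.pem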
When I start a Linux server with cloud-init, I have a few scripts in /etc/cloud/cloud.cfg.d/ and they run in reverse alphabetical order:
# ll /etc/cloud/cloud.cfg.d/
total 28
-rw-r--r-- 1 root root 173 Dec 10 12:38 00-cloudinit-lifecycle-hook.cfg
-rw-r--r-- 1 root root 2120 Jun 1 2021 05_logging.cfg
-rw-r--r-- 1 root root 590 Oct 26 17:55 10_aws_yumvars.cfg
-rw-r--r-- 1 root root 29 Dec 1 18:22 20_amazonlinux_repo_https.cfg
-rw-r--r-- 1 root root 586 Dec 10 12:38 50-cloudinit-tomcat.cfg
-rw-r--r-- 1 root root 585 Dec 10 12:40 60-cloudinit-newrelic.cfg
The last to execute is 00-cloudinit-lifecycle-hook.cfg, in which I complete the lifecycle for the Auto Scaling Group with a CONTINUE. The ASG fails if it doesn't receive this signal after a given timeout.
The issue is that even if there's an error in 50-cloudinit-tomcat.cfg, it still runs 00-cloudinit-lifecycle-hook.cfg instead of stopping.
How can I ensure cloud-init stops and never reaches the last script? I would like the ASG to never receive the CONTINUE signal if there's any error.
Here are the files:
EC2 instance user-data:
#cloud-config
bootcmd:
- [cloud-init-per, once, "app-volume", mkfs, -t, "ext4", "/dev/nvme1n1"]
mounts:
- ["/dev/nvme1n1", "/app-volume", "ext4", "defaults,nofail", "0", "0"]
merge_how:
- name: list
settings: [append]
- name: dict
settings: [no_replace, recurse_list]
50-cloudinit-tomcat.cfg
#cloud-config
merge_how:
- name: list
settings: [append]
- name: dict
settings: [no_replace, recurse_list]
runcmd:
- "#!/bin/bash -e"
- set +x
- echo ' '
- echo '# ===================================='
- echo '# Tomcat Cloud Init '
- echo '# /etc/cloud/cloud.cfg.d/'
- echo '# ===================================='
- echo ' '
- echo '#===================================='
- echo '# Run Ansible'
- echo '#===================================='
- echo ' '
- set -x
- ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml
When I run ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml directly on the instance, I get an error, and I know it returns 2:
ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml #shows errors
echo $? # shows "2"
00-cloudinit-lifecycle-hook.cfg
#cloud-config
merge_how:
- name: list
settings: [append]
- name: dict
settings: [no_replace, recurse_list]
runcmd:
- "/opt/lifecycles/lifecycle-hook-continue.sh"
An alternative I can think of is to send an ABANDON signal instead of CONTINUE as soon as there's an error in one of the cloud-init configs, but I can't find anything in the documentation about how to detect whether an error occurred.
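For illustration only, a hedged sketch of that alternative: record the failure in the tomcat config's runcmd and branch on it in the lifecycle hook's runcmd. The marker-file path and the ABANDON script name are assumptions, not part of the existing setup.

# in 50-cloudinit-tomcat.cfg, replace the ansible runcmd entry with:
ansible-playbook /opt/init-config/tomcat/tomcat-config.yaml || touch /var/tmp/cloud-init-failed

# in 00-cloudinit-lifecycle-hook.cfg, replace the single runcmd entry with:
if [ -f /var/tmp/cloud-init-failed ]; then
  /opt/lifecycles/lifecycle-hook-abandon.sh   # hypothetical counterpart that signals ABANDON
else
  /opt/lifecycles/lifecycle-hook-continue.sh
fi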
Question
Given this single-line string:
PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols
What would be the right way to assign each value to its designated variable so that I can use it afterward?
Context
I'm parsing the content of a k8s secret within a CronJob so that I can periodically call a stored procedure in our Postgres database.
To do so, I plan on using:
PG_OUTPUT_VALUE=$(PGPASSWORD=$PG_PASSWD psql -qtAX -h $PG_HOST -p $PG_PORT -U $PG_USER -d $PG_DATABASE -c $PG_TR_CLEANUP_QUERY)
echo $PG_OUTPUT_VALUE
The actual entire helm chart I'm currently trying to fix looks like this:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ template "fullname" $ }}-tr-cleanup-cronjob
spec:
concurrencyPolicy: Forbid
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
restartPolicy: OnFailure
volumes:
- name: postgres
secret:
secretName: {{ template "fullname" $ }}-postgres
containers:
- name: {{ template "fullname" $ }}-tr-cleanup-pod
image: postgres:12-alpine
imagePullPolicy: Always
env:
- name: PG_PROPS
valueFrom:
secretKeyRef:
name: {{ template "fullname" $ }}-postgres
key: postgres.properties
command:
- /bin/sh
- -c
- echo "props:" && echo $PG_PROPS && PG_USER=$(grep "^PG_USER=" | cut -d"=" -f2-) && echo $PG_USER && PG_TR_CLEANUP_QUERY="SELECT something FROM public.somewhere;" && echo $PG_TR_CLEANUP_QUERY && PG_OUTPUT_VALUE=$(PGPASSWORD=$PG_PASSWD psql -qtAX -h $PG_HOST -p $PG_PORT -U $PG_USER -d $PG_DATABASE -c $PG_TR_CLEANUP_QUERY) && echo PG_OUTPUT_VALUE
volumeMounts:
- name: postgres
mountPath: /etc/secrets/postgres
Current approach
As you can see, I'm currently using:
PG_USER=$(grep "^PG_USER=" | cut -d"=" -f2-)
That is because I initially thought the secret would be output on multiple lines, but it turns out that I was wrong. The echo $PG_USER displays an empty string.
The bash declare command is appropriate here, and is safer than eval.
Suppose the input contains something potentially malicious
line='PG_USER=postgres PG_PORT=1234 PG_PASS=icontain=and*symbols`ls`'
I'm assuming none of the values contain whitespace. Let's split that string
read -ra assignments <<< "$line"
Now, declare each one
for assignment in "${assignments[@]}"; do declare "$assignment"; done
Everywhere we examine the input, we maintain double quotes.
Let's see what we ended up with:
$ declare -p PG_USER PG_PORT PG_PASS
declare -- PG_USER="postgres"
declare -- PG_PORT="1234"
declare -- PG_PASS="icontain=and*symbols\`ls\`"
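Plugged into the CronJob context from the question, a minimal sketch would look like this, assuming the secret line is exposed in $PG_PROPS as in the chart, the container shell is switched to bash (read -a and declare are bashisms; the chart invokes /bin/sh), and PG_HOST, PG_DATABASE and PG_TR_CLEANUP_QUERY are defined as in the question (the question's psql snippet reads $PG_PASSWD, while the secret line shown defines PG_PASS, which is what this uses):

# split the single-line secret into NAME=value words and declare each one
read -ra assignments <<< "$PG_PROPS"
for assignment in "${assignments[@]}"; do declare "$assignment"; done
# run the cleanup query with the parsed credentials
PG_OUTPUT_VALUE=$(PGPASSWORD="$PG_PASS" psql -qtAX -h "$PG_HOST" -p "$PG_PORT" -U "$PG_USER" -d "$PG_DATABASE" -c "$PG_TR_CLEANUP_QUERY")
echo "$PG_OUTPUT_VALUE"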
Option 1
This function can be reused to assign each variable individually:
extract() {
echo "$INPUT" | grep -o "$1=.*" | cut -d" " -f1 | cut -d"=" -f2- ;
}
And to use it:
PG_USER=$(extract PG_USER)
PG_PORT=$(extract PG_PORT)
PG_PASS=$(extract PG_PASS)
Option 2
Another potential solution, with a security concern, is to simply use:
eval "$INPUT"
It should only be used if you have validated the input.
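A minimal sketch of such validation, assuming the same single-line, whitespace-free NAME=value format, so that inputs like the backtick payload shown earlier are rejected before eval runs:

# only allow simple NAME=value pairs built from a conservative character whitelist
re='^([A-Za-z_][A-Za-z_0-9]*=[A-Za-z0-9_@%+=:,./*-]*[[:space:]]*)+$'
if [[ "$INPUT" =~ $re ]]; then
  eval "$INPUT"
else
  echo "refusing to eval: unexpected characters in input" >&2
fi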
Contextual complete answer
And because I've presented the k8s context in the question, here is the answer plugged into that context.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
name: {{ template "fullname" $ }}-cronjob
spec:
concurrencyPolicy: Forbid
schedule: "* * * * *"
jobTemplate:
spec:
template:
spec:
restartPolicy: OnFailure
volumes:
- name: postgres
secret:
secretName: {{ template "fullname" $ }}-postgres
containers:
- name: {{ template "fullname" $ }}-cronjob-pod
image: postgres:12-alpine
imagePullPolicy: Always
env:
- name: PG_PROPS
valueFrom:
secretKeyRef:
name: {{ template "fullname" $ }}-postgres
key: postgres.properties
command:
- /bin/sh
- -c
- >-
extract() { echo "$PG_PROPS" | grep -o "$1=.*" | cut -d" " -f1 | cut -d"=" -f2- ; } &&
export PGHOST=$(extract PG_HOST) &&
export PGPORT=$(extract PG_PORT) &&
export PGDATABASE=$(extract PG_DATABASE) &&
export PGUSER=$(extract PG_USER) &&
PG_SCHEMA=$(extract PG_SCHEMA) &&
PG_QUERY="SELECT tenant_schema FROM $PG_SCHEMA.tenant_schema_mappings;" &&
PGPASSWORD=$(extract PG_PASSWD) psql --echo-all -c "$PG_QUERY"
volumeMounts:
- name: postgres
mountPath: /etc/secrets/postgres
I am trying to get my Capistrano deploy script working, but it is not doing the symlinking it is configured to do, as shown below.
set :linked_files, %w{config/database.yml}
set :linked_dirs, %w{log tmp vendor/bundle public/system}
When it runs the related command, I get the following:
WARN [SKIPPING] No Matching Host for /usr/bin/env [ -f /path/to/shared/config/database.yml ]
If I run this command on the server, either through ssh or through logging onto the server and running the command, I get no response from the command.
user: ~
$ [ -f /path/to/shared/config/database.yml ]
user: ~
$
The file does exist in the specified location and has the necessary permissions.
user: ~
$ ll /path/to/shared/config/
total 4.0K
drwxrwxr-x 2 user group 33 Nov 30 10:58 .
drwxrwxr-x 7 user group 89 Nov 30 10:58 ..
-rwxrwxr-x 1 user group 805 Nov 30 10:58 database.yml
user: ~
Shouldn't this return a true or a false, instead of nothing? Is there a configuration I may have changed that suppresses the output? I get no response at all whether the file exists or not.
To answer the actual question you ask: test (which is what [ is an alias for) does in fact not return output to stdout. It returns an exit code.
user: ~
$ [ -f /path/to/shared/config/database.yml ] # if the file exists
user: ~
$ echo $?
0
user: ~
$ [ -f /path/to/shared/config/database.yml ] # if the file does not exist
user: ~
$ echo $?
1
test -f /path/to/file (or [ -f /path/to/file ]) yields an exit code of 0 if the file exists or 1 if it does not. If you want to check that a file is there and echo the path to it, try:
[ -f /path/to/file ] && echo "/path/to/file"
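Or, as a slightly fuller sketch of acting on the exit code:

if [ -f /path/to/file ]; then
  echo "/path/to/file exists"
else
  echo "/path/to/file is missing"
fi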