Passing a parameter's value to shell function prints only the name of the parameter - bash

I need to pass a parameter to my shell function, which looks like this:
function deploy {
    docker create \
        --name=$1_temp \
        -e test_postgres_database=$2 \
        -e test_publicAddress="http://${3}:9696" \
        # other irrelevant stuff
I am passing the following parameters:
deploy test_container test_database ip_address   # $1 $2 $3
So when I pass those 3 parameters, a new container is created based on them. However, the third parameter is special: there is another function that gets the IP address of the container.
function get_container_ip_address {
    container_id=($(docker ps --format "{{.ID}} {{.Names}}" | grep "$1"))
    echo $(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' "${container_id[0]}")
}
So, the execution of the deploy function actually looks like this:
ip_address=$(get_container_ip_address test_container)
deploy test_container test_database ip_address
Let's say the IP address of the container is 1.1.1.1, so the ip_address=1.1.1.1.
However, when I execute the script and create the container, its IP address is:
"http://ip_address:9696" and not "http://1.1.1.1:9696".
I also tried the following:
...
-e test_publicAddress="http://$3:9696" \
...
But I still got the same result. Is there a way I can get the value of the passed parameter? By the way, I am sure it contains the needed IP address, as I use it elsewhere (not in a function) and printed it for testing. Thank you in advance!

Run it like this instead:
ip_address=$(get_container_ip_address test_container)
deploy test_container test_database $ip_address
When you call it without the $, the shell does not expand the variable; deploy just receives the literal string ip_address.
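A quick illustration of the difference, using the question's values:
ip_address=1.1.1.1
echo "http://ip_address:9696"    # literal:  http://ip_address:9696
echo "http://$ip_address:9696"   # expanded: http://1.1.1.1:9696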

Related

how to load array parameter in another shell file dynamically over ssh connection

I need to call my executable, which is located on an on-prem server, over an ssh connection and pass it dynamic parameters.
Based on my requirements, users should be able to add or remove parameters as they want when working with the executable on the on-prem server.
I wrote a translator to identify any new parameter added to the console, but now that I want to pass them via ssh, I am facing 2 problems:
what if I have a value that contains a space?
how do I load these values dynamically and use them as arguments in my shell script on the server?
Also take note that I am sending some additional parameters that are not related to my executable's arguments, but I need them as well.
params=(
    "$MASTER"
    "$NAME"
    "$QUEUE"
    service.enabled=true
)
for var_name in "${!conf__@}"; do
    key=${var_name#conf__}
    key=${key//_/.}
    value=${!var_name}
    params+=( --conf "$key=$value" )
done
echo "${params[@]}"
ssh -o StrictHostKeyChecking=no myuser@server_ip "/bin/bash -s" < deploy_script.sh "${params[@]}"
My deploy_script.sh file will be something like the below file.
#!/bin/bash
set -e
AR_MASTER=${1}
AR_NAME=${2}
AR_QUEUE=${3}
AR_SER_EN=${4}
# How can I get the other dynamic parameters???
main() {
    my-executable \
        --master "$AR_MASTER" \
        --name "$AR_NAME" \
        --queue "$AR_QUEUE" \
        --conf "service.enabled=$AR_SER_EN" \
        ??? # how to add the additional configuration dynamically?
}
main "$@"
Would you mind helping me figure it out?
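Not part of the original thread, but a hedged sketch of one common bash idiom: "${@:5}" expands to every positional argument from the fifth onward, one word each, so the remaining --conf pairs could be forwarded untouched (variable names reused from the question):
#!/bin/bash
set -e
AR_MASTER=${1}
AR_NAME=${2}
AR_QUEUE=${3}
AR_SER_EN=${4}
# Everything from the 5th argument onward, one array element per argument
EXTRA_CONF=("${@:5}")
main() {
    my-executable \
        --master "$AR_MASTER" \
        --name "$AR_NAME" \
        --queue "$AR_QUEUE" \
        --conf "service.enabled=$AR_SER_EN" \
        "${EXTRA_CONF[@]}"
}
main
Note that ssh joins the remote command words with spaces, so for values containing spaces each element would additionally need to be shell-quoted on the sending side (e.g. with printf '%q ') before being handed to ssh.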

RabbitMQ bind exchange to exchange in bash

I would like to bind an exchange to another exchange via a bash script (which I plan to use within a Dockerfile).
As simple code (like JS) it is fine and works perfectly, but I would like to use a plain bash script for it, if possible.
The part of the JS code, what I would like to have but in bash:
// ...
await ch1.assertExchange('test-exchange', 'headers');
await ch1.assertExchange('another-exchange', 'headers');
await ch1.bindExchange('test-exchange', 'another-exchange', '', {
    'x-match': 'all',
    target: 'pay-flow'
});
// ...
When I run the JS code, then it is fine. I got the following results in RabbitMQ:
bash-5.1# rabbitmqadmin -u guest -p guest list bindings
+-----------------+-----------------------+-----------------------+
| source | destination | routing_key |
+-----------------+-----------------------+-----------------------+
| | test-queue | test-queue |
| test-exchange | test-queue | |
| test-exchange | another-exchange | |
+-----------------+-----------------------+-----------------------+
What I tried in bash:
#!/bin/bash
rabbitmqadmin -u guest -p guest declare binding source=test-exchange destination=another-exchange
Then I got the message of:
** Not found: /api/bindings/%2F/e/test-exchange/q/another-exchange
From the CLI/rabbitmqadmin documentation, it seems I am supposed to (or only able to) bind an exchange to a queue.
Does anyone have an idea how to solve this? (Maybe write the binder code in Python and run it from the bash script?) Is there any kind of CLI tool capable of doing it?
Please see the command's help:
$ rabbitmqadmin help subcommands | grep -F 'declare binding'
declare binding source=... destination=... [destination_type=... routing_key=... arguments=...]
This is the correct set of arguments:
rabbitmqadmin -u guest -p guest declare binding source=test-exchange destination=another-exchange destination_type=exchange
Of course, before you run the above command the two exchanges must exist.
Tested as follows:
$ rabbitmqadmin declare exchange name=test-exchange type=direct
exchange declared
$ rabbitmqadmin declare exchange name=test-exchange-2 type=direct
exchange declared
$ rabbitmqadmin declare binding source=test-exchange destination=test-exchange-2 destination_type=exchange
binding declared
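To also carry the header arguments from the JS snippet over, the arguments=... field listed in the help output above takes a JSON object; a hedged, untested sketch:
rabbitmqadmin -u guest -p guest declare binding \
    source=test-exchange destination=another-exchange \
    destination_type=exchange \
    arguments='{"x-match": "all", "target": "pay-flow"}'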
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.

How to set variables using terragrunt before_hook

I need to use some gcloud commands in order to create a Redis instance on GCP, as Terraform does not support some options that I need.
I'm trying this:
terraform {
    # Before apply, run script.
    before_hook "create_redis_script" {
        commands = ["apply"]
        execute  = ["REDIS_REGION=${local.module_vars.redis_region}", "REDIS_PROJECT=${local.module_vars.redis_project}", "REDIS_VPC=${local.module_vars.redis_vpc}", "REDIS_PREFIX_LENGHT=${local.module_vars.redis_prefix_lenght}", "REDIS_RESERVED_RANGE_NAME=${local.module_vars.redis_reserved_range_name}", "REDIS_RANGE_DESCRIPTION=${local.module_vars.redis_range_description}", "REDIS_NAME=${local.module_vars.redis_name}", "REDIS_SIZE=${local.module_vars.redis_size}", "REDIS_ZONE=${local.module_vars.redis_zone}", "REDIS_ALT_ZONE=${local.module_vars.redis_alt_zone}", "REDIS_VERSION=${local.module_vars.redis_version}", "bash", "../../../scripts/create-redis-instance.sh"]
    }
}
The script is like this:
echo "[+]Creating IP Allocation Automatically using <$REDIS_VPC-network\/$REDIS_PREFIX_LENGHT>"
gcloud compute addresses create $REDIS_RESERVED_RANGE_NAME \
--global \
--purpose=VPC_PEERING \
--prefix-lenght=$REDIS_PREFIX_LENGHT \
--description=$REDIS_RANGE_DESCRIPTION \
--network=$REDIS_VPC
The error I get is:
terragrunt apply
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
ERRO[0002] Hit multiple errors:
Hit multiple errors:
exec: "REDIS_REGION=us-east1": executable file not found in $PATH
ERRO[0002] Unable to determine underlying exit code, so Terragrunt will exit with error code 1
I encountered the same issue and resigned myself to passing the values as parameters instead of environment variables. As the error shows, terragrunt treats the first element of execute as the executable to run, so a VAR=value prefix is not interpreted the way a shell would interpret it.
It involves modifying the script and is a far less clear declaration, but it works :|
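A minimal sketch of that workaround, reusing the question's names (the exact argument order is an assumption): the hook passes each value as a positional argument and the script maps them back.
#!/bin/bash
# create-redis-instance.sh (sketch): values arrive as positional arguments,
# passed from the before_hook roughly like:
#   execute = ["bash", "../../../scripts/create-redis-instance.sh",
#              local.module_vars.redis_reserved_range_name,
#              local.module_vars.redis_prefix_lenght,
#              local.module_vars.redis_range_description,
#              local.module_vars.redis_vpc]
REDIS_RESERVED_RANGE_NAME=$1
REDIS_PREFIX_LENGHT=$2
REDIS_RANGE_DESCRIPTION=$3
REDIS_VPC=$4
gcloud compute addresses create "$REDIS_RESERVED_RANGE_NAME" \
    --global \
    --purpose=VPC_PEERING \
    --prefix-length="$REDIS_PREFIX_LENGHT" \
    --description="$REDIS_RANGE_DESCRIPTION" \
    --network="$REDIS_VPC"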

Disable scheduling on second instance of same project on AWS

I have 2 instances of the same deployment/project on AWS Elastic Beanstalk.
Both contain a Laravel project with scheduling code that runs various commands; these can be found in the schedule method of the Kernel.php class within app/Console. The problem I have is that if a command runs from one instance, it will also run from the second instance, which is not what I want to happen.
What I would like to happen is for the commands to run from only one instance and not the other. How do I achieve this in the easiest way possible?
Is there a Laravel package which could help me achieve this?
From Laravel 5.6:
Laravel provides an onOneServer method, which you can use if your application instances share a single cache server. You could use something like ElastiCache to host Redis or Memcached and use it as the cache server for both of your application instances. Then you would be able to use the onOneServer method like this:
$schedule->command('report:generate')
->fridays()
->at('17:00')
->onOneServer();
For older versions of Laravel:
You could use the jdavidbakr/multi-server-event package. Once you have it set up you should be able to use it like:
$schedule->command('inspire')
->daily()
->withoutOverlappingMultiServer();
I had the same issue running some cron jobs (nothing related to Laravel) and I found a nice solution (I don't remember where I found it).
What I do is check whether the instance running the code is the first instance in the Auto Scaling Group; if it is, I execute the command, otherwise I just exit.
This is the way it's implemented:
#!/bin/bash
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
REGION=$(curl -s http://169.254.169.254/latest/dynamic/instance-identity/document | jq -r .region)
# Find the Auto Scaling Group name from the Elastic Beanstalk environment
ASG=$(aws ec2 describe-tags --filters "Name=resource-id,Values=$INSTANCE_ID" \
    --region "$REGION" --output json | jq -r '.[][] | select(.Key=="aws:autoscaling:groupName") | .Value')
# Find the first in-service instance in the Auto Scaling Group
FIRST=$(aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names "$ASG" \
    --region "$REGION" --output json | \
    jq -r '.AutoScalingGroups[].Instances[] | select(.LifecycleState=="InService") | .InstanceId' | sort | head -1)
# Exit 0 (success) only if this instance is the first one
[ "$FIRST" = "$INSTANCE_ID" ]
Try implementing those calls using PHP and it should work.
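For example, one hedged way to wire this in (the path and schedule are placeholders, assuming the script above is saved as /usr/local/bin/is-first-instance.sh): gate the scheduler's cron entry behind it, so it only fires on the first in-service instance.
# crontab entry (sketch): run the Laravel scheduler on one instance only
* * * * * /usr/local/bin/is-first-instance.sh && php /var/www/html/artisan schedule:run >> /dev/null 2>&1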

Problem with passing parameters in a shell function

I have the following shell script. There is a deploy function, that is used later in the script to deploy containers.
function deploy {
    if [[ $4 -eq QA ]]; then
        echo Building a QA Test container...
        docker create \
            --name=$1_temp \
            -e DATABASE=$3 \
            -e SPECIAL_ENV_VARIABLE \
            -p $2:9696 \
            ... # skipping other project specific settings
    else
        docker create \
            --name=$1_temp \
            -e DATABASE=$3 \
            -p $2:9696 \
            ... # skipping some project specific stuff
    fi
}
During the deployment, I have to run some tests on the application (which is in containers). I use different containers for that; however, I need to pass one additional parameter to the deploy function for my QA test container, because I need a different setting in docker create. Hence the if statement at the beginning, which checks whether the 4th argument equals 'QA': if it does, it creates a specific container with special env variables; otherwise, with just the 3 arguments, it creates a 'normal' one.
I was able to run the code with two separate deploy functions, but I want to make my code more readable. Anyway, this is how it should go:
Step 1: Normal tests:
deploy container_test 9696 test_database # 3 parameters
run tests... (this is not relevant to the question)
Step 2: QA testing:
deploy container_qa_test 9696 test_database QA # 4 parameters, so I can create
                                               # a special container
run tests... (again, not relevant to the question)
Step 3: If they are successful, deploy a production-ready container:
deploy production_container 9696 production_database # 3 parameters again
However what happens according to the log:
Step 1: test_container is created, but by the upper if branch, even though there is no 4th parameter equal to QA.
Step 2: this runs normal.
Step 3: the production container is built as a QA container.
It never reaches the else part, even if the condition is not satisfied. Can anyone give me some tips?
Just change [[ $4 -eq QA ]] to:
if [[ "$4" == "QA" ]]; then
-eq is for comparing numbers, while == compares strings. Inside [[ ]], -eq forces arithmetic evaluation, and a non-numeric string (or an empty 4th parameter) arithmetically evaluates to 0, so [[ $4 -eq QA ]] becomes 0 -eq 0 and is always true.
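A quick demonstration of the difference (the value is illustrative):
fourth="production_database"
[[ $fourth -eq QA ]] && echo "matches"    # prints "matches": both sides evaluate arithmetically to 0
[[ "$fourth" == "QA" ]] || echo "differs" # prints "differs": proper string comparison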
