I need to use some gcloud commands in order to create a Redis instance on GCP, as Terraform does not support some options that I need.
I'm trying this:
terraform {
  # Before apply, run script.
  before_hook "create_redis_script" {
    commands = ["apply"]
    execute  = ["REDIS_REGION=${local.module_vars.redis_region}", "REDIS_PROJECT=${local.module_vars.redis_project}", "REDIS_VPC=${local.module_vars.redis_vpc}", "REDIS_PREFIX_LENGTH=${local.module_vars.redis_prefix_length}", "REDIS_RESERVED_RANGE_NAME=${local.module_vars.redis_reserved_range_name}", "REDIS_RANGE_DESCRIPTION=${local.module_vars.redis_range_description}", "REDIS_NAME=${local.module_vars.redis_name}", "REDIS_SIZE=${local.module_vars.redis_size}", "REDIS_ZONE=${local.module_vars.redis_zone}", "REDIS_ALT_ZONE=${local.module_vars.redis_alt_zone}", "REDIS_VERSION=${local.module_vars.redis_version}", "bash", "../../../scripts/create-redis-instance.sh"]
  }
}
The script is like this:
echo "[+]Creating IP Allocation Automatically using <$REDIS_VPC-network\/$REDIS_PREFIX_LENGHT>"
gcloud compute addresses create $REDIS_RESERVED_RANGE_NAME \
--global \
--purpose=VPC_PEERING \
--prefix-lenght=$REDIS_PREFIX_LENGHT \
--description=$REDIS_RANGE_DESCRIPTION \
--network=$REDIS_VPC
The error I get is:
terragrunt apply
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
5b35d0bf15d0a0d61b303ed32556b85417e2317f
ERRO[0002] Hit multiple errors:
Hit multiple errors:
exec: "REDIS_REGION=us-east1": executable file not found in $PATH
ERRO[0002] Unable to determine underlying exit code, so Terragrunt will exit with error code 1
I encountered the same issue and resigned myself to passing the values as parameters instead of environment variables.
It means modifying the script, and the declaration is far less clear, but it works :|
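For completeness, the root cause is visible in the error message: Terragrunt execs the execute list directly, without a shell, so the first element must be an executable, and NAME=VALUE strings are not. If you prefer to keep the environment-variable style, one hedged workaround is to prepend env(1), which does accept NAME=VALUE pairs before the command (abridged to two variables here):

before_hook "create_redis_script" {
  commands = ["apply"]
  # env sets the NAME=VALUE pairs, then execs the rest of its argv
  execute  = ["env", "REDIS_REGION=${local.module_vars.redis_region}", "REDIS_VPC=${local.module_vars.redis_vpc}", "bash", "../../../scripts/create-redis-instance.sh"]
}

Alternatively, append the values after the script path and read them positionally (REDIS_REGION="$1", and so on) inside create-redis-instance.sh.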
I need to call my executable, which is located on an on-prem server, over an SSH connection, and pass it dynamic parameters.
Based on my requirements, users should be able to add or remove parameters as they want when working with the executable on the on-prem server.
I wrote a translator to identify any new parameter added to the console, but now that I want to pass them via SSH, I am facing two problems:
What if I have a value that contains spaces?
How do I load these values dynamically and use them as arguments in my shell script on the server?
Also note that I am sending some additional parameters that are not related to my executable's arguments, but I need them as well.
params=(
  "$MASTER"
  "$NAME"
  "$QUEUE"
  service.enabled=true
)
for var_name in "${!conf__@}"; do
  key=${var_name#conf__}
  key=${key//_/.}
  value=${!var_name}
  params+=( --conf "$key=$value" )
done
echo "${params[@]}"
ssh -o StrictHostKeyChecking=no myuser@server_ip "/bin/bash -s" < deploy_script.sh "${params[@]}"
My deploy_script.sh file is something like the one below.
#!/bin/bash
set -e
AR_MASTER=${1}
AR_NAME=${2}
AR_QUEUE=${3}
AR_SER_EN=${4}
# How can I get the other dynamic parameters???
main() {
  my-executable \
    --master "$AR_MASTER" \
    --name "$AR_NAME" \
    --queue "$AR_QUEUE" \
    --conf service.enabled="$AR_SER_EN" \
    ??? # how to add the additional configuration dynamically?
}
main "$@"
Would you mind helping me figure it out?
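One hedged approach to both problems, assuming bash on both ends: pre-quote every element with printf '%q' so values containing spaces survive the remote shell's re-parsing, then consume the fixed arguments on the server and forward whatever remains untouched.

# Sender side: %q-quotes each element of params before the
# remote shell re-parses the command string.
quoted=$(printf '%q ' "${params[@]}")
ssh -o StrictHostKeyChecking=no myuser@server_ip "/bin/bash -s -- $quoted" < deploy_script.sh

# Receiver side (deploy_script.sh): take the fixed arguments,
# then pass the remaining dynamic ones through verbatim.
AR_MASTER=$1; AR_NAME=$2; AR_QUEUE=$3; AR_SER_EN=$4
shift 4
my-executable \
  --master "$AR_MASTER" \
  --name "$AR_NAME" \
  --queue "$AR_QUEUE" \
  --conf service.enabled="$AR_SER_EN" \
  "$@"

Since bash -s treats everything after -- as positional parameters for the script read from stdin, "$@" on the server ends up holding exactly the dynamically added --conf pairs.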
I am trying to add a validation step to a GitLab repo holding a single Ansible role (with no playbook).
The structure of the role looks like:
.gitlab-ci.yml
tasks/
templates/
files/
vars/
handlers/
With the .gitlab-ci.yml looking like:
stages:
  - lint

job-lint:
  image:
    name: cytopia/ansible-lint:latest
    entrypoint: ["/bin/sh", "-c"]
  stage: lint
  script:
    - ansible-lint --version
    - ansible-lint . -x 106 tasks/*.yml
I need to skip the naming rule, thus ignoring rule 106.
Otherwise, I would like all files at the root of the repo to be checked. Since there is no playbook, lint has to be given the files that need to be checked... or at least, that is what I understood; I may have this point wrong. But in any case, if I give no name, lint returns OK but actually performs no check.
My problem is that I don't know how to tell it to check all the YAML files recursively, or even within a subdirectory. The above code returns an error:
ansible-lint: error: unrecognized arguments: tasks/deploy.yml tasks/localhost.yml tasks/main.yml tasks/managedata.yml tasks/psqlconf.yml
Any idea how to check all the files in a subdirectory, or across the whole role?
PS: I am using the cytopia image for ansible-lint, but I have no problem using another one, provided it's hosted on Docker Hub.
You should certainly be able to pass multiple YAML files as arguments to ansible-lint. I have version 4.1.1a0, and I'm able to use it like this, for example:
ansible-lint -x 106 roles/*/tasks/*.yml
I notice that you seem to have placed a . before your -x 106; that looks like an error. It doesn't look like ansible-lint will accept a directory name as an argument (it doesn't cause it to fail; it just doesn't accomplish anything).
I've tried this both with a locally installed ansible-lint and using the cytopia/ansible-lint image, which appears to perform identically:
docker run --rm -v $PWD:/src -w /src cytopia/ansible-lint -x 106 roles/*/tasks/*.yml
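Slotted back into your .gitlab-ci.yml, the fixed job might look like this (an untested sketch; it assumes your task files sit directly under tasks/ as shown above):

job-lint:
  image:
    name: cytopia/ansible-lint:latest
    entrypoint: ["/bin/sh", "-c"]
  stage: lint
  script:
    - ansible-lint --version
    - ansible-lint -x 106 tasks/*.yml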
If you want to check all the YAML files, you can pipe the output of find to xargs, something like this:
find ./ -not -name ".gitlab-ci.yml" -name "*.yml" | xargs ansible-lint -x 106
However, ansible-lint -x 106 ./ should work. Are you sure that your role really has errors? I've tested it both on ansible-galaxy init generated roles (with meta and all that stuff) and on roles containing only a tasks directory, and it worked every time.
EDIT: I tried creating an error in an existing role, replacing "present" with "latest" in a package install task:
$ ansible-galaxy install geerlingguy.nfs
$ cd ~/.ansible/roles/geerlingguy.nfs
$ sed -i "s/present/latest/g" tasks/setup-RedHat.yml
$ ansible-lint ./
Examining tasks/main.yml of type tasks
Examining tasks/setup-Debian.yml of type tasks
Examining tasks/setup-RedHat.yml of type tasks
Examining handlers/main.yml of type handlers
Examining meta/main.yml of type meta
[403] Package installs should not use latest
tasks/setup-RedHat.yml:2
Task/Handler: Ensure NFS utilities are installed.
and it actually worked, so you may want to run with verbose output to check whether linting really happens; maybe the rules for individual YAML files differ from those for whole roles.
When I ran my find-based check, I also got a lot of extra [204] Lines should be no longer than 160 chars warnings.
I have created a test network and I am able to install the chaincode I have written in Go. But when instantiating it I receive the following:
2020-03-24 08:00:00.843 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 04a Using default escc
2020-03-24 08:00:00.844 UTC [chaincodeCmd] checkChaincodeCmdParams -> INFO 04b Using default vscc
Error: chaincode argument error: unexpected end of JSON input
If I build the code in its own directory, it compiles without problems.
I can install and instantiate the code in another development network, but not in one I have created from scratch.
Help would be appreciated!
Thanks!
Use quotation marks when referencing the CC_CONSTRUCTOR variable. Otherwise, bash word-splits the value on its inner spaces; the quotation marks inside the value do not prevent that:
peer chaincode instantiate -C $CC_CHANNEL_ID -n $CC_NAME -v $CC_VERSION -c "$CC_CONSTRUCTOR" -o $ORDERER_ADDRESS
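To see the splitting in isolation (a generic bash demo, nothing Fabric-specific): printf receives one fragment per word from the unquoted expansion, but the whole JSON as a single argument from the quoted one.

CC_CONSTRUCTOR='{ "Args" : [ "Message" , "Hello" ] }'
printf '[%s]\n' $CC_CONSTRUCTOR     # unquoted: one bracketed fragment per word
printf '[%s]\n' "$CC_CONSTRUCTOR"   # quoted: the intact JSON string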
Thanks. I am setting env variables and then calling the instantiate. The same variables are set for the install, which works fine.
export CC_CONSTRUCTOR='{ "Args" : [ "Message" , "Hello World - Init message" ] }'
export CC_NAME="testcc"
export CC_PATH="testcc"
export CC_VERSION="1.1"
export CC_CHANNEL_ID="testchannel"
peer chaincode instantiate -C $CC_CHANNEL_ID -n $CC_NAME -v $CC_VERSION -c $CC_CONSTRUCTOR -o $ORDERER_ADDRESS
I have tried escaping some characters that might need it; that does not work. And again, the very same Go code and JSON constructor work on another test environment.
If I unset the CC_CONSTRUCTOR variable, I receive a different error message, so with high probability that is the problem.
In some cases, this error is generated by
const stateValue = await ctx.stub.getState(state);
when the state does not exist.
In other cases, it is because evaluateTransaction is used instead of submitTransaction when reading states.
I use a conda env that I create manually, not automatically using Snakemake. I do this to keep tighter version control.
Anyway, in my config.yaml I have the following line:
conda_env: '/rst1/2017-0205_illuminaseq/scratch/swo-406/snakemake'
Then, at the start of my Snakefile I read that variable (reading variables from config in your shell part does not seem to work, am I right?):
conda_env = config['conda_env']
Then, in a shell block, I reference said parameter like this:
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    threads: 8
    shell:
        '''
        #!/bin/bash
        source activate {conda_env}
        rsem-calculate-expression \
            --paired-end \
            {input} \
            {rsem_ref_base} \
            {analyzed_dir}/{wildcards.sample} \
            --strandedness reverse \
            --num-threads {threads} \
            --star \
            --star-gzipped-read-file \
            --star-output-genome-bam
        '''
Notice the {conda_env}. Now this gives me the following error:
Could not find conda environment: None
You can list all discoverable environments with `conda info --envs`.
Now, if I replace {conda_env} with its value directly, /rst1/2017-0205_illuminaseq/scratch/swo-406/snakemake, it does work! I don't have any trouble reading other parameters using this method (like rsem_ref_base and analyzed_dir in the example rule above).
What could be wrong here?
Highest regards,
Freek.
The pattern I use is to load variables into params, so something along the lines of:
rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    params:
        conda_env=config['conda_env']
    threads: 8
    shell:
        '''
        #!/bin/bash
        source activate {params.conda_env}
        rsem-calculate-expression \
            ...
        '''
That said, I'd never do this with a conda environment, because Snakemake has conda environment management built in. See the Integrated Package Management section in the docs for details. This makes reproducibility much more manageable.
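A minimal sketch of that built-in approach (the envs/rsem.yaml path and its contents are assumptions): declare the environment on the rule with the conda directive and run snakemake with --use-conda; the source activate line then disappears because Snakemake builds and activates the environment itself.

rule rsem_quantify:
    input:
        os.path.join(fastq_dir, '{sample}_R1_001.fastq.gz'),
        os.path.join(fastq_dir, '{sample}_R2_001.fastq.gz')
    output:
        os.path.join(analyzed_dir, '{sample}.genes.results'),
        os.path.join(analyzed_dir, '{sample}.STAR.genome.bam')
    conda:
        "envs/rsem.yaml"
    threads: 8
    shell:
        'rsem-calculate-expression --paired-end {input} ... --num-threads {threads}'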
I have a very long command with many arguments, and somehow it's not working the way it should. The following knife command connects to a remote vCenter and creates a VM called node1. How do I wrap this command and run it inside Ruby? Am I doing something wrong?
var_name = 'node1'
var_folder = 'folder1'
var_datastore = 'datastore1'
var_template_file = 'template_foo'
var_template = 'foo'
var_location = 'US'
cmd = 'knife vsphere vm clone var_name --dest-folder var_folder --datastore var_datastore --template-file var_template_file --template var_template -f var_location'
system(cmd)
require 'shellwords'
cmd = "knife vsphere vm clone #{var_name.shellescape} --dest-folder #{var_folder.shellescape} --datastore #{var_datastore.shellescape} --template-file #{var_template_file.shellescape} --template #{var_template.shellescape} -f #{var_location.shellescape}"
In your specific case it would work even without shellescape, but better safe than sorry.
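Alternatively, Kernel#system accepts an argument vector; with separate arguments no shell is involved at all, so nothing needs escaping (same variables as above):

system('knife', 'vsphere', 'vm', 'clone', var_name,
       '--dest-folder', var_folder,
       '--datastore', var_datastore,
       '--template-file', var_template_file,
       '--template', var_template,
       '-f', var_location)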
Variables are not resolved in your command because Ruby does not interpolate inside single-quoted strings. Try using #{var_name} etc. for all variables in the cmd variable, inside a double-quoted string.