I am trying to use the OS Process Sampler to run AWS CLI commands in JMeter.
I will be running this on Docker, where both JMeter and the AWS CLI will be installed. But before I can do that, I tried to run this locally on my Mac, and so far I have been unable to get the aws command to run.
On my local terminal, for example, I can run:
a. aws --version
b. bash j.sh (where j.sh contains aws --version)
Both return aws-cli/2.0.8 Python/3.7.4 Darwin/18.7.0 botocore/2.0.0dev12
This confirms the AWS CLI is available on the PATH and accessible globally.
However, when I run the same command from the OS Process Sampler, I have tried the following:
a.
Working Directory: /Users/tester/Downloads/apache-jmeter-5.1.1/bin
Environment: {}
Executing: bash aws --version
RESPONSE: bash: aws --version: No such file or directory
b.
Working Directory: /Users/tester
Environment: {}
Executing: bash j.sh
where j.sh just contains the aws --version command
RESPONSE: j.sh: line 1: aws: command not found
What am I missing?
Thanks Dmitri and Vadim for your responses to my question. Unfortunately, both examples are for Windows, where the OS Process Sampler works differently compared to Mac. I got it to work with a few more tweaks using the OS Process Sampler on Mac as well:
The key difference on Mac is that JMeter needs the full path to where the AWS CLI is installed:
/usr/local/bin/aws
I was able to find this via the which command:
which aws
I also decided to use a Beanshell sampler along with logging, which allows me to script and better control my other needs.
Here is my reference code that works:
try {
    // Absolute path is required: JMeter does not inherit the login shell's PATH
    Process p = Runtime.getRuntime().exec("/usr/local/bin/aws --version");

    // Read stdout before waiting, so a chatty command cannot fill the
    // pipe buffer and block
    BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
    StringBuilder logCommandOutput = new StringBuilder();
    String line;
    while ((line = in.readLine()) != null) {
        logCommandOutput.append(line);
    }
    in.close();
    p.waitFor();

    log.info("Output: " + logCommandOutput.toString());
} catch (Exception e) {
    log.error("Exception while running aws: " + e);
}
Hope this helps someone who is trying to do the same.
For those who need to run AWS CLI v2 commands from JMeter on Windows (I use Windows 10), below is my setup.
The original command is:
$ aws dynamodb list-tables
I believe you need to configure your OS Process Sampler as follows:
Command: /bin/bash
Parameter 1: -c
Parameter 2: aws --version
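In other words, the sampler ends up executing the equivalent of:
# bash parses the quoted string, resolves aws via its own PATH, and runs it
/bin/bash -c "aws --version"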
From bash manual page:
-c If the -c option is present, then commands are read from
the first non-option argument command_string. If there are
arguments after the command_string, the first argument is
assigned to $0 and any remaining arguments are assigned to
the positional parameters. The assignment to $0 sets the
name of the shell, which is used in warning and error
messages.
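A quick demonstration of the $0 assignment described above:
# The first argument after the command string becomes $0, the rest
# become positional parameters
bash -c 'echo "name=$0 first=$1"' mylabel hello
# prints: name=mylabel first=hello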
More information: How to Run External Commands and Programs Locally and Remotely from JMeter
To run an aws cli command in JMeter:
1. Add an OS Process Sampler to your test plan.
2. In the Command field, enter your command: aws.
3. In the Command Parameters section, add any parameters you need, for example: --version.
4. Add a View Results Tree listener to your Thread Group, run the test, and check the response body:
aws-cli/2.0.0 Python/3.7.5 Windows/10 botocore/2.0.0dev4
I am attempting to utilize the AWS CLI along with a for loop in bash to iteratively purge multiple SQS message queues. The bash script works almost as intended; the problem I am having is with the return value each time the AWS CLI sends a request. When the request is successful, it returns an empty value and opens an interactive pager in the command line. I then have to manually type q to exit the interactive screen and allow the for loop to continue to the next iteration. This becomes very tedious and time-consuming when attempting to purge a large number of queues.
Is there a way to configure the AWS CLI to disable this interactive pager from popping up for every return value? Or a way to pipe the return values into a separate file instead of displaying them?
I have played around with configuring different return value types (text, yaml, JSON) but haven't had any luck. The --no-pagination parameter doesn't change the behavior either.
Here's an example of the bash script I'm trying to run:
for x in 1 2 3; do
aws sqs purge-queue --queue-url https://sqs.<aws-region>.amazonaws.com/<id>/<env>-$x-<queueName>.fifo;
done
Having just run into this issue myself, I was able to disable the behaviour by invoking the AWS CLI as AWS_PAGER="" aws ....
Alternatively, you could simply export AWS_PAGER="" at the top of your (bash) script.
Source: https://github.com/aws/aws-cli/pull/4702#issue-344978525
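Applied to the loop from the question, a minimal sketch (the queue URL placeholders are the question's own):
#!/bin/bash
# Disable the AWS CLI v2 client-side pager for every command below
export AWS_PAGER=""

for x in 1 2 3; do
  aws sqs purge-queue --queue-url https://sqs.<aws-region>.amazonaws.com/<id>/<env>-$x-<queueName>.fifo
done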
You can also use --no-cli-pager in AWS CLI version 2.
See the "Client-side pager" section here https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html
You can disable the pager either by exporting AWS_PAGER="" or by modifying your AWS CLI config file.
export AWS_PAGER=""
# or update your ~/.aws/config with
[default]
cli_pager=
Alternatively, you can set the pager back to the less program:
export AWS_PAGER="less"
or make the corresponding config change.
Ref: https://docs.aws.amazon.com/cli/latest/userguide/cli-usage-pagination.html#cli-usage-pagination-clientside
You can set the environment variable PAGER to "cat" to force the AWS CLI not to start up less:
PAGER=cat aws sqs list-queues
I set this up as a shell alias to make my life easier:
# ~/.zshrc
alias aws="PAGER=cat aws"
I am using the AWS CLI v2 via Docker, and passing --env AWS_PAGER="" on the docker run command fixed this issue for me on Windows 10 using Git Bash.
I set it up as an alias as well so things work with jq.
How to set your docker env values:
https://docs.docker.com/engine/reference/commandline/run/#set-environment-variables--e---env---env-file
Example alias:
docker run --rm -it -v c:/users/me/.aws:/root/.aws --env AWS_PAGER="" amazon/aws-cli
Inside your ~/.aws/config file, add:
cli_pager=
I am trying to execute some gcloud commands in a bash script from crontab. The script executes successfully from the command shell but not from the cron job.
I have tried with:
Setting the full path to gcloud, like:
/etc/bash_completion.d/gcloud
/home/Arturo/.config/gcloud
/usr/bin/gcloud
/usr/lib/google-cloud-sdk/bin/gcloud
Setting this at the beginning of the script:
/bin/bash -l
Setting in the crontab:
51 21 30 5 6 CLOUDSDK_PYTHON=/usr/bin/python2.7;
/home/myuser/folder1/myscript.sh param1 param2 param3 -f >>
/home/myuser/folder1/mylog.txt
Setting inside the script:
export CLOUDSDK_PYTHON=/usr/bin/python2.7
Setting inside the script:
sudo ln -s /home/myuser/google-cloud-sdk/bin/gcloud /usr/bin/gcloud
Version Ubuntu 18.04.3 LTS
Command to execute: gcloud config set project myproject
but nothing is working; maybe I am doing something wrong. I hope you can help me.
You need to set your user in your crontab for it to run the gcloud command. As well explained in this other post here, you need to modify your crontab to fetch the data in your Cloud SDK for the execution to occur properly; it doesn't seem that you have made this configuration.
Another option that I would recommend you try is using Cloud Scheduler to run your gcloud commands. This way, you can use gcloud for your cron jobs in a more integrated and easy way. You can find more information about this option here: Creating and configuring cron jobs
Let me know if the information helped you!
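As a sketch of the first suggestion, the crontab entry from the question could load the user's profile before running the script, so that gcloud and its CLOUDSDK_* variables are on the PATH. The schedule and paths below are the question's own; the profile location is an assumption:
# Source the user's profile, then run the script, capturing stderr too
51 21 30 5 6 . /home/myuser/.profile && /home/myuser/folder1/myscript.sh param1 param2 param3 -f >> /home/myuser/folder1/mylog.txt 2>&1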
I found my error. The problem was only with the command "gcloud dns record-sets transaction start"; the other commands were executing successfully but logging nothing, which made me think they were not executing at all. This command creates a temp file, e.g. transaction.yaml, and that file could not be created in the default path for gcloud (snap/bin), but the log simply didn't write anything! I had to specify the path and name for that file with the flag --transaction-file=mytransaction.yaml. Thanks for your support and ideas.
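For reference, a sketch of what the fixed command might look like; the zone name is a hypothetical placeholder, and --transaction-file is the flag mentioned above:
# Write the transaction file somewhere writable instead of gcloud's
# default location
gcloud dns record-sets transaction start --zone=my-managed-zone --transaction-file=mytransaction.yaml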
I have run into the same issue before. I fixed it by forcing the profile to load in my script.sh, loading the gcloud environment variables with it. Example below:
#!/bin/bash
source /etc/profile
gcloud config set project myproject
echo "Project set to myproject."
I hope this can help others in the future with similar issues, as this also helped me when trying to set GKE nodes from 0-4 on a schedule.
Adding the below line to the shell script fixed my issue
# Execute user profile
source /root/.bash_profile
Maybe some of you OpenShift/Docker pros can help me out with this one. Apologies in advance for my formatting; I'm on mobile and don't have access to exact error codes right now. I can supply more detailed input/stderr later if needed.
Some details about the environment:
- a functioning OC pod running a single PostgreSQL 9.6 container
- CentOS 7 host
- CentOS 7 local machine
- bash 4.2 shell (both in the container and on my local box)
My goal is to use a one-liner bash command to rsh into the postgresql container and run the following command to print said container's databases to my local terminal, something like this:
[root@mybox ~]$ oc rsh pod-name /path/to/command/executable/psql -l
Result:
rsh cannot find required library, error code 126
The issue I am hitting is that, when executing this one-liner, rsh does not see the target pod's environment variables. This means it cannot find the supporting libraries that the psql command needs. If I don't supply the full path as shown in my example, it cannot even find the psql command itself.
Annoyingly, running the following one-liner prints all of the pod's environment variables (including the ones I need for psql) to my local terminal, so they should be accessible somehow.
[root@mybox ~]$ oc rsh pod-name env
Since this is to be executed as part of an automated procedure, the simple, interactive rsh approach (which works, as described below) is not an option.
[root@mybox ~]$ oc rsh pod-name
sh-4.2$ psql -l
(pod happily prints the database info in the remote terminal)
I have tried executing the script which defines the psql environment variables and then chaining the desired command, but I get permission denied when trying to execute the env script.
[root@mybox ~]$ oc rsh pod-name /path/to/env/define/script && psql -l
permission denied, rsh error code 127
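One thing worth noting about that failing one-liner (an observation of my own, not from the thread): in "oc rsh pod-name script && psql -l", the && is parsed by the local shell, so psql runs on the local box rather than in the pod. A sketch of keeping the whole chain inside the container, assuming bash is available there (it is, per the interactive session above) and using source so the env script only needs read permission:
# Single remote shell: the quoted string is parsed by the pod's bash,
# so the env script and psql share one environment
oc rsh pod-name bash -c 'source /path/to/env/define/script && psql -l'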
I have my crontab set up as follows (this is inside a docker container).
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
SHELL=/bin/bash
5 * * * * bash /usr/local/bin/process-logs > /proc/1/fd/1 2>/proc/1/fd/2
The /usr/local/bin/process-logs is designed to expose some MongoDB logs using mtools to a simple web server.
The problematic part of the script is fairly simple. raw_name is archive_name without the file extension.
aws s3 cp "s3://${s3_bucket}/${file_name}" "${archive_name}"
gunzip "${archive_name}"
mlogvis --no-browser "${raw_name}"
If I manually run the command as specified in the crontab config above
bash /usr/local/bin/process-logs > /proc/1/fd/1 2>/proc/1/fd/2
It all works as expected (this is the expected output from mlogvis)
...
copying /usr/local/lib/python3.5/dist-packages/mtools/data/index.html to /some/path/mongod.log-20190313-1552456862.html
...
When the script gets triggered via crontab, it throws the following error:
usage: mlogvis [-h] [--version] [--no-progressbar] [--no-browser] [--out OUT]
[--line-max LINE_MAX]
mlogvis: error: unrecognized arguments: mongod.log-20190313-1552460462
The mlogvis command that caused the above error (actual values, not parameters):
mlogvis --no-browser "mongod.log-20190313-1552460462"
Again, if I run this command myself, it all works as expected.
mlogvis: http://blog.rueckstiess.com/mtools/mlogvis.html
I don't believe this is an issue with the file not having correct permissions or not existing, as mlogvis produces a different error in those conditions. I've also tested removing '-' from the file name, thinking it might be trying to parse the parts as arguments, but it made no difference.
I know cron execution doesn't have the same environment as the user I tested the script as. I've set the PATH to be the same as the user's, and when the container starts up I execute env >> /etc/environment so all the environment vars are properly set.
Does anyone know of a way to debug this or has anyone encountered similar? All other components of the script are functioning except mlogvis which is core to the purpose of this job.
Summary of what I've tried as a fix:
Set environment and PATH for cron execution to be the same as the user I tested the script as
Replace - in file name(s) to see if it was parsing the parts as arguments
Hardcode a filename with full permissions to see if it was permissions-related
Manually run the script -> this works
Manually run the mlogvis command in isolation -> this works
Try loading /home/user/.bash_profile before executing the script and try again. I suspect that you have a missing PATH or another environment variable that is not set.
source /home/user/.bash_profile
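Combined with the crontab entry from the question, that could look like this (a sketch, assuming the profile lives at /home/user/.bash_profile):
# Load the profile so PATH and friends match an interactive session
5 * * * * . /home/user/.bash_profile; bash /usr/local/bin/process-logs > /proc/1/fd/1 2>/proc/1/fd/2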
Please post your complete script, because when executing via crontab you have to be sure your raw_name variable was properly created. As it seems to depend on archive_name, posting some more context can help us to help you.
In any case, if you are using bash, you can try something like:
aws s3 cp "s3://${s3_bucket}/${file_name}" "${archive_name}"
gunzip "${archive_name}"
# Here you have to be sure that archive_name is correct.
# Strip only the trailing .gz; "%%.*" would cut at the first dot and
# mangle names like mongod.log-20190313-1552456862.gz
raw_name_2=${archive_name%.gz}
mlogvis --no-browser "${raw_name_2}"
It may not solve your issue, but it will probably take you closer to the right path.