Is there a simple way to add the very verbose parameter '-vvv' to ssh when it is invoked by ansible-playbook?
I tried to call:
ansible-playbook -vvvv --ssh-extra-args '-vvv' --inventory ${INVENTORYFILE} --vault-password-file ${VAULT_PASSWORD_FILE} ansible/playbook-program-installation.yaml --extra-vars "target=$HOST_TARGET" "$@"
But the result is:
usage: ansible-playbook [-h] [--version] [-v] [--private-key PRIVATE_KEY_FILE]
[-u REMOTE_USER] [-c CONNECTION] [-T TIMEOUT]
[--ssh-common-args SSH_COMMON_ARGS]
[--sftp-extra-args SFTP_EXTRA_ARGS]
[--scp-extra-args SCP_EXTRA_ARGS]
[--ssh-extra-args SSH_EXTRA_ARGS]
[...]
ansible-playbook: error: argument --ssh-extra-args: expected one argument
[...]
How can I force ansible-playbook to accept the parameter '-vvv' for ssh, so that I get information about why the ssh connection fails?
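The "expected one argument" error usually just means the option parser treated -vvv as a new flag rather than as the value of --ssh-extra-args. Attaching the value with = normally gets it through; a minimal sketch of the same command with only that change (everything else taken from the question):
ansible-playbook -vvvv --ssh-extra-args='-vvv' \
  --inventory ${INVENTORYFILE} \
  --vault-password-file ${VAULT_PASSWORD_FILE} \
  ansible/playbook-program-installation.yaml \
  --extra-vars "target=$HOST_TARGET" "$@"
Depending on the Ansible version, -vvvv on ansible-playbook itself may already pass -vvv to ssh and print the full ssh command line it executes.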
I'm using an official image from Microsoft which contains SQL tools used to interact with Microsoft SQL Servers. If I run the container interactively, I can run sqlcmd at the command line without any issue, because it is in the PATH variable:
$ docker run --rm -it -v $(pwd):/var/update/ -w /var/update mcr.microsoft.com/mssql-tools:latest
root@df20bd19b982:/var/update# sqlcmd
Microsoft (R) SQL Server Command Line Tool
Version 13.1.0007.0 Linux
Copyright (c) 2012 Microsoft. All rights reserved.
usage: sqlcmd [-U login id] [-P password]
[-S server or Dsn if -D is provided]
[-H hostname] [-E trusted connection]
[-N Encrypt Connection][-C Trust Server Certificate]
[-d use database name] [-l login timeout] [-t query timeout]
[-h headers] [-s colseparator] [-w screen width]
[-a packetsize] [-e echo input] [-I Enable Quoted Identifiers]
[-c cmdend]
[-q "cmdline query"] [-Q "cmdline query" and exit]
[-m errorlevel] [-V severitylevel] [-W remove trailing spaces]
[-u unicode output] [-r[0|1] msgs to stderr]
[-i inputfile] [-o outputfile]
[-k[1|2] remove[replace] control characters]
[-y variable length type display width]
[-Y fixed length type display width]
[-p[1] print statistics[colon format]]
[-R use client regional setting]
[-K application intent]
[-M multisubnet failover]
[-b On error batch abort]
[-D Dsn flag, indicate -S is Dsn]
[-X[1] disable commands, startup script, environment variables [and exit]]
[-x disable variable substitution]
[-? show syntax summary]
root@b33a916d4230:/var/update# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/mssql-tools/bin
root@b33a916d4230:/var/update#
sqlcmd is present in the /opt/mssql-tools/bin/ folder, which is part of the PATH environment variable.
But if I try to execute sqlcmd via docker run ... bash -c 'sqlcmd', it is not found, even though the interactive shell above shows that /opt/mssql-tools/bin is already in the PATH.
$ docker run --rm -it -v $(pwd):/var/update/ -w /var/update mcr.microsoft.com/mssql-tools:latest bash -c "sqlcmd"
bash: sqlcmd: command not found
And to see the PATH env. variable, I did the following:
$ docker run --rm -it -v $(pwd):/var/update/ -w /var/update mcr.microsoft.com/mssql-tools:latest bash -c 'echo $PATH'
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Question 1: Why is the PATH variable different when we use bash -c 'commands'?
Question 2: If bash -c or sh -c creates a new shell, how can I execute shell commands with the container's environment variables, especially the PATH environment variable?
When you run an interactive shell as root, it runs the commands from /root/.bashrc, which (in this particular image) include
export PATH="$PATH:/opt/mssql-tools/bin"
A better Docker image would have that setting in the Dockerfile itself, which exports it to all users of the image. You can build an image like that yourself easily.
FROM mcr.microsoft.com/mssql-tools:latest
ENV PATH="$PATH:/opt/mssql-tools/bin"
(Also, the export is superfluous; the variable is already exported by the shell.)
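With that two-line Dockerfile saved in the current directory, building and using it could look like this (mssql-tools-path is just an illustrative tag name):
docker build -t mssql-tools-path .
docker run --rm -it -v $(pwd):/var/update/ -w /var/update \
  mssql-tools-path \
  bash -c 'sqlcmd -?'
Because ENV is baked into the image configuration rather than a shell startup file, the adjusted PATH applies to interactive shells, bash -c commands, and exec'd processes alike.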
If you don't want to mess with the image, try
docker run --rm -it -v $(pwd):/var/update/ -w /var/update \
mcr.microsoft.com/mssql-tools:latest \
bash -c 'PATH=$PATH:/opt/mssql-tools/bin sqlcmd'
I am trying to download a file from a remote server using the following command
scp -i "c:\users\userX\keyfile.ppk" user1#server.org:"/home/user1/file1.txt" "c:\temp\fileNEW.txt"
If I open a command prompt and run the command, the file is downloaded. However, I need to put the command in a Perl script. If I put the following commands in a Perl script
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt" "c:\temp\fileNEW.txt"';
system(qq($Var1));
where the folder c:\temp exists on the local machine running the Perl script, then I get the following error:
CreateProcessW failed error:2
posix_spawn: No such file or directory
Changing $Var1 to
my $Var1='scp';
and running the script produces
usage: scp [-346BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file]
[-l limit] [-o ssh_option] [-P port] [-S program] source ... target
From this I have deduced that there is some sort of syntax error in my initial definition of $Var1
If I use any of the following values for $Var1
my $Var1='scp -i "c:\users\userX\keyfile.ppk"';
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt"';
I get the same output
usage: scp [-346BCpqrv] [-c cipher] [-F ssh_config] [-i identity_file]
[-l limit] [-o ssh_option] [-P port] [-S program] source ... target
However, if I try any of the following:
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt" "c:\temp\fileNEW.txt"';
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt" "c:\\temp\\fileNEW.txt"';
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt" "c:/temp/fileNEW.txt"';
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt" "c:\fileNEW.txt"';
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt" "c:\\fileNEW.txt"';
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt" "c:/fileNEW.txt"';
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt" "c:\temp"';
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt" "c:\\temp"';
my $Var1='scp -i "c:\users\userX\keyfile.ppk" user1@server.org:"/home/user1/file1.txt" "c:/temp"';
I get the error
CreateProcessW failed error:2
posix_spawn: No such file or directory
So, is the problem the output folder or something else?
I think the double quotes around the first argument may not be removed when you use the one-argument form of system(). Try passing the arguments as a list instead:
system('scp', '-i', 'c:\users\userX\keyfile.ppk', 'user1@server.org:/home/user1/file1.txt', 'c:\temp\fileNEW.txt');
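If that still fails, a slightly fuller sketch (same paths and host as in the question) that also reports the failure reason can help narrow things down:
# Pass each argument separately so Perl handles the quoting and no command shell is involved.
my @cmd = (
    'scp',
    '-i', 'c:\users\userX\keyfile.ppk',
    'user1@server.org:/home/user1/file1.txt',
    'c:\temp\fileNEW.txt',
);
system(@cmd) == 0
    or die "scp failed: exit status $?, OS error: $!\n";
Checking the return value of system this way distinguishes "scp could not be started at all" (system returns -1 and sets $!) from "scp ran but exited non-zero" ($? holds the exit status).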
I'm trying to run my playbook on all machines that have the tag mytag and are also in my-zone, and additionally on the localhost that executes the playbook. I tried this:
ansible-playbook myplaybook.yml -i myinventory -e --limit localhost,tag_mytag:&my-zone
but it gives me the next error:
ERROR! Specified --limit does not match any hosts
How can I do it?
Please try as below:
ansible-playbook myplaybook.yml -i myinventory --limit localhost --tag mytag -e var=x
Here I am limiting the playbook to run only on localhost (--limit),
with the tag "mytag" (--tag), and passing an extra variable (-e) as "var=x".
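Note also that in the original command the unquoted & is interpreted by most shells as the background operator, and -e is left dangling without an actual variable assignment, both of which can mangle the pattern before Ansible ever sees it. Quoting the pattern and dropping the stray -e would look roughly like this (group names are taken from the question and assumed to exist in the inventory):
ansible-playbook myplaybook.yml -i myinventory \
  --limit 'localhost,tag_mytag:&my-zone'
One caveat: Ansible applies & intersections after all plain patterns have been combined, so localhost survives the intersection only if it is also in my-zone; adding --list-hosts is a quick way to check what a given --limit actually selects.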
I am following the tutorial: https://core.rasa.com/tutorial_basics.html#tutorial-basics
and I am in the step:
Let’s run
python -m rasa_nlu.train -c nlu_model_config.json --fixed_model_name current
And I am having this error:
usage: train.py [-h] [-o PATH] (-d DATA | -u URL) -c CONFIG [-t NUM_THREADS]
[--project PROJECT] [--fixed_model_name FIXED_MODEL_NAME]
[--storage STORAGE] [--debug] [-v]
train.py: error: one of the arguments -d/--data -u/--url is required
I've tried the obvious and run:
python -m rasa_nlu.train -c nlu_model_config.json --fixed_model_name current -d
But then it gives me the error:
train.py: error: argument -d/--data: expected one argument
I am really confused, since I am still following the tutorial and I don't understand what these arguments are.
You must supply a path to the dataset after the -d flag, like this, for instance:
python -m rasa_nlu.train -c nlu_model_config.json --fixed_model_name current -d data/nlu_data.md
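If you are unsure what to pass, the trainer's built-in help (the -h shown in the usage output above) lists every argument:
python -m rasa_nlu.train -h
The data/nlu_data.md path above is just an example location for the NLU training data used in the tutorial; point -d at wherever that file lives in your copy of the project.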
I'm messing around with the Ansible tutorial commands and changing some of the parameters just to see what happens.
I can successfully do:
ansible all -m ping
And I can successfully do:
ansible all -a "/bin/echo hello"
But when I modify the example to do anything involving sudo privilege, it fails with a nondescript MODULE FAILURE message.
ansible all -a "/bin/echo hi" --sudo
ansible all -a "/usr/sbin/shutdown -h now" --sudo
On the remote machine, the user I am connecting as does have membership in the wheel group and can successfully execute sudo commands locally.
What am I doing wrong? (CentOS 7)
I have the same problem.
This outputs a MODULE FAILURE:
ansible all -i <server>, -m command -a "/<command> <args>" -u <user> -b
These work:
ansible all -i <server>, -m command -a "/<command> <args>" -u <user> -b -K
(but it asks for the sudo password)
ansible all -i <server>, -m command -a "/<command> <args>" \
  -e "ansible_ssh_user=<user> ansible_ssh_pass=<pass> ansible_sudo_pass=<pass>" -b