I used to consume messages with amqp-consume using the command below on Debian 7, but after installing Debian 8 the amqp-tools seem to be different and my command is no longer recognized.
I noticed some changes; for example, the web interface port changed from 55672 to 15672.
amqp-consume -d -q queue.udrive.admin.uiscsi -s 10.0.1.251 -p 5672 -e "directExchangeUdrive" --vhost "/" -r "" --username=guest --password=guest /bin/bash remoteManageUiSCSI.sh
error: both --server and --url options specify server host
I think this is what the command expects:
amqp-consume
consuming command not specified
Usage: amqp-consume [-dxA?] [-u|--url=amqp://...] [-s|--server=hostname] [--port=port] [--vhost=vhost] [--username=username] [--password=password] [--ssl] [--cacert=cacert.pem] [--key=key.pem] [--cert=cert.pem] [-q|--queue=queue] [-e|--exchange=exchange] [-r|--routing-key=routing key] [-d|--declare] [-x|--exclusive] [-A|--no-ack] [-c|--count=limit] [-p|--prefetch-count=limit] [-?|--help] [--usage] [OPTIONS]... <command> <args>
I tried all kinds of variations on the amqp:// URL and it didn't work.
I finally found the answer on another site, https://qpid.apache.org/releases/qpid-0.30/programming/book/QpidJNDI.html, but I still wonder why this information is not in "man amqp-consume" or on the RabbitMQ website....
The command that works for me is:
amqp-consume -d -u amqp://test:test@ustorageprod/%2f -q queue.udrive.admin.uiscsi -e "directExchangeUdrive" -r "" /bin/bash remoteManageUiSCSI.sh
amqp-publish -u amqp://test:test@ustorageprod/%2f -r "queue.udrive.ustorage" -e "directExchangeUdrive" -b "$msg"
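For reference, the piece that was missing for me is the AMQP URI format: amqp://user:password@host:port/vhost, where the default vhost "/" has to be percent-encoded as %2f. A minimal sketch of the old option-style invocation rewritten as a URI, assuming the guest credentials and broker address from my original command:
# user:password@host:port/vhost; the default vhost "/" is encoded as %2f
amqp-consume -d -u "amqp://guest:guest@10.0.1.251:5672/%2f" -q queue.udrive.admin.uiscsi -e "directExchangeUdrive" -r "" /bin/bash remoteManageUiSCSI.sh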
I am running the script below and getting an error.
#!/bin/bash
webproxy=$(sudo docker ps -a --format "{{.Names}}"|grep webproxy)
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
if [ -n "$webproxy" ] ; then
  sudo docker exec $webproxy sh -c "$webproxycheck"
fi
Here is my docker ps -a output
$sudo docker ps -a --format "{{.Names}}"|grep webproxy
webproxy-dev-01
webproxy-dev2-01
When I run the command individually, it works. For example:
$sudo docker exec webproxy-dev-01 sh -c 'curl -k -s https://localhost:${nginx_https_port}/HealthCheckService'
HEALTHCHECK_OK
$sudo docker exec webproxy-dev2-01 sh -c 'curl -k -s https://localhost:${nginx_https_port}/HealthCheckService'
HEALTHCHECK_OK
Here is the error I get:
$ sh healthcheck.sh
OCI runtime exec failed: exec failed: container_linux.go:348: starting container process caused "exec: \"webproxy-dev-01\": executable file not found in $PATH": unknown
Could someone please help me with this error? Any help would be greatly appreciated.
Because the variable contains two tokens (on two separate lines), that's what it expands to. You are running
sudo docker exec webproxy-dev-01 webproxy-dev2-01 ...
which of course is an error.
It's not clear what you actually expect to happen, but if you want to loop over those values, that's
for host in $webproxy; do
sudo docker exec "$host" sh -c "$webproxycheck"
done
which will conveniently loop zero times if the variable is empty.
If you just want one value, maybe add head -n 1 to the pipe, or pass a more specific regular expression to grep so it only matches one container. (If you have control over these containers, probably run them with --name so you can unambiguously identify them.)
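Putting it together, a minimal sketch of healthcheck.sh using that loop (same variable names as in the question; nginx_https_port is assumed to be set inside the containers):
#!/bin/bash
# Matching container names, one per line; the unquoted expansion in the for loop splits them
webproxy=$(sudo docker ps -a --format "{{.Names}}" | grep webproxy)
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
for host in $webproxy; do
  sudo docker exec "$host" sh -c "$webproxycheck"
done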
Based on your script, you are effectively trying to "exec" the following:
sudo docker exec webproxy-dev2-01
webproxy-dev-01 sh -c "curl -k -s https://localhost:${nginx_https_port}/HealthCheckService"
As you can see, here is your error:
sudo docker exec webproxy-dev2-01
webproxy-dev-01 [...]
The problem is this line:
webproxy=$(sudo docker ps -a --format "{{.Names}}"|grep webproxy)
which results in the following (you also posted this):
webproxy-dev2-01
webproxy-dev-01
Now, the issue is that your docker exec command receives both container names (coming from the variable $webproxy) and interprets the second entry (webproxy-dev-01, separated by a \n) as the command to execute. Since that is not a valid executable and cannot be found, you get exactly the error shown: that's what the message tells you.
A workaround would be the following:
webproxy=$(sudo docker ps -a --format "{{.Names}}"| grep webproxy | head -n 1)
It only grabs the first entry of your output. You can of course adapt this to handle all entries in a loop.
A small snippet:
#!/bin/bash
webproxy=$(sudo docker ps -a --format "{{.Names}}"| grep webproxy )
echo ${webproxy}
webproxycheck="curl -k -s https://localhost:\${nginx_https_port}/HealthCheckService"
while IFS= read -r line; do
  if [ -n "$line" ] ; then
    echo "sudo docker exec ${line} sh -c \"${webproxycheck}\""
  fi
done <<< "$webproxy"
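The snippet above only echoes the commands so you can inspect them first; a variant that actually runs the health check in each container (same variables assumed) would replace the echo line:
while IFS= read -r line; do
  if [ -n "$line" ] ; then
    # Execute the check inside each matching container instead of just printing the command
    sudo docker exec "$line" sh -c "$webproxycheck"
  fi
done <<< "$webproxy"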
When trying to publish a message to a topic using the mosquitto_pub -l flag, I get the error:
Error: '-l' mode not available, threading support has not been compiled in.
How can I correct this?
For reference, mosquitto_pub is version 1.5.3 running on libmosquitto 1.5.3, and the command I am trying to run is:
mosquitto_pub -h <hostname> -p <port> -t "<topic>" --cafile /usr/local/etc/openssl/cert.pem -d -P "$(cat mqtt-token-pub.txt)" -u <username> -l
Note: it works if I use -m "blah" instead of -l
I am trying to run a nested ssh -t -t, but it does not give me the environment variables when working with cat and echo.
#!/bin/bash
pass="password\n"
bla="cat <(echo -e '$pass') - | sudo -S su -"
ssh -t -t -t -t jumpserver "ssh -t -t -t -t server \"$bla\" "
I get output without any of the variables taken into consideration (e.g. PS1 does not get shown, but commands work fine). The problem is related to cat <(echo -e '$pass') -, but this was the way I found to keep stdin open after providing the password for sudo.
How can I achieve this and still get the environment variables so the output is correct?
Thanks.
The -tt is enough. Adding more -t flags has no further effect and just gives the impression that you have no idea what you are doing.
What is the point of the cat <(echo -e ...) construction? Writing just echo would give the same result, wouldn't it?
Why use sudo su? sudo alone already does everything you need, doesn't it?
So what could it look like in a more presentable form?
pass="password\n"
bla="echo '$pass' | sudo -Si"
ssh -tt jumpserver "ssh -tt server \"$bla\""
And does it work? Try debugging the commands by adding -vvv switches to ssh; it will show you what is actually executed and what gets passed to each of the shells.
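For example, adding -vvv to the outer ssh (and to the inner one if needed) shows what travels across each hop, assuming the same jumpserver and server names:
# -vvv prints what ssh actually executes and what is passed on to the next shell
ssh -vvv -tt jumpserver "ssh -tt server \"$bla\""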
I have a kiosk that shuts down every day using rtcwake, which runs as root. I've used && to execute the boot script after rtcwake completes; however, it then starts the browser as root, causing problems.
This is the command I use:
echo debian | sudo -S rtcwake -m mem -u -t $(date +%s -d '3 days 7:45') && sudo -u debian -i bash $HOME/kiosk/bin/startup.sh &
The sudo command does work to some extent: it switches to the debian user and executes the correct script; however, it still messes up my Chromium preferences.
Here is the startup script:
echo debian | sudo -S hwclock -w
export HOME=/home/debian
#log boot time
echo "Booting at" $(date) >> $HOME/kiosk/bin/logs/boot.log
#echo debian | sudo -S service connman restart
echo debian | sudo -S at 15:30 -f $HOME/kiosk/bin/shutdown.sh
crontab -u debian crontab.txt
bash $HOME/git.sh
#sudo -i -u debian
#start kiosk
export DISPLAY=:0
chromium-browser --kiosk --disable-gpu http://localhost/kiosk/Client/main.html &
#update ip
bash /home/debian/git.sh &
I'm wondering what could be causing Chromium to be executed as root. I have no idea what is going wrong.
If you execute a command with sudo, it will not change environment variables like $HOME. Since per-user settings are stored in $HOME, this affects the executed program if it needs such configuration files. Check this, for example:
sudo -u debian bash -c 'echo $HOME'
It will print the home folder of the calling user, not the home folder of the user specified through -u. The sudo command supports the -H command line option to handle this; however, whether it works depends on the security policy in use.
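A quick way to see the difference, assuming the debian user exists:
# With -H, sudo sets HOME to the target user's home directory,
# so this should print /home/debian instead of the caller's home
sudo -H -u debian bash -c 'echo $HOME'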
As a solution you can use the su command instead of sudo in this case:
... && su debian -c chromium
Since su itself is executed by root you won't be asked for the password.
You must enter a password to log into a new user shell.
The command needs to be modified as follows:
echo debian | sudo -S rtcwake -m mem -u -t $(date +%s -d '3 days 7:45') && echo debian | sudo -S -u debian -i bash $HOME/kiosk/bin/startup.sh &
This avoids having to type a password to log in as the normal debian user, and executes the script.
First of all, I'm sorry for my English.
I'm trying to monitor the hard drives of a lot of Windows machines, and I've seen that this can be done with smartd. I've read the man page and saw that it is possible to send a mail when an error occurs. I've done some tests and searched Google for information, but I can't make it work: the smartd daemon doesn't run the mail program.
I've tested with this in smartd.conf:
DEVICESCAN
/dev/hda -m UserName@SomeHost.com -M test -M exec c:\sendmail.cmd
and sendmail.cmd is a test script with a simple line:
"C:\sendEmail.exe" -f UserName#SomeHost.com -m "Hi There" -l c:\log.log -t UserName#SomeHost.com -s SomeHost.com -xu UserName#SomeHost.com -xp PassWord
The cmd script works perfectly, but I don't know why smartd doesn't run it...
I've even tried a hybrid:
/dev/hda -m UserName@SomeHost.com -M test -M exec "C:\sendEmail.exe" -f UserName@SomeHost.com -m "Hi There" -l c:\log.log -t UserName@SomeHost.com -s SomeHost.com -xu UserName@SomeHost.com -xp PassWord
but that doesn't work either.
The Windows log shows the smartd daemon starting and running the "DEVICESCAN" directive, but nothing about the other line.
I've tested with "smartctl -a /dev/hda" and it shows the drive info.
What am I doing wrong?
Thanks!!
Fixed... the problem is "DEVICESCAN": if that directive is present, all other entries are ignored. I've even found an installer to deploy it on a lot of PCs silently, already configured.
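In other words, a smartd.conf roughly like this (with DEVICESCAN removed or commented out; the mail address is a placeholder) is what ended up working:
# DEVICESCAN   <- must not be present: smartd ignores the other entries when it is
/dev/hda -m UserName@SomeHost.com -M test -M exec c:\sendmail.cmd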