s3cmd put command: Upload: Command not found - bash

So I want to put a file onto S3. Here is the command:
/usr/bin/s3cmd --rr --access_key="$access_key" --secret_key="$secret_key" put "$FILEPATH/$ZIPPED_FILE" "$s3_path/$ZIPPED_FILE"
And this works perfectly, except that my bash shell prints out this message: upload:: command not found. Has anyone encountered this?

This almost certainly means you're running the stdout of s3cmd as a command itself: s3cmd prints a progress line beginning with upload:, and the shell then tries to execute upload: as a command name, which is exactly the error you're seeing. For instance, that could happen if you were to run:
# BAD: runs the command, then runs its output as another command
`/usr/bin/s3cmd --rr --access_key="$access_key" --secret_key="$secret_key" put "$FILEPATH/$ZIPPED_FILE" "$s3_path/$ZIPPED_FILE"`
To fix that, just take the backticks out, and write:
# GOOD: just invokes your command, with its output written to stdout
/usr/bin/s3cmd --rr --access_key="$access_key" --secret_key="$secret_key" put "$FILEPATH/$ZIPPED_FILE" "$s3_path/$ZIPPED_FILE"
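You can reproduce the effect with any harmless command; backticks (like the equivalent $(...)) substitute the command's output back into the command line, which the shell then tries to run:
# Illustration: the substitution yields "hello", which bash then
# tries to execute as a command name
`echo hello`
# bash: hello: command not found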

Related

Run script on startup not working if used with watch command

I have a file called /root/run.sh containing the following code:
/usr/bin/watch -n1 "echo hello >> /root/out.txt"
If I launch it manually in a terminal like this:
bash /root/run.sh
everything works fine.
Now I want this file to be called every time I start my OS, so I edited the crontab via crontab -e and added the following line:
@reboot bash /root/run.sh
Unfortunately, it doesn't work: after the reboot, nothing is written to the out.txt file.
If I modify the run.sh script in the following manner:
echo hello >> /root/out.txt
then everything works fine: after the reboot it writes 'hello' to the out.txt file once.
How can I use the cronjob to execute a watch command?
I solved the problem by writing bash /root/run.sh in /etc/rc.local. Now everything works fine, but I wonder why I can't do it from the crontab with @reboot.
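A plausible explanation (an assumption, since the question leaves it open) is that cron's @reboot jobs run with no terminal attached, while watch is an interactive, terminal-oriented program, so it can fail or exit immediately in that environment. The rc.local workaround described above would look something like this, assuming a distro that still executes /etc/rc.local at boot:
# /etc/rc.local -- add before any final "exit 0" line
bash /root/run.sh &   # backgrounded so the boot sequence is not blocked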

Script not working as Command line

I've created a simple bash script that does the following:
#!/usr/bin/env bash
cf ssh "$1"
When I run the command directly from the CLI, like cf ssh myapp, it runs as expected, but when I run the script like
. myscript.sh myapp
I get the error: App not found
I don't understand the difference; I've provided the app name when invoking the script. What could be missing here?
Update
When I run the script with the following it works. Any idea why the "$1" version doesn't?
#!/usr/bin/env bash
cf ssh myapp
When you do this:
. myscript.sh myapp
you don't run the script; you source the file named in the first argument. Sourcing means the file is read and executed in the current shell, as if its lines were typed on the command line. In your case, what happens is this:
myscript.sh is treated as the file to source, and the myapp argument is ignored.
This line is treated as a comment and skipped.
#!/usr/bin/env bash
This line:
cf ssh "$1"
is read as it stands. "$1" takes the value of $1 in the calling shell, which in your case is most likely blank.
Now you should know why it works as expected when you source this version of your script:
#!/usr/bin/env bash
cf ssh myapp
There's no $1 to resolve, so everything goes smoothly.
To run the script and be able to pass arguments to it, you need to make the file executable and then execute it (as opposed to sourcing it), for example:
./script.bash arg1 arg2
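To make that concrete, a minimal sketch (assuming the file is saved as myscript.sh in the current directory):
chmod +x myscript.sh   # mark the file executable (needed once)
./myscript.sh myapp    # runs in a child shell, where "$1" expands to myapp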

How to use bash tee to redirect stdout and stderr to a file from a script that has screen in the hashbang

I have a script which needs to run in screen so I included
#!/usr/bin/screen /bin/bash
as the hashbang, and it works great. The only problem is that when the script crashes I don't know what happened: the output is lost, and all I know is that screen terminated.
My script is interactive so I need to see stdout and stderr in the terminal and I also want stdout and stderr logged in case it crashed.
I tried to run the script like
./test-screen-in-bash.sh 2>&1|tee test1.log
which results in an empty test1.log file
Can somebody please explain what I am doing wrong?
Thanks to @JID's comments I was able to find what I was looking for.
I removed screen from the hashbang and used the method from the link provided by @JID here, in the first answer.
I ended up with
#!/bin/bash
# If not already running inside screen ($STY is unset), re-exec this
# wrapper under screen; -L logs the whole session to screenlog.n
if [ -z "$STY" ]; then exec screen -L /bin/bash "$0"; fi
./myscript.sh
Now when I run the above, myscript.sh runs in screen and the whole output from the session is dumped to screenlog.n files.
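As a side note, depending on your screen version (roughly 4.06 and newer; check man screen), you can also choose the log file name instead of the default screenlog.n:
exec screen -L -Logfile /root/test-screen.log /bin/bash "$0"
Here /root/test-screen.log is just an example path.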

shell script: write stderr & stdout to file

I know this has been asked many times, but I can't find a suitable answer for my case.
I run a backup script using rsync from cron and would like to see all output, errors or not, from all the script's commands. I must set up the redirection inside the script itself, and I do not want to see the output in my shell.
I have been trying with no success. Below is part of the script.
#!/bin/bash
.....
BKLOG=/mnt/backup_error_$now.txt
# Log everything to log file
# something like
exec 2>&1 | tee $BKLOG
# OR
exec &> $BKLOG
I have been adding all kinds of exec | tee $BKLOG variants at the beginning of the script, with &> and 2>&1 added at various parts of the command line, but all failed. I either get an empty or an incomplete log file. I need to see in the log file what rsync has done, and the error if the script failed before syncing.
Thank you for your help. My shell is zsh, so any zsh solution is welcome.
To redirect all stdout/stderr to a file, place these lines at the top of your script:
BKLOG=/mnt/backup_error_$now.txt
exec &> "$BKLOG"
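As for why exec 2>&1 | tee $BKLOG produced an empty file: each side of a pipeline runs in its own subshell, so the exec redirection only affects that short-lived subshell, and nothing from the rest of the script ever reaches tee. If you did want the output both in the terminal and in the log, a common bash/zsh idiom is process substitution (a sketch; not needed here, since no shell output is wanted):
exec > >(tee "$BKLOG") 2>&1   # duplicate stdout and stderr to terminal and file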

How do I print output of exec() in realtime?

I am running the following (backup) code in a Ruby script:
for directory in directories
  print `s3sync.rb --exclude="#{EXCLUDE_REGEXP}" --delete --progress -r #{directory} my.amazon.backup.bucket:#{directory}`
end
I would like the output of the executed subprocess to be echoed to the terminal in real time (as opposed to having to wait until the subprocess returns). How do I do that?
IO.popen creates a process and returns an IO object for stdin and stdout of this process.
IO.popen("s3sync.rb …").each do |line|
  print line
end
If you don't need your code to see stdout, and it's sufficient that a human sees it, then system is fine. If you need your code to see it, there are numerous solutions, popen being the simplest, giving your code access to stdout, and Open3 giving your code access to both stdout and stderr. See: Ruby Process Management
Oops, figured it out right away. I had to use exec() instead of backticks.
This does what I want:
for directory in directories
  exec "s3sync.rb --exclude=\"#{EXCLUDE_REGEXP}\" --delete --progress -r #{directory} my.amazon.backup.bucket:#{directory}"
end
Note, though, that exec replaces the current Ruby process and never returns, so this only ever handles the first directory; system(...) streams output to the terminal the same way but returns, so the loop can continue.
