Issuing Redshift COPY command via psql never acknowledges completion - bash

I have a shell script that issues a command similar to this:
$PGSQL_BIN/psql $RSCONNECTION -c "COPY property.history from 's3://my-bucket/data.txt.gz' CREDENTIALS 'aws_access_key_id=XXXXX;aws_secret_access_key=XXXXX' CSV DELIMITER AS ',' ACCEPTINVCHARS TRUNCATECOLUMNS GZIP TRIMBLANKS BLANKSASNULL EMPTYASNULL DATEFORMAT 'auto' ACCEPTANYDATE COMPUPDATE ON MAXERROR 100;"
The command is successful, but completion is never acknowledged, so the shell script never moves on to the next command.
Is there something I'm missing that will make this behave?

psql is probably losing touch with the session. Make sure you've followed the "Change TCP/IP Timeout Settings" instructions from the Redshift Docs. http://docs.aws.amazon.com/redshift/latest/mgmt/connecting-firewall-guidance.html#connecting-firewall-guidance.change-tcpip-settings
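On Linux clients, those instructions boil down to shortening the kernel's TCP keepalive timers so an idle connection isn't silently dropped mid-COPY. A minimal sketch (the values shown match the linked doc's Linux example; verify there before applying):
sudo sysctl -w net.ipv4.tcp_keepalive_time=200
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=200
sudo sysctl -w net.ipv4.tcp_keepalive_probes=5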

Related

Bash commands as variables failing when joining to form a single command

ssh="ssh user#host"
dumpstructure="mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database"
mysql=$ssh "$dumpstructure"
$mysql | gzip -c9 | cat > db_structure.sql.gz
This is failing on the third line with:
mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database: command not found
I've simplified my actual script for the purpose of debugging this specific error. $ssh and $dumpstructure aren't always joined together in the real script.
Variables are meant to hold data, not commands. Use a function.
mysql () {
ssh user@host mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database
}
mysql | gzip -c9 > db_structure.sql.gz
Arguments to a command can be stored in an array.
# Although mysqldump is the name of a command, it is used here as an
# argument to ssh, indicating the command to run on a remote host
args=(mysqldump --compress --default-character-set=utf8 --no-data --quick -u user -p database)
ssh user@host "${args[@]}" | gzip -c9 > db_structure.sql.gz
Chepner's answer is correct about the best way to do things like this, but the reason you're getting that error is actually even more basic. The line:
mysql=$ssh "$dumpstructure"
doesn't do anything like what you want. Because of the space between $ssh and "$dumpstructure", it'll parse this as environmentvar=value command, which means it should execute the "mysqldump..." part with the environment variable mysql set to ssh user@host. But it's worse than that, since the double-quotes around "$dumpstructure" mean that it won't be split into words, and so the entire string gets treated as the command name (rather than mysqldump being the command name, and the rest being arguments to it).
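A minimal demonstration of that parsing, with hypothetical variable and command names:
greeting="hello"
msg=$greeting "echo world"
# bash: echo world: command not found
# (bash looked for a single command literally named "echo world",
#  with the environment variable msg=hello set for it)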
If this had been the right way to go about building the command, the right way to stick the parts together would be:
mysql="$ssh $dumpstructure"
...so that the whole combined string gets treated as part of the value to assign to mysql. But as I said, you really should use Chepner's approach instead.
Actually, commands stored in variables should also work, and can be invoked in the form $var or just $($var). If it says command not found, it could be because the command is not in your PATH, or else you should give the full path to the command.
So let's set the downvotes aside and talk about the question.
The real problem is mysql=$ssh "$dumpstructure". This means you'll execute $dumpstructure with the additional environment variable mysql=$ssh, so we get a command-not-found error. And mysqldump is located on the remote server, not on this host, so it's reasonable that the command is not found.
From here, let's see how to fix it.
The OP wants to dump mysql data from a remote server, which means $dumpstructure should be executed remotely. Look at the third line, mysql=$ssh "$dumpstructure": we've now figured out that this causes the problem. So what should the correct command be? The simplest form is mysql="$ssh $dumpstructure", which joins both $ssh and $dumpstructure into a single command line stored in the variable $mysql.
Finally, let's talk about the last command line. I do not agree that variables are meant to hold data, not commands; a command is also a kind of data. The real problem is how to use it correctly.
The OP's approach is also supported, at least on bash 4.2.46.
So the real problem is how to use a variable to hold commands, not to introduce a new method for doing that, such as wrapping them in a bash function.
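For illustration, a minimal sketch of the quoting rule that makes a command-in-a-variable work or fail (path and command are hypothetical):
cmd="ls -l /tmp"
$cmd      # works: the unquoted expansion is split into the command name and its arguments
"$cmd"    # fails: the entire string "ls -l /tmp" is looked up as one command name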
So can anyone tell me why this answer escaped readers' notice and got voted down instead?

How can I start an ssh session with a script without redirecting stdin?

I have a series of bash commands, some with interactive prompts, that I need to run on a remote machine. They have to be called in a certain order for different scenarios, so I've been trying to make a bash script to automate the process for me. However, it seems like every way to start an ssh session from a bash script results in the redirection of stdin to whatever string or file was used to initiate the script in the first place.
Is there a way I can specify that a certain script be executed on a remote machine, but also forward stdin through ssh to the local machine to enable the user to interact with any prompts?
Here's a list of requirements I have to clarify what I'm trying to do.
Run a script on a remote machine.
Somewhere in the middle of that remote script there will be a command that prompts for input. Example: git commit will bring up vim.
If that command is git commit and it brings up vim, the user should be able to interact with vim as if it was running locally on their machine.
If that command prompts for a [y/n] response, the user should be able to input their answer.
After the user enters the necessary information—by quitting vim or pressing return on a prompt—the script should continue to run like normal.
My script will then terminate the ssh session. The end product is that commands were executed for the user without them needing to be aware that it was through a remote connection.
I've been testing various different methods with the following script that I want run on the remote machine.
#!/bin/bash
echo hello
vim
echo goodbye
exit
It's crucial that the user be able to use vim, and then, when the user finishes, "goodbye" should be printed to the screen and the remote session should be terminated.
I've tried uploading a temporary script to the remote machine and then running ssh user@host bash /tmp/myScript, but that also seems to take over stdin completely, making it impossible for the user to respond to prompts. I've tried adding the -t and -T options (I'm not sure if they're different), but I still get the same result.
One commenter mentioned using expect, spawn, and interact, but I'm not sure how to use those tools together to get my desired behavior. It seems like interact will result in the user gaining control over stdin, but then there's no way to have it relinquished once the user quits vim in order to let my script continue execution.
Is my desired behavior even possible?
Ok, I think I've found my problem. I was creating a wrapper script for ssh that looked like this:
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands=$(</dev/stdin)
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
It was there that I was redirecting stdin, not ssh. I should have mentioned this when I formulated my question. I read through that script over and over again, but I guess I just overlooked that one line. Removing that line totally fixed my problem.
Just to clarify, changing my script to the following totally fixed my problem.
#!/bin/bash
tempScript="/tmp/myScript"
remote=user@host
commands="$#"
cat <(echo "$commands") | ssh $remote "cat > $tempScript && chmod +x $tempScript" &&
ssh -t $remote $tempScript
errorCode=$?
ssh $remote << RM
if [[ -f $tempScript ]]; then
rm $tempScript
fi
RM
exit $errorCode
Once I changed my wrapper script, my test script described in the question worked! I was able to print "hello" to the screen, vim appeared and I was able to use it like normal, and then once I quit vim "goodbye" was printed and the ssh client closed.
The commenters to the question were pointing me in the right direction the whole time. I'm sorry I only told part of my story.
I've searched for solutions to this problem several times in the past, but never found a fully satisfactory one. Piping into ssh loses your interactivity. Two connections (scp/ssh) are slower, and your temporary file might be left lying around. And putting the whole script on the command line often ends up in escaping hell.
Recently I noticed that the command-line buffer size is usually quite large (getconf ARG_MAX reported > 2 MB where I looked). This got me thinking about how I could use that and mitigate the escaping issue.
The result is:
ssh -t <host> /bin/bash "<(echo "$(cat my_script | base64 | tr -d "\n")" | base64 --decode)" <arg1> ...
or using a here document and cat:
ssh -t <host> /bin/bash $'<(cat<<_ | base64 --decode\n'$(cat my_script | base64)$'\n_\n)' <arg1> ...
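For example, a hypothetical invocation of the first form, running a local deploy.sh on server.example.com with one argument (all names here are illustrative):
ssh -t server.example.com /bin/bash "<(echo "$(cat deploy.sh | base64 | tr -d "\n")" | base64 --decode)" production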
I've expanded on this idea to produce a fully working BASH example script sshx that can run arbitrary scripts (not just BASH), where arguments can be local input files too, over ssh. See here.

PSQL: How can I prevent any output on the command line?

My problem: I'm trying to run a database generation script at the command line via a batch file as part of a TFS build process to enable nightly testing on a known dataset.
The scripts we run output Notices, Warnings and some Errors on the command line. I would like to suppress at least the Notices and Warnings, and if possible the Errors, as they don't seem to have an impact on the overall success of the scripts. This output appears to determine success or failure as far as the TFS build process is concerned: it highlights every line of output from the scripts as an error and fails the build.
As our systems are running on Windows, most of the potential solutions I've found online don't work as they seem to target Linux.
I've changed the client_min_messages to error in the postgresql.conf file, but when looking at the same configuration from pgAdmin (tools > server configuration) it shows the value as Error but the current value as Notice.
All of the lines in the batch file that call psql use the -q flag as well, but that only seems to suppress the basic messages such as CREATE TABLE and ALTER TABLE.
An example line from the batch file is:
psql -d database -q < C:\Database\scripts\script.sql
Example output line from this command:
WARNING: column "identity" has type "unknown"
DETAIL: Proceeding with relation creation anyway.
Specifying the file with the -f flag makes no difference.
I can manually run the batch file on my development machine and it produces the expected database regardless of what errors or messages show on the command prompt.
So ultimately I need all psql commands in my batch files to run silently.
psql COMMAND &> output.txt
Or, using your example command:
psql -d database -q < C:\Database\scripts\script.sql &> output.txt
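Note that &> is a bash-ism; since the OP is running a Windows batch file, the equivalent under cmd.exe is the classic two-step redirection (a sketch reusing the OP's paths):
psql -d database -q < C:\Database\scripts\script.sql > output.txt 2>&1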
Use the psql -o flag to send the command output to whatever filename you wish, or to /dev/null (NUL on Windows) if you don't care about it.
The -q option will not prevent the query output.
-q, --quiet run quietly (no messages, only query output)
To avoid the output you have to send the query result to a file:
psql -U username -d db_name -pXXXX -c "SELECT * FROM table_name LIMIT 5;" > C:\test.csv
use > : creates a new file each time
use >> : creates the file if needed and keeps appending to it
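Combining the -o suggestion with redirection for the OP's Windows batch file, a hedged sketch (NUL is the Windows null device; the notices and warnings travel on stderr, so they need their own redirect):
psql -d database -q -o NUL -f C:\Database\scripts\script.sql 2>> errors.log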

Laravel Envoy and bash prompt

I'm using Envoy to provision a remote server. Provisioning is done by pulling a bash script from a private repo and then executing it.
The bash script asks for confirmation, like yes/no (using bash's read -p): it works as expected when I'm connected to the remote server... the script waits for user input.
Envoy, however, seems to ignore any prompt. Is this expected behavior?
Any workaround?
Yes, this is expected. There's nothing for read to read from so it doesn't.
You have a few options.
Rewrite your script to use a config file when there's no terminal to prompt from.
Use something like [ -t 0 ] to test whether standard input is a terminal, and load a configuration file with defaults when it isn't (see the sketch after this list). The simplest way to do that is to have a file containing the appropriate variable assignments and just source it (. defaults.sh or whatever). You don't even need the -t test if you source the defaults first, since anything the user inputs will then override the default values.
Rewrite your script to have sane defaults.
Rewrite whatever runs the script to provide your script's input via a pipeline or file redirection (e.g. printf 'answer 1\nanswer 2\n' | ./script.sh or ./script.sh <answerfile).
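A minimal sketch of the terminal-detection option above, where defaults.sh is a hypothetical file of variable assignments:
#!/bin/bash
if [ -t 0 ]; then
    # stdin is a terminal, so it is safe to prompt
    read -p "Continue? [y/n] " answer
else
    # no terminal (e.g. when run through Envoy): fall back to defaults
    . ./defaults.sh    # e.g. contains: answer=y
fi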

Getting a shell error code from curl in Jenkins while still displaying output in console

I am using a shell script in Jenkins that, at a certain point, uploads a file to a server using curl. I would like to see whatever output curl produces but also check whether it is the output I expect. If it isn't, then I want to set the shell error code to > 0 so that Jenkins knows the script failed.
I first tried using curl -f, but this causes the pipe to be cut as soon as the upload fails and the error output never gets to the client. Then I tried something like this:
curl ...params... | tee /dev/tty | \
xargs -I{} test "Expected output string" = '{}'
This works from a normal SSH shell but in the Jenkins console output I see:
tee: /dev/tty: No such device or address
I'm not sure why this is, since I thought Jenkins communicated with the slave over a normal SSH shell. In any case, the whole xargs + test thing strikes me as a bit of a hack.
Is there a way to accomplish this in Jenkins so that I can see the output and also test whether it matches a specific string?
When Jenkins communicates with a slave via SSH, there is no terminal allocated, so there is no /dev/tty device for that process.
Maybe you can send it to /dev/stderr instead? That will be the terminal in an interactive session and some log file in a non-interactive one.
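Alternatively, a hedged sketch that sidesteps the tee/xargs hack by capturing curl's output in a variable (the elided params and the expected string are the OP's):
output=$(curl ...params...)
echo "$output"                               # still shows up in the Jenkins console log
test "Expected output string" = "$output"    # exit code is non-zero on mismatch, failing the build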
Have you thought about using the Publish over SSH Plugin instead of curl? It might save you some headache.
If you just need to copy the file from master to slave, there is also a plugin for that: the Copy To Slave plugin.
Cannot write any comments yet, so I had to post it as an answer.
