Couchbase script combining cbq and cbimport commands is not working - bash

I am trying to write a script that exports some data using [cbq][1] commands and then imports that data into a target cluster via [cbimport][2]. I want to enhance the script so that it can export huge amounts of data and import them into another cluster.
However, on my local machine it is failing: the script gets stuck at the SELECT command of cbq.
Can someone suggest how to do this? Below is the test script I am using:
echo "Hello World"
cbq -u Administrator -p Administrator -e "http://localhost:8093";
\REDIRECT temp.txt;
SELECT * FROM `sample.data` where id="106" --output="temp.txt";
\REDIRECT OFF;
cbimport json -c http://{target-cluster}:8091 -u Administrator -p Administrator -b sample.data -d file://C:\Users\myusername\Desktop\temp.txt -f list -g %docId%;
\EXIT;
Below is the output of the above script:
$ ./test.sh
Hello World
Connected to : http://localhost:8093/. Type Ctrl-D or \QUIT to exit.
Path to history file for the shell : C:\Users\myuser\.cbq_history
And it gets stuck here for a very long time.

Specifically for this script of yours: a semicolon terminates the cbq invocation right after the URL, so cbq starts in plain interactive mode and waits for input, and none of the following lines are ever passed to it.
You would want to try:
echo "Hello World"
cbq -u Administrator -p Administrator -e "http://localhost:8093" --output="temp.txt" -s "SELECT * FROM \`sample.data\` where id='106'"
# add processing to convert from redirected output to cbimport format
cbimport json -c http://{target-cluster}:8091 -u Administrator -p Administrator -b sample.data -d file://C:\Users\myusername\Desktop\temp.txt -f list -g %docId%
as the 3 commands in your script. (Note the backslash-escaped backticks, so the shell doesn't treat them as command substitution, and no double quotes inside the statement itself, since double quotes were chosen for the shell. You could invert this choice too.)
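For the conversion step marked by the comment above: cbq writes a full query response object whose results field holds the matching rows, while cbimport with -f list expects a file containing a plain JSON array. A minimal sketch of that processing, assuming jq is available and temp.txt holds a standard query response (with SELECT * each row is additionally wrapped under the keyspace name, so adjust the filter to your data):
# pull the results array out of the query response into the list format cbimport expects
jq '.results' temp.txt > temp_list.json
Then point cbimport's -d option at temp_list.json instead of temp.txt.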

Related

Clear last bash command from history from executing bash script

I have a bash script which uses the expect package to SSH into a remote server.
My script looks like this:
#!/bin/bash

while getopts "p:" option; do
    case "${option}" in
        p) PASSWORD=${OPTARG};;
    esac
done

/usr/bin/expect -c "
spawn ssh my.login.server.com
expect {
    \"Password*\" {
        send \"$PASSWORD\r\"
    }
}
interact
"
I run it like ./login.sh -p <my-confidential-password>
Now once I run it, log in successfully, and exit from the remote server, I can hit the up-arrow key and still see the command with my password in the terminal. If I simply run history, it shows up there as well. And once I exit the terminal, it also appears in .bash_history.
I need something within my script that clears it from the history and leaves no trace of the command I ran (or the password) anywhere.
I have tried:
Clearing it using history -c && history -r; this doesn't work, as the script creates its own session.
Also, echo $HISTCMD returns 1 within the script, so I cannot clear the entry using history -d <offset>.
P.S. I am using macOS
You could disable command history for a command:
set +o history
echo "$PASSWORD"
set -o history
Or, if your HISTCONTROL Bash variable includes ignorespace, you can prefix the command with a space and it won't be added to the history:
$ HISTCONTROL=ignorespace
$ echo "Hi"
Hi
$ echo "Invisible" # Extra leading space!
Invisible
$ history | tail -n2
7 echo "Hi"
8 history | tail -n2
Notice that this isn't secure, either: the password would still be visible in any place showing running processes (such as top and friends). Consider reading it from a file with 400 permissions, or use something like pass.
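A minimal sketch of the file-based approach (the path and name are just examples; create the file with an editor rather than echo, so the password itself never enters your history):
chmod 400 ~/.login_pass               # readable only by the owner
./login.sh -p "$(< ~/.login_pass)"    # read the password from the file at call time
The password will still appear in the argument list of login.sh itself while it runs, though.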
You could also wrap the call into a helper function that prompts for the password, so the call containing the password wouldn't make it into command history:
runwithpw() {
    IFS= read -rp 'Password: ' pass
    ./login.sh -p "$pass"
}
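Calling runwithpw then prompts interactively, so only the bare function name lands in the history. Since a plain read echoes what you type, you can add the -s flag if you also want the password hidden while entering it:
IFS= read -rsp 'Password: ' pass; echo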

Script doesn't prompt a message if called from another script

I have the following example:
run_docker_script.sh:
#!/bin/bash
argument=$1
if [ argument==c1 ]; then
    DOCKERNAME=container1
else
    DOCKERNAME=container2
fi
docker run -it --rm --entrypoint /bin/bash $DOCKERNAME -c 'read -rp "username:" user'
This works fine if I call it like ./run_docker_script.sh (meaning I am asked to enter a username).
If I call this script from another one and redirect the output to a file, nothing is prompted on the console! The script sits there waiting for the input, but the user doesn't see anything:
#!/bin/bash
LOG_DIR=results
mkdir -p $LOG_DIR
./run_docker_script.sh c1 >"$LOG_DIR"/result.txt
Any hints?
You are redirecting the prompt to the log file. Probably use tee instead of a plain redirection, so the prompt still reaches the terminal.
#!/bin/bash
LOG_DIR=results
mkdir -p "$LOG_DIR" # notice quoting
./run_docker_script.sh arg1 arg2 | tee "$LOG_DIR"/result.txt
You will still probably have some issues with buffering. I'm thinking passing the input as an argument to the Docker container would be a better design.
#!/bin/bash
# ^ notice fixed spacing
argument=$1
if [ "$argument" = c1 ]; then
#    ^         ^ ^ notice fixed spacing, plus quoting and $ added
    DOCKERNAME=debian
else
    DOCKERNAME=ubuntu
fi
read -r -p "username: " username
docker run -it --rm --entrypoint /bin/bash "$DOCKERNAME" -c "user=$username"
It's slightly weird that Docker sends the standard error of the shell within the container to standard output too, but that's what it does: with -t, the container's output streams are merged into the single allocated pseudo-terminal. I don't think there is an easy way to change that while keeping the TTY.
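With the prompt moved out of the container, redirecting standard output no longer hides it, because bash's read -p writes its prompt to standard error:
./run_docker_script.sh c1 > results/result.txt   # the "username: " prompt still shows on the terminal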
As I said, the script works well if I don't redirect the output to a file, meaning the user is asked to provide some input in the console.
But if I redirect the output to a file, the text "username:" is redirected to the file as well, and the user doesn't see anything.

Print all script output to file from within another script

English is not my native language; please accept my apologies for any language issues.
I want to execute a script (bash/sh) through cron which will perform various maintenance actions, including backups. This script will execute other scripts, one for each function, and I want the entirety of what is printed to be saved in a separate file for each script executed.
The problem is that each of these other scripts executes commands like duplicity, certbot, and maldet, among others. The echo commands in each script are printed to the file, but the output of the duplicity, certbot, and maldet commands is not!
I want to avoid having to put "| tee --append" or another command on each line. But even doing this on each line, the subscripts do not save to the log file. Ideally, the parent script would specify to which file each script prints.
Does not work:
sudo bash /duplicityscript > /path/log
or
sudo bash /duplicityscript >> /path/log
sudo bash /duplicityscript | sudo tee --append /path/log > /dev/null
or
sudo bash /duplicityscript | sudo tee --append /path/log
Using exec (like this):
exec > >(tee -i /path/log)
sudo bash /duplicityscript
exec > >(tee -i /dev/null)
Example:
./maincron:
sudo ./duplicityscript > /myduplicity.log
sudo ./maldetscript > /mymaldet.log
sudo ./certbotscript > /mycertbot.log
./duplicityscript:
echo "Exporting Mysql/MariaDB..."
{dump command}
echo "Exporting postgres..."
{dump command}
echo "Start duplicity data backup to server 1..."
{duplicity command}
echo "Start duplicity data backup to server 2..."
{duplicity command}
In the log file, this will print:
Exporting Mysql/MariaDB...
Exporting postgres...
Start duplicity data backup to server 1...
Start duplicity data backup to server 2...
In the example above, the echo lines from each script are saved in the log file, but the output of the duplicity and dump commands is printed on the screen and not to the log file.
I did some googling, and I even saw this topic, but I could not adapt it to my needs.
There is no problem with the output also being printed on the screen, as long as it is written to the file in its entirety.
Try adding 2>&1 at the end of the line; it should help. Tools like duplicity and certbot typically write their progress messages to standard error, which a plain > redirection does not capture. Or run the script in sh -x mode to see what is causing the issue.
Hope this helps
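Applied to the maincron example, that would look like this (2>&1 must come after the > redirection, since it sends standard error to wherever standard output points at that moment):
sudo ./duplicityscript > /myduplicity.log 2>&1
sudo ./maldetscript > /mymaldet.log 2>&1
sudo ./certbotscript > /mycertbot.log 2>&1
And if you also want to see the output on screen while logging it, combine it with tee:
sudo ./duplicityscript 2>&1 | tee /myduplicity.log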

Not able to use lftp commands when running from a shell script

I'm using a set of commands from a shell script. The first command runs fine, but then it drops into the lftp command prompt and expects manual input instead of running the remaining commands from the script. These are the commands I'm using:
lftp -e "$HOST"
lftp -u "$USER,$PWD"
lftp -e "cd /inbox"
put $file
bye
Please suggest a solution.
Using lower-case variable names to avoid conflicts with environment variables or shell-maintained ones ($USER and $PWD are both set by the shell/environment, so you shouldn't be setting them yourself):
lftp \
    -e "cd /inbox; put $file; bye" \
    -u "$user,$pwd" \
    "$host"
The point here is to invoke lftp only once and pass all the necessary commands to that single invocation; the trailing bye makes it exit afterwards instead of dropping into the interactive prompt.
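If the command list grows, an equivalent sketch is to feed the commands to the single lftp process on standard input with a here-document (same assumptions about the variables):
lftp -u "$user,$pwd" "$host" <<EOF
cd /inbox
put $file
bye
EOF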

Why can't I redirect text to a text file?

I'm writing a bash shell script that has to be run with admin permissions (sudo).
I'm running the following commands
sudo -u $SUDO_USER touch /home/$SUDO_USER/.kde/share/config/kcmfonts > /dev/null
sudo -u $SUDO_USER echo "[General]\ndontChangeAASettings=true\nforceFontDPI=96" >> /home/$SUDO_USER/.kde/share/config/kcmfonts
The first command succeeds and creates the file. However the second command keeps erroring with the following:
cannot create /home/username/.kde/share/config/kcmfonts: Permission denied
I can't understand why this keeps erroring on permissions. I'm running the command as the user who invoked sudo, so I should have access to write to this file. The kcmfonts file itself is created successfully.
Can someone help me out?
Consider doing this:
echo "some text" | sudo -u $SUDO_USER tee -a /home/$SUDO_USER/filename
The tee command can assist you with directing the output to the file. tee's -a option appends (like >>); without it you'll clobber the file (like >).
You don't need to execute the left side with elevated privileges; you only need them for writing to the file (and although the left side here is just echo, keeping privileges off it is a good habit to form). So with this command you're only elevating permissions for tee.
sudo -u $SUDO_USER echo "some text" >> /home/$SUDO_USER/filename
sudo executes the command echo "some text" as $SUDO_USER.
But the redirection is done under your account, not under the $SUDO_USER account: redirection is handled by the shell process, which is yours and is not under the control of sudo.
Try this:
sudo -u $SUDO_USER sh -c 'echo "some text" >> /home/$SUDO_USER/filename'
That way, the sh process will be executed by $SUDO_USER, and that's the process that will handle the redirection (and will write to the output file).
Depending on the complexity of the command, you may need to play some games with escaping quotation marks and other special characters. If that's too complex (which it may well be), you can create a script:
$ cat foo.sh
#!/bin/sh
echo "some text" >> /home/$SUDO_USER/filename
$ sudo -u $SUDO_USER ./foo.sh
Now it's the ./foo.sh command (which executes as /bin/sh ./foo.sh) that will run under the $SUDO_USER account, and it should have permission to write to the output file.
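As a side note on the original command: echo does not expand the \n escapes by default (bash's echo would need -e), so even with working permissions it would have written a literal backslash-n string. A sketch that combines printf with the tee approach from the other answer and handles both problems:
printf '%s\n' '[General]' 'dontChangeAASettings=true' 'forceFontDPI=96' \
    | sudo -u "$SUDO_USER" tee -a "/home/$SUDO_USER/.kde/share/config/kcmfonts" > /dev/null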
