Cassandra cql shell script - shell

I'm having an issue with a simple script here. Just can't find documented help that resolves my issue.
Here is my script.
#!/bin/bash
$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/cqlsh --ssl --cqlshrc $VCOPS_BASE/user/conf/cassandra/cqlshrc
-e "cql_statement;"
I left out the CQL for simplicity's sake, but every time I run my file from the command line I simply end up in the CQL shell.
--execute and echo don't work either, and I'm really not sure why I would need to save the CQL statement to another file.
Any help would be appreciated.

That's because -e is a cqlsh option, not a bash command that stands on its own. Therefore, it needs to be on the same line as your cqlsh command.
#!/bin/bash
$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/cqlsh --ssl --cqlshrc $VCOPS_BASE/user/conf/cassandra/cqlshrc -e "cql_statement;"
I tested this out with a simpler version:
aploetz@dockingBay94:~/scripts$ cat getEmail.sh
#!/bin/bash
cqlsh -u cassandra -p cassandra -e "SELECT * FROM stackoverflow.users_by_email WHERE email='mreynolds@serenity.com';"
aploetz@dockingBay94:~/scripts$ ./getEmail.sh
 email                  | id                                   | username
------------------------+--------------------------------------+----------
 mreynolds@serenity.com | d8e57eb4-c837-4bd7-9fd7-855497861faf | Mal
(1 rows)
aploetz@dockingBay94:~/scripts$
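If you would rather keep the long command readable across several lines, a trailing backslash continuation also works, because bash then still hands -e to cqlsh as part of the same invocation. A minimal sketch using the paths from the question:
#!/bin/bash
# The backslashes continue the command, so -e stays an option of the cqlsh call
$VCOPS_BASE/cassandra/apache-cassandra-2.1.8/bin/cqlsh --ssl \
    --cqlshrc $VCOPS_BASE/user/conf/cassandra/cqlshrc \
    -e "cql_statement;"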

Related

script to connect to a "list.txt" of servers

I am trying to find a way to connect to a list of servers written in a simple text file, run one command on each, and write the output to a file...
The small problem is that I have to log in with a password... but it would not be a problem to paste the password into the script.
The full command would be:
ssh "server_from_list.txt uptime | awk -F, '{sub(".*up ",x,$1);print $1}' >> /home/kauk2/uptime.out
Let's assume the password is: abcd1234
Any suggestions? I am not well versed in scripting, sorry...
Many thanks to you all in advance...
regards,
Joerg
Ideally you should set up password-less login, but failing that you can use sshpass. First, get a single command working by trying the following:
export SSHPASS=abcd1234
Then you can try:
sshpass -e ssh user@server1 'uname -a'
When you get that debugged and working, you can use GNU Parallel to run the command on all servers in a file called list.txt:
user@server1
user@server2
user@server3
user@server4
The command will be:
parallel -k -a list.txt sshpass -e ssh {} 'uptime'
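If GNU Parallel is not installed, a plain loop over the same list.txt does the job as well. A minimal sketch reusing the awk filter and output file from the question; the StrictHostKeyChecking option is an assumption to avoid interactive host-key prompts:
#!/bin/bash
export SSHPASS=abcd1234
while read -r host; do
    # -n keeps ssh from swallowing the rest of list.txt on stdin;
    # the remote command keeps only the "up ..." part of uptime and appends it locally
    sshpass -e ssh -n -o StrictHostKeyChecking=no "$host" \
        "uptime | awk -F, '{sub(\".*up \",x,\$1); print \$1}'" >> /home/kauk2/uptime.out
done < list.txt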

Copy output of sql-query to a file

I want to export a random entry of my database into a file with the command
SELECT * FROM my_table ORDER BY RANDOM() LIMIT 1 \g /path/file;
This query works if I enter it in my db terminal, but when I use it in a bash script I get the error: syntax error at or near "\g"
My bash script looks like this:
PGPASSWORD=*** psql -U user -d db_name -h localhost -p port -t -c "SELECT * FROM my_table ORDER BY RANDOM() LIMIT 1 \g /path/file"
Bash is interpreting the string and trying to interpolate it. Escaping the backslash will probably solve your problem.
PGPASSWORD=*** psql -U user -d db_name -h localhost -p port -t -c "SELECT * FROM my_table ORDER BY RANDOM() LIMIT 1 \\g /path/file"
A SQL statement terminated by \g is not supported by the -c command switch. Per documentation of -c:
-c command
...
command must be either a command string that is completely parsable by the server (i.e., it contains no psql-specific features), or a single backslash command. Thus you cannot mix SQL
and psql meta-commands with this option
To redirect the results to a file, there are several options:
shell redirection: psql [other options] -Atc 'SELECT...' >/path/to/data.txt
-A is to switch to unaligned mode (no space fillers to align columns).
Put the SQL part in a heredoc instead of on the command line:
psql [options] <<EOF
SELECT ... \g /path/to/file
EOF
This form has the advantage that multiline statements or multiple statements are supported directly.
Use \copy with the query. Be aware that COPY to a server-side FILE is different: it creates the file on the server with the permissions of the postgres user and requires being a database superuser. COPY ... TO STDOUT works too, but is no better than SELECT as far as the redirection is concerned.
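For example, the \copy variant fits the -c switch because it is a single backslash command. A minimal sketch reusing the connection options from the question; the output path is a placeholder:
PGPASSWORD=*** psql -U user -d db_name -h localhost -p port \
    -c "\copy (SELECT * FROM my_table ORDER BY RANDOM() LIMIT 1) TO '/path/file'"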
I found a solution for my script, and now it works.
#!/bin/bash
RANDOM_NUMBER=0
while true
do
    for i in `seq 1`
    do
        RANDOM_NUMBER=$(($RANDOM % 100000))
        echo $RANDOM_NUMBER
        PGPASSWORD=*** psql -U user_name -d db_name -h localhost -p PORT \
            -c "INSERT INTO numbers (number) VALUES ('$RANDOM_NUMBER');"
    done
    sleep 10
    for i in `seq 1`
    do
        PGPASSWORD=*** psql -U user_name -d db_name -h localhost -p PORT \
            -c "DELETE FROM numbers WHERE id = (SELECT id FROM numbers ORDER BY RANDOM() LIMIT 1);"
    done
done

How to ssh to a server and get CPU and memory details?

I am writing a shell script where I want to ssh to a server and get the CPU and memory details of that server displayed as a result. I'm using the top command for this.
Script line:
ssh -q user@host -n "cd; top -n 1 | egrep 'Cpu|Mem|Swap'"
But the result is
TERM environment variable is not set.
I checked the same on the server by entering set | grep TERM and got TERM=xterm as the result.
Could someone please help me with this? Many thanks.
Try using the top -b flag:
ssh -q user@host -n "top -bn 1 | egrep 'Cpu|Mem|Swap'"
This tells top to run non-interactively, and is intended for this sort of use.
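For completeness, a minimal sketch of how that could be wrapped back into a script; user@host and the output file name are placeholders:
#!/bin/bash
# -b runs top in batch mode, so no terminal is needed on the remote side
ssh -q user@host -n "top -bn 1 | egrep 'Cpu|Mem|Swap'" > cpu_mem_stats.txt
cat cpu_mem_stats.txt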
top needs a terminal environment. You have to add the -t parameter to ssh to get the result:
ssh -t user@host -n "top -n 1 | egrep 'Cpu|Mem|Swap'"
Got it! A small modification is needed to the script line below.
ssh -t user@host -n "top -n 1 | egrep 'Cpu|Mem|Swap'"
Instead of -t we need to use -tt; that worked for me.
top requires a tty to run after ssh'ing, and -tt forces pseudo-tty allocation even when ssh has no local tty.
Thanks stony for providing a close enough answer! :)

Run cassandra queries from command line

I want to execute CQL queries from the bash command line.
[cqlsh 3.1.8 | Cassandra 1.2.19 | CQL spec 3.0.5 | Thrift protocol 19.36.2]
[root#hostname ~]# /opt/apache-cassandra-1.2.19/bin/cqlsh -k "some_keyspace" -e "SELECT column FROM Users where key=value"
I got:
cqlsh: error: no such option: -e
Options:
--version show program's version number and exit
-h, --help show this help message and exit
-C, --color Always use color output
--no-color Never use color output
-u USERNAME, --username=USERNAME
Authenticate as user.
-p PASSWORD, --password=PASSWORD
Authenticate using password.
-k KEYSPACE, --keyspace=KEYSPACE
Authenticate to the given keyspace.
-f FILE, --file=FILE Execute commands from FILE, then exit
-t TRANSPORT_FACTORY, --transport-factory=TRANSPORT_FACTORY
Use the provided Thrift transport factory function.
--debug Show additional debugging information
--cqlversion=CQLVERSION
Specify a particular CQL version (default: 3.0.5).
Examples: "2", "3.0.0-beta1"
-2, --cql2 Shortcut notation for --cqlversion=2
-3, --cql3 Shortcut notation for --cqlversion=3
Any suggestions?
First of all, you should seriously consider upgrading. You are missing out on a lot of new features and bug fixes.
Secondly, with cqlsh in Cassandra 1.2 you can use the -f flag to specify a file containing CQL statements:
$ echo "use system_auth; SELECT role,is_superuser FROM roles WHERE role='cassandra';" > userQuery.cql
$ bin/cqlsh -u aploetz -p reindeerFlotilla -f userQuery.cql
 role      | is_superuser
-----------+--------------
 cassandra |         True
(1 rows)
You can use -f to execute from a file or SOURCE once you start CQLSH. I don't think -e is a valid option with that version.
It's a bit dirty and unstable, but here is the answer:
/opt/apache-cassandra-1.2.19/bin/cqlsh -k "keyspace" -f /path/to/file.cql > /path/to/output.txt
tail -2 /path/to/output.txt | head -1 > /path/to/output-value.txt
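If this is needed often, the -f approach can be wrapped in a small helper that behaves roughly like -e. A sketch assuming the cqlsh path and keyspace from the question; run_cql is a hypothetical helper name:
#!/bin/bash
run_cql() {
    # write the statement to a temp file and feed it to cqlsh via -f
    local tmp
    tmp=$(mktemp)
    echo "$1" > "$tmp"
    /opt/apache-cassandra-1.2.19/bin/cqlsh -k "some_keyspace" -f "$tmp"
    rm -f "$tmp"
}

run_cql "SELECT column FROM Users WHERE key=value;"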

MySQLdump with arguments

Hello to the professionals!
There was a good and simple script idea to make a mysqldump of every database, taken from
dump all mysql tables into separate files automagically?
author: https://stackoverflow.com/users/1274838/elias-torres-arroyo
with the script as follows:
#!/bin/bash
# Optional variables for a backup script
MYSQL_USER="root"
MYSQL_PASS="PASSWORD"
BACKUP_DIR="/backup/01sql/"
# Get the database list, exclude information_schema
for db in $(mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -v information_schema)
do
    # dump each database in a separate file
    mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" | gzip > "$BACKUP_DIR/$db.sql.gz"
done
But the problem is that this script does not "understand" arguments like
--add-drop-database
needed to perform
mysqldump -u $MYSQL_USER --password=$MYSQL_PASS "$db" --add-drop-database | gzip > "$BACKUP_DIR/$db.sql.gz"
Is there any way to force this script to understand the additional arguments listed under
mysqldump --help
because all my tests show that it doesn't?
Thank you in advance for any hint!
--add-drop-database works only with --all-databases or --databases.
Please see the reference in the docs.
So in your case the mysqldump utility ignores the mentioned parameter because you are dumping a single database.
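In other words, keeping the loop from the question but naming the database via --databases should make the flag take effect. A minimal sketch under that assumption:
#!/bin/bash
MYSQL_USER="root"
MYSQL_PASS="PASSWORD"
BACKUP_DIR="/backup/01sql/"
for db in $(mysql -B -s -u $MYSQL_USER --password=$MYSQL_PASS -e 'show databases' | grep -v information_schema)
do
    # --add-drop-database is honored here because the database is passed via --databases
    mysqldump -u $MYSQL_USER --password=$MYSQL_PASS --add-drop-database --databases "$db" \
        | gzip > "$BACKUP_DIR/$db.sql.gz"
done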
