MongoDB - making db.fsyncUnlock(); work - bash

I have a shell script that backs up MongoDB database.
I have to lock the database before backing it up.
mongo --eval "db.fsyncLock();" works fine, but when I run mongo --eval "db.fsyncUnlock();" it just waits and does nothing.
How can I make unlocking work?
edit: I know I have to keep the connection open, but how?

Executing MongoDB commands from Bash didn't really work, because you have to keep the connection open if you want to unlock the database again.
But when executing commands from Bash this way, each invocation connects to the database, executes the command, and disconnects.
I ended up writing a JavaScript script and executing it from the Mongo Shell.
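For reference, a minimal sketch of that approach, with the JavaScript inlined into the Bash script through a here-document; runProgram() is a helper in the legacy mongo shell for launching an external program, and the dump path is a placeholder:

mongo --quiet <<'EOF'
// One shell session holds the connection across the whole cycle.
db.fsyncLock();
// Run the backup from inside the session (dump path is a placeholder).
runProgram("mongodump", "--out", "/backups/dump");
db.fsyncUnlock();
EOF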

Related

How can I run a shell script when booting up?

I am configuring an app at work which is on an Amazon Web Services server.
To get the app running you have to run a shell script called "Start.sh".
I want this to be done automatically after booting up the server
I have already tried the following bash script in the User Data section (which runs on boot):
#!/bin/bash
cd "/home/ec2-user/app_name/"
sh Start.sh
echo "worked" > worked.txt
Thanks for the help
Scripts provided through User Data are only executed the first time the instance is started. (Officially, it is executed once per instance id.) This is done because the normal use-case is to install software, which should only be done once.
If you wish something to run on every boot, you could probably use the cloud-init per-boot feature:
Any scripts in the scripts/per-boot directory on the datasource will be run every time the system boots. Scripts will be run in alphabetical order.
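A minimal sketch of that approach, assuming the stock cloud-init layout on Amazon Linux (the hook's file name is arbitrary):

# Install a per-boot hook; cloud-init runs everything in this
# directory on every boot, in alphabetical order.
sudo tee /var/lib/cloud/scripts/per-boot/start-app.sh > /dev/null <<'EOF'
#!/bin/bash
cd "/home/ec2-user/app_name/"
sh Start.sh
EOF
sudo chmod +x /var/lib/cloud/scripts/per-boot/start-app.sh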

Establishing a simple connection to postgres server for load test in bash

I am currently trying to load test our server, which hosts a postgres instance, from a bash script. The idea is to spawn a bunch of open connections (without running any queries) and then check the memory usage.
To spawn a bunch of connections I do:
export PGPASSWORD="$password"
for i in $(seq 1 $maxConnections);
do
sleep 0.2
psql -h "$serverAddress" -U postgres >/dev/null &
done
However, it seems that the connections don't stay open: when I check for active connections, I see none from the IP of the machine I'm running the script on. However, if I run
psql -h "$serverAddress" -U postgres &
manually from the shell, it keeps the connection open. How would I open and maintain open connections within a bash script? I've checked that the password is correct, and if I exclude the ampersand from within the script, then I do enter the psql console with an open connection as expected. It's only when I background it in the script that it causes problems.
You can start your psql sessions in a sub-shell while you loop, using the parentheses syntax shown below. However, if you do this, I recommend you write code to manage your jobs and clean them up when you are done.
(psql -h "$serverAddress" -U postgres)&
I tested this and was able to maintain connections to a postgres instance this way. Note, however, that if you check for active connections via a statement like select * from pg_stat_activity; you will see these connections as open and idle, not active, since they are not executing any task or query.
If you put this code in a script and execute it, you will need to make sure the script does not terminate before you are ready for all the sessions to die.
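Putting the pieces together, a sketch of the full loop (the variables are the ones from the question; the final wait keeps the parent script alive so the sessions are not torn down early):

export PGPASSWORD="$password"
for i in $(seq 1 "$maxConnections"); do
    sleep 0.2
    # Each idle session runs backgrounded in its own sub-shell.
    (psql -h "$serverAddress" -U postgres >/dev/null) &
done
echo "Spawned $maxConnections sessions; kill the jobs when done."
wait    # block until every backgrounded session has exited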

running db2 in bash from git's mingw on windows

I have a shell script that runs a few db2 commands which I want to use on windows.
When running this in bash from msysgit 2.5.3 64bit I get an error from db2:
SQL1024N The connection to the database was lost. SQLSTATE=08003
For instance: start db2 with db2cmd, then start bash from the db2cmd window, and run
db2 connect to <db> user <user>
db2 select * from syscat.tables
The db2 select will produce the same error.
This happens because bash starts another subshell to execute each db2 command, while db2 connect spawns a separate process, db2bp, which actually holds the connection.
When db2 connect returns, that subshell is closed and the connection is lost.
This also happens when I concatenate the commands with ; or &&.
Is there a way to make bash not execute a subshell or at least not for every command?
The usual method for preventing a new shell from being spawned is to prefix the command with a dot (i.e., to source it). You may also examine the shell built-in exec command. However, I am afraid that running a shell on Windows will have its own oddities, at least judging from my own experience, so you may want to experiment with different shell flavours before you get the solution right. Hope it helps anyway!
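As an alternative sketch, you can sidestep the one-subshell-per-command problem entirely by batching the statements into a file and handing them to a single db2 process via -tvf (the <db> and <user> placeholders are from the question):

cat > script.sql <<'EOF'
CONNECT TO <db> USER <user>;
SELECT * FROM syscat.tables;
CONNECT RESET;
EOF
# -t: statements end with ';', -v: echo each command, -f: read from file.
# One db2 process holds the connection for the whole batch.
db2 -tvf script.sql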
For scripting in Bash you should add, after the connect statement, this little bugger:
export DB2DBDFT=
That will ensure that all further subshells will use your db2 connection.
Hope this solves your problem.

Run shell script without closing the previous process

I got stuck on this problem. I need to run two commands in a shell script, but neither of them may stop the other.
For example, this shell script:
psql database user &
gedit file
If I run the script this way, only the gedit process stays open and I can't see where the psql process went.
But if I do this:
gedit file &
psql database user
I can see the psql process, but it's closed by messages from the gedit process.
How can I execute this script without one process closing the other?
If you want to suppress output from gedit:
gedit file >/dev/null 2>&1 &
psql database user
However, the claim:
I can see the psql process, but it's closed by messages from the gedit process.
...simply doesn't happen: Messages from gedit go directly to the terminal; psql can't see them, so it can't possibly be exiting because of them.
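If you would rather keep gedit's messages than discard them, a small variant is to capture them in a log file so they no longer interleave with psql's prompt:

# Background gedit, sending its stdout and stderr to a log file.
gedit file > gedit.log 2>&1 &
# psql then has the terminal to itself in the foreground.
psql database user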

Chain dependent bash commands

I'm trying to chain together two commands:
The first in which I start up postgres
The second in which I run a command meant for postgres(a benchmark, in this case)
As far as I know, the '||', ';', and '&/&&' operators all require that the first command terminate or exit somehow. This isn't the case with a server that's been started, so I'm not sure how to proceed. I can't run the two completely in parallel, as the server has to be started.
Thanks for the help!
I would recommend something along the lines of the following in a single bash script:
Start the Postgres server via a command like /etc/init.d/postgresql start or similar
Sleep for a period of time to give the server time to startup; perhaps a minute or two
Then run a psql command that connects to the server to test its up-ness
Tie that command together with your benchmark via &&, so the benchmark runs only if the psql command succeeds (depending on the exact return codes from psql, you may need to inspect the output from the command instead of the return code); see the sketch after this list. The command run via psql is best a simple query that connects to the server and returns a simple value that can be cross-checked.
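A sketch of those steps, in which the init-script path, the sleep duration, and run_benchmark.sh are placeholders to adapt to your setup:

#!/bin/bash
/etc/init.d/postgresql start   # step 1: start the server
sleep 60                       # step 2: give it time to come up
# Steps 3 and 4: probe with a trivial query; run the benchmark only
# if the probe connects and succeeds.
psql -U postgres -c 'SELECT 1;' >/dev/null 2>&1 && ./run_benchmark.sh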
Edit in response to comment from OP:
It depends on what you want to benchmark. If you just want to benchmark a command after the server has started, and don't want to restart the server every time, then I would tweak the code to run the psql up-ness test in a separate block, starting the server if not up, and then afterward, run the benchmark test command unconditionally.
If you do want to start the server up fresh each time (to test cold-start performance, or similar), then I would add another command after the benchmarked command to shutdown the server, and then sleep, re-running the test command to check for up-ness (where this time no up-ness is expected).
Either way, you should then be able to run the script multiple times.
A slight aside: if your test is destructive (that is, it writes to the DB), you may want to consider dumping a "clean" copy of the DB -- that is, the DB in its pre-test state -- and then, on each run of the script, creating a fresh DB with a different name from the original out of that dump, dropping the previous copy beforehand (a sketch follows).
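A sketch of that aside, with placeholder database names throughout (the dump is taken once, while the DB is still in its pre-test state):

pg_dump -U postgres original_db > clean_dump.sql   # one-time clean dump
# Before each benchmark run: drop the scratch copy and rebuild it.
dropdb   -U postgres --if-exists bench_db
createdb -U postgres bench_db
psql     -U postgres -d bench_db -f clean_dump.sql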
