I have a MongoDB primary server and a secondary server.
I want to take a backup of MongoDB on the secondary server once an hour.
I wrote a simple bash script on the secondary server:
mongodump --host localhost --port 27017 --db databasename --out /root/backupdatabasename --oplog
When I run this script, I get this error:
2016-02-15T07:42:46.713+0000 Failed: bad option: --oplog mode only supported on full dumps
As far as I know, --oplog is the option for a point-in-time backup.
Please advise: if I run the above script without the --oplog option, it works fine.
If you remove the database parameter, the command will dump all databases and include the oplog entries created since the backup started.
The oplog is stored per-mongod instance, rather than per-database, so it kind of makes sense that you can't request the oplog for a single database.
mongodump --host localhost --port 27017 --out /root/backupdatabasename --oplog
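To get the hourly schedule you mentioned, you can drive this command from cron. A minimal sketch of an /etc/crontab entry; the output path is the one from your question, and the log file is a placeholder:
# Run the full dump at minute 0 of every hour, as root, and log the output
0 * * * * root mongodump --host localhost --port 27017 --out /root/backupdatabasename --oplog >> /var/log/mongodump.log 2>&1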
Related
I am trying to connect to Vertica nodes through vsql, using the -h parameter to specify the host IP. However, I also want to specify failover nodes.
According to the documentation, backup hosts can be provided as a property in a JDBC connection.
How can I implement the same through vsql?
edd is correct: you can use -B SERVER:PORT. Also, if you have native connection load balancing set up, you can use the -C option. This lets the Vertica native load balancer choose a host for you.
To set the load balancer you run:
SELECT SET_LOAD_BALANCE_POLICY('ROUNDROBIN');
Then when you connect, you use the -C option, and you will see that Vertica has selected a new host.
$ vsql -h host01 -U dbadmin -C
Welcome to vsql, the Vertica Analytic Database interactive terminal.
Type: \h or \? for help with vsql commands
\g or terminate with semicolon to execute query
\q to quit
INFO: Connected using a load-balanced connection.
INFO: Connected to host02 at port 5433.
dbadmin=>
Using -C should work if the node is down on the specified host, as long as the Vertica agent is still running on that host.
The docs say to use vsql -B.
Have you tried that option?
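For reference, a minimal sketch of the -B form; the host names and port are placeholders, and I believe multiple backup hosts can be given as a comma-separated list:
vsql -h host01 -B host02:5433,host03:5433 -U dbadmin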
I am running bash files to make a Mongo dump on a daily basis. In my local directory I run one bash file which connects to the server terminal, and on the server I run another file which makes the Mongo dump.
Is it possible to make one file which connects to the MongoDB server terminal and runs the commands on the server?
I tried many commands, but it was not possible to run the commands on the server terminal with one bash file: once the server terminal opens, the remaining commands do not execute.
Is it possible to do this with one bash file that executes the commands on the server?
Connect to your DB remotely using this command:
mongo --username username --password secretstuff --host YOURSERVERIP --port 28015
You can then automate this by including your pertaining commands ( including the above ) in a bash script that you can run from anywhere.
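For instance, a minimal sketch of a non-interactive call; the host, credentials, and the command passed to --eval are placeholders:
#!/usr/bin/env bash
# Connect remotely and run a single command instead of an interactive shell
mongo --username username --password secretstuff \
      --host YOURSERVERIP --port 28015 \
      --eval 'printjson(db.adminCommand({listDatabases: 1}))'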
To solve the above problem, the answer from Matias Barrios seems correct to me: you don't use a script on the server, but tools on your local machine that connect to the server's services and manage them.
Nevertheless, to execute a script on a distant server you can use ssh. This is not the right solution in your case, but it answers the question in your title.
ssh myuser@MongoServer ./script.sh param1
This can be used in a local script to execute script.sh (with param1) on the server MongoServer, with the system privileges of the user myuser.
Beforehand, don't forget to avoid the password prompt with
ssh-copy-id myuser@MongoServer
This will copy your ssh public key into the myuser home directory on MongoServer.
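Putting the two together, here is a minimal one-file sketch; the database name and output path are placeholders, not from the original posts:
#!/usr/bin/env bash
# Run mongodump on the remote host over ssh; the single quotes make
# $(date +%F) expand on the server, not locally.
ssh myuser@MongoServer 'mongodump --db databasename --out /root/backup-$(date +%F)'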
I have the following script:
mongodump --gzip -d foobar \
--excludeCollection=foo1 \
--excludeCollection=foo2 \
--excludeCollection=foo3 \
--excludeCollection=foo4 -o ./
But the dump is too large for the server it's on; it's literally taking up all the disk space. Is there any way to make it dump to another host? Maybe using scp?
The easiest thing to do is to run mongodump from another computer, if the database is accessible, using the --host parameter and any credentials you may need. It's quite similar to using the mongo shell to connect to a remote instance.
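For example, run from the other host; the host name below is a placeholder, and you may need to add --username and --password:
mongodump --gzip --host dbserver.example.com --port 27017 -d foobar \
    --excludeCollection=foo1 \
    --excludeCollection=foo2 \
    --excludeCollection=foo3 \
    --excludeCollection=foo4 -o ./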
Can postgres_fdw be used to connect via an ssh tunnel?
The database is accessible only from the DB server, and I need to join from another remote server. Login to the DB server is with SSH keys.
If it's possible, how?
Yes, it is possible. I solved it for mysql_fdw like this:
I use autossh for port forwarding. With autossh, you can keep the connection up all the time.
Run this command on the Postgres server:
autossh -L 127.0.0.1:3306:mysql_ip:3306 root@mysql_ip -N -i .ssh/id_rsa.mysql
Test the autossh access from Postgres to MySQL by running this command on the Postgres server:
mysql --host=127.0.0.1 --port=3306 -u mysqldbuser -p
The last part that differs is:
CREATE SERVER mysql_server FOREIGN DATA WRAPPER mysql_fdw OPTIONS (host '127.0.0.1', port '3306');
Everything else is the same.
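For postgres_fdw the same pattern should apply. A minimal sketch, assuming the remote Postgres listens on 5432 and you forward local port 5433 (the host name, user, and key path are placeholders):
autossh -L 127.0.0.1:5433:pg_ip:5432 remoteuser@pg_ip -N -i .ssh/id_rsa.pg
Then point the foreign server at the tunnel:
CREATE SERVER pg_server FOREIGN DATA WRAPPER postgres_fdw OPTIONS (host '127.0.0.1', port '5433', dbname 'remotedb');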
I have Vertica installed in an Ubuntu virtual machine and I'd like to have a specific database started during boot, instead of having to log in, open admintools and start it from there.
So, is there a command line that would allow me to start it without user interaction?
In which run level should I add this?
Also, I use a specific user to run everything Vertica-related; does this need to be taken into account in my boot script?
Why not just set the restart policy (restart on boot) in admintools under "Set Restart Policy"?
You have 3 options:
Never
ksafe
always -- choose this one to start on boot.
And that is it!
You can also start the database directly from the command line:
admintools -t start_db
[dbadmin@hostname ~]$ admintools -t start_db --help
Usage: start_db [options]

Options:
  -h, --help            show this help message and exit
  -d DB, --database=DB  Name of database to be started
  -p DBPASSWORD, --password=DBPASSWORD
                        Database password in single quotes
  -i, --noprompts       do not stop and wait for user input (default false)
  -F, --force           force the database to start at an epoch before data
                        consistency problems were detected.
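So a boot script can start the database non-interactively with the -d and -i flags shown above. A minimal sketch, assuming the database is named mydb, the admin user is dbadmin, and admintools lives in the default /opt/vertica/bin:
#!/bin/sh
# Start the database as the Vertica admin user without prompting;
# add -p if the database has a password.
su - dbadmin -c "/opt/vertica/bin/admintools -t start_db -d mydb -i"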