I downloaded a MongoDB database, meaning I got a set of XXX.0 [xxx.1, ...] and XXX.ns files. I installed MongoDB (on Mac OS X using Homebrew) and ran mongod with the dbpath parameter pointing to a directory containing these files. However, when I use the mongo shell and ask to see the available databases or collections, I get nothing but the 'local' database and no collections. What am I doing wrong? How can I get MongoDB to 'see' the database in the directory?
Thanks,
Yariv.
Based on the naming of the database files you downloaded, they were generated by the MongoDB MMAPv1 storage engine. Had they been generated by the WiredTiger storage engine, you would have collection-*.wt files instead.
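For comparison, a quick way to tell which engine produced a data directory is simply to list it (the file names below are illustrative, not your actual files):
ls /data/db
# MMAPv1:     mydb.0  mydb.1  mydb.ns
# WiredTiger: collection-0--123.wt  index-1--123.wt  WiredTiger.wt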
Now, if you have just installed the latest stable version of MongoDB through Homebrew (currently v3.2), it comes with the new default storage engine, WiredTiger.
If you also have the local.<n> and local.ns files in the directory, MongoDB will give you an error about incompatible storage files, like the one below:
[initandlisten] exception in initAndListen: 28662 Cannot start server. Detected data files in /data/target created by the 'wiredTiger' storage engine, but the specified storage engine was 'mmapv1'., terminating
Without these files, mongod will run with the WiredTiger storage engine and ignore the existing (copied) MMAPv1 files.
If this is your case, run mongod with the --storageEngine option to select MMAPv1. Example:
mongod --dbpath <your db files dir> --storageEngine=mmapv1
Also, for the record, check out mongodump and mongorestore to export/import the database contents in binary format instead of copying the database files.
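A minimal sketch of that dump/restore cycle (the database name mydb and the paths are placeholders, and both commands assume a running mongod to connect to):
# Dump one database to a binary archive directory
mongodump --db mydb --out /backup/dump
# Restore it into another mongod instance
mongorestore --db mydb /backup/dump/mydb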
While trying to spin up a server using docker-compose, I have an issue when I try to downgrade or upgrade the mysql image. As I am just trying to identify the right mysql/mariadb version, I'm not concerned about the data at the moment.
I've been getting the following error
"Unsupported redo log format. The redo log was created with MariaDB 10.6.5."
I am unable to delete the logs ib_logfile0 and ib_logfile1. How do I successfully upgrade/downgrade mysql when it gives such an error?
When the upgraded/downgraded version of mysql/mariadb is spun up, you can't delete the ib_logfile0 and ib_logfile1 log files, because the new version won't start and therefore you can't even docker exec into the container.
Since data retention is not a priority, the solution here is to remove the specific container (or any stopped containers) together with all unused images, not just dangling ones; the -a flag takes care of that:
docker system prune -a
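Note that docker system prune does not touch volumes by default, and the stale redo logs usually live in the database volume. If you are certain you don't need the data, the --volumes flag removes unused volumes as well:
docker system prune -a --volumes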
This issue may also happen if you are moving between two different project folders. In that case, try to identify the volumes that may have used the same image and delete them where necessary:
docker volume rm volume_name volume_name
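If you are not sure which volume holds the database files, you can list and inspect them first (the volume name myproject_db_data is just an example):
# List all volumes and look for one created by your compose project
docker volume ls
# Inspect a candidate to see its mountpoint on the host
docker volume inspect myproject_db_data
# Remove it once confirmed (this permanently deletes the database files)
docker volume rm myproject_db_data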
More on how to remove Docker images, containers, and volumes:
https://www.digitalocean.com/community/tutorials/how-to-remove-docker-images-containers-and-volumes
How can I migrate all my data and configuration for Matrix Synapse and Riot.im from one system to another VM?
Can I back up and restore all the rooms (created with Riot.im), the chat logs, and the users, and migrate all that content to another machine?
The old system is configured without using Docker.
Thank you
Information
All these applications are decentralized: configuration files hold your server and connection information, and all remaining data is stored in the database you are using. So there are three things to migrate: the client (Riot in your case), Matrix Synapse, and the database.
Riot Migration
Riot has a configuration file named config.json (by default) which holds the URLs of your Synapse server. While migrating, copy the values from your existing Riot config file into your new Riot config file.
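For example, if Riot is served from a plain web root, the copy can be as simple as this (the web-root paths and host name are assumptions, not from your setup):
# Pull the old config onto the new machine's Riot web root
scp olduser@oldhost:/var/www/riot/config.json /var/www/riot/config.json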
Synapse Migration
Similar to Riot, there are homeserver.yaml and conf.d/server_name.yaml files in the matrix-synapse installation folder which hold all the configuration. Copy the contents of these files to the new Matrix installation and you are done with the client and interface. Let's get into data migration.
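A minimal sketch of moving that configuration between hosts (the /etc/matrix-synapse path assumes a Debian-style package install; adjust to wherever your files actually live):
# On the old host: archive the Synapse configuration
tar czf synapse-config.tar.gz /etc/matrix-synapse/homeserver.yaml /etc/matrix-synapse/conf.d/
# Copy it over and unpack on the new host
scp synapse-config.tar.gz newhost:/tmp/
ssh newhost 'tar xzf /tmp/synapse-config.tar.gz -C /'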
Database Migration
SQLite3 to PostgreSQL
Create a dump file from SQLite:
sqlite3 database.db .dump > /the/path/to/sqlite-dumpfile.sql
Load that SQL dump file into PostgreSQL:
/path/to/psql -d database -U username -W < /the/path/to/sqlite-dumpfile.sql
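Instantiated for a Synapse homeserver, the pair of commands might look like this (the file locations and the synapse database/user names are assumptions; also note that raw SQLite dumps are not always accepted verbatim by PostgreSQL, so expect to clean up dialect differences):
sqlite3 /var/lib/matrix-synapse/homeserver.db .dump > /tmp/synapse-dump.sql
psql -d synapse -U synapse_user -W < /tmp/synapse-dump.sql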
Old PostgreSQL to new PostgreSQL
Create a dump file as a backup from the old PostgreSQL:
pg_dump dbname > outfile
Restore the data from this dump:
psql dbname < infile
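For a host-to-host migration, the same pair can be run over the network (the host names, the dbname placeholder, and the user are illustrative):
# Dump from the old server
pg_dump -h old-host -U dbuser dbname > /tmp/dbname.sql
# Create the empty database on the new server, then load the dump
createdb -h new-host -U dbuser dbname
psql -h new-host -U dbuser dbname < /tmp/dbname.sql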
You can also use database migration GUI tools such as Pentaho or dbsoft; follow the dbsofts article for that.
You can refer to the Element docs on migration, the Matrix docs, and the SQLite-to-PostgreSQL guide.
I want to back up my local DynamoDB server. I have installed the DynamoDB server on a Linux machine. Some sites suggest creating a Bash script that connects to an S3 bucket, but on a local machine we don't have an S3 bucket.
So I am stuck with my work. Please help me. Thanks.
You need to find the database file created by DynamoDB Local. From the docs:
-dbPath value — The directory where DynamoDB will write its database file. If you do not specify this option, the file will be written to the current directory. Note that you cannot specify both -dbPath and -inMemory at once.
The file name would be of the form youraccesskeyid_region.db. If you used the -sharedDb option, the file name would be shared-local-instance.db.
By default, the file is created in the directory from which you ran DynamoDB Local. To restore, copy that same file back and, when running DynamoDB Local, specify the same -dbPath.
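So a backup is just a file copy, and a restore is pointing the server back at that file. A minimal sketch (the paths are placeholders, and the java invocation assumes the standard DynamoDBLocal.jar distribution with the -sharedDb file name):
# Back up: copy the database file somewhere safe
cp ./shared-local-instance.db /backups/shared-local-instance.db
# Restore: put the file in a data directory and start DynamoDB Local against it
mkdir -p /var/dynamodb && cp /backups/shared-local-instance.db /var/dynamodb/
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -dbPath /var/dynamodb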
I'm implementing an instance of Parse Server, and I want to know where Parse Server stores its files.
According to the File Adapter docs, the default file storage is GridFS in MongoDB.
It depends on the operating system and the type of installation you used.
If you installed on Linux/Unix using the global install (npm install -g parse-server mongodb-runner), your parse-server files will normally be under /usr/lib/node_modules/parse-server (the exact path may differ between Linux distributions).
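If you are unsure where the global modules live on your distribution, npm can tell you directly (assuming a standard npm setup):
# Print the global node_modules directory; parse-server sits beneath it
npm root -g
ls "$(npm root -g)/parse-server"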
Be careful when editing these files for hot fixes or modifications: if you later choose to upgrade parse-server, they will be overwritten.
Your cloud code directory is normally created by you, so it could be /home/parse/cloud/main.js. It can be in any location of your choice. You set the location in the index file or the JSON config (depending on your startup process):
cloud: '/home/myApp/cloud/main.js', // Absolute path to your Cloud Code
If you did not use the global install, then you would need to cd to wherever you cloned the project.
Windows is similar. Clone parse-server (or download the zip) from the repo, open a console window, and cd to the folder where you cloned/extracted the example server, e.g.:
cd "C:\parse-server"
That is where the files will sit for the parse-server. Hope this helps!
I am using the following base Dockerfile:
https://github.com/wnameless/docker-oracle-xe-11g/blob/master/Dockerfile
I read a bit on how to set up a data volume from this SO question and this blog, but I'm not sure how to fit the pieces together.
In short, I would like to manage the Oracle data in a data-only Docker container. How do I do that?
I've implemented volume mounts for the DB data.
Here is my fork:
Reduced the image size from 3.8 GB to 825 MB
Database initialization moved out of the image build phase. The database now initializes at container startup when no database files are mounted
Support for media reuse outside of the container. Added graceful shutdown on container stop
Removed sshd
You may check here:
https://registry.hub.docker.com/u/sath89/oracle-xe-11g
https://github.com/MaksymBilenko/docker-oracle-xe-11g
I tried mapping the datafiles and fast recovery directories in my Oracle XE container. However, I changed my mind after losing the files... so you should be very careful with this approach and understand how Docker manages those spaces under all operations.
I found, for example, that if you clean out old containers, the contents of the mapped directories are deleted even if they are mapped to something outside the Docker system area (/var/lib/docker). You can avoid this by keeping the containers and starting them up again. But if you want to version things and make a new image... you have to back up those files.
Oracle also IDs the files themselves (by checksum, inode number, or something similar) and complains about them on startup... I did not investigate the extent of that issue, or even whether there is indeed any issue there.
I've opted not to map any of those files/dirs and plan to use Data Pump or similar to get the data out, until I get a better handle on all that can happen.
So I update the data and version the image... pushing it to the repo for safekeeping.
In general:
# Start a data-only container
docker run -d -v /dbdata --name dbdata -it ubuntu
# Put the Oracle data in /dbdata somehow
# Start the database container and look for the data at /dbdata
docker run -d --volumes-from dbdata --name db -it ubuntu
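Applied to the Oracle XE image above, the same pattern might look like this (the data path /u01/app/oracle is an assumption based on that image's layout; check its README for the actual directory):
# Create a data-only container exposing the Oracle data directory
docker create -v /u01/app/oracle --name oradata ubuntu
# Run the database container with that volume attached
docker run -d --volumes-from oradata --name oracle-db sath89/oracle-xe-11g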