I want to back up my DynamoDB Local server. I have installed DynamoDB Local on a Linux machine. Some sites suggest creating a Bash script on the Linux OS and connecting to an S3 bucket, but on a local machine we don't have an S3 bucket.
So I am stuck with my work. Please help me. Thanks.
You need to find the database file created by DynamoDB Local. From the docs:
-dbPath value — The directory where DynamoDB will write its database file. If you do not specify this option, the file will be written to the current directory. Note that you cannot specify both -dbPath and -inMemory at once.
The file name will be of the form youraccesskeyid_region.db. If you used the -sharedDb option, the file name will be shared-local-instance.db.
By default, the file is created in the directory from which you ran DynamoDB Local. To restore, you'll have to copy that same file back and, when running DynamoDB Local, specify the same -dbPath.
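For illustration, a minimal backup/restore sketch in shell. The data directory /var/dynamodb-local, the backup location, and the exact jar invocation are assumptions; adjust them to your installation:
# start DynamoDB Local with a known data directory (shared-local-instance.db is created there)
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -dbPath /var/dynamodb-local
# back up: stop the server, then copy the database file somewhere safe
cp /var/dynamodb-local/shared-local-instance.db /backup/shared-local-instance.db
# restore: copy the file back and start DynamoDB Local again with the same -dbPath
cp /backup/shared-local-instance.db /var/dynamodb-local/
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -dbPath /var/dynamodb-local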
Related
I am trying to put a startup script for a VM in a Cloud Storage file. This Cloud Storage file will contain the commands that pull the repository.
So the first step is to get an SSH key. I generated one from Bitbucket, but when I went to add the SSH key to the VM metadata, I saw there was already an SSH key there in the metadata.
How can I use this metadata SSH key to pull the repo from Bitbucket? I want to write a shell script that pulls the code, put it in the Cloud Storage file, and then give this file to the VM as its startup script.
I am stuck on how to access the SSH key. I saw somewhere:
cat ~/.ssh/id_rsa.pub
I was guessing this file would show the keys, since I am able to see the SSH keys in the VM metadata, but it says the file is not found.
Am I looking into the wrong file?
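For context, a rough sketch of the kind of startup script I mean; the repository URL, clone path, and key location are placeholders, not my actual setup:
#!/bin/bash
# hypothetical startup script kept in Cloud Storage and set on the VM
# assumes a private key is already present and readable on the VM
eval "$(ssh-agent -s)"
ssh-add /path/to/private_key
git clone git@bitbucket.org:yourteam/yourrepo.git /opt/app || (cd /opt/app && git pull)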
Thanks,
I have a vault and need to restore one of the folders from it. I have initiated the job using the AWS CLI and got the inventory as a JSON file, but I am unable to get the complete folder from the inventory. Can anyone help me restore the folder?
I am able to get a CSV file format to see the archive IDs of the files, but is it possible to retrieve the complete folder in one go, since the inventory shows a separate archive ID for every file in the folder?
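For context, the kind of AWS CLI commands involved look roughly like this (the vault name, job ID, and archive ID are placeholders, not my actual values):
# retrieve the vault inventory
aws glacier initiate-job --account-id - --vault-name my-vault --job-parameters '{"Type": "inventory-retrieval"}'
aws glacier get-job-output --account-id - --vault-name my-vault --job-id JOB_ID inventory.json
# retrieving a single file means initiating a job for its archive ID
aws glacier initiate-job --account-id - --vault-name my-vault --job-parameters '{"Type": "archive-retrieval", "ArchiveId": "ARCHIVE_ID"}'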
I need to read a file in my Lambda via SFTP and save it locally for processing. The issue is that when running 'sam local', my Lambda can only read from the local file system but not write to it.
Lambda functions can only write to a specific local area: /tmp
So that is the location you need to use if you want to write to a file.
See
Can I store a temp file on AWS Lambda Function?
https://aws.amazon.com/blogs/compute/choosing-between-aws-lambda-data-storage-options-in-web-apps/
The file system inside the container created by SAM when you run it locally is read-only.
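As a rough illustration of the difference, these are shell commands as they would behave inside the Lambda execution environment; the file names are only examples:
# the deployed code directory (/var/task) is read-only at runtime
echo "data" > /var/task/output.csv    # fails: Read-only file system
# /tmp is the writable scratch space (512 MB by default)
echo "data" > /tmp/output.csv         # works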
How do I write a binary file from an AWS RDS Oracle database directory to the local file system on an EC2 instance? I tried using a Perl script with UTL_FILE, but it cannot find or read the file. I am getting a permissions error.
In AWS RDS Oracle, you do not have access to the file system.
If you need direct access to the file system, then you need to use an EC2 instance and install the Oracle RDBMS yourself.
AWS has an option to integrate with S3: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/oracle-s3-integration.html
You could upload your files there and then download them to your local machine. Here are the steps to use it with Data Pump: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Oracle.Procedural.Importing.html#Oracle.Procedural.Importing.DataPumpS3.Step1
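A sketch of the EC2-side step, assuming the linked S3 integration has already copied the file from the RDS directory into a bucket (the bucket and file names are placeholders):
# download the file from S3 to the EC2 instance's local file system
aws s3 cp s3://your-bucket/your-file.dmp /home/ec2-user/your-file.dmp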
After I've used rethinkdb restore, where does rethinkdb import that data / access that data from?
I've tried searching for this answer, but my choice in keywords to use must be inadequate.
I want to use this directory as a shared volume for my Docker container, so the container is "separate" from the data but also has read/write access to it.
It imports into the data directory, which by default is the folder rethinkdb_data in the working directory where you execute rethinkdb, unless you specify a different one with -d.
$ rethinkdb -h
Running 'rethinkdb' will create a new data directory or use an existing one, and serve as a RethinkDB cluster node.
File path options:
  -d [ --directory ] path    specify directory to store data and metadata
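For example, restoring into an explicitly chosen data directory could look like this (the directory name and dump file name are placeholders):
# start the server with an explicit data directory
rethinkdb -d /data/rethinkdb_data --bind all
# in another shell, restore the dump into that running server
rethinkdb restore rethinkdb_dump.tar.gz -c localhost:28015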
If you are using Docker, and you didn't change the data directory with -d, then the data is probably stored in rethinkdb_data under the directory set by the WORKDIR instruction in the Dockerfile. You can mount it outside the container to make it persistent.
Take this image as an example: https://github.com/stuartpb/rethinkdb-dockerfiles/blob/master/trusty/2.1.4/Dockerfile; it is the official RethinkDB Docker image: https://hub.docker.com/_/rethinkdb/
We can see that it has the instruction:
WORKDIR /data
And it runs with:
CMD ["rethinkdb", "--bind", "all"]
Therefore, it stores data in /data/rethinkdb_data. You can mount either the whole /data directory or only /data/rethinkdb_data/.
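For example, a typical way to keep the data on the host (the host path /srv/rethinkdb is just an assumption):
# mount a host directory over the container's /data so the data outlives the container
docker run -d --name rethinkdb -v /srv/rethinkdb:/data -p 8080:8080 -p 28015:28015 -p 29015:29015 rethinkdb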