As stated in /etc/memgraph/memgraph.conf:
# Storage snapshot creation interval (in seconds). Set to 0 to disable periodic
# snapshot creation. [uint64]
--storage-snapshot-interval-sec=300
This means snapshots are only created automatically. Is there a way to manually run a command that saves a snapshot to the /var/lib/memgraph/snapshots/ directory?
You can use the CREATE SNAPSHOT; Cypher query to create a snapshot on demand. Read more about it in the official documentation.
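For example, a minimal sketch that runs the query from Python over Bolt, assuming a default local Memgraph on bolt://localhost:7687 with no authentication (the connection details are assumptions, not part of the question):

from neo4j import GraphDatabase

# Memgraph speaks the Bolt protocol, so the Neo4j Python driver can connect.
# Assumed connection details: default local instance, no auth configured.
driver = GraphDatabase.driver("bolt://localhost:7687", auth=("", ""))
with driver.session() as session:
    # Writes a snapshot into the configured snapshots directory,
    # e.g. /var/lib/memgraph/snapshots/.
    session.run("CREATE SNAPSHOT;")
driver.close()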
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-rds-dbcluster.html says that PreferredBackupWindow is used if automated backups are enabled using the BackupRetentionPeriod parameter.
It also says that BackupRetentionPeriod must be a value from 1 to 35.
Is it actually possible to disable the automated backups? Setting BackupRetentionPeriod to 0 using CloudFormation returns the following error: Invalid backup retention period: 0. Retention period must be between 1 and 35.
Unfortunately, you can't disable automated backups on Aurora. Even if you wanted to work around the issue by finding the most recent backup with
aws rds describe-db-cluster-snapshots --db-cluster-identifier=dbname | jq -r .DBClusterSnapshots[].DBClusterSnapshotIdentifier | tail -n1
and then attempting to manually delete the backup with
aws rds delete-db-cluster-snapshot --db-cluster-snapshot-identifier rds:dbname-2021-03-30-04-56
this results in the error
An error occurred (InvalidDBClusterSnapshotStateFault)
when calling the DeleteDBClusterSnapshot operation:
Only manual snapshots may be deleted.
It appears that you have a read replica for your DB instance. Unfortunately, because the instance has read replicas connecting to it, you won't be able to set the backup retention to 0. Backups are required to create and manage the binary logs for read replicas.
"Before a DB instance can serve as a replication source, you must enable automatic backups on the source DB instance by setting the backup retention period to a value other than 0. This requirement also applies to a read replica that is the source DB instance for another read replica."
Use case:
I have created the Elasticsearch indexes mywebsiteindex-yyyymmdd and mysharepointindex-yyyymmdd on my laptop/dev machine. I want to export/zip an index as a file. The file may be migrated by someone who has credentials to the target machine, and the zip/file may then be imported into the target Elasticsearch folder.
You can abstract the words 'machine', 'folder', and 'zip' above. The focus is: 'transfer an index as a file and reimport it at a target that I may not be able to reach over http/tcp/ftp/ssh'.
Is there any Python or other script out there that can export from the source and import to the target? A script that hides the internal complexities of node/cluster count differences between dev/prod etc., and just moves the index.
Note: I have already referred to the page below, so there is no need to reiterate it:
https://www.elastic.co/guide/en/cloud/current/ec-migrate-data.html
There are several options:
You can use the snapshot and restore API to create a snapshot of your index and restore it in your new instance (the recommended way; see the sketch after this list).
You can use the reindex API in your new instance to reindex your index from the remote one.
You can use Logstash with your old instance as the input and your new instance as the output.
And you can write a script/application using one of the supported clients to query your index, export it to a file, read that file, and import it into your new instance (Logstash can also do that).
But you can't simply move the data files around; this is neither supported nor recommended by Elastic.
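A minimal sketch of the snapshot-and-restore route with the official Python client (7.x-style API), assuming a shared-filesystem ("fs") repository whose location is registered in path.repo on both clusters; the host names, repository name, path, and index pattern are placeholders:

from elasticsearch import Elasticsearch

# On the source cluster: register a filesystem repository and snapshot the index.
src = Elasticsearch("http://localhost:9200")
src.snapshot.create_repository(
    repository="file_backup",
    body={"type": "fs", "settings": {"location": "/mnt/es-backup"}},
)
src.snapshot.create(
    repository="file_backup",
    snapshot="mywebsite-snap",
    body={"indices": "mywebsiteindex-*"},
    wait_for_completion=True,
)

# Zip /mnt/es-backup, hand the file over, unzip it into the target's path.repo,
# then on the target cluster register the same repository and restore:
dst = Elasticsearch("http://target:9200")
dst.snapshot.create_repository(
    repository="file_backup",
    body={"type": "fs", "settings": {"location": "/mnt/es-backup"}},
)
dst.snapshot.restore(
    repository="file_backup",
    snapshot="mywebsite-snap",
    wait_for_completion=True,
)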
Assume I'm using a GitLab pipeline and there is a build process that gets everything ready for production. There is a third-party database that needs to be downloaded, e.g. a MaxMind Geo database. I don't want to strain their servers every time we run a build, so I'd like to download the latest database only once a month.
What tactics can I use to save a "last run" date, check it, and take action to download the DB if the last run date is more than a month ago?
I would use the cache option in .gitlab-ci.yml.
Create a file named update_date once you update the DB, then cache it.
In logic.py (Python is just an example; write it however you'd like), check that the file exists and that its date is not more than 30 days old; in any other case, update the DB. A sketch of that script follows the configuration below.
db_update:
  script:
    - python logic.py
  cache:
    paths:
      - ./update_date
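A minimal sketch of logic.py under these assumptions: update_date holds an ISO date, and the download URL is a hypothetical placeholder for your licensed MaxMind link:

import datetime
import urllib.request
from pathlib import Path

STAMP = Path("update_date")
# Hypothetical placeholder URL; substitute your licensed MaxMind download link.
DB_URL = "https://example.com/GeoLite2-City.mmdb"

def needs_update(max_age_days=30):
    if not STAMP.exists():
        return True
    last_run = datetime.date.fromisoformat(STAMP.read_text().strip())
    return (datetime.date.today() - last_run).days > max_age_days

if needs_update():
    urllib.request.urlretrieve(DB_URL, "GeoLite2-City.mmdb")
    STAMP.write_text(datetime.date.today().isoformat())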
I want to change the execution time of a load plan in Oracle ODI scheduling. I change the starting date and time manually, but it is executed again with the previous date and time settings. How can I get the load plans to execute at the required date and time? Thanks.
EDIT: We have also tried "Active for the period" and it didn't work. Is it related to the Java version?
Changing the schedules of a Load Plan or a Scenario will update the schedule information in the repository but will not update that information in the agent.
There is therefore an extra step to perform. On the Topology tab, right-clicking the physical agent gives the option Update Schedule. That refreshes the agent's in-memory schedules from what is stored in the repository.
I'm pretty new to LevelDB. I need something like "roll back to a specific state"; does LevelDB support that? After some searching, I know that LevelDB does not support transactions, but it does support snapshots. Can I restore my database to a snapshot?
My need is like this:
Initial state
Make some changes to the database
If anything goes wrong, go back to the initial state.
LevelDB snapshots are not meant for reverting changes; they give other readers in the same process access to a consistent version of the database.
One way to meet your requirement is to implement a transaction log yourself (a sketch follows the list):
Begin the transaction and take a snapshot
Log in memory every key that is modified
On commit, release the snapshot and the log
On rollback, look up every modified key in the log, retrieve its initial value from the snapshot, and set it again. Then release the snapshot and the log.
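A minimal sketch of that scheme in Python, assuming the plyvel binding; the class and method names are illustrative, not a LevelDB API:

import plyvel

class SimpleTransaction:
    # Rollback support built from a snapshot plus an in-memory key log.
    def __init__(self, db):
        self.db = db
        self.snapshot = db.snapshot()  # consistent pre-transaction view
        self.touched = set()           # keys modified during the transaction

    def put(self, key, value):
        self.touched.add(key)
        self.db.put(key, value)

    def delete(self, key):
        self.touched.add(key)
        self.db.delete(key)

    def commit(self):
        self.snapshot.close()
        self.touched.clear()

    def rollback(self):
        for key in self.touched:
            old = self.snapshot.get(key)
            if old is None:
                self.db.delete(key)    # key did not exist before the transaction
            else:
                self.db.put(key, old)  # restore the pre-transaction value
        self.snapshot.close()
        self.touched.clear()

db = plyvel.DB("/tmp/testdb", create_if_missing=True)
tx = SimpleTransaction(db)
tx.put(b"k", b"new-value")
tx.rollback()  # b"k" is back to its initial state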