Is it safe to delete duplicity cache folder?

I'm using duplicity to back up a web project.
There are three different places handled by duplicity: storage, the DB, and system settings.
I actually want to stop backing up storage and keep the other two.
So the question is: how can I tell which cache files relate to the storage backups, and is it safe to delete them (or the whole duplicity cache folder)? It takes up too much space.
UPD:
I've moved the cache to a new place and changed the cache path in the configs, then made a few test runs to make sure everything works fine. After that I removed the unneeded folder, and all the remaining backups still work. So it's definitely safe to move/delete the cache folder.

Generally, yes, it should be safe. But as we are talking about backups, how about safety first?
Simply move the folders out of the way,
redo the backups you want to continue (observe the "newly" created folder name under the cache) [1],
do a verify to make sure you can restore everything,
and on success you may delete the old cache folders (see the sketch after the footnote).
Done.. ede/duply.net
[1] The cache folder name should be an MD5 hash of the target URL:
http://bazaar.launchpad.net/~duplicity-team/duplicity/0.8-series/view/head:/duplicity/commandline.py#L112
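A minimal shell sketch of that safety-first sequence, assuming the default cache location ~/.cache/duplicity; the source path and sftp URL are made-up placeholders, and duplicity may normalize the URL before hashing, so treat the md5sum line as an approximation:

    # move the whole cache out of the way instead of deleting it outright
    mv ~/.cache/duplicity ~/.cache/duplicity.old

    # redo the backups you want to keep; duplicity rebuilds its cache as it runs
    duplicity /var/backups/db sftp://user@backuphost/db

    # the rebuilt folder under the cache is named after the md5 of the target URL
    echo -n "sftp://user@backuphost/db" | md5sum

    # verify that everything is restorable from the rebuilt cache
    duplicity verify sftp://user@backuphost/db /var/backups/db

    # only after a clean verify, drop the old cache
    rm -rf ~/.cache/duplicity.old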

Is there a difference between the store and move methods in Laravel file upload, and when to use one over the other?

When uploading images, I realize that FIRST, I can use the store method, which saves images in storage/app/public by default; then I'll have to create a symbolic link from public/storage to storage/app/public to access the image.
SECOND, I can instead use the move method and have the image saved in public/images directly.
I feel like the first way is longer for no reason. Are there scenarios for using one over the other, or is it just a matter of preference?
Yes, it's better in some cases, but it might not be relevant to you; let me explain.
The storage folder is usually considered a "shared" folder. What I mean by that is that its contents usually should not change when you deploy your application, and most of its contents are usually even ignored in git (to prevent your uploads from ending up in your git repository).
So storing your uploads in this case inside the storage/app/public directory means the contents are not in git and the storage folder can be shared between deployments.
This is useful when you are using tools like Envoyer, Envoy or other "zero downtime" deployment tools.
Most (if not all) zero-downtime deployment tools work by cloning your application to a fresh directory and running composer install and other setup commands before promoting that fresh directory to the current directory, which is what your webserver uses to serve your app. Changing a symlink over to a new directory is instant, so you get zero-downtime deployments: all setup (installing dependencies etc.) was done in a folder not yet serving traffic to your users.
And since each deployment starts with a fresh clone of your repository, your public and storage folders are empty again... which is not what you want, because you of course want to retain uploads between deployments. The workaround is that these deployment tools keep the storage folder in a separate, shared location; on every deployment they clone your git repo and symlink the release's storage folder to that shared storage folder. All your deployments then share the same storage directory, making sure uploads (but, depending on the drivers you use, also sessions, caches, and logs) are the same for every deployment.
And from there you can use php artisan storage:link to symlink storage/app/public to public/storage so that the files are publicly accessible.
(Note: with the symlink in place it doesn't matter which path you write to, storage/app/public or public/storage, because they point to the same folder on the disk.)
So this seemingly overcomplicated symlink dance is there to make deployments easier and to keep all your "storage" in a single place, the storage dir.
When you are not using those zero-downtime deployment tools, this all seems like a lot of overhead. But even then it can be useful to have a single place where all your app storage lives, for example so you back up one directory instead of several. (A sketch of the deployment layout follows.)
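For illustration, a minimal sketch of what such a zero-downtime layout looks like on disk; the /var/www paths, release name, and repository URL are hypothetical, not something Envoyer or Envoy mandates:

    # one shared storage directory that survives deployments
    mkdir -p /var/www/shared/storage

    # each deployment is a fresh clone in its own release directory
    git clone git@example.com:app.git /var/www/releases/20240101120000

    # replace the release's storage dir with a symlink to the shared one
    rm -rf /var/www/releases/20240101120000/storage
    ln -s /var/www/shared/storage /var/www/releases/20240101120000/storage

    # expose storage/app/public as public/storage inside the release
    cd /var/www/releases/20240101120000 && php artisan storage:link

    # atomically promote the new release; the webserver serves /var/www/current
    ln -sfn /var/www/releases/20240101120000 /var/www/current

Because only the final ln -sfn switch is what the webserver sees, users never hit a half-deployed release.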
From the Laravel documentation (https://laravel.com/docs/5.4/filesystem):
"The move method may be used to rename or move an existing file to a new location."
"Laravel makes it very easy to store uploaded files using the store method on an uploaded file instance."
So, use storeAs() or store() when you are working with a file that has been uploaded (i.e. within a controller), and move() only when you already have a file on disk and want to move it from one location to another.

Safe to clean out C:\ProgramData\firebird folder when FB offline?

Is it safe to clean out the contents of the C:\ProgramData\firebird folder, i.e. wipe it, when the Firebird service (SuperServer, v3.0) is not running?
I understand that it contains lock tables etc., so it should not be touched while FB is running. But it's not clear to me whether it can be wiped safely when FB is not running, or whether it contains data that can be vital when FB starts up again.
My situation is that I'm migrating a VM with an FB installation. Migration has been done like this, due to practical reasons (uptime vs. file transfer & VM conversion time):
1. Snapshot of source VM, i.e. nightly backup, is copied to the new location. Source stays up and running. The copy process takes about 1 day. (We have the databases locked with nbackup when the nightly snapshot is taken.)
2. Snapshot is unpacked at the target location, converted from VMware to Hyper-V, and brought online for additional reconfig and system testing.
3. A few days pass.
4. Both source and target Firebird services are stopped, so no database activity is going on anywhere.
5. Sync files from source to target, including database files. This file transfer is much smaller than in step 1, so it can be done during offline time.
In step 5 I find diffs in the C:\ProgramData\firebird folder, and I'm wondering what would be the best approach:
A) Wipe the folder at the target.
B) Sync so the target has the same content as the source.
C) Leave the target as is.
Please note that when the FB service is started again at the target, the database files will be identical to those at the source at the time of FB shutdown, and probably won't "match" the contents of C:\ProgramData\firebird at the target. I would assume that this fact rules out option C).
The files in C:\ProgramData\firebird are only used during runtime of the Firebird server and contain transient data. It should be safe to delete these files when Firebird is not running.
In other words, when migrating from one server to another, you do not need to migrate the contents of C:\ProgramData\Firebird.
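For illustration, a sketch of step 5 under option A using Windows commands; the UNC share, local paths, and service name are placeholders, not anything Firebird prescribes:

    :: with both Firebird services stopped, sync only the database files
    robocopy \\SOURCE\D$\Databases D:\Databases /MIR

    :: wipe the transient runtime state at the target (option A)
    del /s /q C:\ProgramData\firebird\*

    :: Firebird recreates what it needs on the next start
    net start FirebirdServerDefaultInstance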

Flutter cache manager library

I am using this library https://pub.dartlang.org/packages/flutter_cache_manager#-readme-tab-
and I have 2 questions.
Firstly, it is unclear to me whether the getFile(url) function automatically caches the file that is returned, or whether I must call putFile() after it is returned.
Secondly, I see that you can override BaseCacheManager to set a maxAgeCacheObject. Does the OS automatically delete files that have expired, or must I make sure they are cleaned up?
Thanks for the help :)
ad 1) The getFile(url) method will "automatically" cache the result. The putFile() method is only there to eagerly precache data.
ad 2) Both. You should make sure you have a reasonable upper limit, but since the files are stored in a temporary directory which the OS is allowed to delete, they will be removed if the device runs out of storage. --- FWIW: no, the OS does not remove files because they are too old; the cache manager removes objects that are older than maxAgeCacheObject. (The OS does not know how old a file is allowed to be; it might start deleting the oldest files first, but there is no guarantee of that.)

Pushing File Directory Updates Without Bogging Down the Server

I asked our IT staff to restrict the top-level folder of a directory share to prevent users from adding or modifying anything at the root level, while still allowing read/write/modify access to all sub-folders. There are several hundred gigabytes and probably over a million files and folders. I was told:
"The way the current permission structure is set on this volumes make what you ask very hard to do. The volume has the one permission structure that is inherited all the way down – Everyone FULL rights to everything – therefore making any changed at the top would have to propagate to every single file in the entire volume and would take days to complete. "
Does this make sense? I get it would take days to perform, but why does that matter? Is he saying the system would be bogged down for days? What are the alternatives to lock down the root? Any help would be appreciated.
If you're using a Windows file server, that's not necessarily the case. There is the option of changing the permissions on the single top-level directory only vs. propagating everything down. However, if you do propagate everything down (generally an IT decision), it will bog down the server, and your file share will probably not be accessible with any acceptable level of performance while it runs. (See the sketch below the link.)
https://social.technet.microsoft.com/Forums/ie/en-US/76e50b7d-40b2-4198-a2e2-23cf26f08761/permissions-not-propagating-properly?forum=winserverDS
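A hedged icacls sketch of the "single directory" approach; D:\Share is a placeholder, and you'd want to test this against a copy first. Without the (OI)(CI) inheritance flags, the deny entry applies to the root folder only, so nothing has to propagate to the million files below it:

    :: deny creating files (WD) and subfolders (AD) in the root folder only
    icacls "D:\Share" /deny "Everyone:(WD,AD)"

    :: inspect the resulting ACL
    icacls "D:\Share"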

Is it possible to recover an HSQLDB from the data file alone

A delete . was executed on the folder containing an HSQLDB. The only file that was locked by the system (and thus not deleted) was the database's .data file. Is it possible to recover the database from this file alone?
If the delete was done within the BuildServer directory itself, and not specifically within the BuildServer/system directory, you are out of luck, since all the builds and their build step configurations are stored within BuildServer/config/projects.
The database only stores build logs, changes, users, etc., but not the actual config. Those are all XML-based configs on the file system.
If the delete was done within BuildServer/system, you may be able to start up a clean TC instance to rebuild the BuildServer/system directory and then shut it down. Once it's down, switch out the buildserver.data files and bring it up again. (Trying to do this now, but it's taking forever to start up. If I find out more I'll edit.)
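A sketch of that swap, assuming a default TeamCity layout (~/.BuildServer data directory and the stock teamcity-server.sh script); the /opt and /recovered paths are placeholders for your install:

    # let a clean TeamCity instance rebuild BuildServer/system, then stop it
    /opt/TeamCity/bin/teamcity-server.sh start
    /opt/TeamCity/bin/teamcity-server.sh stop

    # switch out the freshly created data file for the surviving one
    mv ~/.BuildServer/system/buildserver.data ~/.BuildServer/system/buildserver.data.clean
    cp /recovered/buildserver.data ~/.BuildServer/system/buildserver.data

    # bring the server back up and see whether HSQLDB accepts the file
    /opt/TeamCity/bin/teamcity-server.sh start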
