gpg decrypting and moving a file safely - bash

I want to decrypt and move a file safely.
What would be the safest way to do this?
My current approach:
echo "what's the passphrase?"
read -s -r key
gpg --decrypt --batch --passphrase "$key" "file.gpg" > file
mv -f "./file" "/location/file"
Are there any security issues that might occur this way?

I think your approach is OK, but it depends on what you want to achieve. Note that:
"As long as you don't move the file across file-system borders, the operation should be safe" - ref.
If your priority is safety and you don't own the system you are working on, I would consider not saving the content to a file at all, but instead copying it directly to the clipboard (using xclip ref or clipboard-cli, if you can install it). You could then paste your data into the desired secure destination, and clear the clipboard as a final step.
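For the clipboard route, a minimal sketch (assuming xclip is installed and you are on X11; the file name is just an example):
# Prompt for the passphrase without echoing it, decrypt straight to the
# X clipboard, and never write the plaintext to disk.
read -s -r -p "Passphrase: " key
gpg --decrypt --batch --passphrase "$key" file.gpg | xclip -selection clipboard
unset key
# ...paste wherever the data needs to go, then clear the clipboard:
printf '' | xclip -selection clipboard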
For larger files (measured in GB or more), I think saving the file on the system is unavoidable. After successfully copying it across file-system borders, you would then need to clean up the local copy - shred or wipe (ref) would be your friends here.
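For the large-file case, a minimal sketch (reusing the $key variable from your snippet; note that shred's overwriting is only meaningful on filesystems that rewrite data in place):
# Decrypt to a local temporary file, copy it across the file-system
# border, then overwrite and remove the plaintext left behind.
gpg --decrypt --batch --passphrase "$key" file.gpg > ./file
cp ./file /location/file
shred -u ./file   # overwrite the temporary plaintext, then unlink it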

Related

Zip directory in different batches

I'm trying to zip a massive directory of images that will be fed into a deep learning system. This is incredibly time consuming, so I would like to be able to stop the zipping process prematurely with Ctrl + C and zip the directory in different "batches".
Currently I'm using zip -r9v folder.zip folder, and I've seen that the -u option allows updating changed files and adding new ones.
I'm worried about some file or the zip itself ending up corrupted if I terminate the process with Ctrl + C. From this answer I understand that cp can be terminated safely, and this other answer suggests that gzip is also safe.
Putting it all together: is it safe to end the zip command prematurely? Is the -u option viable for zipping in different batches?
Is it safe to end the zip command prematurely?
In my tests, cancelling zip (Info-ZIP, 16 June 2008 (v3.0)) with Ctrl+C did not create a zip archive at all, even when the already compressed data was 2.5 GB. Therefore, I would say Ctrl+C is "safe" in the sense that you won't end up with a corrupted file, but it is also pointless: you did all the work for nothing.
Is the -u option viable for zipping in different batches?
Yes. Zip archives compress each file individually, so the archives you get from adding files later on are as good as adding all files in a single run. Just remember that starting zip takes time too. So set the batch size as high as acceptable to save time.
Here is a script that adds all your files to the zip archive, but gives you a chance to stop the compression after every 100th file.
#!/bin/bash
batchsize=100
shopt -s globstar
files=(folder/**)
# Start at 0, or at the offset given in the startfile environment variable
# when resuming an earlier run.
startfile=${startfile:-0}
echo "Press enter to stop compression after this batch."
for ((; startfile<${#files[@]}; startfile+=batchsize)); do
    # Only the very first batch creates the archive; later batches update it (-u).
    ((startfile==0)) && u= || u=u
    zip "-r9v$u" folder.zip "${files[@]:startfile:batchsize}"
    # read -t 0 only checks whether input is waiting, so pressing Enter
    # during a batch stops the script once that batch finishes.
    if read -t 0; then
        echo "Compression stopped before file $startfile."
        echo "Re-run this script with startfile=$startfile to continue."
        exit
    fi
done
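With the loop seeded from the startfile environment variable (as in the script above), resuming after a stop could look like this (the script name is a placeholder):
# Continue from the offset the script printed when it stopped.
startfile=300 ./zip-in-batches.sh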
For more speed you might want to look into alternative zip implementations.

Block Level Copying and Rsync

I am trying to use grsync (a GUI for rsync) for Windows to run backups. In the directory that I am backing up there are many larger files that are updated periodically. I would like to be able to sync just the changes to those files and not the entire file on each backup. I was under the impression that rsync is a block-level file copier and would only copy the bytes that had changed between each sync. Perhaps this is not the case, or I have misunderstood what block-level file copying is!
To test this I used grsync to synchronize a 5GB zip file between two directories. Then I added a very small text file to the zip file and ran grsync again. However it proceeded to copy over the entire zip file again. Is there a utility that would only copy over the changes to this zip file and not the entire file again? Or is there a command within grsync that could be used to this effect?
The reason the entire file was copied is simply that the algorithm that handles block-level changes is disabled when copying between two directories on a local filesystem.
This would have worked, because the file is being copied (or updated) to a remote system:
rsync -av big_file.zip remote_host:
This will not use the "delta" algorithm and the entire file will be copied:
rsync -av big_file.zip D:\target\folder\
Some notes
Even if the target is a network share, rsync will treat it as a path on your local filesystem and will disable the "delta" (block changes) algorithm.
Adding data to the beginning or middle of a data file will not upset the algorithm that handles the block-level changes.
Rationale
The delta algorithm is disabled when copying between two local targets because it needs to read both the source and the destination file completely in order to determine which blocks need changing. The rationale is that the time taken to read the target file is much the same as just writing to it, and so there's no point reading it first.
Workaround
If you know for certain that reading from your target filesystem is significantly faster than writing to it, you can force the block-level algorithm to run by including the --no-whole-file flag.
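For example, a sketch for a local or mounted destination (whether this is actually faster depends on your disks; the destination path is a placeholder):
# Force the delta-transfer algorithm even though the destination is local,
# and update the existing destination file in place.
rsync -av --no-whole-file --inplace big_file.zip /mnt/backup/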
If you add a file to a zip archive, the entire zip file can change if the file was added as the first file in the archive; the whole archive shifts. So yours is not a valid test.
I was just looking for this myself; I think you have to use
rsync -av --inplace
for this to work.

How to tar a folder while files inside it might be written to by some other process

I am trying to create a script for a cron job. I have a folder of around 8 GB containing thousands of files. I am trying to create a bash script which first tars the folder and then transfers the tarred file to an FTP server.
But I am not sure what happens while tar is tarring the folder and some other process is accessing or writing to the files inside it.
It is fine for me if the tarred file does not contain the most recent changes made while tar was running.
Please suggest the proper way to do this. Thanks.
tar will happily tar "whatever it can". But you will probably have some surprises when untarring, as tar also records the size of each file before tarring it.
A very unpleasant surprise would be: if the file got truncated, tar will "fill" it with NUL characters to match its recorded size... This can have very unpleasant side effects. In some cases, when untarring, tar will say nothing and silently add as many NUL characters as it needs to match the size (in fact, on Unix it doesn't even need to do that: the OS does it, see "sparse files"). In other cases, if the truncation occurred while the file was being tarred, tar will complain that it encountered an unexpected end of file when untarring (it expected XXX bytes but read fewer than that), but will still say that the file should be XXX bytes (and Unix OSes will then create it as a sparse file, with NUL characters magically appended at the end to match the expected size when you read it).
(To see the NUL characters, an easy way is less thefile (or cat -v thefile | more on a very old Unix); look for any ^@.)
But on the contrary, if files are only ever appended to (logs, etc.), then the side effect is less problematic: you will only miss some of the appended data (which you say you're OK with), and you won't get that unpleasant "fill with NUL characters" side effect. tar may complain when untarring the file, but it will untar it.
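If you use GNU tar, a hedged sketch of the archiving step that tolerates files changing underneath it might look like this (the flags and exit-code convention are GNU-specific, and the paths are placeholders):
#!/bin/bash
# Archive the folder while tolerating files that change or vanish mid-read.
tar --warning=no-file-changed --ignore-failed-read -czf /tmp/backup.tar.gz folder
status=$?
# GNU tar exits with 1 when some files differed (e.g. changed while being
# read); treat that as acceptable, but fail on anything worse.
if [ "$status" -gt 1 ]; then
    echo "tar failed with status $status" >&2
    exit "$status"
fi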
I think tar fails (and so does not create the archive) when an archived file is modified during archiving. As Etan said, the solution depends on what you finally want in the tarball.
To avoid a tar failure, you can simply COPY the folder elsewhere before calling tar. But in this case, you cannot be confident in the consistency of the backed-up directory. It's NOT an atomic operation, so some files will be up to date while others will be outdated. This may or may not be a severe issue in your situation.
If you can, I suggest you configure how these files are created. For example: "only recent files are appended to; files older than 1 day are never changed". In that case you can easily back up only the old files, and the backup will be consistent.
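A minimal sketch of that "only back up old files" idea, assuming GNU find and tar and taking "older than 1 day" literally:
# Back up only files whose data has not changed in the last 24 hours,
# so nothing in the archive is still being written to.
find folder -type f -mtime +0 -print0 \
    | tar --null --files-from=- -czf /tmp/old-files.tar.gz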
More generally, you have to accept losing the last data AND being inconsistent (each file is backed up at a different time), or you have to act at a different level. I suggest:
Configure the software that produces the data so you can choose a consistency point
Or use OS/virtualization features. For example, it is possible to take a consistent snapshot of a storage volume on some virtual storage...

Faster Alternatives to cp -l for a Whole File Structure?

Okay, so I need to create a copy of a file structure; however, the structure is huge (millions of files) and I'm looking for the fastest way to copy it.
I'm currently using cp -lR "$original" "$copy" however even this is extremely slow (takes several hours).
I'm wondering if there are any faster methods I can use? I know of rsync --link-dest but it isn't any quicker, and I really need it to be quicker, as I want to create these snapshots every hour or so.
The alternative is copying only changes (which I can find quickly) into each folder and then "flattening" them when I need to free up space (rsyncing newer folders into older ones until the last complete snapshot is reached), but I would really rather that each folder be its own complete snapshot.
Why are you discarding link-dest? I use a script with that option and take snapshots pretty often and the performance is pretty good.
In case you reconsider, here's the script I use: https://github.com/alvaroreig/varios/blob/master/incremental_backup.sh
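For reference, the core of a --link-dest snapshot scheme usually boils down to something like this (paths and naming are placeholders, not taken from the linked script):
# Each run creates a new dated snapshot; unchanged files are hard-linked
# against the previous snapshot instead of being copied again.
prev=/backups/latest
dest=/backups/$(date +%Y-%m-%d_%H%M)
rsync -a --link-dest="$prev" /data/ "$dest"/
# Point the "latest" symlink at the snapshot we just made.
ln -sfn "$dest" /backups/latest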
If you have pax installed, you can use it. Think of it as tar or cpio, but standard, per POSIX.
#!/bin/sh
# Copy "somedir/tree" to "$odir/tree".
itree=$1
odir=$2
base_tree=$(basename "$itree")
# The -s substitution strips the leading path so the tree lands directly
# under "$odir" instead of "$odir/$itree".
pax -rw "$itree" -s "#$itree#$base_tree#g" "$odir"
The -s replstr is an unfortunate necessity (you'll get $odir/$itree otherwise), but it works nicely, and it has been quicker than cp for large structures thus far.
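Saved as, say, paxcopy.sh (the name is just a placeholder), it would be invoked like:
# Copy "somedir/tree" into /backup, producing /backup/tree.
sh paxcopy.sh somedir/tree /backup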
tar is of course another option, as someone already suggested, if you don't have pax.
Depending on the files, you may achieve performance gains by compressing:
cd "$original && tar zcf - . | (cd "$copy" && tar zxf -)
This creates a tarball of the "original" directory, sends the data to stdout, then changes to the "copy" directory (which must exist) and untars the incoming stream.
For the extraction, you may want to watch the progress: tar zxvf -

What software do I use to put floppies as images on a hard disk?

I still have a large number of floppies. On some of them there is probably source code I don't want to lose. I also don't want to look at each one individually, as that would take a lot of time. What software would be best for copying all the data to a hard disk, preferably while creating an index at the same time?
I would also be interested in imaging mac floppies, but it doesn't have to be on the same machine.
Follow-up from the asker:
The goal is to finally get rid of all those boxes with floppies. I was asking about images as xcopy doesn't copy all (hidden?) sectors, does it? Is xxcopy better?
I don't want to type a name for each floppy.
Disk Utility on the Mac probably needs a bit too much keyboard or mouse action, but might be AppleScriptable.
Here is a script I used on my Linux box to perform the same type of task. Basically I just dump a raw image of each disk to a folder. I had another script I ran later that mounted each image and dumped a directory listing into a file.
#!/bin/bash
floppydev='/dev/sdb'
savepath='/srv/floppy_imgs'
while true
do
    echo "Press a key to create an image of the next floppy"
    read -n 1
    # Probe the drive; if the disk isn't in the drive yet, wait for it.
    dd if="$floppydev" of=/dev/null count=1 2> /dev/null
    errlvl=$?
    while [ $errlvl -ne 0 ]
    do
        sleep 1
        dd if="$floppydev" of=/dev/null count=1 2> /dev/null
        errlvl=$?
    done
    filename=$(date +'img-%Y%m%d-%H%M%S.flp')
    if [ ! -f "$savepath/$filename" ]
    then
        echo "creating image as $filename"
        dd if="$floppydev" of="$savepath/$filename"
        errlvl=$?
        if [ $errlvl -ne 0 ]
        then
            echo 'the image copy failed!'
            rm -i "$savepath/$filename"
        else
            # Show the volume label of the new image (mtools) and record
            # a checksum alongside it.
            mlabel -s -i "$savepath/$filename" ::
            md5sum "$savepath/$filename" > "$savepath/$filename.md5"
            echo "copy complete"
            echo " "
        fi
    fi
done
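The second script that dumps a directory listing for each image could be as simple as this sketch, assuming the images are FAT-formatted and mtools is installed (mdir's -/ flag lists subdirectories recursively):
#!/bin/bash
# For every floppy image, write a recursive FAT directory listing next to it
# so the collection can be searched without mounting anything.
savepath='/srv/floppy_imgs'
for img in "$savepath"/*.flp
do
    mdir -/ -i "$img" :: > "$img.listing.txt"
done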
Use rawread and rawrite.
There may be various implementations, the first one I found was this: http://www.pamarsystems.com/raw.html
I've used WinImage with satisfying results.
On OS X maybe you can simply use dd?
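For example (the device name is only a guess; check diskutil list first):
# Unmount the floppy's volume, then image the raw device to a file.
# /dev/disk2 and the output path are placeholders.
diskutil unmountDisk /dev/disk2
sudo dd if=/dev/rdisk2 of=~/floppies/disk01.img bs=512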
I am not too sure of your goal. Somehow, what you need is a robot, inserting the floppies, copying, etc. :-)
I would just make a bunch of empty folders, insert a disk, select all, and drag to the nth folder. Or use something like xcopy or xxcopy to transfer data recursively from floppy to folder. Etc.
DOS? Why don't you just create yourself a batch file which makes a new folder in a prescribed location, named with the current timestamp, and copies the contents of your floppy drive over? That way you can watch TV while you insert floppies and run your batch file. As long as you put them into the drive in a known order, you can identify which one is which by sorting the resulting folders by timestamp.
Not sure what the equivalent would be on the mac, but I'm sure there is one.
Edit: I think everything you should need is in here
On the Mac, you could probably use the Disk Utility to convert the floppies to DMG files - but I haven't had a Mac floppy drive in years, so I can't test that theory.
