Rclone - find the destination folder automatically when uploading to FTP - filter

I have to upload around 40-50 files (same extension) to FTP every day, to separate folders where those files belong.
I'm a complete newbie in scripting; I just got to know rclone and it's amazing what it can do.
So I'm wondering: is there any way to script rclone so that it finds the destination folder automatically for the files to be uploaded, based on their names? More precisely, based on the numbers in the file name:
The file name's 2nd and 3rd digits are the same as the destination folder's last two digits,
The destination folders are in different places, but under the same root folder.
Is there any way to ask rclone to check the 2nd and 3rd character of each file waiting to be uploaded and, based on those two numbers, upload it to the directory whose name ends with those two digits?
For example:
50321_XXXXX.txt -----goes_to-----ftp:/xxxx/yyyy/zzzz/nn03/
51124_XXXXX.txt -----goes_to-----ftp:/xxxx/wwww/kkkk/nn11/
53413_XXXXX.txt -----goes_to-----ftp:/xxxx/dddd/aaaa/nn34/
Could you help me with where to go?
Thank you for your answers.
So far: nothing. I don't know where to start.
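As far as I know, rclone on its own can't pick a different destination per file, so one approach is a small wrapper script that does the routing and calls rclone once per file. Below is a minimal sketch in Python; the local folder, the ftp: remote root, the *.txt pattern and the nnXX folder naming are all assumptions based on the examples above. It lists the remote folders once with rclone lsf, matches each file's 2nd and 3rd characters against the folder names, and uploads with rclone copy:

#!/usr/bin/env python3
"""Sketch: route local files to FTP folders based on the 2nd/3rd characters
of the file name. The paths, the remote name "ftp:" and the nnXX folder
naming are assumptions taken from the question's examples."""

import subprocess
from pathlib import Path

LOCAL_DIR = Path("/path/to/outgoing")   # files waiting to be uploaded (placeholder)
REMOTE_ROOT = "ftp:/xxxx"               # common root folder on the FTP remote (placeholder)

# List every directory under the root once, so codes can be matched to paths.
result = subprocess.run(
    ["rclone", "lsf", "--dirs-only", "-R", REMOTE_ROOT],
    capture_output=True, text=True, check=True,
)
remote_dirs = [d.rstrip("/") for d in result.stdout.splitlines()]

for f in sorted(LOCAL_DIR.glob("*.txt")):
    code = f.name[1:3]                  # 2nd and 3rd characters, e.g. "03" for 50321_XXXXX.txt
    # Pick the directory whose name ends with the code (e.g. .../nn03).
    matches = [d for d in remote_dirs if d.split("/")[-1].endswith(code)]
    if len(matches) != 1:
        print(f"skipping {f.name}: found {len(matches)} candidate folders for code {code}")
        continue
    dest = f"{REMOTE_ROOT}/{matches[0]}"
    subprocess.run(["rclone", "copy", str(f), dest], check=True)
    print(f"{f.name} -> {dest}")

A pure rclone alternative would be to run one rclone copy per two-digit code with an --include filter, but you would still need something like the lookup above to discover which remote folder each code maps to.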

Related

Recursive copy to a flat directory

I have a directory of images, currently at ~117k files for about 200 gig in size. My backup solution vomits on directories of that size, so I wish to split them into subdirectories of 1000. Name sorting or type discrimination is not required. I just want my backups to not go nuts.
From another answer, someone provided a way to move files into the split up configuration. However, that was a move, not a copy. Since this is a backup, I need a copy.
I have three thoughts:
1. Files are added to the large directory with random filenames, so alpha sorts aren't a practical way to figure out deltas. Even using a tool like rsync, adding a couple hundred files at the beginning of the list could cause a significant reshuffle and lots of file movement on the backup side.
2. The solution to this problem is to reverse the process: do an initial file split, add new files to the backup in the newest directory, manually create a new subdirectory at the 1000-file mark, and then use rsync to pull files from the backup directories to the work area, e.g. rsync -trvh <backupdir>/<subdir>/ <masterdir>.
3. While some answers to similar questions indicate that rsync is a poor choice for this, I may need to do multiple passes, one of which would be over a slower link to an offsite location. The performance hit of rsync's startup parsing is still far preferable to the time re-uploading the backup every day would take.
My question is:
How do I create a script that will recurse into all 117+ subdirectories and dump the contained files into my large working directory, without a lot of unnecessary copying?
My initial research produces something like this:
#!/bin/bash
# Pull the contents of each backup subdirectory into the flat working directory.
cd /path/to/backup/tree/root || exit 1
find . -mindepth 1 -maxdepth 1 -type d -exec rsync -trvh {}/ /path/to/work/dir/ \;
Am I on the right track here?
It's safe to assume modern versions of bash, find, and rsync.
Thanks!

Checksum File Comparison Tool

So I am looking for a tool that can compare files in folders based on checksums (this is common, not hard to find); however, my use case is that the files can live in pretty deep folder paths that can change. I am expected to compare them every few months and ONLY create a package of the files that differ. I don't care what folders the files are in; the same file can move between folders regularly, and files wouldn't change names much, only content (so checksums are a must).
My issue is that almost all of the tools I can find do care about the folder paths when they compare folders; I don't, and I actually want them to ignore the folder paths. I'd rather not develop anything, or at least only have to develop a small part of the process, to save time.
To be clear, the order I am looking for things to happen in is:
1. Program scans directory from 1/1/2020 (A).
2. Program scans directory from 4/1/2020 (B).
3. Find all files whose checksum in B doesn't exist in A and put them in a new folder with the differences (C).
Any ideas?
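If a small amount of scripting is acceptable, the whole comparison fits in a short Python script using only the standard library. The paths below are placeholders and SHA-256 is assumed as the checksum; prefixing the copied names with the hash is just one way to avoid collisions in the flat output folder C:

#!/usr/bin/env python3
"""Sketch: copy every file from B whose checksum does not appear anywhere in A
into a flat folder C, ignoring folder paths. Paths are placeholders."""

import hashlib
import shutil
from pathlib import Path

DIR_A = Path("/path/to/snapshot-2020-01-01")   # placeholder
DIR_B = Path("/path/to/snapshot-2020-04-01")   # placeholder
DIR_C = Path("/path/to/diff-package")          # placeholder

def sha256(path: Path) -> str:
    """Hash a file in chunks so large files don't need to fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Hash everything under A once, ignoring where the files live.
known = {sha256(p) for p in DIR_A.rglob("*") if p.is_file()}

DIR_C.mkdir(parents=True, exist_ok=True)
for p in DIR_B.rglob("*"):
    if not p.is_file():
        continue
    digest = sha256(p)
    if digest not in known:
        # Prefix with part of the hash so same-named files don't collide in C.
        shutil.copy2(p, DIR_C / f"{digest[:12]}_{p.name}")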

How to recursively open .txt files in CentOS 7

I have many files which I get from the sensor, and the number of files increases every hour. The file names consist of 3 parts: rain_date_time. How can I open each file recursively to get what's inside it and add it to a database? I have found a way to read the files one by one, yet I face difficulty in reading them recursively.
this is my code
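One possible sketch of the recursive part, in Python: it walks the directory tree, reads every rain_* file line by line, and inserts the lines into a database. SQLite is used here only as a stand-in for whatever database is actually in use, and the table and column names are made up:

#!/usr/bin/env python3
"""Sketch: recursively read every rain_* file and insert its lines into a
database. The data directory, table and column names are placeholders."""

import sqlite3
from pathlib import Path

DATA_DIR = Path("/path/to/sensor/output")   # placeholder

conn = sqlite3.connect("rain.db")           # stand-in for the real database
conn.execute("CREATE TABLE IF NOT EXISTS rain (source TEXT, line TEXT)")

# rglob walks the tree recursively, so new hourly files are picked up on the
# next run as long as they match the rain_* naming pattern.
for path in sorted(DATA_DIR.rglob("rain_*")):
    with path.open() as fh:
        for line in fh:
            conn.execute(
                "INSERT INTO rain (source, line) VALUES (?, ?)",
                (path.name, line.rstrip("\n")),
            )

conn.commit()
conn.close()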

Automate directory creation in Windows 7

I have been tasked with restructuring the directory of files relating to employees. As it is now, each employee has their own folder and all the files are grouped into 3 subfolders, divided by year. I'd like to sort the files in each of the folders into 4 other subfolders that are organized by subject matter. Is there any way to automate the creation of folders and transferring of files into these folders?
If this is not sufficient information about my issue, please say so and I will attempt to provide a more accurate explanation.
You could use PowerShell or any number of scripting languages/tools (Perl, Python). The trick may be knowing which target folder each of the files should go into. If you can determine that from the name of the file or the file type it will be trivial, but if there is some other criterion it may be harder.
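To make the "trivial" case concrete, here is a rough Python sketch that sorts each employee's per-year folders into subject subfolders, creating them as needed. The root path, the keyword-to-subject map and the folder names are all hypothetical placeholders; the real criterion has to come from your data, and the same logic translates directly to PowerShell:

#!/usr/bin/env python3
"""Sketch: sort the files in each employee/year folder into subject subfolders.
The root path and the keyword-to-subject mapping are hypothetical."""

import shutil
from pathlib import Path

ROOT = Path(r"C:\EmployeeFiles")        # placeholder root folder

# Hypothetical rule: route by a keyword in the file name, otherwise "Other".
SUBJECTS = {
    "payroll": "Payroll",
    "review": "Reviews",
    "training": "Training",
}
DEFAULT = "Other"

for employee in ROOT.iterdir():
    if not employee.is_dir():
        continue
    for year in employee.iterdir():     # the existing per-year subfolders
        if not year.is_dir():
            continue
        for f in list(year.iterdir()):
            if not f.is_file():
                continue
            subject = next(
                (folder for key, folder in SUBJECTS.items() if key in f.name.lower()),
                DEFAULT,
            )
            target = year / subject
            target.mkdir(exist_ok=True)  # create the subject folder on demand
            shutil.move(str(f), str(target / f.name))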

VBScript - move files older than 180 days (by modified date) to another directory

I would like to know if there is a VBScript which will move files from a specific location and its subfolders to another location based on their modified date, keeping the original directory structure in the new location.
The results should be saved in a .txt file.
Thanks in advance.
This former question here on SO
VBScript - copy files modified in last 24 hours
is a sample you can start from. If you have any difficulties adapting it to your needs, come back and ask again.
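If a language other than VBScript is also acceptable, here is a rough sketch of the same logic in Python: it walks the source tree, moves every file whose modified date is more than 180 days old, recreates the original directory structure at the destination, and appends each move to a .txt log. All paths are placeholders:

#!/usr/bin/env python3
"""Sketch: move files not modified in the last 180 days to another location,
keeping the directory structure and logging every move to a .txt file.
Source, destination and log paths are placeholders."""

import shutil
import time
from pathlib import Path

SRC = Path(r"C:\Data")                      # placeholder source location
DST = Path(r"D:\Archive")                   # placeholder destination
LOG = DST / "moved_files.txt"               # the results are logged here
CUTOFF = time.time() - 180 * 24 * 60 * 60   # 180 days ago, in seconds

DST.mkdir(parents=True, exist_ok=True)
with LOG.open("a") as log:
    for f in SRC.rglob("*"):
        if not f.is_file():
            continue
        if f.stat().st_mtime < CUTOFF:              # modified more than 180 days ago
            target = DST / f.relative_to(SRC)       # keep the original structure
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.move(str(f), str(target))
            log.write(f"{f} -> {target}\n")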
