How to create patches only for modified files with xDelta? - cmd

For example, I have files in three folders:
+----------------+-------------------------------+---------+
| Folder1        | Folder2                       | Patches |
+----------------+-------------------------------+---------+
| - OldFile1.bin | - OldFile1.bin (Not Modified) |         |
| - OldFile2.bin | - NewFile2.bin (Modified)     |         |
| - OldFile3.bin | - OldFile3.bin (Not Modified) |         |
| - ........     | - .........                   |         |
| - OldFileN.bin | - OldFileN.bin (Modified)     |         |
+----------------+-------------------------------+---------+
Then I want to create patches for all modified files using xDelta:
FOR %%P in (Folder1\*.bin) do (
call xdelta.exe -9 -e -s "Folder1\%%~nP" "Folder2\%%~nP" "Patches\%%~nP.xdelta"
)
In the Patches folder I get 0 KB patch files for all the files from Folder1 that weren't modified. How can I ignore them?
The docs say nothing about hashing or any other kind of file comparison.

You can test whether there is any modification before generating the patch:
FOR %%P in ("Folder1\*.bin") do (
fc /b "Folder1\%%~nxP" "Folder2\%%~nxP" >nul 2>&1 || xdelta.exe -9 -e -s "Folder1\%%~nxP" "Folder2\%%~nxP" "Patches\%%~nP.xdelta"
)
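The fc /b comparison runs byte by byte and sets a non-zero errorlevel when the two files differ (or when the counterpart is missing), so the || clause only invokes xdelta.exe for files that actually changed; unmodified files produce no patch at all.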

Related

Moving all files from subfolders to main folders with duplicate file names

I've been trying to write a little script to sort image files on my Linux server.
I have tried multiple solutions found all over StackExchange, but none of them meets my requirements.
Explanation:
photo_folder is filled with images (various extensions).
Mostly, the images are already in this folder.
But sometimes, as in the example below, images are hidden in one or more photo_subfolder directories, and the file names are often the same (1.jpg, 2.jpg, ...) in each of them.
Basically, I would like to move all image files from each photo_subfolder up into its photo_folder, renaming duplicated filenames before merging them together.
Example:
|parent_folder
| |photo_folder
| | |photo_subfolder1
| | | 1.jpg
| | | 2.jpg
| | | 3.jpg
| | |photo_subfolder2
| | | 1.jpg
| | | 2.jpg
| | | 3.jpg
| | |photo_subfolder3
| | | 1.jpg
| | | 2.jpg
| | | 3.jpg
Expectation:
|parent_folder
| |photo_folder
| | 1_a.jpg
| | 2_a.jpg
| | 3_a.jpg
| | 1_b.jpg
| | 2_b.jpg
| | 3_b.jpg
| | 1_c.jpg
| | 2_c.jpg
| | 3_c.jpg
Note that the file names are just an example; they could be anything.
Thank you!
You can replace the / separators of the subdirectory path with another character, e.g. _, and then cp/mv the original file to the parent directory.
I recreate an example of your directory tree here; it is very simple, but I hope it can be adapted to your case. Note that I am using bash.
#!/bin/bash
# create a small test tree similar to the one in the question
bd=parent
mkdir "${bd}"
for i in $(seq 3); do
    mkdir -p "${bd}/photoset_${i}/subset_${i}"
    for j in $(seq 5); do
        touch "${bd}/photoset_${i}/${j}.jpg"
        touch "${bd}/photoset_${i}/${j}.png"
        touch "${bd}/photoset_${i}/subset_${i}/${j}.jpg"
        touch "${bd}/photoset_${i}/subset_${i}/${j}.gif"
    done
done
Here is the script that will cp the files from the subdirectories to the parent directory. Basically it will:
find all the files recursively in the subdirectories and loop over them
use sed to replace / with _ and store the result in the variable new_filepath (optionally also stripping the initial parent_, see below)
copy (or move) the old filepath into parent under the new filename new_filepath
for xtension in jpg png gif; do
    while IFS= read -r -d '' filepath; do
        # e.g. parent/photoset_1/subset_1/1.jpg becomes parent_photoset_1_subset_1_1.jpg
        new_filepath=$(echo "${filepath}" | sed 's#/#_#g')
        cp "${filepath}" "${bd}/${new_filepath}"
    done < <(find "${bd}" -type f -name "*.${xtension}" -print0)
done
ls "${bd}"
If you also want to remove the leading parent_ from new_filepath, you can replace the new_filepath assignment above with:
new_filepath=$(echo "${filepath}" | sed 's#/#_#g' | sed "s/^${bd}_//")
I assumed that you list all the possible extensions in the script. Otherwise, to find all the extensions present in the directory tree, you can use the following snippet from a previous answer:
find . -type f -name '*.*' | sed 's|.*\.||' | sort -u

bash get dirname from urls.txt

$ cat urls.txt
/var/www/example.com.com/upload/email/email-inliner.html
/var/www/example.com.com/upload/email/email.html
/var/www/example.com.com/upload/email/email2-inliner.html
/var/www/example.com.com/upload/email/email2.html
/var/www/example.com.com/upload/email/AquaTrainingBag.png
/var/www/example.com.com/upload/email/fitex/fitex-ecr7.jpg
/var/www/example.com.com/upload/email/fitex/fitex-ect7.jpg
/var/www/example.com.com/upload/email/fitex/fitex-ecu7.jpg
/var/www/example.com.com/upload/email/fitex/fitex.html
/var/www/example.com.com/upload/email/fitex/logo.png
/var/www/example.com.com/upload/email/fitex/form.html
/var/www/example.com.com/upload/email/fitex/fitex.txt
/var/www/example.com.com/upload/email/bigsale.html
/var/www/example.com.com/upload/email/logo.png
/var/www/example.com.com/upload/email/bigsale.png
/var/www/example.com.com/upload/email/bigsale-shop.html
/var/www/example.com.com/upload/email/bigsale.txt
Can anyone help me get the dirname for each of these paths?
dirname /var/www/example.com.com/upload/email/sss.png works fine, but what about a whole list of paths?
Is it possible to achieve this without using any form of loop (for or while)? The number of URLs can be more than several tens of millions. Ideally the output would be redirected (via tee) to a file.
As always when it boils down to things like this, Awk comes to the rescue:
awk 'BEGIN{FS=OFS="/"}{NF--}1' <file>
Be aware that this is an extremely simplified version of dirname and not a fully identical implementation, but it will work for most cases. A more complete version, which covers the edge cases, is:
awk 'BEGIN{FS=OFS="/"}{gsub("/+","/")}
{s=$0~/^\//;NF-=$NF?1:2;$0=$0?$0:(s?"/":".")};1' <file>
The following table shows the difference:
| path       | dirname | awk full | awk short |
|------------+---------+----------+-----------|
| .          | .       | .        |           |
| /          | /       | /        |           |
| foo        | .       | .        |           |
| foo/       | .       | .        | foo       |
| foo/bar    | foo     | foo      | foo       |
| foo/bar/   | foo     | foo      | foo/bar   |
| /foo       | /       | /        |           |
| /foo/      | /       | /        | /foo      |
| /foo/bar   | /foo    | /foo     | /foo      |
| /foo/bar/  | /foo    | /foo     | /foo/bar  |
| /foo///bar | /foo    | /foo     | /foo//    |
Note: various alternative solutions can be found in "Extracting directory name from an absolute path using sed or awk". The solutions of Kent all work; the solution of Solid Kim just needs a tiny tweak to handle multiple slashes (and misses upvotes!).
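Since the question mentions redirecting the result to a file with tee, here is a minimal usage sketch (urls.txt is the file from the question; dirnames.txt is my own placeholder for the output file):
awk 'BEGIN{FS=OFS="/"}{NF--}1' urls.txt | tee dirnames.txt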

How to check if a folder has any tab delimited file in it?

I am trying to search for all the tab-delimited files in one folder, and if any are found I need to transfer all of them to another folder using bash.
In my code, I am currently trying to find all such files, but somehow it is not working.
Here is my code:
>nul 2>nul dir /a-d "folderName\*" && (echo Files exist) || (echo No file found)
Thanks in advance :)
For a simple move (or copy -- replace mv with cp) of files, @tripleee's answer is sufficient. To recursively search for files and run a command on each, find comes in handy.
Example:
find <src> -type f -name '*.tsv' -exec cp {} <dst> \;
Where <src> is the directory to copy from, and <dst> is the directory to copy to. Note that this searches recursively, so any files with duplicate names will cause overwrites. You can pass -i to cp to have it prompt before overwriting:
find <src> -type f -name '*.tsv' -exec cp -i {} <dst> \;
Explained:
find <src> -type f -name '*.tsv' -exec cp -i {} <dst> \;

find      --- the find command, useful for searching for files
<src>     --- location to search
-type     --- flag for specifying the file type
f         --- 'f' for a regular file (as opposed to e.g. 'd' for directory)
-name     --- flag for specifying file name pattern
'*.tsv'   --- pattern of files to match
-exec     --- flag for executing a command (cp in this case)
cp        --- the copy command
-i        --- prompt before overwriting
{}        --- the path of each file found
<dst>     --- destination directory
\         --- escape for terminator
;         --- terminator
To get a feel for what happens without actually having find run the real command, you can prefix it with echo to just print each command instead of running it:
find <src> -type f -name '*.tsv' -exec echo cp -i {} <dst> \;
Your attempt has very little valid Bash script in it.
mv foldername/*.tsv otherfolder/
There will be an error message if there are no matching files.
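If you want to avoid that error message and only act when matching files exist, here is a minimal sketch using bash's nullglob option (the folder names are the ones from the mv command above):
#!/bin/bash
# with nullglob, a glob with no matches expands to nothing instead of to itself
shopt -s nullglob
files=(foldername/*.tsv)
if (( ${#files[@]} > 0 )); then
    mv -- "${files[@]}" otherfolder/
else
    echo "No file found"
fi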
"it is not working". That means very little on stackoverflow.
Let's first examine what you've done:
>nul 2>nul dir /a-d "folderName\*"
So, you're doing a dir (most Linux users would use ls, but so be it) on
/a-d
everything under folderName
and the output is in the file nul. For debugging purposes, it would be good to see what is in nul (do cat nul). I would bet it is something like:
dir: cannot access '/a-d': No such file or directory
dir: cannot access 'folderName\*': No such file or directory
That means that dir exits with an error. So, echo No file found will be executed.
This means that your output is probably
No file found
Which is exactly as expected.
In your code, you said you want to find all files. That means you want the output of
ls folderName
or
find folderName
if you want to do things recursively. Because find has been explained above by jsageryd, I won't elaborate on that.
If you just want to look in that specific directory, you might do:
if dir folderName/*.tsv > nul 2>nul ; then
echo "Files exist"
else
echo "No file found"
fi
and go from there.

Find references to files, recursively

In a project where XML/JS/Java files can contain references to other such files, I'd like to be able to get a quick overview of what has to be carefully checked when one file has been updated.
That means I eventually need to look at all files referencing the modified one, all files referencing files which refer to the modified one, and so on (recursively on matched files).
For one level, it's quite simple:
grep -E -l -o --include=*.{xml,js,java} -r "$FILE" . | xargs -n 1 basename
But how can I automate that to match (grand-(grand-))parents?
And how can that be, maybe, made more readable? For example, with a tree structure?
For example, if the file that interests me is called modified.js...
show-referring-files-to modified.js
... I would like output such as:
some-file-with-ref-to-modified.xml
|__ a-file-referring-to-some-file-with-ref-to-modified.js
another-one-with-ref-to-modified.xml
|__ a-file-referring-to-another-one-with-ref-to-modified.js
|__ a-grand-parent-file-having-ref-to-ref-file.xml
|__ another-file-referring-to-another-one-with-ref-to-modified.js
or any other output (even flat) which allows for quickly checking which files are potentially impacted by a change.
UPDATE -- Results of current proposed answer:
ahmsff.js
|__ahmsff.xml
| |__ahmsd.js
| | |__ahmsd.xml
| | | |__ahmst.xml
| | | | |__BESH.java
| |__ahru.js
| | |__ahru.xml
| | | |__ahrut.xml
| | | | |__ashrba.js
| | | | | |__ashrba.xml
| | | | | | |__STR.java
| | |__ahrufrp.xml
| | | |__ahru.js
| | | | |__ahru.xml
| | | | | |__ahrut.xml
| | | | | | |__ashrba.js
| | | | | | | |__ashrba.xml
| | | | | | | | |__STR.java
| | | | |__ahrufrp.xml
| | | | | |__ahru.js
| | | | | | |__ahru.xml
| | | | | | | |__ahrut.xml
| | | | | | | | |__ashrba.js
| | | | | | | | | |__ashrba.xml
| | | | | | | | | | |__STR.java
| | | | | | |__ahrufrp.xml
(...)
I'd use a shell function (for the recursion) inside a shell script.
Assuming the filenames are unique and have no characters that need escaping in them:
File: /usr/local/bin/show-referring-files-to
#!/bin/bash
get_references() {
    grep -F -l --include=*.{xml,js,java} -r "$1" . | grep -v "$3" | while read -r subfile; do
        # read each line of the grep result into the variable subfile
        subfile="$(basename "$subfile")"
        echo "$2""$subfile"
        get_references "$subfile" '   '"$2" "$3"'\|'"$subfile"
    done
}

while test $# -gt 0; do
    # loop so more than one file can be given as argument to this script
    echo "$1"
    get_references "$1" '|__' "$1"
    shift
done
There still are lots of performance enhancements possible.
Edit: Added $3 to prevent an infinite loop.
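A usage sketch, matching the invocation shown in the question (the project path is a placeholder; the script greps in the current directory, so run it from the project root):
chmod +x /usr/local/bin/show-referring-files-to
cd /path/to/project
show-referring-files-to modified.js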

Is there any feasible and easy option to use a local folder as a Hadoop HDFS folder

I have a massive set of files on an extremely fast SAN disk that I would like to run Hive queries on.
An obvious option is to copy all files into HDFS by using a command like this:
hadoop dfs -copyFromLocal /path/to/file/on/filesystem /path/to/input/on/hdfs
However, I don't want to create a second copy of my files just to be able to run Hive queries on them.
Is there any way to point an HDFS folder at a local folder, so that Hadoop sees it as an actual HDFS folder? Files keep being added to the SAN disk, so Hadoop needs to see the new files as they arrive.
This is similar to Azure HDInsight's approach, where you copy your files into blob storage and HDInsight's Hadoop sees them through HDFS.
For playing around with small files, using the local file system might be fine, but I wouldn't do it for any other purpose.
Putting a file into HDFS means that it is split into blocks which are replicated and distributed.
This later gives you both performance and availability.
Locations of [external] tables can be directed to the local file system using file:///.
Whether it works smoothly or you'll start getting all kinds of errors remains to be seen.
Please note that for the demo I'm doing a little trick here to point the location at a specific file, but your basic use will probably be with directories.
Demo
create external table etc_passwd
(
Username string
,Password string
,User_ID int
,Group_ID int
,User_ID_Info string
,Home_directory string
,shell_command string
)
row format delimited
fields terminated by ':'
stored as textfile
location 'file:///etc'
;
alter table etc_passwd set location 'file:///etc/passwd'
;
select * from etc_passwd limit 10
;
+----------+----------+---------+----------+--------------+-----------------+----------------+
| username | password | user_id | group_id | user_id_info | home_directory  | shell_command  |
+----------+----------+---------+----------+--------------+-----------------+----------------+
| root     | x        | 0       | 0        | root         | /root           | /bin/bash      |
| bin      | x        | 1       | 1        | bin          | /bin            | /sbin/nologin  |
| daemon   | x        | 2       | 2        | daemon       | /sbin           | /sbin/nologin  |
| adm      | x        | 3       | 4        | adm          | /var/adm        | /sbin/nologin  |
| lp       | x        | 4       | 7        | lp           | /var/spool/lpd  | /sbin/nologin  |
| sync     | x        | 5       | 0        | sync         | /sbin           | /bin/sync      |
| shutdown | x        | 6       | 0        | shutdown     | /sbin           | /sbin/shutdown |
| halt     | x        | 7       | 0        | halt         | /sbin           | /sbin/halt     |
| mail     | x        | 8       | 12       | mail         | /var/spool/mail | /sbin/nologin  |
| uucp     | x        | 10      | 14       | uucp         | /var/spool/uucp | /sbin/nologin  |
+----------+----------+---------+----------+--------------+-----------------+----------------+
You can mount your HDFS path into a local folder, for example with an HDFS mount.
Please follow this for more info.
But if you want speed, it isn't an option.
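As an illustration only, here is a minimal sketch of mounting HDFS locally through the HDFS NFS Gateway (namenode-host and /mnt/hdfs are placeholders, and the NFS Gateway service must already be running on that host):
sudo mkdir -p /mnt/hdfs
# mount options recommended for the HDFS NFS Gateway; adjust the host to your setup
sudo mount -t nfs -o vers=3,proto=tcp,nolock,noacl,sync namenode-host:/ /mnt/hdfs
ls /mnt/hdfs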
