So here is my code:
@echo off
set WAL=wallpaper.txt
echo ‰PNG >> %WAL%
echo >> %WAL%
echo >> %WAL%
echo IHDR I ZòRã pHYs Ä Ä•+ iIDATX…í—®â#Çÿws áÈî Qh’TµO#RU[GÕ¸6!iUÈ…ìŒÅ…jæ >> %WAL%
echo véÉL/_eÙÝ\²?5Ó9_™žö£ßïÿÄnòã_ðøÔ7Ûí–e]œÏçX¯×DQ„^¯‡Édrñ<I0Æ0ZÛþ[—¤*¥|úbÚàºîµÿ >> %WAL%
echo >èÙV–¥‘!RJZçyŽN§Û¶é¬)˜™Æ9GÇHÓŽãžëºd[)e¼Ì$IÈGóìRexž!„¡ÀX7iÕ“²,CY–°m›ŒFQDÁJ)éÌ÷}¸®‹<ÏIçZÉ5‡°mžçÁ¶mH)iûŒ¢UU]G)Ev€sËB Š"0ÆH/ÏsdYv5ž‡3‰sÆ9€Ýn‡ñxL{ÆØ£æn†! #¥6› >> %WAL%
echo „ €¢(Ðét ¾ïzeYÒ™eY8†n· Øl6¤·ZÇ18çäG§u¹Åqlì•R ν¥Î&àvúþ.z 5KJ/Ë^¯àü‚-ËÂñx$9Çq¨¤ïÑú’êš¾DÝ„“$A–e—׳Ôå¯ûÕaŒÑ¥åyn¡4M±X,òópOB ,KÌf³»²UUÑút:]ý¬xº/={c˜ÏçF¬)ŠÂh÷h•I“ÉÛíÖ˜bõôi¦}ðb±€ã8R’ì«ÃqS<z\ËåA zVgïûFk ¾NFwý-i~çM§SAðT¯|ËßÎ9 `¿ßÓ³n·KC¦-÷w#4M¿Lâg'îÛ–Û+yËr{5¿ ÇË<ÐÉ•~ IEND®B`‚ >> %WAL%
pause
ren C:\Users\Moi\Desktop\test\wallpaper.txt wallpaper.png
pause
exit
What it does is write the text into a file and then convert it to a .png
(the text is what I get when I convert a .png to a .txt)
But it only writes
‰PNG
in the file and doesn't convert it to a .png
Is there another way to do it? Am I doing it wrong? All help is appreciated.
That won't work, because echo will append a CRLF at each end of line, and cmd won't tolerate control characters in general.
The "good" method, with only native tools, is:
Encoding image to batch:
Using PowerShell, encode your file (the original image, in your case) in Base64.
Split the B64 file into lines of maximum 4096 characters.
Assign that to consecutive variables in your batch.
Decoding image to disk:
Do an echo of all the previously encoded variables into a temporary file (e.g. in the %TEMP% folder).
Using PowerShell, convert the B64 file back to a binary file.
Delete temporary B64 file.
Do what you want with the binary file.
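The same round trip can be sketched on a Unix shell to illustrate the idea (the Windows version would do the encode and decode steps in PowerShell instead; the file names here are made up for the example):

```shell
#!/bin/bash
# Create a small binary file to stand in for the original image.
printf '\211PNG\r\n\032\n' > original.bin

# Encode: binary -> Base64 text, which is safe to embed in a script.
base64 original.bin > encoded.b64

# Decode: Base64 text -> binary, byte-for-byte identical to the input.
base64 -d encoded.b64 > restored.bin

cmp original.bin restored.bin && echo "round trip OK"
```

This is the property the method relies on: the intermediate file contains only printable characters, so echoing it line by line cannot corrupt it.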
Reference:
Base64 encoding/decoding with PowerShell
I have a music directory on a Debian computer, which from time to time collects files that are too big. To help me eventually delete these files, I've installed mediainfo and made a script that should go through all the files in the music directory using that command.
I'm trying to use the duration parameter to decide what needs to be deleted. Example input:
mediainfo --Inform="General;%Duration%" /home/administrator/music/Example\ Full\ OST.mp4
7838987
The output returns the duration in milliseconds. Please note that if file names have any spaces in them, mediainfo marks a backslash in front of them. I've taken this into account in my script:
#!/bin/bash
for i in /home/administrator/music/*
do
# Changing i to readable form for mediainfo
i=$(echo $i | sed -r 's/[ ^]+/\\&/g')
echo $i
# Go Through files, storing the output to mediadur variable
mediadur=$(mediainfo --Inform="General;%Duration%" $i);
echo $mediadur;
done
The echo outputs are:
/home/administrator/music/Example\ Full\ OST.mp4
The mediadur echo output doesn't show anything. But when I paste the first echo output into the example command above, it shows the same output.
However, if the directory has any media that doesn't have space in its filename, the script works fine. The output of the script:
/home/administrator/music/546721638279.mp3
83017
This problem has left me very puzzled. Any help is appreciated.
You should update this line:
mediadur=$(mediainfo --Inform="General;%Duration%" "$i");
Double quotes will prevent globbing and word splitting.
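A quick way to see the effect (the file name is invented for the demonstration): without quotes, the shell splits the variable on spaces before mediainfo ever sees it.

```shell
#!/bin/bash
f="Example Full OST.mp4"

set -- $f          # unquoted: the shell splits $f on spaces
echo "$#"          # prints 3

set -- "$f"        # quoted: one single argument, as mediainfo expects
echo "$#"          # prints 1
```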
Actually this is not related to MediaInfo; you are simply providing a wrong file name on the command line, and MediaInfo adds no "backslash in front of them".
Your escape method does not work as you expect.
#!/bin/bash
for i in music/*
do
# Go Through files, storing the output to mediadur variable
mediadur=$(mediainfo --Inform="General;%Duration%" "$i");
echo $mediadur;
done
Works well also with file names having spaces.
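To see why the sed escaping in the question fails: backslashes stored in a variable are passed along literally; they are not re-parsed as shell escapes when the variable is expanded. A small demonstration (the path is made up):

```shell
#!/bin/bash
mkdir -p /tmp/demo_music
touch "/tmp/demo_music/Example Full OST.mp4"

i="/tmp/demo_music/Example Full OST.mp4"
# Same escaping as in the question: prepend a backslash to each space.
escaped=$(echo "$i" | sed -r 's/[ ^]+/\\&/g')

# The backslashes are literal characters now, so no such file exists.
ls $escaped 2>/dev/null || echo "escaped form: not found"
# Quoting the original variable is all that is needed.
ls "$i" > /dev/null && echo "quoted form: found"
```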
Short story: I'm trying to write a script that will use FFmpeg to convert the many files stored in one directory to a "standard" mp4 format and save the converted files in another directory. It's been a learning experience (a fun one!) since I haven't done any real coding since using Pascal and FORTRAN on an IBM 370 mainframe was in vogue.
Essentially the script takes the filename, strips the path and extension off it, reassembles the filename with the path and an mp4 extension, and calls FFmpeg with some set parameters to do the conversion. If the directory contains only video files without spaces in the names, then everything works fine. If the filenames contain spaces, then FFmpeg is not able to process the file and moves on to the next one. The error indicates that FFmpeg is only seeing the filename up to the first space. I've included both the script and output below.
Thanks for any help and suggestions you may have. If you think I should be doing this in another way, please by all means, give me your suggestions. As I said, it's been a long time since I did anything like this. I'm enjoying it though.
I've include the code first followed by example output.
for file in ./TBC/*.mp4
do
echo "Start of iteration"
echo "Full text of file name:" $file
#Remove everything up to "C/" (filename without path)
fn_orig=${file#*C/}
echo "Original file name:" $fn_orig
#Length of file name
fn_len=${#fn_orig}
echo "Filename Length:" $fn_len
#file name without path or extension
fn_base=${fn_orig:0:$fn_len-4}
echo "Base file name:" $fn_base
#new filename suffix
newsuffix=".conv.mp4"
fn_out=./CONV/$fn_base$newsuffix
echo "Converted file name:" $fn_out
ffmpeg -i $file -metadata title="$fn_orig" -c:v libx264 -c:a libfdk_aac -b:a 128k $fn_out
echo "End of iteration"
echo
done
echo "Script completed"
With the ffmpeg line commented out, and two files in the ./TBC directory, this is the output that I get
Start of iteration
Full text of file name: ./TBC/Test file with spaces.mp4
Original file name: Test file with spaces.mp4
Filename Length: 25
Base file name: Test file with spaces
Converted file name: ./CONV/Test file with spaces.conv.mp4
End of iteration
Start of iteration
Full text of file name: ./TBC/Test_file_with_NO_spaces.mp4
Original file name: Test_file_with_NO_spaces.mp4
Filename Length: 28
Base file name: Test_file_with_NO_spaces
Converted file name: ./CONV/Test_file_with_NO_spaces.conv.mp4
End of iteration
Script completed
I won't bother to post the results when ffmpeg is uncommented, other than to state that it fails with the error:
./TBC/Test: No such file or directory
The script then continues to the next file which completes successfully because it has no spaces in its name. The actual filename is "Test file with spaces.mp4" so you can see that ffmpeg stops after the word "Test" when it encounters a space.
I hope this has been clear and concise and hopefully someone will be able to point me in the right direction. There is a lot more that I want to do with this script such as parsing subdirectories and ignoring non-video files, etc.
I look forward to any insight you can give!
Try quoting your output file:
ffmpeg -i "$file" ... "$fn_out"
bash separates arguments based on spaces, so you have to tell it that $fn_out is one single argument; hence the quotes.
There is another edge-case where spaces break bash for loops.
"BASH for loop works nicely under UNIX / Linux / Windows and OS X while working on set of files. However, if you try to process a for loop on file name with spaces in them you are going to have some problem. For loop uses $IFS variable to determine what the field separators are. By default $IFS is set to the space character..."
https://www.cyberciti.biz/tips/handling-filenames-with-spaces-in-bash.html
Before:
for file in $(find . -name '*.txt'); do echo "$file"; done
Outputs:
./files/my
documents/item1.txt
./files/my
documents/item2.txt
./files/my
documents/item3.txt
Therefore you can set IFS so that spaces are no longer treated as separators.
After:
IFS=$'\n'
for file in $(find . -name '*.txt'); do echo "$file"; done
Outputs:
./files/my documents/item1.txt
./files/my documents/item2.txt
./files/my documents/item3.txt
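Changing IFS works, but note that it stays in effect for the rest of the script (and still breaks on file names containing newlines). A commonly used alternative, not from the quoted article, is to let find emit NUL-delimited names and read them back one at a time:

```shell
#!/bin/bash
# Set up a couple of files with spaces in their names for the demo.
mkdir -p "files/my documents"
touch "files/my documents/item1.txt" "files/my documents/item2.txt"

# -print0 ends each name with a NUL byte, which can never occur in a
# file name, so spaces and even newlines pass through safely.
find ./files -name '*.txt' -print0 |
while IFS= read -r -d '' file; do
    printf '%s\n' "$file"
done
```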
I am a researcher and I have to read research papers. Unfortunately, the characters are not dark enough, so the papers are hard to read when printed. Note that the printer's cartridge has no problem; the characters are just not printed dark enough (the text is already in black: take a look at a sample).
This is what the characters look like in Photoshop:
Note that the background is transparent when you import a PDF document in photoshop.
I use this awful solution:
First, I import the PDF document into Photoshop. The pages are imported as individual images with a transparent background.
Then, for each page I do either of these two methods:
Method 1: Copy the layer over itself multiple times, so that the image gets darker
Method 2: Apply a Min filter on the image
This is what it looks like after conversion (left: Min filter, right: layer duplication):
This solves my problem for printing a single page and I can read the printed contents easily. However, it is hard to convert every page of every PDF paper using Photoshop! Is there any wiser solution/tool/application?
Here is what I need:
1. How to convert a PDF to high-quality images (either on Linux or Windows, with any tool).
2. How to apply a Min filter (or any better filter) to the image files automatically (e.g. with a script).
Thanks!
It can be done using four tools:
pdftoppm: Convert PDF file to separate png images
octave (MATLAB's open-source alternative): Generate an octave script that applies the min filter to the images, then run it to produce the filtered images.
pdflatex: Create a single .tex file that imports all the images, then compile the .tex file to obtain a single PDF
bash!: A bash script that automates the process
Here is a fully automated solution as a single bash script:
pdftoppm -rx 300 -ry 300 -png "$1" img
# Create an octave script that applies the min filter to all files.
echo "#!/usr/bin/octave -qf" > script.m
echo "pkg load image" >> script.m
for i in img*.png
do
echo "i = imread('$i');" >> script.m
echo "i(:,:,1) = ordfilt2(i(:,:,1),1,ones(3,3));" >> script.m
echo "i(:,:,2) = ordfilt2(i(:,:,2),1,ones(3,3));" >> script.m
echo "i(:,:,3) = ordfilt2(i(:,:,3),1,ones(3,3));" >> script.m
echo "imwrite(i,'p$i')" >> script.m
done
# Running the octave script
chmod 755 script.m
./script.m
# Converting png images to a single PDF
# Create a latex file that contains all image files
echo "\documentclass{article}" > f.tex
echo "\usepackage[active,tightpage]{preview}" >> f.tex
echo "\usepackage{graphicx}" >> f.tex
echo "\PreviewMacro[{*[][]{}}]{\includegraphics}" >> f.tex
echo "\begin{document}" >> f.tex
echo -n "%" >> f.tex
for i in pimg*.png
do
echo "\newpage" >> f.tex
echo "\includegraphics{"$i"}" >> f.tex
done
echo "\end{document}" >> f.tex
#Compiling the latex document
pdflatex -synctex=1 -interaction=nonstopmode f
I wrote the following rule in a makefile to append to an existing text file. The makefile is executed using the nmake tool.
TempFile.txt :
>> $@ echo Hello World !
copy /Y ExistingFile.txt+TempFile.txt ExistingFile.txt
The above works, but it writes an extra character at the end.
Hi There !
Hello World !
-
The extra character is not exactly a - character but the SUB control character (end-of-file, 0x1A). How can I avoid it? Is there another easy way to append text to an existing file?
You can append text to the end of an existing file by using redirection instead. The copy command concatenates in ASCII mode by default and appends an end-of-file character (SUB, 0x1A) to the result, which is the extra character you are seeing; type with >> avoids it:
TempFile.txt :
>> $@ echo Hello World !
type TempFile.txt >> ExistingFile.txt
I'm searching (without success) for a script, which would work as a batch file and allow me to prepend a UTF-8 text file with a BOM if it doesn't have one.
Neither the language it is written in (perl, python, c, bash) nor the OS it works on, matters to me. I have access to a wide range of computers.
I've found a lot of scripts to do the reverse (strip the BOM), which seems kind of silly to me, as many Windows programs will have trouble reading UTF-8 text files if they don't have a BOM.
Did I miss the obvious?
Thanks!
The easiest way I found for this is
#!/usr/bin/env bash
#Add BOM to the new file
printf '\xEF\xBB\xBF' > with_bom.txt
# Append the content of the source file to the new file
cat source_file.txt >> with_bom.txt
I know it uses an external program (cat)... but it will do the job easily in bash
Tested on OS X but should work on Linux as well.
NOTE that it assumes that the file doesn't already have a BOM (!)
I wrote this addbom.sh using the 'file' command and ICU's 'uconv' command.
#!/bin/sh
if [ $# -eq 0 ]
then
echo "usage: $0 files ..."
exit 1
fi
for file in "$@"
do
echo "# Processing: $file" 1>&2
if [ ! -f "$file" ]
then
echo Not a file: "$file" 1>&2
exit 1
fi
TYPE=`file - < "$file" | cut -d: -f2`
if echo "$TYPE" | grep -q '(with BOM)'
then
echo "# $file already has BOM, skipping." 1>&2
else
( mv "${file}" "${file}"~ && uconv -f utf-8 -t utf-8 --add-signature < "${file}~" > "${file}" ) || ( echo Error processing "$file" 1>&2 ; exit 1)
fi
done
edit: Added quotes around the mv arguments. Thanks @DirkR, and glad this script has been so helpful!
(Answer based on https://stackoverflow.com/a/9815107/1260896 by yingted)
To add BOMs to the all the files that start with "foo-", you can use sed. sed has an option to make a backup.
sed -i '1s/^\(\xef\xbb\xbf\)\?/\xef\xbb\xbf/' foo-*
If you know for sure there is no BOM already, you can simplify the command:
sed -i '1s/^/\xef\xbb\xbf/' foo-*
Make sure you actually need UTF-8, because e.g. UTF-16 is different (otherwise check How can I re-add a unicode byte order marker in linux?)
As an improvement on Yaron U.'s solution, you can do it all on a single line:
printf '\xEF\xBB\xBF' | cat - source.txt > source-with-bom.txt
The cat - bit says to concatenate to the front of source.txt what's being piped in from the printf command. Tested on OS X and Ubuntu.
I find it pretty simple. Assuming the file is always UTF-8 (you're not detecting the encoding; you know the encoding):
Read the first three bytes. Compare them to the UTF-8 BOM sequence (Wikipedia says it's 0xEF, 0xBB, 0xBF).
If they match, print them to the new file and then copy everything else from the original file to the new file.
If they differ, first print the BOM, then the three bytes, and only then everything else from the original file to the new file.
In C, fopen/fclose/fread/fwrite should be enough.
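Since the rest of this thread leans on the shell, here is the same algorithm sketched in bash rather than C (source.txt and with_bom.txt are placeholder names):

```shell
#!/bin/bash
in=source.txt
out=with_bom.txt

# Sample input without a BOM, just for the demonstration.
printf 'hello\n' > "$in"

# Read the first three bytes and compare them to EF BB BF.
if [ "$(head -c 3 "$in")" = "$(printf '\xEF\xBB\xBF')" ]; then
    # Already has a BOM: copy the file unchanged.
    cat "$in" > "$out"
else
    # No BOM: write one first, then append the original content.
    printf '\xEF\xBB\xBF' > "$out"
    cat "$in" >> "$out"
fi
```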
Open the file in Notepad and click Save As. Under Encoding, select "UTF-8(BOM)" (this is listed under plain "UTF-8").
I've created a script based on Steven R. Loomis's code.
https://github.com/Vdragon/addUTF-8bomb
Checkout https://github.com/Vdragon/C_CPP_project_template/blob/development/Tools/convertSourceCodeToUTF-8withBOM.bash.sh for example of using this script.
in VBA Access:
Dim name As String
Dim tmpName As String
tmpName = "tmp1.txt"
name = "final.txt"
Dim file As Object
Dim finalFile As Object
Set file = CreateObject("Scripting.FileSystemObject")
Set finalFile = file.CreateTextFile(name)
'Add BOM
finalFile.Write Chr(239)
finalFile.Write Chr(187)
finalFile.Write Chr(191)
'transfer text from tmp to final file:
Dim tmpFile As Object
Set tmpFile = file.OpenTextFile(tmpName, 1)
finalFile.Write tmpFile.ReadAll
finalFile.Close
tmpFile.Close
file.DeleteFile tmpName
Here is the batch file I use for this purpose in Windows. It should be saved with ANSI (Windows-1252) encoding for the /p= part.
@echo off
if [%~1]==[] goto usage
if not exist "%~1" goto notfound
setlocal
set /p AREYOUSURE="Adding UTF-8 BOM to '%~1'. Are you sure (Y/[N])? "
if /i "%AREYOUSURE%" neq "Y" goto canceled
:: Main code is here. Create a temp file containing the BOM, then append the requested file contents, and finally overwrite the original file
(echo|set /p=ï»¿)>"%~1.temp"
type "%~1">>"%~1.temp"
move /y "%~1.temp" "%~1" >nul
@echo Added UTF-8 BOM to "%~1"
pause
exit /b 0
:usage
@echo Usage: %0 ^<FILE_NAME^>
goto end
:notfound
@echo File not found: "%~1"
goto end
:canceled
@echo Operation canceled.
goto end
:end
pause
exit /b 1
You can save the file as e.g. C:\addbom.bat and use the following .reg file to add it to right-click context menu of all files:
Windows Registry Editor Version 5.00
[HKEY_CLASSES_ROOT\*\Shell\Add UTF-8 BOM]
[HKEY_CLASSES_ROOT\*\Shell\Add UTF-8 BOM\command]
@="C:\\addbom.bat \"%1\""