Get string length - installation

Is there a way to find out the length of a string in NSIS?
I am trying to test if a file is empty (has no contents). One way is to read the file and store the contents in a string (called contentStr), then see how long that contentStr string is. If its length is > 0, then the file is not empty.
The other method is to check if contentStr == "", but as you can see below it doesn't work: an empty file never returns 1 when it should:
!macro IsFileEmpty fName res
!insertmacro ReadFile "${fName}" ${res}
StrCmp ${res} "" +1 +2
IntOp ${res} 1 + 0
IntOp ${res} 0 + 0
!macroend

To get a string length, use StrLen:
StrLen $0 "123456" # ==> $0 = 6
If you want to get the file size before trying to read it, look at the technique pointed out by Francisco in another answer.

Have you checked that your file size is really 0 bytes? Maybe your file has spaces or newline characters... In those cases you'll need to StrRep or Trim your string.
If you just want to know the file size, you can use this macro and function:
!macro FileSize VAR FILE
Push "${FILE}"
Call FileSizeNew
Pop ${VAR}
!macroend
Function FileSizeNew
Exch $0 ; $0 = file path (the caller's $0 is saved on the stack)
Push $1
FileOpen $1 $0 "r"
FileSeek $1 0 END $0 ; seek to the end; $0 receives the offset, i.e. the size in bytes
FileClose $1
Pop $1
Exch $0 ; return the size on the stack and restore $0
FunctionEnd
More info here:
http://nsis.sourceforge.net/Getting_File_Size

It is a little bit weird to do it this way and it only partially works:
...
StrCmp ${res} "" 0 +2 ; +1 and 0 is the same, jumps to next instruction
StrCpy ${res} 1 ; No need to do math here
IntOp ${res} ${res} + 0 ; NaN + 0 = 0 but what if you read a number from the file?
If the file might start with a number you need to jump like zbynour suggested:
...
StrCmp ${res} "" 0 +3
StrCpy ${res} 1
Goto +2
StrCpy ${res} 0
If you flip the test you can get what you want with the same number of instructions:
!macro IsFileNOTEmpty fName res
!insertmacro ReadFile "${fName}" ${res}
StrCmp ${res} "" +2 0
StrCpy ${res} 1 ; Or IntOp ${res} ${res} | 1 if you really wanted to do extra math ;)
IntOp ${res} ${res} + 0
!macroend
or even better:
!macro IsFileEmpty fName res
!insertmacro ReadFile "${fName}" ${res}
StrLen ${res} ${res} ; If this is enough depends on how you want to deal with newlines at the start of the file (\r or \n)
!macroend
...all of this code assumes you want to test whether the file starts with some text; if the file starts with a NUL (0) byte it will always report the file as empty. So if the file can have binary content, you need to use Francisco R's code that tests the actual size in bytes.

That's because the last line is always executed (when ${res} is empty the jump offset is +1, so the next instruction is not skipped).
The following code should make it work as you expect:
!macro IsFileEmpty fName res
!insertmacro ReadFile "${fName}" ${res}
StrCmp ${res} "" +1 +3 ; empty: fall through, non-empty: jump to the second IntOp
IntOp ${res} 1 + 0 ; empty -> 1
goto end
IntOp ${res} 0 + 0 ; not empty -> 0
end:
!macroend

Related

How do I compare whether two numbers differ by exactly 1?

Explaining my algorithm:
I'm trying to find out whether my current job, e.g. Write (W), is the same as my previous job. If my current job (W) is the same as my previous job (W), I then check whether there's a difference of 1 between their numbers: for example, if the previous job was W9 and my current job is either W8 or W10, I append 0 to my seek array.
I've tried almost every way I could find on the internet to compare the integers, but none of them work; I keep receiving an invalid arithmetic syntax error when trying to compare the current and previous job numbers.
Any ideas?
# Jobs
lastJob=""
currentJob=""
lastNumber=0
currentNumber=0
# Arrays
seek=()
RW=()
shift 3
# Reads final into array
while IFS= read -r line
do
Job+=($line)
done < ./sim.job
#-----------------------------------
# Single HDD Algorithm
#-----------------------------------
for (( i=0; i<=${#Job[@]}; i++ ));
do
currentString="${Job[$i]}"
currentJob=${currentString:0:1}
currentNumber=${currentString:1:3}
if [[ $currentJob == $lastJob ]]
then
if [[ $currentNumber -eq $lastNumber-1 ]] || [[ $currentNumber -eq $lastNumber+1 ]]
then
seek+=(0)
RW+=(60)
else
seek+=(5)
RW+=(60)
fi
else
seek+=(5)
RW+=(60)
fi
lastString="${Job[$i]}"
lastJob=${lastString:0:1}
lastNumber=${currentString:1:3}
done
This prints output:
#-----------------------------------
# Print Information
#-----------------------------------
for (( i=0; i<${#Job[@]}; i++ ));
do
echo -e "${Job[$i]}:${seek[$i]}:${RW[$i]}"
done
Expected Output:
R9:5:60
W9:5:60
W10:0:60
R11:0:60
R13:5:60
R18:5:60
R19:0:60
R20:0:60
R21:0:60
Actual Output:
") syntax error: invalid arithmetic operator (error token is "
R9:5:60
W9:5:60
W10::
R11::
R13::
R18::
R19::
R20::
R21::
sim.job file (Input):
R9
W9
W10
R11
R13
R18
R19
R20
R21
Rogue \r characters were found in my input file. To solve this I used these commands:
To check if \r are in the file: od -c <filename> | tr ' ' '\n' | grep '\r'
To remove rogue \r use: tr -d '\r' < filename
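A minimal sketch of guarding against this inside the script itself, stripping any carriage return as each line is read into the Job array:
# Reads the file into the array, dropping rogue \r (Windows line endings)
while IFS= read -r line
do
    line=${line//$'\r'/}
    Job+=("$line")
done < ./sim.job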
Thanks again @mark-fuso

Process code lines with in specific pattern boundary from text file separately?

I have used this script to extract all occurrences of function data between the name __libc_memalign and : from "file1" into "file2". Now file2 contains multiple (6000) groups of code in this pattern. How can I iterate through each group in "file2" and process each group?
`awk '/__libc_memalign/ {p=1;print;next} /:/ && p {p=0;print} p' file1.out >file2`
sample input
0 0xc40840 : __libc_memalign
0 0x40bac0 0x7ffe493d0d50 W
0 0x40bac2 0x7ffe493d0d48 W
0 0x40bac4 0x7ffe493d0d40 W
..
0 0xc40840 : __libc_memalign
0 0x40bac0 0x7ffe493d0d50 R
0 0x40bac2 0x7ffe493d0d48 R
0 0x40bac4 0x7ffe493d0d40 R
....
0 0xc40840 : __libc_memalign
0 0x40bab0 0x7ffe493b0d50 W
0 0x40bab2 0x7ffe493dbd48 R
0 0x40bac4 0x7ffe493d0d40 W
It's not really clear what you mean by "group" or "process" but hopefully at least this could nudge you in the right direction.
Assuming there are no empty lines in your groups, add a separator between them; then loop over sequences between empty lines. Your Awk script already seems to put an empty line when it finishes a group, so you can simply
awk '/__libc_memalign/ {p=1; print; next}
/:/ && p {p=0; print} p' file1.out |
while true; do
while read -r line; do
case $line in '') break;; esac
echo "$line"
done |
# Pipe the collected group into "process"
process
done
This is fairly clumsy, and can probably be refactored significantly. If you don't particularly need the intermediate results, maybe simply
awk '/__libc_memalign/ {
p=1; cmd = "process"; print | cmd; next}
/:/ && p { p=0; close(cmd) }
p { print | cmd }' file1.out
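If driving this from the shell feels clumsy, another option is a sketch along these lines: have awk write every group to its own numbered file and then loop over those files (process_group.sh is a placeholder for whatever per-group processing you need):
awk '/__libc_memalign/ { if (f) close(f); n++; f = "group_" n ".txt" }  # one file per group
     f { print > f }' file2
# Run the placeholder per-group processing on each group file
for g in group_*.txt; do
    ./process_group.sh "$g"
done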

Bash script, command - output to array, then print to file

I need advice on how to achieve this output:
myoutputfile.txt
Tom Hagen 1892
State: Canada
Hank Moody 1555
State: Cuba
J.Lo 156
State: France
output of mycommand:
/usr/bin/mycommand
Tom Hagen
1892
Canada
Hank Moody
1555
Cuba
J.Lo
156
France
I'm trying to achieve this with this shell script:
IFS=$'\r\n' GLOBIGNORE='*' :; names=( $(/usr/bin/mycommand) )
for name in ${names[@]}
do
#echo $name
echo ${name[0]}
#echo ${name:0}
done
Thanks
Assuming you can always rely on the command to output groups of 3 lines, one option might be
/usr/bin/mycommand |
while read name;
read year;
read state; do
echo "$name $year"
echo "State: $state"
done
An array isn't really necessary here.
One improvement could be to exit the loop if you don't get all three required lines:
while read name && read year && read state; do
# Guaranteed that name, year, and state are all set
...
done
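Filled in, that improvement might look like this (a sketch reusing the echo lines from the loop above):
/usr/bin/mycommand |
while read -r name && read -r year && read -r state; do
    # All three reads succeeded, so name, year and state are all set
    echo "$name $year"
    echo "State: $state"
done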
An easy one-liner (not tuned for performance):
/usr/bin/mycommand | xargs -d '\n' -L3 printf "%s %s\nState: %s\n"
It reads 3 lines at a time from the pipe and then passes them to a new instance of printf which is used to format the output.
If you have whitespace at the beginning (as there appears to be in your example output), you may need to use something like this:
/usr/bin/mycommand | sed -e 's/^\s*//g' | xargs -d '\n' -L3 printf "%s %s\nState: %s\n"
#!/bin/bash
COUNTER=0
/usr/bin/mycommand | while read LINE
do
if [ $COUNTER = 0 ]; then
NAME="$LINE"
COUNTER=$(($COUNTER + 1))
elif [ $COUNTER = 1 ]; then
YEAR="$LINE"
COUNTER=$(($COUNTER + 1))
elif [ $COUNTER = 2 ]; then
STATE="$LINE"
COUNTER=0
echo "$NAME $YEAR"
echo "State: $STATE"
fi
done
chepner's pure bash solution is simple and elegant, but slow with large input files (loops in bash are slow).
Michael Jaros' solution is even simpler, if you have GNU xargs (verify with xargs --version), but also does not perform well with large input files (external utility printf is called once for every 3 input lines).
If performance matters, try the following awk solution:
/usr/bin/mycommand | awk '
{ ORS = (NR % 3 == 1 ? " " : "\n")
gsub("^[[:blank:]]+|[[:blank:]]*\r?$", "") }
{ print (NR % 3 == 0 ? "State: " : "") $0 }
' > myoutputfile.txt
NR % 3 returns the 1-based index of each input line within its respective group of 3 consecutive lines; it returns 1 for the 1st line, 2 for the 2nd, and 0(!) for the 3rd.
{ ORS = (NR % 3 == 1 ? " " : "\n") determines ORS, the output-record separator, based on that index: a space for line 1, and a newline for lines 2 and 3; the space ensures that line 2 is appended to line 1 with a space when using print.
gsub("^[[:blank:]]+|[[:blank:]]*\r?$", "") strips leading and trailing whitespace from the line - including, if present, a trailing \r, which your input seems to have.
{ print (NR % 3 == 0 ? "State: " : "") $0 } prints the trimmed input line, prefixed by "State: " only for every 3rd input line, and implicitly followed by ORS (due to use of print).
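As a quick sanity check (a sketch in which printf stands in for /usr/bin/mycommand, using the names from the question):
printf '%s\n' "Tom Hagen" 1892 Canada "Hank Moody" 1555 Cuba "J.Lo" 156 France |
awk '
{ ORS = (NR % 3 == 1 ? " " : "\n")
  gsub("^[[:blank:]]+|[[:blank:]]*\r?$", "") }
{ print (NR % 3 == 0 ? "State: " : "") $0 }
'
# Prints:
# Tom Hagen 1892
# State: Canada
# Hank Moody 1555
# State: Cuba
# J.Lo 156
# State: France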

How to access memory buffer natively in NSIS

I'm reading a file's content into a buffer using
System::Alloc $Size
pop $Buffer
System::Call "Kernel32::ReadFile(i r0, i $Buffer, i $Size, t.,)"
But I can't figure out how to read from (or write into) $Buffer.
Is there a way to do so (preferably natively, but any suggestion would be appreciated)?
Thanks
P.S.
I know about System::Copy but it only lets you write into a buffer and only from another buffer
(trying System::Copy 1 $Buffer "A" crashes the executable)
The built-in NSIS functions FileRead/FileReadUTF16LE and FileReadByte can be used to read text and bytes but you can also call Windows functions directly. To read from a memory buffer you have to use the system struct syntax:
Section
InitPluginsDir
FileOpen $0 "$pluginsdir\test.txt" a
DetailPrint "NSIS:"
FileWrite $0 "Foo"
FileSeek $0 0 SET
FileRead $0 $1
DetailPrint |$1|
DetailPrint "System::Call:"
System::Alloc 100
Pop $1
System::Call '*$1(&m3 "Bar")' ; Write ASCII text into buffer using struct syntax
System::Call 'kernel32::WriteFile(i$0,i$1,i3,*i.r2,i0)i.r9'
DetailPrint "Write: OK=$9 ($2 bytes)"
System::Free $1
System::Call '*(&i100)i.r1' ; Alloc a 100 byte buffer using struct syntax
FileSeek $0 0 SET
System::Call 'kernel32::ReadFile(i$0,i$1,i6,*i.r2,i0)i.r9'
DetailPrint "Read: OK=$9 ($2 bytes)"
System::Call '*$1(&m${NSIS_MAX_STRLEN}.r2)' ; Read ASCII text into variable using struct syntax
System::Free $1
DetailPrint |$2|
FileClose $0
SectionEnd

Convert a decimal number to hexadecimal and binary in a shell script

I have a decimal number in each line of a file.txt:
1
2
3
I am trying (for too long now) to write a one-liner script that produces output where each row has a column with the decimal, the hexadecimal and the binary. To ease the task, we can say that the original number fits in a byte, so the maximum value is 255.
I first tried to encode each number as binary with leading zeros, so as to have an 8-bit pattern:
awk '{print "ibase=10;obase=2;" $1}' $1 | bc | xargs printf "%08d\n"
where the outer $1 in the awk statement is file.txt. The output is:
00000001
00000010
00000011
Same for hex, with one leading zero:
awk '{printf("0x%02x\n", $1)}' $1
Same as before. The output is:
0x01
0x02
0x03
Well, the decimal should be just a print:
1
2
3
What I'd like to have is a one-liner where I have:
1 00000001 0x01
2 00000010 0x02
so basically putting outputs 1, 2 and 3 together on each line.
I tried to execute bc (and other commands) within awk using system(), without success. And a zillion other ways. What is the way you would do it?
The following one-liner should work:
printf "%s %08d 0x%02x\n" "$1" $(bc <<< "ibase=10;obase=2;$1") "$1"
Example output:
$ for i in {1..10}; do printf "%s %08d 0x%02x\n" "$i" $(bc <<< "ibase=10;obase=2;$i") "$i"; done
1 00000001 0x01
2 00000010 0x02
3 00000011 0x03
4 00000100 0x04
5 00000101 0x05
6 00000110 0x06
7 00000111 0x07
8 00001000 0x08
9 00001001 0x09
10 00001010 0x0a
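To apply that one-liner to every line of file.txt (a sketch, assuming one decimal per line as in the question):
while read -r n; do
    printf "%s %08d 0x%02x\n" "$n" "$(bc <<< "ibase=10;obase=2;$n")" "$n"
done < file.txt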
So I searched for a short and elegant awk binary converter. Not satisfied, I took this as a challenge, so here you are. It is a little bit optimized for size, so I put a readable version below.
The printf at the end specifies how large the numbers should be. In this case 8 bits.
Is this bad code? Hmm, yeah... it's awk :-)
It does of course not work with very large numbers.
67-character awk code:
awk '{r="";a=$1;while(a){r=((a%2)?"1":"0")r;a=int(a/2)}printf"%08d\n",r}'
Edit: 55-character awk code
awk '{r="";a=$1;while(a){r=a%2r;a=int(a/2)}printf"%08d\n",r}'
Readable version:
awk '{r="" # initialize result to empty (not 0)
a=$1 # get the number
while(a!=0){ # as long as number still has a value
r=((a%2)?"1":"0") r # prepend the modulos2 to the result
a=int(a/2) # shift right (integer division by 2)
}
printf "%08d\n",r # print result with fixed width
}'
And the asked-for one-liner with both bin and hex:
awk '{r="";a=$1;while(a){r=a%2r;a=int(a/2)}printf"%08d 0x%02x\n",r,$1}'
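For example (a sketch), running it against file.txt from the question prints the binary and hex columns; add a %s and an extra $1 to the printf if you also want the leading decimal:
$ awk '{r="";a=$1;while(a){r=a%2r;a=int(a/2)}printf"%08d 0x%02x\n",r,$1}' file.txt
00000001 0x01
00000010 0x02
00000011 0x03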
You don't need bc. Here's a solution using only awk:
Fetch the bits2str function available in the manual
Add this minimal script:
{
printf("%s %s %x\n", $1, bits2str($1), $1)
}
This produces:
$ awk -f awkscr.awk nums
1 00000001 1
2 00000010 2
3 00000011 3
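For reference, the bits2str helper mentioned above comes from the gawk manual's bit-manipulation examples and looks roughly like this (a sketch; it relies on gawk's and() and rshift() built-ins):
function bits2str(bits,        data, mask)
{
    if (bits == 0)
        return "0"
    mask = 1
    for (; bits != 0; bits = rshift(bits, 1))
        data = (and(bits, mask) ? "1" : "0") data
    while ((length(data) % 8) != 0)
        data = "0" data
    return data
}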

Resources