I'm trying to convert some cmd script to a makefile with no success.
The script is:
for /F "eol=* tokens=2,3*" %%i in (%VERSION_FILE_PATH%\VersionInfo.h,%VERSION_FILE_PATH%\Version.h) do (
if %%i==%MAJOR% set MAJOR_VALUE=%%j
if %%i==%MINOR% set MINOR_VALUE=%%j
if %%i==%HOTFIX% set HOTFIX_VALUE=%%j
if %%i==%BUILD% set BUILD_VALUE=%%j
)
What the script does is search each line for a specific string and grab the value that follows it.
for example: #define MAJOR 4
I'm searching for MAJOR and getting the 4.
My question is how to do it in makefile.
If these lines always have the same structure, you could do it this way with awk:
tester:
	cat test | grep MAJOR | awk '{print $$3}'
Where I made a little test file that just contains
#define MAJOR 4
You could loop over each line, grepping for the token you want, and then grabbing the third value with awk. Note that you need the doubled $ to escape variable expansion by make.
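A sketch of the same extraction in plain shell, assuming the header contains lines like `#define MAJOR 4` (the file name and values here are invented). awk can do the match and the field extraction on its own, so grep isn't strictly needed:

```shell
# Invented header file for illustration.
cat > /tmp/VersionInfo.h <<'EOF'
#define MAJOR 4
#define MINOR 1
#define HOTFIX 0
#define BUILD 1234
EOF

# Match the token in field 2 and print field 3.
MAJOR_VALUE=$(awk '$2 == "MAJOR" {print $3}' /tmp/VersionInfo.h)
MINOR_VALUE=$(awk '$2 == "MINOR" {print $3}' /tmp/VersionInfo.h)
echo "$MAJOR_VALUE.$MINOR_VALUE"
```

In a GNU makefile you would wrap each call in $(shell ...) and double the $ signs, e.g. MAJOR_VALUE := $(shell awk '$$2 == "MAJOR" {print $$3}' $(VERSION_FILE_PATH)/VersionInfo.h).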
I'm migrating many bash shell scripts from old versions of Raspbian and Ubuntu to the current Raspbian version. I've made a brand-new system installation, including various configuration (text) files that I created for these scripts. I found to my horror that awk print and awk printf APPEAR to have changed in the latest version, as evidenced by bash variable-type errors when the values are used. What's going on?
Now that I know the answer, I can explain what happened so others can avoid it. That's why I said awk print APPEARS to have changed. It didn't, as I discovered when I checked the version of awk on all three machines. Running:
awk -W version
on all three systems gave the same version, mawk 1.3.3 Nov 1996.
When a text file is small, I find it simplest to cat the file into a variable, grep that variable for a keyword that identifies a particular line (and by extension a particular variable), and use 'tr' and 'awk print' to split the line and assign the value to a variable. Here's an example line from which I want to assign '5' to a variable:
"keyword=5"<line terminator>
That line is one of several read from a text file, so there's at least one line terminator after each line. That line terminator is the key to the problem.
I execute the following commands to read the file, find the line with 'keyword', split the line at '=', and assign the value from that line to bar:
file_contents="$(cat "$filename")"
bar="$(echo -e "$file_contents" | grep "keyword" | tr "=" " " | awk '{print $2}')"
Here's the subtle part. Unbeknownst to me, in the process of creating the new system, the line terminators in some of my text files changed from Linux format, with a single terminator character (\n), to DOS format, with a two-character terminator (\r\n) on each line. When, working from the keyboard, I grepped the text file to get the desired line, the value that awk print assigned to 'bar' ended up with a trailing carriage return (\r). That character does NOT appear on screen, because bash supplies a newline of its own. It's only evident if one executes:
echo ${#bar}
to get the length of the string, or does:
echo -e "$bar"
The hidden terminator shows up as one additional character.
So, the solution to the problem was either to use 'fromdos' to remove the extra terminator character before processing the files, or to strip the unwanted '\r' from each assigned value. One helpful comment noted that 'cat -vE $file' would show every character in the file. Sure enough, the DOS terminators were present.
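To see the problem the way 'cat -vE' shows it (the file name and contents are invented; GNU cat), the carriage return is rendered visibly as ^M and the end of each line is marked with $:

```shell
# Write one DOS-terminated line: CR LF instead of a bare LF.
printf 'keyword=5\r\n' > /tmp/dos_example
# -v makes the CR visible as ^M, -E marks end-of-line with $.
cat -vE /tmp/dos_example
```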
Another helpful comment noted that my parsing pipeline was spawning multiple sub-processes for each line, slowing execution, and that a bashism:
${foo//*=/}
could avoid it. That bashism helped parse each line but did not remove the offending '\r'. A second bashism:
${foo//$'\r'/}
removed that '\r'.
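Put together, the two expansions look like this (bash; the sample line is invented):

```shell
# Simulate a line read from a DOS-format file: a stray CR tags along.
foo="$(printf 'keyword=5\r')"
bar=${foo//*=/}       # drop everything through '=' -> the value plus the CR
bar=${bar//$'\r'/}    # drop the carriage return
echo "${#bar}"        # the length is 1 again
```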
CASE SOLVED
#!/bin/sh -x
echo "value=5" | tr "=" "\n" > temp
echo "1,2p" | ed -s temp
I have come to view Ed as UNIX's answer to the lightsaber.
I found a format string, '"%c", $2', to use with printf in the current awk, but I have to use '"%s", $2' in the old version. Note '%c' vs '%s'.
The behavior of %c depends on the type of argument you feed it: if it is numeric, you get the character corresponding to the given ASCII code; if it is a string, you get its first character. For example,
mawk 'BEGIN{printf "%c", 42}' emptyfile
gives the output
*
and
mawk 'BEGIN{printf "%c", "HelloWorld"}' emptyfile
gives the output
H
Apparently your 2nd field is a digit followed by some junk characters, which makes it a string, so the second behavior applies. But is taking the first character the correct action in all your use cases? Does that meet your requirements for multi-digit numbers, e.g. 555?
(tested in mawk 1.3.3)
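The difference can be checked directly: with a field like 5X (a digit plus junk, so awk treats it as a string), the two formats diverge. The input here is invented:

```shell
# %c keeps only the first character of a string argument; %s keeps it all.
echo "value 5X" | awk '{printf "%c\n", $2}'
echo "value 5X" | awk '{printf "%s\n", $2}'
```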
I found the problem thanks to several of the responses. It's rudimentary, I know, but I grepped a text file to extract a line with a keyword, used tr to split the line, and used awk print to extract one argument, a numeric value, from that. That text file, once copied to the new machine, had a CR LF at the end of each line. Originally it had just a newline character, which worked fine. But with the CR LF pair, every numeric value I assigned to a variable using awk print carried a trailing carriage return. This was not obvious on screen, caused every arithmetic statement and numeric IF statement using the value to fail, and caused the issues I reported about awk print.
I am trying to write a script that does some log highlighting for human readability. The issue I am running into is matching a wildcard while also printing the matched text. In my file I am searching for "psuslot-*" (which can be either 1 or 2). The goal is to highlight the string "psuslot-1" or "psuslot-2" in each line of the log. The command I am currently testing with is:
cat file.txt | sed "s|\(psuslot-.\)|$(printf "\033[93m"\1"\033[0m")|"
Highlights this in the output: 1
I know the groups are relatively correct as this command retains the correct group:
cat file.txt | sed "s|\(psuslot-.\)|\1|"
Retained in the output: psuslot-1 and psuslot-2
The capture group does not seem to survive the shell's command substitution. Is there a way to carry the matched text into the replacement?
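For what it's worth, the reason only "1" gets highlighted can be seen by tracing the expansion order (a sketch, assuming bash): the command substitution runs before sed ever sees the replacement, and the unquoted \1 inside it is reduced by the shell to a literal 1, so the replacement text handed to sed is just a colored "1" with no backreference left in it:

```shell
# The $(printf ...) is expanded first; the unquoted \1 becomes a plain "1",
# so sed receives ESC[93m1ESC[0m as its whole replacement text.
printf '%s\n' "psuslot-1 failed" \
  | sed "s|\(psuslot-.\)|$(printf "\033[93m"\1"\033[0m")|"
```

That is exactly the symptom reported above: the match is replaced by a highlighted literal 1.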
You could use
sed 's|\(psuslot-.\)|\o33[93m\1\o33[0m|'
or less readable
sed 's|\(psuslot-.\)|'$(printf '\033[93m')'\1'$(printf '\033[0m')'|'
I have taken the syntax for the octal value from the GNU sed manual: Escape Sequences - specifying special characters
\oxxx
Produces or matches a character whose octal ASCII value is xxx.
And \o033 produced the same result as \o33 (same as hexadecimal \x1b), so I took the shorter one.
I would like to execute some basic mathematical calculations in batch/Cygwin, but the solution described in the StackOverflow question Calculating the sum of two variables in a batch script uses the set /A command.
This does not suit me, because I'd like to have everything in a pipe (UNIX style, hence the Cygwin).
My idea is the following: I have a list of files, each containing an entry. I'd like to show, for each mentioned file, the line that follows that entry.
Therefore I was thinking about following approach:
Find the line where the entry is found: fgrep -n <entry> // this shows the line number together with the entry itself
Only show the line number: fgrep -n <entry> | awk -F ':' '{print $1}'
Add '1' to that number
Take the first <new number> lines: head -<new number>
Only take the last line : tail -1
But as I don't know how to add 1 to a number, I am stuck here.
I've already tried using bc (basic calculator) but my Cygwin installation seems not to cover that.
As I want to have everything within one piped line, using set /A makes no sense.
Does anybody have an idea?
Thanks in advance
Sorry, sorry, I just realised that awk is capable of doing some basic calculations, so by replacing {print $1} by {print $1 + 1} my problem is solved.
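Spelled out, the whole pipeline then looks like this (the file name and entry are invented):

```shell
# Show the line that follows the matched entry.
cat > /tmp/list.txt <<'EOF'
alpha
entry
beta
EOF
n=$(fgrep -n "entry" /tmp/list.txt | awk -F ':' '{print $1 + 1}')
head -"$n" /tmp/list.txt | tail -1
```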
Bash has builtin support for declaring variables to be integer and for doing arithmetic on integers. Bash has a help command and you should have both man bash and info bash installed.
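For example, the shell can do the increment itself, with no external command at all:

```shell
# Arithmetic expansion works in POSIX shell and bash alike.
n=2
echo $(( n + 1 ))
```

bash additionally offers declare -i for integer variables; see help declare.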
To satisfy some legacy code I had to append a date to a filename, as shown below (it's definitely needed, and I cannot modify the legacy code :( ). But I need to remove the date within the same command, without going to a new line. The command is read from a text file, so I must do everything in a single command.
$(echo "$file_name".$(date +%Y%m%d) | sed 's/^prefix_//')
So here I am removing the prefix from the filename and appending a date to it. I also want to remove the date that I added. For example, prefix_filename.txt or prefix_filename.zip should give me the following.
Expected output:
filename.txt
filename.zip
Current output:
filename.txt.20161002
filename.zip.20161002
Assuming all the files are formatted as filename.ext.date, you can pipe the output to the 'cut' command and keep only the 1st and 2nd fields:
~> X=filename.txt.20161002
~> echo $X | cut -d"." -f1,2
filename.txt
I am not sure that I understand your question correctly, but perhaps this does what you want:
$(echo "$file_name".$(date +%Y%m%d) | sed -e 's/^prefix_//' -e 's/\.[^.]*$//')
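With invented file names, stripping both the prefix and the trailing date looks like this:

```shell
file_name="prefix_filename.txt"
# Append today's date, then remove the prefix and the trailing date again.
echo "$file_name".$(date +%Y%m%d) | sed -e 's/^prefix_//' -e 's/\.[^.]*$//'
```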
Sample input:
cat sample
prefix_original.txt.log.tgz.10032016
prefix_original.txt.log.10032016
prefix_original.txt.10032016
prefix_one.txt.10032016
prefix.txt.10032016
prefix.10032016
grep from the start of the string up to a literal dot "." followed by a digit:
grep -oP '^.*(?=\.\d)' sample
prefix_original.txt.log.tgz
prefix_original.txt.log
prefix_original.txt
prefix_one.txt
prefix.txt
prefix
To also pass through lines with no trailing date, perhaps the following should be used:
grep -oP '^.*(?=\.\d)|^.*$' sample
If I understand your question correctly, you want to remove the date part from a variable, AND you already know from the context that the variable DOES contain a date part and that this part comes after the last period in the name.
In this case, the question boils down to removing the last period and what comes after.
This can be done (POSIX shell, bash, zsh, ksh) by
filename_without=${filename_with%.*}
assuming that filename_with contains the filename which has the date part in the end.
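For example (names invented):

```shell
filename_with="filename.txt.20161002"
filename_without=${filename_with%.*}   # remove the last '.' and everything after it
echo "$filename_without"
```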
% cat example
filename.txt.20161002
filename.zip.20161002
% cat example | sed "s/\.[0-9]*$//"
filename.txt
filename.zip
%
Windows command line, I want to search a file for all rows starting with:
# NNN "<file>.inc"
where NNN is a number and <file> any string.
I want to use findstr, because I cannot require that the users of the script install ack.
Here is the expression I came up with:
>findstr /r /c:"^# [0-9][0-9]* \"[a-zA-Z0-9_]*.inc" all_pre.txt
The file to search is all_pre.txt.
So far so good. Now I want to pipe that to another command, say for example more.
>findstr /r /c:"^# [0-9][0-9]* \"[a-zA-Z0-9]*.inc" all_pre.txt | more
The result of this is the same output as the previous command, but with the file name as prefix for every row (all_pre.txt).
Then comes:
FINDSTR: cannot open |
FINDSTR: cannot open more
Why doesn't the pipe work?
snip of the content of all_pre.txt
# 1 "main.ss"
# 7 "main.ss"
# 11 "main.ss"
# 52 "main.ss"
# 1 "Build_flags.inc"
# 7 "Build_flags.inc"
# 11 "Build_flags.inc"
# 20 "Build_flags.inc"
# 45 "Build_flags.inc(function a called from b)"
EDIT: I need to escape the dot in the regex as well. Not the issue, but worth mentioning.
>findstr /r /c:"^# [0-9][0-9]* \"[a-zA-Z0-9_]*\.inc" all_pre.txt
EDIT after Frank Bollack:
>findstr /r /c:"^# [0-9][0-9]* \"[a-zA-Z0-9_]*\.inc.*" all_pre.txt | more
is not working, although (I think) it should match the same string as before, followed by any character any number of times. That must include the ", right?
You are missing a trailing \" in your search pattern.
findstr /r /c:"^# [0-9][0-9]* \"[a-zA-Z0-9]*.inc\"" all_pre.txt | more
The above works for me.
Edit:
findstr /r /c:"^# [0-9][0-9]* \"[a-zA-Z0-9]*\.inc.*\"" all_pre.txt | more
This updated search string will now match these lines from your example:
# 1 "Build_flags.inc"
# 7 "Build_flags.inc"
# 11 "Build_flags.inc"
# 20 "Build_flags.inc"
# 45 "Build_flags.inc(function a called from b)"
Edit:
To circumvent this "bug" in findstr, you can put your search into a batch file like this:
@findstr /r /c:"^# [0-9][0-9]* \"[a-zA-Z0-9_]*\.inc" %1
Name it something like myfindstr.bat and call it like that:
myfindstr all_pre.txt | more
You can now use the pipe and redirection operators as usual.
Hope that helps.
I can't really explain why, but in my experience, although findstr's behaviour with fixed strings (e.g. /c:"some string") is exactly as desired, regular expressions are a different beast. I routinely use the fixed-string search like so to extract lines from CSV files:
C:\> findstr /C:"literal string" filename.csv > output.csv
No issue there.
But using regular expressions (e.g. /R "^\"some string\"") appears to force the findstr output to the console, and it can't be redirected by any means. I tried >, >>, 1>, 2>, and all fail when using regular expressions.
My workaround for this is to use findstr as the secondary command. In my case I did this:
C:\> type filename.csv | findstr /R "^\"some string\"" > output.csv
That worked for me without issue directly from a command line, with a very complex regular expression string. In my case I only had to escape the " for it to work; other characters such as , and . worked fine as literals in the expression without escaping.
I confirmed that the behaviour is the same on both windows 2008 and Windows 7.
EDIT: Another variant also apparently works:
C:\> findstr /R "^\"some string\"" < filename.csv > output.csv
It's the same principle as using type, but letting the shell redirect the file into findstr instead of creating a pipe.
If you use a regex with an even number of double quotes, it works perfectly. But if your number of " characters is odd, redirection doesn't work. You can either complete your regex with a second quote (you can use a range for this purpose: [\"\"]), or replace the quote character with the dot metacharacter.
It looks like a cmd.exe issue; findstr is not to blame.
Here is my finding: it's related to an odd number of double quotes preventing redirection from within a batch script. Michael Yutsis had it right, he just didn't give an example, so I thought I would:
dataset:
"10/19/2022 20:02:06.057","99.526755039736002573"
"10/19/2022 20:02:07.061"," "
"10/19/2022 20:02:08.075","85.797437749585213851"
"10/19/2022 20:02:09.096","96.71306029796799919"
"10/19/2022 20:02:10.107","4.0273833029566628028"
I tried using the following to find just lines that had a fractional portion of a number at the end of each line.
findstr /r /c:"\.[0-9]*\"$" file1.txt > file2.txt
(a valid regex string surrounded by quotes that has one explicit double quote in it)
needed to become
findstr /r /c:"\"[0-9]*\.[0-9]*\"$"" file1.txt > file2.txt
so it could identify the entire decimal (including the explicit quotes).
I tried just adding another double quote at the end of the string ($"") and the command ran and generated file2.txt, but it didn't match any lines in the file; so the extra trailing double quote becomes part of the regex string, I guess, and matches nothing. Including the leading double quote around the full decimal was necessary, and fine for my needs.