split text file according to brackets or parentheses (top-level only) in terminal - bash

I have several text files (utf-8) that I want to process in shell script. They aren't exactly the same format, but if I could only break them up into edible chunks I can handle that.
This could be programmed in C or Python, but I'd prefer not to.
EDIT: I wrote a solution in C; see my own answer. I think this may be the simplest approach after all. If you think I'm wrong please test your solution against the more complicated example input from my answer below.
-- jcxz100
For clarity (and to be able to debug more easily) I want the chunks to be saved as separate text files in a sub-folder.
All types of input files consist of:
junk lines
lines with junk text followed by start brackets or parentheses - i.e. '[' '{' '<' or '(' - and possibly followed by payload
payload lines
lines with brackets or parentheses nested within the top-level pairs; treated as payload too
payload lines with end brackets or parentheses - i.e. ']' '}' '>' or ')' - possibly followed by something (junk text and/or start of a new payload)
I want to break up the input according to only the matching pairs of top-level brackets/parentheses.
Payload inside these pairs must not be altered (including newlines and whitespace).
Everything outside the top-level pairs should be discarded as junk.
Any junk or payload inside double-quotes must be considered atomic (handled as raw text, thus any brackets or parentheses inside should also be treated as text).
Here is an example (using only {} pairs):
junk text
"atomic junk"
some junk text followed by a start bracket { here is the actual payload
more payload
"atomic payload"
nested start bracket { - all of this line is untouchable payload too
here is more payload
"yet more atomic payload; this one's got a smiley ;-)"
end of nested bracket pair } - all of this line is untouchable payload too
this is payload too
} trailing junk
intermittent junk
{
payload that goes in second output file }
end junk
...sorry: Some of the input files really are as messy as that.
The first output file should be:
{ here is the actual payload
more payload
"atomic payload"
nested start bracket { - all of this line is untouchable payload too
here is more payload
"yet more atomic payload; this one's got a smiley ;-)"
end of nested bracket pair } - all of this line is untouchable payload too
this is payload too
}
... and the second output file:
{
payload that goes in second output file }
Note:
I haven't quite decided whether it's necessary to keep the pair of start/end characters in the output or if they themselves should be discarded as junk.
I think a solution that keeps them in is of more general use.
There can be a mix of types of top-level bracket/parenthesis pairs in the same input file.
Beware: There are * and $ characters in the input files, so please avoid confusing bash ;-)
I prefer readability over brevity, but not at an exponential cost of speed.
Nice-to-haves:
There are backslash-escaped double-quotes inside the text; preferably they should be handled
(I have a hack, but it's not pretty).
The script oughtn't break over mismatched pairs of brackets/parentheses in junk and/or payload (note: inside the atomics they must be allowed!)
More-far-out-nice-to-haves:
I haven't seen it yet, but one could speculate that some input might have single-quotes rather than double-quotes to denote atomic content... or even a mix of both.
It would be nice if the script could be easily modified to parse input of similar structure but with different start/end characters or strings.
I can see this is quite a mouthful, but I don't think breaking it down into simpler questions would give a robust solution.
The main problem is splitting up the input correctly - everything else can be ignored or "solved" with hacks, so
feel free to ignore the nice-to-haves and the more-far-out-nice-to-haves.

Given:
$ cat file
junk text
"atomic junk"
some junk text followed by a start bracket { here is the actual payload
more payload
"atomic payload"
nested start bracket { - all of this line is untouchable payload too
here is more payload
"yet more atomic payload; this one's got a smiley ;-)"
end of nested bracket pair } - all of this line is untouchable payload too
this is payload too
} trailing junk
intermittent junk
{
payload that goes in second output file }
end junk
This Perl script will extract the blocks you describe into files block_1, block_2, etc. (written under /tmp, since the script chdirs there):
#!/usr/bin/perl
use v5.10;
use warnings;
use strict;
use Text::Balanced qw(extract_multiple extract_bracketed);
my $txt;
while (<>){$txt.=$_;} # slurp the file
my @blocks = extract_multiple(
$txt,
[
# Extract {...}
sub { extract_bracketed($_[0], '{}') },
],
# Return all the fields
undef,
# Throw out anything which does not match
1
);
chdir "/tmp";
my $base="block_";
my $cnt=1;
for my $block (@blocks){ my $fn="$base$cnt";
say "writing $fn";
open (my $fh, '>', $fn) or die "Could not open file '$fn' $!";
print $fh "$block\n";
close $fh;
$cnt++;}
Now the files:
$ cat block_1
{ here is the actual payload
more payload
"atomic payload"
nested start bracket { - all of this line is untouchable payload too
here is more payload
"yet more atomic payload; this one's got a smiley ;-)"
end of nested bracket pair } - all of this line is untouchable payload too
this is payload too
}
$ cat block_2
{
payload that goes in second output file }
Using Text::Balanced is robust and likely the best solution.
You can do the blocks with a single Perl regex:
$ perl -0777 -nlE 'while (/(\{(?:(?1)|[^{}]*+)++\})|[^{}\s]++/g) {if ($1) {$cnt++; say "block $cnt:== start:\n$1\n== end";}}' file
block 1:== start:
{ here is the actual payload
more payload
"atomic payload"
nested start bracket { - all of this line is untouchable payload too
here is more payload
"yet more atomic payload; this one's got a smiley ;-)"
end of nested bracket pair } - all of this line is untouchable payload too
this is payload too
}
== end
block 2:== start:
{
payload that goes in second output file }
== end
But that is a little more fragile than using a proper parser like Text::Balanced...

I have a solution in C. It would seem there's too much complexity for this to be easily achieved in a shell script.
The program isn't overly complicated but nevertheless has more than 200 lines of code, including error checking, some speed optimization, and other niceties.
Source file split-brackets-to-chunks.c:
#include <stdio.h>
/* Example code by jcxz100 - your problem if you use it! */
#define BUFF_IN_MAX 255
#define BUFF_IN_SIZE (BUFF_IN_MAX+1)
#define OUT_NAME_MAX 31
#define OUT_NAME_SIZE (OUT_NAME_MAX+1)
#define NO_CHAR '\0'
int main()
{
char pcBuff[BUFF_IN_SIZE];
size_t iReadActual;
FILE *pFileIn, *pFileOut;
int iNumberOfOutputFiles;
char pszOutName[OUT_NAME_SIZE];
char cLiteralChar, cAtomicChar, cChunkStartChar, cChunkEndChar;
int iChunkNesting;
char *pcOutputStart;
size_t iOutputLen;
pcBuff[BUFF_IN_MAX] = '\0'; /* ... just to be sure. */
iReadActual = 0;
pFileIn = pFileOut = NULL;
iNumberOfOutputFiles = 0;
pszOutName[OUT_NAME_MAX] = '\0'; /* ... just to be sure. */
cLiteralChar = cAtomicChar = cChunkStartChar = cChunkEndChar = NO_CHAR;
iChunkNesting = 0;
pcOutputStart = (char*)pcBuff;
iOutputLen = 0;
if ((pFileIn = fopen("input-utf-8.txt", "r")) == NULL)
{
printf("What? Where?\n");
return 1;
}
while ((iReadActual = fread(pcBuff, sizeof(char), BUFF_IN_MAX, pFileIn)) > 0)
{
char *pcPivot, *pcStop;
pcBuff[iReadActual] = '\0'; /* ... just to be sure. */
pcPivot = (char*)pcBuff;
pcStop = (char*)pcBuff + iReadActual;
while (pcPivot < pcStop)
{
if (cLiteralChar != NO_CHAR) /* Ignore this char? */
{
/* Yes, ignore this char. */
if (cChunkStartChar != NO_CHAR)
{
/* ... just write it out: */
fprintf(pFileOut, "%c", *pcPivot);
}
pcPivot++;
cLiteralChar = NO_CHAR;
/* End of "Yes, ignore this char." */
}
else if (cAtomicChar != NO_CHAR) /* Are we inside an atomic string? */
{
/* Yup; we are inside an atomic string. */
int bBreakInnerWhile;
bBreakInnerWhile = 0;
pcOutputStart = pcPivot;
while (bBreakInnerWhile == 0)
{
if (*pcPivot == '\\') /* Treat next char as literal? */
{
cLiteralChar = '\\'; /* Yes. */
bBreakInnerWhile = 1;
}
else if (*pcPivot == cAtomicChar) /* End of atomic? */
{
cAtomicChar = NO_CHAR; /* Yes. */
bBreakInnerWhile = 1;
}
if (++pcPivot == pcStop) bBreakInnerWhile = 1;
}
if (cChunkStartChar != NO_CHAR)
{
/* The atomic string is part of a chunk. */
iOutputLen = (size_t)(pcPivot-pcOutputStart);
fprintf(pFileOut, "%.*s", iOutputLen, pcOutputStart);
}
/* End of "Yup; we are inside an atomic string." */
}
else if (cChunkStartChar == NO_CHAR) /* Are we inside a chunk? */
{
/* No, we are outside a chunk. */
int bBreakInnerWhile;
bBreakInnerWhile = 0;
while (bBreakInnerWhile == 0)
{
/* Detect start of anything interesting: */
switch (*pcPivot)
{
/* Start of atomic? */
case '"':
case '\'':
cAtomicChar = *pcPivot;
bBreakInnerWhile = 1;
break;
/* Start of chunk? */
case '{':
cChunkStartChar = *pcPivot;
cChunkEndChar = '}';
break;
case '[':
cChunkStartChar = *pcPivot;
cChunkEndChar = ']';
break;
case '(':
cChunkStartChar = *pcPivot;
cChunkEndChar = ')';
break;
case '<':
cChunkStartChar = *pcPivot;
cChunkEndChar = '>';
break;
}
if (cChunkStartChar != NO_CHAR)
{
iNumberOfOutputFiles++;
printf("Start '%c' '%c' chunk (file %04d.txt)\n", *pcPivot, cChunkEndChar, iNumberOfOutputFiles);
sprintf((char*)pszOutName, "output/%04d.txt", iNumberOfOutputFiles);
if ((pFileOut = fopen(pszOutName, "w")) == NULL)
{
printf("What? How?\n");
fclose(pFileIn);
return 2;
}
bBreakInnerWhile = 1;
}
else if (++pcPivot == pcStop)
{
bBreakInnerWhile = 1;
}
}
/* End of "No, we are outside a chunk." */
}
else
{
/* Yes, we are inside a chunk. */
int bBreakInnerWhile;
bBreakInnerWhile = 0;
pcOutputStart = pcPivot;
while (bBreakInnerWhile == 0)
{
if (*pcPivot == cChunkStartChar)
{
/* Increase level of brackets/parantheses: */
iChunkNesting++;
}
else if (*pcPivot == cChunkEndChar)
{
/* Decrease level of brackets/parantheses: */
iChunkNesting--;
if (iChunkNesting == 0)
{
/* We are now outside chunk. */
bBreakInnerWhile = 1;
}
}
else
{
/* Detect atomic start: */
switch (*pcPivot)
{
case '"':
case '\'':
cAtomicChar = *pcPivot;
bBreakInnerWhile = 1;
break;
}
}
if (++pcPivot == pcStop) bBreakInnerWhile = 1;
}
iOutputLen = (size_t)(pcPivot-pcOutputStart);
fprintf(pFileOut, "%.*s", iOutputLen, pcOutputStart);
if (iChunkNesting == 0)
{
printf("File done.\n");
cChunkStartChar = cChunkEndChar = NO_CHAR;
fclose(pFileOut);
pFileOut = NULL;
}
/* End of "Yes, we are inside a chunk." */
}
}
}
if (cChunkStartChar != NO_CHAR)
{
printf("Chunk exceeds end-of-file. Exiting gracefully.\n");
fclose(pFileOut);
pFileOut = NULL;
}
if (iNumberOfOutputFiles == 0) printf("Nothing to do...\n");
else printf("All done.\n");
fclose(pFileIn);
return 0;
}
I've solved the nice-to-haves and one of the more-far-out-nice-to-haves.
To show this, the input is a little more complex than the example in the question:
junk text
"atomic junk"
some junk text followed by a start bracket { here is the actual payload
more payload
'atomic payload { with start bracket that should be ignored'
nested start bracket { - all of this line is untouchable payload too
here is more payload
"this atomic has a literal double-quote \" inside"
"yet more atomic payload; this one's got a smiley ;-) and a heart <3"
end of nested bracket pair } - all of this line is untouchable payload too
this is payload too
"here's a totally unprovoked $ sign and an * asterisk"
} trailing junk
intermittent junk
<
payload that goes in second output file } mismatched end bracket should be ignored >
end junk
Resulting file output/0001.txt:
{ here is the actual payload
more payload
'atomic payload { with start bracket that should be ignored'
nested start bracket { - all of this line is untouchable payload too
here is more payload
"this atomic has a literal double-quote \" inside"
"yet more atomic payload; this one's got a smiley ;-) and a heart <3"
end of nested bracket pair } - all of this line is untouchable payload too
this is payload too
"here's a totally unprovoked $ sign and an * asterisk"
}
... and resulting file output/0002.txt:
<
payload that goes in second output file } mismatched end bracket should be ignored >
Thanks @dawg for your help :)

Related

Extract trailing int from string containing other characters

I have a problem regarding extracting a signed int from a string in C++.
Assuming that I have a string like Images1234, how can I extract the 1234 from the string without knowing the position of the last non-numeric character in C++?
FYI, I have tried stringstream as well as lexical_cast, as suggested in other posts, but stringstream returns 0 while lexical_cast stopped working.
#include <iostream>
#include <sstream>
#include <string>
#include <boost/lexical_cast.hpp>
using namespace std;
int main()
{
string virtuallive("Images1234");
//stringstream output(virtuallive.c_str());
//int i = stoi(virtuallive);
//stringstream output(virtuallive);
int i;
i = boost::lexical_cast<int>(virtuallive.c_str());
//output >> i;
cout << i << endl;
return 0;
}
How can I extract the 1234 from the string without knowing the position of the last non-numeric character in C++?
You can't. But the position is not hard to find:
auto last_non_numeric = input.find_last_not_of("1234567890");
char* endp = &input[0];
if (last_non_numeric != std::string::npos)
endp += last_non_numeric + 1;
if (!*endp) { /* FAILURE, no number on the end */ }
auto i = strtol(endp, &endp, 10);
if (*endp) {/* weird FAILURE, maybe the number was really HUGE and couldn't convert */}
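Wrapped into a small helper, the same approach looks as follows (just a sketch; the function name trailing_int and the -1 "no number" convention are mine, not part of the original answer):

#include <cstdlib>
#include <iostream>
#include <string>

// Returns the number at the end of `input`, or -1 if there is none.
long trailing_int(std::string input)
{
    auto last_non_numeric = input.find_last_not_of("1234567890");
    char* endp = &input[0];
    if (last_non_numeric != std::string::npos)
        endp += last_non_numeric + 1;
    if (!*endp)
        return -1;                        // no number on the end
    return std::strtol(endp, &endp, 10);  // endp is left just past the digits
}

int main()
{
    std::cout << trailing_int("Images1234") << '\n'; // prints 1234
    std::cout << trailing_int("Images") << '\n';     // prints -1
}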
Another possibility would be to put the string into a stringstream, then read the number from the stream (after imbuing the stream with a locale that classifies everything except digits as white space).
// First the desired facet:
struct digits_only: std::ctype<char> {
digits_only(): std::ctype<char>(get_table()) {}
static std::ctype_base::mask const* get_table() {
// everything is white-space:
static std::vector<std::ctype_base::mask>
rc(std::ctype<char>::table_size,std::ctype_base::space);
// except digits, which are digits
std::fill(&rc['0'], &rc['9'] + 1, std::ctype_base::digit);
// and '.', which we'll call punctuation:
rc['.'] = std::ctype_base::punct;
return &rc[0];
}
};
Then the code to read the data:
std::istringstream virtuallive("Images1234");
virtuallive.imbue(std::locale(std::locale(), new digits_only));
int number;
// Since we classify the letters as white space, the stream will ignore them.
// We can just read the number as if nothing else were there:
virtuallive >> number;
This technique is useful primarily when the stream contains a substantial amount of data, and you want all the data in that stream to be interpreted in the same way (e.g., only read numbers, regardless of what else it might contain).
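To illustrate, here is a short sketch that pulls several numbers out of one string; it reuses the digits_only facet defined above, and the sample text is my own, not from the original answer:

#include <iostream>
#include <locale>
#include <sstream>
#include <vector>

// Assumes the digits_only facet from above is visible here.
int main()
{
    std::istringstream in("width=640, height=480, depth=32");
    in.imbue(std::locale(in.getloc(), new digits_only));

    // Everything except digits now counts as white space, so the
    // stream skips the labels and punctuation on its own.
    std::vector<int> numbers;
    int n;
    while (in >> n)
        numbers.push_back(n);

    for (int v : numbers)
        std::cout << v << '\n'; // 640, 480, 32
}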

error not all control paths return a value

My compiler says that not all control paths return a value, and it points to an overloaded operator>> function. I am not exactly sure what is causing the problem and any help would be appreciated. I am trying to overload the stream extraction operator to determine whether input is valid. If it is not, it should set the failbit to indicate improper input.
std::istream &operator >> (std::istream &input, ComplexClass &c)//overloading extraction operator
{
int number;
int multiplier;
char temp;//temp variable used to store input
input >> number;//gets real number
// test if character is a space
if (input.peek() == ' ' /* Write a call to the peek member function to
test if the next character is a space ' ' */) // case a + bi
{
c.real = number;
input >> temp;
multiplier = (temp == '+') ? 1 : -1;
}
// set failbit if character not a space
if (input.peek() != ' ')
{
/* Write a call to the clear member function with
ios::failbit as the argument to set input's fail bit */
input.clear(ios::failbit);
}
else
// set imaginary part if data is valid
if (input.peek() == ' ')
{
input >> c.imaginary;
c.imaginary *= multiplier;
input >> temp;
}
if (input.peek() == '\n' /* Write a call to member function peek to test if the next
character is a newline \n */) // character not a newline
{
input.clear(ios::failbit); // set bad bit
} // end if
else
{
input.clear(ios::failbit); // set bad bit
} // end else
// end if
if (input.peek() == 'i' /* Write a call to member function peek to test if
the next character is 'i' */) // test for i of imaginary number
{
input >> temp;
// test for newline character entered
if (input.peek() == '\n')
{
c.real = 0;
c.imaginary = number;
} // end if
else if (input.peek() == '\n') // set real number if it is valid
{
c.real = number;
c.imaginary = 0;
} // end else if
else
{
input.clear(ios::failbit); // set bad bit
}
return input;
}
}
The return statement is only executed inside the last if:
if (input.peek() == 'i' ) {
...
return input;
}
It should be changed to:
if (input.peek() == 'i' ) {
...
}
return input;
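As a side note, the point of setting failbit is that the caller can test the stream after the extraction. A fragment along these lines (assuming the ComplexClass and operator>> from the question):

ComplexClass c;
if (std::cin >> c)
{
    // extraction succeeded and c holds a valid complex number
}
else
{
    // operator>> set failbit: the input was not in the expected format
    std::cin.clear();            // clear the error state
    std::cin.ignore(1000, '\n'); // discard the rest of the bad line
}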

Same .txt files, different sizes?

I have a program that reads from a .txt file.
I use the cmd prompt to execute the program with the name of the text file to read from.
ex: program.exe myfile.txt
The problem is that sometimes it works, sometimes it doesn't.
The original file is 130KB and doesn't work.
If I copy/paste the contents, the file is 65KB and works.
If I copy/paste the file and rename it, it's 130KB and doesn't work.
Any ideas?
After more testing, it seems this is the part that makes it not work:
int main(int argc, char *argv[])
{
char *infile1 = NULL;
char *outfile = NULL;
char tmp[1024] = { 0x0 };
FILE *in;
int i, opt = 0, optarg1 = 0;
for (i = 1; i < argc; i++) /* Skip argv[0] (program name). */
{
if (strcmp(argv[i], "-sec") == 0) /* Process optional arguments. */
{
opt = 1; /* This is used as a boolean value. */
/*
* The last argument is argv[argc-1]. Make sure there are
* enough arguments.
*/
if (i + 1 <= argc - 1) /* There are enough arguments in argv. */
{
/*
* Increment 'i' twice so that you don't check these
* arguments the next time through the loop.
*/
i++;
optarg1 = atoi(argv[i]); /* Convert string to int. */
}
}
else /* not -sec */
{
if (infile1 == NULL) {
infile1 = argv[i];
}
else {
if (outfile == NULL) {
outfile = argv[i];
}
}
}
}
in = fopen(infile1, "r");
if (in == NULL)
{
fprintf(stderr, "Unable to open file %s: %s\n", infile1, strerror(errno));
exit(1);
}
while (fgets(tmp, sizeof(tmp), in) != 0)
{
fprintf(stderr, "string is %s.", tmp);
//Rest of code
}
}
Whether it works or not, the code inside the while loop gets executed.
When it works tmp actually has a value.
When it doesn't work tmp has no value.
EDIT:
Thanks to sneftel, we know what the problem is.
For me to use fgetws() instead of fgets(), I need tmp to be a wchar_t* instead of a char*.
Type casting seems to not work.
I tried changing the declaration of tmp to
wchar_t tmp[1024] = { 0x0 };
but I realized that tmp is a parameter in strtok() used elsewhere in my code.
Here is what I tried in that function:
//tmp is passed as the first parameter in parse()
void parse(wchar_t *record, char *delim, char arr[][MAXFLDSIZE], int *fldcnt)
{
if (*record != NULL)
{
char*p = strtok((char*)record, delim);
int fld = 0;
while (p) {
strcpy(arr[fld], p);
fld++;
p = strtok('\0', delim);
}
*fldcnt = fld;
}
else
{
fprintf(stderr, "string is null");
}
}
But typecasting to char* in strtok doesn't work either.
Now I'm looking for a way to just convert the file from UTF-16 to UTF-8 so tmp can stay of type char*.
I found this, which looks like it can be useful, but in the example the UTF-16 input comes from the user; how can that input be taken from the file instead?
http://www.cplusplus.com/reference/locale/codecvt/out/
It sounds an awful lot like the original file is UTF-16 encoded. When you copy/paste its contents in your text editor, you save the result out as a new text file in the editor's default encoding (ASCII or UTF-8). Since a single character takes 2 bytes in a UTF-16-encoded file but only 1 byte in a UTF-8-encoded file, that results in the file size being roughly halved when you save it out.
UTF-16 is fine, but you'll need to use Unicode-aware functions (that is, not fgets) to work with it. If you don't want to deal with all that Unicode jazz right now, and you don't actually have any non-ASCII characters to deal with in the file, just do the manual conversion (either with your copy/paste or with a command-line utility) before running your program.
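A quick way to confirm the diagnosis is to look at the first two bytes of the file: UTF-16 text saved by Windows tools usually starts with a byte-order mark. A minimal check, as a sketch (the function name and return convention are mine):

#include <cstdio>

// Returns true if the file starts with a UTF-16 byte-order mark.
bool looks_like_utf16(const char *path)
{
    std::FILE *f = std::fopen(path, "rb"); // binary mode, no translation
    if (f == NULL)
        return false;
    unsigned char bom[2] = { 0, 0 };
    std::size_t got = std::fread(bom, 1, 2, f);
    std::fclose(f);
    if (got < 2)
        return false;
    // 0xFF 0xFE = UTF-16LE (the usual case on Windows), 0xFE 0xFF = UTF-16BE
    return (bom[0] == 0xFF && bom[1] == 0xFE) ||
           (bom[0] == 0xFE && bom[1] == 0xFF);
}

If the check fires, converting the file to UTF-8 once (or switching to the wide-character I/O functions) is the way forward, as described above.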

JFlex match nested comments as one token

In Mathematica a comment starts with (* and ends with *), and comments can be nested. My current approach of scanning a comment with JFlex contains the following code:
%xstate IN_COMMENT
"(*" { yypushstate(IN_COMMENT); return MathematicaElementTypes.COMMENT;}
<IN_COMMENT> {
"(*" {yypushstate(IN_COMMENT); return MathematicaElementTypes.COMMENT;}
[^\*\)\(]* {return MathematicaElementTypes.COMMENT;}
"*)" {yypopstate(); return MathematicaElementTypes.COMMENT;}
[\*\)\(] {return MathematicaElementTypes.COMMENT;}
. {return MathematicaElementTypes.BAD_CHARACTER;}
}
where the methods yypushstate and yypopstate are defined as
private final LinkedList<Integer> states = new LinkedList<>();
private void yypushstate(int state) {
states.addFirst(yystate());
yybegin(state);
}
private void yypopstate() {
final int state = states.removeFirst();
yybegin(state);
}
to give me the opportunity to track how many nested levels of comment I'm dealing with.
Unfortunately, this results in several COMMENT tokens for one comment, because I have to match nested comment starts and comment ends.
Question: Is it possible with JFlex to use its API with methods like yypushback or advance() etc. to return exactly one token over the whole comment range, even if comments are nested?
It seems the bounty was uncalled for as the solution is so simple that I just did not consider it. Let me explain. When scanning a simple nested comment
(* (*..*) *)
I have to track how many opening comment tokens I see, so that finally, on the last real closing comment, I can return the whole comment as one token.
What I did not realise was that JFlex does not need to be told to advance to the next portion when it matches something. After careful review I saw that this is explained here, but somewhat hidden in a section I didn't care for:
Because we do not yet return a value to the parser, our scanner proceeds immediately.
Therefore, a rule in the flex file like this
[^\(\*\)]+ { }
reads all characters except those that could possibly be a comment start/end and does nothing, but it advances to the next token.
This means that I can simply do the following. In the YYINITIAL state, I have a rule that matches the beginning of a comment but does nothing other than switch the lexer to the IN_COMMENT state. In particular, it does not return any token:
{CommentStart} { yypushstate(IN_COMMENT);}
Now we are in the IN_COMMENT state, and there I do the same: I eat up all characters but never return a token. When I hit a new opening comment, I carefully push it onto a stack but do nothing. Only when I hit the last closing comment do I know I'm leaving the IN_COMMENT state, and this is the only point where I finally return the token. Let's look at the rules:
<IN_COMMENT> {
{CommentStart} { yypushstate(IN_COMMENT);}
[^\(\*\)]+ { }
{CommentEnd} { yypopstate();
if(yystate() != IN_COMMENT)
return MathematicaElementTypes.COMMENT_CONTENT;
}
[\*\)\(] { }
. { return MathematicaElementTypes.BAD_CHARACTER; }
}
That's it. Now, no matter how deep your comment is nested, you will always get one single token that contains the entire comment.
Now, I'm embarrassed and I'm sorry for such a simple question.
Final note
If you are doing something like this, you have to remember that you only return a token when you hit the correct closing "character". Therefore, you definitely should make a rule that catches the end of file. In IDEA the default behavior is to just return the comment token, so you need another line (useful or not, I want to end gracefully):
<<EOF>> { yyclearstack(); yybegin(YYINITIAL);
return MathematicaElementTypes.COMMENT;}
When I first wrote this answer I had not even realized that one of the existing answers was from the questioner. On the other hand, I seldom find a bounty in the rather small SO lex community. Thus, it seemed worth learning enough Java and JFlex to write a sample:
/* JFlex scanner: to recognize nested comments in Mathematica style
*/
%%
%{
/* counter for open (nested) comments */
int open = 0;
%}
%state IN_COMMENT
%%
/* any state */
"(*" { if (!open++) yybegin(IN_COMMENT); }
"*)" {
if (open) {
if (!--open) {
yybegin(YYINITIAL);
return MathematicaElementTypes.COMMENT;
}
} else {
/* or return MathematicaElementTypes.BAD_CHARACTER;
/* or: throw new Error("'*)' without '(*'!"); */
}
}
<IN_COMMENT> {
. |
\n { }
}
<<EOF>> {
if (open) {
/* This is obsolete if the scanner is instanced new for
* each invocation.
*/
open = 0; yybegin(IN_COMMENT);
/* Notify about syntax error, e.g. */
throw new Error("Premature end of file! ("
+ open + " open comments not closed.)");
}
return MathematicaElementTypes.EOF; /* just a guess */
}
There might be typos and stupid errors although I tried to be careful and did my best.
As a "proof of concept" I leave my original implementation here which is done with flex and C/C++.
This scanner
handles comment (with printf())
echoes everything else.
My solution is essentially based on the fact that flex rules may end with break or return. Therefore, the token is simply not returned until the rule for the pattern is matched closing the outmost comment. Contents in comments is simply "recorded" in a buffer – in my case a std::string.
(AFAIK, string is even a built-in type in Java. Therefore, I decided to mix C and C++ which I usally wouldn't.)
My source scan-nested-comments.l:
%{
#include <cstdio>
#include <string>
// counter for open (nested) comments
static int open = 0;
// buffer for collected comments
static std::string comment;
%}
/* make never interactive (prevent usage of certain C functions) */
%option never-interactive
/* force lexer to process 8 bit ASCIIs (unsigned characters) */
%option 8bit
/* prevent usage of yywrap */
%option noyywrap
%s IN_COMMENT
%%
"(*" {
if (!open++) BEGIN(IN_COMMENT);
comment += "(*";
}
"*)" {
if (open) {
comment += "*)";
if (!--open) {
BEGIN(INITIAL);
printf("EMIT TOKEN COMMENT(lexem: '%s')\n", comment.c_str());
comment.clear();
}
} else {
printf("ERROR: '*)' without '(*'!\n");
}
}
<IN_COMMENT>{
. |
"\n" { comment += *yytext; }
}
<<EOF>> {
if (open) {
printf("ERROR: Premature end of file!\n"
"(%d open comments not closed.)\n", open);
return 1;
}
return 0;
}
%%
int main(int argc, char **argv)
{
if (argc > 1) {
yyin = fopen(argv[1], "r");
if (!yyin) {
printf("Cannot open file '%s'!\n", argv[1]);
return 1;
}
} else yyin = stdin;
return yylex();
}
I compiled it with flex and g++ in cygwin on Windows 10 (64 bit):
$ flex -oscan-nested-comments.cc scan-nested-comments.l ; g++ -o scan-nested-comments scan-nested-comments.cc
scan-nested-comments.cc:398:0: warning: "yywrap" redefined
^
scan-nested-comments.cc:74:0: note: this is the location of the previous definition
^
$
The warning appears due to %option noyywrap. I guess it does not mean any harm and can be ignored.
Now, I made some tests:
$ cat >good-text.txt <<EOF
> Test for nested comments.
> (* a comment *)
> (* a (* nested *) comment *)
> No comment.
> (* a
> (* nested
> (* multiline *)
> *)
> comment *)
> End of file.
> EOF
$ cat good-text.txt | ./scan-nested-comments
Test for nested comments.
EMIT TOKEN COMMENT(lexem: '(* a comment *)')
EMIT TOKEN COMMENT(lexem: '(* a (* nested *) comment *)')
No comment.
EMIT TOKEN COMMENT(lexem: '(* a
(* nested
(* multiline *)
*)
comment *)')
End of file.
$ cat >bad-text-1.txt <<EOF
> Test for wrong comment.
> (* a comment *)
> with wrong nesting *)
> End of file.
> EOF
$ cat bad-text-1.txt | ./scan-nested-comments
Test for wrong comment.
EMIT TOKEN COMMENT(lexem: '(* a comment *)')
with wrong nesting ERROR: '*)' without '(*'!
End of file.
$ cat >bad-text-2.txt <<EOF
> Test for wrong comment.
> (* a comment
> which is not closed.
> End of file.
> EOF
$ cat bad-text-2.txt | ./scan-nested-comments
Test for wrong comment.
ERROR: Premature end of file!
(1 open comments not closed.)
$
The Java traditional comment is defined in the sample grammar with
TraditionalComment = "/*" [^*] ~"*/" | "/*" "*"+ "/"
I suppose this expression should work for Mathematica comments too.

Canonical way to parse the command line into arguments in plain C Windows API

In a Windows program, what is the canonical way to parse the command line obtained from GetCommandLine into multiple arguments, similar to the argv array in Unix? It seems that CommandLineToArgvW does this for a Unicode command line, but I can't find a non-Unicode equivalent. Should I be using Unicode or not? If not, how do I parse the command line?
Here is an implementation of CommandLineToArgvA that delegates the work to CommandLineToArgvW, MultiByteToWideChar and WideCharToMultiByte.
LPSTR* CommandLineToArgvA(LPSTR lpCmdLine, INT *pNumArgs)
{
int retval;
retval = MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS, lpCmdLine, -1, NULL, 0);
if (retval == 0)
return NULL;
LPWSTR lpWideCharStr = (LPWSTR)malloc(retval * sizeof(WCHAR));
if (lpWideCharStr == NULL)
return NULL;
retval = MultiByteToWideChar(CP_ACP, MB_ERR_INVALID_CHARS, lpCmdLine, -1, lpWideCharStr, retval);
if (retval == 0)
{
free(lpWideCharStr);
return NULL;
}
int numArgs;
LPWSTR* args;
args = CommandLineToArgvW(lpWideCharStr, &numArgs);
free(lpWideCharStr);
if (args == NULL)
return NULL;
int storage = numArgs * sizeof(LPSTR);
for (int i = 0; i < numArgs; ++ i)
{
BOOL lpUsedDefaultChar = FALSE;
retval = WideCharToMultiByte(CP_ACP, 0, args[i], -1, NULL, 0, NULL, &lpUsedDefaultChar);
if (retval == 0)
{
LocalFree(args);
return NULL;
}
storage += retval;
}
LPSTR* result = (LPSTR*)LocalAlloc(LMEM_FIXED, storage);
if (result == NULL)
{
LocalFree(args);
return NULL;
}
int bufLen = storage - numArgs * sizeof(LPSTR);
LPSTR buffer = ((LPSTR)result) + numArgs * sizeof(LPSTR);
for (int i = 0; i < numArgs; ++ i)
{
assert(bufLen > 0);
BOOL lpUsedDefaultChar = FALSE;
retval = WideCharToMultiByte(CP_ACP, 0, args[i], -1, buffer, bufLen, NULL, &lpUsedDefaultChar);
if (retval == 0)
{
LocalFree(result);
LocalFree(args);
return NULL;
}
result[i] = buffer;
buffer += retval;
bufLen -= retval;
}
LocalFree(args);
*pNumArgs = numArgs;
return result;
}
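Usage mirrors the wide version. Since the result is allocated as one block, a single LocalFree releases both the pointer array and the strings (a sketch; assumes <windows.h> and <stdio.h> are included and error handling is kept minimal):

int argc = 0;
LPSTR *argv = CommandLineToArgvA(GetCommandLineA(), &argc);
if (argv != NULL)
{
    for (int i = 0; i < argc; ++i)
        printf("argv[%d] = %s\n", i, argv[i]);
    LocalFree(argv); // one call frees the array and the strings
}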
Apparently you can use __argv outside main() to access the pre-parsed argument vector...
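For example (MSVC-specific: __argc and __argv are CRT globals declared in <stdlib.h>; in a Unicode build __argv may be NULL and __wargv is filled in instead, and the printf here is only for illustration):

#include <windows.h>
#include <stdlib.h> // declares __argc and __argv (MSVC CRT)
#include <stdio.h>

int WINAPI WinMain(HINSTANCE, HINSTANCE, LPSTR, int)
{
    // The CRT has already split the command line before WinMain runs.
    if (__argv != NULL)
    {
        for (int i = 0; i < __argc; ++i)
            printf("argv[%d] = %s\n", i, __argv[i]);
    }
    return 0;
}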
I followed the source for parse_cmdline (see "argv_parsing.cpp" in the latest SDK) and modified it to match the paradigm and operation of CommandLineToArgvW, and developed the following. Note: instead of using LocalAlloc, per Microsoft recommendations (see https://msdn.microsoft.com/en-us/library/windows/desktop/aa366723(v=vs.85).aspx) I've substituted HeapAlloc. Additionally, there is one change in the SAL notation: I deviate slightly by stating _In_opt_ for lpCmdLine, as CommandLineToArgvW does allow this to be NULL, in which case it returns an argument list containing just the program name.
A final caveat: parse_cmdline will parse the command line slightly differently from CommandLineToArgvW in one respect only: two double-quote characters in a row while the state is 'in quote' mode are interpreted as an escaped double-quote character. Both functions consume the first one and output the second one. The difference is that for CommandLineToArgvW there is a transition out of 'in quote' mode, while parse_cmdline remains in 'in quote' mode. This is properly reflected in the function below.
You would use the below function as follows:
int argc = 0;
LPSTR *argv = CommandLineToArgvA(GetCommandLineA(), &argc);
HeapFree(GetProcessHeap(), NULL, argv);
LPSTR* CommandLineToArgvA(_In_opt_ LPCSTR lpCmdLine, _Out_ int *pNumArgs)
{
if (!pNumArgs)
{
SetLastError(ERROR_INVALID_PARAMETER);
return NULL;
}
*pNumArgs = 0;
/*follow CommandLineToArgvW and if lpCmdLine is NULL return the path to the executable.
Use 'programname' so that we don't have to allocate MAX_PATH * sizeof(CHAR) for argv
every time. Since this is ANSI the return can't be greater than MAX_PATH (260
characters)*/
CHAR programname[MAX_PATH] = {};
/*pnlength = the length of the string that is copied to the buffer, in characters, not
including the terminating null character*/
DWORD pnlength = GetModuleFileNameA(NULL, programname, MAX_PATH);
if (pnlength == 0) //error getting program name
{
//GetModuleFileNameA will SetLastError
return NULL;
}
if (lpCmdLine == NULL || *lpCmdLine == '\0')
{
/*In keeping with CommandLineToArgvW the caller should make a single call to HeapFree
to release the memory of argv. Allocate a single block of memory with space for two
pointers (representing argv[0] and argv[1]). argv[0] will contain a pointer to argv+2
where the actual program name will be stored. argv[1] will be nullptr per the C++
specifications for argv. Hence space required is the size of a LPSTR (char*) multiplied
by 2 [pointers] + the length of the program name (+1 for null terminating character)
multiplied by the sizeof CHAR. HeapAlloc is called with HEAP_GENERATE_EXCEPTIONS flag,
so if there is a failure on allocating memory an exception will be generated.*/
LPSTR *argv = static_cast<LPSTR*>(HeapAlloc(GetProcessHeap(),
HEAP_ZERO_MEMORY | HEAP_GENERATE_EXCEPTIONS,
(sizeof(LPSTR) * 2) + ((pnlength + 1) * sizeof(CHAR))));
memcpy(argv + 2, programname, pnlength+1); //add 1 for the terminating null character
argv[0] = reinterpret_cast<LPSTR>(argv + 2);
argv[1] = nullptr;
*pNumArgs = 1;
return argv;
}
/*We need to determine the number of arguments and the number of characters so that the
proper amount of memory can be allocated for argv. Our argument count starts at 1 as the
first "argument" is the program name even if there are no other arguments per specs.*/
int argc = 1;
int numchars = 0;
LPCSTR templpcl = lpCmdLine;
bool in_quotes = false; //'in quotes' mode is off (false) or on (true)
/*first scan the program name and copy it. The handling is much simpler than for other
arguments. Basically, whatever lies between the leading double-quote and next one, or a
terminal null character is simply accepted. Fancier handling is not required because the
program name must be a legal NTFS/HPFS file name. Note that the double-quote characters are
not copied.*/
do {
if (*templpcl == '"')
{
//don't add " to character count
in_quotes = !in_quotes;
templpcl++; //move to next character
continue;
}
++numchars; //count character
templpcl++; //move to next character
if (_ismbblead(*templpcl) != 0) //handle MBCS
{
++numchars;
templpcl++; //skip over trail byte
}
} while (*templpcl != '\0' && (in_quotes || (*templpcl != ' ' && *templpcl != '\t')));
//parsed first argument
if (*templpcl == '\0')
{
/*no more arguments, rewind and the next for statement will handle*/
templpcl--;
}
//loop through the remaining arguments
int slashcount = 0; //count of backslashes
bool countorcopychar = true; //count the character or not
for (;;)
{
if (*templpcl)
{
//next argument begins with next non-whitespace character
while (*templpcl == ' ' || *templpcl == '\t')
++templpcl;
}
if (*templpcl == '\0')
break; //end of arguments
++argc; //next argument - increment argument count
//loop through this argument
for (;;)
{
/*Rules:
2N backslashes + " ==> N backslashes and begin/end quote
2N + 1 backslashes + " ==> N backslashes + literal "
N backslashes ==> N backslashes*/
slashcount = 0;
countorcopychar = true;
while (*templpcl == '\\')
{
//count the number of backslashes for use below
++templpcl;
++slashcount;
}
if (*templpcl == '"')
{
//if 2N backslashes before, start/end quote, otherwise count.
if (slashcount % 2 == 0) //even number of backslashes
{
if (in_quotes && *(templpcl +1) == '"')
{
in_quotes = !in_quotes; //NB: parse_cmdline omits this line
templpcl++; //double quote inside quoted string
}
else
{
//skip first quote character and count second
countorcopychar = false;
in_quotes = !in_quotes;
}
}
slashcount /= 2;
}
//count slashes
while (slashcount--)
{
++numchars;
}
if (*templpcl == '\0' || (!in_quotes && (*templpcl == ' ' || *templpcl == '\t')))
{
//at the end of the argument - break
break;
}
if (countorcopychar)
{
if (_ismbblead(*templpcl) != 0) //should copy another character for MBCS
{
++templpcl; //skip over trail byte
++numchars;
}
++numchars;
}
++templpcl;
}
//add a count for the null-terminating character
++numchars;
}
/*allocate memory for argv. Allocate a single block of memory with space for argc number of
pointers. argv[0] will contain a pointer to argv+argc where the actual program name will be
stored. argv[argc] will be nullptr per the C++ specifications. Hence space required is the
size of a LPSTR (char*) multiplied by argc + 1 pointers + the number of characters counted
above multiplied by the sizeof CHAR. HeapAlloc is called with HEAP_GENERATE_EXCEPTIONS
flag, so if there is a failure on allocating memory an exception will be generated.*/
LPSTR *argv = static_cast<LPSTR*>(HeapAlloc(GetProcessHeap(),
HEAP_ZERO_MEMORY | HEAP_GENERATE_EXCEPTIONS,
(sizeof(LPSTR) * (argc+1)) + (numchars * sizeof(CHAR))));
//now loop through the commandline again and split out arguments
in_quotes = false;
templpcl = lpCmdLine;
argv[0] = reinterpret_cast<LPSTR>(argv + argc+1);
LPSTR tempargv = reinterpret_cast<LPSTR>(argv + argc+1);
do {
if (*templpcl == '"')
{
in_quotes = !in_quotes;
templpcl++; //move to next character
continue;
}
*tempargv++ = *templpcl;
templpcl++; //move to next character
if (_ismbblead(*templpcl) != 0) //should copy another character for MBCS
{
*tempargv++ = *templpcl; //copy second byte
templpcl++; //skip over trail byte
}
} while (*templpcl != '\0' && (in_quotes || (*templpcl != ' ' && *templpcl != '\t')));
//parsed first argument
if (*templpcl == '\0')
{
//no more arguments, rewind and the next for statement will handle
templpcl--;
}
else
{
//end of program name - add null terminator
*tempargv = '\0';
}
int currentarg = 1;
argv[currentarg] = ++tempargv;
//loop through the remaining arguments
slashcount = 0; //count of backslashes
countorcopychar = true; //count the character or not
for (;;)
{
if (*templpcl)
{
//next argument begins with next non-whitespace character
while (*templpcl == ' ' || *templpcl == '\t')
++templpcl;
}
if (*templpcl == '\0')
break; //end of arguments
argv[currentarg] = ++tempargv; //copy address of this argument string
//next argument - loop through it's characters
for (;;)
{
/*Rules:
2N backslashes + " ==> N backslashes and begin/end quote
2N + 1 backslashes + " ==> N backslashes + literal "
N backslashes ==> N backslashes*/
slashcount = 0;
countorcopychar = true;
while (*templpcl == '\\')
{
//count the number of backslashes for use below
++templpcl;
++slashcount;
}
if (*templpcl == '"')
{
//if 2N backslashes before, start/end quote, otherwise copy literally.
if (slashcount % 2 == 0) //even number of backslashes
{
if (in_quotes && *(templpcl+1) == '"')
{
in_quotes = !in_quotes; //NB: parse_cmdline omits this line
templpcl++; //double quote inside quoted string
}
else
{
//skip first quote character and count second
countorcopychar = false;
in_quotes = !in_quotes;
}
}
slashcount /= 2;
}
//copy slashes
while (slashcount--)
{
*tempargv++ = '\\';
}
if (*templpcl == '\0' || (!in_quotes && (*templpcl == ' ' || *templpcl == '\t')))
{
//at the end of the argument - break
break;
}
if (countorcopychar)
{
*tempargv++ = *templpcl;
if (_ismbblead(*templpcl) != 0) //should copy another character for MBCS
{
++templpcl; //skip over trail byte
*tempargv++ = *templpcl;
}
}
++templpcl;
}
//null-terminate the argument
*tempargv = '\0';
++currentarg;
}
argv[argc] = nullptr;
*pNumArgs = argc;
return argv;
}
CommandLineToArgvW() is in shell32.dll. I'd guess that the Shell developers created the function for their own use, and it was made public either because someone decided that 3rd-party devs would find it useful or because some court action made them do it.
Since the Shell developers only ever needed a Unicode version, that's all they ever wrote. It would be fairly simple to write an ANSI wrapper for the function that converts the ANSI to Unicode, calls the function and converts the Unicode results to ANSI (and if Shell32.dll ever provided an ANSI variant of this API, that's probably exactly what it would do).
None of these solved the problem perfectly for the case where you don't want to parse Unicode, so my solution is modified from the WINE project, which contains the source code of shell32.dll's CommandLineToArgvW. I modified it as below and it works perfectly for me:
/*************************************************************************
* CommandLineToArgvA [SHELL32.@]
*
* MODIFIED FROM https://www.winehq.org/ project
* We must interpret the quotes in the command line to rebuild the argv
* array correctly:
* - arguments are separated by spaces or tabs
* - quotes serve as optional argument delimiters
* '"a b"' -> 'a b'
* - escaped quotes must be converted back to '"'
* '\"' -> '"'
* - consecutive backslashes preceding a quote see their number halved with
* the remainder escaping the quote:
* 2n backslashes + quote -> n backslashes + quote as an argument delimiter
* 2n+1 backslashes + quote -> n backslashes + literal quote
* - backslashes that are not followed by a quote are copied literally:
* 'a\b' -> 'a\b'
* 'a\\b' -> 'a\\b'
* - in quoted strings, consecutive quotes see their number divided by three
* with the remainder modulo 3 deciding whether to close the string or not.
* Note that the opening quote must be counted in the consecutive quotes,
* that's the (1+) below:
* (1+) 3n quotes -> n quotes
* (1+) 3n+1 quotes -> n quotes plus closes the quoted string
* (1+) 3n+2 quotes -> n+1 quotes plus closes the quoted string
* - in unquoted strings, the first quote opens the quoted string and the
* remaining consecutive quotes follow the above rule.
*/
LPSTR* WINAPI CommandLineToArgvA(LPSTR lpCmdline, int* numargs)
{
DWORD argc;
LPSTR *argv;
LPSTR s;
LPSTR d;
LPSTR cmdline;
int qcount,bcount;
if(!numargs || *lpCmdline==0)
{
SetLastError(ERROR_INVALID_PARAMETER);
return NULL;
}
/* --- First count the arguments */
argc=1;
s=lpCmdline;
/* The first argument, the executable path, follows special rules */
if (*s=='"')
{
/* The executable path ends at the next quote, no matter what */
s++;
while (*s)
if (*s++=='"')
break;
}
else
{
/* The executable path ends at the next space, no matter what */
while (*s && *s!=' ' && *s!='\t')
s++;
}
/* skip to the first argument, if any */
while (*s==' ' || *s=='\t')
s++;
if (*s)
argc++;
/* Analyze the remaining arguments */
qcount=bcount=0;
while (*s)
{
if ((*s==' ' || *s=='\t') && qcount==0)
{
/* skip to the next argument and count it if any */
while (*s==' ' || *s=='\t')
s++;
if (*s)
argc++;
bcount=0;
}
else if (*s=='\\')
{
/* '\', count them */
bcount++;
s++;
}
else if (*s=='"')
{
/* '"' */
if ((bcount & 1)==0)
qcount++; /* unescaped '"' */
s++;
bcount=0;
/* consecutive quotes, see comment in copying code below */
while (*s=='"')
{
qcount++;
s++;
}
qcount=qcount % 3;
if (qcount==2)
qcount=0;
}
else
{
/* a regular character */
bcount=0;
s++;
}
}
/* Allocate in a single lump, the string array, and the strings that go
* with it. This way the caller can make a single LocalFree() call to free
* both, as per MSDN.
*/
argv=LocalAlloc(LMEM_FIXED, (argc+1)*sizeof(LPSTR)+(strlen(lpCmdline)+1)*sizeof(char));
if (!argv)
return NULL;
cmdline=(LPSTR)(argv+argc+1);
strcpy(cmdline, lpCmdline);
/* --- Then split and copy the arguments */
argv[0]=d=cmdline;
argc=1;
/* The first argument, the executable path, follows special rules */
if (*d=='"')
{
/* The executable path ends at the next quote, no matter what */
s=d+1;
while (*s)
{
if (*s=='"')
{
s++;
break;
}
*d++=*s++;
}
}
else
{
/* The executable path ends at the next space, no matter what */
while (*d && *d!=' ' && *d!='\t')
d++;
s=d;
if (*s)
s++;
}
/* close the executable path */
*d++=0;
/* skip to the first argument and initialize it if any */
while (*s==' ' || *s=='\t')
s++;
if (!*s)
{
/* There are no parameters so we are all done */
argv[argc]=NULL;
*numargs=argc;
return argv;
}
/* Split and copy the remaining arguments */
argv[argc++]=d;
qcount=bcount=0;
while (*s)
{
if ((*s==' ' || *s=='\t') && qcount==0)
{
/* close the argument */
*d++=0;
bcount=0;
/* skip to the next one and initialize it if any */
do {
s++;
} while (*s==' ' || *s=='\t');
if (*s)
argv[argc++]=d;
}
else if (*s=='\\')
{
*d++=*s++;
bcount++;
}
else if (*s=='"')
{
if ((bcount & 1)==0)
{
/* Preceded by an even number of '\', this is half that
* number of '\', plus a quote which we erase.
*/
d-=bcount/2;
qcount++;
}
else
{
/* Preceded by an odd number of '\', this is half that
* number of '\' followed by a '"'
*/
d=d-bcount/2-1;
*d++='"';
}
s++;
bcount=0;
/* Now count the number of consecutive quotes. Note that qcount
* already takes into account the opening quote if any, as well as
* the quote that lead us here.
*/
while (*s=='"')
{
if (++qcount==3)
{
*d++='"';
qcount=0;
}
s++;
}
if (qcount==2)
qcount=0;
}
else
{
/* a regular character */
*d++=*s++;
bcount=0;
}
}
*d='\0';
argv[argc]=NULL;
*numargs=argc;
return argv;
}
Be careful when parsing an empty string "": it returns NULL instead of the executable path, which is different behavior from the standard CommandLineToArgvW. The recommended usage is below:
int argc;
LPSTR * argv = CommandLineToArgvA(GetCommandLineA(), &argc);
// after argv has been consumed
LocalFree(argv);
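So if there is any chance the command line is empty, guard for a NULL return. A sketch (how you fall back is up to you, e.g. via GetModuleFileNameA for the program name):

int argc = 0;
LPSTR *argv = CommandLineToArgvA(GetCommandLineA(), &argc);
if (argv == NULL)
{
    // Empty command line or allocation failure: unlike CommandLineToArgvW,
    // this version does not hand back the executable path on its own.
    return 1;
}
// ... use argv[0] .. argv[argc-1] ...
LocalFree(argv);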
The following is about the simplest way I can think of to obtain an old-fashioned argc/argv pair at the top of WinMain. Assuming that the command-line really was ANSI text, you don't actually need any conversions fancier than this.
int WINAPI WinMain(HINSTANCE hInstance, HINSTANCE hPrevInstance, LPSTR lpCmdLine, int nShowCmd) {
int argc;
LPWSTR *szArglist = CommandLineToArgvW(GetCommandLineW(), &argc);
char **argv = new char*[argc];
for (int i=0; i<argc; i++) {
int lgth = wcslen(szArglist[i]);
argv[i] = new char[lgth+1];
for (int j=0; j<=lgth; j++)
argv[i][j] = char(szArglist[i][j]);
}
LocalFree(szArglist);
// ... the rest of the program can use argc/argv as usual ...
return 0;
}
