Is there a way to run multiple C programs from one single C source file using command line arguments?
E.g. suppose the executable file is named testing.out; if at runtime one wants
the first test, then one types "testing.out 1", if the second, "testing.out 2",
etc.
Yes, this can be done easily using the command-line arguments functionality provided by C; see the sketch after these links.
You can read more about it here: https://www.tutorialspoint.com/cprogramming/c_command_line_arguments.htm
And lots of other tutorials:
https://www.geeksforgeeks.org/command-line-arguments-in-c-cpp/
https://www.cprogramming.com/tutorial/c/lesson14.html
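For a quick illustration, here is a minimal sketch of such a dispatcher (the test names and bodies are made up for this example):

#include <stdio.h>
#include <stdlib.h>

/* Each test lives in its own function; main() dispatches on argv[1]. */
static void test_one(void) { printf("running test 1\n"); }
static void test_two(void) { printf("running test 2\n"); }

int main(int argc, char *argv[])
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <test-number>\n", argv[0]);
        return 1;
    }
    switch (atoi(argv[1])) { /* argv[1] is the first argument after the program name */
    case 1: test_one(); break;
    case 2: test_two(); break;
    default:
        fprintf(stderr, "unknown test: %s\n", argv[1]);
        return 1;
    }
    return 0;
}

Compile it to testing.out, and then testing.out 1 runs the first test, testing.out 2 the second, and so on.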
I have a set of text files and a set of GoLang files. The GoLang files contain directives such as the following:
//go:embed hello.txt
var s string
I want to write a bash script which takes the above code and substitutes the following in its place:
var s string = "<contents of hello.txt>"
Specifically, I want the bash script to go through all GoLang source files and replace each go:embed/string declaration pair with a string defined to be the contents of the file specified in the embed directive.
I'm wondering if there is an existing program which can be configured to do the above. Otherwise, I'm planning on writing the algorithm myself.
Further explanation:
I am trying to replicate GoLang's embed directive (https://tip.golang.org/pkg/embed/).
We are not yet on GoLang 1.16, so we cannot use this functionality, but we are replicating it as closely as possible so that moving over to the standard implementation is as painless as possible.
Below is an attempt at solving your problem:
for i in file1 file2; do
awk '/^\/\/go:embed / {f=$2; next} /^var/ && f {printf "%s = \"", $0; system("cat " f); print "\""; f=0; next} 1' < "$i" > "$i.new"
done
The awk script prints all normal lines; only when it encounters the embed directive is that line skipped (and the file name remembered in the variable f). A subsequent line starting with var is then extended with the content of the file with the remembered name (using a system call to cat).
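For example, assuming hello.txt contains Hello, World with no trailing newline, an input file with

//go:embed hello.txt
var s string

comes out as

var s string = "Hello, World"

(If the embedded file ends with a newline, cat passes that newline through and the closing quote lands on the next line.)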
Beware: there are no error checks at all, and no attempt to escape quotes or anything else. So for practical use, unless the file contents you are about to embed are known to be good-natured, you probably have to take a more sophisticated approach.
Although they do give the same results, I wonder if there is some difference between the different ways of invoking sort on a file (naming the file as an argument, piping into sort, or redirecting input), and which is the most appropriate way to sort something contained in a file.
Another thing which intrigues me is the use of delimiters. I noticed that the sort filter only works if you separate the strings with a newline; is there any way to do this without having to write the new strings on separate lines?
The sort(1) command reads lines of text, analyzes and sorts them, and writes out the result. The command is intended to read lines, and lines in unix/linux are terminated by a new line.
The command takes its first non-option argument as the file to read; if none is specified, it reads standard input. So:
sort file_name
is a command line with such an argument. The other two examples, "... | sort" and "sort < ...", do not specify the file to read directly to sort(1), but use its standard input. The effect, as far as sort(1) is concerned, is the same.
ways to do this without having to write the new strings in a separate line
Ultimately, no. But if you want, you can feed sort through another filter (a program) which reads the non-linefeed-separated file and creates lines to pass to sort. If such a program exists and is named "myparse", you can do:
myparse non-linefeed-separated-file | sort
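For instance, if your strings are comma-separated, the standard tr(1) filter can play the role of "myparse" (the file name here is made up):

tr ',' '\n' < comma-separated-file | sort

Adjust the delimiter in the first argument to whatever separator you actually use.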
The solution using cat involves creating a second process unnecessarily. This could be a performance issue if you perform many such operations in a loop.
When doing input redirection from your file, the shell sets up the association of the file with standard input. If the file does not exist, the shell complains about the file being missing.
When passing the file name as an explicit argument, the sort process has to take care of opening the file and reporting an error if there is an accessibility problem with it.
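To summarise the three forms, which are equivalent as far as the sorted output goes but differ in who opens the file and in process count:

sort file_name         # sort opens the file itself and reports any access error
sort < file_name       # the shell opens the file and connects it to sort's stdin
cat file_name | sort   # cat opens the file; sort reads a pipe (one extra process)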
When installing my package, the user should at some point type
./wand-new "`cat wandcfg_install.spell`"
Or whatever the configuration file is called. If I put this line inside \code ... \endcode, doxygen thinks it is C++ or something; in any case, the word "new" is treated as a keyword. How do I avoid this in a semantically correct way?
I think \verbatim is disqualified because it actually is code, right?
(I guess the answer is to poke Dimitri to add support for more languages inside a code block, like the LaTeX listings package, or at least to add a disableparse option to code in the meantime.)
Doxygen, as of July 2017, does not officially support documenting the Shell/Bash scripting language, not even as an extension. There is an unofficial filter called bash-doxygen. It is simple to set up: only one file to download and three flag adjustments:
Edit the Doxyfile to map shell files to the C parser: EXTENSION_MAPPING = sh=C
Set your shell script file name pattern as Doxygen input, e.g.: FILE_PATTERNS = *.sh
Mention doxygen-bash.sed in either the INPUT_FILTER or the FILTER_PATTERNS directive of your Doxyfile. If doxygen-bash.sed is in your $PATH, then you can just invoke it as is; otherwise use sed -n -f /path/to/doxygen-bash.sed --. (A consolidated excerpt is sketched below.)
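Putting the three adjustments together, a Doxyfile excerpt might look like this (the path to doxygen-bash.sed is a placeholder):

EXTENSION_MAPPING = sh=C
FILE_PATTERNS     = *.sh
INPUT_FILTER      = "sed -n -f /path/to/doxygen-bash.sed --"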
Please note that since it uses C-language parsing, some limitations apply, as stated in the main README page of bash-doxygen. One of them, at least in my tests: \code{.sh} recognises shell syntax, but all lines in the code block begin with an asterisk (*), apparently as a side effect of requiring that all Doxygen doc sections have lines starting with double hashes (##).
I'm writing a simple command line gem.
The library that does the actual work was developed with rspec and so far that works.
I'm trying to test the command line portion with Aruba/Cucumber, but I've come across some strange behaviour.
Just to test this, I've got the binary file to puts ARGV, and I've got test files in tmp/aruba.
When I run bundle exec gem_name tmp/aruba/*.* I am presented with the list of shell expanded file names.
Now my features file has:
Given files to work on # I set up files in tmp/aruba in this step
When I run `gem_name *.*` # standard step
Then the output should contain "Wibble"
The last step is obviously going to fail, but it shows me a diff between what it expects and the actual output. Rather than seeing a list of shell-expanded filenames, all I get is "*.*".
So I'm left in the position of having an app that actually works as expected, but I can't get the tests to pass. I could take the "*.*" and generate the list of files from there, but then I'm writing extra production code just to get the app to work under test, which I don't think is the correct way to go about it. And all because shell expansion isn't happening.
If you look at my profile, you'll see that Ruby isn't my main bag, so feel free to point me at any resources that I may have missed. But is this just me missing something, or is this expected behaviour that somebody knows how to work around?
After a little digging in the Aruba source I figured out that the When I run step ends up in a code block like this:
def run!(&block)
  @process = ChildProcess.build(*shellwords(@cmd))
  ...
  begin
    @process.start
    ...
Further digging into ChildProcess ends up here:
def launch_process
  ...
  begin
    exec(*@args)
    ...
And therein lies the problem. exec does not do shell expansion when the argument list is split into multiple array elements:
If exec is given a single argument, that argument is
taken as a line that is subject to shell expansion before being
executed. If multiple arguments are given, the second and
subsequent arguments are passed as parameters to command with no
shell expansion.
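You can see the difference from the shell; each call goes in its own ruby -e invocation, since exec replaces the current process:

ruby -e 'exec("echo *")'    # single string: run via the shell, * is glob-expanded
ruby -e 'exec("echo", "*")' # multiple arguments: no shell, prints a literal *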
However playing with shellwords a bit we find:
Shellwords.shellwords('gem_name *.*')
=> ["gem_name", "*.*"] # No good
Shellwords.shellwords('"gem_name *.*"')
=> ["gem_name *.*"] # Aha!
Therefore the solution might be as simple as:
When I run `"gem_name *.*"`
If that doesn't work, then you are pretty much out of luck. I would suggest you expand the file names manually, since you're not really testing shell expansion here (we know that works); you are testing multiple arguments.
Therefore you should instead do:
When I run `gem_name your_file1 your_file2 your_file3`
I'm dealing with a pipeline of predominantly shell and Perl files, all of which pass parameters (paths) to the next. I decided it would be better to use a single file to store all the paths and just consult that file from every script. The issue is that I am using awk to grab the paths at the beginning of each script, and it's turning out to be a lot of repetition.
My question is: is there a way to store key-value pairs in a file so that the shell can natively take a key and return its value? It needs to be an external file, because the pipeline uses many scripts, and a map inside one specific script would result in parameters being passed everywhere. Is there some little quirk I do not know of that performs a map lookup on an external file?
You can make a file of env var assignments and source that file as needed, i.e.:
$ cat myEnvFile
path1=/x/y/z
path2=/w/xy
path3=/r/s/t
otherOpt1="-x"
Inside your script you can source it with either . myEnvFile or the more verbose version of the same feature, source myEnvFile (assuming the bash shell), i.e.:
$ cat myScript
#!/bin/bash
. /path/to/myEnvFile
# main logic below
....
# references to defined var
if [[ -d "$path2" ]] ; then
    cd "$path2"
else
    echo "no path2=$path2 found, can't continue" 1>&2
    exit 1
fi
Based on how you've described your problem, this should work well and provide a one-stop shop for all of your variable settings.
IHTH
In bash, there's mapfile, but that reads the lines of a file into a numerically-indexed array. To read a whitespace-separated file into an associative array, I would:
declare -A map
while read -r key value; do
    map[$key]=$value
done < filename
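A lookup is then just an ordinary associative array reference, e.g. (assuming the file defines a path2 key):

echo "${map[path2]}"    # prints the value stored under the key "path2"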
However, this sounds like an XY problem. Can you give us an example (in code) of what you're actually doing? When I see long pipelines of grep|awk|sed, there's usually a way to simplify. For example, is passing data by parameters better than passing via stdout|stdin?
In other words, I'm questioning your statement "I decided it would be better..."