Code coverage tools for validating shell scripts

Is there any way to validate the coverage of shell scripts? I have a project with a lot of shell scripting, and I need to ensure that coverage analysis can be performed on the shell scripts. Is there any tool available?

I seriously doubt that any meaningful static code analysis can be performed on shell scripts, mainly because shell scripts are all about calling external programs and acting on what those programs return - and there are myriads of external programs and external environment states. It's similar to the problem of statically analyzing code that relies heavily on eval-like mechanisms, and shell scripting is essentially eval-style programming.
However, there are some general pointers that could prove useful for "proper" validation, code coverage and documentation of shell scripts, the way major languages have them:
You can always run a script with the -x (aka xtrace) option - it will print a trace like the following to stderr:
+ log_end_msg 0
+ [ -z 0 ]
+ retval=0
+ log_end_msg_pre 0
+ :
+ log_use_fancy_output
+ TPUT=/usr/bin/tput
+ EXPR=/usr/bin/expr
+ [ -t 1 ]
+ [ xxterm != x ]
+ [ xxterm != xdumb ]
Bash makes it possible to redirect this stream to an external FD (using the BASH_XTRACEFD variable), which is much easier to parse in practice.
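For example, a minimal sketch (bash 4.1 or later is assumed for BASH_XTRACEFD; the descriptor number and the trace.log file name are arbitrary) that sends the trace to its own file and tags every line with the source location via PS4:
#!/bin/bash
# Send the xtrace stream to a dedicated file descriptor instead of stderr.
exec 7>trace.log
BASH_XTRACEFD=7
# Tag every trace line with the source file and line number to ease parsing.
PS4='+ ${BASH_SOURCE}:${LINENO}: '
set -x
# ... the script under test continues here ...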
It's not trivial, but it's possible to write a program that finds the relevant pieces of code being executed from the xtrace output and builds a fancy "code coverage" report: what was called and how many times, and which pieces of code weren't run at all and thus lack test coverage.
In fact, there's a wonderful tool named shcov already written that uses this process - albeit it's somewhat simplistic and doesn't handle all possible cases very well (especially long and complex lines).
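As a rough illustration of that parsing idea (this is not how shcov itself works), a trace written with the PS4 shown above can be turned into per-line hit counts with a couple of standard tools; trace.log is the file from the earlier sketch:
# Count how many times each source line appears in the xtrace output.
# Trace lines are assumed to look like: + ./script.sh:42: some command
awk -F'[ :]+' '/^[+]+ / { hits[$2 ":" $3]++ }
END { for (loc in hits) print hits[loc], loc }' trace.log | sort -rn
Script lines that never show up in this histogram were never executed and therefore lack test coverage.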
Last, but not least - there's the minimalistic shelldoc project (akin to javadoc) that helps generate documentation based on comments in shell scripts. Yep, that's a shameless plug :)

I don't think there are COTS tools available for test coverage of scripting languages, regardless of which one - and there are lots of them.
Another poster suggested an ad hoc approach that might work with some tools: get them to dump some trace data, and try to match that up with the actual code to get your coverage. He says it sort of works... and that's the problem with most heuristics.
Another approach one might take to construct a test coverage tool for your favorite scripting language is covered by my technical paper on a general approach for building test coverage tools using program transformations. My company builds a line of such tools for more popular languages this way.

You might try looking at shcov, a GPL v2-licensed Python-based tool. It seems to perhaps have been abandoned by the author, but does produce HTML-based graphical reports and seemed (in my limited testing) to be reasonably accurate in terms of coverage analysis.

I have written a tool that can do coverage for shell scripts. Its name is shAge, meaning shell script coverage, and the project is hosted here.
To measure the coverage of any shell script, do the following:
First, download shAge.jar.
Make sure you have JDK 1.6 update 65 or later.
Run the program like this:
java -jar shAge.jar hello.sh
The content of the shell script will be executed, and at the end a report will be generated.
The report is named after the file, in .html format: hello.sh.html
Here is the sample output

Related

Running an external program in TCL

After developing an elaborate TCL code to do smoothing based on Gabriel Taubin's smoothing without shape shrinkage, the code runs extremely slow. This is likely due to the size of the unstructured grid I am smoothing. I have to use TCL because the grid generator I am using is Pointwise, and Pointwise's "macro language" is TCL based. I'm still a bit new to this, but is there a way to run external code from TCL where TCL sends the data to the software, the software runs the smoothing operation, and the output is sent back to TCL to update the internal data inside the Pointwise grid generation tool? I will be writing the smoothing tool in another language which is significantly faster.
There are a number of options to deal with code that "runs extremely slow". I would start with determining how fast it must run. Are we talking milliseconds, seconds, minutes, hours or days? Next it is necessary to determine which part is slow. The time command is useful here.
But assuming you have decided that more performance is necessary and you have some metrics for your current program so you will know if you are improving, here are some things to try:
Try to improve the existing code. If you are using the expr command, make sure your expressions are given to the command as a single argument enclosed in braces. Beginners sometimes forget this and the improvement can be substantial.
Use the critcl package to code parts of the program in "C". Critcl allows you to put "C" code directly into your Tcl program and have that code pulled out, compiled and loaded into your program.
Write a traditional "C" based Tcl extension. Tcl is very extensible and has a clean API for building extensions. There is sample code for extensions and source to many extensions is readily available.
Write a program to do the time-consuming part of the job, execute it as a separate process, and get the output back into your Tcl script. This is where the exec command comes in useful. Presumably you will have to write data out to somewhere the program can get it and read the output of the program back into your Tcl script. If you want to get fancy you can do two-way communication across a localhost TCP port. The setup in Tcl is quite simple. The "C" code in a program to do it is a bit more tedious, but many examples exist out on the Internet.
Which option to choose depends very much on how much improvement is required and the amount of code that must be improved. You haven't given us much idea what those things are in your case, so all I can offer is rather vague general solutions.
For a loadable module, you can write a Tcl extension. An example is here:
File Last Modified Time with Milliseconds Precision
Alternatively, just write your program to take input from a file. Have Tcl write the input data to the file, run the program, then collect the output from the external program.

How do I wrap a compiled command line tool for use in Ruby?

I have compiled and tested an open-source command line SIP client for my machine, which we can assume has the same architecture as all other machines in our shop. By this I mean that I have successfully passed the compiled binary to others in the shop and they were able to use it.
The tool has a fairly esoteric invocation: the output of a simple bash command list is piped to it, as follows:
(sleep 3; echo "# 1"; sleep 3; echo h) | pjsua sip:somephonenumber#ip --flag_1 val --flag_2 val
Note that the leading bash script is an essential part of the functioning of the program and that the line itself seems to be the best practice for use.
In the framing of my problem I am considering the following:
I don't think I can expect very many others in the shop to compile the binary for themselves.
Having a common system architecture in the shop, it is reasonable to think that a repo can house the most up-to-date version.
Having a way to invoke the tool using Ruby would be the most useful and the most accessible to the most people.
The leading bash script being passed needs to be wholly extensible. These signify modifiable "scenarios", e.g. in this case (see the sketch after this list):
Call
Wait three seconds
Press 1
Wait three seconds
Hang up
There may be as many as a dozen flags. Possibly a configuration file.
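For illustration, here is a sketch of the kind of extensibility I have in mind (the function name is made up); different step lists would be piped into pjsua the same way:
scenario() {
  # One "scenario": wait, press 1, wait, hang up.
  sleep 3; echo "# 1"
  sleep 3; echo h
}
scenario | pjsua sip:somephonenumber#ip --flag_1 val --flag_2 val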
Is it a reasonable practice to create a gem that carries at its core a command line tool that has been previously compiled?
It seems reasonable to create a gem that uses a command line tool. The only thing I'd say is to check that the command is available using system('which pjsua') and raise an informative error if it hasn't been installed.
So it seems like the vocabulary I was missing is extension. Here is a great stack discussion on wrapping up a Ruby C extension in a Ruby Gem.
Here is a link to the Gem Guides on creating Gems with Extensions.
Apparently not only is it done but there are sets of best practices around its use.

How can I have different states of expansion in LuaTeX/LuaLaTeX, for debugging for instance?

I am preparing LaTeX/TeX fragments with Lua programs that get their information from SQL requests to a database (LuaSQL).
I wish I could see intermediate states of expansion, for debugging purposes but also to check what has been brought in by the SQL requests and the Lua processing.
My dream would be, for instance, to see the code of my LaTeX page as if I had typed it manually myself, with all the information filled in by the SQL requests and the Lua processing.
I would then have a first pass with my Lua programs and SQL requests to build valid and readable LuaLaTeX code that I could amend if necessary; then I would compile that file again to obtain the wanted PDF document.
Today, I use a Lua development environment, ZeroBrane Studio, to execute and test the Lua chunks before I integrate them into my LuaLaTeX code. For instance:
My Lua chunk:
for k,v in pairs(data.param) do
  -- emit one \gdef per parameter, e.g. \gdef\pA{0.4}
  print('\\gdef\\'..k..'{'..v..'}')
end
The Lua printout:
\gdef\pB{0.7}
\gdef\pAinterB{0.5}
\gdef\pA{0.4}
\gdef\pAuB{0.6}
LuaLaTeX code:
Nothing visible! Except that now I can use \pA, for instance, in the code.
My dream would be to have, in the LuaLaTeX code:
\gdef\pB{0.7}
\gdef\pAinterB{0.5}
\gdef\pA{0.4}
\gdef\pAuB{0.6}
Maybe a solution would be to use the expl3 package? But since I am not familiar with it, nor with the precise TeX expansion process, I prefer to ask you experts before I invest heavily in understanding this module.
Addition:
Pushing the reflection further, a consequence would be that from a LaTeX source I would get LaTeX code instead of, say, a PDF file. This implies using only the first steps of the four TeX processors described by Eijkhout in "TeX by Topic": the input processor and the expansion processor (but with a controlled depth of expansion), not the execution processor or the visual processor. Moreover, there would need to be a way to show the intermediate state, which means a new processor able to turn tokens back into readable strings and correct TeX/LaTeX code that can be processed later.
Unless somebody has already done or seen something like that, I feel that my wish may be unfeasible in the short or medium term. What is your feeling: should I abandon all hope?

BASH shell process control - any other examples of controlling/scheduling work

I've inherited a medium-sized project in which the main (batch) program is fed work through a large set of shell scripts that do a lot of process control (waiting for processes to complete, sleeping, checking for conditions, etc.) [and reprocessed through Perl scripts].
Are there other examples of process control by shell scripts? I would like to see what other people have done as a comparison (as I'm not really fond of the 6,668-line shell script).
The conclusion may be that the current program works and doesn't need to be messed with, or that for maintenance reasons it's too cumbersome and doing it another way will be easier to maintain - but I need other examples.
To reduce the "generality" of the question here's an example of what I'm looking for: procsup
The Inquisitor project relies extensively on process control from shell scripts. You might want to see its directory with the main function set, or the directory with tests (i.e. slave processes) that it runs.
This is quite a general question, and therefore giving specific answers may be a little bit difficult. (And you won't be happy with a 5,000-line example.) Most probably the architecture of your application is faulty and requires a rather complete rework.
As you probably already know, process control with bash is pretty simple:
./test_script.sh &
test_script_pid=$!
wait $test_script_pid # waits until it's done
./test_script2.sh
echo $? # Prints return code of previous command
You can do the same things with, for example, Python's subprocess module (or with Perl, obviously). If you have a complex architecture with a large number of different programs, then the process is obviously non-trivial.
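For example, here is a minimal sketch (the worker script names are placeholders) that starts several jobs in parallel, waits for each one, and counts the failures:
#!/bin/bash
# Start the workers in the background and remember their PIDs.
pids=()
for script in ./worker1.sh ./worker2.sh ./worker3.sh; do
  "$script" &
  pids+=($!)
done
# Wait for each worker and count non-zero exit codes.
failed=0
for pid in "${pids[@]}"; do
  wait "$pid" || failed=$((failed + 1))
done
echo "$failed worker(s) failed"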
That is an awfully big shell script. Have you considered refactoring it?
From the sound of it, there may be a lot of instances where you could replace several lines of code with a call to a shell function. If you can simplify the code in this way, then it will be easier to see where there are errors in the logic.
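As a hypothetical example of the kind of refactoring I mean (the commands and messages are made up), a repeated log-and-abort pattern collapses into a single helper:
# One helper replaces the same error-handling lines copied all over the script.
die() {
  echo "ERROR: $*" >&2
  exit 1
}
cp /etc/app.conf /backup/ || die "backup of app.conf failed"
./feed_batch.sh --input /data/today || die "batch feed failed"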
I've used this tactic successfully with a humongous Perl script, and it turned out to have some serious logic errors and to be a security risk because it had embedded passwords that were obfuscated in an easily reversible way. The exposed passwords could have been used by persons unknown (well, a disgruntled employee) to shut down an entire global network.
Some managers were leaning towards making a security exception because this script was so important, but when the logic error was explained and it was clear that the script was providing incorrect data, it was decided that no data was better than dirty data. The guy who wrote that script had taught himself programming with a Perl book and the writing of this very script.

Real-life shell script usage?

I'm learning UNIX/Linux shell scripting and trying to think about its appropriate usage.
The only thing that comes to mind is that it would be nice for, let's say, backup operations and log management... but I'm sure it goes way beyond that... or does it?
I'm sure there are people on this site who use shell scripting on a daily basis.
Can you tell me what you use it for in your organization/business?
Thanks:)
Why use shell scripts
Basically, there are any number of tasks related to backup, maintenance, etc. that need to be automated, and shell scripts do that.
You can do almost everything in shell, but it is easy to write ugly and slow scripts.
The first domain of expertise of shells is starting and combining other programs. This makes them exceptionally well suited for:
file manipulations: list, move, copy, compress, archive
text line manipulation: filter (grep), modify (sed), delete lines (sed), combine files (paste), sort (sort), remove duplicates (sort -u)
All those operations are NOT shell operations, but the shell is the glue that puts them all together.
file operations are generally combined with flow control instructions (while, if, for)
line operations are combined with pipes (|) and named pipes (mkfifo)
These are all things you can do in fewer than 20 lines of shell commands.
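For example, a typical glue job of that kind (the log path is made up): pull the error lines out of an application's logs and produce a sorted frequency summary:
# Most frequent error messages first; timestamps stripped so identical errors group together.
grep -h 'ERROR' /var/log/myapp/*.log \
  | sed 's/^[0-9 :,-]*//' \
  | sort \
  | uniq -c \
  | sort -rn \
  > error_summary.txt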
I personally use it to batch miscellaneous daily/weekly commands and start up long-running processes. Shell scripts can be unwieldy and hard to debug when they get big, and unknown variables evaluate to empty strings (icky).
Scripting languages such as Python, Perl, and Ruby become more attractive as the code becomes more complex.
I work on an actively developed software project that runs in a unix environment. Unfortunately it uses a lot of different environment variables for configuration and stashes binary programs, data files, and shared libraries on version dependent paths.
All that is a pain to set up.
But it gets worse: at any given time I might want to work with the stable version, the pretty-stable-but-more-up-to-date version, the bleeding-edge-every-new-feature version, or my personally hacked development version.
Switching between them is an even bigger hassle.
Enter a shell script which ensures that I am set up for exactly one version at a time. Ta-da!
BTW - the script I use for this makes extensive use of the accepted answer to How do I manipulate $PATH elements in shell scripts?, so you know Stack Overflow works for me in the real world. Moreover, I've infected several other people with this technology.
I've seen and worked-on full-blown applications (medical records and scheduling processing) written in Korn shell.
Batch programming, PostScript print filters, automatic mailers and automated airline checkin systems, regular stock price tracking, software installers, et al, et al.
Better question = what could not be programmed in Shell?
For our company, we use shell scripts for the following:
backups - it would be disastrous for us if we lost our data. Various parts of our backup, like database backup, offsite backup, continuous backups, etc., all use shell scripts; some run daily and some run once a week.
updating dates - we do not use NTP due to firewall restrictions, so we rely on sh scripts to update the date.
log cleanup
send emails
I didn't think bash programming was particularly powerful until I saw that the OS startup scripts are all written in it. That made me re-examine my assumptions. I now have several dozen important shell scripts that I've written over the years that automate some common tasks.
For example, I wrote one that polls the current load average, and then executes a provided command if it exceeds a certain value (useful for examining events that only happen once or twice a day).
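A rough sketch of that idea (Linux-specific because it reads /proc/loadavg; the threshold and polling interval are placeholders):
#!/bin/bash
# Usage: ./loadwatch.sh some_command arg1 arg2 ...
threshold=8.0
while true; do
  load=$(cut -d ' ' -f1 /proc/loadavg)
  # awk does the floating-point comparison that bash can't do natively.
  if awk -v l="$load" -v t="$threshold" 'BEGIN { exit !(l > t) }'; then
    "$@"    # run the provided command while the box is under heavy load
  fi
  sleep 60
done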
Another that I wrote iterates through all the mysql databases on the server and outputs a mysqldump for each one into its own appropriately-named .sql file.
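That loop might look roughly like this (credentials are assumed to come from ~/.my.cnf, and the backup directory is a placeholder):
#!/bin/bash
# Dump every database on the server into its own .sql file.
outdir=/backup/mysql/$(date +%F)
mkdir -p "$outdir"
for db in $(mysql -N -e 'SHOW DATABASES' | grep -Ev '^(information_schema|performance_schema)$'); do
  mysqldump "$db" > "$outdir/$db.sql"
done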
Another iterates through a list of homedirs and changes the ownership of all the files under the corresponding public_html dir to match the user who should own them to be compliant with suPHP's restrictions.
Another examines the current hardware configuration and downloads, installs, and configures appropriate software for monitoring the health of the currently-attached RAID controller.
These are all relatively simple tasks that could be done by hand -- but whenever I find myself doing the same task more than once, I write a shell script to automate the process.
I also built a base-64 decoder in bash just to see if I could. It works, but it's terribly slow. I use shell scripting for simple tasks that primarily involve executing other programs. I often use Perl when a significant amount of string processing is required, and I use Python for the more complex scripting tasks. The more languages you know, the better you will be at choosing the right one for the job.

Resources