I have some issues using Lauterbach debug tools. I want to create PRACTICE scripts and integrate them into existing scripts.
For example, I'm testing an ARM board. I have a script arm.cmm, but while it runs the value of a register changes. I can detect that manually in the debugger, but I want it to be fully automatic.
So I'd like to use the PRACTICE script language to check the value of the register, but I don't know of any way to integrate the new .cmm script into the existing scripts.
How can I do that?
DO <filename.cmm> [<parameters>]
ENDDO
The DO command starts a PRACTICE program. It can be used at the command level to start a PRACTICE program, or within a program to run another file like a subroutine. PRACTICE files started by a DO command should be terminated by the ENDDO instruction. Additional parameters may be defined, which are passed to the subroutine; the subroutine reads the parameter list with the ENTRY command.
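For example, a minimal sketch of the integration (the register name, expected value, and file names here are assumptions; check the exact expression syntax against the PRACTICE reference):

; in arm.cmm
DO check_reg.cmm 0x1000   ; run the checking script with the expected value

; check_reg.cmm
ENTRY &expected
IF Register(R0)!=&expected
(
  PRINT "R0 mismatch: expected &expected"
)
ENDDO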
I'm not sure I entirely understand... a register is changing out from under you while you are debugging? Do you just want to print out the contents of the register, or do you want to do logical comparisons on it?
Either way I recommend giving them a call or shooting them an email. The Lauterbach is a great debugger, but it can be complex. I know the guys in the Mass division and they are incredibly helpful and can answer a question like that immediately.
I am preparing LaTeX/TeX fragments with Lua programs that get their information from SQL requests to a database (LuaSQL).
I wish I could see intermediate states of expansion, both for debugging purposes and to check what has been brought in by the SQL requests and the Lua processing.
My dream would be, for instance, to see the code of my LaTeX page as if I had typed it manually, with all the information supplied by the SQL requests and the Lua processing.
I would then have a first pass with my Lua programs and SQL requests to build valid, readable LuaLaTeX code that I could amend if necessary; then I would compile that file again to obtain the desired PDF document.
Today I use a Lua development environment, ZeroBrane Studio, to execute and test the Lua chunks before I integrate them into my LuaLaTeX code. For instance:
my Lua chunk:
for k, v in pairs(data.param) do
  print('\\gdef\\' .. k .. '{' .. v .. '}')   -- one \gdef per parameter
end
Lua printout:
\gdef\pB{0.7}
\gdef\pAinterB{0.5}
\gdef\pA{0.4}
\gdef\pAuB{0.6}
LuaLaTeX code:
nothing visible! except that now I can use \pA, for instance, in the code
my dream would be to have, in the LuaLaTeX code:
\gdef\pB{0.7}
\gdef\pAinterB{0.5}
\gdef\pA{0.4}
\gdef\pAuB{0.6}
Maybe a solution lies in the use of the expl3 extension? But since I am familiar neither with it nor with the precise TeX expansion process, I prefer to ask you experts before I invest heavily in understanding this module.
Addition:
Pushing the reflection further, a consequence would be that from LaTeX code I would get LaTeX code rather than, say, a PDF file. This implies using only the first two of the four TeX processors described by Eijkhout in "TeX by Topic": the input processor and the expansion processor (with a controlled depth of expansion), but neither the execution processor nor the visual processor. Moreover, there would need to be a way to show the intermediate state, which means a new processor able to turn tokens back into readable strings, i.e. correct TeX/LaTeX code that can be processed later.
Unless somebody has already done or seen something like this, I feel that my wish may be unfeasible in the short or medium term. What is your feeling? Should I abandon all hope?
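One partial workaround (just a sketch, untested; the side-file name generated-defs.tex is invented): have the Lua chunk write each generated line to an auxiliary file as well as handing it to TeX, so you can inspect the file, or even \input it on a second pass:

-- keep a readable copy of everything we feed to TeX
local f = assert(io.open('generated-defs.tex', 'w'))
for k, v in pairs(data.param) do
  local line = '\\gdef\\' .. k .. '{' .. v .. '}'
  tex.print(line)       -- feed it to TeX as before
  f:write(line, '\n')   -- and record it for inspection
end
f:close()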
I have a VB6 application whose logging I'm trying to change. What I have is an existing flag in the registry which states whether the application is set to Debug mode, so that it writes log output.
Within my code I then have lots of If statements checking whether this flag is true. Each check may not cost much on its own, but it happens so often that it's an overhead I would like to reduce.
The code is full of statements like this:
If isDebug = True Then
    LogMessage("Log what is happening")
End If
So what I'm looking for is a better way to do this. I know I can set a debug mode within Project Properties -> Make, but that needs to be set prior to building the .exe, and I want to be able to set it in production via the registry key.
Consider using a command line argument to set debug mode. I used to do this.
Dim sCommandLine() As String
Dim I As Integer
sCommandLine = Split(Command$)
For I = 0 To UBound(sCommandLine)
    ' do something with each argument
Next I
You can also persist command line args inside the IDE, so you always have them when debugging. When running outside of the IDE, make a shortcut to the compiled application with the arguments in it.
I do something almost identical to what you have in mind in a lot of my code. Add this:
Sub LogDebug(ByVal strMsg As String)
    If (isDebug) Then
        LogMessage(strMsg)
    End If
End Sub
Then just call LogDebug in your main program body, or call LogMessage directly if it's something you always want to log, regardless of the debug flag.
I'm assuming isDebug is a boolean here. If it's a function call, you should just create a global flag that you set at the beginning of the code, and check that instead of looking at the registry over and over. I don't think checking a boolean is that much of a processing load, is it?
You want to call a function if a runtime flag is set. The only thing I can see that could be faster is:
If isDebug Then
    LogMessage("Log what is happening")
End If
But I doubt that either would be the cause of performance problems. Most logging frameworks promote code like that and even put the flag/log level as a parameter to the function. Just be sure that you don't have other places where you needlessly compute a log message outside of the conditional statement.
You might evaluate why you need logging and if the logs produced are effective for that purpose.
If you are looking for a problem that can be trapped using VB error handling, consider a good error handling library like HuntERR31. With it you can choose to log only errors instead of the debug message you are now doing. Even if you don't use the library, the docs have a very good description of error handling in VB.
Another answer still:
Read your registry flag into your app so that it's a session-based thing (i.e. when you close and restart the app, the flag will be checked again; there's no point in checking the registry with every single test).
Then (as per Tom's post) assign the value to a global variable and test that: far faster than a function call.
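A minimal sketch of that (the app/section/key names are invented; note that GetSetting only reads from HKCU\Software\VB and VBA Program Settings, so if your existing flag lives elsewhere you'd need the registry API instead):

' Module-level global, read once at startup
Public gIsDebug As Boolean

Public Sub InitDebugFlag()
    gIsDebug = (GetSetting("MyApp", "Settings", "IsDebug", "0") = "1")
End Sub

After that, every test is just a check of gIsDebug, with no registry access.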
To speed up logging you may want to consider dimensioning a string buffer in your app and, once it has reached a specific size, fire it into your log file. Obviously there are certain problems with this approach, namely the volatility of the memory, but if you want performance over disk access I would recommend such an approach.
This would, of course, be a lot easier if you could show us some code for your logging process etc.
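As a rough illustration of the buffering idea (a sketch; the file name and threshold are invented, and you'd want to call FlushLog on shutdown so nothing is lost):

Private mBuffer As String

Public Sub BufferedLog(ByVal strMsg As String)
    mBuffer = mBuffer & strMsg & vbCrLf
    If Len(mBuffer) > 8192 Then FlushLog   ' only hit the disk occasionally
End Sub

Public Sub FlushLog()
    Dim f As Integer
    If Len(mBuffer) = 0 Then Exit Sub
    f = FreeFile
    Open "app.log" For Append As #f
    Print #f, mBuffer;
    Close #f
    mBuffer = ""
End Sub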
This is homework. Tips only, no exact answers please.
I have a compiled program (no source code) that takes in command line arguments. There is a correct sequence of a given number of command line arguments that will make the program print out "Success." Given the wrong arguments it will print out "Failure."
One thing that is confusing me is that the instructions mention two system tools (they don't name them) which will help in figuring out the correct arguments. The only tool I'm familiar with (unless I'm overlooking something) is GDB, so I believe I'm missing a critical component of this challenge.
The challenge is to figure out the correct arguments. So far I've run the program in GDB and set a breakpoint at main but I really don't know where to go from there. Any pro tips?
Are you sure you have to debug it? It would be easier to disassemble it. When you disassemble it, look for cmp instructions.
There exist not only tools to disassemble x86 binaries into assembler listings, but also some which attempt to produce a more high-level, readable listing. Try googling and see what you find. I'd be more specific, but that would be counterproductive if your job is to learn some reverse-engineering skills.
It is possible that the code is something like: if Arg(1)='FOO' then print "Success". So you might not need to disassemble at all. Instead, you might only need a tool which dumps out all the strings in the executable that look like sequences of printable ASCII characters; many utilities will do this, even if the sequence you are supposed to input is not easily typed at the keyboard. If the program has been very carefully constructed, the author won't have left "FOO" (if that was the "password") in plain sight, but will have tried to obscure it somewhat.
Personally I would start with an ltrace of the program with any arbitrary set of arguments. I'd then use the strings command and guess from that what some of the hidden argument literals might be. (Let's assume, for the moment, that the professor hasn't encrypted or obfuscated the strings and that they appear in the binary as literals.) Then try again with one or two arguments (or the requisite number, if you know it).
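For example (the binary name mystery is made up; strings and ltrace are standard tools on most Linux systems):

$ strings ./mystery | less                        # scan for likely argument literals
$ ltrace ./mystery guess1 guess2 2>&1 | grep cmp  # watch the str*cmp calls it makes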
If you're lucky, the program was compiled and given to you without being run through strip. In that case you might have the symbol table to help. Then you could try single-stepping through the program (read the gdb manuals). It might be tedious, but there are ways to set a breakpoint and tell the debugger to run through a function call (such as any from the standard libraries) and stop upon return. Do this repeatedly: identify where the program calls into standard or external libraries, set a breakpoint on the next instruction after the return, let gdb run the process through the call, and then inspect what the code is doing in between those calls.
Coupled with the ltrace output, it should be fairly easy to see the sequencing of the strcmp() (or similar) calls. Once you see the string against which your input is being compared, you can break out of the whole process and re-invoke gdb and the program with that one argument, trace through until the next comparison, and so on. Or you might learn some more advanced gdb tricks and actually modify your argument vector and restart main() from scratch.
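A session might look roughly like this (assuming a dynamically linked x86-64 binary, so strcmp is breakable and its arguments arrive in rdi/rsi):

(gdb) break strcmp
(gdb) run guess1 guess2
(gdb) x/s $rdi     # one side of the comparison
(gdb) x/s $rsi     # the other side: possibly the hidden literal
(gdb) finish       # run through the call and stop on return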
It actually sounds like fun, and I might have my wife whip up a simple binary for me to try this on. I might also create a little program to generate binaries of this sort. I'm thinking of a little #include in the sources which provides the "passphrase" of arguments, and a makefile that selects three to five words from /usr/dict/words, generates that #include file from a template, then compiles the binary using that sequence.
I am trying to understand the code in this page: https://github.com/corroded/git-achievements/blob/gh-pages/git-achievements
and I'm kind of at a loss as to how it actually works. I do know some bash and shell scripting, but how does this script actually "store" how many times you've used a command (I'm guessing by saving it into a text file?), and how does it "sense" that you actually typed a git command? I have a feeling it's line 464 onwards that does it, but I don't quite follow the logic.
Can anyone explain this in a bit more understandable context?
I plan to make some achievements for other commands, and I hope to get an idea of HOW to go about it without randomly copying and pasting stuff and resorting to voodoo.
Yes, the script proper starts at line 464; everything before that is helper functions. I don't know how it gets installed, but I would assume you have to call this script instead of the normal git command. It just checks whether the first parameter is achievement, and if not, regular git is executed with the remaining parameters. Afterwards it checks whether an error happened (and exits if so). Then it just calls log_action and check_for_achievments. log_action writes the issued command with a date into a text file, while check_for_achievments scans that log file for certain events. If you want to add another achievement, you have to do it in check_for_achievments.
Just look at how the big case statement handles it (most of the achievements call count_function, which counts the number of usages of a command and matches when a power of 2 is reached).
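The underlying pattern is easy to reproduce for your own commands. Here is a minimal sketch of the same log-and-count idea (the names and log path are invented, not the ones from git-achievements):

#!/bin/bash
LOG_FILE="$HOME/.cmd_log"

log_action() {
    # append the issued command with a date, as the real script does
    echo "$(date +%F) $*" >> "$LOG_FILE"
}

count_usages() {
    # count how many logged lines mention the given subcommand
    grep -c " $1" "$LOG_FILE"
}

log_action "$@"
n=$(count_usages "$1")
# n is a power of two exactly when n & (n - 1) is zero
if [ "$n" -gt 0 ] && [ $(( n & (n - 1) )) -eq 0 ]; then
    echo "Achievement unlocked: used '$1' $n times!"
fi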
I've inherited a medium-sized project in which the main (batch) program is fed work through a large set of shell scripts that do a lot of process control (waiting for processes to complete, sleeping, checking for conditions, etc.) [and the results are reprocessed through Perl scripts].
Are there other examples of process control done by shell scripts? I would like to see what other people have done as a comparison (as I'm not really fond of the 6,668-line shell script).
The conclusion may be that the current program works and doesn't need to be messed with, or, for maintenance reasons, that it's too cumbersome and doing it another way would be easier to maintain; but I need other examples to judge.
To reduce the "generality" of the question here's an example of what I'm looking for: procsup
The Inquisitor project relies extensively on process control from shell scripts. You might want to look at its directory with the main function set, or the directory with the tests (i.e. slave processes) that it runs.
This is quite a general question, and therefore giving specific answers may be a little difficult. (And you won't be happy with a 5,000-line example.) Most probably the architecture of your application is faulty and requires a rather complete rework.
As you probably already know, process control with bash is pretty simple:
./test_script.sh &
test_script_pid=$!
wait $test_script_pid # waits until it's done
./test_script2.sh
echo $? # Prints return code of previous command
You can do the same things with, for example, Python's subprocess module (or with Perl, obviously). If you have a complex architecture with a large number of different programs, then the process is obviously non-trivial.
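For comparison, a rough Python equivalent of the bash snippet above, using only the standard library:

import subprocess

p = subprocess.Popen(["./test_script.sh"])  # start it in the background
p.wait()                                    # waits until it's done

rc = subprocess.call(["./test_script2.sh"])
print(rc)                                   # return code of the previous command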
That is an awfully big shell script. Have you considered refactoring it?
From the sound of it, there may be a lot of instances where you could replace several lines of code with a call to a shell function. If you can simplify the code in this way, then it will be easier to see where there are errors in the logic.
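For instance (a sketch; the script names are invented), a start-wait-check pattern that is repeated inline throughout a big script can collapse into one function:

run_and_wait() {
    # start a job, wait for it, and abort the batch if it failed
    "$@" &
    local pid=$!
    if ! wait "$pid"; then
        echo "ERROR: '$*' failed" >&2
        exit 1
    fi
}

run_and_wait ./extract_data.sh
run_and_wait ./load_data.sh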
I've used this tactic successfully with a humongous Perl script, and it turned out to have some serious logic errors and to be a security risk, because it had embedded passwords that were obfuscated in an easily reversible way. The passwords that were exposed could have been used by persons unknown (well, a disgruntled employee) to shut down an entire global network.
Some managers were leaning towards making a security exception because the script was so important, but when the logic error was explained and it was clear that the script was producing incorrect data, it was decided that no data was better than dirty data. The guy who wrote that script had taught himself programming with a Perl book and the writing of the script itself.