Debugging a proprietary recursive script - bash

So after reading these questions here and here, I don't see a better way to ask this question than by explaining my situation.
Note:
I am very new to proprietary platforms in general, coming from a background in Free/Open software.
Essentially, I have a program at work for our in-house proprietary motion control platform. It uses a script with macros to communicate between the program on the UI side and the firmware on the motion controller side. The documentation and development of all the involved software are handled by a sister company, and the documentation is incomplete. We do have support from them, but they are stretched thin, have their own products, are 2,700 mi away, and can't be in-house on call for us. I don't have, and am not allowed to have, the source for either the main application or the firmware. And to REALLY kill it, our last, only real programmer left. We are alone with this script and a grip of new products that all depend on it working well with the software. The script is in need of a serious, regular bug-check solution for each configured machine that we use this motion controller system with.
So I am going to start debugging this script. That's what's happening here.
I have tried to write a mock-up bash implementation, but the degree of recursion, the arrays pulled in from .ini files, the system-defined command set, and the parsing of the script in the firmware have made this bash debug implementation difficult, and I'm not sure I can pull it off fully without hacking away important stuff.
I looked at other options like ANTLR, which is a bit over my head too, but might work.
Now, the controller communicates over some kind of static-IPv4 crossover Ethernet setup (Telnet?), but it has an RS-232 serial port that will output formatted strings produced by a 'sout' command. It's been my intent to mod the script to output as much as I can get via formatted strings with predefined system commands and variables, but I'm afraid that it won't give me the big picture.
The script itself defines global, system, and local variables and functions that (because it's on the motion controller side, with hardware limitations) can be nested 25 subroutines deep. The real gotchas seem to be where the recursive side of the script enters and exits these functions as they are called from the UI and from other functions. Nothing jumps out, but without in-depth docs I can only see so much, and I've pretty much just been learning all this alongside another engineer who relays questions to the sister company.
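To get at those entry/exit points, my current plan is to instrument the functions with 'sout' markers and reconstruct the call tree on the PC side. A minimal sketch of the capture end, assuming I add "ENTER <name>"/"EXIT <name>" markers to each function (the device path and baud rate are guesses for my setup):

    #!/usr/bin/env bash
    # Capture the controller's 'sout' trace from the serial port and
    # indent it into a call tree based on ENTER/EXIT markers.
    PORT=${1:-/dev/ttyS0}
    stty -F "$PORT" 115200 raw -echo   # -F is the GNU stty flag

    depth=0
    while IFS= read -r line; do
        case $line in
            ENTER\ *) printf '%*s%s\n' $((depth * 2)) '' "$line"
                      depth=$((depth + 1)) ;;
            EXIT\ *)  [ "$depth" -gt 0 ] && depth=$((depth - 1))
                      printf '%*s%s\n' $((depth * 2)) '' "$line" ;;
            *)        printf '%*s%s\n' $((depth * 2)) '' "$line" ;;
        esac
    done < "$PORT" | tee trace.log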
Can anyone give me some advice as to how I should proceed? I know it's probably a lot to ask, but I'm kind of stuck in a rut and need more cognitive resources than my skill and coffee allow for.
Thank you for your time.

Related

Interact with GUI Elements of a Windows Application

First of all, I want to appreciate the work on the SCIDvsPC project. I know that the basic SCID was discontinued many years back, and the developer has done a great job expanding it and doing his share for the chess field. We have a minor project to do in this 6th semester of our college. We've decided to start a project on a Chess Next Move Analyzer that is based on a variety of filters and implements self-learning and machine learning.
I've been researching the project idea for the last 2 months. We need to import several games defined by some filters, then read and analyze the generated PGN file. For example, if the user chooses to get the next best move predicted according to the rating range 2000-2500, our program should only export and analyze PGN files where both opponents fall within this range. I know the program can do all this, but I'm confused about how to automate it. I mean, I currently have to manually enter the moves and then click 'Generate PGN'; how do I make my program do this, i.e., take input from the user (like the first 3 moves), make the project play these moves (what I had to do manually), then generate the PGN file and keep it in a folder?
I've surfed the net about interacting with GUI elements in Windows (we have no problem working with Linux either) and came to know about Microsoft UI Automation, tools for Python, Java, and C#, and something called COM. Does the software support COM or any one of these, or have you already developed some functionality like this? Can you please guide me on this?
If asked to generalize what I want to do: I want to interact with GUI elements, be it any application. Take Notepad as an example. Suppose I want to open a file in it, then find and replace a particular word. I know how to do this manually, but when I have thousands of files I need some kind of program to do this for me. Do specific programs, like SCID in my case, have some pre-built feature (I read a bit about COM) to handle this? Which programming language domain does this fall into? Would using Linux help me more?
Take Notepad as an example. Suppose I want to open a file in it, then find and replace a particular word. I know how to do this manually, but when I have thousands of files I need some kind of program to do this for me. Do specific programs, like SCID in my case, have some pre-built feature (I read a bit about COM) to handle this?
Your situation sounds quite specific, so I doubt you will be able to find a pre-existing program to do this for you. Meaning: you'll have to code it yourself.
Which programming language domain does this fall into?
Well, this could probably be done in many, many different programming languages. A simple shell script would be able to achieve the Notepad example you gave.
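For instance, a minimal sketch of the Notepad scenario with standard tools (the directory, file pattern, and words are placeholders):

    # Find-and-replace a word across thousands of files, no GUI involved.
    for f in ./documents/*.txt; do
        sed -i.bak 's/oldword/newword/g' "$f"   # -i.bak keeps a backup of each file
    done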
Would using Linux help me more?
No, your goals seem pretty achievable with a simple shell script, whether you write it on Windows, macOS, or a Linux distro.
#SB87 gave you some useful hints; I'd like to expand on his answers.
Sorry, I don't think you know what you're talking about. Reinforcement learning (a better term than self-learning) and machine learning are not suitable for a college project; they're at the PhD or research level. Consider getting yourself into university before even thinking about anything like that.
UI automation is possible, but error-prone and slow. If you want to do it, you'd write a console program. You mentioned something about user inputs; do you mean you want to apply machine learning to user mouse/keyboard inputs? That's not going to work. Machine learning for chess requires hundreds of thousands of training samples.
I think you should scale down the project and focus on something you can achieve.

Can OS X system calls be overridden or interposed on a system-wide basis?

Working under OS X Lion, I've done some work with code injection to interpose system calls on a process-by-process basis recently.
I've learned a lot along the way, and it now looks like it would make more sense, at least for research purposes, to "simply" interpose all calls to certain system functions, such as pwrite, if such a thing is possible.
Is it possible to get my code called instead of the OS for every call to certain system calls (e.g. pwrite) from every process?
And if so, can I know what process has made the call?
Edit: Lest anyone think I'm a malware author because of the nature of my question, I'll explain why I'm here now, asking what I'm asking:
I'm trying to get a big, complex piece of closed-source software working like it should. Why not wait for the vendor to fix it? Two years ago they started pointing fingers at another party, and that party pointed right back. The situation is preposterous, and it is worth trying to overcome without either party's assistance, because this software gets used by film and video production people who charge hundreds of dollars an hour for their creatively and technically advanced efforts, and they shouldn't be wasting their time wrestling with their tools.
The problem with my efforts thus far is that I need to use code injection and interposing to find the source of the problem (this is what I referred to above as "research"). Once I find the source of the problem, the solution might also be injection and interposing, or replacement of a dynamic library, or some obscure low-level system tuning, or who knows what? The software I'm analyzing is sprawling, and it in turn leverages other frameworks, libraries, and background tasks, some of which are part of OS X and some of which are part of the software package in question. Code injection and interposing on a component-by-component basis has become a little crazy, which is why I'd like to spy on what's going on at the system call end of things, so I can see, for example, where all pwrite calls originate and the specifics of those calls.
I hope this clarification helps, and that someone can point me in the right direction. Thanks!
You should look at DTrace (http://en.wikipedia.org/wiki/DTrace); it's part of OS X now. For interposing, I think there are several approaches, many of which will probably be thwarted by Gatekeeper/code signing. If that's not a worry, you might be able to use otool to edit the app linkage to have it load modified versions of its libraries. For code injection, I believe people have hacked this in the past with Input Components... but I really don't know if that still works. Not really an answer, I guess.
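Since DTrace hooks the syscall layer system-wide and reports the calling process, it directly answers both of your questions. For example, with the stock dtrace on Lion (System Integrity Protection restricts this on much later releases):

    # Run as root: log every pwrite() on the system with the caller's
    # process name and pid (arg0 is the fd, arg2 the byte count).
    sudo dtrace -n 'syscall::pwrite:entry
    {
        printf("%s (pid %d) fd=%d bytes=%d", execname, pid, arg0, arg2);
    }'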

BOINC: Is there an easy example how to code a programm for it and how to implement it into their client/server system?

I did a numeric method as my diploma thesis and coded it in Java. It needs a lot of computational time when adequately executed. So I looked for an alternative and found BOINC. Unfortunately, I didn't have time to port my method to BOINC, because I'm an aerospace student, not a programmer, and I decided to keep my priority on my Java program. Now it's finished, and I would still like to port it to the BOINC environment.
Unfortunately, I learn by re-doing examples, and I couldn't find any, neither on the official site http://boinc.berkeley.edu nor elsewhere on the internet.
So do you know a good, easy example, or do you have experience with BOINC and would like to start a new platform for such a BOINC project?
I'm realistic about my method: it wouldn't run 24/7, because there aren't as many work units as for SETI or Folding projects. So I would like to have a platform for more than just my project, so that another project on the platform can be worked on when one of them has no work units at that moment.
But to start this, I would keep it simple and just want to know how to code for it and use it in the client and server system. It doesn't matter what the example project works on, as long as it is simple enough that I can understand it and extend it for my method.
Thank you in advance, Andreas! :)
PS: I know that BOINC supports Java as a programming language, and my method is coded in Java.
As far as I know, JavaApps is just an idea; I don't know if anyone actually tried it in a real BOINC project. And it's Windows-only. And it seems to be a bit of a pain to redistribute the entire JRE as part of the BOINC application (both technically and legally).
Also, I generally dislike using that kind of "wrapper" where the science app (using the BOINC API) starts another process that then does the real computation. It's usually unreliable. There are lots of things that could go wrong with the wrapper, especially related to controlling the child process (e.g. if something kills the wrapper, the child process has to quit too).
However, I just found something pretty interesting that may let me do a better Java wrapper for BOINC... Stay tuned! (but don't hold your breath either; it's the holidays!)
Meanwhile, I suggest you start by reading BOINC wiki and setting up a server with a “hello world” application; and if you have any trouble, ask a specific question about your trouble either here or in the boinc_projects mailing list.
(Of course, payin’ me to install the server for you is also an option ;) but I can't guarantee anything; not even my mere availability at this time of the year)
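For orientation, the server-side "hello world" flow from the wiki looks roughly like this. The tool names are real BOINC server tools, but exact paths and options vary by version, so treat this as an outline only:

    cd ~/boinc/tools
    ./make_project --url_base http://myserver.example.com/ myproject

    cd ~/projects/myproject
    # describe the application in project.xml, then:
    bin/xadd              # register apps/platforms from project.xml
    bin/update_versions   # sign and stage the application version files
    bin/start             # start the project daemons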

Do Character User Interfaces have a future?

We've got products built both with GUI and CHUI. Going forward, we're looking at redesigning a lot of our software, mainly taking the route of going all GUI. My question to the group is: do we need to account for keeping a CHUI around? What are the advantages of CHUI over GUI? Many times in the past, people have said that a CHUI is faster because you don't need a mouse. I argue that a GUI can be just as fast with the right keyboard shortcuts, hotkeys, and/or touch screens.
Is CHUI something we should no longer consider if hardware no longer provides a constraint?
Also to clarify, when I speak about CHUI I mean a CHaracter based User Interface, and I'm also mainly concerned with the effective presentation of data to an end user.
There have been some fantastic responses that have highlighted the importance of having a command line based interface for automation and scripting based tasks which I will certainly take to heart when we begin the design!
The primary benefit of a CHUI (that is, something with forms and fields, not necessarily a command-line interface) is the keyboard for navigation and a consistent layout. That is key.
If your GUI can be completely and efficiently keyboard-navigated, then your CHUI user base should be happy. This is because, in time, the users simply "type" their commands into the system without "seeing the interface". They don't need to "discover" the interface, which is a primary feature of the GUI.
While CHUIs appear to be dinosaurs, they are still functional and usable. Most folks, once they're trained (notably POS/counter workers, but also back-office scenarios like the factory or warehouse floor), have no problem using a CHUI.
But the key is the keyboard support, so the users don't have to wait for the screen to catch up with them. A skilled operator with a mastery of the keyboard can make an application fly; you barely get a chance to see the popup windows and whatnot.
You should poll your customers, not programmers. If your customers, who use your applications, want a CHUI, even if all your developers think it's a waste of time, you build it, because the customer is always right (except for when they're wrong).
You should absolutely still consider it. Most importantly, command line programs can be automated (and chained together in scripts) much more easily than GUIs (typically). I can't imagine working with a source control tool which didn't have a command line interface - although obviously having a GUI is useful too.
Now whether you need a command line version for your particular app is hard to say without knowing what your app does. Do you need automation and scripting? Might someone want to VPN in and run it from a very bad connection, and thus appreciate low bandwidth?
Note that MS certainly doesn't believe the command line is dead - or they wouldn't have created PowerShell.
I agree with Eli that your customers should have the final say, but if you can keep the meat of your program from being too interwoven with the GUI (or CHUI), then the production cost of making both available should be minimal.
If you write apps for unix and you need to handle users who telnet / ssh to your box then you will need command line interfaces.
I would say it depends on your target. Do you script your code from other apps? That would be a requirement to keep the interactive version (or some piece to avoid the GUI startup).
We usually do one or the other. But sometimes we have utils that have to be deployable via FTP and run over SSH. Or we have tools that our users embed into their apps and don't want to expose a UI for (data migration/conversion).
To this day, some of the most efficient user interfaces I've ever seen were plain old terminal-based character interfaces.
Anecdote: I was once part of a project to "modernize" a terminal application used by 500 customer service representatives. We published sexy GUI mockups and everyone, including the users, was suitably impressed. We worked for six months on the application, and all the user acceptance testing seemed to indicate we had a winner.
But when the application was finally launched, it failed miserably. As it turns out, CSRs are measured for performance daily, right down to the average number of seconds per call handled. And no matter how hard they tried, they could not match the same level of efficiency in the GUI as they could in the terminal interface. They could get close with tabs and shortcuts, but not quite there.
Hard lessons learned. Modern programmers may abhor "dinosaurs", but do users really care about slick interfaces? Usually they just want to get their work done.
When I first read this, my immediate thought was that this is probably one of those apps that's basically a series of forms, but displays inside a terminal. Often you see such dinosaurs running on cash registers. I also recall seeing such an app used to apply for a loan when I bought my car. This type of application doesn't seem to have a place in the modern world -- any system with even a tiny bit of processing power can handle a normal GUI nowadays. Unless you're trying to support really low-end legacy customers, get rid of this user interface. A GUI with decent keyboard shortcuts (please, please, please put some thought into keyboard-only use of your GUI programs...) is going to be equally effective for the users coming from the old CHUI system and much friendlier to those used to a GUI, without having to have 2 versions of your app.
I don't see why everyone is bringing up command line apps. I think most people recognize that the command line isn't going away. It's far faster for many tasks than a GUI, largely because the programs tend to be non-interactive (and thus easily scriptable). As soon as your app becomes interactive (or, at least, doesn't have a param to make it non-interactive), running it from the command line is much less important. Even awesome programs like Vim that are terminal-based are transitioning to their graphical counterparts (gVim) because it gives you the best of both worlds.
Even GUI apps like Firefox can benefit from command line interfaces like Ubiquity. If there's a way to provide the command line from within the GUI then why not have the best of both worlds?
A lot of CAD programs have command line interfaces that show you what the GUI interaction you just performed equates to on the command line. That way you can learn the command line operations for the things you do frequently, where the command line can be quicker to interact with, whilst still having the discoverability of the GUI.
See this youtube video demonstrating Rhino3D's command line
CHUI is faster in execution speed, not user interaction speed. I write embedded systems (as well as GUIs), so I'll always have a use for command line apps.
Every study I have ever read showed that CHUIs are much faster for experienced users. GUIs are easier for new users and for applications that are only occasionally used. Also, for a given screen size, you can display more information in a CHUI than in a GUI. A good GUI can give you a quick overview at a glance.
In addition to the other benefits mentioned above, I've frequently found another reason to keep around an alternative UI--it keeps you and your interfaces honest. When an application is built with only one user interface, it becomes much easier to let design principles slide and for your business logic, etc. and your GUI to become an intertwined ball of spaghetti--despite best intentions. Regardless of the importance of your customers having a command-line interface, soon there might come a time when an alternative GUI (read: presentation layer) might be needed, and you'll want to be prepared. This might not be relevant to your requirements, but I think it's something good to keep in mind...
One of the big issues that we encountered was multisession capability which is almost nonexistent with the GUI technologies I have seen. Our users were quick to point out that with the current character based interface they could have over a dozen Telnet based terminal sessions going at the same time on their PC screen which enabled them to multitask or task switch with high efficiency. They rated multitasking as the killer feature which they benefitted from in our fast paced environment where interruptions are frequent. Being able to have concurrent access to multiple instances of a particular ERP application or multiple different ERP applications while always retaining session states was important to our user community.
I think the problem comes from design practices in GUI forms. We tend to place more objects on them, especially with a vertical scroll bar and tab capabilities. This also makes loading slower. Going through CHUI menus with the keyboard is faster once you've memorized the sequences, and holding the Ctrl key isn't required. There is something about the menu bar in Windows where the shortcut-key descriptions are off to the right; character-based menus seemed easier to remember after a while:
A) - This Menu
B) - That Menu
C) - Some other Menu
Or you could arrow through the choices; you just seemed to develop muscle memory that That Menu is the second choice.
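For flavor, a menu like that is only a few lines of shell; a rough sketch, not production code:

    # Bare-bones single-keystroke character menu.
    while true; do
        clear
        echo "A) - This Menu"
        echo "B) - That Menu"
        echo "C) - Some other Menu"
        echo "Q) - Quit"
        read -rsn1 choice        # one keystroke, no Enter required
        case $choice in
            [Aa]) echo "this menu" ;;
            [Bb]) echo "that menu" ;;
            [Cc]) echo "some other menu" ;;
            [Qq]) break ;;
        esac
        sleep 1                  # pause so the result is visible before redraw
    done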
As soon as you present some data, someone's going to want to query against it. You can integrate that with a GUI, no problem. If you think some of your customers are going to want to script certain tasks, set it up. Anything to do with automation is better done from the command line (y harlo thar, cron job!).
I love GUIs. I'm a Mac user. But there is a time and a place for a CLI.
I was a sysadmin at a university math department when the registration system went from a character-based system using telnet to a GUI system on a PeopleSoft app.
The gals in the front office HATED the new system. Now, part of this was the whole bit about old shoes being more comfortable. But when I asked about it, Christine said that even after a week of doing several hundred registrations per day, the new system took several times as long to do anything. Lots of things were only doable with a mouse. The old system could accept input as fast as they could type. Screen repaints were under a tenth of a second; the new system had lots of 3/4- to 2-second pauses, just long enough to be annoying, not long enough to do anything else.

Why shouldn't I "bet the future of the company" on shell scripts? [closed]

I was looking at http://tldp.org/LDP/abs/html/why-shell.html and was struck by:
When not to use shell scripts
...
Mission-critical applications upon which you are betting the future of the company
Why not?
Using shell scripts is fine when you're playing to their strengths. My company has some Class 5 soft switches; the call processing code and the provisioning interface are written in Java. Everything else is written in KSH: DB dumps for backups, pruning, log file rotation, and all the automated reporting. I would argue that all those support functions, though not directly related to the call path, are mission-critical, especially the DB interaction. If something went wrong with the DB-interaction code and dumped the call routing tables, it could put us out of business.
But nothing ever does go wrong, because shell scripts are the perfect language for stuff like this. They're small, they're well understood, manipulating files is their strength, and they're stable. It's not like KSH09 is going to be a complete rewrite because someone thinks it should compile to byte code, so it's a stable interface. Frankly, the provisioning interface written in Java goes wonky fairly often, and the shell scripts have never messed up that I can remember.
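The housekeeping scripts in question are all variations on one theme. A representative sketch, where `dump_db` is a placeholder for whatever dump command your database actually provides:

    #!/bin/ksh
    # Nightly DB dump with pruning of old copies.
    BACKUP_DIR=/var/backups/db
    STAMP=$(date +%Y%m%d)

    dump_db > "$BACKUP_DIR/dump.$STAMP" || exit 1
    gzip "$BACKUP_DIR/dump.$STAMP"

    # keep the 14 newest dumps, prune the rest
    ls -1t "$BACKUP_DIR"/dump.*.gz | tail -n +15 |
    while read -r old; do
        rm -f "$old"
    done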
I kind of think the article gives a really good list of reasons not to use shell scripts, with the single mission-critical bullet you point out being more of a conclusion based on all the other bullets.
With that said, I think the reason you do not want to build a mission-critical application as a shell script is that, even if none of the other bullet points apply today, any application maintained over a period of time will evolve to the point of being bitten by at least one of those potential pitfalls. And there isn't anything you will really be able to do about it, short of a complete do-over, while wishing you had used something more industrial-strength from the beginning.
Scripts are nothing more or less than computer programs. Some would argue that scripts are less sophisticated. These same folks will usually admit that you can write sophisticated code in scripting languages, but that these scripts are really not scripts any more, but full-fledged programs, by definition.
Whatever.
The correct answer, in my opinion, is "it depends". Which, by the way, is the same answer to the converse question of whether you should place your trust in compiled executables for mission critical applications.
Good code is good, and bad code is bad - whether it is written as a Bash script, a Windows CMD file, in Python, Ruby, Perl, Basic, Forth, Ada, Pascal, Common Lisp, Cobol, or compiled C.
Which is not to say that choice of language doesn't matter. There are very good reasons, sometimes, for choosing a particular language or for compiling vs. interpreting (performance, scalability, capability, security, etc). But, all things being equal, I would trust a shell script written by a great programmer over an equivalent C++ program written by a doofus any day of the week.
Obviously, this is a bit of a straw man for me to knock down. I really am interested in why people believe shell scripts should be avoided in "mission-critical applications", but I can't think of a compelling reason.
For instance, I've seen (and written) some ksh scripts that interact with an Oracle database using SQL*Plus. Sadly, the system couldn't scale properly because the queries didn't use bind variables. Strike one against shell scripts, right? Wrong. The issue wasn't with the shell scripts but with SQL*Plus. In fact, the performance problem went away when I replaced SQL*Plus with a Perl script that connected to the database and used bind variables.
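To illustrate the distinction (the login, table, and column names are placeholders; my actual fix used Perl DBI, but SQL*Plus itself can also bind via its VARIABLE/EXEC commands):

    ACCT_ID=$1

    # bad: a different SQL literal, hence a fresh hard parse, per value
    echo "SELECT balance FROM accounts WHERE id = $ACCT_ID;" |
        sqlplus -s scott/tiger

    # better: the SELECT uses a bind variable, so its cursor is shared
    {
        echo "VARIABLE acct NUMBER"
        echo "EXEC :acct := $ACCT_ID"
        echo "SELECT balance FROM accounts WHERE id = :acct;"
    } | sqlplus -s scott/tiger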
I can easily imagine that putting shell scripts in spacecraft flight software would be a bad idea. But Java or C++ might be equally poor choices. The best choice would be whatever language (assembly?) is usually used for that purpose.
The fact is, if you use any flavor of Unix, you are using shell scripts in mission-critical situations, assuming you think booting up is mission-critical. When a script needs to do something that shell isn't particularly good at, you put that portion into a sub-program. You don't throw out the script wholesale.
It is probably shell scripts that help take a company into the future. I know, just from a programming standpoint, that I would waste a lot of time doing repetitive tasks that I have delegated to shell scripts. For example, I know most of the Subversion commands for the command line, but if I can lump all those commands into one script I can fire at will, I save time and mental energy.
Like a few other people have said, language is a factor. For my short, don't-want-to-remember steps and glue programs, I completely trust my shell scripts and rely upon them. That doesn't mean I'm going to build a website that runs bash on the backend, but I will surely use bash/ksh/python/whatever to help me generate the skeleton project and manage my packaging and deployment.
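The Subversion case above is the classic example; a minimal sketch of such a glue script (the name and workflow are just illustrative):

    #!/bin/bash
    # svn-sync: update, review, and commit in one shot.
    set -e
    msg=${1:?"usage: svn-sync \"commit message\""}
    svn update
    svn status
    svn commit -m "$msg"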
When I read this quote I focus on the "applications" part rather than the "mission critical" part.
I read it as saying bash isn't for writing applications; it's for, well, scripting. So sure, your application might have some housekeeping scripts, but don't go writing critical-business-logic.sh, because another language is probably better for stuff like that.
I would wager the author is showing that they are uncomfortable with certain aspects of quality with respect to shell scripting. Who unit-tests bash scripts, for example?
Also, scripts are rather heavily coupled to the underlying operating system, which could be something of a negative.
No matter what, we all need a flexible tool to interact with the OS. It is human-readable interaction with an OS that we use; it's like using a screwdriver on screws. The command line will always be a tool we need, whether as an admin, a programmer, or a network engineer. Look at Windows: they even expanded in that direction with PowerShell.
Scripts are inappropriate for implementing certain mission-critical functions, since they must have both +r and +x permissions to function. Executables need only have +x.
The fact that a script has +r means users might be able to make a copy of the script, edit/subvert it, and execute their edited Cuckoo's-Egg version.
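A quick demonstration of the difference (on Linux; other Unixes behave similarly):

    # scripts: the kernel hands the file to an interpreter, which must read it
    printf '#!/bin/sh\necho hello\n' > hello.sh
    chmod 0111 hello.sh    # --x--x--x: execute but no read permission
    ./hello.sh             # fails with "Permission denied": sh can't open the file

    # binaries: the kernel loads the image itself; read permission isn't needed
    cp /bin/echo myecho
    chmod 0111 myecho
    ./myecho hello         # runs even though the file is unreadable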
