Any recommendations for writing a GUI client from a CLI client? - user-interface

I'm looking to write a GUI client for an existing application at my job. The application is CLI-only, and because of that it is not widely used.
This is the first time I'm writing something like this, so I'm asking for recommendations: books, techniques, methodologies, advice. My first approach is to create the interface and have it make calls to the original CLI client. Is this a sound approach?

Though it's not ideal, I don't think it's a bad approach, creating a GUI shell for your CLI app. In this design, the GUI acts as the CLI program's user. You have to consider things like:
Can the GUI anticipate or understand all possible CLI program output? How about errors? How complex will that be? Consider parsing Unix "ls" output: simple enough. How about Windows command prompt "dir" output? A bit more funky.
The CLI program may take time to execute, and this must be presented in the GUI. The GUI may also have to prevent the user from running another instance of the CLI.
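To make those concerns concrete, here is a minimal sketch of the pattern in Python/Tkinter (my assumption of a toolkit; "ls -l" stands in for your real CLI client). Keeping the button disabled while the CLI runs also covers the single-instance concern:

    import queue
    import subprocess
    import threading
    import tkinter as tk

    result_q = queue.Queue()

    def run_cli():
        btn.config(state=tk.DISABLED)          # block a second concurrent run
        def worker():
            # "ls -l" is a placeholder for your real CLI client and arguments.
            proc = subprocess.run(["ls", "-l"], capture_output=True, text=True)
            result_q.put(proc)                 # hand the result to the GUI thread
        threading.Thread(target=worker, daemon=True).start()
        root.after(100, poll)

    def poll():
        try:
            proc = result_q.get_nowait()
        except queue.Empty:
            root.after(100, poll)              # CLI still running; keep the GUI live
            return
        text.insert(tk.END, proc.stdout)
        if proc.returncode != 0:               # surface CLI errors in the GUI
            text.insert(tk.END, "ERROR:\n" + proc.stderr)
        btn.config(state=tk.NORMAL)

    root = tk.Tk()
    text = tk.Text(root)
    btn = tk.Button(root, text="Run", command=run_cli)
    btn.pack()
    text.pack()
    root.mainloop()

Parsing the captured stdout is a separate job, and as noted above its difficulty depends entirely on how regular the CLI's output format is.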

You might want to consider tcl/tk. I've written several successful commercial GUIs that work in exactly this manner.
I'll admit that it takes a little skill to craft a stunning GUI, but it's not impossible, and not even that hard. You won't be able to reproduce the eye candy of Flash or Silverlight, and if that is important this might not be the right solution for you.
If, on the other hand, you are more concerned with Getting The Job Done, tcl is a very viable choice. It's easy to learn and easy to integrate with command line tools.

Related

Debugging a proprietary recursive script

So after reading these questions here and here, I don't really see how to ask this question without explaining my situation.
Note:
I am very new to proprietary platforms in general, coming from a background in Free/Open software.
Essentially, I have a program at work for our in-house proprietary motion control platform. It uses a script with macros to communicate between the program on the UI side and the firmware on the motion controller side. The documentation and development of all the involved software is handled by a sister company, and the documentation is incomplete. We do have support from them, but they are stretched thin, have their own products, are 2,700 mi away, and can't be on call in-house for us. I do not have, and am not allowed to have, the source for either the main application or the firmware. And to REALLY kill it, our last, only real programmer left. We are alone with this script and a grip of new products that all depend on it working well within the software. The script is in need of a serious, regular bug-checking solution for each configured machine that we use this motion controller system with.
So I am going to start debugging this script. That's what's happening here.
I have tried to write a mock-up bash implementation, but the degrees of recursion, the arrays pulled in from .ini files, the system-defined command set, and the parsing of the script in the firmware have made this bash debug implementation difficult, and I'm not sure I can pull it off fully without hacking away important stuff.
I looked at other options like ANTLR, which is a bit over my head too, but might work.
Now, the controller communicates over some kind of static-IPv4 crossover Ethernet setup (Telnet?), but it also has an RS-232 serial port that will output formatted strings produced by a 'sout' command. My intent has been to mod the script to output as much as I can get from formatted strings with predefined system commands and variables, but I'm afraid that it won't give me the big picture.
The script itself defines global, system, and local variables and functions that (because it's on the motion controller side, with hardware limitations) can be nested 25 subroutines deep. The real gotchas seem to be where the recursive side of the script enters and exits these functions as they are called from the UI and from other functions. Nothing jumps out, but without in-depth docs I can only see so much, and I have pretty much just been learning all this with another engineer who asks questions of the sister company.
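For what it's worth, the kind of post-processing I have in mind would be something like this sketch, assuming I tag every subroutine with sout lines such as "TRACE ENTER name" / "TRACE EXIT name" (a format I'm making up here) and pipe a capture of the serial output into it:

    import sys

    # Rebuild an indented call trace from captured serial output on stdin.
    depth = 0
    stack = []
    for line in sys.stdin:
        parts = line.split()
        if len(parts) >= 3 and parts[0] == "TRACE":
            event, name = parts[1], parts[2]
            if event == "ENTER":
                print("  " * depth + "-> " + name)
                stack.append(name)
                depth += 1
            elif event == "EXIT":
                depth = max(depth - 1, 0)
                print("  " * depth + "<- " + name)
                if not stack or stack.pop() != name:
                    print("  " * depth + "!! unbalanced exit: " + name)

Even something this crude should show where the subroutine entries and exits stop balancing.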
Can anyone give me some advice as to how I should proceed? I know it's probably a lot to ask, but I am kind of stuck in a rut and need more cognitive resources than my skill and coffee allow for.
Thank you for your time.

Using Puppet (or anything else) instead of a bash script for an SSH-based deployment

I have a custom build and deployment script which works over SSH and deploys to servers (running macOS). The bash script does a lot of simple things like copying files, backing up the old ones, and applying the correct SQL scripts for a forward-moving database. But there are some advanced things, like starting a remote SQL upgrade procedure which can be disconnected from; once the deployment script is started again, it only goes forward if the SQL script has been applied completely (in short, there is some flow control happening, and bash is not really ideal for such stuff).
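To illustrate the flow control I mean, here it is reduced to a sketch (the marker path, host, and script names are made up; the idea is that the SQL upgrade writes a marker file as its very last step):

    import subprocess

    MARKER = "/var/tmp/sql_upgrade_done"   # written by the SQL upgrade's final step
    HOST = "deploy@server.example.com"

    def sql_upgrade_finished():
        # ssh exits 0 only if the marker file exists on the remote side.
        return subprocess.run(["ssh", HOST, "test", "-f", MARKER]).returncode == 0

    if sql_upgrade_finished():
        subprocess.run(["ssh", HOST, "deploy/continue.sh"], check=True)
    else:
        print("SQL upgrade not finished yet; refusing to continue.")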
The script is already huge and is a mess, since bash is not meant for this kind of detailed logic. Can you recommend some tools or libraries that would make things easier?
From what you tell us, I think you need a deployment tool rather than a configuration management tool.
To simplify, I'll distinguish the two like this:
A deployment tool is a 'push' tool: When you press the button, the required actions are run to make the deployment. It's a one-step process (it can have multiple actions, but it's launched once).
A configuration management tool is usually a 'pull' tool, where your servers periodically check if their configuration is exactly as the CM server tells them to be - and apply changes, if needed. You configure your servers once, and after that the system assures that all is as it should be. It is also a great tool to easily clone systems.
For deployment tools, I personally know Fabric, a great Python tool. But there is also Capistrano in the Ruby world. I don't know of any others.
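To give a flavor, a push-style deploy in Fabric looks roughly like this (a sketch against Fabric's 2.x Connection API; the host, paths, and migration step are invented, not your real setup):

    from fabric import Connection

    def deploy():
        c = Connection("deploy@server.example.com")
        c.run("cp -a /srv/app /srv/app.bak")             # back up the old release
        c.put("build/app.tar.gz", "/srv/app.tar.gz")     # push the new build
        c.run("tar -xzf /srv/app.tar.gz -C /srv/app")
        c.run("psql mydb -f /srv/app/migrations.sql")    # forward-only DB migration

    deploy()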
For CM tools, Puppet and Chef seem to be the preferred choice of people nowadays. Cfengine is an older tool, which had some problems (I don't know if that has changed).
Here are my recommendations:
Puppet
Chef
cfengine
These are all free (as in beer) and allow you to do what you're wanting. They will require you to adapt your current bash script into modules to fit their design/framework. It's a bit of work, but in the long run it tends to be better, since the frameworks take care of error checking, converging configurations, and a lot of other things you'd otherwise have to insert into your own code by hand.
I've also used Opsware previously for this sort of thing, but that costs a fair bit of cash and, for what you're trying to do, does not offer significantly more benefit.
In some cases, moving from a bash script to a complete solution is not as straightforward as many cloud services claim.
With 'don't try new things when you're on a deadline' in mind: it could also be good timing to refactor your bash scripts.
I have done automated, repeatable deployments in the past using PaaS, or just using GIT/SVN hooks with deployogi (which is bash): https://github.com/coderofsalvation/deployogi
I understand your situation, but I'm not sure whether it's fair to say that the bash language implies 'a mess' and 'complex'. Every language allows you to hide complexity, no? I guess code (in whatever language) gets overly complex when time does not allow us to refactor :)
PaaS is great. But always needed? I think not.

When should I add a GUI?

I write many scripts at home and on the job. Most of the time the scripts get used only a few times to accomplish their chosen task and then are never used again. However, sometimes I write a script to do something more complicated, something that requires user input. It is at this point that I usually agonize over whether to implement a GUI or stick with a y/n, press 1-10, etc. command-line interface. This type of interface can become tedious to use and difficult to maintain.
I know some things lend themselves to a GUI more than others, such as selecting things in a giant list. However, the time it takes to switch a command-line application to use a GUI is prohibitive. For me, it takes a good amount of time to add a GUI with even the most simple framework I can find.
I am curious whether any developers have a method of determining at what point their script has grown enough to need a GUI. Or am I going about this the wrong way: should I always write my scripts assuming I might later add a GUI?
This doesn't answer your question, but FWIW an intermediate step between a GUI and the command line is to have a configuration file instead of a UI:
Edit the configuration file
Run the program
A configuration file format can, if necessary, be complicated and well-commented.
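For example, with Python's configparser the whole "UI" can be a small INI file (the file name and keys here are invented for illustration):

    import configparser

    # job.ini might look like:
    #   [input]
    #   # which directory to scan
    #   folder = /data/incoming
    #   [options]
    #   dry_run = yes
    config = configparser.ConfigParser()
    config.read("job.ini")

    folder = config.get("input", "folder")
    dry_run = config.getboolean("options", "dry_run")
    print(folder, dry_run)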
As with many questions of this type, the answer is that it depends.
If your program/script does just one single thing, taking a few inputs from the user, it is better to stick with the non-GUI mode.
If the application does more than one thing, and you think the user will use it to do a lot of stuff, you may consider a GUI.
Are you planning to distribute this program to others? Then it is better to provide a GUI.
If the users are non-technical, a GUI is a must!
That's it.
When you want to hand your stuff over to someone else in a discoverable way. Command-line scripts are awesome because they are simple and elegant, but they are not very discoverable. That is, if you were to hand your scripts over to someone else with no documentation, would they be able to figure out what they are and how to use them? If your tasks are so simple that myscript /? will explain what you need to do fully, then you don't need a GUI.
If, on the other hand, you are handing your scripts over to someone who isn't so technical, or who needs more visual guidance about the task to be done, then by all means, a GUI is a good way to go. You might even want to keep your scripts as they are and just create a separate GUI that runs them, for maximum flexibility.
I think this decision also depends on the audience who will be using your script: if it is people who are comfortable working with the command line, then there is no pressing need to add a GUI, as long as your script has a good /help which explains all the parameters it accepts. But if you want the "average user" to be able to use your program, I'd rather add a GUI, because otherwise your program might not be intuitive enough for that user group.
If you only need some "dialogs" to improve your scripts, you can use KDE's KDialog or GNOME's Zenity.
I can't count the number of times I've written what I thought would be a 'one-off' and it became more useful than I thought, and I ended up writing a GUI for it, or I've needed to come back and use a program months later. The advantage of the GUI is that it makes it easier to remember what would otherwise likely be command line arguments: for flags and options you can simply use check boxes, combo boxes, radio buttons, and file selectors for filenames. I use Borland C++ RAD, so it is quite quick and easy to throw together a simple (or even not so simple) dialog box. I now often start by creating the GUI.
If you use Linux, try Zenity. It's an easy-to-use tool for making a GUI for command-line programs.
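For instance, a script can drive Zenity like this (shown from Python here, though plain shell works just as well; assumes zenity is installed):

    import subprocess

    # Yes/no question: zenity exits 0 for "Yes" and 1 for "No".
    answer = subprocess.run(["zenity", "--question", "--text", "Delete old backups?"])
    if answer.returncode == 0:
        # Text-entry dialog: the typed value arrives on stdout.
        name = subprocess.run(["zenity", "--entry", "--text", "Archive name:"],
                              capture_output=True, text=True).stdout.strip()
        print("Using archive name:", name)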

Practical Alternative for Windows Scheduled Tasks (small shop)

I work in a very small shop (2 people), and since I started a few months back we have been relying on Windows Scheduled Tasks. Finally, I've decided I've had enough grief with some of its inabilities, such as:
No logs that I can find except on a domain level (inaccessible to machine admins who aren't domain admins)
No alerting mechanism (e-mail, for one) when a job fails.
Once again, we are a small shop. I'm looking to make the same kind of upgrade to our scheduling system that I'm making to source control (VSS --> Subversion). I'm looking for suggestions of systems that:
Are able to do the two things outlined above
Have been community-tested. I'd love to be a guinea pig for exciting software, but job scheduling is not my day job.
Ability to remotely manage jobs a plus
Free is a plus. Cheap is okay, but I have very little interest in going through a full-blown sales pitch with 7 PowerPoint presentations.
Built-in ability to run common tasks besides .EXEs is a (minor) plus (run an assembly by name, run an Excel macro by name, run a database stored procedure, etc.).
I think you can look at:
http://www.visualcron.com/
Consider Cygwin and its version of cron. It meets requirements #1 through #4 (though without a nice UI for #3).
Apologies for kicking up the dust here on a very old thread, but I couldn't disagree more with what's been presented here.
Scheduled tasks in Windows are AWESOME (a %^#% load better than writing services, I might add). Yes, not without limitations, but still extremely powerful. I rely on them in earnest for a variety of different things.
If you have even a slight grasp of C#, you can write a custom "task" (essentially a console application) to do, well, virtually anything. If persistent/accessible logging is what you're after, why not something like Serilog or NLog? Even at the time of writing, each had a very robust feature set. A tool like that, in conjunction with some C#, could have solved both your problems very easily.
Perhaps I'm missing the point, but it seems to me that this isn't really a problem. At least not anymore...
If you're looking for a free tool, there are plenty of implementations of the popular cron tool for Windows, for example CRONw. It's pretty easy to configure and maintain. You could easily add custom WSH scripts to send your emails and add log entries.
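The alert-on-failure part, for example, is just a small wrapper around the scheduled command (sketched here in Python rather than WSH; the command, SMTP host, and addresses are placeholders):

    import smtplib
    import subprocess
    from email.message import EmailMessage

    # nightly_job.exe is a placeholder for whatever the scheduler runs.
    proc = subprocess.run(["nightly_job.exe"], capture_output=True, text=True)
    if proc.returncode != 0:
        msg = EmailMessage()
        msg["Subject"] = "Scheduled job failed (exit %d)" % proc.returncode
        msg["From"] = "jobs@example.com"
        msg["To"] = "admin@example.com"
        msg.set_content(proc.stdout + "\n" + proc.stderr)
        with smtplib.SMTP("mail.example.com") as smtp:
            smtp.send_message(msg)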
If you're going the commercial way, BMC Control-M is arguably one of the best, but I don't believe it is particularly cheap.
You may also consider some newer packages like JobScheduler.
Pretty old question, but we use Jenkins. Yes, its main purpose is CI/CD, but it's also a really nice UI for cron, with a ton of plugins and integrations.

Why shouldn't I "bet the future of the company" on shell scripts? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I was looking at http://tldp.org/LDP/abs/html/why-shell.html and was struck by:
When not to use shell scripts
...
Mission-critical applications upon which you are betting the future of the company
Why not?
Using shell scripts is fine when you're playing to their strengths. My company has some class 5 soft switches, and the call processing code and the provisioning interface are written in Java. Everything else is written in KSH: DB dumps for backups, pruning, log file rotation, and all the automated reporting. I would argue that all those support functions, though not directly related to the call path, are mission critical. Especially the DB interaction. If something went wrong with the DB-interaction code and dumped the call routing tables, it could put us out of business.
But nothing ever does go wrong, because shell scripts are the perfect language for stuff like this. They're small, they're well understood, manipulating files is their strength, and they're stable. It's not like KSH09 is going to be a complete rewrite because someone thinks it should compile to byte code, so it's a stable interface. Frankly, the provisioning interface written in Java goes wonky fairly often, and the shell scripts have never messed up that I can remember.
I kind of think the article points out a really good list of reasons not to use shell scripts, with the single mission-critical bullet you point out being more of a conclusion based on all the other bullets.
With that said, I think the reason you do not want to build a mission-critical application on a shell script is that even if none of the other bullet points apply today, any application that is maintained over a period of time will evolve to the point of being bitten by at least one of those potential pitfalls. And there isn't much you'll be able to do about it, short of a complete do-over to come up with a fix, wishing you had used something more industrial-strength from the beginning.
Scripts are nothing more or less than computer programs. Some would argue that scripts are less sophisticated. These same folks will usually admit that you can write sophisticated code in scripting languages, but that these scripts are really not scripts any more, but full-fledged programs, by definition.
Whatever.
The correct answer, in my opinion, is "it depends". Which, by the way, is the same answer to the converse question of whether you should place your trust in compiled executables for mission critical applications.
Good code is good, and bad code is bad - whether it is written as a Bash script, a Windows CMD file, in Python, Ruby, Perl, Basic, Forth, Ada, Pascal, Common Lisp, Cobol, or compiled C.
Which is not to say that choice of language doesn't matter. There are very good reasons, sometimes, for choosing a particular language or for compiling vs. interpreting (performance, scalability, capability, security, etc). But, all things being equal, I would trust a shell script written by a great programmer over an equivalent C++ program written by a doofus any day of the week.
Obviously, this is a bit of a straw man for me to knock down. I really am interested in why people believe shell scripts should be avoided in "mission-critical applications", but I can't think of a compelling reason.
For instance, I've seen (and written) some ksh scripts that interact with an Oracle database using SQL*Plus. Sadly, the system couldn't scale properly because the queries didn't use bind variables. Strike one against shell scripts, right? Wrong. The issue wasn't with the shell scripts but with SQL*Plus; in fact, the performance problem went away when I replaced SQL*Plus with a Perl script that connected to the database and used bind variables.
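For anyone unfamiliar with the distinction, here is the bind-variable idea in miniature (a Python sketch rather than the Perl I actually used; sqlite3 stands in for Oracle, where cx_Oracle would use :1-style placeholders):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("CREATE TABLE calls (id INTEGER, route TEXT)")

    # Bound parameters: one statement reused with different values, instead of
    # the database hard-parsing a fresh literal-laden statement every time.
    for i in range(1000):
        cur.execute("INSERT INTO calls VALUES (?, ?)", (i, "route-%d" % (i % 7)))

    cur.execute("SELECT COUNT(*) FROM calls WHERE route = ?", ("route-3",))
    print(cur.fetchone()[0])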
I can easily imagine that putting shell scripts in spacecraft flight software would be a bad idea. But Java or C++ might be equally poor choices. The best choice would be whatever language (assembly?) is usually used for that purpose.
The fact is, if you use any flavor of Unix, you are using shell scripts in mission-critical situations, assuming you consider booting up mission critical. When a script needs to do something that shell isn't particularly good at, you put that portion into a sub-program. You don't throw out the script wholesale.
It is probably shell scripts that help take a company into the future. I know, just from a programming standpoint, that I would waste a lot of time doing repetitive tasks that I have delegated to shell scripts. For example, I know most of the Subversion commands for the command line, but if I can lump all those commands into one script I can fire at will, I save time and mental energy.
Like a few other people have said, language is a factor. For my short, don't-want-to-remember steps and glue programs, I completely trust my shell scripts and rely upon them. That doesn't mean I'm going to build a website that runs bash on the backend, but I will surely use bash/ksh/python/whatever to help me generate the skeleton project and manage my packaging and deployment.
When I read this quote, I focus on the "applications" part rather than the "mission critical" part.
I read it as saying bash isn't for writing applications; it's for, well, scripting. So sure, your application might have some housekeeping scripts, but don't go writing critical-business-logic.sh, because another language is probably better for stuff like that.
I would wager the author is showing they are uncomfortable with certain aspects of quality with respect to shell scripting. Who unit tests bash scripts, for example?
Also, scripts are rather heavily coupled to the underlying operating system, which can be a negative.
No matter what, we all need a flexible tool to interact with the OS. It is human-readable interaction with an OS that we use; it's like using a screwdriver with screws. The command line will always be a tool we need, whether as an admin, a programmer, or a network engineer. Look at Windows: they even expanded on it with PowerShell.
Scripts are inappropriate for implementing certain mission-critical functions, since they must have both +r and +x permissions to function. Executables need only +x.
The fact that a script has +r means users may be able to make a copy of the script, edit/subvert it, and execute their edited Cuckoo's-Egg version.
