Creating quick GUI front ends - user-interface

I wanted a GUI front end for a script that accepts numerous command-line options, most of which are UNIX paths. Rather than typing them in every time (even with auto-completion), I thought I'd create a GUI front end containing text boxes with buttons beside them, which when clicked invoke the file browser dialogue. Later I thought I'd extend this to other scripts, which would surely require a different set of GUI elements. That made me wonder whether there's an existing app that would let me create a GUI dialog by parsing some kind of description of the items the window should contain.
I know of programs like Zenity, but I don't think it gives me what I want. For example, if I were to use it for the first script, it would end up flashing a sequence of windows in succession rather than getting everything done from a single window.
So basically I'm looking for a cross-platform program that lets me create a window from a text description, probably XML or the like. Please suggest.
Thanks
Jeenu

Mozilla's XUL is a cross-platform application framework. You could write an app as a Firefox plugin or a standalone XUL application.

Mono and MonoDevelop could work for this. Or even something super simple like Shoes.
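If rolling your own turns out to be easier than adopting one of these, the idea itself is small enough to sketch. Below is a minimal, hypothetical example in Python/Tkinter; the XML format, field names, and "Run" behaviour are all invented for illustration, not a feature of any of the tools mentioned above.

import tkinter as tk
from tkinter import filedialog
import xml.etree.ElementTree as ET

# Made-up description format: one <path> element per option that needs a
# text box plus a Browse button.
DESCRIPTION = """
<dialog title="Run my script">
    <path name="input"  label="Input file"/>
    <path name="output" label="Output file"/>
</dialog>
"""

def build(desc):
    spec = ET.fromstring(desc)
    root = tk.Tk()
    root.title(spec.get("title", "Dialog"))
    entries = {}
    for row, field in enumerate(spec):
        tk.Label(root, text=field.get("label")).grid(row=row, column=0, sticky="w")
        entry = tk.Entry(root, width=40)
        entry.grid(row=row, column=1)
        entries[field.get("name")] = entry

        def browse(e=entry):
            # Pop up the file browser and drop the chosen path into the box
            # (askdirectory() would be the equivalent for directory paths).
            path = filedialog.askopenfilename()
            if path:
                e.delete(0, tk.END)
                e.insert(0, path)

        tk.Button(root, text="Browse...", command=browse).grid(row=row, column=2)

    def done():
        # Just print the collected options; a real front end would build the
        # command line and run the script.
        print({name: e.get() for name, e in entries.items()})
        root.destroy()

    tk.Button(root, text="Run", command=done).grid(row=len(list(spec)), column=1)
    root.mainloop()

build(DESCRIPTION)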

Related

GUI adapter for old DOS application

I have an old DOS application which accepts some files as input, does some calculations, and saves the results to the file system. The app uses the terminal as a sort of GUI, where you can choose the input files, the types of calculations to perform, and where to save the result. I don't know the logic behind the calculations and am not able to reuse it in a new project.
The problem is that the users of this app want a modern-looking GUI which will be easier to work with.
That is why I have the idea of creating an adapter which translates button clicks into DOS commands and grabs the text output to show in a modern GUI.
Is it possible, and where should I start?
It is possible. Where to start depends on your programming tools. If you use a RAD tool like Delphi, Lazarus or Visual Basic, design your GUI first and define the events afterwards. For a button click the handler is ButtonXClick(). In the RAD tool you will find an object inspector with properties and their values, and events and their values. Go to the Events page there, look for the OnClick event, and double-click in its value line; you will get an empty event handler where you can write the code for your application.
If you don't have or use such a RAD tool, take a GUI framework instead. Create your front end and write the code that should be called in response to your button clicks.
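As a rough sketch of the adapter idea in Python/Tkinter: a button click builds a command line for the old program and shows whatever it prints. The executable name and its options below are placeholders, and a real DOS binary may need to be driven through stdin or an emulator such as DOSBox instead.

import subprocess
import tkinter as tk

def run_calculation():
    # Translate the button click into a command line for the old program
    # and capture its text output so it can be shown in the new GUI.
    result = subprocess.run(
        ["oldapp.exe", "--input", "data.txt", "--mode", "1"],  # placeholder command
        capture_output=True, text=True
    )
    output.delete("1.0", tk.END)
    output.insert(tk.END, result.stdout)

root = tk.Tk()
root.title("Modern front end")
tk.Button(root, text="Run calculation", command=run_calculation).pack()
output = tk.Text(root, width=80, height=20)
output.pack()
root.mainloop()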

How to open a file in multiple programs as my 'default program'

So I've got a lot of manual HTML checking to do, most of which is just a quick glance at the code and then making sure the page displays correctly.
My thought is that it would be much easier to do this if I could set multiple programs to open when I double-click on a .htm file. By this I mean open the file in the programs I specify all at once, without multiple "right-click > Open with > the program" actions.
So really I'd like it to open in my HTML editor, Chrome, and Firefox all at once; then I can just glance at them all and go about my business. I figure I'll still have to close all of them manually, but at least I can do that every once in a while and not EVERY time.
Any ideas? I was thinking about a simple man-in-the-middle app to open all of the programs, but that seems like it would be a rather large solution to a small issue. Is there a simple (and fairly quick-to-execute) way of doing this in a Windows-based fashion, or should I just try to slim down this proposed app as much as possible and hope it won't be too slow to open?
I was thinking about a simple man-in-the-middle app to open all of the programs
That is exactly what you need. And then you can add that app to the "Open With" menu of the file extension(s) you want to process.
that seems like it would be a rather large solution to a small issue
Not really. It would actually be a very small app to implement. All you need is a configuration to specify the target apps; then receive the selected filename(s) as command-line parameters and pass them in a loop to the other apps using ShellExecute/Ex() or CreateProcess() as needed. Not much to it.
is there a simple (and fairly quick-to-execute) way of doing this in a Windows-based fashion
Not really. You have to create your own app for it, and then register it so you can invoke it when needed.
should I just try to slim down this proposed app as much as possible and hope it won't be too slow to open?
It won't be slow at all, unless you make it slow. If you really want to cut down overhead, you could even implement it as a simple .bat script that uses the start command to launch the files, instead of compiling an actual executable.
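If a tiny scripted launcher is the route taken, here is a minimal sketch in Python doing the same loop the answer describes; the program paths are examples and need to match the local installs, and the script still has to be registered under "Open With" (a .bat using start would be equivalent).

import subprocess
import sys

# Example target programs; adjust to the installed locations.
PROGRAMS = [
    r"C:\Program Files\Google\Chrome\Application\chrome.exe",
    r"C:\Program Files\Mozilla Firefox\firefox.exe",
    r"C:\Program Files\Notepad++\notepad++.exe",
]

# "Open With" passes the selected file(s) as command-line arguments.
for filename in sys.argv[1:]:
    for program in PROGRAMS:
        subprocess.Popen([program, filename])   # launch and don't wait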
Looks like you would create something called a Shortcut Menu Handler. Beware, as that involves messing with the registry.

Handle GUI window changes

I'm doing an automation script for installation wizards using AutoIt. I'm trying to handle window changes in some way.
Can someone explain how these GUIs work?
When I click the Next button it looks as if the components in the GUI are being changed. Is this the case? Or is a new window created and the old one destroyed?
I've noticed that the process ID is the same for all windows.
I'm sure there is some way to know which "state" the GUI is in, or which step it is on?
By the way, all the windows have the same title.
Thanks
/Anders
This will be dependent on the program you are automating.
The easiest approach would be to look at what changes in the GUI between stages; likely candidates are a label that gives instructions for that step, or a button whose text changes (e.g. if the button says "Finish" then you know you're at the end).
Most installer programs have child windows for grouping the controls of each stage. These are typically implemented as dialog resources (as can be seen when using something like reshacker on them). So although the window remains the same, the panels are being created and destroyed as appropriate. This is a very neat method, for the obvious reason that you don't need code to create and destroy a lot of controls. Resource-created dialogs don't have nice class names the way windows sometimes do, though, so this may not be a reliable way to check the state.
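In AutoIt the "look at the text" check can be done with WinGetText or ControlGetText. Purely to illustrate the mechanism, here is a Python sketch using the raw Win32 API that enumerates the child controls and infers the step from their captions; the window title and caption strings are made-up examples.

import ctypes
from ctypes import wintypes

user32 = ctypes.windll.user32
user32.FindWindowW.restype = wintypes.HWND

def window_text(hwnd):
    # Read a control's caption (works cross-process for buttons and labels).
    length = user32.GetWindowTextLengthW(hwnd) + 1
    buf = ctypes.create_unicode_buffer(length)
    user32.GetWindowTextW(hwnd, buf, length)
    return buf.value

# All wizard pages share this title in the question's scenario (example title).
hwnd = user32.FindWindowW(None, "Setup - My Product")

texts = []

@ctypes.WINFUNCTYPE(wintypes.BOOL, wintypes.HWND, wintypes.LPARAM)
def collect(child, lparam):
    texts.append(window_text(child))
    return True   # keep enumerating

user32.EnumChildWindows(hwnd, collect, 0)

# Decide which page we are on from the visible captions.
if any(t in ("Finish", "&Finish") for t in texts):
    print("Last page reached")
elif any("License" in t for t in texts):
    print("License agreement page")
else:
    print("Some other page:", texts)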

AppleScript Editor record doesn't work

I opened AppleScript Editor and pressed the Record button.
Then I ran TextEdit, created a file and typed some text into it.
When I clicked the Stop button in AppleScript Editor, nothing had been recorded; the window is blank.
What is the problem?
You can use the Record feature of Automator to record the UI interaction steps needed for the relevant workflow. You can then literally select and copy the recorded steps in Automator and paste them into a new AppleScript Editor window. This gives you AppleScript which may or may not work. You'll probably want or need to edit the resulting script, but at least it should give an idea of what is needed to achieve your workflow programmatically. This method is usable regardless of whether the target application has an AppleScript dictionary or supports the AppleScript Editor Record button, because it is the interaction with the underlying UI elements that is recorded.
Steps:
Open Automator
Start a new "Workflow"
Start recording
Perform whatever steps you require with your app (in this case typing into TextEdit)
Stop recording
This will create a list of recorded actions in Automator.
Select all of these and copy (CMD+C)
Open the AppleScript Editor app
Paste (CMD+V). The result will be valid AppleScript that performs the actions you just recorded.
Note that, as is generally the case with UI automation, Automator records the steps exactly and the script plays them back exactly. This may not be exactly what you want; for example, if a different application were active, the text could get typed in there instead. The generated AppleScript should be used as a guide to the final AppleScript.
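What you end up with is "System Events" UI scripting rather than commands from TextEdit's own dictionary. The snippet below is a hand-written approximation of such a recorded "type into TextEdit" step, not the literal Automator output, and it assumes macOS with Accessibility access granted to the process running it; it is driven through osascript so the example stays in Python.

import subprocess

# Hand-written approximation of recorded UI scripting (not literal Automator
# output). System Events sends the keystrokes to whatever app is frontmost.
applescript = '''
tell application "TextEdit" to activate
delay 1
tell application "System Events"
    keystroke "notes typed via UI scripting"
end tell
'''

# osascript reads the script from standard input when no file is given.
subprocess.run(["osascript"], input=applescript, text=True, check=True)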
The problem is that applications need to explicitly support AppleScript recording in order for it to work, but almost no applications actually do. Finder still supports it a bit, and maybe a couple other apps (BBEdit comes to mind), but for the most part, AppleScript recording has been pretty useless for quite some time.
Not all apps are recordable (in fact, only a small handful are). Recordability is something each app needs to implement, and I guess TextEdit isn't recordable.

Taking notes to a background text editor without switching windows

I am looking for a solution to a specific use case:
When I read something in my browser or PDF reader, I want to take notes without switching windows. I want to type right in the browser or PDF reader, but the typed text should go to a background text editor like Notepad.
Is this possible?
Do you know any existing automation script that handles this use case?
You should create an application that uses keylogger-like techniques (e.g. global hooks) to monitor keypresses and, depending on some condition (a setting you have made, the currently active window, ...), either passes them on to the application normally or suppresses them and stores them in a buffer. That buffer would then be shown to the user as needed.
Still, in my opinion a much more convenient way to perform a similar task would be to create an application consisting of a semitransparent edit box that can be shown and hidden simply with a hotkey. This would avoid all the hook machinery and the potential problems that may arise from it.
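As a rough illustration of that semitransparent-overlay idea (not a finished tool), here is a Python/Tkinter sketch. The toggle key and the notes file name are arbitrary choices, the key only works while the note window has focus, and a real version would need an OS-level global hotkey (e.g. via AutoHotkey or RegisterHotKey) to bring the window back once hidden.

import tkinter as tk

root = tk.Tk()
root.title("Quick notes")
root.attributes("-topmost", True)   # stay above the browser / PDF reader
root.attributes("-alpha", 0.75)     # semitransparent

box = tk.Text(root, width=50, height=10)
box.pack(fill="both", expand=True)

def save_and_hide(event=None):
    # Append the notes to a plain text file, then hide the overlay.
    with open("notes.txt", "a", encoding="utf-8") as f:
        f.write(box.get("1.0", "end"))
    box.delete("1.0", "end")
    root.withdraw()   # re-showing it is where the global hotkey would come in

box.bind("<F8>", save_and_hide)
root.mainloop()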
You might try the AutoHotkey scripting language... I can write a little script that would do exactly what you need, and AFAIK in this particular case it wouldn't need keyboard hooks.
--EDIT--
AutoHotkey is VERY simple to use and learn, so even if you want to write the script yourself you can do it in a very short time, even without any prior knowledge of AHK. Then again, I can help you with it.
