This is more of a general design question, as I'm stuck on how to proceed and this is my first time developing an application...
I'm developing a reporting application in VS 2015 that requires two types of functionality. It needs a GUI so that users can create and interact with reports, and those reports need to be scheduled via Windows Task Scheduler. I'm planning on using a Console Application for the scheduling portion. My question is, what would be the best way to implement this? Right now I have two separate Projects in a single Solution. Is this the best route given my needs, or is there a better option I'm not aware of? I've done some searching online but haven't found a workable solution. It's especially tricky because the scheduling portion needs to pull the application settings from the Windows Forms Application.
Any help or guidance would be greatly appreciated. Thank you in advance!
The only reason you would need a console application would be if you actually needed a console interface. It doesn't sound like that's the case—the interface will be written in WinForms. Therefore, you don't actually need two separate applications. You can combine all the necessary functionality in a single executable.
The way to do this is by checking for command-line parameters that indicate whether the app should run interactively or headless. You'll probably want the app to run interactively when no command-line parameters are passed. This is the normal case: the situation the user gets into when they double-click your app to launch it from Explorer.
When it comes time to schedule your app to run a task in the background (with Task Scheduler or anything else), you signal this by passing a special command-line parameter to your app. You can decide what this is, and you may need several of them if your app can do multiple things in the background. If configuration information/parameters need to (or can) be passed to the app to configure how it should perform the background task, you can pass these on the command line, too. Or, as you mention in the question, you could pull these settings from whatever was set/saved by the user the last time they ran the interactive version of the app.
The trick is just checking for these command-line parameters in your application's Main method. In the design I proposed, if there are no command-line parameters specified, then you just go ahead and create the main form like you normally would. If there are command-line parameters, then you need to parse them to see what action is being requested. Once you've parsed them and determined which background task should be run, you just run that background task—without ever creating/showing a form.
There are lots of different solutions for parsing command-line parameters. Using a library would probably be the easiest way, and also give you the most features. But if you just needed something really simple, like a /background mode, then you could easily write the code for this yourself, without taking a dependency on a library or needing to learn how to use it.
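To make that concrete, here's a minimal sketch of what Main might look like (MainForm, RunScheduledReports, and the /background switch are placeholders for illustration, not anything from your project):

    using System;
    using System.Windows.Forms;

    static class Program
    {
        [STAThread]
        static void Main(string[] args)
        {
            if (args.Length == 0)
            {
                // No parameters: run interactively, as when the user
                // double-clicks the app in Explorer.
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Application.Run(new MainForm());   // hypothetical main form
            }
            else if (string.Equals(args[0], "/background", StringComparison.OrdinalIgnoreCase))
            {
                // "/background": run headless for Task Scheduler, pulling
                // whatever settings the interactive version last saved
                // (e.g. via Properties.Settings.Default).
                RunScheduledReports();             // hypothetical background task
            }
            else
            {
                // Won't be visible under the Windows Application output
                // type (no console is attached), but does no harm.
                Console.Error.WriteLine("Unknown argument: " + args[0]);
                Environment.Exit(1);
            }
        }

        static void RunScheduledReports()
        {
            // Generate and deliver the reports here.
        }
    }

The scheduled task then just points at MyReportApp.exe /background, or whatever you name the executable and the switch.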
So you could do all of this with a single project in a single solution if you wanted to. Or, you could split different aspects of the functionality out into different projects that compile to libraries (e.g., DLLs), but still have only a single executable for simplicity.
Related
A project I'm writing at the moment requires a Windows Service to be written as it needs to run unattended. One requirement in the specification says that the service should also be able to be run interactively. This is no great problem as I can simply use Reflection to get at the OnStart/OnStop methods and use Console.ReadKey() to pause for keyboard input.
All that's really causing me to pause here is that in order to do this I need to change the output type of the project from Windows Application to Console Application. I'd like someone who has a detailed understanding of these two choices to explain the difference between them and whether there are any ramifications for stability in production.
If, while executing in interactive mode, the service is NOT showing Forms, you can change the output type and expect no trouble. If Forms are needed, the output type should remain Windows Application. As a general rule, I always start developing service apps as Console apps, because they're easy to debug. Once testing and debugging are almost done, I change it to a service app.
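For what it's worth, a common way to get both behaviors from one executable is to branch on Environment.UserInteractive and expose public wrappers around OnStart/OnStop, which also sidesteps the Reflection trick. A minimal sketch, where MyService stands in for the actual service class:

    using System;
    using System.ServiceProcess;   // reference System.ServiceProcess.dll

    class MyService : ServiceBase
    {
        protected override void OnStart(string[] args) { /* start the real work */ }
        protected override void OnStop() { /* stop the real work */ }

        // Public wrappers so interactive mode doesn't need Reflection.
        public void StartInteractive(string[] args) { OnStart(args); }
        public void StopInteractive() { OnStop(); }
    }

    static class Program
    {
        static void Main(string[] args)
        {
            var service = new MyService();

            if (Environment.UserInteractive)
            {
                // Launched from a console; this branch needs the Console
                // Application output type for ReadKey to work.
                service.StartInteractive(args);
                Console.WriteLine("Running. Press any key to stop...");
                Console.ReadKey();
                service.StopInteractive();
            }
            else
            {
                // Launched by the Service Control Manager.
                ServiceBase.Run(service);
            }
        }
    }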
I have run into a case where a Windows Forms application is run regularly via a scheduled task on a Windows Server 2003 box. The GUI is, obviously, not being used to take in any user input, so it is at best pointless. But is it also dangerous? Could it cause anything to go pop on the box?
It shouldn't really do any harm.
You may want to create a standard shortcut to the application, then in Properties select the Run -> Minimized option.
Don't forget to point the Task Scheduler at the new shortcut rather than at the application directly.
"The GUI is, obviously, not being used to take in any user input, so it is at best pointless."
Just because it doesn't take input doesn't mean it does nothing. While the GUI part of it is probably pointless, the application execution itself may not be.
A Windows Forms application being run regularly is the same as any other process being run regularly, and it may be that, for whatever reason, the developer of the app wanted a GUI to appear while it was doing its thing, or had plans to allow users to interrupt the running process through the GUI.
The developer may even be using a GUI control for application execution. A "good" example of this would be using a web rendering control for its DOM processing capabilities.
"Could it cause anything to go pop on the box?"
If it doesn't correctly dispose of any resources it uses then yes.
I wouldn't imagine GUI apps are any more notorious than console apps for this, but the fact that someone perhaps unnecessarily used a GUI app (maybe they had only been introduced to WinForms projects) is a strong indicator to check the code and make sure all appropriate resources are being disposed of correctly (think 'using' blocks).
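For reference, the using pattern is just shorthand for try/finally with Dispose, so the resource is released even if an exception is thrown partway through. A trivial sketch (the file name is illustrative):

    using System;
    using System.IO;

    class Example
    {
        static void Main()
        {
            // The writer is disposed even if WriteLine throws; the same
            // pattern applies to GDI handles, fonts, bitmaps, database
            // connections, and other IDisposable resources a GUI app holds.
            using (var writer = new StreamWriter("log.txt"))
            {
                writer.WriteLine("report run at " + DateTime.Now);
            }   // writer.Dispose() runs here
        }
    }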
I have an application which needs to be able to write to Any User/Current host preference files (which requires admin privileges per Preferences Utilities Reference) and also to enable/disable a launchd agent via its plist (writable only by root).
I'm using SFAuthorizationView to require users to authenticate as an admin before altering these values.
I'm trying to decide on the best way to do the actual altering of these values.
The cheap, hackish option seems to be to use AuthorizationExecuteWithPrivileges() with mv or defaults, either via BLAuthentication or by creating something similar myself. The downside is not getting the return value of whatever command-line tool I'm executing, plus some odd esoteric bugs I've encountered (such as a -60008 error in certain situations). Apple strongly recommends against this, obviously, but people do seem to do it and have some success with it.
The second most hackish option would seem to be the "create a helper app with the setuid bit set and a --self-repair option" approach discussed in various places. This seems possible, but probably not much less trouble than the third option.
The third option is to create a fully fledged launchd daemon which will run as root and communicate with my application via a socket. This seems like overkill just to read and write some plist files, but it's also possible I may find other uses for it down the road, and it won't be the only daemon for my application, so adding another doesn't seem unreasonable.
I'm thinking about modifying this sample code for my purposes.
My two questions are:
Does the launchd daemon option seem like the best route to go for this, or is there a much easier route I'm missing?
Has anybody else successfully used that code as a basis for something similar, and does anybody see any glaring issues with it that I'm missing? I've used it successfully in a test app, but I'd be curious to hear your opinions on it.
launchd is definitely the best and safest way to go: you’ll need an installer package to get your helper into place. Do be sure that your helper does and can do absolutely nothing except edit the files you wish to target.
No experience with the code, but it's based on BetterAuthorizationSample, so that's a nice start.
There's also the authopen tool (/usr/libexec/authopen), which allows you to open files that require root privileges.
My objective is to write a program which will call another executable on a separate computer (all running Windows XP) with parameters determined at run time, then repeat for several more computers, and then collect the results. In short, I'm working on a grid-computing project. The algorithm itself is already coded in FORTRAN, but we are looking for an efficient way to run it on many computers at once.
I suppose one way to accomplish this would be to upload a script to each computer and then run that script on each machine, all automatically and driven by my own parameters. But how can I write a program that will write, upload, and run a script on a separate computer?
I had considered GridGain, but the algorithm is already coded and in a different language, so that is ruled out.
My current guess at accomplishing this task is using Expect (wiki/Expect), but I have no knowledge of the tool.
Any advice appreciated.
You can use PsExec for this:
http://technet.microsoft.com/en-us/sysinternals/bb897553.aspx
You could also look at the open source alternative RemCom:
http://rce.sourceforge.net/
It's actually pretty simple to write your own as well, but RemCom will show you how to do it if you want. That said, PsExec alone may well meet your needs.
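PsExec is also easy to drive from your own program with parameters chosen at run time. A rough sketch in C# (the host names, paths, and arguments are all placeholders; PsExec runs under your current credentials unless you pass -u/-p):

    using System;
    using System.Diagnostics;

    class RemoteRunner
    {
        static int RunRemote(string host, string remoteExe, string remoteArgs)
        {
            var psi = new ProcessStartInfo
            {
                FileName = "psexec.exe",   // assumes PsExec is on the PATH
                Arguments = "\\\\" + host + " " + remoteExe + " " + remoteArgs,
                UseShellExecute = false,
                RedirectStandardOutput = true,
            };

            using (var process = Process.Start(psi))
            {
                Console.WriteLine(process.StandardOutput.ReadToEnd());
                process.WaitForExit();
                return process.ExitCode;   // PsExec relays the remote exit code
            }
        }

        static void Main()
        {
            // Hypothetical example: run the FORTRAN solver on several machines.
            foreach (var host in new[] { "node01", "node02", "node03" })
                RunRemote(host, @"C:\grid\solver.exe", "--input job1.dat");
        }
    }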
Have a look into PVM, it was made for the type of situation you're describing, but you may have to annotate your existing codebase and/or implement a wrapper application.
I write many scripts at home and on the job. Most of the time the scripts get used only a few times to accomplish their chosen task and then are never used again. However, sometimes I write a script to do something more complicated, something that requires user input. It is at this point that I usually agonize over whether to implement a GUI or stick with a y/n, press 1-10, etc. command-line interface. This type of interface can become tedious to use and difficult to maintain.
I know some things lend themselves to a GUI more than others, such as selecting items from a giant list. However, the time it takes to convert a command-line application to a GUI can be prohibitive; for me, it takes a good amount of time to add a GUI even with the simplest framework I can find.
I am curious if any developers have a method of determining at what point their script has grown enough to need a GUI. Or am I going about this the wrong way, should I always be writing my scripts assuming I might later add a GUI?
This doesn't answer your question, but FWIW an intermediate step between a GUI and the command line is to have a configuration file instead of a UI:
Edit the configuration file
Run the program
A configuration file format can, if necessary, be complicated and well-commented.
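As a sketch of how cheap this can be, here's a minimal key=value reader with # comments (the format and file name are made up for illustration; INI, XML, or anything else works just as well):

    using System;
    using System.Collections.Generic;
    using System.IO;

    class Config
    {
        // Parses lines like "output_dir = C:\reports"; '#' starts a comment.
        static Dictionary<string, string> Load(string path)
        {
            var settings = new Dictionary<string, string>();
            foreach (var rawLine in File.ReadAllLines(path))
            {
                var line = rawLine.Trim();
                if (line.Length == 0 || line.StartsWith("#"))
                    continue;
                var parts = line.Split(new[] { '=' }, 2);
                if (parts.Length == 2)
                    settings[parts[0].Trim()] = parts[1].Trim();
            }
            return settings;
        }

        static void Main()
        {
            var settings = Load("app.conf");   // hypothetical file name
            string outputDir;
            if (settings.TryGetValue("output_dir", out outputDir))
                Console.WriteLine("writing to " + outputDir);
        }
    }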
As with many questions of this type, the answer is that it depends.
If your program/script does just one single thing by receiving a number of inputs from the user, it is better to stick with the non-GUI mode.
If the application is doing more than one thing and if you think that the user will use the application to do a lot of stuff, you may consider using a GUI.
Are you planning to distribute this program to others? Then it is better to provide a GUI.
If the users are non-technical, a GUI is a must!
That's it.
When you want to hand your stuff over to someone else in a discoverable way. Command-line scripts are awesome because they are simple and elegant, but they are not very discoverable. That is, if you were to hand your scripts over to someone else with no documentation, would they be able to figure out what they are and how to use them? If your tasks are so simple that myscript /? will explain what you need to do fully, then you don't need a GUI.
If, on the other hand, you are handing your scripts over to someone who isn't so technical, or who needs more visual guidance about the task to be done, then by all means, a GUI is a good way to go. You might even want to keep your scripts as they are and just create a separate GUI that runs them, for maximum flexibility.
I think this decision also depends on the audience who will be using your script: if it's people who are comfortable working with the command line, then there is no pressing need to add a GUI, as long as your script has a good /help that explains all the parameters it accepts. But if you want the "average user" to be able to use your program, I'd rather add a GUI, because otherwise your program might not be intuitive enough for that user group.
If you only need some dialogs to improve your scripts, you can use KDE's KDialog or GNOME's Zenity.
I can't count the number of times I've written what I thought would be a one-off that turned out to be more useful than expected, so I ended up writing a GUI for it, or I've needed to come back and use a program months later. The advantage of the GUI is that it makes it easier to remember what would otherwise be command-line arguments: for flags and options you can simply use check boxes, combo boxes, radio buttons, and file selectors for filenames. I use Borland C++ RAD, so it's quite quick and easy to throw together a simple (or even not-so-simple) dialog box. I now often start by creating the GUI.
If you use Linux, try Zenity. It's an easy-to-use tool for putting a GUI on command-line programs.