I'm currently working on a command-line tool, and since this is my first time designing a tool like this I have a few design questions, most notably how to handle a non-fatal error.
The tool I'm working on starts a main server on a configurable port and, after that, an optional web server on a non-configurable port. If we then run it again (with a different port for the main server), we would obviously get a binding error when trying to start the optional web server.
Since this is a non-fatal error (running the web server is optional), my initial thought, coming from a UI background, was to print a clear error and carry on with the program. However, I've been told that from a scripting standpoint, printing the error and then exiting is better practice.
So which is better?
You might also want to consider that people may write scripts which expect the invocation to succeed even if the web server is already running.
If you define a default behavior of 'fail if the web server is already running', then such scripts will have to parse your error message, or inspect your exit code and figure out that the invocation failed for this particular reason (i.e. web server already running).
Give them a way out of this: introduce a flag (argument) that lets them decide which behavior they want. In the absence of the flag, do the safer thing (i.e. error out if the web server is already running).
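For illustration, here's a minimal sketch in Go of what such a flag could look like; the flag name (-web-error), its policies, and the port are assumptions, not anything your tool has to use:

package main

import (
	"flag"
	"fmt"
	"log"
	"net"
	"os"
)

func main() {
	// Hypothetical flag: what to do when the optional web server's
	// port is already taken.
	webError := flag.String("web-error", "fail",
		"behavior when the web port is busy: fail | warn | ignore")
	flag.Parse()

	ln, err := net.Listen("tcp", ":8081") // assumed fixed web-server port
	if err != nil {
		switch *webError {
		case "warn":
			fmt.Fprintf(os.Stderr, "warning: web server not started: %v\n", err)
		case "ignore":
			// stay silent and carry on
		default: // "fail" is the safer default for scripts
			log.Fatalf("web server not started: %v", err)
		}
	} else {
		defer ln.Close()
		// ... serve the optional web UI on ln ...
	}

	// ... start the main server here ...
}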
My project: Go 1.12.5; gin-gonic; vue-cli 3.8.2.
On Windows Server 2008, running main.exe under my local account works well. But when I log off, all of my account's programs are closed, including my Go server.
The first thing I did was try to configure IIS for my Go app. Nothing good came of it.
Then I tried to run main.exe from the SYSTEM account with psexec -s c:\rafd\main.exe. When I log off, the process does not close. But the frontend is in my account, and SYSTEM does not see the local files (js, html, css) of my project.
How can I start the Go server so that my project keeps running after I log off?
Two ways to approach it.
Go with IIS (or another web server).
Should you pick this option, you have further choices:
Leave your project's code as is, but
Make sure it's able to be told which socket to listen for connections on—so that you can tell it to listen, say, on localhost:8080.
For instance, teach your program to accept a command-line parameter for that, such as -listen or whatever (see the sketch after this list).
Configure IIS in a way so that it reverse-proxies incoming HTTP requests on a certain virtual host and/or path prefix to a running instance of your server. You'll have to make the IIS configuration—the socket it proxies the requests to—and the way IIS starts your program agree with each other.
Rework the code to use the FastCGI protocol instead.
This basically amounts to using net/http/fcgi instead of net/http.
The upside is that IIS (even dirt-old versions of it) supports FastCGI out of the box.
The downsides are that FastCGI is believed to be slightly slower than plain HTTP in Go, and that you'll lose the ability to run your program in standalone mode.
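For illustration, a minimal sketch in Go combining both choices: a -listen flag plus an optional FastCGI mode (the flag names are assumptions):

package main

import (
	"flag"
	"log"
	"net"
	"net/http"
	"net/http/fcgi"
)

func main() {
	// Assumed flags: where to listen and which protocol to speak.
	listen := flag.String("listen", "localhost:8080", "address to listen on")
	useFCGI := flag.Bool("fcgi", false, "serve FastCGI instead of plain HTTP")
	flag.Parse()

	mux := http.NewServeMux()
	mux.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("hello"))
	})

	ln, err := net.Listen("tcp", *listen)
	if err != nil {
		log.Fatal(err)
	}

	if *useFCGI {
		// IIS (or any FastCGI-capable server) forwards requests here.
		err = fcgi.Serve(ln, mux)
	} else {
		// Standalone mode; IIS can also reverse-proxy plain HTTP to this address.
		err = http.Serve(ln, mux)
	}
	log.Fatal(err)
}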
Turn your program into a proper Windows™ service or "wrap" it with some helper tool to make it a Windows™ service.
The former is cleaner, as it allows your program to actually be aware of the control requests the Windows Service Management subsystem sends to it. You could also easily turn your program into a shrink-wrapped product, if/when needed. You could start with golang.org/x/sys/windows/svc (a minimal skeleton is sketched after this list).
The latter may be a bit easier, but YMMV.
If you'd like to explore this way, look for tools like srvany, nssm, winsv etc.
Note that of these, only srvany is provided by Microsoft® and, AFAIK, it has been missing since Win7/W2k8, so your best built-in bet might be messing with sc.exe.
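Should you go the "proper service" route, a minimal skeleton using golang.org/x/sys/windows/svc might look like the following; the service name and the places where your actual HTTP server starts and stops are assumptions to fill in:

package main

import (
	"log"

	"golang.org/x/sys/windows/svc"
)

type handler struct{}

func (h *handler) Execute(args []string, req <-chan svc.ChangeRequest, status chan<- svc.Status) (bool, uint32) {
	status <- svc.Status{State: svc.StartPending}
	// Start your gin/net/http server in a goroutine here.
	status <- svc.Status{State: svc.Running, Accepts: svc.AcceptStop | svc.AcceptShutdown}

	for c := range req {
		switch c.Cmd {
		case svc.Interrogate:
			status <- c.CurrentStatus
		case svc.Stop, svc.Shutdown:
			status <- svc.Status{State: svc.StopPending}
			// Gracefully stop your HTTP server here.
			return false, 0
		}
	}
	return false, 0
}

func main() {
	// "mygoserver" is a placeholder service name.
	if err := svc.Run("mygoserver", &handler{}); err != nil {
		log.Fatal(err)
	}
}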
In either case, should you pick this route, you'll have to deal with the question of setting up proper permissions on your app's assets.
This question is reasonably complex in itself since there are many moving parts involved.
For a start, you have to make sure your assets are accessed not relative to the process's current directory (which may be essentially random when it runs as a service), but either relative to a place the process was explicitly told about when run (via a command-line option or whatever), or relative to a location figured out through a reasonably engineered guess (and that is a complicated topic in itself).
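One reasonable guess, assuming you ship the assets next to the binary, is to resolve them relative to the executable rather than the current directory:

package main

import (
	"log"
	"os"
	"path/filepath"
)

// assetDir returns the assets directory next to the running binary.
// The "assets" subdirectory name is an assumption; adjust it to
// wherever your js/html/css actually live.
func assetDir() string {
	exe, err := os.Executable()
	if err != nil {
		log.Fatal(err)
	}
	return filepath.Join(filepath.Dir(exe), "assets")
}

func main() {
	log.Println("serving assets from", assetDir())
}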
Next, you have to make sure the account your Windows™ uses to run your service really has the permissions to access the place your assets are stored in.
Another possibility is to add a dedicated account and make the SCM use it to run your service.
Note that in either case proper error handling and reporting are paramount: when your program is run non-interactively, you want to know when something goes wrong, such as when a socket fails to be opened or listened on, assets are not found, or access is denied when trying to open an asset file. In all these cases you have to 1) handle the error, and 2) report it in a way that lets you deal with it.
For a non-interactive Windows™ program the best way may be to use the Event Log (say, via golang.org/x/sys/windows/svc/eventlog).
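A small sketch of reporting through the Event Log with that package; the source name "mygoserver" and the asset path are assumptions, and the source must have been registered once with administrative rights (e.g. via eventlog.InstallAsEventCreate):

package main

import (
	"log"
	"os"

	"golang.org/x/sys/windows/svc/eventlog"
)

func main() {
	// "mygoserver" must match a previously registered event source name.
	elog, err := eventlog.Open("mygoserver")
	if err != nil {
		log.Fatal(err)
	}
	defer elog.Close()

	elog.Info(1, "server starting") // event IDs are arbitrary; pick your own scheme
	if _, err := os.Stat(`c:\rafd\assets`); err != nil {
		elog.Error(2, "asset check failed: "+err.Error())
	}
}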
The simplest solution would be to use the Windows Task Scheduler.
Create a task that starts your .exe at logon, with highest privileges, in the background. That way, whenever the system logs on, it will start your .exe and keep it running in the background.
You can refer to this answer:
How do I set a Windows scheduled task to run in the background?
I want to develop a script which will check the state (running or not) of an Essbase application (say, every 15 mins.).
If the Essbase application is not running, then it will send an alert via email to the user or the application admin.
There isn't anything built in to Essbase that is going to do this for you. You really need to look at this in terms of two different things you are trying to accomplish:
How to tell if an Essbase cube is running
How to send an email from a batch/scheduled process
As for item 1, there are at least a couple of ways to go. I believe you could use MaxL (the automation language for Essbase) to determine if an application/cube is running. For example, you could use MaxL to check the status of the cube, output the script to a text file, and then scan that text file for the proper line/indicator that the cube is running or not. This may technically work but it wouldn't be my preferred option. You would also have to develop this so that in case the server itself isn't running, it triggers the specific action you want.
Another way to go (and this is my preferred option) is to use the Essbase Java API to determine if the application/cube is running. Although you may not be familiar with Java, this to me is the cleanest way to implement this functionality. You would use the API to connect to the server, and if the connection to the server was successful you could then check the status of the application/cube. In case the cube or server is down/stopped, you can do the appropriate action. You could also send the email from Java itself, using one of the Java email libraries.
You can also send emails from the command-line. It varies in specifics between Unix/Windows but in general if you can specify an SMTP server, email address, subject, body, and other parameters, it works just fine. There are freely available email clients I have seen and used that work completely on the command-line and handle this use case fairly well.
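If you end up scripting the notification yourself, the email piece is small in most languages. For instance, a sketch using Go's standard net/smtp package; the relay host and addresses are placeholders, and it assumes an internal relay that accepts unauthenticated mail:

package main

import (
	"log"
	"net/smtp"
)

func main() {
	msg := []byte("To: admin@example.com\r\n" +
		"Subject: Essbase alert\r\n" +
		"\r\n" +
		"Application Sample is not running.\r\n")

	// nil auth assumes an internal relay that accepts unauthenticated mail.
	err := smtp.SendMail("smtp.example.com:25", nil,
		"essbase-monitor@example.com",
		[]string{"admin@example.com"}, msg)
	if err != nil {
		log.Fatal(err)
	}
}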
Once everything is all developed and tested it's just a simple matter of scheduling it to run every 15 minutes or however often you want to run it.
As I mentioned, MaxL is the scripting language for Essbase. There is a program on the Essbase server that interprets MaxL scripts. The MaxL language is documented extensively in the EPM documentation. As a very simple example, you could write a MaxL script like the following:
login "admin" "password" on "localhost";
display application "Sample";
If you are running on a Linux/Unix server then you might then run the script as follows:
startMaxl.sh your_script.msh
This script would then output a text-based grid of different application properties to the console.
One of the columns that the display application command outputs is related to the application status, i.e. whether the app is loaded/started or not. Theoretically, to get this solution to work you'd need to send the MaxL script's output to a file and then scan that file for the proper text to decide if action needs to be taken. While this approach can work, it comes with risks. That's why I think Java (the preferred API for Essbase) is much better suited to solving this.
I am wondering what would be the best practice for deploying updates to a (MVC) Go web application. Imagine the following scenario :
1) Code and test some changes for my Go Web Application
2) Deploy update without anyone currently using the previous version getting interrupted.
I don't know how to make sure point 2) is covered: when somebody sends a request to the server and I rebuild/restart it at just that moment, they get an error, even if the request only uses a part of the code I did not touch or that is backwards-compatible, or if I just added a new request handler.
Maybe I'm missing something trivial or a well-known pattern, as I am just in the process of learning Go; my previous web applications were ASP.NET or PHP applications, where this was not an issue since I did not need to restart the web server on code changes.
It's not just an issue with Go; in general we can divide the problem into two separate ones:
Making sure current requests do not get terminated and affect user experience.
Making sure there is no down-time in which new requests cannot be handled.
The first one is easier to tackle: you just don't violently kill your server, but tell it to exit, entering a "drain phase" in which it does not accept new requests, only finishes the currently running ones, and then exits. This can be done by listening for signals, for example, and putting the app into a special state.
This used to be non-trivial with Go, as the default HTTP server didn't support being shut down; you had to start a server with a net.Listener, keep a reference to it, and close it when the time was due. Since Go 1.8, however, http.Server has a built-in Shutdown method that performs exactly this graceful drain.
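A minimal sketch of the drain-on-signal approach using Shutdown; the port and timeout are arbitrary:

package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"syscall"
	"time"
)

func main() {
	srv := &http.Server{Addr: ":8080"}

	go func() {
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			log.Fatal(err)
		}
	}()

	// Wait for a termination signal, then drain.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, os.Interrupt, syscall.SIGTERM)
	<-sig

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	// Shutdown stops accepting new connections and waits for
	// in-flight requests to finish (up to the context deadline).
	if err := srv.Shutdown(ctx); err != nil {
		log.Fatal(err)
	}
}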
Now, doing only step one and then starting the service again will cause new requests not to be accepted while the restart is in progress, and we all know this can take a number of seconds in extreme cases.
So what we need is another instance of the server already running with the new code, the instant the old one is not responding to new requests, right? That can be done in several ways:
Having more than one server, and a load-balancer on top of them, allowing one (or more) server to take the load while we restart another. That's the simplest way, and the way most people do it. If you need N servers to take the load of your users, just keep N+1 and restart one at a time.
Using socket-sharing tricks. In newer Linux kernels (3.9+), many processes can listen and accept on the same port. What you do is simply start the new instance and then tell the old one to finish and exit; this way there is no pause. This is done by setting SO_REUSEPORT on the listening socket.
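A sketch of setting SO_REUSEPORT from Go, assuming Linux and the golang.org/x/sys/unix package (net.ListenConfig needs Go 1.11+):

package main

import (
	"context"
	"log"
	"net"
	"net/http"
	"syscall"

	"golang.org/x/sys/unix"
)

func main() {
	lc := net.ListenConfig{
		Control: func(network, address string, c syscall.RawConn) error {
			var sockErr error
			err := c.Control(func(fd uintptr) {
				// Allow the new and the old instance to bind the same port.
				sockErr = unix.SetsockoptInt(int(fd), unix.SOL_SOCKET, unix.SO_REUSEPORT, 1)
			})
			if err != nil {
				return err
			}
			return sockErr
		},
	}

	ln, err := lc.Listen(context.Background(), "tcp", ":8080")
	if err != nil {
		log.Fatal(err)
	}
	log.Fatal(http.Serve(ln, nil))
}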
The above can be automated with ready-to-ship solutions, like Einhorn, that deal with all the details for you; see https://github.com/stripe/einhorn
Another approach is documented in this blog post: http://blog.nella.org/?p=879
I would like to improve the way our application checks that another instance of it is not already running. Right now we are using named mutexes combined with a check of the running processes.
The goal is to prevent security attacks (as this is security software). My idea right now is that the only "bulletproof" solution is to write a driver that will serve this kind of information and will authenticate clients via signed binaries.
Has anyone solved such a problem?
What are your opinions and recommendations?
First, let me say that there is ultimately no way to protect your process from agents that have administrator or system access. Even if you write a rootkit driver that intercepts all system calls (a difficult and unsafe practice in and of itself), there are still ways to use admin access to get in. You have the wrong design if this is a requirement.
If you set up your secure process to run as a service, you can use the Service Control Manager to start it. The SCM will only start one instance, will monitor that it stays up, allow you to define actions to execute if it crashes, and allow you to query the current status. Since this is controlled by the SCM and the service database can only be modified by administrators, an attacking process would not be able to spoof it.
I don't think there's a secure way of doing this. No matter what kind of system-unique, or user-unique named object you use - malicious 3rd party software can still use the exact same name and that would prevent your application from starting at all.
If you use the method of checking the currently executing processes, and checking that no executable with the same name is running, you'd run into problems if the malicious software has the same executable name. If you also check the path of that executable, then it would be possible to run two copies of your app from different locations.
If you create/delete a file when starting/finishing - that might be tricked as well.
The only thing that comes to my mind is that you may be able to achieve the desired effect by putting all the logic of your app into a COM object, and then having a GUI application interact with it through COM interfaces. This would only ensure that there is a single COM object; you would still be able to run as many GUI clients as you want. Note that I'm not suggesting this as a bulletproof method: it may have its own holes (for example, someone could make your GUI client connect to a 3rd-party COM object simply by editing the registry).
So, the short answer - there is no truly secure way of doing this.
I use a named pipe¹, where the name is derived from the conditions that must be unique:
Name of the application (this is not the file name of the executable)
Username of the user who launched the application
If the named pipe creation fails because a pipe with that name already exists, then I know an instance is already running. I use a second lock around this check for thread (process) safety. The named pipe is automatically closed when the application terminates (even if the termination was due to an End Process command).
¹ This may not be the best general option, but in my case I end up sending data on it at a later point in the application lifetime.
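For comparison, a sketch of the same idea in Go using the github.com/Microsoft/go-winio package, which creates the first pipe instance exclusively, so a second listener on the same name should fail; the pipe naming scheme is an assumption, and this is a sketch rather than a hardened implementation:

package main

import (
	"log"
	"os/user"
	"strings"

	"github.com/Microsoft/go-winio"
)

func main() {
	u, err := user.Current()
	if err != nil {
		log.Fatal(err)
	}
	// Pipe name derived from app name + username; backslashes in
	// DOMAIN\user names are not allowed in pipe names, so replace them.
	name := `\\.\pipe\MyApp-` + strings.ReplaceAll(u.Username, `\`, "-")

	ln, err := winio.ListenPipe(name, nil)
	if err != nil {
		// Most likely another instance already owns the pipe.
		log.Fatalf("another instance appears to be running: %v", err)
	}
	defer ln.Close() // the pipe also disappears when the process dies

	// ... run the application; ln can later carry IPC traffic ...
}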
In pseudocode:
numberofapps = 0
for each process in processes
    if path to process module file equals path to this module file
        increment numberofapps
if numberofapps > 1
    exit
See msdn.microsoft.com/en-us/library/ms682623(VS.85).aspx for details on how to enumerate processes.
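As a concrete illustration, here is roughly what that loop looks like in Go, using the Toolhelp32 snapshot API from golang.org/x/sys/windows; note this version compares executable names only, which, as noted above, is weaker than comparing full paths:

package main

import (
	"log"
	"os"
	"path/filepath"
	"strings"
	"unsafe"

	"golang.org/x/sys/windows"
)

func main() {
	exe, err := os.Executable()
	if err != nil {
		log.Fatal(err)
	}
	self := strings.ToLower(filepath.Base(exe))

	snap, err := windows.CreateToolhelp32Snapshot(windows.TH32CS_SNAPPROCESS, 0)
	if err != nil {
		log.Fatal(err)
	}
	defer windows.CloseHandle(snap)

	var entry windows.ProcessEntry32
	entry.Size = uint32(unsafe.Sizeof(entry))

	count := 0
	for err = windows.Process32First(snap, &entry); err == nil; err = windows.Process32Next(snap, &entry) {
		if strings.ToLower(windows.UTF16ToString(entry.ExeFile[:])) == self {
			count++
		}
	}
	if count > 1 {
		os.Exit(1) // another instance is already running
	}
}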
I'm working on a consumer web app that needs to do a long running background process that is tied to each customer request. By long running, I mean anywhere between 1 and 3 minutes.
Here is an example flow. The object/widget doesn't really matter.
Customer comes to the site and specifies object/widget they are looking for.
We search/clean/filter for widgets matching some initial criteria. <-- long running process
Customer further configures more detail about the widget they are looking for.
When the long running process is complete the customer is able to complete the last few steps before conversion.
Steps 3 and 4 aren't really important. I just mention them because we can buy some time while we are doing the long running process.
The environment we are working in is a LAMP stack, currently using PHP. It doesn't seem like a good design to have the long-running process take up an Apache thread in mod_php (or a FastCGI process). The Apache layer of our app should be focused on serving up content, not data processing, IMO.
A few questions:
Is our thinking right in that we should separate this "long running" part out of the apache/web app layer?
Is there a standard/typical way to break this out under Linux/Apache/MySQL/PHP (we're open to using a different language for the processing if appropriate)?
Any suggestions on how to go about breaking it out? E.g., do we create a daemon that churns through a FIFO queue?
Edit: Just to clarify, only about 1/4 of the long running process is database centric. We're working on optimizing that part. There is some work that we could potentially do, but we are limited in the amount we can do right now.
Thanks!
Consider providing the search results via AJAX from a web service instead of your application. Presumably you could offload this to another server and let your web application deal with the content as you desire.
Just curious: 1-3 minutes seems like a long time for a lookup query. Have you looked at indexes on the columns you are querying to improve the speed? Or do you need to run some algorithmic process? Perhaps you could perform some of that offline and prepopulate some common searches with hints.
As Jonnii suggested, you can start a child process to carry out background processing. However, this needs to be done with some care:
Make sure that any parameters passed through are escaped correctly
Ensure that more than one copy of the process does not run at once
If several copies of the process can run at once, there's nothing stopping a (not even malicious, just impatient) user from hitting reload on the page which kicks it off, eventually starting so many copies that the machine runs out of RAM and grinds to a halt.
So you can use a subprocess, but do it carefully, in a controlled manner, and test it properly.
Another option is to have a daemon permanently running and waiting for requests, which processes them and then records the results somewhere (perhaps in a database).
This is the poor man's solution:
exec ("/usr/bin/php long_running_process.php > /dev/null &");
Alternatively you could:
Insert a row into your database with details of the background request, which a daemon can then read and process (as sketched below).
Write a message to a message queue, which a daemon then reads and processes.
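For illustration, a sketch of the database-as-queue variant; the worker is written in Go purely as an example (it could just as well be PHP), and the jobs table with id/payload/status columns is an assumed schema:

package main

import (
	"database/sql"
	"log"
	"time"

	_ "github.com/go-sql-driver/mysql" // MySQL driver, registered via side effect
)

func main() {
	db, err := sql.Open("mysql", "user:pass@tcp(localhost:3306)/app")
	if err != nil {
		log.Fatal(err)
	}

	for {
		var id int64
		var payload string
		// Grab the oldest pending job (single-worker assumption;
		// multiple workers would need locking, e.g. SELECT ... FOR UPDATE).
		err := db.QueryRow(
			"SELECT id, payload FROM jobs WHERE status = 'pending' ORDER BY id LIMIT 1",
		).Scan(&id, &payload)
		if err == sql.ErrNoRows {
			time.Sleep(2 * time.Second) // nothing to do; poll again
			continue
		}
		if err != nil {
			log.Fatal(err)
		}

		// ... do the 1-3 minute search/clean/filter work here ...

		if _, err := db.Exec("UPDATE jobs SET status = 'done' WHERE id = ?", id); err != nil {
			log.Fatal(err)
		}
	}
}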
Here's some discussion on the Java version of this problem.
See java: what are the best techniques for communicating with a batch server
Two important things you might do:
Switch to Java and use JMS.
Read up on JMS but use another queue manager. Unix named pipes, for instance, might be an acceptable implementation.
Java servlets can do background processing. You could do something similar in any web technology with threading support. I don't know about PHP, though.
Not a complete answer, but I would think about using AJAX and passing the second step to something that's faster than PHP (C, C++, C#), then having a PHP function pick the results off of some stack, most likely just a database.