Windows "system service", not "web service" performance - performance

We have an image processing workflow product. Typically 10,000 to 100,000 images are run through our processing in a single job, and more than one job may be pending.
Currently, all the image processing is performed by our home-grown imaging library, a managed C++, .NET-compatible library. It runs in the user's application space; what I mean by that is that if you log on as "PeteSmith", the images will be processed under Pete Smith's account.
Currently we allow only one instance of this image processing at a time. Customers are asking for a new version that allows more than one instance to run at the same time, so we are now examining how to do this.
The idea of getting processing off the "user's account" and using a "system account" to do the processing in the background is appealing, because Windows services are naturally managed by OS events such as logging in and logging out, along with other system resource-utilization events and alarms.
It appears to me that all we would need to do is manage a small number of well-defined events, well documented by Microsoft.
That's all nice and wonderful, but what I need to understand is what moving to a service implementation for our image processing code means for performance, from our customers' point of view.
In their view, they need more images processed, faster.
QUESTION: How should I think about the trade-offs between:
1) Using a service to run a job vs. running N different "instances" of the software on Pete Smith's (the user's) account?
2) Allowing N services to run N different jobs (no cross-talk needed) vs. running N different "instances" of the software on Pete Smith's (the user's) account?

Well, the image processing requires a certain amount of CPU and I/O resources, and that amount does not change by fiddling with how and where you start your process.
Whether or not to use a service should be governed by the required usage pattern. If you want the application to go on processing images automatically regardless of whether anyone is logged on, you should run as a service; but if the usage is more of a "choose an image, start processing, and wait for the result" style, you should go for a client app.
It is not entirely clear why your customer wants to run multiple instances. Is it because they want one instance to do the heavy processing work while they configure the processing for another? Or is it because the processing is heavy and they want to run several jobs in parallel?
In both cases I would consider running the calculation(s) on a background thread in the application. If it is not possible to use threads (perhaps due to some global shared state in the library), my second-best bet would be to start each processing job in a new process and wait for the result from the main process.
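A minimal sketch of that second option, assuming a hypothetical ImageWorker.exe wrapper around the imaging library (the executable name and the one-folder-per-job argument are illustrative, not part of the original product):

using System;
using System.Diagnostics;
using System.Threading.Tasks;

class JobLauncher
{
    static void Main(string[] jobFolders)
    {
        // One worker process per job; ImageWorker.exe is a hypothetical wrapper
        // around the imaging library that processes every image in a folder.
        var workers = new Task[jobFolders.Length];
        for (int i = 0; i < jobFolders.Length; i++)
        {
            string folder = jobFolders[i];
            workers[i] = Task.Run(() =>
            {
                var psi = new ProcessStartInfo("ImageWorker.exe", "\"" + folder + "\"")
                {
                    UseShellExecute = false,
                    CreateNoWindow = true
                };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit(); // block this task, not the main thread
                }
            });
        }

        Task.WaitAll(workers); // the main process waits for every job to finish
        Console.WriteLine("All jobs finished.");
    }
}

Each job gets its own address space, so a crash or leak in one does not take down the others; the cost is an extra process start per job.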

Related

What are some best practices when calling an external executable from ASP.NET Web API 2

I need to call an external *.exe, compiled in C++, from ASP.NET Web API 2 using Process (System.Diagnostics).
This executable does some image processing and uses a lot of memory.
So my question is: if I change my API calls to async, or implement threads, will it help, or does it not matter?
Note: all I have is the executable, so I cannot go for a CLI wrapper.
You can separate the two. Your API is one thing: it needs to be fast and responsive to serve the clients. Your image processing is something different.
You could implement a queuing system. The API is responsible for adding a new item to this queue and nothing more. You could keep track of which tasks are being run in a separate SQL table, let's say. Imagine you have a SQL table called Tasks. Your API chucks data in there and the status is "Not Running".
Some other app, which can live on another machine entirely, keeps an eye on this table and takes care of running that executable for each item. When it starts it changes the status to "Running"; when it completes, it's "Done". You do whatever else you need. You could have an API endpoint which takes the ID of the task, so your client can keep calling this endpoint to see what the status is. Or you could raise an event when it's done, depending on your application's needs.
Bottom line: keep things separate; you gain nothing by blocking the API while a resource-heavy task is running. Think about what happens if you start that process 5 times at the same time. You've basically just killed your API.
The app that does the heavy work could even be located on a separate machine, so it doesn't affect the API at all.
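A rough sketch of the worker side of that idea, assuming a SQL Server Tasks table with Id and Status columns as described above (the connection string and the ImageProc.exe name are placeholders, not real values):

using System;
using System.Data.SqlClient;   // Microsoft.Data.SqlClient in newer projects
using System.Diagnostics;
using System.Threading;

class TaskWorker
{
    // Placeholder connection string; point it at the database holding Tasks.
    const string ConnStr = "Server=.;Database=Jobs;Integrated Security=true";

    static void Main()
    {
        while (true)
        {
            int? taskId = ClaimNextTask();
            if (taskId == null) { Thread.Sleep(5000); continue; } // nothing to do yet

            // Run the memory-hungry executable outside the API process.
            using (var p = Process.Start(new ProcessStartInfo("ImageProc.exe", taskId.Value.ToString())
                   { UseShellExecute = false, CreateNoWindow = true }))
            {
                p.WaitForExit();
            }

            SetStatus(taskId.Value, "Done");
        }
    }

    // Atomically take one "Not Running" row and mark it "Running".
    static int? ClaimNextTask()
    {
        using (var con = new SqlConnection(ConnStr))
        {
            con.Open();
            var cmd = new SqlCommand(
                @"UPDATE TOP (1) Tasks SET Status = 'Running'
                  OUTPUT inserted.Id
                  WHERE Status = 'Not Running'", con);
            object id = cmd.ExecuteScalar();
            return id == null ? (int?)null : (int)id;
        }
    }

    static void SetStatus(int id, string status)
    {
        using (var con = new SqlConnection(ConnStr))
        {
            con.Open();
            var cmd = new SqlCommand("UPDATE Tasks SET Status = @s WHERE Id = @id", con);
            cmd.Parameters.AddWithValue("@s", status);
            cmd.Parameters.AddWithValue("@id", id);
            cmd.ExecuteNonQuery();
        }
    }
}

The API then only inserts rows and reads statuses; the worker can be throttled (one task at a time here), so five simultaneous requests no longer mean five copies of the executable fighting for memory.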

CPU bound/stateful distributed system design

I'm working on a web application front end to a legacy system which involves a lot of CPU-bound background processing. The application is also stateful on the server side, and the domain objects need to be held in memory across the entire session as the user operates on them via the web-based interface. Think of it as something like a web UI front end to Photoshop, where each filter can take 20-30 seconds to execute on the server side, so the app still has to interact with the user in real time while they wait.
The main problem is that each instance of the server can only support around 4-8 instances of each "workspace" at once, and I need to support a few hundred concurrent users. I'm going to build this on Amazon EC2 to make use of the auto-scaling functionality. So to summarize, the system is:
A web application front end to a legacy backend system
Tasks performed are CPU-bound
Stateful; most calls will be some sort of RPC, and the user will make multiple actions that interact with the stateful objects held in server-side memory
Most tasks are semi-realtime, in that they execute for 20-30 seconds and must return the results to the user in the same session
Uses Amazon AWS auto scaling
I'm wondering what is the best way to make a system like this distributed.
Obviously I will need a web server to interact with the browser and then send the CPU-bound tasks from the web server to a bunch of dedicated servers that do the background processing. The question is how best to hook the two tiers together for my specific needs.
I've been looking at message queue systems such as RabbitMQ, but these seem to be geared towards one-time tasks where any worker node can simply grab a job from a queue, execute it, and forget the state. My needs are a little different, since there could be multiple 'tasks' that need to be 'sticky': for example, if step 1 is started on node 1, then step 2 for the same workspace has to go to the same worker process.
Another problem I see is that most worker queue systems seem to be geared towards background tasks that can be processed at any time, rather than a system that has to provide user feedback, which is what I'm dealing with.
My question is: is there an off-the-shelf solution for something like this that will allow me to easily build a system that can scale? I would love to hear your thoughts.
RabbitMQ has an RPC tutorial. I haven't used this pattern in particular, but I am running RabbitMQ on a couple of nodes and it can handle hundreds of connections and millions of messages. With a little work on monitoring you can detect when there is more work to do than you have consumers for. Messages can also time out, so queues won't back up too greatly. To scale out capacity you can create multiple RabbitMQ nodes/clusters. You could have multiple rounds of RPC, so that after the first response you include the information required to get the second message to the correct destination.
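A minimal client-side sketch of that RPC pattern, assuming the pre-7.0 RabbitMQ.Client API and a worker already listening on a queue named rpc_queue (the queue name, host, and message body are illustrative only):

using System;
using System.Linq;
using System.Text;
using RabbitMQ.Client;
using RabbitMQ.Client.Events;

class RpcClient
{
    static void Main()
    {
        var factory = new ConnectionFactory { HostName = "localhost" }; // placeholder host
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Server-named, exclusive queue where only this client's replies arrive.
            string replyQueue = channel.QueueDeclare().QueueName;
            string correlationId = Guid.NewGuid().ToString();

            var props = channel.CreateBasicProperties();
            props.CorrelationId = correlationId;
            props.ReplyTo = replyQueue;

            // Publish the request; a per-workspace routing key (e.g. "workspace.42")
            // is one way to add the stickiness discussed above.
            channel.BasicPublish(exchange: "",
                                 routingKey: "rpc_queue",
                                 basicProperties: props,
                                 body: Encoding.UTF8.GetBytes("apply-filter: blur"));

            // Wait for the matching reply.
            var consumer = new EventingBasicConsumer(channel);
            consumer.Received += (model, ea) =>
            {
                if (ea.BasicProperties.CorrelationId == correlationId)
                    Console.WriteLine("Result: " + Encoding.UTF8.GetString(ea.Body.ToArray()));
            };
            channel.BasicConsume(queue: replyQueue, autoAck: true, consumer: consumer);

            Console.ReadLine(); // keep the toy client alive until the reply is printed
        }
    }
}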
ZeroMQ (0MQ) has this as a basic pattern and will fan out work as needed. I've only played with it, but it is simpler to code and possibly simpler to maintain (as it doesn't need a broker, though devices can provide one). It may not handle stickiness by default, but it should be possible to write your own routing layer to handle it.
Don't discount HTTP for this either. When you want request/reply, a strict throughput per backend node, and something that scales well, HTTP is well supported. With AWS you can easily put their ELB in front of an auto-scaling group to provide the routing from front end to back end. ELB supports sticky sessions as well.
I'm a big fan of RabbitMQ but if this is the whole scope then HTTP would work nicely and have fewer moving parts in AWS than the other solutions.

Site is slow due to processing-intensive scripts

I have a site that has to crawl different sites to aggregate information. When the crawling scripts are running, the site slows down. I have done as much as possible to optimize the crawling, but it is really CPU- and RAM-intensive. These crawls have to occur based on some user action (e.g. a search). It is not an option to "pre-crawl" the information, as the information is time-sensitive.
What are the general strategies I can use to solve this? Here are 2 of my ideas:
Get more CPU and RAM on current server
Offload these processing intensive scripts on a separate physical server
I'm wondering about cloud computing, but don't have any experience in it. Suggestions?
You've already identified the options. "Cloud computing" doesn't mean anything but being able to quickly allocate a VPS with hourly pricing. It's the same as buying another physical server, except without waiting for the host to put it online and e-mail you access info, and without a monthly commitment. You still have to write your application to make use of multiple servers, you have to write code to "scale up" or "scale down" as needed (purchase or terminate virtual servers, and write code to automatically start whatever programs you need on them), you still have to properly manage the servers (install and maintain an OS, keep packages updated with security fixes) etc.
You could try to make the action asynchronous:
The user submits a search.
The system displays "The system is currently searching for information based on your criteria and you will be notified shortly", and handles the request in the meantime.
Since the user isn't waiting for the result page, they are free to browse around or do other things on your site instead of having their screen locked up.
When the result is generated, the system notifies the user that the search is done and provides a link to view the result. This can be done by sending an email notification, popping up a dialog box, or sliding a notification message down from the menu bar (basically, something to catch the user's attention).
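Sketched below is the shape of that submit-then-poll flow as a small Web API controller (C# is used here only for illustration; the same shape works in any stack, and the in-memory dictionary stands in for whatever table or queue actually tracks the jobs):

using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;
using System.Web.Http;

public class SearchController : ApiController
{
    // In-memory job store for the sketch only; a real site would use a table or queue.
    static readonly ConcurrentDictionary<int, string> Jobs = new ConcurrentDictionary<int, string>();
    static int _nextId;

    [HttpPost]
    public IHttpActionResult Post(string query)
    {
        int id = Interlocked.Increment(ref _nextId);
        Jobs[id] = "running";
        Task.Run(() =>
        {
            // ... run the crawl for 'query' here, ideally offloaded to another machine ...
            Jobs[id] = "done";
        });
        return Ok(new { id, status = "running" }); // respond to the user immediately
    }

    [HttpGet]
    public IHttpActionResult Get(int id)
    {
        string status;
        if (!Jobs.TryGetValue(id, out status))
            return NotFound();
        return Ok(new { id, status }); // the browser polls this until status == "done"
    }
}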
It is wise to have a separate machine run these processing-intensive scripts so that they do not slow down the entire application server, especially when you have tons of users submitting searches.

What Windows API to look into for building a scheduling application?

Why not use the Windows scheduler?
I have several applications that have to run at certain times according to business rules, not the typical "every weekday at 1 pm".
I also need a way for the applications to provide feedback on their progress so that I can have rules that notify me when the applications are running slowly or aren't running at all anymore.
What Windows API should I be looking into (something like a time-based version of the FileWatcher APIs)?
What's the best way to have the application notify the scheduler of its progress (files, sockets, Windows messages, ???)?
For Vista/Win2k8, there's the nice Task Scheduler 2.0 API: http://msdn.microsoft.com/en-us/library/aa384138(VS.85).aspx. Previous versions have the Task Scheduler 1.0 API, but I've never used it.
AppControls has a CronJob component that you can use to create scheduled events. This saves your program from having to wake up every minute and check the schedule itself. Instead, just schedule the job and indicate a callback method.
I have used this component for scheduling jobs myself and have been very happy with the way that it works.
I think what you really want is a common framework for your applications: they report to something (you, system messages, tracing, perfmon, the event log, whatever) and also have a way, via some inter-process protocol, to receive messages and respond.
Based on the reporting you can change the scheduling or make other changes, etc.
So, there is some monitor app, and then each of your other apps does common reporting.
events I can think of:
- started
- stopped
- error
- normal log messages
- and of course specific things your apps do.
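A minimal sketch of that common reporting, assuming the apps simply write those events to a shared Windows Event Log source (the "JobApps" source name is made up, and creating a source needs admin rights once):

using System.Diagnostics;

static class Progress
{
    const string Source = "JobApps"; // made-up source name

    public static void Report(string app, string message)
    {
        // Register the source on first use (requires admin rights the first time).
        if (!EventLog.SourceExists(Source))
            EventLog.CreateEventSource(Source, "Application");
        EventLog.WriteEntry(Source, app + ": " + message, EventLogEntryType.Information);
    }
}

// Usage from any of the scheduled applications:
//   Progress.Report("NightlyImport", "started");
//   Progress.Report("NightlyImport", "processed 5,000 records");
//   Progress.Report("NightlyImport", "stopped");

The monitor app (or you) can then read the log, or subscribe to new entries, and apply the "running slow / not running at all" rules.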
I think there are probably existing classes/framework that do this - you'll have to check around.
If it were me, I would make a service that could talk to all the other apps, and perhaps even act as an HTTP server. It would be able to route messages to particular apps, start and stop those processes, and query them.
There are lots of ways to do what you want, though; those were just off the top of my head.
Alternatively, you might just be able to make these into services that handle messages sent to them. Their normal processing does nothing until they are "woken up" with some task command.
You have several questions in one; normally you should split them, but let's overlook that and try to answer.
To schedule certain events (including running an application): use TJvScheduledEvents from JVCL. IMHO JVCL is the best open-source Delphi library around, with an extensive number of components, developers, and support. TJvScheduledEvents is quite neat, uses threads for event scheduling, and JVCL also provides a detailed editor for your events (it needs a small hack to use it, though).
To provide 'feedback' from your applications to a (remote) central point: A very very very good solution (if your requirements permit) is to log the progress of your applications in a table (let's call it LOG) on a Firebird server. In LOG you can have the following fields: COMPUTER, USERNAME, APPNAME, MSG, LOGDATE (etc. etc.). In the After Insert trigger of the LOG table you can fire an event (let's call it NEW_LOG). In your console app you can register the interest for this event and so, your application will be automatically updated with everything which happens in any of your applications, so you can do log analysis, graphs etc. Of course you can do it with IB, but IB costs.
...Going the Windows API route you need headers (which probably aren't translated), and you'll encounter our dearest pointers/PChars, etc. Of course, building everything from scratch isn't worthwhile, but when this is already done in a Delphi way, why not use it?
Use a service with a timer that fires regularly (for example, each minute). It reads the schedule and checks whether any jobs are due before the next iteration; if so, it executes them.
You can add an interface that shows all running apps, and query it for feedback from a desktop application.
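A bare-bones sketch of that timer-driven service (the LoadDueJobs and RunJob methods are placeholders standing in for your own schedule store and job launcher):

using System;
using System.ServiceProcess;
using System.Timers;

public class SchedulerService : ServiceBase
{
    private readonly Timer _timer = new Timer(60000); // wake up once a minute

    protected override void OnStart(string[] args)
    {
        _timer.Elapsed += (s, e) => CheckSchedule();
        _timer.Start();
    }

    protected override void OnStop()
    {
        _timer.Stop();
    }

    private void CheckSchedule()
    {
        // Run everything that is due before the next tick.
        foreach (var job in LoadDueJobs(DateTime.Now))
            RunJob(job);
    }

    // Placeholders so the sketch compiles; replace with real schedule logic.
    private string[] LoadDueJobs(DateTime now) => Array.Empty<string>();
    private void RunJob(string job) => Console.WriteLine("Running " + job);
}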

Looking for pattern/approach/suggestions for handling long-running operation tied to web app

I'm working on a consumer web app that needs to do a long running background process that is tied to each customer request. By long running, I mean anywhere between 1 and 3 minutes.
Here is an example flow. The object/widget doesn't really matter.
Customer comes to the site and specifies object/widget they are looking for.
We search/clean/filter for widgets matching some initial criteria. <-- long running process
Customer further configures more detail about the widget they are looking for.
When the long running process is complete the customer is able to complete the last few steps before conversion.
Steps 3 and 4 aren't really important. I just mention them because we can buy some time while we are doing the long running process.
The environment we are working in is a LAMP stack, currently using PHP. It doesn't seem like a good design to have the long-running process take up an Apache thread in mod_php (or a FastCGI process). The Apache layer of our app should be focused on serving up content, not on data processing, IMO.
A few questions:
Is our thinking right in that we should separate this "long running" part out of the apache/web app layer?
Is there a standard/typical way to break this out under Linux/Apache/MySQL/PHP (we're open to using a different language for the processing if appropriate)?
Any suggestions on how to go about breaking it out? E.g., do we create a daemon that churns through a FIFO queue?
Edit: Just to clarify, only about 1/4 of the long running process is database centric. We're working on optimizing that part. There is some work that we could potentially do, but we are limited in the amount we can do right now.
Thanks!
Consider providing the search results via AJAX from a web service instead of your application. Presumably you could offload this to another server and let your web application deal with the content as you desire.
Just curious: 1-3 minutes seems like a long time for a lookup query. Have you looked at indexes on the columns you are querying to improve the speed? Or do you need to do some algorithmic process -- perhaps you could perform some of this offline and prepopulate some common searches with hints?
As Jonnii suggested, you can start a child process to carry out background processing. However, this needs to be done with some care:
Make sure that any parameters passed through are escaped correctly
Ensure that more than one copy of the process does not run at once
If several copies of the process can run, there's nothing stopping a (not even malicious, just impatient) user from hitting reload on the page which kicks it off, eventually starting so many copies that the machine runs out of RAM and grinds to a halt.
So you can use a subprocess, but do it carefully, in a controlled manner, and test it properly.
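For the "only one copy at a time" rule, one simple guard is an OS-level named mutex; the sketch below is in C# (matching the .NET examples elsewhere on this page) with a made-up mutex name, and a lock file does the same job for a PHP worker script:

using System;
using System.Threading;

class Worker
{
    static void Main()
    {
        // "Global\" makes the mutex machine-wide; the name itself is an assumption.
        bool createdNew;
        using (var mutex = new Mutex(true, @"Global\LongRunningWorker", out createdNew))
        {
            if (!createdNew)
            {
                Console.WriteLine("Another copy is already running; exiting.");
                return;
            }

            // ... do the long-running work here ...

            mutex.ReleaseMutex();
        }
    }
}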
Another option is to have a daemon permanently running and waiting for requests, which it processes and then records the results somewhere (perhaps in a database).
This is the poor man's solution:
exec ("/usr/bin/php long_running_process.php > /dev/null &");
Alternatively you could:
Insert a row into your database with details of the background request, which a daemon can then read and process.
Write a message to a message queue, which a daemon then reads and processes.
Here's some discussion on the Java version of this problem.
See java: what are the best techniques for communicating with a batch server
Two important things you might do:
Switch to Java and use JMS.
Read up on JMS but use another queue manager. Unix named pipes, for instance, might be an acceptable implementation.
Java servlets can do background processing. You could do something similar to this technology in a web technology with threading support. I don't know about PHP though.
Not a complete answer, but I would think about using AJAX and passing the second step to something that's faster than PHP (C, C++, C#), then having a PHP function pick the results off some stack, most likely just a database.
