Oracle Advanced Queueing Experiences

I am considering using the Oracle Advanced Queueing technology for asynchronous communication. My aim is to use it for concurrent process execution (asynchronous PL/SQL procedure calls).
The current legacy implementation for concurrent process execution consists of Unix KornShell (ksh) scripts which we start from the front end via an SSH connection in background mode. It works fine for us, but I am unhappy with that kind of solution because of:
Security (the front end starts an SSH connection and executes ksh scripts in background mode; I have heard from colleagues that this kind of login will be restricted in our company)
Maintenance (not everyone on our team is familiar with ksh scripts)
Diversity in technology (I try to reduce the diversity in technology because of the know-how and migration effort it requires)
Logging (our back end system logs into database log tables, while the concurrent execution logs partially into a log file)
By moving from ksh to the database I will be able to increase the overall quality of my system:
Security (no SSH connections anymore; the front end will send messages to the database, and the database message listener will react to the messages and execute procedures asynchronously)
Maintenance (we use PL/SQL, which we are all familiar with)
Diversity in technology (at the next OS migration we will only need to migrate the database objects and the data)
Logging (we will fully use our back end logging solution)
What do you think about my considerations, and what are your experiences with Oracle Advanced Queueing, especially regarding stability, performance and maintenance? Are there better alternatives?

I obviously don't know the details of your project, but if asynchronous PL/SQL procedure calls are your only goal, it may be easier to use DBMS_SCHEDULER. Your program could submit "run now" jobs through the scheduler that call your PL/SQL. In my opinion, the scheduler is a much easier thing to work with than AQ.
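A minimal PL/SQL sketch of that approach; MY_SCHEMA.PROCESS_ORDER and its single argument are placeholders of my choosing, not names from the question:

    DECLARE
      l_job VARCHAR2(128);
    BEGIN
      -- generate a unique job name and create a one-off, self-dropping job
      l_job := DBMS_SCHEDULER.GENERATE_JOB_NAME('PROC_ORDER_');
      DBMS_SCHEDULER.CREATE_JOB(
        job_name            => l_job,
        job_type            => 'STORED_PROCEDURE',
        job_action          => 'MY_SCHEMA.PROCESS_ORDER',  -- placeholder procedure
        number_of_arguments => 1,
        enabled             => FALSE,   -- enable only after the argument is set
        auto_drop           => TRUE);   -- the job disappears after it has run
      DBMS_SCHEDULER.SET_JOB_ARGUMENT_VALUE(l_job, 1, '4711');
      DBMS_SCHEDULER.ENABLE(l_job);     -- with no start_date the job runs immediately
    END;
    /

Because the job runs in its own session, the caller returns immediately and the procedure executes asynchronously in the background.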

Managing flows with Oracle asynchronous queues brings both advantages and disadvantages:
ADVANTAGES
Ability to manage flows by type, writing ad hoc code on which to create handlers (job events or apply processes) for the various sub-flows.
Easy to switch off a whole type of flow by stopping dequeue on its queue.
Priority management (a parameter set when the queue is created), so messages are processed by insert time or by a priority set on the message; messages can also be given a deadline or expiration time.
Aligns the design with an event-driven paradigm instead of polling.
DISADVANTAGES
The whole load of the business logic ends up on the database.
When installing a new package you need to stop the queues (enqueue and dequeue) and restart the handlers that point to that package.
You have to implement a recovery mechanism for incorrectly processed messages.
I think a good solution would be to use JMS (Oracle as the JMS provider) on top of the Oracle queues, so as to move the business logic into Java and take advantage of the language's capabilities, including logging.
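For reference, a minimal PL/SQL sketch of a priority-ordered queue and an enqueue call; the payload type and queue names (DEMO_MSG_T, DEMO_QT, DEMO_Q) are placeholders of my choosing, not objects from the question:

    CREATE TYPE demo_msg_t AS OBJECT (
      proc_name VARCHAR2(128),
      payload   VARCHAR2(4000)
    );
    /
    BEGIN
      DBMS_AQADM.CREATE_QUEUE_TABLE(
        queue_table        => 'DEMO_QT',
        queue_payload_type => 'DEMO_MSG_T',
        sort_list          => 'PRIORITY,ENQ_TIME');  -- dequeue by priority, then insert time
      DBMS_AQADM.CREATE_QUEUE(queue_name => 'DEMO_Q', queue_table => 'DEMO_QT');
      DBMS_AQADM.START_QUEUE(queue_name => 'DEMO_Q');
    END;
    /
    DECLARE
      l_opts  DBMS_AQ.ENQUEUE_OPTIONS_T;
      l_props DBMS_AQ.MESSAGE_PROPERTIES_T;
      l_msgid RAW(16);
    BEGIN
      l_props.priority := 1;  -- lower number = higher priority
      DBMS_AQ.ENQUEUE(
        queue_name         => 'DEMO_Q',
        enqueue_options    => l_opts,
        message_properties => l_props,
        payload            => demo_msg_t('MY_SCHEMA.PROCESS_ORDER', '4711'),
        msgid              => l_msgid);
      COMMIT;
    END;
    /

A consumer can then either dequeue in a loop with DBMS_AQ.DEQUEUE or register a PL/SQL callback with DBMS_AQ.REGISTER so a handler is invoked per message, which is the event-driven (no polling) model mentioned above.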

What's the point of SAPGuiSession.Sync

The documentation for SAPGUISession.Sync says:
Instructs UFT to wait until the SAP GUI for Windows session is available.
Is this 1:1 comparable to the Web add-on's Page.Sync? If so, when should I call it? Do I have to call Sync...
after each input sent to the SAP GUI?
after each input sent to the SAP GUI if a server roundtrip takes place after this input is received? (How can I identify that one takes place?)
after each context-changing input sent to the SAP GUI?
only once after launching the SAP session?
I haven't had a chance to use the Windows SAP support in UFT yet, which is why I find the documentation rather sparse.
Thanks...
AFAIK it's the same as the web Sync; there's usually no need to use it. Synchronisation in UFT usually comes from the need to identify an object before acting upon it. Sync is useful in cases where an object in the old state of the application may match the expected object in the new state of the application. Usually Sync is added to tests ad hoc when they fail due to synchronisation issues.
SAP's Sync works just like Web's Sync, but there is one important difference:
While Page.Sync is often no guarantee that the app really is idle when Page.Sync returns, for SAP applications, SAPSession.Sync returning does indeed guarantee this.
So whenever the SAP client is doing server roundtrips, SAPSession.Sync is a very safe way of obtaining synchronization (i.e. awaiting the SAP client's idle state).

HowTo: Inform application that database table row is updated?

I am in the process of developing an MFC-based Windows application using PostgreSQL which would:
Fetch information from the UI
Perform some logic and store the related information in the database
Send the stored information immediately OR at a scheduled time (e.g. at 5:00 on xyz date) over the network
Currently, we have developed a dispatcher mechanism (a thread) which constantly polls the database for new information inserted into the database. The thread fetches the information and sends it to the network module.
But I feel this is not the correct approach, as:
Polling all the time is an overhead; there can be times when there is nothing to process
It is not real time, because we only poll every 5 seconds
So
Is there a way to trigger my network module as soon as information is updated in the database?
Or is there any better way to achieve this task?
Thanks in advance.
You can use the listen/notify feature of PostgreSQL for this.
http://www.postgresql.org/docs/current/static/sql-listen.html
http://www.postgresql.org/docs/current/static/sql-notify.html
The clients interested in the messages would execute a listen statement and the trigger would then notify them.
I don't use C#, but according to the manual you can retrieve the messages in an asynchronous manner - which still involves some "lightweight" polling, as the notification message is only sent as part of the server's response to a statement. The manual claims that running an "empty" statement (such as ;) will be enough. Using Java/JDBC I used a simple select 42, which doesn't impose a big workload on the server as no tables are touched.
This polling is definitely faster and more scalable than actually retrieving the table's data.
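A minimal sketch of the database side, assuming a table named requests with an id column and a channel named new_request (all three are placeholder names, not from the question):

    CREATE OR REPLACE FUNCTION notify_new_request() RETURNS trigger AS $$
    BEGIN
      -- send the new row's id as the notification payload
      PERFORM pg_notify('new_request', NEW.id::text);
      RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER requests_notify
    AFTER INSERT ON requests
    FOR EACH ROW EXECUTE PROCEDURE notify_new_request();

    -- The dispatcher's connection subscribes once:
    LISTEN new_request;

On the MFC/C++ side, libpq delivers these notifications through PQconsumeInput/PQnotifies, so the dispatcher thread can wait on the connection's socket instead of polling the table.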
Yes, you are right @RDX: you shouldn't poll every time; instead you could write a trigger in Postgres and call a Java program from that trigger, as shown in the thread below.
Calling java pgm from Postgres trigger

How can I make my database records automatic

Is there any way I can make my records in the database act automatically? E.g. I want a message to be sent to the helpdesk if a requested service is not attended to within 24 hours, without anyone clicking anything.
Technically it depends on the database you are using. If the database supports it, you could set up a scheduled job to scan the records, identify late services and email the helpdesk.
If the database doesn't support scheduled tasks, then you could set up a client job on a timer to do the same thing.
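The scan itself can be a single query; a minimal sketch, assuming a service_requests table with created_at and attended_at columns (placeholder names, and the interval syntax varies slightly between databases):

    -- find requests that have been waiting for more than 24 hours
    SELECT id, created_at
    FROM   service_requests
    WHERE  attended_at IS NULL
      AND  created_at < CURRENT_TIMESTAMP - INTERVAL '24' HOUR;

The scheduled job runs this periodically and sends an email for each row returned.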
This is what application software is for.
When the application saves to the database, the application also sends an email.
The traditional approach to this is to schedule a job (there are too many ways[1] to do that for me to go into details without knowing your server operating system, DBMS, and how much control you have to install or schedule programs on the server).
Your scheduled job would regularly check the database for records that have not been attended, and then take the appropriate action such as emailing the support team.
[1] Just so that this is not left completely unanswered: some DBMSs (e.g. SQL Server) have built-in job scheduling facilities, or you could run a Windows service on the database server to do this. If not, you might consider running a Windows service on one of your own servers that accesses the website (a great way to waste bandwidth).
Use a scheduler like this one, found on the rufus site. You could program it to run, for instance, every hour, and make it do the job without human interaction.
I am a Java shop myself and I've been using Quartz. It is quite good and usable if you can adjust to JRuby.
I've never liked database or operating system based solutions, since you might not control them and often get asked to run on different environments.
Here's a very simple background job handler for Ruby:
codeforpeople.rubyforge.org/svn/bj/trunk/README
Easy to install and use. Fairly lightweight. It uses a SQL backend for managing concurrency. Runs on multiple machines simultaneously if you need it to.

What Windows API to look into for building a scheduling application?

Why not use the Windows scheduler?
I have several applications that have to run at certain times according to business rules, not the typical "every weekday at 1pm".
I also need a way for the applications to provide feedback of their progress so that I can have rules that notify me when the applications are running slow or aren't even running anymore.
What Windows API should I be looking into (something like a time-based version of the FileWatcher APIs)?
What's the best way to have the application notify the scheduler of its progress (files, sockets, windows messages, ???)?
For Vista/Win2k8, there's the nice Task Scheduler 2.0 API: http://msdn.microsoft.com/en-us/library/aa384138(VS.85).aspx. Previous versions have the Task Scheduler 1.0 API, but I've never used it.
AppControls has a CronJob component that you can use to create scheduled events. This saves your program from having to wake up every minute and check the schedule itself. Instead, just schedule the job and indicate a callback method.
I have used this component for scheduling jobs myself and have been very happy with the way that it works.
I think what you really want is a common framework for your applications: each app reports its status to something (you, system messages, tracing, perfmon, the event log, whatever) and can also receive messages and respond via some inter-process protocol.
Based on the reporting you can change the scheduling or make other changes, etc.
So there is one monitor app, and each of your other apps does common reporting.
events I can think of:
- started
- stopped
- error
- normal log messages
- and of course specific things your apps do.
I think there are probably existing classes/frameworks that do this - you'll have to check around.
If it were me, I would make a service that could talk to all the other apps and perhaps was even an HTTP server. It would be able to route messages to particular apps, start and stop those processes, and query them.
There are lots of ways to do what you want, though; those were just off the top of my head.
Alternatively you might just be able to turn these apps into services and have them handle messages sent to them. Their normal processing does nothing until they are "woken up" with some task command.
You have several questions in one; normally you should split them, but let's overlook that and try to answer.
To schedule certain events (including running an application): use TJvScheduledEvents from the JVCL. IMHO the JVCL is the best Delphi open source library around, with an extensive number of components, developers & support. TJvScheduledEvents is quite neat, uses threads for event scheduling, and the JVCL also includes a detailed editor for your events (it needs a small hack to use it, though).
To provide 'feedback' from your applications to a (remote) central point: a very, very, very good solution (if your requirements permit) is to log the progress of your applications in a table (let's call it LOG) on a Firebird server. In LOG you can have fields such as COMPUTER, USERNAME, APPNAME, MSG, LOGDATE (etc. etc.). In the After Insert trigger of the LOG table you can fire an event (let's call it NEW_LOG); a sketch of this trigger follows below. In your console app you register interest in this event, and so your application will be automatically updated with everything which happens in any of your applications, so you can do log analysis, graphs etc. Of course you can do it with IB, but IB costs.
Going the Windows API route you need headers (which probably aren't translated), and you'll encounter our dearest pointers/PChars etc. Of course, building everything from scratch isn't worthwhile, but when this is already done in a Delphi way, why not use it?
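A minimal sketch of the Firebird side described above; the table and field names follow the answer, while the trigger name and column sizes are my own choices:

    CREATE TABLE LOG (
      ID       INTEGER NOT NULL PRIMARY KEY,
      COMPUTER VARCHAR(64),
      USERNAME VARCHAR(64),
      APPNAME  VARCHAR(64),
      MSG      VARCHAR(1024),
      LOGDATE  TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );

    SET TERM ^ ;
    CREATE TRIGGER LOG_AI FOR LOG
    AFTER INSERT
    AS
    BEGIN
      -- wake up every client that registered interest in NEW_LOG
      POST_EVENT 'NEW_LOG';
    END^
    SET TERM ; ^

On the Delphi side an events component (for example IBX's TIBEvents) registers for NEW_LOG and fires a callback whenever a row is inserted.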
Use a service with a timer that fires regularly (for example, each minute). It reads the schedule and checks whether any tasks are due before the next iteration; if so, it executes them.
You can add an interface that shows all running apps, and query it for feedback from a desktop application.

Looking for pattern/approach/suggestions for handling long-running operation tied to web app

I'm working on a consumer web app that needs to do a long running background process that is tied to each customer request. By long running, I mean anywhere between 1 and 3 minutes.
Here is an example flow. The object/widget doesn't really matter.
Customer comes to the site and specifies object/widget they are looking for.
We search/clean/filter for widgets matching some initial criteria. <-- long running process
Customer further configures more detail about the widget they are looking for.
When the long running process is complete the customer is able to complete the last few steps before conversion.
Steps 3 and 4 aren't really important. I just mention them because we can buy some time while we are doing the long running process.
The environment we are working in is a LAMP stack-- currently using PHP. It doesn't seem like a good design to have the long running process take up an apache thread in mod_php (or fastcgi process). The apache layer of our app should be focused on serving up content and not data processing IMO.
A few questions:
Is our thinking right in that we should separate this "long running" part out of the apache/web app layer?
Is there a standard/typical way to break this out under Linux/Apache/MySQL/PHP (we're open to using a different language for the processing if appropriate)?
Any suggestions on how to go about breaking it out? E.g. do we create a daemon that churns through a FIFO queue?
Edit: Just to clarify, only about 1/4 of the long running process is database centric. We're working on optimizing that part. There is some work that we could potentially do, but we are limited in the amount we can do right now.
Thanks!
Consider providing the search results via AJAX from a web service instead of your application. Presumably you could offload this to another server and let your web application deal with the content as you desire.
Just curious: 1-3 minutes seems like a long time for a lookup query. Have you looked at indexes on the columns you are querying to improve the speed? Or do you need to do some algorithmic process -- perhaps you could perform some of this offline and prepopulate some common searches with hints?
As Jonnii suggested, you can start a child process to carry out background processing. However, this needs to be done with some care:
Make sure that any parameters passed through are escaped correctly
Ensure that more than one copy of the process does not run at once
If several copies of the process run, there's nothing stopping a (not even malicious, just impatient) user from hitting reload on the page which kicks it off, eventually starting so many copies that the machine runs out of RAM and grinds to a halt.
So you can use a subprocess, but do it carefully, in a controlled manner, and test it properly.
Another option is to have a daemon permanently running waiting for requests, which processes them and then records the results somewhere (perhaps in a database)
This is the poor man's solution:
exec ("/usr/bin/php long_running_process.php > /dev/null &");
Alternatively you could:
Insert a row into your database with details of the background request, which a daemon can then read and process (see the sketch below).
Write a message to a message queue which a daemon then reads and processes.
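A minimal sketch of the database-queue variant on MySQL; the table and column names (background_jobs, payload, status) are placeholders of my choosing:

    CREATE TABLE background_jobs (
      id         INT AUTO_INCREMENT PRIMARY KEY,
      payload    TEXT        NOT NULL,                   -- what the daemon should do
      status     VARCHAR(10) NOT NULL DEFAULT 'queued',  -- queued / running / done / failed
      created_at TIMESTAMP   NOT NULL DEFAULT CURRENT_TIMESTAMP
    );

    -- The web request only inserts a row and returns immediately:
    INSERT INTO background_jobs (payload) VALUES ('{"widget_id": 42}');

    -- The daemon claims the oldest queued job inside a transaction:
    START TRANSACTION;
    SELECT id INTO @job_id
    FROM   background_jobs
    WHERE  status = 'queued'
    ORDER  BY id
    LIMIT  1
    FOR UPDATE;   -- on MySQL 8+ you can add SKIP LOCKED to run several workers safely

    UPDATE background_jobs SET status = 'running' WHERE id = @job_id;
    COMMIT;

The daemon then does the work described by payload, marks the row done or failed, and sleeps briefly before looking again; this keeps the long-running work out of the Apache/mod_php processes.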
Here's some discussion on the Java version of this problem.
See java: what are the best techniques for communicating with a batch server
Two important things you might do:
Switch to Java and use JMS.
Read up on JMS but use another queue manager. Unix named pipes, for instance, might be an acceptable implementation.
Java servlets can do background processing. You could do something similar in any web technology with threading support; I don't know about PHP, though.
Not a complete answer, but I would think of using AJAX and passing the 2nd step to something that's faster than PHP (C, C++, C#), then having a PHP function pick the results off some stack, most likely just a database.
