I made a program in Delphi that uploads a picture to the Kinvey provider. First the program saves the image from a TImage component to a specific directory, then it uploads that file to Kinvey. The issue is that every time I open the program it uploads to Kinvey again. How can I make sure it uploads only once, even if I open the program multiple times?
Image1.Repaint;
Image1.Bitmap.SaveToFile('some dir');

procedure TTabbedwithNavigationForm.Timer2Timer(Sender: TObject);
var
  fn: string;
  Lstream: TFileStream;
  Lfile: TBackendEntityValue;
begin
  fn := 'the file dir';
  try
    Lstream := TFileStream.Create(fn, fmOpenRead);
    BackendFiles1.Files.UploadFile(fn, Lstream, 'image/png', Lfile);
  finally
    Lstream.Free;
    BackendFiles1.Free;
  end;
end;

end.
The name of the method "Timer2Timer" suggests that this code is triggered... by a Timer component named Timer2. ;-)
Check where this timer is activated. If it is already enabled at design time, the event will be called as soon as the defined interval has elapsed, independently of any user interaction.
BTW: It is good to use a try/finally block for streams, but the stream creation should happen directly before the try (otherwise, if the stream creation fails, the finally block is still executed and you get an access violation because the stream variable was never initialized).
The call to "BackendFiles1.Files;" inside the finally block seems superfluous to me; what is that supposed to do?
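For illustration, a sketch of the handler with the stream created right before the try block and the timer disabled so the upload is not repeated while the program is running (assuming the timer component is named Timer2, as the handler name suggests):

procedure TTabbedwithNavigationForm.Timer2Timer(Sender: TObject);
var
  fn: string;
  Lstream: TFileStream;
  Lfile: TBackendEntityValue;
begin
  Timer2.Enabled := False;                       // stop further timer ticks
  fn := 'the file dir';
  Lstream := TFileStream.Create(fn, fmOpenRead); // create the stream before try
  try
    BackendFiles1.Files.UploadFile(fn, Lstream, 'image/png', Lfile);
  finally
    Lstream.Free;
  end;
end;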
Here are my first steps with Oracle Advanced Queueing...
Scenario: I have a running application where many independent processes report back to a central controller that handles the next steps. Simplified: the processes are started via cron or via the callback of a process that has just finished. The callbacks come from remote hosts via HTTP -> PHP -> DB, basically one HTTP call after the process has finished on the remote host.
The complete controller logic was written in PL/SQL with a singleton concept in mind, so only one process should execute the controller logic at a time. In fact, in 99% of all calls this is not necessary, but that's not the kind of thing I can change at the moment (nor the architecture in general).
To ensure this, there is currently a rather bad mutex implementation; in pseudo-code:
$mutex = false;
while( not $mutex )
{
    $mutex = getMutex();
    if( $mutex )
        executeController();
    else
        sleep(5);
}
The mutex is a one-field table holding the value 0 (=> "free") or 1 (=> "busy").
The result of this "beautiful" construction is a log file full of "Hey! Got no mutex! Waiting...". And the more processes wait, the longer they wait, with no control over who's next. Sometimes the load gets so heavy that Apache first forks and finally dies...
Solution
So my first "operation" would be to replace the mutex with Oracle Advanced Queueing, with the controller as single consumer. Benefits: no more busy waiting within the Apache layer, and strict first come, first served.
(Because all the DB actions take place in the same Oracle schema, this could be achieved with standard objects and PL/SQL methods as well. But why reinvent the wheel if there are DBMS packages?)
As far as I have read, using the listen feature (polling the queued items) is far better in this context than the registration feature (scheduling an action when a message arrives).
Basically everything works fine; I managed to:
create the message type
create the queue-table
create the queue
start the queue
add USER as subscriber
create a procedure for enqueueing
create a procedure for processing & dequeueing
create a procedure for listening to the queue and calling the "process & dequeue"-function when a message arrives.
Of course the listener should be active 24/7, so I specified no wait time. In general, depending on the time of day, it will get something to do at least every few minutes, more likely every few seconds, sometimes more often.
Now here is my problem (if it actually is a problem); I just wrote it according to the examples I have found so far:
CREATE OR REPLACE PROCEDURE demo_aq_listener IS
  qlist       dbms_aq.aq$_agent_list_t;
  agent_w_msg sys.aq$_agent;
BEGIN
  qlist(0) := sys.aq$_agent(USER, 'demo_aq_queue', NULL);
  LOOP
    dbms_aq.listen(agent_list => qlist, agent => agent_w_msg);
    DEMO_AQ_DEQUEUE(); -- process & dequeue
  END LOOP;
END;
/
Calling the procedure basically does what I expect: it stays up and processes the queued messages.
But is this the way to do it? What does it do when there are no queued messages? Does it sleep inside the dbms_aq.listen routine, or does it loop as fast as it can, so that I have just implemented another form of busy waiting? Might there be a timeout (maybe at the OS level or elsewhere) that I just haven't hit yet?
Here is the complete code with the queue definition etc.: demo_dbms_aq_with_listener.sql
UPDATE
Through further testing I just realized that my lack of understanding is far greater than I had hoped :(
At the execution level, not using the listener at all and just looping over the dequeue function has the same effect: it waits for the first/next message.
CREATE OR REPLACE PROCEDURE demo_aq_listener IS
BEGIN
  LOOP
    DEMO_AQ_DEQUEUE();
  END LOOP;
END;
/
At least this is easier to test; calling only
BEGIN
  DEMO_AQ_DEQUEUE();
END;
/
also just waits for the first message. This leaves me totally confused as to whether I need the listener at all, and whether what I am doing makes any sense at all :(
Conclusion
I don't need the listener at all, because I have a single consumer that can treat all messages the same way.
But the core question stays the same: is it OK to keep DBMS_AQ.DEQUEUE in a loop that is possibly actively waiting, knowing it will receive messages all day long at short intervals?
(You'll find DEMO_AQ_DEQUEUE() in the linked SQL file above.)
Better late than never: everything is fine, it is idle waiting:
1) Whilst the DEQUEUE is in sleep mode (WAIT FOREVER), I can see the session is waiting on the event - "Streams AQ: waiting for messages in the queue", that is an IDLE wait class and not actually consuming ANY resources, correct ?
Correct. It's similar to waiting on a row lock on a table. You just "sit there"
https://asktom.oracle.com/pls/apex/asktom.search?tag=writing-a-stand-alone-application-to-continuously-monitor-a-database-queue-aq
I'm working on a service whose main loop looks like this:
while (fServer.ServerState = ssStarted) and (Self.Terminated = false) do
begin
  Self.ServiceThread.ProcessRequests(false);
  ProcessFiles;
  Sleep(3000);
end;
ProcessRequests is a lot like Application.ProcessMessages. I can't pass True to it, because then it blocks until a message is received from Windows and ProcessFiles won't run, and ProcessFiles has to run continually. The Sleep is there to keep the CPU usage down.
This works just fine until I try to shut down the service from Windows's service management list. When I hit Stop, Windows sends a message and expects a response almost immediately; if the loop is in the middle of that Sleep call, Windows gives me an error that the service didn't respond to the Stop command.
So what I need is a way to say "Sleep for 3000 ms or until you receive a message, whichever comes first." I'm sure there's an API for that, but I'm not sure what it is. Does anyone know?
This kind of stuff is hard to get right, so I usually start at the API documentation at MSDN.
The WaitForSingleObject documentation specifically directs to MsgWaitForMultipleObjects for these kinds of situations:
Use caution when calling the wait functions and code that directly or indirectly creates windows. If a thread creates any windows, it must process messages. Message broadcasts are sent to all windows in the system. A thread that uses a wait function with no time-out interval may cause the system to become deadlocked. Two examples of code that indirectly creates windows are DDE and the CoInitialize function. Therefore, if you have a thread that creates windows, use MsgWaitForMultipleObjects or MsgWaitForMultipleObjectsEx, rather than WaitForSingleObject.
MsgWaitForMultipleObjects has a dwWakeMask parameter specifying which types of queued messages cause it to return, and the documentation has a table describing the masks you can use.
Edit because of comment by Warren P:
If your main loop can be continued because of a ReadFileEx, WriteFileEx or QueueUserAPC, then you can use SleepEx.
--jeroen
MsgWaitForMultipleObjects() is the way to go, i.e.:
while (fServer.ServerState = ssStarted) and (not Self.Terminated) do
begin
  ProcessFiles;
  if MsgWaitForMultipleObjects(0, nil, FALSE, 3000, QS_ALLINPUT) = WAIT_OBJECT_0 then
    Self.ServiceThread.ProcessRequests(false);
end;
If you want to call ProcessFiles() at 3-second intervals regardless of any messages arriving, then you can use a waitable timer, i.e.:
var
  iDue: TLargeInteger;
  hTimer: array[0..0] of THandle;
begin
  iDue := -30000000; // 3-second relative interval, specified in 100-nanosecond units
  hTimer[0] := CreateWaitableTimer(nil, False, nil);
  SetWaitableTimer(hTimer[0], iDue, 0, nil, nil, False);
  while (fServer.ServerState = ssStarted) and (not Self.Terminated) do
  begin
    // using a timeout interval so the loop conditions can still be checked periodically
    case MsgWaitForMultipleObjects(1, hTimer, False, 1000, QS_ALLINPUT) of
      WAIT_OBJECT_0:
        begin
          ProcessFiles;
          SetWaitableTimer(hTimer[0], iDue, 0, nil, nil, False);
        end;
      WAIT_OBJECT_0+1: Self.ServiceThread.ProcessRequests(false);
    end;
  end;
  CancelWaitableTimer(hTimer[0]);
  CloseHandle(hTimer[0]);
end;
Use a timer to run ProcessFiles instead of hacking it into the main application loop. Then ProcessFiles will run at the interval you want, the messages will be processed correctly, and you won't be taking 100% CPU.
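A minimal sketch of that idea, with hypothetical names (a TTimer called FilesTimer on the service module; note it still relies on the service pumping messages, e.g. via ProcessRequests):

procedure TMyService.ServiceStart(Sender: TService; var Started: Boolean);
begin
  FilesTimer.Interval := 3000;   // fire every 3 seconds
  FilesTimer.Enabled  := True;
  Started := True;
end;

procedure TMyService.FilesTimerTimer(Sender: TObject);
begin
  ProcessFiles;                  // runs when the WM_TIMER message is dispatched
end;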
I used a TTimer in a multithreaded application with strange results, so now I use events.
while (fServer.ServerState = ssStarted) and (Self.Terminated = false) do
begin
  Self.ServiceThread.ProcessRequests(false);
  ProcessFiles;
  if ExitEvent.WaitFor(3000) <> wrTimeout then
    Exit;
end;
You create the event with
ExitEvent := TEvent.Create(nil, False, False, '');
Now the last thing is to fire the event when the service stops. I think the Stop event of the service is the right place to put this.
ExitEvent.SetEvent;
I use this code for a cleanup thread in my DB connection pooling system, but it should work well in your case too.
You don't need to sleep for 3 full seconds to keep the CPU usage low. Even something like Sleep(500) should keep your usage pretty low: if there are no messages waiting to process, it will blow through the loop pretty quickly and hit the Sleep again. If your loop takes a few ms to run, your thread still spends the vast majority of its time sleeping.
That being said, your code may benefit from some refactoring. You say you don't want ProcessRequests to block waiting for a message? The only other thing in that loop is ProcessFiles. If that depends on the message being processed, then why can't it block? And if it doesn't depend on the message being processed, can it be split onto another thread? (The earlier suggestion of firing ProcessFiles via a timer is an excellent way to do this.)
Use a TEvent that you signal when the thread should wake up. Then block on the TEvent (using WaitForMultipleObjects, as Jeroen says, if you have multiple events to wait on).
Is it not possible to move ProcessFiles to a separate thread? In your main thread you just wait for messages, and when the service is being terminated you terminate the ProcessFiles thread.
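A rough sketch of that idea (hypothetical class and names, assuming ProcessFiles can be made callable from a worker thread): the thread waits on an event between passes, so stopping the service only has to signal the event and wait for the thread to finish.

uses
  Classes, SyncObjs;

type
  TProcessFilesThread = class(TThread)
  private
    FWakeUp: TEvent;
  protected
    procedure Execute; override;
  public
    constructor Create;
    destructor Destroy; override;
    procedure Stop;
  end;

constructor TProcessFilesThread.Create;
begin
  inherited Create(False);
  FWakeUp := TEvent.Create(nil, False, False, '');
end;

destructor TProcessFilesThread.Destroy;
begin
  FWakeUp.Free;
  inherited;
end;

procedure TProcessFilesThread.Execute;
begin
  while not Terminated do
  begin
    ProcessFiles;           // hypothetical: the file work moved into/reachable from this thread
    FWakeUp.WaitFor(3000);  // pause up to 3 s, but wake immediately when Stop is called
  end;
end;

procedure TProcessFilesThread.Stop;
begin
  Terminate;
  FWakeUp.SetEvent;         // interrupt the wait so the thread exits promptly
  WaitFor;                  // block until Execute returns
end;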
I am creating an application that implements inter-process communication. For this purpose I have set up a shared buffer, which seems to work fine. Now I need a way for the data-generating application (written in C++) to tell the data-receiving application (written in FreePascal/Lazarus) when it should read the data.
I was trying to use a mutex for this purpose. I do not have much experience with Windows API programming.
So, my problem is that in the FreePascal code below, the mutex won't wait. I can call the TMutex.Wait function; it doesn't return an error or anything, but it simply won't wait.
constructor TMutex.Create(sName: AnsiString);
begin
  sName := 'Local\Mutex' + sName;
  hMutex := CreateMutexA(
    nil,           // default access
    True, // initially not owned
    PChar(sName)); // named mutex
  if hMutex = 0 then
  begin
    raise Exception.Create('mutex creation failed');
  end;
end;

destructor TMutex.Destroy;
begin
  CloseHandle(hMutex);
end;

procedure TMutex.Wait;
begin
  if (WaitForSingleObject(hMutex, INFINITE) <> 0) then
    ShowMessage('debug: wait returned something');
end;

procedure TMutex.Post;
begin
  ReleaseMutex(hMutex);
end;
It looks like your problem is at:
True, // initially not owned
You have things backwards -- true means it initially IS owned, so waiting on it will return immediately.
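In other words, pass False for the bInitialOwner argument; a sketch of the corrected call (rest of the constructor unchanged):

  hMutex := CreateMutexA(
    nil,            // default security attributes
    False,          // bInitialOwner = False: do not take ownership on creation
    PChar(sName));  // named mutex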
You don't show us the code that calls the Wait method of TMutex. However, you have to know that a mutex is re-entrant: if a thread already owns a mutex, it will always be granted access to it again, so a wait from that thread will never block. This is built into mutexes to avoid deadlocks.
Try acquiring the mutex from another thread; the wait should block.
I wrote a multi-threaded Windows application with two threads:
A – a Windows form that handles user interaction and processes the data from B.
B – occasionally generates data and passes it to A.
A thread-safe queue is used to pass the data from thread B to A. The enqueue and dequeue functions are guarded using Windows critical section objects.
If the queue is empty when the enqueue function is called, the function will use PostMessage to tell A that there is data in the queue. The function checks to make sure the call to PostMessage executes successfully and repeatedly calls PostMessage if it does not (PostMessage has yet to fail).
This worked well for quite some time until one specific computer started to lose the occasional message. By "lose" I mean that PostMessage returns successfully in B, but A never receives the message. This causes the software to appear frozen.
I have already come up with a couple of acceptable workarounds. I am interested in knowing why Windows is losing these messages and why this is only happening on the one computer.
Here are the relevant portions of the code.
// Only called by B
procedure TSharedQueue.Enqueue(AItem: TSQItem);
var
  B: boolean;
begin
  EnterCriticalSection(FQueueLock);
  if FCount > 0 then
  begin
    FLast.FNext := AItem;
    FLast := AItem;
  end
  else
  begin
    FFirst := AItem;
    FLast := AItem;
  end;
  if (FCount = 0) or (FCount mod 10 = 0) then // just in case a message is lost
    repeat
      B := PostMessage(FConsumer, SQ_HAS_DATA, 0, 0);
      if not B then
        Sleep(1000); // this line of code has never been reached
    until B;
  Inc(FCount);
  LeaveCriticalSection(FQueueLock);
end;

// Only called by A
function TSharedQueue.Dequeue: TSQItem;
begin
  EnterCriticalSection(FQueueLock);
  if FCount > 0 then
  begin
    Result := FFirst;
    FFirst := FFirst.FNext;
    Result.FNext := nil;
    Dec(FCount);
  end
  else
    Result := nil;
  LeaveCriticalSection(FQueueLock);
end;

// procedure called when SQ_HAS_DATA is received
procedure TfrmMonitor.SQHasData(var AMessage: TMessage);
var
  Item: TSQItem;
begin
  while FMessageQueue.Count > 0 do
  begin
    Item := FMessageQueue.Dequeue;
    // use the Item somehow
  end;
end;
Is FCount also protected by FQueueLock? If not, then your problem lies with FCount being incremented after the posted message is already processed.
Here's what might be happening:
B enters critical section
B calls PostMessage
A receives the message but doesn't do anything since FCount is 0
B increments FCount
B leaves critical section
A sits there like a duck
A quick remedy would be to increment FCount before calling PostMessage.
Keep in mind that things can happen more quickly than one would expect (i.e. the message posted with PostMessage being caught and processed by another thread before you have a chance to increment FCount a few lines later), especially in a true multi-threaded environment (multiple CPUs). That's why I asked earlier whether the "problem machine" has multiple CPUs/cores.
An easy way to troubleshoot problems like these is to scaffold the code with additional logging: log every time you enter a method, enter/leave a critical section, etc. Then you can analyze the log to see the true order of events.
On a separate note, a nice little optimization that can be done in a producer/consumer scenario like this is to use two queues instead of one. When the consumer wakes up to process the full queue, you swap the full queue with an empty one and just lock/process the full queue, while the new empty queue can be populated without the two threads trying to lock each other's queues. You'd still need some locking when swapping the two queues, though.
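To illustrate that remedy with the code above, the posting block in Enqueue could be reordered so FCount is already updated by the time A handles the message (a sketch; the surrounding critical section stays as-is):

  // inside TSharedQueue.Enqueue, still under FQueueLock
  Inc(FCount);  // update the count first, so A's handler sees the item when the message arrives
  if (FCount = 1) or (FCount mod 10 = 0) then // FCount = 1 means the queue was empty before this item
    repeat
      B := PostMessage(FConsumer, SQ_HAS_DATA, 0, 0);
      if not B then
        Sleep(1000);
    until B;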
If the queue is empty when the enqueue function is called, the function will use PostMessage to tell A that there is data in the queue.
Are you locking the message queue before checking the queue size and issuing the PostMessage? You may be experiencing a race condition where you check the queue and find it non-empty when in fact A is processing the very last message and is about to go idle.
To see if you're in fact experiencing a race condition and not a problem with PostMessage, you could switch to using an event. The worker thread (A) would wait on the event instead of waiting for a message. B would simply set that event instead of posting a message.
This worked well for quite some time until one specific computer started to lose the occasional message.
By any chance, does the number of CPUs or cores in this specific computer differ from the others where you see no problem? Sometimes when you switch from a single-CPU machine to a machine with more than one physical CPU/core, new race conditions or deadlocks can arise.
Could there be a second instance unknowingly running and eating the messages, marking them as handled?
Is there a way to track which window currently has keyboard focus? I could handle WM_SETFOCUS for every window, but I'm wondering if there's an alternative, simpler method (i.e. a single message handler somewhere).
I could use OnIdle() in MFC and call GetFocus() but that seems a little hacky.
So from the way you worded the question I'm inferring that you want to have an event handler which is invoked whenever focus switches between windows. You want to be notified, rather than having to poll.
I actually don't think calling GetFocus from OnIdle is that much of a hack - sure it's polling, but it's low-overhead polling without side effects - but if you really want to track this, Windows Hooks are probably your best choice. Specifically you can install a CBT hook (WH_CBT) and listen for the HCBT_SETFOCUS notification.
Windows calls the WH_CBT hook with this hook code when Windows is about to set the focus to any window. In the case of thread-specific hooks, the window must belong to the thread. If the filter function returns TRUE, the focus does not change.
You could also do this with a WH_CALLWNDPROC hook and listen for the WM_SETFOCUS message.
Depending on whether you make it a global hook, or app-local, you can track focus across all windows on the system, or only the windows owned by your process.
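For example, an app-local (thread-specific) hook in Delphi might look roughly like this sketch; a global hook would additionally require the hook procedure to live in a DLL:

uses
  Windows, Messages, SysUtils;

var
  CBTHook: HHOOK = 0;

// Called by Windows just before the focus changes (HCBT_SETFOCUS):
// wParam is the window gaining focus, lParam the window losing it.
function CBTHookProc(nCode: Integer; wParam: WPARAM; lParam: LPARAM): LRESULT; stdcall;
begin
  if nCode = HCBT_SETFOCUS then
    OutputDebugString(PChar('Focus -> ' + IntToHex(wParam, 8)));
  Result := CallNextHookEx(CBTHook, nCode, wParam, lParam);
end;

procedure InstallFocusHook;
begin
  // module handle 0 + current thread id => hook only this thread's windows
  CBTHook := SetWindowsHookEx(WH_CBT, @CBTHookProc, 0, GetCurrentThreadId);
end;

procedure RemoveFocusHook;
begin
  if CBTHook <> 0 then
    UnhookWindowsHookEx(CBTHook);
end;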
There is an easy way using .NET Framework 3.5: the UI Automation library provides a focus-changed event that fires every time the focus moves to a new control.
Page on MSDN
Sample:
public void SubscribeToFocusChange()
{
    AutomationFocusChangedEventHandler focusHandler
        = new AutomationFocusChangedEventHandler(OnFocusChanged);
    Automation.AddAutomationFocusChangedEventHandler(focusHandler);
}

private void OnFocusChanged(object sender, AutomationFocusChangedEventArgs e)
{
    AutomationElement focusedElement = sender as AutomationElement;
    //...
}
This API in fact uses Windows hooks behind the scenes. However, you have to use the .NET Framework...
How about the Win32 GetForegroundWindow?
If you're programming in .NET 3.5, the Automation package olorin mentions is by far the easiest, but beware of using it in a program that itself has a UI, at least if the UI is done in WPF -- the focus-tracking hooks get confused by events in their own app and quickly lock up the UI. I sent MS a bug report on it. I have not observed the same problem using a traditional Windows Forms UI. You could, of course, put the tracking code in a separate console app and use some kind of IPC to transmit the info you need.
The tempting alternative of using interop to access the WH_CBT Windows hook from C# won't work -- the only global hooks you can get at from C# are the mouse and keyboard hooks.
Well, this may not be very graceful... but you can retrieve the currently focused control pretty easily. So you might consider setting up a timer that asks every half second or so "Where is the current focus?"... Then you can observe changes. Example Delphi code is below; it should be pretty easy to adapt, since the real work is in the Windows API calls.
<snip>
function TForm1.GetCurrentHandle: integer;
var
  activeWinHandle: HWND;
  focusedThreadID: DWORD;
begin
  //return the Windows handle of the currently focused control
  Result := 0;
  activeWinHandle := GetForegroundWindow;
  focusedThreadID := GetWindowThreadProcessID(activeWinHandle, nil);
  if AttachThreadInput(GetCurrentThreadID, focusedThreadID, true) then begin
    try
      Result := GetFocus;
    finally
      AttachThreadInput(GetCurrentThreadID, focusedThreadID, false);
    end;
  end; //if attached
end;

procedure TForm1.Timer1Timer(Sender: TObject);
begin
  //give notification if the handle changed
  //(this code gets fired by a timer)
  CurrentHandle := GetCurrentHandle;
  if CurrentHandle <> PreviousHandle then begin
    Label1.Caption := 'Last focus change occurred # ' + DateTimeToStr(Now);
  end;
  PreviousHandle := CurrentHandle;
end;
<snip>
http://msdn.microsoft.com/en-us/library/ms771428.aspx has a window focus tracker sample.
You could monitor messages for the WM_ACTIVATE event.
ref
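For illustration, a minimal Delphi sketch of handling WM_ACTIVATE in a form; note this only reports activation of your own top-level window, not focus moves between child controls:

type
  TForm1 = class(TForm)
  private
    procedure WMActivate(var Msg: TWMActivate); message WM_ACTIVATE;
  end;

procedure TForm1.WMActivate(var Msg: TWMActivate);
begin
  inherited;
  if Msg.Active <> WA_INACTIVE then
    Caption := 'Activated at ' + DateTimeToStr(Now);
end;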