I have an application running on a Windows 10 machine.
I would like to determine whether the currently logged-in user is idle/away from the PC without having locked the screen.
The idea is that, since my application shows sensitive data, I would like to have an auto-logout in my application if the user is idle for some time.
The idle time should, however, not be the idle time in my application but in the whole Windows session. If the user is not using my application but is active in another app (using mouse/keyboard), this should not count as being idle.
So I guess my question is:
Is there a way to determine if the user is idle and for how long?
To determine how long a user has been idle, the system provides the GetLastInputInfo API call. To get notified when a user has been idle for a specified amount of time, an application would usually do the following (a sketch follows the steps below):
1. Set up a timer with the specified timeout (SetTimer).
2. When the timer expires, compute dwCurrent - dwLastInput, the difference between GetTickCount and the tick count reported by GetLastInputInfo. This reliably yields the elapsed time since the last user input even when GetTickCount wraps around to 0, because unsigned integer overflow is well defined in C and C++.
3. If the difference is smaller than the specified timeout, start over from 1. with the remainder of the timeout.
4. Otherwise the timeout has elapsed without user input, so do whatever your program needs to do after the user has been idle for the specified amount of time.
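A minimal C sketch of these steps, polling with Sleep in a console program instead of SetTimer (which a GUI application would use); the five-minute timeout is an assumed example value:

#include <windows.h>
#include <stdio.h>

/* Assumed example timeout; pick whatever your application needs. */
#define IDLE_TIMEOUT_MS (5 * 60 * 1000)

int main(void)
{
    for (;;)
    {
        LASTINPUTINFO lii = { sizeof(lii) };
        if (!GetLastInputInfo(&lii))
            return 1;

        /* Unsigned subtraction stays correct even when GetTickCount
           wraps around to 0. */
        DWORD dwCurrent = GetTickCount();
        DWORD dwElapsed = dwCurrent - lii.dwTime;

        if (dwElapsed >= IDLE_TIMEOUT_MS)
        {
            printf("User idle for %lu ms - trigger the auto-logout here\n",
                   (unsigned long)dwElapsed);
            break;
        }

        /* Wait out the remainder of the timeout before checking again. */
        Sleep(IDLE_TIMEOUT_MS - dwElapsed);
    }
    return 0;
}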
I'm using a load test in Visual Studio to test our Web API services, but to my surprise I can't seem to test what I want to. I have a single URL in my .webtest file and try to send the same URL again and again to see the average response time.
Here are the details
1. I use a constant load of 1 user
2. Test duration of 1 hour
3. Think time of 10 seconds (not the think time between iterations)
4. The avg. response time that I get is 1.5 seconds
5. So the avg. test time comes out to be 11.5 seconds
6. Requests/sec are 0.088
7. And I'm using Sequential Test Order among 4 different types of tests
These figures make me think that every time a virtual user sends a request, besides the specified think time it waits for the request to complete before sending a new one (the observed 0.088 requests/sec matches 1 / 11.5 ≈ 0.087). Thus technically the total think time becomes
Total think time = think time specified + avg. response time
But I don't want the user to wait for an already-sent request to come back before sending a new one after the specified think time. I need to configure the load test so that, with a think time of 10 seconds, the user sends the next request every 10 seconds without waiting for the previous one to return (instead of the effective 11.5 seconds in my case, as mentioned above). Yet no matter which of the 4 test types I choose, Visual Studio always forces the virtual user to wait for the request to complete, add the specified think time, and then send a new one.
I know that what the Visual Studio load test is doing is the more practical approach, where the user sends a request, waits until it comes back, thinks or interacts with the website, and then sends a new one.
Any help or suggestion would be appreciated towards what I'm trying to achieve.
In the properties of the scenario, set the "Test mix type" to "Test mix based on user pace" and set the "Tests per user per hour" as appropriate; for a fixed 10-second pace that is 3600 / 10 = 360 tests per user per hour.
The suggestion in the question that:
Total think time = think time specified + avg. response time
is erroneous. To my mind, adding the values does not provide a useful result. The two values on the right are as stated: think time simulates the time a user spends reading the page, deciding what to do next, and typing/clicking their response; response time is the "turn around" time between sending a request and getting the response. Adding them does not increase the think time in any sense, it just gives the total duration for handling the request in this specific test. Another test might make the same request with a different think time. Note also that many web pages cause more than one request and response to be issued; JavaScript and other technologies allow web pages to do many clever things.
There are a couple of apps in the Windows Phone Store that automatically update the phone's lock screen at a custom interval, say every 1, 2, 4 or more hours.
I searched the internet for articles or best practices on implementing a custom update interval longer than 30 minutes, but without any result.
Do you know of any code snippets or references to articles?
Thanks in advance!
As you have already found, the periodic agents are invoked once every 30 minutes. However, you can simply do nothing until your desired update period has passed, and then execute your update.
You already have access to your app's isolated storage from within your background agent. You can simply store a counter (or timestamp) there to track the time that has passed, and once it meets your requirement, execute your update and reset it, as in the sketch below.
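A minimal C# sketch of that idea, assuming a standard PeriodicTask agent; the settings key, the four-hour interval and the UpdateLockScreenImage helper are all illustrative:

using System;
using System.IO.IsolatedStorage;
using Microsoft.Phone.Scheduler;

public class ScheduledAgent : ScheduledTaskAgent
{
    // Assumed example interval; the agent itself still runs every ~30 minutes.
    private static readonly TimeSpan UpdateInterval = TimeSpan.FromHours(4);

    protected override void OnInvoke(ScheduledTask task)
    {
        var settings = IsolatedStorageSettings.ApplicationSettings;

        DateTime lastUpdate;
        if (!settings.TryGetValue("LastLockScreenUpdate", out lastUpdate))
            lastUpdate = DateTime.MinValue;

        // Do nothing until the desired period has actually passed.
        if (DateTime.UtcNow - lastUpdate >= UpdateInterval)
        {
            UpdateLockScreenImage(); // hypothetical helper doing the real work
            settings["LastLockScreenUpdate"] = DateTime.UtcNow;
            settings.Save();
        }

        NotifyComplete();
    }

    private void UpdateLockScreenImage()
    {
        // ... set the new lock screen image here ...
    }
}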
We have about 10 different Python scripts that download data from the web, read data from a database and write data back to that database. They do so repeatedly every 10 seconds (or 10 seconds after the last task has completed).
The question is, what is the best approach to running these tasks? I can think of a few ways:
a while True loop that runs the task then sleeps for the interval. It could be guarded by a watchdog like supervisord, making sure it is always up.
having the script execute the task just once, and invoking the script externally every 10 seconds from another process.
having the script execute the task for, let's say, 1 hour (every 10 seconds for an hour), and having a watchdog make sure the task runs again once the hour is over.
I would like to avoid long running processes that actually do something because I don't want to deal with memory problems etc over long periods of time.
Additional Information
The scripts are different because they each retrieve data from a different source, and query, calculate and insert different data into the database.
The tasks are performed every 10 seconds because the data being retrieved is real-time, and we need to not only keep updating it very frequently, but also keep all the historical data in the database.
There are a lot of resources being used by the scripts - MySQL connections, HTTP connections, Redis connections, etc. We have encountered issues with using the long-running approach before, specifically with MySQL connections (things like MySQL server has gone away, even though all connections had been closed). Hence the inclination toward having the scripts run in shorter periods of time.
What are some common approaches to this?
Unless your scripts somehow leak memory (quite unlikely), the approaches should all behave much the same. So, for sheer simplicity (your time programming/debugging is much more expensive than a few milliseconds of the machine's time, even every 10 seconds!) I'd go for the single script that loops and checks every 10 seconds, as in the sketch below.
OTOH, checking every 10 seconds sounds like busywork. Can't you set things up so that whatever you are monitoring tells you when there are changes? Or batch the records up so you can retrieve, say, a day's worth at a time?
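A minimal sketch of that single long-running script (option 1 from the question), pacing itself to roughly one run per interval; run_task is a stand-in for whatever work each script does:

import time

INTERVAL = 10  # seconds between runs

def run_task():
    # download data, query/calculate, write back to the database
    pass

while True:
    start = time.monotonic()
    try:
        run_task()
    except Exception as exc:
        # log and carry on; a watchdog like supervisord restarts us if we die
        print("task failed:", exc)
    # sleep for whatever remains of the interval, if anything
    time.sleep(max(0, INTERVAL - (time.monotonic() - start)))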
If you are running on Linux, cron has a granularity of one minute. We have processes we run constantly. Rather than watching them, the script takes a lock (a semaphore) that gets released when the program finishes, normally or not. That way, if a run takes long and cron starts another copy, the new copy exits when it can't get the lock. You can therefore invoke it as often as you need without it stepping on a possibly still-running copy; see the sketch below.
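A minimal Python sketch of that locking scheme using flock; the lock-file path is illustrative:

import fcntl
import sys

LOCK_PATH = "/tmp/fetch_task.lock"  # illustrative path

def main():
    lock_file = open(LOCK_PATH, "w")
    try:
        # non-blocking exclusive lock: fails if another copy holds it
        fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except BlockingIOError:
        sys.exit(0)  # a previous copy is still running; just exit
    # ... do the actual work here ...
    # the lock is released automatically when the process exits

if __name__ == "__main__":
    main()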
I have an application with the following pattern:
2 long-running processes that go into hibernation after some idle time, and their memory consumption goes down as expected
N (0 < N < 100) worker processes that do some work and hibernate when idle for more than 10 seconds, or terminate if idle for more than two hours
During the night, when there is no activity, the process memory goes back to almost the same value as at application start, which is expected since all the workers have died.
The issue is that the "system" section keeps growing (around 1 GB/week).
My question is: how can I debug what is stored there, or who is allocating memory in that area and not freeing it?
I've already tested lists:keysearch/3 and it doesn't seem to leak memory; it is the only native thing I'm using (no ports, no drivers, no NIFs, no BIFs, nothing). The Erlang version is R15B03.
Here is the current erlang:memory() output (slight traffic, app started on Feb 03):
[{total,378865650},
{processes,100727351},
{processes_used,100489511},
{system,278138299},
{atom,1123505},
{atom_used,1106100},
{binary,4493504},
{code,7960564},
{ets,489944},
{maximum,402598426}]
This is a 64-bit system. As you can see, the "system" section is at ~270 MB and "processes" is at around 100 MB (which drops to ~16 MB during the night).
It seems that I've found the issue.
I have a "process_killer" gen_server where processes can subscribe for periodic GC or kill. Its subscribe functions are called on each message received by some processes to postpone the GC/kill (something like re-arm).
This process performs an erlang:monitor, if not already monitoring, to catch a dead process and remove it from the watch list. If I comment out the re-subscription line on each handled message, the "system" area seems to behave normally. That means it is a bug in my process_killer that leaks monitor refs (remember you can call erlang:monitor multiple times, and each call creates a new reference).
I was led to this idea because I tested a simple module that called erlang:monitor in a loop, and I saw the "system" area grow by ~13 bytes on each call.
The workers themselves were OK because they would die anyway, taking their monitors along with them. But there is one long-running process (starts with the app, stops with the app) that dispatches all the messages to the workers and was calling the GC re-arm on each received message, so we're talking about tens of thousands of monitors created per hour and never released.
I'm writing this answer here for future reference.
TL;DR: make sure you are not leaking monitor refs on a long-running process.
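A minimal Erlang sketch of the fix, assuming the watch list is kept in the server state (module and function names are illustrative): only monitor a pid that is not already being watched, so re-arming cannot pile up references.

-module(process_killer_sketch).
-export([subscribe/2, handle_down/3]).

-record(state, {monitors = dict:new()}).

%% Re-arm: only create a monitor when the pid is not already watched.
subscribe(Pid, #state{monitors = Mons} = State) ->
    case dict:is_key(Pid, Mons) of
        true ->
            State;  % already monitored: do NOT call erlang:monitor again
        false ->
            Ref = erlang:monitor(process, Pid),
            State#state{monitors = dict:store(Pid, Ref, Mons)}
    end.

%% On a 'DOWN' message the monitor is already released; just forget the pid.
handle_down(_Ref, Pid, #state{monitors = Mons} = State) ->
    State#state{monitors = dict:erase(Pid, Mons)}.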
I'm using the Scripting Bridge to query iTunes from my Cocoa application. Sometimes iTunes pops up a window (e.g. if an iPod needs updating), and while that popup window is open I can't get any information from iTunes. So if I request information from iTunes when it's in this state, my application completely locks up until that popup window is dismissed.
So I need some mechanism where I can ask iTunes something simple from a separate thread to see if I can get a response from it; if that separate thread doesn't receive a response within a short period of time, my main thread will just kill that thread and thus know not to query iTunes at that particular time.
Any ideas on how to create such a mechanism? I searched for ways to kill a thread but haven't found any.
Your problem has nothing to do with threads; it's that your timeout is too long. Whatever you're doing should fail after about a minute.
To fix this, send a setTimeout: message to the SBApplication object, passing the amount of time you want it to wait. The value is in ticks, of which there are exactly 60 per second.
(Some sources say 60.15, and Apple's own docs say "approximately" 60, but I just measured ten minutes' worth of TickCount, and the result of the division by 600 seconds is exactly 60.0. The code I used:
NSLog(@"Ticks per second: %f", (end - start) / (60.0 * numMinutes));
where end and start are results from TickCount.)
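A minimal Objective-C sketch of setting that timeout; the two-second value and the exact queries you make afterwards are illustrative:

#import <ScriptingBridge/ScriptingBridge.h>

// Cap how long Scripting Bridge waits for iTunes before giving up.
SBApplication *iTunes =
    [SBApplication applicationWithBundleIdentifier:@"com.apple.iTunes"];
[iTunes setTimeout:2 * 60];  // timeout is in ticks, 60 per second = ~2 seconds

// Subsequent queries now fail after ~2 seconds instead of ~1 minute
// when iTunes is blocked by a modal dialog.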
Check out NSOperation/NSOperationQueue.