POSIX sleep(), usleep() and nanosleep() don't work but Sleep() does under Cygwin gcc

When I use sleep(), usleep() or nanosleep(), the test program seems to have no delay at all (I was trying 10 seconds), but when I use the Windows Sleep() function, it delays correctly.
What am I missing?
Thanks in advance.
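For reference, a minimal POSIX test case (a sketch, assuming a plain Cygwin console program built with gcc) that should block for roughly twenty seconds in total:

#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
    struct timespec ts = { 10, 0 };    /* 10 seconds, 0 nanoseconds */

    puts("sleep(10)...");
    sleep(10);                         /* POSIX: argument is in seconds */

    puts("nanosleep(10 s)...");
    nanosleep(&ts, NULL);              /* POSIX: seconds + nanoseconds */

    puts("done");
    return 0;
}

If even this returns with no delay, the problem is likely in the toolchain or the runtime being linked rather than in how the calls are used.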

Related

Logitech Lua reliable Sleep

Introduction
I am writing a Lua script for my Logitech mouse. The Logitech Lua API has this documentation.
My script moves the mouse every x milliseconds to draw a pattern. My problem is that the Sleep(x) function of this Lua API is very inaccurate. I have read that it takes time (a couple of milliseconds) for it to get a thread, and that time adds to the execution time of the code itself. This makes it useless for timing in milliseconds.
Question
Do you know a workaround? Is there a more capable way of measuring milliseconds than the Sleep(x) function?
I also want to note that on Windows 10 version 1909 and below it was much, much more accurate. Something changed in Windows 10 version 2004 (around August of last year) that made it inaccurate, so I need to find a workaround for this.
My Code
Here is a snippet from my code:
PressMouseButton(1)
--1
MoveMouseRelative(-26, 36)
Sleep(127)
--2
MoveMouseRelative(2, 36)
Sleep(127)
--3
MoveMouseRelative(-36, 32)
Sleep(127)
--4
MoveMouseRelative(-33, 30)
Sleep(127)
--5
MoveMouseRelative(-11, 38)
Sleep(127)
ReleaseMouseButton(1)
This does not work on its own, but you can see here how I want to use the function.
Thank you for your help!
Sleep is not for measuring milliseconds. It pauses your script for a certain amount of time.
From what I can see, it is not possible to load any libraries from a Logitech script.
So you can only use Sleep or run a loop to delay.
Here is a high-precision delay for Logitech scripts, accurate to 1 ms. High precision is needed because on system versions after Windows 10 1909, a Logitech script's Sleep(1) actually takes about 15.6 ms:
function Sleep3(time)
    local a = GetRunningTime() -- milliseconds since the script started
    while GetRunningTime() - a < time do
        -- busy-wait; burns CPU, but is not subject to the ~15.6 ms granularity
    end
end

Is it safe to call getrawmonotonic() in a Linux interrupt handler?

I did some research online, and people suggest using getrawmonotonic to get a timestamp in the kernel. Now I need to get a timestamp in an ISR, and I'm wondering whether that is safe. The Linux kernel version is 2.6.34.
Thanks
Yes, it is safe to use getrawmonotonic in an interrupt handler.
The implementation of that function (in kernel/time/timekeeping.c) uses seqlock functionality (the read_seqbegin()/read_seqretry() calls), which is interrupt-safe, plus a timespec_add_ns() call, which is just arithmetic.
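For illustration, a minimal sketch of doing this in an interrupt handler on a 2.6-era kernel (my_isr and the log message are hypothetical placeholders):

#include <linux/interrupt.h>
#include <linux/kernel.h>
#include <linux/time.h>

static irqreturn_t my_isr(int irq, void *dev_id)
{
        struct timespec ts;

        getrawmonotonic(&ts);   /* seqlock read + arithmetic, safe in interrupt context */
        printk(KERN_DEBUG "irq %d at %ld.%09ld\n", irq, ts.tv_sec, ts.tv_nsec);
        return IRQ_HANDLED;
}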

CPU Frequency in AVR, CodeBlocks and Proteus

Well, I'm using Code::Blocks as the IDE and WinAVR as the compiler.
F_CPU is selected as 8000000UL.
I'm writing code for Atmega32.
But when I run my compiled code (the *.hex file) in the Proteus Design Suite (ISIS), _delay_ms(1000) doesn't give a 1-second delay. I don't know whether this is right, but I've selected the CKSEL fuses as (0100) Int. RC 8 MHz in Edit Component.
What's wrong?
Have you tried setting the compiler optimization to something other than -O0? From the avr-libc docs regarding the delay* functions:
In order for these functions to work as intended, compiler
optimizations must be enabled, and the delay time must be an
expression that is a known constant at compile-time.
Using PWM for servo control, I figured out that even with this Internal 8 MHz setting, Proteus actually simulates with a 1 MHz clock. If you change F_CPU to 1000000UL you will see that the delay works just fine.
It's just a Proteus simulation lag. On the real device your delay function will work properly. To simulate time delays, a good choice is AVR Studio.
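To put the two points together, here is a minimal avr-libc blink sketch (assumptions: an LED on PB0 and optimization enabled, e.g. -Os). F_CPU must be defined before util/delay.h is included, and it must match the clock the device, or the simulator, actually runs at:

#define F_CPU 8000000UL          /* must match the real clock; try 1000000UL for Proteus */
#include <avr/io.h>
#include <util/delay.h>

int main(void)
{
    DDRB |= _BV(PB0);            /* PB0 as output */
    for (;;) {
        PORTB ^= _BV(PB0);       /* toggle the pin */
        _delay_ms(1000);         /* needs optimization enabled and a compile-time constant */
    }
}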

How can I get a pulse in win32 Assembler (specifically nasm)?

I'm planning on making a clock. An actual clock, not something for Windows. However, I would like to be able to write most of the code now. I'll be using a PIC16F628A to drive the clock, and it has a timer I can access (actually, it has three, in addition to its built-in clock). Windows, however, does not appear to have anything like that, which makes building a clock a bit hard, since I need to know how much time has passed so I can update the current time. So I need to know how I can get a pulse (1 Hz, 1 kHz, it doesn't really matter, as long as I know how fast it is) in Windows.
There are many timer objects available in Windows. Probably the easiest to use for your purposes would be the Multimedia Timer, but that's been deprecated. It would still work, but Microsoft recommends using one of the new timer types.
I'd recommend using a threadpool timer if you know your application will be running under Windows Vista, Server 2008, or later. If you have to support Windows XP, use a Timer Queue timer.
There's a lot to those APIs, but general use is pretty simple. I showed how to use them (in C#) in my article Using the Windows Timer Queue API. The code is mostly API calls, so I figure you won't have trouble understanding and converting it.
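As a plain C illustration of the Timer Queue approach (the callback name tick and the one-second period are just for this sketch; error handling is trimmed):

#include <windows.h>
#include <stdio.h>

static VOID CALLBACK tick(PVOID param, BOOLEAN timerFired)
{
    printf("tick\n");            /* runs on a thread-pool thread */
}

int main(void)
{
    HANDLE timer;

    /* NULL = default timer queue; first fire after 1000 ms, then every 1000 ms */
    if (!CreateTimerQueueTimer(&timer, NULL, tick, NULL, 1000, 1000, WT_EXECUTEDEFAULT))
        return 1;

    Sleep(10000);                /* let it pulse for ten seconds */
    DeleteTimerQueueTimer(NULL, timer, INVALID_HANDLE_VALUE);   /* blocks until callbacks finish */
    return 0;
}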
The LARGE_INTEGER is just an 8-byte block of memory that's split into a high part and a low part. In assembly, you can define it as:
MyLargeInt     equ $    ; label for the full 64-bit value
MyLargeIntLow  dd 0     ; low 32 bits
MyLargeIntHigh dd 0     ; high 32 bits
If you're looking to learn ASM, just do a Google search for [x86 assembly language tutorial]. That'll get you a whole lot of good information.
You could use a waitable timer object. Since Windows is not a real-time OS, you'll need to make sure you set the period long enough that you won't miss pulses. A tenth of a second should be safe most of the time.
Additional:
The const LARGE_INTEGER you need to pass to SetWaitableTimer is easy to implement in NASM; it's just an eight-byte constant:
period: dq -1000000 ; due time of 100 ms, in 100-ns units (negative = relative to now)
Pass the address of period as the second argument (the due time) to SetWaitableTimer, and pass the repeat period in milliseconds (100 for ten pulses a second) as the third argument.
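In C the same idea looks like the sketch below; translating it to NASM is mostly a matter of pushing the same arguments, since these are ordinary stdcall Win32 calls:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);   /* auto-reset timer */
    LARGE_INTEGER due;
    int i;

    due.QuadPart = -1000000LL;   /* first fire in 100 ms: 100-ns units, negative = relative */
    SetWaitableTimer(timer, &due, 100, NULL, NULL, FALSE);   /* then every 100 ms */

    for (i = 0; i < 10; i++) {
        WaitForSingleObject(timer, INFINITE);                /* one pulse per timeout */
        printf("pulse %d\n", i);
    }
    CloseHandle(timer);
    return 0;
}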

Windows kernel equivalent to FreeBSD's ticks or Linux's jiffies in the latest WDK

I am working on a Windows NDIS driver, using the latest WDK, that needs a millisecond-resolution kernel time counter that is monotonically non-decreasing. I looked through MSDN as well as the WDK documentation but found nothing useful except something called TsTime, and I am not sure whether that is a made-up name from an example or an actual variable. I am aware of NdisGetCurrentSystemTime, but I would like something lower-overhead, like ticks or jiffies, unless NdisGetCurrentSystemTime itself is low-overhead.
It seems that there ought to be a low-overhead global variable that stores some sort of kernel time counter. Does anyone have insight into what this may be?
Use KeQueryTickCount. And perhaps use KeQueryTimeIncrement once to be able to convert the tick count into a more meaningful time unit.
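For example, a small sketch of a helper (the name uptime_ms is hypothetical, and a WDM-style ntddk.h build is assumed) that combines the two calls into a millisecond count:

#include <ntddk.h>

static ULONGLONG uptime_ms(void)
{
        LARGE_INTEGER ticks;
        ULONG increment = KeQueryTimeIncrement();   /* 100-ns units per tick */

        KeQueryTickCount(&ticks);
        return (ULONGLONG)ticks.QuadPart * increment / 10000;   /* 100-ns units -> ms */
}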
How about GetTickCount / GetTickCount64? (Check the requirements on the latter.)
