I'm trying to get actual NTP drift on Macs connected to a local NTP server.
When I read the /var/db/ntp.drift file I get -37.521, which, converting PPM to milliseconds, works out to roughly -3241 ms of drift per day.
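To make the unit conversion explicit, here is a quick sketch of the arithmetic (1 PPM of frequency error means the clock gains or loses 1 µs per second):

/* ppm_to_ms.c - the arithmetic behind the figure above. */
#include <stdio.h>

int main(void) {
    double ppm = -37.521;                           /* value from /var/db/ntp.drift */
    double ms_per_day = ppm * 1e-6 * 86400 * 1000;  /* fraction * seconds/day * ms/s */
    printf("%.3f PPM = %.0f ms per day\n", ppm, ms_per_day);  /* about -3242 ms/day */
    return 0;
}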
When using ntpq -c lpeer I get something like this:
remote refid st t when poll reach delay offset jitter
==============================================================================
*172-1-1-5.light 164.67.62.212 2 u 57 64 377 199.438 38.322 29.012
which would mean 38.322 ms of drift.
Finally, sntp 172.1.1.5 outputs this:
2016 Jan 21 18:41:45.248591 +0.019244 +/- 0.022507 secs
which would mean 19.244 ms of drift.
I'm confused: which of these approaches gives the accurate NTP drift?
Have a look at ntpq -pcrv; that should give you all the info and more. If you need any of the output explained, edit your question and we'll try to help you out.
Remember, drift is specific to your box: the drift file records your clock's frequency error in PPM, while the figures you quoted from ntpq and sntp are offsets (how far your clock is from the server right now), which is a different quantity. It also looks like your NTP server is either far away or on a poor network link (based on your delay time). You might want to try a closer NTP server.
Status: the problem has lessened, but compared to other users' reports it persists.
I have moved to UE 4.27.0 and the startup time dropped from 11 minutes (v4.26.2) to 6 minutes! (RAM usage dropped too!) But that doesn't compare to the "almost instant" startup other people report...
It is not compiling anything, not even shaders; this is about the 6th time I've run it for the same project.
Should I try disabling plugins? I'm new to UE and don't want to make it harder to use. Though, for example, I have nothing VR-related to test, so that could safely be disabled from the start.
HD READ SPEED? NO
I tested moving the whole UE4Editor engine path (100 GB) to a 3x SSD stripe, but the UE4Editor startup time remained the same. The HDD where it normally lives is fast too, though not as fast as the 3x SSD.
CPU USAGE? MAYBE. Could using all 4 cores solve it?
UE4Editor startup uses A SINGLE CORE ONLY. I can confirm with htop and the system monitor: only a single core is ever at 100%, and the load hops between the 4 cores, so just one runs at 100% at a time.
I tested the -USEALLAVAILABLECORES command-line parameter after the project path for UE4Editor, but nothing changed. I read that the option is ignored on some machines, so maybe if I patch its handling it could work on mine?
GPU? NO?
A report about a (weak) integrated graphics card says it doesn't affect the startup time.
LOG for UE4Editor v4.27.0 showing the new biggest intervals ("..." means log lines omitted for readability; "!(interval in seconds)" is just a reading aid, and no lines are omitted around those markers):
[2021.09.15-23.38.20:677][ 0]LogHAL: Linux SourceCodeAccessSettings: NullSourceCodeAccessor
!22s
[2021.09.15-23.38.42:780][ 0]LogTcpMessaging: Initializing TcpMessaging bridge
[2021.09.15-23.38.42:782][ 0]LogUdpMessaging: Initializing bridge on interface 0.0.0.0:0 to multicast group 230.0.0.1:6666.
!16s
[2021.09.15-23.38.58:158][ 0]LogPython: Using Python 3.7.7
...
[2021.09.15-23.39.01:817][ 0]LogImageWrapper: Warning: PNG Warning: Duplicate iCCP chunk
!75s
[2021.09.15-23.40.16:951][ 0]SourceControl: Source control is disabled
...
[2021.09.15-23.40.26:867][ 0]LogAndroidPermission: UAndroidPermissionCallbackProxy::GetInstance
!16s
[2021.09.15-23.40.42:325][ 0]LogAudioCaptureCore: Display: No Audio Capture implementations found. Audio input will be silent.
...
[2021.09.15-23.41.08:207][ 0]LogInit: Transaction tracking system initialized
!9s
[2021.09.15-23.41.17:513][ 0]BlueprintLog: New page: Editor Load
!23s
[2021.09.15-23.41.40:396][ 0]LocalizationService: Localization service is disabled
...
[2021.09.15-23.41.45:457][ 0]MemoryProfiler: OnSessionChanged
!13s
[2021.09.15-23.41.58:497][ 0]LogCook: Display: CookSettings for Memory: MemoryMaxUsedVirtual 0MiB, MemoryMaxUsedPhysical 16384MiB, MemoryMinFreeVirtual 0MiB, MemoryMinFreePhysical 1024MiB
SPECS:
I'm using Ubuntu 20.04.
My CPU has 4 cores at 3.6 GHz.
GeForce GT 710, 1 GB.
Related question but for older UE4: https://answers.unrealengine.com/questions/987852/view.html
Unreal Engine needs a high-end PC with a lot of RAM, fast SSDs, a good CPU, and at least a mid-range graphics card. First of all, there are always some shaders the engine needs to compile, and a lot of assets to load at startup. Since you're on Linux, you are probably using a self-compiled Unreal Engine build... not the best thing to do for a newbie, because it may cause several problems with load times, startup, compiling, and a lot of other things. If this is your first time using Unreal, try it on Windows; everything is easier there.
I just installed the latest version of LIRC (0.10.1-5.2) on my Raspberry Pi 3, running Raspbian based on Debian Buster.
I am trying to get my Pi to take input from an IR remote using lirc.
I have made the necessary changes to these files:
/etc/lirc/lirc_options.conf
driver = default
device = /dev/lirc0
/boot/config.txt
dtoverlay=gpio-ir,gpio_in_pin=18,gpio_out_pin=17,gpio_in_pull=up
// I set mine up on GPIO pins 17 and 18 instead of 22 and 23
I have checked and cross-checked my circuit. Everything looks okay.
The challenge I'm facing right now is that when I test my IR receiver with the following command:
mode2 -d /dev/lirc0
Nothing happens. There's no output at all. No pulses recorded.
Has anyone else experienced this issue?
Any help would be much appreciated.
After spending a great amount of time trying to figure out how to solve this issue, I was finally able to resolve it.
So hopefully my answer will help someone else.
First things first, it's important to note that the infrared device-tree overlay has changed from lirc-rpi to gpio-ir.
Although I already had this change in my /boot/config.txt file, like below:
dtoverlay=gpio-ir,gpio_in_pin=18,gpio_out_pin=17,gpio_in_pull=up
// instead of dtoverlay=lirc-rpi
I just thought it was important to point out.
Since I am trying to get my Pi to take input from an IR remote using lirc, I decided to first test my IR sensor separately, to make sure it works.
To do that, I connected up the sensor like so:
Pin 1 is the output so we wire this to a visible LED and resistor
Pin 2 is ground
Pin 3 is VCC, connect to 3v3
You can find more detailed step-by-step instructions in this tutorial here, which also shows how to wire up your circuit.
During this test, my LED lit up each time I pointed a remote at the receiver, which gave me hope that it was working just fine.
The next step was to test the IR receiver on my raspberry pi, which is the challenge I had in the beginning.
I re-wired my circuit, this time:
Pin 1 is DATA, goes to RPi pin 12 (GPIO 18)
Pin 2 is GND, goes to RPI pin 6 (GROUND)
Pin 3 is POWER, goes RPi pin 1 (3v3)
Then I ran this command sudo /etc/init.d/lirc stop to make sure that service wasn't running.
I then ran the initial command mode2 -d /dev/lirc0 again, pressed random buttons on my remote pointed at the receiver, and voilà! I could now see pulses on the screen with each button press.
Like you, I managed to get all the way to receiving pulses/data on the RPi 3, but I seem to have a problem with the output.
I have a USB strip light and my RPi with the IR receiver, so I can monitor which captured data corresponds to which button pressed on the remote keypad. That works just fine.
However: if I push the ON button I get data; if I push the ON button again I get another set of data. The two sets of data don't match, with either mode2 or mode2 -r.
I get the feeling I'm missing a method to decode the output (see the decoding sketch below); I've noticed there are a huge number of manufacturers and they all have distinct code sets.
Here is one thread that exactly matches what I have (24-Key IR remote).
http://woodsgood.ca/projects/2015/02/13/rgb-led-strip-controllers-ir-codes/
However, I don't see the same code sets there.
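One plausible explanation, assuming the remote speaks the common NEC protocol like the RGB controllers in the linked thread (an assumption, not something the capture confirms): raw mode2 timings jitter by tens of microseconds from press to press, and holding a button emits short "repeat" frames, so two raw dumps rarely match number-for-number until you bucket the durations into protocol bits. A toy decoder sketch:

/* nec_decode.c - toy decoder for `mode2` output, assuming an NEC-protocol
 * remote; the 300 us tolerance is a rough guess.
 * Usage: mode2 -d /dev/lirc0 | ./nec_decode */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* true if a duration (in microseconds) is within 300 us of the target */
static int near(long us, long target) { return labs(us - target) < 300; }

int main(void) {
    char line[64], kind[16];
    long us;
    int bits = -1;                   /* -1 = not currently inside a frame */
    unsigned long code = 0;

    while (fgets(line, sizeof line, stdin)) {
        if (sscanf(line, "%15s %ld", kind, &us) != 2) continue;  /* skip banner lines */
        int pulse = (strcmp(kind, "pulse") == 0);

        if (pulse && near(us, 9000)) { bits = -1; code = 0; continue; }        /* header mark */
        if (!pulse && near(us, 4500)) { bits = 0; continue; }                  /* header space */
        if (!pulse && near(us, 2250)) { printf("repeat frame\n"); continue; }  /* key held down */
        if (bits < 0 || pulse) continue;          /* bit values live in the space lengths */
        if (us > 10000) { bits = -1; continue; }                               /* inter-frame gap */

        code = (code << 1) | (near(us, 1690) ? 1UL : 0UL);  /* ~560 us = 0, ~1690 us = 1 */
        if (++bits == 32) { printf("code: 0x%08lX\n", code); bits = -1; }
    }
    return 0;
}

With the jitter bucketed away like this, two presses of the same key should produce the same 32-bit code (or a repeat frame) if the protocol guess is right.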
Try changing the device: mode2 -d /dev/lirc1
I also faced this.
The best case would be a (debug) tool that runs in the background and tells me the name of the process or driver that breaks my system's latency requirement. Which tool is suitable? Do you have a short example of its usage for the following case?
Test case:
The oscilloscope measures the time between the trigger of a GPIO input and the response on a GPIO output. Usually the response time is 150µs. I trigger every 25ms.
My Linux user-space test program uses poll() and read()+write() to mirror the signal detected on the input back to an output as the response (a sketch of this loop follows below).
The Linux kernel is patched with the Preempt_rt patch.
Over a time span of hours I can see response-time peaks of up to 20 ms.
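For reference, a minimal sketch of the mirror loop described above, using the legacy sysfs GPIO interface; the pin numbers and the prior export/edge configuration are assumptions, not details of the actual test program:

/* gpio_mirror.c - mirror a GPIO input level to a GPIO output via sysfs.
 * Assumes beforehand: echo 17 > /sys/class/gpio/export (and 27),
 *                     echo both > /sys/class/gpio/gpio17/edge          */
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int in  = open("/sys/class/gpio/gpio17/value", O_RDONLY);
    int out = open("/sys/class/gpio/gpio27/value", O_WRONLY);
    if (in < 0 || out < 0) { perror("open"); return 1; }

    struct pollfd pfd = { .fd = in, .events = POLLPRI | POLLERR };
    char level;
    for (;;) {
        if (poll(&pfd, 1, -1) < 0) { perror("poll"); return 1; }  /* wait for an edge */
        lseek(in, 0, SEEK_SET);           /* sysfs requires re-reading from offset 0 */
        if (read(in, &level, 1) != 1) continue;
        write(out, &level, 1);            /* mirror '0'/'1' back out as the response */
    }
}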
The best real chance is to switch on tracing in the kernel configuration and build a Linux kernel with:
CONFIG_FTRACE=y
CONFIG_FUNCTION_TRACER=y
CONFIG_FUNCTION_GRAPH_TRACER=y
CONFIG_SCHED_TRACER=y
CONFIG_FTRACE_SYSCALLS=y
CONFIG_STACK_TRACER=y
CONFIG_DYNAMIC_FTRACE=y
CONFIG_FUNCTION_PROFILER=y
CONFIG_DEBUG_FS=y
then run your application until the weird things happen, using the trace-cmd tool:
trace-cmd start -b 10000 -e 'sched_wakeup*' -e sched_switch -e gpio_value -e irq_handler_entry -e irq_handler_exit /tmp/myUserApplication
and then stop tracing and extract a trace.dat file:
trace-cmd stop
trace-cmd extract
Load that trace.dat file in KernelShark and analyse the CPUs, threads, interrupts, kworker threads and user-space threads. It's great for seeing what is blocking the system.
I have to dash away from the computer frequently, and I want to trigger some commands to run when my iPhone is close enough to/far enough from my iMac (next to it vs. 2-3 metres away/on the other side of a wall). A couple of minutes' latency is fine.
Partial solution: proximity
I've downloaded reduxcomputing-proximity and it works, but it only triggers when the device goes into/out of Bluetooth range, and my desired range is much smaller.
(Proximity polls [IOBluetoothDevice -remoteNameRequest] to see whether the device is in Bluetooth range or not.)
Enhancement: rawRSSI
I've used [IOBluetoothDevice -rawRSSI] to get the RSSI when I am connected to the iPhone (when disconnected this just returns +127), but to save my iPhone's battery I'd rather avoid establishing a full Bluetooth connection.
Am I correct in thinking that maintaining a connection will consume more battery life than just polling every couple of minutes?
I've overridden Proximity's isInRange method here to give me a working solution that's probably battery-intensive compared to the previous remoteNameRequest: method:
- (BOOL)isInRange {
    BluetoothHCIRSSIValue RSSI = 127; /* valid range: -127 to +20; +127 = not connected */
    if (device) {
        if (![device isConnected]) {
            [device openConnection];  /* synchronous connect */
        }
        if ([device isConnected]) {
            RSSI = [device rawRSSI];
            [device closeConnection]; /* drop the link again to spare the phone's battery */
        }
    }
    return (RSSI >= -60 && RSSI <= 20); /* treat a reasonably strong signal as "in range" */
}
(Proximity uses synchronous calls - if and when I adapt it to my needs I will make it asynchronous, but for now that's not important.)
Under Linux: l2ping - inquiry scan?
This SO post references getting an RSSI during an "inquiry scan", which sounds like what I want, but it talks about using the Linux BlueZ library, whilst I am on a Mac - I'd rather do this without straying too far if possible! (I have considered using a VM with USB pass-through to hook up a second Bluetooth device... but a simpler solution would be preferable!)
I see there is an IOBluetoothDeviceInquiry class, but I am not sure whether it is useful to me. I don't intend to learn the Bluetooth protocol just for this simple problem!
The commands
For interest, and not particularly relevant to the solution, here are the AppleScripts I currently trigger when
in range:
tell application "Skype"
send command "SET USERSTATUS ONLINE" script name "X"
do shell script "afplay '/System/Library/Sounds/Blow.aiff'"
end tell
out of range:
tell application "Skype"
send command "SET USERSTATUS AWAY" script name "X"
do shell script "afplay '/System/Library/Sounds/Basso.aiff'"
end tell
Though these are likely to get longer!
You are correct that making a connection will cost more energy. However, I'm not aware of any APIs on Mac OS X that give you access to the RSSI from inquiry-scan packets. You could get at the raw packets from your BT adapter using the Mac OS X PacketLogger; see this post: Bluetooth sniffer - preferably mac osx
You could programmatically put your device into discovery every couple of minutes, capture the inquiry-scan packets with PacketLogger, and parse out the RSSI. You can use Wireshark to help you understand how to decode the packets and find the RSSI.
Your simplest option is probably to just periodically create a connection, measure RSSI, and then tear down the connection.
In terms of tradeoffs for your use case: doing a continuous or periodic inquiry will consume the same or even a bit more energy than a periodic connect / read RSSI / disconnect. Depending on the use case it may be more efficient to keep the connection in a low-power mode (sniff mode with a 2.56 s interval) and remain connected while the device is in range, using RSSI to monitor proximity (though it is not accurate: interference from objects changes the RSSI drastically even when the device is nearby).
I'm working on a network-related project and I am using DTLS (TLS/UDP) to secure communications.
Reading the specifications for DTLS, I've noted that DTLS requires the DF flag (Don't Fragment) to be set.
On my local network, if I try to send a message bigger than 1500 bytes, nothing is sent; that makes perfect sense. On Windows, sendto() reports success, but nothing is actually sent.
I obviously cannot clear the DF flag manually, since it is mandatory for DTLS, and I'm not sure whether the 1500-byte limit (the MTU?) can change in some situations. I guess it can.
So my question is: is there a way to discover this limit using APIs?
If not, what would be the lowest possible value ?
My software runs under UNIX (Linux/Mac OS X) and Windows, so different solutions for each OS are welcome ;)
Many thanks.
There is a minimum MTU that must be supported - 576 bytes, including IP headers. So if you keep your packets below that, you don't have to worry about PMTU-D (that's what DNS does).
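If you do want to query the limit through an API, Linux can report the kernel's current path-MTU estimate for a connected UDP socket. A Linux-specific sketch (the peer address and port are placeholders; other OSes expose this differently):

/* pmtu_query.c - ask Linux for the current path MTU on a connected UDP socket. */
#include <arpa/inet.h>
#include <netinet/in.h>   /* IP_MTU_DISCOVER, IP_PMTUDISC_DO, IP_MTU */
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer = { .sin_family = AF_INET, .sin_port = htons(4433) };
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);    /* placeholder peer */

    int pmtu = IP_PMTUDISC_DO;                          /* set DF, as DTLS requires */
    setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu, sizeof pmtu);
    connect(fd, (struct sockaddr *)&peer, sizeof peer); /* IP_MTU needs a connected socket */

    /* After traffic has flowed (or a send failed with EMSGSIZE), query the estimate. */
    int mtu = 0; socklen_t len = sizeof mtu;
    if (getsockopt(fd, IPPROTO_IP, IP_MTU, &mtu, &len) == 0)
        printf("current path MTU: %d bytes (including IP header)\n", mtu);
    close(fd);
    return 0;
}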
You probably need to "auto-tune" it by sending a range of packet sizes to the target and seeing which arrive. Think binary search... (see the sketch below)
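A sketch of that binary-search idea (Linux-flavoured, peer address is a placeholder; a real prober must have the peer acknowledge each size, since only end-to-end delivery proves the path MTU, whereas this skeleton only reacts to the local stack's EMSGSIZE):

/* mtu_probe.c - binary-search the largest UDP payload the stack accepts with
 * DF set. Replace try_send()'s success test with a real echo/ack check. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

/* Hypothetical helper: does a datagram of `size` bytes go out (and, in a real
 * prober, come back)? Here "accepted" just means send() did not fail EMSGSIZE. */
static int try_send(int fd, size_t size) {
    static char buf[65536];   /* zero padding */
    return send(fd, buf, size, 0) >= 0;
}

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in peer = { .sin_family = AF_INET, .sin_port = htons(4433) };
    inet_pton(AF_INET, "192.0.2.1", &peer.sin_addr);   /* placeholder peer */
    int pmtu = IP_PMTUDISC_DO;                         /* set DF so oversized sends fail */
    setsockopt(fd, IPPROTO_IP, IP_MTU_DISCOVER, &pmtu, sizeof pmtu);
    connect(fd, (struct sockaddr *)&peer, sizeof peer);

    /* Payload bounds: 548 = 576 minimum minus 28 bytes of IPv4+UDP headers;
     * 65507 = maximum possible UDP payload. */
    size_t lo = 548, hi = 65507, best = 0;
    while (lo <= hi) {
        size_t mid = (lo + hi) / 2;
        if (try_send(fd, mid)) { best = mid; lo = mid + 1; }  /* fits: search higher */
        else                   { hi = mid - 1; }              /* too big: search lower */
    }
    printf("largest accepted payload: %zu bytes\n", best);
    close(fd);
    return 0;
}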