I started developing a new Windows 10 image for our WDS. I know we are a little late, still running W7... but I have a problem concerning W32TM.
We had problems adding computers to the domain after deploying an image to a new machine because of a time difference; many of the new computers are off by days. So before joining a computer to the domain I scripted a time sync in a batch file: I started the W32Time service, added our local time server and did a w32tm /resync /force.
On W10 the force no longer works and I get a message that the time difference is too big. I searched the Internet and found advice to edit the registry to raise the correction limit from 54000 seconds to 4294967295 seconds, but I still get the same error message.
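For reference, the batch boils down to something like the lines below (the time-server name is a placeholder, and I am assuming the registry tip refers to the MaxPosPhaseCorrection and MaxNegPhaseCorrection values under the W32Time Config key, which default to 54000 seconds):

net start w32time
w32tm /config /manualpeerlist:timesrv01 /syncfromflags:manual /update
rem correction limit raised as suggested on the Internet (value names assumed)
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config" /v MaxPosPhaseCorrection /t REG_DWORD /d 4294967295 /f
reg add "HKLM\SYSTEM\CurrentControlSet\Services\W32Time\Config" /v MaxNegPhaseCorrection /t REG_DWORD /d 4294967295 /f
w32tm /resync /force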
So I looked into the event log and found something interesting. The error ID is 34 and the text makes no sense. It is in German; translated, the error message before I made the registry change was:
The time service has detected that the system time must be changed by 82597 seconds. The time service can change the system time by a maximum of 54000 seconds.
So this is telling me that 82597 seconds is more than the maximum of 54000 seconds. That made sense, but after the registry change it told me:
The time service has detected that the system time must be changed by 82597 seconds. The time service can change the system time by a maximum of 4294967295 seconds.
So 82597 seconds won't fit into 4294967295... I'm not that good at math, but eeehhhh.
Now it gets even stranger. I opened the Date and Time settings (or whatever the app is called in English), set the time manually and changed the date so the difference fits within the 15 h / 54000 s limit. When I switched back to automatic, it jumped back to yesterday... Why? The time zone is correctly set to UTC+01:00.
Next step: the BIOS... I set the date and time correctly in the BIOS, booted W10 and finally could sync the time... using the GUI. But then I tested the script and noticed it gives the same error, and the date and time went off by 27 hours again. The offset makes no sense either: right now it is 12:13 and the time on the machine says yesterday 8:39. I did another restart and BIOS change to demonstrate this to a colleague, and now the difference has changed: time now 12:25, time on the machine yesterday 8:29.
I also tested another time server, with the same result. Now I am at my wit's end and need some ideas from others.
We made our own board with an ESP32-S3 (FN8 model with 8 MB internal flash). We see a bootloop when we first turn it on. After loading bootloader.bin (and many different *.bin files) we still see the bootloop and get the following messages on the COM port. We use esptool.exe from the Windows CMD prompt to load the *.bin files.
ESP-ROM:esp32s3-20210327
Build:Mar 27 2021
rst:0x7 (TG0WDT_SYS_RST),boot:0x8 (SPI_FAST_FLASH_BOOT)
Saved PC:0x40043ac8
SPIWP:0xee
mode:DIO, clock div:1
load:0x3fcd0108,len:0x1634
load:0x403b6000,len:0xe74
load:0x403ba000,len:0x31c8
Checksum failure. Calculated 0x8e stored 0xde
ets_main.c 329
... (loops) ...
How can we solve this problem?
Any help is appreciated.
Thank You
We have tested many *.bin files (bootloader.bin, firmware.bin, combined.bin, helloworld-esp32s3.bin, bootloader_esp32s3.bin, etc.) to get rid of the bootloop, but no solution so far.
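For comparison, a typical ESP32-S3 erase-and-flash sequence with esptool looks roughly like this (the COM port and file names are placeholders; the offsets are the ESP-IDF defaults for the S3, i.e. bootloader at 0x0, partition table at 0x8000, application at 0x10000):

esptool.exe --chip esp32s3 --port COM5 erase_flash
esptool.exe --chip esp32s3 --port COM5 write_flash 0x0 bootloader.bin 0x8000 partition-table.bin 0x10000 firmware.bin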
Status: the problem has lessened, but compared to other users' reports it persists.
I have moved to UE 4.27.0 and the startup time dropped from 11 minutes (on 4.26.2) to 6 minutes! (RAM usage dropped too!) But that doesn't compare to the "almost instant" startup other people report...
It is not compiling anything, not even shaders; this is about the 6th time I have opened this one project.
Should I try disabling plugins? I'm new to UE and don't want to make it harder to use. Though, for example, I have nothing VR-related to test, so that could safely be disabled from the start.
HD READ SPEED? NO
I tested moving the whole UE4Editor engine folder (100 GB) to a 3x SSD stripe, but the UE4Editor startup time stayed the same. The HD it normally lives on is fast too, though not as fast as the 3x SSD stripe.
CPU USAGE? MAYBE; if it could use all 4 cores, would that fix it?
UE4Editor startup uses A SINGLE CORE ONLY. I can confirm with htop and the system monitor that only a single core is pegged at 100%, and the load hops between the 4 cores, so only one is at 100% at any time.
I tested the command-line parameter -USEALLAVAILABLECORES after the project path for UE4Editor, but nothing changed. I read that the option is ignored on some machines, so maybe if I patch how it is used it could work on mine?
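This is the kind of invocation I mean (the project path is just a placeholder):

./UE4Editor /path/to/MyProject.uproject -USEALLAVAILABLECORES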
GPU? no?
A report from someone with a weak integrated graphics card says it doesn't affect the startup time.
LOG for UE4Editor v4.27.0 showing the biggest remaining gaps ("..." means log lines omitted for readability; "!(interval in seconds)" is just an annotation to make the gaps easier to spot, with no lines omitted at those markers):
[2021.09.15-23.38.20:677][ 0]LogHAL: Linux SourceCodeAccessSettings: NullSourceCodeAccessor
!22s
[2021.09.15-23.38.42:780][ 0]LogTcpMessaging: Initializing TcpMessaging bridge
[2021.09.15-23.38.42:782][ 0]LogUdpMessaging: Initializing bridge on interface 0.0.0.0:0 to multicast group 230.0.0.1:6666.
!16s
[2021.09.15-23.38.58:158][ 0]LogPython: Using Python 3.7.7
...
[2021.09.15-23.39.01:817][ 0]LogImageWrapper: Warning: PNG Warning: Duplicate iCCP chunk
!75s
[2021.09.15-23.40.16:951][ 0]SourceControl: Source control is disabled
...
[2021.09.15-23.40.26:867][ 0]LogAndroidPermission: UAndroidPermissionCallbackProxy::GetInstance
!16s
[2021.09.15-23.40.42:325][ 0]LogAudioCaptureCore: Display: No Audio Capture implementations found. Audio input will be silent.
...
[2021.09.15-23.41.08:207][ 0]LogInit: Transaction tracking system initialized
!9s
[2021.09.15-23.41.17:513][ 0]BlueprintLog: New page: Editor Load
!23s
[2021.09.15-23.41.40:396][ 0]LocalizationService: Localization service is disabled
...
[2021.09.15-23.41.45:457][ 0]MemoryProfiler: OnSessionChanged
!13s
[2021.09.15-23.41.58:497][ 0]LogCook: Display: CookSettings for Memory: MemoryMaxUsedVirtual 0MiB, MemoryMaxUsedPhysical 16384MiB, MemoryMinFreeVirtual 0MiB, MemoryMinFreePhysical 1024MiB
SPECS:
I'm using Ubuntu 20.04.
My CPU has 4 cores at 3.6 GHz.
GeForce GT 710 with 1 GB.
Related question but for older UE4: https://answers.unrealengine.com/questions/987852/view.html
Unreal Engine needs a high-end PC with a lot of RAM, fast SSDs, a good CPU and a mid-range graphics card. First of all, there are always some shaders the engine needs to compile, and a lot of assets to load at startup. Since you're on Linux, you are probably using a self-compiled Unreal Engine build... not the best thing for a newbie, because it can cause several problems with load times, startup, compiling and a lot of other things. If this is your first time using Unreal, try it on Windows; everything is easier there.
I do a variety of different kinds of data analysis and numerical simulation on my custom-built Ubuntu machine using custom-written programs that sometimes must run for days or even weeks. Some of those programs have been in Fortran, some in Python, some in C; there is literally zero commonality between these programs except that they run a long time and do a lot of disk i/o. Most are single-thread.
The typical execution command line looks like
./myprog &> myprog.log &
If an ordinary runtime error occurs, any buffered program output and the error message both faithfully appear in myprog.log and the logfile is cleanly closed. But what's been happening instead in many cases is that the program simply quits in mid-stream -- usually after half a day to a day or so, without any further output to the log file. It's like the program had been randomly hit with a 'kill -9'.
I don't know why this is happening, and it seems to be specific to this particular machine (I have been doing similar work for 30 years and never experienced this before). The operating system itself seems rock-stable; it has been rebooted only rarely over the past couple years for specific reasons like updates. It's only my longer-running user processes that seem to die abruptly like this with no accompanying diagnostic.
Not being a system-level expert, I'm at a loss for how to diagnose what's going on. Right now, my only option is to regularly check whether my program is still running and restart it if necessary.
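A minimal sketch of such a check-and-restart wrapper is below (myprog is a placeholder). Logging the exit status should at least hint at what happened, since the shell reports 128 plus the signal number, so a status of 137 would mean the process really was hit with a SIGKILL.

#!/bin/bash
# sketch: rerun myprog until it exits cleanly, recording each exit status
while true; do
    ./myprog &> myprog.log
    status=$?
    echo "$(date): myprog exited with status $status" >> myprog.exits.log
    [ "$status" -eq 0 ] && break    # 137 = 128+9, i.e. SIGKILL
    sleep 10
done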
System details:
Ubuntu 18.04.4 LTS
Linux kernel: 4.15.0-39-generic
CPU: AMD Ryzen Threadripper 1950x
UPDATE: Since dmesg was mentioned, here are some representative messages, which I have no idea how to interpret. The UFW BLOCK messages are by far the most numerous, but there are also a fair number of the ata6 messages, which seem to have something to do with the SATA drive. Could this be relevant?
[5301325.692596] audit: type=1400 audit(1594876149.572:218): apparmor="DENIED" operation="open" profile="/usr/sbin/cups-browsed" name="/usr/share/locale/" pid=19663 comm="cups-browsed" requested_mask="r" denied_mask="r" fsuid=0 ouid=0
[5352288.689739] ata6.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
[5352288.689753] ata6.00: cmd a0/00:00:00:08:00/00:00:00:00:00/a0 tag 14 pio 16392 in
Get event status notification 4a 01 00 00 10 00 00 00 08 00
res 40/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x4 (timeout)
[5352288.689756] ata6.00: status: { DRDY }
[5352288.689760] ata6: hard resetting link
[5352289.161877] ata6: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[5352289.166076] ata6.00: configured for PIO0
[5352289.166635] ata6: EH complete
[5353558.066052] [UFW BLOCK] IN=enp5s0 OUT= MAC=10:7b:44:93:2f:58:b4:0c:25:e0:40:12:08:00 SRC=172.105.89.161 DST=144.92.130.162 LEN=40 TOS=0x00 PREC=0x00 TTL=243 ID=50780 PROTO=TCP SPT=58944 DPT=68 WINDOW=1024 RES=0x00 SYN URGP=0
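For completeness, a simple way to check this same log for OOM-killer activity (in case that is what is killing the jobs) would be something like:

dmesg -T | grep -i -E "out of memory|oom|killed process"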
I'm trying to get actual NTP drift on Macs connected to a local NTP server.
When reading the /var/db/ntp.drift file I get -37.521, which according to the PPM-to-milliseconds conversion gives about -3241 ms of drift per day.
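If I understand it right, the drift file stores a frequency error in parts per million (microseconds per second), so that figure is the error accumulated over a full day:

echo "37.521 * 86400 / 1000" | bc -l
# 3241.8144 ms per day, i.e. about 3.24 s/day if never corrected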
When using ntpq -c lpeer I get something like this:
remote refid st t when poll reach delay offset jitter
==============================================================================
*172-1-1-5.light 164.67.62.212 2 u 57 64 377 199.438 38.322 29.012
which means 38.322ms of drift.
Finally, sntp 172.1.1.5 outputs this:
2016 Jan 21 18:41:45.248591 +0.019244 +/- 0.022507 secs
which means 19.244ms of drift.
I'm confused: which of these approaches gives the accurate NTP drift?
Have a look at ntpq -pcrv; that should give you all the info and more. If you need any of the output explained, edit your question and we will try to help you out.
Remember that drift is specific to your box. It looks like your NTP server is either far away or you have a poor network link (based on your delay time). You might want to try a closer NTP server.
My system is suffering from a high timer resolution (NtQueryTimerResolution returns 0.5ms).
Maximum timer interval: 15.600 ms
Minimum timer interval: 0.500 ms
Current timer interval: 0.500 ms
Some process must be calling NtSetTimerResolution with a value of 5000 (0.5ms), but how can I determine which one? I saw Intel has a tool called Battery Life Analyzer that shows the current timer resolution per process, but that tool is only available to Intel partners. Is there another tool or a way to see it via WinDbg? Note: It seems to happen at boot time as setting a breakpoint isn't working (the resolution is already high when the debugger starts).
I found that Windows 7 keeps track of timer resolution per process in the _EPROCESS kernel structure.
With debugging enabled (boot with /debug) it is possible to browse the ExpTimerResolutionListHead list with windbg (run windbg -kl) and extract timer information like this:
lkd> !list "-e -x \"dt nt!_EPROCESS #$extret-##(#FIELD_OFFSET(nt!_EPROCESS,TimerResolutionLink)) ImageFileName UniqueProcessId SmallestTimerResolution RequestedTimerResolution\" nt!ExpTimerResolutionListHead"
In my case however the process ID was NULL (probably because a driver made the request), and I still couldn't figure out which driver it was.
The only way I know of and have used so far is to inject into each running process and, inside that process, loop over each increased resolution (values 1-15), call timeEndPeriod for that resolution, and check whether the call returns TIMERR_NOCANDO or TIMERR_NOERROR (note: these return values are NOT simply false and true). If it returns TIMERR_NOERROR, conclude that the program is using that period, and then call timeBeginPeriod again to restore the resolution the program originally requested.
Unfortunately, this method does not detect the 0.5 ms timer resolutions that can be set via the undocumented NtSetTimerResolution function.
If you want to continuously monitor new timer-resolution requests, then hooking calls to the undocumented NtSetTimerResolution function in ntdll.dll is the way I currently use (the function's signature can be taken, for example, from here).
Unfortunately, hooking does not detect resolutions that were requested before the hook was installed, so you need to combine it with the timeEndPeriod trick above; note also that 0.5 ms requests made before hooking remain undetected.
And I agree, this method seems cumbersome. Moreover, it is a bit intrusive, since it modifies the state of the process, and it assumes that you are able to inject into all running processes.
If anybody has better methods, I would be interested in knowing about them too.
Input
You can run the following command in an Administrative CMD prompt:
c:\temp> powercfg -energy duration 5
This will create a report called: C:\temp\energy-report.html
This report will show you which processes have changed the clock latency/resolution on your computer. Normally these are RTC (Real-Time Communication) applications, but as you have noticed it can also be Chrome and other applications.
Output
An example of the output looks like this (translated from German; sorry, I don't have access to an English client at the moment).
First Statement in Report: Something has changed
Platform Timer Resolution:Platform Timer Resolution
The default platform timer resolution is 15.6 ms (15625000 ns) and should be used whenever the system is idle. If the timer resolution is increased, processor power management technologies may not be effective. The increased timer resolution can be caused by multimedia playback or graphics animations.
Current Timer Resolution (100ns units) 10000 <<=== CURRENT SETTING
Maximum Timer Period (100ns units) 156250 <<== DEFAULT SETTING
Second Statement in Report: The Culprit
Platform Timer Resolution:Outstanding Timer Request
A program or service has requested a timer resolution smaller than the platform's maximum timer resolution.
Requested Period 10000 <<== Requested Clock Latency
Requesting Process ID 12592 <<== Process ID of application requesting different Clock Latency
Requesting Process Path \Device\HarddiskVolume4\Program Files (x86)\C4B\XPhone Connect Client\C4B.XPhone.Commander.exe <<== The culprit
These pieces of information may be separated in the report, with entries for other modules in between the individual blocks, but armed with the information above you should be able to find the culprit.