Reopening Chronicle Map gives checksum errors

Using Chronicle Map version 3.20.84
When reopening a Chronicle Map I get the following:
map.ChronicleMapBuilder - Checksum doesn't match, stored: -1805860448, should be from the entry bytes: 1297789250, key: 4-US9024941034, value:...
I have seen https://github.com/OpenHFT/Chronicle-Map/issues/198 which seems to exhibit the same problem I am having.
Over there it says to make sure you don't write to the map after it is closed. As of release 3.x of ChronicleMap there is a shutdown hook mechanism that closes the maps for you. Since shutdown hooks are called at an arbitrary point, I reasoned that I may have still been writing to the map after the hook closed it, so I turned that feature off and now close the maps myself. Yet I am still getting checksum errors when trying to reopen previously closed maps.
Any idea what is going on?

Shutdown hooks should only be called by the JVM after all your threads have been killed.
I would check you aren't killing your process with kill -9
I would try a newer version, as this version is 2.5 years old, though I don't know of a fix for this in that time.
What sort of filesystem is the map stored on?
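For illustration, here is a minimal sketch of the explicit create/close/reopen cycle described in the question, assuming Chronicle Map 3.x. The file name, map name and sizing hints are made up for the example:

    import net.openhft.chronicle.map.ChronicleMap;

    import java.io.File;
    import java.io.IOException;

    public class PersistedMapExample {
        public static void main(String[] args) throws IOException {
            File file = new File("positions.dat");   // hypothetical file name

            // Create (or reopen) a persisted map; sizing hints are illustrative only.
            try (ChronicleMap<CharSequence, CharSequence> map = ChronicleMap
                    .of(CharSequence.class, CharSequence.class)
                    .name("positions")
                    .averageKey("4-US9024941034")
                    .averageValue("some representative value")
                    .entries(1_000_000)
                    .createPersistedTo(file)) {

                map.put("4-US9024941034", "some value");

            }   // try-with-resources closes the map here, before the JVM exits

            // Any put()/get() after this point would touch a closed map and can
            // leave the persisted file in an inconsistent state.
        }
    }

If a file has already been damaged by an unclean shutdown, the 3.x builder also has recovery variants of the open call (recoverPersistedTo / createOrRecoverPersistedTo, if I remember the names correctly) that rebuild the map from the entries that still pass their checksums; that is a repair tool, not a substitute for closing the map cleanly.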

Related

Diagnosing Windows program that hangs on startup

I have a new Win10 laptop. I've installed lots of software, including a 25-year-old Codewright editor that I've customized up the wazoo, and that I've been installing on all my machines for, well, 25 years. After working for a few days, it suddenly stopped coming up, and reinstalling it didn't fix it. On startup, it puts up a small splash window and normally opens the main window half a second later (that took more than 5 seconds 25 years ago). Now it's not using any CPU, and there's nothing I can do but kill the process.
In the past, I've occasionally got my system into a state where Codewright would hang on loading, due to some other program that hadn't terminated correctly, and it was unfrozen by killing off that other process. So that's reason to believe that Codewright is waiting at some global lock which some other malfunctioning software is holding. So I have two questions:
Does this ring a bell? Is there some known failure mode where a program that puts up a splash window and then switches to its main window can be blocked by something else going on in the system?
Is there a way to diagnose this, perhaps by finding out what system call it's hanging inside? I tried dtrace.exe, started Codewright, and then stopped tracing, and it produced a 3GB XML file, which is quite a haystack. There's a way to filter it by PID, but since this is a startup problem, I have no idea what the PID will be. Is there a better tool for doing this, or some more appropriate dtrace feature that I missed?
The comment about using the Task Manager to create a dump file actually led me to notice that there is an Analyze Wait Chain function there that I had never seen before, since I haven't used Task Manager much since I switched from Win7. This gave me exactly the answer I wanted. My editor was waiting for something that was being held by some NVIDIA GeForce Experience module. Since I don't use that, I uninstalled it, and I'm back up and running. Thanks for the tip.

Prevent my Windows app from causing the Windows Runtime Broker to run out of memory

When my Windows 10 app runs, it causes a process called Runtime Broker to start, which takes up a lot of memory.
I know my app isn't memory-hungry; it takes barely 80 MB of RAM to run. But from the time it starts, the memory used by Runtime Broker keeps increasing until the PC gets stuck.
When I kill that process, Windows force-closes my app.
I would have posted my source code here, if only I knew which part of the code is causing this to happen.
What are the possible technical reasons for this problem to happen, and what are the possible fixes in my code to prevent this?
Is there something wrong with my code, or is it some API that I am calling?
You can easily delete RuntimeBroker.exe, like any other file. I deleted RuntimeBroker.exe and Livecomm.exe by booting a live Linux DVD, mounting the C: drive, then simply navigating to the files and deleting them. Done!
RuntimeBroker seems to hold about 60 KB per file held via StorageFile objects. It's still a bad problem, and the only workaround is to avoid holding on to very many of these.
Microsoft just never does anything about this.
Update: Microsoft seems to have quietly ditched UWP. The replacement uses "WinUI" and is, at the moment, called the Windows App SDK. No more RuntimeBroker.exe.

Rails: free memory of Delayed Job (Active Record) without a process restart

It may be obvious, but I can't find a working use case for Delayed Job: because of how Ruby's garbage collector works, it doesn't free memory back to the OS, so sooner or later the delayed_job process takes up all the memory anyway, and the only remedy is to restart the delayed_job process.
But if I restart the delayed_job process while a task is currently running, that task will never be completed. There is probably some workaround to restart the task later, but that approach seems ugly to me.
I tried real jobs as well as a simple computation with no variables, symbols or references, so I don't think that "my code leaks". Still, every new job increases the memory of the delayed_job process.
Maybe I'm using Delayed Job for something it's not designed for? Or could it be an environment problem? (I tried on both a local machine and a VPS.)
Tested on: Ubuntu 14.04 and Debian 6 (both x86), Rails 3.2, delayed_job 4.0.2, delayed_job_active_record 4.0.1, ruby 2.1.2
I could give some code examples, but, as I mentioned, I tried both a real job and a simple computation, so I'll skip them unless they turn out to matter and my mistakes are fundamental.
Given my constraints - tasks can run for a couple of minutes, read and write about 100K database records, require a lot of computation, can't be interrupted, and are limited to maybe 10-20 per day - my only guess is to use Resque, because it forks a process for every job, so memory should not accumulate over time.
So am I really doing something wrong, or is it the nature of DJ to occupy all the memory and require a restart - and if I can't restart it, shouldn't I avoid this approach altogether?
Everything I have read on the internet (not that much, by the way) says it's a Ruby GC issue - it doesn't free memory back to the OS - and some advise profiling the code for objects that are never released (that sounds the most plausible for my case, but I've tried hard with code that doesn't create any objects, and I explicitly set everything to nil and call GC.start).

GetPrivateProfileInt on network file on freshly booted machine

After intensively searching for why certain workstations wouldn't perform a certain action right after being started up in the morning (...), I've discovered that GetPrivateProfileInt just returns the default value, and doesn't bother to set GetLastError to something non-zero, when the network subsystem hasn't come up yet (e.g. because the DHCP client is still trying to get hold of an IP address to use).
Does this sound familiar to someone? Does anybody happen to know what I should/could do about it?
For now I'll work around it by using an alternate default value, and stalling a bit whenever that default is what I get back.
GetPrivateProfileInt() is one of those innocent-looking Windows API functions that has a ton of code behind it. There's a mass of appcompat code, designed to allow Win3 programs to run on modern versions of Windows. One of the side effects is that it is incredibly slow; it took about 50 msec the last time I profiled it.
Looks like you found a flaw in it. For all I know, it might actually be designed appcompat behavior, emulating the way this API worked 18 years ago - I have no clue, of course, whether that's accurate.
The very best thing you can do is stop using it. A possible workaround is to open the file first so that your program blocks until the service is up and running.
I would check if the file exists and sleep for a few seconds until the file is there. After some number of tries either use the default value or take an appropriate action.
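The retry-and-fall-back idea from the last two answers, sketched in Java for illustration only (the original program is native Win32, so just the strategy carries over; the UNC path and retry limits below are made up):

    import java.nio.file.Files;
    import java.nio.file.Path;

    public class WaitForSettingsFile {

        // Polls until the (network) settings file is visible, giving up after a
        // fixed number of attempts. Returns true if the file showed up in time.
        static boolean waitForFile(Path iniFile, int maxAttempts, long sleepMillis)
                throws InterruptedException {
            for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                if (Files.exists(iniFile)) {
                    return true;            // network share reachable, safe to read
                }
                Thread.sleep(sleepMillis);  // e.g. wait for DHCP / the network share
            }
            return false;                   // give up and use the default value
        }

        public static void main(String[] args) throws InterruptedException {
            Path iniFile = Path.of("\\\\server\\share\\settings.ini"); // hypothetical path
            if (waitForFile(iniFile, 10, 3000)) {
                // In the real Win32 program this is where GetPrivateProfileInt
                // would be called, now that the file is actually reachable.
                System.out.println("Settings file available, reading values...");
            } else {
                System.out.println("Settings file not reachable, falling back to defaults.");
            }
        }
    }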

How to Safely Force Shutdown of Mac

What I want
I'm developing a little app to force me to only work at certain times of day - I need something to force me to stop working in the evenings so I can be more effective in the day.
The option within OS X to shut down my machine at a certain time is too easy to cancel. And you can always log back in afterwards.
I want my app to quit all applications whether they have unsaved work or not.
What I've tried
I thought of killing the loginwindow process, but I've read that this can cause data corruption.
I've come across the shutdown command - I'm using sudo shutdown -h +0 to shut down immediately. This appears to be just the ticket, but I'm worried that it might cause data corruption if, say, Disk Utility is doing some kind of scan.
Is the shutdown command safe?
Can the shutdown command cause corruption? Or is it safe to use? Is there a better way of forcing shutdown safely?
Use AppleScript to tell application "System Events" to shut down.
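If the little app happens to be written in a JVM language, the AppleScript route above can be driven from code roughly like this (a sketch only; osascript ships with macOS, everything else here is an assumption about how the app is built):

    import java.io.IOException;

    public class ForcedShutdown {
        public static void main(String[] args) throws IOException, InterruptedException {
            // Ask System Events to shut the machine down, equivalent to the
            // AppleScript one-liner: tell application "System Events" to shut down
            Process p = new ProcessBuilder(
                    "osascript", "-e", "tell application \"System Events\" to shut down")
                    .inheritIO()
                    .start();
            int exitCode = p.waitFor();
            System.out.println("osascript exited with code " + exitCode);
        }
    }

This behaves like choosing Shut Down from the Apple menu, so applications still get a chance to respond; it's worth testing whether an app with unsaved work can block it, given the requirement to shut down regardless.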
The shutdown command sends running processes a signal to terminate, giving them a chance to do clean up work, if needed. So generally, when an application receives this signal (SIGTERM(inate)) it should wrap up and exit.
IIRC, in Snow Leopard (10.6) Apple added something called fast shutdown (or similar), which sends processes that have been flagged as being OK with it a SIGKILL signal, shutting them down with no chance to do cleanup work. This is supposed to make shutdown faster. The default is that applications still get SIGTERM and have to opt in to SIGKILL; they can also mark themselves as "dirty", i.e. having unsaved work, to say they do not want to be killed forcibly.
So while shutting down in the middle of a Disk Utility run will abort whatever Disk Utility is doing, IMHO it would not cause data corruption in general. However, depending on the operation currently running, you could end up with an incomplete disk image or a half-formatted partition. Maybe you want to refrain from using it when you know the end of your configured work time is coming close.
Using cron to schedule the shutdown is a viable option if you want it to happen at a specified time. If you want it to happen a certain amount of time after you log in, you could use the time parameter of shutdown to specify, say, 8 hours from now.
If you want to lose unsaved work then shutdown -h is your only answer.
However, anyone who has debugged a full-screen app on OS X knows that it is very easy (some say too easy) for an app to capture the screen and render the computer essentially useless (short of SSHing in from another computer to kill the process). That's another alternative.
The recommended way to schedule a shutdown of your computer on a regular basis is in System Preferences -> Energy Saver. Click the "Schedule" button in the lower right-hand corner; the rest is self-explanatory.
Forcing your computer to shut down (and discard any unsaved work) doesn't sound like a good idea to me. Wouldn't it be easier and safer to just set an alarm clock to remind yourself when you should stop working, and walk away from your computer when it rings? (That's what I do.)
Edit: That might have come across as a bit rude, which was not my intention at all. (I had no intention of making fun of your question or anything like that.) I just think that this would be a better solution to this problem :)
Maybe cron is installed on your computer? It's wonderful =)
