RethinkDB 'make' fails on a Raspberry Pi - gcc

I have tried running the 'make ALLOW_WARNINGS=1' command multiple times, but it seems to fail every time.
I have made sure gcc is up to date, and so is the RethinkDB package.
Here's the error it produces:

It was down to the fact that I didn't have enough swap. I ordered a new RPi2 and it worked (thanks to the 1GB of RAM built in), but I also found out that I hadn't increased the swap file on the RPi1 correctly: I was following this tutorial and didn't carry out the last two commands. So it might still be possible on an RPi1; I haven't checked.
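For reference, here is a minimal sketch of growing the swap on Raspbian via dphys-swapfile; the 1024 MB size and the dphys-swapfile mechanism are my assumptions, not details taken from the tutorial mentioned above:
sudo nano /etc/dphys-swapfile        # set e.g. CONF_SWAPSIZE=1024
sudo dphys-swapfile swapoff
sudo dphys-swapfile setup            # re-create the swap file at the new size
sudo dphys-swapfile swapon
free -m                              # confirm the new swap is active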

Related

Julia seems to be very slow

I am running the code shown in this question. I expected it to run faster the second and third time (on the first run it takes time to compile the code). However, it seems to take the same amount of time as the first run. How can I make this code run faster?
Edit: I am running the code from the Linux terminal with the command: julia mycode.jl
I tried following the instructions in the answer by @Przemyslaw Szufel but got the following error:
julia> create_sysimage(["Plots"], sysimage_path="sys_plots.so", precompile_execution_file="precompile_plots.jl")
ERROR: MethodError: no method matching create_sysimage(::Array{String,1}; sysimage_path="sys_plots.so", precompile_execution_file="precompile_plots.jl")
Closest candidates are:
create_sysimage() at /home/cardio/.julia/packages/PackageCompiler/2yhCw/src/PackageCompiler.jl:462 got unsupported keyword arguments "sysimage_path", "precompile_execution_file"
create_sysimage(::Union{Array{Symbol,1}, Symbol}; sysimage_path, project, precompile_execution_file, precompile_statements_file, incremental, filter_stdlibs, replace_default, base_sysimage, isapp, julia_init_c_file, version, compat_level, soname, cpu_target, script) at /home/cardio/.julia/packages/PackageCompiler/2yhCw/src/PackageCompiler.jl:462
Stacktrace:
[1] top-level scope at REPL[25]:1
I am using Julia on Debian Stable Linux: Debian ⛬ julia/1.5.3+dfsg-3
In Julia, packages are compiled the first time they are loaded within a Julia session. Hence starting a new Julia process means that Plots.jl gets compiled again each time. This is quite a big package, so it takes a significant time to compile.
In order to circumvent that, use PackageCompiler.jl to compile Plots.jl into a static system image that Julia can load later.
The basic steps include:
using PackageCompiler
create_sysimage(["Plots"], sysimage_path="sys_plots.so", precompile_execution_file="precompile_plots.jl")
After this is done you will need to run your code as:
julia --sysimage sys_plots.so mycode.jl
Similarly, you could add MultivariateStats and RDatasets to the generated sysimage, but I do not think they cause any significant delay.
Note that if the subsequent runs are part of your development process (rather than your production setup) and you are, e.g., developing a Julia module, then you could consider using Revise.jl during development rather than precompiling a sysimage. Having a sysimage means that you will need to rebuild it each time you update your Julia packages, so I would consider this approach for production rather than development (it depends on your exact scenario).
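As a side note, the MethodError in the question indicates that the PackageCompiler release installed there only accepts package names as Symbols, so passing [:Plots] instead of ["Plots"] should work on that version as well. A sketch of the whole flow from the shell, using the file names from this answer (precompile_plots.jl is a script you write yourself that exercises Plots):
julia -e 'using PackageCompiler; create_sysimage([:Plots]; sysimage_path="sys_plots.so", precompile_execution_file="precompile_plots.jl")'
julia --sysimage sys_plots.so mycode.jl   # subsequent runs skip the Plots compilation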
I had this problem and almost went back to Python, but now I run scripts in the REPL with include (e.g. include("mycode.jl")). It is much faster this way.
Note: First run will be slow but subsequent runs in the same REPL session will be fast even if the script is edited.
Fedora 36, Julia 1.8.1

"Select jobs to execute..." runs literally forever

I have a rather complex workflow with 750 samples and roughly 18,000 jobs. At first Snakemake runs just fine, but after around 4,000 jobs it suddenly freezes, and upon restart it hangs with "Select jobs to execute..." for 24 h, after which I terminated it. The initial DAG building takes roughly 2-3 minutes, though.
When I run snakemake (v5.32.0 and v5.32.1) with the --verbose option, I get tons of lines similar to this one:
Cbc0010I After 600 nodes, 304 on tree, -52534.791 best solution, best possible -52538.194 (7.08 seconds
I tried deleting the .snakemake folder in the hope that something had gone wrong in there, but that wasn't the case, unfortunately. To me it seems that the CBC MILP solver somehow does not converge, and it just keeps going, trying to bring the best solution and the best possible bound closer together!?
Now I have no idea how to proceed and fix the problem. My possible solutions would be to somehow change the convergence criteria or the solver itself. In the manual I found the option --scheduler-ilp-solver, but it apparently has only one choice, the default COIN_CMD.
After terminating a (shorter) run, I get this verbose output:
Result - User ctrl-c
Objective value: 52534.79114334
Upper bound: 52538.202
Gap: -0.00
Enumerated nodes: 186926
Total iterations: 1807277
Time (CPU seconds): 1181.97
Time (Wallclock seconds): 1188.11
Next I will try to limit the number of samples in the workflow and see if this has any impact. (Other datasets with 500 samples ran without any problems under Snakemake version 5.24, but there the DAG building took some hours, so I am not very eager to go back to the old version.)
So, any idea how to fix the problem is highly appreciated. I do not even know whether this is a bug!?
EDIT: Actually, I believe it is a bug in the current version. I downgraded Snakemake back to version 5.24, and it created the DAG within 10 minutes and started to run the pipeline. So, apparently there is some bug in the latest version. I will make this an answer to my own question, as downgrading to an older version solved the problem...
I also ran into this issue with a smaller workflow (~1,500 jobs total) and snakemake version 6.0.2. About half the jobs had run when the workflow got stuck and refused to run any more jobs. It looks like the problem is specific to the ILP scheduler, because when I re-ran with --scheduler greedy, it worked fine.
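For anyone hitting the same wall, a minimal sketch of that workaround (the core count is a placeholder, not taken from the original command line):
snakemake --cores 16 --scheduler greedy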
Actually, I believe it is a bug in the current Snakemake version. I downgraded Snakemake back to version 5.24, and it created the DAG within 10 minutes and started to run the pipeline. So, apparently there is some bug in the latest version. I am making this an answer to my own question, as downgrading to an older version solved the problem...
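A hedged sketch of pinning the older release, assuming Snakemake was installed via pip or conda (the question does not say how it was installed):
pip install 'snakemake==5.24.0'
# or, in a conda environment:
conda install -c conda-forge -c bioconda snakemake=5.24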

Stack Overflow in Open Cobol

I am using OpenCOBOL.
I have a program that I have been running for several weeks.
Yesterday, I got the following error:
MERRILL_MAX_AMOUNTS.COB:46: libcob: Stack overflow, possible PERFORM depth exceeded
I tried going back to other versions of the same program that worked, but I am still getting the same error. I have several other programs that run fine with no problem.
If the program was running for several weeks and then ends with this error, the program seems to be broken.
You get that error if a section/paragraph was PERFORMed and is then PERFORMed again (recursively), likely after a bunch of other statements, possibly including GO TOs or PERFORMs of other sections/paragraphs.
In most cases this is an error.
If the same program "worked before" and now doesn't, then its program flow has changed, likely because different data is being processed.
You could enable tracing of paragraphs and sections for this single program by compiling it with -ftrace and adjusting runtime.cfg (or export/set COB_SET_TRACE and COB_TRACE_FILE) according to the runtime documentation.
Note: the PERFORM stack checking is only enabled upon request via -fstack-check, which is auto-enabled with --debug (all runtime checks) or -g (debugging); if you don't want this, you can disable it by explicitly specifying -fno-stack-check.
You can also adjust the number of iterations libcob considers "possibly safe" with -fstack-size=number; the current default of 255 is quite high, and the maximum that can be set in a current version is 512 (an artificial limit only).
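A rough sketch of those flags in use, assuming the program is built as a standalone executable (the source file name is taken from the error above; the trace file name is arbitrary):
cobc -x --debug -ftrace MERRILL_MAX_AMOUNTS.COB   # --debug also auto-enables -fstack-check
export COB_SET_TRACE=Y                            # or set it in runtime.cfg
export COB_TRACE_FILE=trace.log
./MERRILL_MAX_AMOUNTS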
In any case, I highly suggest replacing the outdated OpenCOBOL (likely 1.1 from Feb 2009) with a current GnuCOBOL version (the latest is 3.1-rc1, released 19 days ago).
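If you build it from source, the usual GNU autotools sequence applies; this is only a sketch assuming the 3.1-rc1 tarball mentioned above has already been downloaded (exact file and directory names may differ):
tar -xzf gnucobol-3.1-rc1.tar.gz
cd gnucobol-3.1-rc1
./configure && make
make check          # optional self-tests
sudo make install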

Rcpp in RStudio: can't cache in memory when running in parallel unless I open the cpp file in RStudio

I ran into a weird problem, but I wonder if I'm asking the correct question:
result = parLapply(cl, 1:4,
  function(j, rho_list_needed, delta0_needed,
           V_iter_s, Sigma_list_needed) {
    rhoj = rho_list_needed[[j]]
    delta0_in_cpp = delta0_needed
    v = as.vector(V_iter_s[,,,j])
    sigmaj = Sigma_list_needed[[j]]
    sourceCpp('sample_Z.cpp')  # first call compiles (slow), later calls should be cached
    return(Sample_Z(rhoj, delta0_in_cpp, v, sigmaj, A, Cmatrix))
  }, rho_list_needed, delta0_needed,
  V_iter[[s]], Sigma_list_needed)
When I was testing my sample_Z.cpp in parallel through parLapply, a single calculation takes around 1 sec. In parallel, my 4 iterations take around 1.2 sec, which is a big improvement compared to the unparallelized version, which takes 8 sec.
There was no problem at all when I ran my program yesterday. Just now I noticed a bug and revised my program. To give my PC a fresh environment, I restarted my computer. When I started to run my program, I only opened the .R file and ran it. But that parallel step took 9 sec, where it used to take 1.2 sec. The 9 sec was measured after warming up my cores, i.e., the cpp file had already been sourced before I timed it.
I just don't know where the bug is. I then tried to source the cpp file directly in my global environment, and found out that there was no caching at all: the second run took the same time as the first one.
But then I accidentally opened sample_Z.cpp in RStudio, explicitly in the editor, and now everything works correctly.
I don't know what keywords to search Google for with this kind of problem, and I don't know whether opening the cpp file is a must; I never knew that before.
Can anyone tell me what's the real issue? Thanks!
After restarting your PC, you probably had extra processes running which would have competed for CPU cores and slowed down your algorithm. The fact that you're rebooting suggests to me you're not using Linux... but if you are, watch top while starting your code, or the equivalent for your platform.
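A minimal sketch of that check on Linux (the question does not state the platform, so nothing here is taken from it):
top -o %CPU                                      # interactive view sorted by CPU usage
ps -eo pcpu,pid,user,comm --sort=-pcpu | head    # one-shot list of the biggest CPU consumers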

How to debug potential CPU/RAM errors in Bash script on Linux

I have a relatively simple bash script that reads from a set of static input files, stores the input in bash variables and then does a bunch of processing over said input by calling out to external scripts (e.g. written in Python, Go, other bash scripts etc.) and using the intermediate results.
Lately I have been experiencing an intermittent problem where a single character seems to be getting altered somewhere during the processing which then causes subsequent errors. Specifically, a lot of the processing I'm doing involves slicing up a list of comma-separated records, and one of the values on each line is a unix timestamp, e.g. 1354245000.
What seems to be happening is that occasionally one of these values will get altered slightly, so I end up with a timestamp like 13542458=2 or 13542458>2 or 13542458;2 coming out of one of the intermediate scripts. This then subsequently gets fed into another script, which throws an exception when it tries to parse the value to an integer.
In the title of this question, I've suggested that this might be a potential CPU/RAM error. I know the general folly in thinking errors are caused by low level things like hardware/compilers etcetera, but the nature of this particular error makes me think it may be possible, for the following reasons:
The input files are the same on each invocation of the script, and the script only fails on some invocations.
I cannot think of any sources of randomness in the source code prior to where the script is breaking. It's basically just slicing and dicing csv input.
I cannot think of any sources of concurrency in the source code -- even the Go scripts aren't actually written to run anything concurrently.
This problem has only arisen in the last week or so. Prior to this time, this error would never occur.
While I haven't documented every erroneous character, they seem to often be quite close in the ASCII table to numeric values (=, >, ; etc.). That said, I guess the Hamming distance between two characters that are far apart in the ASCII table can also be small when a high-order bit changes.
The script often breaks at a different stage on different runs. i.e. I have a number of separate Python scripts, and sometimes it'll make it past one script and then the error will be induced in another. Other times it'll be induced on an earlier script.
What I'd like to know is, is there any methodical way to either confirm or rule out a hardware error for this problem? Or if it is a hardware problem, is it possibly undetectable by the operating system?
A bit of further info on the machine:
Linux 64-bit, Ubuntu 12.04
Intel i7 processor
16GB DDR3 RAM
I'm hoping someone can either point me to a reliable way to verify whether the hardware is to blame or otherwise a sound reason as to what else might be the cause.
Try booting into Memtest to check your memory.
While it is highly unlikely that it will be hardware, if you have exhausted your standard software debugging as suggested by @OliCharlesworth, here is an outline of hardware error investigation (example commands are collected below the outline):
(1) Check your log area for any `MCE` logs (machine check exceptions). If you find any in your log area (syslog), or sometimes in the present working dir or / dir, you have a hardware failure.
(2) Check your log area for disk errors, e.g.:
smartd[3963]: Device: /dev/sda [SAT], 34 Currently unreadable (pending) sectors
(3) Check your drive integrity, e.g. (as root): `smartctl -a /dev/sda`. If there is any abnormality, run:
smartctl -t short /dev/sda (change the drive as required)
(4) Download, install, and boot into memtest86 (http://www.memtest86.com/download.htm) and run the complete test.
If your CPU/motherboard has thrown no MCEs, you have no disk errors, your drive tests OK with smartctl, and you have no memory errors with memtest86, then recheck the software debugging. While additional hardware errors can still be present (bad capacitors, etc.), the likelihood at this point is software. Good luck.
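A hedged consolidation of the commands behind steps (1)-(3), assuming an Ubuntu-style syslog and a single drive at /dev/sda:
grep -i -e mce -e 'machine check' /var/log/syslog /var/log/kern.log   # (1) machine check exceptions
grep -i smartd /var/log/syslog                                        # (2) disk errors reported by smartd
smartctl -a /dev/sda                                                  # (3) drive health report (as root)
smartctl -t short /dev/sda                                            #     start a short self-test
smartctl -l selftest /dev/sda                                         #     view the self-test results afterwards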
