Auto-detect segmentation fault and re-run if a segmentation fault occurs - bash

I have a script get_new_GMC.sh that contains this line:
./casm.x < input_to_casm.txt
However, it sometimes fails with a segmentation fault (and sometimes it does not). When it does, the script reports:
get_new_GMC.sh: line 156: 72577 Segmentation fault: 11 ./casm.x < input_to_casm.txt
How could I change my script so that, whenever a segmentation fault occurs, it simply reruns ./casm.x < input_to_casm.txt? Thank you.
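One way to do this (a minimal sketch, untested against casm.x; the loop and the messages are my own additions, not part of the original script) is to check the command's exit status. A process killed by SIGSEGV (signal 11) is reported by bash with exit status 128 + 11 = 139, so the script can re-run the command until it exits with any other status:
#!/bin/bash
# Re-run casm.x until it exits without a segmentation fault.
# A process killed by SIGSEGV (signal 11) is reported with exit status 128 + 11 = 139.
while true; do
    ./casm.x < input_to_casm.txt
    status=$?
    if [ "$status" -ne 139 ]; then
        break   # finished normally, or failed for some reason other than a segfault
    fi
    echo "Segmentation fault (exit status $status); re-running casm.x..." >&2
done
Only the loop itself needs to replace the single ./casm.x line inside get_new_GMC.sh; the shebang is just to make the sketch self-contained. A retry cap (a counter that gives up after some number of attempts) would keep a persistently crashing run from looping forever.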

Related

Code not running in Hackerrank though it is running in onlinegdb

The task is to write C++ code for a hash table with chaining (using a singly linked list). The data is provided as an array of long values, and we have to store them in a hash table of size 13 in sorted order.
Here is my code for the same.
https://onlinegdb.com/B1pbgjxAI
The code compiles without errors, but at runtime the following error arises:
*** buffer overflow detected ***: ./Solution terminated
Reading symbols from Solution...done.
[New LWP 86657]
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
Core was generated by `./Solution'.
Program terminated with signal SIGABRT, Aborted.
#0 __GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:50
To enable execution of this file add
add-auto-load-safe-path /usr/local/lib64/libstdc++.so.6.0.25-gdb.py
line to your configuration file "//.gdbinit".
To completely disable this security protection add
set auto-load safe-path /
line to your configuration file "//.gdbinit".
For more information about this security protection see the
"Auto-loading safe path" section in the GDB manual. E.g., run from the shell:
info "(gdb)Auto-loading safe path"
Here, for the test case, the input is
201911169
and the output should be
93
In line 36 you're calling strcpy(p->name, Name), but Name is passed from x in main, and char x[4] isn't null-terminated, as you only assign to x[j] for j from 2 down to 0. Add the statement x[3] = '\0';.

File doesn't open in Fortran [duplicate]

I get the following error when I execute a Fortran program compiled with gfortran. The error is followed by a backtrace pointing to memory locations.
Program received signal SIGSEGV: Segmentation fault - invalid memory reference.
Backtrace for this error:
#0 0x2b2f8e39da2d in ???
#1 0x2b2f8e39cc63 in ???
#2 0x311823256f in ???
#3 0x311827a7be in ???
#4 0x2b2f8e39cff2 in ???
#5 0x2b2f8e4adde6 in ???
#6 0x2b2f8e4ae047 in ???
#7 0x2b2f8e4a62d7 in ???
#8 0x40482a in instrument_
at /home/user/model/instrument.f:90
#9 0x406c1e in funcdet
at /home/user/model/funcsynth.f:67
#10 0x406c98 in main
at /home/user/model/funcsynth.f:78
Segmentation fault (core dumped)
I would like to know where the error first arises - is it the first line of the backtrace or the last line? I would also appreciate strategies that might help me debug the issue.
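A common first step (sketch only, not from the original post; the output name model is illustrative) is to rebuild with gfortran's debugging and runtime-check options, so the backtrace resolves to source lines and invalid accesses are reported directly:
# Rebuild with debug symbols, no optimisation, backtraces and runtime checks.
gfortran -g -O0 -fbacktrace -fcheck=all -Wall instrument.f funcsynth.f -o model
./model
With -g and -fcheck=all, an out-of-bounds array access or similar error is reported with the offending source line rather than only raw addresses.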
Update:
From the backtrace, line 90 of instrument.f opens a file like so:
out_file3 = 'new_file'
OPEN(unit=3,file=out_file3,status='unknown')
To determine the underlying issue, I've added error checking like this:
OPEN(unit=3,file=out_file3,status='unknown',iostat=io_status, err=100)
100 write(STDOUT,*) 'io status=', io_status
The code exits with the error:
Error: Invalid value for ERR specification at (1)
How do I determine the appropriate value for the ERR specification? This led me to suspect that unit=3 might be the cause of the error; I have changed the value of unit, but every time I get the "Segmentation fault (core dumped)" error.
Update 2:
OPEN(unit=3,file=out_file3,status='unknown',err=17)
17 write(STDOUT,*) 'Problem'
I still get the "Segmentation fault (core dumped)" error at the line corresponding to the OPEN. I can only guess that unit=3 is the root cause of the problem.
Update 3
Attempt at a self-sufficient example:
character*280 testfile,finalfile,outfile
DIR = '/storage/work/user/'
testfile = 'test.dat'
CALL getenv(DIR,outfile)
CALL sappend(outfile,testfile,finalfile)
OPEN(unit=3,file=finalfine,status='new')
write(3,*) 'Test'
END

Should pcall catch PANIC errors (ESP32 NodeMCU)?

I have code like this:
print("AAAAAA")
local status, jobj = pcall(json.decode(docTxt))
print("BBBBBB")
The decode method causes a PANIC error, and it results in the following console output:
AAAAAAA
PANIC: unprotected error in call to Lua API (json.lua:166: 'for' initial value must be a number)
The whole program breaks; BBBBBB never gets printed to the console.
Is this normal? Is pcall broken?
I was able to figure it out: it can be configured in the watchdog options for the firmware compiler. I now have it set up so that it reboots on panic.

Freeswitch terminates with signal SIGSEGV, Segmentation fault

While FreeSWITCH is running, when one client calls another and the second client picks up the call, the process dumps core and throws the following error.
I have tried allocating memory for that function, but nothing happens.
Core was generated by `./freeswitch'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 0x00007fb18e95a771 in H245NegLogicalChannels::FindChannelBySession (
this=<optimized out>, rtpSessionId=rtpSessionId@entry=0,
fromRemote=fromRemote@entry=false, anyState=anyState@entry=false)
at /root/opal/src/h323/h323neg.cxx:1097
warning: Source file is more recent than executable.
1097 if (channel != NULL && (rtpSessionId == 0 || channel->GetSessionID() == rtpSessionId) &&
(gdb)
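A fuller backtrace from the core file sometimes shows which pointer is bad (sketch only; the core file name core is illustrative and depends on the system's core-dump settings):
gdb ./freeswitch core        # open the executable together with the core dump
(gdb) bt full                # backtrace of the crashing thread, including local variables
(gdb) thread apply all bt    # backtraces for every thread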

AVCaptureInput breaks at line 129 after session starts running

I received a weird error saying
[AVCaptureInput attachToFigCaptureSession:]_block_invoke, file
/SourceCache/EmbeddedAVFoundation/EmbeddedAVFoundation-
887.52/Aspen/AVCaptureInput.m, line 129.
What is this error? How did it happen?
It happened when I called session.startRunning(), where session is an AVCaptureSession.
