blastn -mt_mode 1 -num_threads 24 terminate called after throwing an instance of 'std::length_error' - bioinformatics

When running blastn v2.12.0 in multithreaded mode (-num_threads 24), split by query (-mt_mode 1), I get the following error:
terminate called after throwing an instance of 'std::length_error'
what(): basic_string::_M_create
This happens on a supercomputer (RHEL 7 / CentOS), but not on my own Ubuntu 18.04 machine.
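For completeness, the command looks roughly like this (the query, database, and output names are placeholders):

blastn -query queries.fasta -db nt \
       -mt_mode 1 -num_threads 24 \
       -outfmt 6 -out results.tsv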
The BLAST team has not responded, so I am posting the question here.
Many thanks.

Related

NS3: Can't run vanet-routing-compare.cc

I'm new to NS3. I studied the vanet-routing-compare.cc script and tried to run it with these commands (vanet-routing-compare.cc is in the scratch folder):
./waf --run scratch/vanet-routing-compare
./waf --run "vanet-routing-compare --scenario=1 --saveconfig=scenario1.txt"
But I'm confused by the results; I get the following error messages:
msg="Could not connect callback to /NodeList/*/DeviceList/*/ns3::WifiNetDevice/Phy/PhyTxDrop", file=../src/core/model/config.cc, line=920 terminate called without an active exception
Command ['/home/azra/Desktop/ns-allinone-3.31/ns-3.31/build/scratch/vanet-routing-compare'] terminated with signal SIGIOT. Run it under a debugger to get more information (./waf --run <program> --gdb").
And by using the gdb debugger, I see this message.
The program being debugged has been started already.
Start it from the beginning? (y or n) y
Starting program: /home/azra/Desktop/ns-allinone-3.31/ns-3.31/build/scratch/vanet-routing-compare --scenario=1 --saveconfig=scenario1.txt
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
msg="Could not connect callback to /NodeList/*/DeviceList/*/ns3::WifiNetDevice /Phy/PhyTxDrop", file=../src/core/model/config.cc, line=920
terminate called without an active exception
Program received signal SIGABRT, Aborted.
__GI_raise (sig=sig@entry=6) at ../sysdeps/unix/sysv/linux/raise.c:51
51 }
I'd appreciate help understanding why this is happening and how I can solve it.
I got the same error in version 3.31, but version 3.30 is OK; maybe you can also try version 3.30.
I believe that since version 3.31 the type names in the config paths changed from ns3:: to $ns3:: (see the sketch below).
See this ns-3-users thread: https://groups.google.com/g/ns-3-users/c/VWTV9ZdY7fs/m/MxRdIoLoAAAJ
The workaround I use is to copy the entire file from the ns-3 development branch on GitLab.
As you can see, there were more than a few changes in the code.
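A minimal sketch of what that change looks like in a trace-connect call, following the '$' convention described in the answer above (the function and sink names are illustrative, not taken from vanet-routing-compare.cc):

#include <iostream>
#include <string>
#include "ns3/core-module.h"
#include "ns3/network-module.h"

using namespace ns3;

// Illustrative context-aware trace sink for the PhyTxDrop trace source.
static void
PhyTxDropSink (std::string context, Ptr<const Packet> packet)
{
  std::cout << context << ": packet dropped at TX" << std::endl;
}

// Call this from the simulation setup, after the Wi-Fi devices exist.
// Up to ns-3.30 the path was .../ns3::WifiNetDevice/Phy/PhyTxDrop;
// from ns-3.31 the type name needs a leading '$'.
void
ConnectPhyTxDropTrace ()
{
  Config::Connect ("/NodeList/*/DeviceList/*/$ns3::WifiNetDevice/Phy/PhyTxDrop",
                   MakeCallback (&PhyTxDropSink));
}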

What does "Failed to execute /init (error -7)" mean?

Linux kernel version: 4.18.0-17
I am porting some 4.15 kernel customizations to 4.18, but my 4.18 kernel does not boot. A stock 4.18 kernel (i.e. the starting point before merging the 4.15 modifications) boots and runs.
The error message is:
Failed to execute /init (error -7)
Starting init: /bin/sh exists but couldn't execute it (error -7)
"errno 7" is "E2BIG 7 Argument list too long"
What does that mean in the context of the kernel starting the init process?
If the kernel command line and root file system are exactly the same as the ones you are giving to the kernel version that does boot, then the most likely cause is that get_user_pages_remote() is failing here: https://elixir.bootlin.com/linux/v4.18/source/fs/exec.c#L194
That would imply one of your changes broke memory management.
To get there, track from try_to_run_init_process(), which runs init, through the functions it calls that can return E2BIG. This is the only call site that does not depend on the size of init's argument list or environment: https://elixir.bootlin.com/linux/v4.18/source/init/main.c#L1001
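A rough sketch of that call chain in v4.18, with intermediate helpers omitted (paraphrased, not a verbatim kernel excerpt):

run_init_process() / try_to_run_init_process()   // init/main.c
  -> do_execve()                                  // several helpers omitted
    -> copy_strings()                             // fs/exec.c
      -> get_arg_page()
        -> get_user_pages_remote()                // returning <= 0 makes get_arg_page() return NULL
      -> if (!page) { ret = -E2BIG; ... }         // reported by init/main.c as "error -7"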
Having said that, I would first make VERY sure that the kernel command line and root file system are the same.

SBCL: building a standalone executable

How do I build a standalone executable in SBCL? I've tried
; SLIME 2.20
CL-USER> (defun hullo ()
           (format t "hullo"))
HULLO
CL-USER> (sb-ext:save-lisp-and-die "hullo" :toplevel #'hullo :executable t)
but that just produces the following error.
Cannot save core with multiple threads running.
Interactive thread (of current session):
#<THREAD "main thread" RUNNING {10019563F3}>
Other threads:
#<THREAD "Swank Sentinel" RUNNING {100329E073}>,
#<THREAD "control-thread" RUNNING {1003423A13}>,
#<THREAD "reader-thread" RUNNING {1003428043}>,
#<THREAD "swank-indentation-cache-thread" RUNNING
{1003428153}>,
#<THREAD "auto-flush-thread" RUNNING {1004047DA3}>,
#<THREAD "repl-thread" RUNNING {1004047FA3}>
[Condition of type SB-IMPL::SAVE-WITH-MULTIPLE-THREADS-ERROR]
What am I doing wrong?
What you are doing wrong is trying to save an image while multiple threads are running. Unlike many errors in Lisp, this error message explains exactly what the problem is.
If you look up the function in the SBCL manual, you find that indeed one may not save an image with multiple threads running. The extra threads come from Swank (the CL half of SLIME). The manual says that you may add functions to *save-hooks* to destroy the excess threads and functions to *init-hooks* to restore them.
One way around all this is not to save the image while it is running under SLIME, but instead to start SBCL directly in a terminal (note: no readline support), load your program, and save from there (see the sketch below).
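For example, a minimal terminal session along those lines, assuming the HULLO function above lives in a file called hullo.lisp:

$ sbcl                    # plain SBCL, no SLIME/Swank threads running
* (load "hullo.lisp")     ; defines HULLO
* (sb-ext:save-lisp-and-die "hullo" :toplevel #'hullo :executable t)
$ ./hullo
hullo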
Working with SLIME is different. In theory there is a SWANK-BACKEND:SAVE-IMAGE function, but I'm not sure whether it works. Also, since saving an image kills the process, you may want to fork first (SB-POSIX:FORK), unless you are on Windows. But forking causes problems of its own, because its behaviour is not well specified and because of file-descriptor issues: if you try fork -> close swank connection -> save and die, you may find that the connection in the parent process is closed (or worse, corrupted by appearing open while being closed at some lower level). One can read about such things online. Note that, due to the way SBCL threads are implemented, forking clones only the thread that forks; the other threads are not cloned. Thus forking and then saving should work, but may cause problems when running the executable due to partial SLIME state.
You may be interested in buildapp.
If you want to be able to use SLIME with your saved application, you can load Swank and start it listening on a socket or port (perhaps controlled by a command-line argument), and then connect to that Swank server from Emacs with SLIME.
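A minimal sketch of that idea (the port number and the use of Quicklisp to load Swank are assumptions, not part of the answer above):

;; In the saved application's startup (toplevel) code:
(ql:quickload :swank)                            ; load Swank, here via Quicklisp
(swank:create-server :port 4005 :dont-close t)   ; listen for SLIME connections
;; Then, in Emacs: M-x slime-connect RET localhost RET 4005 RET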
You have to run save-lisp-and-die from a fresh SBCL, not from SLIME; Dan Robertson's answer explains why.
It is cumbersome the first time, but you can put the build in a Makefile and re-use it. Don't forget to load your dependencies (the use-package line below is not mandatory):
build:
	sbcl --load cl-torrents.asd \
	     --eval '(ql:quickload :torrents)' \
	     --eval '(use-package :torrents)' \
	     --eval "(sb-ext:save-lisp-and-die #p\"torrents\" :toplevel #'main :executable t)"
The ql:quickload call implies Quicklisp is already loaded, which will be the case if you have installed Quicklisp on your machine, because then your ~/.sbclrc contains the Quicklisp loading snippet ((load quicklisp-init)).
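For reference, the stanza that Quicklisp's ql:add-to-init-file writes into ~/.sbclrc typically looks like this (the exact path depends on where Quicklisp was installed):

;;; The following lines added by ql:add-to-init-file:
#-quicklisp
(let ((quicklisp-init (merge-pathnames "quicklisp/setup.lisp"
                                       (user-homedir-pathname))))
  (when (probe-file quicklisp-init)
    (load quicklisp-init)))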
However, sb-ext is SBCL-specific and not portable across implementations; asdf:make is the cross-implementation equivalent. Add this to your .asd system definition:
:build-operation "program-op" ;; leave as is
:build-pathname "<binary-name>"
:entry-point "<my-package:main-function>"
and then call asdf:make to build the executable.
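For example, assuming the system is named my-package as in the placeholders above and its .asd file is somewhere ASDF can find it (e.g. under ~/quicklisp/local-projects/):

(asdf:make "my-package")   ; runs program-op and writes the <binary-name> executable
;; or non-interactively from a shell:
;;   sbcl --eval '(asdf:make "my-package")' --quit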
You can have a look at buildapp (mentioned above), a still-popular tool for doing just that, for SBCL and CCL. It is packaged in Debian: http://lisp-lang.org/wiki/article/buildapp. An example usage looks like this:
buildapp --output myapp \
         --asdf-path . \
         --asdf-tree ~/quicklisp/dists \
         --load-system my-app \
         --entry my-app:main
See also Roswell, a more general-purpose tool that can also build executables, though that part is less documented: https://roswell.github.io/
If you want to build an executable on a CI system (like GitLab CI), you may appreciate a Lisp Docker image that already has SBCL, other Lisps, and Quicklisp installed; and if you want to parse command-line arguments, see https://lispcookbook.github.io/cl-cookbook/testing.html#gitlab-ci and (my) tutorial: https://vindarel.github.io/cl-torrents/tutorial.html#org8567d07

What are the exit status codes for the G-WAN executable?

I'm trying to serve a large number of small files with G-WAN (version 4.3.14, started with sudo on 64-bit Ubuntu 14.04.3). I hammer it with requests over a single connection, using wget with a base URL and a file listing the URL suffixes. At some point, different on each run, the gwan executable silently exits. There's no trace in the gwan log or in the site error log (I did change '_log' to 'log' to enable logging). The exit status code is 139. What does it mean? When I stop it with Ctrl-C, the exit code is 130.
Is there a reference for the exit status codes? I cannot find any with Google.
First, Ubuntu 14.04.3 is very recent while G-WAN v4.3.14 is very old. Almost every new OS release introduces incompatibilities that require patches, and this is why we have to issue more recent releases for registered users. This explains the 'silent exits' that you are experiencing.
Second, process exit codes can be checked this way:
./gwan -h
echo $?
0
Zero means no error, and any other value is an error (mixing in system flags to be as informative as possible). That's why Ctrl-C returns 130: Control-C is fatal signal 2 (SIGINT), and 130 = 128 + 2. By the same convention, your exit status 139 = 128 + 11 means the process was killed by signal 11 (SIGSEGV), i.e. gwan crashed with a segmentation fault.
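A quick way to see that 128 + signal-number convention from any shell (generic POSIX behaviour, not specific to G-WAN):

sh -c 'kill -11 $$'; echo $?    # prints 139, i.e. 128 + 11 (SIGSEGV)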

bash is crashing on cygwin add_item ("\??\C:\cygwin", "/", ...)

I am trying to run applications on a Windows cluster. I am getting random crashes like the one below, but most of the time it works.
I suspected the famous Cygwin forking issue, but Cygwin's rebase did not help.
Thank you for any suggestions.
2 [main] bash 12840 C:\cygwin\bin\bash.exe: *** fatal error - add_item ("\??\C:\cygwin", "/", ...) failed, errno 1
Stack trace:
Frame Function Args
002868A8 6102F97B (002868A8, 00000000, 00000000, 00000000)
00286B98 6102F97B (6119FE20, 00008000, 00000000, 611A1C8F)
00287BC8 6100652C (611DF498, 00287BF4, 00000000, 60FE000C)
00287BE8 61006568 (611DF498, 00289C10, 00000001, 0003000A)
0028AC28 610917E4 (60FE000C, 20000C08, 0028ACF8, 61083290)
0028AC58 610D40FF (004C46B0, 01D05699, 004657E0, 612729D4)
208979 [main] bash 12840 exception::handle: Exception: STATUS_ACCESS_VIOLATION
cygwin 6.1
windows server 2008 R2 Ent
I got an explanation of the error from the Cygwin support folks (thanks to Corinna):
That's not a rebase problem. It's apparently a concurrency problem of sorts. While pulling up the per-user shared memory region, two or more processes are trying to set up the same mount points.
This is not supposed to happen. Only the first process actually creating the per-user shared memory is supposed to create the mount points. The OS tells a process whether it created or just opened a shared memory region, but for some reason both processes seem to think they created the shmem region, and one of them then stumbles over the EPERM condition when trying to create the root mount point twice.
But that still leaves the original problem unsolved.
