Julia: invoke a script on an existing REPL from the command line (cmd)

I want to run a Julia script from the Windows command line, but it seems that every time I run > julia code.jl, a new instance of Julia is created, and the startup time (package loading, compilation?) is quite long.
Is there a way to skip this startup time by running the script in the current REPL/Julia instance? (That usually saves me about 50% of the running time.)
I am using Julia 1.0.
Thank you,

You can use include from the REPL that is already running:
julia> include("code.jl")
The file is evaluated in the current session, so packages that are already loaded and compiled are reused.

There are several possible solutions. All of them involve different ways of sending commands to a running Julia session. The first few that come to my mind are:
use sockets, as explained in https://docs.julialang.org/en/v1/manual/networking-and-streams/#A-simple-TCP-example-1 (a rough sketch is given below)
set up an HTTP server, e.g. using https://github.com/JuliaWeb/HTTP.jl
use named pipes, as explained in Named pipe does not wait until completion in bash
communicate through the file system, e.g. make Julia scan some folder for .jl files; when it finds them they get executed and then moved to another folder or deleted - this is probably the simplest to implement correctly (see the sketch below)
With all of these solutions you can send the command to Julia by executing some shell command.
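Here is a minimal sketch of the file-system option. The jl_inbox/jl_done folder names and the one-second poll interval are arbitrary choices for illustration, not anything mandated by Julia:

# Minimal sketch: poll a folder for .jl files and run each one in this session.
const INBOX = "jl_inbox"   # folder to watch (arbitrary name)
const DONE  = "jl_done"    # processed scripts get moved here (arbitrary name)

mkpath(INBOX); mkpath(DONE)

while true
    for name in readdir(INBOX)
        endswith(name, ".jl") || continue
        path = joinpath(INBOX, name)
        try
            include(path)                       # runs in the current session
        catch err
            @warn "script failed" path err      # keep the watcher alive on errors
        end
        mv(path, joinpath(DONE, name); force=true)
    end
    sleep(1)                                    # poll interval in seconds
end

From cmd you would then trigger a run by copying the script into the watched folder, e.g. something like copy code.jl jl_inbox.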
No matter which approach you prefer, the key challenge is handling errors properly (e.g. a situation where a command you sent crashes the Julia session, or where you send requests faster than Julia can handle them). This is especially important if you want the Julia server to be detached from the terminal.
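For the socket option, here is a rough sketch along the lines of the manual example linked above; the port number and the one-command-per-line protocol are my own choices, and the error handling is kept deliberately minimal:

using Sockets                      # stdlib

server = listen(2000)              # arbitrary port
while true
    sock = accept(server)
    while !eof(sock)
        line = readline(sock)
        isempty(line) && continue
        try
            result = Core.eval(Main, Meta.parse(line))   # evaluate the command in this session
            println(sock, result)
        catch err
            println(sock, "error: ", err)                # report failures instead of dying
        end
    end
    close(sock)
end

From another shell you could then send, for example, include("code.jl") followed by a newline to port 2000 with whatever TCP client you have available.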
As a side note: when you use the Distributed module from the standard library for multiprocessing, Julia actually does something very similar (but the communication is Julia to Julia), so you can also look at how that module is implemented to get a feeling for how this can be done.

Related

Bash scripts (even trivial ones) get stuck when invoked on the terminal

I have a server on which we execute multiple bash scripts to automate tasks (like copying files to other servers, kicking off backups, etc). It has been working for some months, but today it started to get erratic.
What is happening is that the script gets 'stuck' for a while, and after that it runs with no problem. If I copy and paste the commands one by one into the terminal, they work, so it is not something in the script itself; rather, something seems to be blocking the bash interpreter (if that makes sense).
Another weird behavior is that the same script will run with no issues eventually. However, as we use Jenkins for automation, the scripts are re-created every time a new job starts.
For example, I created a new script, tst.sh, which only contains an echo. If I try to run it directly, it gets stuck for a while. I tried to debug it with bash -xeav, but it does not print my script code, which means it is not reading it. After a while, the script ran, with no changes. However, creating another script with the same content and a different name resurfaces the issue.
My hypothesis is that something prevents the script from being read, and bash just waits until whatever is blocking it finishes. However, I did not see any process holding the file, so that may not be the case.
Is there anything else I should try? My knowledge of bash is pretty basic, so I don't know if there is a flag that might help me debug this internally.
I am working on RHEL 8.85; the bash version is GNU bash, version 4.4.20(1)-release (x86_64-redhat-linux-gnu).
UPDATES BASED ON THE COMMENTS
Server resources are OK; they are barely being used.
The server hardware also works fine; at least the ops team has not reached out with any known issue.
A reboot makes the issue disappear; however, it reappears after 5 minutes or so.
The issue does not seem to be related to bash profiles and the like.
Issue solved; posting this as an answer so people can find it more quickly.
It turns out, as multiple users suggested in the comments (thanks to all!), that the problem was caused by a security monitor which analyzed each script that was executed. The team changed some settings on that end to prevent this from happening, and so far it is working.

How to detect or log (Ubuntu 14.04) when Ruby forks the process?

I'm trying to reduce the amount of forking our Ruby/Rails app does. We shell out a lot with backticks, and each of these forks the entire process, which can cause huge memory bloat.
I'm going through, identifying the ones that get called the most, and trying to replace them with code which achieves the same thing without making a shell call. However, in some cases I suspect it might still be forking under the hood anyway.
Is there a way to detect or log whenever a process forks? I'm using Ubuntu 14.04. A log would be ideal, as I can then keep an eye on it when I run the amended code.

How to Get Child Process IDs Without the jobs Command

Okay, so I'm working with a shell script that needs to be as portable as possible, and I want to make sure I can cleanly tidy up any child processes using a trap command.
Now, on more recent platforms the jobs -p command can be used to get a list of child process ids, suitable for throwing straight into a kill command to tidy things up without any fuss.
However, some environments don't have this. To work around this I'm using a variable into which I throw process IDs, but it's messy, and a typo could result in some or all child processes not being killed when they should be.
So, in the absence of the jobs command, what alternatives are there? Or put another way, what is the most compatible method to kill all child-processes of a script?
To give you an idea of the potential limitations: the most basic system I need to work with has no pgrep, and only a basic version of ps that supports just the -w flag. It does have access to the special files under /proc/$$/, but I'm not sure what to do with those (do any of them even list child processes?). This has been a big part of the difficulty, as many similar questions list solutions using tools I don't have access to. I just love compatibility issues =)
You can get the PID of the most recently started background child process using `$!`:
$!

Is the shell created by PHP shell_exec reusable?

I am working on a project where I need to execute multiple Unix scripts from a PHP environment. Would it be possible to open a single Unix shell and execute all the scripts in it?
Currently I'm using shell_exec for each script execution. This makes the application slow, as each call to shell_exec opens a new shell in which the script is executed.
Thanks in Advance,
No, the underlying shell is not accessible.
You could try a few things:
Optimise the scripts so you have to do fewer execs; pipe them together or something like that.
I am not sure if it will work, but you should be able to start a bash process and send commands to it (see proc_open). This way you could manually manage and reuse the shell. But I imagine it will be a nightmare, especially parsing the responses from the scripts (if you need that).

Run a process each time a new file is created in a directory in Linux

I'm developing an app. The operating system I'm using is Linux. I need to run, if possible, a Ruby script on each file created in a directory, and I need to keep this script always running. The first thing I thought of was inotify:
The inotify API provides a mechanism for monitoring file system events. Inotify can be used to monitor individual files, or to monitor directories.
It's exactly what I need. Then I found rb-inotify, a wrapper for inotify.
Do you think there is a better way of doing what I need than using inotify? Also, I don't really understand how I'm supposed to use rb-inotify.
I would just create, for example, an .rb file with:
require 'rb-inotify'

notifier = INotify::Notifier.new
notifier.watch("directory/to/check", :create) do |event|
  # do task with the event.name file
end
notifier.run
Then I just run ruby myRBNotifier.rb, and it will keep looping forever. How do I stop it? Any ideas? Is this a good approach?
I'd recommend looking at god. It's designed for this sort of task, and makes it pretty easy to build a monitoring system for background and daemon apps.
As for the main code itself, inotify isn't cross-platform, so if there's a possibility you'll need to run on Windows or macOS you'll need a different solution. It's not too hard to write a little piece of code that checks your target directory periodically for changes. If you need to know what changed, read and cache the directory entries, then compare them the next time your code runs. Use sleep between runs to wait some period of time before looping.
The old-school method of doing similar things is to use cron to fire off a job at regular intervals. That job can be your script that checks whether the file list changed by comparing it to the cached version, then acting as needed if something is different.
Just run your script in the background with
ruby myRBNotifier.rb &
When you need to stop it, find the process id and use kill on it:
ps ux
kill [whatever pid your process gets from the OS]
Does that answer your question?
If you're running on a Mac, look at the launchctl man page. You can set up a process to run and execute a Ruby script whenever a file changes. It's highly configurable.
