Reload a source file on multiple processes in Julia - parallel-processing

I have a Julia source file with a simulation function, which I want to run in parallel on multiple processors.
addprocs(1)
using Autoreload
arequire("testp.jl")
testresults = pmap(testp,[100,100])
# (change the code in testp.jl)
areload()
testresults = pmap(testp,[100,100])
According to the docs, include and using do not bring the module contents into scope on all processes, so I have to use require and remove the module declaration from the .jl file. I do my testing in Jupyter, so I often change the function code in an editor and want to reload the .jl file in Jupyter. Since the reload function does not work with non-module source files, I have to use the obsolete Autoreload package with arequire and areload. This works in the single-process case. However, with 2 or more processes the function code is not reloaded on any of them. I am using Julia 0.4.3 x86_64.
How can I reload the code without restarting the kernel?
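One workaround to consider (a minimal sketch, not taken from the Autoreload docs) is to skip areload and simply re-include the file on every process, which re-evaluates its definitions; this assumes testp.jl has no module declaration, as in the setup above:
addprocs(1)
@everywhere include("testp.jl")   # defines testp on every process, master included
testresults = pmap(testp, [100, 100])
# (change the code in testp.jl)
@everywhere include("testp.jl")   # re-evaluates the file, redefining testp everywhere
testresults = pmap(testp, [100, 100])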

Related

Go pkg/mod vs pkg/windows_amd64 on go build

I am a beginner in Go and am having difficulty understanding the go/pkg folder. As suggested by the documentation, it contains pkg/mod and pkg/windows_amd64, where pkg/windows_amd64 stores compiled files. What happens if I have a file importing some external GitHub modules and run go build on it?
Will it first search pkg/mod for the external modules (but modules are compiled into pkg/windows_amd64)?
Will it first search pkg/windows_amd64 for the modules (then what is the use of pkg/mod)?
Will it go to {gopath}/src and do something from there?
And pkg/mod is just a folder, so why do we call it a cache? It will keep filling up; when does it actually get populated?
The go command has two different modes of locating packages: module mode (introduced in Go 1.11) and GOPATH mode (much older). Module mode is the default as of Go 1.16, and if you are new to Go you will probably want to work exclusively in that mode. (There isn't much point to working in GOPATH mode unless you have a large legacy codebase that requires it.)
pkg/mod stores cached source code for use in module mode. The source code for a given version of a module is downloaded automatically when you build a package from that module (for example, as a dependency of some other package).
GOPATH/src stores source code for use in GOPATH mode. You can also choose to work in that directory in module mode, but that's a completely optional/aesthetic choice and shouldn't change the behavior of anything in module mode.
pkg/windows_amd64 stores installed packages in GOPATH mode. However, installed packages aren't very useful anyway because Go has a separate build cache (even in GOPATH mode). So you can mostly ignore pkg/windows_amd64 completely.
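For illustration, a typical module-mode workflow looks like this (a sketch; the module path example.com/hello is a placeholder):
go mod init example.com/hello   # creates go.mod and enables module mode
go build ./...                  # downloads dependencies into GOPATH/pkg/mod as needed
go clean -modcache              # empties pkg/mod if the cache grows too large
The cache is populated on demand by commands like go build and go get, and go clean -modcache is the supported way to empty it.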

How can I build bootimage without going through all Makefiles

I am currently working on the Linux kernel for an Android phone. My workflow is:
Make changes to kernel code
Build with make bootimage
Flash with fastboot flash boot
This works fine. However, building takes unnecessarily long because make bootimage first walks the entire tree and includes all Android.mk files. This takes longer than actually compiling the kernel and creating the boot image. Including these files should not be necessary since nothing in them has changed. To decrease the turnaround time in my workflow, I would like to speed up the build step.
When building other projects, there are ways to skip building dependencies and thus avoid reading all Android.mk files (for instance mm).
There is a make target bootimage-nodeps which seems to do the right thing: it makes a new boot image without going through all the Android.mk files. Unfortunately, the skipped dependencies include the kernel itself, which therefore does not get rebuilt even though it has changed.
My question is: is there a way to build the kernel and create a boot image without having to read all Android.mk files?
In case you're still looking into it, try using the showcommands goal with make, for example:
make bootimage showcommands
The showcommands goal prints all the commands needed to build the kernel and the boot image. Normally some of these commands, including the one that creates the boot image, are prefixed with $(hide) and are not shown.
Once you know the commands and their arguments, the next time you need the boot image you can run those commands manually (without using make bootimage and without parsing all the makefiles).
I have exactly the same problem and this is the only working solution I've found.
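For example, the captured boot-image command typically looks something like the following (every path and flag value here is a placeholder; copy the exact line that showcommands prints for your device):
out/host/linux-x86/bin/mkbootimg \
    --kernel out/target/product/<device>/kernel \
    --ramdisk out/target/product/<device>/ramdisk.img \
    --cmdline "..." \
    --base 0x80000000 \
    --output out/target/product/<device>/boot.img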
I am not sure you will save time this way (this solution zips and unzips multiple times, which on my machine takes longer than scanning for all the Android.mk files), but since your question is:
Is there a way to build the kernel and create a boot image without having to read all Android.mk files?
you could give this a try:
you could give this a try:
Preparations:
call make dist once
unzip the target_files.zip in out/dist/
now create a script that does the following for you:
overwrite the kernel in your unpacked target_files with your newly built kernel
zip your target_files with your new kernel
use the python script img_from_target_files from build/tools/releasetools/ with the extra parameter -z. Example: img_from_target_files -z out/dist/target_files.zip new_imgs.zip
inside the newly created new_imgs.zip you will find your new boot.img with your new kernel
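A minimal sketch of such a script (all paths are placeholders; inside the unpacked target_files the kernel conventionally sits under BOOT/kernel, but verify this in your own tree):
#!/bin/sh
TF_DIR=out/dist/target_files                      # where you unzipped target_files.zip
NEW_KERNEL=out/target/product/myphone/kernel      # your freshly built kernel

cp "$NEW_KERNEL" "$TF_DIR/BOOT/kernel"            # 1. overwrite the old kernel
(cd "$TF_DIR" && zip -qry ../target_files.zip .)  # 2. re-zip the target files
build/tools/releasetools/img_from_target_files \
    -z out/dist/target_files.zip new_imgs.zip     # 3. rebuild the images, boot.img included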
You can try the ONE_SHOT_MAKEFILE approach if you know the path to your Android.mk:
m -j8 ONE_SHOT_MAKEFILE=build/target/board/Android.mk bootimage
This worked pretty well for me in the Android L/M/N releases.

My Perl script starts too slowly, and includes many modules - can I precompile it?

I have a Perl script, that includes a few custom Perl modules.
I have profiled the script using Devel::NYTProf, and I can see that including these Perl modules has a cost that I would like to minimize.
I have installed PAR::Packer and compiled my script to make it stand alone, but it does not include the custom Perl modules.
Any suggestions?
Edit:
I need to precompile the script so that it does not incur the compilation overhead every time it is invoked.
If some of the packages you import are not needed at startup, change use calls to require and move them to the places in your code where the packages are needed (so you import them when they are needed, not necessarily at startup). Depending on how complex your program is, it could be a lot of work to figure out what calls can be changed without breaking your program or affecting its behavior.
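A minimal sketch of that transformation (Some::Heavy::Module and do_work are placeholders for one of your custom modules and its functions):
# Before: compiled at startup even if the feature is never used.
# use Some::Heavy::Module;

sub rarely_used_feature {
    require Some::Heavy::Module;      # compiled only when this path runs
    Some::Heavy::Module->import();    # only needed if you rely on its exports
    return Some::Heavy::Module::do_work();
}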
Borodin's daemon suggestion is a good one, too. Start a skeleton of your program that loads the necessary packages and waits for something to invoke it (maybe set up a socket connection or a signal handler). Then when it is time for your program to run, fork it and call some &main subroutine that starts the useful part of your program.
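A sketch of that preload-and-fork pattern over a Unix socket (the socket path and run_main are placeholders):
use strict;
use warnings;
use Socket qw(SOCK_STREAM);
use IO::Socket::UNIX;
# use My::Heavy::Module;   # placeholder: load the expensive modules once, up front

$SIG{CHLD} = 'IGNORE';     # auto-reap finished children

my $listener = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => '/tmp/myapp.sock',
    Listen => 5,
) or die "listen: $!";

while (my $conn = $listener->accept) {
    my $pid = fork;
    die "fork: $!" unless defined $pid;
    if ($pid == 0) {       # child: modules are already compiled
        close $listener;
        run_main($conn);   # placeholder for the useful part of the program
        exit 0;
    }
    close $conn;           # parent: go back to waiting
}

sub run_main { my ($conn) = @_; print {$conn} "ready\n" }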

Release required module from Node.js

How can one release a Node.js module at runtime in order to save memory or improve overall performance?
My application dynamically loads modules in Node.js at runtime, but never unloads any of them. I'm looking for such functionality especially to update a module that has changed after the code loaded it, and also to unload modules that will not be used further.
Any insights?
Thanks.
To unload a script, you can do this:
delete require.cache[require.resolve('/your/script/absolute/path')] // delete the cached copy (the cache is keyed by the resolved filename)
var yourModule = require('/your/script/absolute/path') // load the module again
So if you have plugin modules, you can watch those files for changes, then dynamically unload the module (delete its cache entry) and require the script again.
But make sure you are not leaking memory: re-assign the reloaded module to the old variable so nothing keeps a reference to the stale one.
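Putting the two together, a small sketch ('./plugin' is a placeholder module path):
var fs = require('fs');

var pluginPath = require.resolve('./plugin');  // resolved filename = cache key
var plugin = require(pluginPath);

fs.watch(pluginPath, function () {
  delete require.cache[pluginPath];  // drop the stale compiled copy
  plugin = require(pluginPath);      // re-assign so nothing keeps the old one alive
});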
Here is a handy tool for checking the memory: node-memwatch.
Good luck!
It sounds like you're creating some sort of plugin system. I would have a look at Node VM's:
http://nodejs.org/docs/latest/api/vm.html
It allows you to load and run code in a sandbox, which means that when it's finished, all its internal allocations should be freed again.
It is marked as unstable, but that doesn't mean it doesn't work. It means the API might change in future versions of Node.
As an example, Haraka, a Node-based SMTP server, uses the vm module to (re)load plugins.
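A minimal sketch of sandboxed loading with vm (loadPlugin and the exports convention are assumptions here, not Haraka's actual API):
var vm = require('vm');
var fs = require('fs');

function loadPlugin(file) {
  var sandbox = { console: console, exports: {} };
  vm.runInNewContext(fs.readFileSync(file, 'utf8'), sandbox, file);
  return sandbox.exports;  // drop this reference and the sandbox can be collected
}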

Writing a file change listener in Ruby

I want to write a listener (a Ruby module) to detect file creation inside a folder. My scenario is as follows:
I have a folder called 'files'
I have a Rails project which will create a file (demo.txt) inside the 'files' folder
I need to write a listener to detect the change and start reading the file (demo.txt)
Where can I start on creating this Ruby module?
I'm using 'Ruby 1.8.7 (2010-06-23 patchlevel 299) [i686-linux]'.
There are a few small libraries which you could use, learn from, or build upon, e.g.:
https://github.com/mynyml/watchr
Agile development tool that monitors a directory tree, and triggers a user defined action whenever an observed file is modified. Its most typical use is continuous testing, and as such it is a more flexible alternative to autotest.
http://codeforpeople.rubyforge.org/directory_watcher/
The directory watcher operates by scanning a directory at some interval and generating a list of files based on a user supplied glob pattern. As the file list changes from one interval to the next, events are generated and dispatched to registered observers. Three types of events are supported — added, modified, and removed.
https://github.com/guard/guard
Guard is a command line tool to easily handle events on files modifications (FSEvent / Inotify / Polling support).
http://rubydoc.info/gems/rb-inotify/0.8.6/frames
This is a simple wrapper over the inotify Linux kernel subsystem for monitoring changes to files and directories. It uses the FFI gem to avoid having to compile a C extension.
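If pulling in a gem is not an option on Ruby 1.8.7, here is a minimal polling sketch in plain Ruby (the directory name and interval are placeholders):
dir  = 'files'
seen = Dir.glob(File.join(dir, '*'))

loop do
  sleep 1
  current = Dir.glob(File.join(dir, '*'))
  (current - seen).each do |f|
    puts "created: #{f}"
    contents = File.read(f)   # e.g. start reading demo.txt here
  end
  seen = current
end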
