My project is getting big, and I currently use many packages and include many .jl files:
a = time()
@info "Loading JuMP"
using JuMP
@info "Loading Gurobi"
using Gurobi
# @info "Loading Combinatorics, DelimitedFiles, Dates and StatsBase"
# using Combinatorics, DelimitedFiles, Dates, StatsBase
@info "Loading Combinatorics, DelimitedFiles, Dates and Random"
using Combinatorics, DelimitedFiles, Dates, Random
@info "Loading Distributions, Graphs, GraphsFlows[TODO] and Plots"
using Distributions
# using Graphs, GraphsFlows, GraphPlot, Plots
using Graphs, GraphPlot, Plots
@info "Loading Parameters and Formatting"
using Parameters, Formatting # https://stackoverflow.com/a/58022378/10094437
@info "Loading Compose, Cairo and Fontconfig"
using Cairo, Fontconfig, Compose
@info "Loading .jl files $(lpad("0%",4))"
include("write_tikz.jl")
include("with_kw_mutable_structures.jl")
include("instance.jl")
include("solution_checker.jl")
@info "Loading .jl files $(lpad("25%",4))"
include("create_subtour_constraint.jl")
# include("ilp_rho_rsp_st_chains.jl")
include("ilp_rho_rsp_without_uc.jl")
include("benders_rho_rsp.jl")
include("benders_subproblem_poly.jl")
@info "Loading .jl files $(lpad("50%",4))"
include("benders_subproblem_ilp_primal.jl")
include("benders_subproblem_ilp_dual.jl")
include("print.jl")
include("three_four_rho_rsp.jl")
@info "Loading .jl files $(lpad("75%",4))"
include("utilities.jl")
include("rho_rsp_lb.jl")
include("./plots/plots.jl")
include("local_searches.jl")
@info "Loading .jl files $(lpad("100%",4))"
@info time()-a
All these using and include calls take 34 seconds each time I launch Julia. It is indeed faster once everything is already compiled, and when running include("main.jl") a second time, but it still takes 2.45 seconds after the first compilation.
I would like to know if there are faster ways to load packages and include Julia files, maybe with parallelism?
I am using Julia 1.7.2
These are the steps to consider:
1. Wrap all your code into a Julia package. Julia packages are precompiled on first use, so subsequent loads are much shorter. You do not want code with so many includes; turning it into a package (see the sketch below) speeds things up.
2. Most likely you will be fine after step (1). However, the next option is compiling the packages into a Julia system image using PackageCompiler (second sketch below).
3. As mentioned by @jing, each new Julia version will do this job in a shorter time.
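A minimal sketch of step (1), assuming the code is wrapped in a hypothetical package called RhoRSP (created with pkg> generate RhoRSP); the package file simply gathers the using statements and the include calls:

# src/RhoRSP.jl
module RhoRSP

using JuMP, Gurobi
using Combinatorics, DelimitedFiles, Dates, Random
using Distributions, Graphs, GraphPlot, Plots
using Parameters, Formatting
using Cairo, Fontconfig, Compose

include("write_tikz.jl")
include("instance.jl")
# ... the remaining include(...) calls from the list above ...

end # module

After registering it in your environment (for example with ]dev path/to/RhoRSP), a single "using RhoRSP" loads everything, and the precompile cache is reused on every later start.

For step (2), a sketch of building a custom system image with PackageCompiler; the package list and file names here are placeholders:

using PackageCompiler
create_sysimage(["JuMP", "Gurobi", "Plots"];
                sysimage_path = "rho_rsp_sysimage.so",
                precompile_execution_file = "main.jl")

Starting Julia with "julia --sysimage rho_rsp_sysimage.so" then skips most of the load and compile time.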
I've been trying to generate an OpenMP-enabled precompiled header (PCH) with cppyy and have been failing so far. Instead of doing it manually each time as outlined in this answer, I'm looking for an automated solution.
So far, the broad method is to use os.environ to export environment variables, but this seems to fail somewhat inconsistently, resulting in single-threaded code each time.
In particular, cppyy_backend.loader.ensure_precompiled_header does not always seem to produce any output. Even when it does, it has no effect on whether the C++ code runs in parallel or not.
The relevant part of the code is below:
import glob
import os

import cppyy_backend.loader as l

os.environ['EXTRA_CLING_ARGS'] = '-fopenmp -O2 -g'
l.set_cling_compile_options(True)

current_folder = os.path.split(os.path.abspath(__file__))[0]
pch_folder = os.path.join(current_folder, 'cling_pch/')
if not os.path.exists(pch_folder):
    os.mkdir(pch_folder)
l.ensure_precompiled_header(pch_folder)

# find PCH file:
pch_path = glob.glob(pch_folder + '/allDict*')
if len(pch_path) > 0:
    os.environ['CLING_STANDARD_PCH'] = pch_path[0]
else:
    raise ValueError('Unable to find a precompiled header...')
System specs:
Ubuntu 18.04.2 LTS
python 3.9.0
cppyy 2.4.0
cppyy_backend 6.27.0
EDIT:
I made sure the custom PCH folder was first empty while checking to see if the custom PCH file was being created.
If you set CLING_STANDARD_PCH to the desired name, it will be used as the name for the generated PCH. Conversely, if you pass the folder name to the lower-level call from loader, it will set CLING_STANDARD_PCH for you. Setting both should not be necessary.
The default naming scheme generates a differently named PCH when OpenMP is enabled than when it is not (a .omp. part is added to the name). This allows you to go back and forth without having to rebuild the PCH each time. Likely, you have multiple editions in the cling_pch directory, and pch_path[0] may well be a non-OpenMP variant.
If you want to stay close to the above, though, then how about:
import os
os.environ['EXTRA_CLING_ARGS'] = '-fopenmp -O2 -g'
current_folder = os.path.split(os.path.abspath(__file__))[0]
pch_folder = os.path.join(current_folder, 'cling_pch')
if not os.path.exists(pch_folder):
    os.mkdir(pch_folder)
os.environ['CLING_STANDARD_PCH'] = os.path.join(pch_folder, 'std_with_openmp.pch')
import cppyy
cppyy.cppexec("std::cerr << _OPENMP << std::endl")
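As an additional sanity check one could ask the OpenMP runtime itself how many threads it would use; this assumes the PCH above was built with -fopenmp, and n_omp_threads is just an illustrative name:

import cppyy

cppyy.cppdef(r"""
#include <omp.h>
// returns the maximum number of threads OpenMP would use in a parallel region
int n_omp_threads() { return omp_get_max_threads(); }
""")

print(cppyy.gbl.n_omp_threads())  # expected to be > 1 when OpenMP is really active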
I'm not sure about this and will be able to confirm in a few days, but... I now suspect the inconsistency in PCH creation happened because cppyy was being called before EXTRA_CLING_ARGS was set.
I guess this means the 'standard' precompiled header was loaded first and the 'custom' arguments were ignored. During debugging, or over the course of multiple runs, the effective order of execution was sometimes reversed.
Reading the YARD documentation and questions like this one, I see that it is a very nice tool for documenting functions, classes, and methods, but I cannot figure out how to document a simple script such as:
# @description Read files in folder and print their sizes
# @author Mefitico

require_relative '../lib/my_funcs'

# Check files in folder and print their sizes:
Dir.entries(Dir.pwd).select do |file|
  if File.file?(file)
    puts "#{file} - #{File.size(file)} bytes"
  end
end

puts "Finished script"
Doing so generates no documentation whatsoever, because no functions or classes are being defined. But for the project I'm working on, I need to create documentation for several standalone scripts that call functions defined elsewhere. These scripts need to be documented nicely and cannot be refactored into functions, modules, or classes.
I have inherited a site from a development team long gone that used SCSS to compile the style sheet. Unfortunately, the documentation on how to set up the development environment is non-existent, and while I have everything else on the site working, the SCSS / Sass compilation process is taking its toll on my sanity. I have the following code, and various iterations of this pattern, throughout the codebase:
@include breakpoint($bp-medium) {
  background-color: transparent;
  width: (100 / 3) + %;
}
The "+ %" at the end of the width statement is being complained about by the compiler. If I remove it from the formula it compiles fine, but I'm trying to understand what the original intent here was. Can someone give me some explanation of (what I expect is) old syntax from a few years ago and what the current sass/scss compiler would expect to see to achieve the same result?
I've installed Ruby Sass v3.7.4, and I have deployed bourbon (and fixed up the imports) and neat (and also fixed up the import statements). I suspect I'm going to end up bashing my head some more on the screen... but any pointers here would be appreciated.
(100 / 3) + %
is meant to represent a third of 100 as a percentage.
You can do something like this:
$width: percentage(1 / 3);
and then use $width. Note that percentage() multiplies a unitless ratio by 100, so percentage(1 / 3) yields 33.33333% (passing 100 / 3 would give 3333.33333%).
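For context, a sketch of how the original rule might look with that fix ($bp-medium and the breakpoint mixin are assumed to come from the inherited codebase):

@include breakpoint($bp-medium) {
  background-color: transparent;
  width: percentage(1 / 3);   // 33.33333%, what (100 / 3) + % was presumably aiming for
  // or, keeping the original arithmetic: width: (100 / 3) * 1%;
}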
Typically, when I am testing small snippets of Ruby code, I put the code chunks in separate files in the same directory, run irb, and then run the following command:
Dir[Dir.pwd + "/*.rb"].each { |file| require file }
This loads all the files into irb, which brings me to my question: when I require a file, how does irb process that request? Does it take all the requires and put them into one overall 'file'? I am looking for the mechanics of how irb works.
If anyone has the answer or can point me in the right direction, I would appreciate it.
Cheers
The short answer is:
require loads a file into the Ruby interpreter. The source code is parsed, its by-products are incorporated into the Ruby runtime (classes defined, methods added, etc.), and the source text itself is not saved anywhere; the memory it occupied is eventually garbage collected (freed).
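A small illustration of those mechanics, assuming a hypothetical file greeting.rb in the directory where irb was started:

# greeting.rb (hypothetical helper file)
class Greeting
  def self.hello
    "hello from a required file"
  end
end

Then, inside irb:

require './greeting'    # => true; the file is parsed and evaluated once
Greeting.hello          # => "hello from a required file"; the class now lives in the runtime
$LOADED_FEATURES.last   # => absolute path of greeting.rb, how Ruby remembers it was loaded
require './greeting'    # => false; already loaded, the source is not read or evaluated again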
I work on a project which is often built and run on several operating systems and in multiple configurations. I use two compilers, icc and gcc, with multiple sets of arguments for each. That can give me many build variants of one project.
What I would like to do is:
compile the project using icc or gcc with one set of arguments
test the performance of the application before and after the new build
compare the obtained results
compile the project with another set of arguments and repeat the previous steps
Does anyone have an idea how to do this nicely using a makefile?
You just need to cascade your make-targets according to your needs:
E.g.:
# Assumes that $(CONFIGURATION_FILES) is a list of files, all named *.cfg,
# where you store your set of arguments per step

# The main target
all: $(CONFIGURATION_FILES)

# Procedure for each configuration file
%.cfg: compile_icc compile_gcc test compare

compile_icc:
	# DO whatever is necessary

compile_gcc:
	# DO whatever is necessary

test:
	# DO whatever is necessary

compare:
	# DO whatever is necessary
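For a more concrete variant of that idea, here is a hedged sketch that builds, benchmarks, and compares one binary per configuration file; the configs/ and scripts/ paths, benchmark.sh, compare.sh, and the source layout are made-up placeholders:

# One build/benchmark/compare cycle per configs/<name>.cfg file.
CONFIGS  := $(wildcard configs/*.cfg)
VARIANTS := $(CONFIGS:configs/%.cfg=build/%/app)

.PHONY: all
all: $(VARIANTS)

# $* is the configuration name; each .cfg file is assumed to hold compiler flags.
build/%/app: configs/%.cfg $(wildcard src/*.c)
	mkdir -p $(dir $@)
	$(CC) $(shell cat $<) -o $@ src/*.c
	./scripts/benchmark.sh $@ > build/$*/perf.txt
	./scripts/compare.sh build/$*/perf.txt baseline/perf.txt

Invoke it as "make CC=gcc" or "make CC=icc" to switch compilers.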
However, for this kind of job I would rather use some build-automation tool. I only know Maven, but for a Makefile-based build other tools may fit better; take a look at the different options, e.g. here:
https://en.wikipedia.org/wiki/List_of_build_automation_software