Best practice to solve a large number of similar knapsack instances

I'm working on a project where I need to solve thousands of small-to-large "simple" instances of a knapsack-like problem. All my instances have the same structure and the same constraints, but vary in the number of items (and thus variables). The domains can be fixed for all the instances.
I've successfully integrated MiniZinc into a workflow which:
extracts relevant data from a database,
generates the .mzn model (including the initial variable assignments),
calls the MiniZinc driver for the CBC solver, and
parses and interprets the solution.
I've now hit a performance bottleneck at the flattening phase for large instances (cf. the verbose flattening stats below). Flattening takes up to 30 seconds, while solving to optimality requires less than 1 second.
I've tried to disable the "optimized flattening", to no avail. I've also tried splitting the toolchain (mzn2fzn, then mzn-cbc), and finally tried separating the model and data definitions, but didn't observe any significant improvement.
If it helps, I've identified the set of constraints causing the problem:
...
array[1..num_items] of int: item_label= [1,12,81,12, [10 000 more ints between 1..82], 53 ];
int: num_label = 82;
array[1..num_label] of var 0..num_items: cluster_distribution;
constraint forall(i in 1..num_label)(cluster_distribution[i]==sum(j in 1..num_items)(item_indicator[j] /\ item_label[j]==i));
var 1..num_label: non_zero_label;
constraint non_zero_label = sum(i in 1..num_label) (cluster_distribution[i]>0);
....
Basically, each of the 5000 items has a label (out of ~100 possible labels), and I try (amongst other objectives) to maximise the number of different labels used (the non_zero_label var). I've tried various formulations, but this one seems to be the most efficient.
Could anyone provide me with some guidance on how to accelerate this flattening phase? How would you approach the task of solving thousands of "similar" instances?
Would it be beneficial to directly generate the MPS file and then call CBC natively? I expect MiniZinc to be quite efficient at compiling to MPS, but maybe I could get a speedup by exploiting the repeated structure of the instances? I fear, however, that this would more or less boil down to re-coding a poorly written, half-baked, pseudo-custom version of MiniZinc, which doesn't feel right.
Thanks!
My system info: OS X and Ubuntu, MiniZinc 2.1.7.
Compiling instance-2905-gd.mzn
MiniZinc to FlatZinc converter, version 2.1.7
Copyright (C) 2014-2018 Monash University, NICTA, Data61
Parsing file(s) 'instance-2905-gd.mzn' ...
processing file '../share/minizinc/std/stdlib.mzn'
processing file '../share/minizinc/std/builtins.mzn'
processing file '../share/minizinc/std/redefinitions-2.1.1.mzn'
processing file '../share/minizinc/std/redefinitions-2.1.mzn'
processing file '../share/minizinc/linear/redefinitions-2.0.2.mzn'
processing file '../share/minizinc/linear/redefinitions-2.0.mzn'
processing file '../share/minizinc/linear/redefinitions.mzn'
processing file '../share/minizinc/std/nosets.mzn'
processing file '../share/minizinc/linear/redefs_lin_halfreifs.mzn'
processing file '../share/minizinc/linear/redefs_lin_reifs.mzn'
processing file '../share/minizinc/linear/domain_encodings.mzn'
processing file '../share/minizinc/linear/redefs_bool_reifs.mzn'
processing file '../share/minizinc/linear/options.mzn'
processing file '../share/minizinc/std/flatzinc_builtins.mzn'
processing file 'instance-2905-gd.mzn'
done parsing (70 ms)
Typechecking ... done (13 ms)
Flattening ... done (16504 ms), max stack depth 14
MIP domains ...82 POSTs [ 82,0,0,0,0,0,0,0,0,0, ], LINEQ [ 0,0,0,0,0,0,0,0,82, ], 82 / 82 vars, 82 cliques, 2 / 2 / 2 NSubIntv m/a/m, 0 / 127.085 / 20322 SubIntvSize m/a/m, 0 clq eq_encoded ... done (28 ms)
Optimizing ... done (8 ms)
Converting to old FlatZinc ... done (37 ms)
Generated FlatZinc statistics:
Variables: 21258 int, 20928 float
Constraints: 416 int, 20929 float
This is a minimization problem.
Printing FlatZinc to '/var/folders/99/0zvzbfcj3h16g04d07w38wrw0000gn/T/MiniZinc IDE (bundled)-RzF4wk/instance-2905-gd.fzn' ... done (316 ms)
Printing .ozn to '/var/folders/99/0zvzbfcj3h16g04d07w38wrw0000gn/T/MiniZinc IDE (bundled)-RzF4wk/instance-2905-gd.ozn' ... done (111 ms)
Maximum memory 318 Mbytes.
Flattening done, 17.09 s

If you are solving many instantiations of the same problem class and you are running into large flattening times, then there are two general approaches you can take: either optimise the MiniZinc model to minimise flattening time, or implement the model against a direct solver API.
The first option is best if you want to keep generality between solvers. To reduce flattening time, the main thing you want to eliminate is "temporary" variables: variables that are created and then thrown away. A lot of the flattening time goes into resolving variables that are not actually necessary; this is done to minimise the search space while solving. Temporary variables might be generated in comprehensions, loops, let-expressions, and reifications. For your specific model, Gleb posted an optimisation for the for-loop in your constraint:
constraint forall(i in 1..num_label)(cluster_distribution[i]==sum(j in 1..num_items where item_label[j]==i)(item_indicator[j]));
The other option might be best if you want to integrate your model into a software product, as you can offer direct interaction with a solver. You will have to "flatten" your model/data manually, but you can do it in a more limited way, specific only to your purpose. Because it is not general purpose, it can be very quick. Hints for the model can be found by looking at the generated FlatZinc for different instances.
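To make the second option concrete, below is a minimal sketch of that route using PuLP, one of several Python front-ends to CBC (an illustrative assumption, not the tool prescribed by the answer; the item data, selection variables and the omitted capacity constraint are placeholders). It shows the "number of distinct labels" objective linearised by hand, which is essentially the work the flattener otherwise does for you:

import pulp

# hypothetical stand-in data for one instance
item_label = [1, 12, 81, 12, 53]          # label of each item
num_items = len(item_label)
num_label = 82

prob = pulp.LpProblem("knapsack_instance", pulp.LpMaximize)

# x[j] = 1 if item j is selected (the item_indicator of the MiniZinc model)
x = [pulp.LpVariable("x_%d" % j, cat="Binary") for j in range(num_items)]
# y[i] = 1 if label i+1 is used by at least one selected item
y = [pulp.LpVariable("y_%d" % i, cat="Binary") for i in range(num_label)]

# linking: a label only counts as "used" if some selected item carries it
for i in range(num_label):
    items_i = [x[j] for j in range(num_items) if item_label[j] == i + 1]
    if items_i:
        prob += pulp.lpSum(items_i) >= y[i]
    else:
        prob += y[i] == 0

# ... the real knapsack capacity and other constraints would go here ...

# objective: maximise the number of distinct labels used
prob += pulp.lpSum(y)

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[prob.status], pulp.value(prob.objective))

Because the loop above is plain Python over your own data, its cost scales only with the number of items and labels, so regenerating it for thousands of structurally identical instances avoids the 30-second flattening step entirely.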

Related

Vowpal Wabbit Continuing Training

Continuing with some experimentation here, I was interested in seeing how to continue training a VW model.
I first ran this and saved the model.
vw -d housing.vm --loss_function squared -f housing2.mod --invert_hash readable.housing2.mod
Examining the readable model:
Version 7.7.0
Min label:0.000000
Max label:50.000000
bits:18
0 pairs:
0 triples:
rank:0
lda:0
0 ngram:
0 skip:
options:
:0
^AGE:104042:0.020412
^B:158346:0.007608
^CHAS:102153:1.014402
^CRIM:141890:0.016158
^DIS:182658:0.278865
^INDUS:125597:0.062041
^LSTAT:170288:0.028373
^NOX:165794:2.872270
^PTRATIO:223085:0.108966
^RAD:232476:0.074916
^RM:2580:0.330865
^TAX:108300:0.002732
^ZN:54950:0.020350
Constant:116060:2.728616
If I then continue to train the model using two more examples (in housing_2.vm) which, note, have zero values for ZN and CHAS:
27.50 | CRIM:0.14866 ZN:0.00 INDUS:8.560 CHAS:0 NOX:0.5200 RM:6.7270 AGE:79.90 DIS:2.7778 RAD:5 TAX:384.0 PTRATIO:20.90 B:394.76 LSTAT:9.42
26.50 | CRIM:0.11432 ZN:0.00 INDUS:8.560 CHAS:0 NOX:0.5200 RM:6.7810 AGE:71.30 DIS:2.8561 RAD:5 TAX:384.0 PTRATIO:20.90 B:395.58 LSTAT:7.67
If the model saved is loaded and training continues, the coefficients appear to be lost from these zero valued features. Am I doing something wrong or is this a bug?
vw -d housing_2.vm --loss_function squared -i housing2.mod --invert_hash readable.housing3.mod
output from readable.housing3.mod:
Version 7.7.0
Min label:0.000000
Max label:50.000000
bits:18
0 pairs:
0 triples:
rank:0
lda:0
0 ngram:
0 skip:
options:
:0
^AGE:104042:0.023086
^B:158346:0.008148
^CRIM:141890:1.400201
^DIS:182658:0.348675
^INDUS:125597:0.087712
^LSTAT:170288:0.050539
^NOX:165794:3.294814
^PTRATIO:223085:0.119479
^RAD:232476:0.118868
^RM:2580:0.360698
^TAX:108300:0.003304
Constant:116060:2.948345
If you want to continue learning from saved state in a smooth fashion you must use the --save_resume option.
There are 3 fundamentally different types of "state" that can be saved into a vw "model" file:
The weight vector (regressor) obviously. That's the model itself.
invariant parameters like the version of vw (to ensure binary compatibility which is not always preserved between versions), number of bits in the vector (-b), and type of model
state which dynamically changes during learning. This subset includes parameters like learning and decay rates which gradually change during learning with each example, the example numbers themselves, etc.
Only --save_resume saves the last group.
--save_resume is not the default because it has an overhead and in most use-cases it isn't needed. E.g. if you save a model once in order to do many predictions and no learning (-t), there's no need to save the 3rd subset of state.
So, I believe in your particular case, you want to use --save_resume.
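For example (these are the question's own command lines with only the flag added), both the initial run and the continuation would carry --save_resume:

vw -d housing.vm --loss_function squared -f housing2.mod --invert_hash readable.housing2.mod --save_resume
vw -d housing_2.vm --loss_function squared -i housing2.mod --invert_hash readable.housing3.mod --save_resume

The second run then picks up the learning-rate and decay state saved by the first instead of restarting it.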
The possibility of a bug always exists, especially since vw supports so many options (about 100 at last count) which are often interdependent. Some option combinations make sense, others don't. Doing a sanity check for roughly 2^100 possible option combinations is a bit unrealistic. If you find a bug, please open an issue on github. In this case, please make sure to use a complete example (full data & command line) so your problem can be reproduced.
Update 2014-09-20 (after an issue was opened on github, thanks!):
The reason for zero-valued features "disappearing" (not really from the model, but only from the --invert_hash output) is that 1) --invert_hash was never designed for multiple passes, because keeping the original feature names in a hash-table incurs a large performance overhead, and 2) the missing features are those with a zero value, which are discarded. The model itself should still contain any feature that had a non-zero weight in a prior pass. Fixing this inconsistency is too complex and costly for implementation reasons, and would go against the overriding motivation of making vw fast, especially for the most useful/common use-cases. Anyway, thanks for the report, I too learned something new from it.

Converting package using S3 to S4 classes, is there going to be performance drop?

I have an R package which currently uses the S3 class system, with two different classes and several methods for generic S3 functions like plot, logLik and update (for model formula updating). As my code has become more complex with all the validity checking and if/else structures, due to the fact that there's no inheritance or dispatching based on two arguments in S3, I have started to think of converting my package to S4. But then I started to read about the advantages and disadvantages of S3 versus S4, and I'm not so sure anymore. I found an R-bloggers blog post about efficiency issues in S3 vs S4, and as that was 5 years ago, I tested the same thing now:
library(microbenchmark)
setClass("MyClass", representation(x="numeric"))
microbenchmark(structure(list(x=rep(1, 10^7)), class="MyS3Class"),
new("MyClass", x=rep(1, 10^7)) )
Unit: milliseconds
                                                   expr       min       lq   median       uq      max neval
structure(list(x = rep(1, 10^7)), class = "MyS3Class") 148.75049 152.3811 155.2263 159.8090 323.5678   100
                       new("MyClass", x = rep(1, 10^7))  75.15198 123.4804 129.6588 131.5031 241.8913   100
So in this simple example, S4 was actually a bit faster. Then I read an SO question about using S3 vs S4, which was pretty much in favor of S3. Especially @joshua-ulrich's answer made me doubt S4, as it said that
any slot change requires a full object copy
That feels like a big issue if I consider my case, where I'm updating my object in every iteration when optimizing the log-likelihood of my model. After some googling I found John Chambers' post about this issue, which seems to be changing in R 3.0.0.
So although I feel it would be beneficial to use S4 classes for some clarity in my code (for example, more classes inheriting from the main model class), and for the validity checks etc., I am now wondering whether it is worth all the work in terms of performance. So, performance-wise, are there real performance differences between S3 and S4? Are there other performance issues I should be considering? Or is it even possible to say something about this issue in general?
EDIT: As @DWin and @g-grothendieck suggested, the above benchmarking doesn't consider the case where the slot of an existing object is altered. So here's another benchmark which is more relevant to the true application (the functions in the example could be get/set functions for some elements in the model, which are altered when maximizing the log-likelihood):
objS3 <- structure(list(x=rep(1, 10^3), z=matrix(0,10,10), y=matrix(0,10,10)),
                   class="MyS3Class")
fnS3 <- function(obj, a){
  obj$y <- a
  obj
}
setClass("MyClass", representation(x="numeric", z="matrix", y="matrix"))
objS4 <- new("MyClass", x=rep(1, 10^3), z=matrix(0,10,10), y=matrix(0,10,10))
fnS4 <- function(obj, a){
  obj@y <- a
  obj
}
a <- matrix(1:100, 10, 10)
microbenchmark(fnS3(objS3, a), fnS4(objS4, a))
Unit: microseconds
expr min lq median uq max neval
fnS3(objS3, a) 6.531 7.464 7.932 9.331 26.591 100
fnS4(objS4, a) 21.459 22.393 23.325 23.792 73.708 100
The benchmarks are performed on R 2.15.2, on 64bit Windows 7. So here S4 is clearly slower.
First of all, you can easily have S3 methods for S4 classes:
> extract <- function (x, ...) x@x
> setGeneric ("extr4", def=function (x, ...){})
[1] "extr4"
> setMethod ("extr4", signature= "MyClass", definition=extract)
[1] "extr4"
> `[.MyClass` <- extract
> `[.MyS3Class` <- function (x, ...) x$x
> microbenchmark (objS3[], objS4 [], extr4 (objS4), extract (objS4))
Unit: nanoseconds
expr min lq median uq max neval
objS3[] 6775 7264.5 7578.5 8312.0 39531 100
objS4[] 5797 6705.5 7124.0 7404.0 13550 100
extr4(objS4) 20534 21512.0 22106.0 22664.5 54268 100
extract(objS4) 908 1188.0 1328.0 1467.0 11804 100
Edit: due to Hadley's comment, changing the experiment to plot:
> `plot.MyClass` <- extract
> `plot.MyS3Class` <- function (x, ...) x$x
> microbenchmark (plot (objS3), plot (objS4), extr4 (objS4), extract (objS4))
Unit: nanoseconds
expr min lq median uq max neval
plot(objS3) 28915 30172.0 30591 30975.5 1887824 100
plot(objS4) 25353 26121.0 26471 26960.0 411508 100
extr4(objS4) 20395 21372.5 22001 22385.5 31359 100
extract(objS4) 979 1328.0 1398 1677.0 3982 100
for an S4 method for plot I get:
plot(objS4) 19835 20428.5 21336.5 22175.0 58876 100
So yes, [ has an exceptionally fast dispatch mechanism (which is good, because I think extraction and the corresponding replacement functions are among the most frequently called methods). But no, S4 dispatch isn't slower than S3 dispatch.
Here the S3 method on the S4 object is as fast as the S3 method on the S3 object. However, calling without dispatch is still faster.
There are some things that work much better as S3, such as as.matrix or as.data.frame.
For some reason, defining these as S3 means that e.g. lm(formula, objS4) will work out of the box. This doesn't work with as.data.frame defined as an S4 method.
Also, it is much more convenient to call debug on an S3 method.
Some other things will not work with S3, e.g. dispatching on the second argument.
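A small sketch of what that looks like (reusing objS4 and a from above; the generic combine is a made-up name for illustration):

setGeneric("combine", function(x, y, ...) standardGeneric("combine"))
setMethod("combine", signature("MyClass", "matrix"),
          function(x, y, ...) x@x[seq_len(nrow(y))])
setMethod("combine", signature("MyClass", "numeric"),
          function(x, y, ...) x@x * y)
combine(objS4, a)   # picks the method for a matrix second argument
combine(objS4, 2)   # picks the method for a numeric second argument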
Whether there will be any noticeable drop in performance obviously depends on your class, that is, what kind of structures you have, how large the objects are and how often methods are called. A few μs of method dispatch won't matter with a calculation of ms or even s. But μs do matter when a function is called billions of times.
One thing that caused a noticeable performance drop for some functions that are called often ([) is S4 validation (a fair number of checks done in validObject) - however, I'm glad to have it, so I use it. Internally, I use workhorse functions that skip this step.
In case you have large data and call-by-reference would help your performance, you may want to have a look at reference classes. I've never really worked with them so far, so I cannot comment on this.
If you are concerned about performance, benchmark it. If you really need multiple inheritance or multiple dispatch, use S4. Otherwise use S3.
(This is pretty close to the boundary of a "question likely to elicit opinion", but I believe it is an important issue, one for which you have offered code and data and useful citations, and so I hope there are no votes to close.)
I admit that I have never really understood the S4 model of programming. However, what Chambers' post was saying is that @<-, i.e. slot assignment, was being re-implemented as a primitive rather than as a closure so that it would not require a complete copy of an object when one component was altered. So the earlier state of affairs will be altered in R 3.0.0 beta. On my machine (a 5-year-old MacPro running R 3.0.0 beta) the relative difference was even greater. However, I did not think that was necessarily a good test, since it was not altering an existing copy of a named object with multiple slots.
res <-microbenchmark(structure(list(x=rep(1, 10^7)), class="MyS3Class"),
new("MyClass", x=rep(1, 10^7)) )
summary(res)[ ,"median"]
#[1] 145.0541 103.4064
I think you should go with S4 since your brain structure is more flexible than mine and there are a lot of very smart people, Douglas Bates and Martin Maechler to name two other than John Chambers, who have used S4 methods for packages that require heavy processing. The Matrix and lme4 packages both use S4 methods for critical functions.

Faster huge data-import than Get["raggedmatrix.mx"]?

Can anybody advise an alternative to importing a couple of GByte of numeric data (in .mx form) from a list of 60 .mx files, each about 650 MByte?
The research problem (too large to post here) involved simple statistical operations with about twice as much data (around 34 GB) as available RAM (16 GB).
To handle the data size problem I just split things up and used
a Get / Clear strategy to do the math.
It does work, but calling Get["bigfile.mx"] takes quite some time, so I was wondering if it would be quicker to use BLOBs or whatever with PostgreSQL or MySQL or whatever database people use for GB of numeric data.
So my question really is:
What is the most efficient way to handle truly large data set imports in Mathematica?
I have not tried it yet, but I think that SQLImport from DataBaseLink will be slower than Get["bigfile.mx"].
Anyone has some experience to share?
(Sorry if this is not a very specific programming question, but it would really help me to move on with that time-consuming finding-out-what-is-the-best-of-the-137-possibilities-to-tackle-a-problem-in-Mathematica).
Here's an idea:
You said you have a ragged matrix, i.e. a list of lists of different lengths. I'm assuming floating point numbers.
You could flatten the matrix to get a single long packed 1D array (use Developer`ToPackedArray to pack it if necessary), and store the starting indexes of the sublists separately. Then reconstruct the ragged matrix after the data has been imported.
Here's a demonstration that within Mathematica (i.e. after import), extracting the sublists from a big flattened list is fast.
data = RandomReal[1, 10000000];
indexes = Union@RandomInteger[{1, 10000000}, 10000];
ranges = #1 ;; (#2 - 1) & @@@ Partition[indexes, 2, 1];
data[[#]] & /@ ranges; // Timing
{0.093, Null}
Alternatively store a sequence of sublist lengths and use Mr.Wizard's dynamicPartition function which does exactly this. My point is that storing the data in a flat format and partitioning it in-kernel is going to add negligible overhead.
Importing packed arrays as MX files is very fast. I only have 2 GB of memory, so I cannot test on very large files, but the import times are always a fraction of a second for packed arrays on my machine. This will solve the problem that importing data that is not packed can be slower (although as I said in the comments on the main question, I cannot reproduce the kind of extreme slowness you mention).
If BinaryReadList were fast (it isn't as fast as reading MX files now, but it looks like it will be significantly sped up in Mathematica 9), you could store the whole dataset as one big binary file, without the need of breaking it into separate MX files. Then you could import relevant parts of the file like this:
First make a test file:
In[3]:= f = OpenWrite["test.bin", BinaryFormat -> True]
In[4]:= BinaryWrite[f, RandomReal[1, 80000000], "Real64"]; // Timing
Out[4]= {9.547, Null}
In[5]:= Close[f]
Open it:
In[6]:= f = OpenRead["test.bin", BinaryFormat -> True]
In[7]:= StreamPosition[f]
Out[7]= 0
Skip the first 5 million entries:
In[8]:= SetStreamPosition[f, 5000000*8]
Out[8]= 40000000
Read 5 million entries:
In[9]:= BinaryReadList[f, "Real64", 5000000] // Length // Timing
Out[9]= {0.609, 5000000}
Read all the remaining entries:
In[10]:= BinaryReadList[f, "Real64"] // Length // Timing
Out[10]= {7.782, 70000000}
In[11]:= Close[f]
(For comparison, Get usually reads the same data from an MX file in less than 1.5 seconds here. I am on WinXP btw.)
EDIT: If you are willing to spend time on this and write some C code, another idea is to create a library function (using LibraryLink) that will memory-map the file (link for Windows), and copy it directly into an MTensor object (an MTensor is just a packed Mathematica array, as seen from the C side of LibraryLink).
I think the two best approaches are either:
1) use Get on the *.mx file,
2) or read in that data and save it in some binary format for which you write LibraryLink code, and then read the stuff via that. That, of course, has the disadvantage that you'd need to convert your MX stuff. But perhaps this is an option.
Generally speaking Get with MX files is pretty fast.
Are you sure this is not a swapping problem?
Edit 1:
You could also write an import converter: tutorial/DevelopingAnImportConverter

What is the highest Cyclomatic Complexity of any function you maintain? And how would you go about refactoring it?

I was doing a little exploring of a legacy system I maintain with NDepend (great tool, check it out) the other day. My findings almost made me spray a mouthful of coffee all over my screen. The top 3 functions in this system, ranked by descending cyclomatic complexity, are:
SomeAspNetGridControl.CreateChildControls (CC of 171!!!)
SomeFormControl.AddForm (CC of 94)
SomeSearchControl.SplitCriteria (CC of 85)
I mean 171, wow!!! Shouldn't it be below 20 or something? So this made me wonder. What is the most complex function you maintain or have refactored? And how would you go about refactoring such a method?
Note: The CC I measured is over the code, not the IL.
This is kid stuff compared to some 1970s vintage COBOL I worked on some years ago. We used the original McCabe tool to graphically display the CC for some of the code. The print out was pure black because the lines showing the functional paths were so densely packed and spaghetti-like. I don't have a figure but it had to be way higher than 171.
What to do
Per Code Complete (first edition):
If the score is:
0-5 - the routine is probably fine
6-10 - start to think about ways to simplify the routine
10+ - break part of the routine into a second routine and call it from the first routine
Might be a good idea to write unit tests as you break up the original routine.
This is for C/C++ code currently shipping in a product:
the highest CC value that I could reliably identify (i.e. I don't suspect the tool is erroneously adding complexity values for unrelated instances of main(...) ):
an image processing function: 184
a database item loader with verification: 159
There is also a test subroutine with CC = 339 but that is not strictly part of the shipping product. Makes me wonder though how one could actually verify the test case(s) implemented in there...
and yes, the function names have been suppressed to protect the guilty :)
How to change it:
There is already an effort in place to remedy this problem. The problems are mostly caused by two root causes:
spaghetti code (no encapsulation, lots of copy-paste)
code provided to the product group by some scientists with no real software construction/engineering/carpentry training.
The main method is identifying cohesive pieces of the spaghetti (pull on a thread :) ) and breaking up the looooong functions into shorter ones. Often there are mappings or transformations that can be extracted into a function or a helper class/object. Switching to the STL instead of hand-built containers and iterators can cut a lot of code too. Using std::string instead of C-strings helps a lot.
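As a contrived illustration of the "extract the mapping" step (not code from the actual product; the names and the unit-code example are made up), pulling a branchy translation out of a long function moves all of its decision points into one small, testable helper:

#include <map>
#include <string>

// Before, this lived as a long switch inside the big function, adding one
// decision point per case to that function's cyclomatic complexity.
// After extraction the caller contributes a single call, and the helper
// itself has only one branch.
std::string unitLabel(int code) {
    static const std::map<int, std::string> labels = {
        {1, "mm"}, {2, "cm"}, {3, "m"}, {4, "km"}
    };
    const auto it = labels.find(code);
    return it != labels.end() ? it->second : "unknown";
}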
I have found another opinion on this from this blog entry which seems to make good sense and work for me when comparing it against various code bases. I know it's a highly opinionated topic, so YMMV.
1-10 - simple, not much risk
11-20 - complex, low risk
21-50 - too complex, medium risk, attention
More than 50 - too complex, can't test, high risk
These are the six most complex functions in PerfectTIN, which will hopefully go into production in a few weeks:
32 32 testptin.cpp(319): testmatrix
36 39 tincanvas.cpp(90): TinCanvas::tick
53 53 mainwindow.cpp(126): MainWindow::tick
56 60 perfecttin.cpp(185): main
58 58 fileio.cpp(457): readPtin
62 62 pointlist.cpp(61): pointlist::checkTinConsistency
Where the two numbers are different, it's because of switch statements.
testmatrix consists of several order-2 and order-1 for-loops in a row and is not hard to understand. The thing that puzzled me, looking at it years after I wrote it in Bezitopo, is why it mods something by 83.
The two tick methods are run 20 times a second and check several conditions. I've had a bit of trouble with the complexity, but the bugs are nothing worse than menu items being grayed out when they shouldn't, or the TIN display looking wonky.
The TIN is stored as a variant winged-edge structure consisting of points, edges, and triangles all pointing to each other. checkTinConsistency has to be as complex as it is because the structure is complex and there are several ways it could be wrong.
The hardest bugs to find in PerfectTIN have been concurrency bugs, not cyclomatic bugs.
The most complex functions in Bezitopo (I started PerfectTIN by copying code from Bezitopo):
49 49 tin.cpp(537): pointlist::tryStartPoint
50 50 ptin.cpp(237): readPtin
51 51 convertgeoid.cpp(596): main
54 54 pointlist.cpp(117): pointlist::checkTinConsistency
73 80 bezier.cpp(1070): triangle::subdivide
92 92 bezitest.cpp(7963): main
main in bezitest is just a long sequence of if-statements: If I should test triangles, then run testtriangle. If I should test measuring units, then run testmeasure. And so on.
The complexity in subdivide is partly because roundoff errors very rarely produce some wrong-looking conditions that the function has to check for.
What is now tryStartPoint used to be part of maketin (which now has a complexity of only 11) with even more complexity. I broke out the inside of the loop into a separate function because I had to call it from the GUI and update the screen in between.

Generating random number in a given range in Fortran 77

I am a beginner trying to do some engineering experiments using Fortran 77. I am using the Force 2.0 compiler and editor. I have the following queries:
How can I generate a random number in a specified range? E.g., if I need to generate a single random number between 3.0 and 10.0, how can I do that?
How can I read data from a text file and use it in calculations in my program? E.g. I have temperature, pressure and humidity values (hourly values for a day, so 24 values in each text file).
Do I also need to define in the program how many values are there in the text file?
Knuth has released into the public domain sources in both C and FORTRAN for the pseudo-random number generator described in section 3.6 of The Art of Computer Programming.
2nd question:
If your file, for example, looks like:
hour temperature pressure humidity
00 15 101325 60
01 15 101325 60
... 24 of them, for each hour one
this simple program will read it:
      implicit none
      integer i, hour, temp, hum
      real p
      character*80 junkline
      open(unit=1, file='name_of_file.dat', status='old')
      rewind(1)
      read(1,*) junkline
      do 10 i=1,24
         read(1,*) hour, temp, p, hum
C        do something here ...
   10 continue
      close(1)
      end
(the indent is a little screwed up, but I don't know how to set it right in this weird environment)
My advice: read up on data types (INTEGER, REAL, CHARACTER), arrays (DIMENSION), input/output (READ, WRITE, OPEN, CLOSE, REWIND), and loops (DO, FOR), and you'll be doing useful stuff in no time.
I never did anything with random numbers, so I cannot help you there, but I think there are some intrinsic functions in Fortran for that. I'll check it out and report tomorrow. As for the 3rd question, I'm not sure what you meant (you don't know how many lines of data you'll be having in a file, or ... ?)
You'll want to check your compiler manual for the specific random number generator function, but chances are it generates random numbers between 0 and 1. This is easy to handle - you just scale the interval to be the proper width, then shift it to match the proper starting point: i.e. to map r in [0, 1] to s in [a, b], use s = r*(b-a) + a, where r is the value you got from your random number generator and s is a random value in the range you want.
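In Fortran 77 terms, and assuming your compiler provides some rand()-like function returning a value in [0, 1) (the name is compiler-specific here, so check the manual), mapping to the 3.0-10.0 range from the question looks like:

C     r comes from the compiler-specific generator, uniform in [0, 1)
      real r, s
      r = rand()
      s = 3.0 + r*(10.0 - 3.0)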
Idigas's answer covers your second question well - read in data using formatted input, then use them as you would any other variable.
For your third question, you will need to define how many lines there are in the text file only if you want to do something with all of them - if you're looking at reading the line, processing it, then moving on, you can get by without knowing the number of lines ahead of time. However, if you are looking to store all the values in the file (e.g. having arrays of temperature, humidity, and pressure so you can compute vapor pressure statistics), you'll need to set up storage somehow. Typically in FORTRAN 77, this is done by pre-allocating an array of a size larger than you think you'll need, but this can quickly become problematic. Is there any chance of switching to Fortran 90? The updated version has much better facilities for dealing with standardized dynamic memory allocation, not to mention many other advantages. I would strongly recommend using F90 if at all possible - you will make your life much easier.
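A minimal sketch of that pre-allocation approach in FORTRAN 77 (the maximum size, unit number and variable names are arbitrary placeholders):

      integer maxn, n, i, hour
      parameter (maxn = 1000)
      real temp(maxn), pres(maxn), hum(maxn)
      n = 0
C     read until end-of-file or until the fixed buffer is full
      do 20 i = 1, maxn
         read(1, *, end=30) hour, temp(i), pres(i), hum(i)
         n = n + 1
   20 continue
   30 continue
C     n now holds how many lines were actually read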
Another option, depending on the type of processing you're doing, would be to investigate algorithms that use only single passes through data, so you won't need to store everything to compute things like means and standard deviations, for example.
This subroutine generates a random number in Fortran 77 between 0 and ifin,
where i is the seed; use some great number such as 746397923.
      subroutine rnd001(xi,i,ifin)
      integer*4 i,ifin
      real*8 xi
      i=i*54891
      xi=i*2.328306e-10+0.5D00
      xi=xi*ifin
      return
      end
You may modify it in order to take a certain range.
