Table Row returning blank when Filtered Criteria not met on 2nd table - filter

I have the following two tables:
Historical_Data_Tbl:
DATE    Cloud%  Wind_KM  Solar_Utiliz  Price
01-Jan  0.85    0        0.1           4.5
02-Jan  0.85    0        0.1           4.5
03-Jan  0.95    15       0             10
04-Jan  0.95    15       0             8
05-Jan  0.6     25       0.35          6
06-Jan  0.6     25       0.35          6
07-Jan  0.2     55       0.8           6
08-Jan  0.2     55       0.8           7
09-Jan  0.55    10       0.5           5.5
10-Jan  0.55    10       0.5           5.5
11-Jan  0.28    12       0.6           2
12-Jan  0.28    12       0.6           2
13-Jan  0.1     40       0.9           3
14-Jan  0.1     40       0.9           3
15-Jan  0.33    17       0.7           8
16-Jan  0.01    17       0.95          1
17-Jan  0.01    17       0.95          1
Forecast_Tbl:
Date  Fcst_Cloud  Fcst_Wind  Fcst_Solar  Max_Cloud  Min_Cloud  Max_Wind  Min_Wind  Max_Solar  Min_Solar
1     0.5         12         0.5         0.7        0.3        27        -3        0.75       0.25
2     0.8         10         0.1         1          0.6        25        -5        0.35       -0.15
3     0.15        15         0.8         0.35       -0.05      30        0         1.05       0.55
4     0.75        10         0.2         0.95       0.55       25        -5        0.45       -0.05
5     0.1         99         0.99        0.3        -0.1       114       84        1.24       0.74
6     0.11        35         0.8         0.31       -0.09      50        20        1.05       0.55
CODE BELOW:
let
//Read in Historical table and set data types
Source = Excel.CurrentWorkbook(){[Name="Historical"]}[Content],
Historical = Table.Buffer(Table.TransformColumnTypes(Source,{
{"DATE", type date}, {"Cloud%", type number}, {"Wind_KM", Int64.Type},
{"Solar_Utiliz", type number}, {"Price", type number}})),
//Read in Forecast table and set data types
Source1 = Excel.CurrentWorkbook(){[Name="Forecast"]}[Content],
Forecast = Table.Buffer(Table.TransformColumnTypes(Source1,{
{"Date", Int64.Type}, {"Fcst_Cloud", type number}, {"Fcst_Wind", Int64.Type},
{"Fcst_Solar", type number}, {"Max_Cloud", type number},
{"Min_Cloud", type number}, {"Max_Wind", Int64.Type}, {"Min_Wind", Int64.Type},
{"Max_Solar", type number}, {"Min_Solar", type number}})),
//Generate list of filtered Historical Table for each row in Forecast Table with aggregations
//Merge aggregations with the associated Forecast row
#"Filtered Historical" = List.Generate(
()=>[t=Table.SelectRows(Historical, (h)=>
h[#"Cloud%"] <= Forecast[Max_Cloud]{0} and h[#"Cloud%"]>= Forecast[Min_Cloud]{0}
and h[Wind_KM] <= Forecast[Max_Wind]{0} and h[Wind_KM] >= Forecast[Min_Wind]{0}
and h[Solar_Utiliz] <= Forecast[Max_Solar]{0} and h[Solar_Utiliz] >= Forecast[Min_Solar]{0}),
idx=0],
each [idx] < Table.RowCount(Forecast),
each [t=Table.SelectRows(Historical, (h)=>
h[#"Cloud%"] <= Forecast[Max_Cloud]{[idx]+1} and h[#"Cloud%"]>= Forecast[Min_Cloud]{[idx]+1}
and h[Wind_KM] <= Forecast[Max_Wind]{[idx]+1} and h[Wind_KM] >= Forecast[Min_Wind]{[idx]+1}
and h[Solar_Utiliz] <= Forecast[Max_Solar]{[idx]+1} and h[Solar_Utiliz] >= Forecast[Min_Solar]{[idx]+1}),
idx=[idx]+1],
each Forecast{[idx]} & Record.FromList(
{List.Count([t][Price]),List.Min([t][Price]), List.Max([t][Price]),
List.Modes([t][Price]){0}, List.Median([t][Price]), List.Average([t][Price])},
{"Count","Min","Max","Mode","Median","Average"})),
#"Converted to Table" = Table.FromList(#"Filtered Historical", Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Expanded Column1" = Table.ExpandRecordColumn(#"Converted to Table", "Column1",
{"Date", "Fcst_Cloud", "Fcst_Wind", "Fcst_Solar", "Max_Cloud", "Min_Cloud", "Max_Wind", "Min_Wind", "Max_Solar", "Min_Solar",
"Count", "Min", "Max", "Mode", "Median", "Average"}),
#"Changed Type" = Table.TransformColumnTypes(#"Expanded Column1",{
{"Date", Int64.Type}, {"Fcst_Cloud", Percentage.Type}, {"Fcst_Wind", Int64.Type}, {"Fcst_Solar", type number},
{"Max_Cloud", type number}, {"Min_Cloud", type number}, {"Max_Wind", Int64.Type}, {"Min_Wind", Int64.Type},
{"Max_Solar", type number}, {"Min_Solar", type number}, {"Count", Int64.Type},
{"Min", Currency.Type}, {"Max", Currency.Type}, {"Mode", Currency.Type}, {"Median", Currency.Type}, {"Average", Currency.Type}})
in
#"Changed Type"
And this is the resulting output:
Date  Fcst_Cloud  Fcst_Wind  Fcst_Solar  Max_Cloud  Min_Cloud  Max_Wind  Min_Wind  Max_Solar  Min_Solar  Count  Min  Max  Mode  Median  Average
1     0.5         12         0.5         0.7        0.3        27        0         0.75       0.25       5      5.5  8    6     6       6.2
2     0.8         10         0.1         1          0.6        25        -5        0.35       -0.15      6      4.5  10   4.5   6       6.5
3     0.15        15         0.8         0.35       -0.05      30        0         1.05       0.55       5      1    8    2     2       2.8
4     0.75        10         0.2         0.95       0.55       25        -5        0.45       -0.05      6      4.5  10   4.5   6       6.5
6     0.11        35         0.8         0.31       -0.09      50        20        1.05       0.55       2      3    3    3     3       3
(Forecast_Tbl OUTPUT screenshot: https://i.stack.imgur.com/8ozB2.png)
The issue is that when a forecast row (for example, where Date "5" should be in the output table) doesn't have any data points within the filtered range of the Historical Data table, it returns blanks for the entire row.
What I would like it to do instead is return the original data from Forecast_Tbl in the first 10 columns, show "0" in the "Count" column when no filter criteria are met, and use the previous row's "Average" value (in this case 6.5) when no filter criteria are met. Below is the output I would like the table to return:
Date  Fcst_Cloud  Fcst_Wind  Fcst_Solar  Max_Cloud  Min_Cloud  Max_Wind  Min_Wind  Max_Solar  Min_Solar  Count  Min  Max  Mode  Median  Average
1     0.5         12         0.5         0.7        0.3        27        0         0.75       0.25       5      5.5  8    6     6       6.2
2     0.8         10         0.1         1          0.6        25        -5        0.35       -0.15      6      4.5  10   4.5   6       6.5
3     0.15        15         0.8         0.35       -0.05      30        0         1.05       0.55       5      1    8    2     2       2.8
4     0.75        10         0.2         0.95       0.55       25        -5        0.45       -0.05      6      4.5  10   4.5   6       6.5
5     0.1         99         0.99        0.3        -0.1       114       84        1.24       0.74       0                              6.5
6     0.11        35         0.8         0.31       -0.09      50        20        1.05       0.55       2      3    3    3     3       3
I have tried using conditional if expressions but have been unsuccessful.

How about
....
{"Count","Min","Max","Mode","Median","Average"})),
#"Converted to Table" = Table.FromList(#"Filtered Historical", Splitter.SplitByNothing(), null, null, ExtraValues.Error),
#"Added Index" = Table.AddIndexColumn(#"Converted to Table", "Index", 0, 1, Int64.Type),
#"Added Custom" = Table.AddColumn(#"Added Index", "Column2", each try if Value.Is([Column1], type record ) then [Column1] else null otherwise Record.Combine({Forecast{[Index]}, [Count = 0, Average = #"Added Index"{[Index]-1}[Column1][Average]]})),
#"Expanded Column1" = Table.ExpandRecordColumn(Table.SelectColumns(#"Added Custom",{"Column2"}), "Column2",
{"Date", "Fcst_Cloud", "Fcst_Wind", "Fcst_Solar", "Max_Cloud", "Min_Cloud", "Max_Wind", "Min_Wind", "Max_Solar", "Min_Solar",
"Count", "Min", "Max", "Mode", "Median", "Average"}),
....

Why are there extra equations when using extra neurons in GEKKO?

I'm a bit puzzled as to why extra equations are introduced into the optimization problem (using GEKKO) when increasing the number of neurons in an ANN that is used, e.g., within the objective function or in the constraints. I was hoping to find the answer in this paper, but I can't seem to pinpoint the reason.
This is the log of a baseline example I made, using two Gekko_NN_SKlearn functions.
----------------------------------------------------------------
APMonitor, Version 1.0.0
APMonitor Optimization Suite
----------------------------------------------------------------
Called files( 55 )
files: overrides.dbs does not exist
Run id : 2022y11m03d15h47m50.642s
COMMAND LINE ARGUMENTS
coldstart: 0
imode : 3
dbs_read : T
dbs_write: T
specs : T
rto selected
Called files( 35 )
READ info FILE FOR VARIABLE DEFINITION: gk_model0.info
SS MODEL INIT 0
Parsing model file gk_model0.apm
Read model file (sec): 1.1181
Initialize constants (sec): 0.
Determine model size (sec): 0.7627999999999999
Allocate memory (sec): 0.0049000000000001265
Parse and store model (sec): 0.7630000000000001
--------- APM Model Size ------------
Each time step contains
Objects : 247
Constants : 0
Variables : 752
Intermediates: 249
Connections : 741
Equations : 745
Residuals : 496
Error checking (sec): 0.2740999999999998
Compile equations (sec): 2.8513
Check for uninitialized intermediates (sec): 0.
------------------------------------------------------
Total Parse Time (sec): 5.7744
SS MODEL INIT 1
SS MODEL INIT 2
SS MODEL INIT 3
SS MODEL INIT 4
Called files( 31 )
READ info FILE FOR PROBLEM DEFINITION: gk_model0.info
Called files( 6 )
Files(6): File Read rto.t0 F
files: rto.t0 does not exist
Called files( 51 )
Read DBS File defaults.dbs
files: defaults.dbs does not exist
Called files( 51 )
Read DBS File gk_model0.dbs
files: gk_model0.dbs does not exist
Called files( 51 )
Read DBS File measurements.dbs
Called files( 51 )
Read DBS File overrides.dbs
files: overrides.dbs does not exist
Number of state variables: 1240
Number of total equations: - 989
Number of slack variables: - 0
---------------------------------------
Degrees of freedom : 251
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: 2.23 NLPi: 67 Dpth: 0 Lvs: 3 Obj: 6.96E-02 Gap: NaN
--Integer Solution: 1.78E-01 Lowest Leaf: 6.96E-02 Gap: 1.08E-01
Iter: 2 I: 0 Tm: 0.13 NLPi: 6 Dpth: 1 Lvs: 2 Obj: 1.78E-01 Gap: 1.08E-01
Iter: 3 I: 0 Tm: 0.33 NLPi: 5 Dpth: 1 Lvs: 2 Obj: 1.74E-01 Gap: 1.08E-01
Iter: 4 I: 0 Tm: 0.52 NLPi: 11 Dpth: 1 Lvs: 3 Obj: 9.49E-02 Gap: 1.08E-01
--Integer Solution: 1.78E-01 Lowest Leaf: 9.49E-02 Gap: 8.27E-02
Iter: 5 I: 0 Tm: 0.28 NLPi: 5 Dpth: 2 Lvs: 2 Obj: 1.04E+00 Gap: 8.27E-02
--Integer Solution: 1.23E-01 Lowest Leaf: 1.23E-01 Gap: 0.00E+00
Iter: 6 I: 0 Tm: 0.30 NLPi: 5 Dpth: 2 Lvs: 2 Obj: 1.23E-01 Gap: 0.00E+00
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 3.842599999999999 sec
Objective : 0.12267384658102941
Successful solution
---------------------------------------------------
Called files( 2 )
Called files( 52 )
WRITE dbs FILE
Called files( 56 )
WRITE json FILE
Timer # 1 10.41/ 1 = 10.41 Total system time
Timer # 2 3.84/ 1 = 3.84 Total solve time
Timer # 3 0.02/ 191 = 0.00 Objective Calc: apm_p
Timer # 4 0.01/ 99 = 0.00 Objective Grad: apm_g
Timer # 5 0.03/ 191 = 0.00 Constraint Calc: apm_c
Timer # 6 0.00/ 0 = 0.00 Sparsity: apm_s
Timer # 7 0.00/ 0 = 0.00 1st Deriv #1: apm_a1
Timer # 8 0.00/ 99 = 0.00 1st Deriv #2: apm_a2
Timer # 9 0.67/ 1 = 0.67 Custom Init: apm_custom_init
Timer # 10 0.01/ 1 = 0.01 Mode: apm_node_res::case 0
Timer # 11 0.00/ 1 = 0.00 Mode: apm_node_res::case 1
Timer # 12 0.02/ 1 = 0.02 Mode: apm_node_res::case 2
Timer # 13 0.00/ 1 = 0.00 Mode: apm_node_res::case 3
Timer # 14 0.35/ 387 = 0.00 Mode: apm_node_res::case 4
Timer # 15 1.51/ 198 = 0.01 Mode: apm_node_res::case 5
Timer # 16 0.00/ 0 = 0.00 Mode: apm_node_res::case 6
Timer # 17 0.01/ 99 = 0.00 Base 1st Deriv: apm_jacobian
Timer # 18 0.00/ 99 = 0.00 Base 1st Deriv: apm_condensed_jacobian
Timer # 19 0.00/ 1 = 0.00 Non-zeros: apm_nnz
Timer # 20 0.00/ 0 = 0.00 Count: Division by zero
Timer # 21 0.00/ 0 = 0.00 Count: Argument of LOG10 negative
Timer # 22 0.00/ 0 = 0.00 Count: Argument of LOG negative
Timer # 23 0.00/ 0 = 0.00 Count: Argument of SQRT negative
Timer # 24 0.00/ 0 = 0.00 Count: Argument of ASIN illegal
Timer # 25 0.00/ 0 = 0.00 Count: Argument of ACOS illegal
Timer # 26 0.00/ 1 = 0.00 Extract sparsity: apm_sparsity
Timer # 27 0.00/ 13 = 0.00 Variable ordering: apm_var_order
Timer # 28 0.00/ 1 = 0.00 Condensed sparsity
Timer # 29 0.00/ 0 = 0.00 Hessian Non-zeros
Timer # 30 0.00/ 1 = 0.00 Differentials
Timer # 31 0.00/ 0 = 0.00 Hessian Calculation
Timer # 32 0.00/ 0 = 0.00 Extract Hessian
Timer # 33 0.00/ 1 = 0.00 Base 1st Deriv: apm_jac_order
Timer # 34 0.02/ 1 = 0.02 Solver Setup
Timer # 35 1.87/ 1 = 1.87 Solver Solution
Timer # 36 0.00/ 202 = 0.00 Number of Variables
Timer # 37 0.00/ 105 = 0.00 Number of Equations
Timer # 38 0.02/ 14 = 0.00 File Read/Write
Timer # 39 0.00/ 0 = 0.00 Dynamic Init A
Timer # 40 0.00/ 0 = 0.00 Dynamic Init B
Timer # 41 0.00/ 0 = 0.00 Dynamic Init C
Timer # 42 1.12/ 1 = 1.12 Init: Read APM File
Timer # 43 0.00/ 1 = 0.00 Init: Parse Constants
Timer # 44 0.76/ 1 = 0.76 Init: Model Sizing
Timer # 45 0.00/ 1 = 0.00 Init: Allocate Memory
Timer # 46 0.76/ 1 = 0.76 Init: Parse Model
Timer # 47 0.27/ 1 = 0.27 Init: Check for Duplicates
Timer # 48 2.85/ 1 = 2.85 Init: Compile Equations
Timer # 49 0.00/ 1 = 0.00 Init: Check Uninitialized
Timer # 50 0.00/ 1257 = 0.00 Evaluate Expression Once
Timer # 51 0.00/ 0 = 0.00 Sensitivity Analysis: LU Factorization
Timer # 52 0.00/ 0 = 0.00 Sensitivity Analysis: Gauss Elimination
Timer # 53 0.00/ 0 = 0.00 Sensitivity Analysis: Total Time
When I change the number of neurons of one function from [25,20,20,10] to [50,40,40,40], I get the following log:
----------------------------------------------------------------
APMonitor, Version 1.0.0
APMonitor Optimization Suite
----------------------------------------------------------------
Called files( 55 )
files: overrides.dbs does not exist
Run id : 2023y02m04d11h08m15.999s
COMMAND LINE ARGUMENTS
coldstart: 0
imode : 3
dbs_read : T
dbs_write: T
specs : T
rto selected
Called files( 35 )
READ info FILE FOR VARIABLE DEFINITION: gk_model0.info
SS MODEL INIT 0
Parsing model file gk_model0.apm
Read model file (sec): 1.4879
Initialize constants (sec): 0.
Determine model size (sec): 0.9460000000000002
Allocate memory (sec): 0.01529999999999987
Parse and store model (sec): 0.7256999999999998
--------- APM Model Size ------------
Each time step contains
Objects : 342
Constants : 0
Variables : 1037
Intermediates: 344
Connections : 1026
Equations : 1030
Residuals : 686
Error checking (sec): 0.2522000000000002
Compile equations (sec): 2.8817999999999997
Check for uninitialized intermediates (sec): 0.
------------------------------------------------------
Total Parse Time (sec): 6.3089
SS MODEL INIT 1
SS MODEL INIT 2
SS MODEL INIT 3
SS MODEL INIT 4
Called files( 31 )
READ info FILE FOR PROBLEM DEFINITION: gk_model0.info
Called files( 6 )
Files(6): File Read rto.t0 F
files: rto.t0 does not exist
Called files( 51 )
Read DBS File defaults.dbs
files: defaults.dbs does not exist
Called files( 51 )
Read DBS File gk_model0.dbs
files: gk_model0.dbs does not exist
Called files( 51 )
Read DBS File measurements.dbs
Called files( 51 )
Read DBS File overrides.dbs
files: overrides.dbs does not exist
Number of state variables: 1715
Number of total equations: - 1369
Number of slack variables: - 0
---------------------------------------
Degrees of freedom : 346
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: 2.84 NLPi: 53 Dpth: 0 Lvs: 3 Obj: 2.69E-01 Gap: NaN
--Integer Solution: 4.43E-01 Lowest Leaf: 2.69E-01 Gap: 1.74E-01
Iter: 2 I: 0 Tm: 0.14 NLPi: 5 Dpth: 1 Lvs: 2 Obj: 4.43E-01 Gap: 1.74E-01
Iter: 3 I: 0 Tm: 0.55 NLPi: 6 Dpth: 1 Lvs: 2 Obj: 3.79E-01 Gap: 1.74E-01
Iter: 4 I: 0 Tm: 0.96 NLPi: 12 Dpth: 1 Lvs: 3 Obj: 4.17E-01 Gap: 1.74E-01
--Integer Solution: 4.43E-01 Lowest Leaf: 4.17E-01 Gap: 2.62E-02
Iter: 5 I: 0 Tm: 0.56 NLPi: 7 Dpth: 2 Lvs: 2 Obj: 1.18E+00 Gap: 2.62E-02
--Integer Solution: 4.43E-01 Lowest Leaf: 4.17E-01 Gap: 2.62E-02
Iter: 6 I: 0 Tm: 0.36 NLPi: 4 Dpth: 2 Lvs: 1 Obj: 1.04E+00 Gap: 2.62E-02
--Integer Solution: 4.43E-01 Lowest Leaf: 5.39E-01 Gap: -9.53E-02
Iter: 7 I: 0 Tm: 0.36 NLPi: 4 Dpth: 2 Lvs: 1 Obj: 5.39E-01 Gap: -9.53E-02
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 5.816599999999999 sec
Objective : 0.4433267264972657
Successful solution
---------------------------------------------------
Called files( 2 )
Called files( 52 )
WRITE dbs FILE
Called files( 56 )
WRITE json FILE
Timer # 1 13.21/ 1 = 13.21 Total system time
Timer # 2 5.82/ 1 = 5.82 Total solve time
Timer # 3 0.02/ 189 = 0.00 Objective Calc: apm_p
Timer # 4 0.03/ 91 = 0.00 Objective Grad: apm_g
Timer # 5 0.03/ 189 = 0.00 Constraint Calc: apm_c
Timer # 6 0.00/ 0 = 0.00 Sparsity: apm_s
Timer # 7 0.00/ 0 = 0.00 1st Deriv #1: apm_a1
Timer # 8 0.00/ 91 = 0.00 1st Deriv #2: apm_a2
Timer # 9 0.90/ 1 = 0.90 Custom Init: apm_custom_init
Timer # 10 0.01/ 1 = 0.01 Mode: apm_node_res::case 0
Timer # 11 0.01/ 1 = 0.01 Mode: apm_node_res::case 1
Timer # 12 0.03/ 1 = 0.03 Mode: apm_node_res::case 2
Timer # 13 0.00/ 1 = 0.00 Mode: apm_node_res::case 3
Timer # 14 0.33/ 383 = 0.00 Mode: apm_node_res::case 4
Timer # 15 2.01/ 182 = 0.01 Mode: apm_node_res::case 5
Timer # 16 0.00/ 0 = 0.00 Mode: apm_node_res::case 6
Timer # 17 0.06/ 91 = 0.00 Base 1st Deriv: apm_jacobian
Timer # 18 0.02/ 91 = 0.00 Base 1st Deriv: apm_condensed_jacobian
Timer # 19 0.00/ 1 = 0.00 Non-zeros: apm_nnz
Timer # 20 0.00/ 0 = 0.00 Count: Division by zero
Timer # 21 0.00/ 0 = 0.00 Count: Argument of LOG10 negative
Timer # 22 0.00/ 0 = 0.00 Count: Argument of LOG negative
Timer # 23 0.00/ 0 = 0.00 Count: Argument of SQRT negative
Timer # 24 0.00/ 0 = 0.00 Count: Argument of ASIN illegal
Timer # 25 0.00/ 0 = 0.00 Count: Argument of ACOS illegal
Timer # 26 0.00/ 1 = 0.00 Extract sparsity: apm_sparsity
Timer # 27 0.00/ 13 = 0.00 Variable ordering: apm_var_order
Timer # 28 0.00/ 1 = 0.00 Condensed sparsity
Timer # 29 0.00/ 0 = 0.00 Hessian Non-zeros
Timer # 30 0.00/ 1 = 0.00 Differentials
Timer # 31 0.00/ 0 = 0.00 Hessian Calculation
Timer # 32 0.00/ 0 = 0.00 Extract Hessian
Timer # 33 0.00/ 1 = 0.00 Base 1st Deriv: apm_jac_order
Timer # 34 0.02/ 1 = 0.02 Solver Setup
Timer # 35 3.26/ 1 = 3.26 Solver Solution
Timer # 36 0.00/ 200 = 0.00 Number of Variables
Timer # 37 0.02/ 97 = 0.00 Number of Equations
Timer # 38 0.03/ 14 = 0.00 File Read/Write
Timer # 39 0.00/ 0 = 0.00 Dynamic Init A
Timer # 40 0.00/ 0 = 0.00 Dynamic Init B
Timer # 41 0.00/ 0 = 0.00 Dynamic Init C
Timer # 42 1.49/ 1 = 1.49 Init: Read APM File
Timer # 43 0.00/ 1 = 0.00 Init: Parse Constants
Timer # 44 0.95/ 1 = 0.95 Init: Model Sizing
Timer # 45 0.02/ 1 = 0.02 Init: Allocate Memory
Timer # 46 0.73/ 1 = 0.73 Init: Parse Model
Timer # 47 0.25/ 1 = 0.25 Init: Check for Duplicates
Timer # 48 2.88/ 1 = 2.88 Init: Compile Equations
Timer # 49 0.00/ 1 = 0.00 Init: Check Uninitialized
Timer # 50 -0.00/ 1732 = -0.00 Evaluate Expression Once
Timer # 51 0.00/ 0 = 0.00 Sensitivity Analysis: LU Factorization
Timer # 52 0.00/ 0 = 0.00 Sensitivity Analysis: Gauss Elimination
Timer # 53 0.00/ 0 = 0.00 Sensitivity Analysis: Total Time
Hence, a significant number of extra objects, variables, intermediates, connections, equations, and residuals are introduced.
Many thanks in advance for your replies!
The paper talks about this a little in Section 3.1.3.
The prediction functions used in both TensorFlow and Scikit-learn neural networks use linear algebra to relate the layers and neurons of the neural network to one another. Each neuron in an input layer has a specific weight that corresponds to an output neuron in the following layer. Each neuron in an output layer also has a corresponding bias value as shown in Figure 1.
Figure 1
For each neuron connection there is a weight, and for each neuron a bias and an activation function. An activation equation, such as the rectified linear or hyperbolic tangent function, is generally used to normalize the activation between 0 and 1.
As you increase the number of neurons from [25,20,20,10] to [50,40,40,40], a new weight, bias, and activation function are introduced for each new neuron connection. These objects show up as an increase in the number of equations, variables, etc. in the optimization problem.
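As a rough, back-of-the-envelope illustration (not GEKKO's exact bookkeeping, which also adds connection and intermediate objects per node), the Python sketch below counts the weights, biases, and activation equations implied by a dense feed-forward layout; the input and output sizes are placeholders, not the real model dimensions.
def ann_size(hidden, n_inputs=5, n_outputs=1):
    # one weight per connection, one bias per non-input neuron,
    # one activation equation per hidden neuron (rough approximation)
    layers = [n_inputs] + hidden + [n_outputs]
    weights = sum(a * b for a, b in zip(layers[:-1], layers[1:]))
    biases = sum(layers[1:])
    activations = sum(hidden)
    return {"weights": weights, "biases": biases, "activations": activations}

for cfg in ([25, 20, 20, 10], [50, 40, 40, 40]):
    print(cfg, ann_size(cfg))
This is only meant to show how quickly the per-neuron and per-connection objects grow; the exact counts in the APM model size report depend on how the GEKKO wrapper encodes each neuron.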
Hopefully this helps!

Reduce total parse (total system) time in GEKKO

I have an RTO problem that I want to solve for multiple simulated timesteps with some time-dependent parameters. However, I'm struggling with the run-time and noticed that the total system time is relatively large compared to the actual solve time. I was therefore trying to reduce the total parse time, since all the equations remain the same; "only" the values of some parameters change with time. A simple example below:
#parameters from simulation
demand = 100
#do RTO
from gekko import GEKKO
# first, create the model
m = GEKKO(remote=False)
# declare additional decision variables
m.u = m.Var(lb=5, ub=25)
m.v = m.Var(lb=0, ub=100)
m.w = m.Var(lb=0, ub=50)
m.b = m.Var(lb=0, ub=1, integer=True)
m.demand = m.Param(demand)
# now add the objective and the constraints
m.Minimize((1-0.8)*m.u*m.b+(1-0.9)*m.v+(1-0.7)*m.w)
m.Equation(m.u*m.b >= 10)
m.Equation(m.u*m.b + m.v + m.w == m.demand)
m.options.SOLVER=1
m.options.DIAGLEVEL = 1
m.solve()
Then I capture the results, execute them in the simulation, and move on to the next timestep. Now I could just execute all the code above again with updated parameters (let's say the demand is now 110), but this results in the aforementioned long run-time (the RTO problem needs to be built from scratch every time, while only some parameters change). So I thought the following could work:
m.demand.VALUE = 110
m.solve()
While this does work, it doesn't seem to improve the run-time (the total parse time is still relatively long). Below are the display outputs of the actual problem.
First time solving the RTO problem.
----------------------------------------------------------------
APMonitor, Version 1.0.0
APMonitor Optimization Suite
----------------------------------------------------------------
Called files( 55 )
files: overrides.dbs does not exist
Run id : 2022y11m03d13h18m21.919s
COMMAND LINE ARGUMENTS
coldstart: 0
imode : 3
dbs_read : T
dbs_write: T
specs : T
rto selected
Called files( 35 )
READ info FILE FOR VARIABLE DEFINITION: gk_model6.info
SS MODEL INIT 0
Parsing model file gk_model6.apm
Read model file (sec): 0.6602
Initialize constants (sec): 0.
Determine model size (sec): 0.4170999999999999
Allocate memory (sec): 0.
Parse and store model (sec): 0.45140000000000025
--------- APM Model Size ------------
Each time step contains
Objects : 247
Constants : 0
Variables : 752
Intermediates: 249
Connections : 741
Equations : 745
Residuals : 496
Error checking (sec): 0.17809999999999993
Compile equations (sec): 1.9933000000000003
Check for uninitialized intermediates (sec): 0.
------------------------------------------------------
Total Parse Time (sec): 3.7062
SS MODEL INIT 1
SS MODEL INIT 2
SS MODEL INIT 3
SS MODEL INIT 4
Called files( 31 )
READ info FILE FOR PROBLEM DEFINITION: gk_model6.info
Called files( 6 )
Files(6): File Read rto.t0 F
files: rto.t0 does not exist
Called files( 51 )
Read DBS File defaults.dbs
files: defaults.dbs does not exist
Called files( 51 )
Read DBS File gk_model6.dbs
files: gk_model6.dbs does not exist
Called files( 51 )
Read DBS File measurements.dbs
Called files( 51 )
Read DBS File overrides.dbs
files: overrides.dbs does not exist
Number of state variables: 1240
Number of total equations: - 989
Number of slack variables: - 0
---------------------------------------
Degrees of freedom : 251
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: 1.20 NLPi: 45 Dpth: 0 Lvs: 3 Obj: 9.86E-02 Gap: NaN
--Integer Solution: 2.32E-01 Lowest Leaf: 9.86E-02 Gap: 1.34E-01
Iter: 2 I: 0 Tm: 0.06 NLPi: 4 Dpth: 1 Lvs: 2 Obj: 2.32E-01 Gap: 1.34E-01
Iter: 3 I: 0 Tm: 0.23 NLPi: 6 Dpth: 1 Lvs: 2 Obj: 2.16E-01 Gap: 1.34E-01
Iter: 4 I: 0 Tm: 0.44 NLPi: 12 Dpth: 1 Lvs: 3 Obj: 1.60E-01 Gap: 1.34E-01
--Integer Solution: 2.32E-01 Lowest Leaf: 1.60E-01 Gap: 7.21E-02
Iter: 5 I: 0 Tm: 0.20 NLPi: 6 Dpth: 2 Lvs: 2 Obj: 1.01E+00 Gap: 7.21E-02
--Integer Solution: 2.06E-01 Lowest Leaf: 2.06E-01 Gap: 0.00E+00
Iter: 6 I: 0 Tm: 0.20 NLPi: 5 Dpth: 2 Lvs: 2 Obj: 2.06E-01 Gap: 0.00E+00
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 2.3522999999999996 sec
Objective : 0.20599966381706797
Successful solution
---------------------------------------------------
Called files( 2 )
Called files( 52 )
WRITE dbs FILE
Called files( 56 )
WRITE json FILE
Timer # 1 6.57/ 1 = 6.57 Total system time
Timer # 2 2.35/ 1 = 2.35 Total solve time
Timer # 3 0.01/ 156 = 0.00 Objective Calc: apm_p
Timer # 4 0.01/ 78 = 0.00 Objective Grad: apm_g
Timer # 5 0.01/ 156 = 0.00 Constraint Calc: apm_c
Timer # 6 0.00/ 0 = 0.00 Sparsity: apm_s
Timer # 7 0.00/ 0 = 0.00 1st Deriv #1: apm_a1
Timer # 8 0.01/ 78 = 0.00 1st Deriv #2: apm_a2
Timer # 9 0.42/ 1 = 0.42 Custom Init: apm_custom_init
Timer # 10 0.00/ 1 = 0.00 Mode: apm_node_res::case 0
Timer # 11 0.00/ 1 = 0.00 Mode: apm_node_res::case 1
Timer # 12 0.02/ 1 = 0.02 Mode: apm_node_res::case 2
Timer # 13 0.00/ 1 = 0.00 Mode: apm_node_res::case 3
Timer # 14 0.17/ 317 = 0.00 Mode: apm_node_res::case 4
Timer # 15 0.72/ 156 = 0.00 Mode: apm_node_res::case 5
Timer # 16 0.00/ 0 = 0.00 Mode: apm_node_res::case 6
Timer # 17 0.01/ 78 = 0.00 Base 1st Deriv: apm_jacobian
Timer # 18 0.00/ 78 = 0.00 Base 1st Deriv: apm_condensed_jacobian
Timer # 19 0.00/ 1 = 0.00 Non-zeros: apm_nnz
Timer # 20 0.00/ 0 = 0.00 Count: Division by zero
Timer # 21 0.00/ 0 = 0.00 Count: Argument of LOG10 negative
Timer # 22 0.00/ 0 = 0.00 Count: Argument of LOG negative
Timer # 23 0.00/ 0 = 0.00 Count: Argument of SQRT negative
Timer # 24 0.00/ 0 = 0.00 Count: Argument of ASIN illegal
Timer # 25 0.00/ 0 = 0.00 Count: Argument of ACOS illegal
Timer # 26 0.00/ 1 = 0.00 Extract sparsity: apm_sparsity
Timer # 27 0.00/ 13 = 0.00 Variable ordering: apm_var_order
Timer # 28 0.00/ 1 = 0.00 Condensed sparsity
Timer # 29 0.00/ 0 = 0.00 Hessian Non-zeros
Timer # 30 0.00/ 1 = 0.00 Differentials
Timer # 31 0.00/ 0 = 0.00 Hessian Calculation
Timer # 32 0.00/ 0 = 0.00 Extract Hessian
Timer # 33 0.00/ 1 = 0.00 Base 1st Deriv: apm_jac_order
Timer # 34 0.01/ 1 = 0.01 Solver Setup
Timer # 35 1.39/ 1 = 1.39 Solver Solution
Timer # 36 0.00/ 167 = 0.00 Number of Variables
Timer # 37 0.01/ 84 = 0.00 Number of Equations
Timer # 38 0.01/ 14 = 0.00 File Read/Write
Timer # 39 0.00/ 0 = 0.00 Dynamic Init A
Timer # 40 0.00/ 0 = 0.00 Dynamic Init B
Timer # 41 0.00/ 0 = 0.00 Dynamic Init C
Timer # 42 0.66/ 1 = 0.66 Init: Read APM File
Timer # 43 0.00/ 1 = 0.00 Init: Parse Constants
Timer # 44 0.42/ 1 = 0.42 Init: Model Sizing
Timer # 45 0.00/ 1 = 0.00 Init: Allocate Memory
Timer # 46 0.45/ 1 = 0.45 Init: Parse Model
Timer # 47 0.18/ 1 = 0.18 Init: Check for Duplicates
Timer # 48 1.99/ 1 = 1.99 Init: Compile Equations
Timer # 49 0.00/ 1 = 0.00 Init: Check Uninitialized
Timer # 50 0.01/ 1257 = 0.00 Evaluate Expression Once
Timer # 51 0.00/ 0 = 0.00 Sensitivity Analysis: LU Factorization
Timer # 52 0.00/ 0 = 0.00 Sensitivity Analysis: Gauss Elimination
Timer # 53 0.00/ 0 = 0.00 Sensitivity Analysis: Total Time
Updating one parameter and only calling m.solve() again, as shown in the simple problem above:
----------------------------------------------------------------
APMonitor, Version 1.0.0
APMonitor Optimization Suite
----------------------------------------------------------------
Called files( 55 )
Called files( 55 )
files: overrides.dbs does not exist
Run id : 2022y11m03d13h18m28.729s
COMMAND LINE ARGUMENTS
coldstart: 0
imode : 3
dbs_read : T
dbs_write: T
specs : T
rto selected
Called files( 35 )
READ info FILE FOR VARIABLE DEFINITION: gk_model6.info
SS MODEL INIT 0
Parsing model file gk_model6.apm
Read model file (sec): 0.6901
Initialize constants (sec): 0.
Determine model size (sec): 0.4546999999999999
Allocate memory (sec): 0.
Parse and store model (sec): 0.2824000000000002
--------- APM Model Size ------------
Each time step contains
Objects : 247
Constants : 0
Variables : 752
Intermediates: 249
Connections : 741
Equations : 745
Residuals : 496
Error checking (sec): 0.16720000000000002
Compile equations (sec): 2.0142999999999995
Check for uninitialized intermediates (sec): 0.
------------------------------------------------------
Total Parse Time (sec): 3.6097
SS MODEL INIT 1
SS MODEL INIT 2
SS MODEL INIT 3
SS MODEL INIT 4
Called files( 31 )
READ info FILE FOR PROBLEM DEFINITION: gk_model6.info
Called files( 6 )
Files(6): File Read rto.t0 T
Called files( 51 )
Read DBS File defaults.dbs
files: defaults.dbs does not exist
Called files( 51 )
Read DBS File gk_model6.dbs
Called files( 51 )
Read DBS File measurements.dbs
Called files( 51 )
Read DBS File overrides.dbs
files: overrides.dbs does not exist
Number of state variables: 1240
Number of total equations: - 989
Number of slack variables: - 0
---------------------------------------
Degrees of freedom : 251
----------------------------------------------
Steady State Optimization with APOPT Solver
----------------------------------------------
Iter: 1 I: 0 Tm: 0.26 NLPi: 7 Dpth: 0 Lvs: 3 Obj: 9.35E-02 Gap: NaN
--Integer Solution: 1.21E-01 Lowest Leaf: 9.35E-02 Gap: 2.71E-02
Iter: 2 I: 0 Tm: 0.22 NLPi: 5 Dpth: 1 Lvs: 2 Obj: 1.21E-01 Gap: 2.71E-02
--Integer Solution: 1.21E-01 Lowest Leaf: 9.35E-02 Gap: 2.71E-02
Iter: 3 I: 0 Tm: 0.32 NLPi: 10 Dpth: 1 Lvs: 1 Obj: 1.03E+00 Gap: 2.71E-02
Iter: 4 I: 0 Tm: 0.25 NLPi: 8 Dpth: 1 Lvs: 1 Obj: 1.20E-01 Gap: 2.71E-02
--Integer Solution: 1.21E-01 Lowest Leaf: 1.86E-01 Gap: -6.58E-02
Iter: 5 I: 0 Tm: 0.37 NLPi: 15 Dpth: 2 Lvs: 1 Obj: 1.86E-01 Gap: -6.58E-02
Successful solution
---------------------------------------------------
Solver : APOPT (v1.0)
Solution time : 1.4365000000000006 sec
Objective : 0.12065435497282542
Successful solution
---------------------------------------------------
Called files( 2 )
Called files( 52 )
WRITE dbs FILE
Called files( 56 )
WRITE json FILE
Timer # 1 5.64/ 1 = 5.64 Total system time
Timer # 2 1.44/ 1 = 1.44 Total solve time
Timer # 3 0.00/ 91 = 0.00 Objective Calc: apm_p
Timer # 4 0.00/ 45 = 0.00 Objective Grad: apm_g
Timer # 5 0.01/ 91 = 0.00 Constraint Calc: apm_c
Timer # 6 0.00/ 0 = 0.00 Sparsity: apm_s
Timer # 7 0.00/ 0 = 0.00 1st Deriv #1: apm_a1
Timer # 8 0.00/ 45 = 0.00 1st Deriv #2: apm_a2
Timer # 9 0.44/ 1 = 0.44 Custom Init: apm_custom_init
Timer # 10 0.00/ 1 = 0.00 Mode: apm_node_res::case 0
Timer # 11 0.00/ 1 = 0.00 Mode: apm_node_res::case 1
Timer # 12 0.01/ 1 = 0.01 Mode: apm_node_res::case 2
Timer # 13 0.00/ 1 = 0.00 Mode: apm_node_res::case 3
Timer # 14 0.10/ 187 = 0.00 Mode: apm_node_res::case 4
Timer # 15 0.46/ 90 = 0.01 Mode: apm_node_res::case 5
Timer # 16 0.00/ 0 = 0.00 Mode: apm_node_res::case 6
Timer # 17 0.00/ 45 = 0.00 Base 1st Deriv: apm_jacobian
Timer # 18 0.00/ 45 = 0.00 Base 1st Deriv: apm_condensed_jacobian
Timer # 19 0.00/ 1 = 0.00 Non-zeros: apm_nnz
Timer # 20 0.00/ 0 = 0.00 Count: Division by zero
Timer # 21 0.00/ 0 = 0.00 Count: Argument of LOG10 negative
Timer # 22 0.00/ 0 = 0.00 Count: Argument of LOG negative
Timer # 23 0.00/ 0 = 0.00 Count: Argument of SQRT negative
Timer # 24 0.00/ 0 = 0.00 Count: Argument of ASIN illegal
Timer # 25 0.00/ 0 = 0.00 Count: Argument of ACOS illegal
Timer # 26 0.00/ 1 = 0.00 Extract sparsity: apm_sparsity
Timer # 27 0.00/ 13 = 0.00 Variable ordering: apm_var_order
Timer # 28 0.00/ 1 = 0.00 Condensed sparsity
Timer # 29 0.00/ 0 = 0.00 Hessian Non-zeros
Timer # 30 0.01/ 1 = 0.01 Differentials
Timer # 31 0.00/ 0 = 0.00 Hessian Calculation
Timer # 32 0.00/ 0 = 0.00 Extract Hessian
Timer # 33 0.00/ 1 = 0.00 Base 1st Deriv: apm_jac_order
Timer # 34 0.01/ 1 = 0.01 Solver Setup
Timer # 35 0.84/ 1 = 0.84 Solver Solution
Timer # 36 0.00/ 102 = 0.00 Number of Variables
Timer # 37 0.00/ 51 = 0.00 Number of Equations
Timer # 38 0.12/ 14 = 0.01 File Read/Write
Timer # 39 0.00/ 0 = 0.00 Dynamic Init A
Timer # 40 0.00/ 0 = 0.00 Dynamic Init B
Timer # 41 0.00/ 0 = 0.00 Dynamic Init C
Timer # 42 0.69/ 1 = 0.69 Init: Read APM File
Timer # 43 0.00/ 1 = 0.00 Init: Parse Constants
Timer # 44 0.45/ 1 = 0.45 Init: Model Sizing
Timer # 45 0.00/ 1 = 0.00 Init: Allocate Memory
Timer # 46 0.28/ 1 = 0.28 Init: Parse Model
Timer # 47 0.17/ 1 = 0.17 Init: Check for Duplicates
Timer # 48 2.01/ 1 = 2.01 Init: Compile Equations
Timer # 49 0.00/ 1 = 0.00 Init: Check Uninitialized
Timer # 50 0.00/ 505 = 0.00 Evaluate Expression Once
Timer # 51 0.00/ 0 = 0.00 Sensitivity Analysis: LU Factorization
Timer # 52 0.00/ 0 = 0.00 Sensitivity Analysis: Gauss Elimination
Timer # 53 0.00/ 0 = 0.00 Sensitivity Analysis: Total Time
Many thanks in advance for your ideas.
Here are a few ideas to improve the compile time speed:
If the problem is a simulation, it can be run with IMODE=7 to compile the model once and run through all the timesteps sequentially. This is often the fastest option for simulation.
If the problem has degrees of freedom (find the optimal parameters to minimize an objective), it is also possible to set it up with IMODE=6, which compiles the model once and solves all time steps simultaneously (see the sketch after this list).
The model re-compiles every time m.solve() is called. Keeping a compiled version of the model is built into the REPLAY parameter, but it has little documentation and is built for exactly reenacting a sequence from historical data. If you'd like the option to keep a compiled version of the model for the next run, please add it as a feature request in GitHub.
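A minimal sketch of the first two options, assuming m and m.demand are built as in the question and that the time horizon and demand profile below (both placeholders) are acceptable:
import numpy as np
# Sketch only: m and m.demand are assumed to exist as in the question.
# IMODE=7 compiles once and simulates all timesteps sequentially;
# IMODE=6 compiles once and optimizes all timesteps simultaneously.
m.time = np.linspace(0, 10, 11)                     # placeholder horizon: 11 timesteps
m.demand.value = list(100 + 10 * np.arange(11))     # placeholder time-varying demand profile
m.options.IMODE = 7   # use 6 instead if there are degrees of freedom to optimize
m.solve(disp=False)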
Using the simple model shows that the model re-compile time doesn't change significantly between runs with different values of demand.
from gekko import GEKKO
from numpy import random
import numpy as np
import matplotlib.pyplot as plt
import time
# first, create the model
m = GEKKO(remote=False)
# declare additional decision variables
m.u = m.Var(lb=5, ub=25)
m.v = m.Var(lb=0, ub=100)
m.w = m.Var(lb=0, ub=50)
m.b = m.Var(lb=0, ub=1, integer=True)
m.demand = m.Param()
# now add the objective and the constraints
m.Minimize((1-0.8)*m.u*m.b+(1-0.9)*m.v+(1-0.7)*m.w)
m.Equation(m.u*m.b >= 10)
m.Equation(m.u*m.b + m.v + m.w == m.demand)
m.options.SOLVER=1
m.options.DIAGLEVEL = 0
tt = [] # total time
ts = [] # solve time
for i in range(101):
    start = time.time()
    m.demand.value = 50+random.rand()*50
    m.solve(disp=False)
    tt.append(time.time()-start)
    ts.append(m.options.SOLVETIME)
plt.figure(figsize=(8,4))
plt.plot(tt,label='Total Time')
plt.plot(ts,label='Solve Time')
plt.plot(np.array(tt)-np.array(ts),label='Compile Time')
plt.ylim([0,0.03]); plt.grid(); plt.ylabel('Time (sec)')
plt.savefig('timing.png',dpi=300)
plt.legend()
plt.show()
The compile and solve times are very fast for this simple problem. If you can post the full RTO problem, we can give more specific suggestions to improve solve time.

D3 Filter Issue

I am trying to filter my data list using D3. What I am trying to do is filter my data based on date I specify and threshold value for precipitation.
Here is my code as
$(function() {
  $("#datepicker").datepicker();
  $("#datepicker").on("change", function() {
    //var currentDate = $( "#datepicker" ).datepicker( "getDate" )/1000;
    //console.log(currentDate)
  });
});

function GenerateReport() {
  d3.csv("/DataTest.csv", function(data) {
    var startdate = $("#datepicker").datepicker("getDate") / 1000;
    var enddate = startdate + 24*60*60;
    var data_Date = d3.values(data.filter(function(d) {
      return d["Date"] >= startdate && d["Date"] <= enddate;
    }));
    var x = document.getElementById("threshold").value;
    console.log(data_Date);
    var data_Date_Threshold = data_Date.filter(function(d) {
      return d.Precipitation > x;
    });
My data set looks like
ID Date Prcip Flow Stage
1010 1522281000 0 0 0
1010 1522281600 0 0 0
1010 1522285200 10 0 0
1010 1522303200 12 200 1.2
1010 1522364400 6 300 2
1010 1522371600 4 400 2.5
1010 1522364400 6 500 2.8
1010 1522371600 4 600 3.5
2120 1522281000 0 0 0
2120 1522281600 0 0 0
2120 1522285200 10 100 1
2120 1522303200 12 1000 2
2120 1522364400 6 2000 3
2120 1522371600 4 2500 3.2
2290 1522281000 0 0 0
2290 1522281600 4 0 0
2290 1522285200 5 200 1
2290 1522303200 10 800 1.5
2290 1522364400 6 1500 3
2290 1522371600 0 1000 2
6440 1522281000 0 0 0
6440 1522281600 4 0 0
6440 1522285200 5 200 0.5
6440 1522303200 10 800 1
6440 1522364400 6 1500 2
6440 1522371600 0 100 1.4
When I use the filter function, I have some problems.
What I have found is that when I use x = 2 to filter the precipitation value, it does not catch precipitation = 10 or 12. However, when I use x = 1, it works fine. I am guessing that it only compares the first character (e.g., if x = 2, it treats precipitation = 10 or 12 as less than 2 because it only looks at the "1" in 10 and 12). Has anyone had the same issue? Can anyone help me solve this problem?
Thanks.
You are comparing strings. This comparison is therefore done lexicographically.
In order to accomplish what you want, you need to first convert these strings to numbers:
var x = Number(document.getElementById("threshold").value)
var data_Date_Threshold = data_Date.filter(function(d) {return Number(d.Precipitation) > x});
Alternatively, floats:
var x = parseFloat(document.getElementById("threshold").value)
var data_Date_Threshold = data_Date.filter(function(d) {return parseFloat(d.Precipitation) > x});

I need simple formula for data transformation according to this pattern

I'm looking for an algorithm that has two input values and one output value and follows this pattern:
Input_A: 10 (When INPUT_B is increased from 0 to 1 in very small steps, it should reach the value '1' 100/10=10 times.)
Input_B => Output
0.025 => 0.25
...
0.05 => 0.50
...
0.075 => 0.75
...
0.1 => 1.00
...
0.125 => 0.25
...
0.15 => 0.50
...
0.175 => 0.75
...
0.2 => 1.00
....
0.9 => 1.00
....
0.95 => 0.50
...
Input_A: 20 (When INPUT_B is increased from 0 to 1 in very small steps, it should reach the value '1' 100/20=5 times.)
Input_B => Output
0.025 => 0.50
...
0.05 => 1.00
...
0.075 => 0.50
...
0.1 => 1.00
...
0.125 => 0.50
...
0.15 => 1.00
...
0.175 => 0.50
...
0.2 => 1.00
....
0.9 => 1.00
....
0.9125 => 0.25
...
0.925 => 0.50
...
0.95 => 1.00
...
I think I managed to create an algorithm that follows the first pattern. But I couldn't find one that follows both.
myAlgorithm(Input_A, Input_B) {
    return (Input_B && Input_B % 0.1 == 0) ? 1 : Input_B % 0.1 * Input_A;
}
It seems you need something like this:
A10 = A * 10 //0.175 * 10 = 1.75
AInt = (Int)A10 //integer part = 1
AFrac = A10 - AInt //fractional part = 0.75
Output = AFrac? AFrac: 1.0 ; //extra case of zero fractional part
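A minimal Python sketch of that idea, written in terms of the original Input_A and Input_B so it covers both the Input_A = 10 and Input_A = 20 patterns:
import math

def my_algorithm(input_a, input_b):
    # scale, keep the fractional part, and map an exact integer (fraction 0) to 1.0
    scaled = input_a * input_b
    frac = scaled - math.floor(scaled)
    return frac if frac != 0 else 1.0

# spot checks against the patterns in the question (floating-point rounding means
# the results are only approximately 0.25 / 0.50 / 0.75 / 1.00)
print(my_algorithm(10, 0.175))   # ~0.75
print(my_algorithm(10, 0.1))     # ~1.0
print(my_algorithm(20, 0.175))   # ~0.5
print(my_algorithm(20, 0.9125))  # ~0.25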

For loop for computing two vectors in R

Suppose I have a genotype dataset: geno
FID rs1 rs2 rs3
1 1 0 2
2 1 1 1
3 0 1 1
4 0 1 0
5 0 0 2
Another dataset is: coed
rs1 rs2 rs3
0.6 0.2 0.3
Do the following code:
geno$rs1 <- geno$rs1 * coed$rs1
geno$rs2 <- geno$rs2 * coed$rs2
geno$rs3 <- geno$rs3 * coed$rs3
sum3 <- rowSums(geno[,c(2:4)])
c <- cbind(geno,sum3)
I will get the output I want:
FID rs1 rs2 rs3 sum3
1 0.6 0 0.6 1.2
2 0.6 0.2 0.3 1.1
3 0 0.2 0.3 0.5
4 0 0.2 0 0.2
5 0 0 0.6 0.6
But I have thousands of SNPs, so I tried to build the for loop below:
snp <- names(geno)[2:4]
geno.new <- numeric(0)
for (i in snp){
geno.new[i] = geno1[i] * coed[i]
}
The result is not what I expected:
$rs1
[1] 0.6 0.6 0.0 0.0 0.0
$rs2
[1] 0.0 0.2 0.2 0.2 0.0
$rs3
[1] 0.6 0.3 0.3 0.0 0.6
Could anyone help me improve this?
Thanks
I did find the solution; see the code below:
## read datasets
geno <- read.table("Genotype.csv", header=T, sep=",")
dim(geno)
coed <- read.table("beta.csv", header=T, sep=",")
## define the SNP names
snp <- names(geno)[2:4]
## build the for loop
for (i in snp){
  geno[i] <- geno[i] * coed[i]
}
## calculate the row sums
sum <- rowSums(geno[,c(2:4)])
## combine the results
all <- cbind(geno,sum)
