How to subtract or add time series data of a CombiTimeTable in Modelica?

I have a text file that is used in a CombiTimeTable. The text file looks as follows:
#1
double tab1(5,2) # comment line
0 0
1 1
2 4
3 9
4 16
The first column is time and the second one is my data. My goal is to add each datum to the previous one, starting from the second row.
model example
Modelica.Blocks.Sources.CombiTimeTable Tsink(fileName = "C:/Tin.txt", tableName = "tab1", tableOnFile = true, timeScale = 60) annotation(
Placement(visible = true, transformation(origin = {-70, 30}, extent = {{-10, -10}, {10, 10}}, rotation = 0)));
equation
end example;
Tsink.y[1] is column 2 of the table, but I do not know how to access it or how to implement an operation on its values. Thanks for your help.

You can't use the blocks of the ModelicaStandardTables here, which are only meant for interpolation and hence do not expose the sample points to the Modelica model. However, you can use the Modelica library ExternData to easily read the array from a CSV file and do the required operations on the read data. For example,
model Example "Example model to read array and operate on it"
parameter ExternData.CSVFile dataSource(
fileName="C:/Tin.csv") "Data source"
annotation(Placement(transformation(extent={{-60,60},{-40,80}})));
parameter Integer n = 5 "Number of rows (must be known)";
parameter Real a[n,2] = dataSource.getRealArray2D(n, 2) "Array from CSV file";
parameter Real y[n - 1] = {a[i,2] + a[i + 1,2] for i in 1:n - 1} "Vector";
annotation(uses(ExternData(version="2.6.1")));
end Example;
where Tin.csv is a CSV file with comma as delimiter
0,0
1,1
2,4
3,9
4,16
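The array comprehension above forms y[i] = a[i,2] + a[i+1,2] for consecutive rows. As a quick sanity check of that arithmetic on the table data (a Python sketch, separate from the Modelica model):

```python
# Pairwise sums of consecutive second-column values, mirroring the
# Modelica comprehension {a[i,2] + a[i+1,2] for i in 1:n-1}.
a = [[0, 0], [1, 1], [2, 4], [3, 9], [4, 16]]  # rows of tab1 / Tin.csv
n = len(a)
y = [a[i][1] + a[i + 1][1] for i in range(n - 1)]
print(y)  # [1, 5, 13, 25]
```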

Related

R matrix transpose (2 into 1 column) many separate files and merge into one .csv

I have ~217 identical .csv files in a folder, each represents one individual, and has 2 columns (header: x, y) and 180 rows of data.
I need to transpose these into a single row (new headers: x1:x180, continued into y1:y180), create an ID column with an abbreviated file name, and merge the separate files into one data frame of 217 rows, with an ID column and 360 columns of data.
Here's example data from separate .csv files in the same folder, truncated to the first 6 rows:
#dataA_observer_date
x y
1 -2.100343 -0.2601952
2 -2.128320 -0.2805480
3 -2.152010 -0.3000733
4 -2.168258 -0.3170724
5 -2.174368 -0.3305717
6 -2.168887 -0.3403942
#dataB_observer_date
x y
1 0.7577988 -0.1212715
2 0.7256039 -0.1344822
3 0.6933261 -0.1496408
4 0.6638619 -0.1657460
5 0.6409363 -0.1815894
6 0.6281463 -0.1960087
I need the data to look like this, in one file:
head(dataA)
ID [x1] [x2] [x3] [x4] [x5] [x6] [y1] [y2] [y3] [y4] [y5] [y6]
dataA -2.100343 -2.12832 -2.15201 -2.168258 -2.174368 -2.168887 -0.2601952 -0.280548 -0.3000733 -0.3170724 -0.3305717 -0.3403942
dataB
dataC...
...data217
For transposing, I tried the following, which results in a different column order since it works row by row through the 180 rows:
t_Image1 <- matrix(t(Image1Coords), nrow = 1)
x1 y1 x2 y2...
I have the file names from the folder in a list using other help from https://stackoverflow.com/questions/31039269/combine-and-transpose-many-fixed-format-dataset-files-quickly
filenames <- list.files(path = "C:/Users/path_to_folder", pattern = "*.csv", full.names = FALSE)
require(data.table)
data_list <- lapply(filenames,read.csv)
But I can't get it to come together. So far, with help from https://stackoverflow.com/questions/21530672/in-r-loop-through-matrix-files-transpose-and-save-with-new-name and several other places, I have tried to just transpose the files and resave them, to be combined in another step. But the exported file is hideous: the matrix transpose into one row retains quotes and puts 2 data points in one cell, and I'm not sure what it's doing as headers, but it's all in the first cell:
library(readr)
for (i in filenames) {
mat <- matrix(t(read_table(i, col_names = TRUE, skip_empty_rows = TRUE)), nrow = 1)
filename <- paste0("transposed_", i)
mat$ID <- tools::file_path_sans_ext(basename(filename))
write.table(mat, file = filename)
}
I have not addressed shortening the file names and making an ID column yet.
Any help/advice would be greatly appreciated.
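For the shape of the whole operation — flatten each file's two columns into one wide row, derive the ID from the file name, then stack the rows — here is a language-agnostic sketch (written in Python only to show the logic; the names and the two-row data are illustrative, not the real 180-row files):

```python
# Sketch of the transpose-and-merge logic (illustrative data, not the real files).
def flatten(name, rows):
    """Turn [(x, y), ...] rows into one wide record: ID, x1..xn, y1..yn."""
    xs = [x for x, _ in rows]
    ys = [y for _, y in rows]
    return [name] + xs + ys

files = {  # file name (sans extension) -> its rows, truncated to 2 per file
    "dataA_observer_date": [(-2.100343, -0.2601952), (-2.128320, -0.2805480)],
    "dataB_observer_date": [(0.7577988, -0.1212715), (0.7256039, -0.1344822)],
}
# Abbreviate the ID to the part before the first underscore, then stack.
merged = [flatten(name.split("_")[0], rows) for name, rows in sorted(files.items())]
print(merged[0])  # ['dataA', -2.100343, -2.12832, -0.2601952, -0.280548]
```

The same flatten-then-stack idea in R would be one function applied over `filenames`, with the results bound together by `rbind` or `data.table::rbindlist`.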

How to sort numbers in a file (highest first)

I have a file that contains a lot of information, and I want to sort certain lines by the numbers they contain.
The file holds records like the 7 lines below (the middle line is blank):
GRELUP.C.3a.or:ndiff_c_fail_a_same_well = SELECT -inside GRELUP.C.3a.or:ndiff_c_fail_a GRELUP.C.3a.or:_EPTMPL312066 -not
generate layer GRELUP.C.3a.or:ndiff_c_fail_a_same_well, TYP = P, HPN = 0, FPN = 0, HEN = 0, FEN = 0
Time: cpu=0.00/8818.64 real=0.30/1875.23 Memory: 160.81/245.20/245.20
GRELUP.C.3a.or:ndiff_c_fail_a = SELECT -inside GRELUP.C.3a.or:ndiff_c_eg GRELUP.C.3a.or:well_cont_a_sized_a -not
generate layer GRELUP.C.3a.or:ndiff_c_fail_a, TYP = P, HPN = 0, FPN = 0, HEN = 0, FEN = 0
Time: cpu=0.00/8818.64 real=1.10/1875.23 Memory: 180.84/252.29/252.29
The lines I want returned are below:
GRELUP.C.3a.or:ndiff_c_fail_a real=1.10/1875.23
GRELUP.C.3a.or:ndiff_c_fail_a_same_well real=0.30/1875.23
In other words, the records should be sorted so that the highest number after "real=" comes first, and each output line combines the layer name (the words after "generate layer") with that "real=" value.
I suggest doing this in several stages, as it is so much easier to take things in limited steps:
Split the data into records.
Convert each record into a reduced form that just has the information you want.
Sort now that you can easily determine what to sort by.
(Optional, depending on how you do step 2) Extract the information to print.
If your data is small enough to fit into memory, the first step can be done with:
proc splitIntoRecords {data} {
# U+001E is the official ASCII record separator; it's not used much!
regsub -all {\n{2,}} $data \u001e data
return [split $data \u001e]
}
I'm not quite so sure about the conversion step; this might work (on a single record; I'll lift to the collection with lmap later):
proc convertRecord {record} {
# We extract the parts we want to print and the part we want to sort by
regexp {(^\S+).*(real=([^\s/]+)/\S+)} $record -> name time val
return [list "$name $time" $val]
}
Once that's done, we can lsort -real -decreasing with -index 1 to select the collation key (the $val we extracted above), and printing is now trivial:
set records [lmap r [splitIntoRecords $data] {convertRecord $r}]
foreach r [lsort -real -decreasing -index 1 $records] {
puts [lindex $r 0]
}

Parsing a large table into smaller tables

I'm attempting to take a table that contains numerous nested tables and place its entries in an order based on Y-coordinate and object-type values. I want to sort this "master" table by Y-location first, then by type (text or lines).
Right now, all I can think of doing is to build two tables holding the objects above and below point 200 on the Y-axis, and then split each of those by object type: lines and text.
I cannot seem to get beyond the point in my code where the same for loop is repeated for each table. I do this to preserve a "top to bottom" order within the objects of each type. Ideally, I wish to maintain the following order in my table(s) (and hopefully place them into the larger table for use):
< 201 for text
< 201 for lines
200 for text
200 for lines
Here is what I have so far (where objTable is my master table containing all of the numerous objects, where each of those is a table of their own):
local offset = 0
local lowerObjTbl, higherObjTbl, lowerLineTbl, higherLineTbl = {}, {}, {}, {}
for objKey, object in pairs(objTable) do
if tonumber(object.y) < 201 and object.object ~= "line" then
offset = offset + object.offset
table.insert(lowerObjTbl, #lowerObjTbl + 1, object)
end
end
for objKey, object in pairs(objTable) do
if tonumber(object.y) < 201 and object.object == "line" then
offset = offset + object.offset
table.insert(lowerObjTbl, #lowerObjTbl + 1, object)
end
end
for objKey, object in pairs(objTable) do
if tonumber(object.y) > 200 and object.object ~= "line" then
offset = offset + object.offset
table.insert(higherObjTbl, #higherObjTbl + 1, object)
end
end
for objKey, object in pairs(objTable) do
if tonumber(object.y) > 200 and object.object == "line" then
offset = offset + object.offset
table.insert(higherObjTbl, #higherObjTbl + 1, object)
end
end
Ideally, I would like to compact this into a single loop (or sort) that, no matter what type of object it is or where it sits on the Y-axis, places the objects in order of Y-coordinate first (lowest to highest), with text before lines.
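The four loops can collapse into a single sort with a composite key: Y-coordinate first, then a type rank that puts text before lines. A sketch of that idea (shown in Python for brevity; the same comparator translates directly to Lua's table.sort with a custom less-than function, and the field names are assumed from the code above):

```python
# Order objects by Y (ascending), breaking ties with text before lines.
objects = [
    {"y": "250", "object": "line"},
    {"y": "150", "object": "text"},
    {"y": "250", "object": "text"},
    {"y": "150", "object": "line"},
]
ordered = sorted(objects, key=lambda o: (float(o["y"]), 0 if o["object"] != "line" else 1))
print([(o["y"], o["object"]) for o in ordered])
# [('150', 'text'), ('150', 'line'), ('250', 'text'), ('250', 'line')]
```

In Lua this would be `table.sort(objTable, cmp)` where `cmp` compares `tonumber(a.y)` first and the type rank second.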

write cell array into text file as two column data

I have two different variables which are stored as cell arrays. I try to open a text file and store these variables as two-column arrays. Below is my code. I used \t to separate the x and y data, but in the output file, all the x data is written first, followed by the y data. How can I obtain a two-column array in the text file?
for j=1:size(data1,2)
file1=['dir\' file(j,1).name];
f1{j}=fopen(file1,'a+')
fprintf(f1{j},'%7.3f\t%20.10f\n',x{1,j}',y{1,j});
fclose(f1{j});
end
Thanks in advance!
You can use dlmwrite as well to accomplish this for numeric data:
x = [1;2;3]; y = [4;5;6]; % two column vectors
dlmwrite('foo.dat',[x y],'Delimiter','\t')
This produces the output:
1 4
2 5
3 6
Use a MATLAB table if you have R2013b or beyond:
data1 = {'a','b','c'}'
data2 = {1, 2, 3}'
t = table(data1, data2)
writetable(t, 'data.csv')
More info in the MATLAB documentation for table and writetable.

How to make a new structure from an existing one in MATLAB?

I have a structure array stations with fields name and code.
For example:
stations = struct(...
'name',{'a','b','c','d'},...
'code',{[0 0],[0 1],[1 0],[1 1]})
(I will change this structure later, adding new stations with their own name and code, etc.)
I want to make a new structure sessions, which will also have fields name and code, but whose values are combinations of two stations.
For example:
sessions = struct(...
'name',{'ab','ac','ad','bc','bd','cd'},...
'code',{[0 0 0 1],[0 0 1 0],[0 0 1 1],[0 1 1 0],[0 1 1 1],[1 0 1 1]}).
I'm trying something like:
for i=1:numberOfStations-1
for j=i+1:numberOfStations
strcat(stations(i).name,stations(j).name);
cat(2,stations(i).code,stations(j).code);
end
end
but I don't know where to put those values.
The struct you have is a struct array so you access each element like:
stations(1)
ans =
name: 'a'
code: [0 0]
Then for a specific element and member
stations(2).name
ans =
b
If you want to add to the struct, you can do the following:
stations(end+1) = struct('name','hi','code',[1 1]);
If you want to merge a new array of structures into your current one:
% current struct array, stations
% new data, new_station_data
for ii=1:length(new_station_data)
stations(end+1) = new_station_data(ii);
end
Hope this helps!
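As for where to put those values: each pair of stations produces one entry of the new array, so the combined name and code belong at sessions(k) for a running index k incremented inside the two loops. The pairwise logic, sketched in Python (itertools.combinations yields exactly the i &lt; j pairs of the nested loops; this is a sketch of the idea, not MATLAB code):

```python
# Pairwise combinations of stations: concatenate both names and both codes.
from itertools import combinations

stations = [("a", [0, 0]), ("b", [0, 1]), ("c", [1, 0]), ("d", [1, 1])]
sessions = [(n1 + n2, c1 + c2) for (n1, c1), (n2, c2) in combinations(stations, 2)]
print([s[0] for s in sessions])  # ['ab', 'ac', 'ad', 'bc', 'bd', 'cd']
print(sessions[0][1])            # [0, 0, 0, 1]
```

In MATLAB terms: keep `k = k + 1` inside the double loop and assign `sessions(k).name = strcat(...)` and `sessions(k).code = cat(2, ...)`.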