I have a time series dataset of accelerometry values with many sub-second measurements, but the actual number of sub-second samples recorded per second is variable.
So I would be starting with something that looks like this:
Date time    Dec sec    Acc X
1            .00        0.5
1            .25        0.5
1            .50        0.6
1            .75        0.5
2            .00        0.6
2            .40        0.5
2            .80        0.5
3            .00        0.5
3            .50        0.5
4            .00        0.6
4            .25        0.5
4            .50        0.5
4            .75        0.5
And I'm trying to convert it to wide format, where each row is a second and the columns hold the sub-second observations recorded within that second.
sub1    sub2    sub3    sub4
.5      .5      .6      .5
.6      .5      .5      NaN
.5      .5      NaN     NaN
.6      .5      .5      .5
In code this would look like:
%Preallocate some space
Dpts_observations = NaN(13,3);
%These are the "seconds" numbers
Dpts_observations(:,1)=[1 1 1 1...
2 2 2...
3 3...
4 4 4 4];
%These are the "decimal seconds"
Dpts_observations(:,2) = [0.00 0.25 0.50 0.75...
0.00 0.33 0.66...
0.00 0.50 ...
0.00 0.25 0.50 0.75];
%Here are the actual acceleration values
Dpts_observations(:,3) = [0.5 0.5 0.5 0.5...
0.6 0.5 0.5...
0.4 0.5...
0.5 0.5 0.6 0.4];
%I have this in a separate file: summary data that helps me determine the row
%indexes corresponding to sub-seconds that belong to the same second, and I
%use it to manually extract from long form to wide form.
%Create table to hold indexing information
Seconds = [1 2 3 4];
Obs_per_sec = [4 3 2 4];
Start_index = [1 5 8 10];
End_index = [4 7 9 13];
Dpts_attributes = table(Seconds, Obs_per_sec, Start_index, End_index);
%Preallocate new array
Acc_X = NaN(4,4);
%Loop through seconds
for i=1:max(size(Dpts_attributes))
Acc_X(i, 1:Dpts_attributes.Obs_per_sec(i))=Dpts_observations(Dpts_attributes.Start_index(i):Dpts_attributes.End_index(i),3);
end
Now this is working, but it's very slow. In reality I have a huge data set consisting of millions of seconds, and I'm hoping there might be a better solution than the one I currently have. My data is all numeric to try to make everything as fast as possible.
Thank you!
I noticed that rand(x), where x is an integer, gives me an array of random floating-point numbers. I want to know how I can generate an array of random floats within a certain range. I tried using a range as follows:
rand(.4:.6, 5, 5)
And I get:
0.4 0.4 0.4 0.4 0.4
0.4 0.4 0.4 0.4 0.4
0.4 0.4 0.4 0.4 0.4
0.4 0.4 0.4 0.4 0.4
0.4 0.4 0.4 0.4 0.4
How can I get values spread across the range instead of just the lowest number in the range?
Perhaps a bit more elegant: since you actually want to sample from a uniform distribution, you can use the Distributions package:
julia> using Distributions
julia> rand(Uniform(0.4,0.6),5,5)
5×5 Array{Float64,2}:
0.547602 0.513855 0.414453 0.511282 0.550517
0.575946 0.520085 0.564056 0.478139 0.48139
0.409698 0.596125 0.477438 0.53572 0.445147
0.567152 0.585673 0.53824 0.597792 0.594287
0.549916 0.56659 0.502528 0.550121 0.554276
The same method then applies to sampling from other well-known or user-defined distributions (just give the distribution as the first argument to rand()).
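For instance (an illustrative sketch of that point, not part of the original answer; the sampled values are random, so no output is shown), a standard normal distribution works the same way:
julia> rand(Normal(0.0, 1.0), 5, 5)   # Normal(mean, std) from Distributions, same call shape as above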
You need a step parameter:
rand(.4:.1:.6, 5, 5)
The .1 provides a step for your range, which is necessary for floating-point ranges (it's only optional when incrementing by 1). The issue is that the step is assumed to be 1 regardless of the implied precision, so .4:.6 contains only 0.4. If you need a more precise increment, do the following:
rand(.4:.0001:.6, 5, 5)
This will give you a result that looks similar to:
0.4587 0.557 0.586 0.4541 0.4686
0.4545 0.4789 0.4921 0.4451 0.4212
0.4373 0.5056 0.4229 0.5167 0.5504
0.5494 0.4068 0.5316 0.4378 0.5495
0.4368 0.4384 0.5265 0.5995 0.5231
You can do it with
julia> map(x->0.4+x*(0.6-0.4),rand(5,5))
5×5 Array{Float64,2}:
0.455445 0.475007 0.518734 0.463064 0.400925
0.509436 0.527338 0.566976 0.482812 0.501817
0.405967 0.563425 0.574607 0.502343 0.483075
0.50317 0.482894 0.54584 0.594157 0.528844
0.50418 0.515788 0.5554 0.580199 0.505396
The general rule is
julia> map( x -> start + x * (stop - start), rand(5,5) )
where start is 0.4 and stop is 0.6
You can even generate a six-sided die this way by having the mapped value range from 1 to 7, that is 1 < x < 7, since the probability of hitting exactly 1.0 or 7.0 is zero:
julia> map(x->Integer(floor(1+x*(7-1))),rand(5,5))
5×5 Array{Int64,2}:
2 6 6 3 2
3 1 3 1 6
5 4 6 1 5
3 6 5 5 3
3 4 3 5 4
or you can use
julia> rand(1:6,5,5)
5×5 Array{Int64,2}:
3 6 3 5 5
2 1 3 3 3
1 5 4 1 5
5 5 5 5 1
3 2 1 5 6
Just another simple solution (using vectorized operations)
0.2 .* rand(5,5) .+ 0.4
And if efficiency matters...
@time 0.2 .* rand(10000, 10000) .+ 0.4
>> 0.798906 seconds (4 allocations: 1.490 GiB, 5.20% gc time)
@time map(x -> 0.4 + x * (0.6 - 0.4), rand(10000, 10000))
>> 0.836322 seconds (49.20 k allocations: 1.493 GiB, 7.08% gc time)
using Distributions
@time rand(Uniform(0.4, 0.6), 10000, 10000)
>> 1.310401 seconds (2 allocations: 762.940 MiB, 1.51% gc time)
@time rand(0.2:0.000001:0.4, 10000, 10000)
>> 1.715034 seconds (2 allocations: 762.940 MiB, 6.24% gc time)
When multiplying matrices in some real-world applications (like the one I have now), the matrices contain a lot of systematically repeated values. The repeated values are not only zeros, so we can't really call the matrices sparse(?)
For example, let's take this matrix (in my case the dimensions are 1000 x 1000):
0.8 0.8 0.8 0.1 0.1
0.8 0.8 0.8 0.7 0.7
0.8 0.8 0.8 0.7 0.7
0.9 0.6 0.5 0.7 0.7
Then we multiply this matrix by a matrix of values and get a result. For example, suppose we multiply just by a vector V = {v1, v2, v3, v4}. We can do a normal matmul, but this is wasteful. We can instead compress the matrix:
A1 = 0.8 * (v1 + v2 + v3)
A2 = 0.7 * (v2 + v3 + v4)
and add these values again and again to the column dot products.
If there is a lot of repetition, the amount of computation can be reduced several times over. But an efficient implementation looks hard to me. Can you suggest something?
You could decompose your matrix into a sum of sparse matrices.
0.8 0.8 0.8 0.1 0.1         1 1 1 0 0         0 0 0 0 0     0   0   0   0.1 0.1
0.8 0.8 0.8 0.7 0.7 = 0.8 * 1 1 1 0 0 + 0.7 * 0 0 0 1 1 +   0   0   0   0   0
0.8 0.8 0.8 0.7 0.7         1 1 1 0 0         0 0 0 1 1     0   0   0   0   0
0.9 0.6 0.5 0.7 0.7         0 0 0 0 0         0 0 0 1 1     0.9 0.6 0.5 0   0
Then your multiplication becomes a series of multiplications that are relatively simple to optimise, plus one big addition at the end.
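As a rough sketch of that idea (Julia with the standard SparseArrays library assumed here; the answer itself doesn't prescribe a language), the 4x5 example above could be handled like this, using the row-vector product from the question:

using SparseArrays

M = [0.8 0.8 0.8 0.1 0.1;
     0.8 0.8 0.8 0.7 0.7;
     0.8 0.8 0.8 0.7 0.7;
     0.9 0.6 0.5 0.7 0.7]

# 0/1 pattern of the repeated 0.8 block
B1 = sparse([1 1 1 0 0;
             1 1 1 0 0;
             1 1 1 0 0;
             0 0 0 0 0.0])

# 0/1 pattern of the repeated 0.7 block
B2 = sparse([0 0 0 0 0;
             0 0 0 1 1;
             0 0 0 1 1;
             0 0 0 1 1.0])

R = sparse(M .- 0.8 .* B1 .- 0.7 .* B2)   # whatever is left over

v = [1.0, 2.0, 3.0, 4.0]                   # example values v1..v4

# Each term is a cheap sparse product; one addition combines them at the end.
result = 0.8 .* (v' * B1) .+ 0.7 .* (v' * B2) .+ (v' * R)
result ≈ v' * M                            # true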
Split it into block matrices and modify the multiplying vector accordingly. You'll probably need a data structure to keep track of the pieces for recombination.
I want to generate n random numbers between 0 and 1 such that their sum is less than or equal to one.
Sum(n random number between 0 and 1) <= 1
n?
For example: 3 random numbers between 0 and 1:
0.2 , 0.3 , 0.4
0.2 + 0.3 + 0.4 = 0.9 <=1
It sounds like you would need to generate the numbers separately while keeping track of the previous numbers. We'll use your example:
Generate the first number between 0 and 1 = 0.2
1.0 - 0.2 = 0.8: Generate the next number between 0 and 0.8 = 0.3
0.8 - 0.3 = 0.5: Generate the next number between 0 and 0.5 = 0.4
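A minimal sketch of that procedure (Julia assumed here, since the question doesn't name a language): each draw is limited to whatever budget remains, so the total can never exceed 1. Note this does not sample uniformly over all valid combinations; it only enforces the constraint.

function constrained_randoms(n)
    remaining = 1.0               # budget still available
    xs = Float64[]
    for _ in 1:n
        x = rand() * remaining    # uniform in [0, remaining)
        push!(xs, x)
        remaining -= x
    end
    return xs
end

xs = constrained_randoms(3)
sum(xs) <= 1                      # always true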
I need to delete rows from a table (.csv), but only if all of the absolute values in that row's columns are less than 1. How can I accomplish this?
Example
Year Parameter1 Parameter2 Parameter3 Parameter4
1 -0.3 0.1 -2.5 1.0
2 -0.3 0.1 0.8 0.1
3 -0.3 0.1 -3.8 1.6
4 -0.6 0.5 -0.2 0.4
5 0.3 -0.1 -0.5 1.3
And I want the output to be:
Year Parameter1 Parameter2 Parameter3 Parameter4
1 -0.3 0.1 -2.5 1.0
3 -0.3 0.1 -3.8 1.6
5 0.3 -0.1 -0.5 1.3
Thanks in advance!
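The question doesn't state a language, so purely as an illustration of the rule (keep a row when at least one parameter column has an absolute value of at least 1, ignoring the Year column), here is a sketch in Julia on a plain numeric matrix:

data = [1 -0.3  0.1 -2.5  1.0;
        2 -0.3  0.1  0.8  0.1;
        3 -0.3  0.1 -3.8  1.6;
        4 -0.6  0.5 -0.2  0.4;
        5  0.3 -0.1 -0.5  1.3]

# Keep row i if any of its parameter columns (2:end) has |value| >= 1.
keep = [any(abs.(data[i, 2:end]) .>= 1) for i in 1:size(data, 1)]
filtered = data[keep, :]          # rows for years 1, 3 and 5 remain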