Getting 6 equally spaced points from a sequence of LineStrings in Python - GeoPandas

I have data whose geometry is a sequence of LineStrings. From this geometry, I intend to obtain 6 equally spaced points. I have tried using the np.linspace and np.arange functions but I can't get what I am looking for. Kindly help me with an algorithm for working this out. Below is my geometry:
0 0 [(34.220545045730496, 4.442393531636864), (34.2224155889151, 4.441156322612342), (34.223315935853314, 4.440427900224906), (34.224077028342684, 4.4399257458670185), (34.224766879680814, 4.439278243976972), (34.22556405912324, 4.438776319912429), (34.226398022458866, 4.438165968807749), (34.22734024644533, 4.437556318146236), (34.22943852729713, 4.4365111090884435), (34.23209935027103, 4.435703837269645), (34.234239592861904, 4.434838866278886), (34.23690044724009, 4.434031568657816), (34.238982889400376, 4.4332242366641825), (34.24054473957206, 4.4327052342468), (34.244651946024526, 4.431263497605144), (34.246271748538476, 4.430629103905764), (34.24783374809697, 4.429764000807084), (34.24904866823832, 4.429014226798339), (34.25043718779403, 4.428091408213787), (34.2521728964003,
(34.352324364376926, 4.208738505912754), (34.35256435592366, 4.208668509240028)]
Name: geometry, dtype: object
I tried this code but still can't get the 25th and 75th points right:
first_coord = N_df["geometry"].apply(lambda g: g.coords[0])
Point_25th = N_df['geometry'].agg(lambda g: np.percentile(g, 25))
center_point = N_df['geometry'].centroid
point_75th = N_df['geometry'].agg(lambda g: np.percentile(g, 75))
last_coord = N_df["geometry"].apply(lambda g: g.coords[-1])
N_df["start_coord"] = first_coord
N_df["25th percentile"] = Point_25th
N_df["Midpoint"] = center_point
N_df["75th percentile"] = point_75th
N_df["last_coord"] = last_coord
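One way to get such points is shapely's interpolate with normalized distances; here is a minimal sketch (assuming N_df is a GeoDataFrame whose geometry column holds LineStrings; the new column names are just illustrative):
import numpy as np

# Six equally spaced fractions of the total arc length: 0, 0.2, ..., 1.0
fractions = np.linspace(0, 1, 6)

for k, frac in enumerate(fractions):
    # normalized=True makes interpolate() measure the distance as a
    # fraction of the line's length, so the points are evenly spaced
    # along the line itself, not along the (unevenly spaced) vertex list.
    N_df[f"point_{k}"] = N_df["geometry"].apply(
        lambda g, f=frac: g.interpolate(f, normalized=True)
    )
The 25th and 75th percentile points are then simply g.interpolate(0.25, normalized=True) and g.interpolate(0.75, normalized=True); np.percentile over the geometry cannot work here, because the vertices are not evenly spaced along the line.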

Related

Fine-tune a pre-trained model

I am new to transformer-based models. I am trying to fine-tune the following model (https://huggingface.co/Chramer/remote-sensing-distilbert-cased) on my dataset. The code:
[code posted as an image]
and I got the following error:
[error posted as an image]
I will be thankful if anyone could help.
The preprocessing steps I followed:
input_ids_t = []
attention_masks_t = []
for sent in df_train['text_a']:
    encoded_dict = tokenizer.encode_plus(
        sent,
        add_special_tokens = True,
        max_length = 128,
        pad_to_max_length = True,
        return_attention_mask = True,
        return_tensors = 'tf',
    )
    input_ids_t.append(encoded_dict['input_ids'])
    attention_masks_t.append(encoded_dict['attention_mask'])

# Convert the lists into tensors.
input_ids_t = tf.concat(input_ids_t, axis=0)
attention_masks_t = tf.concat(attention_masks_t, axis=0)
labels_t = np.asarray(df_train['label'])
and I did the same for the testing data. Then:
train_data = tf.data.Dataset.from_tensor_slices((input_ids_t,attention_masks_t,labels_t))
and the same for the testing data.
It sounds like you are feeding transformer_model one input instead of three. Try removing the square brackets around transformer_model([input_ids, input_mask, segment_ids])[0] so that it reads transformer_model(input_ids, input_mask, segment_ids)[0]. That way, the function receives three arguments instead of just one.
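In code, the change would look something like this (a sketch; the variable names come from the question's screenshot, so they are assumptions here):
# Before: one argument, a single list of three tensors
# embedding = transformer_model([input_ids, input_mask, segment_ids])[0]

# After: three separate positional arguments
embedding = transformer_model(input_ids, input_mask, segment_ids)[0]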

to_crs("epsg:4326") returns different coordinates

I am trying to change the coordinate system of my geopandas dataframe from epsg:5179 to epsg:4326.
BUT, .to_crs("epsg:4326") returns different coordinates... How can I get the true coordinates?
geo["geometry"].set_crs("epsg:5179", inplace=True)
geo_df = geo["geometry"].to_crs("epsg:4326")
Original:
LINESTRING (14138122.900 4519000.200, 14138248...
LINESTRING (14135761.800 4518881.600, 14135799...
Changed-proj:
LINESTRING (-149.90927 12.31701, -149.90912 12...
LINESTRING (-149.91219 12.32162, -149.91215 12...
It seems like you did get the true coordinates with your code, which is:
geo["geometry"].set_crs("epsg:5179", inplace=True)
geo_df = geo["geometry"].to_crs("epsg:4326")
I've been looking through pyproj and couldn't find an error in converting coordinates from epsg:5179 to epsg:4326.
If you want further information about coordinates, you can visit here.
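As a sanity check, here is a minimal sketch that reproduces the same transform with pyproj directly (using the first vertex of the original LineString as input):
from pyproj import Transformer

# EPSG:5179 (Korea 2000 / Unified CS) -> EPSG:4326 (WGS 84 lon/lat)
transformer = Transformer.from_crs("epsg:5179", "epsg:4326", always_xy=True)

x, y = 14138122.900, 4519000.200  # first vertex of the original LineString
lon, lat = transformer.transform(x, y)
print(lon, lat)  # should match the to_crs() output if the source CRS is right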

lua - How to perform transitions in sequence

I'm trying to move an object along the points of a complex curved path with a constant velocity, using transitions.
I have two tables to keep the coordinates of the points, and another table with the respective time intervals for travelling each linear segment at the same speed (even though the segments have different lengths).
Assuming the first and last values of the timeTable are 0, I tried something similar to this:
local i = 1
local function Move()
    transition.to(player, {time=timeTable[i+1], x=TableX[i+1], y=TableY[i+1]})
    i = i + 1
end
timer.performWithDelay(timeTable[i], Move, 0)
It doesn't work, although no error is given.
Thanks in advance for your help.
Maybe this would work:
local timeTable = {1, 3, 4, 1}
local TableX = {100, 400, 400, 500}
local TableY = {100, 100, 500, 500}

local i = 0
local function onCompleteMove()
    i = i + 1
    if timeTable[i] then
        transition.to(player, {
            time=timeTable[i],
            x=TableX[i],
            y=TableY[i],
            onComplete=onCompleteMove
        })
    end
end
onCompleteMove() -- start moving to first point
Try these tutorials:
Tutorial: Moving objects along a path
Tutorial: Working with curved paths
And here is a method for chaining transitions on the same object:
local function chainOfTransitions(object, params, ...)
    local rest = { ... } -- capture the params tables for the later segments
    if params then
        function params.onComplete()
            chainOfTransitions(object, unpack(rest))
        end
        transition.to(object, params)
    end
end
Thanks to all of you!
I accomplished the goal by doing so:
local segmentTransition
local i = 0
local delta = 1
local function onCompleteMove()
    i = i + delta
    if timeTable[i] then
        segmentTransition = transition.to(player2, {
            time=timeTable[i],
            x=tableX[i+delta],
            y=tableY[i+delta],
            onComplete=onCompleteMove
        })
    end
end
onCompleteMove() -- start moving

How can I execute a TensorFlow graph from a protobuf in C++?

I got some simple code from a tutorial and output it to a .pb file as below:
mnist_softmax_train.py
x = tf.placeholder("float", shape=[None, 784], name='input_x')
y_ = tf.placeholder("float", shape=[None, 10], name='input_y')
W = tf.Variable(tf.zeros([784, 10]), name='W')
b = tf.Variable(tf.zeros([10]), name='b')
tf.initialize_all_variables().run()
y = tf.nn.softmax(tf.matmul(x,W)+b, name='softmax')
cross_entropy = -tf.reduce_sum(y_*tf.log(y))
train_step = tf.train.GradientDescentOptimizer(0.01).minimize(cross_entropy, name='train_step')
train_step.run(feed_dict={x:input_x, y_:input_y})
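(The export step is not shown in the question; presumably it was something along these lines — a sketch, where the session name and path are assumptions:)
# Write the GraphDef to disk so it can be loaded from C++
# (assumes an active session called sess).
tf.train.write_graph(sess.graph_def, "/tmp", "mnist_softmax.pb", as_text=False)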
In C++, I load the same graph, and feed in fake data for testing:
Tensor input_x(DT_FLOAT, TensorShape({10,784}));
Tensor input_y(DT_FLOAT, TensorShape({10,10}));
Tensor W(DT_FLOAT, TensorShape({784,10}));
Tensor b(DT_FLOAT, TensorShape({10,10}));
Tensor input_test_x(DT_FLOAT, TensorShape({1,784}));
for (int i = 0; i < 10; i++) {
    for (int j = 0; j < 10; j++)
        input_x.matrix<float>()(i, i + j) = 1.0;
    input_y.matrix<float>()(i, i) = 1.0;
    input_test_x.matrix<float>()(0, i) = 1.0;
}
std::vector<std::pair<string, tensorflow::Tensor>> inputs = {
    {"input_x", input_x},
    {"input_y", input_y},
    {"W", W},
    {"b", b},
    {"input_test_x", input_test_x},
};
std::vector<tensorflow::Tensor> outputs;
status = session->Run(inputs, {}, {"train_step"}, &outputs);
std::cout << outputs[0].DebugString() << "\n";
However, this fails with the error:
Invalid argument: Input 0 of node train_step/update_W/ApplyGradientDescent was passed float from _recv_W_0:0 incompatible with expected float_ref.
The graph runs correctly in Python. How can I run it correctly in C++?
The issue here is that you are running the "train_step" target, which performs much more work than just inference. In particular, it attempts to update the variables W and b with the result of the gradient descent step. The error message
Invalid argument: Input 0 of node train_step/update_W/ApplyGradientDescent was passed float from _recv_W_0:0 incompatible with expected float_ref.
...means that one of the nodes you attempted to run ("train_step/update_W/ApplyGradientDescent") expected a mutable input (with type float_ref) but it got an immutable input (with type float) because the value was fed in.
There are (at least) two possible solutions:
If you only want to see predictions for a given input and given weights, fetch "softmax:0" instead of "train_step" in the call to Session::Run().
If you want to perform training in C++, do not feed W and b, but instead assign values to those variables, then continue to execute "train_step". You may find it easier to create a tf.train.Saver when you build the graph in Python, and then invoke the operations that it produces to save and restore values from a checkpoint.
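For the second option, a minimal sketch of the Python side (the checkpoint path and the session name sess are assumptions):
# Build the graph as before, then add a Saver so the variables can be
# saved to (and later restored from) a checkpoint.
saver = tf.train.Saver()

# ... after running some training steps ...
saver.save(sess, "/tmp/mnist_softmax.ckpt")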

How to normalize an image using Octave?

In their paper describing the Viola-Jones object detection framework ("Robust Real-Time Face Detection" by Viola and Jones), it is said:
All example sub-windows used for training were variance normalized to minimize the effect of different lighting conditions.
My question is "How to implement image normalization in Octave?"
I'm NOT looking for the specific implementation that Viola & Jones used, but a similar one that produces almost the same output. I've been following a lot of haar-training tutorials (trying to detect a hand) but have not yet been able to output a good detector (xml).
I've tried contacting the authors, but still no response yet.
I already answered how to do it in general guidelines in this thread.
Here is how to do method 1 (normalizing to zero mean and unit standard deviation) in Octave (demonstrated for a random matrix A; of course this can be applied to any matrix, which is how the picture is represented):
>>A = rand(5,5)
A =
0.078558 0.856690 0.077673 0.038482 0.125593
0.272183 0.091885 0.495691 0.313981 0.198931
0.287203 0.779104 0.301254 0.118286 0.252514
0.508187 0.893055 0.797877 0.668184 0.402121
0.319055 0.245784 0.324384 0.519099 0.352954
>>s = std(A(:))
s = 0.25628
>>u = mean(A(:))
u = 0.37275
>>A_norm = (A - u) / s
A_norm =
-1.147939 1.888350 -1.151395 -1.304320 -0.964411
-0.392411 -1.095939 0.479722 -0.229316 -0.678241
-0.333804 1.585607 -0.278976 -0.992922 -0.469159
0.528481 2.030247 1.658861 1.152795 0.114610
-0.209517 -0.495419 -0.188723 0.571062 -0.077241
In the above you use:
To get the standard deviation of the matrix: s = std(A(:))
To get the mean value of the matrix: u = mean(A(:))
And then the formula A'[i][j] = (A[i][j] - u) / s, in its vectorized version: A_norm = (A - u) / s
Normalizing it with vector normalization is also simple:
>>abs = sqrt((A(:))' * (A(:)))
abs = 2.2472
>>A_norm = A / abs
A_norm =
0.034959 0.381229 0.034565 0.017124 0.055889
0.121122 0.040889 0.220583 0.139722 0.088525
0.127806 0.346703 0.134059 0.052637 0.112369
0.226144 0.397411 0.355057 0.297343 0.178945
0.141980 0.109375 0.144351 0.231000 0.157065
In the above:
abs is the absolute value of the vector (its length), which is calculated with a vectorized multiplication (A(:)' * A(:) is actually sum(A[i][j]^2)).
Then we use it to normalize the vector so it will have length 1.
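For comparison, here is a sketch of the same two normalizations in Python/numpy (not part of the original answer, just the equivalent vectorized code):
import numpy as np

A = np.random.rand(5, 5)

# Method 1: zero mean, unit standard deviation (variance normalization).
A_std = (A - A.mean()) / A.std()

# Method 2: scale the matrix so that, flattened into a vector, it has
# Euclidean length 1.
A_unit = A / np.linalg.norm(A)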
