I have some projection strings exported by MapInfo, but I can't find a way to convert them into proj4 strings. Could anyone help me with this?
Here are the strings:
"CoordSys Earth Projection 8, 104, "m", 29, 0, 1, 0, 0"
"CoordSys Earth Projection 8, 150, "m", 27, 0, 1, 0, 0"
Thanks,
Edgar
With a little googling:
+proj=tmerc +ellps=WGS84 +lon_0=29
+proj=tmerc +ellps=WGS84 +lon_0=27
Link:
MapInfo projection and datum types: http://reference1.mapinfo.com/software/mapinfo_pro/english/16.0/MapInfoProUserGuide.pdf
Notes:
WGS84 and Hartebeesthoek datums are coincident.
No need to specify default proj4 parameters.
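If you want to sanity-check those proj4 strings programmatically, here is a small sketch with pyproj (this is just an illustration, not part of the MapInfo documentation; the test point is arbitrary):
from pyproj import Transformer

proj4_strings = [
    "+proj=tmerc +ellps=WGS84 +lon_0=29",
    "+proj=tmerc +ellps=WGS84 +lon_0=27",
]

lon, lat = 28.0, -25.7  # arbitrary WGS84 test point

for p in proj4_strings:
    # Transform from geographic WGS84 into the Transverse Mercator system.
    transformer = Transformer.from_crs("EPSG:4326", p, always_xy=True)
    x, y = transformer.transform(lon, lat)
    print(p, "->", round(x, 1), round(y, 1))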
I have been fine-tuning a BERT model for sentence classification. During training I tokenized with padding="max_length", truncation=True, max_length=150, but at inference time the model still predicts even if the padding="max_length" parameter is not passed.
Surprisingly, the predictions are the same whether or not padding="max_length" is passed, but inference is much faster when it is omitted.
So, I need some clarity on the "padding" parameter in the BERT tokenizer. Can someone help me understand how BERT is able to predict even without padding, given that sentence lengths differ, and whether there are any negative consequences if padding="max_length" is not passed at inference time? Any help would be highly appreciated.
Thanks
When passing a list of sentences to a tokenizer, each sentence might have a different length. Hence the output of the tokenizer for each sentence will have a different length. Padding is a strategy for ensuring tensors are rectangular by adding a special padding token to shorter sentences.
Consider the following example where padding="max_length", max_length=10.
batch_sentences = ["Hello World", "Hugging Face Library"]
encoded_input = tokenizer(batch_sentences, padding="max_length", max_length=10)
print(encoded_input)
{'input_ids': [[101, 8667, 1291, 102, 0, 0, 0, 0, 0, 0], [101, 20164, 10932, 10289, 3371, 102, 0, 0, 0, 0]], 'token_type_ids': [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0], [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]], 'attention_mask': [[1, 1, 1, 1, 0, 0, 0, 0, 0, 0], [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]]}
Notice that the output of the tokenizer for each sentence is padded to max_length (10) with the special padding token ID 0. Similarly, if we set padding=True, the output of the tokenizer for each sentence will be padded to the length of the longest sequence in the batch.
Coming back to your question, padding has no effect if you pass a list of just one sentence to the tokenizer. If you have set batch_size = 1 during training or inference, your model will be processing your data one sentence at a time. This could be one reason why padding is not making a difference in your case.
Another possible yet very unlikely reason padding does not make a difference in your case is that all your sentences have the same length. Lastly, if you have not converted the output of the tokenizer to a PyTorch or TensorFlow tensor, having varying sentence lengths would not be a problem. This again is unlikely in your case given that you used your model for training and testing.
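To reproduce the observation in your question, here is a minimal sketch (the checkpoint path "./my-bert-classifier" is a placeholder for your own fine-tuned model): for a single sentence, the attention mask tells the model to ignore the padding tokens, so the padded and unpadded inputs produce the same prediction, but the padded call attends over all 150 positions and is therefore slower.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("./my-bert-classifier")
model = AutoModelForSequenceClassification.from_pretrained("./my-bert-classifier")
model.eval()

sentence = "The movie was surprisingly good."

# Unpadded: the sequence is only as long as the sentence itself.
unpadded = tokenizer(sentence, truncation=True, max_length=150, return_tensors="pt")
# Padded: the sequence is filled up to max_length with padding tokens.
padded = tokenizer(sentence, padding="max_length", truncation=True,
                   max_length=150, return_tensors="pt")

with torch.no_grad():
    logits_unpadded = model(**unpadded).logits
    logits_padded = model(**padded).logits

# The predicted class is the same; only the amount of computation differs.
print(logits_unpadded.argmax(-1), logits_padded.argmax(-1))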
I have a series of ordered geometries (lines) of type:
MDSYS.SDO_GEOMETRY(4402, 4326, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1), MDSYS.SDO_ORDINATE_ARRAY(-87.5652173103127, 41.6985300456929, 0, 510.1408, -87.5652362658404, 41.6985530209061, 0, 510.14287, -87.5652682628194, 41.6985911197852, 0, 510.14632, ...)
I would like to join these into a "single" line of the same type, with all the vertices merged, i.e. another geometry (line) of type:
MDSYS.SDO_GEOMETRY(4402, 4326, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1), MDSYS.SDO_ORDINATE_ARRAY(-87.5652173103127, 41.6985300456929, 0, 510.1408, -87.5652362658404, 41.6985530209061, 0, 510.14287, -87.5652682628194, 41.6985911197852, 0, 510.14632, ...)
Tried:
SDO_UTIL.APPEND to incrementally join pairs of lines, but this resulted in a "multipart" polyline, not a "single" polyline, i.e.:
MDSYS.SDO_GEOMETRY(4406, 4326, NULL, MDSYS.SDO_ELEM_INFO_ARRAY(1, 2, 1, 241, 2, 1, 377, 2, 1, 465, 2, 1, 733, 2, 1, 865, 2, 1, 1365, 2, 1), MDSYS.SDO_ORDINATE_ARRAY(-89.7856903197518,...)
The same issue occurs with SDO_AGGR_LRS_CONCAT.
SDO_UTIL.CONCAT_LINES came closest, producing a single line, but it seems some of the vertices in the SDO_ORDINATE_ARRAY were not correct...
Either there must be another function that does this easily, or perhaps I was not using one of the above correctly... or perhaps I may have to write a custom function to go into each line's SDO_ORDINATE_ARRAY and join those individually (?).
I'm new to Oracle Spatial (and to spatial queries of any type) and the documentation out there seems sparse. Any input would be appreciated.
xCoordinates = {45, 40, 35, 30, 25, 20, 15, 10, 5, 0}
yCoordinates = {0.6, 1.3, 1.5, 2.4, 5, 5.2, 5.3, 6, 6.4, 6.6}
plotData = Transpose@{xCoordinates, yCoordinates}
Show[ListPlot[plotData], Plot[Fit[plotData, {1, x}, x], {x, 0, 45}]]
I executed these in order and got three errors saying "General::ivar : ... is not a variable", followed by "General::stop : Further output of General::ivar will be suppressed during this calculation."
The ListPlot is displayed, but without the Fit line. Can anyone please explain where the error in my code is, and what this error means?
EDIT: Also generated the messages
RGBColor called with 1 argument; 3 or 4 arguments are expected.
and
Coordinate Skeleton[10] should be a pair of numbers, or a Scaled or Offset form.
What do these mean?
See the Details section on Plot.
"Plot has attribute HoldAll and evaluates f only after assigning specific numerical values to x."
To fix the problem, evaluate the fit outside of the Plot function.
xCoordinates = {45, 40, 35, 30, 25, 20, 15, 10, 5, 0};
yCoordinates = {0.6, 1.3, 1.5, 2.4, 5, 5.2, 5.3, 6, 6.4, 6.6};
plotData = Transpose@{xCoordinates, yCoordinates};
fit = Fit[plotData, {1, x}, x];
Show[ListPlot[plotData], Plot[fit, {x, 0, 45}]]
I'm putting together a simple chess position evaluation function. This being the first time I've built a chess engine, I feel very tentative about putting in just any evaluation function. The one shown on this Chess Programming Wiki page looks like a good candidate, but it has an ellipsis at the end, which makes me unsure whether it will be a good one to use.
Once the whole engine is in place and functional, I intend to come back to the evaluation function and make a real attempt at sorting it out properly. But for now I need some sort of function that is good enough to play against an average amateur.
The most basic component of an evaluation function is material, obviously. This should be perfectly straightforward, but on its own does not lead to interesting play. The engine has no sense of position at all, and simply reacts to tactical lines. But we will start here:
value = white_material - black_material // calculate delta material
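As a concrete illustration, a minimal Python sketch of the material term could look like the following (the board representation and the exact piece values are assumptions, not prescribed above):
# Centipawn piece values (assumed; tune to taste).
PIECE_VALUES = {"P": 100, "N": 320, "B": 330, "R": 500, "Q": 900}

def material(position, colour):
    # `position` is assumed to map square -> (colour, piece_letter).
    return sum(PIECE_VALUES[piece]
               for c, piece in position.values()
               if c == colour and piece in PIECE_VALUES)

def material_delta(position):
    # Positive means white is ahead in material.
    return material(position, "white") - material(position, "black")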
Next we introduce some positional awareness through piece-square tables. For example, this is such a predefined table for pawns:
pawn_table = {
0, 0, 0, 0, 0, 0, 0, 0,
75, 75, 75, 75, 75, 75, 75, 75,
25, 25, 29, 29, 29, 29, 25, 25,
4, 8, 12, 21, 21, 12, 8, 4,
0, 4, 8, 17, 17, 8, 4, 0,
4, -4, -8, 4, 4, -8, -4, 4,
4, 8, 8,-17,-17, 8, 8, 4,
0, 0, 0, 0, 0, 0, 0, 0
}
Note that this assumes the common centipawn value system (a pawn is worth ~100). For each white pawn we encounter, we index into the table with the pawn's square and add the corresponding value.
for each p in white pawns
value += pawn_table[square(p)]
Note that we can use a simple calculation to reflect the table when indexing for black pieces. Alternatively, you can define separate tables.
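For instance, if squares are numbered 0-63 with a8 = 0 and h1 = 63 (an assumption chosen to match the table layout above), the reflection is a single XOR:
def pawn_square_bonus(square, is_white, pawn_table):
    # XOR with 56 flips the rank (a2 <-> a7, e4 <-> e5, ...), so the
    # white-oriented table can be reused for black pawns.
    index = square if is_white else square ^ 56
    return pawn_table[index]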
For simple evaluation this will work very well and your engine will probably already be playing common openings. However, it's not too hard to make some simple improvements. For example, you can create tables for the opening and the endgame, and interpolate between them using some sort of phase calculation. This is especially effective for kings, where their place shifts from the corners to the middle of the board as the game progresses.
Thus our evaluation function may look something like:
evaluate(position, colour) {
phase = total_pieces / 32 // this is just an example
opening_value += ... // sum of evaluation terms
endgame_value += ...
final_value = phase * opening_value + (1 - phase) * endgame_value
return final_value * sign(colour) // adjust for caller's perspective
}
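In runnable Python, the blending step on its own could be sketched like this (the 32-piece phase formula and the example numbers are placeholders, not a prescribed design):
def tapered_eval(opening_value, endgame_value, total_pieces, colour):
    # Blend the opening and endgame scores by game phase:
    # 1.0 at the start of the game, approaching 0.0 as pieces come off.
    phase = total_pieces / 32.0
    value = phase * opening_value + (1.0 - phase) * endgame_value
    return value if colour == "white" else -value  # caller's perspective

# e.g. tapered_eval(opening_value=35, endgame_value=-10, total_pieces=20, colour="black")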
This type of evaluation, along with quiescence search, should be enough to annihilate most amateurs.
Is there something like an anti-filter in image processing?
Say, for instance, I am filtering an image using the following 13-tap symmetric filter:
{0, 0, 5, -6, -10, 37, 76, 37, -10, -6, 5, 0, 0} / 128
Each pixel is changed by this filtering process. My question is: can we get back the original image by doing some mathematical operation on the filtered image?
Obviously such mathematical operations exist for trivial filters, like:
{1, 1} / 2
Can we generalize this to complex filters like the one I mentioned at the beginning?
Here is a pointer to one method of deconvolution that takes account of noise, which in your case I guess you have due to rounding error: http://en.wikipedia.org/wiki/Wiener_deconvolution
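As a rough 1-D illustration (assuming the filter was applied by circular convolution and the only noise is rounding/quantisation; for a real image you would do the same thing in 2-D, e.g. with skimage.restoration.wiener), here is a NumPy sketch of Wiener-style inversion:
import numpy as np

h = np.array([0, 0, 5, -6, -10, 37, 76, 37, -10, -6, 5, 0, 0]) / 128.0

n = 256
rng = np.random.default_rng(0)
x = rng.random(n)                          # stand-in for one row of the original image

H = np.fft.fft(h, n)                       # filter frequency response (zero-padded)
y = np.fft.ifft(np.fft.fft(x) * H).real    # filtered signal (circular convolution)
y = np.round(y * 255) / 255                # simulate rounding to 8-bit levels

k = 1e-3                                   # noise-to-signal ratio (tuning constant)
G = np.conj(H) / (np.abs(H) ** 2 + k)      # Wiener-style inverse filter
x_hat = np.fft.ifft(np.fft.fft(y) * G).real

print(np.max(np.abs(x - x_hat)))           # small residual error, not exactly zero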