I have a terrestrial laser scanning point cloud collected in the scanner's coordinate system. I would like to create a density surface using PDAL, and I am running the following code in the OSGeo4W shell, but I get an error.
C:\>pdal density ^
More? /9A-1B_subset15m.las ^
More? -o /9A-1B_sub15m_den.sqlite ^
More? -f SQLite
(pdal density Error) GDAL failure (6) No translation for an empty SRS to PROJ.4 format is known.
Generally, is it possible to use PDAL for ground lidar processing with single returns? I would like to create an evenly distributed point cloud (thinning), extract bare ground, and remove noise.
To the question of whether PDAL can, generally speaking, process ground lidar with single returns: yes. PDAL makes no assumptions about whether multiple returns are available. (Some filters may be able to use the return information, but they should either default to some other behavior or complain if return information is unavailable.)
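For example, the specific steps you mention (noise removal, bare-ground extraction, thinning) each map onto a stock PDAL filter, none of which requires multiple returns. The following pipeline is a sketch only, assuming a reasonably recent PDAL build; the filter parameters are placeholders you would tune for your data:

{
    "pipeline": [
        "9A-1B_subset15m.las",
        { "type": "filters.outlier", "method": "statistical",
          "mean_k": 8, "multiplier": 2.5 },
        { "type": "filters.smrf" },
        { "type": "filters.range", "limits": "Classification[2:2]" },
        { "type": "filters.sample", "radius": 0.5 },
        "9A-1B_ground_thinned.las"
    ]
}

Saved as, say, thin_ground.json, this would run with pdal pipeline thin_ground.json: filters.outlier drops statistical outliers (noise), filters.smrf classifies ground, filters.range keeps only the ground class, and filters.sample performs Poisson thinning to a roughly even spacing.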
As for your error, I would guess that the input LAS point cloud has no SRS assigned, and that one is required to create the SQLite output. If you know the SRS, you could assign it using pdal translate (or perhaps by overriding it with --readers.las.spatialreference=<your SRS> in the call to pdal density).
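For example, a minimal pdal translate invocation that writes a copy of the file with an assigned SRS could look like this; "EPSG:32633" is only a placeholder, substitute whatever CRS your scanner data is actually in:

pdal translate 9A-1B_subset15m.las 9A-1B_sub15m_srs.las --writers.las.a_srs="EPSG:32633"

Running pdal density against the resulting file should then get past the empty-SRS failure.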
For a project I am undertaking, I will need to calculate the derivative of a given surface brightness profile which is convolved with a pixel response function (as well as PSF etc.)
For various reasons, but mainly for consistency, I wish to do this using the guts of the GalSim code. However, since in this case the 'flux' defined as the sum of the non-parametric model no longer has a physical meaning in terms of the image itself (it will always be considered as noise-free in this case), there are certain situations where I would like to be able to define the interpolated image without a flux normalisation.
The code does not seem to care if the 'flux' is negative, but I am coming across certain situations where the 'flux' is within machine precision of zero, and thus the assertion "dabs(flux-flux_tot) <= dabs(flux_tot)" fails.
My question is therefore: Can one specify a non-parametric model to interpolate over without specifying a flux normalisation value?
There is currently no way to do this using the galsim.InterpolatedImage() class; you could open an issue to make this feature request at the GalSim repository on GitHub.
There is a way to do this using the guts of GalSim; an example is illustrated in the lensing power spectrum functionality if you are willing to dig into the source code (lensing_ps.py -- just do a search for SBInterpolatedImage to find the relevant code bits). The basic idea is that instead of using galsim.InterpolatedImage() you use the associated C++ class, galsim._galsim.SBInterpolatedImage(), which is accessible in python. The SBInterpolatedImage can be initialized with an image and choices of interpolants in real- and Fourier-space, as shown in the examples in lensing_ps.py, and then queried using the xValue() method to get the value interpolated to some position.
This trick was necessary in lensing_ps.py because we were interpolating shear fields which tend to have a mean of zero, so we ran into the same problem you did. Use of the SBInterpolatedImage class is not generally recommended for GalSim users (we recommend using only the python classes) but it is definitely a way around your problem for now.
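For concreteness, here is a minimal sketch of that approach. It assumes the GalSim 1.x-era layout in which lensing_ps.py lives; the C++-layer constructor signature, the dx keyword, the im.image attribute and the need for InterpolantXY wrappers all vary between versions, so treat the argument list below as an assumption and copy the exact SBInterpolatedImage calls from lensing_ps.py of your installed version:

import galsim

# Build an image holding the (possibly zero-mean, unnormalised) model.
im = galsim.ImageD(64, 64)
im.setScale(1.0)                 # grid spacing; adjust to your pixel scale
# ... fill im.array with your non-parametric model here ...

# 2D interpolants for real and Fourier space (InterpolantXY wrappers
# were required in older GalSim versions).
x_interp = galsim.InterpolantXY(galsim.Lanczos(5))
k_interp = galsim.InterpolantXY(galsim.Quintic())

# Bypass galsim.InterpolatedImage and its flux normalisation entirely.
# NOTE: argument order/names here are an assumption; check lensing_ps.py.
sbii = galsim._galsim.SBInterpolatedImage(im.image, x_interp, k_interp, dx=1.0)

# Query the interpolated surface at an arbitrary (x, y) position.
value = sbii.xValue(galsim.PositionD(3.25, -1.5))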
In a DirectX 11 demo application, I use a CDXUTSDKMesh for my static geometry. It is loaded and already displayed.
I'm doing some experiments related to Precomputed Radiance Transfer in this application. The ID3DXPRTEngine would be a pretty handy tool for the job. Unfortunately, D3DX and DXUT don't seem to be very compatible there.
ID3DXPRTEngine requires a single ID3DXMesh (multiple meshes can be concatenated with D3DXConcatenateMeshes no problem). Is there an easy way to 'convert' CDXUTSDKMesh to one or multiple ID3DXMesh instances?
Short Answer: No.
Long answer: It can be done, but it requires that you be able to read the raw vertex data, convert it to the Direct3D 9 FVF system (the old fixed shader pipeline), and write the new data back.
Basically what you're trying to do is make two different versions of the Direct3D API, 9.0 and 11.0, interoperate. In order to do this, you need to do the following:
Confirm your vertices contain no custom semantics (i.e. D3D11 allows a semantic like RANDOM_EYE_VEC, whereas D3D9 does not).
Confirm you have the ability to read your vertex and face information into a byte buffer (i.e. a char pointer).
Create a Direct3D 9 Device and a Direct3D 11 device.
Open your mesh, pull the data into a byte buffer, and create an ID3DXMesh object from that data. If you have more than 2^16 faces, you will need to split the mesh into two or more meshes.
Create the PRT Engine from the mesh(es), and run your PRT calculations.
Read the new information from the mesh(es) using ID3DXBaseMesh::GetVertexBuffer() (not sure if this step is entirely correct as I've never actually used the PRT engine).
Write the new data back into the CDXUTSDKMesh.
As you may be able to tell, this process is long, prone to error, and very slow. If you're doing this for an offline tool, it may be OK, but if this is to be used in real time, you're probably better off writing your own PRT engine.
Additionally, if your mesh has any custom semantics (i.e. you use a shader that expects some data that wasn't a part of the D3D9 world) then this algorithm will fail. The PRT engine uses the fixed function pipeline, so any new meshes that take advantage of D3D11 features that didn't exist back then are not going to work here.
Is it possible, using the ZBar API, to check whether an image contains a barcode or not?
This is meant as a backup measure: if the application is unable to get a barcode value, it should check whether the image might contain a barcode, so that the user can manually verify it later.
I have explored quite a bit but with no major success. If not ZBar, is there any other open source library that can do it well?
Thanks
What you need is a detector, i.e. the ability to locate the barcode (if any), and thus just return yes or no according to the detection result.
IMHO ZBar does not provide a versatile enough API to do so, since it exposes a high-level scanner interface (zbar_scan_image) that combines detection and decoding on the one hand, and a pure decoder interface on the other.
You should definitely refer to this paper: Robust 1D Barcode Recognition on Mobile Devices. It contains an entire section related to the detection step, including pseudo-algorithms [1] (see section 4, "Locating the barcode"). But there is no ready-to-use open source library: you would have to implement your own detector based on the described techniques; a rough sketch of the general idea is given below.
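To give a flavour of what such a detector can look like, here is a Python/OpenCV sketch of a common gradient-based localisation heuristic (not the paper's exact pseudo-algorithms): 1D barcodes produce strong gradients along one axis and weak ones along the other, so thresholding that difference and closing the gaps between bars yields a candidate region. All thresholds and kernel sizes are arbitrary placeholders to tune:

import cv2
import numpy as np

def find_barcode_region(bgr):
    """Return a bounding box (x, y, w, h) likely to contain a 1D barcode, or None."""
    gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    # Bars give strong horizontal gradients and weak vertical ones.
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = cv2.convertScaleAbs(cv2.subtract(np.abs(gx), np.abs(gy)))
    grad = cv2.blur(grad, (9, 9))
    _, thresh = cv2.threshold(grad, 100, 255, cv2.THRESH_BINARY)
    # Close the gaps between individual bars into one solid blob.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 7))
    closed = cv2.morphologyEx(thresh, cv2.MORPH_CLOSE, kernel)
    closed = cv2.erode(closed, None, iterations=4)
    closed = cv2.dilate(closed, None, iterations=4)
    # OpenCV 4.x signature; OpenCV 3.x returns (image, contours, hierarchy).
    contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    return cv2.boundingRect(max(contours, key=cv2.contourArea))

A non-None result can then serve as your "might contain a barcode" flag for manual verification.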
Lastly, more pragmatic and simple techniques may be used depending on the kind of input images you plan to work with (is there any rotation? blur? are you processing still images or a video stream in real time?).
[1] In addition, I would say that it's a good idea to use a different kind of algorithm in this fallback step than the one used in the first step.
I wrote a Delphi program generating a GPX file as input for a "poor man's guidance system" for aerial spraying by means of an ultralight plane.
By and large, it produces a route (parallel swaths) using a GPX file as output.
The route engine is based on Vincenty's algorithm, which works fine for any WGS84 computation, but I can't match the accuracy of the grid generated by ExpertGPS by TopoGrafix (a requirement).
I assume a 2D computation on the ellipsoid:
1) From the start rtept (route point), compute the next rtept given a bearing and an arbitrary distance (swath length).
2) Compute the next rtept relative to the previous bearing (a 90° turn) and another arbitrary distance (swath distance).
3) Redo 1) with the last rtept as the starting point, but in the opposite direction, and so on.
What's wrong with it?
You do not describe your Pascal implementation of Vincenty's earth ellipsoid model, so the following is speculation:

The model makes use of numerous trigonometric functions (ATAN2, COS, SIN, etc.). Depending on whether you use the built-in Delphi functions or your own versions, there is the possibility of a lack of precision in the calculations. The precision of the value of pi used in your calculations could also affect the precision you require.

Floating-point arithmetic can cause decimal-place errors, and it will make a difference whether you use Single, Double or Real. I believe some of the internal Delphi functions have changed with different versions, so the version of Delphi you are using may affect how an internal function is implemented.

If implemented accurately, Vincenty's formula is supposed to be accurate to within 0.5 mm. Amazing accuracy. If there are rounding errors or a lack of precision in your Delphi implementation, the positional errors can be significantly larger.

Consider the accuracy of your GPS information. Depending on how many satellites are being used by the GPS receiver at any one time, the accuracy of the positional information changes; errors on the order of 50 feet or more are possible. Additionally, the refresh of positional information on the GPS receiver is not necessarily instantaneous, so if the swath turns occur rapidly, you will have to ensure the GPS has updated at the turning point.

Your procedure for calculating the pattern seems reasonable, so look at the implementation of Vincenty's algorithm in your Delphi code.

This list is not exhaustive; I imagine others can improve it dramatically. What I mention is based on my experience with GPS and various versions of Delphi, and what I could recall off the top of my head.
Something you might try is to compare the distances and bearings calculated by your implementation of the algorithm with examples provided on the Internet. There are several online calculators. If you have not been there, the Aviation Formulary at http://williams.best.vwh.net/avform.htm is an excellent place to find examples of other navigational tricks. A comparison will let you check your Delphi implementation of Vincenty's algorithm against data calculated by mathematicians. Simply put, your implementation of Vincenty may not be precise. Then again, the error may be elsewhere.
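If it helps with that comparison, below is a compact Python reference of Vincenty's direct (forward) problem on WGS84, following the standard published iteration; it is a sketch for cross-checking numbers against your Delphi code, not a drop-in replacement for it:

import math

WGS84_A = 6378137.0              # semi-major axis (metres)
WGS84_F = 1 / 298.257223563      # flattening

def vincenty_direct(lat1, lon1, bearing, distance):
    """Start point (deg), initial bearing (deg), distance (m) -> end point (deg)."""
    a, f = WGS84_A, WGS84_F
    b = a * (1 - f)
    phi1 = math.radians(lat1)
    alpha1 = math.radians(bearing)

    U1 = math.atan((1 - f) * math.tan(phi1))
    sigma1 = math.atan2(math.tan(U1), math.cos(alpha1))
    sin_alpha = math.cos(U1) * math.sin(alpha1)
    cos2_alpha = 1 - sin_alpha ** 2
    u2 = cos2_alpha * (a ** 2 - b ** 2) / b ** 2
    A = 1 + u2 / 16384 * (4096 + u2 * (-768 + u2 * (320 - 175 * u2)))
    B = u2 / 1024 * (256 + u2 * (-128 + u2 * (74 - 47 * u2)))

    sigma = distance / (b * A)
    while True:                              # iterate sigma to convergence
        two_sigma_m = 2 * sigma1 + sigma
        cos_2sm = math.cos(two_sigma_m)
        delta_sigma = B * math.sin(sigma) * (cos_2sm + B / 4 * (
            math.cos(sigma) * (-1 + 2 * cos_2sm ** 2)
            - B / 6 * cos_2sm * (-3 + 4 * math.sin(sigma) ** 2)
            * (-3 + 4 * cos_2sm ** 2)))
        sigma_next = distance / (b * A) + delta_sigma
        if abs(sigma_next - sigma) < 1e-12:
            sigma = sigma_next
            break
        sigma = sigma_next

    sin_s, cos_s = math.sin(sigma), math.cos(sigma)
    sinU1, cosU1 = math.sin(U1), math.cos(U1)
    tmp = sinU1 * sin_s - cosU1 * cos_s * math.cos(alpha1)
    phi2 = math.atan2(sinU1 * cos_s + cosU1 * sin_s * math.cos(alpha1),
                      (1 - f) * math.sqrt(sin_alpha ** 2 + tmp ** 2))
    lam = math.atan2(sin_s * math.sin(alpha1),
                     cosU1 * cos_s - sinU1 * sin_s * math.cos(alpha1))
    C = f / 16 * cos2_alpha * (4 + f * (4 - 3 * cos2_alpha))
    L = lam - (1 - C) * f * sin_alpha * (sigma + C * sin_s * (
        math.cos(two_sigma_m) + C * cos_s * (-1 + 2 * math.cos(two_sigma_m) ** 2)))
    return math.degrees(phi2), lon1 + math.degrees(L)

# e.g. one swath leg due east, as in step 1) of the question:
# lat2, lon2 = vincenty_direct(46.5, 6.6, 90.0, 1000.0)

Feeding the same start point, bearing and distance to this, to your Delphi routine and to an online calculator should make it obvious whether the discrepancy lives in your Vincenty implementation or elsewhere.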
I am doing something similar for farm GPS guidance on a ground rig, just with Android. It is great for a second tractor to help follow the previous A-B tracks, especially when they disappear for a bit.

GPS accuracy: repeatability from one day to the next will give larger distance errors. Expensive systems use dGPS and get 2 cm-10 cm; without dGPS you can be 5-30 metres off. A simple solution is to recalibrate at a known location; cheaper light bars use this method.

Drift: as above, except it relates to movement during the job. It is mostly unnoticeable (<20 cm over 3 hrs), but it can rarely jump 1-2 metres, I think when a satellite connects or disconnects. Again, recalibrate regularly at known coordinates, e.g. the spray fill point.

Update speed: most phones update at 1 Hz. With, say, 3 seconds between fixes at 50 km/h, that is 41.66 m between fixes. A ground rig runs at 18 km/h, but there will be tracks to follow after the first run. Try a 10 Hz Bluetooth GPS, check its update speed, and as mentioned, fast turns are a problem.

The accuracy of your inputs, and whether your guidance uses dGPS, will make a huge difference.

Once you are off your line by, say, 5 metres with 100 metres to the next point, then at 50 metres you are still 2.5 metres off, unless your guidance takes you back to the route rather than to the next coordinates.

I am not using Vincenty, as I can 'bump' back onto the line manually, and over 1 km across the difference is <30 cm according to the only reference I saw; instead I take 2 points and create parallel points across from them.

Hope these ideas help your situation.
I have four sets of algorithms that I want to set up as modules, but I need all algorithms executed at the same time within each module. I'm a complete noob and have no programming experience. I do, however, know how to prove my models are decidable and have already done so (I know Applied Logic).
The models are sensory parsers. I know how to create the state spaces for the modules, but I don't know how to program driver access to my webcam from Prolog (I have a Toshiba Satellite laptop with a built-in webcam). I also don't know how to link the input from the webcam to the variables in the algorithms I've written. The variables I use, when combined and identified with functions, are set to identify unknown input using a probabilistic database search for the best match after a breadth-first search. The parsers aren't holistic, which is why I want to run them either in parallel or as needed.
How should I go about this?
"I also don't know how to link the input from the webcam to the variables in the algorithms I've written."
I think the most common way to do this is the machine learning approach: first calculate features from your video stream (like the position of color blobs, optical flow, the amount of green in the image, whatever you like). Then use supervised learning on labeled data to train models like HMMs, SVMs or ANNs to recognize the labels from the features. The labels are usually higher-level things like faces, a smile or waving hands.
Depending on the nature of your "variables", they may already be covered at the feature level, i.e. they can be computed from the data in a known way. If that is the case, you can get away without training/learning.
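As a concrete illustration of that pipeline, here is a small Python sketch using OpenCV for the webcam and scikit-learn for the classifier; the specific features (mean channel values, fraction of 'green' pixels) and the SVM are placeholders for whatever features and model suit your parsers:

import cv2
import numpy as np
from sklearn.svm import SVC

def frame_features(frame):
    """Toy per-frame features: mean B, G, R values plus fraction of green pixels."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    green = cv2.inRange(hsv, (40, 60, 60), (80, 255, 255))  # crude 'green' mask
    return [frame[..., 0].mean(), frame[..., 1].mean(),
            frame[..., 2].mean(), green.mean() / 255.0]

cap = cv2.VideoCapture(0)            # 0 = built-in webcam on most laptops

# Supervised step: collect features for frames you have labeled by hand.
X, y = [], []                        # ... append frame_features(f) and labels ...
# clf = SVC().fit(X, y)              # train once labeled data is available

ok, frame = cap.read()
if ok:
    feats = frame_features(frame)    # features for the live frame
    # label = clf.predict([feats])   # classify once clf is trained
cap.release()

The predicted label (rather than raw pixels) is then what you would hand over to the Prolog side as the value of your variables.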