GeoServer SLD style - double dashed line

Is it possible to make an SLD style for GeoServer maps that represents a street with 3 (or more) tracks, something like this:
I know that it is possible to make a single dashed line ------ so the street will appear to have 2 tracks, but I was unable to make a double or triple dashed line. I just need to represent each track of the street.
I think it might be possible using a custom shape together with a TextSymbolizer and making it appear as a double dashed line, but I cannot use that approach because the TextSymbolizer (with all its vendor options for displacement) is already needed for the street name, street direction, etc. I was wondering: is it possible with some line displacement or something similar?
Thx :)

You could try to use the offset function ( http://docs.geoserver.org/stable/en/user/filter/function_reference.html ). You calculate an offset of e.g. +4, +2, -2 and -4 for your street. The +4 and -4 offset lines could be styled solid, and the +2 and -2 offset lines could be styled dashed and in a different color. But you will still have a problem where road segments join, because the lines will overlap or there will be gaps.
Be aware that GeoServer needs to calculate an offset for 4 lines in real time, which could slow down rendering. If the data is not changing, you can use GeoWebCache (GWC) to serve cached tiles.
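As a rough, untested sketch of what two of those four lines could look like in the SLD: each line gets its own LineSymbolizer whose geometry is wrapped in the offset function. The attribute name geom, the colors, widths, offsets and the dash pattern are placeholders to adapt to your data (and note that offset translates the whole geometry; it is not a true perpendicular offset).
<!-- sketch only, not a complete SLD: the solid outer line at +4 map units -->
<LineSymbolizer>
  <Geometry>
    <ogc:Function name="offset">
      <ogc:PropertyName>geom</ogc:PropertyName>
      <ogc:Literal>0</ogc:Literal>
      <ogc:Literal>4</ogc:Literal>
    </ogc:Function>
  </Geometry>
  <Stroke>
    <CssParameter name="stroke">#333333</CssParameter>
    <CssParameter name="stroke-width">1</CssParameter>
  </Stroke>
</LineSymbolizer>
<!-- the dashed inner line at +2 map units -->
<LineSymbolizer>
  <Geometry>
    <ogc:Function name="offset">
      <ogc:PropertyName>geom</ogc:PropertyName>
      <ogc:Literal>0</ogc:Literal>
      <ogc:Literal>2</ogc:Literal>
    </ogc:Function>
  </Geometry>
  <Stroke>
    <CssParameter name="stroke">#999999</CssParameter>
    <CssParameter name="stroke-width">1</CssParameter>
    <CssParameter name="stroke-dasharray">5 5</CssParameter>
  </Stroke>
</LineSymbolizer>
<!-- ...repeat with offsets -2 and -4 for the other two lines. -->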

Related

"Barcode" reading from scanned image

I want to read a barcode from a scanned image that I printed. The image format is not relevant. I found that the scanned images are of very low quality and can understand why normal barcodes fail.
My idea is to create a non-standard and very simple barcode at the top of each printed page. It will be 20 squares in a row forming a simple binary code: filled = 1, open = 0. It will be large enough on an A4 page to make detection easy.
At this stage I need to load the image and find the barcode somewhere at the top. It will not be at exactly the same spot on every scan. Then I step into each block and build the ID.
Any knowledge or links to info would be awesome.
If you can preset a region of interest that contains the code and nothing else, then detection is pretty easy. Scan a few rays across this region and find the white/black and black/white transitions. Then, knowing where the "cells" should be, you know their polarity.
For this to work, you need to frame your cells with two black ones at both ends, so you know where the code starts/stops (if the scale is fixed, you can get by with just a start cell, but I would not recommend this).
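A minimal sketch of that scan in Python/NumPy, assuming row is one grayscale scan line taken from the region of interest and that the n_cells cells include the black framing cells; the threshold is a made-up value:
import numpy as np

def read_row(row, n_cells=20, threshold=128):
    # Decode one scan line that crosses the barcode region: binarize, use the
    # outermost dark pixels (the framing cells) as the code bounds, then sample
    # the middle of each equal-width cell to decide filled (1) or open (0).
    black = row < threshold                  # True where the pixel is dark
    idx = np.flatnonzero(black)
    if idx.size == 0:
        return None                          # no code on this scan line
    start, stop = idx[0], idx[-1] + 1        # bounds given by the framing cells
    cell_width = (stop - start) / n_cells
    bits = []
    for i in range(n_cells):
        center = int(start + (i + 0.5) * cell_width)
        bits.append(1 if black[center] else 0)
    return bits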
You could have a look at https://github.com/zxing/zxing. I would suggest using a 1D barcode, but one wide enough to cope with the low resolution of the scanner.
You could also invent your own barcode encoding and try to parse it yourself. Use thick bars for 1 and thin bars for 0. A thick bar could be, for instance, 2 white pixels followed by 4 black pixels. A thin bar would be 2 white pixels, 2 black pixels and 2 white pixels. The last two pixels encode the bit value.
Each bar "pixel" should be about the size of a pixel in the scanned image.
You then process the image scan line by scan line, trying to locate the bar code.
We locate the barcode by comparing a given pixel value sequence with a pattern. This is performed by computing a score function; the sum of squared differences is a good pick. When computing the score we ignore the two pixels encoding the bit value.
When the score is below a threshold, we have found a matching pattern. It is good to add parity bits to the encoded value so that its validity can be checked.
Computing a sum of squares over a sliding window can be optimized.
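A rough sketch of that matching step in Python/NumPy. The pattern and threshold values are made up to match the 6-pixel bars described above, and the two positions that carry the bit value are masked out of the score:
import numpy as np

# One bar is 6 pixels: 2 white, 2 black, then 2 pixels that encode the bit.
# 255 = white, 0 = black; the two trailing positions are ignored when scoring.
PATTERN = np.array([255, 255, 0, 0, 0, 0], dtype=float)
MASK    = np.array([1, 1, 1, 1, 0, 0], dtype=bool)   # positions used in the score

def find_bars(scanline, threshold=30000.0):
    # Slide the pattern over one scan line and return (position, bit) candidates.
    scanline = scanline.astype(float)
    hits = []
    for x in range(len(scanline) - len(PATTERN) + 1):
        window = scanline[x:x + len(PATTERN)]
        score = np.sum((window[MASK] - PATTERN[MASK]) ** 2)   # sum of squared differences
        if score < threshold:
            bit = 1 if scanline[x + 5] < 128 else 0           # last pixels encode the value
            hits.append((x, bit))
    return hits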

How do I make the plot-generated marker more noticeable?

I have the coordinate indicating the start of each letter within a word. I have set the plot function to make a red circle at that coordinate like so:
The problem is that the paper I am adding this image to has a structure of 2 columns per page. And when I add 2 of the above images to the same column, the circles become very small and difficult to notice.
Instead of circles I tried triangles or pentagrams, but I get the same result: they become too small to distinguish.
How can I make this more distinguishable? Especially when it's printed in black and white.
You can change the size of the markers, and/or overlay two different markers, one on top of the other. For example:
% example data: a noisy line
x=1:100;
y=rand(1,100);
plot(x,y); hold on
% indices of the points to highlight
n=20:20:60;
% large plus sign overlaid with a circle at each highlighted point
plot(x(n),y(n),'r+','MarkerSize',30,'LineWidth',2);
plot(x(n),y(n),'ro','MarkerSize',15,'LineWidth',2);
There are many other degrees of freedom you can use to add to or change this. It is a very basic question that you could have answered yourself by reading the documentation of plot on the MathWorks website.

Invoice / OCR: Detect two important points in invoice image

I am currently working on OCR software and my idea is to use templates to try to recognize data inside invoices.
However, scanned invoices can have several 'flaws':
Not all invoices, based on a single template, are correctly aligned under the scanner.
People can write on invoices
etc.
Example of an invoice: (I had to google for one; sadly I cannot add a more concrete version, as client data is obviously confidential.)
I find my data in the invoices based on the x-values of the text.
However, I need to know the scale of the invoice and the offset from the left/right before I can do any real calculations with the data I have retrieved.
What have I tried so far?
1) Making the image monochrome and using the left and right bounds of the first appearance of a black pixel. This fails because people can write on invoices.
2) Dividing the invoice into vertical sections and using the sections with the highest number of black pixels. This fails because the distribution is not always uniform among similar templates.
I could really use your help on (1) how to identify important points in invoices and (2) which points I should focus on as the important ones.
I hope the question is clear enough as it is quite hard to explain.
Detecting rotation
I would suggest you start by detecting straight lines.
Look (perhaps randomly) for small areas with high contrast, i.e. mostly white but with a fair number of very black pixels as well. Then try to fit a line to these black pixels, e.g. using the least-squares method. Drop the outliers and fit another line to the remaining points. Iterate this as required. Evaluate how good the fit is, i.e. how many of the pixels in the observed area are really close to the line, and how far that line extends beyond the observed area. Do this for a number of regions, and you should get a weighted list of lines.
For each line, you can compute the direction of the line itself and the direction orthogonal to it. One of these numbers can be chosen from the interval [0°, 90°); the other will be 90° plus that value, so storing one is enough. Take all these directions and find one angle which best matches all of them. You can do that using a sliding window of e.g. 5°: slide across that (cyclic) range and find the position where the maximal number of lines falls within the window, then compute the average or median of the angles within that window. All of this computation can be done taking the weights of the lines into account.
Once you have found the direction of lines, you can rotate your image so that the lines are perfectly aligned to the coordinate axes.
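A hedged Python sketch of that rotation step. Instead of the random-region least-squares fitting described above, it uses OpenCV's probabilistic Hough transform as a shortcut to get a weighted list of line segments, then applies the same fold-into-[0°, 90°), 5° sliding-window vote and weighted average. The Canny/Hough thresholds and window size are assumptions to adapt.
import numpy as np
import cv2  # OpenCV

def dominant_angle(gray, window=5.0):
    # Estimate the dominant line direction of a scanned form, in degrees.
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                               minLineLength=40, maxLineGap=5)
    if segments is None:
        return 0.0
    angles, weights = [], []
    for x1, y1, x2, y2 in segments[:, 0]:
        angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)) % 90.0)  # fold into [0, 90)
        weights.append(np.hypot(x2 - x1, y2 - y1))                      # weight = segment length
    angles, weights = np.array(angles), np.array(weights)
    # Slide a window across the cyclic [0, 90) range, centred on each observed angle.
    best_w, best_center = -1.0, 0.0
    for center in angles:
        d = np.minimum(np.abs(angles - center), 90.0 - np.abs(angles - center))
        w = weights[d <= window / 2].sum()
        if w > best_w:
            best_w, best_center = w, center
    d = np.minimum(np.abs(angles - best_center), 90.0 - np.abs(angles - best_center))
    inside = d <= window / 2
    # Weighted mean of the angles in the best window, unwrapped around its centre.
    rel = (angles[inside] - best_center + 45.0) % 90.0 - 45.0
    return best_center + np.average(rel, weights=weights[inside])

def deskew(gray):
    # Rotate the image so the detected lines align with the coordinate axes
    # (depending on your coordinate conventions you may need to negate the angle).
    angle = dominant_angle(gray)
    if angle > 45.0:
        angle -= 90.0
    h, w = gray.shape
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, 1.0)
    return cv2.warpAffine(gray, m, (w, h), flags=cv2.INTER_LINEAR, borderValue=255)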
Detecting translation
Assuming the image wasn't scaled at any point, you can then try to use an FFT-based correlation of the image to match it to the template. Convert both images to gray, pad them with zeros until the originals take up at most half the edge length of the padded image, which should preferably be a power of two. FFT both images in both directions, multiply them element-wise and iFFT back. The resulting image encodes how well the two images agree for a given shift relative to one another. Simply find the maximum, and you know how to make them match.
Added text will cause no problems at all. This method will work best for large areas, like the company logo and gray background boxes. Thin lines will provide a poorer match, so in those cases you might have to blur the picture before doing the correlation, to broaden the features. You don't have to use the blurred image for further processing; once you know the offset you can return to the rotated but unblurred version.
Now you know both rotation and translation, and assuming no scaling or shearing, you know exactly which portion of the template corresponds to which portion of the scan. Proceed.
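A compact sketch of that FFT correlation with NumPy (function and variable names are mine; in practice you may want to subtract each image's mean first so the peak isn't dominated by overall brightness):
import numpy as np

def match_offset(template, scan):
    # Estimate the (dy, dx) shift that moves `scan` onto `template`.
    # Both inputs are 2-D grayscale arrays; they are zero-padded to a common
    # power-of-two size at least twice their extent, cross-correlated via the
    # FFT, and the position of the correlation peak gives the shift.
    h = max(template.shape[0], scan.shape[0])
    w = max(template.shape[1], scan.shape[1])
    size = (1 << int(np.ceil(np.log2(2 * h))), 1 << int(np.ceil(np.log2(2 * w))))

    def pad(img):
        out = np.zeros(size, dtype=float)
        out[:img.shape[0], :img.shape[1]] = img
        return out

    f_t = np.fft.fft2(pad(template))
    f_s = np.fft.fft2(pad(scan))
    corr = np.real(np.fft.ifft2(f_t * np.conj(f_s)))   # cross-correlation surface
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks in the upper half correspond to negative shifts (circular correlation).
    if dy > size[0] // 2:
        dy -= size[0]
    if dx > size[1] // 2:
        dx -= size[1]
    return dy, dx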
If rotation is solved already, I'd just sum up all pixel color values horizontally and vertically to a single horizontal / vertical "line". This should provide clear spikes where you have horizontal and vertical lines in the form.
P.S. I generated a corresponding horizontal projection image with GIMP's scaling capabilities, attached below (it's a bit hard to see because it's only one pixel high and may get scaled down because it's more than 700 px wide; the URL is http://i.stack.imgur.com/Zy8zO.png ).
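That projection idea is only a couple of lines of NumPy once the page has been deskewed and loaded as a grayscale array (names are illustrative):
import numpy as np

def projection_profiles(gray):
    # Collapse a deskewed grayscale page into one horizontal and one vertical
    # profile; dark form lines show up as pronounced spikes.
    ink = 255.0 - gray.astype(float)      # make black lines large values
    col_profile = ink.sum(axis=0)         # one value per column -> finds vertical lines
    row_profile = ink.sum(axis=1)         # one value per row    -> finds horizontal lines
    return col_profile, row_profile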

variable stroke width in NVD3 lineChart

I am trying to figure out whether there is a reasonably easy way to extend NVD3's lineChart model to allow variable stroke widths along each line path in a chart.
Specifically, I am dealing with a simple line chart where I need to show the year-on-year growth of employment in different sectors (for which NVD3's lineChart works perfectly), while also giving an idea of the relative weight of these sectors (i.e. agriculture might be growing while employing only a few hundred people overall, while retail might be struggling but still employ a large percentage of the population). This won't be a linear scale of course, but assuming the relative weight of each sector varies across time, a thicker line could represent a sector with more employees than one drawn with a thin line.
Obviously I could very easily change the stroke width of the whole line using, say, the average weight of each sector across the whole timespan, but as far as I understand there is no way in SVG to specify a varying width along a single path element: would it make sense to create an NVD3 model that builds on top of lineChart but splits each line into discrete polygons (triangles?) for each year-on-year period?
Looking for an answer to this myself. It seems there is no easy way, but one possibility is to use the stroke-dasharray attribute.
http://owl3d.com/svg/vsw/articles/vsw_article.html
Essentially, you can create a series of cloned paths, covering a range of stroke widths. If you turn them into dash arrays, you can play with the spacing between dashes to control which paths are visible at each point.
Depending on the shape and width you are looking for, you may also be able to fudge it by adding a second path element with a varying offset from the first.
Perhaps generate a closed path and apply a pattern fill or regular fill on that path. The closed path is effectively a triangle shape, to mimic a line of varied width.

How to determine visibility in 2D

I'm developing an AI sandbox and I would like to calculate what every living entity can see.
The rule is to simply hide what's behind the edges of the shapes from the point of view of the entity. The image clarifies everything:
(Image: http://img231.imageshack.us/img231/2972/shadows.png )
I need it either as an input to the artificial intelligence, or graphically, to show it for a specific entity while it moves.
Any cool ideas?
This isn't the fastest algorithm but it produces a polygon which might be useful for further analysis by your AI:
For each line segment, calculate the angles from the entity to its two endpoints.
Sort the points by angle.
“Sweep” around 360°, keeping track of which line segments intersect with the sweep line. When you cross the beginning-of-segment point, you add that segment to the set; when you cross the end-of-segment point, you remove that segment from the set.
The closest line segments form a polygon of what's visible. The polygon is the union of triangle slivers.
I realize this explanation isn't great, but I have an online demo here that you can play with to get a sense for how it works. Extending it to work with circles probably isn't too bad (famous last words).
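To make the geometry concrete, here is a brute-force Python sketch of the same idea. It skips the sweep-line bookkeeping and simply casts a ray towards every segment endpoint (plus a hair to either side) and keeps the nearest hit, which is O(n²) rather than O(n log n) but much shorter to write. The eps and max_dist values are arbitrary placeholders, and segments should include the world's bounding box so every ray hits something.
import math

def ray_segment_hit(ox, oy, dx, dy, seg):
    # Distance t >= 0 along the ray (ox,oy) + t*(dx,dy) to segment seg, or None.
    (x1, y1), (x2, y2) = seg
    ex, ey = x2 - x1, y2 - y1
    denom = dx * ey - dy * ex
    if abs(denom) < 1e-12:
        return None                       # ray parallel to the segment
    t = ((x1 - ox) * ey - (y1 - oy) * ex) / denom
    u = ((x1 - ox) * dy - (y1 - oy) * dx) / denom
    if t >= 0 and 0 <= u <= 1:
        return t
    return None

def visibility_polygon(px, py, segments, eps=1e-4, max_dist=1e6):
    # Cast three rays per endpoint (at it and slightly to either side),
    # keep the nearest intersection, and return the hit points in angular order.
    angles = []
    for (x1, y1), (x2, y2) in segments:
        for (x, y) in ((x1, y1), (x2, y2)):
            a = math.atan2(y - py, x - px)
            angles.extend((a - eps, a, a + eps))
    points = []
    for a in sorted(angles):
        dx, dy = math.cos(a), math.sin(a)
        nearest = max_dist
        for seg in segments:
            t = ray_segment_hit(px, py, dx, dy, seg)
            if t is not None and t < nearest:
                nearest = t
        points.append((px + dx * nearest, py + dy * nearest))
    return points      # vertices of the visible region, in angular order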
If you're using simple shapes to block the entity's view, there is an easy way to do this that I have implemented:
Create a VisionWave object which can move either horizontally or vertically. You can define a VisionWave using a source coordinate, two lines that intersect that point, and a distance from the source point.
You should have 4 waves: one going up, one down, one left, and one right, and the lines that define them should have a slope of 1 and -1 (i.e., an X). My crude drawing below shows one wave (going right) as represented by the > character.
\     /
 \   /   >
  \ /    >
   #     >
  / \    >
 /   \   >
/     \
Make a loop that propagates each wave one pixel at a time. When you propagate the wave, you want to do the following:
Mark every pixel that the wave is touching as visible.
If any of the pixels that the wave touches block light, then you want to split the wave into two, and recursively propagate each one.
I implemented a system like this in my Roguelike and it is very fast. Make sure to profile your code for bottlenecks.
If you're very clever you might try circular waves instead of straight lines, but I don't know if it would be faster due to the trigonometric calculations.
Determine which vertices are visible to your eye point, then project those vertices back away from the eye point on a straight line to make new vertices. Close the shape and you will have created a polygon representing the invisible area.
See shadow volumes for the 3D equivalent.
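A small Python sketch of that projection for a single blocking edge (the "far enough" extent is an arbitrary placeholder): doing this for every edge that faces the eye and unioning the resulting quads gives the invisible area.
import math

def shadow_quad(eye, edge, extent=1e4):
    # Return the quadrilateral shadowed by `edge` as seen from `eye`.
    # Each endpoint is pushed away from the eye along the eye->endpoint
    # direction by `extent`; the edge plus the two projected points closes
    # the invisible region behind the edge.
    (ex, ey) = eye
    (x1, y1), (x2, y2) = edge

    def project(x, y):
        dx, dy = x - ex, y - ey
        length = math.hypot(dx, dy) or 1.0
        return (x + dx / length * extent, y + dy / length * extent)

    return [(x1, y1), (x2, y2), project(x2, y2), project(x1, y1)]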
