An algorithm to generate a game map from individual images

I am designing a game to be played in the browser.
The game has a space theme, and I need to generate a map of the "Galaxy".
The basic idea of the map is here:
game map http://www.oglehq.com/map.png
The map is a grid; each grid sector can contain a planet/system, and each of these has links to a number of adjacent sectors.
To generate the maps I figured that I would have a collection of images representing the grid elements. So in the case of the sample above, each of the squares is a separate graphic.
To create a new map I would "weave" the images together.
The map element images would have the planets and their links already drawn on them, so I need to stitch the map together in such a way that each image is positioned with its appropriate counterparts: the image in the bottom corner must have images to its left and diagonal left that link up with it correctly.
How would you go about creating the code to know where to place the images?
Is there a better way than using images?
At the moment performance and/or load should not be a consideration (if I need to generate maps ahead of time rather than in real time, I don't mind).
If it makes a difference I will be using HTML, CSS, and JavaScript and backed by a Ruby on Rails app.

There are two very nice browser-based, JavaScript-manipulable vector graphics packages which, together, are virtually universal: SVG and VML. They generally produce high-quality vector-based images with low bandwidth.
SVG is supported by Firefox, Opera, Safari, and Chrome - technically only part of the specification is supported, but for practical purposes you should be able to do what you need. W3Schools has a good reference for learning/using SVG.
VML is Microsoft's answer to SVG, and (surprise) is natively supported by IE, although SVG is not. MSDN has the best reference for VML.
Although it's more work, you could write two similar/somewhat integrated code bases for these two technologies. The real benefit is that users won't have to install anything to play your game online - it'll just work, for 99.9% of all users.
By the way, you say that you're asking for an algorithm, and I'm offering technologies (if that's the right term for SVG/VML). If you could clarify the input/output specification and perhaps what part presents the challenge (e.g. which naive implementation won't work, and why), that would clarify the question and maybe provide more focused answers.
Addendum: The canvas tag is becoming more widely supported, with the notable exception of IE. This might be a cleaner way to embed graphic elements in HTML.
Useful canvas stuff: Opera's canvas tutorial | Mozilla's canvas tutorial | canvas-in-IE partial implementation

Hmm. If each box can only link to its 8 neighbours, then you only have 2^8 = 256 tile types. Fewer if you limit the number of possible links from any one tile.
You can encode which links are present in an image's 8-character filename:
11000010.jpg
Or save some bytes and convert that binary value to decimal or hex:
194.jpg
Then the code. There's lots of ways you could choose to represent the map internally. One way is to have an object for each planet. A planet object knows its own position in the grid, and the positions of its linked planets. Hence it has enough information to choose the appropriate file.
Or have a 2D array. To work out which image to show for each array item, look at the 8 neighbouring array items. If you do this, you can avoid coding for boundaries by making the array two bigger in both axes, and having an empty 'border' around the edges. This saves you checking whether a neighbouring array item is off the array.
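A minimal sketch of that neighbour-scan in JavaScript (the function name `tileFilename` is mine, and it assumes - as this answer does - that a link exists whenever a neighbouring square holds a planet; the grid is padded with the empty one-cell border described above):

```javascript
// Order of the eight neighbour offsets: nw, n, ne, w, e, sw, s, se.
const OFFSETS = [
  [-1, -1], [-1, 0], [-1, 1], [0, -1], [0, 1], [1, -1], [1, 0], [1, 1],
];

// grid is a 2D boolean array (true = planet present), padded with an
// empty border so neighbour lookups never fall off the array.
function tileFilename(grid, row, col) {
  let bits = "";
  for (const [dr, dc] of OFFSETS) {
    bits += grid[row + dr][col + dc] ? "1" : "0";
  }
  return bits + ".png"; // e.g. "11000010.png"
}
```

Because of the border, the loop never has to check whether `row + dr` or `col + dc` is off the array.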

There are two ways to represent your map.
One way is to represent it as a grid of squares, where each square can have a planet/system in it or not. You can then specify that if there is a neighbor one square away in any of the eight directions (NW, N, NE, W, E, SW, S, SE), then there is a connection to that neighbor. Note, however, that in your sample map the center system is not connected to the system north-east of it, so perhaps this is not the representation you want. But it can be used to build the other representation.
The second way is to represent each square as having eight bits, defining whether or not there is a connection to a neighbor along each of the same eight directions. Presumably if there is even one connection, then the square has a system inside it, otherwise if there are no connections it is blank.
So in your example 3x3 grid, the data would be:
Tile     Connections
         nw n  ne w  e  sw s  se
nw       0  0  0  0  0  0  0  0
n        0  0  0  0  1  0  1  0
ne       0  0  0  1  0  0  0  0
w        0  0  0  0  0  0  0  0
center   0  1  0  0  0  0  1  1
e        0  0  0  0  0  0  0  0
se       0  0  0  0  0  0  0  0
s        0  1  0  0  1  0  0  0
sw       1  0  0  1  0  0  0  0
You could represent these connections as an array of eight boolean values, or much more compactly as an eight bit integer.
It's then easy to use the eight boolean values (or the eight-bit integer) to form the filename of the bitmap to load for that grid square. For example, your center tile using this scheme could be called "Bitmap01000011.png" (just using the boolean values), or alternatively "Bitmap43.png" (using the hexadecimal value of the eight-bit integer representing that binary pattern, for a shorter filename).
Since you have 256 possible combinations, you will need 256 bitmaps.
You could also reduce the data to four booleans/bits per tile, since a "north" connection, for instance, implies that the tile to the north has a "south" connection. That makes selecting the bitmaps a bit harder, but you can work it out if you want.
Alternatively you could layer between zero (empty) and nine (fully connected + system circle) bitmaps together in each square. You would just need to use transparent .png's so that you could combine them together. The downside is that the browser might be slow to draw each square (especially the fully connected ones). The advantage would be less data for you to create, and less data to load from your website.
You would represent the map itself as a table, and add your bitmaps as image links to each cell as needed.
The pseudo-code to draw the map would be:
draw_map(connection_map):
    for each grid_square in connection_map:
        connection_data = connection_map[grid_square]
        filenames = bitmap_filenames_from(connection_data)
        insert_image_references_into_table(grid_square, filenames)

# For each square having one of 256 bitmaps:
bitmap_filenames_from(connection_data):
    filename = "Bitmap"
    for each bit in connection_data:
        filename += bit ? "1" : "0"
    return [filename]

# For each square having zero through nine bitmaps:
bitmap_filenames_from(connection_data):
    # Special case - square is empty
    if 1 not in connection_data:
        return []
    filenames = []
    for i in 0..7:
        if connection_data[i]:
            filenames.append("Bitmap" + i)
    filenames.append("BitmapSystem")
    return filenames
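Translated into JavaScript, the layered-bitmap variant of the pseudo-code might look like this (the function name and the `.png` suffixes are my assumptions; flags are in the order nw, n, ne, w, e, sw, s, se):

```javascript
// Return the list of transparent PNGs to stack in one map square.
// connectionData is an array of eight 0/1 flags, one per direction.
function bitmapFilenames(connectionData) {
  // Special case - square is empty, no images at all.
  if (!connectionData.includes(1)) return [];
  // One link bitmap per set flag, named by direction index.
  const filenames = connectionData.flatMap(
    (bit, i) => (bit ? [`Bitmap${i}.png`] : [])
  );
  // The system circle goes on top of the links.
  filenames.push("BitmapSystem.png");
  return filenames;
}
```

For the center tile of the example (flags `[0,1,0,0,0,0,1,1]`) this yields the three link bitmaps plus the system bitmap, i.e. four layered images.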

I would recommend using a graphics library to draw the map. If you do you won't have the above problem and you will end up with much cleaner/simpler code. Some options are SVG, Canvas, and flash/flex.

Personally I would just render the links in game, and have the cell graphics provide only a background. This gives you more flexibility, allows you to more easily increase the number of ways cells can link to each other, and is generally more scalable.
Otherwise you will need to account for every possible way a cell might be linked, and this is rather a lot even if you take into account rotational and mirror symmetries.

Oh, and you could also just have a small number of tile PNG files with transparency on them, and overlap these using CSS-positioned divs to form a picture similar to your example, if that suffices.
Last time I checked, older versions of IE did not have great support for transparency in image files, though. Can anyone edit this to provide better info on transparency support?

As long as links have a maximum length that's not too long, then you don't have too many different possible images for each cell. You need to come up with an ordering on the kinds of image cells. For example, an integer where each bit indicates the presence or absence of an image component.
Bit 0 : Has planet
Bit 1 : Has line from planet going north
Bit 2 : Has line from planet going northwest
...
Bit 8 : Has line from planet going northeast
Ok, now create 512 images. Many languages have libraries that let you edit and write images to disk. If you like Ruby, try this: http://raa.ruby-lang.org/project/ruby-gd
I don't know how you plan to store your data structure describing the graph of planets and links. An adjacency matrix might make it easy to generate the map, although it's not the smallest representation by far. Then it's pretty straightforward to spit out html like (for a 2x2 grid):
<table border="0" cellspacing="0" cellpadding="0">
<tr>
<td><img src="cell_X.gif"></td>
<td><img src="cell_X.gif"></td>
</tr>
<tr>
<td><img src="cell_X.gif"></td>
<td><img src="cell_X.gif"></td>
</tr>
</table>
Of course, replace each X with the appropriate number corresponding to the combination of bits describing the appearance of the cell. If you're using an adjacency matrix, putting the bits together is pretty simple--just look at the cells around the "current" cell.
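Spitting out that table from a grid of cell codes can be sketched like this (`renderMapTable` is a hypothetical helper name; the codes are the bit combinations described above):

```javascript
// Build the HTML for the map table from a 2D array of cell codes.
// Each code is the integer combining the bits that describe the
// appearance of that cell (planet + link lines).
function renderMapTable(codes) {
  const rows = codes.map(row =>
    "<tr>" +
    row.map(c => `<td><img src="cell_${c}.gif"></td>`).join("") +
    "</tr>"
  );
  return `<table border="0" cellspacing="0" cellpadding="0">` +
    rows.join("") + "</table>";
}
```

On the Rails side you would generate the same markup in a view, but the cell-code lookup is identical.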

Related

Speeding up a pre-computed seam carving algorithm

I'm writing a JS seam carving library. It works great, I can rescale a 1024x1024 image very cleanly in real time as fast as I can drag it around. It looks great! But in order to get that performance I need to pre-compute a lot of data and it takes about 10 seconds. I'm trying to remove this bottleneck and am looking for ideas here.
Seam carving works by removing the lowest energy "squiggly" line of pixels from an image. e.g. If you have a 10x4 image a horizontal seam might look like this:
........x.
.x.....x.x
x.xx..x...
....xx....
So if you resize it to 10x3 you remove all the 'X' pixels. The general idea is that the seams go around the things that look visually important to you, so instead of just normal scaling where everything gets squished, you're mostly removing things that look like whitespace, and the important elements in a picture are unaffected.
The process of calculating energy levels, removing them, and re-calculating is rather expensive, so I pre-compute it in node.js and generate a .seam file.
Each seam in the .seam file is basically: starting position, direction, direction, direction, direction, .... So for the above example you'd have:
starting position: 2
seam direction: -1 1 0 1 0 -1 -1 -1 1
This is quite compact and allows me to generate .seam files in ~60-120kb for a 1024x1024 image depending on settings.
Now, in order to get fast rendering I generate a 2D grid that represents the order in which pixels should be removed. So:
(figure A):
........1.
.1.....1.1
1.11..1...
....11....
contains 1 seam of info, then we can add a 2nd seam:
(figure B):
2...2....2
.222.2.22.
......2...
and when merged you get:
2...2...12
.122.2.1.1
1211..122.
....112...
For completeness we can add seams 3 & 4:
(figures C & D):
33.3..3...
..3.33.333
4444444444
and merge them all into:
(figure E):
2343243412
3122424141
1211331224
4434112333
You'll notice that the 2s aren't all connected in this merged version, because the merged version is based on the original pixel positions, whereas the seam is based on the pixel positions at the moment the seam is calculated which, for this 2nd seam, is a 10x3px image.
This allows the front-end renderer to basically just loop over all the pixels in an image and filter them against this grid by number of desired pixels to remove. It runs at 100fps on my computer, meaning that it's perfectly suitable for single resizes on most devices. yay!
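That filtering loop might look something like this (a sketch; the name `carve` and the array layout are my assumptions, and it assumes every pixel has a 1-based seam number as in figure E):

```javascript
// Render a carved image from the pre-computed seam-order grid.
// grid[y][x] holds the 1-based number of the seam that removes
// original pixel (x, y); pixels[y][x] is the source pixel value.
// Removing the first k horizontal seams keeps every pixel whose seam
// number is greater than k, leaving height - k rows in each column.
function carve(pixels, grid, k) {
  const height = grid.length, width = grid[0].length;
  const out = Array.from({ length: height - k }, () => new Array(width));
  for (let x = 0; x < width; x++) {
    let outY = 0; // rows close up independently in each column
    for (let y = 0; y < height; y++) {
      if (grid[y][x] > k) out[outY++][x] = pixels[y][x];
    }
  }
  return out;
}
```

In a real renderer the same comparison runs per pixel against the image data array, which is why it stays fast at interactive rates.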
Now the problem that I'm trying to solve:
The decoding step from seams that go -1 1 0 1 0 -1 -1 -1 1 to the pre-computed grid of which pixels to remove is slow. The basic reason for this is that whenever one seam is removed, all the seams from there forward get shifted.
The way I'm currently calculating the "shifting" is by splicing each pixel of a seam out of a 1,048,576 element array (for a 1024x1024 px image, where each index is x * height + y for horizontal seams) that stores the original pixel positions. It's veeerrrrrryyyyy slow running .splice a million times...
This seems like a weird leetcode problem, in that perhaps there's a data structure that would allow me to know "how many pixels above this one have already been excluded by a seam" so that I know the "normalized index". But... I can't figure it out, anything I can think of requires too many re-writes to make this any faster.
Or perhaps there might be a better way to encode the seam data, but using 1-2 bits per pixel of the seam is very efficient, and anything else I can come up with would make those files huge.
Thanks for taking the time to read this!
[edit and tl;dr] -- How do I efficiently merge figures A-D into figure E? Alternatively, any ideas that yield figure E efficiently, from any compressed format
If I understand correctly, your current algorithm is:
while there are pixels in Image:
    seam = get_seam(Image)
    save(seam)
    Image = remove_seam_from_image(Image, seam)
You then want to construct an array giving, for each pixel, the number of the seam that removes it.
To do so, you could make a 1024x1024 array where each value is the index of that element of the array (y*width+x). Call this Indices.
A modified algorithm then gives you what you want
Let Indices have the dimensions of Image and be initialized to the values [0, len(Image))
Let SeamNum have the dimensions of Image and be initialized to -1
seam_num = 0
while there are pixels in Image:
    seam = get_seam(Image)
    Image = remove_seam_from_image(Image, seam)
    Indices = remove_seam_from_image_and_write_seam_num(Indices, seam, SeamNum, seam_num)
    seam_num++
remove_seam_from_image_and_write_seam_num is conceptually identical to remove_seam_from_image except that as it walks seam to remove each pixel from Indices it writes seam_num to the location in SeamNum indicated by the pixel's value in Indices.
The output is the SeamNum array you're looking for.

Detecting individual images in an array of images

I'm building a photographic film scanner. The electronic hardware is done; now I have to finish the mechanical advance mechanism, and then I'm almost done.
I'm using a line scan sensor, so it's one pixel wide by 2000 high. The data stream I will be sending to the PC over USB with an FTDI FIFO bridge will be just 1-byte pixel values. The scanner will pull through an entire strip of 36 frames, so I will end up scanning the entire strip. For the beginning I'm willing to manually split them up in Photoshop, but I would like to implement something in my program to do this for me. I'm using C++ in VS. So, basically I need to find a way for the PC to detect the near-black strips in between the images on the film, isolate the images and save them as individual files.
Could someone give me some advice for this?
That sounds pretty simple compared to the things you've already implemented; you could
calculate an average pixel value per row, and call the resulting signal s(n) (n being the row number).
set a threshold for s(n), setting everything below that threshold to 0 and everything above to 1
Assuming you don't know the exact pixel height of the black bars and the negatives, search for periodicities in s(n). What I describe in the following is total overkill, but that's how I roll:
use FFTW to calculate a discrete Fourier transform of s(n); call it S(f) (f being the frequency, i.e. 1/period).
find argmax(abs(S(f))); that f represents the distance between two black bars: number of rows / f is the bar distance.
S(f) is complex, and thus has an argument; arctan(imag(S(f_max))/real(S(f_max)))/(2π), multiplied by that bar distance, will give you the offset of the bars.
To calculate the width of the bars, you could do the same with the second highest peak of abs(S(f)), but it'll probably be easier to just count the average length of 0 around the calculated center positions of the black bars.
To get the exact width of the image strip, only take the pixels in which the image border may lie: r_left(x) would be the signal representing the few pixels in which the actual image might border on the filmstrip material, x being the coordinate along that row. Now, use a simplistic high-pass filter (e.g. f(x) := r_left(x) - r_left(x-1)) to find the sharpest edge in that region (argmax(abs(f(x)))). Use the average of these edges as the border location.
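The simpler count-the-dark-rows route can be sketched like this (JavaScript for illustration even though the asker works in C++; `findGaps` and the threshold value are my assumptions):

```javascript
// Find the near-black strips between frames on the scanned filmstrip.
// rows is an array of pixel-value arrays (one array per scanned row).
function findGaps(rows, threshold) {
  // s(n): average pixel value per row
  const s = rows.map(r => r.reduce((a, b) => a + b, 0) / r.length);
  // thresholded signal: 0 = dark row, 1 = bright row
  const dark = s.map(v => (v < threshold ? 0 : 1));
  // collect runs of consecutive dark rows as [firstRow, lastRow]
  const gaps = [];
  let start = -1;
  dark.forEach((v, n) => {
    if (v === 0 && start < 0) start = n;
    if (v === 1 && start >= 0) { gaps.push([start, n - 1]); start = -1; }
  });
  if (start >= 0) gaps.push([start, dark.length - 1]);
  return gaps;
}
```

Everything between two consecutive gaps is then one frame to crop and save.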
By the way, if you want to write a source block that takes your scanned image as input and outputs a stream of pixel row vectors, using GNU Radio would offer you a nice method of having a flow graph of connected signal processing blocks that does exactly what you want, without you having to care about getting data from A to B.
I forgot to add: use the resulting coordinates with something like OpenCV, or any other library capable of reading images and specifying sub-images by coordinates, as well as saving to new images.

Feature Vector Representation Neural Networks

Objective: Digit recognition by using Neural Networks
Description: images are normalized into 8 x 13 pixels. For each row, every black pixel is represented by 1 and every white pixel by 0. Every image is thus represented by a vector of vectors as follows:
Problem: is it possible to use a vector of vectors in Neural Networks? If not, how can the image be represented?
Combine rows into 1 vector?
Convert every row to its decimal format. Example: Row1: 11111000 = 248 etc.
Combining them into one vector simply by concatenation is certainly possible. In fact, you should notice that arbitrary reordering of the data doesn't change the results, as long as it's consistent between training and classification.
As to your second approach, I think (I am really not sure) you might lose some information that way.
To use multidimensional input, you'd need multidimensional neurons (which I suppose your formalism doesn't support). Sadly you didn't give any info on your network structure, which I think is your main source of problems and confusion. Whenever you evaluate a feature representation, you need to know how the input layer will be structured: if it's impractical, you probably need a different representation.
Your multidimensional vector:
A network that accepts 1 image as input has only 1 (!) input node containing multiple vectors (of rows, respectively). This is the worst possible representation of your data. If we:
flatten the input hierarchy: We get 1 input neuron for every row.
flatten the input hierarchy completely: we get 1 input neuron for every pixel.
Think about all 3 approaches and what each does to your data. The latter approach is almost always as bad as the first approach. Neural networks work best with features. Features are not restructurings of the pixels (your row vectors). They should be META-data you can gain from the pixels: brightness, locations where we go from black to white, bounding boxes, edges, shapes, centers of gravity, ... there's tons of stuff that can be chosen as features in image processing. You have to think about your problem and choose one (or more).
In the end, when you ask about how to "combine rows into 1 vector": You're just rephrasing "finding a feature vector for the whole image". You definitely don't want to "concatenate" your vectors and feed raw data into the network, you need to find information before you use the network. This is critical for pre-processing.
For further information on which features might be viable for OCR, just read into some papers. The most successful network atm is Convolutional Neural Network. A starting point for the topic feature extraction is here.
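For concreteness, the two candidate encodings from the question can be sketched as follows (JavaScript for illustration; both function names are mine):

```javascript
// Option 1: flatten the 13 rows of 8 pixels into one 104-element
// input vector - lossless, one input neuron per pixel.
function flattenImage(image) {
  return image.flat();
}

// Option 2: convert each row of binary pixels to its decimal value -
// 13 inputs in 0..255, e.g. [1,1,1,1,1,0,0,0] -> 248. After scaling
// to 0..1, rows differing only in low bits become nearly identical,
// which is the information loss discussed above.
function rowsToDecimal(image) {
  return image.map(row => parseInt(row.join(""), 2));
}
```

Option 1 at least preserves all the raw data; whether raw pixels are a good input at all is the separate feature question raised above.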
1) Yes, combining the rows into one vector is suitable; I use this approach:
http://vimeo.com/52775200
2) No, it is not suitable, because after normalization from the range (0-255) to the range (0-1), different rows give approximately the same values, so you lose data.

Making a good XY (scatter) chart in VB6

I need to write an application in VB6 which makes a scatter plot out of a series of data points.
The current workflow:
User inputs info.
A bunch of calculations go down.
The output data is displayed in a series of 10 list boxes.
Each time the "calculate" button is clicked, 2 to 9 entries are entered into the list boxes.
One list box contains x coordinates.
One list box contains the y coordinates.
I need to:
Scan through those list boxes, and select my x's and y's.
Another list box field will change from time to time, varying between 0 and 100, and that field is what needs to differentiate which series on the eventual graph the x's and y's go into. So I will have Series 1 with six (x,y) data points, Series 26 with six data points, Series 99 with six data points, etc. Or eight data points. Or two data points. The user controls how many x's there are.
Ideally, I'll have a graph with multiple series displaying all this info.
I am not allowed to use a 3rd party solution (e.g. Excel). This all has to be contained in a VB6 application.
I'm currently trying to do this with MS Chart, as there seems to be the most documentation for that. However, this seems to focus on pie charts and other unrelated visualizations.
I'm totally open to using MS Graph but I don't know the tool and can't find good documentation.
A 2D array is, I think, a no go, since it would need to be of a constantly dynamically changing size, and that can't be done (or so I've been told). I would ideally cull through the runs, sort the data by that third series parameter, and then plug in the x's and y's, but I'm finding the commands and structure for MS Chart to be so dense that I'm just running around in very small circles.
Edit: It would probably help if you can visualize what my data looks like. (S for series, made up numbers.)
S  X  Y
1  0  1000000
1  2  500000
1  4  250000
1  6  100000
2  0  1000000
2  2  6500
2  4  5444
2  6  1111
I don't know MSGraph, but I'm sure there is some sort of canvas-like control in VB6 (such as a PictureBox) which you can use to draw dots yourself. Scatter plots are an easy graph to make on your own, so long as you don't need to calculate a line of best fit.
I would suggest looking into the canvas element and doing it by hand if you can't find a tool that does it for you.
Conclusion: MSChart and MSGraph can both go suck a lemon. I toiled and toiled and got a whole pile of nothing out of either one. I know they can do scatter plots, but I sure as heck can't make them do 'em well.
@BlackBear: After finding out that my predecessor had the same problems and just used Pset and Line to make some really impressive graphs, I did the same thing - even if it's not as reproducible and generic as was desired. The solution that works, albeit less functionally, beats the solution with great functionality that exists only in myth.
If anyone is reading this down the line and has an actual answer about scatter plots and MSChart/Graph, I'd still love to know.

Compressing/packing "don't care" bits into 3 states

At the moment I am working on an on screen display project with black, white and transparent pixels. (This is an open source project: http://code.google.com/p/super-osd; that shows the 256x192 pixel set/clear OSD in development but I'm migrating to a white/black/clear OSD.)
Since each pixel is black, white or transparent I can use a simple 2 bit/4 state encoding where I store the black/white selection and the transparent selection. So I would have a truth table like this (x = don't care):
B/W  T
 x   0    pixel is transparent
 0   1    pixel is black
 1   1    pixel is white
However as can be clearly seen this wastes one bit when the pixel is transparent. I'm designing for a memory constrained microcontroller, so whenever I can save memory it is good.
So I'm trying to think of a way to pack these 3 states into some larger unit (say, a byte.) I am open to using lookup tables to decode and encode the data, so a complex algorithm can be used, but it cannot depend on the states of the pixels before or after the current unit/byte (this rules out any proper data compression algorithm) and the size must be consistent; that is, a scene with all transparent pixels must be the same as a scene with random noise. I was imagining something on the level of densely packed decimal which packs 3 x 4-bit (0-9) BCD numbers in only 10 bits with something like 24 states remaining out of the 1024, which is great. So does anyone have any ideas?
Any suggestions? Thanks!
In a byte (256 possible values) you can store 5 of your three-state values. One way to look at it: three to the fifth power is 243, slightly less than 256. The fact that it's only slightly less also shows that you're hardly wasting any fraction of a bit.
For encoding five of your 3-state "digits" into a byte, think of taking a number in base 3 made from your five "digits" in succession -- the resulting value is guaranteed to be less than 243 and therefore directly storable in a byte. Similarly, for decoding, do the base-3 conversion of a byte's value.
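A sketch of that base-3 packing (JavaScript for illustration; on the microcontroller you would likely implement the same thing with the lookup tables the asker mentions, and the 0/1/2 state assignment is arbitrary):

```javascript
// Pack five 3-state pixel values (each 0, 1 or 2) into one byte.
// The result is a base-3 number, guaranteed to be in 0..242.
function packPixels(five) {
  return five.reduce((acc, v) => acc * 3 + v, 0);
}

// Decode a byte back into its five 3-state values.
function unpackPixels(byte) {
  const out = new Array(5);
  for (let i = 4; i >= 0; i--) {
    out[i] = byte % 3;
    byte = Math.floor(byte / 3);
  }
  return out;
}
```

That is 8 bits for 5 pixels (1.6 bits/pixel) instead of 2 bits/pixel, with a fixed size regardless of content - an all-transparent scene packs to the same number of bytes as random noise.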
