AS3 tile map rendering (with thousands of tiles) - performance

Just first off I'll say that the context here is Actionscript 3.0 (IDE: Flashbuilder) along with the Starling Framework.
So, I want to create a Tile Map that could be used for a platformer or something similar.
I want to use 8x8 pixel tiles on an 800x600 pixel stage, and the problem I am having is that I don't know how to add these 7500+ tile objects to the stage without dramatically reducing the framerate.
I've found that the drop in performance comes from adding each tile to the stage, not from initializing each Tile object.
I know I'm not giving much specific information, but what I'm asking is whether there is a standard way to draw thousands of static objects onto the stage without a loss of performance. I feel like there is a way, and I just have yet to find it.
Update:
After all of your kind help, I have found what seems to be a great solution. At first I wanted to implement Amy's solution, using copyPixels() and draw() to make one large BitmapData for the whole map and then render that to the screen. Then, though, I wanted to know if there was a Starling equivalent, because everything would be so much simpler if I didn't have to mix Starling with native Flash.
Thanks to Amy again, I looked into Starling's RenderTexture class a bit more and found that, using its drawBundled() and draw() methods, I could easily draw all of the tiles into a RenderTexture, put the RenderTexture into an Image (Starling's Image class), and then just add that Image to the screen.
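For reference, here's a rough sketch of that approach, assuming a single placeholder tileTexture and the 800x600 stage / 8x8 tiles from the question (in practice you'd pick the texture per tile):
var mapTexture:RenderTexture = new RenderTexture(800, 600);
var tile:Image = new Image(tileTexture); // tileTexture is a placeholder asset
mapTexture.drawBundled(function():void
{
    // one batched render pass: 100 x 75 = 7500 draws into the texture
    for (var row:int = 0; row < 75; row++)
    {
        for (var col:int = 0; col < 100; col++)
        {
            tile.x = col * 8;
            tile.y = row * 8;
            mapTexture.draw(tile);
        }
    }
});
var mapImage:Image = new Image(mapTexture);
addChild(mapImage); // a single display object ends up on the stage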
That solution is a million times faster than the silly slow approaches I tried before, with flattening sprites and such. It's faster in its initialization time, and there seems to be no drop in framerate while the RenderTexture's Image is on the screen.
The one thing I still want to test is whether it is easy to update the graphics of a tile during gameplay. Say water spreads from a source (or something) and a "Grass" tile has to become a "Water" tile: would the RenderTexture and its Image be able to change their appearance without some sort of lag spike or performance hiccup? I will test this out soon.
Thank you all for your help!

Don't add that many objects to the stage. Instead, create a BitmapData the size of your stage and use copyPixels() or draw() to draw onto it. Here's an article that should get you started. You can then take the concepts you learned in that post and learn anything specific you need that's not covered (flashandmath.com has a lot of good tutorials about pixel manipulation).

You need to manage the tiles that are added and removed as you move around the game. Only add to the stage tiles that are within 800 px of the center of the screen; once a tile is more than 800 px from the center, remove it. That should keep everything moving smoothly; a rough sketch is below. Good luck.
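Something like this, assuming your tiles live in a tiles array and cameraX/cameraY come from your own scrolling code (both are placeholders):
for each (var tile:Sprite in tiles)
{
    var inRange:Boolean = Math.abs(tile.x - cameraX) < 800 && Math.abs(tile.y - cameraY) < 800;
    if (inRange && tile.parent == null)
        addChild(tile);        // tile entered the visible range
    else if (!inRange && tile.parent != null)
        removeChild(tile);     // tile left the visible range
}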
Or look into drawing/copying your tiles into one bitmap. You would basically be stamping your tiles onto the new bitmap. Here is an example from Adobe:
import flash.display.Bitmap;
import flash.display.BitmapData;
import flash.geom.Rectangle;
import flash.geom.Point;

// two source bitmaps: a 40x40 blue one and an 80x40 green one
var bmd1:BitmapData = new BitmapData(40, 40, false, 0x000000FF);
var bmd2:BitmapData = new BitmapData(80, 40, false, 0x0000CC44);

// stamp a 20x20 region of bmd1 onto bmd2 at (10, 10)
var rect:Rectangle = new Rectangle(0, 0, 20, 20);
var pt:Point = new Point(10, 10);
bmd2.copyPixels(bmd1, rect, pt);

// wrap both in Bitmap display objects and add them to the display list
var bm1:Bitmap = new Bitmap(bmd1);
this.addChild(bm1);
var bm2:Bitmap = new Bitmap(bmd2);
this.addChild(bm2);
bm2.x = 50;
More info on the BitmapData class. I think copyPixels() is what you are after.

Related

Keep image centered in resized JavaFX Canvas

I am getting my feet wet with JavaFX, and have a simple drawing program which writes to a Canvas using a PixelWriter. The program draws a pixel at a time, reflecting each pixel over a number of axes to create a growing and evolving pattern centered on the canvas:
The Canvas is in the Center region of a BorderPane, and I have written the code to resize the canvas when the application window is resized. That works OK.
However, I would like to re-center the image on the new resized canvas so that the drawing can continue to grow on the larger canvas. What might be the best approach?
My ideas/attempts so far:
Capture a snapshot of the canvas and write it back to the resized canvas, but that comes out blurry (a couple of code examples are below).
I dug into GraphicsContext translations, but those don't seem to move the existing image; they only affect future drawing.
Maybe, instead of resizing the canvas, I could make a huge canvas bigger than I ever expect my app window to be and center it over the center region of the BorderPane (perhaps using a viewport of some kind?). I'm not thrilled about making some arbitrarily huge canvas and just hoping it will be big enough, though. I also don't want to get into scaling; I am using PixelWriter so that I get the crispest image, without antialiasing and other processing.
My snapshot attempt looked like this, but was blurry:
SnapshotParameters params = new SnapshotParameters();
params.setFill(Color.WHITE);
WritableImage image = canvas.snapshot(params, null);
canvas.getGraphicsContext2D().drawImage(image, 50, 50);
The 50, 50 offset above is just for my testing/learning; I'll replace it with a proper computed offset once I get the basic copy working. Following the post "How to copy contents of one canvas to another?" I played with the setFill() parameter, to no effect.
From the post "How to save a high DPI snapshot of a JavaFX Canvas" I tried the following code. It was clearer, but I have not been able to figure out how to find or compute the pixelScale to get the most accurate snapshot (the value 10 is just a number bigger than 1 that I typed in to see how it reacted):
int pixelScale = 10;
WritableImage image = new WritableImage((int)Math.rint(pixelScale * canvas.getWidth()),(int)Math.rint(pixelScale * canvas.getHeight()));
SnapshotParameters params = new SnapshotParameters();
params.setTransform(Transform.scale(pixelScale, pixelScale));
params.setFill(Color.WHITE);
canvas.snapshot(params, image);
canvas.getGraphicsContext2D().drawImage(image, 50, 50);
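One idea I have not verified is to read the render scale from the window (JavaFX 9+ exposes it) instead of hard-coding a value:
double pixelScale = canvas.getScene().getWindow().getRenderScaleX(); // 1.0 on standard DPI, 2.0 on HiDPI, etc.
WritableImage image = new WritableImage((int) Math.rint(pixelScale * canvas.getWidth()),
        (int) Math.rint(pixelScale * canvas.getHeight()));
SnapshotParameters params = new SnapshotParameters();
params.setTransform(Transform.scale(pixelScale, pixelScale));
params.setFill(Color.WHITE);
canvas.snapshot(params, image);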
Thanks for any direction y'all can point me in!

Render texture doesn't update changes made, how to ensure this happens?

I'm building a system which has a set of quads in front of each other, forming a layer system. These layers are rendered by an orthographic camera with a render texture, which is used to generate a texture and save it to disk after the layers are populated. It happens that I need to disable some of those layers before the final texture is generated. So I built a module that disables those specific layers' mesh renderers and raises an event to start the render-to-texture conversion.
To my surprise, the disabled layers still show up in the final image. I'm really confused about this, because I have already debugged the code every way I could, and according to the code those specific layers shouldn't be visible at all. It must have something to do with how often render textures update, or some other obscure execution order. The entire module is composed of 3 or 4 classes with dozens of lines, so to show the issue more succinctly, I'll post only the method where the RT is converted into a texture, with some checks I made just before the RT pixels are read into the new texture:
public void SaveTexture(string textureName, TextureFormat textureFormat)
{
    renderTexture = GetComponent<Camera>().targetTexture;
    RenderTexture.active = renderTexture;
    var finalTexture = new Texture2D(renderTexture.width,
        renderTexture.height, textureFormat, false);

    /*First test: confirming that the marked quad's mesh renderer is, in fact,
    disabled, meaning it shouldn't be visible to the camera and consequently
    should be invisible in the RT. The console shows "false", meaning it's
    disabled. Even so, the quad is still rendered in the final image.*/
    //Debug.Log(transform.GetChild(6).GetChild(0).GetComponent<MeshRenderer>().enabled);

    /*Second test: changing the object's layer, because the projection camera
    has a culling mask set to only capture objects on one specific layer.
    Again, it doesn't work and the quad's content is still saved in the final image.*/
    //transform.GetChild(6).GetChild(0).gameObject.layer = 0;

    /*Final test: destroying the object to ensure it doesn't appear in the RT.
    This also doesn't work, confirming that no matter what I do, the RT is
    "fixed" at this point of execution and doesn't pick up any changes made
    to its composition.*/
    //Destroy(transform.GetChild(6).GetChild(0).gameObject);

    finalTexture.ReadPixels(new Rect(0, 0, renderTexture.width,
        renderTexture.height), 0, 0);
    finalTexture.Apply();
    finalTexture.name = textureName;

    var teamTitle = generationController.activeTeam.title;
    var kitIndex = generationController.activeKitIndex;
    var customDirectory = saveDirectory + teamTitle + "/" + kitIndex + "/";
    StorageManager<Texture2D>.Save(finalTexture, customDirectory, finalTexture.name);

    RenderTexture.active = null;
    onSaved();
}
Funny thing is, if I manually disable that quad in the Inspector (at runtime, just before triggering the method above), it works, and the final texture is generated without the disabled layer.
I tried my best to show my problem; this is one of those issues that are kind of hard to show here, but hopefully somebody will have some insight into what is happening and what I should do to solve it.
There are two possible solutions to my issue (I got the answer on the Unity Forum). The first is to use the OnPreRender and OnPostRender methods to properly organize what should happen before and after the camera's render update. What I ended up doing, though, was calling the camera's manual render method with the line GetComponent<Camera>().Render();, which updates the camera render on demand. Considering that my structure was already in place, this single line solved my problem!
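A minimal sketch of that order of operations (the class and field names here are only illustrative):
using UnityEngine;

public class LayerCaptureExample : MonoBehaviour
{
    public MeshRenderer hiddenQuad; // the quad that must not appear in the capture

    public void CaptureWithoutHiddenLayer()
    {
        hiddenQuad.enabled = false;       // hide the layer first
        GetComponent<Camera>().Render();  // force the camera to redraw its targetTexture now
        // ...at this point it is safe to read the RenderTexture back (e.g. call SaveTexture)
    }
}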

Unity 3D - Dynamically 'slicing' background image

I'm fairly new to Unity and not quite sure how to handle this problem.
I have two images, one with clouds on it (day) and one with stars on it (night). What I want to do is show the clouds in the top of my scene and the stars at the bottom. There is a ground object in the middle of the screen that the player will walk on; this should be the dividing line between the two images. The ground, however, is not one straight line and can have height differences.
The "solution" I came up with is to use the ground object(s) to slice the images so they kind of serve as a dividing line, but I'm not sure if this is even possible. Maybe I could do something with two different cameras, or mask the images somehow. (Just throwing my own thoughts in here as well.) I'll be fumbling around with these things in the meantime and try to keep the topic up to date with what I've tried.
I put in an attachment to (hopefully) make it more clear.
Greets,
Lukie
attachment: https://imgur.com/a/lblJXPi
The first solution that came to my mind was preparing a tileset, if you're not going to design a different section every time - that is, if you're not going to have the computer generate the design, you can build it yourself by adjusting the size.
You can also dynamically generate the stars on the -y side of the ground object and the clouds on the +y side. You can use the Instantiate function.
Example:
public GameObject clouds;
public GameObject stars;

private void Awake()
{
    // spawn the clouds above and the stars below this (ground) object
    Instantiate(clouds, new Vector3(this.transform.position.x, this.transform.position.y + 3.625f, this.transform.position.z), Quaternion.identity);
    Instantiate(stars, new Vector3(this.transform.position.x, this.transform.position.y - 3.625f, this.transform.position.z), Quaternion.identity);
}
Of course, the background art you use here needs to be repeatable (it should tile seamlessly).
Dynamic Background

How do I Crop Images in Flutter?

I have been searching for days on this question.
I want to crop images like this in Flutter:
GIF Source: https://github.com/ArthurHub/Android-Image-Cropper
The closest library for this is the image lib, which offers image manipulation including cropping, but I want to crop images at the UI level like in this GIF. None of the libraries I found offer that.
There is no widget that performs all of that for you. However, I believe it is possible to write it natively in Flutter now. I don't have time at this particular moment to do it for you, but I can definitely point you in the right direction.
You're going to need to load the image in such a way that you can either draw it onto a canvas or use a RawImage to draw it rather than using the Image widget directly.
You need to figure out a co-ordinate system relative to the image
You'll need to find a way of drawing the crop indicator - you could do this either by drawing directly on the canvas or possibly using some combination of GestureDetector/Draggable/DragTarget. I'd suggest that sticking to Canvas might be the easiest to start.
Once the user has selected a part of the image, you need to translate the screen co-ordinates to picture co-ordinates.
You then have to create an off-screen canvas to draw the cropped image to. There are various transforms you'll have to do to make sure the image ends up in the right place.
Once you've made the off-screen crop, you'll have to display the new image.
All of that is quite a lot of work, and probably a lot of finessing to get right.
Here are examples for a couple of the steps you'll need to do, but you'll have to figure out how to put them together.
Loading an image:
var byteData = await rootBundle.load("assets/image.jpg");
Uint8List lst = new Uint8List.view(byteData.buffer);
var codec = await UI.instantiateImageCodec(lst);
var nextFrame = await codec.getNextFrame();
var image = nextFrame.image;
Displaying an image on a canvas:
https://docs.flutter.io/flutter/dart-ui/Canvas/drawImageRect.html
https://docs.flutter.io/flutter/rendering/CustomPainter-class.html
Writing an image to an off-screen canvas:
ui.Image getCroppedImage(ui.Image image, Rect src, Rect dst) {
  var pictureRecorder = new ui.PictureRecorder();
  Canvas canvas = new Canvas(pictureRecorder);
  canvas.drawImageRect(image, src, dst, Paint());
  return pictureRecorder.endRecording().toImage(dst.width.floor(), dst.height.floor());
}
You'll probably need to do something like this answer for getting the local coordinates of mouse/touch gestures.
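A rough sketch of that co-ordinate translation (the widget structure and myImagePainter are placeholders for whatever actually draws your image):
Widget buildImageArea(BuildContext context) {
  return GestureDetector(
    onPanUpdate: (DragUpdateDetails details) {
      // convert the global pointer position into this widget's local space
      final RenderBox box = context.findRenderObject() as RenderBox;
      final Offset localPos = box.globalToLocal(details.globalPosition);
      // localPos still needs to be scaled into image co-ordinates
      print(localPos);
    },
    child: CustomPaint(painter: myImagePainter),
  );
}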
Some advice - I'd start as simple as possible, not thinking about performance to start (i.e. draw everything each paint if needed, etc). Then once you get the basics working you can start thinking of optimization (i.e. using a RawImage, Transform, and Stack for the image and only re-drawing the selector, etc).
If you need any additional help let me know in a comment and I'll do my best to answer. Now that I've been writing about this a bit it does make me slightly curious to try implementing it so I may try at some point, but it probably won't be soon as I'm quite low on time at the moment. Good luck =D
The image_cropper plugin does exactly what you are looking for.

Scrolling texture on quad is laggy

What could be the reason for lag while I scroll a simple texture on a quad using Unity 4? The lag is not consistent: the scroll works smoothly for 3 or 4 seconds, then it lags, and so on.
Here is the code:
public float speed = 0.01f;

manager.scroll_speed = Mathf.Repeat(Time.time * speed, 1);
renderer.sharedMaterial.SetTextureOffset("_MainTex", new Vector2(manager.scroll_speed, 0));
What should I do to get rid of the lag?
Modifying sharedMaterial will change the appearance of all objects using that material, and it changes the material settings stored in the project too. I'm guessing you did this on purpose, but maybe scrolling textures on a whole bunch of objects is just not efficient. If it's just one object, then use renderer.material instead. Actually, try using renderer.material anyway, with separate instances of the script.
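A minimal sketch of that per-instance approach (class and field names are illustrative only):
using UnityEngine;

public class ScrollTexture : MonoBehaviour
{
    public float speed = 0.01f;
    private Material mat;

    void Start()
    {
        // renderer.material returns an instance unique to this renderer,
        // so the shared asset and other objects are left untouched
        mat = GetComponent<Renderer>().material;
    }

    void Update()
    {
        float offset = Mathf.Repeat(Time.time * speed, 1f);
        mat.SetTextureOffset("_MainTex", new Vector2(offset, 0f));
    }
}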
