First off, I'm not totally sure if "texture masks" is the correct term to use here, so if someone knows the proper term, please let me know.
So, the real question: I want an object in GameMaker: Studio whose texture changes as it moves around, depending on its position, pulling from a larger static image behind it. I've made a quick gif of what it might look like.
It can be found here
Another image that might help explain this is the "source-in" section of this image.
This is a reply to the same question posted on the Steam GML forum by MrDave:
The feature you are looking for is draw_set_blend_mode(bm_subtract)
Basically you will have to draw everything onto a surface, and then, using the code above, you switch the draw mode to bm_subtract. What this will do is, rather than drawing images to the screen, remove them. So you now draw blocks over the background, and this will remove that area. Then you can draw everything you just put on the surface onto the screen.
(Remember to reset the draw mode and the surface target afterwards.)
It's hard to get your head around the first time, but it actually isn't all that complex once you get used to it.
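Putting that together, a minimal GML sketch (the names surf, spr_background, spr_mask and obj_mover are placeholders for your own surface, background sprite, mask sprite and moving object):

    // Draw event of a controller object. Draw the background onto a surface,
    // then punch out the moving object's shape with bm_subtract.
    if (!surface_exists(surf)) surf = surface_create(room_width, room_height);
    surface_set_target(surf);
    draw_clear_alpha(c_black, 0);
    draw_sprite(spr_background, 0, 0, 0);
    draw_set_blend_mode(bm_subtract);
    draw_sprite(spr_mask, 0, obj_mover.x, obj_mover.y); // removes this area
    draw_set_blend_mode(bm_normal);  // reset the draw mode
    surface_reset_target();          // reset the surface target
    draw_surface(surf, 0, 0);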
Related
I'm working on a project that uses a lot of lines and marks with the camera at a very low angle (almost at ground level). I'm also using an outline effect to highlight selected objects in different ways (selection, collisions, etc.).
Native AA is lost when using postprocessing effects (e.g. the outline effect). This causes jagged lines on screen, which are more noticeable the closer the camera is to ground level.
I have created this jsFiddle to illustrate the issue (using ThreeJS r111):
https://jsfiddle.net/Eketol/s143behw/
Just press/touch the 3D scene to render without postprocessing effects, and release to render with them again.
Some posts suggest that an FXAAShader pass will solve it, but I haven't had any luck with it. Instead, I get some artifacts on the scene, and in some cases the whole image is broken.
So far my options are:
Find a workaround that produces the outline effect without postprocessing:
The ones I've seen around (e.g. https://stemkoski.github.io/Three.js/Outline.html) duplicate the meshes/geometries to render a bigger version with a solid color behind the main object. While that may be OK for basic static geometries like a cube, it doesn't seem like an efficient solution for complex 3D objects that you need to drag around (objects with multiple meshes).
Increasing the renderer.pixelratio value to get a bigger frame size: this is not an option for me. In my tests it doesn't make a big difference, and it also makes the rendering slower.
Try to get FXAAShader working without artifacts: as I said, it doesn't seem to fix the issue as well as the native AA does (and it is not as compatible). Maybe I'm not using it correctly (the usual wiring is sketched below), but I just get antialiased jagged lines.
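For reference, the standard FXAA wiring in Three.js (r111 era) looks roughly like this; width and height are assumed to be the canvas size. A frequent cause of artifacts is forgetting to fold the device pixel ratio into the resolution uniform:

    import * as THREE from 'three';
    import { EffectComposer } from 'three/examples/jsm/postprocessing/EffectComposer.js';
    import { RenderPass } from 'three/examples/jsm/postprocessing/RenderPass.js';
    import { OutlinePass } from 'three/examples/jsm/postprocessing/OutlinePass.js';
    import { ShaderPass } from 'three/examples/jsm/postprocessing/ShaderPass.js';
    import { FXAAShader } from 'three/examples/jsm/shaders/FXAAShader.js';

    const composer = new EffectComposer(renderer);
    composer.addPass(new RenderPass(scene, camera));
    composer.addPass(new OutlinePass(new THREE.Vector2(width, height), scene, camera));

    // FXAA runs last, over the composed (outlined) image
    const fxaaPass = new ShaderPass(FXAAShader);
    const pr = renderer.getPixelRatio();
    fxaaPass.material.uniforms['resolution'].value.set(1 / (width * pr), 1 / (height * pr));
    composer.addPass(fxaaPass);

    // then call composer.render() in the animation loop instead of renderer.render()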
Question 1: It may sound silly, but I thought there would be an easy way to send the antialiased image directly to the composer, or at least some extra pass to do this while keeping the native AA. Is this possible?
Question 2: Maybe I could use Scene.onAfterRender to get the image with native AA and then blend the outline effect in somehow?
Question 3: Googling around, it seems this issue also affects Unity. In this post, it says this won't be a problem with WebGL2. Does this also apply to ThreeJS?
I want to remove the regular stripes from the image shown below. I have tried many methods, such as a median filter and an FFT filter, and they do not work.
Could you tell me how to remove the stripes?
All that black is removing a ton of information from the image. You have two options available - either re-capture that missing information in a new shot, or attempt to invent / synthesize / extrapolate the missing information with software.
If you can re-shoot, get your camera as close to the mesh fence as you can, use the largest aperture your lens supports to have the shallowest possible depth of field, and set your focus point as deep as possible - this will minimize the appearance of the mesh.
If that is the only still you have to work with, you've got a few dozen hours of playing with the clone and blur tools in front of you in just about any image editing software package you like.
Photoshop would be my go-to tool of choice for this. Photoshop CS5 introduced something called Content-Aware Fill. I'm not sure if it will help in this specific case, because there is SO MUCH black that Adobe's algorithm may think other parts of the mesh are valid sources for filling in the mesh you're trying to clear out.
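If you'd rather experiment programmatically, OpenCV's inpainting (a different technique, but the closest open-source analogue to content-aware fill) is worth a try. A rough Python sketch, assuming the mesh is the darkest thing in the frame and 'fence.jpg' is your still:

    import cv2
    import numpy as np

    img = cv2.imread('fence.jpg')
    # Build a mask of the dark mesh: pixels darker than a threshold (tune per shot)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    mask = cv2.inRange(gray, 0, 40)
    mask = cv2.dilate(mask, np.ones((3, 3), np.uint8))  # cover fringe pixels too
    # Telea inpainting synthesizes the masked pixels from their surroundings
    result = cv2.inpaint(img, mask, inpaintRadius=3, flags=cv2.INPAINT_TELEA)
    cv2.imwrite('fence_inpainted.png', result)

With this much missing information the result will still be rough, but it can cut down the hours of manual clone-tool work.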
As an exercise, I decided to write a SimCity (original) clone in Swift for OSX. I started the project using SpriteKit, originally having each tile as an instance of SKSpriteNode and swapping the texture of each node when that tile changed. This caused terrible performance, so I switched the drawing over to regular Cocoa windows, implementing drawRect to draw NSImages at the correct tile position. This solution worked well until I needed to implement animated tiles which refresh very quickly.
From here, I went back to the first approach, this time using a texture atlas to reduce the number of draws needed; however, swapping the textures of nodes that need to be animated was still very slow and had a huge detrimental effect on the frame rate.
I'm attempting to display a 44x44 tile map where each tile is 16x16 pixels. I know there must be an efficient (or perhaps more correct) way to do this. This leads to my question:
Is there an efficient way to support 1500+ nodes in SpriteKit that are animated by changing their textures? More importantly, am I taking the wrong approach by using SpriteKit and an SKSpriteNode for each tile in the map (even if I only redraw the dirty ones)? Would another approach (perhaps OpenGL?) be better?
Any help would be greatly appreciated. I'd be happy to provide code samples, but I'm not sure how relevant/helpful they would be for this question.
Edit
Here are some links to relevant drawing code and images to demonstrate the issue:
Screenshot:
When the player clicks on the small map, the center position of the large map changes. An event is fired from the small map to the central engine powering the game, which then forwards it to listeners. The code that gets executed on the large map to change all of the textures can be found here:
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/GameScene.swift#L489
That code uses tileImages, which is a wrapper around a texture atlas that is generated at runtime:
https://github.com/chrisbenincasa/Swiftopolis/blob/drawing-performance/Swiftopolis/TileImages.swift
Please excuse the messiness of the code -- I made an alternate branch for this investigation and haven't cleaned up a lot of residual code that has been hanging around from previous iterations.
I don't know if this will "answer" your question, but may help.
SpriteKit will likely be able to handle what you need, but you need to look at different optimizations, both in SpriteKit and, even more so, in your game logic.
SpriteKit: Creating a .atlas is by far one of the best things you can do and will help keep your draw calls down. Also, as I learned the hard way, keep a pointer to your SKTextures for as long as you need them, and only generate the ones you need. For instance, don't call textureWithImageNamed:@"myImage" every time you need a texture for myImage; instead keep reusing one texture and store it in a dictionary. Also, skView.ignoresSiblingOrder = YES; helps a bunch, but then you have to manage your own zPosition on all the sprites.
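In Swift that caching advice might look something like this (a sketch; the "tiles" atlas name and the tile name are placeholders):

    import SpriteKit

    // Minimal texture cache: each SKTexture is created once and reused
    final class TileTextureCache {
        static let shared = TileTextureCache()
        private let atlas = SKTextureAtlas(named: "tiles")
        private var cache: [String: SKTexture] = [:]

        func texture(named name: String) -> SKTexture {
            if let cached = cache[name] { return cached }
            let texture = atlas.textureNamed(name)
            cache[name] = texture
            return texture
        }
    }

    // Reuse the cached texture instead of re-creating it every frame:
    // node.texture = TileTextureCache.shared.texture(named: "residential_3")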
Game logic: Updating every tile every loop is going to be very expensive. You will want to look at a better way to do that: keep smaller arrays, or maybe do the logic (model) updates on a background thread.
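For example, you could have the simulation flag only the tiles it changed and touch just those nodes in the scene's update pass. A hypothetical sketch, building on the cache above (tileNodes and tileName(at:) stand in for your own map storage):

    // Inside your SKScene subclass
    var dirtyTiles = Set<Int>()   // indices the model changed since last frame

    override func update(_ currentTime: TimeInterval) {
        for index in dirtyTiles {
            tileNodes[index].texture =
                TileTextureCache.shared.texture(named: tileName(at: index))
        }
        dirtyTiles.removeAll()
    }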
I currently have a project you can look into if you want, called Old Frank. I have a map that is 75 x 75 with 32px by 32px tiles that may be stacked 2 tall. I have both Mac and iOS targets, so you could in theory blow up the scene size and see how the performance holds up. I'm not saying there isn't optimization work to be done (it is a work in progress), but I feel it might help point you in the right direction at least.
Hope that helps.
I have a few thousand images from our vendors. They are models wearing fashion clothing. I need to keep only the clothes part of the images, discard the rest, and make the background transparent. All the images have a single-color background, but the color differs from image to image. Currently we perform the following steps manually, and I need suggestions on whether there is a way to do this automatically, or a way to make the manual process faster. We use GIMP and Script-Fu to automate part of this process (see the steps below), but the remaining manual part is still very time consuming. Is there any tool, programming language, or script that can make this process faster?
This is the way we are doing it now:
1. Run a GIMP Script-Fu batch to make all image backgrounds transparent.
2. Load each image into GIMP manually, one by one.
3. Using the Free Select tool, mark around the clothes.
4. Remove everything outside the marked clothes area.
5. Export and save the image in PNG format.
6. Run a Script-Fu batch to auto-crop all the images.
I haven't figured out a way (code or script) to do step 3 automatically. Does anyone know if that is even possible? If it is not, is there any tool that could combine steps 4-6 into one control key to reduce the keystrokes, or any faster way to finish these images?
Thank you for your suggestions. This is what I am thinking of doing to automate my steps 3 and 4. Do you think this approach would work? Is there a better way to handle it?
All the images will already have a transparent background via our batch job, so the idea now is to remove the body parts.
1. Auto-crop all the images, so the head and feet are at the topmost and bottommost edges of the image.
2. Code a PHP program to detect skin colors from a database list of skin tones.
3. Go through each pixel of the image and detect where the skin color starts.
4. Starting from the top of the image, the first pixel with a skin color must be part of the head or neck. I remove everything above that first pixel, so I can get rid of the hair if the image shows the model's full head. Anything below could be the face and neck; I will just replace those colors with the transparent background. I still don't know how to get rid of the hair on the right and left sides of the face.
5. Search from the bottom of the image, pixel by pixel, until a skin color matches. I remove everything from that pixel to the bottom of the image. This way I can get rid of the shoes as well.
6. Replace the remaining skin with the transparent background.
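For what it's worth, here is a rough sketch of the pixel-scanning idea (steps 3-6 above) in Python with Pillow and NumPy rather than PHP; the logic transfers. The RGB bounds are a crude placeholder skin classifier you would have to tune against your own photos:

    import numpy as np
    from PIL import Image

    # Crude placeholder skin classifier in RGB space -- real skin tones
    # vary widely, so treat these bounds as a starting point only.
    def is_skin(rgb):
        r = rgb[..., 0].astype(int)
        g = rgb[..., 1].astype(int)
        b = rgb[..., 2].astype(int)
        return (r > 95) & (g > 40) & (b > 20) & (r > g) & (r > b) & ((r - g) > 15)

    img = Image.open('model.png').convert('RGBA')   # background already transparent
    px = np.array(img)
    skin = is_skin(px[..., :3]) & (px[..., 3] > 0)  # ignore transparent pixels

    rows = np.where(skin.any(axis=1))[0]
    if rows.size:
        top, bottom = rows[0], rows[-1]
        px[:top, :, 3] = 0         # step 4: clear everything above the first skin row
        px[bottom + 1:, :, 3] = 0  # step 5: clear everything below the last skin row
    px[skin, 3] = 0                # step 6: make the remaining skin transparent

    Image.fromarray(px).save('model_clothes_only.png')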
The problem is that the hands sometimes cover the clothes, and I am not sure how to handle that. Perhaps if the adjacent pixel is not the transparent background, then leave that part alone.
I also don't know how to handle clothes (a dress or blouse) that may have the same color as the skin.
Step 3 can be semi-automated. That is, it can be done in a way that requires far less human interaction than you are currently using. I get the impression from your question that you are not a programmer, so I'll point you to a specific off-the-shelf tool called Power Stroke. It plugs into non-free tools. There may be a GIMP equivalent; I don't know.
I know this is leaning more in the direction of a designer question, but as I am faced with developing something which requires me to crop an image, I thought I would give the question a shot.
This seems like a ridiculous question to ask, but I've looked all over the IDE (Expression Blend 2) to try to find a way to crop my image, and I can't figure it out.
This seems to be very much in line with Joel's question and is discussed in Podcast 58 in the sense that I'm a complete noob when it comes to designing in Expression Blend. I am adamantly interested in figuring out the most efficient way to do this. I found an article that describes a work flow you can go through that will produce a crop, which I added as an answer below, but I'm really hoping someone else will know of a quicker (less clicks) way to do something as trivial as this.
Does anyone know how this can be done?
As far as I know, there's no way to crop an image directly in Expression Blend. Blend is not an image-editing application; you need another tool for that.
What you can do, though, is clip an image if you only want to show a portion of it. Just add a rectangle on top of it, right-click it, and go to Path -> Make Clipping Path.
(Screenshot: http://img200.imageshack.us/img200/7370/example1.jpg)
Now select the System.Windows.Controls.Image entry you want to apply the clipping to from the list and hit OK.
You can even use rounded rectangles, circles and custom paths to clip, but in most cases a rectangle will do the trick.
Just ran into another way.
Have a look at this question. It uses the CroppedBitmap class as the source of an image. It's not actual drawing in Blend, but you can add it by hand-editing the XAML. From your question it's not clear whether you are creating a Silverlight or a WPF application in Expression Blend; the CroppedBitmap class is available in WPF only.
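The hand-edited XAML is short. A sketch (the image path and the SourceRect values, which are x, y, width, height in pixels, are placeholders):

    <Image Width="200">
      <Image.Source>
        <CroppedBitmap Source="sampleImages/photo.jpg" SourceRect="30 20 105 50"/>
      </Image.Source>
    </Image>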
With the new Silverlight 3 you can use the WriteableBitmap to do image cropping.
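A minimal sketch of the idea in C#; cropX, cropY, cropWidth, cropHeight and the myImage/croppedImage elements are placeholder names:

    // Render the Image element shifted so only the crop region lands
    // inside the bitmap, then commit the pixels (Silverlight 3)
    var bmp = new WriteableBitmap(cropWidth, cropHeight);
    bmp.Render(myImage, new TranslateTransform { X = -cropX, Y = -cropY });
    bmp.Invalidate();                // commit the rendered pixels
    croppedImage.Source = bmp;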
I found an article with steps to do an image crop, but it's a very drawn-out way to do such a simple operation. You would think something that MS Paint can do in a couple of button clicks would be similarly easy in Blend.
Here's the link.
I'm still wondering if there's an easier way to do this, however.
The other problem with this approach is that afterwards I can't change the size of the rectangle I'm cropping the image with, which I need to be able to do because the image has to be an exact number of pixels in width and height.