I work with Autodesk Forge, and I want to apply a texture to a rectangle object. In fact I just needed a plane, but I was given a rectangle. So I want to apply my image to the main face of the rectangle.
const mytex = THREE.ImageUtils.loadTexture(mytexture)
// Repeat the image across the object
// (alternatives: THREE.ClampToEdgeWrapping, THREE.MirroredRepeatWrapping)
mytex.wrapS = THREE.RepeatWrapping
mytex.wrapT = THREE.RepeatWrapping
mytex.mapping = THREE.UVMapping
mytex.repeat.set(0.05, 0.05)
console.log("applied texture")
But I get this problem: part of my image appears in the upper-right corner (the top and right edges are cut off, so they fall outside the rectangle), while the left and bottom sides are stretched across the rest of the rectangle's face.
I would like to adapt my image so that its dimensions fit the rectangle's dimensions, rather than just repeating it.
I read this and this. I think my code is correctly written, but I may be missing a parameter or setting the wrong one... The two images I am testing are 676x676 and 1024x484 pixels. I cannot access the rectangle's dimensions from my function (or at least I don't think I can).
I also tried simply repeating the image, but that does not work either...
Any ideas?
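For reference, a minimal sketch of the usual way to make an image stretch exactly once across a face, assuming the face carries standard 0-1 UVs (which a plain rectangle normally does):

const mytex = THREE.ImageUtils.loadTexture(mytexture)
// Clamp instead of repeating, so nothing tiles past the image borders
mytex.wrapS = THREE.ClampToEdgeWrapping
mytex.wrapT = THREE.ClampToEdgeWrapping
// One copy of the image over the full 0-1 UV range of the face
mytex.repeat.set(1, 1)
mytex.offset.set(0, 0)

With repeat left at (1, 1) the image is stretched to the face regardless of its pixel dimensions, so the rectangle's dimensions are never needed.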
A solution to this problem seems not to exist, but I find it hard to believe it is not possible.
Imagine you have an image with a semi-transparent overlay (color = black, transparency = 50%), whether over the whole image or just a portion; it doesn't matter. How could one convert the pixels underneath to their original color, in essence removing the black overlay?
Just like a simple algebra equation, we should be able to rearrange the variables to solve for the "original pixels" under the overlay. Something along the lines of:
original pixels * semi-transparent overlay = new pixels
original pixels = new pixels / semi-transparent overlay
Obviously such an equation oversimplifies the problem, but I think it gets my point across. Since we know the color and the percent transparency, why couldn't we "retrieve" the colors of the underlying pixels?
EDIT: Mark Ransom in the comments is correct, if you know the transparency is 50% then simply multiplying by 2 gets you to the original color. Any recommendations on how to apply this to a whole region in Photoshop or GIMP? Certainly doing it pixel by pixel is out of the question.
Thank you!
The "divide" layer mode will do what you want. In the case of semi-transparent black, use a gray with the value equal to the opacity value of the overlayed layer.
Hello, I am new to Three.js and texture mapping.
Let's say I have a 3D plane with a size of 1000x1000x1. When I apply a texture to it, it is repeated or scaled so that it at least fills the full plane.
What I am trying to achieve is to change the scaling of the picture on the plane at runtime. I want the image to get smaller and stop filling the full plane.
I know there is a way to map each face to a part of a picture, but is it also possible to map it to coordinates outside the picture (negative numbers), so that region will be transparent?
My question is:
I UV-mapped a model in Blender and imported it with the UV coordinates into my Three.js code. Now I need to scale the texture down, as described before. Do I have to remap the UV coordinates, or do I have to manipulate the image and add a transparent edge?
Further, will I be able to move the image around the surface in the same way?
I have already achieved this kind of usage in Java 3D by manipulating BufferedImages and drawing them onto transparent ones. I am not sure this will be possible using JavaScript, so I want to know whether it can be done with texture mapping.
Thank you for your time and your suggestions!
This can be done by mapping the 3D plane to a canvas where the image is drawn (fabric.js can be used for the canvas drawing). In short, set the canvas as the texture for the 3D model:
yourmodel.material.map = new THREE.CanvasTexture(document.getElementById("yourCanvas"));
Hope it helps :)
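A slightly fuller sketch of that approach, with placeholder names (yourCanvas, yourImage.png); THREE.Texture plus needsUpdate is used here since it also works on older Three.js builds:

const canvas = document.getElementById("yourCanvas");
const ctx = canvas.getContext("2d");
const texture = new THREE.Texture(canvas);
yourmodel.material.map = texture;
yourmodel.material.transparent = true; // uncovered canvas areas stay see-through

const img = new Image();
img.onload = function () {
    // Draw the image at half size; the rest of the canvas stays transparent
    ctx.clearRect(0, 0, canvas.width, canvas.height);
    ctx.drawImage(img, 0, 0, canvas.width / 2, canvas.height / 2);
    texture.needsUpdate = true; // re-upload to the GPU after each redraw
};
img.src = "yourImage.png"; // placeholder path

Redrawing the canvas with a different size or position and setting needsUpdate again scales or moves the picture at runtime.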
Yes. In THREE, there are some controls on the texture object:
texture.repeat and texture.offset. They are both Vector2()s.
To repeat the texture twice you can do texture.repeat.set(2, 2);
Now if you just want to scale but NOT repeat, there is also the "wrapping mode" for the texture.
texture.wrapS (the U axis) and texture.wrapT (the V axis), which can be set to:
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
This will make the edge pixels of the texture extend off to infinity when sampling, so you can position a single small texture, anywhere on the surface of your uv mapped object.
https://threejs.org/docs/#api/textures/Texture
Between those two options (plus texture.rotation) you can position/repeat a texture pretty flexibly.
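For example, a sketch that shrinks the picture to half size and centers it on a UV-mapped surface (texture is assumed to be the already-loaded texture object):

// Clamp so the picture is not tiled outside its own 0-1 window
texture.wrapS = texture.wrapT = THREE.ClampToEdgeWrapping;
// A repeat of 2 squeezes the whole image into half of the UV range,
// i.e. the picture appears at half size on the surface
texture.repeat.set(2, 2);
// offset shifts the window; -0.5 centers the half-size picture here
texture.offset.set(-0.5, -0.5);

One caveat: with ClampToEdgeWrapping the border pixels of the image are smeared outward, so give the image a transparent (or background-colored) one-pixel border if that matters.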
If you need something even more complex, like warping the texture or changing its colors, you may want to change the UVs in your modeller, or draw your texture image into a canvas, modify the canvas, and use the canvas as your texture image, as described in ArUn's answer. Then you can modify it at runtime as well.
How do I fill a BitmapData image with another image as a pattern in AS3? For example, I have a white image with a black square at the center, which I'll call "square:BitmapData", and another image with a little (2x2) blue circle, which I'll call "circle:BitmapData". I want to fill that square with the blue circles; is there any way to do this?
UPDATE
Here is an example of what I need to do:
These are the two images (the left is like my square, the right is like the blue circle):
http://pix.samoucka.ru/img/content/graphics/thewebschedule/8/466.gif
And this is how it would look after filling:
http://pix.samoucka.ru/img/content/graphics/thewebschedule/8/467.gif
You can try using copyPixels() and iterating through x and y to tile the whole thing; copyPixels() is very fast.
Or
It might be simpler to create a Sprite and use graphics.beginBitmapFill(), then graphics.drawRect() with the correct size, then draw() it to the BitmapData in the correct position.
If you need to determine the size and position of the black square, getColorBoundsRect() should do the job.
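The x/y loop of the first suggestion looks like this (sketched in JavaScript/canvas form rather than AS3, purely to illustrate the loop shape; in AS3 the drawImage call becomes copyPixels with a Point, and rect would come from getColorBoundsRect):

function tilePattern(ctx, pattern, rect) {
    // Step the small tile across the target rectangle in both axes
    for (let y = rect.y; y < rect.y + rect.height; y += pattern.height) {
        for (let x = rect.x; x < rect.x + rect.width; x += pattern.width) {
            ctx.drawImage(pattern, x, y);
        }
    }
}

Tiles that overhang the right and bottom edges would still need clipping; copyPixels takes a source rect for exactly that.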
I have a very subtle problem with XNA, specifically the SpriteBatch.
In my game I have a Camera class. It can translate the view (obviously) and also zoom in and out.
I apply the camera to the scene when I call the "Begin" function of my SpriteBatch instance (the last parameter).
The problem: when the camera's zoom factor is bigger than 1.0f, the SpriteBatch stops drawing.
I tried to debug my scene but I couldn't find the point where it goes wrong.
I tried to just render with "Matrix.CreateScale(2.0f);" as the last parameter for "Begin".
All other parameters were null and the first "SpriteSortMode.Immediate", so no custom shader or something.
But SpriteBatch still didn't want to draw.
Then I tried to only call "DrawString", and DrawString worked flawlessly with the provided scale (2.0f).
However, through a lot of trial and error, I found out that multiplying the scale matrix by "Matrix.CreateTranslation(0, 0, -1)" somehow changed the "safe" value to 1.1f.
So all scale values up to 1.1f worked. For everything above that, SpriteBatch does not render a single pixel in normal "Draw" calls (DrawString is still unaffected and working).
Why is this happening?
I did not set up any viewport or other matrices.
It appears to me that this could be some kind of strange near/far clipping.
But I usually only know those parameters from 3D work.
If anything is unclear please ask!
It is near/far clipping.
Everything you draw is transformed into and then rasterised in projection space. That space runs from (-1,-1) at the bottom left of the screen, to (1,1) at the top right. But that's just the (X,Y) coordinates. In Z coordinates it goes from 0 to 1 (front to back). Anything outside this volume is clipped. (References: 1, 2, 3.)
When you're working in 3D, the projection matrix you use will compress the Z coordinates down so that the near plane lands at 0 in projection space, and the far plane lands at 1.
When working in 2D you'd normally use Matrix.CreateOrthographic, which has near and far plane parameters that do exactly the same thing. It's just that SpriteBatch specifies its own matrix and leaves the near and far planes at 0 and 1.
The vertices of sprites in a SpriteBatch do, in fact, have a Z-coordinate, even though it's not normally used. It is specified by the layerDepth parameter. So if you set a layer depth greater than 0.5, and then scale up by 2, the Z-coordinate will be outside the valid range of 0 to 1 and won't get rendered.
(The documentation says that 0 to 1 is the valid range, but does not specify what happens when you apply a transformation matrix.)
The solution is pretty simple: Don't scale your Z-coordinate. Use a scaling matrix like:
Matrix.CreateScale(2f, 2f, 1f)
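As a concrete check of the numbers above: a sprite drawn with layerDepth 0.6 under Matrix.CreateScale(2.0f) ends up at

z' = 2.0 * 0.6 = 1.2

which lies outside the 0-to-1 depth range, so the sprite is clipped. With Matrix.CreateScale(2f, 2f, 1f) the depth stays at 0.6 and the sprite renders.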
I'm using GDI+ in my application, and I need to use a rotated LinearGradientBrush to paint several rects in the exact same way. However, although I'm calling the same code to fill each rect, the results aren't what I expect. Here's the code to create the gradient fill, where rcDraw is the rect containing the area to paint for each rect. These coordinates are in the parent window's coordinates, so they are not identical for the 2 rects.
g_hbrLinear = new LinearGradientBrush(
    Rect( 0, rcDraw.top, 0, rcDraw.bottom - rcDraw.top ),
    clrStart, clrEnd, (REAL) 80, FALSE );
What I see on screen looks like this (http://www.nnanime.com/bugs/LinGradBrush-rotate10.png). You can see that it's as if the fill from the first rect continues into the second one. What I really want is to have the 2 rects look identical. I think I can do that if I paint each rect separately using its own client coordinates, but for the purposes of my app, I need to use the parent window's coordinates.
I guess what I'm asking is, how does GDI+ calculate the "origin" of a fill? Is it always based on 0,0 in the coordinate system you use? Is there a way to shift it? I tried TranslateTransform, but it doesn't seem to shift the fill in a way that I find predictable or understandable.
The rect passed to the linear gradient brush determines where the left and right colors will sit, and the gradient will be painted within this rectangle.
So I think you need to create a brush for each rectangle you are painting, where the rectangle being painted is also passed to the constructor of the linear gradient brush.
My experience with the "transform" of linear gradient brushes matches yours; I haven't been able to understand what it's supposed to do.
You can think of a brush in GDI+ as a function mapping world co-ordinates to a color. What the brush looks like at a given point does not change based on the shape being filled.
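One way to write that mental model down (my notation, not a GDI+ API): a linear gradient brush assigns each world-space point p a color

c(p) = lerp(clrStart, clrEnd, t),  where  t = ((p - p0) . d) / |d|^2

with the origin p0 and direction d fixed by the rect and angle given to the brush's constructor. Outside t = 0..1 the pattern wraps according to the brush's wrap mode, which is why the fill appears to continue from one rectangle into the next: t depends only on p, never on the shape being filled.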
What the brush looks like does change with the transform of the Graphics object you're drawing on, though. So if you don't want to change the brush, you could temporarily change the transform of the Graphics object so that the rectangle you're drawing has a specific, known size and position in world coordinates. The BeginContainer and EndContainer methods should make this easy.
(There is also the RenderingOrigin property but it only affects hatch brushes, which oddly are unaffected by world transforms.)