Rotation Without Resizing - rainmeter

I'm creating a skin (possibly a set of skins) which I plan to publish at some point. While working on it I ran into an issue with rotating an image meter whose size is driven by a scale variable.
The image is resized when it is rotated. I believe this is because the diagonal of the image does not fit within the frame of the meter, but I'm not sure how to solve the issue.
The following is the code of the image meter:
[icon0]
Meter=Image
ImageName=#Resources\images\gear.png
W=(50*#scale#)
H=(50*#scale#)
X=(5*#scale#)
Y=(5*#scale#)
ImageRotate=90
When the value of "ImageRotate" is changed from 90 to 45 the icon scales down.
I tried to study an example that creates the effect I want, but I couldn't figure it out. I also searched the forums and the Rainmeter Manual for useful information. I found something about ScaleMargins, but it didn't seem to have the effect I wanted.
Thank you in advance for any help that I get.

I think you need to calculate the maximum possible W/H after rotation yourself, and possibly change X/Y too if you want it to rotate around the origin.
There is an example shown here; it uses the Rotator meter.
Looking at that example, your code would be like:
[icon0]
Meter=Image
ImageName=#Resources\images\gear.png
W=(SQRT((50*#scale#) ** 2 + (50*#scale#) ** 2))
H=(SQRT((50*#scale#) ** 2 + (50*#scale#) ** 2))
X=(5*#scale#)
Y=(5*#scale#)
ImageRotate=90
I haven't tested it myself; you may need DynamicVariables=1 for the #scale# variables, and you probably need to recalculate X and Y if you want to rotate around the center of the image. I'm not sure exactly what you want, so I'll leave that to you.
Edit:
You may also need DynamicWindowSize=1 under the [Rainmeter] section. Otherwise the image will be cropped after rotation if it doesn't fit the initial size of the skin.
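The SQRT(w**2 + h**2) above is the diagonal, which is the worst case for a square rotated 45 degrees. More generally, the axis-aligned bounding box of a rotated rectangle follows from the rotation angle. A small Python sketch of the math (the function name is mine, not part of Rainmeter):

```python
import math

def rotated_bounds(w, h, degrees):
    """Width/height of the axis-aligned bounding box of a w x h
    rectangle rotated by `degrees` around its center."""
    a = math.radians(degrees)
    bw = abs(w * math.cos(a)) + abs(h * math.sin(a))
    bh = abs(w * math.sin(a)) + abs(h * math.cos(a))
    return bw, bh
```

For a 50x50 icon at 45 degrees both bounds equal 50*SQRT(2), matching the diagonal formula above; at 90 degrees they stay 50, so the diagonal-based W/H simply over-reserves space at other angles, which is harmless here.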

Related

Three.js - Add image to ExtrudeBufferGeometry

I want to add an image (a factory layout displayed on the floor) to an ExtrudeBufferGeometry (the floor), which is based on a shape with holes. I only want to cover the surface and not have it wrap around the corners; is that possible? If not, I am fine with a 2D geometry/plane as well, as long as I can still punch in some holes.
I have successfully added said image to a PlaneGeometry but that does not allow for any holes.
I also managed to add it to the ExtrudeBufferGeometry but it looks like it only covers the border area as you can see colors when viewing it from the side. If I set the texture's repeat value using texture.repeat.set(), I can see the image properly. However, I do not want it to be repeated. It should only be displayed once. I'm having a hard time knowing what properties to set and change to achieve the desired result.
I researched a lot to get to this point but I cannot find a final solution for my specific problem. Any help or suggestions are highly appreciated - thank you!
Edit:
I managed to do it using ShapeGeometry. However, I still need texture.repeat.set(), otherwise it will be a tiny image on one side. Can anyone tell me how to adjust the scale?

Save image in original resolution with imfindcircles plot in Matlab

I have a really long picture on which I use imfindcircles, and I need to check whether the right circles are found. The image is a 158708x2560 logical.
So I have:
[centers, radii] = imfindcircles(I,[15 35],'ObjectPolarity','bright','Sensitivity',0.91);
figure(1)
imshow(I)
viscircles(centers,radii);
and I want to save the output you see in the figure box (the binary image with circles drawn on it) to an image file. The file format doesn't matter as long as it keeps the original resolution of 158708x2560 pixels.
Every suggestion I find online alters the resolution or pads the image; for example, saving the figure directly adds a huge grey border and the resolution goes down.
A way to zoom into the figure would also work, but the zoom option in the figure menu does not magnify correctly: it magnifies, but the image stays really thin, so you can't see a thing.
Matrix: https://www.dropbox.com/s/rh9wakimc7atfhg/I.mat?dl=0
There are two round spots repeating. I want to find those, not the others. And export the image with the circles plotted over it.
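One way to keep the full resolution is to skip the figure entirely and burn the circles into the image matrix itself, then save that matrix directly (in MATLAB, insertShape from the Computer Vision Toolbox plus imwrite does this; viscircles only annotates the figure). A numpy sketch of the idea, with function names of my own choosing:

```python
import numpy as np

def draw_circles(img, centers, radii, value=255, thickness=1.0):
    """Draw circle outlines directly into the image array, so saving the
    array (e.g. with an image writer) keeps the original resolution.
    Sketch only: builds a full coordinate grid, so mind the memory on
    a 158708x2560 frame."""
    out = img.copy()
    yy, xx = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    for (cx, cy), r in zip(centers, radii):
        d = np.sqrt((xx - cx) ** 2 + (yy - cy) ** 2)
        out[np.abs(d - r) <= thickness] = value
    return out
```

The returned array has exactly the input's shape, so writing it out preserves the 158708x2560 size with no figure borders.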

Detecting deformed lines

I've already asked this question on https://dsp.stackexchange.com/ but didn't get any answer, so I'm hoping for suggestions here.
I have a project in which I have to recognize two lines in different positions; the lines are orthogonal but can be projected onto different surfaces. I'm using OpenCV.
The intersection can be anywhere in the frame. The lines are red (the images show just the grayscale).
UPDATE
- I'll be using a grayscale camera.
- The background and the objects onto which the lines are projected can change.
I'm not asking for code, only for hints about how I can solve this. I tried the HoughLines function, but it works only for flat surfaces.
Thanks in advance!
This is not that difficult a task, since it involves straight lines. I have done a similar kind of project.
1. If your image is colored, convert it to grayscale.
2. Use a calibrated median filter to blur the image.
3. Subtract the blurred image from the grayscale image.
4. After step 3, the intensity on the lines is higher than in the rest of the image, because the lines are high-contrast: the median filter suppresses them, so the subtraction leaves them behind.
5. To get a cleaner distinction, create a binary image (black and white only) with a suitable threshold.
6. Finally you have your lines. If there is noise, you can use top-hat filtering after step 4 and Gaussian filtering after step 5.
You can take help from this paper on crack detection
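A numpy-only sketch of steps 2-5 above, assuming a grayscale input (in OpenCV, cv2.medianBlur and cv2.threshold do the same job much faster; the function names and the threshold value here are mine):

```python
import numpy as np

def median_filter(img, r):
    """Naive square median filter of radius r, with edge padding."""
    pad = np.pad(img, r, mode='edge')
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            out[y, x] = np.median(pad[y:y + 2 * r + 1, x:x + 2 * r + 1])
    return out

def line_mask(gray, r=3, thresh=30):
    """Blur, subtract, threshold: bright narrow lines survive as True."""
    residual = gray.astype(float) - median_filter(gray, r)
    return residual > thresh
```

The median filter wipes out structures thinner than its window, so the subtraction isolates exactly the thin bright lines the answer describes.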
I think AMI's idea is good.
You can also think about using a controlled laser source. In that case you can capture an image pair, one with the laser turned on and one with it turned off, then take the difference.
It can be interesting for you: http://www.instructables.com/id/3-D-Laser-Scanner/
Here's the result of subtracting the output of a median filter (r=6):
You might be able to improve things a bit by adjusting the median filter radius, but these wavy, discontinuous lines are going to be difficult to detect reliably.
You really need better source images. Here are a few suggestions:
A colour camera would help enormously. Apply a high-pass filter to the red and green channels, and calculate the difference between the two. The red lines will stand out much better then.
Can you make the light source brighter?
Have you tried putting a red filter over the camera lens? Ideally you want one with a pass band that matches the light source's wavelength as closely as possible — if the light is coming from a laser, then a suitable dichroic filter should give good results. But even a sheet of red plastic would be better than nothing. (Have you got an old pair of red/blue 3D glasses sitting around somewhere?)
Perhaps subtracting the grayscale image from the red channel would help to highlight the red. I'd post this as a comment but cannot do so yet.
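The red-minus-grayscale idea can be sketched in a few lines of numpy (using a plain channel mean as the grayscale; OpenCV's luminance weights would differ slightly, and it assumes BGR channel order):

```python
import numpy as np

def red_highlight(bgr):
    """Red channel minus a channel-mean grayscale: red pixels come out
    bright, neutral (grey) pixels come out near zero."""
    b = bgr[..., 0].astype(float)
    g = bgr[..., 1].astype(float)
    r = bgr[..., 2].astype(float)
    gray = (b + g + r) / 3.0
    return np.clip(r - gray, 0.0, 255.0)
```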

How can I deblur an image in matlab?

I need to remove the blur from this image:
Image source: http://www.flickr.com/photos/63036721@N02/5733034767/
Any Ideas?
Although previous answers are right when they say that you can't recover lost information, you could investigate a little and make a few guesses.
I downloaded your image in what seems to be the original size (75x75) and you can see here a zoomed segment (one little square = one pixel)
It seems a pretty linear grayscale! Let's verify it by plotting the intensities of the central row. In Mathematica:
ListLinePlot[First /@ ImageData[i][[38]][[1 ;; 15]]]
So, it is effectively linear, starting at zero and ending at one.
So you may guess it was originally a B&W image, linearly blurred.
The easiest way to deblur that (it won't always give good results, but it's enough in your case) is to binarize the image with a 0.5 threshold. Like this:
And this is a possible way. Just remember we are guessing a lot here!
HTH!
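A minimal sketch of that thresholding step (the Mathematica equivalent is Binarize[img, 0.5]; in MATLAB a simple comparison does it):

```python
import numpy as np

def binarize(img, thresh=0.5):
    """Snap a linearly blurred black-and-white image back to pure 0/1."""
    return (np.asarray(img, dtype=float) >= thresh).astype(float)
```

On the linear ramp measured above (0 at one side, 1 at the other), this puts the recovered edge at the ramp's midpoint.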
You cannot generally retrieve missing information.
If you know what it is an image of, in this case a Gaussian or Airy profile, then it's probably an out-of-focus image of a point source, and you can determine the characteristics of the point.
Another technique is to try to determine the characteristics of the blurring, especially if you have many images from the same blurred system. Then iteratively create a possible source image, blur it by that convolution, and compare it to the blurred image.
This is the general technique used to make radio astronomy source maps (images), and it was used for the flawed Hubble Space Telescope images.
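The iterate-blur-compare loop described above is essentially Richardson-Lucy deconvolution. A 1-D numpy sketch, assuming the point-spread function psf is known and normalized (scipy and skimage ship real implementations):

```python
import numpy as np

def richardson_lucy_1d(observed, psf, iters=100):
    """Refine an estimate so that (estimate convolved with psf) matches
    the blurred observation: classic Richardson-Lucy iteration."""
    est = np.full_like(observed, 0.5, dtype=float)
    psf_mirror = psf[::-1]
    for _ in range(iters):
        blurred = np.convolve(est, psf, mode='same')      # blur the guess
        ratio = observed / np.maximum(blurred, 1e-12)     # compare to data
        est = est * np.convolve(ratio, psf_mirror, mode='same')  # correct
    return est
```

Each iteration blurs the current guess, compares it to the observation, and multiplicatively pushes the guess toward whatever makes the re-blurred version match.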
When working with images, one of the most common operations is to apply a convolution filter. There is a "sharpen" filter that does what it can to remove blur from an image. An example of a sharpen filter can be found here:
http://www.panoramafactory.com/sharpness/sharpness.html
Some programs like MATLAB make convolution really easy: conv2(A,B)
And most decent photo editors have these filters under some name or another (usually "sharpen").
But keep in mind that filters can only do so much. In theory, the actual information has been lost by the blurring process and it is impossible to perfectly reconstruct the initial image (no matter what TV will lead you to believe).
In this case it seems like you have a very simple image with only black and white. Knowing this about your image you could always use a simple threshold. Set everything above a certain threshold to white, and everything below to black. Once again most photo editing software makes this really easy.
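A minimal sketch of such a sharpen filter in numpy, using the classic 3x3 kernel (in MATLAB this would be conv2(A, kernel, 'same'); the helper name is mine):

```python
import numpy as np

# Classic sharpen kernel: amplify the center, subtract the neighbours.
SHARPEN = np.array([[ 0., -1.,  0.],
                    [-1.,  5., -1.],
                    [ 0., -1.,  0.]])

def convolve2d(img, kernel):
    """Tiny 'same'-size 2-D convolution with edge padding. The kernel is
    flipped (true convolution), though SHARPEN is symmetric anyway."""
    kh, kw = kernel.shape
    pad = np.pad(img, ((kh // 2,), (kw // 2,)), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[kh - 1 - i, kw - 1 - j] * \
                   pad[i:i + img.shape[0], j:j + img.shape[1]]
    return out
```

Flat regions pass through unchanged (the kernel sums to 1), while edges overshoot on the bright side and undershoot on the dark side, which is what reads as "sharper".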
You cannot retrieve missing information, but under certain assumptions you can sharpen.
Try unsharp masking.

Liquify filter/iwarp

I'm trying to build something like the Liquify filter in Photoshop. I've been reading through image distortion code but I'm struggling with finding out what will create similar effects. The closest reference I could find was the iWarp filter in Gimp but the code for that isn't commented at all.
I've also looked at places like ImageMagick but they don't have anything in this area
Any pointers or a description of algorithms would be greatly appreciated.
Excuse me if I make this sound a little simplistic; I'm not sure how much you know about graphics programming or which techniques you're using (I'd do it with HLSL myself).
The way I would approach this problem is to generate a texture which contains offsets of x/y coordinates in the r/g channels. Then the output colour of a pixel would be:
Texture inputImage
Texture distortionMap
colour(x,y) = inputImage(x + distortionMap(x, y).R, y + distortionMap(x, y).G)
(To tell the truth this isn't quite right: using the colours as offsets directly means you can only represent positive vectors. It's simple enough to subtract 0.5 so that you can represent negative vectors.)
Now the only problem that remains is how to generate this distortion map, which is a different question altogether (any image would generate a distortion of some kind, obviously, working on a proper liquify effect is quite complex and I'll leave it to someone more qualified).
I think Liquify works by altering a grid.
Imagine each pixel is defined by its location on the grid.
When the user clicks on a location and moves the mouse, they are changing the grid locations.
The new grid is then projected back into the user's 2D viewable space.
Check this tutorial about a way to implement the Liquify filter with JavaScript. Basically, in the tutorial, the effect is achieved by transforming the pixel Cartesian coordinates (x, y) to polar coordinates (r, α) and then applying Math.sqrt to r.
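A sketch of that polar transform for a single pixel. The rmax parameter (radius of the affected brush region) is my addition so the sqrt acts on a normalized radius; the tutorial's exact parametrization may differ:

```python
import math

def liquify_sqrt(x, y, cx, cy, rmax):
    """Map (x, y) through polar coords around (cx, cy): keep the angle,
    replace the normalized radius r with sqrt(r), giving a bulge."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    if r == 0.0 or r >= rmax:
        return float(x), float(y)        # outside the brush: unchanged
    rn = r / rmax                        # normalize to [0, 1)
    r_new = math.sqrt(rn) * rmax         # sqrt pushes points outward
    s = r_new / r
    return cx + dx * s, cy + dy * s
```

Since sqrt(r) > r for r in (0, 1), pixels near the center are pushed outward while the brush boundary stays fixed, which is the bulge effect.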
