I get the concept of applying regionprops to extract an ROI. Basically, regionprops uses a connected-components technique to extract objects. What I wanted to ask is: when using regionprops with "BoundingBox", what is the default connectivity (number of connected neighbors)? I tried searching for it but could not find the answer.
I didn't see a default connectivity documented for regionprops, but the default connectivity for both bwlabel and bwconncomp (for 2-dimensional matrices) is 8-connected, and I would expect regionprops to be the same. You should be able to determine easily whether this is the case for regionprops by constructing a test image something like this:
1 1 0 0
1 1 0 0
0 0 1 1
0 0 1 1
Alternatively, you could use bwlabel or bwconncomp first and control the connectivity parameter. regionprops accepts the output from either of these, as well as a BW image.
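If MATLAB is not at hand, the same connectivity check can be sketched in Python with scipy.ndimage (an assumption on my part: scipy's label plays the role of bwlabel here, with the structuring element controlling connectivity):

```python
import numpy as np
from scipy import ndimage

# The diagonal test image from the answer: two 2x2 blocks touching only at a corner.
bw = np.array([[1, 1, 0, 0],
               [1, 1, 0, 0],
               [0, 0, 1, 1],
               [0, 0, 1, 1]])

# 4-connectivity (scipy's default cross-shaped structuring element):
# the corner contact does NOT join the blocks.
_, n4 = ndimage.label(bw)

# 8-connectivity (full 3x3 structuring element): the blocks merge into one.
_, n8 = ndimage.label(bw, structure=np.ones((3, 3)))

print(n4, n8)  # 2 1
```

If the analogous MATLAB test reports one region for this image, the default is 8-connected; two regions would mean 4-connected.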
Take a look at the MATLAB documentation for regionprops:
https://www.mathworks.com/help/images/ref/regionprops.html
In the Tips section you can see: "If you need to specify nondefault connectivity, call bwconncomp and then pass the result to regionprops", as in:
CC = bwconncomp(BW, CONN);
S = regionprops(CC);
So, to answer your question: it uses the default connectivity, which is 8, and you can also change it as you want with the CONN parameter when calling bwconncomp.
I'm going through the Trackpy walkthrough (http://soft-matter.github.io/trackpy/v0.3.0/tutorial/walkthrough.html) but using my own pictures. When I get to calculating the overall drift velocity, I get an error (screenshot of the traceback below) and I don't know what it means.
I don't have a ton of coding experience so I'm not even sure how to look at the source code to figure out what's happening.
Your screenshot shows the traceback of the error: you called a function, tp.compute_drift(), which called another function, pandas_sort(), which called another, and so on, until raise ValueError(msg) interrupted the chain. The last line is the actual error message:
ValueError: 'frame' is both an index level and a column label, which is ambiguous.
To understand it, you have to know that Trackpy stores data in DataFrame objects from the pandas library. The tracking data you want to extract drift motion from is stored in such an object, t2. If you print t2 it will probably look like this:
y x mass ... ep frame particle
frame ...
0 46.695711 3043.562648 3.881068 ... 0.007859 0 0
3979 3041.628299 1460.402493 1.787834 ... 0.037744 0 1
3978 3041.344043 4041.002275 4.609833 ... 0.010825 0 2
The name "frame" is used both for the index (leftmost column) and for a regular column, which confuses the sorting algorithm. As the error message says, it is ambiguous to sort the table by "frame".
Solution
The index (leftmost) column does not need a name here, so remove it with
t2.index.name = None
and try again. Check if you have the newest Trackpy and Pandas versions.
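The ambiguity and the fix can be reproduced with a toy pandas DataFrame (made-up data, not the real tracking table):

```python
import pandas as pd

# Minimal reproduction: the index and a column are both named 'frame'.
t2 = pd.DataFrame({'frame': [1, 0, 0], 'particle': [2, 0, 1]})
t2.index.name = 'frame'

err = None
try:
    t2.sort_values('frame')
except ValueError as exc:
    err = exc
print(err)  # 'frame' is both an index level and a column label, which is ambiguous.

# The fix: drop the redundant index name; sorting is now unambiguous.
t2.index.name = None
sorted_t2 = t2.sort_values('frame')
```

Once the index no longer carries the conflicting name, sort_values('frame') can only mean the column, and the drift calculation proceeds.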
I am attempting to change the cursor in Windows 10 (version 1703) to a custom made one (conditional on some event when a script activates), that is larger than the default 32 by 32 size. The MWE based on my Autohotkey script is the following:
ImagePath = %A_ScriptDir%\foo.cur
Cursor_ID := 32512 ; Standard arrow
Cursor_Size := 128
^0::
SetSystemCursor( ImagePath, Cursor_ID, Cursor_Size, Cursor_Size )
return
SetSystemCursor( path, id, sizeX, sizeY )
{
Cursor := DllCall( "LoadImage",
UInt,0, Str,path, UInt,0x2, Int,sizeX, Int,sizeY, UInt,0x00000010, Ptr)
DllCall( "SetSystemCursor", Ptr,Cursor, Int,id )
}
(My code is based off of that found at https://autohotkey.com/board/topic/32608-changing-the-system-cursor/.)
As far as I can tell from the documentation of LoadImage, the function SetSystemCursor(...) should load the image with dimensions (sizeX, sizeY) when those parameters are not 0 (since the flag LR_DEFAULTSIZE = 0x00000040 is not set). Instead I get the following behaviour: no matter what sizes I set, the image first gets scaled to (sizeX, sizeY) and then down/upscaled to (32, 32). This is most obvious by setting, say, Cursor_Size := 2: then I get an upscaled version of a 2 by 2 image.
After some searching around I have found information suggesting both that this should work, and also that the size of cursors is always dictated by GetSystemMetrics(SM_CXCURSOR)
and GetSystemMetrics(SM_CYCURSOR): The biggest size of Windows Cursor (see also GetSystemMetrics).
Additional tests/ideas I've tried:
I checked the dimensions of the image corresponding to the handle returned
by LoadImage, and it seems to be (sizeX, sizeY), just as it should be,
therefore the scaling to 32 most likely happens upon executing SetSystemCursor.
I wanted to see if an application-specific cursor could bypass the
apparent 32 by 32 restriction, so using Resource Hacker, I replaced one of
the resources in Paint. It was scaled down to size 32 in the same way.
Setting the values that are returned by
GetSystemMetrics(SM_CXCURSOR) and GetSystemMetrics(SM_CYCURSOR)
might be an option if these indeed restrict cursor sizes, but I
could not find an appropriate function. I checked
SystemParametersInfo, but the only remotely relevant option
I found was SPI_SETCURSORS, and that just reloads the cursors from
registry.
It might be possible to change a registry value, though it would not
be my preferred solution, as it would most likely require a reboot
to take effect. Additionally, I haven't been able to find the relevant key.
My question would therefore be the following:
Is there a way to add an image of arbitrary size as a cursor in Windows 10, preferably without the need to reboot the computer? If so, how? Do SM_CXCURSOR and SM_CYCURSOR absolutely restrict the cursor's size? If they do, can these values be changed somehow?
EDIT:
It has been pointed out that yes, the documentation of GetSystemMetrics states that "the system cannot create cursors of other sizes" than SM_CXCURSOR and SM_CYCURSOR, but at the same time, on some of the other webpages I linked, people seem to claim to be able to create arbitrarily sized cursors. Hence my request for confirmation/clarification of the matter.
Apart from that, the question about changing these values, or the existence of any other possible workaround would still be important to me.
I have an image master.png and more than 10,000 other images (slave_1.png, slave_2.png, ...). They all have:
The same dimensions (e.g. 100x50 pixels)
The same format (png)
The same image background
98% of the slaves are identical to the master, but 2% of the slaves have a slightly different content:
New colors appear
New small shapes appear in the middle of the image
I need to spot those different slaves. I'm using Ruby, but I have no problem in use a different technology.
I tried File.binread on both images and then compared them with ==. It worked for 80% of the slaves. For the others, it reported differences even though the images were visually identical. So it doesn't work.
Alternatives are:
Count the number of colors present in each slave and compare with master. It will work in 100% of the time. But I don't know how to do it in Ruby in a "light" way.
Use an image processor to compare histograms, like RMagick or ruby-vips8. This should also work, but I need to use as little CPU/memory as possible.
Write a C++/Go/Crystal program to read pixel by pixel and return the number of colors. I think this way we can squeeze out performance, but it is certainly the hard way.
Any enlightenment? Suggestions?
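For what it's worth, the color-count idea (the first alternative above) can be sketched in Python with numpy; the arrays below are stand-ins for the real images:

```python
import numpy as np

def count_colors(img):
    # Collapse each pixel into a row and count unique rows (distinct colors).
    return len(np.unique(img.reshape(-1, img.shape[-1]), axis=0))

# Hypothetical stand-ins for the real files: a plain background master,
# and a slave with a small new red shape in the middle.
master = np.zeros((50, 100, 3), dtype=np.uint8)   # one color
slave = master.copy()
slave[20:25, 40:45] = [255, 0, 0]                 # two colors

print(count_colors(master), count_colors(slave))  # 1 2
```

A slave whose color count differs from the master's is a candidate "different" image; in practice you would load each PNG into such an array (e.g. with an imaging library) and compare the counts.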
In ruby-vips, you could do it like this:
#!/usr/bin/ruby
require 'vips'
# find normalised histogram of reference image
ref = Vips::Image.new_from_file ARGV[0], access: :sequential
ref_hist = ref.hist_find.hist_norm
ARGV[1..-1].each do |filename|
  # find sample hist
  sample = Vips::Image.new_from_file filename, access: :sequential
  sample_hist = sample.hist_find.hist_norm

  # calculate sum of squares of differences; if it's over a threshold, print
  # the filename
  diff_hist = (ref_hist - sample_hist) ** 2
  diff = diff_hist.avg * diff_hist.width * diff_hist.height

  if diff > 100
    puts "#{filename}, #{diff}"
  end
end
If I make some test data:
$ vips crop ~/pics/k2.jpg ref.png 0 0 100 50
$ for i in {1..10000}; do cp ref.png $i.png; done
I can run it like this:
$ time ../similarity.rb ref.png *.png
real 0m55.974s
user 1m31.921s
sys 0m54.433s
It runs in a steady ~80 MB of memory.
I am working on a printer driver sample which captures GDI calls such as DrvBitBlt(), DrvTextOut(), etc. In DrvBitBlt I am getting a ROP4 value of 0xF0F0, which means the brush object needs to be used.
When I read the DrvBitBlt() ROP4 the documentation says:
The low byte specifies a Rop3 that should be calculated if the mask is
one, and the high byte specifies a Rop3 that can be calculated and
applied if the mask is 0.
My question is: where is the mask value present? How do I find out whether a mask bit is 0 or 1?
The mask bits come from the third parameter to DrvBitBlt, the psoMask surface.
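The byte selection described in the quoted documentation can be illustrated with a few lines of Python (just the arithmetic, no driver code):

```python
# ROP4 from the question: 0xF0F0. Per the documentation, the low byte is the
# ROP3 applied where the mask bit is 1, and the high byte is the ROP3 applied
# where the mask bit is 0.
rop4 = 0xF0F0

rop3_mask1 = rop4 & 0xFF          # low byte
rop3_mask0 = (rop4 >> 8) & 0xFF   # high byte

print(hex(rop3_mask1), hex(rop3_mask0))  # 0xf0 0xf0
```

Here both bytes happen to be 0xF0 (PATCOPY, i.e. copy the brush pattern), which is why this ROP4 means the brush object must be used regardless of the mask.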
I'm a MATLAB beginner and I would like to know how to acquire and save 20 images at 5-second intervals from my camera. Thank you very much.
First, construct a video input interface:
vid = videoinput('winvideo',1,'RGB24_400x300');
You'll need to adjust the last bit for your webcam. To find a list of webcam devices (and other things besides) use:
imaqhwinfo
The following makes the first webcam into an object
a=imaqhwinfo('winvideo',1)
Find the list of supported video formats with
a.SupportedFormats
You'll then want to start up the interface:
start(vid);
preview(vid);
Now you can do the following:
pics = cell(1,20);
for i = 1:20
    pause(5);
    pics{i} = getsnapshot(vid);
end
Or, as other commentators have noted, you could also use a MATLAB timer for the interval.
If you wish to capture images with a considerably shorter interval (1 or more per second), it may be more useful to consider the webcam as a video source. I've left an answer to this question which lays out methods for achieving that.
There are several ways to go about this, each with advantages and disadvantages. Based on the information that you've posted so far, here is how I would do this:
vid = videoinput('dcam', 1); % Change for your hardware of course.
vid.FramesPerTrigger = 20;
vid.TriggerRepeat = inf;
triggerconfig(vid, 'manual');
vid.TimerFcn = 'trigger(vid)';
vid.TimerPeriod = 5;
start(vid);
This will acquire 20 images every five seconds until you call STOP. You can change the TriggerRepeat parameter to change how many times acquisition will occur.
This obviously doesn't do any processing on the images after they are acquired.
Here is a quick tutorial on getting one image: http://www.mathworks.com/products/imaq/description5.html. Have you gotten this kind of thing to work yet?
EDIT:
Now that you can get one image, you want to get twenty. A timer object or a simple for loop is what you are going to need.
Simple timer object example
Video example of timers in MATLAB
Be sure to set the "tasks to execute" field to twenty. Also, you should wrap up all the code you have for one picture snap into a single function.
To acquire the image: does the camera come with some documented way to control it from a computer? MATLAB supports linking to outside libraries. Or you can buy the appropriate MATLAB toolbox, as suggested by MatlabDoug.
To save the image, IMWRITE is probably the easiest option.
To repeat the action, a simple FOR loop with a PAUSE will give you roughly what you want with very little work:
for ctr = 1:20
    img = AcquireImage(); % your function goes here
    fname = ['Image' num2str(ctr)]; % make a file name
    imwrite(img, fname, 'TIFF');
    pause(5); % or whatever number suits your needs
end
If, however, you need exact 5 second intervals, you'll have to dive into TIMERs. Here's a simple example:
function AcquireAndSave
    persistent FileNum;
    if isempty(FileNum)
        FileNum = 1;
    end
    img = AcquireImage();
    fname = ['Image' num2str(FileNum)];
    imwrite(img, fname, 'TIFF');
    disp(['Just saved image ' fname]);
    FileNum = FileNum + 1;
end
>> t = timer('TimerFcn', 'AcquireAndSave', 'Period', 5.0, 'ExecutionMode', 'fixedRate');
>> start(t);
...you should see the disp line from AcquireAndSave repeat every 5 seconds...
>> stop(t);
>> delete(t);