The depth value of the visuals on my machine is 24. Is it possible to create an X window of lower depth (for example, 16)?
If yes, how? Any pointer to sample code would be useful.
XSetWindowAttributes attrs;
attrs.colormap = XCreateColormap(dpy, XDefaultRootWindow(dpy), visual, AllocNone);
attrs.background_pixel = 0;
attrs.border_pixel = 0;
XCreateWindow(dpy, parent, 10, 10, 150, 100, 0, 16, InputOutput,
              visual, CWBackPixel | CWColormap | CWBorderPixel, &attrs);
The above code gives me a BadMatch error. The visual parameter has a depth of 24.
Thanks in advance.
Not all possible depths are available in all servers. Run xdpyinfo | grep depths to see which depths yours supports, or call XListDepths from your application.
For example, on my home computer the X server supports depth of 16, but on my work computer it doesn't.
EDIT: Window depth must exactly match the visual's depth, or a BadMatch error occurs.
XListDepths reports the depths the screen supports, and XGetVisualInfo or XMatchVisualInfo can then be used to find a visual of a given depth. If all visuals have a depth of 24, then every window must have a depth of 24. Not every listed depth may be realised as a visual on a given server.
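For example, here is a minimal sketch (assuming the server actually advertises a 16-bit TrueColor visual; error handling kept to a minimum) that asks for such a visual with XMatchVisualInfo and then passes the matching depth, visual and colormap to XCreateWindow:
/* Ask the server for a 16-bit TrueColor visual and create a window whose
 * depth matches that visual. Compile with e.g. g++ demo.cpp -lX11. */
#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <cstdio>

int main() {
    Display *dpy = XOpenDisplay(nullptr);
    if (!dpy) return 1;

    XVisualInfo vinfo;
    if (!XMatchVisualInfo(dpy, DefaultScreen(dpy), 16, TrueColor, &vinfo)) {
        std::fprintf(stderr, "no 16-bit TrueColor visual on this server\n");
        return 1;
    }

    XSetWindowAttributes attrs;
    /* A colormap created for the chosen visual, plus border_pixel, are
     * required when the visual differs from the parent's, or BadMatch. */
    attrs.colormap = XCreateColormap(dpy, DefaultRootWindow(dpy),
                                     vinfo.visual, AllocNone);
    attrs.background_pixel = 0;
    attrs.border_pixel = 0;

    /* The window depth is taken from the visual, so it always matches. */
    Window win = XCreateWindow(dpy, DefaultRootWindow(dpy), 10, 10, 150, 100, 0,
                               vinfo.depth, InputOutput, vinfo.visual,
                               CWBackPixel | CWColormap | CWBorderPixel, &attrs);
    XMapWindow(dpy, win);
    XFlush(dpy);
    /* ... event loop would go here ... */
    return 0;
}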
I have combined data that requires a minimum of 35 bits.
Using a 4-state barcode, each bar represents 2 bits, so the above-mentioned information can be translated into 18 bars.
I would like to add some strong error correction to this barcode, so that if it's somehow damaged, it can be corrected. One such approach is Reed-Solomon error correction.
My goal is to add as strong error correction as possible, but on the other hand I have a size limitation on the barcode. If I understood the Reed-Solomon algorithm correctly, m∙k has to be at least the size of my message, i.e. 35 in my case.
Based on the Reed-Solomon Interactive Demo, I can go with (m, n, t, k) being (4, 15, 3, 9), which would allow me to encode a message of up to 4∙9 = 36 bits. This would lead to a code word of size 4∙15 = 60 bits, or 30 bars, but the error correction ratio t/n would be just 20.0%.
The next option is to go with (m, n, t, k) being (5, 31, 12, 7), which would allow me to encode a message of up to 5∙7 = 35 bits. This would lead to a code word of size 5∙31 = 155 bits, or 78 bars, and the error correction ratio t/n would be ~38.7%.
The first scenario requires use of barcode with 30 bars, which is nice, but 20.0% error correction is not as great as desired. The second scenario offers excellent error correction of 38.7%, but the barcode would have to have 78 bars, which is too many.
Is there some other approach or a different method, that would offer great error correction and a reasonable barcode length?
You could use a shortened code word such as (5, 19, 6, 7): ~31.6% correction ratio, 95 bits, 48 bars. One advantage of a shortened code word is a reduced chance of mis-correction if it is allowed to correct the maximum of 6 errors: if any of the 6 error locations is outside the range of valid locations, that is an indication that there are more than 6 errors. The probability of mis-correction is about (19/31)^6 ≈ 5.3%.
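If it helps to compare the options side by side, here is a small sketch (a hypothetical helper, not tied to any particular RS library) that just tabulates data bits, code bits, bar count and correction ratio for the three candidate parameter sets:
// Tabulate the trade-off for candidate (m, n, t, k) choices:
// data bits = m*k, code bits = m*n, bars = ceil(bits/2), ratio = t/n.
#include <cstdio>

struct RsParams { int m, n, t, k; };

int main() {
    const RsParams candidates[] = {
        {4, 15, 3, 9},    // full-length code over GF(2^4)
        {5, 31, 12, 7},   // full-length code over GF(2^5)
        {5, 19, 6, 7},    // shortened code over GF(2^5)
    };
    for (const RsParams &p : candidates) {
        int dataBits = p.m * p.k;
        int codeBits = p.m * p.n;
        int bars = (codeBits + 1) / 2;       // one 4-state bar carries 2 bits
        double ratio = 100.0 * p.t / p.n;    // correctable fraction of symbols
        std::printf("(m=%d, n=%d, t=%d, k=%d): %d data bits, %d code bits, "
                    "%d bars, %.1f%% correction\n",
                    p.m, p.n, p.t, p.k, dataBits, codeBits, bars, ratio);
    }
    return 0;
}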
I'm reviewing for my midterm and this specific question is causing me some issues.
This is the array on which I am performing the binary search:
The value I want to search for is 150.
To start off, I take the index of the first element, which is 0, and the index of the last element, which is 15.
(start + end) / 2,
(0 + 15) / 2 = 7
The value at index 7 of the array is 90.
90 < 150, so the value is contained in the right side of the array.
The array now looks like this:
Continuing with the same logic
(start + end) / 2
(8 + 15) / 2 = 11.
However, according to the professor I should be at the value 12 here. I'm not sure what I am doing wrong. Any help would be appreciated.
Algorithms were written even before computers were invented.
Computers are simply tools that implement algorithms efficiently, which is why they are fast.
The binary search you are performing here is done the way computers do it: arrays are indexed from 0 (counting usually starts from 0 in computers), which is why you are getting 11, and that is correct from the computer's point of view.
But for humans counting starts from 1, and so the result according to your professor is 12.
When we write algorithms, we describe them from a human perspective and twist them a little when implementing them on a machine.
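For example, here is a minimal sketch of just the second step of this search, showing how the two indexing conventions land on 11 and 12 for the very same element:
// The same midpoint computed with 0-based and 1-based indices.
#include <cstdio>

int main() {
    // 0-based indices (what most programming languages use)
    int lo0 = 8, hi0 = 15;
    int mid0 = (lo0 + hi0) / 2;    // integer division: 23 / 2 = 11

    // 1-based positions (how the elements are counted on paper)
    int lo1 = 9, hi1 = 16;
    int mid1 = (lo1 + hi1) / 2;    // integer division: 25 / 2 = 12

    std::printf("0-based midpoint: %d, 1-based midpoint: %d\n", mid0, mid1);
    // Both refer to the same element: position 12 is index 11.
    return 0;
}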
I'm trying to write a function in Matlab that reads in TIFF images from various cameras and restores them to their correct data values for analysis. These cameras are from a variety of brands and, so far, store either 12- or 14-bit data in 16-bit output. I've been reading them in using imread, and I was told that dividing by either 16 or 4 would convert the data back to its original form. Unfortunately, that was when the function was only intended for one brand of camera specifically, which nicely scales data to 16 bit at time of capture so that such a transformation would work.
Since I'd like to keep the whole image property detection thing as automated as possible, I've done some digging in the data for a couple different cameras, and I'm running into an issue that I must be completely clueless about. I've determined (so far) that the pictures will always be stored in one of two ways: such that the previous method will work (they multiply the original data out to fill the 16 bits), or they just stuff the data in directly and append zeroes to the front or back for any vacant bits. I decided to see if I could detect which was which and have been using the following two methods. The images I test should easily have values that fill up the full range from zero to saturation (though sometimes not quite), and are fairly large resolution, so in theory these methods should work:
I start by reading in the image data:
Mframe = imread('signal.tif');
This method attempts to detect the number of bits that ever get used:
bits = 0;
for i = 1:16
    Bframe = bitget(Mframe,i);
    bits = bits + max(max(Bframe));
end
And this method attempts to find if there has been a scaling operation done:
Mframe = imread('signal.tif');
Dframe = diff(Mframe);
mindiff = min(min(nonzeros(Dframe)));
As a 3rd check I always look at the maximum value of my input image:
maxval = max(max(Mframe));
Please check my understanding here:
The value of maxval should be 65532 in the case of a 16-bit image containing any saturation.
If the 12 or 14 bit data has been scaled to 16 bit, it should return maxval of 65532, a mindiff of 16 or 4 respectively, and bits as 16.
If the 12 or 14 bit data was stored directly with leading/trailing zeros, it can't return a maxval of 65532, mindiff should not return 16 or 4 (though it IS remotely possible), and bits should show as 12 or 14 respectively.
If an image is actually not reaching saturation, it can't return a maxval of 65532, mindiff should still act as described for the two cases above, and bits could possibly return as one lower than it otherwise would.
Am I correct in the above? If not please show me what I'm not understanding (I'm definitely not a computer scientist), because I seem to be getting data that conflicts with this.
Only one case appears to work just as I expect. I know the data to be 12 bit, and my testing shows maxval near 65532, mindiff of 16, and bits as 15. I can conclude that this image is not saturated and is 12-bit data scaled to 16 bit.
Another case is from a different brand that I know to have 12-bit output, and testing an image that I know isn't quite saturated gives me maxval of 61056, mindiff of 16, and bits as 12. ???
Yet another case, for yet again another brand, is known to have 14-bit output, and when I test an image I know to be saturated it gives me maxval of 65532, mindiff of 4, and bits as 15. ???
So very confused.
Well, after a lot of digging I finally figured it all out. I wrote some code to help me understand the differences between the different files and discovered that a couple of the cameras had "signatures" of sorts in them. I'm contacting the manufacturers for more information, but one in particular appears to be a timestamp that always occurs in the first 2 pixels.
Anyhow, I wrote the following code to fix the two issues I found and now everything is working peachy:
Mframe = imread('signal.tiff');
minval = min(min(Mframe));
mindiff = min(min(nonzeros(diff(Mframe))));

fixbit = log2(double(mindiff));
if rem(fixbit,2) % Correct Brand A issues (bad pixels one bit below the expected minimum difference)
    fixbit = fixbit + 1;
    Bframe = bitget(Mframe,fixbit);
    [x,y] = find(Bframe==1);
    for i=1:length(x)
        Mframe(x(i),y(i)) = Mframe(x(i),y(i)) + mindiff;
    end
end

for i=1:4 % Correct Brand B timestamp in the first two pixels
    Bframe = bitget(Mframe,i);
    if any(any(Bframe))
        Mframe(1,1) = minval; Mframe(1,2) = minval;
    end
end

bits = 0;
for i = 1:16 % Get actual bit depth
    Bframe = bitget(Mframe,i);
    bits = bits + max(max(Bframe));
end
As for the Brand A issues, that camera appears to have bad data in just a few pixels of every frame (not the same pixels every time), where a pixel's value differs from the pixel below it by one bit less than should be possible. For example, in a 12-bit picture the minimum difference should be 16 and in a 14-bit picture it should be 4, but these pixels have values that are 8 or 2 lower than the pixel below them. I don't know why that's happening, but it was fairly simple to gloss over.
I'm using SVMLib to train a simple SVM over the MNIST dataset. It contains 60,000 training examples. However, I have several performance issues: the training seems to be endless (after a few hours, I had to shut it down by hand because it doesn't respond). My code is very simple; I just call ovrtrain on the dataset without any kernel options or special constants:
function features = readFeatures(fileName)
  [fid, msg] = fopen(fileName, 'r', 'ieee-be');
  header = fread(fid, 4, "int32", 0, "ieee-be");
  if header(1) ~= 2051
    fprintf("Wrong magic number!");
  end
  M = header(2);
  rows = header(3);
  columns = header(4);
  features = fread(fid, [M, rows*columns], "uint8", 0, "ieee-be");
  fclose(fid);
  return;
endfunction

function labels = readLabels(fileName)
  [fid, msg] = fopen(fileName, 'r', 'ieee-be');
  header = fread(fid, 2, "int32", 0, "ieee-be");
  if header(1) ~= 2049
    fprintf("Wrong magic number!");
  end
  M = header(2);
  labels = fread(fid, [M, 1], "uint8", 0, "ieee-be");
  fclose(fid);
  return;
endfunction
labels = readLabels("train-labels.idx1-ubyte");
features = readFeatures("train-images.idx3-ubyte");
model = ovrtrain(labels, features, "-t 0"); % doesn't respond...
My question: is this normal? I'm running it on Ubuntu in a virtual machine. Should I wait longer?
I don't know whether you got your answer or not, but let me tell you what I suspect about your situation. 60,000 examples is not a lot for a powerful trainer like LibSVM. Currently I am working on a training set of 6,000 examples, and it takes 3 to 5 seconds to train. However, parameter selection is important, and that is probably what is taking so long. If the number of unique features in your data set is very high, then every example will have lots of zero feature values for the features it does not contain. If the tool applies data scaling to your training set, most of those zero feature values will probably be scaled to some non-zero value, leaving you with an astronomical number of unique, non-zero-valued features for each and every example. This makes it very hard for an SVM tool to find efficient parameter values.
Long story short, if you have done enough research on SVM tools and understand what I mean, either assign the parameter values in the training command before executing it, or find a way to decrease the number of unique features. If you haven't, go ahead and download the latest version of LibSVM and read the README files as well as the FAQ on the tool's website.
If none of these is the case, then sorry for taking your time :) Good luck.
It might be an issue of convergence given the characteristics of your data.
Check which kernel is selected by default and change it. Also, check the stopping criterion of the package. Additionally, if you are looking for a faster implementation, check MSVMpack, which is a parallel implementation of SVM.
Finally, feature selection is desirable in your case. You can end up with a good feature subset of almost half of what you have. In addition, you only need a portion of the data for training, e.g. 60-70% is sufficient.
First of all, 60k is a huge amount of data for training. Training that much data with a linear kernel will take a very long time unless you have a supercomputer. Also, you have selected a linear kernel function of degree 1. It is better to use a Gaussian or higher-degree polynomial kernel (degree 4 used with the same dataset showed good training accuracy). Try adding the LIBSVM options -c (cost), -m (kernel cache size in MB) and -e (epsilon, the tolerance of the termination criterion, default 0.001). First run 1000 samples with the Gaussian / degree-4 polynomial kernel and compare the accuracy.
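In case it helps to see what those options actually control, here is a hedged sketch (not the poster's code; the two-point training set is purely illustrative) of what an option string like "-t 2 -c 10 -m 500 -e 0.001" maps to in LIBSVM's underlying C API; the MATLAB/Octave wrapper parses the same string into the same svm_parameter fields:
// Build a tiny svm_problem by hand and train with explicit parameters.
#include "svm.h"      // ships with the LIBSVM source distribution
#include <cstdio>

int main() {
    // Two 2-dimensional examples in LIBSVM's sparse format (index -1 ends a row).
    svm_node x0[] = {{1, 0.0}, {2, 0.0}, {-1, 0.0}};
    svm_node x1[] = {{1, 1.0}, {2, 1.0}, {-1, 0.0}};
    svm_node *rows[] = {x0, x1};
    double labels[] = {-1.0, +1.0};

    svm_problem prob;
    prob.l = 2;
    prob.y = labels;
    prob.x = rows;

    svm_parameter param = {};
    param.svm_type    = C_SVC;
    param.kernel_type = RBF;        // -t 2 (Gaussian); POLY with degree 4 is the other suggestion
    param.degree      = 4;          //      only used by the polynomial kernel
    param.gamma       = 0.5;        // -g; often 1/num_features
    param.C           = 10;         // -c cost
    param.cache_size  = 500;        // -m kernel cache size in MB
    param.eps         = 0.001;      // -e stopping tolerance (the default)
    param.shrinking   = 1;
    param.nu = 0.5; param.p = 0.1;  // unused by C_SVC but must be initialised

    if (const char *err = svm_check_parameter(&prob, &param)) {
        std::fprintf(stderr, "bad parameters: %s\n", err);
        return 1;
    }
    svm_model *model = svm_train(&prob, &param);
    std::printf("trained with %d support vectors\n", svm_get_nr_sv(model));
    svm_free_and_destroy_model(&model);
    return 0;
}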
I wonder if there's a way to begin reading from an arbitrary position in an array. E.g. if I have an array of size 10 and it begins reading from position 4, it should continue reading from positions 5, 6, 7, 8, 9, 0, 1, 2, 3.
I was uncertain about the tag, so if I have picked the wrong one, please do change it for me.
Yes, you can index using the modulo operation which is written as % in most languages:
x = list[i % list.length]
This will give you the desired effect of wrapping around when you reach the end of the list instead of attempting to index out of bounds.
This assumes 0-based indexing. If you use 1-based indexing, shift by one before and after the modulo: x = list[((i - 1) % list.length) + 1]
offset = 4;
for(i=0; i<n; i++)
    cout << x[(i+offset)%n] << ' ';
It depends on what you mean by "list". Traditionally in computer science, "list" usually means "linked list". In this case, you have to traverse the list in order to get to a particular element.
If you need/want to be able to start reading from arbitrary positions efficiently, you probably want to avoid linked lists, but the exact alternatives you have (easily) available will vary with the programming language, libraries, etc., you're using.
Edit: for an array, it's generally trivial -- just specify the starting position directly, and take the remainder after dividing by the array size.
One option is to make the list circular (like they do in the Linux kernel).
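For example, a minimal sketch of that idea (a hand-rolled circular singly linked list, not the kernel's actual list.h macros):
// Each node points to the next, and the last node points back to the first,
// so traversal can start anywhere and simply keeps following next pointers.
#include <cstdio>

struct Node {
    int value;
    Node *next;
};

int main() {
    // Build a circle of 10 nodes holding 0..9.
    Node nodes[10];
    for (int i = 0; i < 10; ++i) {
        nodes[i].value = i;
        nodes[i].next = &nodes[(i + 1) % 10];
    }

    // Start reading at position 4 and visit every element exactly once.
    Node *start = &nodes[4];
    Node *cur = start;
    do {
        std::printf("%d ", cur->value);   // prints 4 5 6 7 8 9 0 1 2 3
        cur = cur->next;
    } while (cur != start);
    std::printf("\n");
    return 0;
}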