What do the different shapes of the labels in TortoiseGit Log window mean? - tortoisegit

Some labels are rectangular, whereas some labels have the corners cut.
What do these different shapes mean?
Example of remote branches:
Example of local branches:
Example of tags:

== Branches ==
The active branch is displayed in dark red (by default). The green ones are local branches and the peach ones are remote branches. Normally, branches are displayed as plain rectangles.
The boxes with rounded corners for local branches indicate that the branch has an associated remote-tracking branch (e.g., master and deploy-pr-label). The boxes with rounded corners for remote branches indicate which of the (possibly several) remote branches is the tracked one (e.g., master and origin/master in the question).
== Tags ==
Tags are yellow by default. In Git there are two tag types: normal (lightweight) tags and annotated tags. The annotated ones have an apex (a pointed tip) on the right side.
== Special cases ==
The stash is shown as a dark grey rectangle.
When bisecting, there are also rectangles marking the revisions: light red for bad, blue for known good, and grey for skipped.
== General ==
The colors can be changed in TortoiseGit settings: https://tortoisegit.org/docs/tortoisegit/tgit-dug-settings.html#tgit-dug-settings-colours2
The colors of the lines do not correspond to the colors of the shapes.


How to avoid labels of 2 layers in a layer group overlapping in GeoServer?

I have created 2 layers, each with a text label. When I add those 2 layers to a layer group, I see that the labels of the 2 layers overlap, as in the image below:
Does anyone know how to avoid this problem?
WMS requests are completely independent of each other, so GeoServer can't tell that you intend to display the two layers on top of each other. Therefore there is nothing it can do automatically to prevent your two labels being on top of each other.
You can avoid this by combining the two layers into one request; this would need to be done in your client somehow.
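For example, a single GetMap request that lists both layers lets GeoServer render them together, so its label placement can consider both sets of labels at once (the workspace and layer names here are placeholders):
...&SERVICE=WMS&VERSION=1.1.1&REQUEST=GetMap&LAYERS=myworkspace:layerA,myworkspace:layerB&...&FORMAT=image/png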
Alternatively, you could modify the two SLD files that generate your labels to add a positive and negative offset from the label position so that the two labels are less likely to overlap.

How to whiten the white parts and blacken the black parts of a scanned image in MATLAB or Photoshop

I have a scanned image, scanned from a printed Word (docx) file. I want the scanned image to look like the original Word file, i.e. to remove noise and enhance it: to fully whiten the white parts and fully blacken the black parts without changing the colorful parts of the file.
There are a number of ways you could approach this. The simplest would be to apply a levels filter with the black point raised a bit and the white point lowered a bit. This can be done to all 3 color channels or selectively to a subset. Since you're aiming for pure black and white and there's no color cast on the image, I would apply the same settings to all 3 color channels. With channel values normalized to the range 0–1 and the result clamped back to that range, it works like this:
destVal = (srcVal - blackPt) / (whitePt - blackPt);
This will slightly change the colored parts of the image, probably making them slightly more or less saturated.
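As a rough sketch (not tied to MATLAB or Photoshop), the same adjustment can be written with OpenCV in C++; the file name and the black/white points below are placeholder values to experiment with:

#include <opencv2/opencv.hpp>

int main()
{
    cv::Mat img = cv::imread("scan.png", cv::IMREAD_COLOR);
    if (img.empty()) return 1;

    const double blackPt = 0.10;  // raise the black point a bit (fraction of full range)
    const double whitePt = 0.66;  // lower the white point a bit

    // destVal = (srcVal - blackPt) / (whitePt - blackPt) on every channel;
    // convertTo saturates the result back into [0, 255] automatically.
    const double alpha = 1.0 / (whitePt - blackPt);
    const double beta  = -blackPt * 255.0 * alpha;

    cv::Mat out;
    img.convertTo(out, CV_8UC3, alpha, beta);
    cv::imwrite("scan_levels.png", out);
    return 0;
}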
I tried this in a photo editing app and was disappointed with the results. I was able to remove most of the noise by bringing the white point down to about 66%. However, the logo in the upper left is so wispy that it ended up getting washed out to near white as well. The black point didn't really need to be moved.
I think you're going to have a tough time with that logo. You could isolate it from your other changes, though, and that might help. A simple circular area around it where you just ignore any processing would probably do the trick.
But I got to thinking: this was made with Word. Do you have a copy of Word? It probably wouldn't be too difficult to put together a layout that's nearly identical. It still wouldn't help with the logo. But what you could do is lay out the text the same and export it to a PDF or other image format. (Or if you can find the original template, just use it directly.) Then you could write some code to process your scanned copy and, wherever a pixel is grayscale (red = green = blue), use the corresponding pixel from the version you made; otherwise use the pixel from the scan. That would get you all the stamps and signatures, while having the text nice and sharp. Perhaps you could even find the organization's logo online. In fact, Wikipedia even has a copy of their logo.
You'd probably need to have some sort of threshold for the grayscale because some pixels might be close but have a slight color cast. One option might be something like this:
if ((fabs(red - green) < threshold) && (fabs(red - blue) < threshold))
{
destVal = recreationVal; // The output is the same as the copy you made manually
}
else
{
destVal = scannedVal; // The output is the same as the scan
}
You may find this eats away at some of the colored marks, so you could do a second pass over the output where any pixel that's adjacent to a colored pixel brings in the corresponding pixel from the original scan.
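Here is a rough OpenCV (C++) sketch of that per-pixel test, assuming the recreated page has been exported as an image with the same size as the scan; the file names and the threshold are placeholder assumptions:

#include <opencv2/opencv.hpp>
#include <cstdlib>

int main()
{
    cv::Mat scan = cv::imread("scan.png", cv::IMREAD_COLOR);
    cv::Mat recreation = cv::imread("recreated.png", cv::IMREAD_COLOR);
    if (scan.empty() || recreation.empty() || scan.size() != recreation.size()) return 1;

    const int threshold = 20;  // how far apart the channels may be and still count as "gray"
    cv::Mat out = scan.clone();

    for (int y = 0; y < scan.rows; ++y) {
        for (int x = 0; x < scan.cols; ++x) {
            const cv::Vec3b px = scan.at<cv::Vec3b>(y, x);  // OpenCV stores pixels as BGR
            const int blue = px[0], green = px[1], red = px[2];
            if (std::abs(red - green) < threshold && std::abs(red - blue) < threshold)
                out.at<cv::Vec3b>(y, x) = recreation.at<cv::Vec3b>(y, x);  // near-grayscale: use the clean recreation
            // otherwise keep the scanned pixel (stamps, signatures, logo)
        }
    }

    cv::imwrite("composite.png", out);
    return 0;
}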

Shaded interactive D3 treemap - support for people who are colorblind?

I created an interactive treemap where the severity of broken links in areas of a web site is shown using various shades of red. If there are zero problems, boxes appear green as "all clear." I include a data table as a text equivalent, but I was asked to make the chart more usable by people who are colorblind. I looked for a colorblindness simulator that would help me pick shades of blue or something, because red-green is a particular problem, but I don't know how to judge.
Can anyone point me to how to add different textures or fill patterns to treemap boxes, or how to add box borders whose width is based on my problem severity parameter (here, the count of broken links)? The change would need to be applied interactively. These were two of the suggestions I received; perhaps there are other solutions?
Visualize and Accelerate Web Site Repairs:
http://bl.ocks.org/wendlingd/af1e751e97c5211ff11277c985e5e642
It's always a good idea to use two different characteristics for things that need to be distinguished. That's why links receive a different color and an underline by default.
I think it's a good idea to use hatching as a second characteristic. So you could add e.g.
background-image: repeating-linear-gradient(
55deg,
transparent,
transparent 15px,
rgba(255,255,255,.5) 15px,
rgba(255,255,255,.5) 20px
);
to your boxes and fiddle around with angle and pixels for different box types.
Could look like this in the end:
Most importantly, please increase the font / background contrast! This will help visually impaired users more than any hatching…
Hatching was taken from http://lea.verou.me/css3patterns/#diagonal-stripes

Placing a window at the same point on two displays with different resolutions

Is there a way to place a window at the same point, for example in the top-right corner, on two displays with different resolutions?
For example, you have a MacBook and you have connected it to a big display.
Note: the window's "Spaces" property in IB is set to "Can join all spaces".
Spaces and displays are two separate concepts. So, "Can join all spaces" is not relevant to your question.
A window can only be at one position in the global screen coordinate system that spans the whole desktop. Each display constitutes a separate part of that coordinate system (ignoring mirroring). Therefore, no, it's not possible to have a window show up in the top-right corner of two separate displays. You would need two separate windows to achieve that.

Counting Objects via Mean-Shift Segmentation

I'm trying to use mean-shift segmentation to count the objects found in an image. I've been working with pyrMeanShiftFiltering in OpenCV. Using the following code, I'm able to produce an image that has been segmented. However, I do not know how to actually count the number of "items" in that image.
Simply running pyrMeanShiftFiltering( img, res, spatialRad, colorRad, maxPyrLevel ); on this image
produces this image
In this example, it doesn't seem much different, although there are some images where the segmentation makes a huge difference in the colors present. However, for the majority of test cases, I'm going to assume that the colors will not be terribly distinct and that the edges will not be distinct enough (as they are in the example given) to use edge detection on the image itself.
Based on this, how can I go about finding the number of objects found inside of that image? I'm looking for a bit of code, although any poke in the right direction will help.
If the objects contain different colors (and those colors are distinctive), the simplest solution would be to count how many clusters there are (removing the cluster for the white color, since the pages are white/yellow).
On the images you have shown you could also use a corner detector, since you have very distinctive corners; then look at the surroundings and filter those corners by color (the surrounding area has to contain some cover color and some white (from the pages)), then match corners that lie on the same vertical line and finally count them.
One other idea is to extract the white/yellow color of the pages (clustering + histogram filtering) and to count the distinct blobs.
Maybe the best approach is to extract the white/yellow color of the pages by finding the cover color (blue, green) and the closest white color => find blobs. Those blobs can be labeled with the closest cover color. Then you have blobs that represent pages and are labeled according to the closest cover color.
Those blobs may be broken into several pieces (one book partly covering another book) and two different blobs may belong to the same object (a book with a multi-colored cover), but you know that those blobs have to be rectangular. So you could find lines in that binarized image and try to connect each one with the closest line. Then you will finally have one blob that matches one book, and you can count them.
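As a minimal sketch of one way to count blobs after the mean-shift step (C++ with OpenCV), assuming a light background and using placeholder parameter values; it does not address the merging/splitting issues described above:

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("books.png", cv::IMREAD_COLOR);
    if (img.empty()) return 1;

    // Mean-shift segmentation, as in the question.
    cv::Mat res;
    cv::pyrMeanShiftFiltering(img, res, 21, 51, 2);  // spatialRad, colorRad, maxPyrLevel

    // Separate the light (white/yellow) regions from the colored ones.
    cv::Mat gray, mask;
    cv::cvtColor(res, gray, cv::COLOR_BGR2GRAY);
    cv::threshold(gray, mask, 200, 255, cv::THRESH_BINARY_INV);

    // Clean up speckles, then count the remaining connected blobs.
    cv::morphologyEx(mask, mask, cv::MORPH_OPEN,
                     cv::getStructuringElement(cv::MORPH_RECT, cv::Size(5, 5)));

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    int count = 0;
    for (const auto& c : contours)
        if (cv::contourArea(c) > 500.0)  // ignore tiny fragments
            ++count;

    std::cout << "objects found: " << count << std::endl;
    return 0;
}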
