Use Xming / VcXsrv / XWin to set up two screens for i3wm [closed]

I'd like to use i3 on my Windows 10 Linux subsystem with two monitors.
With:
vcxsrv.exe :1 -nodecoration -wgl -multimonitors -screen 0 3840x1160
I can create one large window that spans my two monitors. This configuration works with i3, but i3 recognizes it as one single screen, with the drawback that windows are sometimes cut in half when they span both monitors.
I'd like to use i3 with two separate screens, like this:
vcxsrv.exe :1 -nodecoration -wgl -screen 0 #1 -screen 1 #2
However, I can run i3 either on screen 0 (export DISPLAY=:1) or on screen 1 (export DISPLAY=:1.1), but not on both at the same time.
Maybe it has something to do with xrandr, since it does not recognize my configuration:
xrandr -q:
xrandr: Failed to get size of gamma for output default
Screen 1: minimum 0 x 0, current 1920 x 1160, maximum 32768 x 32768
default connected primary 1920x1160+0+0 0mm x 0mm
1920x1160 0.0*
How can I use both separate screens with i3?

I've been playing around with this a bit and, while I haven't found a solution for getting xrandr to recognize multiple monitors, I have found that i3 has an undocumented config option that lets you simulate multiple monitors on a single one. In my i3 config I've added:
fake-outputs 1920x1080+0+0,1366x768+1920+0
This makes i3 treat the massive display that VcXsrv provides as 2 logical displays, and by tuning the sizes/offsets to the monitor sizes, it places the displays perfectly on each monitor.
Also, this is the command I'm using to start VcXsrv:
vcxsrv.exe -screen 0 #2 -wgl -nodecoration +xinerama -screen 1 #1 -wgl -nodecoration +xinerama
The reason the screens are switched is that polybar was showing up on the larger screen with the smaller screen's dimensions when using 0 #1 and 1 #2. Swapping them puts polybar on the large screen (on the left) with the correct dimensions. This may not be the case for everyone's setup.

PIC18 Signal Measurement Timer SMT1 (Counter Mode) Not Incrementing [closed]

I'm trying to use SMT1 on a PIC18F45K42 to count cycles of a square wave on pin RB0. I can't get the counter to increment, and I'm not sure what I'm doing wrong. If I understand correctly, SMT1TMR should be incrementing, but it's not. (I also checked SMT1TMRL, etc., directly, and it's not changing.)
1) I am trying to do a normal count, not gated, so I'm not using the window signal at all (I don't want to have to use it; I just want to zero the counter from time to time and then check how many square-wave cycles have arrived).
2) I have registers set as follows (pic below) according to the paused debugger in MPLAB X. I am putting a scope probe directly on the pin and I can see the square wave is arriving. I can also pause the debugger occasionally to read PORTB and see PORTB.0 is changing between high and low, so I believe it is being received.
3) I'm playing with square waves from 20 Hz to around 400 Hz created from a function generator.
I have attached an image of the registers. Here are the values for reference:
SMT1SIGPPS 0x08 (should be RB0)
SMT1CON0 0x80
SMT1CON1 0xC8
SMT1STAT 0x05
SMT1SIG 0x00
TRISB 0xE3
PMD6 0x17 (SMT1MD is 0, which should be "not disabled")
Any suggestions are much appreciated. This seems like it should be simple and straightforward.
Thank you.
I figured it out. The key is in data sheet section 25.1.2, Period Match Interrupt. The period register has to be set longer than the counter will run. It defaults to 0, so the counter couldn't increment. I fixed it by manually loading the three period registers with the maximum value. I added the following to my init code, and it seems to be working as expected now.
SMT1PRU = 0xFF; //set max period for SMT1 so counter doesn't roll over
SMT1PRH = 0xFF;
SMT1PRL = 0xFF;

What are online down-scaling algorithms? [closed]

I'm building a circuit that will be reading PAL/NTSC (576i, 480i) frames from analog input. The microcontroller has 32 kB of memory. My goal is to scale down input to 32x16 resolution, and forward this image to LED matrix.
A PAL frame can take ~400 kB of memory, so I thought about down-scaling online: read 18 pixels, decimate to 1; read 45 lines, decimate to 1. Peak memory usage: 45 x 32 = 1.44 kB (45 decimated lines awaiting vertical decimation).
Question: what other online image down-scaling algorithms are there, besides the naive one above? Googling is extremely hard because only online services turn up (PDF resize, etc.).
Note that mentioned formats are interlaced, so you read at first 0th, 2nd, 4th.. lines (first semi-frame), then 1st, 3rd, .. lines (second semi-frame).
If you are using simple averaging of the pixel values in each resulting cell (I suspect that is OK for such a small output matrix), then create an output array (16x32 = 512 entries) and sum the appropriate values into every cell. You also need a buffer for a single input line (768 or 640 entries).
x_coeff = input_width / out_width
y_coeff = input_height / out_height

// for each incoming line:
out_y = inputrow / y_coeff
for (inputcol = 0 .. input_width - 1)
    out_x = inputcol / x_coeff
    out_array[out_y][out_x] += input_line[inputcol]
inputrow = inputrow + 2
if (inputrow == input_height)      // even field done, start the odd field
    inputrow = 1
else if (inputrow > input_height)  // odd field done, full frame accumulated
    inputrow = 0
    divide out_array[][] entries by (x_coeff * y_coeff)
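The accumulate-and-divide scheme above can be sketched in C roughly as follows. This is a minimal illustration, not drop-in microcontroller code: the function names are my own, it assumes 8-bit grayscale samples, and it assumes the input dimensions divide evenly by the output dimensions.

```c
#include <stdint.h>
#include <string.h>

#define IN_W  640   /* assumed input width (480i example) */
#define IN_H  480   /* assumed input height */
#define OUT_W 32
#define OUT_H 16

/* Accumulator: one 32-bit sum per output cell (16x32 = 512 entries).
   This plus one input line is the entire RAM footprint. */
static uint32_t acc[OUT_H][OUT_W];

/* Fold one scanline (arriving in interlaced order) into the accumulator.
   input_row is the line's position in the full de-interlaced frame. */
void accumulate_line(const uint8_t *line, int input_row)
{
    int out_y = input_row / (IN_H / OUT_H);
    for (int x = 0; x < IN_W; x++)
        acc[out_y][x / (IN_W / OUT_W)] += line[x];
}

/* After both fields of a frame, divide each sum by the cell area
   to get the average, then clear the accumulator for the next frame. */
void finalize_frame(uint8_t out[OUT_H][OUT_W])
{
    uint32_t area = (uint32_t)(IN_W / OUT_W) * (uint32_t)(IN_H / OUT_H);
    for (int y = 0; y < OUT_H; y++)
        for (int x = 0; x < OUT_W; x++)
            out[y][x] = (uint8_t)(acc[y][x] / area);
    memset(acc, 0, sizeof acc);
}
```

The interlacing doesn't matter to the accumulator: each line lands in the right output row regardless of arrival order, so you just call accumulate_line for the even field, then the odd field, then finalize.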

Why is the offset of the "logical bottom" and "physical bottom" of the stack random?

I run a program on my Windows 10 machine with windbg, and let it break on the initial breakpoint. I take the address of the physical bottom of the stack (stackBase of the TEB), and subtract the rsp value of ntdll!LdrInitializeThunk. I just did this 5 times on the same program, and I got 5 different values:
0x600
0x9f0
0xa40
0x5d0
0x570
You get similar results if you do the same with ntdll!RtlUserThreadStart, etc. This suggests that the "logical bottom" of the stack is somewhat randomized. Why is that? Is this some kind of "mini-ASLR" inside of the stack? Is this documented anywhere?
After some googling for ASLR in Vista specifically (ASLR was introduced in Vista), I found this document from Symantec. On page 5, it mentions the phenomenon that my question is about (emphasis mine):
Once the stack has been placed, the initial stack pointer is further randomized by a random decremental
amount. The initial offset is selected to be up to half a page (2,048 bytes), but is limited to naturally
aligned addresses [...]
So it seems it's done intentionally for security reasons (this way it's harder to figure out addresses of things that reside at fixed offsets relative to the stack base).
I'm leaving this question open for some time hoping that someone can provide a more insightful answer. If no one does, I will accept this answer.
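The mechanism the Symantec paper describes amounts to subtracting a random, naturally aligned amount of up to half a page from the initial stack pointer. A toy sketch of that calculation (rand() stands in for whatever entropy source the kernel actually uses, and 16-byte alignment is assumed for x64):

```c
#include <stdint.h>
#include <stdlib.h>

/* Pick a random decrement of up to half a page (2,048 bytes),
   rounded down to 16-byte "natural" stack alignment. */
uintptr_t random_stack_offset(void)
{
    uintptr_t off = (uintptr_t)(rand() % 2048); /* up to half a page */
    return off & ~(uintptr_t)0xF;               /* keep 16-byte alignment */
}
```

With 2048 / 16 = 128 possible offsets, this matches the spread of values observed above (0x570, 0x5d0, 0x600, 0x9f0, 0xa40 are all 16-byte aligned and under 0x800).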

Bankers algorithm

I have a question about the answer to a problem on Dijkstra's Banker's Algorithm (the question is provided in the screenshot below).
I thought the answer to this question should be "yes, it is possible". My thinking is that once user 1 is done, we can pop him out, free his requested resources (10 A's and 5 B's), and return his used resources to the available pool, which will help the others finish.
Instead, the answer (in the screen shot beneath the question) states it's not possible. Where did I go wrong? Why is the answer that this is not possible?
Answer:
I think it's just a poorly worded question. The problem description states that the available resources are A = 10 and B = 15.
In the Banker's algorithm, a state is considered "safe" if every process can eventually allocate the maximum resources it needs. (Process 1 needs 10 A's and 5 B's.)
Then the answer states the available resources are A = 1 and B = 2. If you look at all the processes currently allocated numbers:
process 1 has 2 A resources
process 2 has 3 A resources
process 3 has 2 A resources
process 4 has 2 A resources
---------------------------
total A resources in use = 9
it becomes clear that the question meant those were the total system resources, not the currently available resources. Thus 9 A resources are in use, and process 1 requires a maximum of 10 (it has 2), so it needs 8 more; in which case the answer is no, it's not safe.
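The safety check itself can be sketched as follows. This is the generic Banker's safety algorithm, not the exact numbers from this exam question (the screenshot's full allocation and maximum matrices aren't reproduced here, so the test values are hypothetical):

```c
#include <stdbool.h>

#define NPROC 4   /* number of processes (assumed) */
#define NRES  2   /* resource types: A and B */

/* Banker's safety check: returns true if some execution order lets
   every process acquire its maximum claim and then release everything. */
bool is_safe(const int avail[NRES],
             const int alloc[NPROC][NRES],
             const int max[NPROC][NRES])
{
    int work[NRES];
    bool done[NPROC] = { false };
    for (int r = 0; r < NRES; r++)
        work[r] = avail[r];

    for (int finished = 0; finished < NPROC; ) {
        bool progress = false;
        for (int p = 0; p < NPROC; p++) {
            if (done[p]) continue;
            /* Can p's remaining need (max - alloc) be satisfied now? */
            bool can_run = true;
            for (int r = 0; r < NRES; r++) {
                if (max[p][r] - alloc[p][r] > work[r]) {
                    can_run = false;
                    break;
                }
            }
            if (can_run) {
                /* p runs to completion and releases its allocation. */
                for (int r = 0; r < NRES; r++)
                    work[r] += alloc[p][r];
                done[p] = true;
                finished++;
                progress = true;
            }
        }
        if (!progress)
            return false; /* no process can finish: state is unsafe */
    }
    return true;
}
```

The question's situation corresponds to the unsafe case: with only 1 A available and process 1 needing 8 more, no ordering exists in which every process can reach its maximum.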

Is there any graphical Binary Diff tool for Mac OS X? [closed]

Are there any binary diff tools for Mac OS X with a GUI? There are a gazillion text-based diff tools, but I need to compare two binary files; essentially two hex editors with dec/hex view next to each other. (The binary files are a custom file format, so not images or anything that has a more specialized diff tool.)
I just discovered Hex Fiend – love at first sight! Open both binary files, then do File > Compare x and y, or Shift+Cmd+D.
You could store the hex of each binary in temp files, then compare them with diff. This would give you the visual hex difference.
xxd -c 1 file1 | cut -d ' ' -f 2 > file1.hex
xxd -c 1 file2 | cut -d ' ' -f 2 > file2.hex
diff file1.hex file2.hex
xxd creates a hex dump; we're telling it to print one byte per line, then cut splits on the space and extracts the hex column, and diff compares the two dumps.
You could also use od instead of xxd.
There is Ellié Computing Merge (http://www.elliecomputing.com) (NB: I work for ECMerge). It can compare arbitrarily large files with the usual hex+ASCII views and a side-by-side visual diff. It works on Mac, and on Linux/Windows as well.
You can use colorbindiff.pl; it's a simple Perl script that does exactly what you want: a side-by-side (and colored) binary diff. It shows byte changes and byte additions/deletions.
You can find it on GitHub.
http://en.wikipedia.org/wiki/Comparison_of_hex_editors
Maybe "HexEdit by Lane Roathe", wxHexEditor or UltraEdit
My go-to for stuff like this is 010 Editor. It has a very customizable hex bin-diff, configurable minimum match length, synchronized scrolling, and much more.
Beyond Compare 4 does a pretty good job, especially if you have multiple binary files to compare. However, its matching isn't obviously configurable and can be wonky, depending on the use case.
