Constant similar to "VM_RESERVED" and nopage method in 3.9.6 - linux-kernel

I cannot find the VM_RESERVED constant or the nopage method (in vm_operations_struct) in 3.9.6. What are their replacements in 3.9.6?

In the patch removing VM_RESERVED, the author had this advice:
A long time ago, in v2.4, VM_RESERVED kept swapout process off VMA,
currently it lost original meaning but still has some effects:
  | effect                 | alternative flags
 -+------------------------+---------------------------------------------
 1| account as reserved_vm | VM_IO
 2| skip in core dump      | VM_IO, VM_DONTDUMP
 3| do not merge or expand | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
 4| do not mlock           | VM_IO, VM_DONTEXPAND, VM_HUGETLB, VM_PFNMAP
Thus VM_RESERVED can be replaced with VM_IO or pair VM_DONTEXPAND | VM_DONTDUMP.
vm_ops->nopage was replaced with vm_ops->fault in this patch.
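For a concrete picture, here is a minimal sketch of what a driver's mmap path might look like against the 3.9-era API, assuming a simple page-backed mapping; my_fault, my_mmap, my_buffer and MY_NPAGES are hypothetical names used only for illustration.

```c
#include <linux/mm.h>
#include <linux/fs.h>

#define MY_NPAGES 4            /* hypothetical size of the backing buffer */

static char *my_buffer;        /* hypothetical page-aligned kernel buffer */

/* Replaces the old ->nopage callback: fill vmf->page and return a
 * VM_FAULT_* code instead of returning a struct page * directly. */
static int my_fault(struct vm_area_struct *vma, struct vm_fault *vmf)
{
	struct page *page;

	if (vmf->pgoff >= MY_NPAGES)
		return VM_FAULT_SIGBUS;

	page = virt_to_page(my_buffer + (vmf->pgoff << PAGE_SHIFT));
	get_page(page);
	vmf->page = page;
	return 0;
}

static const struct vm_operations_struct my_vm_ops = {
	.fault = my_fault,     /* was .nopage before the API change */
};

static int my_mmap(struct file *filp, struct vm_area_struct *vma)
{
	/* VM_RESERVED no longer exists; per the patch, use VM_IO or
	 * VM_DONTEXPAND | VM_DONTDUMP depending on which effect you need. */
	vma->vm_flags |= VM_DONTEXPAND | VM_DONTDUMP;
	vma->vm_ops = &my_vm_ops;
	return 0;
}
```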

Related

In Cache Coherency (specifically write-through and write-back), can the cache be updated even though it hasn't been read from memory first?

The image shows three rows (serials). In serial 1 the event is blank, and for both write-through and write-back only the memory column has X. In serial 2 the event is "P reads X", and both the memory and cache columns have X for write-through and write-back. In serial 3 the event is "P updates X": with write-through both memory and cache have X', while with write-back only the cache column has X' and the memory column still has X.
The image shows how write-through and write-back work. But what if, in serial 2, the event were "P updates X" instead of "P reads X"? What would happen to the memory and the cache?
From what I understand, this is what would happen if serial 2 were "P updates X":
| Serial | Event       | Memory (WT) | Cache (WT) | Memory (WB) | Cache (WB) |
| ------ | ----------- | ----------- | ---------- | ----------- | ---------- |
| 1      |             | X           |            | X           |            |
| 2      | P updates X | X'          | X'         | X           | X'         |
But I'm not really sure whether it's correct. I need clarification on this.
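To make the scenario concrete, here is a tiny single-line cache model, assuming a write-allocate policy on the write miss (the tables only make sense under that assumption); the struct and function names are made up purely for illustration, not taken from any real simulator.

```c
#include <stdbool.h>
#include <stdio.h>

/* Toy model: one memory word, one cache line. Illustrates the
 * difference on a write, assuming write-allocate on a miss. */
struct system {
	int memory;
	int cache;
	bool cached;   /* line present? */
	bool dirty;    /* write-back only: cache newer than memory */
};

static void write_through(struct system *s, int value)
{
	s->cache = value;     /* update (or allocate) the line */
	s->cached = true;
	s->memory = value;    /* every write also goes to memory */
}

static void write_back(struct system *s, int value)
{
	s->cache = value;     /* only the cache is updated... */
	s->cached = true;
	s->dirty = true;      /* ...memory is updated later, on eviction */
}

int main(void)
{
	struct system wt = { .memory = 1 }, wb = { .memory = 1 };

	write_through(&wt, 2);   /* "P updates X" with no prior read */
	write_back(&wb, 2);

	printf("write-through: memory=%d cache=%d\n", wt.memory, wt.cache);
	printf("write-back:    memory=%d cache=%d dirty=%d\n",
	       wb.memory, wb.cache, wb.dirty);
	return 0;
}
```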

Is there any calibration tool between two languages' performance?

I'm measuring the performance of programs A and B. A is written in Golang, B is written in Python. The important point here is that I'm interested in how the performance value increases over time, not in the absolute performance values of the two programs.
For example,
+------+-----+-----+
| time | A | B |
+------+-----+-----+
| 1 | 3 | 500 |
+------+-----+-----+
| 2 | 5 | 800 |
+------+-----+-----+
| 3 | 9 | 1300|
+------+-----+-----+
| 4 | 13 | 1800|
+------+-----+-----+
The values in columns A and B (A: 3, 5, 9, 13 / B: 500, 800, 1300, 1800) are the execution times of the programs. This execution time can be seen as performance, and the difference between the absolute performance values of A and B is very large, so comparing the slopes of the two performance graphs directly would be meaningless (Python is very slow compared to Golang).
I want to compare the performance of program A written in Golang with program B written in Python, and I'm looking for a calibration tool or formula, based on benchmarks, that estimates the execution time program A would have if it were written in Python.
Is there any way to solve this problem?
If you are interested in the relative change, you should normalize the data for each programming language. In other words, divide the Golang values by 3 and the Python values by 500, as in the table and the sketch below.
+------+------+------+
| time | A    | B    |
+------+------+------+
| 1    | 1    | 1    |
+------+------+------+
| 2    | 1.66 | 1.6  |
+------+------+------+
| 3    | 3    | 2.6  |
+------+------+------+
| 4    | 4.33 | 3.6  |
+------+------+------+
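A minimal sketch of that normalization, using the values from the tables above; the function name normalize is just illustrative.

```c
#include <stdio.h>

/* Normalize each series by its first sample so that both start at 1.0
 * and only the relative growth remains. Values copied from the tables. */
static void normalize(const double *raw, double *out, int n)
{
	for (int i = 0; i < n; i++)
		out[i] = raw[i] / raw[0];
}

int main(void)
{
	const double a[] = { 3, 5, 9, 13 };          /* Golang program */
	const double b[] = { 500, 800, 1300, 1800 }; /* Python program */
	double na[4], nb[4];

	normalize(a, na, 4);
	normalize(b, nb, 4);

	for (int i = 0; i < 4; i++)
		printf("t=%d  A=%.2f  B=%.2f\n", i + 1, na[i], nb[i]);
	return 0;
}
```

Rounded to two decimals this prints 1.67 for 5/3, where the table above truncates to 1.66.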

Can the LUT cascade be used simultaneously with the carry chain in iCE40 FPGAs by any tool?

I am trying to construct the following:
CO
|
/carry\ ____
s2 ---(((---|I0 |------------ O
+------+((---|I1 |
| +-(+---|I2 |
| | +----|I3__|
| +-(-----------+
| | |
| /carry\ ____ |B ___ BQ
D -----+------(((---|I0 |-+-----| |-+
s0 --+((---|I1 | > | |
s1 ---(+---|I2 | s3 -|S | |
| +-|I3__| s4 -|CE_| |
| +--------------------+
|
/carry\
|||
I write in Verilog and instantiate the SB_LUT4, SB_CARRY, and SB_DFFESS primitives. To try to get a LUT cascade, I edit a .pcf constraints file (set_cascading...). However, synthesis (Lattice IceCube 2017.01.27914) disregards the constraint:
W2401: Ignoring cascade constraint for LUT instance 'filt.blk_0__a.cmbA.l.l', as it is packed with DFF/CARRY in a LogicCell
In the admirable Project IceStorm I can't see any reason why a combination of cascaded LUTs and the carry chain can't be used.
I am aware that a (slightly) newer IceCube2 is available, and I know of the Yosys/arachne-pnr/icepack/iceprog toolchain. But before changing toolchains, it seems prudent to ask whether anyone has solved this problem already, or whether it is indeed not possible to combine the carry chain and LUT cascades.
Update: a quick install of Yosys/arachne-pnr/icetools synthesizes my design without warnings, but visualisation in ice40_viewer (and the log output) indicates that the chained LUT is not used.

Feature Tracking using the Lucas Kanade algorithm

I am implementing the Lucas Kanade Feature Tracker in C++ (see the Lucas Kanade Feature Tracker paper, page 6).
One thing is unclear to me when implementing equation 23 from the paper. I think the calculation of the matrix G should happen inside the K loop, not outside it. When patch B lies at the border of frame j, it is not useful to use the full spatial gradient matrix G that was calculated before the K loop (as the paper does). For frame j, G should be calculated over the visible portion of patch B only, as sketched after the figure below.
Patch A Patch B
| |
| |
-----|--- -|-------
| |---| | | | |
| | | | |--| |
| |---| | | | |
| | |--| |
--------- ---------
Frame i Frame j
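Here is a rough sketch of the border-clipped G computation the question proposes, not the paper's reference implementation; the function name and the way the derivative images ix/iy are passed in are assumptions for illustration.

```c
/* 2x2 spatial gradient matrix
 *   G = sum over the window of [ Ix*Ix  Ix*Iy ]
 *                              [ Ix*Iy  Iy*Iy ]
 * accumulated only over the part of the window that lies inside the
 * image, as the question proposes for a patch near the border.
 * ix/iy are precomputed image derivatives, width/height the image size,
 * (cx, cy) the window centre and win the half-window size. */
static void gradient_matrix_clipped(const float *ix, const float *iy,
                                    int width, int height,
                                    int cx, int cy, int win,
                                    double g[2][2])
{
	g[0][0] = g[0][1] = g[1][0] = g[1][1] = 0.0;

	for (int y = cy - win; y <= cy + win; y++) {
		if (y < 0 || y >= height)
			continue;               /* clip rows outside the frame */
		for (int x = cx - win; x <= cx + win; x++) {
			if (x < 0 || x >= width)
				continue;       /* clip columns outside the frame */
			double dx = ix[y * width + x];
			double dy = iy[y * width + x];
			g[0][0] += dx * dx;
			g[0][1] += dx * dy;
			g[1][0] += dx * dy;
			g[1][1] += dy * dy;
		}
	}
}
```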

The "Waiting lists problem"

A number of students want to get into sections for a class. Some are already signed up for one section but want to change sections, so they all get on the wait lists. A student can get into a new section only if someone drops from that section. No student is willing to drop a section they are already in unless they can be sure to get into a section they are waiting for. The wait list for each section is first come, first served.
Get as many students into their desired sections as you can.
The stated problem can quickly devolve into a gridlock scenario. My question is: are there known solutions to this problem?
One trivial solution would be to take each section in turn, force the first student from the waiting list into the section, and then check whether someone ends up dropping out when things are resolved (O(n) or more in the number of sections). This would work for some cases, but I think there might be better options involving forcing more than one student into a section (O(n) or more in the student count) and/or operating on more than one section at a time (O(bad) :-)
Well, this just comes down to finding cycles in the directed graph of classes, right? Each edge is a student who wants to go from one node to another, and any time you find a cycle, you delete it, because those students can resolve their needs among each other. You're finished when you're out of cycles.
OK, let's try. We have 8 students (1..8) and 4 sections. Each student is in a section and each section has room for 2 students. Most students want to switch, but not all.
In the table below we see each student, their current section, their requested section, and their position in the queue (if any).
+------+-----+-----+-----+
| stud | now | req | que |
+------+-----+-----+-----+
| 1 | A | D | 2 |
| 2 | A | D | 1 |
| 3 | B | B | - |
| 4 | B | A | 2 |
| 5 | C | A | 1 |
| 6 | C | C | - |
| 7 | D | C | 1 |
| 8 | D | B | 1 |
+------+-----+-----+-----+
We can present this information in a graph:
+-----+ +-----+ +-----+
| C |---[5]--->1| A |2<---[4]---| B |
+-----+ +-----+ +-----+
1 | | 1
^ | | ^
| [1] [2] |
| | | |
[7] | | [8]
| V V |
| 2 1 |
| +-----+ |
\--------------| D |--------------/
+-----+
We try to find a section with a vacancy, but there is none. Because all sections are full, we need a dirty trick: take a random section with a non-empty queue, in this case section A, and assume it has an extra position. This means student 5 can enter section A, leaving a vacancy at section C, which is taken by student 7. This leaves a vacancy at section D, which is taken by student 2. We now have a vacancy at section A again. But we assumed that section A had an extra position, so we can drop that assumption and are left with a simpler graph.
If the path had never returned to section A, we would undo the moves, mark A as an invalid starting point, and retry with another section.
If there are no valid sections left, we are finished.
Right now we have the following situation:
+-----+ +-----+ +-----+
| C | | A |1<---[4]---| B |
+-----+ +-----+ +-----+
| 1
| ^
[1] |
| |
| [8]
V |
1 |
+-----+ |
| D |--------------/
+-----+
We repeat the trick with another random section, and this solves the graph.
If you start with several students who are currently not assigned, you add an extra dummy section as their starting point. Of course, this means there must be vacancies in some sections or the problem is not solvable.
Note that, due to the order in the queues, it is possible that there is no solution.
This is actually a graph problem. You can think of each of these waiting-list dependencies as an edge in a directed graph. If this graph has a cycle, then you have one of the situations you described. Once you have identified a cycle, you can choose any point to "break" the cycle by "over-filling" one of the classes, and you will know that things will settle correctly because there was a cycle in the graph. A sketch of this cycle-rotation idea follows.
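A minimal sketch of that cycle rotation, using the data from the worked example above. It deliberately ignores queue order and vacancies, and the greedy walk (one outgoing request per section) is a simplification rather than a complete solver; all names are illustrative.

```c
#include <stdio.h>

#define NSECTIONS 4
#define NSTUDENTS 8

/* One pending request per student: move from section cur to section req.
 * Data taken from the worked example above (students 3 and 6 have no
 * request); sections A..D are numbered 0..3. */
struct request { int student, cur, req; };

static struct request reqs[] = {
	{1, 0, 3}, {2, 0, 3}, {4, 1, 0}, {5, 2, 0}, {7, 3, 2}, {8, 3, 1},
};
static const int nreqs = sizeof(reqs) / sizeof(reqs[0]);
static int done[NSTUDENTS + 1];        /* 1 if this student has been moved */

/* Index of one unresolved request leaving section sec, or -1 if none. */
static int request_from(int sec)
{
	for (int i = 0; i < nreqs; i++)
		if (!done[reqs[i].student] && reqs[i].cur == sec)
			return i;
	return -1;
}

/* Walk request edges from start until a section repeats; every student
 * on that cycle can swap simultaneously. Returns 1 if a cycle was
 * resolved, 0 if the walk hit a dead end. */
static int resolve_cycle(int start)
{
	int path[NSECTIONS], edges[NSECTIONS], len = 0, sec = start;

	for (;;) {
		/* Did we close a cycle? If so, rotate the students on it. */
		for (int i = 0; i < len; i++) {
			if (path[i] == sec) {
				for (int j = i; j < len; j++) {
					struct request *r = &reqs[edges[j]];
					done[r->student] = 1;
					printf("student %d: %c -> %c\n",
					       r->student, 'A' + r->cur,
					       'A' + r->req);
				}
				return 1;
			}
		}
		int e = request_from(sec);
		if (e < 0)
			return 0;      /* dead end: no one wants to leave */
		path[len] = sec;
		edges[len] = e;
		len++;
		sec = reqs[e].req;
	}
}

int main(void)
{
	int progress = 1;
	while (progress) {             /* keep going until no cycle is left */
		progress = 0;
		for (int s = 0; s < NSECTIONS; s++)
			progress |= resolve_cycle(s);
	}
	return 0;
}
```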
