What does M+L in monotonicity lemma mean - petri-net

While studying Petri nets I came across the Monotonicity Lemma, which says:
Let M and L be two markings of a net.
If M →σ M' for a finite sequence σ, then (M+L) →σ (M'+L) for every marking L.
If M →σ for an infinite sequence σ, then (M+L) →σ for every marking L.
(The σ is written above each arrow.)
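In LaTeX notation, the finite case as I read it is

M \xrightarrow{\sigma} M' \implies (M + L) \xrightarrow{\sigma} (M' + L)

and analogously for the infinite case, where no target marking is written after the arrow.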
Does anyone understand what M+L means in terms of markings? Should I add those markings together, or is it a path where I add L to M?

Related

Dynamic Programming / Subproblems + Transition

I am kind of stuck. I decided to try this problem: https://icpcarchive.ecs.baylor.edu/external/71/7113.pdf
To prevent it 404'ing, here is the basic assignment:
• a hopper only visits arrays with integer entries,
• a hopper always explores a sequence of array elements using the following rules:
– a hopper cannot jump too far, that is, the next element is always at most D indices away (how far a hopper can jump depends on the length of its legs),
– a hopper doesn't like big changes in values; the next element differs from the current element by at most M, more precisely the absolute value of the difference is at most M (how big a change in values a hopper can handle depends on the length of its arms), and
– a hopper never visits the same element twice.
• a hopper will explore the array with the longest exploration sequence.
n is the length of the array (as described above, D is the maximum length of a jump the hopper can make, and M is the maximum difference in values a hopper can handle). The next line contains n integers: the entries of the array. We have 1 ≤ D ≤ 7, 1 ≤ M ≤ 10,000, 1 ≤ n ≤ 10,000, and the integers in the array are between -1,000,000 and 1,000,000.
EDIT: I am doing this out of pure curiosity; this is not an assignment I need to do for any reason other than challenging myself.
Basically it's building a sparse graph out of an array.
The graph is undirected, and due to the symmetry of the -D ... D jumps it is also either a complete graph (all edges are included) or a set of mutually disjoint graph components.
As a first step I tried a simple exhaustive DFS search of the graph, which works but has the infamous O(n!) runtime. The first iteration of this was written in F#, which was horribly slow; the second in C, which still plateaus pretty fast.
I know the longest path problem is NP-hard, but I thought I would give it a try with dynamic programming.
The next approach was to simply use the common DP solution (bitmasked path) for DFS on the graph, but at this point I have already traversed the array and built the entire graph, which may contain up to 1000 nodes, so it's not feasible.
My next approach was to build a DFS tree (a tree of all the paths), which is a bit faster but needs to store the entire path in memory for each iteration, which isn't what I really want. I am thinking I can reduce it to sub-states while traversing the array.
Next I tried to memoize all paths I've already walked by simply using a bitmask and a simple memoization function, as seen here:
let xf = memoizedEdges (fun r i' p mask ->
    // mark the current index in the path bitmask
    let mask' = addBit i' mask
    // candidate offsets -d .. -1 and 1 .. d (list concatenation in F# is @, not #)
    let nbs =
        [ -d .. -1 ] @ [ 1 .. d ]
        |> Seq.map (fun f ->
            match f with
            | x when (i' + x) < 0 -> None                 // before the start of the array
            | x when (i' + x) >= a.Length -> None         // past the end of the array
            | x when (diff a.[i' + x] a.[i']) > m -> None // value difference exceeds M
            | x when (i' + x) = i -> None                 // skip the element at index i
            | x when isSet (i' + x) mask' -> None         // already visited on this path
            | x -> Some (i' + x))
    let ec =
        nbs
        |> Seq.choose id
        |> Seq.toList
        |> List.map (fun f -> r f i' mask')               // recurse through the memoizing wrapper
    max (bitcount mask) (ec |> mxOrZero))
So memoizedEdges works with three int parameters: the current index (i'), the previous index (p), and the path as a bitmask. On each recursive call the memoizedEdges function itself checks whether it has already seen i', p, and the mask, or p, i', and the mask with the i' and p bits flipped, which masks the path in the other direction (basically: have we already seen this path coming from the other side).
This works as I would expect, but the assignment states there can be up to 1000 indices, which makes the int32 mask too short.
So I've been thinking for days now that there must be a way to encode each of the -D ... D steps into a start and end vertex and calculate the path for each step in that window based on the previous steps.
I've come up with basically this:
0.) Create a container that holds the start and end vertex as key and the current path length as value.
1.) Check the neighbors of i.
2.) If I have seen this combination, either as (from -> to) or as (to -> from), then I do not add or increase.
3.) Check whether any other predecessors of this node exist and increase their paths by 1.
But this would lead to having all paths stored; I would basically end up with tuples, and then I am back at my graph with DFS in another form.
I am very thankful for any pointers (I just need some new ideas, I am really stuck right now) on how I could encode each subproblem from -D..D so that I can use just intermediate results for calculating the next step (if this is even possible).
Partial answer
This is a difficult problem. Indeed, on the competitive programming problem compendium Kattis it is (at the time of writing) among the top 5 most difficult problems.
Only you know if this sort of problem is possible for you to solve, but there is a fair chance no one on this site can help you completely, hence this partial answer.
Longest path
What we're asked to do here is solve the longest path problem for a particular graph. This problem is known to be NP-complete in general, even for undirected unweighted graphs such as ours. Because the graph can have 1000 vertices, an algorithm (sub-)exponential in n will not work, and we're presumably not being asked to prove that P=NP, so the only option left is to somehow exploit the structure of the graph.
The most promising avenue is through D. Since D is at most 7, the maximum degree of the graph is at most 14, and all edges are, in a sense, local.
Now, according to Wikipedia, the longest path problem can be solved in polynomial time on various classes of graphs, such as acyclic ones. Our graph is of course not acyclic, but unfortunately this is largely where my knowledge ends. I am not sufficiently familiar with graph theory to see whether the graph implied by the problem falls into any of the classes Wikipedia mentions.
Of particular note is that the longest path problem can be solved in polynomial time on graphs whose clique-width is bounded by a constant (bounded tree-width implies this). I am unable to confirm or refute that the bound on D gives our graph bounded clique-width, but perhaps you know more about this yourself, or you could try asking on the Math or CS Stack Exchange, as at this point we're pretty far from any actual programming.
Regardless, if you're able to confirm that the graph is clique-width-bounded, this paper may help you further.
I hope this answer is of some use despite not being entirely fulfilling, and good luck!
Citation for the paper in case of link decay
Fomin, F. V., Golovach, P. A., Lokshtanov, D., & Saurabh, S. (2009, January). Clique-width: on the price of generality. In Proceedings of the twentieth annual ACM-SIAM symposium on Discrete algorithms (pp. 825-834). Society for Industrial and Applied Mathematics.

How many paths of length n with the same start and end point can be found on a hexagonal grid?

Given this question, what about the special case when the start point and end point are the same?
Another change in my case is that we must move at every step. How many such paths can be found and what would be the most efficient approach? I guess this would be a random walk of some sort?
My thinking so far is: since we must always return to our starting point, thinking about n/2 might be easier. At every step, except at step n/2, we have 6 choices. At n/2 we have a different number of choices depending on whether n is even or odd. We also have a different number of choices depending on where we are (what previous choices we made). For example, if n is even and we went straight out, we only have one choice at n/2: going back. But if n is even and we didn't go straight out, we have more choices.
It is all the cases at this turning point that I have trouble getting straight.
Am I on the right track?
To be clear, I just want to count the paths. So I guess we are looking for some conditioned permutation?
This version of the combinatorial problem looks like it actually has a short formula as an answer.
Nevertheless, the general version, both this and the original question's, can be solved by dynamic programming in O(n^3) time and O(n^2) memory.
Consider a hexagonal grid which spans at least n steps in all directions from the target cell.
Introduce a coordinate system, so that every cell has coordinates of the form (x, y).
Let f (k, x, y) be the number of ways to arrive at cell (x, y) from the starting cell after making exactly k steps.
These can be computed either recursively or iteratively:
f (k, x, y) is just the sum of f (k-1, x', y') for the six neighboring cells (x', y').
The base case is f (0, xs, ys) = 1 for the starting cell (xs, ys), and f (0, x, y) = 0 for every other cell (x, y).
The answer for your particular problem is the value f (n, xs, ys).
The general structure of an iterative solution is as follows:
let f be an array [0..n][-n-1..n+1][-n-1..n+1] (all inclusive) of integers
f[0][*][*] = 0
f[0][xs][ys] = 1
for k = 1, 2, ..., n:
    for x = -n, ..., n:
        for y = -n, ..., n:
            f[k][x][y] = f[k-1][x-1][y] +
                         f[k-1][x][y-1] +
                         f[k-1][x+1][y] +
                         f[k-1][x][y+1]
answer = f[n][xs][ys]
OK, I cheated here: the solution above is for a rectangular grid, where the cell (x, y) has four neighbors.
The six neighbors of a hexagon depend on how exactly we introduce a coordinate system.
I'd prefer a coordinate system other than the one in the original question.
This link gives an overview of the possibilities, and here is a short summary of that page on StackExchange, to protect against link rot.
My personal preference would be axial coordinates.
Note that, if we allow standing still instead of moving to one of the neighbors, that just adds one more term, f[k-1][x][y], to the formula.
The same goes for using triangular, rectangular, or hexagonal grid, for using 4 or 8 or some other subset of neighbors in a grid, and so on.
If you want to arrive at some other target cell (xt, yt), that is also covered: the answer is the value f[n][xt][yt].
Similarly, if you have multiple start or target cells, and you can start and finish at any of them, just alter the base case or sum the answers in the cells.
The general layout of the solution remains the same.
This obviously works in n * (2n+1) * (2n+1) * number-of-neighbors, which is O(n^3) for any constant number of neighbors (4 or 6 or 8...) a cell may have in our particular problem.
Finally, note that, at step k of the main loop, we need only two layers of the array f: f[k-1] is the source layer, and f[k] is the target layer.
So, instead of storing all layers for the whole time, we can store just two layers, as we don't need more: one for odd k and one for even k.
Using only two layers is as simple as changing all f[k] and f[k-1] to f[k%2] and f[(k-1)%2], respectively.
This lowers the memory requirement from O(n^3) down to O(n^2), as advertised in the beginning.
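To make the hexagonal case concrete, here is a minimal Python sketch of the same DP in axial coordinates, using the two-layer trick just described (the six neighbor offsets are the standard axial directions; all names are mine, not taken from the original question):

def count_closed_walks(n):
    # number of n-step walks on a hexagonal grid that return to the start,
    # moving to one of the six neighbors at every step (axial coordinates)
    neighbors = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]
    size = 2 * n + 3                  # enough room: the walk never leaves radius n
    off = n + 1                       # shift coordinates into array range
    prev = [[0] * size for _ in range(size)]
    prev[off][off] = 1                # base case: one way to be at the start after 0 steps
    for _ in range(n):
        cur = [[0] * size for _ in range(size)]
        for x in range(1, size - 1):
            for y in range(1, size - 1):
                if prev[x][y]:
                    for dx, dy in neighbors:
                        cur[x + dx][y + dy] += prev[x][y]
        prev = cur                    # keep only two layers, as described above
    return prev[off][off]             # walks that end where they started

print(count_closed_walks(2))  # 6: go to any neighbor and come straight back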
For a more mathematical solution, here are some steps that would perhaps lead to one.
First, consider the following problem: what is the number of ways to go from (xs, ys) to (xt, yt) in n steps, each step moving one square north, west, south, or east?
To arrive from x = xs to x = xt, we need H = |xt - xs| steps in the right direction (without loss of generality, let it be east).
Similarly, we need V = |yt - ys| steps in another right direction to get to the desired y coordinate (let it be south).
We are left with k = n - H - V "free" steps, which can be split arbitrarily into pairs of north-south steps and pairs of east-west steps.
Obviously, if k is odd or negative, the answer is zero.
So, for each possible split k = 2h + 2v of "free" steps into horizontal and vertical steps, what we have to do is construct a path of H+h steps east, h steps west, V+v steps south, and v steps north. These steps can be done in any order.
The number of such sequences is a multinomial coefficient, and is equal to n! / (H+h)! / h! / (V+v)! / v!.
To finally get the answer, just sum these over all possible h and v such that k = 2h + 2v.
This solution calculates the answer in O(n) if we precalculate the factorials, also in O(n), and consider all arithmetic operations to take O(1) time.
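Here is a direct transcription of this formula in Python (a sketch; I call math.factorial per term rather than precalculating factorials as suggested above, and the function name is my own):

from math import factorial

def count_paths_rect(n, xs, ys, xt, yt):
    # number of n-step north/south/east/west walks from (xs, ys) to (xt, yt),
    # following the multinomial formula above
    H, V = abs(xt - xs), abs(yt - ys)
    k = n - H - V                      # "free" steps left over
    if k < 0 or k % 2:                 # impossible if negative or odd
        return 0
    total = 0
    for h in range(k // 2 + 1):        # split k = 2h + 2v into horizontal and vertical pairs
        v = k // 2 - h
        total += factorial(n) // (factorial(H + h) * factorial(h)
                                  * factorial(V + v) * factorial(v))
    return total

print(count_paths_rect(2, 0, 0, 0, 0))  # 4: step out in any of the four directions and back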
For a hexagonal grid, a complicating feature is that there is no such clear separation into horizontal and vertical steps.
Still, given the starting cell and the number of steps in each of the six directions, we can find the final cell, regardless of the order of these steps.
So, a solution can go as follows:
Enumerate all possible partitions of n into six summands a1, ..., a6.
For each such partition, find the final cell.
For each partition where the final cell is the cell we want, add multinomial coefficient n! / a1! / ... / a6! to the answer.
As stated, this takes O(n^6) time and O(1) memory.
By carefully studying the relations between different directions on a hexagonal grid, perhaps we can actually consider only the partitions which arrive at the target cell, and completely ignore all other partitions.
If so, this solution can be optimized into at least some O(n^3) or O(n^2) time, maybe further with decent algebraic skills.
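A brute-force transcription of this O(n^6) enumeration (before the optimizations just mentioned) might look as follows in Python; the six direction vectors are the standard axial-coordinate directions, and the composition generator is plain stars-and-bars:

from math import factorial
from itertools import combinations

# the six step directions on a hexagonal grid, in axial coordinates
DIRS = [(1, 0), (1, -1), (0, -1), (-1, 0), (-1, 1), (0, 1)]

def compositions(n, parts):
    # all ways to write n as an ordered sum of `parts` non-negative integers
    for cuts in combinations(range(n + parts - 1), parts - 1):
        prev, out = -1, []
        for c in cuts:
            out.append(c - prev - 1)
            prev = c
        out.append(n + parts - 2 - prev)
        yield out

def count_hex_paths(n, dx=0, dy=0):
    # number of n-step hexagonal walks whose net displacement is (dx, dy)
    total = 0
    for a in compositions(n, 6):
        x = sum(ai * d[0] for ai, d in zip(a, DIRS))
        y = sum(ai * d[1] for ai, d in zip(a, DIRS))
        if (x, y) == (dx, dy):
            coeff = factorial(n)            # multinomial coefficient n! / (a1! ... a6!)
            for ai in a:
                coeff //= factorial(ai)
            total += coeff
    return total

print(count_hex_paths(2))  # 6, matching the DP sketch above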

In search of an algorithm for sorting collection in nodes to satisfy a layout constraint

Firstly, I apologize for the poor title; I cannot think of a good name for this algorithm.
I have an ordered list of stages. Each stage has a cast of characters, unordered. Characters can occur in multiple stages.
A crossing occurs when two consecutive stages cannot have their casts concatenated without leaving a character duplicated in the concatenation, where overlap is allowed if it unifies the same character appearing in both casts. Or, informally, a crossing is when a character would need to be at two different spots at once in a line-up of the combined casts. In code:
uncrossed = [D, F], [N, V, S]
overlap = [D, F, V], [V, N, S]
crossed = [D, V, F], [N, V, S]
In the first example, V isn't with D and F, so there aren't any crossings. In the second example, V is with D and F and then with N and S, but this isn't a problem because the ordering permits (with overlap) a crossing-less concatenation. In the third example, though, the ordering forces a crossing.
For my purposes, crossings can also occur between non-consecutive stages, as if characters did not actually stray from their previous order in the cast while they are not "on-stage."
I would like to order each stage's cast such that there are as few crossings as possible, understanding that it is definitely possible to have situations where crossings are inevitable. An example series which requires crossings:
required = [A, B], [B, C], [A, C], [A, B]
This all sounds very abstract and silly, so I'll provide a concrete example of a human solving this problem for a purpose similar to mine: http://xkcd.com/657/ In this case, the constraint is deliberately ignored for aesthetic purposes, but it's still possible to get a visual idea of what I'm talking about.
I already have some crude ideas for how to solve this, but nothing affordable, and I'm wondering if this is isomorphic to some problem already covered in the literature. It sounds vaguely topological as well.
Since people asked, this algorithm appears to be key to automatically generating pretty timelines for storyboards of characters in stories, and that's what I'm intending to use it for.
This isn't an answer, but I think it is maybe a more precise, or even more correct, formulation of what you are looking for:
There is a set, call it C, of characters, and there is a finite ordered sequence S_1, ... S_n of scenes, where a scene is a set consisting of some of the characters. Characters may (and typically do) appear in multiple scenes.
I'd like to phrase your desired outcome in a slightly different way from how you phrased it, because I think it makes it clearer how one may search for a solution (or at least, it makes it totally clear how to brute-force a solution):
The output of our algorithm is a sequence of arrangements of the characters. An arrangement of the characters is just a permutation of the ordered tuple [c_1, ... c_m], where the c_i are the characters, and there are m of them in total, so C = {c_1, ..., c_m}. We want n arrangements in total, call them A_1, ..., A_n, one per scene.
What arrangement A_n corresponds to is the top-to-bottom ordering of the characters in your storyboard, during scene n, in the following sense: draw a vertical line through your storyboard passing through scene n. This line should hit the characters' life-lines in the order specified by A_n.
We require the following property of our arrangements: given scene S_n, the arrangement A_n needs to put the characters contained in S_n into a contiguous chunk, in the following sense: suppose that S_n = {c_2, c_3, c_5}. Then A_n may yield [c_1, c_4, c_2, c_3, c_5], but may not yield [c_2, c_1, c_3, c_4, c_5]. This is because you don't want an errant character "cutting through" the scene in the storyboard.
We hope to minimize the number of "crossings." Here, crossings are easy to define: the number of crossings between A_i and A_(i+1) is exactly equal to the number of transpositions of adjacent characters required to go from permutation A_i to permutation A_(i+1).
I haven't given you an answer, but I think that given the above setup, a brute-force approach isn't too hard to code up, and will give you an answer overnight without a problem, if the storyboard isn't too big.
I think that if you posted this problem on MathOverflow, you could possibly get someone interested in it. Or maybe it has been solved, who knows?
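To make the brute force concrete, here is a minimal Python sketch of the setup above (all names are mine; the crossing count between two arrangements is the number of character pairs whose relative order differs, which equals the minimal number of adjacent transpositions, also known as the Kendall tau distance):

from itertools import permutations

def crossings(a, b):
    # minimal number of adjacent transpositions turning arrangement a into b,
    # i.e. the number of character pairs whose relative order differs
    pos = {c: i for i, c in enumerate(b)}
    seq = [pos[c] for c in a]
    return sum(1 for i in range(len(seq))
                 for j in range(i + 1, len(seq)) if seq[i] > seq[j])

def arrangements(chars, scene):
    # all permutations of chars in which the scene's cast forms a contiguous block
    cast = set(scene)
    for p in permutations(chars):
        idx = [i for i, c in enumerate(p) if c in cast]
        if idx[-1] - idx[0] == len(idx) - 1:
            yield p

def min_total_crossings(chars, scenes):
    # brute force: pick one valid arrangement per scene, minimizing total crossings
    best = {a: 0 for a in arrangements(chars, scenes[0])}
    for scene in scenes[1:]:
        best = {a: min(cost + crossings(prev, a) for prev, cost in best.items())
                for a in arrangements(chars, scene)}
    return min(best.values())

# the "required" example from the question: at least one crossing is unavoidable
print(min_total_crossings("ABC", [["A", "B"], ["B", "C"], ["A", "C"], ["A", "B"]]))  # 1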

Prolog: where to begin solving Minesweeper-like puzzle?

I need to write something like Minesweeper in Prolog. I am able to do that in a "normal" language, but when I try to start coding in Prolog I totally don't know how to start.
I need some tips.
Input specification:
Board size: m × n (m, n ∈ {1,...,16}), and a list of triples (i, j, k), where i ∈ {1,...,m}, j ∈ {1,...,n}, k ∈ {1,...,8}, describing the fields with numbers.
For example:
5
5
[(1,1,1), (2,3,3), (2,5,2), (3,2,2), (3,4,4), (4,1,1), (4,3,1), (5,5,2)].
Output: list of digits and the atoms * (for treasure) and (for blank fields). It is a representation of puzzle solution.
Rules of this puzzle:
In 20 fields of a board there are hidden treasures. A digit in a field represents how many neighbour-fields have a treasure. There are no treasures in fields with a digit. Mark all fields with a treasure.
You need to guess how many treasures are hidden in diagonals.
I would be grateful for any tips. I don't want a full solution, I want to write it on my own, but without clues I am not able to do that.
A matrix is usually handled as a list of lists, which you can build using length/2 and findall/3: a matrix of free variables, where you will place values while guessing.
% build NRows rows, each a list of NCols fresh variables
build_matrix(NRows, NCols, Mat) :-
    findall(Row, (between(1, NRows, _), length(Row, NCols)), Mat).
Accessing elements via coordinates can be done using nth1/3 (see here for another answer where you can find some detail: see cell/3).
Then you place all your triple constraints: there is a finite number of ways of consuming the 'hidden treasure' counters, so let Prolog search all of them, enumerating adjacent cells.
Process the list of triples, placing each counter in compatible cells, with a recursive predicate. When the list ends, you have a guess.
To keep your code simpler, don't worry about indexes out of matrix bounds; remember that failures are 'normal' when searching...

Selecting k sub-posets

I ran into the following algorithmic problem while experimenting with classification algorithms. Elements are classified into a polyhierarchy, which I understand to be a poset with a single root. I have to solve the following problem, which looks a lot like the set cover problem.
I uploaded my Latex-ed problem description here.
Devising an approximation algorithm that satisfies 1 & 2 is quite easy: just start at the vertices of G and "walk up", or start at the root and "walk down". Say you start at the root; iteratively expand vertices and then remove unnecessary vertices until you have at least k sub-lattices. The approximation bound depends on the number of children of a vertex, which is OK for my application.
Does anyone know if this problem has a proper name, or maybe the tree version of the problem? I would be interested to find out whether this problem is NP-hard; maybe someone has ideas for a good NP-hard problem to reduce from, or has a polynomial algorithm to solve the problem. If you have both, collect your million dollar prize. ;)
The DAG version is hard by (drum roll) a reduction from set cover. Set k = 2 and do the obvious: condition (2) prevents us from taking the root. (Note that (3) doesn't actually imply (2) because of the lower bound k.)
The tree version is a special case of the series-parallel poset version, which can be solved exactly in polynomial time. Here's a recursive formula that gives a polynomial p(x) where the coefficient of x^n is the number of covers of cardinality n.
Single vertex to be covered: p(x) = x.
Other vertex: p(x) = 1 + x.
Parallel composition, where q and r are the polynomials for the two posets: p(x) = q(x) r(x).
Series composition, where q is the polynomial for the top poset and r, for the bottom: If the top poset contains no vertices to be covered, then p(x) = (q(x) - 1) + r(x); otherwise, p(x) = q(x).
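Here is a sketch of how this recursion could be evaluated, in Python, with a polynomial represented as a list of coefficients (I am only transcribing the rules above; the series-parallel decomposition itself, and which vertices are "to be covered", come from the problem statement, which I only know through this answer):

def single():                  # a single vertex to be covered: p(x) = x
    return [0, 1]

def other():                   # any other vertex: p(x) = 1 + x
    return [1, 1]

def parallel(q, r):            # parallel composition: p(x) = q(x) r(x)
    out = [0] * (len(q) + len(r) - 1)
    for i, qi in enumerate(q):
        for j, rj in enumerate(r):
            out[i + j] += qi * rj
    return out

def series(q, r, top_has_covered_vertex):
    # series composition, q on top of r
    if top_has_covered_vertex:
        return list(q)                         # p(x) = q(x)
    out = [0] * max(len(q), len(r))            # p(x) = (q(x) - 1) + r(x)
    for i, qi in enumerate(q):
        out[i] += qi
    out[0] -= 1
    for i, ri in enumerate(r):
        out[i] += ri
    return out

print(parallel(single(), single()))  # [0, 0, 1]: one cover of cardinality 2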

Resources