Initialize Func from an array - halide

Is it possible to initialize a Func from an array in a generator class? The code should look like this.
class SobelConv: public Halide::Generator<SobelConv> {
    const signed char kernelx[3][3] = {
        {-1, 0, 1},
        {-2, 0, 2},
        {-1, 0, 1}
    };

    void generate() {
        for (int y = 0; y < 3; y++)
            for (int x = 0; x < 3; x++)
                kernel_x(x, y) = kernelx[y][x];
        conv_x(x, y) += gray(x + win.x, y + win.y) * kernel_x(win.x + 1, win.y + 1);
    }

    Func kernel_x{"kernel_x"};
Currently, what I do is define an Input<Buffer<int8_t>> kernel_x. I do not want it to be an argument of the pipeline function; I would like kernel_x to be replaced with the respective constant values directly.

The following compiles and illustrates one way to do this:
#include "Halide.h"

class SobelConv: public Halide::Generator<SobelConv> {
    signed char weights[3][3] = {
        {-1, 0, 1},
        {-2, 0, 2},
        {-1, 0, 1}
    };

    Input<Buffer<int8_t>> gray{"gray", 2};
    Halide::Buffer<int8_t> kernel_x{&weights[0][0], 3, 3};
    Output<Buffer<int8_t>> conv_x{"conv_x", 2};

    Var x, y;
    RDom win{-1, 3, -1, 3};  // 2-D reduction domain; win.x and win.y each range over -1..1

    void generate() {
        conv_x(x, y) += gray(x + win.x, y + win.y) * kernel_x(win.x + 1, win.y + 1);
    }
};
The weights will be embedded in the generated code at compile time. We should have a way to provide the constant values for the weights in an initializer list as well, but I'm not finding it at the moment.


Getting wrong answer while solving matrix problem using recursion

The problem is:
Given an m x n matrix with random 0/1 values, start from position (0, 0) and move toward position (m-1, n-1) (the last cell); the only directions we can move are down and right.
Rules:
a cell containing 1 cannot be entered;
the only possible move is onto a cell containing 0.
Find the number of possible ways to reach position (m-1, n-1).
Example:
matrix((0, 0, 0), (0, 0, 0), (0, 0, 0))
answer: 6
Here is my logic:
public class Main {
    static int possibility = 0;
    static int r = 3;
    static int c = 3;

    public static void main(String[] args) {
        int array[][] = {{0, 0, 0}, {0, 0, 0}, {0, 0, 0}};
        // int array[][] = {{0, 1, 1}, {0, 0, 1}, {1, 0, 0}};
        matrixProblem(array, 0, 0);
        System.out.println("total possible solutions: ");
        System.out.println(possibility);
    }

    static void matrixProblem(int[][] array, int i, int j) {
        if (i == r - 1 && j == c - 1) {
            possibility++;
            return;
        }
        if (i + 1 < r) {
            if (array[++i][j] == 0) {
                matrixProblem(array, i, j);
            }
        }
        if (j + 1 < c) {
            if (array[i][++j] == 0) {
                matrixProblem(array, i, j);
            }
        }
    }
}
Based on my logic it gives the wrong answer.
Your logic is almost right, but the problem is that in the recursive call you are passing the incremented value: ++i mutates i itself, which also corrupts the index used in the later array[i][++j] check. Just pass i + 1 instead, and the same goes for j.
Edit 1:
if (i + 1 < r) {
    if (array[i + 1][j] == 0) {
        matrixProblem(array, i + 1, j);
    }
}
if (j + 1 < c) {
    if (array[i][j + 1] == 0) {
        matrixProblem(array, i, j + 1);
    }
}
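For reference, here is the corrected recursion sketched in Python (a rough port for illustration, not the original Java):

```python
def count_paths(grid):
    """Count down/right paths from (0, 0) to (m-1, n-1) over cells equal to 0."""
    m, n = len(grid), len(grid[0])
    total = 0

    def walk(i, j):
        nonlocal total
        if i == m - 1 and j == n - 1:
            total += 1
            return
        # Pass i + 1 and j + 1 instead of mutating i and j.
        if i + 1 < m and grid[i + 1][j] == 0:
            walk(i + 1, j)
        if j + 1 < n and grid[i][j + 1] == 0:
            walk(i, j + 1)

    walk(0, 0)
    return total

print(count_paths([[0, 0, 0], [0, 0, 0], [0, 0, 0]]))  # 6
```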

Algorithms: distribute elements far away from each other

I have an array of sorted integers. Given an integer N, I need to place the N largest elements further away from each other, so that they have maximum space between each other. The remaining elements should be placed between these big items. For example, an array of 10 with N=3 would result in [0, 5, 8, 2, 6, 9, 3, 7, 10, 4].
public static void main(String[] args) {
    int[] start = {10, 9, 8, 7, 6, 5, 4, 3, 2, 1};
    int[] end = new int[10];
    int N = 4;
    int step = Math.round(start.length / N);
    int y = 0;
    int count = 0;
    for (int i = 0; i < step; i++) {
        for (int j = i; j < start.length; j = j + step) {
            //System.out.println(j + " " + i);
            if (count < start.length && start[count] != 0) {
                end[j] = start[count];
                count++;
            }
        }
    }
    System.out.println(end.toString());
}
You have an array of K elements and N maximum numbers to distribute. Then:
Step := K/N (dropping the remainder)
Take any one of the N maximum numbers and insert it at position Step/2.
Take the other maximum numbers and insert each one at distance Step after the previously inserted maximum.
Given [1,2,3,4,5,6,7,8,9,10], we have K = 10 and N = 3, so Step = 3. The first maximum is placed at position 3/2:
[1,10,2,3,4,5,6,7,8,9]
Then the other two are placed at distance 3 from each other:
[1,10,2,3,9,4,5,8,6,7]
The code:
#include <iostream>
#include <vector>

std::vector<int> Distribute(std::vector<int> aSource, int aNumber)
{
    auto step = aSource.size() / aNumber; // Note integer dividing.
    for (int i = 0; i < aNumber; ++i)
    {
        auto place = aSource.end() - i * step - step / 2;
        aSource.insert(place, aSource.front());
        aSource.erase(aSource.begin());
    }
    return aSource;
}

int main()
{
    std::vector<int> vec{10, 9, 8, 7, 6, 5, 4, 3, 2, 1, 0, -1, -2, -3, -4, -5, -6, -7, -8, -9, -10};
    auto res = Distribute(vec, 4);
    for (auto e : res)
    {
        std::cout << e << ", ";
    }
    std::cout << std::endl;
}
Output:
6, 5, 4, 7, 3, 2, 1, 0, 8, -1, -2, -3, -4, 9, -5, -6, -7, -8, 10, -9, -10,
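For comparison, a direct Python port of the same idea (integer step, insertion at step/2 from the end, then dropping the moved element from the front) produces the same output:

```python
def distribute(source, n):
    """Spread the n leading (largest) elements of source roughly step apart."""
    src = list(source)
    step = len(src) // n  # integer division, as in the C++ version
    for i in range(n):
        place = len(src) - i * step - step // 2
        src.insert(place, src[0])  # copy the current front into its slot
        del src[0]                 # then drop it from the front
    return src

print(distribute(range(10, -11, -1), 4))
# [6, 5, 4, 7, 3, 2, 1, 0, 8, -1, -2, -3, -4, 9, -5, -6, -7, -8, 10, -9, -10]
```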

Golang sudoku algorithm not working

I'm very new to Golang, and I'm trying to write a sudoku solver with a backtracking algorithm.
But when I run my program there are no errors, yet it only displays the incomplete grid, with empty cells. Here is my code:
package main

import "fmt"

var sudoku = [9][9]int{
    {9, 0, 0, 1, 0, 0, 0, 0, 5},
    {0, 0, 5, 0, 9, 0, 2, 0, 1},
    {8, 0, 0, 0, 4, 0, 0, 0, 0},
    {0, 0, 0, 0, 8, 0, 0, 0, 0},
    {0, 0, 0, 7, 0, 0, 0, 0, 0},
    {0, 0, 0, 0, 2, 6, 0, 0, 9},
    {2, 0, 0, 3, 0, 0, 0, 0, 6},
    {0, 0, 0, 2, 0, 0, 9, 0, 0},
    {0, 0, 1, 9, 0, 4, 5, 7, 0},
}

func main() {
    IsValid(sudoku, 0)
    Display(sudoku)
}

func Display(sudoku [9][9]int) {
    var x, y int
    for x = 0; x < 9; x++ {
        fmt.Println("")
        if x == 3 || x == 6 {
            fmt.Println(" ")
        }
        for y = 0; y < 9; y++ {
            if y == 3 || y == 6 {
                fmt.Print("|")
            }
            fmt.Print(sudoku[x][y])
        }
    }
}

func AbsentOnLine(k int, sudoku [9][9]int, x int) bool {
    var y int
    for y = 0; y < 9; y++ {
        if sudoku[x][y] == k {
            return false
        }
    }
    return true
}

func AbsentOnRow(k int, sudoku [9][9]int, y int) bool {
    var x int
    for x = 0; x < 9; x++ {
        if sudoku[x][y] == k {
            return false
        }
    }
    return true
}

func AbsentOnBloc(k int, sudoku [9][9]int, x int, y int) bool {
    var firstX, firstY int
    firstX = x - (x % 3)
    firstY = y - (y % 3)
    for x = firstX; x < firstX+3; x++ {
        for y = firstY; y < firstY+3; y++ {
            if sudoku[x][y] == k {
                return false
            }
        }
    }
    return true
}

func IsValid(sudoku [9][9]int, position int) bool {
    if position == 9*9 {
        return true
    }
    var x, y, k int
    x = position / 9
    y = position % 9
    if sudoku[x][y] != 0 {
        return IsValid(sudoku, position+1)
    }
    for k = 1; k <= 9; k++ {
        if AbsentOnLine(k, sudoku, x) && AbsentOnRow(k, sudoku, y) && AbsentOnBloc(k, sudoku, x, y) {
            sudoku[x][y] = k
            if IsValid(sudoku, position+1) {
                return true
            }
        }
    }
    sudoku[x][y] = 0
    return false
}
I'm getting this in the console :
900|100|005
005|090|201
800|040|000
000|080|000
000|700|000
000|026|009
200|300|006
000|200|900
001|904|570
I don't understand why it's not completing the grid, has anyone any ideas ?
I don't know Golang, but I have written a sudoku-solving algorithm using backtracking.
Your code only iterates the board once. You start with position=0, then your code iterates over the board; if a position has the value zero you try the values 1-9, and if that doesn't work you go to the next position. When position=81, your code stops.
You added new values to the board with your IsValid function, but you are not iterating over the new board again to see if those new values help your AbsentOn... functions return a different result than in the previous iteration. You have to iterate over your board again and again until you are sure that there are no 0-valued cells left.
That is the reason you have too many 0s on the board at the end of your program. Your program iterated only once, and it cannot solve your example sudoku on its first try. It has to add new values to the board and make the sudoku board easier with every iteration.
Another problem is that your code does not give feedback. For example, it assigns 1 to an empty cell. That seems okay at first, but it doesn't mean that the final value of that cell has to be 1. It may change, because in a later iteration you may realize that there is another cell that can only take the value 1, so now you have to go back to the initial cell and find a new value other than 1. Your code also fails to do that. That's why humans write possible candidate values next to a cell when they are not sure.
It looks like your problem is with the algorithm. You have to understand the backtracking algorithm. You can try it first in another language that you know well and then migrate it to Golang (I wrote mine in C++). Other than that, your Golang code is easy to read and I don't see any Golang-related problems.
Your IsValid function changes the contents of the sudoku. The problem is that, in your code as it stands, it actually changes only a copy of the sudoku. You need to pass it as a pointer if it should change the actual variable.
Here are the changes that you need in your code; it is only five characters:
func main() {
    IsValid(&sudoku, 0)
    Display(sudoku)
}

// ...

func IsValid(sudoku *[9][9]int, position int) bool {
    // ...
    if AbsentOnLine(k, *sudoku, x) && AbsentOnRow(k, *sudoku, y) && AbsentOnBloc(k, *sudoku, x, y) {
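The same backtracking scheme can also be sketched in Python, where a list of lists is shared by reference, so the in-place writes are visible to the caller without any explicit pointer (a rough port for illustration, not the original Go):

```python
def solve(board, pos=0):
    """Backtracking solver over a 9x9 list of lists; 0 marks an empty cell.

    Mutates board in place and returns True once a full valid grid is reached.
    """
    if pos == 81:
        return True
    x, y = divmod(pos, 9)
    if board[x][y] != 0:
        return solve(board, pos + 1)
    for k in range(1, 10):
        row_ok = all(board[x][j] != k for j in range(9))
        col_ok = all(board[i][y] != k for i in range(9))
        bx, by = x - x % 3, y - y % 3
        box_ok = all(board[i][j] != k
                     for i in range(bx, bx + 3)
                     for j in range(by, by + 3))
        if row_ok and col_ok and box_ok:
            board[x][y] = k
            if solve(board, pos + 1):
                return True
    board[x][y] = 0  # undo on failure so the caller can backtrack
    return False
```

Calling solve(grid) on a 9x9 list of lists fills in the zeros in place, which is exactly what the Go version only achieves once the array is passed as *[9][9]int.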

Looking for non-recursive algorithm for visiting all k-combinations of a multiset in lexicographic order

More specifically, I'm looking for an algorithm A that takes as its inputs
a sorted multiset M = {a1, a2, …, an} of non-negative integers;
an integer 0 ≤ k ≤ n = |M|;
a "visitor" callback V (taking a k-combination of M as input);
(optional) a sorted k-combination K of M (DEFAULT: the k-combination {a1, a2, …, ak}).
The algorithm will then visit, in lexicographic order, all the k-combinations of M, starting with K, and apply the callback V to each.
For example, if M = {0, 0, 1, 2}, k = 2, and K = {0, 1}, then executing A(M, k, V, K ) will result in the application of the visitor callback V to each of the k-combinations {0, 1}, {0, 2}, {1, 2}, in this order.
A critical requirement is that the algorithm be non-recursive.
Less critical is the precise ordering in which the k-combinations are visited, so long as the ordering is consistent. For example, colexicographic order would be fine as well. The reason for this requirement is to be able to visit all k-combinations by running the algorithm in batches.
In case there are any ambiguities in my terminology, in the remainder of this post I give some definitions that I hope will clarify matters.
A multiset is like a set, except that repetitions are allowed. For example, M = {0, 0, 1, 2} is a multiset of size 4. For this question I'm interested only in finite multisets. Also, for this question I assume that the elements of the multiset are all non-negative integers.
Define a k-combination of a multiset M as any sub-multiset of M of size k. E.g. the 2-combinations of M = {0, 0, 1, 2} are {0, 0}, {0, 1}, {0, 2}, and {1, 2}.
As with sets, the ordering of a multiset's elements does not matter. (e.g. M can also be represented as {2, 0, 1, 0}, or {1, 2, 0, 0}, etc.) but we can define a canonical representation of the multiset as the one in which the elements (here assumed to be non-negative integers) are in ascending order. In this case, any collection of k-combinations of a multiset can itself be ordered lexicographically by the canonical representations of its members. (The sequence of all 2-combinations of M given earlier exhibits such an ordering.)
UPDATE: below I've translated rici's elegant algorithm from C++ to JavaScript as faithfully as I could, and put a simple wrapper around it to conform to the question's specs and notation.
function A(M, k, V, K) {
    if (K === undefined) K = M.slice(0, k);
    var less_than = function (a, b) { return a < b; };

    function next_comb(first, last,
                       /* first_value */ _, last_value,
                       comp) {
        if (comp === undefined) comp = less_than;
        // 1. Find the rightmost value which could be advanced, if any
        var p = last;
        while (p != first && ! comp(K[p - 1], M[--last_value])) --p;
        if (p == first) return false;
        // 2. Find the smallest value which is greater than the selected value
        for (--p; comp(K[p], M[last_value - 1]); --last_value) ;
        // 3. Overwrite the suffix of the subset with the lexicographically
        //    smallest sequence starting with the new value
        while (p !== last) K[p++] = M[last_value++];
        return true;
    }

    while (true) {
        V(K);
        if (!next_comb(0, k, 0, M.length)) break;
    }
}
Demo:
function print_it (K) { console.log(K); }
A([0, 0, 0, 0, 1, 1, 1, 2, 2, 3], 8, print_it);
// [0, 0, 0, 0, 1, 1, 1, 2]
// [0, 0, 0, 0, 1, 1, 1, 3]
// [0, 0, 0, 0, 1, 1, 2, 2]
// [0, 0, 0, 0, 1, 1, 2, 3]
// [0, 0, 0, 0, 1, 2, 2, 3]
// [0, 0, 0, 1, 1, 1, 2, 2]
// [0, 0, 0, 1, 1, 1, 2, 3]
// [0, 0, 0, 1, 1, 2, 2, 3]
// [0, 0, 1, 1, 1, 2, 2, 3]
A([0, 0, 0, 0, 1, 1, 1, 2, 2, 3], 8, print_it, [0, 0, 0, 0, 1, 2, 2, 3]);
// [0, 0, 0, 0, 1, 2, 2, 3]
// [0, 0, 0, 1, 1, 1, 2, 2]
// [0, 0, 0, 1, 1, 1, 2, 3]
// [0, 0, 0, 1, 1, 2, 2, 3]
// [0, 0, 1, 1, 1, 2, 2, 3]
This, of course, is not production-ready code. In particular, I've omitted all error-checking for the sake of readability. Furthermore, an implementation for production will probably structure things differently. (E.g. the option to specify the comparator used by next_comb becomes superfluous here.) My main aim was to keep the ideas behind the original algorithm as clear as possible in a piece of functioning code.
I checked the relevant sections of TAoCP, but this problem is at most an exercise there. The basic idea is the same as Algorithm L: try to "increment" the least significant positions first, filling the positions after the successful increment to have their least allowed values.
Here's some Python that might work but is crying out for better data structures.
def increment(M, K):
    M = list(M)  # copy them
    K = list(K)
    for x in K:  # compute the difference
        M.remove(x)
    for i in range(len(K) - 1, -1, -1):
        candidates = [x for x in M if x > K[i]]
        if len(candidates) < len(K) - i:
            M.append(K[i])
            continue
        candidates.sort()
        K[i:] = candidates[:len(K) - i]
        return K
    return None
def demo():
    M = [0, 0, 1, 1, 2, 2, 3, 3]
    K = [0, 0, 1]
    while K is not None:
        print(K)
        K = increment(M, K)
In iterative programming, to make combinations of size K you would need K for-loops. First we remove the repetitions from the sorted input, then we create an array that represents the for-loop indices. While the indices array doesn't overflow, we keep generating combinations.
The adder function simulates the progression of the counters in a stack of nested for-loops. There is a little room for improvement in the implementation below.
N = size of the distinct input
K = pick size
(loop i starts at v_{i-1} + 1 and runs while v_i < N - (K - (i + 1)) + 1)
for (var v_0 = 0; v_0 < N - (K - 1); v_0++) {
    ...
    for (var v_{K-1} = v_{K-2} + 1; v_{K-1} < N; v_{K-1}++) {
        combo = [ array[v_0], ..., array[v_{K-1}] ];
    }
    ...
}
Here's the working source code in JavaScript:
function adder(arr, max) {
    var k = arr.length;
    var n = max;
    var carry = false;
    var i;
    do {
        for (i = k - 1; i >= 0; i--) {
            arr[i]++;
            if (arr[i] < n - (k - (i + 1))) {
                break;
            }
            carry = true;
        }
        if (carry === true && i < 0) {
            return false; // overflow
        }
        if (carry === false) {
            return true;
        }
        carry = false;
        for (i = i + 1; i < k; i++) {
            arr[i] = arr[i - 1] + 1;
            if (arr[i] >= n - (k - (i + 1))) {
                carry = true;
            }
        }
    } while (carry === true);
    return true;
}

function nchoosekUniq(arr, k, cb) {
    // make the array a distinct set
    var set = new Set();
    for (var i = 0; i < arr.length; i++) { set.add(arr[i]); }
    arr = [];
    set.forEach(function(v) { arr.push(v); });

    var n = arr.length;
    // create index array
    var iArr = Array(k);
    for (var i = 0; i < k; i++) { iArr[i] = i; }
    // find unique combinations
    do {
        var combo = [];
        for (var i = 0; i < iArr.length; i++) {
            combo.push(arr[iArr[i]]);
        }
        cb(combo);
    } while (adder(iArr, n) === true);
}

var arr = [0, 0, 1, 2];
var k = 2;
nchoosekUniq(arr, k, function(set) {
    var s = "";
    set.forEach(function(v) { s += v; });
    console.log(s);
}); // 01, 02, 12
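Note that this approach deduplicates the input first, so it enumerates k-combinations of the distinct values only (which is why it matches the sample [0, 0, 1, 2] output but would never visit the multiset combination {0, 0}). In Python, the same deduplicated enumeration could be sketched with itertools:

```python
from itertools import combinations

def distinct_combinations(arr, k):
    """k-combinations of the distinct values of arr, in sorted order."""
    return list(combinations(sorted(set(arr)), k))

print(distinct_combinations([0, 0, 1, 2], 2))  # [(0, 1), (0, 2), (1, 2)]
```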

Make 2D array into 1D array in Processing

I am trying to flatten a two-dimensional array into a one-dimensional array, so that the one-dimensional array looks like this:
int[] oneDim = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12};
This is what I currently have. I don't really know how to go about doing this. All help and input is appreciated.
void setup() {
    int[][] twoDim = { {1, 2, 3, 4},
                       {5, 6, 7, 8},
                       {9, 10, 11, 12} };
    int[] oneDim = new int[twoDim.length];
    for (int i = 0; i < twoDim.length; i++) {
        for (int j = 0; j < twoDim[i].length; j++) {
            oneDim[j] += twoDim[j][i];
        }
    }
    println(oneDim);
}
int[][] twoDim = { {1, 2, 3, 4},
                   {5, 6, 7, 8},
                   {9, 10, 11, 12} };
int x = twoDim.length;
int y = twoDim[0].length;
int[] oneDim = new int[x*y];
for (int i = 0; i < x; i++) {
    for (int j = 0; j < y; j++) {
        oneDim[i*y + j] = twoDim[i][j];
    }
}
println(oneDim);
Here's a hint: the usual formula for mapping two dimensions to one is width*y + x, where width is the number of elements in each row (4 in your case, as given by twoDim[i].length, assuming all rows are the same length), x is the iterator over columns (j in your case), and y is the iterator over rows (i for you).
You will want to check that the size of your one dimensional array is sufficient to accept all the elements of twoDim. It doesn't look big enough as it is - it needs to be twoDim[i].length * twoDim.length elements long, at least.
You're currently writing the same row of data over and over again, because you're assigning to oneDim[j] in the inner loop for every iteration of the outer loop. Try assigning to oneDim (once it is of appropriate size) using the formula I suggested at the start of this answer instead.
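As a quick sanity check of the width*y + x mapping described above, here is the same flattening sketched in Python:

```python
def flatten(two_dim):
    """Flatten a rectangular 2-D list: element (i, j) goes to index i*width + j."""
    width = len(two_dim[0])
    one_dim = [0] * (len(two_dim) * width)
    for i, row in enumerate(two_dim):
        for j, value in enumerate(row):
            one_dim[i * width + j] = value
    return one_dim

print(flatten([[1, 2, 3, 4], [5, 6, 7, 8], [9, 10, 11, 12]]))
# [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
```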
