How to optimize the given algorithm by reordering the instructions

I know that the following pseudocode can be optimized by changing the order of the instructions, but I don't know how to go about it.
The algorithm performs operations on a matrix that is a multidimensional array.
m is the height of this array (int)
n is the width of this array (int)
tab is that array given as a function parameter
function coto(m, n, tab){
    for(i:=0; i<n; i:=i+2){
        if(n==m){
            tab[i][i]:=i-3
        }
    }
    for(j:=m-1; j>0; j:=j-1){
        tab[j-1][j]:=j+2
        tab[j][j]:=-tab[j][j]
    }
    return tab
}
example input for that function:
coto(4, 4, |2, 2, 2, 2|
           |2, 2, 2, 2|
           |2, 2, 2, 2|
           |2, 2, 2, 2|)
and the output I got:
|-3,  3,  2,  2|
| 2, -2,  4,  2|
| 2,  2,  1,  5|
| 2,  2,  2, -2|
I didn't know how to include LaTeX in this question, so I tried my best.
I don't know what I should do. I can't move an instruction outside the if or the for because that doesn't make sense, in my opinion.

You can split the problem into two cases.
The first is n==m and the second is n!=m.
I think this code will be faster than yours.
function coto(m, n, tab){
    if(m==0) return tab
    if(n==m){
        tab[0][0]:=-3
        for(i:=2; i<n; i:=i+2){
            tab[i][i]:=3-i
            tab[i-1][i-1]:=-tab[i-1][i-1]
            tab[i-2][i-1]:=i+1
            tab[i-1][i]:=i+2
        }
        if(n%2==0){
            tab[n-1][n-1]:=-tab[n-1][n-1]
            tab[n-2][n-1]:=n+1
        }
    } else {
        for(j:=m-1; j>0; j:=j-1){
            tab[j-1][j]:=j+2
            tab[j][j]:=-tab[j][j]
        }
    }
    return tab
}
Also, in my experience it's faster to access variables in a one-dimensional array than in a two-dimensional array.
So we could declare a[j]:=tab[j][j] and b[j]:=tab[j-1][j], then replace the tab accesses with a and b.
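If you want to sanity-check the reordering, here is a quick JavaScript transcription of both versions (a literal, unoptimized translation of the pseudocode above; the function names are mine, chosen just for this comparison), run on the 4x4 example from the question:

// Original version, translated from the question's pseudocode
function cotoOriginal(m, n, tab) {
    for (let i = 0; i < n; i += 2) {
        if (n === m) tab[i][i] = i - 3;
    }
    for (let j = m - 1; j > 0; j--) {
        tab[j - 1][j] = j + 2;
        tab[j][j] = -tab[j][j];
    }
    return tab;
}

// Reordered version from the answer above
function cotoReordered(m, n, tab) {
    if (m === 0) return tab;
    if (n === m) {
        tab[0][0] = -3;
        for (let i = 2; i < n; i += 2) {
            tab[i][i] = 3 - i;
            tab[i - 1][i - 1] = -tab[i - 1][i - 1];
            tab[i - 2][i - 1] = i + 1;
            tab[i - 1][i] = i + 2;
        }
        if (n % 2 === 0) {
            tab[n - 1][n - 1] = -tab[n - 1][n - 1];
            tab[n - 2][n - 1] = n + 1;
        }
    } else {
        for (let j = m - 1; j > 0; j--) {
            tab[j - 1][j] = j + 2;
            tab[j][j] = -tab[j][j];
        }
    }
    return tab;
}

// Compare both on the 4x4 example from the question
const makeInput = () => Array.from({length: 4}, () => [2, 2, 2, 2]);
console.log(JSON.stringify(cotoOriginal(4, 4, makeInput())) ===
            JSON.stringify(cotoReordered(4, 4, makeInput()))); // true

This only checks that the two versions agree on one input, of course; it is not a benchmark.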


Number of ways to pick the elements of an array?

How to formulate this problem in code?
Problem Statement:
UPDATED:
Find the number of ways to pick the elements of the array which have not been visited.
We start with 1, 2, ..., n, where some number x (1 <= x <= n) of elements have already been picked/visited at random; these are given in the input.
Now we need to find the number of ways we can pick the remaining (n - x) elements of the array, where the way we pick an element is defined as follows:
on every turn, we can only pick an element which is adjacent (either left or right) to some visited element, i.e.
in an array of elements 1,2,3,4,5,6, let's say we have visited 3 & 6; we can now pick
2 or 4 or 5, as they are unvisited and adjacent to visited elements. Now say we pick 2; then we can pick 1 or 4 or 5, and it continues.
example:
input: N = 6 (number of elements: 1, 2, 3, 4, 5, 6)
M = 2 (number of visited elements)
visited elements = 1, 5
Output: 16 (the number of ways we can pick the unvisited elements)
ways: 4, 6, 2, 3
4, 6, 3, 2
4, 2, 3, 6
4, 2, 6, 3
4, 3, 2, 6
4, 3, 6, 2
6, 4, 2, 3
6, 4, 3, 2
6, 2, 3, 4
6, 2, 4, 3
2, 6, 4, 3
2, 6, 3, 4
2, 4, 6, 3
2, 4, 3, 6
2, 3, 4, 6
2, 3, 6, 4.
Some analysis of the problem:
The actual values in the input array are assumed to be 1...n, but these values do not really play a role. These values just represent indexes that are referenced by the other input array, which lists the visited indexes (1-based).
The list of visited indexes actually cuts the main array into subarrays with smaller sizes. So for example, when n=6 and visited=[1,5], then the original array [1,2,3,4,5,6] is cut into [2,3,4] and [6]. So it cuts it into sizes 3 and 1. At this point the index numbering loses its purpose, so the problem really is fully described with those two sizes: 3 and 1. To illustrate, the solution for (n=6, visited=[1,5]) is necessarily the same as for (n=7, visited=[1,2,6]): the sizes into which the original array is cut are the same in both cases (in a different order, but that doesn't influence the result).
Algorithm, based on a list of sizes of subarrays (see above):
The number of ways that one such subarray can be visited is not that difficult: if the subarray's size is 1, there is just one way. If it is greater, then at each pick there are two possibilities: either you pick from the left side or from the right side. So you get 2*2*...*2*1 possibilities to pick. This is 2^(size-1) possibilities.
The two outer subarrays are an exception to this, as you can only pick items from the inside-out, so for those the number of ways to visit such a subarray is just 1.
The number of ways that you can pick items from two subarrays can be determined as follows: count the number of ways to pick from just one of those subarrays, and the number of ways to pick from the other one. Then consider that you can alternate when to pick from one subarray or from the other. This comes down to interweaving the two subarrays. Let's say the larger of the two subarrays has j elements, and the smaller k; then consider that there are j+1 positions where an element from the smaller subarray can be injected (merged) into the larger array. There are "k multichoose j+1" ways to inject all elements from the smaller subarray.
When you have counted the number of ways to merge two subarrays, you actually have an array with a size that is the sum of those two sizes. The above logic can then be applied with this array and the next subarray in the problem specification. The number of ways just multiplies as you merge more subarrays into this growing array. Of course, you don't really deal with the arrays, just with sizes. (For the example above: the inner chunk [2,3,4] can be visited in 2^2 = 4 orders, the outer chunk [6] in just 1 order, and there are C(4,1) = 4 ways to weave one element into a sequence of three, giving 4 * 1 * 4 = 16, matching the expected output.)
Here is an implementation in JavaScript, which applies the above algorithm:
function getSubArraySizes(n, visited) {
    // Translate the problem into a list of sizes (of subarrays).
    // The first and last entries must always represent the outer chunks
    // (before the first and after the last visited index), because
    // getPickCount treats those two positions specially. So those two
    // chunks are kept even when they are empty, while empty inner
    // chunks are dropped.
    let j = 0;
    let sizes = [];
    for (let i of visited) {
        let size = i - j - 1;
        if (size > 0 || j == 0) sizes.push(size);
        j = i;
    }
    sizes.push(n - j);
    return sizes;
}
function Combi(n, k) {
    // Count combinations: "from n, take k"
    // See Wikipedia on "Combination"
    let c = 1;
    let end = Math.min(k, n - k);
    for (let i = 0; i < end; i++) {
        c = c * (n-i) / (end-i); // This is floating point
    }
    return c; // ... but the result is an integer
}
function getPickCount(sizes) {
    // Main function, based on a list of sizes of subarrays
    let count = 0;
    let result = 1;
    for (let i = 0; i < sizes.length; i++) {
        let size = sizes[i];
        // Number of ways to take items from this chunk:
        // - when items can only be taken from one side: 1
        // - otherwise: every time we have a choice between 2, except for the last remaining item
        let pickCount = i == 0 || i == sizes.length-1 ? 1 : 2 ** (size-1);
        // Number of ways to merge/weave two arrays, where the relative order of elements is not changed
        // = a "k multichoose n". See
        // https://en.wikipedia.org/wiki/Combination#Number_of_combinations_with_repetition
        let weaveCount = count == 0 ? 1 // First time only
                       : Combi(size+count, Math.min(count, size));
        // Number of possibilities:
        result *= pickCount * weaveCount;
        // Update the size to be the size of the merged/woven array
        count += size;
    }
    return result;
}
// Demo with the example input (n = 6, visited = 1 and 5)
let result = getPickCount(getSubArraySizes(6, [1, 5]));
console.log(result);

Algorithm to generate all multiset size-n partitions

I've been trying to figure out a way to generate all distinct size-n partitions of a multiset, but so far have come up empty-handed. First let me show what I'm trying to achieve.
Let's say we have an input vector of uint32_t:
std::vector<uint32_t> input = {1, 1, 2, 2};
And let's say we want to create all distinct size-2 partitions. There are only two of these, namely:
[[1, 1], [2, 2]], [[1, 2], [1, 2]]
Note that order does not matter, i.e. all of the following are duplicate, incorrect solutions.
Duplicate because order within a permutation group does not matter:
[[2, 1], [1, 2]]
Duplicate because order of groups does not matter:
[[2, 2], [1, 1]]
Not homework of some kind BTW. I encountered this while coding something at work, but by now it is out of personal interest that I'd like to know how to deal with this. The parameters for the work-related problem were small enough that generating a couple thousand duplicate solutions didn't really matter.
Current solution (generates duplicates)
In order to illustrate that I'm not just asking without having tried to come up with a solution, let me try to explain my current algorithm (which generates duplicate solutions when used with multisets).
It works as follows: the state has a bitset with n bits set to 1 for each partition block. The length of the bitsets is size(input) - n * index_block(), e.g. if the input vector has 8 elements and n = 2, then the first partition block uses an 8-bit bitset with 2 bits set to 1, the next partition block uses a 6-bit bitset with 2 bits set to 1, etc.
A partition is created from these bitsets by iterating over each bitset in order and extracting the elements of the input vector with indices equal to the position of 1-bits in the current bitset.
In order to generate the next partition, I iterate over the bitsets in reverse order. The next bitset permutation is calculated (using a reverse of Gosper's hack). If the first bit in the current bitset is not set (i.e. vector index 0 not selected), then that bitset is reset to its starting state. Enforcing that the first bit is always set prevents generating duplicates when creating size-n set partitions (duplicates of the 2nd kind shown above). If the current bitset is equal to its starting value, this step is then repeated for the previous (longer) bitset.
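For reference, here is the forward version of Gosper's hack in JavaScript (my own sketch, only illustrating the bit-permutation step for 32-bit values; the approach above uses a reversed variant):

// Gosper's hack: the next larger integer with the same number of set bits
const nextBitPermutation = (v) => {
    const t = v | (v - 1);               // set all bits below the lowest set bit
    const ctz = 31 - Math.clz32(v & -v); // index of the lowest set bit
    return (t + 1) | (((~t & -~t) - 1) >>> (ctz + 1));
};

// All 4-bit patterns with exactly 2 bits set: 11, 101, 110, 1001, 1010, 1100
let v = 0b0011;
while (v < 0b10000) {
    console.log(v.toString(2));
    v = nextBitPermutation(v);
}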
This works great (and very fast) for sets. However, when used with multisets it generates duplicate solutions, since it is unaware that both elements appear more than once in the input vector. Here's some example output:
std::vector<uint32_t> input = {1, 2, 3, 4};
printAllSolutions(myCurrentAlgo(input, 2));
=> [[2, 1], [4, 3]], [[3, 1], [4, 2]], [[4, 1], [3, 2]]
std::vector<uint32_t> input = {1, 1, 2, 2};
printAllSolutions(myCurrentAlgo(input, 2));
=> [[1, 1], [2, 2]], [[2, 1], [2, 1]], [[2, 1], [2, 1]]
That last (duplicate) solution is generated simply because the algorithm is unaware of duplicates in the input, it generates the exact same internal states (i.e. which indices to select) in both examples.
Wanted solution
I guess it's pretty clear by now what I'm trying to end up with. Just for the sake of completeness, it would look somewhat as follows:
std::vector<uint32_t> multiset = {1, 1, 2, 2};
MagicClass myGenerator(multiset, 2);
do {
    std::vector<std::vector<uint32_t> > nextSolution = myGenerator.getCurrent();
    std::cout << nextSolution << std::endl;
} while (myGenerator.calcNext());
=> [[1, 1], [2, 2]]
[[1, 2], [1, 2]]
I.e. the code would work somewhat like std::next_permutation, informing that it has generated all solutions and has ended back at the "first" solution (for whatever definition of first you want to use; probably lexicographic, but it doesn't need to be).
The closest related algorithm I found is Algorithm M from Knuth's The Art of Computer Programming, Volume 4 Part 1, section 7.2.1.5 (p. 430). However, that generates all possible multiset partitions. There is also an exercise in the book (7.2.1.5.69, solution on p. 778) about how to modify Alg. M in order to generate only solutions with at most r partitions. However, that still allows partitions of different sizes (e.g. [[1, 2, 2], [1]] would be a valid output for r = 2).
Any ideas/tricks/existing algorithms on how to go about this? Note that the solution should be efficient, i.e. keeping track of all previously generated solutions, figuring out if the currently generated one is a permutation and if so skipping it, is infeasible because of the rate by which the solution space explodes for longer inputs with more duplicates.
A recursive algorithm to distribute the elements one-by-one could be based on a few simple rules:
Start by sorting or counting the different elements; they don't have to be in any particular order, you just want to group identical elements together. (This step will simplify some of the following steps, but could be skipped.)
{A,B,D,C,C,D,B,A,C} -> {A,A,B,B,D,D,C,C,C}
Start with an empty solution, and insert the elements one by one, using the following rules:
{ , , } { , , } { , , }
Before inserting an element, find the duplicate blocks, e.g.:
{A, , } { , , } { , , }
^dup^
{A, , } {A, , } {A, , }
^dup^ ^dup^
Insert the element into every non-duplicate block with available space:
partial solution: {A, , } {A, , } { , , }
^dup^
insert element B: {A,B, } {A, , } { , , }
{A, , } {A, , } {B, , }
If an identical element is already present, don't put the new element before it:
partial solution: {A, , } {B, , } { , , }
insert another B: {A,B, } {B, , } { , , } <- ILLEGAL
{A, , } {B,B, } { , , } <- OK
{A, , } {B, , } {B, , } <- OK
When inserting an element of which there are another N identical elements, make sure to leave N open spots after the current element:
partial solution: {A, , } {A, , } {B,B, }
insert first D: {A,D, } {A, , } {B,B, } <- OK
{A, , } {A, , } {B,B,D} <- ILLEGAL (NO SPACE FOR 2ND D)
The last group of identical elements can be inserted in one go:
partial solution: {A,A, } {B,B,D} {D, , }
insert C,C,C: {A,A,C} {B,B,D} {D,C,C}
So the algorithm would be something like this:
// PREPARATION
Sort or group input.           // {A,B,D,C,C,D,B,A,C} -> {A,A,B,B,D,D,C,C,C}
Create empty partial solution. // { , , } { , , } { , , }
Start recursion with empty partial solution and index at start of input.

// RECURSION
Receive partial solution, index, group size and last-used block.
If group size is zero:
    Find group size of identical elements in input, starting at index.
    Set last-used block to first block.
Find empty places in partial solution, starting at last-used block.
If index is at last group in input:
    Fill empty places with elements of last group.
    Store complete solution.
    Return from recursion.
Mark duplicate blocks in partial solution.
For each block in partial solution, starting at last-used block:
    If current block is not a duplicate, and has empty places,
    and the places left in current and later blocks are not less than the group size:
        Insert element into copy of partial solution.
        Recurse with copy, index + 1, group size - 1, current block.
I tested a simple JavaScript implementation of this algorithm, and it gives the correct output.
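For reference, here is my own JavaScript transcription of those rules (a sketch, not the tested implementation mentioned above; it assumes the input length is divisible by the block size, and the function name is mine):

function multisetSizeNPartitions(input, blockSize) {
    const sorted = [...input].sort((a, b) => a - b);
    const numBlocks = sorted.length / blockSize;
    const solutions = [];

    function recurse(blocks, index, groupSize, lastBlock) {
        if (index === sorted.length) {
            solutions.push(blocks.map(b => b.slice()));
            return;
        }
        if (groupSize === 0) {
            // Find the size of the group of identical elements at `index`
            groupSize = 1;
            while (sorted[index + groupSize] === sorted[index]) groupSize++;
            lastBlock = 0;
        }
        for (let b = lastBlock; b < numBlocks; b++) {
            // Skip duplicate blocks (same contents as the previous block)
            if (b > 0 && blocks[b].length === blocks[b - 1].length &&
                    blocks[b].every((v, k) => v === blocks[b - 1][k])) continue;
            // The block must have an empty place...
            if (blocks[b].length === blockSize) continue;
            // ...and current + later blocks must fit the whole group
            const placesLeft = (numBlocks - b) * blockSize -
                blocks.slice(b).reduce((s, bl) => s + bl.length, 0);
            if (placesLeft < groupSize) continue;
            const copy = blocks.map(bl => bl.slice());
            copy[b].push(sorted[index]);
            recurse(copy, index + 1, groupSize - 1, b);
        }
    }

    recurse(Array.from({length: numBlocks}, () => []), 0, 0, 0);
    return solutions;
}

console.log(JSON.stringify(multisetSizeNPartitions([1, 1, 2, 2], 2)));
// [[[1,1],[2,2]],[[1,2],[1,2]]]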
Here's my pencil and paper algorithm:
Describe the multiset in item quantities, e.g., {(1,2),(2,2)}
f(multiset, result):
    if the multiset is empty:
        return result
    otherwise:
        call f again with each unique distribution of one element added to result
        and removed from the multiset state
Example:
{(1,2),(2,2),(3,2)} n = 2
11 -> 11 22 -> 11 22 33
      11 2 2 -> 11 23 23
1 1 -> 12 12 -> 12 12 33
       12 1 2 -> 12 13 23
Example:
{(1,2),(2,2),(3,2)} n = 3
11 -> 112 2 -> 112 233
      11 22 -> 113 223
1 1 -> 122 1 -> 122 133
       12 12 -> 123 123
Let's address the problem, raised in the comments by m69, of dealing with potential duplicate distributions:
{A,B,B,C,C,D,D,D,D}
We've reached {A, , }{B, , }{B, , }, have 2 C's to distribute
and we'd like to avoid `ac bc b` generated along with `ac b bc`.
Because our generation in the level just above is ordered, the series of identical
counts will be continuous. When a series of identical counts is encountered, make
the assignment for the whole block of identical counts (rather than each one),
and partition that contribution in descending parts; for example,
| identical |
ac b b
ac bc b // descending parts [1,0]
Example of longer block:
| identical block | descending parts
ac bcccc b b b // [4,0,0,0]
ac bccc bc b b // [3,1,0,0]
ac bcc bcc b b // [2,2,0,0]
...
Here's a working solution that makes use of the next_combination function presented by Hervé Brönnimann in N2639. The comments should make it pretty self-explanatory. The "herve/combinatorics.hpp" file contains the code listed in N2639 inside the herve namespace. It's in C++11/14; converting to an older standard should be pretty trivial.
Note that I only quickly tested the solution. Also, I extracted it from a class-based implementation just a couple of minutes ago, so some extra bugs might have crept in. A quick initial test seems to confirm it works, but there might be corner cases for which it won't.
#include <cstdint>
#include <iterator>
#include "herve/combinatorics.hpp"

template <typename BidirIter>
bool next_combination_partition (BidirIter const & startIt,
        BidirIter const & endIt, uint32_t const groupSize) {
    // Typedefs
    using tDiff = typename std::iterator_traits<BidirIter>::difference_type;

    // Skip the last partition, because it consists of the remaining elements.
    // Thus if there's 2 groups or less, the start should be at position 0.
    tDiff const totalLength = std::distance(startIt, endIt);
    uint32_t const numTotalGroups = std::max(static_cast<uint32_t>((totalLength - 1) / groupSize + 1), 2u);
    uint32_t curBegin = (numTotalGroups - 2) * groupSize;
    uint32_t const lastGroupBegin = curBegin - 1;
    uint32_t curMid = curBegin + groupSize;
    bool atStart = (totalLength != 0);

    // Iterate over combinations from the back of the list to the front. If a
    // combination ends up at its starting value, update the previous one as well.
    for (; (curMid != 0) && (atStart);
            curMid = curBegin, curBegin -= groupSize) {
        // To prevent duplicates, the first element of each combination partition
        // needs to be fixed. So move the start iterator to the next element. This
        // is not true for the starting (2nd to last) group though.
        uint32_t const startIndex = std::min(curBegin + 1, lastGroupBegin + 1);
        auto const iterStart = std::next(startIt, startIndex);
        auto const iterMid = std::next(startIt, curMid);
        atStart = !herve::next_combination(iterStart, iterMid, endIt);
    }
    return !atStart;
}
Edit: Below is my quickly-thrown-together test code ("combopart.hpp" obviously being the file containing the above function).
#include "combopart.hpp"
#include <algorithm>
#include <cstdint>
#include <iostream>
#include <iterator>
#include <vector>
int main (int argc, char* argv[]) {
uint32_t const groupSize = 2;
std::vector<uint32_t> v;
v = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9};
v = {0, 0, 0, 1, 1, 1, 2, 2, 2, 3};
v = {1, 1, 2, 2};
// Make sure contents are sorted
std::sort(v.begin(), v.end());
uint64_t count = 0;
do {
++count;
std::cout << "[ ";
uint32_t elemCount = 0;
for (auto it = v.begin(); it != v.end(); ++it) {
std::cout << *it << " ";
elemCount++;
if ((elemCount % groupSize == 0) && (it != std::prev(v.end()))) {
std::cout << "| ";
}
}
std::cout << "]" << std::endl;
} while (next_combination_partition(v.begin(), v.end(), groupSize));
std::cout << std::endl << "# elements: " << v.size() << " - group size: " <<
groupSize << " - # combination partitions: " << count << std::endl;
return 0;
}
Edit 2: Improved the algorithm. Replaced the early-exit branch with a combination of a conditional move (using std::max) and setting the atStart boolean to false. Untested though, be warned.
Edit 3: An extra modification was needed so as not to "fix" the first element in the 2nd-to-last partition. The additional code should compile to a conditional move, so there should be no branching cost associated with it.
P.S.: I am aware that the code to generate combinations by @Howard Hinnant (available at https://howardhinnant.github.io/combinations.html) is much faster than the one by Hervé Brönnimann. However, that code cannot handle duplicates in the input (because as far as I can see, it never even dereferences an iterator), which my problem explicitly requires. On the other hand, if you know for sure your input won't contain duplicates, it is definitely the code you want to use with my function above.

What is Sliding Window Algorithm? Examples?

While solving a geometry problem, I came across an approach called Sliding Window Algorithm.
Couldn't really find any study material/details on it.
What is the algorithm about?
I think of it as more a technique than an algorithm. It's a technique that could be utilized in various algorithms.
I think the technique is best understood with the following example. Imagine we have this array:
[ 5, 7, 1, 4, 3, 6, 2, 9, 2 ]
How would we find the largest sum of five consecutive elements? Well, we'd first look at 5, 7, 1, 4, 3 and see that the sum is 20. Then we'd look at the next set of five consecutive elements, which is 7, 1, 4, 3, 6. The sum of those is 21. This is more than our previous sum, so 7, 1, 4, 3, 6 is currently the best we've got so far.
Let's see if we could improve. 1, 4, 3, 6, 2? No, that sums to 16. 4, 3, 6, 2, 9? That sums to 24, so now that's the best sequence we've got. Now we move along to the next sequence, 3, 6, 2, 9, 2. That one sums to 22, which doesn't beat our current best of 24. And we've reached the end, so we're done.
The brute force approach to implementing this programmatically is as follows:
const getMaxSumOfFiveContiguousElements = (arr) => {
    let maxSum = -Infinity;
    let currSum;
    for (let i = 0; i <= arr.length - 5; i++) {
        currSum = 0;
        for (let j = i; j < i + 5; j++) {
            currSum += arr[j];
        }
        maxSum = Math.max(maxSum, currSum);
    }
    return maxSum;
};
What is the time complexity of this? It's O(n*k). The outer loop goes through n - k + 1 items, but when n is much larger than k, we can ignore the "- k + 1" part and just call it n items. The inner loop then goes through k items, so we have O(n*k).
Can we get this down to just O(n)? Let's return to this array:
[ 5, 7, 1, 4, 3, 6, 2, 9, 2 ]
First we get the sum of 5, 7, 1, 4, 3. Next we need the sum of 7, 1, 4, 3, 6. Picture a "window" surrounding each group of five elements.
What's the difference between the first window and the second window? Well, the second window got rid of the 5 on the left but added a 6 on the right. So since we know the sum of the first window was 20, to get the sum of the second window, we take that 20, subtract out the 5, and add the 6 to get 21. We don't actually have to go through each element in the second window and add them up (7 + 1 + 4 + 3 + 6). That would involve doing repeated and unnecessary work.
Here the sliding window approach ends up being two operations instead of five, since k is 5. That's not a huge improvement, but you can imagine that for larger k (and larger n) it really does help.
Here's how the code would work using the sliding window technique:
const getLargestSumOfFiveConsecutiveElements = (arr) => {
    let currSum = getSum(arr, 0, 4);
    let largestSum = currSum;
    for (let i = 1; i <= arr.length - 5; i++) {
        currSum -= arr[i - 1]; // subtract the element to the left of the current window
        currSum += arr[i + 4]; // add the last element in the current window
        largestSum = Math.max(largestSum, currSum);
    }
    return largestSum;
};

const getSum = (arr, start, end) => {
    let sum = 0;
    for (let i = start; i <= end; i++) {
        sum += arr[i];
    }
    return sum;
};
And that's the gist of the sliding window technique. In other problems you may be doing something more complicated than getting the sum of the elements inside the window. Or the window itself may be of varying size instead of the fixed size of five that we saw here (a sketch of that variant follows below). But this basic application of the sliding window technique should give you a foundation from which you can build.
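Here is a minimal sketch of a varying-size window (my own example, not part of the answer above): find the length of the smallest contiguous subarray whose sum is at least a given target, assuming all elements are non-negative:

const minSubarrayLength = (arr, target) => {
    let minLen = Infinity;
    let windowSum = 0;
    let start = 0;
    for (let end = 0; end < arr.length; end++) {
        windowSum += arr[end];        // grow the window to the right
        while (windowSum >= target) { // shrink from the left while still valid
            minLen = Math.min(minLen, end - start + 1);
            windowSum -= arr[start];
            start++;
        }
    }
    return minLen === Infinity ? 0 : minLen;
};

console.log(minSubarrayLength([5, 7, 1, 4, 3, 6, 2, 9, 2], 15)); // 3 (the subarray [6, 2, 9])

Each element enters and leaves the window at most once, so the whole thing stays O(n).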
Generally speaking a sliding window is a sub-list that runs over an underlying collection. I.e., if you have an array like
[a b c d e f g h]
a sliding window of size 3 would run over it like
[a b c]
[b c d]
[c d e]
[d e f]
[e f g]
[f g h]
This is useful if, for instance, you want to compute a running average, or if you want to create a set of all adjacent pairs, etc.
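For the running-average case, a minimal sketch (my own example): each step adds the element entering the window and subtracts the element leaving it:

// Running averages over windows of size k (assumes arr.length >= k)
const runningAverages = (arr, k) => {
    const out = [];
    let sum = 0;
    for (let i = 0; i < arr.length; i++) {
        sum += arr[i];                 // element entering the window
        if (i >= k) sum -= arr[i - k]; // element leaving the window
        if (i >= k - 1) out.push(sum / k);
    }
    return out;
};

console.log(runningAverages([1, 2, 3, 4, 5], 3)); // [2, 3, 4]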
The sliding window is a problem-solving technique for problems that involve arrays/lists. These problems are easy to solve with a brute-force approach in O(n^2) or O(n^3); using the sliding window technique, we can often reduce the time complexity to O(n).
A great article on this is here: https://medium.com/outco/how-to-solve-sliding-window-problems-28d67601a66
So the first thing you want to be able to do is to identify a problem that uses a sliding window paradigm. Luckily, there are some common giveaways:
The problem will involve a data structure that is ordered and iterable, like an array or a string.
You are looking for some subrange in that array/string, like the longest, shortest or target value.
There is an apparent naive or brute force solution that runs in O(N²), O(2^N) or some other large time complexity.
But the biggest giveaway is that the thing you are looking for is often some kind of optimal, like the longest sequence or shortest sequence of something that satisfies a given condition exactly.
To add to the previous answers, here are some more resources which illustrate this concept very well.
This YouTube video is the best that I have found on this topic.
Here is a list of questions on LeetCode which can be solved using this technique.
The sliding window is one of the most frequently asked topics in coding rounds at top companies, so it is definitely worth spending some time to master it.

Problem coming up with an array function

Let's say I have an increasing sequence of integers: seq = [1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 4 ... ], not guaranteed to have exactly the same number of each integer, but guaranteed to be increasing by 1.
Is there a function F that can operate on this sequence whereby F(seq, x) would give me all 1's where an integer in the sequence equals x, and all other integers would be 0?
For example:
t = [1, 1, 1, 1, 2, 2, 3, 3, 3, 4]
F(t, 2) = [0, 0, 0, 0, 1, 1, 0, 0, 0, 0]
EDIT: I probably should have made it more clear. Is there a solution where I can do some algebraic operations on the entire array to get the desired result, without iterating over it?
So, I'm wondering if I can do something like: F(t, x) = t op x ?
In Python (t is a numpy.array) it could be:
(t * -1) % x or something...
EDIT2: I found out that the identity function I(t[i] == x) is acceptable to use as an algebraic operation. Sorry, I did not know about identity functions.
There's a very simple solution to this that doesn't require most of the restrictions you place upon the domain. Just create a new array of the same size, loop through and test for equality between the element in the array and the value you want to compare against. When they're the same, set the corresponding element in the new array to 1. Otherwise, set it to 0. The actual implementation depends on the language you're working with, but should be fairly simple.
If we do take into account your domain, you can introduce a couple of optimisations. If you start with an array of zeroes, you only need to fill in the ones. You know you don't need to start checking until the (n - 1)th element, where n is the value you're comparing against, because there must be at least one of the numbers 1 to n in increasing order. If you don't have to start at 1, you can still start at (n - start). Similarly, if you haven't come across it at array[n - 1], you can jump n - array[n - 1] more elements. You can repeat this, skipping most of the elements, as much as you need to until you either hit the right value or the end of the list (if it's not in there at all).
After you finish dealing with the value you want, there's no need to check the rest of the array, as you know it'll always be increasing. So you can stop early too.
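A sketch of that skip-ahead idea in JavaScript (my own transcription of this answer; it assumes the sequence starts at 1 and increases in steps of exactly 1, as in the question):

const F = (t, x) => {
    const out = new Array(t.length).fill(0);
    let i = x - 1;         // earliest possible index of x
    while (i < t.length && t[i] < x) {
        i += x - t[i];     // jump past elements that cannot be x yet
    }
    while (i < t.length && t[i] === x) {
        out[i] = 1;        // mark the run of x's, then stop early
        i++;
    }
    return out;
};

console.log(F([1, 1, 1, 1, 2, 2, 3, 3, 3, 4], 2).join(', '));
// 0, 0, 0, 0, 1, 1, 0, 0, 0, 0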
A simple method (with C# code) is to simply iterate over the sequence and test it, returning either 1 or 0.
foreach (int element in sequence)
    if (element == myValue)
        yield return 1;
    else
        yield return 0;
(Written using LINQ)
sequence.Select(elem => elem == myValue ? 1 : 0);
A dichotomy (binary search) algorithm can quickly locate the range where t[i] = x, making such a function sub-linear in time.
Are you asking for a ready-made C++ or Java API, or are you asking for an algorithm? Or is this a homework question?
I see the simple algorithm of scanning the array from start to end and comparing each element: if it is equal, put 1, else put 0. In any case, to fill the new array you have to access each of its elements at least once, so the overall approach will be O(n).
You can certainly reduce the number of comparisons by using a binary search. Once you find the required number, simply scan forward and backward for the same number.
Here is a java method which returns a new array.
public static int[] sequence(int[] seq, int number)
{
    int[] newSequence = new int[seq.length];
    for ( int index = 0; index < seq.length; index++ )
    {
        if ( seq[index] == number )
        {
            newSequence[index] = 1;
        }
        else
        {
            newSequence[index] = 0;
        }
    }
    return newSequence;
}
I would initialize an array of zeroes, then do a binary search on the sequence to find the first element that fits your criteria, and only start setting 1's from there. As soon as you hit a non-matching element, stop.
Here is a way to do it where the search is O(log n):
>>> from bisect import bisect
>>> def f(t, n):
... i = bisect(t,n-1)
... j = bisect(t,n,lo=i) - i
... return [0]*i+[1]*j+[0]*(len(t)-j-i)
...
...
>>> t = [1, 1, 1, 1, 2, 2, 3, 3, 3, 4]
>>> print f(t, 2)
[0, 0, 0, 0, 1, 1, 0, 0, 0, 0]

Uniform integer divider

The problem looks like this,
You have to draw a line N px wide as M uniform dashes.
If, for example, N=13 and M=5, each dash will be 2 px wide and we will have a 3 px error.
We can do better: we can draw dashes with the following widths: 3, 3, 3, 2, 2.
But we can do even better: the dashes can have the following widths: 3, 2, 3, 2, 3.
If I have a list a = (3, 3, 3, 2, 2), how can I find a list for which the distance D between adjacent pairs in the list (the sum of the absolute differences of neighbours) is maximal?
In this example D(a) = 0 + 0 + 1 + 0 = 1.
For the list b = (3, 2, 3, 2, 3), D(b) = 1 + 1 + 1 + 1 = 4.
What is the fastest/simplest method?
The simplest method I know of? Using floating-point numbers.
In Python:
def pace(D, M):
    return [round(float(D) / M * i) for i in range(1, M+1)]
I have already seen this somewhere here I think.
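Note that pace returns cumulative endpoints rather than dash widths; the widths are the differences between consecutive endpoints. Here is a quick JavaScript sketch of the same rounding idea (my transcription, not the original answer's code) that produces the widths directly:

const dashWidths = (N, M) => {
    const widths = [];
    let prev = 0;
    for (let i = 1; i <= M; i++) {
        const end = Math.round(N * i / M); // cumulative endpoint of the i-th dash
        widths.push(end - prev);
        prev = end;
    }
    return widths;
};

console.log(dashWidths(13, 5)); // [3, 2, 3, 2, 3]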
Something inspired by Bresenham's algorithm should do the trick. Believe me, you don't want to maximize D over all permutations of your set: that problem is overly complex (there are O(n!) permutations, so unless n is very small, this won't work).
