I learned about ternary search from Wikipedia. I'm not sure what is meant by the parameter absolutePrecision; the article doesn't elaborate. But here is the pseudocode:
def ternarySearch(f, left, right, absolutePrecision):
    # left and right are the current bounds; the maximum is between them
    if (right - left) < absolutePrecision:
        return (left + right) / 2
    leftThird = (2*left + right) / 3
    rightThird = (left + 2*right) / 3
    if f(leftThird) < f(rightThird):
        return ternarySearch(f, leftThird, right, absolutePrecision)
    return ternarySearch(f, left, rightThird, absolutePrecision)
I want to find the maximum value of a unimodal function. That is, I want to print the value at the border between the increasing and the decreasing part. If the sequence is
1 2 3 4 5 -1 -2 -3 -4
then I want to print 5 as output.
Here is my attempt. It isn't producing any output. Can you please help, or point me to a good tutorial on ternary search for self-study?
#include <iostream>
using namespace std;

int ternary_search(int[], int, int, int);
int precval = 1;

int main()
{
    int n, arr[100];
    cout << "\t\t\tTernary Search\n\n" << endl;
    //cout << "This program will find the max element in a unimodal array." << endl;
    cout << "How many integers: ";
    cin >> n;
    for (int i = 0; i < n; i++)
        cin >> arr[i];
    cout << endl << "The max number in the array is: ";
    int res = ternary_search(arr, 0, n - 1, precval);
    cout << res << endl;
    return 0;
}

int ternary_search(int arr[], int left, int right, int precval)
{
    if (right - left <= precval)
        return (arr[right] > arr[left]) ? arr[right] : arr[left];
    int first_third = (left * 2 + right) / 3;
    int last_third = (left + right * 2) / 3;
    if (arr[first_third] < arr[last_third])
        return ternary_search(arr, first_third, right, precval);
    else
        return ternary_search(arr, left, last_third, precval);
}
Thank you in advance.
Absolute precision means the maximum error between the returned result and the true result i.e. max | returned_result - true_result |. In that context, f is a continuous function.
Since you are looking at a discrete function, you can't do much better than getting to the point where right - left <= 1. Then just compare the two remaining values and return the larger one (since you're looking for the max).
EDIT
The first partition point, mathematically 2/3*left + 1/3*right, should be discretized to ceil((2*left + right)/3), so that the relationship left < first_third <= last_third < right holds.
So first_third = (left*2+right)/3 should be changed to first_third = (left*2 + right + 2)/3.
Try Golden Section search (or Fibonacci search for discrete functions).
It needs fewer recursion steps AND about 50% fewer evaluations of f, compared to the ternary search above.
I'm struggling with this problem so if anyone can help, that would be appreciated. The problem goes like this:
Calculate the number of ways that k people can sit in a 2 x n matrix (n and k are obtained from the user through standard input). The matrix is also given by the user and can contain the following characters: '.' - people can sit here, '#' - people can't sit here.
People in the matrix can't be adjacent (that is if one person is situated at (row, column), another person can't sit at (row-1, column) or at (row, column-1) - notice that they can sit on (row-1, column-1)).
For example, if n = 3, k = 2 and given the following matrix:
..#
...
the answer would be 5. All possible ways to seat 2 people in the matrix are (u means that a person is sitting on that field):
u.# .u# ..# u.# .u#
.u. u.. u.u ..u ..u
Let's go through the 2 x N matrix from left to right. In each column we can have only 3 states:
User on top position
User on bottom position
No users
So at each step we can move from the previous states, and all we need is to keep the number of ways for each state and each number of users:
State Top can move to states: Bottom or None
State Bottom can move to states: Top or None
State None can move to states: Top, Bottom or None
The answer is the sum over all states with K users.
Sample code:
#include <iostream>
#include <map>
#include <string>
using namespace std;
enum State : int
{
    Top,    // u
            // -
    Bottom, // -
            // u
    None,   // -
            // -
};

int main()
{
    int N, K; cin >> N >> K;
    string S[2]; cin >> S[0] >> S[1];
    // prev[state][users] = number of ways to reach `state` in the previous
    // column with `users` people seated so far
    map<State, map<int, int>> prev = { { None, {{0, 1}} } };
    for (int i = 0; i < N; ++i) {
        map<State, map<int, int>> cur;
        if (S[0][i] == '.') {
            for (auto& w : prev[None])   cur[Top][w.first + 1] += w.second;
            for (auto& w : prev[Bottom]) cur[Top][w.first + 1] += w.second;
        }
        if (S[1][i] == '.') {
            for (auto& w : prev[None]) cur[Bottom][w.first + 1] += w.second;
            for (auto& w : prev[Top])  cur[Bottom][w.first + 1] += w.second;
        }
        for (auto& w : prev[None])   cur[None][w.first] += w.second;
        for (auto& w : prev[Top])    cur[None][w.first] += w.second;
        for (auto& w : prev[Bottom]) cur[None][w.first] += w.second;
        swap(cur, prev);
    }
    cout << (prev[Top][K] + prev[Bottom][K] + prev[None][K]) << endl;
    return 0;
}
So I have some ranges like these:
2 4
1 9
4 5
4 7
For this the result should be
1 -> 1
2 -> 2
3 -> 2
4 -> 4
5 -> 3
6 -> 2
7 -> 2
8 -> 1
9 -> 1
The naive approach would be to loop through all the ranges for every index, but that would be very inefficient; the worst case takes O(n * n).
What would be an efficient approach, ideally O(n) or O(log n)?
Here's the solution, in O(n):
The rationale is to record a range [a, b] as a +1 at a and a -1 just after b. After adding all the ranges, compute the accumulated sums over that array and display them.
If you need to perform queries while adding the values, a better choice would be to use a Binary Indexed Tree, but your question doesn't seem to require this, so I left it out.
#include <iostream>
#include <algorithm>
#define MAX 1000
using namespace std;

int T[MAX];

int main() {
    int a, b;
    int min_index = 0x1f1f1f1f, max_index = 0;
    while (cin >> a >> b) {
        T[a] += 1;      // +1 where the range starts
        T[b + 1] -= 1;  // -1 just after it ends
        min_index = min(min_index, a);
        max_index = max(max_index, b);
    }
    for (int i = min_index; i <= max_index; i++) {
        T[i] += T[i - 1];  // prefix sum turns the deltas into counts
        cout << i << " -> " << T[i] << endl;
    }
}
UPDATE: Based on the "provocations" (in a good sense) by גלעד ברקן, you can also do this in O(n log n):
#include <iostream>
#include <map>
#define ull unsigned long long
#define miit map<ull, int>::iterator
using namespace std;

map<ull, int> T;

int main() {
    ull a, b;
    while (cin >> a >> b) {
        T[a] += 1;
        T[b + 1] -= 1;
    }
    ull last = 0;
    int count = 0;
    for (miit it = T.begin(); it != T.end(); it++) {
        if (count > 0)
            for (ull i = last; i < it->first; i++)
                cout << i << " " << count << endl;
        count += it->second;
        last = it->first;
    }
}
The advantage of this solution is being able to support ranges with much larger values (as long as the output isn't so large).
The solution is pretty simple:
Generate two lists with the start and end indices of all the ranges, and sort them.
Keep a counter of the number of ranges that cover the current index. Start at the smallest index covered by any range and iterate up to the largest such index. If an index appears in the list of start indices, add 1 to the counter; if it appears in the list of end indices, subtract 1 from the counter.
Implementation:
#include <set>
#include <vector>
using namespace std;

vector<int> count(int** ranges, int rangecount, int rangemin, int rangemax)
{
    vector<int> res;
    multiset<int> open, close;  // multiset: several ranges may share an endpoint
    for (int** r = ranges; r < ranges + rangecount; r++)
    {
        open.insert((*r)[0]);
        close.insert((*r)[1]);
    }
    int rc = 0;
    for (int i = rangemin; i <= rangemax; i++)
    {
        rc += open.count(i);   // ranges that start here
        res.push_back(rc);
        rc -= close.count(i);  // ranges that end here
    }
    return res;
}
Paul's answer still counts from "the first item that is at any range and iterate[s] over all numbers to the last element that is in any range." But what if we could aggregate overlapping counts? For example, if we have three (or, say, a very large number of) overlapping ranges (2,6), (1,6), (2,8), the section (2,6) could depend only on the number of ranges, if we were to label the overlaps with their counts: [(1),3(2,6),(7,8)].
Using binary search (once for the start and a second time for the end of each interval), we could split the intervals and aggregate the counts in O(n * log m * l) time, where n is the number of given ranges, m is the number of resulting groups in the total range, and l is the number of disjoint updates required for a particular overlap (the number of groups already within that range). Notice that at any time we simply have a sorted list grouped as intervals with labeled counts.
2 4
1 9
4 5
4 7
=>
(2,4)
(1),2(2,4),(5,9)
(1),2(2,3),3(4),2(5),(6,9)
(1),2(2,3),4(4),3(5),2(6,7),(8,9)
So you want the output to be an array, where the value of each element is the number of input ranges that include it?
Yeah, the obvious solution would be to increment every element in the range by 1, for each range.
I think you can do better if you sort the input ranges by start (primary) and end (secondary). So for 32-bit starts and ends, start:end can be a 64-bit sort key. Actually, just sorting by start is fine; we need to sort the ends differently anyway.
Then you can see how many ranges you have entered at an element, and (with a pqueue of range ends) how many you have already left.
# pseudo-code with possible bugs.
# TODO: peek or put-back the element from ranges / ends
#       that made the condition false.
pqueue ends;    // priority queue
int depth = 0;  // how many ranges contain this element
for i in output.len {
    while (r = ranges.next && r.start <= i) {
        ends.push(r.end);
        depth++;
    }
    while (ends.pop < i) {
        depth--;
    }
    output[i] = depth;
}
assert ends.empty();
Actually, we can just sort the starts and ends separately into two separate priority queues. There's no need to build the pqueue on the fly. (Sorting an array of integers is more efficient than sorting an array of structs by one struct member, because you don't have to copy around as much data.)
I am very new to tree data structures. I know how the entire structure works, but am not sure how to approach randomly generating one.
For example, to create a binary tree with depth 3, you essentially put the pieces together one by one, i.e.:
root = Node()
root.leftChild = Node()
root.rightChild = Node()
root.leftChild.leftChild = 'left'
root.rightChild.rightChild = 'right'
The above doesn't work when I want to randomly create binary tree structures that vary from one another. By randomly creating a tree structure I mean randomly choosing node types and randomly assigning a child or not assigning one, with the end result always having a depth of N.
Does anyone have any suggestions on how to approach this? I would love to see some pseudo code/algorithm or anything of that nature.
thanks
I wrote a simple program to illustrate my method. The program will generate a binary-heap-like structure, and it will be simple to convert it to your structure.
#include <iostream>
#include <cstdlib>
#include <cstring>
#include <ctime>
using namespace std;

int main(){
    int maxDepth;            // the max depth of the tree
    int totalNodes;          // the least number of nodes in the tree
    int realTotalNodes = 0;  // the real number of nodes in the tree
    cin >> maxDepth >> totalNodes;
    srand(time(NULL));
    int indexMax = (1 << maxDepth) - 1;  // max index of the nodes in a maxDepth-level binary tree
    bool* nodes = new bool[indexMax + 1];
    memset(nodes, 0, indexMax + 1);
    int lastMin = 1 << (maxDepth - 1);   // min index of nodes at the deepest level
    // First, guarantee that the tree will be maxDepth levels high.
    // That is, create a path from the root down to the deepest level.
    int lastIndex = (rand() % lastMin) + lastMin;  // pick a node at the deepest level
    while (lastIndex > 0) {  // create its parent, grandparent, great-grandparent...
        nodes[lastIndex] = true;
        realTotalNodes++;
        lastIndex = lastIndex / 2;
        totalNodes--;
    }
    while (totalNodes > 0) {
        int currentIndex = rand() % indexMax + 1;  // randomly pick a node in the tree
        totalNodes--;
        while (currentIndex > 0) {  // create its parents...
            if (nodes[currentIndex] == true) {  // if some ancestor exists, its ancestors exist too
                break;
            }
            nodes[currentIndex] = true;
            realTotalNodes++;
            currentIndex = currentIndex / 2;
            totalNodes--;
        }
    }
    // Print the tree level by level.
    int level = 2;
    for (int i = 1; i <= indexMax; i++) {
        if (nodes[i]) {
            cout << i << "\t";
        }
        if (i == level - 1) {
            cout << endl;
            level = level * 2;
        }
    }
    delete[] nodes;
    return 0;
}
Is there a single one-liner to find the largest element smaller than some element x in a sorted container? I'm essentially interested in any code that will give me an iterator pointing to the largest element smaller than x.
I know how to code this up myself, but would hope that there is a library function for it...
EDIT: Maybe I should make myself clear here, that the version I have in mind that I would code myself is based on binary search and thus runs in O(log n) time. I need to compute this for lists with up to a few million elements.
Since your container is sorted, you can use std::max_element on a range that ends at the first element greater than or equal to your max; use std::find_if with a lambda, or std::lower_bound, to obtain that range:
#include <algorithm>
#include <iostream>
#include <set>

int main()
{
    std::set<int> s{ 3, 1, -14, 1, 5, 9 };
    std::set<int>::iterator result;
    int max_value = 6;
    // search only up to the first element >= max_value
    result = std::max_element(std::begin(s),
        std::find_if(std::begin(s), std::end(s), [&](int i) { return i >= max_value; }));
    std::cout << "max element is: " << *result;
}
Output:
max element is: 5
Or with std::lower_bound:
#include <algorithm>
#include <iostream>
#include <set>

int main()
{
    std::set<int> s{ 3, 1, -14, 1, 5, 9 };
    std::set<int>::iterator result;
    int max_value = 6;
    result = std::max_element(std::begin(s),
        std::lower_bound(std::begin(s), std::end(s), max_value));
    std::cout << "max element is: " << *result;
}
You can just use
lower_bound(container.begin(), container.end(), currentElement);
Now if that is different from container.begin(), then there is an element smaller than your current one; just subtract one and you get it. If you are sure such an element always exists, just do
lower_bound(container.begin(), container.end(), currentElement) - 1;
EDIT: Of course, subtracting 1 like this requires a random-access iterator; with a merely bidirectional iterator, use std::prev instead.
Use prev() in combination with lower_bound()
mycontainer.lower_bound(int_val) returns an iterator to the first element that is equal to or greater than int_val
*prev(mycontainer.lower_bound(int_val)) returns the value just before int_val in the container
As a defensive check, first validate that it's not at the beginning of the container:
int result = -1, target = 15;
set<int> _container;
auto _iterator = _container.lower_bound(target);
if (_iterator != _container.begin())
    result = *std::prev(_iterator);
Assuming that you have your data in a set s of integers, you can find the largest element smaller than x as follows:
auto it = s.lower_bound( x ); // s.lower_bound has a complexity of O(logn)
if( it == s.begin() )
{
std::cout << "No element found" << "\n";
}
else
{
--it;
std::cout << *it << "\n";
}
lower_bound essentially returns an iterator it pointing to the smallest element which is >= x. Therefore, its predecessor --it points to the largest element which is < x. The complexity of this approach is O(log n).
Over the last week I have been programming some 2-dimensional convolutions with FFTW, by transforming both signals to the frequency domain, multiplying them, and then transforming back.
Surprisingly, I am getting the correct result only when the input size is less than a fixed number!
I am posting some working code, in which I take simple constant matrices: value 5 for the input and 1 for the filter in the spatial domain. This way, the result of convolving them should be a matrix holding the average of the first matrix's values, i.e. 5, since it is constant. This is indeed the output when I vary the sizes up to h = 215, w = 215; but if I set h = 216, w = 216 or greater, the output gets corrupted!! I would really appreciate some clues about where I could be making a mistake. Thank you very much!
#include <fftw3.h>
#include <iostream>

int main(int argc, char* argv[]) {
    int h = 215, w = 215;
    // Input and filter are declared and initialized here
    float *in = (float*) fftwf_malloc(sizeof(float) * w * h);
    float *identity = (float*) fftwf_malloc(sizeof(float) * w * h);
    for (int i = 0; i < w * h; i++) {
        in[i] = 5;
        identity[i] = 1;
    }
    // Declare two forward plans and one backward
    fftwf_plan plan1, plan2, plan3;
    // Allocate for complex output of both transforms
    fftwf_complex *inTrans = (fftwf_complex*) fftwf_malloc(sizeof(fftwf_complex) * h * (w/2+1));
    fftwf_complex *identityTrans = (fftwf_complex*) fftwf_malloc(sizeof(fftwf_complex) * h * (w/2+1));
    // Initialize forward plans
    plan1 = fftwf_plan_dft_r2c_2d(h, w, in, inTrans, FFTW_ESTIMATE);
    plan2 = fftwf_plan_dft_r2c_2d(h, w, identity, identityTrans, FFTW_ESTIMATE);
    // Execute them
    fftwf_execute(plan1);
    fftwf_execute(plan2);
    // Multiply in the frequency domain. Theoretically there is no need to multiply
    // the imaginary parts: since the signals are real and symmetric, their transforms
    // are also real and identityTrans[i][1] = 0, but I leave this here for a more
    // generic implementation.
    for (int i = 0; i < (w/2+1) * h; i++) {
        // use a temporary so the real part isn't overwritten before the
        // imaginary part is computed from it
        float re = inTrans[i][0]*identityTrans[i][0] - inTrans[i][1]*identityTrans[i][1];
        inTrans[i][1] = inTrans[i][0]*identityTrans[i][1] + inTrans[i][1]*identityTrans[i][0];
        inTrans[i][0] = re;
    }
    // Execute the inverse transform, storing the result in identity, where the identity filter was.
    plan3 = fftwf_plan_dft_c2r_2d(h, w, inTrans, identity, FFTW_ESTIMATE);
    fftwf_execute(plan3);
    // Output the first results of convolution(in, identity) to see if they are the average of in.
    for (int i = 0; i < 5; i++) {
        for (int j = 0; j < 5; j++) {
            std::cout << "After convolution, component (" << i << "," << j << ") is "
                      << identity[j+i*w]/(w*h*w*h) << std::endl;
        }
    }
    std::cout << std::endl;
    // Compute the average of the data
    float sum = 0.0;
    for (int i = 0; i < w * h; i++)
        sum += in[i];
    std::cout << "Mean of input was " << (float)sum / (w*h) << std::endl;
    std::cout << std::endl;
    fftwf_destroy_plan(plan1);
    fftwf_destroy_plan(plan2);
    fftwf_destroy_plan(plan3);
    return 0;
}
Your problem has nothing to do with FFTW! It comes from this line:
std::cout<<"After convolution, component (" << i <<","<< j << ") is " << identity[j+i*w]/(w*h*w*h) << endl;
If w = 216 and h = 216, then w*h*w*h = 2,176,782,336. The upper limit for a signed 32-bit integer is 2,147,483,647. You are facing an overflow...
Solution is to cast the denominator to float.
std::cout<<"After convolution, component (" << i <<","<< j << ") is " << identity[j+i*w]/(((float)w)*h*w*h) << endl;
The next trouble that you are going to face is this one :
float sum=0.0;
for(int i=0; i<w*h;i++)
sum+=in[i];
Remember that a float has only about 7 significant decimal digits. If w = h = 4000, the computed average will be lower than the real one. Use a double, or write two loops: sum into a local accumulator in the inner loop (localsum), then add it to the total in the outer loop (sum += localsum)!
Bye,
Francis