I am trying to sort a lot of points by distance to a specific point.
So I decided to use std::sort, but I can't find a way to give the comparison function a third argument.
I am imagining something like Python's lambda pnt1, pnt2: compare(pnt1, pnt2, myPoint), but I can't find the C++ equivalent.
Something like:
int distance(Point const&, Point const&); // Returns distance between points.
Point p{x, y};
std::vector<Point> points{...};
std::sort(points.begin(), points.end(), [p](Point const& a, Point const& b) {
return distance(a, p) < distance(b, p);
});
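Filled out into a compilable sketch (the Point struct and the squared-distance helper are assumptions here; squared distance sorts identically to true distance, so the original distance function would work the same way):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

struct Point { int x, y; };

// Squared Euclidean distance: monotonic in the true distance,
// so sorting by it gives the same order and avoids sqrt.
int distSq(Point const& a, Point const& b) {
    int dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

// Sort a copy of the points by distance to p; the lambda captures p by value.
std::vector<Point> sortByDistanceTo(std::vector<Point> points, Point p) {
    std::sort(points.begin(), points.end(),
              [p](Point const& a, Point const& b) {
                  return distSq(a, p) < distSq(b, p);
              });
    return points;
}
```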
The question is as follows:
Find an algorithm that gets a pointer to a binary tree (to the beginning of it) and returns the number of leaves that are in an even depth (level).
In this example, the algorithm will return 2 because we won't count the leaf in level 1 (since 1 is not even).
I guess I need a recursive algorithm. It's pretty easy if I pass two parameters to the function (a pointer to a tree and the level).
I'm wondering if I can solve it with passing the pointer only, without the level.
Consider a function f which recursively descends your tree. You have to differentiate three cases:
Your current node has no children and its depth is even. You return 1.
Your current node has no children and its depth is odd. You return 0.
Your current node has children. You return the sum of all recursive calls of f on these children.
You have to define f on your own.
And no, it is not possible to define f with only one parameter: you have to keep track of the current node as well as the current depth. Recursive algorithms, by their very nature, have no idea where they were called from. You can, of course (though it is not recommended), keep the depth in a static variable, as long as you do not parallelize f.
Also, you can "overload" f so that it takes only one parameter and calls the two-parameter f with the depth set to 0.
You can, indeed, solve it using only one parameter. However, in that case you need two little helper functions:
typedef struct TreeNode
{
int val;
struct TreeNode *left;
struct TreeNode *right;
} TreeNode;
int countForOdd(TreeNode*);
int countForEven(TreeNode*);
int count(TreeNode*);
//If the TreeNode passed as parameter is at an odd level, call this function
int countForOdd(TreeNode *node)
{
if(!node) return 0;
return countForEven(node->left)
+ countForEven(node->right);
}
//If the TreeNode passed as parameter is at an even level, call this function
int countForEven(TreeNode *node)
{
if(!node) return 0;
if(!node->left && !node->right) return 1; //count only leaves, not inner nodes
return countForOdd(node->left)
+ countForOdd(node->right);
}
//And finally, our wrapper for the root (at level 0, which is even) is:
int count(TreeNode* root)
{
return countForEven(root);
}
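The same mutual-recursion idea as a self-contained sketch that can be checked against the question's example (root assumed to be at depth 0, an even level; the function names here are mine):

```cpp
#include <cassert>

struct Node { Node *left; Node *right; };

int leavesAtOdd(const Node* n);  // forward declaration for the mutual recursion

// Leaves at even depth, for a node that itself sits at an even depth.
int leavesAtEven(const Node* n) {
    if (!n) return 0;
    if (!n->left && !n->right) return 1;  // leaf at even depth: count it
    return leavesAtOdd(n->left) + leavesAtOdd(n->right);
}

// Leaves at even depth, for a node that itself sits at an odd depth.
// A leaf here contributes 0, since both null children return 0.
int leavesAtOdd(const Node* n) {
    if (!n) return 0;
    return leavesAtEven(n->left) + leavesAtEven(n->right);
}
```

On a tree with one leaf at depth 1 and two leaves at depth 2 (like the question's example), leavesAtEven(root) returns 2.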
So say I have a list of "whitelist pairs" like so:
a | b
a | c
f | g
And say I want to write a method like so:
function checkIfInWhitelist(itemOne, itemTwo) {
...
}
Here's the desired functionality:
checkIfInWhitelist(a, b) // true
checkIfInWhitelist(b, a) // true
checkIfInWhitelist(b, c) // false
checkIfInWhitelist(g, f) // true
(Basically I want to check if the pair exists in the whitelist)
What's the best and most efficient way to do this?
I was thinking of a dictionary where the keys are anything that appears in the whitelist and the values are lists of the things paired with each key.
For instance, the three whitelist pairs above would map to:
a: [b, c]
b: [a]
f: [g]
g: [f]
Then, checkIfInWhitelist would be implemented like so:
function checkIfInWhitelist(itemOne, itemTwo) {
return map.contains(itemOne) && map[itemOne].contains(itemTwo)
}
Is there a better way to do this?
If you have a reasonable implementation of hash which works on std::pair (such as the one in Boost), and the objects have a fast total order method, then you can do it with a single hash table without artificially doubling the size of the table. Just use a std::unordered_set and normalize each pair into non-decreasing order before inserting it. (That is, if a < b, insert std::make_pair(a, b); otherwise insert std::make_pair(b, a).)
Very rough code, missing lots of boilerplate. I should have used perfect forwarding. Not tested.
template<typename T> struct PairHash {
std::unordered_set<std::pair<T, T>> the_hash_;
using iterator = typename std::unordered_set<std::pair<T, T>>::iterator;
std::pair<iterator, bool> insert(const T& a, const T& b) {
return the_hash_.insert(a < b ? std::make_pair(a, b)
: std::make_pair(b, a));
}
bool check(const T& a, const T& b) {
return the_hash_.end() != the_hash_.find(
a < b ? std::make_pair(a, b)
: std::make_pair(b, a));
}
};
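Filling in the missing boilerplate, a compilable version might look like the following; the hash combiner is a stand-in assumption (Boost's hash_combine or a std::hash specialization for std::pair would serve the same purpose), and the class name is mine:

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_set>
#include <utility>

// Minimal hash for std::pair<T, T>; a stand-in for boost::hash<std::pair>.
struct PairHasher {
    template <typename T>
    std::size_t operator()(const std::pair<T, T>& p) const {
        std::size_t h1 = std::hash<T>{}(p.first);
        std::size_t h2 = std::hash<T>{}(p.second);
        return h1 ^ (h2 + 0x9e3779b9 + (h1 << 6) + (h1 >> 2)); // boost-style combine
    }
};

template <typename T>
class UnorderedPairSet {
    std::unordered_set<std::pair<T, T>, PairHasher> set_;
    // Normalize so that (a, b) and (b, a) produce the same key.
    static std::pair<T, T> norm(const T& a, const T& b) {
        return a < b ? std::make_pair(a, b) : std::make_pair(b, a);
    }
public:
    void insert(const T& a, const T& b) { set_.insert(norm(a, b)); }
    bool check(const T& a, const T& b) const {
        return set_.find(norm(a, b)) != set_.end();
    }
};
```

With the pairs a|b, a|c, f|g inserted, check("b", "a") and check("g", "f") succeed while check("b", "c") fails.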
A minimal way to do this:
Keep a hash map whose key is exactly what you want to check.
Since you want to check an unordered pair, the key must represent an unordered pair.
Possible solutions with unordered data:
Solution A: the key is a set of the two values.
Solution B: keep the two values in one fixed (but otherwise meaningless) order. For example, both (x, y) and (y, x) map to the key X/Y:
you simply always choose ascending (or descending) order: X < Y.
Solution A takes more space and more time (you have to compare sets).
Solution B needs a little preprocessing (ordering a and b), but everything else is faster.
For checking, you preprocess the query the same way: (a, b) becomes (a, b) if a < b, or (b, a) if b < a.
Solution C, storing both orders:
It takes longer to preprocess, but checks are (a little) faster:
keep both (a, b) and (b, a) in your hash map, for example a HashMap of lists (or some Pair implementation).
The check is then direct, but the preprocessing stores and hashes every pair twice.
So it depends on how many checks you will run afterwards.
Since comparing and swapping a and b is very fast, I would recommend Solution B.
And if you know the type of your data (alphabetic strings, for example), you can store the couple as a single string, like a-b.
Your solution is not optimal:
it combines the bad effects of my Solutions C and A (two nested hash collections, plus duplicated data),
and it forgets c: [a].
Hope it helps.
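Solution B with string keys, as in the a-b suggestion above, can be sketched like this (the helper name is mine, and the '-' delimiter assumes the values never contain it themselves):

```cpp
#include <cassert>
#include <string>
#include <unordered_set>

// One canonical string key per unordered pair {a, b}.
// Assumes the values never contain the '-' delimiter.
std::string pairKey(const std::string& a, const std::string& b) {
    return a < b ? a + "-" + b : b + "-" + a;
}
```

Insert pairKey("a", "b") into a std::unordered_set<std::string>; a later lookup of pairKey("b", "a") builds the same key "a-b" and finds it.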
You can't do better than O(1), so just use a hash implementation that gives you O(1) lookup time on average (C++'s std::unordered_map, for example). Assuming you are okay with the memory hit, this should be the most performant solution (performant in terms of execution time, not necessarily memory overhead).
I have some 2d points in space and I need to find the point [xmin, ymax]. I can do it in 2 passes using x and then y, but I want to do it in a single pass.
Is there a way I can combine these values into a single float so I can find the right point by a single sort?
I thought of using x+y, but I doubt that's reliable: for example, [5, 2] and [2, 5] give the same sum, yet one of them should have priority over the other.
Any ideas?
You shouldn't sort your point list to find maximum and minimum values, since that takes O(n log n) time. Instead, iterate through the list once, keeping a reference to the highest and lowest values you have found. This takes O(n) time.
min_x = myPoints[0].x;
max_y = myPoints[0].y;
for(Point p in myPoints){
if (p.x < min_x){min_x = p.x;}
if (p.y > max_y){max_y = p.y;}
}
Edit: from Wikipedia's Graham scan article:
The first step in this algorithm is to find the point with the lowest y-coordinate. If the lowest y-coordinate exists in more than one point in the set, the point with the lowest x-coordinate out of the candidates should be chosen.
So, finding the min x and max y separately is inappropriate, because the point you're looking for might not have both. We can modify the code from above, to use these new criteria.
candidate = myPoints[0];
for (Point p in myPoints){
if (p.y < candidate.y or (p.y == candidate.y and p.x < candidate.x)){
candidate = p;
}
}
It may be necessary to change some of these "less than" signs to "greater than" signs, depending on your definition of "lowest". In some coordinate systems, (0,0) is in the upper-left corner of the graph, and Y grows as you go down the screen; in that case, you ought to use if (p.y > candidate.y) instead.
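The pseudocode above translates directly to C++; a minimal sketch (the Pt struct is an assumption, and the tie-break follows the Graham-scan rule quoted above):

```cpp
#include <cassert>
#include <vector>

struct Pt { int x, y; };

// Single pass: keep the best candidate under "lowest y, then lowest x".
Pt lowestPoint(const std::vector<Pt>& pts) {
    Pt candidate = pts[0];
    for (const Pt& p : pts) {
        if (p.y < candidate.y || (p.y == candidate.y && p.x < candidate.x))
            candidate = p;
    }
    return candidate;
}
```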
If you are trying to find the minimum point according to some variant of the lexicographic order (or even some other kind of order over 2D points), then simply traverse your set of points once but use a custom comparison to find / keep the minimum. Here is an example in C++, min_element comes from the STL and is just a simple loop (http://www.cplusplus.com/reference/algorithm/min_element/):
#include <algorithm>
#include <iostream>
using namespace std;
struct Point {
int x, y;
};
struct PointCompare {
bool operator()(const Point& p1, const Point& p2) const {
if (p1.x < p2.x)
return true;
if (p1.x == p2.x)
return p1.y > p2.y; // Your order: xmin then ymax?
//return p1.y < p2.y; // Standard lexicographic order
return false;
}
};
int main()
{
// + 3
// + 4
// + 0
// + 2
// + 1
const Point points[] = {
{ 1, 1 }, // 0
{ 2,-1 }, // 1
{ 0, 0 }, // 2
{ 1, 3 }, // 3
{ 0, 2 }, // 4
};
const Point* first = points;
const Point* last = points + sizeof(points) / sizeof(Point);
const Point* it = min_element(first, last, PointCompare());
cout << it->x << ", " << it->y << endl;
}
It looks like you want to find a max-min point: a point with the maximal y-coordinate among the points with the minimal x-coordinate, right?
If yes, you can store all your points in an STL multimap, mapping x-coordinate to y-coordinate. This map is automatically sorted, and there is a chance that the point with the minimal x-coordinate is the only one with that key. If not, you can scan all points with the same (minimal) x-coordinate to find the one with the maximal y-coordinate. That is still two passes, but the second pass should statistically be very short.
If you really want a single-pass solution, you can store your points in an STL map, mapping each x-coordinate to a set of y-coordinates. It requires more work, but in the end you will have your point: its x-coordinate is at the beginning of the map, and its y-coordinate is at the end of the set corresponding to that x-coordinate.
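A sketch of that second idea (function name mine): std::map and std::set keep everything sorted automatically, so after one insertion pass the answer can be read straight off the containers.

```cpp
#include <cassert>
#include <map>
#include <set>
#include <utility>
#include <vector>

// One pass to fill the containers; their sorted order then yields [xmin, ymax].
std::pair<int, int> xminYmax(const std::vector<std::pair<int, int>>& pts) {
    std::map<int, std::set<int>> byX;
    for (const auto& p : pts)
        byX[p.first].insert(p.second);               // map x -> set of y values
    const auto& entry = *byX.begin();                // smallest x is first in the map
    return { entry.first, *entry.second.rbegin() };  // largest y for that x
}
```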
1. You can sort by the combined value x + (MAX * y) or y + (MAX * x),
where MAX is a big enough value: MAX >= abs(max(x) - min(x)) + 1 (or the y range, respectively).
2. If you have enough memory and need raw speed,
you can skip sorting entirely and use the sort value directly as an address:
x, y values must be shifted to be non-negative,
consecutive x, y values must differ by >= 1.0 (ideally exactly 1.0) so they can be used as addresses,
and no duplicate points are allowed (or needed) after sorting.
Create a big enough array pnt[] of points and mark every entry as unused (for example x = -1, y = -1),
then place the input points pnt0[] like this:
for (i = 0; i < points; i++)
{
x = pnt0[i].x;
y = pnt0[i].y;
pnt[x + (MAX * y)] = pnt0[i];
}
If needed, you can pack the pnt[] array afterwards to remove unused entries, but that can be considered another pass;
alternatively, remember the min and max used addresses and cycle only between them.
I'm trying to implement an "intelligent scissors" tool for interactive image segmentation. For that, I have to create a directed graph from an image where each vertex represents a single pixel. Each vertex is connected to each of its neighbours by two edges: one outgoing and one incoming. This is because the cost of an edge (a, b) may differ from the cost of (b, a). I'm using images with a size of 512*512 pixels, so I need to create a graph with 262144 vertices and 2091012 edges. Currently, I'm using the following graph:
typedef property<vertex_index_t, int,
property<vertex_distance_t, double,
property<x_t, int,
property<y_t, int
>>>> VertexProperty;
typedef property<edge_weight_t, double> EdgeProperty;
// define MyGraph
typedef adjacency_list<
vecS, // container used for the out-edges (list)
vecS, // container used for the vertices (vector)
directedS, // directed edges (not sure if this is the right choice for incidenceGraph)
VertexProperty,
EdgeProperty
> MyGraph;
I'm using an additional class Graph (sorry for the uninspired naming) which handles the graph:
class Graph
{
private:
MyGraph *graph;
property_map<MyGraph, vertex_index_t>::type indexmap;
property_map<MyGraph, vertex_distance_t>::type distancemap;
property_map<MyGraph, edge_weight_t>::type weightmap;
property_map<MyGraph, x_t>::type xmap;
property_map<MyGraph, y_t>::type ymap;
std::vector<MyGraph::vertex_descriptor> predecessors;
public:
Graph();
~Graph();
};
Creating a new graph with 262144 vertices is pretty fast, but inserting the edges takes up to 10 seconds, which is way too slow for the desired application. Right now, I'm inserting the edges the following way:
tie(vertexIt, vertexEnd) = vertices(*graph);
for(; vertexIt != vertexEnd; vertexIt++){
vertexID = *vertexIt;
x = vertexID % 512;
y = (vertexID - x) / 512;
xmap[vertexID] = x;
ymap[vertexID] = y;
if(y > 0){
if(x > 0){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y-1)+(x-1)], *graph); // upper left neighbour
}
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y-1)+(x)], *graph); // upper
if(x < 511){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y-1)+(x+1)], *graph); // upper right
}
}
if(x < 511){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y)+(x+1)], *graph); // right
}
if(y < 511){
if(x > 0){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y+1)+(x-1)], *graph); // lower left
}
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y+1)+(x)], *graph); // lower
if(x < 511){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y+1)+(x+1)], *graph); // lower right
}
}
if(x > 0){
tie(edgeID, ok) = add_edge(vertexID, indexmap[IRES2D*(y)+(x-1)], *graph); // left
}
}
Is there anything I can do to improve the speed of the program? I'm using Microsoft Visual C++ 2010 Express in release mode with optimization (as recommended by Boost). I thought I could use a listS container for the vertices or edges, but the vertices are no problem, and if I use listS for the edges, it gets even slower.
adjacency_list is very general purpose; unfortunately it's never going to be as efficient as a solution exploiting the regularity of your particular use-case could be. BGL isn't magic.
Your best bet is probably to come up with the efficient graph representation you'd use in the absence of BGL (hint: for a graph of an image's neighbouring pixels, this is not going to explicitly allocate all those node and edge objects) and then fit BGL to it (example), or equivalently just directly implement a counterpart to the existing adjacency_list / adjacency_matrix templates (concept guidelines) tuned to the regularities of your system.
By an optimised representation, I of course mean one in which you don't actually store all the nodes and edges explicitly but just have some way of iterating over enumerations of the implicit nodes and edges arising from the fact that the image is a particular size. The only thing you should really need to store is an array of edge weights.
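For illustration, here is a minimal sketch of such an implicit representation (names and layout are my own, and it is deliberately not BGL-conformant): for an 8-connected grid the only real storage is the per-edge weight array; vertices and edges are just arithmetic over pixel coordinates.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Implicit 8-connected grid graph over a W x H image. Vertices are pixel
// indices y * W + x; edges are enumerated on the fly, so only the directed
// per-edge weights (8 slots per vertex, one per direction) are stored.
struct GridGraph {
    int W, H;
    std::vector<double> weight;  // size W * H * 8
    static constexpr int DX[8] = {-1, 0, 1, -1, 1, -1, 0, 1};
    static constexpr int DY[8] = {-1, -1, -1, 0, 0, 1, 1, 1};

    GridGraph(int w, int h)
        : W(w), H(h), weight(static_cast<std::size_t>(w) * h * 8, 0.0) {}

    // Visit all out-neighbours of vertex v without materializing edge objects.
    template <typename F>
    void forEachNeighbour(int v, F f) const {
        int x = v % W, y = v / W;
        for (int d = 0; d < 8; ++d) {
            int nx = x + DX[d], ny = y + DY[d];
            if (nx >= 0 && nx < W && ny >= 0 && ny < H)
                f(ny * W + nx, weight[static_cast<std::size_t>(v) * 8 + d]);
        }
    }
};
```

Construction is just one vector allocation, so the 10-second edge-insertion phase disappears entirely; a shortest-path search then calls forEachNeighbour instead of iterating stored out-edges.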
I have some function,
int somefunction(int x) { // parameters here, let's say int x
return x*x + 2*x + 3; // or something else, it does not matter
}
How do I find the derivative of this function? If I have
int f(int x) {
return sin(x);
}
after derivative it must return cos(x).
You can approximate the derivative by looking at the gradient over a small interval. Eg
const double DELTA=0.0001;
double dfbydx(int x) {
return (f(x+DELTA) - f(x)) / DELTA;
}
Depending on where you're evaluating the function, you might get better results from (f(x+DELTA) - f(x-DELTA)) / (2*DELTA) instead.
(I assume 'int' in your question was a typo. If they really are using integers you might have problems with precision this way.)
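Both variants as a compilable sketch, using sin as the example f so the exact derivative cos is available to compare against:

```cpp
#include <cassert>
#include <cmath>

const double DELTA = 0.0001;

double f(double x) { return std::sin(x); }

// Forward difference: error on the order of DELTA.
double dfbydx_forward(double x) {
    return (f(x + DELTA) - f(x)) / DELTA;
}

// Central difference: error on the order of DELTA^2 -- note the
// parentheses around (2 * DELTA), which are easy to get wrong.
double dfbydx_central(double x) {
    return (f(x + DELTA) - f(x - DELTA)) / (2 * DELTA);
}
```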
You can approximate the derivative of almost any function numerically using one of many standard techniques (and similar numerical machinery exists for integrals and ordinary differential equations).
Look at: Another question
But to get the result back as an actual function definition, like cos(x), you need a computer algebra system such as Maple, Mathematica, Sage, or SymPy.