Can I sort vector of tuple of references? - sorting

When the following code:
std::vector<std::tuple<int&>> v;
int a = 5; v.emplace_back(a);
int b = 4; v.emplace_back(b);
int c = 3; v.emplace_back(c);
int d = 2; v.emplace_back(d);
int e = 1; v.emplace_back(e);
std::sort(std::begin(v), std::end(v));
is compiled with gcc/libstdc++ vs clang/libc++, the resulting binaries give different results.
For gcc/libstdc++, one element is copied over all the other references:
5 4 3 2 1
5 5 5 5 5
At first I thought that clang/libc++ behaves as expected, but it only works for up to 5 elements in a vector (because there is a special case for small inputs).
5 4 3 2 1
1 2 3 4 5
When passing more elements, the result is similar to gcc's.
5 4 3 2 1 0
3 4 5 5 5 5
So is it valid to use std::sort on a container of tuples of references (i.e. made with std::tie, to sort by a subset of struct members)?
If not, should I expect any warnings?

So is it valid to use std::sort on a container of tuples of references (i.e. made with std::tie, to sort by a subset of struct members)? If not, should I expect any warnings?
No, and no. One of the type requirements on std::sort() is that:
The type of dereferenced RandomIt must meet the requirements of MoveAssignable and MoveConstructible.
where MoveAssignable requires, for the expression t = rv, that:
The value of t is equivalent to the value of rv before the assignment.
But std::tuple<int&> isn't MoveAssignable because int& isn't MoveAssignable. If you simply have:
int& ra = a;
int& rb = b;
ra = std::move(rb);
The value of ra isn't equivalent to the prior value of rb. ra still refers to a, it does not change to refer to b - what actually changed was the value of a.
Since our type doesn't meet the precondition of std::sort(), the result of the std::sort() call is just undefined behavior.
Note that you could sort a std::vector<std::tuple<std::reference_wrapper<int>>> though, because std::reference_wrapper is MoveAssignable.
Note also that this is reminiscent of not being able to sort a container of auto_ptr, per an old Herb Sutter article.
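A minimal sketch of that reference_wrapper workaround (the setup mirrors the question's example; this code is illustrative and not from the original answer):

#include <algorithm>
#include <functional>
#include <iostream>
#include <tuple>
#include <vector>

int main() {
    int a = 5, b = 4, c = 3, d = 2, e = 1;
    std::vector<std::tuple<std::reference_wrapper<int>>> v;
    v.emplace_back(a);
    v.emplace_back(b);
    v.emplace_back(c);
    v.emplace_back(d);
    v.emplace_back(e);

    // std::reference_wrapper is MoveAssignable (assignment rebinds the
    // wrapper), so the sort reorders the wrappers without touching a..e.
    std::sort(std::begin(v), std::end(v));

    for (const auto& t : v)
        std::cout << std::get<0>(t).get() << ' ';   // prints: 1 2 3 4 5
    std::cout << '\n';
}

Here the sort rearranges which int each wrapper refers to, while the original variables a..e keep their values.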

Related

How to print 7 when user input 3 without using any condition

So, here is the question: when the user inputs 3 it should print 7 as output, and when the user inputs 7 it should print 3 as output, but without using any condition or loop.
Here is what I have done. The problem is I'm using 2 numbers as input, but I have to do it with one input.
void main() {
  print(mod(7, 3));
}

int mod(int num1, int num2) {
  // 3*2 mod 7
  // 7*2 mod 3
  int answer = (num1 * 2) % num2 + 1;
  return answer;
}
Please help me figure out how to do this. You can use any language to solve this.
Let's look at the binary representations of 3 and 7: 3 is 0...0011 and 7 is 0...0111, which means that the only bit that differs between 3 and 7 is the 3rd bit. Therefore, we can transform 3 into 7 and vice-versa simply by "flipping" that 3rd bit.
"Flipping a bit" can be done with an "exclusive-or" operation. Xor is a very useful operation (one of my favorites): one way to look at it is to consider that you have a data input and a control input; if the control input is 0, then the data input is left unchanged, but if the control input is 1, then the data input is flipped (you can draw the truth table of xor to convince yourself of that interpretation). Of course, it's only one way to view the xor operation, but it's particularly useful in our case: we want to flip the 3rd bit and leave all of the other bits unchanged.
C offers a "bitwise xor" operation (A ^ B), which xors each bit of A with the corresponding bit of B. Therefore, what we want is a number, used as the control input, whose 3rd bit is 1 and all other bits are 0: this number is 4.
Finally, we can write a function that converts 3 into 7 and vice-versa by simply xoring the input with 4.
#include <stdio.h>

/* Flips the 3rd bit: 3 (011) becomes 7 (111), and 7 becomes 3. */
int flip(int num1) {
    return num1 ^ 4;
}

int main(void) {
    printf("%d\n", flip(3));
    return 0;
}

Need to arrange sequence of events

I was asked this question in a coding interview. I tried a hashmap, a heap, and a queue, but nothing worked. I want to understand what I missed; can someone tell me how to approach this problem?
DESCRIPTION: It's the year 2050. Teleportation has been possible since 2045. You have just bought a teleportation machine for yourself. However, operating the machine is not child's play. Before you can teleport anywhere, you need to make some arrangements on the machine. There are N pieces of equipment in the machine that need to be switched on, and there are M dependencies between the various pieces of equipment. You need to figure out the order in which you should switch on the equipment so as to make the machine start.
Input
• First-line will contain 2 numbers:
N representing number of equipment in the machine.
M representing number of equipment dependencies.
• The next M lines each contain 2 integers A and B representing a dependency from A to B (that means A must be switched on before B can be started).
Output
• On a single line, print the order(separated by space) in which the machines should be switched on to do teleportation.
Constraints
• 1 <= N <= 500
• 1 <= M <= N(N-1)/2
• 1 <= A <= 500
• 1 < B <= 500
Sample Test Case
Sample Input:
6 6
1 2
1 3
2 5
3 4
5 6
4 6
Sample Output
1 3 4 2 5 6
Explanation
If you follow the pattern, you will observe that to switch on equipment 6, you need to switch on 4 and 5. To switch on equipment 4 and 5, we need to switch on equipment 3 and 2 respectively. Finally, to switch on equipment 3 and 2, we need to switch on equipment 1.
It's a classic problem called topological sort.
Count, for each machine, the number of machines that must be turned on before it can start.
With your case:
cnt[1] = 0
cnt[2] = 1 // machine 1
cnt[3] = 1 // machine 1
cnt[4] = 1 // machine 3
cnt[5] = 1 // machine 2
cnt[6] = 2 // machine 4, 5
If cnt equals 0, the machine can be turned on.
Something like this should work:
queue<int> q;
for (int i = 1; i <= n; i++)
    if (cnt[i] == 0) q.push(i);
while (!q.empty()) {
    int id = q.front(); q.pop();                 // Turn on machine id
    cout << id << " ";
    for (int i = 0; i < dep[id].size(); i++) {   // Machines that depend on id
        cnt[dep[id][i]]--;
        if (cnt[dep[id][i]] == 0)
            q.push(dep[id][i]);
    }
}
The problem can be rephrased to:
Input: an unweighted, directed acyclic graph G(V, E)
Output: a topological ordering of G.
Possible Algorithm
The process needed is called Topological Sorting. This reference lists a few alternative algorithms.
One way is to use a depth-first search and mark nodes as visited as you encounter them. Do not visit those again. When backtracking from the recursion, prepend the current node to the output list. This ensures that all dependent nodes have already been added to that list, and this "ancestor" precedes them all.
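For illustration, a minimal DFS-based sketch (the adjacency-list layout and variable names are assumptions, not from the original answer), run on the sample input; appending nodes in post-order and reversing at the end has the same effect as prepending:

#include <iostream>
#include <vector>

// dep[a] holds the machines that can only start after machine a is on.
void dfs(int node, const std::vector<std::vector<int>>& dep,
         std::vector<bool>& visited, std::vector<int>& order) {
    visited[node] = true;
    for (int next : dep[node])
        if (!visited[next])
            dfs(next, dep, visited, order);
    order.push_back(node);   // appended post-order; reversing later "prepends"
}

int main() {
    int n = 6;
    std::vector<std::vector<int>> dep(n + 1);
    // Edges from the sample input: A -> B means A must be on before B.
    int edges[][2] = {{1,2},{1,3},{2,5},{3,4},{5,6},{4,6}};
    for (auto& e : edges) dep[e[0]].push_back(e[1]);

    std::vector<bool> visited(n + 1, false);
    std::vector<int> order;
    for (int i = 1; i <= n; ++i)
        if (!visited[i]) dfs(i, dep, visited, order);

    // Reverse the post-order list to get a valid switch-on order.
    for (auto it = order.rbegin(); it != order.rend(); ++it)
        std::cout << *it << ' ';   // prints: 1 3 4 2 5 6 (one valid order)
    std::cout << '\n';
}

For this input the output happens to match the sample output, but any valid topological ordering would be acceptable.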

Quick sort - Not a stable sort

A stable sort guarantees that equal keys do NOT get past each other after sorting.
Consider the duplicate key 4 at array indices 8 & 9 in the sequence below,
a = [5 20 19 18 17 8 4 5 4 4] where pivot = 0, i = 1, j = 9
Partition logic says,
The i pointer moves left to right. Move i as long as the value a[i] is ≤ a[pivot]. swap(a[i], a[j])
The j pointer moves right to left. Move j as long as the value a[j] is ≥ a[pivot]. swap(a[i], a[j])
After following this procedure two times,
a = [5 4 19 18 17 8 4 5 4 20] Swap done at i = 1 & j = 9.
a = [5 4 19 18 17 8 4 5 4 20] Stops at i = 2 & j = 8
a = [5 4 4 18 17 8 4 5 19 20] Swap done at i = 2 & j = 8
My understanding is that, since the duplicate 4 keys lose their relative order after these two swaps, Quicksort is not a stable sort.
Question:
Is this the reason Quicksort is not stable? If yes, is there an alternative partitioning approach that maintains the order of key 4 in the above example?
There's nothing in the definition of Quicksort per se that makes it either stable or unstable. It can be either.
The most common implementation of Quicksort on arrays involves partitioning via swaps between a pair of pointers, one progressing from end to beginning and the other from beginning to end. This does produce an unstable Quicksort.
While this method of partitioning is certainly common, it's not a requirement for the algorithm to be a Quicksort. It's just a method that's simple and common when applying Quicksort to an array.
On the other hand, consider doing a Quicksort on a singly linked list. In this case, you typically do the partitioning by creating two separate linked lists, one containing the elements smaller than the pivot value, the other containing the elements larger than or equal to it. Since you always traverse the list from beginning to end (there aren't many other reasonable choices with a singly linked list), and as long as you add each element to the end of its sub-list, the sub-lists you create contain equal keys in their original order. Thus, the result is a stable sort. On the other hand, if you don't care about stability, you can splice elements onto the beginnings of the sub-lists (slightly easier to do with constant complexity). In this case, the sort will (again) be unstable.
The actual mechanics of partitioning a linked list are pretty trivial, as long as you don't get too fancy in choosing the pivot.
node *list1 = dummy_node1;   // dummy head of the "less than pivot" sub-list
node *add1 = list1;          // tail of that sub-list
node *list2 = dummy_node2;   // dummy head of the "greater or equal" sub-list
node *add2 = list2;          // tail of that sub-list
T pivot = input->data;       // easiest pivot value to choose

for (node *current = input; current != nullptr; current = current->next) {
    if (current->data < pivot) {
        add1->next = current;    // append to the "less than" list
        add1 = add1->next;
    } else {
        add2->next = current;    // append to the "greater or equal" list
        add2 = add2->next;
    }
}
add1->next = nullptr;
add2->next = nullptr;
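To make this concrete, here is a small self-contained sketch of a stable linked-list Quicksort along the lines described above (the node layout, helper names, and test data are illustrative assumptions, not taken from the original answer):

#include <iostream>

struct node {
    int key;      // sort key
    char tag;     // payload used to observe stability
    node* next;
};

// Append n to the tail of (head, tail); keeps original relative order.
static void append(node*& head, node*& tail, node* n) {
    n->next = nullptr;
    if (!head) head = tail = n;
    else { tail->next = n; tail = n; }
}

node* stable_quicksort(node* input) {
    if (!input || !input->next) return input;

    node* pivot = input;                     // first node as pivot
    node *lessH = nullptr, *lessT = nullptr;
    node *geqH = nullptr, *geqT = nullptr;

    // Partition the remaining nodes in order of appearance, so equal keys
    // keep their relative order inside each sub-list.
    for (node* cur = input->next; cur != nullptr; ) {
        node* next = cur->next;
        if (cur->key < pivot->key) append(lessH, lessT, cur);
        else                       append(geqH, geqT, cur);
        cur = next;
    }

    node* left = stable_quicksort(lessH);
    node* right = stable_quicksort(geqH);

    // Concatenate: sorted(< pivot) + pivot + sorted(>= pivot).
    pivot->next = right;
    if (!left) return pivot;
    node* t = left;
    while (t->next) t = t->next;
    t->next = pivot;
    return left;
}

int main() {
    // The two nodes with key 4 carry tags 'a' and 'b' to make order visible.
    node d = {4, 'b', nullptr}, c = {4, 'a', &d}, b = {1, 'x', &c}, a = {5, 'y', &b};
    for (node* p = stable_quicksort(&a); p; p = p->next)
        std::cout << p->key << p->tag << ' ';   // prints: 1x 4a 4b 5y
    std::cout << '\n';
}

The output shows 4a before 4b, i.e. the equal keys keep their original order.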

Neat way of computing functions on key-value pairs

Suppose you have a list with key-value pairs. Neither keys, nor values, nor the pairs are required to be unique.
The following example
a -> 1
b -> 2
c -> 3
a -> 3
b -> 1
would be valid.
Now suppose I want to associate to any key value pair (k->v) another value V,
which has the following properties:
it is the same for two pairs, if their keys are identical
it is uniquely determined by the set of key-value pairs in the entire list
This sounds abstract, but the sum, the maximum, and the count, for example, qualify:
Pair SUM MAX COUNT
a -> 1 4 3 2
b -> 2 3 2 2
c -> 3 3 3 1
a -> 3 4 3 2
b -> 1 3 2 2
I am looking for fast methods/data structures to compute such functions on the entire list.
If the keys can be sorted, one can simply sort the list, then iterate through the sorted list, and compute the function V in each block with identical keys.
I am asking whether there are nice methods to do this, if the values are not comparable or one does
not want to change the order of the entries.
Some thoughts:
Of course, one could apply a hash function to the keys, in order to obtain sortable keys.
Of course, one could also store the original position of each pair, then do the sorting, then compute
the function, and finally undo the sorting.
So essentially the question is already answered. However, I am interested in whether there are more elegant solutions, maybe using some adapted data structure.
EDIT: To clarify, in response to Sunny Agrawal's comment, what I mean by "associate": this is also part of the question, namely how to nicely arrange the data structure.
In my example, I would get another list/map with (k->v) as key and V as value. However, it might make sense not to arrange the data that way. I require that V is stored in such a way that, for a given k, it takes constant time to obtain V.
Maintain 2 data structures:
1. List< Pair< Key_Type, Value_Type > >
2. Map<Key_Type, Stats>
where Stats is a struct as follows:
struct Stats
{
    int Sum;
    int Count;
    int Max;
};
The first DS contains all your (key, val) pairs in the order you want to store them; the second maintains the stats for each key, as shown in your example.
Insert will work as follows (pseudo C++ code):
void Insert(key, val)
{
    list.insert(Pair(key, val));
    Stats curr;
    if (map.contains(key))
    {
        curr = map[key];
        curr.Max = Max(curr.Max, val);
        curr.Count++;
        curr.Sum += val;
    }
    else
    {
        curr.Max = val;
        curr.Count = 1;
        curr.Sum = val;
    }
    map[key] = curr;
}
Complexity will be O(1) for updating the list and O(log M) for updating the map, where M is the number of unique keys. If N is the total number of objects in the list, the total time for all inserts will be O(N) + O(N log M).
Note: this works if we have inserts only; in the case of deletions, updating Max will be difficult.
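To make this concrete, a compilable sketch of the two-structure idea (the use of std::map and all names here are illustrative assumptions, not part of the original answer):

#include <algorithm>
#include <iostream>
#include <map>
#include <utility>
#include <vector>

struct Stats {
    int sum = 0;
    int count = 0;
    int max = 0;
};

int main() {
    std::vector<std::pair<char, int>> list;   // keeps original insertion order
    std::map<char, Stats> stats;              // per-key aggregates

    auto insert = [&](char key, int val) {
        list.emplace_back(key, val);
        Stats& s = stats[key];                // default-constructed on first use
        s.sum += val;
        s.count += 1;
        s.max = (s.count == 1) ? val : std::max(s.max, val);
    };

    insert('a', 1); insert('b', 2); insert('c', 3);
    insert('a', 3); insert('b', 1);

    // Print each pair in original order together with its key's aggregates.
    for (const auto& [key, val] : list) {
        const Stats& s = stats[key];
        std::cout << key << " -> " << val
                  << "  SUM=" << s.sum << " MAX=" << s.max
                  << " COUNT=" << s.count << '\n';
    }
}

Replacing std::map with std::unordered_map would give expected O(1) per-key lookup, matching the constant-time requirement in the question, at the cost of losing key ordering.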

Foo and Bar playing a strategical game

Foo and Bar are playing a game of strategy. At the start of the game, there are N apples placed in a row (in a straight line). The apples are numbered from 1 to N. Each apple has a particular price value: the price of the ith apple is p_i, for i = 1 to N.
In this game, the players Foo and Bar move alternately.
In each move, the player does the following: the player tosses the coin and, if it comes up heads, takes the first apple among the apples currently remaining in the row; otherwise, the player takes the last apple.
The goal here is to calculate the expected total price value, Foo will get
if Foo Plays First.
Note:
The coin is unbiased: the probability of heads is 0.50, and so is the probability of tails.
Total price value = the sum of the price values of all the apples Foo gets.
Example 1:
N=5
Apple price val:
5 2 3 1 5
Answer is : 11.00
Example 2:
N=6
7 8 2 3 7 8
Answer : 21.250
Example 3:
N=3
1 4 9
First         Second        Third         Foo total value
Foo gets 1    Bar gets 4    Foo gets 9    10
Foo gets 1    Bar gets 9    Foo gets 4    5
Foo gets 9    Bar gets 1    Foo gets 4    13
Foo gets 9    Bar gets 4    Foo gets 1    10
Each outcome has probability 0.5 * 0.5 = 0.25.
Expected value (Foo) = (0.25 * 10) + (0.25 * 5) + (0.25 * 13) + (0.25 * 10) = 9.500
I wrote the following code:
#include <iostream>
#include <cstdio>   // for scanf/printf
using namespace std;

double calculate(int start, int end, int num, int current);
int arr[2010];

int main()
{
    int T;
    scanf("%d", &T);
    for (int t = 0; t < T; t++)
    {
        int N;
        scanf("%d", &N);
        for (int i = 0; i < N; i++)
        {
            scanf("%d", &arr[i]);
        }
        printf("%.3lf\n", calculate(0, N - 1, N / 2 + N % 2, 0));
    }
    return 0;
}

double calculate(int start, int end, int num, int current)
{
    if (num == current)
        return 0;
    double value = .5 * arr[start] + .5 * arr[end]
                 + .5 * calculate(start + 1, end, num, current + 1)
                 + .5 * calculate(start, end - 1, num, current + 1);
    return value;
}
But the above code is quite slow, as the constraints are apple price <= 1000, 1 <= N <= 2000, and there are 500 test cases.
How can I solve it in a more efficient way?
The first observation you can make is that you don't need all four arguments to calculate - two of them are redundant, i.e. the information they contain is already available in the other two arguments. (By the way, I'm not sure that performance is the only problem - I don't think you simulate the game correctly, maybe you should try it on some small test cases.)
Then, after you've removed the unnecessary parameters and brought them down to two integers from 0 to N - 1, you should read about memoization - it's a way to avoid doing the same calculations multiple times. For example, after you've calculated the answer for start = 2, end = 7, instead of doing the same calculations over and over again every time you need this value, you can store it in the 2nd row, 7th column of a two-dimensional array and mark it as found. This way you'll calculate the answer for each subinterval only once, and then use it as something you already know.
This brings the complexity down to O(N^2), which, depending on the implementation and the testing machine, may or may not be fast enough to pass the problem, but is a good start, and has an educational value - dynamic programming and memoization are used pretty often, and you should learn them if you haven't.
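To make the memoization idea concrete, here is a sketch over (start, end), assuming the rule illustrated in Example 3: on every move the current player takes the first or the last remaining apple with probability 0.5 each. It reproduces Example 3's 9.500; as the answer notes, the intended game rule should be double-checked against the original statement, since the other sample answers may correspond to a different rule. The part being illustrated is the O(N^2) caching structure (a single test case is read, for brevity).

#include <cstdio>
#include <vector>

// f(start, end): expected total collected by the player who moves first on
// the subarray p[start..end], under the coin-flip-every-move interpretation.
int N;
std::vector<double> p, prefix;                  // prices and prefix sums
std::vector<std::vector<double>> memo;
std::vector<std::vector<char>> seen;

double rangeSum(int i, int j) { return j < i ? 0.0 : prefix[j + 1] - prefix[i]; }

double f(int start, int end) {
    if (start > end) return 0.0;
    if (seen[start][end]) return memo[start][end];
    // Take one end; the opponent then collects f() of what is left, and the
    // rest of that sub-interval eventually comes back to the current player.
    double takeFirst = p[start] + rangeSum(start + 1, end) - f(start + 1, end);
    double takeLast  = p[end]   + rangeSum(start, end - 1) - f(start, end - 1);
    seen[start][end] = 1;
    return memo[start][end] = 0.5 * takeFirst + 0.5 * takeLast;
}

int main() {
    scanf("%d", &N);
    p.assign(N, 0.0);
    prefix.assign(N + 1, 0.0);
    for (int i = 0; i < N; ++i) {
        scanf("%lf", &p[i]);
        prefix[i + 1] = prefix[i] + p[i];
    }
    memo.assign(N, std::vector<double>(N, 0.0));
    seen.assign(N, std::vector<char>(N, 0));
    printf("%.3f\n", f(0, N - 1));   // for N=3 and prices 1 4 9 this prints 9.500
}

Each (start, end) pair is computed once and cached, so the whole table is filled in O(N^2) regardless of how the recurrence inside f() is eventually corrected.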

Resources