Overload operator== on std::shared_ptr - c++11

I was considering overloading the equality and ordering operators of std::shared_ptr for a particular type. So suppose I have the following:
struct Foo { /* Stuff */ };
bool operator==( const std::shared_ptr<Foo>& lhs, const std::shared_ptr<Foo>& rhs )
{
// Do something reasonable for equality using the Foo instances
}
So this would mean equality would no longer be just pointer equality. Is there a downside or some ugly pitfall to doing this?

Strictly speaking it isn't illegal — the overload lives at global scope, not in namespace std, and shared_ptr<Foo> involves your user-defined type — but it invites nasal demons of a subtler kind. The standard library already defines operator== for shared_ptr, which compares the stored pointers, and which overload gets chosen depends on where the call is made: unqualified calls inside namespace std (containers, algorithms) find the standard pointer-comparing version, while calls in your own code find yours. Equality for shared_ptr<Foo> would then silently mean different things in different places. If you want value semantics, prefer a named function or an explicit comparator passed to the container or algorithm.

Related

unordered_set hash for type, whose comparison uses part of the type data

I need a std::unordered_set of pairs whose first elements must all be distinct.
Is it correct to hash only the first element of each pair, as below?
using Pair = std::pair<int, int>;
struct Eq
{
bool operator() ( Pair const& lhs,
Pair const& rhs ) const
{
return (lhs.first == rhs.first);
}
};
struct Hash
{
std::size_t operator() ( Pair const &p ) const
{
return std::hash<int>()( p.first );
}
};
// No two pairs with same '.first'.
std::unordered_set<Pair, Hash, Eq> pairs;
for ( Pair const& p : ... )
{
pairs.insert(p);
}
In general, for unordered_set<T>:
If the equality functor for type T ignores some data members of T, then hash<T> must ignore them too: elements that compare equal are required to produce the same hash value.
Is this right?
Yep, this should work fine. From the documentation for std::unordered_set::insert() (emphasis mine):
Inserts element(s) into the container, if the container doesn't already contain an element with an equivalent key.
You clearly provided a predicate that says elements should be treated as equivalent when their first fields match, and you specified a hash that makes sure equivalent elements end up in the same bucket. So this looks good to me.

Explicit and implicit conversion

I am pretty surprised that this struct, which is only explicitly convertible to bool, works fine inside an if statement:
struct A
{
explicit operator bool( ) const
{
return m_i % 2 == 0;
}
int m_i;
};
int main()
{
A a{ 10 };
if ( a ) // this is considered explicit
{
bool b = a; // this is considered implicit
// and therefore does not compile
}
return 0;
}
Why is it so? What is the design reason behind it in the C++ Standard?
I personally find the second conversion more explicit than the first one. To make it even clearer, I would have expected the compiler to force the following in both cases:
int main()
{
A a{ 10 };
if ( (bool)a )
{
bool b = (bool)a;
}
return 0;
}
§6.4 Selection statements [stmt.select]
The value of a condition that is an expression is the value of the expression, contextually converted to bool for statements other than switch;
§4 Standard conversions [conv]
Certain language constructs require that an expression be converted to
a Boolean value. An expression e appearing in such a context is said
to be contextually converted to bool and is well-formed if and only
if the declaration bool t(e); is well-formed, for some invented
temporary variable t (8.5).
So the expression of the condition in if must be contextually convertible to bool, which means that explicit conversions are allowed.
This is most likely done because the condition of an if can only be evaluated as a boolean value, so by writing if (cond) you are explicitly stating that you want cond evaluated as a boolean.

C++11 Multiline lambdas can deduce intrinsic types?

I use C++11 lambdas quite a lot, and I've often run into compile errors on multiline lambdas because I forgot to add the return type, as is expected, but I recently ran into one example that doesn't have this issue. It looks something like this:
auto testLambda = [](bool arg1, bool arg2)
{
if (arg1)
{
if (!arg2)
{
return false;
}
return true;
}
return false;
};
This compiles just fine even though there's no return type specified. Is this just Visual Studio being dumb and allowing something it shouldn't, or can lambdas just always deduce intrinsic types?
I tried this with return values of all ints or floating point values and it also compiled just fine. I just found this to be really surprising so I wanted to be absolutely sure how it works before I start making assumptions and omitting return types that might break later on.
Lambdas follow the same template deduction rules as auto-returning functions:
Template argument deduction is used in declarations of functions, when deducing the meaning of the auto specifier in the function's return type, from the return statement.
For auto-returning functions, the parameter P is obtained as follows: in T, the declared return type of the function that includes auto, every occurrence of auto is replaced with an imaginary type template parameter U. The argument A is the expression of the return statement, and if the return statement has no operand, A is void(). After deduction of U from P and A following the rules described above, the deduced U is substituted into T to get the actual return type:
auto f() { return 42; } // P = auto, A = 42:
// deduced U = int, the return type of f is int
If such function has multiple return statements, the deduction is performed for each return statement. All the resulting types must be the same and become the actual return type.
If such function has no return statement, A is void() when deducing.
Note: the meaning of decltype(auto) placeholder in variable and function declarations does not use template argument deduction.

Am I using inheritance, borrowed pointers, and explicit lifetime annotations correctly?

I'm learning Rust, coming from an almost exclusively garbage collected background. So I want to make sure I'm getting off on the right foot as I write my first program.
The tutorial on the Rust site said I should be dubious of using pointers that aren't &. With that in mind, here's what I ended up with in my little class hierarchy (names changed to protect the innocent). The gist is, I have two different entities, let's say Derived1 and Derived2, which share some behavior and structure. I put the common data into a Foo struct and the common behavior into a Fooish trait:
struct Foo<'a> {
name: &'a str,
an_array: &'a [AnEnumType],
/* etc. */
}
struct Derived1<'a> {
foo: &'a Foo<'a>,
other_stuff: &'a str,
}
struct Derived2<'a> {
foo: &'a Foo<'a>,
/* etc. */
}
trait Fooish<'a> {
fn new(foo: &'a Foo<'a>) -> Self;
/* etc. */
}
impl<'a> Fooish<'a> for Derived1<'a> {
fn new(foo: &'a Foo<'a>) -> Derived1<'a> {
Derived1 { foo: foo, other_stuff: "bar" }
}
/* etc. */
}
/* and so forth for Derived2 */
My questions:
Am I "doing inheritance in Rust" more-or-less idiomatically?
Is it correct to use & pointers as struct fields here? (such as for string data, and array fields whose sizes vary from instance to instance? What about for Foo in Derived?)
If the answer to #2 is 'yes', then I need explicit lifetime annotations, right?
Is it common to have so many lifetime annotations everywhere as in my example?
Thanks!
I'd say that this is not idiomatic at all, but sometimes there are tasks which require stepping away from idiomatic approaches; it is just not clear whether this is really such a case.
I'd suggest you refrain from using ideas from OO languages that operate in terms of classes and inheritance - they won't map cleanly onto Rust. Instead you should think of your data in terms of ownership: ask yourself, should the given struct own the data? In other words, does the data belong to the struct naturally, or can it be used independently somewhere else?
If you apply this reasoning to your structures:
struct Foo<'a> {
name: &'a str,
an_array: &'a [AnEnumType],
/* etc. */
}
struct Derived1<'a> {
foo: &'a Foo<'a>,
other_stuff: &'a str,
}
struct Derived2<'a> {
foo: &'a Foo<'a>,
/* etc. */
}
you would see that it doesn't really make sense to encode inheritance using references. If Derived1 has a reference to Foo, then it is implied that this Foo is created somewhere else, and Derived1 is only borrowing it for a while. While this may be something you really want, it is not how inheritance works: derived structures/classes usually contain their "parent" contents inside them; in other words, they own their parent's data. So a more appropriate structure would be:
struct Foo<'a> {
name: &'a str,
an_array: &'a [AnEnumType],
/* etc. */
}
struct Derived1<'a> {
foo: Foo<'a>,
other_stuff: &'a str,
}
struct Derived2<'a> {
foo: Foo<'a>,
/* etc. */
}
Note that Derived* structures now include Foo into them.
As for strings and arrays (string slices and array slices, in fact): yes, if you want to hold them in structures you do have to use lifetime annotations. However, that does not happen very often, and, again, designing structures around ownership usually tells you whether a field should be a slice or an owned String or Vec. There is a nice tutorial on strings which explains, among other things, when you need owned strings and when you need slices. The same reasoning applies to &[T] versus Vec<T>. In short: if your struct owns the string/array, use String/Vec; otherwise, consider using slices.
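A sketch of the fully owning design, assuming a placeholder AnEnumType variant: Derived1 embeds Foo by value and the borrowed slices become owned String/Vec, so all the lifetime annotations disappear.

```rust
enum AnEnumType {
    Variant,
}

// Foo now owns its data: no lifetime parameters needed.
struct Foo {
    name: String,
    an_array: Vec<AnEnumType>,
}

// Derived1 owns its Foo "parent part" by value.
struct Derived1 {
    foo: Foo,
    other_stuff: String,
}

fn main() {
    let d = Derived1 {
        foo: Foo {
            name: "foo".to_string(),
            an_array: vec![AnEnumType::Variant],
        },
        other_stuff: "bar".to_string(),
    };
    assert_eq!(d.foo.name, "foo");
    assert_eq!(d.foo.an_array.len(), 1);
    assert_eq!(d.other_stuff, "bar");
}
```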

How do enums work in C without creating an object?

How do enums work without creating an object of the enum type?
#include <stdio.h>

typedef enum
{
a = 0, b, c
} Sample_Enum;

int main()
{
printf("%d", a);
return 0;
}
Output: 0
Enum constants do not require an object of the enum type to exist before they are used.
In C, enums are just integers in disguise: they convert implicitly to and from int, and a here is simply an integer constant equal to 0.
