Get a constant definition from a specific unit file - pascal

I have a complex Lazarus program. One of my sources uses the standard units Graphics and ZStream, both of which contain a definition of clDefault.
The one (in Graphics) is:
const
clDefault = TColor($20000000);
The other (in ZStream) is:
type
Tcompressionlevel=(
clnone, {Do not use compression, just copy data.}
clfastest, {Use fast (but less) compression.}
cldefault, {Use default compression}
clmax {Use maximum compression}
);
In my code, when I refer to clDefault I get a compiler error: I want the const definition above, to assign it to a destination of type TColor, but the compiler resolves the identifier to the enumerated constant from ZStream.
How can I instruct the compiler to use the correct definition I want?

Using unit namespaces you can prefix the const clDefault with its unit name:
Graphics.clDefault
This will let the compiler pinpoint the wanted declaration in this case.
From Freepascal docs:
The rules of unit scope imply that an identifier of a unit can be redefined. To have access to an identifier of another unit that was redeclared in the current unit, precede it with that other unit's name...

Related

what is __attribute__((__bounded__(__string__,2,3)))?

I was reading OpenBSD bcrypt code when I got a warning about an unknown attribute bounded at the code snippet below:
void SHA256Update(SHA2_CTX *, const void *, size_t)
__attribute__((__bounded__(__string__,2,3)));
I tried to google the bounded attribute but found no relevant results. I want to port that code to a different platform, and if I understand the meaning of the bounded attribute, I can use a similar attribute on that platform.
Any suggestion would be appreciated!
The __bounded__ attribute is available on function declarations to let the compiler determine the length of the memory region pointed to by one of the function's arguments from the value of another of its arguments; the first parameter selects the style of check for different kinds of functions.
In this case it augments the type of the second argument to the function with the length specified by the third argument; the __string__ bound style additionally checks that the size argument doesn't come from a sizeof applied to a pointer, since what you want is the destination buffer's size.
It's only available in OpenBSD's fork of GCC (see man 1 gcc-local); there was also a short-lived GNU C extension (somewhere between 2000 and 2003) by the same name and for the same purpose, which was a direct type qualifier instead and was usable outside function declarations too; however, AFAIK it was undocumented.
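As a rough illustration, this is the kind of misuse the attribute is meant to catch; the calling code and names below are invented for the sketch, and only OpenBSD's compiler actually performs the check (elsewhere the attribute is at best ignored with a warning):

#include <stddef.h> /* for size_t */

struct SHA2_CTX;    /* opaque here; the real definition lives in OpenBSD's <sha2.h> */

void SHA256Update(struct SHA2_CTX *, const void *, size_t)
    __attribute__((__bounded__(__string__, 2, 3)));

void digest(struct SHA2_CTX *ctx, const char *p)
{
    char buf[32];
    SHA256Update(ctx, buf, sizeof(buf)); /* fine: argument 3 is the size of argument 2 */
    SHA256Update(ctx, p, sizeof(p));     /* flagged: sizeof applied to a pointer */
}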

memcpy from one type to another type. How do we access the destination afterwards?

uint32_t u32 = 0;
uint16_t u16[2];
static_assert(sizeof(u32) == sizeof(u16), "");
memcpy(u16, &u32, sizeof(u32)); // defined?
// if defined, how do we access the data from here on?
Is this defined behaviour? And, if so, what type of pointer may we use to access the target data after the memcpy?
Must we use uint16_t*, because that is suitable for the declared type of u16?
Or must we use uint32_t*, because the type of the source data (the data memcpy copied from) is uint32_t?
(Personally interested in C++11/C++14. But a discussion of related languages like C would be interesting also.)
Is this defined behavio[u]r?
Yes. memcpying into a POD is well-defined, and you ensured that the sizes match.
Must we use uint16_t*, because that is suitable for the declared type of u16?
Yes, of course. u16 is an array of two uint16_ts so it must be accessed as such. Accessing it via a uint32_t* would be undefined behavior by the strict-aliasing rule.
It doesn't matter what the source type was. What matters is that you have an object of type uint16_t[2].
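A minimal self-contained sketch of the permitted access (the halves printed depend on the machine's endianness; the point is only that we read through the declared type):

#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    std::uint32_t u32 = 0x11223344;
    std::uint16_t u16[2];
    static_assert(sizeof u32 == sizeof u16, "");
    std::memcpy(u16, &u32, sizeof u32);
    // OK: u16's declared type is uint16_t[2], so indexing it (equivalently,
    // accessing through a uint16_t*) is the correct way to read it back.
    std::printf("%04x %04x\n",
                static_cast<unsigned>(u16[0]),
                static_cast<unsigned>(u16[1]));
}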
On the other hand, this:
uint32_t p;
new (&p) uint16_t(42);
std::cout << p;
is undefined behavior, because now there is an object of a different type whose lifetime has begun at &p, and we're accessing it through the wrong type.
The C++ standard delegates to C standard:
The contents and meaning of the header <cstring> are the same as the C standard library header <string.h>.
The C standard specifies:
7.24.1/3 For all functions in this subclause, each character shall be interpreted as if it had the type unsigned char (and therefore every possible object representation is valid and has a different value).
So, to answer your question: Yes, the behaviour is defined.
Yes, uint16_t* is appropriate because uint16_t is the type of the object.
No, the type of the source doesn't matter.
The C++ standard doesn't specify such a thing as an object without a declared type, or how it would behave. I interpret that to mean that the effective type is implementation-defined for objects with no declared type.
Even in C, the source doesn't matter in this case. Here is a more complete version of the quote from the C standard (draft N1570) that you are concerned about, emphasis mine:
6.5/6 [...] If a value is copied into an object having no declared type using memcpy or memmove, or is copied as an array of character type, then the effective type of the modified object for that access and for subsequent accesses that do not modify the value is the effective type of the object from which the value is copied, if it has one. [...]
This rule doesn't apply, because the objects in u16 do have a declared type.
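For contrast, a hedged sketch of a case where the quoted rule does apply: storage obtained from malloc has no declared type, so in C the copy stamps it with an effective type (the identifiers are illustrative):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    uint32_t u32 = 42;
    void *buf = malloc(sizeof u32);  /* allocated storage: no declared type */
    if (!buf) return 1;
    memcpy(buf, &u32, sizeof u32);   /* effective type of *buf becomes uint32_t */
    uint32_t out = *(uint32_t *)buf; /* valid in C under 6.5/6 */
    free(buf);
    return out == 42 ? 0 : 1;
}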

When should I use static data members vs. const global variables?

Declaring const global variables has proven useful for defining some operating parameters of an API. For example, in my API the minimum order of accuracy that the numerical operators have is 2; thus, I declare:
const int kDefaultOrderAccuracy{2};
as a global variable. Would it be better to make this a static const public data member of the classes describing these operators? When, in general, is better to choose one over the other?
const int kDefaultOrderAccuracy{2};
is the declaration of a static variable: at namespace scope, const implies internal linkage, so kDefaultOrderAccuracy has internal linkage. Putting names with internal linkage in a header is an extremely bad idea, because it makes it extremely easy to violate the One Definition Rule (ODR) in other code with external linkage in the same or another header, notably when the name is used in the body of an inline or template function:
Inside f.hpp:
// f.hpp
const int kDefaultOrderAccuracy{2}; // internal linkage: each TU gets its own object

template <typename T>
const T& max(const T &x, const T &y) {
    return x > y ? x : y;
}

inline int f(int x) {
    return max(kDefaultOrderAccuracy, x); // which kDefaultOrderAccuracy?
}
As soon as you include f.hpp in two TUs (Translation Units), you violate the ODR: the definition of f is not unique, because it uses a namespace-scope static variable, and which kDefaultOrderAccuracy object the definition designates depends on the TU in which it is compiled.
A static member of a class has external linkage:
struct constants {
static const int kDefaultOrderAccuracy{2};
};
inline int f(int x) {
return max(constants::kDefaultOrderAccuracy, x); // OK
}
There is only one constants::kDefaultOrderAccuracy in the program.
You can also use namespace level global constant objects:
extern const int kDefaultOrderAccuracy;
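A minimal sketch of that pattern (the file names are illustrative); note that the definition also needs extern, because at namespace scope const alone would give the name internal linkage again:

// constants.hpp
extern const int kDefaultOrderAccuracy;     // declaration only: external linkage

// constants.cpp
extern const int kDefaultOrderAccuracy = 2; // the single definition in the program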
Context is always important: for answering questions like this, and also for the naming itself.
If you as a reader (co-coder) need to guess what an identifier means, you start looking for more context; this may be supported by an API doc, often shown inline by decent IDEs. But if you didn't provide a really great API doc (which I gather from your question), the only context you get is by looking at where your declaration is placed.
Here you may be interested in the name(s) of the containing library, subdirectory, file, namespace, or class, and last but not least in the type being used.
If I read kDefaultOrderAccuracy, I see a lot of context encoded (Default, Order, Accuracy), where Order could just as well relate to sales or sorting, and the k prefix doesn't say anything to me. This is just to make you look at your actual problem from a different perspective: C/C++ identifiers have a poor grammar, restricted as they are to the rules for compound words.
This limitation of global identifiers is the most important reason why I mostly avoid global variables, even constants, and sometimes even types. If a thing's meaning is limited to a given context, define it right within this context. Sometimes you first have to create that context.
Your explanation contains some unused context:
numerical operators
minimum precision (BTW: minimum doesn't mean default)
The problem of placing a definition into the right class is not very different from the problem of finding the right place for a global: you have to find or create the right header file (and/or namespace).
As a side note, you may be interested to learn that enums can also be used to get cheap compile-time constants, and enums can likewise be placed inside classes (or namespaces); a scoped enumeration is another option you should consider before introducing global constants (see the sketch below). As with enclosing class definitions, the :: is a means of punctuation which separates more strongly than _ or an in-word caseChange.
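A minimal sketch of both variants (the names are made up for illustration):

// Unscoped enum nested in a class: usable as a plain compile-time int.
struct Accuracy {
    enum : int { kDefaultOrder = 2 };
};

// Scoped enumeration: stronger typing, conversion must be explicit.
enum class Order : int { DefaultAccuracy = 2 };

int main() {
    int a = Accuracy::kDefaultOrder;                  // implicit conversion is fine
    int b = static_cast<int>(Order::DefaultAccuracy); // explicit cast required
    return a - b; // 0
}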
Addendum:
If you are interested in providing a useful default behaviour of your operations that can be overridden by your users, default arguments could be an option. If your API provides operators, you should study how the input/output manipulators for the standard I/O streams work.
my guess is that:
a const takes up inline memory based on the size of the data value, such as "mov ah, <const value>" for each use, which can be a really short instruction overall, depending on the input value;
whereas a static takes up storage for a whole data type, usually an int, whatever that may be on the current system, maybe more, plus it may need a full memory access to reach the data, such as "mov ah, [memory pointer]", where the pointer is usually the size of an int on the system, for each use (with a full class it could be even more complex). Yet since the static is still declared const, it may behave the same as the normal const type.

__verify_pcpu_ptr function in Linux Kernel - What does it do?

#define __verify_pcpu_ptr(ptr) \
do { \
    const void __percpu *__vpp_verify = (typeof((ptr) + 0))NULL; \
    (void)__vpp_verify; \
} while (0)

#define VERIFY_PERCPU_PTR(__p) \
({ \
    __verify_pcpu_ptr(__p); \
    (typeof(*(__p)) __kernel __force *)(__p); \
})
What do these two macros do? What are they used for? How do they work?
Thanks.
This is part of the scheme used by per_cpu_ptr to support a pointer that gets a different value for each CPU. There are two motives here:
Ensure that accesses to the per-cpu data structure are only made via the per_cpu_ptr macro.
Ensure that the argument given to the macro is of the correct type.
Restating, this ensures that (a) you don't accidentally access a per-cpu pointer without the macro (which would only reference the first of N members), and (b) that you don't inadvertently use the macro to cast a pointer that is not of the correct declared type to one that is.
By using these macros, you get the support of the compiler in type-checking without any runtime overhead. The compiler is smart enough to eventually recognize that all of these complex machinations result in no observable state change, yet the type-checking will have been performed. So you get the benefit of the type-checking, but no actual executable code will have been emitted by the compiler.
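A stripped-down sketch of the same trick, using the GNU extensions involved (__typeof__ and statement expressions); the macro and names here are invented for illustration and are not from the kernel:

#include <stdio.h>

/* Casting NULL to the argument's own pointer type and assigning it to a
   dummy pointer of the expected type forces a compile-time compatibility
   check; the dummy is then discarded, so no code is emitted for it. */
#define VERIFY_INT_PTR(ptr)                                  \
    ({                                                       \
        const int *__vip_verify = (__typeof__((ptr) + 0))0;  \
        (void)__vip_verify;                                  \
        (ptr);                                               \
    })

int main(void)
{
    int x = 5;
    int *p = VERIFY_INT_PTR(&x);  /* compiles: &x has type int * */
    /* double d;
       VERIFY_INT_PTR(&d);           would not compile: double * vs const int * */
    printf("%d\n", *p);
    return 0;
}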

Avoiding self assignment in std::shuffle

I stumbled upon the following problem when using the checked implementation of glibcxx:
/usr/include/c++/4.8.2/debug/vector:159:error: attempt to self move assign.
Objects involved in the operation:
sequence "this" # 0x0x1b3f088 {
type = NSt7__debug6vectorIiSaIiEEE;
}
Which I have reduced to this minimal example:
#include <vector>
#include <random>
#include <algorithm>
struct Type {
std::vector<int> ints;
};
int main() {
std::vector<Type> intVectors = {{{1}}, {{1, 2}}};
std::shuffle(intVectors.begin(), intVectors.end(), std::mt19937());
}
Tracing the problem, I found that shuffle wants to std::swap an element with itself. As Type is user-defined and no specialization of std::swap has been given for it, the default one is used, which creates a temporary and uses operator=(&&) to transfer the values:
_Tp __tmp = _GLIBCXX_MOVE(__a);
__a = _GLIBCXX_MOVE(__b);
__b = _GLIBCXX_MOVE(__tmp);
As Type does not explicitly declare operator=(&&), a defaulted one is implemented by "recursively" applying the same operation to its members.
The problem occurs on line 2 of the swap code, where __a and __b refer to the same object; in effect this executes __a.operator=(std::move(__a)), which then triggers the error in the checked implementation of vector::operator=(&&).
My question is: Whose fault is this?
Is it mine, because I should provide an implementation for swap that makes "self swap" a NOP?
Is it std::shuffle's, because it should not try to swap an element with itself?
Is it the checked implementation's, because self-move-assignment is perfectly fine?
Everything is correct, the checked implementation is just doing me a favor in doing this extra check (but then how to turn it off)?
I have read about shuffle requiring the iterators to be ValueSwappable. Does this extend to self-swap (which is a mere runtime problem and cannot be enforced by compile-time concept checks)?
Addendum
To trigger the error more directly one could use:
#include <vector>
int main() {
std::vector<int> vectorOfInts;
vectorOfInts = std::move(vectorOfInts);
}
Of course this is quite obvious (why would you move a vector to itself?).
If you were swapping std::vectors directly, the error would not occur, because the vector class has a custom implementation of the swap function that does not use operator=(&&).
The libstdc++ Debug Mode assertion is based on this rule in the standard, from [res.on.arguments]
If a function argument binds to an rvalue reference parameter, the implementation may assume that this parameter is a unique reference to this argument.
i.e. the implementation can assume that the object bound to the parameter of T::operator=(T&&) does not alias *this, and if the program violates that assumption the behaviour is undefined. So if the Debug Mode detects that in fact the rvalue reference is bound to *this it has detected undefined behaviour and so can abort.
The paragraph contains this note as well (emphasis mine):
[Note: If a program casts an lvalue to an xvalue while passing that lvalue to a library function (e.g., by calling the function with the argument std::move(x)), the program is effectively asking that function to treat that lvalue as a temporary object. The implementation is free to optimize away aliasing checks which might be needed if the argument was an lvalue. —end note]
i.e. if you say x = std::move(x) then the implementation can optimize away any check for aliasing such as:
X::operator=(X&& rval) { if (&rval != this) ...
Since the implementation can optimize that check away, the standard library types don't even bother doing such a check in the first place. They just assume self-move-assignment is undefined.
However, because self-move-assignment can arise in quite innocent code (possibly even outside the user's control, because the std::lib performs a self-swap) the standard was changed by Defect Report 2468. I don't think the resolution of that DR actually helps though. It doesn't change anything in [res.on.arguments], which means it is still undefined behaviour to perform a self-move-assignment, at least until issue 2839 gets resolved. It is clear that the C++ standard committee think self-move-assignment should not result in undefined behaviour (even if they've failed to actually say that in the standard so far) and so it's a libstdc++ bug that our Debug Mode still contains assertions to prevent self-move-assignment.
Until we remove the overeager checks from libstdc++ you can disable that individual assertion (but still keep all the other Debug Mode checks) by doing this before including any other headers:
#include <debug/macros.h>
#undef __glibcxx_check_self_move_assign
#define __glibcxx_check_self_move_assign(x)
Or equivalently, using just command-line flags (so no need to change the source code):
-D_GLIBCXX_DEBUG -include debug/macros.h -U__glibcxx_check_self_move_assign '-D__glibcxx_check_self_move_assign(x)='
This tells the compiler to include <debug/macros.h> at the start of the file, then undefines the macro that performs the self-move-assign assertion, and then redefines it to be empty.
(In general defining, undefining or redefining libstdc++'s internal macros is undefined and unsupported, but this will work, and has my blessing).
It is a bug in GCC's checked implementation. According to the C++11 standard, swappable requirements include (emphasis mine):
17.6.3.2 §4 An rvalue or lvalue t is swappable if and only if t is swappable with any rvalue or lvalue, respectively, of type T
Any rvalue or lvalue includes, by definition, t itself; therefore, to be swappable, swap(t, t) must be legal. At the same time, the default swap implementation requires the following:
20.2.2 §2 Requires: Type T shall be MoveConstructible (Table 20) and MoveAssignable (Table 22).
Therefore, to be swappable under the definition of the default swap operator, self-move-assignment must be valid and have the postcondition that after self-assignment t is equivalent to its old value (not necessarily a no-op, though!), as per Table 22.
Although the object you are swapping is not a standard type, MoveAssignable has no precondition that rv and t refer to different objects, and as long as all members are MoveAssignable (as std::vector should be) the generated move assignment operator must be correct (it performs memberwise move assignment as per 12.8 §29). Furthermore, although the note states that rv has a valid but unspecified state, any state except being equivalent to its original value would be incorrect for self-assignment, as otherwise the postcondition would be violated.
I read a couple of tutorials about copy constructors and move assignments (for example this). They all say that the object must check for self-assignment and do nothing in that case. So I would say it is the checked implementation's fault, because self-move-assignment is perfectly fine.
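For completeness, a sketch of the defensive pattern those tutorials describe, applied to the question's Type (whether this should be necessary at all is exactly what is disputed above):

#include <utility>
#include <vector>

struct Type {
    std::vector<int> ints;

    Type() = default;
    Type(Type&&) = default;
    // Self-tolerant move assignment: a self-move becomes a no-op.
    Type& operator=(Type&& other) noexcept {
        if (this != &other)
            ints = std::move(other.ints);
        return *this;
    }
};

// A swap that skips the degenerate self-swap case entirely, so
// algorithms like std::shuffle never hit the move-assignment path.
void swap(Type& a, Type& b) noexcept {
    if (&a != &b)
        a.ints.swap(b.ints);
}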
