I have two modules A and B. A has a function f() that is globally accessible, i.e. the f() symbol is exported. B may want to call f() occasionally, but B should only call f() if module A is loaded. What is the best way for B to tell whether A is loaded?
Part b to this question: is there a way to check if f() is exported?
I'm not sure which method would be more efficient.
I assume you load module B first, then optionally module A. My strategy would be to have A register a set of functions with B when A first initializes. B keeps a statically allocated function pointer (or a pointer to a struct full of function pointers) and provides exported functions to register and unregister a handler. When A loads, it registers its function (or struct of functions) with B; when A unloads, it unregisters them.
It might go something like this:
B.h
typedef int (*foo_t)(int);
int B_register_foo(foo_t);
void B_unregister_foo(foo_t);
B.c
#include <linux/module.h>
#include <linux/errno.h>
#include "B.h"

static foo_t foo = NULL;

int B_register_foo(foo_t f) {
    if (!foo) {
        foo = f;
        return 0;
    }
    return -EBUSY;  /* a handler is already registered */
}

void B_unregister_foo(foo_t f) {
    if (foo == f)
        foo = NULL;
}

EXPORT_SYMBOL(B_register_foo);
EXPORT_SYMBOL(B_unregister_foo);

int B_maybe_call_foo(int arg) {
    return foo ? foo(arg) : 0;
}
A.c
#include <linux/module.h>
#include <linux/errno.h>
#include "B.h"

static int A_foo(int arg);  /* A's implementation, defined elsewhere */

static int __init init_A(void) {
    if (B_register_foo(A_foo))
        return -EBUSY;
    return 0;
}

static void __exit exit_A(void) {
    B_unregister_foo(A_foo);
}
module_init(init_A);
module_exit(exit_A);
If B uses at least one of A's symbols, then by the time B is loaded, a module providing the required symbol(s) (in other words, a module like A) is also already loaded.
As for part b, whether there is a way to check if f() is exported:
If the symbol were not available, you would not be able to load the module (B) requesting it.
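A minimal sketch of that dependency-based approach (the header name A.h and the zero-argument signature of f() are assumptions for illustration, not from the original question):

/* A.h -- shipped with module A, which defines f() and does EXPORT_SYMBOL(f) */
int f(void);

/* B.c -- the direct call creates a symbol dependency on A */
#include <linux/module.h>
#include "A.h"

static int __init init_B(void)
{
    /* If A were not loaded, insmod/modprobe would refuse to load B with an
     * "Unknown symbol" error, so reaching this point means f() is available. */
    pr_info("f() returned %d\n", f());
    return 0;
}

static void __exit exit_B(void) { }

module_init(init_B);
module_exit(exit_B);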
Go allows for multiple named return values, but what about the receiving variables? Are they protected when return values are juggled around?
Let's say we start with this:
func foo() (i int, j int) {
    i = 1
    j = 2
    return
}

a, b := foo()
Now what if some other coder comes by and makes the following change to foo's definition:
func foo() (j int, i int) {
then my calling function is invalidated. Is it, then, possible to name the returned values on the calling side as well? For instance, if I could call it like this:
(a:i, b:j) := foo()
then I would be attaching them to the named return values, rather than assigning them in the order they are returned.
So, is there a way to solve that problem?
This is no different than rearranging the input parameters. As a rule, don't do that unless you intend to make a breaking change. But if you want to deal with things by name rather than position, you want a struct. For example, you can use anonymous structs:
package main

import "fmt"

func foo() struct {
    i int
    j int
} {
    return struct {
        i int
        j int
    }{1, 2}
}

func main() {
    result := foo()
    fmt.Println(result.i, result.j) // 1 2
}
Of course you can also name the struct if you used it in other places, but there's no need if you just want to name the fields.
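For completeness, a sketch of the named-struct variant (the result type name here is arbitrary):

type result struct {
    i, j int
}

func foo() result {
    return result{i: 1, j: 2} // fields are bound by name, not by position
}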
Consider some given interface and a function of an imaginary library that uses it like
// Binary and Ternary operation on ints
type NumOp interface {
    Binary(int, int) int
    Ternary(int, int, int) int
}

func RandomNumOp(op NumOp) {
    var (
        a = rand.Intn(100) - 50
        b = rand.Intn(100) - 50
        c = rand.Intn(100) - 50
    )
    fmt.Printf("%d <op> %d = %d\n", a, b, op.Binary(a, b))
    fmt.Printf("%d <op> %d <op> %d = %d\n", a, b, c, op.Ternary(a, b, c))
}
A possible type implementing that interface could be
// MyAdd defines additions on 2 or 3 int variables
type MyAdd struct{}

func (MyAdd) Binary(a, b int) int     { return a + b }
func (MyAdd) Ternary(a, b, c int) int { return a + b + c }
I am dealing with many different interfaces, each defining a few methods that in some cases need implementations that mostly act as no-ops, do not rely on any struct member, and are used in only a single place in the project (no reusability needed).
Is there a simpler (less verbose) way in Go to define a (preferably anonymous) implementation of an interface using anonymous functions, something like this (pseudocode, I know it doesn't work that way):
RandomNumOp({
Binary: func(a,b int) int { return a+b},
Ternary: func(a,b,c int) int {return a+b+c},
})
If the implementation must work
If the value implementing the interface must work (e.g. its methods must be callable without panic), then you can't do it.
Method declarations must be at the top level (file level), and implementing an interface that has more than 0 methods requires having those method declarations somewhere.
Sure, you can use a struct and embed an existing implementation, but that again requires an existing implementation whose methods are already defined "somewhere": at the file level.
If you need a "dummy" but workable implementation, then use / pass any implementation, e.g. a value of your MyAdd type. If you want to stress that the implementation doesn't matter, create a dummy implementation whose name indicates that:
type DummyOp struct{}
func (DummyOp) Binary(_, _ int) int { return 0 }
func (DummyOp) Ternary(_, _, _ int) int { return 0 }
If you need to supply implementations for some of the methods dynamically, you can create a delegator struct type that holds a function field for each method; each method checks whether its function is set and calls it if so, otherwise it does nothing.
This is how it could look:
type CustomOp struct {
    binary  func(int, int) int
    ternary func(int, int, int) int
}

func (cop CustomOp) Binary(a, b int) int {
    if cop.binary != nil {
        return cop.binary(a, b)
    }
    return 0
}

func (cop CustomOp) Ternary(a, b, c int) int {
    if cop.ternary != nil {
        return cop.ternary(a, b, c)
    }
    return 0
}
When using it, you have the freedom to only supply a subset of functions, the rest will be a no-op:
RandomNumOp(CustomOp{
    binary: func(a, b int) int { return a + b },
})
If the implementation is not required to work
If you only need a value that implements an interface but you don't require its methods to be "callable" (to not panic if called), you may simply use an anonymous struct literal, embedding the interface type:
var op NumOp = struct{ NumOp }{}
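A short sketch of what that buys you, reusing the NumOp interface (and the fmt import) from the question: the value type-checks as a NumOp, but the embedded NumOp field is nil, so actually calling a method through it would panic:

func demo() {
    var op NumOp = struct{ NumOp }{} // satisfies NumOp; the embedded field is nil

    fmt.Printf("%T\n", op) // fine: only the value's type is inspected

    // op.Binary(1, 2) // would panic: nil pointer dereference on the embedded NumOp
}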
I've been daydreaming about a language where you can define different functions with the same name but whose arguments differ in type (or number).
Here's a naive example, assuming a C-like syntax.
struct vect {
    int x;
    int y;
};

struct guy {
    struct vect pos;
    char *name;
};

struct guy new_guy(struct vect v, char *name) {
    struct guy g;
    g.pos = v;
    g.name = name;
    return g;
}

struct guy new_guy(int x, int y, char *name) {
    struct vect v;
    v.x = x;
    v.y = y;
    return new_guy(v, name);
}

int get_x(struct vect v) {
    return v.x;
}

int get_x(struct guy g) {
    return g.pos.x;
}
The point would be to avoid long names like get_vect_x, get_guy_x, etc. The compiler knows the types of the arguments at each call site, so it should have no problem figuring out which definition to use.
There could also be different definitions for different number of arguments.
From Wikipedia (Function overloading): In some programming languages, function overloading or method overloading is the ability to create multiple functions of the same name with different implementations. Calls to an overloaded function will run a specific implementation of that function appropriate to the context of the call, allowing one function call to perform different tasks depending on context.
Java is capable of this, and C++ is as well (though I have much less experience with it than with Java). Declaring methods with the same name but a different number of parameters, or parameters of different types, is supported, even for constructors. This works naturally in Java because it is a statically typed language. A quick way to check whether a particular language supports it is to look at the docs of its standard library for constructors that take different combinations of parameters.
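For instance, a minimal C++ sketch of the get_x example from the question (overload resolution picks the right function from the argument's type):

#include <iostream>

struct vect { int x; int y; };
struct guy  { vect pos; const char *name; };

int get_x(const vect &v) { return v.x; }
int get_x(const guy &g)  { return g.pos.x; }

int main() {
    vect v{1, 2};
    guy  g{{3, 4}, "someone"};
    std::cout << get_x(v) << " " << get_x(g) << "\n"; // prints "1 3"
}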
I'm wondering if anyone can please explain how, given types T and X, std::function takes T(X) as a template parameter.
int(double) looks like the usual cast from double to int, so how is std::function parsing it as distinct types?
I did search but didn't find anything that specifically addresses this question. Thanks!
It can use partial template specialization. Look at this:
template <typename T>
class Func;

template <typename R, typename... Args>
class Func<R(Args...)>
{
public:
    Func(R (*fptr)(Args...)) { /* do something with fptr */ }
};
This class takes a single template parameter, but unless it matches R(Args...) (i.e. a function type returning R and taking zero or more arguments), there won't be a definition for the class.
int main() { Func<int> f; }
// error: aggregate 'Func<int> f' has incomplete type and cannot be defined
int func(double a) { return a+2; }
int main() { Func<int(double)> f = func; }
// ok
The specialization can now operate on R and Args to do its magic.
Note that int(double) is a function type. Since you cannot create objects of a function type, you don't usually see this syntax outside the template world. If T is int(double), then T* is a function pointer, just like int(*)(double).
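As a concrete sketch of how this is used, std::function is declared with exactly this kind of specialization, so int(double) names a call signature rather than performing a cast:

#include <functional>
#include <iostream>

int truncate(double d) { return static_cast<int>(d); }

int main() {
    std::function<int(double)> f = truncate; // int(double) is the function type
    std::cout << f(3.9) << "\n";             // prints 3
}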
I'm not sure how to search for this question; that's why I'm asking it here.
Say you have a project that contains 3 classes:
class A
{
public:
    int doSomething();
};

// Depends on A.
class B
{
public:
    A objA;
};

// Depends on nothing.
class C
{
public:
    void Terminate();
};
And you create a static library containing these 3 classes. When you link your .lib file with your executable, will
1) all the classes in that library end up in the executable (A, B, C), or
2) just the classes that are used, and their dependencies (A, B, but not C)?
int main()
{
    B b;
    b.objA.doSomething();
}
Static linking copies the needed code out of the library and into the executable at link time; the executable does not keep references back into the .lib file. The usual granularity is the object file (library member): the linker pulls in every member that defines a symbol the program actually references, plus the members those members reference in turn, and leaves the rest out.
So for the main() above, the object files defining B and its dependency A are linked into the executable, while the object file containing C is omitted, assuming each class is compiled into its own object file. If several classes share one object file, they are pulled in together. (Dynamic linking is different: there the executable only records which shared library and which symbols to resolve at load time.)