Use variadic C functions in Go

I use shm_open with cgo. On Linux, shm_open is declared with three arguments:
int shm_open(const char *name, int oflag, mode_t mode);
whereas on OSX (Darwin) the third argument, mode, is optional:
int shm_open(const char *name, int oflag, ...);
This creates a problem with cgo when trying to pass a mode on OSX: it complains that I pass 3 arguments when only 2 are expected.
How can I work around this?

As usual, the revelation comes one second after posting to SO. You can actually define functions in the cgo comment section, so all you have to do is use a wrapper like this:
/*
#include <sys/types.h>
#include <sys/mman.h>

// Non-variadic wrapper around shm_open so cgo sees a fixed signature.
int shm_open2(const char *name, int oflag, mode_t mode) {
    return shm_open(name, oflag, mode);
}
*/
import "C"

Related

Where is "generic_start_main()" defined?

I got a segmentation fault from running a program. The backtrace command in gdb shows that the call stack is
#0 0x000000001048d594 in .__libc_csu_init ()
#1 0x000000001048ce20 in .generic_start_main ()
#2 0x000000001048d030 in .__libc_start_main ()
#3 0x0000000000000000 in ?? ()
Can someone tell me where generic_start_main() is defined? I tried to search in glibc with
grep -R generic_start_main * but only got
sysdeps/unix/sysv/linux/powerpc/libc-start.c:29:#define LIBC_START_MAIN generic_start_main
sysdeps/unix/sysv/linux/powerpc/libc-start.c:102: return generic_start_main (stinfo->main, argc, argv, auxvec,
I'm running programs on a 3.10.0 Linux on a 64-bit PowerPC machine.
You give up too easily. Look in sysdeps/unix/sysv/linux/powerpc/libc-start.c, and you'll see that it #includes <csu/libc-start.c> after defining LIBC_START_MAIN, and csu/libc-start.c has:
STATIC int
LIBC_START_MAIN (int (*main) (int, char **, char ** MAIN_AUXVEC_DECL),
                 int argc, char **argv,
#ifdef LIBC_START_MAIN_AUXVEC_ARG
                 ElfW(auxv_t) *auxvec,
#endif
                 __typeof (main) init,
                 void (*fini) (void),
                 void (*rtld_fini) (void), void *stack_end)
{ ...
Update:
I'm not very familiar with how the #define macro works.
The #define creates a text substitution rule for the preprocessor. For example:
#define FOO Bar
tells the preprocessor: every time you see FOO, replace it with Bar (there are some details I am sweeping under the rug here, but they are not important for this question).
So, given:
#define LIBC_START_MAIN generic_start_main
int LIBC_START_MAIN() { ... }
This is what the compiler sees after preprocessing:
int generic_start_main() { ... }
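If you want to watch the substitution happen, one general technique (not something from the original answer) is to run only the preprocessor on a tiny, hypothetical file:

/* demo.c -- illustrates the substitution; not part of glibc */
#define LIBC_START_MAIN generic_start_main

int LIBC_START_MAIN(void) { return 0; }

Running gcc -E demo.c prints the preprocessed translation unit, and the last line comes out as int generic_start_main(void) { return 0; }.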

Variadic templates vs vsnprintf

While attempting to upgrade the following function to C++11
int inline formatText(const char* const fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    int len = std::min(vsnprintf(m_text, m_textSize, fmt, args), m_textSize);
    va_end(args);
    m_text[len] = '\0'; // assume m_textSize is storage size - sizeof('\0')
    return len;
}
Obviously, since printf deals with PODs, it's not really a problem for this function to accept its arguments by value.
But I realized I wasn't clear on how to achieve macro-like exact forwarding of the arguments. I realize that by inlining a simple version of this the compiler can eliminate the pass-by-value, but I'm not exactly sure which of the following three approaches is technically best:
template<typename... Args>
#if defined(APPROACH_1)
int formatText(const char* const fmt, Args... args)
#elif defined(APPROACH_2)
int formatText(const char* const fmt, const Args&... args)
#else
int formatText(const char* const fmt, Args&&... args)
#endif
{
    int len = std::min(snprintf(m_text, m_textSize, fmt, std::forward<Args>(args)...), m_textSize);
    m_text[len] = '\0';
    return len;
}
Since we're talking printf here, copy-by-value isn't terrible, because I shouldn't be passing it non-POD objects; specifying const ref would certainly help complex objects that I don't want copied, but it's clearly counter-productive for the normal use cases of printf. I'm plain not sure what the side effects of approach 3 are.
Everything I've read so far has left me with the impression that 'Args&&...' is likely to be the normal case for variadic templates, but in this case my gut tells me to go for approach #1.
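
To make the trade-off concrete, here is a minimal, self-contained sketch (not the asker's class; the buffer size and member names are invented) of the plain by-value variant, which is sufficient when every argument is a POD, as in printf-style use:

#include <algorithm>
#include <cstdio>

struct TextBuf {
    char m_text[256];
    int  m_textSize = sizeof(m_text) - 1; // storage size minus the '\0'

    template<typename... Args>
    int formatText(const char* const fmt, Args... args)
    {
        int len = std::min(std::snprintf(m_text, m_textSize, fmt, args...), m_textSize);
        if (len < 0)
            len = 0; // snprintf reports encoding errors as a negative value
        m_text[len] = '\0';
        return len;
    }
};

int main()
{
    TextBuf b;
    b.formatText("%s has %d entries", "table", 42);
    std::puts(b.m_text);
}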

VC++ WINAPI Form: Identifier Not Found (C3861 Error)

I'm working on a port from some old Delphi code to VC++ 2013, and I'm encountering an error that I feel should be an easy fix but cannot for the life of me figure out...
The problem is this: I have a number of common utility functions in a local file Utils.h that I am deploying as part of a Windows form. Most (90%) of the functions in this header work as normal. GetMsg(...), however, triggers a C3861 "identifier not found" error...
Utils.h (snippet): GetMsg declared at bottom
#pragma once
/*------------------------------------------------------------------------*
Includes:
*------------------------------------------------------------------------*/
using namespace std;
/*------------------------------------------------------------------------*
Constants:
*------------------------------------------------------------------------*/
#define GET_MSG_TIMEOUT 2
/*------------------------------------------------------------------------*
Typedefs, Structs, Enums:
*------------------------------------------------------------------------*/
typedef union
{
    unsigned long ui32;
    unsigned char ui8[4];
} UI32_UI8;
typedef union
{
    unsigned short ui16;
    unsigned char ui8[2];
} UI16_UI8;
typedef union
{
    float f;
    unsigned char ui8[4];
} F_UI8;
typedef struct
{
    string sName;
    string sVersion;
    string sCompany;
    string sCopyright;
} PRODUCT_INFORMATION;
/*------------------------------------------------------------------------*
Prototypes:
*------------------------------------------------------------------------*/
unsigned short SwapShort(unsigned short aShort);
float SwapFloat(float aFloat);
unsigned long SwapLong(unsigned long aLong);
unsigned int ReadLine(unsigned char *msgBuf, SerialPort^ Hdl, bool ReturnLF);
void __stdcall FillTheBuffer(char *buf, String sss, int length);
string __stdcall FillTheString(string sss, int length);
unsigned int __stdcall GetMsg(SerialPort^ Hdl, unsigned char *msgBuf);
GetMsg Definition in Utils.cpp:
//---------------------------------------------------------
unsigned int __stdcall GetMsg(SerialPort^ Hdl, unsigned char *msgBuf)
{
...
}
And, finally, GetMsg usage in form file:
#include "Utils.h"
...
void MainForm::UploadButton_Click
    (System::Object^ object, System::EventArgs^ e)
{
    ...
    SwapShort(1);         // Works fine, also declared in Utils.h
    GetMsg(spCom, inBuf); // C3861 ERROR
    ...
}
Here spCom is a SerialPort^ contained, configured, and opened within the Windows form, and inBuf is a simple array of characters (char*) to buffer the input. I've tried renaming the function, thinking that there may have been an unintentional conflict or overload in other files, to no avail.
Any advice? Thanks in advance.
Solved the problem -- As it turns out I needed to be more explicit in my function definitions. Changing the declaration to read
GetMsg(System::IO::Ports::SerialPort^ Hdl, unsigned char *msgBuf)
eliminated the C3861 error. It would seem that the lack of a specific namespace on the declaration satisfied IntelliSense but confused the compiler, leaving it unable to determine which prototype to use for the call.
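For reference, a sketch of the corrected prototype in Utils.h with the namespace spelled out (return type and calling convention taken from the original declaration):

unsigned int __stdcall GetMsg(System::IO::Ports::SerialPort^ Hdl, unsigned char *msgBuf);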

intercepting the openat() system call for GNU tar

I'm trying to intercept the openat() system call on Linux using a custom shared library that I can load via LD_PRELOAD. An example intercept-openat.c has this content:
#define _GNU_SOURCE
#include <sys/types.h>
#include <sys/stat.h>
#include <unistd.h>
#include <stdio.h>
#include <dlfcn.h>

int (*_original_openat)(int dirfd, const char *pathname, int flags, mode_t mode);

void init(void) __attribute__((constructor));
int openat(int dirfd, const char *pathname, int flags, mode_t mode);

void init(void)
{
    _original_openat = (int (*)(int, const char *, int, mode_t))
        dlsym(RTLD_NEXT, "openat");
}

int openat(int dirfd, const char *pathname, int flags, mode_t mode)
{
    fprintf(stderr, "intercepting openat()...\n");
    return _original_openat(dirfd, pathname, flags, mode);
}
I compile it via gcc -fPIC -Wall -shared -o intercept-openat.so intercept-openat.c -ldl. Then, when I run this small example program:
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int fd;

    fd = openat(AT_FDCWD, "/home/feh/.vimrc", O_RDONLY);
    if (fd == -1)
        return -1;
    close(fd);
    return 0;
}
The openat() call is intercepted by the library:
$ LD_PRELOAD=./intercept-openat.so ./openat
intercepting openat()...
However, the same does not happen with GNU tar, even though it uses the same system call:
$ strace -e openat tar cf /tmp/t.tgz .vimrc
openat(AT_FDCWD, ".vimrc", O_RDONLY|O_NOCTTY|O_NONBLOCK|O_NOFOLLOW|O_CLOEXEC) = 4
$ LD_PRELOAD=./intercept-openat.so tar cf /tmp/t.tgz .vimrc
So the custom openat() from intercept-openat.so is not being called. Why is that?
It uses the same system call, but apparently it does not reach it through the same C library function. Alternatively, it could be that it does, but the function is statically linked.
Either way, I think you've proved that it never dynamically links a function named "openat". If you still want to pursue this option, you might like to see whether it links against a specific version of that function, but that's a long shot.
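As a quick check (a general diagnostic, not part of the original answer), you can list the dynamic symbols the tar binary actually imports; if nothing named openat shows up, an LD_PRELOAD wrapper has nothing to hook:

$ objdump -T $(which tar) | grep openat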
You can still intercept the system call by writing a tracer program that uses ptrace. This is the same interface used by strace and gdb. It will carry a higher performance penalty, though.
http://linux.die.net/man/2/ptrace

Call C++ library from main.mm results in compile error - verify declaration conflicts with macro

I have the following code in a header included in main.mm:
1. virtual int truncate(DbTxn *, u_int32_t *, u_int32_t);
2. virtual int upgrade(const char *name, u_int32_t flags);
3. virtual int verify(
4. const char *, const char *, __DB_STD(ostream) *, u_int32_t);
The first two lines are for context and to show what is working. The third and fourth lines have the following errors:
Macro "verify" passed 4 arguments, but takes just 1
'verify' declared as a 'virtual' field
If I add a random character to the end of the verify declaration, e.g. verifyx, then the file compiles without a problem. Is verify reserved?
Edit:
My main.mm file:
#import <Foundation/Foundation.h>
#import "db_cxx.h"
int main (int argc, const char * argv[])
{
    return 0;
}
Edit 2:
The only two uses of the word "verify" in the Berkeley DB header are:
virtual int log_verify(DB_LOG_VERIFY_CONFIG *);
virtual int verify(
    const char *, const char *, __DB_STD(ostream) *, u_int32_t);
Macro "verify" passed 4 arguments, but takes just 1
means that there's a #define verify(x) ... somewhere. It's not reserved in C++ but something you're including is defining it.
A quick
fgrep -r verify /usr/include | fgrep '#define'
yields, amongst a lot of other things,
/usr/include/AssertMacros.h: #define verify(assertion) __Verify(assertion)
After you've included all the OS X/iOS headers you need, it should be safe to #undef verify before including the Berkeley DB header.
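Applied to the main.mm shown above, a minimal sketch of that advice might look like this (assuming db_cxx.h is the only header that needs the verify name):

#import <Foundation/Foundation.h>

#undef verify   // drop the function-like macro from AssertMacros.h
#import "db_cxx.h"

int main (int argc, const char * argv[])
{
    return 0;
}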
