I found a weird function signature in glibc, can anyone explain it to me? - glibc

#include <string.h>
#include <memcopy.h>
#include <pagecopy.h>

#undef memcpy

void *
memcpy (dstpp, srcpp, len)
     void *dstpp;
     const void *srcpp;
     size_t len;
{
  unsigned long int dstp = (long int) dstpp;
  unsigned long int srcp = (long int) srcpp;
...
This is the implementation of our familiar library function memcpy, but I don't recognize this style of signature and have never seen it before. Can anyone tell me what it is?

This is the implementation of our familiar library function memcpy, but I don't recognize this style of signature
Which aspect of it don't you recognize? Is it the fact that it is written in K&R (pre-ANSI) C, or something else? Would you have recognized it if it were written this way instead:
void *
memcpy(void *dstpp, const void *srcpp, size_t len)
{
...
}
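For context, K&R C puts only the parameter names inside the parentheses and declares their types between the parameter list and the function body. A minimal sketch of the same function written in both styles (a hypothetical add function; note that K&R-style definitions were only removed from the language in C23, so they compile under older dialects):

/* K&R (pre-ANSI) style: names in the list, types declared below */
int
add_kr (a, b)
     int a;
     int b;
{
  return a + b;
}

/* Equivalent ANSI C style */
int
add_ansi (int a, int b)
{
  return a + b;
}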

Related

How Do I use libbsm/openbsm with Swift5 on macOS?

I am trying to use the libbsm/openbsm library with Swift 5 in Xcode 11.
Why can I not find AUDITPIPE_SET_PRESELECT_MODE?
Can I use libbsm/openbsm in Swift?
var mode = AUDITPIPE_PRESELECT_MODE_LOCAL // <- works very well
if ioctl(auditFD, AUDITPIPE_SET_PRESELECT_MODE, &mode) == -1 {
return -1
}
I ran into the same issue with Swift being unable to import these complex macros. Quinn "The Eskimo!" pointed me in the direction of declaring them as C constants, so I created them in a C file in my project.
In the .h file I did the following:
#include <stdio.h>
#include <bsm/libbsm.h>
#include <sys/ioctl.h>
#include <security/audit/audit_ioctl.h>
// -- Macros used for creating an auditpipe
extern unsigned long QAUDITPIPE_SET_PRESELECT_MODE;
extern unsigned long QAUDITPIPE_GET_QLIMIT_MAX;
extern unsigned long QAUDITPIPE_SET_QLIMIT;
extern unsigned long QAUDITPIPE_SET_PRESELECT_FLAGS;
extern unsigned long QAUDITPIPE_SET_PRESELECT_NAFLAGS;
// -- End auditpipe macros
And in my .c file I have:
unsigned long QAUDITPIPE_SET_PRESELECT_MODE = AUDITPIPE_SET_PRESELECT_MODE;
unsigned long QAUDITPIPE_GET_QLIMIT_MAX = AUDITPIPE_GET_QLIMIT_MAX;
unsigned long QAUDITPIPE_SET_QLIMIT = AUDITPIPE_SET_QLIMIT;
unsigned long QAUDITPIPE_SET_PRESELECT_FLAGS = AUDITPIPE_SET_PRESELECT_FLAGS;
unsigned long QAUDITPIPE_SET_PRESELECT_NAFLAGS = AUDITPIPE_SET_PRESELECT_NAFLAGS;
So now I can just use QAUDITPIPE_SET_PRESELECT_MODE in my Swift code. Works great!
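For illustration, the failing ioctl call from the question would then look something like this in Swift (a sketch only; auditFD and the surrounding setup are assumed from the question, and the bridged constant may need a cast to match ioctl's request parameter type):

var mode = AUDITPIPE_PRESELECT_MODE_LOCAL
// QAUDITPIPE_SET_PRESELECT_MODE is the C constant bridged above
if ioctl(auditFD, QAUDITPIPE_SET_PRESELECT_MODE, &mode) == -1 {
    return -1
}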

exposing a function with 2D slice as a parameter in a c-shared library (to be used in Java via JNA and C)

I am trying to write a simple matrix operations API using Go and expose the APIs as a shared library. This shared library will be used from Java (using JNA) and from C.
The documentation is very sparse about using any data type beyond simple int or string as function parameters.
My requirement is to expose functions with 1 or more 2D slices as parameters AND also as return types. I am not able to figure out if such a thing is supported.
Is this possible? Are there any examples for this?
I think the key point is to look at the C bindings for slice, string, and int generated by the Go build tool. I have not tried a 2D slice, but it should be no different from a 1D slice with an unsafe pointer conversion; it may just take one more allocation and conversion.
I'm not sure it's the best way, but here's an example for a 1D slice.
The Go part:
import "C"
//export CFoo
func CFoo(content []byte) string{
var ret []byte
//blablabla to get ret
cbuf := unsafe.Pointer(C.malloc(C.size_t(len(ret))))
C.memcpy(cbuf, unsafe.Pointer(&ret[0]), C.size_t(len(ret)))
var finalString string
hdr := (*reflect.StringHeader)(unsafe.Pointer(&finalString))
hdr.Data = uintptr(unsafe.Pointer(cbuf))
hdr.Len = len(ret)
return finalString
}
Compile with go build -buildmode=c-shared -o libmygo.so to get libmygo.so.
I don't know JNA, but I expect it works like JNI. The JNI part, which also serves as the pure C part:
#include <stdio.h>
#include <jni.h>
#include <string.h>
typedef signed char GoInt8;
typedef unsigned char GoUint8;
typedef short GoInt16;
typedef unsigned short GoUint16;
typedef int GoInt32;
typedef unsigned int GoUint32;
typedef long long GoInt64;
typedef unsigned long long GoUint64;
typedef GoInt32 GoInt;
typedef GoUint32 GoUint;
typedef __SIZE_TYPE__ GoUintptr;
typedef float GoFloat32;
typedef double GoFloat64;
typedef float _Complex GoComplex64;
typedef double _Complex GoComplex128;
typedef struct { const char *p; GoInt n; } GoString;
typedef void *GoMap;
typedef void *GoChan;
typedef struct { void *t; void *v; } GoInterface;
typedef struct { void *data; GoInt len; GoInt cap; } GoSlice;
/* Declaration of the Go export (normally taken from the generated libmygo.h): */
extern GoString CFoo(GoSlice content);

JNIEXPORT jbyteArray JNICALL Java_com_mynextev_infotainment_app_myev_Native_foo(JNIEnv* env, jobject obj, jbyteArray content){
    void *Ccontent = (*env)->GetByteArrayElements(env, content, 0);
    int Lcontent = (*env)->GetArrayLength(env, content);
    GoSlice Gcontent = {Ccontent, Lcontent, Lcontent};
    /* Call into the Go shared library. */
    GoString gret = CFoo(Gcontent);
    if(!gret.n){
        printf("CFoo returned an empty result\n");
        (*env)->ReleaseByteArrayElements(env, content, Ccontent, JNI_ABORT);
        return NULL;
    }
    jbyteArray ret = (*env)->NewByteArray(env, gret.n);
    (*env)->SetByteArrayRegion(env, ret, 0, gret.n, (const jbyte *)gret.p);
    free((void *)gret.p); /* allocated with C.malloc on the Go side */
    (*env)->ReleaseByteArrayElements(env, content, Ccontent, JNI_ABORT);
    return ret;
}
Build it, linking against libmygo.so.
Finally you get two .so files: libmygo.so, which can be used standalone from C, and the JNI wrapper, which must be loaded together with libmygo.so from Java.
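As for the original 2D-slice question: cgo cannot marshal a [][]float64 directly, and exported Go functions may not return Go pointers (which rules out returning a slice), so one workaround is to flatten each matrix into a 1D slice, pass the dimensions alongside it, and return a C-allocated buffer. A sketch of that idea (CMatMul is a hypothetical export, not from the original answer; unsafe.Slice requires Go 1.17+):

/*
#include <stdlib.h>
*/
import "C"

import "unsafe"

//export CMatMul
// Multiply two row-major flattened matrices; the result is written
// into a C.malloc'd buffer that the caller is responsible for freeing.
func CMatMul(a []float64, b []float64, rowsA, colsA, colsB int) unsafe.Pointer {
    n := rowsA * colsB
    buf := C.malloc(C.size_t(n) * C.size_t(unsafe.Sizeof(float64(0))))
    out := unsafe.Slice((*float64)(buf), n)
    for i := 0; i < rowsA; i++ {
        for j := 0; j < colsB; j++ {
            var sum float64
            for k := 0; k < colsA; k++ {
                sum += a[i*colsA+k] * b[k*colsB+j]
            }
            out[i*colsB+j] = sum
        }
    }
    return buf
}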

VC++ WINAPI Form: Identifier Not Found (C3861 Error)

I'm working on a port from some old Delphi code to VC++ 2013, and I'm encountering an error that I feel should be an easy fix but cannot for the life of me figure out...
The problem is this: I have a number of common utility functions in a local file Utils.h that I am deploying as part of a Windows Form. Most (90%) of the functions in this header work as normal. GetMsg(...), however, throws a C3861 "identifier not found" error...
Utils.h (snippet): GetMsg declared at bottom
#pragma once
/*------------------------------------------------------------------------*
Includes:
*------------------------------------------------------------------------*/
using namespace std;
/*------------------------------------------------------------------------*
Constants:
*------------------------------------------------------------------------*/
#define GET_MSG_TIMEOUT 2
/*------------------------------------------------------------------------*
Typedefs, Structs, Enums:
*------------------------------------------------------------------------*/
typedef union
{
unsigned long ui32;
unsigned char ui8[4];
} UI32_UI8;
typedef union
{
unsigned short ui16;
unsigned char ui8[2];
} UI16_UI8;
typedef union
{
float f;
unsigned char ui8[4];
} F_UI8;
typedef struct
{
string sName;
string sVersion;
string sCompany;
string sCopyright;
} PRODUCT_INFORMATION;
/*------------------------------------------------------------------------*
Prototypes:
*------------------------------------------------------------------------*/
unsigned short SwapShort(unsigned short aShort);
float SwapFloat(float aFloat);
unsigned long SwapLong(unsigned long aLong);
unsigned int ReadLine(unsigned char *msgBuf, SerialPort^ Hdl, bool ReturnLF);
void __stdcall FillTheBuffer(char *buf, String sss, int length);
string __stdcall FillTheString(string sss, int length);
unsigned int __stdcall GetMsg(SerialPort^ Hdl, unsigned char *msgBuf);
GetMsg Definition in Utils.cpp:
//---------------------------------------------------------
unsigned int __stdcall GetMsg(SerialPort^ Hdl, unsigned char *msgBuf)
{
...
}
And, finally, GetMsg usage in form file:
#include "Utils.h"
...
void MainForm::UploadButton_Click
(System::Object^ object, System::EventArgs^ e)
{
...
SwapShort(1); //Works fine, also declared in Utils.h
GetMsg(spCom, inBuf); //C3861 ERROR
...
}
Where spCom is a (SerialPort^) contained, configured, and opened within the windows form. inBuf is a simple array of characters (char*) to buffer the input. I've tried renaming the function, thinking that there may have been an unintentional conflict / overload in other files, to no avail.
Any advice? Thanks in advance.
Solved the problem -- As it turns out I needed to be more explicit in my function definitions. Changing the declaration to read
GetMsg(System::IO::Ports::SerialPort^ Hdl, unsigned char *msgBuf)
eliminated the C3861 error. It would seem that the lack of a specific namespace on the declaration passed Intellisense but confused the compiler, rendering it unable to determine which prototype to use with the function call.
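For completeness, the same effect could likely be achieved by making the namespace visible in the header instead, though fully qualifying the declaration avoids pulling System::IO::Ports into every file that includes Utils.h (a sketch, not the asker's verified code):

// Utils.h -- either qualify the managed type at the declaration:
unsigned int __stdcall GetMsg(System::IO::Ports::SerialPort^ Hdl, unsigned char *msgBuf);

// ...or make the namespace visible before the prototype:
using namespace System::IO::Ports;
unsigned int __stdcall GetMsg(SerialPort^ Hdl, unsigned char *msgBuf);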

Separating out .cu and .cpp (using a C++11 library)

I am trying to convert a C++ program I have which uses the random library, a C++11 feature. After having read through a couple of similar posts here, I tried separating out the code into three files. At the outset I would like to say that I am not very conversant with C/C++ and mostly use R at work.
The main file looks as follows.
#ifndef _KERNEL_SUPPORT_
#define _KERNEL_SUPPORT_
#include <complex>
#include <random>
#include <iostream>
#include "my_code_header.h"
using namespace std;
std::default_random_engine generator;
std::normal_distribution<double> distribution(0.0,1.0);
const int rand_mat_length = 24561;
double rand_mat[rand_mat_length];// = {0};
void create_std_norm(){
for(int i = 0 ; i < rand_mat_length ; i++)
::rand_mat[i] = distribution(generator);
}
.
.
.
int main(void)
{
...
...
call_global();
return 0;
}
#endif
The header file looks as follows.
#ifndef mykernel_h
#define mykernel_h
void call_global();
void two_d_example(double *a, double *b, double *my_result, size_t length, size_t width);
#endif
And the .cu file looks like the following.
#ifndef _MY_KERNEL_
#define _MY_KERNEL_
#include <iostream>
#include "my_code_header.h"
#define TILE_WIDTH 8
using namespace std;
__global__ void two_d_example(double *a, double *b, double *my_result, size_t length, size_t width)
{
unsigned int row = blockIdx.y*blockDim.y + threadIdx.y;
unsigned int col = blockIdx.x*blockDim.x + threadIdx.x;
if ((row>length) || (col>width)) {
return;
}
...
}
void call_global()
{
const size_t imageLength = 528;
const size_t imageWidth = 528;
const dim3 threadsPerBlock(TILE_WIDTH,TILE_WIDTH);
const dim3 numBlocks(((imageLength) / threadsPerBlock.x), ((imageWidth) / threadsPerBlock.y));
double *d_a, *d_b, *mys ;
...
cudaMalloc((void**)&d_a, sizeof(double) * imageLength);
cudaMalloc((void**)&d_b, sizeof(double) * imageWidth);
cudaMalloc((void**)&mys, sizeof(double) * imageLength * imageWidth);
two_d_example<<<numBlocks,threadsPerBlock>>>(d_a, d_b, mys, imageLength, imageWidth);
...
cudaFree(d_a);
cudaFree(d_b);
}
#endif
Please note that __global__ has been removed from the .h, since I was getting the following error owing to the header being compiled by g++:
In file included from my_code_main.cpp:12:0:
my_code_header.h:5:1: error: β€˜__global__’ does not name a type
When I compile the .cu file with nvcc it is all fine and generates my_code_kernel.o. But since I am using C++11 in my .cpp, I am trying to compile it with g++, and I am getting the following error:
/tmp/ccR2rXzf.o: In function `main':
my_code_main.cpp:(.text+0x1c4): undefined reference to `call_global()'
collect2: ld returned 1 exit status
I understand that this might not have anything to do with CUDA as such and may just be the wrong way of including the header in both places. Also, what is the right way to compile and, most importantly, link my_code_kernel.o and my_code_main.o (hopefully)? Sorry if this question is too trivial!
It looks like you are not linking with my_code_kernel.o. You have used -c for your nvcc command (which causes it to compile but not link, i.e. generate the .o file); I'm going to guess that you're not using -c with your g++ command, in which case you need to add my_code_kernel.o to the list of inputs as well as the .cpp file.
The separation you are trying to achieve is completely possible; it just looks like you're not linking properly. If you still have problems, add the compilation commands to your question.
FYI: You don't need to declare two_d_example() in your header file, it is only used within your .cu file (from call_global()).
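For reference, a build sequence along these lines should work (a sketch; the file names come from the question, and the CUDA library path /usr/local/cuda/lib64 is an assumption that depends on the install):

# compile the device code with nvcc, the host code with g++
nvcc -c my_code_kernel.cu -o my_code_kernel.o
g++ -std=c++0x -c my_code_main.cpp -o my_code_main.o
# link both objects together, pulling in the CUDA runtime
g++ my_code_main.o my_code_kernel.o -L/usr/local/cuda/lib64 -lcudart -o my_program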

Signature difference in glShaderSource between OpenGL ES 2 revisions

I've noticed that the signature of glShaderSource has changed:
If you look at gl2.h from http://www.khronos.org/registry/gles/ (for me at rev 16803), the signature reads:
GL_APICALL void GL_APIENTRY glShaderSource (GLuint shader, GLsizei count, const GLchar* const* string, const GLint* length);
Now, on an older revision of that file (rev 10602), or when looking at the desktop version of that function, the signature reads:
GL_APICALL void GL_APIENTRY glShaderSource (GLuint shader, GLsizei count, const GLchar** string, const GLint* length);
Note the missing "const" for the string parameter...
Does anyone have an idea how one could detect this and keep backwards compatibility? As far as I can see, there is no "revision" macro defined that one could check in order to use one version or the other.
Background information: I'm trying to compile WebKit for an embedded platform that ships the new gl2.h header. And WebKit has this:
typedef void (*glShaderSourceType) (GLuint, GLsizei, const char**, const GLint*);
in http://opensource.apple.com/source/WebCore/WebCore-1298/platform/graphics/cairo/OpenGLShims.h and thus fails to compile when it tries to assign the glShaderSource function pointer to a glShaderSourceType variable with the new gl2.h version...
Thanks to the hint by Nicol Bolas! This seems to compile:
void foo(const char* const*) {}
void bar(const char**) {}
typedef void (*ftype)(const char**);

int main() {
    // the cast papers over the const difference between the two signatures
    ftype f1 = reinterpret_cast<ftype>(foo);
    ftype f2 = bar;
    return 0;
}
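A possible alternative, if C++11 is available: derive the typedef from whatever the header actually declares, so it tracks the revision automatically (a sketch, assuming glShaderSource is statically linked rather than resolved at runtime):

// decltype picks up whichever signature this gl2.h revision declares
typedef decltype(&glShaderSource) glShaderSourceType;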
