Masks (macros of bits)

I'm about to finish my university course in C programming.
I want to get better at bit operations (such as creating masks), so here is my attempt:
#define BIT_I_SET(TYPE,I) ((TYPE)(1) << (I))
#define SET_BIT(NUM,I,TYPE) \
NUM |= BIT_I_SET(I,TYPE)
I am still at the early stages, learning the syntax, and I have no idea why the compiler reports this error:
Error (active) E0109: expression preceding parentheses of apparent call must have (pointer-to-) function type
Full program (yes, it's for syntax practice only):
#include <stdio.h>
#include <stdlib.h>
#define SHIFT(I,TYPE) ((TYPE)(1) << (I))
#define NEGATIVE(TYPE) (~(TYPE)(0))
#define BIT_I_SET(TYPE,I) ((TYPE)(1) << (I))
#define BIT_I_CLEAR(I,TYPE) (~((TYPE)(1)<< (I)))
#define MSB_SET(TYPE) ((TYPE)(1) << (sizeof(TYPE)*8-1))
#define SET_BIT(NUM,I,TYPE) \
NUM |= BIT_I_SET(I,TYPE)
int main(void)
{
unsigned char i, j;
int shift = 3;
i = 0;
j = 0;
SET_BIT(j, 2, unsigned char);
printf("%d\n",sizeof(j));
printf("%d",i);
system("pause>null");
}

Change
NUM |= BIT_I_SET(I,TYPE)
to
NUM |= BIT_I_SET(TYPE, I)
BIT_I_SET is declared as BIT_I_SET(TYPE,I), but SET_BIT passes its arguments in the opposite order, so the type and the bit index trade places during expansion.
You can run just the preprocessor stage of your compiler, which expands the macros, using the command:
gcc -E file.c
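For example, here is roughly what the preprocessor produces for SET_BIT(j, 2, unsigned char) in each case (a sketch; exact spacing varies by compiler):
/* buggy macro: BIT_I_SET receives TYPE = 2 and I = unsigned char */
j |= ((2)(1) << (unsigned char)); /* (2)(1) parses as a call to the non-function 2: error E0109 */
/* fixed macro: BIT_I_SET receives TYPE = unsigned char and I = 2 */
j |= ((unsigned char)(1) << (2)); /* a cast and a shift, as intended */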

Related

GCC AVX __m256i cast to int array leads to wrong values [duplicate]

I'm trying to learn to code using intrinsics, and below is a program that does addition.
Compiler used: icc
#include<stdio.h>
#include<emmintrin.h>
int main()
{
__m128i a = _mm_set_epi32(1,2,3,4);
__m128i b = _mm_set_epi32(1,2,3,4);
__m128i c;
c = _mm_add_epi32(a,b);
printf("%d\n",c[2]);
return 0;
}
I get the below error:
test.c(9): error: expression must have pointer-to-object type
printf("%d\n",c[2]);
How do I print the values in the variable c, which is of type __m128i?
Use this function to print them:
#include <stdio.h>
#include <stdint.h>
#include <string.h>
void print128_num(__m128i var)
{
uint16_t val[8];
memcpy(val, &var, sizeof(val));
printf("Numerical: %i %i %i %i %i %i %i %i \n",
val[0], val[1], val[2], val[3], val[4], val[5],
val[6], val[7]);
}
You split the 128 bits into 16-bit (or 32-bit) chunks before printing them.
Here is a way to split into 64-bit halves and print them, if you have 64-bit support available:
#include <stdio.h>
#include <string.h>
#include <inttypes.h>
void print128_num(__m128i var)
{
int64_t v64val[2];
memcpy(v64val, &var, sizeof(v64val));
printf("%.16llx %.16llx\n", v64val[1], v64val[0]);
}
Note: casting &var directly to an int* or uint16_t* would also work on MSVC, but it violates strict aliasing and is undefined behaviour. Using memcpy is the standards-compliant way to do the same thing, and with minimal optimization the compiler will generate exactly the same binary code.
The helpers below have these properties:
- Portable across gcc/clang/ICC/MSVC, C and C++.
- Fully safe with all optimization levels: no strict-aliasing violation UB.
- Print in hex as u8, u16, u32, or u64 elements (based on @AG1's answer).
- Print in memory order (least-significant element first, like _mm_setr_epiX). Reverse the array indices if you prefer printing in the same order Intel's manuals use, where the most significant element is on the left (like _mm_set_epiX). Related: Convention for displaying vector registers.
Using a __m128i* to load from an array of int is safe because the __m128 types are defined to allow aliasing just like ISO C unsigned char*. (e.g. in gcc's headers, the definition includes __attribute__((may_alias)).)
The reverse isn't safe (pointing an int* onto part of a __m128i object). MSVC guarantees that's safe, but GCC/clang don't (-fstrict-aliasing is enabled at -O2 and higher). It sometimes works with GCC/clang, but why risk it? It sometimes even interferes with optimization; see this Q&A. See also Is `reinterpret_cast`ing between hardware SIMD vector pointer and the corresponding type an undefined behavior?
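A minimal sketch of the asymmetry (the variable names are mine):
#include <emmintrin.h>
int arr[4] = {1, 2, 3, 4};
__m128i v = _mm_loadu_si128((const __m128i*)arr); /* OK: __m128i is allowed to alias int */
/* int first = *(const int*)&v; */ /* NOT OK: int* may not alias __m128i under gcc/clang */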
See GCC AVX __m256i cast to int array leads to wrong values for a real-world example of GCC breaking code which points an int* at a __m256i.
(uint32_t*) &my_vector violates the C and C++ aliasing rules, and is not guaranteed to work the way you'd expect. Storing to a local array and then accessing it is guaranteed to be safe. It even optimizes away with most compilers, so you get movq / pextrq directly from xmm to integer registers instead of an actual store/reload, for example.
Source + asm output on the Godbolt compiler explorer: proof it compiles with MSVC and so on.
#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>
#ifndef __cplusplus
#include <stdalign.h> // C11 defines _Alignas(). This header defines alignas()
#endif
void p128_hex_u8(__m128i in) {
alignas(16) uint8_t v[16];
_mm_store_si128((__m128i*)v, in);
printf("v16_u8: %x %x %x %x | %x %x %x %x | %x %x %x %x | %x %x %x %x\n",
v[0], v[1], v[2], v[3], v[4], v[5], v[6], v[7],
v[8], v[9], v[10], v[11], v[12], v[13], v[14], v[15]);
}
void p128_hex_u16(__m128i in) {
alignas(16) uint16_t v[8];
_mm_store_si128((__m128i*)v, in);
printf("v8_u16: %x %x %x %x, %x %x %x %x\n", v[0], v[1], v[2], v[3], v[4], v[5], v[6], v[7]);
}
void p128_hex_u32(__m128i in) {
alignas(16) uint32_t v[4];
_mm_store_si128((__m128i*)v, in);
printf("v4_u32: %x %x %x %x\n", v[0], v[1], v[2], v[3]);
}
void p128_hex_u64(__m128i in) {
alignas(16) unsigned long long v[2]; // uint64_t might give format-string warnings with %llx; it's just long in some ABIs
_mm_store_si128((__m128i*)v, in);
printf("v2_u64: %llx %llx\n", v[0], v[1]);
}
If you need portability to C99 or C++03 or earlier (i.e. without C11 / C++11), remove the alignas() and use storeu instead of store. Or use __attribute__((aligned(16))) or __declspec( align(16) ) instead.
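For example, a pre-C11 variant of the u32 printer might look like this (my sketch, not from the original answer):
void p128_hex_u32_c99(__m128i in) {
    uint32_t v[4]; /* no 16-byte alignment guaranteed here */
    _mm_storeu_si128((__m128i*)v, in); /* unaligned store is safe for any address */
    printf("v4_u32: %#x %#x %#x %#x\n", v[0], v[1], v[2], v[3]);
}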
(If you're writing code with intrinsics, you should be using a recent compiler version. Newer compilers usually make better asm than older compilers, including for SSE/AVX intrinsics. But maybe you want to use gcc-6.3 with -std=gnu++03 C++03 mode for a codebase that isn't ready for C++11 or something.)
Sample output from calling all 4 functions on:
// source used:
__m128i vec = _mm_setr_epi8(1, 2, 3, 4, 5, 6, 7, 8,
9, 10, 11, 12, 13, 14, 15, 16);
// output:
v2_u64: 0x807060504030201 0x100f0e0d0c0b0a09
v4_u32: 0x4030201 0x8070605 0xc0b0a09 0x100f0e0d
v8_u16: 0x201 0x403 0x605 0x807 | 0xa09 0xc0b 0xe0d 0x100f
v16_u8: 0x1 0x2 0x3 0x4 | 0x5 0x6 0x7 0x8 | 0x9 0xa 0xb 0xc | 0xd 0xe 0xf 0x10
Adjust the format strings if you want to pad with leading zeros for consistent output width. See printf(3).
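For example, a zero-padded u32 line (my tweak, not in the original):
printf("v4_u32: %#010x %#010x %#010x %#010x\n", v[0], v[1], v[2], v[3]);
/* %#010x pads to 10 characters including the 0x prefix, e.g. 0x04030201 */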
I know this question is tagged C, but it was also the best search result when looking for a C++ solution to the same problem.
So, this could be a C++ implementation:
#include <string>
#include <cstring>
#include <sstream>
#if defined(__SSE2__)
template <typename T>
std::string __m128i_toString(const __m128i var) {
std::stringstream sstr;
T values[16/sizeof(T)];
std::memcpy(values,&var,sizeof(values)); //See discussion below
if (sizeof(T) == 1) {
for (unsigned int i = 0; i < sizeof(__m128i); i++) { //C++11: Range for also possible
sstr << (int) values[i] << " ";
}
} else {
for (unsigned int i = 0; i < sizeof(__m128i) / sizeof(T); i++) { //C++11: Range for also possible
sstr << values[i] << " ";
}
}
return sstr.str();
}
#endif
Usage:
#include <iostream>
[..]
__m128i x;
[..]
std::cout << __m128i_toString<uint8_t>(x) << std::endl;
std::cout << __m128i_toString<uint16_t>(x) << std::endl;
std::cout << __m128i_toString<uint32_t>(x) << std::endl;
std::cout << __m128i_toString<uint64_t>(x) << std::endl;
Result:
141 114 0 0 0 0 0 0 151 104 0 0 0 0 0 0
29325 0 0 0 26775 0 0 0
29325 0 26775 0
29325 26775
Note: there exists a simple way to avoid the if (sizeof(T)==1); see https://stackoverflow.com/a/28414758/2436175
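For instance, assuming the linked trick is the usual unary-plus promotion (my sketch), both branches collapse into one loop:
for (unsigned int i = 0; i < sizeof(__m128i) / sizeof(T); i++) {
    sstr << +values[i] << " "; // unary + promotes uint8_t to int, so it prints numerically
}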
#include <stdio.h>
#include <stdint.h>
#include <emmintrin.h>
int main()
{
__m128i a = _mm_set_epi32(1,2,3,4);
__m128i b = _mm_set_epi32(1,2,3,4);
__m128i c;
const int32_t* q;
//add a pointer
c = _mm_add_epi32(a,b);
q = (const int32_t*) &c;
printf("%d\n",q[2]);
//printf("%d\n",c[2]);
return 0;
}
Try this code. (Note that q = (const int32_t*) &c is exactly the aliasing cast the answers above warn about; MSVC defines it, but gcc/clang do not, so memcpy or _mm_store_si128 remains the portable choice.)

how does subtraction work in a macro definition

Why does C equal 2? Shouldn't it be 0, since A+1 = 1 and 1-B = 0? How does this work?
#include <iostream>
#include <string>
using namespace std;
#define A 0
#define B A+1
#define C 1-B
int main()
{
cout << C ;
}
Thanks to Lakshay Garg I understand what he means: the preprocessor just replaces the macro with whatever I define, so in the case of #define C 1-B, the B is replaced by A+1, which is 0+1.
So my cout prints C = 1-0+1, which is 2. Again, thanks for the help.
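The conventional fix (a sketch, not from the thread itself) is to parenthesize macro bodies so the substituted text keeps its intended grouping:
#define A 0
#define B (A+1)
#define C (1-B)
// C now expands to (1-(0+1)), which evaluates to the 0 originally expected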

difference in output using std::size_t and std::bitset for bit operations

Given the following code:
#include <iostream>
#include <bitset>
#include <limits>
#include <limits.h>
using namespace std;
constexpr std::size_t maxBits = CHAR_BIT * sizeof(std::size_t);
int main() {
std::size_t value =47;
unsigned int begin=0;
unsigned int end=32;
//std::size_t allBitsSet(std::numeric_limits<std::size_t>::max());
std::bitset<maxBits> allBitsSet(std::numeric_limits<std::size_t>::max());
//std::size_t mask((allBitsSet >> (maxBits - end)) ^(allBitsSet >> (maxBits - begin)));
std::bitset<maxBits> mask = (allBitsSet >> (maxBits - end)) ^(allBitsSet >> (maxBits - begin));
//std::size_t bitsetValue(value);
std::bitset<maxBits> bitsetValue(value);
auto maskedValue = bitsetValue & mask;
auto result = maskedValue >> begin;
//std::cout << static_cast<std::size_t>(result) << std::endl;
std::cout << static_cast<std::size_t>(result.to_ulong()) << std::endl;
}
This should in fact give back the same value as value, but for some reason the version with std::bitset works just fine and the version with std::size_t does not.
It is strange, because AFAIK std::bitset simply throws an exception when something is wrong, and AFAIK a bitset should behave the same way as operations on unsigned integers; but as we can see, even though the bitset has the same number of bits, it does not behave the same. In fact it seems to me that std::bitset works fine while std::size_t does not.
My configuration is:
intel corei7 - g++-5.4.0-r3
[expr.shift]/1 ... The behavior [of the shift operator - IT] is undefined if the right operand is negative, or greater than or equal to the length in bits of the promoted left operand.
Emphasis mine. allBitsSet >> (maxBits - begin) (in the non-bitset version) exhibits undefined behavior: begin is 0, so the shift count maxBits - begin equals maxBits, the full width in bits of std::size_t.
On the other hand, the behavior of bitset::operator>> is well-defined: allBitsSet >> (maxBits - begin) produces a bitset with all zero bits.
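One way to make the std::size_t version well-defined (my sketch, assuming begin and end stay within [0, maxBits]) is to special-case the full-width shift before it happens:
std::size_t safe_shr(std::size_t v, unsigned n) {
    return n >= maxBits ? 0 : v >> n; // shifting by >= the width would be UB
}
// std::size_t mask = safe_shr(allBitsSet, maxBits - end) ^ safe_shr(allBitsSet, maxBits - begin);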

Is there a memset-like function which can set integer value in visual studio?

1. It is a pity that memset(void* dst, int value, size_t size) fools a lot of people when they first use it! The second parameter, int value, should really be unsigned char value to describe the real operation inside: the value is converted to unsigned char and written byte by byte.
Don't misunderstand me: I am asking for a memset-like function!
2. I know there are C++ candy functions like std::fill_n(my_array, array_length, constant_value);
and even a pure C function on OS X: memset_pattern4(grid, &pattern, sizeof grid);
both mentioned in a perfect thread, Why is memset() incorrectly initializing int?.
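For anyone who has not hit the pitfall yet, a tiny demonstration (assuming 32-bit int):
#include <stdio.h>
#include <string.h>
int main(void) {
    int a[4];
    memset(a, 1, sizeof a); /* writes the byte 0x01 into every byte of the array */
    printf("%d\n", a[0]); /* prints 16843009 (0x01010101), not 1 */
    return 0;
}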
So, is there a similar C function in the Visual Studio runtime library, like memset_pattern4()?
3. For those asking why I wouldn't use a for-loop to set the integers one by one: memset gives better performance when setting a big chunk (10K?) of memory, at least on x86.
http://www.gamedev.net/topic/472631-performance-of-memset/page-2 gives more discussion, although without a conclusion (I doubt there will be one).
4. Such a function can be used to simplify counting sort by avoiding the prefix-sum accumulation pass.
Original:
for (int i = 0; i < SRC_ARRY_SIZE; i++)
counter_arry[src_arry[i]]++;
for (int i = SRC_LOW_BOUND; i < SRC_HI_BOUND; i++) // prefix-sum accumulation
counter_arry[i+1] += counter_arry[i];
for (int i = 0; i < SRC_ARRY_SIZE; i++)
{
value = src_arry[i];
map = --counter_arry[value];//then counter down!
temp[map] = value;
}
Expected:
for (int i = 0; i < SRC_ARRY_SIZE; i++)
counter_arry[src_arry[i]]++;
for (int i = SRC_LOW_BOUND; i < SRC_HI_BOUND+1; i++) // prefix sums no longer needed
{
memset_4(cur_ptr,i, counter_arry[i]);
cur_ptr += counter_arry[i];
}
Thanks for your kind review and reply!
Here's an implementation of memset_pattern4() that you might find useful. It's nothing like Darwin's SSE assembly language version, except that it has the same interface.
#include <string.h>
#include <stdint.h>
/*
A portable implementation of the Darwin memset_patternX() family of functions:
These are analogous to memset(), except that they fill memory with a replicated
pattern either 4, 8, or 16 bytes long. b points to a buffer of size len bytes
which is to be filled. The second parameter points to the pattern. If the
buffer length is not an even multiple of the pattern length, the last instance
of the pattern will be truncated. Neither the buffer nor the pattern pointer
need be aligned.
*/
/*
alignment utility macros stolen from Linux
see https://lkml.org/lkml/2006/11/25/2 for a discussion of why typeof() is used
*/
#if !_MSC_VER
#define __ALIGN_KERNEL(x, a) __ALIGN_KERNEL_MASK(x, (typeof(x))(a) - 1)
#define __ALIGN_KERNEL_MASK(x, mask) (((x) + (mask)) & ~(mask))
#define ALIGN(x, a) __ALIGN_KERNEL((x), (a))
#define __ALIGN_MASK(x, mask) __ALIGN_KERNEL_MASK((x), (mask))
#define PTR_ALIGN(p, a) ((typeof(p))ALIGN((uintptr_t)(p), (a)))
#define IS_ALIGNED(x, a) (((x) & ((typeof(x))(a) - 1)) == 0)
#define IS_PTR_ALIGNED(p, a) (IS_ALIGNED((uintptr_t)(p), (a)))
#else
/* MS friendly versions */
/* taken from the DDK's fltKernel.h header */
#define IS_ALIGNED(_pointer, _alignment) \
((((uintptr_t) (_pointer)) & ((_alignment) - 1)) == 0)
#define ROUND_TO_SIZE(_length, _alignment) \
((((uintptr_t)(_length)) + ((_alignment)-1)) & ~(uintptr_t) ((_alignment) - 1))
#define __ALIGN_KERNEL(x, a) ROUND_TO_SIZE( (x), (a))
#define ALIGN(x, a) __ALIGN_KERNEL((x), (a))
#define PTR_ALIGN(p, a) ALIGN((p), (a))
#define IS_PTR_ALIGNED(p, a) (IS_ALIGNED((uintptr_t)(p), (a)))
#endif
void nx_memset_pattern4(void *b, const void *pattern4, size_t len)
{
enum { pattern_len = 4 };
unsigned char* dst = (unsigned char*) b;
unsigned const char* src = (unsigned const char*) pattern4;
if (IS_PTR_ALIGNED( dst, pattern_len) && IS_PTR_ALIGNED( src, pattern_len)) {
/* handle aligned moves */
uint32_t val = *((uint32_t*)src);
uint32_t* word_dst = (uint32_t*) dst;
size_t word_count = len / pattern_len;
dst += word_count * pattern_len;
len -= word_count * pattern_len;
for (; word_count != 0; --word_count) {
*word_dst++ = val;
}
}
else {
while (pattern_len <= len) {
memcpy(dst, src, pattern_len);
dst += pattern_len;
len -= pattern_len;
}
}
memcpy( dst, src, len);
}
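A minimal usage sketch (mine, assuming the function above is in scope):
#include <stdio.h>
int main(void) {
    int buf[8];
    int pattern = 42;
    nx_memset_pattern4(buf, &pattern, sizeof buf); /* fill the buffer with the int 42 */
    for (int i = 0; i < 8; i++)
        printf("%d ", buf[i]); /* prints: 42 42 42 42 42 42 42 42 */
    putchar('\n');
    return 0;
}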

MakeCodeWritable

Good afternoon.
I got the code below from a book. I'm trying to execute it, but I don't know what the "first" and "last" parameters of the makeCodeWritable function are, or where I can find them. Can someone help? This code is about a C obfuscation method. I'm using Xcode and the LLVM GCC 4.2 compiler.
#include <stdio.h>
#include <sys/mman.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
typedef unsigned int uint32;
typedef char* caddr_t;
typedef uint32* waddr_t;
#define Tam_celula 64
#define ALIGN __attribute__((aligned(Tam_celula)))
void makeCodeWritable(char* first, char* last) {
char* firstpage = first - ((int)first % getpagesize());
char* lastpage = last - ((int)last % getpagesize());
int pages = (lastpage-firstpage)/getpagesize()+1;
if (mprotect(firstpage,pages*getpagesize(), PROT_READ|PROT_EXEC|PROT_WRITE)==-1) perror("mprotect");
}
void xor(caddr_t from, caddr_t to, int len){
int i;
for(i=0;i<len;i++){
*to ^= *from; from++; to++;
} }
void swap(caddr_t from, caddr_t to, int len){
int i;
for(i=0;i<len;i++){
char t = *from; *from = *to; *to = t; from++; to++;
} }
#define CELLSIZE 64
#define ALIGN asm volatile (".align 64\n");
void P() {
static int firsttime=1; if (firsttime) {
xor(&&cell5,&&cell2,CELLSIZE);
xor(&&cell0,&&cell3,CELLSIZE);
swap(&&cell1,&&cell4,CELLSIZE);
firsttime = 0; }
char* a[] = {&&align0,&&align1,&&align2,&&align3,&&align4,&&align5};
char*next[] ={&&cell0,&&cell1,&&cell2,&&cell3, &&cell4,&&cell5};
goto *next[0];
align0: ALIGN
cell0: printf("SPGM0\n");
xor(&&cell0,&&cell3,3*CELLSIZE);
goto *next[3];
align1: ALIGN
cell1: printf("SPGM2\n"); xor(&&cell0,&&cell3,3*CELLSIZE);
goto *next[4];
align2: ALIGN
cell2: printf("SPGM4\n"); xor(&&cell0,&&cell3,3*CELLSIZE);
goto *next[5];
align3: ALIGN
cell3: printf("SPGM1\n"); xor(&&cell3,&&cell0,3*CELLSIZE);
goto *next[1];
align4: ALIGN
cell4: printf("SPGM3\n"); xor(&&cell3,&&cell0,3*CELLSIZE);
goto *next[2];
align5: ALIGN
cell5: printf("SPGM5\n");
xor(&&cell3,&&cell0,3*CELLSIZE);
}
int main (int argc, char *argv[]) {
makeCodeWritable(...);
P(); P();
}
The first argument should be (char *)P, because it looks like you want to modify code inside function P. The second argument is the ending address of function P. You can first compile the code and use objdump -d to see the addresses of the beginning and end of P, then calculate the size of the function, SIZE, and specify it manually: makeCodeWritable((char *)P, ((char *)P) + SIZE).
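A hypothetical sketch (the addresses below stand in for whatever objdump -d actually prints for P on your build):
/* suppose the disassembly shows P starting at 0x1150 and the next symbol at 0x1347 */
size_t SIZE = 0x1347 - 0x1150;
makeCodeWritable((char *)P, (char *)P + SIZE);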
The second way is to use as to get the size of function P, but this depends on the assembler for your platform. Below is a snippet I modified from your code; it should compile and run on x86 and x86_64 with GCC 4.x on Linux.
align5: ALIGN
cell5: printf("SPGM5\n");
xor(&&cell3,&&cell0,3*CELLSIZE);
// add a label at the end of function P in the assembly output
asm ("END_P: \n");
;
}
extern char __sizeof__myfunc[];
int main (int argc, char *argv[]) {
// calculate the code size, ending - starting address of P
asm (" __sizeof__myfunc = END_P-P \n");
// you can see the code size of P
printf("code size is %d\n", (unsigned)__sizeof__myfunc);
makeCodeWritable( (char*)P, ((char *)P) + (unsigned)__sizeof__myfunc);
P(); P();
}
With some modifications to support LLVM GCC and as on Mac OS X:
int main (int argc, char *argv[]) {
size_t sizeof__myfunc = 0;
asm volatile ("movq $(_END_P - _P),%0;"
: "=r" (sizeof__myfunc)
: );
printf("%zu\n", sizeof__myfunc);
makeCodeWritable((char *)P, ((char *)P) + sizeof__myfunc);
P(); P();
}
