how to format my C++ code like this? - coding-style

To note in advance: AStyle can't do this for me.
I want the assignments in my code to line up in columns; it would drive me crazy to type the spaces manually. As you can see, after the modification the code is much more readable.
What's the best way to do this? Can anybody help me?
Here is the original code:
unsigned __int64 contentsStmSize = 0;
unsigned __int64 imageSize = 0;
unsigned __int64 fontSize = 0;
unsigned __int64 bookMarkSize = 0;
unsigned __int64 xObjectFormsSize = 0;
unsigned __int64 structureInfoSize = 0;
unsigned __int64 acroFormsSize = 0;
unsigned __int64 linkAnnotsSize = 0;
unsigned __int64 namedDestnationsSize = 0;
unsigned __int64 docOverheadSize = 0;
unsigned __int64 clrSpaceSize = 0;
unsigned __int64 patternInfoSize = 0;
unsigned __int64 shadingPatternInfoSize = 0;
unsigned __int64 extGraphicsStatesSize = 0;
unsigned __int64 crossRefTableSize = 0;
and here is what I want:
unsigned __int64 contentsStmSize        = 0;
unsigned __int64 imageSize              = 0;
unsigned __int64 fontSize               = 0;
unsigned __int64 bookMarkSize           = 0;
unsigned __int64 xObjectFormsSize       = 0;
unsigned __int64 structureInfoSize      = 0;
unsigned __int64 acroFormsSize          = 0;
unsigned __int64 linkAnnotsSize         = 0;
unsigned __int64 namedDestnationsSize   = 0;
unsigned __int64 docOverheadSize        = 0;
unsigned __int64 clrSpaceSize           = 0;
unsigned __int64 patternInfoSize        = 0;
unsigned __int64 shadingPatternInfoSize = 0;
unsigned __int64 extGraphicsStatesSize  = 0;
unsigned __int64 crossRefTableSize      = 0;

I like this style too, and I align my code with tabs to see the values better.
Another trick I use is to select several code lines with Alt + mouse (column selection); then the Tab key realigns the whole group of lines at once.
Hope it helps.

In Emacs you can select the lines and then type C-M-% (ctrl-alt-shift-5), which runs query-replace-regexp, and enter:
\(.*?\) *= 0;
\,(format "%-50s = 0;" \1)
The meaning is:
\(.*?\) grabs everything (non-greedy, to leave the trailing spaces out)
 *= 0; matches a sequence of spaces followed by =, a space, and 0;
\, replaces with the value of an elisp expression
(format "%-50s = 0;" \1) formats group 1 as a left-aligned string of width 50, followed by the end part " = 0;"
Note, however, that this style of indentation is somewhat annoying to keep updated (e.g. when you add a new variable with a name longer than the others), which is why it's discouraged by many coding style conventions.
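A side note of mine, not from the original answer: Emacs also ships with M-x align-regexp, which does this interactively; select the region and run it with = as the regexp to get the same alignment without writing the format expression by hand.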

It's not too hard in vi. Search for '=' (/= then Enter), insert a tab (i, Tab, Esc), then hit . a few times to align one line and n to jump to the next spot:
1G/=
i^I^[.....n.....n...n...n... etc.
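A tool-based option as well (my addition; the exact option syntax depends on your clang-format version, so treat this as a sketch): clang-format can maintain this alignment for you via the AlignConsecutiveAssignments and AlignConsecutiveDeclarations options in a .clang-format file:
AlignConsecutiveAssignments: true
AlignConsecutiveDeclarations: true
Re-running the formatter after adding a longer variable name realigns the whole block, which also addresses the maintenance concern raised in the Emacs answer.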

Related

Get long from unsigned char* buffer via memcpy

After I have defined and filled the buffer from binary .exe data --
unsigned char *buffer ; /*buffer*/
buffer = malloc(300) ; /*allocate space on heap*/
fread(buffer, 300, 1, file) ;
Then how do I get the bytes at positions 121-124 of buffer as a long value?
I have tried
long Hint = 0;
memcpy(Hint, buffer[121], 4);
printf("Hint=x%x\n", Hint);
but all I get is an abend on memcpy
Here is a simple way to do that (I put numbers in the buffer for the example):
unsigned char *buffer;                  /* buffer */
buffer = (unsigned char*) malloc(300);  /* allocate space on heap */
for (int i = 0; i < 300; i++)           /* initialize buffer with numbers for the demo */
    buffer[i] = i;
long Hint = 0;
long *h = (long *)&buffer[121];
Hint = *h;
printf("Hint=0x%x\n", Hint);
The output for this will be:
Hint=0x7c7b7a79
which is the values 121-124 (0x79-0x7c) in hex, least significant byte first on a little-endian machine.
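For completeness (my addition, not part of the original answer): the memcpy from the question also works once it is given addresses rather than values, and it avoids the alignment and strict-aliasing concerns of the pointer cast above. A sketch, using a fixed-width type since long is 8 bytes on many 64-bit platforms:
#include <stdint.h>
#include <string.h>
#include <stdio.h>

uint32_t Hint;
memcpy(&Hint, &buffer[121], sizeof(Hint));   /* note the &: copy INTO Hint FROM buffer+121 */
printf("Hint=0x%x\n", (unsigned)Hint);       /* same 0x7c7b7a79 on a little-endian machine */
The original call crashed because it passed Hint and buffer[121] by value, so memcpy treated those values as pointers.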

error: no matching function for call to 'swap'

I am trying to sort the cakeTypes vector by weight, but I am getting an error in the sort call.
#include <iostream>
#include <vector>
#include <algorithm>

using namespace std;

class CakeType
{
public:
    const unsigned int weight_;
    const unsigned int value_;
    CakeType(unsigned int weight = 0, unsigned int value = 0) :
        weight_(weight),
        value_(value)
    {}
};

bool compareCakes(const CakeType& cake1, const CakeType& cake2) {
    return cake1.weight_ < cake2.weight_;
}

unsigned long long maxDuffelBagValue(const std::vector<CakeType>& cakeTypes,
                                     unsigned int weightCapacity)
{
    // calculate the maximum value that we can carry
    unsigned cakeTypesSize = cakeTypes.size();
    // note: a variable-length array is a non-standard compiler extension in C++
    unsigned long long valueCalculator[weightCapacity+1][cakeTypesSize+1];
    for (unsigned int i = 0; i <= weightCapacity; i++) {
        valueCalculator[i][0] = 0;
    }
    for (unsigned int i = 0; i <= cakeTypesSize; i++) {
        valueCalculator[0][i] = 0;
    }
    vector<CakeType> sortedCakeTypes(cakeTypes);
    sort(sortedCakeTypes.begin(), sortedCakeTypes.end(), compareCakes);
    return 0;
}
This is part of the error:
exited with non-zero code (1).
In file included from solution.cc:1:
In file included from /usr/include/c++/v1/iostream:38:
In file included from /usr/include/c++/v1/ios:216:
In file included from /usr/include/c++/v1/__locale:15:
In file included from /usr/include/c++/v1/string:439:
/usr/include/c++/v1/algorithm:3856:17: error: no matching function for call to 'swap'
swap(*__first, *__last);
^~~~
I tried the solution from sort() - No matching function for call to 'swap', but it is not the same issue.
The element type used by the swap inside the sort algorithm must be MoveAssignable, so that operations like the one below can be performed:
CakeType c1, c2;
c1 = move(c2); // <- move c2 into c1
But in your case CakeType has const data members, and you can assign values to const data members only in constructors. The code cannot be compiled because the default move/copy assignment operators can't be generated under this restriction (assignment to a const member is illegal).
Remove the const specifiers from your class definition and the code will work.
class CakeType
{
public:
    unsigned int weight_;
    unsigned int value_;
    CakeType(unsigned int weight = 0, unsigned int value = 0) :
        weight_(weight),
        value_(value)
    {}
};
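If the const members are meant to stay, an alternative (a sketch of mine, not from the original answer) is to leave the CakeType objects untouched and sort a vector of indices instead; only the indices get swapped, so nothing needs to be assignable:
vector<size_t> order(cakeTypes.size());
for (size_t i = 0; i < order.size(); ++i)
    order[i] = i;
sort(order.begin(), order.end(),
     [&cakeTypes](size_t i1, size_t i2) {
         return cakeTypes[i1].weight_ < cakeTypes[i2].weight_;
     });
// cakeTypes[order[0]] is now the lightest cake type
This keeps the immutability guarantee of the original class at the cost of one level of indirection when reading the sorted sequence.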

Can I speed up type conversion using intrinsics?

I am working on an application which needs to convert data to float.
The data are unsigned char or unsigned short.
I am using both AVX2 and other SIMD intrinsics in this code.
I wrote the conversions like this:
unsigned char -> float :
#ifdef __AVX2__
__m256i tmp_v = _mm256_lddqu_si256(reinterpret_cast<const __m256i*>(src+j));
v16_avx[0] = _mm256_cvtepu8_epi16(_mm256_extractf128_si256(tmp_v,0x0));
v16_avx[1] = _mm256_cvtepu8_epi16(_mm256_extractf128_si256(tmp_v,0x1));
v32_avx[0] = _mm256_cvtepi16_epi32(_mm256_extractf128_si256(v16_avx[0],0x0));
v32_avx[1] = _mm256_cvtepi16_epi32(_mm256_extractf128_si256(v16_avx[0],0x1));
v32_avx[2] = _mm256_cvtepi16_epi32(_mm256_extractf128_si256(v16_avx[1],0x0));
v32_avx[3] = _mm256_cvtepi16_epi32(_mm256_extractf128_si256(v16_avx[1],0x1));
for (int l = 0; l < 4; l++) {
    __m256 vc1_ps = _mm256_cvtepi32_ps(_mm256_and_si256(v32_avx[l],m_lt_avx[l]));
    __m256 vc2_ps = _mm256_cvtepi32_ps(_mm256_and_si256(v32_avx[l],m_ge_avx[l]));
    /*
        ....
        some processing there.
    */
}
#endif
#ifdef __SSE2__
#ifdef __SSE3__
__m128i tmp_v = _mm_lddqu_si128(reinterpret_cast<const __m128i*>(src+j));
#else
__m128i tmp_v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src+j));
#endif
#ifdef __SSE4_1__
v16[0] = _mm_cvtepu8_epi16(tmp_v);
tmp_v = _mm_shuffle_epi8(tmp_v,mask8);
v16[1] = _mm_cvtepu8_epi16(tmp_v);
v32[0] = _mm_cvtepi16_epi32(v16[0]);
v16[0] = _mm_shuffle_epi32(v16[0],0x4E);
v32[1] = _mm_cvtepi16_epi32(v16[0]);
v32[2] = _mm_cvtepi16_epi32(v16[1]);
v16[1] = _mm_shuffle_epi32(v16[1],0x4E);
v32[3] = _mm_cvtepi16_epi32(v16[1]);
#else
__m128i tmp_v_l = _mm_slli_si128(tmp_v,8);
__m128i tmp_v_r = _mm_srli_si128(tmp_v,8);
v16[0] = _mm_unpacklo_epi8(tmp_v,tmp_v_l);
v16[1] = _mm_unpackhi_epi8(tmp_v,tmp_v_r);
tmp_v_l = _mm_srli_epi16(v16[0],8);
tmp_v_r = _mm_srai_epi16(v16[0],8);
v32[0] = _mm_unpacklo_epi16(v16[0],tmp_v_l);
v32[1] = _mm_unpackhi_epi16(v16[0],tmp_v_r);
v16[0] = _mm_unpacklo_epi8(tmp_v,tmp_v_l);
v16[1] = _mm_unpackhi_epi8(tmp_v,tmp_v_r);
tmp_v_l = _mm_srli_epi16(v16[1],8);
tmp_v_r = _mm_srai_epi16(v16[1],8);
v32[2] = _mm_unpacklo_epi16(v16[1],tmp_v_l);
v32[3] = _mm_unpackhi_epi16(v16[1],tmp_v_r);
#endif
for (int l = 0; l < 4; l++) {
    __m128 vc1_ps = _mm_cvtepi32_ps(_mm_and_si128(v32[l],m_lt[l]));
    __m128 vc2_ps = _mm_cvtepi32_ps(_mm_and_si128(v32[l],m_ge[l]));
    /*
        ...
        some processing there.
    */
}
#endif
unsigned short -> float
#ifdef __AVX2__
v32_avx[0] = _mm256_cvtepu16_epi32(_mm256_extractf128_si256(tmp_v,0x0));
v32_avx[1] = _mm256_cvtepu16_epi32(_mm256_extractf128_si256(tmp_v,0x1));
for (int l = 0; l < 2; l++) {
    __m256 vc1_ps = _mm256_cvtepi32_ps(_mm256_and_si256(v32_avx[l],m_lt_avx[l]));
    __m256 vc2_ps = _mm256_cvtepi32_ps(_mm256_and_si256(v32_avx[l],m_ge_avx[l]));
    /*
        ...
        some processing there.
    */
}
#endif
#ifdef __SSE2__
#ifdef __SSE3__
__m128i tmp_v = _mm_lddqu_si128(reinterpret_cast<const __m128i*>(src+j));
#else
__m128i tmp_v = _mm_loadu_si128(reinterpret_cast<const __m128i*>(src+j));
#endif
#ifdef __SSE4_1__
v32[0] = _mm_cvtepu16_epi32(tmp_v);
tmp_v = _mm_shuffle_epi32(tmp_v,0x4E);
v32[1] = _mm_cvtepu16_epi32(tmp_v);
#else
__m128i tmp_v_l = _mm_slli_si128(tmp_v,8);
__m128i tmp_v_r = _mm_srli_si128(tmp_v,8);
v32[0] = _mm_unpacklo_epi16(tmp_v,tmp_v_l);
v32[1] = _mm_unpackhi_epi16(tmp_v,tmp_v_r);
#endif
for (int l = 0; l < 2; l++) {
    __m128 vc1_ps = _mm_cvtepi32_ps(_mm_and_si128(v32[l],m_lt[l]));
    __m128 vc2_ps = _mm_cvtepi32_ps(_mm_and_si128(v32[l],m_ge[l]));
    /*
        ...
        some processing there.
    */
}
#endif
The processing in the comments has nothing to do with the conversion step.
I would like to speed up these conversions.
I read in SSE: convert short integer to float and in Converting Int to Float/Float to Int using Bitwise that it's possible to do this using bitwise operations.
Are those approaches actually any faster?
I experimented with the implementation from the first link; there was almost no change in processing time. It worked fine for signed short, and also for unsigned short as long as the value is between 0 and SHRT_MAX (32767 on my system):
#include <immintrin.h>
#include <algorithm>
#include <iterator>
#include <iostream>
#include <chrono>

void convert_sse_intrinsic(const ushort *source, const int len, int *destination)
{
    __m128i zero2 = _mm_setzero_si128();
    for (int i = 0; i < len; i += 4)
    {
        __m128i value = _mm_unpacklo_epi16(_mm_set_epi64x(0, *((long long*)(source+i)) /**ps*/), zero2);
        value = _mm_srai_epi32(_mm_slli_epi32(value, 16), 16);
        _mm_storeu_si128(reinterpret_cast<__m128i*>(destination+i), value);
    }
}

void convert_sse_intrinsic2(const ushort *source, const int len, int *destination)
{
    for (int i = 0; i < len; i += 8)
    {
        __m128i value = _mm_loadu_si128(reinterpret_cast<const __m128i*>(source+i));
        _mm_storeu_si128(reinterpret_cast<__m128i*>(destination+i), _mm_cvtepu16_epi32(value));
        value = _mm_shuffle_epi32(value, 0x4E);
        _mm_storeu_si128(reinterpret_cast<__m128i*>(destination+i+4), _mm_cvtepu16_epi32(value));
    }
}

int main(int argc, char *argv[])
{
    ushort CV_DECL_ALIGNED(32) toto[16] =
        {0, 500, 1000, 5000,
         10000, 15000, 20000, 25000,
         30000, 35000, 40000, 45000,
         50000, 55000, 60000, 65000};
    int CV_DECL_ALIGNED(32) tutu[16] = {0};

    std::chrono::steady_clock::time_point start = std::chrono::steady_clock::now();
    convert_sse_intrinsic(toto, 16, tutu);
    std::chrono::steady_clock::time_point stop = std::chrono::steady_clock::now();
    std::cout << "processing time 1st method : " << std::chrono::duration_cast<std::chrono::nanoseconds>(stop-start).count() << " ns" << std::endl;
    std::copy(tutu, tutu+16, std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;

    start = std::chrono::steady_clock::now();
    convert_sse_intrinsic2(toto, 16, tutu);
    stop = std::chrono::steady_clock::now();
    std::cout << "processing time 2nd method : " << std::chrono::duration_cast<std::chrono::nanoseconds>(stop-start).count() << " ns" << std::endl;
    std::copy(tutu, tutu+16, std::ostream_iterator<int>(std::cout, " "));
    std::cout << std::endl;
    return 0;
}
Thanks in advance for any help.
Well, I don't think there is really any faster way to convert unsigned char or unsigned short to float than the intrinsics already used here.
I tried several other ways using bitwise operators, but none was significantly faster.
So I don't think it's worth letting this topic linger any longer.
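For later readers, here is the direct conversion path in isolation; a minimal sketch of mine (the function name and the assumption that len is a multiple of 8 are mine, not from the question):
#include <immintrin.h>

void u8_to_float_avx2(const unsigned char *src, int len, float *dst)
{
    for (int i = 0; i < len; i += 8) {
        // load 8 bytes, zero-extend each to a 32-bit int, convert to 8 floats
        __m128i bytes = _mm_loadl_epi64(reinterpret_cast<const __m128i*>(src + i));
        __m256i ints  = _mm256_cvtepu8_epi32(bytes);
        _mm256_storeu_ps(dst + i, _mm256_cvtepi32_ps(ints));
    }
}
That is three instructions per 8 elements for the conversion itself, which supports the conclusion above: the load/widen/convert chain is already close to minimal, so remaining gains usually have to come from the surrounding processing.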

XCode 4.2 - Openssl build error

I have successfully built the OpenSSL library for the iPhone Simulator, and I have imported all the headers and libs. However, the project fails to build: XCode reports an incomplete definition of the struct X509_ALGOR. Here is the code:
- (NSData *)encodePBEWithMD5AndDESData:(NSData *)inData password:(NSString *)password direction:(int)direction
{
    // Change salt and number of iterations for your project !!!
    static const char gSalt[] =
    {
        (unsigned char)0xaa, (unsigned char)0xd1, (unsigned char)0x3c, (unsigned char)0x31,
        (unsigned char)0x53, (unsigned char)0xa2, (unsigned char)0xee, (unsigned char)0x05
    };
    unsigned char *salt = (unsigned char *)gSalt;
    int saltLen = strlen(gSalt);
    int iterations = 15;
    EVP_CIPHER_CTX cipherCtx;
    unsigned char *mResults; // allocated storage of results
    int mResultsLen = 0;
    const char *cPassword = [password UTF8String];
    unsigned char *mData = (unsigned char *)[inData bytes];
    int mDataLen = [inData length];
    SSLeay_add_all_algorithms();
    X509_ALGOR *algorithm = PKCS5_pbe_set(NID_pbeWithMD5AndDES_CBC,
                                          iterations, salt, saltLen);
    memset(&cipherCtx, 0, sizeof(cipherCtx));
    if (algorithm != NULL)
    {
        EVP_CIPHER_CTX_init(&(cipherCtx));
        if (EVP_PBE_CipherInit(algorithm->algorithm, cPassword, strlen(cPassword),
                               algorithm->parameter, &(cipherCtx), direction))
        {
            EVP_CIPHER_CTX_set_padding(&cipherCtx, 1);
            int blockSize = EVP_CIPHER_CTX_block_size(&cipherCtx);
            int allocLen = mDataLen + blockSize + 1; // plus 1 for null terminator on decrypt
            mResults = (unsigned char *)OPENSSL_malloc(allocLen);
            unsigned char *in_bytes = mData;
            int inLen = mDataLen;
            unsigned char *out_bytes = mResults;
            int outLen = 0;
The pointer to struct X509_ALGOR ('algorithm') is reported as incompletely defined at this call:
if (EVP_PBE_CipherInit(algorithm->algorithm, cPassword, strlen(cPassword),
                       algorithm->parameter, &(cipherCtx), direction))
I don't have any clue about this. Can anyone help me, please?
I am not sure whether it is appropriate to post an answer to my own question; however, I am doing so in case someone with the same problem can get some help from it.
The problem with the code above was the linker flags. Once I set "-ObjC -all_load" in "Other Linker Flags" under "Build Settings", the problem was gone.
Regards.

What is the most efficient way to subtract signed integral data in binary (bits)?

I'm working in C on a PC, trying to leverage as little C++ as possible, working with binary data stored in unsigned char format, although other formats are certainly possible if worthwhile. The goal is to subtract two signed integer values (which can be ints, signed ints, longs, signed longs, signed shorts, etc.) in binary without converting to other data formats. The raw data is prepackaged as unsigned char arrays, with the user knowing which of the signed integer formats it should be read as (i.e. we know how many bytes to read at once). Even though the data is stored as unsigned char arrays, it is meant to be read as two's-complement signed integers.
One common way we're taught in school is to add the negative. Negation, in turn, is often taught as flipping the bits and adding 1 (0x1), resulting in two additions (perhaps a bad thing?); or, as other posts point out, as copying the bits from the LSB up through the lowest set bit and flipping everything above it. I'm wondering if there is a more efficient way that may not be easily described as a pen-and-paper operation, but that works because of the way the data is stored in bit format. Here are some prototypes I've written; they may not be the most efficient, but they summarize my progress so far based on textbook methodology.
The addends are passed by reference in case I have to manually extend them to balance their lengths. Any and all feedback will be appreciated! Thanks in advance for considering.
void SubtractByte(unsigned char* & a, unsigned int & aBytes,
                  unsigned char* & b, unsigned int & bBytes,
                  unsigned char* & diff, unsigned int & nBytes)
{
    NegateByte(b, bBytes);
    // a - b == a + (-b)
    AddByte(a, aBytes, b, bBytes, diff, nBytes);
    // Restore b to its original state so input remains intact
    NegateByte(b, bBytes);
}

void AddByte(unsigned char* & a, unsigned int & aBytes,
             unsigned char* & b, unsigned int & bBytes,
             unsigned char* & sum, unsigned int & nBytes)
{
    // Ensure that both of our addends have the same length in memory:
    BalanceNumBytes(a, aBytes, b, bBytes, nBytes);
    bool aSign = !((a[aBytes-1] >> 7) & 0x1);
    bool bSign = !((b[bBytes-1] >> 7) & 0x1);
    // Add bit-by-bit to keep track of carry bit:
    unsigned int nBits = nBytes * BITS_PER_BYTE;
    unsigned char carry = 0x0;
    unsigned char result = 0x0;
    unsigned char a1, b1;
    // init sum
    for (unsigned int j = 0; j < nBytes; ++j) {
        for (unsigned int i = 0; i < BITS_PER_BYTE; ++i) {
            a1 = ((a[j] >> i) & 0x1);
            b1 = ((b[j] >> i) & 0x1);
            AddBit(&a1, &b1, &carry, &result);
            SetBit(sum, j, i, result == 0x1);
        }
    }
    // MSB and carry determine if we need to extend:
    if (((aSign && bSign) && (carry != 0x0 || result != 0x0)) ||
        ((!aSign && !bSign) && (result == 0x0))) {
        ++nBytes;
        sum = (unsigned char*)realloc(sum, nBytes);
        sum[nBytes-1] = (carry == 0x0 ? 0x0 : 0xFF); // init
    }
}
void FlipByte(unsigned char* n, unsigned int nBytes)
{
    for (unsigned int i = 0; i < nBytes; ++i) {
        n[i] = ~n[i];
    }
}

void NegateByte(unsigned char* n, unsigned int nBytes)
{
    // Flip each bit:
    FlipByte(n, nBytes);
    unsigned char* one = (unsigned char*)malloc(nBytes);
    unsigned char* orig = (unsigned char*)malloc(nBytes);
    one[0] = 0x1;
    orig[0] = n[0];
    for (unsigned int i = 1; i < nBytes; ++i) {
        one[i] = 0x0;
        orig[i] = n[i];
    }
    // Add binary representation of 1
    AddByte(orig, nBytes, one, nBytes, n, nBytes);
    free(one);
    free(orig);
}

void AddBit(unsigned char* a, unsigned char* b, unsigned char* c,
            unsigned char* result) {
    *result = ((*a + *b + *c) & 0x1);
    *c = (((*a + *b + *c) >> 1) & 0x1);
}
void SetBit(unsigned char* bytes, unsigned int byte, unsigned int bit,
            bool val)
{
    if (val) {
        // OR with a mask that has only the desired bit set
        bytes[byte] |= (0x01 << bit);
    }
    else { // (!val), meaning we want to set the bit to 0
        // AND with a mask that has only the desired bit clear
        bytes[byte] &= ~(0x01 << bit);
    }
}

void BalanceNumBytes(unsigned char* & a, unsigned int & aBytes,
                     unsigned char* & b, unsigned int & bBytes,
                     unsigned int & nBytes)
{
    if (aBytes > bBytes) {
        nBytes = aBytes;
        b = (unsigned char*)realloc(b, nBytes);
        bBytes = nBytes;
        b[nBytes-1] = ((b[0] >> 7) & 0x1) ? 0xFF : 0x00;
    } else if (bBytes > aBytes) {
        nBytes = bBytes;
        a = (unsigned char*)realloc(a, nBytes);
        aBytes = nBytes;
        a[nBytes-1] = ((a[0] >> 7) & 0x1) ? 0xFF : 0x00;
    } else {
        nBytes = aBytes;
    }
}
The first thing to notice is that signed vs. unsigned doesn't matter to the generated bit pattern in two's complement. All that changes is the interpretation of the result.
The second thing to notice is that an addition has carried if the result is less than either input when done with unsigned arithmetic.
void AddByte(unsigned char* & a, unsigned int & aBytes,
             unsigned char* & b, unsigned int & bBytes,
             unsigned char* & sum, unsigned int & nBytes)
{
    // Ensure that both of our addends have the same length in memory:
    BalanceNumBytes(a, aBytes, b, bBytes, nBytes);
    unsigned char carry = 0;
    for (unsigned int j = 0; j < nBytes; ++j) { // need to reverse the loop for big-endian
        sum[j] = a[j] + b[j];
        unsigned char newcarry = (sum[j] < a[j] || (unsigned char)(sum[j] + carry) < a[j]);
        sum[j] += carry;
        carry = newcarry;
    }
}
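Building on that observation, a closing sketch of mine (not part of the original answer): the same byte-at-a-time trick works directly for subtraction with a borrow, avoiding the negate-and-add round trip. A subtraction has borrowed when the minuend is smaller than what was subtracted from it. This assumes the operands were already balanced to nBytes bytes, little-endian as above:
void SubByte(const unsigned char* a, const unsigned char* b,
             unsigned char* diff, unsigned int nBytes)
{
    unsigned char borrow = 0;
    for (unsigned int j = 0; j < nBytes; ++j) {
        unsigned char d = a[j] - b[j];
        // borrow out if a[j] - b[j] wrapped, or if subtracting the incoming borrow wraps
        unsigned char newBorrow = (a[j] < b[j]) || (d < borrow);
        diff[j] = d - borrow;
        borrow = newBorrow;
    }
}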
