How to check whether a system is big endian or little endian?

In C, C++
int n = 1;
// little endian if true
if(*(char *)&n == 1) {...}
See also: Perl version

In Python:
from sys import byteorder
print(byteorder)
# will print 'little' if little endian

Another C approach using a union:
#include <stdio.h>

int main(void)
{
    union {
        int i;
        char c[sizeof(int)];
    } x;

    x.i = 1;
    if (x.c[0] == 1)
        printf("little-endian\n");
    else
        printf("big-endian\n");
    return 0;
}
It is the same logic that belwood used.

A one-liner with Perl (which should be installed by default on almost all systems):
perl -e 'use Config; print $Config{byteorder}'
If the output starts with a 1 (least-significant byte), it's a little-endian system. If the output starts with a higher digit (most-significant byte), it's a big-endian system. See documentation of the Config module.

In C++20, use std::endian:
#include <bit>
#include <iostream>

int main() {
    if constexpr (std::endian::native == std::endian::little)
        std::cout << "little-endian";
    else if constexpr (std::endian::native == std::endian::big)
        std::cout << "big-endian";
    else
        std::cout << "mixed-endian";
}

If you are using .NET: Check the value of BitConverter.IsLittleEndian.

In Rust (no crates or use statements required)
In a function body:
if cfg!(target_endian = "big") {
    println!("Big endian");
} else {
    println!("Little endian");
}
Outside a function body:
#[cfg(target_endian = "big")]
fn print_endian() {
    println!("Big endian")
}
#[cfg(target_endian = "little")]
fn print_endian() {
    println!("Little endian")
}
This is what the byteorder crate does internally: https://docs.rs/byteorder/1.3.2/src/byteorder/lib.rs.html#1877

In PowerShell
[System.BitConverter]::IsLittleEndian

In Linux,
static union { char c[4]; unsigned long mylong; } endian_test = { { 'l', '?', '?', 'b' } };
#define ENDIANNESS ((char)endian_test.mylong)
if (ENDIANNESS == 'l') /* little endian */
if (ENDIANNESS == 'b') /* big endian */

A C++ solution:
#include <iostream>

namespace sys {
const unsigned one = 1U;

inline bool little_endian()
{
    // on a little-endian machine the highest-addressed byte of 1U holds the most significant byte, which is 0
    return *(reinterpret_cast<const char*>(&one) + sizeof(unsigned) - 1) == 0;
}

inline bool big_endian()
{
    return !little_endian();
}
} // sys

int main()
{
    if (sys::little_endian())
        std::cout << "little";
}

In Rust (byteorder crate required):
use std::any::TypeId;
let is_little_endian = TypeId::of::<byteorder::NativeEndian>() == TypeId::of::<byteorder::LittleEndian>();

Using a macro:
const int isBigEnd=1;
#define is_bigendian() ((*(char*)&isBigEnd) == 0)
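A minimal usage sketch for that macro (the main() wrapper and printf are added here just for illustration):
#include <stdio.h>

const int isBigEnd = 1;
#define is_bigendian() ((*(char*)&isBigEnd) == 0)

int main(void) {
    /* the first (lowest-addressed) byte of the int 1 is 0 only when the most significant byte is stored first */
    printf("%s\n", is_bigendian() ? "big-endian" : "little-endian");
    return 0;
}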

In C
#include <stdio.h>

/* Function to show bytes in memory, from location start to start+n */
void show_mem_rep(char *start, int n)
{
    int i;
    for (i = 0; i < n; i++)
        printf("%.2x ", (unsigned char)start[i]); /* cast avoids sign extension for bytes >= 0x80 */
    printf("\n");
}

/* Main function to call the above function for 0x01234567 */
int main()
{
    int i = 0x01234567;
    show_mem_rep((char *)&i, sizeof(i));
    return 0;
}
When the above program is run on a little-endian machine it prints “67 45 23 01”, while on a big-endian machine it prints “01 23 45 67”.

A compilable version of the top answer for n00bs:
#include <stdio.h>

int main() {
    int n = 1;
    // little endian if true
    if (*(char *)&n == 1) {
        printf("Little endian\n");
    } else {
        printf("Big endian\n");
    }
}
Stick that in check-endianness.c and compile and run:
$ gcc -o check-endianness check-endianness.c
$ ./check-endianness
This whole command is a copy/pasteable bash script you can paste into your terminal:
cat << EOF > check-endianness.c
#include <stdio.h>

int main() {
    int n = 1;
    // little endian if true
    if (*(char *)&n == 1) {
        printf("Little endian\n");
    } else {
        printf("Big endian\n");
    }
}
EOF
gcc -o check-endianness check-endianness.c \
  && ./check-endianness \
  && rm check-endianness check-endianness.c

In Nim,
echo cpuEndian
It is exported from the system module.

In bash (from "How to tell if a Linux system is big endian or little endian?"):
endian=`echo -n "I" | od -to2 | head -n1 | cut -f2 -d" " | cut -c6`
if [ "$endian" == "1" ]; then
    echo "little-endian"
else
    echo "big-endian"
fi

C logic to check whether your processor is little endian or big endian:
unsigned int i = 12345;
char *c = (char *)&i;  // cast int* to char* so that c points to the first (lowest-addressed) byte of i
if (*c != 0) {         // the first byte holds the low-order byte (non-zero) only on a little-endian machine
    printf("Little endian");
}
else {
    printf("Big endian");
}
Hope this helps. This was one of the questions asked in my interview for an embedded software engineer role.

All the answers that use a program to find the endianness at runtime are wrong! Whether a machine is big endian or little endian is hidden from the programmer by the compiler. On a big-endian machine the typecast will again return 1, because the compiler knows that the machine is big endian and the cast will fetch the higher memory address. The only way to find the endianness is to fetch the system's configuration or an environment variable, similar to some of the answers above, like the one-liner Perl answer.

Related

SOLVED: GCC Compare String IF else (a way to verify that you have written a certain word)

I'm learning C with GCC, and I was trying various solutions to verify the entry of a certain word, along the lines of IF Word = Word {do something;}.
It seems that in C this cannot be done directly, so I tried this solution, which seems to work:
#include <stdio.h>
#include <string.h>

int main() {
    int CClose = 0;
    int VerifyS = 0;
    char PWord[30] = {'\0'};
    do {
        printf("\n Type a word: ");
        scanf(" %29s", PWord); /* PWord already decays to char *; the width limit keeps the input inside the buffer */
        VerifyS = strncmp(PWord, "exit", 4);
        if (!VerifyS) {
            CClose = 1;
        } else {
            printf("\n The Word is:%s", PWord);
        }
    } while (CClose != 1);
    return 0;
}
I wanted to know if there is another way to do the same thing.
Thank you.
What you've written is essentially the most common way to do this. There is indeed no way in C to compare two strings in a single expression without calling a function.
You can cut out the temporary variable VerifyS if you like, by writing
if (!strncmp(PWord, "exit", 4)) { ...
or, perhaps slightly clearer,
if (strncmp(PWord, "exit", 4) == 0) { ...
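For instance, plugging that inline comparison back into the loop from the question, a minimal sketch might look like this (the %29s width limit is an extra safety measure to keep the input inside the 30-byte buffer):
#include <stdio.h>
#include <string.h>

int main(void) {
    int CClose = 0;
    char PWord[30] = {'\0'};
    do {
        printf("\n Type a word: ");
        scanf(" %29s", PWord);                /* width limit keeps the input inside the buffer */
        if (strncmp(PWord, "exit", 4) == 0) { /* comparison done inline, no temporary variable */
            CClose = 1;
        } else {
            printf("\n The Word is:%s", PWord);
        }
    } while (CClose != 1);
    return 0;
}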

How can I use simd in MIPS?

I have installed mipsel-linux-gcc (screenshot omitted).
Now I have a file simd.c:
#include <stdio.h>
#include "simdType.h"

int main()
{
    v4i32 m, t;
    v4f32 a, b, c, s;
    a = b + c;
    t = b < c;
    s = __builtin_shuffle(b, c, m);
    return 0;
}
Then I run this command:
mipsel-linux-gcc -S simd.c -mfp64 -Wa,-mmsa -mhard-float
Then I get simd.s, but the generated assembly does not use SIMD instructions.
Can anyone help? Thanks!

STXXL: limited parallelism during sorting?

I populate a very large array using a stxxl::VECTOR_GENERATOR<MyData>::result::bufwriter_type (something like 100M entries) which I need to sort in parallel.
I use the stxxl::sort(vector->begin(), vector->end(), cmp(), memoryAmount) method, which in theory should do what I need: sort the elements very efficiently.
However, during the execution of this method I noticed that only one processor is fully utilised, and all the other cores are quite idle (I suspect there is little activity to fetch the input, but in practice they don't do anything).
This is my question: is it possible to exploit more cores during the sorting phase, or is the parallelism used only to fetch the input asynchronously? If so, are there documents that explain how to enable it? (I looked through the documentation on the website extensively, but I couldn't find anything.)
Thanks very much!
EDIT
Thanks for the suggestion. I provide some more information below.
First of all, I use macOS for my experiments. What I do is launch the following program and study its behaviour.
#include <cstdlib>
#include <functional>
#include <iostream>
#include <limits>
#include <stxxl/vector>
#include <stxxl/sort>

typedef struct Triple {
    long t1, t2, t3;
    Triple(long s, long p, long o) {
        this->t1 = s;
        this->t2 = p;
        this->t3 = o;
    }
    Triple() {
        t1 = t2 = t3 = 0;
    }
} Triple;

const Triple minv(std::numeric_limits<long>::min(),
                  std::numeric_limits<long>::min(), std::numeric_limits<long>::min());
const Triple maxv(std::numeric_limits<long>::max(),
                  std::numeric_limits<long>::max(), std::numeric_limits<long>::max());

struct cmp: std::less<Triple> {
    bool operator ()(const Triple& a, const Triple& b) const {
        if (a.t1 < b.t1) {
            return true;
        } else if (a.t1 == b.t1) {
            if (a.t2 < b.t2) {
                return true;
            } else if (a.t2 == b.t2) {
                return a.t3 < b.t3;
            }
        }
        return false;
    }
    Triple min_value() const {
        return minv;
    }
    Triple max_value() const {
        return maxv;
    }
};

typedef stxxl::VECTOR_GENERATOR<Triple>::result vector_type;

int main(int argc, const char** argv) {
    vector_type vector;
    vector_type::bufwriter_type writer(vector);
    for (int i = 0; i < 1000000000; ++i) {
        if (i % 10000000 == 0)
            std::cout << "Inserting element " << i << std::endl;
        Triple t;
        t.t1 = rand();
        t.t2 = rand();
        t.t3 = rand();
        writer << t;
    }
    writer.finish();

    // Sort the vector
    stxxl::sort(vector.begin(), vector.end(), cmp(), 1024*1024*1024);
    std::cout << vector.size() << std::endl;
}
Indeed there seem to be only one or at most two threads working during the execution of this program. Notice that the machine has only a single disk.
Can you please confirm whether the parallelism works on macOS? If not, I will try Linux to see what happens. Or is it perhaps because there is only one disk?
In principle, what you are doing should work out of the box. With everything working you should see all cores busy processing.
Since it doesn't work, we'll have to find the error, and debugging why there is no parallel speedup is still a tricky business these days.
The main idea is to go from small to large examples:
What platform is this? There is no parallelism on MSVC, only on Linux/gcc.
By default STXXL builds on Linux/gcc with USE_GNU_PARALLEL. You can turn it off to see if that has an effect.
Try reproducing the example values shown in http://stxxl.sourceforge.net/tags/master/stxxl_tool.html - with and without USE_GNU_PARALLEL.
See if just in-memory parallel sorting scales on your processor/system (a sketch of such a test follows below).
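For the last point, a rough, standalone way to check whether parallel in-memory sorting scales on a given machine is a sketch like the one below. It is independent of STXXL but uses the GNU parallel mode (the same machinery USE_GNU_PARALLEL enables); it assumes GCC with libstdc++, compiling with -fopenmp, and the array size is arbitrary:
#include <algorithm>
#include <cstdlib>
#include <iostream>
#include <vector>
#include <parallel/algorithm> // GNU libstdc++ parallel mode (GCC only)

int main()
{
    // fill a large in-memory vector with random data
    std::vector<long> data(100 * 1000 * 1000);
    for (auto &x : data)
        x = std::rand();

    // sort with the parallel-mode sort; watch CPU utilisation while this runs
    __gnu_parallel::sort(data.begin(), data.end());

    std::cout << "sorted, first element: " << data.front() << std::endl;
    return 0;
}
Note that Apple's Clang toolchain uses libc++ and does not ship <parallel/algorithm>, which in itself suggests the parallel sorting path is probably not available in a macOS build.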

vsnprintf on an ATMega2560

I am using a toolkit to do some Elliptic Curve Cryptography on an ATMega2560. When trying to use the print functions in the toolkit I am getting an empty string. I know the print functions work because the x86 version prints the variables without a problem. I am not experienced with the ATMega and would love any help on this matter. The print code is included below.
Code to print a big number (it in turn calls util_print):
void bn_print(bn_t a) {
    int i;

    if (a->sign == BN_NEG) {
        util_print("-");
    }
    if (a->used == 0) {
        util_print("0\n");
    } else {
#if WORD == 64
        util_print("%lX", (unsigned long int)a->dp[a->used - 1]);
        for (i = a->used - 2; i >= 0; i--) {
            util_print("%.*lX", (int)(2 * (BN_DIGIT / 8)),
                    (unsigned long int)a->dp[i]);
        }
#else
        util_print("%llX", (unsigned long long int)a->dp[a->used - 1]);
        for (i = a->used - 2; i >= 0; i--) {
            util_print("%.*llX", (int)(2 * (BN_DIGIT / 8)),
                    (unsigned long long int)a->dp[i]);
        }
#endif
        util_print("\n");
    }
}
The code to actually print a big number variable:
static char buffer[64 + 1];

void util_printf(char *format, ...) {
#ifndef QUIET
#if ARCH == AVR
    char *pointer = &buffer[1];
    va_list list;
    va_start(list, format);
    vsnprintf(pointer, 128, format, list);
    buffer[0] = (unsigned char)2;
    va_end(list);
#elif ARCH == MSP
    va_list list;
    va_start(list, format);
    vprintf(format, list);
    va_end(list);
#else
    va_list list;
    va_start(list, format);
    vprintf(format, list);
    fflush(stdout);
    va_end(list);
#endif
#endif
}
Edit: I do have the UART initialized and can output printf statements to a console.
I'm one of the authors of the RELIC toolkit. The current util_printf() function is used to print inside the Avrora simulator, for debugging purposes. I'm glad that you could adapt the code to your purposes. As a side note, the buffer size problem was already fixed in more recent releases of the toolkit.
Let me know if you have further problems with the library. You can either contact me personally or write directly to the discussion group.
Thank you!
vsnprintf stores its output in the given buffer (which in this case is the address pointed to by the pointer variable). In order for it to show up on the console (through the UART) you must send the buffer yourself with printf (try adding printf("%s", pointer) after the vsnprintf call). If you're using avr-libc, don't forget to initialize the standard streams before making any call to a printf function.
Oh, and by the way, your code is vulnerable to a buffer overflow: buffer[64 + 1] means your buffer size is only 65 bytes, while vsnprintf(pointer, 128, format, list) tells vsnprintf that up to 128 bytes are available. Change that limit to at most 64 bytes (the space remaining after &buffer[1]) to avoid the overflow.
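To make that concrete, here is a minimal sketch of the AVR branch with both points applied: the size passed to vsnprintf is derived from the buffer, and the formatted text is pushed out with printf. It assumes stdout has already been wired to the UART, as mentioned in the edit above.
#include <stdarg.h>
#include <stdio.h>

static char buffer[64 + 1];

void util_printf(char *format, ...) {
    char *pointer = &buffer[1];
    va_list list;

    va_start(list, format);
    /* leave room for the marker byte in buffer[0]: only sizeof(buffer) - 1 bytes are available from pointer */
    vsnprintf(pointer, sizeof(buffer) - 1, format, list);
    buffer[0] = (unsigned char)2;
    va_end(list);

    /* actually push the formatted text out over the UART-backed stdout */
    printf("%s", pointer);
}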
Alright, so I found a workaround to print the bn numbers to stdout on an ATMega2560. The toolkit comes with a function that writes a variable to a string (bn_write_str). So I implemented my own print function as such:
void print_bn(bn_t a)
{
    char print[BN_SIZE];            // max precision of a bn number
    int bi = bn_bits(a);            // get the number of bits of the number
    bn_write_str(print, bi, a, 16); // 16 indicates the radix (hexadecimal)
    printf("%s\n", print);
}
This function will print a bn number in hexadecimal format.
Hope this helps anyone using the RELIC toolkit with an AVR.
This skips the util_print calls.

What Time Is This Returning

Deep in the sauce here. I haven't worked with time too much, so I'm a little confused. I know there are FILETIME and SYSTEMTIME. What I am trying to get at this point (because it might change) are files that are less than 20 seconds old. This returns the files and their size and something in seconds. What I'd like to know is where it is filtering by time, if it is at all, and how I can adjust it to suit my needs. Thank you.
#include <windows.h>
#include <iostream>
#include <string>
#include <vector>

using namespace std;

typedef vector<WIN32_FIND_DATA> tFoundFilesVector;

std::wstring LastWriteTime;

int getFileList(wstring filespec, tFoundFilesVector &foundFiles)
{
    WIN32_FIND_DATA findData;
    HANDLE h;
    int validResult = true;
    int numFoundFiles = 0;

    h = FindFirstFile(filespec.c_str(), &findData);
    if (h == INVALID_HANDLE_VALUE)
        return 0;

    while (validResult)
    {
        numFoundFiles++;
        foundFiles.push_back(findData);
        validResult = FindNextFile(h, &findData);
    }
    return numFoundFiles;
}

void showFileAge(tFoundFilesVector &fileList)
{
    unsigned _int64 fileTime, curTime, age;
    tFoundFilesVector::iterator iter;
    FILETIME ftNow;
    //__int64 nFileSize;
    //LARGE_INTEGER li;
    //li.LowPart = ftNow.dwLowDateTime;
    //li.HighPart = ftNow.dwHighDateTime;

    CoFileTimeNow(&ftNow);
    curTime = ((_int64) ftNow.dwHighDateTime << 32) + ftNow.dwLowDateTime;

    for (iter = fileList.begin(); iter < fileList.end(); iter++)
    {
        fileTime = ((_int64)iter->ftLastWriteTime.dwHighDateTime << 32) + iter->ftLastWriteTime.dwLowDateTime;
        age = curTime - fileTime;
        cout << "FILE: '" << iter->cFileName << "', AGE: " << (_int64)age/10000000UL << " seconds" << endl;
    }
}

int main()
{
    string fileSpec = "*.*";
    tFoundFilesVector foundFiles;
    tFoundFilesVector::iterator iter;
    int foundCount = 0;

    getFileList(L"c:\\Mapper\\*.txt", foundFiles);
    getFileList(L"c:\\Mapper\\*.jpg", foundFiles);

    foundCount = foundFiles.size();
    if (foundCount)
    {
        cout << "Found " << foundCount << " matching files.\n";
        showFileAge(foundFiles);
    }
    system("pause");
    return 0;
}
I don't know what you've done to try to debug this but your code doesn't work at all. The reason is you're passing getFileList() a wstring but then passing that to the ANSI version of FindFirstFile(). Unless you #define UNICODE or use the appropriate compiler option, all system calls will expect char *, not UNICODE.
The easiest fix is to simply change the declaration of getFileList() to this:
int getFileList(const char * filespec, tFoundFilesVector &foundFiles)
Change the call to FindFirstFile() to this:
h = FindFirstFile((LPCSTR)filespec, &findData);
And then change the calls to it to this:
getFileList("c:\\Mapper\\*.txt", foundFiles);
getFileList("c:\\Mapper\\*.jpg", foundFiles);
Your other option is to switch all char strings to wide chars, but either way you need to be consistent throughout. Once you do that the program works as expected.
As for your final question, your program is not filtering by time at all.
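If the goal is to keep only files under 20 seconds old, one option is a small helper along these lines. The name isYoungerThan is made up here for illustration, and the arithmetic just reuses the fact (already used in showFileAge()) that FILETIME counts 100-nanosecond intervals, so one second is 10,000,000 ticks:
// Sketch: returns true if the file was last written less than maxAgeSeconds ago.
bool isYoungerThan(const WIN32_FIND_DATA &fd, unsigned __int64 maxAgeSeconds)
{
    FILETIME ftNow;
    CoFileTimeNow(&ftNow);

    unsigned __int64 curTime  = ((unsigned __int64)ftNow.dwHighDateTime << 32) | ftNow.dwLowDateTime;
    unsigned __int64 fileTime = ((unsigned __int64)fd.ftLastWriteTime.dwHighDateTime << 32) | fd.ftLastWriteTime.dwLowDateTime;

    return (curTime - fileTime) < maxAgeSeconds * 10000000ULL;
}
Calling isYoungerThan(*iter, 20) inside the loop in showFileAge() (or while filling the vector) would then skip anything 20 or more seconds old.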
Not quite an answer, but you might want to read about file system tunneling.
It may interfere with what you're trying to do in some situations.
