I need to convert an integer to its ASCII representation from within the Linux kernel. How can I do this? I can't find any built-in conversion methods. Are there any already in the kernel, or do I need to add my own?
The kernel does offer snprintf(), would that suit your need? I'm also curious what you are doing with the ASCII representation of an integer within the kernel.
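For example, here is a minimal sketch of formatting an integer into a caller-supplied buffer with the kernel's snprintf(); the helper name and error handling are illustrative, not kernel API:

#include <linux/kernel.h>  /* in-kernel snprintf() */
#include <linux/errno.h>

/* Illustrative helper: write 'value' as decimal ASCII into 'buf'. */
static int int_to_ascii(int value, char *buf, size_t buflen)
{
	/* snprintf() returns the length the full string would have had,
	 * so comparing against buflen detects truncation. */
	int n = snprintf(buf, buflen, "%d", value);
	return (n < 0 || (size_t)n >= buflen) ? -EINVAL : n;
}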
It's very likely that you just want printk().
I was given a question in my homework.
How are Pascal variables represented on the machine?
For example: in C the representation can differ across machines and compilers; in Java there is a VM, so the programmer can assume he will get exactly the same representation on different machines.
I have been googling for a while and could not find an answer about Pascal. The question is about the original version of Pascal, if that changes anything.
Thank you!
Original (J&W) Pascal is extremely rare, and most people don't know it. It got cleaned up a bit into the ISO 7185 standard, though those changes IIRC mostly affect scoping and type equivalence, not the kinds of types available.
Original (non-UCSD, over a decade before the Borland/Turbo dialects) Pascal has nearly no machine-dependent types: just one type INTEGER, one floating-point type REAL, and non-integer ordinal types like enums, boolean and char. Char was not guaranteed to be 8-bit; its width depended on the machine word.
Pascal shows its mainframe roots here, where words had exotic sizes like 60 bits, sub-word access was not allowed (say byte-level access, though that is a stretch, since such machines might not even have the concept of a byte), and multiple chars were packed into machine words (see packed arrays below). C came several years later and targeted minis, so it avoided the worst of that legacy.
The integer type is the biggest type in the system, quite often the biggest type that the machine can do conveniently. Smaller integer sizes are constructed with subranges; there are no unsigned types, but these can be defined with the relevant subranges (and it is up to the compiler/VM to implement them efficiently),
e.g. BYTE = 0..255;
Arrays can be packed, and must be unpacked before use (with pack() and unpack()).
There is no string type; typically a packed fixed-size array of char is used, right-padded with spaces to signal the end of the string (so trailing spaces are hard to represent, but it is only a convention with not much runtime support, so in exceptional cases you simply make an exception).
Unions contain all components as separate fields (no overlap) and are always named.
It had pointers, but you couldn't take addresses of arbitrary symbols, and new pointers could only be created with NEW.
So in general original Pascal is what you would call a reasonably "safe" language, though it was not fully designed as such (and AFAIK not 100% safe in theory). It was also much more suitable for VMing than Turbo Pascal (and that happened with UCSD, albeit for a subset).
Pascal and its successors can be considered as reconnaissance of concepts that were later popularized with Java.
In C it can be different on different machines and compilers; so it is in Pascal too, of course.
On Windows systems, different data types have the same size (see http://msdn.microsoft.com/en-us/library/s3f49ktz(v=VS.100).aspx). I couldn't help but wonder: is there a difference between double and long double, or between long and int? When I ask about differences, I mean differences in calculations.
According to the C++ standard they may be different, but they don't have to be. The guarantee is that the long versions are always at least as large as their non-long counterparts.
In general, the sizes of data-types depend on the system on which you are running. So while there may not be a difference on your system, there may be on others. You have to be aware of that if you want to write portable code.
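If you want to see what your particular compiler and platform actually use, a small check like this prints the sizes (output varies by system; this is just a sketch):

#include <stdio.h>

int main(void)
{
    /* Sizes are implementation-defined; the standard only guarantees that
     * each long type can represent every value of its non-long counterpart. */
    printf("int:         %zu\n", sizeof(int));
    printf("long:        %zu\n", sizeof(long));
    printf("double:      %zu\n", sizeof(double));
    printf("long double: %zu\n", sizeof(long double));
    return 0;
}

On 32-bit MSVC, for example, int and long are both 4 bytes and double and long double are both 8, which is why those pairs behave identically in calculations there.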
Given a string in the form of a pointer to an array of bytes (chars), how can I detect the encoding of the string in C/C++ (I use Visual Studio 2008)? I did a search, but most of the samples are done in C#.
Thanks
Assuming you know the length of the input array, you can make the following guesses:
First, check to see if the first few bytes match any well-known byte order marks (BOMs) for Unicode. If they do, you're done!
Next, search for '\0' before the last byte. If you find one, you might be dealing with UTF-16 or UTF-32. If you find multiple consecutive '\0's, it's probably UTF-32.
If any character is from 0x80 to 0xff, it's certainly not ASCII or UTF-7. If you are restricting your input to some variant of Unicode, you can assume it's UTF-8. Otherwise, you have to do some guessing to determine which multi-byte character set it is. That will not be fun.
At this point it is either: ASCII, UTF-7, Base64, or ranges of UTF-16 or UTF-32 that just happen to not use the top bit and do not have any null characters.
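A rough sketch of those checks might look like this (the return strings are illustrative; it is a first-pass guess, not a full detector):

#include <stddef.h>
#include <string.h>

/* Illustrative first-pass guess based on the heuristics above. */
const char *guess_encoding(const unsigned char *buf, size_t len)
{
    /* 1. Byte order marks (check UTF-32 before UTF-16: FF FE 00 00 starts with FF FE). */
    if (len >= 3 && memcmp(buf, "\xEF\xBB\xBF", 3) == 0)      return "UTF-8 (BOM)";
    if (len >= 4 && memcmp(buf, "\xFF\xFE\x00\x00", 4) == 0)  return "UTF-32LE (BOM)";
    if (len >= 4 && memcmp(buf, "\x00\x00\xFE\xFF", 4) == 0)  return "UTF-32BE (BOM)";
    if (len >= 2 && memcmp(buf, "\xFF\xFE", 2) == 0)          return "UTF-16LE (BOM)";
    if (len >= 2 && memcmp(buf, "\xFE\xFF", 2) == 0)          return "UTF-16BE (BOM)";

    /* 2. Embedded NULs before the last byte suggest UTF-16/UTF-32 without a BOM. */
    size_t nuls = 0, high = 0;
    for (size_t i = 0; i + 1 < len; ++i) {
        if (buf[i] == 0)     ++nuls;
        if (buf[i] >= 0x80)  ++high;
    }
    if (nuls > 0) return "UTF-16 or UTF-32 (no BOM)";

    /* 3. Bytes >= 0x80 rule out ASCII and UTF-7. */
    if (high > 0) return "UTF-8 (assumed) or a legacy multi-byte set";
    return "ASCII / UTF-7 compatible";
}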
It's not an easy problem to solve, and generally relies on heuristics to take a best guess at what the input encoding is, which can be tripped up by relatively innocuous inputs - for example, take a look at this Wikipedia article and The Notepad file encoding Redux for more details.
If you're looking for a Windows-only solution with minimal dependencies, you can look at using a combination of IsTextUnicode and MLang's DetectInputCodePage to attempt character set detection.
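IsTextUnicode, for instance, is a single call (the sample buffer below is purely illustrative, and the result is only a heuristic):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    /* Illustrative buffer; in practice pass the bytes you read from the file. */
    const wchar_t sample[] = L"Some text that was read from a file";
    INT tests = IS_TEXT_UNICODE_STATISTICS | IS_TEXT_UNICODE_SIGNATURE;

    /* IsTextUnicode runs the requested tests and reports which passed via 'tests'. */
    if (IsTextUnicode(sample, (int)sizeof(sample), &tests))
        printf("Probably UTF-16 (tests passed: 0x%x)\n", tests);
    else
        printf("Probably not UTF-16\n");
    return 0;
}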
If you are looking for portability, but don't mind taking on a fairly large dependency in the form of ICU, then you can make use of its character set detection routines to achieve the same thing in a portable manner.
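With ICU that boils down to the ucsdet_* API; a minimal sketch (error handling trimmed, and the input string is just an example):

#include <stdio.h>
#include <string.h>
#include <unicode/ucsdet.h>

int main(void)
{
    /* Illustrative input; in practice this is your byte buffer and its length. */
    const char *data = "Quelques caract\xC3\xA8res accentu\xC3\xA9s";
    UErrorCode status = U_ZERO_ERROR;

    UCharsetDetector *det = ucsdet_open(&status);
    ucsdet_setText(det, data, (int32_t)strlen(data), &status);

    const UCharsetMatch *match = ucsdet_detect(det, &status);
    if (U_SUCCESS(status) && match != NULL)
        printf("Best guess: %s (confidence %d%%)\n",
               ucsdet_getName(match, &status),
               (int)ucsdet_getConfidence(match, &status));

    ucsdet_close(det);
    return 0;
}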
I have written a small C++ library for detecting text file encoding. It uses Qt, but it can be just as easily implemented using just the standard library.
It operates by measuring symbol occurrence statistics and comparing them to pre-computed reference values for different encodings and languages. As a result, it not only detects the encoding but also the language of the text. The downside is that pre-computed statistics must be provided for the target language for that language to be detected properly.
https://github.com/VioletGiraffe/text-encoding-detector
I've been reviewing the year 2038 problem (Unix Millennium Bug).
I read the article about this on Wikipedia, where I read about a solution for this problem.
Now I would like to change the time_t data type to an unsigned 32-bit integer, which would keep things working until 2106. I have Linux kernel 2.6.23 with RTPatch on PowerPC.
Is there any patch available that would allow me to change the time_t data type to an unsigned 32-bit integer for PowerPC? Or any patch available to resolve this bug?
time_t is actually defined in your libc implementation, and not the kernel itself.
The kernel provides various mechanisms that provide the current time (in the form of system calls), many of which already support more than 32 bits of precision. The problem is actually your libc implementation (glibc on most desktop Linux distributions), which, after fetching the time from the kernel, returns it to your application as a 32-bit signed integer.
While one could theoretically change the definition of time_t in your libc implementation, in practice it would be fairly complicated: such a change would alter the ABI of libc, in turn requiring every application that uses libc to be recompiled from source.
The easiest solution instead is to upgrade your system to a 64-bit distribution, where time_t is already defined to be a 64-bit data type, avoiding the problem altogether.
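To check what your toolchain currently gives you, and to see the rollover itself, something like this works (the wrap is shown through an unsigned addition because overflowing a signed type directly is undefined behaviour):

#include <stdio.h>
#include <stdint.h>
#include <time.h>

int main(void)
{
    printf("sizeof(time_t) = %zu bytes\n", sizeof(time_t));

    /* A signed 32-bit second counter reaches 2^31 - 1 at
     * 03:14:07 UTC on 19 January 2038 and then wraps to a
     * large negative value, i.e. back to December 1901. */
    int32_t last_good = INT32_MAX;
    int32_t wrapped   = (int32_t)((uint32_t)last_good + 1u);
    printf("one second later a 32-bit time_t reads: %ld\n", (long)wrapped);
    return 0;
}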
About the 64-bit distribution suggested here, may I note all the issues with implementing that. There are many 32-bit non-PAE computers in the embedded industry. Replacing these with 64-bit computers is going to be a LARGE problem. Everyone is used to desktops that get replaced or upgraded frequently. All Linux OS suppliers need to get serious about providing a different option. It's not as if a 32-bit computer is flawed or useless or will wear out in 16 years; it doesn't take a 64-bit computer to monitor analog inputs, control equipment, and report alarms.
Currently, Boost only implements the random_device class for Linux (maybe *nix) systems. Does anyone know of existing implementations for other OS-es? Ideally, these implementations would be open-source.
If none exist, how should I go about implementing a non-deterministic RNG for Windows as well as Mac OS X? Do API calls exist in either environment that would provide this functionality? Thanks (and sorry for all the questions)!
On MacOSX, you can use /dev/random (since it's a *nix).
On Windows, you probably want the CryptGenRandom function. I don't know if there's an implementation of boost::random_device that uses it.
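Reading from the device is just ordinary file I/O; a quick sketch (using /dev/urandom here so it never blocks):

#include <stdio.h>

int main(void)
{
    unsigned char buf[16];

    FILE *f = fopen("/dev/urandom", "rb");      /* /dev/random may block */
    if (!f) { perror("open /dev/urandom"); return 1; }
    if (fread(buf, 1, sizeof buf, f) != sizeof buf) { fclose(f); return 1; }
    fclose(f);

    for (size_t i = 0; i < sizeof buf; ++i)
        printf("%02x", buf[i]);
    printf("\n");
    return 0;
}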
Depends on what you want to use your RNG for.
In general terms, you'll feed seed data into a buffer, generate hash values of the buffer, mix a counter into the result and hash it some more. The reason for using a hash function is that good hashes are designed to yield random-looking results from input data that's more structured.
If you want to use it for cryptography, things will get a lot hairier. You'll need to jump through more hoops to ensure that any repeating patterns in your RNG stay within reasonably safe limits. I can recommend Bruce Schneier's "Practical Cryptography" (for an introduction to RNGs, and a sample implementation). He's also got some RNG-related material up about his Yarrow RNG.
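As a very rough illustration of that buffer-plus-counter-plus-hash construction (OpenSSL's SHA256 is used here purely as an example hash; this toy is not a vetted generator and should not be used for real cryptography):

#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

/* Toy generator: seed material plus a counter, re-hashed for each output
 * block. For real work use the platform CSPRNG or a reviewed design
 * such as Yarrow/Fortuna. */
struct toy_rng {
    unsigned char seed[32];   /* filled from whatever entropy you gathered */
    uint64_t      counter;
};

static void toy_rng_block(struct toy_rng *r, unsigned char out[SHA256_DIGEST_LENGTH])
{
    unsigned char buf[sizeof r->seed + sizeof r->counter];
    memcpy(buf, r->seed, sizeof r->seed);
    memcpy(buf + sizeof r->seed, &r->counter, sizeof r->counter);
    r->counter++;                  /* mix a counter into every block */
    SHA256(buf, sizeof buf, out);  /* the hash smooths structured input */
}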
If boost relies on /dev/random, chances are it works on MacOS also (as it has that).
On Windows there is CryptoAPI as part of the OS, and that provides a crypto quality RNG.
Also, I believe modern Intel CPUs have a hardware RNG on the chip - however you'd have to figure out how to get at that on each OS. Using the higher level APIs is probably a better bet.
edit: Here's a link to how the Intel RNG works
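If you do want to go straight at the hardware on a recent Intel/AMD chip, compilers expose the RDRAND instruction as an intrinsic (a sketch; requires a CPU with RDRAND support and, on GCC/Clang, building with -mrdrnd):

#include <stdio.h>
#include <immintrin.h>

int main(void)
{
    unsigned int value;

    /* _rdrand32_step() returns 1 on success and 0 if the hardware could
     * not supply a value; real code should retry a few times. */
    if (_rdrand32_step(&value))
        printf("hardware random value: %u\n", value);
    else
        printf("RDRAND unavailable or temporarily exhausted\n");
    return 0;
}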
OpenSSL has a decent one.
#include <openssl/rand.h>
#include <time.h>
...
time_t now = time(NULL);
RAND_seed(&now, sizeof(now)); // seed before you request the first number
unsigned char buf[16];                      // illustrative output buffer
int success = RAND_bytes(buf, sizeof(buf)); // returns 1 on success
if (!success) die_loudly();                 // die_loudly() is your own error handler
RAND_cleanup(); // after you don't need any more numbers
Microsoft CryptoAPI has one on Win32. It requires a few more function calls. Not including the details here because there are 2 to 5 args to each of these calls. Be careful, CryptoAPI seems to require the user to have a complete local profile (C:\Documents and Settings\user\Local Settings) correctly set up before it can give you a random number.
CryptAcquireContext // see docs
CryptGenRandom
CryptReleaseContext
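A minimal sequence of those calls looks roughly like this (CRYPT_VERIFYCONTEXT asks for an ephemeral context, which sidesteps the key-container/profile issue mentioned above):

#include <windows.h>
#include <wincrypt.h>
#include <stdio.h>

int main(void)
{
    HCRYPTPROV prov = 0;
    unsigned char buf[16];

    if (!CryptAcquireContext(&prov, NULL, NULL, PROV_RSA_FULL, CRYPT_VERIFYCONTEXT))
        return 1;

    if (CryptGenRandom(prov, sizeof buf, buf)) {      /* fills buf with random bytes */
        for (size_t i = 0; i < sizeof buf; ++i)
            printf("%02x", buf[i]);
        printf("\n");
    }

    CryptReleaseContext(prov, 0);
    return 0;
}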