// this is a subtraction example
int x=3098;
int z=3088;
int somme=x-z;
char buffer[4];
// convert int to char
itoa(somme,buffer,10);
// I want to store the value in the char array as "0010", not as "10"
Then you have to use a formatter; the standard ones in C are the printf family. Take care of the length of the array: if you want to store a string of length n, you need an array of length n+1 (C strings are null-terminated). Thus:
// this is a subtraction example
int x=3098;
int z=3088;
int somme=x-z;
char buffer[5];
sprintf(buffer,"%04d",somme);
will fit your needs. It means: format the integer somme as a decimal representation (%04d) at least 4 digits wide, padded with leading zeros if needed, and store the result in memory starting at the beginning of buffer.
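For reference, a complete, minimal version of the same program (a sketch only; the values are taken from the question):
#include <stdio.h>

int main(void)
{
    int x = 3098;
    int z = 3088;
    int somme = x - z;

    char buffer[5];                 /* 4 digits + terminating '\0' */
    sprintf(buffer, "%04d", somme);

    printf("%s\n", buffer);         /* prints: 0010 */
    return 0;
}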
I tried this
#include <stdio.h>
int main(void) {
    char charVal1 = '1';
    char charVal2 = '91';
    printf("%d\n", charVal1);
    printf("%d", charVal2);
}
The output of the first printf statement is 49, but the other one shows 14641. How is '91' converted into 14641? Also, sometimes the compiler warns that the implicit conversion resulted in overflow, and then the output is 49.
In your system, the character '1' is encoded as 49. C requires that the digit characters '0'…'9' be contiguous and in order, so on your system '9' is encoded as 57. '91' is therefore '9' followed by '1', or 0x39 followed by 0x31 in hexadecimal. '91' is a multi-character constant; its value is implementation-defined, but most compilers pack the earlier characters into the higher-order bytes, giving the two-byte value 0x3931, which is 14641 in decimal.
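A tiny sketch that makes those encodings visible; the multi-character constant is shown via the equivalent shift-and-or, since writing '91' directly usually draws a compiler warning (and its exact value is implementation-defined):
#include <stdio.h>

int main(void)
{
    printf("%d %d\n", '1', '9');        /* 49 57 on an ASCII-based system */
    printf("%d\n", ('9' << 8) | '1');   /* 14641, i.e. 0x3931 */
    return 0;
}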
'91' is an int with the value 14641. On my system, int is bigger than char, and char holds -128 to 127, so assigning an integer outside that range to a char gives a compiler warning.
Now your formatted print call uses the "%d" format specifier, which expects an int. The char values you pass are promoted to int, so printf prints their numeric values as decimal integers: 49 and 14641 in your output.
You probably mean something more like this:
char strVal1[] = "1";
char strVal2[] = "91";
printf("%s\n", strVal1);
printf("%s", strVal2);
Is there an easy STL way to convert a std::string to a std::u32string, i.e. a basic_string of char to char32_t?
This is not a Unicode question.
To initialise a new string:
std::u32string s32(s.begin(), s.end());
To assign to an existing string:
s32.assign(s.begin(), s.end());
If the string might contain characters outside the positive range of char (bytes with the high bit set, which are negative when char is signed), then this can cause sign-extension issues, converting negative values into huge positive values. Dealing with that possibility is messier; you'll have to convert to unsigned char before widening the value.
s32.resize(s.size());
std::transform(s.begin(), s.end(), s32.begin(),
               [](char c) -> unsigned char { return c; });   // needs <algorithm>
or a plain loop
s32.clear(); // if not already empty
for (unsigned char c : s) {s32 += c;}
s32.resize(s.length());
std::copy(s.begin(),s.end(),s32.begin());
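For completeness, a self-contained sketch of the transform approach with the unsigned char fix (the sample string and the printed code points are only illustrative):
#include <algorithm>
#include <iostream>
#include <string>

int main()
{
    std::string s = "hello ";
    s += static_cast<char>(0xE9);    // a byte that is negative when char is signed
    std::u32string s32(s.size(), U'\0');

    std::transform(s.begin(), s.end(), s32.begin(),
                   [](char c) -> char32_t { return static_cast<unsigned char>(c); });

    for (char32_t c : s32)
        std::cout << static_cast<unsigned long>(c) << ' ';   // 104 101 108 108 111 32 233
    std::cout << '\n';
}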
I implemented an insertion sort in C, and someone who was helping me told me to make something a pointer, as shown in the following line near the end, but why?
size_t size = sizeof( array ) / sizeof( *array );
Why is the second one a pointer to array, and what does size_t do?
sizeof(array) = size, in bytes, of the entire array;
sizeof(*array) = size, in bytes, of the first item in the array;
As items in a C array are of uniform size, dividing the first by the second gives the number of items in the array.
size_t is an unsigned integer type large enough to store the size of any object the computer can store in memory. So, in practice, it's often the same as unsigned int or unsigned long, but it's not guaranteed to be either, and there's semantic value in it being a distinct type.
Why is the second one a pointer to array
Example 1
char a[5];
sizeof(a)=5
sizeof(*a)=1
So, size = 5/1 = 5 // this indicates the number of elements in the array
Example 2
int a[5];
sizeof(a)= 20
sizeof(*a)=4
So, size = 20/4 = 5 // this indicates the number of elements in the array
and what does size_t do?
Read: What is size_t in C?
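A minimal runnable version of the examples above, with the count held in a size_t (the array contents are arbitrary):
#include <stdio.h>

int main(void)
{
    int array[] = { 5, 2, 9, 1, 7 };

    size_t size = sizeof(array) / sizeof(*array);   /* e.g. 20 / 4 = 5 */

    printf("%zu elements\n", size);                 /* prints: 5 elements */
    return 0;
}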
I've read in a book:
..characters are just 16-bit unsigned integers under the hood. That means you can assign a number literal, assuming it will fit into the unsigned 16-bit range (65535 or less).
It gives me the impression that I can assign integers to characters as long as it's within the 16-bit range.
But how come I can do this:
char c = (char) 80000; //80000 is beyond 65535.
I'm aware the cast did the magic. But what exactly happened behind the scenes?
Looks like it's using the int value mod 65536. The following code:
int i = 97 + 65536;
char c = (char)i;
System.out.println(c);
System.out.println(i % 65536);
char d = 'a';
int n = (int)d;
System.out.println(n);
Prints out 'a' and then '97' twice ('a' is character 97 in ASCII).
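The same modulo-65536 arithmetic can be illustrated outside Java with any unsigned 16-bit type; this C++ sketch is only an analogy (it assumes char16_t is exactly 16 bits, the common case), not the Java mechanism itself:
#include <iostream>

int main()
{
    int i = 80000;
    char16_t c = static_cast<char16_t>(i);           // narrowing keeps the value mod 65536

    std::cout << static_cast<unsigned>(c) << '\n';   // 14464
    std::cout << 80000 % 65536 << '\n';              // 14464
}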
I want to create a color from a given string. The string does not have to be related to the resulting color in any form, but the same string should always result in the same color.
This question is not bound to a specific programming language, so the "Color" should be in a language-independent format like RGB.
It would be good if the algorithm created colors across a wide color spectrum and not just greyish colors.
Perfect would be something like this (C++):
#include <string>
int getRedFromString( std::string givenString )
{ /*Your code here...*/ }
int getGreenFromString( std::string givenString )
{ /*Your code here...*/ }
int getBlueFromString( std::string givenString )
{ /*Your code here...*/ }
int main()
{
    std::string colorString = "FooBar";
    int R = getRedFromString  ( colorString );
    int G = getGreenFromString( colorString );
    int B = getBlueFromString ( colorString );
}
Take a hash of the string, then use the first three bytes of the hash as Red, Blue, and Green values.
You could use any hashing algorithm to create a value from the string that is always the same for any given string, and get the color components from that.
The GetHashCode method in .NET for example returns an integer, so it would be easy to create an RGB value from that:
int RGB = colorString.GetHashCode() & 0xFFFFFF;
or
int code = colorString.GetHashCode();
int B = code & 0xFF;
code >>= 8;
int G = code & 0xFF;
code >>= 8;
int R = code & 0xFF;
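The same byte-extraction idea, written in C++ to match the signatures in the question. FNV-1a is used here only as one example of a simple hash that is stable across runs (my choice, not prescribed by the answers above); any deterministic hash would do:
#include <cstdint>
#include <iostream>
#include <string>

// FNV-1a, 32-bit: a small, well-known hash with fixed constants.
static std::uint32_t hashString(const std::string& s)
{
    std::uint32_t h = 2166136261u;           // FNV offset basis
    for (unsigned char c : s) {
        h ^= c;
        h *= 16777619u;                      // FNV prime
    }
    return h;
}

int getRedFromString  (const std::string& s) { return (hashString(s) >> 16) & 0xFF; }
int getGreenFromString(const std::string& s) { return (hashString(s) >>  8) & 0xFF; }
int getBlueFromString (const std::string& s) { return  hashString(s)        & 0xFF; }

int main()
{
    std::string colorString = "FooBar";
    std::cout << getRedFromString(colorString)   << ' '
              << getGreenFromString(colorString) << ' '
              << getBlueFromString(colorString)  << '\n';
}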
I would have a try with an MD5 of the string:
from hashlib import md5

def get_color_tuple(item):
    # hashlib wants bytes, so encode first (assuming item is a str)
    hash = md5(item.encode("utf-8")).hexdigest()
    hash_values = (hash[:8], hash[8:16], hash[16:24])  # note: we ignore the values from 24 to 32, but it shouldn't be a problem.
    return tuple(int(value, 16) % 256 for value in hash_values)
What the algorithm does is basically this: it gets the first three chunks of 4 bytes (i.e. 8 hex characters each), and returns them in a tuple modulo 256, so that their range will be in [0, 255].
#include <string>
#include <locale>
#include <windows.h>   // for COLORREF
using namespace std;

int main()
{
    locale loc;
    string colorString;
    COLORREF color;
    colorString = "FooBar";
    const collate<char>& coll = use_facet<collate<char> >(loc);
    color = coll.hash(colorString.data(), colorString.data() + colorString.length());
}
You can compute the Gödel number of the string. Basically it would be
(int)A[0] * 256^n + (int)A[1] * 256^(n-1) + ... + (int)A[n]
It's the same idea as our number system, but using base 256 because there are 256 possible character values.
Next, just reduce by a factor for the range of the spectrum you want to map to:
e.g. suppose you want to map into the range 0 ... 2000.
Then just take whatever number you get and divide it by (the largest possible value) / 2000.
The advantage of this approach is that it will give you a broader range of colors than just RGB. However, if you want the simplicity of the 3 primary colors, then you can just divide by 3 instead and take different ranges, or take mod 3.
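A rough sketch of that idea; note that the base-256 number overflows almost immediately for real strings, so this version simply lets it wrap modulo 2^64 via unsigned arithmetic and then folds it into the example 0 ... 2000 range with a modulo rather than the division described above:
#include <cstdint>
#include <iostream>
#include <string>

int main()
{
    std::string s = "FooBar";

    // Base-256 value of the string, kept modulo 2^64 by unsigned wrap-around.
    std::uint64_t value = 0;
    for (unsigned char c : s)
        value = value * 256 + c;

    // Fold into the example range 0 ... 2000.
    std::cout << value % 2001 << '\n';
}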
There are a number of ways to do this, depending on what you are trying to accomplish. The easiest is to turn the string into a stream (e.g. a std::stringstream) and read the text values back as unsigned chars.
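A small sketch of that stream-based reading, with an arbitrary round-robin fold of the bytes into three channels (the fold itself is my own choice, not something prescribed by the answer):
#include <iostream>
#include <sstream>
#include <string>

int main()
{
    std::string colorString = "FooBar";
    std::istringstream in(colorString);

    // Read the characters back as bytes and spread them over R, G, B.
    int rgb[3] = { 0, 0, 0 };
    char c;
    for (int i = 0; in.get(c); ++i)
        rgb[i % 3] = (rgb[i % 3] + static_cast<unsigned char>(c)) % 256;

    std::cout << rgb[0] << ' ' << rgb[1] << ' ' << rgb[2] << '\n';
}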