Using Char.chr in SML - char

I need to convert an int to its equivalent char using the Char.chr function, but why does the function return every char in the form #"\^A" instead of just #"A" (which is how I want it)?

What you see there is just the way control characters (ASCII codes 0-31) are pretty-printed by the interactive toplevel. For example, #"\^A" is equivalent to #"\001". The SML system presumably uses its own Char.toString function to print values of type char. Try chr 65, which should be printed as #"A".
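For example, in an SML/NJ session (a sketch; the exact pretty-printing can differ between SML implementations):

- Char.chr 1;   (* a control character, shown with the \^ notation *)
val it = #"\^A" : char
- Char.chr 65;  (* a printable character *)
val it = #"A" : char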

Related

string size limit input cin.get() and getline()

In this project the user can type in a text (maximum 140 characters),
so for this limitation I first used getline():
string text;
getline(cin, text);
text = text.substr(1, 140);
but in this case the result of cout << text << endl; is an empty string.
So then I used cin.get() like this:
cin.get(text, 140);
This time I get this error: no matching function for call to ‘std::basic_istream::get(std::__cxx11::string&, int)’
Note that I have included <iostream>.
So the question is: how can I fix this, and why is this happening?
Your first approach is sound with one correction - you need to use
text = text.substr(0, 140);
instead of text = text.substr(1, 140);. Containers in C++ (including std::string) are indexed from 0, and substr(1, 140) asks for up to 140 characters starting at position 1, so it silently drops the first character of the input. Worse, if the string happens to be only one character long, text.substr(1, 140); will not crash, but it will not produce the desired output either.
The documentation for substr says it throws an out_of_range exception only if the starting position is greater than the string length. For a one-character string, position 1 is equal to the length, which is explicitly allowed and simply yields an empty string; this is well-defined behavior, not undefined, and it matches the empty output you observed.
Your second approach tries to pass a std::string to an overload of get() that expects a C-style character array. As the error says, the compiler couldn't find a matching function because the argument was a std::string and not a char array. Some functions will perform that conversion for you, but this one does not. You could read into a char array yourself and copy it into the string afterwards, but the first approach is much more in line with C++ practices.
One last note: currently you're reading only a single line of input; I assume you will want to change that.
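Putting it together, here is a minimal sketch of the corrected first approach; substr(0, 140) is safe even when the input is shorter than 140 characters, because the count is clamped to the remaining length:

#include <iostream>
#include <string>
using namespace std;

int main() {
    string text;
    getline(cin, text);          // read one full line of input
    text = text.substr(0, 140);  // keep at most the first 140 characters
    cout << text << endl;
    return 0;
}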

Gnuplot: Convert integer to ASCII value

I would like to generate multiple plots within one gnuplot script, and would like to start each plot's title with a running capital letter, i.e., the first plot will have the title (A) sample title for first chart, the second one (B) sample title for second chart, and so on.
In Java, for this I basically have to do
int i = 65; // ASCII value of 'A'
char c = (char)i; // Convert 65 to the corresponding character ('A')
i++;
// Use c; then repeat
I just tried something similar in gnuplot using gprintf with the %c formatter, yet I could not get it working, because of this note in the documentation:
These format specifiers are not the same as those used by the standard C-language routine sprintf().
Long question short: how do I convert an integer to its corresponding ASCII character?
gnuplot> print sprintf("%c", 65)
A
Gnuplot provides gprintf, which uses gnuplot's own format specifiers, but it also provides sprintf, which uses the standard C ones; sprintf with %c does the conversion.
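A running capital letter for the chart titles can then be built from a loop index. A minimal sketch (assuming gnuplot 4.6 or newer for the do for loop; the titles and plotted functions are placeholders):

# 65 is the ASCII code of 'A'
do for [i=0:2] {
    letter = sprintf("%c", 65 + i)
    set title sprintf("(%s) sample title for chart %d", letter, i + 1)
    plot sin(x + i)
}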

How to compute the display width of a prompt on the CLI with ANSI escape codes?

A trivial implementation:
extern crate unicode_width;

fn main() {
    let prompt = "\x1b[1;32m>>\x1b[0m ";
    println!("{}", unicode_width::UnicodeWidthStr::width(prompt));
}
returns 12 but 3 is expected.
I would also be happy to use a crate that already does this, if there is one.
You're not going to get the right width from a Unicode width calculation alone, simply because the escape sequences in the string are not printable: on a terminal they take up no columns, yet the width function still counts their characters.
If you control the content of the string, you could calculate the width by:
copying the string to a temporary variable,
substituting the escape sequences with empty strings, i.e., removing each pattern that starts with \x1b, continues with any combination of [, ], <, >, =, ?, ; or decimal digits, and ends with a "final" character in the range # to ~, and
measuring the length of what (if anything) is left (see the sketch after the links below).
In your example
let prompt = "\x1b[1;32m>>\x1b[0m ";
only ">> " would be left to measure.
For patterns... you would start here: Regex
Further reading:
crate Regex
17.3 Strings, Rust by Example
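Here is a minimal sketch of that approach, assuming the regex and unicode-width crates; the function name display_width and the exact pattern are illustrative, not part of either crate:

extern crate regex;
extern crate unicode_width;

use regex::Regex;
use unicode_width::UnicodeWidthStr;

fn display_width(s: &str) -> usize {
    // ESC, then any run of [, ], <, >, =, ?, ; or digits, then a "final" byte in '#'..='~'
    let ansi = Regex::new(r"\x1b[\[\]<>=?;0-9]*[#-~]").unwrap();
    let stripped = ansi.replace_all(s, "");
    UnicodeWidthStr::width(stripped.as_ref())
}

fn main() {
    let prompt = "\x1b[1;32m>>\x1b[0m ";
    println!("{}", display_width(prompt)); // prints 3
}

This covers CSI-style sequences like the ones in the prompt; more exotic sequences (OSC window titles terminated by BEL or ST, for example) would need a richer pattern.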

Emacs Lisp: getting ascii value of character

I'd like to translate a character in Emacs to its numeric ASCII code, similar to casting char a = 'a'; int i = (int)a in C. I've tried string-to-number and a few other functions, but none seem to make Emacs read the char as a number in the end.
What's the easiest way to do this?
To get the ASCII number that represents the character, as Drew said, put a question mark before the character and evaluate that expression:
?a ==> 97
The number appears in the minibuffer; with C-u it is also inserted after the expression.
The inverse also works:
(insert 97) will insert an "a" into the buffer.
By the way, in some cases the character must be quoted:
?\" will eval to 34
A character is a whole number in Emacs Lisp. There is no separate character data type.
The function string-to-char is built-in and does what you want. (string-to-char "foo") is equivalent to (aref "foo" 0), which is abo-abo's answer, but it is coded in C.
A string is an array:
(aref "foo" 0)

XCode: preprocessor concatenation broken?

We have a piece of cross-platform code that uses wide strings. All our string constants are wide strings and we need to use CFSTR() on some of them. We use these macros to get rid of L from wide strings:
// strip leading L"..." from wide string macros
// expand macro, e.g. turn WIDE_STRING (#define WIDE_STRING L"...") into L"..."
# define WIDE2NARROW(WideMacro) REMOVE_L(WideMacro)
// L"..." -> REM_L"..."
# define REMOVE_L(WideString) REM_##WideString
// REM_L"..." -> "..."
# define REM_L
This works on both Windows and Linux. Not on Mac – we get the following error:
“error: pasting "REM_" and "L"qm"" does not give a valid preprocessing token”
Mac example:
#define TRANSLATIONS_DIR_BASE_NAME L"Translations"
#define TRANSLATIONS_FILE_NAME_EXTENSION L"qm"
CFURLRef appUrlRef = CFBundleCopyResourceURL( CFBundleGetMainBundle()
, macTranslationFileName
, CFSTR(WIDE2NARROW(TRANSLATIONS_FILE_NAME_EXTENSION))
, CFSTR(WIDE2NARROW(TRANSLATIONS_DIR_BASE_NAME))
);
Any ideas?
String literals are formed during tokenization, which happens before the preprocessor runs, so L"qm" is already a single wide-string-literal token. That means you are token-pasting REM_ onto a string literal (not onto the lone letter L), and the result is not a valid preprocessing token, which C99 forbids.
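One common workaround, sketched here with hypothetical macro names rather than taken from the question, is to go the other way: keep the narrow literal as the base macro and build the wide version from it by pasting the L prefix on (which does yield a valid token), instead of trying to strip the prefix off afterwards:

// Paste the L prefix onto a narrow string literal; the extra level of
// indirection makes sure macro arguments are expanded first.
#define WIDEN2(str) L##str
#define WIDEN(str)  WIDEN2(str)

// Hypothetical reorganization of the original macros:
#define TRANSLATIONS_FILE_NAME_EXTENSION_NARROW "qm"
#define TRANSLATIONS_FILE_NAME_EXTENSION WIDEN(TRANSLATIONS_FILE_NAME_EXTENSION_NARROW)

// CFSTR() can then take the narrow macro directly:
// CFSTR(TRANSLATIONS_FILE_NAME_EXTENSION_NARROW)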
