How can a floating point number be converted into an integer in ATS? - ats

For instance, I need to turn the number 3.14 (of the type double) into 3 (of the type int). I tried double2int but it did not work. I also tried floor, but it did not work, either.

You should use g0float2int.
For example:
#include "share/atspre_staload.hats"
#include "share/atspre_staload_libats_ML.hats"

implement main0() = () where {
  val double_pi: double = 3.14
  val int_pi: int = g0float2int(double_pi)
  val () = println!(double_pi)
  val () = println!(int_pi)
}
You might be confused because there is a function in the ATS codebase called double2int, but it is unique to the libraries used to target certain languages like JavaScript, etc.

Related

How to use List, Map, etc. from TypeScript

I have a combo ReScript and TypeScript project. I want to use gentype to expose TypeScript friendly objects. Gentype works for string, bool, records, and other basic objects without any hassles. How can I work with list and char and other ReScript-specific types from TypeScript?
There is a shims feature but I don't know how to do that. I would assume there are built-in shims for the standard Belt library but can't find them.
I plan to do most of the calculations and heavy lifting inside ReScript. So when the info is sent to TypeScript, I'll probably just be consuming the data, not modifying it. This might be iterating through a list to display the contents in a non-ReScript React project. I don't plan to manipulate the data.
One option is to convert lists to arrays inside ReScript before exporting the results. So any time I want to consume information from TypeScript, I'll create a function and/or type in ReScript that only has bool, number, string, array, and object. I could create an "interop" module with all the stuff I want to consume or use from TypeScript. This provides a clean separation between the ReScript and TypeScript world and is easy to get my head around but seems a little inefficient (like converting all lists to arrays) and extra work.
On the other hand, if using the Belt objects like list and map is cumbersome from TypeScript, even with the shims, then I'm probably better off creating my "interop" module anyway.
What is the recommended/simplest/best way to use list, map, char, and other ReScript-specific objects from within TypeScript? Are there friendly TypeScript definitions for the Belt standard library that I could use even if I wasn't using ReScript?
===
One additional note. I have experience trying to use F# (functional) from C#. It was painful. The best solution for me in that environment was to create an interface on the F# side that was easily consumable from C# and did not use the F#-native objects.
As ReScript compiles to JavaScript, and the output is very readable and (usually) straightforward, you could just emulate what it generates.
For example, this ReScript code:
let xs = list{1, 2, 3}
let _ = List.map(x => x + 1, xs)
is compiled into this (slightly simplified) JavaScript:
var List = require("./stdlib/list.js");
var xs = {
  hd: 1,
  tl: {
    hd: 2,
    tl: {
      hd: 3,
      tl: /* [] */ 0
    }
  }
};
List.map((x) => x + 1, xs);
There is a slight problem with the literal syntax of lists specifically, but that could be simplified a bit by using List.cons instead:
let xs = List.cons(1, List.cons(2, List.cons(3, list{})))
which becomes:
var xs = List.cons(1, List.cons(2, List.cons(3, /* [] */0)));
Pattern matching also isn't as convenient, obviously, but it's still pretty straightforward for the simple things at least. For example:
let rec sum = xs => switch xs {
| list{} => 0
| list{x, ...rest} => x + sum(rest)
}
becomes:
function sum(xs) {
  if (xs) {
    return xs.hd + sum(xs.tl);
  } else {
    return 0;
  }
}
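On the TypeScript side you could describe that same runtime shape with a hand-written type and walk it just like the generated JavaScript does. A minimal sketch, assuming the cons-cell layout shown above (the type and helper names are mine, not something gentype emits):

// ReScript's list<'a> at runtime: 0 for the empty list, otherwise a cons cell.
type ReScriptList<T> = 0 | { hd: T; tl: ReScriptList<T> };

// Collect the elements into a plain array for consumption in TypeScript/React.
function listToArray<T>(xs: ReScriptList<T>): T[] {
  const out: T[] = [];
  let cur = xs;
  while (cur !== 0) {
    out.push(cur.hd);
    cur = cur.tl;
  }
  return out;
}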
Most other types don't have any special compiler support, and so become just plain function calls. Using Belt.Map for example, despite using some advanced language features, compiles to very straightforward JavaScript:
module IntCmp = Belt.Id.MakeComparable({
  type t = int
  let cmp = (a, b) => a - b
})

let m = Belt.Map.make(~id=module(IntCmp))
let _ = Belt.Map.set(m, 0, "a")
becomes (more or less):
var Belt_Id = require("./stdlib/belt_Id.js");
var Belt_Map = require("./stdlib/belt_Map.js");

var IntCmp = Belt_Id.MakeComparable({
  cmp: (a, b) => a - b
});

var m = Belt_Map.make(IntCmp);

Belt_Map.set(m, 0, "a");

What are the side effects of using structs to group constants

I know we can't use a struct as a constant in Go.
But I would like to group my constants for cleaner code, instead of having many constants with some prefix, so I am doing this:
var SomeStatus = struct {
    Active   int
    Inactive int
    Others   int
}{
    Active:   1,
    Inactive: 2,
    Others:   3,
}

// usage example
status := SomeStatus.Active

// example with some prefix
const StatusActive = 1
const StatusInactive = 2
const StatusOthers = 3
const OtherConstantVariable = 1
...
One side effect I can already see: since it's a variable rather than a constant, its value can be reassigned at runtime.
What other side effects does this trick have?
The ones I can think of. There may be others:
It's less efficient, as variables allocate runtime memory.
Any values that could have been pre-computed at compile time, had they been constants, will now be calculated at runtime.
For exported symbols, it opens you up to modification at runtime by anyone who imports your package.
It's not idiomatic, so will potentially confuse anyone who sees your code.
The flexibility that comes from untyped constants is lost.
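For comparison, a minimal sketch of the conventional alternative, using a named type and typed constants (the names here are illustrative):

package main

import "fmt"

// Status is a small named type; its constants are resolved at compile
// time and cannot be reassigned by packages that import this one.
type Status int

const (
    StatusActive Status = iota + 1
    StatusInactive
    StatusOthers
)

func main() {
    s := StatusActive
    fmt.Println(s) // prints 1
}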

Why is it that traits for operator overloading require ownership of self? [duplicate]

I made a two-element Vector struct and I want to overload the + operator.
I made all my functions and methods take references, rather than values, and I want the + operator to work the same way.
impl Add for Vector {
    fn add(&self, other: &Vector) -> Vector {
        Vector {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}
Depending on which variation I try, I either get lifetime problems or type mismatches. Specifically, the &self argument seems to not get treated as the right type.
I have seen examples with template arguments on impl as well as Add, but they just result in different errors.
I found How can an operator be overloaded for different RHS types and return values? but the code in the answer doesn't work even if I put a use std::ops::Mul; at the top.
I am using rustc 1.0.0-nightly (ed530d7a3 2015-01-16 22:41:16 +0000)
I won't accept "you only have two fields, why use a reference" as an answer; what if I wanted a 100-element struct? I will accept an answer that demonstrates that even with a large struct I should be passing by value, if that is the case (I don't think it is, though). I am interested in knowing a good rule of thumb for struct size and passing by value vs passing by reference, but that is not the current question.
You need to implement Add on &Vector rather than on Vector.
impl<'a, 'b> Add<&'b Vector> for &'a Vector {
    type Output = Vector;

    fn add(self, other: &'b Vector) -> Vector {
        Vector {
            x: self.x + other.x,
            y: self.y + other.y,
        }
    }
}
In its definition, Add::add always takes self by value. But references are types like any other1, so they can implement traits too. When a trait is implemented on a reference type, the type of self is a reference; the reference is passed by value. Normally, passing by value in Rust implies transferring ownership, but when references are passed by value, they're simply copied (or reborrowed/moved if it's a mutable reference), and that doesn't transfer ownership of the referent (because a reference doesn't own its referent in the first place). Considering all this, it makes sense for Add::add (and many other operators) to take self by value: if you need to take ownership of the operands, you can implement Add on structs/enums directly, and if you don't, you can implement Add on references.
Here, self is of type &'a Vector, because that's the type we're implementing Add on.
Note that I also specified the RHS type parameter with a different lifetime to emphasize the fact that the lifetimes of the two input parameters are unrelated.
1 Actually, reference types are special in that you can implement traits for references to types defined in your crate (i.e. if you're allowed to implement a trait for T, then you're also allowed to implement it for &T). &mut T and Box<T> have the same behavior, but that's not true in general for U<T> where U is not defined in the same crate.
If you want to support all scenarios, you must support all the combinations:
&T op U
T op &U
&T op &U
T op U
In the Rust standard library itself, this is done through an internal macro (forward_ref_binop!).
Luckily, there is a Rust crate, impl_ops, that offers a macro to write that boilerplate for us: its impl_op_ex! macro generates all the combinations.
Here is their sample (with the DonkeyKong type filled in, roughly as it appears in the crate's documentation):
#[macro_use] extern crate impl_ops;
use std::ops;

// DonkeyKong as defined in the crate's documentation (assumed definition):
pub struct DonkeyKong { pub bananas: i32 }
impl DonkeyKong {
    pub fn new(bananas: i32) -> DonkeyKong { DonkeyKong { bananas } }
}

impl_op_ex!(+ |a: &DonkeyKong, b: &DonkeyKong| -> i32 { a.bananas + b.bananas });

fn main() {
    let total_bananas = &DonkeyKong::new(2) + &DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
    let total_bananas = &DonkeyKong::new(2) + DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
    let total_bananas = DonkeyKong::new(2) + &DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
    let total_bananas = DonkeyKong::new(2) + DonkeyKong::new(4);
    assert_eq!(6, total_bananas);
}
Even better, they have an impl_op_ex_commutative! macro that'll also generate the operators with the parameters reversed if your operator happens to be commutative.
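For instance, a mixed-type commutative operator could be declared like this (a sketch modeled on the impl_op_ex! sample above; see the crate's documentation for the exact forms it supports):

// Generates DonkeyKong + i32 and i32 + DonkeyKong (plus the reference variants).
impl_op_ex_commutative!(+ |a: &DonkeyKong, b: i32| -> i32 { a.bananas + b });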

Crash when casting the result of arc4random() to Int

I've written a simple Bag class. A Bag is filled with a fixed ratio of Temperature enums. It allows you to grab one at random and automatically refills itself when empty. It looks like this:
class Bag {
    var items = Temperature[]()

    init() {
        refill()
    }

    func grab() -> Temperature {
        if items.isEmpty {
            refill()
        }
        var i = Int(arc4random()) % items.count
        return items.removeAtIndex(i)
    }

    func refill() {
        items.append(.Normal)
        items.append(.Hot)
        items.append(.Hot)
        items.append(.Cold)
        items.append(.Cold)
    }
}
The Temperature enum looks like this:
enum Temperature: Int {
    case Normal, Hot, Cold
}
My GameScene:SKScene has a constant instance property bag:Bag. (I've tried with a variable as well.) When I need a new temperature I call bag.grab(), once in didMoveToView and when appropriate in touchesEnded.
Randomly this call crashes on the if items.isEmpty line in Bag.grab(). The error is EXC_BAD_INSTRUCTION. Checking the debugger shows items is size=1 and [0] = (AppName.Temperature) <invalid> (0x10).
Edit Looks like I don't understand the debugger info. Even valid arrays show size=1 and unrelated values for [0] =. So no help there.
I can't get it to crash isolated in a Playground. It's probably something obvious but I'm stumped.
The arc4random function returns a UInt32. If you get a value higher than Int.max, the Int(...) cast will crash.
Using
Int(arc4random_uniform(UInt32(items.count)))
should be a better solution.
(Blame the strange crash messages in the Alpha version...)
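Applied to the grab() method from the question, that would look roughly like this (a sketch that keeps the question's Swift-era API names, such as removeAtIndex):

func grab() -> Temperature {
    if items.isEmpty {
        refill()
    }
    // arc4random_uniform(n) returns a UInt32 in 0..<n, so the Int cast cannot overflow
    let i = Int(arc4random_uniform(UInt32(items.count)))
    return items.removeAtIndex(i)
}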
I found that the best way to solve this is by using rand() instead of arc4random().
The code, in your case, could be:
var i = Int(rand()) % items.count
This method will generate a random Int value between the given minimum and maximum:
func randomInt(min: Int, max: Int) -> Int {
    return min + Int(arc4random_uniform(UInt32(max - min + 1)))
}
The crash that you were experiencing is due to the fact that Swift detected a type inconsistency at runtime.
Since Int != UInt32 you will have to first type cast the input argument of arc4random_uniform before you can compute the random number.
Swift doesn't allow casting from one integer type to another if the result of the cast doesn't fit. E.g. the following code will work okay:
let x = 32
let y = UInt8(x)
Why? Because 32 is a valid value for a UInt8. But the following code will fail:
let x = 332
let y = UInt8(x)
That's because you cannot store 332 in an unsigned 8-bit integer; it can only take values from 0 to 255.
When you do casts in C, the int is simply truncated, which may be unexpected or undesired, as the programmer may not be aware that truncation may take place. So Swift handles things a bit differently here. It will allow such casts as long as no truncation takes place, but if there is truncation, you get a runtime exception. If you think truncation is okay, then you must do the truncation yourself to let Swift know that this is intended behavior; otherwise Swift must assume it is accidental.
This is even documented (documentation of UnsignedInteger):
Convert from Swift's widest unsigned integer type, trapping on overflow.
And what you see is the "overflow trapping", which is poorly done as, of course, one could have made that trap actually explain what's going on.
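If truncation is really what you want, you can ask for it explicitly. A small sketch using the truncating initializer (init(truncatingIfNeeded:) is the current spelling; older Swift versions called it init(truncatingBitPattern:)):

let x = 332
// Keeps only the low 8 bits instead of trapping.
let y = UInt8(truncatingIfNeeded: x)  // y == 76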
Assuming that items never has more than 2^32 elements (a bit more than 4 billion), the following code is safe:
var i = Int(arc4random() % UInt32(items.count))
If it can have more than 2^32 elements, you get another problem anyway as then you need a different random number function that produces random numbers beyond 2^32.
This crash is only possible on 32-bit systems. Int changes between 32-bits (Int32) and 64-bits (Int64) depending on the device architecture (see the docs).
UInt32's max is 2^32 − 1. Int64's max is 2^63 − 1, so Int64 can easily handle UInt32.max. However, Int32's max is 2^31 − 1, which means UInt32 can handle numbers greater than Int32 can, and trying to create an Int32 from a number greater than 2^31-1 will create an overflow.
I confirmed this by trying to compile the line Int(UInt32.max). On the simulators and newer devices, this compiles just fine. But I connected my old iPod Touch (32-bit device) and got this compiler error:
Integer overflows when converted from UInt32 to Int
Xcode won't even compile this line for 32-bit devices, which is likely the crash that is happening at runtime. Many of the other answers in this post are good solutions, so I won't add or copy those. I just felt that this question was missing a detailed explanation of what was going on.
This will automatically create a random Int for you:
var i = random() % items.count
i is of Int type, so no conversion necessary!
You can use
Int(rand())
To prevent same random numbers when the app starts, you can call srand()
srand(UInt32(NSDate().timeIntervalSinceReferenceDate))
let randomNumber: Int = Int(rand()) % items.count

Passing a C++ structure pointer from Perl to an arbitrary DLL function call

I am using Win32::API to call an arbitrary function exported from a DLL which accepts a C++ structure pointer.
struct PluginInfo {
    int  nStructSize;
    int  nType;
    int  nVersion;
    int  nIDCode;
    char szName[64];
    char szVendor[64];
    int  nCertificate;
    int  nMinAmiVersion;
};
Since I need to construct the structure with the pack function and pass it as an argument, I did the following:
my $name = " " x 64;
my $vendor = " " x 64;
my $pluginInfo = pack('IIIIC64C64II',0,0,0,0,$name,$vendor,0,0);
It's not constructing the structure properly.
It seems the repeat count applied to C consumes that many separate arguments rather than one 64-byte string.
Can someone please suggest the best way to construct this structure from Perl and pass it on to the DLL call?
Thanks in advance,
Naga Kiran
Use Z (NUL-padded string) in your template, as in
my $pluginInfo = pack('IIIIZ64Z64II',0,0,0,0,$name,$vendor,0,0);
Also, take a look at Win32::API::Struct, which is part of the Win32::API module.
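For completeness, a sketch of the full pack/unpack round trip with the corrected template (field order follows the struct above; how the buffer is actually passed to the DLL depends on your Win32::API prototype):

use strict;
use warnings;

# Four ints, two 64-byte NUL-padded strings, two ints.
my $template   = 'IIIIZ64Z64II';
my $pluginInfo = pack($template, 0, 0, 0, 0, '', '', 0, 0);

# ... call the DLL function here, passing $pluginInfo as the struct buffer ...

# Read the (possibly updated) fields back out after the call.
my ($size, $type, $version, $id, $name, $vendor, $cert, $min_ami)
    = unpack($template, $pluginInfo);
print "name=$name vendor=$vendor\n";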
For anything complicated, check out Convert::Binary::C. It may seem daunting at first, but once you realize its power, it's an eye opener.
Update: Let me add a bit of information. You need to have a look at a specific section of the module's manpage for the prime reason to use it. I'll quote it for convenience:
Why use Convert::Binary::C?
Say you want to pack (or unpack) data according to the following C structure:

struct foo {
    char ary[3];
    unsigned short baz;
    int bar;
};

You could of course use Perl's pack and unpack functions:

@ary = (1, 2, 3);
$baz = 40000;
$bar = -4711;
$binary = pack 'c3 Si', @ary, $baz, $bar;

But this implies that the struct members are byte aligned. If they were long aligned (which is the default for most compilers), you'd have to write

$binary = pack 'c3 x S x2 i', @ary, $baz, $bar;

which doesn't really increase readability.
Now imagine that you need to pack the data for a completely different architecture with different byte order. You would look into the pack manpage again and perhaps come up with this:

$binary = pack 'c3 x n x2 N', @ary, $baz, $bar;

However, if you try to unpack $foo again, your signed values have turned into unsigned ones.
All this can still be managed with Perl. But imagine your structures get more complex? Imagine you need to support different platforms? Imagine you need to make changes to the structures? You'll not only have to change the C source but also dozens of pack strings in your Perl code. This is no fun. And Perl should be fun.
Now, wouldn't it be great if you could just read in the C source you've already written and use all the types defined there for packing and unpacking? That's what Convert::Binary::C does.
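Tying that back to the struct from the question, a rough sketch of how this could look (member values are placeholders; unspecified members are packed as zero):

use Convert::Binary::C;

# Parse the C declaration once; packing and unpacking then work by member
# name, with alignment handled for you.
my $c = Convert::Binary::C->new(ByteOrder => 'LittleEndian');
$c->parse(<<'END');
struct PluginInfo {
    int  nStructSize;
    int  nType;
    int  nVersion;
    int  nIDCode;
    char szName[64];
    char szVendor[64];
    int  nCertificate;
    int  nMinAmiVersion;
};
END

my $binary = $c->pack('PluginInfo', { nStructSize => $c->sizeof('PluginInfo'), nVersion => 1 });
my $info   = $c->unpack('PluginInfo', $binary);
print "version: $info->{nVersion}\n";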
