SIGSEGV MAPERR in Racket when calling Raylib via FFI - scheme

I'm trying to use Raylib (https://www.raylib.com/, https://github.com/raysan5/raylib) from Racket via FFI. Here is the simplest example:
#lang racket

(require ffi/unsafe
         ffi/unsafe/define)

; raylib shared object must be available for Racket
; for example, in Linux it must be in
; ~/.racket/<racket-version>/lib or /usr/lib/racket
(define-ffi-definer define-raylib (ffi-lib "libraylib" #:global? #t))

(define-raylib BeginDrawing (_fun -> _void))
(define-raylib CloseWindow (_fun -> _void))
(define-raylib EndDrawing (_fun -> _void))
(define-raylib InitWindow (_fun _int _int _string -> _void))
(define-raylib SetTargetFPS (_fun _int -> _void))
(define-raylib WindowShouldClose (_fun -> _int))

(void InitWindow 640 480 "Test window")
(void SetTargetFPS 60)

(define (main-loop)
  (BeginDrawing)
  (EndDrawing)
  (if (= (WindowShouldClose) 0)
      (main-loop)
      (CloseWindow)))

(main-loop)
But even this very simple example crashes with the message:
SIGSEGV MAPERR si_code 1 fault on addr (nil)
Aborted (core dumped)
It looks like it crashes when calling the BeginDrawing() function. The code of that function is also very simple:
// Setup canvas (framebuffer) to start drawing
void BeginDrawing(void)
{
    currentTime = GetTime();        // Number of elapsed seconds since InitTimer()
    updateTime = currentTime - previousTime;
    previousTime = currentTime;

    rlLoadIdentity();                               // Reset current matrix (MODELVIEW)
    rlMultMatrixf(MatrixToFloat(downscaleView));    // If downscale required, apply it here
}
Functions with the rl prefix are OpenGL wrappers. Could it be an OpenGL context issue?
I tried calling the same functions from Guile Scheme, and there everything works fine.

It looks like you're not actually calling either InitWindow or SetTargetFPS.
Instead of (void InitWindow 640 480 "Test window"), try (InitWindow 640 480 "Test window").
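For reference, a minimal sketch of the fixed call sites (same bindings as in the question; only these two lines change):
; call the bindings directly instead of passing them to void
(InitWindow 640 480 "Test window")
(SetTargetFPS 60)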

Related

F# Performance Impact of Checked Calcs?

Is there a performance impact from using the Checked module? I've tested it out with sequences of type int and see no noticeable difference. Sometimes the checked version is faster and sometimes unchecked is faster, but generally not by much.
Seq.initInfinite (fun x-> x) |> Seq.item 1000000000;;
Real: 00:00:05.272, CPU: 00:00:05.272, GC gen0: 0, gen1: 0, gen2: 0
val it : int = 1000000000
open Checked
Seq.initInfinite (fun x-> x) |> Seq.item 1000000000;;
Real: 00:00:04.785, CPU: 00:00:04.773, GC gen0: 0, gen1: 0, gen2: 0
val it : int = 1000000000
Basically I'm trying to figure out if there would be any downside to always opening Checked. (I encountered an overflow that wasn't immediately obvious, so I'm now playing the role of the jilted lover who doesn't want another broken heart.) The only non-contrived reason I can come up with for not always using Checked is if there were some performance hit, but I haven't seen one yet.
When you measure performance it's usually not a good idea to include Seq, as Seq adds lots of overhead (at least compared to int operations), so you risk that most of the time is spent in Seq rather than in the code you'd like to test.
I wrote a small test program for (+):
let clock =
    let sw = System.Diagnostics.Stopwatch ()
    sw.Start ()
    fun () ->
        sw.ElapsedMilliseconds

let dbreak () = System.Diagnostics.Debugger.Break ()

let time a =
    let b = clock ()
    let r = a ()
    let n = clock ()
    let d = n - b
    d, r

module Unchecked =
    let run c () =
        let rec loop a i =
            if i < c then
                loop (a + 1) (i + 1)
            else
                a
        loop 0 0

module Checked =
    open Checked
    let run c () =
        let rec loop a i =
            if i < c then
                loop (a + 1) (i + 1)
            else
                a
        loop 0 0

[<EntryPoint>]
let main argv =
    let count = 1000000000

    let testCases =
        [|
            "Unchecked" , Unchecked.run
            "Checked"   , Checked.run
        |]

    for nm, a in testCases do
        printfn "Running %s ..." nm
        let ms, r = time (a count)
        printfn "... it took %d ms, result is %A" ms r

    0
The performance results are this:
Running Unchecked ...
... it took 561 ms, result is 1000000000
Running Checked ...
... it took 1103 ms, result is 1000000000
So it seems some overhead is added by using Checked. The cost of an int add should be less than the loop overhead, so the overhead of Checked is higher than 2x, maybe closer to 4x.
Out of curiosity we can check the IL code using tools like ILSpy:
Unchecked:
IL_0000: nop
IL_0001: ldarg.2
IL_0002: ldarg.0
IL_0003: bge.s IL_0014
IL_0005: ldarg.0
IL_0006: ldarg.1
IL_0007: ldc.i4.1
IL_0008: add
IL_0009: ldarg.2
IL_000a: ldc.i4.1
IL_000b: add
IL_000c: starg.s i
IL_000e: starg.s a
IL_0010: starg.s c
IL_0012: br.s IL_0000
Checked:
IL_0000: nop
IL_0001: ldarg.2
IL_0002: ldarg.0
IL_0003: bge.s IL_0014
IL_0005: ldarg.0
IL_0006: ldarg.1
IL_0007: ldc.i4.1
IL_0008: add.ovf
IL_0009: ldarg.2
IL_000a: ldc.i4.1
IL_000b: add.ovf
IL_000c: starg.s i
IL_000e: starg.s a
IL_0010: starg.s c
IL_0012: br.s IL_0000
The only difference is that Unchecked uses add and Checked uses add.ovf. add.ovf is add with overflow check.
We can dig even deeper by looking at the jitted x86_64 code.
Unchecked:
; if i < c then
00007FF926A611B3 cmp esi,ebx
00007FF926A611B5 jge 00007FF926A611BD
; i + 1
00007FF926A611B7 inc esi
; a + 1
00007FF926A611B9 inc edi
; loop (a + 1) (i + 1)
00007FF926A611BB jmp 00007FF926A611B3
Checked:
; if i < c then
00007FF926A62613 cmp esi,ebx
00007FF926A62615 jge 00007FF926A62623
; a + 1
00007FF926A62617 add edi,1
; Overflow?
00007FF926A6261A jo 00007FF926A6262D
; i + 1
00007FF926A6261C add esi,1
; Overflow?
00007FF926A6261F jo 00007FF926A6262D
; loop (a + 1) (i + 1)
00007FF926A62621 jmp 00007FF926A62613
Now the reason for the Checked overhead is visible. After each operation the jitter inserts the conditional instruction jo which jumps to code that raises OverflowException if the overflow flag is set.
Instruction-timing charts show that the cost of an integer add is less than 1 clock cycle. The reason it's less than 1 clock cycle is that modern CPUs can execute certain instructions in parallel.
They also show that a branch that was correctly predicted by the CPU takes around 1-2 clock cycles.
So, assuming a throughput of at least 2, the cost of the two integer additions in the Unchecked example should be 1 clock cycle.
In the Checked example we do add, jo, add, jo. Most likely the CPU can't parallelize in this case, and the cost should be around 4-6 clock cycles.
Another interesting difference is that the order of the additions changed. With checked additions the order of the operations matters, but with unchecked additions the jitter (and the CPU) has greater flexibility to reorder the operations, possibly improving performance.
So, long story short: for cheap operations like (+) the overhead of Checked should be around 4x-6x compared to Unchecked.
This assumes no overflow exception. The cost of a .NET exception is probably around 100,000x that of an integer addition.
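To see the semantic difference itself, here is a minimal sketch you can paste into fsi (not part of the benchmark above; maxInt is only a local binding introduced here to avoid constant folding):
let maxInt = System.Int32.MaxValue

// default (+): silently wraps around to Int32.MinValue
let wrapped = maxInt + 1

// with Checked open, the same addition raises System.OverflowException at run time
open Checked
let guarded =
    try Some (maxInt + 1)
    with :? System.OverflowException -> None   // evaluates to None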

Modifying structure fields within function?

I am learning how to use structures with MIT-scheme and am trying to "translate" the following function from C to scheme:
static inline void
body_integrate(struct body *body, double dt)
{
    body->vx += dt * body->fx / body->mass;
    body->vy += dt * body->fy / body->mass;
    body->rx += dt * body->vx;
    body->ry += dt * body->vy;
}
With the following definitions
(define-structure body rx ry vx vy fx fy mass)
(define integrate (lambda (body dt) (
  (set-body-vx! body (+ (body-vx body) (* dt (/ (body-fx body) (body-mass body)))))
  (set-body-vy! body (+ (body-vy body) (* dt (/ (body-fy body) (body-mass body)))))
  (set-body-rx! body (+ (body-rx body) (* dt (body-vx body))))
  (set-body-ry! body (+ (body-ry body) (* dt (body-vy body))))
)))
I get:
MIT/GNU Scheme running under GNU/Linux
Type `^C' (control-C) followed by `H' to obtain information about interrupts.
Copyright (C) 2011 Massachusetts Institute of Technology
This is free software; see the source for copying conditions. There is NO warranty; not even for
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
Image saved on Thursday November 5, 2015 at 8:44:48 PM
Release 9.1.1 || Microcode 15.3 || Runtime 15.7 || SF 4.41 || LIAR/x86-64 4.118 || Edwin 3.116
;Loading "body.ss"... done
1 ]=> (define b (make-body 1.0 1.0 2.0 2.0 10.0 10.0 0.1))
;Value: b
1 ]=> (integrate b 5.0)
;The object #!unspecific is not applicable.
;To continue, call RESTART with an option number:
; (RESTART 2) => Specify a procedure to use in its place.
; (RESTART 1) => Return to read-eval-print level 1.
2 error>
I have the feeling I can't do multiple (set-body-X! ...) inside integrate. But then how should I proceed to do this?
The opening parenthesis right at the end of the first line is incorrect: in Scheme, when you surround an expression with () it's treated as a function application, and that's not what you want here.
Anyway, it'd be more idiomatic to create a new structure with the new values instead of modifying a parameter. Remember, Scheme encourages a functional programming style, and you should avoid mutating values; besides, modifying a parameter is considered bad style in most programming languages (it's OK in C, though). Try this:
(define (integrate body dt)
  (make-body (+ (body-rx body) (* dt (body-vx body)))
             (+ (body-ry body) (* dt (body-vy body)))
             (+ (body-vx body) (* dt (/ (body-fx body) (body-mass body))))
             (+ (body-vy body) (* dt (/ (body-fy body) (body-mass body))))
             (body-fx body)
             (body-fy body)
             (body-mass body)))
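A quick usage sketch of this functional version (the sample body matches the one from the question; the values in the comments are just the arithmetic worked out):
(define b (make-body 1.0 1.0 2.0 2.0 10.0 10.0 0.1))
(define b2 (integrate b 5.0)) ; b is left untouched; b2 holds the new state
(body-rx b2)                  ; => 11.  (1.0 + 5.0 * 2.0)
(body-vx b2)                  ; => 502. (2.0 + 5.0 * (10.0 / 0.1))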

OCaml websocket "Invalid UTF8 data"

I am trying to build a loop with Lwt that will push a frame to a Websocket, wait for the response, print it to the screen, wait 60 seconds, and then repeat the process. I have been able to get something that compiles, but I do not have it 100% right yet. The first time through the loop everything works fine; every time after that I receive the error message "Invalid UTF8 data". I must have something wrong in my Lwt loop or in my understanding of the Websocket protocol. My code:
#require "websocket";;
#require "lwt";;
#require "lwt.syntax";;
open Lwt
(* Set up the websocket uri address *)
let ws_addr = Uri.of_string "websocket_address"
(* Set up the websocket connection *)
let ws_conn = Websocket.open_connection ws_addr
(* Set up a frame *)
let ws_frame = Websocket.Frame.of_string "json_string_to_server"
(* push function *)
let push frame () =
ws_conn
>>= fun (_, ws_pushfun) ->
ws_pushfun (Some frame);
Lwt.return ()
(* get stream element and print to screen *)
let get_element () =
let print_reply (x : Websocket.Frame.t) =
let s = Websocket.Frame.content x in
Lwt_io.print s; Lwt_io.flush Lwt_io.stdout;
in
ws_conn
>>= fun(ws_stream, _) ->
Lwt_stream.next ws_stream
>>= print_reply
let rec main () =
Lwt_unix.sleep 60.0
>>= (push ws_frame)
>>= get_element
>>= main
Lwt_main.run(main ())
I'm not sure what in particular is incorrect with your code; it doesn't even compile on my system. It looks like you were experimenting with it in a top-level and created some strange context. I've rewritten your code in a somewhat cleaner way. First of all, I pass the connection to the functions, so that it is clearer what they do. Also, it is not a good idea to wait on the same thread again and again; that is not how things are done in Lwt.
open Lwt
(* Set up the websocket uri address *)
let ws_addr = Uri.of_string "websocket_address"
(* Set up a frame *)
let ws_frame = Websocket.Frame.of_string "json_string_to_server"
(* push function *)
let push (_,push) frame =
push (Some frame);
return_unit
(* get stream element and print to screen *)
let get_element (stream,_) =
let print_reply (x : Websocket.Frame.t) =
let s = Websocket.Frame.content x in
Lwt_io.printlf "%s%!" s in
Lwt_stream.next stream
>>= print_reply
let rec main conn : unit t =
Lwt_unix.sleep 60.0
>>= fun () -> push conn ws_frame
>>= fun () -> get_element conn
>>= fun () -> main conn
let () = Lwt_main.run (
Websocket.open_connection ws_addr >>= main)

Can I customize the format of error outputs of common lisp?

I'm using SBCL. When something goes wrong in my program, SBCL will print a long backtrace. This is annoying sometimes, and I have to scroll back and back to find out what the error message was. Can I customize the error output (e.g., shorten the backtrace)?
See: *backtrace-frame-count*.
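For example, something like the following should limit how many frames get printed; note that the sb-debug package prefix is an assumption on my part, so check where the variable actually lives in your SBCL:
;; assumption: exported from SB-DEBUG in recent SBCLs;
;; (apropos "BACKTRACE-FRAME-COUNT") shows the right package if not
(setf sb-debug:*backtrace-frame-count* 10)   ; print at most 10 frames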
I did some experimenting with sbcl:
(defun crash-big-stack (&optional (c 20))
(if (= c 0)
(error "crash boooooom")
(another-crash (- c 1))))
(defun another-crash (&optional c)
(crash-big-stack c))
I am using SBCL 1.0.57.0, which won't give any stacktrace if not asked (using SLIME, though, will result in a stacktrace). The only scenario where SBCL crashes and prints the complete stacktrace is when you either use (sb-ext:disable-debugger) or provide the toplevel argument sbcl --disable-debugger.
SBCL (without (sb-ext:disable-debugger)):
* (crash-big-stack)
debugger invoked on a SIMPLE-ERROR in thread
#<THREAD "main thread" RUNNING {1002978CA3}>:
crash boooooom
Type HELP for debugger help, or (SB-EXT:QUIT) to exit from SBCL.
restarts (invokable by number or by possibly-abbreviated name):
0: [ABORT] Exit debugger, returning to top level.
(CRASH-BIG-STACK 0)
0]
SBCL (with (sb-ext:disable-debugger)):
(crash-big-stack)
unhandled SIMPLE-ERROR in thread #<SB-THREAD:THREAD "main thread" RUNNING
{1002978CA3}>:
crash boooooom
0: (SB-DEBUG::MAP-BACKTRACE
#<CLOSURE (LAMBDA # :IN BACKTRACE) {100465352B}>
:START
0
:COUNT
128)
1: (BACKTRACE 128 #<SYNONYM-STREAM :SYMBOL SB-SYS:*STDERR* {1000169AE3}>)
2: (SB-DEBUG::DEBUGGER-DISABLED-HOOK
#<SIMPLE-ERROR "crash boooooom" {1004651C23}>
#<unavailable argument>)
3: (SB-DEBUG::RUN-HOOK
*INVOKE-DEBUGGER-HOOK*
#<SIMPLE-ERROR "crash boooooom" {1004651C23}>)
4: (INVOKE-DEBUGGER #<SIMPLE-ERROR "crash boooooom" {1004651C23}>)
5: (ERROR "crash boooooom")
6: (CRASH-BIG-STACK 0)
7: (SB-INT:SIMPLE-EVAL-IN-LEXENV (CRASH-BIG-STACK) #<NULL-LEXENV>)
8: (EVAL (CRASH-BIG-STACK))
9: (INTERACTIVE-EVAL (CRASH-BIG-STACK) :EVAL NIL)
10: (SB-IMPL::REPL-FUN NIL)
11: ((LAMBDA () :IN SB-IMPL::TOPLEVEL-REPL))
12: (SB-IMPL::%WITH-REBOUND-IO-SYNTAX
#<CLOSURE (LAMBDA # :IN SB-IMPL::TOPLEVEL-REPL) {100450355B}>)
13: (SB-IMPL::TOPLEVEL-REPL NIL)
14: (SB-IMPL::TOPLEVEL-INIT)
15: ((FLET #:WITHOUT-INTERRUPTS-BODY-236911 :IN SAVE-LISP-AND-DIE))
16: ((LABELS SB-IMPL::RESTART-LISP :IN SAVE-LISP-AND-DIE))
unhandled condition in --disable-debugger mode, quitting
As far as the SBCL manual goes, there is no way to influence the predefined behavior of the SBCL debugger interface, but you can provide your own by setting sb-ext:*invoke-debugger-hook*:
* (defun crash-big-stack (&optional (c 20))
(if (= c 0)
(error "crash boooooom")
(let ((waste (another-crash (- c 1))))
(+ waste 42))))
CRASH-BIG-STACK
* (defun another-crash (&optional c)
(crash-big-stack c))
ANOTHER-CRASH
* (setf sb-ext:*invoke-debugger-hook* #'(lambda(&rest args) (sb-ext:exit)))
#<FUNCTION (LAMBDA (&REST ARGS)) {10045CEF1B}>
* (crash-big-stack)
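As a variation on that hook (just a sketch; the exact formatting is up to you), the hook can print only the condition and skip the backtrace entirely:
(setf sb-ext:*invoke-debugger-hook*
      (lambda (condition hook)
        (declare (ignore hook))
        ;; print just the condition, no backtrace, then exit
        (format *error-output* "~&Error: ~A~%" condition)
        (sb-ext:exit)))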

Haskell Gloss Not Animating

I have a program which simulates the interaction of many agents in a community. I'm animating the interaction using the Gloss library; the pictures of the agents render correctly, just not the animation. I animate it by generating a simulation, which is a list of lists of interactions, taking the one corresponding to the current second of the animation, and rendering that interaction. The simulation code works fine when its output goes to the terminal. The code:
render :: Int -> History -> Picture -- assume window to be square
render size (History int agents) = Pictures $ map (drawAgent (step `div` 2) colors step) agents
  where
    step      = size * 6 `div` (length agents)
    --agents  = nub $ concat $ map (\(Interaction a1 a2 _ ) -> [a1,a2]) int
    nubNames  = nub $ map (getName . name) agents -- ignore the first two letters of the name
    colors    = Map.fromList $ zipWith (\name color -> (name, color)) nubNames (cycle colorlist)
    colorlist = [red,green,blue,yellow,cyan,magenta,rose,violet,azure,aquamarine,chartreuse,orange]

drawAgent :: Int -> Map.Map String Color -> Int -> Agent -> Picture
drawAgent size colors step agent =
    color aColor (Polygon [(posA,posB),(posA,negB),(negA,negB),(negA,posB)])
  where
    aColor = fromMaybe black $ Map.lookup (getName $ name agent) colors
    a = (fst $ position agent) * step
    b = (snd $ position agent) * step
    posA = fromIntegral $ a + size
    negA = fromIntegral $ a - size
    posB = fromIntegral $ b + size
    negB = fromIntegral $ b - size

simulate :: Int -> [Agent] -> [History]
simulate len agents = trace "simulation" $
    (playRound agents len) :
    simulate len (reproduce (playRound agents len))

main = do
    a <- getStdGen
    G.animate (G.InWindow "My Window" (400, 400) (0, 0)) G.white
              (\time -> (render 400) $ ((simulate 5 (agent a)) !! (floor time)))
  where
    agent a = generate a 9
    sim a   = simulate 40 (agent a)
When I execute this, it will say the simulation is running, but only render the first interaction.
$ ghc -O2 -threaded main.hs && ./main
[6 of 8] Compiling Prisoners ( Prisoners.hs, Prisoners.o )
Linking main ...
simulation
simulation
simulation
It will continue like this until I stop it, rendering the same picture each time. What am I doing wrong?
(SO Comment box won't let me paste code)
Your code doesn't compile for me. What version of Gloss are you using? The API has changed from v1.0 to v1.7.x. What version of GHC? What OS?
Does this simple example work for you?
{- left click to create a circle; escape to quit -}
import Graphics.Gloss
import Graphics.Gloss.Interface.Pure.Game
initial _ = [(0.0,0.0) :: Point]
event (EventMotion (x,y)) world = world --(x,y)
event (EventKey (MouseButton LeftButton) Up mods (x,y)) world = (x,y):world
event _ world = world
step time world = world
draw pts = Pictures $ map f pts
  where f (x,y) = translate x y (circle 10)
m = play (InWindow "Hi" (600,600) (200,200)) white 1 (initial 0) draw event step
