module Digits
type Digit = Unison | Semitone | Tone | MinorThird | MajorThird | PerfectFourth | AugmentedFourth | PerfectFifth | MinorSixth | MajorSixth | MinorSeventh | MajorSeventh type 'd GeneralizedDigit = SmallDigit of 'd | Octave type 't Number = EmptyNumber | CountedNumber of 't * 't Number
let swapOctave: Digit GeneralizedDigit -> Digit GeneralizedDigit = fun x -> match x with SmallDigit Unison -> Octave | Octave -> SmallDigit Unison | g -> g
let limitLength: 'r Number -> Digit = fun a -> match a with EmptyNumber -> Unison | CountedNumber(_,EmptyNumber) -> Semitone | CountedNumber(_,CountedNumber(_,EmptyNumber)) -> Tone | CountedNumber(_,CountedNumber(_,CountedNumber(_,EmptyNumber))) -> MinorThird | _ -> MajorSeventh
In F# I can put multiple type definitions on the same line without semicolons and without any problems, but when I remove the newline between the let statements I get error FS0010. I know that in Haskell statements can be separated by a single semicolon, but in F# neither a single semicolon nor a double semicolon works. How do I put multiple let statements on the same line?
You can do this with the let .. in syntax like so:
let f () = let a = 1 in let b = 2 in a + b
f () // gives 3 as a result
But I would really recommend against doing multiple single-line definitions like this. It's hard for people to read.
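For comparison, here is the same function written in the conventional multi-line form, which is what I would use instead:
let f () =
    let a = 1
    let b = 2
    a + b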
If you want multiple let bindings to bind values to variables, then you can also use the "tuple syntax".
let x,y,z = 1, "Hi", 3.0
As explained by Phillip, the let .. in .. construct allows you to define a local variable as part of a one-line expression.
However, your example seems to be trying to define multiple top-level definitions in a module, which is something you cannot achieve with let .. in ...
As far as I can tell, you can actually do this by separating the definitions with two semicolons, i.e. ;;. If I save the following as test.fs and load it using #load, I get no errors:
module Digits
type Digit = Unison | Semitone | Tone | MinorThird | MajorThird | PerfectFourth | AugmentedFourth | PerfectFifth | MinorSixth | MajorSixth | MinorSeventh | MajorSeventh type 'd GeneralizedDigit = SmallDigit of 'd | Octave type 't Number = EmptyNumber | CountedNumber of 't * 't Number
let swapOctave: Digit GeneralizedDigit -> Digit GeneralizedDigit = fun x -> match x with SmallDigit Unison -> Octave | Octave -> SmallDigit Unison | g -> g;; let limitLength: 'r Number -> Digit = fun a -> match a with EmptyNumber -> Unison | CountedNumber(_,EmptyNumber) -> Semitone | CountedNumber(_,CountedNumber(_,EmptyNumber)) -> Tone | CountedNumber(_,CountedNumber(_,CountedNumber(_,EmptyNumber))) -> MinorThird | _ -> MajorSeventh
I tested this in F# 5.0. It may be the case that this has changed in F# 6, which removed some deprecated features like #light "off". The removal of ;; is not discussed in the post, but it may have been a related change. If that is the case, you may report it as a regression - but it is likely that support for ;; should be removed as well!
As mentioned by Phillip, I do not see any reason for actually trying to do this.
I want to reference a specific part of a shape. For example: From Best Apple to Basket 1, instead of apple_node to Basket 1.
The below image will better explain what I wish to achieve.
https://imgur.com/a/B0TEoWO
This is my graphviz code and what I have achieved so far:
digraph fruits {
node [shape=record]
apple_node [label="Apple | {{Best Apple} | {Worst Apple}}"];
banana_node [label="Banana | {{Best Banana} | {Worst Banana}}"];
basket1_node [label="basket1|{Colour 10 | Seeds 10}"];
basket2_node [label="basket2|{Colour 10 | Seeds 10}"];
apple_node -> basket1_node;
banana_node -> basket2_node;
}
Since you are using record-based nodes, you can add field ids to the label and use them as port names that indicate where to attach an edge (see also the official documentation about record-based nodes).
Example:
examplenode [shape=record; label="<fieldid1> one|<fieldid2> two"];
examplenode:fieldid1 -> othernode;
Your apple-banana example:
digraph fruits {
node [shape=record]
apple_node [label="Apple | {{<bestapple>Best Apple} | {<worstapple>Worst Apple}}"];
banana_node [label="Banana | {{Best Banana} | {<worstbanana>Worst Banana}}"];
basket1_node [label="basket1|{Colour 10 | Seeds 10}"];
basket2_node [label="basket2|{Colour 10 | Seeds 10}"];
apple_node:bestapple -> basket1_node;
apple_node:worstapple -> basket1_node;
banana_node:worstbanana -> basket2_node;
}
I wrote this in HelperFunctions.fs:
namespace Tutorial1.HelperFunctions
module Factorials =
    let rec fact n =
        match n with
        | 0 -> 1
        | 1 -> 1
        | _ -> n * fact (n - 1)
And then this in Tutorial.fsx:
#load "HelperFunctions.fs"
open Tutorial1.HelperFunctions
module start =
    let x = Factorials.fact 5
    printfn "%d" x
The code compiles and returns 120 as expected, BUT VS throws an FS0039 error on Factorials and Tutorial1: namespace, type or module not defined... I tried many other combinations of open, module, etc., but then the code does not even compile. What is the problem I am not seeing here?
Okay, apparently the order of the files in the Visual Studio project matters, even if you include the file with #load. I had to move the files upwards and it worked.
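If you prefer not to drag files around in Solution Explorer, the same ordering can be written directly into the project file. A hypothetical sketch of the relevant .fsproj fragment (file names taken from the question; scripts such as Tutorial.fsx are normally kept as non-compiled items):
<ItemGroup>
  <!-- HelperFunctions.fs must be listed before anything that depends on it -->
  <Compile Include="HelperFunctions.fs" />
  <None Include="Tutorial.fsx" />
</ItemGroup>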
I think everything is in the title but I am looking specifically for:
What is the "standard" unit test framework in OCaml?
How do I integrate executing tests in the build?
How to automatically execute tests upon every file change?
As a bonus, I would be interested in test coverage tooling...
It seems that the package ounit enjoys quite a large popularity; there are several other packages like kaputt or broken – I am the author of the latter.
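To give an idea of the first point, a minimal test with OUnit looks roughly like the following (a sketch assuming the ounit2 package; the rest of this answer uses broken instead, but the structure is similar):
open OUnit2

(* A trivial test case: compare an expected and an actual value. *)
let test_addition _ =
  assert_equal ~printer:string_of_int 4 (2 + 2)

(* Group test cases into a suite and run it. *)
let suite =
  "example" >::: [ "addition" >:: test_addition ]

let () = run_test_tt_main suite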
I guess you are interested in the specific part of TDD where tests can be automated; here is how I do it on my own projects. You can find a few examples on GitHub, such as Lemonade or Rashell, which both have a test suite found in their respective testsuite folders.
Usually I work according to the following workflow:
I start by working simultaneously on the tests and the interface (.mli) files. This way I write a minimal program, and I not only write test cases for the functions I want to implement but also get the opportunity to experiment with the interfaces, to be sure that they are easy to use.
For instance, for the interface to the find(1) command found in Rashell_Posix I started by writing test cases:
open Broken
open Rashell_Broken
open Rashell_Posix
open Lwt.Infix
let spec base = [
(true, 0o700, [ base; "a"]);
(true, 0o750, [ base; "a"; "b"]);
(false, 0o600, [ base; "a"; "b"; "x"]);
(false, 0o640, [ base; "a"; "y" ]);
(true, 0o700, [ base; "c"]);
(false, 0o200, [ base; "c"; "z"]);
]
let find_fixture =
let filename = ref "" in
let cwd = Unix.getcwd () in
let changeto base =
filename := base;
Unix.chdir base;
Lwt.return base
in
let populate base =
Toolbox.populate (spec base)
in
make_fixture
(fun () ->
Lwt_main.run
(Rashell_Mktemp.mktemp ~directory:true ()
>>= changeto
>>= populate))
(fun () ->
Lwt_main.run
(Unix.chdir cwd;
rm ~force:true ~recursive:true [ !filename ]
|> Lwt_stream.junk_while (fun _ -> true)))
let assert_find id ?expected_failure ?workdir predicate lst =
assert_equal id ?expected_failure
~printer:(fun fft lst -> List.iter (fun x -> Format.fprintf fft " %S" x) lst)
(fun () -> Lwt_main.run(
find predicate [ "." ]
|> Lwt_stream.to_list
|> Lwt.map (List.filter ((<>) "."))
|> Lwt.map (List.sort Pervasives.compare)))
()
lst
The spec and find_fixture functions are used to create a file hierarchy with the given names and permissions, to exercise the find function. Then the assert_find function prepares a test-case comparing the results of a call to find(1) with the expected results:
let find_suite =
make_suite ~fixture:find_fixture "find" "Test suite for find(1)"
|& assert_find "regular" (Has_kind(S_REG)) [
"./a/b/x";
"./a/y";
"./c/z";
]
|& assert_find "directory" (Has_kind(S_DIR)) [
"./a";
"./a/b";
"./c"
]
|& assert_find "group_can_read" (Has_at_least_permission(0o040)) [
"./a/b";
"./a/y"
]
|& assert_find "exact_permission" (Has_exact_permission(0o640)) [
"./a/y";
]
Simultaneously, I was working on the interface file:
(** The type of file types. *)
type file_kind = Unix.file_kind =
| S_REG
| S_DIR
| S_CHR
| S_BLK
| S_LNK
| S_FIFO
| S_SOCK
(** File permissions. *)
type file_perm = Unix.file_perm
(** File status *)
type stats = Unix.stats = {
st_dev: int;
st_ino: int;
st_kind: file_kind;
st_perm: file_perm;
st_nlink: int;
st_uid: int;
st_gid: int;
st_rdev: int;
st_size: int;
st_atime: float;
st_mtime: float;
st_ctime: float;
}
type predicate =
| Prune
| Has_kind of file_kind
| Has_suffix of string
| Is_owned_by_user of int
| Is_owned_by_group of int
| Is_newer_than of string
| Has_exact_permission of int
| Has_at_least_permission of int
| Name of string (* Globbing pattern on basename *)
| And of predicate list
| Or of predicate list
| Not of predicate
val find :
?workdir:string ->
?env:string array ->
?follow:bool ->
?depthfirst:bool ->
?onefilesystem:bool ->
predicate -> string list -> string Lwt_stream.t
(** [find predicate pathlst] wrapper of the
{{:http://pubs.opengroup.org/onlinepubs/9699919799/utilities/find.html} find(1)}
command. *)
Once I was pleased with my test-cases and interfaces, I could try to compile them, even without an implementation. This is possible with bsdowl by just giving an interface file instead of an implementation file in the Makefile.
Here compilation probably uncovered a few type errors in my tests that I could fix.
Once the tests compiled against the interface, I could implement the function, starting with an alibi (placeholder) function:
let find _ =
failwith "Rashell_Posix.find: Not implemented"
With this implementation I was able to compile my library and my test suite. Of course, at this point, the tests just fail.
At that point, I just needed to implement the Rashell_Posix.find function and iterate the tests until they passed.
This is how I do test-driven development in OCaml when I use automated tests. Some people see interacting with the REPL as a form of test-driven development; this is a technique that I also like to use, and it is rather straightforward to set up. The only setup step needed to use this latter form of test-driven development in Rashell was to write an .ocamlinit file for the toplevel that loads all the required libraries. This file looks like:
#use "topfind";;
#require "broken";;
#require "lemonade";;
#require "lwt.unix";;
#require "atdgen";;
#directory "/Users/michael/Workshop/rashell/src";;
#directory "/Users/michael/obj/Workshop/rashell/src";;
The two #directory directives correspond to the directories for sources and objects.
(Disclaimer: if you look carefully at the history, you will find that I took some liberties with the chronology, but there are other projects where I proceed exactly this way – I just cannot remember precisely which ones.)
I am trying to use NLTK package to capture the following chunk in a sentence:
verb + smth + noun
or it may be
verb + smth + noun + and + noun
I have truthfully spent an entire day messing with regexes, but still nothing proper is produced.
I was looking at this tutorial, which wasn't much help.
When you have an idea of what those somethings that come in between might be, there is a relatively easy method using NLTK's CFG. This is most certainly not the most efficient way. For a comprehensive analysis, consult chapter 8 of the NLTK book.
We have two patterns as you mentioned:
<verb> ... <noun>
<verb> ... <noun> "and" <noun>
We should assemble a list of VPs and NPs and also the range of possible words that could appear in between. As a silly little example:
import nltk

grammar = nltk.CFG.fromstring("""
% start S
S -> VP SOMETHING NP
VP -> V
SOMETHING -> WORDS SOMETHING
SOMETHING ->
NP -> N 'and' N
NP -> N
V -> 'told' | 'scolded' | 'loved' | 'respected' | 'nominated' | 'rescued' | 'included'
N -> 'this' | 'us' | 'them' | 'you' | 'I' | 'me' | 'him'|'her'
WORDS -> 'among' | 'others' | 'not' | 'all' | 'of'| 'uhm' | '...' | 'let'| 'finish' | 'certainly' | 'maybe' | 'even' | 'me'
""")
Now suppose this is the list of the sentences we want to use our filter against:
sentences = ['scolded me and you', 'included certainly uhm maybe even her and I', 'loved me and maybe many others','nominated others not even him', 'told certainly among others uhm let me finish ... us and them', 'rescued all of us','rescued me and somebody else']
As you can see, the third and the last phrases don't pass the filter. We can check whether the rest match the pattern:
def sentence_filter(sent, grammar):
    rd_parser = nltk.RecursiveDescentParser(grammar)
    try:
        for p in rd_parser.parse(sent):
            print("SUCCESS!")
    except:
        print("Doesn't match the filter...")

for s in sentences:
    s = s.split()
    sentence_filter(s, grammar)
When we run this, we get this result:
>>>
SUCCESS!
SUCCESS!
Doesn't match the filter...
SUCCESS!
SUCCESS!
SUCCESS!
Doesn't match the filter...
>>>
I tried to include the summary of an lm object in an Rmd file, using code like the following but it didn't work. Could you help me do that?
```{r summary_lm, results='asis', echo=FALSE, comment=NA}
library(apsrtable)
my_model <- lm(y ~ x, data = data.frame(y = rnorm(10), x = 1:10))
res <- apsrtable(my_model) # my_model is a linear regression model (lm)
cat("$$latex \n",res,"\n$$ \n")
```
The $$ syntax only applies to math expressions, and you were trying to put a table in it, which will not work. The apsrtable package, as far as I understand, produces LaTeX only, but LaTeX and Markdown are very different -- there is little hope you can redo LaTeX entirely in Markdown. I think people invented the $$ syntax for Markdown because it is well supported by MathJax; also note there are many variants/flavors based on the original Markdown.
At the moment you may consider:
use the xtable or ascii or R2HTML package to generate HTML tables (see the small xtable sketch below)
request the package author of apsrtable to support HTML tables
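For the first option, a minimal sketch of a chunk for an HTML output document (assuming the xtable package is installed; xtable() has a method for lm objects):
```{r xtable_html, results='asis', echo=FALSE}
# Hypothetical example: render the coefficient table of my_model as an HTML table
library(xtable)
print(xtable(my_model), type = "html")
```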
What about including my_model in Markdown format with `pander`:
> library(pander)
> pander(my_model)
--------------------------------------------------------------
Estimate Std. Error t value Pr(>|t|)
----------------- ---------- ------------ --------- ----------
**x** 0.1174 0.1573 0.7465 0.4767
**(Intercept)** -0.2889 0.9759 -0.296 0.7748
--------------------------------------------------------------
Table: Fitting linear model: y ~ x
Or in PHP MarkdownExtra/rmarkdown format:
> panderOptions('table.style', 'rmarkdown')
> pander(my_model)
| | Estimate | Std. Error | t value | Pr(>|t|) |
|:-----------------:|:----------:|:------------:|:---------:|:----------:|
| **x** | 0.1174 | 0.1573 | 0.7465 | 0.4767 |
| **(Intercept)** | -0.2889 | 0.9759 | -0.296 | 0.7748 |
Table: Fitting linear model: y ~ x
Cross-posting my answer to "Table of multiple lm() models using apsrtable in Rmarkdown":
It can be done in a pdf_document with apsrtable and also stargazer, which additionally supports HTML.
---
title: "stargazer"
author: "hplieninger"
date: "3 August 2018"
output: pdf_document
header-includes:
- \usepackage{dcolumn}
---
```{r}
m1 <- lm(Fertility ~ Education , data = swiss)
m2 <- lm(Fertility ~ Education + Agriculture, data = swiss)
m3 <- lm(Fertility ~ . , data = swiss)
```
```{r, results='asis'}
apsrtable::apsrtable(m1, m2, m3, Sweave = TRUE)
```
```{r, results='asis'}
# If output: pdf_document
stargazer::stargazer(m1, m2, m3)
# If output: html_document
# stargazer::stargazer(m1, m2, m3, type = "html")
```