I would like to define a grammar in Haskell that matches a string in the format "XY12XY" (some alpha characters followed by some numerics), e.g. variable names in programming languages.
'customer123' is a valid variable name, but '123customer' is not a valid variable name.
I am at a loss as to how to define the grammar and write a validator function that would validate whether a given string is a valid variable name. I have been trying to understand and adapt the parser example at https://wiki.haskell.org/GADT, but I just can't get my head around how to tweak it to make it work for my needs.
It would be great if any kind fellow Haskell gurus could help me define this:
validate :: ValidFormat -> String -> Bool
validate f [] = False
validate f s = ...
I would like to define the ValidFormat grammar as:
varNameFormat = Concat Alpha $ Concat Alpha Numeric
I'd start with a simple parser and see if that satisfies your needs, unless you can explain why this is not enough for your use case. Parsers are pretty straightforward. I'll give a very simple (and maybe incomplete) example with attoparsec:
import Control.Applicative
import Data.Attoparsec.ByteString.Char8
import qualified Data.ByteString.Char8 as B

validateVar :: B.ByteString -> Bool
validateVar bstr = case parseOnly variableP bstr of
  Right _ -> True
  Left _  -> False

variableP :: Parser String
variableP =
  (++)
    <$> many1 letter_ascii            -- must start with one or more letters
    <*> many (digit <|> letter_ascii) -- then can have any combination of letters/digits
    <*  endOfInput                    -- make sure we don't ignore invalid trailing chars
variableP combines parsers via <*> and requires you to handle both results: the one from many1 letter_ascii and the one from many (digit <|> letter_ascii). In this case we just concatenate the two results via (++); check the types of many1, many, letter_ascii and digit to see how they fit together. The <* says "parse this, but discard the result of the right-hand parser" (otherwise you'd have to handle three results).
That means if you run the parser on "abc123" you'll get back "abc123". If you parse "1abc" the parser will fail.
Check the type of parseOnly:
parseOnly :: Parser a -> ByteString -> Either String a
We pass it our parser and the bytestring it should parse. If the parser fails we'll get Left <something went wrong>. If the parser succeeds, we'll get Right <our string>. The cool thing is... instead of just giving a string on success, we could do pretty much anything with the results in variableP, as in: use something different than (++), convert the types and whatnot (mind that the Parser type might also have to change then).
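For instance, here is a hypothetical variant (the names VarName and variableP' are just illustrative, not part of attoparsec) that wraps the result in its own type instead of returning the raw String:

newtype VarName = VarName String
  deriving Show

variableP' :: Parser VarName
variableP' =
  (\x xs -> VarName (x ++ xs))
    <$> many1 letter_ascii
    <*> many (digit <|> letter_ascii)
    <*  endOfInput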
Since we only care if the parser succeeded in validateVar, we can just ignore the result in either case.
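For example, in GHCi (assuming the OverloadedStrings extension is on so the string literals are treated as ByteStrings):

*Main> :set -XOverloadedStrings
*Main> validateVar "abc123"
True
*Main> validateVar "1abc"
False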
So instead of defining GADTs for your grammar, you just define Parsers.
You might also find this link useful for a tutorial: http://www.seas.upenn.edu/~cis194/fall14/spring13/lectures.html (weeks 10 and 11, including the assignments, where you basically write your own little parser library)
I've taken this from the examples for regex-applicative:
import Text.Regex.Applicative
import Control.Applicative -- for many (may already be re-exported by Text.Regex.Applicative)
import Data.Char
import Data.Maybe
varNameFormat :: RE Char String
varNameFormat = (:) <$> psym isAlpha <*> many (psym isAlphaNum)
validate :: RE Char String -> String -> Bool
validate re str = isJust $ str =~ re
You will then have:
*Main> validate varNameFormat "a123"
True
*Main> validate varNameFormat "1a23"
False
I have a set of lambda expressions which I'm passing to other lambdas. All lambdas rely only on their arguments; they don't call any outside functions. Of course, sometimes it gets quite confusing and I'll pass a function with the incorrect number of arguments to another, causing a GHCi exception.
I want to make a debug function which will take an arbitrary lambda expression (with an unknown number of arguments) and return a string based on the structure and function of the lambda.
For example, say I have the following lambda expressions:
i = \x -> x
k = \x y -> x
s = \x y z -> x z (y z)
debug (s k) should return "\a b -> b"
debug (s s k) should return "\a b -> a b a" (if I simplified that correctly)
debug s should return "\a b c -> a c (b c)"
What would be a good way of doing this?
I think the way to do this would be to define a small lambda calculus DSL in Haskell (or use an existing implementation). This way, instead of using the native Haskell formulation, you would write something like
k = Lam "x" (Lam "y" (App (Var "x") (Var "y")))
s = Lam "x" (Lam "y" (Lam "z" (App (App (Var "x") (Var "z")
(App (Var "y") (Var "z"))))
and similarly for i. You would then write or use an evaluation function so that you could write
debug e = eval e
debug (App s k)
which would give you the final form in your own syntax. Additionally you would need a sort of interpreter to convert your DSL syntax to Haskell, so that you can actually use the functions in your code.
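To make that concrete, here is a minimal sketch of such a DSL (the type and pretty-printer are my own illustration; a real eval would additionally need capture-avoiding substitution):

-- A tiny lambda-calculus AST matching the Var/Lam/App constructors used above.
data Expr
  = Var String
  | Lam String Expr
  | App Expr Expr
  deriving Show

-- Render an expression back into lambda syntax,
-- e.g. pretty s == "\x -> \y -> \z -> x z (y z)".
pretty :: Expr -> String
pretty (Var x)   = x
pretty (Lam x b) = "\\" ++ x ++ " -> " ++ pretty b
pretty (App f a) = prettyFun f ++ " " ++ prettyArg a
  where
    prettyFun e@(Lam _ _) = "(" ++ pretty e ++ ")"
    prettyFun e           = pretty e
    prettyArg e@(Var _)   = pretty e
    prettyArg e           = "(" ++ pretty e ++ ")"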
Implementing this does seem like quite a lot of (tricky) work, and it's probably not exactly what you had in mind (especially if you need the evaluation for typed syntax), but I'm sure it would be a great learning experience. A good reference would be chapter 6 of "Write You a Haskell". Using an existing implementation would be a lot easier (but less fun :)).
If this is merely for debugging purposes you might benefit from looking at the core syntax GHC compiles to. See chapter 25 of Real World Haskell; the GHC flag to use is -ddump-simpl. But this would mean looking at generated code rather than generating a representation inside your program. I'm also not sure to what extent you would be able to identify specific functions in the Core code easily (I have no experience with this, so YMMV).
It would of course be pretty cool if using show on functions gave the kind of output you describe, but there are probably very good reasons why functions are not an instance of Show (I wouldn't be able to tell you what they are).
You can actually achieve that by utilising pretty-printing from Template Haskell, which comes with GHC out of the box.
First, the formatting function should be defined in a separate module (that's a TH restriction):
module LambdaPrint where
import Control.Monad
import Language.Haskell.TH.Ppr
import Language.Haskell.TH.Syntax
showDef :: Name -> Q Exp
showDef = liftM (LitE . StringL . pprint) . reify
Then use it:
{-# LANGUAGE TemplateHaskell #-}
import LambdaPrint
y :: a -> a
y = \a -> a
$(return []) --workaround for GHC 7.8+
test = $(showDef 'y)
The result is more or less readable, not counting fully qualified names:
*Main> test
"Main.y :: forall a_0 . a_0 -> a_0"
A few words about what's going on. showDef is a macro function which reifies the definition of some name from the environment and pretty-prints it in a string literal expression. To use it, you need to quote the name of the lambda (using ') and splice the result (which is a quoted string expression) into some expression (using $(...)).
I have code that does some parsing of files according to specified rules. The whole parsing takes place in a monad that is a stack of ReaderT/STTrans/ErrorT.
type RunningRule s a = ReaderT (STRef s LocalVarMap) (STT s (ErrorT String Identity)) a
Because it would be handy to run some IO in the code (e.g. to query external databases), I thought I would generalize the parsing so that it could run with either Identity or IO as the base monad, depending on the functionality I need. This changed the signature to:
type RunningRule s m a = ReaderT (STRef s LocalVarMap) (STT s (ErrorT String m)) a
After changing the appropriate type signatures (and using some extensions to get around the types) I ran it again in the Identity monad and it was ~50% slower. Although essentially nothing changed, it is much slower. Is this normal behaviour? Is there some simple way to make this faster (e.g. combining the ErrorT and ReaderT (and possibly STT) stack into one monad transformer)?
To add a sample of the code: it is a thing that, based on parsed input (given in a C-like language), constructs a parser. The code looks like this:
compileRule :: forall m. (Monad m, Functor m)
    => [Data -> m (Either String Data)] -- For tying the knot
    -> ParsedRule                       -- This is the rule we are compiling
    -> Data -> m (Either String Data)   -- The real parsing
compileRule compiled (ParsedRule name parsedlines) =
    \input -> runRunningRule input $ do
        sequence_ compiledlines
  where
    compiledlines = map compile parsedlines

    compile (Expression expr) = compileEx expr >> return ()
    compile (Assignment var expr) =
        ...

    compileEx (Function "check" expr) = do
        value <- code
        case value of
            True  -> return ()
            False -> fail "Check failed"
      where
        code = compileEx expr
This is not so unusual, no. You should try using SPECIALIZE pragmas to specialize to Identity, and maybe IO too. Use -ddump-simpl and watch for warnings about rule left hand sides being too complicated. When specialization doesn't happen as it should, GHC ends up passing around typeclass dictionaries at runtime. This is inherently somewhat inefficient, but more importantly it prevents GHC from inlining class methods to enable further simplification.
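For example, next to the definition of compileRule you could add a pragma along these lines (a sketch; the specialised type must be your actual signature with Identity substituted for m, and similarly for IO if you use that path):

{-# SPECIALIZE compileRule
        :: [Data -> Identity (Either String Data)]
        -> ParsedRule
        -> Data -> Identity (Either String Data) #-}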
Say I have some input word like "føøbær" and I want a hash table of letter frequencies s.t. f→1, ø→2 – how do I do this in OCaml?
The http://pleac.sourceforge.net/pleac_ocaml/strings.html examples only work on ASCII and https://ocaml-batteries-team.github.io/batteries-included/hdoc2/BatUTF8.html doesn't say how to actually create a BatUTF8.t from a string.
The BatUTF8 module you refer to defines its type t as string, thus there is no conversion needed: a BatUTF8.t is a string. Apparently, the module encourages you to validate your string before using other functions. I guess that a proper way of operating would be something like:
let s = "føøbær"
let () = BatUTF8.validate s
let () = BatUTF8.iter add_to_table s
Looking at the code of Batteries, I found this of_string_unsafe, so perhaps this is the way:
open Batteries
BatUTF8.iter (fun c -> …Hashtbl.add table c …) (BatUTF8.of_string_unsafe "føøbær")
although, since it's termed "unsafe" (the docs don't say why), maybe this is equivalent:
BatUTF8.iter (fun c -> …Hashtbl.add table c …) "føøbær"
At least it works for the example word here.
Camomile also seems to iterate through it correctly:
module C = CamomileLibraryDefault.Camomile
C.iter (fun c -> …Hashtbl.add table c …) "føøbær"
I don't know of the tradeoffs between Camomile and BatUTF8 here, though they end up storing different types (BatUChar vs C.Pervasives.UChar).
I have plunged into an attempt to translate Haskell.
I need to walk the HsModule structure (returned by parseModule source)
in order to translate every HsIdent String, where the String is an English identifier,
into an HsIdent String where the String is an identifier in some other natural language (e.g. Italian, French, ...).
I wonder if there exists some direct strategy, perhaps in TH, to walk an HsModule structure (i.e. to apply a function to every HsIdent String) without writing explicit unfold functions for the involved substructures?
I hope I was plain enough in my request; many thanks for your precious aid.
Best regards.
I found a solution in the Data.Generics packages.
HsModule is an instance of Data and Typeable, so it can be processed with a traversal function from a generics package. I chose SYB because it is quite well documented.
My solution is:
module Main where
import Data.Generics
import Language.Haskell.Syntax
import Language.Haskell.Parser
import Language.Haskell.Pretty
import Control.Monad
translate :: ParseResult HsModule -> Maybe String
translate r = case r of
  ParseOk a       -> Just (show $ prettyPrint $ translateHsIdent "_italian" a)
  ParseFailed _ _ -> Nothing

translateHsIdent :: Data a => String -> a -> a
translateHsIdent k = everywhere (mkT (addStrangerIdentifier k))
  where
    addStrangerIdentifier :: String -> HsName -> HsName
    addStrangerIdentifier s (HsIdent i) = HsIdent (i ++ s)
    addStrangerIdentifier _ n           = n  -- leave other names (e.g. symbols) untouched

main = maybe (putStrLn "Parse Error") putStrLn result
  where
    result :: Maybe String
    result = translate $ parseModule "main = putStrLn \"Just a Try\""
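For reference, the two SYB combinators doing the work have roughly these types; everywhere applies a transformation bottom-up to every node of the structure, and mkT lifts a monomorphic function (here HsName -> HsName) into a generic one that leaves every other type untouched:

everywhere :: (forall a. Data a => a -> a) -> forall a. Data a => a -> a
mkT :: (Typeable a, Typeable b) => (b -> b) -> a -> a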
I hope it can be useful for someone else.
I'm trying to use HXT to read in some big XML data files (hundreds of MB).
My code has a space leak somewhere, but I can't seem to find it. I do have a little bit of a clue as to what is happening, thanks to my very limited knowledge of the GHC profiling toolchain.
Basically, the document is parsed, but not evaluated.
Here's some code:
{-# LANGUAGE Arrows, NoMonomorphismRestriction #-}
import Text.XML.HXT.Core
import System.Environment (getArgs)
import Control.Monad (liftM)
main = do file <- (liftM head getArgs) >>= parseTuba
          case file of
            (Left m)  -> print "Failed."
            (Right _) -> print "Success."
data Sentence t = Sentence [Node t] deriving Show
data Node t = Word { wSurface :: !t } deriving Show
parseTuba :: FilePath -> IO (Either String ([Sentence String]))
parseTuba f = do r <- runX (readDocument [] f >>> process)
                 case r of
                   []   -> return $ Left "No parse result."
                   [pr] -> return $ Right pr
                   _    -> return $ Left "Ambiguous parse result!"
process :: (ArrowXml a) => a XmlTree ([Sentence String])
process = getChildren >>> listA (tag "sentence" >>> listA word >>> arr (\ns -> Sentence ns))
word :: (ArrowXml a) => a XmlTree (Node String)
word = tag "word" >>> getAttrValue "form" >>> arr (\s -> Word s)
-- | Gets the tag with the given name below the node.
tag :: (ArrowXml a) => String -> a XmlTree XmlTree
tag s = getChildren >>> isElem >>> hasName s
I'm trying to read a corpus file, and the structure is obviously something like <corpus><sentence><word form="Hello"/><word form="world"/></sentence></corpus>.
Even on the very small development corpus, the program takes ~15 secs to read it in, of which around 20% are GC time (that's way too much.)
In particular, a lot of data is spending way too much time in the DRAG state. This is the profile (profiling graph monitoring DRAG culprits omitted). You can see that decodeDocument gets called a lot, and its data is then stalled until the very end of the execution.
Now, I think this should be easily fixed by folding all this decodeDocument stuff into my data structures (Sentence and Word) so the runtime can forget about these thunks. The way it currently happens, though, is that the folding takes place at the very end, when I force evaluation by deconstructing the Either in the IO monad, where it could easily happen online. I see no reason for this, and my attempts to strictify the program have so far been in vain. I hope somebody can help me :-)
I just can't even figure out too many places to put seqs and $!s in…
One possible thing to try: the default hxt parser is strict, but there does exist a lazy parser based on tagsoup: http://hackage.haskell.org/package/hxt-tagsoup
I understand that expat can do lazy processing as well: http://hackage.haskell.org/package/hxt-expat
You may want to see if switching parsing backends, by itself, solves your issue.
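If I remember the hxt-tagsoup API correctly, switching the backend is roughly a one-line change to readDocument's configuration (a sketch; please double-check the exact option name against the package docs):

import Text.XML.HXT.Core
import Text.XML.HXT.TagSoup (withTagSoup)  -- from the hxt-tagsoup package

-- Same pipeline as before, but asking readDocument to use the tagsoup backend.
parseTuba :: FilePath -> IO (Either String [Sentence String])
parseTuba f = do r <- runX (readDocument [withTagSoup] f >>> process)
                 case r of
                   []   -> return $ Left "No parse result."
                   [pr] -> return $ Right pr
                   _    -> return $ Left "Ambiguous parse result!"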