Agda input is an input method for Emacs that allows one to type \bV and have the Unicode character 𝕍 inserted instead. This makes writing mathematical symbols more comfortable than the usual Ctrl+Shift+U followed by the code point, if only because it is much easier to memorize and find \bV than 1D54D. The mode comes with suggested autocompletions as well.
Can this input method be replicated system-wide instead of only inside Emacs? Is there any existing software that does this?
According to the Bash Reference Manual, the Bash scripting language consists of four distinct classes of syntactic elements:
built-in commands (alias, cd)
reserved words (if, function)
parameters and variables ($, IFS)
Readline functions (abort, end-of-file - activated with key bindings such as Ctrl-d)
Apart from reading the manual, I became curious whether there is a programmatic way to list out or generate all such keywords, at least from one of the above categories. I think this could be useful in some contexts. Sometimes I wish I could see all the options available to me for what I can write at any given moment, and having that information as data, rather than as a formatted manual, is convenient and focused, and can be edited in case you want to strike out commands you know well, or ones that are too obscure for now.
My understanding is that Bash reads input from stdin and passes it to the running shell process. Bash itself is distributed in compiled, production-ready form (so it runs fast), so unlike a Python REPL you don't have access to the Bash source code from within Bash, and writing a program that searches through source files for the various defined commands is not a very direct route. What I mean is that if you wanted to list all functions in Python, you could use the dir() function, which programmatically looks up names in a namespace. But I don't think Bash can do that. I don't think it has a special syntax in its source files that makes it easy to find and identify all the keywords. Instead, they are found when you simply enter them - ls will "find" the program ls because $PATH leads to the command - but there is no special way to discover them up front.
Or am I wrong? Technically, you could run a "brute force" search by generating every combination of symbols of every length and recording when you did not get "command not found" as a response.
Is there any other clever programmatic way to do this?
I mean I want to see a list of every symbol or string that the bash compiler
Bash is not a compiler. It and every other shell I know are interpreters of various languages.
recognises and knows what to do with, including commands like “ls” or just a symbol like “*”. I also want to see the inputs and outputs for each symbol, i.e., some commands are executed in the shell prompt by themselves, but what data type do they return?
All commands executed by the shell have an exit status, which is a number between 0 and 255. This is as close to a "return type" as you get. Many of them also produce idiosyncratic output to one or two streams (a standard output stream and a standard error stream) under some conditions, and many have other effects on the shell environment or operating environment.
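For example, on a typical system:
grep -q root /etc/passwd; echo $?    # prints 0 because grep found a match; a non-matching pattern would print 1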
And some require a certain data type to standard input.
I can't think of a built-in utility whose expected input is well characterized as having a particular data type. That's not really a stream-oriented concept.
I want to do this just as a rigorous way to study the language.
If you want to rigorously study the language, then you should study its manual, where everything you describe has already been compiled. You might also want to study the POSIX shell command language manual for a slightly different perspective, which is more thorough in some areas, though what it documents differs in a few details from Bash's default behavior.
If you want to compile your own summary of Bash syntax and behavior, then those are the best source materials for such an effort.
You can get a list of all reserved words and syntactic elements of bash using this trick:
help -s '*' | cut -d: -f1
Or more accurately:
help -s \* | awk -F ': ' 'NR>2&&!/variables/{print $1}'
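If you want the same information as plain data rather than by scraping help output, Bash's compgen builtin can list each category directly (a quick sketch; the exact output varies between Bash versions and the current shell state):
compgen -k                  # reserved words (if, then, while, ...)
compgen -b                  # shell builtins (alias, cd, ...)
compgen -A function         # shell functions currently defined
compgen -v                  # variables currently set (IFS, PATH, ...)
compgen -c | sort -u        # every command name the shell can currently resolve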
Whenever I see a Julia macro in use, like @assert or @time, I'm always wondering about the need to distinguish a macro syntactically with the @ prefix. What should I be thinking of when using @ for a macro? For me it adds noise and distraction to an otherwise very nice language (syntactically speaking).
I mean, for me '@' has the meaning of a reference, i.e. a location like a domain or address. In the location sense, @ does not have a meaning for macros other than that it is a different compilation step.
The @ should be seen as a warning sign which indicates that the normal rules of the language might not apply. E.g., a function call
f(x)
will never modify the value of the variable x in the calling context, but a macro invocation
@mymacro x
(or @mymacro f(x) for that matter) very well might.
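To make that concrete, here is a minimal sketch (the macro name and body are invented purely for illustration):
macro setzero(var)
    # esc() bypasses macro hygiene so the assignment targets the caller's variable
    return esc(:($var = 0))
end

x = 5
@setzero x   # x is now 0, something no function call f(x) could do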
Another reason is that macros in Julia are not based on textual substitution as in C, but substitution in the abstract syntax tree (which is much more powerful and avoids the unexpected consequences that textual substitution macros are notorious for).
Macros have special syntax in Julia, and since they are expanded after parsing, the parser also needs an unambiguous way to recognise them (without knowing which macros have been defined in the current scope).
ASCII characters are a precious resource in the design of most programming languages, Julia very much included. I would guess that the choice of # mostly comes down to the fact that it was not needed for something more important, and that it stands out pretty well.
Symbols always need to be interpreted within the context they are used. Having multiple meanings for symbols, across contexts, is not new and will probably never go away. For example, no one should expect #include in a C program to go viral on Twitter.
Julia's Documentation entry Hold up: why macros? explains pretty well some of the things you might keep in mind while writing and/or using macros.
Here are a few snippets:
Macros are necessary because they execute when code is parsed, therefore, macros allow the programmer to generate and include fragments of customized code before the full program is run.
...
It is important to emphasize that macros receive their arguments as expressions, literals, or symbols.
So, if a macro is called with an expression, it gets the whole expression, not just the result.
...
In place of the written syntax, the macro call is expanded at parse time to its returned result.
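A rough illustration of what "receive their arguments as expressions" means in practice (the macro below is invented for demonstration):
macro inspect(ex)
    println("the macro sees: ", ex)   # runs when the macro is expanded, before the generated code runs
    return esc(ex)                    # splice the original expression back into the program
end

@inspect 1 + 2   # prints "the macro sees: 1 + 2" and the call still evaluates to 3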
It actually fits quite nicely with the semantics of the @ symbol on its own.
If we look up the Wikipedia entry for 'At symbol' we find that it is often used as a replacement for the preposition 'at' (yes, it even reads 'at'). And the preposition 'at' is used to express a spatial or temporal relation.
Because of that we can use the @ symbol as an abbreviation for the preposition 'at' to refer to a spatial relation, i.e. a location like @tony's bar, @france, etc., to some memory location @0x50FA2C (e.g. for pointers/addresses), or to the receiver of a message (@user0851, which Twitter and other forums use), but also to a temporal relation, i.e. @05:00 am, @midnight, @compile_time or @parse_time.
And macros are processed at parse time (there you have it), which is totally distinct from the other code that is evaluated at run time (yes, there are many different phases in between, but that's not the point here).
So, to explicitly direct the programmer's attention to the fact that the following code fragment is processed at parse time, as opposed to run time, we use @.
For me this explanation fits nicely in the language.
thanks@all ;)
I often use Sweave to produce LaTeX documents where certain chunks are produced dynamically by executing R code. This works well - but is it also possible to have code chunks that are executed in different ways, e.g. by executing the code in the shell, or by running Perl, and so on? It would be helpful to be able to mix things up, so I could do things like run some shell commands to fetch some data, run some perl commands to pre-process it, and then run R commands to analyze it.
Of course I could use all R chunks and use system() as a poor-man's substitute, but that doesn't make for very pleasant reading in the document.
The new new thing (for multi-language, multi-format docs) may be dexy.it, which, for example, the folks at opengamma.org use as their backend.
Ana, who is behind dexy, also gives a lot of talks about it, so have a look at the dexy blog as well.
It's not directly related to Sweave, but org-babel, which is part of Emacs org-mode, allows you to mix code chunks of different languages in one file, pass data from one chunk to another, execute them, and generate LaTeX or HTML export from the output.
You can find more information about org-mode here:
http://www.orgmode.org/
And to see how org-babel works:
http://orgmode.org/worg/org-contrib/babel/
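To give a rough idea of what this looks like in an Org file (the file name and URL below are made up for illustration), you write source blocks and execute them in place with C-c C-c:
#+begin_src sh :results silent
  curl -s http://example.org/raw.csv > raw.csv
#+end_src

#+begin_src R :results output :exports both
  d <- read.csv("raw.csv")
  summary(d)
#+end_src
Header arguments such as :var also let you pass the results of one block into another, regardless of language.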
There is certainly no easy way to do this other than through either foreign language interfaces from R (maybe through inline if it's supported), or system(). For what it's worth, I would just use system(); that should be easy enough.
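For reference, the poor-man's version is just an ordinary R chunk that shells out (the file names here are invented for illustration):
<<preprocess, results=hide>>=
system("perl preprocess.pl raw.txt > clean.txt")
d <- read.table("clean.txt", header = TRUE)
@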
You can see this previous question about having a Sweave equivalent for Python, where one of the respondents actually creates a separate interface. This can give you a sense of what it would take to embed other languages that may not already be supported. At a minimum, you would have to do major hacking on the Sweave driver.
Do you know emacs" org-mode and, more specifically, Babel? If you already know Emacs or are willing to switch to Emacs, then org-mode and Babel are the answer to your question(s).
For instance, I am currently working on a document which contains some shell-scripts, does computations with R and creates flow charts with dot (graphviz). Org-mode can export a variety of formats, e.g. LaTeX (that's what I use).
There is the StatWeave project which uses java rather than R to do the weaving, but will run multiple programs instead of just R. I don't know how hard it would be to get it to do Perl or other programs like that, but the homepage indicates that it already works with R, SAS, Stata, and others:
http://www.cs.uiowa.edu/~rlenth/StatWeave/
This question follows on from the answer given by Michael Pilat in Preventing “Plus” from rearranging things. There he defined a custom + notation using
Format[myPlus[expr__]] := Row[Riffle[{expr}, "+"]]
The problem with this is that you can't copy and paste the output (although % or Out[] still works). To get around this, you should use the Interpretation facility, which allows an expression to be displayed as one thing but interpreted as another when supplied as input. My modification of Michael's answer is
Format[myPlus[expr__]] := Interpretation[Row[{expr}, "+"], myPlus[expr]]
This can be copied and pasted successfully. The problem lies in modifying copied expressions. You can convert a copied expression back to InputForm using Ctrl-Shift-I, then change anything you want and use the InputForm in any expression. But if you try to change it back to StandardForm using Ctrl-Shift-N, then you enter a recursion where the second argument of the Interpretation repeatedly gets evaluated. This is despite Interpretation having the attribute HoldAll (which works properly during normal evaluation).
Normally, when defining simple notations I use the low-level MakeBoxes, eg
myPlus/:MakeBoxes[myPlus[expr__],fmt_]:=With[{r=Riffle[MakeBoxes/@{expr},"+"]},
InterpretationBox[RowBox[r],myPlus[expr]]]
which works perfectly, so I have not encountered this recursion problem before.
So my question (finally) is:
What went wrong with my Format-type command, and how can it be fixed?
Or: How do you make a high-level equivalent of my MakeBoxes type command?
I consulted with a colleague about this, and his recommendation was essentially that putting up-value definitions on MakeBoxes as you demonstrate is better than using Format when you want things to be tightly integrated from output back to input. Format isn't really intended to produce output that can be re-used as input, but just to format output, hence the unexpected recursion with Interpretation when converting to StandardForm, etc.
You might find the function ToBoxes a useful complement to MakeBoxes.
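For instance (a rough sketch, assuming the up-value MakeBoxes definition from the question is in place):
ToBoxes[myPlus[a, b, c]]  (* returns the box structure that would be displayed, here an InterpretationBox wrapping a RowBox *)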
Finally, here's a tutorial about box structures.
HTH!
I'm really used to auto-completion coming from Netbeans.
In Netbeans, when I type a 'string' and then hit a 'dot' it will print out a list of methods for the String class.
TextMate doesn't seem to have that function.
Is it something you could add?
It would save A LOT of time compared to using ri/irb/the online docs all the time.
Install the Ruby TextMate bundle, open a Ruby file and type alt+esc to get the autocompletion.
You have discovered the fundamental difference between a text editor and an IDE: a text editor edits text (duh!), i.e. an unstructured stream of characters. It doesn't know anything about objects, messages, methods, mixins, modules, classes, namespaces, types, strings, arrays, hashes, numbers, literals etc. This is great, because it means that you can edit anything with a text editor, but it also means that editing any particular thing is harder than it would be with a specialized editor.
A Ruby IDE edits Ruby programs, i.e. a highly structured semantic graph of objects, methods, classes etc. This is great, because the IDE knows about the rules that make up legal Ruby programs and thus will e.g. make it impossible for you to write illegal Ruby programs and it can offer you automated transformations that guarantee that if you start out with a legal Ruby program, you end up with a legal Ruby program (e.g. automated refactorings). But it also means that you can only edit Ruby programs.
In short: it's simply impossible to do what you ask with a text editor. You need an IDE. (Note: you can of course build an IDE on top of a text editor. Emacs is a good example of this. But from what I have read, the TextMate plugin API is simply not powerful enough to do this. I could be wrong, though – since I don't have a Mac, I'm mostly dependent on hearsay.)
TM's "equivalent" is hitting escape, I believe.
You can make escape "go across files" for completion if you use the ruby amp TM bundle http://code.google.com/p/ruby-amp/
GL.
-r