How to navigate the file system in Common Lisp - shell

The more I write Common Lisp in a REPL (in Emacs/SLIME), the more annoyed I get at leaving the REPL to perform operations like making directories, listing files in directories, changing directories (although ,cd is nice), etc.
So I was wondering whether other Lispers use the REPL for the sort of file operations I'd normally use a shell for, and if so, how they do it. The best I've managed is starting to write a few wrappers around UIOP. Is there a better way?
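For illustration, the kind of wrappers I mean (the wrapper names are arbitrary, but the UIOP calls themselves are real):

(defun ls (&optional (dir (uiop:getcwd)))
  ;; list subdirectories and files of DIR
  (append (uiop:subdirectories dir)
          (uiop:directory-files dir)))

(defun cd (dir)
  (uiop:chdir dir))   ; change the process working directory

(defun mkdir (dir)
  ;; ENSURE-DIRECTORIES-EXIST is standard CL; it wants a directory pathname
  (ensure-directories-exist (uiop:ensure-directory-pathname dir)))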

Not long ago I had the same problem, so I did some research. The result:
SHELISP: Unix shell commands from Common Lisp:
http://dan.corlan.net/shelisp/
Shelisp is a very short program that provides mechanisms for composing and running Unix shell (particularly bash) commands and constructs from Common Lisp.
Essentially, it provides a '!' syntax that you can use to run commands, and a '[]' embedded mode where you can enter bash scripts and obtain the standard output as a Lisp string. Lisp expressions can be included in any command or script using a '?' syntax. The new version also includes a 'sh' macro that allows calling Unix utilities directly with a syntax familiar to Lisp users.
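Going just by that description, a session might look something like this (untested; treat the details as approximate and check the manual):

(load "shelisp.lisp")
!pwd                         ; '!' runs a shell command directly
(defvar *name* "world")
(princ [echo hello ?*name*]) ; '[...]' runs a bash snippet and returns its
                             ; stdout as a Lisp string; '?' splices in Lisp values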
I haven't used it yet, but I have read the manual and it looks interesting.

The McCLIM Lisp Listener has some file/directory commands
One unusual option is to use McCLIM. Its Lisp Listener has a bunch of commands, some of which deal with files and directories. One can also add new commands.
Commands don't look like Lisp calls, but they offer prompts, completion, help, dialogs, command listings, argument completion, etc. They are internally implemented by functions, but they work over typed objects carrying a lot of meta-information. Lisp objects (like pathnames) printed to the Listener are thus objects which the user interface recognizes as such.
Typical commands might be:
Show File /foo/bar.lisp
Show Directory /foo/bar.lisp
Edit File /foo/bar.lisp
See the McCLIM Lisp Listener.
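Starting the Listener is easy, and defining your own command is too. A sketch (the command-table name below is my assumption; check the McCLIM sources for the real one):

(ql:quickload "clim-listener")
(clim-listener:run-listener)

;; Hypothetical custom command; the command table is an assumed name.
(clim:define-command (com-show-size
                      :command-table clim-listener::filesystem-commands
                      :name "Show Size")
    ((file 'pathname))
  (with-open-file (s file :element-type '(unsigned-byte 8))
    (format t "~A: ~D bytes~%" file (file-length s))))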
This comes from the Lisp Listener of the Symbolics Lisp Machine, which introduced this specific user interface and had all kinds of fancy commands, not just file-system ones. One could list a directory, and the resulting listing would be a table of actual pathname objects, on which one could invoke a bunch of commands with the mouse, depending on what each item is: a host, a file, a directory, ... McCLIM allows similar things.
The main drawback of McCLIM is that the most used version of it is based on relatively raw X11. I would also expect it to be mostly used on Linux (or similar).
But on a Lisp Machine one usually also had a File System browser, in addition to a Dired mode in Zmacs. The File System browser was a separate application, with a really old-fashioned user interface, which also had commands for dealing with disks and the like.

Related

Compile shell script to make it totally unreadable

I need to compile my shell script, because I want to protect its source code. I have already read about shc, but I have also read that it isn't completely safe, because with a small amount of knowledge (or brain and Google) any user can 'decompile' it. Is there a way to compile my script so that it stays executable, but is completely unreadable and 'undecompilable'?
You can only make it harder to read by a human being. Scripts are plain text files, they have to be readable by the script interpreter.
I made my own bash obfuscator (obash) in such a way that the keys required to extract the script are not stored inside the script (but that makes the script non-distributable). There is an option to generate a static reusable binary, but it will only execute on systems with several levels of compatibility (kernel system-call interface, glibc, basic binary compatibility).
Making the redistributable binary also implies storing a key in the generated binary, but the stored keys need to be manipulated before use.
You might like to try obash and see if it serves you any better than shc.
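For comparison, the shc route mentioned in the question looks like this; note that the plaintext still has to reach the shell interpreter at run time, which is exactly why it can be recovered:

$ shc -f secret.sh    # emits secret.sh.x (a binary) and secret.sh.x.c (C source)
$ ./secret.sh.x       # behaves like the original script
# The binary merely decrypts the script and hands it to the shell, so anyone
# with debugging tools can fish the plaintext back out of the process.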

jq or xsltproc alternative for s-expressions?

I have a project which contains a bunch of small programs tied together using bash scripts, as per the Unix philosophy. Their exchange format originally looked like this:
meta1a:meta1b:meta1c AST1
meta2a:meta2b:meta2c AST2
Where the :-separated fields are metadata and the ASTs are s-expressions which the scripts pass along as-is. This worked fine, as I could use cut -d ' ' to split the metadata from the ASTs, and cut -d ':' to dig into the metadata. However, I then needed to add a metadata field containing spaces, which breaks this format. Since no field uses tabs, I switched to the following:
meta1a:meta1b:meta1c:meta 1 d\tAST1
meta2a:meta2b:meta2c:meta 2 d\tAST2
Since I envision more metadata fields being added in the future, I think it's time to switch to a more structured format rather than playing a game of "guess the punctuation".
Instead of delimiters and cut I could use JSON and jq, or I could use XML and xsltproc, but since I'm already using s-expressions for the ASTs, I'm wondering if there's a nice way to use them here instead?
For example, something which looks like this:
(echo '(("foo1" "bar1" "baz1" "quux 1") ast1)'
echo '(("foo2" "bar2" "baz2" "quux 2") ast2)') | sexpr 'caar'
"foo1"
"foo2"
My requirements are:
Straightforward use of stdio with minimal boilerplate, since that's where my programs read/write their data
Easily callable from shell scripts, or providing a very compelling alternative to bash's process invocation and pipelining
Streaming I/O if possible; i.e. I'd rather work with one AST at a time than consume the whole input looking for a closing )
Fast and lightweight, especially if it's being invoked a few times; each AST is only a few KB, but they can add up to hundreds of MB
Should work on Linux at least; cross-platform would be nice
The obvious choice is to use a Lisp/Scheme interpreter, but the only one I'm experienced with is Emacs, which is far too heavyweight. Perhaps another implementation is more lightweight and suited to this?
In Haskell I've played with shelly, turtle and atto-lisp, but most of my code was spent converting between String/Text/ByteString, wrapping/unwrapping Lisps, implementing my own car, cdr, cons, etc.
I've read a little about scsh, but don't know if that would be appropriate either.
You might give Common Lisp a try.
Straightforward use of stdio with minimal boilerplate, since that's where my programs read/write their data
(loop for (attributes ast) = (safe-read)
      while ast
      do (print ...))
Read from standard input, write to standard output.
safe-read should disable execution of code at read time; there is at least one implementation. Don't eval your AST directly unless you know exactly what's in there.
Easily callable from shell scripts, or providing a very compelling alternative to bash's process invocation and pipelining
In the same spirit as java -jar ..., you can launch your Common Lisp implementation, e.g. sbcl, with a script as argument: sbcl --load file.lisp. You can even dump a core, or an executable core, of your application with everything preloaded (save-lisp-and-die).
Or, use cl-launch which does the above automatically, and portably, and generates shell scripts and/or makes executable programs from your code.
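A minimal build sketch with SBCL (the file and function names are placeholders):

;; build.lisp -- run once with: sbcl --load build.lisp
(load "process.lisp")                 ; your program, defining MAIN
(sb-ext:save-lisp-and-die "process"   ; writes a self-contained executable
                          :executable t
                          :toplevel #'main)
;; afterwards: ./process < data.sexps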
Streaming I/O if possible; i.e. I'd rather work with one AST at a time than consume the whole input looking for a closing )
If the whole input stream starts with a (, then read will read up to the closing ) character. But in practice this is rarely done: source code in Common Lisp is not enclosed in one pair of parentheses per file, but written as a sequence of forms. If your stream produces not one but many s-expressions, the reader will read them one at a time.
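For example:

(with-input-from-string (s "(a 1) (b 2) (c 3)")
  (loop for form = (read s nil :eof)
        until (eq form :eof)
        collect form))
;; => ((A 1) (B 2) (C 3))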
Fast and lightweight, especially if it's being invoked a few times; each AST is only a few KB, but they can add up to hundreds of MB
Fast it will be, especially if you save a core. Lightweight? Well, it is well known that Lisp images can take some disk space (e.g. 46 MB), but this is rarely an issue. Why is this important? Maybe you have another definition of what lightweight means, because image size is unrelated to the size of the ASTs you will be parsing. There should be no problem reading those ASTs, though.
Should work on Linux at least; cross-platform would be nice
See Wikipedia. For example, Clozure CL (CCL) runs on Mac OS X, FreeBSD, Linux, Solaris and Windows, 32/64 bits.
Working on a slightly different task, I again found the need to process a bunch of s-expressions. This time I needed to perform some non-trivial processing of the given s-expressions (extracting lists of symbols used, etc.), rather than having the option to pass them along as opaque strings.
I gave Racket a try and was pleasantly surprised; it was much nicer than the other Lisps I've used before (Emacs Lisp and various application-specific Scheme scripts), since it has nice documentation and a batteries-included standard library.
Some of the relevant points for this kind of task:
"Ports" for reading and writing data. These can be (dynamically?) scoped across an expression, and default to stdio (i.e. (current-input-port) defaults to stdin and (current-output-port) defaults to stdout). Ports make stdio and file access about as nice to use as a shell: more verbose, but fewer gnarly edge-cases.
Various conversion functions like port->string, file->lines, read, etc. make it easy to get data at the appropriate form of granularity (characters, lines, strings, expressions, etc.).
I couldn't find a "standard" way to read multiple s-expressions: read only returns one, so iteration or recursion is needed to do this in a streaming fashion.
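For illustration, such a loop is short; in-port turns a reader procedure into a sequence. A sketch, assuming the (("foo1" ...) ast1) shape from the question:

#lang racket
(for ([form (in-port read (current-input-port))])
  (writeln (caar form)))   ; prints "foo1", "foo2", ... one per AST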
If streaming isn't needed, I found it easiest to read the whole input as a string, append "(\n" and "\n)", then use (with-input-from-string my-modified-input read) to get one big list.
I found Racket's startup time to be pretty slow, so I wouldn't recommend invoking a script over and over as part of a loop if speed is a concern. It was easy enough to move my looping into Racket and have the script invoked once though.

Is it possible to create an event driven service in shells

Hi, I would like to create a small program that listens for copy commands and stores the copied content for later retrieval, in bash. Is it possible to listen to keystrokes while still keeping the shell interactive? And how can this be done architecturally? I don't need the whole program, just a hint at how it can be done. I have no preference when it comes to language, except that it should be implemented in a scripting language, or maybe C++.
Perhaps this needs to be written as a shell extension or something; just a hint would be fine.
Consider the way that the script program works (see man script). I haven't done this in a while, but basically you write your pseudo-terminal handling in C and push that into the stream, then launch the shell.
See tcgetattr/tcsetattr, grantpt, unlockpt, and ptsname, with ptem, ldterm and possibly ttcompat to be pushed using ioctl.
A simpler, though less efficient, approach is to run script into a pipe and capture the output. You will probably need script -f to flush the buffer (I think the -f is only in the GNU version).
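A sketch of that variant (the log path is arbitrary):

# terminal 1: run an interactive shell under script, flushing each write
script -f /tmp/session.log

# terminal 2: watch the captured keystrokes/output as they arrive
tail -f /tmp/session.log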

Creating a UNIX shell

I want to create a mini shell for UNIX, just to learn the ins and outs of everything. I am having some confusion understanding things I used to take for granted. This is kind of a philosophical question: when I create a "shell", I assume I have a UNIX with no shell, so what would stdin and stdout be in this case? Don't functions like system() and exec() use the shell to execute programs? If I am creating the shell in the first place, how do these functions work?
There are several functions in the exec family: execve(2), execl(3), execle(3), execlp(3), execv(3), execvp(3). The first one, execve(2) is provided by the operating system kernel as a system call. (Well, okay, the function that programs call is provided by the system C library, but it is a simple wrapper around the system call.) The other functions provide slightly different semantics and are implemented in terms of the execve(2) function.
A shell could use execvp(3) or execlp(3) to get the PATH search for executables, but at least bash(1) hashes the full pathnames of executables itself as a performance optimization. (See the bash(1) built-in hash for details.)
system(3) is implemented via /bin/sh -c, as you've surmised.
The standard input and output are set up by whichever program spawned the shell. If a user logs in on the console directly, it'll be handled by agetty(8) or mgetty(8) or whichever getty-alike program handles direct logins. If a user logs in via sshd(8), then sshd(8) is in charge of creating the pty and delegating the terminal slave to the shell. If a user creates their shells via xterm(1) or other terminal emulators, then those processes are responsible for hooking up the standard input, output, and error for the shell.
system(3) does indeed use (possibly directly or indirectly via exec) a shell to do its work. exec(3) and friends, however, do not use a shell, but rather execute the specified program image directly. You can see this simply by reading their respective man pages.
One difference is that with system(), you will see sugar like wildcards being expanded, whereas if you pass * as an argument to your program using exec(), your program will see the literal asterisk (and probably not know what to do).
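A small C demo of that difference (execl replaces the process image, so it has to come last):

#include <stdlib.h>
#include <unistd.h>

int main(void) {
    system("ls *.c");                          /* the shell expands the glob */
    execl("/bin/ls", "ls", "*.c", (char *)0);  /* ls receives a literal "*.c" */
    return 1;                                  /* reached only if execl fails */
}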
A shell can be implemented using exec() among other things. It gets its stdin and stdout from something called the TTY (teletype, or old-school terminal) or PTY (pseudo-terminal, as in modern systems). See posix_openpt(2).
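Putting the pieces together, the core of a mini shell is just a read/fork/exec/wait loop. A sketch, with no quoting, globbing, pipes, or job control:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char line[1024];
    char *argv[64];

    for (;;) {
        fputs("mysh> ", stdout);              /* prompt on inherited stdout */
        if (!fgets(line, sizeof line, stdin)) /* EOF (Ctrl-D) ends the shell */
            break;
        line[strcspn(line, "\n")] = '\0';

        int argc = 0;                         /* naive whitespace tokenizer */
        for (char *t = strtok(line, " \t"); t && argc < 63; t = strtok(NULL, " \t"))
            argv[argc++] = t;
        argv[argc] = NULL;
        if (argc == 0)
            continue;

        pid_t pid = fork();
        if (pid == 0) {                       /* child: become the program */
            execvp(argv[0], argv);            /* searches PATH, like execlp */
            perror(argv[0]);
            _exit(127);
        } else if (pid > 0) {
            waitpid(pid, NULL, 0);            /* parent: wait for the child */
        } else {
            perror("fork");
        }
    }
    return 0;
}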

Real life SHELL SCRIPTS usage?

I'm learning UNIX/Linux shell scripting and trying to think of appropriate uses for it.
The only thing that comes to mind is that it would be nice for, let's say, backup operations and log management... but I'm sure it goes way beyond that... or does it?
I'm sure there are people on this site who use shell scripting on a daily basis.
Can you tell me what you use it for in your organization/business?
Thanks :)
Why use shell scripts
Basically, there are any number of tasks related to backup, maintenance, etc. that need to be automated, and shell scripts do that.
You can do almost everything in shell, but it is easy to write ugly and slow scripts.
The shell's first domain of expertise is starting and combining other programs. That makes it exceptionally well suited for:
file manipulations: list, move, copy, compress, archive
text-line manipulations: filter (grep), modify (sed), delete lines (sed), combine files (paste), sort (sort), deduplicate (sort -u)
All those operations are NOT shell operations, but the shell is the glue that puts them all together.
File operations are generally combined with flow-control constructs (while, if, for).
Line operations are combined with pipes (|) and named pipes (mkfifo).
Lots of useful things can be done in fewer than 20 lines of shell commands.
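A made-up but typical example:

# count distinct error messages across all logs, most frequent first
grep -h ERROR *.log |
  sed 's/^[0-9 :-]*//' |      # strip leading timestamps
  sort | uniq -c | sort -rn | head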
I personally use it to batch miscellaneous daily/weekly commands and start up long running processes. They can be unwieldy and hard to debug when they get big. Unknown variables evaluate to empty strings (icky).
Scripting languages such as Python, Perl, and Ruby become more attractive as the code grows more complex.
I work on an actively developed software project that runs in a unix environment. Unfortunately it uses a lot of different environment variables for configuration and stashes binary programs, data files, and shared libraries on version dependent paths.
All that is a pain to set up.
But it gets worse: at any given time I might want to work with the stable version, the pretty-stable-but-more-up-to-date version, the bleeding-edge-every-new-feature version, or my personally hacked development version.
Switching between them is an even bigger hassle.
Enter a shell script which ensures that I am set up for exactly one version at a time. Ta-da!
BTW, the script I use for this makes extensive use of the accepted answer to "How do I manipulate $PATH elements in shell scripts?", so you know Stack Overflow works for me in the real world. Moreover, I've infected several other people with this technology.
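The core of that trick is a function that scrubs the old version's entries out of $PATH before prepending the new ones. A sketch with hypothetical install paths:

# hypothetical layout: /opt/product/{stable,testing,dev}/bin
use_version() {
    local ver=$1
    # drop any previously selected version from PATH...
    PATH=$(printf '%s\n' "$PATH" | tr ':' '\n' |
           grep -v '^/opt/product/' | paste -sd: -)
    # ...then put the requested one in front
    export PATH="/opt/product/$ver/bin:$PATH"
    export PRODUCT_HOME="/opt/product/$ver"
}
use_version stable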
I've seen and worked on full-blown applications (medical records and scheduling processing) written in Korn shell.
Batch programming, PostScript print filters, automatic mailers and automated airline check-in systems, regular stock-price tracking, software installers, et al.
A better question: what could not be programmed in shell?
For our company, we use shell scripts for the following:
backups - it would be disastrous for us if we lost our data. The various parts of our backup, like database backup, offsite backup, continuous backup, etc., all use shell scripts; some run daily and some once a week.
updating dates - we do not use NTP, so we rely on sh scripts to update the date, due to firewall restrictions.
log cleanup
send emails
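Wiring such jobs up is typically one crontab line each (the script names are hypothetical):

# m h dom mon dow  command
0 2 * * *  /usr/local/bin/db_backup.sh      >> /var/log/backup.log 2>&1
0 3 * * 0  /usr/local/bin/offsite_backup.sh >> /var/log/backup.log 2>&1
0 4 * * *  /usr/local/bin/clean_logs.sh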
I didn't think bash programming was particularly powerful until I saw that the OS startup scripts are all written in it. That made me re-examine my assumptions. I now have several dozen important shell scripts that I've written over the years that automate some common tasks.
For example, I wrote one that polls the current load average, and then executes a provided command if it exceeds a certain value (useful for examining events that only happen once or twice a day).
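That one reduces to just a few lines. A sketch of the idea (Linux-specific, via /proc/loadavg):

#!/bin/sh
# usage: watchload <threshold> <command> [args...]
threshold=$1; shift
load=$(cut -d' ' -f1 /proc/loadavg)   # 1-minute load average
if awk -v l="$load" -v t="$threshold" 'BEGIN { exit !(l > t) }'; then
    "$@"                              # threshold exceeded: run the command
fi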
Another that I wrote iterates through all the mysql databases on the server and outputs a mysqldump for each one into its own appropriately-named .sql file.
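The core of that one is similarly compact (credentials and edge cases omitted):

for db in $(mysql -N -B -e 'SHOW DATABASES'); do
    case $db in information_schema|performance_schema) continue ;; esac
    mysqldump "$db" > "/backup/$db.sql"
done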
Another iterates through a list of homedirs and changes the ownership of all the files under the corresponding public_html dir to match the user who should own them to be compliant with suPHP's restrictions.
Another examines the current hardware configuration and downloads, installs, and configures appropriate software for monitoring the health of the currently-attached RAID controller.
These are all relatively simple tasks that could be done by hand -- but whenever I find myself doing the same task more than once, I write a shell script to automate the process.
I also built a base-64 decoder in bash just to see if I could. It works, but it's terribly slow. I use shell scripting for simple tasks that primarily involve executing other programs. I often use Perl when a significant amount of string processing is required, and I use Python for the more complex scripting tasks. The more languages you know, the better you will be at choosing the right one for the job.
