Creating a UNIX shell

I want to create a mini shell for UNIX just to learn the ins and outs of everything. I am having some confusion about things I used to take for granted. This is a somewhat philosophical question: when I create a "shell", I assume I have a UNIX system with no shell, so what would stdin and stdout be in this case? Don't functions like system() and exec() use the shell to execute programs? If I am creating the shell in the first place, how do these functions work?

There are several functions in the exec family: execve(2), execl(3), execle(3), execlp(3), execv(3), execvp(3). The first one, execve(2) is provided by the operating system kernel as a system call. (Well, okay, the function that programs call is provided by the system C library, but it is a simple wrapper around the system call.) The other functions provide slightly different semantics and are implemented in terms of the execve(2) function.
A shell could use execvp(3) or execlp(3) to get the PATH search for executables, but bash(1), at least, hashes the full pathnames of executables for a performance benefit. (See the bash(1) built-in hash for details.)
system(3) is implemented via /bin/sh -c, as you've surmised.
The standard input and output are set up by whichever program spawned the shell. If a user logs in on the console directly, it'll be handled by agetty(8) or mgetty(8) or whichever getty-alike program handles direct logins. If a user logs in via sshd(8), then sshd(8) is in charge of creating the pty and delegating the terminal slave to the shell. If a user creates their shell via xterm(1) or another terminal emulator, then that process is responsible for hooking up the standard input, output, and error for the shell.
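To make this concrete, here is a minimal sketch of the fork/exec/wait cycle a shell performs for each command. It is written in Python, whose os module wraps the same system calls discussed above; run_command is a hypothetical name, not part of any real shell:

```python
import os

def run_command(argv):
    """Sketch of a shell's core: fork a child, exec the program,
    and wait for it to finish."""
    pid = os.fork()
    if pid == 0:
        # Child: replace this process image with the requested program.
        try:
            os.execvp(argv[0], argv)   # execvp searches $PATH, like execvp(3)
        except OSError:
            os._exit(127)              # conventional "command not found" status
    # Parent: wait for the child and report its exit status.
    _, status = os.waitpid(pid, 0)
    return os.waitstatus_to_exitcode(status)
```

Everything else a shell does (prompting, parsing, redirection, job control) is layered on top of this one loop.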

system(3) does indeed use a shell (directly or indirectly, via exec) to do its work. exec(3) and friends, however, do not use a shell; they execute the specified program image directly. You can see this simply by reading their respective man pages.
One difference is that with system(), you will see sugar like wildcards being expanded, whereas if you pass * as an argument to your program using exec(), your program will see the literal asterisk (and probably won't know what to do with it).
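A quick way to see this difference, sketched in Python: shell=True routes the command through /bin/sh (what system(3) does), while shell=False execs the program directly.

```python
import os
import subprocess
import tempfile

# Create a scratch directory containing two files for the glob to match.
d = tempfile.mkdtemp()
for name in ("a.txt", "b.txt"):
    open(os.path.join(d, name), "w").close()

# Through the shell (as with system(3)): the shell expands *.txt
# before echo ever runs.
via_shell = subprocess.run("echo *.txt", shell=True, cwd=d,
                           capture_output=True, text=True).stdout.split()

# Direct exec (no shell): echo receives the literal pattern.
via_exec = subprocess.run(["echo", "*.txt"], cwd=d,
                          capture_output=True, text=True).stdout.split()
```

Here via_shell ends up as the expanded filenames, while via_exec is just the literal pattern.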
A shell can be implemented using exec(), among other things. It gets its stdin and stdout from something called a TTY (teletype, i.e. an old-school terminal) or a PTY (pseudo-terminal, as on modern systems). See posix_openpt(3).
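A minimal sketch of that pseudo-terminal plumbing, in Python (os.openpty wraps the posix_openpt/grantpt/unlockpt sequence): a terminal emulator holds the master end, and the shell's stdin/stdout/stderr are wired to the slave end.

```python
import os

# Open a master/slave pseudo-terminal pair, as a terminal emulator would.
master_fd, slave_fd = os.openpty()
print("slave device:", os.ttyname(slave_fd))   # e.g. /dev/pts/3 on Linux

# Anything the "shell" writes to the slave end shows up on the master end,
# where the emulator (or sshd) would read it and draw it on screen.
os.write(slave_fd, b"hello from the shell side\n")
data = os.read(master_fd, 1024)
# Note: the tty line discipline may translate "\n" to "\r\n" on the way through.
```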

Related

How to navigate the file system in Common Lisp

The more I write Common Lisp in a REPL (in Emacs/Slime), the more I'm annoyed about leaving the REPL to perform operations like making directories, listing files in directories, changing directories (although ,cd is nice), etc.
So I was wondering if other Lispers use the REPL to perform the sort of file operations I'd normally use a shell for, and if so, how they do it? The best I've managed is starting to write a few wrappers around uiop. Is there a better way?
Not long ago I had the same problem you have, so I did some research. The result:
SHELISP: Unix shell commands from Common Lisp:
http://dan.corlan.net/shelisp/
Shelisp is a very short program that provides mechanisms for composing
and running Unix shell (particularly bash) commands and constructs
from Common Lisp.
Essentially, it provides a '!' syntax that you can
use to run commands and a '[]' embedded mode where you can enter bash
scripts and obtain the standard output as a lisp string.
Lisp expressions can be included in any command or script using a '?'
syntax. The new version also includes a 'sh' macro that allows to call
unix utilities directly with a syntax familiar to the lisp users.
I haven't used it yet, but I read the manual and it looks interesting.
The McCLIM Lisp Listener has some file/directory commands
One unusual option is to use McCLIM. Its Lisp Listener has a bunch of commands, and some of them deal with files and directories. One can also add new commands.
Commands don't look like Lisp calls, but they offer prompts, completion, help, dialogs, command listings, argument completion, etc. They are implemented internally by functions, but they work over typed objects with a lot of meta information. Lisp objects (like pathnames) printed to the Listener are thus objects which the user interface recognizes as such.
Typical commands might be:
Show File /foo/bar.lisp
Show Directory /foo/bar.lisp
Edit File /foo/bar.lisp
See the McCLIM Lisp Listener.
This comes from the Lisp Listener of the Symbolics Lisp Machine, which introduced this specific user interface and had all kinds of fancy commands, not just file-system commands. One could list a directory, and the listing would be a table of actual pathname objects, on which one could invoke a bunch of commands with the mouse, depending on what each item is: a host, a file, a directory, ... McCLIM allows similar things.
The main drawback of McCLIM is that the most used version of it is based on relatively raw X11. I would also expect it to be mostly used on Linux (or similar).
But on a Lisp Machine one usually also had a Filesystem browser, in addition to a Dired mode in Zmacs. The Filesystem browser was another application, with a really old-fashioned user interface, which also had commands for dealing with disks and the like.

Is it possible to create an event-driven service in shells

Hi, I would like to create a small program that listens for copy commands and stores the copied content for later retrieval in bash. Is it possible to listen for keystrokes while still keeping the shell interactive? And how can this be done architecturally? I don't need the whole program, just a hint at how it can be done. I have no preference when it comes to language, except that it should be implemented in a scripting language or maybe C++.
Perhaps this needs to be written as a shell extension or something. Just a hint would be fine.
Consider the way that the script program works (see man script). I haven't done this in a while, but basically you write your pseudo-terminal handling in C and push that into the stream, then launch the shell.
See tcgetattr/tcsetattr, grantpt, unlockpt, and ptsname, with ptem, ldterm and possibly ttcompat to be pushed using ioctl.
A simpler, though less efficient, approach is to run script into a pipe and capture the output. You will probably need script -f to flush the buffer (I think -f is only in the GNU version).
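The same capture idea can be sketched in Python with a pty pair instead of pushing STREAMS modules: run the command with its standard streams on the slave side, then read what script(1) would have recorded from the master side.

```python
import os
import subprocess

# Allocate a pty pair; the command's stdio is attached to the slave end,
# so the command believes it is talking to a terminal.
master, slave = os.openpty()
subprocess.run(["echo", "hello from a pty"],
               stdin=slave, stdout=slave, stderr=slave)
os.close(slave)

# Read the command's output from the master end, like script(1) does.
captured = os.read(master, 1024)
```

A real recorder would loop, relaying data between the real terminal and the pty until the child exits; this sketch only handles one short burst of output.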

What do I need to read to understand $PATH

I'm new to programming/development and I'm having trouble installing development tools. One of my biggest problems when installing something is understanding the shell or terminal (are they the same thing?) and how it relates to installing tools like uncrustify, for example. What do I need to read to understand the shell/terminal and $PATH?
Have you tried Googling?
Environment variable
PATH (variable)
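Concretely, the PATH lookup those links describe is simple enough to sketch in a few lines of Python; which here is a hypothetical helper, roughly equivalent to the standard shutil.which:

```python
import os

def which(cmd):
    """Walk the directories in $PATH in order and return the first
    executable file named cmd -- the same search a shell performs
    before it runs a command."""
    for d in os.environ.get("PATH", "").split(os.pathsep):
        candidate = os.path.join(d, cmd)
        if os.path.isfile(candidate) and os.access(candidate, os.X_OK):
            return candidate
    return None
```

This is why "command not found" usually means the tool's directory simply isn't listed in $PATH yet.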
(I think you're getting good advice so far on PATH)
The most generic description of a shell is that it is a program that facilitates interaction with other programs. Programs in turn "communicate" with the OS, which has the work performed by the hardware.
There are two modes in which you will normally interact with a shell:
a command-line processor, where you type in commands, letter by letter, word by word, until you press the Enter key. The shell then reads what you have typed, validates that it understands the general form of what you have asked for, and starts running the one (or more) programs specified in what you typed.
a batch-script processor. In this case you have assembled all of the commands you want executed into a file, and then, through one of several mechanisms, you arrange to have the batch script run so that it in turn runs the commands you have specified and the computer does your work for you. Have you ever written a Windows .bat file? Same idea, but more powerful.
So, a terminal window is a program responsible for (a) getting input and (b) printing output. When you get to the C programming that underlies the UNIX system, you are talking about features of the OS design called standard in and standard out. Normal UNIX commands expect to read instructions from stdin and print output to stdout.
Of course, all good programs can also get their input from files and write their output to files, and most programs will redirect stdin/stdout to process files instead of reading input from the keyboard and/or writing to the screen.
To return to the shell: this is the program that lets you type commands while the terminal window is open. There are numerous versions of the shell that you may run into, with varying levels of features supporting (a) interactive mode and (b) batch-script mode.
To sum it up, here is a diagram of what is involved (very basically) for the terminal and the shell:
(run a) terminal-window (program)
shell-command-prompt (program) (automatically started as subprogram)
1. enter commands one at a time, with input from
a. typed at keyboard (std-in)
b. infile
and output to
a. screen (std-out)
b. outFile
program
calls OS level functions for
a. computation
b. I/O
OR 2.
(run the shell program without a terminal, usually from the cron sub-system)
shell-batch-processor
shell program reads batch-script file, 1 'statement' at a time
validate statements
run program, relying on script or cfg to provide inFile data and
indicate where to put outfile data.
I hope this helps.
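The interactive loop in the diagram above can be sketched in a few lines of Python; shell_loop is a hypothetical name, and it returns each command's exit status so the behavior is easy to observe:

```python
import os
import shlex

def shell_loop(stream):
    """Toy command-line processor: read a line, split it into words,
    fork/exec the program, wait, and record its exit status."""
    statuses = []
    for line in stream:
        argv = shlex.split(line)
        if not argv:
            continue                       # blank line: prompt again
        if argv[0] == "exit":              # a built-in handled by the shell itself
            break
        pid = os.fork()
        if pid == 0:
            try:
                os.execvp(argv[0], argv)   # PATH search, like execvp(3)
            except OSError:
                os._exit(127)
        _, status = os.waitpid(pid, 0)
        statuses.append(os.waitstatus_to_exitcode(status))
    return statuses
```

Batch-script mode is the same loop fed from a file instead of the keyboard.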

Pitfalls of using shell scripts to wrap a program?

Consider that I have a program that needs its environment set. It is in Perl, and I want to modify the environment (to search for libraries in a special spot).
Every time I mess with the standard way to do things in UNIX I pay a heavy price, and I pay a penalty in flexibility.
I know that by using a simple shell script I will inject an additional process into the process tree. Any process accessing its own process tree might be thrown for a little bit of a loop.
Anything recursive in a nontrivial way would need to defend against multiple expansions of the environment.
Anything resembling being in a pipe of programs (or closing and opening STDIN, STDOUT, or STDERR) is my biggest area of concern.
What am I doing to myself?
What am I doing to myself?
Getting yourself all het up over nothing?
Wrapping a program in a shell script in order to set up the environment is actually quite standard and the risk is pretty minimal unless you're trying to do something really weird.
If you're really concerned about having one more process around — and UNIX processes are very cheap, by design — then use the exec keyword, which instead of forking a new process, simply exec's a new executable in place of the current one. So, where you might have had
#!/bin/bash -
FOO=hello
PATH=/my/special/path:${PATH}
perl myprog.pl
You'd just say
#!/bin/bash -
FOO=hello
PATH=/my/special/path:${PATH}
exec perl myprog.pl
and the spare process goes away.
This trick, however, is almost never worth the bother; the one counter-example is that if you can't change your default shell, it's useful to say
$ exec zsh
in place of just running the shell, because then you get the expected behavior for process control and so forth.
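The reason the spare process disappears is that exec replaces the calling process image without forking, so the PID is preserved. A small sketch in Python (which exposes the same execvp(3) call) makes this observable:

```python
import os

# Fork a child, then have the child exec a shell that prints its own
# PID ($$). Because exec replaces the image in place, the PID printed
# is the child's original PID -- no extra process was created.
r, w = os.pipe()
child = os.fork()
if child == 0:
    os.dup2(w, 1)                              # child's stdout -> pipe
    os.close(r)
    os.execvp("sh", ["sh", "-c", "echo $$"])
    os._exit(127)                              # only reached if exec fails
os.close(w)
printed_pid = int(os.read(r, 64).strip())
os.waitpid(child, 0)
```

This is exactly what the `exec perl myprog.pl` line in the wrapper does: the shell's process becomes the Perl process.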

How can I fork a background processes from a Perl CGI script on Windows?

I've had some trouble forking off processes from a Perl CGI script when running on Windows. The main issue seems to be that fork is emulated when running on Windows, and doesn't actually create a new process (just another thread in the current one). This means that web servers (like IIS) which are waiting for the process to finish continue waiting until the 'background' process finishes.
Is there a way of forking off a background process from a CGI script under Windows? Even better, is there a single function I can call which will do this in a cross platform way?
(And just to make life extra difficult, I'd really like a good way to redirect the forked processes output to a file at the same time).
If you want to do this in a platform independent way, Proc::Background is probably the best way.
Use Win32::Process->Create with DETACHED_PROCESS parameter
perlfork:

Perl provides a fork() keyword that corresponds to the Unix system call of the same name. On most Unix-like platforms where the fork() system call is available, Perl's fork() simply calls it.

On some platforms such as Windows where the fork() system call is not available, Perl can be built to emulate fork() at the interpreter level. While the emulation is designed to be as compatible as possible with the real fork() at the level of the Perl program, there are certain important differences that stem from the fact that all the pseudo child "processes" created this way live in the same real process as far as the operating system is concerned.
I've found real problems with fork() on Windows, especially when dealing with Win32 Objects in Perl. Thus, if it's going to be Windows specific, I'd really recommend you look at the Thread library within Perl.
I use this to good effect accepting more than one connection at a time on websites using IIS, and then using even more threads to execute different scripts all at once.
This question is very old, and the accepted answer is correct. However, I just got this to work, and figured I'd add some more detail about how to accomplish it for anyone who needs it.
The following code exists in a very large Perl CGI script. This particular subroutine creates tickets in multiple ticketing systems, then uses the returned ticket numbers to make an automated call via Twilio. The call takes a while, and I didn't want the CGI users to have to wait until the call ended to see the output from their request. To that end, I did the following:
(All the CGI code that is standard stuff. Calls the subroutine needed, and then)
my $randnum = int(rand(100000));
my $callcmd = $progdir_path . "/aoff-caller.pl --uniqueid $uuid --region $region --ticketid $ticketid";
my $daemon = Proc::Daemon->new(
work_dir => $progdir_path,
child_STDOUT => $tmpdir_path . '/stdout.txt',
child_STDERR => $tmpdir_path . '/stderr.txt',
pid_file => $tmpdir_path . '/' . $randnum . '-pid.txt',
exec_command => $callcmd,
);
my $pid = $daemon->Init();
exit 0;
(kill CGI at the appropriate place)
I am sure that the random number generated and attached to the pid file is overkill, but I have no interest in creating issues that are extremely easily avoided. Hopefully this helps someone looking to do the same sort of thing. Remember to add use Proc::Daemon at the top of your script, mirror the code, alter the paths and names for your program, and you should be good to go.
