What I am trying to achieve is this: when I release a piece of software it has its own version number (1.0, 1.1, etc.). Can I make a variable in bash that cannot be edited once the user downloads my program?
I have something like:
declare -r version=11
If the final user opens vim as sudo/root he can easily edit this value. Can bash provide something to work around this issue?
Thanks.
In the context of programming languages, the term "immutable" means that a variable cannot be modified after its first assignment. This is typically called a constant.
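Within a single run of the script, a readonly variable does behave like a constant. A small sketch:
declare -r version=1.1
version=2.0    # the shell rejects this: "version: readonly variable"
That protection only exists while the script is running, though; it does nothing to stop someone from editing the file on disk.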
What you are looking for is a way to make the whole program immutable. This can be done in the file system.
sudo chattr +i your-script
Now the script is immutable and cannot be changed by vi directly. But the user can still remove the immutable flag in order to edit the file.
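For example (a sketch; your-script stands in for the actual file, and chattr/lsattr require a Linux filesystem that supports the immutable attribute, such as ext4):
sudo chattr +i your-script     # set the immutable flag
lsattr your-script             # an 'i' in the attribute list confirms it is set
sudo chattr -i your-script     # root can clear the flag again at any time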
According to the Bash Reference Manual, the Bash scripting language is made up of 4 distinct classes of syntactic elements:
built-in commands (alias, cd)
reserved words (if, function)
parameters and variables ($, IFS)
Readline functions (abort, end-of-file - activated with key bindings such as Ctrl-d)
Apart from reading the manual, I became curious whether there is a programmatic way to list or generate all such keywords, at least for one of the above categories. I think this could be useful in some contexts. Sometimes I wish I could see all the options available to me at any given moment, and having that information as data, instead of as a formatted manual, is convenient, focused, and can be edited, in case you want to strike out commands you already know well, or ones that are too obscure for now.
My understanding is that Bash takes input on stdin and passes it to the running shell process. When code is distributed in production-ready form it is compiled, so it runs faster. Unlike a Python REPL, you don’t have access to the Bash source code from within Bash, so writing a program that searches through source files for the various defined commands is not a very direct route. What I mean is: to list all functions, Python has the dir() function, which programmatically looks up the names in a namespace, but I don’t think Bash can do that. I don’t think its source files have a special syntax that makes it easy to find and identify all the keywords. Instead, they are simply found when you enter them - cd, for example, “finds” the cd program because $PATH yields the path to that command - but there’s no special way to discover them.
Or am I wrong? Technically, you could run a “brute force” search by generating every combination of symbols of every length and recording which ones did not produce a “command not found” error.
Is there any other clever programmatic way to do this?
I mean I want to see a list of every symbol or string that the bash
compiler
Bash is not a compiler. It and every other shell I know are interpreters of various languages.
recognises and knows what to do with, including commands like
“ls” or just a symbol like “*”. I also want to see the inputs and
outputs for each symbol, i.e., some commands are executed in the shell
prompt by themselves, but what data type do they return?
All commands executed by the shell have an exit status, which is a number between 0 and 255. This is as close to a "return type" as you get. Many of them also produce idiosyncratic output to one or two streams (a standard output stream and a standard error stream) under some conditions, and many have other effects on the shell environment or operating environment.
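For example, a sketch showing both the exit status and the two output streams (the file names are arbitrary):
grep -q root /etc/passwd; echo "grep exit status: $?"            # 0 if a match was found, 1 if not
ls /nonexistent > out.txt 2> err.txt; echo "ls exit status: $?"  # stdout and stderr captured separately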
And some
require a certain data type to standard input.
I can't think of a built-in utility whose expected input is well characterized as having a particular data type. That's not really a stream-oriented concept.
I want to do this just as a rigorous way to study the language.
If you want to rigorously study the language, then you should study its manual, where everything you describe has already been compiled. You might also want to study the POSIX shell command language manual for a slightly different perspective, which is more thorough in some areas, though what it documents differs in a few details from Bash's default behavior.
If you want to compile your own summary of Bash syntax and behavior, then those are the best source materials for such an effort.
You can get a list of all reserved words and syntactic elements of bash using this trick:
help -s '*' | cut -d: -f1
Or more accurately:
help -s \* | awk -F ': ' 'NR>2&&!/variables/{print $1}'
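If you are specifically in bash, the compgen builtin offers another programmatic route; each class can be listed separately (a sketch; compgen is a bash builtin, so run it in bash itself):
compgen -k            # reserved words: if, then, do, done, function, ...
compgen -b            # builtin commands: alias, cd, declare, ...
compgen -A function   # shell functions currently defined
compgen -c            # every command name resolvable right now, including executables found via $PATH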
I am writing a shell.
With the execvpe system call, I can run a program and control its environment. What are the minimum values I need to pass through here?
Alternatively, I understand that child processes should have a copy of their parent's environment, possibly with some values added. While testing my shell, I am running it from within bash from within my terminal from within a window manager, etc etc. What are the bare basics that I can assume are in my environment? If I were to run my shell straight from a TTY (the "lowest level", as far as I understand), what can I expect?
That’s a very broad question. To a certain extent, programs should be able to run with no environment at all.
“X” display (i.e., GUI) programs need to know where they are supposed to display. This information is usually provided through the DISPLAY environment variable, but can also be passed on the command line. There are probably other environment variables that are essential (or nearly so) to “X” programs; it’s been a while since I’ve looked under that hood.
Any program that needs to use special characteristics of your terminal needs the TERM environment variable. “Special characteristics” means being able to set colors (as ls and grep can do, subject to options), move around the screen (like vi / vim), or even know the size of the screen (like less). Note that the size of the screen may also be available through LINES and COLUMNS.
Any program that needs to know the date and time as perceived / understood by the user needs to know the time zone (TZ), although if you’re willing to work with absolute time (GMT / UTC) you don’t need this.
Etc.
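As a small illustration of the TZ point (a sketch; the zone name assumes the usual tzdata database is installed):
TZ=UTC date
TZ=America/New_York date    # same instant, rendered in a different zone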
The minimum that you need is a working PATH variable. Any extras beyond that depend on what programs you want to execute.
POSIX has a list of commonly-used environment variables; very few programs use more than a handful of those.
Generally, if you're using execvp*, you're not giving full pathnames for the executables. That makes your programs much simpler: you do not have to provide a full pathname for each executable, as the plain execv requires. POSIX describes these functions as
int execv(const char *path, char *const argv[]);
int execvp(const char *file, char *const argv[]);
and (referring to the parameters of the various exec* functions):
The argument path points to a pathname that identifies the new process image file.
The argument file is used to construct a pathname that identifies the new process image file. If the file argument contains a slash character, the file argument shall be used as the pathname for this file. Otherwise, the path prefix for this file is obtained by a search of the directories passed as the environment variable PATH (see XBD Environment Variables). If this environment variable is not present, the results of the search are implementation-defined.
and (remember that "file" is referring to execvp rather than execv, so the environ variable applies to the search using PATH for the "file" parameter):
For those forms not containing an envp pointer (execl(), execv(), execlp(), and execvp()), the environment for the new process image shall be taken from the external variable environ in the calling process.
So... you could technically remove the entire PATH variable, but the result would be implementation-defined.
The minimum necessary environment is empty. You don't need anything.
e.g.
$ env -i env
$
We can see that env -i has created a blank environment.
We can take this further:
$ env -i /bin/bash
sweh#server:/home/sweh$ env
LS_COLORS=
PWD=/home/sweh
SHLVL=1
_=/usr/bin/env
We can see that bash has set a few variables, but nothing was inherited.
Now such an environment may break some things; e.g. a missing TERM variable means that vi or less may not work properly
$ less foo
WARNING: terminal is not fully functional
foo (press RETURN)
So, really, you need to determine what programs you expect to run inside the environment and what their needs are.
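One practical approach while testing is to start from an empty environment and add back only what the programs you plan to run actually need. A sketch (./mysh is a hypothetical path to the shell under test):
env -i PATH=/usr/bin:/bin TERM="$TERM" HOME="$HOME" ./mysh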
The scala documentation shows that the way to create a scala script is like this:
#!/bin/sh
exec scala "$0" "$@"
!#
/* Script here */
I know that this executes scala with the name of the script file and the arguments passed to it, and that the scala command apparently knows to read a file that starts like this and ignore everything up to the reversed shebang !#
My question is: is there any reason why I should use this (rather verbose) format for a scala script, rather than just:
#!/bin/env scala
/* Script here */
This, as far as I can tell from a quick test, does exactly the same thing, but is less verbose.
How old is the documentation? Usually, this sort of thing (often referred to as 'the exec hack') was recommended before /bin/env was common, and this was the best way to get the functionality. Note that /usr/bin/env is more common than /bin/env, and ought to be used instead.
Note that it's /usr/bin/env, not /bin/env.
There are no benefits to using an intermediate shell instead of /usr/bin/env, except running in some rare antique Unix variants where env isn't in /usr/bin. Well, technically SCO still exists, but does Scala even run there?
However, the advantage of the shell variant is that it gives you an opportunity to tune what is executed, for example to add elements to PATH or CLASSPATH, or to pass options such as -savecompiled to the interpreter (as shown in the manual). This may be why the documentation suggests the shell form.
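For illustration, a sketch of the wrapper form with an option added (assuming the -savecompiled option mentioned above; any PATH or CLASSPATH adjustments could go on their own lines before the exec):
#!/bin/sh
exec scala -savecompiled "$0" "$@"
!#
/* Script here */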
I am not on the Scala development team and I don't know what the historical motivation for the Scala documentation was.
Scala did not always support /usr/bin/env. No particular reason for it, just, I imagine, the person who wrote the shell scripting support was not familiar with that syntax, back in the mid 00's. The documentation followed what was supported, and I added /usr/bin/env support at some point (iirc), but never bothered changing the documentation, it would seem.
Simple question. I'd like to know how to tell whether the current shell is running as an mc subshell or not. If it is, I'd like to enter a degraded mode without some features mc can't handle.
In particular, I'd like this to
Be as portable as possible
Not rely on anything outside the shell and basic universal external commands.
Though it's not documented in the man page, a quick experiment shows that mc sets two environment variables: $MC_TMPDIR and $MC_SID. (It also sets $HISTCONTROL, but that's not specific to mc; it affects the behavior of bash, and could have been set by something other than mc.)
If you don't want to depend on undocumented features, you can always set an environment variable yourself. For example, in bash:
mc() { MC_IS_RUNNING=1 command mc "$@" ; }
Entering a "degraded mode" is another matter; I'm not sure how you'd do that. I don't know of any way in bash to disable specified features. You could disable selected built-in commands by defining functions that override them. What features do you have in mind?
In Python, we can use "import" to import the names of another namespace into the current namespace.
Similarly, is there a notion like "namespace" in existence in UNIX shell scripting at all? If so, then does Cygwin (or an actual UNIX shell) have some command to import names from another namespace to the current namespace, as in Python? Thanks.
Note to the community members with admin privileges: I really think this question IS a programming question rather than a "superuser" question. Please kindly elaborate on why if you disagree. Thanks a lot for your time.
There is no way to do exactly what you are asking for.
The source envFile command and its alternate form . envFile can be very helpful.
The envFile file will just be a list of environment variable assignments.
FrontOfficeSystem=MyFrontOffice
BackOfficeSystem=myBackOffice
When you include the command in your script to 'source' the envFile (any name will work), the shell reads the code as if it were directly in your main shell script, like 'include' in a lot of languages. But namespaces... nope; see the indirect-reference note below.
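For example, a sketch using the envFile shown above:
#!/bin/bash
. ./envFile                   # or: source ./envFile
echo "$FrontOfficeSystem"     # prints MyFrontOffice
echo "$BackOfficeSystem"      # prints myBackOffice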
More helpful: see indirect references in advanced Bash scripting; this is probably better than using eval (per below), but I haven't had the opportunity to work with it.
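A sketch of what indirect expansion looks like in bash (${!name} reads the variable whose name is stored in name, and printf -v assigns to a variable named at run time; both avoid eval):
src=FrontOffice
varname=${src}System
echo "${!varname}"                               # reads FrontOfficeSystem
printf -v "${src}System" '%s has data' "$src"    # FrontOfficeSystem is now "FrontOffice has data"
echo "$FrontOfficeSystem"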
Finally, you may also benefit from eval and variable-name indirection, e.g.:
src=FrontOffice
eval "${src}System='${src} has data'"
src=BackOffice
eval "${src}System='${src} has data'"
Not a great example, but I don't have access to the scripts where I really went to town on this idea. It helped me genericize some code that otherwise would have had to be repeated 10 times, once for each data src (I put the repeating block of code in a for loop, with the src names as the element list for the for(each); the eval would then expand ${src}System as FrontOfficeSystem, BackOfficeSystem). If you wind up with spaces in the values in your src list, then all bets are off.
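A sketch of the loop described above:
for src in FrontOffice BackOffice; do
    eval "${src}System='${src} has data'"
done
echo "$FrontOfficeSystem" "$BackOfficeSystem"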
Use set -vx in your terminal window and copy/paste the above code to see how it works. It might help.
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.