Understanding `history` for bash

Yesterday I ssh'd into my RemoteFS and ran some commands.
Today they do not appear in the history. Frustrating, as it took some time to work out the commands I used.
I frequently hit this problem. I suspect that using the same login for multiple simultaneous terminal sessions may result in a separate history for each, and that changing the user (e.g. elevating to superuser) opens up a different set of histories.
Could someone explain scientifically the life-cycle of history? When does a new one get created? How to access/view all the existing ones? And under what circumstances does history get destroyed? Do they ever get amalgamated?

It depends on variable settings, but by default there is only one history file per user, not per terminal session.
While a session is running, its history is held in memory and is only written out to the history file when the shell exits. Because each exiting shell overwrites the file by default, multiple terminal sessions under the same user can clobber each other's history. The history mechanism is not really designed for multiple simultaneous sessions under the same user id.
If you want to keep sessions separate, point the HISTFILE variable at a different file in each session.
It might seem neat to set:
HISTFILE="$HOME/.bash_history$$"
where $$ gives the current PID. While this gives each terminal session its own history, it quickly becomes a maintenance nightmare with all those history files floating around.
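If you do go the per-PID route, the stray files eventually need pruning. A purely hypothetical cleanup, assuming the per-PID HISTFILE naming above: merge per-session files older than a week into the main history, then delete them.

```shell
# Merge week-old per-session history files (named .bash_historyPID as above)
# into the main ~/.bash_history, then remove them. Illustrative only.
find "$HOME" -maxdepth 1 -name '.bash_history[0-9]*' -mtime +7 \
  -exec sh -c 'cat "$1" >> "$HOME/.bash_history" && rm "$1"' _ {} \;
```

Run it from cron or a login script; adjust the `-mtime` window to taste.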
There are other variables that control history; see man bash for descriptions. You can also run:
set | grep '^HIST'
which might be instructive.
Be careful if you edit the history file with a text editor. Although it is plain text, it may contain timestamp comment lines (when HISTTIMEFORMAT is set) and specially encoded multi-line entries, and a careless edit can easily trash it.
When does a new one get created? The first time a given history filename is used.
How to access/view all the existing ones? That depends on what names you have given them.
And under what circumstances does history get destroyed? The in-memory list keeps only the most recent HISTSIZE entries (default 500 lines), and the file is truncated to HISTFILESIZE lines when it is written. Remember that the file itself is overwritten when the shell exits, each exiting session replacing it wholesale. However we do have the histappend option:
shopt -s histappend
which makes each session append to the file rather than overwrite it. Be careful using this: you could end up with a huge history file.
Do they ever get amalgamated? No, not unless you write a script to do it, or you set histappend.
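If the goal is to merge concurrent sessions rather than keep them separate, a common approach is histappend combined with a PROMPT_COMMAND that flushes and re-reads the file after every command. A sketch of typical ~/.bashrc settings (the sizes are arbitrary choices, not defaults):

```shell
# ~/.bashrc sketch: share one merged history across simultaneous sessions.
shopt -s histappend                  # append on exit instead of overwriting
HISTSIZE=10000                       # keep more than the 500-line default
HISTFILESIZE=20000
# -a: append this session's new lines to the file
# -c: clear the in-memory list;  -r: re-read the merged file
PROMPT_COMMAND='history -a; history -c; history -r'
```

The trade-off is that every prompt touches the history file and commands from all sessions interleave in one list.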

Related

What is the mysterious relationship between `fchown()` and `flock()`?

Reading the man pages for fchown, I find this statement:
The fchown() system call is particularly useful when used in conjunction with the file locking primitives (see flock(2)).
Here's the mystery: the man page for flock(2) makes no mention of fchown or even how ownership affects it in general.
So can anyone explain what happens when fchown and flock are used together, and why it's so "useful"?
I'm developing for macOS (Darwin), but I find the same statement (and lack of an explanation) in Linux, BSD, POSIX, and virtually every other *NIX man page I've searched.
Backstory ('cause every great villain has a backstory):
I have a set-UID helper process that gets executed as root, but spends much of its time running as user. While it's running as user, the files it creates belong to user. Good so far.
However, occasionally it needs to create files while running as root. When this happens, the files belong to root and I want them to belong to user. So my plan was to create+open the file, then call fchown() to change the ownership back to user.
But a few of these files are shared and I use flock() to block concurrent access to the file and now I'm wondering what will happen to my flocks.

Clear fish shell history permanently across sessions

I am trying to convert some bash dotfiles to their fish equivalents.
I have a ~/.bash_logout that looks like this:
# Flush the in-memory history, then persist the emptiness to disk.
history -c && history -w;
While fish does have a history command, it does not have the same options, so the above will not work as-is in fish.
The closest I have gotten so far is history clear, though this only seems to apply to the current session and is not persisted, as the history comes back in any new tabs I create. Further, it prompts for confirmation and I would prefer not to have to do that every time I log out.
Nonetheless, it is progress, so based on another answer, I put this in ~/.config/fish/config.fish:
function on_exit --on-process-exit %self
history clear
end
This doesn't seem to even prompt me, let alone do anything useful. Or if it does, it's non-blocking and happens so fast that I can't see it.
How can I permanently erase the fish shell history and persist that state for future sessions, within an exit handler? Existing sessions would be nice too, but are less important.
Lastly, the history manpages notes that the "builtin history" does not prompt for confirmation. Does it make sense to use that instead and if so, how?
clear: clears the history file. A prompt is displayed before the history is erased, asking you to confirm you really want to clear all history, unless builtin history is used.
For me, history clear does seem to apply across sessions and persist through startup. Have you updated to the latest version of fish? I'm using version 2.6.0; you can check yours with:
echo $version
To remove the confirmation prompt, you can edit the function directly (funced history), removing lines 188-195 but sparing 191-192. All that block ultimately does, though, is run the command builtin history clear, so you can achieve exactly the same thing minus the confirmation by simply typing:
builtin history clear
(which, again, for me, does seem to go across sessions and is persistent.)
Regarding the event handler, the reason the exit handler doesn’t fire is because %self gets expanded into the PID of the currently running shell process. However, once that exits and a new one starts, the event handler won’t fire as the PID will be different.
So this function will fire on exiting the currently running shell process:
function bye --on-process-exit %self
echo byeeeeeeee
end
but only for that process, not for any subsequently created one. Instead, add the function to your config.fish file, which ensures the event handler is initialised on each startup with the correct, current PID. The config file, if it exists, is located at ~/.config/fish/config.fish. If it doesn't exist, ensure the environment variable XDG_CONFIG_HOME is set, which tells fish where to find its config files. You can set it yourself and export it globally with:
set -gx XDG_CONFIG_HOME ~/.config/
Then create the config file with the event handler:
function bye --on-process-exit %self
builtin history clear
echo Session history scrubbed. Goodbye
end
Done. The only history item remaining will be the exit command used to end the current shell session.
Hope this helps. It seems to work for me, but if you run into problems, I might be able to point you toward some clues.

Save current shell history and recover it afterwards

Sometimes I work very intensively with a certain set of commands in the bash shell, and then spend some days working with a very different set. The problem is that by the time I want to keep going with the first set of tasks, the shell history has been totally overwritten by the second, and I spend some time figuring out what exactly I was doing (I lack long-term memory).
I usually keep a journal so I can go back and continue where I stopped the last time, but I was wondering if there exists an easy way to do some "time travelling" in my shell session by saving the recent shell history (say, the last 100 commands) to a file and using this file to overwrite the history in the future.
The history command actually has options for reading and writing files. You can write the current history to a file with history -w my_history.txt and later import it with history -r my_history.txt.
Note: be careful about simply redirecting the output of history to a file (history > my_history.txt) and importing that, because the output also includes the command numbers.
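A runnable sketch of the round trip (the filename my_history.txt and the two sample commands are made up; set -o history is needed because history is disabled in non-interactive scripts):

```shell
set -o history                    # scripts have history off by default
history -s 'make test'            # -s: push a line onto the history list,
history -s 'git push origin main' #     pretending it was typed earlier
history -w my_history.txt         # save the in-memory history to a file
history -c                        # wipe the current session's history
history -r my_history.txt         # ...later: read the saved commands back in
```

Interactively, you would skip the set -o history and history -s lines and just use -w and -r.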

Bash: how to duplicate input/output from interactive scripts only in complete lines?

How can I capture the input/ output from a script in realtime (such as with tee), but line-by-line instead of character-by-character? My goal is to capture the input typed into the interactive prompts of a script only after backspaces and auto-completion have finished processing (after the RETURN key is hit).
Specifically, I am trying to create a wrapper script for ssh that creates a timestamped log of commands used on remote servers. The script, which uses tee to redirect the output for filtering, works well, but the redirected output gets jumbled with unsubmitted characters whenever I use the backspace key or the up/down keys to scroll through my remote history. For example: service test stopexitservice test stopart or cd ..logs[1Pls -al.
Perhaps there is a way to capture the terminal's scrollback and redirect that like with tee?
Update: I have found a character-based cleanup solution that does what I want most of the time. However, I am still hoping for an answer to this question (which may well be msw's answer that it is very difficult to do).
In the Unix world there are two primary modes of handling keyboard input. The first is known as 'raw', in which characters are passed from the terminal to the reading program one at a time. This is the mode that editors (and the like) use, because the editor needs to respond immediately when you press a key.
The other terminal discipline is called 'cooked', which is the line-by-line behaviour you think of as bash input: you get to backspace, and the command is not executed until you press return. Ssh has to take your input in raw, character-by-character mode because it has no idea what is running on the other side. For example, if you are running an editor on the far side, it can't wait for a return before sending the key-press. So, as some have suggested, grabbing the shell history on the far side is the only reasonable way to get a command-by-command record of the bash commands you typed.
I oversimplified for clarity; actually most installations of bash take input in raw mode, because they allow editor-like command editing. For example, Ctrl-P scrolls up through the command history and Ctrl-A goes to the beginning of the line, and bash needs to see those keys the moment they are typed, not after a return.
This is another reason that capturing on the local side is obnoxiously difficult: the stream will be filled with backspaces and all of bash's editing commands. To get a true transcript of what the remote shell actually executed, you would have to parse the character stream as if you were the remote shell. There is also a problem if you run something like
vi /some_file/which_is_on_the_remote/machine
the input stream to the local ssh will be filled with movement commands, snippets of text, backspaces and so on, and it would be bloody difficult to figure out what is part of a bash command and what is you talking to the editor.
Few things involving computers are impossible; getting clean input from the local side of an ssh invocation is really, really hard.
I question the actual utility of recording the commands that you execute on a local or remote machine. The reason is that there is so much state which is not visible from a command log. As a simple example here's a log of two commands:
17:00$ cp important_file important_file.bak
17:15$ rm important_file
and two days later you are trying to figure out whether important_file.bak holds the contents you intended. Given that log, you can't answer that simple question. Even if you had the sequence
16:58$ cat important_file
17:00$ cp important_file important_file.bak
17:15$ rm important_file
If you aren't capturing the output, the cat in the log will not tell you anything. Give me almost any command sequence and I can envision a scenario in which it will not give you the information you need to make sense of what was done.
For a very similar purpose I use GNU screen, which offers an option to record everything you do in a shell session (input and output). The log it creates also contains unwanted control characters, but I clean them out with perl:
perl -ne 's/\x1b[[()=][;?0-9]*[0-9A-Za-z]?//g;s/\r//g;s/\007//g;print' < screenlog.0
I hope this helps.
Some features of screen:
http://speaking-my-language.blogspot.com/2010/09/top-5-underused-gnu-screen-features.html
The site where I found the perl one-liner:
https://superuser.com/questions/99128/removing-the-escape-characters-from-gnu-screens-screenlog-n

Adding commands in shell scripts to history?

I notice that the commands I have in my shell scripts never get added to the history list. Understandably, most people wouldn't want it to be, but for someone who does, is there a way to do it?
Thanks.
Edit:
Sorry about the extremely late reply.
I have a script that conveniently combines some statements that ultimately result in lynx opening a document. That document is in a dir several directories below the current one.
Now, I usually end up closing lynx to open another document in the current dir and need to keep switching back and forth between the two. I could do this by having another window open, but since I'm mostly on telnet, and the switches aren't too frequent, I don't want to do that.
So, in order to get back to lynx from the other document, I end up having to re-type the lynx command, with the (long) path/filename. In this case, of course, lynx isn't stored in the command history.
This is what I want added to the history, so that I can get back to it easily.
Call it laziness, but hey, if it teaches me a new command....
Cheers.
As @tripleee pointed out, which problem are you actually trying to solve? It's perfectly possible to include any shell code in the history, but above some level of complexity it's much better to keep it in a separate shell script.
If you want to keep multi-line commands in the history as they are, you might want to try out shopt -s lithist, but that means searching through history will only return one line at a time.
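For the specific lynx case, bash's history -s pushes a string onto the history list without executing it, which is one way to make the long command recallable. A sketch (the path is invented; this has to run in the interactive shell itself, e.g. via a sourced function or alias, since a child script cannot modify its parent's history):

```shell
set -o history                 # needed in scripts; interactive shells have it on
# Record the command in the history list as if it had been typed:
history -s 'lynx ~/projects/docs/deep/nested/path/manual.html'
history | tail -n 1            # the injected entry is now recallable with Up-arrow
```

Wrapping the history -s plus the real lynx invocation in a sourced shell function would record and run the command in one step.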
