Clear fish shell history permanently across sessions - shell

I am trying to convert some bash dotfiles to their fish equivalents.
I have a ~/.bash_logout that looks like this:
# Flush the in-memory history, then persist the emptiness to disk.
history -c && history -w;
While fish does have a history command, it does not have the same options, so the above will not work as-is in fish.
The closest I have gotten so far is history clear, though this only seems to apply to the current session and is not persisted, as the history comes back in any new tabs I create. Further, it prompts for confirmation and I would prefer not to have to do that every time I log out.
Nonetheless, it is progress, so based on another answer, I put this in ~/.config/fish/config.fish:
function on_exit --on-process %self
history clear
end
This doesn't seem to even prompt me, let alone do anything useful. Or if it does, it's non-blocking and happens so fast that I can't see it.
How can I permanently erase the fish shell history and persist that state for future sessions, within an exit handler? Existing sessions would be nice too, but are less important.
Lastly, the history manpage notes that "builtin history" does not prompt for confirmation. Does it make sense to use that instead, and if so, how?
· clear clears the history file. A prompt is displayed before the history is erased asking you to confirm you really want to clear all history unless builtin history is used.

For me, history clear seems to apply across sessions and persists across restarts. Have you updated to the latest version of fish? I'm using version 2.6.0:
echo $version
To remove the confirmation prompt, you can edit the function directly (funced history), removing lines 188-195 while sparing 191-192. All that function ultimately does, though, is run the command builtin history clear, so calling that directly achieves exactly the same thing minus the confirmation. Simply type:
builtin history clear
(which, again, for me, does seem to go across sessions and is persistent.)
Regarding the event handler: the exit handler doesn't fire because %self expands to the PID of the currently running shell process. Once that shell exits and a new one starts, the handler won't fire, as the new shell's PID is different.
So this function will fire on exiting the currently running shell process:
function bye --on-process-exit %self
echo byeeeeeeee
end
but only for that process, not any subsequently created process. Instead, add the function to your config.fish file, which ensures the event handler is initialised on each startup with the correct, current PID. The config file, if it exists, is located at ~/.config/fish/config.fish. If it doesn't exist, ensure the environment variable XDG_CONFIG_HOME is set, which tells fish where to find the config files. You can set it yourself and export it globally with:
set -gx XDG_CONFIG_HOME ~/.config/
Then create the config file with the event handler:
function bye --on-process-exit %self
builtin history clear
echo Session history scrubbed. Goodbye
end
Done. The only history item remaining will be the command exit used to end the current shell session.
Hope this helps. It seems to work for me, but if you run into problems, I may be able to point you toward some clues.

Related

How to properly write an interactive shell program which can exploit bash's autocompletion mechanism

(Please, help me adjust title and tags.)
When I run connmanctl I get a different prompt,
enrico:~$ connmanctl
connmanctl>
and different commands are available, like services, technologies, connect, ...
I'd like to know how this thing works.
I know that, in general, changing the prompt can be just a matter of changing the variable PS1. However, that alone (read: "the command connmanctl changes PS1 and returns") wouldn't have any effect on the functionality of the command line (I would still be in the same bash process).
Indeed, the fact that the available commands change looks to me like proof that connmanctl is running the whole time the prompt is connmanctl>, and that, upon running connmanctl, a while loop is entered with a read statement in it, followed by a bunch of commands which process the input.
In this latter scenario that I imagine, there's not even need to change PS1, as the connmanctl> line could simply be obtained by echo -n "connmanctl> ".
The reason behind this curiosity is that I'm trying to write a wrapper to connmanctl. I've already written it, and it works as intended, except that I don't know how to properly setup the autocompletion feature, and I think that in order to do so I first need to understand what is the right way to write an interactive shell script.
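The read-loop structure guessed at above can be sketched in bash. This is only an illustration of the pattern, not connmanctl's actual source; the command names (status, quit) are made up:

```shell
#!/bin/bash
# Minimal interactive read loop with a custom prompt.
# -e uses readline for line editing when input is a terminal;
# -p prints the prompt (again, only when input is a terminal).
while IFS= read -r -e -p 'connmanctl> ' line; do
    set -- $line              # split the input into words
    case "$1" in
        status)    echo "everything is fine" ;;
        quit|exit) break ;;
        '')        ;;          # empty input: just reprompt
        *)         echo "unknown command: $1" ;;
    esac
done
```

Note that read -e only gives you readline's default (filename) completion; wiring bash's programmable completion into a read loop like this is much harder, which may be exactly the difficulty the asker ran into.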

Understanding `history` for bash

Yesterday I ssh-d into my RemoteFS and performed some commands.
Today they do not appear in the history. Frustrating, as it took some time to look at the commands I used.
I frequently get this problem. I suspect that using the same login for multiple simultaneous terminal sessions may result in a separate history for each. And changing the user (e.g. elevating to superuser) opens up a different set of histories.
Could someone explain scientifically the life-cycle of history? When does a new one get created? How to access/view all the existing ones? And under what circumstances does history get destroyed? Do they ever get amalgamated?
Depends on variable settings, but by default there is only one history file per user, not per terminal session.
The history is nowadays held in an in-memory buffer and only written out to the history file when the buffer is full or on logout. Therefore multiple terminal sessions under the same user can overwrite each other's history. The history system is not really suitable for multiple sessions under the same user id.
If you want to keep sessions separate, modify variable HISTFILE.
It might seem neat to set:
HISTFILE="$HOME/.bash_history$$"
where $$ gives the current PID. While this gives each terminal session its own history, it quickly becomes a maintenance nightmare with all those history files floating around.
There are other variables that control history, see man bash for a description. You can also:
set | grep '^HIST'
which might be instructive.
Be careful editing the history file with a text editor. Although it is plain text, it can contain timestamp comment lines (when HISTTIMEFORMAT is set) and multi-line entries, and careless edits can confuse bash's history handling.
When does a new one get created? First time a history filename is used.
How to access/view all the existing ones? Depends what name you have given them.
And under what circumstances does history get destroyed? When the size limits are exceeded: HISTSIZE (default 500 lines) caps the in-memory list, and HISTFILESIZE caps the file, so only that many lines are retained. Remember that the file itself is overwritten only when the in-memory buffer is flushed, or on logout. However we do have the histappend option:
shopt -s histappend
which will append the session's history rather than overwrite. Be careful using this; you could end up with a huge history file.
Do they ever get amalgamated? No, not unless you write a script to do it, or you set histappend.
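A common way to get that amalgamation is a ~/.bashrc fragment like the following; this is a sketch, and the size limits are arbitrary choices, not defaults:

```shell
# ~/.bashrc fragment: append to the history file instead of
# overwriting it, and flush each command to disk as soon as it
# is entered, so parallel sessions stop clobbering each other.
shopt -s histappend
HISTSIZE=10000
HISTFILESIZE=20000
PROMPT_COMMAND='history -a'
```

With history -a running before every prompt, each session appends its new lines immediately rather than waiting for logout.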

Save current shell history and recover it afterwards

Sometimes I work very intensively using a certain set of commands in the bash shell and then I spend some days working in a very different one. The problem is that by the time I want to keep going with the first set of tasks the shell history has been totally overwritten by the last one and I spend some time figuring out what I was exactly doing (I lack long-term memory).
I usually keep a journal so I can go back and continue where I stopped the last time, but I was wondering if there exists an easy way to do some "time travelling" in my shell session by saving the recent shell history (say, the last 100 commands) to a file and using this file to overwrite the history in the future.
So the history command actually has options for reading/writing files. You can write a history file with history -w my_history.txt and then import a history file by typing history -r my_history.txt.
Note: be careful about simply writing the output of the history command to a file with history > my_history.txt and then importing it, because that output includes the command numbers, which would then end up in your history.
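To save just the last 100 commands, as asked, one option is to strip those numbers before saving. This is a sketch (the file name is arbitrary, and it assumes HISTTIMEFORMAT is unset, so history prints only numbers and commands):

```shell
# Save the last 100 commands without the leading numbers,
# then later read them back into a new session.
history | tail -n 100 | sed 's/^ *[0-9]* *//' > ~/task1.hist
history -r ~/task1.hist
```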

How to export a shell variable to all sessions?

I would like to know is there a way to export my shell variable to all sessions in the system (not only the current session). I'm not looking to set it in .bashrc file as the shell variable is a dynamic one it changes time to time.
You can set up your sessions to keep rereading a file on disk by setting a trap on DEBUG in your .bashrc:
trap 'source ~/.myvars' DEBUG
If you leave a terminal A open, run echo VAR=42 >> ~/.myvars in terminal B, then switch back to terminal A and echo $VAR, it'll "magically" be set.
You seem to misunderstand what export does. All it does is to move a local variable into the environment block within the process (/proc/$$/environ).
When a new process is created (a fork) then the program data areas, including the environment block, are copied to the new process (actually they are initially shared, then copied when one writes). When a different program is run (execve), by default the environment block remains from the previous program. See also the env(1) program.
So environment variables are normally inherited (copied) from their parent process. The only way to get a new environment variable into a running process is to use some sort of inoculation technique, as a debugger would do. Writing such a program is not an easy task, and I'm sure you could imagine the security implications.
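The copy-on-fork behaviour described above is easy to demonstrate: a child sees the exported variable, but a change made in the parent afterwards never reaches a child that is already running.

```shell
# The child process inherits a copy of the exported variable.
export FOO=bar
bash -c 'echo "$FOO"'    # prints bar

# Changing FOO now only affects this shell and *future* children;
# it cannot reach into any process started earlier.
FOO=baz
```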
You can't. A better explanation can be found in the unix stackexchange section here!
A shell variable probably is not suited for the use you are trying to achieve. Maybe you want to use files instead.

Screen Bash Scripting for Cronjob

Hi I am trying to use screen as part of a cronjob.
Currently I have the following command:
screen -fa -d -m -S mapper /home/user/cron
Is there any way I can make this command do nothing if the screen mapper already exists? The mapper is set on a half-hour cronjob, but sometimes the mapping takes more than half an hour to complete, so they end up overlapping, slowing each other down and sometimes even causing the next one to be slow, and so I end up with lots of mapper screens running.
Thanks for your time,
ls /var/run/screen/S-"$USER"/*.mapper >/dev/null 2>&1 || screen -S mapper ...
This will check if any screen sessions named mapper exist for the current user, and only if none do, will launch the new one.
Why would you want a job run by cron, which (by definition) does not have a terminal attached to it, to do anything with screen? According to Wikipedia, 'GNU Screen is a software application which can be used to multiplex several virtual consoles, allowing a user to access multiple separate terminal sessions'.
However, assuming there is some reason for doing it, then you probably need to create a lock file which the process checks before proceeding. At this point, you need to run a shell script from the cron entry (which is usually a good technique anyway), and the shell script can check whether the previous run of the task has completed, exiting if not. If the previous incarnation is complete, then the current incarnation creates a lock file containing its PID and runs the job. When it completes, it removes the lock file. You should review the shell trap command, and make sure that the lock file is removed if the shell script exits as a result of a trappable signal (you can't trap KILL and some process-control signals).
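A sketch of that lock-file wrapper, to be run from the cron entry instead of the job directly; the lock path and the job command are assumptions for illustration:

```shell
#!/bin/bash
# Wrapper that refuses to start while a previous run is alive.
LOCKFILE=/tmp/mapper.lock

# If the lock exists and the PID recorded in it is still running,
# a previous incarnation is active: exit quietly.
if [ -e "$LOCKFILE" ] && kill -0 "$(cat "$LOCKFILE")" 2>/dev/null; then
    exit 0
fi

echo $$ > "$LOCKFILE"
# Remove the lock on normal exit and on trappable signals
# (KILL, as noted above, cannot be trapped).
trap 'rm -f "$LOCKFILE"' EXIT HUP INT TERM

/home/user/cron        # the actual mapping job
```

The kill -0 check matters: a plain existence test would deadlock forever if a previous run died without cleaning up its lock file.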
Judging from another answer, the screen program already creates lock files; you may not have to do anything special to create them - but will need to detect whether they exist. Also check the GNU manual for screen.
