Sometimes I work very intensively with a certain set of commands in the bash shell, and then spend some days working with a very different set. The problem is that by the time I want to pick the first set of tasks back up, the shell history has been completely overwritten by the second, and I spend some time figuring out exactly what I was doing (I lack long-term memory).
I usually keep a journal so I can go back and continue where I stopped the last time, but I was wondering if there exists an easy way to do some "time travelling" in my shell session by saving the recent shell history (say, the last 100 commands) to a file and using this file to overwrite the history in the future.
So the history command actually has options for reading/writing files. You can write a history file with history -w my_history.txt and then import a history file by typing history -r my_history.txt.
Note: be careful about simply redirecting the output of the history command to a file with history > my_history.txt and then importing that, because the output of history includes the command numbers, and those would be written into your history as well.
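For the "last 100 commands" case specifically, here is a small sketch (the filename is just an example): fc can list recent history without the numbers, so the file can be read back cleanly:

fc -ln -100 > my_history.txt    # save the last 100 commands, without line numbers

history -r my_history.txt       # later, in another session: read them back in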
Related
Yesterday I ssh'd into my RemoteFS and ran some commands.
Today they do not appear in the history. Frustrating, as it took some time to work out the commands I used.
I frequently get this problem. I suspect that using the same login for multiple simultaneous terminal sessions may result in a separate history for each, and that changing the user (e.g. elevating to superuser) opens up a different set of histories.
Could someone explain scientifically the life-cycle of history? When does a new one get created? How to access/view all the existing ones? And under what circumstances does history get destroyed? Do they ever get amalgamated?
Depends on variable settings, but by default there is only one history file per user, not per terminal session.
The history is nowadays held in an in-memory buffer and only written out to the history file when the buffer is full or on logout. Therefore multiple terminal sessions under the same user can overwrite each other's history. The history system is not really suitable for multiple sessions under the same user id.
If you want to keep sessions separate, change the HISTFILE variable.
It might seem neat to set:
HISTFILE="$HOME/.bash_history$$"
where $$ gives the current PID. While this gives each terminal session its own history, it quickly becomes a maintenance nightmare with all those history files floating around.
There are other variables that control history, see man bash for a description. You can also:
set | grep '^HIST'
which might be instructive.
Don't be tempted to edit the history file with a text editor while any session is still open: the in-memory buffer will be written out over your changes. (In some shells, such as ksh, the history file also contains non-text fields and is easily trashed by an editor.)
When does a new one get created? First time a history filename is used.
How to access/view all the existing ones? Depends what name you have given them.
And under what circumstances does history get destroyed? When HISTSIZE is exceeded (the default is 500 lines): only the most recent HISTSIZE lines are kept in memory, and HISTFILESIZE similarly limits the file itself. Remember that the file is overwritten only when the in-memory buffer is flushed, or on logout. However, we do have the histappend option:
shopt -s histappend
which will append the session's history rather than overwrite the file. Be careful using this; you could end up with a huge history file.
Do they ever get amalgamated? No, not unless you write a script to do it, or you set histappend.
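If you did adopt the per-PID HISTFILE scheme above, a merge could be a rough one-off along these lines (a sketch, assuming the naming shown earlier):

cat "$HOME"/.bash_history[0-9]* >> "$HOME/.bash_history"   # concatenate the per-PID files
rm "$HOME"/.bash_history[0-9]*                             # then remove them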
I notice that the commands in my shell scripts never get added to the history list. Understandably, most people wouldn't want them to be, but for someone who does, is there a way to do it?
Thanks.
Edit:
Sorry about the extremely late reply.
I have a script that conveniently combines some statements that ultimately result in lynx opening a document. That document is in a dir several directories below the current one.
Now, I usually end up closing lynx to open another document in the current dir and need to keep switching back and forth between the two. I could do this by having another window open, but since I'm mostly on telnet, and the switches aren't too frequent, I don't want to do that.
So, in order to get back to lynx from the other document, I end up having to re-type the lynx command, with the (long) path/filename. In this case, of course, lynx isn't stored in the command history.
This is what I want added to the history, so that I can get back to it easily.
Call it laziness, but hey, if it teaches me a new command....
Cheers.
As @tripleee pointed out, which problem are you actually trying to solve? It's perfectly possible to include any shell code in the history, but above some level of complexity it's much better to keep it in a separate shell script.
If you want to keep multi-line commands in the history as they are, you might want to try out shopt -s lithist, but that means searching through history will only return one line at a time.
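If the goal is simply to get that one long lynx command back into your interactive history, another option is a shell function defined in ~/.bashrc (it must live in the interactive shell itself, since a child script cannot write to its parent's history). This is only a sketch; the function name view is my own invention:

view() {
    history -s "lynx \"$1\""   # store the exact command as a new history entry
    lynx "$1"
}

After view /long/path/to/document, pressing up-arrow gives you the full lynx command, ready to re-run.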
Hi, I am trying to use screen as part of a cron job.
Currently I have the following command:
screen -fa -d -m -S mapper /home/user/cron
Is there any way I can make this command do nothing if the mapper screen already exists? The mapper runs from a half-hour cron job, but sometimes the mapping takes more than half an hour to complete, so the runs end up overlapping, slowing each other down and sometimes even making the next run slow, and I end up with lots of mapper screens running.
Thanks for your time,
ls /var/run/screen/S-"$USER"/*.mapper >/dev/null 2>&1 || screen -S mapper ...
This will check if any screen sessions named mapper exist for the current user, and only if none do, will launch the new one.
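An equivalent test that doesn't depend on the socket directory location (assuming a screen version with -ls, which virtually all have) is to parse the session list:

screen -ls | grep -q '\.mapper[[:space:]]' || screen -fa -d -m -S mapper /home/user/cron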
Why would you want a job run by cron, which (by definition) does not have a terminal attached to it, to do anything with screen? According to Wikipedia, 'GNU Screen is a software application which can be used to multiplex several virtual consoles, allowing a user to access multiple separate terminal sessions'.
However, assuming there is some reason for doing it, then you probably need to create a lock file which the process checks before proceeding. At this point, you need to run a shell script from the cron entry (which is usually a good technique anyway), and the shell script can check whether the previous run of the task has completed, exiting if not. If the previous incarnation is complete, then the current incarnation creates a lock file containing its PID and runs the job. When it completes, it removes the lock file. You should review the shell trap command, and make sure that the lock file is removed if the shell script exits as a result of a trappable signal (you can't trap KILL and some process-control signals).
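Here is a minimal sketch of that lock-file pattern (paths and names are illustrative, and the check is not race-free, though that rarely matters for a half-hourly cron job). Note that the job runs in the foreground of the wrapper, so the lock lives exactly as long as the work:

#!/bin/bash
LOCK=/var/tmp/mapper.pid
if [ -f "$LOCK" ] && kill -0 "$(cat "$LOCK")" 2>/dev/null; then
    exit 0                                # previous incarnation still running
fi
echo $$ > "$LOCK"
trap 'rm -f "$LOCK"' EXIT HUP INT TERM    # remove the lock on exit or trappable signals
/home/user/cron                           # the actual work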
Judging from another answer, the screen program already creates lock files; you may not have to do anything special to create them - but will need to detect whether they exist. Also check the GNU manual for screen.
I want to upgrade my file-management productivity by replacing a 2-panel file manager with the command line (bash or cygwin). Can the command line give the same speed? Please advise a guru way of how to do, e.g., a copy of some file in directory A to directory B. Is it heavy use of pushd/popd? Or creation of links to the most often used directories? What are the best practices and day-to-day routine for managing files like a command-line master?
Can the command line give the same speed?
My experience is that command-line copying is significantly faster (especially in the Windows environment). Of course the basic laws of physics still apply: a file that is 1000 times bigger than one that copies in 1 second will still take about 1000 seconds to copy.
..(how to) copy of some file in directory A to directory B.
Because I often have 5-10 projects that use similar directory structures, I set up variables for each subdir using a naming convention:
project=NewMatch
NM_scripts=${project}/scripts
NM_data=${project}/data
NM_logs=${project}/logs
NM_cfg=${project}/cfg
proj2=AlternateMatch
altM_scripts=${proj2}/scripts
altM_data=${proj2}/data
altM_logs=${proj2}/logs
altM_cfg=${proj2}/cfg
You can make this sort of thing as spartan or baroque as needed to match your theory of living/programming.
Then you can easily copy the cfg files from one project to another:
cp -p $NM_cfg/*.cfg ${altM_cfg}
Is it heavy use of pushd/popd?
Some people seem to really like that. You can try it and see what you think.
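For reference, the basic rhythm looks like this (paths illustrative):

pushd /path/to/B          # go to B, remembering where you came from
cp /path/to/A/file.txt .
popd                      # jump straight back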
Or creation of links to the most often used directories?
Links to dirs are, in my experience, used more in software development, where source code expects a certain set of dir names and your installation has different ones; making links to supply the expected dir paths is helpful there. For production data, a link is just one more thing that can get messed up or blow up. That's not always true, and maybe you'll have a really good reason to use links, but I wouldn't start out that way just because it is possible.
What are the best practices and day-to-day routine for managing files like a command-line master?
(Per the above, use a standardized directory structure for all projects, and have scripts save any small files to a directory your dept keeps under /tmp, e.g. /tmp/MyDeptsTmpFile, named to fit your local conventions.)
It depends. If you're talking about data and logfiles, dated filenames can save you a lot of time. I recommend date formats like YYYYMMDD(_HHMMSS) if you need the extra resolution.
Dated logfiles are very handy: when a current process seems to be taking a long time, you can look at the log file from a week, a month, or six months ago (up to however much space you can afford) and quantify exactly how long the process used to take. Logfiles should also capture all STDERR messages, so you never have to re-run a bombed program just to see what the error message was.
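A typical invocation, as a sketch (the job name and path are illustrative):

log=/tmp/myjob_$(date +%Y%m%d_%H%M%S).log
./myjob > "$log" 2>&1     # capture STDOUT and STDERR in a dated file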
This is Linux/Unix you're using, right? Read the man page for the cp command installed on your machine. I recommend an alias like alias CP='/bin/cp -pi' so you always copy a file with the same permissions and the original file's timestamp. Then it is easy to use /bin/ls -ltr to see a sorted list of files, with the most recent at the bottom (no need to scroll back to the top when you sort by time, reversed). Also, the '-i' option will warn you before you overwrite a file, and this has saved me more than a couple of times.
I hope this helps.
P.S. as you appear to be a new user, if you get an answer that helps you please remember to mark it as accepted, and/or give it a + (or -) as a useful answer.
I'm running a minecraft server on my linux box in a detached screen session. I'm not very fond of screen and would like to be able to constantly pipe the output of the server to a file (like a pipe) and pipe some input from a file to the server (so that I can send input to and read output from the server with remote programs, like a python script). I'm not very experienced in bash, so could somebody tell me how to do this?
Thanks, NikitaUtiu.
It's not clear if you need screen at all. I don't know the minecraft server, but generally for server software, you can run it from a crontab entry and redirect output to log files.
Assuming your server kills itself at midnight Sunday night (we can discuss changing this if restarting once per week is too little or too much, or if you require ad-hoc restarts), here is a crontab entry giving a basic idea of what to do; it starts the server each Monday at 1 minute after midnight.
01 00 * * 1 dtTm=`/bin/date +\%Y\%m\%d.\%H\%M\%S`; export dtTm; { /usr/bin/mineserver -o ..... your_options_to_run_mineserver_here ... ; } > /tmp/mineserver_trace_log.${dtTm} 2>&1
Consult your man page for crontab to confirm that day-of-week ranges are 0-6 (0 = Sunday), and change the day-of-week value if 0 != Sunday on your system.
Normally I would break the code up so it is easier to read, but crontab entries have to be all on one line (with some weird exceptions), and there is usually a limit of 1024b-8K on how long the line can be. Note that the ';' just before the closing '}' is super-critical: if it is left out, you'll get undecipherable error messages, or no error messages at all.
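One way to sidestep the one-line limit (my suggestion, not part of the entry above) is to move the body into a script and have cron call that instead; inside a script the % characters no longer need escaping:

#!/bin/bash
# /home/user/bin/run_mineserver.sh (illustrative path); the crontab entry becomes:
#   01 00 * * 1 /home/user/bin/run_mineserver.sh
dtTm=$(/bin/date +%Y%m%d.%H%M%S)
/usr/bin/mineserver -o ..... your_options_to_run_mineserver_here ... > "/tmp/mineserver_trace_log.${dtTm}" 2>&1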
Basically, you're redirecting any output into a file (including std-err output). Now you can do a lot of stuff with the output, use more or less to look at the file, grep ERR ${logFile}, write scripts that grep for error messages and then send you emails that errors have been found, etc, etc.
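You also asked about feeding input to the server. One common technique (a sketch, not minecraft-specific; paths are illustrative) is a named pipe; the dummy writer holds the pipe open so the server never sees end-of-file on its stdin:

mkfifo /tmp/mineserver.in
sleep infinity > /tmp/mineserver.in &      # hold the pipe open (GNU sleep)
/usr/bin/mineserver < /tmp/mineserver.in >> /tmp/mineserver.log 2>&1 &

echo "some server command" > /tmp/mineserver.in   # from any other shell or script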
You may have some sys-admin work on your hands to get the mineserver user so it can run crontab entries. Also if you're not comfortable using the vi or emacs editors, creating a crontab file may require help from others. Post to superuser.com to get answers for problems you have with linux admin issues.
Finally, there are two points I'd like to make about dated logfiles.
Good:
a. If your app dies, you never have to re-run it just to capture the output and figure out why something stopped working. For long-running programs this can save you a lot of time.
b. Keeping dated files gives you the ability to prove to yourself, your boss, or others that it used to work just fine: see, here are the log files.
c. Keeping the log files, assuming there is useful information in them, gives you the opportunity to mine those files for facts, i.e. the program used to take 1 sec for processing, now it is taking 1 hr, etc.
Bad:
a. You'll need to set up a mechanism to sweep old log files; otherwise at some point everything will grind to a halt, and when you finally figure out what the problem was, you'll discover that /tmp (or whatever dir you chose to use) is completely full.
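The sweep itself can be a daily cron job as simple as this (a sketch; -delete is GNU find, and the retention period is up to you):

find /tmp -name 'mineserver_trace_log.*' -mtime +30 -delete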
There is a self-maintaining solution to using dates on the logfiles I can tell you about if you find this approach useful. It will take a little explaining, so I don't want to spend the time writing it up if you don't find the crontab solution useful.
I hope this helps!