I've got these absolutely delicious bash scripts in my .bash_profile that make working with git on the command line genuinely pleasant.
source ~/dev/git-completion.bash
source ~/dev/git-flow-completion.bash
Only problem is that they require a lot of disk IO (and some CPU) to work. Every time I cd into a git repo directory (on an uncached disk) there's an annoying delay that can sometimes last several seconds.
9/10 times I don't need the info in the prompt immediately. Often I just want to start a terminal, do some stuff and close it.
Would it be possible to make it run as a background task, i.e. asynchronously? That way the heavy blocking IO work could be done whilst I'm doing something else. If I need it immediately after opening a terminal I'm happy to wait, like I have to today.
A dream would be something like this:
source --async ~/dev/git-completion.bash
source --async ~/dev/git-flow-completion.bash
What do the scripts do? Do they set up environment variables, or do they just do some on-disk work that's environment-independent?
If the former, then you're probably out of luck: I don't believe it is possible to run a script asynchronously and have it affect the current environment. If the latter, have you tried just running ~/dev/git-completion.bash & ?
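To make that distinction concrete, here is a short sketch (assuming the completion scripts work by defining shell functions and completions):
# running the script in the background executes it in a child process; any
# functions, variables or completions it defines vanish when that child
# exits, so it cannot set anything up for the current shell
~/dev/git-completion.bash &
# 'source' runs it in the current shell, so it can define completions here,
# but then it necessarily blocks this shell while it reads from disk
source ~/dev/git-completion.bash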
Related
I have a server on which we execute multiple bash scripts to automate tasks (like copying files to other servers, kicking off backups, etc). It has been working for some months, but today it started to get erratic.
What is happening is that the script gets 'stuck' for a while, and after that it runs with no problem. If I copy and paste the commands one by one into the terminal, it works, so it is not something in the script itself; it seems to be something that is blocking the bash interpreter (if that makes sense).
Another weird behavior is that the same script will run with no issues eventually. However, as we use Jenkins for automation, the scripts are re-created every time a new job starts.
For example, I created a new script, tst.sh, which only contains an echo. If I try to run it directly, it gets stuck for a while. I tried to debug it with bash -xeav but it does not print my script code, which means that it is not reading it. After a while, the script ran, with no changes. However, creating another script with the same content and a different name resurfaces the issue.
My hypothesis is that something prevents the script from being read, and it just waits until whatever is blocking it finishes. However, I did not see any process holding the file, so that may not be the case.
Is there anything else I should try? My knowledge of bash is pretty basic, so I don't know if there is a flag that might help me debug this internally.
I am working on RHEL 8.85; the bash version is GNU bash, version 4.4.20(1)-release (x86_64-redhat-linux-gnu).
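A hedged way to see where the interpreter blocks, assuming strace is installed on the box, is to trace the shell from outside while it starts the script; the last call printed before the pause usually points at what is holding things up:
# follow forks and watch file-related syscalls while bash starts the script
strace -f -tt -e trace=open,openat,read,execve bash ./tst.sh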
UPDATES BASED ON THE COMMENTS
Server resources are fine; there is no significant load on them.
The server hardware also works fine; at least the ops team has not reported any known issue.
A reboot makes the issue disappear, but it reappears after 5 minutes or so.
The issue does not seem to be related to bash profiles and such.
Issue solved, posting this as an answer so people can find it quicker.
Turns out, as multiple users suggested in the comments (thanks to all!!), the problem was caused by a security monitor, which analyzed each of the scripts that were executed. The team changed some settings on that end to prevent it from happening, and so far it is working.
Whenever I start Terminal on my MacBook Pro it is running a process. I have to use ctrl+C to kill it. If I close the window directly it warns me that the following processes are running: login, bash, bash, perl5.12.
Any idea what might be going on here and how I get back to the normal state?
I personally had this issue a while ago. First, check whether it is coming from one of your profiles. Assuming you are using bash, we will look at your bash profile.
To make sure the problem is actually stemming from your bash profile, source the scripts as follows:
source ~/.bash_profile
source ~/.profile
If after running those you observe the same hanging problem where you have to ctrl+C, then you know which script has the problem.
The best way to remedy the situation is to comment out different parts of your script to figure out where the problem is. Back up your profile, then comment out half of it and do a
source ~/.bash_profile
and whether or not it hangs will tell you which half the problem is in. Keep repeating this until you find the problem. It sounds longer than it actually is.
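A rough sketch of that bisection, with a placeholder block standing in for whichever half you are currently ruling out (the tracing line is an optional shortcut, assuming a throwaway shell is acceptable):
# "comment out" a whole chunk by wrapping it in a never-true conditional
if false; then
    : # ...the half of ~/.bash_profile you are currently ruling out...
fi
# optional shortcut: run the profile with tracing in a throwaway shell;
# the last command echoed before the hang is the suspect
bash -x ~/.bash_profile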
I'm developing an app. The operating system I'm using is Linux. If possible, I need to run a Ruby script on each file created in a directory, and I need to keep this script running at all times. The first thing I thought about is inotify:
The inotify API provides a mechanism for monitoring file system events. Inotify can be used to monitor individual files, or to monitor directories.
It's exactly what I need. Then I found rb-inotify, a wrapper for inotify.
Do you think there is a better way of doing what I need than using inotify? Also, I really don't understand the way that I have to use rb-inotify.
I just create, for example, an rb file with:
require 'rb-inotify'

notifier = INotify::Notifier.new
notifier.watch("directory/to/check", :create) do |event|
  # do task with the event.name file
end
notifier.run
Then I just run ruby myRBNotifier.rb, and it will keep looping forever. How do I stop it? Any ideas? Is this a good approach?
I'd recommend looking at god. It's designed for this sort of task, and makes it pretty easy to build a monitoring system for background and daemon apps.
As for the main code itself, inotify isn't cross-platform, so if there's a possibility you'll need to run on Windows or Mac OS, you'll need a different solution. It's not too hard to write a little piece of code that checks your target directory periodically for changes. If you need to know what changed, read and cache the directory entries, then compare them the next time your code runs. Use sleep between runs to wait some period of time before looping.
The old-school method of doing similar things is to use cron to fire off a job at regular intervals. That job can be your script that checks whether the file list changed by comparing it to the cached version, and acts as needed if something is different.
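As a rough shell sketch of that cron approach (paths and schedule are placeholders), the job could diff the current listing against a cached one and only act when something changed:
#!/bin/sh
# check_dir.sh -- hypothetical cron job, e.g.:  * * * * * /path/to/check_dir.sh
watch_dir=/directory/to/check            # placeholder path
cache=/var/tmp/check_dir.cache

ls -1 "$watch_dir" > /var/tmp/check_dir.now
if ! cmp -s /var/tmp/check_dir.now "$cache"; then
    : # ...kick off whatever should react to the change here...
fi
mv /var/tmp/check_dir.now "$cache"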
Just run your script in the background with
ruby myRBNotifier.rb &
When you need to stop it, find the process id and use kill on it:
ps ux
kill [whatever pid your process gets from the OS]
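If hunting for the pid by hand gets tedious, a hedged convenience (assuming pgrep/pkill are available on your system) is to record the pid at start time, or match on the script name:
# remember the pid when starting...
ruby myRBNotifier.rb & echo $! > notifier.pid
# ...and stop it later with
kill "$(cat notifier.pid)"
# or, without a pid file, match on the command line
pkill -f myRBNotifier.rb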
Does that answer your question?
If you're running on a mac/unix machine, look at the launchctl man page. You can set up a process to run and execute a ruby script whenever a file changes. It's highly configurable.
I am running a background process on Mac and have a problem with log update. If I run
someprog > mylog &
then mylog is not updated immediately but at intervals - I guess it's due to buffering. The same thing happens with at now. If I kill the program before the output is written to mylog, then I lose the data. There was no such problem with the same program on Linux machines, so I hope I can make it update at run time on the Mac as well. Any idea how?
someprog is an F77 program, which was not written by me.
I tried to ask this question at SuperUser, but no one can help me there.
EDIT1: I don't feel like changing the source, but I'll keep it in mind. Logging works fine on Linux machines, so it should work on Macs as well. It must be a system setting, e.g. buffer size? It would be fine for me to limit the buffer size to a smaller value - as it is, I have to wait hours to see anything in the log.
If you have access to the source code, you can probably just add calls to fflush(stdout) after every printf. If you don't, you could try something tricky with LD_PRELOAD: make your own version of printf() that calls libc's printf and also does a flush, then LD_PRELOAD that library when you run, and the app will use yours instead. Kinda risky, though.
That's the usual behaviour of POSIX C programs that are writing to a non-tty stream - I guess F77 shares the same behaviour, or is written in terms of the stdio routines.
I don't know what the right answer is - I guess you'll need to pipe the output through something that pretends to be a tty, but offhand I don't know what (if any) utility provides that option.
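One hedged option, assuming your macOS ships the BSD script(1) that can run a command under a pseudo-terminal, is to let it provide that fake tty so the stdio layer switches to line buffering:
# script allocates a pty for someprog, so its output is line-buffered and
# mylog grows as lines are printed; -q suppresses script's start/stop banner
script -q mylog ./someprog
# when backgrounding, keep script's own stdin away from the terminal
script -q mylog ./someprog < /dev/null &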
If, while an application is running, one of the shared libraries it uses is written to or truncated, the application will crash. Moving the file or removing it wholesale with rm will not cause a crash, because the OS (Solaris in this case, but I assume this is true on Linux and other *nixes as well) is smart enough not to delete the inode associated with the file while any process has it open.
I have a shell script that performs installation of shared libraries. Sometimes it may be used to reinstall versions of shared libraries that were already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important that the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones, so that they're not overwritten or truncated.
Unfortunately, recently an application crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch over to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made. Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally failing loudly), but still allowing them to be moved or rm'd?
Standard UNIX file permissions won't do the trick because you can't distinguish moving/removing from overwriting/truncating. Aliases could work, but I'm not sure what the full set of commands to alias would be. I imagine something like truss/strace, except that before each action it checks against a filter whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script.
You can prevent a script from overwriting files through I/O redirection by setting bash's noclobber option:
set -o noclobber
Preventing overwriting by cp and the like is harder. My inclination would be to reset the PATH so the script runs with a PATH containing just a single entry: a "blessed" directory where you place commands that you know are safe. This might mean, for example, that your version of cp is arranged to always use the --remove-destination option (probably a GNU-ism). In any case, you arrange for the script to execute only commands from the blessed directory, and you can then vet each such command individually.
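As a sketch of that blessed-directory idea (the wrapper path is an assumption, and --remove-destination is the GNU cp flag mentioned above), the vetted cp could simply unlink the target before copying, so an open inode is never truncated:
#!/bin/sh
# hypothetical /opt/blessed/cp -- the only cp visible on the script's PATH.
# --remove-destination unlinks the target first, so a process that already
# has the old library open keeps its inode instead of seeing it overwritten.
exec /usr/bin/cp --remove-destination "$@"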
It would be good if you could prevent a script from executing a command by absolute pathname, but I don't know how to do that. If you are doing installations in your regular directories, a chroot jail probably does not help unless you do a lot of loopback mounting to make those directories visible. And if the directories into which you're installing contain dangerous commands, I don't see how you can protect yourself against them completely.
Incidentally, I tried and failed to learn whether install(1) removes the destination before installing. It would be interesting to find out.
Bash script, I presume? Is the script very long? If not, you can do this manually:
if [ ! -f /tmp/foo.txt ]   # if the file does not exist
then
    ...code
fi
But I think you're asking for a way to wrap this around the script. You can certainly monitor the file writes with strace, but AFAIK it doesn't have the functionality to interrupt them, unless you set up some sort of intrusion detection system with rules, etc.
But to be honest, unless it's a huge script, that's probably more trouble than it's worth
Write your own safe_install() functions and make sure they're the only methods used. If you really need to be sure, run two processes: one would have permission to make changes, while the other would drop all privileges early and tell the first to do the actual disk work.
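A minimal sketch of such a safe_install(), assuming the 'deleted' folder from the question that cron empties later (all names are placeholders):
# move any existing destination aside, preserving its inode for processes
# that already have it open, then copy the new library into place
safe_install() {
    src=$1 dest=$2
    graveyard=/var/tmp/deleted-libs            # placeholder folder emptied by cron
    mkdir -p "$graveyard" || return 1
    if [ -e "$dest" ]; then
        mv "$dest" "$graveyard/$(basename "$dest").$$" || return 1
    fi
    cp "$src" "$dest"
}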