I'm currently looking for tips to simulate a tail -F in lftp.
The goal is to monitor a log file the same way I could do with a proper ssh connection.
The closest command I have found so far is repeat cat logfile.
It works, but it is not ideal when the file is large, because it re-displays the entire file on every iteration.
lftp itself will not support this, but if the server supports the extension, it is possible to pull only the last $x bytes of a file with, e.g., curl --range (see this serverfault answer). This, combined with some logic to grab only as many bytes as have been added since the last poll, could let you do this relatively efficiently. I doubt that there are any off-the-shelf FTP clients with this functionality, but someone else may know better.
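For example, a rough polling sketch along these lines (assuming the server reports the file size to curl -I and honours range requests; the URL and the 5-second interval are placeholders):
# Print only the bytes appended to the remote log since the last poll.
url=ftp://user:pass@example.com/path/logfile
offset=$(curl -sI "$url" | awk '/Content-Length/ {print $2}' | tr -d '\r')
offset=${offset:-0}
while sleep 5; do
    size=$(curl -sI "$url" | awk '/Content-Length/ {print $2}' | tr -d '\r')
    if [ -n "$size" ] && [ "$size" -gt "$offset" ]; then
        curl -s --range "$offset"- "$url"   # fetch only the newly appended bytes
        offset=$size
    fi
done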
We have a namespacing convention to distinguish our memcache entries. I would like to monitor the get and set that happens to a certain namespace to track a bug.
I can monitor the entire set of memcache operations, but I fear that would produce a huge amount of data, since it covers a significant subset of the DB data and the logs would run into gigabytes, so I need to filter only the namespace I am interested in.
I have a client-side solution, which is to decorate (or override) memcache.get and memcache.set to print the arguments if the key matches our desired pattern.
However, I feel it is better to do this on the server side. Also, there would be too many clients if I had to collect this information from all nodes. Is there something we could do on the server side to get the same effect? Anything in memcached's debug module that would help us?
Unfortunately there's no way to do this at the moment. I see on GitHub that they're working on such a feature, but currently it does not exist as far as I know.
What I use in this case is:
tcpdump -i lo -s 65535 -A -ttt port 11211 | cut -c 9- | grep -iE '^(get|set)'
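If you only care about one namespace, you can fold the key prefix into the filter as well (the myns: prefix below is just a placeholder for your own naming convention):
tcpdump -i lo -s 65535 -A -ttt port 11211 | cut -c 9- | grep -iE '^(get|set) myns:'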
As an alternative, you can use a proxy tool such as mcrouter to get that kind of output: https://github.com/facebook/mcrouter/wiki/Mcpiper
I know there are several posts asking similar things, but none address the problem I'm having.
I'm working on a script that handles connections to different Bluetooth low energy devices, reads from some of their handles using gatttool and dynamically creates a .json file with those values.
The problem I'm having is that gatttool commands take a while to execute (and do not always succeed in connecting to the devices, failing with "device is busy" or similar messages). These "errors" not only translate into wrong data in the .json file, they also let later lines of the script keep writing to the file (e.g. adding an extra } or similar). An example of the commands I'm using would be the following:
sudo gatttool -l high -b <MAC_ADDRESS> --char-read -a <#handle>
How can I approach this in a way that I can wait for a certain output? In this case, the ideal output when you --char-read using gatttool would be:
Characteristic value/description: some_hexadecimal_data
This way I can make sure I am following the script line by line instead of having these "jumps".
grep allows you to filter the output of gatttool for the data you are looking for.
If you are actually looking for a way to wait until a specific output is encountered before continuing, expect might be what you are looking for.
From the manpage:
expect [[-opts] pat1 body1] ... [-opts] patn [bodyn]
waits until one of the patterns matches the output of a spawned
process, a specified time period has passed, or an end-of-file is
seen. If the final body is empty, it may be omitted.
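A minimal sketch of that approach (the MAC address and handle are placeholders, and it assumes the script is run with enough privileges to use gatttool directly):
# Spawn gatttool under expect and wait up to 10 s for the value line.
value=$(expect -c '
    log_user 0
    set timeout 10
    spawn gatttool -l high -b AA:BB:CC:DD:EE:FF --char-read -a 0x0012
    expect {
        -re {Characteristic value/description: ([0-9a-fA-F ]+)} {
            puts $expect_out(1,string)
        }
        timeout { exit 1 }
    }
') || echo "char-read timed out or failed" >&2
echo "value read: $value"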
Let's say I have a big gzipped file data.txt.gz, but often the ungzipped version needs to be given to a program. Of course, instead of creating a standalone unpacked data.txt, one could use the process substitution syntax:
./program <(zcat data.txt.gz)
However, depending on the situation, this can be tiresome and error-prone.
Is there a way to emulate a named process substitution? That is, to create a pseudo-file data.txt that would 'unfold' into the process substitution zcat data.txt.gz whenever it is accessed, much like a symbolic link forwards a read operation to another file, except that here it needs to be a temporary named pipe.
Thanks.
PS. Somewhat similar question
Edit (from comments): The actual use case is a large gzipped corpus that, besides being used in its raw form, sometimes also needs to be run through a series of lightweight operations (tokenizing, lowercasing, etc.) and then fed to some "heavier" code. Storing a preprocessed copy wastes disk space, and repeatedly retyping the full preprocessing pipeline invites errors. At the same time, running the pipeline on the fly incurs only a tiny computational overhead, hence the idea of a long-lived pseudo-file that hides the details under the hood.
As far as I know, what you are describing does not exist, although it's an intriguing idea. It would require kernel support so that opening the file would actually run an arbitrary command or script instead.
Your best bet is to just save the long command to a shell function or script to reduce the difficulty of invoking the process substitution.
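For instance, something along these lines (a sketch based on the preprocessing example above; the tr step is just a stand-in for the real pipeline):
# Write the on-the-fly preprocessing down exactly once.
preprocessed() {
    zcat data.txt.gz | tr '[:upper:]' '[:lower:]'
}
# Then use it wherever the unpacked data is needed:
./program <(preprocessed)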
There's a spectrum of options, depending on what you need and how much effort you're willing to put in.
If you need a single-use file, you can just use mkfifo to create the file, start up a redirection of your archive into the fifo in the background, and pass the fifo's filename to whoever needs to read from it.
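A minimal sketch of that (the paths are placeholders):
mkfifo /tmp/data.txt
zcat data.txt.gz > /tmp/data.txt &    # blocks until a reader opens the fifo
./program /tmp/data.txt
rm /tmp/data.txt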
If you need to repeatedly access the file (perhaps simultaneously), you can set up a socket using netcat that serves the decompressed file over and over.
With "traditional netcat" this is as simple as while true; do nc -l -p 1234 -c "zcat myfile.tar.gz"; done. With BSD netcat it's a little more annoying:
# Make a dummy FIFO
mkfifo foo
# Use the FIFO to track new connections
while true; do cat foo | zcat myfile.tar.gz | nc -l 127.0.0.1 1234 > foo; done
Anyway, once the server (or file-based domain socket) is up, you just run nc localhost 1234 to read the decompressed file. You can of course use nc localhost 1234 as part of a process substitution somewhere else.
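For example, once the loop above is listening on port 1234, a consumer could read the stream like this:
./program <(nc localhost 1234)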
Depending on your needs, you may want to make the bash script more sophisticated for caching etc, or just dump this thing and go for a regular web server in some scripting language you're comfortable with.
Finally, and this is probably the most "exotic" solution, you can write a FUSE filesystem that presents virtual files backed by whatever logic your heart desires. At this point you should probably have a good hard think about whether the maintainability and complexity costs of where you're going really offset someone having to call zcat a few extra times.
I want to write a shell script that checks all the logic files placed at /var/www/html/ and, if a user makes any change to those files, sends an alert to an administrator saying "User X has made a change in file F." I do not have much experience writing shell scripts, and I do not know how to start. Any help will be highly appreciated.
This is answered in superuser: https://superuser.com/questions/181517/how-to-execute-a-command-whenever-a-file-changes
Basically, you use inotifywait.
Simple, using inotifywait:
while inotifywait -e close_write myfile.py; do ./myfile.py; done
This has a big limitation: if some program replaces myfile.py with a different file, rather than writing to the existing myfile, inotifywait will die. Most editors work that way.
To overcome this limitation, use inotifywait on the directory:
while true; do
    change=$(inotifywait -e close_write,moved_to,create .)
    change=${change#./ * }
    if [ "$change" = "myfile.py" ]; then ./myfile.py; fi
done
The asker has even put a script online for that, which can be called like this:
while sleep_until_modified.sh derivation.tex ; do latexmk -pdf derivation.tex ; done
To answer your question directly, you should probably take a look at the Advanced Bash Scripting Guide. It's about Bash specifically, but you may find it helpful even if you're not using Bash. As far as watching for changes in files, try inotify. There are also tools available to make it usable directly from the command line, which have worked quite nicely in my experience.
Now, there are a few other ways you might approach this problem. You might look at md5deep and ssdeep. They are tools designed for digital forensics which can create a list of cryptographic hashes (in the case of md5deep) or fuzzy hashes (in ssdeep's case) and later scan against that list and tell you which files have appeared, disappeared, changed, moved, etc. Or, if you want to detect potentially unauthorized changes, you might look at a host-based intrusion detection system such as OSSEC. Apparently, these can both scan for file changes and watch for other signs of unauthorized activity.
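As a rough illustration of the md5deep workflow (paths and file names are placeholders, and this only catches new or modified files, not deletions):
# Record a baseline of hashes for everything under /var/www/html.
md5deep -r /var/www/html > baseline.md5
# Later: list files whose current hash is NOT in the baseline,
# i.e. files that are new or have changed since the baseline was taken.
md5deep -r -x baseline.md5 /var/www/html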
I'm new to bash scripting, so please bear with me ^^
I want to write a bash script that makes 2000 cURL requests.
Is that fast and feasible?
Or what should I do in this situation?
Thanks
EDIT
This is the script, which I got from here:
#!/bin/bash
url=http://www.***.com/getaccount.php?username=
while read -r users
do
    content=$(curl "${url}${users}")
    echo "$users"
    echo "$content" >> output.txt
done < users.txt
where users.txt contains 2000 usernames.
The question is: is it fast? I have to call this script every minute from my crontab, so is this approach good enough, or should I use another language such as Perl?
Before this, I made the 2000 requests directly from crontab, but it is a very bad idea to add 2000 lines to the crontab.
So, any ideas?
If all of the URLs you're requesting follow a simple pattern (such as all of the numbered pages from page1.html through page2000.html), then curl itself can easily download them all in one command line:
# Downloads all of page1.html through page2000.html. Note the quotes to
# protect the URL pattern from shell expansion.
curl --remote-name-all 'http://www.example.com/page[1-2000].html'
See the section labeled "URL" in the manual page for more information on URL patterns.
If you have a lot of URLs which don't follow a numeric pattern, you can put all of the URLs into a file and use the -K option of curl to download them all in one go. So, using your example, what you'd want to do is modify your file to convert the usernames into URLs prefixed with url =. One way to do that is with the sed(1) utility:
# Convert list of usernames into a curl options file
sed 's|^\(.*\)$|url = http://www.***.com/getaccount.php?username=\1|' users.txt > curl.config
# Download all of the URLs from the config file
curl --remote-name-all -K curl.config
This will be much faster than downloading each file with a separate command, because a single curl process can reuse its connection: it sets up one TCP stream and reuses it for multiple requests, instead of setting up a new TCP stream for each request just to tear it down again, which is what happens when every request runs in a separate process.
Do note, though, that such a large automated download may violate a site's terms of use. You should check a site's robots.txt file before doing such a task, and make sure you're not exceeding their rate limits.
Well, I think you'll need to put down a lot more information to really get a good answer here, but you can make a loop in bash pretty easily:
for i in {1..2000}
do
    echo "This is iteration number $i"
    curl foo
done
The above command will execute each loop sequentially, and all the output will just go to your terminal. You may want to investigate redirecting stdout and stderr, as well as backgrounding the parts you care about.
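For example, a rough sketch of backgrounding the requests from the question's script in batches (the batch size of 50 is arbitrary, and concurrent appends can interleave if the responses are large):
#!/bin/bash
# Fire the requests in background batches instead of strictly one after another.
url=http://www.***.com/getaccount.php?username=
n=0
while read -r users; do
    curl -s "${url}${users}" >> output.txt &
    n=$((n + 1))
    if [ $((n % 50)) -eq 0 ]; then
        wait    # let the current batch finish before starting the next one
    fi
done < users.txt
wait    # wait for the final, possibly partial, batch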
I highly recommend http://www.tldp.org/HOWTO/Bash-Prog-Intro-HOWTO.html and http://www.tldp.org/LDP/abs/html/. Those are my favorite resources for figuring out bash stuff (aside from StackOverflow, of course).
Regarding your question, "is it fast", this depends on your definition of fast. I'm sure the above could be optimized in lots of ways, and I'm even more sure that if you do it in another language it could be much much faster. But it might be fast enough to do whatever it is you are actually trying to do.