I want to read some parameters from a csv file when airodump-ng starts. To capture and save the output of airodump-ng into a csv file I'm using this:
airodump-ng wlan0 -w csvfilename --write-interval 60 -o csv
It works fine, but I want to automatically read the BSSID and channel values from the csv file generated by airodump-ng so I can use them afterwards. Is it possible to do that?
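For what it's worth, here is a minimal sketch of one way to read those values back (this assumes the default airodump-ng CSV layout: the dump file is named csvfilename-01.csv, the access-point section lists the BSSID in column 1 and the channel in column 4, and the client section starts at the "Station MAC" header):
#!/bin/bash
# Print "BSSID channel" for every access point found in the airodump-ng dump.
awk -F', *' '
    /^Station MAC/ { exit }       # stop before the client/station section
    NF > 10 && $1 != "BSSID" {    # AP data rows (skip the section header row)
        print $1, $4
    }
' csvfilename-01.csv
The two columns can then be read into shell variables with a while read loop, or captured with command substitution, for the rest of the script to use.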
Is it possible to write to a file in one bash process and read it with tail in another (the same way you can read system-generated logs with tail -f)?
I would like to open a file and continuously write something to it:
vi /tmp/myfile
And in another terminal, print what was written to that file:
tail -f /tmp/myfile
I've tried this, but tail doesn't print anything after I save changes in vi (only the initial lines, before the save).
Motivation:
In my toy project, I would like to build a shared clipboard using the pipeto.me service, where I would write to my file continuously and all changes captured by tail would be piped to curl. Something like the watch log example from pipeto.me:
tail -f logfile | curl -T- -s https://pipeto.me/2xrGcZtQ.
But instead of a logfile it would watch my file, where I would write in vi.
But apart from solving my problem, I'm looking for a general answer on whether something like this is possible with vi and tail.
You can use the cat command, redirecting its output stream to /tmp/myfile, so that whatever you type is written to the file:
cat > /tmp/myfile;
#input-> add text(standard input by default is set as keyboard)
#typing...
And to print the file, run the tail command with -F as the argument:
tail -F /tmp/myfile; #-F -> output appended data as the file grows, and keep retrying if the file is temporarily inaccessible
#output-> input given to file
#typing....
Writing text to the file with vim:
vi /tmp/myfile;
#typing...
#:w -> write text to file
tail -F /tmp/myfile;
#
#typing...
When you write to your file using vim, it doesn't write (save) the changes instantly as you type; only when you leave insert mode and save the file explicitly (:w) is the output of tail updated.
Hence you can use an autosave plugin, which saves the file automatically, to display the output synchronously.
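If you would rather not install a plugin, one rough alternative (a sketch, assuming vi is actually Vim 7.4 or newer, which has the TextChanged/TextChangedI events) is to start vi with an autocommand that writes the buffer whenever the text changes, so tail -F picks up edits almost as you type:
vi -c 'autocmd TextChanged,TextChangedI <buffer> silent write' /tmp/myfile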
I am writing a bash script that loops over a number of logfiles and sends the 10 most recent entries of each logfile into a separate csv. So if I have 25 logfiles, after running the script I expect to have 25 new csv files, each containing the 10 most recent lines.
Here is something along the lines of what I am trying to make:
#!/bin/bash
for file in /path/to/logs/*.log;
do tail -n 10 > "$file".csv
done
I can't seem to get it to loop over all the files - instead, it hangs on one. How do I get this small snippet to loop over each file?
I think you are missing the $file parameter in the tail command. Could you try adding "$file" at the end of it?
#!/bin/bash
for file in /path/to/logs/*.log;
do tail -n 10 "$file" > "$file".csv
done
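As a side note, the loop above produces names like app.log.csv; if you would rather swap the .log suffix for .csv, a small variation using bash parameter expansion (just a sketch) is:
#!/bin/bash
for file in /path/to/logs/*.log; do
    tail -n 10 "$file" > "${file%.log}.csv"   # app.log -> app.csv
done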
I have a csv file whose data I want to import into my mongodb, but I want to point it to a specific row number from where it should start importing the data.
Right now I'm importing it in the following way:
mongoimport -d dbname -c collection_name --type csv --file filename.csv --headerline
The reason I want to import from a specific row number is that the first few rows are informational and not required in the DB:
SampleFile(2015),,,
,,,
,,,
,,,
,,,
Theme,Category,Topic
Automobile,Auto Brands,Acura
Automobile,Auto Brands,Aston Martin
So I want to point it at the row Theme,Category,Topic. Is that possible, or do I have to manually edit the csv file for this?
On unix, or with a ported version, you can use tail to skip the lines in the file, since mongoimport will accept STDIN as an alternative to --file. You probably also want to set up --fieldFile for the headers, since --headerline cannot be used when you are not reading that first header line from the file:
tail -n +<linesToSkip> filename.csv | mongoimport -d dbname -c collectionname --type csv --fieldFile headers.txt
Note the + there, as that tells tail to start output at that line rather than print the last lines of the file.
If you don't want to install anything else on Windows, you can use for:
(for /f "skip=<linesToSkip> delims=" %i in (filename.csv) do @echo %i) | mongoimport -d dbname -c collectionname --type csv --fieldFile headers.txt
In your sample, though, you can just skip the lines up to the header line and still use the --headerline option.
So just pipe the input to STDIN and let mongoimport slurp it up.
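Applied to your sample, where the Theme,Category,Topic header sits on line 6, that would look something like this (assuming the file is called filename.csv as in the question):
tail -n +6 filename.csv | mongoimport -d dbname -c collection_name --type csv --headerline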
I'd like to run a unix command in a loop, replacing a variable for each iteration and then store the output into a file.
I'll be grabbing the HTTP headers of a series of URLs using curl -I, and then I want each result output to a new line of a file.
I know I could store the output with | cat or redirect it into a file with >, but how would I run the loop?
I have a file with a list of URLs, one per line (or I could comma-separate them, alternatively).
You can write:
while IFS= read -r url ; do
curl -I "$url"
done < urls-to-query.txt > retrieved-headers.txt
(using the built-in read command, which reads a line from standard input — in this case redirected from urls-to-query.txt — and saves it to a variable — in this case $url).
Given a list of URLs in a file:
http://url1.com
http://url2.com
You could run
cat inputfile | xargs curl -I >> outputfile
That reads each line of the input file and appends the result for each URL to the outputfile.
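If you prefer one curl invocation per URL and want to avoid the extra cat process, a roughly equivalent sketch is:
xargs -n 1 curl -I < inputfile >> outputfile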
So I have a Linux program that runs in a while(true) loop, waiting for user input, processing it, and printing the result to stdout.
I want to write a shell script that opens this program, feeds it lines from a txt file one line at a time, and saves the program's output for each line to a file.
So I want to know if there is any command for:
- open a program
- send text to a process
- receive output from that program
Many thanks.
It sounds like you want something like this:
cat file | while read line; do
answer=$(echo "$line" | prog)
done
This will run a new instance of prog for each line. The line will be the standard input of prog, and the output will be put in the variable answer for your script to process further.
Some people object to the "cat file |", as this creates a process you don't really need. You can also use file redirection by putting it after the done:
while read line; do
answer=$(echo "$line" | prog)
done < file
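If the goal is simply to save each line's output to a file, as the question describes, a minimal variation (a sketch, assuming the program is called prog and the input file is input.txt) is:
while IFS= read -r line; do
    printf '%s\n' "$line" | prog      # feed the current line to prog on stdin
done < input.txt > answers.txt        # all of prog's output ends up in answers.txt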
Have you looked at pipes and redirection? You can use pipes to feed the output of one program into another. You can use redirection to send the contents of files to programs and/or write output to files.
I assume you want a script written in bash.
To open a file you just need to type its name.
To send text to a program you either pass it through | or with < (take input from a file).
To receive output you use > to redirect output to some file, or >> to redirect as well but append the results instead of truncating the file.
To achieve what you want in bash, you could write:
#!/bin/bash
cat input_file | xargs -I{} your_program {} >> output_file
This calls your_program for each line from input_file and appends the results to output_file.