Print p4 change/file info to .txt file or .csv - reporting

I am trying to figure out how to generate a report from Perforce. Either GUI or command line is fine.
I have tried the below and that gives me everything I need with the exception of the list of files changed.
C:\Users\a148530>p4 changes -t -L -f @2015/03/30,@now
Requirements
List all changelists in a given date range
Include the user and the files changed (with depot paths)
Essentially I am looking to export this view from the P4 GUI into .txt or .csv

I'm pretty new to Perforce so I assume there's a better way, but this is what I do:
Parse out the list of changelists, then loop through it and for each changelist number:
Display the changelist info.
List the relevant files.
Here are some example commands for inside the loop for changelist number 12345:
p4 changes -t -l -m1 @12345
p4 files @=12345

You can save the output to a .txt or .csv file by redirecting it with
> output.txt or > output.csv
e.g. p4 changes -t -L -f @2015/03/30,@now > output.csv
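Putting the pieces together, here is a sketch of that loop as a small shell script (assuming the p4 command-line client is installed and logged in; the date range and report.txt are just example values):
#!/bin/sh
# Sketch only: for each changelist in the date range, print the changelist
# info followed by the files it touched, and save everything to report.txt.
p4 changes @2015/03/30,@now | awk '{ print $2 }' | while read -r cl; do
    p4 changes -t -l -m1 "@$cl"   # user, date, and description
    p4 files "@=$cl"              # files changed, with depot paths
    echo
done > report.txt
The raw p4 output is plain text rather than comma-separated, so redirect to a .txt file or post-process it if you need strict CSV.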

Related

Goaccess Process Multiple Logs

I have a directory with log files. I want to process the last 13 of them (past quarter). I can't use a wildcard using Goaccess because I don't want to include all of them, just the last 13 generated weeks' worth.
I have an array of the filenames of those last 13 files, but I don't know the syntax for the Goaccess command to include those files. I can't find any reference as to how to do this, as all notes I've seen refer to using a wildcard. I don't want to start copying and moving files around. There should be a way of doing this in the command line with multiple filenames which I can generate just fine.
How can I use a multiple logname input syntax in Goaccess?
Something like:
/usr/local/bin/goaccess -p /users/rich/things/goaccess.conf log1.log log2.log log3.log -o qreport.html
MULTIPLE LOG FILES
There are several ways to parse multiple logs with GoAccess. The simplest is to pass multiple log files to the command line:
goaccess access.log access.log.1
(from the GoAccess documentation on custom logs)
In your case, you need to process only the last 13 generated files, so you can get them using ls. The final command becomes
/usr/local/bin/goaccess -p /users/rich/things/goaccess.conf $(ls -t log* | head -13 | tr '\r\n' ' ') -o qreport.html
This will process the last 13 files whose names start with log.
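Since you already have the file names in a shell array, you can also expand the array directly on the goaccess command line instead of shelling out to ls. A sketch (the log glob and config path are the ones from the question; file names are assumed to contain no whitespace):
#!/usr/bin/env bash
# Collect the 13 most recently modified logs into an array, then pass the
# whole array to goaccess as separate arguments.
recent=( $(ls -t log* | head -13) )
/usr/local/bin/goaccess -p /users/rich/things/goaccess.conf "${recent[@]}" -o qreport.html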

How do I write a bash script to copy files into a new folder based on name?

I have a folder filled with ~300 files, named in the form username@mail.com.pdf. I need about 40 of them, and I have a list of the usernames I need (saved in a file called names.txt, one username per line). I would like to copy the files I need into a new folder containing only those files.
The file names.txt contains only the usernames, one per line: if the first line is eternalmothra, the PDF file I want to copy over is named eternalmothra@mail.com.pdf.
while read p; do
ls | grep $p > file_names.txt
done <names.txt
This seems like it should read from the list and, for each username, find the matching username@mail.com.pdf file. Unfortunately, it seems like only the last match is saved to file_names.txt.
The second part of this is to copy all the files over:
while read p; do
mv $p foldername
done <file_names.txt
(I haven't tried that second part yet because the first part isn't working).
I'm doing all this with Cygwin, by the way.
1) What is wrong with the first script that it won't copy everything over?
2) If I get that to work, will the second script correctly copy them over? (Actually, I think it's preferable if they just get copied, not moved over).
Edit:
I would like to add that I figured out how to read lines from a txt file from here: Looping through content of a file in bash
Solution from comment: Your problem is just that echo a > b overwrites the file, while echo a >> b appends to it, so replace
ls | grep $p > file_names.txt
with
ls | grep $p >> file_names.txt
There might be more efficient solutions if the task ran every day, but for a one-shot over 300 files your script is good.
Assuming you don't have file names with newlines in them (in which case your original approach would not have a chance of working anyway), try this.
printf '%s\n' * | grep -f names.txt | xargs cp -t foldername
The printf is necessary to work around the various issues with ls; passing the list of all the file names to grep in one go produces a list of all the matches, one per line; and passing that to xargs cp performs the copying. (To move instead of copy, use mv instead of cp, obviously; both support the -t option so as to make it convenient to run them under xargs.) The function of xargs is to convert standard input into arguments to the program you run as the argument to xargs.
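If you prefer to skip the intermediate file_names.txt entirely, a loop along these lines should also work. A sketch, assuming the username@mail.com.pdf naming from the question and that the destination folder foldername already exists:
#!/bin/bash
# Build each file name directly from the username and copy it (not move it).
while read -r p; do
    cp -- "$p@mail.com.pdf" foldername/
done < names.txt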

compare rows in two files in unix shell script and merge without redundant data

There is one old report file residing on a drive.
Everytime a new report is generated, it should be compared to the contents of this old file.
If any new account row is reported in this new report file, it should be added to the old file, else just skip.
Both files will have same title and headers.
Eg: old report
RUN DATE:xyz FEE ASSESSMENT REPORT
fee calculator
ACCOUNT NUMBER DELVRY DT TOTAL FEES
=======================================================
123456 2014-06-27 110.0
The new report might be
RUN DATE:xyz FEE ASSESSMENT REPORT
fee calculator
ACCOUNT NUMBER DELVRY DT TOTAL FEES
=======================================================
898989 2014-06-26 11.0
So now the old report should be merged to have both rows under it - 123456 and 898989 acc no rows.
I am new to shell scripting. I don't know if I should use diff cmd or while read LINE or awk?
Thanks!
This is several commands combined into an actual script, rather than an adept one-line piece of commandlinefu.
Assuming the number of lines in the header section of the report is consistent, you can use tail -n +7 to return everything from line 7 onward (i.e. skip the first six header lines; adjust the number to match your actual header length).
If the headers are not the same length, but all end with the "=====" separator line you've shown above, then you can use grep -n to find that line's number and start parsing the account numbers after it.
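A sketch of that grep -n approach (ancient_report.log is the same placeholder file name used in the script below): find the separator line's number, then print everything after it.
# Locate the "=====" separator, then keep only the lines that follow it.
sep=$(grep -n '^=====' ancient_report.log | head -n 1 | cut -d: -f1)
tail -n +"$((sep + 1))" ancient_report.log
Either way, the full script looks like this: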
#!/usr/bin/env bash
OLD_FILE="ancient_report.log"
NEW_FILE="latest_and_greatest.log"
tmp_ext=".tmp"
tail -n +7 ${OLD_FILE} > ${OLD_FILE}${tmp_ext}
tail -n +7 ${NEW_FILE} >> ${OLD_FILE}${tmp_ext}
sort -u ${OLD_FILE}${tmp_ext} > ${OLD_FILE}${tmp_ext}.unique
mv -f ${OLD_FILE}${tmp_ext}.unique ${OLD_FILE}
To illustrate this script:
#!/usr/bin/env bash
The shebang line above tells *nix how to run it.
OLD_FILE="ancient_report.log"
NEW_FILE="latest_and_greatest.log"
tmp_ext=".tmp"
Declare the starting variables. You can also take the file names as command-line arguments: OLD_FILE=${1} picks up the first argument.
tail -n +7 ${OLD_FILE} > ${OLD_FILE}${tmp_ext}
tail -n +7 ${NEW_FILE} >> ${OLD_FILE}${tmp_ext}
Append the data portions (everything after the header) of both files into a single 'tmp' file
sort -u ${OLD_FILE}${tmp_ext} > ${OLD_FILE}${tmp_ext}.unique
sort and retain only the 'unique' entries with -u
If your OS version of sort does not have the -u then you can get the same results by using: sort <filename> | uniq
mv -f ${OLD_FILE}${tmp_ext}.unique ${OLD_FILE}
Replace old file with new uniq'd file.
There are of course many simpler ways to do this, but this one gets the job done with several commands in a sequence.
Edit:
To preserve the header portion of the file with the latest report date, instead of mv-ing the new tmp file over the old one, do:
rm ${OLD_FILE}
head -n 6 ${NEW_FILE} > ${OLD_FILE}
cat ${OLD_FILE}${tmp_ext}.unique >> ${OLD_FILE}
This removes the OLD_FILE (strictly optional, since > would truncate it anyway), then cats together the header of the new file (for the date) and the entire contents of the unique tmp file; head -n 6 keeps just the six header lines, matching the tail -n +7 above. After this you can do general file cleanup such as removing any temporary files you've created. To preserve/debug any changes, you can add a datestamp to each 'uniqued' file name and keep them as an audit trail of all report additions.
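As one of the simpler ways mentioned above, here is a hedged awk sketch that appends only the genuinely new rows, assuming a six-line header and that the account number is the first column (file names are the same placeholders used in the script):
# Keep rows from the new report whose account number (column 1) does not
# already appear anywhere in the old report, then append them.
awk 'NR==FNR { seen[$1]; next } FNR > 6 && !($1 in seen)' \
    ancient_report.log latest_and_greatest.log > new_rows.tmp
cat new_rows.tmp >> ancient_report.log
rm -f new_rows.tmp
This avoids re-sorting the old file, so its existing row order and header are left untouched.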

Find and copy the two most recent files added to a directory with a specific format

I'm currently writing a ksh script that will run every 5 minutes. I want to select the two most recently added files within a directory that have a specific format. The format of the file should be: OUS_*_*_*.html. The files should then be copied over to a destination directory.
I assume I can use find, but I am using HP-UX and it does not support the -amin, -cmin, -mmin options. Does anyone know how I can achieve this functionality?
Edit 1: I have found the following commands, each of which is supposed to return the single newest file, but in practice more than one file is listed:
ls -Art | tail -n 1
ls -t | head -n1
Edit 2: I can see how these commands should work, but ls -t lists files in multiple columns, so selecting the first line actually selects three separate file names. I attempted to use ls -lt, but now the first line is the string total 112, followed by the file names along with their access rights, time stamps, etc.
Edit 3: I found that the -1 (numeral 1, not l) option provides a list with just file names. Using the command ls -1t | head -n 2 I was able to gain the ability to list the two newest files.
Q: Is it possible to restrict the ls command to just look for files with the previously mentioned format?
I was able to use this command to list the two most recently added files in a directory that match a specific format:
ls -1t $fileNameFormat | head -n 2
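Putting it together for the 5-minute job, a sketch (destDir is a hypothetical placeholder for your destination directory, and file names are assumed to contain no whitespace):
#!/bin/ksh
# Copy the two most recently modified OUS_*_*_*.html files to the destination.
destDir=/path/to/destination
ls -1t OUS_*_*_*.html | head -n 2 | while read -r f; do
    cp "$f" "$destDir"/
done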

Create files using grep and wildcards with input file

This should be a no-brainer, but apparently I have no brain today.
I have 50 20-gig logs that contain entries from multiple apps, one of which adds a transaction ID to its log lines. I have 42 transaction IDs I need to review, and I'd like to parse out the appropriate lines into separate files.
To do a single file, the command would be simply,
grep CDBBDEADBEEF2020X02393 server.log* > CDBBDEADBEEF2020X02393.log
that creates a log isolated to that transaction, from all 50 server.logs.
Now, I have a file with 42 txnIDs (shortening to 4 here):
CDBBDEADBEEF2020X02393
CDBBDEADBEEF6548X02302
CDBBDE15644F2020X02354
ABBDEADBEEF21014777811
And I wrote:
#/bin/sh
grep $1 server.\* > $1.log
But that is not working. Changing the shebang to #/bin/bash -xv, gives me this weird output (obviously I'm playing with what the correct escape magic must be):
$ ./xtrakt.sh B7F6E465E006B1F1A
#!/bin/bash -xv
grep - ./server\.\*
' grep - './server.*
: No such file or directory
I have also tried the command line
grep - server.* < txids.txt > $1
But OBVIOUSLY that $1 is pointless and I have no idea how to get a file named per txid using the input redirect form of the command.
Thanks in advance for any ideas. I haven't gone the route of doing a foreach in the shell script, because I want grep to put the original filename in the output lines so I can examine context later if I need to.
Also - it would be great to have the server.* files ordered numerically (server.log.1, server.log.2 NOT server.log.1, server.log.10...)
try this:
while read -r txid
do
grep "$txid" server.* > "$txid.log"
done < txids.txt
And for the file ordering: rename the single-digit files to two digits with leading zeroes, e.g. mv server.log.1 server.log.01.
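Alternatively, if renaming is not an option and your ls supports GNU-style version sort, a sketch (assumes the log file names contain no whitespace):
# ls -v sorts version numbers naturally, so server.log.2 comes before server.log.10.
while read -r txid
do
    grep "$txid" $(ls -v server.log*) > "$txid.log"
done < txids.txt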
