How to parse the Memgraph audit log? - memgraphdb

I've located my audit logs in /audit/audit.log. I can also see older logs there (they are gzipped). Here is what my log file looks like:
1551376833.225395,127.0.0.1,admin,"MATCH (n) DETACH DELETE n","{}"
1551376833.257825,127.0.0.1,admin,"CREATE (n {name: $name})","{\"name\":\"alice\"}"
1551376833.273546,127.0.0.1,admin,"MATCH (n), (m) CREATE (n)-[:e {when: $when}]->(m)","{\"when\":42}"
1551376833.300955,127.0.0.1,admin,"MATCH (n), (m) SET n.value = m.value","{}"
How can I parse it? What would be the header for these records?

The audit log contains the following information formatted into a CSV file:
<timestamp>,<address>,<username>,<query>,<params>
For each query, the supplied query parameters are also logged. The query is escaped and quoted so that commas in queries don't affect the correctness of the CSV. The parameters are encoded as JSON objects and are then escaped and quoted.
You can use the following Python script to get the data out:
import csv
import json

with open("audit.log") as f:
    reader = csv.reader(f, delimiter=',', doublequote=False,
                        escapechar='\\', lineterminator='\n',
                        quotechar='"', quoting=csv.QUOTE_MINIMAL,
                        skipinitialspace=False, strict=True)
    for line in reader:
        timestamp, address, username, query, params = line
        params = json.loads(params)
        # Rest of your code that processes the logs.
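The first field looks like a Unix epoch time in seconds (with microsecond precision), so if you also want human-readable timestamps you can convert it inside that loop. A minimal sketch (the print format is just an illustration):
from datetime import datetime, timezone

# Inside the reader loop above: convert the epoch timestamp to an aware datetime.
when = datetime.fromtimestamp(float(timestamp), tz=timezone.utc)
print(when.isoformat(), address, username, query, params)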

Related

Read data from CSV and create JSON array in JMeter

I have one POST request and below is my body payload:
{
    "ABC": "ITBK1",
    "Code": "AH01001187",
    "ScheduleDate": "2021-09-02T22:59:00Z",
    "FilterType": 2,
    "FilterData": [
        "LoadTest92", "LoadTest93"
    ]
}
I'm passing the ContractorId into FilterData as below:
{
    "ABC": "ITBK1",
    "Code": "AH01001187",
    "ScheduleDate": "${startTo}",
    "FilterType": 2,
    "FilterData": ["${contractorId}"]
}
But it only takes one id at a time for this JSON. How can I send multiple values for the FilterData JSON array from the CSV? Please help with this.
First of all, don't post code (including CSV file content) as an image.
As per CSV Data Set Config documentation:
By default, the file is only opened once, and each thread will use a different line from the file. However the order in which lines are passed to threads depends on the order in which they execute, which may vary between iterations. Lines are read at the start of each test iteration. The file name and mode are resolved in the first iteration.
So it means that you need to go to the next iteration in order to read the next line.
If you want to send all the values from column J as the "FilterData" you could do something like:
Add JSR223 PreProcessor as a child of the request you want to parameterize
Put the following code into "Script" area:
def lines = new File('/path/to/your/file.csv').readLines()
def payload = []
lines.each { line ->
    // column J is the 10th column, i.e. index 9 when splitting on commas
    def contractor = line.split(',')[9]
    payload.add(contractor as String)
}
vars.put('payload', new groovy.json.JsonBuilder(payload).toPrettyString())
That's it, use ${payload} instead of your ["${contractorId}"] variable in the HTTP request.
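With that in place, the request body from your question would look something like this (the other field values are just the ones from your original payload):
{
    "ABC": "ITBK1",
    "Code": "AH01001187",
    "ScheduleDate": "${startTo}",
    "FilterType": 2,
    "FilterData": ${payload}
}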
More information:
JsonBuilder
JMeterVariables
Apache Groovy - Why and How You Should Use It

Gspread append is not removing the single quote

I have switched to using gspread instead of the pure Google Sheets API. Before, I formatted my input into a JSON body, but now I send the list directly. The append works without an error, but the first item has an extra single quote at the beginning of the date.
I feel like I'm following the example to the letter, so it seems like a bug, but I wanted to ask here first just in case I'm doing something silly.
values = ['2021-08-11', '-', '-', 372, 373, 'Brayden', 'ChrisT',
          'Chris', 'Dida', 'Darren', 'Ferdi', 'Bernard', 'Cal', 'Gavin',
          'Conor']
ws.append_row(values)
First item in sheet: '2021-08-11
Originally, using the pure API, I was formatting the body as follows, but as I understand it, with gspread I should just be able to send the list.
body = {
    'majorDimension': 'ROWS',
    'values': [
        google_output,
    ],
}
The same seems to happen if I update instead of append:
ws.update(range, values, major_dimension='ROWS')
I'm using version 4.0.0 of gspread, and here is the documentation I'm following:
(method) append_row(values, value_input_option='RAW', insert_data_option=None, table_range=None) -> Any
Adds a row to the worksheet and populates it with values. Widens the worksheet if there are more values than columns.
:param list values: List of values for the new row.
:param str value_input_option: (optional) Determines how the input data should be interpreted. See ValueInputOption_ in the Sheets API reference.
:param str insert_data_option: (optional) Determines how the input data should be inserted. See InsertDataOption_ in the Sheets API reference.
:param str table_range: (optional) The A1 notation of a range to search for a logical table of data. Values are appended after the last row of the table. Examples: A1 or B2:D4
It seems to be a problem with gspread itself. I encountered this problem myself, and according to this thread we are not the only ones. The suggestion there is to add
value_input_option='USER_ENTERED'
so that your code looks like this:
values = ['2021-08-11', '-', '-', 372, 373, 'Brayden', 'ChrisT', 'Chris', 'Dida', 'Darren', 'Ferdi', 'Bernard', 'Cal', 'Gavin', 'Conor']
ws.append_row(values, value_input_option='USER_ENTERED')
That gets rid of the leading single quote. It worked for me and I hope it works for you as well.

Is there a way to read fixed length files using csv.reader() module in Python 2.x

I have a fixed length file like:
0001ABC,DEF1234
The file definition is:
id[1:4]
name[5:11]
phone[12:15]
I need to load this data into a table. I tried to use the csv module and defined the fixed lengths of each field. It is working fine except for the name field.
For the NAME field, only the value up to ABC is getting loaded. The reason is that, since I am using the csv module, it treats 0001ABC, as one value and only parses up to that point.
I tried to use escapechar=',' while reading the file, but it removes the ',' from the data. I also tried quoting=csv.QUOTE_ALL but that didn't work either.
with open("xyz.csv") as csvfile:
readCSV = csv.reader(csvfile)
writeCSV = open("sample_csv", 'w');
output = csv.writer(writeCSV, dialect='excel', lineterminator="\n")
for row in readCSV:
print(row) # to debug #
data= str(row[0])
print(data) # to debug #
id = data[0:4]
name = data[5:11]
phone = data[12:15]
output.writerow([id,name,phone])
writeCSV.close()
Output of the print commands:
row: ['0001ABC','DEF1234']
data: 0001ABC
Ideally, I expect to see the entire set 0001ABC,DEF1234 in the variable: data.
I can then use the parsing as mentioned in the code to break it into different fields.
Can you please let me know where I am going wrong?
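For what it's worth, since the lines are fixed-width rather than truly comma-separated, one way around this is to skip csv.reader for the input entirely and slice each raw line by position, keeping csv.writer only for the output. A minimal sketch along the lines of your code (field positions taken from the definition above):
import csv

with open("xyz.csv") as infile:
    writeCSV = open("sample_csv", 'w')
    output = csv.writer(writeCSV, dialect='excel', lineterminator="\n")
    for raw in infile:
        data = raw.rstrip("\n")
        # Slice by position: id[1:4], name[5:11], phone[12:15] (1-based)
        # map to the 0-based Python slices [0:4], [4:11], [11:15].
        id = data[0:4]
        name = data[4:11]
        phone = data[11:15]
        output.writerow([id, name, phone])
    writeCSV.close()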

JMeter Array of variables to text file

I am running a query via JDBC Request and I am able to get the data and place it in a variable array. The problem is that I want the values of the variables to be saved to a text file. However, each variable is given a unique number appended to it, i.e. SCORED_1, SCORED_2, SCORED_3, etc. I am using a Beanshell PostProcessor to write to the text file, but it doesn't work unless I define a line number. How can I get all results from a SQL query and dump them into a single variable, without the values being wrapped in brackets, and with each value on its own row?
import org.apache.jmeter.services.FileServer;

// get variables from regular expression extractor
ClaimId = vars.get("SCORED_9"); // I want to just use the SCORED variable to contain all values from the array without "{[" characters.

// pass true if you want to append to the existing file
// if you want to overwrite, then don't pass the second argument
FileWriter fstream = new FileWriter("C:/JMeter/apache-jmeter-4.0/bin/FBCS_Verify_Final/Comp.txt", true);
BufferedWriter out = new BufferedWriter(fstream);
out.write(ClaimId);
out.write(System.getProperty("line.separator"));
out.close();
fstream.close();
We are not telepathic enough to come up with the solution without seeing your query output and the result file format.
However, I'm under the impression that you're going in the wrong direction. Given that you're talking about {[ characters, it appears that you're using the Result Variable Name field, which returns an ArrayList and has to be treated differently.
However, if you switch to the Variable Names field, JMeter will generate a separate variable for each result set row, which should be much easier to work with and eventually concatenate.
More information:
JDBC Request
Debugging JDBC Sampler Results in JMeter
In the JDBC Request, enter a Variable Name (Store as string), then add a Beanshell PostProcessor with the following script:
import org.apache.jmeter.services.FileServer;

{
    FileWriter fstream = new FileWriter("C:/JMeter/apache-jmeter-4.0/bin/FBCS_Verify_Final/Comp.txt", false);
    BufferedWriter out = new BufferedWriter(fstream);
    // SCORED_# holds the number of rows returned by the JDBC Request
    Counter = Integer.parseInt(vars.get("SCORED_#"));
    for (int i = 1; i <= Counter; i++) {
        // SCORED_1, SCORED_2, ... hold the individual row values
        ClaimId = vars.get("SCORED_" + i);
        out.write(ClaimId);
        out.write(System.getProperty("line.separator"));
    }
    out.flush();
    out.close();
    fstream.close();
}

Pig: Unable to Load BAG

I have a record in this format:
{(Larry Page),23,M}
{(Suman Dey),22,M}
{(Palani Pratap),25,M}
I am trying to LOAD the record using this:
records = LOAD '~/Documents/PigBag.txt' AS (details:BAG{name:tuple(fullname:chararray),age:int,gender:chararray});
But I am getting this error:
2015-02-04 20:09:41,556 [main] ERROR org.apache.pig.tools.grunt.Grunt - ERROR 1200: <line 7, column 101> mismatched input ',' expecting RIGHT_CURLY
Please advise.
It's not a bag since it's not made up of tuples. Try
load ... as (name:tuple(fullname:chararray), age:int, gender:chararray)
For some reason Pig wraps the output of a line in curly braces, which makes it look like a bag, but it's not. If you saved this data using PigStorage, you can save it with the '-schema' parameter, which tells PigStorage to create a schema file (.pigschema or something similar) that you can look at to see what the saved schema is. It can also be used when loading with PigStorage to save you the AS clause.
Yes, LiMuBei's point is absolutely right. Your input is not in the right format. Pig always expects a bag to hold a collection of tuples, but in your case it's a collection of a tuple and fields. In this case Pig will retain the tuple and reject the fields (age and gender) during load.
But this problem can easily be solved with a different approach (a kind of hacky solution):
1. Load each input line as chararray.
2. Remove the curly brackets and parentheses from the input.
3. Using the STRSPLIT function, split the input into (name, age, sex) fields.
PigScript:
A = LOAD 'input' USING PigStorage AS (line:chararray);
B = FOREACH A GENERATE FLATTEN(REPLACE(line,'[}{)(]+','')) AS (newline:chararray);
C = FOREACH B GENERATE FLATTEN(STRSPLIT(newline,',',3)) AS (fullname:chararray,age:int,sex:chararray);
DUMP C;
Output:
(Larry Page,23,M)
(Suman Dey,22,M)
(Palani Pratap,25,M)
Now you can access all the fields using fullname,age,sex.
