Serialize and deserialize protobufs through CLI? - protocol-buffers

I am trying to deserialize a file saved as a protobuf through the CLI (this seems like the easiest thing to do). I would prefer not to compile the schema with protoc, import it into a programming language, and then read the result.
My use case: A TensorFlow lite tool has output some data in a protobuf format. I've found the protobuf message definition in the TensorFlow repo too. I just want to read the output quickly. Specifically, I am getting back a tflite::evaluation::EvaluationStageMetrics message from the inference_diff tool.

I assume that the tool outputs a protobuf message in binary format.
protoc can decode the message and print it in text format (the reverse, --encode, reads a text-format message and writes it back out as binary). See this option:
--decode=MESSAGE_TYPE       Read a binary message of the given type from
                            standard input and write it in text format
                            to standard output. The message type must
                            be defined in PROTO_FILES or their imports.

While Timo Stamm's answer was instrumental, I still struggled with the paths needed to get protoc to work in a complex repo (e.g. TensorFlow).
In the end, this worked for me:
cat inference_diff.txt | \
  protoc --proto_path="/Users/ben/butter/repos/tensorflow/" \
    --decode=tflite.evaluation.EvaluationStageMetrics \
    $(pwd)/evaluation_config.proto
Here I pipe the binary contents of the file containing the protobuf (inference_diff.txt in my case, generated by following this guide) into protoc. I specify the fully qualified message name (which I got by combining the package tflite.evaluation; with the message name EvaluationStageMetrics), the absolute path of the project root (the TensorFlow repo) as the proto_path, and the absolute path of the file which actually contains the message definition. proto_path is only used for resolving imports, whereas the PROTO_FILE (in this case evaluation_config.proto) is what is used to decode the file.
Example Output
num_runs: 50
process_metrics {
  inference_profiler_metrics {
    reference_latency {
      last_us: 455818
      max_us: 577312
      min_us: 453121
      sum_us: 72573828
      avg_us: 483825.52
      std_deviation_us: 37940
    }
    test_latency {
      last_us: 59503
      max_us: 66746
      min_us: 57828
      sum_us: 8992747
      avg_us: 59951.646666666667
      std_deviation_us: 1284
    }
    output_errors {
      max_value: 122.371696
      min_value: 83.0335922
      avg_value: 100.17548828125
      std_deviation: 8.16124535
    }
  }
}
If you just want to get the numbers in a rush and can't be bothered to fix the paths, you can do
cat inference_diff.txt | protoc --decode_raw
Example output
1: 50
2 {
  5 {
    1 {
      1: 455818
      2: 577312
      3: 453121
      4: 72573828
      5: 0x411d87c6147ae148
      6: 37940
    }
    2 {
      1: 59503
      2: 66746
      3: 57828
      4: 8992747
      5: 0x40ed45f4b17e4b18
      6: 1284
    }
    3 {
      1: 0x42f4be4f
      2: 0x42a61133
      3: 0x40590b3b33333333
      4: 0x41029476
    }
  }
}
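The hex values above are fixed 32-bit and 64-bit fields printed as raw bits: without the schema, protoc cannot tell a float from an integer. If you are curious what --decode_raw is actually doing, here is a minimal Python sketch of the wire format walk (not protoc's implementation; it assumes every length-delimited field is a nested message, whereas protoc falls back to printing strings/bytes when a submessage parse fails):
def read_varint(buf, pos):
    """Decode a base-128 varint starting at pos; return (value, new_pos)."""
    result, shift = 0, 0
    while True:
        byte = buf[pos]
        result |= (byte & 0x7F) << shift
        pos += 1
        if not byte & 0x80:
            return result, pos
        shift += 7

def decode_raw(buf, pos=0, end=None, indent=0):
    """Walk the wire format: each field starts with a varint key encoding
    (field_number << 3) | wire_type, followed by a type-dependent payload."""
    end = len(buf) if end is None else end
    pad = "  " * indent
    while pos < end:
        key, pos = read_varint(buf, pos)
        field, wire_type = key >> 3, key & 7
        if wire_type == 0:    # varint (int32/int64/bool/enum, ...)
            value, pos = read_varint(buf, pos)
            print(f"{pad}{field}: {value}")
        elif wire_type == 1:  # fixed 64-bit: print raw bits, like protoc's hex output
            print(f"{pad}{field}: 0x{int.from_bytes(buf[pos:pos + 8], 'little'):016x}")
            pos += 8
        elif wire_type == 2:  # length-delimited: assume a nested message here
            length, pos = read_varint(buf, pos)
            print(f"{pad}{field} {{")
            decode_raw(buf, pos, pos + length, indent + 1)
            print(f"{pad}}}")
            pos += length
        elif wire_type == 5:  # fixed 32-bit
            print(f"{pad}{field}: 0x{int.from_bytes(buf[pos:pos + 4], 'little'):08x}")
            pos += 4
        else:
            raise ValueError(f"unsupported wire type {wire_type}")

with open("inference_diff.txt", "rb") as f:
    decode_raw(f.read())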

Related

How to use READ - FTPs in Mule

I am trying to process a .csv file that arrived via FTPS. It comes in a binary format (I think), and I want to convert it into a format where I can edit the columns and rows.
I use Anypoint Studio version 7.12.1, FTPS connector 1.7.1, Runtime version 4.4.0 EE.
Read XML:
<ftps:read doc:name="Read" doc:id="34e4139b-606e-4cd0-8650-4e3b2bab01d7" config-ref="FTPS_Config_OneHub" path="test_Dec.csv"/>
I'm doing it with a transform message:
<ee:transform doc:name="Transform Message" doc:id="165835a6-da27-442a-82bd-a3a28552611f">
  <ee:message>
    <ee:set-payload><![CDATA[%dw 2.0
output application/csv
---
read(payload, "application/string")]]></ee:set-payload>
  </ee:message>
</ee:transform>
But I get the following error:
""You called the function 'AnonymousFunction' with these arguments:
1: Array ([{"PK\u0003\u0004\u0014\u0000\b\b\b\u0000": "q",column_1: "^",column_2: ""}, ...)
2: String ("application/string")
But it expects arguments of these types:
1: String | Binary
2: String
3: Object
4| read(payload, "application/string")
^^^^
Trace:
at anonymous::main (line: 4, column: 1)" evaluating expression: "%dw 2.0
output application/csv
---
read(payload, "application/string")"."
The payload with the file looks like this:
PK (\!T [Content_Types].xmlµSËnÂ0ü•È×*6ôPUEƒMCÇ©ô\{“Xø%¯¡ð÷]8”R‰
ƒM†õ9Ç!PEƒMEƒMõà$òEƒMÁEƒEƒMMÒ†äd¦cêD”j);·£ÑPÁgð¹ÎEƒMEƒM'OÐÊ•ÍÕEƒMãî¾H7LÆh’™R‰µ×G¢EƒMEƒMõ^'EƒMEƒM°
Thank you very much!
The input file is not a CSV. It is a zip file. You should first decompress it and then check whether it has a CSV file inside. You can use the Compression Module in Mule to decompress it.
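Incidentally, the [Content_Types].xml entry visible in the dump suggests the file may actually be an Excel .xlsx workbook, which is itself a zip container. Outside of Mule, a quick way to confirm this and peek inside is a minimal Python sketch like the following (the file name is taken from the question; adjust as needed):
import zipfile

# Zip archives start with the bytes "PK\x03\x04" -- exactly the
# "PK\u0003\u0004" visible in the error message above.
with open("test_Dec.csv", "rb") as f:
    print(f.read(4) == b"PK\x03\x04")  # True => it's a zip, not a CSV

# List the archive members and read one of them.
with zipfile.ZipFile("test_Dec.csv") as archive:
    print(archive.namelist())
    first = archive.namelist()[0]
    data = archive.read(first)  # raw bytes of the inner file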

Rewriting a Protocol buffers response in Charles. No 'desc' parameter specified in Content-Type header

I need to rewrite a response of type application/x-protobuf. In Charles I see that the uncompressed response doesn't come in a human-friendly format. It looks like this:
1 {
  3: "328283785jkskj2"
  4: "wejvjwevjjewjkfvj"
  5: "43858934948358934898989"
  6 {
    6: 49
    6: 80
    6: 48
    6: 0x2120323032303031
    6: 0x2029363139333830
  }
  7 {
    1 {
      1: 0x2fb0751a
      2: 0x41cf8894
    }
  }
}
I also see the message "No 'desc' parameter specified in Content-Type header" above the response section in Charles. The request is performed by a third-party library, and the library doesn't come with any *.proto files.
What's the point in concealing the data like that?
Is there any chance to restore (to analyse and rewrite subsequently) the content without its *.proto file?
We have the *.proto file.
Problem solved.

TextFSM Template for Netmiko for "inc" phrase

I am trying to create a TextFSM template to use with the Netmiko library. While it works for most commands, it does not work when I perform an "inc" operation on the network device. The TextFSM index file does not seem to recognize the same command for two different templates. For instance:
If I give the command - show running | inc syscontact
And give another command - show running | inc syslocation
in the TextFSM index, it seems to recognize only the first command, not the second.
I understand that I can get the necessary data with a regex for syscontact and syslocation (via the template), but I want to achieve this with the "inc" command from the device itself. Is there a way this can be done?
You need to escape the pipe in the index file, e.g. sh[[ow]] ru[[nning]] \| inc syslocation.
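For illustration, assuming the standard ntc-templates index layout (Template, Hostname, Platform, Command) and hypothetical template file names, the two commands could then be distinguished like this:
Template, Hostname, Platform, Command
cisco_ios_show_run_syscontact.textfsm, .*, cisco_ios, sh[[ow]] ru[[nning]] \| inc syscontact
cisco_ios_show_run_syslocation.textfsm, .*, cisco_ios, sh[[ow]] ru[[nning]] \| inc syslocation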
There is a different way to parse all the data you want: the TTP module. You can take the code I wrote below as an example and create your own templates.
from ttp import ttp

with open("showSystemInformation.txt") as f:
    data_to_parse = f.read()

ttp_template = """
<group name="Show_System_Information">
System Name : {{System_Name}}
System Type : {{System_Type}} {{System_Type_2}}
System Version : {{Version}}
System Up Time : {{System_Uptime_Days}} days, {{System_Uptime_HR_MIN_SEC}} (hr:min:sec)
Last Saved Config : {{Last_Saved_Config}}
Time Last Saved : {{Last_Time_Saved_Date}} {{Last_Time_Saved_HR_MIN_SEC}}
Time Last Modified : {{Last_Time_Modified_Date}} {{Last_Time_Modifed_HR_MIN_SEC}}
</group>
"""

parser = ttp(data=data_to_parse, template=ttp_template)
parser.parse()

# print result in JSON format
results = parser.result(format='json')[0]
print(results)
Example run:
[appadmin#ryugbz01 Nokia]$ python3 showSystemInformation.py
[
    {
        "Show_System_Information": {
            "Last_Saved_Config": "cf3:\\config.cfg",
            "Last_Time_Modifed_HR_MIN_SEC": "11:46:57",
            "Last_Time_Modified_Date": "2022/02/09",
            "Last_Time_Saved_Date": "2022/02/07",
            "Last_Time_Saved_HR_MIN_SEC": "15:55:39",
            "System_Name": "SR7-2",
            "System_Type": "7750",
            "System_Type_2": "SR-7",
            "System_Uptime_Days": "17",
            "System_Uptime_HR_MIN_SEC": "05:24:44.72",
            "Version": "C-16.0.R9"
        }
    }
]

How to write my own code generator of protobuf

Google protobuf is a nice IDL for RPC, but I want to know how to write my own code generator for it.
The protoc compiler can output a protobuf-formatted description of the .proto file. That way most of the parsing has been done for you already, and you only need to generate the output you want.
The .proto schema for the .proto file description is here:
https://github.com/google/protobuf/blob/master/src/google/protobuf/descriptor.proto
As an additional step, you can make your generator runnable via a "--mygenerator_out=." option on protoc itself:
https://developers.google.com/protocol-buffers/docs/reference/other
Here is one (albeit a bit convoluted) example of how a code generator can be written in Python:
https://github.com/nanopb/nanopb/blob/master/generator/nanopb_generator.py
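To make the shape of such a plugin concrete, here is a minimal Python sketch using the plugin_pb2 module that ships with the protobuf pip package (the plugin name and the output file naming are arbitrary choices for illustration):
#!/usr/bin/env python3
# protoc-gen-mygenerator: a minimal sketch of a protoc plugin in Python.
# protoc sends a CodeGeneratorRequest on stdin and expects a serialized
# CodeGeneratorResponse on stdout.
import sys
from google.protobuf.compiler import plugin_pb2 as plugin

def main():
    request = plugin.CodeGeneratorRequest.FromString(sys.stdin.buffer.read())
    response = plugin.CodeGeneratorResponse()
    for proto_file in request.proto_file:
        out = response.file.add()
        out.name = proto_file.name + ".dump.txt"  # hypothetical naming scheme
        out.content = "messages: " + ", ".join(
            m.name for m in proto_file.message_type)
    sys.stdout.buffer.write(response.SerializeToString())

if __name__ == "__main__":
    main()
Saved as an executable named protoc-gen-mygenerator, it could then be invoked with protoc --plugin=./protoc-gen-mygenerator --mygenerator_out=. test.proto, following the naming rules described in the next answer.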
A protoc plugin is a binary that reads a protobuf message of type CodeGeneratorRequest from standard input and writes a protobuf message of type CodeGeneratorResponse to standard output.
The binary must be called protoc-gen-NAME and can be used by invoking the protoc command with:
protoc --plugin=./path/to/protoc-gen-NAME --NAME_out=./test/generated ./test.proto
Note specifically that names are important. This will not work; it will invoke the Java generator instead:
protoc --plugin=./path/to/protoc-gen-NAME --java_out=./test/generated ./test.proto
This will not work, because the binary does not have the correct name:
protoc --plugin=./path/to/whatever-NAME --NAME_out=./test/generated ./test.proto
In order to process the incoming CodeGeneratorRequest and generate a valid response, your binary must itself be able to parse the protobuf message as per the protocol file plugin.proto from the protocolbuffers repository.
Historically this was difficult to do in a self-contained manner, but you can do it 'end-to-end' entirely in Rust with just the protobuf crate, as this trivial example demonstrates:
[dependencies]
protobuf = "3.0.2"

use protobuf::plugin::{code_generator_response, CodeGeneratorRequest, CodeGeneratorResponse};
use protobuf::Message;
use std::io;
use std::io::{BufReader, Read, Write};

fn main() {
    // Read message from stdin
    let mut reader = BufReader::new(io::stdin());
    let mut incoming_request = Vec::new();
    reader.read_to_end(&mut incoming_request).unwrap();

    // Parse as a request
    let req = CodeGeneratorRequest::parse_from_bytes(&incoming_request).unwrap();

    // Generate the content for each output file
    let mut response = CodeGeneratorResponse::new();
    for proto_file in req.proto_file.iter() {
        let mut output = String::new();
        output.push_str(&format!("// from file: {:?}\n", &proto_file.name));
        output.push_str(&format!("// package: {:?}\n", &proto_file.package));

        for message in proto_file.message_type.iter() {
            output.push_str(&format!("\nmessage: {:?}\n", &message.name));
            for field in message.field.iter() {
                output.push_str(&format!(
                    "- {:?} {:?} {:?}\n",
                    field.type_,
                    field.type_name,
                    field.name(),
                ));
            }
        }

        // Add it to the response
        let mut output_file = code_generator_response::File::new();
        output_file.content = Some(output);
        output_file.name = Some(format!("{:?}/out.txt", &proto_file.name.as_ref().unwrap()));
        response.file.push(output_file);
    }

    // Serialize the response to binary message and return it
    let out_bytes: Vec<u8> = response.write_to_bytes().unwrap();
    io::stdout().write_all(&out_bytes).unwrap();
}
Obviously this trivial example doesn't generate code, just text files, but it shows the basic process. You should also iterate over the services and deal with all the additional properties on each type.
What this basically gives you is an AST matching the .proto files; the codegen side of it can be done however you like.
Helpful hints:
Do not log to stdout in your plugin, e.g. for debugging; the only permitted output to stdout is a protobuf-format CodeGeneratorResponse message.
The plugin does not write files; the protoc command does that. The plugin should generate content and return an array of files, along with their content and metadata.
For more information on plugins, carefully read the plugin.proto file linked above; it has extensive details.

Cannot parse multiple files with logstash input "file"

I am trying to parse a folder with Logstash on Windows, but I am getting some strange results.
I have a file samplea.log with the following content :
1
2
3
4
5
I have a file sampleb.log with the following content:
6
7
8
9
10
This is my config file:
input {
  file {
    path => "C:/monitoring/samples/*.log"
  }
}

output {
  stdout { debug => "true" }
  elasticsearch {
    protocol => "transport"
    host => "127.0.0.1"
  }
}
For unknown reasons, the events displayed on my console are 6, 7, 8, 9, and the same events are stored in Elasticsearch. The last line of sampleb.log is ignored, and the whole of samplea.log is ignored.
Thanks in advance for your help.
To solve this problem, I patched the original Logstash (1.4.2) distribution with the filewatch-0.6.1 gem (the original ships filewatch-0.5.1).
Logstash delimits events with the newline character. Make sure you hit Enter after the last line.
Known bug: the inner library tracks files by inode, but on Windows it uses a function that always returns 0. In your case, I think Logstash reads sampleb.log first, then tries to read samplea.log from offset 6, which is already end-of-file.
Bug tracked here : https://github.com/logstash-plugins/logstash-input-file/issues/2
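Short of patching filewatch as described above, there is no clean fix on 1.4.x, but while testing it can help to make the input re-read files from a known state. The following sketch uses the standard file-input options start_position and sincedb_path (this only resets the tracking state for debugging; it does not fix the inode bug itself):
input {
  file {
    path => "C:/monitoring/samples/*.log"
    start_position => "beginning"             # read existing lines, not just new ones
    sincedb_path => "C:/monitoring/sincedb"   # keep tracking state in a known file
  }
}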
