I have a folder of files containing JSON descriptors of protobuf schemas, which appear to be formatted in this way:
https://github.com/protobufjs/protobuf.js#using-json-descriptors
// awesome.json
{
  "nested": {
    "awesomepackage": {
      "nested": {
        "AwesomeMessage": {
          "fields": {
            "awesomeField": {
              "type": "string",
              "id": 1
            }
          }
        }
      }
    }
  }
}
I am interested in converting them all to the original .proto schema file. I'm currently doing them by hand, which is a little tedious and error-prone, so I'm curious: if the JSON descriptor is intended to be functionally equivalent to the original .proto file, is there any existing way to automatically batch convert them? There are many examples of how to convert .proto to JSON descriptors but I couldn't find anything for doing the reverse.
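For what it's worth, the pbjs tool from protobufjs-cli should be able to do this directly, since it accepts JSON descriptors as input and has proto2/proto3 output targets, e.g. pbjs -t proto3 awesome.json > awesome.proto in a shell loop over the folder. Failing that, the descriptor structure is regular enough to walk by hand. Below is a minimal sketch, assuming descriptors contain only packages and flat messages with scalar fields (no enums, services, oneofs, nested message types, or options):

```javascript
// Walk a protobuf.js JSON descriptor and emit proto3 text.
// Assumption: a node with "fields" is a message; a node with "nested"
// is a package/scope. Anything else (enums, services, ...) is ignored.
function toProto(node, name, indent) {
  const pad = '  '.repeat(indent);
  if (node.fields) {
    // A message: emit each field as "type name = id;"
    const fields = Object.entries(node.fields)
      .map(([fname, f]) => `${pad}  ${f.type} ${fname} = ${f.id};`)
      .join('\n');
    return `${pad}message ${name} {\n${fields}\n${pad}}`;
  }
  if (node.nested) {
    // A package (top level) or nested scope: recurse into children.
    const body = Object.entries(node.nested)
      .map(([cname, child]) => toProto(child, cname, indent))
      .join('\n');
    return indent === 0 && name
      ? `syntax = "proto3";\npackage ${name};\n\n${body}`
      : body;
  }
  return '';
}

// The example descriptor from the question:
const descriptor = {
  nested: {
    awesomepackage: {
      nested: {
        AwesomeMessage: {
          fields: { awesomeField: { type: 'string', id: 1 } }
        }
      }
    }
  }
};

// The root holds a single package; unwrap it and emit the file.
const [pkgName, pkg] = Object.entries(descriptor.nested)[0];
const proto = toProto(pkg, pkgName, 0);
console.log(proto);
```

For batch conversion, the same function can be run over every .json file in the folder with fs.readdirSync; but if pbjs handles your descriptors, prefer it, since it covers the full descriptor format.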
I am using Amazon Style Dictionary to generate SCSS variables from JS definition files. Given the following input:
module.exports = {
  color: {
    border: {
      focus: { value: '#0071ff' }
    }
  }
};
I would like to generate not one output variable for this, but two (HEX and RGB). Is it possible to write a custom value transformer that spits out multiple values for a given input? Or do I need to run a separate pipeline for this use case?
This could be the colors.json configuration you provide to your StyleDictionary build:

{
  "source": ["sources/colors.js"],
  "platforms": {
    "css-rgb": {
      "transforms": ["color/rgb"],
      "buildPath": "dist/rgb/",
      "files": [
        {
          "destination": "colors-rgb.css",
          "format": "css/variables"
        }
      ]
    },
    "css-hex": {
      "transforms": ["color/hex"],
      "buildPath": "dist/hex/",
      "files": [
        {
          "destination": "colors-hex.css",
          "format": "css/variables"
        }
      ]
    }
  }
}
This reads the tokens from your sources/colors.js file and creates two subfolders in the dist output folder: rgb containing colors-rgb.css and hex containing colors-hex.css.
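As for the original question: a value transform maps one token value to one new value, which is why the two-platform setup works — each platform applies its own transform (color/rgb and color/hex are both built-in) to the same source tokens. If you did need a custom one, the conversion itself is just hex parsing. A sketch, with the registration call shown as a comment (v3-era API; the transform name and matcher are illustrative):

```javascript
// A sketch of the conversion a color/rgb value transform performs:
// turn a 6-digit #rrggbb hex token value into an rgb() string.
function hexToRgb(hex) {
  const n = parseInt(hex.slice(1), 16);  // strip '#' and parse as one number
  return `rgb(${(n >> 16) & 0xff}, ${(n >> 8) & 0xff}, ${n & 0xff})`;
}

// Hypothetical registration with the real library:
//
// StyleDictionary.registerTransform({
//   name: 'color/rgb-custom',
//   type: 'value',
//   matcher: (token) => token.attributes.category === 'color',
//   transformer: (token) => hexToRgb(token.value)
// });

console.log(hexToRgb('#0071ff'));  // rgb(0, 113, 255)
```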
I am trying to analyze a field and build a query in Elasticsearch that finds similar filenames based on the filename provided in the query.
For Example: If I have a filename 'invoice_1234.pdf', I'd like to find more filenames like 'invoice_3456.pdf', but not find files like 'inv_123456.pdf' or 'bigCompany.invoice.1234.pdf'.
I should probably expand upon this. My index stores the filename:
{
  "documents": {
    "mappings": {
      "properties": {
        "filename": {
          "type": "text",
          "fields": {
            "reverse": {
              "type": "text",
              "analyzer": "filename_reverse"
            }
          }
        }
      }
    }
  }
}
In code, a filename may be called as part of the query:
GET /documents/_search
{
  "query": {
    "bool": {
      "should": [
        {
          "match": {
            "filename": {
              "query": "potentially_any.filename.pdf"
            }
          }
        }
      ]
    }
  }
}
What I am trying to figure out is how to build a field analyzer/tokenizer that can take any filename passed in and find more filenames like it.
I tried the approach described in Filename search with ElasticSearch, and found it excellent for finding filenames when I query a known part of the filename, but what I actually want is to find other related filenames based on a whole filename that has been passed in. When I ran tests using those methods, my results were typically all or nothing: I would either get every file as a match (usually because they all end in 'pdf' or contain 'inv'), or only the exact filename from the query.
Am I missing something in my approach, or is this a capability beyond Elasticsearch?
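One direction worth sketching (untested; the analyzer, tokenizer, and field names are illustrative): analyze the filename into its alphabetic parts only, so that 'invoice_1234.pdf' and 'invoice_3456.pdf' both produce the tokens [invoice, pdf] while the numeric IDs are discarded:

```json
PUT /documents
{
  "settings": {
    "analysis": {
      "tokenizer": {
        "letters_only": {
          "type": "pattern",
          "pattern": "[^a-zA-Z]+"
        }
      },
      "analyzer": {
        "filename_words": {
          "tokenizer": "letters_only",
          "filter": ["lowercase"]
        }
      }
    }
  },
  "mappings": {
    "properties": {
      "filename": {
        "type": "text",
        "fields": {
          "words": {
            "type": "text",
            "analyzer": "filename_words"
          }
        }
      }
    }
  }
}
```

A match query against filename.words with a high "minimum_should_match" then requires most of the query's tokens to appear in a candidate. Note the limitation: 'bigCompany.invoice.1234.pdf' contains both query tokens and would still match, only scoring lower via length normalization, so fully excluding it would need a symmetric comparison (e.g. also checking the candidate's token count), which a plain match query does not do.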
GraphQL lets you ask for specific fields; the response contains only the fields you asked for. For example, a GraphQL query like:
{
  hero {
    name
  }
}
will return:
{
  "data": {
    "hero": {
      "name": "R2-D2"
    }
  }
}
whereas a GraphQL query like:
{
  hero {
    name
    friends {
      name
    }
  }
}
would return:
{
  "data": {
    "hero": {
      "name": "R2-D2",
      "friends": [
        {
          "name": "Luke"
        },
        {
          "name": "Han Solo"
        },
        {
          "name": "Leia"
        }
      ]
    }
  }
}
Is there a similar mechanism/library/pattern that can be used in gRPC to achieve the same?
FieldMask is the similar mechanism in protobuf. It is a list of field paths to retain, so the first example would be paths: ["hero.name"] and the second would be paths: ["hero.name", "hero.friends.name"].
It is probably most frequently used to specify which fields should be changed in an update. But it can equally be used to specify the fields that should be returned.
The server can either process the FieldMask directly (e.g., only using the listed fields in a SELECT SQL query), or it can retrieve all the information and filter the result using FieldMaskUtil.merge() to copy just the requested fields into a new proto message to return to the client.
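The filtering step is easy to picture in plain JavaScript. The sketch below is illustrative, not the protobuf library API: a mask is a list of dotted paths, and applying it copies only those paths into a fresh object, descending element-wise into repeated (array) fields:

```javascript
// Keep only the dotted paths listed in the mask, analogous to what
// FieldMaskUtil.merge() does with proto messages.
function applyFieldMask(obj, paths) {
  const out = {};
  for (const path of paths) {
    copyPath(obj, out, path.split('.'));
  }
  return out;
}

function copyPath(src, dst, keys) {
  const [head, ...rest] = keys;
  if (!(head in src)) return;           // path not present: nothing to copy
  if (rest.length === 0) {
    dst[head] = src[head];              // leaf: copy the value as-is
  } else if (Array.isArray(src[head])) {
    // Repeated field: apply the remaining path to every element.
    dst[head] = dst[head] || src[head].map(() => ({}));
    src[head].forEach((item, i) => copyPath(item, dst[head][i], rest));
  } else {
    dst[head] = dst[head] || {};
    copyPath(src[head], dst[head], rest);
  }
}

const full = {
  hero: {
    name: 'R2-D2',
    id: 42,
    friends: [{ name: 'Luke', id: 1 }, { name: 'Han Solo', id: 2 }]
  }
};

const masked = applyFieldMask(full, ['hero.name', 'hero.friends.name']);
console.log(JSON.stringify(masked));
// {"hero":{"name":"R2-D2","friends":[{"name":"Luke"},{"name":"Han Solo"}]}}
```

This mirrors the GraphQL examples above: the mask selects name and friends.name, and the ids never reach the client.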
I indexed all my files in Elasticsearch and I want to do a range search on the filename. The filenames are currently stored as text and keyword, but I'm open to changing the mapping.
In some cases, I have files named image-1.jpg, image-2.jpg, ..., image-200.jpg.
I would like to be able to get all the files from image-1.jpg to image-9.jpg.
I'm currently using the following request to get those files:
GET /file/_search
{
  "query": {
    "range": {
      "filename": {
        "from": "image-1",
        "to": "image-9"
      }
    }
  }
}
The output should be 9 images:
image-1.jpg, image-2.jpg,..., image-9.jpg
But I also get results like:
image-21.jpg, image-22.jpg, image-23.jpg, image-30.jpg, ...
Is there a query or a mapping to fix my issue?
I hope the question is not very vague....
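The underlying issue is that a range query on a text/keyword field compares strings lexicographically, and "image-21.jpg" sorts between "image-1" and "image-9" (because '2' < '9' at the character where they diverge). Two common workarounds are indexing the numeric part into an integer subfield and running a numeric range query, or zero-padding the number at index time so string order matches numeric order. The padding approach is a one-liner (illustrative sketch):

```javascript
// Zero-pad the first run of digits in a filename so lexicographic
// order matches numeric order: image-2.jpg -> image-002.jpg.
// The width (3 here) must cover the largest expected number.
function padFilename(name, width) {
  return name.replace(/\d+/, (n) => n.padStart(width, '0'));
}

console.log(padFilename('image-2.jpg', 3));   // image-002.jpg
console.log(padFilename('image-21.jpg', 3));  // image-021.jpg
```

After reindexing with padded names, the original range query works, since "image-021.jpg" no longer falls between "image-001" and "image-009".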
I have a bunch of logs in ES, and I'm trying to aggregate a count over a specific field, in this case file. The mapping is:
http://localhost:9200/set_1/record/_mapping
"properties": {
  "tags": {
    "type": "object",
    "dynamic": false,
    "properties": {
      "all": { "type": "string", "analyzer": "my_analyzer", "store": true },
      "some_other_stuff_that's_not_important": .....
    }
  },
  "file": {
    "type": "nested",
    "dynamic": false,
    "include_in_root": true,
    "properties": {
      "doctype": { "type": "string", "index": "not_analyzed", "include_in_all": false },
      "some_other_stuff": { .... }
    }
  }
}
So right now I want to filter everything with the tag "Downloaded", and I want to count the number of files in those filtered logs:
{
  "query": {
    "bool": {
      "must": [
        {
          "term": {
            "tags.all": "Downloaded"
          }
        }
      ]
    }
  }
}
Now, file will be a list of the different files included in each log, and I want to know how to aggregate so that I count all of the files while excluding certain file types, say .docx.
How should I form the aggregation query?
So I aggregated with key=file.type and put an exclude argument in the aggregation query, exclude=["type1", "type2"], where type1 and type2 are the file types to skip.
This is for ES 1.7, by the way.
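For reference, the filtered count could be sketched roughly like this (untested; field names follow the mapping above, which calls the field file.doctype rather than file.type, and note that in ES 1.x the terms aggregation's exclude takes a regex pattern, while arrays of exact values arrived in later versions):

```json
GET /set_1/record/_search
{
  "query": {
    "bool": {
      "must": [
        { "term": { "tags.all": "Downloaded" } }
      ]
    }
  },
  "size": 0,
  "aggs": {
    "files": {
      "nested": { "path": "file" },
      "aggs": {
        "by_type": {
          "terms": {
            "field": "file.doctype",
            "exclude": "docx"
          }
        }
      }
    }
  }
}
```

Because file is a nested field, the terms aggregation is wrapped in a nested aggregation so it runs over the nested documents; the per-bucket doc_count values then give the file counts by type, minus the excluded ones.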