Overwrite the StartTime and EndTime of a span with Golang opentelemetry (otel)

I am producing traces for data that was collected before the tracing takes place. This means that the span creation time does not match up with the real start time.
I have been able to do this in the past using opentracing:
span := tracer.StartSpan(
    name,
    opentracing.StartTime(startTime),
)
I have been unable to find an equivalent in the golang otel libraries. I get the impression that it should be possible based on this quote from the spec:
Start timestamp, default to current time. This argument SHOULD only be set when span creation time has already passed. If API is called at a moment of a Span logical start, API user MUST NOT explicitly set this argument.
The word "default" implies that it should be possible to change this timestamp after span creation.
I have tried using span.SetAttributes:
span.SetAttributes(attribute.Int64("StartTime", 0))
But this (unsurprisingly) sets an attribute:
{
    // ...
    "StartTime": "2022-04-09T12:28:57.271375+10:00",
    "EndTime": "2022-04-09T12:28:57.271417375+10:00",
    "Attributes": [
        {
            "Key": "StartTime",
            "Value": {
                "Type": "INT64",
                "Value": 0
            }
        },
        // ...
    ]
}

You need to use WithTimestamp.
startTime := time.Now()
ctx, span := tracer.Start(ctx, "foo", trace.WithTimestamp(startTime))
// do your work
span.End(trace.WithTimestamp(time.Now()))
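For completeness, here is a minimal self-contained sketch of back-dating both timestamps (the tracer and span names are illustrative, and it assumes a TracerProvider has been configured elsewhere):

package main

import (
    "context"
    "time"

    "go.opentelemetry.io/otel"
    "go.opentelemetry.io/otel/trace"
)

// recordPastSpan emits a span for work that already happened by
// back-dating both the start and the end timestamps.
func recordPastSpan(ctx context.Context, start, end time.Time) {
    tracer := otel.Tracer("example") // assumes a configured TracerProvider
    _, span := tracer.Start(ctx, "historical-work",
        trace.WithTimestamp(start)) // overrides the default start time of "now"
    span.End(trace.WithTimestamp(end)) // overrides the default end time of "now"
}

trace.WithTimestamp is accepted by Start, End, and AddEvent alike, so the same option back-dates either end of the span.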

Related

Filtering JSON based on sub array in a Power Automate Flow

I have some json data that I would like to filter in a Power Automate Flow.
A simplified version of the json is as follows:
[
    {
        "ItemId": "1",
        "Blah": "test1",
        "CustomFieldArray": [
            {
                "Name": "Code",
                "Value": "A"
            },
            {
                "Name": "Category",
                "Value": "Test"
            }
        ]
    },
    {
        "ItemId": "2",
        "Blah": "test2",
        "CustomFieldArray": [
            {
                "Name": "Code",
                "Value": "B"
            },
            {
                "Name": "Category",
                "Value": "Test"
            }
        ]
    }
]
For example, I wish to filter items based on Name = "Code" and Value = "A". I should be left with the item with ItemId 1 in that case.
I can't figure out how to do this in Power Automate. It would be nice to change the data structure, but this is the way the data is, and I'm trying to work out if this is possible in Power Automate without changing the data itself.
Firstly, I had to fix your JSON; it wasn't complete.
Secondly, filtering on sub-array information isn't what I'd call easy. However, to get around the limitations, you can perform a bit of trickery: convert each item's sub-array to a string, then filter with a contains comparison.
First, create a variable of type Array (I called it Array) holding the data, then feed it into a Filter array step.
In the Filter array step, the left-hand side expression is ...
string(item()?['CustomFieldArray'])
... and the contains comparison on the right-hand side is simply a string with the appropriate filter value ...
{"Name":"Code","Value":"A"}
... it's not an expression or a proper object, just a string.
If you need to cater for case-sensitive values, just set everything to lower case using the toLower expression on the left.
That will produce your desired result: the output array is reduced to just the item with ItemId 1.
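For what it's worth, the stringify-and-contains trick is easy to reproduce outside Power Automate. Here is a rough Go sketch that mirrors what the Filter array step does (the type and variable names are mine):

package main

import (
    "encoding/json"
    "fmt"
    "strings"
)

type customField struct {
    Name  string `json:"Name"`
    Value string `json:"Value"`
}

type item struct {
    ItemId           string        `json:"ItemId"`
    Blah             string        `json:"Blah"`
    CustomFieldArray []customField `json:"CustomFieldArray"`
}

func main() {
    data := `[{"ItemId":"1","Blah":"test1","CustomFieldArray":[{"Name":"Code","Value":"A"},{"Name":"Category","Value":"Test"}]},{"ItemId":"2","Blah":"test2","CustomFieldArray":[{"Name":"Code","Value":"B"},{"Name":"Category","Value":"Test"}]}]`

    var items []item
    if err := json.Unmarshal([]byte(data), &items); err != nil {
        panic(err)
    }

    // Serialize each item's sub-array back to JSON and do a plain
    // substring comparison, exactly like the Filter array action.
    needle := `{"Name":"Code","Value":"A"}`
    for _, it := range items {
        raw, _ := json.Marshal(it.CustomFieldArray)
        if strings.Contains(string(raw), needle) {
            fmt.Println("matched:", it.ItemId) // prints "matched: 1"
        }
    }
}

The same caveat applies as in the Flow: the match is textual, so key order and whitespace in the needle must match the serialized form.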

jmeter json path Conditional Extraction at peer level

I'm using JMeter v2.13 and jp@gc - JSON Path Extractor.
Here is my JSON sample:
{
    "views": [{
        "id": 9701,
        "name": " EBS: EAS: IDC (EAS MBT IDC)",
        "canEdit": true,
        "sprintSupportEnabled": true,
        "filter": {
            "id": 55464,
            "name": "Filter for EBS: EAS: IDC & oBill Boar",
            "query": "project = \"EBS: EAS: IDC\"",
            "owner": {},
            "canEdit": false,
            "isOrderedByRank": true,
            "permissionEntries": [{
                "values": [{
                    "type": "Shared with the public",
                    "name": ""
                }]
            }]
        },
        "boardAdmins": {}
    },
    {}
    ]
}
Is it possible to extract views[x].id where there exists an entry views[x].filter.permissionEntries[*].values[*].type that equals Shared with the public?
How would I do it?
Thanks
The JSON Path query would look like this (I admit I didn't try it in JMeter):
$.views[?(@.filter[?(@.permissionEntries[?(@.values[?(@.type == "Shared with the public")])])])].id
Explanation:
We expect the root ($) to have views, and we want its property id. The rest (in []) are conditions to select only the views items that match a predefined condition. Hence $.views[conditions].id
The conditions in this case are nested one within the other, but the main parts are:
We define a condition as a filter ?(...)
We ask the filter to look under the current item (@) for a specific child item (.child); a child may have its own conditions ([...]). Hence @.child[conditions]. That way we move through filter, permissionEntries, values
Finally we get to the values field and filter it for a child type field with the particular value Shared with the public. Hence @.type == "Shared with the public"
As you can see, it's not very intuitive, and JSON Path is a bit limited. If this is a repetitive issue and your JSON is even more complicated, you may consider investing in a scriptable pre-processor (similar to the one explained here).
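If the JSON Path route turns out to be too limited, the same extraction is simple in ordinary code. A rough Go sketch (the struct names are mine, and only the fields needed for the check are modelled):

package main

import (
    "encoding/json"
    "fmt"
)

type permissionValue struct {
    Type string `json:"type"`
}

type permissionEntry struct {
    Values []permissionValue `json:"values"`
}

type viewFilter struct {
    PermissionEntries []permissionEntry `json:"permissionEntries"`
}

type view struct {
    Id     int        `json:"id"`
    Filter viewFilter `json:"filter"`
}

type payload struct {
    Views []view `json:"views"`
}

// sharedWithPublic reports whether any permission entry value has the
// type "Shared with the public".
func sharedWithPublic(f viewFilter) bool {
    for _, entry := range f.PermissionEntries {
        for _, v := range entry.Values {
            if v.Type == "Shared with the public" {
                return true
            }
        }
    }
    return false
}

func main() {
    doc := `{"views":[{"id":9701,"filter":{"permissionEntries":[{"values":[{"type":"Shared with the public"}]}]}},{}]}`

    var p payload
    if err := json.Unmarshal([]byte(doc), &p); err != nil {
        panic(err)
    }
    for _, v := range p.Views {
        if sharedWithPublic(v.Filter) {
            fmt.Println(v.Id) // prints 9701
        }
    }
}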

Jelastic API environment create trigger data

The Jelastic API method environment.Trigger.AddTrigger takes "data" as a parameter, but I cannot find all the different possible variables that I can use in it. The Jelastic API docs just say "data : string, information about trigger". Is this "data" documented anywhere else?
There are some JPS JavaScript/Java examples that I have found that point me in the right direction, but more information would be nice to have:
https://github.com/jelastic-jps/magento-cluster/blob/master/scripts/addTriggers.js
https://docs.cloudscripting.com/0.99/examples/horizontal-scaling/
https://github.com/jelastic-jps/basic-examples/blob/master/automatic-horizontal-scaling/scripts/enableAutoScaling.js
The environment.Trigger.AddTrigger method requires a set of parameters:
name - name of a notification trigger
nodeGroup - target node group (you can apply the trigger to any node group within the chosen environment)
period - load period for nodes
condition - rules for monitoring resources:
    type - comparison sign; the available values are GREATER and LESS
    value - percentage of a resource that is monitored
    resourceType - types of resources that are monitored by a trigger, namely CPU, Memory (RAM), Network, Disk I/O, and Disk IOPS
    valueType - measurement value; here, PERCENTAGES is the only possible value, and the available range is from 0 up to 100
actions - object to describe a trigger action:
    type - trigger action; the available values are NOTIFY, ADD_NODE, and REMOVE_NODE
    customData:
        notify - alert notification sent to a user via email
The following code shows how a new trigger can be created:
{
    "type": "update",
    "name": "AddTrigger",
    "onInstall": {
        "environment.trigger.AddTrigger": {
            "data": {
                "name": "new alert",
                "nodeGroup": "sqldb",
                "period": "10",
                "condition": {
                    "type": "GREATER",
                    "value": "55",
                    "resourceType": "MEM",
                    "valueType": "PERCENTAGES"
                },
                "actions": [
                    {
                        "type": "NOTIFY",
                        "customData": {
                            "notify": false
                        }
                    }
                ]
            }
        }
    }
}
You can find more information about events and other CloudScripting language features here.

Which is the better design for this API response

I'm trying to decide on the best format of response for my API. I need to return a reports response which provides information both on the report itself and on the fields contained in it. Fields can be of differing types, such as SelectList, TextArea, Location, etc.
They each use different properties, so "SelectList" might use "Value" to store its string value while "Location" might use "ChildItems" to hold "Longitude", "Latitude", and so on.
Here's what I mean:
"ReportList": [
{
"Fields": [
{
"Id": {},
"Label": "",
"Value": "",
"FieldType": "",
"FieldBankFieldId": {},
"ChildItems": [
{
"Item": "",
"Value": ""
}
]
}
]
}
The problem with this is that I'm expecting the users to know when a value is supposed to be null. So I'm expecting a person looking to extract the value from "Location" to extract it from "ChildItems" and not "Value". The benefit, however, is that it's much easier to query than the alternative, which is the following:
"ReportList": [
{
"Fields": [
{
"SelectList": [
{
"Id": {},
"Label": "",
"Value": "",
}
]
"Location": [
{
"Id": {},
"Label": "",
"Latitude": "",
"Longitude": "",
"etc": "",
}
]
}
]
}
So this one is a reports list that contains a list of fields, which in turn contains a list for every field type I have (15 or something like that). This is opposed to just having a list of reports, each with a list of fields carrying a "FieldType" enum, which I think is fairly easy to manipulate.
So the Question: Which format is best for a response? Any alternatives and comments appreciated.
EDIT:
To query all fields by field type in a report and get their values, the first way would go something like this:
foreach (field in fields)
{
    switch (field.FieldType)
    {
        case FieldType.Location:
            var locationValue = field.ChildItems;
            break;
        case FieldType.SelectList:
            var valueSelectList = field.Value;
            break;
    }
}
The second one would be like:
foreach (field in fields)
{
    foreach (location in field.Locations)
    {
        var latitude = location.Latitude;
    }
    foreach (selectList in field.SelectLists)
    {
        var value = selectList.Value;
    }
}
I think the right answer is the first one, with the switch statement. It makes it easier to query for things like "get me the value of the field with the id of this GUID"; it just means putting it through a big switch statement.
I went with the first one because it's easier to query for the most common use case. I'll expect the client code to map it into its own schema if it wants to change it.
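To make the trade-off concrete, here is a small Go sketch of the first design (all names are illustrative). The FieldType discriminator keeps value extraction down to a single switch:

package main

import "fmt"

// FieldType tells the consumer which properties of Field are meaningful.
type FieldType string

const (
    FieldTypeSelectList FieldType = "SelectList"
    FieldTypeLocation   FieldType = "Location"
)

type ChildItem struct {
    Item  string `json:"Item"`
    Value string `json:"Value"`
}

type Field struct {
    Id         string      `json:"Id"`
    Label      string      `json:"Label"`
    Value      string      `json:"Value"`
    FieldType  FieldType   `json:"FieldType"`
    ChildItems []ChildItem `json:"ChildItems"`
}

// valueOf extracts the "real" value of a field by switching on the
// discriminator, as described above.
func valueOf(f Field) interface{} {
    switch f.FieldType {
    case FieldTypeLocation:
        return f.ChildItems // latitude/longitude live in ChildItems
    case FieldTypeSelectList:
        return f.Value
    default:
        return f.Value
    }
}

func main() {
    fields := []Field{
        {Id: "a", FieldType: FieldTypeSelectList, Value: "chosen"},
        {Id: "b", FieldType: FieldTypeLocation, ChildItems: []ChildItem{
            {Item: "Latitude", Value: "51.5"},
            {Item: "Longitude", Value: "-0.1"},
        }},
    }
    for _, f := range fields {
        fmt.Println(f.Id, valueOf(f))
    }
}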

Order by nested object in RethinkDB using Go driver

How is it possible, using the Go driver, to fetch data from RethinkDB ordered by a field of a nested object?
So let's imagine I have such json in my table:
[
    {
        "id": "1",
        "date": "2001-01-15",
        "time": {
            "begin": "09:00",
            "end": "10:30"
        }
    },
    {
        "id": "2",
        "date": "2001-01-16",
        "time": {
            "begin": "08:30",
            "end": "10:30"
        }
    }
]
Go model is:
type MyTime struct {
    Begin time.Time `json:"begin"`
    End   time.Time `json:"end"`
}

type Something struct {
    Id   string    `json:"id"`
    Date time.Time `json:"date"`
    Time MyTime    `json:"time"`
}
An example of how it is possible to order by id:
var result []Something
db.Table("someTable").OrderBy("id").Run(session).All(&result)
I tried to order by the begin time like this (thinking that the approach is the same as in ArangoDB, but apparently it is not):
var result []Something
db.Table("someTable").OrderBy("time.begin").Run(session).All(&result)
I saw an example on the official site of how it works using the native JavaScript driver:
Example: Use nested field syntax to sort on fields from subdocuments. (You can also
create indexes on nested fields using this syntax with indexCreate.)
r.table('user').orderBy(r.row('group')('id')).run(conn, callback)
But it is not really clear how to transform it to Go.
Any idea how to make it work?
You can use a function, something like this:
db.Table("someTable").OrderBy(func(row Term) Term {
    return row.Field("time").Field("begin")
}).Run(session).All(&result)
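A fuller, self-contained version might look like this (assuming the gorethink driver; the import path varies with the driver version, and the address and table name are illustrative):

package main

import (
    "log"

    r "gopkg.in/gorethink/gorethink.v4"
)

func main() {
    session, err := r.Connect(r.ConnectOpts{Address: "localhost:28015"})
    if err != nil {
        log.Fatal(err)
    }

    // Order by the nested field time.begin, mirroring the JavaScript
    // r.row('time')('begin') example from the docs.
    cursor, err := r.Table("someTable").OrderBy(func(row r.Term) r.Term {
        return row.Field("time").Field("begin")
    }).Run(session)
    if err != nil {
        log.Fatal(err)
    }
    defer cursor.Close()

    var result []map[string]interface{}
    if err := cursor.All(&result); err != nil {
        log.Fatal(err)
    }
    log.Println(result)
}

The driver also exposes r.Row, so OrderBy(r.Row.Field("time").Field("begin")) should work as well.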
