Format a specific cell with a rounded currency format using gspread in Python

I am currently using this:
sheet.format("C3:C3", {"numberFormat": {"type": "CURRENCY"}})
But it doesn't seem to work.

From your following script, the report that "it doesn't seem to work", and the observation that clicking the cell on the sheet shows a ' added before the number (visible only in the formula bar),
sheet.update("C3", 6000000)
sheet.format("C3:C3", {"numberFormat": {"type": "CURRENCY"}})
In this case, how about using value_input_option as follows?
Modified script:
sheet.update("C3", 6000000, value_input_option="USER_ENTERED")
sheet.format("C3:C3", {"numberFormat": {"type": "CURRENCY"}})
or
sheet.update("C3", 6000000, value_input_option="USER_ENTERED")
sheet.format("C3", {"numberFormat": {"type": "CURRENCY"}})
Reference:
update(range_name, values=None, **kwargs)
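If you also need the amount rounded to a specific currency pattern, the Sheets numberFormat accepts a pattern alongside the type. A minimal sketch based on the modified script above (the spreadsheet name and the "$#,##0" pattern are only examples; adjust them to your locale):
import gspread

# Assumes a service-account credentials file is already configured.
gc = gspread.service_account()
sheet = gc.open("my-spreadsheet").sheet1  # hypothetical spreadsheet name

# USER_ENTERED makes Sheets parse 6000000 as a number instead of text,
# which also avoids the leading apostrophe seen in the formula bar.
sheet.update("C3", 6000000, value_input_option="USER_ENTERED")

# Apply a currency format rounded to whole units.
sheet.format("C3", {"numberFormat": {"type": "CURRENCY", "pattern": "$#,##0"}})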

Related

Summing a parsed field in Grafana Loki

I am trying to sum the count of rows (row_count) being inserted/updated according to my process logs, which look similar to the following line:
{"function": "Data Processing Insert", "module": "Data Processor", "environment": "DEV", "level": "INFO", "message": "Number of rows inserted", "time": "2022-04-29T09:07:02.735Z", "epoch_time": 1651223222.735133, "row_count": "8089"}
I'm able to build the filter to get those lines but haven't been able to perform calculations on row_count. How would I go about doing that?
The Grafana community channel came through. To do what I'm looking for, I had to use unwrap:
sum_over_time({__aws_cloudwatch_log_group="/aws/batch/job"} | json | message=~".+inserted" | unwrap row_count [1h])
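If you need the same number outside the Grafana UI, the query can also be sent to Loki's HTTP API directly. A minimal sketch with Python's requests, assuming a Loki instance reachable at http://localhost:3100 and its standard /loki/api/v1/query endpoint:
import requests

LOKI_URL = "http://localhost:3100/loki/api/v1/query"  # assumption: default local Loki

query = (
    'sum_over_time({__aws_cloudwatch_log_group="/aws/batch/job"}'
    ' | json | message=~".+inserted" | unwrap row_count [1h])'
)

resp = requests.get(LOKI_URL, params={"query": query})
resp.raise_for_status()

# Each entry holds the summed row_count over the last hour for one label set.
for series in resp.json()["data"]["result"]:
    print(series["metric"], series["value"])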

Renaming a field with a special character using a Single Message Transform in Kafka Connect

I'm using an SMT in the Splunk Sink Connector and having trouble renaming a field. I need to change the name of a field from, for example, "metric1" to "metric_name:cpu.usr".
This is what my connector configuration looks like:
SPLUNK_SINK_CONNECTOR_CONFIG='{
  "name": "splunk_sink_connector_1",
  "config": {
    "connector.class": "com.splunk.kafka.connect.SplunkSinkConnector",
    "tasks.max": "1",
    "splunk.indexes": "'"$SPLUNK_INDEX"'",
    "topics": "metrics_for_splunk",
    "splunk.hec.uri": "'"$SPLUNK_HEC_URI"'",
    "splunk.hec.token": "'"$SPLUNK_HEC_TOKEN"'",
    "splunk.hec.raw": "true",
    "offset.flush.interval.ms": 1000,
    "splunk.hec.json.event.formatted": "true",
    "transforms": "renameField,HoistField,insertTS,convertTS,insertEvent",
    "transforms.renameField.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.renameField.renames": "metric1:"metric_name:cpu.usr"",
    "transforms.HoistField.type": "org.apache.kafka.connect.transforms.HoistField$Value",
    "transforms.HoistField.field": "fields",
    "transforms.insertTS.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.insertTS.timestamp.field": "message_timestamp",
    "transforms.convertTS.type": "org.apache.kafka.connect.transforms.TimestampConverter$Value",
    "transforms.convertTS.format": "yyyy-MM-dd hh:mm",
    "transforms.convertTS.target.type": "string",
    "transforms.convertTS.field": "message_timestamp",
    "transforms.insertEvent.type": "org.apache.kafka.connect.transforms.InsertField$Value",
    "transforms.insertEvent.static.field": "event",
    "transforms.insertEvent.static.value": "metric"
  }
}'
When I'm trying to run this connector, I get an error:
Connector configuration is invalid and contains the following 1 error(s):\nInvalid value [metric1:metric_name:cpu.usr] for configuration renames: Invalid rename mapping: metric1:metric_name:cpu.usr\n
If I run this connector without the renameField SMT, everything goes without a hitch.
I understand that the problem is the ":" character in the field name. I've tried to wrap metric_name:cpu.usr like this:
"transforms.renameField.renames": "metric1:'metric_name:cpu.usr'"
and like this
"transforms.renameField.renames": "metric1:'"metric_name:cpu.usr"'"
and like this
"transforms.renameField.renames": "metric1:"metric_name:cpu.usr""
and to use the escape character \ before the ":":
"transforms.renameField.renames": "metric1:metric_name\:cpu.usr"
with no positive effect.
Is it possible at all to use this SMT for renaming if the field name contains a special character?
Or maybe there is a handy workaround?
It seems that this use case should be rather common, but I haven't found anything on the web.
I would be very grateful for advice.

How to use grok patterns to match these logs?

The log content looks like this:
[2018-07-09 11:30:59] [13968] [INFO] [1e74b6b7-fcb2-4dde-a259-7db1de0350ea] run entry() 11ms
[2018-07-09 11:30:59] [13968] [INFO] [1e74b6b7-fcb2-4dde-a259-7db1de0350ea] entry done
The first line logs function-call info with an exec time; the other line is a normal log.
Now I want to match all of them, and if there is an exec time in the line, I want to capture it as well.
I wrote a grok pattern like this:
\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{NUMBER:process_id}\] \[%{LOGLEVEL:loglevel}\] \[%{UUID:request_id}\] %{DATA:message}(\s%{NUMBER:use_time}ms)?
It does not work. The match result is:
{
  "process_id": "13968",
  "loglevel": "INFO",
  "message": "",
  "request_id": "1e74b6b7-fcb2-4dde-a259-7db1de0350ea",
  "timestamp": "2018-07-09 11:30:59"
}
If I change DATA:message to GREEDYDATA:message, it cannot match the exec time.
I solved it by using a raw regexp in place of the whole message field:
\[%{TIMESTAMP_ISO8601:timestamp}\] \[%{NUMBER:process_id}\] \[%{LOGLEVEL:loglevel}\] \[%{UUID:request_id}\] (?<message>.+ (?:(?<use_time>\d+(?:\.\d+)?)ms)?.*)
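For a quick sanity check outside Logstash, the same idea can be reproduced with Python's re module. This is only a rough sketch: the named groups below are simplified stand-ins for grok's TIMESTAMP_ISO8601, NUMBER, LOGLEVEL and UUID patterns, and the trailing optional group plays the role of the (?:...ms)? part:
import re

LOG_RE = re.compile(
    r"\[(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\] "
    r"\[(?P<process_id>\d+)\] "
    r"\[(?P<loglevel>[A-Z]+)\] "
    r"\[(?P<request_id>[0-9a-f-]{36})\] "
    r"(?P<message>.+?)(?: (?P<use_time>\d+(?:\.\d+)?)ms)?$"
)

lines = [
    "[2018-07-09 11:30:59] [13968] [INFO] [1e74b6b7-fcb2-4dde-a259-7db1de0350ea] run entry() 11ms",
    "[2018-07-09 11:30:59] [13968] [INFO] [1e74b6b7-fcb2-4dde-a259-7db1de0350ea] entry done",
]

for line in lines:
    # use_time is "11" for the first line and None for the second one.
    print(LOG_RE.match(line).groupdict())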

Translate multiple files into SVF and get the SVF urn

I am trying to convert a zip folder into SVF. The zip contains the following files:
- an .obj (3D object);
- a .mtl (links the object to its texture);
- a .tif (texture).
I used Postman's 'Request Translation (ZIP to SVF)' to get an urn. Everything seems fine until that step: I get a base64-encoded urn, and the request's result is "created".
But when I try to display it with the Forge Viewer afterwards, I get the following error:
error : 9
According to this, the data does not contain any viewable data.
So I tried to use the Forge extractor instead, and it works perfectly: I can view my model with its texture in the extractor's output.
This post seems to give some instructions, but I do not understand how to link the files together and register them individually for translation.
Has anyone encountered this before?
When calling POST Job for a .zip, make sure to specify the compressedUrn and rootFilename attributes, something like:
curl -X 'POST' -H 'Authorization: Bearer WmzXZq9MATYyfrnOFpYOE75sL5dh' -H 'Content-Type: application/json' -v 'https://developer.api.autodesk.com/modelderivative/v2/designdata/job' -d
'{
  "input": {
    "urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6bW9kZWxkZXJpdmF0aXZlL0E1LnppcA",
    "rootFilename": "file.obj",
    "compressedUrn": true
  },
  "output": {
    "formats": [
      {
        "type": "svf",
        "views": [
          "2d",
          "3d"
        ]
      }
    ]
  }
}'
Compressed sample & prepare for Viewer sample.
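For reference, the same job request can be sent without Postman, for example with Python's requests library. This is just a sketch that reuses the URN and token placeholders from the curl example above:
import requests

ACCESS_TOKEN = "WmzXZq9MATYyfrnOFpYOE75sL5dh"  # placeholder token from the example
JOB_URL = "https://developer.api.autodesk.com/modelderivative/v2/designdata/job"

payload = {
    "input": {
        "urn": "dXJuOmFkc2sub2JqZWN0czpvcy5vYmplY3Q6bW9kZWxkZXJpdmF0aXZlL0E1LnppcA",
        "rootFilename": "file.obj",   # entry point inside the zip
        "compressedUrn": True,        # tells the service the urn points at a zip
    },
    "output": {"formats": [{"type": "svf", "views": ["2d", "3d"]}]},
}

resp = requests.post(
    JOB_URL,
    json=payload,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
print(resp.status_code, resp.json())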
I managed to get my svf urn, but I did not use the zip folder. I had to convert the files inside to another format to make it work.
Thanks for the answer though.

Append an array to a JSON file using jq in Bash

I have a JSON file that looks like this:
{
  "failedSet": [],
  "successfulSet": [{
    "event": {
      "arn": "arn:aws:health:us-east-1::event/AWS_RDS_MAINTENANCE_SCHEDULED_xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxx",
      "endTime": 1502841540.0,
      "eventTypeCategory": "scheduledChange",
      "eventTypeCode": "AWS_RDS_MAINTENANCE_SCHEDULED",
      "lastUpdatedTime": 1501208541.93,
      "region": "us-east-1",
      "service": "RDS",
      "startTime": 1502236800.0,
      "statusCode": "open"
    },
    "eventDescription": {
      "latestDescription": "We are contacting you to inform you that one or more of your Amazon RDS DB instances is scheduled to receive system upgrades during your maintenance window between August 8 5:00 PM and August 15 4:59 PM PDT. Please see the affected resource tab for a list of these resources. \r\n\r\nWhile the system upgrades are in progress, Single-AZ deployments will be unavailable for a few minutes during your maintenance window. Multi-AZ deployments will be unavailable for the amount of time it takes a failover to complete, usually about 60 seconds, also in your maintenance window. \r\n\r\nPlease ensure the maintenance windows for your affected instances are set appropriately to minimize the impact of these system upgrades. \r\n\r\nIf you have any questions or concerns, contact the AWS Support Team. The team is available on the community forums and by contacting AWS Premium Support. \r\n\r\nhttp://aws.amazon.com/support\r\n"
    }
  }]
}
I'm trying to add a new key/value pair under successfulSet[].event (key name: affectedEntities) using jq. I've seen some examples, like here and here, but none of those answers really show how to add a key that may have multiple values (I say "may" because AWS is currently returning one value for the affected entity, but if there are more, I'd like to list them all).
EDIT: The value of the new key that I want to add is stored in a variable called $affected_entities and a sample of that value looks like this:
[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
]
The value could look like this:
[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx",
...
...
...
]
You can use this jq command:
jq '.successfulSet[].event += { "new_key" : "new_value" }' file.json
EDIT:
Try this:
jq --argjson argval "$new_value" '.successfulSet[].event += { "affected_entities" : $argval }' file.json
Test:
sat~$ new_value='[
"arn:aws:acm:us-east-1:xxxxxxxxxxxxxx:certificate/xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxx"
]'
sat~$ jq --argjson argval "$new_value" '.successfulSet[].event += { "affected_entities" : $argval }' file.json
Note that --argjson works with jq 1.5 and above.
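If you are stuck on an older jq, the same merge can be done with a short Python script instead (a sketch, not from the original answer; it reads file.json and takes the ARN list from an affected_entities environment variable):
import json
import os
import sys

# Load the AWS Health response.
with open("file.json") as f:
    doc = json.load(f)

# Assumption: the list of ARNs is exported as a JSON array, e.g.
#   export affected_entities='["arn:aws:acm:...:certificate/..."]'
affected_entities = json.loads(os.environ.get("affected_entities", "[]"))

# Add the new key under every successfulSet[].event.
for item in doc["successfulSet"]:
    item["event"]["affected_entities"] = affected_entities

json.dump(doc, sys.stdout, indent=2)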
