Smartsheet API: cell format string for currency

I'm able to successfully format a cell through the API and the compact format string, cell.format, except for cells with currency formats.
I would like to see an example of the cell.format that works for submitting 123456 as the value/display value and a cell.format string to make it appear as $1,234.56
My best guess, based on the thin documentation I have found, was ",,,,,,,,,,,13,2,1,2,,". It isn't clear whether the decimalCount field needs to be set if the numberFormat is set. Either way, I have been unsuccessful in getting the value to format to anything resembling US currency.

I've successfully tested the scenario you've described as follows:
Get the row that contains a cell with the numeric value I want to format (so that I can inspect the initial value/format in that cell, and later compare it to what is returned after I've updated the cell format).
GET ROW (before cell format update)
request URI: https://api.smartsheet.com/2.0/sheets/3826787173066628/rows/3277051783341956?include=format
abbreviated response showing data for the cell I'm interested in:
{
"id": 3277051783341956,
...
"cells": [
...
{
"columnId": 310981806057348,
"value": 1234.56,
"displayValue": "1234.56",
"format": ",,1,,,,,,,22,,,,,,,"
}
]
}
Initially, the cell value appears unformatted ("1234.56") in the Smartsheet UI.
Update the format of the cell, to format the cell value as US Currency, with 2 decimal places and a comma separator.
UPDATE ROW (cell format)
request URI: https://api.smartsheet.com/2.0/sheets/3826787173066628/rows
request body:
[
{
"id": "3277051783341956",
"cells": [
{"columnId": "310981806057348", "value": 1234.56, "format": ",,,,,,,,,,,13,2,1,2,,"}
]
}
]
Examine the cell data again, now that I've updated the format.
GET ROW (after cell format update)
request URI: https://api.smartsheet.com/2.0/sheets/3826787173066628/rows/3277051783341956?include=format
abbreviated response showing data for the cell I'm interested in:
{
"id": 3277051783341956,
...
"cells": [
...
{
"columnId": 310981806057348,
"value": 1234.56,
"displayValue": "$1,234.56",
"format": ",,,,,,,,,,,13,2,1,2,,"
}
]
}
The cell value now appears as "$1,234.56" in the Smartsheet UI, as desired.
This example verifies that the correct format to use for the scenario you've described is: ",,,,,,,,,,,13,2,1,2,,"
Also, please note that the cell value needs to contain 2 decimal places. So if the raw value you have is 123456 and you want it to appear in the cell as $1,234.56, your integration will need to divide the raw value by 100 and set the result as the cell value in the Update Row (cell) request, in order to get the desired result.
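For reference, here is a minimal Python sketch of the Update Rows call described above. The sheet, row, and column IDs are the ones from the example; the helper name and the use of the requests library are my own, not part of any Smartsheet SDK.

```python
# Sketch: build the Update Rows payload that applies the US-currency
# format ",,,,,,,,,,,13,2,1,2,," and converts a raw integer number of
# cents (123456) into the decimal value Smartsheet expects (1234.56).
CURRENCY_FORMAT = ",,,,,,,,,,,13,2,1,2,,"

def build_currency_row_update(row_id, column_id, raw_cents):
    # Divide by 100 so 123456 displays as $1,234.56 rather than $123,456.00.
    return {
        "id": str(row_id),
        "cells": [{
            "columnId": str(column_id),
            "value": raw_cents / 100,
            "format": CURRENCY_FORMAT,
        }],
    }

payload = [build_currency_row_update(3277051783341956, 310981806057348, 123456)]

# The payload would then be sent as:
# PUT https://api.smartsheet.com/2.0/sheets/3826787173066628/rows
# e.g. requests.put(url, json=payload,
#                   headers={"Authorization": "Bearer <token>"})
```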

The format string you're using is correct, but the value you're using is a whole number. This would be formatted as $123,456.00. The decimalCount property does not change the numeric value of the cell (shifting the decimal point), but instead sets how many digits after the decimal point are displayed.
See the API docs for a complete listing of what's possible. Alternatively, you can set the formatting of a cell from within the web interface, and then retrieve the formatting via the API to see what the correct string would need to be.
If you want to alter the numeric value of a cell, such that it is divided by 100, you may apply a formula, or alter the value internally inside your code and set the new value back into the cell.

Related

Data Operation - Select (JSON Array)

I have a JSON Array with the following structure:
{
"InvoiceNumber": "11111",
"AccountName": "Hospital",
"items": {
"item": [
{
"Quantity": "48.000000",
"Rate": "0.330667",
"Total": "15.87"
},
{
"Quantity": "1.000000",
"Rate": "25.000000",
"Total": "25.00"
}
]
}
}
I would like to use Data Operation "Select" to select invoice numbers with invoice details.
Select:
From body('Parse_Json')?['invoices']?['invoice']
Key: Invoice Number; Map: item()['InvoiceNumber'] - this line works.
Key: Rate; Map: item()['InvoiceNumber']?['items']?['item']?['Rate'] - this line doesn't work.
The error message says "Array elements can only be selected using an integer index". Is it possible to select the Invoice Number AND all the invoice details such as rate etc.? Thank you in advance! Also, I am trying not to use "Apply to each"
You have to use a loop in some form, since the data resides in an array. The only way you can avoid looping is if you know that the number of items in the array will always be the same.
Without looping, you can't be sure that you've processed each item.
To answer your question though, if you want to select a specific item in an array, as the error describes, you need to provide the index.
This is the sort of expression you need. In this one, I am selecting the item at position 1 (arrays start at 0) ...
body('Parse_JSON')?['items']?['item'][1]['rate']
You can always extract just the items object individually, but you'll still need to loop to process each item if the array length isn't always fixed (at two items, for example).
To extract the items on their own, select the items object from the dynamic content.
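Outside of Power Automate, the flattening that Select alone cannot express for a variable-length array is a simple loop. A Python sketch using the JSON shape from the question (key names are taken from it):

```python
import json

# The invoice JSON from the question.
invoice = json.loads("""
{
  "InvoiceNumber": "11111",
  "AccountName": "Hospital",
  "items": {
    "item": [
      {"Quantity": "48.000000", "Rate": "0.330667", "Total": "15.87"},
      {"Quantity": "1.000000", "Rate": "25.000000", "Total": "25.00"}
    ]
  }
}
""")

# Flatten: pair the invoice number with each line item. This is the
# looping step that a Select data operation cannot express on its own.
rows = [
    {"InvoiceNumber": invoice["InvoiceNumber"], "Rate": item["Rate"]}
    for item in invoice["items"]["item"]
]
```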

Apache NiFi: Add column to csv using mapped values

A CSV is brought into the NiFi workflow using a GetFile processor. It has a column named "id", where each id stands for a certain string; there are around 3 distinct ids. For example, if my csv consists of
name,age,id
John,10,Y
Jake,55,N
Finn,23,C
I am aware that Y means York, N means Old and C means Cat. I want a new column with a header named "nick" and have the corresponding nick for each id.
name,age,id,nick
John,10,Y,York
Jake,55,N,Old
Finn,23,C,Cat
Finally, I want a csv with the extra column and the appropriate data for each record. How is this possible using Apache NiFi? Please advise me on the processors that must be used and the configurations that must be changed in order to accomplish this task.
Flow:
add a new nick column
copy over the id to the nick column
look at each line and match the id with its corresponding value
set this value into current line in the nick column
You can achieve this with UpdateRecord followed by either ReplaceText or ReplaceTextWithMapping; I do it with ReplaceText.
UpdateRecord will parse the CSV file, add the new column, and copy over the id value:
Create a CSVReader and keep the default properties. Create a CSVRecordSetWriter and set Schema Access Strategy to Schema Text. Set the Schema Text property to:
{
"type":"record",
"name":"foobar",
"namespace":"my.example",
"fields":[
{
"name":"name",
"type":"string"
},
{
"name":"age",
"type":"int"
},
{
"name":"id",
"type":"string"
},
{
"name":"nick",
"type":"string"
}
]
}
Notice that it has the new column. Finally replace the original values with the mapping:
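The mapping step's configuration isn't reproduced here, so as a sketch of what that final replacement needs to do, here is the same id-to-nick substitution in Python (the lookup table is the one described in the question; the fallback to the raw id is my assumption):

```python
import csv
import io

# Mapping from the question: id -> nick.
NICKS = {"Y": "York", "N": "Old", "C": "Cat"}

def add_nick_column(text):
    """Mimic the UpdateRecord + mapping flow: read the CSV and fill
    the nick column from the id column."""
    reader = csv.DictReader(io.StringIO(text))
    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=["name", "age", "id", "nick"])
    writer.writeheader()
    for row in reader:
        # Unknown ids fall back to the raw id value (an assumption).
        row["nick"] = NICKS.get(row["id"], row["id"])
        writer.writerow(row)
    return out.getvalue()

result = add_nick_column("name,age,id\nJohn,10,Y\nJake,55,N\nFinn,23,C\n")
```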
PS: I noticed you are new to SO, welcome! You have not accepted a single answer in any of your previous questions. Accept them, if they solve your problem, as it will help others to find solutions.

Kibana scripted field which loops through an array

I am trying to use the metricbeat http module to monitor F5 pools.
I make a request to the f5 api and bring back json, which is saved to kibana. But the json contains an array of pool members and I want to count the number which are up.
The advice seems to be that this can be done with a scripted field. However, I can't get the script to retrieve the array. E.g.
doc['http.f5pools.items.monitor'].value.length()
returns the following in the preview results, with the same 'Additional Field' added for comparison:
[
{
"_id": "rT7wdGsBXQSGm_pQoH6Y",
"http": {
"f5pools": {
"items": [
{
"monitor": "default"
},
{
"monitor": "default"
}
]
}
},
"pool.MemberCount": [
7
]
},
If I try
doc['http.f5pools.items']
Or similar I just get an error:
"reason": "No field found for [http.f5pools.items] in mapping with types []"
Googling suggests that the doc construct does not contain arrays?
Is it possible to make a scripted field which can access the set of values? I.e., is my code or the way I'm indexing the data wrong?
If not, is there an alternative approach within Metricbeat? I don't want to have to build a whole new API to do the calculation and add a separate field.
-- update.
Weirdly, it seems that the numeric values in the array do return the expected results, i.e.
doc['http.f5pools.items.ratio']
returns
{
"_id": "BT6WdWsBXQSGm_pQBbCa",
"pool.MemberCount": [
1,
1
]
},
-- update 2
OK, so if the strings in the field have different values then you get all the values; if they are the same you just get one. wtf?
I'm adding another answer instead of deleting my previous one, which doesn't answer the actual question but may still be helpful for someone else in future.
I found a hint in the same documentation:
Doc values are a columnar field value store
Upon googling this further I found this Doc Values Intro, which says that doc values are essentially an "uninverted index" useful for operations like sorting. My hypothesis is that while sorting you essentially don't want the same value repeated, and hence the data structure they use removes those duplicates. That still did not answer why it works differently for strings than for numbers: numbers are preserved, but strings are filtered down to unique values.
This “uninverted” structure is often called a “column-store” in other
systems. Essentially, it stores all the values for a single field
together in a single column of data, which makes it very efficient for
operations like sorting.
In Elasticsearch, this column-store is known as doc values, and is
enabled by default. Doc values are created at index-time: when a field
is indexed, Elasticsearch adds the tokens to the inverted index for
search. But it also extracts the terms and adds them to the columnar
doc values.
Some more deep-diving into doc values revealed it is a compression technique which de-duplicates the values for efficient and memory-friendly operations.
Here's a NOTE given on the link above which answers the question:
You may be thinking "Well that’s great for numbers, but what about
strings?" Strings are encoded similarly, with the help of an ordinal
table. The strings are de-duplicated and sorted into a table, assigned
an ID, and then those ID’s are used as numeric doc values. Which means
strings enjoy many of the same compression benefits that numerics do.
The ordinal table itself has some compression tricks, such as using
fixed, variable or prefix-encoded strings.
Also, if you don't want this behavior, you can disable doc values.
OK, solved it.
https://discuss.elastic.co/t/problem-looping-through-array-in-each-doc-with-painless/90648
So, as I discovered, arrays are prefiltered to only return distinct values (except in the case of ints, apparently?).
The solution is to use params._source instead of doc[]
The answer for why doc doesn't work:
Quoting below:
Doc values are a columnar field value store, enabled by default on all
fields except for analyzed text fields.
Doc-values can only return "simple" field values like numbers, dates,
geo- points, terms, etc, or arrays of these values if the field is
multi-valued. It cannot return JSON objects
Also, important to add a null check as mentioned below:
Missing fields
The doc['field'] will throw an error if field is
missing from the mappings. In painless, a check can first be done with
doc.containsKey('field')* to guard accessing the doc map.
Unfortunately, there is no way to check for the existence of the field
in mappings in an expression script.
Also, here is why _source works
Quoting below:
The document _source, which is really just a special stored field, can
be accessed using the _source.field_name syntax. The _source is loaded
as a map-of-maps, so properties within object fields can be accessed
as, for example, _source.name.first.
Responding to your comment with an example:
The keyword here is: It cannot return JSON objects. The field doc['http.f5pools.items'] is a JSON object.
Try running below and see the mapping it creates:
PUT t5/doc/2
{
"items": [
{
"monitor": "default"
},
{
"monitor": "default"
}
]
}
GET t5/_mapping
{
"t5" : {
"mappings" : {
"doc" : {
"properties" : {
"items" : {
"properties" : {
"monitor" : { <-- monitor is a property of items property(Object)
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
}
}
}
}
}
}
}
}
}
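Putting the fix together: below is a sketch of the scripted field using params._source with a null guard. The Painless source is held as a string (unverified against a live cluster), and the same counting logic is mirrored in plain Python against the example document so it can be checked. The condition for a member being "up" is an assumption, since the sample documents only contain "default".

```python
# Painless source for the scripted field (a sketch): params._source
# exposes the full array of objects that doc[] cannot return, and the
# null check guards documents missing the field.
painless_source = """
def items = params._source.http?.f5pools?.items;
if (items == null) { return 0; }
int up = 0;
for (item in items) {
  if (item.monitor == 'default') { up++; }  // adjust the 'up' test as needed
}
return up;
"""

# The same logic in Python, against the example document from the question.
doc_source = {"http": {"f5pools": {"items": [{"monitor": "default"},
                                             {"monitor": "default"}]}}}

def count_up(source, up_value="default"):
    # Walk the nested maps the way _source is loaded (map-of-maps).
    items = source.get("http", {}).get("f5pools", {}).get("items")
    if items is None:
        return 0
    return sum(1 for item in items if item.get("monitor") == up_value)
```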

Protocol buffers Fieldmask on Collections within resource

If I want to update the "amount" field within a particular element inside the "f_units" collection in the resource below (a protocol buffer), what will the FieldMask look like? Does the field mask operate on array indexes for collections?
{
"f_sel": {
"f_units": [
{
"id": "1",
"amount": {
"coefficient": 1000,
"exponent": -2
}
},
{
"id": "2",
"amount": {
"coefficient": 2000,
"exponent": -2
}
}
]
}
}
Will it be "f_sel.f_units.0.amount"? How can I update the amount using a FieldMask?
As far as I know, there is no way to replace individual elements of a repeated field with an index in a FieldMask.
Instead, you'd update the amount field for the element within f_units you wish to change and set the FieldMask to
"f_sel.f_units"
It would be slightly more efficient to only have to send a delta to the original list, but it would be hard to prevent bugs. For example, what if the proto was modified in the meantime and the specified index (presuming there was a way to specify one) for the repeated field was no longer in range?
As an aside, Google does propose the concept of MergeOptions which defines semantics for how repeated fields are to be handled when merging. Currently, it appears they intend for you either to replace the repeated field in its entirety or append to the end of the destination field. Both of these merging strategies avoid the aforementioned bug that could be caused by specifying an invalid index.
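To make the replace semantics concrete, here is a sketch on plain dicts standing in for the proto messages. apply_mask is a hypothetical helper, not part of the protobuf library; a real merge would operate on generated message classes.

```python
# Sketch: a mask path like "f_sel.f_units" replaces the whole repeated
# field on the target, rather than addressing one indexed element.
def apply_mask(target, update, paths):
    for path in paths:
        parts = path.split(".")
        t, u = target, update
        for p in parts[:-1]:
            t, u = t.setdefault(p, {}), u[p]
        t[parts[-1]] = u[parts[-1]]  # wholesale replacement of the field
    return target

current = {"f_sel": {"f_units": [
    {"id": "1", "amount": {"coefficient": 1000, "exponent": -2}},
    {"id": "2", "amount": {"coefficient": 2000, "exponent": -2}},
]}}
# To change only element id "2", the client still sends the full
# repeated field, with the new amount in place.
update = {"f_sel": {"f_units": [
    {"id": "1", "amount": {"coefficient": 1000, "exponent": -2}},
    {"id": "2", "amount": {"coefficient": 2500, "exponent": -2}},
]}}
patched = apply_mask(current, update, ["f_sel.f_units"])
```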

Algorithm to patch values of an object with the latest patches in an ordered collection of patch sets

I have the following problem
At the start I have an "object", which is basically a dictionary of string to value. I.e. this is not a .NET object; it's a construct that works sort of like a JavaScript object.
Then I have an ordered collection of patchsets (each patchset has a "rank"). Each patchset is a set of patches. Each patch overrides the value of a single field in the "object" or any higher-ranking patch to that field. A patch in the first patchset will override the value of a field. If the next patchset contains a patch referring to the same field it will be overridden again etc.
My goal is to take the object and override its fields with the latest patches only.
e.g. given object like { price: 35, qty: 10 } and sets of patches:
[ { price: 40 }]
[ { qty: 15 }, { price: 20 } ]
at the end I should get { price: 20, qty: 15 }
I am sure there's an algorithm for that, but I am stuck. Any suggestions are most welcome.
Iterate the collection from the end to the front and keep track of the fields already "patched" (for example using a Set), only set fields that weren't set before. You can further optimize if you count the number of set fields and as soon as that count equals the count of fields in the original object you're done.
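A sketch of that back-to-front approach in Python, using the example from the question (the function name is mine):

```python
# Walk the patchsets from highest rank to lowest, taking only the first
# (i.e. latest) patch seen for each field.
def apply_latest_patches(obj, patchsets):
    result = dict(obj)
    patched = set()
    for patchset in reversed(patchsets):        # latest rank first
        for patch in patchset:
            for field, value in patch.items():
                if field not in patched:        # only the latest patch wins
                    result[field] = value
                    patched.add(field)
        if len(patched) == len(obj):            # every field patched: stop early
            break
    return result

obj = {"price": 35, "qty": 10}
patchsets = [
    [{"price": 40}],
    [{"qty": 15}, {"price": 20}],
]
result = apply_latest_patches(obj, patchsets)
```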
