How to add a value description with API Blueprint?

Is there any way to add a description to the possible values of a URI parameter?
    ## Search Items [/items{?s}]
    ### Get items [GET]
    + Parameters
        + s (optional, values) ... Sort results by
            + Values
                + `1 - price`
                + `4 - date`
If I use the approach given above, then I cannot define example and default values (e.g. `4`), since it expects the full value (`4 - date`).

No, there is currently no way to add a description to the possible values of URI parameters.
Neither

    + Values
        + `A - means something`
        + `B`
        + `C`

nor

    + Values
        + `A` means something
        + `B`
        + `C`

will work correctly. I filed a feature request under API Blueprint's repository. If you want to be part of the design process and help us find the best solution to your problem, you can track it and comment on it.
Using tables
When in trouble with API Blueprint, you can always use plain old Markdown in the endpoint's description to supplement or substitute what's missing. For example, you can freely use tables as an addition to, or a replacement for, the Values section:
    # My API

    ## Sample [/endpoint{?id}]
    Description.

    | Value | Meaning        |
    |-------|:--------------:|
    | A     | Alaska         |
    | B     | Bali           |
    | C     | Czech Republic |

    + Parameters
        + id (string)

            Description...

            | Value | Meaning        |
            |-------|:--------------:|
            | A     | Alaska         |
            | B     | Bali           |
            | C     | Czech Republic |

            Description...

            + Values
                + `A`
                + `B`
                + `C`
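Applied back to the original sorting parameter, the same workaround can keep the Values entries as bare values (so example and default values still parse) and move their meanings into a Markdown table in the parameter description. A rough sketch only; the number type and the default of 4 with an example of 1 are assumptions:

    ## Search Items [/items{?s}]
    ### Get items [GET]
    + Parameters
        + s = `4` (optional, number, `1`) ... Sort results by

            | Value | Meaning |
            |-------|---------|
            | 1     | price   |
            | 4     | date    |

            + Values
                + `1`
                + `4`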

Related

PowerAutomate - replace nth occurrence of character

I'm attempting to parse an email body into an Excel file.
After some manipulation, my current output is an array, where each line is data related to a product:

    [
      "Periods: 01.01.2023 - 01.02.2023 | Code: 111 | Code2: 1111 | product-name",
      "Periods: 01.01.2023 - 01.02.2023 | Code: 222 | Code2: 2222 | product-name2"
    ]

I need to replace the 3rd occurrence of " | " with " | Product: ", so I can get a Product field before the product name.
I've tried Apply to each -> current item -> various ways to find the 3rd occurrence and replace it, but I can't get it to work.
Any suggestions?
You should be able to loop through each item and perform a simple replace expression like this:

    replace(item(), split(item(), ' | ')[3], concat('Product: ', split(item(), ' | ')[3]))

That should get you across the line. Of course, I'm basing my answer off the limited information you provided.
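To illustrate how the expression behaves on the first item from the question (this assumes the product name does not also appear earlier in the string, since replace() substitutes every occurrence of the matched text):

    item()                   = "Periods: 01.01.2023 - 01.02.2023 | Code: 111 | Code2: 1111 | product-name"
    split(item(), ' | ')[3]  = "product-name"
    replace(...)             = "Periods: 01.01.2023 - 01.02.2023 | Code: 111 | Code2: 1111 | Product: product-name"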

Missing results after reducing the visualization size

I would like to count identical log messages in Kibana. With the Size set to 200, there are two results that occur twice. But if I lower the Size to 5, I don't see those two.
It should show me the top 5 rows, ordered by count. I expected something like this:
| LogMessage | Count |
|------------|-------|
| xx | 2 |
| yy | 2 |
| zz | 1 |
| qq | 1 |
| ww | 1 |
What am I missing?
The issue is the little warning about Analyzed Field. You should use a keyword field.
With analyzed fields, the analyzer breaks down the original string during indexing into sub-strings to facilitate search use cases (handling things like word boundaries, punctuation, case insensitivity, declension, etc.).
A keyword field is just a simple string.
What's probably happening is that you have data like
| LogMessage | Count |
|------------|-------|
| a | 1 |
| b | 1 |
| c x | 1 |
| d x | 1 |
With an analyzed field, if you have a terms agg of size 2 you might (depending on the sort order) get `a` and `b`.
With a larger terms agg, the top sub-string will be `x`, because the analyzer has split "c x" and "d x" into separate terms and `x` occurs twice.
This is a simplified example, but I hope it gets the issue across.
The Terms Aggregation docs have a good section about how to avoid/solve this issue.
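If the field is not already mapped with a keyword version, a common pattern is a text field with a keyword sub-field, and then running the terms aggregation on that sub-field. A minimal sketch of the idea; the my-logs index name, the LogMessage.raw sub-field, and the top_messages aggregation name are assumptions, not taken from the question:

    PUT my-logs
    {
      "mappings": {
        "properties": {
          "LogMessage": {
            "type": "text",
            "fields": { "raw": { "type": "keyword" } }
          }
        }
      }
    }

    GET my-logs/_search
    {
      "size": 0,
      "aggs": {
        "top_messages": {
          "terms": { "field": "LogMessage.raw", "size": 5 }
        }
      }
    }

With the keyword sub-field, each whole log message is a single term, so the top 5 buckets by count match the table you expected.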

Using a non-literal value in Apache Derby's OFFSET clause

Using Derby, is it possible to offset by a value from the query rather than an integer literal?
When I run this query, it complains about the value I've given to the offset clause.
    select
        PRIZE."NAME" as "Prize Name",
        PRIZE."POSITION" as "Position",
        (select
             PARTICIPANT."NAME"
         from PARTICIPANT
         order by POINTS desc
         offset PRIZE."POSITION" rows fetch next 1 row only -- notice I'm trying to pass in a value to offset by
        ) as "Participant"
    from PRIZE
With the expectation that the results would look like this:
| Prize Name | Position | Participant |
|--------------|----------|---------------|
| Gold medal | 1 | Mari Loudi |
| Silver medal | 2 | Keesha Vacc |
| Bronze medal | 3 | Melba Hammit |
| Hundredth | 100 | James Thornby |
The documentation suggests that it's possible to pass in a value from Java code, but I'm trying to use a value from the query itself.
By the way, this is just an example schema to illustrate the point.
I know there are other ways to achieve the ranking, but I'm specifically interested if there's a way to pass values to the offset clause.
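For reference, what the documentation describes is a dynamic parameter in the OFFSET clause whose value is supplied from Java through JDBC. A minimal sketch under that assumption (the class and method names, the connection handling, and the idea of reading PRIZE."POSITION" into offsetValue on the Java side are hypothetical, and it implies running the inner query once per prize rather than correlating it inside a single statement):

    import java.sql.*;

    class PrizeLookup {
        // Sketch: Derby accepts a dynamic parameter (?) in the OFFSET clause,
        // so the offset value can be supplied from Java code.
        static String participantAtOffset(Connection connection, int offsetValue) throws SQLException {
            String sql = "select PARTICIPANT.\"NAME\" from PARTICIPANT "
                       + "order by POINTS desc "
                       + "offset ? rows fetch next 1 row only";
            try (PreparedStatement ps = connection.prepareStatement(sql)) {
                ps.setInt(1, offsetValue);  // e.g. a value read from PRIZE."POSITION" beforehand
                try (ResultSet rs = ps.executeQuery()) {
                    return rs.next() ? rs.getString(1) : null;
                }
            }
        }
    }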

PageObject/Cucumber String being input incorrectly

In my scenario outline I have the below:

    Examples:
      | user   | password | from  | to    | amount | date      | message           |
      | joel10 | lolpw12  | bankA | bankB | $100   | 1/30/2015 | Transfer Success. |

In my step definitions I have the below:

    And(/^the user inputs fields (.*), (.*), (.*)$/) do |from, to, amount|
      on(TransferPage).from = /#{from}/
      on(TransferPage).to = /#{to}/
      on(TransferPage).amount = /#{amount}/
      on(TransferPage).date = /#{date}/
    end
The FROM, TO, and AMOUNT all come out correctly from the table, but when it inputs the date, it comes out as (?-mix:1/30/2015).
Why is this happening and how do I fix it?
When you do /#{date}/ you are taking the value returned from the parsing of the step definition and then turning it into a regular expression:
    /#{date}/.class
    #=> Regexp
You presumably want to leave the value in its original String format:
    on(TransferPage).date = date
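For completeness, a possible corrected version of the whole step definition, assuming the step text actually passes the date as a fourth captured value (the original regular expression only captures three), would assign the captured strings directly:

    And(/^the user inputs fields (.*), (.*), (.*), (.*)$/) do |from, to, amount, date|
      on(TransferPage).from = from
      on(TransferPage).to = to
      on(TransferPage).amount = amount
      on(TransferPage).date = date
    end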

Data management with several variables

Currently I am facing the following problem, which I'm trying to solve in Stata. I have added the algorithm tag, because it's mainly the steps that I'm interested in rather than the Stata code.
I have some variables, say, var1 - var20 that can possibly contain a string. I am only interested in some of these strings, let us call them A,B,C,D,E,F, but other strings can occur also (all of these will be denoted X). Also I have a unique identifier ID. A part of the data could look like this:
    ID | var1 | var2 | var3 | .. | var20
     1 |  E   |      |      |    |  X
     1 |      |  A   |      |    |  C
     2 |  X   |  F   |  A   |    |
     8 |      |      |      |    |  E
Now I want to create an entry for every ID and for every occurrence of one of the strings A, B, C, D, E, F in any of the variables. The above data should look like this:
    ID | var1 | var2 | var3 | .. | var20
     1 |  E   |      |      |    |
     1 |      |  A   |      |    |
     1 |      |      |      |    |  C
     2 |      |  F   |      |    |
     2 |      |      |  A   |    |
     8 |      |      |      |    |  E
Here we ignore every time there's a string X that is NOT A,B,C,D,E or F. My attempt so far was to create a variable that for each entry counts the number, N, of occurrences of A,B,C,D,E,F. In the original data above that variable would be N=1,2,2,1. Then for each entry I create N duplicates of this. This results in the data:
    ID | var1 | var2 | var3 | .. | var20
     1 |  E   |      |      |    |  X
     1 |      |  A   |      |    |  C
     1 |      |  A   |      |    |  C
     2 |  X   |  F   |  A   |    |
     2 |  X   |  F   |  A   |    |
     8 |      |      |      |    |  E
My problem is: how do I attack this from here? And sorry for the poor title, but I couldn't word it any more specifically.
Sorry, I thought the final block was your desired output (now I understand that it's what you've accomplished so far). You can get the middle block with two calls to reshape (long, then wide).
First I'll generate data to match yours.
    clear
    set obs 4

    * ids
    generate n = _n
    generate id = 1 in 1/2
    replace id = 2 in 3
    replace id = 8 in 4

    * generate your variables
    forvalues i = 1/20 {
        generate var`i' = ""
    }
    replace var1 = "E" in 1
    replace var1 = "X" in 3
    replace var2 = "A" in 2
    replace var2 = "F" in 3
    replace var3 = "A" in 3
    replace var20 = "X" in 1
    replace var20 = "C" in 2
    replace var20 = "E" in 4
Now the two calls to reshape.
    * reshape to long, keep only desired obs, then reshape to wide
    reshape long var, i(n id) string
    keep if inlist(var, "A", "B", "C", "D", "E", "F")
    tempvar long_id
    generate int `long_id' = _n
    reshape wide var, i(`long_id') string
The first reshape converts your data from wide to long. The var specifies that the variables you want to reshape to long all start with var. The i(n id) specifies that each unique combination of n and id is a unique observation. The reshape call produces one observation for each n-id combination for each of your var1 through var20 variables, so now there are 4*20 = 80 observations. Then I keep only the strings that you'd like to keep with inlist().
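To make the intermediate step concrete, after the reshape long and the keep, the data would look roughly like this (derived from the example data generated above; _j holds the original suffix 1-20 as a string):

    . list n id _j var

         +--------------------+
         | n   id   _j    var |
         |--------------------|
      1. | 1    1    1      E |
      2. | 2    1    2      A |
      3. | 2    1   20      C |
      4. | 3    2    2      F |
      5. | 3    2    3      A |
         |--------------------|
      6. | 4    8   20      E |
         +--------------------+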
For the second reshape call, var specifies that the values you're reshaping are in the variable var and that you'll use this as the prefix. You wanted one row per remaining letter, so I made a new index (which has no real meaning in the end) that becomes the i index for the second reshape call (if I used n-id as the unique observation, we'd end up back where we started, but with only the good strings). The j index remains from the first reshape call (variable _j), so reshape already knows what suffix to give to each var.
These two reshape calls yield:
    . list n id var1 var2 var3 var20

         +-------------------------------------+
         | n   id   var1   var2   var3   var20 |
         |-------------------------------------|
      1. | 1    1      E                       |
      2. | 2    1             A                |
      3. | 2    1                            C |
      4. | 3    2             F                |
      5. | 3    2                    A         |
         |-------------------------------------|
      6. | 4    8                            E |
         +-------------------------------------+
You can easily add back variables that don't survive the two reshapes.
    * if you need to add back dropped variables
    forvalues i = 1/20 {
        capture confirm variable var`i'
        if _rc {
            generate var`i' = ""
        }
    }
