I'm experimenting with Grape and Ruby by trying to make a Yo API callback function.
I can get simple examples up and running like this:
resource :loc do
  get ':loc' do
    params.to_yaml
  end
end
How would I go about extracting the username and the x and y coordinates into separate Ruby variables, given a callback with the following format?
http://yourcallbackurl.com/yourendpoint?username=THEYOER&location=42.360091;-71.094159
As it stands, the location data comes out mangled:
--- !ruby/hash:Hashie::Mash
username: sfsdfsdf
location: '42.360091'
"-71.094159":
route_info: !ruby/object:Grape::Route
  options:
    :prefix:
    :version: v1
    :namespace: "/loc"
    :method: GET
    :path: "/:version/loc/:loc(.:format)"
    :params:
      loc: ''
    :compiled: !ruby/regexp /\A\/(?<version>v1)\/loc\/(?<loc>[^\/.?]+)(?:\.(?<format>[^\/.?]+))?\Z/
version: v1
loc: toto
format: txt
This is how Rack::Utils works: the default param separators are "&" and ";" (which is perfectly legal per the HTTP standard). So you have to parse the query string yourself here:
location = Rack::Utils.parse_nested_query(env['QUERY_STRING'], '&')['location']
coordinates = location.split(';')
UPD: typo with hash key fixed.
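Putting it together inside the Grape endpoint (where env is available), a minimal sketch, with variable names of my choosing, that pulls out the username and both coordinates:
query    = Rack::Utils.parse_nested_query(env['QUERY_STRING'], '&')
username = query['username']             # => "THEYOER"
x, y     = query['location'].split(';')  # => "42.360091", "-71.094159"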
I am creating a Quarto book project in RStudio to render an HTML document. I need to specify some parameters in the YML file, but the qmd file returns "object 'params' not found". I am using knitr.
I use the default YML file, where I have added params under the book tag:
project:
  type: book

book:
  title: "Params_TEst"
  author: "Jane Doe"
  date: "15/07/2022"
  params:
    pcn: 0.1
  chapters:
    - index.qmd
    - intro.qmd
    - summary.qmd
    - references.qmd

bibliography: references.bib

format:
  html:
    theme: cosmo
  pdf:
    documentclass: scrreprt

editor: visual
and the qmd file looks like this
# Preface {.unnumbered}
This is a Quarto book.
To learn more about Quarto books visit <https://quarto.org/docs/books>.
```{r}
1 + 1
params$pcn
```
When I render the book, or preview the book in Rstudio the error I receive is:
Quitting from lines 8-10 (index.qmd)
Error in eval(expr, envir, enclos) : object 'params' not found
Calls: .main ... withVisible -> eval_with_user_handlers -> eval -> eval
I have experimented placing the params line in the yml in different places but nothing works so far.
Could anybody help?
For multi-page renders, e.g. Quarto books, you need to add the YAML to each page, not to the _quarto.yml file.
So in your case, each of the chapters that calls a parameter needs a YAML header, like index.qmd, intro.qmd, and summary.qmd, but perhaps not references.qmd.
The YAML header should look just like it does in a standard Rmd. So for example, your index.qmd would look like this:
---
params:
  pcn: 0.1
---
# Preface {.unnumbered}
This is a Quarto book.
To learn more about Quarto books visit <https://quarto.org/docs/books>.
```{r}
1 + 1
params$pcn
```
But what if you need to change the parameter and re-render?
Then simply pass new parameters to the quarto_render function:
quarto::quarto_render(input = here::here("quarto"), # expecting a dir to render
                      output_format = "html",       # output dir is set in _quarto.yml
                      cache_refresh = TRUE,
                      execute_params = list(pcn = 0.2))
For now, this only seems to work if you add the parameters to each individual page front-matter YAML.
If you have a large number of pages and need to keep parameters centralized, a workaround is to run a preprocessing script that replaces the parameters in all pages. To add a preprocessing script, add the key pre-render to your _quarto.yml file. The Quarto website has detailed instructions.
For example, if you have N pages named index<N>.qmd, you could have a placeholder in the YML of each page:
---
title: This is chapter N
yourparamplaceholder
---
Your pre-render script could replace yourparamplaceholder with the desired parameters. Here's an example Python script:
import os

qmd_dir = "."  # directory containing the .qmd files
for filename in os.listdir(qmd_dir):
    if filename.endswith(".qmd"):
        path = os.path.join(qmd_dir, filename)
        with open(path, "r") as f:
            txt = f.read()
        # YAML forbids tabs, so indent the inserted block with spaces
        txt = txt.replace('yourparamplaceholder', 'params:\n  pcn: 0.1\n  other: 20\n')
        with open(path, "w") as ff:
            ff.write(txt)
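For reference, hooking such a script into the build might look like this in _quarto.yml (the script name replace_params.py is hypothetical; Quarto runs pre-render scripts before each render):
project:
  type: book
  pre-render: replace_params.py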
I agree with you that being able to set parameters centrally would be a good idea.
I am building an open source project using Ruby for testing HTTP services: https://github.com/Comcast/http-blackbox-test-tool
I want to be able to reference environment variables in my test-plan.yaml file. I could use ERB, but I don't want to support embedding arbitrary Ruby code, and ERB syntax is odd for non-Rubyists; I just want to access environment variables using the commonly used Unix-style ${ENV_VAR} syntax.
e.g.
order-lunch-app-health:
  request:
    url: ${ORDER_APP_URL}
    headers:
      content-type: 'application/text'
    method: get
  expectedResponse:
    statusCode: 200
  maxRetryCount: 5
All examples I have found for Ruby use ERB. Does anyone have a suggestion on the best way to deal with this? I am open to using another tool to preprocess the YAML and then send that to the Ruby application.
I believe something like this should work under most circumstances:
require 'yaml'

def load_yaml(file)
  content = File.read(file)
  content.gsub!(/\${([^}]+)}/) do
    ENV[$1]
  end
  YAML.load(content)
end

p load_yaml 'sample.yml'
As opposed to my original answer, this is both simpler and handles undefined ENV variables well.
Try with this YAML:
# sample.yml
path: ${PATH}
home: ${HOME}
error: ${NO_SUCH_VAR}
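On a typical system, that prints something like the following (values shortened; since NO_SUCH_VAR is unset, ENV[$1] returns nil, the match is replaced with an empty string, and YAML loads the empty value as nil):
{"path"=>"/usr/local/bin:/usr/bin", "home"=>"/home/you", "error"=>nil}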
Original Answer (left here for reference)
There are several ways to do it. If you want to allow your users to use the ${VAR} syntax, then perhaps one way would be to first convert these variables to Ruby string substitution format %{VAR} and then evaluate all environment variables together.
Here is a rough proof of concept:
require 'yaml'
# Transform environments to a hash of { symbol: value }
env_hash = ENV.to_h.transform_keys(&:to_sym)
# Load the file and convert ${ANYTHING} to %{ANYTHING}
content = File.read 'sample.yml'
content.gsub! /\${([^}]+)}/, "%{\\1}"
# Use Ruby string substitution to replace %{VARS}
content %= env_hash
# Done
yaml = YAML.load content
p yaml
Use it with this sample.yml for instance:
# sample.yml
path: ${PATH}
home: ${HOME}
There are many ways this can be improved upon of course.
Preprocessing is easy, and I recommend you use a YAML loader/dumper based solution, as the replacement might require quotes around the replacement scalar. (E.g. if you substitute the string true and it were not quoted, the resulting YAML would be read as a boolean.)
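For example, with a hypothetical environment variable FLAG set to the bare string true, the two approaches load differently:
enabled: true      # plain textual substitution; YAML reads this as a boolean
enabled: 'true'    # quoted by a YAML-aware dumper; YAML reads this as a string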
Assuming your "source" is in input.yaml, your environment variable ORDER_APP_URL is set to https://some.site/and/url, and the following script is in expand.py:
import sys
import os
from pathlib import Path
import ruamel.yaml

def substenv(d, env):
    if isinstance(d, dict):
        for k, v in d.items():
            if isinstance(v, str) and '${' in v:
                d[k] = v.replace('${', '{').format(**env)
            else:
                substenv(v, env)
    elif isinstance(d, list):
        for idx, item in enumerate(d):
            if isinstance(item, str) and '${' in item:
                d[idx] = item.replace('${', '{').format(**env)
            else:
                substenv(item, env)

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
data = yaml.load(Path(sys.argv[1]))
substenv(data, os.environ)
yaml.dump(data, Path(sys.argv[2]))
You can then do:
python expand.py input.yaml output.yaml
which writes output.yaml:
order-lunch-app-health:
  request:
    url: https://some.site/and/url
    headers:
      content-type: 'application/text'
    method: get
  expectedResponse:
    statusCode: 200
  maxRetryCount: 5
Please note that the spurious quotes around 'application/text' are preserved, as would be any comments in the original file. Quotes around the substituted URL are not necessary, but they would have been added if they were.
The substenv routine recursively traverses the loaded data and substitutes even when the substitution is mid-scalar, or when there is more than one substitution in one scalar. You can "tighten" the test:
if isinstance(v, str) and '${' in v:
if that would match too many strings loaded from YAML.
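For instance, a stricter (hypothetical) test for the dict branch that only substitutes scalars consisting of exactly one ${VAR} reference:
import re

if isinstance(v, str) and re.fullmatch(r'\$\{\w+\}', v):
    d[k] = v.replace('${', '{').format(**env)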
I need to remove the tags field from each of the methods in my OpenAPI spec.
The spec must be in YAML format, as converting to JSON causes issues later on when publishing.
I couldn't find a ready tool for that, and my programming skills are insufficient. I tried Python with ruamel.yaml, but could not achieve anything.
I'm open to any suggestions on how to approach this: a repo with a ready tool somewhere, a hint on what to try in Python... I'm out of my own ideas.
Maybe a regex that catches all instances of tags, so I can do a search and replace with Python, replacing them with nothing? Empty lines don't seem to break the publishing engine.
Here's a sample YAML piece (I know this is not a proper spec; I just want to show where tags sits in the YAML):
openapi: 3.0.0
info:
  title: ""
  description: ""
paths:
  /endpoint:
    get:
      tags:
        - tag1
        - tag3
      #rest of GET definition
    post:
      tags:
        - tag2
  /anotherEndpoint:
    post:
      tags:
        - tag1
I need to get rid of all tags arrays entirely (not just make them empty)
I am not sure why you couldn't achieve anything with Python + ruamel.yaml. Assuming your spec is in a file input.yaml:
import sys
from pathlib import Path
import ruamel.yaml

in_file = Path('input.yaml')
out_file = Path('output.yaml')

yaml = ruamel.yaml.YAML()
yaml.indent(mapping=4, sequence=4, offset=2)
yaml.preserve_quotes = True
data = yaml.load(in_file)

# if you only have the three instances of 'tags', you can hard-code them:
# del data['paths']['/endpoint']['get']['tags']
# del data['paths']['/endpoint']['post']['tags']
# del data['paths']['/anotherEndpoint']['post']['tags']

# generic recursive removal of any key named 'tags' in the data structure:
def rm_tags(d):
    if isinstance(d, dict):
        if 'tags' in d:
            del d['tags']
        for k in d:
            rm_tags(d[k])
    elif isinstance(d, list):
        for item in d:
            rm_tags(item)

rm_tags(data)

yaml.dump(data, out_file)
which gives as output.yaml:
openapi: 3.0.0
info:
    title: ""
    description: ""
paths:
    /endpoint:
        get: {}
        post: {}
    /anotherEndpoint:
        post: {}
You can write data back to input.yaml if you need that. Please note that normally the comment #rest of GET definition would be preserved, but not here, as during loading it is associated with the key before it, and that key gets deleted.
I am trying to call a trained model from Google Colab with the example provided, but there is an error. Does anyone know if this is a beta error, or have I not set something up properly? Thanks in advance.
The code:
from google.cloud import automl_v1beta1 as automl

automl_client = automl.AutoMlClient()

# Create client for prediction service.
prediction_client = automl.PredictionServiceClient.from_service_account_json(
    'XXXXX.json')

# Get the full path of the model.
model_full_id = automl_client.model_path(
    project_id, compute_region, model_id
)

# Read the file content for prediction.
# with open(file_path, "rb") as content_file:
snippet = "fsfsf"  # content_file.read()

# Set the payload by giving the content and type of the file.
payload = {"text_snippet": {"content": snippet, "mime_type": "text/plain"}}

# params is additional domain-specific parameters.
# currently there are no additional parameters supported.
params = {}

response = prediction_client.predict(model_full_id, payload, params)
print("Prediction results:")
for result in response.payload:
    print("Predicted class name: {}".format(result.display_name))
    print("Predicted class score: {}".format(result.classification.score))
The error msg:
InvalidArgument: 400 List of found errors: 1.Field: name; Message: The provided location ID is not valid.
You have to use a region that supports AutoML beta. This works for me:
create_dataset("myproj-123456", "us-central1", "my_dataset_id", "en", "de")
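Applied to the prediction snippet above, that means pinning compute_region to a supported location when building the model path (the project and model IDs here are placeholders):
project_id = "myproj-123456"    # placeholder
compute_region = "us-central1"  # a region that supports AutoML beta
model_id = "TCN1234567890"      # placeholder

model_full_id = automl_client.model_path(project_id, compute_region, model_id)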
I cloned the repo "python-docs-samples":
$ git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
I navigated to the AutoML examples:
$ cd /home/MY_USER/python-docs-samples/language/automl/
I set the environment variables for [1]:
GOOGLE_APPLICATION_CREDENTIALS
PROJECT_ID
REGION_NAME
I typed:
$ python automl_natural_language_dataset.py create_dataset automltest1 False
I got this message:
Dataset name: projects/198768927566/locations/us-central1/datasets/TCN7889001684301386365
Dataset id: TCN7889001684301386365
Dataset display name: automltest1
Text classification dataset metadata:
classification_type: MULTICLASS
Dataset example count: 0
Dataset create time:
  seconds: 1569367227
  nanos: 873147000
I set the environment variable DATASET_ID.
Please note that I got its value from the output of the create_dataset step above.
I typed:
python automl_natural_language_dataset.py import_data $DATASET_ID "gs://$PROJECT_ID-lcm/complaints_manual.csv"
I got this message:
Processing import...
Dataset imported.
I'm learning RDF via the RDF.rb Ruby library. I am able to load the following Turtle file into a graph:
# filename: ex002.ttl
@prefix ab: <http://learningsparql.com/ns/addressbook#> .

ab:richard ab:homeTel "(229) 276-5135" .
ab:richard ab:email "richard49@hotmail.com" .

ab:cindy ab:homeTel "(245) 646-5488" .
ab:cindy ab:email "cindym@gmail.com" .

ab:craig ab:homeTel "(194) 966-1505" .
ab:craig ab:email "craigellis@yahoo.com" .
ab:craig ab:email "c.ellis@usairwaysgroup.com" .
But when I try to extract all triples with a homeTel predicate using a query, I get no results at all. I am not sure if I have a basic misunderstanding of how the query should work, or of how RDF.rb works, because I'm new to it all!
Here is the ruby code:
require 'rdf'
require 'rdf/ntriples'
require 'rdf/raptor'

graph = RDF::Graph.load("../data/ex002.ttl")

query = RDF::Query.new do
  pattern [:person, RDF::URI("http://learningsparql.com/ns/addressbook/#homeTel"), :o]
end

response = query.execute(graph)
puts "response: #{response.to_s}" # gives me an empty result
I've tried various alternative ways of representing the predicate, including a plain string with the full URI, "ab:homeTel", and an RDF::Vocabulary. If I just put :p in place of the predicate above, it does return results, so I know the graph is loaded OK.
I hope someone can help. Thanks in advance!
The prefix in the data is
@prefix ab: <http://learningsparql.com/ns/addressbook#> .
that means that ab:homeTel is the IRI
http://learningsparql.com/ns/addressbook#homeTel
However, in your query, you're using a different IRI:
RDF::URI("http://learningsparql.com/ns/addressbook/#homeTel")
#                                                  ^
#                                                  |
#                             get rid of this slash
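With the slash removed, the pattern matches the data; a minimal corrected sketch using the same graph as above:
query = RDF::Query.new do
  pattern [:person, RDF::URI("http://learningsparql.com/ns/addressbook#homeTel"), :o]
end
response = query.execute(graph)
puts "response: #{response.to_s}" # now returns the three homeTel triples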