Input and output format for Swagger YAML

I followed the tutorial below to create my machine learning app on Google Cloud:
https://github.com/GoogleCloudPlatform/ml-on-gcp/tree/master/sklearn/gae_serve#steps
First, I need to construct a 'modelserve.yaml' to define my input and output, like this file:
https://github.com/GoogleCloudPlatform/ml-on-gcp/blob/master/sklearn/gae_serve/default/modelserve.yaml
I know that I have multiple lines of strings for my input and multiple lines of doubles (1.0 and 0.0) for the output. I described it in this question:
Google cloud ML with Scikit-Learn raises: 'dict' object has no attribute 'lower'
Now I cannot find any documentation that explains how to structure this YAML file. What is the correct format for this file in my case?
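The linked modelserve.yaml is a Swagger 2.0 (Cloud Endpoints) spec. A minimal sketch for the case described, assuming a single /predict endpoint; the field names (instances, predictions) are illustrative, not taken from the tutorial:

```yaml
# Hypothetical Swagger 2.0 sketch: the request carries a list of strings,
# the response a list of doubles (1.0 or 0.0), as described above.
swagger: "2.0"
info:
  title: modelserve
  version: "1.0"
paths:
  /predict:
    post:
      operationId: predict
      consumes: [application/json]
      produces: [application/json]
      parameters:
        - name: body
          in: body
          required: true
          schema:
            type: object
            properties:
              instances:
                type: array
                items:
                  type: string
      responses:
        "200":
          description: One double (1.0 or 0.0) per input line
          schema:
            type: object
            properties:
              predictions:
                type: array
                items:
                  type: number
                  format: double
```

Check the spec in the linked repository for the exact endpoint and field names your app expects; only the types (array of strings in, array of doubles out) follow from the question.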

Related

How to use $ (dollar sign) and ^ (exponent sign) in YAML?

I saw a YAML file that includes some signs like $ and ^. For $, I think it tries to get a value from a JSON file. But for ^, I'm not sure about that.
I tried to search for the YAML syntax but cannot find the usage of those signs.
Could anyone point out where that usage comes from? Thanks a lot!
examples:
json: $.A.Documents[*]
input: ^.B.ID
YAML doesn't assign any special meaning to those characters. As far as YAML is concerned, they are simply part of the content.
Of course, the software loading that YAML can do anything with the loaded data – including inspecting the loaded scalars for $ and ^ and implementing some action on them.
While someone might be able to correctly guess which software expects a YAML file like the one you show, it would be vastly easier for you to check the context in which you found that YAML file. This should lead you to the information you seek – i.e., for which software that YAML file has been written. That software's documentation will then describe how those characters are processed.
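As a quick check (a sketch, assuming PyYAML is installed), loading such a document shows the scalars come back as plain strings with the $ and ^ intact:

```python
import yaml

# $ and ^ carry no YAML meaning; they load as ordinary string content.
doc = """
json: $.A.Documents[*]
input: ^.B.ID
"""
data = yaml.safe_load(doc)
print(data)
```

Whatever interpretation those characters get happens in the consuming software, after the YAML has been parsed.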

How to format CSV file for importing multiple files to Google Cloud AutoML Translate?

I am trying to use the Python Client Library to add multiple files to a dataset I created for AutoML Translate. I was unable to find a good example of the CSV file that is to be used. Here is the link to their Python Client Library example code for adding files to a dataset.
I have created a csv in a bucket of the following form:
UNASSIGNED,gs://<bucket name>/x,gs://<bucket name>/y
where I am trying to add two files called x and y, and I get the following error:
google.api_core.exceptions.GoogleAPICallError: None No files to import. Please specify your input files.
The problem was how I formatted the CSV file: each file to be added needs its own line.
The correct CSV file looks like this:
UNASSIGNED,gs://<bucket name>/x
UNASSIGNED,gs://<bucket name>/y
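Generating that import CSV can be sketched with the standard library; the bucket name and file names here are placeholders:

```python
import csv
import io

# Build the AutoML import CSV: one row per file, each as (ml_use, gcs_uri).
# "my-bucket", "x" and "y" are placeholders for your own bucket and objects.
files = ["gs://my-bucket/x", "gs://my-bucket/y"]

buf = io.StringIO()
writer = csv.writer(buf)
for uri in files:
    writer.writerow(["UNASSIGNED", uri])

print(buf.getvalue())
```

Upload the resulting file to your bucket and point the import call at its gs:// URI.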

Ruby reading file content from GSC bucket without downloading the actual file

I am quite new to Google Cloud Storage. I want to read the CSV file content from the GCS bucket and extract a single column of data, without downloading the actual file. So far I have only managed to get the column data after downloading and then reading the file content.
The API of google.cloud.storage.blob.Blob specifies that the download_as_string method has start and end keywords that provide byte ranges:
ref: https://googleapis.github.io/google-cloud-python/latest/storage/blobs.html#google.cloud.storage.blob.Blob
On the Ruby side, an issue was filed requesting the same ability to read a specified byte range:
https://github.com/googleapis/google-cloud-ruby/issues/1356
You would need to check whether it has been implemented yet, but yes, it is doable.
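The byte-range idea can be illustrated with an in-memory stand-in for the blob (a sketch; with google-cloud-python you would pass the same offsets as start/end to download_as_string instead of slicing locally):

```python
import csv
import io

# Stand-in for the CSV object stored in the bucket.
blob_bytes = b"id,name,score\n1,alice,0.9\n2,bob,0.4\n"

def read_range(data: bytes, start: int, end: int) -> bytes:
    """Mimic a ranged download: fetch only bytes[start:end]."""
    return data[start:end]

# Fetch just the first 26 bytes (header + first full row) instead of
# the whole object...
chunk = read_range(blob_bytes, 0, 26)

# ...and pull a single column out of the complete rows received.
rows = list(csv.reader(io.StringIO(chunk.decode())))
names = [row[1] for row in rows[1:] if len(row) > 1]
print(names)
```

The catch with real data is that a byte range can end mid-row, so you either need to know the offsets in advance or discard the trailing partial line.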

How to use "file:" parameter in gauge bdd

I'm trying to make use of the "file" special parameter in Gauge to feed in a JSON file.
Is this feasible, or does the file parameter only work for text files?
Are there any examples I could follow showing how the file parameter flows downstream into the step definitions, or is this not the case?
[It wasn't clear from the documentation how these special parameters can be used at the step-definition level; the Gauge documentation around these topics is high level and not detailed.]
File parameters work for text files, but one could argue that a JSON file is also a text file (if you ignore the types).
You could read the JSON as a string and unmarshal it in the step implementation.
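In a Python step implementation that could look like the following sketch; the step receives the file contents as a plain string, and the function name and sample payload are illustrative:

```python
import json

# Sketch: Gauge passes the contents of a <file:...> special parameter to
# the step implementation as a plain string, so a JSON file can simply
# be unmarshalled with the standard library.
def load_payload(file_contents: str) -> dict:
    """Parse the JSON text handed over as a step argument."""
    return json.loads(file_contents)

# What the step would see if the referenced file contained this JSON:
payload = load_payload('{"user": "alice", "active": true}')
print(payload["user"])
```

From there the step works with an ordinary dict rather than raw text.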

Sphinx replace in inclusion command

I am struggling with this issue:
I have to document a fairly large project composed of a C core engine and different APIs built on top of it, say Java, Python, C#.
The docs must be deployed separately for each API, i.e. for each language, but 99% of the docs are the same; mainly the code snippets and examples need to change.
I set the type of language in the conf.py file by defining a global variable
I have used primary_domain and highlight_language to set the correct syntax highlighting
For each example I have a source file with the same name but different extension
Now, I'd like to include an example using the literalinclude directive, specifying the file name and letting its extension change depending on the language in use. I naively tried to use the replace substitution, but with no success:
rst_prolog = ".. |ext| replace:: .%s\n" % primary_domain
correctly replaces |ext| throughout the docs, but not in the directive:
.. literalinclude:: filename|ext|
Is there any way I can do this, other than preprocessing the rst files with sed or the like?
