I've recently set up Oracle REST Data Services (ORDS) and managed to successfully create several endpoints returning both JSON and CSV data. However, is there a way to change the delimiter (such as to a tab or pipe) on CSV/query services? Since the ORDS packages are encrypted, there's no way to make modifications on that front, and none of the documentation I've read suggests there's a built-in option for this change.
I'm considering creating a plugin for ORDS that would basically call the CSV path and then convert it to the new delimiter before returning the data, but before that I wanted to make sure there wasn't an easier way of accomplishing this.
Take a look at Developing Oracle REST Data Services Applications.
Pattern: POST http://<HOST>:<PORT>/ords/<SchemaAlias>/<ObjectAlias>/batchload?<Parameters>
One parameter is: delimiter
"Sets the field delimiter for the fields in the file. The default is the comma (,)."
We marked some fields in our schema with the @deprecated directive. Now we want to log whether these fields are still in use by some of our clients. What would be the best way to do this without using Apollo Studio?
If you have access to the client code, then you can utilize GraphQL Inspector to check for deprecated usage. Using the CLI, you just do:
graphql-inspector validate DOCUMENTS SCHEMA
where DOCUMENTS is a glob pattern used to match the files containing the queries and SCHEMA is a pointer to the schema used for validation. The files containing the queries can be .graphql files or .js/.ts files. The schema pointer can be a URL to your schema or one or more .graphql files with your schema's type definitions. See here and here for additional ways to provide the schema and documents.
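For instance, assuming the client's queries live under src/ and the schema is served locally (both locations are illustrative):

graphql-inspector validate './src/**/*.graphql' http://localhost:4000/graphql

If your version of the CLI supports it, adding the --deprecated flag turns deprecated-field usage into a validation failure rather than just a warning.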
If you don't have access to the client code, or specifically need to log deprecated usage on every request, then you can write your own Apollo Server plugin and utilize GraphQL Inspector's programmatic API instead to validate each request's parsed document as it comes in. The parsed document will be available beginning with the validationDidStart lifecycle hook. See the docs for a complete example of how to write your own plugin.
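As a rough sketch of that approach, here is a plugin that walks the parsed document with graphql-js's TypeInfo utilities instead of Inspector's programmatic API (the plugin name and log format are made up; requestDidStart/validationDidStart come from Apollo Server's plugin interface):

import { TypeInfo, visit, visitWithTypeInfo } from 'graphql';

const deprecationLogger = {
  async requestDidStart() {
    return {
      async validationDidStart({ schema, document }) {
        const typeInfo = new TypeInfo(schema);
        // Walk every field in the incoming operation and log the deprecated ones.
        visit(document, visitWithTypeInfo(typeInfo, {
          Field() {
            const field = typeInfo.getFieldDef();
            if (field && field.deprecationReason != null) {
              console.log(`Deprecated field used: ${typeInfo.getParentType()}.${field.name}`);
            }
          },
        }));
      },
    };
  },
};

// Registered as: new ApolloServer({ typeDefs, resolvers, plugins: [deprecationLogger] })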
I have built a custom connector to get the data from a web service and then index it. The web service response returns only the data to be indexed.
I want to delete the documents from the index that are not part of the current crawl's web service response but were added to the index during the last crawl.
Is there any way to achieve this, or can I flush the full index programmatically in the connector code and then add the recent content to the index?
Marged is correct. A feed (which is what the connector can send to the GSA) of type full will purge the existing feed and replace it. Otherwise, your connector is going to have to manage state and prune out documents as you see fit.
Thanks Marged and Michael for the help. I guess I have to write custom logic in the connector to delete the data from the index.
What you're trying to achieve is exactly what happens when you send a "full" content feed. This is from the documentation:
When the feedtype element is set to full for a content feed, the system deletes all the prior URLs that were associated with the data source. The new feed contents completely replace the prior feed contents. If the feed contains metadata, you must also provide content for each record; a full feed cannot push metadata alone. You can delete all documents in a data source by pushing an empty full feed.
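For example, an empty full feed that deletes everything in a data source might look like this (the datasource name is a placeholder):

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE gsafeed PUBLIC "-//Google//DTD GSA Feeds//EN" "">
<gsafeed>
  <header>
    <datasource>my_connector</datasource>
    <feedtype>full</feedtype>
  </header>
  <group/>
</gsafeed>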
Marged is correct that v4.x is the way to go in the future, but if you've already started this with the 3.x connector framework and you're happy with it, there's no need to rush to upgrade. All the related code is open source, and 3.x won't disappear any time soon; there are too many third-party connectors based on it.
I have a Talend job with an input CSV file that needs to be converted to JSON format and then, using tRESTClient/tREST, posted via an HTTP call.
In the current job, I have an Elasticsearch server installed on my local machine and have provided that URL.
I was able to convert the files to JSON format and verified this with a tLogRow component, but I am unable to post the data.
(P.S.: I was able to post data using bulk Java code, loading JAR files, making the HTTP call, and sending parameters via a tJavaRow component, so there is no issue with my localhost or with posting data.)
After converting the data from the input file to JSON format, set a context variable with your JSON data and then make the REST call. You can put the context variable in the HTTP Body field, for example: context.json_post, without double quotes.
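For example, in a tJavaRow placed just before the REST component you could populate the variable like this (assuming a String context variable named json_post and an incoming column named json; both names are illustrative):

// Copy the generated JSON document into the context variable
context.json_post = input_row.json;

Then entering context.json_post (no quotes) in the HTTP Body field makes Talend evaluate it as the variable instead of sending a literal string.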
Can I have Paw for Mac (HTTP REST client) read Dynamic Values from a CSV or JSON file? I need to run 10,000 API calls using different Dynamic Values in my collections.
You could create a dynamic value extension and use readFile() to read your JSON file.
I have created a dynamic value extension for passwords and other such things that I don't want to expose when I share my Paw documents.
Please note that, because of sandboxing, your JSON file must live inside your extension folder.
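A minimal sketch of such an extension might look like this (the identifier, title, input, and file name are all made up; readFile() and registerDynamicValueClass() are the Paw extension API calls this approach relies on):

var JsonFileValue = function() {
  this.evaluate = function(context) {
    // Because of sandboxing, values.json must sit inside the extension folder
    var rows = JSON.parse(readFile('values.json'));
    // Assumes the file is a JSON array of objects with a "value" key
    return rows[this.rowIndex].value;
  };
};
JsonFileValue.identifier = 'com.example.JsonFileValue';
JsonFileValue.title = 'JSON File Value';
JsonFileValue.inputs = [
  InputField('rowIndex', 'Row Index', 'Number')
];
registerDynamicValueClass(JsonFileValue);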
I'm developing an API Server in Go and the server (at the moment) handles all translations for clients. When an API client fetches particular data it also asks for the translations that are available for the given section.
Ideally I want to have the following folder structure:
/messages
    /home.en
    /home.fr
    /home.sv
    /news.en
    /news.fr
    /news.sv
Where news and home are distinct modules.
Now the question I have for Revel is: is it possible to fetch ALL language strings for a given module and a given locale? For example, pull all home strings for en-US.
EDIT:
I would like the output (something I can return to the client) to be a key:value map of translations.
Any guidance would be appreciated.
It seems to me that revel uses message-based translation (just like gettext does), so you need
the original string to get the translation. These strings are stored in Config objects,
which are themselves stored in the messages map of i18n.go, keyed by language.
As you can see, this mapping is not exported, so you can't access it. The best way
to fix this is to write a function for what you want (getting the config by supplying a language)
or to export one of the existing functions and create a pull request for revel.
You may work around this by copying the code of loadMessageFile, or by forking your version
of revel and exporting loadMessageFile or parseMessagesFile. This is also a great opportunity
to create a pull request.
Note that the localizations are stored in an INI file format parsed by robfig/config,
so parsing them manually is also an option (although not recommended).
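For illustration, the helper suggested above might look something like this if added inside revel's i18n.go (the messages map and the robfig/config calls follow the description above, but the exact signatures and section layout are assumptions, not revel's actual API):

// MessagesForLocale returns all translations for one locale as key:value pairs.
func MessagesForLocale(locale, section string) map[string]string {
    result := make(map[string]string)
    cfg, found := messages[locale] // the unexported per-language map in i18n.go
    if !found {
        return result
    }
    options, err := cfg.Options(section)
    if err != nil {
        return result
    }
    for _, key := range options {
        if value, err := cfg.String(section, key); err == nil {
            result[key] = value
        }
    }
    return result
}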