Please migrate off JSON-RPC and Global HTTP Batch Endpoints - Dataflow Template - google-api

I received an email with the title above as the subject. It says it all. I'm not directly using the specified endpoint (storage#v1). The project in question is a postback catcher that funnels data into BigQuery:
App Engine > Pub Sub > Dataflow > Cloud Storage > BigQuery
A related question here indicates that Dataflow might be using it indirectly. I'm only using the Cloud PubSub to GCS Text template.
What is the recommended course of action if I'm relying on a template?

I think the warning may come from a Dataflow job that uses an old version of the Storage API. Please upgrade your Dataflow/Beam SDK to a version newer than 2.5.
Since you're using our PubsubToText template, the easiest way to do it would be:
Stop your pipeline. Be sure to select "Drain" when asked.
Relaunch the pipeline from the same subscription, using the newest template version (which happens automatically if you're using the UI).
Check the SDK version. It should be at least 2.7.
After that you should not see any more warnings.
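If you would rather relaunch the template programmatically than through the UI, the sketch below uses the Go client for the Dataflow templates.launch API (google.golang.org/api/dataflow/v1b3). The project, topic, bucket, job name, and the template parameter names (inputTopic, outputDirectory, outputFilenamePrefix) are placeholders/assumptions for a classic Cloud_PubSub_to_GCS_Text launch; check the template documentation for the exact parameters your version expects.

package main

import (
	"context"
	"fmt"
	"log"

	dataflow "google.golang.org/api/dataflow/v1b3"
)

func main() {
	ctx := context.Background()

	// Uses Application Default Credentials for authentication.
	svc, err := dataflow.NewService(ctx)
	if err != nil {
		log.Fatal(err)
	}

	projectID := "my-project" // placeholder

	// Relaunch the Google-provided Pub/Sub-to-GCS-Text template from its latest
	// published location. The parameter names below are assumptions based on the
	// classic template; adjust them to match your template version.
	params := &dataflow.LaunchTemplateParameters{
		JobName: "pubsub-to-gcs-text",
		Parameters: map[string]string{
			"inputTopic":           "projects/my-project/topics/my-topic",
			"outputDirectory":      "gs://my-bucket/output/",
			"outputFilenamePrefix": "postback-",
		},
	}

	resp, err := svc.Projects.Templates.Launch(projectID, params).
		GcsPath("gs://dataflow-templates/latest/Cloud_PubSub_to_GCS_Text").
		Do()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("launched job:", resp.Job.Id)
}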

Related

What causes version of NiFi component to update?

While building a NiFi flow I noticed that the versions of the components keep changing.
I understand that the version changes each time the component updates - but what is considered an update of a component?
For example, what causes an update in a connection's version?
I'm trying to find a pattern, but without much luck.
Thanks in advance!
The official documentation states that you can have multiple versions of your flow at the same time:
You have access to information about the version of your Processors, Controller Services, and Reporting Tasks. This is especially useful when you are working within a clustered environment with multiple NiFi instances running different versions of a component or if you have upgraded to a newer version of a processor.
You can also opt out of versioning altogether. Ways to disable versioning:
NiFi UI: To change the version of a flow, right-click on the versioned process group and select Version→Change version (link).
REST API: send an HTTP DELETE request to /versions/process-groups/{id} with the appropriate ID (a minimal example follows below).
You can also use the Toolkit CLI to view available versions by executing ./bin/cli.sh registry diff-flow-versions (link).
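For the REST API route, here is a minimal Go sketch of that DELETE call. The host, port, process group ID, and the version query parameter are placeholders/assumptions; depending on your NiFi version and security setup you may also need authentication and additional query parameters such as clientId.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
)

func main() {
	// Assumptions: default unsecured NiFi port and a placeholder process group ID.
	base := "http://localhost:8080/nifi-api"
	processGroupID := "0171002b-81d2-1de6-ae64-e43dd4b6b9dd"
	url := fmt.Sprintf("%s/versions/process-groups/%s?version=1", base, processGroupID)

	// Build and send the DELETE request to stop version control on the group.
	req, err := http.NewRequest(http.MethodDelete, url, nil)
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}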

Cannot create Virtual Data Model classes using Cloud SDK

I am trying to create VDMs using an EDMX from SuccessFactors (SFSF), following this blog.
I create an SCP Business Application template, and then, from within the srv module, I try to add a new data model from an external source - in this case, the API Business Hub.
I try to use SuccessFactors Employee Central - Personal Information.
https://api.sap.com/api/ECPersonalInformation/overview
The process starts and fails with the message: "OData models with multiple schemas are not supported" and then "Could not generate Virtual Data Model classes."
The external folder is generated as expected with the XML in the EDMX folder but the csn folder is empty.
As I understand it, this should work with any API from the Business Hub. Am I doing something wrong, or am I missing something?
Thanks.
Update:
There seems to be an issue with the conversion from EDMX into CSN used by the Web IDE (which is not part of the SAP Cloud SDK).
The Java VDM generated by the OData Generator from the SAP Cloud SDK (used as a component by the Web IDE) should work without any problem.
This looks like an unexpected behavior. We will investigate this further.
In the meantime, as a workaround, you can use our maven plugin or CLI to create the data model for you. This is described in detail in this blog post.
The tl;dr version (for the CLI) is:
Determine which version of the SAP Cloud SDK you are using (search for sdk-bom in your parent pom.xml). I assume this to be version 2.16.0 for this example.
Download the CLI library from maven central: https://search.maven.org/artifact/com.sap.cloud.s4hana.datamodel/odata-generator-cli/2.16.0/jar
Download the metadata file (edmx) from the API Business Hub (as linked in your question)
Run the CLI with e.g. the following command:
java -jar odata-generator-cli-2.16.0.jar -i <input-directory> -o <output-directory> -b <base-path>
The <base-path> is the service-independent prefix that sits between your host configuration and the actual service name.
Add the generated code manually to your project.
I will update this answer with the results of the investigation.

Run Google Cloud Build in a specific region and zone?

Is it possible to specify to run a Google Cloud Build in a specific region and zone?
The documentation seems to outline how to run kubectl against a specific region/zone for deploying containers, but it doesn't seem to document where the Cloud Build itself runs. I've also not found this setting in the automatic build trigger configuration.
The latest public doc on regionalization can be found here: https://cloud.google.com/build/docs/locations
(Earlier answer on this topic follows)
This is not yet an available feature. However, at Google Next we announced an EAP / soon-to-be-Alpha feature called "Custom Workers" which will enable this functionality for you. You can watch the demo here: https://www.youtube.com/watch?v=IUKCbq1WNWc&feature=youtu.be

Apply configuration yaml file via API or SDK

I started a pod in a Kubernetes cluster that can call the Kubernetes API via the Go SDK (like in this example: https://github.com/kubernetes/client-go/tree/master/examples/in-cluster-client-configuration). I want to listen for external events in this pod (e.g. GitHub webhooks), fetch YAML configuration files from a repository, and apply them to the cluster.
Is it possible to call kubectl apply -f <config-file> via kubernetes API (or better via golang SDK)?
As yaml directly: no, not that I'm aware of. But if you increase the kubectl verbosity (--v=100 or such), you'll see that the first thing kubectl does to your yaml file is convert it to json, and then POST that content to the API. So the spirit of the answer to your question is "yes."
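To make that concrete, here is a minimal Go sketch (assuming a recent client-go and the sigs.k8s.io/yaml package) that decodes a single Deployment manifest and creates it through the dynamic client from inside the cluster. It is a simplified stand-in for kubectl apply, not a faithful reimplementation, and the function name is made up for the example.

package main

import (
	"context"
	"log"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/rest"
	"sigs.k8s.io/yaml"
)

// applyDeployment decodes a single-document Deployment manifest and creates it
// through the dynamic client. Real `kubectl apply` semantics (create-or-patch,
// three-way merge) are simplified here to a plain Create.
func applyDeployment(manifest []byte, namespace string) error {
	// In-cluster config, as in the client-go in-cluster example.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return err
	}
	client, err := dynamic.NewForConfig(cfg)
	if err != nil {
		return err
	}

	// YAML -> JSON -> unstructured map, mirroring what kubectl does internally.
	obj := &unstructured.Unstructured{Object: map[string]interface{}{}}
	if err := yaml.Unmarshal(manifest, &obj.Object); err != nil {
		return err
	}

	// Hard-coded GroupVersionResource for Deployments; for arbitrary kinds you
	// would resolve this from the manifest's apiVersion/kind via a RESTMapper.
	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}

	_, err = client.Resource(gvr).Namespace(namespace).Create(context.TODO(), obj, metav1.CreateOptions{})
	return err
}

func main() {
	// Usage: apply-demo <path-to-deployment.yaml>
	manifest, err := os.ReadFile(os.Args[1])
	if err != nil {
		log.Fatal(err)
	}
	if err := applyDeployment(manifest, "default"); err != nil {
		log.Fatal(err)
	}
	log.Println("deployment created")
}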
This box/kube-applier project may interest you. While it does not appear to be webhook aware, I am sure they would welcome a PR teaching it to do that. Using their existing project also means you benefit from all the bugs they have already squashed, as well as their nifty prometheus metrics integration.

Symfony framework on windows azure cloud

Is it possible to run Symfony (1.4) on the Windows Azure cloud?
The two things I'm wondering are how to execute Symfony tasks and where Symfony will save its cache files (blob storage?).
Thanks for your answers.
PHP is something that Microsoft is taking very seriously these days, so yes, Symfony can run on top of Azure, although documentation is sparse as most people stick to Linux servers.
Regarding tasks, there is a tool for running command line tasks on Windows Azure although I have not yet tried it myself.
http://azurephptools.codeplex.com/
In the meantime, I got Symfony 1.4 running inside the Windows Azure cloud. It was not as hard as expected. I was also able to write blob storage caching for Symfony. Session handling works OK, but you need to modify the Symfony session handler to work correctly with more than one server instance.
