I am trying to use the Google Cloud Natural Language API.
I already have a running Google Cloud account.
I enabled the Cloud Natural Language API service, generated a service account key, and downloaded it locally.
I am using Google's default sample program:
import com.google.cloud.language.v1.Document;
import com.google.cloud.language.v1.Document.Type;
import com.google.cloud.language.v1.LanguageServiceClient;
import com.google.cloud.language.v1.Sentiment;

// Instantiates a client, relying on Application Default Credentials
try (LanguageServiceClient language = LanguageServiceClient.create()) {
    // The text to analyze
    String text = "My stay at this hotel was not so good";
    Document doc = Document.newBuilder().setContent(text).setType(Type.PLAIN_TEXT).build();
    // Detects the sentiment of the text
    Sentiment sentiment = language.analyzeSentiment(doc).getDocumentSentiment();
    System.out.printf("Text: %s%n", text);
    System.out.printf("Sentiment: %s, %s%n", sentiment.getScore(), sentiment.getMagnitude());
}
I am using Eclipse as my IDE on a Mac.
When I run the application I get this error:
java.io.IOException: The Application Default Credentials are not available. They are available if running in Google Compute Engine. Otherwise, the environment variable GOOGLE_APPLICATION_CREDENTIALS must be defined pointing to a file defining the credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I even added GOOGLE_APPLICATION_CREDENTIALS as an export in Terminal, and running "printenv" shows the path like this:
GOOGLE_APPLICATION_CREDENTIALS=/Users/temp/Downloads/Sentiment-0e556940c1d8.json
It still wasn't working. Through some trial and error I found out that in Eclipse we can configure the run.
There I added the environment variable, and after that the program runs fine.
Now my problem is that I am implementing that code inside a J2EE project, and that EAR file is deployed to WildFly.
I am getting the same error again, and I don't know where to set the environment variable in WildFly.
Finally I found a way to set up GOOGLE_APPLICATION_CREDENTIALS as an environment variable inside WildFly.
If you are running the server through Eclipse:
Open the WildFly server settings by double-clicking your server in the Servers tab.
Click "Open Launch Configuration".
Move to the "Environment" tab and add a new variable as a key-value pair, e.g.
GOOGLE_APPLICATION_CREDENTIALS /Users/temp/Downloads/Sentiment-0e556940c1d8.json
If you are running the server from the terminal:
By default WildFly picks up additional settings from the standalone.conf file.
Just open the wildfly/bin/standalone.conf file and add the following line (on macOS/Linux the variable needs to be exported so that it reaches the JVM process):
export GOOGLE_APPLICATION_CREDENTIALS=/Users/temp/Downloads/Sentiment-0e556940c1d8.json
That's it. You are good to go.
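As an alternative that avoids environment variables altogether, the client library also accepts credentials passed in explicitly. A minimal sketch, assuming google-auth-library and the gax FixedCredentialsProvider are on the classpath (the file path is just the example key from above):
import com.google.api.gax.core.FixedCredentialsProvider;
import com.google.auth.oauth2.GoogleCredentials;
import com.google.cloud.language.v1.LanguageServiceClient;
import com.google.cloud.language.v1.LanguageServiceSettings;
import java.io.FileInputStream;

// Load the service account key explicitly instead of relying on GOOGLE_APPLICATION_CREDENTIALS
GoogleCredentials credentials = GoogleCredentials.fromStream(
        new FileInputStream("/Users/temp/Downloads/Sentiment-0e556940c1d8.json"));
LanguageServiceSettings settings = LanguageServiceSettings.newBuilder()
        .setCredentialsProvider(FixedCredentialsProvider.create(credentials))
        .build();
try (LanguageServiceClient language = LanguageServiceClient.create(settings)) {
    // use the client exactly as in the snippet above
}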
We currently set the path of a properties file which contains the secret/access key for the Credentials File of the AWSCredentialsProviderControllerService. The issue is that we change the properties path for prod and non-prod each time we run the NiFi workflow. We are trying to arrive at a setup with no configuration change to the credential file path, so that the access/secret key is read regardless of prod or non-prod.
Since the credential file won't support NiFi Expression Language, we tried to make use of the Access Key/Secret Key properties with ${ENV:equalsIgnoreCase("prod"):ifElse(${ACESS_PROD},${ACESS_NONPROD})}. The issue we are facing is that we are not able to store these access/secret keys in the registry, hence we are unable to implement this change.
Is there any way to read the access/secret key regardless of environment in NiFi? Currently we use one property file for non-prod NiFi and a second property file for prod. In this setup, the credential file path has to be changed manually when switching between prod and non-prod. We are trying to work seamlessly without changing the path of the credential file. Is there any way to make this happen?
The processor that uses the AWSCredentialsProviderControllerService does not support parameters or variables, but the controller service's "Credentials File" property does accept Parameter Context entries, so make use of this for your solution.
Example:
Trigger something --> RouteOnAttribute --> if prod (run ExecuteStreamCommand and change the Parameter Context value to point to the prod credfile), else if dev (run ExecuteStreamCommand and change the Parameter Context value to point to the dev credfile) --> then run your AWS processor.
You can use the toolkit client to edit the parameter context, or even the nipyapi Python module. It will not be fast, though.
Recently I've come across an issue configuring Last Participant Support on a deployed application. I've found an old post about that:
https://www.ibm.com/developerworks/community/forums/html/topic?id=77777777-0000-0000-0000-000014090728
On the server itself I found how to do it. But with Jython or wsadmin commands I'm not able to find how to do it on the application itself.
So that post does not help in my case. Any ideas?
There is no command assistance available for the action of changing last participant support from the admin console, which typically implies there is no scripting command associated with the action. And there doesn't appear to be a wsadmin AdminApp command to modify the property. Looking at the config repository changes made as a result of the admin console action, the IBM Programming Model Extensions (PME) deployment descriptor file "ibm-application-ext-pme.xmi" for an application is created/modified by the action.
If possible, the best long-term solution would be to use a tool like RAD to generate that extensions file when packaging the application, because then your config changes won't get overridden if you need to redeploy the app. If you can't modify the app, you can script the addition of a PME descriptor file in each of the desired apps, with the knowledge that redeploying the app will overwrite your changes. The changes can be made by doing something along the lines of:
1) create a text file named ibm-application-ext-pme.xmi with contents similar to this:
<pmeext:PMEApplicationExtension xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:pmeext="http://www.ibm.com/websphere/appserver/schemas/5.0/pmeext.xmi" xmi:id="PMEApplicationExtension_1559836881290">
<lastParticipantSupportExtension xmi:id="LastParticipantSupportExtension_1559836881292" acceptHeuristicHazard="false"/>
</pmeext:PMEApplicationExtension>
2) in wsadmin or your Jython script, do the following (note: this example assumes the xmi file you created is in the current directory; if not, include the full path to it in the createDocument command):
deployUri = "cells/<your_cell_name>/applications/<your_app_name>.ear/deployments/<your_app_name>/META-INF/ibm-application-ext-pme.xmi"
AdminConfig.createDocument(deployUri, "ibm-application-ext-pme.xmi")
AdminConfig.save()
3) restart the server
I'm trying to combine suggestions on how to use SSL with OpenShift:
https://blog.openshift.com/openshift-demo-part-13-using-ssl/
with those on how to use SSL with MQ:
Spring Configuration for JMS (Websphere MQ - SSL, Tomcat, JNDI, Non IBM JRE)
So I managed to modify my Spring Boot Camel app to move from a connection via an SVRCONN MQ channel without SSL to one that uses SSL,
by adding the SSLCipherSuite property to the com.ibm.mq.jms.MQConnectionFactory bean, and by adding these VM arguments to the Run Configuration
(as described in the second linked document):
-Djavax.net.ssl.trustStore=C:\path-to-keystore\key.jks
-Djavax.net.ssl.trustStorePassword=topsecret
-Djavax.net.ssl.keyStore=C:\path-to-keystore\key.jks
-Djavax.net.ssl.keyStorePassword=topsecret
-Dcom.ibm.mq.cfg.useIBMCipherMappings=false
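For reference, the connection factory change mentioned above amounts to something like the sketch below. Host, port, queue manager, and channel values are placeholders I made up; only the setSSLCipherSuite call is the SSL-specific addition:
import com.ibm.mq.jms.MQConnectionFactory;
import com.ibm.msg.client.wmq.WMQConstants;
import javax.jms.JMSException;

// Register this as the Spring bean used by your JMS/Camel components.
public MQConnectionFactory mqConnectionFactory() throws JMSException {
    MQConnectionFactory cf = new MQConnectionFactory();
    cf.setHostName("mq.example.com");                        // assumption: queue manager host
    cf.setPort(1414);                                        // assumption: listener port
    cf.setQueueManager("QM1");                               // assumption: queue manager name
    cf.setChannel("APP.SVRCONN");                            // assumption: SVRCONN channel
    cf.setTransportType(WMQConstants.WMQ_CM_CLIENT);         // connect as a client over the network
    cf.setSSLCipherSuite("TLS_RSA_WITH_AES_128_CBC_SHA256"); // must match the channel's CipherSpec
    return cf;
}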
And it works fine locally on the embedded Tomcat server. However, I need to deploy it to OpenShift, so my first impulse was
to add the same VM arguments to the ones I already use for the OpenShift deployment, that is, these:
-Dkubernetes.master=
-Dkubernetes.namespace=
-Dkubernetes.auth.basic.username=
-Dkubernetes.auth.basic.password=
-Dkubernetes.trust.certificates=
-Dfabric8.mode=openshift
but it obviously doesn't work, for example because I don't have the same path to the keystore. So I investigated a bit
and learned that I have to use secrets, which can be defined via the CLI >>oc secrets new<< command or via the OpenShift console,
but I don't understand how exactly to proceed.
Do I have to add parameters to the image, or environment variables to the deployment config, or something else?
Somehow it has to reference the defined secrets, and does it have to be named by changing each dot in the property name to an underscore?
So, for example if I issue:
oc secrets new my-key-jks key.jks
then I have to >>Add Value from Config Map or Secret<<
JAVAX_NET_SSL_TRUSTSTORE my-key-jks key.jks
and Add Value:
COM_IBM_MQ_CFG_USEIBMCIPHERMAPPINGS false ??
I tried that, but it doesn't work. I added the values to the deployment config, so that I get an excerpt like this:
"spec": {
"containers": [
{
"env": [
{
"name": "KUBERNETES_NAMESPACE",
"valueFrom": {
"fieldRef": {
"apiVersion": "v1",
"fieldPath": "metadata.namespace"
}
}
},
{
"name": "JAVAX_NET_SSL_TRUSTSTORE",
"valueFrom": {
"secretKeyRef": {
"key": "key.jks",
"name": "my-key-jks"
}
}
},
when I do:
oc get dc app_name -o json
I have also created a service account (sa), assigned it as an admin on the project, and allowed it to use the newly created secret. I did this through the OpenShift console, so I don't have the oc CLI commands at hand right now.
This is also somewhat relevant (but it doesn't help me much):
https://github.com/openshift/openshift-docs/issues/699
After a build, the pod's status becomes >>Crash Loop Back-off<< and >>The logs are no longer available or could not be loaded.<< Without SSL, the same app runs fine on OpenShift.
IMHO you are misinterpreting some of the settings you specify here.
1.
The VM arguments after "-Dkubernetes.master=" that you specify here are, I assume, meant to be given to the fabric8 Maven plugin which you use for deployment. Right?
The parameters here that are about authentication/certificates are ONLY for the communication with Kubernetes and NOT intended for giving keystore data to your application. So I think they are unrelated.
Instead you need to ensure that, inside your container, your app gets started with the same parameters that you use for local execution. Of course you would then have to change the parameter values to wherever the respective data is available inside your container.
2.
Secrets are a tool to add sensitive data to your deployment that you don't want baked into your application image. So, for example, your keystores and the keystore passwords would qualify to be injected via a secret.
An alternative to providing secret data as environment variables, as you tried, is to mount them into the filesystem, which makes the secret data available as files. As your application needs the JKS as a file, you could do the following.
In the web console, on your deployment, use the link "Add config files" under the section "Volumes".
Select the secret "my-key-jks" created before as the "source".
Specify a path where the secret should be mounted inside your container, for example "/secret". Then click "Add".
Your JKS will then be available inside your container under the path "/secret/key.jks", so your application's parameter can point to this path.
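If you cannot easily pass the -D arguments to the JVM inside the image, another option is to set the same values as system properties early in the application itself, before the first MQ connection is created. A minimal sketch, assuming the secret is mounted under /secret and the passwords are injected as environment variables named TRUSTSTORE_PASSWORD and KEYSTORE_PASSWORD (both names are made up for illustration):
// Call this at the start of the Spring Boot main() method, before any MQ connection factory is built.
public static void configureSsl() {
    System.setProperty("javax.net.ssl.trustStore", "/secret/key.jks");
    System.setProperty("javax.net.ssl.trustStorePassword", System.getenv("TRUSTSTORE_PASSWORD"));
    System.setProperty("javax.net.ssl.keyStore", "/secret/key.jks");
    System.setProperty("javax.net.ssl.keyStorePassword", System.getenv("KEYSTORE_PASSWORD"));
    System.setProperty("com.ibm.mq.cfg.useIBMCipherMappings", "false");
}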
I'm looking to start Spring XD in distributed mode (more specifically, deploying it with BOSH). How does the admin component communicate with the module container?
If it's via TCP/HTTP, surely I'll have to tell the admin component where all the containers are? If it's via Redis, I would've thought that I'll need to tell the containers where the Redis instance is?
Update
I've tried running xd-admin and Redis on one box, and xd-container on another with redis.properties updated to point to the admin box. The container starts without reporting any exceptions.
Running the example stream submission curl -d "time | log" http://{admin IP}:8080/streams/ticktock yields no output to either console, and no output in the logs.
If you are using the xd-container script, then redis.properties is expected to be under "XD_HOME/config", where XD_HOME points to the base directory containing the bin, config, lib & modules directories of XD.
Communication between the Admin and Container runtime components is via the messaging bus, which by default is Redis.
Make sure the environment variable XD_HOME is set as per the documentation; if it is not you will see a logging message that suggests the properties file has been loaded correctly when it has not:
13/06/24 09:20:35 INFO support.PropertySourcesPlaceholderConfigurer: Loading properties file from URL [file:../config/redis.properties]
I have an old project in WSAD 5.1.2 with a WAS 4 server configuration that's in a .wsi file. If I double-click it I get the server configuration editor, and on the Environment tab there is a System Properties section with some name-value pairs.
Now I have opened the same project in RAD 7.5.1. Where can I input the same name-value pairs for a server in RAD 7.5.1? There's no "Environment" tab if I double-click my server, just an "Overview" tab.
I finally found the proper way of doing it in the admin web interface:
Application servers > myServer > Process definition > Java Virtual Machine > Custom properties
In RAD 7.5.4 the JVM name-value pairs are stored under Servers --> Application Servers --> Java and Process Management --> Process definition --> Java Virtual Machine --> Custom properties.
Here you can create a new name-value pair, which can then be read by Java code using the System.getProperty function.
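A minimal sketch of reading such a property from Java ("my.custom.property" is a made-up name, not something defined above):
// Read a JVM custom property defined in the server's "Custom properties" panel.
String value = System.getProperty("my.custom.property", "default-value");
System.out.println("my.custom.property = " + value);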
Apparently IBM started to ship a real application server with RAD/RSA 6 instead of the test server that comes with WSAD. So to configure the app server, you just use the web admin interface as usual.
Thanks to Jeanne Boyarsky for the answer in the forums over at The Java Ranch.
The old app I'm porting needs some properties set in the system properties of the JVM so that they can be retrieved with System.getProperty(...), and I found a dirty way of making it work. So, until I find out how to do it the proper way, if there is a proper way, I came up with this hack:
After doing some "find" and "grep" in the profile directory of the app server I found a file called:
runtimes\base_v7\profiles\<profilename>\config\cells\<cellname>\nodes\<nodename>\servers\<servername>\server.xml
At the bottom of server.xml there is a <jvmEntries xmi:id="JavaVirtualMachine_.... tag.
Inside it you can add systemProperties tags in the format:
<systemProperties xmi:id="someId" name="name of your property" value="the value" required="false"/>
Anyone who knows how to do this the proper way and has read all the way down here must be crying by now... :)
But it seems to work...