Unable to create trigger from OpenWhisk Kafka feed that listens to a Generic Kafka instance - openwhisk

I have a local OpenWhisk installation on an Ubuntu 16.04 desktop. Actions, triggers, rules, and alarm triggers are working.
I cloned the Git repository https://github.com/apache/incubator-openwhisk-package-kafka and ran the following in sequence:
installCatalog.sh, gradlew :distDocker, installKafka.sh
Then I tried to create a trigger:
bin/wsk trigger create MyKafkaTrigger -f /messaging/kafkaFeed -p brokers "[\"localhost:9092\", \"localhost:9093\"]" -p topic test -p isJSONData true --insecure
I am following this section of the README: "Creating a Trigger that listens to a Generic Kafka instance"
I am re-using the Kafka instance created as part of the OpenWhisk installation and have created a topic named 'test'; I am able to publish to and consume from this topic using the Kafka command-line tools.
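For reference, a topic like this can be exercised with the standard Kafka console tools; the script names and flags below assume a recent Kafka distribution with its bin directory on the PATH and a broker listening on localhost:9092:
# publish a test message to the topic
echo '{"hello": "world"}' | kafka-console-producer.sh --broker-list localhost:9092 --topic test
# read it back (older Kafka versions use --zookeeper localhost:2181 instead of --bootstrap-server)
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning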
The trigger creation fails (it deletes the trigger, saying resource does not exist).
One thing I observed is that the following packages were created under /guest:
bin/wsk package list /guest --insecure
packages
/guest/messagingWeb private
/guest/messaging
I also tried changing the feed name to /guest/messaging/kafkaFeed, to just kafkaFeed, etc., but the results differ slightly:
bin/wsk trigger create MyKafkaTrigger -f /guest/messaging/kafkaFeed -p brokers "[\"localhost:9092\", \"localhost:9093\"]" -p topic test -p isJSONData true --insecure
gives a JSON output saying "error": "The requested resource does not exist."
bin/wsk trigger create MyKafkaTrigger -f /messaging/kafkaFeed -p brokers "[\"localhost:9092\", \"localhost:9093\"]" -p topic test -p isJSONData true --insecure
gives
ok: deleted trigger MyKafkaTrigger
error: Unable to create trigger 'MyKafkaTrigger': Unable to invoke trigger
'MyKafkaTrigger' feed action '/messaging/kafkaFeed'; feed is not configured:
Unable to invoke action 'kafkaFeed': The supplied authentication is not
authorized to access this resource. (code 186)
Looking for any help.

I followed the dev guide to get up and running: https://github.com/apache/incubator-openwhisk-package-kafka/blob/master/devGuide.md
Here are the steps I did to get a feed trigger successfully created:
Take note of the auth key by running
bin/wsk -i property get --auth
Run the install script
./installKafka.sh <authKey> <edgehost> <dburl> <dbprefix> <apihost>
Note that the value of authKey is the value obtained in the first step above. For the values of the other parameters, see https://github.com/apache/incubator-openwhisk-package-kafka/blob/master/devGuide.md#install-actions
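For a plain local installation the invocation might look roughly like the sketch below; every host value here is an illustrative placeholder for whatever your environment actually uses, and the awk extraction simply assumes the auth key is the last field of the property output:
AUTH=$(bin/wsk -i property get --auth | awk '{print $NF}')
./installKafka.sh "$AUTH" 172.17.0.1 http://172.17.0.1:5984 local_ 172.17.0.1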
Once the install script completes successfully, verify that the correct packages are installed. You should see the messaging and messagingWeb packages, e.g.:
bin/wsk -i package list /guest
packages
/guest/messagingWeb private
/guest/messaging shared
Now verify that the kafkaFeed action exists
bin/wsk -i package get --summary /guest/messaging
package /guest/messaging: Returns a result based on parameter endpoint
(parameters: *endpoint)
action /guest/messaging/kafkaProduce: Produce a message to a Kafka cluster
(parameters: base64DecodeKey, base64DecodeValue, brokers, key, topic, value)
feed /guest/messaging/kafkaFeed: Feed to listen to Kafka messages
(parameters: brokers, endpoint, isBinaryKey, isBinaryValue, isJSONData, topic)
Now you can create a trigger by either passing in the namespace as part of the full package name, or leaving off the leading / from the package name:
bin/wsk -i trigger create MyKafkaTrigger -f /guest/messaging/kafkaFeed -p brokers "[\"localhost:9092\", \"localhost:9093\"]" -p topic test -p isJSONData true
OR
bin/wsk -i trigger create MyKafkaTrigger -f messaging/kafkaFeed -p brokers "[\"localhost:9092\", \"localhost:9093\"]" -p topic test -p isJSONData true
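Once the trigger exists, you can check that it actually fires by binding it to an action with a rule, publishing to the topic, and polling the activations. The action name hello below is only a placeholder for any action you already have installed:
bin/wsk -i rule create MyKafkaRule MyKafkaTrigger hello
bin/wsk -i activation poll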

Related

Spring cloud data flow shell : Stuck on "The stream is being deployed"

I successfully registered three apps named appSink, appSource, and appProcessor as follows:
dataflow:>app register --name appSource --type source --uri maven://com.example:source:jar:0.0.1-SNAPSHOT --force
Successfully registered application 'source:appSource'
dataflow:>app register --name appProcessor --type processor --uri maven://com.example:processor:jar:0.0.1-SNAPSHOT --force
Successfully registered application 'processor:appProcessor'
dataflow:>app register --name appSink --type sink --uri maven://com.example:sink:jar:0.0.1-SNAPSHOT --force
Successfully registered application 'sink:appSink'
dataflow:>app list
╔═════════╤════════════╤═══════╤════╗
║ source  │ processor  │ sink  │task║
╠═════════╪════════════╪═══════╪════╣
║appSource│appProcessor│appSink│    ║
╚═════════╧════════════╧═══════╧════╝
I then created and deployed a stream as follows:
dataflow:>stream create --name myStream --definition 'appSource | appProcessor | appSink'
Created new stream 'myStream'
dataflow:>stream deploy --name myStream
I get the message
Deployment request has been sent for stream 'myStream'
In the streams list I see
║myStream1 │source-app | processor-app | sink-app│The stream is being deployed. ║
The deployment never finishes, it seems. The Data Flow server logs are just stuck on this:
o.s.c.d.spi.local.LocalAppDeployer : Deploying app with deploymentId myStream1.source-app instance 0.
Why is my stream not deploying successfully?
Do you see any Java processes running on your local machine (that correspond to the applications being deployed)?
You can try remote debugging your application deployment using the doc: https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_remote_debugging
You can also try inheriting the apps' logging using
https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_log_redirect
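As a quick check for the first question, you can list the running Java processes and look for the deployed app jars (jps ships with the JDK; the jar names correspond to the registered apps):
jps -l
ps aux | grep java   # alternative if jps is not available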
I am seeing this same problem. I inherited the logging as you suggested. The UI never moves off of Deploying status. There are no errors in the logs and my stream is working when I test it.
Add the Spring Boot actuator dependency to your project; Data Flow calls /health and /info to see whether the app is deployed or not.
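With the actuator on the classpath you can hit those endpoints on a deployed app instance yourself to confirm they respond; the port below is only an example, since the local deployer assigns each instance its own port (check the deployed app's own log for the port it started on):
curl http://localhost:20000/health
curl http://localhost:20000/info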

Gearman is not writing anything to DB

I'm new to Gearman and cannot figure out why it isn't sending anything to the DB.
I've created new EC2 and RDS instances for Gearman. RDS engine version: MySQL 5.7.19.
On the EC2 instance I ran:
rpm -Uvh https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm && yum install gearmand -y
Then I created the config file:
vi /etc/sysconfig/gearmand
Which contains:
### Settings for gearmand
OPTIONS="--port=4730 --queue-type=MySQL --mysql-host=path_to_amazon_RDS_instance --mysql-port=3306 --mysql-user=root --mysql-password='dbpass' --mysql-db=db_prod --mysql-table=queue_dev --verbose DEBUG --log-file=/var/log/gearmand.log"
After I started the gearmand service and connected to the MySQL database on RDS, I can see that Gearman created the MySQL table queue_dev. So I assume there is no error in the connection and/or access.
From log file I cannot see any ERROR type messages.
Can anyone help me, or hint at what additionally must be done so Gearman can send messages to the DB, or how I can send a test message to the DB?
gearmand does not persist non-background jobs at all.
Only requests of the following types will be persisted:
SUBMIT_JOB_BG
SUBMIT_JOB_HIGH_BG
SUBMIT_JOB_LOW_BG
See Persistent Queues
Persistent queues were added to allow background jobs to be stored in an external durable queue so they may live between server restarts and crashes.
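So to see rows land in queue_dev, submit a background job, for example with the gearman command-line client (assuming the client tools are installed; test_function is an arbitrary function name, and with no worker registered for it the job stays queued and therefore persisted):
gearman -h 127.0.0.1 -p 4730 -b -f test_function "hello queue"
# then check the table on RDS
mysql -h path_to_amazon_RDS_instance -u root -p -e "SELECT * FROM db_prod.queue_dev;"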

composer-rest-server doesn't connect (failed to obtain cds for digitalproperty-network)

Installed playground with the steps from:
Using playground locally
I have been trying for two days now to get the composer-rest-server to start.
Steps done:
Changed all the CA/peer/orderer URLs from "localhost" to their specific Docker IPs in ~/.composer-connection-profiles/hlfv1/connection.json; otherwise I just get a connection timeout when starting the composer-rest-server.
Deployed the "digitalproperty-network" with composer
Got myself a new identity and its secret via composer.
Now when i run:
> composer-rest-server -p hlfv1 -n digitalproperty-network -i baderth -s omgDBCimVAbB -N always
I'm just getting:
Discovering types from business network definition ...
Error: failed to obtain cds for digitalproperty-network - transaction
not found digitalproperty-network/mychannel
at /usr/lib/node_modules/composer-rest-server/node_modules/fabric-client/node_modules/grpc/src/node/src/client.js:434:17 code: 2, metadata: Metadata { _internal_repr: {} } }
Same error also shown in the latest file in the /logs directory.
I have no clue what "transaction not found digitalproperty-network/mychannel" means, or what I should provide the REST server with if not the digitalproperty-network, which I deployed.
You should follow the install instructions for setting up a Development Environment, and then the Developer Guide:
https://hyperledger.github.io/composer/installing/development-tools.html
https://hyperledger.github.io/composer/tutorials/developer-guide.html
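Before starting composer-rest-server it is also worth confirming that the business network really is deployed against the profile you pass in; a sanity check along these lines (same profile, identity, and secret as above; flags as in the Composer releases from around the time of this question):
composer network ping -p hlfv1 -n digitalproperty-network -i baderth -s omgDBCimVAbB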

When attempting to deploy a business network I get an error 'Failed parsing HTTP/2' in grpc

The command that was issued was
composer network deploy -a my-network.bna -i admin -s adminpw
The full error received was
Error: {"created":"#1495236947.733570390","description":"Failed parsing HTTP/2","file":"../src/core/ext/transport/chttp2/transport/chttp2_transport.c","file_line":2022,"grpc_status":14,"referenced_errors":[{"created":"#1495236947.733545222","description":"Expected SETTINGS frame as the first frame, got frame type 80","file":"../src/core/ext/transport/chttp2/transport/parsing.c","file_line":479}{"created":"#1495236947.733562422","description":"Trying to connect an http1.x server","file":"../src/core/ext/transport/chttp2/transport/chttp2_transport.c","file_line":1995,"http_status":400}]}
Command failed
This error is the result of trying to deploy to a Hyperledger Fabric V1 runtime using a Hyperledger Fabric V0.6 profile.
In the above example, no profile was specified, which means it will use a default profile, and that default profile is specific to Hyperledger Fabric V0.6.
It is highly recommended that for all command-line interaction you explicitly specify the profile you want to use. So, for example, if you have a profile for connecting to your local Hyperledger Fabric V1 runtime called hlfv1, then you should issue the command
composer network deploy -p hlfv1 -a my-network.bna -i admin -s adminpw
(note the -p option to specify the profile to use)
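If you are not sure which profile names exist, you can list the connection profile directory that Composer reads (the same directory edited in the question further up); each subdirectory is a name you can pass with -p:
ls ~/.composer-connection-profiles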

Kafka on EC2 instance for integration testing

I'm trying to set up some integration tests for part of our project that makes use of Kafka. I've chosen to use the spotify/kafka Docker image, which contains both Kafka and ZooKeeper.
I can run my tests (and they pass!) on a local machine if I run the kafka container as described at that project site. When I try to run it on my ec2 build server, however, the container dies. The final fatal error is "INFO gave up: kafka entered FATAL state, too many start retries too quickly".
My suspicion is that it doesn't like the address passed in. I've tried using both the public and the private IP address that EC2 provides, but the results are the same either way, just as with localhost.
Any help would be appreciated. Thanks!
It magically works now, even though I'm still doing exactly what I was doing before. However, in order to help others who might come along, I will post what I did to get it to work.
I created the following shell script and have Jenkins run it as a build step.
#!/bin/bash
# Start the spotify/kafka container only if it does not already exist;
# ADVERTISED_HOST/ADVERTISED_PORT tell Kafka which address to advertise to clients.
if ! docker inspect -f 1 test_kafka &>/dev/null
then docker run -d --name test_kafka -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST=localhost --env ADVERTISED_PORT=9092 spotify/kafka
fi
Even though localhost resolves to the private IP address, it seems to accept it now. The if block just tests whether the container already exists so it can be reused.
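To confirm the broker actually came up on the build server, a couple of quick checks (these assume the container name and port mappings from the script above, and that nc is installed):
docker logs test_kafka
nc -z localhost 9092 && echo "Kafka port reachable"
nc -z localhost 2181 && echo "ZooKeeper port reachable"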
