OpenWhisk messaging package - messageHubProduce unstable

I'm trying to use the "whisk.system/messaging" package, specifically the messageHubProduce action.
I created a binding to this package and tried a simple call with Postman.
Following the documentation, I built a simple JSON payload and made the call, but the action is really unstable: the same call sometimes returns a success, sometimes a timeout, and sometimes a "No brokers available" error.
I know the implementation of this action is in Python. Has anyone else seen the same symptoms I'm getting?
This is the message I'm sending:
{
"topic": "mytopic",
"value": "MyMessage",
"blocking": false
}
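For context, here is a minimal sketch (in Go) of how such a call can be made against the OpenWhisk REST API. The API host, the package-binding name myMessageHub, and the credentials are placeholders, not the actual values used:

package main

import (
	"bytes"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Placeholders: your OpenWhisk API host, namespace, and the name of your package binding.
	url := "https://openwhisk.example.com/api/v1/namespaces/_/actions/myMessageHub/messageHubProduce?blocking=true"
	payload := []byte(`{"topic": "mytopic", "value": "MyMessage"}`)

	req, err := http.NewRequest("POST", url, bytes.NewReader(payload))
	if err != nil {
		panic(err)
	}
	// The wsk auth key has the form "user:password"; its two halves go into HTTP basic auth.
	req.SetBasicAuth("AUTH_KEY_USER", "AUTH_KEY_SECRET")
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, string(body))
}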
These are the results for the same call:
messageHubProduce 446d59eb816b4b34a52374a6a24f3efe
{ "error": "The action exceeded its time limits of 60000 milliseconds." }
messageHubProduce 4213b6a495bc4c5aa7af9e299ddd8fcd
{ "success": true }

After working closely with the Message Hub team, we have deployed an updated messageHubProduce action which should address your stability and performance issues.
Additionally, to provide real-time feedback please feel free to join us on Slack: https://openwhisk.incubator.apache.org/slack.html

Related

/actuator/health does not detect stopped binders in Spring Cloud Stream

We are using Spring Cloud Stream with multiple bindings based on the Kafka Streams binder.
The output of /actuator/health correctly lists all our bindings and their state (RUNNING) - see example below.
Our expectation was that when a binding is stopped using
curl -d '{"state":"STOPPED"}' -H "Content-Type: application/json" -X POST http://<host>:<port>/actuator/bindings/mystep1,
it would still be listed, but with threadState = NOT_RUNNING or SHUTDOWN, and that the overall health status would be DOWN.
This is not the case. After stopping a binding, it is removed from the list and the overall state of /actuator/health is still UP.
Is there a reason for this? We would like to have an alert on this execution state of our application.
Are there code examples showing how we can achieve this with a customized solution based on KafkaStreamsBinderHealthIndicator?
Example output of /actuator/health with Kafka Streams:
{
  "status": "UP",
  "components": {
    "binders": {
      "status": "UP",
      "components": {
        "kstream": {
          "status": "UP",
          "details": {
            "mystep1": {
              "threadState": "RUNNING",
              ...
            },
            "mystep2": {
              "threadState": "RUNNING",
              ...
            },
            ...
          }
        }
      }
    },
    "refreshScope": {
      "status": "UP"
    }
  }
}
UPDATE on the exact situation:
We do not stop the binding manually via the bindings endpoint.
We have implemented integrated error queues for runtime errors within all processing steps based on StreamBridge.
The solution also has a kind of circuit-breaker feature: it stops a binding from within the code when a configurable limit of consecutive runtime errors is reached, because we do not want to flood our internal error queues.
Our application is monitored by Icinga via /actuator/health, so we would like to get an alarm when one of the bindings is stopped.
Switching Icinga to another endpoint like /actuator/bindings cannot be done easily by our team.
Presently, the Kafka Streams binder health indicator only considers the currently active Kafka Streams processors for the health check, so what you are seeing in the output when the binding is stopped is expected. Since you used the bindings endpoint to stop the binding, you can use /actuator/bindings to get the status of the bindings; there you will see the state of all the bindings of the stopped processor as stopped. Does that satisfy your use case? If not, please open a new issue in the repository and we could consider making changes in the binder so that the health indicator is configurable by users. At the moment, applications cannot customize the health check implementation. We could also consider adding a property with which you can force the stopped/inactive Kafka Streams processors to be included in the health check output. This is going to be tricky, though; e.g., what should the overall health status be if some processors are down?
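To illustrate the /actuator/bindings route, here is a minimal sketch (written in Go, as an external Icinga-style check) that polls the endpoint and exits non-zero when any binding is not running. It assumes the endpoint returns a JSON array whose entries carry name and state fields; verify the exact response shape against your Spring Cloud Stream version:

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"os"
)

// binding mirrors only the fields this check needs from /actuator/bindings.
type binding struct {
	Name  string `json:"name"`
	State string `json:"state"`
}

func main() {
	// Placeholder URL: point this at your application's actuator endpoint.
	resp, err := http.Get("http://localhost:8080/actuator/bindings")
	if err != nil {
		fmt.Println("UNKNOWN:", err)
		os.Exit(3) // Nagios/Icinga convention: 3 = UNKNOWN
	}
	defer resp.Body.Close()

	var bindings []binding
	if err := json.NewDecoder(resp.Body).Decode(&bindings); err != nil {
		fmt.Println("UNKNOWN:", err)
		os.Exit(3)
	}

	for _, b := range bindings {
		if b.State != "running" {
			fmt.Printf("CRITICAL: binding %s is %s\n", b.Name, b.State)
			os.Exit(2) // 2 = CRITICAL
		}
	}
	fmt.Println("OK: all bindings running")
}

Because the check lives outside the application, it does not depend on the binder's health indicator at all, which sidesteps the limitation described above.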

Google Play purchases.products.acknowledge returns 400 "not a valid state" and 409 concurrentUpdate

We are trying to acknowledge Google Play purchases on the server side through purchases.products.acknowledge with Go.
However, the following errors come up sometimes:
googleapi: Error 409: The operation could not be performed since the object was
already in the process of being updated., concurrentUpdate
googleapi: Error 400: The purchase is not in a valid state to perform the desired operation
Is there anything I am missing, or how can I solve these errors?
Per Google support:
For error 400, the purchaseState must be Purchased or 0 before you can acknowledge the purchase. For more information, please refer to this page: https://developer.android.com/google/play/billing/integrate#process
Error 400 can also mean that you already acknowledged the purchase.
For error 409, this means you are acknowledging the purchase multiple times concurrently. Unfortunately, we don't provide support for API concurrency issues.
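Putting that advice together, a minimal sketch in Go using the google.golang.org/api/androidpublisher/v3 client could look like the following. The package name, product ID, purchase token, and credentials file are placeholders, and this is an illustration of the suggested flow (check the state first, acknowledge only once), not a guaranteed fix:

package main

import (
	"context"
	"log"

	"google.golang.org/api/androidpublisher/v3"
	"google.golang.org/api/option"
)

func main() {
	ctx := context.Background()

	// Placeholder: service-account credentials with access to the Play Developer API.
	svc, err := androidpublisher.NewService(ctx, option.WithCredentialsFile("service-account.json"))
	if err != nil {
		log.Fatal(err)
	}

	pkg, productID, token := "com.example.app", "my_product", "PURCHASE_TOKEN"

	// 1. Fetch the purchase and make sure it is actually in the Purchased state (0).
	purchase, err := svc.Purchases.Products.Get(pkg, productID, token).Do()
	if err != nil {
		log.Fatal(err)
	}
	if purchase.PurchaseState != 0 {
		log.Fatalf("purchase not in Purchased state yet (state=%d); retry later", purchase.PurchaseState)
	}

	// 2. Skip the call entirely if it was already acknowledged (avoids one cause of the 400).
	if purchase.AcknowledgementState == 1 {
		log.Println("already acknowledged, nothing to do")
		return
	}

	// 3. Acknowledge exactly once; concurrent acknowledgements of the same token cause the 409.
	req := &androidpublisher.ProductPurchasesAcknowledgeRequest{}
	if err := svc.Purchases.Products.Acknowledge(pkg, productID, token, req).Do(); err != nil {
		log.Fatal(err)
	}
	log.Println("purchase acknowledged")
}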
Currently having this exact issue; did you manage to resolve it in the end?
Acknowledgement Request Response: {
error: {
code: 409,
message: 'The operation could not be performed since the object was already in the process of being updated.',
errors: [ [Object] ]
}
}
I'm only sending it once, after I have validated the purchase and added it to my database. I'm not sure why it's happening.
EDIT:
I had a theory that my code was executing too fast and maybe the order was still pending, so I added a 10-second gap between getting the purchase token and trying to acknowledge again. I'm now getting the following:
Acknowledgement Request Response: {
error: {
code: 400,
message: 'The purchase is not in a valid state to perform the desired operation.',
errors: [ [Object] ]
}
}
However, at this time the state in the Google Play Console is Chargeable, meaning it just needs to be acknowledged.

Azure Maps Directions Service bad request with bicycle

I'm facing a strange issue with the Directions service in Azure Maps.
When I make this request with car as the travel mode:
https://atlas.microsoft.com/route/directions/json?subscription-key={my-api-key}&api-version=1.0&query=48.81532,2.34954:52.31281,4.94103&language=fr-FR&computeTravelTimeFor=all&travelMode=car&arriveAt=2018-10-30T09:50:00-00:00
I get a good response with proper JSON, but when I make the same request with the bicycle travel mode:
https://atlas.microsoft.com/route/directions/json?subscription-key={my-api-key}&api-version=1.0&query=48.81532,2.34954:52.31281,4.94103&language=fr-FR&computeTravelTimeFor=all&travelMode=bicycle&arriveAt=2018-10-30T09:50:00-00:00
I get the following error:
{ "error": {
"code": "400 BadRequest",
"message": "Bad request: one or more parameters were incorrectly specified or are mutually exclusive." } }
I can't figure out what causes this error.
I changed the arriveAt time to 2018-10-31T09:50:00-00:00 and it works. Perhaps biking takes so long that it's impossible to arrive at the destination by the specified arriveAt time.
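For what it's worth, a small sketch in Go of how the bicycle request can be assembled so the arriveAt value is easy to adjust; the subscription key is a placeholder:

package main

import (
	"fmt"
	"net/url"
)

func main() {
	q := url.Values{}
	q.Set("subscription-key", "MY_API_KEY") // placeholder
	q.Set("api-version", "1.0")
	q.Set("query", "48.81532,2.34954:52.31281,4.94103")
	q.Set("language", "fr-FR")
	q.Set("computeTravelTimeFor", "all")
	q.Set("travelMode", "bicycle")
	// Use an arrival time that is actually reachable by bicycle.
	q.Set("arriveAt", "2018-10-31T09:50:00-00:00")

	fmt.Println("https://atlas.microsoft.com/route/directions/json?" + q.Encode())
}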

Is it possible to print errors in a Go AWS Lambda function?

For example, with code like this:
os.Stderr.WriteString(rec.(string))
But this does not show up as an error.
I know that I can panic after logging and catch it in API Gateway (to avoid sending the stack trace to the client), but are there no other ways? The documentation does not mention anything like this.
It seems it's not possible. I assume you're looking at the metrics in Amazon CloudWatch:
AWS Lambda automatically monitors functions on your behalf, reporting metrics through Amazon CloudWatch. These metrics include total invocations, errors, duration, throttles, DLQ errors and Iterator age for stream-based invocations.
https://docs.aws.amazon.com/lambda/latest/dg/monitoring-functions-metrics.html
Now, let's see how they define errors:
Metric "Errors" measures the number of invocations that failed due to errors in the
function (response code 4XX).
So, if you want to see the errors on that graph, you have to respond with the proper codes. If you're concerned about exposing the error stack trace, here is a good read: Error handling with API Gateway and Go Lambda functions. The basic idea there is creating a custom lambdaError type, meant to be used by a Lambda handler function to wrap errors before returning them. This custom error message:
{
"code": "TASK_NOT_FOUND",
"public_message": "Task not found",
"private_message": "unknown task: foo-bar"
}
will be wrapped in a standard one
{
"errorMessage": "{\"code\":\"TASK_NOT_FOUND\",\"public_message\":\"Task not found\",\"private_message\":\"unknown task: foo-bar\"}",
"errorType": "lambdaError"
}
and later on mapped in API Gateway, so, the end client will see only the public message
{
"code": "TASK_NOT_FOUND",
"message": "Task not found"
}
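A minimal sketch of that wrapping idea in Go follows; the lambdaError type and the handler are illustrative names taken from the linked article's approach rather than a fixed API:

package main

import (
	"context"
	"encoding/json"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// lambdaError carries a public and a private message; its JSON form becomes the errorMessage.
type lambdaError struct {
	Code           string `json:"code"`
	PublicMessage  string `json:"public_message"`
	PrivateMessage string `json:"private_message"`
}

// Error serializes the struct, so the Lambda runtime reports the concrete type name
// ("lambdaError") as errorType and this JSON as errorMessage, which API Gateway can then map.
func (e lambdaError) Error() string {
	b, _ := json.Marshal(e)
	return string(b)
}

func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	// Returning the custom error marks the invocation as failed, so it counts
	// toward the CloudWatch Errors metric.
	return events.APIGatewayProxyResponse{}, lambdaError{
		Code:           "TASK_NOT_FOUND",
		PublicMessage:  "Task not found",
		PrivateMessage: "unknown task: foo-bar",
	}
}

func main() {
	lambda.Start(handler)
}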

Sending events from server to client(s) in Meteor

Is there a way to send events from the server to all or some clients without using collections?
I want to send events with some custom data to clients. While Meteor is very good at doing this with collections, in this case the added complexity and storage are not needed.
On the server there is no need for Mongo storage or local collections.
The client only needs to be alerted that it received an event from the server and act on the data accordingly.
I know this is fairly easy with sockjs, but it's very difficult to access sockjs from the server.
Meteor.Error does something similar to this.
Note: the package below is now deprecated and does not work for Meteor versions > 0.9.
You can use the following package, which originally aims to broadcast messages from clients to the server and back to clients:
http://arunoda.github.io/meteor-streams/
No collection, no MongoDB behind it; usage is as follows (not tested):
var stream = new Meteor.Stream('streamName'); // defined on both the client and the server

if (Meteor.isClient) {
  stream.on("channelName", function(message) {
    console.log("message: " + message);
  });
}

if (Meteor.isServer) {
  setInterval(function() {
    stream.emit("channelName", 'This is my message!');
  }, 1000);
}
You should use Collections.
The "added complexity and storage" isn't a factor if all you do is create a collection, add a single property to it and update that.
Collections are just a shape for data communication between server and client, and they happen to build on mongo, which is really nice if you want to use them like a database. But at their most basic, they're just a way of saying "I want to store some information known as X", which hooks into the publish/subscribe architecture that you should want to take advantage of.
In the future, other databases will be exposed in addition to Mongo. I could see there being a smart package at some stage that strips Collections down to their most basic functionality like you're proposing. Maybe you could write it!
I feel for @Rui; using a Collection just to send a message feels cumbersome.
At the same time, once you have several such messages to send around, it is convenient to have a Collection named something like settings or similar where you keep them.
The best package I have found is Streamy. It allows you to send to everybody, or to just one specific user:
https://github.com/YuukanOO/streamy
meteor add yuukan:streamy
Send message to everybody:
Streamy.broadcast('ddpEvent', { data: 'something happened for all' });
Listen for message on client:
// Attach a handler for a specific message
Streamy.on('ddpEvent', function(d, s) {
console.log(d.data);
});
Send message to one user (by id)
var socket = Streamy.socketsForUsers(["nJyQvECmkBSXDZEN2"])._sockets[0]
Streamy.emit('ddpEvent', { data: 'something happened for you' }, socket);
