SignalR and WebSockets on KrakenD API Gateway

I have had trouble implementing SignalR microservices behind a KrakenD API Gateway.
I presume it is possible, as I have had it working with both an NGINX load balancer and an Emissary API Gateway.
KrakenD, to my current understanding, seems a lot faster than both of those options, so it should be better suited to handling large amounts of real-time data.
If anyone has any advice, has done this before, or could supply me with an example krakend.json configuration, that would be much appreciated.
My current configuration is below:
{
    "version": 2,
    "extra_config": {},
    "timeout": "3000ms",
    "cache_ttl": "300s",
    "output_encoding": "json",
    "name": "KrakenGateway",
    "port": 8080,
    "endpoints": [
        {
            "endpoint": "/foohubname",
            "backend": [
                {
                    "url_pattern": "/ws",
                    "disable_host_sanitize": true,
                    "host": [ "ws://signalrservicename:80/foohubname" ]
                }
            ],
            "extra_config": {
                "github.com/devopsfaith/krakend-websocket": {
                    "headers_to_pass": ["Cookie"],
                    "connect_event": true,
                    "disconnect_event": true
                }
            }
        }
    ]
}
Have a great day,
Matt

The WebSockets functionality is an enterprise function: https://www.krakend.io/docs/enterprise/websockets/
Placing an Enterprise-only configuration in a community edition binary will have no effect.

Ended up using the Emissary Gateway for now; will re-evaluate speeds etc. when I get closer to production and testing.

Related

krakend api gateway panic: "X in new path conflicts with existing wildcard Y in existing prefix Z"

I have two web services, and I'd like to manage both endpoints, separated by prefix, using the KrakenD API gateway.
Below is my configuration:
{
    "version": 2,
    "name": "My API Gateway",
    "port": 8080,
    "host": [],
    "endpoints": [
        {
            "endpoint": "/api/entity/{entityID}",
            "output_encoding": "no-op",
            "method": "POST",
            "backend": [
                {
                    "url_pattern": "/api/entity/{entityID}",
                    "encoding": "no-op",
                    "host": [
                        "http://987.654.32.1"
                    ]
                }
            ]
        },
        {
            "endpoint": "/api/entity/member/assign/{userID}",
            "output_encoding": "no-op",
            "method": "GET",
            "backend": [
                {
                    "url_pattern": "/api/entity/member/assign/{userID}",
                    "encoding": "no-op",
                    "host": [
                        "http://123.456.789.0"
                    ]
                }
            ]
        }
    ]
}
When I run it, this error occurs:
panic: 'member' in new path '/api/entity/member/assign/:userID' conflicts with existing wildcard ':entityID' in existing prefix '/api/entity/:entityID'
As far as I understand, it seems the {entityID} on the first endpoint is conflicting with /member/ in the second endpoint. Is this error expected behaviour or is there any problem with my configuration file?
This is a known limitation of the Gin library that KrakenD uses internally. You can reproduce exactly the same behavior directly in the library with the following Go code:
package main

import "github.com/gin-gonic/gin"

func main() {
    r := gin.New()
    r.GET("/ping", handler)
    r.GET("/ping/foo", handler)
    // panics at registration: the wildcard ':a' conflicts with the existing literal path '/ping/foo'
    r.GET("/ping/:a", handler)
    r.GET("/ping/:a/bar", handler)
}

func handler(c *gin.Context) {
    c.JSON(200, gin.H{
        "message": "pong",
    })
}
See the code in this issue.
The solution is to declare endpoint paths that are not colliding subsets of other endpoints. In your configuration the endpoint /api/entity/member/assign/{userID} is a subset of /api/entity/{entityID}.
Notice that {placeholders} are like using wildcards, so your first endpoint could be expressed in other systems like /api/entity/*, and therefore /api/entity/member/assign/{userID} is a positive match.
Any slight change in your configuration where the wildcard does not collide will fix this situation. As an example, the following two endpoints would work for you:
/api/entity/find/{entityID}
/api/entity/member/assign/{userID}
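As an illustrative sketch (the hosts and encodings are placeholders carried over from the question), the endpoints section would become:

{
    "endpoints": [
        {
            "endpoint": "/api/entity/find/{entityID}",
            "method": "POST",
            "backend": [
                {
                    "url_pattern": "/api/entity/{entityID}",
                    "host": [ "http://987.654.32.1" ]
                }
            ]
        },
        {
            "endpoint": "/api/entity/member/assign/{userID}",
            "method": "GET",
            "backend": [
                {
                    "url_pattern": "/api/entity/member/assign/{userID}",
                    "host": [ "http://123.456.789.0" ]
                }
            ]
        }
    ]
}

Note that the public endpoint path and the backend url_pattern do not have to match, so only the gateway-facing route needs the extra /find/ segment; the backends can keep their original paths.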
Thanks to #alo for explaining this issue.
I came across the same issue, as I have KrakenD endpoints in the following manner:
GET: /v1/projects/{project_uuid}
GET: /v1/projects/{project_key}/portfolio
But surprisingly, tricking KrakenD like this worked fine:
GET: /v1/projects/{key} // the Swagger docs note this key is to be supplied as a UUID
GET: /v1/projects/{key}/portfolio // the Swagger docs note this key is to be supplied as a string
This works because both paths now use the same wildcard name in the same position, so Gin no longer sees a conflict. For now these endpoints trigger my backend client as expected; hopefully this annoying limitation gets fixed.
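A minimal sketch of that workaround in krakend.json (the backend host is a placeholder, not from the original answer):

{
    "endpoints": [
        {
            "endpoint": "/v1/projects/{key}",
            "method": "GET",
            "backend": [ { "url_pattern": "/v1/projects/{key}", "host": [ "http://projects-service" ] } ]
        },
        {
            "endpoint": "/v1/projects/{key}/portfolio",
            "method": "GET",
            "backend": [ { "url_pattern": "/v1/projects/{key}/portfolio", "host": [ "http://projects-service" ] } ]
        }
    ]
}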

How to update postgres uri value in cf vcaps env

I have a Postgres service bound to my Spring application in CF (Cloud Foundry).
The VCAP_SERVICES env available is as follows:
"postgresql": [
{
"binding_name": null,
"credentials": {
"dbname": "JDusZ6EpE1ixbTKS",
"end_points": [
{
"host": "10.11.241.2",
"network_id": "SF",
"port": "46371"
}
],
"hostname": "10.11.241.2",
"password": "SuVzOf2m5L5oNYSG",
"port": "46371",
"ports": {
"5432/tcp": "46371"
},
"uri": "postgres://eyv6avf27X9Z55Gx:SuVzOf2m5L5oNYSG#10.11.241.2:46371/JDusZ6EpE1ixbTKS",
"username": "eyv6avf27X9Z55Gx"
},
"instance_name": "mypostgres",
"label": "postgresql",
"name": "mypostgres",
"plan": "v9.6-dev",
"provider": null,
"syslog_drain_url": null,
"tags": [
"postgresql",
"relational"
],
"volume_mounts": []
}
],
I need to modify the value of the uri to also include the current schema. I guess it needs to be:
"uri": "postgres://eyv6avf27X9Z55Gx:SuVzOf2m5L5oNYSG#10.11.241.2:46371/JDusZ6EpE1ixbTKS?currentSchema=mycurrentschema"
Is this possible to do? And if not, what is the best practice for assigning the current schema for a Spring app?
Thanks in advance.
You have a few options.
You can talk to your service provider, the operator of the service broker from which you are obtaining your service. The service broker is the one that sets the credentials, so you could ask them to include the schema by default.
You can create a service key with cf create-service-key. A service key is like a service binding, but free-floating, so it's not attached to your app; it exists as long as the service key exists. You can then create a user-provided service with cf cups and manually set whatever credentials or URI you require for your app, as sketched below. The downside of this approach is that you have to do a little more work to manage the service information.
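A rough sketch of that flow with the cf CLI, using the instance name from the question; the key name, user-provided service name, and app name are made up for illustration:

cf create-service-key mypostgres mypostgres-key          # create a free-floating binding
cf service-key mypostgres mypostgres-key                 # view the generated credentials
cf cups mypostgres-custom -p '{"uri":"postgres://user:password@host:port/dbname?currentSchema=mycurrentschema"}'
cf bind-service my-spring-app mypostgres-custom          # bind the user-provided service instead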
You can read the current uri into your application and modify it before creating your DataSource. This is not particularly easy if you are using Spring Cloud Connectors because it handles creating the DataSource for you. I would not recommend using SCC.
Instead, you can do this with the Spring Boot CloudFoundryVcapEnvironmentPostProcessor and property placeholders. See the referenced Javadoc for how that works.
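As an unofficial sketch of that placeholder approach: the post-processor flattens VCAP_SERVICES into vcap.services.* properties, so an application.properties could reference the bound credentials like this (the service name mypostgres is taken from the question):

# credentials exposed by Spring Boot from VCAP_SERVICES
spring.datasource.username=${vcap.services.mypostgres.credentials.username}
spring.datasource.password=${vcap.services.mypostgres.credentials.password}
# the postgres:// uri is not a JDBC URL; build a jdbc:postgresql://host:port/db?currentSchema=...
# URL yourself (e.g. in a @Configuration class) from the individual credential properties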
The other option is to use java-cfenv. That provides you with an easy way to obtain credentials information, like the URL, and use that to create your own DataSource, which allows you to make slight modifications to things like the URL, if necessary.
Hope that helps!

How do I post a test Kinesis event from Postman to a local Lambda function running on serverless?

Sorry, wasn't sure how to make the question itself brief enough...
I can post data from Postman to my local Lambda function. The issue is that when running locally, I have to use this line of code...
event = JSON.parse(event.body);
...so that I can do this...
event.Records.forEach(function(record)
{
    // do some stuff
});
But when I deploy the function to AWS, parsing event.body is unnecessary. In fact it throws an error.
I was assuming that there is something different about the JSON (or other aspects of the request) that I'm posting from Postman to my local app when compared to what Kinesis actually sends. But the JSON blob I'm posting locally was logged directly from Lambda on AWS to CloudWatch.
I'm missing something.
TBH, this only matters because having to comment out that line as a step in the deployment process is annoying and error-prone.
Here's the JSON (names have been changed to protect the innocent):
{
    "Records": [
        {
            "kinesis": {
                "kinesisSchemaVersion": "1.0",
                "partitionKey": "Thursday, 11 April 2019",
                "sequenceNumber": "49594660145138471912435706107712688932829223550684495922",
                "data": "some base 64 stuff",
                "approximateArrivalTimestamp": 1555045874.83
            },
            "eventSource": "aws:kinesis",
            "eventVersion": "1.0",
            "eventID": "shardId-000000000003:1234123412341234123412341234123412341234123412341234",
            "eventName": "aws:kinesis:record",
            "invokeIdentityArn": "arn:aws:iam::1234123412341234:role/lambda-kinesis-role",
            "awsRegion": "us-west-2",
            "eventSourceARN": "arn:aws:kinesis:us-west-2:1234123412341234:stream/front-end-requests"
        }
    ]
}
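One way to avoid commenting the parse line in and out (a sketch, not from the original post) is to normalize the event shape at the top of the handler: a POST through Postman arrives wrapped in event.body as a JSON string, while a real Kinesis invocation delivers event.Records directly:

// sketch: handler that accepts both a raw Kinesis event and a Postman POST
exports.handler = async (event) => {
    // if Records is missing, assume an HTTP-style invocation with a JSON string body
    const payload = event.Records ? event : JSON.parse(event.body);
    payload.Records.forEach(function (record) {
        // do some stuff, e.g. decode record.kinesis.data (base64)
    });
};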

Amazon Alexa Device Discovery for Smart Home API with Lambda Failing

I have set up an Alexa Smart Home Skill: all settings are done, the OAuth2 process is done, and the skill is enabled on my Amazon Echo device. The Lambda function is set up and linked to the skill. When I "Discover Devices", I can see the payload hit my Lambda function in the log. I am literally returning, via the context.succeed() method, the following JSON with a test appliance. However, Echo tells me that it fails to find any devices.
{
    "header": {
        "messageId": "42e0bf9c-18e2-424f-bb11-f8a12df1a79e",
        "name": "DiscoverAppliancesResponse",
        "namespace": "Alexa.ConnectedHome.Discovery",
        "payloadVersion": "2"
    },
    "payload": {
        "discoveredAppliances": [
            {
                "actions": [
                    "incrementPercentage",
                    "decrementPercentage",
                    "setPercentage",
                    "turnOn",
                    "turnOff"
                ],
                "applianceId": "0d6884ab-030e-8ff4-ffffaa15c06e0453",
                "friendlyDescription": "Study Light connected to Loxone Kit",
                "friendlyName": "Study Light",
                "isReachable": true,
                "manufacturerName": "Loxone",
                "modelName": "Spot"
            }
        ]
    }
}
Does the above payload look correct?
According to https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/smart-home-skill-api-reference#discovery-messages the version attribute is required. Your response seems to be missing that attribute.
In my (very short) experience with this, even the smallest mistake in the response would generate a silent error like the one you are experiencing.
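For illustration, here is the appliance object from the question with a version attribute added; the "1.0" value is an assumption for the sketch, not from the original response:

{
    "actions": [
        "incrementPercentage",
        "decrementPercentage",
        "setPercentage",
        "turnOn",
        "turnOff"
    ],
    "applianceId": "0d6884ab-030e-8ff4-ffffaa15c06e0453",
    "friendlyDescription": "Study Light connected to Loxone Kit",
    "friendlyName": "Study Light",
    "isReachable": true,
    "manufacturerName": "Loxone",
    "modelName": "Spot",
    "version": "1.0"
}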
I had the same problem. If you are creating discovery for an "Entertainment Device", make sure you have wrapped the output in an 'event' key for context.succeed:
var payload = {
    endpoints: [
        {
            "endpointId": "My-id",
            "manufacturerName": "Manufacturer",
            "friendlyName": "Living room TV",
            "description": "65in LED TV from Demo AV Company",
            "displayCategories": [],
            "cookie": {
                "data": "e.g. ip address"
            },
            "capabilities": [
                {
                    "interface": "Alexa.Speaker",
                    "version": "1.0",
                    "type": "AlexaInterface"
                }
            ]
        }
    ]
};
var header = request.directive.header;
header.name = "Discover.Response";
context.succeed({
    event: {
        header: header,
        payload: payload
    }
});
This is never mentioned in the sample code, and an incorrect example is given at https://developer.amazon.com/public/solutions/alexa/alexa-skills-kit/docs/steps-to-create-a-smart-home-skill. However, the response body provided there does include the "event" key.
Recreating the Lambda function helped me fix the issue. I also checked "Enable trigger" while creating it, though I'm not sure if that matters. After that, the device provided by my skill was found successfully.
Edit: this answer was wrong. The only useful information was this:
This context.fail syntax is actually deprecated. Look up the Lambda context object properties, it should look more like "callback(null, resultObj)" now.
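A minimal sketch of that callback style for a Node.js handler; the header/payload construction is assumed to follow the earlier answer:

exports.handler = function (event, context, callback) {
    // build the response as in the answer above
    var header = event.directive.header;
    header.name = "Discover.Response";
    var payload = { endpoints: [] }; // populated with real endpoints in practice
    // callback(error, result) replaces context.succeed/context.fail
    callback(null, { event: { header: header, payload: payload } });
};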
Did you include the return statement in your function?
return {
    "header": header,
    "payload": payload
}
It was missing in the example and after adding it, I was able to 'discover' my device.

I couldn't connect to a GCE Windows instance from Remmina RDP

I use the GCE v1 REST API to launch instances; I rarely use the Google Developers Console. I created a Windows VM instance through the REST API and passed the initial Windows username and password in the metadata property. The Windows VM was created successfully, and I am also able to see those credentials, which I sent while creating the VM, in the response. But I couldn't connect to the VM using that username and password. I read the doc about how to reset the password from the Developers Console, and that works fine. But we would like to use REST APIs for everything, i.e. to create/manage GCE resources. So can anyone help fix this issue?
The image I used to launch the VM is "windows-server-2012-r2-dc-v20150511".
"metadata": {
"items": [
{
"key": "gce-initial-windows-user",
"value": "administrator"
},
{
"key": "gce-initial-windows-password",
"value": "twxsFL3U-/,*"
}
]
}
Note: I created many VMs through the REST API, and all instances have the same issue; the credentials didn't work. I am able to reset them from the Developers Console, but that will not fix my problem, because we have our own system to launch VMs and other services, and for that I'm building a connector. Here is the sample request I send from a Node.js script.
Request:
options: {
    "host": "www.googleapis.com",
    "path": "/compute/v1/projects/project-id/zones/us-central1-f/instances",
    "method": "POST",
    "headers": {
        "Authorization": "Bearer ya29.lQGsX8hwdWKaDDwOFnDIZB49eir-c2TUBqYpaVvir7C430Quy8kIWsL4rXv7qjSVQZJKK5e1BdxNug",
        "Content-Type": "application/json charset=utf-8"
    }
}
body: {
    "name": "rin2qvxkz-e",
    "zone": "https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-f",
    "machineType": "https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-f/machineTypes/n1-standard-2",
    "metadata": {
        "items": [
            {
                "key": "gce-initial-windows-user",
                "value": "administrator"
            },
            {
                "key": "gce-initial-windows-password",
                "value": "%1zuV27$.:?*"
            }
        ]
    },
    "tags": {
        "items": [
            "default"
        ]
    },
    "disks": [
        {
            "type": "PERSISTENT",
            "boot": true,
            "mode": "READ_WRITE",
            "deviceName": "rin2qvxkz-e",
            "autoDelete": true,
            "initializeParams": {
                "sourceImage": "https://www.googleapis.com/compute/v1/projects/windows-cloud/global/images/windows-server-2012-r2-dc-v20150511",
                "diskType": "https://www.googleapis.com/compute/v1/projects/project-id/zones/us-central1-f/diskTypes/pd-standard"
            }
        }
    ],
    "canIpForward": false,
    "networkInterfaces": [
        {
            "network": "https://www.googleapis.com/compute/v1/projects/project-id/global/networks/default",
            "accessConfigs": [
                {
                    "name": "External NAT",
                    "type": "ONE_TO_ONE_NAT"
                }
            ]
        }
    ],
    "description": "rin2qvxkz-e",
    "scheduling": {
        "preemptible": false,
        "onHostMaintenance": "MIGRATE",
        "automaticRestart": true
    }
}
Thanks.
You are using a new Windows image "windows-server-2012-r2-dc-v20150511" with an updated GCEAgent that doesn't look at the gce-initial-windows-user/gce-initial-windows-password instance metadata keys which were used by the old authentication scheme.
Here are explanations of how the new authentication works, from the "windows-server-2012-r2-dc-v20150511" image onwards.
Please note that the initial Windows authentication and GCE API v1 are two separate topics and GCE API v1 has not changed as part of the authentication update.
The earlier answer didn't really explain when this changed. I did more research and found a note in the change log for Google Windows Images.
Metadata items gce-initial-windows-user and gce-initial-windows-password will no longer work for images v20150511 and later
https://cloud.google.com/compute/docs/release-notes-archive#february_2015
June 03, 2015
Updated Windows authentication process. Windows images v20150511 and later will use the new scheme by default. gcloud will now generate a random password for Windows login; it is no longer possible to manually set a Windows password through gcloud, but you can set a custom password in the instance.
Here are some links that detail how to add users to Windows images now.
You can use the gcloud command line tool:
https://cloud.google.com/sdk/gcloud/reference/compute/reset-windows-password
gcloud compute reset-windows-password INSTANCE_NAME [--user=USER] [--zone=ZONE] [GCLOUD_WIDE_FLAG …]
You can call the API; they give Go and Python examples.
They also detail a step-by-step manual process, in case you want more details:
https://cloud.google.com/compute/docs/instances/windows/automate-pw-generation
