DCOS/Mesos doesn't unreserve resources on framework delete

Hi, after deleting a framework (e.g. Cassandra or Kafka) the resources for it are still reserved. This makes it impossible for me to install another framework because of the lack of free resources.
Using this to check:
curl master.mesos/mesos/slaves
....
"reserved_resources": {
"cassandra-role": {
"disk": 10240.0,
"mem": 5376.0,
"gpus": 0.0,
"cpus": 1.5,
"ports": "[7000-7001, 7199-7199, 9001-9001, 9042-9042, 9160-9160]"
}
},
"unreserved_resources": {
"disk": 32503.0,
"mem": 567.0,
"gpus": 0.0,
"cpus": 0.5,
"ports": "[1025-2180, 2182-3887, 3889-5049, 5052-6999, 7002-7198, 7200-8079, 8082-8180, 8182-9000, 9002-9041, 9043-9159, 9161-32000]"
}
How can I free these resources, and is it normal that Mesos doesn't unreserve them when a framework is deleted?

You can use Janitor, as described in https://dcos.io/docs/1.8/usage/managing-services/uninstall/
The deletion of a framework and the unreservation of its resources are two different things.
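For reference, that page boils down to running the janitor script from the leading master. A sketch for the Cassandra case (the flag values are the defaults from that doc; adjust the role, principal, and ZooKeeper node for your service):

    dcos node ssh --master-proxy --leader
    docker run mesosphere/janitor /janitor.py -r cassandra-role -p cassandra-principal -z dcos-service-cassandra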

Related

SAP Fiori Launchpad on Cloud Foundry - Role Configuration Issues

We have a range of apps deployed to our Fiori Launchpad (via an mta file) on Cloud Foundry.
I came across this blog that describes setting up role access on an app by app basis.
Configuring Roles – SAP Fiori Launchpad Cloudfoundry | SAP Blogs.
First, I set up approuter/xs-app.json as follows. Note this has a single config_admin scope, as opposed to the two (approver and user) in the blog. The reason is that we only need a single configurable role at the moment, so I'm assuming we only need a single scope.
Does the snippet below look correct? I've used "srv_api" as the destination from the blog, but I'm not sure if it needs to be something else.
{
  "authenticationMethod": "route",
  "welcomeFile": "/cp.portal",
  "routes": [
    {
      "source": "^/catalog(.*)$",
      "target": "/catalog$1",
      "destination": "srv_api",
      "authenticationType": "xsuaa",
      "scope": {
        "GET": ["$XSAPPNAME.config_admin"],
        "PATCH": ["$XSAPPNAME.config_admin"],
        "POST": ["$XSAPPNAME.config_admin"],
        "PUT": ["$XSAPPNAME.config_admin"],
        "DELETE": ["$XSAPPNAME.config_admin"],
        "default": ["$XSAPPNAME.config_admin"]
      }
    }
  ],
  "logout": {
    "logoutEndpoint": "/do/logout"
  }
}
Next up, xs-security.json in the project root.
{
  "xsappname": "demo",
  "tenant-mode": "dedicated",
  "description": "Security profile of called application",
  "scopes": [
    {
      "name": "uaa.user",
      "description": "UAA"
    },
    {
      "name": "$XSAPPNAME.config_admin",
      "description": "UAA configuration admin"
    }
  ],
  "role-templates": [
    {
      "name": "Token_Exchange",
      "description": "UAA",
      "scope-references": ["uaa.user"]
    },
    {
      "name": "ADMIN_USER",
      "description": "UAA ADMIN_USER",
      "scope-references": ["uaa.config_admin"]
    }
  ]
}
... and finally the manifest.json of the app I would like to apply the role to:
"sap.platform.cf": { "oAuthScopes": ["$XSAPPNAME.config_admin"] }
The app exists in a Group containing only that app.
When deployed to SAP Cloud Foundry, the Group and app are hidden. Fine, I thought, it just needs the role configured on the BTP side?
In BTP, I set up the role collection with my user and the two roles, ADMIN_USER and Token_Exchange, which were deployed correctly to BTP in the previous step.
However, the app and its Catalog are still hidden from view on the Fiori Launchpad. The only apps that do appear are the ones without the "sap.platform.cf" manifest entry.
Am I approaching this the correct way? Have I missed something?
Or do I need to set up two separate scopes, as in the guide, and include the relevant scope in each and every app?
*Note - I've tried setting up the user without the Token_Exchange role, with the same result.
The answer is a typo in xs-security.json. The ADMIN_USER scope reference should be:
"scope-references": ["$XSAPPNAME.config_admin"]

Signalr and Websockets on KrakenD API Gateway

I have had trouble implementing SignalR microservices behind a KrakenD API Gateway.
I presume it is possible, as I have had it working with both an NGINX load balancer and an Emissary API Gateway.
KrakenD, to my current understanding, is a lot faster than both of those options, so it should be better at handling large amounts of real-time data.
If anyone has any advice, has done this before, or could supply me with an example krakend.json configuration, that would be much appreciated.
My current one is below:
{
  "version": 2,
  "extra_config": {},
  "timeout": "3000ms",
  "cache_ttl": "300s",
  "output_encoding": "json",
  "name": "KrakenGateway",
  "port": 8080,
  "endpoints": [
    {
      "endpoint": "/foohubname",
      "backend": [
        {
          "url_pattern": "/ws",
          "disable_host_sanitize": true,
          "host": [ "ws://signalrservicename:80/foohubname" ]
        }
      ],
      "extra_config": {
        "github.com/devopsfaith/krakend-websocket": {
          "headers_to_pass": ["Cookie"],
          "connect_event": true,
          "disconnect_event": true
        }
      }
    }
  ]
}
Have a great day,
Matt
The WebSockets functionality is an enterprise feature: https://www.krakend.io/docs/enterprise/websockets/
Placing an Enterprise-only configuration in a community edition binary won't have any effect.
Ended up using the Emissary Gateway for now; will re-evaluate speeds etc. when I get closer to production and testing.

WebFlux WebClient performance doesn't touch maximum of CPU

I'm writing a network-intensive WebFlux application.
When it receives a request, the application calls out to an external server, receives the response, and then replies to the original requester.
I'm talking about the WebFlux application, and I'm using WebClient for the call to the external server.
The application's performance is not satisfactory.
I would expect it to saturate the CPU: maximum TPS at maximum CPU.
But it shows low TPS, and the CPU sits at just 30 or 40%.
Why does it not use more CPU to achieve more TPS, even though it has room to execute more requests?
By comparison, a task with no external call (no WebClient) shows full TPS at maximum CPU usage.
====
Sample codes : https://github.com/mouse500/perfwebf
perfwebf
sample project for WebClient performance
/workloadwexcall : workload using an external call
/workloadwoexcall : workload using only a CPU job (but with a 1ms delay)
The external call is implemented with a simple Node server inside.
The project includes everything. You can build the Dockerfile and run it with Docker, then prepare JMeter or something similar.
test1: call the /workloadwexcall API with more than 200 threads => shows a 30~40% CPU level at the perfwebf server
test2: call the /workloadwoexcall API with more than 200 threads => shows an almost 100% CPU level at the perfwebf server
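For context, here is a minimal sketch of what a WebClient-based handler like /workloadwexcall could look like (the class name, backend port, and path are assumptions, not the actual perfwebf sources):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Mono;

@RestController
public class WorkloadController {
    // One shared, reusable client; the backend address is an assumed placeholder
    private final WebClient client = WebClient.create("http://localhost:3000");

    @GetMapping("/workloadwexcall")
    public Mono<String> withExternalCall() {
        // Non-blocking: the epoll event-loop thread is released while the
        // external call is in flight, which is why those threads mostly sit
        // in epollWait even under load.
        return client.get().uri("/ext").retrieve().bodyToMono(String.class);
    }
}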
======
Observations so far:
I ran the test on AWS EC2 (8 cores, 16 GB memory).
I think the external server is simple and powerful enough to respond.
During test1, a high number of the server's threads wait at:
{
  "threadName": "reactor-http-epoll-3",
  "threadId": 20,
  "blockedTime": -1,
  "blockedCount": 8,
  "waitedTime": -1,
  "waitedCount": 0,
  "lockName": null,
  "lockOwnerId": -1,
  "lockOwnerName": null,
  "inNative": true,
  "suspended": false,
  "threadState": "RUNNABLE",
  "stackTrace": [
    {
      "methodName": "epollWait",
      "fileName": "Native.java",
      "lineNumber": -2,
      "className": "io.netty.channel.epoll.Native",
      "nativeMethod": true
    },
    {
      "methodName": "epollWait",
      "fileName": "Native.java",
      "lineNumber": 148,
      "className": "io.netty.channel.epoll.Native",
      "nativeMethod": false
    },
    {
      "methodName": "epollWait",
      "fileName": "Native.java",
      "lineNumber": 141,
      "className": "io.netty.channel.epoll.Native",
      "nativeMethod": false
    },
    {
      "methodName": "epollWaitNoTimerChange",
      "fileName": "EpollEventLoop.java",
      "lineNumber": 290,
      "className": "io.netty.channel.epoll.EpollEventLoop",
      "nativeMethod": false
    },
    {
      "methodName": "run",
      "fileName": "EpollEventLoop.java",
      "lineNumber": 347,
      "className": "io.netty.channel.epoll.EpollEventLoop",
      "nativeMethod": false
    },
    {
======
I have no idea what the cause is:
Is netty epoll not coping with the heavy load?
Is the Docker networking mechanism the problem? (I also tested without Docker, with the same result.)
Is the Linux kernel not coping with the heavy load?
Does the AWS EC2 instance have low network bandwidth?
The question is: why does it not use more CPU to get more TPS, even though it has room to execute more requests?
Hoping to find a solution for this...
This question was cross-posted on Github:
https://github.com/netty/netty/issues/11492
It seems you are satisfied with the answers there.
Nitesh gave a good clue about the situation.
I changed the backend server from the simple Node app to a simple WebFlux app.
The single core of Node was the bottleneck; with the WebFlux backend, it shows maximum TPS.
Now I think this sample project has no strange point.
Thank you all.
--
By the way, now I need to go back to the original problem in my company project: its low performance.
It turned out that WebClient was not the issue.
Thank you all.

Set PrivilegeDepth with Microsoft CDS Web API

I'm trying to create an application user, along with its Security Role, for my Common Data Service environment using only the Web API. I've managed to create the User and the Role, and to associate some Privileges with the Role. The only thing I can't do is set the PrivilegeDepth of the RolePrivilege association.
This is the request payload I'm using to create the role with a few privileges:
{
  "businessunitid@odata.bind": "/businessunits(6efad0b7-160b-eb11-a812-000d3ab2a6be)",
  "name": "Security Role Test",
  "iscustomizable": {
    "Value": true,
    "CanBeChanged": true,
    "ManagedPropertyLogicalName": "iscustomizableanddeletable"
  },
  "canbedeleted": {
    "Value": true,
    "CanBeChanged": true,
    "ManagedPropertyLogicalName": "canbedeleted"
  },
  "roleprivileges_association@odata.bind": [
    "/privileges(2493b394-f9d7-4604-a6cb-13e1f240450d)",
    "/privileges(707e9700-19ed-4cba-be06-9d7f6e845383)",
    "/privileges(e62439f6-3666-4c0a-a732-bde205d8e938)",
    "/privileges(e3f45b8e-4872-4bb5-8b84-01ee8f9c9da1)",
    "/privileges(f36ff7e9-72b9-4882-afb6-f947de984f72)",
    "/privileges(886b280c-6396-4d56-a0a3-2c1b0a50ceb0)"
  ]
}
The RolePrivileges are all created with the lowest depth (User). Does anyone know how to set different depths?
Also, is there a better way to assign privileges to the role? Like, upload an XML with the desired privileges to an endpoint which associates it with the role? And is there a better way to specify the privileges without having to know their GUIDs?
I would really appreciate it if you could help me with this. Thanks!
This should be the payload for setting the depth (user, local, etc.). Make sure to test this; I didn't get a chance to test it myself.
"roleprivileges_association#odata.bind": [
{
"privilegeid#odata.bind" : "/privileges(2493b394-f9d7-4604-a6cb-13e1f240450d)",
"depth" : 1
},
]
Regarding using dynamic GUID values instead of hard-coding them: just make another service call to pull all the privileges and iterate over them, as sketched below.
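For example, something along these lines should return the privilege names and IDs to iterate over (untested; the column names are my assumption based on standard Web API conventions):

GET https://org12345.crm4.dynamics.com/api/data/v9.0/privileges?$select=name,privilegeid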
So I found the solution for setting the privilege depth. There's an action for that: AddPrivilegesRole. The Depth values map to: 0 = Basic (User), 1 = Local (Business Unit), 2 = Deep (Parent: Child Business Units), 3 = Global (Organization).
Example:
POST https://org12345.crm4.dynamics.com/api/data/v9.0/roles(1b3df93a-070f-eb11-a813-000d3a666701)/Microsoft.Dynamics.CRM.AddPrivilegesRole
{
  "Privileges": [
    {
      "Depth": "0",
      "PrivilegeId": "886b280c-6396-4d56-a0a3-2c1b0a50ceb0",
      "BusinessUnitId": "6efad0b7-160b-eb11-a812-000d3ab2a6be"
    },
    {
      "Depth": "1",
      "PrivilegeId": "7863e80f-0ab2-4d67-a641-37d9f342c7e3",
      "BusinessUnitId": "6efad0b7-160b-eb11-a812-000d3ab2a6be"
    },
    {
      "Depth": "2",
      "PrivilegeId": "d26fe964-230b-42dd-ad93-5cc879de411e",
      "BusinessUnitId": "6efad0b7-160b-eb11-a812-000d3ab2a6be"
    },
    {
      "Depth": "3",
      "PrivilegeId": "ca6c7690-c935-46b3-bfd2-abb306c2acc0",
      "BusinessUnitId": "6efad0b7-160b-eb11-a812-000d3ab2a6be"
    }
  ]
}

Azure Stream Analytics tools for Visual Studio: Error when executing aggregated queries - "Object reference not set to an instance of an object"

We have followed the documentation Use Azure Stream Analytics tools for Visual Studio and made Visual Studio capable of creating an Azure Stream Analytics project.
We have created an Azure Stream Analytics (ASA) job with local input data in the following JSON format:
[
  {
    "Driver": "Lewis",
    "Speed": 0.8275496959686279,
    "Accelerator": 1,
    "Brakes": 0,
    "Steering": 0,
    "ErsBattery": 0.9398990273475647,
    "Gear": 0,
    "LapTimeMs": 107,
    "EventTime": "2016-04-01T00:00:00.107",
    "PosX": 1593.4061279296875,
    "PosY": 934.5406494140625,
    "PosZ": 101.44535064697266
  },
  {
    "Driver": "James",
    "Speed": 1.8795902729034424,
    "Accelerator": 1,
    "Brakes": 0,
    "Steering": 0,
    "ErsBattery": 0.9865896105766296,
    "Gear": 0,
    "LapTimeMs": 107,
    "EventTime": "2016-04-01T00:00:00.107",
    "PosX": 1593.3990478515625,
    "PosY": 934.5374145507812,
    "PosZ": 101.44610595703125
  },
  {
    "Driver": "Damon",
    "Speed": 0.4023849666118622,
    "Accelerator": 1,
    "Brakes": 0,
    "Steering": 0,
    "ErsBattery": 1,
    "Gear": 0,
    "LapTimeMs": 108,
    "EventTime": "2016-04-01T00:00:00.108",
    "PosX": 1593.411865234375,
    "PosY": 934.5435180664062,
    "PosZ": 101.44485473632812
  }
]
Then we are running the query locally as below:
SELECT Driver, AVG(Speed) AS AvgSpeed
FROM Input
GROUP BY Driver, TumblingWindow(second, 10)
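As a sanity check on the expected output: each driver contributes a single event to the 10-second tumbling window here, so the result should be one row per driver with AvgSpeed equal to that driver's single Speed value (Lewis ≈ 0.8275, James ≈ 1.8796, Damon ≈ 0.4024).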
After executing the query, we get the error message "Error: Object reference not set to an instance of an object."
Open your Visual Studio IDE in Administrator mode, then run your ASA query with local input; you will get the expected query result.
