I'm writing a network-intensive WebFlux application.
When it receives a request, the application calls an external server, receives a response, and then replies to the original requester.
It's a WebFlux application, and I'm using WebClient for the call to the external server.
The application's performance is not satisfactory.
I expected it to saturate the CPU: maximum TPS at maximum CPU usage.
But it shows low TPS, with CPU at only 30 or 40%.
Why doesn't it use more CPU to get more TPS, even though it has room to execute more requests?
For comparison, a task with no external call (no WebClient) shows full TPS at maximum CPU usage.
====
Sample codes : https://github.com/mouse500/perfwebf
perfwebf
sample project for WebClient performance
/workloadwexcall : workload using external call
/workloadwoexcall : workload using only a CPU job (but with a 1 ms delay)
The external call is implemented with a simple Node server inside the project.
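For reference, a minimal sketch of what such a simple Node backend could look like (hypothetical; the real one is inside the repo):
// hypothetical stand-in for the simple Node backend in the repo
const http = require('http');

http.createServer((req, res) => {
  // reply immediately with a tiny payload
  res.end('ok');
}).listen(3000);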
The project includes everything.
You can build the Dockerfile and run it with Docker,
then prepare JMeter or a similar load tool.
test1 : call the /workloadwexcall API with more than 200 threads => shows a 30~40% CPU level on the perfwebf server
test2 : call the /workloadwoexcall API with more than 200 threads => shows an almost 100% CPU level on the perfwebf server with maximum TPS
======
Observations so far:
I ran the test on AWS EC2 (8 cores, 16 GB memory).
I think the external server is simple and powerful enough to keep up.
During test1,
a high number of the server's threads wait at:
{
"threadName": "reactor-http-epoll-3",
"threadId": 20,
"blockedTime": -1,
"blockedCount": 8,
"waitedTime": -1,
"waitedCount": 0,
"lockName": null,
"lockOwnerId": -1,
"lockOwnerName": null,
"inNative": true,
"suspended": false,
"threadState": "RUNNABLE",
"stackTrace": [
{
"methodName": "epollWait",
"fileName": "Native.java",
"lineNumber": -2,
"className": "io.netty.channel.epoll.Native",
"nativeMethod": true
},
{
"methodName": "epollWait",
"fileName": "Native.java",
"lineNumber": 148,
"className": "io.netty.channel.epoll.Native",
"nativeMethod": false
},
{
"methodName": "epollWait",
"fileName": "Native.java",
"lineNumber": 141,
"className": "io.netty.channel.epoll.Native",
"nativeMethod": false
},
{
"methodName": "epollWaitNoTimerChange",
"fileName": "EpollEventLoop.java",
"lineNumber": 290,
"className": "io.netty.channel.epoll.EpollEventLoop",
"nativeMethod": false
},
{
"methodName": "run",
"fileName": "EpollEventLoop.java",
"lineNumber": 347,
"className": "io.netty.channel.epoll.EpollEventLoop",
"nativeMethod": false
},
...
======
I have no idea. Some guesses:
Is Netty's epoll not coping with the load?
Is the Docker network mechanism the limit? (I also tested without Docker; same result.)
Is the Linux kernel not coping?
Does the AWS EC2 instance have low network bandwidth?
The question is: why doesn't it use more CPU to get more TPS, even though it has room to execute more requests?
I hope to find a solution for this...
This question was cross-posted on GitHub:
https://github.com/netty/netty/issues/11492
It seems you are satisfied with the answers there.
Nitesh gave a good clue about the situation.
I have now changed the backend server from a simple Node app to a simple WebFlux app.
The single core of Node was the bottleneck;
when I changed the backend server to a WebFlux app, it showed maximum TPS.
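For anyone who wants to keep a Node backend instead: a single Node process runs JavaScript on one core, so a multi-process setup along these lines (a hypothetical sketch using Node's built-in cluster module, not code from the repo) could avoid that bottleneck:
// hypothetical multi-core version of a simple Node backend,
// using the built-in cluster module (Node 16+; not from the perfwebf repo)
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) {
  // fork one worker per CPU core so the backend is no longer
  // limited to a single core
  for (let i = 0; i < os.cpus().length; i++) {
    cluster.fork();
  }
} else {
  // all workers share the same listening socket
  http.createServer((req, res) => {
    res.end('ok');
  }).listen(3000);
}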
Now I think this sample project has nothing strange in it.
Thank you all.
--
By the way, now I need to go back to the original problem in my company project:
its low performance.
It turned out WebClient was not the issue.
Thank you all.
I have had trouble implementing SignalR microservices behind a KrakenD API gateway.
I presume it is possible, as I have had it working with both an NGINX load balancer and an Emissary API gateway.
KrakenD, to my current understanding, seems a lot faster than both, so it should be better at handling large amounts of real-time data.
If anyone has any advice, has done this before, or could supply me with an example krakend.json configuration, that would be much appreciated.
My current one is below:
{
"version": 2,
"extra_config": {},
"timeout": "3000ms",
"cache_ttl": "300s",
"output_encoding": "json",
"name": "KrakenGateway",
"port": 8080,
"endpoints": [
{
"endpoint": "/foohubname",
"backend": [
{
"url_pattern": "/ws",
"disable_host_sanitize": true,
"host": [ "ws://signalrservicename:80/foohubname" ]
}
],
"extra_config":{
"github.com/devopsfaith/krakend-websocket": {
"headers_to_pass":["Cookie"],
"connect_event": true,
"disconnect_event": true
}
}
}
]
}
Have a great day,
Matt
The WebSockets functionality is an enterprise function: https://www.krakend.io/docs/enterprise/websockets/
Placing an Enterprise-only configuration in a community edition binary won't have any effect.
Ended up using the Emissary gateway for now; will re-evaluate speeds etc. when I get closer to production and testing.
I'm working with the Google AutoML Natural Language API.
I already have a trained model.
In the beginning, when I trained the model, it was deployed and everything was fine. According to Google's new rules from 22 January 2019, models that have no prediction traffic for 7 or more weeks will be silently undeployed and archived.
At the moment, I can't predict any results with that trained model, because it was undeployed, probably after going unused for 7 weeks.
Also according to Google, if I'd like to continue using the model, I should redeploy it using the Deploy API.
https://cloud.google.com/natural-language/automl/docs/models#deploying_or_undeploying_a_model
I tried to redeploy the model and I get an error, so I can't make any predictions.
How can I deploy a model without errors, in order to begin predicting results?
These are the steps I took to try to solve this problem:
Run the deploy request with the right data.
Run the operations request.
Wait for the deployment to finish.
Run the operations request again.
Show the list of models that I have (it's UNDEPLOYED).
Step 1:
https://automl.googleapis.com/v1beta1/projects/{project}/locations/{location}/models/{Model ID}:deploy
{
"name": "projects/{project}/locations/{location}/operations/{Model ID}",
"metadata": {
"#type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
"createTime": {Time},
"updateTime": {Time}
}
}
Steps 2 and 4:
https://automl.googleapis.com/v1beta1/projects/{project}/locations/{location}/operations
"operations": [
{
"name": "projects/{project}/locations/{location}/operations/{Model ID}",
"metadata": {
"#type": "type.googleapis.com/google.cloud.automl.v1beta1.OperationMetadata",
"createTime": {Time},
"updateTime": "{Time},
"progressPercent": 100
},
"done": true,
"error": {
"code": 4
}
}
]
Step 5:
https://automl.googleapis.com/v1beta1/projects/{project}/locations/{location}/models
"model": [
{
"name": "projects/{project}/locations/{location}/models/{Model ID}",
"displayName": {name},
"datasetId": {dataset id},
"createTime": {time},
"deploymentState": "UNDEPLOYED",
"updateTime": {time},
"textClassificationModelMetadata": {}
}
]
So, I was expecting 0 errors in the operations request once the model finished the deployment process, but it shows an error with code 4. I searched for this error code 4 in the provided enum: https://github.com/googleapis/googleapis/blob/master/google/rpc/code.proto
For error code 4:
// The deadline expired before the operation could complete. For operations
// that change the state of the system, this error may be returned
// even if the operation has completed successfully. For example, a
// successful response from a server could have been delayed long
// enough for the deadline to expire.
//
// HTTP Mapping: 504 Gateway Timeout
DEADLINE_EXCEEDED = 4;
I don't know why this timeout is happening.
I already checked the quota limits, but everything is fine.
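For reference, steps 1-4 could be scripted roughly like this (a hypothetical sketch in Node 18+ for the global fetch; the {project}, {location} and {Model ID} placeholders stay as in the requests above, and the ACCESS_TOKEN handling is an assumption):
// hypothetical sketch of steps 1-4: request deployment, then poll the
// operations list until the operation is done
const base = 'https://automl.googleapis.com/v1beta1';
const parent = 'projects/{project}/locations/{location}'; // placeholders as above
const headers = {
  // ACCESS_TOKEN is an assumption, e.g. from `gcloud auth print-access-token`
  Authorization: 'Bearer ' + process.env.ACCESS_TOKEN,
  'Content-Type': 'application/json',
};

async function deployAndWait(modelId) {
  // step 1: run the deploy request
  await fetch(`${base}/${parent}/models/${modelId}:deploy`, {
    method: 'POST',
    headers,
  });

  // steps 2-4: poll the operations list until done
  for (;;) {
    const res = await fetch(`${base}/${parent}/operations`, { headers });
    const { operations } = await res.json();
    const op = operations.find((o) => o.name.endsWith(modelId));
    if (op && op.done) return op; // op.error carried code 4 in my case
    await new Promise((r) => setTimeout(r, 10000)); // wait 10 s between polls
  }
}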
This problem should be resolved now. Sorry about the inconvenience; your model should be deployable again. Please try it, and write back if you still see an issue.
I want to start by saying that this question is not about the Telegram bot API. I am trying to fetch images from a channel using the Telegram core API. The image is in the media property of the message object:
"_": "message",
"pFlags": {
"post": true
},
"flags": 17920,
"post": true,
"id": 11210,
"to_id": {
"_": "peerChannel",
"channel_id": 1171605754
},
"date": 1550556770,
"message": "",
"media": {
"_": "messageMediaPhoto",
"pFlags": {},
"flags": 1,
"photo": {
"_": "photo",
"pFlags": {},
"flags": 0,
"id": "6294134956242348146",
"access_hash": "11226369941418527484",
"date": 1550556770,
I am using the upload.getFile API to fetch the file. Example:
upload.getFile({
location: {
_: 'inputFileLocation',
id: '6294134956242348146',
access_hash: '11226369941418527484'
},
limit: 1000,
offset: 0
})
But the problem is that it throws the error RpcError: CODE#400 LIMIT_INVALID. Looking at https://core.telegram.org/api/files, it seems the limit value is invalid. I tried giving limit as
1024000 (1Kb)
20480000 (20Kb)
204800000 (200kb)
But it always returns the same error.
For anyone who is also frustrated with the docs: using, reading, and trying out different things will ultimately work for you. If possible, someone could take up the task of documenting this wonderful open-source software.
Coming to the answer: the location object shouldn't contain id or access_hash like other APIs; rather, it has its own parameters, as defined in the Telegram schema.
A message has a media property, and its photo has a sizes object. This contains 3 or more size options (thumbnail, preview, websize and so on). Choose the one you need and use its volume_id, local_id and secret properties. The working code will look something like this:
upload.getFile({
location: {
_: 'inputFileLocation', // this parameter will change for other files
volume_id: volumeId,
local_id: localId,
secret: secret
},
limit: 1024 * 1024,
offset: 0
}, {
isFileTransfer: true,
createClient: true
})
The following points should be noted:
The limit should be in bytes (not bits).
The offset will be 0. But if it's a big file, use offset and limit to download the file in parts and join them (see the sketch after this list).
Additional parameters such as isFileTransfer and createClient also exist. I haven't fully understood why they're needed; if I have time, I'll update this later.
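A hedged sketch of that parts-based download, reusing the upload.getFile call from above (the chunk.bytes field, the totalSize argument, and the Buffer joining are assumptions):
// hypothetical sketch: download a big file in fixed-size parts and join them;
// upload.getFile and the location object are the same as above
const PART = 1024 * 1024; // 1 MB per request, in bytes

async function downloadFile(location, totalSize) {
  const parts = [];
  for (let offset = 0; offset < totalSize; offset += PART) {
    // each call returns one chunk of up to `limit` bytes starting at `offset`
    const chunk = await upload.getFile(
      { location, limit: PART, offset },
      { isFileTransfer: true, createClient: true }
    );
    parts.push(chunk.bytes); // the `bytes` field name is an assumption
  }
  return Buffer.concat(parts); // join the parts in order
}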
Try using a library that's built on top of the original Telegram library. I'm using Airgram, a JS/TS library and a well-maintained repo.
I've been reading forums and trying Steam APIs. I'm searching for an API which provides all Steam games.
I found the API providing all Steam apps, and the Steam store API which provides information about each app (I'm looking for the type: 'game'). But with this approach I need to call the store API once for each Steam app, and the store API is limited to 200 calls every 5 minutes! Is this the only solution?
EDIT:
All Apps API : http://api.steampowered.com/ISteamApps/GetAppList/v0002/?key=STEAMKEY&format=json
App details API : http://store.steampowered.com/api/appdetails?appids={APP_ID}
There is no "Steam API all games and all their details in one go".
You use GetAppList to get all the Steam apps. Then you have to query each app with appdetails, which will take a long time.
GetAppList : http://api.steampowered.com/ISteamApps/GetAppList/v0002/?format=json
{
"applist": {
"apps": [
{"appid": 10, "name": "Counter-Strike"},
{"appid": 20, "name": "Team Fortress Classic"},
{"appid": 30, "name": "Day of Defeat"},
{"appid": 40, "name": "Deathmatch Classic"}
]
}
}
appdetails : http://store.steampowered.com/api/appdetails?appids=10
{
"10": {
"success": true,
"data": {
"type": "game",
"name": "Counter-Strike",
"steam_appid": 10,
"required_age": 0,
"is_free": false,
"detailed_description": "...",
"about_the_game": "...",
"short_description": "...",
"developers": ["Valve"],
"publishers": ["Valve"],
"EVEN_MORE_DATA": {}
}
}
}
There is a general API rate limit for each unique IP address of 200 requests in five minutes, which is one request every 1.5 seconds.
Another solution would be to use a third-party service such as SteamApis, which offers more options, but such services are inevitably bound to what Steam offers in their API.
A common method here is to cache the results.
So, for example, if you're using something like PHP to query the results, you would do something like json_encode and json_decode with an array/object to hold the last results.
You can get fancy depending on what you want, but basically you'll need to cache the results and then refresh the oldest entries.
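To make the idea concrete, here is a minimal sketch in Node rather than PHP (assuming Node 18+ for the global fetch; the cache file name and structure are made up):
// hypothetical Node sketch: cache appdetails results on disk and
// query at most one uncached app every 1.5 s
const fs = require('fs');

const CACHE_FILE = 'appdetails-cache.json'; // made-up file name
const cache = fs.existsSync(CACHE_FILE)
  ? JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8'))
  : {};

const sleep = (ms) => new Promise((r) => setTimeout(r, ms));

async function fetchAllGames() {
  const list = await (await fetch(
    'http://api.steampowered.com/ISteamApps/GetAppList/v0002/?format=json'
  )).json();

  for (const { appid } of list.applist.apps) {
    if (cache[appid]) continue; // already cached; skip
    const res = await (await fetch(
      `http://store.steampowered.com/api/appdetails?appids=${appid}`
    )).json();
    cache[appid] = res[appid]; // store whatever Steam returned
    fs.writeFileSync(CACHE_FILE, JSON.stringify(cache)); // persist after each call
    await sleep(1500); // one request per 1.5 s = 200 per 5 minutes
  }

  // keep only the entries that are actual games
  return Object.values(cache).filter(
    (e) => e && e.success && e.data && e.data.type === 'game'
  );
}
To refresh the oldest entries instead of skipping everything cached, you could store a fetchedAt timestamp next to each entry and re-query entries older than some threshold.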
I am trying to stop a processor which is in the running state, using the PUT method of the REST API /processors/{id}.
I am able to start the processor by changing the state in the component, as in "state": "RUNNING", and the runStatus in the aggregate snapshot, as in "runStatus": "Running".
Similarly, I tried to stop the processor by changing the state to STOPPED, but I get an error:
9204b68d-0159-1000-7d8f-720592b2a2dd is not stopped (409 Conflict and 400 Bad Request).
Please let me know how to stop the processor.
Thanks in advance.
You can stop a processor using the REST API.
Example:
I have a GetFile processor (ID: 9204b68d-0159-1000-7d8f-720592b2a2dd) in the UI.
REST API URL:
http://<host>:<port>/nifi-api/processors/9204b68d-0159-1000-7d8f-720592b2a2dd
Here is the JSON content I passed as a PUT request to stop the processor:
{
"status": {
"runStatus": "STOPPED"
},
"component": {
"state": "STOPPED",
"id": "9204b68d-0159-1000-7d8f-720592b2a2dd"
},
"id": "9204b68d-0159-1000-7d8f-720592b2a2dd",
"revision": {
"version": 10,
"clientId": "ab010dd6-0159-1000-615b-f095502a7ceb"
}
}
The revision and the status are the most important things when stopping a processor via the REST API; the version inside the revision must match the server's current value, otherwise NiFi returns the 409 Conflict you saw.
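Since the current revision has to be read from the server first, here is a hedged sketch of the two calls (in Node, assuming Node 18+ for the global fetch; <host> and <port> are placeholders, as above):
// hypothetical sketch: read the processor's current revision, then PUT
// the STOPPED state back together with that same revision
const base = 'http://<host>:<port>/nifi-api/processors';
const id = '9204b68d-0159-1000-7d8f-720592b2a2dd';

async function stopProcessor() {
  // GET returns the processor entity, including its current revision
  const entity = await (await fetch(`${base}/${id}`)).json();

  const body = {
    revision: entity.revision, // must match, or NiFi answers 409 Conflict
    id,
    status: { runStatus: 'STOPPED' },
    component: { id, state: 'STOPPED' },
  };

  const res = await fetch(`${base}/${id}`, {
    method: 'PUT',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(body),
  });
  return res.json();
}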
It works well for me. Try it,
and let me know if it doesn't work.