I am developing an Adaptor for the GSA using the Adaptor framework 4.0.
The issue I am having is that the GSA never calls the retriever method getDocContent().
I have set the host load to a maximum of 10.
The Adaptor VM itself has 6 virtual CPUs, and the number of worker threads for the adaptor has been set to 64:
server.maxWorkerThreads=64
It's unclear to me why the GSA does not call the getDocContent() method until I hit the Save button on the host load schedule section of the GSA.
Vinay,
It's going to depend on how you develop your connector and some basic infrastructure.
1) First, confirm that you have actually fed documents. You can check this under the Feeds section.
2) Second, if the documents show up in index diagnostics but your method is still not called, take one of the URLs and test it with a real-time manual fetch in crawl diagnostics.
3) Confirm that the port your adaptor is listening on is not blocked by a firewall; see the sketch below.
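For (3), a quick sanity check is a manual fetch of one of the fed URLs from a machine on the GSA's side of the network. The sketch below assumes the default adaptor port 5678 and /doc/ path; adaptor-host and 1001 are placeholders, so substitute your own values:

rem Fetch one fed document directly from the adaptor (adaptor-host and 1001 are placeholders)
curl -v http://adaptor-host:5678/doc/1001

If the connection is refused or times out, suspect the firewall; if the document comes back, the problem is on the GSA side.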
Greetings for the day!
I am currently facing challenges generating higher load using the JMeter distributed testing mechanism, which I have configured on AWS Windows machines.
My machines are throwing exceptions of the form:
Response code: Non HTTP response code: java.net.BindException
Response message: Non HTTP response message: Address already in use: connect
I am trying to work out the footprint of the slave machines so that I can confidently communicate how much load they can handle.
I am using Windows Server 2019 machines with 16 GB of RAM.
While searching the internet for answers, I found the link below, which suggests changing registry settings to make optimal use of the available ports.
https://www.baselogic.com/2011/11/23/solved-java-net-bindexception-address-use-connect-issue-windows/
I am looking for suggestions on how to run more threads smoothly (especially spikes, using the Free-Form Arrivals Thread Group) with the resources I currently have.
Please let me know if any further information is required from my end that would help find a better solution.
Many thanks for your support.
Regards,
Vijay
As per the Adjusting TCP Settings for Heavy Load on Windows article, you should:
create a DWORD value named TcpTimedWaitDelay with a value of 60 under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\TCPIP\Parameters
create a DWORD value named MaxUserPort with a value of 32768 under the same key
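From an elevated command prompt, the same change looks like this (a sketch; the values are the ones above, and a reboot is typically required before TCP parameters take effect):

rem Shorten TIME_WAIT and raise the ephemeral port ceiling (run as Administrator, then reboot)
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v TcpTimedWaitDelay /t REG_DWORD /d 60 /f
reg add HKLM\SYSTEM\CurrentControlSet\Services\Tcpip\Parameters /v MaxUserPort /t REG_DWORD /d 32768 /f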
Make sure to follow JMeter Best Practices
Make sure to monitor the networking and other performance counters of the Windows machines, using either the built-in Windows perfmon or the JMeter PerfMon Plugin.
I want to create a JMeter script to test an ASP.NET Core application. The application acts as a load balancer that distributes requests to other running ASP.NET Core apps.
I want to test the efficacy of the algorithm I used, so I am looking for a way to get the number of times a user or request was sent to each of the load-balanced applications.
Can I use JMeter for this? If not, what test can I do? I am using JMeter because it will load test the application.
JMeter doesn't know anything about the underlying architecture of your application, so on its own it can only give you the number of hits you made to the load balancer, via the Server Hits Per Second listener.
If you need the number of hits to each application behind the load balancer, you might want to use something like the JMeter PerfMon Plugin, which can either read values from a file or execute an arbitrary shell command on the remote host. If you can expose the hit count in a form the Server Agent can consume, JMeter can plot a chart for each backend server behind the load balancer.
It also makes sense to add a DNS Cache Manager to your test plan to avoid caching of resolved IP addresses at the JVM or OS level, since under certain circumstances JMeter may otherwise hit only one IP address out of several.
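To wire the PerfMon approach up, you start the Server Agent on each backend host and point the PerfMon Metrics Collector listener at it. A minimal sketch, assuming you have unpacked the Server Agent bundle on the backend machine (4444 is its default port):

rem Start the PerfMon Server Agent on a backend host, listening on the default port 4444
startAgent.bat --tcp-port 4444 --udp-port 4444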
I've encountered poor performance in my BizTalk application, which uses a SOAP/ASMX receive location web service hosted in IIS on the same server. The service only invokes one function on an Oracle DB (connected via the Oracle driver).
I've done load tests via SoapUI and stressed the DB a little with the PL/SQL Profiler in SQL Navigator, and it turned out that the average request time is 700 ms, the average DB query time is 15 ms, and the average orchestration completion time is 30 ms (via the BizTalk Admin Console). So a tremendous amount of time is being wasted somewhere in IIS, ASMX, or SOAP?
I've read this: Configuration Parameters that Affect Adapter Performance, and tweaked minFreeThreads and minFreeLocalRequestFreeThreads, but nothing really changed.
But as I understand it, that article describes send ports, whereas my problem is with a receive location, right?
I also read this article: BizTalk: Performance problems using the SOAP adapter
There is no such key on my machine as the registry key it mentions:
HKLM\SYSTEM\CurrentControlSet\Services\BTSSvc$BizTalkServerApplication\CLR Hosting
And how do I achieve Option 2 from that article?
Option 2: Look into process isolation – this would use a different instance of the .NET thread pool, executed in a separate address space from the BizTalk NT service.
Guide me please
Go to your receive host properties and change the message polling interval from the default 500 ms to 50 ms; that will give you a noticeable improvement. If you're using an orchestration on a separate host to process the service request and response, do the same on the orchestration host, but there reduce the orchestration polling interval. Doing this improves performance in low-latency scenarios, but it adds overhead on the SQL MessageBox, so tune it depending on your volume and needs.
Also try upgrading to WCF services.
I have a really simple setup: an Azure load balancer for HTTP(S) traffic, two application servers running Windows, and one database, which also contains the session data.
The goal is to be able to reboot or update the software on the servers without a single request being dropped. The problem is that the health probe tests every 5 seconds and needs to fail twice in a row, which means that when I kill an application server, many requests during those 10 seconds will time out. How can I avoid this?
I have already tried running the health probe on a different port and then denying all traffic to that port using the Windows firewall. The load balancer then thinks the application is down on that node and stops sending new traffic to it. However, the Azure LB does hash-based load balancing, so the traffic that was already going to the now-killed node keeps going there for a few seconds!
First of all, could you give us additional details: is your database load balanced as well? Are you performing reads and writes on this database, or only reads?
For your information, you can change the Azure Load Balancer distribution mode; please refer to this article for details: https://learn.microsoft.com/en-us/azure/load-balancer/load-balancer-distribution-mode
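Changing the distribution mode can also be scripted. A sketch with the Azure CLI, using placeholder resource names (my-rg, my-lb, my-http-rule):

rem Switch the load balancing rule to source-IP affinity (placeholder names)
az network lb rule update --resource-group my-rg --lb-name my-lb --name my-http-rule --load-distribution SourceIP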
I would suggest disabling the server you are updating at the load balancer level, then waiting a couple of minutes (depending on your application) before starting your updates. This should "purge" the endpoint. When the update is done, update your load balancer again and put the server back in.
The cloud concept here is infrastructure as code: this can easily be scripted and included in your deployment/update procedure, for example as sketched below.
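For example, with the Azure CLI you could drain a server by pulling its NIC out of the backend pool before updating it, and re-add it afterwards. A sketch with placeholder resource names (my-rg, app-vm-1-nic, my-lb, my-backend-pool):

rem Take the server out of the backend pool before updating it
az network nic ip-config address-pool remove --resource-group my-rg --nic-name app-vm-1-nic --ip-config-name ipconfig1 --lb-name my-lb --address-pool my-backend-pool
rem ...wait for in-flight traffic to drain, update the server, then put it back...
az network nic ip-config address-pool add --resource-group my-rg --nic-name app-vm-1-nic --ip-config-name ipconfig1 --lb-name my-lb --address-pool my-backend-pool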
Another solution would be to use Traffic Manager. It would give you additional options to manage your endpoints (though it might be a bit oversized for 2 VMs/endpoints).
A last solution is to migrate to a PaaS offering where all these kinds of features are already available (deployment slots).
Hoping this will help.
Best regards
I have a self-hosted SignalR application. Everything is fine, but once there are more than 5000 users, clients start reconnecting rapidly. I know that the default value of appConcurrentRequestLimit is 5000, so I ran this:
cd %windir%\system32\inetsrv
appcmd.exe set config /section:system.webserver/serverRuntime /appConcurrentRequestLimit:100000
but nothing changed. I also increased maxConcurrentRequestsPerCPU and requestQueueLimit according to this,
but I still have the problem.
I'm using Windows Server 2012 and IIS 8.
You are shooting in the dark here: you have no data about the actual performance or about what's happening. The users could be reconnecting for any number of reasons (server timeouts, regular interval reconnects, server errors); there are countless possibilities.
The correct way to find out what's happening and to measure performance is to run a baseline load test using the default configuration and collect the relevant performance counters: current requests, queued requests, current connections, maximum connections, and so on.
You should also collect any relevant Error logs on the server that could help you figure out what's happening.
You can find the full list of performance counters you need below:
Memory
.NET CLR Memory\# Bytes in all Heaps (for w3wp)
ASP.NET
ASP.NET\Requests Current
ASP.NET\Requests Queued
ASP.NET\Requests Rejected
CPU
Processor Information(_Total)\% Processor Time
TCP/IP
TCPv6\Connections Established
TCPv4\Connections Established
Web Service
Web Service\Current Connections
Web Service\Maximum Connections
Threading
.NET CLR LocksAndThreads\# of current logical Threads
.NET CLR LocksAndThreads\# of current physical Threads
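If you don't want to click these together in perfmon, typeperf can collect them from the command line. A sketch with a subset of the counters above, sampling every 5 seconds into a CSV:

rem Sample a few of the counters above every 5 seconds and write them to baseline.csv
typeperf "\ASP.NET\Requests Current" "\Web Service(_Total)\Current Connections" "\Processor(_Total)\% Processor Time" -si 5 -o baseline.csv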
Once you have your baseline performance results on a graph, you can modify the configuration (e.g. change the number of concurrent requests as you tried above), re-run your test, and collect the same performance counters again.
The performance counter results will speak for themselves, and they will lead you to a solution.
You can generate the load with a tool like Crank:
https://github.com/SignalR/SignalR/tree/dev/src/Microsoft.AspNet.SignalR.Crank
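A typical invocation looks something like this (a sketch; the URL and connection count are placeholders, and /Url and /Connections are the core Crank parameters):

rem Open 10000 persistent connections against the test endpoint (placeholder URL)
crank.exe /Url:http://myserver:8080/echo /Connections:10000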
In addition, you can check the SignalR troubleshooting guide:
http://www.asp.net/signalr/overview/testing-and-debugging/troubleshooting