Control Liberty HTTP endpoint enablement - open-liberty

I have 2 WARs in my server. The first starts very rapidly, but the second one takes longer to start.
My server is able to respond to REST calls once the fast one has started, but it seems that the HTTP endpoint can only be accessed when both applications are started.
Is there a way to control this so that I can respond rapidly to REST calls with the first application and let the second application start up in the background?

Yes, check this post - Configure IBM WebSphere Liberty Profile Server Start & Stop timeout.
You can use the <applicationManager startTimeout="1m"/> setting to tell the server not to wait for applications that take longer than the timeout to start. Be aware that any request that comes in for an application that is not yet ready to serve requests will return an HTTP 404 response.
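For reference, a minimal server.xml sketch of where that setting goes (the feature, ports, and application file names are illustrative assumptions, not taken from your configuration):

<server description="Liberty server with a fast and a slow application">
    <featureManager>
        <!-- Example feature only; list whatever features your two WARs actually need. -->
        <feature>jaxrs-2.1</feature>
    </featureManager>

    <!-- Stop waiting for applications that take longer than 1 minute to start;
         they keep starting in the background while the HTTP endpoint serves
         the applications that are already up. -->
    <applicationManager startTimeout="1m"/>

    <httpEndpoint id="defaultHttpEndpoint" host="*" httpPort="9080" httpsPort="9443"/>

    <webApplication location="fast-app.war"/>
    <webApplication location="slow-app.war"/>
</server>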

Related

Hoverfly - capturing requests & responses for microservices

I am trying to capture requests & responses from an application using Hoverfly. Hoverfly is installed on a machine and set up as a proxy in capture mode.
The application is a web application deployed in WebLogic on a Linux box. The application internally uses SOAP requests to communicate with a Tibco ESB server and fetch details from provider systems. I want to capture the SOAP requests & responses between the application server & the ESB.
To capture the requests & responses I have set the proxy on the application server in the following ways:
1) Added the proxy parameters in the setDomainEnv.sh script of the application server
EXTRA_JAVA_PROPERTIES="-Dhttp.proxyHost=10.0.0.1 -Dhttp.proxyPort=8500 ${EXTRA_JAVA_PROPERTIES}"
export EXTRA_JAVA_PROPERTIES
2) Added the proxy parameters to the JVM startup parameters of the application
3) Set the proxy for the OS-level user
http_proxy=http://10.0.0.1:8500
In all three cases I have failed to capture requests & responses in Hoverfly.
Are there any other ways to do this, or any additional settings needed to route the requests & responses through the proxy?
That should be sufficient. Is your SOAP service HTTP or HTTPS? If HTTP, this should work. If HTTPS, you need to add the self-signed Hoverfly certificate to your WebLogic JVM truststore (jre/lib/security/cacerts) to be able to capture those HTTPS requests. Also, for HTTPS communication, the JVM args should be -Dhttps.proxyHost and -Dhttps.proxyPort.
In my opinion, the OS-level proxy is not required as long as the JVM parameters are set.
Make sure Hoverfly is running and set to Capture mode in the Hoverfly dashboard. When you invoke the services from your WebLogic server, the capture count in the dashboard should increase; that's a sign that everything is working.
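As a sketch of the HTTPS variant (the proxy host/port are the ones from the question; the certificate file name and alias are placeholders):

EXTRA_JAVA_PROPERTIES="-Dhttps.proxyHost=10.0.0.1 -Dhttps.proxyPort=8500 ${EXTRA_JAVA_PROPERTIES}"
export EXTRA_JAVA_PROPERTIES

# Import the Hoverfly self-signed certificate into the JVM truststore used by WebLogic.
# hoverfly-cert.pem is whatever file you exported the certificate to; "changeit" is the
# default cacerts password.
keytool -importcert -alias hoverfly -file hoverfly-cert.pem \
    -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit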

Spring Web Sockets over AWS Application Load Balancer not working

I have configured Spring with WebSockets, including RabbitMQ on the back end, and I can confirm that I can send push messages to the browser.
I am using SockJS on the front end.
Up until now I have been using the Classic Load Balancer.
I am trying to get WebSockets to work on AWS. I have upgraded to the Application Load Balancer, but I still get a Bad Request response when I try to make the WebSocket connection to:
ws://XXXX.eu-west-1.elasticbeanstalk.com/spring/hello/870/sbmdv5tn/websocket
That call still gives a 400 Bad Request response...
And I see
Handshake failed due to invalid Upgrade header: null
errors on the back end...
It has to do with the fact that a connection upgrade is requested and these upgrade requests occur "per hop".
In my scenario I am running with Apache in front of Tomcat, and in order for Tomcat to receive these upgrade headers I need to enable WebSocket tunnelling on the Apache proxy so that Apache simply passes the upgrade request through.
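For example, with mod_proxy_wstunnel loaded, a minimal httpd configuration sketch (the context path and Tomcat port are assumptions based on my setup) looks like this:

LoadModule proxy_module modules/mod_proxy.so
LoadModule proxy_http_module modules/mod_proxy_http.so
LoadModule proxy_wstunnel_module modules/mod_proxy_wstunnel.so
LoadModule rewrite_module modules/mod_rewrite.so

RewriteEngine On
# Requests carrying a WebSocket Upgrade header are tunnelled to Tomcat as ws://.
RewriteCond %{HTTP:Upgrade} =websocket [NC]
RewriteRule ^/spring/(.*)$ ws://localhost:8080/spring/$1 [P,L]
# All other requests are proxied as ordinary HTTP.
RewriteCond %{HTTP:Upgrade} !=websocket [NC]
RewriteRule ^/spring/(.*)$ http://localhost:8080/spring/$1 [P,L]
ProxyPassReverse /spring/ http://localhost:8080/spring/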
UPDATE:
A better solution, though, is to bypass Apache altogether and go straight to Tomcat - that is, configure the load balancer to route to port 8080 instead of port 80. I suspect the reason Elastic Beanstalk does not do this by default is that it then requires a load balancer - and if you only want a single instance, you don't need a load balancer.

WebSphere web plug-in to automatically propagate cluster node shutdown

Does the WebSphere web server plug-in automatically propagate the new configuration after a manual shutdown of a node in the application server cluster? I've been going through the documentation, and it looks like the only way for the web server to act on this is by detecting the node state by itself.
Is there any workaround?
By default, the WAS Plug-in only detects that a JVM is down by failing to send it a request or failing to establish a new TCP connection.
If you use the "Intelligent Management for WebServers" features available in 8.5 and later, there is a control connection between the cell and the Plug-in that will proactively tell the Plugin that a server is down.
Backing up to the non-IM case, here's what happens during an unplanned shutdown of a JVM (from http://publib.boulder.ibm.com/httpserv/ihsdiag/plugin_questions.html#failover)
If an application server terminates unexpectedly, several things
unfold. This is largely WebSphere edition independent.
The application server's operating system closes all open sockets.
Web server threads waiting for the response in the WAS Plug-in are notified of EOF or ECONNRESET.
If the error occurred on a new connection to the application server, it will be marked down in the current web server process. This server will not be retried until a configurable interval expires (RetryInterval).
If the error occurred on an existing connection to the application server, it will not be marked down.
Retryable requests that were in-flight are retried by the WAS Plug-in, as permitted.
If the backend servers use memory to memory session replication (ND only), the WLM component will tell the WAS Plug-in to use a specific replacement affinity server.
If the backend servers use any kind of session persistence, the failover is transparent. Session persistence is available in all websphere editions.
New requests, with or without affinity, are routed to the remaining servers.
After the RetryInterval expires, the WAS Plug-in will try to establish new connections to the server. If it remains down, the failure will be relatively fast and will put the server back into the marked-down state.
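To make the RetryInterval and mark-down behaviour above concrete, here is a trimmed plugin-cfg.xml sketch (cluster name, member names, hosts and ports are placeholders):

<!-- A server that is marked down is not retried until RetryInterval (seconds) expires. -->
<ServerCluster Name="myCluster" LoadBalance="Round Robin" RetryInterval="60">
    <Server Name="member1" ConnectTimeout="5" ServerIOTimeout="60">
        <Transport Hostname="appserver1.example.com" Port="9080" Protocol="http"/>
    </Server>
    <Server Name="member2" ConnectTimeout="5" ServerIOTimeout="60">
        <Transport Hostname="appserver2.example.com" Port="9080" Protocol="http"/>
    </Server>
    <PrimaryServers>
        <Server Name="member1"/>
        <Server Name="member2"/>
    </PrimaryServers>
</ServerCluster>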

How to get a web server to send outbound http requests through local fiddler proxy?

I'm running a local web server written in Go, and I can debug traffic going to it from my browser; but I can't see the HTTP requests that it makes to external services.
Do I have to run some particular configuration of the web server in order to get the traffic to appear in Fiddler? It is running as a background process.
Short answer: you can't...
...unless your web application is written to open a connection to a Proxy server and route requests through that connection (e.g. connect to a remote proxy, and then send requests through it).
Typically what developers do is just dump the Web Request/Response to a debug file to inspect during development (or to debug on a live server, by enabling it with a flag at runtime).
Fiddler is a "proxy" service/server. When you are using it normally to debug browser requests, your Browser is configured to connect to a Proxy server. That is, it will send all web requests through your fiddler's local server (I think it's localhost:8888 if i remember from my Windows days of using Fiddler) which in turn makes a connection to your local web server that you are debugging.
You can read more about Proxies at Wikipedia.
In that picture above, your local web server would be Alice. Meaning, Alice would need to be configured to connect to a proxy server and then make web requests through it.
EDIT:
(for the "I really need this" crowd)
If you really want to modify your web server to send requests through a proxy, there are a few Go packages already written to help you. GoProxy is one such package.
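If all you need is to route the Go server's outbound calls through Fiddler, the standard library alone is enough. A minimal sketch, assuming Fiddler is listening on its default localhost:8888 (the target URL is just an example):

package main

import (
	"fmt"
	"io"
	"net/http"
	"net/url"
)

func main() {
	// Assumption: Fiddler's proxy endpoint is the default localhost:8888.
	proxyURL, err := url.Parse("http://localhost:8888")
	if err != nil {
		panic(err)
	}

	// Every request made with this client goes via the proxy, so Fiddler sees it.
	client := &http.Client{
		Transport: &http.Transport{Proxy: http.ProxyURL(proxyURL)},
	}

	resp, err := client.Get("http://example.com/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status, len(body), "bytes")
}

Alternatively, Go's default transport uses http.ProxyFromEnvironment, so exporting HTTP_PROXY=http://localhost:8888 (and HTTPS_PROXY for TLS traffic) before starting the server has the same effect for code that uses http.DefaultClient.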

HTTP GET requests work but POST requests do not

Our Spring application is running on several different servers. For one of those servers, POST requests do not seem to be working. All site functionality that uses GET requests works completely fine; however, as soon as I hit something that uses a POST request (e.g. a form submit), the site just hangs permanently. The server won't give any response. We can see the requests in Tomcat Manager, but they don't time out.
Has anyone ever seen this?
We have found the problem. Our DBA accidentally deleted the MySQL database files on that particular server (/sigh). In our Spring application we use GET requests for record retrieval, and the records we were trying to retrieve must have been cached by MySQL, which made it seem as if GET requests were working. When we tried to add new data to the database, which we use POST requests for, Tomcat would wait for a response from MySQL that never came.
In my experience, if you're getting a timeout error it's almost always due to not having the correct ports open for your application. For example, go into your virtual machine's firewall rules and ensure ports 8080, 8443 or 80, 443 are open for HTTP and HTTPS traffic.
In Google Cloud Platform it's under VPC network -> Firewall rules. Azure and AWS are similar.
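On GCP the equivalent rule can also be created from the CLI; a sketch (the rule name, ports, and source range are placeholders to adjust):

# Allow inbound HTTP/HTTPS and the usual Tomcat ports from anywhere.
gcloud compute firewall-rules create allow-web-traffic \
    --direction=INGRESS \
    --allow=tcp:80,tcp:443,tcp:8080,tcp:8443 \
    --source-ranges=0.0.0.0/0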
