Fix HTTP 504: Gateway Timeout when not using a load balancer - spring

I am developing an application that allows the user to download data from a remote database server. My server side contacts another database server, gets and packages all the data, and sends it back to the client side. Everything works fine locally. However, when I deploy my code to AWS Elastic Beanstalk, I get an HTTP 504: Gateway Timeout whenever my request doesn't get a response within 60 seconds (i.e. when the data is too large and it takes more time to fetch it all).
I have looked up a lot of posts online, but most solutions had to do with using a load balancer. I am not currently using a load balancer, and I am not really sure how to proceed. I know that what I have to do is raise the timeout/idle limit, but I can't seem to find a resource that explains how to do that when no load balancer is involved.
To give a general idea of how the project is built: it is written in ReactJS and Java, and it connects to a remote database server to request data. I am not using CORS or a browser-side proxy; instead, the Java backend has my server contact the database server when data is requested. I am also using Spring framework annotations for my requests (more specifically, in the controller class).
If you have any ideas on how to solve this issue, please let me know. I really don't know much about web application development. Thanks in advance!
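One likely culprit on a single-instance Elastic Beanstalk environment is the nginx reverse proxy the platform puts in front of the application: its default proxy read timeout is 60 seconds, which matches the behaviour described above. A minimal sketch of how the timeout could be raised, assuming a recent Amazon Linux 2 platform where nginx overrides live under .platform/nginx/conf.d/ in the deployed bundle (the file name and the 300-second value are illustrative assumptions, not taken from the question):
# .platform/nginx/conf.d/proxy_timeout.conf
# Raise the timeouts nginx applies to requests it forwards to the Spring app.
proxy_connect_timeout 300;
proxy_send_timeout    300;
proxy_read_timeout    300;
Even with a longer proxy timeout, very long requests stay fragile; streaming the response or paginating the data on the Java side is usually the more robust fix.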

Related

How to configure SSL on Spring Boot - Angular app on EC2 instance

I can't get my backend to send data after switching to a secure connection.
I was able to successfully configure SSL with mod_ssl on the Apache web server that serves my Angular app on an AWS Linux 2 instance, and the site is secure - but my Spring Boot backend is not responding; it is not sending any data. When I additionally convert the .crt and .key files to the PKCS12 format that Spring understands and use it in the Spring app, I get this error:
net::ERR_SSL_PROTOCOL_ERROR
I've tried using an AWS Load Balancer, but the same thing happens: the frontend loads in a secure environment, but the backend does not send any data, even after I change the backend calls from http to https://my-site.com. I've tried following the documentation and added this to my backend application properties file:
server.tomcat.remoteip.remote-ip-header=x-forwarded-for
server.tomcat.remoteip.protocol-header=x-forwarded-proto
and upgraded the security configuration with this:
http.requiresChannel().anyRequest().requiresSecure()...
but to no avail.
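For context, that snippet normally lives inside a security configuration class; a minimal sketch in the classic WebSecurityConfigurerAdapter style the snippet implies (the class name and the permitAll() rule are illustrative assumptions, not taken from the question):
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Redirect plain-HTTP requests to HTTPS; with the remoteip properties above,
        // Tomcat derives "secure or not" from the X-Forwarded-Proto header.
        http.requiresChannel().anyRequest().requiresSecure()
            .and()
            .authorizeRequests().anyRequest().permitAll();
    }
}
Note that this only makes sense while something in front of the app (Apache or the load balancer) terminates SSL and sets the forwarded headers; once Spring Boot terminates SSL itself via server.ssl.*, the channel check is redundant.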
Lastly, I created a new instance on EC2, and this time I didn't configure Apache for the frontend on Linux; I just used the SSL certificate in my backend app with the following properties:
server.ssl.enabled=true
server.ssl.key-store=/etc/ssl/mydomain_com.p12
server.ssl.key-store-password=******
server.ssl.key-alias=mydomain
To no avail - now my site doesn't load at all. I'm desperate; I've been struggling with this for a week now. What is the procedure for a full stack app? How do I do it?
Let me respond, because on the same day I asked the question I found a solution. The solution was converting the free SSL certificate with the help of this website:
https://www.sslshopper.com/ssl-converter.html
After I plugged it into my Spring Boot app, it works. Before that, I had made the conversion with OpenSSL on Windows, and it seems it was faulty. I'm so happy now... I have read so many articles on this website over my one-and-a-half-year journey of learning to code - and got stuck on the last step. I'm so happy. Thank you all for this amazing website and all the help. I love you! I'm proud to be a part of this programming community... the best humor, the best people!
Peace
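As a side note, when Spring Boot terminates SSL itself it helps to state the keystore type explicitly; a minimal sketch of the properties that have to agree with each other (the server.ssl.key-store-type line and the 8443 port are assumptions added for illustration, not taken from the answer above):
server.port=8443
server.ssl.enabled=true
server.ssl.key-store=/etc/ssl/mydomain_com.p12
server.ssl.key-store-type=PKCS12
server.ssl.key-store-password=******
server.ssl.key-alias=mydomain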

SSL for backend server without domain

I have a VPS where my Spring Boot backend is running. The frontend is a mobile app built with the Ionic framework.
The backend is built this way: at the front there is a so-called resource server, a GraphQL server that redirects requests to the REST microservices behind it. Every microservice has its own task that it is responsible for (e.g. a file-upload server that uploads/downloads files to a database). The whole application, including the frontend, is secured by a Keycloak instance, which runs as a Docker container like the rest of the application, except the frontend.
Now my question is: we don't have a domain, and for some reason they won't buy one, but we want to secure the communication with SSL/Let's Encrypt. However, Let's Encrypt isn't able to create SSL certificates for IP addresses. So finally, my question is: do you know of a solution that fits my problem?
So far,
Daniel

Get remote errors in Service Fabric using Web Api

Web API has GlobalConfiguration.Configuration.IncludeErrorDetailPolicy = IncludeErrorDetailPolicy.Always; to turn on remote errors (allowing you to see them in a browser even if you are not browsing on the local machine).
But, as near as I can tell, Service Fabric, running Web API, does not support GlobalConfiguration.
Is there a way to configure things so I don't have to log into one of my Service Fabric server machines each time I want to see what a service's error message is?
I recommend you don't show error details to everyone; it's a security risk. Consider instead moving your error logs out of your cluster, for instance by using OMS, ELK, or Application Insights.

Web app authentication and securing a separate web API (elasticsearch and kibana)

I have developed a web app that does its own user authentication and session management. I keep some data in Elasticsearch and now want to access it with Kibana.
Elasticsearch offers a RESTful web API without any authentication, and Kibana is a purely browser-side JavaScript application that accesses Elasticsearch through direct AJAX calls. That is, there is no "Kibana server", just static HTML and JavaScript.
My question is: How do I best implement common user sign on between the existing web app and Elasticsearch?
I am interested in specific Elasticsearch/Kibana solutions, but also in generic designs for single sign on to web apps and the external web APIs they use.
It seems the recommended way to secure Elasticsearch/Kibana is to have an Apache or Nginx reverse proxy in front that does SSL termination and user authentication (Basic auth). However, this doesn't play too well with the HTML form user authentication in my existing web app. Ideally I would like the user to sign on using the web app, and then be allowed direct access to the Elasticsearch API as well.
Solutions I've thought of so far:
1. Proxy everything in the web app: have all calls go to the web app (server), which does the authentication, then have the web app issue the same request to the Elasticsearch web API and forward the response back to the browser.
2. Have the web app (server) store session info that Apache or Nginx can somehow look up and use to authorize access through the reverse proxy.
3. Ditch web app sign-on and use Basic auth for everything.
Note that this is a single installation, so I don't really need any federated SSO solutions.
My feeling is that proxying within the web app (#1) is a common, generic solution, but it seems a bit heavyweight to have everything pass through the possibly slow web app, considering that Kibana otherwise uses the Elasticsearch API directly.
I haven't found an out-of-the-box solution designed for the proxy authentication setup (#2). My idea is to have the web app store session info in memcached or the like, and to use some facility in the web server (Apache or Nginx) to look up the session based on a cookie and allow proxied access if the user is authenticated.
The issue seems similar to serving static files directly from the web server (Apache or Nginx) while authenticating through a slow web app. The recommendations I've found for that are, however, very specific to that problem, like X-Sendfile.
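For what it's worth, Nginx ships an optional subrequest-authentication module (ngx_http_auth_request_module) that matches idea #2 fairly closely; a minimal sketch, assuming a hypothetical /session/check endpoint in the web app that returns 200 for a valid session cookie and 401 otherwise (the endpoint path and ports are illustrative assumptions):
location /elasticsearch/ {
    auth_request /_session_check;        # ask the web app before proxying
    proxy_pass http://127.0.0.1:9200/;   # Elasticsearch
}

location = /_session_check {
    internal;                                           # not reachable from outside
    proxy_pass http://127.0.0.1:8080/session/check;     # web app validates the cookie
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
    proxy_set_header X-Original-URI $request_uri;
}
The module has to be compiled in (--with-http_auth_request_module), but with it the session lookup stays in the web app while the proxying stays in Nginx.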
You could use a session token; this is a fairly generic solution. Let me explain. When the user logs in, you generate a random string, store it, and pass it back to the user. Each time the user tries to interact with your API, you ask for the session token you gave them. If it matches, you provide the service they are asking for; otherwise, you simply ignore the call. You should make session tokens expire after a certain interval of time and issue a new one each time the user logs back in.
Hope this helps you.
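A minimal sketch of that idea as a servlet filter in the web app, assuming a Servlet 4.0+ container (the TokenStore interface and the X-Session-Token header name are hypothetical placeholders, not part of the answer above):
import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SessionTokenFilter implements Filter {

    // Hypothetical store that remembers issued tokens and their expiry.
    private final TokenStore tokens;

    public SessionTokenFilter(TokenStore tokens) {
        this.tokens = tokens;
    }

    @Override
    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        // The client sends back the random string it was given at login.
        String token = request.getHeader("X-Session-Token");

        if (token != null && tokens.isValid(token)) {
            chain.doFilter(req, res);                                 // known, unexpired token
        } else {
            response.sendError(HttpServletResponse.SC_UNAUTHORIZED);  // otherwise reject
        }
    }

    // Hypothetical lookup abstraction; could be backed by memcached, Redis, or a database table.
    public interface TokenStore {
        boolean isValid(String token);
    }
}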

How to bring a server online and link it to parse.com?

I have a server of my own running locally on my wifi, on 0.0.0.0:5000.
I have built an app with the parse.com backend, and I want to link this server to Cloud Code, so I can call functions on it.
I am completely lost and don't know where to start when it comes to bringing my server online so that only Parse can access it and use its API.
Or am I better off renting a VPS and connecting to that?
