Reduce embedded Tomcat startup time on the prod profile (Heroku)

I am running a basic JHipster-generated app (see bottom for setup details) that is to run on a single basic Heroku dyno for the time being. I use the embedded Tomcat approach and everything is working out fine apart from the start-up time on the production profile.
Running the server locally on my machine with Foreman I get the following results:
On the dev profile I get start-up times of <30 sec with both the local and the remote Heroku database.
On the prod profile this goes up to >100 sec in both cases.
This causes Heroku to terminate the instance before start-up completes, because Heroku requires the server to bind to its designated port within 60 seconds.
Thus my question is how, if at all, I can reduce that time. I am aware that Heroku offers to increase the timeout to 120 sec on a per-application basis. I would appreciate a more elegant approach, though, especially since I am running on a basic dyno, so that timeout extension might not even be sufficient.
I know of Jetty/WebappRunner, too, but would prefer to stick to the simpler embedded Tomcat setup if at all possible.
Finally, I have seen that for Rails there is the possibility of employing a proxy during start-up to bind to Heroku's port right away - I did not find a Spring equivalent, though.
Thanks in advance!
Setup
JHipster v0.17.2
authenticationType: token
hibernateCache: no
clusteredHttpSession: no
websocket: no
databaseType: sql
devDatabaseType: postgresql
prodDatabaseType: postgresql
useCompass: true
buildTool: maven
frontendBuilder: grunt
javaVersion: 7

Related

How to tell Octopus Deploy to wait until another deployment finishes on the same machine?

Sometimes it is preferred and/or required to host dozens of applications on a single server. Not saying this is "right" or "wrong," I'm only saying that it happens.
A downside to this configuration is that the error message "Waiting for the script in task [TASK ID] to finish as this script requires that no other Octopus scripts are executing on this target at the same time" appears whenever more than one deployment to the same machine is running. It seems like Octopus Deploy is fighting itself.
How can I configure Octopus Deploy to wait for one deployment to completely finish before the next one is started?
Before diving into the answer, it is important to understand why that message appears in the first place. Each time a step is run on a deployment target, the Tentacle creates a "mutex" to prevent other projects from interfering with it. An early use case for this was updating the IIS metabase during a deployment: in certain cases, concurrent updates would cause random errors.
Option 1: Disable the Mutex
We've seen cases where the mutex is the cause of the delay. The mutex is applied per step, not per deployment. It is common to see a situation where it looks like Octopus is "jumping" between deployments. Depending on the number of concurrent deployments, that can slow down the deployment. The natural thought is to disable the mutex altogether.
It is possible to disable the mutex by adding the variable OctopusBypassDeploymentMutex and setting it to True. That variable can exist in either a specific project or in a variable set.
More details on what that variable does can be found in this document. If you do disable the mutex, please test it and monitor for any failures. For the most part we don't see issues when the mutex is disabled, but problems have come up from time to time; it depends on a host of other factors, such as application type and Windows version.
Option 2: Leverage Deploy a Release Step
Another option is to coordinate the projects using the Deploy a Release step. Typically this works best when the projects being deployed are part of the same application suite. In the example below, I have five "deployment" projects:
Azure Worker IaC
Database Worker IaC
Kubernetes Worker IaC
Script Worker IaC
OctoStudy
The project Unleash the Kraken coordinates deployments for those projects.
It does this by using the Deploy a Release step. First it spins up all the infrastructure, then it deploys the application.
This won't work as well if the server is hosting 50 disparate applications.
Option 3: Leverage the API to check for running deployments
The final option is to include a step at the start of each project which hits the API to check for active deployments to the same deployment target. If an active deployment is found, the step waits until it is done.
You can do this by hitting the endpoint https://[YOUR URL]/api/[SPACE ID]/machines/[Machine Id]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[SPACE ID]&includeSystem=false. That will tell you all the active tasks being run for a specific machine.
You can get Machine Id by pulling the value from Octopus.Deployment.Machines. You can get Space Id by pulling the value from Octopus.Space.Id.
The pseudocode for this approach could look like this (I'm not including the actual code, as your requirements could be very different):
activeDeployments = true
while (activeDeployments)
{
    activeDeployments = false
    foreach (machineId in Octopus.Deployment.Machines)
    {
        activeTasks = GET https://[YOUR URL]/api/[Octopus.Space.Id]/machines/[machineId]/tasks?skip=0&name=Deploy&states=Executing%2CCancelling&spaces=[Octopus.Space.Id]&includeSystem=false
        if (activeTasks.Count > 0)
        {
            activeDeployments = true
        }
    }
    if (activeDeployments == true)
    {
        Sleep for 5 seconds
    }
}
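A runnable version of that loop might look like the following Python sketch (Octopus can run Python script steps). The URL, API key, and IDs are placeholders you would fill from the variables mentioned above, and it assumes the usual paged response with an Items array:
import time
import requests  # assumes the requests library is available where the step runs

OCTOPUS_URL = "https://your-octopus-server"  # placeholder
API_KEY = "API-XXXXXXXXXX"                   # placeholder; store as a sensitive variable in practice
SPACE_ID = "Spaces-1"                        # from Octopus.Space.Id
MACHINE_IDS = ["Machines-123"]               # from Octopus.Deployment.Machines

def active_task_count(machine_id):
    # Ask the Octopus API for deployment tasks still executing or cancelling on this machine.
    # In a real step you would likely also exclude the current task (Octopus.Task.Id) from the count.
    response = requests.get(
        f"{OCTOPUS_URL}/api/{SPACE_ID}/machines/{machine_id}/tasks",
        params={"skip": 0, "name": "Deploy", "states": "Executing,Cancelling",
                "spaces": SPACE_ID, "includeSystem": "false"},
        headers={"X-Octopus-ApiKey": API_KEY},
    )
    response.raise_for_status()
    return len(response.json()["Items"])

# Poll every five seconds until no target machine has an active deployment task.
while any(active_task_count(m) > 0 for m in MACHINE_IDS):
    time.sleep(5)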
I had this message hit me because I had reached the task cap on the Octopus Server.
In Octopus > Configuration > Nodes you can change the task cap. Setting it to 1 gives you one deployment at a time, even with agents on different servers, but the message will then display constantly while the other tasks queue.
Or simply increase this value to prevent the message from occurring at all.

-Dspring.profiles.active value difference between 'prod' and 'default'?

I'm connecting a web server to a backend using gRPC services.
When the backend is started with -Dspring.profiles.active=default, the gRPC API connects, but with -Dspring.profiles.active=prod the connection times out.
In the code there is no setup for either value, so I'm left to presume these are profiles that come "out of the box" with Spring!?
That's the hypothesis, at least, because there don't seem to be any other setup or deployment differences that might be causing these connection errors.
Thanks for any pointers!
The Spring profile determines which properties file gets picked up when running the application:
-Dspring.profiles.active=default // picks up the application-default.properties file
-Dspring.profiles.active=prod // picks up the application-prod.properties file
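So if those two files configure the backend connection differently, that alone would explain the timeout. As an illustration, with made-up property names (these are not standard Spring or gRPC keys):
# application-default.properties
grpc.backend.host=localhost
grpc.backend.port=6565
# application-prod.properties
grpc.backend.host=backend.prod.example.com
grpc.backend.port=6565
Comparing the two files for differing hosts, ports, or TLS settings is the first place to look.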

ClassPath resource not found

I'm trying to deploy my Spring Boot based application to a CloudControl container.
I've added the mysqls.free add-on and configured it through my application.properties:
spring.datasource.driverClassName=com.mysql.jdbc.Driver
spring.datasource.max-active=1
spring.datasource.max-idle=1
spring.datasource.min-idle=1
spring.datasource.initial-size=1
spring.datasource.url=jdbc:mysql://${MYSQLS_HOSTNAME}:${MYSQLS_PORT}/${MYSQLS_DATABASE}
spring.datasource.username=${MYSQLS_USERNAME}
spring.datasource.password=${MYSQLS_PASSWORD}
On my local development system everything works perfectly fine, but on the CloudControl container the app won't start.
I added the stack trace here. I've been trying to solve the problem for days, but I am not able to solve it on my own.
Spring apps consume a lot of memory, and the mysqls.free add-on allows only a limited number of parallel connections, although your stack trace doesn't show either of these problems. It's hard to solve this issue without more context such as logs or environment settings.
The following commands may help:
cctrlapp app_name/default log error # shows startup log
cctrlapp app_name/default addon.creds # shows DB credentials
I've uploaded some spring-boot example code at https://github.com/cloudControl/spring-boot-example-app which I've tested on cloudControl today.
Please take a look at the configuration there. If you want to deploy it, make sure your container has a memory size >= 768 MB:
cctrlapp app_name/default deploy --memory 768MB
If you still have issues, please contact cloudControl support to help you.

Docpad Livereload plugin + Cloud9 IDE

Has anyone successfully got this combination working?
It seems to run correctly on the client side, but there's something about Cloud9's file system that means changes aren't detected when files are saved, so I'm having to restart the app every time.
The problem is that Cloud9 gives you only one port (process.env.PORT); you are using this port to run the web server, so you don't have an additional port for the livereload server.
For CSS you can use Live.js.
Safareli is correct that Cloud9 gives you only one port, but Live.js, which Safareli linked, does in fact work for refreshing everything, although I don't know how taxing it is on C9, since it polls the headers pretty much all the time.
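For reference, Live.js needs no extra server port; it is enabled by adding a single script tag to your pages (script URL as published on livejs.com):
<script src="http://livejs.com/live.js"></script>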

Why are there open connections on my Heroku app's PostgreSQL database? How do I close them?

My Heroku app is www.inflationtrends.com.
Usually, when I run "pg:info" in Git Bash to see how many connections there are, that number is zero.
Recently, I've seen a spike in traffic -- not much, only a little over 1,000 visits in the past 48 hours -- and when I ran "pg:info" this morning (around 11 a.m. Eastern time), the result showed 4 or 5 open connections.
My app is run using the Ruby gem Sinatra. In the Sinatra file, I have the following code:
after do
  DB.disconnect
end
The "after do" loop disconnects from the PostgreSQL database after a page is loaded.
The variable "DB" has the connection info for my PostgreSQL database (username, password, host, port number, SSL mode requirement):
DB = Sequel.postgres(
  db_name,
  :user => user,
  :password => password,
  :host => host,
  :port => port,
  :sslmode => sslmode
)
Is there some reason that there are open connections? Are there ways to close these connections? Are there more efficient ways to handle this situation?
An alternate way to check the number of open connections on Heroku is to type this into your console/terminal and replace "myapp" with your app's name:
heroku pg:info -a myapp
Have you considered that perhaps your site is simply getting traffic? When people visit your site and use your application, connections will be opened.
Try adding some tracking code (such as Google Analytics) to your web pages, then check whether the number of recorded visitors matches the number of open connections.
It is also possible that the database has connections opened by various maintenance tasks, such as backing up.
I grabbed the following Toolbelt add-on, which worked perfectly:
https://github.com/heroku/heroku-pg-extras#usage
heroku pg:killall --app xyz
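If you would rather terminate the connections by hand from a psql session (heroku pg:psql), the equivalent SQL is roughly the following (the column is pid on PostgreSQL 9.2+; it was procpid on older versions):
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = current_database()
  AND pid <> pg_backend_pid();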
