JHipster: creating microservices with the sample apps.jdl causes multiple npm errors

I am trying to build the sample JHipster microservice app from this blog post:
https://dev.to/jhipster/how-to-deploy-jhipster-microservices-on-amazon-eks-using-terraform-and-kubernetes-49a5
While generating the apps with jhipster jdl apps.jdl (https://raw.githubusercontent.com/oktadev/okta-jhipster-k8s-eks-microservices-example/main/apps.jdl), I get multiple npm errors.
The full logs can be accessed at https://gist.githubusercontent.com/vidhya03/89fb728a0af15c3b81c8a60061b3cf5d/raw/0dd14758f7548c0c37f2be050eccd327d5e55b80/buildlogs.log

What JHipster version are you using? Note that there are some known issues with releases 7.9.0 and 7.9.1 on Windows, so if you are using either of them, update to 7.9.2 first. Then delete the .npmrc file generated in the store app and try running npm install again in the store app. That should fix the issue.

Related

How to deploy Laravel 8 to Google Cloud Run with a Google Cloud database

I am looking for help with containerizing a Laravel application with Docker, running it locally, and making it deployable to Cloud Run, connected to a Google Cloud database.
My application is an API built with Laravel, and so far I have just used the docker-compose/Sail package that comes with Laravel 8 during development.
Here is what I want to achieve:
Laravel app running on Cloud Run.
Database in Google Cloud: MySQL, PostgreSQL, or SQL Server (MySQL preferred).
Environment configuration stored in Google Cloud.
My problem is that I can't find any info on whether or how to use/rewrite the docker-compose file in Laravel 8, create a Dockerfile or cloudbuild file, and build it for Google Cloud.
Maybe I could add something like this in a cloudbuild.yml file:
#cloudbuild.yml
steps:
  # running docker-compose
  - name: 'docker/compose:1.26.2'
    args: ['up', '-d']
Any help/guidance is appreciated.
As mentioned in the comments on this question, you can check this video, which explains how to use docker-compose and Laravel to deploy an app to Cloud Run with a step-by-step tutorial.
As for connecting said app to a database, the Connecting from Cloud Run (fully managed) to Cloud SQL documentation is quite complete on that matter, and for secret management I found this article that explains how to integrate Secret Manager with Cloud Run.
I know this answer is basically just links to the documentation and articles, but I believe all the information you need to get your app onto Cloud Run is in them.
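For reference, a minimal cloudbuild.yml along the lines of those docs could look like the sketch below; the laravel-api service name, the region, and the assumption that a Dockerfile sits at the repo root are placeholders to adapt, not a verified setup.
# cloudbuild.yml (sketch; adjust service name, region and Dockerfile path)
steps:
  # Build the Laravel image from a Dockerfile instead of docker-compose
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/laravel-api', '.']
  # Push the image so Cloud Run can pull it
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/laravel-api']
  # Deploy the pushed image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'laravel-api',
           '--image', 'gcr.io/$PROJECT_ID/laravel-api',
           '--region', 'us-central1', '--platform', 'managed',
           '--allow-unauthenticated']
images: ['gcr.io/$PROJECT_ID/laravel-api']
The database connection and secrets would still go through the Cloud SQL connection and Secret Manager approaches described in the links above.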

Critical Caching issue in Laravel Application (AWS Server)

I am facing a critical issue in my application, which is developed in Laravel and Angular. The issue is that I am getting the old email templates on the live site, while on the local server I get the latest updated ones. The deployment process is automatic: I just commit the code to Bitbucket, and then a Bitbucket Pipeline pushes the code to the AWS server directly.
I have already run the cache commands for Laravel and restarted the jobs, but I am still getting the same issue. If anyone has experienced the same issue or knows how to resolve it, please guide me!
I think you can try one of the following ways to overcome the issue; I faced a similar issue and resolved it this way:
Delete the cached view files manually from storage/framework/views
Upload the code for the particular module directly to AWS without going through the pipeline
Restart your server
This will surely resolve your issue!
Since you are using a Laravel and Angular application deployed on AWS,
I assume that Bitbucket is pushing the code and build commands are fired on every push.
There are a few things that can help you:
Build the Angular side on every push, since Angular builds hash all the files in the dist folder
Delete the Laravel cached files, which are stored in storage/framework/views
Check that your server is pointing to the right project folder
If point 1 or point 2 works, you can automate the process by running CLI commands after every push;
both points 1 and 2 are achievable with CLI commands (see the sketch below).
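To illustrate points 1 and 2, a rough bitbucket-pipelines.yml step could run the cache-clearing and build commands on the server after every push; the $DEPLOY_USER/$DEPLOY_HOST variables and the /var/www/app path are assumptions to replace with your own setup.
# bitbucket-pipelines.yml (sketch; placeholders, not your exact pipeline)
pipelines:
  branches:
    master:
      - step:
          name: Deploy and refresh Laravel/Angular caches
          script:
            # ... your existing deploy/push commands stay here ...
            # Clear compiled views and caches so stale email templates in
            # storage/framework/views are no longer served
            - ssh $DEPLOY_USER@$DEPLOY_HOST "cd /var/www/app && php artisan view:clear && php artisan cache:clear && php artisan config:clear"
            # Rebuild the Angular bundle so dist/ gets freshly hashed files
            # (assumes the frontend has a build script in package.json)
            - ssh $DEPLOY_USER@$DEPLOY_HOST "cd /var/www/app && npm run build"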

OpenShift 3 Starter

I'm trying to investigate how to get a simple Spring Boot project up and running on OpenShift 3 Starter (the free tier).
I'm using this simple starter project, which uses WildFly, see here:
https://github.com/callistaenterprise/spring-boot-openshift
I followed the build/deploy guide here and tried both the WildFly and OpenJDK image types; both fail.
https://developers.redhat.com/blog/2017/02/23/getting-started-with-openshift-java-s2i/
The error in the build console is this:
Pod blog2-1-build   Failed sync   Error syncing pod
Pod blog2-1-build   Scheduled     Successfully assigned blog2-1-build to ip-172-31-53-149.ec2.internal
I have no idea how to solve this; any advice or help is greatly appreciated.

I am unable to deploy a Phoenix app to Heroku because a dependency (coherence) fails to compile, how do I make it work?

To start, I made an Elixir application using the Phoenix framework.
This application uses the coherence dependency for authentication to the website. The dependency was installed as advised on its git repo, with the -full argument to install all the options coherence has.
Then I just changed a couple of lines in my project's config.exs file to use the Mailgun service for mailing and put the credentials there.
Next, I installed and configured my other deps (they have nothing to do with coherence).
Locally, my application could compile and run without problems.
Then I wanted to deploy it to Heroku following the Phoenix guidelines.
When I completed all the steps, I got an error when trying to push the application to Heroku.
I then checked the file lib/mix/tasks/coherence.clean.ex at line 162, where I found a comment saying there is an error with updating a config file, but I couldn't figure out what that means or how to solve it.
I tried making a fresh Phoenix application, installing coherence with the same or different options, and then deploying it following the Phoenix guidelines. Every time I got the same error.
I also want to note that I tried creating an elixir_buildpack.config file and putting always_rebuild=true there, with no success (it is a solution mentioned in the troubleshooting section of the deploying-to-Heroku guide).
So my question is: what do I need to change in my config.exs file (or elsewhere) to make at least a fresh application with coherence installed compile and work on Heroku?
Useful links:
coherence dep github link
Thanks a ton guys.
The Heroku Buildpack for Elixir currently defaults to Elixir 1.2.6, while the code that throws that error uses the else syntax with with, a feature that was added in Elixir 1.3.0. So you need to set the Elixir version to 1.3.0 or later by adding the following to elixir_buildpack.config:
elixir_version=1.3.2

How to move cloud code from parse.com to heroku

I have moved Parse Server from Parse.com to Heroku. Everything is working fine except the cloud code (the 'cloud/main.js' file).
I have replaced the "main.js" from Parse.com with the "main.js" in the Parse Server code and deployed it to Heroku, but it is not working. I get the following error when I make a request from my mobile app:
{"code":1,"message":"Internal server error."} (Code: 1, Version: x.xx.x)
Any idea?
Note: I've followed the following link for migrating Parse Server:
https://learnappmaking.com/how-to-migrate-parse-app-parse-server-heroku-mongolab/
Migrating cloud code can range in difficulty depending on how involved that code is. Here's a workflow for validating your code:
1) Check that you can build your Heroku app locally with the right Node version.
2) Comment out all of your cloud code. You want to start introducing your code in parts and make sure it compiles with each re-introduced function.
3) Install the node modules for each service that you use. If you use stripe/mailgun or any other package, add them to your package.json file and run npm install. Then include them in your main.js file with require('packageName').
4) The cloud server uses Express.js version 4.2, while Parse.com runs Express version 2.0 or 3.0 but not 4.0. If you use any middleware, you need to change it to the proper Express 4.0 syntax/methodology.
5) There is no support for cloud jobs, so rename all your *.job functions to *.define and comment them properly so you can come back to them later. If you did not use cloud jobs, then don't worry.
6) If you did use cloud jobs, you now need to set up a Heroku worker/scheduler to run those old *.job (now *.define) calls at the proper time intervals you had.
