Why aren't GraphQL and Strapi working on AWS ECS? - graphql

I have a Strapi app (v4) running as a Docker image on AWS Fargate.
Everything worked perfectly until I decided to install GraphQL and pushed the updated image to ECR.
Once the task starts running I get the following error:
TypeError: ApplicationError is not a constructor
at /strapi-api/node_modules/@strapi/plugin-graphql/server/services/builders/dynamic-zones.js:17:15 ...
The weird thing is that when I run the Docker image locally I get no errors at all and GraphQL works properly.
I hope someone can help me with this error.
Thanks.

It turns out I had a problem with my components folder, which is connected to AWS EFS. I created the project from scratch and now it works like a charm.
So if you're having this kind of problem, try creating a new Strapi app.
For reference:
https://github.com/strapi/strapi/issues/12547
:)
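If you reinstall the GraphQL plugin in the fresh project, a minimal plugin config can look like the sketch below (TypeScript, assuming Strapi v4 with @strapi/plugin-graphql installed; the specific options are illustrative rather than required):

// config/plugins.ts - minimal sketch for enabling the GraphQL plugin in Strapi v4
export default {
  graphql: {
    enabled: true,
    config: {
      endpoint: '/graphql',    // default GraphQL endpoint
      shadowCRUD: true,        // auto-expose content-types in the schema
      playgroundAlways: false, // only serve the playground in development
    },
  },
};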

Related

Missing fields in API response in Production

I have a MERN app that I am using to fetch packages from my backend.
It works in development but not in production when I host it on AWS EC2.
I figured out that the reason it's not working is that I am getting an incomplete response back in production. How is that even possible? I am frustrated right now.
Attaching some images and console logs from development and production.
The field packageImages is missing in the production response!
Because of this I am getting an error about trying to read properties of undefined.
Please, someone guide me on this.
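For what it's worth, the "reading properties of undefined" crash can be contained on the client while the production response is investigated. A minimal TypeScript sketch; the /api/packages endpoint and the Package shape are assumptions taken from the question, not the actual code:

// Sketch only: endpoint and field names are assumed from the question.
interface Package {
  name?: string;
  packageImages?: string[]; // the field missing from the production response
}

async function loadPackages(): Promise<Package[]> {
  const res = await fetch('/api/packages');
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  const packages: Package[] = await res.json();
  // Fall back to an empty array so nothing reads properties of undefined
  return packages.map((p) => ({ ...p, packageImages: p.packageImages ?? [] }));
}

This only guards the symptom; the underlying question of why production drops the field (a stale build or an environment-specific difference in the backend) still needs answering.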

Existing Laravel application: adding laravel.yml for CodePipeline

I have a Laravel application in local development, added to CodeCommit. I have set up AWS Elastic Beanstalk to run this application,
and everything works perfectly fine,
but to connect to the database and other dependencies I don't know how to add the laravel.yml to the directory.
Please help me figure this out. Is there any documentation or tutorial on this?
Thanks in advance.

FastAPI prediction with a machine learning model from MLflow works locally but not online on Heroku

When I test my API locally, it works fine.
When I test it online, it still works fine.
When I make a prediction locally, it still works fine. It uses a model saved online on MLflow to make the prediction.
But when I push my API online and try the same prediction, it doesn't work anymore.
When checking the status_code, I get a 500 error:
And when trying to print the response, it says:
Any idea why?
Thank you
A 500 error means the server failed internally while handling the request (it is not a "does not exist" error, which would be a 404). You need to build a Docker image and push it online.

Critical Caching issue in Laravel Application (AWS Server)

I am facing a critical issue in my application, which is developed in Laravel and Angular. The issue is that I am getting the old email templates on the live site, while on the local server I get the latest updated ones. The deployment process is automatic: I just commit the code to Bitbucket and then a Bitbucket Pipeline pushes the code to the AWS server directly.
I have already run the cache commands for Laravel and restarted the jobs, but I am still getting the same issue. If anyone has experienced the same issue or knows how to resolve it, please guide me!
I think you can try one of the following ways to overcome the issue; I faced a similar issue and resolved it like this:
Try deleting the cached view files manually from storage/framework/views
Upload the code for the particular module directly to AWS, without going through the pipeline
Restart your server
This will surely resolve your issue!
Since you are using a Laravel and Angular application deployed on AWS,
I assume that Bitbucket is pushing the code and build commands are fired on every push.
There are a few things which can help you:
1. Try to rebuild the Angular side on every push, since the Angular build hashes all the files in the dist folder.
2. Try to delete the Laravel cached view files, which are stored in storage/framework/views.
3. Check that your server is pointing to the right project folder.
If point 1 or 2 works, you can automate the process by running the corresponding CLI command after every push;
points 1 and 2 are both achievable via CLI commands.

GraphiQL UI is broken for a Serverless/AWS Lambda setup

Here's a reference setup of AWS Lambda, Serverless, and GraphQL that I'm following:
https://github.com/serverless/serverless-graphql-apollo
I'm running yarn run start-server-lambda:offline to start the project offline. It starts without any problems, but upon navigating to /graphiql, I get this:
There's a configuration problem somewhere.
Basically, it's sending a request to /production/graphql when it should be sending it to /graphql.
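One common way to handle that mismatch, assuming the apollo-server-lambda v1 style handlers used in that kind of setup (graphqlLambda/graphiqlLambda), is to drop the stage prefix when running under serverless-offline, which sets the IS_OFFLINE environment variable. A sketch in TypeScript; the handler names and the /production stage path are assumptions:

// handler.ts - sketch assuming apollo-server-lambda v1 style handlers
import { graphqlLambda, graphiqlLambda } from 'apollo-server-lambda';
import { makeExecutableSchema } from 'graphql-tools';

const schema = makeExecutableSchema({
  typeDefs: 'type Query { hello: String }',
  resolvers: { Query: { hello: () => 'world' } },
});

export const graphqlHandler = graphqlLambda({ schema });

// serverless-offline sets IS_OFFLINE, so point GraphiQL at /graphql locally
// and keep the API Gateway stage prefix (/production/graphql) when deployed
export const graphiqlHandler = graphiqlLambda({
  endpointURL: process.env.IS_OFFLINE ? '/graphql' : '/production/graphql',
});

Whether the deployed URL actually needs the stage prefix depends on the API Gateway setup; a custom domain mapping usually doesn't include the stage in the path.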
