Using Vue's default scripts:
"scripts": {
"serve": "vue-cli-service serve",
"build": "vue-cli-service build"
},
I run "npm run build" it produces the production build in the "dist" directory however in the end it says:
Images and other types of assets were omitted.
I honestly don't understand what to do to include them. I don't want to make a specific folder for images and upload it so my web server can serve it; Vue should handle the files in its src/assets folder itself.
So far, the only solution I have found while googling says to include:
NODE_ENV = PRODUCTION
But it doesn't work either.
Any clues how to get this fixed? I cannot launch a website without including its logo.
I believe that message just means that they are omitted from the file listing displayed in the console during the build.
Nope. They are omitted from the build. My page is "broken" if I use npm run build and then serve it through Django.
Running npm run serve produces the right files in Django's static folders, and then serving the page from Django (with the npm server stopped) works.
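In case it helps: when a build "works" under npm run serve but breaks once Django serves it, the usual suspects are the asset URL prefix and the output location, both of which can be set in vue.config.js. A minimal sketch, assuming Django serves static files under /static/ (every path here is an assumption to adapt):
vue.config.js
// a sketch; adjust outputDir and publicPath to match your Django static setup
module.exports = {
  // write the production build where Django collects static files (assumed path)
  outputDir: '../static/vue',
  // prefix asset URLs with Django's static route (assumed prefix)
  publicPath: process.env.NODE_ENV === 'production' ? '/static/vue/' : '/',
};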
When I deploy a Laravel 9 project to production, Laravel replies:
Spatie\LaravelIgnition\Exceptions\ViewException: Vite manifest not found at: /var/www/.../public/build/manifest.json in file /var/www/.../vendor/laravel/framework/src/Illuminate/Foundation/Vite.php on line 139
It turns out the files in the /public/build folder are not committed to the git repository, and are thus missing on the production server.
Should I:
1. Install npm on the production server and run npm run build to generate the manifest files, or
2. Include the manifest files (e.g. manifest.json) of the /public/build folder in my repository and pull them on the production server?
On Heroku you can add buildpacks (scripts that are run when your app is deployed; they are used to install dependencies for your app and configure your environment), which will allow you to run npm. So on Heroku it's easy.
But if you happen to be on Fortrabbit, where you can't run npm or vite over SSH, the simplest way is to build your assets locally (npm run build or vite build) and push them to production.
Make sure you comment out the public/build entry in .gitignore before pushing to production. This should work for most servers, including Heroku, without adding buildpacks.
Should this fail, make sure your APP_ENV is set to production (APP_ENV=production), or anything else except local, as the Vite documentation states.
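Concretely, assuming the default Laravel layout, the steps might look like this:
# in .gitignore, comment out the line that ignores the built assets:
#   /public/build
npm run build                  # or: vite build
git add public/build
git commit -m "Include built Vite assets"
git push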
I've recently started using lerna to manage a monorepo, and in development it works fine.
Lerna creates symlinks between my various packages, and so tools like 'tsc --watch' or nodemon work fine for detecting changes in the other packages.
But I've run into a problem with creating docker images in this environment.
Let's say we have a project with this structure:
root
  packages
    common → artifact is a private npm package; depends on utilities and something-specific
    utilities → artifact is a public npm package
    something-specific → artifact is a public npm package
    frontend → artifact is a docker image; depends on common
    backend → artifact is a docker image; depends on common and utilities
In this scenario, in development, everything is fine: I'm running some kind of live-reload server, and the symlinks mean the dependencies resolve correctly.
Now let's say I want to create a docker image from backend.
I'll walk through some scenarios:
1. I ADD package.json in my Dockerfile, and then run npm install.
Doesn't work, as the common and utilities packages are not published.
2. I run my build command in backend, then ADD /build and /node_modules in the Dockerfile.
Doesn't work: my built backend has require('common') and require('utilities') calls, and these are in node_modules (symlinked), but Docker will just ignore the symlinked folders.
Workaround: using cp --dereference to 'unsymlink' the node modules works. See this AskUbuntu question.
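A minimal sketch of that workaround (the destination path is just an example):
# -r copies recursively; -L (--dereference) replaces each symlink with a real copy,
# so the symlinked workspace packages survive being copied into the Docker build context
cp -rL node_modules node_modules_deref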
3. As in scenario 1, but before I build my docker image, I publish the npm packages.
This works OK, but for someone who is checking out the code base and making a modification to common or utilities, it's not going to work, as they don't have privileges to publish the npm packages.
4. I configure the build command of backend to not treat common or utilities as externals, and common to not treat something-specific as an external.
I then first build something-specific, then common, then utilities, and then backend.
This way, when the build occurs, using this technique with webpack, the bundle will include all of the code from something-specific, common and utilities.
But this is cumbersome to manage.
It seems like quite a simple problem I'm trying to solve: I want to take the code that is currently working on my machine, pull it out, and put it into a docker container.
Remember, the key thing we want to achieve here is for someone to be able to check out the code base, modify any of the packages, and then build a docker image, all from their development environment.
Is there an obvious lerna technique that I'm missing here, or otherwise a devops frame of reference I can use to think about solving this problem?
We had a similar issue, and here is what we did: put the Dockerfile in the root of the monorepo (where lerna.json is located).
The reason: You really treat the whole repo as a single source of truth, and you want any modification to the whole repo to be reflected in the docker image, so it makes less sense to have separate Dockerfiles for individual packages.
Dockerfile
FROM node:12.13.0
SHELL ["/bin/bash", "-c"]
RUN mkdir -p /app
WORKDIR /app
# Install app dependencies
COPY package.json /app/package.json
COPY yarn.lock /app/yarn.lock
COPY packages/frontend/package.json /app/packages/frontend/package.json
COPY packages/backend/package.json /app/packages/backend/package.json
COPY lerna.json /app/lerna.json
RUN ["/bin/bash", "-c", "yarn install"]
# Bundle app source
COPY . /app
RUN ["/bin/bash", "-c", "yarn bootstrap"]
RUN ["/bin/bash", "-c", "yarn build"]
EXPOSE 3000
CMD [ "yarn", "start" ]
package.json
{
  "private": true,
  "workspaces": [
    "packages/*"
  ],
  "scripts": {
    "bootstrap": "lerna clean --yes && lerna bootstrap",
    "build": "lerna run build --stream",
    "start": "cross-env NODE_ENV=production node dist/backend/main"
  },
  "devDependencies": {
    "lerna": "^3.19.0",
    "cross-env": "^6.0.3"
  }
}
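Then build from the monorepo root, so the whole repo ends up in the Docker build context (the image tag is just an example):
docker build -t backend:latest .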
Late to the party, but my approach is using webpack in conjunction with webpack-node-externals and generate-package-json-webpack-plugin; see npmjs.com/package/generate-package-json-webpack-plugin.
With node externals, we can bundle all the dependencies from our other workspaces (libs) into the app (this makes a private npm registry obsolete). With the generate-package-json plugin, a new package.json is created containing all dependencies except our workspace dependencies. With this package.json next to the bundle, we can run npm or yarn install in the Dockerfile.
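A sketch of what that setup can look like; the @myorg scope, entry paths, and basePackage fields are assumptions to adapt:
webpack.config.js
const path = require('path');
const nodeExternals = require('webpack-node-externals');
const GeneratePackageJsonPlugin = require('generate-package-json-webpack-plugin');

// Fields for the generated package.json; the plugin fills in the real
// third-party dependencies it finds referenced by the bundle
const basePackage = {
  name: 'backend',
  version: '1.0.0',
  main: './main.js',
  scripts: { start: 'node ./main.js' },
};

module.exports = {
  target: 'node',
  mode: 'production',
  entry: './src/main.js',
  output: { path: path.resolve(__dirname, 'dist'), filename: 'main.js' },
  externals: [
    // keep npm packages external, but bundle our own workspace libs
    nodeExternals({ allowlist: [/^@myorg\//] }),
  ],
  plugins: [new GeneratePackageJsonPlugin(basePackage)],
};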
I'm trying to run a gulp watch when running my Java web-app with Gretty.
I'm using this plugin to run gulp from Gradle.
For the moment, I'm just able to run a single gulp build before the app runs, by doing this: appRun.dependsOn gulp_build
What I would like is for gulp watch to start as well when I run the app (the task gulp_default in Gradle in my case), so that SCSS files are automatically compiled when I save them, without having to restart the app.
I can't just do appRun.dependsOn gulp_default, because gulp_default never returns (it keeps watching), so the Gradle task never executes appRun.
Any idea how I can do this?
I found a way, but it uses npm to start the app instead of Gradle.
I used the concurrently package. I start the app by doing npm start instead of gradle appRun, and I added this in my package.json:
"scripts": {
"start": "concurrently \"gradle appRun\" \"gulp watch\""
}
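Assuming concurrently is added as a dev dependency, usage is just:
npm install --save-dev concurrently
npm start   # runs "gradle appRun" and "gulp watch" side by side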
I've been trying to convert one of our node.js apps into a Lambda function and deploy it, and I've been having some problems with the node_modules dependencies: it says that it can't find certain modules. I started by creating a package.json, running npm install locally, and then copying the node_modules folder up to Lambda.
For instance, I have a project that requires sequelize and convict, and I have been getting errors saying that it cannot find the moment module as a sub-dependency. I see that moment is included in the root of my node_modules, but it was not included in the subfolder under the sequelize module.
However, this project runs fine locally. What is different in Lambda, and what's the best practice for deploying a somewhat long list of node modules with it: just a copy of the node_modules folder? On some of the other, simpler projects I have, the small set of node_modules can be copied up with no problems.
{
"errorMessage": "Cannot find module 'moment'",
"errorType": "Error",
"stackTrace": [
"Function.Module._resolveFilename (module.js:338:15)",
"Function.Module._load (module.js:280:25)",
"Module.require (module.js:364:17)",
"require (module.js:380:17)",
"VERSION (/var/task/node_modules/sequelize/node_modules/moment-timezone/moment-timezone.js:14:28)",
"Object. (/var/task/node_modules/sequelize/node_modules/moment-timezone/moment-timezone.js:18:2)",
"Module._compile (module.js:456:26)",
"Object.Module._extensions..js (module.js:474:10)",
"Module.load (module.js:356:32)",
"Function.Module._load (module.js:312:12)"
]
}
I resolved this by uploading everything as a zip file containing all the data my Lambda function needs.
You can just create your project on your local machine and make all the changes you need; the file you are going to zip should have that same structure. Note also that there is an option to load your code from a zip file.
This sounds to me like an issue caused by different versions of npm. Are you running the same version of node.js locally as is used by Lambda (i.e. v0.10.36)?
Depending on the version of npm you're using to install the modules locally, the node_modules directory's contents are laid out slightly differently (mainly in order to de-duplicate things), and that may be why your dependencies can't find their own dependencies in Lambda.
After a bit of digging, it sounds like a clean install (i.e. rm your node_modules directory and run npm install again) might clean things up for you. The reason is that npm doesn't seem to install sub-dependencies if they're already present at the top level (i.e. you installed moment before sequelize, etc.).
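A sketch of that clean rebuild, plus a Lambda-style zip (the handler file name is an assumption):
rm -rf node_modules
npm install --production
# zip from inside the project folder so node_modules sits at the top level of the archive
zip -r function.zip index.js node_modules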
I have this app that deploys to heroku:
https://github.com/justin808/react-webpack-rails-tutorial
http://react-webpack-rails-tutorial.herokuapp.com/
The technique is described here: http://www.railsonmaui.com/blog/2014/10/02/integrating-webpack-and-the-es6-transpiler-into-an-existing-rails-project/
Currently, package.json is at the root level of the project.
How do I move /package.json and /node_modules to be inside the /webpack directory?
I.e., how do I tell the node buildpack where to look for package.json?
The fix is to use this in package.json:
"scripts": {
"postinstall": "cd client && npm install",
You can see the full details here: https://github.com/shakacode/react-webpack-rails-tutorial/blob/master/package.json#L10