I am working on a PHP project using Laravel, and I have several utility services in my docker-compose.yml:
  composer:
    image: composer:latest
    container_name: composer
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    depends_on:
      - php
    networks:
      - laravel

  npm:
    image: node:13.7
    container_name: npm
    volumes:
      - ./src:/var/www/html
    working_dir: /var/www/html
    entrypoint: ["npm"]
With this I have to prefix each command with docker-compose run --rm, such as:
docker-compose run --rm npm update
Is there a way to simply have an environment that sets some aliases (npm, grunt, composer, mysql...) when I am in that project in VS Code?
You can add a task in VS Code.
Lots of tools exist to automate tasks like linting, building, packaging, testing, or deploying software systems. Examples include the TypeScript Compiler, linters like ESLint and TSLint, as well as build systems like Make, Ant, Gulp, Jake, Rake, and MSBuild.
VS Code tasks
The tasks.json file should be placed inside the .vscode folder:
├── docker-compose.yml
└── .vscode
    └── tasks.json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "npm",
      "type": "shell",
      "command": "docker-compose run ${input:npm}",
      "problemMatcher": [],
      "group": {
        "kind": "build",
        "isDefault": true
      }
    },
    {
      "label": "composer",
      "type": "shell",
      "command": "docker-compose run ${input:compose}",
      "problemMatcher": [],
      "group": {
        "kind": "build",
        "isDefault": true
      }
    }
  ],
  "inputs": [
    {
      "id": "npm",
      "description": "npm argument:",
      "default": "npm",
      "type": "promptString"
    },
    {
      "id": "compose",
      "description": "compose argument:",
      "default": "composer",
      "type": "promptString"
    }
  ]
}
Now all is set; all you need to do is press Ctrl+Shift+B, and both tasks will be listed. Select one and it executes.
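For example, typing npm update at the prompt makes the "npm" task run the first command below. As a hedged suggestion (not part of the original task definition), you may also want the --rm flag from the question so the one-off containers are removed afterwards:
docker-compose run npm update
docker-compose run --rm npm update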
Related
I have the following configuration in a root workspace. The strange thing is that it works for backend, but not for frontend, no matter how I rename it:
{
  "private": true,
  "name": "root",
  "workspaces": [
    "packages/frontend",
    "packages/backend"
  ],
  "scripts": {
    "client": "yarn workspace frontend start",
    "client-test": "yarn workspace frontend test",
    "server": "yarn workspace backend start",
    "start": "conc --kill-others-on-fail \"yarn client\" \"yarn server\""
  },
  "devDependencies": {
    "concurrently": "^7.6.0"
  }
}
And it always says:
$ yarn workspace frontend test
error Unknown workspace "frontend".
info Visit https://yarnpkg.com/en/docs/cli/workspace for documentation about this command.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I tried to start it from all directories; nothing works.
The thing is that it does not matter what the folders inside "packages" are called; the important thing is that the package.json of each workspace is named correspondingly:
{
  "name": "frontend",
  "version": "1.0.0",
  "private": true,
  ...
}
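For completeness, the backend workspace needs the matching name in its own package.json as well; a minimal sketch (the version value is illustrative):
{
  "name": "backend",
  "version": "1.0.0",
  "private": true,
  ...
}
After that, running yarn workspaces info from the root should list both frontend and backend.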
I need to set up my development environment multiple times a day, and I want to automate the process into a one-click solution.
The goal is to have a main script which opens two VS Code instances, one for the frontend and one for the backend project.
The steps should be the following:
- open VS Code
- open Backend Project (located at e.g.: C:/myCompany/backend)
- run git pull
- open terminal
- run docker-compose up
- open split terminal
- run npm run start:dev
- open another VS Code instance for the frontend project
- open terminal
- run git pull
- open terminal
- run npm run start:dev
I am running Windows and I can create very basic ps1 files. I know you can use the terminal and run the 'code' command to start an instance of VS Code, but after that I can't find information on what to do next.
I know there are some kinds of scripts you can run in VS Code too, but I cannot put it all together.
There's a certain level of automation that can be achieved with VSCode's tooling itself.
Let's start with the backend part of the project. Inside C:/myCompany/backend create a folder .vscode and inside of it place two files: settings.json and tasks.json. They should be as follows:
// C:/myCompany/backend/.vscode/tasks.json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "git pull",
      "type": "shell",
      "command": "git pull",
      "problemMatcher": [],
      "runOptions": {
        "runOn": "folderOpen"
      }
    },
    {
      "label": "docker-compose up",
      "command": "docker-compose up",
      "type": "shell",
      "presentation": {
        "reveal": "always",
        "panel": "dedicated",
        "group": "dev"
      },
      "group": "build",
      "runOptions": {
        "runOn": "folderOpen"
      }
    },
    {
      "label": "start:dev",
      "type": "shell",
      "command": "npm run start:dev",
      "presentation": {
        "panel": "dedicated",
        "group": "dev"
      },
      "runOptions": {
        "runOn": "folderOpen"
      }
    }
  ]
}
// C:/myCompany/backend/.vscode/settings.json
{
  "task.allowAutomaticTasks": "on"
}
Similarly, under C:/myCompany/frontend create the same .vscode folder and the same two files under it; settings.json would stay the same, but tasks.json would be as follows:
// C:/myCompany/frontend/.vscode/tasks.json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "git pull",
      "type": "shell",
      "command": "git pull",
      "problemMatcher": [],
      "runOptions": {
        "runOn": "folderOpen"
      }
    },
    {
      "label": "start:dev",
      "type": "shell",
      "command": "npm run start:dev",
      "presentation": {
        "panel": "dedicated",
        "group": "dev"
      },
      "runOptions": {
        "runOn": "folderOpen"
      }
    }
  ]
}
To finish things up, the powershell script would be as simple as this:
code C:\myCompany\backend
code C:\myCompany\frontend
In previous VSCode versions it was necessary to invoke workbench.action.tasks.manageAutomaticRunning and then to choose Allow Automatic Tasks in Folder once for each folder, but that doesn't seem to be the case any more (the setting in settings.json seems to suffice).
For further customisation (e.g. task execution order and dependency), you can look at the documentation: https://code.visualstudio.com/Docs/editor/tasks. You can also experiment with running git pull right from the powershell script instead of the VSCode tasks.
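For example, if you wanted the start:dev task to wait until git pull has finished, a dependsOn entry along these lines should work; this is a sketch only, and the label must match the one defined in the tasks.json above:
{
  "label": "start:dev",
  "type": "shell",
  "command": "npm run start:dev",
  "dependsOn": ["git pull"],
  "runOptions": {
    "runOn": "folderOpen"
  }
}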
Hi, I have the following setup:
The Cargo Rust project: /Users/daniel1302/www/aws-alarm/
The workspace dir: /Users/daniel1302/www
I have the following debugging configuration:
{
  "type": "lldb",
  "request": "launch",
  "name": "rust/aws-alarm",
  "cwd": "/Users/daniel1302/www/aws-alarm/",
  "cargo": {
    "args": [
      "build",
      "--lib"
    ]
  },
  "program": "${cargo:program}",
  "args": [],
  "env": {
    "AWS_PROFILE": "sf_MFA",
    "AWS_REGION": "us-east-1"
  }
}
When I start debugging the project I can see:
Running `cargo build --lib --message-format=json`...
error: could not find `Cargo.toml` in `/Users/daniel1302/www/releases` or any parent directory
The issue is that the cwd directive does not change the cargo project directory.
Do you know how I can change the cargo project directory?
I have found a workaround by setting the cargo arg --manifest-path:
"configurations": [
{
...
"cargo": {
"args": [
"build",
"--bin=importer",
"--package=cprices",
"--manifest-path=${workspaceFolder}/cprices/Cargo.toml"
],
...
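Adapted to the aws-alarm project from the question above, the full configuration might look roughly like this (a sketch; the paths, the --lib target, and the env values are taken from the question):
{
  "type": "lldb",
  "request": "launch",
  "name": "rust/aws-alarm",
  "cargo": {
    "args": [
      "build",
      "--lib",
      "--manifest-path=/Users/daniel1302/www/aws-alarm/Cargo.toml"
    ]
  },
  "program": "${cargo:program}",
  "args": [],
  "env": {
    "AWS_PROFILE": "sf_MFA",
    "AWS_REGION": "us-east-1"
  }
}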
I was running an Electron project and everything worked just fine. But now, when I run any of the scripts in my package.json (including npm start), it just skips a line and doesn't do anything.
My package.json:
{
  "name": "interclip-desktop",
  "version": "0.0.7",
  "description": "Interclip for desktop",
  "repository": "https://github.com/aperta-principium/Interclip-desktop",
  "main": "main.js",
  "scripts": {
    "start": "electron .",
    "package-mac": "electron-packager . --overwrite --asar=true --platform=darwin --arch=x64 --icon=assets/icons/mac/icon.icns --prune=true --out=release-builds",
    "package-win": "electron-packager . Interclip --overwrite --platform=win32 --arch=ia32 --icon=assets/icons/win/icon.ico --prune=true --out=release-builds --version-string.CompanyName=CE --version-string.FileDescription=CE --version-string.ProductName=\"Interclip\"",
    "package-linux": "electron-packager . Interclip --overwrite --asar=true --platform=linux --arch=x64 --icon=assets/icons/png/icon.png --prune=true --out=release-builds",
    "win-install": "node installers/windows/createinstaller.js",
    "postinstall": "electron-builder install-app-deps",
    "build": "electron-builder --linux",
    "release": "electron-builder --linux --publish always"
  },
  "keywords": [
    "Desktop",
    "Interclip"
  ],
  "author": "Filip Troníček",
  "license": "MIT",
  "devDependencies": {
    "electron": "^7.1.2",
    "electron-builder": "^22.1.0",
    "electron-installer-dmg": "^3.0.0",
    "electron-packager": "^14.1.1",
    "electron-reload": "^1.5.0",
    "electron-winstaller": "^4.0.0"
  },
  "dependencies": {
    "axios": "^0.19.0",
    "mousetrap": "^1.6.3"
  },
  "build": {
    "appId": "com.aperta-principium.interclip",
    "productName": "Interclip",
    "mac": {
      "category": "public.app-category.utilities"
    },
    "dmg": {
      "icon": false
    },
    "linux": {
      "target": [
        "AppImage"
      ],
      "category": "Utility"
    }
  }
}
I tried updating npm, which didn't work. I also tried in different projects, and it doesn't work there either.
Thanks in advance.
npm has an ignore-scripts configuration key. Its expected value is a Boolean, and it is set to false by default.
Perhaps it has inadvertently been set to true.
To get/set the ignore-scripts configuration you can use the npm config command:
Check its current setting by running:
npm config get ignore-scripts
If the aforementioned command returns true then reset it to false by running:
npm config set ignore-scripts false
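Note that ignore-scripts can also be set per project in an .npmrc file at the project root (or in your user-level .npmrc), so if the global value is already false it may be worth checking those files for a line like this and removing it:
ignore-scripts=true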
If you are using an integrated terminal (such as the VS Code integrated terminal), try running your npm run dev command from your PowerShell (or cmd) terminal. This error arises when your integrated terminal does not recognize your command (especially if you created your app with a Git Bash terminal).
Try this, and I hope it helps someone, because it always works for me. Cheers!!!
When I share a folder between my host and my containers, my files edited in Sublime are not syncing inside the containers.
I'm using Docker version 1.13.0, build 49bf474, and I have tried many fixes suggested in GitHub issues, but none of them worked for me.
I'm sharing my C:\ drive with the Docker host, configuring my compose file like this:
uwsgi:
  build: .
  links:
    - postgres
  command: ./uwsgi.sh
  env_file: .env
  volumes:
    - /static
    - /data/media:/media
    - ./api:/app
My volume ./api:/app works, but when I change something, it is not reflected in the container, so I can't use it for development.
Here is the docker inspect output for this container (Mounts/Volumes):
"Mounts": [
{
"Type": "bind",
"Source": "/C/Users/tif/projetos/my/jl.api/api",
"Destination": "/app",
"Mode": "rw",
"RW": true,
"Propagation": ""
},
{
"Type": "bind",
"Source": "/data/media",
"Destination": "/media",
"Mode": "rw",
"RW": true,
"Propagation": ""
},
{
"Type": "volume",
"Name": "b931d6d30c2b8e1bcdc2a20d5e6d2c27dd515c5041d2ea64ca01b5dc08047879",
"Source": "/var/lib/docker/volumes/b931d6d30c2b8e1bcdc2a20d5e6d2c27dd515c5041d2ea64ca01b5dc08047879/_data",
"Destination": "/static",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
}
],
"Volumes": {
"/app": {},
"/media": {},
"/static": {}
},
These are the things I have already tried:
atomic_save: false (Sublime)
nginx.conf with sendfile off;
Has anyone experienced this?
After some research, I realized that I was using uWSGI for the development environment and I couldn't get my app to reload without py-autoreload.
All I had to do was start uWSGI with py-autoreload set to 2, and my app started to reload.
I'm starting this command in Docker now:
/usr/local/bin/uwsgi --socket :5000 --wsgi-file ......... --py-autoreload 2
Reading this could be useful if you are experiencing this issue: http://chase-seibert.github.io/blog/2014/03/30/uwsgi-python-reload.html
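If you drive uWSGI from an ini file instead of the command line, the equivalent setting would look roughly like this (a sketch; the socket value is copied from the command above):
[uwsgi]
socket = :5000
py-autoreload = 2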