Qbs: How to run a simple terminal command? - qt-creator

I'm used to working with Makefiles but my current project uses .qbs files. How do I run a simple terminal command through qbs without creating or requiring files? Similar to a phony rule in make.
The following works and shows "awesome" in my terminal.
import qbs 1.0

Project {
    name: "cli"
    Product {
        name: "helloworld"
        type: "application"
        files: "TEST.c"
        Depends { name: "cpp" }
    }
    Product {
        type: ["custom-image"]
        Depends { name: "helloworld" }
        Rule {
            inputsFromDependencies: ["application"]
            Artifact {
                fileTags: ["custom-image"]
            }
            prepare: {
                var cmd = new Command("echo", "awesome")
                return cmd
            }
        }
    }
}
However, I have to touch my dummy TEST.c file before each run. Without the helloworld dependency, the Rule does not run.
Any ideas? Thank you very much!

It's buried in the documentation in a very non-obvious place, and further obscured by Command (which is not the correct way, lol). I've had your problem too.
What you need is this:
http://doc.qt.io/qbs/jsextension-process.html
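For example, the prepare script above could return a JavaScriptCommand that drives the Process extension instead of spawning an external Command. This is a rough, untested sketch; depending on your qbs version you may also need an import qbs.Process at the top of the file:
prepare: {
    var cmd = new JavaScriptCommand();
    cmd.description = "running echo via the Process extension";
    cmd.sourceCode = function () {
        var proc = new Process();
        try {
            // exec(program, argumentList, throwOnError)
            proc.exec("echo", ["awesome"], true);
            console.info(proc.readStdOut());
        } finally {
            proc.close();
        }
    };
    return cmd;
}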

I'm not sure what your end goal is, but you could use Transformer{} instead of a Rule{}. The biggest difference between a Rule{} and a Transformer{} is that you don't need any inputs for the Transformer{} to run.
Also see the Transformer.alwaysRun property.
https://doc.qt.io/qbs/transformer-item.html
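Roughly like this (an untested sketch; the product name and artifact path are placeholders, the Artifact property is filePath in recent qbs and fileName in older releases, and note that later qbs versions deprecate Transformer in favour of Rules):
Product {
    name: "say-awesome"
    type: ["custom-image"]
    Transformer {
        alwaysRun: true
        Artifact {
            filePath: "dummy.txt" // placeholder output; echo never actually writes it
            fileTags: ["custom-image"]
        }
        prepare: {
            var cmd = new Command("echo", ["awesome"]);
            cmd.description = "echoing awesome";
            return cmd;
        }
    }
}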

Related

Packaging a Go binary and an Electron app together [duplicate]

Is there a good solution for including third-party precompiled binaries like ImageMagick in an Electron app? There are Node.js modules, but they are all wrappers or native bindings to system-wide installed libraries. I wonder if it's possible to bundle precompiled binaries within the distribution.
Here's another method, tested on Mac and Windows so far. It requires the 'app-root-dir' package, but doesn't require adding anything manually to the node_modules dir.
Put your files under resources/$os/, where $os is either "mac", "linux", or "win". The build process will copy files from those directories as per build target OS.
Put extraFiles option in your build configs as follows:
package.json
"build": {
"extraFiles": [
{
"from": "resources/${os}",
"to": "Resources/bin",
"filter": ["**/*"]
}
],
Use something like this to determine the current platform.
get-platform.js
import { platform } from 'os';
export default () => {
  switch (platform()) {
    case 'aix':
    case 'freebsd':
    case 'linux':
    case 'openbsd':
    case 'android':
      return 'linux';
    case 'darwin':
    case 'sunos':
      return 'mac';
    case 'win32':
      return 'win';
  }
};
Call the executable from your app depending on env and OS. Here I am assuming built versions are in production mode and source versions in other modes, but you can create your own calling logic.
import { join as joinPath, dirname } from 'path';
import { exec } from 'child_process';
import appRootDir from 'app-root-dir';
import env from './env';
import getPlatform from './get-platform';
const execPath = (env.name === 'production') ?
  joinPath(dirname(appRootDir.get()), 'bin') :
  joinPath(appRootDir.get(), 'resources', getPlatform());

const cmd = `${joinPath(execPath, 'my-executable')}`;

exec(cmd, (err, stdout, stderr) => {
  // do things
});
I think I was using electron-builder as base, the env file generation comes with it. Basically it's just a JSON config file.
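For reference, the env file is nothing special; in my setup it boils down to a small JSON object along these lines (the only field the snippet above reads is env.name, everything else depends on your own build setup):
env.json
{
  "name": "production"
}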
See UPDATE below (this method isn't ideal now).
I did find a solution to this, but I have no idea if this is considered best practice. I couldn't find any good documentation for including 3rd party precompiled binaries, so I just fiddled with it until it finally worked with my ffmpeg binary. Here's what I did (starting with the electron quick start, node.js v6):
Mac OS X method
From the app directory I ran the following commands in Terminal to include the ffmpeg binary as a module:
mkdir node_modules/ffmpeg
cp /usr/local/bin/ffmpeg node_modules/ffmpeg/
cd node_modules/.bin
ln -s ../ffmpeg/ffmpeg ffmpeg
(replace /usr/local/bin/ffmpeg with your current binary path, download it from here) Placing the link allowed electron-packager to include the binary I saved to node_modules/ffmpeg/.
Then to get the bundled app path (so that I could use an absolute path for my binary... relative paths didn't seem to work no matter what I did) I installed the npm package app-root-dir by running the following command:
npm i -S app-root-dir
Now that I had the root app directory, I just appended the subfolder for my binary and spawned from there. This is the code that I placed in renderer.js:
var appRootDir = require('app-root-dir').get();
var ffmpegpath = appRootDir + '/node_modules/ffmpeg/ffmpeg';
console.log(ffmpegpath);

const spawn = require('child_process').spawn,
      ffmpeg = spawn(ffmpegpath, ['-i', clips_input[0]]); // add whatever switches you need here

ffmpeg.stdout.on('data', data => {
  console.log(`stdout: ${data}`);
});
ffmpeg.stderr.on('data', data => {
  console.log(`stderr: ${data}`);
});
Windows Method
Open your Electron base folder (electron-quick-start is the default name), then go into the node_modules folder. Create a folder there called ffmpeg, and copy your static binary into this directory. Note: it must be the static version of your binary; for ffmpeg I grabbed the latest Windows build here.
To get the bundled app path (so that I could use an absolute path for my binary... relative paths didn't seem to work no matter what I did) I installed the npm package app-root-dir by running the following command from a command prompt in my app directory:
npm i -S app-root-dir
Within your node_modules folder, navigate to the .bin subfolder. You need to create a couple of text files here to tell node to include the binary exe file you just copied. Use your favorite text editor and create two files, one named ffmpeg with the following contents:
#!/bin/sh
basedir=$(dirname "$(echo "$0" | sed -e 's,\\,/,g')")
case `uname` in
    *CYGWIN*) basedir=`cygpath -w "$basedir"`;;
esac
if [ -x "$basedir/node" ]; then
  "$basedir/node" "$basedir/../ffmpeg/ffmpeg" "$@"
  ret=$?
else
  node "$basedir/../ffmpeg/ffmpeg" "$@"
  ret=$?
fi
exit $ret
And the second text file, named ffmpeg.cmd:
@IF EXIST "%~dp0\node.exe" (
  "%~dp0\node.exe" "%~dp0\..\ffmpeg\ffmpeg" %*
) ELSE (
  @SETLOCAL
  @SET PATHEXT=%PATHEXT:;.JS;=;%
  node "%~dp0\..\ffmpeg\ffmpeg" %*
)
Next you can run ffmpeg in your Windows Electron distribution (in renderer.js) as follows (I'm using the app-root-dir node module as well). Note the quotes added to the binary path: if your app is installed to a directory with spaces (e.g. C:\Program Files\YourApp) it won't work without them.
var appRootDir = require('app-root-dir').get();
var ffmpegpath = appRootDir + '\\node_modules\\ffmpeg\\ffmpeg';

const spawn = require('child_process').spawn;
var ffmpeg = spawn('cmd.exe', ['/c', '"' + ffmpegpath + '"', '-i', clips_input[0]]); // add whatever switches you need here, test on the command line first

ffmpeg.stdout.on('data', data => {
  console.log(`stdout: ${data}`);
});
ffmpeg.stderr.on('data', data => {
  console.log(`stderr: ${data}`);
});
UPDATE: Unified Simple Method
Well, as time has rolled on and Node has updated, this method is no longer the easiest way to include precompiled binaries. It still works, but when npm install is run the binary folders under node_modules will be deleted and have to be replaced again. The method below works for Node v12.
This new method obviates the need to symlink, and works similarly for Mac and Windows. Relative paths seem to work now.
You will still need appRootDir: npm i -S app-root-dir
Create a folder under your app's root directory named bin and place your precompiled static binaries there; I'm using ffmpeg as an example.
Use the following code in your renderer script:
const appRootDir = require('app-root-dir').get();
const ffmpegpath = appRootDir + '/bin/ffmpeg';
const spawn = require('child_process').spawn;
const child = spawn(ffmpegpath, ['-i', inputfile, 'out.mp4']); // add whatever switches you need here, test on the command line first

child.stdout.on('data', data => {
  console.log(`stdout: ${data}`);
});
child.stderr.on('data', data => {
  console.log(`stderr: ${data}`);
});
The above answers helped me figure out how it could be done, but there is a much more efficient way to distribute binary files.
Taking cues from tsuriga's answer, here is my code:
Note: replace or add OS path as required.
Update - 4 Dec 2020
This answer has been updated. You can find the previous code at the bottom of this answer.
Download the needed packages
yarn add electron-root-path electron-is-packaged
# or
npm i electron-root-path electron-is-packaged
Create a directory ./resources/mac/bin
Place your binaries inside this folder.
Create a file ./app/binaries.js and paste the following code:
import path from 'path';
import { rootPath as root } from 'electron-root-path';
import { isPackaged } from 'electron-is-packaged';
import getPlatform from './get-platform';

const IS_PROD = process.env.NODE_ENV === 'production';

const binariesPath =
  IS_PROD && isPackaged // the path to a bundled electron app.
    ? path.join(root, './Contents', './Resources', './bin')
    : path.join(root, './build', getPlatform(), './bin');

export const execPath = path.resolve(
  path.join(binariesPath, './exec-file-name')
);
Create a file ./app/get-platform.js and paste the following code:
'use strict';
import { platform } from 'os';
export default () => {
  switch (platform()) {
    case 'aix':
    case 'freebsd':
    case 'linux':
    case 'openbsd':
    case 'android':
      return 'linux';
    case 'darwin':
    case 'sunos':
      return 'mac';
    case 'win32':
      return 'win';
  }
};
Add these lines inside the ./package.json file:
"build": {
....
"extraFiles": [
{
"from": "resources/mac/bin",
"to": "Resources/bin",
"filter": [
"**/*"
]
}
],
....
},
Import the binary as:
import { execPath } from './binaries';
import { spawn } from 'child_process';

// your program code:
var command = spawn(execPath, arg, {});
Why is this better?
The above answers require an additional package called app-root-dir.
tsuriga's answer doesn't handle the (env=production) build or the pre-packaged versions properly; it only takes care of development and post-packaged versions.
Previous answer
Avoid using electron.remote as it is being deprecated.
app.getAppPath might throw errors in the main process.
./app/binaries.js
'use strict';
import path from 'path';
import { remote } from 'electron';
import getPlatform from './get-platform';
const IS_PROD = process.env.NODE_ENV === 'production';
const root = process.cwd();
const { isPackaged, getAppPath } = remote.app;
const binariesPath =
  IS_PROD && isPackaged
    ? path.join(path.dirname(getAppPath()), '..', './Resources', './bin')
    : path.join(root, './resources', getPlatform(), './bin');

export const execPath = path.resolve(path.join(binariesPath, './exec-file-name'));
tl;dr:
Yes, you can! But it requires you to write your own self-contained addon which does not make any assumptions about system libraries. Moreover, in some cases you have to make sure that your addon is compiled for the desired OS.
Let's break this question into several parts:
- Addons (Native modules):
Addons are dynamically linked shared objects.
In other words, you can just write your own addon, with no dependency on system-wide libraries (e.g. by statically linking the required modules), containing all the code you need.
You have to consider that such an approach is OS-specific, meaning that you need to compile your addon for each OS that you want to support (depending on what other libraries you use).
- Native modules for electron:
The native Node modules are supported by Electron, but since Electron is using a different V8 version from official Node, you have to manually specify the location of Electron's headers when building native modules
This means that a native module which has been built against node headers must be rebuilt to be used inside electron. You can find how in electron docs.
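For illustration, rebuilding a native module against Electron's headers looks roughly like this; the version number is only a placeholder and the exact flags can vary by Electron release, so check the docs for your version. The electron-rebuild package automates the same thing:
# point npm's build configuration at Electron instead of Node...
export npm_config_target=1.2.3        # placeholder: your Electron version
export npm_config_runtime=electron
export npm_config_disturl=https://electronjs.org/headers
npm install

# ...or let the electron-rebuild helper do it
npm install --save-dev electron-rebuild
./node_modules/.bin/electron-rebuild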
- Bundle modules with electron app:
I suppose you want to have your app as a stand-alone executable without requiring users to install electron on their machines. If so, I can suggest using electron-packager.
Following Ganesh's answer, which was really a great help, in my case what worked in binaries.js (for a Mac build; not tested for Windows or Linux) was:
"use strict";
import path from "path";
import { app } from "electron";
const IS_PROD = process.env.NODE_ENV === "production";
const root = process.cwd();
const { isPackaged } = app;
const binariesPath =
  IS_PROD && isPackaged
    ? path.join(process.resourcesPath, "./bin")
    : path.join(root, "./external");

export const execPath = path.join(binariesPath, "./my_exec_name");
This assumes that my_exec_name was in the folder ./external/bin and copied into the app package under ./Resources/bin. I did not use the get-platform.js script (not needed in my case). app.getAppPath() was generating a crash when the app was packaged.
Hope it can help.
Heavily based on Ganesh's answer, but simplified somewhat. Also I am using the Vue CLI Electron Builder Plugin so the config has to go in a slightly different place.
Create a resources directory. Place all your files in there.
Add this to vue.config.js:
module.exports = {
  pluginOptions: {
    electronBuilder: {
      builderOptions: {
        ...
        "extraResources": [
          {
            "from": "resources",
            "to": ".",
            "filter": "**/*"
          }
        ],
        ...
      }
    }
  }
}
Create a file called resources.ts in your src folder, with these contents:
import path from 'path';
import { remote } from 'electron';
// Get the path that `extraResources` are sent to. This is `<app>/Resources`
// on macOS. remote.app.getAppPath() returns `<app>/Resources/app.asar` so
// we just get the parent directory. If the app is not packaged we just use
// `<current working directory>/resources`.
export const resourcesPath = remote.app.isPackaged ?
path.dirname(remote.app.getAppPath()) :
path.resolve('resources');
Note I haven't tested this on Windows/Linux but it should work assuming app.asar is in the resources directory on those platforms (I assume so).
Use it like this:
import { resourcesPath } from '../resources'; // Path to resources.ts
...
loadFromFile(resourcesPath + '/your_file');

Specify the webpack "mainFields" on a case by case basis

Webpack has a resolve.mainFields configuration: https://webpack.js.org/configuration/resolve/#resolvemainfields
This allows control over what package.json field should be used as an entrypoint.
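(For context, a published package typically points those fields at different builds of the same code; the file names below are only illustrative.)
node_modules/bar/package.json:
{
  "name": "bar",
  "main": "dist/index.cjs.js",
  "module": "dist/index.esm.js"
}
Webpack walks mainFields in order and uses the first field it finds in each package, so ['module', 'main'] prefers the ES-module build (tree-shakeable) and falls back to main.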
I have an app that pulls in dozens of different 3rd party packages. The use case is that I want to specify what field to use depending on the name of the package. Example:
For package foo use the main field in node_modules/foo/package.json
For package bar use the module field in node_modules/bar/package.json
Certain packages I'm relying on are not bundled in a correct manner: the code that the module field points to does not follow these rules: https://github.com/dherman/defense-of-dot-js/blob/master/proposal.md. This causes the app to break if I wholesale change the webpack configuration to:
resolve: {
  mainFields: ['module']
}
mainFields currently has to be set to main to get the app to work. This causes it to always pull in the CommonJS version of every dependency and miss out on tree shaking. I'm hoping to do something like this:
resolve: {
  foo: {
    mainFields: ['main']
  },
  bar: {
    mainFields: ['module'],
  }
}
Package foo gets bundled into my app via its main field and package bar gets bundled in via its module field. I realize the benefits of tree shaking with the bar package, and I don't break the app with the foo package (which has a module field that is not proper module syntax).
One way to achieve this would be, instead of using resolve.mainFields, to make use of the resolve.plugins option and write your own custom resolver (see https://stackoverflow.com/a/29859165/6455628), because with a custom resolver you can programmatically resolve a different path for different modules.
I am copy-pasting Ricardo Stuven's answer here:
Yes, it's possible. To avoid ambiguity and for easier implementation,
we'll use a prefix hash symbol as marker of your convention:
require("#./components/SettingsPanel");
Then add this to your configuration file (of course, you can refactor
it later):
var webpack = require('webpack');
var path = require('path');
var MyConventionResolver = {
  apply: function(resolver) {
    resolver.plugin('module', function(request, callback) {
      if (request.request[0] === '#') {
        var req = request.request.substr(1);
        var obj = {
          path: request.path,
          request: req + '/' + path.basename(req) + '.js',
          query: request.query,
          directory: request.directory
        };
        this.doResolve(['file'], obj, callback);
      }
      else {
        callback();
      }
    });
  }
};

module.exports = {
  resolve: {
    plugins: [
      MyConventionResolver
    ]
  }
  // ...
};
resolve.mainFields did not work in my case, but resolve.aliasFields did.
More details in https://stackoverflow.com/a/71555568/7534433
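A minimal sketch of what that looks like (the 'browser' field is the usual target of aliasFields; substitute whatever field your packages actually use):
// webpack.config.js
module.exports = {
  resolve: {
    // honour e.g. the "browser" field in a dependency's package.json
    aliasFields: ['browser'],
  },
};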

How to define and call custom methods in build.gradle?

As part of my project, I need to read files from a directory and perform some operations on all of them in the build script. For each file, the operation is the same (reading some SQL queries and executing them). I think it's a repetitive task and better written inside a method. Since I'm new to Gradle, I don't know how this should be done. Please help.
One approach is given below:
ext.myMethod = { param1, param2 ->
// Method body here
}
Note that this is created in the project scope, i.e. it is globally available for the project and can be invoked anywhere in the build script as myMethod(p1, p2), which is equivalent to project.myMethod(p1, p2).
The method can be defined under different scopes as well, such as within tasks:
task myTask {
    ext.myMethod = { param1, param2 ->
        // Method body here
    }
    doLast {
        myMethod(p1, p2) // This will resolve 'myMethod' defined in task
    }
}
If you have defined any methods in any other *.gradle file, ext.method() makes them accessible project-wide. For example, here is a
versioning.gradle
// ext makes method callable project wide
ext.getVersionName = { ->
    try {
        def branchout = new ByteArrayOutputStream()
        exec {
            commandLine 'git', 'rev-parse', '--abbrev-ref', 'HEAD'
            standardOutput = branchout
        }
        def branch = branchout.toString().trim()
        if (branch.equals("master")) {
            def stdout = new ByteArrayOutputStream()
            exec {
                commandLine 'git', 'describe', '--tags'
                standardOutput = stdout
            }
            return stdout.toString().trim()
        } else {
            return branch;
        }
    }
    catch (ignored) {
        return null;
    }
}
build.gradle
task showVersion << {
    // Use inherited method
    println 'VersionName: ' + getVersionName()
}
Without the ext.method() format, the method will only be available within the *.gradle file in which it is declared. The same applies to properties.
You can define methods in the following way:
// Define an extra property
ext.srcDirName = 'src/java'
// Define a method
def getSrcDir(project) {
return project.file(srcDirName)
}
You can find more details in the Gradle documentation, Chapter 62: Organizing Build Logic.
An example with a root object containing methods.
hg.gradle file:
ext.hg = [
    cloneOrPull: { source, dest, branch ->
        if (!dest.isDirectory())
            hg.clone(source, dest, branch)
        else
            hg.pull(dest)
        hg.update(dest, branch)
    },
    clone: { source, dest, branch ->
        dest.mkdirs()
        exec {
            commandLine 'hg', 'clone', '--noupdate', source, dest.absolutePath
        }
    },
    pull: { dest ->
        exec {
            workingDir dest.absolutePath
            commandLine 'hg', 'pull'
        }
    },
]
build.gradle file
apply from: 'hg.gradle'
hg.clone('path/to/repo')
Somehow, maybe because it's been five years since the OP, none of the
ext.someMethod = { foo ->
methodBody
}
approaches are working for me. Instead, a simple function definition seems to be getting the job done in my gradle file:
def retrieveEnvvar(String envvar_name) {
    if ( System.getenv(envvar_name) == "" ) {
        throw new InvalidUserDataException("\n\n\nPlease specify environment variable ${envvar_name}\n")
    } else {
        return System.getenv(envvar_name)
    }
}
And I call it elsewhere in my script with no prefix, i.e. retrieveEnvvar("APP_PASSWORD").
This is 2020 so I'm using Gradle 6.1.1.
@ether_joe the top-voted answer by @InvisibleArrow above does work, however you must define the method you call before you call it, i.e. earlier in the build.gradle file.
You can see an example here. I have used this approach with Gradle 6.5 and it works.
With Kotlin DSL (build.gradle.kts) you can define regular functions and use them.
It doesn't matter whether you define your function before the call site or after it.
println(generateString())

fun generateString(): String {
    return "Black Forest"
}

tasks.create("MyTask") {
    println(generateString())
}
If you want to import and use a function from another script, see this answer and this answer.
In my React Native project's build.gradle:
def func_abc(y){return "abc"+y;}
then
def x = func_abc("y");
If you want to check:
throw new GradleException("x="+x);
or
println "x="+x;

Platform-independent symlinks in a build script

I need to add a command to my Gradle build script that will create a symlink. I know that the person who builds this has Cygwin installed. The problem is with the export command. Here is what I have so far:
if (OS == 'win32') {
    exec { commandLine "C:\\cygwin\\bin\\mintty.exe", "--hold always", "/bin/bash", "-l", "-e", "export", "CYGWIN=winsymlinks", "&&", "-e", "ln", "-s", link, file }
    //exec { commandLine "cmd", "/c", "mklink", link, file }
    //exec { commandLine "export", "CYGWIN=winsymlinks" }
    //exec { commandLine "C:\\cygwin\\bin\\ln.exe", "-s", link, file }
}
else {
    exec { commandLine "ln", "-s", link, file }
}
Is there a standard way of doing it?
Gradle doesn't offer a public API for creating symlinks. If your builds run under JDK 7 or higher, you could try the JDK's symlink API. Otherwise, it should be possible to solve this with exec as well, as long as you can figure out the correct command.
The Ant builder, which is available in every build script, has one. This seems like a solution, but as pointed out in the comments it is not really cross-platform:
task createLink << {
ant.symlink(resource: "file", link: "link")
}
Instead you could call the NIO API in Java, but you will need Java 1.7. Take a look at Files.createSymbolicLink.
Example of Files.createSymbolicLink from build.gradle (the first argument is the link to create, the second is its target):
import java.nio.file.Files

File devDir = new File("${project.rootDir}/.dev")
File configDir = new File("${project.projectDir}/config")
Files.createSymbolicLink(devDir.toPath(), configDir.toPath())
In the special case where your aim is to create a symlink to an executable file, there is a cross-platform workaround, which is to create a shell script that forwards to the target:
File forwarding_script_template:
#! /bin/bash
_DIR_=$(dirname "$'{'BASH_SOURCE[0]'}'")
"$_DIR_/"{0} "$#"
In Gradle build file:
String templateScript = new File(projectDir,
"gradle/forwarding_script_template.sh").text;
String script = MessageFormat.format(templateScript, target);
writeFile(destination, script);
destination.setExecutable(true);
with the writeFile method equal to:
void writeFile(File destination, String content) {
Writer writer = new FileWriter(destination);
writer.write(content);
writer.close();
}

Grunt - DSS Plugin

I'm trying to get this Grunt plugin to work:
https://npmjs.org/package/dss
This documentation plugin ironically seems to be lacking proper documentation. Can someone help me out by giving me an example of something that worked for them. The main thing that's screwing me up is the "your_target" property. Not sure what's suppose to go in there.
Say I have a SASS file in the following path from the root directory:
sass/main.scss
What would my output be by default? And where would it output to? Also, what format does it output?
grunt.initConfig({
  DSS: {
    options: {
      // Task-specific options go here.
    },
    your_target: {
      // Target-specific file lists and/or options go here.
    },
  },
})
Is "your_target" property the path to my sass file or the path to the documentation file I'm trying to create? Would it be defined like this?
...
your_target: {
// Target-specific file lists and/or options go here.
sass: "sass/main.scss"
},
...
I don't know what the property name should be. :(
dss: {
  docs: {
    options: {
    },
    files: {
      'api/': 'css/**/*.{css,scss,sass,less,styl}'
    }
  }
}
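For what it's worth, docs there is just an arbitrary target name. Assuming the plugin registers a Grunt task named dss (as the config key suggests), you would load and run it roughly like this:
grunt.loadNpmTasks('dss');
grunt.registerTask('docs', ['dss']);
// then run `grunt dss:docs` (one target) or `grunt dss` (all targets)
With the files mapping above, the generated documentation ends up under api/.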
