How to increase the max memory limit for the app built by electron-builder? - electron-builder

Version: "electron": "1.6.2", "electron-builder": "^16.8.2",
Target: windows x64
I know I can add --js-flags="--max-old-space-size=4096" when running it with electron, but where should I put this parameter in the build config of electron-builder?

In main.js, add:
app.commandLine.appendSwitch('js-flags', '--max-old-space-size=4096');
According to Supported Chrome Command Line Switches, this should be called before the ready event is emitted. For example:
import { app } from "electron";
app.commandLine.appendSwitch('js-flags', '--max-old-space-size=4096');
app.on('ready', () => {
// ...
});

Just wanted to add that, in my case, the max-old-space-size flag was only being successfully applied when I placed it in my webview's "preload" script, like so (Start_WebviewPreload.ts):
import {remote} from "electron";
remote.app.commandLine.appendSwitch("js-flags", "--max-old-space-size=8192");
Placing it in the main.js file (the entry point of the whole program) did not do anything. (I even checked the command-line arguments with Process Hacker 2, and none of the Electron processes showed that flag until I moved the code as described above.)
Also, I noticed there may be some kind of race condition between setting the command-line flag and the execution of app.on("ready") -- some of the time the code above works for the main renderer process (the one not in the webview), and other times it doesn't.
So basically: if you want to ensure your command-line switches work, apply them within the preload script for the given browser window / webview.
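For completeness, here is a minimal sketch of wiring a preload script up to the window that hosts the webview; the file names and the loaded page are placeholders of mine, not from the original post:
// Sketch: register the compiled preload script on the window that will host the
// webview, so the js-flags switch is appended before any renderer code runs.
const { app, BrowserWindow } = require('electron');
const path = require('path');

app.on('ready', () => {
  const win = new BrowserWindow({
    webPreferences: {
      preload: path.join(__dirname, 'Start_WebviewPreload.js'), // compiled from the .ts above
    },
  });
  win.loadURL('file://' + path.join(__dirname, 'index.html'));
});
A <webview> tag likewise accepts its own preload attribute if you need the switch applied inside the guest process specifically.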

Related

DART : Why dart apps are so big in size? (console app)

I made a console app with Dart that adds two numbers and prints the result.
The executable came out to about 5 MB!
Download link (Windows only) https://drive.google.com/open?id=1sxlvlSZUdxewzFNiAv_bXwaUui2gs2yA
Here is the code
import 'dart:io';
int inputf1;
int inputf2;
int inputf3;
void main() {
  stdout.writeln('Type The First Number');
  var input1 = stdin.readLineSync();
  stdout.writeln('You typed: $input1 as the first number');
  sleep(Duration(seconds: 1));
  stdout.writeln('\nType The Second Number');
  var input2 = stdin.readLineSync();
  stdout.writeln('You typed: $input2 as the second number');
  sleep(Duration(seconds: 1));
  inputf1 = int.parse(input1);
  inputf2 = int.parse(input2);
  inputf3 = inputf1 + inputf2;
  print('\nfinal answer is : $inputf3');
  sleep(Duration(seconds: 10));
}
The reason for the big executable is that the dart2native compiler is not really made to produce an executable from scratch. Instead, it packages the dartaotruntime executable together with your AOT-compiled Dart program.
dartaotruntime contains all the Dart runtime libraries, and dart2native does not strip anything out of it (which would be difficult anyway, since it is a binary), so you get the whole runtime even if your program only adds two numbers.
It is not that bad, though, since it is a one-time cost per program: even a very big program includes dartaotruntime only once.
However, if you are deploying many small programs in a single package, I recommend adding the -k aot parameter to dart2native so that, instead of an executable, it generates an .aot file which you can then run with dartaotruntime <program.aot>.
This makes your deployment a bit more complicated, but you only need to ship the dartaotruntime binary together with your multiple .aot files.
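As a rough sketch of the two deployment modes (the output file names are just examples of mine):
# Self-contained executable: bundles dartaotruntime, hence several MB.
dart2native bin/main.dart -o main.exe
# AOT snapshot only: much smaller per program, but needs the shared runtime to run.
dart2native bin/main.dart -k aot -o main.aot
dartaotruntime main.aot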
I compiled your program to both .exe and .aot with Dart 2.8.2 for Windows 64-bit so you can see the size difference.
Again, -k aot will not save you any disk space if you are only going to deploy a single executable. But it can save a lot if your project contains many programs.
It should also be noted that the .aot file is platform-dependent, just as the .exe file would be, and you should run it with the same version of dartaotruntime that was used to compile it.
It is because native Dart apps are made by incorporating the render engine... I know that is right for all Flutter apps, which are based on Dart.
Look at this too: https://medium.com/@rajesh.muthyala/app-size-in-flutter-5e56c464dea1

Why is WebViewControlProcess.CreateWebViewControlAsync() never completing?

I’m trying to write some Rust code that uses Windows.Web.UI.Interop.WebViewControl (which is a Universal Windows Platform out-of-process wrapper expressly designed so Win32 apps can use EdgeHTML), and it’s all compiling, but not working properly at runtime.
The relevant code boils down to this, using the winit, winapi and winrt crates:
use winit::os::windows::WindowExt;
use winit::{EventsLoop, WindowBuilder};
use winapi::winrt::roapi::{RoInitialize, RO_INIT_SINGLETHREADED};
use winapi::shared::winerror::S_OK;
use winrt::{RtDefaultConstructible, RtAsyncOperation};
use winrt::windows::foundation::Rect;
use winrt::windows::web::ui::interop::WebViewControlProcess;
fn main() {
    assert!(unsafe { RoInitialize(RO_INIT_SINGLETHREADED) } == S_OK);
    let mut events_loop = EventsLoop::new();
    let window = WindowBuilder::new()
        .build(&events_loop)
        .unwrap();
    WebViewControlProcess::new()
        .create_web_view_control_async(
            window.get_hwnd() as usize as i64,
            Rect {
                X: 0.0,
                Y: 0.0,
                Width: 800.0,
                Height: 600.0,
            },
        )
        .expect("Creation call failed")
        .blocking_get()
        .expect("Creation async task failed")
        .expect("Creation produced None");
}
The WebViewControlProcess instantiation works, and the CreateWebViewControlAsync function does seem to care about the value it received as host_window_handle (pass it 0, or one off from the actual HWND value, and it complains). Yet the IAsyncOperation stays determinedly at AsyncStatus.Started (0), and so the blocking_get() call hangs indefinitely.
A full, runnable demonstration of the issue (with a bit more instrumentation).
I get the feeling that the WebViewControlProcess is at fault: its ProcessId is stuck at 0, and it doesn't look to have spawned any subprocess. The ProcessExited event does not seem to be fired (I attached a handler to it immediately after instantiation; is there an opportunity for it to be fired before that?). Calling Terminate() fails, as one might expect in such a situation, with E_FAIL.
Have I missed some sort of initialization for using Windows.Web.UI.Interop? Or is there some other reason why it’s not working?
It turned out that the problem was threading-related: the winit crate was doing its event loop in a different thread, and I did not realise this; I had erroneously assumed winit to be a harmless abstraction, which it turned out not quite to be.
I discovered this when I tried minimising and porting a known-functioning C++ example, this time doing all the Win32 API calls manually rather than using winit, so that I could be sure the translation was faithful. I got it to work, and discovered this:
The IAsyncOperation is fulfilled in the event loop, deep inside a DispatchMessageW call. That is when the Completion handler is called. Thus, for the operation to complete, you must run an event loop on the same thread. (An event loop on another thread doesn’t do anything.) Otherwise, it stays in the Started state.
Fortunately, winit is already moving to a new event loop which operates in the same thread, with the Windows implementation having landed a few days ago; when I migrated my code to use the eventloop-2.0 branch of winit, and to using the Completed handler instead of blocking_get(), it all started working.
I shall clarify about the winrt crate’s blocking_get() call which would normally be the obvious solution while prototyping: you can’t use it in this case because it causes deadlock, since it blocks until the IAsyncOperation completes, but the IAsyncOperation will not complete until you process messages in the event loop (DispatchMessageW), which will never happen because you’re blocking the thread.
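To illustrate the point about DispatchMessageW, a bare-bones message pump on the creating thread looks roughly like this (a sketch using the winapi crate, not the code I ended up with):
// Sketch: pump messages on the same thread that called create_web_view_control_async,
// so the IAsyncOperation's Completed handler (invoked from inside DispatchMessageW)
// actually gets a chance to run.
fn pump_messages() {
    use winapi::um::winuser::{DispatchMessageW, GetMessageW, TranslateMessage, MSG};
    unsafe {
        let mut msg: MSG = std::mem::zeroed();
        while GetMessageW(&mut msg, std::ptr::null_mut(), 0, 0) > 0 {
            TranslateMessage(&msg);
            DispatchMessageW(&msg);
        }
    }
}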
Try to initialize the WebViewControlProcess with winrt::init_apartment(); it may need a single-threaded apartment (according to this answer).
More attention on Microsoft Edge Developer Guide:
Lastly, power users might notice the appearance of the Desktop App
Web Viewer (previously named Win32WebViewHost), an internal system app
representing the Win32 WebView, in the following places:
● In the Windows 10 Action Center. The source of these notifications
should be understood as from a WebView hosted from a Win32 app.
● In the device access settings UI
(Settings->Privacy->Camera/Location/Microphone). Disabling any of
these settings denies access from all WebViews hosted in Win32 apps.

How to detect if the current Go process is running in a headless (non-GUI) environment?

I have a Go program which wants to install a tray icon. If the process is headless (that is, it will not be able to create a graphical user interface), the Go program still makes sense and shall run, but obviously it shall not install the tray icon.
What is the way in Go to detect whether the current process is headless?
Currently, I use the following code:
func isHeadless() bool {
    _, display := os.LookupEnv("DISPLAY")
    return !(runtime.GOOS == "windows" || display)
}
This code works just fine on a "normal" Windows, Linux, or Mac OS X, and I bet it will also run just fine on FreeBSD, NetBSD, Dragonfly and many others.
Still, that code obviously has a lot of problems:
It assumes that Windows is never headless (wrong: what if the process was started without a user logged in? Also, there's Windows 10 IoT Core, which can be configured to be headless: https://learn.microsoft.com/en-us/windows/iot-core/learn-about-hardware/headlessmode)
It doesn't support Android (of which there is also a headless version for IoT).
It assumes that everything non-Windows has an X server and thus a DISPLAY environment variable (wrong; for example, Android)
So, what is the correct way in Go to detect whether the current process is headless / running in a headless environment?
I'm not looking for workarounds, like adding a --headless command-line switch to my program, because I already have that anyway for users who have heads but want the program to behave as if it were headless.
In some other programming environments, such capabilities exist. For example, Java has java.awt.GraphicsEnvironment.isHeadless(), and I'm looking for a similar capability in Go.
Some people have suggested simply trying to create the UI and catching the error. This does not work, at least not with the library that I use, github.com/getlantern/systray: when systray.Run() cannot create the UI, the process dies. My code to set up the system tray looks like this:
func setupSystray() { // called from main()
    go func() {
        systray.Run(onReady, nil)
    }()
}

func onReady() {
    systray.SetTitle("foo")
    // ...
}
When I run this code on Linux with DISPLAY unset, the output is as follows:
$ ./myapp-linux-amd64
Unable to init server: Could not connect: Connection refused
(myapp-linux-amd64:5783): Gtk-WARNING **: 19:42:37.914: cannot open display:
$ echo $?
1
It could be argued that this is a flaw in the library (and I have filed a ticket against it: https://github.com/getlantern/systray/issues/71), but nonetheless other APIs and environments provide an isHeadless() function, and I'm looking for an equivalent in Go.
I think you might be attacking this problem from a wrong angle.
Detecting reliably that your program really sees a headless machine is, IMO, rather futile for a number of reasons.
Hence I think I'd adopt an approach usually used in, say, working with filesystems:
Try to perform an operation.
If it fails, collect the error.
Analyze the error and act accordingly.
That is, just try to explicitly initialize the client side (yours) of whatever GUI stack your code talks to, trap any possible error, and analyze it. If it says it failed to initialize the subsystem, then just raise a relevant flag and proceed.
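Since systray.Run exits the process instead of returning an error, one way to apply this idea is to probe the display yourself before calling it. A rough sketch (the X socket path convention and the DISPLAY parsing are assumptions on my part):
// Sketch: "try and inspect the error" by probing the X server socket directly,
// instead of letting the tray library die on failure.
package main

import (
    "fmt"
    "net"
    "os"
    "strings"
    "time"
)

func canReachXServer() bool {
    display := os.Getenv("DISPLAY") // e.g. ":0" or ":0.0"
    if display == "" {
        return false
    }
    num := strings.TrimPrefix(display, ":")
    if i := strings.Index(num, "."); i >= 0 {
        num = num[:i] // drop the screen part
    }
    // Conventional location of the X server's Unix socket.
    conn, err := net.DialTimeout("unix", "/tmp/.X11-unix/X"+num, time.Second)
    if err != nil {
        return false
    }
    conn.Close()
    return true
}

func main() {
    fmt.Println("GUI available:", canReachXServer())
}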
In the perceived absence of a library/solution for this, I've created one myself. https://github.com/christianhujer/isheadless
Example Usage:
package main

import (
    . "fmt"
    . "github.com/christianhujer/isheadless"
    . "os"
)

func main() {
    headless := IsHeadless()
    Fprintf(Stderr, "%s: info: headless: %v\n", Args[0], headless)
    Exit(map[bool]int{true: 0, false: 1}[headless])
}
Example runs:
$ ./isheadless ; echo $?
./isheadless: info: headless: false
1
$ DISPLAY= ./isheadless ; echo $?
./isheadless: info: headless: true
0
Well, the answer to the question precisely as it was stated is to just look at what Java does in its isHeadless().
Here is what OpenJDK 10 does. I cannot copy the code, as that would supposedly breach its license, but in essence the breakdown is as follows:
Get the system property "java.awt.headless"; use it if found.
Get the system property "javaplugin.version"; if it exists, the session is not headless. Use this value.
Get the system property "os.name". If it literally contains the substring "OS X" and the system property "awt.toolkit" equals the string "sun.awt.HToolkit", the session is not headless. Use this value.
Check whether the system property "os.name" equals one of "Linux", "SunOS", "FreeBSD", "NetBSD", "OpenBSD" or "AIX", and if so, look for an environment variable "DISPLAY"; if it is absent, the session is headless.
As you can see, in reality the check is pretty lame, and I fail to see any special treatment of Windows.
Still, this answers your question precisely.
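For what it's worth, a rough Go translation of that breakdown (minus the Java-only system properties) could look like this; the GOOS names are my own mapping of the OS list above:
// Sketch mirroring the OpenJDK logic: on the listed Unix-like systems the session
// is headless iff DISPLAY is absent; everything else is assumed to have a head.
package main

import (
    "fmt"
    "os"
    "runtime"
)

func isHeadless() bool {
    switch runtime.GOOS {
    case "linux", "solaris", "freebsd", "netbsd", "openbsd", "aix":
        _, hasDisplay := os.LookupEnv("DISPLAY")
        return !hasDisplay
    default:
        // Like OpenJDK, no special treatment for Windows or macOS here.
        return false
    }
}

func main() {
    fmt.Println("headless:", isHeadless())
}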

drmDropMaster requires root privileges?

Pardon for the long introduction, but I haven't seen any other questions for this on SO.
I'm playing with DRM (Direct Rendering Manager, a wrapper for Linux kernel mode setting) and I'm having difficulty understanding a part of its design.
Basically, I can open a graphics card device in my virtual terminal, set up frame buffers, and change a connector and its CRTC just fine. This lets me render to the VT in a lightweight graphics mode without the need for an X server (that's what KMS is about, and in fact the X server uses it underneath).
Then I wanted to implement graceful VT switching, so that when I hit ctrl+alt+f3 etc., I can see my other consoles. It turns out this is easy to do by calling ioctl() with stuff from linux/vt.h and handling some user signals.
But then I tried to switch from my graphics program to a running X server. Bzzt! It didn't work: the X server didn't draw anything at all. After some digging I found that in the Linux kernel, only one program can do kernel mode setting at a time. So what happens is this:
I switch from X to a virtual terminal
I run my program
This program enters graphic mode with drmOpen, drmModeSetCRTC etc.
I switch back to X
X has no longer privileges to restore its own mode.
Then I found this in the Wayland source code: drmDropMaster() and drmSetMaster(). These functions are supposed to release and regain the privilege to set modes, so that the X server can continue to work, and after switching back to my program, it can take over from there.
Finally the real question.
These functions require root privileges. This is the part I don't understand. I can mess with kernel modes, but I can't say "okay X11, I'm done playing, I'm giving you the access now"? Why? Or should this work in theory, and I'm just doing something wrong in my code (e.g. working with the wrong file descriptors, or whatever)?
If I try to run my program as a normal user, I get "permission denied". If I run it as root, it works fine - I can switch from X to my program and vice versa.
Why?
Yes, drmSetMaster and drmDropMaster require root privileges because they allow you to do mode setting. Otherwise, any random application could display whatever it wanted to your screen. weston handles this through a setuid launcher program. The systemd people also added functionality to systemd-logind (which runs as root) to do the drm{Set,Drop}Master calls for you. This is what enables recent X servers to run without root privileges. You could look into this if you don't mind depending on systemd.
Your post seems to suggest that you can successfully call drmModeSetCRTC without root privileges. This doesn't make sense to me. Are you sure?
It is up to display servers like X, weston, and whatever you're working on to call drmDropMaster before it invokes the VT_RELDISP ioctl, so that the next session can call drmSetMaster successfully.
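To make that ordering concrete, here is a rough C sketch of the VT hand-off (assuming the VT has been put into VT_PROCESS mode with VT_SETMODE so that the release/acquire signals are delivered; error handling omitted):
/* Sketch of the hand-off described above; the signal handlers correspond to the
 * relsig/acqsig configured via VT_SETMODE. */
#include <signal.h>
#include <sys/ioctl.h>
#include <linux/vt.h>
#include <xf86drm.h>

static int drm_fd;   /* /dev/dri/card0 */
static int tty_fd;   /* the controlling VT */

static void on_release_vt(int sig)    /* relsig: someone else wants the VT */
{
    (void)sig;
    drmDropMaster(drm_fd);            /* give up mode-setting rights first... */
    ioctl(tty_fd, VT_RELDISP, 1);     /* ...then allow the kernel to switch away */
}

static void on_acquire_vt(int sig)    /* acqsig: we got the VT back */
{
    (void)sig;
    ioctl(tty_fd, VT_RELDISP, VT_ACKACQ);
    drmSetMaster(drm_fd);             /* reclaim mode-setting rights */
    /* ...then restore our CRTC configuration */
}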
Before digging into why it doesn't work, I had to understand how it works.
So, calling drmModeSetCRTC and drmSetMaster in libdrm in reality just calls ioctl:
include/xf86drm.c
int drmSetMaster(int fd)
{
    return ioctl(fd, DRM_IOCTL_SET_MASTER, 0);
}
This is handled by the kernel. In my program the most important functions that control the display are drmModeSetCRTC and drmModeAddFB; the rest is really just diagnostics. So let's see how they're handled by the kernel. It turns out there is a big table that maps ioctl events to their handlers:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_GETCRTC, drm_mode_getcrtc, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_SETCRTC, drm_mode_setcrtc, DRM_MASTER|DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...,
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB, drm_mode_addfb, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    DRM_IOCTL_DEF(DRM_IOCTL_MODE_ADDFB2, drm_mode_addfb2, DRM_CONTROL_ALLOW|DRM_UNLOCKED),
    ...,
};
This table is used by drm_ioctl, whose most interesting part is drm_ioctl_permit.
drivers/gpu/drm/drm_ioctl.c
long drm_ioctl(struct file *filp,
               unsigned int cmd, unsigned long arg)
{
    ...
    retcode = drm_ioctl_permit(ioctl->flags, file_priv);
    if (unlikely(retcode))
        goto err_i1;
    ...
}
static int drm_ioctl_permit(u32 flags, struct drm_file *file_priv)
{
    /* ROOT_ONLY is only for CAP_SYS_ADMIN */
    if (unlikely((flags & DRM_ROOT_ONLY) && !capable(CAP_SYS_ADMIN)))
        return -EACCES;

    /* AUTH is only for authenticated or render client */
    if (unlikely((flags & DRM_AUTH) && !drm_is_render_client(file_priv) &&
                 !file_priv->authenticated))
        return -EACCES;

    /* MASTER is only for master or control clients */
    if (unlikely((flags & DRM_MASTER) && !file_priv->is_master &&
                 !drm_is_control_client(file_priv)))
        return -EACCES;

    /* Control clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_CONTROL_ALLOW) &&
                 drm_is_control_client(file_priv)))
        return -EACCES;

    /* Render clients must be explicitly allowed */
    if (unlikely(!(flags & DRM_RENDER_ALLOW) &&
                 drm_is_render_client(file_priv)))
        return -EACCES;

    return 0;
}
Everything makes sense so far. I can indeed call drmModeSetCrtc because I am the current DRM master. (I'm not sure why. This might have to do with X11 properly waiving its rights once I switch to another VT. Perhaps this alone allows me to become automatically the new DRM master once I start messing with ioctl?)
Anyway, let's take a look at the drmDropMaster and drmSetMaster definitions:
drivers/gpu/drm/drm_ioctl.c
static const struct drm_ioctl_desc drm_ioctls[] = {
    ...
    DRM_IOCTL_DEF(DRM_IOCTL_SET_MASTER, drm_setmaster_ioctl, DRM_ROOT_ONLY),
    DRM_IOCTL_DEF(DRM_IOCTL_DROP_MASTER, drm_dropmaster_ioctl, DRM_ROOT_ONLY),
    ...
};
What.
So my confusion was justified: I'm not doing anything wrong; things really are this way.
I'm under the impression that this is a serious kernel bug. Either I shouldn't be able to set the CRTC at all, or I should be able to drop/set master. In any case, revoking every non-root program's right to draw to the screen because
any random application could display whatever it wanted to your screen
is too aggressive. I, as the user, should have the freedom to control that without giving root access to the whole program and without depending on systemd, for example via chmod 0777 /dev/dri/card0 (or group management). As it stands, it looks to me like a lazy man's answer to proper permission management.
Thanks for writing this up. This is indeed the expected outcome; you don't need to look for a subtle bug in your code.
It's definitely intended that you can become the master implicitly. A dev wrote example code as initial documentation for DRM, and it does not use SetMaster. And there is a comment in the source code (now drm_auth.c) "successfully became the device master (either through the SET_MASTER IOCTL, or implicitly through opening the primary device node when no one else is the current master that time)".
DRM_ROOT_ONLY is commented as
/**
* #DRM_ROOT_ONLY:
*
* Anything that could potentially wreak a master file descriptor needs
* to have this flag set. Current that's only for the SETMASTER and
* DROPMASTER ioctl, which e.g. logind can call to force a non-behaving
* master (display compositor) into compliance.
*
* This is equivalent to callers with the SYSADMIN capability.
*/
The above requires some clarification IMO. The way logind forces a non-behaving master is not simply by calling SETMASTER for a different master - that would actually fail. First, it must call DROPMASTER on the non-behaving master. So logind is relying on this permission check, to make sure the non-behaving master cannot then race logind and call SETMASTER first.
Equally logind is assuming the unprivileged user doesn't have permission to open the device node directly. I would suspect the ability to implicitly become master on open() is some form of backwards compatibility.
Notice, if you could drop your master, you couldn't use SETMASTER to get it back. This means the point of doing so is rather limited - you can't use it to implement the traditional switching back and forth between multiple graphics servers.
There is a way you can drop the master and get it back: close the fd, and re-open it when needed. It sounds to me like this would match how old-style X (pre-DRM?) worked - wasn't it possible to switch between multiple instances of the X server, and each of them would have to completely take over the hardware? So you always had to start from scratch after a VT switch. This is not as good as being able to switch masters though; logind says
/* On DRM devices we simply drop DRM-Master but keep it open.
* This allows the user to keep resources allocated. The
* CAP_SYS_ADMIN restriction to DRM-Master prevents users from
* circumventing this. */
As of Linux 5.8, drmDropMaster() no longer requires root privileges.
The relevant commit is 45bc3d26c: drm: rework SET_MASTER and DROP_MASTER perm handling.
The source code comments provide a good summary of the old and new situation:
In the olden days the SET/DROP_MASTER ioctls used to return EACCES when
CAP_SYS_ADMIN was not set. This was used to prevent rogue applications
from becoming master and/or failing to release it.
At the same time, the first client (for a given VT) is always master.
Thus in order for the ioctls to succeed, one had to explicitly run the
application as root or flip the setuid bit.
If the CAP_SYS_ADMIN was missing, no other client could become master...
EVER :-( Leading to a) the graphics session dying badly or b) a completely
locked session.
...
Here we implement the next best thing:
ensure the logind style of fd passing works unchanged, and
allow a client to drop/set master, iff it is/was master at a given point
in time.
...

how to force gdb to stop right after the start of program execution?

I've tried to set a breakpoint on every function that makes any sense, but the program exits before reaching any of them. Is there a way to run the program in step-by-step mode from the start so I can see what's going on?
I'm trying to debug /usr/bin/id, if that matters (we have a custom plugin for it and it's misbehaving).
P.S. The start command doesn't work for me here. (This should be a comment, but I don't have enough rep for it.)
Get the program entry point address and insert a breakpoint at that address.
One way to do this is to run info files, which gives you, for example, "Entry point: 0x4045a4". Then do "break *0x4045a4". After running the program, it will stop immediately.
From here on you can use single-stepping instructions (like step or stepi) to proceed.
You did not say what system you are trying to debug. If the code is in read-only memory, you may need to use hardware breakpoints (hbreak), if they are supported on that system.
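A session following that recipe might look like this (the address is the example value above, not real output from /usr/bin/id):
(gdb) info files
        Entry point: 0x4045a4
(gdb) break *0x4045a4
Breakpoint 1 at 0x4045a4
(gdb) run
Breakpoint 1, 0x00000000004045a4 in ?? ()
(gdb) stepi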
Use the start command.
The ‘start’ command does the equivalent of setting a temporary breakpoint at the beginning of the main procedure and then invoking the ‘run’ command.
For example, given a program built with debug info named main and used like this: main arg1 arg2
gdb main
(gdb) start arg1 arg2
Use starti. Unlike start, this stops at the actual first instruction, not at main().
You can type record full right after starting the program. This will record all instructions and make it possible to replay them / step backwards.
For the main function, you'd need to start recording before reaching its breakpoint, so set an earlier one with break _start; _start is a function that is called before the standard main function (apparently this applies only to gcc or similar compilers).
Then continue to the main breakpoint and do reverse-stepi to go back exactly one instruction.
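Roughly, the session goes like this (output trimmed; the breakpoint numbers are illustrative):
(gdb) break _start
(gdb) run
Breakpoint 1, ... in _start ()
(gdb) record full
(gdb) break main
(gdb) continue
Breakpoint 2, main (...)
(gdb) reverse-stepi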
For more info about recording look here: link
