App-V Virtual Process and local filesystem? - windows

I run this code in powershell:
$AppVName = Get-AppvClientPackage <Package>
Start-AppvVirtualProcess -AppvClientObject $AppVName cmd.exe
Then I write a file with a cmd command, and the file is persisted on the host filesystem. Is this normal behavior? I thought virtual processes ran in some kind of "bubble".
How do I enable this bubble so that files written by virtual processes are not persisted?

This is one of the correct ways to run a process inside an App-V container.
Is the file you're modifying/writing in a path that is part of the original VFS structure of your App-V package, or are you saving it in another folder on the machine?
If the cmd.exe process modifies files that are not present in the VFS folders of the App-V package, it is normal for those files to persist on the machine.
You can check the VFS folder structure of an App-V package by opening it with 7-Zip (an .appv file is a ZIP container).
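Since an .appv package is just a ZIP container, any zip tool can list the VFS folders. A minimal sketch that builds a dummy package and lists it the way you would a real one (the folder and file names below are illustrative, not from a real App-V package):

```shell
set -eu
cd "$(mktemp -d)"
# An .appv package is a ZIP container; its virtualised folders live
# under Root/VFS. Build a dummy package to show how to inspect one.
mkdir -p pkg/Root/VFS/ProgramFilesX86/MyApp
echo 'setting=1' > pkg/Root/VFS/ProgramFilesX86/MyApp/app.ini
(cd pkg && python3 -m zipfile -c ../MyApp.appv Root)
# List the package contents; anything under Root/VFS is virtualised
python3 -m zipfile -l MyApp.appv | grep VFS
```

On a real package, files whose paths fall outside these Root/VFS folders are the ones you can expect to persist on the host.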

Related

Run remote files directly in dockerfile

I am wondering if it is possible to RUN a remote file stored in an NFS share when building an image from a dockerfile.
Currently I am using the COPY command and then the RUN command to execute the files; however, many of the files I need to create the image are extremely large.
Is it possible to execute files stored in an NFS share directly in the dockerfile without having to copy them all over?
You can only RUN files inside your container, so they need to be copied into your container first.
What you can do is move the COPY commands to the beginning of your Dockerfile so that those layers are cached and the files don't need to be copied again every time you change a later command in the Dockerfile.
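A sketch of that layer ordering (file names are placeholders):

```dockerfile
# Big, rarely-changing artifacts first: their layers are cached and
# reused across builds
COPY big-installer.bin /opt/big-installer.bin
RUN /opt/big-installer.bin --install

# Frequently edited steps go last, so changes here don't invalidate
# the expensive COPY layer above
COPY entrypoint.sh /usr/local/bin/entrypoint.sh
```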
You can RUN curl ... to grab the remote file and then execute it, sure.
But note this only runs at image build time, not during the lifecycle of the container.
You could also mount the NFS volume to your host, then COPY the files.
Beyond that, direct remote execution would be a pretty basic security flaw and shouldn't be possible under any circumstances.
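The download-at-build-time approach might look like this, assuming the file is reachable over HTTP from the build environment (the base image and URL below are placeholders):

```dockerfile
FROM ubuntu:22.04
RUN apt-get update && apt-get install -y curl

# Fetch the large file at build time instead of COPYing it from the
# build context (https://files.example.com/installer.sh is a placeholder)
RUN curl -fsSL https://files.example.com/installer.sh -o /tmp/installer.sh \
 && sh /tmp/installer.sh \
 && rm /tmp/installer.sh
```

Deleting the file in the same RUN step keeps it out of the final image layer.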

Modify source file used for the Copy on the target machine

I'm using the Copy module to transfer a 10G file from my machine to the remote /tmp dir. However, Copy uses an intermediate folder inside home and I need to transfer the file directly to /tmp because /home doesn't have enough space.
Is it possible to control the src path used by the Copy module?
Thanks
In your ansible.cfg, change "remote_tmp" to a location with sufficient space available, or run your playbook as below:
ANSIBLE_REMOTE_TEMP=/dir1/some_dir/large_space/ ansible-playbook copy.yml
See the official documentation for ANSIBLE_REMOTE_TEMP under the shell plugin.
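The ansible.cfg variant might look like this (the path is a placeholder; pick any filesystem on the target with enough room):

```ini
# ansible.cfg (illustrative): move the intermediate upload dir off /home
[defaults]
remote_tmp = /var/big_disk/.ansible/tmp
```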

Can lftp execute a command on the downloaded files (as part of the mirroring process)?

This may be asking too much from an already very powerful tool, but is there a chance that lftp mirror can execute a command during the mirroring process (from remote directory to the local machine)?
Specific example: lftp is asked to mirror a remote directory with xml files into a local folder and as soon as each file is downloaded/updated, it converts the file to JSON format using xml2json.
I can think of a solution that relies on monitoring the local copy of the mirrored folder for changes via find and then executing xml2json on the new/updated files, but perhaps there is a simpler way?
You can use the xfer:verify and xfer:verify-command settings to run a local command on every transferred file.
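A sketch of such an invocation (the server, paths, and converter script are placeholders); lftp passes the local file name to the verify command as its argument, so a small wrapper around xml2json would receive each downloaded file:

```shell
lftp -e '
  set xfer:verify yes;
  set xfer:verify-command "/usr/local/bin/xml2json-wrapper";
  mirror /remote/xml /local/xml;
  quit
' sftp://user@example.com
```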

monitor folder and execute command

I'm looking for a way to monitor a folder for new file creation and then execute a shell command on the created file. The scenario: I have a host machine that runs a virtual machine, and they share a folder. When I create or copy a new file into that shared folder on the host machine, the VM should detect the change. I have tried incron and inotify, but they only work when I copy or create the file as a user inside the VM. Thanks
Method 1 in this answer may help: Bash script, watch folder, execute command
Just run that script in your VM, and you should be able to detect changes made by the host.
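The core of such a polling script can be sketched as below; inotify can miss events for files written from outside the VM on some shared-folder filesystems, so it diffs directory snapshots instead. The temp dir and touch just simulate the host dropping a file:

```shell
set -eu
WATCH_DIR=$(mktemp -d)   # stand-in for the shared folder

snapshot() { ls -1 "$WATCH_DIR"; }

# Print entries present in the new snapshot ($2) but not the old one ($1)
new_entries() { comm -13 <(printf '%s\n' "$1") <(printf '%s\n' "$2"); }

prev=$(snapshot)
touch "$WATCH_DIR/report.xml"   # simulate the host creating a file
curr=$(snapshot)
new_entries "$prev" "$curr"     # prints report.xml
```

In a real watcher you would wrap snapshot/new_entries in a `while sleep 5` loop and run your command on each printed file.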

Appropriate location for scheduled tasks

In Windows, where is the most appropriate place to store an executable that will be run as a scheduled task on a server?
A file share?
"C:"?
"C:\Windows"?
Others?
Stay out of C: and C:\Windows. Never put it on a file share (what if it isn't available?).
I'd just use a subdirectory of C:\Program Files (to be more precise, of %ProgramFiles% - the user's system may not be installed on C:, for instance).
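For instance, a task registration pointing at such a Program Files subdirectory (the task name and paths are placeholders):

```batch
schtasks /Create /TN "MyCompany\NightlyJob" /SC DAILY /ST 02:00 ^
    /TR "\"%ProgramFiles%\MyCompany\NightlyJob\job.exe\""
```

Quoting the %ProgramFiles% path matters, since it contains a space.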
