Running a Jar via Remote Access vs. a Jar in a Shared Folder?

I have a jar on SERVER-A that reads a file on SERVER-A. When I connect to SERVER-A via Remote Desktop and run it there, it works fine.
But when I run the same jar from a shared folder of SERVER-A on my local machine (not SERVER-A), it cannot find the file.
I printed the jar's current working directory while it was running:
1. When running via Remote Desktop on SERVER-A, I got "E:\ISO_Tester", which is the correct path.
2. When running from the shared folder of SERVER-A, I got "C:\Windows", which I think comes from my local machine instead of SERVER-A.
How can I make the jar read the file from the server when it is launched from the shared folder?
P.S. I'm using an environment variable for the location of the file to be read:
String propFile=System.getenv("OTHERS_HOME") + "\\conf\\FILE_TO_READ.txt";
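A note on what's happening: even when the jar sits in the shared folder of SERVER-A, launching it from your local machine runs the JVM on the local machine, so both the working directory and OTHERS_HOME are resolved locally, not on the server. One way to make the path independent of where the jar is launched from is to resolve it relative to the jar file itself. A minimal sketch, not the original program; it assumes the conf folder sits next to the jar, and PathResolver is a hypothetical class name:

import java.io.File;
import java.net.URISyntaxException;

public class PathResolver {
    public static void main(String[] args) throws URISyntaxException {
        // Location of the running jar, e.g. E:\ISO_Tester\app.jar on SERVER-A
        // or \\SERVER-A\share\ISO_Tester\app.jar when launched from the share.
        File jar = new File(PathResolver.class.getProtectionDomain()
                .getCodeSource().getLocation().toURI());
        // Build the path relative to the jar instead of relying on the
        // working directory or a machine-local environment variable.
        File propFile = new File(jar.getParentFile(),
                "conf" + File.separator + "FILE_TO_READ.txt");
        System.out.println(propFile.getAbsolutePath());
    }
}

With this approach the file is read through the same UNC path the jar was launched from, so it still points at SERVER-A.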

Related

ImportRDF command uses appdata/local instead of appdata/roaming for repository location

I installed and am running Ontotext GraphDB v10.1.0 (free desktop edition on Windows). Everything works fine: I can create repositories, run SPARQL queries, etc.
The server and UI are both loading/running/reporting repositories in the C:\Users\<Username>\AppData\Roaming\Graph\data\repositories folder.
However, when running the ImportRdf.cmd utility, it is "attaching to"/creating the repository in the C:\Users\<Username>\AppData\Local\Graph\data\repositories folder instead.
I tried adding the correct path to C:\Users\<user>\AppData\Local\GraphDB Desktop\app\GraphDB Desktop.cfg, but it makes no difference.
Has anyone experienced this or got any fixes?
The data (repository) directory can be set through the system or config property graphdb.home.data. The default value is the data subdirectory relative to the GraphDB home directory. For example, one way to configure it: go to the bin folder of the GraphDB distribution and start GraphDB with the following command:
./graphdb -Dgraphdb.home="full path to where you want your repo directory".
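If you only want to relocate the data directory rather than the whole home, the more specific property should work the same way (a hedged variant, not from the original answer):
./graphdb -Dgraphdb.home.data="full path to where you want your repo directory"
or, persistently, as a line in conf/graphdb.properties inside the distribution:
graphdb.home.data = full path to where you want your repo directory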

NiFi: Failed to retrieve directory listing when connecting to FTP server

I have a ListenFTP processor opened on a port, and when I try to connect to it via FileZilla I get the error "Failed to retrieve directory listing".
The connection seems to be established at first, but then this error occurs.
NiFi is hosted on an Ubuntu server, running in a Docker container.
The ListenFTP processor is listening on port 2221.
I tried changing some configuration in FileZilla based on this issue, but nothing worked.
The connection works fine on localhost: I can connect to the FTP server and transfer files.
Does someone have an idea how to solve this?
If you look at the documentation of the processor, it states:
"After starting the processor and connecting to the FTP server, an empty root directory is visible in the client application. Folders can be created in and deleted from the root directory and any of its subdirectories. Files can be uploaded to any directory. Uploaded files do not show in the content list of directories, since files are not actually stored on this FTP server, but converted into FlowFiles and transferred to the next processor via the 'success' relationship. It is not possible to download or delete files like on a regular FTP server. All the folders (including the root directory) are virtual directories, meaning that they only exist in memory and do not get created in the file system of the host machine. Also, these directories are not persisted: by restarting the processor all the directories (except for the root directory) get removed. Uploaded files do not get removed by restarting the processor, since they are not stored on the FTP server, but transferred to the next processor as FlowFiles."

App-V Virtual Process and local filesystem?

I run this code in PowerShell:
$AppVName = Get-AppvClientPackage <Package>
Start-AppvVirtualProcess -AppvClientObject $AppVName cmd.exe
Then I write a file with a cmd command, and the file is persisted on the host filesystem. Is this normal behavior? I thought virtual processes run in some kind of "bubble".
How do I enable this bubble so that files written by virtual processes are not persisted?
This is one of the correct methods to run a process inside an App-V container.
Is the file you're modifying/writing in a path that is part of the original VFS structure of your App-V package, or are you saving it somewhere else on the machine?
If the cmd.exe process is modifying files that are not present in the VFS folders of the App-V package, it is normal for those files to persist on the machine; only paths captured in the package's VFS are virtualized.
You can check the VFS folder structure of the App-V package by unzipping it with 7-Zip, since an .appv package is a zip container.
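For example, a small PowerShell sketch (the package path is a placeholder, and Root\VFS is the internal package layout as I recall it):
# An .appv package is a zip container; give it a .zip extension so
# Expand-Archive accepts it, then inspect the virtualized folder layout.
Copy-Item 'C:\Packages\MyApp.appv' "$env:TEMP\MyApp.zip"
Expand-Archive -Path "$env:TEMP\MyApp.zip" -DestinationPath "$env:TEMP\MyApp"
Get-ChildItem "$env:TEMP\MyApp\Root\VFS"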

Copy files from local directory to docker container on a remote host

I want to create a watcher that automatically syncs file changes from a local directory to a Docker container on a remote host, so I need an efficient way to transfer the files. I will also need a one-time upload command that transfers a complete folder from the local directory to the remote container.
I figure one solution would be to scp to a tmp directory on the remote host, and then run docker cp via ssh to copy the files from the tmp directory. Is that a good solution? Is there anything better?
By the way, if anyone knows a file-sync utility for this use case, please let me know. I tried to search, but it seems this isn't the most popular development workflow.
I would try using rsync for local-to-remote-host syncing. From there, volume-mount the directory into the Docker container, as in the sketch below.
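A rough sketch of that workflow (the host, paths, and image/container names are placeholders):
# Sync the local folder to the remote host; rsync only transfers deltas,
# so rerunning it on every file change stays efficient.
rsync -avz --delete ./app/ user@remote-host:/srv/app/
# On the remote host, start the container with that directory mounted,
# so synced changes are visible inside the container immediately.
docker run -d --name myapp -v /srv/app:/usr/src/app my-image
The same rsync invocation covers the one-time upload case; a watcher just needs to rerun it whenever something under ./app changes.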

How to copy files from remote server's share drive to my local computer using Ant

How do I copy files from a remote server's share drive to my local computer using an Ant script?
What I do currently is:
1. Open Windows Explorer.
2. Type the location of the folder (like //192.168.2.12/xyz/abc).
3. It asks for credentials, and I provide them.
4. Copy the file and paste it on the local machine.
I want to perform the same steps through an Ant script.
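One possible approach (an untested sketch; the share, credentials, and paths are placeholders): authenticate to the share with net use via Ant's exec task, then copy from the UNC path with an ordinary copy task:
<target name="copy-from-share">
    <!-- Map the share with credentials first (Windows only) -->
    <exec executable="net">
        <arg line="use \\192.168.2.12\xyz mypassword /user:mydomain\myuser"/>
    </exec>
    <!-- Ant's copy task can read from the UNC path once authenticated -->
    <copy todir="C:/local/dest">
        <fileset dir="\\192.168.2.12\xyz\abc"/>
    </copy>
</target>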
