.da files are not created for gcov when accessed remotely - gcc

I'm trying to generate coverage info with gcov. The configuration is a little tricky: some of the tests run on the test machine itself, while others are implemented in MATLAB and access the executables on the test machine remotely. When the test files are executed on the same machine, the .da files are generated. However, no .da files are created when I run the MATLAB test files, which use the executables on the test machine remotely. I'm accessing the test machine remotely with root access, and all the privileges seem to be correct. Any idea why the .da files are not generated when the executables are accessed remotely?

Gcov generates the .da files on the file system where the executable resides. If the executable residing on the remote machine is the one built with coverage collection enabled, you should look for the .da files on the remote machine.

Related

Unable to save output from R scripts in system directory using DevOps pipeline

I am running R scripts on a self-hosted DevOps agent. My Windows agent is able to access the system directory where it is hosted. Below is the directory structure for my code:
Agent location: F:/agent
Source code: F:/agent/deployment/projects/project1/sourcecode
DWH dump: F:/agent/deployment/DWH_dump/2021/
Output location: F:/agent/deployment/projects/project1/output_data/2021
The agent uses CMD in the DevOps pipeline to invoke R on the system and uses the libraries from the system directory.
Problem statement: I am unable to save the output from my R script into the output location directory. It fails with an error pointing at that directory: "Probable reason: permission denied".
Output file format: file_name.rds, but the same issue happens even for a .csv file.
Command leading to failure: saveRDS(output_data, paste0(output_loc, "/", "file_name.rds")), where output_loc holds the output location path above and output_data is the object being saved.
Workaround: I found a workaround, which is to save the files to the Source Code directory first and then copy them to the output location directory at the end. This works perfectly fine but costs me 2 extra hours of run time, because I have to save all intermediate files and delete them at the end; keeping the intermediate files in memory eats up my RAM.
I have not opened that directory anywhere on the machine; the only application open is the browser in which the pipeline is running. I spent hours trying to figure out the reason, with no success. I even checked the system PATH to see whether I had mentioned that directory there, and it is not present.
When I run the same script directly on the machine using RStudio, I have no issues saving the file to any directory.
I have already spent 2 full days on this. Any pointers to the root cause could save me a few hours of runtime.
The solution was to set the Azure Pipelines agent service in Windows to run with admin credentials. The agent was not configured as an admin during creation, so after reconfiguring it to run under my user ID, which has admin access on the VM, the pipelines were able to save files without any trouble.
Feels great, saved a few hours of run time!
I was able to achieve this by following this post.
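As a minimal sketch, the reconfiguration can also be done from PowerShell (the service name vstsagent.myorg.mypool.myagent and the account below are assumptions; substitute your own agent's service name and credentials):
# Find the agent's Windows service (typically named vstsagent.<org>.<pool>.<agent>)
Get-Service vstsagent* | Select-Object Name, Status
# Reconfigure the service to log on with an account that has write access to the output directory
sc.exe config "vstsagent.myorg.mypool.myagent" obj= "MYDOMAIN\myuser" password= "MyP@ssw0rd"
# Restart the service so the new log-on credentials take effect
Restart-Service "vstsagent.myorg.mypool.myagent"
The same change can be made interactively in services.msc on the Log On tab of the agent service.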

App-V Virtual Process and local filesystem?

I run this code in PowerShell:
$AppVName = Get-AppvClientPackage <Package>
Start-AppvVirtualProcess -AppvClientObject $AppVName cmd.exe
Then I write a file with a cmd command, and the file is persisted on the host filesystem. Is this normal behavior? I thought virtual processes ran in some kind of "bubble".
How do I enable this bubble so that files written by virtual processes are not persisted?
This is one of the correct methods to run a process inside an App-V container.
Is the file that you're modifying/writing in a path that is part of the original VFS structure of your App-V package, or are you saving it to some other folder on the machine?
If the cmd.exe process is modifying files that are not present in the VFS folders of the App-V package, it is normal for those files to persist on the machine.
You can check the VFS folder structure of the App-V package by unzipping the .appv file with 7-Zip.
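As an alternative to unzipping, here is a small PowerShell sketch that lists the VFS folders of the locally cached copy of the package (this assumes the default package cache location under %ProgramData%\App-V; <Package> is the same placeholder as above):
# Look up the package and build the path to its cached VFS folder
$pkg = Get-AppvClientPackage <Package>
$vfs = Join-Path $env:ProgramData "App-V\$($pkg.PackageId)\$($pkg.VersionId)\Root\VFS"
# List the top-level folders the package virtualizes
Get-ChildItem $vfs -Directory | Select-Object Name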

Set execute permission on files deployed from Windows to Lambda using Serverless

I'm using Serverless to deploy a Lambda function. I need to include an executable bin file, but after it is uploaded it doesn't have execute permission, and I can't change the permissions after it is deployed. The only thing I can do is move the file to /tmp and change the permissions there. That works, but it adds a lot of overhead because I have to move the file on every invoke, since /tmp is ephemeral.
I know there is a known issue that Windows and Linux file permissions are different, so if you zip a file on Windows and unzip it on a Linux machine you will have problems with permissions, especially execute permission, and that is what happens when Serverless deploys the files.
Does anyone have a better workaround for this (other than "deploy from a non-Windows machine")?

Remove execute permission on file downloaded on a Mac

We have a web app running on a Windows server, which allows a user to do some processing and download the results. The result is a set of files which are dynamically created on the server and zipped into a single file for facilitating the download process.
Everything works fine on Windows, but when users download the file from the web app on a Mac, the contents of the zip file have the execute (chmod +x) permission set (I presume the same happens on *NIX and Linux machines). This can, of course, be removed by running 'chmod -x', but is there a way to remove the execute permission from the files so that, when downloaded on a Mac, they don't have the execute permission set by default?
I believe it's not possible: .zip files don't contain permissions, so on a Mac it has to default to "most permissive" (otherwise it's possible that there are applications inside the zip that wouldn't be marked as executable when they need to be).
Tar archives, for instance, do record permissions, but they'd be a bit more difficult to create on a Windows server.

.Net Remoting server dll in same directory as the executable

I have an application that hosts a remote object. For a client application to access this remote object, the DLL implementing the remote server has to be in the same directory as the server application's executable. When I install the application, the DLL is placed in a different directory, and I manually copy it into the directory where the server executable resides.
I do not want to do this every time. Is there a way around this problem, i.e. to have the application load the DLL from wherever it is available rather than requiring it to be in the same directory as the executable?
See http://www.informit.com/articles/article.aspx?p=30601&seqNum=6
