Set execute permission on files deployed from Windows to Lambda using Serverless - aws-lambda

I'm using Serverless to deploy a Lambda function. I need to include an executable bin file, but after deployment it doesn't have execute permissions, and I can't change the permissions once it's deployed. The only thing I can do is copy the file to /tmp and change the permissions there. That works, but it adds a lot of overhead, because I have to copy the file on every invoke since /tmp is ephemeral.
I know there is a known issue with Windows and Linux file permissions being different: if you zip a file on Windows and unzip it on a Linux machine, you will have problems with permissions, especially execute permissions, and that is exactly what happens when Serverless deploys the files.
Does anyone have a better workaround for this (other than "don't deploy from a Windows machine")?
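One way to cut the per-invoke overhead: /tmp belongs to the execution environment, not to a single invoke, so it survives warm invocations. If the copy-and-chmod runs at module load rather than inside the handler, it only happens once per cold start. A minimal sketch, assuming the Node.js runtime (file names are placeholders):

    // Prepare the binary once per container, at module load.
    const fs = require('fs');
    const path = require('path');

    const PACKAGED = path.join(__dirname, 'bin', 'mytool'); // read-only under /var/task
    const RUNTIME = '/tmp/mytool';                          // writable location

    if (!fs.existsSync(RUNTIME)) {
      fs.copyFileSync(PACKAGED, RUNTIME);
      fs.chmodSync(RUNTIME, 0o755); // restore the execute bit lost in the Windows zip
    }

    exports.handler = async (event) => {
      // spawn RUNTIME here; it stays executable for the life of this container
    };

Warm invokes skip the copy entirely; only the first invoke in a fresh container pays the cost.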

Related

Unable to save output from Rscripts in system directory using Devops Pipeline

I am running Rscripts on a self-hosted DevOps agent. My Windows agent is able to access the system directory where it is hosted. Below is the directory structure for my code:
Agent loc. : F:/agent
Source Code : F:/agent/deployment/projects/project1/sourcecode
DWH _dump : F:/agent/deployment/DWH_dump/2021/
Output loca. : F:/agent/deployment/projects/project1/output_data/2021
The agent uses CMD in the DevOps pipeline to trigger R from the system and to use the libraries from the system directory.
Problem statement: I am unable to save the output from my Rscript into the Output loca. directory. It gives a permission denied error pointing to that directory.
Output file format: file_name.rds, but the same issue happens even with a csv file.
Command leading to failure: saveRDS(paste0(Output loca.,"/",file_name.rds))
Workaround: I found that if I save the files to the Source Code directory first and then copy them to the Output loca. directory, it works perfectly fine, but it costs me 2 extra hours of run time, because I have to save all the intermediary files and delete them at the end. Keeping the intermediary files in memory eats up my RAM.
I do not have that directory open anywhere on the machine; the only open application is the browser where the pipeline is running. I spent hours trying to figure out the reason, with no success. I even checked the system PATH to see whether I had mentioned that directory there, and it is not present.
When I run the same script directly on the machine using RStudio, I have no issues saving the file to any directory.
I've spent 2 full days on this already. Any pointers to the root cause could save me a few hours of runtime.
The solution was to set the Azure Pipelines agent service in Windows to run with admin credentials. The agent was not configured as an admin during creation, so after reconfiguring it to run under my user ID, which has admin access on the VM, the pipelines were able to save files without any trouble.
Feels great, saved a few hours of run time!
I was able to achieve this by following this post.
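For reference, a Windows service's logon account can also be changed from an elevated command prompt; a rough sketch, where the service name and account are placeholders (the real name is visible in services.msc):

    :: Point the agent service at an account with write access, then restart it
    sc.exe config "vstsagent.MyOrg.Default.MyAgent" obj= ".\MyAdminUser" password= "secret"
    sc.exe stop "vstsagent.MyOrg.Default.MyAgent"
    sc.exe start "vstsagent.MyOrg.Default.MyAgent"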

Run remote files directly in dockerfile

I am wondering if it is possible to RUN a remote file stored in an NFS share when building an image from a dockerfile.
Currently I am using the COPY command and then the RUN command to execute the files; however, many of the files I need to build the image are extremely large.
Is it possible to execute files stored in an NFS share directly in the dockerfile without having to copy them all over?
You can only RUN files inside your container, so the files need to be copied into the container first.
What you can do is move the COPY commands to the beginning of your Dockerfile, so that those layers are cached and the files don't need to be copied again every time you change a command later in the Dockerfile.
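To illustrate the layer-caching point, a sketch with placeholder paths: Docker caches layers top-down, so put large, rarely-changing files early and frequently-edited files late.

    FROM ubuntu:22.04
    # Large, rarely-changing files first: this layer is built once, then cached
    COPY big-installers/ /opt/installers/
    RUN /opt/installers/setup.sh
    # Frequently-edited files last, so changes here don't invalidate the layers above
    COPY src/ /app/src/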
You can RUN curl ... to grab the remote file and then execute it, sure.
But this will only run at image build time, not during the lifecycle of the container.
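A sketch of that approach, assuming the share's contents are also reachable over HTTP (the URL and paths are placeholders): doing the download, execution, and cleanup in a single RUN keeps the large file out of the image's layers entirely.

    FROM ubuntu:22.04
    RUN apt-get update && apt-get install -y curl && rm -rf /var/lib/apt/lists/*
    # Fetch, execute, and delete in one layer so the installer never persists
    RUN curl -fsSL http://fileserver.local/share/installer.sh -o /tmp/installer.sh \
     && sh /tmp/installer.sh \
     && rm /tmp/installer.sh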
You could also mount the NFS volume on your host and then COPY the files.
Otherwise, remote execution would be a pretty basic security flaw and shouldn't be possible under any circumstances.

Run node_modules binary on Lambda?

I'm trying to compress an image on Lambda using mozjpeg, but am having some issues.
The binary doesn't have execute permissions, and so I'm getting this error:
"exports._errnoException (util.js:870:11)",
"ChildProcess.spawn (internal/child_process.js:298:11)",
"Object.exports.spawn (child_process.js:362:9)",
"ret.catch.module.exports.promise (/var/task/node_modules/imagemin-mozjpeg/node_modules/exec-buffer/node_modules/execa/index.js:132:26)",
"/var/task/node_modules/imagemin-mozjpeg/node_modules/exec-buffer/index.js:36:15"
When I try to fix the permissions, I get this error:
'chmod: changing permissions of ‘/var/task/node_modules/imagemin-mozjpeg/node_modules/mozjpeg/vendor/cjpeg’: Read-only file system\n'
Is there a way to get the binaries to execute within node_modules, or an alternative to executing them manually from the tmp dir without the benefit of their nodejs wrappers?
You need to ensure that the method you use to zip your files preserves or sets the execute permissions in Unix format. They will then be preserved when the file is unzipped from S3.
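For example, if the deployment package is built by a script, any zip library that can write Unix mode bits will do this from any OS. A minimal sketch using the Node.js archiver package (an assumption, as is the binary path; any tool that stores Unix external attributes works):

    const fs = require('fs');
    const archiver = require('archiver');

    const archive = archiver('zip');
    archive.pipe(fs.createWriteStream('lambda.zip'));

    archive.file('index.js', { name: 'index.js' });
    // mode is stored in the entry's external attributes and survives
    // Lambda's unzip, so the binary arrives executable
    archive.file('bin/cjpeg', { name: 'bin/cjpeg', mode: 0o755 });

    archive.finalize();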

Handling file watchers in Vagrant shared folder

I'm using a Vagrant shared folder to develop a project, with babel as a file watcher.
However, most likely because of the time difference between the host and guest machines, whenever I change a file the watcher doesn't see the change and doesn't recompile the modified assets, making the whole development environment useless.
I've tried changing the sync strategy to rsync, but it misbehaves: changing file A to A' syncs fine, and changing B to B' syncs B as well, but it reverts file A' back to A.
Is there any workflow that would let me develop files in the shared folder while still firing the file-watcher hooks inside the guest machine?
Unfortunately you can't get file-change notifications on a shared folder. You have to use something like vagrant rsync-auto or a third-party file watcher like https://github.com/AgentCosmic/xnotify
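Another common workaround is to make the watcher fall back to stat polling, which does work on shared folders, at some CPU cost. babel's --watch uses chokidar under the hood, and chokidar honors the CHOKIDAR_USEPOLLING and CHOKIDAR_INTERVAL environment variables, so running CHOKIDAR_USEPOLLING=true babel --watch ... inside the guest may be enough. A minimal standalone sketch of the same idea (paths and interval are placeholders):

    // Poll mtimes instead of relying on inotify events, which VirtualBox
    // shared folders don't forward into the guest.
    const chokidar = require('chokidar');

    const watcher = chokidar.watch('src/**/*.js', {
      usePolling: true, // stat-based polling works on shared folders
      interval: 500,    // ms between polls: trades CPU for latency
    });

    watcher.on('change', (file) => console.log(`recompile ${file}`));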

Remove execute permission on file downloaded on a Mac

We have a web app running on a Windows server, which allows a user to do some processing and download the results. The result is a set of files which are dynamically created on the server and zipped into a single file for facilitating the download process.
Everything works fine on Windows, but when users download the file from the web app on a Mac, the contents of the zip file have the execute (chmod +x) permission set (I presume the same happens on other *NIX and Linux machines). This can, of course, be removed by running chmod -x, but is there a way to keep the execute permission from being set in the first place, so that when the files are downloaded on a Mac they don't have it by default?
I believe it's not possible with a zip created the usual way on Windows: such .zip files don't record Unix permissions, so on a Mac the unarchiver has to fall back to a default, and it picks the most permissive one (otherwise applications inside the zip that need to be executable wouldn't be marked as such).
Tar archives, for instance, do record permissions, but they'd be a bit more difficult to create on a Windows server.
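That said, the zip format can carry per-entry Unix mode bits in its external attributes, so if the server-side zip library exposes them you can set a non-executable mode explicitly when building the archive. A short sketch, assuming the Node.js archiver package (the web app's actual stack isn't stated):

    const fs = require('fs');
    const archiver = require('archiver');

    const archive = archiver('zip');
    archive.pipe(fs.createWriteStream('results.zip'));
    // 0o644 = rw-r--r--: extracted files won't carry the execute bit
    archive.file('report.csv', { name: 'report.csv', mode: 0o644 });
    archive.finalize();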
