fs mkdir/copy intermittently fails in protected directory, despite having permissions - macOS

I have an Electron app on macOS with Full Disk Access. I am using fs (fs-extra, given the calls below) to make a directory in a protected folder and copy files from a temp folder to the new directory.
When using fs.copy, I periodically get two different types of errors:
If the directory already exists and is owned by the user:
EPERM errors (operation not permitted, unlink xxx) when attempting to overwrite the existing directory, specifically when replacing a manifest.json file. This is very intermittent.
If the directory does not exist or is owned by root:
EACCES errors when attempting to make the directory or copy files to the new location.
Code:
const fs = require('fs-extra'); // ensureDir/copy come from fs-extra

await Promise.all(sourceDirs.map(async (sourceDir, idx) => {
  try {
    // Create the destination directory if it does not already exist
    await fs.ensureDir(destPaths[idx]);
  } catch (e) {
    console.log('Directory does not exist and could not be created:', e);
  }
  try {
    // Recursively copy the temp folder into the destination
    await fs.copy(sourceDir, destPaths[idx]);
  } catch (e) {
    console.log('Copy error:', e);
  }
}));

After some more research, I determined that the directory's read/write permissions varied based on which entity had created it. Some elements of the directory and its children were owned by root, with everyone else having only read permissions, while other folders were writable by everyone.
Programmatically, the only way to solve this was to spawn a chmod command with sudo to update the permissions. In my case, there isn't any issue with taking ownership of the directory.
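Something along these lines worked for me. This is only a minimal sketch: makeWritable is an illustrative name, the exact chmod flags may differ from what you need, it assumes the target path contains no quote characters, and it escalates via AppleScript's administrator prompt rather than a bundled privileged helper:

const { execFile } = require('child_process');

// Sketch: prompt the user for admin rights via AppleScript, then run chmod
// so the current user can read/write the directory and its children.
function makeWritable(dirPath, callback) {
  const shellCmd = `chmod -R u+rwX '${dirPath}'`;
  const script = `do shell script "${shellCmd}" with administrator privileges`;
  execFile('osascript', ['-e', script], callback); // callback(err, stdout, stderr)
}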

Related

Download file in Laravel from AWS S3 (non-public bucket)

I am able to save all my files in the bucket but am having difficulty with downloads.
My code is:
$url = Storage::disk('s3')->temporaryUrl(
$request->file, now()->addMinutes(10)
);
return Storage::disk('s3')->download($url);
The full file path is stored in $request->file.
Example path: https://bucket_name.privacy_region_info/folder_inside_bucket/cTymyY2gzakfczO3j3H2TtbJX4eeRW4Uj073CZUW
I am getting the following: https://prnt.sc/1ip4g77
Did I not understand the purpose of generating a temporaryUrl? How can I download files from a non-public S3 bucket?
BTW I am using Laravel 8 and league/flysystem-aws-s3-v3 1.0.29.
The error message you have shown suggests your user does not have the correct permissions, or that the file does not exist.
If you are sure the file exists, I would suspect a permissions issue.
In AWS IAM, make sure the user has a policy attached that grants the correct permissions.
In this case, from the comments, I can see the user only has "Write" permissions. You will need explicit "Read" permissions too.
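As a sketch, a minimal IAM policy statement granting read access could look like the following (the bucket name is a placeholder, and depending on your setup you may also need actions such as s3:ListBucket):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::your-bucket-name/*"
    }
  ]
}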

SFTP upload file Permission denied

I'm trying to upload an Excel file using SFTP to a Linux machine from my local Windows PC.
Here is my code:
import com.jcraft.jsch.Channel;
import com.jcraft.jsch.ChannelSftp;
import com.jcraft.jsch.JSch;
import com.jcraft.jsch.JSchException;
import com.jcraft.jsch.Session;
import com.jcraft.jsch.SftpException;
import java.io.File;
import java.util.Properties;

private void uploadToSftp() {
    try {
        ChannelSftp sftpClient = null;
        Channel channel = null;
        JSch jsch = new JSch();
        Session session = jsch.getSession("username", "host", 22);
        session.setPassword("password");
        Properties config = new Properties();
        config.put("StrictHostKeyChecking", "no");
        session.setConfig(config);
        session.connect();
        channel = session.openChannel("sftp");
        channel.connect();
        sftpClient = (ChannelSftp) channel;
        sftpClient.cd("/var/www/folder");
        File localFile = new File("C:\\Workspace\\upload-file\\test.xlsx");
        sftpClient.put(localFile.getAbsolutePath(), localFile.getName());
        sftpClient.disconnect();
        channel.disconnect();
        session.disconnect();
    } catch (JSchException e) {
        e.printStackTrace();
    } catch (SftpException e) {
        e.printStackTrace();
    }
}
but every time I run this application I get this error:
3: Permission denied
at com.jcraft.jsch.ChannelSftp.throwStatusError(ChannelSftp.java:2873)
at com.jcraft.jsch.ChannelSftp._put(ChannelSftp.java:594)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:475)
at com.jcraft.jsch.ChannelSftp.put(ChannelSftp.java:365)
Does anyone know what the problem could be and how I can solve it?
You seem to be uploading your local file "C:\Workspace\upload-file\test.xlsx" to the remote directory "/var/www/folder" over SFTP.
I guess you have all permissions for reading, writing, executing, etc. on your local file ("C:\Workspace\upload-file\test.xlsx"), but your remote folder, "/var/www/folder", might not accept your application's access, including the "upload" action.
SOLUTION:
The simplest way to solve this issue is to grant all users permission to do anything in your upload target directory ("/var/www/folder"). Try this Linux command to check the permissions on your upload folder:
ls -ld /var/www/folder
If you see that your /var/www/folder directory does not allow writing or reading (e.g. drwxr-xr-x) for normal users, grant permissions on this folder with the following command:
chmod 777 /var/www/folder
# check the permissions again
ls -ld /var/www/folder
Once the target folder's permissions are sufficient (drwxrwxrwx), run your application again.
NOTE:
Giving all permissions to all users is not considered good practice.
Use this solution only for testing, and change the permission settings to fit your requirements later. For more detail, please check this link (Click).
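For instance, a safer alternative (assuming the SFTP login is the "username" from the code above) is to give that user ownership of the target directory instead of opening it up to everyone:
chown username:username /var/www/folder
chmod 755 /var/www/folder
This keeps write access limited to the uploading user while still letting others read the files.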

Cannot install search-guard - "ERROR: `elasticsearch` directory is missing in the plugin zip"

As the topic states, I have a problem while trying to install the search-guard plugin for my ELK stack:
[XXX#XXXX bin]$ ./elasticsearch-plugin install -b file:///home/xxxx/search-guard-6-6.2.1-21.0.zip
-> Downloading file:///home/xxxx/search-guard-6-6.2.1-21.0.zip
[=================================================] 100%  
ERROR: `elasticsearch` directory is missing in the plugin zip
I tried to do it from a custom directory and then, following this answer, from home, but it did not help. When I unzip the archive, I can see that there is a directory called "elasticsearch" in it.
Does anyone have any suggestions how to proceed with that?
The error comes from InstallPluginCommand.class within lib\plugin-cli-x.x.x.jar and is exactly what it says. Here's a clipped portion of the code as it reads through the entries in the zip file:
ZipInputStream zipInput = new ZipInputStream(Files.newInputStream(zip));
try {
    ZipEntry entry;
    while ((entry = zipInput.getNextEntry()) != null) {
        if (entry.getName().startsWith("elasticsearch/")) {
            hasEsDir = true;
            ...
        }
    }
    if (!hasEsDir) {
        throw new UserException(2, "`elasticsearch` directory is missing in the plugin zip");
    }
I realize that doesn't help you much, but as a last-ditch effort, if you can't get to the root cause of the issue, one thing I did to get over the hurdle was to just copy the files from the zip file into the ES plugins directory (/usr/share/elasticsearch/plugins in our case). They go within /plugins, under a directory whose name is the name Elasticsearch knows the plugin by.
The only two gotchas are:
You need to know the directory name to create under /plugins.
You need to know the replacement values for the plugin-descriptor.properties file.
If you can get that far, you can start ES and it should load everything fine.
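Purely as an illustration (the plugin directory name under /plugins and the zip's internal layout are assumptions here; adjust both to your install), the manual copy could look like:
unzip /home/xxxx/search-guard-6-6.2.1-21.0.zip -d /tmp/sg
mkdir -p /usr/share/elasticsearch/plugins/search-guard-6
cp -r /tmp/sg/elasticsearch/* /usr/share/elasticsearch/plugins/search-guard-6/
Then fill in plugin-descriptor.properties as noted above before starting ES.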

Cannot Create New Directory in MacOS app

I am working on a Mac app. I have been struggling to create a directory in "Desktop" on macOS. I am working on High Sierra. It was working fine, but suddenly the app is unable to create a directory on the Desktop. I get the following permission error. I am using Xcode 9.3 and Swift 4.1.
Error Domain=NSCocoaErrorDomain Code=513 "You don’t have permission to save the file “TestDir” in the folder “Desktop”." UserInfo={NSFilePath=/Users/sj/Library/Containers/com.user.TestSync/Data/Desktop/TestDir, NSUnderlyingError=0x60400044cc60 {Error Domain=NSPOSIXErrorDomain Code=1 "Operation not permitted"
Here is my code:
class DBManager: NSObject {
    static func getDirectoryPath() -> URL {
        let homeDirectory = URL(fileURLWithPath: NSHomeDirectory())
        let desktopPath = homeDirectory.appendingPathComponent("Desktop")
        let databaseFolder = desktopPath.appendingPathComponent("TestDir")
        return databaseFolder
    }

    static func createFolderIfNotExist() {
        let databaseFolder = DBManager.getDirectoryPath()
        if !FileManager.default.fileExists(atPath: databaseFolder.path) {
            do {
                try FileManager.default.createDirectory(atPath: databaseFolder.path, withIntermediateDirectories: true, attributes: nil)
            } catch {
                print(error)
            }
        }
    }
}
The code works fine if I create the directory inside "Documents". But for some reason, I am not able to create it inside Desktop, even though I have given read/write access for Desktop to everyone. I have also included a screenshot of the directory where I have to create it.
I tried to create a directory inside "Downloads" but ended up with the same permission issue. But, again, I was able to create a directory inside "Library". So it seems that the OS is not giving permission to create inside aliased directories, since both Desktop and Downloads are aliases.
Any help is highly appreciated.
That is because your macOS application is running in the "App Sandbox"; check the "Capabilities" tab in your target settings.
~/Library/Containers/%BundleID%/Data/Desktop is a soft link to the user's desktop, and you have no write permission there (depending on your entitlements file, not even read permission).
Depending on your use case you can
disable the sandbox mode
let the user pick a folder by opening an "Open" dialog (then you can write to this)
enable read/write in some other protected user folder (like Downloads, etc.) or
create the TestDir directly in your home directory without using any soft-linked folder
Besides what is mentioned by @ChaosCoder, we also need to set NSOpenPanel.canCreateDirectories to true:
panel.canCreateDirectories = true
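For context, here is a minimal sketch of the "let the user pick a folder" approach mentioned above (pickWritableFolder is an illustrative name); with the sandbox enabled, the app gains read/write access to whatever URL the user picks:

import Cocoa

// Sketch: ask the user to choose (or create) a folder via NSOpenPanel.
func pickWritableFolder(completion: @escaping (URL?) -> Void) {
    let panel = NSOpenPanel()
    panel.canChooseDirectories = true
    panel.canChooseFiles = false
    panel.canCreateDirectories = true // allow "New Folder" in the dialog
    panel.begin { response in
        completion(response == .OK ? panel.urls.first : nil)
    }
}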
Delete the App Sandbox capability from your Capabilities.
Click on your project in the file navigator, then your target under the TARGETS tab, make sure "Signing & Capabilities" is selected, then "All", then double-click the "x" next to "App Sandbox".

hadoop/mapreduce local job directories are not deleted

I just started using Hadoop and I noticed that local job directories are not deleted.
I am using Hadoop 2.2.0 on Windows.
Is there any configuration needed so Hadoop can clean up all the directories under “/tmp/hadoop-/mapred/local/”?
Also, after investigating and looking in the code, I found that part of the logic is in the class “org.apache.hadoop.mapred.LocalJobRunner” (hadoop-mapreduce-client-common-2.2.0):
try {
    fs.delete(systemJobFile.getParent(), true); // delete submit dir
    localFs.delete(localJobFile, true);         // delete local copy
    // Cleanup distributed cache
    localDistributedCacheManager.close();
} catch (IOException e) {
    LOG.warn("Error cleaning up "+id+": "+e);
}
Why not just use the following (as is the case for systemJobFile)?
localFs.delete(localJobFile.getParent(), true); // delete local copy
Is it correct to do that?
I tried it and it looks like it fixes the issue, but I am not sure.
Update: I just noticed that a lot of "attempt_local****" directories are still there, not deleted by Hadoop!
Thank you.
As I had to find a quick solution and I don't like the idea of creating a script to clean up these directories, I made this patch (org.apache.hadoop.mapred.LocalJobRunner):
// line: 114
private Path localCacheJobDir;

// line: 156
this.localCacheJobDir = localFs.makeQualified(new Path(new Path(new Path(conf.getLocalPath(jobDir), user), JOBCACHE), jobid.toString()));

// line: 492
try {
    fs.delete(systemJobFile.getParent(), true);  // delete submit dir
    final Path localJobFilePath = localJobFile.getParent();
    localFs.delete(localJobFile, true);          // delete local copy
    // Cleanup distributed cache
    localDistributedCacheManager.close();
    localFs.delete(localJobFilePath, true);      // delete the parent dir of the local copy
    localFs.delete(localCacheJobDir, true);      // delete the job cache dir
} catch (IOException e) {
    LOG.warn("Error cleaning up "+id+": "+e);
}
I have never worked with Hadoop before and I just started playing with it in the last two days, so I don't know whether my solution will have any unwanted impact on Hadoop. Unfortunately, this is the best solution I have.
There are some configuration keys like
mapreduce.task.files.preserve.failedtasks
in mapred config.
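For illustration, such a key would be set in mapred-site.xml like this (false is the default; true preserves the files of failed tasks instead of cleaning them up):

<property>
  <name>mapreduce.task.files.preserve.failedtasks</name>
  <value>false</value>
</property>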
Anyway...
By default Hadoop should clear the temporary job directory.
On success, the files are moved to ${mapreduce.output.fileoutputformat.outputdir}.
If things go wrong, the files are deleted.
So I'm not sure this fix addresses what is really happening on your installation.
