When trying to read files deployed on my EC2 instance while running my OpenCPU app, I get a file connection error. The same code works with the single-user version on RStudio Server.
I checked the logs in /var/log/kern.log and found this:
apparmor="DENIED" operation="open" profile="opencpu-exec" name="<path to my file>" pid=1444 comm="apache2" requested_mask="r" denied_mask="r" fsuid=33 ouid=1000
What does this mean? I had already set chmod 777 on all the files the app needs to read. How can I get my app to read these files?
EDIT: I added /** r to my /etc/apparmor.d/opencpu.d/custom file. I am still not able to read my CSV files, but the kern.log file now looks like this:
apparmor="STATUS" operation="profile_replace" profile="unconfined" name="/usr/lib/connman/scripts/dhclient-script" pid=2392 comm="apparmor_parser"
I cross-checked my file paths and verified that the files are indeed present where I am trying to read them from.
Is the file stored in a directory that Apache (www-data) is allowed to read?
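Also note that AppArmor file rules must end with a trailing comma, and the profiles have to be reloaded after editing the custom file. A minimal sketch of what that could look like (the data path below is only a placeholder; use the path shown in your kern.log denial, and the restart commands assume the standard Ubuntu service scripts):

# appended to /etc/apparmor.d/opencpu.d/custom -- placeholder path, adjust to your data directory
/home/ubuntu/data/** r,

# reload the profiles and restart OpenCPU so the new rule is picked up
sudo service apparmor restart
sudo service opencpu restart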
I have a ListenFTP processor listening on a port, and when I try to connect to it via FileZilla I get the error "Failed to retrieve directory listing".
The connection seems to be established first, but then this error occurs.
NiFi is hosted on an Ubuntu server and runs in a Docker container.
The ListenFTP processor listens on port 2221.
I tried changing some settings in FileZilla based on this issue, but nothing worked.
The connection works fine on localhost: I can connect to the FTP server and transfer files.
Does someone have an idea how to solve this?
If you look at the documentation of the processor, it states that
"After starting the processor and connecting to the FTP server, an
empty root directory is visible in the client application. Folders can
be created in and deleted from the root directory and any of its
subdirectories. Files can be uploaded to any directory. Uploaded files
do not show in the content list of directories, since files are not
actually stored on this FTP server, but converted into FlowFiles and
transferred to the next processor via the 'success' relationship. It
is not possible to download or delete files like on a regular FTP
server. All the folders (including the root directory) are virtual
directories, meaning that they only exist in memory and do not get
created in the file system of the host machine. Also, these
directories are not persisted: by restarting the processor all the
directories (except for the root directory) get removed. Uploaded
files do not get removed by restarting the processor, since they are
not stored on the FTP server, but transferred to the next processor as
FlowFiles."
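Separately, since the listing works on localhost but not remotely and NiFi runs inside Docker, it is also worth checking that the ListenFTP command port (and a passive data port range, if you configure one) is published by the container. A rough sketch only: the apache/nifi image name, the 8443 UI port, and the 30000-30010 passive range are assumptions, not taken from your setup:

docker run -d --name nifi \
  -p 8443:8443 \
  -p 2221:2221 \
  -p 30000-30010:30000-30010 \
  apache/nifi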
I am trying to isolate a problem in some backend logic using a log file. I made a custom log file for this purpose, because the default log file has too much content to filter through. The module is already live, so I have to read the log file from the server to debug the problem. I noticed while committing that the log files I created were covered by gitignore. So I wanted to know how this works. Are log files generally placed in gitignore? And do servers create their own log files?
Yes, the server will create its own log files. They should not be under version control, since the information they contain is specific to the environment that produced them (your server, in this case). That is why, by default, the storage/logs directory contains a .gitignore file with the content:
*
!.gitignore
which tells Git to ignore every file in that directory except the .gitignore itself.
If your new log file is in this directory, it will not be tracked by Git either.
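If your custom log file lives somewhere else in the project, you can ignore it explicitly in the project's root .gitignore; a minimal sketch (the file name is just an example, not taken from your code):

# root .gitignore -- example name for the custom debug log
mymodule-debug.log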
I want to back up my local DynamoDB server. I have installed DynamoDB Local on a Linux machine. Some sites suggest creating a Bash script and backing up to an S3 bucket, but on a local machine we don't have an S3 bucket.
So I am stuck. Please help me. Thanks.
You need to find the database file created by DynamoDB Local. From the docs:
-dbPath value — The directory where DynamoDB will write its database file. If you do not specify this option, the file will be written to
the current directory. Note that you cannot specify both -dbPath and
-inMemory at once.
The file name would be of the form youraccesskeyid_region.db. If you used the -sharedDb option, the file name would be shared-local-instance.db.
By default, the file is created in the directory from which you ran DynamoDB Local. To restore, copy that file back and specify the same -dbPath when starting DynamoDB Local.
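So a backup is just a copy of that file. A rough sketch, assuming the -sharedDb file name, example paths, and the standard DynamoDBLocal.jar layout (adjust all of these to your installation):

# stop DynamoDB Local first so the file is not being written to, then back it up
cp ./shared-local-instance.db /backups/shared-local-instance.db

# restore: copy it back and start DynamoDB Local against the same directory
cp /backups/shared-local-instance.db /home/ubuntu/dynamodb/
java -Djava.library.path=./DynamoDBLocal_lib -jar DynamoDBLocal.jar -sharedDb -dbPath /home/ubuntu/dynamodb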
I am trying to update a module to a newer version. In the past I have manually uploaded each file carefully into the new directory and overwritten older files using FTP. However, I wanted to use SSH to do this more easily and without any file-permission problems.
I have:
Uploaded the .tgz file to the root folder (/http) on the server
Logged into the server via SSH
Changed the directory to the correct directory
Run the following command: tar -zxvf fishpig_splash.tgz
On the command line I was then given a list of all the files that had been extracted. However, if I use FTP to check any of these files, I can see that they are still the older versions and have not been overwritten.
I was expecting that the files would extract into the correct directories and overwrite any that already existed. I have tested the extraction by creating a temporary directory and extracting into that and everything worked fine.
Is there another part of this command I need to use to overwrite the files?
Thanks
Glynn
Sorry, this was just me being stupid! The tar file contained a subfolder for the extension, which I completely missed. I went down a level, zipped up only the contents, then extracted that at the root and everything worked fine. Thanks for the help though!
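For anyone hitting the same thing: you can spot the extra top-level folder by listing the archive first, and (with GNU tar) strip it during extraction instead of re-packing. A small sketch, using the archive name from the question:

# list the archive to see whether everything sits under one top-level folder
tar -tzf fishpig_splash.tgz | head

# if so, drop that leading folder while extracting over the current directory
tar -zxvf fishpig_splash.tgz --strip-components=1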
I am trying to FTP a RAR (zipped) file to another server but am having problems doing so. This is a Windows environment. I know that my FTP connection is set up correctly because I have already transferred several other RARs. The difference, from what I can tell, is that the RAR that is failing is larger: 761 MB. When I try to "put" it onto the other server, I get the following:
200 PORT command successful.
150 Opening BINARY mode data connection for WCU.rar.
> WCU.rar:Permission denied
226 Transfer complete.
However, the file is never transferred over. Is there a size limitation? And FYI, WCU.rar is a zipped directory, not a file. But I was able to successfully FTP over several other zipped directories.
It could be a size limitation, and not just on stored data but on transferred data as well.
Did you try to transfer a small file? A small file in the same format? I would have said permissions, but you said that you have already uploaded files to this server.
Just to help you debug, you can add both of these commands to your FTP session:
ftp> hash
ftp> bin
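(hash prints a # for every data block transferred, so you can see whether any data moves at all; bin forces binary mode so the .rar is not corrupted in transit.) Then retry the upload:

ftp> put WCU.rar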
WCU.rar:Permission denied
You don't have permission to write to that directory. You need write permission on the target folder in order to upload the file.
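If you control the server, granting the FTP account write (modify) permission on the target folder fixes this. A rough sketch for a Windows host, assuming an IIS FTP site whose users map to the IIS_IUSRS group; the folder path and account name are placeholders, so adjust both to your setup:

icacls "C:\ftproot\uploads" /grant "IIS_IUSRS:(OI)(CI)M"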