Oracle concurrent program cannot write to directory

We have an Oracle EBS concurrent program, run as applmgr, that needs to load data using SQL*Loader and then move the input data to an archive location:
# Load the data file with SQL*Loader, writing the log under $w_directory/log
sqlldr $w_login \
control=$w_directory/$w_ctrl \
data=$w_directory/$w_data \
log=$w_directory/log/$w_data.log
# If the log was produced, move the input file to the archive directory
[[ -f $w_directory/log/$w_data.log ]] && \
mv $w_directory/$w_data $w_directory/archive/$w_data.archive
However, it fails in the mv part:
34 Rows successfully loaded.
0 Rows not loaded due to data errors.
0 Rows not loaded because all WHEN clauses were failed.
0 Rows not loaded because all fields were null.
mv: cannot move `/w_directory/w_data.dat' to `/w_directory/archive/w_data.dat.archive': Permission denied
The applmgr user has write permissions on the directories and file:
drwxrwxr-x 14 otheruser othergroup 4096 Apr 16 2012 /w_directory
drwxrwxr-x 14 otheruser othergroup 4096 Apr 16 2012 /w_directory/log
drwxrwxr-x 14 otheruser othergroup 4096 Apr 16 2012 /w_directory/archive
-rwxrwxr-x 14 otheruser othergroup 4096 Apr 16 2012 /w_directory/w_data.dat
$ id applmgr
uid=1003(applmgr) gid=1000(dba) groups=1000(dba),1003(othergroup)
We can run the above program manually from the command line as applmgr without any issues, but it fails with the above error when run as a concurrent program. We have already bounced the server as well.
The server is on RHEL 6.4. Oracle EBS is R12.1.3.
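Since the same script works when run interactively as applmgr but fails under the concurrent manager, it helps to capture the effective identity and write access the concurrent request actually has at run time. The sketch below is diagnostic only and reuses the placeholder variables from the script above; the perm_check.log file name is made up:
# Temporary diagnostic: run just before the mv from inside the concurrent program,
# then compare the output with the same commands run from an interactive applmgr shell.
{
  echo "identity: $(id)"
  echo "umask: $(umask)"
  [[ -w $w_directory ]]         && echo "source dir writable"  || echo "source dir NOT writable"
  [[ -w $w_directory/archive ]] && echo "archive dir writable" || echo "archive dir NOT writable"
} >> $w_directory/log/perm_check.log 2>&1
mv needs write permission on both the source and the target directory, and a long-running concurrent manager process keeps the group list it was started with, so the id output from inside a request is the most useful thing to compare against the interactive session.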

Related

Developing inside docker on WSL2-Ubuntu from vscode

I am trying to run docker inside WSL (I am running Ubuntu in WSL), and I am new to docker. The docs say:
To get the best out of the file system performance when bind-mounting files:
Store source code and other data that is bind-mounted into Linux containers (i.e., with docker run -v <host-path>:<container-path>) in the Linux filesystem, rather than the Windows filesystem.
Linux containers only receive file change events (“inotify events”) if the original files are stored in the Linux filesystem.
Performance is much higher when files are bind-mounted from the Linux filesystem, rather than remoted from the Windows host. Therefore avoid docker run -v /mnt/c/users:/users (where /mnt/c is mounted from Windows).
Instead, from a Linux shell use a command like docker run -v ~/my-project:/sources <my-image> where ~ is expanded by the Linux shell to $HOME.
I also came across the following:
Run sudo docker run -v "$HOME:/host" --name "[name_work]" -it docker.repo/[name]. With [$HOME:/host], you can access your home directory in the /host dir in the docker image. This allows you to access your files on the local machine inside the docker. So you can edit your source code in your local machine using your favourite editor and run them directly inside the docker. Make sure that you have done this correctly. Otherwise, you may need to copy files from the local machine to docker for each edit (a painful job).
I am not able to understand the format of the parameter passed to the -v option and what it does. I am thinking that it will allow me to access Ubuntu directories inside docker, so $HOME:/host will map Ubuntu's home directory to /host inside the container.
Q1. But what is /host?
Q2. Can I do what is stated by the above two quotes together? I mean, is what they are saying compatible? I guess yes. All they are saying is that I should not mount from a Windows directory like /mnt/<driveletter>/...; if I mount a Linux directory like $USER/..., it will give better performance, right?
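To make the -v format concrete: it is -v <host-path>:<container-path>, exactly as in the first quote. A small sketch of the two cases the docs contrast (the image name and project paths here are placeholders, not taken from the question):
# Preferred: the bind-mounted source lives in the Linux (ext4) filesystem of the WSL distro.
docker run -v ~/my-project:/sources my-image
# Discouraged: the source lives on the Windows drive and is remoted through /mnt/c,
# which is slower and does not deliver inotify file-change events into the container.
docker run -v /mnt/c/Users/me/my-project:/sources my-image
So the two quotes are compatible: both describe a plain bind mount; the second one simply uses $HOME as the host path and /host as the container path.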
I tried running it to understand what happens:
~$ docker run -v "$HOME:/host" --name "mydokr" -it docker.repo.in/dokrimg
root@f814974a1cfb:/home# ls
root@f814974a1cfb:/home# ll
total 8
drwxr-xr-x 2 root root 4096 Apr 15 11:09 ./
drwxr-xr-x 1 root root 4096 Sep 22 07:16 ../
root@f814974a1cfb:/home# pwd
/home
root@f814974a1cfb:/home# cd ..
root@f814974a1cfb:/# ll
total 64
drwxr-xr-x 1 root root 4096 Sep 22 07:16 ./
drwxr-xr-x 1 root root 4096 Sep 22 07:16 ../
-rwxr-xr-x 1 root root 0 Sep 22 07:16 .dockerenv*
lrwxrwxrwx 1 root root 7 Jul 3 01:56 bin -> usr/bin/
drwxr-xr-x 2 root root 4096 Apr 15 11:09 boot/
drwxr-xr-x 5 root root 360 Sep 22 07:16 dev/
drwxr-xr-x 1 root root 4096 Sep 22 07:16 etc/
drwxr-xr-x 2 root root 4096 Apr 15 11:09 home/
drwxr-xr-x 5 1000 1001 4096 Sep 22 04:52 host/
lrwxrwxrwx 1 root root 7 Jul 3 01:56 lib -> usr/lib/
lrwxrwxrwx 1 root root 9 Jul 3 01:56 lib32 -> usr/lib32/
lrwxrwxrwx 1 root root 9 Jul 3 01:56 lib64 -> usr/lib64/
lrwxrwxrwx 1 root root 10 Jul 3 01:56 libx32 -> usr/libx32/
drwxr-xr-x 2 root root 4096 Jul 3 01:57 media/
drwxr-xr-x 2 root root 4096 Jul 3 01:57 mnt/
drwxr-xr-x 2 root root 4096 Jul 3 01:57 opt/
dr-xr-xr-x 182 root root 0 Sep 22 07:16 proc/
drwx------ 1 root root 4096 Aug 24 03:54 root/
drwxr-xr-x 1 root root 4096 Aug 11 10:24 run/
lrwxrwxrwx 1 root root 8 Jul 3 01:56 sbin -> usr/sbin/
drwxr-xr-x 2 root root 4096 Jul 3 01:57 srv/
dr-xr-xr-x 11 root root 0 Sep 22 03:32 sys/
-rw-r--r-- 1 root root 1610 Aug 24 03:56 test_logPath.log
drwxrwxrwt 1 root root 4096 Aug 24 03:57 tmp/
drwxr-xr-x 1 root root 4096 Aug 11 10:24 usr/
drwxr-xr-x 1 root root 4096 Jul 3 02:00 var/
root@f814974a1cfb:/home# cd ../host
root@f814974a1cfb:/host# ll
total 36
drwxr-xr-x 5 1000 1001 4096 Sep 22 04:52 ./
drwxr-xr-x 1 root root 4096 Sep 22 07:16 ../
-rw-r--r-- 1 1000 1001 220 Sep 22 03:38 .bash_logout
-rw-r--r-- 1 1000 1001 3771 Sep 22 03:38 .bashrc
drwxr-xr-x 3 1000 1001 4096 Sep 22 04:56 .docker/
drwxr-xr-x 2 1000 1001 4096 Sep 22 03:38 .landscape/
-rw-r--r-- 1 1000 1001 0 Sep 22 03:38 .motd_shown
-rw-r--r-- 1 1000 1001 921 Sep 22 04:52 .profile
-rw-r--r-- 1 1000 1001 0 Sep 22 03:44 .sudo_as_admin_successful
drwxr-xr-x 5 1000 1001 4096 Sep 22 04:52 .vscode-server/
-rw-r--r-- 1 1000 1001 183 Sep 22 04:52 .wget-hsts
So I am not getting what's happening here. I know docker has its own file system.
Q3. Is it that what I am finding at /home and /host is indeed the container's own file system?
Q4. Also, what happened to -v $HOME:/host here?
Q5. How can I do what is stated by the 2nd quote:
This allows you to access your files on the local machine inside the docker. So you can edit your source code in your local machine using your favourite editor and run them directly inside the docker.
Q6. How do I connect vscode to this container? From WSL-Ubuntu, I could just run code . to launch vscode. But the same does not seem to work here:
root@f814974a1cfb:/home# code .
bash: code: command not found
This link says:
A devcontainer.json file can be used to tell VS Code how to configure the development container, including the Dockerfile to use, ports to open, and extensions to install in the container. When VS Code finds a devcontainer.json in the workspace, it automatically builds (if necessary) the image, starts the container, and connects to it.
But I guess this describes starting up/creating a new container from vscode, not connecting to an already existing container. I am not able to find my devcontainer.json. I downloaded this container image using docker pull.
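As a side note on Q6: code is not installed inside the container, so it cannot be launched from in there. One way to get back into the already-existing container from the WSL shell is docker exec; a minimal sketch using the container name from the run above (it assumes the image ships bash):
# Start the container again if it has exited, then open an interactive shell inside it.
docker start mydokr
docker exec -it mydokr bash   # use sh if the image has no bash
VS Code itself can attach to a running container via the Remote - Containers (Dev Containers) extension's "Attach to Running Container..." command, which does not require a devcontainer.json.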

Writing to a mounted Windows share

I am using Ubuntu 20.04 and I am trying to write to a mounted Windows share. This is the command I am using to mount the share:
sudo mount.cifs //192.168.1.5/tv /mnt/tv -o username=xxxxxxxxxx,password=xxxxxxxxx,file_mode=0777,dir_mode=0777
I am able to view the contents of the Windows share in Ubuntu:
darren@homeserver:~$ ls -l /mnt/tv/
total 0
drwxrwxrwx 2 root root 0 Jun 30 15:33 '$RECYCLE.BIN'
drwxrwxrwx 2 root root 0 Jan 1 2019 MSOCache
drwxrwxrwx 2 root root 0 Apr 28 00:38 'Plex dance'
drwxrwxrwx 2 root root 0 Dec 30 2019 'System Volume Information'
drwxrwxrwx 2 root root 0 Jun 24 15:37 'TV Shows'
-rwxrwxrwx 1 root root 0 Jan 1 2019 desktop.ini
But if I try to create a test file I get this error:
[ Error writing lock file /mnt/tv/.test.swp: Permission denied ]
I have the Windows share permissions set to "Everyone".
Any thoughts?
Try this configuration:
-fstype=cifs,credentials=<fileWithCred>,vers=3.0,dir_mode=0777,file_mode=0777,noserverino ://<IP-Winshare>/Path
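That line is an autofs map entry. If you are mounting by hand as in the question, the same ideas (a credentials file, an explicit SMB version, and noserverino) can be applied to the original mount.cifs call; a sketch, where the credentials file path is an assumption:
# The credentials file keeps the username= and password= lines out of the command line and ps output.
sudo mount.cifs //192.168.1.5/tv /mnt/tv \
  -o credentials=/etc/samba/cred-tv,vers=3.0,file_mode=0777,dir_mode=0777,noserverino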

Cannot give myself root permissions

I am trying to give myself root access to all the files in this folder so that I do not have to sudo every command I want to run.
The file I am concerned with is pro.
When I enter ls -l I get:
drwxr-xr-x+ 12 Guest _guest 384 13 Jan 14:56 Guest
drwxrwxrwt 9 root wheel 288 13 Jan 14:30 Shared
drwxr-xr-x+ 148 Santi staff 4736 1 Apr 17:13 pro
Then I enter chmod 775 pro/
It doesn't seem to change the permissions. What can I do to fix this, and why is the folder restricting permissions even though I appear to be root?
drwxr-xr-x+ ...
The final + means that the file is governed by an ACL (access control list).
See:
apropos acl : gives you the man pages to consult
Wikipedia
Access Control Lists on the Arch wiki
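The Guest/_guest/staff/wheel names in that listing look like macOS, where ACLs are shown with ls -le and edited with chmod rather than getfacl/setfacl. A sketch of the usual first steps, illustrative only rather than a recommendation to strip the ACL blindly:
# List entries together with any ACLs attached to them.
ls -le
# Show the ACL on the pro directory itself rather than its contents.
ls -led pro
# If an ACL entry is what is blocking you, remove all ACL entries from pro (add -R to recurse).
sudo chmod -N pro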

hive script file not found exception

I am running the below command. The file is in my local directory, but I am getting the below error while running it.
[hdfs@ip-xxx-xxx-xx-xx scripts]$ ls -lrt
total 28
-rwxrwxrwx. 1 root root 17 Apr 1 15:53 hive.hive
-rwxrwxrwx 1 hdfs hadoop 88 May 7 11:53 shell_fun
-rwxrwxrwx 1 hdfs hadoop 262 May 7 12:23 first_hive
-rwxrwxrwx 1 root root 88 May 7 16:59 311_cust_shell
-rwxrwxrwx 1 root root 822 May 8 20:29 script_1
-rw-r--r-- 1 hdfs hadoop 31 May 8 20:30 script_1.log
-rwxrwxrwx 1 hdfs hdfs 64 May 8 22:07 hql2.sql
[hdfs@ip-xxx-xxx-xx-xx scripts]$ hive -f hql2.sql
WARNING: Use "yarn jar" to launch YARN applications.
Logging initialized using configuration in file:/etc/hive/2.3.4.0-3485/0/hive-log4j.properties
Could not open input file for reading. (File file:/home/ec2-user/scripts/hive/scripts/hql2.sql does not exist)
[hdfs@ip-xxx-xxx-xx-xx scripts]$
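One detail in the error text: hive reports the missing file as /home/ec2-user/scripts/hive/scripts/hql2.sql, which may not be the directory the ls above was run in. A quick way to rule out relative-path resolution, with no other assumptions, is to pass the absolute path:
# Hand hive the absolute path of the script instead of a name relative to whatever
# it treats as the working directory.
hive -f "$(pwd)/hql2.sql"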

Why does the directory vanish when I do SSHFS? How to set up an SSHFS share on Mac OS X 10.9?

I'm running Mac OS X 10.9.3 and I'm trying to set up an SSHFS file share between my MacBook Pro and a remote file system. However, when I try to do it, it doesn't work.
Strangely enough, it makes the target directory disappear. Has anyone else seen this happen? Is it a bug?
First see that I can ssh normally into the target machine:
% ssh remoteuser@XXX.XXX.XXX.XXX # <--- SSH to remote system works! See below.
remoteuser@XXX.XXX.XXX.XXX % ls -altr remoteDir
total 8
drwxr-xr-x 26 remoteuser remoteuser 4096 Jun 22 01:00 ..
drwxrwxrwx 2 remoteuser remoteuser 4096 Jun 22 01:08 .
remoteuser@XXX.XXX.XXX.XXX % exit
% # <--- Logged out of remote system
Next, I create a directory locally and verify it was created:
% pwd
/mnt
% ls
total 0
drwxr-xr-x 31 root admin 1122 Jun 18 18:34 ../
drwxr-xr-x 2 root admin 68 Jun 23 08:11 ./
% sudo mkdir share1
% ls
drwxr-xr-x 31 root admin 1122 Jun 18 18:34 ../
drwxr-xr-x 4 root admin 136 Jun 23 08:50 ./
drwxr-xr-x 2 root admin 68 Jun 23 08:50 share/
Now I try to setup the SSHFS share:
% sudo sshfs remoteuser@XXX.XXX.XXX.XXX:remoteDir /mnt/share1
remoteuser@XXX.XXX.XXX.XXX's password:
%
Ok. It seems to have worked. No errors. So let's see the share we created, shall we?
% ls
ls: share1: No such file or directory
total 0
drwxr-xr-x 31 root admin 1122 Jun 18 18:34 ../
drwxr-xr-x 3 root admin 102 Jun 23 08:12 ./
What? Not only is the File Sharing not working, but the share1 directory seems to have vanished! (Although the file system seems to know it is missing, which is weird).
Where did /mnt/share1 go and how do I setup this SSHFS?
SSHFS doesn't come with OS X AFAIK, so you should mention how you installed it. But I'm guessing sshfs is designed to be used with fstab or mount rather than be called directly. Try something like:
mount -t sshfs remoteuser@XXX.XXX.XXX.XXX:remoteDir /mnt/share1
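If sshfs was installed via Homebrew on top of FUSE for OS X, it is also commonly invoked directly as the regular user rather than through sudo or mount; FUSE mounts made as root are, by default, not accessible to other users, which may be related to share1 seeming to vanish for your own account. A sketch reusing the paths from the question (the chown step is only there because the mount point was created with sudo):
# Take ownership of the mount point, mount as yourself, and unmount when done.
sudo chown $(whoami) /mnt/share1
sshfs remoteuser@XXX.XXX.XXX.XXX:remoteDir /mnt/share1
umount /mnt/share1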
