While creating a view I got this error: cleartool: Error: Failed to record hostname in storage directory - clearcase-ucm

I am creating a view and I got this error: cleartool: Error: Failed to record hostname in storage directory.
Check that root or the ClearCase administrators group has permission to write to this directory.
I tried all the possible troubleshooting steps using online help and other resources, but no luck. Can anyone help?

You can check the technote "Registering a VOB or creating a new View or VOB reports error: Failed to record hostname":
View Tool
Error creating view - '<view-tag>'
Fail to record hostname "HOST" in storage directory "<path to view storage>".
Check that root or the ClearCase administrators group has permission to write to this directory.
Unable to create view "<global path to view storage>".
Cause
The cause of the error ultimately stems from the inability of ClearCase to successfully record the hostname in the .hostname file located in the storage directory of the VOB or view.
In addition to the various solutions, check whether the error persists on different clients and for different users.
If not, it is likely linked to your profile.
Check, for instance, your CLEARCASE_PRIMARY_GROUP and your credmap (credential mapping).
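For instance, on a Unix client a quick check of both values looks like this (a minimal sketch; the Windows equivalent is echo %CLEARCASE_PRIMARY_GROUP% in a cmd shell):
echo $CLEARCASE_PRIMARY_GROUP   # the group ClearCase will treat as your primary group
id                              # the groups the OS actually puts you in; the two should agree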
In my case, it was always a matter of applying the right fix_prot to the view/VOB storage.
For view storage, it was this exact sequence:
alias sfp sudo /usr/atria/etc/utils/fix_prot
sfp -force -rec -chown <owner> -chgrp <ClearCaseUsers> -chmod 775 /path/to/viewStorage/yourView.vws
sfp -force -root -chown <owner> -chgrp <ClearCaseUsers> /path/to/viewStorage/yourView.vws
Replace <owner> and <ClearCaseUsers> with the right owner and group.
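As an illustration, assuming an owner of jdoe and a group of ccusers (both hypothetical names), the expanded sequence would look like this, with a final ls -ld to verify the result:
sudo /usr/atria/etc/utils/fix_prot -force -rec -chown jdoe -chgrp ccusers -chmod 775 /viewstore/jdoe.vws
sudo /usr/atria/etc/utils/fix_prot -force -root -chown jdoe -chgrp ccusers /viewstore/jdoe.vws
ls -ld /viewstore/jdoe.vws   # owner, group and mode should now match what you passed in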

On creating a view, other common problems for remotely stored views are:
1) The "clearcase" group on the client and server do not point to the same group. You would need to get clearbug2's of both hosts and compare the albd credentials and the host data in the registry data in the "clearcase_info" directory of the .zip file.
2) You are attempting to create a Unix-hosted view from a Windows client.

Related

Apache Drill: Local udf directory must be writable for application user error

I'm trying to get Drill up and running on my machine. However, whenever I enter drill-embedded mode (bin/drill-embedded on Bash), I get this error:
Error: Failure in starting embedded Drillbit: java.lang.IllegalStateException: Local udf directory [/tmp/drill/udf/udf/local] must be writable for application user (state=,code=0)
If I try to run a query at this point, it'll give back:
No current connection
Any idea how to fix this? I've tried starting with a clean shell with no luck. Is it a permissions issue?
You have to give the directory /tmp/drill/udf/udf/local write access. Since it is a directory in /tmp, you might need root access to change its permissions, so use sudo. To grant the permission, use this:
chmod -R 777 /tmp/drill/udf/udf/local
Also make sure the user has at least read permission on the parent directories; otherwise you will get a permission-denied error again.
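If you would rather not open the directory to everyone, a narrower sketch (assuming Drill runs as your current user) is to take ownership of the tree and grant write access to that user only:
sudo mkdir -p /tmp/drill/udf/udf/local               # create the tree if it is missing
sudo chown -R "$(whoami)" /tmp/drill/udf/udf/local   # hand it to the Drill user
chmod -R u+rwX /tmp/drill/udf/udf/local              # user gets read/write; directories stay traversable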

winutils.exe chmod command doesn't set permission

D:\>echo %HADOOP_HOME%
D:\Apps\winutils\hadoop-2.7.1
I created the tmp\hive folder on the same disk as HADOOP_HOME:
D:\>dir tmp\hive
Directory of D:\tmp\hive
06/13/2016 01:13 PM <DIR> .
06/13/2016 01:13 PM <DIR> ..
0 File(s) 0 bytes
2 Dir(s) 227,525,246,976 bytes free
Trying to figure out what permissions are set:
D:\>winutils.exe ls \tmp\hive
FindFileOwnerAndPermission error (1789): The trust relationship between this workstation and the primary domain failed.
When I tried chmod on this folder it seemed to work:
winutils.exe chmod 777 \tmp\hive
but ls shows the same exception.
Does anyone have an idea what is going on? Moreover, it worked for me a couple of hours ago, but now my Spark application fails with an exception:
java.lang.RuntimeException: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:522)
I am quite late here, but I am still posting this so it might help someone in the future.
While setting the permissions, make sure you are using the correct path to winutils.exe (try to use the complete path). For me, winutils.exe was on the C drive:
C:\path\to\winutils.exe chmod -R 777 C:\tmp\hive
Run the ls command to check the permissions; the result should look like the image linked here (setting and checking the permissions):
https://i.stack.imgur.com/vE9vl.png
If this is your corporate system, then you must be on the same network, using VPN, FortiClient, or whatever other tool your organisation uses:
https://support.microsoft.com/en-us/kb/2771040
This looks like a domain access issue; please ensure you can access the domain and try again.
After ensuring domain access, the error below disappeared:
Caused by: java.lang.RuntimeException: The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rw-rw-rw-
I'm late here, and I just encountered this issue. Writing this so it will help somebody.
If you are using your office laptop, make sure you are connected to the office network and retry. The Member of Domain setting points to your office network, so that should solve the issue. To check it:
1. Log on to Windows 10 using a local Administrator account.
2. Hold the Windows logo key and press E to open File Explorer.
3. Right-click on This PC, choose Properties, then click Advanced System Settings.
4. Choose the Computer Name tab and select Change to see the value configured.
I am a newbie here, so I might be wrong, but I think you need to add -R to the command, as below:
winutils chmod -R 777 \tmp\hive
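For completeness, a minimal set-and-verify sequence, reusing the D:\Apps\winutils\hadoop-2.7.1 path from the question (adjust both paths to your own layout):
D:\Apps\winutils\hadoop-2.7.1\bin\winutils.exe chmod -R 777 D:\tmp\hive
D:\Apps\winutils\hadoop-2.7.1\bin\winutils.exe ls D:\tmp\hive
REM the ls output should now start with drwxrwxrwx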

Boot2Docker: Can't create directory: Protocol Error

Trying to learn Docker on a Windows machine. When I was trying to create a new directory inside the shared Users folder (/c/Users) by executing sudo mkdir sample, I got an error saying Can't create directory 'sample': Protocol error.
Any pointers to resolve this issue would be helpful.
VirtualBox does mount C:\Users automatically (see VirtualBox Guest Additions), but that does not mean you can create anything directly in C:\Users (not without Administrative privileges, and sudo does not apply here).
You can create anything you want in your own folder: C:\Users\mylogin
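For example, from the Boot2Docker shell (with mylogin standing in for your own Windows user name):
cd /c/Users/mylogin   # your own profile folder is writable through the shared mount
mkdir sample          # works here without sudo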

Hadoop Web Interface : Access Denied

I have set up a Hadoop pseudo-distributed cluster (hadoop-2.5.1) on a Linux machine, following the steps here.
I am able to access the web interface at http://localhost:50070 if I log in as root.
However, if I log in as any other user, I get the following error in the browser:
Access Denied : You are not allowed to access the document at location http://localhost:50070
How to grant access to Hadoop Web Interface to other users?
The tutorial that you were using does not consider other users that may access the hadoop folder. Indeed, only the root user has the right to access the hadoop folder (which is the installation folder).
I suggest redoing the installation while taking into consideration the user that you want to create. This user will manipulate the hadoop folder and installation. Please try to follow this tutorial: in step 2 you will create the user, and in step 3 you will ssh in as this user and continue the installation as that user. Make sure that this newly created user has the appropriate rights to the hadoop folder and its sub-folders.
Try this command as root:
chown -R hadoop:root /usr/local/hadoop
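Afterwards you can verify the change took effect (hadoop and /usr/local/hadoop being the user and path from this answer):
ls -ld /usr/local/hadoop   # the owner column should now read hadoop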

File ownership and access

I have an established workflow, but a change has caused some complications. An upstream Windows server delivers a file to my Solaris server where the file is accessed by my Windows 2003 server.
The problem is that the ownership or permissions on a file delivered daily to the Solaris server have changed, and now the service running on my Windows server cannot copy and delete the file.
My Windows server has a parent directory on the Solaris server mapped and authenticated by User1.
The failing file comes in with an ownership of User2 and permissions of 664.
The failing file can be copied and deleted directly through Windows Explorer without additional authentication. A scheduled task batch file also can perform the copy and delete without authentication. It is only the running service which is unable to perform these tasks.
For comparison, there is a collection of files following the same workflow. These have an ownership of User1 and permissions of 755.
User1 is a member of group User1.
User2 is a member of group staff.
The Solaris directory holding the files has permissions of 755 and ownership of User1.
What change can I make to give my Windows services ongoing access to files with both ownerships?
UPDATE:
I had to use a persistent shell script to change the file ownership.
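A minimal sketch of such a script, assuming the files land in /export/incoming and should end up owned by User1 with 755 permissions (the directory path and polling interval are illustrative, not from the original setup):
#!/bin/sh
# Periodically normalize ownership and permissions of newly delivered files.
while true; do
    chown User1:User1 /export/incoming/* 2>/dev/null
    chmod 755 /export/incoming/* 2>/dev/null
    sleep 60
done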
