smb share not accessible by user - smb

I cannot access an SMB share with "user2" (a member of the "sambausers" group) that is configured like this:
shared folder is owned by "user1:sambausers"
shared folder has chmod "drwsrwsr-x"
"smb.conf" has the share configured with "valid users = user2"
If I modify "smb.conf" to "valid users = @sambausers" it works!?
I don't understand that: "user2" is a member of "sambausers"!?

The answer was simple: I'm stupid -.-
Part of the problem was that I had not found any logs, so I knew nothing about the cause.
The key was using the smbclient command line utility, which gave me an error message about invalid credentials.
One search later I found that Samba passwords are added via the smbpasswd command.
I really wasn't aware of how passwords are stored in "tdbsam" mode.
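For reference, roughly what that looked like (the server and share names here are placeholders, not my actual setup):

smbclient //server/share -U user2   # with wrong or missing credentials this typically fails with NT_STATUS_LOGON_FAILURE
sudo smbpasswd -a user2             # adds a separate Samba password for user2 to the tdbsam database

Once user2 has an entry in the tdbsam database, authentication with that account should work.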

Related

Why is the .pgpass file not supplying a password for the pg_dump, vacuumdb, or reindexdb commands?

I'm trying to execute several different PostgreSQL commands inside of different bash scripts. I thought I had the .pgpass file properly configured, but when I try to run pg_dump, vacuumdb, or reindexdb, I get errors about how a password isn't being supplied. For my bash script to execute properly, I need these commands to return an exit code of 0.
I'm running PostgreSQL 9.5.4 on macOS 10.12.6 (16G1408).
In an admin user account [neither root nor postgres], I have a .pgpass file in ~. The .pgpass file contains:
localhost:5432:*:postgres:DaVinci
The user is indeed postgres and the password is indeed DaVinci.
Permissions on the .pgpass file are 600.
In the pg_hba.conf file, I have:
# pg_hba.conf file has been edited by DaVinci Project Server. Hence, it is recommended to not edit this file manually.
# TYPE DATABASE USER ADDRESS METHOD
local all all md5
host all all 127.0.0.1/32 md5
host all all ::1/128 md5
So, for example, from a user account [neither root nor postgres], I run:
/Library/PostgreSQL/9.5/pgAdmin3.app/Contents/SharedSupport/pg_dump --host localhost --username postgres testworkflow13 --blobs --file /Users/username/Desktop/testdestination1/testworkflow13_$(date "+%Y_%m_%d_%H_%M").backup --format=custom --verbose --no-password
And I get the following error:
pg_dump: [archiver (db)] connection to database "testworkflow13" failed: fe_sendauth: no password supplied
I get the same result if I run this with sudo as well.
Curiously, pg_dump does execute, and does export out a .backup file to the testdestination1 directory, but since it throws an error, if it's in a bash script, the script is halted.
Where am I going wrong? How can I make sure that the .pgpass file is being properly read so that the --no-password flag in the command works?
Please start by reading the official docs.
Also, even though this topic is more than 2 years old, I strongly suggest updating to at least version 10; in any case nothing relevant has changed around .pgpass.
.pgpass needs to be chmod 600, fine, but the user running the command must be able to read it, so that user must be the owner of the file.
Please remove the --no-password flag; it just adds confusion and is not needed.
Using 127.0.0.1 instead of localhost clarifies where you are connecting; they are "usually" the same.
... from a user account [neither root nor postgres] ...
The user you are running the commands as must have read access to .pgpass, as said, so you have to clarify that and provide that file to that user; the PGPASSFILE environment variable could be useful for you here.
Another way is to use a .pg_service.conf file, with or without the .pgpass; from what you have written it looks like that may be more appropriate.
Also, you could set PGPASSWORD in the user's environment.
Think about security: some choices look the simplest but can expose access. As a DBA I'm frankly tired of people who store passwords in visible places, print them in logs or on GitHub, or set "trust" in pg_hba, and then come to me to say "PostgreSQL is insecure".. hahaha!
Final note: you do not have a pg_hba error; if you did, you would see a "pg_hba" error message.
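For illustration, a minimal sketch of the PGPASSFILE and connection-service approaches mentioned above (the file locations and the service name "davinci" are assumptions, not taken from the question):

export PGPASSFILE=/Users/username/.pgpass
pg_dump --host localhost --username postgres --format=custom --file test.backup testworkflow13

Note that the first field of a .pgpass line is matched literally against the host you connect to, so a line starting with localhost will not be used for --host 127.0.0.1 (use * or add a second line for that). Alternatively, with a ~/.pg_service.conf containing:

[davinci]
host=localhost
port=5432
dbname=testworkflow13
user=postgres

the same dump can be run as:

pg_dump --format=custom --file test.backup "service=davinci"

Either way the password itself still comes from .pgpass (or PGPASSWORD), so the ownership and chmod 600 points above still apply.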
Turns out that changing all three lines in the pg_hba.conf file to the trust method of authentication solved this.
local all all trust
host all all 127.0.0.1/32 trust
host all all ::1/128 trust
Since the method is trust, the .pgpass file may be entirely irrelevant--I'm not sure, but at least I got it working.
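Note that pg_hba.conf changes only take effect once the server reloads its configuration; one way to trigger a reload (assuming you can already connect as a superuser) is:

psql -h 127.0.0.1 -U postgres -c "SELECT pg_reload_conf();"

Restarting the PostgreSQL service also works.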

Usernames in /etc/passwd

I'm new to the Linux operating system, and today I explored the /etc/passwd file. To my surprise I found that it contains many other user names like proxy, daemon, etc. What are all these users? Can I log in using these users?
Here is the output of the cat command I ran on /etc/passwd.
root:x:0:0:root:/root:/bin/bash
daemon:x:1:1:daemon:/usr/sbin:/usr/sbin/nologin
bin:x:2:2:bin:/bin:/usr/sbin/nologin
sys:x:3:3:sys:/dev:/usr/sbin/nologin
sync:x:4:65534:sync:/bin:/bin/sync
games:x:5:60:games:/usr/games:/usr/sbin/nologin
man:x:6:12:man:/var/cache/man:/usr/sbin/nologin
lp:x:7:7:lp:/var/spool/lpd:/usr/sbin/nologin
mail:x:8:8:mail:/var/mail:/usr/sbin/nologin
news:x:9:9:news:/var/spool/news:/usr/sbin/nologin
uucp:x:10:10:uucp:/var/spool/uucp:/usr/sbin/nologin
proxy:x:13:13:proxy:/bin:/usr/sbin/nologin
www-data:x:33:33:www-data:/var/www:/usr/sbin/nologin
backup:x:34:34:backup:/var/backups:/usr/sbin/nologin
list:x:38:38:Mailing List Manager:/var/list:/usr/sbin/nologin
irc:x:39:39:ircd:/var/run/ircd:/usr/sbin/nologin
gnats:x:41:41:Gnats Bug-Reporting System (admin):/var/lib/gnats:/usr/sbin/nologin
nobody:x:65534:65534:nobody:/nonexistent:/usr/sbin/nologin
libuuid:x:100:101::/var/lib/libuuid:
syslog:x:101:104::/home/syslog:/bin/false
messagebus:x:102:106::/var/run/dbus:/bin/false
usbmux:x:103:46:usbmux daemon,,,:/home/usbmux:/bin/false
dnsmasq:x:104:65534:dnsmasq,,,:/var/lib/misc:/bin/false
avahi-autoipd:x:105:113:Avahi autoip daemon,,,:/var/lib/avahi-autoipd:/bin/false
kernoops:x:106:65534:Kernel Oops Tracking Daemon,,,:/:/bin/false
rtkit:x:107:114:RealtimeKit,,,:/proc:/bin/false
saned:x:108:115::/home/saned:/bin/false
whoopsie:x:109:116::/nonexistent:/bin/false
speech-dispatcher:x:110:29:Speech Dispatcher,,,:/var/run/speech-dispatcher:/bin/sh
avahi:x:111:117:Avahi mDNS daemon,,,:/var/run/avahi-daemon:/bin/false
lightdm:x:112:118:Light Display Manager:/var/lib/lightdm:/bin/false
colord:x:113:121:colord colour management daemon,,,:/var/lib/colord:/bin/false
hplip:x:114:7:HPLIP system user,,,:/var/run/hplip:/bin/false
pulse:x:115:122:PulseAudio daemon,,,:/var/run/pulse:/bin/false
brucewilson:x:1000:1000:brucewilson,,,:/home/brucewilson:/bin/bash
mysql:x:116:125:MySQL Server,,,:/nonexistent:/bin/false
bharghav:x:1001:1001:bharghav,,,:/home/bharghav:/bin/bash
sshd:x:117:65534::/var/run/sshd:/usr/sbin/nologin
statd:x:118:65534::/var/lib/nfs:/bin/false
snmp:x:119:126::/var/lib/snmp:/bin/false
guest-MSvo95:x:120:127:Guest,,,:/tmp/guest-MSvo95:/bin/bash
Can anyone please explain what these are?
Most of those users are required by OS processes to work. You can't log in as one of those users because:
a. They don't have a shell like regular users do. For example, brucewilson has /bin/bash as a shell, but pulse (the PulseAudio daemon) has /bin/false.
b. There are no passwords for those users, so when the system asks for a password, no matter what you type you will never get in. You can check who has a password in /etc/shadow.
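For example, a quick way to check whether one of these accounts has a usable password at all (proxy is just an example):

sudo grep '^proxy:' /etc/shadow

If the second field of that line is "*" or "!", the account is locked and no password will ever match.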
Actually, you can log in as any user listed in /etc/passwd that you choose.
For example, if you want to log in as proxy, type the following command:
sudo -u proxy /bin/bash
It will ask for a password to authenticate the access; you can give your own password only if your user account has been added to the sudoers list.
You can use the same command to log in as any user in the /etc/passwd file.
For example, again, if you want to log in as daemon, type the following command:
sudo -u daemon /bin/bash
and so on...
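To see how this fits with the previous answer, you can compare the two ways of switching user (again, proxy is just an example account):

su - proxy              # run as a normal user, this asks for proxy's password, which is locked, so it fails
sudo -u proxy /bin/bash # authenticates with your own password via sudo and then runs a shell as proxy

So direct logins to these system accounts fail, but a user in the sudoers list can still run commands as them.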
Hope this will help you.

Why does EC2 ask for a password when I use an identity file?

I use the following command, which I got from http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/AccessingInstancesLinux.html
ssh -i my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com
I'm not sure if it's because I lost the .pem file and recreated it or what is going on here, but no matter if I use the user ubuntu, root, or ec2-user the terminal asks me for a password.
Your local private key is probably shrouded (protected with a passphrase), as it should be. The passphrase can be removed with key management tools if you really want, but that is not advised.
Double-check the file permissions on your key file. Do:
chmod 400 my-key-pair.pem
and try again.
It is also possible that the key file is simply the wrong one.
In that case you have to terminate the instance and launch a copy of it with a new SSH key; if a key is lost, access to the server is also lost.
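If it is unclear which of these cases applies, ssh's verbose mode shows whether the key is being offered and whether the server accepts it (the key and host names below are the ones from the question):

ssh -v -i my-key-pair.pem ec2-user@ec2-198-51-100-1.compute-1.amazonaws.com

Look for lines such as "Offering public key" and "Authentications that can continue" in the debug output.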

Transferring a file to an amazon ec2 instance using scp always gives me permission denied (publickey,gssapi-with-mic)

I am trying to transfer a file to an ec2 instance. I followed the Amazon's documentation, this is what my command looked like:
scp -i [the key's location] Documents/[the file's location] ec2-user@[public dns]:[home/[destination]]
where I replaced all the variables with the proper things, I am sure it's the correct key and it has permission 400. When I call the command, it tells me the RSA key fingerprint, asks me if I want to continue connecting. I type yes and it replies with
Permission denied (publickey,gssapi-with-mic)
lost connection
I have looked at many of the other similar questions on stack overflow and can't find a correct way to do it.
Also ssh traffic is enabled on port 22.
The example Amazon provided is correct. It sounds like a folder permissions issue. If the folder you are trying to copy to was created by another user, chances are you don't have permission to copy to it or edit it.
If you have sudo abilities, you can try opening up access for yourself. Though it is not recommended to leave it this way, you could try this command:
sudo chmod 777 /folderlocation
That gives complete read/write/execute permissions to anyone (hence why you shouldn't leave it at 777), but it will give you the chance to test your scp command and rule out permissions.
Afterwards, if you aren't familiar with permissions, I suggest you read up on them; this is an example: http://www.tuxfiles.org/linuxhelp/filepermissions.html It is generally suggested that you lock down the folder as much as possible, depending on the type of information held within.
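If you would rather not open the folder to everyone even temporarily, a narrower sketch (the path and the ec2-user name are placeholders) is to check who owns the destination and hand it to your login user:

ls -ld /folderlocation
sudo chown ec2-user:ec2-user /folderlocation

That keeps the permissions tight while still letting your scp command write there.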
If that was not the cause, some other things you might want to check:
Are you in the directory of your key when executing the 'scp -i keyname' command?
Do you have permission to use the folder you are transferring from?
Best of luck.
The problem may be the user name. I copied a file to my Amazon instance and first tried to use the command:
scp -r -i ../.ssh/Amazon_server_key_pair.pem ./empty.test ec2-user@ec2-xx-yy-zz-tt.compute-1.amazonaws.com:~
and got the error: Permission denied (publickey).
I then realized that my instance is an Ubuntu environment, so the user is "ubuntu". The correct command that worked for me is then:
scp -r -i ../.ssh/Amazon_server_key_pair.pem ./empty.test ubuntu@ec2-xx-yy-zz-tt.us-west-2.compute.amazonaws.com:~
The file "empty.test" is a text file containing the text "testing ...". Replace the address of the virtual server with the correct Public DNS address of your instance; I have replaced the IP of my instance with xx-yy-zz-tt.
I had to use ubuntu@ instead of ec2-user@ because when I SSH in I see ubuntu@ in my terminal; try changing to the name you see at your terminal.
Also, you have to set the permissions for the .pem file on your computer:
chmod 400 /path/my-key-pair.pem
The command below will copy a file from your computer to the EC2 instance:
scp -i ~/location_of_your_ec2_key_pair.pem ~/location_of_transfer_file/sample.txt ubuntu@ec2_your_ec2_instance.compute.amazonaws.com:~/folder_to_which_it_needs_to_be_copied
The command below will copy a file from the EC2 instance to your computer:
scp -i ~/location_of_your_ec2_key_pair.pem ubuntu@ec2_your_ec2_instance.compute.amazonaws.com:~/location_of_transfer_file/sample.txt ~/folder_to_which_it_needs_to_be_copied
I was facing the same problem. Hope this will work for you.
scp -rp -i yourfile.pem ~/local_directory username@instance_url:directory
Permissions also need to be correct for this to work.
It might be that one is using the wrong username. That happened to me, with the same error message: Permission denied (publickey,gssapi-keyex,gssapi-with-mic).
lost connection

EC2 non root user login

Is there a way to log into an EC2 Ubuntu AMI, or a way to set up an Ubuntu AMI, so that non-root users can log in? I tried creating a user and logging in with the associated password. I also tried using the private key: I copied the authorized_keys file into the .ssh directory of the non-root user's home directory and tried to log in to the box with that user account. Neither method worked.
Thanks in advance.
So, this works, but the missing high-order bit of information here has to do with setting the right permissions on the authorized_keys file in the user's home directory. So, I copied /root/.ssh/authorized_keys over to /home/user and fixed ownership:
cp -r /root/.ssh /home/user
chown -R user /home/user/.ssh
This allowed me to use the keypair.pem file to log in.
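If the key is still rejected after copying it over, it can also help to tighten the permissions, since sshd refuses keys that live in group- or world-writable locations (a common convention, with "user" as the placeholder account):

chmod 700 /home/user/.ssh
chmod 600 /home/user/.ssh/authorized_keys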
Make sure you are sending your AWS keypair as the identity file, i.e.
ssh -i ~/.ssh/keypair.pem user@ec2-174-129-xxx-xx.compute-1.amazonaws.com
Also check that SSH is enabled in your security group
Assuming you would like to have users log in with a password so they need not supply a key every time, all you must do is turn on the ability to SSH in with a password. This option is turned off by default in all Linux AMIs.
Open the following file with root privileges using vi, nano, pico, etc.:
sudo vi /etc/ssh/sshd_config
Change the following setting to yes:
PasswordAuthentication yes
Finally you must restart SSH (Since you are SSHed onto a remote machine, a simple reboot is fine.)
That's it! Of course, you must still add users with the adduser command and give them passwords with the passwd command for them to be able to log in to your AMI. See the OpenSSH documentation for more info on the SSH configuration files.
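A minimal sketch of those remaining steps on an Ubuntu instance (the username is a placeholder; the SSH service is usually called ssh on Ubuntu and sshd on some other distributions):

sudo adduser newuser           # creates the account and prompts for a password
sudo passwd newuser            # or set/change the password separately
sudo service ssh restart       # apply the sshd_config change (a reboot also works, as noted above)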
