My Setup
3 nodes running ceph + cephfs
2 of these nodes running CTDB & Samba
1 client (not one of the 3 servers)
This is a lab setup, so there is only one NIC per server/node and a single subnet, and all Ceph components plus Samba run on the same servers. I'm aware that this is not the recommended deployment.
The problem
I want to host a clustered Samba file share on top of Ceph with CTDB. I followed the CTDB documentation (https://wiki.samba.org/index.php/CTDB_and_Clustered_Samba#Configuring_Clusters_with_CTDB) and parts of this guide: https://wiki.samba.org/index.php/Samba_CTDB_GPFS_Cluster_HowTo.
The cluster is running, and a share is reachable, readable, and writable on both nodes. My smb.conf looks as follows:
[global]
    netbios name = CEPHFS
    workgroup = SIMPLE
    clustering = yes
    idmap config * : backend = autorid
    idmap config * : range = 1000000-1999999
    log file = /var/log/samba/smb.log
    # Set file creation permissions
    create mask = 664
    force create mode = 664
    # Set directory creation mask
    directory mask = 2775
    force directory mode = 2775

[public]
    comment = public share
    path = /mnt/mycephfs/testshare
    public = yes
    writeable = yes
    only guest = yes
    ea support = yes
CTDB manages Samba and reports both nodes as OK.
But when I read from or write to one of the nodes via the public IP and make that node fail (by restarting CTDB), the read or write fails. A second attempt succeeds (the public IP is taken over by the other host successfully).
But according to https://ctdb.samba.org/ (IP Takeover), CTDB should be able to handle this transparently?
I have a tcpdump showing the new server (the one taking over the public IP) sending a TCP RST to my client after the client sends retransmissions to the server.
Any idea what the problem could be?
PS: I'm more than happy to provide more information (ctdb config file, firewall configuration, pcap, whatever ;) ), but this is long enough already...
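For reference, these are the standard commands for checking node health and public IP assignment, and for capturing the takeover; the interface name and IP address below are placeholders from my lab:

```shell
# Show CTDB node health and which node currently holds each public IP
ctdb status
ctdb ip

# On the client, capture the takeover while restarting ctdb on one node
# (replace eth0 and the address with your interface and public IP)
tcpdump -i eth0 host 192.168.1.100 and port 445
```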
Related
I'm trying to use SMB for Time Machine backups. I'm currently using macOS 10.15.5, Samba 4.11.6-Ubuntu and Ubuntu 20.04 LTS.
This is my conf:
[timemachine]
    comment = Time Machine
    path = /mnt/HD/Backup/timemachine
    browseable = yes
    writeable = yes
    create mask = 0600
    directory mask = 0700
    spotlight = yes
    vfs objects = catia fruit streams_xattr
    fruit:aapl = yes
    fruit:time machine = yes
    fruit:time machine max size = 750G
I can successfully register the share as a Time Machine disk, but when it tries to back up I get this error:
[2020/07/17 05:53:28.442317, 0] ../../source3/modules/vfs_fruit.c:4637(fruit_pwrite_meta_netatalk)
fruit_pwrite_meta_netatalk: ad_pwrite [071FEBFA-ACF7-5694-9FB6-A02D91AE7861.sparsebundle:AFP_AfpInfo] failed
If I mount the share via Finder, I can successfully create files and folders.
Do you have any clue?
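One guess, not a confirmed fix: the failing function (fruit_pwrite_meta_netatalk) belongs to vfs_fruit's default metadata backend (fruit:metadata = netatalk), which stores the AFP_AfpInfo blob in extended attributes. If the backing filesystem's xattr support is the issue, a commonly suggested workaround is to keep the metadata in a stream handled by streams_xattr instead:

```
[global]
    fruit:metadata = stream
```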
Thanks
I have a similar issue. It looks like it's a bug in Samba, and it's fixed in Samba 4.17.5.
TL;DR: With vfs objects = catia fruit streams_xattr in my smb.conf, files created on the shares using Macs do not inherit permissions and get extended ACLs.
Background
I'm setting up a NAS with a Samba share for our office, which is a 50/50 macOS/Windows 10 shop. Everyone should have access to the shares using dedicated user accounts.
I wanted to leverage the latest enhancements in Samba 4 when it comes to performance with Macs, along with Time Machine support, so I enabled the modules: vfs objects = catia fruit streams_xattr
Problem
Permissions are not inherited, and masks are not respected with these vfs objects set. I've tried a number of combinations of force create and create mask, and also (as in the example below) inherit permissions.
Without the vfs objects modules set, permissions are as expected.
My smb.conf (relevant excerpt):
[global]
    server string = %h server (Samba, Ubuntu)
    server role = standalone server
    client signing = disabled
    unix password sync = yes
    vfs objects = catia fruit streams_xattr
    fruit:aapl = yes
    map to guest = bad user
    spotlight = yes
    unix extensions = no
    browseable = yes
    read only = no
    inherit permissions = yes

[OurShare]
    path = /storage/OurShare
    valid users = #office
OurShare has 2770 permissions:
ls -al /storage/OurShare
drwxrws--- adminuser office 4096 Oct 22 03:56
From a Windows machine, any new directory created in OurShare gets drwxrws---, as expected.
However a directory created from a Mac gets drwxr-xr-x+, so they are not writable by the group and that is the main problem here.
Running getfacl on such a directory tells me:
# file: OurShare/testfile
# owner: someuser
# group: office
user::rwx
user:someuser:rwx #effective:r-x
group::rwx #effective:r-x
group:office:rwx #effective:r-x
mask::r-x
other::r-x
If I remove the modules vfs objects = catia fruit streams_xattr from smb.conf, then the permissions of files/folders created from Macs match that of those created from Windows - ie. there is no problem.
But without these modules I lose support for fruit:time machine for Mac backup purposes, and for fruit:aapl, an extension which "enhances several deficiencies when connecting from Macs" (see the vfs_fruit manpage).
This is an Ubuntu 19.04 system, with Samba v4.10.0.
My question
How can I retain these Mac optimizations in Samba, while still being able to control permissions of created files and folders from the server side?
Thanks for any advice! This is driving me nuts.
Turns out this was already answered on Unix & Linux Stack Exchange: https://unix.stackexchange.com/questions/486919/creating-a-directory-in-samba-share-from-osx-client-always-has-acl-maskr-x
Answer:
Setting the global option fruit:nfs_aces = no will prevent macOS clients from modifying the UNIX mode of directories using NFS ACEs. An Access Control Entry is part of the Access Control List (ACL). This option defaults to yes - see the vfs_fruit manpage.
I can confirm that disabling this option results in permission inheritance working as expected with Mac clients, as they already are with Windows clients.
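For completeness, the option is a global one; the relevant excerpt of the working configuration looks like this (only fruit:nfs_aces = no was added to what I had before):

```
[global]
    vfs objects = catia fruit streams_xattr
    fruit:aapl = yes
    fruit:nfs_aces = no
```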
Happy to have figured it out!
We have two AIX servers, Live and Test. On our Live server, I am able to add an entry to smb.conf to allow a directory to be shared across a Windows network, as shown in the extract below, displaying the ImportExport shared folder in Explorer:
[ImportExport]
    comment = Import Export directory
    path = /path/folder
    browseable = Yes
    hosts allow = <IP>
    guest ok = Yes
    force user = <user>
    force group = pro4
    read only = No
    create mask = 0777
    directory mask = 0777
    deadtime = 10
However, after adding a very similar configuration on our Test server, I cannot even reach the server from a Windows box; I get the "\ is not accessible..." message, as if the server does not exist or there are no shares.
Is there anything else I need to do to the local AIX folder to get this visible to Windows, or can you give me some ideas of what the pre-reqs are for this?
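Two standard checks that may help narrow this down (assuming the Samba tools are available on the AIX box; these are generic diagnostics, not AIX-specific):

```shell
# Validate smb.conf and print the configuration Samba actually uses
testparm -s

# List the shares the server exports, as an anonymous client, from the server itself
smbclient -L localhost -N
```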
Sorry, I am not an AIX specialist; we are primarily a Windows house.
Thanks
Got this working now - Samba 3.2.0 restricted share names to 14 characters or fewer. I shortened the name, restarted Samba, and now it's OK. Thanks all.
I am having a problem. I am trying to get the IP of the node that's running PostgreSQL as the replication master, so I can feed that IP to the repmgr cookbook, which will automatically set it in the SSH config file. I need this because I am not running SSH on port 22, so I need to automate the process.
This is the configure.rb file:
template File.join(node[:repmgr][:pg_home], '.ssh/config') do
  source 'ssh_config.erb'
  mode 0644
  owner node[:repmgr][:system_user]
  group node[:repmgr][:system_user]
  variables(:hosts => Array(master_node[:ipaddress]))
end

directory File.dirname(node[:repmgr][:config_file_path])

template node[:repmgr][:config_file_path] do
  source 'repmgr.conf.erb'
  mode 0644
end
The master node IP address is taken from the attributes file (default.rb):
default[:repmgr][:addressing][:master] = nil
I need to change the nil to something else so I can get the IP of the master server, so the slave (aka standby) server can add it to its SSH config and replicate over my custom SSH port instead of the default port 22.
I hope someone can help me, because I am really new to Ruby and only know the basics of it.
Thank you. I hope you understand my question.
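For illustration, here is a standalone sketch of what an ssh_config.erb template could render, using Ruby's built-in ERB. The IP address and port 2222 are made-up placeholders; in the actual cookbook the hosts value is supplied by the template resource's variables call:

```ruby
require 'erb'

# Hypothetical values: in the real cookbook these come from Chef attributes
hosts = ['10.0.0.5']

# A minimal ssh_config.erb: one Host block per master, on a custom SSH port
template = <<~ERB
  <% hosts.each do |h| %>
  Host <%= h %>
    Port 2222
  <% end %>
ERB

puts ERB.new(template).result(binding)
```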
If you are using chef-server, then you can find other provisioned nodes through search in a recipe. You can search for nodes by different properties, such as role, recipe, environment, name, and so on. I hope your replication master node has some attribute that makes it unique, for example a postgres recipe or a replication-master role in its run_list.
nodes = search( :node, 'recipes:postgres' )
or
nodes = search( :node, 'role:replication-master' )
Search returns an array of nodes that have the corresponding attributes.
And then:
node.default[:repmgr][:addressing][:master] = nodes.first[:ipaddress]
The code should be written in a recipe file, not in an attributes file.
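Putting the pieces together, a sketch of how this could look in a recipe (assuming a replication-master role exists on the master node's run_list, and guarding against an empty search result):

```ruby
# Chef recipe fragment: find the master node and record its IP as an attribute
masters = search(:node, 'role:replication-master')
node.default[:repmgr][:addressing][:master] = masters.first[:ipaddress] unless masters.empty?
```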
I want to use the parallel capabilities of ipython on a remote computer cluster. Only the head node is accessible from the outside. I have set up ssh keys so that I can connect to the head node with e.g. ssh head and from there I can also ssh into any node without entering a password, e.g. ssh node3. So I can basically run any commands on the nodes by doing:
ssh head ssh node3 command
Now what I really want to do is to be able to run jobs on the cluster from my own computer from ipython. The way to set up the hosts to use in ipcluster is:
send_furl = True
engines = { 'host1.example.com' : 2,
'host2.example.com' : 5,
'host3.example.com' : 1,
'host4.example.com' : 8 }
But since I only have a hostname for the head node, I don't think I can do this. One option is to set up SSH tunneling on the head node, but I cannot do this in my case, since it would require enough open ports to accommodate all the nodes (and that is not the case). Are there any alternatives?
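Not a solution to the port problem by itself, but the two-hop SSH access can at least be made transparent with a ProxyCommand in ~/.ssh/config, so that ssh node3 from the local machine goes through the head node automatically (hostnames are placeholders; assumes nc is available on the head node):

```
# ~/.ssh/config on the local machine
Host head
    HostName head.example.com

Host node*
    ProxyCommand ssh head nc %h %p
```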
I use ipcluster on the NERSC clusters by using the PBS queue:
http://ipython.org/ipython-doc/stable/parallel/parallel_process.html#using-ipcluster-in-pbs-mode
In summary, you submit jobs that run mpiexec ipengine (after having launched ipcontroller on the login node). Do you have PBS on your cluster?
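For reference, a minimal PBS submission script for that pattern might look like this (node counts are placeholders; it assumes ipcontroller is already running on the login node and mpiexec/ipengine are on the PATH):

```shell
#!/bin/sh
#PBS -N ipengines
#PBS -l nodes=4:ppn=8
cd $PBS_O_WORKDIR
mpiexec ipengine
```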
This was working fine with IPython 0.10; it is now broken in the 0.11 alpha.
I would set up a VPN server on the master, and connect to that with a VPN client on my local machine. Once established, the virtual private network will allow all of the slaves to appear as if they're on the same LAN as my local machine (on a "virtual" network interface, in a "virtual" subnet), and it should be possible to ssh to them.
You could possibly establish that VPN over SSH ("ssh tunneling", as you mention); other options are OpenVPN and IPsec.
I don't understand what you mean by "this requires enough ports to be open to accommodate all the nodes". You will need: (i) one inbound port on the master, to provide the VPN/tunnel, (ii) inbound SSH on each slave, accessible from the master, (iii) another inbound port on each slave, over which the master drives the IPython engines. Wouldn't (ii) and (iii) be required in any setup? So all we've added is (i).