On Mac OS X El Capitan (version 10.11.5) I want to connect to a server to access some shared images.
In Finder -> Go -> Connect to Server, I enter the address
smb://172.16.X.X/
then enter the username and password, after which it shows:
Check the server name or IP address, and then try again. If you continue to have problems, contact your system administrator.
All my colleagues can connect successfully; only I cannot.
The error message in Console is
6/16/16 21:14:24.000 kernel[0]: smb_ntstatus_error_to_errno: Couldn't map ntstatus (0xc000019c) to errno returning EIO
6/16/16 21:14:25.000 kernel[0]: smb_ntstatus_error_to_errno: Couldn't map ntstatus (0xc000019c) to errno returning EIO
6/16/16 21:14:26.000 kernel[0]: smb_ntstatus_error_to_errno: Couldn't map ntstatus (0xc000019c) to errno returning EIO
6/16/16 21:14:26.465 NetAuthSysAgent[1218]: checkForDfsReferral: mounting dfs url failed, syserr = Unknown error: -1073741412
6/16/16 21:14:26.465 NetAuthSysAgent[1218]: smb_mount: mount failed to 172.16.X.X/
smb:, syserr = Unknown error: -1073741412
I also tried to connect from Terminal:
mount -t smbfs '//172.16.X.X/' share
mount_smbfs: mount error: /Users/foo/share: Unknown error: -1073741412
I think I figured this out.
Open Keychain Access
From the Keychain Access menu, select Ticket Viewer
For me, I needed to use a different network account than the one I was logged into on my Mac, so I clicked Add Identity and entered that username and password.
Also, in the main Keychain Access window, I searched for the server name I was connecting to, double-clicked it, and added the account and password I wanted to connect with (I'm not sure whether this step was necessary).
When I tried my SMB share again, I was able to get in.
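If you prefer Terminal, a rough equivalent of Ticket Viewer's Add Identity step is kinit (the user name and realm below are placeholders for your own network account):
kinit otheruser@EXAMPLE.COM    # prompts for that account's password
klist                          # lists the ticket that was issued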
I've just resolved the same problem, but in my case I was trying to connect to an MS Azure SMB share and getting the same error. It was resolved as soon as I added the share name after the address.
So for me it won't show the list of shares, but if I add the name of a share to the URL, it works. Try:
smb://172.16.X.X/sharename
Also, check that port 445 is open on your computer and on your router.
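A quick way to test that from Terminal, for example (using the placeholder address from the question):
nc -z 172.16.X.X 445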
(I know it's a late answer, but maybe it will help somebody else.)
I found the following to work.
Microsoft Azure's file share service provides a command to connect to your file share via SMB. This is an example of what they give you (for Mac):
mount_smbfs -d 777 -f 777 //appstore2021test [ACCESS_KEY]==@appstore2021test.file.core.windows.net demo
You'll need to change it to
mount_smbfs -d 777 -f 777 //appstore2021test:[ACCESS_KEY]==@appstore2021test.file.core.windows.net/demo demo
where the '/demo' added to the URL is the share name and the trailing 'demo' is the local mount directory.
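Note that the local mount directory has to exist before mounting; a minimal sketch with the same placeholder account, key and share:
mkdir demo
mount_smbfs -d 777 -f 777 //appstore2021test:[ACCESS_KEY]==@appstore2021test.file.core.windows.net/demo demo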
I've got a macOS 10.13 server running, on which I have recently had to change the hostname (upstream IT requirements) - and I suspect this has broken Kerberos.
Changing the hostname appears to have been successful: I exported the Open Directory setup, modified it, and reimported it into the updated setup - user accounts exist, and manual authentication works as expected. changeip is happy:
mac-mini:~ server_admin$ sudo changeip -checkhostname
dirserv:success = "success"
However SSO from client machines does not appear to be successful.
Attempting to run kinit with a valid user account shows this:
mac-mini:~ server_admin$ kinit test@MAC-MINI.EXAMPLE.COM
test@MAC-MINI.EXAMPLE.COM's password:
kinit: krb5_get_init_creds: Server (krbtgt/MAC-MINI.EXAMPLE.COM@MAC-MINI.EXAMPLE.COM) unknown
Looking at /etc/krb5.conf, I only see this:
[libdefaults]
kdc_timeout=5
...which is the same as it was on my previously-working configuration.
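For comparison, a krb5.conf that pinned the realm and KDC explicitly would look roughly like the sketch below (using the hostnames above); I never needed anything like that on the previously-working setup:
[libdefaults]
    kdc_timeout = 5
    default_realm = MAC-MINI.EXAMPLE.COM

[realms]
    MAC-MINI.EXAMPLE.COM = {
        kdc = mac-mini.example.com
    }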
And now I'm a bit stumped. All the documentation for destroying and rebuilding Kerberos setups seems to be out of date. Any ideas?
Thanks.
I am trying to log in to a private repository from a Windows machine using the Docker command prompt, but I cannot figure out where I am supposed to place the SSL cert on Windows.
I have successfully logged in from a Linux machine by placing the cert file in /etc/docker/certs.d/mydomain.com:port/
Some of the documentation suggests placing this .cert file in
C:\Program Files\Docker\certs.d\{my domain goes here }{port}
But I'm still getting the error below when I try to log in:
Error response from daemon: Get https://{my domain goes here }.com:{port No}/v2/: x509: certificate signed by unknown authority.
Can anyone help me to sort out this issue?
I think I have found my mistake: I had placed the .cert file in
C:\Program Files\Docker\certs.d\{my domain goes here }{port}
when it should be in
C:\ProgramData\docker\certs.d\{my domain goes here }{port}
(Please note that this ProgramData folder is a hidden folder)
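With the cert in the right place, the login can be retried as before, for example (keeping the domain and port placeholders from above):
docker login {my domain goes here }.com:{port No}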
I am trying to make a script to install Oracle Database, as well as some other applications of my own, more or less automatically. I haven't written a line yet because I want to go through all the steps manually first.
My environment is the following: RHEL 5 with no graphical interface. I connect to the server from a Windows laptop through SSH as root. I have enabled X forwarding, so when I log in with the root account I can run xdpyinfo to check the X server configuration.
I need X forwarding because the Oracle DB installation procedure requires an X server. However, Oracle requires the installation to be performed as the oracle user. I have already created the oracle user, but after switching from root to oracle I can no longer run xdpyinfo, so the Oracle installation procedure fails. I get the following error:
Xlib: connection to "localhost:10.0" refused by server
Xlib: PuTTY X11 proxy: wrong authorisation protocol attempted
xdpyinfo: unable to open display "localhost:10.0".
I have also tried to use xhost to allow my laptop to access the server, but that failed as well.
If you really feel the need to do this, then while you are root, get the current $DISPLAY value, particularly the first value after the colon, which is 10 in your case. Then find the current X authorisation token for your session:
xauth list | grep ":10 "
Which will give you something like:
hostname/unix:10 MIT-MAGIC-COOKIE-1 2b3e51af01827d448acd733bcbcaebd6
After you su to the oracle account, $DISPLAY is probably still set but if not then set it to match your underlying session. Then add the xauth token to your current session:
xauth add hostname/unix:10 MIT-MAGIC-COOKIE-1 2b3e51af01827d448acd733bcbcaebd6
When you've finished you can clean up with:
xauth remove hostname/unix:10
That's assuming PuTTY is configured to use MIT-Magic-Cookie-1 as the remote X11 authentication protocol, in the Connection->SSH->X11 section. If that is set to XDM-Authorization-1 then the value you get and set with xauth will have XDM-AUTHORIZATION-1 instead.
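Putting it together, the whole sequence looks roughly like this (the :10 display and the cookie value are the example ones from above):
# as root
xauth list | grep ":10 "
su - oracle
# now as oracle
export DISPLAY=localhost:10.0
xauth add hostname/unix:10 MIT-MAGIC-COOKIE-1 2b3e51af01827d448acd733bcbcaebd6
xdpyinfo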
It might be simpler to disconnect from root and start a new ssh session as oracle to continue the installation, which would also make sure you don't accidentally do anything unexpected as root. Well, until you have to run root.sh, anyway.
If you do a silent install with a response file then you don't need a working X11 connection anyway; you just need $DISPLAY to be set, but nothing is ever actually opened on that display so it doesn't matter if xdpyinfo or any other X11 command would fail. I'm not sure how you're thinking of scripting the X11 session, but even if that is possible a silent install will be simpler and more repeatable.
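For reference, a silent invocation looks roughly like this (the response file path here is just a placeholder):
./runInstaller -silent -responseFile /home/oracle/db_install.rsp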
I'm attempting to set up a test environment where the software is developed on host machines and then tested in a virtual machine, with all code mapped to a Z:/ drive in the VM. My issue is that Apache complains and won't start, saying I have an invalid Include path of Z:/source/myconf.conf. Has anyone had luck setting up conf files on a different drive who can help me understand what I'm doing wrong? I've tried with and without quotes as well.
Include path statement:
Include "Z:/source/myconf.conf"
Additional info:
Z is a virtual drive through VMWare also known as \\vmware-host
The specific error in the Application logs is as follows:
The Apache service named reported the following error:
httpd.exe: Syntax error on .. of C:/.../httpd.conf: Invalid Include path Z:/source/myconf.conf
It seems Apache is having trouble connecting to the Z: drive; since Z: is a network drive, it requires a username and password, and that is what is causing the issue.
Solution
Save the username and password so that Windows no longer asks for credentials when you access the Z: drive.
To save the credentials permanently, open a command prompt and type:
net use Z: \\servername\sharename /persistent:yes /savecred
Now restart; it should no longer ask for credentials to connect to the Z: drive, and the Apache error will be gone.
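With the \\vmware-host mapping from the question that would be something like the following, assuming the default VMware share name "Shared Folders" (adjust it to whatever Z: actually maps to):
net use Z: "\\vmware-host\Shared Folders" /persistent:yes /savecred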
I have set up a new EC2 instance on AWS and I'm trying to get FTP working to upload my application. I have installed VSFTPD as standard, so I haven't changed anything in the config file (/etc/vsftpd/vsftpd.conf).
I have not opened port 21 in the security group, because I'm tunnelling it through SSH. I log into my EC2 instance through the terminal like so:
sudo ssh -L 21:localhost:21 -vi my-key-pair ec2-user@ec2-instance
I open up FileZilla and log into localhost. Everything goes fine until it comes to listing the directory structure. I can log in all right and everything seems fine, as you can see below:
Status: Resolving address of localhost
Status: Connecting to [::1]:21...
Status: Connection established, waiting for welcome message...
Response: 220 Welcome to EC2 FTP service.
Command: USER anonymous
Response: 331 Please specify the password.
Command: PASS ******
Response: 230 Login successful.
Command: OPTS UTF8 ON
Response: 200 Always in UTF8 mode.
Status: Connected
Status: Retrieving directory listing...
Command: PWD
Response: 257 "/"
Command: TYPE I
Response: 200 Switching to Binary mode.
Command: EPSV
Response: 229 Entering Extended Passive Mode (|||37302|).
Command: LIST
Error: Connection timed out
Error: Failed to retrieve directory listing
Is there something I'm missing in my config file? A setting which needs to be set or turned off? I thought it was great that it connected, but when it timed out you could picture my face. It meant time to start trawling the net to try and find the answer, so far with no luck.
I'm using the standard 64-bit Amazon AMI with a traditional LAMP setup.
Can anyone steer me in the right direction? I have read a lot of guides about getting this working, but they are all incomplete, as if the authors got bored halfway through typing up how to do it.
I would love to hear how you do it as well, if it makes life easier. How do you upload your apps to an EC2 instance? (Steps please; it saves a lot of time, plus it's a great resource for others.)
I figured it out, with the help of the direction given by Antti Haapala.
You don't even need VSFTPD set up on the instance. All you have to do is make sure the settings are right in FileZilla.
This is what I did (I'm on a Mac, so it should be similar on Windows):
Open up FileZilla and go to Preferences.
Under Preferences, click SFTP and add a new key. This is the key pair for your EC2 instance. You will have to convert it to the format FileZilla uses; it will prompt you for the conversion.
Click OK and go back to Site Manager.
In Site Manager, enter your EC2 public address; this can also be your Elastic IP.
Make sure the protocol is set to SFTP.
Set the username to ec2-user.
Leave the password field blank.
All done! Now connect.
That's it, you can now traverse your EC2 system. There is a catch: because you are logged in as ec2-user and not root, you will not be able to modify anything. To get around this, change the group ownership of the directory where your application will live (/var/www/html or wherever). I would put it on an EBS volume. ;) Also make sure this group has read, write and execute permissions. The group for ec2-user is ec2-user. Leave everyone else with nothing. So the commands to use while logged in via SSH are:
sudo chgrp ec2-user file/folder
sudo chmod 770 file/folder
Hope this helps someone.
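For what it's worth, the command-line equivalent of the FileZilla setup above, using the same key pair and host placeholders as the question, is roughly:
sftp -i my-key-pair ec2-user@ec2-instance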
FTP is a very troublesome protocol because it requires a secondary connection for the actual data transfer, and it definitely does not work well when tunnelled. With SSH you should use SFTP, which has nothing to do with FTP but is a completely different protocol.
Read more about SFTP on Wikipedia.
Adding the key to www is a recipe for disaster! Any minor issue with your app will become a security nightmare.
As an alternative to FTP, consider using rsync or a more "mature" deployment strategy based on Capistrano, for instance. There are plenty of tools for that around.
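For example, an rsync-over-SSH push could look roughly like this (./myapp/ is a placeholder for your local code directory; the key pair and host are the question's):
rsync -avz -e "ssh -i my-key-pair" ./myapp/ ec2-user@ec2-instance:/var/www/html/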
Antti Haapala's tips are the only way to get this working with EC2 SFTP. It works just fine! Just note that you need to create the /var/www/.ssh/ folder and copy the authorized_keys file there.
After that you'll need to change the authorized_keys ownership to www-data so the SSH connection can recognise it. Amazon should let people know that; I looked for this in their forums, FAQ, etc. with no luck at all... Cheers once more to Stack Overflow, the way to go, haha!
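Roughly, assuming (as above) that the web user is www-data and its home directory is /var/www:
sudo mkdir -p /var/www/.ssh
sudo cp ~/.ssh/authorized_keys /var/www/.ssh/
sudo chown -R www-data:www-data /var/www/.ssh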