Windows 10 unable to connect to Ubuntu 20.04.3 Samba Server (0x80004005 Unspecified Error)

My goal: my Windows 10 client should be able to access, view, and execute an .exe file shared by the Ubuntu server using the guest account (without a password).
Expected result: When I type \\ubuntu-server-hostname\bakro in File Explorer, I can see and execute the .exe file.
Actual result: Windows cannot access \\ubuntu-server-hostname\bakro, with error code 0x80004005 Unspecified Error.
Additional Observations:
When I run net use \\ubuntu-server-hostname\bakro on the Windows 10 client, it results in "System error 53 has occurred. The network path was not found."
I can access the shared files from the Ubuntu server itself using smbclient.
When I run smbclient -L \\\\ubuntu-server-hostname, I can see bakro listed there.
When I run smbclient \\\\ubuntu-server-hostname\\bakro to enter the smb command line and I type ls to list the files inside the share, I can see the .exe file.
In both of these cases, I was asked for my current user account password. I responded by pressing the Enter key (blank password).
I checked the log using systemctl status smbd and it shows multiple lines of session closed for user samba-guest. The timing seems consistent with me accessing the share via smbclient.
The Ubuntu server is also running an OpenVPN server. The OpenVPN server uses the 10.8.0.0/24 subnet and is assigned the IP address 10.8.0.1. If I connect to the OpenVPN server using the same Windows 10 computer and then access the Samba share by typing \\10.8.0.1 in File Explorer, I can see the folder bakro listed. I can browse the folder and execute the .exe file (which is my desired and expected result). This access attempt is recorded in the Samba log shown by systemctl status smbd.
What I have done:
Based on Observation #1, I replaced the hostname with the server's public IP address. It yields the same result when accessing via both net use and File Explorer. Neither attempt is recorded in the Samba log obtained via systemctl status smbd.
Based on Observations #1 and #2, I checked the Ubuntu server firewall using ufw status. Samba is listed as allowed via both IPv4 and IPv6. I also checked the server's security group. Port 445 TCP is listed as allowed.
I have tried disabling ufw and setting the security group to allow connections to all ports from anywhere, and I still can't access the share.
Based on Observation #3, I obtained a list of network interfaces using ip link show. 3 interfaces are listed: lo (loopback), eth0 (internet), tun0 (OpenVPN). I added interfaces = lo eth0 to smb.conf. The result: I cannot access the share from either \\ubuntu-server-hostname\bakro or \\10.8.0.1\bakro.
I tried changing the guest account from samba-guest to nobody. Nothing changed except the log now shows session closed for user nobody instead.
I tried adding client min protocol = SMB2 and client max protocol = SMB3 to smb.conf.
I changed "File sharing connections" to enable 40- and 56-bit encryption in Advanced sharing settings on the Windows 10 client.
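A quick way to check whether the request is even reaching smbd (a hedged sketch, reusing the placeholder hostname from above):
On the Ubuntu server:
testparm -s                      # print the effective smb.conf as Samba parses it
sudo ss -tlnp | grep smbd        # confirm smbd is listening on TCP port 445
On the Windows 10 client (PowerShell):
Test-NetConnection ubuntu-server-hostname -Port 445
If Test-NetConnection reports TcpTestSucceeded : False while smbd is listening, the problem is somewhere on the network path rather than in the Samba configuration.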
Minor Observations:
I swear I tested the file sharing capabilities using the same Windows 10 computer when I first set up the Samba service (but my memory is unreliable at best).
I also tried to access \\ubuntu-server-hostname\bakro via File Explorer on 2 other Windows 7 computers with the same result.
The following are the contents of my smb.conf:
# Global parameters
[global]
disable netbios = Yes
guest account = samba-guest
interfaces = 0.0.0.0/0
log file = /var/log/samba/log.%m
logging = file
map to guest = Bad User
max log size = 1000
obey pam restrictions = Yes
pam password change = Yes
panic action = /usr/share/samba/panic-action %d
passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
passwd program = /usr/bin/passwd %u
security = USER
server role = standalone server
server string = ubuntu-samba-server
unix password sync = Yes
usershare allow guests = Yes
idmap config * : backend = tdb
[printers]
browseable = No
comment = All Printers
create mask = 0700
path = /var/spool/samba
printable = Yes
[print$]
comment = Printer Drivers
path = /var/lib/samba/printers
[bakro]
guest ok = Yes
path = /srv/files/bakro

Everything is changing slowly. I had been using a very simple smb.conf for more than 15 years, but suddenly it was not possible to connect to a share as a guest. If I configured the home-dir share, it was OK, but the free share for guests was denied. It took me some time to find that in the share definition it is necessary to specify valid users = nobody, as you can see in the example:
[data]
path = /srv/data
valid users = nobody
force user = nobody
read list = nobody
write list = nobody
guest account = nobody
guest only = yes
guest ok = yes
I cannot explain why or how it works; I just know it depends on the Samba version (now: 4.11.5-Debian).
The directory /srv/data on the Linux side should be owned by nobody:nogroup and have at least 666 for files and 777 for directories. Hence create mask = 666 and directory mask = 777 could be useful. Adjust the global settings accordingly.
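Adapted to the share from the question, a hedged sketch of how [bakro] might look with these settings (followed by sudo systemctl restart smbd and chown -R nobody:nogroup /srv/files/bakro, as described above):
[bakro]
path = /srv/files/bakro
valid users = nobody
force user = nobody
guest account = nobody
guest only = Yes
guest ok = Yes
create mask = 0666
directory mask = 0777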

Related

How to disable ssh strict host checking on Windows 10?

My PC is Windows 10 Pro, 22H2
In my closed work environment, I SSH from Windows command line into many devices that all have the same IP (one at a time, not concurrently on my network at the same time). I'm running an automated test script and I constantly have trouble scripting something when this warning gets thrown up during the login to a new device that I'm testing.
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ECDSA key sent by the remote host is
SHA256:{hash}
Please contact your system administrator.
Add correct host key in C:\\Users\\myusername/.ssh/known_hosts to get rid of this message.
Offending ECDSA key in C:\\Users\\myusername/.ssh/known_hosts:3
ECDSA host key for 192.168.1.5 has changed and you have requested strict checking.
Host key verification failed.
I'm using password-based login to these devices.
I made C:\Users\myusername\config with contents:
Host *
StrictHostKeyChecking no
But this didn't stop the warning from happening and blocking the attempt. So far the only solution I have is to constantly delete the C:\Users\myusername\known_hosts file. Is there any way to get Windows to ignore strict checking?
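A hedged sketch of one workaround, assuming the built-in OpenSSH client for Windows, which reads its per-user config from %USERPROFILE%\.ssh\config (not from the profile root):
Host *
    StrictHostKeyChecking no
    UserKnownHostsFile NUL
The same options can be passed per invocation, which is easier to drop into a test script (the username is a placeholder; the IP is from the warning above):
ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=NUL user@192.168.1.5
Pointing UserKnownHostsFile at NUL discards recorded host keys entirely, so the "identification has changed" check never fires; this trades away man-in-the-middle protection and is only reasonable on a closed test network like the one described.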

Google Cloud Platform - SSH/Telnet

I am running apps on Compute Engine. I run on a Windows box and use Putty to connect to the CE. This pretty much seems to work fine (leaving aside the problems in the Google doc on this).
I have set up another user who I want to enable for SSH (on a Mac) and have her use FileZilla to push files to the CE.
I am trying it out on my own Mac. I set up 2 firewall rules with 2 different priorities for tcp:22 =
myssh Apply to all IP ranges: 0.0.0.0/0 tcp:22 Allow 1000 default
default-allow-ssh Apply to all IP ranges: 0.0.0.0/0 tcp:22 Allow 65534 default
The user has permissions on the Project of: "Compute Instance Admin (v1)"
On the Mac terminal I do the following:
ssh-keygen -t rsa -f ~/.ssh/userfirstname-ssh-key -C [googleusername.gmail.com]
I go to the GCP CE Meta data (logged in as myself) and then copy the contents of the userfirstname-ssh-key.pub to the Metadata/SSH Keys and save.
After GCP gives the ok on the key being added I enter the following in the Mac terminal:
ssh -i [userfirstname]-ssh-key [googleusername.gmail.com]@gcp-external-ip
Depending on i-don't-know-what, sometimes it says "Permission denied (public key)", "Operation timed out"
I've repeated this a few times and just tried to telnet in to the gcp-external-ip and get "Operation timed out" telnet: Unable to connect to remote host.
At a complete loss. Please help.
You could (and should) use the gcloud command line tools. Then it is easiest to simply copy the correct gcloud command from the Web Console. There is a little drop-down menu next to 'SSH' for each of your instances.
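For illustration, a hedged example of what that copied command typically looks like (the instance name, zone, and project here are placeholders, not values from the question):
gcloud compute ssh my-instance --zone us-central1-a --project my-project
On first use, gcloud generates its own SSH key pair and propagates the public key to the instance metadata, which sidesteps the manual metadata step described above.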

How to setup FTP on xampp

I want to make a server using XAMPP. I have already installed XAMPP and set the port to 8080. PHP and MySQL work fine, but I can't access FTP from the internet. Can you please suggest how I can do this?
XAMPP comes preloaded with the FileZilla FTP server. Here is how to setup the service, and create an account.
Enable the FileZilla FTP Service through the XAMPP Control Panel so that it starts automatically (check the checkbox next to FileZilla to install the service). Then manually start the service.
Create an FTP account through the FileZilla Server Interface (it's essentially the FileZilla control panel). There is a link to it in the Start Menu, in the XAMPP folder. Then go to Users -> Add User -> Stuff -> Done.
Try connecting to the server (localhost, port 21).
XAMPP for Linux and Mac comes with ProFTPD. Make sure to start the service from the XAMPP control panel -> Manage Servers.
Further complete instructions can be found at the localhost XAMPP dashboard -> How-to guides -> Configure FTP Access. I have pasted them below:
Open a new Linux terminal and ensure you are logged in as root.
Create a new group named ftp. This group will contain those user accounts allowed to upload files via FTP.
groupadd ftp
Add your account (in this example, susan) to the new group. Add other users if needed.
usermod -a -G ftp susan
Change the ownership and permissions of the htdocs/ subdirectory of the XAMPP installation directory (typically, /opt/lampp) so that it is writable by the new ftp group.
cd /opt/lampp
chown root.ftp htdocs
chmod 775 htdocs
Ensure that proFTPD is running in the XAMPP control panel.
You can now transfer files to the XAMPP server using the steps below:
Start an FTP client like WinSCP or FileZilla and enter the connection details as below.
If you’re connecting to the server from the same system, use
"127.0.0.1" as the host address. If you’re connecting from a different
system, use the network hostname or IP address of the XAMPP server.
Use "21" as the port.
Enter your Linux username and password as your FTP credentials.
Your FTP client should now connect to the server and enter the /opt/lampp/htdocs/ directory, which is the default Web server document root.
Transfer the file from your home directory to the server using normal FTP transfer conventions. If you’re using a graphical FTP client, you can usually drag and drop the file from one directory to the other. If you’re using a command-line FTP client, you can use the FTP PUT command.
Once the file is successfully transferred, you should be able to see it in action.
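As a hedged illustration of the command-line route (the host name and file name are placeholders):
ftp xampp-server.example.com
(log in with your Linux username and password; the session starts in /opt/lampp/htdocs, the document root mentioned above)
put index.html
bye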
I launched an Ubuntu XAMPP server on AWS.
I met the same problem with FTP, even though I had added the user to the ftp group for SFTP and set the permissions and owner group of the htdocs folder.
I finally found the reason in the inbound rules of the security group: after adding an All TCP, 0-65535 rule (0.0.0.0/0, ::/0), it worked!
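For reference, a hedged sketch of a narrower equivalent with the AWS CLI (the security group ID is a placeholder); opening port 21 plus the server's passive port range is usually enough, rather than all TCP:
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 21 --cidr 0.0.0.0/0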
On XAMPP, click "Start" and then "Admin".
Log in to localhost (127.0.0.1) without a password, using the second (admin) port, not port 21.
Add users and passwords, change your settings, then quit.

FTP transfer is kept on hold

I've set up a ProFTPD server on a CentOS 7 machine, and I am accessing it from other machines (Windows servers) to send files to it.
I've created some rules to only allow storing (STOR) files to a certain directory; the subdirectories will have different ownerships. At this point they are owned by user.
<Directory pathToDir>
<Limit STOR CWD>
AllowAll
</Limit>
<Limit READ RMD DELE MKD>
DenyAll
</Limit>
</Directory>
So here is what happens to me.
I log in as user from a Windows server machine and access the first sub-directory (owner user, group user), mput several files, and the files are copied.
I log in as user from a different Windows server machine and access the second sub-directory (owner user, group user), put a file, and I get a confirmation code (200 PORT command successful), but the transfer doesn't start; however, the file is created on the server and it is empty.
If I use my laptop, everything works.
Does anyone know how to fix this? Or what is wrong with my FTP server?
EDIT: FIXED. It was a Windows firewall issue; the client couldn't get a response from the FTP server. Since my server has a static IP, I added an exception to the Windows firewall allowing only that IP full access to FTP, rather than opening a set of ports.
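A hedged sketch of that kind of exception with netsh (the remote IP is a placeholder, not the poster's actual server address); active FTP needs the server to be able to open data connections back to the client, so the rule is inbound:
netsh advfirewall firewall add rule name="Allow FTP server" dir=in action=allow protocol=TCP remoteip=203.0.113.10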
These would point to a firewall issue:
If the connection times out (rather than failing instantly)
If a directory listing from the client also fails
As a workaround you could try passive (PASV) FTP.
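If you go the passive route, a hedged sketch of the server-side piece in proftpd.conf (the port range is an example choice), with the same range opened in any firewall in front of the server:
PassivePorts 49152 65534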

Using oracle db through ssh tunnel. Error "ORA-12541: TNS:no listener"

Hello I've got a problem accessing Oracle DB from our datacenter through a tunnel.
We've got a pretty standard datacenter with one machine being accessible from the outside
(I put its IP in the /etc/hosts file as dc) and the Oracle DB inside. The IP address of our Oracle database on the internal network is 192.168.1.7.
To create a tunnel I'm using the command:
ssh -L 1521:192.168.1.7:1521 root@dc
and of course it works (sometimes I also add some debug -vv to see if anything is passing through).
Now the difficult part: connecting to Oracle. I installed instantclient 11.2, and my tnsnames.ora looks like this:
testdb =
(DESCRIPTION =
(ADDRESS_LIST =
(ADDRESS = (PROTOCOL = TCP)(HOST = localhost)(PORT = 1521))
)
(CONNECT_DATA =
(SERVICE_NAME = dbname)
)
)
And when I try to connect using the command:
./sqlplus username/pass@testdb
It starts connecting through the tunnel (I see it in the ssh debug) but then it fails
telling:
./sqlplus username/pass@testdb
SQL*Plus: Release 11.2.0.1.0 Production on Wed Jan 13 20:46:07 2010
Copyright (c) 1982, 2009, Oracle. All rights reserved.
ERROR:
ORA-12541: TNS:no listener
Enter user-name:
When I execute this same command while on the intranet, it works (obviously the only difference is that in the tnsnames.ora HOST we have 192.168.1.7 and not localhost).
I also tried to use the simple command line:
./sqlplus username/pass@//localhost:1521/testdb
or alternatively
./sqlplus username/pass@//localhost:1521/testdb
But nothing helped :)
I would appreciate any help or suggestions. Am I missing some ssh flag to make it possible?
Probably relevant is the log file:
***********************************************************************
Fatal NI connect error 12541, connecting to:
(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=DBNAME)(CID=(PROGRAM=sqlplus@velvet)(HOST=velvet)(USER=johndoe))))
VERSION INFORMATION:
TNS for Linux: Version 11.2.0.1.0 - Production
TCP/IP NT Protocol Adapter for Linux: Version 11.2.0.1.0 - Production
Time: 13-JAN-2010 20:48:42
Tracing not turned on.
Tns error struct:
ns main err code: 12541
TNS-12541: Message 12541 not found; No message file for product=network, facility=TNS
ns secondary err code: 12560
nt main err code: 511
TNS-00511: Message 511 not found; No message file for product=network, facility=TNS
nt secondary err code: 111
nt OS err code: 0
where velvet is my local hostname and johndoe is my local username.
Why is it sent to the other side?
UPDATE:
After investigating a little bit more from inside the datacenter, it looks like:
- the first connection goes to port 1521
- but then sqlplus is redirected to a port number > 3300, which is different every time and increments by 3 (at least in the few tries I made)
- when we try to connect through a tunnel, sqlplus will try to connect to localhost and it will obviously fail
So the error "No Listener" comes probably from the fact that we are not redirecting those ports. Is there any way (probably some option in tnsnames.ora file) to force some specific port to be used?
Look into Metalink ID 361284.1 (Edit: effectively not public, but find the info here)
It seems like Oracle Connection Manager would be your option. It basically handles the port redirects inside the firewall. I haven't used it before, so cannot advise you further.
Update: Another way to go would be to use MTS, configure dispatchers with certain ports and open these ports in the firewall. You wouldn't have to install additional software for this, but connecting through shared server may require increasing LARGE_POOL_SIZE, among other considerations. So you'd still need the DBA role to change the DISPATCHERS parameter. You'd also have to bounce the DB.
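For illustration, a hedged sketch of pinning a dispatcher to a fixed port (the port number is an example, and this carries the privilege and restart caveats mentioned above):
ALTER SYSTEM SET DISPATCHERS='(ADDRESS=(PROTOCOL=TCP)(PORT=1525))(DISPATCHERS=1)' SCOPE=SPFILE;
After the restart, shared server connections are redirected to that fixed port instead of a dynamically assigned one, so it can be forwarded through the tunnel or opened in the firewall alongside 1521.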
Normally this should work. I would not use a default listener port as the entry for the ssh tunnel, but that should not be the problem. I would also not use the root account to create the ssh connection; preferably use a dedicated regular account. Are you using shared servers, or does the database happen to be a RAC database with a load-balance configuration?
A nice explanation is here How can I connect to ORACLE DB through ssh tunnel chain (double tunnel, server in company network) ?, a bit more complicated .....
update
Check out DbVisualizer; it now has integrated ssh tunneling. I think it is worth at least giving it a try. It's not free but good: multi-platform, multi-database, and very flexible.
In my case the problem was that the DB server has several IPs, and when I used the SSH tunnel it was connecting to the wrong one.
So check whether the destination IP is the same as the IP in the listener.ora file on the DB server.
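For reference, a hedged sketch of the relevant part of listener.ora (using the internal address from the question as an example); the HOST value is what the tunnel's destination has to match:
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = 192.168.1.7)(PORT = 1521))
    )
  )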
Can you try to make a trace to determine exactly what is happening?
For a server trace, try here (be careful! all new requests will be traced and the server can become overloaded).
For a client trace, check here.
MJ! Your tunnel only covers the initial TCP connect; your own LISTEN port is not tunnelled, and is probably not implemented. The firewall should allow a connection back to you, similar to active FTP.
All ports for Oracle are documented quite extensively starting page 670 of "Building Internet Firewalls" 2/E Chapter 23, paragraph: Oracle SQL*Net and Net8. You can view it on SafariBooksOnline.com
ISBN 1565928718
Perhaps your listener hasn't been started yet. Try running the "lsnrctl start" command.
Also a good explanation, which worked for me, is here: connection to an oracle database through a SSH secure shell.
Open putty and on the session page, enter the name of a server and make sure SSH is checked. The server can be any server that you have a
username and password to login with. I use one here called BLUEBIRD as
I own it!
On the connection->ssh->tunnels page, uncheck both options at the top ("Local ports accept ..." and "Remote ports do the same").
Enter 9999 (or any port above 1024) as the Source Port.
In the Destination, enter the database host and port as per tnsnames. In my case, this is a server called GREENBIRD and a port of 1521. Enter this as server:port.
As the port being forwarded is on your desktop, check the "Local" option. Leave "Auto" checked as well for the IP version.
Click the Add button. You will see L9999 greenbird:1521 (yours will differ) in the list of forwarded ports.
Go to the session page again, Enter a name for your saved session and click save.
Click Open. Supply a username and password for the server (BLUEBIRD in my case). You will log in to a normal ssh session on the server named BLUEBIRD.
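The same forwarding can be sketched with a plain OpenSSH command instead of PuTTY (host names and usernames follow the example above):
ssh -L 9999:greenbird:1521 myusername@bluebird
Then point tnsnames.ora (or an EZConnect string) at localhost:9999, e.g. sqlplus user/pass@//localhost:9999/dbname.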
