dnsmasq can't bind listen-address (Raspberry Pi 3)

I tried to set up hostapd and dnsmasq to broadcast a WiFi network from a Raspberry Pi 3. I only want to broadcast the WiFi so devices can connect to an HTTP server running on the Raspberry; no Ethernet bridge is required.
I installed hostapd and dnsmasq and configured them as follows:
dhcpcd.conf
# A sample configuration for dhcpcd.
# See dhcpcd.conf(5) for details.
# Allow users of this group to interact with dhcpcd via the control socket.
#controlgroup wheel
# Inform the DHCP server of our hostname for DDNS.
hostname
# Use the hardware address of the interface for the Client ID.
clientid
# or
# Use the same DUID + IAID as set in DHCPv6 for DHCPv4 ClientID as per RFC4361.
# Some non-RFC compliant DHCP servers do not reply with this set.
# In this case, comment out duid and enable clientid above.
#duid
# Persist interface configuration when dhcpcd exits.
persistent
# Rapid commit support.
# Safe to enable by default because it requires the equivalent option set
# on the server to actually work.
option rapid_commit
# A list of options to request from the DHCP server.
option domain_name_servers, domain_name, domain_search, host_name
option classless_static_routes
# Respect the network MTU. This is applied to DHCP routes.
option interface_mtu
# Most distributions have NTP support.
#option ntp_servers
# A ServerID is required by RFC2131.
require dhcp_server_identifier
# Generate SLAAC address using the Hardware Address of the interface
#slaac hwaddr
# OR generate Stable Private IPv6 Addresses based from the DUID
slaac private
#denyinterfaces wlan0
# Example static IP configuration:
#interface eth0
#static ip_address=192.168.0.5/24
#static ip6_address=fd51:42f8:caae:d92e::ff/64
#static routers=192.168.0.5
#static domain_name_servers=192.168.0.5
interface wlan0
allow-hotplug wlan0
#iface wlan0 inet static
static ip_address=192.168.0.5/24
nohook wpa_supplicant
#netmask 255.255.255.0
#network 192.168.0.0
#broadcast 192.168.0.255
# It is possible to fall back to a static IP if DHCP fails:
# define static profile
#profile static_eth0
#static ip_address=192.168.1.23/24
#static routers=192.168.1.1
#static domain_name_servers=192.168.1.1
# fallback to static profile on eth0
#interface eth0
#fallback static_eth0
As you can see, I have already tried different options, such as denyinterfaces and other things I found in various tutorials, but none of them worked.
hostapd.conf
interface=wlan0
driver=nl80211
ssid=****
hw_mode=g
channel=6
ieee80211n=1
wmm_enabled=1
ht_capab=[HT40][SHORT-GI-20][DSSS_CCK-40]
macaddr_acl=0
auth_algs=1
ignore_broadcast_ssid=0
wpa=2
wpa_key_mgmt=WPA-PSK
wpa_passphrase=****
rsn_pairwise=CCMP
dnsmasq.conf
interface=wlan0
listen-address=192.168.0.5
bind-interfaces
server=8.8.8.8
domain-needed
bogus-priv
dhcp-range=192.168.0.25,192.168.0.150,255.255.255.0,240h
Now I have two problems:
hostapd does not start on boot, although I did set DAEMON_CONF, and it works when I run hostapd /path/to/config manually.
My main problem: dnsmasq is not running. When I try to start the service, it fails with the error "cannot bind listen-address".
service dnsmasq status
● dnsmasq.service - dnsmasq - A lightweight DHCP and caching DNS server
Loaded: loaded (/lib/systemd/system/dnsmasq.service; enabled; vendor preset:
Active: failed (Result: exit-code) since Tue 2022-03-29 12:58:34 CEST; 17min
Process: 483 ExecStartPre=/usr/sbin/dnsmasq --test (code=exited, status=0/SUCC
Process: 491 ExecStart=/etc/init.d/dnsmasq systemd-exec (code=exited, status=2
Mar 29 12:58:33 raspberrypitop systemd[1]: Starting dnsmasq - A lightweight DHCP
Mar 29 12:58:33 raspberrypitop dnsmasq[483]: dnsmasq: syntax check OK.
Mar 29 12:58:34 raspberrypitop dnsmasq[491]: dnsmasq: failed to create listening socket for
Mar 29 12:58:34 raspberrypitop dnsmasq[491]: failed to create listening socket for 192.168.
Mar 29 12:58:34 raspberrypitop dnsmasq[491]: FAILED to start up
Mar 29 12:58:34 raspberrypitop systemd[1]: dnsmasq.service: Control process exit
Mar 29 12:58:34 raspberrypitop systemd[1]: dnsmasq.service: Failed with result '
Mar 29 12:58:34 raspberrypitop systemd[1]: Failed to start dnsmasq - A lightweig
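For reference, a quick way to check whether wlan0 actually has the address dnsmasq is trying to bind at that point is something like this (generic diagnostic commands, adapt to your own setup):
# Does wlan0 already carry 192.168.0.5 when dnsmasq starts?
ip addr show wlan0
# Is anything else already listening on the DNS/DHCP ports?
sudo ss -lunp | grep -E ':53|:67'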
I guess I messed up the configuration somehow, but since this is all new to me and there are many different tutorials for different OSs and OS versions, it's very hard to understand what is going wrong.

OK, I figured it out myself.
In my case, hostapd not starting automatically was actually causing the second issue, since it prevented the wlan0 interface from coming up.
I had to sudo systemctl unmask hostapd and reboot. dnsmasq would still not start, since it tried to start before hostapd had finished setting everything up, even when told to wait for hostapd.service. So I edited the dnsmasq.service systemd unit and added
[Service]
Restart=always
RestartSec=2
So it retries every 2 seconds until hostapd has done its job, and then everything works.
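For anyone hitting the same race, an arguably cleaner variant (a sketch, not the exact config I used) is a drop-in override created with sudo systemctl edit dnsmasq, which both orders dnsmasq after hostapd and keeps the retry behaviour:
# /etc/systemd/system/dnsmasq.service.d/override.conf
[Unit]
After=hostapd.service
Wants=hostapd.service

[Service]
Restart=on-failure
RestartSec=2
Followed by sudo systemctl daemon-reload and sudo systemctl restart dnsmasq.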

Related

After installing NMAP: dnet: Failed to open device eth0?

After installing nmap, running the nmap command gives this error:
dnet: Failed to open device eth0
QUITTING!
In my case, the error was caused by nmap being installed through Snap.
In order to get nmap to work, I had to tell snap to connect it to the network-control:
sudo snap connect nmap:network-control
After that everything worked fine.
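If you want to check first which interfaces the snap is already connected to (assuming a reasonably recent snapd), something like this should work:
# List the nmap snap's interface connections
snap connections nmap
# Grant raw network access if network-control is not connected yet
sudo snap connect nmap:network-control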
TLDR: Use the --unprivileged nmap option.
I just hit the same problem when I tried to scan/test hosts through a WireGuard 0.3.14 tunnel on Windows 8.1 and Windows 7, using the latest available versions, nmap 7.91 and npcap 1.31. I tried several solutions/combinations (running as admin, reinstalling, etc.), everything except downgrading to WinPcap 4.1.3 (the last available), all with the same result:
C:\Windows\system32>nmap -n -P0 -p 22 192.168.20.1
Host discovery disabled (-Pn). All addresses will be marked 'up' and scan times will be slower.
Starting Nmap 7.91 ( https://nmap.org ) at 2021-06-08 13:28 Hora de verano central (México)
dnet: Failed to open device eth0
QUITTING!
It's strange that the interface listing from nmap --iflist does not show a Windows device name associated with eth0 (it also doesn't show a MAC address; maybe the WireGuard interface driver install/hooks are at fault here). Relevant lines:
C:\Windows\system32>nmap --iflist
Starting Nmap 7.91 ( https://nmap.org ) at 2021-06-08 13:32 Hora de verano central (México)
************************INTERFACES************************
DEV (SHORT) IP/MASK TYPE UP MTU MAC
eth0 (eth0) 10.10.252.92/32 ethernet up 65535 00:00:00:00:00:00
:
DEV WINDEVICE
eth0 <none>
Relevant route print:
C:\Windows\system32>route print
===========================================================================
Interface list
8...........................Wintun Userspace Tunnel #77
:
IPv4 Route table
===========================================================================
Active routes:
Network Destination Net mask Gateway Interface Metric
192.168.20.0 255.255.255.0 On-link 10.10.252.92 5
192.168.20.255 255.255.255.255 On-link 10.10.252.92 261
Solved it using the --unprivileged option:
C:\Windows\system32>nmap --unprivileged -n -P0 -p 22 192.168.20.1
Host discovery disabled (-Pn). All addresses will be marked 'up' and scan times will be slower.
Starting Nmap 7.91 ( https://nmap.org ) at 2021-06-08 13:22 Hora de verano central (México)
Nmap scan report for 192.168.20.1
Host is up (0.20s latency).
PORT STATE SERVICE
22/tcp open ssh
Nmap done: 1 IP address (1 host up) scanned in 0.29 seconds
After installing nmap-7.80-setup.exe, please also install npcap-0.9986.exe, which is fully compatible with the latest Windows 10 releases.
You can try this: nmap <IP> -p <port> -e <ethX>
or install Npcap from the official page: https://nmap.org/npcap/#download
Never too late...
I've seen this "reinstall or install npcap version X" advice all around the web, but my solution was simple and logic-based.
Just enable the Npcap protocol ONLY on the network adapter you want to use, and disable it on the others.
The logic behind reinstalling may be the order in which Npcap disables and re-enables network interfaces, giving top priority (disabled and re-enabled last) to the one you are actually using. But if you don't want to mess with interface priorities, do as I said and enable Npcap only on the adapter you need nmap to use.
If you are also running Nessus on the same machine, you will not be able to use nmap: Npcap will be in use by Nessus, and nmap requires it as well.

Kibana stopped working and now server not getting ready although kibana.service starts up nicely

Without any major system update of my Ubuntu (4.4.0-142-generic #168-Ubuntu SMP), Kibana 7.2.0 stopped working. I am still able to start the service with sudo systemctl start kibana.service and the corresponding status looks fine. There is only a warning and no error, so this does not seem to be the issue:
# sudo systemctl status kibana.service
● kibana.service - Kibana
Loaded: loaded (/etc/systemd/system/kibana.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2019-07-10 09:43:49 CEST; 22min ago
Main PID: 14856 (node)
Tasks: 21
Memory: 583.2M
CPU: 1min 30.067s
CGroup: /system.slice/kibana.service
└─14856 /usr/share/kibana/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /usr/share/kibana/bin/../src/cli -c /etc/kibana/kibana.yml
Jul 10 09:56:36 srv003 kibana[14856]: {"type":"log","@timestamp":"2019-07-10T07:56:36Z","tags":["warning","task_manager"],"pid":14856,"message":"The task maps_telemetry \"Maps-maps_telemetry\" is not cancellable."}
Nevertheless, when I visit http://srv003:5601/ on my client machine, I keep seeing only (even after waiting 20 minutes):
Kibana server is not ready yet
On the server srv003 itself, I see
me@srv003:# curl -XGET http://localhost:5601/status -I
curl: (7) Failed to connect to localhost port 5601: Connection refused
This is strange, since Kibana really does seem to be listening on that port, and the firewall is disabled for testing purposes:
root@srv003# sudo lsof -nP -i | grep 5601
node 14856 kibana 18u IPv4 115911041 0t0 TCP 10.0.0.72:5601 (LISTEN)
root@srv003# sudo ufw status verbose
Status: inactive
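Note that lsof shows Kibana bound to 10.0.0.72 rather than 127.0.0.1, so a status check against that address (specific to this box) may behave differently from the localhost attempt above:
curl -XGET http://10.0.0.72:5601/status -I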
There is nothing suspicious in the log of kibana.service either:
root@srv003:/var/log# journalctl -u kibana.service | grep -A 99 "Jul 10 10:09:14"
Jul 10 10:09:14 srv003 systemd[1]: Started Kibana.
Jul 10 10:09:38 srv003 kibana[14856]: {"type":"log","@timestamp":"2019-07-10T08:09:38Z","tags":["warning","task_manager"],"pid":14856,"message":"The task maps_telemetry \"Maps-maps_telemetry\" is not cancellable."}
My Elasticsearch is still up and running. There is nothing interesting in the corresponding log files about Kibana:
root@srv003:/var/log# cat elasticsearch/elasticsearch.log | grep kibana
[2019-07-10T09:46:25,158][INFO ][o.e.c.m.MetaDataIndexTemplateService] [srv003] adding template [.kibana_task_manager] for index patterns [.kibana_task_manager]
[2019-07-10T09:47:32,955][INFO ][o.e.c.m.MetaDataCreateIndexService] [srv003] [.monitoring-kibana-7-2019.07.10] creating index, cause [auto(bulk api)], templates [.monitoring-kibana], shards [1]/[0], mappings [_doc]
Now I am running a bit out of options, and I hope somebody can give me another hint.
Edit: I do not have any Kibana plugins installed.
Consulted sources:
How to fix "Kibana server is not ready yet" error when using AKS
Kibana service is running but can not access via browser to console
Why won't Kibana Node server start up?
https://discuss.elastic.co/t/failed-to-start-kibana-7-0-1/180259/3 - most promising thread, but nobody ever answered
https://discuss.elastic.co/t/kibana-server-is-not-ready-yet-issue-after-upgrade-to-6-5-0/157021
https://discuss.elastic.co/t/kibana-server-not-ready/162075
It looks like, once Kibana enters the described undefined state, a simple reboot of the machine is necessary. That is of course not acceptable for a (virtual or physical) machine on which other services are running.

NFS Vagrant on Fedora 22

I'm trying to run Vagrant using libvirt as my provider. Using rsync is unbearable since I'm working with a huge shared directory, but vagrant does succeed when the nfs setting is commented out and the standard rsync config is set.
config.vm.synced_folder ".", "/vagrant", mount_options: ['dmode=777','fmode=777']
Vagrant hangs forever on this step after running vagrant up:
==> default: Mounting NFS shared folders...
In my Vagrantfile I have this uncommented and the rsync config commented out, which turns NFS on.
config.vm.synced_folder ".", "/vagrant", type: "nfs"
While Vagrant is running, it echoes this to the terminal:
Redirecting to /bin/systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: inactive (dead)
Redirecting to /bin/systemctl start nfs-server.service
Job for nfs-server.service failed. See "systemctl status nfs-server.service" and "journalctl -xe" for details.
Results of systemctl status nfs-server.service
dillon#localhost ~ $ systemctl status nfs-server.service
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; disabled; vendor preset: disabled)
Active: failed (Result: exit-code) since Fri 2015-05-29 22:24:47 PDT; 22s ago
Process: 3044 ExecStart=/usr/sbin/rpc.nfsd $RPCNFSDARGS (code=exited, status=1/FAILURE)
Process: 3040 ExecStartPre=/usr/sbin/exportfs -r (code=exited, status=0/SUCCESS)
Main PID: 3044 (code=exited, status=1/FAILURE)
May 29 22:24:47 localhost.sulfur systemd[1]: Starting NFS server and services...
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: writing fd to kernel failed: errno 111 (Connection refused)
May 29 22:24:47 localhost.sulfur rpc.nfsd[3044]: rpc.nfsd: unable to set any sockets for nfsd
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service: main process exited, code=exited, status=1/FAILURE
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS server and services.
May 29 22:24:47 localhost.sulfur systemd[1]: Unit nfs-server.service entered failed state.
May 29 22:24:47 localhost.sulfur systemd[1]: nfs-server.service failed.
The journalctl -xe log has a ton of stuff in it, so I won't post all of it here, but there are some things in bold red:
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.mountd[3024]: Could not bind socket: (98) Address already in use
May 29 22:24:47 localhost.sulfur rpc.statd[3028]: failed to create RPC listeners, exiting
May 29 22:24:47 localhost.sulfur systemd[1]: Failed to start NFS status monitor for NFSv2/3 locking..
Before I ran vagrant up I used netstat -tulpn to look for any process bound to the ports in question and did not see anything, and in fact while vagrant was hanging I ran netstat -tulpn again and still didn't see anything (checked as both the current user and root). Note that the (98) in the log lines above is errno 98 (EADDRINUSE), not a port number.
UPDATE: I haven't gotten any responses.
I wasn't able to figure out the issue. I tried using lxc instead, but it gets stuck on booting. I'd also prefer not to use VirtualBox, but the problem seems to lie with NFS, not the hypervisor. I'm going to try the rsync-auto feature Vagrant provides, but I'd prefer to get NFS working.
It looks like when using libvirt the user is given control over nfs and rpcbind, and Vagrant doesn't even try to touch those services as I had assumed it did. Running these commands solved my issue:
service rpcbind start
service nfs stop
service nfs start
The systemd unit dependencies of nfs-server.service contain rpcbind.target but not rpcbind.service.
One simple solution is to create a file /etc/systemd/system/nfs-server.service containing:
.include /usr/lib/systemd/system/nfs-server.service
[Unit]
Requires=rpcbind.service
After=rpcbind.service
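On systems where the .include directive is deprecated, the same effect can be had with a drop-in override instead (a sketch of the equivalent configuration):
# /etc/systemd/system/nfs-server.service.d/rpcbind.conf
[Unit]
Requires=rpcbind.service
After=rpcbind.service
Then run systemctl daemon-reload and restart nfs-server.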
On CentOS 7, all I needed to do
was install the missing rpcbind, like this:
yum -y install rpcbind
systemctl enable rpcbind
systemctl start rpcbind
systemctl restart nfs-server
Took me over an hour to find out and try this though :)
Michel
I've had issues with NFS mounts using both the libvirt and the VirtualBox provider on Fedora 22. After a lot of gnashing of teeth, I managed to figure out that it was a firewall issue. Fedora seems to ship with a firewalld service by default. Stopping that service - sudo systemctl stop firewalld - did the trick for me.
Of course, ideally you would configure this firewall rather than disable it entirely, but I don't know how to do that.
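If you would rather keep firewalld running, opening the NFS-related services should be enough (a sketch; this assumes the default zone):
sudo firewall-cmd --permanent --add-service=nfs
sudo firewall-cmd --permanent --add-service=rpc-bind
sudo firewall-cmd --permanent --add-service=mountd
sudo firewall-cmd --reload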

Weblogic + Docker + Vagrant = Connection Issue

First-time poster here, but I've been very impressed with this community. I've spent an embarrassing amount of time this week trying to resolve this issue; there doesn't seem to be much info on the net, and I am stuck. Thanks in advance for any insights!
I am moving an existing WLS application into Docker. The goal is to have a repeatable dev environment with WLS inside a container and those containers running inside Vagrant (a custom RHEL 6.5 VirtualBox).
I configured and started the WLS container. I am also able to access WLS services from the container on the VM. However, when I try to access the container from the host, I receive a connection timeout error.
I am running a private network (10.10.10.41) in Vagrant with port forwarding 7771:7001; if I access that IP:port (as I normally would when running a service within Vagrant), I get a connection refused.
I am able to run WLS "natively" on the VM and access it from the host successfully. I am also able to run Apache containers within the VM and access them from the host successfully. So the issue appears specific to WLS running inside a container in the VM.
I turned off the firewall on the VM, which I've read is a common issue with Vagrant + Docker.
I have a whole host of information to share, but rather than drink from the firehose I will start out with a couple pieces. Happy to attach any further info as necessary. Thanks again!
Vagrantfile
config.vm.network "private_network", ip: "10.10.10.41"
config.vm.network :forwarded_port, host: 7771, guest: 7001
Dockerfile
EXPOSE 7001
Docker run command
docker run -d -p 7001:7001 -v /my/release:/domain/release --name "wladmin" --link wlmanaged:wlmanaged my/wladmin
Container IP
docker inspect -f '{{ .NetworkSettings.IPAddress }}' wladmin
172.17.0.13
nmap VM (localhost)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000044s latency).
Other addresses for localhost (not scanned): 127.0.0.1
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
111/tcp open rpcbind
nmap VM (Vagrant private network IP)
Nmap scan report for 10.10.10.41
Host is up (0.000053s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
22/tcp open ssh
111/tcp open rpcbind
nmap WLS Docker Container
Nmap scan report for my.domain.com (172.17.0.11)
Host is up (0.000055s latency).
Not shown: 998 closed ports
PORT STATE SERVICE
7001/tcp open afs3-callback
7002/tcp open afs3-prserver
I found the root cause and wanted to share it back.
It turns out that because Vagrant has a private network adapter, we have to bind the container's published port to that adapter:
docker run -d -p 10.10.10.41:7001:7001 -v /my/release:/domain/release --name "wladmin" --link wlmanaged:wlmanaged my/wladmin
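With the published port bound to the private-network address, reachability can be verified from the host against that IP (the /console path assumes the standard WebLogic admin console):
# From the host: check the WLS admin console through the Vagrant private network IP
curl -I http://10.10.10.41:7001/console
# Or just confirm the port is open
nmap -p 7001 10.10.10.41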

postfix log shows error while sending email from amazon ec2 instance

I am trying to send an email using the postfix server on amazon EC2 instance.
The command is: sendmail xxxxxx@gmail.com
FROM:localhost
SUBJECT:Welcome
this is a test email....
.
However, I am getting the following error in /var/log/maillog:
Jan 13 09:00:37 ip-172-31-32-76 postfix/pickup[26635]: C43AE62D00: uid=222
from=
Jan 13 09:00:37 ip-172-31-32-76 postfix/cleanup[26727]: C43AE62D00:
message-id=<20140113090037.C43AE62D00@"HOSTNAME">
Jan 13 09:00:37 ip-172-31-32-76 postfix/qmgr[26636]: C43AE62D00:
from=<"MYHOSTNAME">, size=435, nrcpt=1 (queue active)
Jan 13 09:00:37 ip-172-31-32-76 postfix/smtp[26729]:
connect to 127.0.0.1[127.0.0.1]:2525: Connection refused
Jan 13 09:00:37 ip-172-31-32-76 postfix/smtp[26729]: C43AE62D00:
to=, relay=none, delay=22, delays=22/0.02/0/0, dsn=4.4.1, status=deferred (connect to 127.0.0.1[127.0.0.1]:2525: Connection refused)
I have hidden the details for hostname and the email ID to which I want to send.
Please help me out in this regard.
I have also added the port 25 in the outbound and inbound port in the security groups for my instance.
Regards,
Anurag
I think another service is running on the same port. Run the command netstat -tap and check whether something else is already using that port.
connect to 127.0.0.1[127.0.0.1]:2525: Connection refused
Something is preventing Postfix from using this port. (Port 2525 is sometimes used instead of 587 as an alternative SMTP port.)
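Since the log shows Postfix connecting out to 127.0.0.1:2525 as a relay, it is also worth checking whether main.cf points at a local relay on that port (just a guess based on the log, not confirmed):
postconf relayhost
postconf -n | grep -Ei 'relayhost|2525'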
Verify which ports are listening:
netstat -tanp | grep LISTEN
If you see sendmail (or any other MTA except for Postfix):
tcp 0 0 127.0.0.1:2525 0.0.0.0:* LISTEN 1014/sendmail
get rid of it:
service sendmail stop
yum remove sendmail
Verify the settings of the first service entry in:
/etc/postfix/master.cf
If it says:
smtp inet n - n - - smtpd
then Postfix listens on port 25 and your security group settings make sense. If the line says
2525 inet n - n - - smtpd
then you are telling Postfix to listen on port 2525 for incoming SMTP connections.
Also make sure the line that says:
submission inet n - n - - smtpd
does not begin with a comment (if you intend to use the submission port).
Verify iptables rules, adjust if necessary:
iptables -L -n
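If outbound SMTP does turn out to be blocked, a rule along these lines would allow it (a sketch; adapt the chain and policy to your setup):
# Allow outgoing SMTP from this host
iptables -A OUTPUT -p tcp --dport 25 -j ACCEPT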
This could be unrelated, but I'm going to post it here because I had a hard time finding the answer to my question. I was able to get outbound email working from a Vagrant box by editing /etc/resolv.conf to use Google's nameserver rather than the 10.0.x.x IP it was set to:
sudo nano /etc/resolv.conf
Change the nameserver IP:
nameserver 8.8.8.8
Then you'll need to restart postfix:
sudo /etc/init.d/postfix restart
