1. From the client:
root@amsys-LIFEBOOK-AH502:/home/amsys# radtest -t chap usr password 127.0.0.1 0 testing123
This is how I sent an Access-Request packet from the client (here, loopback only).
2. From the server: the server responds to the client as shown below:
Ready to process requests.
Ignoring request to auth address * port 1812 as server default from unknown client 127.0.0.1 port 34962 proto udp
3. Back on the client, radclient reports:
Sending Access-Request of id 67 from 0.0.0.0 port 47852 to 127.0.0.1 port 1812
User-Name = 'usr'
User-Password = 'password'
NAS-IP-Address = 127.0.1.1
NAS-Port = 0
Message-Authenticator = 0x00
radclient: no response from server for ID 67 socket 3
If anybody is aware of what could cause this, please respond. Thank you!
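For reference, the "Ignoring request ... from unknown client 127.0.0.1" line in the server output usually means the server has no client definition matching 127.0.0.1. A minimal sketch of a clients.conf entry (assuming a stock FreeRADIUS layout and the same shared secret passed to radtest):

client localhost {
    ipaddr = 127.0.0.1
    secret = testing123
}

After adding the entry, restart the server so it re-reads its configuration.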
I am sending a GET request to a host using TCP sockets, but I keep getting "301 Moved Permanently" from pages served over HTTPS.
I have tried changing the port from 80 to 443.
I have tried the ssl library as well.
But I keep getting the 301 code.
This is the code:
import socket
import click

@click.command()
@click.option("-h", "--host", prompt=True)
@click.option("-p", "--port", type=int, prompt=True, default=80)
def cli(host, port):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, port))
    message = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
    request = message.encode('utf-8')
    sent = 0
    while sent < len(request):
        sent = sent + sock.send(request[sent:])
    response = b""
    while True:
        chunk = sock.recv(4096)
        if len(chunk) == 0:  # no more data received, stop reading
            break
        response = response + chunk
    response_decode = response.decode('latin-1')
    sock.close()
    print(response_decode)
This is the response when I try to connect to www.eltiempo.com on port 80:
HTTP/1.1 301 Moved Permanently
Server: AkamaiGHost
Content-Length: 0
Location: https://www.eltiempo.com/
Cache-Control: max-age=120
Expires: Sat, 12 Feb 2022 18:24:28 GMT
Date: Sat, 12 Feb 2022 18:22:28 GMT
Connection: close
Server-Timing: cdn-cache; desc=HIT
Server-Timing: edge; dur=1
version: desktop
I get this error with port 443:
chunk = sock.recv(4096)
ConnectionResetError: [Errno 104] Connection reset by peer
Please tell me how to improve my code to avoid this 301 code.
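The 301 on port 80 is the site redirecting HTTP to HTTPS, and the connection reset on port 443 is what typically happens when plain text is sent on a TLS port without a handshake. Below is a minimal sketch (not the poster's code) using the standard-library ssl module to wrap the socket before sending, with the same request format as above:

import socket
import ssl

host = "www.eltiempo.com"

# Perform the TLS handshake before any bytes of the HTTP request are sent.
context = ssl.create_default_context()
with socket.create_connection((host, 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=host) as tls_sock:
        request = f"GET / HTTP/1.1\r\nHost: {host}\r\nConnection: close\r\n\r\n"
        tls_sock.sendall(request.encode("utf-8"))
        response = b""
        while True:
            chunk = tls_sock.recv(4096)
            if not chunk:  # server closed the connection
                break
            response += chunk

print(response.decode("latin-1").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"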
I have a question regarding two async servers that run on the same event loop. When I close one connection from the client side, I see that the second server stops as well.
Here is my code:
import asyncio
import os

from aiohttp import web

async def http_server(addr: str, port: int, verbose: int):
    runner = web.AppRunner(await init_app())  # init_app() is defined elsewhere
    await runner.setup()
    site = web.TCPSite(runner, str(addr), port)
    await site.start()

async def main(port: int, addr: str, verbose: int, restapiport: int):
    # HTTP server
    await http_server(str(addr), restapiport, verbose)
    print(f'Serving RPC on {addr}:{restapiport} ...')
    # TCP server for messaging
    server = await asyncio.start_server(handle_client, str(addr), port)  # handle_client is defined elsewhere
    addr = server.sockets[0].getsockname()
    print(f'Serving MBE on {addr} CSID={os.environ["CSID"]}')
    async with server:
        await server.serve_forever()
When I close one connection from the client side, I get the following exception (which is OK):
Task exception was never retrieved
future: <Task finished coro=<handle_client() done, defined at /opt/xenv.py:19>
exception=ConnectionResetError(104, 'Connection reset by peer')>
Traceback (most recent call last):
File "/opt/xenv.py", line 41, in handle_client
data = await reader.readexactly(msg_headers.X_MSG_TCP_DATA_BUF_SIZE)
File "/usr/local/lib/python3.7/asyncio/streams.py", line 679, in readexactly
await self._wait_for_data('readexactly')
File "/usr/local/lib/python3.7/asyncio/streams.py", line 473, in _wait_for_data
await self._waiter
File "/usr/local/lib/python3.7/asyncio/selector_events.py", line 814, in _
_read_ready__data_received
data = self._sock.recv(self.max_size)
ConnectionResetError: [Errno 104] Connection reset by peer
It seems that the ConnectionResetError exception somehow impacts the other asynchronous tasks. How can I handle this exception without having an impact on the other async task?
Here is netstat before the exception:
root@5901ff922714:/opt# netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 5901ff922714:34833 symulator_pm-b_1.:33330 ESTABLISHED
tcp 0 0 5901ff922714:34271 5901ff922714:25010 ESTABLISHED
tcp 0 0 5901ff922714:36695 5901ff922714:33330 ESTABLISHED
tcp 8192 0 5901ff922714:25010 5901ff922714:34271 ESTABLISHED
tcp 49152 0 5901ff922714:33330 5901ff922714:36695 ESTABLISHED
tcp 0 0 5901ff922714:40831 symulator_pm-b_1.:25011 ESTABLISHED
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
unix 3 [ ] STREAM CONNECTED 396830805
unix 3 [ ] STREAM CONNECTED 396830724
unix 3 [ ] STREAM CONNECTED 396830804
unix 3 [ ] STREAM CONNECTED 396830725
unix 2 [ ] DGRAM 396819365
Here is netstat after the exception:
root@5901ff922714:/opt# netstat
Active Internet connections (w/o servers)
Proto Recv-Q Send-Q Local Address Foreign Address State
Active UNIX domain sockets (w/o servers)
Proto RefCnt Flags Type State I-Node Path
unix 3 [ ] STREAM CONNECTED 396830805
unix 3 [ ] STREAM CONNECTED 396830724
unix 3 [ ] STREAM CONNECTED 396830804
unix 3 [ ] STREAM CONNECTED 396830725
Any help would be much appreciated.
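One way to keep a reset on one client local to its own task is to catch the error inside the connection handler, so it never surfaces as an unretrieved task exception and cannot disturb the HTTP server sharing the loop. A minimal sketch (not the poster's code; handle_client is the handler from /opt/xenv.py):

import asyncio

async def safe_handle_client(reader: asyncio.StreamReader,
                             writer: asyncio.StreamWriter) -> None:
    try:
        await handle_client(reader, writer)  # original handler, defined elsewhere
    except (ConnectionResetError, asyncio.IncompleteReadError):
        print("client disconnected abruptly")  # log locally and move on
    finally:
        writer.close()

# server = await asyncio.start_server(safe_handle_client, addr, port)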
I am currently struggling to receive broadcast packets sent to the IP address 255.255.255.255 in Ruby.
My network configuration has two different VLANs:
vlan10: IP 10.10.10.1, netmask 255.255.0.0 => broadcast address 10.10.255.255
vlan20: IP 10.0.0.1, netmask 255.0.0.0 => broadcast address 10.255.255.255
As the receiver I use the following test code to display the incoming packets:
require 'socket'

addr = ['0.0.0.0', 3020]
BasicSocket.do_not_reverse_lookup = true

# Create socket and bind to address
UDPSock = UDPSocket.new
UDPSock.bind(addr[0], addr[1])

while true
  data, addr = UDPSock.recvfrom(1024)
  puts "From addr: '%s', msg: '%s'" % [addr[0], data]
end

UDPSock.close
Receiving packets sent to 10.255.255.255 and 10.10.255.255 works fine.
Packets sent to 255.255.255.255, however, are not received.
Do I need to set additional properties to make Ruby receive "limited broadcast" packets?
I hope somebody can help. I am really lost.
Thanks, Uwe
Thanks for your help. While checking the router configuration I noticed that I had a wrong VLAN configuration, which resulted in this behavior.
I'm trying to test the failure-recovery behavior of ZeroMQ (via pyzmq) when using DEALER and ROUTER sockets. Here's my code:
import sys, zmq
import threading
import time, gc
import socket

def tprint(msg):
    """like print, but won't get newlines confused with multiple threads"""
    sys.stdout.write(msg + '\n')
    sys.stdout.flush()

class ClientWorker(threading.Thread):
    def __init__(self, id, ports):
        self.id = id
        self.ports = ports
        super(ClientWorker, self).__init__()

    def run(self):
        context = zmq.Context()
        socket = context.socket(zmq.DEALER)
        for port in self.ports:
            socket.connect("tcp://localhost:%d" % port)
        tprint("client %d started" % (self.id))
        for ia in xrange(self.id*100, self.id*100+100):
            socket.send_string('request %d' % (ia))
            time.sleep(1)
        socket.close()
        context.term()

class ServerWorker(threading.Thread):
    def __init__(self, port, maxReq=None):
        self.port = port
        self.maxReq = maxReq
        super(ServerWorker, self).__init__()

    def run(self):
        context = zmq.Context()
        socket = context.socket(zmq.ROUTER)
        socket.bind("tcp://127.0.0.1:%d" % (self.port))
        tprint("server started on port %d" % (self.port))
        numReq = 0
        while True:
            ident, msg = socket.recv_multipart()
            print self.port, ident, msg
            numReq += 1
            if self.maxReq and numReq >= self.maxReq:
                tprint("server on port %d exiting" % (self.port))
                break
        socket.unbind("tcp://127.0.0.1:%d" % (self.port))
        socket.close()
        context.term()

def main():
    ports = [5555, 5556, 5557]
    servers = [ServerWorker(port, 10 if port==5555 else None) for port in ports]
    for s in servers: s.start()
    for ia in xrange(1, 6):
        w = ClientWorker(ia, ports)
        w.start()
    servers[0].join()
    servers[0] = None
    gc.collect()
    time.sleep(30)
    tprint("restarting server")
    s = ServerWorker(port)
    s.start()

if __name__ == "__main__":
    main()
The behavior I observe is as follows:
the server at 5555 will print out 10 items that it receives at which point it exits
the client workers will NOT detect this failure, and will continue sending items to that server
when I attempt to re-bind() a new server thread to port 5555, I get the "Address in use" error,
this despite my closing the socket, calling context.term(), attempting to gc the server object, etc.
Three questions:
Am I correct to expect that the DEALER sockets should be able to detect the failure of one of the servers and redistribute the work to the remaining servers? I suspect the reason it can't detect the failure may be the same reason that the socket on port 5555 remains open.
Any ideas about the "Address in use" error?
Am I correct to expect that when I reconnect the server to port 5555, the clients will be able to detect the reconnection and resume sending messages to the server in a round-robin way taking into account the new server?
Am I correct to expect that the DEALER sockets should be able to detect the failure of one of the servers and redistribute the work to the remaining servers?
No, this isn't how DEALERs work. DEALERs that connect load-balance across their peers, whether or not they are there. That means that messages are still queued to worker 5555, even while it's down, and those messages will be delivered immediately when worker 5555 returns.
Any ideas about the "Address in use" error?
This is caused by the fact that port, when you start the resumed worker, is ports[-1], not ports[0], so it's binding to a port that's still in use by one of your workers, not the one that stopped.
Am I correct to expect that when I reconnect the server to port 5555, the clients will be able to detect the reconnection and resume sending messages to the server in a round-robin way taking into account the new server?
Yes, messages will resume being delivered to 5555 when it comes back, but I think you aren't quite right about which messages will be delivered there.
With some minor adjustments to your script, I get the output:
server started on port 5555
server started on port 5556
server started on port 5557
client 1 started
client 2 started
client 3 started
client 4 started
client 5 started
5555 00800041a7 request 100
5555 00800041a8 request 200
5555 00800041a9 request 300
5555 00800041aa request 400
5555 00800041ab request 500
5556 0060b7acd9 request 101
5556 0060b7acdb request 301
5556 0060b7acdc request 401
5556 0060b7acdd request 501
5556 0060b7acda request 201
5557 004431b782 request 102
5557 004431b784 request 302
5557 004431b783 request 202
5557 004431b785 request 402
5557 004431b786 request 502
5555 00800041a7 request 103
5555 00800041a9 request 303
5555 00800041ab request 503
5555 00800041a8 request 203
5555 00800041aa request 403
server on port 5555 exiting
5556 0060b7acd9 request 104
5556 0060b7acda request 204
5556 0060b7acdd request 504
5556 0060b7acdb request 304
5556 0060b7acdc request 404
5557 004431b782 request 105
5557 004431b786 request 505
5557 004431b783 request 205
5557 004431b784 request 305
5557 004431b785 request 405
5556 0060b7acd9 request 107 <- note jump from 405 to 107
5556 0060b7acdc request 407
5556 0060b7acdd request 507
5556 0060b7acda request 207
5556 0060b7acdb request 307
restarting server on 5555
server started on port 5555
5557 004431b786 request 508
5557 004431b782 request 108
5557 004431b785 request 408
5557 004431b783 request 208
5557 004431b784 request 308
5555 0041c8aac3 request 506 <- here are the X06 messages on the new 5555 worker
5555 0041c8aac4 request 306
5555 0041c8aac5 request 406
5555 0041c8aac6 request 106
5555 0041c8aac7 request 206
5555 0041c8aac7 request 209
5555 0041c8aac4 request 309
5555 0041c8aac3 request 509
5555 0041c8aac5 request 409
5555 0041c8aac6 request 109
5556 0060b7acdd request 510
5556 0060b7acdb request 310
5556 0060b7acda request 210
5556 0060b7acdc request 410
5556 0060b7acd9 request 110
5557 004431b784 request 311
5557 004431b786 request 511
...
Messages 106-506 were sent to 5555 and redelivered later. They were not re-routed to another worker when 5555 wasn't there to receive them.
You can use client_socket.hwm = N to limit how many messages may be pending on a worker before the client should start excluding it from round-robin, but you can't make it zero.
The version of your script that I used:
from binascii import hexlify
import threading
import socket
import sys
import time

import zmq

def tprint(msg):
    """like print, but won't get newlines confused with multiple threads"""
    sys.stdout.write(msg + '\n')
    sys.stdout.flush()

class ClientWorker(threading.Thread):
    def __init__(self, id, ports):
        self.id = id
        self.ports = ports
        super(ClientWorker, self).__init__()

    def run(self):
        context = zmq.Context.instance()
        socket = context.socket(zmq.DEALER)
        socket.hwm = 1  # limit messages sent to dead workers
        for port in self.ports:
            socket.connect("tcp://localhost:%d" % port)
        tprint("client %d started" % (self.id))
        for ia in xrange(self.id*100, self.id*100+100):
            socket.send_string('request %d' % (ia))
            time.sleep(1)
        socket.close()
        context.term()

class ServerWorker(threading.Thread):
    def __init__(self, port, maxReq=None):
        self.port = port
        self.maxReq = maxReq
        super(ServerWorker, self).__init__()

    def run(self):
        context = zmq.Context.instance()
        socket = context.socket(zmq.ROUTER)
        tprint("server started on port %d" % (self.port))
        socket.bind("tcp://127.0.0.1:%d" % (self.port))
        numReq = 0
        while True:
            ident, msg = socket.recv_multipart()
            print self.port, hexlify(ident), msg
            numReq += 1
            if self.maxReq and numReq >= self.maxReq:
                tprint("server on port %d exiting" % (self.port))
                break
        socket.close()
        context.term()

def main():
    ports = [5555, 5556, 5557]
    servers = [ServerWorker(port, 10 if port==5555 else None) for port in ports]
    for s in servers: s.start()
    for ia in xrange(1, 6):
        w = ClientWorker(ia, ports)
        w.start()
    servers[0].join()
    time.sleep(10)
    port = ports[0]
    tprint("restarting server on %i" % port)
    s = ServerWorker(port)
    s.start()

if __name__ == "__main__":
    ctx = zmq.Context.instance()
    try:
        main()
    finally:
        ctx.term()
Let's say I have the following piece of code:
server = TCPServer.new(3200)
client = server.accept()
How do I find out the port number from which the client sent its message to me? I have tried both client.peeraddr and client.addr, and neither gives me the proper port number.
The port that clients are connecting to is 3200. The port on the client side that the connection is created from is random for every connection, assigned by the OS from the unused ports.
client.peeraddr gives you an array that corresponds to a struct addrinfo. For AF_INET, it looks something like this:
["AF_INET", 48942, "127.0.0.1", "127.0.0.1"]
You can create an Addrinfo object from it and get the port like so:
require 'socket'

server = TCPServer.new(3200)
client = server.accept()

# Wrap the peeraddr array in an Addrinfo object to access its fields by name.
addr = Addrinfo.new(client.peeraddr)
port = addr.ip_port
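Equivalently, since the port is the second element of the peeraddr array, client.peeraddr[1] gives the same value directly.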