Our project uses an APNs provider running on CentOS 6.4 to push offline messages.
The APNs provider reads from a Redis queue with brpop, then reformats the data and sends the APNs message to Apple's push service.
Recently I hit a problem where the APNs provider stopped reading messages from the Redis queue, so I straced the process:
The abnormal strace result (the netstat line shows the connection to Redis is still ESTABLISHED):
tcp 0 0 ::1:39688 ::1:6379 ESTABLISHED 29452/ruby
[root@server]# strace -p 29452
Process 29452 attached - interrupt to quit
ppoll([{fd=56, events=POLLIN}], 1, NULL, NULL, 8
The normal strace result:
clock_gettime(CLOCK_MONOTONIC, {9266059, 349937955}) = 0
select(9, [8], NULL, NULL, {6, 0}) = 1 (in [8], left {3, 976969})
fcntl64(8, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
read(8, "*-1\r\n", 1024) = 5
write(8, "*3\r\n$5\r\nbrpop\r\n$9\r\napn_queue\r\n$1"..., 37) = 37
fcntl64(8, F_GETFL) = 0x802 (flags O_RDWR|O_NONBLOCK)
read(8, 0x9a0e5d8, 1024) = -1 EAGAIN (Resource temporarily unavailable)
clock_gettime(CLOCK_MONOTONIC, {9266061, 374086306}) = 0
select(9, [8], NULL, NULL, {6, 0}^C <unfinished ...>
Process 20493 detached
Here is the related code:
loop do
  begin
    message = @redis.brpop(self.queue, 1)
    if message
      APN.log(:info, "---------->#{message} ----------->\n")
      @notification = APN::Notification.new(JSON.parse(message.last, :symbolize_names => true))
      send_notification
    end
  rescue Exception => e
    if e.class == Interrupt || e.class == SystemExit
      APN.log(:info, 'Shutting down...')
      exit(0)
    end
    APN.log(:error, "class: #{e.class} Encountered error: #{e}, backtrace #{e.backtrace}")
    APN.log(:info, 'Trying to reconnect...')
    client.connect!
    APN.log(:info, 'Reconnected')
    client.push(@notification)
  end
end
This problem occurs aperiodically; the interval may be one or two months.
I think the code logic is right; my guess is that a network issue affects the normal running of the program.
When I kill the program with pkill, it returns to normal and starts reading messages from the queue again.
I don't know how to analyze the problem further, so for now I have cron restart the program (or send it a kill signal) every morning. :(
Does anyone have an idea how to handle this problem?
In your abnormal strace result, ppoll is called with a NULL timeout, so it blocks indefinitely.
The correct way is:
const struct timespec timeout = { .tv_sec = 10, .tv_nsec = 0 };
struct pollfd myfds;
int retresult;

myfds.fd = fd;
myfds.events = POLLIN;
myfds.revents = 0;
retresult = ppoll(&myfds, 1, &timeout, NULL);
This waits at most 10 seconds; once the 10 seconds are up, ppoll returns and execution continues with the next code.
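A short sketch of interpreting the return value, continuing the snippet above (my addition, not part of the original answer): ppoll returns the number of ready descriptors, 0 on timeout, or -1 on error.
if (retresult > 0 && (myfds.revents & POLLIN)) {
    /* the fd is readable: a read() here will not block */
} else if (retresult == 0) {
    /* the 10-second timeout expired with no data: retry, log, or reconnect */
} else {
    /* retresult == -1: error, or interrupted by a signal (errno == EINTR) */
}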
Related
The following code does not work on OSX (it works fine on Linux); bind() fails with errno=49, Can't assign requested address.
int fd, val;
struct sockaddr_in sa;

fd = socket(AF_INET, SOCK_DGRAM, 0);
if(fd < 0)
    return -1;

val = 1;
if(setsockopt(fd, SOL_SOCKET, SO_BROADCAST, &val, sizeof(val)) < 0)
    goto exit;

memset(&sa, 0, sizeof(sa));
sa.sin_len = sizeof(sa);
sa.sin_family = AF_INET;
sa.sin_addr.s_addr = htonl(INADDR_BROADCAST);
sa.sin_port = htons(50000);

/* This is the bind() that fails with errno=49 (EADDRNOTAVAIL) on OSX: */
if(bind(fd, (struct sockaddr *)&sa, sizeof(sa)))
    goto exit;
This bit of code does work on OSX if you specify an actual address other than INADDR_BROADCAST (at least INADDR_ANY works fine). I found that Darwin has a sin_len field that Linux does not, but setting it or leaving it cleared has no effect.
Any idea what could be the trouble? If it were related to MACF I feel as if it would return a security-related error.
There are few example sources for OSX in general, and I did not find any for UDP broadcast.
It seems even socat can't do this properly on OSX. Incidentally it failed in the same way with the same error.
$ echo "TEST" | socat - UDP-DATAGRAM:255.255.255.255:50000,broadcast
2022/06/13 22:39:16 socat[7349] E sendto(5, 0x14100c000, 17, 0, LEN=16 AF=2 255.255.255.255: 50000, 16): Can't assign requested address
I'm trying to monitor all newly created processes with kevents by monitoring EVFILT_PROC on the launchd pid, which is 1:
struct kevent ke = { 0 };
const pid_t pid_of_launchd = 1;
EV_SET(&ke, pid_of_launchd, EVFILT_PROC, EV_ENABLE | EV_ADD | EV_CLEAR, NOTE_FORK | NOTE_EXEC, 0, NULL);
I do receive events when new processes are created, but I can't retrieve the new process's PID or name:
struct kevent change = { 0 };
int next_event = kevent(kq, NULL, 0, &change, 1, NULL);
// change.ident always equals 1
Has anyone encountered this?
Thanks!
The ident is the identifier the event was registered with, i.e. the pid being watched, which is why it is always 1 here. You should be checking the filter-specific data field (a uint64_t) and the fflags.
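A minimal sketch of checking those fields (my own illustration, not from the answer; it prints which NOTE_* fired plus the raw ident and data values, and makes no promise that data contains the child pid):
#include <sys/event.h>
#include <stdio.h>

int main(void)
{
    int kq = kqueue();
    struct kevent ke;

    // Same registration as in the question: watch launchd (pid 1).
    EV_SET(&ke, 1, EVFILT_PROC, EV_ADD | EV_ENABLE | EV_CLEAR,
           NOTE_FORK | NOTE_EXEC, 0, NULL);
    if (kq == -1 || kevent(kq, &ke, 1, NULL, 0, NULL) == -1) {
        perror("kqueue/kevent");
        return 1;
    }

    for (;;) {
        struct kevent change;
        if (kevent(kq, NULL, 0, &change, 1, NULL) < 1)
            continue;
        // ident is the pid the filter was registered on (1 here);
        // fflags says which NOTE_* fired; data is the filter-specific
        // payload the answer refers to.
        if (change.fflags & NOTE_FORK)
            printf("watched pid %lu forked, data=%lld\n",
                   (unsigned long)change.ident, (long long)change.data);
        if (change.fflags & NOTE_EXEC)
            printf("watched pid %lu exec'd\n", (unsigned long)change.ident);
    }
}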
I can't seem to accept TCP connections using Grand Central Dispatch sources (dispatch_source_t), and I can't understand what I could possibly be doing wrong. I've gone through a lot of material (Apple's own documentation, Mike Ash's blog, etc.) and I'm still nowhere close.
Of course, I could simply use a third-party library (there are plenty on GitHub), but I'm trying to learn from this. Thank you.
I know the server is listening on the port, since telnet 127.0.0.1 6666 times out (but never actually connects) whereas any other port immediately fails. The piece of code where I would accept() connections is never run.
class MyServer {
    func startServing(#port: UInt16 = 6666) {
        // Create a listening socket on TCP/IPv4.
        println("\nStarting TCP server.\nCreating a listening Socket.")
        let listeningSocket = socket(AF_INET, SOCK_STREAM, 0 /*IPPROTO_TCP*/)
        if listeningSocket == -1 {
            println("Failed!")
            return
        }

        // Prepare a socket address.
        var no_sig_pipe: Int32 = 1
        setsockopt(listeningSocket, SOL_SOCKET, SO_NOSIGPIPE, &no_sig_pipe, socklen_t(sizeof(Int32)))

        // Working around Swift's strict initialization policies
        var addr = sockaddr_in( sin_len: __uint8_t(sizeof(sockaddr_in))
                              , sin_family: sa_family_t(AF_INET)
                              , sin_port: CFSwapInt16HostToBig(port)
                              , sin_addr: in_addr(s_addr: inet_addr("0.0.0.0"))
                              , sin_zero: (0, 0, 0, 0, 0, 0, 0, 0) )
        var sock_addr = sockaddr( sa_len: 0
                                , sa_family: 0
                                , sa_data: (0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0) )
        memcpy(&sock_addr, &addr, UInt(sizeof(sockaddr_in)))

        // bind() the socket to the address.
        println("Binding socket to \(port).")
        if bind(listeningSocket, &sock_addr, socklen_t(sizeof(sockaddr_in))) == POSIXSocketError {
            println("Failed!")
            close(listeningSocket)
            return
        }

        // If we still have a working socket at this point...
        if listeningSocket >= 0 {
            println("Creating GCD Source.")
            connectionSource = dispatch_source_create(DISPATCH_SOURCE_TYPE_READ, UInt(listeningSocket), 0, dispatch_get_global_queue(0, 0) /*serverQueue*/)
            if connectionSource == .None {
                println("Failed!")
                close(listeningSocket)
                return
            }
            let myself = self
            dispatch_source_set_event_handler(connectionSource, {
                // *** This NEVER gets called!! ***
                dispatch_async(dispatch_get_main_queue(), { println("GCD Src fired.") })
                println("GCD Source triggered.")
                myself.acceptConnections(listeningSocket)
            })
            dispatch_resume(connectionSource)
        }
    } // end func
} // end class
It looks like you just forgot to listen after you called bind. From Mike Ash's blog:
With the socket bound, the next step is to tell the system to listen on it. This is done with, you guessed it, the listen function. It takes two parameters: the socket to operate on, and the desired length of the queue used for listening. This queue length tells the system how many incoming connections you want it to sit on while trying to hand those connections off to your program. Unless you have a good reason to use something else, passing the SOMAXCONN gives you a safe, large value.
So, after your bind call and error checking, and before you create your dispatch source, add something like:
if listen(listeningSocket, SOMAXCONN) != 0 {
    println("Listen Failed!")
    close(listeningSocket)
    return
}
I cobbled a quick test together with your code and the above change, and it fires off a slew of "GCD Src fired." and "GCD Source triggered." messages when a connection to the socket is made.
I have a subprocess (running on MacOS) that I want to kill itself if the parent quits, exits, terminates, is killed, or crashes. Having followed the advice from "How to make child process die after parent exits?", I can't get it to quietly kill itself if the parent program crashes: it goes to 100% CPU until I manually kill it.
Here are the key points of the code:
int main(int argc, char *argv[])
{
    // Catch signals
    signal(SIGINT, interruptHandler);
    signal(SIGABRT, interruptHandler);
    signal(SIGTERM, interruptHandler);
    signal(SIGPIPE, interruptHandler);

    // Create kqueue event filter watching the parent for NOTE_EXIT
    int kqueue_fd = kqueue();
    struct kevent kev, recv_kev;
    EV_SET(&kev, parent_pid, EVFILT_PROC, EV_ADD|EV_ENABLE, NOTE_EXIT, 0, NULL);
    kevent(kqueue_fd, &kev, 1, NULL, 0, NULL);

    struct pollfd kqpoll;
    kqpoll.fd = kqueue_fd;
    kqpoll.events = POLLIN;

    // Start a run loop
    while(processEvents())
    {
        // Bail if the parent no longer exists...
        if(kill(parent_pid, 0) == -1)
            if(errno == ESRCH)
                break;

        // ...or if the kqueue reports the parent's exit...
        if(poll(&kqpoll, 1, 0) == 1)
            if(kevent(kqueue_fd, NULL, 0, &recv_kev, 1, NULL))
                break;

        // ...or if we have been reparented to launchd (pid 1).
        parent_pid = getppid();
        if(parent_pid == 1)
            break;

        sleep(a_short_time);
        // (simple code here causes subprocess to sleep longer if it hasn't
        // received any events recently)
    }
}
Answering my own question here:
The reason for this problem was not the detection of whether the parent process had died. In processEvents() I was polling the pipe from the parent process to see if there was any communication. When the parent died, poll() returned 1 and the read loop thought there was an infinite amount of data waiting to be read.
The solution was to detect whether the pipe had been disconnected or not.
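For reference, a minimal sketch of that check (my own illustration; pipe_fd and the message handling are hypothetical names, not from the original code). The key point is treating POLLHUP, or a zero-byte read() meaning EOF, as a disconnected pipe:
#include <poll.h>
#include <unistd.h>

/* Sketch: returns 1 once the parent's write end of the pipe is gone. */
static int parent_pipe_closed(int pipe_fd)
{
    struct pollfd pfd = { .fd = pipe_fd, .events = POLLIN };

    if (poll(&pfd, 1, 0) != 1)
        return 0;                      /* nothing pending */
    if (pfd.revents & (POLLHUP | POLLERR))
        return 1;                      /* peer closed, or pipe error */

    char buf[256];
    ssize_t n = read(pipe_fd, buf, sizeof buf);
    if (n == 0)
        return 1;                      /* EOF: parent closed the pipe */
    /* n > 0: real data; hand buf/n to the (elided) message handler. */
    return 0;
}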
When sending two UDP messages to a computer on Windows 7, it looks like sometimes the first message is not sent at all. Has anyone else experienced this?
The test code below demonstrates the issue on my machine. When I run the test program and watch all UDP traffic to 10.10.42.22, I see the second UDP message being sent, but the first UDP message is not sent. If I immediately run the program again, then both UDP messages are sent.
It doesn't fail every time, but it usually happens if I wait a couple of minutes before running the test again.
#include <iostream>
#include <winsock2.h>

int main()
{
    WSADATA wsaData;
    WSAStartup( MAKEWORD(2,2), &wsaData );

    sockaddr_in addr;
    addr.sin_family = AF_INET;
    addr.sin_port = htons( 52383 );
    addr.sin_addr.s_addr = inet_addr( "10.10.42.22" );

    SOCKET s = socket( AF_INET, SOCK_DGRAM, IPPROTO_UDP );

    if ( sendto( s, "TEST1", 5, 0, (SOCKADDR *) &addr, sizeof( addr ) ) != 5 )
        std::cout << "first message not sent" << std::endl;
    if ( sendto( s, "TEST2", 5, 0, (SOCKADDR *) &addr, sizeof( addr ) ) != 5 )
        std::cout << "second message not sent" << std::endl;

    closesocket( s );
    WSACleanup();
    return 0;
}
The problem here is basically the same as in this post, and it has to do with section 2.3.2.2 of RFC 1122:
2.3.2.2 ARP Packet Queue
The link layer SHOULD save (rather than discard) at least one (the latest) packet of each set of packets destined to the same unresolved IP address, and transmit the saved packet when the address has been resolved.
It looks like opening a new socket for every UDP message is a workaround.
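For illustration, a minimal sketch of that workaround (send_udp is a hypothetical helper, not from the post, and WSAStartup is assumed to have been called already):
#include <winsock2.h>

/* Hypothetical helper: open a fresh socket per datagram, per the
   workaround above, instead of reusing one socket for both sends. */
static int send_udp( const struct sockaddr_in * addr, const char * msg, int len )
{
    SOCKET s = socket( AF_INET, SOCK_DGRAM, IPPROTO_UDP );
    if ( s == INVALID_SOCKET )
        return -1;
    int rc = sendto( s, msg, len, 0, (const struct sockaddr *) addr, sizeof( *addr ) );
    closesocket( s );
    return ( rc == len ) ? 0 : -1;
}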