OK, I know there has been a lot of discussion regarding APNS connection failures. Most of it recommends checking that outgoing port 2195 is open on the server so it will allow the connection. That is not my problem, although I am experiencing the 'connection refused' error (111).
I have validated communication between my server and the sandbox server by receiving a 200 response from the Apple gateway. I know my certificates are good because I have tested the connection using openssl from a Mac. I have also been able to connect to the gateway once from my server, but I cannot get a consistent connection.
The test code I am using is as follows:
// $pem holds the path to the combined certificate/key .pem file; $key holds its passphrase
$ctx = stream_context_create();
stream_context_set_option($ctx, 'ssl', 'local_cert', $pem);
stream_context_set_option($ctx, 'ssl', 'passphrase', $key);

$gateway = 'gateway.sandbox.push.apple.com';
$port = '2195';
$remote_socket = 'ssl://' . $gateway . ':' . $port;

// Open the TLS connection to the APNS sandbox gateway (60 second timeout)
$fp = stream_socket_client($remote_socket, $err, $errstr, 60, STREAM_CLIENT_CONNECT, $ctx);
if (!$fp) {
    echo $err . '<br>';
    echo $errstr . '<br>';
    echo 'error=apple failed to connect';
} else {
    fclose($fp);
    echo 'success';
}
I have placed the .pem file in the same directory as the script, removed the passphrase, specified verify_peer, and used the STREAM_CLIENT_ASYNC_CONNECT and STREAM_CLIENT_PERSISTENT flags, all without success.
Is this an issue with something I am doing, an issue with the APNS sandbox server, or is this what I should expect from APNS? Any insights or help you can provide are greatly appreciated; my hair is getting pretty thin!
Regards.
PS -- If I remove the passphrase I get a 115 error saying the key cannot be accessed.
Despite being able to connect at least once, it turns out that you cannot get access to the APNS gateway without, at least, a virtual dedicated server on GoDaddy. At a minimum of $80 per month, paid upfront, that is too expensive. On to other means.
Some other things to try (a sketch combining the first two follows this list):
Use STREAM_CLIENT_CONNECT|STREAM_CLIENT_PERSISTENT in the stream_socket_client call
Don't use verify_peer
Check that the owner of the PHP script has sufficient rights to read the .pem file and to connect to a port on a remote host (i.e. not just localhost)
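A minimal sketch combining the first two suggestions (the .pem path and passphrase are placeholders, and no verify_peer option is set):

$ctx = stream_context_create();
stream_context_set_option($ctx, 'ssl', 'local_cert', '/path/to/apns-dev.pem'); // placeholder path
stream_context_set_option($ctx, 'ssl', 'passphrase', 'your-passphrase');       // omit if the key has no passphrase
// Note: no 'verify_peer' option is set here, per the second suggestion above

$fp = stream_socket_client(
    'ssl://gateway.sandbox.push.apple.com:2195',
    $errno,
    $errstr,
    60,
    STREAM_CLIENT_CONNECT | STREAM_CLIENT_PERSISTENT, // persistent connection, per the first suggestion
    $ctx
);

if (!$fp) {
    echo "APNS connection failed: $errno $errstr";
} else {
    echo 'connected';
    fclose($fp);
}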
The situation is that I want to establish a QUIC connection based on quic-go from my local machine to an ECS server. The related tests using localhost have already been done on both the local and remote devices. That is:
#local: .$QUIC-GO-PATH/example/client/main -insecure -keylog ssl.log -qlog trial.log -v https://127.0.0.1:6121/demo/tile
#local: .$QUIC-GO-PATH/example/main -qlog -tcp -v
These tests completed successfully.
Now the problem: when I start the local-to-remote connection, an error occurs:
#remote: .$QUIC-GO-PATH/example/main -qlog -tcp -v
#local: .$QUIC-GO-PATH/example/client/main -insecure -keylog ssl.log -qlog trial.log -v https://$REMOTE_IPADDR:6121/demo/tile
timeout: no recent network activity
When I examine the traffic in Wireshark, it seems the CRYPTO handshake never finishes (Wireshark capture attached).
The client qlog file is also attached.
The code is unmodified from https://github.com/lucas-clemente/quic-go.
Help!
This problem has been solved.
The example server in $QUIC-GO-PATH/example/main.go binds to 127.0.0.1:6121 by default, so it cannot be reached by a client from outside the machine. The fix is to bind the server to all interfaces instead:
-bind 0.0.0.0:6121
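For example, assuming the server binary accepts the -bind flag shown above, the remote server can be started so that it listens on all interfaces:
#remote: .$QUIC-GO-PATH/example/main -qlog -tcp -v -bind 0.0.0.0:6121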
I have been looking at options to ship logs from Windows. I already have logstash set up, and I currently ship logs from Linux (CentOS) servers to my ELK stack using logstash-forwarder with SSL encryption.
For compliance reasons encryption is pretty much essential in this environment.
I was hoping to use logstash-forwarder on Windows as well, but after compiling it with Go I ran into issues shipping Event Logs; I found reports that this isn't possible because of file-locking issues, which the logstash-forwarder developers appear to be working on, but I can't really wait.
Eventually I found that nxlog seems to be able to ship logs in an encrypted format using SSL. I've found a few posts about similar topics, and while I've learned quite a bit about how to ship the logs across and how to set up nxlog, I am still at a loss as to how to set up logstash to accept the logs so I can process them.
I've asked in the #nxlog and #logstash IRC channels and got confirmation in #nxlog that it is possible, but no further information on how it should be configured.
Anyway, I have taken the crt file created for use with my logstash-forwarder (I will create a new one if needed once I am happy this works) and renamed it with a .pem extension, which I believe should work as it is readable in ASCII format. I have created the %CERTDIR% environment variable and put my file in there, and I have written the following nxlog config file based on the other articles I have read. I think it is right, but I am not 100% sure:
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
# Enable json extension
<Extension json>
Module xm_json
</Extension>
# Nxlog internal logs
<Input internal>
Module im_internal
Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
# Windows Event Log
<Input eventlog>
# Uncomment im_msvistalog for Windows Vista/2008 and later
Module im_msvistalog
# Uncomment im_mseventlog for Windows XP/2000/2003
# Module im_mseventlog
Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
<Output sslout>
Module om_ssl
Host lumberjack.domain.com
Port 5000
CertFile %CERTDIR%/logstash-forwarder.crt
AllowUntrusted TRUE
OutputType Binary
</Output>
<Route 1>
Path eventlog, internal => sslout
</Route>
What I want to know is which input to use in logstash. I have tried shipping logs into a lumberjack input (using the same config my logstash-forwarders use) with the following config:
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
But when the service started I got the following in the nxlog log files:
2014-11-06 21:16:20 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:20 INFO nxlog-ce-2.8.1248 started
2014-11-06 21:16:21 INFO successfully connected to lumberjack.domain.com:5000
2014-11-06 21:16:22 INFO remote closed SSL socket
2014-11-06 21:16:22 INFO reconnecting in 1 seconds
2014-11-06 21:16:23 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:24 INFO reconnecting in 2 seconds
2014-11-06 21:16:24 ERROR couldn't connect to ssl socket on lumberjack.antmarketing.com:5000; No connection could be made because the target machine actively refused it.
When I turned the logging up to DEBUG I saw a massive amount of log output flying through, but I think the key part is:
2014-11-06 21:20:18 ERROR Exception was caused by "rv" at om_ssl.c:532/io_err_handler(); [om_ssl.c:532/io_err_handler()] -; [om_ssl.c:501/om_ssl_connect()] couldn't connect to ssl socket on lumberjack.domain.com:5000; No connection could be made because the target machine actively refused it.
I assume this points to me using the wrong input method on logstash, but I guess it could also be an issue with my SSL certs or the way they are configured. I also don't see any logs being generated on the logstash server at the time I make the connection from my Windows machine.
Thanks to b0ti for the help. There were a number of issues: my logstash config was crashing the service, and I also had problems with my nxlog setup and with the way my SSL certs were set up.
I found this post about creating SSL certs, which covers really nicely how to set up self-signed certs for use with a web service.
The main thing wrong with my nxlog config, as b0ti pointed out, was that I was trying to ship in binary format, which only works when shipping to another nxlog instance. I also noticed in the docs that AllowUntrusted defaults to false, so I just deleted that line once I was happy SSL was working.
<Output sslout>
Module om_ssl
Host lumberjack.domain.com
Port 5001
CAFile %CERTDIR%\nxlog-ca.crt
OutputType LineBased
</Output>
Create the CA key and secure it, as it needs to be kept secret (cd to /etc/pki/tls first):
certtool --generate-privkey --bits 2048 --outfile private/nxlog-ca.key
chown logstash:logstash private/nxlog-ca.key
chmod 600 private/nxlog-ca.key
Then create the self-signed CA cert, which will need to be transferred to your clients:
certtool --generate-self-signed --load-privkey private/nxlog-ca.key --bits 2048 --template nxlog-ca-rules.cnf --outfile certs/nxlog-ca.crt
The cnf file is the standard template, with only this option modified:
# Whether this is a CA certificate or not
ca
The logstash input method:
input {
  tcp {
    port => 5001
    type => "nxlogs"
    ssl_cacert => "/etc/pki/tls/certs/nxlog-ca.crt"
    ssl_cert => "/etc/pki/tls/certs/nxlog.crt"
    ssl_key => "/etc/pki/tls/private/nxlog.key"
    ssl_enable => true
    format => 'json'
  }
}
Generate the private key:
certtool --generate-privkey --bits 2048 --outfile private/nxlog.key
chown logstash:logstash private/nxlog.key
chmod 600 private/nxlog.key
Generate the CSR (Certificate Signing Request):
certtool --generate-request --bits 2048 --load-privkey private/nxlog.key --outfile private/nxlog.csr
Sign the cert with the CA private key:
certtool --generate-certificate --bits 2048 --load-request private/nxlog.csr --outfile certs/nxlog.crt --load-ca-certificate certs/nxlog-ca.crt --load-ca-privkey private/nxlog-ca.key --template nxlog-rules.cnf
Again, the only important changes over the standard cnf template are:
# Whether this certificate will be used to encrypt data (needed
# in TLS RSA ciphersuites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key
# Whether this certificate will be used for a TLS client
tls_www_client
I've tested this and it works well; I just need to get the filters set up now.
The binary data format is nxlog-specific; you should only use it if you are sending to another nxlog instance.
OutputType Binary
If this doesn't help, check the logstash logs since it's the remote end (logstash) which closes the connection.
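If the receiving end is logstash rather than another nxlog instance, switching the output to line-based mode (as in the accepted configuration earlier in this thread) is the usual fix:
OutputType LineBased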
When I run the following from Windows 7 under Cygwin to connect to a CFEngine 3.4.2 server:
cf-agent -Bs 217.64.173.210
Challenge response from server 217.64.173.210/217.64.173.210 was incorrect!
I: Made in version 'not specified' of '/var/cfengine/inputs/update.cf' near line 47
!! Authentication dialogue with 217.64.173.210 failed
and line 47 of /var/cfengine/inputs/update.cf is:
47 : perms => m("600"),
On the Cygwin side, in the keys folder:
/var/cfengine/ppkeys
localhost.pub
localhost.priv
root-MD5=b8825ba0a0e7017e34b15766d3b3ac58 (this key is also shared on the CFEngine server side)
On the CFEngine server side:
/var/cfengine/ppkeys/
localhost.priv
localhost.pub
root-MD5=b8825ba0a0e7017e34b15766d3b3ac58
With Regards
Sandeep
Did you also get the server to trust the client's key? Like so:
cf-key -t root-MD5=b8825ba0a0e7017e34b15766d3b3ac58
(on the server)
Also, try restarting cf-serverd in verbose mode with the -v switch on the server, and watch what error messages you get on that end.
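For example, something like this on the server (a sketch; -F/--no-fork is taken from the standard cf-serverd options, and how you stop the running daemon depends on your setup):
pkill cf-serverd     # stop the daemon currently running in the background
cf-serverd -v -F     # restart in the foreground with verbose output and watch for errors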
I've tried several gems and examples and cannot get this working. The more promising gems were double-bag-ftps and FTPFXP: I can connect, but I cannot transfer files in either active or passive mode.
Sample code with ftpfxp:
@conn2 = Net::FTPFXPTLS.new
@conn2.passive = true
@conn2.debug_mode = true
@conn2.connect('192.168.0.2', 990)
@conn2.login('myuser2', 'mypass2')
@conn2.chdir('/')
@conn2.get("data.txt")
@conn2.close
Sample code with double-bag:
ftps = DoubleBagFTPS.new
ftps.ssl_context = DoubleBagFTPS.create_ssl_context(:verify_mode => OpenSSL::SSL::VERIFY_NONE)
ftps.connect('192.168.0.2')
ftps.login('myuser2', 'mypass2')
ftps.chdir('/')
ftps.get("data.txt")
ftps.close
Sample error with double-bag:
~/.rbenv/versions/1.9.3-p385/lib/ruby/gems/1.9.1/gems/double-bag-ftps-0.1.0/lib/double_bag_ftps.rb:148:in `connect': Broken pipe - SSL_connect (Errno::EPIPE)
Sample error with ftpfxp:
~/.rbenv/versions/1.9.3-p385/lib/ruby/1.9.1/net/ftp.rb:206:in `initialize': No route to host - connect(2) (Errno::EHOSTUNREACH)
Any recommendation that does not involve external commands?
Thanks.
I've solved the issue. The server was returning a private IP address when connecting in passive mode with explicit TLS, so I added a line to double-bag-ftps to check whether the returned IP is private and, if so, fall back to the original public IP address.
GitHub Pull request
So if someone has the same issue, maybe this is the answer; I hope it helps someone else. :)
I downloaded gsoap 2.8, went into the samples folder, and ran make. Everything seems to have built fine. I then navigated into the "ssl" folder, ran sslserver in one xterm, and ran sslclient in a second xterm window. (I am running RHEL 6.) The server seems to run fine; it says "Bind successful: socket = 4". But when I run the client I receive the following message:
Error -1 fault: SOAP-ENV:Client [no subcode]
"End of file or no input: Operation interrupted or timed out (30 s receive delay) (30 s send delay)"
Detail: [no detail]
I have not modified any of the sample code, so it seems like it should just work. Can anyone please give me some advice as to what I should look at? I am trying to learn how to set up a SOAP server that uses SSL. (I already have a gsoap server running.) I searched all day for an example on the web and, as usual, there isn't one.
Thank you so much for any help.
You could rebuild this example with the compiler switch -DDEBUG to enable message logging (make 'sslclient_CFLAGS = -DWITH_OPENSSL -DWITH_GZIP -DDEBUG'). The TEST.log file will tell you what went wrong. I suspect it is a network issue with the server address/port, which is set by default to "https://localhost:18081".
You could also set the timeout parameter: soap.recv_timeout = 60 (for 60 seconds).