Call to undefined function curl_init() for Magento TAF

I'm trying to get Selenium to work with Magento TAF in Netbeans off a Wamp Server. When I run the test I get the following error:
Call to undefined function curl_init() in
C:\wamp\bin\php\php5.4.3\pear\PHPUnit\Extensions\SeleniumTestCase\Driver.php
on line 995
When I do a Google search I keep getting results saying to enable curl, which is pointless for me because I do have it enabled, and have had it enabled for quite some time. The information from phpinfo for the curl section is as follows:
curl
cURL support: enabled
cURL Information: 7.24.0
Age: 3
Features
AsynchDNS: Yes
Debug: No
GSS-Negotiate: Yes
IDN: No
IPv6: Yes
Largefile: Yes
NTLM: Yes
SPNEGO: No
SSL: Yes
SSPI: Yes
krb4: No
libz: Yes
CharConv: No
Protocols: dict, file, ftp, ftps, gopher, http, https, imap, imaps, ldap, pop3, pop3s, rtsp, scp, sftp, smtp, smtps, telnet, tftp
Host: i386-pc-win32
SSL Version: OpenSSL/0.9.8u
ZLib Version: 1.2.5
libSSH Version: libssh2/1.3.0
On line 995 of the file I have the following line:
$curl = curl_init();
When I ctrl+click the function in phpstorm I get taken to:
C:\Program Files (x86)\JetBrains\PhpStorm
5.0.4\plugins\php\lib\php.jar!\com\jetbrains\php\lang\psi\stubs\data\curl.php
and on that line I have
function curl_init ($url = null) {}
Any ideas what's going on?

It appears that your IDE is referencing one php.ini, while your WAMP server is referencing another when you run your tests.
Either:
a) Make sure you are using the same php.ini file between the IDE and the WAMP server, or;
b) Make sure all php.ini files have extension=php_curl.dll uncommented.
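One way to confirm the mismatch (a diagnostic sketch, not specific to Magento TAF): run the script below once with the WAMP PHP binary (C:\wamp\bin\php\php5.4.3\php.exe) and once with whichever PHP your IDE/PHPUnit invokes, then compare the output.
<?php
// Show which php.ini this particular PHP binary loads and whether the
// curl extension is actually available to it.
echo 'Loaded php.ini: ', php_ini_loaded_file() ?: '(none)', PHP_EOL;
echo 'Scanned inis:   ', php_ini_scanned_files() ?: '(none)', PHP_EOL;
echo 'curl extension: ', extension_loaded('curl') ? 'loaded' : 'MISSING', PHP_EOL;
echo 'curl_init():    ', function_exists('curl_init') ? 'defined' : 'undefined', PHP_EOL;
If the two runs print different php.ini paths, you've found the file that needs extension=php_curl.dll uncommented.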

Related

Why does wget (Windows) behind a proxy need the PROXY_HTTP/HTTPS environment variables, when Chrome doesn't?

I tried to download files from Google Drive using wget (on Windows) with the script shown here:
[JULY 2020 - Windows users batch file solution] wget/curl large file from google drive.
It works well, but when the computer is behind a proxy it will work ONLY if I set the environment variables PROXY_HTTP and PROXY_HTTPS. (It may be that these can also be set by a flag in the command, but I didn't try it.)
The fact that I can download files from Google Drive using Chrome without these environment variables, and without configuring Chrome for the proxy, tells me there must be a way to download from behind a proxy without configuring the application for it.
How can I make wget work without having to set the proxy manually (by flag or by environment variables)?
In all likelihood your Chrome also has a proxy set up in some way. In any case, the simplest way to define the proxy for wget is to create a .wgetrc file in your local home folder and set the following:
> vi ~/.wgetrc
use_proxy=on
http_proxy=http://[proxy_ip]:[proxy_port]
https_proxy=https://[proxy_ip]:[proxy_port]
ftp_proxy=http://[proxy_ip]:[proxy_port]
That should be all you need to do.
I found the solution after #Wilmar's comment, which he pointed out here (thanks!).
An application can automatically find out whether it is behind a proxy by requesting "http://wpad/wpad.dat".
If there is a proxy server, it answers with a PAC file containing the proxy details. The application can then extract those details for whatever settings it needs. That's how Chrome sets itself up for the proxy automatically.
Example using wget in windows to find proxy details
In Windows, you can use wget as follows to get the proxy server details. The details must be extracted from the text output, and you can use a tool like JREPL for that task. Here I only show where the details are.
call wget "http://wpad/wpad.dat" -o "ProcessLog.txt" -O "PAC.txt"
There are three possible scenarios here:
In case there is no proxy, PAC.txt is empty and ProcessLog.txt contains a message similar to this one.
ProcessLog.txt
--2020-09-01 08:38:29-- http://wpad/wpad.dat
Resolving wpad (wpad)... failed: The requested name is valid, but no data of the requested type was found. .
wget: unable to resolve host address 'wpad'
In case there is a proxy server and the Windows environment variables for the proxy are set:
http_proxy=http://proxy.mc.company.com:777
https_proxy=https://proxy.mc.company.com:777
then wget already knows the proxy address, so PAC.txt is empty and ProcessLog.txt contains a message similar to the following one, which includes the proxy details. In this example, the proxy details are [proxy_ip]:[proxy_port] = proxy.mc.company.com:777
ProcessLog.txt
--2020-09-01 08:29:59-- http://wpad/wpad.dat
Resolving proxy.mc.company.com (proxy.mc.company.com)... 10.100.200.150
Connecting to proxy.mc.company.com (proxy.mc.company.com)|10.100.200.150|:777... connected.
Proxy request sent, awaiting response... 302 Found
Location: http://www.wpad.com/wpad.dat [following]
--2020-09-01 08:30:00-- http://www.wpad.com/wpad.dat
Connecting to proxy.mc.company.com (proxy.mc.company.com)|10.100.200.150|:777... connected.
Proxy request sent, awaiting response... 403 Forbidden
2020-09-01 08:30:00 ERROR 403: Forbidden.
In case there is a proxy server but no Windows environment variables for the proxy are set, wget receives the PAC file with the proxy details. In this case PAC.txt contains a long message similar to the following one, which includes the proxy details. In this example, the proxy details are [proxy_ip]:[proxy_port] = proxy.mc.company.com:777
PAC.txt
function FindProxyForURL(url, host) {
    var me = myIpAddress();
    var resolved_ip = dnsResolve(host);
    if (host == "127.0.0.1") {return "DIRECT";}
    if (host == "localhost") {return "DIRECT";}
    if (isPlainHostName(host)) {return "DIRECT";}
    if (url.substring(0,37) == "http://lyncdiscoverinternal.company.com") {return "DIRECT";}
    if (!resolved_ip) { if (url.substring(0,6) == "https:") {return "PROXY proxy-mc.company.com:778";} else {return "PROXY proxy-mc.company.com:777";}}
    if (host == "moran-for-localhost-only.com") {return "DIRECT";}
    ...
    ...
Simplifying the use of wget in Windows to find proxy details
When using wget to find the proxy details, we can tell it to ignore the proxy environment variables (if any are set) using the --no-proxy flag. This leaves us with only the two scenarios (1) and (3) described above, so we only need the PAC file: if it is empty (scenario 1) there is no proxy; if it contains text (scenario 3) we are behind a proxy and can extract the proxy details from it.
call wget --no-proxy "http://wpad/wpad.dat" -O "PAC.txt"
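The same probe can be done from PHP's curl binding if that's more convenient than batch. This is only a sketch of the idea above (same wpad host and scenario numbering), and the regex is a simplification: a real PAC file is JavaScript and may build the proxy string dynamically.
<?php
// Fetch wpad.dat directly, ignoring any proxy environment variables
// (the equivalent of wget's --no-proxy).
$ch = curl_init('http://wpad/wpad.dat');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_PROXY          => '',  // empty string disables any proxy
    CURLOPT_TIMEOUT        => 5,
]);
$pac = curl_exec($ch);
curl_close($ch);

if ($pac === false || $pac === '') {
    echo "No WPAD response - probably no proxy (scenario 1)\n";
} elseif (preg_match_all('/PROXY\s+([\w.\-]+:\d+)/', $pac, $m)) {
    echo "Proxy candidates: ", implode(', ', array_unique($m[1])), "\n";
}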

Laravel Guzzle: cURL error 77: error setting certificate verify locations

OS: Ubuntu 16.04
PHP: 7.2
cURL: curl 7.47.0 (x86_64-pc-linux-gnu) libcurl/7.47.0 GnuTLS/3.4.10 zlib/1.2.8 libidn/1.32 librtmp/2.3
Guzzle: 6.3
My project currently uses some packages that depend on Guzzle, e.g. AWS, Mailgun... However, it often throws this error:
error: cURL error 77: error setting certificate verify locations:
CAfile: /etc/ssl/certs
CApath: /etc/ssl/certs (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)
Below is part of my php.ini
[curl]
; A default value for the CURLOPT_CAINFO option. This is required to be an
; absolute path.
curl.cainfo='/etc/ssl/certs/ca-certificates.crt'
[openssl]
; The location of a Certificate Authority (CA) file on the local filesystem
; to use when verifying the identity of SSL/TLS peers. Most users should
; not specify a value for this directive as PHP will attempt to use the
; OS-managed cert stores in its absence. If specified, this value may still
; be overridden on a per-stream basis via the "cafile" SSL stream context
; option.
openssl.cafile='/etc/ssl/certs/ca-certificates.crt'
; If openssl.cafile is not specified or if the CA file is not found, the
; directory pointed to by openssl.capath is searched for a suitable
; certificate. This value must be a correctly hashed certificate directory.
; Most users should not specify a value for this directive as PHP will
; attempt to use the OS-managed cert stores in its absence. If specified,
; this value may still be overridden on a per-stream basis via the "capath"
; SSL stream context option.
openssl.capath='/etc/ssl/certs/'
None of this works, even though the values are fully recognized when retrieved via ini_get(). For now, I have made a workaround by modifying vendor/guzzlehttp/guzzle/src/Client.php and adjusting the default config to 'verify' => '/etc/ssl/certs/ca-certificates.crt', after which everything is OK (which I believe is not a good option).
Retrieving via ini_get():
array(8) {
["default_cert_file"]=> string(21) "/usr/lib/ssl/cert.pem"
["default_cert_file_env"]=> string(13) "SSL_CERT_FILE"
["default_cert_dir"]=> string(18) "/usr/lib/ssl/certs"
["default_cert_dir_env"]=> string(12) "SSL_CERT_DIR"
["default_private_dir"]=> string(20) "/usr/lib/ssl/private"
["default_default_cert_area"]=> string(12) "/usr/lib/ssl"
["ini_cafile"]=> string(34) "/etc/ssl/certs/ca-certificates.crt"
["ini_capath"]=> string(15) "/etc/ssl/certs/"
}
openssl.cafile: /etc/ssl/certs/ca-certificates.crt
curl.cainfo: /etc/ssl/certs/ca-certificates.crt
Note: I've tried setting up ~/.curlrc together with export CURL_CA_BUNDLE=/etc/ssl/certs/ca-certificates.crt, but none of this worked.
Does anyone have any solution or any clue to solve this issue?
This relates to the 'SSL certificate problem: unable to get local issuer certificate' error. Rather obviously, it applies to the system sending the cURL request (and not the server receiving the request).
Download the latest cacert.pem from https://curl.haxx.se/ca/cacert.pem
Add the following line to php.ini (if this is shared hosting and you don't have access to php.ini, you can add this to .user.ini in public_html):
curl.cainfo="/path/to/downloaded/cacert.pem"
Make sure you enclose the path within double quotation marks!
Grant your web server user (e.g. nginx or www-data) permission to read the file:
sudo chown www-data /etc/ssl/certs/cacert.pem
As a last step, restart PHP-FPM and nginx or Apache.
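As an alternative to the vendor-patching workaround from the question (and useful when you can't touch php.ini at all), Guzzle's documented 'verify' request option lets you point a client at the downloaded bundle directly. A minimal sketch, assuming the bundle path from step 1; the URL is only a placeholder:
<?php
require 'vendor/autoload.php';

use GuzzleHttp\Client;

// Point this client at the downloaded CA bundle explicitly, instead of
// editing vendor/guzzlehttp/guzzle/src/Client.php.
$client = new Client([
    'verify' => '/path/to/downloaded/cacert.pem', // same path as curl.cainfo
]);

$response = $client->get('https://example.com/'); // placeholder URL
echo $response->getStatusCode(), PHP_EOL;
The same 'verify' key can also be passed per request if only some calls need the custom bundle.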

Using nxlog to ship logs in to logstash from Windows using om_ssl

I have been looking at options to ship logs from Windows. I already have logstash set up, and I currently ship logs from Linux (CentOS) servers to my ELK stack using logstash-forwarder and SSL encryption.
For compliance reasons encryption is pretty much essential in this environment.
I was hoping to use logstash-forwarder on Windows as well, but after compiling it with Go I ran into issues shipping Event Logs, and I found some people saying that it wasn't possible because of file-locking issues, which the logstash-forwarder people appear to be working on, but I can't really wait.
Anyway, eventually I found out that nxlog seems to be able to ship logs in an encrypted format using SSL. I've found a few posts on similar topics, and while I've learned quite a bit about how to ship the logs across and how to set up nxlog, I am still at a loss as to how to set up logstash to accept the logs so I can process them.
I've asked in the #nxlog and #logstash IRC channels, and got some confirmation in #nxlog that it is possible, but no further information on how it should be configured.
Anyway, I have taken the crt file created for use with my logstash-forwarder (I will create a new one if needed once I am happy that this works) and renamed it with a .pem extension, which I believe should work as it is readable in ASCII format. I have created the environment variable %CERTDIR% and put my file in there, and I have written the following config file for nxlog based on the other articles I have read. I think it is right, but I am not 100% sure:
## This is a sample configuration file. See the nxlog reference manual about the
## configuration options. It should be installed locally and is also available
## online at http://nxlog.org/nxlog-docs/en/nxlog-reference-manual.html
## Please set the ROOT to the folder your nxlog was installed into,
## otherwise it will not start.
#define ROOT C:\Program Files\nxlog
define ROOT C:\Program Files (x86)\nxlog
Moduledir %ROOT%\modules
CacheDir %ROOT%\data
Pidfile %ROOT%\data\nxlog.pid
SpoolDir %ROOT%\data
LogFile %ROOT%\data\nxlog.log
# Enable json extension
<Extension json>
    Module xm_json
</Extension>
# Nxlog internal logs
<Input internal>
    Module im_internal
    Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
# Windows Event Log
<Input eventlog>
    # Uncomment im_msvistalog for Windows Vista/2008 and later
    Module im_msvistalog
    # Uncomment im_mseventlog for Windows XP/2000/2003
    # Module im_mseventlog
    Exec $EventReceivedTime = integer($EventReceivedTime) / 1000000; to_json();
</Input>
<Output sslout>
    Module om_ssl
    Host lumberjack.domain.com
    Port 5000
    CertFile %CERTDIR%/logstash-forwarder.crt
    AllowUntrusted TRUE
    OutputType Binary
</Output>
<Route 1>
    Path eventlog, internal => sslout
</Route>
What I want to know is what input format to use in logstash. I have tried shipping logs into a lumberjack input type (using the same config as my logstash-forwarders use) with the following config:
input {
    lumberjack {
        port => 5000
        type => "logs"
        ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
        ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
    }
}
But when the service started, I got the following in the nxlog log files:
2014-11-06 21:16:20 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:20 INFO nxlog-ce-2.8.1248 started
2014-11-06 21:16:21 INFO successfully connected to lumberjack.domain.com:5000
2014-11-06 21:16:22 INFO remote closed SSL socket
2014-11-06 21:16:22 INFO reconnecting in 1 seconds
2014-11-06 21:16:23 INFO connecting to lumberjack.domain.com:5000
2014-11-06 21:16:24 INFO reconnecting in 2 seconds
2014-11-06 21:16:24 ERROR couldn't connect to ssl socket on lumberjack.antmarketing.com:5000; No connection could be made because the target machine actively refused it.
When I turned the logging up to DEBUG I saw a massive number of log lines flying past, but I think the key part is:
2014-11-06 21:20:18 ERROR Exception was caused by "rv" at om_ssl.c:532/io_err_handler(); [om_ssl.c:532/io_err_handler()] -; [om_ssl.c:501/om_ssl_connect()] couldn't connect to ssl socket on lumberjack.domain.com:5000; No connection could be made because the target machine actively refused it.
I assume this points to me using the wrong input method on logstash, but I guess it could also be an issue with my SSL certs or the way they are configured. I don't appear to be getting any logs generated on the logstash server at the time I make the connection from my Windows machine.
Thanks to b0ti for the help; there were a number of issues. My logstash config was crashing the service, but I also had issues with my nxlog setup, as well as with getting my SSL certs set up the correct way.
I found this post about creating SSL certs, which covers really nicely how to set up self-signed certs for use with a web service.
The main thing wrong with nxlog was, as b0ti pointed out, that I was trying to ship in binary, which only works when shipping to another nxlog instance. I also noticed in the docs that the default for AllowUntrusted is false, so I just had to delete it once I was happy SSL was working.
<Output sslout>
    Module om_ssl
    Host lumberjack.domain.com
    Port 5001
    CAFile %CERTDIR%\nxlog-ca.crt
    OutputType LineBased
</Output>
Create the CA key, and secure it, as it needs to be kept secret (cd to /etc/pki/tls):
certtool --generate-privkey --bits 2048 --outfile private/nxlog-ca.key
chown logstash:logstash private/nxlog-ca.key
chmod 600 private/nxlog-ca.key
Then create the self-signed CA cert, which will need to be transferred to your clients:
certtool --generate-self-signed --load-privkey private/nxlog-ca.key --bits 2048 --template nxlog-ca-rules.cnf --outfile certs/nxlog-ca.crt
The cnf file is standard, with only this option modified:
# Whether this is a CA certificate or not
ca
The logstash input method:
input {
    tcp {
        port => 5001
        type => "nxlogs"
        ssl_cacert => "/etc/pki/tls/certs/nxlog-ca.crt"
        ssl_cert => "/etc/pki/tls/certs/nxlog.crt"
        ssl_key => "/etc/pki/tls/private/nxlog.key"
        ssl_enable => true
        format => 'json'
    }
}
Generate the private key:
certtool --generate-privkey --bits 2048 --outfile private/nxlog.key
chown logstash:logstash private/nxlog.key
chmod 600 private/nxlog.key
Generate the CSR (Certificate Signing Request):
certtool --generate-request --bits 2048 --load-privkey private/nxlog.key --outfile private/nxlog.csr
Sign the cert with the CA private key:
certtool --generate-certificate --bits 2048 --load-request private/nxlog.csr --outfile certs/nxlog.crt --load-ca-certificate certs/nxlog-ca.crt --load-ca-privkey private/nxlog-ca.key --template nxlog-rules.cnf
Again, the only important parts beyond the standard cnf file inputs will be:
# Whether this certificate will be used to encrypt data (needed
# in TLS RSA ciphersuites). Note that it is preferred to use different
# keys for encryption and signing.
encryption_key
# Whether this certificate will be used for a TLS client
tls_www_client
I've tested this and it works well; I just need to get the filters set up now.
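One extra note: since the input is now a plain TLS socket receiving one JSON document per line, you can smoke-test it without nxlog at all. A minimal PHP sketch, assuming the host, port, and CA path used above (if your tcp input enforces client verification you would also need a 'local_cert' entry in the context):
<?php
// Open a TLS connection to the logstash tcp input, verifying the server
// against the self-signed CA created above, and send one JSON line.
$ctx = stream_context_create([
    'ssl' => [
        'cafile'      => '/etc/pki/tls/certs/nxlog-ca.crt',
        'verify_peer' => true,
    ],
]);
$sock = stream_socket_client('ssl://lumberjack.domain.com:5001',
    $errno, $errstr, 10, STREAM_CLIENT_CONNECT, $ctx);
if ($sock === false) {
    die("TLS connect failed: $errno $errstr\n");
}
fwrite($sock, json_encode(['Message' => 'smoke test']) . "\n");
fclose($sock);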
The binary data format is nxlog-specific; you should only use it if you are sending to another nxlog instance.
OutputType Binary
If this doesn't help, check the logstash logs since it's the remote end (logstash) which closes the connection.

Challenge response from CFEngine Server failure while connecting Cygwin to CFEngine

When I run the following from Windows 7 with Cygwin to connect to CFEngine version 3.4.2:
cf-agent -Bs 217.64.173.210
Challenge response from server 217.64.173.210/217.64.173.210 was incorrect!
I: Made in version 'not specified' of '/var/cfengine/inputs/update.cf' near line 47
!! Authentication dialogue with 217.64.173.210 failed
Challenge response from server 217.64.173.210/217.64.173.210 was incorrect!
I: Made in version 'not specified' of '/var/cfengine/inputs/update.cf' near line
and in /var/cfengine/inputs/update.cf, line 47 is:
47 : perms => m("600"),
On Cygwin, in the keys folder:
/var/cfengine/ppkeys
localhost.pub
localhost.priv
root-MD5=b8825ba0a0e7017e34b15766d3b3ac58 (which is also the shared key on the CFEngine server side)
On the CFEngine server side:
/var/cfengine/ppkeys/
localhost.priv
localhost.pub
root-MD5=b8825ba0a0e7017e34b15766d3b3ac58
Did you also get the server to trust the client's key? Like so:
cf-key -t root-MD5=b8825ba0a0e7017e34b15766d3b3ac58
(on the server)
Also, try restarting cf-serverd in verbose mode with the -v switch on the server, and watch what error messages you get on that end.

Connecting to Mongod via Ruby driver using SSL returns Mongo::ConnectionFailure

I want to use SSL with MongoDB. It's not enabled by default so one has to compile from source with the necessary options. I followed the official documentation and got the v2.6.4 binary built and running nicely on a freshly deployed server running Ubuntu 14.04. All good so far.
Next I set up mongod as described in the official docs. I followed their example of using a self-signed certificate for testing purposes. The relevant part of the config looks like:
...
net:
  bindIp: 127.0.0.1
  port: 27017
  ssl:
    mode: requireSSL
    PEMKeyFile: /opt/mongo/security/mongodb.pem
...
If I then run the client and specify SSL, I connect fine ($ mongo --ssl). FWIW, if I try without the --ssl argument it doesn't connect.
Ok, time to link up via Ruby. I'm on the same server and I try the following ruby script:
require 'rubygems'
require 'mongo'
client = Mongo::MongoClient.new('localhost', 27017, {:ssl => true})
Nope. It's not having it:
/home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:422:in `connect': Failed to connect to a master node at localhost:27017 (Mongo::ConnectionFailure)
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:661:in `setup'
from /home/test/.rvm/gems/ruby-1.9.3-p547/gems/mongo-1.11.1/lib/mongo/mongo_client.rb:177:in `initialize'
from test_mongo_ssl.rb:8:in `new'
from test_mongo_ssl.rb:8:in `<main>'
So best to make sure that there's nothing wrong with the default connection without SSL. I disable SSL on mongod and restart. Then try the ruby script again, this time without the ssl option:
...
client = Mongo::MongoClient.new('localhost', 27017)
And it's fine. Therefore I feel I've narrowed it down to the ruby driver & ssl, but beyond that there's little else to go on.
EDIT I tried their Python driver on the same server and used their example program:
from pymongo import MongoClient
c = MongoClient(host="localhost", port=27017, ssl=True)
And that did connect OK. So at least I can feel fairly confident that the mongod is configured properly and the issue lies somewhere within the Mongo Ruby driver. Quite possibly a bug in their current driver (v1.11.1).
UPDATE I've also had success connecting via ssl using the node.js driver:
var mongo = require('mongodb');
var database = new mongo.Db("my_database", new mongo.Server("127.0.0.1", 27017, {ssl: true}), {w: 0});
database.open(function(err, db) {
    if (err) throw err;
    db.authenticate('user', 'password', function(err, result) {
        var collection = db.collection('foo');
        collection.findOne(function(err, item) {
            if (err) throw err;
            console.log(item);
            db.close();
        });
    });
});
It therefore seems increasingly likely that there's either a bug in the Ruby driver, or the documentation is incomplete and doesn't accurately explain how to use SSL connections. I've opened a new issue on MongoDB's issue tracker to hopefully get to the bottom of this.
Rather embarrassingly, the solution to this issue was that my /etc/hosts file had a typo in the localhost entry:
127.0.0.1 localhost.localdomain locahost
As you can see, it's missing the second letter L in "localhost". (I suspect it went missing during an accidental vim gesture.) Therefore, to resolve it, I just had to reinstate the missing "l":
127.0.0.1 localhost.localdomain localhost
It's still a mystery why the Python sample worked correctly, and it's because of that success that I didn't twig earlier that the problem was with the hosts file.
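In hindsight, the quickest way to catch this class of problem is to check name resolution directly instead of trusting driver error messages. A trivial check (PHP here, but anything equivalent works):
<?php
// gethostbyname() returns the name unchanged when resolution fails, so a
// broken /etc/hosts entry for localhost shows up immediately.
var_dump(gethostbyname('localhost')); // expect string(9) "127.0.0.1"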
