Use Laravel Echo with Docker (CORS problem)

I want to use Laravel Echo in the following way:
I have two docker containers, one for laravel (php) and one for the socket server (https://hub.docker.com/r/mintopia/laravel-echo-server).
Now I have the problem that Laravel Echo can't connect to the server because of CORS.
I already found one option for the echo server, so I added ECHO_ALLOW_ORIGIN=http://php:80 to the environment variables. Unfortunately, this changes nothing.
Can someone please tell me how to fix this?

I use k1sliy/laravel-echo-server, but the locations/commands should be similar.
You share a directory containing your TLS/SSL cert and laravel-echo-server.json, or just the files themselves. For example, I start mine with something like this (note: I think my port is non-standard for Echo because I need one that Cloudflare will proxy):
docker run -d --name echo \
  -p 8443:8443 \
  -v YOURPATH/laravel-echo-server.json:/app/laravel-echo-server.json \
  -v YOURPATH/privkey.pem:/app/privkey.pem \
  -v YOURPATH/cert.pem:/app/cert.pem \
  k1sliy/laravel-echo-server
You'll want to edit the laravel-echo-server.json file and make sure it contains the following (where YOUR_ORIGIN_HERE is the origin you want to allow), then destroy and recreate the Docker container to force it to reread the config:
"apiOriginAllow": {
"allowCors": true,
"allowOrigin": " YOUR_ORIGIN_HERE ",
"allowMethods": "OPTIONS, GET, POST",
"allowHeaders": "Origin, Content-Type, X-Auth-Token, X-Requested-With, Accept, Authorization, X-CSRF-TOKEN, X-Socket-Id"
}
The origin is the origin as the host/client browser sees it. php is likely a hostname mapped to the private 172 network inside the Docker containers, which isn't what you want. You want whatever you type into the browser's address bar (without the protocol) to access the site, likely 127.0.0.1, localhost, or 192.168.X.X, followed by a colon and the port (likely 80 or 443; you can also use * for the port to allow any port to talk to the echo server).
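For the setup in the question, that means pointing ECHO_ALLOW_ORIGIN at the browser-visible origin rather than the internal container hostname. A minimal sketch, assuming the site is reached at http://localhost and Echo listens on its default port 6001 (both assumptions; adjust to your setup):

# Hypothetical origin: use the address the browser sees, not the
# internal Docker hostname "php". Port 6001 is laravel-echo-server's default.
docker run -d --name echo \
  -p 6001:6001 \
  -e ECHO_ALLOW_ORIGIN=http://localhost:80 \
  mintopia/laravel-echo-server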

Related

wget resolves to a different IP than host

I have a shell script in which I use host to get the IP of the target site to update ufw and allow outbound traffic to that IP. However, when I make the subsequent wget call to the same base URL, it resolves to a different IP, and thus is blocked by ufw. Just to test, I tried pinging the URL, and it returned a different third IP.
We're blocking all outbound traffic by default in ufw, and only enable what we need to go out, so I need the script to update the correct IP so I can wget the content. The IP in each instance (host vs wget) is consistently the same, but they return different values with respect to each other, so I don't think it's simply a DNS issue. How do I get a consistent IP to update the firewall with, so that the subsequent wget request performs successfully? I disabled the firewall as a test, and was able to download from the URL successfully, so the issue is definitely in getting a consistent IP to point to.
HOSTNAME=<name of site to resolve>
LOGFILE=<logfile path>

# Resolve the current IP (first line of `host` output, fourth field)
Current_IP=$(host "$HOSTNAME" | head -n 1 | cut -d " " -f 4)
# this echoes the correct value
echo "$Current_IP"

if [ ! -f "$LOGFILE" ]; then
    /usr/sbin/ufw allow out from any to "$Current_IP"
    echo "$Current_IP" > "$LOGFILE"
    echo "New IP address found and logged" >> ./download.log
else
    Old_IP=$(cat "$LOGFILE")
    if [ "$Current_IP" = "$Old_IP" ]; then
        echo "IP address has not changed" >> ./download.log
    else
        /usr/sbin/ufw delete allow out from any to "$Old_IP"
        /usr/sbin/ufw allow out from any to "$Current_IP"
        echo "$Current_IP" > "$LOGFILE"
        echo "IP Address was updated in ufw" >> ./download.log
    fi
fi
After the script updates the firewall, a subsequent wget to HOSTNAME goes out to a different IP than the one just allowed.
Turns out the difference was "www.". I was resolving the host without www, while wget was requesting the www host, so the two resolved to different IPs for this particular site.
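To avoid that mismatch, resolve exactly the hostname wget will request before updating the firewall. A minimal sketch with placeholder hostnames (dig +short prints only the resolved addresses):

# Hypothetical hostnames - resolve the exact name wget will fetch
BASE_HOST=example.com
WWW_HOST=www.example.com

# The two names may point at different addresses
dig +short "$BASE_HOST"
dig +short "$WWW_HOST"

# Use the www record for the ufw rule if wget requests the www URL
Current_IP=$(dig +short "$WWW_HOST" | head -n 1)
echo "$Current_IP"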

How to get custom header in bash

I'm adding a custom header in Asp.Net app:
context.Response.Headers.Add("X-Date", DateTime.Now.ToString());
context.Response.Redirect(redirectUrl, false);
When I'm using Fiddler I can see the "X-Date" header in the response.
I need to receive it by using bash.
I tried curl -i https://my.site.com and also wget -O - -o /dev/null --save-headers https://my.site.com with no success.
In both cases I see just the regular headers like: Content-Type, Server, Date, etc...
How can I receive the "X-Date" header?
Thanks,
Lev
Protocol headers are different from file headers (just as an HTTP header and a TCP header are different). When you create a protocol header, you will need a server to resolve it and expose the associated environment variables. Example:
#!/bin/bash
# Apache - CGI: the first output line must be a Content-Type header,
# followed by a blank line, then the body
echo "Content-Type: text/plain"
echo ""
echo "$CONTENT_TYPE"
echo "$HTTP_ACCEPT"
echo "$SERVER_PROTOCOL"
When calling this script via the web, the response in my browser was:
text/html
text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
HTTP/1.1
What you're looking for are environment variables called $HTTP_ACCEPT, $CONTENT_TYPE, and maybe $SERVER_PROTOCOL too.
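For the original question, though, the custom header should be visible on the client side with curl alone. Since the ASP.NET code sets X-Date just before a redirect, the header travels on the 3xx response itself, so inspect the headers without following redirects. A minimal sketch (URL and header name taken from the question):

# -s silences progress, -D - dumps response headers to stdout,
# -o /dev/null discards the body; without -L the 3xx response is shown as-is
curl -s -D - -o /dev/null https://my.site.com | grep -i '^X-Date'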

How to use curl post method while the login info is required?

For example, say I want to issue a POST request to the server, but the website requires me to log in with a username and password first. How should I do these two operations?
If it requires a username and password submitted through the web page, you'd need to send what it expects for a user logging in, capture the cookies you get back, and then send those cookies with your POST. This can get involved if the login process spans multiple pages with redirects. curl can do this, but be prepared to spend some time on it.
To get the cookie being returned by the server, use curl -i to include headers. You can also add -L to automatically follow redirects (which you otherwise would have to do manually by retrieving the URI in the Location: field of an HTTP 301 or 302 response). Example:
curl -i -L stackoverflow.com > /tmp/so.html
grep -i 'Set-Cookie:' /tmp/so.html
Yields:
Set-Cookie: prov=31c24327-c0bf-474d-b504-fc97dc69ab61; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly
(Until you have the login flow worked out and know how you need to submit the requests, you'll need to inspect the rest of the headers to accommodate redirects, see if there are multiple cookies, etc.)
To submit a cookie, use curl -b:
curl -b "prov=31c24327-c0bf-474d-b504-fc97dc69ab61" [rest of curl command]
Be patient and good luck, and be sure to check the curl man page.
curl -u username:password -X POST --data "name1=value1&name2=value2" http://yourwebpage.com/
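The -u form above only works if the site uses HTTP Basic authentication. For a form-based login, curl's cookie jar (-c to save, -b to send back) keeps the two steps simple. A minimal sketch, assuming a hypothetical /login endpoint with user and pass form fields:

# Hypothetical endpoint and field names - match them to the real login form.
# Step 1: log in and save the session cookies to a jar file
curl -c cookies.txt -d "user=myname&pass=mypassword" http://yourwebpage.com/login

# Step 2: send the POST, replaying the saved cookies
curl -b cookies.txt -X POST --data "name1=value1&name2=value2" http://yourwebpage.com/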

Squid - Can I purge cache objects in squid-cache using url?

I am new to squid-cache. I am looking to purge objects using an HTTP URL.
http://$cacheuser$:$cachepassword$#$cache$:8081/CE/Delete/<protocol>/<machine-name>/<folder>/<file>
Will this work properly? Does Squid support this kind of purge through a URL?
Thanks.
I hosted a CGI script on the cache machine that listens for HTTP requests and executes squidclient.
#!/usr/bin/perl
use CGI qw(:standard);

# Read the URL to purge from the query string
$urltopurge = param("url");

# The CGI header must be printed before any body output
print header();
print "Trying to purge <b>$urltopurge</b><P>";
print "sending command <B>squidclient -v -m PURGE -h 172.24.133.181 -p 8081 $urltopurge</b> to proxy server<P><HR><b>Server Response:</b><P>";
$result = system("C:\\squid\\bin\\squidclient.exe -v -m PURGE -h 172.24.133.181 -p 8081 $urltopurge");
print $result;
print "<hr>";
print "purger.cgi - Praveen";

How do I execute an HTTP PUT in bash?

I'm sending requests to a third-party API. It says I must send an HTTP PUT to http://example.com/project?id=projectId
I tried doing this with PHP curl, but I'm not getting a response from the server. Maybe something is wrong with my code because I've never used PUT before. Is there a way for me to execute an HTTP PUT from bash command line? If so, what is the command?
With curl it would be something like
curl --request PUT --header "Content-Length: 0" http://website.com/project?id=1
but, as Mattias said, you'd probably want some data in the body too, so you'd also want to set the Content-Type and include the data (and the Content-Length would be larger).
If you really want to use only bash, it actually has some networking support.
echo -e "PUT /project?id=123 HTTP/1.1\r\nHost: website.com\r\n\r\n" > \
/dev/tcp/website.com/80
But I guess you also want to send some data in the body?
Like Mattias suggested, Bash can do the job without further tools. If you want to send data, you have to set at least the "Content-Length" header. With the variables "host", "port", "resource", and "data" defined, you can do an HTTP PUT with
echo -e "PUT /$resource HTTP/1.1\r\nHost: $host:$port\r\nContent-Length: ${#data}\r\n\r\n$data\r\n" > /dev/tcp/$host/$port
I tested this with a REST API and it works fine.
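If you also want to read the server's response, the same /dev/tcp device can be opened read-write on a file descriptor. A minimal sketch, assuming host, port, resource, and data are defined as above:

# Open a bidirectional TCP connection on file descriptor 3
exec 3<>/dev/tcp/"$host"/"$port"

# Send the PUT request with Content-Length, then read the full reply
printf 'PUT /%s HTTP/1.1\r\nHost: %s:%s\r\nContent-Length: %s\r\nConnection: close\r\n\r\n%s' \
    "$resource" "$host" "$port" "${#data}" "$data" >&3

# Print the response and close the descriptor
cat <&3
exec 3<&-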