Squid - Can I purge cache objects in squid-cache using a URL?

I am new to squid-cache. I am looking to purge objects using an HTTP URL, along these lines:
http://$cacheuser$:$cachepassword$@$cache$:8081/CE/Delete/<protocol>/<machine-name>/<folder>/<file>
Will this work properly? Does Squid support this kind of purge through a URL?
Thanks.

I have hosted a CGI script on the cache machine which listens for HTTP requests and executes squidclient:
#!/usr/bin/perl
use strict;
use CGI qw(:standard);

my $urltopurge = param("url");

# The HTTP header must be sent before any body output
print header();
print "Trying to purge <b>$urltopurge</b><P>";
print "sending command <B>squidclient -v -m PURGE -h 172.24.133.181 -p 8081 $urltopurge</b> to proxy server<P><HR><b>Server Response:</b><P>";
my $result = system("C:\\squid\\bin\\squidclient.exe -v -m PURGE -h 172.24.133.181 -p 8081 $urltopurge");
print $result;
print "<hr>";
print "purger.cgi - Praveen";

Related

wget resolves to a different IP than host

I have a shell script in which I use host to get the IP of the target site to update ufw and allow outbound traffic to that IP. However, when I make the subsequent wget call to the same base URL, it resolves to a different IP, and thus is blocked by ufw. Just to test, I tried pinging the URL, and it returned a different third IP.
We're blocking all outbound traffic by default in ufw, and only enable what we need to go out, so I need the script to update the correct IP so I can wget the content. The IP in each instance (host vs wget) is consistently the same, but they return different values with respect to each other, so I don't think it's simply a DNS issue. How do I get a consistent IP to update the firewall with, so that the subsequent wget request performs successfully? I disabled the firewall as a test, and was able to download from the URL successfully, so the issue is definitely in getting a consistent IP to point to.
HOSTNAME=<name of site to resolve>
LOGFILE=<logfile path>

Current_IP=$(host "$HOSTNAME" | head -n 1 | cut -d " " -f 4)
# this echoes the correct value
echo "$Current_IP"

if [ ! -f "$LOGFILE" ]; then
    /usr/sbin/ufw allow out from any to "$Current_IP"
    echo "$Current_IP" > "$LOGFILE"
    echo "New IP address found and logged" >> ./download.log
else
    Old_IP=$(cat "$LOGFILE")
    if [ "$Current_IP" = "$Old_IP" ]; then
        echo "IP address has not changed" >> ./download.log
    else
        /usr/sbin/ufw delete allow out from any to "$Old_IP"
        /usr/sbin/ufw allow out from any to "$Current_IP"
        echo "$Current_IP" > "$LOGFILE"
        echo "IP address was updated in ufw" >> ./download.log
    fi
fi
After that updates the firewall, a subsequent wget to HOSTNAME attempts to go out to a different IP than the one that was just added.
It turned out the difference was "www.": I was resolving the hostname without www, but calling wget with www, so the two resolved to different IPs for this particular site.
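One way to avoid this class of mismatch is to resolve exactly the hostname that wget will fetch. A minimal sketch, assuming a glibc system (the hostname is a placeholder; getent goes through the same libc resolver that wget uses):
# Resolve the exact hostname wget will use, www included (placeholder value)
wget_host="www.example.com"
Current_IP=$(getent ahosts "$wget_host" | awk 'NR==1 { print $1 }')
/usr/sbin/ufw allow out from any to "$Current_IP"
wget "https://$wget_host/"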

Send an HTTPS request to TLS1.0-only server in Alpine linux

I'm writing a simple web crawler inside a Docker Alpine image. However, I cannot send HTTPS requests to servers that only support TLS 1.0. How can I configure Alpine Linux to allow obsolete TLS versions?
I tried adding MinProtocol to /etc/ssl/openssl.cnf with no luck.
Example Dockerfile:
FROM node:12.0-alpine
RUN printf "[system_default_sect]\nMinProtocol = TLSv1.0\nCipherString = DEFAULT@SECLEVEL=1" >> /etc/ssl/openssl.cnf
CMD ["/usr/bin/wget", "https://www.restauracesalanda.cz/"]
When I build and run this container, I get
Connecting to www.restauracesalanda.cz (93.185.102.124:443)
ssl_client: www.restauracesalanda.cz: handshake failed: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
wget: error getting response: Connection reset by peer
I can reproduce your issue using the built-in busybox wget. However, using the "regular" wget works:
root@a:~# docker run --rm -it node:12.0-alpine /bin/ash
/ # wget -q https://www.restauracesalanda.cz/; echo $?
ssl_client: www.restauracesalanda.cz: handshake failed: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
wget: error getting response: Connection reset by peer
1
/ # apk add wget
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/1) Installing wget (1.20.3-r0)
Executing busybox-1.29.3-r10.trigger
OK: 7 MiB in 17 packages
/ # wget -q https://www.restauracesalanda.cz/; echo $?
0
/ #
I'm not sure, but maybe you should post an issue at https://bugs.alpinelinux.org
Putting this magic one-liner into my Dockerfile solved my issue, and I was able to use TLS 1.0:
RUN sed -i 's/MinProtocol = TLSv1.2/MinProtocol = TLSv1/' /etc/ssl/openssl.cnf \
 && sed -i 's/CipherString = DEFAULT@SECLEVEL=2/CipherString = DEFAULT@SECLEVEL=1/' /etc/ssl/openssl.cnf
Credit goes to this dude: http://blog.travisgosselin.com/tls-1-0-1-1-docker-container-support/
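To confirm that the server really only speaks TLS 1.0, you can probe it with openssl s_client; a sketch, assuming the openssl package is installed and the MinProtocol tweak above has been applied:
# A forced TLS 1.0 handshake should succeed and print the certificate chain
openssl s_client -connect www.restauracesalanda.cz:443 -tls1 < /dev/null
# A forced TLS 1.2 handshake should fail against a TLS 1.0-only server
openssl s_client -connect www.restauracesalanda.cz:443 -tls1_2 < /dev/null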

Use Laravel Echo with docker (CORS Problem)

I want to use Laravel Echo in the following way:
I have two docker containers, one for laravel (php) and one for the socket server (https://hub.docker.com/r/mintopia/laravel-echo-server).
Now I have the problem, that Laravel Echo can't connect to the server, because of CORS.
I already found one option for the echo server, so I added ECHO_ALLOW_ORIGIN=http://php:80 to the environment variables. Unfortunately, this changed nothing.
Can someone please tell me how to fix this?
I use k1sliy/laravel-echo-server, but the locations/commands should be similar.
You share a directory containing your TLS/SSL cert and laravel-echo-server.json, or mount just those files themselves. For example, I start mine with something like this (note: I think my port is non-standard for Echo because I need one that Cloudflare will proxy):
docker run -d --name echo \
-p 8443:8443 \
-v YOURPATH/laravel-echo-server.json:/app/laravel-echo-server.json \
-v YOURPATH/privkey.pem:/app/privkey.pem \
-v YOURPATH/cert.pem:/app/cert.pem k1sliy/laravel-echo-server
You'll want to edit the laravel-echo-server.json file and make sure it has this in it (where YOUR_ORIGIN_HERE is the orgin you want to allow) and destroy and recreate the docker container to force it to reread the config:
"apiOriginAllow": {
"allowCors": true,
"allowOrigin": " YOUR_ORIGIN_HERE ",
"allowMethods": "OPTIONS, GET, POST",
"allowHeaders": "Origin, Content-Type, X-Auth-Token, X-Requested-With, Accept, Authorization, X-CSRF-TOKEN, X-Socket-Id"
}
The origin is the origin as the host/client browser sees it. php is likely the hostname inside the Docker network, mapped to the private 172 network, which isn't what you want. You want whatever you type into the address bar of the browser (without protocol) to access the site, likely 127.0.0.1, localhost, or 192.168.X.X, followed by a colon and the port (likely 80 or 443; you can also use * for the port to allow any port to talk to the echo server).
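To check that the echo server actually returns the CORS headers you configured, you can send a preflight request by hand; a sketch, where the host, port, and origin are assumptions based on the setup above:
# Hypothetical preflight check: the response headers should echo your allowed origin and methods
curl -ik -X OPTIONS \
  -H "Origin: http://localhost" \
  -H "Access-Control-Request-Method: GET" \
  "https://localhost:8443/socket.io/?EIO=3&transport=polling"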

OpenLDAP as a Proxy cache only, no local database

I am trying to get a local LDAP proxy cache running. The idea is this:
Currently a computer (A) is sending all ldap requests to a remote ldap server (L)
Instead of that, there should be a proxy cache "server" running on A to act as an intermediate between A and L. The cache would store all queries and all their attributes (until it is filled up and then it starts "recycling").
OpenLDAP's Proxy Cache Engine looks pretty good, but there is not much information about how to set it up. There is an example config file, but I cannot get it to work.
When connected to the internet, running this command will successfully bind me.
ldapwhoami -vvv -h localhost -D "CN=Melka Martin,OU=something,OU=else,(...),DC=int,DC=somedomain,DC=com" -x -w <passwd>
However, each following request will still poll the remote LDAP server (as shown by sniffing the connection; when the machine is disconnected from the internet, the local bind fails).
There is a lot of output from slapd, but the relevant lines are:
56449abd QUERY NOT ANSWERABLE
56449abd QUERY CACHEABLE
This is the current config file, which should cache all the bind requests
database ldap
suffix "dc=int,dc=somedomain,dc=com"
rootdn "cn=admin,dc=int,dc=somedomain,dc=com"
rootpw <something>
uri ldap://dc-04.int.somedomain.com:389
overlay pcache
pcache hdb 100000 1 1000 100
pcacheAttrset 0 *
pcacheTemplate (sn=) 0 3600
pcacheBind (sn=) 0 3600 sub dc=int,dc=somedomain,dc=com
cachesize 200
directory /var/lib/ldap
index objectClass eq
index cn eq,sub
I have created the /var/lib/ldap directory, added a default DB_CONFIG file in there and then edited the slapd.conf file. If there are more things to do to set it up properly, could you instruct me?
I am a little confused about the rootdn/rootpw directives. They are used to write into the remote LDAP server, correct?
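To see whether a query is actually served from the cache, one test is to run a search matching the configured pcacheTemplate twice and watch (via sniffing or the slapd log) whether the second run still contacts the remote server. A sketch using the base DN from the config above (the sn value is a placeholder):
ldapsearch -x -h localhost -p 389 -b "dc=int,dc=somedomain,dc=com" "(sn=Melka)" cn sn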
Edit: Below here is the original issue, which was resolved by using the full proper DN.
As this is supposed to only be a proxy cache, I shouldn't need to set up a local database. So the config file looks like this:
include /etc/openldap/schema/core.schema
include /etc/openldap/schema/cosine.schema
include /etc/openldap/schema/inetorgperson.schema
include /etc/openldap/schema/nis.schema
moduleload pcache.la
database ldap
suffix "dc=int,dc=somedomain,dc=com"
rootdn "dc=int,dc=somedomain,dc=com"
uri ldap://dc-04.int.somedomain.com:389
overlay pcache
pcache hdb 100000 1 1000 100
pcacheAttrset 0 *
pcacheTemplate (sn=) 0 3600
cachesize 20
directory /var/lib/ldap
index objectClass eq
index cn eq,sub
Now I would expect that any request to ldap://localhost would mirror to the remote LDAP, if not in the cache.
I use this command to test the auth on the remote server:
ldapwhoami -vvv -h dc-04.int.somedomain.com -p 389 -D melka@somedomain.com -x -w <passwd>
Which works well, I get the auth.
However, when I try to run the same command on localhost:
ldapwhoami -vvv -h localhost -p 389 -D melka@somedomain.com -x -w <passwd>
It fails, saying
ldap_initialize( ldap://localhost:389 )
ldap_bind: Invalid DN syntax (34)
additional info: invalid DN
Slapd is listening on localhost, netstat contains this line:
tcp 0 0 0.0.0.0:389 0.0.0.0:* LISTEN 10352/slapd
Is there something I am missing?
Thanks
melka@somedomain.com
That may be a DN in the target LDAP system, who knows, but it certainly isn't in OpenLDAP. You need to provide a proper Distinguished Name.
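For example, binding with a full DN in the style of the one that worked above (the OU components are placeholders from the question; substitute the ones from your directory):
ldapwhoami -vvv -h localhost -p 389 \
  -D "CN=Melka Martin,OU=something,OU=else,DC=int,DC=somedomain,DC=com" \
  -x -w <passwd>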

How can I update jenkins plugins from the terminal?

I am trying to create a bash script for setting up Jenkins. Is there any way to update a plugin list from the Jenkins terminal?
At first setup there are no plugins available in the list, i.e.:
java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin dry
won't work.
A simple but working way is to first list all installed plugins, look for updates, and install them.
java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ list-plugins
Each plugin which has an update available has the new version in parentheses at the end, so you can grep for those:
java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ list-plugins | grep -e ')$' | awk '{ print $1 }'
If you call install-plugin with the plugin name, it is automatically upgraded to the latest version.
Finally you have to restart jenkins.
Putting it all together (can be placed in a shell script):
UPDATE_LIST=$( java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ list-plugins | grep -e ')$' | awk '{ print $1 }' );
if [ ! -z "${UPDATE_LIST}" ]; then
    echo Updating Jenkins Plugins: ${UPDATE_LIST};
    java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ install-plugin ${UPDATE_LIST};
    java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ safe-restart;
fi
You can actually install plugins from the computer terminal (rather than the Jenkins terminal).
Download the plugin from the plugin site (http://updates.jenkins-ci.org/download/plugins)
Copy that plugin into the $JENKINS_HOME/plugins directory
At that point either start Jenkins or call the reload settings service (http://yourservername:8080/jenkins/reload)
This will enable the plugin in Jenkins, assuming that Jenkins has been started.
cd $JENKINS_HOME/plugins
curl -O http://updates.jenkins-ci.org/download/plugins/cobertura.hpi
curl http://yourservername:8080/reload
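If the plain /reload endpoint is protected in your setup, the same effect can be achieved through the CLI; a sketch, assuming jenkins-cli.jar has already been downloaded from your server:
java -jar jenkins-cli.jar -s http://yourservername:8080/ reload-configuration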
Here is how you can deploy Jenkins CI plugins using Ansible, which of course is used from the terminal. This code is a part of roles/jenkins_ci/tasks/main.yaml:
- name: Plugins
  with_items:                                # PLUGIN NAME
    - name: checkstyle                       # Checkstyle
    - name: dashboard-view                   # Dashboard View
    - name: dependency-check-jenkins-plugin  # OWASP Dependency Check
    - name: depgraph-view                    # Dependency Graph View
    - name: deploy                           # Deploy
    - name: emotional-jenkins-plugin         # Emotional Jenkins
    - name: monitoring                       # Monitoring
    - name: publish-over-ssh                 # Publish Over SSH
    - name: shelve-project-plugin            # Shelve Project
    - name: token-macro                      # Token Macro
    - name: zapper                           # OWASP Zed Attack Proxy (ZAP)
  sudo: yes
  get_url: dest="{{ jenkins_home }}/plugins/{{ item.name | mandatory }}.jpi"
           url="https://updates.jenkins-ci.org/latest/{{ item.name }}.hpi"
           owner=jenkins group=jenkins mode=0644
  notify: Restart Jenkins
This is a part of a more complete example that you can find at:
https://github.com/sakaal/service_platform_ansible/blob/master/roles/jenkins_ci/tasks/main.yaml
Feel free to adapt it to your needs.
You can update plugins list with this command line
curl -s -L http://updates.jenkins-ci.org/update-center.json | sed '1d;$d' | curl -s -X POST -H 'Accept: application/json' -d @- http://localhost:8080/updateCenter/byId/default/postBack
FYI -- some plugins (mercurial in particular) don't install correctly from the command line unless you use their short name. I think this has to do with triggers in the jenkins package info data. You can simulate jenkins' own package update by visiting 127.0.0.1:8080/pluginManager/checkUpdates in a javascript-capable browser.
Or if you're feeling masochistic you can run this python code:
import urllib2, requests, json

UPDATES_URL = 'https://updates.jenkins-ci.org/update-center.json?id=default&version=1.509.4'
PREFIX = 'http://127.0.0.1:8080'

def update_plugins():
    "look at the source for /pluginManager/checkUpdates and downloadManager in /static/<whatever>/scripts/hudson-behavior.js"
    raw = urllib2.urlopen(UPDATES_URL).read()
    jsontext = raw.split('\n')[1]  # ugh, JSONP
    json.loads(jsontext)  # i.e. error if not parseable
    print 'received updates json'
    # post the update-center data back to Jenkins
    postback = PREFIX + '/updateCenter/byId/default/postBack'
    reply = requests.post(postback, data=jsontext)
    if not reply.ok:
        raise RuntimeError(("updates upload not ok", reply.text))
    print 'applied updates json'
And once you've run this, you should be able to run jenkins-cli -s http://127.0.0.1:8080 install-plugin mercurial -deploy.
With a current Jenkins version, the CLI can be used via SSH. This has to be enabled on the "Global Security Settings" page in the administration interface, as described in the docs. Furthermore, the user who triggers the updates must add their public SSH key.
With the modified shell script from the accepted answer, this can be automated as follows; you just have to replace HOSTNAME and USERNAME:
#!/bin/bash
jenkins_host=HOSTNAME #e.g. jenkins.example.com
jenkins_user=USERNAME
jenkins_port=$(curl -s --head https://$jenkins_host/login | grep -oP "^X-SSH-Endpoint: $jenkins_host:\K[0-9]{4,5}")
function jenkins_cli {
    ssh -o StrictHostKeyChecking=no -l "$jenkins_user" -p "$jenkins_port" "$jenkins_host" "$@"
}

UPDATE_LIST=$( jenkins_cli list-plugins | grep -e ')$' | awk '{ print $1 }' );
if [ ! -z "${UPDATE_LIST}" ]; then
    echo Updating Jenkins Plugins: ${UPDATE_LIST};
    jenkins_cli install-plugin ${UPDATE_LIST};
    jenkins_cli safe-restart;
else
    echo "No updates available"
fi
This greps the SSH port used by the Jenkins CLI and then connects via SSH without checking the host key, as the key changes with every Jenkins restart.
Then all plugins with an update available are upgraded and afterwards Jenkins is restarted.
In groovy
The groovy path has one big advantage: it can be added to a 'system groovy script' build step in a job without any change.
Create the file 'update_plugins.groovy' with this content:
jenkins.model.Jenkins.getInstance().getUpdateCenter().getSites().each { site ->
    site.updateDirectlyNow(hudson.model.DownloadService.signatureCheck)
}
hudson.model.DownloadService.Downloadable.all().each { downloadable ->
    downloadable.updateNow();
}

def plugins = jenkins.model.Jenkins.instance.pluginManager.activePlugins.findAll {
    it -> it.hasUpdate()
}.collect {
    it -> it.getShortName()
}
println "Plugins to upgrade: ${plugins}"

long count = 0
jenkins.model.Jenkins.instance.pluginManager.install(plugins, false).each { f ->
    f.get()
    println "${++count}/${plugins.size()}.."
}

if(plugins.size() != 0 && count == plugins.size()) {
    println "restarting Jenkins..."
    jenkins.model.Jenkins.instance.safeRestart()
}
Then execute this curl command:
curl --user 'username:token' --data-urlencode "script=$(< ./update_plugins.groovy)" https://jenkins_server/scriptText
