Send an HTTPS request to a TLS 1.0-only server in Alpine Linux

I'm writing a simple web crawler inside a Docker Alpine image. However, I cannot send HTTPS requests to servers that support only TLS 1.0. How can I configure Alpine Linux to allow obsolete TLS versions?
I tried adding MinProtocol to /etc/ssl/openssl.cnf with no luck.
Example Dockerfile:
FROM node:12.0-alpine
RUN printf "[system_default_sect]\nMinProtocol = TLSv1.0\nCipherString = DEFAULT@SECLEVEL=1" >> /etc/ssl/openssl.cnf
CMD ["/usr/bin/wget", "https://www.restauracesalanda.cz/"]
When I build and run this container, I get
Connecting to www.restauracesalanda.cz (93.185.102.124:443)
ssl_client: www.restauracesalanda.cz: handshake failed: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
wget: error getting response: Connection reset by peer

I can reproduce your issue using the built-in busybox wget. However, using the "regular" wget works:
root@a:~# docker run --rm -it node:12.0-alpine /bin/ash
/ # wget -q https://www.restauracesalanda.cz/; echo $?
ssl_client: www.restauracesalanda.cz: handshake failed: error:1425F102:SSL routines:ssl_choose_client_version:unsupported protocol
wget: error getting response: Connection reset by peer
1
/ # apk add wget
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/main/x86_64/APKINDEX.tar.gz
fetch http://dl-cdn.alpinelinux.org/alpine/v3.9/community/x86_64/APKINDEX.tar.gz
(1/1) Installing wget (1.20.3-r0)
Executing busybox-1.29.3-r10.trigger
OK: 7 MiB in 17 packages
/ # wget -q https://www.restauracesalanda.cz/; echo $?
0
/ #
I'm not sure, but maybe you should post an issue at https://bugs.alpinelinux.org
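Based on the transcript above, a non-interactive version of the same workaround would look roughly like this (a sketch only; the package name and test URL are taken from the output above, nothing else is verified):
# install GNU wget so the busybox ssl_client applet is not used, then retry the request
docker run --rm node:12.0-alpine /bin/ash -c \
  'apk add --no-cache -q wget && wget -q https://www.restauracesalanda.cz/ && echo "fetched OK"'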

Putting this magic one-liner into my Dockerfile solved my issue, and I was able to use TLS 1.0:
RUN sed -i 's/MinProtocol = TLSv1.2/MinProtocol = TLSv1/' /etc/ssl/openssl.cnf \
 && sed -i 's/CipherString = DEFAULT@SECLEVEL=2/CipherString = DEFAULT@SECLEVEL=1/' /etc/ssl/openssl.cnf
Credit goes to this dude: http://blog.travisgosselin.com/tls-1-0-1-1-docker-container-support/
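A quick way to sanity-check the edit from inside the container (a sketch; it assumes the image's /etc/ssl/openssl.cnf already contains MinProtocol = TLSv1.2 and CipherString = DEFAULT@SECLEVEL=2 lines for the sed patterns to match):
# confirm both settings were rewritten, then retry the original request
grep -E 'MinProtocol|CipherString' /etc/ssl/openssl.cnf
wget -q -O /dev/null https://www.restauracesalanda.cz/ && echo "TLS 1.0 handshake OK"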

Related

Mailgun attach file Gitlab CI

I'm trying to send a CSV file (a GitLab CI artifact) over Mailgun.
Regular mail works well, but when I add an attachment it fails with this error:
curl: (26) Failed to open/read local data from file/application
My yaml file:
artifact:
  paths:
    - report_folder/result.csv
send_email:
  script: curl --user "api:$Mailgun_API_KEY"
    "https://api.mailgun.net/v3/$Mailgun_domain/messages"
    -F from='Gitlab <gitlab@example.com>'
    -F to=xxx@mail.com
    -F subject='test'
    -F text='hello form mailgun'
    -F attachment='@report_folder/result.csv'
I guess something is wrong with the file path in the last line, but I've tried different combinations and nothing works so far.
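No answer was posted here, but one likely cause (an assumption, not confirmed in the thread) is that curl cannot find the file: error 26 means the path after @ could not be opened. Note that the GitLab CI keyword is artifacts: (plural), and the sending job must run in a later stage so the artifact is fetched into its working directory. A quick way to check, using the paths from the question:
# list the artifact directory before calling curl; if it is missing, the artifact
# was never fetched (e.g. because of the artifact:/artifacts: typo or a missing stage)
ls -l report_folder/ || echo "report_folder missing - artifact not available in this job"
test -f report_folder/result.csv && curl --user "api:$Mailgun_API_KEY" \
  "https://api.mailgun.net/v3/$Mailgun_domain/messages" \
  -F from='Gitlab <gitlab@example.com>' \
  -F to=xxx@mail.com \
  -F subject='test' \
  -F text='hello from mailgun' \
  -F attachment='@report_folder/result.csv'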

"wget" command question from Blobtool tutorial

I am following a tutorial (https://blobtoolkit.genomehubs.org/install/), step "2. Fetch the nt database".
The first part, mkdir -p nt, I am already done with.
The second part is:
wget "ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt.??.tar.gz" -P nt/ && \
for file in nt/*.tar.gz; \
do tar xf $file -C nt && rm $file; \
done
If I copy and paste the second command, it doesn't work. Maybe because I am not sure what
&& \
for file in nt/*.tar.gz; \
do tar xf $file -C nt && rm $file; \
done
means, so I tried using
wget "ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt/*.tar.gz"
first, but I received these error messages:
Resolving ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)... 130.14.250.13, 2607:f220:41e:250::13, 2607:f220:41e:250::11, ...
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|130.14.250.13|:21... failed: Connection refused.
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::13|:21... failed: Network is unreachable.
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::11|:21... failed: Network is unreachable.
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::10|:21... failed: Network is unreachable.
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::12|:21... failed: Network is unreachable.
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::7|:21... failed: Network is unreachable.
Any idea what the problem is? How do I adjust the second command to download the database? Please let me know, thank you.
Wildcards are not supported in HTTP, so a URL like http://ftp.ncbi.nlm.nih.gov/blast/db/nt/*.tar.gz will not work.
The host looks like an FTP server; you shouldn't be requesting it over HTTP. It should be wget ftp://ftp.ncbi.... instead.
I can't seem to find where in the tutorial you linked they have wget http://ftp... The command before the one you referenced (2. Fetch the nt database) is a curl command and uses ftp.
Perhaps edit the question with where in the docs it tells you to do what you did, and I can look closer.
Edit:
First try this: wget "ftp://ftp.ncbi.nlm.nih.gov". It's a simpler command. It should tell you that you logged in as anonymous.
Given more info in the question, I tried both the commands given.
The first one worked for me out of the box. I got the following output:
wget "ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt.??.tar.gz" -P nt/ && \ for file in nt/*.tar.gz; \ do tar xf $file -C nt && rm $file; \ done
--2020-11-15 13:16:30-- ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt.??.tar.gz
=> ‘nt/.listing’
Resolving ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)... 2607:f220:41e:250::13, 2607:f220:41e:250::10, 2607:f220:41e:250::11, ...
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::13|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD (1) /blast/db ... done.
==> EPSV ... done. ==> LIST ... done.
.listing [ <=> ] 43.51K 224KB/s in 0.2s
2020-11-15 13:16:32 (224 KB/s) - ‘nt/.listing’ saved [44552]
Removed ‘nt/.listing’.
--2020-11-15 13:16:32-- ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt.00.tar.gz
=> ‘nt/nt.00.tar.gz’
==> CWD not required.
==> EPSV ... done. ==> RETR nt.00.tar.gz ... done.
Length: 3937869770 (3.7G)
nt.00.tar.gz 3%[ ] 133.87M 10.2MB/s eta 8m 31s
The second one seemed to also work. Probably a typo in the file path somewhere, but nothing big.
wget "ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt/*.tar.gz"
--2020-11-15 13:17:14-- ftp://ftp.ncbi.nlm.nih.gov/blast/db/nt/*.tar.gz
=> ‘.listing’
Resolving ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)... 2607:f220:41e:250::10, 2607:f220:41e:250::11, 2607:f220:41e:250::7, ...
Connecting to ftp.ncbi.nlm.nih.gov (ftp.ncbi.nlm.nih.gov)|2607:f220:41e:250::10|:21... connected.
Logging in as anonymous ... Logged in!
==> SYST ... done. ==> PWD ... done.
==> TYPE I ... done. ==> CWD (1) /blast/db/nt ...
No such directory ‘blast/db/nt’.
About && and \: those are just shell syntax. && chains commands, running the next one only if the previous one succeeded. A \ at the end of a line escapes the newline, so you can continue a command on the next line without the shell treating it as you pressing enter.
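A tiny illustration of both (any POSIX shell; the echo text is just a placeholder):
# '&&' runs the second command only if the first one succeeded:
mkdir -p nt && echo "nt/ is ready"
# a trailing backslash escapes the newline, so these two lines form one command:
echo "still one" \
     "single command"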
Neither of these are the root of your problem.
The errors you're getting seem to have nothing to do with the actual commands and more to do with the network. Perhaps you're behind a firewall or a proxy. I would try the commands on a different Wi-Fi network, or, if you know how to change the firewall settings on your router (I don't), try fiddling around with that.

Magento Turpentine authentication error

I have installed a module called Turpentine in my Magento (1.9.1) store to cache pages with Varnish, but I got some issues with it.
After the installation I opened the file /etc/varnish/secret and copied the secret key, then went back to the backend and pasted this key in; however, on saving, the system shows me the following message:
Failed to apply the VCL to 127.0.0.1:6082: Got unexpected response code from Varnish: 107 ftfavpxpdqciyfzwuwtddrefouwffsdl Authentication required.
Reading the module documentation (https://github.com/nexcess/magento-turpentine/wiki/Configuration), I checked that the key contains a line break; it is then suggested to put a \n at the end of the key in the backend.
When trying to add the \n to the key in the backend, the system displays the following message:
Failed to apply the VCL to 127.0.0.1:6082: Varnish data to write over length limit by 122 characters
Varnish esi_syntax param is not set correctly, please see these instructions to fix this warning.
The key was:
b6736327-be5e-4b52-a05a-875ea9271424
and looked like this:
b6736327-be5e-4b52-a05a-875ea9271424\n
Try this: edit the Varnish defaults file
sudo nano /etc/default/varnish
and set DAEMON_OPTS like this:
DAEMON_OPTS="
-p cli_buffer=16384 \
-a :80 \
-T localhost:6082 \
-f /etc/varnish/default.vcl \
-S /etc/varnish/secret \
-s malloc,256m"

Squid - Can I purge cache objects in squid-cache using url?

I am new to squid-cache. I am looking to purge objects using an HTTP URL.
http://$cacheuser$:$cachepassword$#$cache$:8081/CE/Delete/<protocol>/<machine-name>/<folder>/<file>
Will this work properly? Does Squid support this kind of purge through a URL?
Thanks.
I have hosted a CGI script on the cache machine which listens for HTTP requests and executes squidclient:
#!/usr/bin/perl
use CGI qw(:standard);

# URL to purge is passed to the script as the "url" query parameter
my $urltopurge = param("url");

# the HTTP header has to be printed before any page content
print header();
print "Trying to purge <b>$urltopurge</b><P>";
print "sending command <B>squidclient -v -m PURGE -h 172.24.133.181 -p 8081 $urltopurge</b> to proxy server<P><HR><b>Server Response:</b><P>";

# run squidclient against the proxy and print its exit status
my $result = system("C:\\squid\\bin\\squidclient.exe -v -m PURGE -p 8081 $urltopurge");
print $result;
print "<hr>";
print "purger.cgi - Praveen";

How can I update jenkins plugins from the terminal?

I am trying to create a bash script for setting up Jenkins. Is there any way to update a plugin list from the Jenkins terminal?
At first setup there are no plugins available in the list, i.e.:
java -jar jenkins-cli.jar -s http://localhost:8080 install-plugin dry
won't work
A simple but working way is first to list all installed plugins, look for updates and install them.
java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ list-plugins
Each plugin which has an update available has the new version in brackets at the end, so you can grep for those:
java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ list-plugins | grep -e ')$' | awk '{ print $1 }'
If you call install-plugin with the plugin name, it is automatically upgraded to the latest version.
Finally, you have to restart Jenkins.
Putting it all together (can be placed in a shell script):
UPDATE_LIST=$( java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ list-plugins | grep -e ')$' | awk '{ print $1 }' );
if [ ! -z "${UPDATE_LIST}" ]; then
echo Updating Jenkins Plugins: ${UPDATE_LIST};
java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ install-plugin ${UPDATE_LIST};
java -jar /root/jenkins-cli.jar -s http://127.0.0.1:8080/ safe-restart;
fi
You can actually install plugins from the computer terminal (rather than the Jenkins terminal):
1. Download the plugin from the plugin site (http://updates.jenkins-ci.org/download/plugins)
2. Copy that plugin into the $JENKINS_HOME/plugins directory
3. Either start Jenkins or call the reload-settings service (http://yourservername:8080/jenkins/reload)
This will enable the plugin in Jenkins, assuming Jenkins is started. For example:
cd $JENKINS_HOME/plugins
curl -O http://updates.jenkins-ci.org/download/plugins/cobertura.hpi
curl http://yourservername:8080/reload
Here is how you can deploy Jenkins CI plugins using Ansible, which of course is used from the terminal. This code is a part of roles/jenkins_ci/tasks/main.yaml:
- name: Plugins
  with_items:                               # PLUGIN NAME
    - name: checkstyle                      # Checkstyle
    - name: dashboard-view                  # Dashboard View
    - name: dependency-check-jenkins-plugin # OWASP Dependency Check
    - name: depgraph-view                   # Dependency Graph View
    - name: deploy                          # Deploy
    - name: emotional-jenkins-plugin        # Emotional Jenkins
    - name: monitoring                      # Monitoring
    - name: publish-over-ssh                # Publish Over SSH
    - name: shelve-project-plugin           # Shelve Project
    - name: token-macro                     # Token Macro
    - name: zapper                          # OWASP Zed Attack Proxy (ZAP)
  sudo: yes
  get_url: dest="{{ jenkins_home }}/plugins/{{ item.name | mandatory }}.jpi"
           url="https://updates.jenkins-ci.org/latest/{{ item.name }}.hpi"
           owner=jenkins group=jenkins mode=0644
  notify: Restart Jenkins
This is a part of a more complete example that you can find at:
https://github.com/sakaal/service_platform_ansible/blob/master/roles/jenkins_ci/tasks/main.yaml
Feel free to adapt it to your needs.
You can update the plugin list with this command line:
curl -s -L http://updates.jenkins-ci.org/update-center.json | sed '1d;$d' | curl -s -X POST -H 'Accept: application/json' -d @- http://localhost:8080/updateCenter/byId/default/postBack
FYI -- some plugins (mercurial in particular) don't install correctly from the command line unless you use their short name. I think this has to do with triggers in the jenkins package info data. You can simulate jenkins' own package update by visiting 127.0.0.1:8080/pluginManager/checkUpdates in a javascript-capable browser.
Or if you're feeling masochistic you can run this python code:
import json
import urllib2
import requests

UPDATES_URL = 'https://updates.jenkins-ci.org/update-center.json?id=default&version=1.509.4'
PREFIX = 'http://127.0.0.1:8080'

def update_plugins():
    "look at the source for /pluginManager/checkUpdates and downloadManager in /static/<whatever>/scripts/hudson-behavior.js"
    raw = urllib2.urlopen(UPDATES_URL).read()
    jsontext = raw.split('\n')[1]  # ugh, JSONP
    json.loads(jsontext)           # i.e. error if not parseable
    print 'received updates json'
    # post the update-center metadata back to Jenkins
    postback = PREFIX + '/updateCenter/byId/default/postBack'
    reply = requests.post(postback, data=jsontext)
    if not reply.ok:
        raise RuntimeError(("updates upload not ok", reply.text))
    print 'applied updates json'

if __name__ == '__main__':
    update_plugins()
And once you've run this, you should be able to run jenkins-cli -s http://127.0.0.1:8080 install-plugin mercurial -deploy.
With a current Jenkins version, the CLI can be used via SSH. This has to be enabled on the "Global Security Settings" page in the administration interface, as described in the docs. Furthermore, the user who triggers the updates must add their public SSH key.
With the modified shell script from the accepted answer, this can be automatized as follows, you just have to replace HOSTNAME and USERNAME:
#!/bin/bash
jenkins_host=HOSTNAME #e.g. jenkins.example.com
jenkins_user=USERNAME
jenkins_port=$(curl -s --head https://$jenkins_host/login | grep -oP "^X-SSH-Endpoint: $jenkins_host:\K[0-9]{4,5}")
function jenkins_cli {
ssh -o StrictHostKeyChecking=no -l "$jenkins_user" -p $jenkins_port "$jenkins_host" "$@"
}
UPDATE_LIST=$( jenkins_cli list-plugins | grep -e ')$' | awk '{ print $1 }' );
if [ ! -z "${UPDATE_LIST}" ]; then
echo Updating Jenkins Plugins: ${UPDATE_LIST};
jenkins_cli install-plugin ${UPDATE_LIST};
jenkins_cli safe-restart;
else
echo "No updates available"
fi
This greps the used SSH port of the Jenkins CLI and then connects via SSH without checking the host key, as it changes for every Jenkins restart.
Then all plugins with an update available are upgraded and afterwards Jenkins is restarted.
In Groovy
The Groovy path has one big advantage: it can be added to a 'system groovy script' build step in a job without any change.
Create the file 'update_plugins.groovy' with this content:
jenkins.model.Jenkins.getInstance().getUpdateCenter().getSites().each { site ->
site.updateDirectlyNow(hudson.model.DownloadService.signatureCheck)
}
hudson.model.DownloadService.Downloadable.all().each { downloadable ->
downloadable.updateNow();
}
def plugins = jenkins.model.Jenkins.instance.pluginManager.activePlugins.findAll {
it -> it.hasUpdate()
}.collect {
it -> it.getShortName()
}
println "Plugins to upgrade: ${plugins}"
long count = 0
jenkins.model.Jenkins.instance.pluginManager.install(plugins, false).each { f ->
f.get()
println "${++count}/${plugins.size()}.."
}
if(plugins.size() != 0 && count == plugins.size()) {
println "restarting Jenkins..."
jenkins.model.Jenkins.instance.safeRestart()
}
Then execute this curl command:
curl --user 'username:token' --data-urlencode "script=$(< ./update_plugins.groovy)" https://jenkins_server/scriptText
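If you prefer the CLI over the /scriptText endpoint, the same file can also be piped into the CLI's groovy command (a sketch; it assumes jenkins-cli.jar and an API token are already set up):
java -jar jenkins-cli.jar -s https://jenkins_server -auth username:token groovy = < ./update_plugins.groovy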
