I need to change my Varnish param cli_buffer to a bigger value than the default one (8192).
In the github thread https://github.com/nexcess/magento-turpentine/issues/136
they've already mentioned the following way to do it:
start Varnish instance with "-p cli_buffer 10000"
I tried the following command, but I can't change it:
varnishd -p cli_buffer=10000
I guess I need to use vcl.inline, but I'm not sure how to do that (I'm a developer and just a beginner with server stuff like this).
I have sudo access to start and stop Varnish and to change the Varnish config.
Can you help me with how to do this?
Thanks,
Jerome
OK, got it.
To change Varnish params, do the following.
This assumes you're logged in to a shell as a superuser and have permission to change the Varnish configuration and to start and stop the Varnish services.
Use the varnishadm command:
varnishadm
After that, change the param as shown below:
varnish> param.set cli_buffer 10000
To verify it's changed, use the command below:
param.show cli_buffer
You are done!
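If you prefer a one-shot command, varnishadm also accepts the command as arguments; a minimal sketch with the same values as above, keeping in mind that a param.set made this way only affects the running varnishd and is lost when it restarts:
varnishadm param.set cli_buffer 10000
varnishadm param.show cli_buffer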
It seems cli_buffer needs to be added to
/lib/systemd/system/varnish.service
like this:
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p feature=+esi_ignore_other_elements -p vcc_allow_inline_c=on -p cli_buffer=16384 -s malloc,256m
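If you'd rather not edit the packaged unit file under /lib/systemd/system directly, a hedged alternative is a systemd drop-in override; the ExecStart below simply reuses the values from this thread, so adjust it to your setup:
# Run "sudo systemctl edit varnish" and add an override like this.
# The empty ExecStart= line clears the packaged value; the second line
# repeats it with the extra -p cli_buffer=16384 parameter.
[Service]
ExecStart=
ExecStart=/usr/sbin/varnishd -j unix,user=vcache -F -a :80 -T localhost:6082 -f /etc/varnish/default.vcl -S /etc/varnish/secret -p feature=+esi_ignore_other_elements -p vcc_allow_inline_c=on -p cli_buffer=16384 -s malloc,256m
# Then reload systemd and restart Varnish:
# sudo systemctl daemon-reload && sudo systemctl restart varnish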
So you can't directly update a single item; you must get the entire config group it's associated with.
What I have done is:
# read the tag of the target config I want
curl -u $USERNAME:$PASSWORD -H "X-Requested-By: ambari" -X GET $BASE_URI?fields=Clusters/desired_configs > .temp_json
# download my configs
curl -u $USERNAME:$PASSWORD -H "X-Requested-By: ambari" -X GET "$BASE_URI/configurations?type=$CONFIG_TYPE&tag=$TARGET_TAG" > .configs_to_update
# update configs here > UPDATED_FILE_HERE
# ??? (upload the configs)
The next step is to upload the configs to the server and then restart the services. I can't seem to figure out the API call to upload the configs. Does anyone know how I can upload the configs with the Ambari REST API?
I am not sure if this helps your situation, but check out this command I use to make an adjustment to a single config:
python /var/lib/ambari-server/resources/scripts/configs.py -u admin -p admin -n HDP3 -l c7404.ambari.apache.org -t 8080 -a set -c cluster-env -k ignore_groupsusers_create -v true
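If you want the raw REST call instead of the helper script, my understanding (hedged: this follows the shape of the Ambari v1 API, and the tag value is just an assumption you'd generate yourself) is that you PUT a new desired_config to the cluster resource, sending the full edited property set from the config you downloaded, not just the changed key:
# $BASE_URI is the same cluster URL used above, e.g. .../api/v1/clusters/<cluster-name>.
# $UPDATED_PROPERTIES is assumed to hold the full, edited "properties" JSON object
# taken from .configs_to_update (the whole group, not just the changed key).
curl -u $USERNAME:$PASSWORD -H "X-Requested-By: ambari" -X PUT \
  -d '{"Clusters":{"desired_config":{"type":"'"$CONFIG_TYPE"'","tag":"version'"$(date +%s)"'","properties":'"$UPDATED_PROPERTIES"'}}}' \
  "$BASE_URI"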
#!/bin/bash
# Ping Google's public DNS once to check whether we are already online.
if ping -q -c 1 -W 1 8.8.8.8 >/dev/null; then
    echo "The network is up"
else
    echo "The network is down"
    # Not online yet: fetch a plain-HTTP site so the hotspot redirects us
    # to its landing page and (re)triggers the login.
    wget "nonhttpssite.com" --no-check-certificate --keep-session-cookies --no-cache --timeout 30 -O - 2>/dev/null
fi
I tried this code, but it still doesn't log in automatically after the internet disconnects; I'm running it from crontab every minute (see the sketch below).
I'm still confused about the wget line: the login page is http://landing6.wifi.id/ but it keeps appending another unique URL, for example: http://landing6.wifi.id/landing/?NG94RktRQ3drZ05SbEZqOW5yenZ1ZmtrUU8xQnRLcnorSmtVNnJhQWFpL1RMRkErVDRjd3U5Q0tJRGFwa05leDBCZ0g5VWExZlRUOFBQNXVkY0E1dUFzcVkzbWxHM0lQd2JKZVJua3NkaU5lRCtwcUhPZHI2V2kyN3JaNExSKzhQVnNYN1RTMXNyT1VUZENVeU5zMG9pcjlEdHRUa0o2T3Rab0FhZERoajhYWTFVc2RtWG9CRzJWSnYzOWhOa0h6VktqNnJKL0pSbWVlTS9NK1FabW5Wdz09
Since my MAC is already bypassed, I only need to open a non-HTTPS site to be forwarded to the landing page, so there's no need to POST the user/password data.
I will run this script on OpenWrt.
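For the crontab part, a minimal entry like the one below should be enough; /root/wifi-login.sh is just an assumed path for the script above (on OpenWrt you add it with crontab -e and make sure the cron service is enabled):
# Run the check every minute and discard output.
* * * * * /root/wifi-login.sh >/dev/null 2>&1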
I have a shell script that periodically checks the ADSL external IP address and sends it to my email if it has changed.
#! /bin/sh
NEWIP=`/usr/bin/curl ifconfig.me`
OLDIP=`cat ./current`
logger "$NEWIP ... $OLDIP"
if [ "$NEWIP" != "$OLDIP" ]; then
TIME=`/bin/date`
/usr/bin/sendEmail -v -f ip_watcher@xxxoo.com \
-s smtp.gmail.com:587 -xu ip_watcher@xxxoo.com -xp xxxxxx \
-t xxx@xxxxx.com \
-o tls=yes \
-u "$NEWIP" \
-m "$NEWIP $TIME" -a
/bin/echo "$NEWIP" > ./current
logger "IP of bjserver1 has changed ..."
else
logger "New IP is the SAME with old. not sending ..."
fi
This works perfectly when I run it from the command line, but after I put it into cron, NEWIP and OLDIP are always the same. I don't know why; can anybody help?
What is ./current?
You are not using an absolute path in the script, so the file will be read from and written to whatever directory the script happens to run in. You should use an absolute path.
The only other significant difference between cron and a command-line run is the user under whose account the script is executed. Make sure the account (if it's not root) has sufficient privileges to do what you're asking it to do.
Or, better yet, use an established dynamic DNS client so you don't need to be concerned with external hostnames. You do realize you're relying on that web site to be both honest and up, right?
At the start of the script, you should change directory to the correct one (as a guess), or use an absolute path.
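A minimal sketch of that fix, assuming the current file should live next to the script (the cd line is the only addition; the rest stays as in the original):
#! /bin/sh
# Resolve ./current relative to the script's own directory,
# so a cron run behaves the same as an interactive run.
cd "$(dirname "$0")" || exit 1
NEWIP=`/usr/bin/curl ifconfig.me`
OLDIP=`cat ./current`
# ... rest of the script unchanged ...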
I have a PHP page that runs at a local URI on a local nginx server. It is called like this:
http://example.dev/index.php?v=var
Is it possible to call this php page from inside a Bash script in order to make it run just like I do by typing the uri in Firefox?
I tried to access the script directly from the CLI:
php /home/public_html/example.dev/index.php
but it didn't work (it looks like PHP running under FastCGI and PHP-CLI behave somewhat differently).
Any ideas?
Try GNU Wget
wget http://example.dev/index.php?v=var
or cURL
curl http://example.dev/index.php?v=var
to run it like a browser would.
Note: But this is not CLI in any way.
php -f <path-to-file>
php can output whatever you tell it to. It doesn't have to be HTML.
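One gap with plain php -f is that $_GET['v'] won't be populated. If the php-cgi binary happens to be installed, a hedged workaround is to run the script through it with a CGI-style environment (REDIRECT_STATUS is needed because of the default cgi.force_redirect setting); treat this as a sketch, not a guaranteed recipe:
# Run the page through php-cgi with the query string set so $_GET is filled in.
REDIRECT_STATUS=true QUERY_STRING="v=var" php-cgi -f /home/public_html/example.dev/index.php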
You can use a bash script to call a URI by fetching it with a program like curl:
curl -s 'http://example.dev/index.php?v=var' > /dev/null
…or you can be a little more hands-on and use nc (the request needs the right query string and a Host header so nginx picks the correct server block):
printf 'GET /index.php?v=var HTTP/1.0\r\nHost: example.dev\r\n\r\n' | nc example.dev 80
I want to setup a simple ssh tunnel from a local machine to a machine on the internet.
I'm using
ssh -D 8080 -f -C -q -N -p 12122 <username>@<hostname>
Setup works fine (I think), because ssh comes back asking for the credentials, which I provide.
Then I do
export http_proxy=http://localhost:8080
and
wget http://www.google.com
Wget returns that the request has been sent to the proxy, but no data is received back.
What I need is a way to look at how ssh is processing the request.
To get more information out of your SSH connection for debugging, leave out the -q and -f options, and include -vvv:
ssh -D 8080 -vvv -N -p 12122 <username>@<hostname>
To address your actual problem: by using ssh -D you're essentially setting up a SOCKS proxy, which I believe is not supported by default in wget.
You might have better luck with curl, which provides SOCKS support via its --socks5 option.
If you really really need to use wget, you'll have to recompile your own version to include socks support. There should be an option for ./configure somewhere along the lines of --with-socks.
Alternatively, look into tsocks, which can intercept outgoing network connections and redirect them through a SOCKS server.
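As a quick sanity check of the tunnel itself, curl can talk SOCKS directly to the listener that ssh -D opened (a sketch; the port matches the -D 8080 from the question), bypassing the http_proxy variable entirely:
# --socks5-hostname also resolves DNS through the tunnel.
curl --socks5-hostname localhost:8080 http://www.google.com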