Homestead pass parameters to after.sh for xdebug autoconfigure - shell

I put the following in after.sh to autoconfigure Xdebug for the project:
#!/bin/sh
echo "Configuring Xdebug"
ip=$(netstat -rn | grep "^0.0.0.0 " | cut -d " " -f10)
xdebug_config="/etc/php/$(php -v | head -n 1 | awk '{print $2}'|cut -c 1-3)/mods-available/xdebug.ini"
echo "IP for the xdebug to connect back: ${ip}"
echo "Xdebug Configuration path: ${xdebug_config}"
echo "Port for the Xdebug to connect back: ${XDEBUG_PORT}"
echo "Optimize for ${IDE} ide"
if [ $IDE=='atom' ]; then
    echo "Configuring xdebug for ATOM ide"
    if [ -z ${xdebug_config} ]; then
        sudo touch ${xdebug_config}
    fi
    sudo cat <<EOL >${xdebug_config}
zend_extension = xdebug.so
xdebug.remote_enable = 1
xdebug.remote_host=${ip}
xdebug.remote_port = ${XDEBUG_PORT}
xdebug.max_nesting_level = 1000
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_autostart=true
xdebug.remote_log=xdebug.log
EOL
fi
Also, I have the following settings in Homestead.yaml:
ip: 192.168.10.10
memory: 2048
cpus: 1
provider: virtualbox
authorize: ~/.ssh/id_rsa.pub
timeout: 120
keys:
    - ~/.ssh/id_rsa
folders:
    -
        map: /home/pcmagas/Kwdikas/php/apps/ellakcy_member_app/
        to: /home/vagrant/code
sites:
    -
        map: homestead.test
        to: /home/vagrant/code/web
        type: symfony
databases:
    - homestead
    - homestead-test
variables:
    - key: database_host
      value: 127.0.0.1
    - key: database_port
      value: 3306
    - key: database_name
      value: homestead
    - key: database_user
      value: homestead
    - key: database_password
      value: secret
    - key: smtp_host
      value: localhost
    - key: smtp_port
      value: 1025
    - key: smtp_user
      value: no-reply@example.com
    - key: IDE
      value: atom
    - key: XDEBUG_PORT
      value: 9091
name: ellakcy-member-app
hostname: ellakcy-member-app
But for some reason it cannot read the values of the environment variables defined in Homestead.yaml, as seen in the following output:
ellakcy-member-app: IP for the xdebug to connect back: 10.0.2.2
ellakcy-member-app: Xdebug Configuration path: /etc/php/7.2/mods-available/xdebug.ini
ellakcy-member-app: Port for the Xdebug to connect back:
ellakcy-member-app: Optimize for ide
ellakcy-member-app: Configuring xdebug for ATOM ide
As you can see, it fails to read the values of IDE and XDEBUG_PORT. Do you know why, and how I can fix that?

You can create a parse_yaml.sh:
#!/bin/sh
parse_yaml() {
    local prefix=$2
    local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
    sed -ne "s|^\($s\)\($w\)$s:$s\"\(.*\)\"$s\$|\1$fs\2$fs\3|p" \
        -e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $1 |
    awk -F$fs '{
        indent = length($1)/2;
        vname[indent] = $2;
        for (i in vname) {if (i > indent) {delete vname[i]}}
        if (length($3) > 0) {
            vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
            printf("%s%s%s=\"%s\"\n", "'$prefix'", vn, $2, $3);
        }
    }'
}
And in after.sh:
#!/bin/sh
# include parse_yaml function
. parse_yaml.sh
# read yaml file
eval $(parse_yaml zconfig.yml "config_")
# access yaml content
echo $config_development_database
thanks -> https://gist.github.com/pkuczynski/8665367
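For illustration, here is a minimal self-contained run of that function (the zconfig.yml name comes from the snippet above; the keys inside it are made up for the example):
cat > zconfig.yml <<'EOF'
development:
  database: homestead
  port: 3306
EOF
. parse_yaml.sh
eval $(parse_yaml zconfig.yml "config_")
echo $config_development_database   # prints: homestead
echo $config_development_port       # prints: 3306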

In my case, I took the approach of having a file named xdebug.conf containing everything that the default Xdebug ini needs to be rewritten with:
zend_extension = xdebug.so
xdebug.remote_enable = 1
xdebug.remote_host = $ip
xdebug.remote_port = 9091
xdebug.max_nesting_level = 1000
xdebug.remote_handler=dbgp
xdebug.remote_mode=req
xdebug.remote_autostart=true
xdebug.remote_log=xdebug.log
The $ip placeholder gets auto-replaced with the correct IP that Xdebug should connect back to. The script that actually updates the Xdebug configuration with the appropriate values is this one, in my after.sh:
#!/bin/sh
code_path="/home/vagrant/code"
cd $code_path

# Some other bootstrapping

echo "Configuring Xdebug"
ip=$(netstat -rn | grep "^0.0.0.0 " | cut -d " " -f10)
xdebug_config="/etc/php/$(php -v | head -n 1 | awk '{print $2}'|cut -c 1-3)/mods-available/xdebug.ini"
echo "Xdebug config file ${xdebug_config}"

if [ -f "${code_path}/xdebug.conf" ]; then
    echo "Specifying the ip with ${ip}"
    sed "s/\$ip/${ip}/g" xdebug.conf > xdebug.conf.tmp
    echo "Moving Into ${xdebug_config}"
    cat xdebug.conf.tmp
    sudo cp ./xdebug.conf.tmp ${xdebug_config}
else
    echo "File not found"
fi
The last step is to .gitignore any xdebug.conf* file, so each developer has to create their own xdebug.conf.
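For example, a single .gitignore pattern covers both the template and the generated temp file:
# keep per-developer Xdebug settings out of version control
xdebug.conf*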


GitLab CI rules not working with extends and individual rules

Below are two jobs in the build stage.
By default there is a common condition set via the extends keyword: .ifawsdeploy.
Only one of them should run: if the variable $ADMIN_SERVER_IP is provided, then connect_admin_server should run, and that part works.
If no value is provided for $ADMIN_SERVER_IP, then create_admin_server should run, but it is not running.
.ifawsdeploy:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

variables:
  TEST_CREATE_ADMIN:
    #value: aws
    description: "Platform, currently aws only"
  SUB_PLATFORM:
    value: aws
    description: "Platform, currently aws only"
  REGION:
    value: "us-west-2"
    description: "region where to deploy company"
  PACKAGEURL:
    value: "http://somerpmurl.x86_64.rpm"
    description: "company rpm file url"
  ACCOUNT_NAME:
    value: "testsubaccount"
    description: "Account name of sub account to refer in the deployment, no need to match in AWS"
  ROLE_ARN:
    value: "arn:aws:iam::491483064167:role/uat"
    description: "ROLE ARN of the user account assuming: aws sts get-caller-identity"
  tfenv_version: "1.1.9"
  DEV_PUB_KEY:
    description: "Optional public key file to add access to admin server"
  ADMIN_SERVER_IP:
    description: "Existing Admin Server IP Address"
  ADMIN_SERVER_SSH_KEY:
    description: "Existing Admin Server SSH_KEY PEM content"

#export variables below will cause the terraform to use the root account instead of the one specified in tfvars file
.configure_aws_cli: &configure_aws_cli
  - aws configure set region $REGION
  - aws configure set aws_access_key_id $AWS_FULL_STS_ACCESS_KEY_ID
  - aws configure set aws_secret_access_key $AWS_FULL_STS_ACCESS_KEY_SECRET
  - aws sts get-caller-identity
  - aws configure set source_profile default --profile $ACCOUNT_NAME
  - aws configure set role_arn $ROLE_ARN --profile $ACCOUNT_NAME
  - aws sts get-caller-identity --profile $ACCOUNT_NAME
  - aws configure set region $REGION --profile $ACCOUNT_NAME

.copy_remote_log: &copy_remote_log
  - if [ -e outfile ]; then rm outfile; fi
  - copy_command="$(cat $CI_PROJECT_DIR/scp_command.txt)"
  - new_copy_command=${copy_command/"%s"/"outfile"}
  - new_copy_command=${new_copy_command/"~"/"/home/ec2-user/outfile"}
  - echo $new_copy_command
  - new_copy_command=$(echo "$new_copy_command" | sed s'/\([^.]*\.[^ ]*\) \([^ ]*\) \(.*\)/\1 \3 \2/')
  - echo $new_copy_command
  - sleep 10
  - eval $new_copy_command

.check_remote_log: &check_remote_log
  - sleep 10
  - grep Error outfile || true
  - sleep 10
  - returnCode=$(grep -c Error outfile) || true
  - echo "Return code received $returnCode"
  - if [ $returnCode -ge 1 ]; then exit 1; fi
  - echo "No errors"

.prepare_ssh_key: &prepare_ssh_key
  - echo $ADMIN_SERVER_SSH_KEY > $CI_PROJECT_DIR/ssh_key.pem
  - cat ssh_key.pem
  - sed -i -e 's/-----BEGIN RSA PRIVATE KEY-----/-bk-/g' ssh_key.pem
  - sed -i -e 's/-----END RSA PRIVATE KEY-----/-ek-/g' ssh_key.pem
  - perl -p -i -e 's/\s/\n/g' ssh_key.pem
  - sed -i -e 's/-bk-/-----BEGIN RSA PRIVATE KEY-----/g' ssh_key.pem
  - sed -i -e 's/-ek-/-----END RSA PRIVATE KEY-----/g' ssh_key.pem
  - cat ssh_key.pem
  - chmod 400 ssh_key.pem

connect-admin-server:
  stage: build
  allow_failure: true
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP && $ADMIN_SERVER_IP != "" && $ADMIN_SERVER_SSH_KEY && $ADMIN_SERVER_SSH_KEY != ""'
  extends:
    - .ifawsdeploy
  script:
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - *prepare_ssh_key
    - echo "ssh -i ssh_key.pem ec2-user@${ADMIN_SERVER_IP}" > $CI_PROJECT_DIR/ssh_command.txt
    - echo "scp -q -i ssh_key.pem %s ec2-user@${ADMIN_SERVER_IP}:~" > $CI_PROJECT_DIR/scp_command.txt
    - test_pre_command="$(cat "$CI_PROJECT_DIR/ssh_command.txt") -o StrictHostKeyChecking=no"
    - echo $test_pre_command
    - test_command="$(echo $test_pre_command | sed -r 's/(ssh )(.*)/\1-tt \2/')"
    - echo $test_command
    - echo "sudo yum install -yq $PACKAGEURL 2>&1 | tee outfile ; exit 0" | $test_command
    - *copy_remote_log
    - echo "Now checking log file for returnCode"
    - *check_remote_log
  artifacts:
    untracked: true
    when: always
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
  after_script:
    - cat $CI_PROJECT_DIR/ssh_key.pem
    - cat $CI_PROJECT_DIR/ssh_command.txt
    - cat $CI_PROJECT_DIR/scp_command.txt

create-admin-server:
  stage: build
  allow_failure: false
  image:
    name: amazon/aws-cli:latest
    entrypoint: [ "" ]
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
  extends:
    - .ifawsdeploy
  script:
    - echo "admin server $ADMIN_SERVER_IP"
    - TF_IN_AUTOMATION=true
    - yum update -y
    - yum install git unzip gettext jq -y
    - *configure_aws_cli
    - aws sts get-caller-identity --profile $ACCOUNT_NAME #to check whether updated correctly or not
    - git clone "https://project-n-setup:$(echo $PERSONAL_GITLAB_TOKEN)@gitlab.com/company-oss/project-n-setup.git"
    # Install tfenv
    - git clone https://github.com/tfutils/tfenv.git ~/.tfenv
    - ln -s ~/.tfenv /root/.tfenv
    - ln -s ~/.tfenv/bin/* /usr/local/bin
    # Install terraform 1.1.9 through tfenv
    - tfenv install $tfenv_version
    - tfenv use $tfenv_version
    # Copy the tfvars temp file to the terraform setup directory
    - cp .gitlab/admin_server.temp_tfvars project-n-setup/$SUB_PLATFORM/
    - cd project-n-setup/$SUB_PLATFORM/
    - envsubst < admin_server.temp_tfvars > admin_server.tfvars
    - rm -rf .terraform || exit 0
    - cat ~/.aws/config
    - terraform init -input=false
    - terraform apply -var-file=admin_server.tfvars -input=false -auto-approve
    - echo "Your admin server key and info are added as artifacts"
    # Copy the important terraform outputs to files for artifacts to pass into other jobs
    - terraform output -raw ssh_key > $CI_PROJECT_DIR/ssh_key.pem
    - terraform output -raw ssh_command > $CI_PROJECT_DIR/ssh_command.txt
    - terraform output -raw scp_command > $CI_PROJECT_DIR/scp_command.txt
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/terraform.tfstate $CI_PROJECT_DIR
    - cp $CI_PROJECT_DIR/project-n-setup/$SUB_PLATFORM/admin_server.tfvars $CI_PROJECT_DIR
  artifacts:
    untracked: true
    paths:
      - "$CI_PROJECT_DIR/ssh_key.pem"
      - "$CI_PROJECT_DIR/ssh_command.txt"
      - "$CI_PROJECT_DIR/scp_command.txt"
      - "$CI_PROJECT_DIR/terraform.tfstate"
      - "$CI_PROJECT_DIR/admin_server.tfvars"
How can I fix that?
I tried the step below, from suggestions in the comments section.
.generalgrabclustertrigger:
  rules:
    - if: '$TEST_CREATE_ADMIN && $REGION && $ROLE_ARN && $PACKAGEURL && $TEST_CREATE_ADMIN == "aws" && $SUB_PLATFORM == "aws" && $ROLE_ARN != "" && $PACKAGEURL != "" && $REGION != ""'

.ifteardownordestroy: # Automatic if triggered from gitlab api AND destroy variable is set
  rules:
    - !reference [.generalgrabclustertrigger, rules]
    - if: 'CI_PIPELINE_SOURCE == "triggered"'
      when: never
And I included the above in the extends of a job:
destroy-admin-server:
  stage: cleanup
  extends:
    - .ifteardownordestroy
  allow_failure: true
  interruptible: false
But I am getting a syntax error in the .ifteardownordestroy part:
jobs:destroy-admin-server:rules:rule if invalid expression syntax
You are overriding rules: in your job that extends .ifawsdeploy. rules: are not combined in this case -- the definition of rules: in the job takes complete precedence.
Take for example the following configuration:
.template:
  rules:
    - one
    - two

myjob:
  extends: .template
  rules:
    - a
    - b
In the above example, the myjob job only has rules a and b in effect. Rules one and two are completely ignored because they are overridden in the job configuration.
Instead of using extends:, you can use !reference to preserve and combine rules. You can also use YAML anchors if you want.
create-admin-server:
  rules:
    - !reference [.ifawsdeploy, rules]
    - ... # your additional rules
If no value provided to $ADMIN_SERVER_IP then create_admin_server should run
Lastly, pay special attention to your rules:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
In this case, there are no rules that allow the job to run ever. You either need a case that will evaluate true for the job to run, or to have a default case (an item with no if: condition) in order for the job to run.
To get the behavior you expect, you probably want your default case to be on_success:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: on_success
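Putting the two answers together, the job's rules could look something like this (a sketch, not a drop-in: rules evaluate top to bottom, so the never case comes first, then the shared conditions pulled in via !reference):
create-admin-server:
  rules:
    - if: '$ADMIN_SERVER_IP != ""'
      when: never
    - !reference [.ifawsdeploy, rules]
With this ordering the job is skipped whenever an admin IP is provided, and otherwise runs only when the shared AWS conditions match.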
You can change your rules to:
rules:
  - if: '$ADMIN_SERVER_IP != ""'
    when: never
  - when: always
or
rules:
  - if: '$ADMIN_SERVER_IP == ""'
    when: always
I have a sample here: try-rules-stackoverflow-72545625 - GitLab, and the pipeline records: Pipeline no value - GitLab, Pipeline has value - GitLab.

Insert bcrypt hash into specific line in file using bash

I need to update an application configuration by inserting a bcrypt hash into the file config.yml at specific lines.
echo -e "$Enter password for user1"
read -p ": " user1_pass
echo -e "$Enter password for user2"
read -p ": " user2_pass
user1_hash=$(htpasswd -bnBC 10 "" $user1_pass | tr -d ':\n')
user2_hash=$(htpasswd -bnBC 10 "" $user2_pass | tr -d ':\n')
$user1_hash should be placed in line 2 and $user2_hash in line 7.
config.yml
user1:
  hash: "$2y$12$shEKzuVfogdZFbbraSqhwOOh96hfxe1NzLQbpmHJvgDUeRfRrkf3a"
  reserved: "true"
  roles: "user"
user2:
  hash: "$2y$12$Fkc5GAp9Za5caIfHjBgNQ.jNEss0SJfCLTlm9EhAcjzPVy.kLriBa"
  reserved: "true"
  roles: "user"
What is the best approach to do that using bash?
You can edit the file using Ruby:
#!/usr/bin/ruby
require 'yaml'
obj = YAML.load_file('config.yml')
bcrypt_hash='$2y$12$shEKzuVfogdZFbbraSqhwOOh96hfxe1NzLQbpmHJvgDUeRfRrkf3a'
obj['user1']['hash'] = bcrypt_hash
obj['user2']['hash'] = bcrypt_hash
puts YAML.dump(obj)
This will output:
... More yml content
user1:
hash: "$2y$12$shEKzuVfogdZFbbraSqhwOOh96hfxe1NzLQbpmHJvgDUeRfRrkf3a"
user2:
hash: "$2y$12$shEKzuVfogdZFbbraSqhwOOh96hfxe1NzLQbpmHJvgDUeRfRrkf3a"
... More yml content
Hope it helps!
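If you want to stay in pure bash, a minimal sed-based sketch (assuming, as stated in the question, that the hashes belong on lines 2 and 7 and those lines already contain quoted hash: values):
# The | delimiter avoids clashing with the / characters inside bcrypt hashes;
# $ is not special on the replacement side of sed, so the hash passes through intact.
sed -i "2s|hash: \".*\"|hash: \"${user1_hash}\"|" config.yml
sed -i "7s|hash: \".*\"|hash: \"${user2_hash}\"|" config.yml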
Use echo to write into the file:
echo -e "$Enter password for user1"
read -p ": " user1_pass
echo -e "$Enter password for user2"
read -p ": " user2_pass
user1_hash=$(htpasswd -bnBC 10 "" $user1_pass | tr -d ':\n')
user2_hash=$(htpasswd -bnBC 10 "" $user2_pass | tr -d ':\n')
echo "user1:" > config.yml
echo " hash: " + $user1_hash >> config.yml
echo "user2:" >> config.yml
echo " hash: " + $user2_hash >> config.yml
the contents of the config.yml file being:
user1:
  hash: $2y$10$7S0fC4wTqAfm9ytJ5BquC.3KITsqLoqPXHyj3mzgXdvw10TRIybni
user2:
  hash: $2y$10$jcYMrOsdIzwR3AlSONNfCuc.B5AoGVV4i31KSsx0PLlpn17issJfe

I'm trying to unseal Vault using Ansible, but I'm getting a connection refused error

It worked a few days back; I even checked similar problems like here.
I tried to add the environment variables and everything; my HCL file also is not a problem as far as I know.
The HCL file is:
storage "file" {
path = "/home/***/vault/"
}
listener "tcp" {
address = "127.0.0.1:8200"
tls_disable = 1
}
My unseal.yml looks like this:
---
- name: Removing login and putting to another file
  shell: sed -n '7p' keys.txt > login.txt

- name: Remove all lines other than the keys
  shell: sed '6,$d' keys.txt > temp.txt

- name: Extracting the keys
  shell: cut -c15- temp.txt > unseal_keys.txt

- name: Deleting unnecessary files
  shell: rm temp.txt

- name: Unsealing the vault
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault operator unseal $(awk 'NR==1' unseal_keys.txt)

- name: Unsealing the vault
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault operator unseal $(awk 'NR==2' unseal_keys.txt)

- name: Unsealing the vault
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault operator unseal $(awk 'NR==3' unseal_keys.txt)
  register: check

- debug: var=check.stdout_lines

- name: Login
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  shell: vault login $(sed 's/Initial Root Token://; s/ //' login.txt)
  register: checkLogin

- debug: var=checkLogin.stdout_lines
My start-server.yml looks like this:
---
#- name: Disable mlock
#  shell: sudo setcap cap_ipc_lock=+ep $(readlink -f $(which vault))
#  shell: LimitMEMLOCK=infinity

- name: Start vault service
  systemd:
    state: started
    name: vault
    daemon_reload: yes
  environment:
    VAULT_ADDR: http://127.0.0.1:8200
  become: true

- pause:
    seconds: 15
This is the error shown:
fatal: [europa]: FAILED! => {"changed": true, "cmd": "vault operator unseal $(awk 'NR==1' unseal_keys.txt)", "delta": "0:00:00.049258", "end": "2019-09-17 12:25:48.987789", "msg": "non-zero return code", "rc": 2, "start": "2019-09-17 12:25:48.938531", "stderr": "Error unsealing: Put http://127.0.0.1:8200/v1/sys/unseal: dial tcp 127.0.0.1:8200: connect: connection refused", "stderr_lines": ["Error unsealing: Put http://127.0.0.1:8200/v1/sys/unseal: dial tcp 127.0.0.1:8200: connect: connection refused"], "stdout": "", "stdout_lines": []}
This is the main error:
"Error unsealing: Put http://127.0.0.1:8200/v1/sys/unseal: dial tcp 127.0.0.1:8200: connect: connection refused"
As it is showing connection refused, most probably your Vault service is not running.
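You can verify that first; for example (the service name vault is taken from your start-server.yml):
systemctl status vault
ss -tlnp | grep 8200                            # is anything listening on port 8200?
VAULT_ADDR=http://127.0.0.1:8200 vault status   # talks to the API directly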
Another thing I can suggest is to make a script named unseal_vault.sh and use that script to unseal your vault, instead of repeating the same tasks in your playbook.
Below is a script which I use in my setup to unseal vault.
#!/bin/bash
# Assumptions: vault is already initialized

# Fetching first three keys to unseal the vault
KEY_1=$(cat keys.log | grep 'Unseal Key 1' | awk '{print $4}')
KEY_2=$(cat keys.log | grep 'Unseal Key 2' | awk '{print $4}')
KEY_3=$(cat keys.log | grep 'Unseal Key 3' | awk '{print $4}')

# Unseal using first key
curl --silent -X PUT \
    http://192.*.*.*:8200/v1/sys/unseal \
    -H 'cache-control: no-cache' \
    -H 'content-type: application/json' \
    -d '{ "key": "'$KEY_1'" }'

# Unseal using second key
curl --silent -X PUT \
    http://192.*.*.*:8200/v1/sys/unseal \
    -H 'cache-control: no-cache' \
    -H 'content-type: application/json' \
    -d '{ "key": "'$KEY_2'" }'

# Unseal using third key
curl --silent -X PUT \
    http://192.*.*.*:8200/v1/sys/unseal \
    -H 'cache-control: no-cache' \
    -H 'content-type: application/json' \
    -d '{ "key": "'$KEY_3'" }'
And you can run this script using a single task in Ansible, for example:
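A minimal sketch of such a task (the script module copies unseal_vault.sh from the control machine to the target and runs it there; keys.log must be readable in the remote working directory):
- name: Unseal the vault
  script: unseal_vault.sh
  register: unseal_output

- debug: var=unseal_output.stdout_lines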

Auditbeat not picking up authentication events in CentOs 7

I am trying to ship the authentication-related logs of my CentOS 7 machine to Elasticsearch. Strangely, I am not getting any authentication events.
When I ran the debug command auditbeat -c auditbeat.conf -d -e "*", I found something like the below:
{
  "@timestamp": "2019-01-15T11:54:37.246Z",
  "@metadata": {
    "beat": "auditbeat",
    "type": "doc",
    "version": "6.4.0"
  },
  "error": {
    "message": "failed to set audit PID. An audit process is already running (PID 68504)"
  },
  "beat": {
    "name": "env-cs-westus-devtest-66-csos-logs-es-master-0",
    "hostname": "env-cs-westus-devtest-66-csos-logs-es-master-0",
    "version": "6.4.0"
  },
  "host": {
    "name": "env-cs-westus-devtest-66-csos-logs-es-master-0"
  },
  "event": {
    "module": "auditd"
  }
}
Also, there was an error line like the one below:
Failure receiving audit events {"error": "failed to set audit PID. An audit process is already running (PID 68504)"}
Machine details
NAME="CentOS Linux"
VERSION="7 (Core)"
ID="centos"
ID_LIKE="rhel fedora"
VERSION_ID="7"
PRETTY_NAME="CentOS Linux 7 (Core)"
ANSI_COLOR="0;31"
CPE_NAME="cpe:/o:centos:centos:7"
HOME_URL="https://www.centos.org/"
BUG_REPORT_URL="https://bugs.centos.org/"
CENTOS_MANTISBT_PROJECT="CentOS-7"
CENTOS_MANTISBT_PROJECT_VERSION="7"
REDHAT_SUPPORT_PRODUCT="centos"
REDHAT_SUPPORT_PRODUCT_VERSION="7"
Auditbeat Configuration File
#================================ General ======================================
fields_under_root: False
queue:
  mem:
    events: 4096
    flush:
      min_events: 2048
      timeout: 1s
max_procs: 1
max_start_delay: 10s

#================================= Paths ======================================
path:
  home: "/usr/share/auditbeat"
  config: "/etc/auditbeat"
  data: "/var/lib/auditbeat"
  logs: "/var/log/auditbeat/auditbeat.log"

#============================ Config Reloading ================================
config:
  modules:
    path: ${path.config}/conf.d/*.yml
    reload:
      period: 10s
      enabled: False

#========================== Modules configuration =============================
auditbeat.modules:
#----------------------------- Auditd module -----------------------------------
- module: auditd
  resolve_ids: True
  failure_mode: silent
  backlog_limit: 8196
  rate_limit: 0
  include_raw_message: True
  include_warnings: True
  audit_rules: |
    -w /etc/group -p wa -k identity
    -w /etc/passwd -p wa -k identity
    -w /etc/gshadow -p wa -k identity
    -w /etc/shadow -p wa -k identity
    -w /etc/security/opasswd -p wa -k identity
    -a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EACCES -k access
    -a always,exit -F arch=b64 -S open,creat,truncate,ftruncate,openat,open_by_handle_at -F exit=-EPERM -k access
    -a always,exit -F dir=/home -F uid=0 -F auid>=1000 -F auid!=4294967295 -C auid!=obj_uid -F key=power-abuse
    -a always,exit -F arch=b64 -S setuid -F a0=0 -F exe=/usr/bin/su -F key=elevated-privs
    -a always,exit -F arch=b32 -S setuid -F a0=0 -F exe=/usr/bin/su -F key=elevated-privs
    -a always,exit -F arch=b64 -S setresuid -F a0=0 -F exe=/usr/bin/sudo -F key=elevated-privs
    -a always,exit -F arch=b32 -S setresuid -F a0=0 -F exe=/usr/bin/sudo -F key=elevated-privs
    -a always,exit -F arch=b64 -S execve -C uid!=euid -F euid=0 -F key=elevated-privs
    -a always,exit -F arch=b32 -S execve -C uid!=euid -F euid=0 -F key=elevated-privs

#----------------------------- File Integrity module -----------------------------------
- module: file_integrity
  paths:
    - /bin
    - /usr/bin
    - /sbin
    - /usr/sbin
    - /etc
    - /home/jenkins
  exclude_files:
    - (?i)\.sw[nop]$
    - ~$
    - /\.git($|/)
  scan_at_start: True
  scan_rate_per_sec: 50 MiB
  max_file_size: 100 MiB
  hash_types: [sha1]
  recursive: False

#================================ Outputs ======================================
#-------------------------- Elasticsearch output -------------------------------
output.elasticsearch:
  enabled: True
  hosts:
    - x.x.x:9200
  compression_level: 0
  protocol: "http"
  worker: 1
  bulk_max_size: 50
  timeout: 90

#================================ Logging ======================================
logging:
  level: "info"
  selectors: ["*"]
  to_syslog: False
  to_eventlog: False
  metrics:
    enabled: True
    period: 30s
  to_files: True
  files:
    path: /var/log/auditbeat
    name: "auditbeat"
    rotateeverybytes: 10485760
    keepfiles: 7
    permissions: 0600
  json: False
Version of Auditbeat
auditbeat version 6.4.0 (amd64), libbeat 6.4.0
Has anyone faced a similar issue and found a resolution?
Note: this configuration for Auditbeat successfully captures authentication events on Ubuntu.
So I posted the same in the Elastic Beats forum and got a solution. You can find it here.
As per their suggestion, turning off the auditd service allows audit events to be captured by Auditbeat. I tried it and it worked for me. But I am not sure of the implications of turning auditd off, so I might switch to a Filebeat-based solution.
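For reference, turning auditd off on CentOS 7 looks like the following (on RHEL/CentOS 7 the auditd unit refuses a manual stop through systemctl, hence the legacy service wrapper):
sudo service auditd stop
sudo systemctl disable auditd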

Minimal web server using netcat

I'm trying to set up a minimal web server using netcat (nc). When the browser calls up localhost:1500, for instance, it should show the result of a function (date in the example below, but eventually it'll be a python or c program that yields some data).
My little netcat web server needs to be a while true loop in bash, possibly as simple as this:
while true ; do echo -e "HTTP/1.1 200 OK\n\n $(date)" | nc -l -p 1500 ; done
When I try this, the browser shows the data that was available at the moment nc started. But I want the browser to show the data as of the moment it makes the request. How can I achieve this?
Try this:
while true ; do nc -l -p 1500 -c 'echo -e "HTTP/1.1 200 OK\n\n $(date)"'; done
The -c makes netcat execute the given command in a shell, so you can use echo. If you don't need echo, use -e. For further information on this, try man nc. Note that when using echo there is no way for your program (the date replacement) to get at the browser request. So you probably finally want to do something like this:
while true ; do nc -l -p 1500 -e /path/to/yourprogram ; done
Where yourprogram must do the protocol stuff like handling GET, sending HTTP 200 etc.
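A minimal sketch of what such a program could look like (the path and behavior are made up for illustration; it consumes the request headers, then speaks just enough HTTP):
#!/bin/bash
# /path/to/yourprogram (hypothetical)
# Read and discard request headers until the blank line that ends them.
while read -r line; do
    line=${line%$'\r'}
    [ -z "$line" ] && break
done
body="$(date)"
printf 'HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\nContent-Length: %s\r\n\r\n%s' "${#body}" "$body"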
I had the problem where I wanted to return the result of executing a bash command:
$ while true; do { echo -e 'HTTP/1.1 200 OK\r\n'; sh test; } | nc -l 8080; done
NOTE:
This command was taken from: http://www.razvantudorica.com/08/web-server-in-one-line-of-bash
This executes a bash script and returns the result to a browser client connecting to the server running this command on port 8080.
My script does this:
$ nano test
#!/bin/bash
echo "************PRINT SOME TEXT***************\n"
echo "Hello World!!!"
echo "\n"
echo "Resources:"
vmstat -S M
echo "\n"
echo "Addresses:"
echo "$(ifconfig)"
echo "\n"
echo "$(gpio readall)"
and my web browser is showing
************PRINT SOME TEXT***************
Hello World!!!
Resources:
procs -----------memory---------- ---swap-- -----io---- -system-- ----cpu----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa
 0  0      0    314     18     78    0    0     2     1  306   31  0  0 100 0
Addresses:
eth0 Link encap:Ethernet HWaddr b8:27:eb:86:e8:c5
inet addr:192.168.1.83 Bcast:192.168.1.255 Mask:255.255.255.0
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:27734 errors:0 dropped:0 overruns:0 frame:0
TX packets:26393 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:1924720 (1.8 MiB) TX bytes:3841998 (3.6 MiB)
lo Link encap:Local Loopback
inet addr:127.0.0.1 Mask:255.0.0.0
UP LOOPBACK RUNNING MTU:65536 Metric:1
RX packets:0 errors:0 dropped:0 overruns:0 frame:0
TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:0
RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)
GPIOs:
+----------+-Rev2-+------+--------+------+-------+
| wiringPi | GPIO | Phys | Name | Mode | Value |
+----------+------+------+--------+------+-------+
| 0 | 17 | 11 | GPIO 0 | IN | Low |
| 1 | 18 | 12 | GPIO 1 | IN | Low |
| 2 | 27 | 13 | GPIO 2 | IN | Low |
| 3 | 22 | 15 | GPIO 3 | IN | Low |
| 4 | 23 | 16 | GPIO 4 | IN | Low |
| 5 | 24 | 18 | GPIO 5 | IN | Low |
| 6 | 25 | 22 | GPIO 6 | IN | Low |
| 7 | 4 | 7 | GPIO 7 | IN | Low |
| 8 | 2 | 3 | SDA | IN | High |
| 9 | 3 | 5 | SCL | IN | High |
| 10 | 8 | 24 | CE0 | IN | Low |
| 11 | 7 | 26 | CE1 | IN | Low |
| 12 | 10 | 19 | MOSI | IN | Low |
| 13 | 9 | 21 | MISO | IN | Low |
| 14 | 11 | 23 | SCLK | IN | Low |
| 15 | 14 | 8 | TxD | ALT0 | High |
| 16 | 15 | 10 | RxD | ALT0 | High |
| 17 | 28 | 3 | GPIO 8 | ALT2 | Low |
| 18 | 29 | 4 | GPIO 9 | ALT2 | Low |
| 19 | 30 | 5 | GPIO10 | ALT2 | Low |
| 20 | 31 | 6 | GPIO11 | ALT2 | Low |
+----------+------+------+--------+------+-------+
Add -q 1 to the netcat command line:
while true; do
echo -e "HTTP/1.1 200 OK\n\n $(date)" | nc -l -p 1500 -q 1
done
The problem you are facing is that nc does not know when the web client has finished its request and it is therefore safe to respond.
A web session should go something like this.
TCP session is established.
Browser Request Header: GET / HTTP/1.1
Browser Request Header: Host: www.google.com
Browser Request Header: \n #Note: Browser is telling Webserver that the request header is complete.
Server Response Header: HTTP/1.1 200 OK
Server Response Header: Content-Type: text/html
Server Response Header: Content-Length: 24
Server Response Header: \n #Note: Webserver is telling browser that response header is complete
Server Message Body: <html>sample html</html>
Server Message Body: \n #Note: Webserver is telling the browser that the requested resource is finished.
The server closes the TCP session.
Lines that begin with "\n" are simply empty lines without even a space and contain nothing more than a new line character.
I have my bash httpd launched by xinetd (see any xinetd tutorial). It also logs the date, time, browser IP address, and the entire browser request to a log file, and it calculates the Content-Length for the server response header.
user@machine:/usr/local/bin# cat ./bash_httpd
#!/bin/bash
x=0;
Log=$( echo -n "["$(date "+%F %T %Z")"] $REMOTE_HOST ")$(
    while read I[$x] && [ ${#I[$x]} -gt 1 ]; do
        echo -n '"'${I[$x]} | sed -e's,.$,",'; let "x = $x + 1";
    done;
); echo $Log >> /var/log/bash_httpd
Message_Body=$(echo -en '<html>Sample html</html>')
echo -en "HTTP/1.0 200 OK\nContent-Type: text/html\nContent-Length: ${#Message_Body}\n\n$Message_Body"
To add more functionality, you could incorporate:
METHOD=$(echo ${I[0]} | cut -d" " -f1)
REQUEST=$(echo ${I[0]} | cut -d" " -f2)
HTTP_VERSION=$(echo ${I[0]} | cut -d" " -f3)

if [ "$METHOD" = "GET" ]; then
    case "$REQUEST" in
        "/") Message_Body="HTML formatted home page stuff"
            ;;
        /who) Message_Body="HTML formatted results of who"
            ;;
        /ps) Message_Body="HTML formatted results of ps"
            ;;
        *) Message_Body="Error Page not found header and content"
            ;;
    esac
fi
Happy bashing!
Another way to do this
while true; do (echo -e 'HTTP/1.1 200 OK\r\n'; echo -e "\n\tMy website has date function" ; echo -e "\t$(date)\n") | nc -lp 8080; done
Let's test it with 2 HTTP requests, using curl.
In this example, 172.16.2.6 is the server IP Address.
Server Side
admin#server:~$ while true; do (echo -e 'HTTP/1.1 200 OK\r\n'; echo -e "\n\tMy website has date function" ; echo -e "\t$(date)\n") | nc -lp 8080; done
GET / HTTP/1.1
Host: 172.16.2.6:8080
User-Agent: curl/7.48.0
Accept: */*

GET / HTTP/1.1
Host: 172.16.2.6:8080
User-Agent: curl/7.48.0
Accept: */*
Client Side
user#client:~$ curl 172.16.2.6:8080
My website has date function
Tue Jun 13 18:00:19 UTC 2017
user#client:~$ curl 172.16.2.6:8080
My website has date function
Tue Jun 13 18:00:24 UTC 2017
user#client:~$
If you want to execute another command, feel free to replace $(date).
I had the same need/problem, but nothing here worked for me (or I didn't understand everything), so this is my solution.
I post my minimal_http_server.sh (working with my /bin/bash (4.3.11) but not /bin/sh, because of the process-substitution redirection):
rm -f out
mkfifo out
trap "rm -f out" EXIT
while true
do
    cat out | nc -l 1500 > >( # parse the netcat output, to build the answer redirected to the pipe "out".
        export REQUEST=
        while read -r line
        do
            line=$(echo "$line" | tr -d '\r\n')
            if echo "$line" | grep -qE '^GET /' # if line starts with "GET /"
            then
                REQUEST=$(echo "$line" | cut -d ' ' -f2) # extract the request
            elif [ -z "$line" ] # empty line / end of request
            then
                # call a script here
                # Note: REQUEST is exported, so the script can parse it (to answer 200/403/404 status code + content)
                ./a_script.sh > out
            fi
        done
    )
done
And my a_script.sh (doing what you asked for):
#!/bin/bash
echo -e "HTTP/1.1 200 OK\r"
echo "Content-type: text/html"
echo
date
mkfifo pipe
while true
do
    # Reading from the pipe blocks until a request comes in;
    # this is the key.
    { read line < pipe
      echo -e "HTTP/1.1 200 OK\r\n"
      echo $(date)
    } | nc -l -q 0 -p 8080 > pipe
done
Here is a beauty of a little bash webserver. I found it online, forked a copy, and spruced it up a bit. It uses socat or netcat (I have tested it with socat); it is self-contained in one script and generates its own configuration file and favicon.
By default it starts up as a web-enabled file browser, yet it is easily configured via the configuration file for any logic. For files, it streams images, music (mp3), and video (mp4, avi, etc.). I have tested streaming various file types to Linux, Windows and Android devices, including a smartwatch!
I think it actually streams better than VLC. I have found it useful for transferring files to remote clients who have no access beyond a web browser (e.g. an Android smartwatch), without needing to worry about physically connecting to a USB port.
If you want to try it out, just copy and paste it to a file named bashttpd, then start it up on the host with $> bashttpd -s
Then you can go to any other computer (presuming the firewall is not blocking inbound TCP connections to port 8080 -- the default port; you can change the port to whatever you want using the global variables at the top of the script) and visit http://bashttpd_server_ip:8080
#!/usr/bin/env bash
#############################################################################
###########################################################################
### bashttpd v 1.12
###
### Original author: Avleen Vig, 2012
### Reworked by: Josh Cartwright, 2012
### Modified by: A.M.Danischewski, 2015
### Issues: If you find any issues leave me a comment at
### http://scriptsandoneliners.blogspot.com/2015/04/bashttpd-self-contained-bash-webserver.html
###
### This is a simple Bash based webserver. By default it will browse files and allows for
### retrieving binary files.
###
### It has been tested successfully to view and stream files including images, mp3s,
### mp4s and downloading files of any type including binary and compressed files via
### any web browser.
###
### Successfully tested on various browsers on Windows, Linux and Android devices (including the
### Android Smartwatch ZGPAX S8).
###
### It handles favicon requests by hardcoded favicon image -- by default a marathon
### runner; change it to whatever you want! By base64 encoding your favorit favicon
### and changing the global variable below this header.
###
### Make sure if you have a firewall it allows connections to the port you plan to
### listen on (8080 by default).
###
### By default this program will allow for the browsing of files from the
### computer where it is run.
###
### Make sure you are allowed connections to the port you plan to listen on
### (8080 by default). Then just drop it on a host machine (that has bash)
### and start it up like this:
###
### $192.168.1.101> bashttpd -s
###
### On the remote machine you should be able to browse and download files from the host
### server via any web browser by visiting:
###
### http://192.168.1.101:8080
###
#### This program requires (to work to full capacity) by default:
### socat or netcat (w/ '-e' option - on Ubuntu netcat-traditional)
### tree - useful for pretty directory listings
### If you are using socat, you can type: bashttpd -s
###
### to start listening on the LISTEN_PORT (default is 8080), you can change
### the port below.
### E.g. nc -lp 8080 -e ./bashttpd ## <-- If your nc has the -e option.
### E.g. nc.traditional -lp 8080 -e ./bashttpd
### E.g. bashttpd -s -or- socat TCP4-LISTEN:8080,fork EXEC:bashttpd
###
### Copyright (C) 2012, Avleen Vig <avleen@gmail.com>
###
### Permission is hereby granted, free of charge, to any person obtaining a copy of
### this software and associated documentation files (the "Software"), to deal in
### the Software without restriction, including without limitation the rights to
### use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
### the Software, and to permit persons to whom the Software is furnished to do so,
### subject to the following conditions:
###
### The above copyright notice and this permission notice shall be included in all
### copies or substantial portions of the Software.
###
### THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
### IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
### FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
### COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
### IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
### CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
###
###########################################################################
#############################################################################
### CHANGE THIS TO WHERE YOU WANT THE CONFIGURATION FILE TO RESIDE
declare -r BASHTTPD_CONF="/tmp/bashttpd.conf"
### CHANGE THIS IF YOU WOULD LIKE TO LISTEN ON A DIFFERENT PORT
declare -i LISTEN_PORT=8080
## If you are on AIX, IRIX, Solaris, or a hardened system redirecting
## to /dev/random will probably break, you can change it to /dev/null.
declare -a DUMP_DEV="/dev/random"
## Just base64 encode your favorite favicon and change this to whatever you want.
declare -r FAVICON="AAABAAEAEBAAAAEAIABoBAAAFgAAACgAAAAQAAAAIAAAAAEAIAAAAAAAAAQAAAAAAAAAAAAAAAAAAAAAAADg4+3/srjc/5KV2P+ortn/xMrj/6Ch1P+Vl9f/jIzc/3572f+CgNr/fnzP/3l01f+Ih9r/h4TZ/8fN4//P1Oj/3uPr/7O+1v+xu9X/u8XY/9bi6v+UmdD/XV26/3F1x/+GitT/VVXC/3x/x/+HjNT/lp3Z/6633f/E0eD/2ePr/+bt8v/U4+v/0uLp/9Xj6//Z5e3/oKbX/0pJt/9maML/cHLF/3p8x//T3+n/3Ofu/9vo7//W5Oz/0uHq/9zn7f/j6vD/1OLs/8/f6P/R4Oj/1OPr/7jA4f9KSbf/Skm3/3p/yf/U4ez/1ePq/9rn7//Z5e3/0uHp/87e5//a5Ov/5Ovw/9Hf6v/T4uv/1OLp/9bj6/+kq9r/Skq3/0pJt/+cotb/zdnp/9jl7f/a5u//1+Ts/9Pi6v/O3ub/2uXr/+bt8P/Q3un/0eDq/9bj7P/Z5u7/r7jd/0tKt/9NTLf/S0u2/8zW6v/c5+//2+fv/9bj6//S4un/zt3m/9zm7P/k7PD/1OPr/9Li7P/V5Oz/2OXt/9jl7v+HjM3/lZvT/0tKt/+6w+L/2ebu/9fk7P/V4+v/0uHq/83d5v/a5ev/5ezw/9Pi6v/U4+z/1eXs/9bj6//b5+//vsjj/1hYvP9JSLb/horM/9nk7P/X5e3/1eTs/9Pi6v/P3uf/2eXr/+Tr7//O3+n/0uLr/9Xk7P/Y5e3/w8/k/7XA3/9JR7f/SEe3/2lrw//G0OX/1uLr/9Xi7P/T4ev/0N/o/9zn7f/k7PD/zN3p/8rd5v/T4ur/1ePt/5We0/+0w9//SEe3/0pKt/9OTrf/p7HZ/7fD3//T4uv/0N/o/9Hg6f/d5+3/5ezw/9Li6//T4uv/2ubu/8PQ5f9+hsr/ucff/4eOzv+Ei8z/rLja/8zc6P/I1+b/0OLq/8/f6P/Q4Oj/3eft/+bs8f/R4On/0+Lq/9Tj6v/T4Ov/wM7h/9Df6f/M2uf/z97q/9Dg6f/Q4On/1OPr/9Tj6//S4ur/0ODp/93o7f/n7vH/0N/o/8/f5//P3+b/2OXt/9zo8P/c6fH/zdjn/7fB3/+3weD/1eLs/9nn7//V5Oz/0+Lr/9Pi6//e6O7/5u3x/9Pi6v/S4en/0uLp/9Tj6//W4+v/3Ojw/9rm7v9vccT/wcvm/9rn7//X5Oz/0uHq/9Hg6f/S4er/3uju/+bt8f/R4On/0uHp/9Xk6//Y5u7/1OTs/9bk7P/W5Ov/XFy9/2lrwf/a5+//1uPr/9Pi6v/U4er/0eHq/93o7v/v8vT/5ezw/+bt8f/o7vL/6e/z/+jv8v/p7/L/6e/y/9XZ6//IzOX/6e7y/+nv8v/o7vL/5+7x/+ft8f/r8PP/AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=="
declare -i DEBUG=1
declare -i VERBOSE=0
declare -a REQUEST_HEADERS
declare REQUEST_URI=""
declare -a HTTP_RESPONSE=(
[200]="OK"
[400]="Bad Request"
[403]="Forbidden"
[404]="Not Found"
[405]="Method Not Allowed"
[500]="Internal Server Error")
declare DATE=$(date +"%a, %d %b %Y %H:%M:%S %Z")
declare -a RESPONSE_HEADERS=(
"Date: $DATE"
"Expires: $DATE"
"Server: Slash Bin Slash Bash"
)
function warn() { ((${VERBOSE})) && echo "WARNING: $@" >&2; }
function chk_conf_file() {
[ -r "${BASHTTPD_CONF}" ] || {
cat >"${BASHTTPD_CONF}" <<'EOF'
#
# bashttpd.conf - configuration for bashttpd
#
# The behavior of bashttpd is dictated by the evaluation
# of rules specified in this configuration file. Each rule
# is evaluated until one is matched. If no rule is matched,
# bashttpd will serve a 500 Internal Server Error.
#
# The format of the rules are:
# on_uri_match REGEX command [args]
# unconditionally command [args]
#
# on_uri_match:
# On an incoming request, the URI is checked against the specified
# (bash-supported extended) regular expression, and if encounters a match the
# specified command is executed with the specified arguments.
#
# For additional flexibility, on_uri_match will also pass the results of the
# regular expression match, ${BASH_REMATCH[@]} as additional arguments to the
# command.
#
# unconditionally:
# Always serve via the specified command. Useful for catchall rules.
#
# The following commands are available for use:
#
# serve_file FILE
# Statically serves a single file.
#
# serve_dir_with_tree DIRECTORY
# Statically serves the specified directory using 'tree'. It must be
# installed and in the PATH.
#
# serve_dir_with_ls DIRECTORY
# Statically serves the specified directory using 'ls -al'.
#
# serve_dir DIRECTORY
# Statically serves a single directory listing. Will use 'tree' if it is
# installed and in the PATH, otherwise, 'ls -al'
#
# serve_dir_or_file_from DIRECTORY
# Serves either a directory listing (using serve_dir) or a file (using
# serve_file). Constructs local path by appending the specified root
# directory, and the URI portion of the client request.
#
# serve_static_string STRING
# Serves the specified static string with Content-Type text/plain.
#
# Examples of rules:
#
# on_uri_match '^/issue$' serve_file "/etc/issue"
#
# When a client's requested URI matches the string '/issue', serve them the
# contents of /etc/issue
#
# on_uri_match 'root' serve_dir /
#
# When a client's requested URI has the word 'root' in it, serve up
# a directory listing of /
#
# DOCROOT=/var/www/html
# on_uri_match '/(.*)' serve_dir_or_file_from "$DOCROOT"
# When any URI request is made, attempt to serve a directory listing
# or file content based on the request URI, by mapping URI's to local
# paths relative to the specified "$DOCROOT"
#
#unconditionally serve_static_string 'Hello, world! You can configure bashttpd by modifying bashttpd.conf.'
DOCROOT=/
on_uri_match '/(.*)' serve_dir_or_file_from
# More about commands:
#
# It is possible to somewhat easily write your own commands. An example
# may help. The following example will serve "Hello, $x!" whenever
# a client sends a request with the URI /say_hello_to/$x:
#
# serve_hello() {
# add_response_header "Content-Type" "text/plain"
# send_response_ok_exit <<< "Hello, $2!"
# }
# on_uri_match '^/say_hello_to/(.*)$' serve_hello
#
# Like mentioned before, the contents of ${BASH_REMATCH[@]} are passed
# to your command, so its possible to use regular expression groups
# to pull out info.
#
# With this example, when the requested URI is /say_hello_to/Josh, serve_hello
# is invoked with the arguments '/say_hello_to/Josh' 'Josh',
# (${BASH_REMATCH[0]} is always the full match)
EOF
warn "Created bashttpd.conf using defaults. Please review and configure bashttpd.conf before running bashttpd again."
# exit 1
}
}
function recv() { ((${VERBOSE})) && echo "< $@" >&2; }
function send() { ((${VERBOSE})) && echo "> $@" >&2; echo "$*"; }
function add_response_header() { RESPONSE_HEADERS+=("$1: $2"); }
function send_response_binary() {
local code="$1"
local file="${2}"
local transfer_stats=""
local tmp_stat_file="/tmp/_send_response_$$_"
send "HTTP/1.0 $1 ${HTTP_RESPONSE[$1]}"
for i in "${RESPONSE_HEADERS[#]}"; do
send "$i"
done
send
if ((${VERBOSE})); then
## Use dd since it handles null bytes
dd 2>"${tmp_stat_file}" < "${file}"
transfer_stats=$(<"${tmp_stat_file}")
echo -en ">> Transferred: ${file}\n>> $(awk '/copied/{print}' <<< "${transfer_stats}")\n" >&2
rm "${tmp_stat_file}"
else
## Use dd since it handles null bytes
dd 2>"${DUMP_DEV}" < "${file}"
fi
}
function send_response() {
local code="$1"
send "HTTP/1.0 $1 ${HTTP_RESPONSE[$1]}"
for i in "${RESPONSE_HEADERS[#]}"; do
send "$i"
done
send
while IFS= read -r line; do
send "${line}"
done
}
function send_response_ok_exit() { send_response 200; exit 0; }
function send_response_ok_exit_binary() { send_response_binary 200 "${1}"; exit 0; }
function fail_with() { send_response "$1" <<< "$1 ${HTTP_RESPONSE[$1]}"; exit 1; }
function serve_file() {
local file="$1"
local CONTENT_TYPE=""
case "${file}" in
*\.css)
CONTENT_TYPE="text/css"
;;
*\.js)
CONTENT_TYPE="text/javascript"
;;
*)
CONTENT_TYPE=$(file -b --mime-type "${file}")
;;
esac
add_response_header "Content-Type" "${CONTENT_TYPE}"
CONTENT_LENGTH=$(stat -c'%s' "${file}")
add_response_header "Content-Length" "${CONTENT_LENGTH}"
## Use binary safe transfer method since text doesn't break.
send_response_ok_exit_binary "${file}"
}
function serve_dir_with_tree() {
local dir="$1" tree_vers tree_opts basehref x
## HTML 5 compatible way to avoid tree html from generating favicon
## requests in certain browsers, such as browsers in android smartwatches. =)
local no_favicon=" <link href=\"data:image/x-icon;base64,${FAVICON}\" rel=\"icon\" type=\"image/x-icon\" />"
local tree_page=""
local base_server_path="/${2%/}"
[ "$base_server_path" = "/" ] && base_server_path=".."
local tree_opts="--du -h -a --dirsfirst"
add_response_header "Content-Type" "text/html"
# The --du option was added in 1.6.0. "/${2%/*}"
read _ tree_vers x < <(tree --version)
tree_page=$(tree -H "$base_server_path" -L 1 "${tree_opts}" -D "${dir}")
tree_page=$(sed "5 i ${no_favicon}" <<< "${tree_page}")
[[ "${tree_vers}" == v1.6* ]]
send_response_ok_exit <<< "${tree_page}"
}
function serve_dir_with_ls() {
local dir="$1"
add_response_header "Content-Type" "text/plain"
send_response_ok_exit < \
<(ls -la "${dir}")
}
function serve_dir() {
local dir="$1"
# If `tree` is installed, use that for pretty output.
which tree &>"${DUMP_DEV}" && \
serve_dir_with_tree "$#"
serve_dir_with_ls "$#"
fail_with 500
}
function urldecode() { [ "${1%/}" = "" ] && echo "/" || echo -e "$(sed 's/%\([[:xdigit:]]\{2\}\)/\\\x\1/g' <<< "${1%/}")"; }
function serve_dir_or_file_from() {
local URL_PATH="${1}/${3}"
shift
URL_PATH=$(urldecode "${URL_PATH}")
[[ $URL_PATH == *..* ]] && fail_with 400
# Serve index file if exists in requested directory
[[ -d "${URL_PATH}" && -f "${URL_PATH}/index.html" && -r "${URL_PATH}/index.html" ]] && \
URL_PATH="${URL_PATH}/index.html"
if [[ -f "${URL_PATH}" ]]; then
[[ -r "${URL_PATH}" ]] && \
serve_file "${URL_PATH}" "$#" || fail_with 403
elif [[ -d "${URL_PATH}" ]]; then
[[ -x "${URL_PATH}" ]] && \
serve_dir "${URL_PATH}" "$#" || fail_with 403
fi
fail_with 404
}
function serve_static_string() {
add_response_header "Content-Type" "text/plain"
send_response_ok_exit <<< "$1"
}
function on_uri_match() {
local regex="$1"
shift
[[ "${REQUEST_URI}" =~ $regex ]] && \
"$#" "${BASH_REMATCH[#]}"
}
function unconditionally() { "$@" "$REQUEST_URI"; }
function main() {
local recv=""
local line=""
local REQUEST_METHOD=""
local REQUEST_HTTP_VERSION=""
chk_conf_file
[[ ${UID} = 0 ]] && warn "It is not recommended to run bashttpd as root."
# Request-Line HTTP RFC 2616 $5.1
read -r line || fail_with 400
line=${line%%$'\r'}
recv "${line}"
read -r REQUEST_METHOD REQUEST_URI REQUEST_HTTP_VERSION <<< "${line}"
[ -n "${REQUEST_METHOD}" ] && [ -n "${REQUEST_URI}" ] && \
[ -n "${REQUEST_HTTP_VERSION}" ] || fail_with 400
# Only GET is supported at this time
[ "${REQUEST_METHOD}" = "GET" ] || fail_with 405
while IFS= read -r line; do
line=${line%%$'\r'}
recv "${line}"
# If we've reached the end of the headers, break.
[ -z "${line}" ] && break
REQUEST_HEADERS+=("${line}")
done
}
if [[ ! -z "{$1}" ]] && [ "${1}" = "-s" ]; then
socat TCP4-LISTEN:${LISTEN_PORT},fork EXEC:"${0}"
else
main
source "${BASHTTPD_CONF}"
fail_with 500
fi
LOL, a super lame hack, but at least curl and Firefox accept it:
while true ; do (dd if=/dev/zero count=10000;echo -e "HTTP/1.1\n\n $(date)") | nc -l 1500 ; done
You better replace it soon with something proper!
Ah yes, my nc was not exactly the same as yours; it did not like the -p option.
If you're using Alpine Linux, the BusyBox netcat is slightly different:
while true; do nc -l -p 8080 -e sh -c 'echo -e "HTTP/1.1 200 OK\n\n$(date)"'; done
And another way using printf:
while true; do nc -l -p 8080 -e sh -c "printf 'HTTP/1.1 200 OK\n\n%s' \"$(date)\""; done
while true; do (echo -e 'HTTP/1.1 200 OK\r\nConnection: close\r\n';) | timeout 1 nc -lp 8080 ; done
Closes connection after 1 sec, so curl doesn't hang on it.
Type nc -h and see if you have the -e option available. If yes, you can create a script, for example:
script.sh
echo -e "HTTP/1.1 200 OK\n\n $(date)"
and run it like this:
while true ; do nc -l -p 1500 -e script.sh; done
Note that the -e option needs to be enabled at compile time to be available.
I think the problem, and the reason all the solutions listed don't work, is intrinsic to the nature of an HTTP service: every request is established by a different client, and each response needs to be processed in a different context; every request must fork a new instance of the responder...
The cleanest solution, I think, is netcat's -e, but I don't know why it doesn't work... maybe it is the nc version that I tested on OpenWrt...
With socat it works...
I tried this https://github.com/avleen/bashttpd
and it works, but I must run the shell script with this command:
socat tcp-l:80,reuseaddr,fork EXEC:bashttpd &
The socat and netcat samples on GitHub don't work for me, but the socat invocation that I used works.
Actually, the best way to close the connection gracefully is to send the Content-Length header, like the following. The client (curl, for example) will close the connection after receiving the data.
DATA="Date: $(date)";
LENGTH=$(echo $DATA | wc -c);
echo -e "HTTP/1.1 200 OK\nContent-Length: ${LENGTH}\n\n${DATA}" | nc -l -p 8000;
On OSX you can use:
while true; do echo -e "HTTP/1.1 200 OK\n\n $(date)" | nc -l localhost 1500 ; done
