Docker MGT Development Environment and MailHog - magento

I have installed Magento 1.9.0.1 on the Docker MGT development environment with 2 Docker containers. The idea is for all e-mails produced by the Magento container to be caught by the MailHog container's SMTP service.
docker run -d -p 8025:8025 -p 1025:1025 --name smtp mailhog/mailhog
docker run -d --net=bridge --restart=always --privileged -h mgt-dev-56 --link smtp --name mgt-dev-56 -it -p 80:80 -p 443:443 -p 22:22 -p 3306:3306 -p 3333:3333 mgtcommerce/mgt-development-environment-5.6
I have named the MailHog container smtp and have linked it via the --link smtp parameter on the mgt-dev-56 container. Both container applications work via their respective URLs, magento1.dev and 127.0.0.1:8025. However, I cannot get the smtp container to catch any of the emails being generated from the mgt-dev-56 container.
I'm not sure if I need to configure Postfix to point to a certain port or IP. I have noticed and confirmed that there is network connectivity between the mgt-dev-56 and smtp containers.
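For reference, a quick way to check that connectivity from inside mgt-dev-56 (assuming netcat is available in the image):
nc -zv smtp 1025        # the --link alias "smtp" resolves via /etc/hosts
nc -zv 172.17.0.3 1025  # or test the container IP directly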
Has anyone come across this issue before?
Do I need to modify the Postfix configuration?
Here is the main.cf of the mgt-dev-56 container:
root@mgt-dev-56:/etc/postfix# vi main.cf
smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no
append_dot_mydomain = no
readme_directory = no
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = mgt-dev-56
myorigin = $myhostname
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = mgt-dev-56, localhost.localdomain, , localhost
relayhost = 172.17.0.3:1025
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = all
Here are the environment variables of the mgt-dev-56 container. BTW, 172.17.0.3 is the IP address of the smtp container.
root@mgt-dev-56:/etc/postfix# env
SMTP_PORT_1025_TCP_ADDR=172.17.0.3
HOSTNAME=mgt-dev-56
SMTP_PORT_8025_TCP=tcp://172.17.0.3:8025
TERM=xterm
SMTP_ENV_no_proxy=*.local, 169.254/16
SMTP_PORT_1025_TCP_PORT=1025
SMTP_PORT_8025_TCP_PORT=8025
SMTP_PORT_1025_TCP_PROTO=tcp
SMTP_PORT=tcp://172.17.0.3:1025
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/etc/postfix
SMTP_PORT_8025_TCP_PROTO=tcp
SHLVL=1
HOME=/root
no_proxy=*.local, 169.254/16
SMTP_PORT_8025_TCP_ADDR=172.17.0.3
SMTP_NAME=/mgt-dev-56/smtp
SMTP_PORT_1025_TCP=tcp://172.17.0.3:1025
_=/usr/bin/env
OLDPWD=/root/cloudpanel

I have replaced the relayhost configuration parameter with the actual IP and port number instead of using the environment variable SMTP_PORT_1025_TCP, since main.cf and Postfix do not accept environment variables. MailHog now picks up all e-mails created via the command line and Magento.
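For anyone hitting the same thing, a minimal sketch of the change (assuming Postfix runs inside the container and the --link alias smtp resolves via /etc/hosts):
postconf -e 'relayhost = [172.17.0.3]:1025'   # or relayhost = [smtp]:1025 to avoid hard-coding the IP
postfix reload
# quick smoke test; the message should appear in MailHog at 127.0.0.1:8025
echo "test body" | mail -s "test subject" someone@example.com   # assumes a mail client such as mailutils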

Related

Terraform local-exec command scp fails

I am trying to copy a directory to a new EC2 instance using Terraform:
provisioner "local-exec" {
  command = "scp -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ../ansible ubuntu@${self.public_ip}:~/playbook_dir"
}
But after the instance is created I get an error:
Error running command 'sleep 5; scp -i ~/.ssh/id_rsa -o StrictHostKeyChecking=no -o
│ UserKnownHostsFile=/dev/null -r ../ansible ubuntu@54.93.82.73:~/playbook_dir': exit status 1. Output:
│ ssh: connect to host 54.93.82.73 port 22: Connection refused
│ lost connection
The main thing is that if I copy the command to a terminal and replace the IP, it works. Why does that happen? Please help me figure it out.
I read in the documentation that the sshd service may not work correctly right after creation, so I added a sleep 5 command before the scp, but it didn't work.
I have tried the same in my local environment and, unfortunately, when using the local-exec provisioner in aws_instance directly I also got the same error message; I am honestly not sure of the details of why.
However, to work around the issue you can use a null_resource with the local-exec provisioner running the same command (including the sleep), and it works.
Terraform code
data "aws_ami" "ubuntu" {
most_recent = true
filter {
name = "name"
values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
}
filter {
name = "virtualization-type"
values = ["hvm"]
}
owners = ["099720109477"] # Canonical
}
resource "aws_key_pair" "stackoverflow" {
key_name = "stackoverflow-key"
public_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKml4tkIVsa1JSZ0OSqSBnF+0rTMWC5y7it4y4F/cMz6"
}
resource "aws_instance" "stackoverflow" {
ami = data.aws_ami.ubuntu.id
instance_type = "t2.micro"
subnet_id = var.subnet_id
vpc_security_group_ids = var.vpc_security_group_ids ## Must allow SSH inbound
key_name = aws_key_pair.stackoverflow.key_name
tags = {
Name = "stackoverflow"
}
}
resource "aws_eip" "stackoverflow" {
instance = aws_instance.stackoverflow.id
vpc = true
}
output "public_ip" {
value = aws_eip.stackoverflow.public_ip
}
resource "null_resource" "scp" {
provisioner "local-exec" {
command = "sleep 10 ;scp -i ~/.ssh/aws-stackoverflow -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ~/test/sub-test-dir ubuntu#${aws_eip.stackoverflow.public_ip}:~/playbook_dir"
}
}
Code In Action
aws_key_pair.stackoverflow: Creating...
aws_key_pair.stackoverflow: Creation complete after 0s [id=stackoverflow-key]
aws_instance.stackoverflow: Creating...
aws_instance.stackoverflow: Still creating... [10s elapsed]
aws_instance.stackoverflow: Still creating... [20s elapsed]
aws_instance.stackoverflow: Still creating... [30s elapsed]
aws_instance.stackoverflow: Still creating... [40s elapsed]
aws_instance.stackoverflow: Creation complete after 42s [id=i-006c17b995b9b7bd6]
aws_eip.stackoverflow: Creating...
aws_eip.stackoverflow: Creation complete after 1s [id=eipalloc-0019932a06ccbb425]
null_resource.scp: Creating...
null_resource.scp: Provisioning with 'local-exec'...
null_resource.scp (local-exec): Executing: ["/bin/sh" "-c" "sleep 10 ;scp -i ~/.ssh/aws-stackoverflow -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ~/test/sub-test-dir ubuntu@3.76.153.108:~/playbook_dir"]
null_resource.scp: Still creating... [10s elapsed]
null_resource.scp (local-exec): Warning: Permanently added '3.76.153.108' (ED25519) to the list of known hosts.
null_resource.scp: Creation complete after 13s [id=3541365434265352801]
Verification Process
Local directory and files
$ ls ~/test/sub-test-dir
some_test_file
$ cat ~/test/sub-test-dir/some_test_file
local exec is not nice !!
Files and directory on Created instance
$ ssh -i ~/.ssh/aws-stackoverflow ubuntu@$(terraform output -raw public_ip)
The authenticity of host '3.76.153.108 (3.76.153.108)' can't be established.
ED25519 key fingerprint is SHA256:8dgDXB/wjePQ+HkRC61hTNnwaSBQetcQ/10E5HLZSwc.
This key is not known by any other names
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
Warning: Permanently added '3.76.153.108' (ED25519) to the list of known hosts.
Welcome to Ubuntu 20.04.5 LTS (GNU/Linux 5.15.0-1028-aws x86_64)
ubuntu@ip-172-31-6-219:~$ cat ~/playbook_dir/some_test_file
local exec is not nice !!
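As a closing note: if you would rather not rely on a fixed sleep, a hedged alternative is to poll until sshd answers before copying. A sketch (HOST is a placeholder for the instance's public IP, not a value from the original question):
HOST=203.0.113.10   # placeholder; in Terraform this would come from aws_eip
for i in $(seq 1 12); do nc -z "$HOST" 22 && break; sleep 5; done   # wait up to ~60s for sshd
scp -i ~/.ssh/aws-stackoverflow -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -r ~/test/sub-test-dir "ubuntu@$HOST:~/playbook_dir"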

Why is userdata not working in my Terraform code?

I am working with Terraform and trying to execute a bash script using user data. Below is my code:
resource "aws_instance" "web_server" {
ami = var.centos
instance_type = var.instance-type
subnet_id = aws_subnet.private.id
private_ip = var.web-private-ip
associate_public_ip_address = true
user_data = <<-EOF
#!/bin/bash
yum install httpd -y
echo "hello world" > /var/www/html/index.html
yum update -y
systemctl start httpd
firewall-cmd --zone=public --permanent --add-service=http
firewall-cmd --zone=public --permanent --add-service=https
firewall-cmd --reload
EOF
}
However, when I navigate to the public IP I do not see the "hello world" message, and I also do not get a response from the server. Is there something I am missing here? I've tried going straight through the AWS console and user data is unsuccessful there too.
I verified your user data on my CentOS instance and your script is correct. However, the issue is probably caused by two things:
subnet_id = aws_subnet.private.id suggests that you've placed your instance in a private subnet. To connect to your instance from the internet, it must be in a public subnet.
There is no vpc_security_group_ids specified, which leads to using the default SG from the VPC, which blocks inbound internet traffic by default.
Also, I'm not sure what you want to do with private_ip = var.web-private-ip. It's confusing.
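If you can reach the instance some other way (e.g. via a bastion or SSM), one way to confirm whether the user data script actually ran is cloud-init's logs. A sketch, assuming a standard cloud image:
sudo cat /var/log/cloud-init-output.log           # stdout/stderr of the user data script
curl -s http://169.254.169.254/latest/user-data   # what the instance actually received (newer setups may require an IMDSv2 token)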

execute shell script in ruby

I want to execute the following shell script
system('echo "
rdr pass on lo0 inet proto tcp from any to 192.168.99.1 port 80 -> 192.168.99.1 port 8080
rdr pass on lo0 inet proto tcp from any to 192.168.99.1 port 443 -> 192.168.99.1 port 4443
" | sudo pfctl -ef - > /dev/null 2>&1; echo "==> Fowarding Ports: 80 -> 8080, 443 -> 4443 & Enabling pf"'
)
This works fine. I now want to pass the IP address loaded from a YAML file, so I tried the following:
config.yaml
configs:
  use: 'home'
  office:
    public_ip: '192.168.99.2'
  home:
    public_ip: '192.168.99.1'
Vagrantfile
require 'yaml'
current_dir = File.dirname(File.expand_path(__FILE__))
configs = YAML.load_file("#{current_dir}/config.yaml")
vagrant_config = configs['configs'][configs['configs']['use']]
system('echo "
rdr pass on lo0 inet proto tcp from any to '+vagrant_config['public_ip']+' port 80 -> '+vagrant_config['public_ip']+' port 8080
rdr pass on lo0 inet proto tcp from any to '+vagrant_config['public_ip']+' port 443 -> '+vagrant_config['public_ip']+' port 4443
" | sudo pfctl -ef - > /dev/null 2>&1; echo "==> Fowarding Ports: 80 -> 8080, 443 -> 4443 & Enabling pf"'
)
The second method does not work, nor does it show any error. Can someone point me in the right direction? What I want is to read public_ip dynamically from a config file or variable.
Thanks
UPDATE 1
I get the following output
pfctl: Use of -f option, could result in flushing of rules
present in the main ruleset added by the system at startup.
See /etc/pf.conf for further details.
No ALTQ support in kernel
ALTQ related functions disabled
pfctl: pf already enabled
What could possibly be wrong?
For troubleshooting purposes, it would be wise to output the command you're going to run before sending it to system.
cmd = 'echo "
rdr pass on lo0 inet proto tcp from any to '+vagrant_config['public_ip']+' port 80 -> '+vagrant_config['public_ip']+' port 8080
rdr pass on lo0 inet proto tcp from any to '+vagrant_config['public_ip']+' port 443 -> '+vagrant_config['public_ip']+' port 4443
" | sudo pfctl -ef - > /dev/null 2>&1; echo "==> Fowarding Ports: 80 -> 8080, 443 -> 4443 & Enabling pf"'
puts "Command to run:\n\n#{cmd}"
system( cmd )
Then, it would be wise to make the output from the system command visible. To make sure you get this feedback, I suggest you replace
sudo pfctl -ef - > /dev/null 2>&1
with (adding '-v' for more verbose output - pfctl man page)
sudo pfctl -efv -
and then look for the output and/or error messages.
Then, once the bugs are sorted out, you can put it back into stealthy, quiet mode :D
Also, since you are running with sudo you'll need to make sure the shell you're running within has sudo privileges and also make sure you're not being prompted for a password unknowingly.
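Once the command runs without errors, it is also worth confirming that the rules were actually loaded. A quick sketch, assuming BSD/macOS pf:
sudo pfctl -s nat    # lists active rdr/nat rules; both rdr lines should appear
sudo pfctl -s info   # shows whether pf is enabled and basic counters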

Linking DNS to a consul node

I'm trying to set up a consul agent using an example in "Using Docker" (chapter 11). The example suggests running this to set up one of the consul nodes:
docker run -d --name consul -h consul-1 \
-p 8300:8300 -p 8301:8301 -p 8301:8301/udp \
-p 8302:8302/udp -p 8400:8400 -p 8500:8500 \
-p 172.17.42.1:53:8600/udp \
gliderlabs/consul agent -data-dir /data -server \
-client 0.0.0.0 \
-advertise $HOSTA -bootstrap-expect 2
I assume the line with -p 172.17.42.1:53:8600/udp is linking the container's DNS service with the consul node using an IP address that worked for the author. What IP address should I use here?
Looks like 172.17.42.1 was the default bridge address for Docker 1.8 to use when a container is connecting to the host. This changed in 1.9 and seems to be 172.17.0.1 for me -- although I don't know if this is guaranteed.
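Rather than hard-coding it, you can look the bridge address up on your own host. A sketch, assuming the default docker0 bridge:
ip addr show docker0 | awk '/inet /{print $2}'   # e.g. 172.17.0.1/16
docker network inspect bridge --format '{{(index .IPAM.Config 0).Gateway}}'   # same thing via docker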
You seem to be running an example setup, so it is better to expose it on localhost 127.0.0.1 instead. It's a DNS service; as long as you issue a dig command against the correct port, it will just work. For example, the following will do for port 8600:
dig @127.0.0.1 -p 8600 stackoverflow.service.consul
; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.62.rc1.55.amzn1 <<>> @127.0.0.1 -p 8600 stackoverflow.service.consul
; (1 server found)
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 57167
;; flags: qr aa rd; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0
;; WARNING: recursion requested but not available
;; QUESTION SECTION:
;stackoverflow.service.consul. IN A
;; ANSWER SECTION:
stackoverflow.service.consul. 0 IN A 10.X.X.X
;; Query time: 1 msec
;; SERVER: 127.0.0.1#8600(127.0.0.1)
;; WHEN: Fri Jul 7 11:29:01 2017
;; MSG SIZE rcvd: 56
If you want it to work on the default DNS port so that queries can be handled directly, you can use something like dnsmasq or any of the methods listed at the following link for DNS forwarding:
https://www.consul.io/docs/guides/forwarding.html
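For example, the dnsmasq variant from that guide boils down to a single forwarding rule. A sketch, assuming a stock dnsmasq install:
echo 'server=/consul/127.0.0.1#8600' | sudo tee /etc/dnsmasq.d/10-consul   # forward *.consul lookups to consul's DNS port
sudo systemctl restart dnsmasq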

nginx: use environment variables

I have the following scenario: I have an env variable $SOME_IP defined and want to use it in a nginx block. Referring to the nginx documentation I use the env directive in the nginx.conf file like the following:
user www-data;
worker_processes 4;
pid /run/nginx.pid;
env SOME_IP;
Now I want to use the variable for a proxy_pass. I tried it like the following:
location / {
proxy_pass http://$SOME_IP:8000;
}
But I end up with this error message: nginx: [emerg] unknown "some_ip" variable
With the NGINX Docker image
Apply envsubst to a template of the configuration file at container start. envsubst is included in the official NGINX Docker images.
Environment variables are referenced in the form $VARIABLE or ${VARIABLE}.
nginx.conf.template:
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
  worker_connections 1024;
}

http {
  server {
    listen 80;

    location / {
      access_log off;
      return 200 '${MESSAGE}';
      add_header Content-Type text/plain;
    }
  }
}
Dockerfile:
FROM nginx:1.17.8-alpine
COPY ./nginx.conf.template /nginx.conf.template
CMD ["/bin/sh" , "-c" , "envsubst < /nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]
Build and run docker:
docker build -t foo .
docker run --rm -it --name foo -p 8080:80 -e MESSAGE="Hellou World" foo
NOTE: If the config template contains dollar signs $ that should not be substituted, list all used variables as a parameter to envsubst so that only those are replaced. E.g.:
CMD ["/bin/sh" , "-c" , "envsubst '$USER_NAME $PASSWORD $KEY' < /nginx.conf.template > /etc/nginx/nginx.conf && exec nginx -g 'daemon off;'"]
Nginx Docker documentation for reference. Look for Using environment variables in nginx configuration.
Using environment variables in nginx configuration
Out-of-the-box, nginx doesn't support environment variables inside most configuration blocks. But envsubst may be used as a workaround if you need to generate your nginx configuration dynamically before nginx starts.
Here is an example using docker-compose.yml:
web:
  image: nginx
  volumes:
    - ./mysite.template:/etc/nginx/conf.d/mysite.template
  ports:
    - "8080:80"
  environment:
    - NGINX_HOST=foobar.com
    - NGINX_PORT=80
  command: /bin/bash -c "envsubst < /etc/nginx/conf.d/mysite.template > /etc/nginx/conf.d/default.conf && exec nginx -g 'daemon off;'"
The mysite.template file may then contain variable references like this:
listen ${NGINX_PORT};
You can access the variables via modules - I found options for doing it with Lua and Perl.
Wrote about it on my company's blog:
https://web.archive.org/web/20170712003702/https://docs.apitools.com/blog/2014/07/02/using-environment-variables-in-nginx-conf.html
The TL;DR:
env API_KEY;
And then:
http {
  ...
  server {
    location / {
      # set var using Lua
      set_by_lua $api_key 'return os.getenv("API_KEY")';

      # set var using Perl
      perl_set $api_key 'sub { return $ENV{"API_KEY"}; }';
      ...
    }
  }
}
EDIT: original blog is dead, changed link to wayback machine cache
The correct usage would be $SOME_IP_from_env, but environment variables set from nginx.conf cannot be used in server, location or http blocks.
You can use environment variables if you use the openresty bundle, which includes Lua.
Since nginx 1.19 you can now use environment variables in your configuration with docker-compose. I used the following setup:
# file: docker/nginx/templates/default.conf.conf
upstream api-upstream {
  server ${API_HOST};
}

# file: docker-compose.yml
services:
  nginx:
    image: nginx:1.19-alpine
    environment:
      NGINX_ENVSUBST_TEMPLATE_SUFFIX: ".conf"
      API_HOST: api.example.com
I found this answer in another thread: https://stackoverflow.com/a/62844707/4479861
For simple environment variable substitution, you can use the envsubst command and template feature available since the Docker Nginx 1.19 image. Note: envsubst does not support fallback defaults, e.g. ${MY_ENV:-DefaultValue}.
For more advanced usage, consider https://github.com/guyskk/envsub-njs; it's implemented via Nginx NJS and uses JavaScript template literals, which are powerful and work well cross-platform, e.g. ${Env('MY_ENV', 'DefaultValue')}.
You can also consider https://github.com/kreuzwerker/envplate, which supports syntax just like shell variable substitution.
If you're not tied to a bare installation of nginx, you could use Docker for the job.
For example, nginx4docker implements a bunch of basic env variables that can be set through Docker, so you don't have to fiddle around with nginx's basic templating and all its drawbacks.
nginx4docker could also be extended with your custom env variables: simply mount a file that lists all your env variables to docker ... --mount $(pwd)/CUSTOM_ENV:/ENV ...
When the worst case happens and you can't switch to/use Docker, a workaround may be to set all nginx variables to their own names (e.g. host="$host"); in that case envsubst replaces $host with $host, leaving the nginx builtins intact.
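A sketch of that workaround (hedged; cover only the builtins your template actually uses):
# define each nginx builtin as an env var whose value is its own literal name,
# so envsubst maps $host back to $host and only your real variables change
export host='$host' uri='$uri' remote_addr='$remote_addr'
envsubst < /etc/nginx/nginx.conf.template > /etc/nginx/nginx.conf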
