FreeSWITCH uses private address for RTP (OSX + Docker + X-Lite)

Environment
OSX: 10.12.2 (16C68)
Docker: Version 17.03.1-ce-mac12 (17661)
freeswitch container: https://hub.docker.com/r/bettervoice/freeswitch-container/
Network
Docker container IP: 172.17.0.2
Docker host IP: 192.168.1.121
Docker setup
docker run -d \
--name freeswitch \
-p 5060:5060/tcp \
-p 5060:5060/udp \
-p 5066:5066/tcp \
-p 5080:5080/tcp \
-p 5080:5080/udp \
-p 8021:8021/tcp \
-p 7443:7443/tcp \
-p 60535-60635:60535-60635/udp \
-v /my/docker/freeswitch/conf:/usr/local/freeswitch/conf:rw \
-v /my/docker/freeswitch/default_freeswitch:/etc/default/freeswitch \
bettervoice/freeswitch-container:1.6.16
I only changed the following settings:
vars.xml
<X-PRE-PROCESS cmd="set" data="external_sip_ip=192.168.1.121"/>
<X-PRE-PROCESS cmd="set" data="external_rtp_ip=192.168.1.121"/>
sip_profiles/internal.xml:
<param name="ext-rtp-ip" value="$${external_rtp_ip}"/>
<param name="ext-sip-ip" value="$${external_sip_ip}"/>
sip_profiles/external.xml:
<param name="ext-rtp-ip" value="$${external_rtp_ip}"/>
<param name="ext-sip-ip" value="$${external_sip_ip}"/>
switch.conf.xml
<param name="rtp-start-port" value="60535"/>
<param name="rtp-end-port" value="60635"/>
default_freeswitch
# /etc/default/freeswitch
DAEMON_OPTS="-rp"
When I call from X-Lite A (1000) to X-Lite B (1001), the private address 172.17.0.2 is used as the RTP address. I want both X-Lite clients to use 192.168.1.121 for RTP instead.
Could anyone give me some help?
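For reference, the addresses the internal profile actually advertises can be checked with fs_cli (a sketch, assuming fs_cli is available inside the container):
# print the internal profile's SIP/RTP and Ext-SIP/Ext-RTP settings
docker exec -it freeswitch fs_cli -x "sofia status profile internal"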

Related

Yocto do_install action not performed

Here is my bbappend file:
LICENSE = "MIT"
IMAGE_LINGUAS = " "
# User preferences
inherit extrausers
# Change root password (note the capital -P)
EXTRA_USERS_PARAMS = "\
usermod -P toor root; \
useradd -P michael -G sudo michael; \
useradd -P nfi -G sudo nfi; \
"
# uncomment the line %sudo ALL=(ALL) ALL in /etc/sudoers
modify_sudoers() {
sed 's/# %sudo/%sudo/' < ${IMAGE_ROOTFS}${sysconfdir}/sudoers > ${IMAGE_ROOTFS}${sysconfdir}/sudoers.tmp
mv ${IMAGE_ROOTFS}${sysconfdir}/sudoers.tmp ${IMAGE_ROOTFS}${sysconfdir}/sudoers
}
ROOTFS_POSTPROCESS_COMMAND_append = " modify_sudoers;"
IMAGE_INSTALL = "base-files \
base-passwd \
busybox \
mtd-utils \
mtd-utils-ubifs \
libconfig \
swupdate \
swupdate-www \
${@bb.utils.contains('SWUPDATE_INIT', 'tiny', 'virtual/initscripts-swupdate', 'initscripts systemd', d)} \
util-linux-sfdisk \
mmc-utils \
e2fsprogs-resize2fs \
lua \
debugconfigs \
"
IMAGE_FSTYPES = "ext4.gz.u-boot ext4 cpio.gz.u-boot"
PACKAGE_EXCLUDE += " jailhouse kernel-module-jailhouse libncursesw5 libpanelw5 libpython3 python3* perl* apt dpkg "
SRC_URI += "file://set-ttymxc0-permissions.sh"
do_install() {
install -d ${D}${sysconfdir}/init.d
install -m 0755 ${WORKDIR}/set-ttymxc0-permissions.sh ${D}${sysconfdir}/init.d/
}
addtask install after do_build
I am using SWUpdate. I can build the kernel and run it on my device. However, I cannot log in as root or as any user I have created. It seems this could be related to user permissions on the getty serial terminal ttymxc0, so I am attempting to add a script to init.d. The script contains:
#!/bin/sh
# Set permissions on ttymxc0
chmod 660 /dev/ttymxc0
chown root:tty /dev/ttymxc0
The bitbake file I am appending to is swupdate-image.bb. This file does not do much. It does not have a do_install section. So I am attempting to add one. However it is never run. Can anyone speculate as to why?
You may have noticed that the file swupdate-image.bb requires another file, swupdate-image.inc.
You should pay attention to this line:
${@bb.utils.contains('SWUPDATE_INIT', 'tiny', 'virtual/initscripts-swupdate', 'initscripts systemd', d)} \
${@bb.utils.contains()} is an inline (Python) function call. It checks the SWUPDATE_INIT variable: if it matches tiny, it returns virtual/initscripts-swupdate to IMAGE_INSTALL; otherwise, it returns initscripts systemd to IMAGE_INSTALL.
So you should only set the variable SWUPDATE_INIT = "tiny" in a .bbappend file.
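A minimal sketch of that .bbappend; the layer name and path (meta-mylayer, recipes-core/images) are placeholders for wherever your swupdate-image.bb is appended:
# hypothetical layer layout; adjust the path to your own layer
mkdir -p meta-mylayer/recipes-core/images
cat > meta-mylayer/recipes-core/images/swupdate-image.bbappend <<'EOF'
SWUPDATE_INIT = "tiny"
EOF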
Adding this should install rcS.swupdate in your final image, according to the initscripts-swupdate recipe:
https://github.com/sbabic/meta-swupdate/blob/master/recipes-core/initscripts-swupdate/initscripts-swupdate-usb.bb
N.B.: I noticed that you added resize2fs. If you want this binary, make sure the right configure flag is set! You will most likely need to create a .bbappend file and add the following:
EXTRA_OECONF_append_class-target = " --enable-resizer"
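As a sketch, dropped into a bbappend for the e2fsprogs recipe (the layer path and bbappend name are assumptions):
# hypothetical path; the bbappend must match the e2fsprogs recipe name
mkdir -p meta-mylayer/recipes-devtools/e2fsprogs
echo 'EXTRA_OECONF_append_class-target = " --enable-resizer"' \
  > 'meta-mylayer/recipes-devtools/e2fsprogs/e2fsprogs_%.bbappend'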

How to run AWS CLI commands consecutively in bash?

How can I execute the following bash commands consecutively?
aws logs create-export-task --task-name "cloudwatch-log-group-export1" \
--log-group-name "/my/log/group1" \
--from 1488708419000 --to 1614938819000 \
--destination "my-s3-bucket" \
--destination-prefix "my-log-group1"
aws logs create-export-task --task-name "cloudwatch-log-group-export" \
--log-group-name "/my/log/group2" \
--from 1488708419000 --to 1614938819000 \
--destination "my-s3-bucket" \
--destination-prefix "my-log-group2"
The problem I have with the above commands is that, after the first command completes, the script gets stuck at the following output, so the second command is never reached.
{
"taskId": "0e3cdd4e-1e95-4b98-bd8b-3291ee69f9ae"
}
It seems that I should find a way to wait for the cloudwatch-log-group-export1 task to complete.
You could create a waiter function that uses describe-export-tasks to get the current status of an export job.
An example of such a function:
wait_for_export() {
  local task_id=${1}
  local sleep_time=${2:-10}
  while true; do
    # Poll the current status of the export task
    job_status=$(aws logs describe-export-tasks \
      --task-id "${task_id}" \
      --query "exportTasks[0].status.code" \
      --output text)
    echo "${job_status}"
    # Stop on any terminal state, not just success
    [[ ${job_status} == "COMPLETED" || ${job_status} == "FAILED" || ${job_status} == "CANCELLED" ]] && break
    sleep "${sleep_time}"
  done
}
Then you use it:
task_id1=$(aws logs create-export-task \
--task-name "cloudwatch-log-group-export1" \
--log-group-name "/my/log/group1" \
--from 1488708419000 --to 1614938819000 \
--destination "my-s3-bucket" \
--destination-prefix "my-log-group1" \
--query 'taskId' --output text)
wait_for_export ${task_id1}
# second export, started only after the first one has finished
aws logs create-export-task \
  --task-name "cloudwatch-log-group-export" \
  --log-group-name "/my/log/group2" \
  --from 1488708419000 --to 1614938819000 \
  --destination "my-s3-bucket" \
  --destination-prefix "my-log-group2"
Separately, the AWS CLI sends its output to a pager by default, which is what makes the script appear stuck.
You can avoid it by setting the AWS_PAGER environment variable to an empty string before executing aws commands:
export AWS_PAGER=""
aws logs create-export-task...
Or, you can fix it in the AWS config file (~/.aws/config):
[default]
cli_pager=

couldn't find local name "" in the initial cluster configuration when starting the etcd service

I am starting the etcd (v3.3.15) service using this command:
systemctl start etcd
This is my etcd systemd config:
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
Documentation=https://github.com/coreos
[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/local/bin/etcd \
--name ${ETCD_NAME} \
--cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \
--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \
--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \
--initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster infra1=https://172.19.104.231:2380,infra2=https://172.19.104.230:2380,infra3=https://172.19.150.82:2380 \
--initial-cluster-state new \
--data-dir=${ETCD_DATA_DIR}
Restart=on-failure
RestartSec=5
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target
and this is my etcd config:
# [member]
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://172.19.150.82:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.19.150.82:2379"
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.19.150.82:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://172.19.150.82:2379"
Your names and IPs mismatch: in --initial-cluster, 172.19.150.82 is registered as infra3, but the member config names itself infra1 while listening on 172.19.150.82:
infra3=https://172.19.150.82
and
ETCD_NAME=infra1
ETCD_DATA_DIR="/var/lib/etcd"
ETCD_LISTEN_PEER_URLS="https://172.19.150.82:2380"
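In other words, the name a member starts with must match the entry registered for its peer URL in --initial-cluster. Given the addresses above, a consistent member section for the host 172.19.150.82 would be (a sketch; alternatively, keep ETCD_NAME=infra1 and use infra1's addresses throughout):
# [member] name must match this host's entry in --initial-cluster
ETCD_NAME=infra3
ETCD_LISTEN_PEER_URLS="https://172.19.150.82:2380"
ETCD_LISTEN_CLIENT_URLS="https://172.19.150.82:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://172.19.150.82:2380"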

Having trouble while adding new etcd members

I'm planning to add new members to a single-instance etcd cluster, but I am running into problems.
I started the first etcd member with the following command:
nohup etcd \
--advertise-client-urls=https://192.168.22.34:2379 \
--cert-file=/etc/kubernetes/pki/etcd/server.crt \
--client-cert-auth=true \
--data-dir=/var/lib/etcd \
--initial-advertise-peer-urls=https://192.168.22.34:2380 \
--initial-cluster=test-master-01=https://192.168.22.34:2380 \
--key-file=/etc/kubernetes/pki/etcd/server.key \
--listen-client-urls=https://0.0.0.0:2379 \
--listen-peer-urls=https://192.168.22.34:2380 \
--name=test-master-01 \
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
--peer-client-cert-auth=true \
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
--peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--snapshot-count=10000 \
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt &
Then I checked the health of the cluster and it seems to be healthy:
member f13d668ae0cba84 is healthy: got healthy result from https://192.168.22.34:2379
cluster is healthy
I also checked the members:
f13d668ae0cba84: name=test-master-01 peerURLs=http://192.168.22.34:2380 clientURLs=https://192.168.22.34:2379 isLeader=true
Then I tried to add the second member:
etcdctl \
--endpoints=https://127.0.0.1:2379 \
--ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--cert-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
--key-file=/etc/kubernetes/pki/etcd/healthcheck-client.key \
member add test-master-02 https://192.168.22.37:2380
Added member named test-master-02 with ID 65bec874cca265d8 to cluster
ETCD_NAME="test-master-02"
ETCD_INITIAL_CLUSTER="test-master-01=http://192.168.22.34:2380,test-master-02=https://192.168.22.37:2380"
ETCD_INITIAL_CLUSTER_STATE="existing"
Then I started the second etcd member with the following command:
etcd \
--name test-master-02 \
--listen-client-urls https://192.168.22.37:2379 \
--advertise-client-urls https://192.168.22.37:2379 \
--listen-peer-urls https://192.168.22.37:2380 \
--cert-file=/etc/kubernetes/pki/etcd/server.crt \
--client-cert-auth=true \
--trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \
--peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \
--peer-client-cert-auth=true \
--peer-key-file=/etc/kubernetes/pki/etcd/peer.key \
--key-file=/etc/kubernetes/pki/etcd/server.key \
--initial-cluster-state=existing \
--initial-cluster=test-master-01=https://192.168.22.34:2380,test-master-02=https://192.168.22.37:2380
But I got an error:
etcdmain: error validating peerURLs {ClusterID:bc8c76911939f2de Members:[&{ID:f13d668ae0cba84 RaftAttributes:{PeerURLs:[http://192.168.22.34:2380]} Attributes:{Name:test-master-01 ClientURLs:[https://192.168.22.34:2379]}} &{ID:65bec874cca265d8 RaftAttributes:{PeerURLs:[https://192.168.22.37:2380]} Attributes:{Name: ClientURLs:[]}}] RemovedMemberIDs:[]}: unmatched member while checking PeerURLs
Update
It looks like I don't have this problem when starting the cluster from scratch, without restoring from a snapshot.
I figured out that before adding new members I needed to update my main etcd member, because the member list command returned 127.0.0.1 as the peerURL instead of the value from the etcd config.
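A sketch of that fix, using the same etcdctl (v2 API) flags as above; the member ID comes from the member list output:
etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --ca-file=/etc/kubernetes/pki/etcd/ca.crt \
  --cert-file=/etc/kubernetes/pki/etcd/healthcheck-client.crt \
  --key-file=/etc/kubernetes/pki/etcd/healthcheck-client.key \
  member update f13d668ae0cba84 https://192.168.22.34:2380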

Mailgun and curl: should I use the private or public API key?

Mailgun provides a public API key and a private API key.
Which one should I use for a curl request like:
curl -s --user 'api:YOUR_API_KEY' \
https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages \
-F from='Excited User <mailgun@YOUR_DOMAIN_NAME>' \
-F to=YOU@YOUR_DOMAIN_NAME \
-F to=bar@example.com \
-F subject='Hello' \
-F text='Testing some Mailgun awesomeness!'
According to the Mailgun documentation, it should be your secret (private) API key, which you can find on your Mailgun dashboard.
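For example, a sketch that reads the private key from an environment variable rather than hard-coding it (MAILGUN_API_KEY and the key value are placeholders):
export MAILGUN_API_KEY='key-xxxxxxxx'  # placeholder; use your private key from the dashboard
curl -s --user "api:${MAILGUN_API_KEY}" \
  https://api.mailgun.net/v3/YOUR_DOMAIN_NAME/messages \
  -F from='Excited User <mailgun@YOUR_DOMAIN_NAME>' \
  -F to=you@YOUR_DOMAIN_NAME \
  -F subject='Hello' \
  -F text='Testing some Mailgun awesomeness!'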
