I built the feathers-chat demo with the feathers-chat-vuex client attached via Socket.IO, but I am having trouble with remote access. The demo works fine as long as I access the client from a browser on the same system, which happens to be an Ubuntu VM running under VMware Fusion on a MacBook Pro. But if I try to connect to the client from a browser in the host macOS, it brings up the login page but fails to log in. The devtools console says "WebSocket connection to ...localhost:3030 failed," which of course it did, because the feathers-chat server is not running on that localhost; it is running in the VM. Socket.IO is set up in feathers-client.js like this: "const socket = io('http://localhost:3030', {transports: ['websocket']})". If I hard-code the VM's IP address like this: "const socket = io('http://172.16.184.194:3030', {transports: ['websocket']})" then remote access works fine. But of course I cannot do that, because in general I don't know the IP address of the server at run time. So can someone tell me the right way to set this up? Thanks!
I haven't worked with Socket.IO on Feathers, but I've dealt with this on a REST Feathers app (so hopefully this applies!).
Put the IP address into your config file. When you eventually deploy the app you will be pointing it at the production server's IP address anyway, so it makes sense to have it configurable.
Then for your local dev it can be the IP of the VM, and when you deploy, your config/production.json can have the live IP.
Here's the docs about using config variables:
https://docs.feathersjs.com/api/configuration.html#example
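For the browser client specifically, another option (a sketch, not from the Feathers docs; the fallback to 'localhost' is only so the snippet runs outside a browser) is to derive the server URL from the hostname the page was actually loaded from:

```javascript
// feathers-client.js (sketch): build the Socket.IO URL from the hostname
// the browser used to load the page, so connecting from the host Mac
// automatically targets the VM's address instead of localhost.
const host = (typeof window !== 'undefined' && window.location.hostname) || 'localhost';
const socketUrl = 'http://' + host + ':3030';
// const socket = io(socketUrl, { transports: ['websocket'] });
```

This only works if the Socket.IO server listens on the same host the page is served from, which is the case in the feathers-chat setup described above.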
Update
If you are using Docker instead of VMware for your local development, you can use docker-compose and its built-in networking functionality. Here is an example of a docker-compose file for a Feathers app that I'm working on. It's just an API, so I'm not dealing with the client, but it should be a good starting point for you.
With Docker installed, put these two files in your project root and then run docker-compose up -d. It exposes port 3030 on the Feathers server, and you should be able to connect with http://localhost:3030 again from your host machine. If you add another container to the docker-compose file, that container can access the server at http://server:3030, because it will use the internal Docker networking.
docker-compose.yml:
version: "3"
services:
  server:
    build: .
    volumes:
      - .:/usr/src/app
    ports:
      - "3030:3030"
    stdin_open: true
    tty: true
  mongo:
    image: mongo
    volumes:
      - ./protected_db/howlround-mongodb:/data/db
    command:
      - --storageEngine=mmapv1
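For example, a hypothetical client container added under the same services: key would reach the API by service name (the image and environment variable below are placeholders, not part of my setup):

```yaml
  client:
    image: node:8
    depends_on:
      - server
    environment:
      # "server" resolves via Docker's internal DNS to the server service
      - API_URL=http://server:3030
```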
And my Dockerfile:
FROM ubuntu:16.04
# Dockerfile based on https://github.com/nodejs/docker-node/blob/master/6.4/slim/Dockerfile
# gpg keys listed at https://github.com/nodejs/node
RUN set -ex \
  && for key in \
    9554F04D7259F04124DE6B476D5A82AC7E37093B \
    94AE36675C464D64BAFA68DD7434390BDBE9B9C5 \
    0034A06D9D9B0064CE8ADF6BF1747F4AD2306D93 \
    FD3A5288F042B6850C66B31F09FE44734EB7990E \
    71DCFD284A79C3B38668286BC97EC7A07EDE3FC1 \
    DD8F2338BAE7501E3DD5AC78C273792F7D83545D \
    B9AE9905FFD7803F25714661B63B535A4C206CA9 \
    C4F0DFFF4E8C1A8236409D08E73BC641CC11F4C8 \
  ; do \
    gpg --keyserver ha.pool.sks-keyservers.net --recv-keys "$key"; \
  done
ENV NPM_CONFIG_LOGLEVEL info
ENV NODE_VERSION 8.11.1
ENV NODE_ENV dev
RUN buildDeps='xz-utils curl ca-certificates' \
  && set -x \
  && apt-get update && apt-get install -y $buildDeps --no-install-recommends \
  && rm -rf /var/lib/apt/lists/* \
  && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/node-v$NODE_VERSION-linux-x64.tar.xz" \
  && curl -SLO "https://nodejs.org/dist/v$NODE_VERSION/SHASUMS256.txt.asc" \
  && gpg --batch --decrypt --output SHASUMS256.txt SHASUMS256.txt.asc \
  && grep " node-v$NODE_VERSION-linux-x64.tar.xz\$" SHASUMS256.txt | sha256sum -c - \
  && tar -xJf "node-v$NODE_VERSION-linux-x64.tar.xz" -C /usr/local --strip-components=1 \
  && rm "node-v$NODE_VERSION-linux-x64.tar.xz" SHASUMS256.txt.asc SHASUMS256.txt \
  && apt-get purge -y --auto-remove $buildDeps
# Install yarn, feathers-cli and pm2
RUN npm install -g yarn @feathersjs/cli pm2@latest
# Install Git (add-apt-repository comes from software-properties-common)
RUN apt-get update;\
  apt-get -y install software-properties-common;\
  add-apt-repository -y ppa:git-core/ppa;\
  apt-get update;\
  apt-get -y install git
# Create app directory
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
# Install app dependencies (package.json must be in the build context)
COPY package.json yarn.lock ./
RUN yarn install
EXPOSE 3030
CMD [ "node" ]
Related
I would like to run VNC servers in two Docker containers, call them ContainerA and ContainerB. ContainerA and ContainerB must both be exposed to the host network via --network="host". When I try to start the second VNC server it fails because the address space is already in use (the address being alfred). Even after changing the port being used, display offset, etc., nothing seems to work.
If anyone has had any experience with this, that would be great. One solution I'll shoot down right now is bridging the containers so that they each have their own unique address space; given how the system is configured, this cannot happen.
Thanks!
My Current Setup
The setup I am now trying is an intermediate VNC container and two Firefox containers, still in the works. The work for the two VNC servers is in the two Dockerfiles listed for FirefoxVNC[A,B].
# compose.yaml
version: '3.7'
services:
  vncserver:
    build: ./VNCServer
    container_name: vncserver
    network_mode: "host"
  firefoxa:
    build: ./FirefoxVNCA
    container_name: firefoxvnca
    network_mode: "host"
  firefoxb:
    build: ./FirefoxVNCB
    container_name: firefoxvncb
    network_mode: "host"
# FirefoxVNCA/Dockerfile
FROM nvidia/cuda:11.7.0-devel-centos7
# RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum update -y
RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y firefox \
    x11vnc \
    xorg-x11-server-Xvfb
EXPOSE 5900
#RUN echo "exec firefox" > ~/.xinitrc && chmod +x ~/.xinitrc
#CMD ["x11vnc", "-create", "-forever", "-rfbport", "5901"]
CMD ["/bin/bash", "-c", "-l", "firefox"]
# FirefoxVNCB/Dockerfile
FROM nvidia/cuda:11.7.0-devel-centos7
# RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum update -y
RUN yum install -y https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
RUN yum install -y firefox \
    x11vnc \
    xorg-x11-server-Xvfb
EXPOSE 5900
#RUN echo "exec firefox" > ~/.xinitrc && chmod +x ~/.xinitrc
#CMD ["x11vnc", "-create", "-forever", "-rfbport", "5901"]
CMD ["/bin/bash", "-c", "-l", "firefox"]
I have made a Docker container with some code for deployment. However, I realised that given the structure of the project I'm working with, it's more suitable to deploy a full ISO image instead of running Docker on top of a cloud VM running stock Debian, which adds unnecessary layers of virtualization.
I know that Docker containers are usually deployed on Kubernetes, but before diving into that route: is there a simple way to convert a deb9 Docker image into a full deb9 OS image? Like an opposite of docker import?
You can convert your Docker container to a full OS image. A Debian Dockerfile example would be:
FROM debian:stretch
RUN apt-get update && apt-get -y install --no-install-recommends \
    linux-image-amd64 \
    systemd-sysv
In principle you have to install a kernel and an init system.
Full instructions can be found on GitHub.
Docker images don't contain a Linux kernel and aren't configured to do things like run a full init system or go out and get their network configuration, so this won't work well.
Put differently: the same mismatch that makes docker import not work well (the resulting container has too much stuff and isn't set up for Docker) means you can't export a container or image into a VM and expect it to work in that environment.
But! If you've written a Dockerfile for your image, that's very close to saying "I'm going to start from this base Linux distribution and run this shell script to get an image." Tools like Packer are set up to take a base Linux VM image, run some sort of provisioner, and create a new image. If you have a good Docker setup already and decide a VM is a better fit, you've probably already done a lot of the leg-work to have an automated build of your VM image.
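As a hypothetical sketch of that approach (the builder type, AMI ID, and script path below are placeholders, not from the question), a Packer template that reuses your existing provisioning steps might look like:

```hcl
# Build a VM image by running the same shell steps your Dockerfile RUNs.
source "amazon-ebs" "debian" {
  ami_name      = "myapp-{{timestamp}}"
  instance_type = "t3.micro"
  source_ami    = "ami-XXXXXXXX"   # a stock Debian 9 AMI (placeholder)
  ssh_username  = "admin"
}

build {
  sources = ["source.amazon-ebs.debian"]
  provisioner "shell" {
    script = "./provision.sh"      # the same steps your Dockerfile runs
  }
}
```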
#info
We can use Docker like a VM / Flatpak / AppImage / Termux, with GUI, audio and hardware acceleration :)
How to do it?
1. Remove old Docker packages:
sudo apt-get remove docker docker-engine docker.io containerd runc
2. Install the components:
sudo apt-get update
sudo apt install apt-transport-https ca-certificates curl gnupg lsb-release
3. Add the Docker repository:
curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg
echo \
  "deb [arch=amd64 signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
4. Install Docker:
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
5. Create the Docker image. As root, create a Dockerfile:
su
cd /home/
kwrite Dockerfile
FROM debian:testing
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update && apt-get -y install midori bash mpv pulseaudio pulseaudio-utils \
    pulsemixer pulseaudio-module-jack apulse neofetch vlc smplayer wget sudo \
    cairo-dock cairo-dock-plug-ins xfce4 xfce4-goodies falkon kde-full \
    xfce4-whiskermenu-plugin gnome tigervnc-standalone-server openssh-server \
    openssh-client network-manager net-tools iproute2 gerbera openjdk-11-jdk \
    mediainfo dcraw htop gimp krita libreoffice python3-pip terminator uget alsa-utils
ENV PULSE_SERVER=tcp:host.docker.internal:4713
CMD bash
Save it, then run in a terminal:
sudo docker build -t supersifobian .
6. Allow GUI access from the container:
xhost +
Install the X helper packages:
sudo apt-get -y install xpra xserver-xephyr xinit xauth xclip x11-xserver-utils x11-utils
7. Run the container:
sudo docker run -ti --net=host --device=/dev/dri:/dev/dri -e DISPLAY=:0 --privileged --cap-add=ALL --device /dev/snd -v /dev:/dev --group-add audio -v /var/run/docker.sock:/host/var/run/docker.sock -e PULSE_SERVER=tcp:$PULSE_SERVER -v /dev:/host/dev -v /proc:/host/proc:ro -v /boot:/host/boot:ro -v /lib/modules:/host/lib/modules:ro -v /usr:/host/usr:ro -v /media:/host/media:ro -v /home:/host/home:ro <image-id-or-name>
NB:
Enter the container:
docker exec -it <container-name-or-id> bash
Add a user:
adduser <username>
Give the user sudo:
usermod -aG sudo <username>
Add the user to the audio group:
usermod -aG audio <username>
Audio in the Docker CLI, using paprefs:
a. Install paprefs:
apt-get install paprefs
b. Enable network access in paprefs.
c. To find the PulseAudio port, run:
pax11publish
d. In the container, export the server address:
export "PULSE_SERVER=tcp:192.168.43.135:37721"
Save the container as a new image:
docker commit <container-id> <image-name>:<version>
Check your images:
docker images
Thanks :)
I have a Dockerfile and docker-compose.yml that I developed on OSX. It looks like this:
cat Dockerfile:
FROM alpine:3.7
ADD ./app /home/app/
WORKDIR /home/app/
RUN apk add --no-cache postgresql-dev gcc python3 python3-dev musl-dev && \
    python3 -m ensurepip && \
    rm -r /usr/lib/python*/ensurepip && \
    pip3 install --upgrade pip setuptools && \
    rm -r /root/.cache && \
    pip3 install -r requirements.txt
EXPOSE 5000
ENTRYPOINT ["python3", "app.py"]
cat docker-compose.yml:
version: "3"
services:
  web:
    build: .
    ports:
      - "5000:5000"
    volumes:
      - ./app/:/home/app/
    depends_on:
      - db
  db:
    image: postgres:10
    environment:
      - POSTGRES_USER=testusr
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=testdb
    expose:
      - 5432
I then start it via docker-compose up --build -d and open the web browser on port 5000, and it shows a form (as expected). Again, this works perfectly on OSX.
However, on Windows 10 it just shows a blank page. And docker port <CONTAINER> shows 5000/tcp -> 0.0.0.0:5000 so I know it's bound to the right port.
Additionally, docker logs -f <python container> doesn't show the request ever reaching the container. Normally a new line is printed (with status code) for each flask/nginx response. In this case it looks like the request never even reaches the Python application container.
So, any idea why this doesn't work on Windows?
So the answer is that you have to use docker-machine ip to see which IP the container is bound to. On Windows it doesn't just bind to localhost/127.0.0.1 like it does on OSX.
I have a Maven project that uses the io.fabric8 docker-maven-plugin to launch a database as part of my integration tests. When I run the integration tests locally it works, but when they run on my Jenkins server I get an error saying there is no DOCKER_HOST variable.
[ERROR] Failed to execute goal io.fabric8:docker-maven-plugin:0.20.1:start (prepare-itdatabase) on project myproject: Execution prepare-itdatabase of goal io.fabric8:docker-maven-plugin:0.20.1:start failed: No <dockerHost> given, no DOCKER_HOST environment variable, no read/writable '/var/run/docker.sock' or '//./pipe/docker_engine' and no external provider like Docker machine configured -> [Help 1]
It might be worth mentioning that my Jenkins instance itself is launched through docker, by simply using something like docker run jenkins.
I tried to set the DOCKER_HOST variable to tcp://192.168.59.103:2375 when starting Jenkins, but that just caused it to time out in the build.
my Jenkins instance itself is launched through docker, by simply using something like docker run jenkins
I assume you run your build directly on the Jenkins master in your container (no slave). Your build process runs inside the container, which has neither the Docker binary nor the Docker socket available.
You'll have to mount the Docker socket into your container and install the Docker binaries. This blog post explains things in detail; in short, what you can do is:
Launch your Jenkins container with Docker socket:
docker run -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
--name jenkins \
jenkins/jenkins:lts
And install Docker inside the container. Said blog post gives a handy script to be run inside the container:
apt-get update && \
apt-get -y install apt-transport-https \
ca-certificates \
curl \
gnupg2 \
software-properties-common && \
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg > /tmp/dkey; apt-key add /tmp/dkey && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get -y install docker-ce
You can run this script by hand or as part of your Jenkins build. From now on you should be able to run Docker commands from inside your container (and in your builds).
Alternatively, you can configure a Jenkins slave independently from your master and have Docker installed on that slave.
I'm using the official Elasticsearch Docker image instead of setting up my own Elasticsearch instance. And that works great, up to the point when I wanted to extend it. I wanted to install the Marvel plugin into that Elasticsearch instance to get more information.
Now, dockerfile/elasticsearch automatically runs Elasticsearch, and setting the command to /bin/bash doesn't work; neither does attaching to the container, trying to access it over SSH, nor installing an SSH daemon with apt-get install -y openssh-server.
In this particular case, I could just go into the container's file system and execute /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest, and everything worked.
But how could I install an additional service which needs to be installed with apt-get when I can't have a terminal inside the running container?
Simply extend it using a Dockerfile that starts with
FROM dockerfile/elasticsearch
and install Marvel or an SSH server or whatever you need. Then end with the correct command to start your services. You can use supervisor to start multiple services; see Run a service automatically in a docker container for more info on that.
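A minimal sketch of such a Dockerfile (the plugin path is assumed from the question above, not verified against the base image):

```dockerfile
# Extend the base image and bake the Marvel plugin in at build time,
# instead of installing it inside a running container.
FROM dockerfile/elasticsearch
RUN /opt/elasticsearch/bin/plugin -i elasticsearch/marvel/latest
# No CMD needed here if the base image's command already starts Elasticsearch.
```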
If you don't mind using docker-compose, what I usually do is to add a first section for the base image you plan to reuse, and then use that image as the base in the rest of the services' Dockerfiles, something along the lines of:
---
version: '2'
services:
  base:
    build: ./images/base
  collector:
    build: ./images/collector
Then, in images/collector/Dockerfile, and since my project is called webtrack, I'd type
FROM webtrack_base
...
And now it's done!
Update August 2016
Having found very little current information on how to do this with the latest versions of Elasticsearch (2.3.5, for example), Kibana (4.5.3), and the Marvel & Sense plugins, I opted to take the steeper path and write my own image.
Please find the source code (Dockerfile) and README here
FROM java:jre-alpine
MAINTAINER arcseldon <arcseldon@gmail.com>
ENV ES_VERSION=2.3.5 \
    KIBANA_VERSION=4.5.3
RUN apk add --quiet --no-progress --no-cache nodejs \
    && adduser -D elasticsearch
USER elasticsearch
WORKDIR /home/elasticsearch
RUN wget -q -O - http://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/${ES_VERSION}/elasticsearch-${ES_VERSION}.tar.gz \
    | tar -zx \
    && mv elasticsearch-${ES_VERSION} elasticsearch \
    && wget -q -O - http://download.elastic.co/kibana/kibana/kibana-${KIBANA_VERSION}-linux-x64.tar.gz \
    | tar -zx \
    && mv kibana-${KIBANA_VERSION}-linux-x64 kibana \
    && rm -f kibana/node/bin/node kibana/node/bin/npm \
    && ln -s $(which node) kibana/node/bin/node \
    && ln -s $(which npm) kibana/node/bin/npm \
    && ./elasticsearch/bin/plugin install license \
    && ./elasticsearch/bin/plugin install marvel-agent \
    && ./kibana/bin/kibana plugin --install elasticsearch/marvel/latest \
    && ./kibana/bin/kibana plugin --install elastic/sense
CMD elasticsearch/bin/elasticsearch --es.logger.level=OFF --network.host=0.0.0.0 & kibana/bin/kibana -Q
EXPOSE 9200 5601
If you just want the pre-built image then please do:
docker pull arcseldon/elasticsearch-kibana-marvel-sense
You can visit the repository on hub.docker.com here
Usage:
docker run -d -p 9200:9200 -p 5601:5601 arcseldon/elasticsearch-kibana-marvel-sense
You can connect to Elasticsearch at http://localhost:9200 and its Kibana front-end at http://localhost:5601.
You can connect to Marvel at http://localhost:5601/app/marvel and Sense at http://localhost:5601/app/sense.
Hope this helps others and saves some time!