macOS, Docker-sync, Laravel 5.8, Postman - performance

INTRODUCTION
I am using Docker on Mac. I decided to use Docker-sync because bind mounts are slow on Mac. I've managed to set the whole thing up successfully. What I saw afterwards makes me question whether it is even worth using Docker on Mac. I hope it is just the fault of my setup or something.
CONFIG
docker-sync.yml
version: "2"
options:
verbose: true
syncs:
appcode-native-osx-sync: # tip: add -sync and you keep consistent names as a convention
src: '../'
# sync_strategy: 'native_osx' # not needed, this is the default now
sync_excludes: ['vendor', 'node_modules']
docker-compose.yml
version: '3.7'
services:
  webapp:
    build:
      context: ./php/
      dockerfile: Dockerfile
    container_name: webapp
    image: php:7.3-fpm-alpine
    volumes:
      - appcode-native-osx-sync:/srv/app:nocopy
  apache2:
    build:
      network: host
      context: ./apache2/
      dockerfile: Dockerfile
    container_name: apache2
    image: httpd:2.4.39-alpine
    ports:
      - 8080:80
    volumes:
      - appcode-native-osx-sync:/srv/app:nocopy
  mysql:
    container_name: mysql
    image: mysql:latest
    command: mysqld --default-authentication-plugin=mysql_native_password
    ports:
      - 13306:3306
    volumes:
      - mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_USER: root
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: dbname
      MYSQL_USER: blogger
      MYSQL_PASSWORD: secret

volumes:
  mysql:
    driver: local
  appcode-native-osx-sync:
    external: true
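For reference, with both files in place the sync service is usually started before (or together with) Compose; a minimal sketch, assuming the standard CLI of the docker-sync gem:

    # start the sync container defined in docker-sync.yml, then the stack
    docker-sync start
    docker-compose up -d

    # or run both in one foreground process
    docker-sync-stack start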
PROBLEM (I THINK)
Setting up Docker-sync was supposed to make performance feel much closer to a native/Linux setup.
Instead, I have noticed something that, from my point of view, makes the entire thing kind of useless.
So here we go.
WITHOUT DOCKER-SYNC
I make one request via Postman (Cache-Control: no-cache), which takes ~6.8 s to finish. The response is only a few lines of text; nothing else is going on. I am simply fetching one short dummy blog post from the database and returning it as JSON.
If I make a subsequent request straight away, the time drops to ~1.4 s per request, and it stays at that level as long as I keep hitting the endpoint.
If I wait a few seconds between requests, the first request after the pause goes back to ~6.8 s.
WITH DOCKER-SYNC
I make one request via Postman (Cache-Control: no-cache), which takes ~5.1 s to finish (so not much better), for exactly the same data as last time.
If I make a subsequent request straight away, the time drops to ~100 ms (sic!) per request, and it stays at that level as long as I keep hitting the endpoint.
If I wait a few seconds between requests, the first request after the pause goes back to ~5.1 s.
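(The timings above can be reproduced outside Postman; a minimal sketch with curl, where the endpoint path is hypothetical:)

    # -w prints the total request time; repeat the call to see the warm behaviour
    curl -o /dev/null -s -H 'Cache-Control: no-cache' \
      -w 'total: %{time_total}s\n' http://localhost:8080/api/posts/1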
QUESTIONS
What do you think: is this request being cached by Docker, Laravel, or Postman? I noticed a similar problem at work with Symfony 3.4, but I do not maintain that setup. This is my personal project and my first time this deep inside the Docker world.
As I mentioned, I am using Docker-sync for speed. My usual workflow looks like this: write code for a couple of minutes, hit the endpoint, repeat. At that point I am back to ~5.1 s and have to wait. Is there any way to stop that first request from being so slow? Maybe I have misunderstood the idea behind Docker-sync, but I was sure it was supposed to keep all the requests I make fairly quick.
I personally blame Laravel. Can anyone shed some light on what the actual source of the problem might be here?
EPILOGUE
I did install Linux on my Mac just to try it out, however there are a few things that make Linux much less attractive (I love Linux anyway!) when it comes to hours and hours of coding.
UPDATE 21.08.2019
I just did the same test on Ubuntu 18 with Docker... 80ms! (8.8 seconds) / (80 milliseconds) = 110 - this is horrifying!
UPDATE 03.09.2019
I ran some tests yesterday with different sync strategies: rsync and unison. Neither seems to have any effect at all. Does anyone else have the same issue? Maybe we can work on it together?

Related

prometheus past metrics not shown on target node restart

I am new to Prometheus and need help understanding why past metric data is not shown when the target node restarts.
I have set up a Golang web server (the target). This server uses the Golang Prometheus client (see the Go Prometheus docs) to prepare metrics, and exposes them on port 3000. Prometheus scrapes data from this target.
Prometheus Config file:
global:
  scrape_interval: 10s
  scrape_timeout: 10s

scrape_configs:
  - job_name: 'webServer1'
    static_configs:
      - targets: ['webServer1:8080']
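(A quick sanity check that the target really serves metrics where the scrape config points - note the question mentions port 3000 while the config targets 8080; the URL below follows the config. The hostname only resolves on the Compose network, hence running the check from the Prometheus container, whose busybox base ships wget:)

    docker-compose exec prometheus wget -qO- http://webServer1:8080/metrics | head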
I have also set the retention flag in docker-compose:
prometheus:
  image: prom/prometheus
  volumes:
    - ./prometheus/prometheus.yml:/etc/prometheus/prometheus.yml
  ports:
    - "127.0.0.1:9090:9090"
  command:
    - '--config.file=/etc/prometheus/prometheus.yml'
    - '--storage.tsdb.path=/prometheus'
    - '--web.console.libraries=/etc/prometheus/console_libraries'
    - '--web.console.templates=/etc/prometheus/consoles'
    - '--storage.tsdb.retention.time=200h'
    - '--web.enable-lifecycle'
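(If in doubt about the config file, promtool - shipped in the prom/prometheus image - can validate it:)

    docker-compose exec prometheus promtool check config /etc/prometheus/prometheus.yml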
I have instrumented the web server (target) to count the number of HTTP requests made to the /bar endpoint. I can see the correct request count in Prometheus (see image 1).
image 1
But when the webserver restarts, previously recorded metrics are no longer shown in Prometheus (see image 2).
image 2
It's unclear to me why metrics scraped earlier from the webserver (target) are no longer shown after the target node restarts. I can still see previously scraped metrics in the graph view (see image 3), but not in the table view.
image 3
It looks like you made the hostname part of the metric name. That produces a new set of metrics for every container, and the table view only shows metrics that were contained in the most recent scrape of each target.
To fix the issue, remove the hostname from the metric name so the names don't change between restarts. If the hostname is genuinely useful information, add it as a label instead, although even that is almost certainly a bad idea (every distinct label value creates a new time series).
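A minimal sketch of that fix with the client_golang library the question refers to; the metric name bar_requests_total and the handler body are illustrative, not taken from the question:

    package main

    import (
        "net/http"

        "github.com/prometheus/client_golang/prometheus"
        "github.com/prometheus/client_golang/prometheus/promhttp"
    )

    // The metric name stays stable across restarts; Prometheus already attaches
    // an "instance" label from the scrape config, so no hostname belongs in it.
    var barRequests = prometheus.NewCounter(prometheus.CounterOpts{
        Name: "bar_requests_total",
        Help: "Number of requests to the /bar endpoint.",
    })

    func main() {
        prometheus.MustRegister(barRequests)
        http.HandleFunc("/bar", func(w http.ResponseWriter, r *http.Request) {
            barRequests.Inc()
            w.Write([]byte("bar"))
        })
        http.Handle("/metrics", promhttp.Handler())
        http.ListenAndServe(":8080", nil)
    }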

microk8s image pull keeps breaking

I am running microk8s v1.18.8 rev 1609 from 1.18/stable.
Several times I have got my deployments up and running perfectly (as far as I can tell). The images pull from localhost:32000. I have gone through many rounds of updating the deployments and the pods get automatically replaced, with the new images being pulled successfully from the repository.
Then I move on to another project for a few days (nothing to do with microk8s), leaving microk8s running and untouched. Several times when I've returned to the microk8s project, all the pods have gone into an error state (ErrImagePull). If I delete a pod, a new pod tries to replace it, but initially hangs in the ContainerCreating state (the last log entry is 'Pulling image "localhost:32000/..."'). Eventually it times out and cycles through the ImagePullBackOff and ErrImagePull states. Yet the last time I touched the project, these images were pulling perfectly fine.
I can push the image to localhost:32000 without error. I can pull the image without error. I can pull the image using microk8s.ctr:
microk8s ctr --debug images pull --plain-http localhost:32000/imagename
It works fine. I've tried changing the ufw default to allow routed (no effect) and iptables -P FORWARD ACCEPT (no effect). microk8s inspect does not report any issues. I've tried microk8s stop followed by microk8s start (no effect), and rebooting my machine (no effect). Everything else about the system appears fine; only the pods trying to pull images fail.
Previously, something in the above made it work again, but not this time. So my main question is "What else can I try?"
My secondary question is: Is this a stable platform for anyone? Can you leave a service/deployment (e.g. an nginx server) running for months without issue? I am tired of leaving a working environment and coming back a little while later to a badly broken system that takes hours/days to fix. I'm having serious doubts about microk8s in particular and k8s in general as a useful platform.
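(Two checks that may narrow down where the pull fails; the journalctl unit name assumes the usual snap install of microk8s:)

    # containerd does the actual pulling; look for errors around the failed pull
    journalctl -u snap.microk8s.daemon-containerd --since "15 min ago" | grep -iE 'pull|error'

    # confirm the built-in registry still answers on the host
    curl -s http://localhost:32000/v2/_catalog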
If you are pulling the image from an external registry and it shows ErrImagePull or ImagePullBackOff errors, try creating an image pull secret and referencing it in your pod spec:
kubectl create secret docker-registry regprivate --docker-server=https://privateregistry.com/ --docker-username=user --docker-password=mypassword
spec:
  imagePullSecrets:
    - name: regprivate
  containers:
    - name: miapp
      image: privateregistry.com/miapp:v2
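(To verify the secret was created with the type the kubelet expects, something like:)

    kubectl get secret regprivate -o jsonpath='{.type}'
    # expected output: kubernetes.io/dockerconfigjson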

Docker volume usage for storing jpg

So, I built a small application to test how Docker works. It is a small Laravel app that registers users with a profile image. Everything is working properly except that the profile image is not displayed.
I assume this is because of how Docker works (ephemeral, unchanging filesystems, etc.), so I was reading a bit about volumes, but unfortunately I was not able to make it work.
The images are stored inside a folder called uploads within the public folder (the usual Laravel structure).
In my docker-compose.yml file I have the following volumes defined:
volumes:
  - mysql-data:/var/lib/mysql
So I tried to add the one that I need, something like this:
volumes:
  - mysql-data:/var/lib/mysql
  - user-images:/app/public/uploads

volumes:
  mysql-data:
  user-images:
I also tried bind mounts, but I think those can only be used with docker container run (not quite sure).
Any idea how I could fix this?
Thanks
user-images:/app/public/uploads would be a named volume, stored under /var/lib/docker/volumes.
If you want to use a bind mount instead, that is, mounting a host folder as a volume, use a path:
volumes:
  - "./user-images:/app/public/uploads"
See also "Laravel + Docker" and its part 2 for a more complete example.
I'm assuming your image is in the directory tree with the Dockerfile/docker-compose file. If that is the case, you actually don't want a named volume, since named volumes are stored elsewhere on your system and would require you to move your image to that location (see the "Mountpoint" in the docker volume inspect output sketched below).
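(Trimmed docker volume inspect output for a named volume; the volume name comes from the question's compose file:)

    $ docker volume inspect user-images
    [
        {
            "Driver": "local",
            "Mountpoint": "/var/lib/docker/volumes/user-images/_data",
            "Name": "user-images",
            "Scope": "local"
        }
    ]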
What you likely want is the equivalent of the --mount flag, which in your compose file is the long volume syntax:
volumes:
  - type: bind
    source: ./path/to/file
    target: /app/public/uploads
See the Docker volumes docs for more info. I wrote the long version, which I prefer as it's more explicit, but the short version does the same thing. Be aware that a bind mount will shadow any files you might have added to the image itself if the paths overlap. Tip: this is handy when you have a hot-reloading dev server, as you can change your files locally and have them run in the container's context.

Symfony 3.3 memcached performance issue

Recently I've set up memcached for my project (I just started working on it, so I don't have any complicated DB queries yet). I am using it with Doctrine2 and Symfony 3.3 on my local Vagrant machine (Ubuntu 16). I know it is working because I can see that it is writing things into memory.
What I don't understand is why the performance is so poor. According to this:
Memcached Basics
I should get significant performance increase.
Here is how my memcached is set up:
Version: 1.4.25
config.yml:
metadata_cache_driver:
  type: memcached
  host: 127.0.0.1
  port: 11211
  instance_class: Memcached
query_cache_driver:
  type: memcached
  host: 127.0.0.1
  port: 11211
  instance_class: Memcached
result_cache_driver:
  type: memcached
  host: 127.0.0.1
  port: 11211
  instance_class: Memcached
And then I do something like this:
$posts = $this->getEntityManager()->createQuery($dql)->useQueryCache(true)->useResultCache(true)->getResult();
My questions/worries:
Why are there so many connections? (1st screenshot, middle column, "Cluster stats".) I have already seen this, but my implementation is different.
Does the big number of connections make it slower?
Do I have to use a big data set/lots of queries to test it? Currently I am testing with one query that fetches just over 100 posts from the database. From what I understand, though, the performance boost should be big enough to be visible even with that data set.
When I hit my endpoint without cache (via Postman/Insomnia) I get ~300-400 ms, but with memcached it's always above 1 s. I was hoping for something like ~100 ms.
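(One way to check whether connections, rather than the data set, are the issue is memcached's own counters; a sketch using its plain-text stats command over netcat:)

    # memcached speaks a plain-text protocol; "stats" dumps its counters
    printf 'stats\nquit\n' | nc 127.0.0.1 11211 \
      | grep -E 'curr_connections|total_connections|get_hits|get_misses'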
Can anyone tell me more about what might be the problem here? I will be happy with any hint I can get.
Regards,
Rob

Correct way to link redis to sinatra docker containers

I am writing a small microservices-based app, and in it I have a Redis instance that several Ruby containers access via Resque. Currently in my docker-compose file I am linking like so:
redis:
  image: redis:latest
  ports:
    - '6379:6379'
ruby_worker:
  image: my_user/my_image:latest
  links:
    - redis:db
This works fine. (I only name it :db for now because that is the example I found when looking up linking.)
In my ruby code, I have to set up my Resque redis server like:
Resque.redis = ENV['DB_PORT_6379_TCP_ADDR'] + ':6379'
But this just doesn't seem right. It depends on that exact Redis name, and when I had to spin up a different Redis instance (like I did while playing with Docker Cloud today), it didn't find the Redis server. In total I have three containers (so far) connecting to this Redis for Resque: a small Sinatra-based front end and two workers. I am not much of a Rails person and had never used Resque until three days ago, so sorry if I missed some of the basics.
Is there a better way to connect to my Redis instance in my Ruby code? Is there a way to pass the Redis name in my docker-compose file? Right now my resque-web container is configured like below, and it seems to work fine:
resque:
  image: ennexa/resque-web
  links:
    - redis:redisserver
  ports:
    - "5678:5678"
  command: "-r redis://redisserver:6379"
Don't use those link environment variables; they are deprecated, and they no longer exist in user-defined networks.
A good way to do this is to default to the hostname redis. You can always alias any service to that name, so you should almost never need to change it from the default. For example:
links:
  - someservice:redis
To override the default, you can create an application-specific env var like REDIS_HOSTNAME and set it in the environment section (but again, this is almost never necessary):
environment:
  - REDIS_HOSTNAME=foo
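Putting both pieces together in the Ruby code, a minimal sketch (the env var REDIS_HOSTNAME and the fallback hostname redis are the conventions suggested above, not required names):

    require 'resque'

    # Resolve the Redis host from the environment, defaulting to the
    # conventional Compose service name "redis".
    redis_host = ENV.fetch('REDIS_HOSTNAME', 'redis')
    Resque.redis = "#{redis_host}:6379"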
