Correct way to link redis to sinatra docker containers - ruby

I am writing a small microservices-based app, and in it I have a Redis instance that some Ruby code/containers access via Resque. Currently in my docker-compose I am linking like so:
redis:
  image: redis:latest
  ports:
    - '6379:6379'

ruby_worker:
  image: my_user/my_image:latest
  links:
    - redis:db
This works fine (I only name it :db for now because that is the example I found when looking up linking).
In my Ruby code, I have to set up my Resque Redis server like this:
Resque.redis = ENV['DB_PORT_6379_TCP_ADDR'] + ':6379'
But this just doesn't seem right. It is dependent on that exact Redis name, and if I had to spin up a different Redis instance (like I did while playing with Docker Cloud today), it doesn't find the Redis server. In total I have 3 containers (so far) connecting to this Redis for Resque: a small Sinatra-based front end and 2 workers. I am not much of a Rails person and had never used Resque before 3 days ago, so sorry if I missed some of the basics.
Is there a better way to connect to my Redis instance in my Ruby code? Is there a way to pass the Redis name in my docker-compose? Right now my resque-web container is configured as below and it seems to work fine:
resque:
  image: ennexa/resque-web
  links:
    - redis:redisserver
  ports:
    - "5678:5678"
  command: "-r redis://redisserver:6379"

Don't use those link environment variables; they are deprecated, and they no longer exist in user-defined networks.
A good way to do this is to default to the hostname redis. You can always alias any service to that name, so you should almost never need to change it from the default. For example:
links:
  - someservice:redis
To override the default, you can create some application-specific env var like REDIS_HOSTNAME and set it from the environment section (but again, this is almost never necessary):
environment:
  - REDIS_HOSTNAME=foo
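On the Ruby side you can then build the connection from that variable, falling back to the default redis hostname. Here is a minimal sketch (REDIS_HOSTNAME is the application-specific variable suggested above, not something Compose sets for you):

require 'resque'

# Use REDIS_HOSTNAME if the compose file sets it; otherwise fall back to the
# service name "redis", which Docker's network DNS resolves for us.
redis_host = ENV.fetch('REDIS_HOSTNAME', 'redis')
Resque.redis = "#{redis_host}:6379"

This keeps the code free of the deprecated *_PORT_6379_TCP_ADDR variables: whichever service is reachable as redis (or whatever REDIS_HOSTNAME points at) is the one Resque will use.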

Related

macOS, Docker-sync, Laravel 5.8, Postman - performance

INTRODUCTION
I am using Docker on Mac. I decided to use Docker-sync because bind mounts are slow on macOS. I've managed to set the whole thing up successfully. What I saw afterwards makes me question whether it is even worth using Docker on Mac. I hope it is the fault of my setup or something.
CONFIG
docker-sync.yml
version: "2"
options:
  verbose: true
syncs:
  appcode-native-osx-sync: # tip: add -sync and you keep consistent names as a convention
    src: '../'
    # sync_strategy: 'native_osx' # not needed, this is the default now
    sync_excludes: ['vendor', 'node_modules']
docker-compose.yml
version: '3.7'
services:
  webapp:
    build:
      context: ./php/
      dockerfile: Dockerfile
    container_name: webapp
    image: php:7.3-fpm-alpine
    volumes:
      - appcode-native-osx-sync:/srv/app:nocopy
  apache2:
    build:
      network: host
      context: ./apache2/
      dockerfile: Dockerfile
    container_name: apache2
    image: httpd:2.4.39-alpine
    ports:
      - 8080:80
    volumes:
      - appcode-native-osx-sync:/srv/app:nocopy
  mysql:
    container_name: mysql
    image: mysql:latest
    command: mysqld --default-authentication-plugin=mysql_native_password
    ports:
      - 13306:3306
    volumes:
      - mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_USER: root
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: dbname
      MYSQL_USER: blogger
      MYSQL_PASSWORD: secret
volumes:
  mysql:
    driver: local
  appcode-native-osx-sync:
    external: true
PROBLEM (I THINK)
Setting up Docker-sync was supposed to make performance feel much more like a native/Linux setup.
I have noticed something that, from my point of view, makes the entire thing kinda useless.
So here we go.
WITHOUT DOCKER-SYNC
I make 1 request via Postman (Cache-Control: no-cache), which takes ~6.8s to finish. It is only a few lines of text; nothing else is going on. I am simply getting one dummy, short blog post out of the database and spitting out JSON.
If I make subsequent requests straight away, the time drops to ~1.4s per request. If I keep hitting that endpoint, it will stay at this level.
If I wait a few seconds between requests, then the first request after this pause will go back to ~6.8s.
WITH DOCKER-SYNC
I make 1 request via Postman (Cache-Control: no-cache), which takes ~5.1s to finish (so not much better). Exactly the same data as last time.
If I make subsequent requests straight away, the time drops to ~100ms (sic!) per request. If I keep hitting that endpoint, it will stay at this level.
If I wait a few seconds between requests, then the first request after this pause will go back to ~5.1s.
QUESTIONS
What do you think - is this request cached by Docker, Laravel, or Postman? I did notice a similar problem at work with Symfony 3.4, but I am not maintaining things like that at work. This is my personal project, and it is my first time so deep inside the Docker world.
Like I mentioned, I am using Docker-sync for speed. Usually, when I work, it looks like this: write code for a couple of minutes, hit the endpoint, repeat. At this point, I am back to ~5.1s and I have to wait - is there any way of solving this problem of the first request being slow like that? Maybe I have misunderstood the idea behind Docker-sync, but I was sure it was supposed to help me keep all the requests I make fairly quick.
I personally blame Laravel. Can anyone shed a bit of light on what might be the actual source of the problem here?
EPILOGUE
I did install Linux on my Mac just to try it out; however, there are a few things that make Linux much less attractive (I love Linux anyway!) when it comes to hours and hours of coding.
UPDATE 21.08.2019
I just did the same test on Ubuntu 18 with Docker... 80ms! (8.8 seconds) / (80 milliseconds) = 110 - this is horrifying!
UPDATE 03.09.2019
I did some tests yesterday - I tried different sync strategies, rsync and unison. It seems like they have no effect at all. Does anyone else have the same issue? Maybe we can work on it together?

Docker volume usage for storing jpg

So, I built a small application to test how Docker works; it is a small Laravel app that registers users with their profile image. Everything is working properly, but the profile image is not being displayed.
I assume this is because of how Docker works (ephemeral, unchanging, etc.), so I was reading a bit about volumes, but unfortunately I was not able to make it work.
The images are stored inside a folder called uploads within the public folder (Laravel structure).
In my docker-compose.yml file i have the following volumes defined:
volumes:
  - mysql-data:/var/lib/mysql
So I tried to add the one that I need, something like this:
volumes:
  - mysql-data:/var/lib/mysql
  - user-images:/app/public/uploads

volumes:
  mysql-data:
  user-images:
I also tried with bind mounts, but I think those can only be used with docker container run (not quite sure).
Any idea on how could i fix this?
Thanks
user-images:/app/public/uploads would be a named volume, stored under /var/lib/docker/volumes.
If you want to use a bind mount, that is, mounting a host folder as a volume, use a path:
volumes:
  - "./user-images:/app/public/uploads"
See also "Laravel + Docker" and its part 2 for a more complete example.
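Put together, the relevant parts of the compose file from the question might then look roughly like this (a sketch assuming the uploads folder sits next to docker-compose.yml; note that the top-level user-images named volume is no longer needed once a host path is used):

  # inside the service that runs the Laravel code
  volumes:
    - mysql-data:/var/lib/mysql
    - "./user-images:/app/public/uploads"  # bind mount: host path instead of a named volume

# top level: only the named volume for MySQL remains
volumes:
  mysql-data: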
I'm assuming your images are in the directory tree with the Dockerfile/docker-compose file. If that is the case, you actually don't want to use a named volume, given those are stored elsewhere on your system (the volume's Mountpoint, as shown by docker volume inspect) and would require you to move your images to that location.
What you likely want is the long mount syntax, which in your compose file would look like so...
volumes:
  - type: bind
    source: ./path/to/file
    target: /app/public/uploads
See the Docker volumes docs for more info. I wrote the long version, which I prefer as it's more explicit, but the short version does the same thing. You should be aware that a bind mount will shadow any files you might have added to the actual image if the paths overlap. Tip: this is handy when you have a hot-reloading dev server, as you can change your files locally and have them run in the container's context.

Setup single CLI to interface with multiple containers?

I'm not a Docker expert and have been struggling with this problem for a few hours now -
I have 2 independent images - one for a Python REPL and another for a Scheme REPL. I want to create an application that provides a single CLI interface wrapped around 2 containers running either image - so that when I enter python it connects to the Python REPL container and executes everything that follows there, whereas scheme connects to the Scheme REPL container.
I have 2 questions -
a) Is this possible at all using Docker Compose? Also, does this really qualify as a use case for Docker Compose?
b) Suppose I start with the following bare-bones docker-compose.yml -
version: '3.3'
services:
  python:
    image: "python:3.6.2-alpine3.6"
  racket:
    image: "m4burns/racket"
Do I set up the common CLI (say, a bash shell) in another container that communicates with the other two when I issue the python or scheme command? How do I define the entrypoints?
I know a simpler solution would simply be to make a Dockerfile that combines both the Python & Scheme setup into a single image. However, I really want to keep them separate and hence am going down this path.
Any help will be appreciated.
Using docker-compose does not provide you with a single CLI interface. Instead, two separate containers are created, but the advantage is that the containers can communicate with each other using the service names that you specify.
version: '3.3'
services:
  python:
    image: "python:3.6.2-alpine3.6"
  racket:
    image: "m4burns/racket"
From the example you gave, python can reach racket by using the hostname racket (for example http://racket). The containers can access each other using their service names; by default, the service name is treated as the hostname, and this hostname is used for communication. Most people use this default behavior; however, you can also specify a hostname separately.
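If the goal is simply to reach either REPL from one terminal, one possible sketch (assuming both images start their REPL as the default command) is to keep stdin and a TTY open for each service:

version: '3.3'
services:
  python:
    image: "python:3.6.2-alpine3.6"
    stdin_open: true   # keep STDIN open so the REPL stays interactive
    tty: true          # allocate a pseudo-TTY
  racket:
    image: "m4burns/racket"
    stdin_open: true
    tty: true

With that, docker-compose run --rm python attaches to the Python REPL and docker-compose run --rm racket to the Scheme one; a small shell alias or wrapper script on the host could map the bare python and scheme commands to those two invocations.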

Using redis with heroku

This is my first time using Redis, and the only reason I am using it is that I'm trying out an autocomplete search tutorial. The tutorial works perfectly in development, but I'm having trouble setting up Redis for Heroku.
I already followed these steps from the Heroku docs for setting up Redis, but when I run heroku run rake db:seed I get Redis::CannotConnectError: Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED).
I'm not very familiar with Heroku, so if you need any more information let me know.
Edit
I've completed the initializer steps shown here and when I run heroku config:get REDISCLOUD_URL the result is exactly the same as the Redis Cloud URL under the config vars section of my Heroku settings.
Following the documentation, I then set up config/initializers/redis.rb like so:
if ENV["REDISCLOUD_URL"]
  $redis = Redis.new(:url => ENV["REDISCLOUD_URL"])
end
Just to check, I tried substituting the actual URL for redis cloud inside the if block instead of just the REDISCLOUD_URL variable but that didn't work. My error message hasn't changed when I try to seed the heroku db.
It’s not enough to just create a $redis variable that points to the installed Redis server; you also need to tell Soulmate about it, otherwise it will default to localhost.
From the Soulmate README you should be able to do something like this in an initializer (instead of your current redis.rb initializer, which you won’t need unless you are using Redis somewhere else in your app):
if ENV["REDISCLOUD_URL"]
  Soulmate.redis = ENV["REDISCLOUD_URL"]
end
Looking at the Soulmate source, an easier way may be to set the REDIS_URL environment variable to the Redis url, either instead of or as well as REDISCLOUD_URL, as it looks like Soulmate checks this before falling back to localhost.
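A minimal initializer sketch combining both ideas might look like this (assuming Soulmate accepts a URL string here, as in the snippet above; the file name is just a suggestion):

# config/initializers/soulmate.rb
redis_url = ENV["REDIS_URL"] || ENV["REDISCLOUD_URL"]

# Point Soulmate at the hosted Redis instance; without this it falls back
# to localhost:6379, which is not available on a Heroku dyno.
Soulmate.redis = redis_url if redis_url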
Your code is trying to connect to a local Redis instance instead of the one from Redis Cloud - make sure you've completed the initializer step as detailed above in order to resolve this.

Keeping a Ruby Service running on Elastic Beanstalk

I have been looking for a while now at setting up worker nodes in a cloud-native application. I plan to have an autoscaling group of worker nodes pulling jobs from a queue, nothing special there.
I am just wondering: is there any best-practice way to ensure that a (e.g. Ruby) script is running at all times? My current assumption is that you have a script running that polls the queue for jobs and sleeps for a few seconds or so if a job query returns no new job.
What really caught my attention was the Services key in the Linux Custom Config section of AWS Elastic Beanstalk Documentation.
00_start_service.config
services:
  sysvinit:
    <name of service>:
      enabled: true
      ensureRunning: true
      files: "<file name>"
      sources: "<directory>"
      packages:
        <name of package manager>:
          <package name>: <version>
      commands:
        <name of command>:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/customize-containers-ec2.html
The example they give is this:
services:
  sysvinit:
    myservice:
      enabled: true
      ensureRunning: true
I find the example and documentation extremely vague, and I have no idea how to get my own service up and running using this config key, which means I do not even know if this is what I want or need to use. I have tried creating a Ruby executable file and putting its name in the field, but no luck.
I asked the AWS forums for more clarification and have received no response.
If anyone has any insight or direction on how this can be achieved, I would greatly appreciate it. Thank you!
I decided not to use the "services" section of the EB config files, instead just using the "commands" section.
I built a service monitor in Ruby that monitors a given system process (in this case my service).
The service itself is a script looping infinitely, with delays based on long polling times to the queue service.
A cron job runs the monitor every minute, and if the service is down it is restarted.
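For illustration, the worker loop itself can be as small as the following sketch (poll_for_job and process are hypothetical placeholders for the queue client and job handler, not part of the original setup):

# worker.rb - loops forever, pulling jobs from the queue with long polling
loop do
  job = poll_for_job(wait_time_seconds: 20)  # hypothetical long-polling call to the queue
  if job
    process(job)                             # hypothetical job handler
  else
    sleep 5                                  # brief pause when the queue returns nothing
  end
end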
The syntax for files in the documentation seems to be wrong. The following works for me (note square brackets instead of quotation marks):
services:
  sysvinit:
    my_service:
      enabled: true
      ensureRunning: true
      files: [/etc/init.d/my_service]
