Symfony 3.3 memcached performance issue - performance

Recently I've set up memcached for my project (I've just started working on it, so I don't have any complicated DB queries yet). I am using it with Doctrine2 and Symfony 3.3 on my local Vagrant machine (Ubuntu 16). I know it is working because I can see that it is writing entries into memory.
What I don't understand is why the performance is so poor. According to this:
Memcached Basics
I should get a significant performance increase.
Here is how my memcached is set up:
Version: 1.4.25
config.yml:
metadata_cache_driver:
    type: memcached
    host: 127.0.0.1
    port: 11211
    instance_class: Memcached
query_cache_driver:
    type: memcached
    host: 127.0.0.1
    port: 11211
    instance_class: Memcached
result_cache_driver:
    type: memcached
    host: 127.0.0.1
    port: 11211
    instance_class: Memcached
And then I do something like this:
$posts = $this->getEntityManager()->createQuery($dql)->useQueryCache(true)->useResultCache(true);
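For reference, a minimal sketch (not the exact code from the question) of how both caches are typically enabled on a Doctrine 2 query, with an explicit lifetime and cache id so repeated calls can reuse the memcached entry; the DQL string, the one-hour lifetime and the posts_all id are assumptions for illustration:

$query = $this->getEntityManager()
    ->createQuery('SELECT p FROM AppBundle\Entity\Post p') // assumed DQL
    ->useQueryCache(true)                                  // cache the DQL-to-SQL parsing step
    ->useResultCache(true, 3600, 'posts_all');             // cache the query results for an hour
$posts = $query->getResult();

With an explicit result cache id it is easy to check in memcached whether the entry is actually written once and then reused, rather than regenerated on every request.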
My questions/worries:
Why are there so many connections? (See the first screenshot, middle column, "Cluster stats".) I have already seen this, but my implementation is different.
Does that big number of connections make it slower?
Do I have to use a big data set or lots of queries to test it? Currently I am using a single query as a test, and I am getting over 100 posts from the database. However, from what I understand, the performance boost should be big enough to be visible even with that data set.
When I hit my endpoint without the cache (via Postman/Insomnia) I get ~300-400 ms, but with memcached it is always above 1 s. I was hoping for something like ~100 ms.
Can anyone tell me more about what might be the problem here? I will be happy with any hint I can get.
Regards,
Rob

Related

MacOs, Docker-sync, Laravel 5.8, Postman - performance

INTRODUCTION
I am using Docker on Mac. I decided to use Docker-sync because bind mounts are slow on Mac. I've managed to successfully set up the whole thing. What I saw afterwards makes me question whether it is even worth using Docker on Mac. I hope it is just the fault of my setup.
CONFIG
docker-sync.yml
version: "2"
options:
verbose: true
syncs:
appcode-native-osx-sync: # tip: add -sync and you keep consistent names as a convention
src: '../'
# sync_strategy: 'native_osx' # not needed, this is the default now
sync_excludes: ['vendor', 'node_modules']
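With this file in place, the sync is typically started with docker-sync start followed by docker-compose up (or with docker-sync-stack start, which does both), so that the appcode-native-osx-sync volume exists and is kept up to date.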
docker-compose.yml
version: '3.7'
services:
  webapp:
    build:
      context: ./php/
      dockerfile: Dockerfile
    container_name: webapp
    image: php:7.3-fpm-alpine
    volumes:
      - appcode-native-osx-sync:/srv/app:nocopy
  apache2:
    build:
      network: host
      context: ./apache2/
      dockerfile: Dockerfile
    container_name: apache2
    image: httpd:2.4.39-alpine
    ports:
      - 8080:80
    volumes:
      - appcode-native-osx-sync:/srv/app:nocopy
  mysql:
    container_name: mysql
    image: mysql:latest
    command: mysqld --default-authentication-plugin=mysql_native_password
    ports:
      - 13306:3306
    volumes:
      - mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_USER: root
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: dbname
      MYSQL_USER: blogger
      MYSQL_PASSWORD: secret
volumes:
  mysql:
    driver: local
  appcode-native-osx-sync:
    external: true
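For context, the appcode-native-osx-sync volume that webapp and apache2 mount is the named volume docker-sync creates and keeps in sync with the host, which is why it is declared external: true here and mounted with the :nocopy flag (so the containers do not copy their image contents back into it on first mount).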
PROBLEM (I THINK)
Setting up Docker-sync was supposed to make the whole thing feel much more like a native/Linux setup in terms of performance.
However, I have noticed something that, from my point of view, makes the entire setup rather useless.
So here we go.
WITHOUT DOCKER-SYNC
I make one request via Postman (Cache-Control: no-cache), which takes ~6.8 s to finish. The response is only a few lines of text; nothing else is going on. I am simply getting one short dummy blog post out of the database and returning it as JSON.
If I make subsequent requests straight away, the time drops to ~1.4 s per request. If I keep hitting that endpoint, it stays at that level.
If I wait a few seconds between requests, the first request after the pause goes back to ~6.8 s.
WITH DOCKER-SYNC
I make one request via Postman (Cache-Control: no-cache), which takes ~5.1 s to finish (so not much better), for exactly the same data as last time.
If I make subsequent requests straight away, the time drops to ~100 ms (sic!) per request. If I keep hitting that endpoint, it stays at that level.
If I wait a few seconds between requests, the first request after the pause goes back to ~5.1 s.
QUESTIONS
What do you think: is this request being cached by Docker, Laravel or Postman? I noticed a similar problem at work with Symfony 3.4, but I am not the one maintaining that setup. This is my personal project and my first time this deep inside the Docker world.
As I mentioned, I am using Docker-sync for speed. My workflow usually looks like this: write code for a couple of minutes, hit the endpoint, repeat. By that point I am back at ~5.1 s and have to wait. Is there any way to solve the problem of the first request being that slow? Maybe I have misunderstood the idea behind Docker-sync, but I was sure it was supposed to keep all of my requests fairly quick.
I personally blame Laravel. Can anyone shed some light on what the actual source of the problem might be here?
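One way to narrow this down (a sketch, not part of the original question, with a hypothetical class name): log how much of the request time is spent inside Laravel itself. If the logged time stays low while Postman still reports ~5 s, the delay is happening outside PHP; if it is high, the time really is spent in the framework/application. The middleware below would be registered in the global $middleware array in app/Http/Kernel.php.

<?php
// app/Http/Middleware/LogRequestDuration.php (hypothetical) - logs how long
// Laravel spends handling each request, measured from LARAVEL_START, which
// public/index.php defines before the framework boots.
namespace App\Http\Middleware;

use Closure;
use Illuminate\Support\Facades\Log;

class LogRequestDuration
{
    public function handle($request, Closure $next)
    {
        $response = $next($request);

        $elapsedMs = (microtime(true) - LARAVEL_START) * 1000;
        Log::info(sprintf('%s %s handled in %.1f ms', $request->method(), $request->path(), $elapsedMs));

        return $response;
    }
}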
EPILOGUE
I did install Linux on my Mac just to try it out; however, there are a few things that make Linux much less attractive to me (I love Linux anyway!) when it comes to hours and hours of coding.
UPDATE 21.08.2019
I just did the same test on Ubuntu 18 with Docker... 80 ms! That is (8.8 seconds) / (80 milliseconds) = a factor of 110 - this is horrifying!
UPDATE 03.09.2019
I did some tests yesterday: I tried different sync strategies, rsync and unison. Neither seems to have any effect at all. Does anyone else have the same issue? Maybe we can work on it together?

Huge performance hit on a simple Go server with Docker

I've tried several things to get to the root of this, but I'm clueless.
Here's the Go program. It's just one file and has a /api/sign endpoint that accepts POST requests. These POST requests have three fields in the body, and they are logged in a sqlite3 database. Pretty basic stuff.
I wrote a simple Dockerfile to containerize it. It uses golang:1.7.4 to build the binary and copies it over to alpine:3.6 for the final image. Once again, nothing fancy.
I use wrk to benchmark performance. With 8 threads and 1k connections for 50 seconds (wrk -t8 -c1000 -d50s -s post.lua http://server.com/api/sign) and a Lua script to create the POST requests, I measured the number of requests per second in different situations. In all cases, I run wrk from my laptop, and the server is a DigitalOcean VPS (2 vCPUs, 2 GB RAM, SSD, Debian 9.4) that's very close to me.
Directly running the binary produced 2979 requests/sec.
Docker (docker run -it -v $(pwd):/data -p 8080:8080 image) produced 179 requests/sec.
As you can see, the Docker version is over 16x slower than running the binary directly. Everything else is the same during both experiments.
I've tried the following things and there is practically no improvement in performance in the Docker version:
Tried using host networking instead of bridge. There was a slight increase to around 190 requests/sec, but it's still miserable.
Tried increasing the limit on the number of file descriptors in the container version with --ulimit nofile=262144:262144. No improvement.
Tried different go versions, nothing.
Tried debian:9.4 for the final image instead of alpine:3.7 in the hope that it's musl that's performing terribly. Nothing here either.
(Edit) Tried running the container without a mounted volume and there's still no performance improvement.
I'm out of ideas at this point. Any help would be much appreciated!
Using an in-memory sqlite3 database completely solved all performance issues!
db, err = sql.Open("sqlite3", "file:dco.sqlite3?mode=memory")
I knew there was a disk I/O penalty associated with Docker's abstractions (even on Linux; I've heard it's worse on macOS), but I didn't know it would be ~16x.
Edit: Using an in-memory database isn't really an option most of the time, so I found another SQLite-specific solution. Before all database operations, run the following to switch SQLite to WAL mode instead of the default rollback journal:
PRAGMA journal_mode=WAL;
PRAGMA synchronous=NORMAL;
This dramatically improved the Docker version's performance to over 2.7k requests/sec!
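For reference: in Go this simply means executing both PRAGMA statements (for example with db.Exec) once right after sql.Open, before the server starts handling requests.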

Correct way to link redis to sinatra docker containers

I am writing a small microservices-based app, and in it I have a Redis instance that some Ruby code/containers use via Resque. Currently, in my Docker Compose file, I am linking it like so:
redis:
  image: redis:latest
  ports:
    - '6379:6379'
ruby_worker:
  image: my_user/my_image:latest
  links:
    - redis:db
This works fine (I only named it :db for now because that is the example I found when looking up linking).
In my Ruby code, I have to set up my Resque Redis server like this:
Resque.redis = ENV['DB_PORT_6379_TCP_ADDR'] + ':6379'
But this just doesn't seem right. It depends on that exact Redis name, and if I had to spin up a different Redis instance (like I did while playing with Docker Cloud today), it doesn't find the Redis server. In total I have 3 containers (so far) connecting to this Redis for Resque: a small Sinatra-based front end and 2 workers. I am not much of a Rails person and had never used Resque until 3 days ago, so sorry if I have missed some of the basics.
Is there a better way to connect to my Redis instance in my Ruby code? Is there a way to pass the Redis name in my docker-compose file? Right now my resque-web container is configured as below, and it seems to work fine:
resque:
  image: ennexa/resque-web
  links:
    - redis:redisserver
  ports:
    - "5678:5678"
  command: "-r redis://redisserver:6379"
Don't use those link environment variables; they are deprecated and no longer exist in user-defined networks.
A good way to do this is to default to the hostname redis. You can always alias any service to that name, so you should almost never need to change it from the default. For example:
links:
  - someservice:redis
To override the default, you can create an application-specific env var like REDIS_HOSTNAME and set it from the environment section (but again, this is almost never necessary):
environment:
  - REDIS_HOSTNAME=foo
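On the Ruby side (a sketch, not part of the original answer), the Resque line from the question could then read that variable and fall back to the default hostname, e.g. Resque.redis = "#{ENV.fetch('REDIS_HOSTNAME', 'redis')}:6379", instead of depending on the link-generated DB_PORT_6379_TCP_ADDR variable.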

Directadmin server performance lower than ispconfig?

I am running a web application on two servers, but I am getting a strange performance problem.
server 1:
Core i5-4770 3.40 GHz with 8 GB DDR3, running an ISPConfig server with PHP/MySQL.
server 2:
Core i7-5930K 3.50 GHz with 64 GB DDR4, running DirectAdmin with PHP/MySQL.
The new server (2) is more powerful, but it gets slower page results than the old server.
Any suggestions on how to find the problem?
I don't think you are getting these issues because of the DirectAdmin server itself. You will have to optimize your Apache and MySQL for better performance. I would also suggest enabling some PHP caching modules on your server.
What version of PHP do you use, and how is it connected to Apache?
PHP-FPM
mod_php
etc.
You would probably want to use PHP-FPM, for both security and performance reasons.
One more suggestion: the default my.cnf config file has no additional configuration on DirectAdmin, so MySQL performance could be really bad because of that. Please post your my.cnf file here and share how much RAM you have; with that I could try to tune your config.

Postgres: After importing production database (with replication) to my local machine, I notice network packets being sent and received from macbook

I've been a MySQL guy, and now I'm working with Postgres, so I am learning. I'm wondering if someone can tell me why the postgres process on my MacBook is sending and receiving data over my network. I am just noticing this for the first time, so maybe it has been going on before and I simply never noticed that Postgres does this.
What has me a bit nervous is that I pulled down a production data dump from our server, which is set up with replication, and imported it into my local Postgres DB. The settings in my postgresql.conf don't indicate that replication is turned on, so it shouldn't be streaming out to anything, right?
If someone has some insight into what may be happening, or why postgres is sending/receiving packets, I'd love to hear the easy answer (and the complex one if there's more to what's happening).
This is a Postgres install via Homebrew on macOS.
Thanks in advance!
Some final thoughts: it's entirely possible, I guess, that the Mac's Activity Monitor also counts local 'network' traffic in its stats. Maybe this isn't going out to the internet at all.
In short, I would not expect replication to be enabled for a DB just because it was dumped from a server that had replication, if the server it was restored to has no replication configured at all.
More detail:
Normally, to get a local copy of a database in Postgres, one would do a pg_dump of the remote database (this can be done from your laptop, pointing at your server), followed by a createdb on your laptop to create the database stub, and then a pg_restore pointed at the dump to populate its contents. [Edit: re-reading your post, it seems you may have done exactly this, but meant that the dump came from a server with replication enabled.]
That would be entirely local (assuming no connections into the DB from off-box), as long as you didn't explicitly set up any replication or anything else that would go off-box. Can you elaborate on what exactly you mean by importing with replication?
Also, if you're concerned about remote traffic coming from Postgres, try running this command a few times over the period of a minute or two (when you are seeing the traffic):
netstat | grep postgres
In general, replication in Postgres is configured at the server level, and has to do with things such as the master server shipping WAL files to the standby server (for streaming replication). You would almost certainly have had to set up entries in postgresql.conf and pg_hba.conf to ensure that the standby server had access (such as a replication entry in the latter file). Assuming you didn't do steps like this, I think it can pretty safely be concluded that there's no replication going on (especially in conjunction with double-checking via netstat).
You might also double-check the Postgres log to see if it's doing anything replication-related. In a default install, that would probably be in /var/log/postgresql (although I'm not 100% sure whether Homebrew installs put it somewhere else).
If it's UDP traffic to and from a high port, it's likely to be PostgreSQL's internal statistics collector.
Those sockets are pre-bound to prevent interference and should not be accessible from outside PostgreSQL.
