I can't store sessions in the Memcached server!
I installed the Memcached PHP extension and the server itself.
I run the server with this command:
memcached -u root -d -m 64 -l 127.0.0.1 -p 11211
I have this in php.ini for both FPM and CLI:
extension=memcached.so
session.save_handler = memcached
session.save_path = unix:/tmp/memcached.sock
I followed this guide for Symfony2:
https://gist.github.com/K-Phoen/4327229
You think everything is good? You are wrong, because I don't know why the sessions are not stored in Memcached!
PS: I don't start the Memcached server with service memcached start, because that would start the server on a different port with nobody as the user.
Help me debug this, please.
You appear to be telling PHP to connect to the daemon via a UNIX socket, but the command you start Memcached with doesn't include the -s <file> parameter that tells it to create the socket you want to use.
See: MemcacheD sockets.
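For example, a minimal sketch (the socket path just has to match what session.save_path points at):
# start memcached on a UNIX socket; -s replaces the -l/-p TCP options
memcached -u root -d -m 64 -s /tmp/memcached.sock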
I have used embedded Redis for caching in my Spring Boot application. Redis runs on localhost and the default port 6379 on application startup.
Is there a way to get metrics (memory used, keyspace_hits, keyspace_misses, etc.) for embedded Redis from outside the application, maybe via the command line or some API?
PS: I have used Redisson as the client to perform cache operations with Redis.
Thanks.
Redis provides a command-line interface, redis-cli, to interact with it and retrieve metrics. redis-cli can be used with embedded Redis as well.
Install the command-line interface:
npm install -g redis-cli
Connect to Redis running locally (usage: rdcli -h host -p port -a password):
rdcli -h localhost
Then use any Redis commands:
localhost:6379> info memory
# Memory
used_memory:4384744
used_memory_human:4.18M
used_memory_rss:4351856
used_memory_peak:4385608
used_memory_peak_human:4.18M
used_memory_lua:35840
mem_fragmentation_ratio:0.99
mem_allocator:dlmalloc-2.8
Ref: "Installing and running Node.js redis-cli" section of this post https://redislabs.com/blog/get-redis-cli-without-installing-redis-server
I'm having trouble connecting to a replica set.
[MongoDB\Driver\Exception\ConnectionTimeoutException]
No suitable servers found (`serverSelectionTryOnce` set):
[Server closed connection. calling ismaster on 'a.mongodb.net:27017']
[Server closed connection. calling ismaster on 'b.mongodb.net:27017']
[Server closed connection. calling ismaster on 'c.mongodb.net:27017']
I can, however, connect using MongoChef.
Switching any localhost references to 127.0.0.1 helped me. There is a difference between localhost and 127.0.0.1.
See: localhost vs. 127.0.0.1
MongoDB can be set to listen on a UNIX socket or over TCP/IP.
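A quick way to see the difference (a sketch using the legacy mongo shell; the database name is an example): depending on /etc/hosts, localhost may resolve to the IPv6 address ::1, while 127.0.0.1 forces the IPv4 loopback the server is actually bound to.
# may fail if mongod is only bound to the IPv4 loopback
mongo "mongodb://localhost:27017/test" --eval "db.runCommand({ ping: 1 })"
# forces IPv4
mongo "mongodb://127.0.0.1:27017/test" --eval "db.runCommand({ ping: 1 })"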
If all else fails, what I've found works most consistently across all situations is the following:
In your hosts file, make sure you have a name assigned to the IP address you want to use (other than 127.0.0.1).
192.168.0.101 coolname
or
192.168.0.101 coolname.somedomain.com
In mongodb.conf:
bind_ip = 192.168.0.101
Restart Mongo
NOTE 1: When accessing Mongo from the command line, you now have to specify the host:
mongo --host=coolname
NOTE 2: You'll also have to change any references to localhost or 127.0.0.1 to your new name:
$client = new MongoDB\Client("mongodb://coolname:27017");
I had the same error in a Docker-based setup:
container1: nginx listening on port 80
container2: php-fpm listening on port 9000
container3: mongodb listening on port 27017
nginx forwards PHP requests to php-fpm.
Trying to access MongoDB from PHP gave this error.
In the mongodb Dockerfile, the culprit was:
CMD ["mongod", "--bind_ip", "127.0.0.1"]
Needed to change it to:
CMD ["mongod", "--bind_ip", "0.0.0.0"]
And the error went away. Hope this helps somebody.
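Binding to 0.0.0.0 inside the container is what lets the other containers reach mongod over the Docker network; only the ports you publish are exposed to the outside. To verify the fix from the host, a sketch (container3 is the example name from the list above):
docker exec container3 mongo --eval "db.runCommand({ ping: 1 })"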
The IP address of your home network may have changed, which would lead to MongoDB Atlas locking you out.
I solved this problem by going to MongoDB Atlas and changing which IP address is allowed to connect to my data. Originally, I'd set it up to only allow connections from my home network. But my home network's IP address changed, and I started getting the same error message as you.
To check whether this is your issue too, go to MongoDB Atlas, open your project, and click "Network Access" on the left-hand side of the screen. That's where you can update your IP address, and it shows which IP address(es) are currently allowed in. To find out your current IP address, go to whatismyipaddress.com, and update MongoDB Atlas if it's different.
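If you prefer the terminal, a service such as ifconfig.me reports the same thing (assuming curl is installed):
# print the public IP address Atlas sees you connecting from
curl -s ifconfig.me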
In my case, I am temporarily coding PHP on Windows 7 against MongoDB on my VPS running Debian 9. The PHP will eventually run on the same Linux box to provide an API to the MongoDB data.
BTW, this local Composer install does not appear to be doing me any good; it's pure ugliness. After the fix below, my PHP works without the line require_once 'C:\Users\<Windows User Name>\vendor\autoload.php'.
My fix is different from the accepted answer, which to me did not make sense:
I did not have to touch any hosts file.
So edit your /etc/mongod.conf so that bindIp includes your target machine's IP, then restart with sudo systemctl restart mongod. That's it.
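For reference, a sketch of the relevant section (this is the YAML config format used by MongoDB 3.x and later; the address is an example):
# /etc/mongod.conf
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.0.101   # keep loopback, add the externally reachable address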
I don't know what to blame:
the PHP and MongoDB sites, for the terrible documentation and skimpy, incomplete PHP examples, or...
the MongoDB installation docs for Linux, for failing to mention this bindIp setting.
My startup experience with MongoDB is so far very negative: given all the changes that have occurred, nothing matches what I expected from the videos I watched. I can't seem to find any that reflect what I am going through, like:
$DB_CONNECTION_STRING="mongodb://user:password@164.152.09.84:27017"
$m = new MongoDB\Driver\Manager( $DB_CONNECTION_STRING )
instead of
$m = new MongoClient()
Hope this helps someone
PS. Always say NO to semicolons, camelCAsE and anything case-sensitive... absurdity at its best.
The previous night I was tinkering with Elixir, running code on both of my machines at home, but when I woke up I asked myself: can I actually do the same using the heroku run command?
I think it should theoretically be entirely possible if set up properly. Obviously heroku run iex --sname name executes and gives me access to a shell (without a functioning backspace, which is irritating), but I haven't accessed my app from it yet.
Each time I executed the command it gave me a different machine. I guess that's how Heroku achieves sandboxing. I was also trying to find a way to determine the address of my app's machine, but haven't had any luck yet.
Can I actually connect to the dyno running the code and evaluate expressions on it, like you would with iex -S mix phoenix.server locally?
Unfortunately, it's not possible.
To interconnect Erlang VM nodes you'd need the EPMD port (4369) to be open, and Heroku doesn't allow opening custom ports, so it can't be done there.
In case you want to establish a connection between your Phoenix server and an Elixir node in general, you'd have to do the following.
Two nodes on the same machine:
Start Phoenix using iex --name phoenix@127.0.0.1 -S mix phoenix.server
Start iex --name other_node@127.0.0.1
Establish a connection using Node.ping from other_node:
iex(other_node@127.0.0.1)1> Node.ping(:'phoenix@127.0.0.1')
(should return :pong not :pang)
Two nodes on different machines
Start Phoenix using some external address:
iex --name phoenix@195.20.2.2 --cookie someword -S mix phoenix.server
Start the second node:
iex --name other_node@195.20.2.10 --cookie someword
Establish a connection using Node.ping from other_node:
iex(other_node@195.20.2.10)1> Node.ping(:'phoenix@195.20.2.2')
(should return :pong not :pang)
Both nodes should contact each other at the addresses they usually see each other on in the network (the full external IP when on different networks, 192.168.x.x when in the same local network, 127.0.0.1 when on the same machine).
If they're on different machines they must also be set to the same cookie value, because by default each node takes the automatically generated cookie in your home directory. You can check it by running:
cat ~/.erlang.cookie
Lastly, you've got to make sure that the EPMD port 4369 is open, because the Erlang VM uses it for internode data exchange.
As a side note: if you leave it open, keep your cookie as private as possible, because anyone who knows it has absolute power over your machine.
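As a quick sanity check on either machine, you can ask the local EPMD daemon what it has registered (epmd ships with Erlang):
# lists registered node names and their distribution ports
epmd -names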
When you execute heroku run it starts a new one-off dyno: a temporary instance that is deprovisioned when you finish the heroku run session. This dyno is not a web dyno and cannot receive inbound HTTP requests through Heroku's routing layer.
From the docs:
One-off dynos can never receive HTTP traffic, since the routers only route traffic to dynos named web.N.
https://devcenter.heroku.com/articles/one-off-dynos#formation-dynos-vs-one-off-dynos
If you want your Phoenix application to receive HTTP requests, you will have to set it up to run on a web dyno.
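A minimal Procfile sketch for that (the MIX_ENV setting is an assumption; web is the process type Heroku's router targets):
web: MIX_ENV=prod mix phoenix.server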
It has been a while since you asked the question, but someone might still find this answer valuable.
As of 2021, Heroku allows forwarding multiple ports, which makes it possible to remsh into a running Erlang VM node. It depends on how you deploy your application, but in general you will need to:
Give your node a name and a cookie (i.e. --name "myapp@127.0.0.1" --cookie "secret")
Tell it exactly which port the node should bind to, so you know which port to forward (i.e. --erl "-kernel inet_dist_listen_min 9000 -kernel inet_dist_listen_max 9000")
Forward the EPMD and node ports by running heroku ps:forward 9001:4369,9000
Remsh into your node: ERL_EPMD_PORT=9001 iex --cookie "secret" --name console@127.0.0.1 --remsh "myapp@127.0.0.1"
Eventually you should start your server with something like this (if you are still using the Mix tool): MIX_ENV=prod elixir --name "myapp@127.0.0.1" --cookie "secret" --erl "-kernel inet_dist_listen_min 9000 -kernel inet_dist_listen_max 9000" -S mix phx.server --no-halt
If you are using Releases, most of the setup has already been done for you by the Elixir team.
To verify that the EPMD port has been forwarded correctly, try running epmd -port 9001 -names. The output should be:
epmd: up and running on port 4369 with data:
name myapp@127.0.0.1 at port 9000
You may follow my notes on how I do it for Dockerized releases (there is a bit more hassle): https://paveltyk.medium.com/elixir-remote-shell-to-a-dockerized-release-on-heroku-cc6b1196c6ad
I have to set up a dev/test platform on Amazon Web Services. I was told to "install it", but I have no clue how to do that. I'm used to 1&1, OVH and other hosting companies, where I upload my data through FileZilla, but here it seems to be completely different. Am I wrong?
I read that I would need to install CentOS to communicate with the server, right? Is there no other way to do so? FileZilla?
And by the way, how do I set up Magento on AWS? I found some documentation about it:
http://loadstorm.com/2009/magento-setup-amazon-associates-web-service
http://www.zetaprints.com/magentohelp/category/overview/
http://www.greengecko.co.nz/magento_on_amazon_ec2
But each time, it seems that I missed something in the first lines, the VERY FIRST step.
Could someone enlighten me, please? I think I missed something at the starting point of this process and I clearly don't understand the way it works.
I downloaded both the Elasticfox Firefox extension and S3 Organizer, but they are not very helpful for understanding this. In each of the docs I have read, the author starts from a point I can't reach.
PS: I've started developing the website with Magento, so it is about transferring this version of Magento instead of installing a new one... unless that's much, much more complicated.
Any help or full documentation would be appreciated :)
Thanks for your help!
I did something very similar (using CentOS 5.5 on Rackspace); follow the steps below. All the lines that start with "--" should be treated as remarks. Before you start "transferring" Magento you should install PHP, httpd and MySQL:
-- MySql
yum install mysql-server
-- httpd
yum install httpd
-- open port 80 in iptables
vi /etc/sysconfig/iptables
-- add a line:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-- configure httpd.conf (enable the use of .htaccess)
vi /etc/httpd/conf/httpd.conf
change the line under <Directory "/var/www/html"> from "AllowOverride None" to "AllowOverride All"
-- install php 5
rpm -ivh http://repo.webtatic.com/yum/centos/5/`uname -i`/webtatic-release-5-1.noarch.rpm
yum --enablerepo=webtatic install php
yum --enablerepo=webtatic install php-mysql
-- go to /var/www/html
cd /var/www/html
-- and copy all of the Magento content there
-- then clean the cache, if any:
rm -rf /var/www/html/<your app>/var/cache/*
-- you have to create a schema:
mysql
mysql> create database [your schema name];
mysql> grant all privileges on [your schema name].* to [your username]@localhost identified by '[your password]';
-- create sql dump on your computer:
mysqldump [your schema name] > [your schema name].sql
-- and import it on centos
mysql [your schema name] < [your schema name].sql;
-- Make sure that the username/password are configured properly:
vi <your app>/app/etc/local.xml
-- Log in to the DB as [your user]:
mysql -u [your user] -p
-- Locate the entry that is configured to localhost (since you developed it on your computer) and change it to the installation-server’s IP (say 1.1.1.1):
select path, value from [your schema name].core_config_data where path like '%base_url%';
update [your schema name].core_config_data set value = 'http://<your domain>/<your app>/' where path like '%base_url%';
-- now restart all the services
service iptables restart
service mysqld restart
service httpd restart
-- Troubleshooting
In order to print errors to the screen, follow these steps:
cd /var/www/html/<your app>/errors
cp local.xml.sample local.xml
You may want to read this first; it should answer your transfer-to-S3 question:
https://stackoverflow.com/questions/1855109/amazon-s3-ftp-interface
You can do it with a couple of clicks using a Bitnami AMI or their cloud hosting tool.
I followed the instructions for setting up postgresql from this site
All seems to go fine until I try:
createuser --superuser myname -U postgres
I get the following exception:
createuser: could not connect to database postgres: could not connect to server: No such file or directory
Is the server running locally and accepting connections on Unix domain socket "/tmp/.s.PGSQL.5432"?
For the life of me I can't figure out how to resolve this. Any ideas?
I had to remove the existing postgres user before doing the install.
Perhaps you moved your postgres data directory after you installed PostgreSQL using MacPorts.
Find where your launchctl startup script is located:
ps -ef | grep postgres
This outputs:
0 54 1 0 0:00.01 ?? 0:00.01 /opt/local/bin/daemondo --label=postgresql84-server --start-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql84-server/postgresql84-server.wrapper start ; --stop-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql84-server/postgresql84-server.wrapper stop ; --restart-cmd /opt/local/etc/LaunchDaemons/org.macports.postgresql84-server/postgresql84-server.wrapper restart ; --pid=none
So I edit
sudo vim /opt/local/etc/LaunchDaemons/org.macports.postgresql84-server/postgresql84-server.wrapper
And find the line
Start() {
su postgres -c "${PGCTL} -D ${POSTGRESQL84DATA:=/opt/local/var/db/postgresql84/wrong_place} start -l /opt/local/var/log/postgresql84/postgres.log"
}
Ahh.. my data directory is in the wrong place. I fix it by changing
/opt/local/var/db/postgresql84/wrong_place
to
/opt/local/var/db/postgresql84/right_place
for both the start and stop command.
Did you install the postgresql84-server port? If so, did you start the server:
$ sudo port load postgresql84-server
If you did both of those, I've noticed that sometimes the MacPorts daemon handler (daemondo) doesn't start handling requests for PostgreSQL until you restart your machine. (This only happens the first time it is started; subsequent attempts should work fine.)
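Either way, a quick way to confirm the server is actually up is to look for the socket file the client complains about (the path comes straight from the error message):
# present and owned by the postgres user when the server is running
ls -l /tmp/.s.PGSQL.5432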