Magento setup:upgrade showing "Could not validate a connection to Elasticsearch"

I am cloning a Magento repo. After I ran composer update and then bin/magento setup:upgrade, it gives me the following error:
-- Could not validate a connection to Elasticsearch. No alive nodes found in your cluster --
Elasticsearch is up and running. If I install a fresh Magento project (2.4.3), the setup:upgrade command works fine.
I also checked the status of the Elasticsearch service; the screenshot showed it running.
I have already checked a previous thread about failing to connect to Elasticsearch, tried every answer there, and I believe that thread was a different problem.

Are you using a database dump from another environment?
Check your database entries for the Elasticsearch host:
SELECT * FROM magento.core_config_data WHERE path LIKE '%elastic%';
You could well have the hostname set to something other than your local setup. Check keys such as:
search/engine/elastic_host
catalog/search/elasticsearch6_server_hostname
etc.
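If the dump does point somewhere else, the check and fix can be scripted; a rough sketch, assuming the database is named magento, the MySQL credentials are placeholders, and the exact config path varies by Magento/Elasticsearch version:

  # Inspect the Elasticsearch-related config rows
  mysql -u magento_user -p magento -e \
    "SELECT config_id, path, value FROM core_config_data WHERE path LIKE '%elastic%';"

  # Point the hostname back at your local Elasticsearch
  # (the LIKE pattern is an assumption; match the key your version actually uses)
  mysql -u magento_user -p magento -e \
    "UPDATE core_config_data SET value = 'localhost' WHERE path LIKE '%server_hostname';"

  # Confirm Elasticsearch itself answers locally
  curl -s http://localhost:9200

After updating core_config_data, flush the config cache (bin/magento cache:flush) so Magento picks up the new value.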

It seems to be an Elasticsearch connection problem.
Verify core_config_data as per Andrew's response.
If you are using Docker, it may be a permissions problem:
Setting permission 777 on your project's Docker folders can help (in local environments only, of course), especially the ones holding Elasticsearch files (volumes and other configuration).
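A minimal sketch of that local-only workaround, assuming the Elasticsearch volume is mounted from ./docker/volumes/elasticsearch (the path is an assumption; use whatever your docker-compose.yml actually mounts):

  # Local environments only: 777 is never safe on shared or production machines
  chmod -R 777 ./docker/volumes/elasticsearch

  # Recreate the containers so the volume is remounted with the new permissions
  docker-compose down && docker-compose up -d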


Setting up Elastic Enterprise Search locally

In my app, I'd like to use the "Elastic App Search" functionality, especially facets. I expect it to work like this: https://github.com/elastic/search-ui
At this point, I have installed Elasticsearch & Kibana (using brew) and populated them with data. I am able to run everything locally and make queries.
To install App Search (which is included in Elastic Enterprise Search), I am following these instructions: https://www.elastic.co/downloads/enterprise-search.
I have done everything up to point 3.
In point 4:
I can't locate the elastic user password in the logs. I haven't set any security/passwords so far, so I guess there's no password at this moment.
I haven't seen or used any Kibana token so far. I tried to generate it, as shown here, but it does not work for me. It seems like the default path for Elasticsearch should be /usr/local/etc/elasticsearch, but I don't even have an etc directory in my /usr/local. Instead, Elasticsearch is inside the Homebrew directory.
I can't find the http_ca.crt file anywhere in my Homebrew installation. Should I enable security in Elasticsearch first to generate this file?
Unlike Elasticsearch and Kibana, the Elastic Enterprise Search file I downloaded in step 1 is not an application, but a regular directory. Where should I put it?
Does my approach even make sense? Is it possible to run this service locally just like I'm running ES/Kibana? Most of the examples on the Internet only show how to run this service in Docker.
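For the password and token points, recent Elasticsearch releases (8.x) ship CLI tools for exactly those steps; a sketch, assuming a Homebrew install and that the formula is named elasticsearch-full (both assumptions, and the tools only apply once security is enabled, which is also what generates http_ca.crt):

  # Resolve the install location under Homebrew (formula name assumed)
  ES_HOME="$(brew --prefix elasticsearch-full)"

  # Reset (and print) the password of the built-in elastic user
  "$ES_HOME/bin/elasticsearch-reset-password" -u elastic

  # Generate the enrollment token that Kibana asks for
  "$ES_HOME/bin/elasticsearch-create-enrollment-token" -s kibana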

Running Laravel's migration command on AWS Elastic Beanstalk

I'm having a hard time deploying a Laravel app for test purposes on AWS Elastic Beanstalk.
I followed all the sources I could find on the web, including the AWS documentation.
Creating an Elastic Beanstalk environment and uploading an application is straightforward as long as I do not include .ebextensions and the .yaml file in it.
Based on Maximilian's tutorial, I created an init.config file inside .ebextensions with the contents:
container_commands:
  01initdb:
    command: "php artisan migrate"
The environment reaches a degraded state as the update finishes, and I get the following logs:
[2018-11-20T23:14:08.485Z] INFO [7969] : Command processor returning results:
{"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"(TRUNCATED)...y exists\")\n/var/app/ondeck/vendor/laravel/framework/src/Illuminate/Database/Connection.php:458\n\n2 PDOStatement::execute()\n/var/app/ondeck/vendor/laravel/framework/src/Illuminate/Database/Connection.php:458\n\nPlease use the argument -v to see more details. \ncontainer_command 01initdb in .ebextensions/init.config failed. For more detail, check /var/log/eb-activity.log using console or EB CLI","returncode":1,"events":[]}],"truncated":"true"}
I have been trying different .config files from other tutorials, but none of them seems to work.
I'm running:
Laravel Framework 5.7.5
EB Platform uses PHP 7.2 running on 64bit Amazon Linux/2.8.4
RDS uses MySQL 5.6.40
I really do not know what is going on and would appreciate any suggestions.
I finally found my way out. Providing some documentation for anyone that hits the same issue.
What I was trying to do...
My main objective was to test a Laravel 5.7 application on a live AWS Elastic Beanstalk (EB) server. I also needed a way to visualize data, and phpMyAdmin fits that need. This is a very simple CRUD app, just for learning the basics of both technologies.
What I did (worked)
Followed the normal workflow of creating an EB application mainly using the web console.
Name the application
Choose PHP as the platform
Start off with a base application (do not upload code yet)
Hit "Configure more options"
In the security card, select your key pair and save. (This is valuable for SSH'ing into your server.)
In the database card, create an RDS instance. Select whatever options fit your needs and set a username/password.
Create environment.
After a while, you should have all the resources created by EB (EC2 and RDS instances, security group, EIP, buckets, etc.) in the app environment.
Preparing your Laravel application is a straightforward process. You must not forget to change config/database.php to read the server variables. My approach was to define them at the start of the file.
The main source of trouble resides in configuring your server instance to include all the software and configuration your app needs. This is done by including a .yaml file inside the .ebextensions folder, which should reside in the root directory of your Laravel application. It's also a good idea to check your syntax before submitting another app version to EB. For my needs, I used this script, which basically installs phpMyAdmin as I deploy a new version. Specifically for this startup script, the environment variables $PMA_VER, $PMA_USERNAME, and $PMA_PASSWORD must be defined for phpMyAdmin to work. You can create more environment variables in the software tab of your EB configuration page (a CLI sketch follows). Read the docs.
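If you prefer the EB CLI to the web console for defining those variables, eb setenv does the same job; a sketch with placeholder values (pick the phpMyAdmin version your script expects):

  # Define the variables the phpMyAdmin startup script reads (values are placeholders)
  eb setenv PMA_VER=4.8.3 PMA_USERNAME=pma_admin PMA_PASSWORD='choose-a-strong-password'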
Another detail that might cause issues when running commands at startup via the YAML script (specifically migration) is the combination of Laravel and MySQL versions. For example, I am using Laravel 5.7, and the default MySQL version in the EB RDS creation wizard is something like 5.6.x. This will throw issues of the type:
Illuminate\Database\QueryException : SQLSTATE[42000]: Syntax error or access violation: 1071 Specified key was too long; max key length is 767 bytes (SQL: alter table `users` add unique `users_email_unique`(`email`))
If this is your scenario, you have probably already googled and found that adding the line Schema::defaultStringLength(191); to the boot function of your app/Providers/AppServiceProvider.php file will do the trick.
You can do a typical migration by passing the script:
container_commands:
  01_drop_tables:
    command: "php artisan migrate:fresh"
  02_initdb:
    command: "php artisan migrate"
This will drop the existing tables, avoiding conflicts, and create new ones based on your code. You can read more logs from your server by SSH'ing in and reading /var/log/eb-activity.log.
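With the EB CLI set up, that looks roughly like this (the key pair from the security card is what makes eb ssh work):

  # Open a shell on the instance
  eb ssh

  # ...then, on the instance:
  tail -n 100 /var/log/eb-activity.log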

Kibana not saving dev tools history across sessions

I have the Kibana plugin installed on each ES node. Kibana is behind an nginx reverse proxy because it's served from the /kibana/ route. Elasticsearch is protected with the SearchGuard plugin.
Question: the history for Dev Tools/Console is reset with each login (after each login, the history is empty). I'm not sure if I'm missing something or whether that's expected behaviour when SearchGuard is in use. I remember this worked well before installing SearchGuard; I'm not sure if that's a coincidence or indeed related. It saves properly within a single session.
Elastic version: 6.1.3
Thank you!
It's stored in the browser's local storage under sense:editor_state (in Chrome).
If that storage is wiped daily or the cache is cleared, so will your searches be.
Use ?load_from= in your URL and save your queries in a JSON file... be aware of CORS if you use a web app of your own.
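To illustrate the load_from approach (a sketch; the URL shape is from Kibana 6.x, and the host, port, and file name are assumptions):

  # Serve the saved queries over HTTP (any static server; mind the CORS headers)
  python -m http.server 8000   # run in the directory containing queries.json

  # Then open the console with:
  #   http://localhost:5601/app/kibana#/dev_tools/console?load_from=http://localhost:8000/queries.json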

Where is Deploybot pushing my repo to on AWS EC2?

This is my setup:
Bitbucket repo of HTML docs.
Elastic Beanstalk environment
EC2 c3 instance (8GB Elastic Block Store attached)
So I connect Deploybot to Elastic Beanstalk successfully and deploy. The path is the default.
Success, so it seems.
output Creating application archive for Elastic Beanstalk.
output Application archive created.
output Uploading application archive to S3 bucket deploybot-elastic-beanstalk-mysite-997c5d66.
output Application archive uploaded.
output Creating application version mysite-fbba70de-5e4736.
output Application version created.
output Updating environment e-txarhpt4wp with new version.
output Environment was updated and will now be refreshing.
But no... where are the files?
I drop in with FileZilla (SFTP) and cannot find them anywhere on the server.
Moreover, my path is actually:
var/www/vhosts/my.site.yay/html/
If I change the path in the Deploybot environment settings, the repo never successfully deploys; instead, all I get is 'bypassed' with every single git push, which indicates to me that Deploybot is not actually connecting to anything and thus constantly sees 'no changes'.
Anyone got any clues?
I have spent several hours searching prior to this post, and there is almost nothing written about using Deploybot with AWS besides the official Deploybot documents.
Thanks in advance to those with potential answers.
Doh!
I had set up my EC2 instance to use Nginx instead of Apache, deleting Apache (httpd).
In the process of writing this question, I looked more closely at the Deploybot log and traced that Deploybot pushes a .zip to an S3 bucket which it creates, and then triggers an Elastic Beanstalk environment refresh, using whatever built-in webhook there must be.
So whatever webhook it uses looks for Apache (httpd), and the whole thing fails, as revealed in the Beanstalk environment event logs:
ERROR
[Instance: i-0dc350d4f82711400] Command failed on instance. Return code: 1 Output: (TRUNCATED)...tory # rb_sysopen - /etc/httpd/conf/httpd.conf (Errno::ENOENT) from /opt/elasticbeanstalk/support/php_apache_env:81:in update_apache_env_vars' from /opt/elasticbeanstalk/support/php_apache_env:125:in' Using configuration value for DocumentRoot:. Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/05_configure_php.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
So I switched my Deploybot to SFTP with a public key, and it works. (I'm not clever enough to edit/write my own Beanstalk environment refresh webhook yet.)

How to change path for data storage for elasticsearch

I have gone into my elasticsearch.yml and changed path.data to the path where I want to store the data. Now when I start the Elasticsearch service, localhost:9200 no longer responds. If I keep the path.data line commented out, localhost:9200 works fine. I am on a CentOS 6 machine, and I installed Elasticsearch through yum. Thanks in advance.
I figured out the solution. I had created the folder as the root user, so Elasticsearch did not have permission to write to the folder where the new data would be stored. If you hit an issue like this, make sure you have changed the permissions on the newly created folder.
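A minimal sketch of that fix, assuming the new location is /data/elasticsearch (the path is an assumption) and the service runs as the elasticsearch user, the default for yum installs:

  # Create the new data directory and hand it to the elasticsearch user
  sudo mkdir -p /data/elasticsearch
  sudo chown -R elasticsearch:elasticsearch /data/elasticsearch

  # elasticsearch.yml should point at it:
  #   path.data: /data/elasticsearch

  # Restart the service and verify it answers again
  sudo service elasticsearch restart
  curl http://localhost:9200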
