I have gone into my elasticsearch.yml and changed "path.data:" to the path where I want to store the data. Now when I start the Elasticsearch service, localhost:9200 no longer responds. If I leave the "path.data:" line commented out, localhost:9200 works fine. I am on a CentOS 6 machine and I installed Elasticsearch through yum. Thanks in advance.
I figured out the solution. I had created the folder as the root user, so Elasticsearch did not have permission to make changes to the folder where the new data would be stored. If you have any issues like this, make sure you have changed the permissions on the newly created folder.
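In concrete terms, the fix looks something like this (the data path here is an example; on a yum install the service runs as the elasticsearch user):

# Create the new data directory and hand ownership to the service user
sudo mkdir -p /data/elasticsearch
sudo chown -R elasticsearch:elasticsearch /data/elasticsearch
# CentOS 6 uses SysV init scripts rather than systemd
sudo service elasticsearch restart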
In my app, I'd like to use the "Elastic App Search" functionality, especially facets. I expect it to work like this: https://github.com/elastic/search-ui
At this point, I have installed Elasticsearch & Kibana (using brew) and populated Elasticsearch with data. I am able to run it locally and make queries.
To install App Search (which is included in Elastic Enterprise Search), I am using the following instructions: https://www.elastic.co/downloads/enterprise-search.
I have done everything up to point 3.
In point 4:
I can't locate the elastic user password in the logs. I haven't set any security/passwords so far, so I guess there's no password at this moment.
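For what it's worth, I understand the password can also be set manually with the bundled tool (assuming an 8.x install; the binary lives in the Elasticsearch bin directory, wherever Homebrew placed it):

# Interactively set a new password for the built-in elastic user (ES 8.x)
elasticsearch-reset-password -u elastic -i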
I haven't seen or used any Kibana token so far. I tried to generate it as shown here, but it does not work for me. It seems like the default path for Elasticsearch should be /usr/local/etc/elasticsearch, but I don't even have an etc directory in my /usr/local. Instead, Elasticsearch is inside the Homebrew directory.
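For reference, this is roughly the command I tried (assuming ES 8.x; run from the Elasticsearch bin directory, wherever Homebrew placed it), plus how I looked for the actual config directory:

# Generate an enrollment token scoped to Kibana (ES 8.x)
elasticsearch-create-enrollment-token -s kibana
# Locate the real config directory under the Homebrew prefix
find "$(brew --prefix)" -maxdepth 4 -name elasticsearch.yml 2>/dev/null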
I can't find the http_ca.crt file anywhere in my Homebrew directory. Should I enable security in Elasticsearch first to generate this file?
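This is how I searched for it (Homebrew prefix assumed; as far as I can tell, the file is only generated once security auto-configuration has run):

find "$(brew --prefix)" -name http_ca.crt 2>/dev/null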
Unlike Elasticsearch and Kibana, the Elastic Enterprise Search file I downloaded in step 1 is not an application but a regular directory. Where should I put it?
Does my approach even make sense? Is it possible to run this service locally just like I'm running ES/Kibana? Most of the examples on the Internet only show how to run this service in Docker.
I am cloning a Magento repo. After running composer update and then bin/magento setup:upgrade, it gives me the following error:
"Could not validate a connection to elasticsearch. no alive nodes found in your cluster"
Elasticsearch is up and running. If I install a fresh Magento project (2.4.3), the setup:upgrade command works fine.
I also checked the status of the Elasticsearch service and it shows as running (status screenshot omitted).
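For reference, this is roughly how I confirmed the node is reachable (default host and port assumed):

curl -s http://localhost:9200/_cluster/health?pretty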
I have already checked a previous thread about failing to connect to Elasticsearch, tried every answer there, and I believe that thread was a different problem.
Are you using a database dump from another environment?
Check your database entries for the Elasticsearch host:
SELECT * FROM magento.core_config_data WHERE path LIKE '%elastic%';
You could well have the hostname set to something other than your local setup. Check these keys (an example fix follows the list):
search/engine/elastic_host
catalog/search/elasticsearch6_server_hostname
etc
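If one of those rows points at another environment's host, you can repoint it from the CLI; a sketch assuming the elasticsearch7 engine on 2.4.3 (adjust the config path to whichever engine your install uses):

bin/magento config:set catalog/search/elasticsearch7_server_hostname localhost
bin/magento config:set catalog/search/elasticsearch7_server_port 9200
bin/magento cache:flush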
It seems to be an Elasticsearch connection problem.
Verify core_config_data as described in Andrew's answer.
If you are using Docker, it may also be a permissions problem:
Setting 777 permissions on your project's Docker folders can help (in local environments only, of course), especially on the folders that hold the Elasticsearch files (volumes and other configuration).
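A minimal sketch, assuming the Elasticsearch volume is bind-mounted under a docker folder in your project (directory name is an example; local environments only):

sudo chmod -R 777 ./docker/volumes/elasticsearch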
I have recently deployed my Vtiger 7 instance in a load-balanced, auto-scaled configuration.
I have also created an NFS server and mounted it on my Vtiger server. The NFS share will also be auto-mounted on any additional servers in the auto-scaled scenario.
In order for this to all work properly, I need to move the /storage and /test directories to the NFS share using symbolic links.
I have set this up and established the proper symbolic links.
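For context, the layout looks roughly like this (all paths are illustrative):

# Move each directory onto the NFS mount and symlink it back into the web root
mv /var/www/html/vtigercrm/storage /mnt/nfs/vtiger/storage
ln -s /mnt/nfs/vtiger/storage /var/www/html/vtigercrm/storage
mv /var/www/html/vtigercrm/test /mnt/nfs/vtiger/test
ln -s /mnt/nfs/vtiger/test /var/www/html/vtigercrm/test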
The problem I'm running into is that Vtiger reads from the symlinked folders without issue, but it is unable to write to them due to permissions issues. I've set the permissions on the NFS folders to 775. I've also tried 777 just to rule permissions out, but I still get the same errors and Vtiger will not write to the directories. Any idea how I can solve this?
After burning my eyes out for many hours, I have solved my own question.
The issue was folder ownership. I essentially needed to change the symlink owner and the NFS directory owner to match the owner of the CRM web root.
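A sketch of the fix, assuming the web server runs as apache and the paths from my setup above (adjust both to your environment):

# Give the NFS directories to the web root's owner
sudo chown -R apache:apache /mnt/nfs/vtiger/storage /mnt/nfs/vtiger/test
# -h changes the symlinks themselves rather than what they point to
sudo chown -h apache:apache /var/www/html/vtigercrm/storage /var/www/html/vtigercrm/test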
I have the Kibana plugin installed on each ES node. Kibana sits behind an nginx reverse proxy because it's served from the /kibana/ route. Elasticsearch is protected with the SearchGuard plugin.
Question: the Dev Tools/Console history is reset with each login (after each login, the history is empty). I'm not sure if I'm missing something or if that's expected behaviour when SearchGuard is in use. I remember this working well before installing SearchGuard, but I'm not sure whether that's a coincidence or indeed related. The history saves properly within a single session.
Elastic version: 6.1.3
Thank you!
It's stored in the browser's local storage under sense:editor_state (in Chrome).
If local storage is wiped daily or the cache is cleared, your saved history goes with it.
Use ?load_from= in your URL and save your queries in a JSON file. Be aware of CORS if you serve the file from a web app of your own.
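A hypothetical example, given Kibana 6.x served under the /kibana/ base path (host and file location are placeholders; the JSON file must be reachable over HTTP):

http://your-host/kibana/app/kibana#/dev_tools/console?load_from=http://your-host/queries.json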
I was given a database backup (with filestore), the filestore folder, and another folder with the modules installed on that database.
I am expected to restore that backup in Odoo 8 with no more data than that. So what I did was create the PostgreSQL role that owns the database tables and give it enough permissions (login, createdb, replication). Then I created an Odoo config file in which I set this new PostgreSQL role as db_user and its password as db_password. I added the path where I stored the filestore as data_dir, and the path of the folder with all the modules as addons_path.
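For reference, I created the role with something like this (the role name is an example; --pwprompt asks for the password interactively, and login is granted by default):

createuser --createdb --replication --pwprompt odoo_user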
As I was given no launcher file, I copied the OCB folder of another Odoo instance I have and used its odoo.py file to start Odoo.
The new instance seems to run well, but now that I have restored the database, I get this error:
QWebException: "'HttpRequest' object has no attribute 'endpoint_arguments'" while evaluating
'website.get_alternate_languages(request.httprequest)'
I googled a lot but was not able to find anything about it, except for an unanswered question on another forum.
Does anyone know what this is about?
Changes related to this issue were introduced in Odoo on 29 February 2016 (I mean the changes "[FIX] website: alternate languages translated URL" and "[FIX] website: backport of"; as you can check, these changes are by now available in the official Odoo 8.0 code base as well).
So most probably you have used an outdated Odoo 8.0 server that does not contain the above-mentioned fixes. Please update to the latest official Odoo 8.0 and check whether the issue persists. Normally your issue should disappear after the update.
When you move database backups around and want to restore them later on, make sure you note the branch and commit point of the server files you took the backup from. I have taken a look at my local v8 Odoo and I can see that the endpoint_arguments variable is initialised upon the creation of a web request (openerp/http.py, class WebRequest, around line 192).
You mention, though, that you are restoring the database on the v8 OCB Odoo. If you navigate their distribution to this commit:
https://github.com/OCA/OCB/commit/3913667396e17075528108ac1031939e6f479ced#diff-5e2f434047c379642786a87195c806f9
you will see that this variable was missing and that they have added it. So make sure you git pull the server files to pick up that commit.
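A sketch of the update, assuming the OCB checkout is the server root you launch odoo.py from (the path is an example):

cd /opt/odoo/ocb
git fetch origin
git pull origin 8.0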
The root of the issue is that you took the backup from a server with a different codebase than the one you are restoring the database to (the QWeb template was looking for a variable that was not there).