Solr index is not updating with publish events

Using Sitecore 10.0.1
Solr 8.4
I published a few items.
There are 3 servers, and each server points to a different Solr instance, say A, B, and C.
But no publishing changes are showing up in Solr indexes B and C. I checked the event queue tables and the events are recorded there.

If you have a geo-distributed setup with Sitecore servers in multiple regions and a Solr server for each region, you should make sure that one Sitecore server in each region has the Indexing role defined in its web.config file. It can be a Content Management or a Content Delivery server, depending on your environment configuration.
Here is an example of how the Indexing role can be added to web.config:
<add key="role:define" value="ContentManagement, Indexing"/>
See the Sitecore documentation for more details.

Related

Laravel App on AWS Elastic Beanstalk: How can I use a different database (on the same RDS) according to subdomain?

We are running a B2B business and we have a Laravel (v8) app running on AWS.
Previously, I created different environments for different customer companies.
One subdomain for each customer company, each running in its own environment, configured as described here:
AWS: Pointing sub-domains to different elastic beanstalk environments
In the beginning it seemed perfect. But managing git branches, merges, and deployments across several machines proved too difficult to maintain, and furthermore too expensive.
So, I decided to point all the subdomains to the same environment for production.
But I also don't want the data of different customer companies to get mixed together; I want to keep them separate, even if they live on the same database server.
So, finally, I want my Laravel app to use a different database (on the same RDS instance) according to the subdomain.
1. Does it make sense?
2. How can I do it?
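One common pattern, sketched below for Laravel 8, is a middleware that repoints the default connection at a per-tenant database before the request is handled. The middleware name and the tenant_<subdomain> naming convention are illustrative assumptions, not the only way to do it:

<?php
// app/Http/Middleware/SelectTenantDatabase.php (hypothetical name;
// register it in app/Http/Kernel.php like any other middleware)

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Config;
use Illuminate\Support\Facades\DB;

class SelectTenantDatabase
{
    public function handle(Request $request, Closure $next)
    {
        // "acme.example.com" -> "acme"
        $subdomain = explode('.', $request->getHost())[0];

        // Assumes one database per customer company on the same RDS
        // instance, named e.g. "tenant_acme".
        Config::set('database.connections.mysql.database', 'tenant_' . $subdomain);

        // Drop any connection already opened against the old database name.
        DB::purge('mysql');

        return $next($request);
    }
}

This keeps one codebase and one environment while isolating each company's data; note that sessions, caches and queues need the same per-tenant treatment.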

WSO2 Governance Not Finding All Assets in JDBC Database

I'm new to WSO2 and working on an existing application that uses WSO2.
We load our database of assets into WSO2, but not all of the assets show up in the store or publisher when queried.
It seems there is some disconnect between what is in the database/Carbon and what can be seen in the store/publisher.
The missing assets can be found by:
calling the database directly
looking them up in Carbon
using the store or publisher URL with the asset ID
using the governance REST API, by ID only
The assets are missing when:
doing searches in the store/publisher GUI
doing searches with the governance API
All of the missing assets have invalid asset names according to our RXT definitions. I removed these validations in Carbon, but still was not able to find them.
We have validations in the RXT files for asset names; would this affect what is seen in the store/publisher?
Is there a way to sync up the governance registry with the database so that it would show all the assets in the store and publisher?
Any help is much appreciated!!
I'm facing the same problem with the store/publisher. After searching for a solution, we found some info about this issue: WSO2 is not indexing some assets in Solr.
You could try to reindex the assets with these steps:
1 - Back up the solr folder, which resides under the API Manager home location, and remove it from there.
2 - Open repository/conf/registry.xml under the API Manager home.
3 - Under the indexingConfiguration tag there is a value called lastAccessTimeLocation.
Default value is
/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime
Change that value to
/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime1
4 - Restart the server
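For reference, the change from step 3 lives in registry.xml roughly like this (other children of indexingConfiguration are omitted here):

<indexingConfiguration>
    <!-- appending "1" gives the indexer a fresh last-access-time resource,
         which makes it reindex everything from scratch -->
    <lastAccessTimeLocation>/_system/local/repository/components/org.wso2.carbon.registry/indexing/lastaccesstime1</lastAccessTimeLocation>
</indexingConfiguration>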
For me this didn't work, but in other questions here many people said it could be the best solution for this issue.
WSO2 loss APIs after changes in docker container
WSO2 API Manager issues with solr
After some long investigation, I found that some entries were missing from the REG_LOG table, and some dates in REG_LOG prevented entries from being indexed. The solution was to add rows to REG_LOG with current timestamps, which forced a reindex; after that, the missing assets could be found in the web UI.
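For anyone trying the same fix, the rows I added looked loosely like the sketch below. The column list matches the usual WSO2 registry schema, but the registry path, user, action code (0) and super-tenant ID (-1234) are assumptions to verify against your own version before running anything:

-- Log a fresh event for a missing asset so the indexer picks it up again.
-- Check the REG_LOG schema and LogEntry action constants for your WSO2 version.
INSERT INTO REG_LOG
    (REG_PATH, REG_USER_ID, REG_LOGGED_TIME, REG_ACTION, REG_ACTION_DATA, REG_TENANT_ID)
VALUES
    ('/_system/governance/trunk/services/my/missing/asset', 'admin', CURRENT_TIMESTAMP, 0, '', -1234);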

What is Sonar Search?

I am upgrading an install of SonarQube from 4.5.1 to 5.2. I wasn't part of the original install, and when looking at the sonar.properties file to see what needs to be updated in the new one, I see properties for "sonar.search".
What is Sonar Search? Why would I need to uncomment/update these properties?
I haven't been able to find any good documentation on the SonarQube website about what it is, and internet searches for "sonar" and "search" bring up way too many unrelated results to sift through.
It is an Elasticsearch instance used for indexing some database tables. It allows, for example, powerful search requests on issues: see the sidebar of the "Issues" page, which supports multi-criteria searches and displays valuable facets.
The default settings in sonar.properties are good enough for most environments. The JVM settings of this dedicated process can be overridden if tens of millions of issues are stored in the database.
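As an illustration, the corresponding block in sonar.properties looks roughly like this on 5.x (the properties ship commented out; the values here are only examples):

# JVM options for the dedicated search (Elasticsearch) process
sonar.search.javaOpts=-Xmx2G -Xms2G
# TCP port the search process listens on (9001 by default)
sonar.search.port=9001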

Autoscale Magento in the cloud

I have just entered into the world of e-commerce, and I am trying to get my Magento website up and running.
I am using the AWS cloud to host my website, and I am trying to use an architecture where I can run multiple servers connected to a single DB server. Specifically, I want to use an AWS Auto Scaling group along with ELB to start multiple EC2 instances during high load. There is only one Multi-AZ RDS database instance.
As an initial trial, I created 2 EC2 instances and installed Magento on both of them, using the same RDS DB for both. But as it turns out, Magento stores the base URL of the web server in the database itself, which means I can only store the base URL of the Magento website running on one particular server.
To be precise, Magento stores the base URL in the table core_config_data, in the rows where the column 'path' is "web/unsecure/base_url" or "web/secure/base_url"; the column 'value' of the corresponding row holds the URL of the web server Magento is installed on.
My question is: how can I use multiple servers with EC2 auto scaling if Magento permits only one server address in the base URL?
Here's a partial view of the table with the 2 rows:

config_id  scope    scope_id  path                   value
5          default  0         web/unsecure/base_url  http://server1.com/magento/
6          default  0         web/secure/base_url    http://server1.com/magento/
Are there any other known methods to use horizontal scaling during heavy load conditions in Magento?
I don't think load balancing works like that.
You need a load balancer that receives the requests and then passes them off to one of the servers running Magento - so I think you would pass the same URL to both servers anyway, no? I do not know how to do it.
You are trying to set up a very complicated system.
You could look to override some functions if you want to have different values for secure and non-secure URLs. Try reading this code to get you started:
//file app/code/core/Mage/Core/Model/Store.php
//class Mage_Core_Model_Store
//function getBaseUrl()
//function getDistroServerVars()
//file app/code/core/Mage/Core/Model/Url.php
//class Mage_Core_Model_Url
//function getBaseUrl()
//file app/code/core/Mage/Core/Controller/Request/Http.php
//class Mage_Core_Controller_Request_Http
//function - I don't know, any of them, none of them
and look at any files containing the string 'substDistroServerVars'; isDirectAccessFrontendName might also expose something. getDistroServerVars is discussed at the end of this great article by the almighty Alan Storm.
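Purely as an illustration of what such an override could look like (the module and class names are made up, and this is a sketch rather than tested code), a local rewrite of the store model might swap the stored host for the requested one:

// app/code/local/My/BaseUrl/Model/Store.php -- hypothetical module,
// registered via <models><core><rewrite><store>My_BaseUrl_Model_Store</store>
// </rewrite></core></models> in the module's config.xml
class My_BaseUrl_Model_Store extends Mage_Core_Model_Store
{
    public function getBaseUrl($type = self::URL_TYPE_LINK, $secure = null)
    {
        $url = parent::getBaseUrl($type, $secure);
        // Replace the hostname stored in core_config_data with the host
        // the current request actually came in on.
        $requestHost = Mage::app()->getRequest()->getHttpHost();
        return preg_replace('#^(https?://)[^/]+#', '$1' . $requestHost, $url);
    }
}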
But I don't think that is the real answer - for the real answer skip to the links at the end of this tedious monologue.
If this is your first foray into Magento and you really think you are going to get the volume of traffic into your shop that requires load balancing over two servers, then you can afford, *must afford*, third-party hosting: get professionals with many, many man-years of experience running Magento under heavy load across multiple servers. You will also want to hand off (at least) the images to a CDN.
*I mean, if your shop has that high a volume then it has a high revenue, and you should invest that revenue in keeping your shop running: professional hosting with 24/7 support. Otherwise downtime will be expensive, and a long implementation will mean lost revenue.
If you are just trying this out for fun and to learn something about setting up Magento on multiple servers, then I recommend two things:
1) Practice getting Magento running on one server first - and optimising for volume there (caching, compilers, DB tuning, log file analysis, flat tables, cron jobs, CDNs, possibly combined JS and CSS, web server tuning and getting the headers right, possibly a full page cache and a sprinkling of Redis) - because that isn't a trivial list on one server, never mind two + DB server and ELB.
And 2) practice getting Apache or nginx to serve load-balanced content with your e-commerce SSL certificate in place. Only then should you try to join the two systems. Be prepared to spend many months on this - including figuring out Siege, ApacheBench or JMeter for simulated load testing.
But if you really want to get the AWS ELB set up, here are a few excellent resources to get you started - particularly the detailed tutorial by Adrian Duke (first link). Pay great attention to the details in the last section of that article, subtitled 'Magento'; that may be the answer to your question.
Getting and scaling Magento in the cloud by Adrian Duke
Using AWS Auto Scaling with an Elastic Load Balancer cluster on EC2 (actually a WordPress install, not Magento, but Mr Shroder knows his Magento)
Running Magento in an AWS Environment (All hail Alan Storm)
I've had a rather large amount of success modifying Magento into a Beanstalk package. The steps (loosely) were:
Install Git locally
Install the AWS command line tools
Install the AWS Elastic Beanstalk command line tools
Build a module to upload images to S3 every time one is uploaded in Magento
Utilize OnePica's Magento extension
Use Amazon's Redis cache for caching data
Use RDS for the database
Use Route 53 for routing
Use CloudFront for image, JS & CSS distribution
A couple of drawbacks to AWS:
Customizing Magento to look for things is a pain in the ass. As we speak I'm trying to get it to keep sessions persistent between EC2 instances, as the load balancer chops them up.
Every time you need to modify Magento in any way, it's a git commit (then we test locally via a separate Beanstalk instance) and then a push to production.
Short of that it's been fairly stable. We're not hitting high numbers yet, though.
Normally you put a load balancer in front of the nodes to distribute the load, and each node is configured to use the same base_url. MySQL replication can be used if you want multiple DB servers, but I have never found the need to do this. I have not used Amazon EC2 with Magento, but I have a similar setup in a dedicated server environment with two nodes, one DB server, a load balancer, and shared media.
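In practice that means core_config_data should hold the load balancer's public hostname rather than an individual node's, something like this (the hostname is illustrative):

-- Point both base URLs at the balancer so every node serves the same site.
UPDATE core_config_data
SET value = 'http://shop.example.com/'
WHERE path IN ('web/unsecure/base_url', 'web/secure/base_url');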
The diagram here is useful, especially the shared storage for media; you're going to need to do something like this: http://www.severalnines.com/blog/how-cluster-magento-nginx-and-mysql-multiple-servers-high-availability
Also, Amazon provides Elastic Load Balancing, which is what you're after, I think: http://aws.amazon.com/documentation/elasticloadbalancing/

Using Solr on multiple CMS

I have an eZ Publish site and a Magento site on two different servers, and one Solr server. The Solr server is currently used as the search engine for eZ Publish, but I would also like to use the same Solr server for Magento.
eZ Publish comes with an extension (eZ Find) which contains a schema.xml, and I got it working straight out of the box without any configuration (other than defining the Solr server, user, password, etc.).
Magento ships with a schema.xml and a solrconfig.xml which, according to the documentation, need to be copied to the Solr server.
I'm a bit afraid of doing this, since I don't want to break the search on eZ Publish.
Does anyone have any experience with this or has any recommendations on the Solr setup?
You need to use the multi-core feature of Solr, so that you have only one Solr instance serving 2 cores (at least).
What does that mean? Each core will be defined by at least 2 files (schema.xml and solrconfig.xml), which will be located in dedicated folders within your Solr installation. Then the cores have to be registered in a file named solr.xml which, in your case, could look like this:
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="true" sharedLib="lib">
  <cores adminPath="/admin/cores">
    <core name="ezpublish" instanceDir="ezpublish" />
    <core name="magento" instanceDir="magento" />
  </cores>
</solr>
If your current Solr installation is still in the eZ Find extension, then you should have a look at this page, which tells you how to move the bundled Solr installation outside of eZ Publish. Then, add a new core with the Magento configuration files.
Depending on the Solr version you are using, I would recommend installing Solr on your own (rather than taking the one bundled with eZ Find) and applying the eZ Publish configuration to it.
You can use Solr's multicore feature, which allows you to host multiple indexes, each with its own schema and each accessible at its own URL (http://localhost:8983/solr/ezpublish/ and http://localhost:8983/solr/magento).
eZ Publish has a tutorial on how to do this: http://doc.ez.no/Extensions/eZ-Publish-extensions/eZ-Find/eZ-Find-2.7/Advanced-Configuration/Using-multi-core-features
All you should have left to do is copy your Magento config into its own core.
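With the solr.xml above, the Solr home would end up looking roughly like this (one folder per core, each with its own conf directory):

solr.xml
ezpublish/
    conf/
        schema.xml        <- from eZ Find
        solrconfig.xml
magento/
    conf/
        schema.xml        <- shipped with Magento
        solrconfig.xml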
