Magento Indexes Issue - Can't reindex

I have a problem with index management in my Magento 1.6.2.0 store. Basically I can't get the indexes to update. The status has said Processing for over three weeks now.
When I try to reindex I get the message "Stock Status Index process is working now. Please try run this process later" - but "later" has been three weeks now. It looks like the process is frozen, and I don't know how to restart it.
Any ideas?
Cheers

Whenever you start an indexing process, Magento writes out a lock file to the var/locks folder.
$ cd /path/to/magento
$ ls var/locks
index_process_1.lock index_process_4.lock index_process_7.lock
index_process_2.lock index_process_5.lock index_process_8.lock
index_process_3.lock index_process_6.lock index_process_9.lock
The lock file prevents another user from starting an indexing process. However, if the indexing request times out or fails before it can complete, the lock file is left behind in a locked state. That's probably what happened to you. I'd recommend you check the last-modified dates on the lock files to make sure someone else isn't running the re-indexer right now, and then remove the lock files. This will clear up your
Stock Status Index process is working now. Please try run this process later
error. After that, run the indexers one at a time to make sure each one completes.
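As a rough sketch of those two checks (assuming a standard Magento 1.x root; adjust the path, and look at the timestamps before deleting anything):
$ cd /path/to/magento
$ ls -l var/locks/                        # check the last-modified dates of the lock files
$ rm -f var/locks/index_process_*.lock    # remove the stale locks so indexing can start again
Then run each indexer again and confirm it finishes.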

Hello. Have you tried running the indexer manually? If not, create a file in your Magento root folder and put this code in it:
require_once 'app/Mage.php';
umask(0);
Mage::app('default');

// Load a single index process by its code (here the flat product catalog) and rebuild it
$process = Mage::getSingleton('index/indexer')->getProcessByCode('catalog_product_flat');
$process->reindexAll();
This code reindexes your Magento store manually. If your store contains a large number of products, reindexing can take a lot of time, which is why Index Management in the admin panel shows some indexes stuck in the Processing stage. Running this code may help move those indexes from Processing back to Ready.
You can also run the indexers over SSH if you have shell access; that is usually faster as well - see the sketch below.
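A minimal sketch of the SSH route, using the shell/indexer.php script that ships with Magento 1.x (run from the Magento root; indexer codes vary per store):
$ php shell/indexer.php info                              # list the available indexer codes
$ php shell/indexer.php --reindex catalog_product_flat    # rebuild a single index
$ php shell/indexer.php reindexall                        # rebuild all indexes
Running them one at a time makes it easier to spot which indexer hangs.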

For newer versions of Magento, e.g. 2.1.3, I had to use this solution:
http://www.elevateweb.co.uk/magento-ecommerce/magento-error-sqlstatehy000-general-error-1205-lock-wait-timeout-exceeded
This might happen if you are running a lot of custom scripts and killing the scripts before the database connection gets a chance to close.
If you login to MySQL from CLI and run the command
SHOW PROCESSLIST;
you will get the following output
+---------+---------+-------------------+---------+---------+------+-------+------+-----------+---------------+-----------+
| Id      | User    | Host              | db      | Command | Time | State | Info | Rows_sent | Rows_examined | Rows_read |
+---------+---------+-------------------+---------+---------+------+-------+------+-----------+---------------+-----------+
| 6794372 | db_user | 111.11.0.65:21532 | db_name | Sleep   | 3800 |       | NULL |         0 |             0 |         0 |
| 6794475 | db_user | 111.11.0.65:27488 | db_name | Sleep   | 3757 |       | NULL |         0 |             0 |         0 |
| 6794550 | db_user | 111.11.0.65:32670 | db_name | Sleep   | 3731 |       | NULL |         0 |             0 |         0 |
| 6794797 | db_user | 111.11.0.65:47424 | db_name | Sleep   | 3639 |       | NULL |         0 |             0 |         0 |
| 6794909 | db_user | 111.11.0.65:56029 | db_name | Sleep   | 3591 |       | NULL |         0 |             0 |         0 |
| 6794981 | db_user | 111.11.0.65:59201 | db_name | Sleep   | 3567 |       | NULL |         0 |             0 |         0 |
| 6795096 | db_user | 111.11.0.65:2390  | db_name | Sleep   | 3529 |       | NULL |         0 |             0 |         0 |
| 6795270 | db_user | 111.11.0.65:10125 | db_name | Sleep   | 3473 |       | NULL |         0 |             0 |         0 |
| 6795402 | db_user | 111.11.0.65:18407 | db_name | Sleep   | 3424 |       | NULL |         0 |             0 |         0 |
| 6795701 | db_user | 111.11.0.65:35679 | db_name | Sleep   | 3330 |       | NULL |         0 |             0 |         0 |
| 6800436 | db_user | 111.11.0.65:57815 | db_name | Sleep   | 1860 |       | NULL |         0 |             0 |         0 |
| 6806227 | db_user | 111.11.0.67:20650 | db_name | Sleep   |  188 |       | NULL |         1 |             0 |         0 |
+---------+---------+-------------------+---------+---------+------+-------+------+-----------+---------------+-----------+
15 rows in set (0.00 sec)
You can see, for example, that connection 6794372 has the command Sleep and a time of 3800 seconds. This is preventing other operations.
These processes should be killed one by one using the KILL command:
KILL 6794372;
Once you have killed all the sleeping connections, things should start working as normal again.
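If there are many sleeping connections, a query along these lines (plain MySQL, nothing Magento-specific; the 600-second threshold is just an example) can generate the KILL statements for you from information_schema:
SELECT CONCAT('KILL ', id, ';') AS kill_stmt
FROM information_schema.processlist
WHERE command = 'Sleep' AND time > 600;
Copy the resulting statements back into the MySQL prompt and run them.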

You need to do two steps:
Give 777 permissions to the var/locks folder
Delete all files in the var/locks folder

Whenever you start an indexing process, Magento writes out a lock file to the var/locks folder. So you need to do two steps:
Give 777 permissions to the var/locks folder
Delete all files in the var/locks folder.
Now refresh the index management page in admin panel.
Enjoy!!

Related

Cannot decommission cockroachdb node

Inside Kubernetes, after scaling the node count down, the node cannot be decommissioned:
{"level":"warn","ts":1665138574.1910405,"logger":"controller.CrdbCluster","msg":"scaling down stateful set","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"kF7Vns39vPGnqXncUhmWnX","have":5,"want":4}
{"level":"error","ts":1665138574.8271742,"logger":"controller.CrdbCluster","msg":"decommission failed","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"kF7Vns39vPGnqXncUhmWnX","error":"failed to stream execution results back: command terminated with exit code 1","errorVerbose":"failed to stream execution results back: command terminated with exit code 1\n(1) attached stack trace\n -- stack trace:\n | github.com/cockroachdb/cockroach-operator/pkg/scale.CockroachExecutor.Exec\n | \tpkg/scale/executor.go:57\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).findNodeID\n | \tpkg/scale/drainer.go:242\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).Decommission\n | \tpkg/scale/drainer.go:79\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*Scaler).EnsureScale\n | \tpkg/scale/scale.go:91\n | github.com/cockroachdb/cockroach-operator/pkg/actor.decommission.Act\n | \tpkg/actor/decommission.go:143\n | github.com/cockroachdb/cockroach-operator/pkg/controller.(*ClusterReconciler).Reconcile\n | \tpkg/controller/cluster_controller.go:153\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:297\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\n | k8s.io/apimachinery/pkg/util/wait.JitterUntil\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.UntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99\n | runtime.goexit\n | \tsrc/runtime/asm_amd64.s:1581\nWraps: (2) failed to stream execution results back\nWraps: (3) command terminated with exit code 1\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) 
exec.CodeExitError","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\texternal/com_github_go_logr_zapr/zapr.go:132\ngithub.com/cockroachdb/cockroach-operator/pkg/actor.decommission.Act\n\tpkg/actor/decommission.go:145\ngithub.com/cockroachdb/cockroach-operator/pkg/controller.(*ClusterReconciler).Reconcile\n\tpkg/controller/cluster_controller.go:153\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:297\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99"}
{"level":"info","ts":1665138574.8283174,"logger":"controller.CrdbCluster","msg":"Error on action","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"kF7Vns39vPGnqXncUhmWnX","Action":"Decommission","err":"failed to stream execution results back: command terminated with exit code 1"}
{"level":"error","ts":1665138574.8283627,"logger":"controller.CrdbCluster","msg":"action failed","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"kF7Vns39vPGnqXncUhmWnX","error":"failed to stream execution results back: command terminated with exit code 1","errorVerbose":"failed to stream execution results back: command terminated with exit code 1\n(1) attached stack trace\n -- stack trace:\n | github.com/cockroachdb/cockroach-operator/pkg/scale.CockroachExecutor.Exec\n | \tpkg/scale/executor.go:57\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).findNodeID\n | \tpkg/scale/drainer.go:242\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).Decommission\n | \tpkg/scale/drainer.go:79\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*Scaler).EnsureScale\n | \tpkg/scale/scale.go:91\n | github.com/cockroachdb/cockroach-operator/pkg/actor.decommission.Act\n | \tpkg/actor/decommission.go:143\n | github.com/cockroachdb/cockroach-operator/pkg/controller.(*ClusterReconciler).Reconcile\n | \tpkg/controller/cluster_controller.go:153\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:297\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\n | k8s.io/apimachinery/pkg/util/wait.JitterUntil\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.UntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99\n | runtime.goexit\n | \tsrc/runtime/asm_amd64.s:1581\nWraps: (2) failed to stream execution results back\nWraps: (3) command terminated with exit code 1\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) 
exec.CodeExitError","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\texternal/com_github_go_logr_zapr/zapr.go:132\ngithub.com/cockroachdb/cockroach-operator/pkg/controller.(*ClusterReconciler).Reconcile\n\tpkg/controller/cluster_controller.go:185\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:297\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99"}
{"level":"error","ts":1665138574.836441,"logger":"controller-runtime.manager.controller.crdbcluster","msg":"Reconciler error","reconciler group":"crdb.cockroachlabs.com","reconciler kind":"CrdbCluster","name":"cockroachdb","namespace":"cockroach-cluster-stage","error":"failed to stream execution results back: command terminated with exit code 1","errorVerbose":"failed to stream execution results back: command terminated with exit code 1\n(1) attached stack trace\n -- stack trace:\n | github.com/cockroachdb/cockroach-operator/pkg/scale.CockroachExecutor.Exec\n | \tpkg/scale/executor.go:57\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).findNodeID\n | \tpkg/scale/drainer.go:242\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*CockroachNodeDrainer).Decommission\n | \tpkg/scale/drainer.go:79\n | github.com/cockroachdb/cockroach-operator/pkg/scale.(*Scaler).EnsureScale\n | \tpkg/scale/scale.go:91\n | github.com/cockroachdb/cockroach-operator/pkg/actor.decommission.Act\n | \tpkg/actor/decommission.go:143\n | github.com/cockroachdb/cockroach-operator/pkg/controller.(*ClusterReconciler).Reconcile\n | \tpkg/controller/cluster_controller.go:153\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:297\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\n | sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n | \texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\n | k8s.io/apimachinery/pkg/util/wait.BackoffUntil\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\n | k8s.io/apimachinery/pkg/util/wait.JitterUntil\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\n | k8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\n | k8s.io/apimachinery/pkg/util/wait.UntilWithContext\n | \texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99\n | runtime.goexit\n | \tsrc/runtime/asm_amd64.s:1581\nWraps: (2) failed to stream execution results back\nWraps: (3) command terminated with exit code 1\nError types: (1) *withstack.withStack (2) *errutil.withPrefix (3) 
exec.CodeExitError","stacktrace":"github.com/go-logr/zapr.(*zapLogger).Error\n\texternal/com_github_go_logr_zapr/zapr.go:132\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:301\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:252\nsigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func1.2\n\texternal/io_k8s_sigs_controller_runtime/pkg/internal/controller/controller.go:215\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:155\nk8s.io/apimachinery/pkg/util/wait.BackoffUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:156\nk8s.io/apimachinery/pkg/util/wait.JitterUntil\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:133\nk8s.io/apimachinery/pkg/util/wait.JitterUntilWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:185\nk8s.io/apimachinery/pkg/util/wait.UntilWithContext\n\texternal/io_k8s_apimachinery/pkg/util/wait/wait.go:99"}
{"level":"info","ts":1665138584.3979504,"logger":"controller.CrdbCluster","msg":"reconciling CockroachDB cluster","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"URAajRJhYWotEB4tQs6hRm"}
{"level":"info","ts":1665138584.3980412,"logger":"webhooks","msg":"default","name":"cockroachdb"}
{"level":"info","ts":1665138584.4027824,"logger":"controller.CrdbCluster","msg":"Running action with name: Decommission","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"URAajRJhYWotEB4tQs6hRm"}
{"level":"warn","ts":1665138584.4028075,"logger":"controller.CrdbCluster","msg":"check decommission opportunities","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"URAajRJhYWotEB4tQs6hRm"}
{"level":"info","ts":1665138584.4028518,"logger":"controller.CrdbCluster","msg":"replicas decommissioning","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"URAajRJhYWotEB4tQs6hRm","status.CurrentReplicas":5,"expected":4}
{"level":"warn","ts":1665138584.4028952,"logger":"controller.CrdbCluster","msg":"operator is running inside of kubernetes, connecting to service for db connection","CrdbCluster":"cockroach-cluster-stage/cockroachdb","ReconcileId":"URAajRJhYWotEB4tQs6hRm"}
This was a bug and should be fixed by this PR

Is there any feasible and easy option to use a local folder as a Hadoop HDFS folder

I have a massive chunk of files on an extremely fast SAN disk that I would like to run Hive queries on.
An obvious option is to copy all files into HDFS by using a command like this:
hadoop dfs -copyFromLocal /path/to/file/on/filesystem /path/to/input/on/hdfs
However, I don't want to create a second copy of my files just to be able to run Hive queries on them.
Is there any way to point an HDFS folder at a local folder, so that Hadoop sees it as an actual HDFS folder? Files keep being added to the SAN disk, so Hadoop needs to see the new files as they are added.
This is similar to Azure's HDInsight approach that you copy your files into a blob storage and HDInsight's Hadoop sees them through HDFS.
For playing around with small files, using the local file system might be fine, but I wouldn't do it for any other purpose.
Putting a file in HDFS means that it is split into blocks which are replicated and distributed.
This later gives you both performance and availability.
Locations of [external] tables can be pointed at the local file system using file:///.
Whether it works smoothly or you'll start getting all kinds of errors, that remains to be seen.
Please note that for this demo I'm using a little trick to point the location at a specific file, but your basic use will probably be for directories (a directory version is sketched after the output below).
Demo
create external table etc_passwd
(
Username string
,Password string
,User_ID int
,Group_ID int
,User_ID_Info string
,Home_directory string
,shell_command string
)
row format delimited
fields terminated by ':'
stored as textfile
location 'file:///etc'
;
alter table etc_passwd set location 'file:///etc/passwd'
;
select * from etc_passwd limit 10
;
+----------+----------+---------+----------+--------------+-----------------+----------------+
| username | password | user_id | group_id | user_id_info | home_directory | shell_command |
+----------+----------+---------+----------+--------------+-----------------+----------------+
| root | x | 0 | 0 | root | /root | /bin/bash |
| bin | x | 1 | 1 | bin | /bin | /sbin/nologin |
| daemon | x | 2 | 2 | daemon | /sbin | /sbin/nologin |
| adm | x | 3 | 4 | adm | /var/adm | /sbin/nologin |
| lp | x | 4 | 7 | lp | /var/spool/lpd | /sbin/nologin |
| sync | x | 5 | 0 | sync | /sbin | /bin/sync |
| shutdown | x | 6 | 0 | shutdown | /sbin | /sbin/shutdown |
| halt | x | 7 | 0 | halt | /sbin | /sbin/halt |
| mail | x | 8 | 12 | mail | /var/spool/mail | /sbin/nologin |
| uucp | x | 10 | 14 | uucp | /var/spool/uucp | /sbin/nologin |
+----------+----------+---------+----------+--------------+-----------------+----------------+
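The same idea should work for the asker's case with a directory rather than a single file. A hypothetical sketch - the table name, column, delimiter and path below are placeholders, not taken from the question:
create external table san_files
(
line string
)
row format delimited
fields terminated by '\t'
stored as textfile
location 'file:///path/to/san/folder'
;
Note that with a file:/// location the data must be readable from the node(s) executing the query, and you give up the replication and data locality that HDFS provides.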
You can also mount your HDFS path onto a local folder, for example with an hdfs mount.
Please follow this for more info.
But if you want speed, it isn't an option.
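As a rough sketch of such a mount via the HDFS NFS gateway (this assumes the gateway is already set up and running; namenode-host and the mount point are placeholders):
$ sudo mkdir -p /mnt/hdfs
$ sudo mount -t nfs -o vers=3,proto=tcp,nolock,sync namenode-host:/ /mnt/hdfs
$ ls /mnt/hdfs    # HDFS is now browsable like a local directory
As noted above, this is convenient for browsing and copying, but it is not a fast path for query workloads.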

Display record count in listbox using multiple tables and fields

I need help with a query; I can't get it to work correctly. What I'm trying to achieve is a select box displaying the number of records associated with each theme. For some themes it works well, but for others it displays (0) when in fact there are 2 records. I'm wondering if someone could help me with this; your help would be greatly appreciated. Please see below my actual query and table structure:
SELECT theme.id_theme, theme.theme, calender.start_date,
calender.id_theme1,calender.id_theme2, calender.id_theme3, COUNT(*) AS total
FROM theme, calender
WHERE (YEAR(calender.start_date) = YEAR(CURDATE())
AND MONTH(calender.start_date) > MONTH(CURDATE()) )
AND (theme.id_theme=calender.id_theme1)
OR (theme.id_theme=calender.id_theme2)
OR (theme.id_theme=calender.id_theme3)
GROUP BY theme.id_theme
ORDER BY theme.theme ASC
THEME table
|---------------------|
| id_theme | theme |
|----------|----------|
| 1 | Yoga |
| 2 | Music |
| 3 | Taichi |
| 4 | Dance |
| 5 | Coaching |
|---------------------|
CALENDAR table
|---------------------------------------------------------------------------|
| id_calender | id_theme1 | id_theme2 | id_theme3 | start_date | end_date |
|-------------|-----------|-----------|-----------|------------|------------|
| 1 | 2 | 4 | | 2015-07-24 | 2015-08-02 |
| 2 | 4 | 1 | 5 | 2015-08-06 | 2015-08-22 |
| 3 | 1 | 3 | 2 | 2014-10-11 | 2015-10-28 |
|---------------------------------------------------------------------------|
LISTBOX
|----------------|
| |
| Yoga (1) |
| Music (1) |
| Taichi (0) |
| Dance (2) |
| Coaching (1) |
|----------------|
Thanking you in advance
I think the theme conditions should be inside brackets:
((theme.id_theme=calender.id_theme1)
OR (theme.id_theme=calender.id_theme2)
OR (theme.id_theme=calender.id_theme3))
Hope this helps.
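For clarity, here is the original query with that bracket fix applied (a sketch; the column list is left as in the question):
SELECT theme.id_theme, theme.theme, calender.start_date,
calender.id_theme1, calender.id_theme2, calender.id_theme3, COUNT(*) AS total
FROM theme, calender
WHERE (YEAR(calender.start_date) = YEAR(CURDATE())
AND MONTH(calender.start_date) > MONTH(CURDATE()) )
AND ((theme.id_theme=calender.id_theme1)
OR (theme.id_theme=calender.id_theme2)
OR (theme.id_theme=calender.id_theme3))
GROUP BY theme.id_theme
ORDER BY theme.theme ASC
Note that with an inner join like this, themes that match no calendar rows will not appear at all; showing them with a (0) count would require a LEFT JOIN.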

How can I compare against FileSystemRights using Powershell?

I want to check whether a given user has access to a given folder - by checking if they have "Modify" access assigned to them.
I thought that the PS for that would be:
(Get-Acl .\myfolder).Access | ?{$_.IdentityReference -eq "BUILTIN\Users"} |?{$_.filesystemrights.value -contains "Modify"}
But the final part of that isn't working - I get back no result. But I know that they have Modify access - if I put in:
(Get-Acl .\myfolder).Access | ?{$_.IdentityReference -eq "BUILTIN\Users"} | select -ExpandProperty filesystemrights
then I get back:
Modify, Synchronize
ReadAndExecute, Synchronize
Is this because the FileSystemRights property is an enumeration? And if so, how do I test against it?
It's a type problem. The FileSystemRights property of each entry in (Get-Acl .\myfolder).Access is of type System.Security.AccessControl.FileSystemRights. It's not really a string, it just displays like one. To compare it as a string, use the ToString() method together with a string operator such as -match:
(Get-Acl .\myfolder).Access | ?{$_.IdentityReference -eq "BUILTIN\Users"} | ?{$_.FileSystemRights.ToString() -match "Modify"}
Or you can use the bitwise comparison method. However, it's very easy to confuse when you want to use this:
($_.FileSystemRights -band [System.Security.AccessControl.FileSystemRights]::Modify) -eq [System.Security.AccessControl.FileSystemRights]::Modify
With when you want to use this:
($_.FileSystemRights -band [System.Security.AccessControl.FileSystemRights]::Modify) -eq $_.FileSystemRights
They have very different meanings. For example, if you have Full Control, the former test is still true. Is that what you want? Or do you want to know when the FileSystemRights are literally just Modify?
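A tiny illustration of the difference, using FullControl as an assumed example value:
$rights = [System.Security.AccessControl.FileSystemRights]::FullControl
# True - all of the Modify bits are present within FullControl:
($rights -band [System.Security.AccessControl.FileSystemRights]::Modify) -eq [System.Security.AccessControl.FileSystemRights]::Modify
# False - the rights are more than just Modify:
($rights -band [System.Security.AccessControl.FileSystemRights]::Modify) -eq $rights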
Also, [System.Security.AccessControl.FileSystemRights] is an incomplete enumeration. In my environment, I found I needed this table:
+-------------+------------------------------+------------------------------+
| Value | Name | Alias |
+-------------+------------------------------+------------------------------+
| -2147483648 | GENERIC_READ | GENERIC_READ |
| 1 | ReadData | ListDirectory |
| 1 | ReadData | ReadData |
| 2 | CreateFiles | CreateFiles |
| 2 | CreateFiles | WriteData |
| 4 | AppendData | AppendData |
| 4 | AppendData | CreateDirectories |
| 8 | ReadExtendedAttributes | ReadExtendedAttributes |
| 16 | WriteExtendedAttributes | WriteExtendedAttributes |
| 32 | ExecuteFile | ExecuteFile |
| 32 | ExecuteFile | Traverse |
| 64 | DeleteSubdirectoriesAndFiles | DeleteSubdirectoriesAndFiles |
| 128 | ReadAttributes | ReadAttributes |
| 256 | WriteAttributes | WriteAttributes |
| 278 | Write | Write |
| 65536 | Delete | Delete |
| 131072 | ReadPermissions | ReadPermissions |
| 131209 | Read | Read |
| 131241 | ReadAndExecute | ReadAndExecute |
| 197055 | Modify | Modify |
| 262144 | ChangePermissions | ChangePermissions |
| 524288 | TakeOwnership | TakeOwnership |
| 1048576 | Synchronize | Synchronize |
| 2032127 | FullControl | FullControl |
| 268435456 | GENERIC_ALL | GENERIC_ALL |
| 536870912 | GENERIC_EXECUTE | GENERIC_EXECUTE |
| 1073741824 | GENERIC_WRITE | GENERIC_WRITE |
+-------------+------------------------------+------------------------------+
It's interesting to compare the output of these:
[System.Enum]::GetNames([System.Security.AccessControl.FileSystemRights]);
[System.Enum]::GetNames([System.Security.AccessControl.FileSystemRights]) | % { "$($_.ToString())`t`t$([System.Security.AccessControl.FileSystemRights]$_.ToString())`t`t$(([System.Security.AccessControl.FileSystemRights]$_).value__)";}
[System.Enum]::GetValues([System.Security.AccessControl.FileSystemRights]) | % { "$($_.ToString())`t`t$(($_).value__)";}
The GENERIC rights are not enumerated in the .Net class, but you will see that numeric value if you enumerate enough files.
Good luck!
Got it:
(get-acl .\myfolder).Access | ?{$_.IdentityReference -eq "BUILTIN\Users"} | ?{($_.FileSystemRights -band [System.Security.AccessControl.FileSystemRights]::Modify) -eq [System.Security.AccessControl.FileSystemRights]::Modify}
It's a bitwise comparison, which is why you need to use "-band".
But "-band" on its own gives a non-zero (truthy) result if any of the same bits are set in both values. And since even "Read" has several bits set (it's 100000000010001001 in binary), some of which overlap with "Modify", you also need to compare the result with "Modify" to make sure the result really is Modify.
(Thanks to the comments below for getting me pointed in the right direction.)
Updated version, clarified from Arco's comment.
With this version we're simply checking whether any of the Modify bits are set:
(Get-Acl .\myfolder).Access | ?{$_.IdentityReference -eq "BUILTIN\Users"} |?{ $_.FileSystemRights -band [Security.AccessControl.FileSystemRights]::Modify}
The value__ property is the numeric, bit-flag version of the value.
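For example (the numeric values correspond to the table above):
([System.Security.AccessControl.FileSystemRights]::Modify).value__                       # 197055
([System.Security.AccessControl.FileSystemRights]"Modify, Synchronize").value__          # 1245631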

Moving dev site to production on new account AWS

I am in the process of moving and testing a development site on the actual domain name now and I just wanted to check if I was missing anything and also get some advice.
It is a Magento 1.8.1 install from Turnkey Linux running on an m1.medium instance.
What I have done so far is create an image of the development instance, make a new account and copy the image over to it. I then allocated an Elastic IP and associated it with the new instance. Next I pointed the A record of the production domain at the Elastic IP.
Now, if I go to the production domain I get redirected to the development domain. Is there a reason for this?
Ideally I would like to have two instances: a dev one that is off unless needed, and of course the production one, which is going to be live 24/7. However, if I turn the development domain off it stops the other too.
I have a feeling it's just because I need to change references to the dev domain in the Magento database / back-end, but I wanted to get a more knowledgeable answer as I don't want to break either of the instances.
Also, I should probably mention that the development domain is a subdomain i.e. shop.mysite.com and the live one is just normal i.e. mysite.com. Not entirely sure this is relevant but thought it worth a mention.
Thanks in advance for any help.
The reason your new instance is redirecting to the old URL is that, in the core_config_data table of your Magento database, the web/unsecure/base_url and web/secure/base_url paths point to your old URL.
So if you are using MySQL you can query your database as follows:
mysql> use magento;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A
Database changed
mysql> select * from core_config_data;
+-----------+---------+----------+-------------------------------+-------------------------------------+
| config_id | scope | scope_id | path | value |
+-----------+---------+----------+-------------------------------+-------------------------------------+
| 1 | default | 0 | web/seo/use_rewrites | 1 |
| 2 | default | 0 | admin/dashboard/enable_charts | 0 |
| 3 | default | 0 | web/unsecure/base_url | http://magento.myolddomain.com/ |
| 4 | default | 0 | web/secure/use_in_frontend | 1 |
| 5 | default | 0 | web/secure/base_url | https://magento.myolddomain.com/ |
| 6 | default | 0 | web/secure/use_in_adminhtml | 1 |
| 7 | default | 0 | general/locale/code | en_US |
| 8 | default | 0 | general/locale/timezone | Europe/London |
| 9 | default | 0 | currency/options/base | USD |
| 10 | default | 0 | currency/options/default | USD |
| 11 | default | 0 | currency/options/allow | USD |
| 12 | default | 0 | general/region/display_all | 1 |
| 13 | default | 0 | general/region/state_required | AT,CA,CH,DE,EE,ES,FI,FR,LT,LV,RO,US |
| 14 | default | 0 | catalog/category/root_id | 2 |
+-----------+---------+----------+-------------------------------+-------------------------------------+
14 rows in set (0.00 sec)
and you can change it as follows:
mysql> update core_config_data set value='http://magento.mynewdomain.com' where path='web/unsecure/base_url';
mysql> update core_config_data set value='https://magento.mynewdomain.com' where path='web/secure/base_url';
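After updating both rows, Magento will typically keep serving the cached old URLs until its configuration cache is cleared. A minimal sketch, assuming a standard Magento 1.x directory layout (you can also use System > Cache Management in the admin):
$ cd /path/to/magento
$ rm -rf var/cache/*
Then reload the site on the new domain.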
