Error - Plesk 12.5.30 - VPS

I installed Plesk over SSH with PuTTY, but I have one big problem: when I log in to Plesk, it redirects me to an error page, as you can see in the picture.
Error picture

The problem occurs because the smb_roles table is empty on your server. Take a backup of your Plesk database and the smb_roles table, then recreate the default SMB roles in the database.
First, create the backups:
mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` psa > /root/psa.sql
mysqldump -uadmin -p`cat /etc/psa/.psa.shadow` psa smb_roles > /root/psa.smb_roles.sql
Then, log in to your MySQL prompt and run the following commands.
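If you are not sure how to open that prompt, on a Plesk server the psa database is typically reachable like this (the admin password is read from /etc/psa/.psa.shadow, the same file used in the backup commands above; adjust if your setup differs):
mysql -uadmin -p`cat /etc/psa/.psa.shadow` psa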
mysql> INSERT INTO `smb_roles` VALUES (1,'Admin',1,1);
Query OK, 1 row affected (0.10 sec)
mysql> INSERT INTO `smb_roles` VALUES (2,'WebMaster',0,1);
Query OK, 1 row affected (0.21 sec)
mysql> INSERT INTO `smb_roles` VALUES (3,'Application User',0,1);
Query OK, 1 row affected (0.06 sec)
mysql> INSERT INTO `smb_roles` VALUES (4,'Accountant',1,1);
Query OK, 1 row affected (0.04 sec)
mysql> INSERT INTO `smb_roles` VALUES (5,'Mail User',0,1);
Query OK, 1 row affected (0.04 sec)

Related

Rows not copied to destination table from source table using Oracle SQL*Plus COPY command

I am using this copy command.
COPY FROM username/[pwd]#identifier INSERT SCHEMA_NAME.TABLE_NAME
USING SELECT * FROM SCHEMA_NAME.TABLE_NAME;
Note: the source and target tables are in different databases. The source table has around 19 million records, and both tables have 198 columns.
I get the message below when the COPY command is executed (I am not seeing any error message, but no rows are copied).
Array fetch/bind size is 5000. (arraysize is 5000)
Will commit after every 100 array binds. (copycommit is 100)
Maximum long size is 80. (long is 80)
0 rows selected from username/[pwd]#identifier.
0 rows inserted into SCHEMA_NAME.TABLE_NAME.
0 rows committed into SCHEMA_NAME.TABLE_NAME at DEFAULT HOST connection.
Please help me with this; any guidance on how to tackle the above issue is deeply appreciated.
I tried it on my local 11g XE database; it works OK.
SQL> create table test as select * From dept where 1 = 2;
Table created.
SQL> copy from scott/tiger#xe insert test using select * from dept;
Array fetch/bind size is 15. (arraysize is 15)
Will commit when done. (copycommit is 0)
Maximum long size is 80. (long is 80)
4 rows selected from scott#xe.
4 rows inserted into TEST.
4 rows committed into TEST at DEFAULT HOST connection.
SQL> select * From test;
DEPTNO DNAME LOC
---------- -------------- -------------
10 ACCOUNTING NEW YORK
20 RESEARCH DALLAS
30 SALES CHICAGO
40 OPERATIONS BOSTON
SQL>
Your query selected nothing and inserted nothing, which suggests the query doesn't do what you thought it would. Did you make sure that it is written correctly and that it actually fetches some rows?
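As a quick sanity check, you could run the SELECT part on its own against the source connection and confirm it actually returns rows (SCHEMA_NAME.TABLE_NAME below is the same placeholder used in your COPY command):
-- run this while connected to the source database; a count of 0 explains the empty COPY
SELECT COUNT(*) FROM SCHEMA_NAME.TABLE_NAME;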
As for disadvantages: the Oracle 19c documentation says that
The COPY command will be deprecated in future releases of SQL*Plus. After Oracle 9i, no new datatypes are supported by COPY.
so you'd probably rather use other options to move data around. That would be, for example (a sketch of the first option follows the list):
INSERT INTO
MERGE
data pump export & import
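A minimal sketch of the INSERT INTO option over a database link (this assumes a link named SRC_LINK pointing at the source database and identical table structures; the schema and table names are only placeholders):
-- pull all rows from the remote table through the database link
INSERT INTO schema_name.table_name
SELECT * FROM schema_name.table_name@SRC_LINK;
COMMIT;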

SQL Developer - query other users

I have a database that has, say, 100 other users.
I am logged in as the database owner and I want to query a specific table in all the other users schemas.
So let's say each schema has a table called propertyvalue.
I want to find out which schemas have this value set to TRUE.
Is there a way I can run a select statement against all the other users' schemas without specifically pointing to an individual schema?
Something like:
select * from otherusers.propertyvalue where value = 'TRUE', which doesn't work; I have tried that.
Thanks.
You can write a statement that generates the SELECT statements to do this:
SELECT 'SELECT '||owner||'.'||table_name||'.'||column_name ||
' FROM '||owner||'.'||table_name||';'
FROM All_Tab_Cols atc
WHERE atc.column_name = 'PROPERTYVALUE';
Run this statement as a user with SELECT privilege on the tables, then run the SELECTs that it outputs.
However, tables with multiple rows will return all rows. Are you expecting that there is only one row in each table?
You could also write an anonymous block that opens a cursor with the same statement and writes the results to a file, a table, or the output.
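A minimal sketch of that anonymous block, assuming the PROPERTYVALUE column is a VARCHAR2 (or implicitly convertible), that you have SELECT privileges on every listed table, and printing only the first row of each via DBMS_OUTPUT:
SET SERVEROUTPUT ON
DECLARE
  v_value VARCHAR2(4000);
BEGIN
  FOR r IN (SELECT owner, table_name
              FROM all_tab_cols
             WHERE column_name = 'PROPERTYVALUE') LOOP
    BEGIN
      -- dictionary names are stored in their exact case, so quote them
      EXECUTE IMMEDIATE
        'SELECT propertyvalue FROM "' || r.owner || '"."' || r.table_name ||
        '" WHERE ROWNUM = 1'
        INTO v_value;
      DBMS_OUTPUT.PUT_LINE(r.owner || '.' || r.table_name || ' = ' || v_value);
    EXCEPTION
      WHEN NO_DATA_FOUND THEN
        DBMS_OUTPUT.PUT_LINE(r.owner || '.' || r.table_name || ' has no rows');
    END;
  END LOOP;
END;
/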

Oracle update not working

Hello everyone, I've tried to update a row in my database using SQL*Plus. The query is executed, but the values stay the same.
Here is my code:
update pilote set nom= 'yees' where id_pilote= 111;
1 row updated
Use the COMMIT command after the UPDATE, then re-query the table contents.
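A minimal sketch of the full sequence, using the table and values from the question:
UPDATE pilote SET nom = 'yees' WHERE id_pilote = 111;
COMMIT;
-- re-query to confirm the change is visible (and persists for other sessions)
SELECT nom FROM pilote WHERE id_pilote = 111;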

Oracle - Error (reference to object_id)

SELECT object_id from dbname.tablename
This query has to be executed against Oracle 11g. I get errors when I execute it.
I am doing a migration from Sybase to Oracle, and in Oracle this query fails.
What could be the problem? Please suggest a solution.
"What could be the problem."
All sorts of things. Since you failed to state what errors you're getting, we can only guess, e.g.:
Table not found
No SELECT privilege on table
dbname not a valid schema
object_id not a column in the table
Not connected to a running Oracle instance
Trying to run the statement in an environment that doesn't understand SQL
etc, etc, ...
If all you want is to check that the table exists, you could do this:
SELECT 1 FROM dba_tables WHERE owner = 'DBNAME' AND table_name = 'TABLENAME';
If you want to check that you can query the table, you could do this:
SELECT 1 FROM schemaname.tablename WHERE 1=0;
If you want to check if the table has any rows, you could do this:
SELECT 1 FROM schemaname.tablename WHERE ROWNUM <= 1;
What will you do with the result? If you only want a unique ID for each row, you can use SELECT ROWID FROM dbname.tablename instead.
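For illustration, a small hedged example (limited to a few rows just to see the pseudocolumn; dbname.tablename is the same placeholder as above):
-- ROWID is a pseudocolumn that uniquely identifies each row in the table
SELECT t.ROWID AS row_id, t.* FROM dbname.tablename t WHERE ROWNUM <= 5;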

MySQL query still executing after a day?

I'm trying to isolate duplicates in a 500 MB database and have tried two ways to do it. One is creating a new table and grouping:
CREATE TABLE test_table as
SELECT * FROM items WHERE 1 GROUP BY title;
But it's been running for an hour and in MySQL Admin it says the status is Locked.
The other way I tried was to delete duplicates with this:
DELETE bad_rows.*
from items as bad_rows
inner join (
select post_title, MIN(id) as min_id
from items
group by title
having count(*) > 1
) as good_rows on good_rows.post_title = bad_rows.post_title;
...and this one has been running for 24 hours now; MySQL Admin tells me it's "Sending data"...
Do you think either of these queries is actually still running? How can I find out if it's hung? (I'm on Apple OS X 10.5.7.)
You can do this:
alter ignore table items add unique index(title);
This will add a unique index and remove any existing duplicates at the same time, which will also prevent any future duplicates from occurring. Make sure you take a backup before running this command.
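A minimal sketch of such a backup with mysqldump (your_database is a placeholder; substitute your own database name):
# dump only the items table before altering it
mysqldump -u root -p your_database items > items_backup.sql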
