Traceback (most recent call last):
File "/usr/lib/python2.7/dist-packages/odoo/addons/base/ir/ir_attachment.py", line 100, in _file_read
r = open(full_path,'rb').read().encode('base64')
IOError: [Errno 2] No such file or directory: u'/var/lib/odoo/.local/share/Odoo/filestore/coverpr1/f3/f3f11e52a3ead336749157f46e1c8d8a07de8b61'
I solved it by deleting all the records from the ir_attachment table. Use the query below to solve the problem:
DELETE FROM ir_attachment;
Try this:
DELETE FROM ir_attachment WHERE url LIKE '/web/content/%';
If you work on Linux, you can pull all the affected records from the log file:
grep 'No such file or directory' /var/log/odoo/odoo.log | cut -d'/' -f 10 | sort | uniq > /tmp/2delete.txt
Then open the file and create an SQL statement for each line found.
Example:
DELETE FROM ir_attachment WHERE store_fname LIKE '%ff3fb425a0e573436f30d1377e3e74ba095b3a4d%';
Next, execute all the SQL statements against your database.
In my case:
psql myOdooDB -U odooUser < /tmp/2deleteSQLFormat.txt
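Rather than writing each statement by hand, a small awk one-liner can generate that SQL file from the list produced above (a sketch; /tmp/2delete.txt and /tmp/2deleteSQLFormat.txt are the file names used in these steps):
# build one DELETE statement per missing-file hash
awk '{ print "DELETE FROM ir_attachment WHERE store_fname LIKE '\''%" $0 "%'\'';" }' /tmp/2delete.txt > /tmp/2deleteSQLFormat.txt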
Note that if you delete all the records from ir_attachment, it will remove the attachments from every module where documents have been attached.
I run my dbt command in a loop and save its output to a .yml file. This works well and accurately generates a schema in my .yml file:
for file in models/l30_mart/*.sql; do
    table=$(basename "$file" .sql)
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > test.yml
done
However, in the example above, I am just saving the test.yml file in the root directory. When I try to save the file under another path, for example models/l30_mart/test.yml, it doesn't work:
for file in models/l30_mart/*.sql; do
    table=$(basename "$file" .sql)
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > models/l30_mart/test.yml
done
In this case, when I open the test.yml file, I see this:
12:06:42 Running with dbt=1.0.1
12:06:43 Encountered an error:
Compilation Error
The schema file at models/l30_mart/test.yml is invalid because no version is specified. Please consult the documentation for more information on schema.yml syntax:
https://docs.getdbt.com/docs/schemayml-files
It's probably a syntax mistake. Do I need quotes or brackets to specify the path? Why can't I save the .yml file in the same folder? Similarly, when I try something like this to save a separate file per table name, it also doesn't work:
for file in models/l30_mart/*.sql; do
    table=$(basename "$file" .sql)
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > models/l30_mart/$table.yml
done
In this case, the files either have this output:
20:39:44 Running with dbt=1.0.1
20:39:45 Encountered an error:
Compilation Error
The schema file at models/l30_mart/firsttable.yml is invalid because no version is specified. Please consult the documentation for more information on schema.yml syntax:
https://docs.getdbt.com/docs/schemayml-files
or this (e.g. in the second table's .yml file):
20:39:48 Running with dbt=1.0.1
20:39:49 Encountered an error:
Parsing Error
Error reading dbt_4flow: l30_mart/firstablename.yml - Runtime Error
Syntax error near line 2
------------------------------
1 | 20:39:44 Running with dbt=1.0.1
2 | 20:39:45 Encountered an error:
3 | Compilation Error
4 | The schema file at models/l30_mart/firsttablename.yml is invalid because no version is specified. Please consult the documentation for more information on schema.yml syntax:
5 |
Raw Error:
------------------------------
mapping values are not allowed in this context
in "<unicode string>", line 2, column 31
Note that secondtablename.yml mentions firsttablename.yml.
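A likely explanation (my assumption, not stated in the thread): dbt treats every .yml file under models/ as a schema file, so as soon as the redirect creates a file there, the next dbt invocation tries to parse the half-written output and fails. A minimal sketch of a workaround is to write somewhere outside the models/ tree and move the files in afterwards:
tmpdir=$(mktemp -d)
for file in models/l30_mart/*.sql; do
    table=$(basename "$file" .sql)
    # redirect outside models/ so dbt never sees a partial schema file
    dbt run-operation generate_model_yaml --args "{\"model_name\": \"$table\"}" > "$tmpdir/$table.yml"
done
# move the finished files into the project once all runs have completed
mv "$tmpdir"/*.yml models/l30_mart/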
I have a script which extracts error messages from db2diag.log. I need to extract the SQL queries that caused the deadlock from the file below.
File contents: log.txt
db2inst1 , WSCOMUSR , MESSAGE : ADM5501I DB2 is performing lock escalation. The affected application
is named "db2jcc_application", and is associated with the workload
name "SYSDEFAULTUSERWORKLOAD" and application ID
"173.10.105.33.59586.13011817552" at member "0". The total number of
locks currently held is "1249935", and the target number of locks to
hold is "624967". The current statement being executed is "delete
from DMEXPLOG where CREATED < ? ". Reason code "1"

db2inst1 , WSCOMUSR , MESSAGE : ADM5501I DB2 is performing lock escalation. The affected application
is named "db2jcc_application", and is associated with the workload
name "SYSDEFAULTUSERWORKLOAD" and application ID
"173.10.105.33.59586.13011817552" at member "0". The total number of
locks currently held is "1249935", and the target number of locks to
hold is "624967". The current statement being executed is "select
* from DMEXPLOG where CREATED < ?". Reason code "1"
Required output: all the SQL queries
1. delete
from DMEXPLOG where CREATED < ?
2. select
* from DMEXPLOG where CREATED < ?
like this. I want all the SQL parts from the file. Is there a grep or awk/sed solution to get the required output?
Platform: Unix (AIX)
Your current example can be handled with:
sed -n '/statement being executed/ s/.*"//p; /Reason code/ s/".*//p' log
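Run against the sample above, that prints each statement across the two log lines it spans (a sketch of the expected output, modulo trailing whitespace):
delete
from DMEXPLOG where CREATED < ?
select
* from DMEXPLOG where CREATED < ?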
awk '{gsub(/^.*The current statement being executed is \"|\". Reason code.*$/,""); print NR". "$0}' log.txt
1. delete from DMEXPLOG where CREATED < ?
2.
3. select * from DMEXPLOG where CREATED < ?
The matching strings could no doubt be shorter, and item 2 is empty because the data you presented had an empty line between the two records. Are there empty lines in the actual data?
Maybe this helps you:
user@host:/tmp$ sed -n '/select/,/^$/p;/delete/,/^$/p;/insert/,/^$/p;/update/,/^$/p' log.txt | sed -n '/^[0-9]/!H;//x;$x;s/\n\([^A]\)/ \1/gp' | awk -F'"' '{printf("%d.\t %s\n", NR, $4)}'
1. delete from DMEXPLOG where CREATED < ?
2. select * from DMEXPLOG where CREATED < ?
I want to dump only a specific column to a text file using parquet-tools-1.8.1.jar, but I am not able to do so. I am trying the command below. Please note my column name has forward slashes.
parquet-tools-1.8.1.jar dump --column 'dir1/log1/job12121' '/hdfs-path/to/parquet file with space.parquet' > /home/local/parquet/output.text
Run the jar through hadoop jar with the parquet.tools.Main entry point, instead of executing it directly:
hadoop jar parquet-tools-1.8.1.jar parquet.tools.Main dump --column 'dir1/log1/job12121' '/hdfs-path/to/parquet file with space.parquet' > /home/local/parquet/output.text
Please use the following:
hadoop jar parquet-tools-1.8.1.jar dump -c dir1 log1 job12121 -m /hdfs-path/to/parquet file with space.parquet >> /home/local/parquet/output.text
Note: no single quotes around the input arguments.
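One caveat (my note, not part of the answer above): if the HDFS path genuinely contains spaces, the shell will split it into separate arguments unless it is quoted, even if the column arguments stay unquoted:
hadoop jar parquet-tools-1.8.1.jar dump -c dir1 log1 job12121 -m '/hdfs-path/to/parquet file with space.parquet' >> /home/local/parquet/output.text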
I am trying to do a bulk insert into tables from a CSV file using Oracle 11. My problem is that the database is on a remote machine, which I can connect to with SQL*Plus using this:
sqlplus username@oracle.machineName
Unfortunately, sqlldr has trouble connecting with the following command:
sqlldr userid=userName/PW@machinename control=BULK_LOAD_CSV_DATA.ctl log=sqlldr.log
Error is:
Message 2100 not found; No message file for product=RDBMS, facility=ULMessage 2100 not found; No message file for product=RDBMS, facility=UL
Having given up on this approach, I tried writing a basic SQL script, but I am unsure of the proper Oracle keyword for BULK. I know this works in MySQL, but I get:
unknown command beginning "BULK INSER..."
When running the script:
BULK INSERT <TABLE_NAME>
FROM 'CSVFILE.csv'
WITH
(
FIELDTERMINATOR = ',',
ROWTERMINATOR = '\n'
)
GO
I don't care which one works! Either one will do, I just need a little help.
Sorry, I am a dumb dumb! I forgot to add oracle/bin to my path!
If you have found this post, add the bin directory to your path (Linux) using the following commands:
export ORACLE_HOME=/path/to/oracle/client
export PATH=$PATH:$ORACLE_HOME/bin
Sorry if I wasted anyone's time ....
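For anyone who lands here for the sqlldr route: the command above expects a control file. A minimal sketch of what BULK_LOAD_CSV_DATA.ctl could look like, written via a heredoc (the table and column names are placeholders, not from this thread):
cat > BULK_LOAD_CSV_DATA.ctl <<'EOF'
LOAD DATA
INFILE 'CSVFILE.csv'
APPEND
INTO TABLE TABLE_NAME
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
(COL1, COL2, COL3)
EOF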
I just started using SQLite for our log-processing system, where I import a file that uses '#' as the field separator into a SQLite database.
If I run the following in SQLite repl
$ sqlite3 log.db
sqlite> .separator "#"
sqlite> .import output log_dump
It works (the import was successful). But if I try to do the same via a bash script:
sqlite log.db '.separator "#"'
sqlite log.db '.import output log_dump'
it doesn't. The separator reverts to '|' and I get an error saying there are insufficient columns:
output line 1: expected 12 columns of data but found 1
How can I overcome this issue?
You should pass both commands to sqlite in the same invocation; each sqlite call is a separate process, so the .separator setting from the first command is gone by the time the .import runs:
echo -e '.separator "#"\n.import output log_dump' | sqlite log.db
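Equivalently, a heredoc keeps the dot-commands readable without relying on echo -e (a sketch, using the same database and file names as above):
sqlite log.db <<'EOF'
.separator "#"
.import output log_dump
EOF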