I want to insert an image into a Word file. If I try the code below, the Word file shows unknown symbols like
"ΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰΰ"
My code:
figure,imshow(img1);
fid = fopen('mainfile.doc', 'a', 'n', 'UTF-8');
fwrite(fid, img1, 'char');
fclose(fid);
open('mainfile.doc');
fwrite won't do this directly; it just dumps the raw pixel values as bytes, which is why Word shows garbage. You could try the MATLAB Report Generator if you have access to it, or the File Exchange submission OfficeDoc.
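If you do have the Report Generator, a minimal sketch along these lines should produce a proper Word document with the picture embedded (assuming img1 is the image array from your code; the file names here are just examples):
import mlreportgen.dom.*
imwrite(img1, 'img1.png');           % save the image array to a PNG on disk first
d = Document('mainfile', 'docx');    % create a Word (docx) document
append(d, Image('img1.png'));        % embed the PNG as a picture
close(d);                            % write out mainfile.docx
rptview(d.OutputPath);               % open the result for inspection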
I have a set of PNG images at 300 dpi. Each image is full of printed (not handwritten) text and digits.
I want to extract each character and save it as a separate image.
For each character in an image I have its position stored in a CSV file.
For instance, in image1.png, for a given character “k” I have its position:
“k” = [left=656, right=736, top=144, down=286]
Is there any Python library which allows me to do that? As input I have the images (PNG format) and a CSV file that contains the position of each character of each image.
After executing the code I get stuck at this line:
img_charac=img[int(coords[2]):int(coords[3]),int(coords[0]):int(coords[1])]
I got the following error:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'NoneType' object has no attribute '__getitem__'
So if I understood correctly, this has nothing to do with image processing, just file opening, image cropping and saving.
With a CSV file and an input image in the format you describe, I get the individual cropped character images as results.
import cv2
import numpy as np
import csv
path_csv= #path to your csv
#stock coordinates of characters from your csv in numpy array
npa=np.genfromtxt(path_csv+"cs.csv", delimiter=',',skip_header=1,usecols=(1,2,3,4))
nb_charac=len(npa[:, 0]) #number of characters
#stock the actual letters of your csv in an array
characs=[]
cpt=0
#take characters
f = open(path_csv+"cs.csv", 'rt')
reader = csv.reader(f)
for row in reader:
    if cpt>=1: #skip header
        characs.append(str(row[0]))
    cpt+=1
#open your image
path_image= #path to your image
img=cv2.imread(path_image+"yourimagename.png")
path_save= #path you want to save to
#for every line on your csv,
for i in range(nb_charac):
    #get coordinates
    coords=npa[i,:]
    charac=characs[i]
    #actual cropping of the image (easy with numpy)
    img_charac=img[int(coords[2]):int(coords[3]),int(coords[0]):int(coords[1])]
    #saving the image
    cv2.imwrite(path_save+"carac"+str(i)+"_"+str(charac)+".png",img_charac)
This is sort of quick and dirty; the CSV opening is a bit messy, for example (you could get all the information with a single read and conversion), and it should be adapted to your CSV file anyway.
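As for the TypeError on the cropping line: it usually means that cv2.imread returned None because the image path or filename was wrong, so it is worth guarding for that (a small sketch using the same variable names as above):
img=cv2.imread(path_image+"yourimagename.png")
if img is None:
    # imread returns None instead of raising an error when the file cannot be read,
    # so check the path before trying to slice the array
    raise IOError("could not read image: "+path_image+"yourimagename.png")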
I want to open a csv file using SmarterCSV.process
market_csv = SmarterCSV.process(market)
p "just read #{market_csv}"
The problem is that the data is not read and this prints:
[]
However, if I attempt the same thing with the default CSV library implementation, the content of the file is read (the following print statement prints the file).
CSV.foreach(market) do |row|
  p row
end
The content of the file I was reading is of the form:
Date,Close
03/06/15,0.1634
02/06/15,0.1637
01/06/15,0.1638
31/05/15,0.1638
The problem could come from the row separator: the line endings are not the same depending on whether the file was produced on Windows ("\r\n"), old Mac OS ("\r") or Unix ("\n"). Try to identify the character and specify it in the SmarterCSV.process call, like this:
market_csv = SmarterCSV.process(market, row_sep: "\r")
p "just read #{market_csv}"
or like this:
market_csv = SmarterCSV.process(market, row_sep: :auto)
p "just read #{market_csv}"
I am trying to edit particular HTML files that I download in Python. I am running into a problem where I run my code to edit the file and my Python context locks up. I checked the file it's writing to and found that there are two files: the HTML file and a .bak file.
The HTML file starts out at 0 KB and the .bak file constantly grows to a point, maybe 12 MB or so, then the .html file grows to a larger size, then the .bak file grows again. This seems to cycle endlessly. The HTML file I am editing is 22 KB. I watched the output file grow to a gig once just to see if it would stop... It doesn't.
Here is the function I am using to edit the file:
def replace(self, search_str, replace_str):
    f = open(self.path,'r+')
    content = f.readlines()
    for i, line in enumerate(content):
        content[i] = line.replace(search_str, replace_str)
    f.writelines(content)
    f.close()
The issue, I imagine, relates to the fact that the HTML file, as downloaded, is mostly a single line with ~21,000 characters in it. Any ideas?
edit:
I have also tried another function, but get the same result:
def replace(self, search_str, replace_str):
    assert self.path != None, 'No file path provided.'
    fi = fileinput.FileInput(self.path,inplace=1)
    for line in fi:
        if search_str in line:
            line=line.replace(search_str,replace_str)
        print line
    fi.close()
Try using a generator. That's the way to go if you need to read a large file:
for line in open(self.path,'r+'):
    # do stuff with line
I re-wrote the function to write everything out to a new file and it works.
def replace(self, search_str, replace_str):
    f = open(self.path,'r+')
    new_path = self.path.split('.')[0]+'.TEMP'
    new_f = open(new_path,'w')
    new_lines = [x.replace(search_str, replace_str) for x in f]
    new_f.writelines(new_lines)
    f.close()
    new_f.close()
    os.remove(self.path)
    os.rename(new_path, self.path)
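For what it's worth, the fileinput attempt from the edit can also be made to work along these lines (a sketch assuming the same self.path attribute): with inplace=1, everything printed to stdout is redirected into the file, so every line has to be written back, and sys.stdout.write avoids adding an extra newline to each line.
import fileinput
import sys

def replace(self, search_str, replace_str):
    # inplace=1 redirects stdout into self.path while iterating over it
    for line in fileinput.FileInput(self.path, inplace=1):
        sys.stdout.write(line.replace(search_str, replace_str))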
So, I've faced this task. I wrote code, but somehow, instead of putting the content of the file into another file2, it simply erases the content of file2. What am I doing wrong?
Program Lesson9_Program2;
Var FName, Fname2, Txt, Txt2 : String;
    UserFile, UserFile2 : Text;
Begin
  FName := 'Textfile';
  Assign(UserFile,'E:\text.txt'); {assign a text file}
  Assign(UserFile2,'E:\text2.txt');
  Reset(UserFile);
  Reset(UserFile2);
  readln(UserFile2, Txt);
  readln(UserFile, Txt2);
  Close(UserFile2);
  Close(UserFile);
  Rewrite(UserFile);
  WriteLn(UserFile, Txt);
  WriteLn(UserFile, Txt2);
  Close(UserFile);
  Rewrite(UserFile);
End.
So the problem was the last Rewrite call. It turned out that Rewrite erases the file's content, so removing it fixed my program.
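For reference, a minimal sketch of copying every line of text.txt into text2.txt (same paths assumed): Reset opens a file for reading, Rewrite opens it for writing and truncates whatever was in it, and the loop copies until end of file.
Program CopyAllLines;
Var Src, Dst : Text;
    Line : String;
Begin
  Assign(Src, 'E:\text.txt');
  Assign(Dst, 'E:\text2.txt');
  Reset(Src);      { open the source for reading }
  Rewrite(Dst);    { open the destination for writing; this truncates it }
  While not Eof(Src) do
  Begin
    ReadLn(Src, Line);
    WriteLn(Dst, Line);
  End;
  Close(Src);
  Close(Dst);
End.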
I need to import some Perl-generated files into an Oracle database with LOAD DATA.
The Perl script fetches a web page and writes a CSV file.
Here is a simplified script:
use LWP::Simple;   # provides get()
use File::Slurp;   # provides write_file()
my $c = ( $user && $passwd )
    ? get("$protocol://$user:$passwd\#$url")
    : get("$protocol://$url");
write_file("$VZ_GET/$FileTS.$typ.csv",$c);
Here is a sample line from the web page:
5052;97;Jan;Ihrfelt 5053;97;Jari;Honko 5121;97;Katja;Keitaanniemi 5302;97;Ola;Södermark 5421;97;Sven;Sköld 5609;97;Peter;Näslund
The content of the web page is saved in the variable $c.
Here is a sample line of the CSV file:
5053;97;Jari;Honko
Here is the load command:
LOAD DATA
INTO TABLE LIQA
TRUNCATE
FIELDS TERMINATED BY ";"
(
LIQA_ANALYST_ID,
LIQA_FIRM_ID,
LIQA_ANALYST_FIRST_NAME,
LIQA_ANALYST_LAST_NAME,
LIQA_TS_INSERT DATE 'YYYYMMDDHH24MISS'
)
The command SELECT * FROM NLS_DATABASE_PARAMETERS WHERE PARAMETER = 'NLS_CHARACTERSET'; returns AL32UTF8.
The generated CSV file is recognized as UTF-8 Unicode text.
Anyhow, I can't import German characters: in the CSV file they are still correct, but not in the database.
I have also tried to convert $c like this:
$c = encode("iso-8859-1", $c);
The generated CSV file is still recognized as UTF-8 Unicode text.
I have no clue how I can fix it.
I have solved it:
$c = decode( 'utf-8', $c );
$c = encode( 'iso-8859-1' , $c );
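In context, the fix amounts to decoding the fetched page from UTF-8 into Perl's internal character form and re-encoding it as ISO-8859-1 before writing the CSV (a sketch reusing the variables from the simplified script above; Encode is a core module):
use LWP::Simple;               # provides get()
use File::Slurp;               # provides write_file()
use Encode qw(decode encode);  # core module for character set conversion

my $c = get("$protocol://$url");             # raw bytes as fetched (UTF-8)
$c = decode( 'utf-8', $c );                  # bytes -> Perl character string
$c = encode( 'iso-8859-1', $c );             # character string -> Latin-1 bytes
write_file("$VZ_GET/$FileTS.$typ.csv",$c);   # CSV now contains Latin-1 bytes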