I have a template image and need to overlay specific images onto it at given X, Y positions. Is there an equivalent in RMagick to
ImageList.new("https://365psd.com/images/istock/previews/8479/84796157-football-field-template-with-goal-on-top-view.jpg")
so that I can draw other images onto it and generate a single image?
You can read and write URIs in ruby-vips like this:
#!/usr/bin/ruby

require "vips"
require "down"

def new_from_uri(uri)
  byte_source = Down.open uri

  source = Vips::SourceCustom.new
  source.on_read do |length|
    puts "reading #{length} bytes from #{uri} ..."
    byte_source.read length
  end
  source.on_seek do |offset, whence|
    puts "seeking to #{offset}, #{whence} in #{uri}"
    byte_source.seek(offset, whence)
  end

  return Vips::Image.new_from_source source, "", access: :sequential
end

a = new_from_uri "https://upload.wikimedia.org/wikipedia/commons/a/a6/Big_Ben_Clock_Face.jpg"
b = new_from_uri "https://upload.wikimedia.org/wikipedia/commons/4/47/PNG_transparency_demonstration_1.png"

out = a.composite b, "over", x: 100, y: 100
out.write_to_file "x.jpg"
If you watch the console output, you can see it loading the two source images and interleaving the pixels.
The docs on Vips::Source have more details.
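And since the question asked about RMagick specifically, the equivalent overlay there would look something like this. This is a minimal sketch, assuming both images have already been saved to local files (the filenames are placeholders):
require 'rmagick'

# read the template and the image to overlay (placeholder filenames)
template = Magick::Image.read('template.jpg').first
overlay  = Magick::Image.read('overlay.png').first

# draw the overlay onto the template at x = 100, y = 100
result = template.composite(overlay, 100, 100, Magick::OverCompositeOp)
result.write('combined.jpg')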
I have this Ruby method for compressing a string:
def compress_data(data)
  output = StringIO.new
  gz = Zlib::GzipWriter.new(output)
  gz.write(data)
  gz.close
  compressed_data = output.string
  compressed_data
end
When I call this method with the same input, I get different outputs at different times. I am trying to get the byte array for the compressed outputs and compare them.
The output is Different when I run the code below:
input = "hello world"
output1 = (compress_data input).bytes.to_a
sleep 1
output2 = (compress_data input).bytes.to_a
if output1 == output2
  puts 'Same'
else
  puts 'Different'
end
The output is Same when I remove the sleep. Does the compression algorithm have something to do with the current time?
Option 1 - fixed mtime:
Yes. The compression time is stored in the gzip header. You can use the mtime= method to set the timestamp to a fixed value, which will resolve your problem:
gz = Zlib::GzipWriter.new(output)
gz.mtime = 1
gz.write(data)
gz.close
Note that the Ruby documentation says that setting mtime to zero will disable the timestamp. I tried it, and it does not work. I also looked at the source code, and this functionality appears to be missing, which seems like a bug. So you have to set it to something other than 0 (but see the comments below: it will be fixed in future releases).
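For illustration, here is a minimal self-contained version of the method from the question with a pinned mtime (the name compress_data_fixed is mine):
require 'zlib'
require 'stringio'

def compress_data_fixed(data)
  output = StringIO.new
  gz = Zlib::GzipWriter.new(output)
  gz.mtime = 1 # fixed timestamp instead of Time.now
  gz.write(data)
  gz.close
  output.string
end

a = compress_data_fixed("hello world").bytes
sleep 1
b = compress_data_fixed("hello world").bytes
puts(a == b ? 'Same' : 'Different') # => Same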
Option 2 - skip the header:
Another option is to simply skip the header when comparing. The header is 10 bytes long, so to check only the data:
data = compress_data(input).bytes[10..-1]
Note that you do not need to call to_a on bytes. It is already an Array:
String.bytes -> an_array
Returns an array of bytes in str. This is a shorthand for str.each_byte.to_a.
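Putting that together, a quick check along the lines of the question's own test (using the compress_data method from the question):
output1 = compress_data("hello world").bytes[10..-1]
sleep 1
output2 = compress_data("hello world").bytes[10..-1]
puts(output1 == output2 ? 'Same' : 'Different') # => Same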
In Ruby, I'm reading an .ifc file to get some information, but I can't decode it. For example, the file content:
"'S\X2\00E9\X0\jour/Cuisine'"
should be:
"'Séjour/Cuisine'"
I'm trying to encode it with:
puts ifcFileLine.encode("Windows-1252")
puts ifcFileLine.encode("ISO-8859-1")
puts ifcFileLine.encode("ISO-8859-5")
puts ifcFileLine.encode("iso-8859-1").force_encoding("utf-8")'
But nothing gives me what I need.
I don't know anything about IFC, but based solely on the page Denis linked to and your example input, this works:
ESCAPE_SEQUENCE_EXPR = /\\X2\\(.*?)\\X0\\/

def decode_ifc(str)
  str.gsub(ESCAPE_SEQUENCE_EXPR) do
    $1.gsub(/..../) { $&.to_i(16).chr(Encoding::UTF_8) }
  end
end

str = 'S\X2\00E9\X0\jour/Cuisine'

puts "Input:", str
puts "Output:", decode_ifc(str)
All this code does is replace every sequence of four characters (/..../) between the delimiters, which will each be a Unicode code point in hexadecimal, with the corresponding Unicode character.
Note that this code handles only this specific encoding. A quick glance at the implementation guide shows other encodings, including an \X4 directive for Unicode characters outside the Basic Multilingual Plane. This ought to get you started, though.
See it on eval.in: https://eval.in/776980
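If you also need the \X4 directive, a sketch along the same lines follows. I have not tested this against real IFC files; per ISO 10303-21, \X4 encodes each character as eight hex digits, but verify the exact delimiters against the implementation guide:
X4_ESCAPE_SEQUENCE_EXPR = /\\X4\\(.*?)\\X0\\/

def decode_ifc_x4(str)
  str.gsub(X4_ESCAPE_SEQUENCE_EXPR) do
    # each group of eight hex digits is one Unicode code point
    $1.gsub(/.{8}/) { $&.to_i(16).chr(Encoding::UTF_8) }
  end
end

puts decode_ifc_x4('clef: \X4\0001D11E\X0\ (G clef)')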
In case anyone is interested, here is some Python code I wrote that decodes three of the IFC encodings: \X, \X2\ and \S\
import re

def decodeIfc(txt):
    # In regex, "\" is hard to manage in Python... I use this workaround
    txt = txt.replace('\\', 'µµµ')
    txt = re.sub('µµµX2µµµ([0-9A-F]{4,})+µµµX0µµµ', decodeIfcX2, txt)
    txt = re.sub('µµµSµµµ(.)', decodeIfcS, txt)
    txt = re.sub('µµµXµµµ([0-9A-F]{2})', decodeIfcX, txt)
    txt = txt.replace('µµµ', '\\')
    return txt

def decodeIfcX2(match):
    # X2 encodes characters with a multiple of 4 hexadecimal digits.
    return ''.join(list(map(lambda x: chr(int(x, 16)), re.findall('([0-9A-F]{4})', match.group(1)))))

def decodeIfcS(match):
    return chr(ord(match.group(1)) + 128)

def decodeIfcX(match):
    # Sometimes IFC files were made on old Macs... which use MacRoman encoding.
    num = int(match.group(1), 16)
    if (num <= 127) | (num >= 160):
        return chr(num)
    else:
        return bytes.fromhex(match.group(1)).decode("macroman")
To get the image dimensions in Ruby, I tried using identify. I wanted to retrieve the output of this system call as a string:
str = system('identify -format "%[fx:w]x%[fx:h]" image.png')
output = `ls`
print output
But I'm getting the last lines of output, not the output of this particular system call.
Also, if there is a simpler way to get the image dimensions without external gems or libraries, please suggest it; that would be great!
Since you already use an external library (ImageMagick), you could use its Ruby wrapper RMagick:
require 'RMagick'
img = Magick::Image::read('image.png').first
arr = [img.columns, img.rows]
Here's an example of a very simple PNG parser:
data = File.binread('image.png', 100) # read first 100 bytes

if data[0, 8] == [137, 80, 78, 71, 13, 10, 26, 10].pack("C*")
  # file has a PNG file signature, let's get the image header chunk
  length, chunk_type = data[8, 8].unpack("l>a4")
  raise "unknown format, expecting image header" unless chunk_type == "IHDR"

  chunk_data = data[16, length].unpack("l>l>CCCCC")
  width              = chunk_data[0]
  height             = chunk_data[1]
  bit_depth          = chunk_data[2]
  color_type         = chunk_data[3]
  compression_method = chunk_data[4]
  filter_method      = chunk_data[5]
  interlace_method   = chunk_data[6]

  puts "image size: #{width}x#{height}"
else
  # handle other formats
end
Okay, I finally found a solution after some experiments.
str = `identify -format "%[fx:w]x%[fx:h]" image.png`
arr = str.split('x')
The array arr now contains the dimensions as [width, height] (note they are strings).
This worked for me! Please suggest other approaches that might be easier or simpler.
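One small refinement, using the same identify invocation: since split returns strings, you can map the two fields to integers in one step.
width, height = `identify -format "%[fx:w]x%[fx:h]" image.png`.split('x').map(&:to_i)
puts "#{width} x #{height}"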
In connection with my M.Tech project,
I want to know whether there is any algorithm to detect duplicate videos on YouTube.
For example (here are links to two videos):
random user upload
upload by official channel
Of these, the second is the official video, and T-Series holds its copyright.
Is YouTube officially doing something to remove duplicate videos?
Not only videos; duplicate YouTube channels exist as well.
Sometimes the original video has fewer views than the pirated version.
So, while searching, I found this
(see page 49 of the PDF).
What I learned from the given link:
A classifier is used to distinguish original videos from copyright-infringing ones.
Given a query, the top k search results are retrieved first. Thereafter, three parameters are used to classify the videos:
Number of subscribers
User profile
Username popularity
On the basis of these parameters, the original video is identified as described in the link.
EDIT 1:
There are basically two different objectives:
To identify the original video with the above method
To eliminate the duplicate videos
Obviously, identifying the original video is easier than finding all the duplicate videos,
so I preferred to first find the original video.
The approach I can think of so far
to improve the accuracy:
First find the original videos with the above method.
Then use the most popular publicized frames (possibly multiple) of that video to search on Google Images. This retrieves a list of duplicate videos in the Google Image search results.
After getting these duplicate videos, check frame by frame again until reaching a level of satisfaction (i.e. the retrieved videos were "exact" or "almost" duplicate copies of the original video).
Will this approach work?
If not, is there any better algorithm to improve upon the given method?
Please write in the comments section if I am unable to explain my approach clearly.
I will soon add some more details.
I've recently hacked together a small tool for that purpose. It's still a work in progress but usually pretty accurate. The idea is to simply compare the time between brightness maxima in the center of the video. That way it should work with different resolutions, frame rates and rotations of the video.
ffmpeg is used for decoding, imageio as a bridge to Python, numpy/scipy for the maxima computation, and some k-nearest-neighbor library (annoy, cyflann, hnsw) for comparison.
At the moment it's not polished at all, so you should know a little Python to run it, or simply copy the idea.
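To make the idea concrete, here is a minimal sketch of just the fingerprint step in Ruby (the method name and toy data are mine; it assumes you have already extracted the mean brightness of the center crop of every frame, e.g. by piping frames out of ffmpeg):
# Turn a per-frame brightness curve into the list of time gaps between
# local maxima; the gaps are what get compared across videos, since they
# survive rescaling, rotation and re-encoding.
def brightness_fingerprint(brightness, fps)
  maxima = (1...brightness.size - 1).select do |i|
    brightness[i] > brightness[i - 1] && brightness[i] > brightness[i + 1]
  end
  maxima.each_cons(2).map { |a, b| (b - a) / fps.to_f }
end

p brightness_fingerprint([10, 40, 12, 11, 55, 13, 60, 9], 25.0)
# => [0.12, 0.08]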
I had the same problem, so I wrote a program myself.
The problem was that I had videos of various formats and resolutions, so I needed to take a hash of each video's frames and compare them.
https://github.com/gklc811/duplicate_video_finder
You can just change the directories at the top and you are good to go.
from os import path, walk, makedirs, rename
from time import perf_counter  # time.clock was removed in Python 3.8
from imagehash import average_hash
from PIL import Image
from cv2 import VideoCapture, CAP_PROP_FRAME_COUNT, CAP_PROP_FRAME_WIDTH, CAP_PROP_FRAME_HEIGHT, CAP_PROP_FPS
from json import dump, load
from multiprocessing import Pool, cpu_count

input_vid_dir = r'C:\Users\gokul\Documents\data\\'
json_dir = r'C:\Users\gokul\Documents\db\\'
analyzed_dir = r'C:\Users\gokul\Documents\analyzed\\'
duplicate_dir = r'C:\Users\gokul\Documents\duplicate\\'

if not path.exists(json_dir):
    makedirs(json_dir)

if not path.exists(analyzed_dir):
    makedirs(analyzed_dir)

if not path.exists(duplicate_dir):
    makedirs(duplicate_dir)


def write_to_json(filename, data):
    file_full_path = json_dir + filename + ".json"
    with open(file_full_path, 'w') as file_pointer:
        dump(data, file_pointer)
    return


def video_to_json(filename):
    file_full_path = input_vid_dir + filename
    start = perf_counter()
    size = round(path.getsize(file_full_path) / 1024 / 1024, 2)

    video_pointer = VideoCapture(file_full_path)
    frame_count = int(VideoCapture.get(video_pointer, int(CAP_PROP_FRAME_COUNT)))
    width = int(VideoCapture.get(video_pointer, int(CAP_PROP_FRAME_WIDTH)))
    height = int(VideoCapture.get(video_pointer, int(CAP_PROP_FRAME_HEIGHT)))
    fps = int(VideoCapture.get(video_pointer, int(CAP_PROP_FPS)))

    success, image = video_pointer.read()
    video_hash = {}
    while success:
        # hash every frame and remember which file it came from
        frame_hash = average_hash(Image.fromarray(image))
        video_hash[str(frame_hash)] = filename
        success, image = video_pointer.read()

    stop = perf_counter()
    time_taken = stop - start

    print("Time taken for ", file_full_path, " is : ", time_taken)

    data_dict = dict()
    data_dict['size'] = size
    data_dict['time_taken'] = time_taken
    data_dict['fps'] = fps
    data_dict['height'] = height
    data_dict['width'] = width
    data_dict['frame_count'] = frame_count
    data_dict['filename'] = filename
    data_dict['video_hash'] = video_hash

    write_to_json(filename, data_dict)
    return


def multiprocess_video_to_json():
    files = next(walk(input_vid_dir))[2]
    processes = cpu_count()
    print(processes)

    pool = Pool(processes)
    start = perf_counter()
    pool.starmap_async(video_to_json, zip(files))
    pool.close()
    pool.join()
    stop = perf_counter()
    print("Time Taken : ", stop - start)


def key_with_max_val(d):
    max_value = 0
    required_key = ""
    for k in d:
        if d[k] > max_value:
            max_value = d[k]
            required_key = k
    return required_key


def duplicate_analyzer():
    files = next(walk(json_dir))[2]
    data_dict = {}

    for file in files:
        filename = json_dir + file
        with open(filename) as f:
            data = load(f)

        video_hash = data['video_hash']
        count = 0
        duplicate_file_dict = dict()

        for key in video_hash:
            count += 1
            if key in data_dict:
                # frame hash already seen in an earlier video
                if data_dict[key] in duplicate_file_dict:
                    duplicate_file_dict[data_dict[key]] = duplicate_file_dict[data_dict[key]] + 1
                else:
                    duplicate_file_dict[data_dict[key]] = 1
            else:
                data_dict[key] = video_hash[key]

        if duplicate_file_dict:
            duplicate_file = key_with_max_val(duplicate_file_dict)
            duplicate_percentage = ((duplicate_file_dict[duplicate_file] / count) * 100)
            if duplicate_percentage > 50:
                file = file[:-5]  # strip the ".json" extension
                print(file, " is dup of ", duplicate_file)
                src = analyzed_dir + file
                tgt = duplicate_dir + file
                if path.exists(src):
                    rename(src, tgt)
                # else:
                #     print("File already moved")


def mv_analyzed_file():
    files = next(walk(json_dir))[2]
    for filename in files:
        filename = filename[:-5]  # strip the ".json" extension
        src = input_vid_dir + filename
        tgt = analyzed_dir + filename
        if path.exists(src):
            rename(src, tgt)
        # else:
        #     print("File already moved")


if __name__ == '__main__':
    mv_analyzed_file()
    multiprocess_video_to_json()
    mv_analyzed_file()
    duplicate_analyzer()
String.length will only tell me how many characters are in the String. (In fact, before Ruby 1.9, it will only tell me how many bytes, which is even less useful.)
I'd really like to be able to find out how many 'en' wide a String is. For example:
'foo'.width
# => 3
'moo'.width
# => 3.5 # m's, w's, etc. are wide
'foi'.width
# => 2.5 # i's, j's, etc. are narrow
'foo bar'.width
# => 6.25 # spaces are very narrow
Even better would be if I could get the first n en of a String:
'foo'[0, 2.en]
# => "fo"
'filial'[0, 3.en]
# => "fili"
'foo bar baz'[0, 4.5en]
# => "foo b"
And better still would be if I could strategize the whole thing. Some people think a space should be 0.25en, some think it should be 0.33, etc.
You should use the RMagick gem to render a "Draw" object using the font you want (you can load .ttf files and such).
The code would look something like this:
require 'rmagick'

the_text = "TheTextYouWantTheWidthOf"

label = Magick::Draw.new
label.font = "Vera" # you can also specify a file name... check the RMagick docs to be sure
label.text_antialias(true)
label.font_style = Magick::NormalStyle
label.font_weight = Magick::BoldWeight
label.gravity = Magick::CenterGravity
label.text(0, 0, the_text)

metrics = label.get_type_metrics(the_text)
width = metrics.width
height = metrics.height
You can see it in action in my button maker here: http://risingcode.com/button/everybodywangchungtonite
Use the ttfunk gem to read the metrics from the font file. You can then get the width of a string of text in em. Here's my pull request to get this example added to the gem.
require 'rubygems'
require 'ttfunk'
require 'valuable'

# Everything you never wanted to know about glyphs:
# http://chanae.walon.org/pub/ttf/ttf_glyphs.htm

# this code is a substantial reworking of:
# https://github.com/prawnpdf/ttfunk/blob/master/examples/metrics.rb

class Font
  attr_reader :file

  def initialize(path_to_file)
    @file = TTFunk::File.open(path_to_file)
  end

  def width_of(string)
    string.split('').map { |char| character_width(char) }.inject { |sum, x| sum + x }
  end

  def character_width(character)
    width_in_units = horizontal_metrics.for(glyph_id(character)).advance_width
    width_in_units.to_f / units_per_em
  end

  def units_per_em
    @u_per_em ||= file.header.units_per_em
  end

  def horizontal_metrics
    @hm ||= file.horizontal_metrics
  end

  def glyph_id(character)
    character_code = character.unpack("U*").first
    file.cmap.unicode.first[character_code]
  end
end
Here it is in action:
>> din = Font.new("#{File.dirname(__FILE__)}/../../fonts/DIN/DINPro-Light.ttf")
>> din.width_of("Hypertension")
=> 5.832
# which is correct! Hypertension in that font takes up about 5.832 em! It's over by maybe ... 0.015.
You could attempt to create a standardized "width proportion table" to calculate an approximation: basically, you store the width of each character and then traverse the string adding up the widths.
I found this table here:
Left, Width, Advance values for ArialBD16 'c' through 'm'
Letter Left Width Advance
c 1 7 9
d 1 8 10
e 1 8 9
f 0 6 5
g 0 9 10
h 1 8 10
i 1 2 4
j -1 4 4
k 1 8 9
l 1 2 4
m 1 12 14
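A minimal sketch of that approach in Ruby, seeding the table with the advance values above (the default fallback value is my own rough guess, purely illustrative):
# advance widths in pixels for ArialBD16, taken from the table above
ADVANCE = {
  "c" => 9, "d" => 10, "e" => 9, "f" => 5, "g" => 10, "h" => 10,
  "i" => 4, "j" => 4, "k" => 9, "l" => 4, "m" => 14
}
ADVANCE.default = 9 # rough fallback for characters not in the table

def approximate_width(string)
  string.chars.sum { |char| ADVANCE[char] }
end

puts approximate_width("mild") # => 14 + 4 + 4 + 10 = 32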
If you want to get serious, I'd start by looking at webkit, gecko, and OO.org, but I guess the algorithms for kerning and size calculation are not trivial.
If you have ImageMagick installed you can access this information from the command line.
$ convert xc: -font ./.fonts/HelveticaRoundedLTStd-Bd.otf -pointsize 24 -debug annotate -annotate 0 'MyTestString' null: 2>&1
2010-11-02T19:17:48+00:00 0:00.010 0.010u 6.6.5 Annotate convert[22496]: annotate.c/RenderFreetype/1155/Annotate
Font ./.fonts/HelveticaRoundedLTStd-Bd.otf; font-encoding none; text-encoding none; pointsize 24
2010-11-02T19:17:48+00:00 0:00.010 0.010u 6.6.5 Annotate convert[22496]: annotate.c/GetTypeMetrics/736/Annotate
Metrics: text: MyTestString; width: 157; height: 29; ascent: 18; descent: -7; max advance: 24; bounds: 0,-5 20,17; origin: 158,0; pixels per em: 24,24; underline position: -1.5625; underline thickness: 0.78125
2010-11-02T19:17:48+00:00 0:00.010 0.010u 6.6.5 Annotate convert[22496]: annotate.c/RenderFreetype/1155/Annotate
Font ./.fonts/HelveticaRoundedLTStd-Bd.otf; font-encoding none; text-encoding none; pointsize 24
To do it from Ruby, use backticks:
result = `convert xc: -font #{path_to_font} -pointsize #{size} -debug annotate -annotate 0 '#{string}' null: 2>&1`
if result =~ /width: (\d+);/
  $1
end
This is a good problem!
I'm trying to solve it using pango/cairo in Ruby for SVG output. I am probably going to use Pango to calculate the width and then emit a simple SVG element.
I use the following code:
require "cairo"
require "pango"
paper = Cairo::Paper::A4_LANDSCAPE
TEXT = "Don't you love me anymore?"
def pac(surface)
cr = Cairo::Context.new(surface)
cr.select_font_face("Calibri",
Cairo::FONT_SLANT_NORMAL,
Cairo::FONT_WEIGHT_NORMAL)
cr.set_font_size(12)
extents = cr.text_extents(TEXT)
puts extents
end
Cairo::ImageSurface.new(*paper.size("pt")) do |surface|
cr = pac(surface)
end
Once I had to display a string array (containing the coming world days, current name days, etc.) in two lines, putting the line break after the appropriate string, so I had to determine the cumulative widths of the strings as printed in Arial. I opened my word editor, typed the alphabet, and classified the characters into two classes based on their width in the given font:
w="023456789AÁBCDEFGHJKLMNOÓÖŐPQRSTUÚÜŰWZYaábcdeghksoóöőpqwuúüűzymn".chars.yield_self{|z| z.zip(Array.new(z.size){1.5})}.to_h.merge("1rfiíjltIÍ ".chars.yield_self{|z| z.zip(Array.new(z.size){1})}.to_h)
w.default=1
nntd=["01-21:A vallások világnapja", "01-19:Kanut", "Kenéz", "Margaréta", "Márió", "Máriusz", "Megyer", "Sára", "Szultána", "Vázsony"]
nntd.sort_by!{|z| z.chars.map{|q| w[q]}.sum}.reverse
Then I was able to determine the position of the linebreak:
ind = nntd.collect.with_index.find_index { |z, i|
  nntd[0..i].join.chars.map { |q| w[q] }.sum >= nntd.join.chars.map { |q| w[q] }.sum / 2
}
t = [nntd[0..ind], nntd[ind + 1..-1]].map { |z| z.join(",") }.join("\n")
In the end I got a nice, balanced output divided into two lines:
01-21:A vallások világnapja,01-19:Margaréta,Szultána
Vázsony,Máriusz,Megyer,Kenéz,Kanut,Márió,Sára
This way I can check the coming world days and current name days at a glance.