Camera Calibration Principal Point Equals Zero

I am using the Caltech calibration toolbox to calibrate my high-precision camera. After completing all the steps without a mistake, I got this output:
Calibration results (with uncertainties):
Focal Length: fc = [ 19492.53297 19388.55257 ] ± [ 525.74266 512.39590 ]
Principal point: cc = [ 2148.07762 -224.68443 ] ± [ 0.00000 0.00000 ]
Skew: alpha_c = [ 0.00000 ] ± [ 0.00000 ] => angle of pixel axes = 90.00000 ± 0.00000 degrees
Distortion: kc = [ -0.83041 11.44633 0.01176 -0.01231 0.00000 ] ± [ 0.31872 5.17178 0.02315 0.00453 0.00000 ]
Pixel error: err = [ 1.70741 1.96577 ]
Note: The numerical errors are approximately three times the standard deviations (for reference).
Now I want to construct my K matrix, which is:
[fx, s, cx]
[0, fy, cy]
[0, 0, 1]
so in my case this would be:
def get_camera_intrinsic():
    K = np.zeros((3, 3), dtype='float64')
    K[0, 0], K[0, 2] = 525.74266, 0
    K[1, 1], K[1, 2] = 512.39590, 0
    K[2, 2] = 1.
    return K
which makes no sense to me, because the principal point being 0 is weird:
Principal point: cc = [ 2148.07762 -224.68443 ] ± [ 0.00000 0.00000 ]
Any thoughts?
EDIT: Here is a sample image I used during the calibration. Due to size issues, it will take some time for me to resize the images and change their file types so I can upload all of them here.
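For reference, a sketch of how K would be assembled from this output, assuming the toolbox convention stated above: the first bracketed numbers are the estimated values, and the numbers after the ± are the uncertainties (not parameters):

```python
import numpy as np

def get_camera_intrinsic():
    # Estimated values from the calibration output (the numbers
    # before the +/- signs); the +/- columns are uncertainties.
    fc = [19492.53297, 19388.55257]   # focal lengths fx, fy
    cc = [2148.07762, -224.68443]     # principal point cx, cy
    K = np.array([[fc[0], 0.0,   cc[0]],
                  [0.0,   fc[1], cc[1]],
                  [0.0,   0.0,   1.0]])
    return K
```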

Related

Epipolar Geometry Pure Translation: Implementing Equation 9.6 from the book Multiple View Geometry

Implementing Equation 9.6
We want to calculate how each pixel in the image will move when we know the camera translation and the depth of each pixel.
The book Multiple View Geometry gives the solution in Chapter 9, Section 9.5.
height = 512
width = 512
f = 711.11127387
# Camera intrinsic parameters
K = np.array([[f,   0.0, width/2],
              [0.0, f,   height/2],
              [0.0, 0.0, 1.0]])
Kinv = np.array([[1, 0, -width/2],
                 [0, 1, -height/2],
                 [0, 0, f]])
Kinv = np.linalg.inv(K)
# Translation on the Z axis; changing dist will change the height
T = np.array([[0.0],
              [0.0],
              [-0.1]])
plt.figure(figsize=(10, 10))
ax = plt.subplot(1, 1, 1)
plt.imshow(old_seg)
for row, col in [(150, 100), (450, 350)]:
    ppp = np.array([[col], [row], [0]])
    print(" Point ", ppp)
    plt.scatter(ppp[0][0], ppp[1][0])
    # Equation 9.6
    new_pt = ppp + K.dot(T / old_depth[row][col])
    print(K)
    print(T / old_depth[row][col])
    print(K.dot(T / old_depth[row][col]))
    plt.scatter(new_pt[0][0], new_pt[1][0], c='c', marker=">")
    ax.plot([ppp[0][0], new_pt[0][0]], [ppp[1][0], new_pt[1][0]], c='g', alpha=0.5)
Output
Point [[100]
[150]
[ 0]]
[[711.11127387 0. 256. ]
[ 0. 711.11127387 256. ]
[ 0. 0. 1. ]]
[[ 0. ]
[ 0. ]
[-0.16262454]]
[[-41.63188234]
[-41.63188234]
[ -0.16262454]]
Point [[350]
[450]
[ 0]]
[[711.11127387 0. 256. ]
[ 0. 711.11127387 256. ]
[ 0. 0. 1. ]]
[[ 0. ]
[ 0. ]
[-0.19715078]]
[[-50.47059987]
[-50.47059987]
[ -0.19715078]]
I expect the bottom point to move in the opposite direction.
What mistake am I making?
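Not a definitive answer, but one likely issue is the homogeneous coordinate: for Equation 9.6 the pixel should be written as (x, y, 1), not (x, y, 0), and the result divided by its third component before plotting. A sketch of that fix, with hypothetical depths chosen to match the printed `T/old_depth` values above:

```python
import numpy as np

f = 711.11127387
width = height = 512
K = np.array([[f,   0.0, width / 2],
              [0.0, f,   height / 2],
              [0.0, 0.0, 1.0]])
T = np.array([[0.0], [0.0], [-0.1]])  # translation along Z

def moved_pixel(col, row, depth):
    # Homogeneous pixel coordinate: third component is 1, not 0
    p = np.array([[col], [row], [1.0]])
    p_new = p + K.dot(T / depth)      # Equation 9.6
    p_new = p_new / p_new[2, 0]       # normalize back to pixel coords
    return p_new[:2, 0]

# Depths are hypothetical, picked to reproduce T/old_depth printed above
top = moved_pixel(100, 150, 0.615)     # moves up-left, away from center
bottom = moved_pixel(350, 450, 0.507)  # moves down-right, the opposite way
```

With the normalization step the two points move in opposite screen directions (radially from the principal point), as expected.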

How to add elevation to a buffer polygon in geopandas

How would I add a zero elevation value to each xy component of a buffer polygon, i.e.
Transform POLYGON ((0.20000 0.00000, 0.19904 -0.01960, ...)) into POLYGON ((0.20000 0.00000 0, 0.19904 -0.01960 0, ...))
I think you're asking how you can add a third dimension to your shapely objects and have them set to zero by default.
Here's a way to do so:
import shapely.ops

def add_zero_z(geom):
    def _add_zero_z(x, y):
        return x, y, [0 for _ in x]
    return shapely.ops.transform(_add_zero_z, geom)
Here's an example of how to use it:
import shapely.geometry
x_2D = shapely.geometry.Point((0,0))
print(x_2D.wkt)
# POINT (0 0)
x_3D = add_zero_z(x_2D)
print(x_3D.wkt)
# POINT (0 0 0)
You can also apply the function to a whole GeoDataFrame:
import geopandas as gpd
gdf = gpd.GeoDataFrame({'id': [1, 2],
                        'geometry': [shapely.geometry.Point((0, 0)),
                                     shapely.geometry.Point((1, 1))]},
                       geometry='geometry')
# Applying the function to the "geometry" column and replacing it
gdf['geometry'] = gdf['geometry'].apply(lambda geom: add_zero_z(geom))
print(gdf)
# id geometry
# 0 1 POINT Z (0.00000 0.00000 0.00000)
# 1 2 POINT Z (1.00000 1.00000 0.00000)
Credit
My idea for this answer came from this post by user mikewatt.

Plot LINESTRING Z from GeoDataFrame using pydeck's PathLayer (or TripLayer)

I have a geodataframe with LINESTRING Z geometries:
   TimeUTC                    Latitude  Longitude  AGL   geometry
0  2021-06-16 00:34:04+00:00  42.8354   -70.9196   82.2  LINESTRING Z (42.83541343273769 -70.91961015378617 82.2, 42.83541343273769 -70.91961015378617 82.2)
1  2021-06-14 13:32:18+00:00  42.8467   -70.8192   66.3  LINESTRING Z (42.84674080836037 -70.81919357049679 66.3, 42.84674080836037 -70.81919357049679 66.3)
2  2021-06-18 23:56:05+00:00  43.0788   -70.7541   0.9   LINESTRING Z (43.07882882269921 -70.75414567194126 0.9, 43.07884601143309 -70.75416286067514 0, 43.07885174101104 -70.75416286067514 0, 43.07884028185512 -70.75415713109717 0, 43.07884601143309 -70.75414567194126 0, 43.07884601143309 -70.75414567194126 0)
I can plot the component points using pydeck's ScatterplotLayer using the raw
(not geo) dataframe, but I need to also plot the full, smooth track.
I've tried this:
layers = [
    pdk.Layer(
        type="PathLayer",
        data=tracks,
        get_path="geometry",
        width_scale=20,
        width_min_pixels=5,
        get_width=5,
        get_color=[180, 0, 200, 140],
        pickable=True,
    ),
]
view_state = pdk.ViewState(
    latitude=gdf_polygon.centroid.x,
    longitude=gdf_polygon.centroid.y,
    zoom=6,
    min_zoom=5,
    max_zoom=15,
    pitch=40.5,
    bearing=-27.36)
r = pdk.Deck(layers=[layers], initial_view_state=view_state)
return(r)
Which silently fails. Try as I might, I cannot find a way to convert the
LINESTRING Z's (and I can do without the Z component if need be) to an object
that pydeck will accept.
I found a way to extract the info needed from GeoPandas and make it work in pydeck. You just need to apply a function that extracts the coordinates from the shapely geometries as a list. Here is a fully reproducible example:
import shapely
import numpy as np
import pandas as pd
import pydeck as pdk
import geopandas as gpd
linestring_a = shapely.geometry.LineString([[0, 1, 2],
                                            [3, 4, 5],
                                            [6, 7, 8]])
linestring_b = shapely.geometry.LineString([[7, 15, 1],
                                            [8, 14, 2],
                                            [9, 13, 3]])
multilinestring = shapely.geometry.MultiLineString([[[10, 11, 2],
                                                     [13, 14, 5],
                                                     [16, 17, 8]],
                                                    [[19, 10, 11],
                                                     [12, 15, 4],
                                                     [10, 13, 0]]])
gdf = gpd.GeoDataFrame({'id': [1, 2, 3],
                        'geometry': [linestring_a,
                                     linestring_b,
                                     multilinestring],
                        'color_hex': ['#ed1c24',
                                      '#faa61a',
                                      '#ffe800']})

# Function that transforms a hex string into an RGB tuple
def hex_to_rgb(h):
    h = h.lstrip("#")
    return tuple(int(h[i:i + 2], 16) for i in (0, 2, 4))

# Applying the HEX-to-RGB function above
gdf['color_rgb'] = gdf['color_hex'].apply(hex_to_rgb)

# Function that extracts the 2D list of coordinates from an input geometry
def my_geom_coord_extractor(input_geom):
    if (input_geom is None) or (input_geom is np.nan):
        return []
    else:
        if input_geom.type[:len('multi')].lower() == 'multi':
            full_coord_list = []
            for geom_part in input_geom.geoms:
                geom_part_2d_coords = [[coord[0], coord[1]] for coord in list(geom_part.coords)]
                full_coord_list.append(geom_part_2d_coords)
        else:
            full_coord_list = [[coord[0], coord[1]] for coord in list(input_geom.coords)]
        return full_coord_list

# Applying the coordinate list extractor to the dataframe
gdf['coord_list'] = gdf['geometry'].apply(my_geom_coord_extractor)
gdf_polygon = gdf.unary_union.convex_hull

# Establishing the default view for the pydeck output
view_state = pdk.ViewState(latitude=gdf_polygon.centroid.coords[0][1],
                           longitude=gdf_polygon.centroid.coords[0][0],
                           zoom=4)

# Creating the pydeck layer
layer = pdk.Layer(
    type="PathLayer",
    data=gdf,
    pickable=True,
    get_color='color_rgb',
    width_scale=20,
    width_min_pixels=2,
    get_path="coord_list",
    get_width=5,
)

# Finalizing the pydeck output
r = pdk.Deck(layers=[layer], initial_view_state=view_state, tooltip={"text": "{id}"})
r.to_html("path_layer.html")
Here's the output it yields:
Big caveat
It seems like pydeck isn't able to deal with MultiLineString geometries. Notice how, in the example above, my original dataframe had 3 geometries, but only 2 lines were drawn in the screenshot.
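As a possible workaround (a sketch, not verified against every pydeck version): split each MultiLineString's parts into separate rows before building the layer, so every row carries a single path. The helper below is hypothetical and operates on plain records shaped like the coordinate extractor's output:

```python
def explode_multi_paths(records):
    """Turn records whose 'coord_list' holds several paths (one per
    MultiLineString part) into one record per path."""
    out = []
    for rec in records:
        coords = rec['coord_list']
        # A LineString extract looks like [[x, y], ...]; a MultiLineString
        # extract looks like [[[x, y], ...], ...] -- detect by nesting depth.
        if coords and isinstance(coords[0][0], (list, tuple)):
            for part in coords:
                out.append({**rec, 'coord_list': part})
        else:
            out.append(rec)
    return out

records = [{'id': 1, 'coord_list': [[0, 1], [3, 4], [6, 7]]},
           {'id': 3, 'coord_list': [[[10, 11], [13, 14]], [[19, 10], [12, 15]]]}]
flat = explode_multi_paths(records)  # 3 rows: id 1 once, id 3 twice
```

Feeding the exploded rows to the PathLayer should then draw every part, at the cost of duplicated attribute values.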

HSI to RGB color conversion

I'm trying to implement HSI <=> RGB color conversion
There are formulas on the wiki https://en.wikipedia.org/wiki/HSL_and_HSV#HSI_to_RGB
RGB to HSI seems to work fine.
However, I have difficulties with HSI to RGB.
I will write in Ruby and the examples will be in Ruby; however, if you write in JS/Python/etc., I think it will be understandable too, since it's just math.
Online ruby Interpreter.
def hsi_to_rgb(hsi_arr)
  # to float
  hue, saturation, intensity = hsi_arr.map(&:to_f)
  hue /= 60
  z = 1 - (hue % 2 - 1).abs
  chroma = (3 * intensity * saturation) / (1 + z)
  x = chroma * z
  point = case hue
          when 0..1 then [chroma, x, 0]
          when 1..2 then [x, chroma, 0]
          when 2..3 then [0, chroma, x]
          when 3..4 then [0, x, chroma]
          when 4..5 then [x, 0, chroma]
          when 5..6 then [chroma, 0, x]
          else [0, 0, 0]
          end
  # calculate rgb & scale into range 0..255
  m = intensity * (1 - saturation)
  point.map { |channel| ((channel + m) * 255).round }
end
So, with simple HTML colors, everything seemed to work.
Until I tried values like this:
p hsi_to_rgb([0, 1, 1]) # => [765, 0, 0]
p hsi_to_rgb([360, 1, 1]) # => [765, 0, 0]
p hsi_to_rgb([357, 1, 1]) # => [729, 0, 36]
p hsi_to_rgb([357, 1, 0.5]) # => [364, 0, 18]
The values obtained are clearly incorrect, outside the range 0..255.
I have also seen implementations using trigonometric functions:
https://hypjudy.github.io/images/dip/hsi2rgb.jpg
However, I didn't get the right results either.
The only online RGB to HSI converter I found: https://www.picturetopeople.org/color_converter.html
Just to have something to compare it to.
Your implementation looks correct (assuming Wikipedia is correct).
The only missing part is limiting the RGB output to [0, 255].
As Giacomo Catenazzi commented, instead of clipping to [0, 255], it is better to divide R, G, B by max(R, G, B) when the maximum is above 255.
In most color-space conversion formulas there are values that are in the valid range of the source color space but fall outside the valid range of the destination color space.
The common solution is clipping the result to the valid range.
In some cases there are undefined values.
Take a look at the first 3 rows of the examples table: the Hue is marked N/A for white, black, and gray colors.
All of the sample HSI values that you chose:
[0, 1, 1]
[360, 1, 1]
[357, 1, 1]
[357, 1, 0.5]
fall outside the valid range of the RGB color space (after HSI-to-RGB conversion).
I suggest testing the valid tuples from the examples table:
H S I R G B
--- ---- ---- ---- ---- ----
0 100% 33.3% 100% 0% 0%
60 100% 50% 75% 75% 0%
120 100% 16.7% 0% 50% 0%
180 40% 83.3% 50% 100% 100%
240 25% 66.7% 50% 50% 100%
300 57.1% 58.3% 75% 25% 75%
61.8 69.9% 47.1% 62.8% 64.3% 14.2%
251.1 75.6% 42.6% 25.5% 10.4% 91.8%
134.9 66.7% 34.9% 11.6% 67.5% 25.5%
49.5 91.1% 59.3% 94.1% 78.5% 5.3%
283.7 68.6% 59.6% 70.4% 18.7% 89.7%
14.3 44.6% 57% 93.1% 46.3% 31.6%
56.9 36.3% 83.5% 99.8% 97.4% 53.2%
162.4 80% 49.5% 9.9% 79.5% 59.1%
248.3 53.3% 31.9% 21.1% 14.9% 59.7%
240.5 13.5% 57% 49.5% 49.3% 72.1%
I don't know the syntax of the Ruby programming language, but your implementation looks correct.
Here is a Python implementation that matches the conversion formula from Wikipedia:
def hsi_to_rgb(hsi):
    """
    Convert HSI tuple to RGB tuple (without scaling the result by 255)
    Formula: https://en.wikipedia.org/wiki/HSL_and_HSV#HSI_to_RGB
    H - Range [0, 360] (degrees)
    S - Range [0, 1]
    I - Range [0, 1]
    The R,G,B output range is [0, 1]
    """
    H, S, I = float(hsi[0]), float(hsi[1]), float(hsi[2])
    Htag = H / 60
    Z = 1 - abs(Htag % 2 - 1)
    C = (3 * I * S) / (1 + Z)
    X = C * Z
    if 0 <= Htag <= 1:
        R1, G1, B1 = C, X, 0
    elif 1 <= Htag <= 2:
        R1, G1, B1 = X, C, 0
    elif 2 <= Htag <= 3:
        R1, G1, B1 = 0, C, X
    elif 3 <= Htag <= 4:
        R1, G1, B1 = 0, X, C
    elif 4 <= Htag <= 5:
        R1, G1, B1 = X, 0, C
    elif 5 <= Htag <= 6:
        R1, G1, B1 = C, 0, X
    else:
        R1, G1, B1 = 0, 0, 0  # Undefined
    # Calculate rgb
    m = I * (1 - S)
    R, G, B = R1 + m, G1 + m, B1 + m
    # Limit R, G, B to valid range:
    #R = max(min(R, 1), 0)
    #G = max(min(G, 1), 0)
    #B = max(min(B, 1), 0)
    # Handling RGB values above 1:
    # -----------------------------
    # Avoiding weird colours - see the comment of Giacomo Catenazzi.
    # Find the maximum of R, G, B, and if it is above 1, divide all
    # three channels by that maximum.
    max_rgb = max((R, G, B))
    if max_rgb > 1:
        R /= max_rgb
        G /= max_rgb
        B /= max_rgb
    return (R, G, B)

def rgb2percent(rgb):
    """ Convert RGB tuple to percentage with one-decimal-digit accuracy """
    return (round(rgb[0]*1000.0)/10, round(rgb[1]*1000.0)/10, round(rgb[2]*1000.0)/10)
print(rgb2percent(hsi_to_rgb([ 0, 100/100, 33.3/100]))) # => (99.9, 0.0, 0.0) Wiki: 100% 0% 0%
print(rgb2percent(hsi_to_rgb([ 60, 100/100, 50/100]))) # => (75.0, 75.0, 0.0) Wiki: 75% 75% 0%
print(rgb2percent(hsi_to_rgb([ 120, 100/100, 16.7/100]))) # => ( 0.0, 50.1, 0.0) Wiki: 0% 50% 0%
print(rgb2percent(hsi_to_rgb([ 180, 40/100, 83.3/100]))) # => (50.0, 100.0, 100.0) Wiki: 50% 100% 100%
print(rgb2percent(hsi_to_rgb([ 240, 25/100, 66.7/100]))) # => (50.0, 50.0, 100.0) Wiki: 50% 50% 100%
print(rgb2percent(hsi_to_rgb([ 300, 57.1/100, 58.3/100]))) # => (74.9, 25.0, 74.9) Wiki: 75% 25% 75%
print(rgb2percent(hsi_to_rgb([ 61.8, 69.9/100, 47.1/100]))) # => (62.8, 64.3, 14.2) Wiki: 62.8% 64.3% 14.2%
print(rgb2percent(hsi_to_rgb([251.1, 75.6/100, 42.6/100]))) # => (25.5, 10.4, 91.9) Wiki: 25.5% 10.4% 91.8%
print(rgb2percent(hsi_to_rgb([134.9, 66.7/100, 34.9/100]))) # => (11.6, 67.6, 25.5) Wiki: 11.6% 67.5% 25.5%
print(rgb2percent(hsi_to_rgb([ 49.5, 91.1/100, 59.3/100]))) # => (94.1, 78.5, 5.3) Wiki: 94.1% 78.5% 5.3%
print(rgb2percent(hsi_to_rgb([283.7, 68.6/100, 59.6/100]))) # => (70.4, 18.7, 89.7) Wiki: 70.4% 18.7% 89.7%
print(rgb2percent(hsi_to_rgb([ 14.3, 44.6/100, 57/100]))) # => (93.2, 46.3, 31.6) Wiki: 93.1% 46.3% 31.6%
print(rgb2percent(hsi_to_rgb([ 56.9, 36.3/100, 83.5/100]))) # => (99.9, 97.4, 53.2) Wiki: 99.8% 97.4% 53.2%
print(rgb2percent(hsi_to_rgb([162.4, 80/100, 49.5/100]))) # => ( 9.9, 79.5, 59.1) Wiki: 9.9% 79.5% 59.1%
print(rgb2percent(hsi_to_rgb([248.3, 53.3/100, 31.9/100]))) # => (21.1, 14.9, 59.7) Wiki: 21.1% 14.9% 59.7%
print(rgb2percent(hsi_to_rgb([240.5, 13.5/100, 57/100]))) # => (49.5, 49.3, 72.2) Wiki: 49.5% 49.3% 72.1%
As you can see, the results match the examples table from Wikipedia.
Comparisons with the WIKI color table:
def print_rgb(rgb)
  puts "[%s]" % rgb.map { |val| "%5.1f" % ((val / 255.0) * 100) }.join(", ")
end
print_rgb hsi_to_rgb([ 0, 100/100.0, 33.3/100.0]) # => [100.0, 0.0, 0.0] Wiki: 100% 0% 0%
print_rgb hsi_to_rgb([ 60, 100/100.0, 50/100.0]) # => [ 74.9, 74.9, 0.0] Wiki: 75% 75% 0%
print_rgb hsi_to_rgb([ 120, 100/100.0, 16.7/100.0]) # => [ 0.0, 50.2, 0.0] Wiki: 0% 50% 0%
print_rgb hsi_to_rgb([ 180, 40/100.0, 83.3/100.0]) # => [ 49.8, 100.0, 100.0] Wiki: 50% 100% 100%
print_rgb hsi_to_rgb([ 240, 25/100.0, 66.7/100.0]) # => [ 50.2, 50.2, 100.0] Wiki: 50% 50% 100%
print_rgb hsi_to_rgb([ 300, 57.1/100.0, 58.3/100.0]) # => [ 74.9, 25.1, 74.9] Wiki: 75% 25% 75%
print_rgb hsi_to_rgb([ 61.8, 69.9/100.0, 47.1/100.0]) # => [ 62.7, 64.3, 14.1] Wiki: 62.8% 64.3% 14.2%
print_rgb hsi_to_rgb([251.1, 75.6/100.0, 42.6/100.0]) # => [ 25.5, 10.6, 91.8] Wiki: 25.5% 10.4% 91.8%
print_rgb hsi_to_rgb([134.9, 66.7/100.0, 34.9/100.0]) # => [ 11.8, 67.5, 25.5] Wiki: 11.6% 67.5% 25.5%
print_rgb hsi_to_rgb([ 49.5, 91.1/100.0, 59.3/100.0]) # => [ 94.1, 78.4, 5.1] Wiki: 94.1% 78.5% 5.3%
print_rgb hsi_to_rgb([283.7, 68.6/100.0, 59.6/100.0]) # => [ 70.6, 18.8, 89.8] Wiki: 70.4% 18.7% 89.7%
print_rgb hsi_to_rgb([ 14.3, 44.6/100.0, 57/100.0]) # => [ 93.3, 46.3, 31.8] Wiki: 93.1% 46.3% 31.6%
print_rgb hsi_to_rgb([ 56.9, 36.3/100.0, 83.5/100.0]) # => [100.0, 97.3, 53.3] Wiki: 99.8% 97.4% 53.2%
print_rgb hsi_to_rgb([162.4, 80/100.0, 49.5/100.0]) # => [ 9.8, 79.6, 59.2] Wiki: 9.9% 79.5% 59.1%
print_rgb hsi_to_rgb([248.3, 53.3/100.0, 31.9/100.0]) # => [ 21.2, 14.9, 59.6] Wiki: 21.1% 14.9% 59.7%
print_rgb hsi_to_rgb([240.5, 13.5/100.0, 57/100.0]) # => [ 49.4, 49.4, 72.2] Wiki: 49.5% 49.3% 72.1%
The values are slightly different, since the Ruby method returns integer RGB values in the range 0..255.
As Rotem said, the HSI values I tried to convert to RGB are out of RGB range.
All other RGB values of 16.7M colors are converted correctly.
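One way to sanity-check that claim is a round trip over a coarse RGB grid. The sketch below pairs the Wikipedia HSI-to-RGB formula with a standard RGB-to-HSI (hexagonal hue, S = 1 - min/I); the function names and the grid are mine, not from the answer:

```python
def rgb_to_hsi(rgb):
    # Standard RGB->HSI: I is the mean, S = 1 - min/I, hexagonal hue
    R, G, B = (float(c) for c in rgb)
    I = (R + G + B) / 3
    mn = min(R, G, B)
    S = 0.0 if I == 0 else 1 - mn / I
    C = max(R, G, B) - mn
    if C == 0:
        H = 0.0  # hue undefined for greys; report 0
    elif max(R, G, B) == R:
        H = (60 * ((G - B) / C)) % 360
    elif max(R, G, B) == G:
        H = 60 * ((B - R) / C + 2)
    else:
        H = 60 * ((R - G) / C + 4)
    return H, S, I

def hsi_to_rgb(hsi):
    # Compact form of the Wikipedia HSI->RGB formula used above
    H, S, I = (float(v) for v in hsi)
    Htag = (H % 360) / 60
    Z = 1 - abs(Htag % 2 - 1)
    C = (3 * I * S) / (1 + Z)
    X = C * Z
    sector = [(C, X, 0), (X, C, 0), (0, C, X), (0, X, C), (X, 0, C), (C, 0, X)]
    R1, G1, B1 = sector[min(int(Htag), 5)]
    m = I * (1 - S)
    return R1 + m, G1 + m, B1 + m

# Round trip over a coarse RGB grid: every in-gamut color survives
for r in (0, 0.25, 0.5, 0.75, 1):
    for g in (0, 0.25, 0.5, 0.75, 1):
        for b in (0, 0.25, 0.5, 0.75, 1):
            rr, gg, bb = hsi_to_rgb(rgb_to_hsi((r, g, b)))
            assert abs(rr - r) < 1e-9 and abs(gg - g) < 1e-9 and abs(bb - b) < 1e-9
```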

How to control size of validity icon in a pdf

I've been trying for at least 2 days now to control the size of the validity icon of a pdf file, when signed.
The icon is set by the pdf reader usually.
I've tried different approaches to the problem:
- Redimensioned the Signature Annotation Rectangle, which reshaped all the contents within.
- Redimensioned the Signature Annotation Appearance BBox, which also reshaped the text and icon contents.
- Reshaped the n2 and n0 layers, and created a new one, n5, expecting to be able to control its size, without success.
In the end, I would just want to individually resize the validity icon.
Any suggestions shall be deeply appreciated.
dsblank = Annotation::AppearanceStream.new.setFilter(:FlateDecode)
dsblank.Type = Name.new("XObject")
dsblank.Resources = Resources.new
dsblank.BBox = [ 0, 0, width, height ]
dsblank.draw_stream('% DSBlank')
n2 = Annotation::AppearanceStream.new.setFilter(:FlateDecode)
n2.Resources = Resources.new
n2.BBox = [ 0, 0, width, height ]
n2.draw_stream('% DSBlank')
n5 = Annotation::AppearanceStream.new.setFilter(:FlateDecode)
n5.Resources = Resources.new
n5.BBox = [ 0, 0, width, height ]
n5.write(caption,x: padding_x, y: padding_y, size: text_size, leading: text_size )
sigannot = Annotation::Widget::Signature.new
sigannot.Rect = Rectangle[ llx: x, lly: y, urx: x+width, ury: y+height ]
sigannot.F = Annotation::Flags::PRINT #sets the print mode on
#
# Creates the stream for the signature appearance
#
streamN = Annotation::AppearanceStream.new.setFilter(:FlateDecode)
streamN.BBox = [ 0, 0, width, height ]
streamN.Resources = Resources.new
streamN.Resources.add_xobject(Name.new("n0"), dsblank)
streamN.Resources.add_xobject(Name.new("n1"), dsblank)
streamN.Resources.add_xobject(Name.new("n2"), n2)
streamN.Resources.add_xobject(Name.new("n3"), dsblank)
streamN.Resources.add_xobject(Name.new("n5"), n5)
streamN.draw_stream('q 1 0 0 1 0 0 cm /n0 Do Q')
streamN.draw_stream('q 1 0 0 1 0 0 cm /n1 Do Q')
streamN.draw_stream('q 1 0 0 1 0 0 cm /n2 Do Q')
streamN.draw_stream('q 1 0 0 1 0 0 cm /n3 Do Q')
streamN.draw_stream('q 1 0 0 1 0 0 cm /n5 Do Q')
sigannot.set_normal_appearance(streamN)
page.add_annot(sigannot)
This is not an answer showing how to fix it, but rather an argument that it is a bad idea to try at all. It is too big for a comment field, though.
You are trying to manufacture PDFs to support an Adobe Reader feature which Adobe started phasing out long ago, with Adobe Reader 9!
(page 10 of Adobe Acrobat 9 Digital Signatures, Changes and Improvements)
Thus, even if you achieve your goal for now with the current Adobe Reader, it may very easily happen that support for this feature is dropped completely in an upcoming Adobe Reader version.
Furthermore, you won't find any mention of this feature (changing signature appearances) in the current PDF specification ISO 32000-1, let alone the upcoming ISO 32000-2.
Also, maintenance of support for layers other than n0 and n2 had already stopped in Acrobat 6.0:
(page 8 of Adobe® Acrobat® SDK Digital Signature Appearances, Edition 1.0, October 2006)
After some iterations I managed to get a scale factor of 3 for streamN and n2, as well as for padding_y. I also had to increase text_size.
With that I managed to reduce the size of the icon and still have legible text.
n2 = Annotation::AppearanceStream.new
n2.Resources = Resources.new
n2.BBox = [ 0, 0, width*3, height*3 ]
n2.write(caption, x: padding_x, y: padding_y*3, size: text_size, leading: text_size)
sigannot = Annotation::Widget::Signature.new
sigannot.Rect = Rectangle[ llx: x, lly: y, urx: x+width, ury: y+height ]
sigannot.F = Annotation::Flags::PRINT #sets the print mode on
#
# Creates the stream for the signature appearance
#
streamN = Annotation::AppearanceStream.new.setFilter(:FlateDecode)
streamN.BBox = [ 0, 0, width*3, height*3 ]
