Looking into UTF-8 decoding performance, I noticed that protobuf's UnsafeProcessor::decodeUtf8 outperforms String(byte[] bytes, int offset, int length, Charset charset) for the following non-ASCII string: "Quizdeltagerne spiste jordbær med flØde, mens cirkusklovnen".
I tried to figure out why, so I copied the relevant code from String and replaced the array accesses with unsafe array accesses, just as UnsafeProcessor::decodeUtf8 does.
Here are the JMH benchmark results:
Benchmark                       Mode  Cnt    Score   Error  Units
StringBenchmark.safeDecoding    avgt   10  127.107 ± 3.642  ns/op
StringBenchmark.unsafeDecoding  avgt   10  100.915 ± 4.090  ns/op
I assume the difference comes from missing bounds-check elimination, which I expected to kick in, especially since there is already an explicit bounds check at the start of String(byte[] bytes, int offset, int length, Charset charset) in the form of a call to checkBoundsOffCount(offset, length, bytes.length).
Is the issue really a missing bounds check elimination?
Here's the code I benchmarked using OpenJDK 17 and JMH. Note that this is only part of the String(byte[] bytes, int offset, int length, Charset charset) constructor code, and it works correctly only for this specific Danish string.
The static methods were copied from String.
Look for the // the unsafe version: comments that indicate where I replaced the safe access with unsafe.
private static byte[] safeDecode(byte[] bytes, int offset, int length) {
    checkBoundsOffCount(offset, length, bytes.length);
    int sl = offset + length;
    int dp = 0;
    byte[] dst = new byte[length];
    while (offset < sl) {
        int b1 = bytes[offset];
        // the unsafe version:
        // int b1 = UnsafeUtil.getByte(bytes, offset);
        if (b1 >= 0) {
            dst[dp++] = (byte) b1;
            offset++;
            continue;
        }
        if ((b1 == (byte) 0xc2 || b1 == (byte) 0xc3) &&
                offset + 1 < sl) {
            // the unsafe version:
            // int b2 = UnsafeUtil.getByte(bytes, offset + 1);
            int b2 = bytes[offset + 1];
            if (!isNotContinuation(b2)) {
                dst[dp++] = (byte) decode2(b1, b2);
                offset += 2;
                continue;
            }
        }
        // anything not a latin1, including the repl
        // we have to go with the utf16
        break;
    }
    if (offset == sl) {
        if (dp != dst.length) {
            dst = Arrays.copyOf(dst, dp);
        }
        return dst;
    }
    return dst; // the real constructor falls back to UTF-16 decoding here
}
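The static helpers referenced above (checkBoundsOffCount, isNotContinuation, decode2) are not shown. For completeness, here is a sketch of what they look like, written from the OpenJDK 17 java.lang.String sources; treat it as an approximation rather than the exact JDK code:

```java
// Sketches of the java.lang.String internals referenced above (OpenJDK 17).
// Approximations written from the JDK sources, not verbatim copies.
public class Utf8Helpers {
    // Throws if the range [offset, offset + length) is not within [0, arrayLength).
    static void checkBoundsOffCount(int offset, int length, int arrayLength) {
        if (offset < 0 || length < 0 || offset > arrayLength - length) {
            throw new StringIndexOutOfBoundsException(
                "offset " + offset + ", count " + length + ", length " + arrayLength);
        }
    }

    // A UTF-8 continuation byte has the bit pattern 10xxxxxx.
    static boolean isNotContinuation(int b) {
        return (b & 0xc0) != 0x80;
    }

    // Combines a two-byte UTF-8 sequence (110xxxxx 10xxxxxx) into one char
    // by xor-ing away the tag bits of both bytes.
    static char decode2(int b1, int b2) {
        return (char) (((b1 << 6) ^ b2) ^
                       (((byte) 0xC0 << 6) ^
                        ((byte) 0x80 << 0)));
    }

    public static void main(String[] args) {
        // 0xC3 0xA6 is the UTF-8 encoding of 'æ' (U+00E6).
        System.out.println(decode2((byte) 0xC3, (byte) 0xA6)); // æ
    }
}
```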
Follow-up
Apparently, if I change the while loop condition from offset < sl to 0 <= offset && offset < sl,
I get similar performance in both versions:
Benchmark                       Mode  Cnt    Score    Error  Units
StringBenchmark.safeDecoding    avgt   10  100.802 ± 13.147  ns/op
StringBenchmark.unsafeDecoding  avgt   10  102.774 ±  3.893  ns/op
Conclusion
This question was picked up by HotSpot developers as https://bugs.openjdk.java.net/browse/JDK-8278518.
Optimizing this code ended up giving a 2.5x boost to decoding the Latin-1 string above.
This C2 optimization closes the remarkable more-than-7x gap between commonBranchFirst and commonBranchSecond in the benchmark below and will land in Java 19.
Benchmark                         Mode  Cnt     Score    Error  Units
LoopBenchmark.commonBranchFirst   avgt   25  1737.111 ± 56.526  ns/op
LoopBenchmark.commonBranchSecond  avgt   25   232.798 ± 12.676  ns/op
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class LoopBenchmark {
    private final boolean[] mostlyTrue = new boolean[1000];

    @Setup
    public void setup() {
        for (int i = 0; i < mostlyTrue.length; i++) {
            mostlyTrue[i] = i % 100 > 0;
        }
    }

    @Benchmark
    public int commonBranchFirst() {
        int i = 0;
        while (i < mostlyTrue.length) {
            if (mostlyTrue[i]) {
                i++;
            } else {
                i += 2;
            }
        }
        return i;
    }

    @Benchmark
    public int commonBranchSecond() {
        int i = 0;
        while (i < mostlyTrue.length) {
            if (!mostlyTrue[i]) {
                i += 2;
            } else {
                i++;
            }
        }
        return i;
    }
}
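Swapping the branch order changes only the source text, not the semantics, so the 7x gap is purely a compilation artifact. A quick standalone check (outside JMH, with the same loop bodies) confirms both variants compute the same result:

```java
// Confirms that commonBranchFirst and commonBranchSecond are semantically
// identical: only the textual order of the branches differs.
public class LoopEquivalence {
    static boolean[] mostlyTrue() {
        boolean[] a = new boolean[1000];
        for (int i = 0; i < a.length; i++) {
            a[i] = i % 100 > 0; // false at every multiple of 100
        }
        return a;
    }

    static int branchFirst(boolean[] arr) {
        int i = 0;
        while (i < arr.length) {
            if (arr[i]) { i++; } else { i += 2; }
        }
        return i;
    }

    static int branchSecond(boolean[] arr) {
        int i = 0;
        while (i < arr.length) {
            if (!arr[i]) { i += 2; } else { i++; }
        }
        return i;
    }

    public static void main(String[] args) {
        boolean[] arr = mostlyTrue();
        System.out.println(branchFirst(arr) == branchSecond(arr)); // true
    }
}
```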
To measure the branch of interest, and in particular the scenario where the while loop becomes hot, I used the following benchmark:
@State(Scope.Thread)
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.NANOSECONDS)
public class StringConstructorBenchmark {
    private byte[] array;

    @Setup
    public void setup() {
        String str = "Quizdeltagerne spiste jordbær med fløde, mens cirkusklovnen. Я";
        array = str.getBytes(StandardCharsets.UTF_8);
    }

    @Benchmark
    public String newString() {
        return new String(array, 0, array.length, StandardCharsets.UTF_8);
    }
}
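One detail worth noting about this input: the trailing 'Я' (U+042F) encodes as the bytes 0xD0 0xAF, and 0xD0 is neither 0xC2 nor 0xC3, so the Latin-1 fast path must break out into the UTF-16 fallback; the benchmark therefore exercises the loop-exit branch as well. A standalone check of the encoding:

```java
import java.nio.charset.StandardCharsets;

// Shows that the trailing Cyrillic 'Я' yields a lead byte (0xD0) outside the
// 0xC2/0xC3 range handled by the Latin-1 fast path, forcing the UTF-16 fallback.
public class NonLatin1Tail {
    public static void main(String[] args) {
        byte[] ya = "Я".getBytes(StandardCharsets.UTF_8);
        System.out.printf("0x%02X 0x%02X%n", ya[0], ya[1]); // 0xD0 0xAF
    }
}
```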
And indeed, with the modified constructor it gives a significant improvement:
// baseline
Benchmark                             Mode  Cnt    Score   Error  Units
StringConstructorBenchmark.newString  avgt   50  173,092 ± 3,048  ns/op

// patched
Benchmark                             Mode  Cnt    Score   Error  Units
StringConstructorBenchmark.newString  avgt   50  126,908 ± 2,355  ns/op
This is likely a HotSpot issue: the optimizing compiler for some reason fails to eliminate the array bounds check within the while loop. I guess the reason is that offset is modified inside the loop:
while (offset < sl) {
    int b1 = bytes[offset];
    if (b1 >= 0) {
        dst[dp++] = (byte) b1;
        offset++; // <---
        continue;
    }
    if ((b1 == (byte) 0xc2 || b1 == (byte) 0xc3) &&
            offset + 1 < sl) {
        int b2 = bytes[offset + 1];
        if (!isNotContinuation(b2)) {
            dst[dp++] = (byte) decode2(b1, b2);
            offset += 2;
            continue;
        }
    }
    // anything not a latin1, including the repl
    // we have to go with the utf16
    break;
}
I've also looked into the generated code via LinuxPerfAsmProfiler. Here is the link for the baseline https://gist.github.com/stsypanov/d2524f98477d633fb1d4a2510fedeea6 and this one is for the patched constructor https://gist.github.com/stsypanov/16c787e4f9fa3dd122522f16331b68b7
What should one pay attention to? Let's find the code corresponding to int b1 = bytes[offset]; (line 538). In the baseline we have this:
3.62% ││ │ 0x00007fed70eb4c1c: mov %ebx,%ecx
2.29% ││ │ 0x00007fed70eb4c1e: mov %edx,%r9d
2.22% ││ │ 0x00007fed70eb4c21: mov (%rsp),%r8 ;*iload_2 {reexecute=0 rethrow=0 return_oop=0}
││ │ ; - java.lang.String::<init>#107 (line 537)
2.32% ↘│ │ 0x00007fed70eb4c25: cmp %r13d,%ecx
│ │ 0x00007fed70eb4c28: jge 0x00007fed70eb5388 ;*if_icmpge {reexecute=0 rethrow=0 return_oop=0}
│ │ ; - java.lang.String::<init>#110 (line 537)
3.05% │ │ 0x00007fed70eb4c2e: cmp 0x8(%rsp),%ecx
│ │ 0x00007fed70eb4c32: jae 0x00007fed70eb5319
2.38% │ │ 0x00007fed70eb4c38: mov %r8,(%rsp)
2.64% │ │ 0x00007fed70eb4c3c: movslq %ecx,%r8
2.46% │ │ 0x00007fed70eb4c3f: mov %rax,%rbx
3.44% │ │ 0x00007fed70eb4c42: sub %r8,%rbx
2.62% │ │ 0x00007fed70eb4c45: add $0x1,%rbx
2.64% │ │ 0x00007fed70eb4c49: and $0xfffffffffffffffe,%rbx
2.30% │ │ 0x00007fed70eb4c4d: mov %ebx,%r8d
3.08% │ │ 0x00007fed70eb4c50: add %ecx,%r8d
2.55% │ │ 0x00007fed70eb4c53: movslq %r8d,%r8
2.45% │ │ 0x00007fed70eb4c56: add $0xfffffffffffffffe,%r8
2.13% │ │ 0x00007fed70eb4c5a: cmp (%rsp),%r8
│ │ 0x00007fed70eb4c5e: jae 0x00007fed70eb5319
3.36% │ │ 0x00007fed70eb4c64: mov %ecx,%edi ;*aload_1 {reexecute=0 rethrow=0 return_oop=0}
│ │ ; - java.lang.String::<init>#113 (line 538)
2.86% │ ↗│ 0x00007fed70eb4c66: movsbl 0x10(%r14,%rdi,1),%r8d ;*baload {reexecute=0 rethrow=0 return_oop=0}
│ ││ ; - java.lang.String::<init>#115 (line 538)
2.48% │ ││ 0x00007fed70eb4c6c: mov %r9d,%edx
2.26% │ ││ 0x00007fed70eb4c6f: inc %edx ;*iinc {reexecute=0 rethrow=0 return_oop=0}
│ ││ ; - java.lang.String::<init>#127 (line 540)
3.28% │ ││ 0x00007fed70eb4c71: mov %edi,%ebx
2.44% │ ││ 0x00007fed70eb4c73: inc %ebx ;*iinc {reexecute=0 rethrow=0 return_oop=0}
│ ││ ; - java.lang.String::<init>#134 (line 541)
2.35% │ ││ 0x00007fed70eb4c75: test %r8d,%r8d
╰ ││ 0x00007fed70eb4c78: jge 0x00007fed70eb4c04 ;*iflt {reexecute=0 rethrow=0 return_oop=0}
││ ; - java.lang.String::<init>#120 (line 539)
and in the patched code the corresponding part is
17.28% ││ 0x00007f6b88eb6061: mov %edx,%r10d ;*iload_2 {reexecute=0 rethrow=0 return_oop=0}
││ ; - java.lang.String::<init>#107 (line 537)
0.11% ↘│ 0x00007f6b88eb6064: test %r10d,%r10d
│ 0x00007f6b88eb6067: jl 0x00007f6b88eb669c ;*iflt {reexecute=0 rethrow=0 return_oop=0}
│ ; - java.lang.String::<init>#108 (line 537)
0.39% │ 0x00007f6b88eb606d: cmp %r13d,%r10d
│ 0x00007f6b88eb6070: jge 0x00007f6b88eb66d0 ;*if_icmpge {reexecute=0 rethrow=0 return_oop=0}
│ ; - java.lang.String::<init>#114 (line 537)
0.66% │ 0x00007f6b88eb6076: mov %ebx,%r9d
13.70% │ 0x00007f6b88eb6079: cmp 0x8(%rsp),%r10d
0.01% │ 0x00007f6b88eb607e: jae 0x00007f6b88eb6671
0.14% │ 0x00007f6b88eb6084: movsbl 0x10(%r14,%r10,1),%edi ;*baload {reexecute=0 rethrow=0 return_oop=0}
│ ; - java.lang.String::<init>#119 (line 538)
0.37% │ 0x00007f6b88eb608a: mov %r9d,%ebx
0.99% │ 0x00007f6b88eb608d: inc %ebx ;*iinc {reexecute=0 rethrow=0 return_oop=0}
│ ; - java.lang.String::<init>#131 (line 540)
12.88% │ 0x00007f6b88eb608f: movslq %r9d,%rsi ;*bastore {reexecute=0 rethrow=0 return_oop=0}
│ ; - java.lang.String::<init>#196 (line 548)
0.17% │ 0x00007f6b88eb6092: mov %r10d,%edx
0.39% │ 0x00007f6b88eb6095: inc %edx ;*iinc {reexecute=0 rethrow=0 return_oop=0}
│ ; - java.lang.String::<init>#138 (line 541)
0.96% │ 0x00007f6b88eb6097: test %edi,%edi
0.02% │ 0x00007f6b88eb6099: jl 0x00007f6b88eb60dc ;*iflt {reexecute=0 rethrow=0 return_oop=0}
│ ; - java.lang.String::<init>#124 (line 539)
In the baseline, between the if_icmpge and aload_1 bytecode instructions we have a bounds check, but there is none in the patched code.
So your original assumption is correct: it is about missing bounds-check elimination.
UPD: I must correct my answer. It turned out that the bounds check is still there:
13.70% │ 0x00007f6b88eb6079: cmp 0x8(%rsp),%r10d
0.01% │ 0x00007f6b88eb607e: jae 0x00007f6b88eb6671
and the code I pointed out is something the compiler introduces, but it does nothing. The issue is still about the bounds check, since declaring it explicitly solves the problem ad hoc.
Consider the following Julia "compound" iterator: it merges two iterators, a and b,
each assumed to be sorted according to order, into a single ordered
sequence:
struct MergeSorted{T,A,B,O}
    a::A
    b::B
    order::O
    MergeSorted(a::A, b::B, order::O=Base.Order.Forward) where {A,B,O} =
        new{promote_type(eltype(A),eltype(B)),A,B,O}(a, b, order)
end
Base.eltype(::Type{MergeSorted{T,A,B,O}}) where {T,A,B,O} = T
@inline function Base.iterate(self::MergeSorted{T},
                              state=(iterate(self.a), iterate(self.b))) where T
    a_result, b_result = state
    if b_result === nothing
        a_result === nothing && return nothing
        a_curr, a_state = a_result
        return T(a_curr), (iterate(self.a, a_state), b_result)
    end
    b_curr, b_state = b_result
    if a_result !== nothing
        a_curr, a_state = a_result
        Base.Order.lt(self.order, a_curr, b_curr) &&
            return T(a_curr), (iterate(self.a, a_state), b_result)
    end
    return T(b_curr), (a_result, iterate(self.b, b_state))
end
This code works, but it is type-unstable, since the Julia iteration protocol is inherently so. In most cases the compiler can work this out automatically; here, however, it does not, and the following test code shows that temporaries are created:
julia> x = MergeSorted([1,4,5,9,32,44], [0,7,9,24,134]);

julia> sum(x);

julia> @time sum(x);
  0.000013 seconds (61 allocations: 2.312 KiB)
Note the allocation count.
Is there an efficient way to debug such situations, other than playing around with the code and hoping the compiler will optimize the type ambiguities away? And does anyone know of a solution in this specific case that does not create temporaries?
How to diagnose the problem?
Answer: use @code_warntype
Run:
julia> @code_warntype iterate(x, iterate(x)[2])
Variables
#self#::Core.Const(iterate)
self::MergeSorted{Int64, Vector{Int64}, Vector{Int64}, Base.Order.ForwardOrdering}
state::Tuple{Tuple{Int64, Int64}, Tuple{Int64, Int64}}
#_4::Int64
#_5::Int64
#_6::Union{}
#_7::Int64
b_state::Int64
b_curr::Int64
a_state::Int64
a_curr::Int64
b_result::Tuple{Int64, Int64}
a_result::Tuple{Int64, Int64}
Body::Tuple{Int64, Any}
1 ─ nothing
│ Core.NewvarNode(:(#_4))
│ Core.NewvarNode(:(#_5))
│ Core.NewvarNode(:(#_6))
│ Core.NewvarNode(:(b_state))
│ Core.NewvarNode(:(b_curr))
│ Core.NewvarNode(:(a_state))
│ Core.NewvarNode(:(a_curr))
│ %9 = Base.indexed_iterate(state, 1)::Core.PartialStruct(Tuple{Tuple{Int64, Int64}, Int64}, Any[Tuple{Int64, Int64}, Core.Const(2)])
│ (a_result = Core.getfield(%9, 1))
│ (#_7 = Core.getfield(%9, 2))
│ %12 = Base.indexed_iterate(state, 2, #_7::Core.Const(2))::Core.PartialStruct(Tuple{Tuple{Int64, Int64}, Int64}, Any[Tuple{Int64, Int64}, Core.Const(3)])
│ (b_result = Core.getfield(%12, 1))
│ %14 = (b_result === Main.nothing)::Core.Const(false)
└── goto #3 if not %14
2 ─ Core.Const(:(a_result === Main.nothing))
│ Core.Const(:(%16))
│ Core.Const(:(return Main.nothing))
│ Core.Const(:(Base.indexed_iterate(a_result, 1)))
│ Core.Const(:(a_curr = Core.getfield(%19, 1)))
│ Core.Const(:(#_6 = Core.getfield(%19, 2)))
│ Core.Const(:(Base.indexed_iterate(a_result, 2, #_6)))
│ Core.Const(:(a_state = Core.getfield(%22, 1)))
│ Core.Const(:(($(Expr(:static_parameter, 1)))(a_curr)))
│ Core.Const(:(Base.getproperty(self, :a)))
│ Core.Const(:(Main.iterate(%25, a_state)))
│ Core.Const(:(Core.tuple(%26, b_result)))
│ Core.Const(:(Core.tuple(%24, %27)))
└── Core.Const(:(return %28))
3 ┄ %30 = Base.indexed_iterate(b_result, 1)::Core.PartialStruct(Tuple{Int64, Int64}, Any[Int64, Core.Const(2)])
│ (b_curr = Core.getfield(%30, 1))
│ (#_5 = Core.getfield(%30, 2))
│ %33 = Base.indexed_iterate(b_result, 2, #_5::Core.Const(2))::Core.PartialStruct(Tuple{Int64, Int64}, Any[Int64, Core.Const(3)])
│ (b_state = Core.getfield(%33, 1))
│ %35 = (a_result !== Main.nothing)::Core.Const(true)
└── goto #6 if not %35
4 ─ %37 = Base.indexed_iterate(a_result, 1)::Core.PartialStruct(Tuple{Int64, Int64}, Any[Int64, Core.Const(2)])
│ (a_curr = Core.getfield(%37, 1))
│ (#_4 = Core.getfield(%37, 2))
│ %40 = Base.indexed_iterate(a_result, 2, #_4::Core.Const(2))::Core.PartialStruct(Tuple{Int64, Int64}, Any[Int64, Core.Const(3)])
│ (a_state = Core.getfield(%40, 1))
│ %42 = Base.Order::Core.Const(Base.Order)
│ %43 = Base.getproperty(%42, :lt)::Core.Const(Base.Order.lt)
│ %44 = Base.getproperty(self, :order)::Core.Const(Base.Order.ForwardOrdering())
│ %45 = a_curr::Int64
│ %46 = (%43)(%44, %45, b_curr)::Bool
└── goto #6 if not %46
5 ─ %48 = ($(Expr(:static_parameter, 1)))(a_curr)::Int64
│ %49 = Base.getproperty(self, :a)::Vector{Int64}
│ %50 = Main.iterate(%49, a_state)::Union{Nothing, Tuple{Int64, Int64}}
│ %51 = Core.tuple(%50, b_result)::Tuple{Union{Nothing, Tuple{Int64, Int64}}, Tuple{Int64, Int64}}
│ %52 = Core.tuple(%48, %51)::Tuple{Int64, Tuple{Union{Nothing, Tuple{Int64, Int64}}, Tuple{Int64, Int64}}}
└── return %52
6 ┄ %54 = ($(Expr(:static_parameter, 1)))(b_curr)::Int64
│ %55 = a_result::Tuple{Int64, Int64}
│ %56 = Base.getproperty(self, :b)::Vector{Int64}
│ %57 = Main.iterate(%56, b_state)::Union{Nothing, Tuple{Int64, Int64}}
│ %58 = Core.tuple(%55, %57)::Tuple{Tuple{Int64, Int64}, Union{Nothing, Tuple{Int64, Int64}}}
│ %59 = Core.tuple(%54, %58)::Tuple{Int64, Tuple{Tuple{Int64, Int64}, Union{Nothing, Tuple{Int64, Int64}}}}
└── return %59
and you see that there are too many possible return types, so Julia gives up on specializing them (and just assumes the second element of the return type is Any).
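Besides @code_warntype, the Test standard library's @inferred macro gives a quick pass/fail check for type stability: it errors whenever the inferred return type is wider than the type of the actual returned value. A minimal standalone illustration with toy functions (not using MergeSorted):

```julia
using Test

# A deliberately type-unstable function: the inferred return type is
# Union{Float64, Int64}, even though every individual call returns a
# concrete value.
unstable(flag) = flag ? 1 : 2.0

# A type-stable counterpart: always returns an Int.
stable(flag) = flag ? 1 : 2

@inferred stable(true)            # passes silently and returns 1
try
    @inferred unstable(true)      # errors: inferred type is a Union
catch err
    println("unstable is not inferrable")
end
```

In the question's case, @inferred iterate(x, iterate(x)[2]) should fail for the original implementation and pass once the instability is removed.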
How to fix the problem?
Answer: reduce the number of possible return types of iterate.
Here is a quick write-up. I do not claim it is the most terse, and I have not tested it extensively, so there might be some bug, but it was simple enough to write quickly from your code to show how one could approach the problem. Note that I use special branches for when one of the collections is empty, since then it should be faster to just iterate the other collection:
struct MergeSorted{T,A,B,O,F1,F2}
    a::A
    b::B
    order::O
    fa::F1
    fb::F2
    function MergeSorted(a::A, b::B, order::O=Base.Order.Forward) where {A,B,O}
        fa, fb = iterate(a), iterate(b)
        F1 = typeof(fa)
        F2 = typeof(fb)
        new{promote_type(eltype(A),eltype(B)),A,B,O,F1,F2}(a, b, order, fa, fb)
    end
end
Base.eltype(::Type{<:MergeSorted{T}}) where {T} = T
struct State{Ta, Tb}
    a::Union{Nothing, Ta}
    b::Union{Nothing, Tb}
end
function Base.iterate(self::MergeSorted{T,A,B,O,Nothing,Nothing}) where {T,A,B,O}
    return nothing
end

function Base.iterate(self::MergeSorted{T,A,B,O,F1,Nothing}) where {T,A,B,O,F1}
    return self.fa
end

function Base.iterate(self::MergeSorted{T,A,B,O,F1,Nothing}, state) where {T,A,B,O,F1}
    return iterate(self.a, state)
end

function Base.iterate(self::MergeSorted{T,A,B,O,Nothing,F2}) where {T,A,B,O,F2}
    return self.fb
end

function Base.iterate(self::MergeSorted{T,A,B,O,Nothing,F2}, state) where {T,A,B,O,F2}
    return iterate(self.b, state)
end

@inline function Base.iterate(self::MergeSorted{T,A,B,O,F1,F2}) where {T,A,B,O,F1,F2}
    a_result, b_result = self.fa, self.fb
    return iterate(self, State{F1,F2}(a_result, b_result))
end

@inline function Base.iterate(self::MergeSorted{T,A,B,O,F1,F2},
                              state::State{F1,F2}) where {T,A,B,O,F1,F2}
    a_result, b_result = state.a, state.b
    if b_result === nothing
        a_result === nothing && return nothing
        a_curr, a_state = a_result
        return T(a_curr), State{F1,F2}(iterate(self.a, a_state), b_result)
    end
    b_curr, b_state = b_result
    if a_result !== nothing
        a_curr, a_state = a_result
        Base.Order.lt(self.order, a_curr, b_curr) &&
            return T(a_curr), State{F1,F2}(iterate(self.a, a_state), b_result)
    end
    return T(b_curr), State{F1,F2}(a_result, iterate(self.b, b_state))
end
And now you have:
julia> x = MergeSorted([1,4,5,9,32,44], [0,7,9,24,134]);
julia> sum(x)
269
julia> @allocated sum(x)
0
julia> @code_warntype iterate(x, iterate(x)[2])
Variables
#self#::Core.Const(iterate)
self::MergeSorted{Int64, Vector{Int64}, Vector{Int64}, Base.Order.ForwardOrdering, Tuple{Int64, Int64}, Tuple{Int64, Int64}}
state::State{Tuple{Int64, Int64}, Tuple{Int64, Int64}}
#_4::Int64
#_5::Int64
#_6::Int64
b_state::Int64
b_curr::Int64
a_state::Int64
a_curr::Int64
b_result::Union{Nothing, Tuple{Int64, Int64}}
a_result::Union{Nothing, Tuple{Int64, Int64}}
Body::Union{Nothing, Tuple{Int64, State{Tuple{Int64, Int64}, Tuple{Int64, Int64}}}}
1 ─ nothing
│ Core.NewvarNode(:(#_4))
│ Core.NewvarNode(:(#_5))
│ Core.NewvarNode(:(#_6))
│ Core.NewvarNode(:(b_state))
│ Core.NewvarNode(:(b_curr))
│ Core.NewvarNode(:(a_state))
│ Core.NewvarNode(:(a_curr))
│ %9 = Base.getproperty(state, :a)::Union{Nothing, Tuple{Int64, Int64}}
│ %10 = Base.getproperty(state, :b)::Union{Nothing, Tuple{Int64, Int64}}
│ (a_result = %9)
│ (b_result = %10)
│ %13 = (b_result === Main.nothing)::Bool
└── goto #5 if not %13
2 ─ %15 = (a_result === Main.nothing)::Bool
└── goto #4 if not %15
3 ─ return Main.nothing
4 ─ %18 = Base.indexed_iterate(a_result::Tuple{Int64, Int64}, 1)::Core.PartialStruct(Tuple{Int64, Int64}, Any[Int64, Core.Const(2)])
│ (a_curr = Core.getfield(%18, 1))
│ (#_6 = Core.getfield(%18, 2))
│ %21 = Base.indexed_iterate(a_result::Tuple{Int64, Int64}, 2, #_6::Core.Const(2))::Core.PartialStruct(Tuple{Int64, Int64}, Any[Int64, Core.Const(3)])
│ (a_state = Core.getfield(%21, 1))
│ %23 = ($(Expr(:static_parameter, 1)))(a_curr)::Int64
│ %24 = Core.apply_type(Main.State, $(Expr(:static_parameter, 5)), $(Expr(:static_parameter, 6)))::Core.Const(State{Tuple{Int64, Int64}, Tuple{Int64, Int64}})
│ %25 = Base.getproperty(self, :a)::Vector{Int64}
│ %26 = Main.iterate(%25, a_state)::Union{Nothing, Tuple{Int64, Int64}}
│ %27 = (%24)(%26, b_result::Core.Const(nothing))::State{Tuple{Int64, Int64}, Tuple{Int64, Int64}}
│ %28 = Core.tuple(%23, %27)::Tuple{Int64, State{Tuple{Int64, Int64}, Tuple{Int64, Int64}}}
└── return %28
5 ─ %30 = Base.indexed_iterate(b_result::Tuple{Int64, Int64}, 1)::Core.PartialStruct(Tuple{Int64, Int64}, Any[Int64, Core.Const(2)])
│ (b_curr = Core.getfield(%30, 1))
│ (#_5 = Core.getfield(%30, 2))
│ %33 = Base.indexed_iterate(b_result::Tuple{Int64, Int64}, 2, #_5::Core.Const(2))::Core.PartialStruct(Tuple{Int64, Int64}, Any[Int64, Core.Const(3)])
│ (b_state = Core.getfield(%33, 1))
│ %35 = (a_result !== Main.nothing)::Bool
└── goto #8 if not %35
6 ─ %37 = Base.indexed_iterate(a_result::Tuple{Int64, Int64}, 1)::Core.PartialStruct(Tuple{Int64, Int64}, Any[Int64, Core.Const(2)])
│ (a_curr = Core.getfield(%37, 1))
│ (#_4 = Core.getfield(%37, 2))
│ %40 = Base.indexed_iterate(a_result::Tuple{Int64, Int64}, 2, #_4::Core.Const(2))::Core.PartialStruct(Tuple{Int64, Int64}, Any[Int64, Core.Const(3)])
│ (a_state = Core.getfield(%40, 1))
│ %42 = Base.Order::Core.Const(Base.Order)
│ %43 = Base.getproperty(%42, :lt)::Core.Const(Base.Order.lt)
│ %44 = Base.getproperty(self, :order)::Core.Const(Base.Order.ForwardOrdering())
│ %45 = a_curr::Int64
│ %46 = (%43)(%44, %45, b_curr)::Bool
└── goto #8 if not %46
7 ─ %48 = ($(Expr(:static_parameter, 1)))(a_curr)::Int64
│ %49 = Core.apply_type(Main.State, $(Expr(:static_parameter, 5)), $(Expr(:static_parameter, 6)))::Core.Const(State{Tuple{Int64, Int64}, Tuple{Int64, Int64}})
│ %50 = Base.getproperty(self, :a)::Vector{Int64}
│ %51 = Main.iterate(%50, a_state)::Union{Nothing, Tuple{Int64, Int64}}
│ %52 = (%49)(%51, b_result::Tuple{Int64, Int64})::State{Tuple{Int64, Int64}, Tuple{Int64, Int64}}
│ %53 = Core.tuple(%48, %52)::Tuple{Int64, State{Tuple{Int64, Int64}, Tuple{Int64, Int64}}}
└── return %53
8 ┄ %55 = ($(Expr(:static_parameter, 1)))(b_curr)::Int64
│ %56 = Core.apply_type(Main.State, $(Expr(:static_parameter, 5)), $(Expr(:static_parameter, 6)))::Core.Const(State{Tuple{Int64, Int64}, Tuple{Int64, Int64}})
│ %57 = a_result::Union{Nothing, Tuple{Int64, Int64}}
│ %58 = Base.getproperty(self, :b)::Vector{Int64}
│ %59 = Main.iterate(%58, b_state)::Union{Nothing, Tuple{Int64, Int64}}
│ %60 = (%56)(%57, %59)::State{Tuple{Int64, Int64}, Tuple{Int64, Int64}}
│ %61 = Core.tuple(%55, %60)::Tuple{Int64, State{Tuple{Int64, Int64}, Tuple{Int64, Int64}}}
└── return %61
EDIT: I have now realized that my implementation is not fully correct: it assumes that the return value of iterate, when it is not nothing, is type-stable (which it does not have to be). But if it is not type-stable, then the compiler must allocate anyway. So a fully correct solution would first check whether iterate is type-stable: if it is, use my solution, and if it is not, fall back to e.g. your original solution.