I have generated a .cap file from a simple HelloWorld program that looks like this:
package com.oracle.jcclassic.samples.helloworld;

import javacard.framework.APDU;
import javacard.framework.Applet;
import javacard.framework.ISO7816;
import javacard.framework.ISOException;
import javacard.framework.Util;

public class HelloWorld extends Applet {

    private byte[] echoBytes;
    private static final short LENGTH_ECHO_BYTES = 256;

    protected HelloWorld() {
        echoBytes = new byte[LENGTH_ECHO_BYTES];
        register();
    }

    public static void install(byte[] bArray, short bOffset, byte bLength) {
        new HelloWorld();
    }

    @Override
    public void process(APDU apdu) {
        byte[] buffer = apdu.getBuffer();
        if ((buffer[ISO7816.OFFSET_CLA] == 0) &&
            (buffer[ISO7816.OFFSET_INS] == (byte) (0xA4))) {
            return;
        }

        short bytesRead = apdu.setIncomingAndReceive();
        short echoOffset = (short) 0;

        while (bytesRead > 0) {
            Util.arrayCopyNonAtomic(buffer, ISO7816.OFFSET_CDATA, echoBytes, echoOffset, bytesRead);
            echoOffset += bytesRead;
            bytesRead = apdu.receiveBytes(ISO7816.OFFSET_CDATA);
        }

        apdu.setOutgoing();
        apdu.setOutgoingLength((short) (echoOffset + 5));
        apdu.sendBytes((short) 0, (short) 5);
        apdu.sendBytesLong(echoBytes, (short) 0, echoOffset);
    }
}
I've used the Eclipse IDE and the provided HelloWorld classic applet sample to do that. Then I have an NXP J3H145 smart card on which I'd like to install my applet. I use the GlobalPlatformPro tool for that and run the command:
gp.exe -d -v -install helloworld.cap
I get the following result:
# gp -d -v -install helloworld.cap
# GlobalPlatformPro v21.12.31-3-g903416f
# Running on Windows 10 10.0 amd64, Java 18.0.1.1 by Oracle Corporation
[DEBUG] TerminalManager - Processing 3 readers with null as preferred and null as ignored
[DEBUG] TerminalManager - Preferred reader: Optional.empty
SCardConnect("Broadcom Corp Contacted SmartCard 0", T=*) -> T=1, 3BDC18FF8191FE1FC38073C821136605036351000250
A>> T=1 (4+0000) 00A40400 00
A<< (0018+2) (26ms) 6F108408A000000151000000A5049F6501FF 9000
[DEBUG] GPSession - Auto-detected ISD: A000000151000000
# Warning: no keys given, defaulting to 404142434445464748494A4B4C4D4E4F
[INFO] GPSession - Using card master keys with version 0 for setting up session with MAC
A>> T=1 (4+0008) 80500000 08 4E2AF49D3EFDF13F 00
A<< (0029+2) (99ms) 00008048009426073469FF03001B999689BD191E08DBB58A4F31BE3D4A 9000
[DEBUG] GPSession - KDD: 00008048009426073469
[DEBUG] GPSession - Host challenge: 4E2AF49D3EFDF13F
[DEBUG] GPSession - Card challenge: 1B999689BD191E08
[DEBUG] GPSession - Card reports SCP03 with key version 255 (0xFF)
[INFO] GPSession - Diversified card keys: ENC=404142434445464748494A4B4C4D4E4F (KCV: 504A77) MAC=404142434445464748494A4B4C4D4E4F (KCV: 504A77) DEK=404142434445464748494A4B4C4D4E4F (KCV: 504A77) for SCP03
[INFO] GPSession - Session keys: ENC=196540E4A67650882195BF1BCEB78B36 MAC=09B4554BBA83417A61728B9AE76DECF7 RMAC=CB8F3FC5C52BE5A9C83A49622B195C01
[DEBUG] GPSession - Verified card cryptogram: DBB58A4F31BE3D4A
[DEBUG] GPSession - Calculated host cryptogram: 89F0CD854C3725AD
A>> T=1 (4+0016) 84820100 10 89F0CD854C3725AD58C4B201E2C601FE
A<< (0000+2) (148ms) 9000
A>> T=1 (4+0010) 84F28002 0A 4F002740B169D9F9CC3A 00
A<< (0044+2) (111ms) E32A4F08A0000001510000009F700107C5039EFE80C407A0000000620001CE020100CC08A000000151000000 9000
A>> T=1 (4+0010) 84F24002 0A 4F009CDA12BDBB4AD370 00
A<< (0000+2) (99ms) 6A88
A>> T=1 (4+0010) 84F21002 0A 4F00467F8B0AF8DAC1AE 00
A<< (0025+2) (102ms) E3174F07A00000015153509F7001018408A000000151535041 9000
A>> T=1 (4+0010) 84F22002 0A 4F00510E1124D76D25E7 00
A<< (0015+2) (99ms) E30D4F07A00000015153509F700101 9000
CAP file (v2.3), contains: applets for JavaCard 3.1.0
Package: com.oracle.jcclassic.samples.helloworld A00000006203010C01 v1.0
Applet: com.oracle.jcclassic.samples.helloworld.HelloWorld A00000006203010C0101
Import: A0000000620101 v1.8 javacard.framework
Import: A0000000620001 v1.0 java.lang
Generated by Oracle Corporation converter [v3.1.0]
On Thu May 26 14:58:30 EEST 2022 with JDK 18.0.1.1 (Oracle Corporation)
Code size 339 bytes (1149 with debug)
SHA-256 96aaaaa6510a3bc106babdec45790831c3ea62c2e05d274e05a4999255bdc3e1
SHA-1 8afd2bd6d08fff180d6949189399928ff41a7371
A>> T=1 (4+0030) 84E60200 1E 09A00000006203010C0108A000000151000000000000268DB924E83229B2
A<< (0001+2) (113ms) 00 9000
A>> T=1 (4+0255) 84E80000 FF C4820153010014DECAFFED030204000109A00000006203010C010002002500140025000E0015003600170072000A00130000006C02B80000000000000000000002010004001502080107A0000000620101000107A000000062000103000E010AA00000006203010C01010014060017000000800301000107010000001D000102030405060708070072000210188C000118110100900B8700188B00027A01308F00038C00047A0523198B00052D1A0325610A1A042510A46B037A198B00063203290470191A08AD0016041F8D000B3B16041F41290419088B000C321F64E8198B00073B19160408418B00081903088B000919AD00031604728F17ECFC6DA47C
A<< (0000+2) (698ms) 6403
Error: LOAD failed: 0x6403
SCardDisconnect("Broadcom Corp Contacted SmartCard 0", true) tx:399/rx:150 in 1s663ms
So the process ends with error 6403 and my applet is not installed. I have found that the APDU response code 6403 means "CAP MINOR", but I have no idea what that refers to. The CAP file's minor version? I can see the following line in the print-out above:
CAP file (v2.3), contains: applets for JavaCard 3.1.0
Does that mean there is some problem with mismatched versions? But I found in my smart card's specification (https://www.cardlogix.com/product/nxp-jcop3-j3h145-java-card-3-0-4-dual-interface) that it supports Java Card v3 (specifically 3.0.4).
I also tried downloading my applet to my card using the PyApduTool (http://javacardos.com/javacardforum/viewtopic.php?t=38). I got the following error message:
Download Cap error: GP init update failed. recv: 67 00
Does anyone have an idea what is wrong with my code or my actions and why I can't install any applets on my card?
Compile for the exact version the card supports: your CAP file targets Java Card 3.1.0, while the card implements 3.0.4. Normal Java is relatively flexible when it comes to differences in minor versions. The dynamic linking simply happens by method signature, i.e. the name and parameters within the class file.
However, CAP files are pre-linked during conversion. That means that all the methods in a class are enumerated, and the byte code simply refers to members by value, not by name.
If the API changes then the enumeration changes and the byte code would link to the wrong members within a class. It is therefore very important that the minor version is correct as well.
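Concretely, since the card's product page says Java Card 3.0.4, the CAP should be reconverted with that target. A hedged sketch using the Java Card 3.1 SDK's command-line converter (the `-target` option exists in that SDK; the paths, the `-classdir` location, and the exact option spellings below are assumptions, so check the SDK documentation and, in Eclipse, the project's Java Card platform settings; the AIDs are taken from the gp output above):

```bat
:: Sketch only: adjust paths to your JC SDK installation
converter.bat -target 3.0.4 ^
  -classdir .\bin ^
  -exportpath %JC_HOME%\api_export_files ^
  -applet 0xA0:0x00:0x00:0x00:0x62:0x03:0x01:0x0C:0x01:0x01 com.oracle.jcclassic.samples.helloworld.HelloWorld ^
  -out CAP ^
  com.oracle.jcclassic.samples.helloworld 0xA0:0x00:0x00:0x00:0x62:0x03:0x01:0x0C:0x01 1.0
```

The resulting CAP file should then report a target of 3.0.4 instead of 3.1.0 in the gp output, and the 0x6403 (CAP minor version) error should disappear.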
Related
I have a problem debugging an STM32F407VET6 board and Rust code.
The crux of the problem is that GDB ignores breakpoints.
After setting breakpoints and executing the "continue" command in gdb, the program continues to ignore all breakpoints.
The only way to stop the program running is to cause an interrupt using the "ctrl + c" command.
After this command, the board stops its execution on the line currently being executed.
I have tried to set breakpoints on all lines where I can set them, but all the attempts are unsuccessful.
$ openocd
Open On-Chip Debugger 0.10.0 (2020-07-01) [https://github.com/sysprogs/openocd]
Licensed under GNU GPL v2
libusb1 09e75e98b4d9ea7909e8837b7a3f00dda4589dc3
For bug reports, read
http://openocd.org/doc/doxygen/bugs.html
Info : auto-selecting first available session transport "hla_swd". To override use 'transport select <transport>'.
Info : The selected transport took over low-level target control. The results might differ compared to plain JTAG/SWD
Info : Listening on port 6666 for tcl connections
Info : Listening on port 4444 for telnet connections
Info : clock speed 2000 kHz
Error: libusb_open() failed with LIBUSB_ERROR_NOT_SUPPORTED
Info : STLINK V2J35S7 (API v2) VID:PID 0483:3748
Info : Target voltage: 6.436364
Info : stm32f4x.cpu: hardware has 6 breakpoints, 4 watchpoints
Info : starting gdb server for stm32f4x.cpu on 3333
Info : Listening on port 3333 for gdb connections
$ arm-none-eabi-gdb -q target\thumbv7em-none-eabihf\debug\test_blink
Reading symbols from target\thumbv7em-none-eabihf\debug\test_blink...
(gdb) target remote :3333
Remote debugging using :3333
0x00004070 in core::ptr::read_volatile (src=0xe000e010) at C:\Users\User\.rustup\toolchains\stable-x86_64-pc-windows-msvc\lib/rustlib/src/rust\src/libcore/ptr/mod.rs:1005
1005 pub unsafe fn read_volatile<T>(src: *const T) -> T {
(gdb) load
Loading section .vector_table, size 0x1a8 lma 0x0
Loading section .text, size 0x47bc lma 0x1a8
Loading section .rodata, size 0xbf0 lma 0x4970
Start address 0x47a2, load size 21844
Transfer rate: 100 KB/sec, 5461 bytes/write.
(gdb) b main
Breakpoint 1 at 0x1f2: file src\main.rs, line 15.
(gdb) continue
Continuing.
Program received signal SIGINT, Interrupt.
0x00001530 in cortex_m::peripheral::syst::<impl cortex_m::peripheral::SYST>::has_wrapped (self=0x1000fc6c)
at C:\Users\User\.cargo\registry\src\github.com-1ecc6299db9ec823\cortex-m-0.6.3\src\peripheral/syst.rs:135
135 pub fn has_wrapped(&mut self) -> bool {
(gdb) bt
#0 0x00001530 in cortex_m::peripheral::syst::<impl cortex_m::peripheral::SYST>::has_wrapped (self=0x1000fc6c)
at C:\Users\User\.cargo\registry\src\github.com-1ecc6299db9ec823\cortex-m-0.6.3\src\peripheral/syst.rs:135
#1 0x00003450 in <stm32f4xx_hal::delay::Delay as embedded_hal::blocking::delay::DelayUs<u32>>::delay_us (self=0x1000fc6c, us=500000)
at C:\Users\User\.cargo\registry\src\github.com-1ecc6299db9ec823\stm32f4xx-hal-0.8.3\src/delay.rs:69
#2 0x0000339e in <stm32f4xx_hal::delay::Delay as embedded_hal::blocking::delay::DelayMs<u32>>::delay_ms (self=0x1000fc6c, ms=500)
at C:\Users\User\.cargo\registry\src\github.com-1ecc6299db9ec823\stm32f4xx-hal-0.8.3\src/delay.rs:32
#3 0x00000318 in test_blink::__cortex_m_rt_main () at src\main.rs:40
#4 0x000001f6 in main () at src\main.rs:15
memory.x file:
MEMORY
{
  /* NOTE 1 K = 1 KiBi = 1024 bytes */
  /* TODO Adjust these memory regions to match your device memory layout */
  /* These values correspond to the LM3S6965, one of the few devices QEMU can emulate */
  CCMRAM : ORIGIN = 0x10000000, LENGTH = 64K
  RAM    : ORIGIN = 0x20000000, LENGTH = 128K
  FLASH  : ORIGIN = 0x00000000, LENGTH = 512K
}
/* This is where the call stack will be allocated. */
/* The stack is of the full descending type. */
/* You may want to use this variable to locate the call stack and static
variables in different memory regions. Below is shown the default value */
_stack_start = ORIGIN(CCMRAM) + LENGTH(CCMRAM);
/* You can use this symbol to customize the location of the .text section */
/* If omitted the .text section will be placed right after the .vector_table
section */
/* This is required only on microcontrollers that store some configuration right
after the vector table */
/* _stext = ORIGIN(FLASH) + 0x400; */
/* Example of putting non-initialized variables into custom RAM locations. */
/* This assumes you have defined a region RAM2 above, and in the Rust
sources added the attribute `#[link_section = ".ram2bss"]` to the data
you want to place there. */
/* Note that the section will not be zero-initialized by the runtime! */
/* SECTIONS {
.ram2bss (NOLOAD) : ALIGN(4) {
*(.ram2bss);
. = ALIGN(4);
} > RAM2
} INSERT AFTER .bss;
*/
openocd.cfg file:
# Sample OpenOCD configuration for the STM32F3DISCOVERY development board
# Depending on the hardware revision you got you'll have to pick ONE of these
# interfaces. At any time only one interface should be commented out.
# Revision C (newer revision)
source [find interface/stlink.cfg]
# Revision A and B (older revisions)
# source [find interface/stlink-v2.cfg]
source [find target/stm32f4x.cfg]
# use hardware reset, connect under reset
# reset_config none separate
main.rs file:
#![no_main]
#![no_std]
#![allow(unsafe_code)]
// Halt on panic
#[allow(unused_extern_crates)] // NOTE(allow) bug rust-lang/rust#53964
extern crate panic_halt; // panic handler
use cortex_m;
use cortex_m_rt::entry;
use stm32f4xx_hal as hal;
use crate::hal::{prelude::*, stm32};
#[entry]
fn main() -> ! {
    if let (Some(dp), Some(cp)) = (
        stm32::Peripherals::take(),
        cortex_m::peripheral::Peripherals::take(),
    ) {
        let rcc = dp.RCC.constrain();
        let clocks = rcc
            .cfgr
            .sysclk(168.mhz())
            .freeze();

        let mut delay = hal::delay::Delay::new(cp.SYST, clocks);

        let gpioa = dp.GPIOA.split();
        let mut l1 = gpioa.pa6.into_push_pull_output();
        let mut l2 = gpioa.pa7.into_push_pull_output();

        loop {
            l1.set_low().unwrap();
            l2.set_high().unwrap();
            delay.delay_ms(500u32);
            l1.set_high().unwrap();
            l2.set_low().unwrap();
            delay.delay_ms(500u32);
        }
    }

    loop {}
}
Cargo.toml file:
[package]
name = "test_blink"
version = "0.1.0"
authors = ["Alex"]
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
embedded-hal = "0.2"
nb = "0.1.2"
cortex-m = "0.6"
cortex-m-rt = "0.6"
# Panic behaviour, see https://crates.io/keywords/panic-impl for alternatives
panic-halt = "0.2"
cortex-m-log="0.6.2"
[dependencies.stm32f4xx-hal]
version = "0.8.3"
features = ["rt", "stm32f407"]
I am new to embedded Rust and maybe I have done something wrong, but I have already tried all the options I could find on the Internet.
At first I thought it was a problem with the cortex-debug plugin for VS Code and even created an issue, but the maintainers couldn't help me because the problem is evidently not on their side.
Debugging C code in CubeIDE works, so I dare to assume that the problem is somewhere in the rust/gdb/openocd toolchain. Perhaps I am missing something, but unfortunately I cannot find it myself yet.
I would appreciate any resources or ideas to solve this problem.
I hope you've checked out these resources:
Discovery - debug
From your screen-grab of arm-none-eabi-gdb it does indeed look like it did not hit the breakpoint.
You should have seen a message like this afterwards:
Note: automatically using hardware breakpoints for read-only addresses.
Breakpoint 1, main () at ...
Did you compile your source with symbols, and unoptimised?
Your config all looks right to me otherwise.
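One thing worth pinning down on the Rust side: a plain `cargo build` (dev profile) keeps symbols and disables optimisation by default, but a `--release` build, or an overridden profile, will not. A sketch of the relevant Cargo.toml settings, stated explicitly (these are the dev-profile defaults):

```toml
[profile.dev]
opt-level = 0   # keep code unoptimised so breakpoints map cleanly to source lines
debug = true    # emit DWARF debug info (already the default for the dev profile)
```

Also make sure the binary you `load` in GDB is the one from `target\thumbv7em-none-eabihf\debug\`, not a release build.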
I use Eclipse Milo (0.2.3) in my project for OPC UA communication. The OPC UA participants are a client (written using Eclipse Milo) and a server, which runs on a remote machine and is not implemented using Milo.
I can connect the client to the server normally and if the remote server is shut down, I am able to reconnect the client automatically, as soon as the server is accessible again.
However, after updating the server software, the client can't reconnect any more and it floods the server with the following messages:
1. The client sends a Create Session Request.
2. The server is able to create a session.
3. The client sends an Activate Session Request.
4. The server sends an Activate Session Response in which the ServerNonce is missing and the service result is "bad".
This causes the client to send a new Create Session Request. This all happens multiple times within a second, which makes it impossible for the server to execute any other tasks than trying to create this session.
Are there any settings in Milo to specify the reconnection delay? Or is there any setting for specifying what should happen when receiving an empty ServerNonce?
The server's responses are as follows:
If the session can be activated:
OpcUa Binary Protocol
Message Type: MSG
Chunk Type: F
Message Size: 96
SecureChannelId: 1599759116
Security Token Id: 1
Security Sequence Number: 53
Security RequestId: 3
OpcUa Service : Encodeable Object
TypeId : ExpandedNodeId
NodeId EncodingMask: Four byte encoded Numeric (0x01)
NodeId Namespace Index: 0
NodeId Identifier Numeric: ActivateSessionResponse (470)
ActivateSessionResponse
ResponseHeader: ResponseHeader
Timestamp: Nov 16, 2018 14:05:47.974000000
RequestHandle: 1
ServiceResult: 0x00000000 [Good]
ServiceDiagnostics: DiagnosticInfo
EncodingMask: 0x00
.... ...0 = has symbolic id: False
.... ..0. = has namespace: False
.... .0.. = has localizedtext: False
.... 0... = has locale: False
...0 .... = has additional info: False
..0. .... = has inner statuscode: False
.0.. .... = has inner diagnostic info: False
StringTable: Array of String
ArraySize: 0
AdditionalHeader: ExtensionObject
TypeId: ExpandedNodeId
EncodingMask: 0x00
ServerNonce: ab...
Results: Array of StatusCode
ArraySize: 0
DiagnosticInfos: Array of DiagnosticInfo
ArraySize: 0
If the session can't be activated (after updating the server's software):
OpcUa Binary Protocol
Message Type: MSG
Chunk Type: F
Message Size: 64
SecureChannelId: 1599759041
Security Token Id: 1
Security Sequence Number: 61
Security RequestId: 11
OpcUa Service : Encodeable Object
TypeId : ExpandedNodeId
ActivateSessionResponse
ResponseHeader: ResponseHeader
Timestamp: Nov 16, 2018 12:49:08.235000000
RequestHandle: 222
ServiceResult: 0x80000000 [Bad]
ServiceDiagnostics: DiagnosticInfo
EncodingMask: 0x00
.... ...0 = has symbolic id: False
.... ..0. = has namespace: False
.... .0.. = has localizedtext: False
.... 0... = has locale: False
...0 .... = has additional info: False
..0. .... = has inner statuscode: False
.0.. .... = has inner diagnostic info: False
StringTable: Array of String
ArraySize: 0
AdditionalHeader: ExtensionObject
TypeId: ExpandedNodeId
EncodingMask: 0x00
ServerNonce: <MISSING>[OpcUa Null ByteString]
Results: Array of StatusCode
ArraySize: 0
DiagnosticInfos: Array of DiagnosticInfo
ArraySize: 0
Thank you in advance for your help.
The corner case you described, where there is no delay between a failed re-activation and the subsequent re-creation, is addressed on the dev/0.3 branch in this commit.
I might be able to back port it to 0.2.x next week if I have some spare time.
I don't think there are any workarounds you can use.
I am trying to debug a mono application using WinDbg. The application hangs in an infinite loop inside the C# code that WinDbg is not able to decode internally.
I know I can use the function mono_pmip() to translate a stack address into the name of the function.
I'm using .call mono!mono_pmip(0x63a0630) (verified as available using x *!*pmip*), but I still can't get the output of the function; I get an access violation instead.
This is the stack:
34 018feee8 071824eb 0x71824eb
35 018fef08 07181f4c 0x71824eb
36 018fef28 0717fd8a 0x7181f4c
37 018fef68 071708ae 0x717fd8a
38 018fefc8 07170328 0x71708ae
39 018ff078 0716efa5 0x7170328
3a 018ff0e8 0716ed4c 0x716efa5
3b 018ff108 18de8f88 0x716ed4c
3c 018ff1b8 18de75ff 0x18de8f88
3d 018ff208 18de6f6f 0x18de75ff
3e 018ff238 18de660c 0x18de6f6f
3f 018ff2f8 18de60ce 0x18de660c
40 018ff328 18de6033 0x18de60ce
41 018ff348 18ddf586 0x18de6033
42 018ff3e8 18ddebc6 0x18ddf586
43 018ff408 18dde13e 0x18ddebc6
44 018ff418 063a0630 0x18dde13e
45 018ff450 100f1328 0x63a0630
46 018ff480 1005d984 mono!mono_jit_runtime_invoke+0x214 [c:\buildslave\mono\build\mono\mini\mini.c # 4936]
47 018ff4a4 0035e9ce mono!mono_runtime_invoke+0x51 [c:\buildslave\mono\build\mono\metadata\object.c # 2623]
The same function actually works if I use the Immediate window in Visual Studio:
(char*)mono.dll!mono_pmip((void*)0x63a0630)
0x15ebf258 " Login.Login:OnClickLoginButton () + 0x4b (21FF75F8 21FF765C) [06E26E70 - Unity Root Domain]"
Still, I need to make it run in WinDbg :(
I wonder if I have to execute the call on the same thread as the call stack I want to debug.
I realised I never answered this question: (char*)mono.dll!mono_pmip((void*)address) is available only on the main thread, so I had to select the main thread first from the thread list.
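For reference, the WinDbg sequence looks roughly like this (a sketch; the main thread is assumed here to be index 0, so list threads with ~ first to confirm, and the address is the one from the question):

```
~0s                                $$ switch to the main thread (assumed index 0)
.call mono!mono_pmip(0x63a0630)    $$ build the fake call frame
g                                  $$ run it; execution stops when the call returns
da @eax                            $$ x86: dump the returned char* (use rax and a 64-bit dump on x64)
```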
I use SystemTap to probe slab memory allocation activity.
#! /usr/bin/env stap

global slabs

probe vm.kmem_cache_alloc {
    slabs[execname(), bytes_req] <<< 1
}

probe timer.ms(10000)
{
    dummy = "";
    foreach ([name, bytes] in slabs) {
        if (dummy != name)
            printf("\nProcess:%s\n", name);
        printf("Slab_size:%d\tCount:%d\n", bytes, @count(slabs[name, bytes]));
        dummy = name;
    }
    delete slabs
    printf("\n-------------------------------------------------------\n\n")
}
but stap produces the following warnings:
[root@svr_test5 ~]# stap -v -u vm.tracepoints.stp
Pass 1: parsed user script and 85 library script(s) using 146832virt/23712res/3012shr/21396data kb, in 140usr/10sys/152real ms.
Pass 2: analyzed script: 3 probe(s), 111 function(s), 3 embed(s), 13 global(s) using 228472virt/45000res/4760shr/41696data kb, in 300usr/150sys/488real ms.
Pass 3: translated to C into "/tmp/stap7FrdOq/stap_1d0a8db65ecd4c9f56be318001d197c0_39617_src.c" using 226240virt/47000res/6800shr/41696data kb, in 10usr/0sys/36real ms.
Pass 4: compiled C into "stap_1d0a8db65ecd4c9f56be318001d197c0_39617.ko" in 1360usr/160sys/1546real ms.
Pass 5: starting run.
WARNING: probe kernel.function("kmem_cache_alloc@mm/slab.c:3269").call (address 0xffffffff8000ac24) registration error (rc -84)
WARNING: probe kernel.function("kmem_cache_alloc@mm/slab.c:3269").return (address 0xffffffff8000ac24) registration error (rc -84)
which I take to mean that the probe kernel module could not be registered, so the probe has no effect.
My os :
CentOS release 5.8 (Final)
kernel :
Linux svr_test5 2.6.18-308.el5 #1 SMP Tue Feb 21 20:06:06 EST 2012 x86_64 x86_64 x86_64 GNU/Linux
So, what does this WARNING mean, and how can I fix it?
WARNING: probe [...] registration error (rc -84)
This is an indication of a kernel kprobe error EILSEQ, which is issued when the kernel is unable to decode/confirm the binary instruction sequence at the requested address.
For systemtap 1.8 (last version officially updated for RHEL5) against a RHEL5.11 kernel (2.6.18-400), it happens to work; perhaps kprobes improvements did the job.
Hi, I have recently updated Polymer to 0.5.1 and my core-animation stopped working.
Here is my core-animation element:
<core-animation duration="400" fill="forwards" id="show">
  <core-animation-keyframe>
    <core-animation-prop name="opacity" value="0.7"></core-animation-prop>
  </core-animation-keyframe>
</core-animation>
and the JS code:
var show = this.$.show;
show.target = this.$.img;
show.play();
The problem is that this doesn't work at all. In the Chrome console I get the error Uncaught #<Object>, which is caused by line 63 in effect.js:
61 if (group[0].offset != 0 || group[group.length - 1].offset != 1) {
62 throw {
63 type: DOMException.NOT_SUPPORTED_ERR,
64 name: 'NotSupportedError',
65 message: 'Partial keyframes are not supported'
66 };
67 }
It looks like with 0.5.1 you need at least two keyframes: one for the starting value and one for the ending value.
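A sketch of what that could look like for your element (the starting opacity of 0 and the explicit offset attributes are assumptions on my part; adjust them to your effect):

```html
<core-animation duration="400" fill="forwards" id="show">
  <!-- starting keyframe (offset 0): assumed starting value of fully transparent -->
  <core-animation-keyframe offset="0">
    <core-animation-prop name="opacity" value="0"></core-animation-prop>
  </core-animation-keyframe>
  <!-- ending keyframe (offset 1): the value you originally had -->
  <core-animation-keyframe offset="1">
    <core-animation-prop name="opacity" value="0.7"></core-animation-prop>
  </core-animation-keyframe>
</core-animation>
```

With both the 0 and 1 offsets present, the "Partial keyframes are not supported" check in effect.js no longer throws.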