I wanted to use the CFSwapInt16BigToHost function from Swift, but I can't get it to link. I link against the CoreFoundation framework, but every time I get the following error:
Undefined symbols for architecture i386:
"__OSSwapInt16", referenced from:
Did I miss something?
Yes, for some reason the CFSwap... functions cannot be used in a Swift program. But since Xcode 6 beta 3, all integer types have littleEndian:/bigEndian: initializers and littleEndian/bigEndian properties.
From the UInt16 struct definition:
/// Creates an integer from its big-endian representation, changing the
/// byte order if necessary.
init(bigEndian value: UInt16)
/// Creates an integer from its little-endian representation, changing the
/// byte order if necessary.
init(littleEndian value: UInt16)
/// Returns the big-endian representation of the integer, changing the
/// byte order if necessary.
var bigEndian: UInt16 { get }
/// Returns the little-endian representation of the integer, changing the
/// byte order if necessary.
var littleEndian: UInt16 { get }
Example:
// Data buffer containing the number 1 in 16-bit, big-endian order:
var bytes: [UInt8] = [0x00, 0x01]
let data = NSData(bytes: &bytes, length: bytes.count)
// Read data buffer into integer variable:
var i16be: UInt16 = 0
data.getBytes(&i16be, length: sizeofValue(i16be))
println(i16be) // Output: 256
// Convert from big-endian to host byte order:
let i16 = UInt16(bigEndian: i16be)
println(i16) // Output: 1
Update: As of Xcode 6.1.1, the CFSwap... functions are available in Swift, so
let i16 = CFSwapInt16BigToHost(i16be)
let i16 = UInt16(bigEndian: i16be)
both work, with identical results.
It looks like those are handled with a combination of macros and inline functions, so I don't know why they wouldn't already be statically compiled into the CF version. Generally, to solve this kind of dependency riddle, you can search for the naked function name without the prefixed underscores and then figure out where it should be linked from:
#define OSSwapInt16(x) __DARWIN_OSSwapInt16(x)
then
#define __DARWIN_OSSwapInt16(x) \
((__uint16_t)(__builtin_constant_p(x) ? __DARWIN_OSSwapConstInt16(x) : _OSSwapInt16(x)))
then
__DARWIN_OS_INLINE
__uint16_t
_OSSwapInt16(
    __uint16_t _data
)
{
    return ((__uint16_t)((_data << 8) | (_data >> 8)));
}
I know this isn't a real answer, but it was too big for a comment. I think you may need to find out whether there is a problem with the way Swift imports the headers, for instance whether the macros used to import the headers are handled correctly in a Swift setup.
As the other answers have pointed out, the __OSSwapInt16 swap function does not appear to be visible in the Swift interface for the byte-order utilities. I think the Swift-like alternative would be:
let dataLength: UInt16 = 24
let swapped = dataLength.byteSwapped
Note that byteSwapped always swaps unconditionally, whereas the bigEndian: initializer swaps only when the host byte order requires it.
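To see the difference concretely, here is a tiny sketch in Rust, which exposes the same two operations (u16::from_be is the analogue of Swift's bigEndian: initializer, swap_bytes of byteSwapped; this is just an illustration, not the Swift API):

fn main() {
    let be = 1u16.to_be();           // the value 1 laid out in big-endian byte order
    assert_eq!(u16::from_be(be), 1); // swaps only when the host is little-endian
    // swap_bytes() swaps unconditionally; on a big-endian host this
    // would print 256, not 1:
    println!("{}", be.swap_bytes());
}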
I have a C function:
Node * first_element_by_path(const Node * node, const char * path, char delimiter);
And a Rust glue function:
pub fn first_element_by_path(node: *mut CNode, path: *const c_char, delimiter: c_char) -> *mut CNode;
It expects a c_char as the delimiter. I want to pass it a char, but c_char is an i8, not a char. How can I convert a Rust char to i8 (or c_char) in this case?
You are asking the question:
How do I fit a 32-bit number into an 8-bit value?
Which has the immediate answer: "throw away most of the bits":
let c = rust_character as libc::c_char;
However, that should cause you to stop and ask the questions:
Are the remaining bits in the right encoding?
What about all those bits I threw away?
A Rust char can encode any Unicode scalar value. What is your desired behavior for this code:
let c = '💩' as libc::c_char;
It's probably not to create the value -87, a non-ASCII value! Or this less-silly and perhaps more realistic variant, which is -17:
let c = 'ï' as libc::c_char;
You then have to ask: what does the C code mean by a character? What encoding does the C code assume strings are in? How does the C code handle non-ASCII text?
The safest thing may be to assert that the value is within the ASCII range:
let c = 'ï';
let v = c as u32;
assert!(v <= 127, "Invalid C character value");
let v = v as libc::c_char;
Instead of asserting, you could also return a Result type that indicates that the value was out of range.
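For example, a fallible conversion might look like the sketch below. The function name and error type are my own choices; std::os::raw::c_char is the standard library's equivalent of libc::c_char, used here to keep the example dependency-free:

use std::os::raw::c_char;

/// Converts a Rust `char` to a C `char`, failing for anything outside ASCII.
fn to_c_char(c: char) -> Result<c_char, char> {
    let v = c as u32;
    if v <= 127 {
        Ok(v as c_char)
    } else {
        Err(c) // hand the offending character back to the caller
    }
}

fn main() {
    assert_eq!(to_c_char('A'), Ok(65));
    assert_eq!(to_c_char('ï'), Err('ï'));
}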
should I change my function (the one that will call the glue function) to receive a c_char instead of a char?
That depends. That may just be pushing the problem further up the stack; now every caller has to decide how to create the c_char and worry about the values between 128 and 255. If the semantics of your code are such that the value has to be an ASCII character, then encode that in your types. Specifically, you can use something like the ascii crate.
In either case, you push the possibility for failure into someone else's code, which makes your life easier at the potential expense of making the caller more frustrated.
I have a card reader that always reports 64 bits and can read cards with 4- or 7-byte UIDs.
As an example, I see it can report:
04-18-c5-82-00-00-00-00 - a 4-byte UID in the form uid0-uid1-uid2-uid3-00-00-00-00
04-18-c5-82-f1-3b-81-00 - a 7-byte UID in the form uid0-uid1-uid2-uid3-uid4-uid5-uid6-00
What prevents a 7-byte UID from having uid4, uid5 and uid6 set to zero? Is this covered in a spec? If so, which spec?
What prevents a 7-byte UID from having uid4, uid5 and uid6 set to zero?
Nothing. The format of the UID (as used by MIFARE cards) is defined in ISO/IEC 14443-3. Specifically for MIFARE cards, NXP has (or at least had?) some further allocation logic for 4 byte UIDs, but that's not publicly available.
Is it possible to distinguish the two cases?
If the reader outputs the UIDs exactly in the form that you showed in your example, then the answer is no (at least not reliably). However, some readers output the UID on 8 bytes and include the cascade tag for 7-byte-UIDs. Thus, all 7-byte-UIDs start with 0x88 for those readers. This does not seem to be the case with your reader.
Are there possible strategies to distinguish the two cases?
Some strategies come to mind to distinguish 4-byte UIDs from 7-byte UIDs (a small sketch combining them follows below).
The first byte of a 7-byte UID is the manufacturer code (as defined in ISO/IEC 7816-6; see How to detect manufacturer from NFC tag using Android? on how to obtain the list). Thus, if you have a limited set of manufacturers (e.g. if you only use MIFARE cards with chips from NXP), you could interpret all UIDs that start with NXP's manufacturer code (0x04) as 7-byte UIDs. Nevertheless, you should be aware that 4-byte UIDs are allowed to start with 0x04 as well. Hence, this method is not 100% reliable and may fail in some cases.
The first byte of a 4-byte UID must not contain any of the following values: x8 (with x != 0) or xF. If you find the first byte to match any of those values, you can assume the UID consists of 7 bytes.
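To make the two heuristics concrete, here is a small sketch (in Rust, just for illustration; the function name is my own, and as noted above this is a guess, not a guarantee):

/// Heuristic only: guesses the UID length from an 8-byte reader report.
/// It follows the two rules of thumb above and can mis-classify, e.g. a
/// genuine 4-byte UID that happens to start with 0x04.
fn guess_uid_len(report: [u8; 8]) -> usize {
    let uid0 = report[0];
    // Rule 1: 7-byte UIDs start with a manufacturer code; 0x04 is NXP.
    if uid0 == 0x04 {
        return 7;
    }
    // Rule 2: a 4-byte UID may not start with x8 (x != 0) or xF.
    let forbidden_for_4 = (uid0 & 0x0F == 0x08 && uid0 & 0xF0 != 0x00)
        || (uid0 & 0x0F == 0x0F);
    if forbidden_for_4 {
        return 7;
    }
    4
}

fn main() {
    // The 7-byte example from the question: 04-18-c5-82-f1-3b-81-00.
    assert_eq!(guess_uid_len([0x04, 0x18, 0xC5, 0x82, 0xF1, 0x3B, 0x81, 0x00]), 7);
}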
If you can get the ATQA response, you can distinguish them: the low byte of the ATQA tells you how long the UID is (4, 7, or 10 bytes). As far as I know, there is no other way to distinguish them with 100% certainty.
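For illustration, decoding those bits might look like this (a Rust sketch; per ISO/IEC 14443-3, the UID size sits in bits 7 and 8 of the ATQA, i.e. the two high bits of its low byte):

/// Decodes the UID size from a raw 16-bit ATQA value: bits 7 and 8
/// (the two high bits of the low byte) encode single/double/triple size.
fn uid_len_from_atqa(atqa: u16) -> Option<usize> {
    match (atqa >> 6) & 0b11 {
        0b00 => Some(4),  // single size UID
        0b01 => Some(7),  // double size UID
        0b10 => Some(10), // triple size UID
        _ => None,        // reserved
    }
}

fn main() {
    assert_eq!(uid_len_from_atqa(0x0004), Some(4)); // e.g. MIFARE Classic with a 4-byte UID
    assert_eq!(uid_len_from_atqa(0x0044), Some(7)); // e.g. MIFARE Classic with a 7-byte UID
}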
I know it's a bit late to the party, but for anyone who has the same doubt:
the documented way of deriving a 4-byte NUID from a 7-byte UID is in Annex 6 of this PDF.
In case the page goes down, a shameless rip-off from that page is given below.
Any mistakes you find in the code snippet below rightfully belong to the NXP guys, not me.
But how do you know whether the tag has a 4-byte or a 7-byte UID?
From the ATQA response. Refer to page 15/36 of document 1 and page 8/15 of document 2.
In case the document goes down, here is the relevant excerpt from document 1.
The MF1S50yyX/V1 answers to a REQA or WUPA command with the ATQA value
shown in Table 11 and to a Select CL1 command (CL2 for the 7-byte UID variant) with the SAK value shown in Table 12.
Remark: The ATQA coding in bits 7 and 8 indicate the UID size according to ISO/IEC 14443 independent from the settings of the UID usage.
6. Annex, Source code to derive NUID out of a Double Size UID
#include <stdio.h>

unsigned short UpdateCrc(unsigned char ch, unsigned short *lpwCrc)
{
    ch = (ch ^ (unsigned char)((*lpwCrc) & 0x00FF));
    ch = (ch ^ (ch << 4));
    *lpwCrc = (*lpwCrc >> 8) ^ ((unsigned short)ch << 8)
            ^ ((unsigned short)ch << 3) ^ ((unsigned short)ch >> 4);
    return (*lpwCrc);
}

void ComputeCrc(unsigned short wCrcPreset, unsigned char *Data, int Length,
                unsigned short *usCRC)
{
    unsigned char chBlock;
    do {
        chBlock = *Data++;
        UpdateCrc(chBlock, &wCrcPreset);
    } while (--Length);
    *usCRC = wCrcPreset;
}

void Convert7ByteUIDTo4ByteNUID(unsigned char *uc7ByteUID, unsigned char *uc4ByteUID)
{
    unsigned short CRCPreset = 0x6363;
    unsigned short CRCCalculated = 0x0000;
    ComputeCrc(CRCPreset, uc7ByteUID, 3, &CRCCalculated);
    uc4ByteUID[0] = (CRCCalculated >> 8) & 0xFF; // MSB
    uc4ByteUID[1] = CRCCalculated & 0xFF;        // LSB
    CRCPreset = CRCCalculated;
    ComputeCrc(CRCPreset, uc7ByteUID + 3, 4, &CRCCalculated);
    uc4ByteUID[2] = (CRCCalculated >> 8) & 0xFF; // MSB
    uc4ByteUID[3] = CRCCalculated & 0xFF;        // LSB
    uc4ByteUID[0] = uc4ByteUID[0] | 0x0F;
    uc4ByteUID[0] = uc4ByteUID[0] & 0xEF;
}

int main(void)
{
    int i;
    unsigned char uc7ByteUID[7] = {0x04, 0x18, 0x3F, 0x09, 0x32, 0x1B, 0x85}; // expected NUID: 4F505D7D
    unsigned char uc4ByteUID[4] = {0x00};
    Convert7ByteUIDTo4ByteNUID(uc7ByteUID, uc4ByteUID);
    printf("7-byte UID = ");
    for (i = 0; i < 7; i++)
        printf("%02X", uc7ByteUID[i]);
    printf("\t4-byte FNUID = ");
    for (i = 0; i < 4; i++)
        printf("%02X", uc4ByteUID[i]);
    printf("\n");
    return 0;
}
If you came here (as I did) to find a proper way to automatically get the UID from a card, regardless of whether it is a 4-, 7- or 10-byte UID, I do it dynamically as follows (I found this logic somewhere on the internet but can't find it anymore to give proper credit; 10-byte UIDs are untested):
(This is C# code using winscard.dll under the hood.)
public static UInt64 getCardUIDasUInt64() // *** only for mifare 1k cards ***
{
    UInt64 UID = 0;
    byte[] receivedUID = new byte[12]; // room for a 10-byte UID plus the SW1/SW2 status bytes
    Card.SCARD_IO_REQUEST request = new Card.SCARD_IO_REQUEST();
    request.dwProtocol = (UInt32)Protocol; // use the protocol detected at connect time instead of a hard-coded one (e.g. Card.SCARD_PROTOCOL_T1)
    request.cbPciLength = (UInt32)System.Runtime.InteropServices.Marshal.SizeOf(typeof(Card.SCARD_IO_REQUEST));
    byte[] sendBytes = new byte[] { 0xFF, 0xCA, 0x00, 0x00, 0x00 }; // Get UID command for MIFARE cards; Le = 0x00 returns the full UID
    //byte[] sendBytes = new byte[] { 0xFF, 0xCA, 0x00, 0x00, 0x04 }; // variant: request exactly 4 UID bytes
    int receivedBytesLength = receivedUID.Length;
    int status = Card.SCardTransmit(hCard, ref request, ref sendBytes[0], sendBytes.Length, ref request, ref receivedUID[0], ref receivedBytesLength);
    if (status == Card.SCARD_S_SUCCESS)
    {
        if (receivedBytesLength >= 2)
        {
            // do we have an error?
            if ((receivedUID[receivedBytesLength - 2] != 0x90) ||
                (receivedUID[receivedBytesLength - 1] != 0x00))
            {
                throw new Exception(receivedUID[receivedBytesLength - 2].ToString());
            }
            else if (receivedBytesLength > 2)
            {
                // everything before the two status bytes is the UID, most significant byte first
                for (UInt32 i = 0; i != receivedBytesLength - 2; i++)
                {
                    UID <<= 8;
                    UID |= (UInt64)(receivedUID[i]);
                }
            }
        }
        else
        {
            throw new Exception(ResourceHandling.getTextResource("Error_Card_Read"));
        }
    }
    return UID;
}
If you need the UID in hex, use this (in addition to the code above):
public static string getCardUIDasHex() // *** only for mifare 1k cards ***
{
    UInt64 cardUID = getCardUIDasUInt64();
    return string.Format("{0:X}", cardUID);
}
Maybe this is also of help to someone else, as on the internet (also here on SO) there are many places that just read out the first four bytes of the UID, which is simply not correct anymore these days.
I'm trying to initialize an ALAssetsGroupType constant in Swift (Xcode 6.4):
let groupTypes: ALAssetsGroupType = ALAssetsGroupType(ALAssetsGroupAll)
But it doesn't compile for 32-bit devices (e.g. iPhone 5), and I get an error.
There's probably a better way, but the direct approach is to use the constructor for Int32 to create a signed Int32 from a UInt32:
let groupTypes: ALAssetsGroupType = ALAssetsGroupType(Int32(bitPattern: ALAssetsGroupAll))
Explanation
If you option-click on ALAssetsGroupType you will see that it is a typealias for Int:
typealias ALAssetsGroupType = Int
But, if you then click on AssetsLibrary next to Declared In you will see that in the header file it is actually a typedef for NSUInteger:
ALAssetsLibrary.h
typedef NSUInteger ALAssetsGroupType;
So, what's going on here? Why doesn't Swift treat NSUInteger as UInt? Swift is a strongly typed language, which means you can't just assign an Int to a UInt without a conversion. To keep our lives simpler and to remove many of those conversions, the Swift engineers decided to treat NSUInteger as Int, which saves a lot of hassle in most cases.
The next piece of the mystery is the definition of ALAssetsGroupAll:
enum {
    ALAssetsGroupLibrary     = (1 << 0), // The Library group that includes all assets.
    ALAssetsGroupAlbum       = (1 << 1), // All the albums synced from iTunes or created on the device.
    ALAssetsGroupEvent       = (1 << 2), // All the events synced from iTunes.
    ALAssetsGroupFaces       = (1 << 3), // All the faces albums synced from iTunes.
    ALAssetsGroupSavedPhotos = (1 << 4), // The Saved Photos album.
#if __IPHONE_5_0 <= __IPHONE_OS_VERSION_MAX_ALLOWED
    ALAssetsGroupPhotoStream = (1 << 5), // The PhotoStream album.
#endif
    ALAssetsGroupAll         = 0xFFFFFFFF, // The same as ORing together all the available group types.
};
Note that the comment next to ALAssetsGroupAll says "The same as ORing together all the available group types". Well, 0x3F would have sufficed, but presumably the author decided to set all of the bits to future-proof it in case other options were added later.
The problem is that while 0xFFFFFFFF fits in an NSUInteger, it doesn't fit into an Int32, so you get an overflow warning on 32-bit systems. The solution provided above converts the UInt32 0xFFFFFFFF into an Int32 with the same bitPattern. That then gets converted to an ALAssetsGroupType which is just an Int, so on a 32-bit system you get an Int with all bits set (which is the representation of -1). On a 64-bit system, the Int32 value of -1 gets sign extended to -1 in 64-bit which sets all 64 bits of the value.
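If you want to see the bit-level story in isolation, here is a tiny demonstration (sketched in Rust only because it makes the reinterpretation and sign extension explicit; the Swift code above does the equivalent):

fn main() {
    let all: u32 = 0xFFFF_FFFF;      // ALAssetsGroupAll's bit pattern
    let narrow = all as i32;         // same 32 bits reinterpreted as signed: -1
    let wide = narrow as i64;        // sign extension sets all 64 bits: still -1
    println!("{} {}", narrow, wide); // prints: -1 -1
}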
Another way to solve it is to define your own AllGroups:
let AllGroups = -1 // all bits set
let groupTypes: ALAssetsGroupType = AllGroups
Note, this is deprecated in iOS 9:
typedef NSUInteger ALAssetsGroupType NS_DEPRECATED_IOS(4_0, 9_0, "Use PHAssetCollectionType and PHAssetCollectionSubtype in the Photos framework instead");
First off, I've read this question:
What's the right way to make a stack (or other dynamically resizable vector-like thing) in Rust?
The problem
The selected answer just tells the asker to use the standard library instead of explaining the implementation, which is fine if my goal were to build something. Except I'm trying to learn about the implementation of a stack, while following a data structures textbook written for Java (Algorithms by Robert Sedgewick & Kevin Wayne), where they implement a stack by resizing an array (page 136).
I'm in the process of implementing the resize method, and it turns out the size of the array needs to be a constant expression.
Meta: are arrays in Rust called slices?
use std::mem;

struct DynamicStack<T> {
    length: uint,
    internal: Box<[T]>,
}

impl<T> DynamicStack<T> {
    fn new() -> DynamicStack<T> {
        DynamicStack {
            length: 0,
            internal: box [],
        }
    }

    fn resize(&mut self, new_size: uint) {
        let mut temp: Box<[T, ..new_size]> = box unsafe { mem::uninitialized() };
        // ^^ error: expected constant expr for array
        //    length: non-constant path in constant expr

        // code for copying elements from self.internal
        self.internal = temp;
    }
}
For brevity, the compiler error was this:
.../src/lib.rs:51:23: 51:38 error: expected constant expr for array length: non-constant path in constant expr
.../src/lib.rs:51 let mut temp: Box<[T, ..new_size]> = box unsafe { mem::uninitialized() };
^~~~~~~~~~~~~~~
The Question
Surely there is a way in Rust to initialize an array with its size determined at runtime (even if it's unsafe)? Could you also provide an explanation of what's going on in your answer?
Other attempts
I've considered that it's probably possible to implement the stack in terms of
struct DynamicStack<T> {
    length: uint,
    internal: Box<[Option<T>]>,
}
But I don't want the overhead of matching on the optional values just to remove the unsafe memory operations, and this still doesn't resolve the issue of unknown array sizes.
I also tried this (which doesn't even compile)
fn resize(&mut self, new_size: uint) {
    let mut temp: Box<[T]> = box [];
    let current_size = self.internal.len();
    for i in range(0, current_size) {
        temp[i] = self.internal[i];
    }
    for i in range(current_size, new_size) {
        temp[i] = unsafe { mem::uninitialized() };
    }
    self.internal = temp;
}
And I got these compiler errors:
.../src/lib.rs:55:17: 55:21 error: cannot move out of dereference of `&mut`-pointer
.../src/lib.rs:55 temp[i] = self.internal[i];
^~~~
.../src/lib.rs:71:19: 71:30 error: cannot use `self.length` because it was mutably borrowed
.../src/lib.rs:71 self.resize(self.length * 2);
^~~~~~~~~~~
.../src/lib.rs:71:7: 71:11 note: borrow of `*self` occurs here
.../src/lib.rs:71 self.resize(self.length * 2);
^~~~
.../src/lib.rs:79:18: 79:22 error: cannot move out of dereference of `&mut`-pointer
.../src/lib.rs:79 let result = self.internal[self.length];
^~~~
.../src/lib.rs:79:9: 79:15 note: attempting to move value to here
.../src/lib.rs:79 let result = self.internal[self.length];
^~~~~~
.../src/lib.rs:79:9: 79:15 help: to prevent the move, use `ref result` or `ref mut result` to capture value by reference
.../src/lib.rs:79 let result = self.internal[self.length];
I also had a look at this, but it's been a while since I've done any C/C++:
How should you do pointer arithmetic in rust?
Surely there is a way in Rust to initialize an array with its size determined at runtime?
No, Rust arrays can only be created with a size known at compile time. In fact, each combination of element type and size constitutes a new type! The Rust compiler uses that information to make optimizations.
Once you need a set of things determined at runtime, you have to add runtime checks to ensure that Rust's safety guarantees are always valid. For example, you can't access uninitialized memory (such as by walking off the beginning or end of a set of items).
If you truly want to go down this path, I expect that you are going to have to get your hands dirty with some direct memory allocation and unsafe code. In essence, you will be building a smaller version of Vec itself! To that end, you can check out the source of Vec.
At a high level, you will need to allocate chunks of memory big enough to hold N objects of some type. Then you can provide ways of accessing those elements, using pointer arithmetic under the hood. When you add more elements, you can allocate more space and move old values around. There are lots of nuanced things that may or may not come up, but it sounds like you are on the beginning of a fun journey!
Edit
Of course, you could choose to pretend that most of the methods of Vec don't even exist, and just use the ones that are analogs of Java's array. You'll still need to use Option to avoid uninitialized values though.
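To make that concrete, here is a minimal sketch of the book's resizing-array stack in present-day Rust (the question predates Rust 1.0): MaybeUninit has since replaced mem::uninitialized(), so the "allocated but not yet initialized" slots become explicit in the type. The type and method names mirror the question; everything else is my own choice.

use std::mem::{self, MaybeUninit};

/// A stack backed by a manually resized boxed slice, in the spirit of
/// Sedgewick & Wayne's resizing-array implementation.
struct DynamicStack<T> {
    length: usize,
    internal: Box<[MaybeUninit<T>]>,
}

impl<T> DynamicStack<T> {
    fn new() -> Self {
        DynamicStack { length: 0, internal: Box::new([]) }
    }

    /// Allocates a fresh boxed slice of `new_size` slots and moves the
    /// initialized elements across; the remaining slots stay uninitialized.
    fn resize(&mut self, new_size: usize) {
        assert!(new_size >= self.length);
        let mut temp: Box<[MaybeUninit<T>]> =
            (0..new_size).map(|_| MaybeUninit::uninit()).collect();
        for i in 0..self.length {
            // Moving a MaybeUninit value around is safe: nothing is read here.
            temp[i] = mem::replace(&mut self.internal[i], MaybeUninit::uninit());
        }
        self.internal = temp;
    }

    fn push(&mut self, value: T) {
        if self.length == self.internal.len() {
            let doubled = (self.internal.len() * 2).max(1);
            self.resize(doubled);
        }
        self.internal[self.length].write(value);
        self.length += 1;
    }

    fn pop(&mut self) -> Option<T> {
        if self.length == 0 {
            return None;
        }
        self.length -= 1;
        // SAFETY: slots below `length` are always initialized.
        Some(unsafe { self.internal[self.length].assume_init_read() })
    }
}

impl<T> Drop for DynamicStack<T> {
    fn drop(&mut self) {
        // Only the initialized prefix needs dropping; the box frees the storage.
        for slot in &mut self.internal[..self.length] {
            unsafe { slot.assume_init_drop() };
        }
    }
}

fn main() {
    let mut stack = DynamicStack::new();
    for i in 0..5 {
        stack.push(i);
    }
    assert_eq!(stack.pop(), Some(4));
    assert_eq!(stack.pop(), Some(3));
}

A production version would also shrink the buffer when the stack becomes a quarter full, as the book does, and Vec's source shows the fully general treatment built on raw allocations.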
This program runs fine on my machine (go1.2.1 linux/amd64):
package main

import "fmt"

const bigint = 1 << 62

func main() {
    fmt.Println(bigint)
}
But on the Go playground, it gives an overflow error: http://play.golang.org/p/lAUwLwOIVR
It seems that my build is configured with 64 bits for integer constants, while the playground is configured with 32 bits.
But the spec says that an implementation must give at least 256 bits of precision for constants?
Also see the code in my other question; the text/scanner standard package has this code:
const GoWhitespace = 1<<'\t' | 1<<'\n' | 1<<'\r' | 1<<' '
Since space is 32, this shouldn't work on the 32-bit playground at all.
How can this be?
Constants in general
Constants themselves are not limited in precision, but when used in code they are converted to a suitable type.
From the spec:
A constant may be given a type explicitly by a constant declaration or conversion, or implicitly when used in a variable declaration or an assignment or as an operand in an expression. It is an error if the constant value cannot be represented as a value of the respective type. For instance, 3.0 can be given any integer or any floating-point type, while 2147483648.0 (equal to 1<<31) can be given the types float32, float64, or uint32 but not int32 or string.
So if you have
const a = 1 << 33
fmt.Println(a)
you will get an overflow error, as the default type for integer constants, int, can't hold the value 1 << 33 in 32-bit environments. If you convert the constant to int64, everything's fine on all platforms:
const a = 1 << 33
fmt.Println(int64(a))
Scanner
The constant GoWhitespace is not directly used in the scanner.
The Whitespace field of the Scanner type is of type uint64, and GoWhitespace is assigned to it:
s.Whitespace = GoWhitespace
This means you deal with a uint64 value, and 1 << ' ' (i.e. 1 << 32) is perfectly valid.
Example (on play):
const w = 1<<'\t' | 1<<'\n' | 1<<'\r' | 1<<' '
c := ' '
// fmt.Println(w & (1 << uint(c))) // fails with overflow error
fmt.Println(uint64(w) & (1 << uint(c))) // works as expected
As stated by nemo, you can give a type to your constant. Just specify int64 and it works fine :)
http://play.golang.org/p/yw2vsvMigk
package main

import "fmt"

const bigint int64 = 1 << 62

func main() {
    fmt.Println(bigint)
}