I have been looking at examples of FFTs in Swift, and they all seem to have ConstUnsafePointer when using vDSP_ctozD as in the example below:
import Foundation
import Accelerate
internal func spectrumForValues(signal: [Double]) -> [Double] {
    // Find the largest power of two in our samples
    let log2N = vDSP_Length(log2(Double(signal.count)))
    let n = 1 << log2N
    let fftLength = n / 2

    // This is expensive; factor it out if you need to call this function a lot
    let fftsetup = vDSP_create_fftsetupD(log2N, FFTRadix(kFFTRadix2))

    var fft = [Double](count: Int(n), repeatedValue: 0.0)

    // Generate a split complex vector from the real data
    var realp = [Double](count: Int(fftLength), repeatedValue: 0.0)
    var imagp = realp
    withExtendedLifetimes(realp, imagp) {
        var splitComplex = DSPDoubleSplitComplex(realp: &realp, imagp: &imagp)

        // Take the fft
        vDSP_fft_zripD(fftsetup, &splitComplex, 1, log2N, FFTDirection(kFFTDirection_Forward))

        // Normalize
        var normFactor = 1.0 / Double(2 * n)
        vDSP_vsmulD(splitComplex.realp, 1, &normFactor, splitComplex.realp, 1, fftLength)
        vDSP_vsmulD(splitComplex.imagp, 1, &normFactor, splitComplex.imagp, 1, fftLength)

        // Zero out Nyquist
        splitComplex.imagp[0] = 0.0

        // Convert complex FFT to magnitude
        vDSP_zvmagsD(&splitComplex, 1, &fft, 1, fftLength)
    }

    // Cleanup
    vDSP_destroy_fftsetupD(fftsetup)
    return fft
}

// To get rid of the `() -> () in` casting
func withExtendedLifetime<T>(x: T, f: () -> ()) {
    return Swift.withExtendedLifetime(x, f)
}

// In the spirit of withUnsafePointers
func withExtendedLifetimes<A0, A1>(arg0: A0, arg1: A1, f: () -> ()) {
    return withExtendedLifetime(arg0) { withExtendedLifetime(arg1, f) }
}
However, when I try to use it in my project, ConstUnsafePointer is flagged as an unresolved identifier. Any clue how to fix this? Thanks in advance.
The name ConstUnsafePointer was used in early Swift betas last summer (at that time, UnsafePointer meant mutable). Now, constant pointers are just UnsafePointer and mutable pointers are UnsafeMutablePointer.
I'm using filter, but I don't understand why I should use **x > 1 for a slice but use *x > 1 for a range.
fn main() {
    let a = [0, 1, 2, 3];
    let a_iter = a.iter().filter(|x: &&i32| **x > 1); // x: &&i32
    let x: Vec<&i32> = a_iter.collect();
    println!("{:?}", x);

    let b = 0..4;
    let b_iter = b.filter(|x: &i32| *x > 1); // x: &i32
    let y: Vec<i32> = b_iter.collect();
    println!("{:?}", y);
}
The docs say it should be **x > 1.
slice::iter, such as a.iter() in your example, produces an iterator over references to values. Ranges are iterators that produce non-reference values.
The filter(<closure>) method takes a <closure> that takes iterator values by reference, so if your iterator already produces references you'll get a reference to a reference, and if your iterator produces non-reference values then you'll get references to those values.
The difference becomes easier to understand if we use a Vec for both examples:
fn main() {
    let a = vec![0, 1, 2, 3];
    let a_iter = a.iter(); // iter() returns an iterator over references
    let x: Vec<&i32> = a_iter.filter(|x: &&i32| **x > 1).collect();
    println!("{:?}", x);

    let b = vec![0, 1, 2, 3];
    let b_iter = b.into_iter(); // into_iter() returns an iterator over values
    let y: Vec<i32> = b_iter.filter(|x: &i32| *x > 1).collect();
    println!("{:?}", y);
}
This is because in the array example, you first call .iter() to create an iterator, which borrows the array and hence yields references to the values in it; the closure argument to filter receives a borrowed version of the iterator's Item, so it is borrowed again, which makes it a &&i32.
In the Range case, you call filter directly, since a Range is itself an iterator. That iterator produces owned values, and the closure in filter borrows them, which makes its argument type &i32.
You can see this if you add let y = b; after the let b_iter = ... line: you'll get a "use of moved value" error, because b was consumed by the iterator used in b.filter.
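A side note that may help readability: instead of dereferencing in the closure body, you can destructure the reference layers directly in the closure's parameter pattern. A minimal sketch of the same two cases:

```rust
fn main() {
    let a = [0, 1, 2, 3];
    // `|&&x|` peels both layers of references, so `x` is a plain i32 inside.
    let x: Vec<&i32> = a.iter().filter(|&&x| x > 1).collect();
    println!("{:?}", x); // prints [2, 3]

    // A range yields owned values, so there is only one layer to peel.
    let y: Vec<i32> = (0..4).filter(|&x| x > 1).collect();
    println!("{:?}", y); // prints [2, 3]
}
```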
I am using a Vec to store a 2D (row major) matrix of values. I would like to iterate over this matrix with a sliding 2D sub-window to apply a filter (which is unfortunately non-separable).
I have seen in the slice documentation that a windows function exists, which is what I want but in 2 dimensions.
I thought about implementing this as:
fn main() {
    // 4 rows 3 columns
    let dim: (usize, usize) = (4, 3);
    // Place-holder matrix
    #[rustfmt::skip]
    let mat = vec![0, 1, 2,
                   3, 4, 5,
                   6, 7, 8,
                   9, 10, 11];
    // 2D index to 1D index
    let linearize = |r, c| r * dim.1 + c;
    // The dimensions of my sub-window
    let win_size: usize = 2;
    // Calculate the bounds for which the top left corner of each window may exist
    let bounds: (usize, usize) = (dim.0 - win_size + 1, dim.1 - win_size + 1);
    // Convert window 1D index into a 2D index
    let split = |i| (i / win_size, i % win_size);
    // Iterate over all the top left corners
    let window_2d = (0..bounds.0 * bounds.1).map(|i| {
        // Get the 2D index of the top left corner
        let (r, c) = (i / bounds.1, i % bounds.1);
        // Borrow the matrix, so our closure may own the reference
        let bmat = &mat;
        // Return an iterator for this window
        return (0..win_size * win_size).map(move |x| {
            let (wr, wc) = split(x);
            return bmat[linearize(wr + r, wc + c)];
        });
    });
    // Print the windows out
    window_2d.for_each(|it| {
        print!("[ ");
        it.for_each(|x| print!("{} ", x));
        println!("]");
    });
}
Essentially creating an iterator over a range of indices and then mapping to the square bracket operator of the matrix.
As far as I know, this is going to have the overhead of a bounds check for each deref of the iterator.
I'm wondering if there's an alternative which would elide the bounds checks? Maybe using a combination of chunks, windows and zip, to chunk the matrix into rows, each with a sliding window, then zip the row's windows and flatten the result?
Thanks!
Edit:
I'm not looking to simply iterate over a 2D array, I want to slide a 2D window over the array, similar to how the std::slice::windows function works.
The best I've got for now is wrapping the matrix access in an unsafe block to elide the bounds check.
With some other misc changes, this is the full example now:
fn split_factory(cols: usize) -> impl Fn(usize) -> (usize, usize) {
    // Declaring that cols must be positive allows more aggressive optimisation of div and mod.
    if cols < 1 {
        unreachable!()
    }
    move |i| (i / cols, i % cols)
}

fn main() {
    // 4 rows 3 columns
    let dim: (usize, usize) = (4, 3);
    // Place-holder matrix
    #[rustfmt::skip]
    let mat = vec![0, 1, 2,
                   3, 4, 5,
                   6, 7, 8,
                   9, 10, 11];
    // The dimensions of my sub-window
    let win_dim = (3usize, 2usize);
    // Calculate the bounds for which the top left corner of each window may exist
    let bounds = (dim.0 - win_dim.0 + 1, dim.1 - win_dim.1 + 1);
    // Iterate over all the top left corners
    let convolution_iter = (0..bounds.0 * bounds.1)
        .map(split_factory(bounds.1))
        .map(|(r, c)| {
            // Borrow the matrix, so our closure may own the reference
            let bmat = &mat;
            // Return an iterator for this window
            return (0..win_dim.0 * win_dim.1)
                .map(split_factory(win_dim.1))
                .map(move |(wr, wc)| {
                    let px = (wr + r) * dim.1 + (wc + c);
                    (px, unsafe { *bmat.get_unchecked(px) })
                });
        });
    // Print the windows out (badly...)
    convolution_iter.for_each(|it| println!("{:?}", it.collect::<Vec<(usize, i32)>>()));
}
Would still be nicer to avoid unsafe and the indirection of looking up the matrix with indices.
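Along the lines of the chunks/windows/zip idea from the question, here is one possible safe sketch that avoids index arithmetic entirely: split the matrix into row slices with chunks, slide a window of win_h rows, and step each row's horizontal windows in lockstep. The function name windows_2d is invented for illustration, not a std API or the poster's code:

```rust
// Sliding a 2D window using only slice iterators (chunks + windows),
// so no index-based bounds checks are required anywhere.
fn windows_2d(mat: &[i32], cols: usize, win_h: usize, win_w: usize) -> Vec<Vec<i32>> {
    let rows: Vec<&[i32]> = mat.chunks(cols).collect();
    let mut out = Vec::new();
    for row_group in rows.windows(win_h) {
        // One horizontal-window iterator per row in the group.
        let mut row_iters: Vec<_> = row_group.iter().map(|r| r.windows(win_w)).collect();
        loop {
            let mut window = Vec::with_capacity(win_h * win_w);
            let mut complete = true;
            // Advance every row's iterator by one step to form one 2D window.
            for it in row_iters.iter_mut() {
                match it.next() {
                    Some(w) => window.extend_from_slice(w),
                    None => {
                        complete = false;
                        break;
                    }
                }
            }
            if !complete {
                break;
            }
            out.push(window);
        }
    }
    out
}

fn main() {
    // Same 4x3 placeholder matrix as above, with a 2x2 window.
    let mat: Vec<i32> = (0..12).collect();
    for w in windows_2d(&mat, 3, 2, 2) {
        println!("{:?}", w); // e.g. [0, 1, 3, 4], [1, 2, 4, 5], ...
    }
}
```

This collects each window into a Vec for clarity; a real implementation could yield iterators instead to avoid the allocations.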
With the U.S.'s large $1.5 billion lottery this week, I wrote a function in Ruby to make Powerball picks. In Powerball, you choose 5 numbers from the range 1..69 (with no duplicates) and 1 number from the range 1..26.
This is what I came up with:
def pball
  Array(1..69).shuffle[0..4].sort + [rand(1..26)]
end
It works by creating an array of integers from 1 to 69, shuffling that array, choosing the first 5 numbers, sorting those, and finally adding on a number from 1 to 26.
To do this in Swift takes a bit more work since Swift doesn't have the built-in shuffle method on Array.
This was my attempt:
func pball() -> [Int] {
    let arr = Array(1...69).map{($0, drand48())}.sort{$0.1 < $1.1}.map{$0.0}[0...4].sort()
    return arr + [Int(arc4random_uniform(26) + 1)]
}
Since there is no shuffle method, it works by creating an [Int] with values in the range 1...69. It then uses map to create [(Int, Double)], an array of tuple pairs that contain the numbers and a random Double in the range 0.0 ..< 1.0. It then sorts this array using the Double values and uses a second map to return to [Int] and then uses the slice [0...4] to extract the first 5 numbers and sort() to sort them.
In the second line, it appends a number in the range 1...26. I tried adding this to the first line, but Swift gave the error:
Expression was too complex to be solved in reasonable time; consider
breaking up the expression into distinct sub-expressions.
Can anyone suggest how to turn this into a 1-line function? Perhaps there is a better way to choose the 5 numbers from 1...69.
Xcode 8.3 • Swift 3.1
import GameKit
var powerballNumbers: [Int] {
    return (GKRandomSource.sharedRandom().arrayByShufflingObjects(in: Array(1...69)) as! [Int])[0..<5].sorted() + [Int(arc4random_uniform(26) + 1)]
}
powerballNumbers // [5, 9, 62, 65, 69, 2]
Swift 2.x
import GameKit
var powerballNumbers: [Int] {
    return (GKRandomSource.sharedRandom().arrayByShufflingObjectsInArray(Array(1...69)) as! [Int])[0...4].sort() + [Int(arc4random_uniform(26).successor())]
}
powerballNumbers // [21, 37, 39, 42, 65, 23]
I don't find the "one-liner" concept very compelling. Some languages lend themselves to it; others don't. I would suggest giving Swift a shuffle method to start with:
extension Array {
    mutating func shuffle() {
        for var i = self.count - 1; i != 0; i-- {
            let ix1 = i
            let ix2 = Int(arc4random_uniform(UInt32(i+1)))
            (self[ix1], self[ix2]) = (self[ix2], self[ix1])
        }
    }
}
But since I made this mutating, we still need more than one line to express the entire operation because we have to have a var reference to our starting array:
var arr = Array(1...69)
(1...4).forEach {_ in arr.shuffle()}
let result = Array(arr[0..<5]) + [Int(arc4random_uniform(26)) + 1]
If you really insist on the one-liner, and you don't count the code needed to implement shuffle, then you can do it, though less efficiently, by defining shuffle more like this:
extension Array {
    func shuffle() -> [Element] {
        var arr = self
        for var i = arr.count - 1; i != 0; i-- {
            let ix1 = i
            let ix2 = Int(arc4random_uniform(UInt32(i+1)))
            (arr[ix1], arr[ix2]) = (arr[ix2], arr[ix1])
        }
        return arr
    }
}
And here's your one-liner:
let result = Array(1...69).shuffle().shuffle().shuffle().shuffle()[0..<5] + [Int(arc4random_uniform(26)) + 1]
But oops, I omitted your sort. I don't see how to do that without getting the "too complex" error; to work around that, I had to split it into two lines:
var result = Array(1...69).shuffle().shuffle().shuffle().shuffle()[0..<5].sort(<)
result.append(Int(arc4random_uniform(26)) + 1)
How about this:
let winningDraw = (1...69).sort{ _ in arc4random_uniform(2) > 0}[0...4].sort() + [Int(arc4random_uniform(26)+1)]
[edit] The formula above wasn't uniformly random, but this one is:
(1...69).map({Int(rand()%1000*70+$0)}).sort().map({$0%70})[0...4].sort() + [Int(rand()%26+1)]
For the fun of it, a non-GameplayKit (long) one-liner for Swift 3, using the global sequence(state:next:) function to generate random elements from the mutable state array rather than shuffling the array (although mutating the value array 5 times, so some extra copy operations here...)
let powerballNumbers = Array(sequence(state: Array(1...69), next: {
        (s: inout [Int]) -> Int? in s.remove(at: Int(arc4random_uniform(UInt32(s.count))))})
    .prefix(5).sorted()) + [Int(arc4random_uniform(26) + 1)]
... broken down for readability.
(Possible in a future Swift version)
If type inference weren't broken for inout closure parameters (as arguments to closures), we could reduce the above to:
let powerballNumbers = Array(sequence(state: Array(1...69), next: {
        $0.remove(at: Int(arc4random_uniform(UInt32($0.count)))) })
    .prefix(5).sorted()) + [Int(arc4random_uniform(26) + 1)]
If we'd also allow the following extension
extension Int {
    var rand: Int { return Int(arc4random_uniform(UInt32(exactly: self) ?? 0)) }
}
Then, we could go on to reduce the one-line to:
let powerballNumbers = Array(sequence(state: Array(1...69), next: { $0.remove(at: $0.count.rand) }).prefix(5).sorted()) + [26.rand + 1]
Xcode 10 • Swift 4.2
Swift now has added shuffled() to ClosedRange and random(in:) to Int which now makes this easily accomplished in one line:
func pball() -> [Int] {
    return (1...69).shuffled().prefix(5).sorted() + [Int.random(in: 1...26)]
}
Further trimmings:
Because of the return type of pball(), the Int can be inferred in the random method call. Also, .prefix(5) can be replaced with [...4]. Finally, return can be omitted from the one-line function:
func pball() -> [Int] {
    (1...69).shuffled()[...4].sorted() + [.random(in: 1...26)]
}
The goal is simple - get the colour at (0, 0) and remove any pixels in the image that are similar to it within the specified threshold (16384 in this case). However, the code below doesn't compile:
#![feature(env, old_path, core, old_io)]

extern crate image;

use std::env;
use std::num::ToPrimitive;
use std::old_io::File;
use image::color::FromColor;
use image::Pixel;

fn get_arguments() -> Vec<String> {
    let mut argv: Vec<String> = env::args().collect();
    argv.remove(0);
    return argv;
}

fn remove_background<T: image::GenericImage>(img: &mut T) {
    let background_color = img.get_pixel(0, 0).to_rgba();
    let transparent_pixel = image::Rgba([0, 0, 0, 0]);
    if background_color[3].to_uint().unwrap() > 0 {
        for (x, y, color) in img.pixels() {
            let rgba = color.to_rgba();
            let (dr, dg, db) = (rgba[0] - background_color[0],
                                rgba[1] - background_color[1],
                                rgba[2] - background_color[2]);
            if (dr*dr + dg*dg + db*db).to_uint().unwrap() < 16384 { img.put_pixel(x, y, transparent_pixel); } // Remove the background colour.
        }
    }
}

fn main() {
    for filepath in get_arguments() {
        let img = image::open(&Path::new(filepath)).unwrap();
        remove_background(&mut img);
        let ref mut fout = File::create(&Path::new("output.png")).unwrap();
        img.save(fout, image::PNG);
    }
}
It gives the following error:
src/main.rs:32:83: 32:100 error: mismatched types:
    expected `<T as image::image::GenericImage>::Pixel`,
       found `image::color::Rgba<_>`
(expected associated type,
    found struct `image::color::Rgba`) [E0308]
src/main.rs:32     if (dr*dr + dg*dg + db*db).to_uint().unwrap() < 16384 { img.put_pixel(x, y, transparent_pixel); } // Remove the background colour.
This is presumably because the GenericImage struct defines its own internal "Pixel", which I don't think I can access, but is exactly the same as the normal Pixel struct. How would I get code with this functionality to compile? Every other use of put_pixel I've seen has used get_pixel on the image object and manipulated that, but I need to use a transparent pixel, so that won't work.
Chris Morgan is spot on - when you are accepting a GenericImage, you have to handle a generic Pixel. However, you are trying to use a specific one - Rgba. Even more than that, you have to specify the type of the channels of the Rgba.
A notable issue with your original code is: what do you do when the GenericImage is composed of pixels that don't support transparency?
Here's a version that picks some concrete types and compiles:
fn remove_background<T>(img: &mut T)
    where T: image::GenericImage<Pixel = image::Rgba<u8>>
{
    let background_color = img.get_pixel(0, 0).to_rgba();
    if background_color[3].to_uint().unwrap() > 0 {
        for (_, _, color) in img.pixels_mut() {
            let rgba = color.to_rgba();
            let (dr, dg, db) = (rgba[0] - background_color[0],
                                rgba[1] - background_color[1],
                                rgba[2] - background_color[2]);
            // Remove the background colour.
            if (dr*dr + dg*dg + db*db).to_uint().unwrap() < 16384 {
                for c in color.channels_mut().iter_mut() { *c = 0 }
            }
        }
    }
}
Beyond specifying the specific pixel type in the where clause, you'll also run into mutability issues. I changed it to pixels_mut, channels_mut, and iter_mut to get the mutability to the right place.
Note that the Rust style is 4-space indents, so I've done that as well.
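One caveat worth flagging for both versions: with u8 channels, a subtraction like rgba[0] - background_color[0] underflows whenever the pixel's channel is smaller than the background's (which panics in debug builds). A minimal sketch of a distance helper that widens to i32 first (sq_dist is an invented name, not part of the image crate):

```rust
// Widen each channel to i32 before subtracting, so the difference can
// be negative without underflowing the unsigned channel type.
fn sq_dist(a: [u8; 3], b: [u8; 3]) -> i32 {
    a.iter()
        .zip(b.iter())
        .map(|(&x, &y)| {
            let d = x as i32 - y as i32;
            d * d
        })
        .sum()
}

fn main() {
    // 0u8 - 255u8 would underflow; as i32 the difference is simply -255.
    println!("{}", sq_dist([0, 0, 0], [255, 0, 0])); // prints 65025
}
```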
I need to turn an array of doubles to ints while keeping their ratios the same and being as simple as possible. For example [0.7, 0, -0.7] should become [1, 0, -1] and [24, 12, 0] should become [2, 1, 0]. I'm not certain if this would involve getting the least common multiple of the doubles or not, and how would this be done if so?
(The code has been updated for Swift 4 and later.)
First of all, there is no GCD or LCM for floating point numbers; you have to convert the input to rational numbers first.
This is not as easy as it sounds, because decimal fractions like 0.7 cannot be represented exactly as a binary floating point number and would be stored as something like 0.69999999999999996 in a Double. So it is not completely obvious how to get from there to 7/10.
It is therefore necessary to specify a precision. Then you can use continued fractions to efficiently create a (finite or infinite) sequence of fractions hn/kn that are arbitrarily good approximations to a given real number x.
Here is a translation of this JavaScript implementation to Swift:
typealias Rational = (num: Int, den: Int)

func rationalApproximationOf(_ x0: Double, withPrecision eps: Double = 1.0E-6) -> Rational {
    var x = x0
    var a = floor(x)
    var (h1, k1, h, k) = (1, 0, Int(a), 1)

    while x - a > eps * Double(k) * Double(k) {
        x = 1.0/(x - a)
        a = floor(x)
        (h1, k1, h, k) = (h, k, h1 + Int(a) * h, k1 + Int(a) * k)
    }
    return (h, k)
}
Examples:
rationalApproximationOf(0.7) // (7, 10) i.e. 7/10
rationalApproximationOf(0.142857) // (1, 7) i.e. 1/7
I have set the default precision to 1.0E-6, but you can adjust that to your needs.
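For readers who want to see the update step outside Swift, here is a sketch of the same continued-fraction scheme in Rust (my own port; rational_approximation is an invented name):

```rust
/// Approximate `x0` by a fraction h/k using continued fractions,
/// stopping once the remainder is below `eps` scaled by k².
fn rational_approximation(x0: f64, eps: f64) -> (i64, i64) {
    let mut x = x0;
    let mut a = x.floor();
    // h1/k1 is the previous convergent, h/k the current one.
    let (mut h1, mut k1, mut h, mut k) = (1i64, 0i64, a as i64, 1i64);
    while x - a > eps * (k as f64) * (k as f64) {
        x = 1.0 / (x - a);
        a = x.floor();
        // Standard continued-fraction recurrence for the next convergent.
        let (nh, nk) = (h1 + a as i64 * h, k1 + a as i64 * k);
        h1 = h;
        k1 = k;
        h = nh;
        k = nk;
    }
    (h, k)
}

fn main() {
    println!("{:?}", rational_approximation(0.7, 1.0e-6));      // (7, 10)
    println!("{:?}", rational_approximation(0.142857, 1.0e-6)); // (1, 7)
}
```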
Then you need functions for the GCD (greatest common divisor) and LCM (lowest common multiple). Here are simple implementations:
// GCD of two numbers:
func gcd(_ a: Int, _ b: Int) -> Int {
    var (a, b) = (a, b)
    while b != 0 {
        (a, b) = (b, a % b)
    }
    return abs(a)
}

// GCD of a vector of numbers:
func gcd(_ vector: [Int]) -> Int {
    return vector.reduce(0, gcd)
}

// LCM of two numbers:
func lcm(_ a: Int, _ b: Int) -> Int {
    return (a / gcd(a, b)) * b
}

// LCM of a vector of numbers:
func lcm(_ vector: [Int]) -> Int {
    return vector.reduce(1, lcm)
}
With all these utilities, your function can now be implemented as
func simplifyRatios(_ numbers: [Double]) -> [Int] {
    // Normalize the input vector so that the maximum is 1.0,
    // and compute rational approximations of all components:
    let maximum = numbers.map(abs).max()!
    let rats = numbers.map { rationalApproximationOf($0/maximum) }

    // Multiply all rational numbers by the LCM of the denominators:
    let commonDenominator = lcm(rats.map { $0.den })
    let numerators = rats.map { $0.num * commonDenominator / $0.den }

    // Divide the numerators by the GCD of all numerators:
    let commonNumerator = gcd(numerators)
    return numerators.map { $0 / commonNumerator }
}
Examples:
simplifyRatios([0.7, 0, -0.7]) // [1, 0, -1]
simplifyRatios([24, 12, 0]) // [2, 1, 0]
simplifyRatios([1.3, 0.26, 0.9]) // [65, 13, 45]