How to get the default value of a generic type (for the warp shuffle)? - aleagpu

let t = if warpid = 0 then mean.[i / num_rows] else (Unchecked.defaultof<'T>)
__syncthreads()
let v = __shfl t 0 32
I want to get the default value of 'T, but the above snippet gives a compile error due to Unchecked.defaultof<'T>. What would be the preferred way of doing a warp shuffle in Alea?
Right now I have a problem where many threads each read from the same location once, so I am testing whether it would be more efficient to have only the first thread in the warp read from that spot and then shuffle the value to the other lanes. (Edit: Not at all. The cache is doing its job nicely.)

Supporting Unchecked.defaultof is a good idea; I will look into it, thanks.
Currently there are two ways to get a default value of type 'T:
The first is to use Alea.CUDA.Intrinsic.__default_value<'T>() (see here). Intrinsic is an auto-open module, so if you have opened the Alea.CUDA namespace, you can use __default_value() directly in your code.
The second is to open the namespace Alea.CUDA.Utilities and use the auto-opened NumericLiteralG module (see here); then, in your inline generic function, you can directly write things like 0G, 1G, etc.
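For example, both mechanisms can be used inside a reflected, inline generic function. This is only a minimal sketch (the function and parameter names are made up for illustration, and the 0G version assumes 'T is a numeric type supported by the generic literals):
open Alea.CUDA
open Alea.CUDA.Utilities

[<ReflectedDefinition>]
let inline loadOrDefault (warpid:int) (value:'T) =
    // only lane 0 supplies a real value; the other lanes start from the default of 'T
    if warpid = 0 then value else __default_value<'T>()

[<ReflectedDefinition>]
let inline loadOrZero (warpid:int) (value:'T) =
    // same idea with the generic numeric literal from NumericLiteralG
    if warpid = 0 then value else 0G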
For your second question, here is some source code of the helper warp-shuffle types, which includes the broadcast usage. These helper static methods are available in the module Alea.CUDA.Intrinsic:
///A helper static class providing shuffle instructions.
[<AbstractClass;Sealed>]
type WarpShuffle private () =

    [<ReflectedDefinition>]
    static member Broadcast(input:'T, srcLane:int, width:int) =
        __shfl input srcLane width

    [<ReflectedDefinition>]
    static member Broadcast(input:'T, srcLane:int) =
        let width = __warp_size()
        __shfl input srcLane width

    [<ReflectedDefinition>]
    static member Up(input:'T, delta:int, width:int) =
        __shfl_up input delta width

    [<ReflectedDefinition>]
    static member Up(input:'T, delta:int) =
        let width = __warp_size()
        __shfl_up input delta width

    [<ReflectedDefinition>]
    static member Down(input:'T, delta:int, width:int) =
        __shfl_down input delta width

    [<ReflectedDefinition>]
    static member Down(input:'T, delta:int) =
        let width = __warp_size()
        __shfl_down input delta width

    [<ReflectedDefinition>]
    static member Xor(input:'T, laneMask:int, width:int) =
        __shfl_xor input laneMask width

    [<ReflectedDefinition>]
    static member Xor(input:'T, laneMask:int) =
        let width = __warp_size()
        __shfl_xor input laneMask width

///[omit]
[<AbstractClass;Sealed>]
type FullWarpShuffle private () =

    [<ReflectedDefinition>]
    static member Broadcast(input:'T, srcLane:int, logicWarpThreads:int) =
        let shflC = logicWarpThreads - 1
        __shfl_raw input srcLane shflC

    [<ReflectedDefinition>]
    static member Broadcast(input:'T, srcLane:int) =
        let shflC = __warp_size() - 1
        __shfl_raw input srcLane shflC

    [<ReflectedDefinition>]
    static member Up(input:'T, srcOffset:int) =
        let shflC = 0
        __shfl_up_raw input srcOffset shflC

    [<ReflectedDefinition>]
    static member Down(input:'T, srcOffset:int) =
        let shflC = __warp_size() - 1
        __shfl_down_raw input srcOffset shflC

    [<ReflectedDefinition>]
    static member Down(input:'T, srcOffset:int, warpThreads:int) =
        let shflC = warpThreads - 1
        __shfl_down_raw input srcOffset shflC
The code above uses __shfl_raw, for which the online doc is out of date. This is the raw version of the PTX instruction shfl.idx, where shflC contains two packed values specifying a mask for logically splitting warps into sub-segments and an upper bound for clamping the source lane index. Read more here.
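For the broadcast in your snippet you could also go through the helper instead of calling __shfl directly. A rough sketch reusing the names from your question (not compiled or benchmarked):
let t = if warpid = 0 then mean.[i / num_rows] else __default_value<'T>()
__syncthreads()
// broadcast lane 0's value to every lane of the full warp
let v = WarpShuffle.Broadcast(t, 0)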

Related

Canvas resize photoshop script

Would it be possible to write a script to resize each image to the closest round number (for example, if the original image is 791x1265px then it could be resized to 800x1300px)?
Thanks!
A pretty easy and small script can do it :) Enjoy
Note: You have two choices for the script. Before running it, either use the static base value (the default), or, if you want to be prompted on each run, uncomment the prompt line below and comment out the static var base line :) Hope that is what you were looking for :)
//get Original Ruler Units;
var origRuler = app.preferences.rulerUnits;
app.preferences.rulerUnits = Units.PIXELS;
//get Active document scales
var origWidth = app.activeDocument.width;
var origHeight = app.activeDocument.height;
//define base
var base = 100; //change your base as you like, e.g. 10 or 100; use the line below instead to get a prompt on each run;
//var base = prompt("Enter Your Base number",""); //use this if you want a prompt for each run; uncomment by removing the first two "//"
//magical Mathematics XD
var roundWidth = Math.ceil(origWidth / base) * base;
var roundHeight = Math.ceil(origHeight / base) * base;
//resize canvas
app.activeDocument.resizeCanvas (roundWidth, roundHeight);
//Restores Original Ruler Units;
app.preferences.rulerUnits = origRuler;
Edit: Updated the script to avoid ruler-unit conflicts and changed Math.round to Math.ceil as per Sergey's suggestion!
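As a quick sanity check against the numbers from the question (worked out by hand, not re-tested in Photoshop): with the default base of 100, Math.ceil(791 / 100) * 100 gives 800 and Math.ceil(1265 / 100) * 100 gives 1300, so a 791x1265px image ends up on the requested 800x1300px canvas.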

How to detect numbers/digits via builtin OcrEngine class

I am having trouble detecting digits/numbers in an image with the Windows UWP OCR engine from C++/CX.
I need to detect the number in the following image.
I tried the built-in OcrEngine class for Windows 10 UWP with the following code:
...
cv::Mat croppedImage = imread("digit.png");
WriteableBitmap^ bit1 = ref new WriteableBitmap(croppedImage.cols, croppedImage.rows);
SoftwareBitmap^ bit2 = bit2->CreateCopyFromBuffer(bit1->PixelBuffer, BitmapPixelFormat::Bgra8, bit1->PixelWidth, bit1->PixelHeight);
Windows::Globalization::Language^ l = ref new Windows::Globalization::Language("de");
OcrEngine^ ocrEngine = OcrEngine::TryCreateFromLanguage(l);
IAsyncOperation<OcrResult^>^ ao = ocrEngine->RecognizeAsync(bit2);
task_completion_event<Platform::String^> purchaseCompleted;
auto deviceEnumTask = create_task(ao);
deviceEnumTask.then([this](OcrResult^ result)
{
    App1::MainPage::findNumber(result->Text);
});
...
void App1::MainPage::findNumber(Platform::String^ text)
{
    //Do something with String
}
My problem now is that the string passed to findNumber is always null. I tried different pictures as input, but always with the same result: NULL.
Is there an easier way to get the digits in these images in C++/CX?
What could be the problem? Converting the image?
The problem was the conversion of the WriteableBitmap to a SoftwareBitmap:
WriteableBitmap^ bit1 = ref new WriteableBitmap(croppedImage.cols, croppedImage.rows);
// Get access to the pixels
IBuffer^ buffer = bit1->PixelBuffer;
unsigned char* dstPixels;
// Obtain IBufferByteAccess
ComPtr<IBufferByteAccess> pBufferByteAccess;
ComPtr<IInspectable> pBuffer((IInspectable*)buffer);
pBuffer.As(&pBufferByteAccess);
// Get pointer to pixel bytes
pBufferByteAccess->Buffer(&dstPixels);
memcpy(dstPixels, croppedImage.data, croppedImage.step.buf[1] * croppedImage.cols*croppedImage.rows);
SoftwareBitmap^ bit2 = ref new SoftwareBitmap(BitmapPixelFormat::Bgra8, croppedImage.cols, croppedImage.rows);
bit2->CopyFromBuffer(bit1->PixelBuffer);

How to disable certain SI prefixes in D3 formatter?

var format = d3.format('s');
format(1000); // 1k, good
format(1000000); //1M, good
format(0.1); // 100m, not necessary, would be better to show 0.1 directly
I think most of the time the SI prefix 'm' is not necessary. How can I disable it?
There is no built-in way of customizing the output of d3.format() the way you want it. You could, however, define two distinct formats: one for the large numbers including the SI prefixes, and another for the small numbers that omits the prefixes. Wrapping these in a function gives you your custom format function:
var formatLarge = d3.format('s');
var formatSmall = d3.format('-.g');
var customFormat = function(val) {
    return Math.abs(val) < 1 ? formatSmall(val) : formatLarge(val);
};
console.log(customFormat(1000)); // 1k
console.log(customFormat(1000000)); // 1M
console.log(customFormat(0.1)); // 0.1

Table in Word looks different if halted on breakpoint

I have stumbled on a very annoying problem when setting column widths on a table in Word (using the Microsoft Office 15.0 Object Library, VS 2013). If I run the code below without any breakpoints, the result is incorrect (the first column width should be 30%), but if I put a breakpoint (e.g.) on line 47, the generated Word file comes out as I want it.
The conclusion I draw is that when the debugger halts execution, the given column size values are flushed into the data model and the output is as I want it. Without the breakpoint, the merging of cells changes the widths (e.g. the first column becomes 12.5%).
I have searched for some sort of method/function to make the data model adjust to my programmatically given column sizes before the cell merging, with no luck. Can anyone explain why halting on the breakpoint changes the output?
Regards,
Ola
using System;
using System.Linq;
using System.Runtime.InteropServices;
using Microsoft.Office.Interop.Word;

namespace ShowWordProblem
{
    class Program
    {
        private const string WordFileName = @"C:\temp\test.doc";

        static void Main(string[] args)
        {
            var wordApplication = new Application();
            wordApplication.Visible = false;
            var wordDocument = wordApplication.Documents.Add();
            FillDocument(wordDocument);
            wordDocument.SaveAs2(WordFileName);
            wordDocument.Close();
            wordApplication.Quit(Type.Missing, Type.Missing);
            Marshal.FinalReleaseComObject(wordApplication);
            wordApplication = null;

            wordApplication = new Microsoft.Office.Interop.Word.Application();
            wordApplication.Visible = true;
            wordApplication.Documents.Open(WordFileName);
        }

        private static void FillDocument(Document wordDocument)
        {
            Range range = wordDocument.Content.Paragraphs.Add().Range;
            var table = range.Tables.Add(range, 5, 8);
            table.PreferredWidthType = WdPreferredWidthType.wdPreferredWidthPercent;
            table.PreferredWidth = (float)100.0;
            table.Columns.PreferredWidthType = WdPreferredWidthType.wdPreferredWidthPercent;
            table.Columns.PreferredWidthType = WdPreferredWidthType.wdPreferredWidthPercent;
            table.Columns[1].PreferredWidth = 30;
            for (int i = 2; i <= 8; i++) table.Columns[i].PreferredWidth = 10;
            var widths = table.Columns.OfType<Column>().Select(c => c.Width).ToArray();
            MergeGroupHeaderCells(table.Rows[1], 5, 9);
            MergeGroupHeaderCells(table.Rows[1], 2, 5);
        }

        private static void MergeGroupHeaderCells(Row row, int startIndex, int lastIndex)
        {
            var r = row.Cells[startIndex].Range;
            Object cell = WdUnits.wdCell;
            Object count = lastIndex - startIndex - 1;
            r.MoveEnd(ref cell, ref count);
            r.Cells.Merge();
        }
    }
}
Finally I found a way to force the code to apply the given percentage values to the columns before the cells are merged, and thereby get the correct widths on all columns.
By adding the following line of code after the last PreferredWidth-assignment:
var info = table.Range.Information[WdInformation.wdWithInTable];
it seems that the given PreferredWidth values are propagated to the columns, and the final rendered Word document looks exactly as I want it.
/Ola

Binary operator '+' cannot be applied to two CGFloat operands?

I am coding in Swift and get the above error...
Is the message masking something else OR can you really not add two CGFloat operands? If not, why (on earth) not?
EDIT
There is nothing special in the code I am trying to do; what is interesting is that the above error message, VERBATIM, is what the Swift assistant compiler tells me (underlining my code with red squiggly lines).
Running Xcode 6.3 (Swift 1.2)
It's absolutely possible to add two CGFloat operands using the binary operator '+'. What you need to know is that the resulting variable is also a CGFloat (based on type inference).
let value1 : CGFloat = 12.0
let value2 : CGFloat = 13.0
let value3 = value1 + value2
println("value3 \(value3)")
//The result is value3 25.0, and the value3 is of type CGFloat.
EDIT:
In Swift 3.0:
let value = CGFloat(12.0) + CGFloat(13.0)
print("value \(value)")
//The result is value 25.0, and value is of type CGFloat.
I ran into this using this innocent-looking piece of code:
func foo(line: CTLine) {
    // vars so they can be passed by reference to CTLineGetTypographicBounds
    var ascent: CGFloat = 0
    var descent: CGFloat = 0
    var leading: CGFloat = 0
    let fWidth = Float(CTLineGetTypographicBounds(line, &ascent, &descent, &leading))
    let height = ceilf(ascent + descent)
    //                 ~~~~~~ ^ ~~~~~~~
}
And found the solution by expanding the error in the Issue Navigator:
Looking into it, I think Swift found the +(lhs: Float, rhs: Float) -> Float overload first based on its return type (ceilf takes a Float). Obviously, that overload takes Floats, not CGFloats, which sheds some light on the meaning of the error message. So, to force it to use the right operator, you have to wrap the operands so the types match: convert to Float (or just keep everything as CGFloat, obviously). I tried this and the error was solved:
// ...
let height = ceilf(Float(ascent + descent))
// ...
Another approach would be to use a function that takes a Double or CGFloat:
// ...
let height = ceil(ascent + descent)
// ...
So the problem is that the compiler prioritizes return types over parameter types. Glad to see this is still happening in Swift 3 ;P
There are mainly two possible reasons for this kind of error.
First:
Whenever you try to compare optional CGFloat variables, like
if a? >= b?
{
    videoSize = size;
}
This causes the error, so just write it as if a! >= b! {}
Second:
Whenever you use literal values directly to compute a result, this kind of error can occur, for example:
var result = 1.5 / 1.2
Don't write it as above.
Instead, use variables declared as CGFloat, like:
var a : CGFloat = 1.5
var b : CGFloat = 1.2
var result : CGFloat!
result = a / b
