Filetime Struct Sort - winapi

I have two FILETIME structures and I just want to compare them to see which one is older (my aim is to sort descending), so I was hoping to avoid FileTimeToSystemTime.
These are my two structs:
var time1 = {
    dwLowDateTime: 2944808535,
    dwHighDateTime: 30434197
}
then
var time2 = {
    dwLowDateTime: 3483262096,
    dwHighDateTime: 30434432
}
I was wondering: is there a reliable way to test which one is greater? For example, can I just compare the dwHighDateTime values?
Thanks

Yes, you can compare the high parts, but they may be identical.
Compare the high parts first. If they differ, the one with the greater high part is the later (greater) FILETIME. If they are the same, compare the low parts.
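For illustration, here is a minimal C# sketch of that two-step comparison (the struct and helper names are assumptions for the example):

struct FileTime
{
    public uint dwLowDateTime;
    public uint dwHighDateTime;
}

// Negative if a is older than b, positive if newer, zero if equal.
static int CompareFileTimes(FileTime a, FileTime b)
{
    // Compare the high parts first; fall back to the low parts only on a tie.
    int high = a.dwHighDateTime.CompareTo(b.dwHighDateTime);
    return high != 0 ? high : a.dwLowDateTime.CompareTo(b.dwLowDateTime);
}

// For the descending sort you mentioned:
// list.Sort((x, y) => CompareFileTimes(y, x));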
Or convert them to LARGE_INTEGERs and compare those:
LARGE_INTEGER one;
one.HighPart = time1.dwHighDateTime;
one.LowPart = time1.dwLowDateTime;
LARGE_INTEGER two;
two.HighPart = time2.dwHighDateTime;
two.LowPart = time2.dwLowDateTime;
if (one.QuadPart > two.QuadPart) {
    ...
}
The reason you have to do this is that FILETIME predates the availability of 64-bit arithmetic in the Windows API, so the 64-bit value is split across two 32-bit fields. Note that you should copy the two fields as above rather than casting a FILETIME* directly to a LARGE_INTEGER*; the documentation warns against that cast because FILETIME is not guaranteed to be suitably aligned.

Swift Explicit vs. Inferred Typing: Performance

I'm reading a tutorial about Swift (http://www.raywenderlich.com/74438/swift-tutorial-a-quick-start) and it recommends not setting types explicitly because it's more readable this way.
I don't really agree with this point, but that's not the question. My question is: is it more efficient, in terms of performance (compilation, etc.), to set the type explicitly?
For example, would this: var hello: Int = 56 be more efficient than this: var tutorialTeam = 56?
There is no difference in performance between code that uses explicit types and code which uses type inference. The compiled output is identical in each case.
When you omit the type the compiler simply infers it.
The very small differences observed in the accepted answer are just your usual micro benchmarking artefacts, and cannot be trusted!
Whether or not you include the explicit type is a matter of taste. In some contexts it might make your code more readable.
The only time it makes a difference to your code is when you want to specify a different type to the one which the compiler would infer. As an example:
var num = 2
The above code infers that num is an Int, due to it being initialised with an integer literal. However you can 'force' it to be a Double as follows:
var num: Double = 2
From my experience, there's been a huge performance impact in terms of compilation speed when using explicit vs inferred types. A majority of my slow compiling code has been resolved by explicitly typing variables.
It seems like the Swift compiler still has room for improvement in this area. Try benchmarking some of your projects and you'll see a big difference.
Here's an article I wrote on how to speed up slow Swift compile times and how to find out what is causing it.
Type inference will not affect performance in your given example. However, I did find that being specific about the type in your Swift array does impact performance significantly.
For example, the method below shuffles an array of type Any.
class func shuffleAny(inout array: [Any]) {
    for (var i = 0; i < array.count; i++) {
        let currentObject: Any = array[i]
        let randomIndex = Int(arc4random()) % array.count
        let randomObject: Any = array[randomIndex]
        array[i] = randomObject
        array[randomIndex] = currentObject
    }
}
The above function is actually much slower than if I were to make this function take an array of Int instead like this
class func shuffleIntObjects(inout array: [Int]) {
    for (var i = 0; i < array.count; i++) {
        let currentObject: Int = array[i]
        let randomIndex = Int(arc4random()) % array.count
        let randomObject: Int = array[randomIndex]
        array[i] = randomObject
        array[randomIndex] = currentObject
    }
}
The function that uses [Any] clocked in at 0.537 seconds (3% STDEV) for 1 million Int objects, while the function that uses [Int] clocked in at 0.181 seconds (2% STDEV).
You can check out this repo (https://github.com/vsco/swift-benchmarks), which details a lot more interesting benchmarks in Swift. One of my favorites is that Swift generics perform very poorly under the test conditions mentioned above.

Crash when casting the result of arc4random() to Int

I've written a simple Bag class. A Bag is filled with a fixed ratio of Temperature enums. It allows you to grab one at random and automatically refills itself when empty. It looks like this:
class Bag {
    var items = Temperature[]()
    init() {
        refill()
    }
    func grab() -> Temperature {
        if items.isEmpty {
            refill()
        }
        var i = Int(arc4random()) % items.count
        return items.removeAtIndex(i)
    }
    func refill() {
        items.append(.Normal)
        items.append(.Hot)
        items.append(.Hot)
        items.append(.Cold)
        items.append(.Cold)
    }
}
The Temperature enum looks like this:
enum Temperature: Int {
    case Normal, Hot, Cold
}
My GameScene:SKScene has a constant instance property bag:Bag. (I've tried with a variable as well.) When I need a new temperature I call bag.grab(), once in didMoveToView and when appropriate in touchesEnded.
Randomly this call crashes on the if items.isEmpty line in Bag.grab(). The error is EXC_BAD_INSTRUCTION. Checking the debugger shows items is size=1 and [0] = (AppName.Temperature) <invalid> (0x10).
Edit: Looks like I don't understand the debugger info. Even valid arrays show size=1 and unrelated values for [0] =. So no help there.
I can't get it to crash isolated in a Playground. It's probably something obvious but I'm stumped.
The arc4random function returns a UInt32. If you get a value higher than Int.max, the Int(...) cast will crash.
Using
Int(arc4random_uniform(UInt32(items.count)))
should be a better solution.
(Blame the strange crash messages in the Alpha version...)
I found that the best way to solve this is by using rand() instead of arc4random().
The code, in your case, could be:
var i = Int(rand()) % items.count
This method will generate a random Int value between the given minimum and maximum:
func randomInt(min: Int, max: Int) -> Int {
    return min + Int(arc4random_uniform(UInt32(max - min + 1)))
}
The crash that you were experiencing is due to Swift trapping an integer overflow at runtime.
Since Int != UInt32, you have to cast the input argument of arc4random_uniform before you can compute the random number.
Swift doesn't allow you to convert from one integer type to another if the result of the conversion doesn't fit. E.g. the following code will work okay:
let x = 32
let y = UInt8(x)
Why? Because 32 is a possible value for an int of type UInt8. But the following code will fail:
let x = 332
let y = UInt8(x)
That's because you cannot assign 332 to an unsigned 8-bit integer type; it can only take values 0 to 255 and nothing else.
When you do casts in C, the int is simply truncated, which may be unexpected or undesired, as the programmer may not be aware that truncation may take place. So Swift handles things a bit differently here. It will allow such casts as long as no truncation takes place, but if there is truncation, you get a runtime exception. If you think truncation is okay, then you must do the truncation yourself (e.g. with the truncatingBitPattern: initializers) to let Swift know that this is intended behavior; otherwise Swift must assume it is accidental.
This is even documented (documentation of UnsignedInteger):
Convert from Swift's widest unsigned integer type,
trapping on overflow.
And what you see is that "trapping on overflow", which is poorly done, as the trap could of course have been made to explain what's going on.
Assuming that items never has more than 2^32 elements (a bit more than 4 billion), the following code is safe:
var i = Int(arc4random() % UInt32(items.count))
If it can have more than 2^32 elements, you get another problem anyway as then you need a different random number function that produces random numbers beyond 2^32.
This crash is only possible on 32-bit systems. Int changes between 32-bits (Int32) and 64-bits (Int64) depending on the device architecture (see the docs).
UInt32's max is 2^32 − 1. Int64's max is 2^63 − 1, so Int64 can easily handle UInt32.max. However, Int32's max is 2^31 − 1, which means UInt32 can handle numbers greater than Int32 can, and trying to create an Int32 from a number greater than 2^31-1 will create an overflow.
I confirmed this by trying to compile the line Int(UInt32.max). On the simulators and newer devices, this compiles just fine. But I connected my old iPod Touch (32-bit device) and got this compiler error:
Integer overflows when converted from UInt32 to Int
Xcode won't even compile this line for 32-bit devices, which is likely the crash that is happening at runtime. Many of the other answers in this post are good solutions, so I won't add or copy those. I just felt that this question was missing a detailed explanation of what was going on.
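For comparison, the same range arithmetic can be demonstrated in C#, where a checked conversion throws instead of trapping (a small sketch of the integer ranges involved, not of the Swift runtime itself):

using System;

class OverflowDemo
{
    static void Main()
    {
        uint big = uint.MaxValue;          // 2^32 - 1
        long asLong = big;                 // fine: Int64 can hold up to 2^63 - 1
        Console.WriteLine(asLong);         // 4294967295

        // Int32 tops out at 2^31 - 1, so this conversion overflows:
        try
        {
            int asInt = checked((int)big); // throws OverflowException
            Console.WriteLine(asInt);
        }
        catch (OverflowException)
        {
            Console.WriteLine("Overflow, analogous to the Swift trap");
        }
    }
}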
This will automatically create a random Int for you:
var i = random() % items.count
i is of Int type, so no conversion necessary!
You can use
Int(rand())
To prevent getting the same random numbers every time the app starts, you can seed with srand():
srand(UInt32(NSDate().timeIntervalSinceReferenceDate))
let randomNumber: Int = Int(rand()) % items.count

Making a list of integers more human-friendly

This is a bit of a side project I have taken on to solve a no-fix issue for work. Our system outputs a code to represent a combination of things on another thing. Some example codes are:
9-9-0-4-4-5-4-0-2-0-0-0-2-0-0-0-0-0-2-1-2-1-2-2-2-4
9-5-0-7-4-3-5-7-4-0-5-1-4-2-1-5-5-4-6-3-7-9-72
9-15-0-9-1-6-2-1-2-0-0-1-6-0-7
The max number in one of the slots I've seen so far is about 150 but they will likely go higher.
When the system was designed there was no requirement for what this code would look like. But now the client wants to be able to type it in by hand from a sheet of paper, something the code above isn't suited for. We've said we won't do anything about it, but it seems like a fun challenge to take on.
My question is: where is a good place to start losslessly compressing this code? Obvious solutions such as storing the code under a shorter key are not an option; our database is read-only. I need to build a two-way method to make this code more human-friendly.
1) I agree that you definitely need a checksum - data entry errors are very common, unless you have really well-trained staff and independent duplicate keying with automatic cross-checking.
2) I suggest http://en.wikipedia.org/wiki/Huffman_coding to turn your list of numbers into a stream of bits. To get the probabilities required for this, you need a decent-sized sample of real data, so you can make a count, setting Ni to the number of times number i appears in the data. Then I suggest setting Pi = (Ni + 1) / (Sum_i (Ni + 1)), which smooths the probabilities a bit (see the sketch after this list). Also, with this method, if you see e.g. numbers 0-150 you could add a bit of slack by entering numbers 151-255 and setting them to Ni = 0. Another way around rare large numbers would be to add some sort of escape sequence.
3) Finding a way for people to type the resulting sequence of bits is really an applied psychology problem, but here are some ideas to pinch.
3a) Software licences - just encode six bits per character in some 64-character alphabet, but group characters in a way that makes it easier for people to keep place e.g. BC017-06777-14871-160C4
3b) UK car license plates. Use a change of alphabet to show people how to group characters e.g. ABCD0123EFGH4567IJKL...
3c) A really large alphabet - get yourself a list of 2^n words for some decent-sized n and encode n bits as a word e.g. GREEN ENCHANTED LOGICIAN...
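As a concrete illustration of the smoothing suggested in point 2, here is a minimal C# sketch (the method name is an assumption; the counts array would come from your sample of real data):

// Add-one (Laplace) smoothing: Pi = (Ni + 1) / Sum_i (Ni + 1),
// so symbols never seen in the sample still get a nonzero probability.
static double[] SmoothedProbabilities(int[] counts)
{
    double denominator = 0;
    foreach (int n in counts)
        denominator += n + 1;
    var probs = new double[counts.Length];
    for (int i = 0; i < counts.Length; i++)
        probs[i] = (counts[i] + 1) / denominator;
    return probs;
}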
I worried about this problem a while back. It turns out that you can't do much better than base64 - trying to squeeze a few more bits per character isn't really worth the effort (once you get into "strange" numbers of bits, encoding and decoding become more complex). But at the same time, you end up with something that's likely to have errors when entered (confusing a 0 with an O etc). One option is to choose a modified set of characters and letters (so it's still base 64, but, say, you substitute ">" for "0"); another is to add a checksum. Again, for simplicity of implementation, I felt the checksum approach was better.
Unfortunately I never got any further - things changed direction - so I can't offer code or a particular checksum choice.
PS: I realised there's a missing step I didn't explain: I was going to compress the text into some binary form before encoding (using some standard compression algorithm). So to summarize: compress, add checksum, base64 encode; base64 decode, check checksum, decompress.
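A minimal C# sketch of that summarized pipeline, assuming GZip for the compression step and a simple one-byte modular checksum (a real implementation might prefer CRC32):

using System;
using System.IO;
using System.IO.Compression;
using System.Linq;
using System.Text;

static class CodeCodec
{
    // Compress, append a one-byte checksum, then base64 encode.
    public static string Encode(string raw)
    {
        byte[] compressed;
        using (var ms = new MemoryStream())
        {
            using (var gz = new GZipStream(ms, CompressionMode.Compress))
            {
                byte[] bytes = Encoding.UTF8.GetBytes(raw);
                gz.Write(bytes, 0, bytes.Length);
            }
            compressed = ms.ToArray();
        }
        byte checksum = (byte)compressed.Sum(b => b); // mod-256 checksum
        return Convert.ToBase64String(compressed.Concat(new[] { checksum }).ToArray());
    }

    // Base64 decode, verify the checksum, then decompress.
    public static string Decode(string encoded)
    {
        byte[] all = Convert.FromBase64String(encoded);
        byte[] compressed = all.Take(all.Length - 1).ToArray();
        if ((byte)compressed.Sum(b => b) != all[all.Length - 1])
            throw new InvalidDataException("Checksum mismatch - probably a typo.");
        using (var ms = new MemoryStream(compressed))
        using (var gz = new GZipStream(ms, CompressionMode.Decompress))
        using (var reader = new StreamReader(gz))
            return reader.ReadToEnd();
    }
}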
This is similar to what I have used in the past. There are certainly better ways of doing this, but I used this method because it was easy to mirror in Transact-SQL, which was a requirement at the time. You could certainly modify this to incorporate Huffman encoding if the distribution of your IDs is non-random, but it's probably unnecessary.
You didn't specify a language, so this is in C#, but it should be very easy to translate to any language. In the lookup you'll see that commonly confused characters are omitted. This should speed up entry. I also had the requirement of a fixed length, but it would be easy for you to modify this.
static public class CodeGenerator
{
    static Dictionary<int, char> _lookupTable = new Dictionary<int, char>();
    static CodeGenerator()
    {
        PrepLookupTable();
    }
    private static void PrepLookupTable()
    {
        _lookupTable.Add(0, '3');
        _lookupTable.Add(1, '2');
        _lookupTable.Add(2, '5');
        _lookupTable.Add(3, '4');
        _lookupTable.Add(4, '7');
        _lookupTable.Add(5, '6');
        _lookupTable.Add(6, '9');
        _lookupTable.Add(7, '8');
        _lookupTable.Add(8, 'W');
        _lookupTable.Add(9, 'Q');
        _lookupTable.Add(10, 'E');
        _lookupTable.Add(11, 'T');
        _lookupTable.Add(12, 'R');
        _lookupTable.Add(13, 'Y');
        _lookupTable.Add(14, 'U');
        _lookupTable.Add(15, 'A');
        _lookupTable.Add(16, 'P');
        _lookupTable.Add(17, 'D');
        _lookupTable.Add(18, 'S');
        _lookupTable.Add(19, 'G');
        _lookupTable.Add(20, 'F');
        _lookupTable.Add(21, 'J');
        _lookupTable.Add(22, 'H');
        _lookupTable.Add(23, 'K');
        _lookupTable.Add(24, 'L');
        _lookupTable.Add(25, 'Z');
        _lookupTable.Add(26, 'X');
        _lookupTable.Add(27, 'V');
        _lookupTable.Add(28, 'C');
        _lookupTable.Add(29, 'N');
        _lookupTable.Add(30, 'B');
    }
    public static bool TryPCodeDecrypt(string iPCode, out Int64 oDecryptedInt)
    {
        //Prep the result so we can exit without having to fiddle with it if we hit an error.
        oDecryptedInt = 0;
        if (iPCode.Length > 3)
        {
            Char[] Bits = iPCode.ToCharArray(0, iPCode.Length - 2);
            int CheckInt7 = 0;
            int CheckInt3 = 0;
            if (!int.TryParse(iPCode[iPCode.Length - 1].ToString(), out CheckInt7) ||
                !int.TryParse(iPCode[iPCode.Length - 2].ToString(), out CheckInt3))
            {
                //Unsuccessful -- the last check ints are not integers.
                return false;
            }
            //Adjust the CheckInts to the right values.
            CheckInt3 -= 2;
            CheckInt7 -= 2;
            int COffset = iPCode.LastIndexOf('M') + 1;
            Int64 tempResult = 0;
            int cBPos = 0;
            while ((cBPos + COffset) < Bits.Length)
            {
                //Calculate the current position.
                int cNum = 0;
                foreach (int cKey in _lookupTable.Keys)
                {
                    if (_lookupTable[cKey] == Bits[cBPos + COffset])
                    {
                        cNum = cKey;
                    }
                }
                tempResult += cNum * (Int64)Math.Pow((double)31, (double)(Bits.Length - (cBPos + COffset + 1)));
                cBPos += 1;
            }
            if (tempResult % 7 == CheckInt7 && tempResult % 3 == CheckInt3)
            {
                oDecryptedInt = tempResult;
                return true;
            }
            return false;
        }
        else
        {
            //Unsuccessful -- too short.
            return false;
        }
    }
    public static string PCodeEncrypt(int iIntToEncrypt, int iMinLength)
    {
        int Check7 = (iIntToEncrypt % 7) + 2;
        int Check3 = (iIntToEncrypt % 3) + 2;
        StringBuilder result = new StringBuilder();
        result.Insert(0, Check7);
        result.Insert(0, Check3);
        int workingNum = iIntToEncrypt;
        while (workingNum > 0)
        {
            result.Insert(0, _lookupTable[workingNum % 31]);
            workingNum /= 31;
        }
        if (result.Length < iMinLength)
        {
            for (int i = result.Length + 1; i <= iMinLength; i++)
            {
                result.Insert(0, 'M');
            }
        }
        return result.ToString();
    }
}

Find a Global Atom from a partial string

I can create an Global Atom using GlobalAddAtom and I can find that atom again using GlobalFindAtom if I already know the string associated with the atom. But is there a way to find all atoms whose associated string matches a given partial string?
For example, let's say I have an atom whose string is "Hello, World!" How can I later find that atom by searching for just "Hello"?
Unfortunately, the behavior you're describing is not possible for atom tables. This is because atom tables in Windows are basically hash tables, and the mapping process handles strings in their entirety, not by parts.
Of course, it almost sounds like it would be possible, as quoted from the MSDN documentation:
Applications can also use local atom tables to save time when searching for a particular string. To perform a search, an application need only place the search string in the atom table and compare the resulting atom with the atoms in the relevant structures. Comparing atoms is typically faster than comparing strings.
However, they are referring to exact matches. This limitation probably seems dated compared to what is possible with the resources currently available to software. But atoms have been available as far back as Win16, and in those days this facility gave applications a way to manage string data effectively in minimal memory. Atoms are still used now to manage window class names, and still provide decent benefits in reducing the footprint of multiple stored copies of strings.
If you need to store string data efficiently and to be able to scan by partial starting matches, a Suffix Tree is likely to meet or exceed your needs.
It actually can be done, but only through scanning them all. In LINQPad 5 this can be done in 0.025 seconds on my machine, so it is quite fast. Here is an example implementation:
void Main()
{
    const string atomPrefix = "Hello";
    const int bufferSize = 1024;
    ushort smallestAtomIndex = 0xC000;
    var buffer = new StringBuilder(bufferSize);
    var results = new List<string>();
    for (ushort atomIndex = smallestAtomIndex; atomIndex < ushort.MaxValue; atomIndex++)
    {
        var resultLength = GlobalGetAtomName(atomIndex, buffer, bufferSize);
        // A length of zero means no atom is registered under this index.
        if (resultLength != 0 && buffer.ToString().StartsWith(atomPrefix))
        {
            results.Add($"{buffer} - {atomIndex}");
        }
        buffer.Clear();
    }
    results.Dump();
}
[DllImport("kernel32.dll", CharSet = CharSet.Auto, SetLastError = true)]
public static extern uint GlobalGetAtomName(ushort atom, StringBuilder buffer, int size);

Best way to write a conversion function

Let's say that I'm writing a function to convert between temperature scales. I want to support at least Celsius, Fahrenheit, and Kelvin. Is it better to pass the source scale and target scale as separate parameters of the function, or some sort of combined parameter?
Example 1 - separate parameters:
function convertTemperature("celsius", "fahrenheit", 22)
Example 2 - combined parameter:
function convertTemperature("c-f", 22)
The code inside the function is probably where it counts. With two parameters, the logic to determine what formula we're going to use is slightly more complicated, but a single parameter doesn't feel right somehow.
Thoughts?
Go with the first option, but rather than allow literal strings (which are error prone), take constant values or an enumeration if your language supports it, like this:
convertTemperature (TempScale.CELSIUS, TempScale.FAHRENHEIT, 22)
Depends on the language.
Generally, I'd use separate arguments with enums.
If it's an object-oriented language, then I'd recommend a temperature class, with the temperature stored internally however you like and then functions to output it in whatever units are needed:
temp.celsius(); // returns the temperature of object temp in celsius
When writing such designs, I like to ask myself, "If I needed to add an extra unit, what design would make that easiest?" Doing this, I come to the conclusion that enums would be easiest, for the following reasons:
1) Adding new values is easy.
2) I avoid doing string comparison
However, how do you write the conversion method? 3P2 is 6, so there are 6 different permutations of Celsius, Fahrenheit, and Kelvin. What if I wanted to add a new temperature format "foo"? That would mean 4P2, which is 12! Two more? 5P2 = 20 combinations. Three more? 6P2 = 30 combinations!
You can quickly see how each additional unit requires more and more changes to the code. For this reason I don't do direct conversions! Instead, I do an intermediate conversion. I'd pick one scale, say Kelvin, and initially convert to Kelvin. I'd then convert Kelvin to the desired scale. Yes, it does result in an extra calculation. However, it makes scaling the code a ton easier: adding a new temperature unit will always result in only two new modifications to the code. Easy.
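A minimal C# sketch of this hub-and-spoke approach (the names are assumptions; the formulas are the standard ones):

enum TempScale { Celsius, Fahrenheit, Kelvin }

static class TemperatureConverter
{
    // Convert any scale to the Kelvin "hub" first...
    static double ToKelvin(TempScale scale, double value) => scale switch
    {
        TempScale.Celsius => value + 273.15,
        TempScale.Fahrenheit => (value - 32) * 5.0 / 9.0 + 273.15,
        _ => value
    };

    // ...then from Kelvin out to the target scale.
    static double FromKelvin(TempScale scale, double kelvin) => scale switch
    {
        TempScale.Celsius => kelvin - 273.15,
        TempScale.Fahrenheit => (kelvin - 273.15) * 9.0 / 5.0 + 32,
        _ => kelvin
    };

    // Adding a new scale means exactly one new case in each switch.
    public static double Convert(TempScale from, TempScale to, double value) =>
        FromKelvin(to, ToKelvin(from, value));
}

// Usage: TemperatureConverter.Convert(TempScale.Celsius, TempScale.Fahrenheit, 22) == 71.6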
A few things:
I'd use an enumerated type that a syntax checker or compiler can check rather than a string that can be mistyped. In Pseudo-PHP:
define ('kCelsius', 0); define ('kFarenheit', 1); define ('kKelvin', 2);
$a = ConvertTemperature(22, kCelsius, kFarenheit);
Also, it seems more natural to me to place the thing you operate on, in this case the temperature to be converted, first. It gives a logical ordering to your parameters (convert -- what? from? to?) and thus helps with mnemonics.
Your function will be much more robust if you use the first approach. If you need to add another scale, that's one more parameter value to handle. In the second approach, adding another scale means adding as many values as you already had scales on the list, times 2. (For example, to add K to C and F, you'd have to add K-C, K-F, C-K, and F-K.)
A decent way to structure your program would be to first convert whatever comes in to an arbitrarily chosen intermediate scale, and then convert from that intermediate scale to the outgoing scale.
A better way would be to have a little library of slopes and intercepts for the various scales, and just look up the numbers for the incoming and outgoing scales and do the calculation in one generic step.
In C# (and probably Java) it would be best to create a Temperature class that stores temperatures privately as Celsius (or whatever) and which has Celsius, Fahrenheit, and Kelvin properties that do all the conversions for you in their get and set accessors.
Depends how many conversions you are going to have. I'd probably choose one parameter, given as an enum. Consider this expanded version of conversion:
enum Conversion
{
    CelsiusToFahrenheit,
    FahrenheitToCelsius,
    KilosToPounds
}
Convert(Conversion conversion, X from);
You now have sane type safety at point of call - one cannot give correctly typed parameters that give an incorrect runtime result. Consider the alternative.
enum Units
{
    Pounds,
    Kilos,
    Celcius,
    Farenheight
}
Convert(Units from, Units to, X fromAmount);
I can type-safely call
Convert(Pounds, Celcius, 5);
But the result is meaningless, and you'll have to fail at runtime. Yes, I know you're only dealing with temperature at the moment, but the general concept still holds (I believe).
I would choose
Example 1 - separate parameters: function convertTemperature("celsius", "fahrenheit", 22)
Otherwise within your function definition you would have to parse "c-f" into "celsius" and "fahrenheit" anyway to get the required conversion scales, which could get messy.
If you're providing something like Google's search box to users, having handy shortcuts like "c-f" is nice for them. Underneath, though, I would convert "c-f" into "celsius" and "fahrenheit" in an outer function before calling convertTemperature() as above.
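A small C# sketch of that outer wrapper (the shortcut table and the underlying ConvertTemperature stub are assumptions standing in for the question's function):

using System;
using System.Collections.Generic;

static class ShortcutConverter
{
    static readonly Dictionary<string, string> ScaleNames = new Dictionary<string, string>
    {
        ["c"] = "celsius",
        ["f"] = "fahrenheit",
        ["k"] = "kelvin",
    };

    // "c-f" -> ("celsius", "fahrenheit"), then defer to the two-scale function.
    public static double Convert(string shortcut, double value)
    {
        string[] parts = shortcut.Split('-');
        return ConvertTemperature(ScaleNames[parts[0]], ScaleNames[parts[1]], value);
    }

    // Stand-in for the question's convertTemperature(from, to, value).
    static double ConvertTemperature(string from, string to, double value)
    {
        throw new NotImplementedException(); // supplied elsewhere
    }
}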
In this case a single parameter looks totally obscure;
the function converts a temperature from one scale to another scale.
IMO it's more natural to pass source and target scales as separate parameters. I definitely don't want to have to puzzle out the format of the first argument.
I would make an enumeration out of the temperature types and pass in the 2 scale parameters. Something like (in c#):
public void ConvertTemperature(TemperatureTypeEnum SourceTemp,
TemperatureTypeEnum TargetTemp,
decimal Temperature)
{}
I'm always on the lookout for ways to use objects to solve my programming problems. I hope this means that I'm more OO than when I was only using functions to solve problems, but that remains to be seen.
In C#:
interface ITemperature
{
CelciusTemperature ToCelcius();
FarenheitTemperature ToFarenheit();
}
struct FarenheitTemperature : ITemperature
{
public readonly int Value;
public FarenheitTemperature(int value)
{
this.Value = value;
}
public FarenheitTemperature ToFarenheit() { return this; }
public CelciusTemperature ToCelcius()
{
return new CelciusTemperature((this.Value - 32) * 5 / 9);
}
}
struct CelciusTemperature
{
public readonly int Value;
public CelciusTemperature(int value)
{
this.Value = value;
}
public CelciusTemperature ToCelcius() { return this; }
public FarenheitTemperature ToFarenheit()
{
return new FarenheitTemperature(this.Value * 9 / 5 + 32);
}
}
and some tests:
// Freezing
Debug.Assert(new FarenheitTemperature(32).ToCelcius().Equals(new CelciusTemperature(0)));
Debug.Assert(new CelciusTemperature(0).ToFarenheit().Equals(new FarenheitTemperature(32)));
// crossover
Debug.Assert(new FarenheitTemperature(-40).ToCelcius().Equals(new CelciusTemperature(-40)));
Debug.Assert(new CelciusTemperature(-40).ToFarenheit().Equals(new FarenheitTemperature(-40)));
and an example of a bug that this approach avoids:
CelciusTemperature theOutbackInAMidnightOilSong = new CelciusTemperature(45);
FarenheitTemperature x = theOutbackInAMidnightOilSong; // ERROR: Cannot implicitly convert type 'CelciusTemperature' to 'FarenheitTemperature'
Adding Kelvin conversions is left as an exercise.
By the way, it doesn't have to be more work to implement the three-parameter version, as suggested in the question statement.
These are all linear functions, so you can implement something like
float LinearConvert(float in, float scale, float add, bool invert);
where the last bool indicates if you want to do the forward transform or reverse it.
Within your conversion technique, you can have a scale/add pair for X -> Kelvin. When you get a request to convert format X to Y, you can first run X -> Kelvin, then Kelvin -> Y by reversing the Y -> Kelvin process (by flipping the last bool to LinearConvert).
This technique gives you something like 4 lines of real code in your convert function, and one piece of data for every type you need to convert between.
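A minimal C# sketch of that table-driven idea (the parameter table maps each scale to kelvin = value * scale + add; the names are assumptions):

using System.Collections.Generic;

static class LinearTemperature
{
    // kelvin = value * Scale + Add for each supported unit.
    static readonly Dictionary<string, (double Scale, double Add)> ToKelvinParams =
        new Dictionary<string, (double Scale, double Add)>
        {
            ["celsius"] = (1.0, 273.15),
            ["fahrenheit"] = (5.0 / 9.0, 459.67 * 5.0 / 9.0),
            ["kelvin"] = (1.0, 0.0),
        };

    static double LinearConvert(double value, double scale, double add, bool invert)
    {
        // Forward: toward Kelvin. Inverted: from Kelvin back out.
        return invert ? (value - add) / scale : value * scale + add;
    }

    public static double Convert(string from, string to, double value)
    {
        var f = ToKelvinParams[from];
        var t = ToKelvinParams[to];
        double kelvin = LinearConvert(value, f.Scale, f.Add, invert: false);
        return LinearConvert(kelvin, t.Scale, t.Add, invert: true);
    }
}

// One (Scale, Add) pair per unit is the only data a new unit needs.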
Similar to what #Rob #wcm and #David explained...
public class Temperature
{
    private double celcius;
    public static Temperature FromFarenheit(double farenheit)
    {
        return new Temperature { Farhenheit = farenheit };
    }
    public static Temperature FromCelcius(double celcius)
    {
        return new Temperature { Celcius = celcius };
    }
    public static Temperature FromKelvin(double kelvin)
    {
        return new Temperature { Kelvin = kelvin };
    }
    private double kelvinToCelcius(double kelvin)
    {
        return kelvin - 273.15;
    }
    private double celciusToKelvin(double celcius)
    {
        return celcius + 273.15;
    }
    private double farhenheitToCelcius(double farhenheit)
    {
        return (farhenheit - 32) * 5 / 9;
    }
    private double celciusToFarenheit(double celcius)
    {
        return celcius * 9 / 5 + 32;
    }
    public double Kelvin
    {
        get { return celciusToKelvin(celcius); }
        set { celcius = kelvinToCelcius(value); }
    }
    public double Celcius
    {
        get { return celcius; }
        set { celcius = value; }
    }
    public double Farhenheit
    {
        get { return celciusToFarenheit(celcius); }
        set { celcius = farhenheitToCelcius(value); }
    }
}
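With the standard formulas filled in, a quick usage sketch of the class above:

var t = Temperature.FromCelcius(22);
Console.WriteLine(t.Farhenheit); // 71.6
Console.WriteLine(t.Kelvin);     // 295.15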
I think I'd go whole hog one direction or another. You could write a mini-language that does any sort of conversion like units does:
$ units 'tempF(-40)' tempC
-40
Or use individual functions like the recent Convert::Temperature Perl module does:
use Convert::Temperature;
my $c = new Convert::Temperature();
my $res = $c->from_fahr_to_cel('59');
But that brings up an important point: does the language you are using already have conversion functions? If so, what coding convention do they use? So if the language is C, it would be best to follow the example of the atoi and strtod library functions (untested):
double fahrtocel(double tempF) {
    return (tempF - 32) * (5.0 / 9.0);
}
double celtofahr(double tempC) {
    return (9.0 / 5.0) * tempC + 32;
}
In writing this post, I ran across a very interesting post on using emacs to convert dates. The take-away for this topic is that it uses the one function-per-conversion style. Also, conversions can be very obscure. I tend to do date calculations using SQL because it seems unlikely there are many bugs in that code. In the future, I'm going to look into using emacs.
Here is my take on this (using PHP):
function Temperature($value, $input, $output)
{
    $value = floatval($value);
    if (isset($input, $output) === true)
    {
        switch ($input)
        {
            case 'K': $value = $value - 273.15; break; // Kelvin
            case 'F': $value = ($value - 32) * (5 / 9); break; // Fahrenheit
            case 'R': $value = ($value - 491.67) * (5 / 9); break; // Rankine
        }
        switch ($output)
        {
            case 'K': $value = $value + 273.15; break; // Kelvin
            case 'F': $value = $value * (9 / 5) + 32; break; // Fahrenheit
            case 'R': $value = ($value + 273.15) * (9 / 5); break; // Rankine
        }
    }
    return $value;
}
Basically the $input value is converted to the standard Celsius scale and then converted back again to the $output scale - one function to rule them all. =)
My vote is two parameters for conversion types, one for the value (as in your first example). I would use enums instead of string literals, however.
Use enums, if your language allows it, for the unit specifications.
I'd say the code inside would be easier with two. I'd have a table with pre-add, multiply, and post-add, and run the value through the item for one unit, and then through the item for the other unit in reverse. Basically converting the input temperature to a common base value inside, and then out to the other unit. This entire function would be table-driven.
I wish there was some way to accept multiple answers. Based on everyone's recommendations, I think I will stick with the multiple parameters, changing the strings to enums/constants, and moving the value to be converted to the first position in the parameter list. Inside the function, I'll use Kelvin as a common middle ground.
Previously I had written individual functions for each conversion and the overall convertTemperature() function was merely a wrapper with nested switch statements. I'm writing in both classic ASP and PHP, but I wanted to leave the question open to any language.
