Displaying file size on the user interface

I have been using the following function to display file sizes in a more readable, user-friendly form on the user interface:
function bytesToSize(bytes, precision)
{
    var kilobyte = 1024;
    var megabyte = kilobyte * 1024;
    var gigabyte = megabyte * 1024;
    var terabyte = gigabyte * 1024;

    if ((bytes >= 0) && (bytes < kilobyte)) {
        return bytes + ' B';
    } else if ((bytes >= kilobyte) && (bytes < megabyte)) {
        return (bytes / kilobyte).toFixed(precision) + ' KB';
    } else if ((bytes >= megabyte) && (bytes < gigabyte)) {
        return (bytes / megabyte).toFixed(precision) + ' MB';
    } else if ((bytes >= gigabyte) && (bytes < terabyte)) {
        return (bytes / gigabyte).toFixed(precision) + ' GB';
    } else if (bytes >= terabyte) {
        return (bytes / terabyte).toFixed(precision) + ' TB';
    } else {
        return bytes + ' B';
    }
}
However, the problem is that it shows X KB, X MB, etc. (rounding the results for display on the UI) and not X.Y (or, say, X,Y). After this came to my mind I was even more confused, and I started looking around at how other systems show it in their UI. Here are some of them:
FileZilla: XXX,XXX,XXX bytes (while uploading)
Windows Explorer: XXX,XXX KB (in details view)
All of this brought me to a new level of confusion, as the size displayed in the UI using the above function is not really precise; imagine a file size in GBs being displayed incorrectly to the user.
Please let me know in which format this data should be displayed to users to be most useful and, at the same time, most accurate. Also, real estate is expensive in today's complicated web pages, which also needs to be considered.

For a quick overview I like the file size format xxx.x fooB, where foo is the biggest applicable of T, G, M, k (or empty).
That is, divide numBytes by 1024 until it is < 1000, using floating-point arithmetic. Remember how often you divided, and convert that count to the corresponding foo.
Of course you might want to consider other formats for detailed file size information.
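The divide-until-small loop described above can be sketched in JavaScript (formatSize is an illustrative name, not from the original post; thresholds follow the answer: divide by 1024 while the value is still >= 1000):

```javascript
// Minimal sketch of the scheme above: divide by 1024 until the value
// drops below 1000, counting divisions to pick the unit prefix.
function formatSize(numBytes) {
  const prefixes = ['', 'k', 'M', 'G', 'T'];
  let value = numBytes;
  let i = 0;
  while (value >= 1000 && i < prefixes.length - 1) {
    value /= 1024;
    i++;
  }
  // One decimal for scaled values, none for plain bytes.
  return (i === 0 ? String(value) : value.toFixed(1)) + ' ' + prefixes[i] + 'B';
}

console.log(formatSize(512));        // "512 B"
console.log(formatSize(339700));     // "331.7 kB"
console.log(formatSize(1500000000)); // "1.4 GB"
```

Dividing by 1024 but cutting over at 1000 keeps the displayed number at most four digits wide, which helps with the screen real estate concern from the question.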

Related

Problem in Flash Programming of PIC24FJ128GC006

I am making a USB bootloader for the PIC24FJ. I am now in the process of writing the application code hex file to flash memory through software, without using an ICD3. After downloading the application code hex file, I checked the program memory of the PIC using the PIC Memory Views of the MPLAB window toolbar, and this is what it looks like: PIC24_BOOT_APP_VECTOR_AREA. As you can see in the picture, the opcode is not written continuously at every address; it alternates with 000000.
I also compared the opcodes of the application code downloaded with the bootloader to those of the application code programmed without the bootloader. I found that there are data in the opcodes that are not present in the application alone. Attached are the photos: Application_Code_Alone_User_Area, PIC24_Boot_App_User_Area. This may create a problem in jumping to the application.
Below is my code for storing data in the buffer and writing it to flash memory. (I use single-word programming for the flash.)
#define WRITE_FLASH_BLOCK_SIZE 0x04
#define USER_MEM_START_ADDRESS 0x004002

unsigned long rxBuff[60];
int rxIndexer;
int xfer;

lineStart = rxBuff[0];
positionAddress = (rxBuff[2] << 8) + rxBuff[3]; // THIS IS THE ADDRESS WHERE THE DATA SHOULD BE WRITTEN
numberOfData = rxBuff[1];                       // THIS IS THE TOTAL NUMBER OF DATA BYTES RECEIVED IN THE STREAM
recordType = rxBuff[4];
rxIndexer = 5; // start of data index in an Intel HEX file record

// THIS SECTION COMBINES THE INFORMATION FROM THE DATA STREAM
// INTO THIS FORMAT - 0x00AA, 0xBBCC
for (xfer = 0; xfer < numberOfData; xfer += WRITE_FLASH_BLOCK_SIZE)
{
    rxBuff[rxIndexer] = (rxBuff[START_OF_DATA_INDEX + xfer]) & 0x00FF;
    rxBuff[rxIndexer] |= (rxBuff[START_OF_DATA_INDEX + xfer + 1] << 8) & 0xFF00; // end of lower word
    rxIndexer++;
    rxBuff[rxIndexer] = (rxBuff[START_OF_DATA_INDEX + xfer + 2]) & 0x00FF; // start of upper byte
    rxBuff[rxIndexer] |= (0x00 << 8) & 0xFF00;                             // phantom byte (0x00)
    rxIndexer++;
}

if (lineStart == ':')
{
    if (recordType == 0x00 && data_checksum == 0)
    {
        for (xfer = 0; xfer < numberOfData; xfer += 2)
        {
            FlashWrite_Word(programAddress + positionAddress, rxBuff[5 + xfer], rxBuff[5 + xfer + 1]);
            positionAddress += 2;
        }
    }
    else if (recordType == 0x04 && data_checksum == 0)
    {
        programAddress = USER_MEM_START_ADDRESS;
    }
    else if (recordType == END_OF_FILE_RECORD)
    {
        jumpTo_App();
    }
}
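The fields this code indexes out of rxBuff (start code ':', byte count, 16-bit address, record type, data bytes, trailing checksum) follow the Intel HEX record layout. As a side check of that layout, here is a hedged JavaScript sketch of parsing and checksum-verifying a single record (the function and field names are illustrative, not from the bootloader code):

```javascript
// Parse one Intel HEX record: ':', byte count, 16-bit address,
// record type, data bytes, checksum. Per the Intel HEX format,
// all bytes including the checksum must sum to 0 modulo 256.
function parseHexRecord(line) {
  if (line[0] !== ':') throw new Error('missing start code');
  const bytes = [];
  for (let i = 1; i < line.length; i += 2) {
    bytes.push(parseInt(line.slice(i, i + 2), 16));
  }
  const count = bytes[0];
  const address = (bytes[1] << 8) | bytes[2];
  const recordType = bytes[3];
  const data = bytes.slice(4, 4 + count);
  const checksumOk = bytes.reduce((a, b) => (a + b) & 0xFF, 0) === 0;
  return { count, address, recordType, data, checksumOk };
}

// A standard example record: 3 data bytes at address 0x0030, type 00.
console.log(parseHexRecord(':0300300002337A1E'));
```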

Efficient way to generate a seemingly random permutation from a very large set without repeating?

I have a very large set (billions of elements or more; it's expected to grow exponentially to some level), and I want to generate seemingly random elements from it without repeating. I know I could pick a random number each time and record the elements I have already generated, but that takes more and more memory as numbers are generated and wouldn't be practical after a couple million elements.
I mean, I could count 1, 2, 3 and so on up to billions, and each element would take constant time without remembering all the previous ones; or I could go 1, 3, 5, 7, 9, ... and then 2, 4, 6, 8, 10, .... But is there a more sophisticated way to do this and eventually get a seemingly random permutation of that set?
Update
1. The set does not change size during the generation process. I meant that when the user's input increases linearly, the size of the set increases exponentially.
2. In short, the set is like the set of every integer from 1 to 10 billion or more.
3. At length: it goes up that high because each element carries the information of many independent choices. For example, imagine an RPG character that has 10 attributes, each going from 1 to 100 (for my problem, different choices can have different ranges); thus there are 10^20 possible characters. The number 10873456879326587345 would correspond to a character that has attributes "11, 88, 35, ...", and I would like an algorithm to generate them one by one without repeating, while making it look random.
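For concreteness, the mapping described in point 3 amounts to a base-100 encoding (mixed radix in general). A minimal sketch, with hypothetical names decodeCharacter / encodeCharacter:

```javascript
// Sketch: a character with ten attributes, each in 1..100, corresponds
// to a base-100 integer in 0 .. 100^10 - 1. BigInt is used because
// 100^10 exceeds Number's safe integer range.
function decodeCharacter(n, numAttrs = 10, range = 100n) {
  const attrs = [];
  for (let i = 0; i < numAttrs; i++) {
    attrs.push(Number(n % range) + 1); // digit 0..99 -> attribute 1..100
    n /= range;                        // BigInt division truncates
  }
  return attrs;
}

function encodeCharacter(attrs, range = 100n) {
  let n = 0n;
  for (let i = attrs.length - 1; i >= 0; i--) {
    n = n * range + BigInt(attrs[i] - 1);
  }
  return n;
}
```

With this mapping, generating characters without repetition reduces to enumerating the integers 0 .. 100^10 - 1 in a seemingly random order.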
Thanks for the interesting question. You can create a "pseudorandom"* (cyclic) permutation with a few bytes of state using modular exponentiation. Say we have n elements. Search for a prime p that is bigger than n + 1. Then find a primitive root g modulo p. Essentially by the definition of a primitive root, the map x -> (g * x) % p is a cyclic permutation of {1, ..., p-1}. And so x -> ((g * (x + 1)) % p) - 1 is a cyclic permutation of {0, ..., p-2}. We can get a cyclic permutation of {0, ..., n-1} by repeating the previous permutation whenever it gives a value bigger than or equal to n.
I implemented this idea as a Go package. https://github.com/bwesterb/powercycle
package main

import (
    "fmt"

    "github.com/bwesterb/powercycle"
)

func main() {
    var x uint64
    cycle := powercycle.New(10)
    for i := 0; i < 10; i++ {
        fmt.Println(x)
        x = cycle.Apply(x)
    }
}
This outputs something like
0
6
4
1
2
9
3
5
8
7
but that might of course vary depending on the generator chosen.
It's fast, but not super fast: on my five-year-old i7 it takes less than 210 ns to compute one application of a cycle on 1000000000000000 elements. More details:
BenchmarkNew10-8 1000000 1328 ns/op
BenchmarkNew1000-8 500000 2566 ns/op
BenchmarkNew1000000-8 50000 25893 ns/op
BenchmarkNew1000000000-8 200000 7589 ns/op
BenchmarkNew1000000000000-8 2000 648785 ns/op
BenchmarkApply10-8 10000000 170 ns/op
BenchmarkApply1000-8 10000000 173 ns/op
BenchmarkApply1000000-8 10000000 172 ns/op
BenchmarkApply1000000000-8 10000000 169 ns/op
BenchmarkApply1000000000000-8 10000000 201 ns/op
BenchmarkApply1000000000000000-8 10000000 204 ns/op
Why did I say "pseudorandom"? Well, we are always creating a very specific kind of cycle: namely, one that uses modular exponentiation. It looks pretty pseudorandom, though.
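For readers who don't want to pull in the Go package, the same idea fits in a few lines of JavaScript with small hardcoded parameters (p = 11 is a prime greater than n + 1 for n = 10, and g = 2 is a primitive root modulo 11):

```javascript
const n = 10, p = 11, g = 2;

// One application of the cyclic permutation on {0, ..., n-1}.
// Values >= n are fed through again ("cycle walking"); with p = n + 1
// that never triggers, but it keeps the sketch general.
function next(x) {
  do {
    x = ((g * (x + 1)) % p) - 1;
  } while (x >= n);
  return x;
}

let x = 0;
const seen = [];
for (let i = 0; i < n; i++) { seen.push(x); x = next(x); }
console.log(seen.join(',')); // "0,1,3,7,4,9,8,6,2,5"
```

For real sizes you would search for p and g programmatically and use big-integer arithmetic, which is what the Go package does.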
I would pick a random element and swap it with an element at the beginning of the set.
Here's some pseudocode:
set = [1, 2, 3, 4, 5, 6]
picked = 0

Function PickNext(set, picked)
    If picked > Len(set) - 1 Then
        Return Nothing
    End If
    // random number between picked (inclusive) and length (exclusive)
    r = RandomInt(picked, Len(set))
    // swap the picked element to the beginning of the set
    result = set[r]
    set[r] = set[picked]
    set[picked] = result
    // update picked
    picked++
    // return your next random element
    Return result
End Function
Every time you pick an element there is one swap, and the only extra memory being used is the picked variable. The swap can happen whether the elements are in a database or in memory.
EDIT: Here's a jsfiddle of a working implementation: http://jsfiddle.net/sun8rw4d/
JavaScript
var set = [];
set.picked = 0;

function pickNext(set) {
    if(set.picked > set.length - 1) { return null; }
    var r = set.picked + Math.floor(Math.random() * (set.length - set.picked));
    var result = set[r];
    set[r] = set[set.picked];
    set[set.picked] = result;
    set.picked++;
    return result;
}

// testing
for(var i = 0; i < 100; i++) {
    set.push(i);
}
while(pickNext(set) !== null) { }
document.body.innerHTML += set.toString();
EDIT 2: Finally, a random binary walk of the set. This can be accomplished with O(log2(N)) stack space (memory), which for 10 billion is only about 33 entries. There's no shuffling or swapping involved. Using ternary instead of binary might yield even better pseudorandom results.
// on-the-fly set generator
var count = 0;
var maxValue = 64;
function nextElement() {
    // restart the generation
    if(count == maxValue) {
        count = 0;
    }
    return count++;
}

// code to pseudo-randomly select elements
var current = 0;
var stack = [0, maxValue - 1];
function randomBinaryWalk() {
    if(stack.length == 0) { return null; }
    var high = stack.pop();
    var low = stack.pop();
    var mid = ((high + low) / 2) | 0;
    // pseudo-randomly choose the next path
    if(Math.random() > 0.5) {
        if(low <= mid - 1) {
            stack.push(low);
            stack.push(mid - 1);
        }
        if(mid + 1 <= high) {
            stack.push(mid + 1);
            stack.push(high);
        }
    } else {
        if(mid + 1 <= high) {
            stack.push(mid + 1);
            stack.push(high);
        }
        if(low <= mid - 1) {
            stack.push(low);
            stack.push(mid - 1);
        }
    }
    // how many elements to skip
    var toMid = (current < mid ? mid - current : (maxValue - current) + mid);
    // skip elements
    for(var i = 0; i < toMid - 1; i++) {
        nextElement();
    }
    current = mid;
    // get the result
    return nextElement();
}

// test
var result;
var list = [];
do {
    result = randomBinaryWalk();
    list.push(result);
} while(result !== null);
document.body.innerHTML += '<br/>' + list.toString();
Here are the results from a couple of runs with a small set of 64 elements. JSFiddle: http://jsfiddle.net/yooLjtgu/
30,46,38,34,36,35,37,32,33,31,42,40,41,39,44,45,43,54,50,52,53,51,48,47,49,58,60,59,61,62,56,57,55,14,22,18,20,19,21,16,15,17,26,28,29,27,24,25,23,6,2,4,5,3,0,1,63,10,8,7,9,12,11,13
30,14,22,18,16,15,17,20,19,21,26,28,29,27,24,23,25,6,10,8,7,9,12,13,11,2,0,63,1,4,5,3,46,38,42,44,45,43,40,41,39,34,36,35,37,32,31,33,54,58,56,55,57,60,59,61,62,50,48,49,47,52,51,53
As I mentioned in my comment, unless you have an efficient way to skip to a specific point in your on-the-fly generation of the set, this will not be very efficient.
If the set is enumerable, use a pseudo-random integer generator adjusted to the period 0 .. 2^n - 1, where the upper bound is just greater than the size of your set, and generate pseudo-random integers, discarding those larger than the size of your set. Use those integers to index items from your set.
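A minimal sketch of that approach, using a full-period linear congruential generator over 0..2^n - 1 as one possible choice of generator (the constants and sizes here are illustrative):

```javascript
// Full-period LCG over 0..M-1 (M = 2^5 here), discarding outputs that
// fall outside the set. The constants satisfy the Hull-Dobell conditions
// for a power-of-two modulus (c odd, a % 4 == 1), so the raw generator
// visits every value in 0..M-1 exactly once per period.
const M = 32, a = 5, c = 3;
const setSize = 20; // pretend our set has 20 elements

let state = 0;
function nextIndex() {
  do {
    state = (a * state + c) % M;
  } while (state >= setSize); // discard indices outside the set
  return state;
}

const out = [];
for (let i = 0; i < setSize; i++) out.push(nextIndex());
console.log(out.join(',')); // each of 0..19 exactly once, in scrambled order
```

Since M is at most twice the set size, the expected number of discarded draws per output is below one, so each index still takes O(1) amortized time and O(1) memory.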
Pre-compute a series of indices (e.g. in a file) which has the properties you need, then randomly choose a start index for your enumeration and use the series in a round-robin manner.
The length of your pre-computed series should be greater than the maximum size of the set.
If you combine this (depending on your programming language etc.) with file mappings, your final nextIndex(INOUT state) function is (nearly) as simple as return mappedIndices[state++ % PERIOD]; if each entry has a fixed size (e.g. 8 bytes -> uint64_t).
Of course, the returned value could be greater than your current set size. Simply draw indices until you get one that is within your set's current size.
Update (in response to the question update):
There is another option to achieve your goal, if it is about creating 10 billion unique characters in your RPG: generate a GUID and write yourself a function which computes your number from the GUID. See man uuid if you are on a Unix system; otherwise google it. Some parts of a UUID are not random but contain meta-information; other parts are either systematic (such as your network card's MAC address) or random, depending on the generator algorithm. But they are very, very likely to be unique. So, whenever you need a new unique number, generate a UUID and transform it into your number by means of some algorithm which basically maps the UUID bytes to your number in a non-trivial way (e.g. using hash functions).

Mifare Classic 1K reading fail with PN532 chip

I have a PN532 NFC chip, and my board is connected to it via SPI. I am sure the connection is good (Mifare Ultralight works fine). I have some blank Mifare Classic 1K cards, and there are some problems.
My moves:
1) Set up the PN532 (retries, SAM)
2) Search for a card with the ListPassiveTarget command for ISO 14443A cards
3) When a card is found, authenticate to some sector (required block_number = sector_number * 4)
4) Read data with 4 InDataExchange commands, with block_number = sector_number * 4 + 0, 1, 2, 3
5) Go to step 3
The first sector reads fine; I get good data. But when I try to read another sector after successfully authenticating to it, I get an error as if authentication had failed.
If I read sector 0 (OK), then reading sectors 1..15 fails.
If I read sector 5 (OK) with all the block_num calculations (blocks 20, 21, 22, 23), then reading sectors 6..15 fails.
I tried removing the card from the field for a minute, returning it to the field, and repeating the read - and I can't read any sector. Only rebooting helps.
I suppose there may be some step required between authentications. The typical HALT command does not help.
The authentication function is tested - wrong keys don't work, right keys do.
My code that deals with reading:
// here we know the card type
// ISO 14443 A MIFARE CLASSIC 1K
// repeat polling
if( !ListPassiveTarget_14443A_106() )
{
    // no card!
    NFC_download = false;
    break;
}
else
{
    if( !GetGeneralStatus() )
    {
        // no card!
        NFC_download = false;
        break;
    }
    else
    {
        if( NFC::Num_Of_Tg != 0 )
        {
            // 14443 A Mifare Classic
            // 16 sectors, 4 blocks each
            for(u8 sector = 0; sector < 16; sector++) // for all 16 sectors
            {
                // authenticate sector with A key
                u8 x = 0;
                for(x = 0; (x < 3) && (Autenticated == 0); x++) // loop over keys
                {
                    Autenticated = Try_Mifare_Classic_Key( x, 0, sector ); // try A key
                    // block = sector*4
                }
                if( Autenticated != 0 )
                {
                    // report success and key number over UART
                    if( ((Uart::CommandTX_WPos + 1) & 0x0F) != Uart::CommandTX_RPos )
                    {
                        // ok
                        Uart::CommandTXBuf[Uart::CommandTX_WPos].Size = 4;
                        Uart::CommandTXBuf[Uart::CommandTX_WPos].Buf[0] = AUTH_CLASSIC;
                        Uart::CommandTXBuf[Uart::CommandTX_WPos].Buf[1] = Autenticated; // key type
                        Uart::CommandTXBuf[Uart::CommandTX_WPos].Buf[2] = x;            // key number
                        Uart::CommandTXBuf[Uart::CommandTX_WPos].Buf[3] = sector;       // sector
                        Uart::CommandTX_WPos = (Uart::CommandTX_WPos + 1) & 0x0F;
                        Uart::commandSend();
                    }
                    // read the whole sector
                    Read_Mifare( sector*4 );
                    Read_Mifare( sector*4 + 1 );
                    Read_Mifare( sector*4 + 2 );
                    Read_Mifare( sector*4 + 3 );
                    // reboot card?
                    //SPI::Wait(5500000); // 1000 ms delay
                    //Halt_Mifare(); // halt won't help
                    //SPI::Wait(550000); // 100 ms delay
                }
            } // for sectors
        }
        else
        {
            // no target
            NFC_download = false;
            break;
        }
    }
}
What can be wrong? Have I missed some step between sector authentications and reads?
A failure in logic: I forgot to clear the Autenticated flag between sectors (the key loop only runs while Autenticated == 0). Now it's working.

dropzone.js Change display units

Does anyone know if it's possible to change the units displayed for uploaded files? I uploaded a file that is 600 MB, and the display says 0.6 Gib... It's not very user friendly. I've checked the instructions on the website and cannot find anything beyond how to change the filesizeBase from 1000 to 1024.
I had a similar need because I had to show the size always in KB. I found a function in dropzone.js called filesize, and I overrode it with the following in my own code:
Dropzone.prototype.filesize = function(size) {
    var selectedSize = Math.round(size / 1024);
    return "<strong>" + selectedSize + "</strong> KB";
};
I think you have to override the same function and adapt it to your needs.
I hope this is still useful for you.
This is more similar to the existing filesize function included in Dropzone (just more verbose).
Dropzone.prototype.filesize = function (bytes) {
    let selectedSize = 0;
    let selectedUnit = 'b';
    let units = ['kb', 'mb', 'gb', 'tb'];
    if (Math.abs(bytes) < this.options.filesizeBase) {
        selectedSize = bytes;
    } else {
        var u = -1;
        do {
            bytes /= this.options.filesizeBase;
            ++u;
        } while (Math.abs(bytes) >= this.options.filesizeBase && u < units.length - 1);
        selectedSize = bytes.toFixed(1);
        selectedUnit = units[u];
    }
    return `<strong>${selectedSize}</strong> ${this.options.dictFileSizeUnits[selectedUnit]}`;
}
Example:
339700 bytes -> 339.7 KB (instead of 0.3 MB which is what Dropzone returns by default)
Source: https://stackoverflow.com/a/14919494/1922696
This piece of code works for me:
Dropzone.prototype.filesize = function (bytes) {
    let selectedSize = 0;
    let units = ['B', 'KB', 'MB', 'GB', 'TB'];
    var size = bytes;
    while (size > 1000) {
        selectedSize = selectedSize + 1;
        size = size / 1000;
    }
    return "<strong>" + Math.trunc(size * 100) / 100 + "</strong> " + units[selectedSize];
}
I'm dividing by 1000 because otherwise I get 1010 KB instead of 1.01 MB.

Performance issue with Set union in Scala

I just encountered strange behavior in the Scala Set API. Here is my function, stripped of everything related to the rest of the project:
def grade(...): Double = {
    val setA: HashSet = // get from somewhere else
    val setB: HashSet = // get from somewhere else
    if ((setA size) == 0 || (setB size) == 0) return 0
    else return (setA & setB size) / (setA | setB size)
}
This function is called many times inside a loop, and the whole loop executes in around 4.5 s. But when I replace the size of the union with the sum of the sizes (a gross approximation), in order to test the influence of the union operation, the execution time drops to around 0.35 s...
def grade(...): Double = {
    val setA: HashSet = // get from somewhere else
    val setB: HashSet = // get from somewhere else
    if ((setA size) == 0 || (setB size) == 0) return 0
    else return (setA & setB size) / (setA size + setB size)
}
Well, you can't compare a simple operation like the sum of two Ints with the union of two Sets. I would expect the performance of these operations to be very different, especially if your Sets contain a lot of elements.
You don't need the union, because you already compute the intersection: |A ∪ B| = |A| + |B| - |A ∩ B|. Try the following:
def grade: Double = {
    val setA: HashSet = // get from somewhere else
    val setB: HashSet = // get from somewhere else
    if ((setA size) == 0 || (setB size) == 0) return 0
    else {
        val inter = setA & setB size
        return inter / ((setA size) + (setB size) - inter)
    }
}
However, I find your measurement a little odd, because I expected both operations (union and intersection) to take around the same amount of time, O(n). Removing the union should then only improve the performance by about half (~2 s)...
Are you using parallel collections, by any chance? Union is performed in a sequential manner, so any parallel collection is first converted into a sequential collection. That might account for the performance.
Other than that, union is about O(n) while the sum is O(1), which makes a lot of difference.
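The inclusion-exclusion identity behind the rewrite above can be sketched language-neutrally; here is a minimal JavaScript version (not the Scala code from the question):

```javascript
// |A ∪ B| = |A| + |B| - |A ∩ B|, so the Jaccard-style grade needs
// one intersection and no union at all.
function grade(setA, setB) {
  if (setA.size === 0 || setB.size === 0) return 0;
  let inter = 0;
  for (const x of setA) if (setB.has(x)) inter++;
  return inter / (setA.size + setB.size - inter);
}

console.log(grade(new Set([1, 2, 3]), new Set([2, 3, 4]))); // 0.5
```

Incidentally, in the Scala snippets the sizes are Ints, so one operand of the division would need a .toDouble for the grade to come out fractional rather than truncated.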
