I'm a real beginner at Processing and programming, and I can't figure out how to show my images at random.
I'm loading the images in setup() into PImage variables named img0, img1, and img2, and then:
image("img" + random(3), 0, 0);
But it doesn't work, because image() expects a PImage argument, and a String plus a number isn't one.
And I know for sure there must be some better way than:
int randomNumber = random(3);
if (randomNumber == 0) {
  image(img0, 0, 0);
}
if (randomNumber == 1) {
  image(img1, 0, 0);
}
if (randomNumber == 2) {
  image(img2, 0, 0);
}
But I haven't found it.
Any thoughts?
Thanks!
In addition to Kevin's great answer, you can also use an array to store the loaded PImages.
Here's a rough example (you'll need to adjust the image paths, of course):
// total number of images
int numImages = 3;
// an array of images
PImage[] images = new PImage[numImages];
int randomNumber;

void setup(){
  //TODO correct sketch size
  size(300,300);
  // initialize images array (loading each one)
  for(int i = 0; i < numImages; i++){
    // TODO correct path to images
    images[i] = loadImage("img" + i + ".png");
  }
}

void draw(){
  background(0);
  // render the image at the most recently selected random index
  image(images[randomNumber], 0, 0);
  // instructions
  text("click to randomize", 10, 15);
}

// change the random number on click (randomizing in draw() would look chaotic/be hard to debug)
void mousePressed(){
  // pick a random number and cast the floating-point return value to an integer to use as the images array index
  randomNumber = (int)random(numImages);
}
You could use a HashMap to create a map from String keys to PImage values. Something like this:
HashMap<String, PImage> imageMap = new HashMap<String, PImage>();
imageMap.put("image1", image1);
imageMap.put("image2", image2);
Then to get a PImage from a String key, you'd call the get() function:
PImage image1 = imageMap.get("image1");
You can find more info in the reference.
By the way, this line won't compile:
int randomNumber = random(3);
The random() function returns a float value. You can't store a float value in an int variable. You have to convert it using the int() function:
int randomNumber = int(random(3));
If you still can't get it working, please post a MCVE that demonstrates the problem. Good luck.
Related
I have two vectors. One, (1), contains all the objects I'm using and works with them; the other, (2), is a vector that, after a function runs, receives an object from (1). Once this happens I want to remove the object from (1). The problem is that the function I made to remove it takes in an Object * obj. The actual object still has pointers pointing to it, and I don't want to fully wipe the object, just remove it from (1). Any help would be appreciated.
void removeObject(const Object * obj) {
    int index;
    for (int i = 0; i < size(); i++) {
        if (obj == &objectVector[i]) {
            index = i;
        }
    }
    objectVector.erase(objectVector.begin() + index);
}
I'm fairly sure this is the problem, because when I don't comment out the line that calls this function I get a SIGTRAP error.
For our student project I've been tinkering with an OBJ-loader in order to import models into our application.
It loads without issues, and drawing it kind of works without an index buffer (the model is obviously not represented correctly, because I'm not using one).
However, drawing with DeviceContext->DrawIndexed shows nothing on screen.
Without indexed drawing
With indexed drawing
Buffer creation method:
void ObjectLoader::CreateBuffers()
{
    //Index buffer
    D3D11_BUFFER_DESC iBufferDesc;
    memset(&iBufferDesc, 0, sizeof(iBufferDesc));
    iBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
    iBufferDesc.Usage = D3D11_USAGE_DEFAULT;
    iBufferDesc.ByteWidth = sizeof(DWORD);

    D3D11_SUBRESOURCE_DATA indexData;
    indexData.pSysMem = &ind;
    pDevice->CreateBuffer(&iBufferDesc, &indexData, &pIndexBuffer);

    //Vertex buffer
    D3D11_BUFFER_DESC bufferDesc;
    memset(&bufferDesc, 0, sizeof(bufferDesc));
    bufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
    bufferDesc.Usage = D3D11_USAGE_DEFAULT;
    bufferDesc.ByteWidth = sizeof(TriangleVertex) * this->NumberOfVerts();

    D3D11_SUBRESOURCE_DATA data;
    data.pSysMem = tva;
    pDevice->CreateBuffer(&bufferDesc, &data, &pVertexBuffer);
}
Draw method:
void ObjectLoader::Draw()
{
    if (pDevice == nullptr)
        return;

    UINT32 vertexSize = sizeof(float) * 5;
    UINT32 offset = 0;
    pDeviceContext->IASetVertexBuffers(0, 1, &pVertexBuffer, &vertexSize, &offset);
    pDeviceContext->IASetIndexBuffer(this->pIndexBuffer, DXGI_FORMAT_R32_UINT, 0);
    pDeviceContext->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    pDeviceContext->DrawIndexed(vIndex.size(), 0, 0);
    //pDeviceContext->Draw(this->NumberOfVerts(), 0);
}
What the hell am I missing? I've looked at several books on indexed drawing and it seems pretty straightforward. At first I thought the winding order was reversed, but I checked that by simply reversing the index array; same result.
If you need more code let me know, but I feel this should suffice.
Thanks in advance!
Edit: Off topic: I never figured out how to get my code properly formatted here, so I apologize for that; feel free to share how that's done.
This is a homework question that I can't get my head around at all.
It's a very simple encryption algorithm. You start with a string of characters as your alphabet:
ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!, .
Then you ask the user to enter their own string that will act as a map, such as:
0987654321! .,POIUYTREWQASDFGHJKLMNBVCXZ
The program then uses this to build a map and lets you enter text that gets encrypted.
For example, MY NAME IS JOSEPH would be encrypted as .AX,0.6X2YX1PY6O3.
This is all very easy. However, he said that it's a one-to-one mapping, and thus implied that if I enter .AX,0.6X2YX1PY6O3 back into the program I will get MY NAME IS JOSEPH out again.
That doesn't happen: .AX,0.6X2YX1PY6O3 becomes Z0QCDZQGAQFOALDH.
The mapping only decrypts when you run it backwards, but the question implies that the program just loops and runs the one algorithm every time.
Even if someone could just tell me that it is possible, I would be happy. I have pages and pages of paper filled with attempted workings, but I came up with nothing; the only solution I found is to run the algorithm backwards, and I don't think we are allowed to do that.
Any ideas?
Edit:
Unfortunately I can't get this to work (using the orbit-computation idea). What am I doing wrong?
//import scanner class
import java.util.Scanner;

public class Encryption {

    static Scanner inputString = new Scanner(System.in);
    //define alphabet
    private static String alpha = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!, .";
    private static String map;
    private static int[] encryptionMap = new int[40]; //mapping int array
    private static boolean exit = false;
    private static boolean valid = true;

    public static void main(String[] args) {
        String encrypt, userInput;
        userInput = new String();
        System.out.println("This program takes a large reordered string");
        System.out.println("and uses it to encrypt your data");
        System.out.println("Please enter a mapping string of 40 length and the same characters as below but in different order:");
        System.out.println(alpha);
        //getMap(); //don't get user input for map, for testing!
        map = ".ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!, "; //forced input for testing only!
        do {
            if (valid == true) {
                System.out.println("Enter Q to quit, otherwise enter a string:");
                userInput = getInput();
                if (userInput.charAt(0) != 'Q') { //&& userInput.length()<2){
                    encrypt = encrypt(userInput);
                    for (int x = 0; x < 39; x++) { //here I am trying to get the orbit computation going
                        encrypt = encrypt(encrypt);
                    }
                    System.out.println("You entered: " + userInput);
                    System.out.println("Encrypted Version: " + encrypt);
                } else if (userInput.charAt(0) == 'Q') { //&& userInput.length()<2){
                    exit = true;
                }
            } else if (valid == false) {
                System.out.println("Error, your string for mapping is incorrect");
                valid = true; //reset condition to repeat
            }
        } while (exit == false);
        System.out.println("Good bye");
    }

    static String encrypt(String userInput) {
        //use mapping array to encrypt data
        String encrypt;
        StringBuffer tmp = new StringBuffer();
        char current;
        int alphaPosition;
        int temp;
        //run through the user string
        for (int x = 0; x < userInput.length(); x++) {
            //get character
            current = userInput.charAt(x);
            //get location of current character in alphabet
            alphaPosition = alpha.indexOf(current);
            //encryptionMap.charAt(alphaPosition)
            tmp.append(map.charAt(alphaPosition));
        }
        encrypt = tmp.toString();
        return (encrypt);
    }

    static void getMap() {
        //get a mapping string from the user and validate it
        map = getInput();
        //validate code
        if (map.length() != 40) {
            valid = false;
        } else {
            for (int x = 0; x < 40; x++) {
                if (map.indexOf(alpha.charAt(x)) == -1) {
                    valid = false;
                }
            }
        }
        if (valid == true) {
            for (int x = 0; x < 40; x++) {
                int a = (int) (alpha.charAt(x));
                int y = (int) (map.charAt(x));
                //create encryption map
                encryptionMap[x] = (a - y);
            }
        }
    }

    static String getInput() {
        //get input (this repeats)
        String input = inputString.nextLine();
        input = input.toUpperCase();
        if ("QUIT".equals(input) || "END".equals(input) || "NO".equals(input) || "N".equals(input)) {
            StringBuffer tmp = new StringBuffer();
            tmp.append('Q');
            input = tmp.toString();
        }
        return (input);
    }
}
You will (probably) not get your original string back if you apply that substitution again. I say probably because you can construct mappings for which it works (they all do things like: if A->B then B->A). But most mappings won't do that. You would have to construct the reverse map to decrypt.
However, there is a trick you can do if you're only allowed to go forward. Keep applying the mapping and you'll eventually return to your original input. The number of times you'll have to do that depends on your input. To figure out how many times, compute the orbit of each character, and take the least common multiple of all the orbit sizes. For your input the orbits are size 1 (T->T, W->W), 2 (B->9->B H->3->H U->R->U P->O->P), 4 (C->8->N->,->C), 9 (A->...->Y->A), and 17 (E->...->V->E). The LCM of all those is 612, so 611 forward mappings applied to the ciphertext will return you to the plaintext.
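To make the orbit computation concrete, here is a minimal, self-contained sketch; the class name, method names, and output wording are mine, and the two strings are the alphabet and map from the question. It follows each character around its cycle under the substitution and accumulates the least common multiple of the cycle lengths:

public class OrbitLength {

    public static void main(String[] args) {
        String alpha = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!, .";
        String map   = "0987654321! .,POIUYTREWQASDFGHJKLMNBVCXZ";

        long lcm = 1;
        for (int i = 0; i < alpha.length(); i++) {
            // follow this character through the substitution until it cycles back
            char start = alpha.charAt(i);
            char c = start;
            int orbit = 0;
            do {
                c = map.charAt(alpha.indexOf(c));
                orbit++;
            } while (c != start);
            lcm = lcm / gcd(lcm, orbit) * orbit;
        }
        System.out.println("Applying the map " + lcm + " times returns any input unchanged,");
        System.out.println("so " + (lcm - 1) + " forward applications decrypt the ciphertext.");
    }

    static long gcd(long a, long b) {
        return b == 0 ? a : gcd(b, a % b);
    }
}

With the strings above this should report 612 and 611, matching the orbit sizes listed.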
Well, you can only get your string back this way if you do the reverse mapping. One-to-one mapping means that a single letter of your default alphabet maps to only one letter of your new alphabet and vice versa, i.e. you can't map ABCD to ABBA. It doesn't imply that you can get your initial string back by doing a second round of encryption.
The behavior you have described can be achieved if you use a finite alphabet and a displacement to encode your string. You can choose the displacement in such a way that after a number of rounds of encryption, totalDisplacement mod alphabetSize == 0. Then you will get your string back going only forward.
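A minimal sketch of that displacement idea (this is an illustration, not the assignment's substitution scheme; the shift of 8 is an arbitrary choice satisfying 5 * 8 mod 40 == 0, so five forward rounds reproduce the plaintext):

public class ShiftCipher {

    static final String ALPHA = "ABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!, .";
    static final int SHIFT = 8; // 5 * 8 mod 40 == 0, so five rounds return the original

    static String encrypt(String in) {
        StringBuilder out = new StringBuilder();
        for (char c : in.toCharArray()) {
            // shift each character forward by SHIFT positions, wrapping around
            out.append(ALPHA.charAt((ALPHA.indexOf(c) + SHIFT) % ALPHA.length()));
        }
        return out.toString();
    }

    public static void main(String[] args) {
        String text = "MY NAME IS JOSEPH";
        for (int round = 1; round <= 5; round++) {
            text = encrypt(text);
            System.out.println("round " + round + ": " + text);
        }
        // the fifth round prints the original plaintext again
    }
}

The same forward-only loop therefore both encrypts and, after enough rounds, decrypts.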
I want to convert a UINT16 monochrome image to a 8 bits image, in C++.
I have that image in a
char *buffer;
I'd like to give the new converted buffer to a QImage (Qt).
I'm trying with freeImagePlus
fipImage fimage;
if (fimage.loadfromMemory(...) == false)
//error
loadFromMemory needs a fipMemoryIO address:
loadFromMemory(fipMemoryIO &memIO, int flag = 0)
So I do
fipImage fimage;
BYTE *buf = (BYTE*)malloc(gimage.GetBufferLength() * sizeof(BYTE));
// 'buf' is empty, I have to fill it with 'buffer' content
// how can I do it?
fipMemoryIO memIO(buf, gimage.GetBufferLength());
fimage.loadFromMemory(memIO);
if (fimage.convertTo8Bits() == true)
    cout << "Good";
Then I would do something like
fimage.saveToMemory(...
or
fimage.saveToHandle(...
I don't understand what a FREE_IMAGE_FORMAT is, which is the first argument to either of those two functions. I can't find information about those types in the FreeImage documentation.
Then I'd finish with
imageQt = new QImage(destiny, dimX, dimY, QImage::Format_Indexed8);
How can I fill 'buf' with the content of the initial buffer?
And get the data from the fipImage to a uchar* data for a QImage?
Thanks.
The conversion is simple to do in plain old C++, with no need for external libraries unless they are significantly faster and you care about such a speedup. Below is how I'd do the conversion, at least as a first cut. The data is converted in place inside the input buffer, since the output is smaller than the input.
QImage from16Bit(void * buffer, int width, int height) {
    int size = width * height * 2; // length of data in buffer, in bytes
    if (!size) return QImage();
    quint8 * output = reinterpret_cast<quint8*>(buffer);
    const quint16 * input = reinterpret_cast<const quint16*>(buffer);
    do {
        *output++ = *input++ >> 8; // keep the most significant byte of each pixel
    } while (size -= 2);
    // pass width as bytesPerLine, since the 8-bit rows are now tightly packed
    return QImage(reinterpret_cast<uchar*>(buffer), width, height, width,
                  QImage::Format_Indexed8);
}
I have an application where I am reading and writing small blocks of data (a few hundred bytes) hundreds of millions of times. I'd like to generate a compression dictionary based on an example data file and use that dictionary forever as I read and write the small blocks. I'm leaning toward the LZW compression algorithm. The Wikipedia page (http://en.wikipedia.org/wiki/Lempel-Ziv-Welch) lists pseudocode for compression and decompression. It looks fairly straightforward to modify it such that the dictionary creation is a separate block of code. So I have two questions:
Am I on the right track or is there a better way?
Why does the LZW algorithm add to the dictionary during the decompression step? Can I omit that, or would I lose efficiency in my dictionary?
Thanks.
Update: Now I'm thinking the ideal would be to find a library that lets me store the dictionary separately from the compressed data. Does anything like that exist?
Update: I ended up taking the code at http://www.enusbaum.com/blog/2009/05/22/example-huffman-compression-routine-in-c and adapting it. I am Chris in the comments on that page. I emailed my mods back to the blog author, but I haven't heard back yet. The compression rates I'm seeing with that code are not at all impressive. Maybe that is due to the 8-bit tree size.
Update: I converted it to 16 bits and the compression is better. It's also much faster than the original code.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

namespace Book.Core
{
    public class Huffman16
    {
        private readonly double log2 = Math.Log(2);
        private List<Node> HuffmanTree = new List<Node>();

        internal class Node
        {
            public long Frequency { get; set; }
            public byte Uncoded0 { get; set; }
            public byte Uncoded1 { get; set; }
            public uint Coded { get; set; }
            public int CodeLength { get; set; }
            public Node Left { get; set; }
            public Node Right { get; set; }

            public bool IsLeaf
            {
                get { return Left == null; }
            }

            public override string ToString()
            {
                var coded = "00000000" + Convert.ToString(Coded, 2);
                return string.Format("Uncoded={0}, Coded={1}, Frequency={2}", (Uncoded1 << 8) | Uncoded0, coded.Substring(coded.Length - CodeLength), Frequency);
            }
        }

        public Huffman16(long[] frequencies)
        {
            if (frequencies.Length != ushort.MaxValue + 1)
            {
                throw new ArgumentException("frequencies.Length must equal " + (ushort.MaxValue + 1));
            }
            BuildTree(frequencies);
            EncodeTree(HuffmanTree[HuffmanTree.Count - 1], 0, 0);
        }

        public static long[] GetFrequencies(byte[] sampleData, bool safe)
        {
            if (sampleData.Length % 2 != 0)
            {
                throw new ArgumentException("sampleData.Length must be a multiple of 2.");
            }
            var histogram = new long[ushort.MaxValue + 1];
            if (safe)
            {
                for (int i = 0; i <= ushort.MaxValue; i++)
                {
                    histogram[i] = 1;
                }
            }
            for (int i = 0; i < sampleData.Length; i += 2)
            {
                histogram[(sampleData[i] << 8) | sampleData[i + 1]] += 1000;
            }
            return histogram;
        }

        public byte[] Encode(byte[] plainData)
        {
            if (plainData.Length % 2 != 0)
            {
                throw new ArgumentException("plainData.Length must be a multiple of 2.");
            }
            Int64 iBuffer = 0;
            int iBufferCount = 0;
            using (MemoryStream msEncodedOutput = new MemoryStream())
            {
                //Write Final Output Size 1st
                msEncodedOutput.Write(BitConverter.GetBytes(plainData.Length), 0, 4);
                //Begin Writing Encoded Data Stream
                iBuffer = 0;
                iBufferCount = 0;
                for (int i = 0; i < plainData.Length; i += 2)
                {
                    Node FoundLeaf = HuffmanTree[(plainData[i] << 8) | plainData[i + 1]];
                    //How many bits are we adding?
                    iBufferCount += FoundLeaf.CodeLength;
                    //Shift the buffer
                    iBuffer = (iBuffer << FoundLeaf.CodeLength) | FoundLeaf.Coded;
                    //Are there at least 8 bits in the buffer?
                    while (iBufferCount > 7)
                    {
                        //Write to output
                        int iBufferOutput = (int)(iBuffer >> (iBufferCount - 8));
                        msEncodedOutput.WriteByte((byte)iBufferOutput);
                        iBufferCount = iBufferCount - 8;
                        iBufferOutput <<= iBufferCount;
                        iBuffer ^= iBufferOutput;
                    }
                }
                //Write remaining bits in buffer
                if (iBufferCount > 0)
                {
                    iBuffer = iBuffer << (8 - iBufferCount);
                    msEncodedOutput.WriteByte((byte)iBuffer);
                }
                return msEncodedOutput.ToArray();
            }
        }

        public byte[] Decode(byte[] bInput)
        {
            long iInputBuffer = 0;
            int iBytesWritten = 0;
            //Establish Output Buffer to write unencoded data to
            byte[] bDecodedOutput = new byte[BitConverter.ToInt32(bInput, 0)];
            var current = HuffmanTree[HuffmanTree.Count - 1];
            //Begin Looping through Input and Decoding
            iInputBuffer = 0;
            for (int i = 4; i < bInput.Length; i++)
            {
                iInputBuffer = bInput[i];
                for (int bit = 0; bit < 8; bit++)
                {
                    if ((iInputBuffer & 128) == 0)
                    {
                        current = current.Left;
                    }
                    else
                    {
                        current = current.Right;
                    }
                    if (current.IsLeaf)
                    {
                        bDecodedOutput[iBytesWritten++] = current.Uncoded1;
                        bDecodedOutput[iBytesWritten++] = current.Uncoded0;
                        if (iBytesWritten == bDecodedOutput.Length)
                        {
                            return bDecodedOutput;
                        }
                        current = HuffmanTree[HuffmanTree.Count - 1];
                    }
                    iInputBuffer <<= 1;
                }
            }
            throw new Exception();
        }

        private static void EncodeTree(Node node, int depth, uint value)
        {
            if (node != null)
            {
                if (node.IsLeaf)
                {
                    node.CodeLength = depth;
                    node.Coded = value;
                }
                else
                {
                    depth++;
                    value <<= 1;
                    EncodeTree(node.Left, depth, value);
                    EncodeTree(node.Right, depth, value | 1);
                }
            }
        }

        private void BuildTree(long[] frequencies)
        {
            var tiny = 0.1 / ushort.MaxValue;
            var fraction = 0.0;
            SortedDictionary<double, Node> trees = new SortedDictionary<double, Node>();
            for (int i = 0; i <= ushort.MaxValue; i++)
            {
                var leaf = new Node()
                {
                    Uncoded1 = (byte)(i >> 8),
                    Uncoded0 = (byte)(i & 255),
                    Frequency = frequencies[i]
                };
                HuffmanTree.Add(leaf);
                if (leaf.Frequency > 0)
                {
                    trees.Add(leaf.Frequency + (fraction += tiny), leaf);
                }
            }
            while (trees.Count > 1)
            {
                var e = trees.GetEnumerator();
                e.MoveNext();
                var first = e.Current;
                e.MoveNext();
                var second = e.Current;
                //Join smallest two nodes
                var NewParent = new Node();
                NewParent.Frequency = first.Value.Frequency + second.Value.Frequency;
                NewParent.Left = first.Value;
                NewParent.Right = second.Value;
                HuffmanTree.Add(NewParent);
                //Remove the two that just got joined into one
                trees.Remove(first.Key);
                trees.Remove(second.Key);
                trees.Add(NewParent.Frequency + (fraction += tiny), NewParent);
            }
        }
    }
}
Usage examples:
To create the dictionary from sample data:
var freqs = Huffman16.GetFrequencies(File.ReadAllBytes(@"D:\nodes"), true);
To initialize an encoder with a given dictionary:
var huff = new Huffman16(freqs);
And to do some compression:
var encoded = huff.Encode(raw);
And decompression:
var raw = huff.Decode(encoded);
The hard part in my mind is how you build your static dictionary. You don't want to use the LZW dictionary built from your sample data. LZW wastes a bunch of time learning, since it can't build the dictionary faster than the decompressor can (a token will only be used the second time it's seen by the compressor, so the decompressor can add it to its dictionary the first time it's seen). The flip side of this is that it adds things to the dictionary that may never get used, just in case the string shows up again (e.g., to have a token for 'stackoverflow' you'll also have entries for 'ac', 'ko', 've', 'rf', etc.).
However, looking at the raw token stream from an LZ77 algorithm could work well. You'll only see tokens for strings seen at least twice. You can then build a list of the most common tokens/strings to include in your dictionary.
Once you have a static dictionary, using LZW sans the dictionary update seems like an easy implementation, but to get the best compression I'd consider a static Huffman table instead of the traditional 12-bit fixed-size token (as George Phillips suggested). An LZW dictionary will burn tokens for all the substrings you may never actually encode (e.g., if you can encode 'stackoverflow', there will be tokens for 'st', 'sta', 'stac', 'stack', 'stacko', etc.).
At this point it really isn't LZW - what makes LZW clever is how the decompressor can build the same dictionary the compressor used while seeing only the compressed data stream, something you won't be using. But all LZW implementations have a state where the dictionary is full and no longer updated; that is how you'd use it with your static dictionary.
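To illustrate the "build a list of the most common strings" step above, here is a rough brute-force sketch (all names and the sample string are mine, and the (occurrences - 1) * length score is only a crude proxy for bytes saved). For a sample of a few hundred bytes, brute force over all short substrings is perfectly affordable:

import java.util.HashMap;
import java.util.Map;

public class DictionaryCandidates {

    public static void main(String[] args) {
        // made-up sample standing in for the real example data file
        String sample = "user=alice action=read user=bob action=read user=alice action=write";

        // count every substring of length 4..12
        Map<String, Integer> counts = new HashMap<>();
        for (int len = 4; len <= 12; len++) {
            for (int i = 0; i + len <= sample.length(); i++) {
                counts.merge(sample.substring(i, i + len), 1, Integer::sum);
            }
        }

        // keep only strings seen at least twice, rank by estimated savings
        counts.entrySet().stream()
              .filter(e -> e.getValue() >= 2)
              .sorted((a, b) -> Integer.compare(
                      (b.getValue() - 1) * b.getKey().length(),
                      (a.getValue() - 1) * a.getKey().length()))
              .limit(10)
              .forEach(e -> System.out.println(e.getValue() + "x \"" + e.getKey() + "\""));
    }
}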
LZW adds to the dictionary during decompression to ensure it has the same dictionary state as the compressor. Otherwise the decoding would not function properly.
However, if you were in a state where the dictionary was fixed, then yes, you would not need to add new codes.
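To see where that update happens, here is a minimal byte-oriented LZW decoder written from the textbook algorithm (not from any particular library; all names are mine), with the dictionary-update line called out. That is the line you could drop, on both sides, once the dictionary is fixed in advance:

import java.io.ByteArrayOutputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class LzwDecode {

    static byte[] decode(int[] codes) {
        // initial dictionary: one entry per possible byte value
        List<byte[]> dict = new ArrayList<>();
        for (int i = 0; i < 256; i++) dict.add(new byte[]{ (byte) i });

        ByteArrayOutputStream out = new ByteArrayOutputStream();
        byte[] prev = dict.get(codes[0]);
        out.write(prev, 0, prev.length);
        for (int i = 1; i < codes.length; i++) {
            byte[] entry = codes[i] < dict.size()
                    ? dict.get(codes[i])
                    : concat(prev, prev[0]); // code not yet in dictionary: prev + first byte of prev
            out.write(entry, 0, entry.length);
            // the update step: mirror the compressor's dictionary, one entry behind it;
            // with a fixed pre-shared dictionary this line (and its twin in the
            // compressor) would be removed
            dict.add(concat(prev, entry[0]));
            prev = entry;
        }
        return out.toByteArray();
    }

    static byte[] concat(byte[] a, byte b) {
        byte[] r = Arrays.copyOf(a, a.length + 1);
        r[a.length] = b;
        return r;
    }

    public static void main(String[] args) {
        int[] codes = { 65, 66, 256, 258, 257 }; // LZW codes for "ABABABABA"
        System.out.println(new String(decode(codes)));
    }
}

The hard-coded codes decode to ABABABABA and exercise the "code not yet in the dictionary" corner case.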
Your approach will work reasonably well, and it's easy to use existing tools to prototype and measure the results. That is, compress the example file, and then the example and test data together; the size of the latter less the former will be the expected compressed size of a block.
LZW is a clever way to build up a dictionary on the fly and gives decent results. But a more thorough analysis of your typical data blocks is likely to generate a more efficient dictionary.
There's also room for improvement in how LZW represents compressed data. For instance, each dictionary reference could be Huffman encoded to a length closer to optimal, based on the expected frequency of its use. To be truly optimal, the codes should be arithmetic encoded.
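That prototyping trick is easy to try with a stock DEFLATE implementation; for instance, this sketch with java.util.zip (the sample data is made up) compresses the example data alone, then with one extra block appended, and reports the difference as the expected per-block cost:

import java.util.zip.Deflater;

public class BlockCostEstimate {

    // deflate the data and return the compressed size in bytes
    static int deflatedSize(byte[] data) {
        Deflater def = new Deflater(Deflater.BEST_COMPRESSION);
        def.setInput(data);
        def.finish();
        byte[] buf = new byte[4096];
        int total = 0;
        while (!def.finished()) {
            total += def.deflate(buf);
        }
        def.end();
        return total;
    }

    public static void main(String[] args) {
        // made-up stand-ins for the example file and one small block
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 200; i++) {
            sb.append("id=").append(i).append(" status=OK flags=0x00 payload=none\n");
        }
        byte[] example = sb.toString().getBytes();
        byte[] block = "id=201 status=OK flags=0x00 payload=none\n".getBytes();

        byte[] both = new byte[example.length + block.length];
        System.arraycopy(example, 0, both, 0, example.length);
        System.arraycopy(block, 0, both, example.length, block.length);

        System.out.println("example alone:       " + deflatedSize(example) + " bytes");
        System.out.println("example + one block: " + deflatedSize(both) + " bytes");
        System.out.println("expected block cost: "
                + (deflatedSize(both) - deflatedSize(example)) + " bytes");
    }
}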
I would look at your data to see if there's an obvious reason it's so easy to compress. You might be able to do something much simpler than LZ78. I've done both LZ77 (lookback) and LZ78 (dictionary).
Try running LZ77 on your data. There's no dictionary with LZ77, so you could use a library without alteration. Deflate is an implementation of LZ77.
Your idea of using a common dictionary is a good one, but it's hard to know whether the files are similar to each other or just self-similar without doing some tests.
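Deflate also supports the "dictionary stored separately from the compressed data" setup the question's update asks about: zlib-style preset dictionaries. In java.util.zip, for example, you can set the same dictionary on both the Deflater and the Inflater, so only the compressed block itself is ever stored. A minimal sketch (the dictionary and block contents are invented):

import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class PresetDictionaryDemo {

    public static void main(String[] args) throws Exception {
        // substrings assumed (for illustration) to be common across blocks
        byte[] dict = "field1=,field2=,field3=,status=OK".getBytes();
        byte[] block = "field1=42,field2=7,field3=9,status=OK".getBytes();

        // compress one block against the preset dictionary
        Deflater def = new Deflater();
        def.setDictionary(dict);
        def.setInput(block);
        def.finish();
        byte[] compressed = new byte[block.length + 64];
        int clen = def.deflate(compressed);
        def.end();

        // decompress: the inflater signals that it needs the dictionary first
        Inflater inf = new Inflater();
        inf.setInput(compressed, 0, clen);
        byte[] restored = new byte[block.length];
        int rlen = inf.inflate(restored);
        if (rlen == 0 && inf.needsDictionary()) {
            inf.setDictionary(dict);
            rlen = inf.inflate(restored);
        }
        inf.end();
        System.out.println(clen + " compressed bytes -> " + new String(restored, 0, rlen));
    }
}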
The right track is to use a library -- almost every modern language has a compression library: C#, Python, Perl, Java, VB.NET, whatever you use.
LZW saves some space by making the dictionary depend on previous inputs. It has an initial dictionary, and as you decompress something you add the strings you see to the dictionary -- so the dictionary is growing. (I am omitting some details here, but this is the general idea.)
You can omit this step by supplying the whole (complete) dictionary as the initial one. But this would cost some space.
I find this approach quite interesting for repeated log entries and something I would like to explore.
Can you share the compression statistics for this approach for your use case, so I can compare it with other alternatives?
Have you considered having the common dictionary grow over time, or is that not a valid option?