Where can I find a web version of processing compatible with 3.0.2?
For example, http://funprogramming.org/77-A-3D-rotating-cloud-of-points.html is not compatible: the line
x[i] = float(random(-150, 150));
on the web doesn't give the parseFloat() error that processing.exe does.
It's not available yet.
There was a link to an experimental version available here, but it hasn't been officially released.
You're probably better off just fixing whatever little inconsistencies you find for now.
I'm not sure the code you posted is an inconsistency between versions though. Notice that the original code on the page is this:
x[i] = int(random(-150, 150));
y[i] = int(random(-150, 150));
z[i] = int(random(-150, 150));
But you're doing this:
x[i] = float(random(-150, 150));
y[i] = float(random(-150, 150));
z[i] = float(random(-150, 150));
The value returned from the random() function is already a float value, so passing it into the float() function doesn't make any sense. That's why you're getting a compilation error.
The web version doesn't complain because it's not as strict with types. But it's not really an inconsistency with Processing 3; it's an inconsistency between Java and JavaScript. That inconsistency is going to exist in every version of Processing.
If you want those values to be floats, you can just drop the float() part, since they're already float values:
x[i] = random(-150, 150);
y[i] = random(-150, 150);
z[i] = random(-150, 150);
This is what I was trying to get at in the comments on this answer to your other question.
I have been working on a Doom/Wolfenstein-style raycaster for a while now. I implemented the "floor raycasting" to the best of my ability, roughly following a well-known raycaster tutorial. It almost works, but the floor tiles seem slightly bigger than they should be, and they don't "stick": they don't seem to align properly, and they slide slightly as the player moves/rotates. The effect seems to worsen as the FOV is increased. I cannot figure out where my floor casting is going wrong, so any help is appreciated.
Here is a (crappy) gif of the glitch happening
Here is the most relevant part of my code:
void render(PVector pos, float dir) {
  ArrayList<FloatList> dists = new ArrayList<FloatList>();
  for (int i = 0; i < numColumns; i++) {
    float curDir = atan((i - (numColumns/2.0)) / projectionDistance) + dir;
    // FloatList because cast() returns a few pieces of data
    FloatList curHit = cast(pos, curDir);
    // normalize distances with cos (fisheye correction)
    curHit.set(0, curHit.get(0) * cos(curDir - dir));
    dists.add(curHit);
  }
  screen.beginDraw();
  screen.background(50);
  screen.fill(0, 30, 100);
  screen.noStroke();
  screen.rect(0, 0, screen.width, screen.height/2);
  screen.loadPixels();
  PImage floor = textures.get(4);
  // DRAW FLOOR
  for (int y = screen.height/2 + 1; y < screen.height; y++) {
    float rowDistance = 0.5 * projectionDistance / ((float)y - (float)rY/2);
    // leftmost and rightmost (on screen) floor positions
    PVector left = PVector.fromAngle(dir - fov/2).mult(rowDistance).add(p.pos);
    PVector right = PVector.fromAngle(dir + fov/2).mult(rowDistance).add(p.pos);
    // current position on the floor
    PVector curPos = left.copy();
    PVector stepVec = right.sub(left).div(screen.width);
    // darken rows that are farther away
    float b = constrain(map(rowDistance, 0, maxDist, 1, 0), 0, 1);
    for (int x = 0; x < screen.width; x++) {
      // sample the texture at the fractional part of the floor position
      color sample = floor.get(floor((curPos.x - floor(curPos.x)) * floor.width),
                               floor((curPos.y - floor(curPos.y)) * floor.height));
      screen.pixels[x + y*screen.width] = color(red(sample) * b, green(sample) * b, blue(sample) * b);
      curPos.add(stepVec);
    }
  }
  screen.updatePixels();
}
If anyone wants to look at the full code or has any questions, ask away.
Ok, I seem to have found a "solution". I will be the first to admit that I do not understand why it works, but it does work. As per my comment above, I noticed that my rowDistance variable was off, which caused all of the problems. In desperation, I changed the FOV and then hardcoded the rowDistance until things looked right. I plotted the ratio between the projectionDistance and the numerator of the rowDistance. I noticed that it neatly conformed to a scaled cos function. So after some simplification, here is the formula I came up with:
float rowDistance = (rX / (4*sin(fov/2))) / ((float)y - (float)rY/2);
where rX is the width of the screen in pixels.
If anyone has an intuitive explanation as to why this formula makes sense, PLEASE enlighten me. I hope this helps anyone else who may have this problem.
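One intuitive explanation that seems consistent with the code above (assuming numColumns equals the screen width rX, so that projectionDistance = (rX/2) / tan(fov/2)): in the floor loop, left and right are built by scaling the unit direction vectors of the two edge rays (at dir - fov/2 and dir + fov/2) by rowDistance, so rowDistance is a Euclidean distance along an edge ray, not a perpendicular distance from the camera. By similar triangles, the perpendicular distance of the floor strip visible on screen row y is
perpDistance = 0.5 * projectionDistance / (y - rY/2)
(the 0.5 being the camera height in wall units), and converting that perpendicular distance into a distance along the edge ray divides by cos(fov/2):
rowDistance = perpDistance / cos(fov/2)
            = 0.5 * (rX/2) / (tan(fov/2) * cos(fov/2)) / (y - rY/2)
            = (rX / (4*sin(fov/2))) / (y - rY/2)
which is exactly the hardcoded formula. It also matches the plotted ratio: projectionDistance divided by rX/(4*sin(fov/2)) simplifies to 2*cos(fov/2), a scaled cos function. The original rowDistance was too small by that factor of cos(fov/2), which would make the tiles look slightly too big, and since cos(fov/2) drifts further from 1 as the FOV grows, it would also explain why the glitch worsens at higher FOV.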
I want to convert dp to px in my C# code in Xamarin.Android, but all I could find was Java code for Android Studio that doesn't translate cleanly to Xamarin. I tried using equivalents (e.g., Resources instead of getResources()) and solved some small problems, but there are still a few for which I couldn't find any equivalent. Here are the original snippets, my ports, and the errors they produce in Xamarin:
First code
(found from Programatically set height on LayoutParams as density-independent pixels)
java code
int height = (int)TypedValue.applyDimension(TypedValue.COMPLEX_UNIT_DIP, < HEIGHT >, getResources().getDisplayMetrics());
C# code
int height = (int)TypedValue.ApplyDimension(TypedValue.COMPLEX_UNIT_DIP, < HEIGHT >, Resources.DisplayMetrics);
problems:
'TypedValue' does not contain a definition for 'COMPLEX_UNIT_DIP'
Invalid expression term < (The same error for >)
The name 'HEIGHT' does not exist in the current context
Second code
(found from Formula px to dp, dp to px android)
java code
DisplayMetrics displayMetrics = getContext().getResources().getDisplayMetrics();
int px = Math.round(dp * (displayMetrics.xdpi / DisplayMetrics.DENSITY_DEFAULT));
C# code
DisplayMetrics displayMetrics = Application.Context.Resources.DisplayMetrics;
int pixel = Math.Round(dp * (displayMetrics.Xdpi / DisplayMetrics.DensityDefault));
problem:
Operator '/' cannot be applied to operands of type 'float' and 'DisplayMetricsDensity'
Now I actually have two questions: which code is more proper, and what is the equivalent code in Xamarin.Android?
Thanks in advance.
Solution to "First code":
Xamarin tends to move constants into their own enums: COMPLEX_UNIT_DIP can be found on the ComplexUnitType enum. Also, you cannot have < HEIGHT > in your code; it is a placeholder, and you actually need to pass in the dp value you want converted to get the equivalent pixel value. In the example below I am getting the pixels in 100 dp.
var dp = 100;
int pixel = (int)TypedValue.ApplyDimension(ComplexUnitType.Dip, dp, Context.Resources.DisplayMetrics);
Solution to "Second code":
You need to explicitly cast 'DisplayMetrics.DensityDefault' to a float and the entire round to an int:
int pixel = (int)System.Math.Round(dp * (displayMetrics.Xdpi / (float)DisplayMetrics.DensityDefault));
I prefer the first approach as the second code is specifically for calculating along the "x dimension":
Per Android docs and Xamarin.Android docs, the Xdpi property is
"The exact physical pixels per inch of the screen in the X dimension."
These are from the project that I am currently working on:
public static float pxFromDp(Context context, float dp)
{
    return dp * context.Resources.DisplayMetrics.Density;
}

public static float dpFromPx(Context context, float px)
{
    return px / context.Resources.DisplayMetrics.Density;
}
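For example, called from an Activity (so this is a valid Context) and assuming the helpers live in the same class; 16f is just an illustrative dp value:
float px = pxFromDp(this, 16f);       // 16 dp -> physical pixels on this device
float backToDp = dpFromPx(this, px);  // and back again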
From the Android source code for TextView's setTextSize:
var convertToDp = TypedValue.ApplyDimension(ComplexUnitType.Dip, size, context.Resources.DisplayMetrics);
dp to px:
DisplayMetrics displayMetrics = Application.Context.Resources.DisplayMetrics;
double pixel = Math.Round((dp * displayMetrics.Density) + 0.5);
Answer taken from here: https://stackoverflow.com/a/8490361/6949388
Sorry, not enough rep to comment.
I'm coming from jQuery and JS and would like to get into Processing a little bit.
I like it because it has quite a good reference where I can get examples, etc.
But one thing I can't figure out is how to store objects in a variable.
Example jQuery:
var anydiv = $('#anydiv');
and I have my object stored.
In Processing it does not seem that simple because it has different types.
I can store a number pretty easy:
float anynumber = 10;
or a string, etc. But how can I, for example, store a new point in a var?
var anypoint = point(0, 0);
Thanks in advance.
Objects need to have classes. Processing comes with some predefined, but "point" isn't one of them. So you write a Point class,
class Point {
  float x, y;

  Point(float _x, float _y) {
    x = _x;
    y = _y;
  }

  public String toString() {
    return x + "/" + y;
  }
}
and then you can store it like any other typed object:
Point p = new Point(0,0);
float xcoordinate = p.x;
float ycoordinate = p.y;
p.x += 200;
p.y += 100;
println(p);
And no, a capital first letter is not required, but that's the convention. Stick with it (don't go defining classes named "point") unless you're never going to show people your code or ask for help. Make sure to get the syntax conventions right =)
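For what it's worth, Processing also ships with a built-in PVector class that can hold a coordinate pair, so for simple points you don't strictly need your own class:
PVector anypoint = new PVector(0, 0);
anypoint.x += 200;
anypoint.y += 100;
println(anypoint);  // prints something like [ 200.0, 100.0, 0.0 ]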
I am working on a tool which distorts images; the purpose of the distortion is to project images onto a sphere screen. The desired output is as in the following image.
The code I use is as follows: for every point (x, y) in the destination area, I calculate the corresponding pixel (sourceX, sourceY) in the original image to retrieve from.
But this approach is awkwardly slow: in my test, processing sunset.jpg (800*600) requires more than 1500 ms, and if I remove the mathematical/trigonometric calculations, calling cvGet2D and cvSet2D alone requires more than 1200 ms.
Is there a better way to do this? I am using Emgu CV (a .NET wrapper library for OpenCV), but examples in other languages are also OK.
private static void DistortSingleImage()
{
    System.Diagnostics.Stopwatch stopWatch = System.Diagnostics.Stopwatch.StartNew();
    using (Image<Bgr, Byte> origImage = new Image<Bgr, Byte>("sunset.jpg"))
    {
        int frameH = origImage.Height;
        using (Image<Bgr, Byte> distortImage = new Image<Bgr, Byte>(2 * frameH, 2 * frameH))
        {
            MCvScalar pixel;
            for (int x = 0; x < 2 * frameH; x++)
            {
                for (int y = 0; y < 2 * frameH; y++)
                {
                    if (x == frameH && y == frameH) continue;

                    int x1 = x - frameH;
                    int y1 = y - frameH;
                    if (x1 * x1 + y1 * y1 < frameH * frameH)
                    {
                        double radius = Math.Sqrt(x1 * x1 + y1 * y1);
                        double theta = Math.Acos(x1 / radius);
                        int sourceX = (int)(theta * (origImage.Width - 1) / Math.PI);
                        int sourceY = (int)radius;
                        pixel = CvInvoke.cvGet2D(origImage.Ptr, sourceY, sourceX);
                        CvInvoke.cvSet2D(distortImage, y, x, pixel);
                    }
                }
            }
            distortImage.Save("Distort.jpg");
        }
        Console.WriteLine(stopWatch.ElapsedMilliseconds);
    }
}
From my personal experience (I was doing some stereoscopic vision work), the best way to talk to OpenCV is through your own wrapper: put your method in C++ and call it from C#. That gives you one call into native, faster code, and because under the hood Emgu keeps OpenCV data, it's also possible to create an image with Emgu, process it natively, and enjoy the processed image in C# again.
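A minimal sketch of that approach, with a hypothetical library name and export (the C++ side, not shown, would implement the per-pixel loop):
using System;
using System.Runtime.InteropServices;
using Emgu.CV;
using Emgu.CV.Structure;

static class NativeDistort
{
    // Hypothetical native export: the C++ side would run the whole
    // per-pixel distortion loop over the two image buffers.
    [DllImport("MyDistortLib", CallingConvention = CallingConvention.Cdecl)]
    static extern void DistortImage(IntPtr src, IntPtr dst);

    public static void Run(Image<Bgr, byte> orig, Image<Bgr, byte> dist)
    {
        // Emgu images wrap native OpenCV structures, so their pointers can be
        // handed straight to C++: one managed/native transition in total,
        // instead of one per pixel.
        DistortImage(orig.Ptr, dist.Ptr);
    }
}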
The get/set methods look like GDI's GetPixel/SetPixel, and according to the documentation they are the "slow but safe way".
If you want to stay with Emgu only, the documentation says that if you want to iterate over pixels, you should access the .Data property:
The safe (slow) way
Suppose you are working on an Image<Bgr, Byte>. You can obtain the pixel on the y-th row and x-th column by calling
Bgr color = img[y, x];
Setting the pixel on the y-th row and x-th column is also simple
img[y,x] = color;
The fast way
The image pixel values are stored in the Data property, a 3D array. Use this property if you need to iterate through the pixel values of the image.
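Applied to the code in the question, a sketch of the fast way might look like this (same mapping as the original loops, not benchmarked; names follow the question's code):
using System;
using Emgu.CV;
using Emgu.CV.Structure;

static class FastDistort
{
    public static void Distort(Image<Bgr, byte> origImage, Image<Bgr, byte> distortImage)
    {
        int frameH = origImage.Height;
        // Data is byte[row, column, channel]; indexing it stays in managed
        // code instead of crossing into native code for every pixel.
        byte[,,] src = origImage.Data;
        byte[,,] dst = distortImage.Data;
        for (int x = 0; x < 2 * frameH; x++)
        {
            for (int y = 0; y < 2 * frameH; y++)
            {
                if (x == frameH && y == frameH) continue;
                int x1 = x - frameH;
                int y1 = y - frameH;
                if (x1 * x1 + y1 * y1 >= frameH * frameH) continue;
                double radius = Math.Sqrt(x1 * x1 + y1 * y1);
                double theta = Math.Acos(x1 / radius);
                int sourceX = (int)(theta * (origImage.Width - 1) / Math.PI);
                int sourceY = (int)radius;
                for (int ch = 0; ch < 3; ch++)  // B, G, R
                    dst[y, x, ch] = src[sourceY, sourceX, ch];
            }
        }
    }
}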
I am trying to find the easiest way to add or subtract a scalar value with an OpenCV 2.0 cv::Mat.
Most of the existing functions allow only matrix-matrix and matrix-scalar operations.
I am looking for scalar-matrix operations.
I am currently doing it by creating a temporary matrix with the same scalar value and doing the required arithmetic operation. Example below:
Mat M(Size(100,100), CV_8U);
Mat temp = Mat::ones(100, 100, CV_8U)*255;
M = temp-M;
But I think there should be better/easier ways to do it.
Any suggestions?
You cannot initialize a Mat expression from an int or double. The solution is to use cv::Scalar, even for single-channel matrices:
Mat M = Mat::ones(Size(100, 100), CV_8U);
M = Scalar::all(255) - M;
See http://docs.opencv.org/modules/core/doc/basic_structures.html#matrixexpressions for a list of possible Mat expressions.
Maybe this is a feature of 2.1, or appeared somewhere between 2.1 and the current trunk version, but this works fine for me:
Mat cc = channels[k];
double fmin, fmax;
cv::minMaxLoc( cc, &fmin, &fmax );
if( fmax > 1.0 )
    fmax = 255.0;
else
    fmax = 1.0;
cc = ( cc / (fmax + 1e-9) );
channels is coming from:
channels = vector<Mat>(3);
cv::split( img, channels );
So, sure, just use a scalar expression, at least in 2.1 / the current SVN branch; what happens if you try the above in 2.0?