I need to implement this pseudocode in Java (Greenfoot)

I have been given two methods, leftEyeColor() and rightEyeColor(), and in each of them I must implement this pseudocode:
For the leftEyeColor and rightEyeColor methods:
Get world
Get background
Get x and y coordinates of the eye
(Get color from background at eye coordinates)
Return color. (All of these tasks can be done from the return statement)
I tried to put everything inside the return statement, and it looks like this:
private Color leftEyeColor()
{
    Point eyePos = leftEye();
    return getWorld().getBackground().getColorAt(eyePos.getX(), eyePos.getY());
}
private Color rightEyeColor()
{
    Point eyePos = rightEye();
    return getWorld().getBackground().getColorAt(eyePos.getX(), eyePos.getY());
}
This looks right to me, but I get the error:
greenfoot.Color cannot be converted to java.awt.Color
A simple fix would be to remove the import of java.awt.Color; however, I don't think I am allowed to remove imports for this assignment, unfortunately :(

This is a difference between Greenfoot versions. Greenfoot 3.1.0 introduced greenfoot.Color and uses it in place of java.awt.Color everywhere. This was a breaking change: code written for versions before 3.1.0 is incompatible with 3.1.0+ and vice versa. See https://www.greenfoot.org/doc/font_color
If the assignment came with import java.awt.Color; already in it, it must have been written for an old version of Greenfoot. The simplest fix for you personally might be to install an older Greenfoot such as 3.0.4 (unfortunately, it is easier for you to downgrade than to get the assignment updated, even though 3.1.0 is already five years old).
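If the import really has to stay, another workaround (a sketch, not part of the assignment) is to convert the greenfoot.Color component-wise into a java.awt.Color: greenfoot.Color exposes getRed()/getGreen()/getBlue(), so a small bridge method is enough. The class and method names below are my own:

```java
import java.awt.Color;

public class ColorBridge {
    // Build a java.awt.Color from the RGB components that
    // greenfoot.Color's getRed()/getGreen()/getBlue() return.
    // Call it as: toAwt(gf.getRed(), gf.getGreen(), gf.getBlue())
    public static Color toAwt(int red, int green, int blue) {
        return new Color(red, green, blue);
    }

    public static void main(String[] args) {
        Color c = toAwt(200, 100, 50);
        System.out.println(c.getRed() + "," + c.getGreen() + "," + c.getBlue());
    }
}
```

This keeps the java.awt.Color import meaningful while still reading the pixel through the Greenfoot API.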


Parameters for dlib::find_min_bobyqa

I'm working on a C++ version of Matt Zucker's Page dewarping. So far everything works fine, but I have a problem with the optimization. In line 748 of the GitHub repo, Matt uses the optimize function from SciPy. My C++ equivalent is find_min_bobyqa from dlib.net. The code is:
auto f = [&](const column_vector& ppts) { return objective(dstpoints, ppts, keypoint_index); };
dlib::find_min_bobyqa(f,
    params,
    2 * params.nr() + 1,  // npt - number of interpolation points: x.size() + 2 <= npt && npt <= (x.size()+1)*(x.size()+2)/2
    dlib::uniform_matrix<double>(params.nr(), 1, -2),  // lower bound constraint
    dlib::uniform_matrix<double>(params.nr(), 1, 2),   // upper bound constraint
    1,     // initial trust region radius
    1e-5,  // stopping trust region radius
    4000   // max number of objective function evaluations
);
In my concrete example, params is a dlib::column_vector of doubles with length 189. Every element of params is less than 2.0 and greater than -2.0. The function objective() returns a double, and on its own it works properly, because I get the same value as in the Python version. But after running find_min_bobyqa I usually get the message:
terminate called after throwing an instance of 'dlib::bobyqa_failure': return from BOBYQA because the objective function has been called max_f_evals times.
I set max_f_evals to a fairly large value to see if it optimizes at all, but it doesn't. I did some tweaking of the parameters, but without good results. How should I set the parameters of find_min_bobyqa to get the right solution?
I am very interested in this issue as well. Zucker's work, with very minor tweaks, is ideal for straightening sheet-music images, and I was looking for ways to implement it on a mobile platform when I came across your question.
My research so far suggests that BOBYQA is not the equivalent of Powell's method in SciPy: BOBYQA is bound-constrained, while the one in SciPy is not.
See these links for more information, and a possible way to compile the right supporting library - I would try UOBYQA or NEWUOA.
https://github.com/jacobwilliams/PowellOpt
https://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.minimize.html#rdd2e1855725e-3
(See the Notes section)
EDIT: see C version here:
https://github.com/emmt/Algorithms/tree/master/newuoa
I wanted to post this as a comment, but I don't have enough points for that.
I am very interested in your progress. If you're willing, please keep me posted.
I finally solved this problem. I used the PRAXIS library, because it doesn't need derivative information and is fast.
I modified the code a little for my needs, and now it is a few seconds faster than the original version written in Python.

Using termios in Swift

Now that we've reached Swift 2.0, I've decided to convert my, as yet unfinished, OS X app to Swift. I'm making progress, but I've run into some issues using termios and could use some clarification and advice.
The termios struct is treated as a struct in Swift, no surprise there, but what is surprising is that the array of control characters in the struct is now a tuple. I was expecting it to just be an array. As you might imagine, it took me a while to figure this out. Working in a Playground, if I do:
var settings:termios = termios()
print(settings)
then I get the correct details printed for the struct.
In Obj-C to set the control characters you would use, say,
cfmakeraw(&settings);
settings.c_cc[VMIN] = 1;
where VMIN is a #define equal to 16 in termios.h. In Swift I have to do
cfmakeraw(&settings)
settings.c_cc.16 = 1
which works, but is a bit more opaque. I would prefer to use something along the lines of
settings.c_cc.vmin = 1
instead, but I can't seem to find any documentation describing the Swift "version" of termios. Does anyone know if the tuple has pre-assigned names for its elements, or if not, is there a way to assign names after the fact? Should I just create my own tuple with named elements and then assign it to settings.c_cc?
Interestingly, despite the fact that pre-processor directives are not supposed to work in Swift, if I do
print(VMIN)
print(VTIME)
then the correct values are printed and no compiler errors are produced. I'd be interested in any clarification or comments on that. Is it a bug?
The remaining issues have to do with further configuration of the termios.
The definition of cfsetspeed is given as
func cfsetspeed(_: UnsafeMutablePointer<termios>, _: speed_t) -> Int32
and speed_t is typedef'ed as an unsigned long. In Obj-C we'd do
cfsetspeed(&settings, B38400);
but since B38400 is a #define in termios.h we can no longer do that. Has Apple set up replacement global constants for things like this in Swift, and if so, can anyone tell me where they are documented. The alternative seems to be to just plug in the raw values and lose readability, or to create my own versions of the constants previously defined in termios.h. I'm happy to go that route if there isn't a better choice.
Let's start with your second problem, which is easier to solve.
B38400 is available in Swift, it just has the wrong type.
So you have to convert it explicitly:
var settings = termios()
cfsetspeed(&settings, speed_t(B38400))
Your first problem has no "nice" solution that I know of.
Fixed sized arrays are imported to Swift as tuples, and – as far as I know – you cannot address a tuple element with a variable.
However, Swift preserves the memory layout of structures imported from C, as confirmed by Apple engineer Joe Groff. Therefore you can take the address of the tuple and "rebind" it to a pointer to the element type:
var settings = termios()
withUnsafeMutablePointer(to: &settings.c_cc) { (tuplePtr) -> Void in
    tuplePtr.withMemoryRebound(to: cc_t.self, capacity: MemoryLayout.size(ofValue: settings.c_cc)) {
        $0[Int(VMIN)] = 1
    }
}
(Code updated for Swift 4+.)

<bound method PolyCollection.get_paths of <matplotlib.collections.PolyCollection object

Is there a way to get at all the paths with matplotlib 1.3.0?
I am using hexbin, which creates the following output, "hex31mm", which is a:
In [42]: type(hex31mm)
Out[42]: matplotlib.collections.PolyCollection
My aim is to use the get_paths method as in matplotlib 1.1.0 for the function linked below, but with the newer version, matplotlib 3.0.1.
Interestingly, get_paths under matplotlib 3.0.1 yields 802 distinct paths, as below:
In [41]: len(hex31mm.get_paths())
Out[41]: 802
Yet "get_paths" under matplotlib 1.3.0, for this same object "hex31mm" yields only one path as below:
In[1] len(hex31mm.get_paths())
Out[1]: 1
Please check the link below for more details; any help much appreciated!
NOTE:
I am sure the information for all the paths is part of the object in both cases, because the hexbin figure that is plotted to the screen is the same under both matplotlib versions. However, I require the hexbin centres, hence my insistence on using the get_paths method for the linked function.
Sorry to sound repetitive, but the function works fine in matplotlib 1.1.0 but not under matplotlib 1.3.0. It is supposed to return an (n, 2) array, where each element is the centre (x, y) of one of the n hexbins.
Any hints would be much appreciated...
I think in newer versions of matplotlib the get_offsets() method does the trick: hex31mm.get_offsets() returns the centres, which is the output of the linked function.

GL_EXT_packed_pixels vs GL_APPLE_packed_pixels

My application checks for the GL_EXT_packed_pixels extension before using packed pixel formats such as UNSIGNED_INT_8_8_8_8_EXT. On my MacBook, my code can't find this extension, even though using packed pixel formats still appears to work.
OpenGL Extension Viewer seems to suggest that it has a special name on OS X:
What's the difference? Should I just check for either GL_EXT_packed_pixels or GL_APPLE_packed_pixels when assessing whether UNSIGNED_INT_8_8_8_8_EXT is supported?
EXT_packed_pixels has these definitions:
UNSIGNED_BYTE_3_3_2_EXT 0x8032
UNSIGNED_SHORT_4_4_4_4_EXT 0x8033
UNSIGNED_SHORT_5_5_5_1_EXT 0x8034
UNSIGNED_INT_8_8_8_8_EXT 0x8035
UNSIGNED_INT_10_10_10_2_EXT 0x8036
While APPLE_packed_pixels has these:
UNSIGNED_BYTE_3_3_2 0x8032
UNSIGNED_BYTE_2_3_3_REV 0x8362
UNSIGNED_SHORT_5_6_5 0x8363
UNSIGNED_SHORT_5_6_5_REV 0x8364
UNSIGNED_SHORT_4_4_4_4 0x8033
UNSIGNED_SHORT_4_4_4_4_REV 0x8365
UNSIGNED_SHORT_5_5_5_1 0x8034
UNSIGNED_SHORT_1_5_5_5_REV 0x8366
UNSIGNED_INT_8_8_8_8 0x8035
UNSIGNED_INT_8_8_8_8_REV 0x8367
UNSIGNED_INT_10_10_10_2 0x8036
UNSIGNED_INT_2_10_10_10_REV 0x8368
Comparing the two, EXT_packed_pixels is a subset of APPLE_packed_pixels, and the shared values are identical. Therefore, if APPLE_packed_pixels is supported, you can safely use all the definitions from EXT_packed_pixels.
As your screenshot of the extension viewer already suggests, GL_EXT_packed_pixels has been core functionality since OpenGL 1.2. So in most cases you should not have to test for any of these in the extension string at all. If you check the version first, and it is at least 1.2, you already know that the functionality is available. The test logic could look like this:
// glGetString() returns const GLubyte*, so cast for the C string functions
if (strcmp((const char *)glGetString(GL_VERSION), "1.2") >= 0 ||
    strstr((const char *)glGetString(GL_EXTENSIONS), "_packed_pixels") != NULL)
{
    // supported
}
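One caveat: strcmp on the raw version string only works while the major version stays single-digit, so parsing the leading "major.minor" numerically is a little more robust. A sketch of that idea in Java (the class and method names are mine, not part of any OpenGL API; in a real program the two strings would come from glGetString):

```java
public class GlVersionCheck {
    // Parse the leading "major.minor" of a GL_VERSION string
    // (e.g. "2.1 ATI-1.4.18") and compare it numerically.
    static boolean versionAtLeast(String version, int reqMajor, int reqMinor) {
        String[] parts = version.split("[. ]");  // "2.1 ATI-..." -> ["2", "1", ...]
        int major = Integer.parseInt(parts[0]);
        int minor = Integer.parseInt(parts[1]);
        return major > reqMajor || (major == reqMajor && minor >= reqMinor);
    }

    // Packed pixels are core in 1.2+, otherwise look for either extension name.
    static boolean packedPixelsSupported(String version, String extensions) {
        return versionAtLeast(version, 1, 2)
            || extensions.contains("_packed_pixels");
    }

    public static void main(String[] args) {
        System.out.println(packedPixelsSupported("2.1 ATI-1.4.18", ""));            // core in 2.1
        System.out.println(packedPixelsSupported("1.1", "GL_APPLE_packed_pixels")); // via extension
    }
}
```

The `contains("_packed_pixels")` test deliberately matches both the EXT and the APPLE spelling, mirroring the strstr trick in the C snippet.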

Validation testing: how to validate a UI?

I have been asked to implement validation tests for the JavaScript part of our website. I've been considering using Selenium WebDriver. One of the things we want to test is the UI: checking whether it "looks" good (things that must be aligned are aligned, boxes are in the right position).
For the moment, the only option I have found is to take a snapshot with Selenium and either compare it to a reference snapshot taken manually beforehand, or check the snapshots manually. The snapshot comparison is not very maintainable, as any change in the layout requires all the reference snapshots to be taken again, and the manual check is very time-consuming.
Does anyone know of any way (in Selenium or other) to achieve this?
It's not nice, but it can be done to some extent.
For positioning, you can use WebElement's getLocation() (that's the Java doc, but the same method exists in all Selenium bindings). Note that most browsers render slightly differently, so do not expect things to be pixel-perfect, especially when working with older IE. Also, things might be positioned slightly differently when, for example, the first font defined in the CSS was not found and an alternative was used. Don't rely heavily on this method. But if you are able to keep your tests sane and your environment stable, it will work.
For alignment, I wrote a simple Java method for WebDriver that asserts that one element is visually inside another element.
There should be no false negatives, but there could be some false positives when the inner element is visually inside but its (invisible) actual borders "peek out". I haven't bumped into this problem in my experience, however, since nice websites behave nicely and don't need such hacks :). Still, it's kind of hackish, and Selenium wasn't designed for this type of work, so it might be harder to implement more complex checks.
public static void assertContains(WebElement outerElem, WebElement innerElem) {
    // get the borders of the outer element
    Point outerLoc = outerElem.getLocation();
    Dimension outerDim = outerElem.getSize();
    int outerLeftX = outerLoc.getX();
    int outerRightX = outerLeftX + outerDim.getWidth();
    int outerTopY = outerLoc.getY();
    int outerBottomY = outerTopY + outerDim.getHeight();

    // get the borders of the inner element
    Point innerLoc = innerElem.getLocation();
    Dimension innerDim = innerElem.getSize();
    int innerLeftX = innerLoc.getX();
    int innerRightX = innerLeftX + innerDim.getWidth();
    int innerTopY = innerLoc.getY();
    int innerBottomY = innerTopY + innerDim.getHeight();

    // assert that the inner borders don't cross the outer borders
    final String errorMsg = "ughh, some error message";
    final boolean contains = (outerLeftX <= innerLeftX)
            && (innerRightX <= outerRightX)
            && (outerTopY <= innerTopY)
            && (innerBottomY <= outerBottomY);
    assertTrue(errorMsg, contains);
}
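Stripped of the WebDriver types, the check above is plain rectangle arithmetic. A self-contained sketch (the Rect class is my own stand-in for the location/size data Selenium returns, not part of Selenium) that mirrors the same comparisons:

```java
public class RectContains {
    // Minimal stand-in for an element's location (x, y) and size (w, h).
    static final class Rect {
        final int x, y, w, h;
        Rect(int x, int y, int w, int h) { this.x = x; this.y = y; this.w = w; this.h = h; }
    }

    // True when inner's borders do not cross outer's borders.
    static boolean contains(Rect outer, Rect inner) {
        return outer.x <= inner.x
            && inner.x + inner.w <= outer.x + outer.w
            && outer.y <= inner.y
            && inner.y + inner.h <= outer.y + outer.h;
    }

    public static void main(String[] args) {
        Rect page = new Rect(0, 0, 800, 600);
        Rect box  = new Rect(10, 10, 100, 50);
        System.out.println(contains(page, box));  // box sits inside page
        System.out.println(contains(box, page));  // page overflows box
    }
}
```

Factoring the geometry out like this also makes the containment logic unit-testable without launching a browser.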
If you use the term validation to mean "test that we have built the right thing", I would say it is nearly impossible to automate. How will you judge whether it looks pleasing, or whether it is easy to use, if not by having some people really use it?
This kind of visual check is also something humans are good at. If you use the website at all while developing it, you will notice quite easily if there is something fishy with the layouts and such.
For functionality, automated tests are a good idea.
