Why are maximally cohesive classes not advisable or possible to create? - coding-style

I'm reading Robert C. Martin's book "Clean Code" (2009) and I've stumbled upon the concept of cohesion (Chapter 10).
Martin writes:
A class in which each variable is used by each method is maximally
cohesive. In general it is neither advisable nor possible to create
such maximally cohesive classes ...
Unfortunately I haven't found a detailed explanation anywhere. Does anyone have an explanation for this, with real code examples?
Many thanks in advance!

Making my comments into an answer...
Maximal cohesion, as defined in the book, generally implies that the methods provide overlapping functionality: there is code duplication, and the methods are not orthogonal. That is bad design and should be avoided by refactoring out the common code and making the methods as orthogonal as possible, thereby eliminating the maximal cohesion. So my point is that this is why it is not advisable.
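To make the duplication argument concrete, here is a hedged sketch in Python (the Segment class, its fields, and its methods are all invented for illustration, not taken from the book): two methods originally duplicated the same length computation inline, which kept every method using every variable; extracting the computation into a helper removes the duplication, but the helper uses only two of the three fields, so the class stops being maximally cohesive.

```python
class Segment:
    def __init__(self, x, y, label):
        self.x = x
        self.y = y
        self.label = label

    # Before refactoring, both methods below recomputed
    # (x**2 + y**2) ** 0.5 inline, so every method used every
    # variable (maximal cohesion) at the price of duplicated code.

    def _length(self):
        # Extracted helper: uses only x and y, never label,
        # so the class is no longer maximally cohesive.
        return (self.x ** 2 + self.y ** 2) ** 0.5

    def describe(self):
        return f"{self.label}: length {self._length():.2f}"

    def longer_than(self, limit):
        return self._length() > limit
```

For example, `Segment(3, 4, "diag").describe()` yields `"diag: length 5.00"`: the refactored code is cleaner, precisely because it gave up maximal cohesion.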
However, it is possible to create maximally cohesive classes, and in some cases it is perfectly normal.
One simple, practical example I can think of is classes representing geometrical shapes in computer-aided design.
For example:
class Circle {
    float[2] center;
    float radius;
    draw() {
        hardware.draw(center[0], center[1], radius);
    }
    print() {
        print('Circle at ' + center[0] + ',' + center[1] + ' with radius ' + radius);
    }
    scale(s) {
        center[0] *= s;
        center[1] *= s;
        radius *= s;
    }
    intersectLine(line) {
        /* compute intersection based on line and circle coordinates, using both the center and radius variables */
    }
}
class Bezier {
    float[4] controls;
    draw() {
        /* .. */
    }
    print() {
        /* .. */
    }
    scale(s) {
        /* .. */
    }
    intersectLine(line) {
        /* .. */
    }
}
As one can see, the shape classes are maximally cohesive, and that is perfectly normal given the nature of the objects and their methods: all of their variables are needed for any calculation of actual interest.
Hope the examples and explanations are helpful.

Is it possible to model LLVM-like inheritance hierarchy in Rust using enums?

By "LLVM-like inheritance hierarchy", I mean the way of obtaining runtime polymorphism described in this documentation: https://llvm.org/docs/HowToSetUpLLVMStyleRTTI.html.
It is easy to implement the same feature in Rust using enums, like:
enum Shape {
    Square(Square),
    Circle(Circle),
}

enum Square {
    SquareA(SquareAData),
    SquareB(SquareBData),
}

enum Circle {
    CircleA(CircleAData),
    CircleB(CircleBData),
}
// assume ***Data can be arbitrarily complex
However, the memory layout is inevitably different from an LLVM-like inheritance hierarchy, which uses a single integer field to record the discriminant of the type. Though current rustc already applies many optimizations to the size of enums, there will still be two integer fields recording discriminants in a Shape object in the above example.
I have tried several approaches without success; the closest to an LLVM-like inheritance hierarchy, in my mind, is to enable the nightly feature arbitrary_enum_discriminant and assign each variant of the enum a discriminant:
#![feature(arbitrary_enum_discriminant)]

enum Shape {
    Square(Square),
    Circle(Circle),
}

#[repr(usize)]
enum Square {
    SquareA(SquareAData) = 0,
    SquareB(SquareBData) = 1,
}

#[repr(usize)]
enum Circle {
    CircleA(CircleAData) = 2,
    CircleB(CircleBData) = 3,
}
It would be perfectly possible for Shape to go without its own discriminant, since its two variants have non-intersecting discriminant sets. However, rustc still assigns an integer discriminant to it, making it larger than Square or Circle. (rustc version: rustc 1.44.0-nightly (f509b26a7 2020-03-18))
So my question is: is it at all possible in Rust to use enums to model an LLVM-like inheritance hierarchy, with only a single integer discriminant in the top-level "class"?
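For reference, a flattened hierarchy does get by with a single discriminant, at the price of giving up the nested enums; the sketch below (with invented payload types) recovers the two-level grouping through predicate methods, roughly the way LLVM's classof tests a discriminant range:

```rust
// Illustrative stand-ins for the ***Data payloads.
struct SquareAData { side: f64 }
struct SquareBData { side: f64, rotation: f64 }
struct CircleAData { radius: f64 }
struct CircleBData { radius: f64, eccentricity: f64 }

// Flattening the hierarchy gives exactly one discriminant per Shape value.
enum Shape {
    SquareA(SquareAData),
    SquareB(SquareBData),
    CircleA(CircleAData),
    CircleB(CircleBData),
}

impl Shape {
    // Analogous to LLVM's classof: regroup variants into "subclasses".
    fn is_square(&self) -> bool {
        matches!(self, Shape::SquareA(_) | Shape::SquareB(_))
    }
    fn is_circle(&self) -> bool {
        matches!(self, Shape::CircleA(_) | Shape::CircleB(_))
    }
}

fn main() {
    let s = Shape::SquareB(SquareBData { side: 2.0, rotation: 0.5 });
    println!("{} {}", s.is_square(), s.is_circle()); // prints "true false"
}
```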

Best practice on instantiating different types of the same es6 class

I am building an HTML5 Canvas 2D game and I have some ES6 classes in my project. One of them is Obstacle. What I want is the ability to create different instances of this class depending on a given type (e.g. small, thick, tall, etc.).
What is the best way to do this?
Just add another parameter to the class's constructor and name it type?
Or create subclasses of the main class Obstacle by extending it (e.g. SmallObstacle, TallObstacle), given a random type value?
Thanks in advance.
Your question comes down to how you want to build and implement your data structures. Neither option is better than the other 100% of the time. I'll explain the pros and cons of each below, along with my personal solution; you can decide which one is more feasible for you to implement.
Option 1:
Pros:
Easy to insert into the class and deal with later in your code
Leaves your class neat and slimmer
Cons:
You have to deal with it later in your code: when initializing (and maybe when updating/rendering), your code needs to interpret what the types are and build each obstacle based on that.
Option 2:
Pros:
Makes an organized class
Not much work later in your code since you're just pulling preset specifications rather than interpreting them
Cons:
The class is much larger
You are likely to be repeating some code (although if you do it well, you won't)
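For concreteness, Option 2 might look like the following sketch (only the subclass names come from the question; the preset dimensions are invented for illustration): each subclass bakes its type-specific values into the super() call.

```javascript
class Obstacle {
  constructor(x, y, width, height) {
    this.x = x;
    this.y = y;
    this.width = width;
    this.height = height;
  }
}

// Each subclass presets the dimensions that define its "type".
class SmallObstacle extends Obstacle {
  constructor(x, y) {
    super(x, y, 50, 50);
  }
}

class TallObstacle extends Obstacle {
  constructor(x, y) {
    super(x, y, 60, 300);
  }
}
```

With this shape, `new TallObstacle(10, 20)` needs no type parameter at all; the trade-off is one extra class per type.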
My Opinion:
Personally, I would skip specifying "large", "medium", "tall", "wide". Instead, I would use some logic when implementing the class and make the height and width whatever I want. You can take advantage of how easily a constructor lets you specify a value while also providing a default.
Here is a snippet example:
class obstacle {
    constructor(x, y, w = 150, h = 150) {
        this.x = x;
        this.y = y;
        this.width = w;  // default is 150
        this.height = h; // so it's medium or something
        // this.sprite = ...; etc.
    }
}

// Initialize 100 obstacles :P
for (let i = 0; i < 100; i++) {
    let x = Math.random() * scene.width;
    let y = Math.random() * scene.height;
    let foo = new obstacle(x, y);
    /* When you want a different width and height instead
       of the defaults, construct like this:
       let bar = new obstacle(x, y, width, height);
    */
}

I want to make an image move with a sin graph, in Processing

PImage img;
float x;

void setup() {
    img = loadImage("img.png");
}

void draw() {
}
How can I accomplish this?
When you're getting started, I recommend carefully reading the available documentation and tutorials. You can also find an exhaustive answer on using the sine function in general, and applying it to movement, here.
Additionally, have a look at the Stack Overflow Tour to get an idea of how the community works (and earn a badge doing so).
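As a rough illustration of the sine-movement idea, in plain Java rather than a full Processing sketch (the method name and the amplitude/speed constants are invented): drive the offset from a frame counter, which in Processing you would take from frameCount inside draw() before calling image(img, x, y).

```java
public class SineMotion {
    // Horizontal offset for a given frame: oscillates between
    // -amplitude and +amplitude as the frame count grows.
    static float xOffset(int frameCount, float amplitude, float speed) {
        return amplitude * (float) Math.sin(frameCount * speed);
    }

    public static void main(String[] args) {
        // At frame 0 the offset is 0; about a quarter period later it peaks.
        System.out.println(xOffset(0, 50f, 0.05f)); // prints 0.0
    }
}
```

In a Processing draw() loop this would amount to something like `image(img, width/2 + xOffset(frameCount, 50, 0.05), height/2);`.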

libGDX- Exact collision detection - Polygon creation?

I've got a question about libGDX collision detection. Because it's a rather specific question, I have not found any good solution on the internet yet.
So, I have already created "humans" that consist of different body parts, each with rectangle-shaped collision detection.
Now I want to implement weapons and skills, which for example look like this:
Skill example image
Problem
Working with rectangles for collision detection would be really frustrating for players with skills like this: they would dodge a skill successfully, but the collision detector would still damage them.
Approach 1:
Before I started working with libGDX, I created an Android game with a custom engine and similar skills. There I solved the problem in the following way:
Detect rectangle collision
Calculate overlapping rectangle section
Check every single pixel of the overlapping part of the skill for transparency
If there is any non-transparent pixel found -> Collision
That's a rather heavy approach, but as only the overlapping pixels are checked and the rest of the game is really light, it works completely fine.
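The four steps above can be sketched roughly as follows, in plain Java with boolean alpha masks standing in for the images (all names are illustrative; a real implementation would fill the masks from the sprites' pixels):

```java
public class PixelCollision {
    // Each mask entry is true for a non-transparent pixel.
    // (ax, ay) and (bx, by) are the sprites' top-left positions.
    static boolean overlaps(boolean[][] a, int ax, int ay,
                            boolean[][] b, int bx, int by) {
        // 1+2: detect and compute the overlapping rectangle.
        int x0 = Math.max(ax, bx);
        int y0 = Math.max(ay, by);
        int x1 = Math.min(ax + a[0].length, bx + b[0].length);
        int y1 = Math.min(ay + a.length, by + b.length);
        // 3+4: check only pixels inside the overlap; any pixel that is
        // non-transparent in both masks means a collision.
        for (int y = y0; y < y1; y++) {
            for (int x = x0; x < x1; x++) {
                if (a[y - ay][x - ax] && b[y - by][x - bx]) {
                    return true;
                }
            }
        }
        return false;
    }
}
```

The cost is proportional to the overlap area only, which is what keeps the technique affordable in practice.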
At the moment my skill images are loaded as TextureRegions, where it is not possible to access single pixels.
I have found that libGDX has a Pixmap class, which would allow such pixel checks. The problem is that loading them additionally as Pixmaps would (1) be even heavier and (2) defeat the whole purpose of the Texture system.
An alternative could be to load all skills as Pixmap only. What do you think: Would this be a good way? Is it possible to draw many Pixmaps on the screen without any issues and lag?
Approach 2:
Another way would be to create Polygons with the shapes of the skills and use them for collision detection.
a)
But how would I define a Polygon shape for every single skill (there are over 150 of them)? After browsing for a while, I found this useful tool: http://www.aurelienribon.com/blog/projects/physics-body-editor/
It allows creating Polygon shapes by hand and then saving them as JSON files, readable by a libGDX application. Now here come the difficulties:
The Physics Body Editor is tied to Box2d (which I am not using). I would either have to add the whole Box2d physics engine (which I do not need at all) just for one tiny collision detection, OR I would have to write a custom BodyEditorLoader, which would be a tough, complicated and time-intensive task.
Some images of the same skill sprite differ greatly in shape (like the second skill sprite example). When working with the BodyEditor tool, I would have to define not only the shape of every single skill, but the shapes of several images (up to 12) of every single skill. That would be extremely time-intensive, and a huge mess when implementing those dozens of polygon shapes.
b)
If there is any smooth way to automatically generate Polygons out of images, that could be the solution. I could simply connect every sprite section to a generated Polygon and check for collisions that way. There are a few problems, though:
Is there any smooth tool that can generate Polygon shapes out of an image (and does not need too much time to do so)?
I don't think a tool like this (if one exists) can work directly with Textures. It would probably need Pixmaps, though the Pixmaps would not need to stay loaded after the Polygon creation. Still an extremely heavy task!
My current thoughts
I'm stuck at this point because there are several possible approaches but all of them have their difficulties. Before I choose one path and continue coding, it would be great if you could leave some of your ideas and knowledge.
There might be helpful classes and code included in libGDX that solve my problems within seconds - as I am really new to libGDX, I just don't know a lot about it yet.
Currently I think I would go with approach 1 and work with pixel detection. That way I achieved exact collision detection in my previous Android game.
What do you think?
Greetings
Felix
Personally, I feel pixel-to-pixel collision would be overkill on performance, and it would still produce some instances where I would feel cheated (I got hit by the handle of the axe?).
If it were me, I would add a "hitbox" to each skill. Street Fighter is a popular game which uses this technique (newer versions are in 3D, but hitbox collision is still 2D). Hitboxes can change frame by frame along with the animation.
Empty spot here to add example images - google "Streetfighter hitbox" in the meantime
For your axe, there could be a defined rectangle hitbox along the edge of one or both ends - or even over the entire metal head of the axe.
This keeps it fairly simple, without having to mess with exact polygons, but it also isn't overly performance-heavy like having every single pixel be its own hitbox.
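A minimal sketch of the frame-by-frame hitbox idea, in plain Java (the class layout and all the numbers are invented; in libGDX the tiny Box class could be replaced by com.badlogic.gdx.math.Rectangle):

```java
import java.util.Arrays;
import java.util.List;

public class FrameHitboxes {
    // Axis-aligned hitbox relative to the sprite's origin.
    static class Box {
        final float x, y, w, h;
        Box(float x, float y, float w, float h) {
            this.x = x; this.y = y; this.w = w; this.h = h;
        }
        boolean overlaps(Box o) {
            return x < o.x + o.w && x + w > o.x
                && y < o.y + o.h && y + h > o.y;
        }
    }

    // One list of active hitboxes per animation frame (illustrative numbers).
    static final List<List<Box>> AXE_FRAMES = Arrays.asList(
        Arrays.<Box>asList(),                          // frame 0: wind-up, no hitbox
        Arrays.asList(new Box(40, 10, 20, 20)),        // frame 1: axe head only
        Arrays.asList(new Box(40, 10, 20, 20),
                      new Box(20, 15, 20, 10))         // frame 2: head + upper handle
    );

    // A target is hit only if it overlaps a hitbox active on this frame.
    static boolean hits(int frame, Box target) {
        for (Box b : AXE_FRAMES.get(frame)) {
            if (b.overlaps(target)) return true;
        }
        return false;
    }
}
```

The per-frame lists are what make the dodge feel fair: during wind-up frames the list is empty, so standing next to the axe does no damage.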
I've used the exact body editor you referenced, and it has the ability to generate polygons and/or circles for you. I also wrote a loader for the generated JSON using the Jackson library. This may not be the answer for you, since you'd have to implement Box2d, but here's how I did it anyway.
/**
 * Adds all the fixtures defined in jsonPath with the name 'lookupName', and
 * attaches them to the 'body' with the properties defined in 'fixtureDef'.
 * Then converts to the proper scale with 'width'.
 *
 * @param body       the body to attach fixtures to
 * @param fixtureDef the fixture's properties
 * @param jsonPath   the path to the collision shapes definition file
 * @param lookupName the name to find in the jsonPath JSON file
 * @param width      the width of the sprite, used to scale fixtures and find the origin
 * @param height     the height of the sprite, used to find the origin
 */
public void addFixtures(Body body, FixtureDef fixtureDef, String jsonPath, String lookupName, float width, float height) {
    JsonNode collisionShapes = null;
    try {
        collisionShapes = json.readTree(Gdx.files.internal(jsonPath).readString());
    } catch (JsonProcessingException e) {
        e.printStackTrace();
    } catch (IOException e) {
        e.printStackTrace();
    }
    for (JsonNode node : collisionShapes.findPath("rigidBodies")) {
        if (node.path("name").asText().equals(lookupName)) {
            Array<PolygonShape> polyShapes = new Array<PolygonShape>();
            Array<CircleShape> circleShapes = new Array<CircleShape>();
            for (JsonNode polygon : node.findPath("polygons")) {
                Array<Vector2> vertices = new Array<Vector2>(Vector2.class);
                for (JsonNode vector : polygon) {
                    vertices.add(new Vector2(
                            (float) vector.path("x").asDouble() * width,
                            (float) vector.path("y").asDouble() * width)
                            .sub(width / 2, height / 2));
                }
                polyShapes.add(new PolygonShape());
                polyShapes.peek().set(vertices.toArray());
            }
            for (final JsonNode circle : node.findPath("circles")) {
                circleShapes.add(new CircleShape());
                circleShapes.peek().setPosition(new Vector2(
                        (float) circle.path("cx").asDouble() * width,
                        (float) circle.path("cy").asDouble() * width)
                        .sub(width / 2, height / 2));
                circleShapes.peek().setRadius((float) circle.path("r").asDouble() * width);
            }
            for (PolygonShape shape : polyShapes) {
                Vector2[] vectors = new Vector2[shape.getVertexCount()];
                for (int i = 0; i < shape.getVertexCount(); i++) {
                    vectors[i] = new Vector2();
                    shape.getVertex(i, vectors[i]);
                }
                shape.set(vectors);
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
            for (CircleShape shape : circleShapes) {
                fixtureDef.shape = shape;
                body.createFixture(fixtureDef);
            }
        }
    }
}
And I would call it like this:
physics.addFixtures(body, fixtureDef, "ship/collision_shapes.json", shipType, width, height);
Then for collision detection:
public ContactListener shipsExplode() {
    ContactListener listener = new ContactListener() {
        @Override
        public void beginContact(Contact contact) {
            Body bodyA = contact.getFixtureA().getBody();
            Body bodyB = contact.getFixtureB().getBody();
            for (Ship ship : ships) {
                if (ship.body == bodyA) {
                    ship.setExplode();
                }
                if (ship.body == bodyB) {
                    ship.setExplode();
                }
            }
        }
        // endContact, preSolve and postSolve must also be overridden
        // (left out here for brevity).
    };
    return listener;
}
then you would add the listener to the world:
world.setContactListener(physics.shipsExplode());
My sprites' width and height were small, since you're dealing in meters, not pixels, once you start using Box2d. One sprite was 0.8f high and 1.2f wide, for example. If you made the sprites' width and height in pixels, the physics engine would hit its built-in speed limits: http://www.iforce2d.net/b2dtut/gotchas
I don't know if this still matters to you guys, but I built a small Python script that returns the pixel positions of the points on the edges of the image. There is room to improve the script, but for me, for now, it's OK...
from PIL import Image, ImageFilter

filename = "dship1"
# Convert to RGBA so pixel[3] (the alpha channel) is always present.
image = Image.open(filename + ".png").convert("RGBA")
image = image.filter(ImageFilter.FIND_EDGES)
image.save(filename + "_edge.png")

cols = image.width
rows = image.height
points = []

# Collect the (x, y) position of every non-transparent edge pixel
# (0-based coordinates, row by row).
pixels = list(image.getdata())
for h in range(rows):
    for w in range(cols):
        if pixels[h * cols + w][3] > 0:
            points.append((w, h))

# Text mode ("w"), since we are writing a str, not bytes.
with open(filename + "_points.txt", "w") as nf:
    nf.write(',\n'.join('%s, %s' % p for p in points))
In case of updates you can find them here: export positions

SharpGL Animation Questions

So I am writing a program that parses files with xyz points and makes a bunch of connected lines. What I am trying to do is animate each line being drawn. I have tried to use VBOs and display lists to increase performance (as I am dealing with a large number of data points, i.e. 1,000,000 points), but I could not figure out how to use them in SharpGL. So the code I am using to draw right now is as follows:
private void drawInput(OpenGL gl)
{
    gl.Begin(OpenGL.GL_LINE_STRIP);
    for (int i = 0; i < parser.dataSet.Count; i++)
    {
        gl.Color((float) i, 3.0f, 0.0f);
        gl.Vertex(parser.dataSet[i].X, parser.dataSet[i].Y, parser.dataSet[i].Z);
        gl.Flush();
    }
    gl.End();
}
I know immediate mode is super noobzore5000 of me, but I can't find any SharpGL examples of VBOs or display lists. So now what I want to do is 'redraw' the picture after each line is drawn. I thought that when the flush method is called, it draws everything up to that point, but it still 'batches' it and displays all the data at once. How can I animate this? I am incredibly desperate; I don't think thoroughly learning OpenGL or DirectX is practical for such a simple task.
After lots of tinkering, I chose to go with OpenTK because I did end up figuring out VBO's for SharpGL and the performance is AWFUL compared to OpenTK. I will give an answer as to how to animate in the way that I wanted.
My solution works with immediate mode and with VBOs. The main concept is keeping a member integer (animationCount) that you increase every time your paint function gets called, and painting only up to that number.
Immediate Mode:
private void drawInput(OpenGL gl)
{
    gl.Begin(OpenGL.GL_LINE_STRIP);
    // Draw only the first animationCount points, clamped to the data size
    // so the index never runs past the end of the set.
    int limit = Math.Min(animationCount, parser.dataSet.Count);
    for (int i = 0; i < limit; i++)
    {
        gl.Color((float) i, 3.0f, 0.0f);
        gl.Vertex(parser.dataSet[i].X, parser.dataSet[i].Y, parser.dataSet[i].Z);
    }
    gl.End();
    animationCount++;
}
or
VBO:
private void glControl1_Paint(object sender, System.Windows.Forms.PaintEventArgs e)
{
    // animationCount should likewise be clamped to the vertex count
    // before this call, so DrawArrays never reads past the buffer.
    GL.DrawArrays(PrimitiveType.LineStrip, 0, animationCount);
    animationCount++;
}
