How to get an element of a Spark MLlib matrix by coordinate - apache-spark-mllib

For example, if I have a matrix:
import org.apache.spark.mllib.linalg.{Matrices, Matrix}
// Create a dense matrix ((1.0, 2.0), (3.0, 4.0), (5.0, 6.0))
val dm: Matrix = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
dm:
1.0 2.0
3.0 4.0
5.0 6.0
If I want to get element (1,2) of dm, which is 2.0, what should I do?
I searched the internet, and could not find a proper API.

Perhaps this is useful:
import org.apache.spark.mllib.linalg.{Matrices => OldMatrices, Matrix => OldMatrix}
// Create a dense matrix ((1.0, 2.0), (3.0, 4.0), (5.0, 6.0))
val dm: OldMatrix = OldMatrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
println(dm)
/**
* 1.0 2.0
* 3.0 4.0
* 5.0 6.0
*/
// Gets the (i, j)-th element; indices start from 0
println(dm.apply(0, 1))
// 2.0
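
For completeness, the same lookup from PySpark; this is a minimal sketch, assuming a PySpark environment (DenseMatrix.toArray() returns the matrix as a NumPy array):
from pyspark.mllib.linalg import Matrices

# Same column-major layout as above: 3 rows, 2 columns.
dm = Matrices.dense(3, 2, [1.0, 3.0, 5.0, 2.0, 4.0, 6.0])

# Index the NumPy copy with 0-based (row, column) coordinates.
print(dm.toArray()[0, 1])  # 2.0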

Related

save 2-dimensional array as number chart image

It's possible to print a full array in the console with:
import sys
import numpy
numpy.set_printoptions(threshold=sys.maxsize)
but is there also an option to export a kind of "number chart image"? e.g.
import numpy as np
numberchart = np.arange(100)
WANTED_RESULT.png
I plotted some kind of heatmap with matplotlib, but I am looking for an image format like .png:
import numpy as np
import matplotlib.pyplot as plt

harvest = np.array([[0.8, 2.4, 2.5, 3.9, 0.0, 4.0, 0.0],
                    [2.4, 0.0, 4.0, 1.0, 2.7, 0.0, 0.0],
                    [1.1, 2.4, 0.8, 4.3, 1.9, 4.4, 0.0],
                    [0.6, 0.0, 0.3, 0.0, 3.1, 0.0, 0.0],
                    [0.7, 1.7, 0.6, 2.6, 2.2, 6.2, 0.0],
                    [1.3, 1.2, 0.0, 0.0, 0.0, 3.2, 5.1],
                    [0.1, 2.0, 0.0, 1.4, 0.0, 1.9, 6.3]])

fig, ax = plt.subplots()
im = ax.imshow(harvest)

y, x = harvest.shape
ax.set_xticks(np.arange(x))
ax.set_yticks(np.arange(y))
plt.setp(ax.get_xticklabels())  # , rotation=45, ha="right",
# rotation_mode="anchor")

# Loop over data dimensions and create text annotations.
for i in range(y):
    for j in range(x):
        text = ax.text(j, i, harvest[i, j],
                       ha="center", va="center", color="w")

ax.set_title("NumberChart")
fig.tight_layout()
plt.show()
I made some minor changes to your code. It's more of a workaround, but I believe it does what you're hoping for:
import numpy as np
import matplotlib.pyplot as plt

harvest = np.array([[0.8, 2.4, 2.5, 3.9, 0.0, 4.0, 0.0],
                    [2.4, 0.0, 4.0, 1.0, 2.7, 0.0, 0.0],
                    [1.1, 2.4, 0.8, 4.3, 1.9, 4.4, 0.0],
                    [0.6, 0.0, 0.3, 0.0, 3.1, 0.0, 0.0],
                    [0.7, 1.7, 0.6, 2.6, 2.2, 6.2, 0.0],
                    [1.3, 1.2, 0.0, 0.0, 0.0, 3.2, 5.1],
                    [0.1, 2.0, 0.0, 1.4, 0.0, 1.9, 6.3]])

fig, ax = plt.subplots()
# Plot an all-zero array so the background stays blank; only the text shows.
im = ax.imshow(harvest * 0, cmap="Greys")

y, x = harvest.shape
ax.set_xticks(np.arange(x))
ax.set_yticks(np.arange(y))
plt.setp(ax.get_xticklabels())  # , rotation=45, ha="right",
# rotation_mode="anchor")

# Loop over data dimensions and create text annotations.
for i in range(y):
    for j in range(x):
        text = ax.text(j, i, harvest[i, j],
                       ha="center", va="center", color="k")

# Minor ticks at the cell boundaries, so the minor grid draws cell borders.
ax.set_xticks(np.arange(-.5, x, 1), minor=True)
ax.set_yticks(np.arange(-.5, y, 1), minor=True)
plt.tick_params(
    axis='x',
    which='both',
    bottom=False,
    top=False,
    labelbottom=False)
plt.tick_params(
    axis='y',
    which='both',
    left=False,
    right=False,
    labelleft=False)
ax.grid(which='minor', color='k', linestyle='-', linewidth=1)

ax.set_title("NumberChart")
fig.tight_layout()
plt.show()
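To get the .png file the question asks for, save the figure instead of only showing it. fig.savefig is the standard matplotlib call for this; the filename and dpi below are just example values. Put the line before plt.show() in the script above:
fig.savefig("numberchart.png", dpi=150, bbox_inches="tight")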

Create domain with matrices in Chapel

I have a domain D, and I want to use it to index several matrices A. Something of the form
var dom: domain(1) = {0..5};
var mats: [dom] <?>;
var a0 = [[0.0, 0.1, 0.2], [0.3, 0.4, 0.5]];
var a1 = [[1.0, 1.1, 1.2, 1.3], [1.4, 1.5, 1.6, 1.7]];
mats[0] = a0;
mats[1] = a1;
Each a will be 2D but have different sizes. Yes, some of these will be sparse (but need not be for purposes of this question)
== UPDATE ==
For clarity, I have a series of layers (it's a neural net), say 1..15, so I created var layerDom = {1..15}. Each layer has multiple objects associated with it, like an error value, so I have
var errors: [layerDom] real; // Just a number
And I'd like to have
var Ws: [layerDom] <matrixy thingy>; // Weight matrices all of different shape.
As of Chapel 1.15 there isn't an elegant way to create an array of arrays where the inner arrays have different sizes. This is because the inner arrays all share the same domain, meaning that changing one array's domain changes all arrays.
To achieve the desired effect, you need to create an array of records/classes that contain an array:
record Weight {
  var D : domain(2);
  var A : [D] real;
}

var layers = 4;
var weights : [1..layers] Weight;

for i in 1..layers {
  weights[i].D = {1..i, 1..i};
  weights[i].A = i;
}

for w in weights do writeln(w.A, "\n");
// 1.0
//
// 2.0 2.0
// 2.0 2.0
//
// 3.0 3.0 3.0
// 3.0 3.0 3.0
// 3.0 3.0 3.0
//
// 4.0 4.0 4.0 4.0
// 4.0 4.0 4.0 4.0
// 4.0 4.0 4.0 4.0
// 4.0 4.0 4.0 4.0
//

OpenGL ES shader normalized dot product greater than 1.0

It's impossible for the dot product of two normalized vectors to be greater than 1.0, right? So how can GLSL be telling me it is greater than 5.0?
Here is the code I am using to debug, and white pixels are appearing.
if (dot(normalize(n), normalize(h)) > 5.0)
    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0); // Why does this execute!?
else
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
Does anybody see a stupid obvious mistake, or could something else be going on? It shouldn't matter whether n and h are even correct, right? It's been days...
This is OpenGL ES 2.0 if that matters.

Nested Match & Sum In Ruby

Currently I have this array:
[["abc", [0.0, 1.0, 2.0, 3.0], "Testing"], ["efg", [1.0, 2.0, 3.0, 4.0], "Testing"]]
Condition:
if the element at index 2 of each nested array is the same, then I want to sum the arrays at index 1 position by position:
[0.0 + 1.0, 1.0 + 2.0, 2.0 + 3.0, 3.0 + 4.0] = [1.0, 3.0, 5.0, 7.0]
The final result I want:
[["efg", [1.0, 3.0, 5.0, 7.0], "Testing"]]
Is there any way or suggestion to obtain this result?
I've had fun building this in TDD:
def nested_match_sum(data)
  grouped = data.group_by(&:last)

  grouped.values.map do |array|
    array.inject(nil) do |result, elem|
      if result
        elem[1] = array_position_sum(elem[1], result[1])
      end
      elem
    end
  end
end

def array_position_sum(first, second)
  first.zip(second).map do |couple|
    couple.first + couple.last
  end
end
require 'rspec/autorun'

describe "#nested_match_sum" do
  let(:data) do
    [
      ["abc", [0.0, 1.0, 2.0, 3.0], "Testing"],
      ["efg", [1.0, 2.0, 3.0, 4.0], "Testing"]
    ]
  end

  it "groups by last element and aggregates the sum" do
    expect(nested_match_sum(data)).to eq(
      [["efg", [1.0, 3.0, 5.0, 7.0], "Testing"]]
    )
  end

  context "giving multiple keys" do
    let(:data) do
      [
        ["abc", [0.0, 1.0, 2.0, 3.0], "Testing"],
        ["efg", [1.0, 2.0, 3.0, 4.0], "Testing"],
        ["abc", [0.0, 1.0, 2.0, 3.0], "Another"],
        ["ghj", [2.0, 3.0, 4.0, 5.0], "Another"]
      ]
    end

    it "works as well" do
      expect(nested_match_sum(data)).to eq([
        ["efg", [1.0, 3.0, 5.0, 7.0], "Testing"],
        ["ghj", [2.0, 4.0, 6.0, 8.0], "Another"]
      ])
    end
  end
end

describe "#array_position_sum" do
  let(:first) { [1, 2, 3] }
  let(:second) { [4, 5, 6] }

  it "sums two arrays by position" do
    expect(array_position_sum(first, second)).to eq(
      [5, 7, 9]
    )
  end
end

Loss of precision in GLSL fragment shader

I am using OpenGL ES with the GL shading language. I want to render to a texture, but I found a loss of precision: for example, when I write a float value of 0.5 to the texture, the actual value stored in the texture is approximately 0.498. What should I do to achieve higher precision?
You should probably consider packing and unpacking your values (if you only need one value per pixel/texel):
vec4 packFloat(const float value) {
    const vec4 bitSh = vec4(256.0 * 256.0 * 256.0, 256.0 * 256.0, 256.0, 1.0);
    const vec4 bitMsk = vec4(0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0);
    vec4 res = fract(value * bitSh);
    res -= res.xxyz * bitMsk;
    return res;
}

float unpackFloat(const vec4 value) {
    const vec4 bitSh = vec4(1.0 / (256.0 * 256.0 * 256.0), 1.0 / (256.0 * 256.0), 1.0 / 256.0, 1.0);
    return (dot(value, bitSh));
}
This might be okay for storing values for something like depth maps, and it gives you roughly a 32-bit range for each pixel/texel.
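If you want to sanity-check the packing arithmetic outside the shader, here is a rough Python port of the two functions above. It is an illustration only: it runs in float64 and skips the 8-bit per-channel quantization that a real texture write adds.
import math

def pack_float(value):
    # fract(value * bitSh), mirroring the GLSL packFloat
    bit_sh = [256.0 ** 3, 256.0 ** 2, 256.0, 1.0]
    bit_msk = [0.0, 1.0 / 256.0, 1.0 / 256.0, 1.0 / 256.0]
    res = [math.modf(value * s)[0] for s in bit_sh]
    xxyz = [res[0], res[0], res[1], res[2]]  # the res.xxyz swizzle
    return [r - x * m for r, x, m in zip(res, xxyz, bit_msk)]

def unpack_float(rgba):
    # dot(value, bitSh) with the reciprocal shifts
    bit_sh = [1.0 / 256.0 ** 3, 1.0 / 256.0 ** 2, 1.0 / 256.0, 1.0]
    return sum(c * s for c, s in zip(rgba, bit_sh))

print(unpack_float(pack_float(0.5)))       # 0.5
print(unpack_float(pack_float(0.123456)))  # ~0.123456
The four shifted fract() terms telescope when recombined, which is why the round trip recovers the input; in the shader, each component is additionally snapped to 8 bits on store.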
Try adding the highp precision qualifier in front of your variables.
Render to a texture that uses more than 8 bits per component. If you don't have the appropriate OpenGL ES extensions for that, then generally there's not much you can do.
Even the next higher precision might not be enough, because the final stage of the rendering pipeline scales the pixel values to a range of 0..1, end points inclusive. Thus 1 will be represented as 255, which suggests a factor of 1/255 instead of 1/256.
The same applies to all precisions: 0.5 can't be represented exactly.
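Concretely, that arithmetic reproduces the ~0.498 from the question. With 8 bits per channel the texture stores an integer n in 0..255, read back as n/255, and 0.5 falls between two representable steps:
value = 0.5
n_down = int(value * 255)  # 127
n_up = n_down + 1          # 128
print(n_down / 255)        # 0.49803... (the ~0.498 the asker saw)
print(n_up / 255)          # 0.50196... neither step is exactly 0.5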
