Is there a way to count the reviews by rating before calling the ->get() method, so that all those counters come from the database query without any calculations on the server? My solution at the moment:
$allReviews = Review::query()
->where('product_id', $data['product_id'])
->whereNotNull('published_at')
->get();
$fiveStars = count($allReviews->where('rating', 5));
$fourStars = count($allReviews->where('rating', 4));
$threeStars = count($allReviews->where('rating', 3));
$twoStars = count($allReviews->where('rating', 2));
$oneStar = count($allReviews->where('rating', 1));
$overallRating = ($fiveStars * 5 + $fourStars * 4 + $threeStars * 3 + $twoStars * 2 + $oneStar) / ($fiveStars + $fourStars + $threeStars + $twoStars + $oneStar);
You could use groupBy and pluck to let the database do the counting:
$ratings = Review::query()
->selectRaw('rating, COUNT(*) as amount')
->where('product_id', $data['product_id'])
->whereNotNull('published_at')
->groupBy('rating')
->pluck('amount', 'rating');
$fiveStars = $ratings[5] ?? 0;
$fourStars = $ratings[4] ?? 0;
$threeStars = $ratings[3] ?? 0;
$twoStars = $ratings[2] ?? 0;
$oneStar = $ratings[1] ?? 0;
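If you also want the overall average rating without doing the arithmetic in PHP, the database can compute it in the same round trip. A minimal sketch, assuming rating is a numeric column (the aggregate aliases in selectRaw are my own):
$stats = Review::query()
    ->selectRaw('COUNT(*) as total, AVG(rating) as average')
    ->where('product_id', $data['product_id'])
    ->whereNotNull('published_at')
    ->first();

// Guard against a product with no published reviews yet.
$overallRating = $stats->total > 0 ? round((float) $stats->average, 2) : 0.0;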
I wish to get the element-wise sum of multiple collections:
$collection1 = collect([1, 2, 3]);
$collection2 = collect([4, 8, 1]);
$collection3 = collect([5, 7, 1]);
I would like the result to look like this:
$collection = [10,17,5];
If you can put all the collections in an array, this code will help you:
$newColl = collect([0, 0, 0]);
foreach([$collection1, $collection2, $collection3] as $coll){
for($i = 0; $i < $coll->count(); $i++){
$newColl[$i] += $coll[$i];
}
}
Alternatively, if all collections have the same length, loop over the indices directly (collections are zero-indexed):
$number_of_items = $collection1->count();
$collection = collect();
for ($i = 0; $i < $number_of_items; $i++) {
    $collection[$i] = $collection1[$i] + $collection2[$i] + $collection3[$i];
}
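For a one-liner, Laravel's built-in zip and map Collection methods do the same element-wise sum:
$collection = $collection1->zip($collection2, $collection3)
    ->map(function ($values) {
        return $values->sum();
    });
// $collection->all() returns [10, 17, 5]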
I have two numbers that I get from relationships tables and I calculate percentage like this:
$safe_voters = AddMember::with('settlement')->whereHas('settlement', function($query) {
$query->where('reon_id', '1');
})->where('town_id', Auth::user()->town_id)->count();
$members = AddMember::with('settlement')->whereHas('settlement', function($query) {
$query->where('reon_id', '1');
})->where('town_id', Auth::user()->town_id)
->where('cipher_id', '0')
->count();
$percent_members = round(($members / $safe_voters) * 100,1);
This way, I get the count of all safe_voters with reon_id == 1, the count of all members with reon_id == 1 and cipher_id == 0, and I calculate the percentage.
But I need one more variable that holds the sum of the percentages computed for each safe_voters/members pair separately. For example:
$safe_voters1 = 6;
$member1 = 3;
$percent1 = round(($member1 / $safe_voters1) * 100,1);
$safe_voters2 = 9;
$member2 = 3;
$percent2 = round(($member2 / $safe_voters2) * 100,1);
$safe_voters3 = 12;
$member3 = 3;
$percent3 = round(($member3 / $safe_voters3) * 100,1);
$final_percentage = $percent1 + $percent2 + $percent3;
$final_percentage should be 9 in this case.
How do I get $final_percentage if I don't have each percentage separately?
One more thing: the numbers of safe_voters and members keep growing, so I suppose I need a foreach loop first? I guess I must get each percentage from the loop individually, but I don't know how...
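A minimal sketch of the loop approach, assuming each safe_voters/members pair corresponds to one settlement (the $settlements variable and the settlement_id column are hypothetical; adapt them to whatever actually groups your pairs):
$final_percentage = 0;

foreach ($settlements as $settlement) {
    // Hypothetical per-settlement counts mirroring the two queries above.
    $safe_voters = AddMember::where('settlement_id', $settlement->id)->count();
    $members = AddMember::where('settlement_id', $settlement->id)
        ->where('cipher_id', '0')
        ->count();

    // Skip empty groups to avoid division by zero.
    if ($safe_voters > 0) {
        $final_percentage += round(($members / $safe_voters) * 100, 1);
    }
}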
Here is my update function. As soon as I turn the update on, my program gets slower; I'm not even able to render 25,000 particles at a time. voxels is a three-dimensional array. How do I change my update function so that the calculation is done faster? I want to be able to render at least 100,000 particles.
function update(){
newTime = Date.now();
elapsedTime = newTime - oldTime;
oldTime = newTime;
for(var index =0 ; index < particles.vertices.length; index++){
//particle's old position
var oldPosition = particles.vertices[index];
//making sure particles do not go out of the boundary
if (oldPosition.x > screenSquareLength || oldPosition.x < -screenSquareLength){
oldPosition.x = 2 * screenSquareLength * Math.random() - screenSquareLength;
}
if (oldPosition.y > screenSquareLength || oldPosition.y < -screenSquareLength){
oldPosition.y = 2 * screenSquareLength * Math.random() - screenSquareLength;
}
if (oldPosition.z > screenSquareDepth/2 || oldPosition.z < -screenSquareDepth/2){
oldPosition.z = screenSquareDepth * Math.random() - screenSquareDepth/2;
}
var oldVelocity = particlesExtraInfo[index].velocity;
var fieldVelocity, particleColor, activeVoxel;
var xIndex, yIndex, zIndex;
try{
//calculating index of voxel
xIndex = Math.floor(( oldPosition.x + screenSquareLength ) / voxelSize);
yIndex = Math.floor(( oldPosition.y + screenSquareLength ) / voxelSize);
zIndex = Math.floor(( screenSquareDepth / 2 - oldPosition.z) / voxelSize);
//getting velocity and color for the particle, and whether the voxel is visible
fieldVelocity = voxels[zIndex][xIndex][yIndex].userData["velocity"];
particleColor = voxels[zIndex][xIndex][yIndex].userData["color"];
activeVoxel = voxels[zIndex][xIndex][yIndex].userData["visible"];
}catch (e){
console.log("indexX = "+xIndex + " \t Yindex = "+ yIndex+" \t zIndex = "+ zIndex);
}
try{
var vx = ((oldVelocity.x + fieldVelocity.x) * elapsedTime);
var vy = ((oldVelocity.y + fieldVelocity.y) * elapsedTime);
var vz = ((oldVelocity.z + fieldVelocity.z) * elapsedTime);
var magnitude = Math.abs(vx) + Math.abs(vy) + Math.abs(vz); //Math.sqrt(vx*vx + vy*vy+ vz*vz);
var normalized = new THREE.Vector3(vx / magnitude, vy / magnitude, vz / magnitude);
if((particles.vertices[index].x < 0.1 && particles.vertices[index].x > -0.1) && (particles.vertices[index].y < 0.1 && particles.vertices[index].y > -0.1) && (particles.vertices[index].z < 0.1 && particles.vertices[index].z > -0.1) ){
particles.vertices[index].x = 2 * screenSquareLength * Math.random() - screenSquareLength;
particles.vertices[index].y = 2 * screenSquareLength * Math.random() - screenSquareLength;
particles.vertices[index].z = 2 * screenSquareLength * Math.random() - screenSquareLength;
}
//if the voxel is not part of the model, update particle position and velocity
if( activeVoxel == 0){
particles.colors[index] = new THREE.Color(particleColor);//new THREE.Color(0, 0, 1);
particles.colorsNeedUpdate = true;
particles.vertices[index].x += normalized.x/slowingFactor;
particles.vertices[index].y += normalized.y/slowingFactor;
particles.vertices[index].z += normalized.z/slowingFactor;
particles.verticesNeedUpdate = true;
particlesExtraInfo[index].velocity = normalized;
}else{
//voxel is part of the model, so update the color property of the particle
particles.colors[index] = new THREE.Color(0, 0, 1);
particles.colorsNeedUpdate = true;
particles.vertices[index].x += normalized.x/(slowingFactor * 200);
particles.vertices[index].y += normalized.y/(slowingFactor * 200);
particles.vertices[index].z += normalized.z/(slowingFactor * 200);
particles.verticesNeedUpdate = true;
particlesExtraInfo[index].velocity = new THREE.Vector3( normalized.x/slowingFactor, normalized.y/slowingFactor, normalized.z/slowingFactor );
}
}catch(e){
}
}
}
I don't know much about what exactly happens when you update a buffer like this, but I know that it can be slow.
While 25k may be a lot for what you're trying to do (I experimented with 5k and had trouble), there is no reason why you can't optimize your JS before trying to move everything to the GPU (for example). Division is slower than multiplication, so hoist repeated divisions out of the loop:
var foo = 0;
//dividing inside the loop:
foo += normalized.x / someFactor;
//better done this way:
var invSomeFactor = 1 / someFactor; //computed once, before the loop
//now you avoid dividing by the same thing many times in your loop
foo += normalized.x * invSomeFactor;
Math.random() is pretty expensive; you could make a lookup table (a large one) and fetch these precomputed values from it:
var myLookupTable = [];
var MAX_VALUES = 2048;
for ( var i = 0 ; i < MAX_VALUES ; i ++ ){
myLookupTable.push(Math.random());
}
//and then you can have a stride for example
var RAND_STRIDE = 0;
//and in the loop
someVec.x += something.x * myLookupTable[ RAND_STRIDE ++ ];
RAND_STRIDE %= MAX_VALUES; //read from the beginning
Finally, you can write a fragment shader that reads from one buffer and writes into another, doing all this logic in the process. Each fragment is your particle; once you run this pass and compute your positions, you read that buffer in your particle vertex shader and just assign those positions.
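One more JS-level win in the same spirit (my addition, not part of your code): the update loop allocates a new THREE.Vector3 and a new THREE.Color for every particle on every frame, which creates heavy garbage-collector pressure. A minimal sketch of reusing objects instead, assuming particles.colors is already populated with THREE.Color instances:
// allocated once, outside the per-frame loop
var invSlowingFactor = 1 / slowingFactor;
var scratchColor = new THREE.Color();

// inside the loop, instead of "new THREE.Vector3(...)" / "new THREE.Color(...)":
particles.vertices[index].x += (vx / magnitude) * invSlowingFactor;
particles.vertices[index].y += (vy / magnitude) * invSlowingFactor;
particles.vertices[index].z += (vz / magnitude) * invSlowingFactor;
particles.colors[index].copy(scratchColor.set(particleColor));

// the stored velocity still needs one object per particle, but you can
// mutate the existing vector with .set(...) instead of allocating a new one
particlesExtraInfo[index].velocity.set(vx / magnitude, vy / magnitude, vz / magnitude);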
correlation = zeros(length(s1), 1);
sizeNum = 0;
for i = 1 : length(s1) - windowSize - delta
s1Dat = s1(i : i + windowSize);
s2Dat = s2(i + delta : i + delta + windowSize);
if ~any(isnan(s1Dat)) && ~any(isnan(s2Dat))
if(var(s1Dat) ~= 0 || var(s2Dat) ~= 0)
sizeNum = sizeNum + 1;
correlation(i) = abs(corr(s1Dat, s2Dat)) ^ 2;
end
end
end
What's happening here:
Run through every value in s1. For each value i, take a slice of s1 from i to i + windowSize.
Do the same for s2, only take the slice after an intermediate delta.
If there are no NaNs in either of the two slices and they aren't flat, then compute the correlation between them and add it to the correlation array.
This is not an answer; I am trying to understand what is being asked.
Take some data:
N = 1e4;
s1 = cumsum(randn(N, 1)); s2 = cumsum(randn(N, 1));
s1(randi(N, 50, 1)) = NaN; s2(randi(N, 50, 1)) = NaN;
windowSize = 200; delta = 100;
Compute correlations:
tic
corr_s = zeros(N - windowSize - delta, 1);
for i = 1:(N - windowSize - delta)
s1Dat = s1(i:(i + windowSize));
s2Dat = s2((i + delta):(i + delta + windowSize));
corr_s(i) = corr(s1Dat, s2Dat);
end
inds = isnan(corr_s);
corr_s(inds) = 0;
corr_s = corr_s .^ 2; % square of correlation coefficient??? Why?
sizeNum = sum(~inds);
toc
This is what you want to do, right? A moving window correlation function? This is a very interesting question indeed …
I'm using MATLAB to implement a multilayer neural network. In the code I represent
the value of each node as netValue{k}
the weight between layer k and k + 1 as weight{k}
etc.
Since these data are three-dimensional, I have to use cell arrays that hold 2-D matrices so that matrix multiplication works.
This makes training the model really, really slow, which I suspect is caused by the use of cell arrays.
Can anyone tell me how to accelerate this code? Thanks.
clc;
close all;
clear all;
input = [-2 : 0.4 : 2;-2:0.4:2];
ican = 4;
depth = 4; % total layers - 1, by convention
[featureNum , sampleNum] = size(input);
levelNum(1) = featureNum;
levelNum(2) = 5;
levelNum(3) = 5;
levelNum(4) = 5;
levelNum(5) = 2;
weight = cell(0);
for k = 1 : depth
weight{k} = rand(levelNum(k+1), levelNum(k)) - 2 * rand(levelNum(k+1) , levelNum(k));
threshold{k} = rand(levelNum(k+1) , 1) - 2 * rand(levelNum(k+1) , 1);
end
runCount = 0;
sumMSE = 1; % init MSE
minError = 1e-5;
afa = 0.1; % step size for gradient descent
% training loop
while (runCount < 100000 && sumMSE > minError)
sumMSE = 0; % sum of MSE
for i = 1 : sampleNum % sample loop
netValue{1} = input(:,i);
for k = 2 : depth
netValue{k} = weight{k-1} * netValue{k-1} + threshold{k-1}; %calculate each layer
netValue{k} = 1 ./ (1 + exp(-netValue{k})); %apply logistic function
end
netValue{depth+1} = weight{depth} * netValue{depth} + threshold{depth}; %output layer
e = 1 + sin((pi / 4) * ican * netValue{1}) - netValue{depth + 1}; %calc error
assistS{depth} = diag(ones(size(netValue{depth+1})));
s{depth} = -2 * assistS{depth} * e;
for k = depth - 1 : -1 : 1
assistS{k} = diag((1-netValue{k+1}).*netValue{k+1});
s{k} = assistS{k} * weight{k+1}' * s{k+1};
end
for k = 1 : depth
weight{k} = weight{k} - afa * s{k} * netValue{k}';
threshold{k} = threshold{k} - afa * s{k};
end
sumMSE = sumMSE + e' * e;
end
sumMSE = sqrt(sumMSE) / sampleNum;
runCount = runCount + 1;
end
x = [-2 : 0.1 : 2;-2:0.1:2];
y = zeros(size(x));
z = 1 + sin((pi / 4) * ican .* x);
% test
for i = 1 : length(x)
netValue{1} = x(:,i);
for k = 2 : depth
netValue{k} = weight{k-1} * netValue{k-1} + threshold{k-1};
netValue{k} = 1 ./ ( 1 + exp(-netValue{k}));
end
y(:, i) = weight{depth} * netValue{depth} + threshold{depth};
end
plot(x(1,:) , y(1,:) , 'r');
hold on;
plot(x(1,:) , z(1,:) , 'g');
hold off;
Have you used the profiler to find out what functions are actually slowing down your code? It shows what lines take the most time to execute.
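A minimal sketch of how to run it (profile on, profile viewer, and profile off are built-in MATLAB commands; the script name is hypothetical):
profile on          % start collecting timing data
my_training_script  % run the code you want to measure (hypothetical name)
profile viewer      % open the per-function, per-line timing report
profile off         % stop collecting
If the report points at the per-sample training loop, batching all samples into one matrix multiply per layer is a common next step.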