Comparing Image Pixel Size in .bat - Windows

I'm trying to get the pixel dimensions of images so I can compare them against one of two standards: 1920x1080 or 1080x1920. If an image doesn't match either standard, I want to delete it. I'm doing this so that when I pull images from the Windows Spotlight folder I can sort them and delete the files that don't match. I don't have much yet, but here is my repository of code:
xcopy %localappdata%\Packages\Microsoft.Windows.ContentDeliveryManager_cw5n1h2txyewy\LocalState\Assets "C:\Users\%Username%\Pictures\Windows Spotlight\" /i
cd /d "C:\Users\%Username%\Pictures\Windows Spotlight"
ren * *.jpg
https://github.com/CamoJackson/WinSpotlight-Copy. I looked into WMIC, but I don't know enough about it. Any ideas?
Edit: I looked at the directory, and the files I'm trying to keep are all above 200 MB while the files I don't want are under 100 MB, so I can compare file sizes with %%~zA instead.
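A minimal sketch of that size-based cleanup, using the %%~zA size modifier mentioned above. The folder path and the 100 MB cutoff are assumptions taken from the question and should be adjusted:

```batch
@echo off
rem Delete every file below the size cutoff (in bytes).
rem 104857600 bytes = 100 MB, based on the sizes observed above (assumption).
set "cutoff=104857600"
for %%A in ("C:\Users\%Username%\Pictures\Windows Spotlight\*.jpg") do (
    rem %%~zA expands to the file's size in bytes
    if %%~zA LSS %cutoff% del "%%~fA"
)
```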

Or, using BAT/VBS: for the image dimensions, use the value 31.
Example:
GetMediaInfo.bat "Path_to_the_folder" "Image_name" 31
::GetMediaInfo.bat
::By SachaDee - 2016
::Usage
::GetMediaInfo.bat "Folder" "File" "Value of the Info to GET"
::Possible Value Example :
:: 27 = Media Duration for video or music files
:: 28 = Bits Rate in Kbs/s
:: 31 = Dimensions of an image
::Output
::Media information
@echo off
If not exist "#.vbs" call:construct
For /f "delims=" %%a in ('cscript //nologo #.vbs "%~1" "%~2" "%~3"') do set $MediaInfo=%%a
echo %$MediaInfo%
exit /b
:construct
(echo.dim objShell&echo.dim objFolder&echo.dim objFolderItem&echo.set objShell = CreateObject("shell.application"^)&echo.set objFolder = objShell.NameSpace(wscript.arguments(0^)^)&echo.set objFolderItem = objFolder.ParseName(wscript.arguments(1^)^)&echo.dim objInfo&echo.objInfo = objFolder.GetDetailsOf(objFolderItem, wscript.arguments(2^)^)&echo.wscript.echo objinfo)>#.vbs
List of possible values (depends on the file type):
Name - 0
Size - 1
Item type - 2
Date modified - 3
Date created - 4
Date accessed - 5
Attributes - 6
Offline status - 7
Offline availability - 8
Perceived type - 9
Owner - 10
Kind - 11
Date taken - 12
Contributing artists - 13
Album - 14
Year - 15
Genre - 16
Conductors - 17
Tags - 18
Rating - 19
Authors - 20
Title - 21
Subject - 22
Categories - 23
Comments - 24
Copyright - 25
Length - 27
Bit rate - 28
Protected - 29
Camera model - 30
Dimensions - 31
Camera maker - 32
Company - 33
File description - 34
Program name - 35
Duration - 36
Is online - 37
Is recurring - 38
Location - 39
Optional attendee addresses - 40
Optional attendees - 41
Organizer address - 42
Organizer name - 43
Reminder time - 44
Required attendee addresses - 45
Required attendees - 46
Resources - 47
Meeting status - 48
Free/busy status - 49
Total size - 50
Account name - 51
Task status - 52
Computer - 53
Anniversary - 54
Assistant's name - 55
Assistant's phone - 56
Birthday - 57
Business address - 58
Business city - 59
Business P.O. box - 60
Business postal code - 61
Business state or province - 62
Business street - 63
Business fax - 64
Business home page - 65
Business phone - 66
Callback number - 67
Car phone - 68
Children - 69
Company main phone - 70
Department - 71
E-mail address - 72
E-mail2 - 73
E-mail3 - 74
E-mail list - 75
E-mail display name - 76
File as - 77
First name - 78
Full name - 79
Gender - 80
Given name - 81
Hobbies - 82
Home address - 83
Home city - 84
Home country/region - 85
Home P.O. box - 86
Home postal code - 87
Home state or province - 88
Home street - 89
Home fax - 90
Home phone - 91
IM addresses - 92
Initials - 93
Job title - 94
Label - 95
Last name - 96
Mailing address - 97
Middle name - 98
Cell phone - 99
Cell phone - 100
Nickname - 101
Office location - 102
Other address - 103
Other city - 104
Other country/region - 105
Other P.O. box - 106
Other postal code - 107
Other state or province - 108
Other street - 109
Pager - 110
Personal title - 111
City - 112
Country/region - 113
P.O. box - 114
Postal code - 115
State or province - 116
Street - 117
Primary e-mail - 118
Primary phone - 119
Profession - 120
Spouse/Partner - 121
Suffix - 122
TTY/TTD phone - 123
Telex - 124
Webpage - 125
Content status - 126
Content type - 127
Date acquired - 128
Date archived - 129
Date completed - 130
Device category - 131
Connected - 132
Discovery method - 133
Friendly name - 134
Local computer - 135
Manufacturer - 136
Model - 137
Paired - 138
Classification - 139
Status - 140
Client ID - 141
Contributors - 142
Content created - 143
Last printed - 144
Date last saved - 145
Division - 146
Document ID - 147
Pages - 148
Slides - 149
Total editing time - 150
Word count - 151
Due date - 152
End date - 153
File count - 154
Filename - 155
File version - 156
Flag color - 157
Flag status - 158
Space free - 159
Bit depth - 160
Horizontal resolution - 161
Width - 162
Vertical resolution - 163
Height - 164
Importance - 165
Is attachment - 166
Is deleted - 167
Encryption status - 168
Has flag - 169
Is completed - 170
Incomplete - 171
Read status - 172
Shared - 173
Creators - 174
Date - 175
Folder name - 176
Folder path - 177
Folder - 178
Participants - 179
Path - 180
By location - 181
Type - 182
Contact names - 183
Entry type - 184
Language - 185
Date visited - 186
Description - 187
Link status - 188
Link target - 189
URL - 190
Media created - 191
Date released - 192
Encoded by - 193
Producers - 194
Publisher - 195
Subtitle - 196
User web URL - 197
Writers - 198
Attachments - 199
Bcc addresses - 200
Bcc - 201
Cc addresses - 202
Cc - 203
Conversation ID - 204
Date received - 205
Date sent - 206
From addresses - 207
From - 208
Has attachments - 209
Sender address - 210
Sender name - 211
Store - 212
To addresses - 213
To do title - 214
To - 215
Mileage - 216
Album artist - 217
Album ID - 218
Beats-per-minute - 219
Composers - 220
Initial key - 221
Part of a compilation - 222
Mood - 223
Part of set - 224
Period - 225
Color - 226
Parental rating - 227
Parental rating reason - 228
Space used - 229
EXIF version - 230
Event - 231
Exposure bias - 232
Exposure program - 233
Exposure time - 234
F-stop - 235
Flash mode - 236
Focal length - 237
35mm focal length - 238
ISO speed - 239
Lens maker - 240
Lens model - 241
Light source - 242
Max aperture - 243
Metering mode - 244
Orientation - 245
People - 246
Program mode - 247
Saturation - 248
Subject distance - 249
White balance - 250
Priority - 251
Project - 252
Channel number - 253
Episode name - 254
Closed captioning - 255
Rerun - 256
SAP - 257
Broadcast date - 258
Program description - 259
Recording time - 260
Station call sign - 261
Station name - 262
Summary - 263
Snippets - 264
Auto summary - 265
Search ranking - 266
Sensitivity - 267
Shared with - 268
Sharing status - 269
Product name - 270
Product version - 271
Support link - 272
Source - 273
Start date - 274
Billing information - 275
Complete - 276
Task owner - 277
Total file size - 278
Legal trademarks - 279
Video compression - 280
Directors - 281
Data rate - 282
Frame height - 283
Frame rate - 284
Frame width - 285
Total bitrate - 286
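Tying this back to the original question: a sketch that reads property 31 for each image and deletes files whose dimensions are neither 1920 x 1080 nor 1080 x 1920. The folder path is an assumption, and note that Explorer wraps the numbers of the "Dimensions" string in invisible left-to-right marks, so the check below looks for the bare numbers rather than the full "1920 x 1080" string:

```batch
@echo off
setlocal enabledelayedexpansion
rem Folder path is an assumption - point it at the copied Spotlight images.
set "dir=C:\Users\%Username%\Pictures\Windows Spotlight"
for %%A in ("%dir%\*.jpg") do (
    rem Property 31 = Dimensions, e.g. "1920 x 1080" (with invisible marks)
    for /f "delims=" %%D in ('GetMediaInfo.bat "%dir%" "%%~nxA" 31') do set "dim=%%D"
    rem Keep only images containing both 1920 and 1080, in either order
    echo(!dim!| findstr "1920" >nul && echo(!dim!| findstr "1080" >nul || del "%%~fA"
)
```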


Can't generate any alignments in MCScanX

I'm trying to find collinearity between a group of genes from two different species using MCScanX. But I don't know what I could be possibly doing wrong anymore. I've checked both input files countless times (.gff and .blast), and they seem to be in line with what the manual says.
For the first species, I downloaded the gff file from figshare. I already had the fasta file containing only the proteins of interest (also from figshare), so the gene ids matched. Then I downloaded both the gff and the protein fasta file from the coffee genome hub. I used the coffee protein fasta file as the reference in rBLAST to align the first species' genes against it. After blasting (and keeping only the five best alignments with e-values greater than 1e-10), I filtered both gff files so they only contained genes that matched those in the blast file, and then concatenated them. So the final files look like this:
View (test.blast) # tab-separated values, one alignment per line
sp1.id1 sp2.id1 44.186 43 20 1 369 411 206 244 0.013 37.4
sp1.id1 sp2.id2 25.203 123 80 4 301 413 542 662 0.00029 43.5
sp1.id1 sp2.id3 27.843 255 130 15 97 333 458 676 1.75e-05 47.8
sp1.id1 sp2.id4 26.667 105 65 3 301 396 329 430 0.004 39.7
sp1.id1 sp2.id5 27.103 107 71 3 301 402 356 460 0.000217 43.5
sp1.id2 sp2.id6 27.368 95 58 2 40 132 54 139 0.41 32
sp1.id2 sp2.id7 27.5 120 82 3 23 138 770 888 0.042 35
sp1.id2 sp2.id8 38.596 57 35 0 21 77 126 182 0.000217 42
sp1.id2 sp2.id9 36.17 94 56 2 39 129 633 725 1.01e-05 46.6
sp1.id2 sp2.id10 37.288 59 34 2 75 133 345 400 0.000105 43.1
sp1.id3 sp2.id11 33.846 65 42 1 449 512 360 424 0.038 37.4
sp1.id3 sp2.id12 40 50 16 2 676 725 672 707 6.7 30
sp1.id3 sp2.id13 31.707 41 25 1 370 410 113 150 2.3 30.4
sp1.id3 sp2.id14 31.081 74 45 1 483 550 1 74 3.3 30
sp1.id3 sp2.id15 35.938 64 39 1 377 438 150 213 0.000185 43.5
View (test.gff) # tab-separated values, one gene per line
ex0 sp2.id1 78543527 78548673
ex0 sp2.id2 97152108 97154783
ex1 sp2.id3 16555894 16557150
ex2 sp2.id4 3166320 3168862
ex3 sp2.id5 7206652 7209129
ex4 sp2.id6 5079355 5084496
ex5 sp2.id7 27162800 27167939
ex6 sp2.id8 5584698 5589330
ex6 sp2.id9 7085405 7087405
ex7 sp2.id10 1105021 1109131
ex8 sp2.id11 24426286 24430072
ex9 sp2.id12 2734060 2737246
ex9 sp2.id13 179361 183499
ex10 sp2.id14 893983 899296
ex11 sp2.id15 23731978 23733073
ts1 sp1.id1 5444897 5448367
ts2 sp1.id2 28930274 28935578
ts3 sp1.id3 10716894 10721909
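For reference, the gff-filtering step described above (keeping only the entries whose gene ids appear in the blast table) can be sketched in Python. The file names and the tab-separated layouts are assumptions based on the views shown:

```python
def filter_gff(blast_path, gff_path, out_path):
    """Write only the GFF lines whose gene id (column 2) appears in
    either of the first two columns of the tab-separated BLAST file."""
    hit_ids = set()
    with open(blast_path) as blast:
        for line in blast:
            cols = line.rstrip("\n").split("\t")
            if len(cols) >= 2:
                hit_ids.update(cols[:2])  # query id and subject id
    with open(gff_path) as gff, open(out_path, "w") as out:
        for line in gff:
            cols = line.rstrip("\n").split("\t")
            if len(cols) >= 2 and cols[1] in hit_ids:
                out.write(line)
```

Running it over both gff files and concatenating the two outputs would reproduce a combined file like test.gff above.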
So I moved both files to the test folder inside MCScanX directory and ran MCScan (using Ubuntu 20.04.5 LTS, the WSL feature) with:
../MCScanX ./test
I've also tried
../MCScanX -b 2 ./test
(since "-b 2" is the parameter for inter-species patterns of syntenic blocks)
but all I ever get is
255 matches imported (17 discarded)
85 pairwise comparisons
0 alignments generated
What am I missing?
I should be getting a test.synteny file that, as per the manual's example, looks like this:
## Alignment 0: score=9171.0 e_value=0 N=187 at1&at1 plus
0- 0: AT1G17240 AT1G72300 0
0- 1: AT1G17290 AT1G72330 0
...
0-185: AT1G22330 AT1G78260 1e-63
0-186: AT1G22340 AT1G78270 3e-174
##Alignment 1: score=5084.0 e_value=5.6e-251 N=106 at1&at1 plus

Jmeter + InfluxDB: Response Codes are missing

I have an InfluxDB v1.7.9 installation and my Jmeter v5.2 is correctly sending data to it through the default Backend Listener (org.apache.jmeter.visualizers.backend.influxdb.HttpMetricsSender). I can see the data when querying the database.
Sample here:
time application avg count countError endedT hit max maxAT meanAT min minAT pct10.0 pct90.0 pct95.0 pct99.0 rb responseCode responseMessage sb startedT statut transaction
---- ----------- --- ----- ---------- ------ --- --- ----- ------ --- ----- ------- ------- ------- ------- -- ------------ --------------- -- -------- ------ -----------
1579001235935000000 grafanapoc-14-01-2020-1126 0 0 0 0 0 internal
1579001240085000000 grafanapoc-14-01-2020-1126 0 0 0 0 11 internal
1579001245091000000 grafanapoc-14-01-2020-1126 586.3529411764706 17 0 195 1177 197 246.6 1126.6 1177 1177 6302301 64159 all all
1579001245098000000 grafanapoc-14-01-2020-1126 197 1 197 197 197 197 197 197 10470 633 all GET - Page
1579001245100000000 grafanapoc-14-01-2020-1126 197 1 197 197 197 197 197 197 ok GET - Page
1579001245102000000 grafanapoc-14-01-2020-1126 259 1 259 259 259 259 259 259 9827 643 all GET - Privacy
1579001245102000000 grafanapoc-14-01-2020-1126 259 1 259 259 259 259 259 259 ok GET - Privacy
1579001245104000000 grafanapoc-14-01-2020-1126 710.8333333333334 12 1177 434 452.6 1158.1000000000001 1177 1177 6168994 56448 all GET - Homepage
1579001245106000000 grafanapoc-14-01-2020-1126 710.8333333333334 12 1177 434 452.6 1158.1000000000001 1177 1177 ok GET - Homepage
1579001245107000000 grafanapoc-14-01-2020-1126 327.3333333333333 3 387 273 273 387 387 387 ok GET - Contact
1579001245107000000 grafanapoc-14-01-2020-1126 327.3333333333333 3 387 273 273 387 387 387 113010 6435 all GET - Contact
1579001245109000000 grafanapoc-14-01-2020-1126 0 23 18 12 23 internal
1579001250083000000 grafanapoc-14-01-2020-1126 411.16666666666674 25 0 197 1177 143 179 712.0000000000001 1059.7000000000005 1177 5350040 69699 all all
However, as you can see from this sample, the 'responseCode' column is empty and only displays data when an error occurs (500, 404, Non HTTP response code, etc).
I am interested in recording all the Response Codes, not just errors.
I attempted to amend the jmeter.properties file defaults, without success. Can anyone help me identify the reason why the Response Codes for successful requests are not parsed over?
As of JMeter 5.2, the response code and message are stored only for failed samplers:
private void addErrorMetric(String transaction, ErrorMetric err, long count) {
//
tag.append(TAG_RESPONSE_CODE).append(AbstractInfluxdbMetricsSender.tagToStringValue(err.getResponseCode()));
tag.append(TAG_RESPONSE_MESSAGE).append(AbstractInfluxdbMetricsSender.tagToStringValue(err.getResponseMessage()));
//
Unfortunately this is not something you can control via JMeter properties; if you want to change this behaviour, you need to amend InfluxdbBackendListenerClient and rebuild JMeter from source.

MPICH output not printing

Problem
I'm running a cp2k executable installed on an HPC cluster using mpich-3.2. The output from the executable is printed to an out file. The problem is that nothing more is written to the out file after a few steps are printed, yet when I check the status of my job on the cluster, it turns out it is still running. In short: my job keeps running, but its output stops being printed.
Script
I'm using the following job script:
#!/bin/bash
#PBS -N test
#PBS -o test.log
#PBS -j oe
#PBS -l nodes=2:ppn=20
#PBS -q mini
#PBS -l walltime=2:00:00
cd $PBS_O_WORKDIR
echo Master process running on `hostname`
echo Directory is `pwd`
echo PBS has allocated the following nodes:
echo `cat $PBS_NODEFILE`
NPROCS=`wc -l < $PBS_NODEFILE`
echo This job has allocated $NPROCS nodes
export I_MPI_FABRICS=shm:dapl
export I_MPI_PROVIDER=psm2
export I_MPI_FALLBACK=0
export KMP_AFFINITY=verbose,scatter
export OMP_NUM_THREADS=1
export I_MPI_IFACE=ib0
echo Starting execution at `date`
EXEC="/home/arshil/software/cp2k-5.1.0/exe/local/cp2k.popt"
cp $EXEC ./cp2k
mpiexec -np $NPROCS --machinefile $PBS_NODEFILE ./cp2k -i test.inp >& out
rm cp2k
echo Finished at `date`
Error
The output in the out file:
SCF WAVEFUNCTION OPTIMIZATION
----------------------------------- OT ---------------------------------------
Minimizer : DIIS : direct inversion
in the iterative subspace
using 7 DIIS vectors
safer DIIS on
Preconditioner : FULL_SINGLE_INVERSE : inversion of
H + eS - 2*(Sc)(c^T*H*c+const)(Sc)^T
Precond_solver : DEFAULT
stepsize : 0.08000000 energy_gap : 0.08000000
eps_taylor : 0.10000E-15 max_taylor : 4
----------------------------------- OT ---------------------------------------
Step Update method Time Convergence Total energy Change
------------------------------------------------------------------------------
1 OT DIIS 0.80E-01 21.3 0.00002878 -8797.2068024142 -8.80E+03
2 OT DIIS 0.80E-01 10.9 0.00007114 -8797.2061897209 6.13E-04
3 OT DIIS 0.80E-01 10.8 0.00001688 -8797.2073257531 -1.14E-03
As can be seen, there is no printing after step 3 in the output file, but the job is still running in the background. Even after the walltime is over, the output file remains the same as above. Where is the output going?
The executable cp2k is used to perform quantum chemical calculations and was installed on the cluster along with mpich-3.2. All CP2K needs is an input file with extension .inp. For my case, test.inp is the input file.
&FORCE_EVAL
METHOD Quickstep
&DFT
BASIS_SET_FILE_NAME GTH_BASIS_SETS
POTENTIAL_FILE_NAME GTH_POTENTIALS
&MGRID
NGRIDS 4
CUTOFF 380
REL_CUTOFF 60
&END MGRID
&QS
METHOD GPW
MAP_CONSISTENT
EXTRAPOLATION ASPC
EXTRAPOLATION_ORDER 3
&END QS
&SCF
MAX_SCF 1000
EPS_SCF 1.0E-5
SCF_GUESS ATOMIC
&OT
PRECONDITIONER FULL_SINGLE_INVERSE
MINIMIZER DIIS
N_DIIS 7
&END OT
&PRINT
&RESTART OFF
&END RESTART
&END PRINT
&END SCF
&XC
&XC_FUNCTIONAL PBE
&END XC_FUNCTIONAL
&vdW_POTENTIAL
DISPERSION_FUNCTIONAL PAIR_POTENTIAL
&PAIR_POTENTIAL
PARAMETER_FILE_NAME dftd3.dat
TYPE DFTD3
REFERENCE_FUNCTIONAL PBE
R_CUTOFF [angstrom] 12.3
&END PAIR_POTENTIAL
&END vdW_POTENTIAL
&END XC
&END DFT
&SUBSYS
&CELL
ABC 24.6904 24.6904 24.6904
PERIODIC XYZ
&END CELL
&KIND C
BASIS_SET TZV2P-GTH
POTENTIAL GTH-PBE-q4
&END KIND
&KIND P
BASIS_SET TZV2P-GTH
POTENTIAL GTH-PBE-q5
&END KIND
&KIND H
BASIS_SET TZV2P-GTH
POTENTIAL GTH-PBE-q1
&END KIND
&KIND O
BASIS_SET TZV2P-GTH
POTENTIAL GTH-PBE-q6
&END KIND
&KIND N
BASIS_SET TZV2P-GTH
POTENTIAL GTH-PBE-q5
&END KIND
&KIND Mg
BASIS_SET TZV2P-GTH
POTENTIAL GTH-PBE-q10
&END KIND
&COLVAR
&COORDINATION
ATOMS_FROM 41
ATOMS_TO 38
R_0 [bohr] 4.5
NN 6
ND 12
&END COORDINATION
&END COLVAR
&COLVAR
&COORDINATION
ATOMS_FROM 41
ATOMS_TO 42 44 47 50 53 56 59 62 65 68 71 74 77 80 83 86 89 92 95 98 101 104 107 110 113 116 119 122 125 128 131 134 137 140 143 146 149 152 155 158 161 164 167 170 173 176 179 182 185 188 191 194 197 200 203 206 209 212 215 218 221 224 227 230 233 236 239 242 245 248 251 254 257 260 263 266 269 272 275 278 281 284 287 290 293 296 299 302 305 308 311 314 317 320 323 326 329 332 335 338 341 344 347 350 353 356 359 362 365 368 371 374 377 380 383 386 389 392 395 398 401 404 407 410 413 416 419 422 425 428 431 434 437 440 443 446 449 452 455 458 461 464 467 470 473 476 479 482 485 488 491 494 497 500 503 506 509 512 515 518 521 524 527 530 533 536 539 542 545 548 551 554 557 560 563 566 569 572 575 578 581 584 587 590 593 596 599 602 605 608 611 614 617 620 623 626 629 632 635 638 641 644 647 650 653 656 659 662 665 668 671 674 677 680 683 686 689 692 695 698 701 704 707 710 713 716 719 722 725 728 731 734 737 740 743 746 749 752 755 758 761 764 767 770 773 776 779 782 785 788 791 794 797 800 803 806 809 812 815 818 821 824 827 830 833 836 839 842 845 848 851 854 857 860 863 866 869 872 875 878 881 884 887 890 893 896 899 902 905 908 911 914 917 920 923 926 929 932 935 938 941 944 947 950 953 956 959 962 965 968 971 974 977 980 983 986 989 992 995 998 1001 1004 1007 1010 1013 1016 1019 1022 1025 1028 1031 1034 1037 1040 1043 1046 1049 1052 1055 1058 1061 1064 1067 1070 1073 1076 1079 1082 1085 1088 1091 1094 1097 1100 1103 1106 1109 1112 1115 1118 1121 1124 1127 1130 1133 1136 1139 1142 1145 1148 1151 1154 1157 1160 1163 1166 1169 1172 1175 1178 1181 1184 1187 1190 1193 1196 1199 1202 1205 1208 1211 1214 1217 1220 1223 1226 1229 1232 1235 1238 1241 1244 1247 1250 1253 1256 1259 1262 1265 1268 1271 1274 1277 1280 1283 1286 1289 1292 1295 1298 1301 1304 1307 1310 1313 1316 1319 1322 1325 1328 1331 1334 1337 1340 1343 1346 1349 1352 1355 1358 1361 1364 1367 1370 1373 1376 1379 1382 1385 1388 1391 1394 1397 1400 1403 1406 1409 1412 1415 1418 1421 1424 1427 1430 1433 1436 
1439 1442 1445 1448 1451 1454 1457
ATOMS_TO 1460 1463 1466 1469 1472 1475 1478 1481 1484 1487 1490 1493 1496 1499 1502 1505
R_0 [bohr] 4.5
NN 6
ND 12
&END COORDINATION
&END COLVAR
&END SUBSYS
&END FORCE_EVAL
&GLOBAL
PROJECT test
RUN_TYPE MD
PRINT_LEVEL LOW
&END GLOBAL
&MOTION
&MD
ENSEMBLE NVT
STEPS 100000
TIMESTEP 0.5
TEMPERATURE 310
TEMP_TOL 100
&THERMOSTAT
&NOSE
LENGTH 3
YOSHIDA 3
TIMECON 100.0
MTS 2
&END NOSE
&END
&PRINT
&ENERGY
&EACH
MD 10
&END
&END
&PROGRAM_RUN_INFO
&EACH
MD 100
&END
&END
FORCE_LAST
&END PRINT
&END MD
&FREE_ENERGY
&METADYN
DO_HILLS
LAGRANGE .TRUE.
NT_HILLS 40
WW [kcalmol] 1
TEMPERATURE 310
TEMP_TOL 10
&METAVAR
SCALE 0.05
COLVAR 1
MASS 50
LAMBDA 2
&WALL
POSITION 0.0
TYPE QUARTIC
&QUARTIC
DIRECTION WALL_MINUS
K 10.0
&END
&END
&END METAVAR
&METAVAR
SCALE 0.05
COLVAR 2
MASS 50
LAMBDA 2
&WALL
POSITION 0.0
TYPE QUARTIC
&QUARTIC
DIRECTION WALL_MINUS
K 10.0
&END
&END
&END METAVAR
&PRINT
&COLVAR
COMMON_ITERATION_LEVELS 3
&EACH
MD 1
&END
&END
&HILLS
COMMON_ITERATION_LEVELS 3
&EACH
MD 1
&END
&END
&END
&END METADYN
&END
&PRINT
&TRAJECTORY
&EACH
MD 1
&END
&END
&VELOCITIES OFF
&END
&RESTART
&EACH
MD 20
&END
ADD_LAST NUMERIC
&END
&RESTART_HISTORY
&EACH
MD 2000
&END
&END
&END
&END MOTION
&EXT_RESTART
RESTART_FILE_NAME NVT-1.restart
RESTART_COUNTERS .FALSE.
&END
The problem, in my opinion, is not with the input file. It has something to do with mpich-3.2. I would really appreciate some help.
This may be similar to Python "print" not working when embedded into MPI program, with solutions that can be used here. It is not a perfect match, since you are not using Python, but it may help.
At a basic level, MPI launches many processes, but only the command that launches them has access to stdio. The redirect at the end of the line starting with mpiexec sends the stdout of mpiexec to a file, and the output from your program is buffered by mpiexec until the processes end (either they complete or they are stopped).
Where the output is going is a good question, and answering it may require changes in test.inp or some other way of shutting down (you mention you ran out of walltime). I'm looking to solve the same problem and will update this if I find an answer.
Also, the output from the different processes started by MPI can arrive in random order. I do not care about this, but if you do, you may need to pass the messages back to some common thread that sorts them.
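One common mitigation for this kind of buffering, assuming GNU coreutils is available on the compute nodes, is to force line-buffered output with `stdbuf`, so each line reaches the out file as it is produced. Whether this helps depends on where the buffering actually happens (the program, mpiexec, or the filesystem):

```shell
#!/bin/sh
# In the job script, the idea would be to wrap the launched program
# (an assumption about the cluster environment; adjust to the real paths):
#   mpiexec -np $NPROCS --machinefile $PBS_NODEFILE stdbuf -oL ./cp2k -i test.inp > out 2>&1
# Minimal local demonstration that stdbuf -oL passes lines through unchanged:
stdbuf -oL printf 'step 1\nstep 2\n'
```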

Creating frequency interval in Crystal Report

I am trying to create a dataset of frequency intervals in Crystal Report, something like below. The first column is the row id, the second is the interval start, the third is the interval end, and the fourth is the interval name.
1 0 29 0 - 29
2 30 59 30 - 59
3 60 89 60 - 89
4 90 119 90 - 119
5 120 149 120 - 149
6 150 179 150 - 179
7 180 209 180 - 209
8 210 239 210 - 239
9 240 269 240 - 269
10 270 299 270 - 299
11 300 329 300 - 329
12 330 359 330 - 359
13 360 389 360 - 389
14 390 419 390 - 419
15 420 449 420 - 449
16 450 479 450 - 479
17 480 509 480 - 509
18 510 539 510 - 539
19 540 569 540 - 569
20 570 599 570 - 599
21 600 629 600 - 629
22 630 659 630 - 659
23 660 689 660 - 689
24 690 719 690 - 719
25 720 749 720 - 749
26 750 779 750 - 779
27 780 809 780 - 809
28 810 839 810 - 839
29 840 869 840 - 869
30 870 899 870 - 899
Can I write a CTE to generate this interval table so that I can use it directly in Crystal Report, without writing a function on the database side? Below is the code I wrote:
declare intervalStart integer := 0;
intervalEnd integer := 900;
intervalMins integer := 30;
totalIntervals number := 0;
begin
begin
execute immediate 'create global temporary table intervalTable (row_Id int not null, intStart integer, intEnd integer, intervalName varchar2(25))ON COMMIT DELETE ROWS';
exception when others then dbms_output.put_line(sqlerrm);
end;
totalIntervals := intervalEnd/intervalMins;
--dbms_output.put_line(totalIntervals);
for i in 1 ..totalIntervals loop
intervalStart := 0;
intervalEnd := 0;
intervalStart := intervalStart + (i-1)*intervalMins;
intervalEnd := intervalEnd + (i*intervalMins)-1;
--dbms_output.put_line(intervalStart || ' - ' || intervalEnd);
insert into intervalTable
(
row_id,
intStart,
intEnd,
intervalName
)
values(i, intervalStart, intervalEnd, (intervalStart || ' - ' || intervalEnd));
end loop;
end;
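As a quick cross-check of the interval arithmetic in the loop above, here is the same computation in plain Python (illustrative only; the names mirror the PL/SQL variables):

```python
interval_mins = 30
interval_end = 900  # exclusive upper bound of the last interval
total_intervals = interval_end // interval_mins

rows = []
for i in range(1, total_intervals + 1):
    start = (i - 1) * interval_mins
    end = i * interval_mins - 1
    rows.append((i, start, end, f"{start} - {end}"))

print(rows[0])   # first interval:  (1, 0, 29, '0 - 29')
print(rows[-1])  # last interval:   (30, 870, 899, '870 - 899')
```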
I think you want something like this:
with freq_data as (
select level as id, (level-1)*30 as start_interval, ((level-1)*30) + 29 as end_interval, (level-1)*30 || ' - ' || to_char(((level-1)*30) + 29) as label
from dual
connect by level <= 30
order by level
)
select * from freq_data;
Output
ID START_INTERVAL END_INTERVAL LABEL
1 0 29 0 - 29
2 30 59 30 - 59
3 60 89 60 - 89
4 90 119 90 - 119
5 120 149 120 - 149
6 150 179 150 - 179
7 180 209 180 - 209
8 210 239 210 - 239
9 240 269 240 - 269
10 270 299 270 - 299
11 300 329 300 - 329
12 330 359 330 - 359
13 360 389 360 - 389
14 390 419 390 - 419
15 420 449 420 - 449
16 450 479 450 - 479
17 480 509 480 - 509
18 510 539 510 - 539
19 540 569 540 - 569
20 570 599 570 - 599
21 600 629 600 - 629
22 630 659 630 - 659
23 660 689 660 - 689
24 690 719 690 - 719
25 720 749 720 - 749
26 750 779 750 - 779
27 780 809 780 - 809
28 810 839 810 - 839
29 840 869 840 - 869
30 870 899 870 - 899
An example using the above in a join query:
create table my_test
(
num number
-- other important data ...
);
-- insert some random numbers
insert into my_test
select trunc(DBMS_RANDOM.VALUE(0,900))
from dual
connect by level <= 10;
commit;
Now joining to get the label for each num field:
with freq_data as (
select level as id, (level-1)*30 as start_interval, ((level-1)*30) + 29 as end_interval, (level-1)*30 || ' - ' || to_char(((level-1)*30) + 29) as label
from dual
connect by level <= 30
order by level
)
select t.num, d.label
from my_test t
left join freq_data d ON (t.num between d.start_interval and d.end_interval);
Output:
NUM LABEL
64 60 - 89
73 60 - 89
128 120 - 149
154 150 - 179
267 240 - 269
328 300 - 329
550 540 - 569
586 570 - 599
745 720 - 749
795 780 - 809

Couchbase: possible reasons for 10x difference in cbs-pillowfight latency test, when running in a cluster mode

So I've started a simple test,
cbs-pillowfight -h localhost -b default -i 1 -I 10000 -T
Got:
[10717.252368] Run
+---------+---------+---------+---------+
[ 20 - 29]us |## - 257
[ 30 - 39]us |# - 106
[ 40 - 49]us |###################### - 2173
[ 50 - 59]us |################ - 1539
[ 60 - 69]us |######################################## - 3809
[ 70 - 79]us |################ - 1601
[ 80 - 89]us |## - 254
[ 90 - 99]us |# - 101
[100 - 109]us | - 43
[110 - 119]us | - 17
[120 - 129]us | - 48
[130 - 139]us | - 23
[140 - 149]us | - 14
[150 - 159]us | - 5
[160 - 169]us | - 5
[170 - 179]us | - 1
[180 - 189]us | - 3
[210 - 219]us | - 1
[270 - 279]us | - 1
+----------------------------------------
Then a cluster was created by adding this node to another i7 node.
The 'default' bucket is definitely smaller than 1 GB; it has 1 replica and 2 writers, and flush is not enabled.
Now the same command produces (both hosts used):
50% in 100-200 ns, 1% in 200-900 ns, and 49% anywhere from 900 ns up to 1-9 ms.
After adding the -r (ratio) switch set to 90% SETs:
25% in 100-200 ns, 74% around 900 ns, and the rest from 900 ns up to 1-9 ms.
So it seems that write performance suffers badly in clustered mode. Why could there be such a large, 10x drop? The network is clean, and there are no high-load services running.
UPD1.
Forgot to add the ideal case, -r 100:
25% in 100-200 ns, 74% in 900 ns.
This makes me think that:
A) the benchmark code is blocking somewhere (a quick read showed no signs of it);
B) the server is doing some non-logged magic on SETs that I don't understand how to reconfigure. Replication factor? Isn't that nonsense for such a small dataset? That's what I'm trying to ask here;
C) a network problem. But Wireshark shows nothing.
UPD2.
Stopped both nodes and moved them to tmpfs.
The "normal" responses improved by about 20 ns, but the slow responses remain slow.
..[cut]
50 - 59]us |## - 164
[ 60 - 69]us |#### - 321
[ 70 - 79]us |######## - 561
[ 80 - 89]us |########## - 701
[ 90 - 99]us |############ - 844
[100 - 109]us |########## - 717
[110 - 119]us |####### - 514
[120 - 129]us |##### - 336
[130 - 139]us |### - 230
[140 - 149]us |## - 175
[150 - 159]us |## - 135
[160 - 169]us |# - 81
..[cut]
[930 - 939]us | - 24
[940 - 949]us |## - 139
[950 - 959]us |##### - 339
[960 - 969]us |####### - 474
[970 - 979]us |####### - 534
[980 - 989]us |###### - 467
[990 - 999]us |##### - 342
[ 1 - 9]ms |######################################## - 2681
[ 10 - 19]ms | - 1
..[cut]
UPD3: screenshot.
The problem was "solved" by switching to a three-node configuration on a gigabit network.
