I am trying to make a GIF of the solution of a partial differential equation. In some related posts I have found that I should split my data into blocks as follows:
-1.000000 0.000000
-0.600000 0.000000
-0.200000 0.654508
0.200000 0.654508
0.600000 0.000000
1.000000 0.000000
1.400000 0.000000
1.800000 0.000000
2.200000 0.000000
2.600000 0.000000
3.000000 0.000000


-1.000000 0.000000
-0.600000 0.000000
-0.200000 0.163627
0.200000 0.654508
0.600000 0.490881
1.000000 0.000000
1.400000 0.000000
1.800000 0.000000
2.200000 0.000000
2.600000 0.000000
3.000000 0.000000
...
and then I have read that something like this should work:
set terminal gif animate delay 100
set output 'name.gif'
stats 'data.dat' nooutput
do for [i=1:int(STATS_blocks)]{plot 'data.dat' every i using 1:2 with lines notitle}
but I get this, whereas if I plot each data block on its own the result is completely different. What is wrong with my Gnuplot code?
I think you want index i rather than every i.
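With index, the loop from the question would look something like this (a sketch; note that index is zero-based and selects data sets separated by two blank lines, whereas every i steps through individual data points):
set terminal gif animate delay 100
set output 'name.gif'
stats 'data.dat' nooutput
do for [i=0:int(STATS_blocks)-1] {
    plot 'data.dat' index i using 1:2 with lines notitle
}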
I am converting stereo audio files to mono using ffmpeg.
ffmpeg -i $1 -ac 1 -ab 192k mono_$1
However, after conversion, the RMS and peak loudness levels are not the same.
Tests-iMac:auditions test$ ./rms.sh mono_test.mp3
mean_volume: -20.1 dB
max_volume: -0.2 dB
Peak level dB: -0.150201
RMS level dB: -20.138039
RMS peak dB: -10.650649
RMS trough dB: -94.923318
Flat factor: 0.000000
Peak count: 2.000000
Bit depth: 32/32
Number of samples: 5800320
Number of NaNs: 0.000000
Number of Infs: 0.000000
Number of denormals: 0.000000
Tests-iMac:auditions test$ ./rms.sh test.mp3
mean_volume: -22.9 dB
max_volume: -2.9 dB
Peak level dB: -2.896314
RMS level dB: -22.883812
RMS peak dB: -13.397327
RMS trough dB: -95.943631
Flat factor: 0.000000
Peak count: 2.000000
Bit depth: 32/32
Number of samples: 5800320
Number of NaNs: 0.000000
Number of Infs: 0.000000
Number of denormals: 0.000000
The first output is the mono file, which is technically louder than the stereo file, listed second. How can I preserve the peak and RMS values while also converting to mono? I have no issue with scripting in order to obtain the stereo loudness values and pass them to the mono conversion process.
Thanks!
I just needed to reduce the volume by 2.7 dB with an audio filter.
ffmpeg -i $1 -ac 1 -af "volume=-2.7dB" -ab 192k mono_$1
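If you would rather compute the offset than hard-code it, one possible approach is a two-pass script: measure the peak of the stereo input and of a first-pass mono mix with ffmpeg's volumedetect filter, then redo the mix with the difference applied. A rough, untested sketch along the lines of the rms.sh script in the question (file names are just placeholders):
#!/bin/bash
# Print the max_volume (in dB) reported by ffmpeg's volumedetect filter.
peak() {
  ffmpeg -i "$1" -af volumedetect -f null - 2>&1 | grep max_volume | awk '{print $(NF-1)}'
}

stereo_peak=$(peak "$1")                             # e.g. -2.9
ffmpeg -i "$1" -ac 1 tmp_mono.wav                    # first-pass mono mix, lossless
mono_peak=$(peak tmp_mono.wav)                       # e.g. -0.2
offset=$(echo "$stereo_peak - $mono_peak" | bc)      # e.g. -2.7
rm tmp_mono.wav

# Re-encode the mono mix from the original with the computed offset,
# so the mono peak matches the stereo peak.
ffmpeg -i "$1" -ac 1 -af "volume=${offset}dB" -ab 192k mono_"$1"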
I'm learning Ruby right now. Coming from JavaScript over the past couple of years, I'm familiar with the while loop. But the until loop? I've looked around but couldn't find a solid reason why one would be better than the other.
Ruby has "until" which is described as another way to phrase a problem. The way I see it, "while" iterates until false, and "until" iterates until true.
I'm sure that most of the programs I write won't really need refactoring for speed. However, I like to get into the little details sometimes.
Is there a speed difference between the two loops? Why is there an "until" syntax in Ruby? Why not just stick with "while"?
There would not be a speed difference between while and until as they mirror each other.
We'll compare a while loop with an until loop:
n = 0
puts n += 1 while n != 3
n = 0
puts n += 1 until n == 3
These will both print 1 through 3.
Here's a diff between the two disassembled human-readable instruction sequences from the Ruby VM:
@@ -13,7 +13,7 @@
0021 pop
0022 getlocal_OP__WC__0 2
0024 putobject 3
-0026 opt_neq <callinfo!mid:!=, argc:1, ARGS_SIMPLE>, <callcache>, <callinfo!mid:==, argc:1, ARGS_SIMPLE>, <callcache>
-0031 branchif 8
-0033 putnil
-0034 leave
+0026 opt_eq <callinfo!mid:==, argc:1, ARGS_SIMPLE>, <callcache>
+0029 branchunless 8
+0031 putnil
+0032 leave
A while loop uses branchif for its jump, whereas the until loop uses branchunless. So these loops differ only in the test used for the branch, which you can see by looking at how branchif and branchunless are defined:
DEFINE_INSN
branchif
(OFFSET dst)
(VALUE val)
()
{
if (RTEST(val)) {
RUBY_VM_CHECK_INTS(th);
JUMP(dst);
}
}
DEFINE_INSN
branchunless
(OFFSET dst)
(VALUE val)
()
{
if (!RTEST(val)) {
RUBY_VM_CHECK_INTS(th);
JUMP(dst);
}
}
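If you want to reproduce a disassembly like the one diffed above, RubyVM::InstructionSequence will generate it (the exact instructions vary by Ruby version), for example:
# Disassemble both loops; diffing the two outputs gives a listing like the one above.
puts RubyVM::InstructionSequence.compile('n = 0; puts n += 1 while n != 3').disasm
puts RubyVM::InstructionSequence.compile('n = 0; puts n += 1 until n == 3').disasm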
Performance between while and until should be nearly identical. Usage should be determined by readability.
Speed differences aside, it's really all about readability, which is something that Ruby prides itself on.
Let's pretend we're making a drink - which do you think reads better?
A) pour_drink until glass.full?
B) pour_drink while !glass.full?
Speed will be influenced more by your choice of comparison operator than by your choice of while or until.
Benchmark.bmbm do |bm|
bm.report('while') do
n = 0
n += 1 while n != 10_000_000
end
bm.report('until') do
n = 0
n += 1 until n == 10_000_000
end
end
user system total real
while 0.250000 0.000000 0.250000 ( 0.247949)
until 0.220000 0.000000 0.220000 ( 0.222049)
With while n != 10_000_000 vs. until n == 10_000_000, until appears to be faster.
Benchmark.bmbm do |bm|
bm.report('while') do
n = 0
n += 1 while n < 10_000_000
end
bm.report('until') do
n = 0
n += 1 until n == 10_000_000
end
end
user system total real
while 0.210000 0.000000 0.210000 ( 0.207265)
until 0.220000 0.000000 0.220000 ( 0.223195)
Change it to while n < 10_000_000 and now while seems to have the edge. To be fair, we should compare the more equivalent pair: while n < 10_000_000 vs. until n > 9_999_999.
Benchmark.bmbm do |bm|
bm.report('while') do
n = 0
n += 1 while n < 10_000_000
end
bm.report('until') do
n = 0
n += 1 until n > 9_999_999
end
end
user system total real
while 0.200000 0.000000 0.200000 ( 0.208428)
until 0.200000 0.000000 0.200000 ( 0.206218)
Now they're almost identical. So follow Ruby's lead and gain your satisfaction from code that reads like an English sentence. But make sure you use < or > to gain that extra boost of .0000000001 seconds.
I am trying the examples given in the Vowpal Wabbit tutorial, but I am getting an error when training from the *.cache file:
Error: 6 is too many tokens for a simple label: 8.3.0c�?�p�k>���>���L=��O�?#
second_house�p�Q8>�ޙ�>�33�>��O�??
third_house�p�?��
V$ cat house_dataset
0 | price:.23 sqft:.25 age:.05 2006
1 2 'second_house | price:.18 sqft:.15 age:.35 1976
0 1 0.5 'third_house | price:.53 sqft:.32 age:.87 1924
V$ ls -lrth
total 4.0K
-rw-r--r-- 1 A users 144 May 3 06:28 house_dataset
V$ vw --version
8.3.0
V$ vw house_dataset -c
Num weight bits = 18
learning rate = 0.5
initial_t = 0
power_t = 0.5
creating cache_file = house_dataset.cache
Reading datafile = house_dataset
num sources = 1
average since example example current current current
loss last counter weight label predict features
0.000000 0.000000 1 1.0 0.0000 0.0000 5
0.666667 1.000000 2 3.0 1.0000 0.0000 5
finished run
number of examples per pass = 4
passes used = 1
weighted example sum = 5.000000
weighted label sum = 2.000000
average loss = 0.600000
best constant = 0.500000
best constant's loss = 0.250000
total feature number = 16
V$ vw house_dataset.cache
Num weight bits = 18
learning rate = 0.5
initial_t = 0
power_t = 0.5
using no cache
Reading datafile = house_dataset.cache
num sources = 1
average since example example current current current
loss last counter weight label predict features
Error: 6 is too many tokens for a simple label: 8.3.0c�?�p�k>���>���L=��O�?#
second_house�p�Q8>�ޙ�>�33�>��O�??
third_house�p�?��
0.000000 0.000000 1 1.0 unknown 0.0000 1
0.000000 0.000000 2 2.0 unknown 0.0000 1
finished run
number of examples per pass = 2
passes used = 1
weighted example sum = 2.000000
weighted label sum = 0.000000
average loss = 0.000000
total feature number = 2
You are passing the cache file to vw as if it were a plain-text data file, so it tries to parse the binary cache contents as text; that is where the garbled "too many tokens for a simple label" message comes from. It should be
$ vw --cache_file house_dataset.cache
You can check the description of the command-line arguments here.
A number is called a circular prime if all of its rotations are primes themselves.
For example, the number 197 has two rotations, 971 and 719, and both of them are prime.
There are thirteen such primes below 100: 2, 3, 5, 7, 11, 13, 17, 31, 37, 71, 73, 79, and 97.
How many circular primes are there below N?
1 <= N <= 1000000
require 'prime'
require 'benchmark'
def circular_prime_count(kk)
primes = []
count = 0
Prime.each(kk) do |prime|
pa = prime.to_s.split('')
flag = true
pa.count.times do |i|
pa.rotate!
flag = false if !Prime.prime?(pa.join.to_i)
end
count += 1 if flag
end
count
end
[100, 200, 1000, 2000 ,10_000, 100_000, 1_000_000 ].each do |number|
puts "Count of primes below #{number} is: #{circular_prime_count(number)}"
Benchmark.bm do |x|
x.report do
circular_prime_count(number)
end
end
end
My benchmark on Ruby 2.0.0p247 is:
count of primes below 100 is: 13
user system total real
0.000000 0.000000 0.000000 ( 0.001891)
count of primes below 200 is: 17
user system total real
0.000000 0.000000 0.000000 ( 0.004775)
count of primes below 1000 is: 25
user system total real
0.010000 0.000000 0.010000 ( 0.005716)
count of primes below 2000 is: 27
user system total real
0.020000 0.000000 0.020000 ( 0.018399)
count of primes below 10000 is: 33
user system total real
0.110000 0.000000 0.110000 ( 0.105365)
count of primes below 100000 is: 43
user system total real
1.790000 0.000000 1.790000 ( 1.789223)
count of primes below 1000000 is: 55
user system total real
43.870000 0.010000 43.880000 ( 43.971832)
Could you help me improve the performance so that the count for 1 million can be found within 20 seconds?
My notebook model is: HP Probook 4530s
Processor Information:
Family: Core i5
Manufacturer: Intel(R) Corporation
ID: A7 06 02 00 FF FB EB BF
Version: Intel(R) Core(TM) i5-2450M CPU @ 2.50GHz
Voltage: 1.2 V
External Clock: 100 MHz
Max Speed: 2500 MHz
Current Speed: 2500 MHz
Status: Populated, Enabled
Core Count: 2
Core Enabled: 2
Thread Count: 4
Thanks everyone, so the final solution is:
require 'prime'
require 'benchmark'
require 'set'
def circular_prime_count_v2(search_max)
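# A rotation of a d-digit prime is itself at most d digits long, so the prime
# set has to cover everything up to the next power of ten, not just up to search_max.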
max = if search_max == 10 ** (search_max.to_s.length - 1)
search_max
else
10 ** search_max.to_s.length - 1
end
primes = Prime.each(max).to_set
count = 0
primes.each do |prime|
break if prime > search_max
s = prime.to_s
l = s.length
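# (s * 2)[i, l] yields each rotation of the digit string; break out of this prime
# (skipping the count via next) as soon as one rotation is not in the prime set.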
l.times { |i| primes.include?(((s * 2)[i, l]).to_i) || break } || next
count += 1
end
count
end
[100, 200, 1000, 2000 ,10_000, 100_000, 1_000_000 ].each do |number|
puts "Count of primes below #{number} is: #{circular_prime_count_v2(number)}"
Benchmark.bm do |x|
x.report('circular_prime_count') do
circular_prime_count_v2(number)
end
end
end
Count of primes below 100 is: 13
user system total real
circular_prime_count 0.000000 0.000000 0.000000 ( 0.000482)
Count of primes below 200 is: 17
user system total real
circular_prime_count 0.000000 0.000000 0.000000 ( 0.002683)
Count of primes below 1000 is: 25
user system total real
circular_prime_count 0.000000 0.000000 0.000000 ( 0.003666)
Count of primes below 2000 is: 27
user system total real
circular_prime_count 0.010000 0.000000 0.010000 ( 0.005241)
Count of primes below 10000 is: 33
user system total real
circular_prime_count 0.000000 0.000000 0.000000 ( 0.007074)
Count of primes below 100000 is: 43
user system total real
circular_prime_count 0.060000 0.000000 0.060000 ( 0.057128)
Count of primes below 1000000 is: 55
user system total real
circular_prime_count 0.720000 0.000000 0.720000 ( 0.727345)
Update
I have created a MUCH faster implementation. For historic value, it was roughly based on this post. From my basic testing (with 1 million records), it's about 40 times faster than the original.
require 'prime'
require 'benchmark'
require 'set'
def circular_prime_count(kk)
primes, count = [], 0
Prime.each(kk) do |prime|
pa = prime.to_s.split('')
flag = true
pa.count.times do |i|
pa.rotate!
flag = false if !Prime.prime?(pa.join.to_i)
end
count += 1 if flag
end
count
end
def circular_prime_count_v2(search_max)
primes = Prime.each(search_max).to_set
count = 0
primes.each do |prime|
str = prime.to_s
all_points_match = (0...str.length).collect { |i| primes.include?(((str * 2)[i, str.length]).to_i) || break }
count += 1 if all_points_match
end
count
end
Benchmark.bm do |x|
x.report('circular_prime_count') do
puts " Count: #{circular_prime_count(1000000)}"
end
x.report('circular_prime_count_v2') do
puts " Count: #{circular_prime_count_v2(1000000)}"
end
end
NEW RESULTS
user system total real
circular_prime_count Count: 55
39.430000 0.080000 39.510000 ( 39.603969)
circular_prime_count_v2 Count: 55
0.900000 0.000000 0.900000 ( 0.906725)
I am told that for a simple cube I need 36 vertices when I want to have colors/textures etc. in an OpenGL ES application. But when I export the colored cube to OBJ format using Blender, I only get 8 vertices, and I don't even get color data in the OBJ. Not to mention I only get 8 normals in the OBJ file, but I need a normal for each vertex of each triangle (a total of 36 normals).
This is what I get as the content of the OBJ file for a cube that has been colored with different colors on all the faces:
# Blender v2.56 (sub 0) OBJ File: ''
# www.blender.org
mtllib untitled.mtl
o Cube
v 1.000000 1.000000 -1.000000
v 1.000000 -1.000000 -1.000000
v -1.000000 -1.000000 -1.000000
v -1.000000 1.000000 -1.000000
v 1.000000 0.999999 1.000000
v 0.999999 -1.000001 1.000000
v -1.000000 -1.000000 1.000000
v -1.000000 1.000000 1.000000
vn 0.666646 0.666646 0.333323
vn 0.408246 0.408246 -0.816492
vn -0.408246 0.816492 -0.408246
vn -0.666646 0.333323 0.666646
vn -0.577349 -0.577349 -0.577349
vn -0.577349 -0.577349 0.577349
vn 0.816492 -0.408246 -0.408246
vn 0.333323 -0.666646 0.666646
usemtl Material
s 1
f 5//1 1//2 4//3
f 5//1 4//3 8//4
f 3//5 7//6 8//4
f 3//5 8//4 4//3
f 2//7 6//8 3//5
f 6//8 7//6 3//5
f 1//2 5//1 2//7
f 5//1 6//8 2//7
f 5//1 8//4 6//8
f 8//4 7//6 6//8
f 1//2 2//7 3//5
f 1//2 3//5 4//3
This is the content of MTL file:
# Blender MTL File: ''
# Material Count: 1
newmtl Material
Ns 96.078431
Ka 0.000000 0.000000 0.000000
Kd 0.640000 0.640000 0.640000
Ks 0.500000 0.500000 0.500000
Ni 1.000000
d 1.000000
illum 2
36 vertices for a cube is not right. Possible, but unnecessary.
A vertex is a coordinate in space, consisting of 3 components: x, y, z. As a cube has 8 corners, there should be only 8 vertices.
Following the vertices, there are texture coordinates, which are obtained after UV mapping in Blender.
After the texture coordinates, there are indices. They specify the order in which the vertices are connected, which determines how your cube is drawn.
And lastly, there are normals for lighting effects.
While exporting from Blender, make sure you enable only these options:
Context: all scenes
Output Options: triangulate, materials, UVs, normals, HQ
Blender objects as OBJ: objects
This gives you two files: OBJ and MTL
MTL contains texture image info and OBJ contains:
vertices in the form of:
v x y z
v x y z
texture coordinates in the form of:
vt x y
vt x y
and face indices (vertex/texture-coordinate index pairs) in the form of:
f i/j k/l m/n
f i/j k/l m/n
After you successfully get your exported OBJ and MTL files, add them to your project with the texture image and use OpenGLOBJLoader class to render them in iOS.
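If you end up loading the file yourself, the core of it is just expanding each f v//vn corner into a full per-vertex record. Here is a rough sketch in Ruby (the language is incidental; I am assuming the exported file is named untitled.obj and that, as in the file above, its faces are triangulated and reference only positions and normals):
# Expand the "f v//vn" faces into a flat array of per-vertex positions and
# normals -- the 36-vertex layout you feed to glDrawArrays(GL_TRIANGLES, ...).
positions = []
normals   = []
buffer    = []

File.foreach('untitled.obj') do |line|
  fields = line.split
  case fields.first
  when 'v'  then positions << fields[1, 3].map(&:to_f)
  when 'vn' then normals   << fields[1, 3].map(&:to_f)
  when 'f'
    fields[1, 3].each do |corner|
      v, _vt, vn = corner.split('/')         # "5//1" -> ["5", "", "1"]
      buffer.concat(positions[v.to_i - 1])   # OBJ indices are 1-based
      buffer.concat(normals[vn.to_i - 1])
    end
  end
end

puts "#{buffer.size / 6} vertices in the flattened buffer"   # 36 for the cube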