How can I create an array from a method with Ruby?

How can I make an array of 30-minute intervals up to 8 hours? Something like this:
[30, 60, 90, ..., 480]

You can use a Range and the step method, then convert it to an Array:
(30..480).step(30).to_a
The result is:
[30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330, 360, 390, 420, 450, 480]

Your arguments are
increment = 30
duration = 480 # 8*60
You could use
increment.step(by: increment, to: duration).to_a
#=> [ 30, 60, 90, 120, 150, 180, 210, 240,
# 270, 300, 330, 360, 390, 420, 450, 480]
which reads well. Numeric#step, when used without a block, returns an enumerator, which is why .to_a is needed.
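To illustrate with a minimal irb-style sketch (assuming Ruby 2.6+, where the block-less form returns an Enumerator::ArithmeticSequence, a kind of Enumerator; older versions return a plain Enumerator):
increment = 30
duration = 480 # 8 * 60
seq = increment.step(by: increment, to: duration)
seq.first(3)  #=> [30, 60, 90]
seq.to_a.last #=> 480
# With a block, step yields each value directly and no enumerator is involved:
increment.step(by: increment, to: duration) { |minutes| puts minutes }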

I came up with this, but @infused's answer is way better.
a = (1..16).to_a.map{|i| i*30 }

Another option is selecting (Enumerable#select) from the range:
stop = 480
step = 30
(step..stop).select { |n| n % step == 0 }
#=> [30, 60, 90, 120, 150, 180, 210, 240, 270, 300, 330, 360, 390, 420, 450, 480]

Related

javascript: from a websocket I receive messages as zlib deflate: how to read OR "unflate" OR "deflate" (not inflate)

This question is about taking a zlib-deflated message received from a websocket and converting it to an array or raw text that I can apply JSON.parse to.
To be clear: I use a websocket from a crypto exchange, but the question is about the received message, not about the crypto exchange.
The documentation says "please use zlib deflate".
Here is the JavaScript:
digifinexopen = '{"id":12312,"method":"trades.subscribe","params":["btc_usdt"]}';
digifinex_market_ws = new WebSocket("wss://openapi.digifinex.com/ws/v1/");
digifinex_market_ws.binaryType = "arraybuffer";
digifinex_market_ws.onmessage = event => digifinex_trades(event.data);
digifinex_market_ws.onopen = event => digifinex_market_ws.send(digifinexopen);
function digifinex_trades (jsonx) { console.log(jsonx); }
I have this in the log
object => Int8Array(1129)
0: 120
1: -38
I tried with <script src="https://cdnjs.cloudflare.com/ajax/libs/pako/2.0.4/pako.min.js" ...></script>
If I do pako.deflate(jsonx), I get:
object=> Uint8Array(78) [120, 156, 1, 67, 0, 188, 255, 120, 218, 4, 192, 177, 13, 196, 32, 12, 133, 225, 93, 254, 154, 6, 174, 243, 54, 39, 66, 17, 201, 74, 36, 63, 187, 66, 236, 158, 111, 179, 34, 222, 192, 158, 114, 111, 196, 82, 121, 98, 27, 229, 63, 75, 24, 170, 57, 151, 196, 105, 220, 23, 214, 199, 175, 143, 243, 5, 0, 0, 255, 255, 32, 108, 18, 108, 62, 68, 31,
If I add decoder = new TextDecoder("utf8"); and log(decoder.decode(jsonx)); I get
string=> x�E��xڜ��n\7����'5���*
���$pƋ Ȼ�*�֋�g��#����|�����������������v\�//�_������������
But how do I retrieve the array or raw data that I can run JSON.parse on?
If I decompress your data twice, I get:
{"error":null,"result":{"status":"success"},"id":12312}
It looks like you compressed the already-compressed message instead of decompressing it. Use pako.inflate() on the received bytes (for example pako.inflate(new Uint8Array(event.data), { to: "string" })), then JSON.parse the result.

Why won't ruby-mcrypt accept an array as a key?

Hello, I am having trouble encrypting using an array as the key and the value with the ruby-mcrypt gem. The gem lets me use an array for the key fine; cipher = Mcrypt.new("rijndael-256", :ecb, secret) works. But it gives me an error when I try to encrypt. I've tried many things but no luck. Does anyone know if Mcrypt just doesn't like encrypting with an array?
require 'mcrypt'
def encrypt(plain, secret)
  cipher = Mcrypt.new("rijndael-256", :ecb, secret)
  cipher.padding = :zeros
  encrypted = cipher.encrypt(plain)
  p encrypted
  encrypted.unpack("H*").first.to_s.upcase
end
array_to_encrypt = [16, 0, 0, 0, 50, 48, 49, 55, 47, 48, 50, 47, 48, 55, 32, 50, 50, 58, 52, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
key_array = [65, 66, 67, 68, 49, 50, 51, 52, 70, 71, 72, 73, 53, 54, 55, 56]
result = encrypt(array_to_encrypt, key_array)
p "RESULT IS #{result}"
The output is as follows:
Mcrypt::RuntimeError: Could not initialize mcrypt: Key length is not legal.
I traced this error to here in the ruby-mcrypt gem but don't understand it enough to figure out why I am getting the error message. Any help or insights would be amazing. Thanks!
The library doesn't support arrays. You'll need to use Strings instead:
def binary(byte_array)
  byte_array.pack('C*')
end
array_to_encrypt = [16, 0, 0, 0, 50, 48, 49, 55, 47, 48, 50, 47, 48, 55, 32, 50, 50, 58, 52, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
key_array = [65, 66, 67, 68, 49, 50, 51, 52, 70, 71, 72, 73, 53, 54, 55, 56]
result = encrypt(binary(array_to_encrypt), binary(key_array))
p "RESULT IS #{result}"

Spark groupByKey Clarification

I am trying to process some data and write the output in such a way that the result is partitioned by a key and sorted by another parameter, say in ascending order. For example:
>>> data =sc.parallelize(range(10000))
>>> mapped = data.map(lambda x: (x%2,x))
>>> grouped = mapped.groupByKey().partitionBy(2).map(lambda x: x[1] ).saveAsTextFile("mymr-output")
$ hadoop fs -cat mymr-output/part-00000 |cut -c1-1000
[0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32, 34, 36, 38, 40, 42, 44, 46, 48, 50, 52, 54, 56, 58, 60, 62, 64, 66, 68, 70, 72, 74, 76, 78, 80, 82, 84, 86, 88, 90, 92, 94, 96, 98, 100, 102, 104, 106, 108, 110, 112, 114, 116, 118, 120, 122, 124, 126, 128, 130, 132, 134, 136, 138, 140, 142, 144, 146, 148, 150, 152, 154, 156, 158, 160, 162, 164, 166, 168, 170, 172, 174, 176, 178, 180, 182, 184, 186, 188, 190, 192, 194, 196, 198, 200, 202, 204, 206, 208, 210, 212, 214, 216, 218, 220, 222, 224, 226, 228, 230, 232, 234, 236, 238, 240, 242, 244, 246, 248, 250, 252, 254, 256, 258, 260, 262, 264, 266, 268, 270, 272, 274, 276, 278, 280, 282, 284, 286, 288, 290, 292, 294, 296, 298, 300, 302, 304, 306, 308, 310, 312, 314, 316, 318, 320, 322, 324, 326, 328, 330, 332, 334, 336, 338, 340, 342, 344, 346, 348, 350, 352, 354, 356, 358, 360, 362, 364, 366, 368, 370, 372, 374, 376, 378, 380, 382, 384, 386, 388, 390, 392, 394, 396, 398, 400, 402, 404, 406, 408, 410, 412, 414, 416, 418, 420,
$ hadoop fs -cat mymr-output/part-00001 |cut -c1-1000
[2049, 2051, 2053, 2055, 2057, 2059, 2061, 2063, 2065, 2067, 2069, 2071, 2073, 2075, 2077, 2079, 2081, 2083, 2085, 2087, 2089, 2091, 2093, 2095, 2097, 2099, 2101, 2103, 2105, 2107, 2109, 2111, 2113, 2115, 2117, 2119, 2121, 2123, 2125, 2127, 2129, 2131, 2133, 2135, 2137, 2139, 2141, 2143, 2145, 2147, 2149, 2151, 2153, 2155, 2157, 2159, 2161, 2163, 2165, 2167, 2169, 2171, 2173, 2175, 2177, 2179, 2181, 2183, 2185, 2187, 2189, 2191, 2193, 2195, 2197, 2199, 2201, 2203, 2205, 2207, 2209, 2211, 2213, 2215, 2217, 2219, 2221, 2223, 2225, 2227, 2229, 2231, 2233, 2235, 2237, 2239, 2241, 2243, 2245, 2247, 2249, 2251, 2253, 2255, 2257, 2259, 2261, 2263, 2265, 2267, 2269, 2271, 2273, 2275, 2277, 2279, 2281, 2283, 2285, 2287, 2289, 2291, 2293, 2295, 2297, 2299, 2301, 2303, 2305, 2307, 2309, 2311, 2313, 2315, 2317, 2319, 2321, 2323, 2325, 2327, 2329, 2331, 2333, 2335, 2337, 2339, 2341, 2343, 2345, 2347, 2349, 2351, 2353, 2355, 2357, 2359, 2361, 2363, 2365, 2367, 2369, 2371, 2373, 2375, 2377, 2379, 238
$
This is perfect and satisfies my first criterion, which is to have the results partitioned by key. But I want the result sorted. I tried sorted(), but it didn't work.
>>> grouped= sorted(mapped.groupByKey().partitionBy(2).map(lambda x: x[1] ))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'PipelinedRDD' object is not iterable
I don't want to use parallelize again, and go recursive. Any help would be greatly appreciated.
PS: I did go through this: Does groupByKey in Spark preserve the original order? but it didn't help.
Thanks,
Jeevan.
Yes, that's an RDD, not a Python object that you can sort as if it were a local collection. After groupByKey(), though, the value in each key-value tuple is a collection of numbers, and that is what you want to sort? You can use mapValues(sorted), which calls sorted() on the values for each key.
I realize it's a toy example, but be careful with groupByKey, as it has to hold all values for a key in memory. Also, it is not even guaranteed that an RDD with 2 elements and 2 partitions will put one element in each partition. It's probable but not guaranteed.
PS: you should be able to replace map(lambda x: x[1]) with values(). It may be faster.
Similar to what is said above, the value in each key-value pair is not a plain Python list but an iterable collection; you can test this by checking type(value). However, you can access a Python list via its .data member and call sort or sorted on that.
grouped = mapped.groupByKey().partitionBy(2).map(lambda x: sorted(x[1].data) )

How to create simple array in Ruby?

What is the shortest way to create this array in Ruby:
[10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
Thanks for any help!
What about Range#step:
(10..100).step(10).to_a
#=> [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
Or Numeric#step:
10.step(100, 10).to_a
#=> [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
You can use a Range and call the Enumerable#map method on it, like this:
(1..10).map{|i| i * 10}
# => [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
Or, as suggested by @JörgWMittag, with the Object#method method, which returns a Method instance that is converted to a proc by the & notation:
(1..10).map(&10.method(:*))
# => [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
This builds the array directly from the Array constructor; the block receives each index from 0 to 9, hence the i + 1.
Array.new(10){|i| (i + 1) * 10}
# => [10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

Spotting the first local minimum of 2 column CSV data in a Bash shell script

I want to write a Bash shell script function to return the x, y coordinates of the first local minimum data point in simple 2 column CSV data.
The function would take as an input a Bash variable (say "${myData}") storing data such as the following:
10, 0.14665
20, 0.144971
30, 0.14262
40, 0.142424
50, 0.142370
60, 0.142375
70, 0.142375
80, 0.142375
90, 0.142375
100, 0.142375
110, 0.142306
120, 0.142017
130, 0.141054
140, 0.140148
150, 0.139993
160, 0.139972
170, 0.139958
180, 0.139932
190, 0.139886
200, 0.139876
210, 0.13987
220, 0.139865
230, 0.139861
240, 0.13986
250, 0.139857
260, 0.139855
270, 0.139853
280, 0.139852
290, 0.139847
300, 0.139847
I want the function to spot the first local minimum point (in this case, 50, 0.142370) and return its coordinates. Could you suggest a simple way of doing this?
You can use awk, either on one line or prettily indented as here. The idea is to remember the previous row and print it as soon as the value in the second column starts increasing:
awk '
NR > 1 {
    # If this row's y-value is greater than the previous one, the
    # previous row was the first local minimum: print it and stop.
    if ($2 > n) {
        print line;
        exit(0);
    }
}
{
    # Remember the current row and its y-value for the next comparison.
    line = $0;
    n = $2
}
' <<< "${myData}"
You can also take out the exit(0); to keep scanning, but note that this prints every row that is immediately followed by an increase, not only true local minima.
