Interpreting cassandra-stress test results (performance)
We have a Scylla cluster with three nodes. Each node is in its own datacenter. To evaluate cluster performance I use cassandra-stress with these parameters:
cassandra-stress write "no-warmup cl=QUORUM" -rate threads=1 -schema "replication(strategy=NetworkTopologyStrategy, DC1=1, DC2=1, DC3=1)" -mode cql3 native connectionsPerHost=80 protocolVersion=4 user=cassandra password=cassandra -node 10.0.0.3
Result:
type total ops, op/s, pk/s, row/s, mean, med, .95, .99, .999, max, time, stderr, errors, gc: #, max ms, sum ms, sdv ms, mb
total, 84, 84, 84, 84, 10.3, 5.0, 45.8, 59.3, 65.9, 65.9, 1.0, 0.00000, 0, 0, 0, 0, 0, 0
total, 145, 61, 61, 61, 16.3, 5.5, 45.1, 46.2, 47.2, 47.2, 2.0, 0.11899, 0, 0, 0, 0, 0, 0
total, 197, 52, 52, 52, 19.3, 5.4, 44.5, 46.0, 46.7, 46.7, 3.0, 0.12423, 0, 0, 0, 0, 0, 0
total, 258, 61, 61, 61, 15.9, 5.3, 44.2, 44.4, 46.0, 46.0, 4.0, 0.09670, 0, 0, 0, 0, 0, 0
total, 325, 67, 67, 67, 15.3, 5.1, 45.4, 46.7, 47.0, 47.0, 5.0, 0.07708, 0, 0, 0, 0, 0, 0
total, 388, 63, 63, 63, 15.9, 5.2, 44.3, 46.2, 47.9, 47.9, 6.0, 0.06482, 0, 0, 0, 0, 0, 0
total, 455, 67, 67, 67, 14.8, 5.3, 46.0, 46.3, 46.5, 46.5, 7.0, 0.05547, 0, 0, 0, 0, 0, 0
total, 518, 63, 63, 63, 15.5, 5.3, 44.0, 46.2, 46.9, 46.9, 8.0, 0.04891, 0, 0, 0, 0, 0, 0
total, 576, 58, 58, 58, 17.2, 5.3, 45.3, 45.8, 48.0, 48.0, 9.0, 0.04541, 0, 0, 0, 0, 0, 0
total, 634, 58, 58, 58, 17.0, 5.3, 44.4, 46.1, 46.2, 46.2, 10.0, 0.04231, 0, 0, 0, 0, 0, 0
total, 704, 70, 70, 70, 14.3, 4.8, 44.1, 46.6, 46.6, 46.6, 11.0, 0.03908, 0, 0, 0, 0, 0, 0
total, 775, 71, 71, 71, 14.4, 4.8, 43.8, 44.1, 46.7, 46.7, 12.0, 0.03650, 0, 0, 0, 0, 0, 0
total, 847, 72, 72, 72, 13.5, 5.0, 41.7, 44.2, 45.7, 45.7, 13.0, 0.03438, 0, 0, 0, 0, 0, 0
total, 914, 67, 67, 67, 15.1, 5.1, 43.8, 43.9, 44.1, 44.1, 14.0, 0.03191, 0, 0, 0, 0, 0, 0
total, 972, 58, 58, 58, 17.0, 5.1, 45.7, 46.6, 47.1, 47.1, 15.0, 0.03089, 0, 0, 0, 0, 0, 0
total, 1031, 59, 59, 59, 17.0, 4.9, 44.2, 44.8, 45.9, 45.9, 16.0, 0.02966, 0, 0, 0, 0, 0, 0
total, 1085, 54, 54, 54, 18.7, 4.8, 44.0, 44.1, 44.7, 44.7, 17.0, 0.02975, 0, 0, 0, 0, 0, 0
total, 1165, 80, 80, 80, 12.1, 4.7, 43.8, 44.3, 45.6, 45.6, 18.0, 0.03075, 0, 0, 0, 0, 0, 0
total, 1224, 59, 59, 59, 17.3, 5.2, 43.9, 46.9, 46.9, 46.9, 19.0, 0.02963, 0, 0, 0, 0, 0, 0
total, 1294, 70, 70, 70, 14.2, 4.8, 43.9, 44.3, 46.4, 46.4, 20.0, 0.02833, 0, 0, 0, 0, 0, 0
total, 1363, 69, 69, 69, 14.2, 4.7, 44.2, 46.7, 46.8, 46.8, 21.0, 0.02707, 0, 0, 0, 0, 0, 0
total, 1432, 69, 69, 69, 14.3, 4.9, 44.0, 44.3, 48.2, 48.2, 22.0, 0.02591, 0, 0, 0, 0, 0, 0
total, 1494, 62, 62, 62, 16.3, 4.9, 44.3, 46.8, 47.3, 47.3, 23.0, 0.02492, 0, 0, 0, 0, 0, 0
total, 1557, 63, 63, 63, 15.7, 5.5, 45.4, 47.3, 47.4, 47.4, 24.0, 0.02395, 0, 0, 0, 0, 0, 0
total, 1620, 63, 63, 63, 16.2, 5.0, 45.2, 46.6, 46.9, 46.9, 25.0, 0.02305, 0, 0, 0, 0, 0, 0
total, 1687, 67, 67, 67, 14.9, 4.8, 44.1, 45.5, 45.6, 45.6, 26.0, 0.02216, 0, 0, 0, 0, 0, 0
total, 1743, 56, 56, 56, 17.8, 4.8, 44.0, 46.3, 47.3, 47.3, 27.0, 0.02203, 0, 0, 0, 0, 0, 0
total, 1804, 61, 61, 61, 16.0, 4.8, 43.8, 44.4, 45.4, 45.4, 28.0, 0.02138, 0, 0, 0, 0, 0, 0
total, 1877, 73, 73, 73, 13.5, 4.8, 44.1, 44.5, 44.8, 44.8, 29.0, 0.02101, 0, 0, 0, 0, 0, 0
total, 1941, 64, 64, 64, 15.9, 5.0, 44.0, 46.7, 47.3, 47.3, 30.0, 0.02032, 0, 0, 0, 0, 0, 0
total, 2003, 62, 62, 62, 16.0, 4.8, 44.0, 44.1, 44.2, 44.2, 31.0, 0.01975, 0, 0, 0, 0, 0, 0
total, 2004, 215, 215, 215, 16.9, 16.9, 16.9, 16.9, 16.9, 16.9, 31.0, 0.07564, 0, 0, 0, 0, 0, 0
As we can see, the median and mean latencies are relatively small, but the .95 and .99 latencies are catastrophic.
The main question: why? And why are these values so different?
It's not good practice to have a single node per datacenter; it could be a source of latency. You are also running the client with a single thread (-rate threads=1), though that is more of a throughput issue than a latency one. There are no details about the machines you ran on. Try to replicate one of the published Scylla benchmarks.
Related
How to extract the raw value of a subdocument in Mongo's libbson C library?
Consider the Mongo document {"key1": "value1", "key2": {"subkey1": "subvalue1"}}. This can be encoded (using the Python bson module, e.g. bson.encode(DATA)) to the following uint8_t array: [56, 0, 0, 0, 2, 107, 101, 121, 49, 0, 7, 0, 0, 0, 118, 97, 108, 117, 101, 49, 0, 3, 107, 101, 121, 50, 0, 28, 0, 0, 0, 2, 115, 117, 98, 107, 101, 121, 49, 0, 10, 0, 0, 0, 115, 117, 98, 118, 97, 108, 117, 101, 49, 0, 0, 0]. Now, I use this array to initialize the bson_t struct and use the iterator to find the subdocument for key2. Here I want to create another bson_t document from it. I tried using the bson_iter_document() method, but it gives me a "precondition failed: document" error. Maybe I'm not using it right. Is there another way to do it properly?

void test_bson() {
    uint8_t raw[] = {56, 0, 0, 0, 2, 107, 101, 121, 49, 0, 7, 0, 0, 0,
                     118, 97, 108, 117, 101, 49, 0, 3, 107, 101, 121, 50, 0,
                     28, 0, 0, 0, 2, 115, 117, 98, 107, 101, 121, 49, 0,
                     10, 0, 0, 0, 115, 117, 98, 118, 97, 108, 117, 101, 49,
                     0, 0, 0};
    bson_t *bson;
    bson = bson_new_from_data(raw, 56);
    bson_iter_t iter;
    bson_iter_init(&iter, bson);
    if (bson_iter_find(&iter, "key2")) {
        printf("found. %d\n", bson_iter_type(&iter));
        const uint8_t **subdocument;
        uint32_t subdoclen = 56;
        bson_iter_document(&iter, &subdoclen, subdocument);
    }
    bson_free(bson);
}
Why won't ruby-mcrypt accept an array as a key?
Hello, I am having trouble encrypting when both the key and the value are arrays, using the ruby-mcrypt gem. The gem lets me use an array for the key fine — cipher = Mcrypt.new("rijndael-256", :ecb, secret) works — but it gives me an error when I try to encrypt. I've tried many things but no luck. Does anyone know if Mcrypt just doesn't like encrypting with an array?

require 'mcrypt'

def encrypt(plain, secret)
  cipher = Mcrypt.new("rijndael-256", :ecb, secret)
  cipher.padding = :zeros
  encrypted = cipher.encrypt(plain)
  p encrypted
  encrypted.unpack("H*").first.to_s.upcase
end

array_to_encrypt = [16, 0, 0, 0, 50, 48, 49, 55, 47, 48, 50, 47, 48, 55, 32, 50, 50, 58, 52, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
key_array = [65, 66, 67, 68, 49, 50, 51, 52, 70, 71, 72, 73, 53, 54, 55, 56]
result = encrypt(array_to_encrypt, key_array)
p "RESULT IS #{result}"

The output is as follows:

Mcrypt::RuntimeError: Could not initialize mcrypt: Key length is not legal.

I traced this error to here in the ruby-mcrypt gem but don't understand it enough to figure out why I am getting the error message. Any help or insights would be amazing. Thanks!
The library doesn't support arrays. You'll need to use Strings instead:

def binary(byte_array)
  byte_array.pack('C*')
end

array_to_encrypt = [16, 0, 0, 0, 50, 48, 49, 55, 47, 48, 50, 47, 48, 55, 32, 50, 50, 58, 52, 54, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
key_array = [65, 66, 67, 68, 49, 50, 51, 52, 70, 71, 72, 73, 53, 54, 55, 56]
result = encrypt(binary(array_to_encrypt), binary(key_array))
p "RESULT IS #{result}"
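As a sanity check on the conversion itself (a small pure-Ruby sketch, no mcrypt required): Array#pack('C*') turns each integer into one unsigned 8-bit byte, so the 16-element key array becomes a 16-byte String, which is a legal key length:

```ruby
# Convert an array of byte values into a binary String.
def binary(byte_array)
  byte_array.pack('C*')  # 'C*' packs each integer as one unsigned 8-bit byte
end

key_array = [65, 66, 67, 68, 49, 50, 51, 52, 70, 71, 72, 73, 53, 54, 55, 56]
key = binary(key_array)

puts key           # these bytes are printable ASCII: ABCD1234FGHI5678
puts key.bytesize  # 16
```

Since every value here happens to be printable ASCII, you can see the packed key directly; a key containing arbitrary bytes would pack just as well but print as raw binary.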
Dual pivot quick sort algorithm
I was analyzing the code of the Arrays.sort() method in Java. My question is: for what values of an integer array a[] will this condition be true?

if (less < e1 && e5 < great)

It appears after the left and right parts are sorted recursively, excluding the known pivots. For what values of a[] will the center part become too large (comprise more than 4/7 of the array)? Given QUICKSORT_THRESHOLD = 286, the array size cannot be more than 286. An example of an int array, please.
It happens when all candidates for pivots are close to either the maximum or the minimum value of the array. java.util.DualPivotQuicksort#sort() chooses the pivots from 5 positions in the array:

int seventh = (length >> 3) + (length >> 6) + 1;
int e3 = (left + right) >>> 1; // The midpoint
int e2 = e3 - seventh;
int e1 = e2 - seventh;
int e4 = e3 + seventh;
int e5 = e4 + seventh;

So, in order to construct an array that satisfies the condition, we need to fill those 5 positions with extreme values. For example:

int[] x = {
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0,
    -2, /* e1 = 10 */ 0, 0, 0, 0, 0, 0,
    -1, /* e2 = 17 */ 0, 0, 0, 0, 0, 0,
     0, /* e3 = 24 */ 0, 0, 0, 0, 0, 0,
     1, /* e4 = 31 */ 0, 0, 0, 0, 0, 0,
     2, /* e5 = 38 */ 0, 0, 0, 0, 0, 0,
     0, 0, 0, 0, 0
};
Arrays.sort(x);

And a non-trivial case where the method changes the boundaries of the central part before sorting it:

int[] x = {
    70, 66, 11, 24, 10, 28, 58, 13, 19, 90,
    15, 79, 16, 69, 39, 14, 10, 16, 40, 59,
    47, 77, 90, 50, 50, 50, 16, 76, 86, 70,
    33, 90, 24, 35, 73, 93, 87, 19, 91, 73,
    87, 22, 15, 24, 92, 34, 35, 98, 11, 40
};
Converting a UTF-16LE Elixir bitstring into an Elixir String
Given an Elixir bitstring encoded in UTF-16LE:

<<68, 0, 101, 0, 118, 0, 97, 0, 115, 0, 116, 0, 97, 0, 116, 0, 111, 0, 114, 0, 0, 0>>

how can I get this converted into a readable Elixir String (it spells out "Devastator")? The closest I've gotten is transforming the above into a list of the Unicode codepoints (["0044", "0065", ...]) and trying to prepend the \u escape sequence to them, but Elixir throws an error since it's an invalid sequence. I'm out of ideas.
The simplest way is using functions from the :unicode module:

:unicode.characters_to_binary(utf16binary, {:utf16, :little})

For example:

<<68, 0, 101, 0, 118, 0, 97, 0, 115, 0, 116, 0, 97, 0, 116, 0, 111, 0, 114, 0, 0, 0>>
|> :unicode.characters_to_binary({:utf16, :little})
|> IO.puts
#=> Devastator

(There's a null byte at the very end, so the shell will display the result as a binary rather than a string, and depending on the OS it may print some extra representation for the null byte.)
You can make use of Elixir's pattern matching, specifically <<codepoint::utf16-little>>:

defmodule Convert do
  def utf16le_to_utf8(binary), do: utf16le_to_utf8(binary, "")

  defp utf16le_to_utf8(<<codepoint::utf16-little, rest::binary>>, acc) do
    utf16le_to_utf8(rest, <<acc::binary, codepoint::utf8>>)
  end

  defp utf16le_to_utf8("", acc), do: acc
end

<<68, 0, 101, 0, 118, 0, 97, 0, 115, 0, 116, 0, 97, 0, 116, 0, 111, 0, 114, 0, 0, 0>>
|> Convert.utf16le_to_utf8
|> IO.puts

<<192, 3, 114, 0, 178, 0>>
|> Convert.utf16le_to_utf8
|> IO.puts

Output:

Devastator
πr²
Ruby: How to convert a flat array to a hash
I have the following Hash:

@facet_counts = {
  "facet_queries"=>{},
  "facet_fields"=>{
    "product_collection_value"=>["traditional and imitation", 304, "chunky", 34, "modern", 15, "coloured gems", 12, "traditional", 0, "traditional & imitation", 0],
    "product_material_value"=>["alloy", 161, "metal alloy", 132, "metal", 60, "925 sterling silver", 8, "lac", 3, "beads", 2, "beaded", 0, "brass", 0, "copper", 0, "crystal", 0, "fabric", 0, "feather", 0, "glass", 0, "jute", 0, "leather", 0, "pashmina", 0, "plastic", 0, "polymer beads", 0, "pu leather", 0, "rexin", 0, "rubber", 0, "satin", 0, "shell", 0, "silk", 0, "silk brocade", 0, "silver", 0, "silver alloy", 0, "stainless steel", 0, "sterling silver", 0, "stone", 0, "velvet", 0, "viscose", 0, "wood", 0, "wooden", 0, "wool", 0],
    "product_type_value"=>["jhumkis", 364, "danglers", 53, "drops", 7, "hoops", 6, "victorian", 2, "armlets", 0, "bands", 0, "bangles", 0, "beaded", 0, "beads", 0, "bib", 0, "chains", 0, "charms", 0, "choker", 0, "clip on", 0, "cluster", 0, "cluster pendant necklaces", 0, "clusters", 0, "cocktail", 0, "contemporary", 0, "cuff", 0, "cz", 0, "diamond look", 0, "double chain", 0, "double fold", 0, "double strand", 0, "earrings", 0, "fashion", 0, "gemstone", 0, "hasli", 0, "hath phool", 0, "kada", 0, "kamarband", 0, "kundan", 0, "link", 0, "links", 0, "maang tika set", 0, "mangalsutras", 0, "modern", 0, "oxidised", 0, "oxidized", 0, "pair", 0, "pearl", 0, "pendant", 0, "pendant necklaces", 0, "potli", 0, "pouch", 0, "rani haar", 0, "rings", 0, "saree pins", 0, "set", 0, "single chain", 0, "single fold", 0, "single stone", 0, "single strand", 0, "singles", 0, "sling bag", 0, "strings", 0, "studded", 0, "studs", 0, "thewa", 0, "tote bag", 0, "traditional", 0, "with chain", 0, "with gemstone", 0, "without chain", 0, "without gemstone", 0],
    "product_plating_value"=>["yellow gold plating", 135, "gold plating", 98, "silver", 39, "black silver", 15, "rhodium", 2, "white rhodium", 1, "14k yellow gold", 0, "18k yellow gold", 0, "alloy", 0, "black gold", 0, "black rhodium", 0, "brass", 0, "cubic zirconia", 0, "oxidised", 0, "pearl", 0, "rose gold", 0, "rose gold plating", 0, "silver plating", 0, "sterling silver", 0, "yellow gold", 0, "yellow rhodium", 0],
    "product_gemstones_value"=>["cubic zirconia", 132, "pearl", 45, "semi-precious", 19, "crystal", 3, "precious", 3, "amethyst", 2, "citrine", 2, "garnet", 2, "peridot", 2, "green onyx", 1, "imitation kundan", 1, "iolite", 1, "aquamarine", 0, "black onyx", 0, "blue topaz", 0, "carnelian", 0, "chalcedony", 0, "coral", 0, "diamond", 0, "emerald", 0, "gem stones", 0, "green stone", 0, "howlite", 0, "hydro", 0, "jade", 0, "jasper", 0, "kundan", 0, "labradorite", 0, "lapis", 0, "lemon quartz", 0, "lemon stone", 0, "malachite", 0, "marcasite", 0, "moonstone", 0, "onyx", 0, "opal", 0, "pink amethyst", 0, "prehnite", 0, "quartz", 0, "rainbow", 0, "red onyx", 0, "red stone", 0, "red tiger eye", 0, "rhodolite", 0, "rose quartz", 0, "ruby", 0, "sapphire", 0, "smoky quartz", 0, "spinel", 0, "tanzanite", 0, "tiger eye", 0, "topaz", 0, "tourmaline", 0, "turquoise", 0, "white rainbow", 0, "white rainbow stone", 0],
    "product_occasion_value"=>["special occasions or gifts", 358, "wedding or festive wear", 245, "everyday wear", 119, "work wear", 4, "religious", 0]
  },
  "facet_dates"=>{},
  "facet_ranges"=>{}
}

What I want: The hash corresponds to search results on a page. The array against each field name holds the values and the number of results having each value. But since this is a flat array, I am not able to access the counts easily.
What I am currently doing is:

facet_fields = ["product_collection_value", "product_material_value", "product_type_value", "product_plating_value", "product_gemstones_value", "product_occasion_value"]
@count_hash = Hash.new
facet_fields.each { |field|
  print @facet_counts["facet_fields"][field]
  @facet_counts["facet_fields"][field].each_with_index { |v, i|
    if i % 2 == 1
      next
    else
      @count_hash[@facet_counts["facet_fields"][field][i]] = @facet_counts["facet_fields"][field][i + 1]
    end
  }
  print "\n\n"
}

Issue: This creates a new Hash, but in case of multiple entries of the same tag the value gets overwritten. E.g. "modern" is in both product_collection_value and product_type_value, so its count is overwritten. Is there a way to convert the original hash so that the counts can be accessed easily?
The answer is: Hash[@facet_counts["facet_fields"].map { |k, v| [k, Hash[*v]] }]
I believe what you are trying to do is this:

@count_hash = @facet_counts['facet_fields'].map do |k, v|
  Hash[*v]
end.inject({}) do |s, i|
  s.merge(i) { |_, old, new| old + new }
end

The first part maps each facet field to a hash of key-value pairs; for example, the first hash will look like this:

{"traditional and imitation"=>304, "chunky"=>34, "modern"=>15, "coloured gems"=>12, "traditional"=>0, "traditional & imitation"=>0}

The second part merges them, summing the values of identical keys. The result for your example will look something like this:

{"traditional and imitation"=>304, "chunky"=>34, "modern"=>15, "coloured gems"=>12, "traditional"=>0, "traditional & imitation"=>0, "alloy"=>161, "metal alloy"=>132, "metal"=>60, "925 sterling silver"=>8, "lac"=>3, "beads"=>2, "beaded"=>0, "brass"=>0, "copper"=>0, "crystal"=>3, "fabric"=>0, "feather"=>0, "glass"=>0, "jute"=>0, "leather"=>0, "pashmina"=>0, "plastic"=>0, "polymer beads"=>0, "pu leather"=>0, "rexin"=>0, "rubber"=>0, "satin"=>0, "shell"=>0, "silk"=>0, "silk brocade"=>0, "silver"=>39, "silver alloy"=>0, "stainless steel"=>0, "sterling silver"=>0, ... }
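The same pair-up-then-merge idea can be sketched with each_slice instead of Hash[*v]; this is a minimal self-contained example on toy data (a stand-in for @facet_counts["facet_fields"], not the full facet hash):

```ruby
# Toy stand-in for the real facet data: flat [key, count, key, count, ...] arrays.
facet_fields = {
  "product_collection_value" => ["modern", 15, "chunky", 34],
  "product_type_value"       => ["modern", 3, "hoops", 6]
}

counts = facet_fields.values
                     .map { |flat| flat.each_slice(2).to_h }        # pair up key/count
                     .reduce({}) { |acc, h| acc.merge(h) { |_k, a, b| a + b } }  # sum duplicates

p counts  # => {"modern"=>18, "chunky"=>34, "hoops"=>6}
```

Because "modern" appears under two fields, merge with a block sums its counts (15 + 3) instead of letting the second occurrence overwrite the first.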