How to post a "poster" to VKontakte social network wall - social-networking

I have a script that can post text or an image to VK.com via the API, but I can't find a way to create a poster. There is no information about posters on the official documentation page.

You have to add the parameter poster_bkg_id = {background_number_for_poster}. This parameter is not described in the documentation (thanks to the author of https://github.com/VBIralo/vk-posters). The known background IDs are listed below, followed by a small example.
Backgrounds (poster_bkg_id)
Gradients
1, 2, 3, 4, 5, 6, 7, 8, 9, 10
Artwork
11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 31, 32
Emoji Patterns
21, 22, 23, 24, 25, 26, 27, 28, 29, 30
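For example, a minimal sketch in Python over the plain HTTP API (the token, owner id, message and API version below are placeholders, not values from the question):

import requests

# Hypothetical values: substitute your own access token and wall owner id.
params = {
    "owner_id": "-123456789",        # target wall (a negative id means a community)
    "message": "Posted as a poster",
    "poster_bkg_id": 3,              # the undocumented parameter: background 3 (a gradient)
    "access_token": "YOUR_ACCESS_TOKEN",
    "v": "5.131",
}
response = requests.post("https://api.vk.com/method/wall.post", params=params)
print(response.json())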

Related

Tackling the 'Small Data' Problem with Distributed Computing Cluster?

I'm learning about Hadoop, MapReduce and Big Data, and from my understanding it seems that the Hadoop ecosystem was mainly designed to analyze large amounts of data distributed across many servers. My problem is a bit different.
I have a relatively small amount of data (a file consisting of 1-10 million lines of numbers) which needs to be analyzed in millions of different ways. For example, consider the following dataset:
[1, 6, 7, 8, 10, 17, 19, 23, 27, 28, 28, 29, 29, 29, 29, 30, 32, 35, 36, 38]
[1, 3, 3, 4, 4, 5, 5, 10, 11, 12, 14, 16, 17, 18, 18, 20, 27, 28, 39, 40]
[2, 3, 7, 8, 10, 10, 12, 13, 14, 15, 15, 16, 17, 19, 27, 30, 32, 33, 34, 40]
[1, 9, 11, 13, 14, 15, 17, 17, 18, 18, 18, 19, 19, 23, 25, 26, 27, 31, 37, 39]
[5, 8, 8, 10, 14, 16, 16, 17, 20, 21, 22, 22, 23, 28, 29, 30, 32, 32, 33, 38]
[1, 1, 3, 3, 13, 17, 21, 24, 24, 25, 26, 26, 30, 31, 32, 35, 38, 39, 39, 39]
[1, 2, 4, 4, 5, 5, 10, 13, 14, 14, 14, 14, 15, 17, 28, 29, 29, 35, 37, 40]
[1, 2, 6, 8, 12, 13, 14, 15, 15, 15, 16, 22, 23, 24, 26, 30, 31, 36, 36, 40]
[3, 6, 7, 8, 8, 10, 10, 12, 13, 17, 17, 20, 21, 22, 33, 35, 35, 36, 39, 40]
[1, 3, 8, 8, 11, 11, 13, 18, 19, 19, 19, 23, 24, 25, 27, 33, 35, 37, 38, 40]
I need to analyze how frequently a number in each column (Column N) repeats itself a certain number of rows later (L rows later). For example, if we were analyzing Column A with 1L (1-Row-Later), the result would be as follows:
Note: the position does not need to match, so the number can appear anywhere in the later row.
Column: A N-Later: 1 Result: YES, NO, NO, NO, NO, YES, YES, NO, YES -> 4/9.
We would repeat the above analysis for each column separately, and for every possible N-later value. In the above dataset, which only consists of 10 lines, that means a maximum of 9 N-later. But in a dataset of 1 million lines, the analysis (for each column) would be repeated 999,999 times.
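To make this concrete, here is a rough sketch of a single analysis (Python for brevity; my real code is in Java):

# One "L rows later" analysis: how often does the value in `column`
# appear anywhere in the row `later` rows below it?
def n_later_frequency(rows, column, later):
    hits = 0
    total = 0
    for i in range(len(rows) - later):
        total += 1
        # the position does not need to match; membership anywhere in the later row counts
        if rows[i][column] in rows[i + later]:
            hits += 1
    return hits, total

# For the 10-row dataset above, n_later_frequency(dataset, 0, 1) returns (4, 9), i.e. 4/9.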
I looked into the MapReduce framework, but it doesn't seem to cut it: it is not an efficient fit for this problem, and it requires a great deal of work to convert the core code into a MapReduce-friendly structure.
As you can see in the above example, each analysis is independent of the others. For example, it is possible to analyze Column A separately from Column B. It is also possible to perform the 1L analysis separately from 2L, and so on. However, unlike Hadoop, where the data lives on separate machines, in our scenario each server needs access to all of the data to perform its "share" of the analysis.
I looked into possible solutions for this problem, and it seems there are very few options: Ray, or building a custom application on top of YARN using Apache Twill. Apache Twill was moved to the Attic in 2020, which means that Ray is the only available option.
Is Ray the best way to tackle this problem, or are there other, better options? Ideally, the solution should automatically handle failover and distribute the processing load intelligently. For example, if we wanted to distribute the load across 20 machines, one way would be to divide the 999,999 N-later analyses by 20 and let Machine A analyze 1L-49999L, Machine B 50000L-100000L, and so on. However, the load would not be distributed equally, as it takes much longer to analyze 1L than 500000L: the latter covers only about half the rows (for 500000L the first row analyzed is row 500001, so the first 500K rows are essentially omitted from the analysis).
It should also not require a great deal of modification to the core code (as MapReduce does).
I'm working with Java.
Thanks
Well, you are right: your scenario and your technology stack are not that well suited to each other. Which raises the question: why not add something more relevant to your current stack? For instance, Redis.
It seems that your common action is mainly counting values, and you want to prevent over-calculation and make the process more performant (e.g., by properly indexing your data). Given that this is one of the main features of Redis, it sounds logical to use it as the processor.
My suggestion:
Create a hash that uses the numeric value as the key and its count as the value. This way you can run different calculations over those metrics while iterating over your dataset only once. Afterwards you just pull the data from Redis by different criteria (per calculation or metric).
From this point, it's easy to save your calculated data back to your database and make it ready for direct querying. The overall process may look like this (a sketch in code follows the list):
Scan the data from the file
Index it in Redis (using a hash)
Run the desired calculations (over the indexed counts)
Save the results in your DB (as a digested dataset)
Flush the Redis DB
Query your DB (as much as you like)
Follow the docs for both populating and retrieving the data.
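A minimal sketch of steps 1-3 and 5 in Python with redis-py (the file name, key layout and number format are my assumptions; in Java you would use a client such as Jedis, but the commands are the same):

import redis

r = redis.Redis(host="localhost", port=6379, db=0)

# Steps 1-2: scan the file and index the counts into Redis,
# one hash per column: field = numeric value, value = its count.
with open("data.txt") as f:                          # hypothetical input file
    for line in f:
        for col, value in enumerate(line.split()):   # assumes whitespace-separated numbers
            r.hincrby(f"counts:col:{col}", value, 1)

# Step 3: pull the indexed counts for a column and run your calculations on them.
column_counts = {int(k): int(v) for k, v in r.hgetall("counts:col:0").items()}
print(column_counts)

# Step 5: flush the working data once the digested results are saved to your DB.
r.flushdb()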

Strange UTF-8 decoding error in Windows Notepad

If you type the following string into a text file encoded as UTF-8 (without BOM) and open it with notepad.exe, you will get some weird characters on screen. But Notepad can actually decode this string fine without the last 'a'. Very strange behavior. I am using Windows 10 1809.
[19, 16, 12, 14, 15, 15, 12, 17, 18, 15, 14, 15, 19, 13, 20, 18, 16, 19, 14, 16, 20, 16, 18, 12, 13, 14, 15, 20, 19, 17, 14, 17, 18, 16, 13, 12, 17, 14, 16, 13, 13, 12, 15, 20, 19, 15, 19, 13, 18, 19, 17, 14, 17, 18, 12, 15, 18, 12, 19, 15, 12, 19, 18, 12, 17, 20, 14, 16, 17, 18, 15, 12, 13, 19, 18, 17, 18, 14, 19, 18, 16, 15, 18, 17, 15, 15, 19, 16, 15, 14, 19, 13, 19, 15, 17, 16, 12, 12, 18, 12, 14, 12, 16, 19, 12, 19, 12, 17, 19, 20, 19, 17, 19, 20, 16, 19, 16, 19, 16, 12, 12, 18, 19, 17, 18, 16, 12, 17, 13, 18, 20, 19, 18, 20, 14, 16, 13, 12, 12, 14, 13, 19, 17, 20, 18, 15, 12, 15, 20, 14, 16, 15, 16, 19, 20, 20, 12, 17, 13, 20, 16, 20, 13a
I wonder whether this is a Windows bug or whether there is something I can do to solve it.
Did more research; figured it out.
Seems like a variation of the classic case of "Bush hid the facts".
https://en.wikipedia.org/wiki/Bush_hid_the_facts
It looks like Notepad has a different character encoding default for saving a file than it does for opening a file. Yes, this does seem like a bug.
But there is an actual explanation for what is occurring:
Notepad checks for a BOM byte sequence. If it does not find one, it has 2 options: the encoding is either UTF-16 Little Endian (without BOM) or plain ASCII. It checks for UTF-16 LE first using a function called IsTextUnicode.
IsTextUnicode runs a series of tests to guess whether the given text is Unicode or not. One of these tests is IS_TEXT_UNICODE_STATISTICS, which uses statistical analysis. If the test is true, then the given text is probably Unicode, but absolute certainty is not guaranteed.
https://learn.microsoft.com/en-us/windows/desktop/api/winbase/nf-winbase-istextunicode
If IsTextUnicode returns true, Notepad encodes the file with UTF-16 LE, producing the strange output you saw.
We can confirm this with the character ㄠ. Its corresponding ASCII characters are ' 1' (space, one); the hex values for those ASCII characters are 0x20 for the space and 0x31 for the one. Since the byte ordering is Little Endian, the bytes are read in reverse order for the Unicode code point, giving '1 ', or U+3120, which you can confirm by looking up that code point.
https://unicode-table.com/en/3120/
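You can reproduce the pairing with a couple of lines of Python, for example:

# The ASCII bytes for ' 1' (0x20, 0x31), read as UTF-16 LE,
# form the single code unit 0x3120, which is the character 'ㄠ'.
raw = b' 1'
char = raw.decode('utf-16-le')
print(char, hex(ord(char)))   # -> ㄠ 0x3120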
If you want to solve the issue, you need to break the pattern which helps IsTextUnicode determine if the given text is Unicode. You can insert a newline before the text to break the pattern.
Hope that helped!

Re-numbering residues in PDB file with biopython

I have a sequence alignment as:
RefSeq :MXKQRSLPLXQKRTKQAISFSASHRIYLQRKFSH .....
Templatepdb:-----------------ISFSASHR------FSHAQADFAG
I am trying to write code that re-numbers the residues in a PDB file based on this alignment:
original pdb : RES ID= 1 1 1 1 1 1 2 2 2 2 3 3 3 3 3 3 4 4 4 4 4 5 ...
new pdb : RES ID = 18 18 18 19 19 19 19 19 20 20 20 21 21 22 23 24 25 31 31 31 31 32 32 33 34 35 36 ...
If the alignment only has gaps at the beginning, it is easy to figure out: just count the gaps ("-") and add the sum of the gaps to the residue number, i.e. residue.id = (" ", original_number + sum_of_gaps, " ").
However, I could not find a way to do it when there are gaps in the middle of the sequence.
Do you have any suggestions?
If I understand it correctly, your input is an alignment:
'-----------------ISFSASHR------FSHAQADFAG'
and a list of residue numbers:
[1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 10, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 17, 18, 18, 18, 18]
And your output is the residue number shifted by the number of gaps before the residue:
[18, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 21, 21, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 25, 25, 25, 25, 32, 32, 32, 33, 34, 34, 34, 34, 35, 35, 35, 35, 35, 35, 36, 36, 36, 36, 36, 36, 36, 36, 37, 37, 37, 37, 37, 37, 37, 37, 37, 38, 38, 38, 38, 38, 38, 38, 38, 38, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39, 40, 41, 41, 41, 41]
Below is code to demonstrate it. There are numerous ways to calculate the output; the way I do it is to keep a dictionary shift_dict with the original number as the key and the shifted number as the value.
import itertools
import random

def random_residue_number(sequence):
    # Simulate per-atom residue numbers: each residue id is repeated
    # once per atom, with a random number of atoms per residue.
    nested = [[i + 1] * random.randint(1, 10) for i in range(len(sequence))]
    merged = list(itertools.chain.from_iterable(nested))
    return merged

def aligned_residue_number(alignment, original_number):
    gap_shift = 0
    residue_count = 0
    shift_dict = {}
    for residue in alignment:
        if residue == '-':
            gap_shift += 1
        else:
            residue_count += 1
            # new number = original number + gaps seen so far
            shift_dict[residue_count] = gap_shift + residue_count
    return [shift_dict[number] for number in original_number]
sequence = 'ISFSASHRFSHAQADFAG'
alignment = '-----------------ISFSASHR------FSHAQADFAG'
original_number = random_residue_number(sequence)
print(original_number)
# [1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 3, 3, 3, 4, 4, 5, 5, 5, 5, 5, 5, 5, 6, 6, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 8, 8, 8, 8, 9, 9, 9, 10, 11, 11, 11, 11, 12, 12, 12, 12, 12, 12, 13, 13, 13, 13, 13, 13, 13, 13, 14, 14, 14, 14, 14, 14, 14, 14, 14, 15, 15, 15, 15, 15, 15, 15, 15, 15, 16, 16, 16, 16, 16, 16, 16, 16, 16, 16, 17, 18, 18, 18, 18]
new_number = aligned_residue_number(alignment, original_number)
print(new_number)
# [18, 18, 18, 18, 18, 18, 18, 18, 19, 19, 19, 19, 19, 19, 20, 20, 20, 20, 20, 20, 20, 20, 21, 21, 22, 22, 22, 22, 22, 22, 22, 23, 23, 23, 23, 23, 23, 23, 24, 24, 24, 24, 24, 25, 25, 25, 25, 32, 32, 32, 33, 34, 34, 34, 34, 35, 35, 35, 35, 35, 35, 36, 36, 36, 36, 36, 36, 36, 36, 37, 37, 37, 37, 37, 37, 37, 37, 37, 38, 38, 38, 38, 38, 38, 38, 38, 38, 39, 39, 39, 39, 39, 39, 39, 39, 39, 39, 40, 41, 41, 41, 41]
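If you then want to write the new numbers back to the PDB file, something along these lines should work (a sketch, assuming chain "A", hypothetical file names, and a reasonably recent Biopython, where assigning to residue.id is supported):

from Bio import PDB

parser = PDB.PDBParser(QUIET=True)
structure = parser.get_structure("template", "template.pdb")  # hypothetical file
chain = structure[0]["A"]                                     # assuming chain A

# Map each original residue number to its shifted number
# (this dedupes the per-atom lists from above).
mapping = dict(zip(original_number, new_number))

# Renumber from the end of the chain so that a new id never collides
# with the id of a residue that has not been renumbered yet.
for residue in reversed(list(chain)):
    hetfield, resseq, icode = residue.id   # residue.id is a (hetfield, resseq, icode) tuple
    if resseq in mapping:
        residue.id = (hetfield, mapping[resseq], icode)

io = PDB.PDBIO()
io.set_structure(structure)
io.save("renumbered.pdb")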

How to adjust the category labels in a KendoChart?

I have been using a Kendo chart as in this example: http://jsfiddle.net/ericklanford/6dr0k59v/2/
The categoryAxis is defined as:
categoryAxis: {
    categories: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31],
    majorGridLines: {
        visible: false
    }
},
As you can see, it is difficult to read the labels under the categoryAxis.
Is there any possibility to do something like this:
What you are proposing with your image is not available out of the box (but it is possible through some hacks). Officially you have two options - rotate the labels or skip every other label:
Skip every other label
To do that you need to specify a step value when you configure the labels, like this:
// ...
categoryAxis: {
    categories: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31],
    labels: {
        step: 2
    },
    majorGridLines: {
        visible: false
    }
}
// ...
Rotate the labels
This will prevent them from overlapping because they will be sideways. That way they are easier to read, while you are not missing every other label. You need to set the rotation value to -90:
// ...
categoryAxis: {
    categories: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31],
    labels: {
        rotation: -90
    },
    majorGridLines: {
        visible: false
    }
}
// ...
... and the hacky way
This is not officially supported and it requires some manipulation of the rendered svg image. We need to slightly change the color of the axis first, so that we can find the elements by color:
// ...
categoryAxis: {
    categories: [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31],
    color: "#000001",
    majorGridLines: {
        visible: false
    }
}
// ...
And then we will run a script that will find all the labels and increase the "y" position of every other label by 8 pixels:
$(document).ready(createChart);

var axisLabels = $("#chart").find("text[fill='#000001']");
for (var i = 0; i < axisLabels.length; i += 2) {
    $(axisLabels[i]).attr("y", parseInt($(axisLabels[i]).attr("y")) + 8);
}
And here's the fiddle: http://jsfiddle.net/4Lsownbp/

Remove n elements from array dynamically and add to another array

nums = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30]
new_array = []
How do I grab the items divisible by 5, two at a time, and add them to a new array?
This is the desired result:
the new_array should now contain these values
[[5,10],[15,20],[25,30]]
Note: I want to do this without pushing them all into the array and then performing
array.each_slice(2). The process should happen dynamically.
Try this:
new_array = nums.select { |x| x % 5 == 0 }.each_slice(2).entries
No push involved: select keeps only the elements divisible by 5, each_slice(2) groups them into pairs, and entries turns the enumerator into an array of arrays.
