In Pure Data how to keyup, keydown, and while keydown? - keypress

I'm trying to set up a little MIDI keyboard (using my computer's keyboard) in Pure Data. It works this way:
press a key > send a note_on on midi channel
stop pressing a key > send a note_off on midi channel
The problem is that when you keep a key pressed, the [key] object generates a series of inputs instead of a single (long) one. This stops the desired note from playing (since the original input ends after ~500 ms) and restarts the note many times in a row.
I've already tried [change], [timer]+[moses] and other non-solutions. I'm looking for a better implementation of [key] that can handle long key presses: if I hold a key down with [key] for more than a second, I get something like:
key....(1 sec passes)...keyup.key.keyup.key.keyup. and it goes on and on...

The problem is that your operating system(!) generates repeated key events if you keep the key pressed.
Solution
So the simple solution is to tell your OS to suppress repeated key events.
Workaround
The more complicated workaround is to keep track of the current state of the given key and suppress duplicate key-downs. This is most easily done if you only track a single key (rather than all keys at once):
E.g. an abstraction [keypress 97] that will detect presses of the a key (ASCII 97):
[key] [keyup]
| |
[select $1] [select $1]
| |
[t b b] |
| [stop( |
| | |
| +----- |
| \|
| [del 50]
| |
[1( [0(
| |
| -----------+
|/
[change]
|
[outlet]
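The abstraction's logic, roughly sketched in Python (the KeyPress class and its method names are hypothetical, for illustration only — not any real keyboard API):

```python
class KeyPress:
    """Sketch of the [keypress] abstraction: a key-down cancels any
    pending release, and a key-up only emits 0 if no key-down arrives
    before the ~50 ms debounce tick. [change] is modelled by reporting
    only state transitions."""

    def __init__(self):
        self.state = 0            # last reported state, like [change]
        self.pending_up = False   # a key-up waiting out the [del 50] window

    def key_down(self):
        self.pending_up = False   # like sending [stop( to [del 50]
        if self.state != 1:       # [change]: report transitions only
            self.state = 1
            return 1              # note-on
        return None               # OS auto-repeat, suppressed

    def key_up(self):
        self.pending_up = True    # don't release yet; wait for the tick
        return None

    def tick(self):
        """Call this ~50 ms after a key-up (models [del 50] firing)."""
        if self.pending_up and self.state != 0:
            self.pending_up = False
            self.state = 0
            return 0              # note-off
        return None
```

With this, an OS auto-repeat sequence key.keyup.key.keyup... collapses into a single 1 at the start and a single 0 once the key has really been released.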

What about [keyname]:
http://en.flossmanuals.net/pure-data/sensors/game-controllers/
Here is an example patch that writes to an array when multiple keys are pressed. It should be possible to use this as a polyphonic input: using [tabread] on a given array index would then indicate whether that key is pressed or not (the index should match the ASCII/key number):
#N canvas 800 301 544 205 10;
#X obj 23 23 keyname;
#X symbolatom 89 40 10 0 0 0 - - -;
#X floatatom 23 46 5 0 0 0 - - -;
#X obj 181 18 key;
#X floatatom 181 46 3 0 0 0 - - -;
#X floatatom 220 44 3 0 0 0 - - -;
#X obj 220 18 keyup;
#X obj 44 87 pack float symbol float float;
#X obj 67 117 print;
#X obj 46 151 tabwrite array1;
#N canvas 0 0 450 300 (subpatch) 0;
#X array array1 256 float 1;
#A 0 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
#X coords 0 1.2 255 0 256 100 1 0 0;
#X restore 277 33 graph;
#X connect 0 0 2 0;
#X connect 0 1 1 0;
#X connect 1 0 7 1;
#X connect 2 0 7 0;
#X connect 2 0 9 0;
#X connect 3 0 4 0;
#X connect 4 0 7 2;
#X connect 4 0 9 1;
#X connect 5 0 7 3;
#X connect 5 0 9 1;
#X connect 6 0 5 0;
#X connect 7 0 8 0;
(Screenshots omitted: the array with a + g pressed at the same time, after pressing s, while holding a, and after pressing a.)
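The array idea boils down to one slot per key code; a minimal Python sketch (the helper names key_event/is_pressed are made up, standing in for [tabwrite]/[tabread]):

```python
# One slot per key code, like array1 in the patch: write 1 on key-down,
# 0 on key-up, then read the slot to test whether a key is held.
NKEYS = 256
state = [0] * NKEYS   # plays the role of array1

def key_event(code, down):
    state[code] = 1 if down else 0   # [tabwrite array1]

def is_pressed(code):
    return state[code] == 1          # [tabread array1]

key_event(97, True)    # press 'a'
key_event(103, True)   # press 'g'
key_event(97, False)   # release 'a'
```

Any number of simultaneous keys can be tracked this way, which is what makes the approach polyphonic.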
I was able to find something here as well: http://puredata.hurleur.com/sujet-3718-pdkb-basic-virtual-midi-keyboard
zipfile: http://puredata.hurleur.com/attachment.php?item=1635
Looks neat, not sure if it functions.

Related

Find 4-neighbors using J

I'm trying to find the 4-neighbors of all 1's in a matrix of 0's and 1's using the J programming language. I have a method worked out, but am trying to find one that is more compact.
To illustrate, let's say I have the matrix M—
] M=. 4 4$0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
0 0 0 0
0 0 1 0
0 0 0 0
0 0 0 0
and I want to generate—
0 0 1 0
0 1 0 1
0 0 1 0
0 0 0 0
I've sorted out something close (which I owe to this little gem: https://www.reddit.com/r/cellular_automata/comments/9kw21u/i_made_a_34byte_implementation_of_conways_game_of/)—
] +/+/(|:i:1*(2 2)$1 0 0 1)&|.M
0 0 1 0
0 1 2 1
0 0 1 0
0 0 0 0
which is fine because I'll be weighting the initial 1's anyway (the actual numbers aren't really that important for my application). But I feel like this could be more compact and I've just hit a wall. And the compactness of the expression actually is important to my application.
Building on @Eelvex's comment solution, if you are willing to make the verb dyadic it becomes pretty simple. The left argument can be the rotation matrix, and the result is then combined with +./, which is a logical OR and can be weighted however you want.
] M0=. 4 4$0 0 0 0 0 0 1 0 0 0 0 0 0 0 0 0
0 0 0 0
0 0 1 0
0 0 0 0
0 0 0 0
] m =.2,\5$0,i:1
0 _1
_1 0
0 1
1 0
m +./#:|. M0
0 0 1 0
0 1 0 1
0 0 1 0
0 0 0 0
There is still an issue with the edges (which wrap around), but that also occurs with your original solution, so I am hoping you are not concerned with that.
] M1=. 4 4$1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
1 0 0 0
0 0 0 0
0 0 0 0
0 0 0 0
m +./#:|. M1
0 1 0 1
1 0 0 0
0 0 0 0
1 0 0 0
If you did want to clean that up, you can use the slightly longer m +./#:(|.!.0), which fills the rotation with 0's.
] M2=. 4 4$ 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1
0 0 0 0
0 0 0 0
0 0 0 0
0 0 0 1
m +./#:(|.!.0) M2
0 0 0 0
0 0 0 0
0 0 0 1
0 0 1 0
m +./#:(|.!.0) M1
0 1 0 0
1 0 0 0
0 0 0 0
0 0 0 0
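For comparison, here is a rough Python sketch of the same shift-and-OR idea with zero fill (the |.!.0 variant); plain nested loops stand in for J's array operations, and the function names are illustrative:

```python
def shift(m, dr, dc):
    """Shift matrix m by (dr, dc), filling vacated cells with 0
    (the non-wrapping behaviour of J's |.!.0)."""
    rows, cols = len(m), len(m[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                out[nr][nc] = m[r][c]
    return out

def four_neighbors(m):
    """Logical OR of the four shifted copies: a cell becomes 1
    whenever one of its 4-neighbors is 1 in m."""
    rows, cols = len(m), len(m[0])
    acc = [[0] * cols for _ in range(rows)]
    for dr, dc in [(-1, 0), (1, 0), (0, -1), (0, 1)]:
        s = shift(m, dr, dc)
        for r in range(rows):
            for c in range(cols):
                acc[r][c] = acc[r][c] or s[r][c]
    return acc
```

Replacing the OR with a sum recovers the weighted-count behaviour of the +/ version.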

Problem with extracting region pixels from a gray image in MATLAB

I would like to select the dark gray pixels from a gray image.
J = rgb2gray(I);
Newfigure = zeros(size(J));
[k,l] =find(J<130);
Newfigure(k,l) = J(k,l);
imshow(Newfigure)
When I visualize Newfigure, the circular zone appears as a square. Why does this happen?
This is due to the way you index into Newfigure. Look at the following:
>> test = zeros(10);
>> test([2,8], [1,2]) = 1
test =
0 0 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
1 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
This is different from
>> test = zeros(10);
>> test(2, 1) = 1;
>> test(8, 2) = 1
test =
0 0 0 0 0 0 0 0 0 0
1 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0
You could either use a loop like
Newfigure = zeros(size(J));
for n = 1:numel(k)
    Newfigure(k(n), l(n)) = J(k(n), l(n));
end
or simply use
Newfigure = J < 130;
imshow(Newfigure);
Get rid of the find(...) and just use logical indices. It'll be faster too...
J = rgb2gray(I);
Newfigure = zeros(size(J));
tf = J<130;
Newfigure(tf) = J(tf);
imshow(Newfigure)
The tf variable will be a logical array (true/false) the same size as J, which you can then use to index the arrays as shown.
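The pitfall generalises beyond MATLAB: indexing with two index vectors selects the Cartesian product of rows × columns, not the zipped coordinate pairs. A small Python illustration of the two interpretations, using the indices from the example above:

```python
from itertools import product

# The indices from the MATLAB example test([2,8], [1,2])  (1-based).
rows, cols = [2, 8], [1, 2]

cartesian = set(product(rows, cols))  # what test([2,8],[1,2]) touches
pairs = set(zip(rows, cols))          # what the loop (or sub2ind) touches

print(sorted(cartesian))  # [(2, 1), (2, 2), (8, 1), (8, 2)] -> 4 cells
print(sorted(pairs))      # [(2, 1), (8, 2)]                 -> 2 cells
```

The logical-mask answer sidesteps the question entirely, since a mask names each cell individually rather than as a row/column cross product.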

Computing block sum for an arbitrary region in an image

I wonder what the most efficient way is to solve the following problem
(if there is a name for this problem, I would like to know it as well):
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 0 1 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 1 0 0 1 0 1 1 1;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 1 1 1 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0;
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0]
Suppose I have an image where the pixels of interest are marked by 1, as above. I want to calculate a sum of the block around each such pixel. A block sum is easy to calculate from an integral image, but I don't want to compute it for the whole image, since that involves a lot of unnecessary computation.
One option I can come up with is to find the minimum and maximum of the marked coordinates in the horizontal and vertical directions, and then take a rectangular portion of the image enlarged so that it covers the block around every marked pixel (for example, +2 pixels in each direction if the block size is 5). But this solution still includes unnecessary calculation.
If I had a list of these indices, I could loop through them and calculate the sum for each block. But if another marked pixel is close by, its block shares pixels and I would recalculate them; and if I cache the partial sums, I need to check whether they have already been calculated, which takes time as well.
Is there a known solution for this sort of a problem?

How to compare the values in two different rows with awk?

Given this file:
Variable_name Value
Aborted_clients 0
Aborted_connects 4
Binlog_cache_disk_use 0
Binlog_cache_use 0
Binlog_stmt_cache_disk_use 0
Binlog_stmt_cache_use 0
Bytes_received 141
Bytes_sent 177
Com_admin_commands 0
Com_assign_to_keycache 0
Com_alter_db 0
Com_alter_db_upgrade 0
Com_alter_event 0
Com_alter_function 0
Com_alter_procedure 0
Com_alter_server 0
Com_alter_table 0
Com_alter_tablespace 0
Com_analyze 0
Com_begin 0
Com_binlog 0
Com_call_procedure 0
Com_change_db 0
Com_change_master 0
Com_check 0
Com_checksum 0
Com_commit 0
Com_create_db 0
Com_create_event 0
Com_create_function 0
Com_create_index 0
Com_create_procedure 0
Com_create_server 0
Com_create_table 0
Com_create_trigger 0
Com_create_udf 0
Com_create_user 0
Com_create_view 0
Com_dealloc_sql 0
Com_delete 0
Com_delete_multi 0
Com_do 0
Com_drop_db 0
Com_drop_event 0
Com_drop_function 0
Com_drop_index 0
Com_drop_procedure 0
Com_drop_server 0
Com_drop_table 0
Com_drop_trigger 0
Com_drop_user 0
Com_drop_view 0
Com_empty_query 0
Com_execute_sql 0
Com_flush 0
Com_grant 0
Com_ha_close 0
Com_ha_open 0
Com_ha_read 0
Com_help 0
Com_insert 0
Com_insert_select 0
Com_install_plugin 0
Com_kill 0
Com_load 0
Com_lock_tables 0
Com_optimize 0
Com_preload_keys 0
Com_prepare_sql 0
Com_purge 0
Com_purge_before_date 0
Com_release_savepoint 0
Com_rename_table 0
Com_rename_user 0
Com_repair 0
Com_replace 0
Com_replace_select 0
Com_reset 0
Com_resignal 0
Com_revoke 0
Com_revoke_all 0
Com_rollback 0
Com_rollback_to_savepoint 0
Com_savepoint 0
Com_select 1
Com_set_option 0
Com_signal 0
Com_show_authors 0
Com_show_binlog_events 0
Com_show_binlogs 0
Com_show_charsets 0
Com_show_collations 0
Com_show_contributors 0
Com_show_create_db 0
Com_show_create_event 0
Com_show_create_func 0
Com_show_create_proc 0
Com_show_create_table 0
Com_show_create_trigger 0
Com_show_databases 0
Com_show_engine_logs 0
Com_show_engine_mutex 0
Com_show_engine_status 0
Com_show_events 0
Com_show_errors 0
Com_show_fields 0
Com_show_function_status 0
Com_show_grants 0
Com_show_keys 0
Com_show_master_status 0
Com_show_open_tables 0
Com_show_plugins 0
Com_show_privileges 0
Com_show_procedure_status 0
Com_show_processlist 0
Com_show_profile 0
Com_show_profiles 0
Com_show_relaylog_events 0
Com_show_slave_hosts 0
Com_show_slave_status 0
Com_show_status 1
Com_show_storage_engines 0
Com_show_table_status 0
Com_show_tables 0
Com_show_triggers 0
Com_show_variables 0
Com_show_warnings 0
Com_slave_start 0
Com_slave_stop 0
Com_stmt_close 0
Com_stmt_execute 0
Com_stmt_fetch 0
Com_stmt_prepare 0
Com_stmt_reprepare 0
Com_stmt_reset 0
Com_stmt_send_long_data 0
Com_truncate 0
Com_uninstall_plugin 0
Com_unlock_tables 0
Com_update 0
Com_update_multi 0
Com_xa_commit 0
Com_xa_end 0
Com_xa_prepare 0
Com_xa_recover 0
Com_xa_rollback 0
Com_xa_start 0
Compression OFF
Connections 375
Created_tmp_disk_tables 0
Created_tmp_files 6
Created_tmp_tables 0
Delayed_errors 0
Delayed_insert_threads 0
Delayed_writes 0
Flush_commands 1
Handler_commit 0
Handler_delete 0
Handler_discover 0
Handler_prepare 0
Handler_read_first 0
Handler_read_key 0
Handler_read_last 0
Handler_read_next 0
Handler_read_prev 0
Handler_read_rnd 0
Handler_read_rnd_next 0
Handler_rollback 0
Handler_savepoint 0
Handler_savepoint_rollback 0
Handler_update 0
Handler_write 0
Innodb_buffer_pool_pages_data 584
Innodb_buffer_pool_bytes_data 9568256
Innodb_buffer_pool_pages_dirty 0
Innodb_buffer_pool_bytes_dirty 0
Innodb_buffer_pool_pages_flushed 120
Innodb_buffer_pool_pages_free 7607
Innodb_buffer_pool_pages_misc 0
Innodb_buffer_pool_pages_total 8191
Innodb_buffer_pool_read_ahead_rnd 0
Innodb_buffer_pool_read_ahead 0
Innodb_buffer_pool_read_ahead_evicted 0
Innodb_buffer_pool_read_requests 14912
Innodb_buffer_pool_reads 584
Innodb_buffer_pool_wait_free 0
Innodb_buffer_pool_write_requests 203
Innodb_data_fsyncs 163
Innodb_data_pending_fsyncs 0
Innodb_data_pending_reads 0
Innodb_data_pending_writes 0
Innodb_data_read 11751424
Innodb_data_reads 594
Innodb_data_writes 243
Innodb_data_written 3988480
Innodb_dblwr_pages_written 120
Innodb_dblwr_writes 40
Innodb_have_atomic_builtins ON
Innodb_log_waits 0
Innodb_log_write_requests 28
Innodb_log_writes 41
Innodb_os_log_fsyncs 83
Innodb_os_log_pending_fsyncs 0
Innodb_os_log_pending_writes 0
Innodb_os_log_written 34816
Innodb_page_size 16384
Innodb_pages_created 1
Innodb_pages_read 583
Innodb_pages_written 120
Innodb_row_lock_current_waits 0
Innodb_row_lock_time 0
Innodb_row_lock_time_avg 0
Innodb_row_lock_time_max 0
Innodb_row_lock_waits 0
Innodb_rows_deleted 0
Innodb_rows_inserted 0
Innodb_rows_read 40
Innodb_rows_updated 39
Innodb_truncated_status_writes 0
Key_blocks_not_flushed 0
Key_blocks_unused 13396
Key_blocks_used 0
Key_read_requests 0
Key_reads 0
Key_write_requests 0
Key_writes 0
Last_query_cost 0.000000
Max_used_connections 3
Not_flushed_delayed_rows 0
Open_files 86
Open_streams 0
Open_table_definitions 109
Open_tables 109
Opened_files 439
Opened_table_definitions 0
Opened_tables 0
Performance_schema_cond_classes_lost 0
Performance_schema_cond_instances_lost 0
Performance_schema_file_classes_lost 0
Performance_schema_file_handles_lost 0
Performance_schema_file_instances_lost 0
Performance_schema_locker_lost 0
Performance_schema_mutex_classes_lost 0
Performance_schema_mutex_instances_lost 0
Performance_schema_rwlock_classes_lost 0
Performance_schema_rwlock_instances_lost 0
Performance_schema_table_handles_lost 0
Performance_schema_table_instances_lost 0
Performance_schema_thread_classes_lost 0
Performance_schema_thread_instances_lost 0
Prepared_stmt_count 0
Qcache_free_blocks 1
Qcache_free_memory 16758160
Qcache_hits 0
Qcache_inserts 1
Qcache_lowmem_prunes 0
Qcache_not_cached 419
Qcache_queries_in_cache 1
Qcache_total_blocks 4
Queries 1146
Questions 2
Rpl_status AUTH_MASTER
Select_full_join 0
Select_full_range_join 0
Select_range 0
Select_range_check 0
Select_scan 0
Slave_heartbeat_period 0.000
Slave_open_temp_tables 0
Slave_received_heartbeats 0
Slave_retried_transactions 0
Slave_running OFF
Slow_launch_threads 0
Slow_queries 0
Sort_merge_passes 0
Sort_range 0
Sort_rows 0
Sort_scan 0
Ssl_accept_renegotiates 0
Ssl_accepts 0
Ssl_callback_cache_hits 0
Ssl_cipher
Ssl_cipher_list
Ssl_client_connects 0
Ssl_connect_renegotiates 0
Ssl_ctx_verify_depth 0
Ssl_ctx_verify_mode 0
Ssl_default_timeout 0
Ssl_finished_accepts 0
Ssl_finished_connects 0
Ssl_session_cache_hits 0
Ssl_session_cache_misses 0
Ssl_session_cache_mode NONE
Ssl_session_cache_overflows 0
Ssl_session_cache_size 0
Ssl_session_cache_timeouts 0
Ssl_sessions_reused 0
Ssl_used_session_cache_entries 0
Ssl_verify_depth 0
Ssl_verify_mode 0
Ssl_version
Table_locks_immediate 123
Table_locks_waited 0
Tc_log_max_pages_used 0
Tc_log_page_size 0
Tc_log_page_waits 0
Threads_cached 1
Threads_connected 2
Threads_created 3
Threads_running 1
Uptime 2389
Uptime_since_flush_status 2389
How would one use awk to make this calculation of Queries per second (Queries/Uptime):
1146/2389
And print the result?
I'm grepping 2 results from a list of results and need to calculate items/second, where 302 is the total item count and 503 the total uptime count.
At this moment I'm doing
grep -Ew "Queries|Uptime" | awk '{print $2}'
to print out:
302
503
But here I got stuck.
You can use something like:
$ awk '/Queries/ {q=$2} /Uptime/ {print q/$2}' file
0.600398
That is: when the line contains the string "Queries", store its value. When it contains "Uptime", print the result of dividing the stored Queries value by the Uptime value.
This assumes the string "Queries" appears before the string "Uptime".
Given your updated input, we actually need to check that the first field is exactly "Uptime" or "Queries", so that other lines containing these strings (such as Uptime_since_flush_status) don't match:
$ awk '$1 == "Queries" {q=$2} $1=="Uptime" {print q/$2}' file
0.479699
I think the following awk one-liner will help you:
kent$ cat f
Queries 302
Uptime 503
kent$ awk '{a[NR]=$NF}END{printf "%.2f\n",a[NR-1]/a[NR]}' f
0.60
If you want to do it together with the "grep"-style matching:
kent$ awk '/Queries/{a=$NF}/Uptime/{b=$NF}END{printf "%.2f\n",a/b}' f
0.60
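If awk feels too terse, the exact-field-match logic is easy to port; a rough Python equivalent (the function name is illustrative):

```python
def queries_per_second(lines):
    """Mimic awk '$1=="Queries"{q=$2} $1=="Uptime"{print q/$2}':
    match the first field exactly, so lines like Questions or
    Uptime_since_flush_status don't interfere."""
    vals = {}
    for line in lines:
        fields = line.split()
        if len(fields) >= 2 and fields[0] in ("Queries", "Uptime"):
            vals[fields[0]] = float(fields[1])
    return vals["Queries"] / vals["Uptime"]
```

Feeding it the status dump above would give 1146/2389 ≈ 0.48 queries per second.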

Using RStudio to create a graph

I want to make a bar graph from the attached data in RStudio, showing which IP used which protocol and how many times.
Protocol
Source DNS FTP HTTP IMF LLC SMTP TCP TELNET
172.16.112.100 306 0 0 0 0 0 0 0
172.16.112.50 0 0 0 0 0 0 0 24
172.16.113.168 0 0 0 0 0 0 0 15
172.16.113.204 1 0 0 0 0 0 0 0
172.16.114.50 1 0 0 0 0 0 0 0
172.16.115.20 158 0 0 0 0 0 0 2
192.168.1.20 3 0 0 0 0 0 0 0
194.7.248.153 0 0 0 0 0 0 0 2
197.218.177.69 0 0 0 0 0 0 0 0
HP_ed:9b:2d 0 0 0 0 0 0 0 0
A simple way to build one plot for each IP, after loading the data as a data.frame with colnames and rownames:
par(mfrow=c(4,3))
for (i in 1:nrow(data)) {
barplot(as.numeric(data[i,]), main=rownames(data)[i], names.arg=colnames(data))
}
Your data is very sparse, which makes the graphs have only one bar or no bars at all. If you want stacked or grouped bars you should have a look at the packages ggplot2 or lattice.
