I have a file of GIMP curves in the old format, like this:
# GIMP Curves File
0 0 45 79 166 134 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 255 255
0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 255 255
0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 255 255
0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 255 255
0 0 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 -1 255 255
Every pair is a point with x,y coordinates (pairs of -1,-1 are unused slots). I need to calculate the curve's y-value for each x in the range 0–255. Which algorithm should I use?
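GIMP reportedly interpolates curve control points with a Catmull-Rom-style cubic spline, but I haven't verified that against its source, so treat the choice of tangents below as an assumption. Here is a minimal Python sketch (the function names are mine) that parses one channel line and evaluates a cubic Hermite spline with finite-difference tangents through the active points:

```python
def parse_points(line):
    """Parse one channel line of an old-style GIMP curves file into (x, y) points."""
    vals = [int(v) for v in line.split()]
    pairs = [(vals[i], vals[i + 1]) for i in range(0, len(vals), 2)]
    return [(x, y) for x, y in pairs if x != -1]  # (-1, -1) marks an unused slot

def curve_y(points, x):
    """Evaluate the curve at x using a cubic Hermite spline with
    Catmull-Rom-style finite-difference tangents (an assumption, see above)."""
    if x <= points[0][0]:
        return points[0][1]
    if x >= points[-1][0]:
        return points[-1][1]
    # locate the segment [x1, x2] containing x
    i = max(j for j in range(len(points) - 1) if points[j][0] <= x)
    x1, y1 = points[i]
    x2, y2 = points[i + 1]

    def slope(a, b):
        return (points[b][1] - points[a][1]) / (points[b][0] - points[a][0])

    # central differences in the interior, one-sided at the ends
    m1 = slope(i - 1, i + 1) if i > 0 else slope(i, i + 1)
    m2 = slope(i, i + 2) if i + 2 < len(points) else slope(i, i + 1)

    h = x2 - x1
    t = (x - x1) / h
    y = ((2*t**3 - 3*t**2 + 1) * y1 + (t**3 - 2*t**2 + t) * h * m1
         + (-2*t**3 + 3*t**2) * y2 + (t**3 - t**2) * h * m2)
    return min(255, max(0, round(y)))  # clamp: cubics can overshoot [0, 255]
```

If a smooth curve is not required, plain linear interpolation between consecutive points is a simpler alternative that also passes exactly through every control point.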
I'm trying to build a matrix out of linear equations, but for some reason I keep getting a parse error in my matrix where previously I did not.
CoM=[K1*(abs(Q1).^r) K2*(abs(Q2).^r) -(K3*(abs(Q3).^r)) -(K4*(abs(Q4).^r)); K3*(abs(Q3).^r) -(K5*(abs(Q5).^r)) -(K6*(abs(Q6).^r) -(K7*(abs(Q7).^r) ; 1 -1 0 0 ; 1 1 -1 0 ; 1 -1 0 0 ; 1 -1 0 0 ; 1 1 -1 0];
^
error: parse error near line 1 of file '____'\WNA2loop.m
syntax error
CoM=[K1*(abs(Q1).^r) K2*(abs(Q2).^r) -(K3*(abs(Q3).^r)) -(K4*(abs(Q4).^r)); K3*(abs(Q3).^r) -(K5*(abs(Q5).^r)) -(K6*(abs(Q6).^r) -(K7*(abs(Q7).^r) ; 1 -1 0 0 ; 1 1 -1 0 ; 1 -1 0 0 ; 1 -1 0 0 ; 1 1 -1 0];
^
The error points right where I put the caret, but when I take that small part of the matrix and run it on its own, it works:
[1 -1 0 0 ; 1 1 -1 0 ; 1 -1 0 0 ; 1 -1 0 0 ; 1 1 -1 0]
ans =

   1  -1   0   0
   1   1  -1   0
   1  -1   0   0
   1  -1   0   0
   1   1  -1   0
K1 to K7 and Q1 to Q7, as well as r, are just variables that the user supplies through the input function. It worked before, but now it just won't budge. Could someone please provide assistance with this?
I have a text file that I am trying to modify. The input file has lines of the form
(y+1/4,-x+1/2,z+3/4)
and trying to change it to
0 1 0 -1 0 0 0 0 1 1 / 4 1 / 2 3 / 4
I currently can get to this point
0 1 0 1/4 -1 0 0 1/2 0 0 1 3/4
using
#!/bin/bash
filename="227.dat"
sed -i 's/(/ /g' $filename
sed -i 's/)//g' $filename
sed -i 's/,/ /g' $filename
sed -i 's/-x/-1 0 0/g' $filename
sed -i 's/x/ 1 0 0/g' $filename
sed -i 's/-y/ 0 -1 0/g' $filename
sed -i 's/y/ 0 1 0/g' $filename
sed -i 's/-z/ 0 0 -1/g' $filename
sed -i 's/z/ 0 0 1/g' $filename
sed -i '/+/! s/$/ 0 \/ 1 0 \/ 1 0 \/ 1/' $filename
while ((i++)); read -r line; do
if [[ $line == *[+]* ]]
then
sed -i 's/+/ /g' $filename
echo $i
fi
done < "$filename"
The reason for the echo $i was to check that it correctly reports the line number; I thought perhaps I could use it for commands on those specific lines. I am doing this conversion because the code we use for creating crystal structures needs the vector notation with fractions at the end, not the x,y,z notation. I already know this is not the "prettiest" or simplest solution, but I am very new to all of this, and it's what I have been able to piece together so far. Any suggestions?
Here's an approach that may simplify the parsing. Read each line into an array, with IFS set to all the delimiters and characters you don't care about:
while IFS='()+,' read -ra line; do
for i in 1 3 5; do
case "${line[$i]}" in
x) printf "%s\t%s\t%s\t" 1 0 0 ;;
y) printf "%s\t%s\t%s\t" 0 1 0 ;;
z) printf "%s\t%s\t%s\t" 0 0 1 ;;
-x) printf "%s\t%s\t%s\t" -1 0 0 ;;
-y) printf "%s\t%s\t%s\t" 0 -1 0 ;;
-z) printf "%s\t%s\t%s\t" 0 0 -1 ;;
esac
done
for i in 2 4 6; do
printf "%s\t" "${line[$i]}"
done
echo
done < "$filename"
#!/usr/bin/env bash
filename="227.dat"
re='[(]y[+]([[:digit:]/]+),-x[+]([[:digit:]/]+),z[+]([[:digit:]/]+)[)]';
while IFS= read -r line; do
if [[ $line =~ $re ]]; then
printf '\t%s' \
0 1 0 \
-1 0 0 \
0 0 1 \
"${BASH_REMATCH[1]}" \
"${BASH_REMATCH[2]}" \
"${BASH_REMATCH[3]}";
printf '\n';
else
echo "ERROR: $line does not match $re" 1>&2;
fi;
done <"$filename"
...given your input, returns:
0 1 0 -1 0 0 0 0 1 1/4 1/2 3/4
...which as far as I can tell is correct.
A more complex approach, making unfounded extrapolations (given the lack of detail and exemplars in the question itself), might look like:
#!/usr/bin/env bash
while IFS='(),' read -ra pieces; do
declare -A vars=( [x]=1 [y]=1 [z]=1 [x_sigil]='' [y_sigil]='' [z_sigil]='' )
for piece in "${pieces[@]}"; do
# 1 2 3 4
if [[ $piece =~ (-?)([xyz])([+]([[:digit:]/]+))? ]]; then
if [[ ${BASH_REMATCH[4]} ]]; then # only if there *are* digits
vars[${BASH_REMATCH[2]}]=${BASH_REMATCH[4]} # ...then store them.
fi
vars[${BASH_REMATCH[2]}_sigil]=${BASH_REMATCH[1]} # store - if applicable
fi
done
printf '\t%s' \
"0" "${vars[x_sigil]}1" 0 \
"${vars[y_sigil]}1" 0 0 \
0 0 "${vars[z_sigil]}1" \
"${vars[y]}" "${vars[x]}" "${vars[z]}"
printf '\n'
done
Given the sample inputs provided in a comment on this answer, output is:
0 1 0 1 0 0 0 0 1 1 1 1
0 1 0 1 0 0 0 0 1 1 1 1
0 1 0 1 0 0 0 0 1 1 1 1
0 1 0 1 0 0 0 0 -1 3/4 1/4 1/2
0 1 0 -1 0 0 0 0 1 1/2 3/4 1/4
0 -1 0 1 0 0 0 0 1 1/4 1/2 3/4
0 -1 0 -1 0 0 0 0 -1 1 1 1
0 -1 0 -1 0 0 0 0 -1 1 1 1
0 -1 0 -1 0 0 0 0 -1 1 1 1
0 -1 0 -1 0 0 0 0 1 1/4 3/4 1/2
0 -1 0 1 0 0 0 0 -1 1/2 1/4 3/4
0 1 0 -1 0 0 0 0 -1 3/4 1/2 1/4
0 -1 0 -1 0 0 0 0 1 1/4 3/4 1/2
0 -1 0 -1 0 0 0 0 1 1/4 3/4 1/2
0 -1 0 -1 0 0 0 0 1 1/4 3/4 1/2
0 -1 0 -1 0 0 0 0 -1 1 1 1
0 -1 0 1 0 0 0 0 1 1/4 1/2 3/4
0 1 0 -1 0 0 0 0 1 1/2 3/4 1/4
0 1 0 1 0 0 0 0 -1 3/4 1/4 1/2
0 1 0 1 0 0 0 0 -1 3/4 1/4 1/2
0 1 0 1 0 0 0 0 -1 3/4 1/4 1/2
0 1 0 1 0 0 0 0 1 1 1 1
0 1 0 -1 0 0 0 0 -1 3/4 1/2 1/4
0 -1 0 1 0 0 0 0 -1 1/2 1/4 3/4
0 -1 0 1 0 0 0 0 -1 1/2 1/4 3/4
0 -1 0 1 0 0 0 0 -1 1/2 1/4 3/4
0 -1 0 1 0 0 0 0 -1 1/2 1/4 3/4
0 -1 0 1 0 0 0 0 1 1/4 1/2 3/4
Given this file:
Variable_name Value
Aborted_clients 0
Aborted_connects 4
Binlog_cache_disk_use 0
Binlog_cache_use 0
Binlog_stmt_cache_disk_use 0
Binlog_stmt_cache_use 0
Bytes_received 141
Bytes_sent 177
Com_admin_commands 0
Com_assign_to_keycache 0
Com_alter_db 0
Com_alter_db_upgrade 0
Com_alter_event 0
Com_alter_function 0
Com_alter_procedure 0
Com_alter_server 0
Com_alter_table 0
Com_alter_tablespace 0
Com_analyze 0
Com_begin 0
Com_binlog 0
Com_call_procedure 0
Com_change_db 0
Com_change_master 0
Com_check 0
Com_checksum 0
Com_commit 0
Com_create_db 0
Com_create_event 0
Com_create_function 0
Com_create_index 0
Com_create_procedure 0
Com_create_server 0
Com_create_table 0
Com_create_trigger 0
Com_create_udf 0
Com_create_user 0
Com_create_view 0
Com_dealloc_sql 0
Com_delete 0
Com_delete_multi 0
Com_do 0
Com_drop_db 0
Com_drop_event 0
Com_drop_function 0
Com_drop_index 0
Com_drop_procedure 0
Com_drop_server 0
Com_drop_table 0
Com_drop_trigger 0
Com_drop_user 0
Com_drop_view 0
Com_empty_query 0
Com_execute_sql 0
Com_flush 0
Com_grant 0
Com_ha_close 0
Com_ha_open 0
Com_ha_read 0
Com_help 0
Com_insert 0
Com_insert_select 0
Com_install_plugin 0
Com_kill 0
Com_load 0
Com_lock_tables 0
Com_optimize 0
Com_preload_keys 0
Com_prepare_sql 0
Com_purge 0
Com_purge_before_date 0
Com_release_savepoint 0
Com_rename_table 0
Com_rename_user 0
Com_repair 0
Com_replace 0
Com_replace_select 0
Com_reset 0
Com_resignal 0
Com_revoke 0
Com_revoke_all 0
Com_rollback 0
Com_rollback_to_savepoint 0
Com_savepoint 0
Com_select 1
Com_set_option 0
Com_signal 0
Com_show_authors 0
Com_show_binlog_events 0
Com_show_binlogs 0
Com_show_charsets 0
Com_show_collations 0
Com_show_contributors 0
Com_show_create_db 0
Com_show_create_event 0
Com_show_create_func 0
Com_show_create_proc 0
Com_show_create_table 0
Com_show_create_trigger 0
Com_show_databases 0
Com_show_engine_logs 0
Com_show_engine_mutex 0
Com_show_engine_status 0
Com_show_events 0
Com_show_errors 0
Com_show_fields 0
Com_show_function_status 0
Com_show_grants 0
Com_show_keys 0
Com_show_master_status 0
Com_show_open_tables 0
Com_show_plugins 0
Com_show_privileges 0
Com_show_procedure_status 0
Com_show_processlist 0
Com_show_profile 0
Com_show_profiles 0
Com_show_relaylog_events 0
Com_show_slave_hosts 0
Com_show_slave_status 0
Com_show_status 1
Com_show_storage_engines 0
Com_show_table_status 0
Com_show_tables 0
Com_show_triggers 0
Com_show_variables 0
Com_show_warnings 0
Com_slave_start 0
Com_slave_stop 0
Com_stmt_close 0
Com_stmt_execute 0
Com_stmt_fetch 0
Com_stmt_prepare 0
Com_stmt_reprepare 0
Com_stmt_reset 0
Com_stmt_send_long_data 0
Com_truncate 0
Com_uninstall_plugin 0
Com_unlock_tables 0
Com_update 0
Com_update_multi 0
Com_xa_commit 0
Com_xa_end 0
Com_xa_prepare 0
Com_xa_recover 0
Com_xa_rollback 0
Com_xa_start 0
Compression OFF
Connections 375
Created_tmp_disk_tables 0
Created_tmp_files 6
Created_tmp_tables 0
Delayed_errors 0
Delayed_insert_threads 0
Delayed_writes 0
Flush_commands 1
Handler_commit 0
Handler_delete 0
Handler_discover 0
Handler_prepare 0
Handler_read_first 0
Handler_read_key 0
Handler_read_last 0
Handler_read_next 0
Handler_read_prev 0
Handler_read_rnd 0
Handler_read_rnd_next 0
Handler_rollback 0
Handler_savepoint 0
Handler_savepoint_rollback 0
Handler_update 0
Handler_write 0
Innodb_buffer_pool_pages_data 584
Innodb_buffer_pool_bytes_data 9568256
Innodb_buffer_pool_pages_dirty 0
Innodb_buffer_pool_bytes_dirty 0
Innodb_buffer_pool_pages_flushed 120
Innodb_buffer_pool_pages_free 7607
Innodb_buffer_pool_pages_misc 0
Innodb_buffer_pool_pages_total 8191
Innodb_buffer_pool_read_ahead_rnd 0
Innodb_buffer_pool_read_ahead 0
Innodb_buffer_pool_read_ahead_evicted 0
Innodb_buffer_pool_read_requests 14912
Innodb_buffer_pool_reads 584
Innodb_buffer_pool_wait_free 0
Innodb_buffer_pool_write_requests 203
Innodb_data_fsyncs 163
Innodb_data_pending_fsyncs 0
Innodb_data_pending_reads 0
Innodb_data_pending_writes 0
Innodb_data_read 11751424
Innodb_data_reads 594
Innodb_data_writes 243
Innodb_data_written 3988480
Innodb_dblwr_pages_written 120
Innodb_dblwr_writes 40
Innodb_have_atomic_builtins ON
Innodb_log_waits 0
Innodb_log_write_requests 28
Innodb_log_writes 41
Innodb_os_log_fsyncs 83
Innodb_os_log_pending_fsyncs 0
Innodb_os_log_pending_writes 0
Innodb_os_log_written 34816
Innodb_page_size 16384
Innodb_pages_created 1
Innodb_pages_read 583
Innodb_pages_written 120
Innodb_row_lock_current_waits 0
Innodb_row_lock_time 0
Innodb_row_lock_time_avg 0
Innodb_row_lock_time_max 0
Innodb_row_lock_waits 0
Innodb_rows_deleted 0
Innodb_rows_inserted 0
Innodb_rows_read 40
Innodb_rows_updated 39
Innodb_truncated_status_writes 0
Key_blocks_not_flushed 0
Key_blocks_unused 13396
Key_blocks_used 0
Key_read_requests 0
Key_reads 0
Key_write_requests 0
Key_writes 0
Last_query_cost 0.000000
Max_used_connections 3
Not_flushed_delayed_rows 0
Open_files 86
Open_streams 0
Open_table_definitions 109
Open_tables 109
Opened_files 439
Opened_table_definitions 0
Opened_tables 0
Performance_schema_cond_classes_lost 0
Performance_schema_cond_instances_lost 0
Performance_schema_file_classes_lost 0
Performance_schema_file_handles_lost 0
Performance_schema_file_instances_lost 0
Performance_schema_locker_lost 0
Performance_schema_mutex_classes_lost 0
Performance_schema_mutex_instances_lost 0
Performance_schema_rwlock_classes_lost 0
Performance_schema_rwlock_instances_lost 0
Performance_schema_table_handles_lost 0
Performance_schema_table_instances_lost 0
Performance_schema_thread_classes_lost 0
Performance_schema_thread_instances_lost 0
Prepared_stmt_count 0
Qcache_free_blocks 1
Qcache_free_memory 16758160
Qcache_hits 0
Qcache_inserts 1
Qcache_lowmem_prunes 0
Qcache_not_cached 419
Qcache_queries_in_cache 1
Qcache_total_blocks 4
Queries 1146
Questions 2
Rpl_status AUTH_MASTER
Select_full_join 0
Select_full_range_join 0
Select_range 0
Select_range_check 0
Select_scan 0
Slave_heartbeat_period 0.000
Slave_open_temp_tables 0
Slave_received_heartbeats 0
Slave_retried_transactions 0
Slave_running OFF
Slow_launch_threads 0
Slow_queries 0
Sort_merge_passes 0
Sort_range 0
Sort_rows 0
Sort_scan 0
Ssl_accept_renegotiates 0
Ssl_accepts 0
Ssl_callback_cache_hits 0
Ssl_cipher
Ssl_cipher_list
Ssl_client_connects 0
Ssl_connect_renegotiates 0
Ssl_ctx_verify_depth 0
Ssl_ctx_verify_mode 0
Ssl_default_timeout 0
Ssl_finished_accepts 0
Ssl_finished_connects 0
Ssl_session_cache_hits 0
Ssl_session_cache_misses 0
Ssl_session_cache_mode NONE
Ssl_session_cache_overflows 0
Ssl_session_cache_size 0
Ssl_session_cache_timeouts 0
Ssl_sessions_reused 0
Ssl_used_session_cache_entries 0
Ssl_verify_depth 0
Ssl_verify_mode 0
Ssl_version
Table_locks_immediate 123
Table_locks_waited 0
Tc_log_max_pages_used 0
Tc_log_page_size 0
Tc_log_page_waits 0
Threads_cached 1
Threads_connected 2
Threads_created 3
Threads_running 1
Uptime 2389
Uptime_since_flush_status 2389
How would one use awk to make this calculation of Queries per second (Queries/Uptime):
1146/2389
And print the result?
I'm grepping two results from a list of results and need to calculate items/second, where 302 is the total item count and 503 the total uptime count.
At this moment I'm doing
grep -Ew "Queries|Uptime" | awk '{print $2}'
to print out:
302
503
But here I got stuck.
You can use something like:
$ awk '/Queries/ {q=$2} /Uptime/ {print q/$2}' file
0.600398
That is: when a line contains the string "Queries", store its value. When a line contains "Uptime", divide the stored Queries value by the Uptime value and print the result.
This assumes the string "Queries" appears before the string "Uptime".
Given your updated input, I see that we need to check whether the first field is exactly "Uptime" or "Queries", so that the script does not match other lines containing these strings:
$ awk '$1 == "Queries" {q=$2} $1=="Uptime" {print q/$2}' file
0.479699
I think the following awk one-liner will help you:
kent$ cat f
Queries 302
Uptime 503
kent$ awk '{a[NR]=$NF}END{printf "%.2f\n",a[NR-1]/a[NR]}' f
0.60
If you want to fold the "grep" step into awk as well:
kent$ awk '/Queries/{a=$NF}/Uptime/{b=$NF}END{printf "%.2f\n",a/b}' f
0.60
I'm working on a project that produces data files that are square matrices of numbers, either 1 or -1. I need to visualize these as images. What I do now is open them in MATLAB, whose imshow function automatically draws such matrices as monochrome black-and-white images.
Using MATLAB, though, is very slow, and I was wondering whether there is some Linux program I could use quickly from the terminal instead, like an ImageMagick one-liner or something similar.
This is an example of file
-1 1 -1 -1 1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 1 1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1
-1 -1 -1 -1 -1 -1 -1 -1
And this would be the image
The actual matrices would be of the order of 128x128.
Thank you!
It is trivial to convert your data to "PBM" format, even with a text editor, as I've done here. Change every " 1" to "0" and every "-1" to "1", and add a one-line header "P1 8 8" (substitute the actual width and height for "8 8"; the plain-text P1 format has no maxval field). Here's a one-line script matrix2pbm that does that:
echo P1 $2 $3; sed -e "s/-1/z/g; s/1/0/g; s/z/1/g" "$1"
Run it with
./matrix2pbm matrix.txt 8 8 > matrix.pbm
cat matrix.pbm
P1 8 8
1 0 1 1 0 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 0 0 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1
The PBM format is one of the Netpbm image formats [http://en.wikipedia.org/wiki/Netpbm_format]
If you like, you can then use ImageMagick or some other file converter to convert the result into some other format:
convert matrix.pbm matrix.png
As emcconville commented, you can do both transformations with this one-liner:
./matrix2pbm matrix.txt 8 8 | convert pbm:- matrix.png
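For comparison, the same transformation fits in a few lines of Python (the function name is mine; it assumes whitespace-separated 1/-1 values exactly as in the example file):

```python
def matrix_to_pbm(text):
    """Convert a whitespace-separated matrix of 1/-1 values to a P1 PBM string.
    In PBM, 1 is a black pixel and 0 is white, so -1 maps to 1 and 1 maps to 0."""
    rows = [line.split() for line in text.strip().splitlines()]
    height, width = len(rows), len(rows[0])
    body = "\n".join(" ".join("1" if v == "-1" else "0" for v in row)
                     for row in rows)
    return f"P1\n{width} {height}\n{body}\n"
```

Writing the returned string to a .pbm file gives the same result as the sed script, and it can then be fed to convert in the same way.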
How would I go about creating the matrix
[1 2 0 0 0;
-1 1 2 0 0;
0 -1 1 2 0;
0 0 -1 1 2;
0 0 0 -1 1]
using the diag command in MATLAB?
Here is one way:
> diag(ones(1,5),0)+diag(ones(1,4),1)*2+diag(ones(1,4),-1)*-1
ans =
1 2 0 0 0
-1 1 2 0 0
0 -1 1 2 0
0 0 -1 1 2
0 0 0 -1 1
>
This just creates three diagonals at 0, +1 and -1, scales them as needed, then adds them.
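As an aside (not something the asker requested), the same three-diagonal idea translates almost verbatim to NumPy's diag for anyone working outside MATLAB:

```python
import numpy as np

# main diagonal of ones, superdiagonal of 2s, subdiagonal of -1s
M = (np.diag(np.ones(5))
     + 2 * np.diag(np.ones(4), k=1)
     - np.diag(np.ones(4), k=-1))
print(M)
```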