remove a block of text that matches a pattern - shell

I have a hard problem: I want to remove certain blocks from a text, namely the ones that contain a special word. Here is an example. What I want is to remove each line matching backup_all, together with the one line before the matching line and the three lines after it:
# Time: 2018-01-23T03:41:41.454104+08:00
# User#Host: backup_all[backup_all] # [127.0.0.1] Id: 3168695
# Query_time: 0.129250 Lock_time: 0.000062 Rows_sent: 3535 Rows_examined: 3535
SET timestamp=1516650101;
SELECT /*!40001 SQL_NO_CACHE */ * FROM `legend_gather_customer_note_effect`;
# Time: 2018-01-23T03:41:41.527587+08:00
# User#Host: backup_all[backup_all] # [127.0.0.1] Id: 3168695
# Query_time: 0.066378 Lock_time: 0.000059 Rows_sent: 193 Rows_examined: 193
SET timestamp=1516650101;
SELECT /*!40001 SQL_NO_CACHE */ * FROM `legend_gather_performance_config`;
# Time: 2018-01-23T03:41:41.558254+08:00
# User#Host: backup_all[backup_all] # [127.0.0.1] Id: 3168695
# Query_time: 0.025533 Lock_time: 0.000058 Rows_sent: 296 Rows_examined: 296
SET timestamp=1516650101;
SELECT /*!40001 SQL_NO_CACHE */ * FROM `legend_gift`;
# Time: 2018-01-23T03:42:09.536559+08:00
# User#Host: zabbix_agent[zabbix_agent] # [127.0.0.1] Id: 3169056
# Query_time: 0.000304 Lock_time: 0.000162 Rows_sent: 1 Rows_examined: 1
SET timestamp=1516650129;
SELECT SUM(trx_rows_locked) AS rows_locked, SUM(trx_rows_modified) AS rows_modified, SUM(trx_lock_memory_bytes) AS lock_memory FROM information_schema.INNODB_TRX;
After the removal, the text becomes the following output. How can I use sed to achieve this?
# Time: 2018-01-23T03:42:09.536559+08:00
# User#Host: zabbix_agent[zabbix_agent] # [127.0.0.1] Id: 3169056
# Query_time: 0.000304 Lock_time: 0.000162 Rows_sent: 1 Rows_examined: 1
SET timestamp=1516650129;
SELECT SUM(trx_rows_locked) AS rows_locked, SUM(trx_rows_modified) AS rows_modified, SUM(trx_lock_memory_bytes) AS lock_memory FROM information_schema.INNODB_TRX;
Update
This text is from a MySQL slow query log. I want to remove all SQL entries produced by the user 'backup_all'. As you can see, the entries produced by 'backup_all' are quite regular, but entries produced by other users may not look the same; they can be
# Time: 2018-01-23T03:41:24.490723+08:00
# User#Host: zabbix_agent[zabbix_agent] # [127.0.0.1] Id: 3169038
# Query_time: 0.000669 Lock_time: 0.000334 Rows_sent: 1 Rows_examined: 27
SET timestamp=1516650084;
select count(*) Slownum from information_schema.processlist where COMMAND = 'Query' and info not like '%information_schema.processlist%' and TIME > 0;
# Time: 2018-01-23T03:41:40.628284+08:00
# User#Host: backup_all[backup_all] # [127.0.0.1] Id: 3168695
# Query_time: 78.333179 Lock_time: 0.000073 Rows_sent: 13269064 Rows_examined: 13269064
SET timestamp=1516650100;
SELECT /*!40001 SQL_NO_CACHE */ * FROM `legend_final_inventory`;
# Time: 2018-01-23T03:41:40.925596+08:00
# User#Host: backup_all[backup_all] # [127.0.0.1] Id: 3168695
# Query_time: 0.175956 Lock_time: 0.000065 Rows_sent: 5101 Rows_examined: 5101
SET timestamp=1516650100;
SELECT /*!40001 SQL_NO_CACHE */ * FROM `legend_finance_account`;
or
# Time: 2018-01-23T04:29:26.903048+08:00
# User#Host: yun_rw_legend[yun_rw_legend] # [10.162.86.162] Id: 3167670
# Query_time: 0.000150 Lock_time: 0.000053 Rows_sent: 6 Rows_examined: 32
SET timestamp=1516652966;
select id, value from legend_precheck_value where value_type = 8 and is_deleted = 'N';
# Time: 2018-01-23T04:29:31.825823+08:00
# User#Host: yun_rw_legend[yun_rw_legend] # [10.162.86.162] Id: 3167670
# Query_time: 0.000826 Lock_time: 0.000146 Rows_sent: 0 Rows_examined: 947
SET timestamp=1516652971;
select
id as id,
is_deleted as isDeleted,
gmt_create as gmtCreate,
creator as creator,
gmt_modified as gmtModified,
modifier as modifier,
shop_id as shopId,
customer_id as customerId,
account_id as accountId,
customer_car_id as customerCarId
from legend_customer_car_rel
WHERE is_deleted = 'N'
and shop_id = 3374
and customer_car_id = 1307177;
# Time: 2018-01-23T04:30:01.149529+08:00
# User#Host: backup_all[backup_all] # [127.0.0.1] Id: 3170398
# Query_time: 0.003047 Lock_time: 0.000169 Rows_sent: 0 Rows_examined: 385
SET timestamp=1516653001;
SELECT LOGFILE_GROUP_NAME, FILE_NAME, TOTAL_EXTENTS, INITIAL_SIZE, ENGINE, EXTRA FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'UNDO LOG' AND FILE_NAME IS NOT NULL AND LOGFILE_GROUP_NAME IS NOT NULL GROUP BY LOGFILE_GROUP_NAME, FILE_NAME, ENGINE, TOTAL_EXTENTS, INITIAL_SIZE, EXTRA ORDER BY LOGFILE_GROUP_NAME;
# Time: 2018-01-23T04:30:01.151783+08:00
# User#Host: backup_all[backup_all] # [127.0.0.1] Id: 3170398
# Query_time: 0.002082 Lock_time: 0.000119 Rows_sent: 0 Rows_examined: 385
SET timestamp=1516653001;
SELECT DISTINCT TABLESPACE_NAME, FILE_NAME, LOGFILE_GROUP_NAME, EXTENT_SIZE, INITIAL_SIZE, ENGINE FROM INFORMATION_SCHEMA.FILES WHERE FILE_TYPE = 'DATAFILE' ORDER BY TABLESPACE_NAME, LOGFILE_GROUP_NAME;

Obligatory awk solution which seems to work:
awk '
  $2 !~ /backup_all/ {
    printf "# Time%s", $0
  }' RS='# Time' ORS='' FS='
' example_in
It splits the input into records at every # Time and into fields at \n, so each line of a record becomes a field. If the second line (field) does not contain backup_all, the record is printed with its # Time prefix put back.
Output for the first example:
# Time: 2018-01-23T03:42:09.536559+08:00
# User#Host: zabbix_agent[zabbix_agent] # [127.0.0.1] Id: 3169056
# Query_time: 0.000304 Lock_time: 0.000162 Rows_sent: 1 Rows_examined: 1
SET timestamp=1516650129;
SELECT SUM(trx_rows_locked) AS rows_locked, SUM(trx_rows_modified) AS rows_modified, SUM(trx_lock_memory_bytes) AS lock_memory FROM information_schema.INNODB_TRX;
and for the second:
# Time: 2018-01-23T04:29:26.903048+08:00
# User#Host: yun_rw_legend[yun_rw_legend] # [10.162.86.162] Id: 3167670
# Query_time: 0.000150 Lock_time: 0.000053 Rows_sent: 6 Rows_examined: 32
SET timestamp=1516652966;
select id, value from legend_precheck_value where value_type = 8 and is_deleted = 'N';
# Time: 2018-01-23T04:29:31.825823+08:00
# User#Host: yun_rw_legend[yun_rw_legend] # [10.162.86.162] Id: 3167670
# Query_time: 0.000826 Lock_time: 0.000146 Rows_sent: 0 Rows_examined: 947
SET timestamp=1516652971;
select
id as id,
is_deleted as isDeleted,
gmt_create as gmtCreate,
creator as creator,
gmt_modified as gmtModified,
modifier as modifier,
shop_id as shopId,
customer_id as customerId,
account_id as accountId,
customer_car_id as customerCarId
from legend_customer_car_rel
WHERE is_deleted = 'N'
and shop_id = 3374
and customer_car_id = 1307177;
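A slightly more self-contained sketch of the same record-splitting idea moves the assignments into a BEGIN block and skips the empty record gawk produces before the first # Time marker (slow.log is just a placeholder file name; treating a multi-character RS as a regular expression is a gawk extension):
# same per-"# Time"-record logic as above, with an NF guard for the leading empty record
gawk 'BEGIN { RS = "# Time"; ORS = ""; FS = "\n" }
      NF && $2 !~ /backup_all/ { printf "# Time%s", $0 }' slow.log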

This might work for you (GNU sed):
sed -n '/^#/{:a;N;/^SELECT/M!ba;/backup_all/!p}' file
As this is a reduction/filtering of lines, use sed's grep-like nature, enabled by the -n option. Gather up lines beginning at a comment line (#) and ending at a line that begins with SELECT. If these lines do not contain the word backup_all, print them.
N.B. the M flag on the regexp invokes multiline mode, where ^ and $ can match at the start/end of each line in the pattern space. An alternative regexp would be /\nSELECT/.
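A minimal way to run it, with slow.log and slow.cleaned.log as placeholder file names, assuming (as in the first sample) that every query begins with an uppercase SELECT at the start of a line:
# write the filtered log to a new file rather than editing in place
sed -n '/^#/{:a;N;/^SELECT/M!ba;/backup_all/!p}' slow.log > slow.cleaned.log
For the second sample that assumption no longer holds: one query is written in lowercase and spans several lines, so the block is not closed where expected; there the record-based awk approach above is the safer choice.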

Related

Parse Aurora Slow Query Logs pt-query-digest

I'm trying to use pt-query-digest to aggregate slow query logs from Aurora stored in CloudWatch. I use the awslogs tool to grab them and write them to a file, but it looks like they are formatted wrong, even after I strip out the host information. A traditional slow query log has the following header before the query:
# Time: 2017-08-04T19:24:50.630232Z
# User#Host: root[root] # localhost [] Id: 236
# Schema: Last_errno: 0 Killed: 0
# Query_time: 11.003017 Lock_time: 0.000000 Rows_sent: 1 Rows_examined: 0 Rows_affected: 0
# Bytes_sent: 57
whereas the logs from CloudWatch look like this:
# Time: 2021-02-16T20:35:10.179940Z
# User#Host: root[root] # localhost [] Id: 236
# Query_time: 5.290956 Lock_time: 0.001651 Rows_sent: 8 Rows_examined: 1858252
SET timestamp=1613507710;
Has anyone found a way to use pt-query-digest to consume Aurora logs?
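For reference, the basic invocation against a plain slow log file looks like this (aurora-slow.log and digest-report.txt are placeholder names; slowlog is already the default --type, so the flag is only there for clarity):
pt-query-digest --type slowlog aurora-slow.log > digest-report.txt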

Ignoring the 'pipelines.yml' file because modules or command line options are specified

I have set up Elasticsearch with password protection, and I am able to work with Elasticsearch by entering username=elastic and password=mypassword.
Now I am trying to import MySQL data into Elasticsearch using Logstash, but when I run Logstash with the command below it gives an error.
Am I missing something?
logstash -f mysql.conf
logstash-plain.log
[2019-06-14T18:12:34,410][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2019-06-14T18:12:34,424][INFO ][logstash.runner ] Starting Logstash {"logstash.version"=>"7.1.0"}
[2019-06-14T18:12:35,400][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, {, } at line 16, column 23 (byte 507) after output {\r\n elasticsearch {\r\n\thosts => \"http://10.42.35.14:9200/\"\r\n user => elastic\r\n password => pharma", :backtrace=>["D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:41:in `compile_imperative'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:49:in `compile_graph'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:11:in `block in compile_sources'", "org/jruby/RubyArray.java:2577:in `map'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/compiler.rb:10:in `compile_sources'", "org/logstash/execution/AbstractPipelineExt.java:151:in `initialize'", "org/logstash/execution/JavaBasePipelineExt.java:47:in `initialize'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/java_pipeline.rb:23:in `initialize'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/pipeline_action/create.rb:36:in `execute'", "D:/softwares/ElasticSearch/Version7.1/logstash-7.1.0/logstash-core/lib/logstash/agent.rb:325:in `block in converge_state'"]}
[2019-06-14T18:12:35,758][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}
[2019-06-14T18:12:40,664][INFO ][logstash.runner ] Logstash shut down.
mysql.conf
# file: contacts-index-logstash.conf
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://52.213.22.96:3306/prbi"
    jdbc_user => "myuser"
    jdbc_password => "mypassword"
    jdbc_driver_library => "mysql-connector-java-6.0.5.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * from tmp_j_summaryreport"
  }
}
output {
  elasticsearch {
    hosts => "http://10.42.35.14:9200/"
    user => elastic
    password => myelasticpassword
    index => "testing123"
  }
  stdout { codec => json_lines }
}
logstash.yml
# Settings file in YAML
#
# Settings can be specified either in hierarchical form, e.g.:
#
# pipeline:
# batch:
# size: 125
# delay: 5
#
# Or as flat keys:
#
# pipeline.batch.size: 125
# pipeline.batch.delay: 5
#
# ------------ Node identity ------------
#
# Use a descriptive name for the node:
#
# node.name: test
#
# If omitted the node name will default to the machine's host name
#
# ------------ Data path ------------------
#
# Which directory should be used by logstash and its plugins
# for any persistent needs. Defaults to LOGSTASH_HOME/data
#
# path.data:
#
# ------------ Pipeline Settings --------------
#
# The ID of the pipeline.
#
# pipeline.id: main
#
# Set the number of workers that will, in parallel, execute the filters+outputs
# stage of the pipeline.
#
# This defaults to the number of the host's CPU cores.
#
# pipeline.workers: 2
#
# How many events to retrieve from inputs before sending to filters+workers
#
# pipeline.batch.size: 125
#
# How long to wait in milliseconds while polling for the next event
# before dispatching an undersized batch to filters+outputs
#
# pipeline.batch.delay: 50
#
# Force Logstash to exit during shutdown even if there are still inflight
# events in memory. By default, logstash will refuse to quit until all
# received events have been pushed to the outputs.
#
# WARNING: enabling this can lead to data loss during shutdown
#
# pipeline.unsafe_shutdown: false
#
# ------------ Pipeline Configuration Settings --------------
#
# Where to fetch the pipeline configuration for the main pipeline
#
# path.config:
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
# At startup, test if the configuration is valid and exit (dry run)
#
# config.test_and_exit: false
#
# Periodically check if the configuration has changed and reload the pipeline
# This can also be triggered manually through the SIGHUP signal
#
# config.reload.automatic: false
#
# How often to check if the pipeline configuration has changed (in seconds)
#
# config.reload.interval: 3s
#
# Show fully compiled configuration as debug log message
# NOTE: --log.level must be 'debug'
#
# config.debug: false
#
# When enabled, process escaped characters such as \n and \" in strings in the
# pipeline configuration files.
#
# config.support_escapes: false
#
# ------------ Module Settings ---------------
# Define modules here. Modules definitions must be defined as an array.
# The simple way to see this is to prepend each `name` with a `-`, and keep
# all associated variables under the `name` they are associated with, and
# above the next, like this:
#
# modules:
# - name: MODULE_NAME
# var.PLUGINTYPE1.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE1.PLUGINNAME1.KEY2: VALUE
# var.PLUGINTYPE2.PLUGINNAME1.KEY1: VALUE
# var.PLUGINTYPE3.PLUGINNAME3.KEY1: VALUE
#
# Module variable names must be in the format of
#
# var.PLUGIN_TYPE.PLUGIN_NAME.KEY
#
# modules:
#
# ------------ Cloud Settings ---------------
# Define Elastic Cloud settings here.
# Format of cloud.id is a base64 value e.g. dXMtZWFzdC0xLmF3cy5mb3VuZC5pbyRub3RhcmVhbCRpZGVudGlmaWVy
# and it may have an label prefix e.g. staging:dXMtZ...
# This will overwrite 'var.elasticsearch.hosts' and 'var.kibana.host'
# cloud.id: <identifier>
#
# Format of cloud.auth is: <user>:<pass>
# This is optional
# If supplied this will overwrite 'var.elasticsearch.username' and 'var.elasticsearch.password'
# If supplied this will overwrite 'var.kibana.username' and 'var.kibana.password'
# cloud.auth: elastic:<password>
#
# ------------ Queuing Settings --------------
#
# Internal queuing model, "memory" for legacy in-memory based queuing and
# "persisted" for disk-based acked queueing. Defaults is memory
#
# queue.type: memory
#
# If using queue.type: persisted, the directory path where the data files will be stored.
# Default is path.data/queue
#
# path.queue:
#
# If using queue.type: persisted, the page data files size. The queue data consists of
# append-only data files separated into pages. Default is 64mb
#
# queue.page_capacity: 64mb
#
# If using queue.type: persisted, the maximum number of unread events in the queue.
# Default is 0 (unlimited)
#
# queue.max_events: 0
#
# If using queue.type: persisted, the total capacity of the queue in number of bytes.
# If you would like more unacked events to be buffered in Logstash, you can increase the
# capacity using this setting. Please make sure your disk drive has capacity greater than
# the size specified here. If both max_bytes and max_events are specified, Logstash will pick
# whichever criteria is reached first
# Default is 1024mb or 1gb
#
# queue.max_bytes: 1024mb
#
# If using queue.type: persisted, the maximum number of acked events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.acks: 1024
#
# If using queue.type: persisted, the maximum number of written events before forcing a checkpoint
# Default is 1024, 0 for unlimited
#
# queue.checkpoint.writes: 1024
#
# If using queue.type: persisted, the interval in milliseconds when a checkpoint is forced on the head page
# Default is 1000, 0 for no periodic checkpoint.
#
# queue.checkpoint.interval: 1000
#
# ------------ Dead-Letter Queue Settings --------------
# Flag to turn on dead-letter queue.
#
# dead_letter_queue.enable: false
# If using dead_letter_queue.enable: true, the maximum size of each dead letter queue. Entries
# will be dropped if they would increase the size of the dead letter queue beyond this setting.
# Default is 1024mb
# dead_letter_queue.max_bytes: 1024mb
# If using dead_letter_queue.enable: true, the directory path where the data files will be stored.
# Default is path.data/dead_letter_queue
#
# path.dead_letter_queue:
#
# ------------ Metrics Settings --------------
#
# Bind address for the metrics REST endpoint
#
# http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
# * fatal
# * error
# * warn
# * info (default)
# * debug
# * trace
#
# log.level: info
# path.logs:
#
# ------------ Other Settings --------------
#
# Where to find custom plugins
# path.plugins: []
#
# ------------ X-Pack Settings (not applicable for OSS build)--------------
#
# X-Pack Monitoring
# https://www.elastic.co/guide/en/logstash/current/monitoring-logstash.html
#xpack.monitoring.enabled: false
#xpack.monitoring.elasticsearch.username: logstash_system
#xpack.monitoring.elasticsearch.password: password
#xpack.monitoring.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.monitoring.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.monitoring.elasticsearch.ssl.truststore.path: path/to/file
#xpack.monitoring.elasticsearch.ssl.truststore.password: password
#xpack.monitoring.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.monitoring.elasticsearch.ssl.keystore.password: password
#xpack.monitoring.elasticsearch.ssl.verification_mode: certificate
#xpack.monitoring.elasticsearch.sniffing: false
#xpack.monitoring.collection.interval: 10s
#xpack.monitoring.collection.pipeline.details.enabled: true
#
# X-Pack Management
# https://www.elastic.co/guide/en/logstash/current/logstash-centralized-pipeline-management.html
#xpack.management.enabled: false
#xpack.management.pipeline.id: ["main", "apache_logs"]
#xpack.management.elasticsearch.username: logstash_admin_user
#xpack.management.elasticsearch.password: password
#xpack.management.elasticsearch.hosts: ["https://es1:9200", "https://es2:9200"]
#xpack.management.elasticsearch.ssl.certificate_authority: [ "/path/to/ca.crt" ]
#xpack.management.elasticsearch.ssl.truststore.path: /path/to/file
#xpack.management.elasticsearch.ssl.truststore.password: password
#xpack.management.elasticsearch.ssl.keystore.path: /path/to/file
#xpack.management.elasticsearch.ssl.keystore.password: password
#xpack.management.elasticsearch.ssl.verification_mode: certificate
#xpack.management.elasticsearch.sniffing: false
#xpack.management.logstash.poll_interval: 5s
#xpack.management.enabled: true
xpack.management.elasticsearch.hosts: "http://10.42.35.14:9200/"
#xpack.management.elasticsearch.username: logstash_system
xpack.management.elasticsearch.password: myelasticpassword
This message in the Logstash log indicates that there is something wrong with your config file:
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError"
The rest of the message says that the problem is in your output block:
:message=>"Expected one of #, {, } at line 16, column 23 (byte 507) after output {
In your output block the values of user and password are not quoted, which is what trips the parser at line 16. Double-check your output configuration; it needs to be something like this:
output {
  elasticsearch {
    hosts => ["10.42.35.14:9200"]
    user => "elastic"
    password => "myelasticpassword"
    index => "testing123"
  }
  stdout { codec => "json_lines" }
}
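Once the values are quoted, the pipeline configuration can be syntax-checked without actually starting it, using the config.test_and_exit option that also appears (commented out) in the logstash.yml above:
# dry run: parse mysql.conf, report any errors, then exit (short form: -t)
logstash -f mysql.conf --config.test_and_exit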

Update a line at specific line and section

I have a file whose content looks like the following.
servers:
  # Start OF VM1
  - displayName: "I_INST1_1"
    includeQueues: [test1,test2,test3]
    excludeTopics: []
  # End OF VM1
  # Start OF VM2
  - displayName: "I_INST1_2"
    includeQueues: []
    excludeTopics: []
  # End OF VM2
I want to update the includeQueues line with [test1,test2,test3], but only between the lines # Start OF VM1 and # End OF VM1.
Could someone help me achieve this?
The following will select the range of records between "Start OF VM1" and "End OF VM1" and then apply the gsub function over that chunk, replacing [] with [test1,test2,test,*]:
awk '/Start OF VM1/,/End OF VM1/{if ($0 ~ /includeQueues/) gsub(/\[\]/,"[test1,test2,test,*]")}1' Input_file
Output:
servers:
  # Start OF VM1
  - displayName: "I_INST1_1"
    includeQueues: [test1,test2,test,*]
    excludeTopics: []
  # End OF VM1
  # Start OF VM2
  - displayName: "I_INST1_2"
    includeQueues: []
    excludeTopics: []
  # End OF VM2
try:
awk '/End OF VM1/{A=""} /Start OF VM1/{A=1} A && /includeQueues/{$0="includeQueues: [test1,test2,test,*]"} 1' Input_file
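If sed is preferred, the same Start/End range idea can be written as a one-liner; this is only a sketch mirroring the [] to [test1,test2,test,*] substitution used above, with Input_file as the placeholder name:
sed '/Start OF VM1/,/End OF VM1/s/includeQueues: \[\]/includeQueues: [test1,test2,test,*]/' Input_file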

accessing values in Strings

In some code I'm trying to learn from, the Maze string below is turned into an array (code not shown for that) and saved in the instance variable @maze. The starting point of the maze is represented by the letter 'A', which can be accessed at @maze[1][13] (row 1, column 13). However, the code I'm looking at uses @maze[1][13,1] to get the A, which you can see returns the same result in my console. If I do @maze[1][13,2], it returns the letter "A " with two blank spaces next to it, and so on; [13,3] returns "A " with three blank spaces.
Does the 2 in [13,2] mean "return two values starting at [1][13]"? If so, why? Is this some feature of arrays or two-dimensional arrays that I don't get?
[20] pry(#<Maze>):1> @maze[1][13]
=> "A"
[17] pry(#<Maze>):1> @maze[1][13,1]
=> "A"
[18] pry(#<Maze>):1> @maze[1][13,2]
=> "A "
[19] pry(#<Maze>):1> @maze[1][13,3]
=> "A  "
Maze String
MAZE1 = %{#####################################
# # # #A # # #
# # # # # # ####### # ### # ####### #
# # # # # # # # #
# ##### # ################# # #######
# # # # # # # # #
##### ##### ### ### # ### # # # # # #
# # # # # # B# # # # # #
# # ##### ##### # # ### # # ####### #
# # # # # # # # # # # #
# ### ### # # # # ##### # # # ##### #
# # # # # # #
#####################################}
From what you show, it seems that @maze is not a two-dimensional array, but an array of strings. @maze[1] is a string, so the second [] is applied to a string, and the second argument of the String#[] method gives the number of characters to take. You can consider it to default to 1 when you do not specify it. By the way, your question is slightly off. You describe
If I do @maze[1][13,2], it returns the letter "A " with two blank spaces next to it, and so on.
but what your example shows is
If I do @maze[1][13,2], it returns the letter "A " with one blank space next to it, and so on.
The 2-dimensionality isn't the issue. This works for any array.
s = ['k', 'i', 't', 't', 'y']
s[2,3]
=> ["t", "t", "y"]
From the docs (http://www.ruby-doc.org/core-1.9.3/Array.html#method-i-5B-5D):
ary[start, length] → new_ary or nil

calling each on a string object

This code (by someone else) might have been written using an older version of Ruby because now I'm getting an error calling 'each' on a string object. The maze string below gets passed to the maze_string_to_array method. When it's run, it yields this error in `maze_string_to_array'
NoMethodError: undefined method `each' for #<String:0x00000100854ac0>
Can you explain what the problem is, and how to fix it?
def maze_string_to_array(mazestring)
  @maze = []
  mazestring.each do |line|
    @maze.push line.chomp
  end
end
Maze string
MAZE1 = %{#####################################
# # # #A # # #
# # # # # # ####### # ### # ####### #
# # # # # # # # #
# ##### # ################# # #######
# # # # # # # # #
##### ##### ### ### # ### # # # # # #
# # # # # # B# # # # # #
# # ##### ##### # # ### # # ####### #
# # # # # # # # # # # #
# ### ### # # # # ##### # # # ##### #
# # # # # # #
#####################################}
This code is unnecessarily verbose. The whole method can be written with a map and using 1.9 String#lines instead of the old 1.8.x String#each to split lines:
@maze = mazestring.lines.map(&:chomp)
Ruby 1.8 String#each used to iterate through lines. In 1.9, String#each_line does the same thing.
In Ruby 1.9 use each_line instead of each.
But it looks as if you could replace the whole method with mazestring.split(/\n/) anyway.
