InDesign CC script to apply paragraph styles to multiple paragraphs - adobe-indesign

I have an InDesign document with the following structure:
paragraph 1 blah blah blah blah blah blah blah blah
paragraph 2 blah blah blah blah blah blah blah blah
paragraph 3 blah blah blah blah blah blah blah blah
paragraph 4 blah blah blah blah blah blah blah blah
paragraph 5 blah blah blah blah blah blah blah blah
. . . and so on...
Now I need to leave the first paragraph as is but apply paragraph styles to all the subsequent paragraphs in the following pattern:
paragraph 2: style A
paragraph 3: style B
paragraph 4: style A
paragraph 5: style B
. . . and so on (alternating pattern)...
I know this can be automated using scripts, and I also know a bit of programming in general (JavaScript), but I have no idea how to go about doing this in InDesign. Any suggestions?

Try this script, provided you have a text frame and have referenced it in a variable myFrame:
for (var i = 1; i < myFrame.parentStory.paragraphs.length; i++)
{
    if (i % 2 === 0)
    {
        myFrame.parentStory.paragraphs[i].appliedParagraphStyle = app.activeDocument.paragraphStyles.item('Style B');
    }
    else
    {
        myFrame.parentStory.paragraphs[i].appliedParagraphStyle = app.activeDocument.paragraphStyles.item('Style A');
    }
}
The loop starts at i = 1, so the first paragraph is left as is; odd indices (paragraphs 2, 4, ...) get Style A and even indices (paragraphs 3, 5, ...) get Style B. Save it as a .jsx file in the Scripts folder and run it from the Scripts panel. You will still need to add the frame referencing.

Related

running a script to read three values then output the first value to a txt file if the 2nd and 3rd add up to equal above a set number

I have a file with values within it that I need to sift through for specific reference numbers that are over a certain value. The trouble is that this file is also full of a lot of junk info that I don't need.
The file looks something like this:
file 657657/78687686
blah blah blah
blah
blah 5 blah 8 value1 456456 value2 678678 blah 7
blah 2 blah 5 value1 9878787 value2 4544454 blah 2
blah 1 blah 8 value1 4584 value2 21231232 blah 5
blah blah
blah
file 657657/78687686
blah blah blah
blah
blah 5 blah 0 value1 871245 value2 555558 blah 7
blah 6 blah 7 value1 6666 value2 777877 blah 1
I want to feed that into a script and have it add up the values and work out whether the total is above, say, 500000. If it is, the script sends the file number to a separate txt file, then moves on to the next file number, and so on.
I have no idea where to start with this, any help would be appreciated.
This is being run on an AIX box, in a .ksh script.
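A minimal sketch of one approach, runnable from a .ksh script with the standard AIX awk. The file names input.txt and bigfiles.txt and the flush helper name are illustrative assumptions; the 500000 threshold is the example figure from the question:

```shell
#!/bin/ksh
# Sum every value1/value2 amount within each "file" block; when a block's
# total exceeds the limit, print that block's file number.
awk -v limit=500000 '
function flush() { if (fnum != "" && total > limit) print fnum; total = 0 }
$1 == "file" { flush(); fnum = $2; next }
{
    for (i = 1; i < NF; i++)
        if ($i == "value1" || $i == "value2") total += $(i + 1)
}
END { flush() }
' input.txt > bigfiles.txt
```

Junk lines contain neither value1 nor value2, so they add nothing to the total and are effectively ignored.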

Split text from bash variable

I have a variable which has groups of numbers. It looks like this:
foo 3
foo 5
foo 2
bar 8
bar 8
baz 2
qux 3
qux 5
...
I would like to split this data so I can work on one 'group' at a time. I feel this would be achievable with a loop somehow. The end goal is to take the mean of each group, such that I could have:
foo 3.33
bar 8.50
baz 5.00
qux 4.00
...
The mean calculation has already been implemented, but I mention it so the context is known.
It's important to note that each group (e.g. foo, bar, baz) is of arbitrary length.
How would I go about splitting up these groups?
I would use awk (tested with the GNU version gawk here, but I think it's portable) for both the collecting and the averaging. As a POSIX standard utility, it should be available on just about any system where bash is installed.
# print_avg.awk
{
    sums[$1] += $2
    counts[$1] += 1
}
END {
    for (key in sums)
        print key, sums[key] / counts[key]
}
data.txt:
foo 3
foo 5
bar 8
bar 8
baz 2
qux 3
qux 5
Run it like:
$ awk -f print_avg.awk data.txt
foo 4
baz 2
qux 4
bar 8
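Note that the for (key in sums) iteration order is unspecified in awk, which is why the groups come out shuffled above. A small variant of the same script formats each mean to two decimals, matching the "foo 3.33" shape in the question, and pipes through sort for a stable order:

```shell
# Same aggregation as print_avg.awk, but with two-decimal formatting
# and sorted output so group order is stable across awk implementations.
awk '
{ sums[$1] += $2; counts[$1] += 1 }
END { for (key in sums) printf "%s %.2f\n", key, sums[key] / counts[key] }
' data.txt | sort
```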

Parse YAML to key value and include yaml categories

I was looking to parse a YAML file into plain key=value strings, and I want the parent keys from the YAML included as well. Here is the initial structure:
test:
  line1: "line 1 text"
  line2: "line 2 text"
  line3: "line 3 text"
options:
  item1: "item 1 text"
  item2: "item 2 text"
  item3: "item 3 text"
Ruby:
File.open("test.yml") do |f|
  f.each_line do |line|
    line.chomp
    if line =~ /:/
      line.chop
      line.sub!('"', "")
      line.sub!(": ", "=")
      line.gsub!(/\A"|"\Z/, '')
      printline = line.strip
      puts "#{printline}"
      target.write("#{printline}")
    end
  end
end
The results currently look like
test:
line1=line 1 text
line2=line 2 text
line3=line 3 text
options:
item1=item 1 text
item2=item 2 text
item3=item 3 text
But I am looking to add the category before like:
test/line1=line 1 text
test/line2=line 2 text
test/line3=line 3 text
options/item1=item 1 text
options/item2=item 2 text
options/item3=item 3 text
What is the best way to include the category for each line?
You could use YAML.load_file, read each key/value pair, and adapt it to your needs:
require 'yaml'

foo = YAML.load_file('file.yaml').map do |key, value|
  value.map { |k, v| "#{key}/#{k}=#{v}" }
end
foo.each { |value| puts value }
# test/line1=line 1 text
# test/line2=line 2 text
# test/line3=line 3 text
# options/item1=item 1 text
# options/item2=item 2 text
# options/item3=item 3 text
You can easily convert YAML to a hash:
#test.yml
test:
  line1: "line 1 text"
  line2: "line 2 text"
  line3: "line 3 text"
options:
  item1: "item 1 text"
  item2: "item 2 text"
  item3: "item 3 text"
#ruby
require 'yaml'
hash = YAML.load File.read('test.yml')
Now you can do anything you want with the hash, get the keys, values etc.
hash['options']['item1'] #=> "item 1 text"
hash['test']['line1'] #=> "line 1 text"

Combine and align every two lines [closed]

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 6 years ago.
How can sed, awk, or bash be used to most succinctly convert file format A to format B below?
A
1
blabla
2
another blabla
... (more omitted)
10
yet another blabla
...
100
final blabla
B
1 blabla
2 another blabla
...
10 yet another blabla
...
100 final blabla
There are many different ways; here is one using paste:
$ cat ip.txt
1
blabla
2
another blabla
10
yet another blabla
100
final blabla
$ paste - - < ip.txt
1 blabla
2 another blabla
10 yet another blabla
100 final blabla
See How to process a multi column text file to get another multi column text file? for many more methods
In one bash line:
while read line1; do if read line2; then echo "$line1" "$line2"; fi; done < file.txt
Use pr in Bash:
$ pr -2 -a -s\ -t foo2
1 blabla
2 another blabla
10 yet another blabla
100 final blabla
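Since the question explicitly mentions awk, here is a sketch of the same pairing done there: buffer each odd-numbered line, then print it joined with the following line. Note that an unpaired final line is silently dropped by this version.

```shell
# Hold odd lines in buf, emit "odd even" pairs on even lines.
awk 'NR % 2 { buf = $0; next } { print buf, $0 }' ip.txt
```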

How to speed up this log parser?

I have a gigabytes-large log file in this format:
2016-02-26 08:06:45 Blah blah blah
I have a log parser which splits up the single file log into separate files according to date while trimming the date from the original line.
I do want some form of tee so that I can see how far along the process is.
The problem is that this method is mind-numbingly slow. Is there no way to do this quickly in bash? Or will I have to whip up a little C program to do it?
log_file=server.log
log_folder=logs
mkdir $log_folder 2> /dev/null

while read a; do
    date=${a:0:10}
    echo "${a:11}" | tee -a $log_folder/$date
done < <(cat $log_file)
read in bash is absurdly slow. You can make it faster, but you can probably get more speed up with awk:
#!/bin/bash
log_file=input
log_directory=${1-logs}
mkdir -p $log_directory
awk 'NF>1{d=l"/"$1; $1=""; print > d}' l=$log_directory $log_file
If you really want to print to stdout as well, you can, but if that's going to a tty it is going to slow things down a lot. Just use:
awk '{d=l"/"$1; $1=""; print > d}1' l=$log_directory $log_file
(Note the "1" after the closing brace.)
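The tee in the original loop was mainly there for progress feedback. If that is the goal, one option is to put pv in front of the awk splitter, so throughput and ETA go to stderr without slowing the pipeline down. This assumes the pv utility is installed; the file and directory names are illustrative:

```shell
# pv copies the log to stdout while printing a progress bar on stderr,
# so the awk splitter still runs at full speed.
mkdir -p logs
pv server.log | awk 'NF>1{d=l"/"$1; $1=""; print > d}' l=logs
```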
Try this awk solution. It should be pretty fast, it shows progress, and only one file is kept open at a time. It also writes lines that don't start with a date to the current date's file, so no lines are lost, and a default initial date of "0000-00-00" is set in case the log starts with lines without dates.
Any timing comparison would be much appreciated.
dir=$1
if [[ -z $dir ]]; then
    echo >&2 "Usage: $0 outdir <logfile"
    echo >&2 "outdir: directory where output files are created"
    echo >&2 "logfile: input on stdin to split into output files"
    exit 1
fi
mkdir -p $dir
echo "output directory \"$dir\""

awk -v dir=$dir '
BEGIN {
    datepat = "[0-9]{4}-[0-9]{2}-[0-9]{2}"
    date = "0000-00-00"
    file = dir "/" date
}
date != $1 && $1 ~ datepat {
    if (file) {
        close(file)
        print ""
    }
    print $1 ":"
    date = $1
    file = dir "/" date
}
{
    if ($1 ~ datepat)
        line = substr($0, 12)
    else
        line = $0
    print line
    print line > file
}
'
head -6 $dir/*
Sample input log:
first line without date
2016-02-26 08:06:45 0 Blah blah blah
2016-02-26 09:06:45 1 Blah blah blah
2016-02-27 07:06:45 2 Blah blah blah
2016-02-27 08:06:45 3 Blah blah blah
no date line
blank lines
another no date line
2016-02-28 07:06:45 4 Blah blah blah
2016-02-28 08:06:45 5 Blah blah blah
Output:
first line without date
2016-02-26:
08:06:45 0 Blah blah blah
09:06:45 1 Blah blah blah
2016-02-27:
07:06:45 2 Blah blah blah
08:06:45 3 Blah blah blah
no date line
blank lines
another no date line
2016-02-28:
07:06:45 4 Blah blah blah
08:06:45 5 Blah blah blah
==> tmpd/0000-00-00 <==
first line without date
==> tmpd/2016-02-26 <==
08:06:45 0 Blah blah blah
09:06:45 1 Blah blah blah
==> tmpd/2016-02-27 <==
07:06:45 2 Blah blah blah
08:06:45 3 Blah blah blah
no date line
blank lines
another no date line
==> tmpd/2016-02-28 <==
07:06:45 4 Blah blah blah
08:06:45 5 Blah blah blah
