How to print group contents when there is no data in that group - iReport

I need the group to print even when no records fall into it. I have added a group that gets printed when records fall into it, but it does not get printed when the group is empty. Can anyone tell me what the expression or solution could be?
<groupExpression>
    <![CDATA[($F{GROUP_EXP}.doubleValue() >= new BigDecimal(250000).doubleValue()
        && $F{GROUP_EXP}.doubleValue() < new BigDecimal(500000).doubleValue() && $F{NE}.charAt(0)=='E')]]>
</groupExpression>
<groupHeader>
    <band height="109">
        <printWhenExpression>
            <![CDATA[($F{GROUP_EXP}.doubleValue() >= new BigDecimal(250000).doubleValue()
                && $F{GROUP_EXP}.doubleValue() < new BigDecimal(500000).doubleValue() && $F{NE}.charAt(0)=='E')]]>
        </printWhenExpression>

Related

script to loop through and combine two text files

I have two .csv files which I am trying to 'multiply' out via a script. The first file is person information and looks basically like this:
First Name, Last Name, Email, Phone
Sally,Davis,sdavis@nobody.com,555-555-5555
Tom,Smith,tsmith@nobody.com,555-555-1212
The second file is account numbers and looks like this:
AccountID
1001
1002
Basically I want to get every name with every account Id. So if I had 10 names in the first file and 10 account IDs in the second file, I should end up with 100 rows in the resulting file and have it look like this:
First Name, Last Name, Email, Phone, AccountID
Sally,Davis,sdavis@nobody.com,555-555-5555, 1001
Tom,Smith,tsmith@nobody.com,555-555-1212, 1001
Sally,Davis,sdavis@nobody.com,555-555-5555, 1002
Tom,Smith,tsmith@nobody.com,555-555-1212, 1002
Any help would be greatly appreciated
You could simply write a for loop that repeats each value once per account ID and appends the ID, just in the reverse order. Has that not worked, or have you not tried it?
If python works for you, here's a script which does that:
def main():
    f1 = open("accounts.txt", "r")
    f1_total_lines = sum(1 for line in open('accounts.txt'))
    f2_total_lines = sum(1 for line in open('info.txt'))
    f1_line_counter = 1
    f2_line_counter = 1
    f3 = open("result.txt", "w")
    f3.write('First Name, Last Name, Email, Phone, AccountID\n')
    for line_account in f1.readlines():
        f2 = open("info.txt", "r")
        for line_info in f2.readlines():
            parsed_line_account = line_account
            parsed_line_info = line_info.rstrip()  # we have to trim the newline character from every line from the 'info' file
            if f2_line_counter == f2_total_lines:  # ...for every but the last line in the file (because it doesn't have a newline character)
                parsed_line_info = line_info
            f3.write(parsed_line_info + ',' + parsed_line_account)
            if f1_line_counter == f1_total_lines:
                f3.write('\n')
            f2_line_counter = f2_line_counter + 1
        f1_line_counter = f1_line_counter + 1
        f2_line_counter = 1  # reset the line counter to the first line
    f1.close()
    f2.close()
    f3.close()

if __name__ == '__main__':
    main()
And the files I used are as follows:
info.txt:
Sally,Davis,sdavis@nobody.com,555-555-5555
Tom,Smith,tsmith@nobody.com,555-555-1212
John,Doe,jdoe@nobody.com,555-555-3333
accounts.txt:
1001
1002
1003
If You Intended to Duplicate Account_ID
If you intended to add each Account_ID to every record in your information file then a short awk solution will do, e.g.
$ awk -F, '
    FNR==NR { a[i++] = $0 }
    FNR!=NR { b[j++] = $0 }
    END {
        print a[0] ", " b[0]
        for (k=1; k<j; k++)
            for (m=1; m<i; m++)
                print a[m] ", " b[k]
    }
' info id
First Name, Last Name, Email, Phone, AccountID
Sally,Davis,sdavis@nobody.com,555-555-5555, 1001
Tom,Smith,tsmith@nobody.com,555-555-1212, 1001
Sally,Davis,sdavis@nobody.com,555-555-5555, 1002
Tom,Smith,tsmith@nobody.com,555-555-1212, 1002
Above, the lines in the first file (when the file record number equals the overall record number, i.e. FNR==NR) are stored in array a, the lines from the second file (when FNR!=NR) are stored in array b, and they are then combined and output in the END rule in the desired order.
Without Duplicating Account_ID
Since Account_ID is usually a unique piece of information, if you did not intend to duplicate every ID at the end of each record, then there is no need to loop. The paste command does that for you. In your case, with your information file as info and your account ID file as id, it is as simple as:
$ paste -d, info id
First Name, Last Name, Email, Phone,AccountID
Sally,Davis,sdavis@nobody.com,555-555-5555,1001
Tom,Smith,tsmith@nobody.com,555-555-1212,1002
(note: the -d, option just sets the delimiter to a comma)
Seems a lot easier than trying to reinvent the wheel.
This can be done easily with bash arrays:
OLD=$IFS; IFS=$'\n'
ar1=( $(cat file1) )
ar2=( $(cat file2) )
IFS=$OLD
ind=${!ar1[@]}
for i in $ind; { echo "${ar1[$i]}, ${ar2[$i]}"; }
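The cross join the question asks for can also be written compactly in Python with itertools.product. This is a sketch, not drop-in code: rows are assumed to be plain comma-separated strings with the header already stripped, and the function name cross_join is made up for illustration.

```python
from itertools import product

def cross_join(people_rows, account_rows):
    """Pair every person row with every account ID (a cross join).

    Accounts vary in the outer loop so the output ordering matches
    the example in the question (all people for 1001, then 1002, ...).
    """
    return [person + "," + account
            for account, person in product(account_rows, people_rows)]
```

Reading the question's files would then be a matter of `cross_join([l.strip() for l in open("info.txt")], [l.strip() for l in open("accounts.txt")])` and writing the rows out under the combined header.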

java.lang.IndexOutOfBoundsException: Index: 0, Size: 0 Java 8

ABC abc = eMsg.getAbcCont().stream()
        .filter(cnt -> (option.geiID().equals(cnt.getId()) && option.getIdVersion() == cnt.getIdVersion()))
        .collect(Collectors.toList()).get(0);
delEmsg.getAbcCont().remove(abc);
The above code is giving me an exception:
java.lang.IndexOutOfBoundsException: Index: 0, Size: 0
at java.util.ArrayList.rangeCheck(ArrayList.java:657)
at java.util.ArrayList.get(ArrayList.java:433)
The getAbcCont method returns the List of ABC objects. Currently my eMsg contains two objects in getAbcCont. When control reaches .collect(Collectors.toList()).get(0); it throws the above-mentioned exception. Any help or suggestions would be appreciated.
This means that the result after the filter is zero elements, so you cannot do get(0).
A quick solution for this would be to first get the list of elements back, and then check that there is at least one element:
List<ABC> list = eMsg.getAbcCont().stream()
        .filter(cnt -> (option.geiID().equals(cnt.getId()) && option.getIdVersion() == cnt.getIdVersion()))
        .collect(Collectors.toList());
if (list.size() > 0) {
    ABC abc = list.get(0);
}
Obviously there is a shorter way using the stream directly (findFirst is a Stream operation, so no collect is needed):
ABC abc = eMsg.getAbcCont().stream()
        .filter(cnt -> (option.geiID().equals(cnt.getId()) && option.getIdVersion() == cnt.getIdVersion()))
        .findFirst().orElse(null);
Reference: https://stackoverflow.com/a/26126636/1688441
But as user nullpointer mentioned, you might need to check whether an element was found before you call remove() with object abc. I suspect trying to remove null from a collection might not do anything, but you could check to be sure!
if(abc != null){
delEmsg.getAbcCont().remove(abc);
}
You should use !list.isEmpty() rather than list.size() > 0, as per Sonar.

CT_FETCH error in PowerBuilder Program

I'm still learning PowerBuilder and trying to get familiar with it. I'm receiving the following error when I try to run a program against a specific document in my database:
ct_fetch(): user api layer: internal common library error: The bind of result set item 4 resulted in an overflow. ErrCode: 2.
What does this error mean? What is item 4? This happens only when I run the program against one specific document in my database; every other document works fine. Please see the code below:
string s_doc_nmbr, s_doc_type, s_pvds_doc_status, s_sql
long l_rtn, l_current_fl, l_apld_fl, l_obj_id
integer l_pvds_obj_id, i_count
IF cbx_1.checked = True THEN
    SELECT dsk_obj.obj_usr_num,
           dsk_obj.obj_type,
           preaward_validation_doc_status.doc_status,
           preaward_validation_doc_status.obj_id
      INTO :s_doc_nmbr, :s_doc_type, :s_pvds_doc_status, :l_pvds_obj_id
      FROM dbo.dsk_obj dsk_obj,
           preaward_validation_doc_status
     WHERE dsk_obj.obj_id = :gx_l_doc_obj_id
       AND preaward_validation_doc_status.obj_id = dsk_obj.obj_id
     USING SQLCA;

    l_rtn = sqlca.uf_sqlerrcheck("w_pdutl095_main", "ue_run_script", TRUE)
    IF l_rtn = -1 THEN
        RETURN -1
    END IF

    //check to see if document (via obj_id) exists in the preaward_validation_doc_status table.
    SELECT count(*)
      INTO :i_count
      FROM preaward_validation_doc_status
     WHERE obj_id = :l_pvds_obj_id
     USING SQLCA;

    IF i_count = 0 THEN
        //document doesn't exist
        // messagebox("Update Preaward Validation Doc Status", + gx_s_doc_nmbr + ' does not exist in the Preaward Validation Document Status table.', Stopsign!)
        //MC - 070815-0030-MC Updating code to insert row into preaward_validation_doc_status if row doesn't already exist
        // s_sql = "insert into preaward_validation_doc_status(obj_id, doc_status) values (:gx_l_doc_obj_id, 'SUCCESS') "
        INSERT INTO preaward_validation_doc_status(obj_id, doc_status)
        VALUES (:gx_l_doc_obj_id, 'SUCCESS')
        USING SQLCA;

        IF sqlca.sqldbcode <> 0 THEN
            messagebox('SQL ERROR Message', string(sqlca.sqldbcode) + '-' + sqlca.sqlerrtext)
            RETURN -1
        END IF

        MessageBox("PreAward Validation ", 'Document number ' + gx_s_doc_nmbr + ' has been inserted and marked as SUCCESS for PreAward Validation.')
        RETURN 1
    ELSE
        //Update document status in the preaward_validation_doc_status table to SUCCESS
        UPDATE preaward_validation_doc_status
           SET doc_status = 'SUCCESS'
         WHERE obj_id = :l_pvds_obj_id
         USING SQLCA;

        IF sqlca.sqldbcode <> 0 THEN
            messagebox('SQL ERROR Message', string(sqlca.sqldbcode) + '-' + sqlca.sqlerrtext)
            RETURN -1
        END IF

        MessageBox("PreAward Validation ", 'Document number ' + gx_s_doc_nmbr + ' has been marked as SUCCESS for PreAward Validation.')
    END IF

    UPDATE crt_script
       SET alt_1 = 'Acknowledged'
     WHERE ticket_nmbr = :gx_s_ticket_nmbr
       AND alt_2 = 'Running'
       AND doc_nmbr = :gx_s_doc_nmbr
     USING SQLCA;

    RETURN 1
ELSEIF cbx_1.checked = False THEN
    messagebox("Update Preaward Validation Doc Status", 'The acknowledgment checkbox must be selected for the script to run successfully. The script will now exit. Please relaunch the script and try again.', Stopsign!)
    RETURN -1
END IF
Save yourself a ton of headaches and use DataWindows... You'd reduce that entire script to about 10 lines of code.
Paul Horan gave you good advice. This would be simple using DataWindows or DataStores. Terry Voth is on the right track for your problem.
In your code, Variable l_pvds_obj_id needs to be the same type as gx_l_doc_obj_id because if you get a result, it will always be equal to it. From the apparent naming scheme it was intended to be long. This is the kind of stuff we look for in peer reviews.
A few other things:
Most of the time you want SQLCode, not SQLDbCode, but you didn't say which database you're using.
After you UPDATE crt_script you need to check the result.
I don't see COMMIT or ROLLBACK. Autocommit isn't suitable when you need to update multiple tables.
You aren't using most of the values from the first SELECT. Perhaps you've simplified your code for posting or troubleshooting.

How to return ALL events in a Google Calendar without knowing whether it is a timed or all day event

I'm working on a program in Python that can pull events from all the calendars in my Google account; at the same time, I'm trying to keep the program as close to commercial-ready as possible. That said, it's quite simple to customize the code for myself: since I know that the US Holidays events attached to my calendar are all all-day events, I can set up a simple if statement that checks whether it's the Holiday calendar and specify the events request as such:
def get_main_events(pageToken=None):
    events = gc_source.service.events().list(
        calendarId=calendarId,
        singleEvents=True,
        maxResults=1000,
        orderBy='startTime',
        pageToken=pageToken,
    ).execute()
    return events
So, that works for all-day events. After that I append the results to a list and filter it to get only the events I want. Getting events from my primary calendar makes it a bit easier to specify the events I want, because they're generally not all-day events, just my work schedule, so I can use:
now = datetime.now()
now_plus_thirtydays = now + timedelta(days=30)

def get_main_events(pageToken=None):
    events = gc_source.service.events().list(
        calendarId=calendarId,
        singleEvents=True,
        maxResults=1000,
        orderBy='startTime',
        timeMin=now.strftime('%Y-%m-%dT%H:%M:%S-00:00'),
        timeMax=now_plus_thirtydays.strftime('%Y-%m-%dT%H:%M:%S-00:00'),
        pageToken=pageToken,
    ).execute()
    return events
Now, the problem I run into with making the program available for commercial use, as well as for myself, is that the above will ONLY return NON-all-day events from my primary calendar. I'd like to find out if there's a way (and if so, how) to run the events request and return ALL results, whether they're all-day events or timed events that take place during a portion of the day. In addition, part of this issue is that in another part of the code, where I print the results, I would need to use:
print event['start']['date']
for an all-day event, and:
print event['start']['dateTime']
for a non-all-day event.
So, since 'dateTime' won't work on an all-day event, I'd like to figure out a way to evaluate whether an event is all-day or not, i.e. if the event is an all-day event, use event['start']['date'], else use event['start']['dateTime'].
So, through much testing, and by using a log feature to see what error was happening with:
print event['start']['date']
vs:
print event['start']['dateTime']
I found that I could use the error result to my advantage using 'try' and 'except'.
Here is the resulting fix:
First the initial part as earlier with the actual query to the calendar:
now = datetime.now()
now_plus_thirtydays = now + timedelta(days=30)

def get_calendar_events(pageToken=None):
    events = gc_source.service.events().list(
        calendarId=cal_id[cal_count],
        singleEvents=True,
        orderBy='startTime',
        timeMin=now.strftime('%Y-%m-%dT%H:%M:%S-00:00'),
        timeMax=now_plus_thirtydays.strftime('%Y-%m-%dT%H:%M:%S-00:00'),
        pageToken=pageToken,
    ).execute()
    return events
Then the event handling portion:
# Events Portion
print "Calendar: ", cal_summary[cal_count]
events = get_calendar_events()
while True:
    for event in events['items']:
        try:
            if event['start']['dateTime']:
                dstime = dateutil.parser.parse(event['start']['dateTime'])
                detime = dateutil.parser.parse(event['end']['dateTime'])
                if dstime.strftime('%d/%m/%Y') == detime.strftime('%d/%m/%Y'):
                    print event['summary'] + ": " + dstime.strftime('%d/%m/%Y') + " " + dstime.strftime('%H%M') + "-" + detime.strftime('%H%M')
                    # Making a list for the respective items so they can be iterated through easier for time comparison and TTS messages
                    if cal_count == 0:
                        us_holiday_list.append((dstime, event['summary']))
                    elif cal_count == 1:
                        birthday_list.append((dstime, event['summary']))
                    else:
                        life_list.append((dstime, event['summary']))
                else:
                    print event['summary'] + ": " + dstime.strftime('%d/%m/%Y') + " # " + dstime.strftime('%H%M') + " to " + detime.strftime('%H%M') + " on " + detime.strftime('%d/%m/%Y')
                    # Making a list for the respective items so they can be iterated through easier for time comparison and TTS messages
                    if cal_count == 0:
                        us_holiday_list.append((dstime, event['summary']))
                    elif cal_count == 1:
                        birthday_list.append((dstime, event['summary']))
                    else:
                        life_list.append((dstime, event['summary']))
            else:
                return
        except KeyError:
            dstime = dateutil.parser.parse(event['start']['date'])
            detime = dateutil.parser.parse(event['end']['date'])
            print event['summary'] + ": " + dstime.strftime('%d/%m/%Y')
            # Making a list for the respective items so they can be iterated through easier for time comparison and TTS messages
            if cal_count == 0:
                us_holiday_list.append((dstime, event['summary']))
            elif cal_count == 1:
                birthday_list.append((dstime, event['summary']))
            else:
                life_list.append((dstime, event['summary']))
    page_token = events.get('nextPageToken')
    if page_token:
        events = get_calendar_events(page_token)
    else:
        if cal_count == (len(cal_id) - 1):  # If there are no more calendars to process
            break
        else:  # Continue to next calendar
            print "-----"
            cal_count += 1
            print "Retrieving From Calendar: ", cal_summary[cal_count]
            events = get_calendar_events()
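For what it's worth, the try/except approach can also be expressed as a plain membership test, since an all-day event carries a 'date' key under 'start' where a timed event carries 'dateTime'. A minimal sketch, where event is assumed to be an event dict as returned by events().list() and event_start is a made-up helper name:

```python
def event_start(event):
    """Return (raw_start, is_all_day) for a Google Calendar event dict.

    Timed events have event['start']['dateTime']; all-day events have
    event['start']['date'] instead.
    """
    start = event.get('start', {})
    if 'dateTime' in start:
        return start['dateTime'], False  # timed event
    return start['date'], True           # all-day event
```

The returned string can then be handed to dateutil.parser.parse the same way in both branches, with the boolean deciding which output format to print.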

conditional re-download of website data based on timestamp

In simple form I re-download a file from my account on a website if my local copy of data.csv is older than 1 hour:
# Mission: make sure data*.csv is most current whenever called
def updateData
  return if File.exists?("data.csv") && (Time.now - File::Stat.new("data.csv").mtime) < 3600
  $agent = Mechanize.new
  $agent.pluggable_parser.default = Mechanize::Download
  $page = $agent.get("http://website.com/login.jsp")
  # login etc.
  $agent.get("/getdata!downLoad.action").save("data.csv")
end
However they mentioned that updates to my data are only published thrice daily: at 16:45, 18:45, and 22:45.
Question:
How do I make my code more intelligent about grabbing the update only if my copy is older than the last update time (including yesterday's)?
Some array ["16:45", "18:45", "22:45"] could help but I'm not sure what next in Ruby.
Something like this could do it:
require 'time'

current = Time.now.strftime("%H%M")
past = File::Stat.new("data.csv").mtime.strftime("%H%M")

if (current > '2245' and past < '2245') or
   (current > '1845' and past < '1845') or
   (current > '1645' and past < '1645') or
   (File::Stat.new("data.csv").mtime.day != Time.now.day and current > '1645')
  # update
end
You will also need to handle any stored time in zero-padded hhmm form, because the comparisons above are string comparisons: something like Time.now.hour.to_s + Time.now.min.to_s would turn 09:05 into "95", so use Time.now.strftime("%H%M") whenever you record a time.
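The underlying check is language-agnostic: find the most recent scheduled publish time at or before now (falling back to yesterday's last slot if none has passed today), and re-download only if the file's mtime predates it. Sketched here in Python to show the logic, with the publish times taken from the question; the function names are made up and porting to Ruby is mechanical:

```python
from datetime import datetime, timedelta

PUBLISH_TIMES = ["16:45", "18:45", "22:45"]

def last_publish(now):
    """Most recent scheduled publish datetime at or before `now`.

    Falls back to yesterday's last slot if no slot has passed yet today.
    """
    todays = [
        now.replace(hour=int(t[:2]), minute=int(t[3:]), second=0, microsecond=0)
        for t in PUBLISH_TIMES
    ]
    past = [t for t in todays if t <= now]
    if past:
        return max(past)
    return max(todays) - timedelta(days=1)

def needs_update(mtime, now):
    """Re-download only if the file predates the latest published update."""
    return mtime < last_publish(now)
```

This also handles the multi-day-old-file case automatically, since a stale mtime is always earlier than the latest publish time.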
