Splitting a table into multiple tables in Power BI (Oracle)

I have the following data in a single table. I need to split this table into multiple tables based on the Year_Month column. Is there a way to automate this task?
+------------+-----------+
| Year_Month | Part#     |
+------------+-----------+
| 2014-03    | CCH057169 |
| 2014-03    | CCH057276 |
| 2014-03    | CCH057303 |
| 2014-03    | CCH057430 |
| 2014-04    | CCH057409 |
| 2014-04    | CCH057497 |
| 2014-04    | CCH057570 |
| 2014-04    | CCH057583 |
| 2014-04    | CCH057650 |
| 2014-04    | CCH057696 |
| 2014-04    | CCH057707 |
| 2014-04    | CCH057798 |
| 2014-05    | CCH057701 |
| 2014-06    | CCH057235 |
| 2014-06    | CCH057280 |
| 2014-06    | CCH057693 |
| 2014-06    | CCH057707 |
| 2014-06    | CCH057721 |
| 2014-07    | CCH057235 |
| 2014-07    | CCH057427 |
| 2014-08    | CCH057650 |
| 2014-08    | CCH057696 |
| 2014-08    | CCH057798 |
| 2014-09    | CCH057303 |
| 2014-09    | CCH057482 |
| 2014-09    | CCH057668 |
| 2014-09    | CCH057744 |
| 2014-09    | CCH057776 |
| 2014-10    | CCH057668 |
| 2014-10    | CCH057696 |
| 2014-11    | CCH057390 |
| 2014-11    | CCH057409 |
| 2014-11    | CCH057679 |
| 2014-11    | CCH057700 |
| 2014-11    | CCH057721 |
| 2014-11    | CCH057749 |
| 2014-11    | CCH057896 |
| 2014-12    | CCH057169 |
| 2014-12    | CCH057693 |
| 2014-12    | CCH057696 |
| 2014-12    | CCH057708 |
| 2014-12    | CCH057876 |
| 2014-12    | CCH057896 |
| 2015-01    | CCH057630 |
| 2015-01    | CCH057679 |
| 2015-01    | CCH057700 |
| 2015-01    | CCH057776 |
| 2015-02    | CCH057409 |
| 2015-02    | CCH057482 |
+------------+-----------+
More Information:
I am getting the data from an Oracle database. The purpose of this data is to compare between two given dates and provide new records. Is there a way that I can select two dates on the form (slicer) and have the query fetch the data based on the date selection on the form?

Might be splitting hairs, but from a database (RDBMS) perspective, slicers change what is summarized and displayed, not what is queried. So you might want to take some steps to make sure you don't end up doing something like fetching the introduction date of every product Amazon has ever offered.
It sounds like you want to use a slicer with a range slider, which only works on numbers and dates. So, first add a column computed from the Year_Month column for a date, say the end of the month, and call it Month.
Month = EOMONTH(DATEVALUE([Year_Month] & "-01"), 0)
Then all you need to do is create a grid with the Part# field and create a slicer for the Month field, which is initially configured with a range slider.
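If you also want the stated goal (new records between two selected dates), a measure along these lines should get you started. This is a sketch, and the table name Parts is an assumption:
New Parts =
VAR FirstSelected = MIN ( Parts[Month] )
VAR SeenBefore =
    CALCULATETABLE ( VALUES ( Parts[Part#] ), ALL ( Parts ), Parts[Month] < FirstSelected )
RETURN
    COUNTROWS ( EXCEPT ( VALUES ( Parts[Part#] ), SeenBefore ) )
With the slicer set to a range, this counts the parts in the selected window that never appeared in an earlier month.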

Related

How can all the tables of a database be sent to Elasticsearch?

Here's my situation:
food_database is in MySQL.
There are 130 tables in food_database.
I would like to send the 130 tables to Elasticsearch via the Logstash JDBC input.
-> How can all the tables of the database be sent to Elasticsearch?
My conf file (attempt):
input {
  jdbc {
    clean_run => true
    jdbc_driver_library => "C:\ElasticSearch\mysql-connector-java-8.0.23\mysql-connector-java-8.0.23.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/food_database?useSSL=false&user=root&password=1234"
    jdbc_user => "root"
    jdbc_password => "1234"
    schedule => "* * * * *"
    statement => "select * from ??????"
    #use_column_value => true
    #tracking_column => "jobid"
  }
}
output {
  elasticsearch {
    hosts => "localhost:9200"
    index => "test_indexfile"
  }
  stdout {
    codec => rubydebug
  }
}
But I don't know how to send all 130 tables of food_database.
I found similar questions through googling, but I couldn't solve my problem with them:
-> save whole database to elasticsearch using logstash
-> https://dzone.com/articles/migrating-mysql-data-to-elasticsearch-using-logsta
Please help me.
Update (tables in food_database):
+--------------------------------------+
| Tables_in_food_database |
+--------------------------------------+
| access_token |
| activity |
| address |
| answer_abuse_reason |
| answer_report_abuse |
| attribute |
| attribute_group |
| banner |
| banner_group |
| banner_image |
| banner_image_description |
| blog |
| blog_related |
| category |
| category_commission |
| category_description |
| category_path |
| contact |
| country |
| coupon |
| coupon_product_category |
| coupon_usage |
| coupon_usage_product |
| currency |
| customer |
| customer_activity |
| customer_cart |
| customer_document |
| customer_group |
| customer_ip |
| customer_transaction |
| customer_wishlist |
| delivery_allocation |
| delivery_location |
| delivery_location_to_location |
| delivery_person |
| delivery_person_to_location |
| delivery_status |
| email_template |
| geo_zone |
| jobs |
| language |
| login_log |
| manufacturer |
| migrations |
| order |
| order_cancel_reason |
| order_history |
| order_log |
| order_product |
| order_product_log |
| order_status |
| order_total |
| page |
| page_group |
| payment |
| payment_archive |
| payment_items |
| payment_items_archive |
| paypal_order |
| paypal_order_transaction |
| permission_module |
| permission_module_group |
| plugins |
| price_update_file_log |
| product |
| product_answer |
| product_answer_like_dislike |
| product_attribute |
| product_description |
| product_discount |
| product_image |
| product_price_log |
| product_question |
| product_rating |
| product_related |
| product_special |
| product_stock_alert |
| product_tag |
| product_tire_price |
| product_to_category |
| product_varient |
| product_varient_option |
| product_varient_option_details |
| product_varient_option_image |
| product_view_log |
| quotation |
| razorpay_order |
| razorpay_order_transaction |
| service |
| service_category |
| service_category_path |
| service_enquiry |
| service_image |
| service_to_category |
| sessions |
| settings |
| settlement |
| settlement_item |
| site_filter |
| site_filter_category |
| site_filter_section |
| site_filter_section_item |
| sku |
| stock_log |
| stock_status |
| stripe_order |
| stripe_order_transaction |
| tax |
| trend |
| trend_image |
| trend_recommend |
| user_group |
| users |
| varients |
| varients_value |
| vendor |
| vendor_category |
| vendor_coupon |
| vendor_coupon_product_category |
| vendor_global_setting |
| vendor_invoice |
| vendor_invoice_item |
| vendor_order_archive |
| vendor_order_archive_log |
| vendor_order_products |
| vendor_order_status |
| vendor_orders |
| vendor_orders_log |
| vendor_payment |
| vendor_payment_archive |
| vendor_product |
| widget |
| widget_item |
| zone |
| zone_to_geo_zone |
+--------------------------------------+
136 rows in set (0.00 sec)
I would like to send the contents of all 136 tables to Elasticsearch via Logstash.
If running a script next to Logstash is an option, I would go for the following approach:
Create a bash script (or whatever language you prefer), put it in cron to do a simple SHOW TABLES, and use the output to create 130 config files containing only the INPUT part for Logstash, with a naming convention like INPUT_tablename.conf. This script should create a config like the one shown above for each table that exists (a sketch follows after these steps).
Make sure it lists the INPUT_* files in the directory and deletes the ones whose tables no longer exist.
Make sure that when a file already exists it is not touched.
Have your FILTER.conf and OUTPUT.conf in the same directory.
Run Logstash in config auto-reload mode (--config.reload.automatic).
By doing it this way you separate out the part you are struggling with, and the database is free to change: new tables can be added, and old ones can be deleted or renamed.
I've learned to do it this way on clusters that I knew would become very large, where I needed to know when maximum I/O was being hit, so I knew when to add new nodes to which layer without killing the complete setup.
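A minimal sketch of that generator script in Python (the steps above say any language will do; the pymysql dependency, the config directory, and the template contents are assumptions to adapt):
import os
import pymysql

CONF_DIR = "/etc/logstash/conf.d"  # directory Logstash loads configs from (assumed)

# One jdbc input per table; FILTER.conf and OUTPUT.conf live in the same directory.
TEMPLATE = """input {{
  jdbc {{
    jdbc_driver_library => "C:\\ElasticSearch\\mysql-connector-java-8.0.23\\mysql-connector-java-8.0.23.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/food_database?useSSL=false"
    jdbc_user => "root"
    jdbc_password => "1234"
    schedule => "* * * * *"
    statement => "select * from {table}"
  }}
}}
"""

conn = pymysql.connect(host="localhost", user="root", password="1234", database="food_database")
with conn.cursor() as cur:
    cur.execute("SHOW TABLES")
    tables = {row[0] for row in cur.fetchall()}

# Create an INPUT_<table>.conf for each table, but never touch a file that already exists.
for table in tables:
    path = os.path.join(CONF_DIR, "INPUT_%s.conf" % table)
    if not os.path.exists(path):
        with open(path, "w") as f:
            f.write(TEMPLATE.format(table=table))

# Delete INPUT_* files whose tables no longer exist (dropped or renamed).
for name in os.listdir(CONF_DIR):
    if name.startswith("INPUT_") and name.endswith(".conf"):
        if name[len("INPUT_"):-len(".conf")] not in tables:
            os.remove(os.path.join(CONF_DIR, name))
With Logstash started with --config.reload.automatic, new and removed files are picked up without a restart.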

Shell Script - Getting a specified value from returned block

I wish to get the value of status in the shell, so that I can proceed to the next command.
# glance image-show 0227a985-cb1e-4f0c-81cb-003411988ea5
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| checksum            | None                                 |
| container_format    | bare                                 |
| created_at          | 2021-03-15T02:54:15Z                 |
| disk_format         | raw                                  |
| hw_disk_bus         | scsi                                 |
| hw_qemu_guest_agent | yes                                  |
| hw_scsi_model       | virtio-scsi                          |
| id                  | 0227a985-cb1e-4f0c-81cb-003411988ea5 |
| locations           | []                                   |
| min_disk            | 0                                    |
| min_ram             | 0                                    |
| name                | not_inuse                            |
| os_hash_algo        | None                                 |
| os_hash_value       | None                                 |
| os_hidden           | False                                |
| os_require_quiesce  | yes                                  |
| owner               | 4d97a99e53bd4b51aa58601985776d5c     |
| protected           | False                                |
| size                | None                                 |
| status              | active                               |
| tags                | []                                   |
| updated_at          | 2021-03-15T02:54:30Z                 |
| virtual_size        | Not available                        |
| visibility          | private                              |
+---------------------+--------------------------------------+
How do I get just the printed value, i.e. active?
# glance image-show 0227a985-cb1e-4f0c-81cb-003411988ea5 | grep status
| status              | active                               |
Please help, thank you.
# glance image-show 0227a985-cb1e-4f0c-81cb-003411988ea5 | grep status | awk '{print $4}'
active
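Two hardening notes, in case they help: the $4 above counts whitespace-separated fields, so it only works while the value is a single word; splitting on the | delimiter is more robust. And if the unified openstack client is installed, it can print a single field directly (sketches, assuming the table layout shown above):
# glance image-show 0227a985-cb1e-4f0c-81cb-003411988ea5 | awk -F'|' '$2 ~ /^ *status *$/ {gsub(/ /, "", $3); print $3}'
active
# openstack image show -f value -c status 0227a985-cb1e-4f0c-81cb-003411988ea5
active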

NiFi CaptureChangeMySQL converts varchar columns to nulls

I have a problem with Apache NiFi 1.12.1. For some reason unknown to me, CaptureChangeMySQL returns many nulls. Basically, only columns which are int return correct values. I'm new to NiFi, so I might be missing some obvious thing in the configuration.
I have following table:
create table inventory.abc
(
    id int auto_increment
        primary key,
    first_name varchar(100) not null,
    last_name varchar(100) not null,
    age int not null
);
Processor config (screenshot omitted):
Binlog settings:
mysql> show variables like '%bin%';
+--------------------------------------------+--------------------------------+
| Variable_name | Value |
+--------------------------------------------+--------------------------------+
| bind_address | * |
| binlog_cache_size | 32768 |
| binlog_checksum | CRC32 |
| binlog_direct_non_transactional_updates | OFF |
| binlog_error_action | ABORT_SERVER |
| binlog_format | ROW |
| binlog_group_commit_sync_delay | 0 |
| binlog_group_commit_sync_no_delay_count | 0 |
| binlog_gtid_simple_recovery | ON |
| binlog_max_flush_queue_time | 0 |
| binlog_order_commits | ON |
| binlog_row_image | FULL |
| binlog_rows_query_log_events | OFF |
| binlog_stmt_cache_size | 32768 |
| binlog_transaction_dependency_history_size | 25000 |
| binlog_transaction_dependency_tracking | COMMIT_ORDER |
| innodb_api_enable_binlog | OFF |
| innodb_locks_unsafe_for_binlog | OFF |
| log_bin | ON |
| log_bin_basename | /var/lib/mysql/mysql-bin |
| log_bin_index | /var/lib/mysql/mysql-bin.index |
| log_bin_trust_function_creators | OFF |
| log_bin_use_v1_row_events | OFF |
| log_statements_unsafe_for_binlog | ON |
| max_binlog_cache_size | 18446744073709547520 |
| max_binlog_size | 1073741824 |
| max_binlog_stmt_cache_size | 18446744073709547520 |
| sql_log_bin | ON |
| sync_binlog | 1 |
+--------------------------------------------+--------------------------------+
29 rows in set (0.00 sec)
And I get results like this (output screenshot omitted).
Any idea why I get so many nulls in the output? I thought it might be related to the Distributed Map Cache Client, but since that option is not mandatory, I don't think that's the problem.

Executable JAR unable to find Liquibase db/changelog.xml

The project runs as expected with spring-boot:run. However, the executable JAR fails to run because it cannot find db/changelog.xml.
The following steps can be used to reproduce the problem:
run mvn package from project root
go to target folder
run java -jar executable-jar-with-liquibase-1.0.0-SNAPSHOT.jar
The log will now show an error because the table domain has not been created.
Note that the application.yml is found, since if liquibase.enabled is set to false, it will refuse to run entirely (as it should).
application.yml
server:
  context-path: /api
spring:
  datasource:
    platform: h2
    url: jdbc:h2:mem:testdb;MODE=PostgreSQL;DB_CLOSE_ON_EXIT=FALSE
  jackson:
    date-format: yyyy-MM-dd
  jpa:
    database-platform: org.hibernate.dialect.PostgreSQLDialect
    hibernate:
      ddl-auto: none
liquibase:
  enabled: false
  change-log: classpath:db/changelog.xml
The generated JAR has the following contents:
.
|____BOOT-INF
| |____classes
| | |____application.yml
| | |____db
| | | |____changelog.xml
| | | |____changelogs
| | | | |____changelog_000.xml
| | |____nl
| | | |_____42
| | | | |____app
| | | | | |____ApplicationConfig.class
| | | | | |____domain
| | | | | | |____Domain.class
| | | | | | |____DomainController.class
| | | | | | |____DomainRepository.class
| | | | | | |____DomainService.class
| | | | | |____shared
| | | | | | |____AbstractEntity.class
| | | | | |____WebAppConfig.class
| | | | | |____WebApplication.class
| |____lib
| | |____accessors-smart-1.1.jar
| | |____antlr-2.7.7.jar
| | |____asm-5.0.3.jar
| | |____aspectjweaver-1.8.9.jar
| | |____assertj-core-2.5.0.jar
| | |____classmate-1.3.1.jar
| | |____dom4j-1.6.1.jar
| | |____h2-1.4.192.jar
| | |____hamcrest-core-1.3.jar
| | |____hamcrest-library-1.3.jar
| | |____hibernate-commons-annotations-5.0.1.Final.jar
| | |____hibernate-core-5.0.11.Final.jar
| | |____hibernate-entitymanager-5.0.11.Final.jar
| | |____hibernate-jpa-2.1-api-1.0.0.Final.jar
| | |____hibernate-validator-5.2.4.Final.jar
| | |____jackson-annotations-2.8.3.jar
| | |____jackson-core-2.8.3.jar
| | |____jackson-databind-2.8.3.jar
| | |____jackson-datatype-jsr310-2.8.3.jar
| | |____jandex-2.0.0.Final.jar
| | |____javassist-3.20.0-GA.jar
| | |____javax.transaction-api-1.2.jar
| | |____jboss-logging-3.3.0.Final.jar
| | |____jcl-over-slf4j-1.7.21.jar
| | |____json-20140107.jar
| | |____json-path-2.2.0.jar
| | |____json-smart-2.2.1.jar
| | |____jsonassert-1.3.0.jar
| | |____jul-to-slf4j-1.7.21.jar
| | |____liquibase-core-3.5.1.jar
| | |____log4j-over-slf4j-1.7.21.jar
| | |____logback-classic-1.1.7.jar
| | |____logback-core-1.1.7.jar
| | |____mockito-core-1.10.19.jar
| | |____objenesis-2.1.jar
| | |____slf4j-api-1.7.21.jar
| | |____snakeyaml-1.17.jar
| | |____spring-aop-4.3.3.RELEASE.jar
| | |____spring-aspects-4.3.3.RELEASE.jar
| | |____spring-beans-4.3.3.RELEASE.jar
| | |____spring-boot-1.4.1.RELEASE.jar
| | |____spring-boot-autoconfigure-1.4.1.RELEASE.jar
| | |____spring-boot-configuration-processor-1.4.1.RELEASE.jar
| | |____spring-boot-devtools-1.4.1.RELEASE.jar
| | |____spring-boot-starter-1.4.1.RELEASE.jar
| | |____spring-boot-starter-aop-1.4.1.RELEASE.jar
| | |____spring-boot-starter-data-jpa-1.4.1.RELEASE.jar
| | |____spring-boot-starter-jdbc-1.4.1.RELEASE.jar
| | |____spring-boot-starter-logging-1.4.1.RELEASE.jar
| | |____spring-boot-starter-test-1.4.1.RELEASE.jar
| | |____spring-boot-starter-tomcat-1.4.1.RELEASE.jar
| | |____spring-boot-starter-web-1.4.1.RELEASE.jar
| | |____spring-boot-test-1.4.1.RELEASE.jar
| | |____spring-boot-test-autoconfigure-1.4.1.RELEASE.jar
| | |____spring-context-4.3.3.RELEASE.jar
| | |____spring-core-4.3.3.RELEASE.jar
| | |____spring-data-commons-1.12.3.RELEASE.jar
| | |____spring-data-jpa-1.10.3.RELEASE.jar
| | |____spring-expression-4.3.3.RELEASE.jar
| | |____spring-jdbc-4.3.3.RELEASE.jar
| | |____spring-orm-4.3.3.RELEASE.jar
| | |____spring-tx-4.3.3.RELEASE.jar
| | |____spring-web-4.3.3.RELEASE.jar
| | |____spring-webmvc-4.3.3.RELEASE.jar
| | |____tomcat-embed-core-8.5.5.jar
| | |____tomcat-embed-el-8.5.5.jar
| | |____tomcat-embed-websocket-8.5.5.jar
| | |____tomcat-jdbc-8.5.5.jar
| | |____tomcat-juli-8.5.5.jar
| | |____validation-api-1.1.0.Final.jar
| | |____xml-apis-1.4.01.jar
|____META-INF
| |____MANIFEST.MF
| |____maven
| | |____nl.mad
| | | |____executable-jar-with-liquibase
| | | | |____pom.properties
| | | | |____pom.xml
|____org
| |____springframework
| | |____boot
| | | |____loader
| | | | |____archive
| | | | | |____Archive$Entry.class
| | | | | |____Archive$EntryFilter.class
| | | | | |____Archive.class
| | | | | |____ExplodedArchive$1.class
| | | | | |____ExplodedArchive$FileEntry.class
| | | | | |____ExplodedArchive$FileEntryIterator$EntryComparator.class
| | | | | |____ExplodedArchive$FileEntryIterator.class
| | | | | |____ExplodedArchive.class
| | | | | |____JarFileArchive$EntryIterator.class
| | | | | |____JarFileArchive$JarFileEntry.class
| | | | | |____JarFileArchive.class
| | | | |____data
| | | | | |____ByteArrayRandomAccessData.class
| | | | | |____RandomAccessData$ResourceAccess.class
| | | | | |____RandomAccessData.class
| | | | | |____RandomAccessDataFile$DataInputStream.class
| | | | | |____RandomAccessDataFile$FilePool.class
| | | | | |____RandomAccessDataFile.class
| | | | |____ExecutableArchiveLauncher$1.class
| | | | |____ExecutableArchiveLauncher.class
| | | | |____jar
| | | | | |____AsciiBytes.class
| | | | | |____Bytes.class
| | | | | |____CentralDirectoryEndRecord.class
| | | | | |____CentralDirectoryFileHeader.class
| | | | | |____CentralDirectoryParser.class
| | | | | |____CentralDirectoryVisitor.class
| | | | | |____FileHeader.class
| | | | | |____Handler.class
| | | | | |____JarEntry.class
| | | | | |____JarEntryFilter.class
| | | | | |____JarFile$1.class
| | | | | |____JarFile$2.class
| | | | | |____JarFile$3.class
| | | | | |____JarFile$JarFileType.class
| | | | | |____JarFile.class
| | | | | |____JarFileEntries$1.class
| | | | | |____JarFileEntries$EntryIterator.class
| | | | | |____JarFileEntries.class
| | | | | |____JarURLConnection$1.class
| | | | | |____JarURLConnection$JarEntryName.class
| | | | | |____JarURLConnection.class
| | | | | |____ZipInflaterInputStream.class
| | | | |____JarLauncher.class
| | | | |____LaunchedURLClassLoader$1.class
| | | | |____LaunchedURLClassLoader.class
| | | | |____Launcher.class
| | | | |____MainMethodRunner.class
| | | | |____PropertiesLauncher$1.class
| | | | |____PropertiesLauncher$ArchiveEntryFilter.class
| | | | |____PropertiesLauncher$FilteredArchive$1.class
| | | | |____PropertiesLauncher$FilteredArchive.class
| | | | |____PropertiesLauncher$PrefixMatchingArchiveFilter.class
| | | | |____PropertiesLauncher.class
| | | | |____util
| | | | | |____SystemPropertyUtils.class
| | | | |____WarLauncher.class
The entire project can be found here: https://github.com/robert-bor/executable-jar-with-liquibase
What am I doing wrong here?
There used to be a problem with the includeAll tag in Liquibase; see this issue. It should be fixed by now, but at the moment I could not make it run with the includeAll tag.
As a solution for your problem, use:
<include file="classpath:db/changelogs/changelog_000.xml" relativeToChangelogFile="false"/>
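For reference, a master db/changelog.xml using that include might look like this (a minimal sketch; the 3.5 schema matches the liquibase-core 3.5.1 in the JAR listing above):
<databaseChangeLog
        xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
                            http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-3.5.xsd">
    <!-- reference the child changelog explicitly instead of relying on includeAll -->
    <include file="classpath:db/changelogs/changelog_000.xml" relativeToChangelogFile="false"/>
</databaseChangeLog>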

Need help with credit expiration algorithm

So I'm stuck. I am working on a credit system with expirations, similar to credit card miles but not exactly. By the way, I am sorry for the book ahead, but I needed to add enough detail to give the whole picture.
What I need is a system where a user accumulates credits for doing activities, but can also spend these credits on activities. The credits should expire after 30 days if they are not used. I seem to be stuck on how to accurately calculate this in a batch that will run every night. Any ideas in any language would be greatly appreciated, as I am stuck on just one minor detail that I can't get around. Here is an example of the data:
7/1: +5 - user signs up
7/2: +5 - user interacts with system
7/2: -3 - user purchases activity
7/3: +5 - user interacts with system
So at this point the user has received 15 credits and has spent 3, leaving him with a total of 12 credits. (At least I have basic math down. :P)
I should add that currently we are playing with the idea of having two fields, last processed and next processed. Assuming a new sign-up, these values would be:
Last Processed Date: 7/1
Next Process Date: 8/1
So now 8/1 comes around. The batch starts and looks at all credits that are older than 30 days, which at this point is 5.
This is where it starts to get fuzzy.
Then the system should look at all the credits that have been spent in the last 30 days, because credits should only expire if they haven't been used. There are 3, so I deduct 2 credits from the user: the difference between the credits earned more than 30 days ago and what has been spent. I finish the batch and set the dates accordingly for the next day. Now, assuming they haven't spent any more, I start the calculation over: credits earned more than 30 days ago, which is 5, and credits spent, which again is 3. But I obviously don't want to consider the 3 credits that I already considered yesterday. What is a good approach to avoid counting those 3 credits again?
That is where I am stuck.
We are thinking about writing a debit record for the expired credits so we can track them, but I am having a hard time seeing how to use it in this calculation.
If you read this far thank you. If you even make a somewhat effort in the answer I will at a minimum give you an up vote for effort.
EDIT:
OK, @Greg mentioned something that I forgot to address: the idea of putting a flag on the credits considered. A valid point, but not one that can work, because of the following scenario:
Let's say that on a particular day a user spends 10 credits, but the expired credits that the batch is considering only amount to 5. He should still have 5 more credits that do not expire, because he spent more than a single expiration's worth. The flag wouldn't work, because we would have skipped those 5 extra credits. Hope that makes sense?
For every user of the system, keep an array that stores the amount of credits available to the user on each of the next 30 consecutive days.
For example, the data for some user might look like this:
8 |
7 | |
6 | | | |
5 | | | | | | | | | | |
4 | | | | | | | | | | | | | | | | |
3 | | | | | | | | | | | | | | | | | | | | | | | |
2 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
1 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
-------------------------------------------------------------
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
^ ^ ^
| \_ |
today tomorrow in 15 days
Every time the user earns some credits, you increase the amounts for all days by the number of credits earned. For example, if the user earns 2 credits, the table changes as follows; it's like raising the whole graph up.
10 |
9 | |
8 | | | |
7 | | | | | | | | | | |
6 | | | | | | | | | | | | | | | | |
5 | | | | | | | | | | | | | | | | | | | | | | | |
4 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
3 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
2 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
1 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
-------------------------------------------------------------
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
^ ^ ^
| \_ |
today tomorrow in 15 days
If the user has x credits today and spends y credits, you decrease the amount of credits available to him to x - y for every day on which he has an amount greater than x - y. For days on which he has no more than x - y, the amount stays the same. It's like cutting the top off the graph. For example, if the user spends 3 credits, the graph changes to:
7 | | | | | | | | | | |
6 | | | | | | | | | | | | | | | | |
5 | | | | | | | | | | | | | | | | | | | | | | | |
4 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
3 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
2 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
1 | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
-------------------------------------------------------------
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
^ ^ ^
| \_ |
today tomorrow in 15 days
Every day you shift the graph to the left to model expiring credits. The user will have the following amounts tomorrow:
7 | | | | | | | | | |
6 | | | | | | | | | | | | | | | |
5 | | | | | | | | | | | | | | | | | | | | | | |
4 | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
3 | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
2 | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
1 | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
-------------------------------------------------------------
| | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
^ ^ ^
| \_ |
today tomorrow in 15 days
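A minimal Python sketch of this day-array idea (the asker invited any language; class and method names are illustrative, and the bookkeeping assumes credits are usable for exactly 30 days including the day they are earned):
HORIZON = 30  # credits are usable for 30 consecutive days

class CreditAccount:
    def __init__(self):
        # available[i] = credits the user can still spend i days from now
        self.available = [0] * HORIZON

    def earn(self, amount):
        # raise the whole graph: new credits are usable on every day of the horizon
        self.available = [a + amount for a in self.available]

    def spend(self, amount):
        balance = self.available[0]
        if amount > balance:
            raise ValueError("insufficient credits")
        # cut the top of the graph off at the new balance
        cap = balance - amount
        self.available = [min(a, cap) for a in self.available]

    def next_day(self):
        # shift the graph left; nothing earned so far reaches the new last day
        self.available = self.available[1:] + [0]

# The asker's example: +5, +5, -3, +5 leaves a balance of 12.
acct = CreditAccount()
acct.earn(5)              # 7/1: sign-up
acct.next_day()
acct.earn(5)              # 7/2: interaction
acct.spend(3)             # 7/2: purchase
acct.next_day()
acct.earn(5)              # 7/3: interaction
print(acct.available[0])  # -> 12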
I wouldn't consider trying to process the data as you present it. Instead, you should keep track of how many credits the user has, and when they expire. That way you keep track of which credits were used when the purchase is made, instead of trying to work it all out later.
So when the user signs up, they have:
5 credits expiring on 8/1
After interacting with the system the next day:
5 credits expiring on 8/1
5 credits expiring on 8/2
After purchasing something:
2 credits expiring on 8/1
5 credits expiring on 8/2
And so on.
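A minimal Python sketch of this bookkeeping (names are illustrative; it assumes spending always consumes the soonest-to-expire credits first, which is what makes the 7/2 purchase eat into the 8/1 bucket):
import heapq
from datetime import date, timedelta

class CreditLedger:
    def __init__(self):
        self.buckets = []  # min-heap of [expiry_date, amount], soonest expiry first

    def earn(self, day, amount, lifetime=31):
        # lifetime=31 reproduces the dates above: earned 7/1 -> expires 8/1
        heapq.heappush(self.buckets, [day + timedelta(days=lifetime), amount])

    def expire(self, day):
        # drop buckets whose expiry date has been reached
        while self.buckets and self.buckets[0][0] <= day:
            heapq.heappop(self.buckets)

    def balance(self, day):
        self.expire(day)
        return sum(amount for _, amount in self.buckets)

    def spend(self, day, amount):
        if amount > self.balance(day):
            raise ValueError("insufficient credits")
        # consume the soonest-to-expire credits first
        while amount > 0:
            bucket = self.buckets[0]
            used = min(bucket[1], amount)
            bucket[1] -= used
            amount -= used
            if bucket[1] == 0:
                heapq.heappop(self.buckets)

ledger = CreditLedger()
ledger.earn(date(2014, 7, 1), 5)   # sign-up: 5 credits expiring 8/1
ledger.earn(date(2014, 7, 2), 5)   # interaction: 5 credits expiring 8/2
ledger.spend(date(2014, 7, 2), 3)  # purchase: leaves 2 expiring 8/1, 5 expiring 8/2
ledger.earn(date(2014, 7, 3), 5)
print(ledger.balance(date(2014, 7, 3)))  # -> 12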
Assuming you run this batch on a daily basis, you can have a table that keeps track of all the credits they earned, and the credits they used (negative credits).
At the beginning of the next month, your job is simply to find out which of the credits earned on the first day were not spent during the month.
Take the number of credits earned on the first day minus the credits they spent over the last month. If the number is positive, they have some credits that need to expire, so simply add a record to the table with a negative credit. This will zero out the unused credits.
The next day, repeat the process by taking the credits they earned on the second day minus the sum of all the credits they spent in the last month, taking into account the record with the negative credits you created the previous day.
How about adding a flag to the expenditures? If the flag is not set, then you can include that expenditure in the batch, if necessary. If you do use the expenditure to offset an expiration, then you set the flag. Next time through, you'll ignore that expenditure because the flag is set.
Use a debit record to record normal expenditures. When the monthly batch job runs, it can calculate the total debits which are less than or equal to the expiring credits. If there are credits to expire, simply insert an appropriate debit record (appropriate == to cancel the excess, in your application). In this way, any 'running total' code which examines only credits and debits will reach the same balance that your batch code intended.
One approach to this problem is to store only the transactions, not the balance. Then you always calculate the balance in real time when needed. Here's the data:
Date : Amount : Expires
7/1 : +5 : 7/31
7/2 : +5 : 8/1
7/2 : -3 : never
7/3 : +5 : 8/2
The balance at any time is simply the total of all transactions that have not yet expired. No need to run any batch processes.
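In code, that balance rule is a one-liner (a Python sketch; transactions as (amount, expires) pairs, with None meaning "never"):
from datetime import date

def balance(transactions, today):
    # total of every transaction that has not yet expired
    return sum(amount for amount, expires in transactions
               if expires is None or expires > today)

txs = [(5, date(2014, 7, 31)), (5, date(2014, 8, 1)), (-3, None), (5, date(2014, 8, 2))]
print(balance(txs, date(2014, 7, 3)))  # -> 12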
Regarding Julian's reply (which I can't comment on yet): I'm dealing with just the same problem, and Julian's approach won't work, because it would allow the account balance to go negative.
If the user didn't use the service for one month, then on 8/4 the account balance would be -3, and one activity worth 5 would bring the balance to 2, not to 5 as it should.
