java.lang.StackOverflowError from OpenJPA CriteriaQuery using large queries - derby

Overview
I'm trying to use an OpenJPA CriteriaQuery to count the number of objects in a DB table that meet certain properties.
I need to be able to apply a filter - to count only those objects that meet the properties and also have an ID in a known list of IDs.
I am not able to do this if the list is very large.
Detailed description
I'm using OpenJPA 2.2.1 with an underlying Derby DB.
My CriteriaQuery is being mapped to an SQL query with a WHERE clause that includes
t1.qid = ? OR t1.qid = ? OR t1.qid = ? ...
for each of the known IDs.
This normally works fine.
If this list gets large (e.g. a thousand or more IDs in the list), this fails with a java.lang.StackOverflowError.
Questions
Is there a better way to apply a filter that can scale to an arbitrary number of object IDs?
Is there a way to avoid this error?
I've included a simplified version of my code below, together with OpenJPA exceptions and derby.log errors that resulted.
A simplified version of my code:
Collection<Integer> qids = A_LARGE_SET_OF_INTEGERS;
// prepare a criteria query to count objects that match some parameters
// (builder is the EntityManager's CriteriaBuilder, i.e. em.getCriteriaBuilder())
CriteriaQuery<Long> criteria = builder.createQuery(Long.class);
Root<MyObjectType> myrootobj = criteria.from(MyObjectType.class);
// counting the number of qids - a nonunique ID value in MyObjectType
criteria.select(builder.countDistinct(myrootobj.get(MyObjectType_.qid)));
// prepare a filter based on a property of the object
Predicate someProperty = builder.equal(myrootobj.get(MyObjectType_.someattr).get(MyOtherObjectType_.id), somefilterid);
// prepare a filter to limit to objects with an ID in the provided set
Predicate filteredObjects = myrootobj.get(MyObjectType_.qid).in(qids);
// apply the filter
criteria.where(builder.and(someProperty, filteredObjects));
long count = em.createQuery(criteria).getSingleResult();
The only SQL error details I can see are SQLState: XJ001, Error Code: -1.
The exception thrown looks like this:
<openjpa-2.2.1-r422266:1396819 nonfatal user error> org.apache.openjpa.persistence.ArgumentException: Failed to execute query "null". Check the query syntax for correctness. See nested exception for details.
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:872)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:794)
at org.apache.openjpa.kernel.DelegatingQuery.execute(DelegatingQuery.java:542)
at org.apache.openjpa.persistence.QueryImpl.execute(QueryImpl.java:286)
at org.apache.openjpa.persistence.QueryImpl.getResultList(QueryImpl.java:302)
at org.apache.openjpa.persistence.QueryImpl.getSingleResult(QueryImpl.java:330)
at my.code.that.uses.CriteriaQuery
at sun.reflect.GeneratedMethodAccessor335.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:55)
at java.lang.reflect.Method.invoke(Method.java:613)
at ...
Caused by: org.apache.openjpa.lib.jdbc.ReportingSQLException: DERBY SQL error: SQLCODE: -1, SQLSTATE: XJ001, SQLERRMC: java.lang.StackOverflowErrorXJ001.U {SELECT COUNT(DISTINCT t1.id) FROM SomeObjectType t0 INNER JOIN Something t1 ON t0.SOMETHING_ID = t1.id WHERE (t0.attr = ? AND t1.RUN_ID = ? AND (t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? 
OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ?) AND t1.qid IS NOT NULL)} [code=-1, state=XJ001]
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:219)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.wrap(LoggingConnectionDecorator.java:199)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator.access$000(LoggingConnectionDecorator.java:59)
at org.apache.openjpa.lib.jdbc.LoggingConnectionDecorator$LoggingConnection.prepareStatement(LoggingConnectionDecorator.java:251)
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:133)
at org.apache.openjpa.lib.jdbc.ConfiguringConnectionDecorator$ConfiguringConnection.prepareStatement(ConfiguringConnectionDecorator.java:140)
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:133)
at org.apache.openjpa.jdbc.kernel.JDBCStoreManager$RefCountConnection.prepareStatement(JDBCStoreManager.java:1646)
at org.apache.openjpa.lib.jdbc.DelegatingConnection.prepareStatement(DelegatingConnection.java:122)
at org.apache.openjpa.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:449)
at org.apache.openjpa.jdbc.sql.SQLBuffer.prepareStatement(SQLBuffer.java:429)
at org.apache.openjpa.jdbc.sql.SelectImpl.prepareStatement(SelectImpl.java:479)
at org.apache.openjpa.jdbc.sql.SelectImpl.execute(SelectImpl.java:420)
at org.apache.openjpa.jdbc.sql.SelectImpl.execute(SelectImpl.java:391)
at org.apache.openjpa.jdbc.sql.LogicalUnion$UnionSelect.execute(LogicalUnion.java:427)
at org.apache.openjpa.jdbc.sql.LogicalUnion.execute(LogicalUnion.java:230)
at org.apache.openjpa.jdbc.sql.LogicalUnion.execute(LogicalUnion.java:220)
at org.apache.openjpa.jdbc.kernel.SelectResultObjectProvider.open(SelectResultObjectProvider.java:94)
at org.apache.openjpa.kernel.QueryImpl$PackingResultObjectProvider.open(QueryImpl.java:2070)
at org.apache.openjpa.kernel.QueryImpl.singleResult(QueryImpl.java:1320)
at org.apache.openjpa.kernel.QueryImpl.toResult(QueryImpl.java:1242)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:1007)
at org.apache.openjpa.kernel.QueryImpl.execute(QueryImpl.java:863)
... 75 more
Note that I've snipped the SQL query in the ReportingSQLException because it was very long - I removed hundreds of the OR t1.qid = ? clauses. There were over a thousand of them.
derby.log contains the following:
2013-06-30 01:03:16.279 GMT Thread[DRDAConnThread_36,5,main] (XID = 11562784), (SESSIONID = 23), (DATABASE = /full/path/to/my/db), (DRDAID = NF000001.CE86-507216535478407432{22}), Failed Statement is: SELECT COUNT(DISTINCT t1.id) FROM SomeObjectType t0 INNER JOIN Something t1 ON t0.SOMETHING_ID = t1.id WHERE (t0.attr = ? AND t1.RUN_ID = ? AND (t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ? OR t1.qid = ?) AND t1.qid IS NOT NULL)
java.lang.StackOverflowError
at org.apache.derby.impl.sql.compile.OrNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.OrNode.changeToCNF(Unknown Source)
<snip - over a thousand more lines like these>
at org.apache.derby.impl.sql.compile.OrNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.OrNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.OrNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.OrNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.OrNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.AndNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.AndNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.AndNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.AndNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.AndNode.changeToCNF(Unknown Source)
at org.apache.derby.impl.sql.compile.SelectNode.normExpressions(Unknown Source)
at org.apache.derby.impl.sql.compile.SelectNode.preprocess(Unknown Source)
at org.apache.derby.impl.sql.compile.DMLStatementNode.optimizeStatement(Unknown Source)
at org.apache.derby.impl.sql.compile.CursorNode.optimizeStatement(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepMinion(Unknown Source)
at org.apache.derby.impl.sql.GenericStatement.prepare(Unknown Source)
at org.apache.derby.impl.sql.conn.GenericLanguageConnectionContext.prepareInternalStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement20.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement30.<init>(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedPreparedStatement40.<init>(Unknown Source)
at org.apache.derby.jdbc.Driver40.newEmbedPreparedStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at org.apache.derby.impl.jdbc.EmbedConnection.prepareStatement(Unknown Source)
at org.apache.derby.impl.drda.DRDAStatement.prepare(Unknown Source)
at org.apache.derby.impl.drda.DRDAStatement.explicitPrepare(Unknown Source)
at org.apache.derby.impl.drda.DRDAConnThread.parsePRPSQLSTT(Unknown Source)
at org.apache.derby.impl.drda.DRDAConnThread.processCommands(Unknown Source)
at org.apache.derby.impl.drda.DRDAConnThread.run(Unknown Source)

I believe this is DERBY-3876: https://issues.apache.org/jira/browse/DERBY-3876
A workaround is to use your JVM's ability to set the thread stack size. For example,
try -Xss2048k, if your version of Java accepts that flag.

Using OpenJPA and MS SQL Server I had a similar problem with a long list of IDs, but in OrExpression.initialize. The stack-size trick given by Bryan Pendleton solved the issue on the OpenJPA side.
However, now I am hitting the 2100-parameter limit of SQL Server.
A solution (perhaps depending on the query) is to split the collection of IDs into batches of e.g. 1000, then run a separate query for each batch in a loop and combine the results.
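That batching approach can be sketched in plain Java as follows. The names here are hypothetical - countForBatch stands in for whatever per-batch query you actually run (e.g. a CriteriaQuery using myrootobj.get(MyObjectType_.qid).in(batch)). Note that summing per-batch COUNT(DISTINCT ...) results is only valid because the batches partition a set of distinct IDs, so no qid is counted twice:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.ToLongFunction;

public class BatchedIn {
    /** Split a list of IDs into consecutive batches of at most batchSize elements. */
    static <T> List<List<T>> partition(List<T> ids, int batchSize) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < ids.size(); i += batchSize) {
            batches.add(ids.subList(i, Math.min(i + batchSize, ids.size())));
        }
        return batches;
    }

    /** Run the per-batch query for each batch and combine (here: sum) the results. */
    static <T> long countInBatches(List<T> ids, int batchSize,
                                   ToLongFunction<List<T>> countForBatch) {
        long total = 0;
        for (List<T> batch : partition(ids, batchSize)) {
            // In the real code this would execute the CriteriaQuery with
            // .in(batch) in the WHERE clause and add up getSingleResult().
            total += countForBatch.applyAsLong(batch);
        }
        return total;
    }

    public static void main(String[] args) {
        List<Integer> ids = new ArrayList<>();
        for (int i = 0; i < 2500; i++) ids.add(i);
        // Stand-in for the DB query: just counts the batch's elements.
        long total = countInBatches(ids, 1000, List::size);
        System.out.println(total); // prints 2500, computed over batches of 1000/1000/500
    }
}
```

Keeping each batch at or below 1000 keeps Derby's OR-chain (and SQL Server's parameter count) well under the limits seen above.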

Related

CodeIgniter sessions are not working on the live server?

$config["sess_driver"] = "files";
$config["sess_cookie_name"] = "ci_session";
$config["sess_expiration"] = 7200;
$config["sess_save_path"] = NULL;
$config["sess_match_ip"] = FALSE;
$config["sess_time_to_update"] = 300;
$config["sess_regenerate_destroy"] = FALSE;

How to add additional tag for EBS volume base on variable?

I'm using this EC2 module, with light alterations, to create EC2 instances and EBS volumes. The code works without issue, but I have a requirement to add the mount point as a tag on each EBS volume, so I can use a data filter to get that value and mount the volume using Ansible.
I'm trying to add the tag value to the dynamic "ebs_block_device" block through the deploy-ec2.tf configuration file. As per the Terraform documentation, tags is an optional argument. However, when I execute this it produces an "Unsupported argument" error for the tags value. Appreciate your support in understanding the issue here.
My Code as below.
Module main.tf
locals {
is_t_instance_type = replace(var.instance_type, "/^t(2|3|3a){1}\\..*$/", "1") == "1" ? true : false
}
resource "aws_instance" "this" {
count = var.instance_count
ami = var.ami
instance_type = var.instance_type
user_data = var.user_data
user_data_base64 = var.user_data_base64
subnet_id = length(var.network_interface) > 0 ? null : element(
distinct(compact(concat([var.subnet_id], var.subnet_ids))),
count.index,
)
key_name = var.key_name
monitoring = var.monitoring
get_password_data = var.get_password_data
vpc_security_group_ids = var.vpc_security_group_ids
iam_instance_profile = var.iam_instance_profile
associate_public_ip_address = var.associate_public_ip_address
private_ip = length(var.private_ips) > 0 ? element(var.private_ips, count.index) : var.private_ip
ipv6_address_count = var.ipv6_address_count
ipv6_addresses = var.ipv6_addresses
ebs_optimized = var.ebs_optimized
dynamic "root_block_device" {
for_each = var.root_block_device
content {
delete_on_termination = lookup(root_block_device.value, "delete_on_termination", null)
encrypted = lookup(root_block_device.value, "encrypted", null)
iops = lookup(root_block_device.value, "iops", null)
kms_key_id = lookup(root_block_device.value, "kms_key_id", null)
volume_size = lookup(root_block_device.value, "volume_size", null)
volume_type = lookup(root_block_device.value, "volume_type", null)
}
}
dynamic "ebs_block_device" {
for_each = var.ebs_block_device
content {
delete_on_termination = lookup(ebs_block_device.value, "delete_on_termination", null)
device_name = ebs_block_device.value.device_name
encrypted = lookup(ebs_block_device.value, "encrypted", null)
iops = lookup(ebs_block_device.value, "iops", null)
kms_key_id = lookup(ebs_block_device.value, "kms_key_id", null)
snapshot_id = lookup(ebs_block_device.value, "snapshot_id", null)
volume_size = lookup(ebs_block_device.value, "volume_size", null)
volume_type = lookup(ebs_block_device.value, "volume_type", null)
tags = lookup(ebs_block_device.value, "mount", null)
}
}
dynamic "ephemeral_block_device" {
for_each = var.ephemeral_block_device
content {
device_name = ephemeral_block_device.value.device_name
no_device = lookup(ephemeral_block_device.value, "no_device", null)
virtual_name = lookup(ephemeral_block_device.value, "virtual_name", null)
}
}
dynamic "metadata_options" {
for_each = length(keys(var.metadata_options)) == 0 ? [] : [var.metadata_options]
content {
http_endpoint = lookup(metadata_options.value, "http_endpoint", "enabled")
http_tokens = lookup(metadata_options.value, "http_tokens", "optional")
http_put_response_hop_limit = lookup(metadata_options.value, "http_put_response_hop_limit", "1")
}
}
dynamic "network_interface" {
for_each = var.network_interface
content {
device_index = network_interface.value.device_index
network_interface_id = lookup(network_interface.value, "network_interface_id", null)
delete_on_termination = lookup(network_interface.value, "delete_on_termination", false)
}
}
source_dest_check = length(var.network_interface) > 0 ? null : var.source_dest_check
disable_api_termination = var.disable_api_termination
instance_initiated_shutdown_behavior = var.instance_initiated_shutdown_behavior
placement_group = var.placement_group
tenancy = var.tenancy
tags = merge(
{
"Name" = var.instance_count > 1 || var.use_num_suffix ? format("%s${var.num_suffix_format}-EC2", var.name, count.index + 1) : format("%s-EC2",var.name)
},
{
"ResourceName" = var.instance_count > 1 || var.use_num_suffix ? format("%s${var.num_suffix_format}-EC2", var.name, count.index + 1) : format("%s-EC2",var.name)
},
{"Account" = var.Account,
"Environment" = var.Environment,
"ApplicationName" = var.ApplicationName,
"ApplicationID" = var.ApplicationID,
"Project" = var.Project,
"ProjectCode" = var.ProjectCode,
"Workload" = var.Workload,
"Division" = var.Division,
"Purpose" = var.Purpose,
"VersionNumber" = var.VersionNumber,
"RelVersion" = var.RelVersion,
"OSVersion" = var.OSVersion,
"DBVersion" = var.DBVersion,
"DataClassification" = var.DataClassification,
"Automation" = var.Automation,
"AWSResoureceType" = "EC2",
"BusinessEntitiy" = var.BusinessEntitiy,
"CostCentre" = var.CostCentre,
"BaseImageName" = var.BaseImageName},
var.tags,
)
volume_tags = merge(
{
"Name" = var.instance_count > 1 || var.use_num_suffix ? format("%s${var.num_suffix_format}-EBS", var.name, count.index + 1) : format("%s-EBS",var.name)
},
{
"ResourceName" = var.instance_count > 1 || var.use_num_suffix ? format("%s${var.num_suffix_format}-EBS", var.name, count.index + 1) : format("%s-EBS",var.name)
},
{"Account" = var.Account,
"Environment" = var.Environment,
"ApplicationName" = var.ApplicationName,
"ApplicationID" = var.ApplicationID,
"Project" = var.Project,
"ProjectCode" = var.ProjectCode,
"Workload" = var.Workload,
"Division" = var.Division,
"Purpose" = var.Purpose,
"VersionNumber" = var.VersionNumber,
"RelVersion" = var.RelVersion,
"OSVersion" = var.OSVersion,
"DBVersion" = var.DBVersion,
"DataClassification" = var.DataClassification,
"Automation" = var.Automation,
"AWSResoureceType" = "EC2",
"BusinessEntitiy" = var.BusinessEntitiy,
"CostCentre" = var.CostCentre,
"BaseImageName" = var.BaseImageName},
var.volume_tags,
)
credit_specification {
cpu_credits = local.is_t_instance_type ? var.cpu_credits : null
}
}
deploy-ec2.tf
module "mn-ec2" {
source = "../../../terraform12-modules/aws/ec2-instance"
instance_count = var.master_nodes
name = "${var.Account}-${var.Environment}-${var.ApplicationName}-${var.Project}-${var.Division}-${var.Purpose}-MN"
ami = var.ami_id
instance_type = var.master_node_ec2_type
subnet_ids = ["${data.aws_subnet.primary_subnet.id}","${data.aws_subnet.secondory_subnet.id}","${data.aws_subnet.tertiary_subnet.id}"]
vpc_security_group_ids = ["${module.sg-application-servers.this_security_group_id}"]
iam_instance_profile = "${var.iam_instance_profile}"
key_name = var.key_pair_1
Project = upper(var.Project)
Account = var.Account
Environment = var.Environment
ApplicationName = var.ApplicationName
ApplicationID = var.ApplicationID
ProjectCode = var.ProjectCode
Workload = var.Workload
Division = var.Division
RelVersion = var.RelVersion
Purpose = var.Purpose
DataClassification = var.DataClassification
CostCentre = var.CostCentre
Automation = var.Automation
tags = {
node_type = "master"
}
volume_tags = {
node_type = "master"
}
root_block_device = [
{
encrypted = true
kms_key_id = var.kms_key_id
volume_type = "gp2"
volume_size = 250
},
]
ebs_block_device = [
{
device_name = "/dev/sdc"
encrypted = true
kms_key_id = var.kms_key_id
volume_type = "gp2"
volume_size = 500
mount = "/x02"
},
{
device_name = "/dev/sdd"
encrypted = true
kms_key_id = var.kms_key_id
volume_type = "gp2"
volume_size = 1000
mount = "/x03"
},
{
device_name = "/dev/sde"
encrypted = true
kms_key_id = var.kms_key_id
volume_type = "gp2"
volume_size = 10000
mount = "/x04"
},
]
}
The issue was with the AWS provider, which didn't have this option. I upgraded to terraform-provider-aws_3.24.0_linux_amd64.zip, and now specific tags can be added for each EBS volume.
I ran into a similar problem. Changing from terraform-provider-aws=2 to terraform-provider-aws=3 worked.

how to store the primary key values to the foreign key at same time of save

I am new to Laravel and am trying to store the primary key value of each patient as the foreign key patient_id in the address table.
$patdet = $request->patdet;
foreach ($patdet as $patdets) {
$pdet = new Patient();
$pdet->fname = $patdets['fname'];
$pdet->mname = $patdets['mname'];
$pdet->lname = $patdets['lname'];
$pdet->age = $patdets['age'];
$pdet->blood_group = $patdets['bloodgroup'];
$pdet->gender = $patdets['gender'];
$ptid = $pdet->save();
}
$addr = $request->address[0];
$address = new Address;
$address->gps_lat = $addr['gps_lat'];
$address->gps_log = $addr['gps_long'];
$address->house_no = $addr['houseno'];
$address->zipcode = $addr['zip_code'];
$address->street = $addr['street'];
$address->chowk = $addr['chowk'];
$address->city = $addr['city'];
$address->patient_id = $patid->id;
$address->save();
I know that $ptid is a local variable scoped to the loop and cannot be used when building the address record, so how can I store it in the address table?
Try the following and ask me if it doesn't work
foreach($request->patdet as $key => $patdets)
{
$pdet = new Patient();
$pdet->fname = $patdets['fname'];
$pdet->mname = $patdets['mname'];
$pdet->lname = $patdets['lname'];
$pdet->age = $patdets['age'];
$pdet->blood_group = $patdets['bloodgroup'];
$pdet->gender = $patdets['gender'];
$pdet->save();
$addr = $request->address[$key];
if($pdet->id > 0 && !empty($addr)){
$addrss = new Address;
$addrss->gps_lat = $addr['gps_lat'];
$addrss->gps_log = $addr['gps_long'];
$addrss->house_no = $addr['houseno'];
$addrss->zipcode = $addr['zip_code'];
$addrss->street = $addr['street'];
$addrss->chowk = $addr['chowk'];
$addrss->city = $addr['city'];
$addrss->patient_id = $pdet->id;
$addrss->save();
}
}
Try the following
$patArray = $request->patdet;
$addressArray = $request->address;
$pId = array();
foreach($patArray as $key => $patdets){
$pdet = new Patient();
$pdet->fname = $patdets['fname'];
$pdet->mname = $patdets['mname'];
$pdet->lname = $patdets['lname'];
$pdet->age = $patdets['age'];
$pdet->blood_group = $patdets['bloodgroup'];
$pdet->gender = $patdets['gender'];
$pdet->save();
$pId[$key] = ($pdet->id > 0) ? $pdet->id : 0;
}
foreach($addressArray as $key => $addr){
$addrss = new Address;
$addrss->gps_lat = $addr['gps_lat'];
$addrss->gps_log = $addr['gps_long'];
$addrss->house_no = $addr['houseno'];
$addrss->zipcode = $addr['zip_code'];
$addrss->street = $addr['street'];
$addrss->chowk = $addr['chowk'];
$addrss->city = $addr['city'];
$addrss->patient_id = array_key_exists($key ,$pId) ? $pId[$key] : 0;
$addrss->save();
}

How to get the values of an array retrieved from database in codeigniter?

I have a code with me to get the values from the database in an array as shown below :
$uri = $this->uri->uri_to_assoc(4);
$student_id='S17AB21017';
$data['profile_data']=$this->studentsmodel->get_Cinprofile($student_id);
$pay = $this->instamojo->pay_request(
$amount = "6930" ,
$purpose = "Primary Colors Internationals" ,
$buyer_name = "Apple" ,
$email = "webteam6@marrs.in" ,
$phone = "7034805760" ,
$send_email = 'TRUE' ,
$send_sms = 'TRUE' ,
$repeated = 'FALSE'
);
I want to get the values from $data for $buyer_name, $email and $phone and pass them into pay_request(). The values for name, email and phone are obtained from the table students_to_cin. How can I get the values?
The code for get_Cinprofile() is shown below :
public function get_Cinprofile($student_id)
{
//echo $student_id;die;
$this->db->select('students_to_cin.*,schools.school_name,schools.school_address,schools.school_address1,schools.school_email,schools.school_phone,schools.school_principal_name,schools.school_medium,states.state_subdivision_name,class.class_key,category.categoryKey,period.period_name,countries.country_name,studentlist.stud_cin,studentlist.stud_syllabus,studentlist.stud_school_state,studentlist.stud_school_country,schools.school_pincode');
$this->db->from('students_to_cin');
$this->db->join('schools','schools.school_id=students_to_cin.school_id');
$this->db->join('states','states.state_subdivision_id=students_to_cin.state_id');
$this->db->join('class','students_to_cin.class_id = class.class_id'); /*class.class_key,*/
$this->db->join('category','students_to_cin.category_id = category.category_id');
$this->db->join('period','students_to_cin.period_id = period.period_id');
$this->db->join('countries','students_to_cin.country_id = countries.country_id');
$this->db->join('studentlist','students_to_cin.cin = studentlist.stud_cin');
$this->db->where('students_to_cin.student_id',$student_id);
$query = $this->db->get();//echo $this->db->last_query();exit;
$result = $query->row_array();
return $result;
}
Can anyone help me on this ?
You can use it like this:
$uri = $this->uri->uri_to_assoc(4);
$student_id='S17AB21017';
$data['profile_data']=$this->studentsmodel->get_Cinprofile($student_id);
$pay = $this->instamojo->pay_request(
$amount = $data['profile_data']['yourfieldname'] ,
$purpose = $data['profile_data']['yourfieldname'] ,
$buyer_name = $data['profile_data']['yourfieldname'] ,
$email = $data['profile_data']['yourfieldname'] ,
$phone = $data['profile_data']['yourfieldname'] ,
$send_email = $data['profile_data']['yourfieldname'] ,
$send_sms = $data['profile_data']['yourfieldname'] ,
$repeated = $data['profile_data']['yourfieldname']
);

Why does this Linq left outer join not work as expected?

I have two lists, both using the same class (Part). The first list is the "master" list (engineerParts), the second (partDefaults) is the list of attribute values for those parts.
The first list contains 263 items, the second 238. So the final list should contain 263 items. However, it doesn't - it contains 598!
var partsWithDefaults = from a in engineerParts
join b in partDefaults on
new { a.PartNumber, a.Revision }
equals
new { b.PartNumber, b.Revision } into j1
from j2 in j1.DefaultIfEmpty()
select new Part()
{
NewPart = isNewPart(a.PartNumber, PadRevision(a.Revision), concatenateRevision),
PartNumber = concatenateRevision == true ? string.Concat(a.PartNumber, PadRevision(a.Revision)) : a.PartNumber,
Revision = concatenateRevision == true ? "" : a.Revision,
ParentPart = a.ParentPart,
Description = a.Description,
BoMQty = a.BoMQty,
UoM = (j2 != null) ? j2.UoM : null,
ClassId = (j2 != null) ? j2.ClassId : null,
ShortDescription = (j2 != null) ? j2.ShortDescription : null,
ABCCode = (j2 != null) ? j2.ABCCode : null,
BuyerId = (j2 != null) ? j2.BuyerId : null,
PlannerId = (j2 != null) ? j2.PlannerId : null,
OrderPolicy = (j2 != null) ? j2.OrderPolicy : null,
FixedOrderQty = (j2 != null) ? j2.FixedOrderQty : null,
OrderPointQty = (j2 != null) ? j2.OrderPointQty : null,
OrderUpToLevel = (j2 != null) ? j2.OrderUpToLevel : null,
OrderQtyMin = (j2 != null) ? j2.OrderQtyMin : null,
OrderQtyMax = (j2 != null) ? j2.OrderQtyMax : null,
OrderQtyMultiple = (j2 != null) ? j2.OrderQtyMultiple : null,
ReplenishmentMethod = (j2 != null) ? j2.ReplenishmentMethod : null,
ItemShrinkageFactor = (j2 != null) ? j2.ItemShrinkageFactor : null,
PurchasingLeadTime = (j2 != null) ? j2.PurchasingLeadTime : null,
MFGFixedLeadTime = (j2 != null) ? j2.MFGFixedLeadTime : null,
PlanningTimeFence = (j2 != null) ? j2.PlanningTimeFence : null,
FulfillMethod = (j2 != null) ? j2.FulfillMethod : null,
ItemStatus = (j2 != null) ? j2.ItemStatus : null,
TreatAsEither = (j2 != null) ? j2.TreatAsEither : false,
AltItem1 = (j2 != null) ? j2.AltItem1 : null,
AltItem2 = (j2 != null) ? j2.AltItem2 : null,
AltItem3 = (j2 != null) ? j2.AltItem3 : null,
AltItem4 = (j2 != null) ? j2.AltItem4 : null,
AltItem5 = (j2 != null) ? j2.AltItem5 : null,
DesignAuthority = (j2 != null) ? j2.DesignAuthority : null,
Level = a.Level
};
return partsWithDefaults.ToList<Part>();
Turns out it was a data issue. The person that had sent me the data in Excel for the second list had left duplicates in it! A left outer join produces one result row per matching right-side row, so duplicate keys in the second list inflate the count.
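For illustration (sketched in Java rather than C#, to match the main question on this page; the data and names are made up), this is why duplicates on the right-hand side inflate a left outer join's row count - the join emits one row per matching right-side row, and the DefaultIfEmpty step keeps unmatched left rows at exactly one row each:

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class JoinDup {
    /** Row count of a left outer join of two key lists, mimicking
     *  "join ... into j1 from j2 in j1.DefaultIfEmpty()" semantics. */
    static int leftJoinCount(List<String> left, List<String> right) {
        // Count how many times each key appears on the right side.
        Map<String, Integer> rightCounts = new HashMap<>();
        for (String key : right) rightCounts.merge(key, 1, Integer::sum);
        int rows = 0;
        for (String key : left) {
            // An unmatched left row still yields one row (DefaultIfEmpty);
            // a matched one yields one row per matching right-side row.
            rows += Math.max(1, rightCounts.getOrDefault(key, 0));
        }
        return rows;
    }

    public static void main(String[] args) {
        List<String> left = List.of("A", "B", "C");
        List<String> rightNoDups = List.of("A", "B");
        List<String> rightWithDups = List.of("A", "A", "A", "B");
        System.out.println(leftJoinCount(left, rightNoDups));   // 3: one row per left item
        System.out.println(leftJoinCount(left, rightWithDups)); // 5: duplicates multiply rows
    }
}
```

So with a clean (duplicate-free) second list the result is exactly one row per master-list item, which matches the 263 the question expected.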
