Anylogic - agent location triggered by condition - location

I have an issue concerning the agent location in one of my Anylogic simulations. I want to set a condition that defines which path an agent will take in the visualization of my simulation.
In a delay block in the main agent I wrote
if(agent.previousStation==1){
path01;
}
else {
path21;
}
into the agent location field.
When building the model, AnyLogic presents me with these errors:
Description: Syntax error, insert "VariableDeclarators" to complete
LocalVariableDeclaration. Location: FVMMerkmale/shopfloor/wegzeit1 -
Delay
and
Description: Syntax error on token(s), misplaced construct(s).
Location: FVMMerkmale/shopfloor - Agent Type
writing "return" in front of the path does not help either and gives different errors:
Description: Syntax error on token(s), misplaced construct(s).
Location: FVMMerkmale/shopfloor - Agent Type
Description: path21 cannot be resolved to a variable. Location:
FVMMerkmale/shopfloor/wegzeit1 - Delay
Description: Void methods cannot return a value. Location:
FVMMerkmale/shopfloor/wegzeit1 - Delay
Description: agent cannot be resolved to a variable. Location:
FVMMerkmale/shopfloor/wegzeit1 - Delay
The path elements are in the main agent. Using the value editor to choose the correct path works.
According to the AnyLogic help, it is possible to bind the agent location to a condition:
Otherwise, if you want to set different nodes for agents here, you can
write a Java expression that will return different nodes depending on
some conditions.
https://help.anylogic.com/index.jsp?topic=%2Fcom.anylogic.help%2Fhtml%2Fagentbased%2FContinuous_Layouts.html
How do I write a condition that defines a path or node as the agent location?

This is the correct code: the compressed form of the if statement, written as an expression with the ternary ? and : operators (and without a semicolon):
agent.previousStation==1 ? path01 : path21
More info about these operators here:
http://www.cafeaulait.org/course/week2/43.html
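If more cases are needed later (for example a third path), the same idea can be moved into a small helper function in Main and called from the agent location field. A minimal sketch, assuming a Function element named choosePath is added to Main with return type Path and one argument a of the agent type used in the flowchart (the function name is made up; previousStation, path01 and path21 are taken from the question):

// Body of the hypothetical Function "choosePath" in Main (return type: Path).
// Returns the path the agent should be placed on, based on its previous station.
if (a.previousStation == 1) {
    return path01;
}
return path21;

The agent location field of the Delay block would then simply contain choosePath(agent).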

Related

Question about weird behavior referencing a YAML pipeline resource using a variable for the pipeline resource name

I am experiencing weird behavior with YAML variables, parameters, and Azure pipeline resource references. The following shows the original implementation that works compared to my new implementation with a single line change that fails.
Working Implementation
Template A (makes a call to template B):
- template: Templates\TemplateB.yml
  parameters:
    serviceBuildResourceName: resourceName
Template B (uses serviceBuildResourceName param to get pipeline run information):
$projectId = '$(resources.pipeline.${{ parameters.serviceBuildResourceName }}.projectID)'
$pipelineId = '$(resources.pipeline.${{ parameters.serviceBuildResourceName }}.PipelineID)'
Template B goes on to use the values in $projectId and $pipelineId (along with other values not listed here, since they are irrelevant) to successfully retrieve information about a pipeline run from the specific pipeline resource, serviceBuildResourceName. Note that all pipeline resources are correctly defined at the beginning of the YAML file for the pipeline. In the implementation above, everything works perfectly.
Failing Implementation
Template A (makes a call to template B):
- template: Templates\TemplateB.yml
  parameters:
    serviceBuildResourceName: $(ServiceBuildResourceName)
Template B (uses serviceBuildResourceName param to get pipeline run information):
$projectId = '$(resources.pipeline.${{ parameters.serviceBuildResourceName }}.projectID)'
$pipelineId = '$(resources.pipeline.${{ parameters.serviceBuildResourceName }}.PipelineID)'
Note that the only difference is the following: instead of passing the hard-coded string into the serviceBuildResourceName parameter, I pass in a variable, which has the same value as before, resourceName. The variable is defined in an earlier template as such:
- name: ServiceBuildResourceName
  value: resourceName
I feel it should still work the same, but I now get the following error in my pipeline run:
WARNING: 2023-02-12 15:52:29.5071 Response body: {"$id":"1","innerException":null,"message":"The value is not an integer.
$(resources.pipeline.resourceName.PipelineID)
I know that the variable is being correctly populated since the error message above contains "resourceName" in resources.pipeline.resourceName.PipelineID, as it should.
However, for reasons unknown to me, it now throws an error. It seems like it doesn't recognize the pipeline resource, and instead treats it as a plain string.
Any help or insight here would be greatly appreciated, thanks!
As far as I can tell, this is because of how predefined variables work in YAML. Since resources.pipeline... is a predefined variable, it gets resolved at compile time, so you can't build its name from a variable that is only defined at run time, as I am doing here. Instead of being resolved as a predefined variable, it gets treated as a plain string at run time.
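If the resource name really has to come from a variable, it needs to be one that is known at template expansion time. A minimal sketch of Template A, assuming ServiceBuildResourceName is declared in the YAML itself (not set at queue time or by a script), so a compile-time template expression can read it:

# Sketch: a template expression ${{ variables.* }} is expanded at compile time,
# unlike the $(...) macro syntax, which is only resolved at run time.
- template: Templates\TemplateB.yml
  parameters:
    serviceBuildResourceName: ${{ variables.ServiceBuildResourceName }}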

How to optionally apply environment configuration?

I want to optionally apply a VPC configuration based on whether an environment variable is set.
Something like this:
custom:
  vpc:
    securityGroupIds:
      - ...
    subnetIds:
      - ...
functions:
  main:
    ...
    vpc: !If
      - ${env:USE_VPC}
      - ${self:custom.vpc}
      - ~
I'd also like to do similar for alerts (optionally add emails to receive alerts) and other fields too.
How can this be done?
I've tried the above configuration and a variety of others, but I just receive various errors.
For example:
Configuration error:
at 'functions.main.vpc': must have required property 'securityGroupIds'
at 'functions.main.vpc': must have required property 'subnetIds'
at 'functions.main.vpc': unrecognized property 'Fn::If'
Currently, the best way to achieve such behavior is to use JS/TS-based configuration instead of YAML. With TS/JS, you get the full power of a programming language to shape your configuration however you want, including conditional checks that exclude certain parts of the configuration. It's not documented too well, but you can use this as a starting point: https://github.com/serverless/examples/tree/v3/legacy/aws-nodejs-typescript
In general, you can do whatever you want, as long as you export a valid object (or a promise that resolves to a valid object) with serverless configuration.
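For the VPC case above, a minimal serverless.ts sketch along those lines could look like this (the AWS type comes from the linked template's @serverless/typescript package; the service name, handler path and VPC IDs are placeholders):

// serverless.ts -- sketch only; IDs and paths are placeholders.
import type { AWS } from '@serverless/typescript';

// Decide at config-build time whether the VPC settings should be attached.
const useVpc = process.env.USE_VPC === 'true';

const vpc = {
  securityGroupIds: ['sg-00000000'],
  subnetIds: ['subnet-00000000'],
};

const serverlessConfiguration: AWS = {
  service: 'my-service',
  frameworkVersion: '3',
  provider: { name: 'aws', runtime: 'nodejs18.x' },
  functions: {
    main: {
      handler: 'src/handler.main',
      // Spread the VPC block in only when USE_VPC is set.
      ...(useVpc ? { vpc } : {}),
    },
  },
};

module.exports = serverlessConfiguration;

The same pattern (a plain conditional or spread in JS/TS) works for alert emails or any other optional block.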

CloudFormation yaml - How to force number type?

I'm trying to create an ECS task definition as part of a CloudFormation stack.
My task definition so far looks like this...
TaskDefinition:
  Type: AWS::ECS::TaskDefinition
  Properties:
    RequiresCompatibilities:
      - EC2
    ExecutionRoleArn: !Ref MyTaskRole
    ContainerDefinitions:
      - Name: !Ref ServiceName
        Image: amazon/amazon-ecs-sample
        PortMappings:
          - ContainerPort: 3000
            HostPort: 0
            Protocol: tcp
        MemoryReservation: 128
When I try to run this, I get the following error...
#/ContainerDefinitions/0/MemoryReservation: expected type: Number, found: String
So it seems that CloudFormation is converting 128 to a string, and then the stack fails.
What is the correct way to define this value so that it remains a number?
It turned out that the error being reported by CloudFormation actually had nothing to do with the failure. The code above was perfectly fine.
In my case the problem was with the way I'd defined the logging section which appeared later in the template.
The takeaway from this, is that CloudFormation is very confusing to debug, and if you receive an error like this, don't assume it is what's actually causing the stack to fail.
To find the actual problem, I had to first remove the properties which were causing the type conversion error, MemoryReservation and PortMappings, and then it showed an error about the way I'd defined my logging section. After fixing that fault, I was able to re-add the other properties, and it worked fine.
I suspect now that because my logging section was incorrect, the whole ContainerDefinitions perhaps wasn't being parsed correctly, potentially causing the misleading type mismatch error.
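For reference, a well-formed logging section inside a container definition could look like the sketch below (my broken original isn't shown here; the awslogs driver and the LogGroup resource are assumptions):

# Sketch only: assumes an awslogs driver and a LogGroup resource defined elsewhere.
LogConfiguration:
  LogDriver: awslogs
  Options:
    awslogs-group: !Ref LogGroup
    awslogs-region: !Ref AWS::Region
    awslogs-stream-prefix: !Ref ServiceName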

Websphere JYTHON Scripting - Get Active Spec ID

Problem:
I am attempting to use the Jython command below, and I cannot retrieve the ID of my activation specification, which is defined at a node-server level in WebSphere. I believe it's a syntax issue, but I'm not sure what.
Code:
AdminConfig.getid('/Cell:mycell/Node:mynode/Server:myserver/J2CActivationSpec:myActiveSpecName/')
Problem Notes:
I do not get an invalid object error, so I believe I have the syntax right, but it just cannot find the resource even though it exists.
I am using the AdminConfig.getid() as a way to check if the resource already exists in order to do a modify or a create.
If I use the following code: AdminConfig.getid('/J2CActivationSpec:myActiveSpecName/') it will find it, but not if I use the more specific path listed above.
Reference Material:
IBM Documentation
Containment paths are always a little tricky. In my (limited) experience, even if you can trace the path by AdminConfig.parents, you may not always be able to use getid.
Are you restricted to using getid? If not, here are some alternatives that will get you an ActivationSpec at the /Cell/Node/Server level:
Querying using AdminConfig.list
This approach will list the activation specifications at the specified scope (in our case, the server) and grab the one whose name attribute equals 'myActiveSpecName'.
# Scope the search to the server, then scan its activation specs by name.
server = AdminConfig.getid('/Cell:mycell/Node:mynode/Server:myserver')
activationSpec = ''
for spec in AdminConfig.list('J2CActivationSpec', server).splitlines():
    if AdminConfig.showAttribute(spec, 'name') == 'myActiveSpecName':
        activationSpec = spec
        print 'found it :)'
Using Wildcards
This approach also uses AdminConfig.list, but with a pattern to narrow down your list. If you know your activation spec's configuration begins with myActiveSpecName, then you can do the following:
activationSpec = AdminConfig.list('J2CActivationSpec', 'myActiveSpecName*')
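Either way, once activationSpec holds the configuration ID (or stays an empty string), the modify-or-create check mentioned in the question could look roughly like this; the attribute values and the parent resource adapter used for the create are illustrative only:

# Sketch only: attribute values and the J2CResourceAdapter parent are illustrative.
if activationSpec:
    AdminConfig.modify(activationSpec, [['jndiName', 'eis/myActiveSpecName']])
else:
    # A new J2CActivationSpec has to be created under its owning resource adapter.
    ra = AdminConfig.getid('/Cell:mycell/Node:mynode/Server:myserver/J2CResourceAdapter:SIB JMS Resource Adapter/')
    AdminConfig.create('J2CActivationSpec', ra, [['name', 'myActiveSpecName'], ['jndiName', 'eis/myActiveSpecName']])
AdminConfig.save()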

Informatica error =[ERROR('transformation error')]

I am getting the following Informatica error:
Note: Output column [AGENT_DISPOSTION_CODE] has no default value. Row will be skipped if transformation errors are encountered
MAPPING> DBG_21056 column=[PHONE_NUMBER], defaultvalue=[ERROR('transformation error')]
How can I fix it?
It's not an error, it's only information that you have a port with a default value set to ERROR('transformation error'), so the Integration Service will skip NULL values with an ERROR function.
The Designer inserts this expression automatically when you add a new output port; you can change it. Edit the expression, find the port on the Ports tab, and check the Default value field at the bottom.
The ERROR function causes the Integration Service to skip a row and issue an error message, which you define.
When running a session in Verbose Data mode, if there is no default value specified for output ports in the mapping, PowerCenter is designed to show these warning messages in the session log.
During column initialization, PowerCenter evaluates the default value specified for each output port in the mapping and displays the corresponding message. The evaluation code path is the same as the one used to evaluate any other expression later on during data transformation.
Example
If you specify SIN(1.415) as the default value for an output port, the evaluation of SIN(1.415) executes successfully. Upon a successful evaluation, the following message will be displayed:
MAPPING> DBG_21364 Note: Default value [SIN(1.4)] of output column [output1] will be used if transformation errors are encountered
However, if the default value is error('transformation error'), the following error message will be displayed during evaluation, just like for any real transformation error:
MAPPING> TE_7007 Transformation Evaluation Error [<> [ERROR]: transformation error... nl:ERROR(u:'transformation error')]; current row skipped...
Immediately after the evaluation, the following message will be displayed in the session log:
MAPPING> DBG_21367 Note: Output column [NUM38_37] has no default value. Row will be skipped if transformation errors are encountered
