We have a requirement to let our users create their own workflows. Those workflows can have simple yes/no branching as well as waiting for a signal from an external event. This wouldn't be such a problem if we had a well-established workflow definition; however, since the workflows can be dynamic, this poses a much trickier problem.
Temporal Workflows are code that directly implements your business logic.
For use cases where hardcoding the business logic is not an option, an interpreter of an external workflow definition language should be written. Such a language is frequently called a DSL (domain-specific language), as these are most useful when designed for a specific domain. DSLs are frequently YAML/JSON/XML based; sometimes the definition is just data in DB tables.
Here is how I would structure the workflow code to support custom DSL:
An activity that receives the current workflow definition ID and state and returns a list of operations to execute. This activity applies the current state (which includes the results of the most recently executed operations) to the appropriate DSL instance. The result is the set of next operations to execute. Operations are DSL specific, but the most common ones are: execute an activity, wait for a specific signal, sleep for some time, and complete or fail the workflow.
A workflow that implements a loop that calls the above activity and executes requested operations until the workflow completion operation is requested.
Here is a sample code for a trivial DSL that specifies a sequence of activities to execute:
@ActivityInterface
public interface Interpreter {
  String getNextStep(String workflowType, String lastActivity);
}

public class SequenceInterpreter implements Interpreter {

  // dslWorkflowType -> (activityType -> nextActivity)
  private final Map<String, Map<String, String>> definitions;

  public SequenceInterpreter(Map<String, Map<String, String>> definitions) {
    this.definitions = definitions;
  }

  @Override
  public String getNextStep(String workflowType, String lastActivity) {
    Map<String, String> stateTransitions = definitions.get(workflowType);
    return stateTransitions.get(lastActivity);
  }
}
@WorkflowInterface
public interface InterpreterWorkflow {

  @WorkflowMethod
  String execute(String type, String input);

  @QueryMethod
  String getCurrentActivity();
}
public class InterpreterWorkflowImpl implements InterpreterWorkflow {

  private final Interpreter interpreter = Workflow.newActivityStub(Interpreter.class);
  private final ActivityStub activities =
      Workflow.newUntypedActivityStub(
          new ActivityOptions.Builder().setScheduleToCloseTimeout(Duration.ofMinutes(10)).build());

  private String currentActivity = "init";
  private String lastActivityResult;

  @Override
  public String execute(String workflowType, String input) {
    // Fetch the next step before executing, so we never invoke an activity with a null name
    // once the interpreter signals completion by returning null.
    currentActivity = interpreter.getNextStep(workflowType, currentActivity);
    while (currentActivity != null) {
      lastActivityResult = activities.execute(currentActivity, String.class, lastActivityResult);
      currentActivity = interpreter.getNextStep(workflowType, currentActivity);
    }
    return lastActivityResult;
  }

  @Override
  public String getCurrentActivity() {
    return currentActivity;
  }
}
Obviously, a real-life interpreter activity is going to receive a more complex state object as a parameter and return a structure that potentially contains a list of multiple command types.
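To make the sequence DSL concrete, here is a sketch in plain Java (no Temporal dependency; the workflow type and activity names are invented for illustration) of how the definitions map for the SequenceInterpreter above could be populated and walked:

```java
import java.util.HashMap;
import java.util.Map;

public class SequenceInterpreterDemo {

  // Same lookup logic as SequenceInterpreter.getNextStep above:
  // dslWorkflowType -> (activityType -> nextActivity)
  static String nextStep(Map<String, Map<String, String>> definitions,
                         String workflowType, String lastActivity) {
    Map<String, String> stateTransitions = definitions.get(workflowType);
    // null means: no further step, the workflow loop should complete
    return stateTransitions.get(lastActivity);
  }

  public static void main(String[] args) {
    Map<String, String> transitions = new HashMap<>();
    transitions.put("init", "validateOrder");       // first step after the initial state
    transitions.put("validateOrder", "chargeCard");
    transitions.put("chargeCard", "shipOrder");     // "shipOrder" has no entry, so the sequence ends

    Map<String, Map<String, String>> definitions = new HashMap<>();
    definitions.put("orderWorkflow", transitions);

    // The workflow loop above would call getNextStep exactly like this:
    String current = "init";
    while ((current = nextStep(definitions, "orderWorkflow", current)) != null) {
      System.out.println(current);
    }
  }
}
```

Swapping the map for one loaded from YAML/JSON or DB tables changes nothing in the interpreter itself, which is the point of the design.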
I'm trying to @Test a Service class, but there are several get() methods that I don't know how to test. I would need to know how to collect the necessary data, or at least how to test the rest of the methods of the TokenHelper class.
This is the Session class:
public class SessionData {
  public static final String KEY = "session_data";

  private Integer id;
  private String email;
  private String fullName;
  private List<Role> role;
  private Boolean tempSession;
  private int permissionsMask = 0;
  private String avatar;

  public boolean hasAnyRole(Role... roles) {
    for (Role r : roles) {
      if (this.role.contains(r)) {
        return true;
      }
    }
    return false;
  }
}
This is the TokenHelper class:
public class TokenHelper {

  public String generate(SessionData tokenData, long expirationInHours) {
    return Jwts.builder()
        .claim(SessionData.KEY, tokenData)
        .setIssuedAt(Date.from(Instant.now()))
        .setExpiration(Date.from(Instant.now().plus(expirationInHours, ChronoUnit.HOURS)))
        .signWith(SignatureAlgorithm.HS256, TextCodec.BASE64.encode(secret))
        .compact();
  }

  public UserGoogle getTokenDataFromGoogleToken(String token) throws InvalidTokenException {
    try {
      int i = token.lastIndexOf('.');
      String withoutSignature = token.substring(0, i + 1);
      Claims claims = Jwts.parser().parseClaimsJwt(withoutSignature).getBody();
      return UserGoogle.builder()
          .email(claims.get(UserGoogle.KEY_EMAIL).toString())
          .firstName(claims.get(UserGoogle.KEY_FIST_NAME).toString())
          .lastName(claims.get(UserGoogle.KEY_LAST_NAME).toString())
          .build();
    } catch (ExpiredJwtException | MalformedJwtException | SignatureException | IllegalArgumentException ex) {
      log.error(ERROR_TOKEN, ex.toString());
      throw new InvalidTokenException();
    }
  }
}
This is my @Test:
@Test
void googleTokenHelperTest() throws InvalidTokenException {
  TokenHelper obj1 = BeanBuilder.builder(TokenHelper.class).createRandomBean();
  String mailGoogle = "google@prueba.com";
  String firstGoogle = "Nombre";
  String lastGoogle = "Apellido";
  Map<String, Object> pruebaGoogle = new HashMap<String, Object>();
  List<String> info = new ArrayList<String>();
  info.add(firstGoogle);
  info.add(lastGoogle);
  pruebaGoogle.put(mailGoogle, info);
  UserGoogle expectedUser = UserGoogle.builder().email(mailGoogle).firstName(firstGoogle).lastName(lastGoogle).build();
  String myTestToken = pruebaGoogle.toString();
  UserGoogle actualUser = obj1.getTokenDataFromGoogleToken(myTestToken);
  assertEquals(actualUser, expectedUser);
}
I have created some variables to form a user, but I need to build them into a map to generate the token with the help of the generate() method. I need to know how to join those three variables and pass them to the generate() method, and then pass the resulting token to the Google method to generate the new user.
Edit: after clarification by the OP, the topic of the question changed.
Your problem arises from a flawed object-oriented design. For example, your SessionData implicitly holds a User by having String fields relevant to a User among fields relevant to a Session. This overlap makes it hard to test your code, because in order to test your token generation for some User data, you need a Session object, which introduces additional data and dependencies.
That is one major reason why it's difficult for you to get a token from your three input values.
You want to test getTokenDataFromGoogleToken(String token). The first thing you need to know is what a valid token string will look like.
Next, you will need to mock your Claims object in one of two ways:
Use Mockito to mock it so that it returns the necessary Strings when claims.get() is called.
Use Mockito to mock Jwts.parser().parseClaimsJwt(withoutSignature).getBody() so that it returns a Claims object that serves your testing purpose.
Since the signature of your token is irrelevant to the tested method, just focus on the substring before the final '.' separator; the part after the '.' in your token string can be any string you like.
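To obtain a concrete myTestToken, note that a JWT without a signature is just base64url(headerJson) + "." + base64url(payloadJson) + "." with nothing after the final dot (which is the shape parseClaimsJwt accepts). A sketch using only the JDK; the claim names here are placeholders, so substitute the keys your UserGoogle constants actually use:

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class TestTokenBuilder {

  // Builds an unsigned JWT: base64url(header) + "." + base64url(payload) + "."
  public static String unsignedToken(String payloadJson) {
    Base64.Encoder enc = Base64.getUrlEncoder().withoutPadding();
    String header = enc.encodeToString("{\"alg\":\"none\"}".getBytes(StandardCharsets.UTF_8));
    String payload = enc.encodeToString(payloadJson.getBytes(StandardCharsets.UTF_8));
    return header + "." + payload + "."; // trailing dot, empty signature part
  }

  public static void main(String[] args) {
    // Claim names are illustrative; match them to UserGoogle.KEY_EMAIL etc.
    String token = unsignedToken(
        "{\"email\":\"google@prueba.com\",\"given_name\":\"Nombre\",\"family_name\":\"Apellido\"}");
    System.out.println(token);
  }
}
```

Feeding such a token to getTokenDataFromGoogleToken avoids any Mockito setup at all, since the method can then parse a real (unsigned) token.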
If you want to test generate(SessionData, long) you need to supply a SessionData Object and a long value. After that you assertEquals the String as necessary. However, currently your code does not imply that your get is in any way related to your generate. This is, because you just handle Strings. A better design would be to have e.g. a User, Session and Token-classes, which would also make it easier to test your application and units.
A test for your getToken method looks like the following; you just have to replace ... with your test data.
@Test
void givenGoogleToken_whenTokenHelperGeneratesUserFromToken_UserOk() throws InvalidTokenException {
  TokenHelper helper = new TokenHelper();
  String myTestToken = ...; // build a token string as described above
  UserGoogle expectedUser = ...; // the UserGoogle object you expect to obtain from your TokenHelper class
  UserGoogle actualUser = helper.getTokenDataFromGoogleToken(myTestToken);
  assertEquals(expectedUser, actualUser);
}
Tests generally follow a given-when-then structure: given some precondition, when some action is performed, then some result is returned or behaviour is observed. When implemented very formally, this is called BDD (Behaviour-Driven Development), but even when not practicing BDD, tests still generally follow that pattern.
In this case, I would suggest the tests be something like:
Given some data exists in the service threaddata
when I call get
then I get back the expected value
In the scenario above, the given part probably consists of setting some data on the service, the when is invoking get and the then is asserting that it's the expected value.
And I'd encourage you to consider the various scenarios. E.g. what happens if the data isn't there? What happens if it's not the class the consumer asks for? Is the map case-sensitive? Etc.
Code sample for the initial instance (I'm not sure what BeanBuilder is here, so I've omitted it):
@Test
public void testCurrentThreadServiceReturnsExpectedValue() {
  final String key = "TEST KEY";
  final String value = "TEST VALUE";

  // Initialize system under test
  CurrentThreadService sut = new CurrentThreadService();

  // Given - precondition
  sut.set(key, value);

  // When - retrieve value
  String observedValue = sut.get(key, String.class);

  // Then - value is as expected
  assertEquals(value, observedValue);
}
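For the "data isn't there" and "wrong class" scenarios mentioned above, a sketch of what those tests could look like. Since the real CurrentThreadService isn't shown in the question, the class here is a map-backed stand-in used purely for illustration:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the service under test; the real CurrentThreadService isn't shown in the question.
class CurrentThreadService {
  private final Map<String, Object> data = new HashMap<>();

  public void set(String key, Object value) {
    data.put(key, value);
  }

  public <T> T get(String key, Class<T> type) {
    Object value = data.get(key);
    // One possible contract: null when the key is absent or the stored type doesn't match
    return type.isInstance(value) ? type.cast(value) : null;
  }
}

public class CurrentThreadServiceEdgeCases {
  public static void main(String[] args) {
    CurrentThreadService sut = new CurrentThreadService();

    // Given no data was set, when we get, then we expect null (or an exception, per your contract)
    assert sut.get("MISSING", String.class) == null;

    // Given a value of another type, when we ask for a String, then null again
    sut.set("KEY", 42);
    assert sut.get("KEY", String.class) == null;
  }
}
```

Whether absence should yield null, an Optional, or an exception is a design decision; the point is that each decision deserves its own given-when-then test.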
EDIT TO ADD: It's always great to see someone get into unit testing, so if you have any follow-ups, please ask; I'm happy to help. The confidence one derives from well-tested code is a great thing for software devs.
I'm new to GraphQL, and I'm currently implementing a GraphQL API into an established Java code base using GraphQL-SPQR. I'm running into a couple of issues when it comes to extracting data from hierarchical classes.
The issues that I am running into are as follows.
Firstly, I don't know if there is an easy way to get all the data associated with a returned node. If there is, this would be most useful for my more complex classes.
Secondly, when a method returns an abstract class, I only seem able to request the variables of the abstract class. I'm sure this should be possible; I'm just hitting my head against a wall.
As a simple example:
public abstract class Animal {
  private String name;
  private int age;

  // Constructor

  @GraphQLQuery(name = "name")
  public String getName() {
    return name;
  }

  // Age getter
}

public class Dog extends Animal {
  private String favouriteFood;

  // Constructor

  @GraphQLQuery(name = "favouriteFood")
  public String getFavouriteFood() {
    return favouriteFood;
  }
}

public class Database {
  @GraphQLQuery(name = "getanimal")
  public Animal getAnimal(@GraphQLArgument(name = "animalname") String animalname) {
    return database.get(animalname);
  }
}
So for my first question, what I am currently querying is:
"{getanimal(animalname: \"Daisy\") {name age}}"
This works as expected. If, however, the class had 10 variables, I would like to be able to write the equivalent of the following without having to look them up:
"{node(name: \"Daisy\") {ALL}}"
Is this possible?
In terms of my second question, the following query throws an error ('Field 'favouriteFood' in type 'Animal' is undefined'):
"{getanimal(animalname: \"Bones\") {name age favouriteFood}}"
Likewise (reading the Inline Fragments section of https://graphql.org/learn/queries/),
"{getanimal(animalname: \"Bones\") {name age ... on Dog{favouriteFood}}}"
throws the error 'Unknown type Dog'.
This is annoying, as I have a number of subclasses which could be returned and may require handling in different fashions. I think I can understand why this is occurring, as GraphQL has no knowledge of what the true class is, only the superclass I have returned. However, I'm wondering if there is a way to fix this.
Ultimately, while I can get past both these issues by simply serialising all the data to JSON and sending it back, that rather defeats the point of GraphQL, and I would rather find an alternative solution.
Thank you for any response.
Apologies if these are basic questions.
Answering my own question to help anyone else who has this issue.
The abstract class needs to have @GraphQLInterface included, as shown below:
@GraphQLInterface(name = "Animal", implementationAutoDiscovery = true)
public abstract class Animal {
  private String name;
  private int age;

  // Constructor

  @GraphQLQuery(name = "name")
  public String getName() {
    return name;
  }

  // Age getter
}
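With the interface registered this way (and the mapping strategy below in place), the inline-fragment query from the question should then validate. Formatted for readability, and assuming the query and argument names from the example above:

```graphql
{
  getanimal(animalname: "Bones") {
    name
    age
    ... on Dog {
      favouriteFood
    }
  }
}
```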
The following code was found after much searching and was created by the creator of SPQR. Effectively, when setting up your schema, you need to declare an interface mapping strategy. The code below can be copied wholesale, with only the nodeQuery variable being replaced with the service containing your @GraphQLQuery and @GraphQLMutation methods.
final GraphQLSchema schema = new GraphQLSchemaGenerator()
    .withInterfaceMappingStrategy(new InterfaceMappingStrategy() {
      @Override
      public boolean supports(final AnnotatedType interfase) {
        return interfase.isAnnotationPresent(GraphQLInterface.class);
      }

      @Override
      public Collection<AnnotatedType> getInterfaces(final AnnotatedType type) {
        Class clazz = ClassUtils.getRawType(type.getType());
        final Set<AnnotatedType> interfaces = new HashSet<>();
        do {
          final AnnotatedType currentType = GenericTypeReflector.getExactSuperType(type, clazz);
          if (supports(currentType)) {
            interfaces.add(currentType);
          }
          Arrays.stream(clazz.getInterfaces())
              .map(inter -> GenericTypeReflector.getExactSuperType(type, inter))
              .filter(this::supports)
              .forEach(interfaces::add);
        } while ((clazz = clazz.getSuperclass()) != Object.class && clazz != null);
        return interfaces;
      }
    })
    .withOperationsFromSingleton(nodeQuery) // register the service
    .generate(); // done ;)
graphQL = new GraphQL.Builder(schema).build();
As this solution took some hunting, I'm going to start a blog soon with the other solutions I've stumbled on.
With regards to having a query that just returns all results: this is not possible in GraphQL. One workaround I might write is to have an endpoint that returns JSON of the entire object along with the name of the object; then I can just use ObjectMapper to convert it back.
I hope this helps other people. I'm still looking into an answer for my first question and will update this post when I find one.
My question is about the best way to inhibit an endpoint that is automatically provided by Olingo.
I am playing with a simple app based on Spring Boot and using Apache Olingo. In short, this is my servlet registration:
@Configuration
public class CxfServletUtil {

  @Bean
  public ServletRegistrationBean getODataServletRegistrationBean() {
    ServletRegistrationBean odataServletRegistrationBean =
        new ServletRegistrationBean(new CXFNonSpringJaxrsServlet(), "/user.svc/*");
    Map<String, String> initParameters = new HashMap<String, String>();
    initParameters.put("javax.ws.rs.Application", "org.apache.olingo.odata2.core.rest.app.ODataApplication");
    initParameters.put("org.apache.olingo.odata2.service.factory", "com.olingotest.core.CustomODataJPAServiceFactory");
    odataServletRegistrationBean.setInitParameters(initParameters);
    return odataServletRegistrationBean;
  } ...
where my ODataJPAServiceFactory is
@Component
public class CustomODataJPAServiceFactory extends ODataJPAServiceFactory implements ApplicationContextAware {

  private static ApplicationContext context;
  private static final String PERSISTENCE_UNIT_NAME = "myPersistenceUnit";
  private static final String ENTITY_MANAGER_FACTORY_ID = "entityManagerFactory";

  @Override
  public ODataJPAContext initializeODataJPAContext() throws ODataJPARuntimeException {
    ODataJPAContext oDataJPAContext = this.getODataJPAContext();
    try {
      EntityManagerFactory emf = (EntityManagerFactory) context.getBean(ENTITY_MANAGER_FACTORY_ID);
      oDataJPAContext.setEntityManagerFactory(emf);
      oDataJPAContext.setPersistenceUnitName(PERSISTENCE_UNIT_NAME);
      return oDataJPAContext;
    } catch (Exception e) {
      throw new RuntimeException(e);
    }
  }
...
My entity is quite simple ...
@Entity
public class User {

  @Id
  private String id;

  @Basic
  private String firstName;

  @Basic
  private String lastName;
  ....
Olingo is doing its job perfectly and it helps me with the generation of all the endpoints around CRUD operations for my entity.
My question is: how can I "inhibit" some of them? Let's say, for example, that I don't want to enable deleting my entity.
I could try to use a Filter, but this seems a bit harsh. Are there any other, better ways to solve my problem?
Thanks for the help.
As you have said, you could use a filter, but then you are tightly coupled to the URI schema used by Olingo. Also, things will become complicated when you have multiple related entity sets (because you can navigate from one to the other, making the URIs more complex).
There are two things that you can do, depending on what you want to achieve:
If you want to have fine-grained control over which operations are allowed, you can create a wrapper for the ODataSingleProcessor and throw ODataExceptions where you want to disallow an operation. You can either always throw exceptions (i.e. completely disable an operation type) or use the URI info parameters to obtain the target entity set and decide whether to throw an exception or call the standard single processor. I have used this approach to create a read-only OData service here (basically, I just created an ODataSingleProcessor which delegates some calls to the standard one, and overrode a method in the service factory to wrap the standard single processor in my wrapper).
If you want to completely un-expose / ignore a given entity or some properties, then you can use a JPA-EDM mapping model and exclude the desired components. You can find an example of such a mapping here: github. The mapping model is just an XML file which maps the JPA entities/properties to EDM entity types/properties. In order for Olingo to pick it up, pass the name of the file to the setJPAEdmMappingModel method of the ODataJPAContext in your initialize method.
The client periodically calls an async method (long polling), passing it a value of a stock symbol, which the server uses to query the database and return the object back to the client.
I am using Spring's DeferredResult class, however I'm not familiar with how it works. Notice how I am using the symbol property (sent from client) to query the database for new data (see below).
Perhaps there is a better approach for long polling with Spring?
How do I pass the symbol property from the method deferredResult() to processQueues()?
private final Queue<DeferredResult<String>> responseBodyQueue = new ConcurrentLinkedQueue<>();

@RequestMapping("/poll/{symbol}")
public @ResponseBody DeferredResult<String> deferredResult(@PathVariable("symbol") String symbol) {
  DeferredResult<String> result = new DeferredResult<String>();
  this.responseBodyQueue.add(result);
  return result;
}

@Scheduled(fixedRate=2000)
public void processQueues() {
  for (DeferredResult<String> result : this.responseBodyQueue) {
    Quote quote = jpaStockQuoteRepository.findStock(symbol); // 'symbol' is not in scope here
    result.setResult(quote);
    this.responseBodyQueue.remove(result);
  }
}
From the DeferredResult documentation in Spring 4.1.7:
Subclasses can extend this class to easily associate additional data or behavior with the DeferredResult. For example, one might want to associate the user used to create the DeferredResult by extending the class and adding an additional property for the user. In this way, the user could easily be accessed later without the need to use a data structure to do the mapping.
You can extend DeferredResult and save the symbol parameter as a class field.
static class DeferredQuote extends DeferredResult<Quote> {
  private final String symbol;

  public DeferredQuote(String symbol) {
    this.symbol = symbol;
  }
}

@RequestMapping("/poll/{symbol}")
public @ResponseBody DeferredQuote deferredResult(@PathVariable("symbol") String symbol) {
  DeferredQuote result = new DeferredQuote(symbol);
  responseBodyQueue.add(result);
  return result;
}

@Scheduled(fixedRate = 2000)
public void processQueues() {
  for (DeferredQuote result : responseBodyQueue) {
    Quote quote = jpaStockQuoteRepository.findStock(result.symbol);
    result.setResult(quote);
    responseBodyQueue.remove(result);
  }
}
We have been doing TDD for quite a while, and we are facing some concerns when we refactor. As we try to respect the SRP (Single Responsibility Principle) as much as we can, we have created a lot of composed objects that our classes use to deal with common responsibilities (such as validation, logging, etc.).
Let's take a very simple example:
public class Executioner
{
    public ILogger Logger { get; set; }

    public void DoSomething()
    {
        Logger.DoLog("Starting doing something");
        Thread.Sleep(1000);
        Logger.DoLog("Something was done!");
    }
}

public interface ILogger
{
    void DoLog(string message);
}
As we use a mocking framework, the kind of test that we would write for this situation would be something like:
[TestClass]
public class ExecutionerTests
{
    [TestMethod]
    public void Test_DoSomething()
    {
        var objectUnderTests = new Executioner();

        #region Mock setup
        var loggerMock = new Mock<ILogger>(MockBehavior.Strict);
        loggerMock.Setup(l => l.DoLog("Starting doing something"));
        loggerMock.Setup(l => l.DoLog("Something was done!"));
        objectUnderTests.Logger = loggerMock.Object;
        #endregion

        objectUnderTests.DoSomething();

        loggerMock.VerifyAll();
    }
}
As you can see, the test is clearly aware of the method implementation that we are testing. I have to admit that this example is too simple, but we sometimes have compositions that cover responsibilities that don't add any value to a test.
Let's add some complexity to this example:
public interface ILogger
{
    void DoLog(LoggingMessage message);
}

public interface IMapper
{
    TTarget DoMap<TSource, TTarget>(TSource source);
}

public class LoggingMessage
{
    public string Message { get; set; }
}

public class Executioner
{
    public ILogger Logger { get; set; }
    public IMapper Mapper { get; set; }

    public void DoSomething()
    {
        DoLog("Starting doing something");
        Thread.Sleep(1000);
        DoLog("Something was done!");
    }

    private void DoLog(string message)
    {
        var loggingMessage = Mapper.DoMap<string, LoggingMessage>(message);
        Logger.DoLog(loggingMessage);
    }
}
OK, this is just an example. In reality I would include the Mapper stuff within the implementation of my Logger and keep a DoLog(string message) method in my interface, but it serves to demonstrate my concerns.
The corresponding test leads us to:
[TestClass]
public class ExecutionerTests
{
    [TestMethod]
    public void Test_DoSomething()
    {
        var objectUnderTests = new Executioner();

        #region Mock setup
        var loggerMock = new Mock<ILogger>(MockBehavior.Strict);
        var mapperMock = new Mock<IMapper>(MockBehavior.Strict);
        var mockedMessage = new LoggingMessage();
        mapperMock.Setup(m => m.DoMap<string, LoggingMessage>("Starting doing something")).Returns(mockedMessage);
        mapperMock.Setup(m => m.DoMap<string, LoggingMessage>("Something was done!")).Returns(mockedMessage);
        loggerMock.Setup(l => l.DoLog(mockedMessage));
        objectUnderTests.Logger = loggerMock.Object;
        objectUnderTests.Mapper = mapperMock.Object;
        #endregion

        objectUnderTests.DoSomething();

        mapperMock.VerifyAll();
        loggerMock.Verify(l => l.DoLog(mockedMessage), Times.Exactly(2));
        loggerMock.VerifyAll();
    }
}
Wow... imagine we were to use another way to translate our entities; I would have to change every test that has some method using the mapper service.
Anyway, we really feel some pain when we do major refactorings, as we need to change a bunch of tests.
I'd love to discuss about this kind of problem. Am I missing something? Are we testing too much stuff?
Tips:
Specify exactly what should happen and no more.
In your fabricated example,
Test E.DoSomething asks Mapper to map string1 and string2 (Stub out Logger - irrelevant)
Test E.DoSomething tells Logger to log mapped strings (Stub/Fake out Mapper to return message1 and message2)
Tell don't ask
As you've hinted yourself, if this were a real example, I'd expect Logger to handle the translation internally via a hashtable or by using a Mapper. Then I'd have a simple test for E.DoSomething:
Test E.DoSomething tells Logger to log string1 and string2
The tests for Logger would ensure L.Log asks mapper to translate s1 and log the result
Ask methods complicate tests (ask Mapper to translate s1 and s2, then pass the return values m1 and m2 to Logger) by coupling the collaborators.
Ignore irrelevant objects
The tradeoff for isolation via testing interactions is that the tests are aware of implementation.
The trick is to minimize this (via not creating interfaces/specifying expectations willy-nilly). DRY applies to expectations as well. Minimize the amount of places that an expectation is specified... ideally Once.
Minimize coupling
If there are lots of collaborators, coupling is high, which is a bad thing. So you may need to rework your design to see which collaborators don't belong at the same level of abstraction.
Your difficulties come from testing behavior rather than state. If you rewrote the tests so that they look at what's in the log, rather than verifying that calls to the log were made, your tests wouldn't break due to changes in the implementation.
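For instance, a hand-rolled fake logger lets the test assert on the resulting log content instead of on the interactions (sketched in Java rather than the C# of the question, and with the Thread.Sleep omitted, but the idea transfers directly to Moq-based tests):

```java
import java.util.ArrayList;
import java.util.List;

interface Logger {
  void doLog(String message);
}

// A fake that records state (the messages) instead of verifying interactions
class FakeLogger implements Logger {
  final List<String> messages = new ArrayList<>();

  @Override
  public void doLog(String message) {
    messages.add(message);
  }
}

class Executioner {
  private final Logger logger;

  Executioner(Logger logger) {
    this.logger = logger;
  }

  void doSomething() {
    logger.doLog("Starting doing something");
    logger.doLog("Something was done!");
  }
}

public class StateBasedTestDemo {
  public static void main(String[] args) {
    FakeLogger logger = new FakeLogger();
    new Executioner(logger).doSomething();

    // Assert on state: what ended up in the log, not how it got there.
    // If Executioner later maps messages internally before logging, this
    // test only changes if the observable log content changes.
    assert logger.messages.equals(List.of("Starting doing something", "Something was done!"));
  }
}
```

Note that the fake knows nothing about Mapper; introducing or removing a mapping step inside the production code would not require touching this test unless the logged text itself changes.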