Good evening, guys! I need some advice from Java developers about my monkey code. I'm learning Spring Boot, and I need to build an application that takes in images via a REST API or a Vaadin UI, recognizes objects in them with Google AI, and saves the result in PostgreSQL, plus some more requirements described in README.md.
In general, I've sketched out the REST part and can already get recognition results back. But I have many questions:
I have to cover the code with integration + unit tests. I have no questions about the integration tests, but how do I write unit tests for Spring Boot applications? Does each method need to be covered?
How do I automatically generate SQL INSERTs for PostgreSQL tables with oid columns (DataGrip and DBeaver can't do that)? I want to add this to the Flyway migration.
I use many-to-many relations; how do I implement deletion across the three tables in Hibernate (so far I only know how to do it in pure SQL)?
In handlePicrureUpload() I not only upload the image but also write the image's tags into PostgreSQL. It's a very serious error; how do I run these actions only when the handlePicrureUpload() method has finished?
How do I make the uploading and processing of images multithreaded? How do I track the status of each recognition, with a separate controller that pulls the statuses from Google Cloud?
How do I display the output of getAiResults() from /api/ai/ in a Vaadin table? How do I show the picture in the Vaadin table, and how do I render the tag list in a field (ideally editable)?
I know Google has all these answers, but I'm a little time-constrained right now. You can hit me with a stick.
Cloud Vision documentation - https://cloud.google.com/vision/docs
Thank you to everyone who will respond!
I have to cover the code with integration + unit tests. I have no questions about the integration tests, but how do I write unit tests for Spring Boot applications? Does each method need to be covered?
Unit tests are generally written per method.
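For example, a plain JUnit 5 + Mockito test of a service bean needs no Spring context at all. This is only a sketch: ImageService, TagRepository, and Tag below are hypothetical stand-ins for your own classes, shown inline so the snippet compiles on its own.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.ArgumentMatchers.any;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import java.util.List;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;
import org.mockito.InjectMocks;
import org.mockito.Mock;
import org.mockito.junit.jupiter.MockitoExtension;

// Hypothetical production types, inlined so the test is self-contained.
class Tag {
    private final String name;
    Tag(String name) { this.name = name; }
    String name() { return name; }
}

interface TagRepository {
    Tag save(Tag tag);
}

class ImageService {
    private final TagRepository tags;
    ImageService(TagRepository tags) { this.tags = tags; }
    List<Tag> storeTags(List<Tag> recognized) {
        return recognized.stream().map(tags::save).toList();
    }
}

// Plain unit test: the collaborator is mocked, no application context is started.
@ExtendWith(MockitoExtension.class)
class ImageServiceTest {

    @Mock
    private TagRepository tagRepository;

    @InjectMocks
    private ImageService imageService;

    @Test
    void storesEachRecognizedTag() {
        when(tagRepository.save(any(Tag.class))).thenAnswer(inv -> inv.getArgument(0));

        List<Tag> saved = imageService.storeTags(List.of(new Tag("cat")));

        assertEquals(1, saved.size());
        verify(tagRepository).save(any(Tag.class));
    }
}
```

Full @SpringBootTest runs belong in your integration suite; for unit tests, focus on beans with real logic rather than mechanically covering every getter and delegate.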
I use many-to-many relations; how do I implement deletion across the three tables in Hibernate (so far I only know how to do it in pure SQL)?
JPA supports deleting records. If you have cascade deletes set up between the tables, you don't need to delete them one by one.
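A sketch of what that can look like (entity and table names are made up; the imports are jakarta.persistence, i.e. Spring Boot 3 / Hibernate 6):

```java
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.JoinColumn;
import jakarta.persistence.JoinTable;
import jakarta.persistence.ManyToMany;
import java.util.HashSet;
import java.util.Set;

@Entity
class Image {

    @Id
    @GeneratedValue
    private Long id;

    // Owning side of the association: Hibernate manages the image_tag join
    // table, so imageRepository.delete(image) also removes this image's join
    // rows without any hand-written SQL.
    @ManyToMany
    @JoinTable(name = "image_tag",
               joinColumns = @JoinColumn(name = "image_id"),
               inverseJoinColumns = @JoinColumn(name = "tag_id"))
    private Set<Tag> tags = new HashSet<>();
}

@Entity
class Tag {

    @Id
    @GeneratedValue
    private Long id;

    private String name;
}
```

With a plain @ManyToMany like this, deleting the owning Image already cleans up the image_tag rows; the Tag rows themselves survive, so remove them separately if you want to purge tags that are no longer referenced.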
In handlePicrureUpload() I not only upload the image but also write the image's tags into PostgreSQL. It's a very serious error; how do I run these actions only when the handlePicrureUpload() method has finished?
You are using the wrong OR operator in your handlePicrureUpload(). It should be ||.
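To illustrate the difference (the method names here are hypothetical): | always evaluates both operands, while || stops as soon as the left side is true.

```java
// '|' evaluates both sides; '||' short-circuits after a true left operand.
public class OrDemo {

    static boolean uploadImage() { System.out.println("upload"); return true; }
    static boolean writeTags()   { System.out.println("tags");   return true; }

    public static void main(String[] args) {
        boolean eager = uploadImage() | writeTags();  // prints "upload" and "tags"
        boolean lazy  = uploadImage() || writeTags(); // prints only "upload"
        System.out.println(eager + " " + lazy);
    }
}
```

And if the underlying goal is that the database writes only stick when the whole upload succeeds, Spring's @Transactional on the method is the usual tool for that all-or-nothing behavior.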
How do I make the uploading and processing of images multithreaded? How do I track the status of each recognition, with a separate controller that pulls the statuses from Google Cloud?
Spring provides @Async to execute methods asynchronously on a separate thread. It sounds like you want some sort of request queueing. To start simple, you can save each request in a 'request' table and return a request id to track it. You can set up an @Scheduled job that reads new requests every X interval and processes them, and a REST endpoint that returns the status of a request.
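A minimal sketch of that flow, assuming Spring Boot 3 with @EnableAsync and @EnableScheduling on the application class; every class, field, and endpoint name below is made up, and the Vision call itself is elided:

```java
import java.util.List;
import jakarta.persistence.Entity;
import jakarta.persistence.EnumType;
import jakarta.persistence.Enumerated;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.scheduling.annotation.Async;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

enum Status { PENDING, IN_PROGRESS, DONE }

// The queued request; image bytes / storage reference omitted for brevity.
@Entity
class RecognitionRequest {

    @Id @GeneratedValue
    private Long id;

    @Enumerated(EnumType.STRING)
    private Status status = Status.PENDING;

    public Long getId() { return id; }
    public Status getStatus() { return status; }
    public void setStatus(Status status) { this.status = status; }
}

interface RecognitionRequestRepository extends JpaRepository<RecognitionRequest, Long> {
    List<RecognitionRequest> findByStatus(Status status);
}

// Polls the request table and dispatches new work. The worker lives in a
// separate bean because @Async is proxy-based and ignored on self-invocation.
@Service
class RecognitionScheduler {

    private final RecognitionRequestRepository repository;
    private final RecognitionWorker worker;

    RecognitionScheduler(RecognitionRequestRepository repository, RecognitionWorker worker) {
        this.repository = repository;
        this.worker = worker;
    }

    @Scheduled(fixedDelay = 5000)
    public void dispatchPending() {
        repository.findByStatus(Status.PENDING).forEach(worker::recognize);
    }
}

@Service
class RecognitionWorker {

    private final RecognitionRequestRepository repository;

    RecognitionWorker(RecognitionRequestRepository repository) {
        this.repository = repository;
    }

    @Async // runs on Spring's task executor, not the scheduler thread
    public void recognize(RecognitionRequest request) {
        request.setStatus(Status.IN_PROGRESS);
        repository.save(request);
        // ... call the Vision API here and store the resulting tags ...
        request.setStatus(Status.DONE);
        repository.save(request);
    }
}

// Clients poll this endpoint with the id they got back at upload time.
@RestController
class RecognitionStatusController {

    private final RecognitionRequestRepository repository;

    RecognitionStatusController(RecognitionRequestRepository repository) {
        this.repository = repository;
    }

    @GetMapping("/api/recognitions/{id}/status")
    public Status status(@PathVariable Long id) {
        return repository.findById(id).map(RecognitionRequest::getStatus).orElseThrow();
    }
}
```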
In a lot of the apps I work on, we rely heavily on first- and third-party APIs. So much so that in some of our apps it is useless to even try to log in without those APIs in place: either critical pieces of data are missing, or the entire app is like a server-side-rendered SPA that houses no data of its own but pulls it from an API at request time (we cache it when we can).
This raises a huge problem when developing the app locally, since we do not have a sandbox environment. Our current solution is to create a service layer between our business logic and the actual HTTP calls. In our local environments we then swap out the HTTP implementation for a class that just returns fake data. This works pretty well most of the time, except for a couple of issues:
It only gives us one state of the application at a time. Unlike data in the database, we cannot easily run different seeders to replicate different scenarios.
If we run into a bug in production, we have no way of replicating the API response without diving into the code and adding a conditional to return that specific response. With data stored in the database, it is easy to log in to TablePlus and manually set up some condition, or even pull down select tables from production.
Our mock functions can get quite large and nasty if we try to have them respond dynamically with different responses based on, for example, the resource id being requested.
This makes the overhead of creating each test for each scenario quite high, in my opinion. If we could use something similar to a database factory to generate a bunch of different request-response pairs, we could test a lot more cases, and we could dynamically set up specific scenarios when replicating bugs we run into in production.
Since our applications are built with Laravel and PHP, mocks, unlike the database, don't persist from one request to another. We cannot simply throw open a tinker session and start seeding our API integrations with data the way we can with the database.
I was trying to think of a way to do it with a cache and stored request-response pairs. This could also be moved to the database, but I would prefer not to have an extra table that is only used locally.
Any ideas?
We are using Mongock in our Spring Boot/Kotlin microservices as our main MongoDB migration tool, and it is working perfectly. We started with a simple JSON file to create a few collections and have been adding changesets for a while now.
By now we have so many changesets that it is becoming hard to see what 'the truth' about our database is. We do not have one single JSON file, or a set of files, that tells us what the state of the database is; it is the accumulation of the initial JSON plus all the changesets.
I would like to create a new baseline script based on the current situation and start over.
What are some best practices to achieve this? Without losing data, of course.
I believe you are asking two different things: first, how to know the current state of the Mongock migrations, and second, how to perform a baseline.
How to know the current state
Mongock offers a CLI (currently in beta, although it is pretty stable) which provides this.
Simply run ./mongock state db -aj yourApp.jar
Currently it requires passing your application jar, but this dependency will be removed.
More information is in the documentation: the CLI page, the CLI operations page, the State operation section, and the Spring Boot example readme.
Baseline
Mongock currently doesn't provide this feature in the CLI, but it's one of the next items on our roadmap.
I'm developing a Jokenpo game using React with Spring REST, but I can't have a database to store all the information needed (creating and deleting moves, creating and deleting players).
I don't know the best practice here, or whether there is a design pattern for storing that kind of information. I know there is the folder src/main/resources where maybe I could store a text file; I thought the API could load that file on startup with the initial game state and then update it during the game.
To be clearer: I would just like to know the simplest way of storing information inside a Spring REST application without a database. I really appreciate any help. Thanks.
Take a look at SQLite. It's a very light database library that you can include as a dependency of your Spring application. It doesn't require a separate database server to run, and the entire database is stored in a single file, whose location you can choose in the connection string.
It offers the flexibility of a standard database, so you can use Spring Data / JPA to access the data. It has some limitations compared with more robust databases like MySQL, especially around concurrent writes, which you should investigate and be aware of. Usually it works very well for small applications or embedded applications.
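For instance, assuming the xerial sqlite-jdbc driver and the hibernate-community-dialects artifact are on your classpath (both are extra dependencies you add yourself, not Spring Boot defaults), the wiring can be as small as:

```properties
# application.properties -- minimal SQLite setup (sketch)
# The whole database lives in one local file, created on first use.
spring.datasource.url=jdbc:sqlite:game.db
spring.datasource.driver-class-name=org.sqlite.JDBC
# Dialect shipped in hibernate-community-dialects (Hibernate 6 / Boot 3)
spring.jpa.database-platform=org.hibernate.community.dialect.SQLiteDialect
spring.jpa.hibernate.ddl-auto=update
```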
Hi, I learned Web API 2 recently and I am working on a sample project now. I am following a layered architecture in my project. This is the flow:
controller=>Business Layer=>Data Layer
Now I have read some articles about the repository pattern, which sounds better nowadays. I saw a flow like:
controller=>services=>repository
Is there any significant difference between the two flows?
As a beginner, which style of architecture should I follow? Can someone help me understand these patterns?
When you use services with repositories, you should make the repository generic and use a Unit of Work (UoW) so you don't repeat yourself with CRUD actions on every entity. Then add an object-relational mapper (ORM) like Entity Framework. It is preferred that you also use domain objects for storing your logic, and make the services return data transfer objects (DTOs), because best practice is to keep your services small and clean. Then you also need some mapping tool/logic to map DTOs to domain objects.
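Roughly, that generic-repository + unit-of-work shape looks like this (sketched in Java with made-up names; in .NET the unit of work would typically wrap Entity Framework's DbContext):

```java
import java.util.List;
import java.util.Optional;

// Generic repository: one definition of CRUD for every entity.
interface Repository<T, ID> {
    Optional<T> findById(ID id);
    List<T> findAll();
    void add(T entity);
    void remove(T entity);
}

// Unit of work: hands out repositories and commits all changes together.
interface UnitOfWork {
    <T> Repository<T, Long> repositoryFor(Class<T> entityType);
    void commit();
}

// Domain object: the business logic lives here.
class Customer {
    private final long id;
    private String name;

    Customer(long id, String name) { this.id = id; this.name = name; }

    void rename(String newName) { this.name = newName; }
    long id() { return id; }
    String name() { return name; }
}

// DTO: what the service returns to callers instead of the domain object.
record CustomerDto(long id, String name) {}

// Service: small and clean, mapping domain objects to DTOs at the boundary.
class CustomerService {
    private final UnitOfWork uow;

    CustomerService(UnitOfWork uow) { this.uow = uow; }

    CustomerDto rename(long id, String newName) {
        Repository<Customer, Long> customers = uow.repositoryFor(Customer.class);
        Customer customer = customers.findById(id).orElseThrow();
        customer.rename(newName); // domain logic stays on the domain object
        uow.commit();             // one commit for the whole use case
        return new CustomerDto(customer.id(), customer.name());
    }
}
```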
And you see where this is all going: suddenly you have far more complexity and abstraction than you need, because the only thing the customer cares about is the quality of your product, not some fancy architecture.
But as a project grows, proper architecture, automated tests, and so on become much more important, so this is all valid in large projects. If you want to experiment and learn, it is a great opportunity; but if you have a running application in production, I would rather not make such big changes.
TLDR: they are both valid
My company needs to migrate data from a Taleo system to a new HR system.
A little research suggests that traditional ETL may not work against Taleo's cloud-based system, but I don't know enough about the setup and am trying to learn.
Does anyone have experience migrating HR data from Taleo to another system, and, if so, how did you do it, and was traditional ETL an option?
Thanks
How you access Taleo depends as much on your platform as on theirs.
For example, I'm using Windows, and Visual Studio 2010's Add Service Reference fails against their service (not sure if this is my mistake).
Taleo has just released a new version that has temporarily broken things for a number of companies.*
Whether your ETL is one-time or continuous, Taleo provides a PDF version of their API that works as follows for employee records (I'm only grabbing their employee records); other record types appear to use the same paradigm.
Employee records have two types of fields: fixed and user-defined. The fixed fields, which I work with in C#, are like simple properties of a class and can be accessed with standard dot notation, such as taleoItem.ManagerId. The user-defined fields come in a list of "beans"; for each bean, one looks first at its name: foreach (var taleoItem in taleoEmployeeBean.flexValues) { if (taleoItem.fieldName == "Social Club Member") { ... } }. (* Currently I'm getting zero of the 50+ flexbeans that I normally get, and two flexbeans that I've never seen before. As can be expected, until Taleo fixes this breakage, all I can do is twiddle my thumbs.)
When Taleo works properly, retrieving data generally works like this:
1. Access a fixed URL to get a URL for your company.
2. Authenticate via the URL retrieved in step 1 to get a session token.
3. Use the session token from step 2 to invoke the various Taleo API methods.
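In outline, the control flow looks like the sketch below. Every method body and URL here is a placeholder: the real endpoints and payloads are the SOAP calls documented in Taleo's API PDF.

```java
// Control-flow sketch only: each method is a stub standing in for the
// corresponding SOAP call from Taleo's API documentation.
class TaleoSessionSketch {

    public static void main(String[] args) {
        String companyUrl = lookupCompanyUrl("MyCompany");           // step 1: fixed URL -> company URL
        String token = authenticate(companyUrl, "user", "password"); // step 2: company URL -> session token
        fetchEmployees(companyUrl, token);                           // step 3: token on every API call
    }

    static String lookupCompanyUrl(String company) {
        return "https://example.invalid/" + company; // placeholder, not a real Taleo URL
    }

    static String authenticate(String companyUrl, String user, String password) {
        return "session-token"; // placeholder for the SOAP login call
    }

    static void fetchEmployees(String companyUrl, String token) {
        // placeholder for Taleo API calls that pass the session token
    }
}
```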
Warning: the Taleo API has documentation errors. Also, the test cases will not necessarily work.
I'm not familiar with Taleo, but according to their website they have features that allow integration via "XML, Web Services, reusable components, and standard APIs". There are many ETL tools on the market that can interface with web services as a source, or you could optionally write your own.
Taleo provides a PDF which describes all the calls that can be made. Basically, Taleo uses SOAP web services for accessing their data. For a detailed description, visit Taleo Integration in Drupal.