Can you write scripts to modify flow tables for the SDN controller? - OpenDaylight

I'm new to SDN and OpenDaylight. I was wondering if you could write custom scripts to modify flow tables for the SDN controller?

Absolutely. You can interact with ODL via its REST interface and write your own scripts to CRUD (create, read, update, delete) flows. There are many ways to go about this: Python scripts, Bash scripts, etc.
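For a concrete starting point, here is a minimal sketch in Java (a Python or curl script would work just as well) that creates, reads, and deletes one flow over the controller's RESTCONF API. The URL, the admin/admin credentials, and the JSON flow body are assumptions based on the classic config-datastore layout; newer ODL releases serve the same data under /rests/data/ instead, so adjust the path to your version.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class OdlFlowClient {

        // Assumed defaults: controller on localhost:8181, admin/admin credentials,
        // and the classic RESTCONF config-datastore path (node openflow:1, table 0, flow id 1).
        private static final String FLOW_URL =
                "http://localhost:8181/restconf/config/opendaylight-inventory:nodes/"
                + "node/openflow:1/table/0/flow/1";

        public static void main(String[] args) throws Exception {
            String auth = "Basic " + Base64.getEncoder().encodeToString("admin:admin".getBytes());
            HttpClient client = HttpClient.newHttpClient();

            // Minimal flow body: match IPv4 (ethertype 0x0800) in table 0 and drop it.
            String flowJson = "{ \"flow-node-inventory:flow\": [ { \"id\": \"1\", \"table_id\": 0, "
                    + "\"priority\": 100, "
                    + "\"match\": { \"ethernet-match\": { \"ethernet-type\": { \"type\": 2048 } } }, "
                    + "\"instructions\": { \"instruction\": [ { \"order\": 0, \"apply-actions\": "
                    + "{ \"action\": [ { \"order\": 0, \"drop-action\": {} } ] } } ] } } ] }";

            // Create (or replace) the flow: PUT is idempotent on this URL.
            HttpRequest put = HttpRequest.newBuilder(URI.create(FLOW_URL))
                    .header("Authorization", auth)
                    .header("Content-Type", "application/json")
                    .PUT(HttpRequest.BodyPublishers.ofString(flowJson))
                    .build();
            System.out.println("PUT    -> " + client.send(put, HttpResponse.BodyHandlers.ofString()).statusCode());

            // Read it back.
            HttpRequest get = HttpRequest.newBuilder(URI.create(FLOW_URL))
                    .header("Authorization", auth)
                    .GET()
                    .build();
            System.out.println("GET    -> " + client.send(get, HttpResponse.BodyHandlers.ofString()).body());

            // Delete it.
            HttpRequest delete = HttpRequest.newBuilder(URI.create(FLOW_URL))
                    .header("Authorization", auth)
                    .DELETE()
                    .build();
            System.out.println("DELETE -> " + client.send(delete, HttpResponse.BodyHandlers.ofString()).statusCode());
        }
    }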

Related

Spring Boot + Google Vision AI

Good evening! Guys, I need some advice from Java developers about my rough code. I'm learning Spring Boot, and I need to build an application that accepts images via a REST API or a Vaadin UI, recognizes the objects in them with Google Vision AI, and saves the results in PostgreSQL, plus some more requirements described in README.md.
So far I've sketched out the REST layer and can already get recognition results back. But I have many questions:
I have to cover the code with integration + unit tests. I have no questions about the integration tests, but how do I write unit tests for Spring Boot applications? Does each method need to be covered?
How do I automatically generate SQL INSERT statements for PostgreSQL tables with oid columns (DataGrip and DBeaver can't do that)? I want to add this to a Flyway migration.
I use many-to-many relationships; how do I implement deletion across the three tables with Hibernate (so far I only know how to do it in plain SQL)?
In handlePicrureUpload() I not only upload the image but also write the image tags into PostgreSQL. Is that a serious mistake? How do I run those writes only once the handlePicrureUpload() method has finished?
How do I make uploading and processing of images multithreaded? How do I track the status of each recognition? With a separate controller that pulls the statuses from Google Cloud?
How do I output the /api/ai/ getAiResults() data as a table in Vaadin? How do I display the picture in the Vaadin table, and how do I show the tag list in a field (ideally editable)?
I know Google has all these answers, but I'm a little time-constrained right now. Feel free to hit me with a stick.
Cloud Vision documentation - https://cloud.google.com/vision/docs
Thank you to everyone who responds!
I have to cover the code with integration + unit tests. I have no questions about the integration tests, but how do I write unit tests for Spring Boot applications? Does each method need to be covered?
Unit tests are generally written per method: each method that contains logic should get its own test(s).
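For most services, a plain JUnit 5 + Mockito test that exercises one method at a time, without starting the Spring context, is enough; slice tests such as @WebMvcTest are more for controllers. The ImageService and VisionClient names below are hypothetical stand-ins for your own classes:

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.verify;
    import static org.mockito.Mockito.when;

    import java.util.List;

    import org.junit.jupiter.api.Test;

    class ImageServiceTest {

        // Hypothetical collaborator that wraps the Google Vision call.
        interface VisionClient {
            List<String> detectLabels(String imageId);
        }

        // Hypothetical service under test: delegates recognition to the VisionClient.
        static class ImageService {
            private final VisionClient vision;

            ImageService(VisionClient vision) {
                this.vision = vision;
            }

            List<String> tagsFor(String imageId) {
                return vision.detectLabels(imageId);
            }
        }

        @Test
        void tagsForDelegatesToVisionClient() {
            // Mock the collaborator instead of calling Google for real.
            VisionClient vision = mock(VisionClient.class);
            when(vision.detectLabels("img-1")).thenReturn(List.of("cat", "keyboard"));

            ImageService service = new ImageService(vision);

            assertEquals(List.of("cat", "keyboard"), service.tagsFor("img-1"));
            verify(vision).detectLabels("img-1");
        }
    }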
I use many-to-many relationships; how do I implement deletion across the three tables with Hibernate (so far I only know how to do it in plain SQL)?
JPA supports deleting records. If you have cascade delete set up between the tables, you don't need to delete them one by one.
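A minimal sketch of what that mapping can look like, with hypothetical Picture/Tag entity names (jakarta.persistence on Spring Boot 3, javax.persistence on older versions):

    import jakarta.persistence.Entity;
    import jakarta.persistence.GeneratedValue;
    import jakarta.persistence.Id;
    import jakarta.persistence.JoinColumn;
    import jakarta.persistence.JoinTable;
    import jakarta.persistence.ManyToMany;

    import java.util.HashSet;
    import java.util.Set;

    // Hypothetical entities standing in for the three tables:
    // picture, tag, and the picture_tag join table.
    @Entity
    public class Picture {

        @Id
        @GeneratedValue
        private Long id;

        // Owning side of the many-to-many. Removing a Picture (e.g. pictureRepository.delete(picture))
        // also deletes its rows from the picture_tag join table; the Tag rows themselves stay
        // unless you delete them explicitly.
        @ManyToMany
        @JoinTable(name = "picture_tag",
                   joinColumns = @JoinColumn(name = "picture_id"),
                   inverseJoinColumns = @JoinColumn(name = "tag_id"))
        private Set<Tag> tags = new HashSet<>();
    }

    @Entity
    class Tag {

        @Id
        @GeneratedValue
        private Long id;

        private String name;
    }

With that in place, a single repository delete removes the picture row and its join-table rows in one go, instead of three hand-written SQL deletes.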
In handlePicrureUpload() I not only upload the image but also write the image tags into PostgreSQL. Is that a serious mistake? How do I run those writes only once the handlePicrureUpload() method has finished?
You are using the wrong OR operator in your handlePicrureUpload(). It should be ||.
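To illustrate the difference the answer is pointing at (the method name here is hypothetical): a single | always evaluates both operands, while || short-circuits and skips the right-hand side once the result is already known.

    public class OrOperatorDemo {

        static boolean saveTags() {
            System.out.println("saveTags() ran");
            return true;
        }

        public static void main(String[] args) {
            boolean uploadFailed = true;

            // Single | evaluates both sides: saveTags() runs even though uploadFailed is true.
            boolean withSinglePipe = uploadFailed | saveTags();

            // Short-circuit ||: saveTags() is never called once uploadFailed is true.
            boolean withDoublePipe = uploadFailed || saveTags();

            System.out.println(withSinglePipe + " " + withDoublePipe);
        }
    }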
How do I make uploading and processing of images multithreaded? How do I track the status of each recognition? With a separate controller that pulls the statuses from Google Cloud?
Spring provides @Async to execute methods asynchronously on a separate thread. It sounds like you want to do some sort of queueing of requests. To start simple, you can save each request in a 'request' table and return a request id to track it. You can set up an @Scheduled job that reads new operations every X interval and processes them, and a REST endpoint that returns the status of a request.
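A rough sketch of that pattern, with hypothetical class and endpoint names, and an in-memory map standing in for the 'request' table:

    import java.util.Map;
    import java.util.UUID;
    import java.util.concurrent.ConcurrentHashMap;

    import org.springframework.boot.SpringApplication;
    import org.springframework.boot.autoconfigure.SpringBootApplication;
    import org.springframework.scheduling.annotation.Async;
    import org.springframework.scheduling.annotation.EnableAsync;
    import org.springframework.scheduling.annotation.EnableScheduling;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;
    import org.springframework.web.bind.annotation.GetMapping;
    import org.springframework.web.bind.annotation.PathVariable;
    import org.springframework.web.bind.annotation.PostMapping;
    import org.springframework.web.bind.annotation.RequestBody;
    import org.springframework.web.bind.annotation.RequestMapping;
    import org.springframework.web.bind.annotation.RestController;

    @SpringBootApplication
    @EnableAsync
    @EnableScheduling
    public class RecognitionQueueApplication {
        public static void main(String[] args) {
            SpringApplication.run(RecognitionQueueApplication.class, args);
        }
    }

    // Tracks request status; a real version would persist this in a 'request' table.
    @Component
    class RequestRegistry {
        private final Map<String, String> statusById = new ConcurrentHashMap<>();

        String register() {
            String id = UUID.randomUUID().toString();
            statusById.put(id, "PENDING");
            return id;
        }

        void update(String id, String status) { statusById.put(id, status); }

        String statusOf(String id) { return statusById.getOrDefault(id, "UNKNOWN"); }
    }

    // The @Async method lives in its own bean so the call goes through Spring's proxy.
    @Component
    class RecognitionWorker {
        private final RequestRegistry registry;

        RecognitionWorker(RequestRegistry registry) { this.registry = registry; }

        @Async  // runs on a separate thread, so the upload request returns immediately
        public void recognize(String requestId, byte[] image) {
            registry.update(requestId, "IN_PROGRESS");
            // ... call Google Vision here and save the tags to PostgreSQL ...
            registry.update(requestId, "DONE");
        }

        @Scheduled(fixedDelay = 60_000)  // every minute: re-check anything still pending
        public void reconcilePending() {
            // In the persisted version, read rows still marked PENDING/IN_PROGRESS
            // and query Google Cloud for their current status.
        }
    }

    @RestController
    @RequestMapping("/api/ai")
    class RecognitionController {
        private final RequestRegistry registry;
        private final RecognitionWorker worker;

        RecognitionController(RequestRegistry registry, RecognitionWorker worker) {
            this.registry = registry;
            this.worker = worker;
        }

        @PostMapping("/recognitions")
        public String submit(@RequestBody byte[] image) {
            String id = registry.register();
            worker.recognize(id, image);   // kicks off the async work
            return id;                     // the client polls the status endpoint with this id
        }

        @GetMapping("/recognitions/{id}/status")
        public String status(@PathVariable String id) {
            return registry.statusOf(id);
        }
    }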

The best way to schedule or automate MarkLogic data hub flows/custom steps

I use DMSDK to ingest data, and I have multiple custom flows to run following data ingestion. Instead of manually running the flows one by one, what is the best way to orchestrate MarkLogic data hub flows?
Gradle, triggers, or other scheduling tools?
I concur with Dave Cassel that NiFi, or perhaps something like MuleSoft, or maybe even Camel, is a great way to manage running your flows, particularly if you are talking about operational management.
To comment on the other mechanisms:
Crontab doesn't connect to MarkLogic itself. You'd have to write scripts or code to make something actually happen. You won't have much control either, nor logging, unless you add that as well.
We have great plugins for Gradle that make running flows really easy. They are great during development and such, but perhaps less suited for scheduling or operational tasking.
Triggers inside MarkLogic only respond to insertion of data, so you'd still have to initiate an update from outside anyhow.
Scheduled Tasks inside MarkLogic have similar limitations to Crontab and Gradle. They don't do much by themselves, so you have to write code anyhow, and they provide no logging by themselves, nor ways to operationally manage the tasks other than through the Admin UI.
A JAR package: it depends on what JAR you actually mean. You can create a JAR of your ml-gradle project, but that doesn't gain you much over calling Gradle itself.
Personally, I'd have a close look at the operational requirements. Think of, for instance: the need for a status overview, the ability to interrupt schedules, retry loops on failure, built-in logging, and facilities to send notifications when attention is needed.
HTH!
There are a variety of answers that will work, of course; my preference is NiFi. This keeps any scheduling overhead outside of MarkLogic, with the trade-off that you'll need to have NiFi running.

Feasibility analysis of data transformation using any ETL tool

I don't have any experience with any ETL tool. However, I want to know whether it is possible to do the following using an ETL tool, or whether we need to write a Java (or other) batch job to do it:
Scenario 1:
The source system exposes several REST APIs. I need to get the data, transform it, and then store it in MongoDB.
The hardest part is the transformation. There can be situations where I need to call one REST API of the source and, based on its data, call several other REST APIs using the first API's data. After that we need to reformat the combined data and store it in Mongo.
Scenario 2:
The source system has a DB. I need to transform the data using my custom logic and store it in MongoDB.
Here the custom logic can include things like this:
From table1 of the source I create collection1. After that I need to consult table2 and the previously created collection1, process the data, and then create collection2.
Is this possible using an ETL tool? If so, which tool? If possible, please briefly mention how it can be done and the relevant terminology, so that I can search the internet, learn, and implement it.
Briefly speaking: yes, that is exactly what ETL tools are for. You can Extract data from REST sources, Transform it using sophisticated logic, and Load it into a target such as MongoDB.
The exact implementation depends on the tool. While I expect you will get help if you run into problems implementing the solution in any of these tools, I don't think anyone will prepare a complete, detailed solution for you.
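If you end up hand-rolling scenario 1 instead of using an ETL tool, the skeleton is small in plain Java. Everything below is hypothetical (the source URL, field names, database and collection names) and assumes the MongoDB sync driver on the classpath:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    import com.mongodb.client.MongoClient;
    import com.mongodb.client.MongoClients;
    import org.bson.Document;

    public class RestToMongoJob {
        public static void main(String[] args) throws Exception {
            HttpClient http = HttpClient.newHttpClient();

            // Extract: call the (hypothetical) source REST API.
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("https://source.example.com/api/orders/42")).GET().build();
            String json = http.send(request, HttpResponse.BodyHandlers.ofString()).body();

            // Transform: parse and reshape the payload. Real logic would map more fields,
            // call follow-up APIs based on this response, join the results, and so on.
            Document source = Document.parse(json);
            Document target = new Document("orderId", source.get("id"))
                    .append("status", source.get("state"))
                    .append("raw", source);

            // Load: write the transformed document into MongoDB.
            try (MongoClient mongo = MongoClients.create("mongodb://localhost:27017")) {
                mongo.getDatabase("etl").getCollection("orders").insertOne(target);
            }
        }
    }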

Is there any way data can be written to CRM applications using Spring Batch?

This is just for learning purposes. I have worked on ETL where my team loaded data into a Salesforce Sandbox provided by our client. It involved a few ETL scripts that helped move the data into the sandbox for testing; this was undertaken before the production phase. The ETL scripts upserted the data from a DataStage transformer stage to a Salesforce Sandbox stage.
Is it possible to have ETL-like scripts in a Spring application that write the data to a Salesforce-sandbox-type setup using Spring Batch?
Sure it is. You will probably have to write your own writers, but reading bulk data from a source, processing it, and storing it in a target is exactly what Spring Batch is for.
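To make "write your own writers" concrete, here is a hedged sketch of a custom ItemWriter that pushes each processed chunk to Salesforce. The Account record and SalesforceClient interface are hypothetical placeholders for whatever Salesforce API client you actually use, and the write signature shown is the Spring Batch 4.x one (5.x passes a Chunk instead of a List):

    import java.util.List;

    import org.springframework.batch.item.ItemWriter;

    // Hypothetical domain type and Salesforce client; swap in whatever API/library you actually use.
    record Account(String externalId, String name) {}

    interface SalesforceClient {
        void upsert(List<Account> accounts);
    }

    // Custom writer: Spring Batch hands it one chunk of processed items at a time.
    public class SalesforceItemWriter implements ItemWriter<Account> {

        private final SalesforceClient salesforce;

        public SalesforceItemWriter(SalesforceClient salesforce) {
            this.salesforce = salesforce;
        }

        @Override
        public void write(List<? extends Account> items) {
            // Upsert the whole chunk in one call, mirroring the DataStage-to-sandbox upserts.
            salesforce.upsert(List.copyOf(items));
        }
    }

You would then wire this writer into a step together with a reader (for example a FlatFileItemReader or JdbcCursorItemReader) and an optional processor.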

Multiple programs updating the same database

I have a website developed with ASP.NET MVC, Entity Framework Code First and SQL Server.
The website has entities that each have a history of statuses that we defined (NEW, PACKED, SHIPPED etc.)
The DB contains a table in which a completely separate system inserts parcel tracking data.
I have to read this tracking data and, following certain business rules, add to the existing status history of my entities.
The best way I can think of is to write an independent Windows service to poll the tracking data every so often and update my entity statuses from that. However, that makes me concerned about DB concurrency issues.
Please could someone advise me on the best strategy for this scenario?
Many thanks
There are different ways to do it, and it also depends on the response time you need. If you need to update your system as soon as the tracking system updates a record, then a trigger is the preferred way. An alternative is to schedule a job that runs every 15/30 minutes and syncs the two systems.
As for the concurrency issue, you can use a concurrency token field; Entity Framework has support for this.
