Hi, can anybody tell me whether it is possible to automate inventory updates with supplier data using Magmi?
I have 3 suppliers who update their inventory regularly, and currently I have to do everything through CSV. Is there a way to automate the whole process, i.e. update the data with the suppliers' data automatically or at a scheduled time?
Yes, you can.
The Magmi CLI interface is made for this. Coupled with a cron job, and with the right profiles already defined (one per supplier), automating it is a no-brainer.
Magmi can already fetch CSV files from remote locations, so configure one profile per vendor with the appropriate plugin parameters.
See: Magmi Command line
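As a minimal sketch, a crontab could run one import per supplier profile. The install path and profile names below are assumptions; adjust them to your setup:

    # Nightly Magmi imports, one profile per supplier (paths and profile names are examples)
    0 2 * * *  php /var/www/magento/magmi/cli/magmi.cli.php -profile=supplier1 -mode=update
    15 2 * * * php /var/www/magento/magmi/cli/magmi.cli.php -profile=supplier2 -mode=update
    30 2 * * * php /var/www/magento/magmi/cli/magmi.cli.php -profile=supplier3 -mode=update

With -mode=update, existing items are updated without creating new ones; use -mode=create if new SKUs should be created as well.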
I am trying to run a job that includes a task which needs to run multiple times in parallel with different parameter values.
I understand that this is possible based on this post:
https://docs.databricks.com/data-engineering/jobs/jobs.html#maximum-concurrent-runs
But I can't figure out how.
To create and trigger such a job using the Databricks UI, follow the path below:
Workflows > Jobs > Create
Here, give the task a name and select its Type, Source, and Path.
You can add parameters to the task on the same screen.
In Advanced options you can add dependent libraries, edit email notifications, edit the retry policy, and edit the timeout.
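The setting from the linked docs is the job's "Maximum concurrent runs" (max_concurrent_runs in the Jobs API): raise it above 1 and you can trigger the same job several times with different parameters, and the runs execute in parallel. Below is a minimal sketch against the run-now REST endpoint; the workspace host, token, job id, and the run_date parameter name are all placeholders, not values from the question:

    import requests

    HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
    TOKEN = "<personal-access-token>"                       # placeholder
    JOB_ID = 123                                            # placeholder

    # One run per parameter value; the runs execute in parallel as long as the
    # job's "Maximum concurrent runs" is at least len(param_values).
    param_values = ["2023-01-01", "2023-01-02", "2023-01-03"]
    for value in param_values:
        resp = requests.post(
            f"{HOST}/api/2.1/jobs/run-now",
            headers={"Authorization": f"Bearer {TOKEN}"},
            json={"job_id": JOB_ID, "notebook_params": {"run_date": value}},
        )
        resp.raise_for_status()
        print("Started run", resp.json()["run_id"])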
I need my NiFi Controller Service to update periodically. Is there a way to achieve this?
For more information:
I have two files: 1. Customer_Info and 2. Sales_Info.
The Customer_Info file contains information like cust_id, name, and address.
Sales_Info contains information like sales_id, cust_id, and date_id.
Both files are received daily. Now, I want to merge these two files in NiFi. For this, I'm using the CSVRecordLookupService lookup service. But once the file is loaded, I don't see a way to reload it when a new file is received.
It would be up to each controller service to implement the logic to periodically update itself, since it depends on what the controller service is doing.
Unfortunately the CSVRecordLookupService does not periodically update itself, but I think an enhancement could be implemented to make it do this. An example of one that does update itself is the PropertiesFileLookupService.
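To illustrate the kind of enhancement that could be made, here is a plain-Java sketch of the refresh pattern (this is deliberately not the real NiFi ControllerService API; the class name and the key,value CSV layout are made up): reload the cache lazily whenever the backing file's modification time changes, so a freshly delivered daily file is picked up on the next lookup.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.attribute.FileTime;
    import java.util.Map;
    import java.util.concurrent.ConcurrentHashMap;

    public class ReloadingCsvCache {
        private final Path csvFile;
        private volatile FileTime lastLoaded = FileTime.fromMillis(0);
        private volatile Map<String, String> cache = new ConcurrentHashMap<>();

        public ReloadingCsvCache(Path csvFile) {
            this.csvFile = csvFile;
        }

        // Check the file's timestamp on every lookup and reload if it changed.
        public String lookup(String key) throws IOException {
            FileTime current = Files.getLastModifiedTime(csvFile);
            if (current.compareTo(lastLoaded) > 0) {
                reload(current);
            }
            return cache.get(key);
        }

        private synchronized void reload(FileTime stamp) throws IOException {
            Map<String, String> fresh = new ConcurrentHashMap<>();
            for (String line : Files.readAllLines(csvFile)) {
                String[] parts = line.split(",", 2); // assumes simple key,value rows
                if (parts.length == 2) {
                    fresh.put(parts[0].trim(), parts[1].trim());
                }
            }
            cache = fresh;
            lastLoaded = stamp;
        }
    }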
I need to do a load test on an application built on the ZK framework.
I recorded a script that performs the actions below:
a. User Login
b. Select Role
c. Open and Create Record
d. Log out.
When I run the script with multiple users, say 10, the script creates 10 records in the application.
But after some random duration, say 4-5 hours later, the same script does not create any records, even though all requests are shown as passed. The script also records COMET requests (AJAX push).
I am not able to figure out the reason.
Read these, which explain how ZK ids work (ZK generates desktop and component ids dynamically, so ids hardcoded in a recorded script eventually go stale; that is the usual reason a replayed script stops creating records even though every request is reported as passed):
http://books.zkoss.org/index.php?title=Small_Talks/2012/January/Execute_a_Loading_or_Performance_Test_on_ZK_using_JMeter
http://blog.zkoss.org/index.php/2013/08/06/zk-jmeter-plugin/
Scenario: The system needs to check the Product table in the database DAILY for every product's expiration date. The system needs to get the list of products whose expiration dates match the current date, then remove these products from the database.
Things to consider:
- Run a single query to retrieve the products whose expiration date matches.
- Remove these products from the database.
- We are talking about thousands of products here.
QUESTION: Do I need to create a Spring Batch job backed by a scheduler for this, or is a plain scheduled job enough to do it efficiently?
With a scheduled job, I can just schedule the checking and removing daily, and it's done. At the same time, I could also do it using Spring Batch with a scheduler. But which do you think is the more efficient way?
I think Spring Batch would be a wise decision if you need to restart your job and it has multiple steps inside. Otherwise, if it's really just one job, you could implement that restart functionality manually anyway...
All that Spring Batch configuration isn't that heavy in your application context, but you need to create the tables for the corresponding job repository (and perhaps you have to stage those tables across environments as well).
Spring Batch: if you need restart functionality.
Plain scheduling: easy and fast, and doesn't require much knowledge of the framework (a sketch follows below).
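If plain scheduling is enough, a minimal sketch could look like the following. The Product entity, repository, and cron expression are assumptions; the bulk delete query removes thousands of rows in a single statement instead of loading them into memory first:

    import java.time.LocalDate;
    import org.springframework.data.jpa.repository.JpaRepository;
    import org.springframework.data.jpa.repository.Modifying;
    import org.springframework.data.jpa.repository.Query;
    import org.springframework.data.repository.query.Param;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;
    import org.springframework.transaction.annotation.Transactional;

    // "Product" is the assumed JPA entity with an "expirationDate" column.
    interface ProductRepository extends JpaRepository<Product, Long> {
        // Single bulk delete statement; no entities are loaded into memory.
        @Modifying
        @Query("delete from Product p where p.expirationDate = :date")
        int deleteAllExpiredOn(@Param("date") LocalDate date);
    }

    @Component
    class ExpiredProductCleanup {
        private final ProductRepository repository;

        ExpiredProductCleanup(ProductRepository repository) {
            this.repository = repository;
        }

        // Requires @EnableScheduling on a @Configuration class.
        // Fires daily at midnight and deletes matching rows in one statement.
        @Scheduled(cron = "0 0 0 * * *")
        @Transactional
        public void removeExpiredProducts() {
            int removed = repository.deleteAllExpiredOn(LocalDate.now());
            System.out.println("Removed " + removed + " expired products");
        }
    }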
When setting up a new Hudson/Jenkins instance, I run into the problem that I have to manually provide all the email addresses for the SCM users.
We are using Subversion, and I can't generate the email addresses from the usernames. I have a mapping, but I found no way to copy or edit it without going through the GUI. With 20+ users that gets boring, and I'd rather just edit a file or something.
Maybe I'm missing some trivial thing like an scmusers.xml (which would totally do the job)?
I've got 2 solutions so far:
The users are stored in users/USERNAME/config.xml, which could be versioned / updated / etc.
Making use of the RegEx+Email+Plugin: create one rule per user and version that file.
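For the first option: the email address in users/USERNAME/config.xml lives in the Mailer user property, so a script could rewrite just that element per user. A trimmed example (the name and address are made up, and the exact structure varies by Jenkins version):

    <user>
      <fullName>John Doe</fullName>
      <properties>
        <hudson.tasks.Mailer_-UserProperty>
          <emailAddress>john.doe@example.com</emailAddress>
        </hudson.tasks.Mailer_-UserProperty>
      </properties>
    </user>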
With 20+ users, setting up a list for the SCM users is the way to go. Then when folks join or leave the group, you only have to edit the mailing list instead of the Hudson jobs. Also, depending on your mailing list software, folks might be able to add and drop themselves from the list, which would save you the time of maintaining it yourself in Hudson.
You might also want to look into the alias support of whatever email server your Hudson server is using. Let Hudson send out the emails it wants to using the SVN usernames, but then define aliases in your /etc/aliases file (or equivalent for your email server) that map the SVN usernames to the actual email addresses.
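For example, a couple of /etc/aliases entries (usernames and addresses are made up) could map SVN accounts to real mailboxes; remember to run newaliases afterwards so the mail server reloads the map:

    # Map SVN usernames to real mailboxes
    jdoe: john.doe@example.com
    asmith: alice.smith@example.com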