I want to import multiple solutions, and data files related to each solution, into a CRM Online instance using the Package Deployer.
I want to order the imports as follows:
Step 1 - Import Solution1
Step 2 - Import Data File1
Step 3 - Import Solution2
Step 4 - Import Data File2
Please let me know how to achieve this functionality.
It isn't possible to interleave the import of solution files and data files in a single Package Deployer package, given the structure of the ImportConfig.xml file that the Package Deployer uses to define which solutions and data files to import. All the solutions are imported first, in sequential order, followed by all the data files, also in sequential order.
In practice, there shouldn't be a need to import the solutions and data files in such an interleaved order, and the data files can be imported after all the solutions have been imported without any issue.
If it's absolutely necessary to import the solutions and data files in such an interleaved order, one could create multiple packages, each containing a single solution and its associated data file, and import the packages one after the other in the desired order.
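For reference, the relevant portion of an ImportConfig.xml is roughly shaped like the sketch below (file names are placeholders and most attributes are omitted). The solutions and the data files live in two separate blocks, which is why an interleaved order can't be expressed inside one package:
<configdatastorage>
  <!-- Every solution listed here is imported first, in the order given -->
  <solutions>
    <configsolutionfile solutionpackagefilename="Solution1.zip" />
    <configsolutionfile solutionpackagefilename="Solution2.zip" />
  </solutions>
  <!-- Every data file listed here is imported afterwards, in the order given -->
  <filestoimport>
    <configimportfile filename="DataFile1.zip" ... />
    <configimportfile filename="DataFile2.zip" ... />
  </filestoimport>
</configdatastorage>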
I did some research, including looking at the official doc from Google, and I can't find a good explanation of what the go_package option is used for. The official doc states the following:
The .proto file should contain a go_package option specifying the full import path of the Go package that contains the generated code.
I guess what I'm confused with is what they mean by import path. It sounds more like an export path, as in where we want to place the generated code. But why would we need this if we can already specify the output path with --go_out=? So I'm having trouble grasping why you need to specify an import path in the proto file's go_package option and, at the same time, an output path with --go_out=.
Declaring the generated code's expected import path is important so that other protobuf files know how to refer to those types in the generated code.
If all of your protobuf definitions are declared in a single .proto file, the import path means little, because they implicitly share the same Go package. But as soon as you start storing and generating protobuf files across multiple packages, they need to know how to find each other.
Looking at the protobuf "Well Known" types is a good example of this:
https://github.com/protocolbuffers/protobuf/blob/master/src/google/protobuf/timestamp.proto
That file declares the following go_package option near the top:
option go_package = "google.golang.org/protobuf/types/known/timestamppb";
If I consume that message in another protobuf file using:
import "google/protobuf/timestamp.proto";
message MyModel {
  google.protobuf.Timestamp ts = 1;
}
The generated .pb.go file for MyModel will then contain an import statement at the top that looks like:
import timestamppb "google.golang.org/protobuf/types/known/timestamppb"
I trained my model in NVIDIA DIGITS 5 and I would now like to extract the accuracy and loss plots that were generated during training for a report. Is this data saved somewhere, so that it would be possible to extract it, plot it in Python, and perhaps ultimately modify the plots to compare different models, etc.?
The best solution I have found is either to look at the HTML file or to scan the text file caffe_output.log that is produced by Caffe. The text file is usually stored in /var/digits/jobs/insert_your_job_id/, but on Linux systems you can also just run:
locate caffe_output.log
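If you go the log-scanning route, here is a minimal sketch of pulling the loss curve out of caffe_output.log with a regular expression. The exact wording of the log lines varies between Caffe versions, so treat the pattern (and the placeholder path) as assumptions to adjust:
import re
import matplotlib.pyplot as plt

iterations, losses = [], []
# Assumed line format, e.g. "... Iteration 100, loss = 0.123" - adjust to your log
pattern = re.compile(r'Iteration (\d+).*?loss = ([\d.eE+-]+)')

with open('/var/digits/jobs/insert_your_job_id/caffe_output.log') as log:
    for line in log:
        match = pattern.search(line)
        if match:
            iterations.append(int(match.group(1)))
            losses.append(float(match.group(2)))

plt.plot(iterations, losses)
plt.xlabel('iteration')
plt.ylabel('loss')
plt.savefig('loss.png')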
Go to your DIGITS job folder and locate your job's subfolder. Inside you'll find a file status.pickle, which is a pickled object containing all your job's information.
You can load it in python like so:
import digits  # needed so pickle can resolve the DIGITS classes stored in the file
import pickle

data = pickle.load(open('status.pickle', 'rb'))
This object is somewhat generic and may contain multiple tasks. For a typical classification task it will likely be just one, but you will still need to access it via data.tasks[0]. From there you can grab the plots:
data.tasks[0].combined_graph_data()
which returns a somewhat convoluted dict (unfortunately, since your network can produce many accuracy/loss outputs, as well as custom ones). It contains everything you need, though. I managed to plot accuracy with:
import matplotlib.pyplot as plt

plt.plot(data.tasks[0].combined_graph_data()['columns'][2][1:])
but it's likely that you'll have to write a bit of custom code. As always, dir() is your friend.
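If you want to rebuild the plots outside DIGITS, a rough sketch along the lines below may help. It assumes each entry in 'columns' is a list whose first element is the series name (which matches the indexing used above); the exact layout of the dict can differ between DIGITS versions:
import pickle
import matplotlib.pyplot as plt

import digits  # importing digits lets pickle resolve the DIGITS job classes

data = pickle.load(open('status.pickle', 'rb'))
graph = data.tasks[0].combined_graph_data()

# Assumption: each column looks like [series_name, value, value, ...].
# Some columns may hold x-axis values (e.g. epochs); drop those if they
# show up as an unwanted series.
for column in graph['columns']:
    name, values = column[0], column[1:]
    plt.plot(values, label=name)

plt.xlabel('training progress')
plt.legend()
plt.savefig('training_curves.png')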
I am using django-import-export library to import several excel books. However, I have over 1,000 books that need to be imported into the db. Is there a way to select a folder to upload instead of selecting and uploading each individual file? I've worked through the tutorial found here: https://django-import-export.readthedocs.org/en/latest/getting_started.html#admin-integration
but I was unable to find the answer to my question.
Any help would be greatly appreciated.
Posting mainly for future viewers. Currently, django-import-export imports only the active/first sheet of a single Excel workbook. However, the code is easy enough to modify to work around this. In forms.py there is ImportForm, which is the form used when importing from the admin. Simply change the import_file field to something like this:
import_file = forms.FileField(
    widget=forms.ClearableFileInput(attrs={'multiple': True}),
    label=_('File to import'),
)
This form is used in admin.py to process the file data. Change the line that reads the uploaded file to something like:
import_files = request.FILES.getlist('import_file')
for import_file in import_files:
    ...
Now all that's left is to modify the import procedure in base_formats.py for the XLS and XLSX formats. The changes are nearly the same for both; I will outline the XLS one here.
Instead of taking the first sheet, run a for loop over the sheets and append the data to the dataset.
dataset = tablib.Dataset()
first_sheet = True  # headers are taken from the first sheet only
for sheet in xls_book.sheets():
    if first_sheet:
        dataset.headers = sheet.row_values(0)
        first_sheet = False
    # row 0 of every sheet is treated as a header row and skipped
    for i in moves.range(1, sheet.nrows):
        dataset.append(sheet.row_values(i))
return dataset
For XLSX, the loop will run over xlsx_book.worksheets; the rest is similar to the XLS case, along the lines of the sketch below.
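A rough sketch of the equivalent XLSX loop, assuming the openpyxl worksheet API (attribute and method names may differ between openpyxl and django-import-export versions):
dataset = tablib.Dataset()
first_sheet = True
for sheet in xlsx_book.worksheets:
    rows = iter(sheet.rows)
    header_row = next(rows)  # row 0 of every sheet is treated as a header row
    if first_sheet:
        dataset.headers = [cell.value for cell in header_row]
        first_sheet = False
    for row in rows:
        dataset.append([cell.value for cell in row])
return dataset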
This will allow you to select multiple Excel workbooks and import all the sheets of each workbook. I know the ideal solution would be to import a zip file and create all the data with a single bulk_create, but this serves well for now.
I have to import my test cases, which are in Excel, into TestLink. How can I do it? Please help me out with this. Thanks in advance.
There's a decent macro available here: http://testlink-import.blogspot.co.uk/2009/11/test-link-import.html
It'll output an XML file, but you'll probably find it doesn't import without some tweaking. I found it necessary to remove the <testsuite> tags and to add a <testcases> wrapper.
Also, if you add extra custom fields, you may find extra <customfields> tags.
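Roughly, the target shape is something like the sketch below. The inner elements are just illustrative here; they depend on the columns the macro emits and on your TestLink version:
<testcases>
  <testcase name="My first test case">
    <summary>...</summary>
    <steps>...</steps>
  </testcase>
</testcases>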
I recommend running the macro with just one row of data and making sure you can import that first, then trying to do it in bulk. I still ended up with 1 record that just wouldn't import and I never got to the bottom of it, but the process was quicker overall than manually entering 190 test cases.
Is there any way to tell LESS to import different external files based on something programmatically?
I process my LESS with dotless, but can't find anything that hints towards this being possible.
EDIT: Also, just so this info is available: I'm aware of how to import specific files into another file. My question is about the ability to choose which file to import based on some other piece of data: the user's role, the status of something, etc.
Is there a reason you cannot just do something like this while generating your LESS file with ASP:
' Illustrative VBScript/ASP sketch; the role check and file names are placeholders
If userRole = "userRole1" Then
    ' emit the imports for this role
    Response.Write "@import ""role1.less"";"
Else
    ' emit the default imports
    Response.Write "@import ""default.less"";"
End If
In other words, generate your LESS file with ASP, accessing whatever data you need, have it output the LESS code with the desired imports, and then LESS (dotless) takes over to build the CSS.