Cannot Insert header using the before_import override function in django-import-export

I'm using django-import-export to upload csv files through django admin. I have the ability to override the before_import function to add functionality before the import. I have a csv file with no headers, and the actual data starts on line one. I need to add a header, or insert a row before my csv file is uploaded, so that it can be read properly.
class UpdateResource(resources.ModelResource):

    def before_import(self, dataset, using_transactions, dry_run, **kwargs):
        dataset.headers = ['sku', 'quantity']

    class Meta:
        model = Upload
        import_id_fields = ('sku',)
This code changes the value of the first row of my csv file to sku,quantity, but I need to insert one above that value, not replace it. Alternatively, if there is an option to ignore headers and just map the values to my model from left to right or something, that would be great too.

My fix was to store the first row as a variable, create the desired header, and append the first row to the end of the dataset.
class UpdateResource(resources.ModelResource):

    def before_import(self, dataset, using_transactions, dry_run, **kwargs):
        # the first CSV line was parsed as the header row, so keep it
        first_row = dataset.headers
        # set the real header, then re-add the original first line as data
        dataset.headers = ['sku', 'quantity']
        dataset.append(first_row)
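If row order matters, a small variation is to push the old header row back to the front of the data instead of the end. This is only a sketch and assumes the tablib Dataset passed to before_import supports insert() (it does in current tablib releases):

class UpdateResource(resources.ModelResource):

    def before_import(self, dataset, using_transactions, dry_run, **kwargs):
        first_row = dataset.headers            # first CSV line, parsed as the header
        dataset.headers = ['sku', 'quantity']
        dataset.insert(0, first_row)           # keep it as the first data row

    class Meta:
        model = Upload
        import_id_fields = ('sku',)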

Related

Laravel Excel Dynamic Heading Row selection

I'm currently working on a Laravel project using the Laravel-Excel package.
It is working fine, except for a use case I've been trying to solve for a few hours now.
Each CSV file I'm reading begins with the heading row, and that's particularly practical, but some of my CSV files begin with an annotation like #CSV DD V2.4.3 at row 1, followed by the heading row at row 2.
So, I need to find out how to determine the real line where the heading row is located, to avoid that unwanted line. I especially need to make it work in the headingRow() method implemented by the WithHeadingRow interface. Is there a way to grab lines from the csv file to determine the correct heading row line?
Hope you can help, thanks in advance.
I would consider doing this in two steps. This is a very general example, so you will need to add in the appropriate Laravel Excel import concern and other dependencies, perform any checks to make sure the file exists, and so on.
Check the first line of the file before starting the import to determine the correct heading row.
// get the first line of the file
$line = fgets(fopen('path/to/file.csv', 'r'));
// set the heading row based on the line contents
// add terms to search array if desired ['#CSV','...']
$heading_row = Str::startsWith(trim($line), ['#CSV']) ? 2 : 1;
Dynamically inject the heading row value using the WithHeadingRow concern and associated method.
class SampleImport implements WithHeadingRow
{
    private $heading_row;

    // inject the desired heading row or default to the first row
    public function __construct($heading_row = 1)
    {
        $this->heading_row = $heading_row;
    }

    // set the heading row
    public function headingRow(): int
    {
        return $this->heading_row;
    }
}
Now you can pass a custom heading row when running the import.
Excel::import(new SampleImport($heading_row), 'path/to/file.csv');

How to export lines db table to csv file with phpspreadsheet

In a Laravel 5.7 app with "phpoffice/phpspreadsheet": "^1.6" I need to export db table rows to a csv file, like:
$dataLines = [
    ['field1' => '000000000',   'field2' => 1111111111111],
    ['field1' => '11000000000', 'field2' => 221111111111111],
    ['field1' => '31000000000', 'field2' => 321111111111111],
];

$spreadsheet = new \PhpOffice\PhpSpreadsheet\Spreadsheet();
$writer = new \PhpOffice\PhpSpreadsheet\Writer\Csv($spreadsheet);
$writer->save('/path/12345.csv');
But with the code above I get an empty file, and I could not find a way to write the contents of the $dataLines array.
Also, since I need to write the first row as the field names and the following rows from the db,
do I have to prepare the first row with the field names and the remaining rows (values only, without field names) manually?
Are there methods to do this automatically?

How to update invoice.line quantity ?

I am trying to create a button on the invoice that will update a certain field in the invoice lines. I've found how to update a field on account.invoice, but I am struggling to find the right way to update it on account.invoice.line.
class accountinvoiceext(models.Model):
    _inherit = ['account.invoice']

    @api.one
    def my_button(self, uid):
        invoice_id = self.id
        # lines = getinvoicelinesbyid(invoice_id)
I am sure there is a proper way to get the invoice.line records related to this invoice, or is there not?
I've tried inheriting account.invoice.line, but then I cannot define the button there.
Second question: what is the best way to call some function every time an invoice is created?
If you want to add a button that changes the lines, you need to loop over the one2many field on the invoice and change @api.one to @api.multi, for example:
@api.multi
def my_button(self):
    for line in self.invoice_line:
        line.write({'your_field': 'your_values'})
And if you want to call this function every time an invoice is created, you need to override the create method:
@api.model
def create(self, values):
    # run the button logic on every newly created invoice
    res = super(accountinvoiceext, self).create(values)
    res.my_button()
    return res

Why does my CursorPagination class always return the same previous link?

Trying to paginate a large queryset so I can return to the same position I was in previously even if data has been added to the database.
Currently I have as my pagination class:
from rest_framework.pagination import CursorPagination
class MessageCursorPagination(CursorPagination):
    page_size = 25
    ordering = '-date'
In my View I have:
from rest_framework.generics import GenericAPIView
from rest_framework.permissions import IsAuthenticated
from rest_framework.authentication import TokenAuthentication, BasicAuthentication

class MessageViewSet(GenericAPIView):
    permission_classes = (IsAuthenticated,)
    authentication_classes = (TokenAuthentication,)
    pagination_class = pagination.MessageCursorPagination
    serializer_class = serializers.MessageSerializer

    def get(self, request, **kwargs):
        account_id = kwargs.get('account_id', None)
        messages = models.Message.objects.filter(account=account_id)
        paginated_messages = self.paginate_queryset(messages)
        results = self.serializer_class(paginated_messages, many=True).data
        response = self.get_paginated_response(results)
        return response
While testing to see if I'd set it up right, I got the results I was expecting with a next link and a null for the previous link.
After going to the next link I get a new next link, the next set of results, and a previous link.
When I continue to the next link, I get the same previous link as before, but with a new next link and the next set of data.
No matter how many times I follow the next link, the previous link remains the same.
Why doesn't the previous link update?
-- Update --
It looks like the cause of my issue is that I have a lot of messages on the same date. Since the ordering is by date, stepping back moves to the date before the current cursor. How can I order by date but still step through the list with cursor pagination the way I would with ids?
From the Documentation
Proper usage of cursor pagination should have an ordering field that satisfies the following:
Should be an unchanging value, such as a timestamp, slug, or other field that is only set once, on creation.
Should be unique, or nearly unique. Millisecond precision timestamps are a good example. This implementation of cursor pagination uses a smart "position plus offset" style that allows it to properly support not-strictly-unique values as the ordering.
Should be a non-nullable value that can be coerced to a string.
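In practice that means the ordering field should be (nearly) unique. A minimal sketch, assuming the Message model has an auto primary key to use as a tie-breaker; DRF's CursorPagination accepts a tuple of ordering fields, though the cursor position is derived from the first field, so a unique creation timestamp is the cleanest choice if the model has one:

from rest_framework.pagination import CursorPagination

class MessageCursorPagination(CursorPagination):
    page_size = 25
    # order primarily by date, then break ties with the unique primary key
    # so rows that share a date still have a stable, repeatable order
    ordering = ('-date', '-id')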

django-import-export how to merge/append instead of update for specific fields

I was able to get access to the new row data and existing instance by overriding import_obj.
def import_obj(self, instance, row, dry_run):
    super(RelationshipResource, self).import_obj(instance, row, dry_run)
    for field in self.get_fields():
        if isinstance(field.widget, widgets.ManyToManyWidget):
            tags = []
            for tag in instance.tagtag.all():
                tags.append(tag.name)
            tags.extend(row['tagtag'].split(','))  # concat existing and new tagtag list
            row['tagtag'] = ', '.join(tags)        # set as the new import value
            # leave the actual save to save_m2m
            continue
        self.import_field(field, instance, row)
However, somewhere else in the import workflow the values are compared. Since the new concatenated value contains the original value, the field is not updated; the import thinks there is no change.
How can I save the instance with the full concatenated values?
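One place such a comparison happens is the resource's skip-unchanged logic, which diffs the prepared instance against the original and can skip rows it considers unchanged. A rough sketch of forcing the save follows; note this is hedged: the skip_row() signature has changed between django-import-export versions, and the model name below is only assumed from the resource name above:

class RelationshipResource(resources.ModelResource):

    class Meta:
        model = Relationship      # assumed model name, adjust to your app
        skip_unchanged = False    # never treat rows as "unchanged"

    def skip_row(self, instance, original):
        # force every row through to save() / save_m2m(), even when the
        # diff logic sees no difference in the compared fields
        return False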
