I'm trying to implement the LZMA compression/decompression algorithm on the MPC5748G, but I need an example of how to use more than one core, since there are two 160 MHz cores.
I'm using LZMA to reduce flashing time: the file is first compressed and then sent to the MPC, which should decompress it and perform the flashing operation.
The algorithm needs to run on a separate core because the other core is busy with a lot of tasks. The results so far are not very good and the decompression takes too much time.
As @marcus commented: the problem is not writing an LZMA decoder, but running it on a different core.
Any help with using the other core would be very welcome.
How about Core_Boot(void)?
/*******************************************************************************
Function Name : Core_Boot
Engineer : Lukas Zadrapa
Date : Apr-20-2016
Parameters : NONE
Modifies : NONE
Returns : NONE
Notes : Start e200z4b and e200z2 cores
Issues : NONE
*******************************************************************************/
void Core_Boot(void)
{
    /* Enable e200z4b and e200z2 cores in RUN0-RUN3, DRUN and SAFE modes */
    MC_ME.CCTL[2].R = 0x00FC;                    /* e200z4b is active */
    MC_ME.CCTL[3].R = 0x00FC;                    /* e200z2 is active */

    /* Set start address for e200z4b and e200z2 cores */
    MC_ME.CADDR[2].R = E200Z4B_BOOT_ADDRESS | 1; /* e200z4b boot address + RMC bit */
    MC_ME.CADDR[3].R = E200Z2_BOOT_ADDRESS | 1;  /* e200z2 boot address + RMC bit */

    /* Mode change - re-enter the DRUN mode to start cores */
    MC_ME.MCTL.R = 0x30005AF0;                   /* Mode & Key */
    MC_ME.MCTL.R = 0x3000A50F;                   /* Mode & Key inverted */
    while (MC_ME.GS.B.S_MTRANS == 1);            /* Wait for mode entry complete */
    while (MC_ME.GS.B.S_CURRENT_MODE != 0x3);    /* Check DRUN mode entered */
} /* Core_Boot */
Do you need to exchange data between cores? Regards
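If you do need to hand data between the cores (for example, passing compressed chunks to the core running the decoder), a simple shared-memory mailbox is often enough. The following is only a minimal sketch: the section name, buffer size and identifiers are placeholders, the structure must be linked into a RAM region both cores can see, and production code should add proper memory barriers or use the device's hardware semaphore module instead of plain spinning.

/* Minimal cross-core mailbox sketch; section name, sizes and identifiers are placeholders. */
typedef struct {
    volatile unsigned int data_ready;    /* set by the producer, cleared by the consumer */
    volatile unsigned int length;        /* number of valid bytes in buffer */
    unsigned char         buffer[4096];  /* compressed chunk handed to the decoder core */
} core_mailbox_t;

/* Must be linked into a RAM region that both cores can access. */
__attribute__((section(".shared_ram"))) core_mailbox_t g_mailbox;

/* Producer side (core handling communication/flashing tasks). */
void mailbox_post(const unsigned char *src, unsigned int len)
{
    while (g_mailbox.data_ready)
        ;                                 /* wait until the previous chunk was consumed */
    for (unsigned int i = 0; i < len; i++)
        g_mailbox.buffer[i] = src[i];
    g_mailbox.length = len;
    /* a real implementation needs a memory barrier (or a hardware semaphore) here */
    g_mailbox.data_ready = 1u;            /* publish the chunk */
}

/* Consumer side (core started by Core_Boot(), running the LZMA decoder). */
unsigned int mailbox_fetch(unsigned char *dst)
{
    while (!g_mailbox.data_ready)
        ;                                 /* spin until a chunk is available */
    unsigned int len = g_mailbox.length;
    for (unsigned int i = 0; i < len; i++)
        dst[i] = g_mailbox.buffer[i];
    g_mailbox.data_ready = 0u;            /* acknowledge, producer may post again */
    return len;
}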
I recently started learning the Direct Rendering Manager and tried to use the writeback connector with libdrm.
There are some documents and code showing how the kernel implements it, but not much on how userspace uses the API, such as drmModeAtomicAddProperty and drmModeAtomicCommit for the writeback connector in libdrm.
I have referred to libdrm/tests/modetest, the Linux API Reference, linux/v6.0-rc5/source/drivers/gpu/drm/drm_writeback.c and some patch information.
I used modetest to get some driver information and tried the following code using libdrm:
/* I previously got the writeback connector id: wbc_conn_id,
 * created an output framebuffer: fb_id,
 * and found the active CRTC: crtc_id.
 * Next, get the writeback connector properties. */
props = drmModeObjectGetProperties(fd, wbc_conn_id, DRM_MODE_OBJECT_CONNECTOR);
printf("the number of properties in connector %d : %d \n", wbc_conn_id, props->count_props);
writeback_fb_id_property = get_property_id(fd, props, "WRITEBACK_FB_ID");
writeback_crtc_id_property = get_property_id(fd, props, "CRTC_ID");
printf("writeback_fb_id_property: %d\n", writeback_fb_id_property);
printf("writeback_crtc_id_property: %d\n", writeback_crtc_id_property);
drmModeFreeObjectProperties(props);

/* atomic writeback connector update */
req = drmModeAtomicAlloc();
ret = drmModeAtomicAddProperty(req, wbc_conn_id, writeback_crtc_id_property, crtc_id);
printf("%d\n", ret);    /* returns 1 */
ret = drmModeAtomicAddProperty(req, wbc_conn_id, writeback_fb_id_property, buf.fb_id);
printf("%d\n", ret);    /* returns 2 */
ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);
if (ret) {
    fprintf(stderr, "Atomic Commit failed [1]\n");
    return 1;
}
drmModeAtomicFree(req);
printf("drmModeAtomicCommit Set Writeback\n");
getchar();
It turns out that drmModeAtomicCommit fails. Is any property set wrongly, or is one missing? The two drmModeAtomicAddProperty calls return 1 and 2, and the atomic commit returns EINVAL (-22).
I've looked around but found no solution or similar question about setting the properties of a writeback connector.
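One way to narrow down such an EINVAL (a sketch reusing fd and req from the snippet above; not a confirmed fix) is to issue a TEST_ONLY commit, which asks the kernel to validate the requested state without applying it. Doing this after each drmModeAtomicAddProperty call can reveal which property triggers the rejection:

/* Sketch: validate the same atomic state without applying it. */
ret = drmModeAtomicCommit(fd, req,
                          DRM_MODE_ATOMIC_TEST_ONLY | DRM_MODE_ATOMIC_ALLOW_MODESET,
                          NULL);
if (ret)
    fprintf(stderr, "TEST_ONLY commit rejected: %d (%s)\n", ret, strerror(-ret));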
I referred to the two links below to use huge pages in my Linux driver:
Sequential access to hugepages in kernel driver
http://nuncaalaprimera.com/2014/using-hugepage-backed-buffers-in-linux-kernel-driver
Below is my code:
#define PAGE_SHIFT_2M 21

pages = vmalloc(nr_pages * sizeof(struct page *));

down_read(&current->mm->mmap_sem);
get_nr_pages = get_user_pages(current, current->mm, buffer_start, nr_pages,
                              1 /* Write enable */, 0 /* Force */, pages, NULL);
up_read(&current->mm->mmap_sem);

nid = page_to_nid(pages[0]);    /* Remap on the same NUMA node. */
remapped_addr = vm_map_ram(pages, nr_pages, nid, PAGE_KERNEL);

printk("page pfn [0]=%lX, [1]=0x%lX, [2]=0x%lX\n",
       page_to_pfn(pages[0]),
       page_to_pfn(pages[1]),
       page_to_pfn(pages[2]));
printk("page physical [0]=%lX, [1]=0x%lX, [2]=0x%lX\n",
       page_to_pfn(pages[0]) << PAGE_SHIFT_2M,
       page_to_pfn(pages[1]) << PAGE_SHIFT_2M,
       page_to_pfn(pages[2]) << PAGE_SHIFT_2M);
printk("page logical addr [0]=%p, [1]=%p, [2]=%p\n",
       __va(page_to_pfn(pages[0]) << PAGE_SHIFT_2M),
       __va(page_to_pfn(pages[1]) << PAGE_SHIFT_2M),
       __va(page_to_pfn(pages[2]) << PAGE_SHIFT_2M));
printk("page_address [0]=%p, [1]=%p, [2]=%p\n",
       page_address(pages[0]),
       page_address(pages[1]),
       page_address(pages[2]));
Log print:
page pfn [0]=154A00, [1]=0x154A01, [2]=0x154A02
page physical [0]=2A940000000, [1]=0x2A940200000, [2]=0x2A940400000
page logical addr [0]=ffff8aa940000000, [1]=ffff8aa940200000, [2]=ffff8aa940400000
page_address [0]=ffff880154a00000, [1]=ffff880154a01000, [2]=ffff880154a02000
I have several questions:
1) I'm wondering whether vm_map_ram() can work with huge pages. From the kernel source code, I can see that vm_map_ram() uses PAGE_SIZE and PAGE_SHIFT, whose values correspond to the default 4 KB page size.
In my case, after writing to the virtual address returned from vm_map_ram(), I hit a "BUG: unable to handle kernel paging request at XXXX" error.
2) The page_address() return values for consecutive pages are 0x1000 (4 KB) apart, not 2 MB apart. Why is that?
3) Did I use "__va(page_to_pfn(pages[0])<<PAGE_SHIFT_2M)" correctly to get the kernel virtual address?
Thanks in advance!
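Regarding questions 2) and 3) above, here is a small sanity check on the arithmetic (a sketch only, assuming the standard 4 KB base pages of x86-64): page_to_pfn() counts 4 KB page frames even when the frame belongs to a 2 MB huge page, so the physical address of a page is pfn << PAGE_SHIFT (12), not pfn << 21.

/* Sketch: PFN arithmetic with 4 KiB base pages (x86-64 assumption). */
phys_addr_t phys0 = (phys_addr_t)page_to_pfn(pages[0]) << PAGE_SHIFT; /* 0x154A00000 */
phys_addr_t phys1 = (phys_addr_t)page_to_pfn(pages[1]) << PAGE_SHIFT; /* 0x154A01000 */
/* Consecutive struct pages differ by 0x1000, which matches the page_address()
 * values in the log; shifting by PAGE_SHIFT_2M (21) instead produces the
 * 0x2A9xxxxxxxx numbers that do not match __va()/page_address(). */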
I want to know how to configure pin multiplexing in the initial phase of boot, i.e. in the SPL (MLO).
What I am trying to do is change the default pin configuration to a GPIO one, so that I can see a high or low level on the pin.
On the P8 header I tried to change the pin from its default mode 0 ('TIMER4') to gpio2[2], i.e. mode 7. So I did this:
static struct module_pin_mux gpio2_2_pin_mux[] = {
    {OFFSET(gpmc_wen), (MODE(7) | PULLUDEN)},
    {-1},
};
and called this function
configure_module_pin_mux(gpio2_2_pin_mux);
in board/ti/am335x/mux.c
But I didn't see any voltage on pin 7 of the P8 header.
What is the correct way to do this?
file link : http://textuploader.com/5eh6u
You can search for '?' in the file to see what I added.
P.S.
As a cross-check, I looked at the pin mux setting of uart0 and tried to read it back to see whether it matches.
So I wrote this in
./arch/arm/cpu/armv7/omap-common/boot-common.c
void spl_board_init(void)
{
    /*
     * Save the boot parameters passed from romcode.
     * We cannot delay the saving further than this,
     * to prevent overwrites.
     */
    save_omap_boot_params();

    unsigned int mfi;   /* value of the control-module register at 0x44E10980 */

    mfi = *(volatile unsigned int *)(0x44E10980);
    printf("1======> %x\n", mfi);

    /* Prepare console output */
    preloader_console_init();   /* prints the U-Boot version, date and time */

    mfi = *(volatile unsigned int *)(0x44E10980);
    printf("2======> %x\n", mfi);

    /* more init code ... */
}
I wanted to see this setting, which is done in board/ti/am335x/mux.c:
static struct module_pin_mux uart0_pin_mux[] = {
    {OFFSET(uart0_rxd), (MODE(0) | PULLUP_EN | RXACTIVE)}, /* UART0_RXD */
    {OFFSET(uart0_txd), (MODE(0) | PULLUDEN)},             /* UART0_TXD */
    {-1},
};
But it printed the value 37, which means the pin is in GPIO mode.
How is it possible that a pin that should be in mode 0 is in mode 7?
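One way to see what actually landed in the pad register is to read it back after configure_module_pin_mux(), the same way as the uart0 check above. This is only a sketch; it assumes U-Boot's OFFSET() macro yields the byte offset of the pad register from the control-module base at 0x44E10000, which is how configure_module_pin_mux() appears to use it, so verify the exact address against the TRM.

/* Sketch: read back the conf_gpmc_wen pad register and print its mux mode. */
unsigned int val = *(volatile unsigned int *)(0x44E10000 + OFFSET(gpmc_wen));
printf("conf_gpmc_wen = 0x%08x (mux mode = %u)\n", val, val & 0x7);

Also worth checking: muxing the pad to mode 7 only routes it to the GPIO module; to actually drive a level you still need to enable the GPIO block's clock and set the pin's direction and output value in the GPIO registers.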
In my QEMU-based project (system emulation) I analyse various kernel structures of the guest Linux. To read guest virtual memory I use the cpu_memory_rw_debug() function.
In particular, I search for the struct module linked list in the kernel memory using some kind of heuristic.
Let's assume that the relevant part of an element in this list looks like this:
--------------------- ---------------------
| prev = 0xc1231234 | | prev = 0xc5675678 |
--------------------- ---------------------
| next = 0xc1122334 | | next = 0xc5566778 |
--------------------- ---------------------
| etc. | | etc. |
--------------------- ---------------------
When QEMU emulates x86 or ARM, prev/next pointers can be accessed by cpu_memory_rw_debug() and they actually point to previous/next list elements.
However, when QEMU emulates MIPS, I observe the following strange behavior: while prev/next pointers look like a valid kernel pointers in every element in the list, I cannot access their pointees by means of cpu_memory_rw_debug(), because finding the corresponding physical address fails: the access permissions are ok, the virtual CPU is in kernel mode, but tlb->map_address() fails.
Since I can't walk through the linked list, I tried to find the elements one by one - just to see what their prev/next pointers look like - and I did find all of them, but they all reside at 0xAxxxxxxx addresses, not at 0xCxxxxxxx as the prev/next pointers imply.
The function r4k_map_address(), which performs the physical address lookup, looks like this (only the relevant excerpt):
#define KSEG0_BASE 0x80000000UL
#define KSEG1_BASE 0xA0000000UL
#define KSEG2_BASE 0xC0000000UL
#define KSEG3_BASE 0xE0000000UL
//..............
    if (address < (int32_t)KSEG1_BASE) {
        /* kseg0 */
        if (kernel_mode) {
            *physical = address - (int32_t)KSEG0_BASE;
            *prot = PAGE_READ | PAGE_WRITE;
        } else {
            ret = TLBRET_BADADDR;
        }
    } else if (address < (int32_t)KSEG2_BASE) {
        /* kseg1 */
        if (kernel_mode) {
            *physical = address - (int32_t)KSEG1_BASE;
            *prot = PAGE_READ | PAGE_WRITE;
        } else {
            ret = TLBRET_BADADDR;
        }
    } else if (address < (int32_t)KSEG3_BASE) {
        /* sseg (kseg2) */
        if (supervisor_mode || kernel_mode) {
            ret = env->tlb->map_address(env, physical, prot, real_address, rw, access_type);
        } else {
            ret = TLBRET_BADADDR;
        }
That is, on MIPS the 0xC0000000...0xE0000000 range is mapped differently from the lower kernel ranges.
If I replace the TLB access with *physical = address - (int32_t)KSEG1_BASE direct mapping, I get the things working, but certainly that's not the solution.
Does it look like QEMU-related issue or a MIPS-related one? I'd appreciate any idea or debugging direction.
The bottom line is that cpu_memory_rw_debug() doesn't work reliably in qemu-system-mips.
The reason is that QEMU emulates MIPS software-managed TLB. With this approach, whenever virtual->physical address mapping does not exist in the TLB cache, QEMU emulates "TLB-miss" exception, which should be handled by the OS. It is OS responsibility to walk through the page directory and fill the TLB -- QEMU (just like real MIPS) won't do that.
While this approach works for the guest code, it results in an inability to read guest virtual memory using cpu_memory_rw_debug() - it doesn't work reliably for mapped segments.
As for the question why kernel structs that actually reside in KSEG2 were observed in KSEG1 - that's just because some virtual ranges of KSEG1 and KSEG2 correspond to the same physical pages.
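To make that last point concrete, here is the fixed-segment arithmetic with made-up example addresses (illustration only, standard MIPS32 layout):

/* Illustration: unmapped segments use fixed arithmetic, KSEG2 goes through the TLB. */
uint32_t va_kseg1 = 0xA0154A00;              /* example address, made up       */
uint32_t phys     = va_kseg1 - 0xA0000000;   /* KSEG1: phys = 0x00154A00       */
/* If the kernel's TLB also maps some KSEG2 page (>= 0xC0000000) onto physical
 * 0x00154A00, the same bytes are reachable through two different virtual
 * addresses - which is why the list elements were found at 0xAxxxxxxx while
 * their prev/next pointers point into 0xCxxxxxxx. */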
What is the equivalent of msync (the Unix system call) on Windows? I am looking for an MSDN API in the C/C++ space.
More info on msync can be found at http://opengroup.org/onlinepubs/007908799/xsh/msync.html
FlushViewOfFile
Check out the Python 2.6 mmapmodule.c for an example of FlushViewOfFile and msync in use:
/*
/ Author: Sam Rushing <rushing@nightmare.com>
/ Hacked for Unix by AMK
/ $Id: mmapmodule.c 65859 2008-08-19 17:47:13Z thomas.heller $
/ Modified to support mmap with offset - to map a 'window' of a file
/ Author: Yotam Medini yotamm@mellanox.co.il
/
/ mmapmodule.cpp -- map a view of a file into memory
/
/ todo: need permission flags, perhaps a 'chsize' analog
/ not all functions check range yet!!!
/
/
/ This version of mmapmodule.c has been changed significantly
/ from the original mmapfile.c on which it was based.
/ The original version of mmapfile is maintained by Sam at
/ ftp://squirl.nightmare.com/pub/python/python-ext.
*/
static PyObject *
mmap_flush_method(mmap_object *self, PyObject *args)
{
    Py_ssize_t offset = 0;
    Py_ssize_t size = self->size;
    CHECK_VALID(NULL);
    if (!PyArg_ParseTuple(args, "|nn:flush", &offset, &size))
        return NULL;
    if ((size_t)(offset + size) > self->size) {
        PyErr_SetString(PyExc_ValueError, "flush values out of range");
        return NULL;
    }
#ifdef MS_WINDOWS
    return PyInt_FromLong((long) FlushViewOfFile(self->data+offset, size));
#elif defined(UNIX)
    /* XXX semantics of return value? */
    /* XXX flags for msync? */
    if (-1 == msync(self->data + offset, size, MS_SYNC)) {
        PyErr_SetFromErrno(mmap_module_error);
        return NULL;
    }
    return PyInt_FromLong(0);
#else
    PyErr_SetString(PyExc_ValueError, "flush not supported on this system");
    return NULL;
#endif
}
UPDATE:
I don't think you are going to find complete parity in the win32 mapped file APIs. The FlushViewOfFile API doesn't have a synchronous flavor (probably because of the possible impact of the cache manager). If precise control over when data is written to disk is required perhaps you can use the FILE_FLAG_NO_BUFFERING and FILE_FLAG_WRITE_THROUGH flags with the CreateFile API when you create the handle to your mapped file?
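For illustration, creating the handle that way might look roughly like this (a sketch, untested; the path is a placeholder, and FILE_FLAG_NO_BUFFERING additionally imposes sector-aligned I/O on regular reads and writes):

#include <windows.h>

/* Sketch: open the file with write-through semantics before mapping it. */
HANDLE hFile = CreateFileW(L"C:\\data\\mapped.bin",   /* placeholder path */
                           GENERIC_READ | GENERIC_WRITE,
                           0,                          /* no sharing */
                           NULL,                       /* default security */
                           OPEN_EXISTING,
                           FILE_ATTRIBUTE_NORMAL | FILE_FLAG_WRITE_THROUGH,
                           NULL);
/* CreateFileMapping / MapViewOfFile would then follow as usual. */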
The Windows equivalent for flushing an entire file mapping is:
void FlushToHardDrive(LPVOID fileMapAddress, HANDLE hFile)
{
    FlushViewOfFile(fileMapAddress, 0); // Async flush of all dirty pages in the view
    FlushFileBuffers(hFile);            // Flush metadata and wait for completion
}
And for flushing part of a file mapping:
void FlushToHardDrive(LPVOID address, DWORD size, HANDLE hFile)
{
    FlushViewOfFile(address, size);     // Async flush of the region
    FlushFileBuffers(hFile);            // Flush metadata and wait for completion
}
This is described in MSDN here
The FILE_FLAG_NO_BUFFERING flag actually does nothing for memory-mapped files (described here). Also, even if the file handle is created with these flags, the file metadata can be cached and not flushed, so FlushFileBuffers is always required for both regular I/O and memory-mapped I/O if you want to be completely sure that all data (including the file access time) is saved. This behavior is described here.
P.S. A real-world example: SQLite uses memory-mapped files almost exclusively for reading, so if you use memory-mapped files for writes/updates you need to understand all the side effects of this scenario.
I suspect FlushViewOfFile actually is the right thing. When I read the man page for msync, I would not assume that it is actually flushing the disk cache (the cache in the disk unit, as opposed to the system cache in main memory).
FlushViewOfFile won't return until the disk stack has completed the writes; like the msync documentation, it says nothing about what happens in the disk cache. We should probably take a look at making that more clear in the documentation.