I designed some logic in MATLAB/Simulink and now I want to import a MicroBlaze into it, which will handle communication over a serial port and will also set some parameters inside the logic through register blocks. I created the MicroBlaze in XPS and then exported it to MATLAB/Simulink through the EDK Processor block and HDL netlisting. I also added shared memories in the EDK Processor block. Everything works fine until I try to create a hw-cosim block. Then I get this error:
Begin generation
Checking model status
Checking simulation times
Performing compilation and generation
* ERROR *
Errors occurred during netlist generation.
Reference to non-existent field 'memmap_info'.
Any help will be highly appreciated. :)
My configuration:
MATLAB 2011a
Windows 7
ISE 13.4 Design Edition
Thanks,
Ondrej
You might want to check:
Upgrade your ISE version.
Verify that the generated instances have the proper names after creating them.
I am working with a Zynq board where a custom AXI4-Lite slave peripheral was created and then added from the IP repository. The blocks were successfully connected with Run Connection Automation, and the bitstream was generated successfully.
Then the SDK was launched. There was already a blank C project with simple code running on the Zynq PS. This code was altered by following the PDF "Designing a Custom AXI4-Lite Slave Peripheral" (the one shown in the following image).
Write and read functions for the custom AXI slave peripheral
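In essence, the write and read functions from that guide boil down to Xil_Out32/Xil_In32 calls on the peripheral's register addresses. A minimal sketch of what they do is below (the base-address macro and register offset are placeholders; the real names come from xparameters.h and depend on the IP instance name in the block design):

#include "xparameters.h"
#include "xil_io.h"
#include "xil_printf.h"

/* Placeholder names; the actual macro is generated in xparameters.h
   from the IP instance name in the block design. */
#define MYIP_BASEADDR     XPAR_MYIP_0_S00_AXI_BASEADDR
#define MYIP_REG0_OFFSET  0x00

int main(void)
{
    u32 readback;

    /* write a value to slave register 0 of the custom AXI4-Lite peripheral */
    Xil_Out32(MYIP_BASEADDR + MYIP_REG0_OFFSET, 0x01234);

    /* read it back over the same AXI4-Lite interface */
    readback = Xil_In32(MYIP_BASEADDR + MYIP_REG0_OFFSET);
    xil_printf("slv_reg0 = 0x%08x\r\n", (unsigned int)readback);

    return 0;
}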
Now the SDK executes without any error, but when I observe the addresses in the SDK memory monitor, no data has been written to them (as shown in the following image).
Where could I have gone wrong or what have I missed?
Working with VHDL on Vivado 2016.2.
What I have already tried:
- Writing from the XSDB console with the command
mwr -force 0x43C00000 0x01234
but there was no change.
- Checked that the Vivado Address Editor contains the same base address.
- Included xparameters.h.
Thank you very much in advance..
Update: the xparameters.h file did not have the same base and high addresses as the Vivado Address Editor, so I tried changing the 'memory region' in the linker script from DDR to RAM.
Now, when I watch the 'Variables' window and click the 'Step Into' button, I do get the expected change in values.
The XSDB console output and Memory monitor output remain unchanged though.
The hardware platform specification file does show the custom AXI4-Lite peripheral with the expected base and high addresses.
Hardware_platform specified
One of the reasons causing this problem was that the hardware platform associated with the debug configuration was different from the one you actually want to use.
As you make changes to the IPs, update them, and export the bitstream to the SDK, a new hardware platform gets created. Say the older one is TOP_WRAPPER_hw_platform_0; a new one is now created as TOP_WRAPPER_hw_platform_1.
This new platform must be selected under 'Hardware platform' in the debug configuration settings.
Furthermore, in the debug configuration settings, the following needed to be ticked:
Under Target Setup
Reset entire system
Program FPGA
Under Application tab
Download Application
Stop at 'main'
I'm setting up a small Windows cluster for parallel speedup of my Julia code (2x32 cores).
I have following questions:
Is there a way to suppress loading of a module (e.g. "using PyPlot") on the remote machines? In my code, I use my workstation for initialization and data presentation, whereas the cluster is used for heavy calculation without any need for PyPlot, DataFrames, etc.
This code loading on the remote machines is even more annoying because PyPlot (and every other package) fails to populate the help database and gives the following error message (actually many errors, one from every worker):
exception on : 1: 1ERROR: opening file C:\Users\phlavenk\AppData\Local\Julia-0.3.6\bin/../share/julia\helpdb.jl: No such file or directory
Running on Julia 0.3.6 / x64 / Windows 7, with identical directory structures and versions everywhere.
My addprocs command is the following:
addprocs(machines,
sshflags=`-i c:\\cygwin64\\home\\phlavenk\\.ssh\\id_rsa`,
dir=`/cygdrive/c/Users/phlavenk/AppData/Local/Julia-0.3.6/bin`,
tunnel=true)
Thank you very much for your advice
"using" causes a module to be loaded on all the processes. To load a module on a specific machine you use "include". e.g.
if myid() == 1   # only on the master process
    include("/home/user/.julia/PyPlot/src/PyPlot.jl")
end
You can then do your plotting with PyPlot.plot(...) on your local machine.
You could sequence the statements in this order:
using PyPlot                            # master-only packages, loaded before any workers exist
using ModuleNeededOnMasterProcessOnly
addprocs(...)                           # start the workers
using ModuleNeededOnAllProcesses        # loaded on the master and on every worker
I'm currently using Xilinx ISE 10.1. I have simulated a VHDL design for an up-counter, but I don't know how to interface it with the PLB bus so that the MicroBlaze C code can read it via that bus. Please help me, as I'm new to these tools.
Version 10.1 of ISE is quite old, so forgive me if I don't remember everything correctly.
If you start the XPS tool of the EDK part of Xilinx, you should find somewhere in the menus a wizard to create a new MicroBlaze peripheral. This will create a template with a PLB bus interface to connect to the MicroBlaze. Your HDL code can be inserted into the template.
For ISE 14.4:
start xps
create a new project ('File' -> 'New Blank Project')
'Hardware' -> 'Create or Import Peripheral'
'Create templates for a new peripheral'
....
If you only want to read it occasionally, and everything is running off the same clock, you could just instantiate a GPIO peripheral and connect your counter outputs to the GPIO input lines.
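On the MicroBlaze side, reading the counter through such a GPIO peripheral could look roughly like this (a minimal sketch using the xgpio driver; the device-ID macro is a placeholder whose real name in xparameters.h depends on the GPIO instance name in your XPS project):

#include "xparameters.h"
#include "xgpio.h"

int main(void)
{
    XGpio counter_gpio;
    volatile u32 count = 0;

    /* XPAR_COUNTER_GPIO_DEVICE_ID is a placeholder; xparameters.h defines
       the real macro from the GPIO instance name in the XPS project. */
    XGpio_Initialize(&counter_gpio, XPAR_COUNTER_GPIO_DEVICE_ID);

    /* configure channel 1 as inputs so the counter value can be sampled */
    XGpio_SetDataDirection(&counter_gpio, 1, 0xFFFFFFFF);

    while (1) {
        count = XGpio_DiscreteRead(&counter_gpio, 1);  /* current counter value */
        /* use 'count' here, e.g. print it or act on it */
    }

    return 0;
}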
Subject: PPC Assembly Language - Linux Loadable Kernel Module
Detail: How do I access my local TOC area (r2) when called from the kernel in a syscall table hook?
I have written a loadable kernel module for Linux that uses syscall table hooking to intercept system calls and log information about them before passing the call on to the original handler. This is part of a security product. My module runs well and is in production code running on a large variety of Linux kernel versions and distributions with both 32 and 64 bit kernels all running on x86 hardware.
I am trying to port this code to run on Linux for PPC processors and ran into a few problems. Using the Linux kernel source, it is easy enough to see how the system call table is implemented differently on PPC. I can replace entries in the table with function addresses from my own compiled handlers, no problem.
But here's the issue I'm having trouble with. The PPC ABI uses this thing called a Table Of Contents (TOC) address, which is stored in the CPU's R2 register, and it expects a module's global and local data to be addressed by an offset from the address (the TOC address) contained in that register. This works fine in normal cases where a function call is made, because the compiler knows to load the module's TOC address into the register before making the call (or it's already there, because normally your functions are called by your own code).
However, when I place the address of my own function (from my loaded kernel module at runtime) into the system call table, the kernel calls my handler with an R2 value that is not the one my compiled C code expects, so my code gets control without being able to access its data.
Does anybody know of any example code out there showing how to handle this situation? I cannot recompile the kernel. This should be a straightforward case of runtime syscall table hooking, but I have yet to figure it out, or find any examples specific to PPC.
Ideas include:
Hand coding an assembly language stub that saves the R2 value, loads the register with my local TOC address, executes my code, then restores the old value before calling the original handler. I don't have the depth of PPC assembly experience to do this, nor am I sure it would work.
Some magic gcc option that will generate my code without using TOC. There is a documented gcc option "-mno-toc" that doesn't work on my PPC6 Linux. It looks like it may only be an option for system V.4 and embedded PowerPC.
Any help is greatly appreciated !
Thanks!
Linux has a generic syscall audit infrastructure which works on PowerPC and which you can access from user space. Have you considered using that rather than writing a kernel module?
You need a stub to load r2. There are examples in the kernel source.
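As a starting point, here is one (untested) sketch of the idea in C: while the module's own init code runs, r2 already holds the module's TOC, so you can capture it then, keep it in a global, and have the asm stub save the kernel's r2, load the saved value, call the real C handler, and restore r2 before returning. The names below are hypothetical, not a kernel API:

#include <linux/module.h>
#include <linux/init.h>

/* TOC pointer of this module, captured at init time for the asm stub to use
   (hypothetical name, referenced from the stub) */
unsigned long my_module_toc;

static int __init hook_init(void)
{
    /* r2 holds this module's TOC while our own C code is executing,
       so simply copy it out here */
    asm volatile("mr %0, 2" : "=r"(my_module_toc));

    /* ... install the asm stub's address into the syscall table here ... */
    return 0;
}

static void __exit hook_exit(void)
{
    /* ... restore the original syscall table entry here ... */
}

module_init(hook_init);
module_exit(hook_exit);
MODULE_LICENSE("GPL");

The stub itself then only has to save the incoming r2 (the kernel's TOC), load r2 from my_module_toc, call your C handler, and restore r2 afterwards; the cross-TOC call stubs in arch/powerpc (for example the module PLT code) show the exact instruction sequence.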
I am currently trying to build an application that will talk to the Super I/O chip using port I/O. As part of that, I am trying to develop a kernel-mode Windows driver that I can contact and which will do the I/O for me. I have therefore downloaded the Windows Driver Kit v7.1.0, build 7600.16385.1, and I am trying to compile and install the sample portio driver provided with the WDK, since it seems to be quite close to what I need.
I have compiled the driver in both free and checked x86 XP build environments. This works fine, but when I try to install the resulting driver, using the provided instructions - which basically just amount to using the Add Hardware Wizard, and then supplying the files manually - I get the following error:
-The following hardware was installed: Sample PortIO Driver (KMDF)
-The software for this device is now installed, but may not work correctly
-Windows cannot load the driver for this hardware. The driver may be corrupted or missing. (Code 39)
So, I see two explanations: corrupted or missing. Missing, as far as I can tell, given my environment variables and .inf file, would mean that the generated .sys file is not in c:\windows\system32\drivers, but when I look there, the file is there.
So that would mean that the file is corrupted. Given that I haven't touched the driver code, and that I have found others with the same problem, it doesn't seem to be a problem with my compilation, but rather with the code itself, or some common combination of machine type and code. But I may be wrong.
Does anybody have any suggestions on how to solve this?
I would recommend enabling SetupAPI logging as described in the following document from Microsoft:
http://www.microsoft.com/whdc/archive/setupapilog.mspx
For Windows 7, the log files are split up as described here:
http://support.microsoft.com/kb/927521
You may be able to isolate the problem with the additional information in the SetupAPI logs.