For programming FPGAs, is it possible to write my own place & route routines? [The point is not that mine would be better; the point is whether I have the freedom to do so] -- or does the place & route stage output into undocumented bitfiles, essentially forcing me to use proprietary tools?
Thanks!
There's been some discussion of this on comp.arch.fpga in the past. The conclusion is generally that unless you want to attract intense legal action from the FPGA companies, you probably don't want to do something like this. Bitfile formats are closely guarded secrets of the FPGA companies, and you would likely have to understand the file format in order to do what you want to do. That implies that you would need to reverse engineer the format, and that (if you made your tool public in any way) would get you a lawsuit in short order.
I will add that there probably are intermediate files, and that you likely wouldn't read or write the bitfile itself to do what you want to do, but those intermediate files tend to be undocumented as well. Read the EULA for your FPGA synthesis tool (ISE from Xilinx, for example) -- any kind of reverse engineering is strictly forbidden. It seems that the only way we'll ever have open source alternatives in this space is for an open source FPGA architecture to emerge.
I agree with annccodeal, but to amplify a little bit, on Xilinx, there may be a few ways to do this. The XDL file format allows (or used to allow) explicit placement and routing. In addition, it should be possible to script the FPGA Editor to implement custom routing.
As regards placement, there is a rich infrastructure to constrain technology mapping of logic to primitives and to control placement of those primitives. For example LUT_MAP constraints can control technology mapping and LOC and RLOC constraints can determine placement. In practice, these allow the experienced designer great control over how a design is implemented without requiring them to duplicate man-centuries of software development to generate a bitstream directly.
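For illustration, placement constraints of this kind go into a Xilinx UCF file. A minimal sketch (the instance names below are made up; consult the constraints guide for the exact syntax your toolchain accepts):

```
# Pin one flip-flop to an absolute slice, and place a second
# relative to it (instance names are hypothetical)
INST "ctrl/state_reg_0" LOC = SLICE_X10Y20;
INST "ctrl/state_reg_1" RLOC = X1Y0;
```

LOC fixes an absolute location, while RLOC positions a primitive relative to the origin of its relationally placed macro, which is how hand-placed datapaths are typically built.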
You may also find interesting the current state-of-the-art FPGA CAD research software, such as VPR. In my opinion, these tools are challenged to keep up with the vendors' own tools, which must cope with modern heterogeneous FPGAs with splittable 6-LUTs, DSP blocks, etc.
Happy hacking.
I am not able to understand the exact difference between Digital Forensics and Reverse Engineering. Does Digital Forensics have anything to do with decompilation, assembly code reading, or debugging?
Thanks
Digital Forensic practice usually involves:
looking at logfiles
doing recovery of unlinked filesystem objects (e.g. deleted files)
recovering browsing history through cache, etc.
looking at timestamps of files
(usually for the purpose of law enforcement)
Reverse Engineering usually involves determining how something works by:
looking at binary file formats of multiple files (or executables) to determine patterns
decompilation of binary executables to determine intent of the code
black-boxing and/or debugging of known-good applications to determine nominal behaviour with respect to data.
(usually for the purpose of interoperability)
They're completely different activities.
EDIT: so many typos.
I think the lines are a little more blurred than most realize. Digital forensics goes after the artifacts to prove certain activity has taken place. Very few software packages offer documentation on the files that are created by that application. Basically, reverse engineering is required to figure out what the artifacts are, but not all forensic examiners are required to do the actual reverse engineering part.
Both are very, very different.
Reverse Engineering is a process of deconstructing how a system behaves without its engineering documents.
It has many purposes: replicating or exploiting a system or merely to make a compatible product that works with a system. It may involve software tools (IDApro), in-circuit emulators, soldering irons, etc. One neat example is that it's possible to de-pot a chip using nitric acid https://www.youtube.com/watch?v=mT1FStxAVz4 and then place the chip under a microscope to possibly determine some of its structure and behavior. (IANAL, IANAC: Don't attempt without chemistry knowledge and lab safety.)
Digital Forensics is looking to see what people or systems may have done by examining compute, network and storage devices for evidence.
It is mostly used by persons defending systems such as system administrators or law enforcement to determine who/what/how a potential crime occurred. This can automated (Snort, Tripwire) or manual (searching logs, say in Splunk or Loggly, or searching raw disk snapshots for particular strings).
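As a toy illustration of the manual kind of search mentioned above -- scanning a raw disk snapshot for particular byte patterns -- here is a minimal sketch in Python (the image path is hypothetical; real carving tools handle fragmentation, footers, and much more):

```python
# Scan a raw disk image for JPEG headers -- a crude form of file carving.
JPEG_SIG = b"\xff\xd8\xff"

def find_signatures(data, sig=JPEG_SIG):
    """Return every byte offset at which the signature occurs."""
    offsets = []
    pos = data.find(sig)
    while pos != -1:
        offsets.append(pos)
        pos = data.find(sig, pos + 1)
    return offsets

# Usage (image path is hypothetical):
# with open("disk.img", "rb") as f:
#     print(find_signatures(f.read()))
```

Each offset found this way is a candidate location of a deleted or embedded image, which an examiner would then try to recover and validate.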
They're very, very different things!
Digital Forensics is used to retrieve deleted artifacts, logs, and dd images; you can see it as viewing the big picture.
Reversing is the opposite: it's digging into the code, down to its binaries, and understanding 100% of what it does.
If you'd like to enter this field, I recommend reading the Practical Malware Analysis book.
Digital forensics is the practice of retrieving information from digital media (computers, phones & tablets, networks) via a number of means. Normally for law enforcement, though it can be for private organisations and other parties, especially in the rising field of e-discovery.
Reverse engineering is looking at the code or binary of a file/system and determining how it is structured and how it works.
These are two completely different sciences. But if you think about it, they go hand in hand. Digital forensics needs reverse engineering to determine what information is available in the files being analysed and how that information is stored. Any good digital forensics company will have an R&D department that allows them to do this in house.
I need an FPGA that can have 50 I/O pins. I'm going to use it as a MUX. I thought about using a MUX or CPLD, but the guy I'm designing this circuit for says that he might need more features in the future, so it has to be an FPGA.
So I'm looking for one with enough design examples on the internet. Can you suggest anything (for example a family)?
Also if you could tell me what I should consider when picking, that would be great. I'm new to this and still learning.
This is a very open question, and the answer to it as stated can be very long, if possible at all given all the options. What I suggest to you is to make a list of all current and future requirements. This will help you communicate your needs (here and elsewhere) and force you, and the people you work with on this project, to think about them more carefully. Saying that "more features in the future" will be needed is meaningless; would you buy the most capable FPGA on the market? No.
When you've compiled this list and thought about the requirements, post them here again, and then you'd get plenty of help.
Another possibility to get feedback and help is to describe what you are trying to do/solve. Maybe an FPGA is not the best solution -- people here will tell you that.
I agree with Saar, but you have to go back one step further: when you decide which technology to target, keep in mind that an FPGA needs a lot of things to run, i.e. different voltages for core, I/O, auxiliary, and probably more. Also, you need some kind of configuration mechanism, as an FPGA is in general (there are exceptions) SRAM-based and therefore needs to be configured at startup. CPLDs are less flexible but much easier to handle...
I was thinking about software metrics to use in the analysis of the effort to develop a piece of software. As I was thinking about using function-point like metrics for object-oriented software, I came across an interesting challenge / question.
Consider a business rules engine. It is an application that consists of the necessary components to run the business rules, and then one has to translate business rules or company policies into configuration code for the business rules engine. My assumption is that for applications like a business rules engine, this configuration code could also become quite substantial. However, when considering it from the point of view of the implementation, configuration code essentially instantiates parts of the API.
So, first, am I wrong in assuming that the effort for writing configuration code is substantial enough that measuring it makes sense?
Does anybody have a clue about a function-point like metric (or any other metric) that could measure configuration code?
It definitely makes sense to measure the effort to produce "configuration code". Depending on your application, the configuration code might even be the greater part of the effort.
I don't know of any metrics especially designed for configuration code. There are many configuration languages already existing, and anybody can create a new one. You should probably see how much your configuration language resembles popular programming languages, and adapt a metric that works with that programming language.
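As a toy illustration of such an adapted metric (the rule schema below is invented purely for this sketch), one could start with a crude count of rules and their conditions, by analogy with the decision-oriented counting that function points do:

```python
import json

# A toy "rules file" -- this schema is invented purely for illustration.
RULES_JSON = """
[
  {"name": "discount", "conditions": ["total > 100", "is_member"],
   "action": "apply 10%"},
  {"name": "flag", "conditions": ["country != billing_country"],
   "action": "review"}
]
"""

def config_size_metric(rules):
    """Crude function-point-style count: one point per rule, plus one
    per condition as a proxy for decision complexity."""
    return sum(1 + len(rule.get("conditions", [])) for rule in rules)

print(config_size_metric(json.loads(RULES_JSON)))  # -> 5 (2 rules + 3 conditions)
```

A real metric would weight conditions by complexity and account for the data elements each rule touches, but even a crude count like this lets you track configuration effort over time.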
Calling BR code "configuration" code doesn't change the problem. (What do you call a dog with 3 legs? It doesn't matter what you call it; it's a dog with 3 legs.)
Ignoring the considerable hype, business rules engines are just funny-looking programming languages (usually with complicated interfaces to the "non-business rule part" of the system, which the BR stuff is unable to do). From this point of view, programming BRs isn't a lot different from other languages, especially if you buy the function-point model (just because you have a BR engine won't get you out of writing code to generate reports!).
What the BR guys typically try to do is claim BR programming is cheap because you can do it as you go. What they don't say is that programming BR is hard, because the very act of not coding the BR rules up front means you've avoided doing the requirements analysis first, on the grounds that "you can just code BR later". And there's no guarantee that your BR system, or the data it has access to, really is ready for the problem you face. (The idea I really detest is "BR makes it possible for managers to understand..." Have you seen real BR rules?)
I totally agree with Ira and KC; that's why we only use standard scripting languages for in-application rules. You can use V8 or SpiderMonkey to embed a JavaScript interpreter into your software, then use any estimator which understands JS (like ProjectCodeMeter) on your business rules code.
How does one choose among, and justify, design tradeoffs in terms of optimised code, clarity of implementation, efficiency, and portability?
A relevant example for the purpose of this question could be large file handling, where a "large file" is "quite a few GB" for a problem that would be simplified using random-access methods.
Approaches for reading and modifying this file could be:
Use streams anyway, and seek to the desired place -- portable but potentially slow, and not clear; this will work on practically all OSes.
Map the relevant portion of the file as a large block, e.g. mmap a 50 MB chunk of the file for processing, for each chunk -- this would work for many OSes, depending on the subtleties of implementing mmap on that system.
Just mmap the entire file -- this requires a 64-bit OS and is the most efficient and clearest way to implement it, but it does not work on 32-bit OSes.
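As a rough sketch of the middle option -- written in Python for brevity, with the window-processing callback left to the caller -- mapping one aligned window at a time might look like:

```python
import mmap
import os

def process_in_chunks(path, handle_bytes, window=50 * 1024 * 1024):
    """Map the file one ~50 MB window at a time instead of all at once."""
    # mmap offsets must be multiples of the allocation granularity,
    # so round the window size down to a legal multiple.
    gran = mmap.ALLOCATIONGRANULARITY
    window = max(gran, window // gran * gran)
    size = os.path.getsize(path)
    offset = 0
    with open(path, "rb") as f:
        while offset < size:
            length = min(window, size - offset)
            with mmap.mmap(f.fileno(), length,
                           access=mmap.ACCESS_READ, offset=offset) as m:
                handle_bytes(m[:])   # hand the caller a copy of this window
            offset += length
```

The same shape applies in C with `mmap`/`munmap`; the alignment constraint on the offset is the detail that varies between OSes and is what makes this approach less portable than plain streams.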
Not sure what you're asking, but part of the design process is to analyze requirements for portability and performance (amongst other factors).
If you know you'll never need to port the code, and you need absolutely the best performance, then you adjust your implementation accordingly. There's no point being portable just for its own sake.
Note also that if you want both performance and portability, there's nothing stopping you from providing an implementation for each platform. Of course this will increase your cost, so really, it's up to you to prioritize your needs.
Without constraints, this question cannot be answered rationally.
You're asking "what is the best color" without telling us whether you're painting a house or a car or a picture.
Constraints would include at least
Language of choice
Target platforms (multi CPU industrial-grade server or iPhone?)
Optimizing for speed vs. memory
Cost (who's funding this and is there a delivery constraint?)
No piece of software could have "ultimate" portability.
An example of this sort of problem being handled using a variety of methods but with a tight constraint both on the specific input/output required and the measurement of "best" would be the WideFinder project.
Basically, you need to think first before coding. Every project is unique, and an analysis of the needs will help you decide what is essential for it. What makes the best solution for any project depends on a few things...
First of all, will this project need to be, or eventually become, multiplatform? Depending on your answer, choosing the right programming language should be easier. Then again, you could also use more than one language in your project, and this is completely normal. Portability does not necessarily mean less performance; all it implies is harder work to achieve your goals, because you will need quality code. Also, every programming language has its own philosophy: learn what they are.
One thing is for sure: certain problems come back over and over. This is why knowing the different design patterns can make a difference sometimes, but some languages have their own idioms, and these can be very relevant when choosing a language. Another thing that needs some thought is the different approaches you can take for your project. Multithreading, sockets, client/server systems, and many other technologies are all there for you to use. Choosing the right technology can help make a project better.
Knowing the needs and the different solutions available today is what will help decide when comes the time to choose for the different tradeoffs.
It really depends on the drivers for the project. If you are doing in-house enterprise dev, then do the simplest thing that could work on your target hardware. Modify for performance requirements as needed.
If you know you need to support different hardware platforms on day 1, then you'll clearly need to choose a portable implementation, or use multiple approaches.
Portability for portability's sake has been a marketing spiel for Java since its inception, and a fact of life for C by convention; I believe most people who abide by it "grew up" with Java or C.
In truth, absolute portability holds only for trivial applications, or at most those of medium complexity -- anything highly complex will need specialized tweaks.
What types of applications have you used model checking for?
What model checking tool did you use?
How would you summarize your experience w/ the technique, specifically in evaluating its effectiveness in delivering higher quality software?
In the course of my studies, I had a chance to use Spin, and it aroused my curiosity as to how much actual model checking is going on and how much value organizations are getting out of it. In my work experience, I've worked on business applications, where there is (naturally) no consideration of applying formal verification to the logic. I'd really like to learn about SO folks' model-checking experience and thoughts on the subject. Will model checking ever become a more widely used development practice that we should have in our toolkit?
I just finished a class on model checking and the big tools we used were Spin and SMV. We ended up using them to check properties on common synchronization problems, and I found SMV just a little bit easier to use.
Although these tools were fun to use, I think they really shine when you combine them with something that dynamically enforces constraints on your program (so that it's a bit easier to verify 'useful' things about your program). We ended up taking the Spring WebFlow framework, which uses XML to write a state-machine like file that specifies which web pages can transition to which other ones, and using SMV to be able to perform verification on said applications (shameless plug here).
To answer your last question, I think model checking is definitely useful to have, but I lean more towards using unit testing as a technique that makes me feel comfortable about delivering my final product.
We have used several model checkers in teaching, systems design, and systems development. Our toolbox includes SPIN, UPPAAL, Java PathFinder, PVS, and Bogor. Each has its strengths and weaknesses. All find problems with models that are simply impossible for human beings to discover. Their usability varies, though most are pushbutton automated.
When to use a model checker? I'd say any time you are describing a model that must have (or not have) particular properties and it is any larger than a handful of concepts. Anyone who thinks that they can describe and understand anything larger or more complex is fooling themselves.
What types of applications have you used model checking for?
We used the Java PathFinder model checker to verify some safety (deadlock, race condition) and temporal properties (using linear temporal logic to specify them). It supports classical assertions (like NotNull) on Java (bytecode) -- it is for program model checking.
What model checking tool did you use?
We used Java PathFinder (for academic purposes). It's open-source software originally developed by NASA.
How would you summarize your experience w/ the technique, specifically in evaluating its effectiveness in delivering higher quality software?
Program model checking has a major problem with state space explosion (memory & disk usage). But there are a wide variety of techniques to reduce the problems, to handle large artifacts, such as partial order reduction, abstraction, symmetry reduction, etc.
I used SPIN to find a concurrency issue in PLC software. It found an unsuspected race condition that would have been very tough to find by inspection or testing.
By the way, is there a "SPIN for Dummies" book? I had to learn it out of "The SPIN Model Checker" book and various on-line tutorials.
I've done some research on that subject during my time at the university, expanding the State Exploring Assembly Model Checker.
We used a virtual machine to walk each and every possible path/state of the program, using A* and some heuristic, depending on the kind of error (deadlock, I/O errors, ...)
It was inspired by Java Pathfinder and it worked with C++ code. (Everything GCC could compile)
But in our experience, this kind of technology will not be used in business applications soon, because of GUI-related problems, the work necessary for creating an initial test environment, and the enormous hardware requirements. (You need lots of RAM and disk space because of the gigantic state space.)
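The explicit-state exploration such tools perform can be sketched in miniature -- assuming Python and an invented two-process lock example -- as a breadth-first search that flags states where no process can move:

```python
from collections import deque

# Two toy processes as sequences of lock operations.
# P0 takes A then B; P1 takes B then A -- the classic deadlock recipe.
PROCS = [
    [("acq", "A"), ("acq", "B"), ("rel", "B"), ("rel", "A")],
    [("acq", "B"), ("acq", "A"), ("rel", "A"), ("rel", "B")],
]

def step(state):
    """Yield every state reachable by letting one process take one step."""
    pcs, held = state                  # held: frozenset of (lock, owner)
    owners = dict(held)
    for i, pc in enumerate(pcs):
        if pc == len(PROCS[i]):
            continue                   # process i has finished
        op, lock = PROCS[i][pc]
        if op == "acq":
            if lock in owners:
                continue               # lock busy: process i is blocked
            new_held = held | {(lock, i)}
        else:                          # "rel"
            new_held = held - {(lock, i)}
        new_pcs = tuple(p + 1 if j == i else p for j, p in enumerate(pcs))
        yield (new_pcs, new_held)

def find_deadlock():
    """Breadth-first search of the full state space for a stuck state."""
    init = ((0, 0), frozenset())
    seen, queue = {init}, deque([init])
    while queue:
        state = queue.popleft()
        nexts = list(step(state))
        done = all(pc == len(PROCS[i]) for i, pc in enumerate(state[0]))
        if not nexts and not done:
            return state               # nobody can move, nobody is done
        for s in nexts:
            if s not in seen:
                seen.add(s)
                queue.append(s)
    return None

print(find_deadlock())  # both processes stuck at pc 1, each holding one lock
```

Real checkers like SPIN and Java PathFinder add partial order reduction, state compression, and property languages (LTL) on top of exactly this kind of search, which is also why their memory use grows so quickly.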