I have an old MS-DOS application written in Magic 5.6 with a Btrieve 5.10a database that should be modernized (completely redone using a modern DB).
For this I would love to extract the table structures and gain some understanding of the structure of the Magic program.
Unfortunately, I was not able to find any documentation on Magic, nor was I able to extract the structure with column names from the tables (there are .btr files but no DDF files).
Any idea on how to get a step further?
Not enough rep to comment, so I'm forced to post this as an answer.
If you can't get hold of any info regarding the data structure you might try:
A. Download and install a trial version of a more recent Pervasive release and see if that enables you to read the data. In (still more or less current) server versions like V11 there is a DDF Builder utility, which does what the name implies. However, this is not an automatic process; it relies on your ability to link the data shown in the application to the hex values on disk.
B. Try to find a BUTIL.EXE version that works with 5.x Btrieve files, run BUTIL -RECOVER and see what that gets you. You might be able to parse the data with scripting tools this way - I've done it in the past on 6.x files, but nothing as old as what you are dealing with.
The main issue here is whether you'll be able to find compatible tools for a version that old. But then again, maybe 6.x tools might just work.
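If option B works, the output of BUTIL -RECOVER is what Pervasive calls an "unformatted file". As a starting point for the scripting mentioned above, here is a minimal C# sketch, assuming the usual layout of that format (each record written as its length in ASCII decimal, a comma, the raw record bytes, then CR/LF) - verify that against the documentation of the BUTIL version you end up with. The file name recovered.dat is just a placeholder:

using System;
using System.IO;

class BtrieveDump
{
    static void Main()
    {
        using (var s = File.OpenRead("recovered.dat")) // hypothetical BUTIL -RECOVER output
        {
            while (true)
            {
                // Read the ASCII decimal length prefix up to the comma.
                var len = "";
                int b;
                while ((b = s.ReadByte()) != -1 && b != ',')
                    len += (char)b;
                if (b == -1)
                    break;

                // Read the raw record bytes (a robust tool would loop until
                // the buffer is full; kept simple for the sketch).
                var record = new byte[int.Parse(len)];
                s.Read(record, 0, record.Length);
                s.ReadByte(); // CR
                s.ReadByte(); // LF

                // Hex-dump each record so you can eyeball field boundaries.
                Console.WriteLine(BitConverter.ToString(record));
            }
        }
    }
}

From the hex dumps you can usually spot fixed-width fields, string terminators and little-endian integers - the same manual mapping work DDF Builder expects of you.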
I am creating a PythonAnalyzer using the following code:
var interpreterFactory = InterpreterFactoryCreator.CreateAnalysisInterpreterFactory(
    PythonLanguageVersion.V36.ToVersion());
var analyzer = PythonAnalyzer.Create(interpreterFactory);
Later on I also create and analyze a simple Python module that looks like this:
name = input('What is your name?\n')
print('Hi, %s.' % name)
Then I do module.Analysis.GetValuesByIndex("name", 4).
At this point I expected the "value" to be 'str', because that's what Visual Studio shows when I open the same file in it. However, I get 'object' instead. So it seems that the PythonAnalyzer, when constructed as mentioned above, lacks some important information about where to look for the standard library and/or its types.
Unfortunately, the documentation on PythonAnalyzer is lacking, so I was hoping the community could help understand how to configure it properly.
Congratulations on getting this far :)
What you're hitting here is the fact that CreateAnalysisInterpreterFactory is really intended for "pure" cases, where you have access to all the code that you're trying to analyze and nothing needs to be looked up. It is mostly used for the unit tests, or as a fallback when no copies of Python are installed. Depending on precisely which version of PTVS you are using, the bare information you're getting is either coming from DefaultDB\v3\python.pyi or CompletionDB\__builtin__.idb, both of which are somewhat lacking (by design).
Assuming you have a copy of Python installed, I would suggest creating an instance of InterpreterConfiguration with all of its details, and passing that to CreateInterpreterFactory (without "Analysis").
If you're on the latest sources (strongly recommended), this may run the interpreter in the background to collect information from it (you can control caching of this info with the DatabasePath and UseExistingCache members of InterpreterFactoryCreationOptions). If you are using the older version still, you'll need to trigger a completion DB regeneration or have one that you've created through VS.
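Putting that together, a rough sketch (hedged: the InterpreterConfiguration constructor differs between PTVS versions, so the parameter names below are assumptions - check the overload in the sources you build against; the id and paths are placeholders):

var config = new InterpreterConfiguration(
    id: "Python36",                              // hypothetical identifier
    description: "Python 3.6 (64-bit)",
    pythonExePath: @"C:\Python36\python.exe",    // your installed interpreter
    version: PythonLanguageVersion.V36.ToVersion());

var options = new InterpreterFactoryCreationOptions
{
    DatabasePath = @"C:\Temp\PtvsCache", // where the collected info is cached
    UseExistingCache = true              // reuse it on subsequent runs
};

// Note: CreateInterpreterFactory, not CreateAnalysisInterpreterFactory.
var factory = InterpreterFactoryCreator.CreateInterpreterFactory(config, options);
var analyzer = PythonAnalyzer.Create(factory);

With the factory pointed at a real interpreter, GetValuesByIndex("name", 4) should come back as 'str' rather than 'object', because the builtins are scraped from the actual standard library.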
And a final caveat: this part of PTVS is currently under some pretty heavy development at time of writing, so you'll either want to keep updating the version you're working against or stick with a slightly older one. Also feel free to post questions like this on the GitHub site, as while this is technically public API, it's barely documented at all and so the best help will come from the dev team.
I know the 5.0 release notes say "After the migration, source syntax-highlighting won't be available on a project until it has been successfully analyzed".
BUT I can't imagine that there is no way to reactivate it other than by running another analysis. When you have thousands of components (as in our case), you can't plan 4,500 analyses just to "restore" a basic but helpful feature! All the more so when you know that the majority of these components haven't changed in a long time... :(
So please tell me that we can write a little batch job or program that will do this without needing to pull all the sources! I don't understand this limitation of the upgrade (why the sources aren't accessible).
You should trust the release notes. The information required for syntax highlighting is computed during analysis. Note that this also requires the language plugins to support the feature, so I suggest upgrading them to the latest versions.
I am trying to install LXR to parse my working copy of the Linux sources. Some of the tutorials on the web use the initdb-mysql script to initialize LXR's database in MySQL. However, I cannot find this script in v2.0.0, though I can see it in older versions. Is the old one still valid for use with v2.0.0? If not, which script can I use to set up the DB for LXR v2.0.0? Or, if this whole DB step was dropped, how should I proceed?
On a side note: why do Linux projects always lack proper documentation? I can see they have a procedure for installing LXR on their own website, and I believe it is outdated... why not update it?
Thank you!
Don't know if you're still looking for an answer, but here is one for the record.
Creating the DB is part of LXR initialization and is considered an internal step (i.e. not end-user visible). As such, it is allowed to change along the configuration process. This is why it is crucial to follow the procedure adapted to the LXR version.
LXR 2.x is configured with a configuration wizard (scripts/configure-lxr.pl) which takes care of generating the DB structure based on a template (templates/initdb/initdb-*-template.sql). These templates are not directly usable; they must be customized by the wizard to produce an initialization script custom.d/initdb.sh which you must manually launch after the wizard has finished its job.
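For a simple case, that boils down to something like the following, run from the LXR root directory (the exact steps depend on your answers to the wizard, so treat this as an illustration):

./scripts/configure-lxr.pl    (answer the wizard's questions; it generates custom.d/initdb.sh)
sh custom.d/initdb.sh         (manually launch the generated DB initialization script)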
The whole procedure for a simple case is shown on the LXR site. From the home page, select the installation link for the version you use (note that the procedure for 1.x also applies to 2.x).
For complex cases, download and read the comprehensive user's manual from SourceForge. Once again, select the manual adapted to the LXR version.
These manuals are not outdated since the LXR project has a rule to withhold release until documentation is ready.
I wanted to try STXXL to see how efficient it is at reading a big data file from disk.
So I set up the environment for using it.
Then I ran this program http://algo2.iti.kit.edu/dementiev/stxxl/tags/1.2.1/algo_2sort__file_8cpp-example.html in VS2010. However, the file data was not mapped to the vector_type; in fact, the contents of the file were deleted after this statement: vector_type v(&f);
I tried changing stxxl::file::RDWR to stxxl::file::RDONLY. This time the file content was not deleted, but the vector_type variable was still empty. I'd appreciate your support to proceed further.
Also, is STXXL used widely in commercial applications?
Best Regards,
Ramki.
You are running a code example from STXXL version 1.2.1 - which version have you installed on your system?
The most up-to-date version is "Development 1.4", which comes with many improvements and comprehensive documentation with a lot of short code examples, and it runs pretty well - check the official STXXL website under "Downloads and Documentation". Using version 1.4 is highly recommended.
Please check whether your problem still exists with the new "Development 1.4" version. The installation process has become much easier - read the Installation and Configuration part of the documentation first.
The official web page provides a (certainly incomplete) list of publications and ongoing and completed projects using STXXL successfully - there is no reason not to use it in a commercial environment.
I am not even sure how to ask this question. I am absolutely willing to research this myself, but I don't even know what exactly my options are.
I'm fairly new to programming in general, and I'm the sole developer on an ASP.NET MVC3 web application. We're about to upgrade to a new version which has a lot of additions to the data model. There are a couple of new entities, and some of the old entities have new properties/columns.
We've finished beta testing and now we're going to try to get everyone moved over to the new version running parallel to the current version, that way if there are show-stopping problems, users can easily switch back to the old version. The problem is that we can't hook both up to the same db because of the data model differences.
Can I make the old version use the new version's schema or something? I'm not really sure what my options are. I'm not asking you to write this for me; I'm just looking for some direction. Thanks!
You should be able to disable the metadata checks and then run both versions against the same DB, assuming the models use a schema that is compatible with both.
http://revweblog.wordpress.com/2011/05/16/ef-4-1-code-first-disable-checking-for-edmmetadata-table/
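The approach in that post boils down to something like the following as of EF 4.1 (MyDbContext is a placeholder for your own context class):

using System.Data.Entity;
using System.Data.Entity.Infrastructure;

public class MyDbContext : DbContext
{
    static MyDbContext()
    {
        // Don't let an initializer verify or (re)create the schema.
        Database.SetInitializer<MyDbContext>(null);
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        // Remove the convention that emits and checks the EdmMetadata table.
        modelBuilder.Conventions.Remove<IncludeMetadataConvention>();
    }
}

With the metadata convention removed and no initializer, EF no longer compares the model against the database, so both versions can point at the same schema as long as the mapped tables and columns line up.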
Another option is to upgrade to Entity Framework 4.3 and use Code First Migrations, which will generate an upgrade script for you. If the upgrade fails, you can roll the script back to a prior version and use your prior code base. This implies upgrading to 4.3 before doing anything else, although you could still just disable the metadata checks instead.
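If you go the migrations route, the 4.3 flow in the Package Manager Console is roughly this (the migration name V2Changes is just a placeholder):

Enable-Migrations
Add-Migration V2Changes
Update-Database -Script

Add-Migration scaffolds the model changes as an Up/Down pair, and the -Script switch makes Update-Database emit the SQL upgrade script instead of applying it directly; the Down method is what gives you the roll-back path mentioned above.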