Who is responsible for indexing all of the packages that work with a given package manager? - apt

apt lets you search for a package by name, so there must be some sort of indexed database of packages that apt works with.
Who is responsible for making the database of packages that work with a certain package manager (say apt)? How is this done? Can you trust whoever makes that decision to ensure that the packages do not contain security flaws?
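For illustration, this is the kind of indexed lookup I mean (a sketch; the package name is just an example, and the repository list lives in /etc/apt/sources.list on Debian/Ubuntu systems):
# 'apt update' fetches the package index from each repository listed in /etc/apt/sources.list
> sudo apt update
# searching then runs against that local index, so someone upstream must have built and published it
> apt-cache search firefox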

How to keep Firely.Terminal from trashing the FHIR package cache?

One of the brilliant aspects of Firely.Terminal is its ability to interoperate with the local FHIR package cache (~/.fhir) in a way that is fully compatible with HAPI tools using the cache. Sadly, that no longer seems to be the case.
Today I updated Firely.Terminal to version 2.4.2 and it seems that the new version walks all over the FHIR package cache, changing files without having been asked to.
It used to be that the only thing Firely.Terminal changed in existing packages was the generation of a missing .index.json. For newly installed packages, the only difference to a HAPI-installed package was some additional fields in .index.json (presence of some fields containing null which would normally be suppressed, and the addition of a fhirVersion field).
When the new Firely.Terminal is told to add a package to a scope (fhir install) it automatically 'bakes' it, which seems to involve things like snapshotting all StructureDefinition resources and expanding all ValueSet resources. Even resources whose content remains unscathed get their timestamps trashed. The same fate befalls all packages that are listed as dependencies in the manifest of the package being added to the scope.
There is an 'unbake' command (e.g. fhir unbake --package kbv.ita.for#1.0.1) but this does not operate recursively. What's more, when it says 'Bake successfully removed from KBV.ITA.FOR#1.0.1' (note the erroneous capitalisation) then that is an outright lie - the contents of the package directory are completely unchanged, except for the removal of the file .bake.json.
Hence the only way of restoring the package cache to working order is to identify all trashed packages, delete them all, and then reference them with some HAPI tool in order to get them re-cached.
I wouldn't mind so much if Firely.Terminal trashed its own cache. But what it destroys is the global HAPI package cache for the current user, and that is simply not acceptable.
Is there any way of suppressing the destructive behaviour of Firely.Terminal? Ideally globally (with machine-wide effect), but a secret command switch would do in a pinch. If that is not possible: does anyone know which of the older versions is the newest that still works, and where to get it?
Note: if the cached packages are write-protected then Firely.Terminal doesn't take the hint - it tries to clobber the files anyway and spews out oodles of 'access denied' messages. What's more, it doesn't even stop when an error occurs; instead it continues on its merry way and trashes everything that one might have forgotten to write-protect.
Background: one of the properties of the FHIR package cache that is important for our work is that the files in the cache are exactly the same as those in the (normative) published packages. In particular, we need profiles published without snapshots to not contain snapshots, value sets published without expansions to not contain expansions and so on. For one thing, this makes it possible to verify that the cached files are exactly the same as those contained in the published packages (or fixed versions thereof). For another, we need to control the context in which profiles are snapshotted, value sets expanded and so on because it may be necessary to supply dependencies that are different from those declared in the package manifest. The latter is sometimes necessary because the profile/package version management in the context of electronic prescriptions in Germany is a bit, erm, peculiar and can diverge from FHIR standards. For this to work at all the resources must be snapshotted/expanded dynamically (depending on the use context), not statically on disk. Things are moving in a more standards-compliant direction but we are not quite there yet.
Latest version without bake (on install)
From some quick testing of the latest versions of Firely Terminal, it seems 2.2.0 is the latest without bake functionality (and auto-bake on install). Installation instructions:
> dotnet tool uninstall --global Firely.Terminal
> dotnet tool install --global Firely.Terminal --version 2.2.0
Baking
The bake functionality has been introduced to provide packages with snapshots, because not all downstream tools (most notably sushi) are able to generate these themselves.
Currently bake might be a little too aggressive by default, also recalculating snapshots for packages that already have them. In principle, this should not be a problem, since snapshots are just a cache for the calculation of all the layered differentials. Since snapshot logic is still evolving, it might even be desirable now and then to recalculate. But in newer versions we will look to:
Change the default to not recalculate when already provided
Provide a global setting to change that default to never calculate/always (re)calculate snapshots
This should prevent Firely Terminal from touching any files in the package cache that don't need touching. I'm not sure from your question whether anything was actually broken in the state of the shared FHIR cache after 'baking', given your use of 'trashing' and 'destroying'?
Unbaking
The unbake command is intended to remove snapshots from a folder of packages. I see in my testing that it's not doing that, which I'll take as an issue to fix.

How to add SRW package to existing Oracle Database?

I'm setting up a new Oracle DB and want to add the SRW package, which is used by Oracle Reports. How do I add this package, and where can I find the functions and procedures of this package? Or should I write the PL/SQL code myself?
Edit: the DB is used for an ERP.
From my point of view, you should install Reports, as the SRW built-in package is closely tied to that product; that is probably the best option you can choose.
If you have a database to spare, find an IAS installation which has Reports installed (I can't guarantee that you won't harm the database if you follow what is said next).
Navigate to its reports\admin\sql directory, which contains several files, one of them being srwAPIins.sql, which should install the SRW package (by calling other files located in that directory); it is editable, so have a look at its contents.
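As a rough sketch, running that script from SQL*Plus might look like this (the installation path and connection string are placeholders; run it as a suitably privileged user, and only after reviewing the script):
# from within the IAS installation (placeholder path):
> cd C:\oracle\ias\reports\admin\sql
> sqlplus system@yourdb @srwAPIins.sql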
Once again: don't do that if you don't know what exactly you are doing.

Install package in separate area for read-only Anaconda Linux install

At work we have a central, read-only Linux Anaconda installation, and several projects need library packages for their individual project members.
Is there a way to conda install packages in a writable area set aside for each project?
Our Linux servers are also not directly connected to the web, but we can transfer data from a Windows machine that is. Is there a way for the Windows conda to download data for our Linux install in such a way that I can transfer the downloaded files to Linux and then finish the install there, with the Linux conda not needing a direct web connection?
Thanks in advance :-)
The best answer to this question is a bit oblique: the Anaconda Distribution is designed for a single user on a single system with unrestricted access to the Internet. Any other use is considered "off label" and YMMV, though there are no license restrictions in place preventing you from trying to use it as you see fit. Anaconda Enterprise is the commercial product that is specifically designed for multi-user, server-deployed Anaconda with firewall restrictions. Security, governance, indemnification, support, collaboration, etc. etc. Check out https://www.continuum.io/ for more details.
But there are "work around" ways to achieve what you want, albeit complicated ones. For it to be reliable, reproducible, and maintainable you're going to end up reimplementing a lot of what is in Anaconda Enterprise. Here are some tips:
Check out the "conda in multi-user environments" documentation
Check out the "Centralized Anaconda installation" documentation
Regular user alice for project foo can do:
> conda create -p /nfs/project/foo/envs/custompython --offline anaconda
> conda activate /nfs/project/foo/envs/custompython
> conda install pkg1 pkg2 pkg3
You're going to run into ownership/permission issues. If you have sensible umask values, then when alice's colleague bob tries to update pkg2 in the foo project he'll discover that he can't unlink the files alice wrote there. There is stuff you can do (as the IT admin) with chown, or alice can do with chmod, but it's all a bit of a bother, and there are lots of ways you can paralyze a conda environment because it is expecting "writability" to be binary for a particular environment. There is a long history in the conda GH issue tracker of people (myself included) shooting themselves in the foot by starting a conda env setup with one account and then making mods with another account that bork out half way through, leaving everything inconsistent.
Be careful about .condarc files. My advice: avoid them everywhere but in the base Anaconda installation (say, inside /opt/anaconda/.condarc). All sorts of weird stuff can happen when multiple overlaying .condarc files come together (the docs reference above discusses this).
People can create their own environments in an "offline" mode so long as the packages specified in those new environments (and their dependencies) are a subset of the packages available in the base environment (or subsequently added to the package cache), taking into account versions as well, of course.
You can download packages using your online Windows machine by grabbing them from repo.continuum.io and from anaconda.org. Make sure you download them for the right platform. But the challenge: you need to download a set of packages that will satisfy the dependencies of the package you want to install. There isn't a super easy way to get that information when you're offline.
Once you drop new packages into the Linux system's package cache, be sure to re-run conda index (a sketch of the whole transfer workflow follows these tips).
Beware installing packages directly from their tarballs: this will not pick up any dependencies and does what is called a "force" install. So doing conda install /path/to/conda/pkg-ver.tar.bz2 is actually most similar to doing conda install --force --no-deps pkg=ver (though not identical, to be sure). --force means the install will happen NO MATTER WHAT, even if it will break your environment (violate existing package dependencies), and --no-deps means you won't get any of the dependencies of pkg installed.
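To illustrate, here is a rough sketch of that transfer using a local channel directory rather than the cache itself (all paths, URLs and package filenames below are placeholders, and conda index requires conda-build):
# on the online Windows machine: download the linux-64 builds of the package and its dependencies
> curl -LO https://repo.continuum.io/pkgs/free/linux-64/pkg1-1.0-py35_0.tar.bz2
# transfer the tarballs to the Linux machine, then publish them as a local channel:
> mkdir -p /nfs/project/foo/channel/linux-64
> cp *.tar.bz2 /nfs/project/foo/channel/linux-64/
> conda index /nfs/project/foo/channel/linux-64
# finally, install offline from that local channel:
> conda install --offline -c file:///nfs/project/foo/channel pkg1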

How to install a Chocolatey package completely offline?

I need to install software on Windows clients that are completely offline. That means they have no Internet access.
An example. Let's say I want to install Paint.Net. I go to a reference machine (with INet) and install Paint.Net with Chocolatey.
choco install paint.net -y
After the install is finished I have the software installed and two artifacts:
The package file "paint.net.nupkg" in %ChocolateyInstall%/lib/paint.net
and
the installer file "paint.net.4.0.6.install.zip" in %Temp%\chocolatey.
I now put these two files on a USB stick. Then I go to the offline machine, plug in the USB stick and want to install the package.
Is it possible to install the software without modifying the package? I am aware that inside the nupkg file there is a tools/chocolateyInstall.ps1 file with a $url variable defined. But I want to install the package without changing the package content or modifying the URL by hand.
I played around with the parameters --cache and --source but with little to no luck.
I have seen that this kind of question has been asked before. But never (to my knowledge) with the intent to run the installer file from the stick too (and not only the package file). So I hope this is not a duplicate.
Caching Downloads - Not Deterministic
While there are ways to get the original nupkg (the one with the version in its name, not the one in the packages directory; use the download link on the left side of the package's page on the Chocolatey community package repository) and the cache onto a USB stick, it's not always deterministic that it will work. You can also override the cache location, so that the folder is somewhere not in TEMP. See choco config, choco config -h and choco config set cacheLocation c:\some\location to do this.
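As a sketch of that approach (drive letters and paths below are placeholders, and as noted it is not guaranteed to work for every package):
# on the online reference machine: move the cache out of TEMP, then install
> choco config set cacheLocation c:\choco-cache
> choco install paint.net -y
# copy c:\choco-cache and the versioned .nupkg to the USB stick; then, on the offline machine:
> choco config set cacheLocation e:\choco-cache
> choco install paint.net -y --source="e:\packages"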
Create Your Own Packages - Better
For packages you need offline, you have the ability to manage your own packages, and you can embed the software right into the package. This is the desired approach when you want to manage software offline, as most things on the community repository are subject to copyright law and distribution rights (which is why they don't simply embed the software they represent).
Creating and working with your own packages is very secure, reliable, and repeatable (and can be completely offline), but it does tend to take up time. If you are doing this just for yourself, the time spent could cancel out any time savings you get as a consumer using Chocolatey and the community repository.
Internalized Packages - Best
The best thing you can do here is a process called internalizing, where you download and extract the package, download all of the resources and embed them in the package (or put them on a local/UNC share), edit the scripts to use those embedded/local resources, and recompile the package.
This allows you to take advantage of existing package logic without the issue of the internet.
For more details see Recompiling Packages and Package Internalizer - Automatically Recompile Packages.
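With a licensed edition, the internalizer route looks roughly like this (the flags are my recollection of the linked docs, and the paths are placeholders; treat the exact syntax as an assumption to verify):
# download the package and rewrite its scripts to use local/embedded resources
> choco download paint.net --internalize --output-directory="c:\internalized"
# the recompiled package can then be installed offline from that folder:
> choco install paint.net -y --source="c:\internalized"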
As a side note: we are thinking of offering the ability to auto-recompile with the Chocolatey Pro edition, and not just the Business edition.
Organization Use of Chocolatey
Most organizations using Chocolatey are doing some combination of creating packages and recompiling packages, because they need absolute trust and control over those packages when being used in production scenarios.

Making OS X web Installer Packages

I have an installer implemented with "Packages" which contains the payload and after running some plugins and a post install script it successfully installs the product.
The same package bundle is used for making updates too, as we run it in background with root privileges and it overwrites the current/old installation files.
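(As an illustration, running a package in the background with root privileges on OS X typically uses the built-in installer tool; the package name here is a placeholder:)
> sudo installer -pkg MyProduct.pkg -target /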
We now have the requirement to make a lightweight installer that keeps the components (the different packages inside the product) in a web location, to be downloaded and installed.
As I know this is possible on other systems such as Windows/InstallShield with "Releases of Web Type", I would like to know if this is possible on OS X. Otherwise, I will have to implement it all from scratch (package management, download, package version comparison to make selective updates, privilege escalation, etc).
Well, it was long ago and I almost started doing a web installer from scratch but then I found out that the option is in Packages itself. Hope this helps.
It is the "Package Reference" option.
According to the Packages documentation:
A Package Reference lets you use a package that is hosted on a web server or a removable media and to which you may not have direct access. This package will not be built during the build phase.
I think that any referenced package keeps its own pre/post scripts, so the limited options in the Package Reference should not be a problem. But I need to test it.
