I have been advised of a security issue found in GIMP 2.8.14. The vulnerability description can be found here: Vulnerability Description
And the CVE here: CVE-2016-4994
When I was advised of the vulnerability, I was also advised about a solution, which is to update the software to a particular version mentioned in that advisory. The thing is that the upgrade is available only for Linux, and we have GIMP on Windows.
Do you know anything about the risks of this vulnerability in version 2.8.16 (which is the one we have)? And if there are risks, do you know the proper actions to avoid them?
I haven't found anything about GIMP on Windows; all the solutions are for Linux.
Thanks in advance.
The new version 2.8.18 of GIMP fixes this vulnerability. Check the release notes at: http://www.gimp.org/news/2016/07/14/gimp-2-8-18-released/
However, I don't think this is a big issue at all.
GIMP is not meant to be "secure software" - it runs as a user process and has to deal with tens of file formats, each able to contain up to hundreds of different data structures. It uses third-party libraries to handle some of those formats.
One can't expect any version of GIMP to be secure against opening a file and having that file execute arbitrary code with the same privileges the program itself has. While this particular vulnerability concerns GIMP's native XCF files, and may be fixed in that respect, one can simply open a PostScript file, which is by definition a complete program - it will run arbitrary code even for well-formed images. In most cases the PostScript libraries in use should sandbox the running program and prevent it from accessing, say, the filesystem, but it can still burn CPU as a denial-of-service attack nevertheless.
It is up to your OS to control what resources a user application can access. Vulnerabilities in GIMP won't offer privilege escalation if the OS is tight, and one could even use finer-grained security features (e.g. SELinux) to further restrict application access.
As for GIMP, the 2.8.18 version is out as of yesterday - if this particular issue is marked as fixed, you should try to grab that one.
We have developed a kernel extension (KEXT) for a virtual file system (VFS) on macOS to integrate our software with external programs like Adobe InDesign or Microsoft Word. Our software and the KEXT are used by many of our customers.
It looks like KEXTs are deprecated and may be removed completely in future versions of macOS, particularly on Apple Silicon based computers. See e.g. Apple's announcement in its security guide:
"This is why developers are being strongly encouraged to adopt system extensions before kext support is removed from macOS for future Mac computers with Apple silicon"
Therefore we are currently investigating possible alternatives.
Apple suggests migrating to System Extensions instead of KEXTs. However, the only VFS-related API we found is to implement a File Provider based on an NSFileProviderReplicatedExtension.
Unfortunately, NSFileProviderReplicatedExtension has several flaws:
Files can either be in the cloud or downloaded. It is not possible to download/read only a portion of a file. This is a big performance problem for us, since we work with large images (> 1 GB). The programs we integrate with typically only read a part of the image, e.g. the embedded preview, but the API does not offer a way to access selected blocks of a file (random access; see the sketch after this list).
The File Provider learns about the file system content via enumerators, so everything inside a folder must be enumerated (listed) before it can be accessed. However, we cannot enumerate our VFS: most of its content is fully dynamic and only comes into existence when a client accesses it for the first time. Such dynamic content also depends on dynamic parameters like the client's locale or the size of the box where the image will be placed. Since we do not know those parameters in advance, we cannot enumerate the VFS's content in advance.
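To illustrate what we mean by random access: below is a minimal sketch (C++ with POSIX I/O; the path, offset, and size are hypothetical) of the access pattern our VFS has to support but the File Provider API does not expose.

```cpp
// Sketch of the access pattern: a client reads only the embedded preview
// of a large image, never the whole file. POSIX pread() provides exactly
// this kind of random access.
#include <fcntl.h>     // open, O_RDONLY
#include <unistd.h>    // pread, close
#include <cstdio>      // perror, printf
#include <vector>

int main()
{
    // Hypothetical: the preview lives in the first 512 KiB of the file.
    const off_t  previewOffset = 0;
    const size_t previewSize   = 512 * 1024;

    // Hypothetical mount point and file name, purely for illustration.
    int fd = open("/Volumes/OurVFS/image.psd", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    std::vector<unsigned char> preview(previewSize);
    ssize_t n = pread(fd, preview.data(), preview.size(), previewOffset);
    if (n > 0)
        std::printf("read %zd preview bytes instead of the whole >1 GB file\n", n);

    close(fd);
    return 0;
}
```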
This means an NSFileProviderReplicatedExtension, in its current state, isn't a replacement for a "real" VFS and therefore cannot serve as a replacement for our current VFS KEXT.
My questions:
Will Apple still allow kernel extensions in future versions of (Apple Silicon/M1 based) operating systems? Or is there at least a clear deadline?
If not, what is Apple's officially suggested replacement for KEXT based VFS solutions?
Will the API of an NSFileProviderReplicatedExtension be improved to behave like a "real" file system, so that the above-mentioned flaws are no longer an issue?
Many thanks for any answers or comments!
Best regards,
Michael
Will Apple still allow kernel extensions in future versions of (Apple Silicon/M1 based) operating systems? Or is there at least a clear deadline?
Apple doesn't really give timelines, and they also occasionally break promises of support.
However, this sort of hard API deprecation and removal is usually done as part of a major release, so you will typically get a deprecation notice for it at WWDC one year; users might start seeing deprecation warnings when the .0 of that OS release ships at the earliest, sometimes only in the .3 or .4 revision. Then you'll typically be told at the next WWDC that the API is blocked in the upcoming release, so by that point you should have implemented a replacement.
If not, what is Apple's officially suggested replacement for KEXT based VFS solutions?
As far as I'm aware, NSFileProviderReplicatedExtension is currently the only one.
Will the API of an NSFileProviderReplicatedExtension be improved to behave like a "real" file system, so that the above-mentioned flaws are no longer an issue?
Other than via beta SDKs, Apple generally doesn't pre-announce future APIs.
My advice:
File an issue for each of the File Provider shortcomings you are hitting, using Feedback Assistant (Radar).
File an "enhancement request" feedback issue with Apple for a "real" file system API replacement for the VFS KPI.
If your VFS KEXT is critical to your business/product, I suggest additionally asking Apple's DTS via a TSI what they recommend for your situation. Reference the feedback IDs of the issues you filed; otherwise they will just recommend that you file issues.
I am very new to Windows system programming. In my project I need to read the registered "ProgramFiles" location for 32- and 64-bit processes.
I have narrowed it down to two choices: either use SHGetKnownFolderPath or read the values for these folders from the system registry, but I have some security concerns. Can someone please compare these two methods in terms of security and reliability?
You don't read stuff from undocumented random places in the registry, because that's just an implementation detail of where Windows currently stores that data. It may easily happen that:
in some future version of Windows they'll decide that such data needs to be stored elsewhere;
the data you found is there only on some configurations (a particular IE version installed, the machine not joined to Active Directory, no folder redirection in place, ...) - but you can't know that, since there's no documentation that guarantees you anything.
The correct way to go is to use the documented interfaces that the OS provides, on which Microsoft explicitly makes promises of compatibility (they promise that a public function that works today - if used according to the documentation - will continue to work tomorrow).
tl;dr: use SHGetKnownFolderPath - or SHGetFolderPath if you want to remain compatible with Windows versions before Vista, which in general is a good thing, given that Windows XP still seems to have more market share than all OS X versions combined.
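For reference, here is a minimal C++ sketch of the documented route (Vista and later; error handling trimmed to the essentials):

```cpp
// Querying the Program Files folders via the documented
// SHGetKnownFolderPath API.
#include <windows.h>
#include <shlobj.h>    // SHGetKnownFolderPath, FOLDERID_*
#include <iostream>

#pragma comment(lib, "shell32.lib")
#pragma comment(lib, "ole32.lib")

int main()
{
    PWSTR path = nullptr;

    // FOLDERID_ProgramFiles resolves to the Program Files directory that
    // matches the bitness of the calling process (on 64-bit Windows, a
    // 32-bit process gets "C:\Program Files (x86)").
    if (SUCCEEDED(SHGetKnownFolderPath(FOLDERID_ProgramFiles, 0, nullptr, &path)))
    {
        std::wcout << L"Program Files: " << path << L"\n";
        CoTaskMemFree(path);  // the caller must free the returned string
    }

    // FOLDERID_ProgramFilesX86 always names the 32-bit folder.
    if (SUCCEEDED(SHGetKnownFolderPath(FOLDERID_ProgramFilesX86, 0, nullptr, &path)))
    {
        std::wcout << L"Program Files (x86): " << path << L"\n";
        CoTaskMemFree(path);
    }
    return 0;
}
```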
This is a conceptual question, and I hope it fits Stack Overflow's question-and-answer style. I wonder what the concept of installing applications is good for. In my naive understanding of operating systems, we should not need a registry, and to use an application it should be enough to copy the executable and its files onto your drive and launch it.
I am a Windows user but have also worked with Linux a bit, and noticed that there are package managers instead of installers. But even those do more than just copy files, I guess.
I do not think all these installers exist only because the common user expects them out of habit. So what is the advantage of installers over applications designed to run out of a single folder that you simply copy over?
I would really like it if someone could explain that concept.
Installing applications is a way to embed them in the OS. It's a kind of standard: you get procedures like install and uninstall that work the same way for all applications (even "Change" under Windows).
Countless times I've "installed" applications with a single shell script that came with them, and then had trouble removing such programs, having to hunt for individual files. If the programmer uses the OS's standard way to make an installable executable, that won't happen.
You can also easily view a list of the installed programs at any time.
Under Linux, additionally, if we're talking about a package manager, it is convenient for the user to have an easy way to download and install a program by just typing its name.
Last but not least, some applications are required to be installed and recognized by the OS (for example services in Windows).
For a given extension, for example ".psd", I'd like to be able to determine the default application path for opening this file, for example "/Applications/Adobe Photoshop CS4.app".
I've looked into the Launch Services API, and there are clearly programmatic ways to get this information. Unfortunately for my particular scenario, only a scripting solution (Applescript or shell script) will do.
I've also looked at "lsregister -dump". It seems to be unwise to rely on parsing this information, since there are no guarantees as to the stability of the output format.
I've been solving this problem in the past with Creator Codes, but since Apple seems to be phasing them out since Snow Leopard I'm trying to eliminate dependence on Creator Codes.
Thanks
Launch Services is the one and only place to get that information. You can write a scripting addition that will expose its functionality to AppleScript, but then you have to install that on whatever machine you plan to run on.
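For illustration, here is a minimal C++ sketch of the Launch Services call that such a scripting addition (or a small command-line helper invoked from a shell script) could wrap; LSGetApplicationForInfo is the relevant function:

```cpp
// Ask Launch Services for the default application for a file extension.
// Compile with: c++ default_app.cpp -framework CoreServices
#include <CoreServices/CoreServices.h>
#include <climits>   // PATH_MAX
#include <iostream>

int main()
{
    CFURLRef appURL = nullptr;

    // Look up the default handler for ".psd" in any role.
    OSStatus err = LSGetApplicationForInfo(kLSUnknownType, kLSUnknownCreator,
                                           CFSTR("psd"), kLSRolesAll,
                                           nullptr /* no FSRef wanted */,
                                           &appURL);
    if (err == noErr && appURL != nullptr)
    {
        char path[PATH_MAX];
        if (CFURLGetFileSystemRepresentation(appURL, true,
                                             reinterpret_cast<UInt8 *>(path),
                                             sizeof(path)))
            std::cout << path << "\n";  // e.g. /Applications/Adobe Photoshop CS4.app
        CFRelease(appURL);
    }
    return err == noErr ? 0 : 1;
}
```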
System Events does give you this in Leopard.
I was watching the WWDC 2009 keynote, and something someone said about Windows 7/Vista got me curious.
The speaker claimed that Windows 7 was still a poor operating system because it still used the same technologies, such as DLLs and the registry. How accurate are his claims, and how differently does OS X do it? OS X has dynamically loaded libraries too, right? I guess the registry point might have some weight.
Can anyone explain to me the differences in each OS' strategy?
I'm not trying to incite fanboys here or anything; I just want to know how both operating systems tackle these problems in general.
Thanks,
kreb
Of course both operating systems have facilities for dynamically loaded libraries (on OS X they're called dylibs or frameworks, depending on how they're packaged). Dylibs are very much like DLLs - they are dynamically linked libraries, and as such there may be multiple versions of them floating around. Frameworks, on the other hand, are really a directory structure: they contain dynamically linked libraries (potentially multiple versions of them), resources, headers, documentation, etc. The dynamic linker on OS X automatically picks the correct library version from the framework for each executable. This system appears to work better than Windows' DLL management, which is, well, still quite a mess (of course, Windows is tied to legacy issues that Apple dropped when it moved to OS X). To be fair, Unix has had a solution to this problem for a long time as well, using symbolic links to point dylibs at their correct versioned implementation, allowing multiple installed versions.
There is no OS X equivalent of the Windows registry. This is good and bad. The good side is that it's much harder to corrupt an entire OS X system with a registry screw-up. OS X instead stores configuration in many separate files, usually one or more per application, per user, and so on. These files are generally plists (an XML schema representing dictionaries, arrays, and primitive types). The bad side is that, by retaining this Unix-y heritage, OS X doesn't have the same über-admin tools that can churn through the registry and do all sorts of crazy things.
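As an illustration, here's a minimal C++ sketch of reading one value from an application's plist-backed preferences via CoreFoundation (the key and bundle identifier below are made up):

```cpp
// Read a single preference value, the OS X counterpart of a registry lookup.
// Compile with: c++ read_pref.cpp -framework CoreFoundation
#include <CoreFoundation/CoreFoundation.h>
#include <iostream>

int main()
{
    // Hypothetical key and bundle ID, purely for illustration.
    CFPropertyListRef value = CFPreferencesCopyAppValue(
        CFSTR("LastOpenedDocument"), CFSTR("com.example.MyApp"));

    if (value != nullptr)
    {
        // Preference values are typed; check before converting.
        if (CFGetTypeID(value) == CFStringGetTypeID())
        {
            char buf[1024];
            if (CFStringGetCString(static_cast<CFStringRef>(value), buf,
                                   sizeof(buf), kCFStringEncodingUTF8))
                std::cout << buf << "\n";
        }
        CFRelease(value);
    }
    return 0;
}
```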
DLLs
The major difference between OS X and Windows is that Windows historically tried to save space and memory by having everyone share code (i.e. you install one DLL and everyone can use it). Apple statically compiles (well, not really, but it may as well be) all of the non-system libraries into every application. That wastes disk space and memory, but makes app deployment much easier and avoids versioning issues.
Registry
OS X does have a registry of sorts; it's just flat files called plists, instead of a magic component that's mostly like a filesystem except where it's not. Apple's approach makes it easy to migrate settings from one machine to another, whereas Windows' approach is faster in memory and lets apps cheaply "watch" a key (i.e. one app changes a key and another instantly knows about it).
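For illustration, a minimal C++ sketch of that key-watching facility on the Windows side, using RegNotifyChangeKeyValue (the key below is made up; error handling trimmed):

```cpp
// Block until a value under a registry key changes.
#include <windows.h>
#include <iostream>

#pragma comment(lib, "advapi32.lib")

int main()
{
    HKEY key;
    // Hypothetical key, purely for illustration.
    if (RegOpenKeyExW(HKEY_CURRENT_USER, L"Software\\Example", 0,
                      KEY_NOTIFY, &key) != ERROR_SUCCESS)
        return 1;

    // With no event handle and fAsynchronous = FALSE, the call blocks
    // until a value under the key is set or deleted.
    if (RegNotifyChangeKeyValue(key, FALSE, REG_NOTIFY_CHANGE_LAST_SET,
                                nullptr, FALSE) == ERROR_SUCCESS)
        std::cout << "key changed\n";

    RegCloseKey(key);
    return 0;
}
```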
In conclusion
The keynote presenter is full of it. 10.6 is mostly the same code as 10.5, which was mostly the same code as 10.4, and so on, just as Win7 is mostly Vista, which is mostly Server '03, etc. There's far too much tested code in an operating system to throw it away every release, especially if you actually want your customers' apps to keep working.
DLLs are bad variations of libraries since they are unable to operate on their own; to use them, a further wrapper executable is called (automatically), which adds unneeded overhead and makes it much harder to tell which libraries are actually in use. Another, less important flaw is the inability for systems to truly share a library.
*nix systems avoid this by having libraries exist at the top level, running on their own or under a larger wrapper (like kdeinit). The libraries may be shared by any applications, meaning only a single copy of each library is required, and you may at any time kill a single library with ease as required.
The registry is a great idea, except that it is used for so much: almost anything you install will use the registry, and a corrupt registry can render your operating system almost completely useless until it's fixed.
This is avoided in *nix systems by having multiple different files for different content: drivers are referred to via Xorg's config file, installed applications are recorded in their own database, and keys or identification are often written into a directory rather than a single all-purpose file. This reduces the likelihood of a serious failure and means that at any time you can probably still repair the system. If Xorg becomes corrupt you just reconfigure it; if the installed-applications database becomes corrupt you can repair or rebuild it; and should an application's individual settings directory become corrupt, you need only reinstall one application (and most good commercial apps should have a way to repair this anyway).