Could we change the protobuf version of Cobalt?
The current protobuf version is 2.4, but Widevine (2.0.8) needs protobuf version 2.5.
In google_streaming_api.pb.h, the version "2004000" is hard-coded, and the file warns "DO NOT EDIT!" at the top.
Could you give us some suggestions?
Protobuf was actually included primarily for Widevine support, but for an older version of Widevine. Cobalt now also uses it for other purposes.
If Widevine requires a particular version of protobuf, it is most likely because it requires some feature of protobuf that was introduced in that version, so just changing the version number is probably not going to work.
Assuming protobuf maintains backwards compatibility, it should be fine to rebase to a later version, but you would need to port it to Starboard, as has been done for the bundled version of protobuf.
Another option that might end up being more expedient is to link to Widevine as a shared object, so Cobalt can use its version of protobuf and Widevine can use its own. You would need to make sure that neither Cobalt nor the Widevine library exports any protobuf symbols.
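For the shared-object route, a minimal sketch of the link step, assuming Widevine's protobuf is linked in as a static archive (all file and library names here are illustrative, not Cobalt's actual build rules):
g++ -shared -o libwidevine_cdm.so widevine/*.o libprotobuf.a -Wl,--exclude-libs,libprotobuf.a
# --exclude-libs keeps the symbols pulled in from the static protobuf
# archive out of the shared object's dynamic symbol table
nm -D --defined-only libwidevine_cdm.so | grep -i protobuf
# should print nothing, i.e. no protobuf symbols are exported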
I've filed an internal ticket to update the protobuf version, so eventually Cobalt will bundle an updated version of the protobuf library that has been ported to Starboard.
I was using Plovr as the Closure Compiler for the latest Google Closure Library, but I don't think it plays nicely. Why is this so?
Link: http://plovr.com/docs.html
Thanks in advance for the help,
Kiran
Generally speaking, if you update the library, you also need to update the compiler, as they're designed to work together. Getting a new compiler release to work with Plovr is non-trivial, since Plovr makes use of the compiler's internals and thus must itself be recompiled. While Plovr supports pointing to a custom Closure Library checkout, there have been enough changes in Closure (e.g. in dependency management, the type system, and module declarations) to make Plovr's stale compiler incompatible with recent library releases.
Here are a couple of blocking issues in supporting more recent editions of the Library:
#162: Dependency analysis is outdated
#160: Generation logic of deps.js is outdated
This is particularly striking as new namespaces in the Library make use of the goog.module-style declaration.
I would like to add that I contributed to Plovr last week to support the latest version. However, since Medium took over the npm package, I created a fork that I intend to keep updated. Google Closure Compiler and Google Closure Library are both excellent tools, and so is Plovr.
Please take a look at https://github.com/Plovr/Plovr-build/packages/36644, which is the npm package hosted on GitHub Packages. I plan to publish it to npm later too. It works with the current latest release of Closure (v20190929, released 14 days ago as of this writing).
I see that cfx always produces an xpi with its own minVersion and maxVersion. However, those are limited to the versions the SDK is compatible with, e.g. SDK 1.14 only supports FF 21 - 25.0a1, and SDK 1.17 only supports FF 26 - 30. My questions are:
Do I need to package my extension with the new SDK every time a new version comes out?
How do I maintain and update my extension in the future? Does the Add-on Developer Hub provide a way to submit the same extension for multiple SDK versions? I tried to look around but couldn't find a way to submit multiple versions.
I want to make FF 21 the minimum version, since that's the first version with the SDK built in. My extension currently compiles with both SDK 1.14 and SDK 1.17 with only cosmetic (syntax) adjustments.
The developer hub lets you choose which versions of Firefox the add-on is compatible with. This is just a GUI for setting the minVersion and maxVersion in the install.rdf. As long as you don't use modules or methods that require Firefox 22+, it shouldn't matter which version of the SDK you use, as the version of the SDK being run is determined by the version on your user's browser.
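The xpi that cfx produces is just a zip archive, so if you want to verify what was actually written, you can inspect the generated install.rdf directly (the xpi file name here is illustrative):
unzip -p my-addon.xpi install.rdf | grep -E 'em:(minVersion|maxVersion)'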
It's hard to find module-specific compatibility information (you can always go to the docs for the specific module and look at the edit history), but have a look at the SDK API Lifecycle to understand which modules can be used. Some notable examples are:
The new UI modules require FF29 and some of their features require FF30.
The widget module is deprecated from FF 29 onwards, being replaced by the above.
One way to handle the above for backward compatibility is to do the following:
const { version } = require('sdk/system/xul-app'); // version is a string, e.g. "29.0"
if (parseInt(version, 10) < 29)
  var widget = require('sdk/widget').Widget({...}); // deprecated from FF 29
else
  var button = require('sdk/ui/button/action').ActionButton({...});
So, to be clear:
It doesn't matter which version of the SDK you use unless you want to use new modules.
No, you shouldn't make multiple versions of your add-on. If you want to use new modules for new browsers, follow the code example above.
It's true that you must use valid existing application versions, but you generally don't need to repackage your add-ons, unless of course a change in the SDK directly affects them.
The reason for this is that by default the max target version is not going to be checked.
From the install manifest documentation:
strictCompatibility
A Boolean value indicating if the add-on should be enabled when the version of the application is greater than its max version. By default, the value of this property is false meaning that the compatibility checking will not be performed against the max version.
<em:strictCompatibility>true</em:strictCompatibility>
Usually, there is no need to restrict compatibility: not all new releases will break your extension and, if it is hosted on AMO, you'll get notice several weeks in advance if a potential risk has been detected. Moreover, an extension being disabled, even for a short period, makes for a bad user experience. About the only time you should need to set this is if your add-on does things that are likely to be broken by Firefox updates. You do not need to set this flag if your add-on has a binary component, since add-ons with binary components are always subject to strict compatibility checking (because binary components need to be rebuilt for every major application release anyway).
There is also a recommendation for choosing version ranges.
minVersion and maxVersion should specify the range of versions of the application you have tested with. In particular you should never specify a maxVersion that is larger than the currently available version of the application since you do not know what API and UI changes are just around the corner. With compatibility updating it is not necessary to release a whole new version of the extension just to increase its maxVersion.
Technically you can use wildcards, but the documentation mentions several times that AMO verifies and possibly rejects addons with incorrect versions.
Dear expert community,
I often face problems where software will not compile or run because libraries (specifically .so files) are too new.
If I try to install the old library (via apt-get on Ubuntu), I can get errors that it is "not installable" because of conflicts...
So the following questions arise:
1) How do I install old libraries/packages alongside newer ones, for example on Arch Linux or Ubuntu?
2) How do I avoid conflicts, so that the older library is used (linked) only by the "problematic old" software, or only when I specify it explicitly?
3) How can I check with CMake, make, or the autotools whether a specific (old) library version is installed, and if not, automatically fetch, install, and use it without conflicting with a newer version?
Thanks to any expert for the help.
Linux package managers normally don't allow installing multiple versions of the same package. You have to install older versions yourself, by hand, from source, preferably to some private place like /usr/old-versions.
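For a typical autotools-based library, that by-hand install might look like this (the library name and version are illustrative):
tar xf libfoo-1.2.tar.gz
cd libfoo-1.2
./configure --prefix=/usr/old-versions
make
sudo make install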
You link problematic software like this:
<link command> -L /usr/old-versions/lib -Wl,-rpath=/usr/old-versions/lib
and it automatically uses the old version of the library.
There is no way to do that automatically.
Note that you may need to compile against older versions of library headers too, not just link against old versions of the libraries.
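Putting the two together, a sketch of compiling and linking the problematic software against the old copies (libfoo is again illustrative):
gcc -I /usr/old-versions/include -c problematic.c
gcc problematic.o -L /usr/old-versions/lib -Wl,-rpath=/usr/old-versions/lib -lfoo -o problematic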
Let's say the .proto files never change, and we have a TCP-based client-server system in which the two sides talk to each other via protobuf messages. Previously, both client and server were on protobuf version 2.4.1. Now the server upgrades to 2.5.0 (first regenerating the .java files with the 2.5.0 protoc executable, then linking against the 2.5.0 runtime library), but the client still runs 2.4.1. Can this system still work?
I think this is a common question for client-server systems. I believe it will work, but I can't find anything about it in Google's documentation.
Yes, that should work fine. The only thing that would break it would be if you started using a new feature that doesn't exist in 2.4.1, but that would be impossible without changing the .proto schema, so you are indeed safe. Version tolerance is a big deal in protobuf; new features (for example, packed arrays) are always opt-in and require .proto changes.
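For reference, the server-side upgrade described in the question is just a regeneration plus a rebuild; a sketch with illustrative file and path names:
protoc --version   # should print: libprotoc 2.5.0
protoc --java_out=server/src/main/java messages.proto
# then rebuild the server against the 2.5.0 runtime jar; the client's
# 2.4.1-generated classes keep working because the wire format is unchanged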
I wrote an application and I need to run it on Gentoo,
but when I try to run it, I get the following message:
/lib/libc.so.6: version `GLIBC_2.3.4' not found (required by /usr/local/myapp/lib/myapplib.so.1)
The current glibc version on this Gentoo system is 2.3.2.
I can't update this glibc, because I don't have permission, so I need to 'downgrade'
my glibc to the same version (2.3.2)... how can I do it?
Thanks,
The "/lib/libc.so.6: version `GLIBC_2.3.4' not found" problem comes from trying to run a binary compiled against a newer glibc on a system with an old version of glibc. Downgrading glibc is strongly discouraged for this reason.
Since you say you wrote the application, it seems to me that the simplest solution is to recompile the application on the system where you plan to run it.
I'm actually wrestling with the same issue, so maybe I have some information that can help.
In short, your binary was linked against a newer build of libc.so.6, one that provides the symbol version GLIBC_2.3.4; the libc.so.6 on the target system is older and doesn't. As far as I know, if you downgrade the glibc on your dev machine, some of your other programs may not work properly (because they were compiled against the current version). Somehow CentOS/RHEL have a compat-glibc package that can live alongside the current glibc without causing this error. If your dev box uses CentOS/RHEL, install that package, recompile, and you should be good to go. You may need to use an older compiler for it to pick up the older library. If you're not developing on CentOS/RHEL, continue on.
My plan of attack today is to compile the older glibc from source. This means using a compiler that was released around the same time as that version of glibc. You may run into some stumbling blocks (such as needing an older version of binutils, etc.), but my hope is that once the older libc.so.6 is compiled and installed under /usr/local/lib, my application will find that copy before the one in /lib.
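If you try this, note that glibc refuses to be built inside its source tree, so the build goes in a separate directory; a rough sketch of the plan above, with the version chosen to match the target system:
tar xf glibc-2.3.2.tar.gz
mkdir glibc-build && cd glibc-build
../glibc-2.3.2/configure --prefix=/usr/local
make
make install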
So there it is. It's not for the faint of heart, and it's definitely not a quick solution. Today I plan on testing this out, so I can't really say it's the right solution. Please, hivemind... if I'm flat-out wrong, correct me and save this poor soul from this winding, torturous road :-)
EDIT: link to glibc sources