Golang dependencies registering sql driver in init() causing clash - go

I have some Go tests that depend on external code whose init() functions register mysql drivers. My code also needs to register mysql drivers, so I end up with a panic and the error "Register called twice for driver mysql" when running my tests.
The repo I depend on has a vendors directory with the driver in it ("vendors/github.com/go-sql-driver/mysql"). It seems that when I run my tests, the init() functions in that vendored driver are called and register the mysql driver, causing the clash.
The best option I can see would be to copy the dependency into my own vendors directory and remove the mysql driver from the dependency's vendor directory. However, I'm not keen on this as it involves duplicating my dependency and then modifying it by deleting files. Also, I only depend on it for running tests, so I'm not sure I should be moving it into my vendors directory anyway.
Is there a way to prevent init() being called on a dependency's vendor dependencies?

First of all, I would ditch the dependency. If it's registering a database driver (something a dependency really should never do) I predict there will be more problems with it. I also suggest opening an issue.
As a workaround, you can make the import conditional on whether you are in a test build. Create a file called e.g. mysqlimport.go containing nothing but:
// +build !test

package mypkg // substitute your own package name
import _ "github.com/go-sql-driver/mysql"
This way the import is only compiled in when you're not testing (note that the file still needs a package clause, and the build constraint needs a blank line after it). You'll have to run your tests with go test -tags test.
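For peace of mind you can also assert, in a test compiled only under the test tag, that the driver still ends up registered (by the dependency's vendored copy) so sql.Open("mysql", ...) keeps working. A small sketch; the file and package names are illustrative:
// +build test

package mypkg

import (
	"database/sql"
	"testing"
)

func TestMySQLDriverRegistered(t *testing.T) {
	// sql.Drivers() lists the names of all drivers registered so far.
	for _, name := range sql.Drivers() {
		if name == "mysql" {
			return
		}
	}
	t.Fatal("mysql driver not registered")
}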

Related

Avoiding Go panic when Oracle Database client libraries not found

I have a server written in Go that accesses an Oracle database. It works fine. However, there will be multiple instances running at different locations (currently 2), some of which do not need to access the database. (They get the same info passed on to them from their peer servers.)
I want the same executable running in all places but some will be configured to not use the database since they don't need it. The problem is that once I import the OCI package, its init() function is called which panics when it can't talk to the database.
Running Go 1.12.5 on Windows Server 2019.
I tried adding OCI.DLL to the same directory as the .EXE but it still panics.
import _ "github.com/mattn/go-oci8"
When I run on the server (without DB drivers) I get the error:
panic: OCIEnvCreate error
goroutine 1 [running]:
github.com/mattn/go-oci8.init.0()
D:/Golang/src/github.com/mattn/go-oci8/globals.go:160 +0x130
I want to avoid this panic when I don't need database access. I would prefer one .EXE without the mess of conditional builds.
Switch to the goracle driver, which delays Oracle client library initialization until a connection is actually opened, precisely to handle situations like yours where not every instance connects to the Oracle DB.
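A minimal sketch of that approach, where a command-line flag decides whether this instance uses the database at all (the flag, DSN handling and the goracle v2 import path are illustrative; check the package docs for your version):
package main

import (
	"database/sql"
	"flag"
	"log"

	_ "gopkg.in/goracle.v2" // registers the "goracle" driver; no Oracle client call happens at import time
)

func main() {
	useDB := flag.Bool("use-db", false, "this instance should talk to the Oracle database")
	dsn := flag.String("dsn", "", "Oracle connection string")
	flag.Parse()

	if !*useDB {
		log.Println("running without database access")
		return // the Oracle client libraries are never touched
	}

	db, err := sql.Open("goracle", *dsn)
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := db.Ping(); err != nil { // client initialization happens on first use, not at import
		log.Fatal(err)
	}
}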
If adding the DLLs to the same directory as the exe does solve your problem and you still want to distribute a single file, you can have the exe copy all of the DLLs over when it starts, and even delete them when it is done if you want. This way you can transfer one file to multiple locations, but there is most likely no way to keep it a single file while running.

Distributed JMeter test fails with java error but test will run from JMeter UI (non-distributed)

My goal is to run a load test using 4 Azure servers as load generators and 1 Azure server to initiate the test and gather results. I had the distributed test running and I was getting good data. But today when I remote-start the test, 3 of the 4 load generators fail, with all of the HTTP transactions erroring. The failed transactions log the following error:
Non HTTP response message: java.lang.ClassNotFoundException: org.apache.commons.logging.impl.Log4jFactory (Caused by java.lang.ClassNotFoundException: org.apache.commons.logging.impl.Log4jFactory)
I confirmed the presence of commons-logging-1.2.jar in the jmeter\lib folder on each machine.
To try to narrow down the issue I set up one Azure server to both initiate the load and run JMeter-server but this fails too. However, if I start the test from the JMeter UI on that same server the test runs OK. I think this rules out a problem in the script or a problem with the Azure machines talking to each other.
I also simplified my test plan down to where it only runs one simple http transaction and this still fails.
I've gone through all the basics: reinstalled jmeter, updated java to the latest version (1.8.0_111), updated the JAVA_HOME environment variable and backed out the most recent Microsoft Security update on the server. Any advice on how to pick this problem apart would be greatly appreciated.
I'm using JMeter 3.0r1743807 and Java 1.8
The Azure servers are running Windows Server 2008 R2
I did get a resolution to this problem. It turned out to be a conflict between some extraneous code in a jar file and a component of JMeter. It was “spooky” because something influenced the load order of referenced jar files and JMeter components.
I had included a jar file in my JMeter script using the “Add directory or jar to classpath” function in the Test Plan. This jar file has a piece of code I needed for my test along with many other components and one of those components, probably a similar logging function, conflicted with a logging function in JMeter. The problem was spooky; the test ran fine for months but started failing at the maximally inconvenient time. The problem was revealed by creating a very simple JMeter test that would load and run just fine. If I opened the simple test in JMeter then, without closing JMeter, opened my problem test, my problem test would not fail. If I reversed the order, opening the problem test followed by the simple test then the simple test would fail too. Given that the problem followed the order in which things loaded I started looking at the jar files and found my suspect.
When I built the script I left the jar file alone, thinking that the functions I needed might have dependencies on other pieces within the jar. Now that things were broken I had to find out whether that was true, and happily it was not. So, to fix the problem I changed the extension on my jar file to zip, then edited it in 7-Zip and removed all the code except what I needed. I kept all the folders in the path to my needed code for two reasons: I did not have to update the code that called the functions, and when I tried changing the path the functions did not work.
Next I changed the extension on the file back to jar and changed the reference in JMeter’s “Add directory or jar to classpath” function to point to the revised jar. I haven’t seen the failure since.
Many thanks to the folks who looked at this. I hope the resolution will help someone out.

Idiomatic approach to a Go plugin-based system

I have a Go project I would like to open source but there are certain elements which are not suitable for OSS, e.g. company specific logic etc.
I have conceived of the following approach:
interfaces are defined in the core repository.
Plugins can then be standalone repositories whose types implement the interfaces defined in core. This allows the plugins to be housed in completely separate modules and therefore have their own CI jobs etc.
Plugins are compiled into the final binary via symlinks.
This would result in a directory structure something like the following:
|- $GOPATH
  |- src
    |- github.com
      |- jabclab
        |- core-system
          |- plugins <-----|
      |- xxx               |
        |- plugin-a ------>| ln -s
      |- yyy               |
        |- plugin-b ------>|
With an example workflow of:
$ go get git@github.com:jabclab/core-system.git
$ go get git@github.com:xxx/plugin-a.git
$ go get git@github.com:yyy/plugin-b.git
$ cd $GOPATH/src/github.com
$ ln -s ./xxx/plugin-a/*.go ./jabclab/core-system/plugins
$ ln -s ./yyy/plugin-b/*.go ./jabclab/core-system/plugins
$ cd jabclab/core-system
$ go build
The one issue I'm not sure about is how to make the types defined in plugins available at runtime in core. I'd rather not use reflect but can't think of a better way at the moment. If I was doing the code in one repo I would use something like:
// plugins/plugins.go
package plugins

type Plugin interface {
	Exec(chan<- string) error
}

// Registry must be initialized, otherwise the init() registrations below would panic on a nil map.
var Registry = map[string]Plugin{}

// plugin_a.go (same package)
func init() { Registry["plugin_a"] = PluginA{} }

// plugin_b.go (same package)
func init() { Registry["plugin_b"] = PluginB{} }
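For completeness, a sketch of how the core could then consume that registry at runtime (the import path is taken from the layout above; everything else is illustrative):
package main

import (
	"log"

	"github.com/jabclab/core-system/plugins"
)

func main() {
	out := make(chan string)
	go func() {
		for msg := range out {
			log.Println(msg)
		}
	}()
	for name, p := range plugins.Registry {
		if err := p.Exec(out); err != nil {
			log.Printf("plugin %q failed: %v", name, err)
		}
	}
}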
In addition to the above question would this overall approach be considered idiomatic?
This is one of my favorite issues in Go. I have an open source project that has to deal with this as well (https://github.com/cpg1111/maestrod); it has pluggable DB and runtime (Docker, k8s, Mesos, etc.) clients. Prior to the plugin package that is now in the master branch of Go (so it should be coming in a stable release soon), I just compiled all of the plugins into the binary and let configuration decide which to use.
As of the plugin package, https://tip.golang.org/pkg/plugin/, you can use dynamic loading for plugins, similar to C's dlopen(), and the behavior of Go's plugin package is pretty well outlined in the documentation.
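A minimal sketch of that loading model, assuming each plugin is built with go build -buildmode=plugin and exports a top-level func Exec(chan<- string) error (names and paths are illustrative):
package main

import (
	"log"
	"plugin"
)

func main() {
	p, err := plugin.Open("./plugin-a.so") // compiled plugin object
	if err != nil {
		log.Fatal(err)
	}
	sym, err := p.Lookup("Exec") // exported function in the plugin
	if err != nil {
		log.Fatal(err)
	}
	exec, ok := sym.(func(chan<- string) error)
	if !ok {
		log.Fatalf("unexpected type %T for Exec", sym)
	}
	out := make(chan string, 8)
	go func() {
		for msg := range out {
			log.Println(msg)
		}
	}()
	if err := exec(out); err != nil {
		log.Fatal(err)
	}
}
Keep in mind the plugin package only works on a subset of platforms (initially Linux only), so it may not fit every deployment.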
Additionally I recommend taking a look at how Hashicorp addresses this by doing RPC over a local unix socket. https://github.com/hashicorp/go-plugin
The added benefit of running a plugin as a separate process, as Hashicorp's model does, is stability: if the plugin fails, the main process can handle that failure gracefully.
I should also mention that Docker does its plugins in Go similarly, except Docker uses HTTP instead of RPC. Additionally, a Docker engineer has posted in the past about embedding a JavaScript interpreter for dynamic logic: http://crosbymichael.com/category/go.html.
The issue I wanted to point out with the sql package's pattern that was mentioned in the comments is that it isn't really a plugin architecture: you're still limited to whatever is in your imports. You could have multiple main.go files, but that's not a plugin; the point of a plugin is that the same program can run either one piece of code or another. What you have with things like the sql package is flexibility, where a separate package determines which DB driver to use. Nonetheless, you end up modifying code to change which driver you are using.
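To make that concrete: with the database/sql pattern the "plugin" is chosen at compile time by a blank import, so switching drivers means editing source rather than loading something at runtime. A small sketch (package name illustrative):
package store

import (
	"database/sql"

	_ "github.com/go-sql-driver/mysql" // swapping databases means changing this import and the driver name below
)

func Open(dsn string) (*sql.DB, error) {
	return sql.Open("mysql", dsn)
}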
I want to add that, with all of these plugin patterns (aside from compiling everything into the same binary and choosing via configuration), each plugin can have its own build, test and deployment pipeline (i.e. its own CI/CD), though it doesn't have to.

Deploying huge PLSQL package via JDBC is incredible slow

I have to deploy very large PL/SQL packages via JDBC and am experiencing extremely long deployment times. I know that it isn't a good idea to use packages with up to 25,000 lines of code, but I have no choice about that right now. Deployment of such a package takes about 2.5 hours via JDBC.
I read the package from the filesystem, wrapping the FileReader in a BufferedReader. I parse it line by line, check for a delimiter, and append each line to a StringBuilder until the statement is complete. Then I call the StringBuilder's toString() and hand the resulting String over to my Statement's execute().
Thank you for any advice!
Oracle PL/SQL packages consist of:
a package header - the specification similar to C .h files
the package body - the equivalent of a .c file
Uploading a package header will invalidate all PL/SQL packages that utilize functions/procedures/records/constants/objects/types from that package - generally anything that references or uses something from the package.
Specify PACKAGE to recompile both the package specification and the package body if one exists, regardless of whether they are invalid. This is the default. The recompilation of the package specification and body lead to the invalidation and recompilation of dependent objects as described for SPECIFICATION and BODY.
The database also invalidates all objects that depend upon emp_mgmt. If you subsequently reference one of these objects without explicitly recompiling it first, then the database recompiles it implicitly at run time.
source: http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/alter_package.htm
Uploading a package body has less impact on the state of the database. Packages dependent on the uploaded unit will not be invalidated.
Recompiling a package body does not invalidate objects that depend upon the package specification.
When you recompile a package body, the database first recompiles the objects on which the body depends, if any of those objects are invalid. If the database recompiles the body successfully, then the body becomes valid.
source: http://docs.oracle.com/cd/B28359_01/appdev.111/b28370/alter_package.htm
Your extremely long compile times are probably caused by a cascade of recompilations performed by the database. This is typically the result of keeping the package header and the package body in a single file and uploading both the header and the body of each package sequentially, as in the following example:
#pkg_a.sql - dependencies invalidated and recompilation attempt occurs
#pkg_b.sql - same as above
#pkg_c.sql - same as above
In this scenario, the database might attempt to recompile some packages several times without success because further dependencies were not yet uploaded. That's time wasted for dependency resolution and compilation.
This scenario can be greatly improved by splitting the packages into .pks files (containing only the header) and .pkb files (containing only the body), then uploading all the header files first and the bodies afterwards:
#pkg_a.pks - dependencies invalidated but not recompiled
#pkg_b.pks - same as above
#pkg_c.pks - same as above
#pkg_a.pkb - pkg_a recompiled successfully because all headers are up to date
#pkg_b.pkb - same as above
#pkg_c.pkb - same as above
This is possible because only the package headers of the dependencies need to be valid in order to compile a dependent package. In this scenario, recompilation of each package body occurs only once.
Splitting the packages into header and body files will also allow you to avoid uploading header files which did not change. This is quite common, as most changes are made to the body (the actual code) of a library. Skipping an unnecessary upload of a package header results in fewer packages being invalidated and hence less work to revalidate the whole database.
This approach should help you vastly reduce the time required to upload changes to the database.
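The client language doesn't matter here; only the upload order does. A rough sketch of that ordering (shown in Go with database/sql only because the exact JDBC loader code isn't given; the file names, driver name and DSN are illustrative):
package main

import (
	"database/sql"
	"log"
	"os"

	_ "gopkg.in/goracle.v2" // any registered Oracle driver will do
)

// deploy executes each file as one statement; it assumes each file contains
// exactly one CREATE OR REPLACE PACKAGE [BODY] statement with no trailing "/".
func deploy(db *sql.DB, files []string) {
	for _, f := range files {
		src, err := os.ReadFile(f)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := db.Exec(string(src)); err != nil {
			log.Fatalf("%s: %v", f, err)
		}
	}
}

func main() {
	db, err := sql.Open("goracle", os.Getenv("ORACLE_DSN"))
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	// All specifications first, then all bodies, so each body compiles exactly once.
	deploy(db, []string{"pkg_a.pks", "pkg_b.pks", "pkg_c.pks"})
	deploy(db, []string{"pkg_a.pkb", "pkg_b.pkb", "pkg_c.pkb"})
}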
I don't think your long package deployment times have to do with the Java, JDBC load process, but rather with the Oracle database package management.
When you execute a CREATE PACKAGE BODY FOO..., the RDBMS software first checks (via database locks) that no other session in the system is using the package. If one is, your process hangs until all the other users are finished with the package. Then it commits your package into the database. One way to test this is to rename the package (in your original source file) and try to load it; if that doesn't take 2.5 hours, locking may be a contributing factor.
The other thing the RDBMS does when you run the statement is compile the package, which involves verifying all the references in your code (e.g. tables, views, other packages) and generating an encoded version. For a large package the compile time may be significant. The way to test this is to run the statement ALTER PACKAGE FOO COMPILE; and time it.

Creating dtrace probes for plugins using single provider name

Note that this is for Mac OS X, although I imagine my problem would exist on any dtrace-capable OS.
I have an app that utilizes a lot of plugins. I'm adding userland probes to it, in both the core application and in the plugins themselves. The issue is that if I use the same provider name in the plugins that the main app is using, those probes aren't showing up when I attempt to create a list of available probes. It appears that whichever code loads first wins.
my .d file in my main app:
provider MyApp {
	probe doSomething();
};
and in my plugin:
provider MyApp {
	probe plugin_doSomethingPluginish();
};
Changing the name of the provider to something else, like MyAppPlugin, works, but then the list of providers is going to get insane (MyAppPlugin1, MyAppPlugin2, etc). I'd like to think that there's a way to add in new plugin-defined probes under the same provider name as the main app, but I'm either not seeing it or it doesn't exist.
So is there a way to do this? And if not, is it normal to have a different provider for each plugin even though the module name is already unique? Seems like that's what the module name is for...
You should just define one provider .d file and import the generated .h file into each class that uses any of those probes; there is really no reason to have multiple .d files each declaring the same provider. I just checked the DTrace documentation and don't see anything about this right off the bat, but I would presume that multiple .d files each defining the same provider create some sort of conflict, or that loading a probe listing for the same provider redefines the listing rather than extending it as you probably intended.
