Use the same message in separate modules with buf build - protocol-buffers

We are creating APIs for a web app and a smartphone app using Protocol Buffers.
There is a common message shared between the definitions for the web app and the definitions for the smartphone app.
webapi ┬ myuniquenamemoduleweb ┬ common ┬ bar.proto
       ├ buf.yaml
appapi ┬ myuniquenamemoduleapp ┬ common ┬ bar.proto
       ├ buf.yaml
buf.yaml
In such a case, would it be a good policy to create a shared module (for example, myappcommon) and register it with the BSR?
webapi ┬ myuniquenamemoduleweb
       ├ buf.yaml // deps: myuniquenamemodulecommon
appapi ┬ myuniquenamemoduleapp
       ├ buf.yaml // deps: myuniquenamemodulecommon
common ┬ myuniquenamemodulecommon ┬ common ┬ bar.proto
       ├ buf.yaml
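For illustration, the buf.yaml of each app module in that layout might declare the shared module as a dependency roughly like this (a sketch only; the BSR organisation name myorg and the module names are placeholders):

version: v1
name: buf.build/myorg/myuniquenamemoduleweb
deps:
  - buf.build/myorg/myuniquenamemodulecommon

With buf CLI v1, pushing the common module with buf push and then running buf mod update in each app module would pin the dependency in buf.lock.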
I looked for projects on GitHub that might be helpful and read Buf's documentation.

ModuleController: has no selected Controller - test was shutdown as a consequence

[Script Taxonomy][1]
[1]: https://i.stack.imgur.com/2r5S5.jpg
Attached is a picture that shows how the scripts are structured for our testing needs.
We have a JMeter project (e.g. Main.jmx) with multiple thread groups, as shown in the picture, and each thread group calls an external JMX file (e.g. Sub1.jmx, Sub2.jmx) using an Include Controller. In each external JMX file (e.g. Sub1.jmx, Sub2.jmx), we have created a thread group that contains simple controllers with a series of steps representing a test case. Each step in the simple controller calls a test fragment residing in the same Sub1.jmx using a Module Controller.
The module controller defined in the simple controller fails to locate the test fragments and produces the below error from the Sub1.jmx file.
Error occurred starting thread group :[TG]-Subscriptions, error message:ModuleController:[MC]-2. Login to portal/mobile has no selected Controller (did you rename some element in the path to target controller?), test was shutdown as a consequence,
see log file for more details
org.apache.jorphan.util.JMeterStopTestException: ModuleController:[MC]-2. Login to portal/mobile has no selected Controller (did you rename some element in the path to target controller?), test was shutdown as a consequence
    at org.apache.jmeter.control.ModuleController.resolveReplacementSubTree(ModuleController.java:143) ~[ApacheJMeter_components.jar:5.4.1]
    at org.apache.jmeter.control.ModuleController.restoreSelected(ModuleController.java:126) ~[ApacheJMeter_components.jar:5.4.1]
    at org.apache.jmeter.control.ModuleController.clone(ModuleController.java:70) ~[ApacheJMeter_components.jar:5.4.1]
    at org.apache.jmeter.engine.TreeCloner.addNodeToTree(TreeCloner.java:76) ~[ApacheJMeter_core.jar:5.4.1]
    at org.apache.jmeter.engine.TreeCloner.addNode(TreeCloner.java:63) ~[ApacheJMeter_core.jar:5.4.1]
    at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:993) ~[jorphan.jar:5.4.1]
    at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:994) ~[jorphan.jar:5.4.1]
    at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:994) ~[jorphan.jar:5.4.1]
    at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:994) ~[jorphan.jar:5.4.1]
    at org.apache.jorphan.collections.HashTree.traverseInto(HashTree.java:994) ~[jorphan.jar:5.4.1]
    at org.apache.jorphan.collections.HashTree.traverse(HashTree.java:976) ~[jorphan.jar:5.4.1]
    at org.apache.jmeter.threads.ThreadGroup.cloneTree(ThreadGroup.java:535) ~[ApacheJMeter_core.jar:?]
    at org.apache.jmeter.threads.ThreadGroup.makeThread(ThreadGroup.java:310) ~[ApacheJMeter_core.jar:?]
    at org.apache.jmeter.threads.ThreadGroup.startNewThread(ThreadGroup.java:265) ~[ApacheJMeter_core.jar:?]
    at org.apache.jmeter.threads.ThreadGroup.start(ThreadGroup.java:244) ~[ApacheJMeter_core.jar:?]
    at org.apache.jmeter.engine.StandardJMeterEngine.startThreadGroup(StandardJMeterEngine.java:527) [ApacheJMeter_core.jar:5.4.1]
    at org.apache.jmeter.engine.StandardJMeterEngine.run(StandardJMeterEngine.java:452) [ApacheJMeter_core.jar:5.4.1]
    at java.lang.Thread.run(Thread.java:834) [?:?]
Please advise if there is any way to get rid of the above error and achieve a successful connection between the module controller defined in the simple controller and the test fragment (in Sub1.jmx).
I cannot reproduce your issue using a simplified test plan (sketched below) consisting of:
a Sub1.jmx test plan with a Test Fragment and a Module Controller pointing to that Test Fragment
a Main.jmx test plan with an Include Controller pointing to Sub1.jmx
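For reference, a rough outline of that simplified setup (element names are illustrative):

Main.jmx
└── Thread Group
    └── Include Controller -> Sub1.jmx

Sub1.jmx
├── Test Fragment
└── Thread Group
    └── Module Controller -> Test Fragment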
So most probably you either forgot to "select" the necessary test fragment in the Module Controller, or you renamed the Test Fragment after selecting it, so the Module Controller cannot find its target anymore.
If I'm reading your question incorrectly and the problem is still there, I would ask you to provide a minimal reproducible example test plan based on Debug Samplers.
Also be aware that, according to JMeter Best Practices, you should always use the latest version of JMeter, so you can try upgrading, as you might be suffering from a JMeter bug which has already been fixed.

How to read/write/sync data on cloud with Kedro

In short: how can I save a file both locally AND on the cloud, and, similarly, how do I set it up to read from local?
Longer description: There are two scenarios: 1) building the model and 2) serving the model through an API. In building the model, a series of analyses is done to generate features and the model. The results are written locally, and at the end everything is uploaded to S3. For serving, all required files generated in the first step are downloaded first.
I am curious how I can leverage Kedro here. Perhaps I can define two entries for each file in conf/base/catalog.yml, one corresponding to the local version and the second for S3. But that is perhaps not the most efficient way when I am dealing with 20 files.
Alternatively, I can upload the files to S3 using my own script and exclude the synchronization from Kedro; in other words, Kedro is blind to the fact that copies exist on the cloud. Perhaps this approach is not the most Kedro-friendly way.
Not quite the same, but my answer here could potentially be useful.
I would suggest that the simplest approach in your case is indeed defining two catalog entries and having Kedro save to both of them (and load from the local one for an additional speed-up), which gives you the ultimate flexibility, though I do admit it isn't the prettiest.
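For illustration, the two catalog entries for one dataset might look roughly like this (the dataset name, type and paths are placeholders, assuming a pandas CSV dataset):

model_input:
  type: pandas.CSVDataSet
  filepath: data/05_model_input/model_input.csv

model_input.s3:
  type: pandas.CSVDataSet
  filepath: s3://my-bucket/model_input/model_input.csv

Your node would then list both model_input and model_input.s3 as outputs, or you let the hook below do that for you.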
In terms of avoiding all your node functions needing to return two values, I'd suggest applying a decorator to nodes that you mark with a certain tag, e.g. tags=["s3_replica"], taking inspiration from the script below (stolen from a colleague of mine):
from typing import Any, Callable, Dict, List

from kedro.framework.hooks import hook_impl
from kedro.io import DataCatalog
from kedro.pipeline.node import Node


class S3DataReplicationHook:
    """
    Hook to replicate the output of any node tagged with `s3_replica` to S3.

    E.g. if a node is defined as:

        node(
            func=myfunction,
            inputs=['ds1', 'ds2'],
            outputs=['ds3', 'ds4'],
            tags=['tag1', 's3_replica']
        )

    then the hook will expect to see `ds3.s3` and `ds4.s3` in the catalog.
    """

    @hook_impl
    def before_node_run(
        self,
        node: Node,
        catalog: DataCatalog,
        inputs: Dict[str, Any],
        is_async: bool,
        run_id: str,
    ) -> None:
        if "s3_replica" in node.tags:
            node.func = _duplicate_outputs(node.func)
            node.outputs = _add_local_s3_outputs(node.outputs)


def _duplicate_outputs(func: Callable) -> Callable:
    def wrapped(*args, **kwargs):
        outputs = func(*args, **kwargs)
        return (outputs,) + (outputs,)
    return wrapped


def _add_local_s3_outputs(outputs: List[str]) -> List[str]:
    return outputs + [f'{o}.s3' for o in outputs]
The above is a hook, so you'd place it in your hooks.py file (or wherever you want) in your project and then import it into your settings.py file and put:
from .hooks import ProjectHooks, S3DataReplicationHook
hooks = (ProjectHooks(), S3DataReplicationHook())
in your settings.py.
You can be slightly cleverer with your output naming convention so that it only replicates certain outputs. For example, maybe you agree that all catalog entries that end with .local also have to have a corresponding .s3 entry, and you mutate the outputs of your node in that hook accordingly, rather than doing it for every output.
If you wanted to be even cleverer, you could inject the corresponding S3 entry into the catalog using an after_catalog_created hook rather than manually writing the S3 version of the dataset in your catalog, again following a naming convention of your choice. Though I'd argue that writing the S3 entries explicitly is more readable in the long run.
There are two ways I can think of. The simpler approach is to use separate configuration environments (--env) for cloud and local: https://kedro.readthedocs.io/en/latest/04_kedro_project_setup/02_configuration.html#additional-configuration-environments
conf
├── base
│   └──
├── cloud
│   └── catalog.yml
└── my_local
    └── catalog.yml
And you can call kedro run --env=cloud or kedro run --env=my_local depending on which env you want to use.
Another more advanced way is to use TemplatedConfigLoader https://kedro.readthedocs.io/en/stable/kedro.config.TemplatedConfigLoader.html
conf
├── base
│   └── catalog.yml
├── cloud
│   └── globals.yml (contains `base_path: s3-prefix-path`)
└── my_local
    └── globals.yml (contains `base_path: my_local_path`)
In catalog.yml, you can refer to base_path like this:
my_dataset:
  filepath: ${base_path}/my_dataset
And you can call kedro run --env=cloud or kedro run --env=my_local depending on which env you want to use.
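To actually enable TemplatedConfigLoader, you need to register it as the project's config loader. A minimal sketch, assuming Kedro 0.17.x, where this is done via a register_config_loader hook (the hook class name here is made up):

from kedro.config import TemplatedConfigLoader
from kedro.framework.hooks import hook_impl


class TemplatedConfigHook:
    @hook_impl
    def register_config_loader(self, conf_paths):
        # Substitute values like `base_path` from the per-environment
        # globals.yml files into catalog.yml.
        return TemplatedConfigLoader(conf_paths, globals_pattern="*globals.yml")

Register this hook in settings.py alongside your other hooks, as shown earlier.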

How can I list all sub-packages under a specific Go package?

Is it possible to use go doc to view all sub-packages defined under a specific package?
Say, I want to view all sub-packages under crypto.
go doc crypto only lists what crypto defines, but gives no information about its sub-packages, like crypto/aes and crypto/cipher:
go doc crypto
package crypto // import "crypto"
Package crypto collects common cryptographic constants.
func RegisterHash(h Hash, f func() hash.Hash)
type Decrypter interface{ ... }
type DecrypterOpts interface{}
type Hash uint
const MD4 Hash = 1 + iota ...
...
If you want to see all sub-packages under a specific package, you can use the go list command:
go list crypto/...
crypto
crypto/aes
crypto/cipher
crypto/des
crypto/dsa
crypto/ecdsa
crypto/ed25519
crypto/ed25519/internal/edwards25519
crypto/elliptic
crypto/hmac
crypto/internal/randutil
crypto/internal/subtle
crypto/md5
crypto/rand
crypto/rc4
crypto/rsa
crypto/sha1
crypto/sha256
crypto/sha512
crypto/subtle
crypto/tls
crypto/x509
crypto/x509/pkix
Finally, for each package you can get the documentation with the go doc command.
go doc crypto/x509
...
You can write a script if you need to iterate over the results returned by go list.
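For example, on a Unix-like shell, a one-liner sketch of that iteration could be:

go list crypto/... | xargs -n1 go doc

This prints the package documentation for each sub-package in turn.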
Honestly, I think the best way to consume the documentation of the standard library is the Go website: https://golang.org/pkg/.
You can also start a local godoc web server to read the doc of your Go code:
godoc -http=:6060
*open your browser and visit localhost:6060*

Struggling with OpenLDAP configuration (on AWS EC2)

I've been configuring an LDAP server on a Linux instance on AWS EC2.
Up to now, I have successfully set up LDAP and phpLDAPadmin to work together.
I've created "Users" and "Groups" organisational units and added users and groups to those OUs.
Now I want to grant access to specific parts of my LDAP tree to the "Users" members of a "Group". That's what I haven't been able to configure so far...
My LDAP tree looks like this:
+--> dc=www,dc=website,dc=com (3)
---> cn=admin
+--> ou=groups (4)
| ---> cn=admin_users
| ---> cn=app1_users
| ---> cn=app2_users
| ---> cn=basic_users
+--> ou=users (3)
| ---> cn=user1
| ---> cn=user2
| ---> cn=user3
Let's say that I added user1 + user2 to the "memberUid" list of "app1_users" and user2 + user3 to the "memberUid" list of "app2_users".
I want:
cn=admin have full rights/access to the tree
app1_users can connect (to phpLDAPadmin) and add new members to the group itself
the same for app2_users' users
A connected user (on phpLDAPadmin) should only see the tree (and child subtrees) he's part of.
Here are the ACLs I tried (but they were obviously not working):
access to attrs=shadowLastChange
    by self write
    by dn="cn=admin,dc=www,dc=website,dc=com" write
    by * read

access to attrs=userPassword
    by self write
    by dn="cn=admin,dc=www,dc=website,dc=com" write
    by anonymous auth
    by * none

access to dn.base=""
    by * read

access to dn.subtree="cn=app1_users,ou=groups,dc=www,dc=website,dc=com"
    by group.base="cn=app1_users,dc=www,dc=website,dc=com" write
    by dn.base="cn=admin,dc=www,dc=website,dc=com" write
    by * none

access to dn.subtree="cn=app2_users,ou=groups,dc=www,dc=website,dc=com"
    by group.base="cn=app2_users,dc=www,dc=website,dc=com" write
    by dn.base="cn=admin,dc=www,dc=website,dc=com" write
    by * none

access to *
    by self write
    by dn="cn=admin,dc=www,dc=website,dc=com" write
    by * read
Is there something wrong with my configuration?
If cn=admin,... is your rootDn, it has all the rights there are and shouldn't be addressed in your own access rules.
For group management try:
access to dn.base="cn=app1_users,ou=groups,dc=www,dc=website,dc=com"
    by group.exact="cn=app1_users,dc=www,dc=website,dc=com" write
There's an implicit last rule access to * by * none, so no need for by * none in your own rules.
Generally, add your rules one by one to the list - it's easier to watch the effects that way.

Package bound resource use for multiple go packages

For a contrived example:
I have two packages, repo.com/alpha/A and repo.net/beta/B. Package A uses package B; both are structured as in the example below.
A:
    main.go
B:
    b.go
    templates/
        1.tmpl
        2.tmpl
In main.go of package A, I'd need to access the templates directory of package B.
b.go
package B

import "path/filepath"

// Templates is the absolute path to this package's templates directory.
var Templates string

func init() {
    Templates, _ = filepath.Abs("./templates")
}
main.go
package main

import (
    "fmt"

    "repo.net/beta/B"
)

func main() {
    fmt.Printf("%s", B.Templates)
}
So the problem, in both my more complex use case and the contrived example here, is that B.Templates will resolve relative to package A's working directory, whereas I need to establish and reference the directory of the imported package. This is part of learning and navigating the Go way of doing things, and my understanding is probably basic, so I need to understand how to do this in a Go context.
My use case is a package that uses other packages that do things for the base package, and these external packages may use standard web resource files (css, html, js). The problem is that I'm having trouble packaging and referencing these files abstractly enough for what I want to do.
You can't; you have to either use something like go-bindata, or simply embed the templates in your B package as consts.
tmpl1.go:
const tmpl1 = `........`
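For illustration, a minimal sketch of how package B might expose such an embedded template to importers like package A (the template name and contents are placeholders):

package B

import "text/template"

// tmpl1 is the content that previously lived in templates/1.tmpl,
// embedded directly in the package as a constant.
const tmpl1 = `Hello, {{.Name}}!`

// Tmpl1 is the parsed template, usable from importing packages
// without any dependence on B's on-disk location.
var Tmpl1 = template.Must(template.New("1.tmpl").Parse(tmpl1))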
