Are pysa users expected to copy configuration files? - static-analysis

Facebook's Pysa tool looks useful. In the Pysa tutorial exercises, the configuration refers to files provided in the pyre-check repository using a relative path that points outside the exercise directory.
https://github.com/facebook/pyre-check/blob/master/pysa_tutorial/exercise1/.pyre_configuration
{
  "source_directories": ["."],
  "taint_models_path": ["."],
  "search_path": [
    "../../stubs/"
  ],
  "exclude": [
    ".*/integration_test/.*"
  ]
}
The pyre-check repository provides stubs for Django. If I know the path where pyre-check is installed, I can hard-code it in my .pyre_configuration and get something working, but another developer may install pyre-check differently.
Is there a better way to refer to these provided stubs or should I copy them to the repository I'm working on?

Many projects have a standard development environment, which allows for hard-coded paths in the .pyre_configuration file. These usually point into the venv or some other standard install location for dependencies.
For projects without a standard development environment, you could try incorporating pyre init into your setup scripts. pyre init will set up a fresh .pyre_configuration file with paths that correspond to the current install of Pyre. For additional configuration you want to add on top of the generated .pyre_configuration file (such as a pointer to local taint models), you can hand-write a .pyre_configuration.local, which acts as an overlay and overrides/adds to the content of .pyre_configuration.
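For instance, the hand-written overlay could be as small as this (a sketch only; the taint directory name is an illustration, not something pyre init generates):
.pyre_configuration.local:
{
  "source_directories": ["."],
  "taint_models_path": ["./taint_models"]
}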

Pyre-check looks for the stubs in the directory specified by the typeshed directive in the configuration file.
The easiest way is to move the Django stubs provided in the pyre-check repository into the typeshed directory that ships with pyre-check.
For example, if you have installed pyre-check to the ~/.local/lib directory, move the django directory from ~/.local/lib/pyre_check/stubs to ~/.local/lib/pyre_check/typeshed/third_party/2and3/ and make sure your .pyre_configuration file looks like this:
{
  "source_directories": ["~/myproject"],
  "taint_models_path": "~/myproject/taint",
  "typeshed": "~/.local/lib/pyre_check/typeshed"
}
In this case, your Django stubs directory will be ~/.local/lib/pyre_check/typeshed/third_party/2and3/django.
Pyre-check uses the following algorithm to traverse across the typeshed directory:
If it contains a third_party subdirectory, it uses the legacy layout: it enters only the two subdirectories stdlib and third_party, and inside each of them it looks into every version subdirectory except those whose names start with 2 (with 2and3 being the exception), so modules are found in places like third_party/2and3/.
Otherwise, it enters the subdirectories stubs and stdlib and looks for modules directly there, e.g. in stubs/, but not in stubs/2and3/.
That's why specifying multiple paths may be perplexing, and the easiest way is to set the typeshed directory to ~/.local/lib/pyre_check/typeshed/ and move django into third_party/2and3, so it ends up at ~/.local/lib/pyre_check/typeshed/third_party/2and3/django.
Also, don't forget to copy the .pysa files that you need into the taint_models_path directory. Don't point it at the pyre-check installation directory; create your own directory and copy only the files that are relevant to you.
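For example, assuming pyre-check is installed under ~/.local/lib (the stub and taint-model locations below are illustrative and may differ in your install), the steps above might look like:
# move the bundled Django stubs into the typeshed layout that Pyre traverses
mv ~/.local/lib/pyre_check/stubs/django \
   ~/.local/lib/pyre_check/typeshed/third_party/2and3/
# create a project-local taint model directory and copy only the .pysa files you need
mkdir -p ~/myproject/taint
cp path/to/the/models/you/need/*.pysa ~/myproject/taint/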

Related

VSCode look for Go packages in different directory

I successfully used rules_go to build a gRPC service:
go_proto_library(
    name = "processor_go_proto",
    compilers = ["@io_bazel_rules_go//proto:go_grpc"],
    importpath = "/path/to/proto/package",
    proto = ":processor_proto",
    deps = ["//services/shared/proto/common:common_go_proto"],
)
However, I'm not sure how to import the resulting file in VSCode. The generated file is nested under bazel-bin and under the original proto file path, so to import it, it seems like I would need to write out the entire path (including the bazel-bin part) to the generated Go file. To my understanding, there doesn't seem to be a way to instruct VSCode to look under certain folders that only contain Go packages/files; everything seems to need a go.mod file. This makes it quite difficult to develop in.
For clarity, my directory structure looks something like this:
WORKSPACE
bazel-bin
- path
- to
- generated_Go_file.go
go.mod
go.sum
proto
- path
- to
- gRPC_proto.proto
main.go
main.go should use the generated_Go_file.go.
Is there a way around this?
I don't use Bazel and so cannot help with the Bazel configuration. It's likely there is a way to specify the generated code location so that you can revise this to reflect your preference.
The layout you describe for the generated code is workable, though, and a common pattern. Often the generated proto|gRPC code is placed in a module's gen subdirectory.
This is somewhat similar to vendoring, where your code incorporates what may often be a 3rd-party's stubs (client|server) into your code. The stubs must reflect the proto(s) package(s) and, when these are 3rd-party, using gen or bazel-bin provides a way to keep potentially multiple namespaces discrete.
You're correct that the import in main.go could (!) be prefixed with the module name from go.mod (first line) followed by the folder path to the generated code. This is standard Go packaging and treats the generated code in a similar way to vendored modules.
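As a rough sketch of that import, assuming a made-up module name example.com/myservice on the first line of go.mod and the generated package sitting under bazel-bin/path/to:
// main.go (sketch; the module name and package alias are placeholders)
import (
    pb "example.com/myservice/bazel-bin/path/to"
)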
Another approach is to use|place the generated code in a different module.
For code generated from 3rd-party protos, this may be preferable and the generated code may be provided by the 3rd-party in a module that you can go get or add to your go.mod.
An example of this approach is Google Well-Known Types. The proto (sources) are bundled with protoc (lib directory) and, when protoc compiles sources that reference any of these, the Go code that is generated includes imports that reference a Google-hosted location of the generated code (!) for these types (google.golang.org/protobuf/types/known).
Alternatively, you can replicate this behavior without having to use an external repo. The bazel-bin folder must be outside of the current module. Each distinct module in bazel-bin would need its own go.mod file. You would then reference the modules' (one or more) locations from your code's go.mod file. You don't need to publish the modules to an external repo; a replace ( name => path/to/module ) directive alongside the require provides a local reference.
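A minimal sketch of that go.mod wiring, with placeholder module names and paths:
// go.mod of your main code (sketch)
module example.com/myservice

go 1.21

// the generated code lives in its own module with its own go.mod
require example.com/generated v0.0.0

// no external repo needed: point the requirement at a local path
replace example.com/generated => ../bazel-bin/path/to/generated_module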

Copy to directory outside of project directory using Gradle 4.0.2

I have a Gradle build which generates a war file. I want to copy the war file to my application server's dropins directory, which is somewhere outside of the project directory. I have the following copy task to do this.
task copyWarToDropins(type: Copy, dependsOn: [war]) {
    from './build/libs/bds-service-token-1.0-SNAPSHOT.war'
    into file('/apps/dropins') // want to copy to 'C:/apps/dropins' directory
    rename { fileName -> 'bds-service-token.war' }
}
build.dependsOn copyWarToDropins
It evaluates /apps/dropins relative to the project directory and copies there. I have tried many ways I can think of but could not make it copy to the C:/apps/dropins directory.
Can someone please help?
First, please note that using into file(...) is redundant, as each call to into(...) will be evaluated via Project.file(...) anyhow.
As you can read in the documentation, file(...) handles strings in the following way:
A CharSequence, including String or GString. Interpreted relative to the project directory. A string that starts with file: is treated as a file URL.
So, one way to solve your problem could be using an absolute file URL.
However, if you continue to read the documentation, you will see that Java File objects are supported. So you could simply create such an object:
into new File('C:/your/absolute/path')
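Putting it together, a version of the task using an absolute java.io.File might look like this, reusing the names from the question:
task copyWarToDropins(type: Copy, dependsOn: [war]) {
    from './build/libs/bds-service-token-1.0-SNAPSHOT.war'
    // an absolute java.io.File is not re-resolved against the project directory
    into new File('C:/apps/dropins')
    rename { fileName -> 'bds-service-token.war' }
}
build.dependsOn copyWarToDropins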

Ansible, override single module file

I want to tweak one Ansible module. I am using multiple modules and want minor tweaks in just one of them. How can I override the default code?
I am not sure, but my assumption is that if I create a similar module directory structure in the current directory, Ansible will use this code for that module and the default code for the rest. E.g. for the yum_repository module, the default path is:
/usr/local/Cellar/ansible/2.4.1.0/libexec/lib/python2.7/site-packages/ansible/modules/packaging/os/yum_repository.py
but if I create the directory structure ansible/modules/packaging/os/ in my working directory and keep the edited yum_repository.py file there, it should use this edited file.
Ansible will look for modules in ./library subdirectory of the playbook dir.
You can also use the library parameter in the Ansible configuration file to specify a common directory for your modules.
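For example (a sketch; the directory name is your choice):
# ansible.cfg, next to your playbooks
[defaults]
library = ./library
# then place the patched module at ./library/yum_repository.py so it shadows the built-in one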

Linking with a Windows library outside the build folder

Is there a way to link with a library that's not in the current package path?
This link suggests placing everything under the local directory. Our packages are installed in some repository elsewhere. I just want to specify the libpath to it on Windows.
authors = ["Me"]
links = "CDbax"
[target.x86_64-pc-windows-gnu.CDbax]
rustc-link-lib = ["CDbax"]
rustc-link-search = ["Z:/Somepath//CPP/CDbax/x64/Debug/"]
root = "Z:/Somepath//CPP/CDbax/x64/Debug/"
But trying cargo build -v gives me
package `hello v0.1.0 (file:///H:/Users/Mushfaque.Cradle/Documents/Rustc/hello)` specifies that it links to `CDbax` but does not have a custom build script
The cargo build script support guide seems to suggest that this should work, but I can see that it hasn't added the path. Moving the lib into the local bin\x86_64-pc-windows-gnu\ path works, however.
Update
Thanks to the answer below, I thought I'd update this to give the final results of what worked on my machine so others find it useful.
In the Cargo.toml add
links = "CDbax"
build = "build.rs"
Even though there is no build.rs file, it seems to be required (?); otherwise Cargo complains with
package `xxx v0.1.0` specifies that it links to `CDbax` but does not have a custom build script
Then, following Vaelden's answer, create a 'config' file in .cargo.
If this is a sub-crate, you don't need to put the links= tag in the parent crate, even though it's a dll, even with a 'cargo run'. I assume it adds the dll path to the execution environment.
I think the issue is that you are confusing the manifest of your project with the cargo configuration.
The manifest is the Cargo.toml file at the root of your project. It describes your project itself.
The cargo configuration describes particular settings for cargo, and allows, for example, overriding dependencies or, in your case, overriding build scripts. The cargo configuration files have a hierarchical structure:
Cargo allows to have local configuration for a particular project or
global configuration (like git). Cargo also extends this ability to a
hierarchical strategy. If, for example, cargo were invoked in
/home/foo/bar/baz, then the following configuration files would be
probed for:
/home/foo/bar/baz/.cargo/config
/home/foo/bar/.cargo/config
/home/foo/.cargo/config
/home/.cargo/config
/.cargo/config
With this structure you can specify local configuration per-project,
and even possibly check it into version control. You can also specify
personal default with a configuration file in your home directory.
So if you move the relevant part:
[target.x86_64-pc-windows-gnu.CDbax]
rustc-link-lib = ["CDbax"]
rustc-link-search = ["Z:/Somepath//CPP/CDbax/x64/Debug/"]
root = "Z:/Somepath//CPP/CDbax/x64/Debug/"
to any correct location for a cargo configuration file, it should work.
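Alternatively, since the error complains about a missing custom build script, you could satisfy links = "CDbax" with a minimal build.rs that emits the same flags; a sketch using the path from the question:
// build.rs (sketch)
fn main() {
    // tell rustc where to search for the native library and which library to link
    println!("cargo:rustc-link-search=native=Z:/Somepath//CPP/CDbax/x64/Debug/");
    println!("cargo:rustc-link-lib=CDbax");
}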

Directory layout for pure Ruby project

I'm starting to learn Ruby. I'm also a day-to-day C++ dev.
For C++ projects I usually go with the following dir structure:
/
-/bin <- built binaries
-/build <- build time temporary object (eg. .obj, cmake intermediates)
-/doc <- manuals and/or Doxygen docs
-/src
--/module-1
--/module-2
-- non module specific sources, like main.cpp
- IDE project files (.sln), etc.
What dir layout for Ruby (non-Rails, non-Merb) would you suggest to keep it clean, simple and maintainable?
As of 2011, it is common to use jeweler instead of newgem as the latter is effectively abandoned.
Bundler includes the necessary infrastructure to generate a gem:
$ bundle gem --coc --mit --test=minitest --exe spider
Creating gem 'spider'...
MIT License enabled in config
Code of conduct enabled in config
create spider/Gemfile
create spider/lib/spider.rb
create spider/lib/spider/version.rb
create spider/spider.gemspec
create spider/Rakefile
create spider/README.md
create spider/bin/console
create spider/bin/setup
create spider/.gitignore
create spider/.travis.yml
create spider/test/test_helper.rb
create spider/test/spider_test.rb
create spider/LICENSE.txt
create spider/CODE_OF_CONDUCT.md
create spider/exe/spider
Initializing git repo in /Users/francois/Projects/spider
Gem 'spider' was successfully created. For more information on making a RubyGem visit https://bundler.io/guides/creating_gem.html
Then, in lib/, you create modules as needed:
lib/
  spider/
    base.rb
    crawler/
      base.rb
  spider.rb
require "spider/base"
require "crawler/base"
Read the manual page for bundle gem for details on the --coc, --exe and --mit options.
The core structure of a standard Ruby project is basically:
lib/
  foo.rb
  foo/
share/
  foo/
test/
  helper.rb
  test_foo.rb
HISTORY.md (or CHANGELOG.md)
LICENSE.txt
README.md
foo.gemspec
The share/ directory is rare and is sometimes called data/ instead. It is for general-purpose non-Ruby files. Most projects don't need it, but even when they do, many times everything is just kept in lib/, though that is probably not best practice.
The test/ directory might be called spec/ if BDD is being used instead of TDD, though you might also see features/ if Cucumber is used, or demo/ if QED is used.
These days foo.gemspec can just be .gemspec, especially if it is not manually maintained.
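For reference, a minimal hand-maintained foo.gemspec is quite short (all values below are placeholders):
# foo.gemspec (sketch)
Gem::Specification.new do |spec|
  spec.name    = "foo"
  spec.version = "0.1.0"
  spec.summary = "One-line description of foo"
  spec.authors = ["Your Name"]
  spec.files   = Dir["lib/**/*.rb"]
  spec.license = "MIT"
end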
If your project has command line executables, then add:
bin/
  foo
man/
  foo.1
  foo.1.md or foo.1.ronn
In addition, most Ruby projects have:
Gemfile
Rakefile
The Gemfile is for using Bundler, and the Rakefile is for the Rake build tool. But there are other options if you would like to use different tools.
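A minimal pair often looks like this (a sketch; it assumes the test/ layout shown above):
# Gemfile: defer dependency declarations to the gemspec
source "https://rubygems.org"
gemspec

# Rakefile: run the test/ suite as the default task
require "rake/testtask"

Rake::TestTask.new(:test) do |t|
  t.libs << "test"
  t.pattern = "test/**/test_*.rb"
end

task default: :test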
A few other not-so-uncommon files:
VERSION
MANIFEST
The VERSION file just contains the current version number. And the MANIFEST (or Manifest.txt) contains a list of files to be included in the project's package file(s) (e.g. gem package).
What else you might see, but usage is sporadic:
config/
doc/ (or docs/)
script/
log/
pkg/
task/ (or tasks/)
vendor/
web/ (or site/)
Where config/ contains various configuration files; doc/ contains either generated documentation, e.g. RDoc, or sometimes manually maintained documentation; script/ contains shell scripts for use by the project; log/ contains generated project logs, e.g. test coverage reports; pkg/ holds generated package files, e.g. foo-1.0.0.gem; task/ could hold various task files such as foo.rake or foo.watchr; vendor/ contains copies of the other projects, e.g. git submodules; and finally web/ contains the project's website files.
Then some tool specific files that are also relatively common:
.document
.gitignore
.yardopts
.travis.yml
They are fairly self-explanatory.
Finally, I will add that I personally add a .index file and a var/ directory to build that file (search for "Rubyworks Indexer" for more about that) and often have a work directory, something like:
work/
  NOTES.md
  consider/
  reference/
  sandbox/
Just sort of a scrapyard for development purposes.
#Dentharg: your "include one to include all sub-parts" is a common pattern. Like anything, it has its advantages (easy to get the things you want) and its disadvantages (the many includes can pollute namespaces and you have no control over them). Your pattern looks like this:
- src/
    some_ruby_file.rb:
      require 'spider'
      Spider.do_something
+ doc/
- lib/
  - spider/
  spider.rb:
    $: << File.expand_path(File.dirname(__FILE__))
    module Spider
      # anything that needs to be done before including submodules
    end
    require 'spider/some_helper'
    require 'spider/some/other_helper'
    ...
I might recommend this to allow a little more control:
- src/
    some_ruby_file.rb:
      require 'spider'
      Spider.include_all
      Spider.do_something
+ doc/
- lib/
  - spider/
  spider.rb:
    $: << File.expand_path(File.dirname(__FILE__))
    module Spider
      def self.include_all
        require 'spider/some_helper'
        require 'spider/some/other_helper'
        ...
      end
    end
Why not use just the same layout? Normally you won't need build because there's no compilation step, but the rest seems OK to me.
I'm not sure what you mean by a module, but if it's just a single class a separate folder wouldn't be necessary, and if there's more than one file you normally write a module-1.rb file (at the same level as the module-1 folder) that does nothing more than require everything in module-1/.
Oh, and I would suggest using Rake for the management tasks (instead of make).
I would stick to something similar to what you are familiar with: there's no point being a stranger in your own project directory. :-)
Typical things I always have are lib|src, bin, test.
(I dislike these monster generators: the first thing I want to do with a new project is get some code down, not write a README, docs, etc.!)
So I went with newgem.
I removed all unnecessary RubyForge/gem stuff (hoe, setup, etc.), created git repo, imported project into NetBeans. All took 20 minutes and everything's on green.
That even gave me a basic rake task for spec files.
Thank you all.
