I have created a few variables in a resource file. If I access it with the complete absolute path I am able to use the variables in the resource file, but if I use a relative path it is not working.
Below is a screenshot of the resource file (I tried changing the format to .txt, .robot, and .resources).
Below is the test case file from which I am trying to access the resource file.
When using relative imports, they are resolved based on the location of the file that does the importing.
As Collections.robot is in the other_dirs/venv/Test/Collection/ directory, and the resource file in other_dirs/venv/resources/Txt/, the correct relative import path is:
Resource ../../resources/Txt/Resource.robot
Side note - why are you storing your sources in the venv dir? This looks like the Python virtual environment you are using, and as such it is managed by it; it is also often in the git ignore file, etc. That just looks like incorrect practice.
Your "relative" path is relative to the Collections.robot file; you need to go a couple of levels up, something like:
../../resources/Txt/Resource.robot
where:
the first .. moves you to the Test folder
the second .. moves you to the venv folder
and under the venv folder you have the resources directory (see the sketch below)
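A minimal sketch of how the settings section of Collections.robot could look, given the directory layout described above (the variable name is hypothetical):
*** Settings ***
# Collections.robot sits in other_dirs/venv/Test/Collection/, so two ".."
# segments climb up to other_dirs/venv/ before descending into resources/Txt/
Resource    ../../resources/Txt/Resource.robot

*** Test Cases ***
Use Variable From Resource
    # ${MY_VARIABLE} is a hypothetical variable defined in Resource.robot
    Log    ${MY_VARIABLE}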
Also, I don't think Robot Framework is the best way of running a JMeter test. Looking at the release history of the JMeter library, it was last updated 4 years ago, and several versions of JMeter have been released since then, for instance JMeter 4.0, which had a lot of breaking changes. I would rather recommend switching to the Process library and running JMeter in command-line non-GUI mode.
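For example, something like this with the Process library (the jmeter executable and file names are placeholders you would need to adjust):
*** Settings ***
Library    Process

*** Test Cases ***
Run JMeter In Non-GUI Mode
    # -n = non-GUI, -t = test plan, -l = results file (standard JMeter CLI options)
    ${result}=    Run Process    jmeter    -n    -t    test_plan.jmx    -l    results.jtl
    Should Be Equal As Integers    ${result.rc}    0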
I'm still trying to create my first Azure Pipeline CI/CD. My CI part is working fine, and my CD is also working, except I cannot apply my Web.config file transformations.
Let me first show you what I have, then I will ask several questions below. Here is the build with the generated artifact. I also manually copy my 3 config files.
When I open my WebAPI.zip file, here is the path and content:
Here is my pipeline project
And the details of my staging phase:
When I run this full pipeline my config file is never transformed, but I get no error. I just get a warning:
2019-05-02T03:27:23.5778958Z ##[warning]Unable to apply transformation for the given package.
I also have the debug log with full information but it doesn't give me much information for now. I will add it here later.
Questions
Azure Pipeline File Transformation is not working. Why?
Is it because the File Transform task only looks for config files inside the zip?
Is the system then just ignoring my transformation file in the root of the artifact?
So is my manual copy of the config transformation file obsolete?
How can I then add my transformation file into the zip?
In my csproj I already set all my transformation files to Build Action = Content, Copy Always; this is ignored too. Is that normal?
EDIT 1
One more important question: is it possible to simply ask the deployment system to ignore or not deploy my config file? It is not something I want to deploy every time; I like the idea of doing it manually or from an alternative deployment system. With this solution I could have other issues if I save a version or build variable in my config file. Is it then possible to modify an already deployed file after deployment? I'm looking for a workaround here. Example: I read a value in my existing config file, then increment it by one or simply replace it with another value?
EDIT 2
I'm now able to add the config file to the WebApi.zip package in the root and/or in the bin folder. I followed the comment of Shayki Abramczyk by using the XML transform option of the deploy step. Still not working. And the error messages are so poor. Seriously, Microsoft? Is your transformation system even working? I see questions similar to mine everywhere.
And now I get
The file is correct; the transform works fine from the Visual Studio Publish tool. I really think the XML transform tool from Microsoft in Azure is just not working.
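For reference, in YAML form the kind of step I am describing is roughly the following (a sketch only; the package path is an assumption from my setup, and the task inputs should be checked against the current File Transform task documentation):
steps:
# File Transform task v1: applies XML transforms (e.g. Web.Release.config onto Web.config) inside the package
- task: FileTransform@1
  inputs:
    folderPath: '$(System.DefaultWorkingDirectory)/**/WebAPI.zip'
    enableXmlTransform: true
    xmlTransformationRules: '-transform **\Web.Release.config -xml **\Web.config'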
EDIT 3
Is it possible that the issues with my transformations come from NLog, because of the file name and the special rule I apply to it?
I am starting with Go and trying to get my head around GOPATH (and probably GOBIN).
When trying to fetch external libraries via go get I get the error
go get: no install location for directory D:\Seafile\dev-perso\domotiqueNG\services\dispatcher-go\src\dispatcher-go outside GOPATH
This error is apparently solved by having a project structure below $GOPATH/src.
Does this mean that all my Go programs must live there? If GOPATH is d:\hello then the projects bonjour and aurevoir really need to be in
d:\hello\src\bonjour
d:\hello\src\aurevoir
only?
If this is the case, how can I
split, say, personal and professional projects when the personal must stay at d:\home and professional at x:\work?
have multilanguage projects where d:\home\domotique\dispatch is in Go, d:\home\domotique\whatever is in Python, and I have several such combos in d:\home?
You can actually have Go projects outside of GOPATH. However, some tools do not work well or do not work at all in this kind of situation. For example, goimports, which formats your code and adds missing imports, will not be able to find packages that are outside of GOPATH. You'll have to write the imports manually using a relative path: ./path/to/your/package.
how can I split, say, personal and professional projects when the personal must stay at d:\home and professional at x:\work ?
You actually can have multiple Go workspaces (https://github.com/golang/go/wiki/GOPATH). You just need to set GOPATH to the list of their locations, joined with the list separator of your OS. E.g. on Linux it would look like this:
GOPATH="/home/nobody/perso:/home/nobody/work"
Though, I'm not sure how go and other tools such as dependency managers handle multiple workspaces.
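On Windows, with the paths from the question, the equivalent would use ";" as the separator (cmd.exe syntax):
set GOPATH=d:\home;x:\work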
When using the GOPATH you can make subdirectories in the /src folder, for example, adding both home and work directories. In fact, the Go project has an example of how to organize code. While packages need to be in their own folders, a folder itself in the GOPATH is not automatically a package.
If you'd rather not work within the confines of the GOPATH, you can change it to the path you do want to work in, or set it to your home directory.
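As a rough illustration of that subdirectory layout, using the names from the question (purely a sketch):
d:\hello\
    src\
        perso\
            bonjour\        (personal Go project; each package in its own folder)
        work\
            aurevoir\       (work Go project)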
I've been tasked with adding features to an existing Hyperledger system, but all I've been given is the .bna file. I can clearly see it contains JavaScript source as well as models, but is this really enough to develop from? All my experience is going from .cto and .js files and configs to building the .bna archive. How do I go about doing that in reverse? Am I likely to run into problems because I'm missing something necessary that is not normally packaged in the .bna? Should I insist on getting the actual source tree that was used to build the .bna file? I already asked specifically for that and NOT the .bna file, but was ignored. Am I the ignorant one here?
The BNA file contains everything the network needs to execute. The only things you are missing are things like build scripts, unit and system tests, and documentation. So I would keep trying to get the original source if you can; if you can't, you do have enough to get going.
As someone new to Golang, I'm somewhat perplexed by the notion of the $GOPATH.
The only thing in my experience that it reminds me of is the "System Folder Path" (C:\Windows, C:\Program Files) on Microsoft Windows machines; is it somehow conceptually related to that?
I've seen a description of it from the Go team, but it's too practical; it talks about what it is, as opposed to why it is.
So then, why is it?
It's an "include path". Practically every (modern) language uses one.
For C/C++, it's the combination of the LIB and INC environment variables (at least in Unix/Makefile environments).
For Perl (5) it's the PERLLIB or PERL5LIB environment variables.
For Python it's the PYTHONHOME environment variable.
For Node.js it's the NODE_PATH variable.
Etc, etc.
GOPATH is a variable that indicates where the dependencies of your application are installed. It is basically a path to a directory where you store the packages your application might use.
Any application of a reasonable size has dependencies. In golang, these come in the form of packages. At compile time, the location of the dependencies (i.e. packages you use) needs to be known such that your executable can be built.
They can either be stored at a fixed, predefined location, or the user can be allowed to specify the location himself. The first approach has a lot of downsides (for example, it would be impossible to support operating systems with different directory structures). Thus, the designers of the go tool decided to make it user-configurable by means of this variable. This also gives the user much more flexibility, such as the ability to group the dependencies for different projects in different directories.
The usage of an environment variable (such as GOPATH) is not limited to golang. Java has its CLASSPATH, Python its PYTHONPATH and so on (each of them with their quirks, but with the same basic role).
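As a small illustration (the package path is made up): with the workspace below, go get downloads a package's source under $GOPATH/src, and the import path in your code mirrors that directory layout.
export GOPATH=/home/nobody/go
go get github.com/user/mylib
# the source now lives in $GOPATH/src/github.com/user/mylib, and
# import "github.com/user/mylib" in your code resolves against that directory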
I did a bit of searching and found this thread on the topic, but it's specific to XML files, and so the answer (the etc/ directory) makes sense for XML files.
In my case, I'm actually storing a txt file, which happens to be an SVN version number that I dumped out within my modman script.
The place that I'm using this is within a frontend model (Blocks/System/Html.php) which outputs the version number within the module config. So I went with the Blocks/System/ directory for now - the filename is Version.txt - but it feels like there should be a better place to put this.
Since this SVN version number is being written by an external tool, I would prefer it not mess with the contents of code directories (which in a live environment may have write restrictions) and instead have it write to the "var" directory. In that case, to get the correct path within "var" you would use:
$fullpath = Mage::getBaseDir('var') . DS . $path;
The contents of "var" are disposable, they may be deleted at any time so be prepared for a missing file.
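A minimal sketch of reading it back in the block, with a fallback for when the file is missing (the version/Version.txt sub-path is just an example):
// read the tool-written file from var/, with a fallback since var/ may be cleared at any time
$fullpath = Mage::getBaseDir('var') . DS . 'version' . DS . 'Version.txt';
$version  = file_exists($fullpath) ? trim(file_get_contents($fullpath)) : 'unknown';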
Version numbers can be added to app/code/local/Your/Extension/etc/config.xml:
<config>
<modules>
<Your_Extension>
<version>0.0.0.0</version>
</Your_Extension>
</modules>
</config>
Magento knows how to handle your extension version changes and can call update scripts based on a version number change. This is the preferred method for this kind of thing.
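For completeness, that declared version can be read back at runtime like this (a sketch; the node path follows the config.xml above):
// returns "0.0.0.0" for the example config.xml above
$version = (string) Mage::getConfig()->getNode('modules/Your_Extension/version');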
If you need to add random non-PHP files to your extension, then add them to your extension folder and request them from there:
Mage::getModuleDir('etc', 'Your_Extension');
Mage::getModuleDir('whateverfolder', 'Your_Extension');
This is not good practice though, as it might break the Magento compilation feature or introduce other issues, so it is better to handle external data through PHP classes or XML files inside your extension structure.
I ran into the same kind of problem when developing a shipping module. I had a bunch of CSV files that contained maximum weight / delivery cost mappings. For what it's worth, I created a data/ directory at the module level and threw everything in there.
I don't think this kind of situation happens often enough in the Magento codebase for there to be an established convention. As long as you use sensible naming, and provide a level of abstraction to cope with any change of file location in the future, I'd say put it in any folder at your module's root.