VCD dump (VHDL simulation with VCS) - vhdl

I need help simulating VHDL code with VCS. What options are available to dump a VCD file with VCS for VHDL code? I have tried all the options I found on the internet; none seems to be working, or I am not using them correctly. A detailed answer would be helpful.
Commands tried so far:
vcs test_top -R +vcs+vcdpluson -debug_pp
vcs test_top -R +vcs+vcdpluson -debug_pp -vcd test.vcd
vcs test_top -R +vcs+vcdpluson+vcd_file test.vcd -debug_pp
vcs test_top -V -R +vcs+vcdpluson -debug_pp

I think you need to run the simv command as described in this on page 3.
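For reference, here is a minimal sketch of the two-step flow I would expect, assuming a VCS MX setup where the VHDL sources have already been analyzed (e.g. with vhdlan) and where the vpd2vcd converter is available; +vcs+vcdpluson is a runtime plusarg, so it goes to the simv run rather than to the vcs compile step:
# elaborate with debug access so signal values can be dumped
vcs -debug_pp test_top
# run the generated simulator; +vcs+vcdpluson writes a VPD dump (vcdplus.vpd by default)
./simv +vcs+vcdpluson
# convert the VPD dump to a plain VCD file
vpd2vcd vcdplus.vpd test.vcd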

Related

Run ansible-lint through subdirectories within a gitlab role

I am trying to add a validation step to a gitlab repo holding a single ansible role (with no playbook).
The structure of the role looks like:
.gitlab-ci.yml
tasks/
templates/
files/
vars/
handlers/
With the gitlab-ci looking like:
stages:
  - lint

job-lint:
  image:
    name: cytopia/ansible-lint:latest
    entrypoint: ["/bin/sh", "-c"]
  stage: lint
  script:
    - ansible-lint --version
    - ansible-lint . -x 106 tasks/*.yml
I need to skip the naming rule, thus ignoring rule 106.
Otherwise, I would like all files at the root of the repo to be checked. Since there is no playbook, lint has to be given the files that need to be checked... or at least, that is what I understood: I may have this point wrong. But anyway, if I give no name, lint returns OK but actually performs no check.
My problem is that I don't know how to tell it to check all the YAML files recursively, or even within a subdirectory. The above code returns an error:
ansible-lint: error: unrecognized arguments: tasks/deploy.yml tasks/localhost.yml tasks/main.yml tasks/managedata.yml tasks/psqlconf.yml
Any idea on how to check all the files from a subdirectory or through the whole role?
PS: I am using the cytopia image for ansible-lint, but I have no problem using another one, provided it's hosted on Docker Hub.
You should certainly be able to pass multiple YAML files as arguments to ansible-lint. I have version 4.1.1a0, and I'm able to use it like this, for example:
ansible-lint -x 106 roles/*/tasks/*.yml
I notice that you seem to have placed a . before your -x 106; that looks like an error. It doesn't look like ansible-lint will accept a directory name as an argument (it doesn't cause it to fail; it just doesn't accomplish anything).
I've tried this both with a locally installed ansible-lint and using the cytopia/ansible-lint image, which appears to perform identically:
docker run --rm -v $PWD:/src -w /src cytopia/ansible-lint -x 106 roles/*/tasks/*.yml
If you want to check all the YAML files, you can pipe the results of find through xargs, something like this:
find ./ -not -name ".gitlab-ci.yml" -name "*.yml" | xargs ansible-lint -x 106
However, ansible-lint -x 106 ./ should work. Are you sure that your role really has errors? I've tested it both on ansible-galaxy init generated roles (with meta and all that stuff) and on roles containing only a tasks directory, and it worked every time.
EDIT: I tried creating an error in an existing role, replacing "present" with "latest" in a package install task:
$ ansible-galaxy install geerlingguy.nfs
$ cd ~/.ansible/roles/geerlingguy.nfs
$ sed -i "s/present/latest/g" tasks/setup-RedHat.yml
$ ansible-lint ./
Examining tasks/main.yml of type tasks
Examining tasks/setup-Debian.yml of type tasks
Examining tasks/setup-RedHat.yml of type tasks
Examining handlers/main.yml of type handlers
Examining meta/main.yml of type meta
[403] Package installs should not use latest
tasks/setup-RedHat.yml:2
Task/Handler: Ensure NFS utilities are installed.
and it actually worked, so you may want to run with verbose output to check what is going on; maybe the rules applied to individual YAML files differ from those applied to whole roles.
When I ran my find-based check I got a lot of extra [204] Lines should be no longer than 160 chars warnings.
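For what it's worth, here is a rough sketch of how the find-based approach could be wired into the script section of the job from the question; the rule number and the .gitlab-ci.yml exclusion are carried over from the question, so adjust as needed:
  script:
    - ansible-lint --version
    # lint every YAML file in the role recursively, skipping the naming rule
    - find . -name "*.yml" -not -name ".gitlab-ci.yml" | xargs ansible-lint -x 106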

MapReduceIndexerTool output dir error "Cannot write parent of file"

I want to use Cloudera's MapReduceIndexerTool to understand how morphlines work. I created a basic morphline that just reads lines from the input file, and I tried to run the tool using this command:
hadoop jar /opt/cloudera/parcels/CDH/lib/solr/contrib/mr/search-mr-*-job.jar org.apache.solr.hadoop.MapReduceIndexerTool \
--morphline-file morphline.conf \
--output-dir hdfs:///hostname/dir/ \
--dry-run true
Hadoop is installed on the same machine where I run this command.
The error I'm getting is the following:
net.sourceforge.argparse4j.inf.ArgumentParserException: Cannot write parent of file: hdfs:/hostname/dir
at org.apache.solr.hadoop.PathArgumentType.verifyCanWriteParent(PathArgumentType.java:200)
The /dir directory has 777 permissions on it, so it is definitely allowed to write into it. I don't know what I should do to allow it to write into that output directory.
I'm new to HDFS and I don't know how I should approach this problem. Logs don't offer me any info about that.
What I have tried so far (with no result):
created a hierarchy of 2 directories (/dir/dir2) and put 777 permissions on both of them
changed the output-dir schema from hdfs:///... to hdfs://... because all the examples in the --help menu are built that way, but this leads to an invalid schema error
Thank you.
It states 'cannot write parent of file', and the parent in your case is /. Take a look at the source:
private void verifyCanWriteParent(ArgumentParser parser, Path file) throws ArgumentParserException, IOException {
    Path parent = file.getParent();
    if (parent == null || !fs.exists(parent) || !fs.getFileStatus(parent).getPermission().getUserAction().implies(FsAction.WRITE)) {
        throw new ArgumentParserException("Cannot write parent of file: " + file, parser);
    }
}
The value printed in the message is file, in your case hdfs:/hostname/dir, so file.getParent() will be /.
Additionally, you can check the permissions with the hadoop fs command; for example, you can try to create a zero-length file in the path:
hadoop fs -touchz /test-file
I solved that problem after days of working on it.
The problem is with that line --output-dir hdfs:///hostname/dir/.
First of all, there should not be 3 slashes at the beginning (I had added them during my repeated attempts to make this work), only 2, as in any valid HDFS URI. I actually put 3 slashes because otherwise the tool throws an invalid schema exception! You can easily see in this code that the schema check is done before the verifyCanWriteParent check.
I tried to get the hostname by simply running the hostname command on the CentOS machine I was running the tool on. This was the main issue. I analyzed the /etc/hosts file and saw that there are 2 hostnames for the same local IP. I took the second one and it worked. (I also attached the port to the hostname, so the final format is the following: --output-dir hdfs://correct_hostname:8020/path/to/file/from/hdfs)
This error is very confusing because everywhere you look for the namenode hostname, you will see the same thing that the hostname command returns. Moreover, the errors are not structured in a way that helps you diagnose the problem and take a logical path to solving it.
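If it helps, a quick way to double-check which hostname and port the cluster itself advertises is to ask the HDFS client for its configured default filesystem; this is a sketch assuming a standard HDFS client installation on the machine where the tool is run:
# prints the configured default filesystem, e.g. hdfs://correct_hostname:8020
hdfs getconf -confKey fs.defaultFS
# that URI is what the --output-dir argument should start with, e.g.
# --output-dir hdfs://correct_hostname:8020/path/to/output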
Additional information regarding this tool and debugging it
If you want to see the actual code that runs behind it, check the Cloudera version that you are running and select the same branch on the official repository. The master branch is not up to date.
If you just want to run this tool to play with the morphline (by using the --dry-run option) without connecting to Solr, you can't. You have to specify a ZooKeeper endpoint and a Solr collection or a Solr config directory, which involves additional research. This is something that could be improved in this tool.
You don't need to run the tool with -u hdfs; it works with a regular user.

Bash commands putting out extra information which results in issues with scripts

Okay, hopefully I can explain this correctly as I have no idea what's causing this or how to resolve this.
For some reason bash commands (on a CentOS 6.x server) are displaying more information than "normal" and that causes issues with certain scripts. I have no clue if there is a name for this, but hopefully someone knows a solution.
First example.
Correct / good server:
[root@goodserver ~]# vzctl enter 3567
entered into CT 3567
[root@example /]#
(this is the correct behaviour)
Incorrect / bad server:
[root@badserver /]# vzctl enter 3127
Entering CT
entered into CT 3127
Open /dev/pts/0
[root@example /]#
With the "bad" server it will display more information as usual, like:
Entering CT
Open /dev/pts/0
It's like it's printing extra information about what it's doing.
Of course the above is purely cosmetic, but with several bash scripts we use, these really are issues.
Part of the script we use relies on the following command (there are more, but this is mainly an example of what's wrong):
DOMAIN=`vzctl exec $VEID 'hostname -d'`
The result of the above command is inserted into /etc/named.conf.
On the GOOD server it would be added in the named.conf like this:
zone "example.com" {
type master;
file "example.com";
allow-transfer {
200.190.100.10;
200.190.101.10;
common-allow-transfer;
};
};
The above is correct.
On the BAD server it would be added in the named.conf like this:
zone "Executing command: hostname -d
example.com" {
type master;
file "Executing command: hostname -d
example.com";
allow-transfer {
200.190.100.10;
200.190.101.10;
common-allow-transfer;
};
};
So it adds output from the action it performs, in this example "Executing command: hostname -d".
Here is another example of running the command on a good server and on the bad server.
Bad server:
[root@bad-server /]# DOMAIN=`vzctl exec 3333 'hostname -d'`
[root@bad-server /]# echo $DOMAIN
Executing command: hostname -d example.com
Good server:
[root@good-server ~]# DOMAIN=`vzctl exec 4444 'hostname -d'`
[root@good-server ~]# echo $DOMAIN
example.com
My knowledge is limited, but I have tried several things, checking rsyslog and grub.conf, but nothing seems out of the ordinary.
I have no clue why it's displaying the extra information.
Probably it's something simple / stupid, but I have been trying to solve this for hours now and I really have no clue...
So any help is really appreciated.
Added information:
Both servers use: kernel.printk = 7 4 1 7
(I don't know if that's useful)
Well (thanks to Aaron for pointing me in the right direction) I finally found the little culprit which was causing all the issues I experienced with this script (which worked for every other server, so no need to change that, obviously).
The issues were caused by the VERBOSE level set in vz.conf (located in the /etc/vz/ directory). There is an option in there called "VERBOSE" and in my case it was set to 3.
According to OpenVZ's website it does the following:
Increments logging level up from the default. Can be used multiple times.
Default value is set to the value of VERBOSE parameter in the global
configuration file vz.conf(5), or to 0 if not set by VERBOSE parameter.
After I changed VERBOSE=3 to VERBOSE=0 my script worked fine once again (as it did for every other server). :-)
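For anyone who wants to apply the same fix, this is the line I mean, assuming the global configuration file is /etc/vz/vz.conf as on this CentOS 6.x setup:
# /etc/vz/vz.conf
# was: VERBOSE=3
VERBOSE=0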
So a big shoutout to Aaron for pointing me in the right direction. The answer is easy when you know where to look!
Sorry to say, but I am kinda disappointed by ndim's reaction. This is the 2nd time he was very unhelpful and rude in his response after that. He clearly didn't read the issue I posted correctly. Oh well.
I would make sure to properly parse the output of the command. In this case, we are only interested in lines of the form
entered into CT 12345
One way of doing this would be to pipe everything through sed and have sed print only the number when the line looks as above (note that in sed's basic regular expressions the grouping parentheses and the repetition braces need backslashes):
whateverthecommand | sed -n 's/^entered into CT \([0-9]\{1,\}\)$/\1/p'
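Applying the same idea to the DOMAIN example from the question, an untested sketch that simply filters out the extra chatter and keeps the last remaining line, so it should work regardless of the VERBOSE setting:
# drop vzctl's "Executing command: ..." line and keep only the actual hostname output
DOMAIN=$(vzctl exec "$VEID" 'hostname -d' | grep -v '^Executing command' | tail -n 1)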

hive output consists of these 2 warnings at the end. How do I suppress these 2 warnings

The output of a Hive query that uses UDFs has these 2 warnings at the end. How do I suppress them? Please note that the 2 warnings come right after the query results, as part of the output.
WARN: The method class org.apache.commons.logging.impl.SLF4JLogFactory#release() was invoked.
WARN: Please see http://www.slf4j.org/codes.html#release for an explanation.
hadoop version
Hadoop 2.6.0-cdh5.4.0
hive --version
Hive 1.1.0-cdh5.4.0
If you use beeline instead of Hive the error goes away. Not the best solution, but I'm planning to post to the CDH user group asking the same question to see if it's a bug that can be fixed.
This error occurs due to the addition of an assembly jar which contains classes from jcl-over-slf4j.jar (which is causing the stdout messages) and slf4j-log4j12.jar.
You can try a couple of things to begin with:
Try removing the assembly jar, if you are using one.
Look at the following link: https://issues.apache.org/jira/browse/HIVE-12179
This suggests that we can trigger a flag in Hive so that the spark-assembly is loaded only if HIVE_ADD_SPARK_ASSEMBLY = "true".
https://community.hortonworks.com/questions/34311/warning-message-in-hive-output-after-upgrading-to.html :
There is also a workaround that avoids any other changes: manually remove the 2 lines from the end of the output files using a shell script, as in the sketch below.
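A minimal sketch of that post-processing step, assuming GNU head (the negative line count is not portable everywhere) and using placeholder file names:
# strip the two trailing WARN lines from the end of the output file
head -n -2 raw_output.txt > cleaned_output.txt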
I have tried to set HIVE_ADD_SPARK_ASSEMBLY=false, but it didn't work.
Finally, I found a question posted on the Cloudera community. See: https://community.cloudera.com/t5/Support-Questions/Warning-message-in-Hive-output-after-upgrading-to-hive/td-p/157141
You could try the following command; it works for me!
hive -S -d ns=$hiveDB -d tab=$t -d dunsCol=$c1 -d phase="$ph1" -d error=$c2 -d ts=$eColumnArray -d reporting_window=$rDate -f $dir'select_count.hsql' | grep -v "^WARN" > $gOutPut 2> /dev/null

How to identify a revision in the darcs repository?

I mean that I need to identify the set of applied patches by some number or string, so I can quickly check whether I have the same code version as other people without any synchronization.
Is there some built-in darcs solution for this?
There's no short identifier for this - it's one of the downsides of the cheap cherry-picking darcs gives you. You can get a long identifier by first running darcs optimize --reorder and then looking at the output of darcs changes --context - but two repos with the same contents might list things in the context in different order, so it's still not perfect. You could sort the output and compare those.
So overall
darcs optimize --reorder
darcs changes --context | sort | md5sum
would give you a reasonable approximation of a version identifier.
The darcs optimize --reorder step isn't absolutely necessary, it's just possible that without it you'll get different results when the repos actually contain the same set of patches.
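As a usage sketch, assuming two checkouts named repo-a and repo-b sitting side by side, you could compare the resulting checksums directly:
(cd repo-a && darcs optimize --reorder && darcs changes --context | sort | md5sum)
(cd repo-b && darcs optimize --reorder && darcs changes --context | sort | md5sum)
# matching checksums mean the repositories contain (approximately) the same set of patches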
You can also use darcs pull and darcs push to see whether there are any patches not synchronized.
$ darcs pull --dry-run ; darcs push --dry-run
Would pull from "/home/masse/temp/2013/01/09/tests/project"...
Would pull the following changes:
Wed Jan 9 16:39:50 EET 2013 mats.rauhala@iki.fi
* Canonical order for colors
Making no changes: this is a dry run.
Would push to "/home/masse/temp/2013/01/09/tests/project"...
No recorded local changes to push!
