If I run yum or dnf as root on a recently updated RHEL 8 system, it works smoothly. As soon as I run it through sudo as a regular user (added to the wheel group), it takes up to 5 minutes.
Doing a yum clean did not help either.
I usually run updates using Ansible from a remote host, but Ansible disconnects after several minutes of trying to run the yum module.
I set SELinux to disabled just in case, and checked the proxy settings in dnf.conf; it all looks fine. Any input is very much appreciated.
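For reference, this is roughly how I am comparing the two cases (just timing the same read-only command both ways, nothing exotic):
# as root
time dnf check-update
# as a wheel-group user
time sudo dnf check-update
The root run comes back within seconds, while the sudo run takes several minutes.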
This is the dnf.log
2020-02-20T15:33:13Z INFO Updating Subscription Management repositories.
2020-02-20T15:37:36Z DEBUG DNF version: 4.2.7
2020-02-20T15:37:36Z DDEBUG Command: yum update
2020-02-20T15:37:36Z DDEBUG Installroot: /
2020-02-20T15:37:36Z DDEBUG Releasever: 8
2020-02-20T15:37:36Z DEBUG cachedir: /var/cache/dnf
2020-02-20T15:37:36Z DDEBUG Base command: update
2020-02-20T15:37:36Z DDEBUG Extra commands: ['update']
2020-02-20T15:37:37Z DEBUG repo: using cache for: epel-modular
2020-02-20T15:37:37Z DEBUG epel-modular: using metadata from Sat Feb 15 03:19:39 2020.
2020-02-20T15:37:37Z DEBUG repo: using cache for: epel
2020-02-20T15:37:37Z DEBUG epel: using metadata from Thu Feb 20 06:38:22 2020.
2020-02-20T15:37:38Z DEBUG reviving: 'rhel-8-for-x86_64-baseos-rpms' can be revived - repomd matches.
2020-02-20T15:37:38Z DEBUG rhel-8-for-x86_64-baseos-rpms: using metadata from Thu Feb 13 10:30:57 2020.
2020-02-20T15:37:38Z DEBUG reviving: 'rhel-8-for-x86_64-appstream-rpms' can be revived - repomd matches.
2020-02-20T15:37:39Z DEBUG rhel-8-for-x86_64-appstream-rpms: using metadata from Thu Feb 20 11:06:31 2020.
2020-02-20T15:37:39Z DDEBUG timer: sack setup: 3550 ms
2020-02-20T15:37:39Z DEBUG Completion plugin: Generating completion cache...
2020-02-20T15:37:40Z DEBUG --> Starting dependency resolution
2020-02-20T15:37:40Z DEBUG --> Finished dependency resolution
2020-02-20T15:37:40Z DDEBUG timer: depsolve: 277 ms
2020-02-20T15:37:40Z INFO Dependencies resolved.
2020-02-20T15:37:40Z INFO Nothing to do.
2020-02-20T15:37:40Z INFO Complete!
2020-02-20T15:37:40Z DDEBUG Cleaning up.
I resolved the issue. The proxy was not correctly set in /etc/rhsm/rhsm.conf.
I don't understand why running as root would not cause the same problem.
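For anyone hitting the same thing, the relevant entries live in the [server] section of /etc/rhsm/rhsm.conf; the values below are placeholders, not my real proxy:
[server]
proxy_hostname = proxy.example.com
proxy_port = 3128
proxy_user =
proxy_password =
Once proxy_hostname and proxy_port pointed at the proxy actually in use, dnf via sudo behaved the same as it does for root.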
When I run the command below from the command prompt, I get the following error:
C:\Users\ShivangiT\Downloads\apache-jmeter-5.3\apache-jmeter-5.3\bin>Jmeter.bat -Jjmeter.save.saveservice.output_format=xml -n -t \Users\ShivangiT\Downloads\apache-jmeter-5.3\apache-jmeter-5.3\bin\vieweventpage.jmx -l \Users\ShivangiT\Downloads\apache-jmeter-5.3\apache-jmeter-5.3\bin\rr.jtl
Creating summariser <summary>
Created the tree successfully using \Users\ShivangiT\Downloads\apache-jmeter-5.3\apache-jmeter-5.3\bin\vieweventpage.jmx
Starting standalone test # Fri Aug 21 07:29:38 BST 2020 (1597991378434)
Waiting for possible Shutdown/StopTestNow/HeapDump/ThreadDump message on port 4445
summary = 41 in 00:00:14 = 2.9/s Avg: 5256 Min: 7 Max: 13688 Err: 13 (31.71%)
Tidying up ... # Fri Aug 21 07:29:52 BST 2020 (1597991392905)
... end of run
The JVM should have exited but did not.
The following non-daemon threads are still running (DestroyJavaVM is OK):
Thread[DestroyJavaVM,5,main], stackTrace:
Thread[AWT-EventQueue-0,6,main], stackTrace:sun.misc.Unsafe#park
java.util.concurrent.locks.LockSupport#park
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject#await
java.awt.EventQueue#getNextEvent
java.awt.EventDispatchThread#pumpOneEventForFilters
java.awt.EventDispatchThread#pumpEventsForFilter
java.awt.EventDispatchThread#pumpEventsForHierarchy
java.awt.EventDispatchThread#pumpEvents
java.awt.EventDispatchThread#pumpEvents
java.awt.EventDispatchThread#run
Thread[AWT-Shutdown,5,system], stackTrace:java.lang.Object#wait
sun.awt.AWTAutoShutdown#run
java.lang.Thread#run
Can anybody please help me with this?
This is a known issue in JMeter 5.3 when the test plan contains an HTTP(S) Test Script Recorder.
The workaround is to remove it.
See:
https://bz.apache.org/bugzilla/show_bug.cgi?id=64479
Alternatively, you can try a nightly build:
https://ci.apache.org/projects/jmeter/nightlies/
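If you want to confirm the recorder is what is keeping the JVM alive, one quick check (assuming the plan stores the recorder as a ProxyControl element, which is how JMeter normally saves it) is to search the .jmx for it:
findstr /n ProxyControl C:\Users\ShivangiT\Downloads\apache-jmeter-5.3\apache-jmeter-5.3\bin\vieweventpage.jmx
If that prints a match, open the plan in the GUI, remove the HTTP(S) Test Script Recorder, save, and re-run the non-GUI test.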
Shiva, I would suggest sticking with an older version of JMeter. The reason is simple: not much has changed in JMeter since version 4, and recent releases are mostly focused on JDK 11 compatibility, while the older versions already support JDK 8 flawlessly. Use JMeter 4 from the official JMeter archive and you'll be able to execute everything smoothly, with no need to look for workarounds.
I'm following the readthedocs instructions for the meta-raspberrypi layer, trying to build its rpi-test-image image for the raspberrypi3-64 machine using the zeus release of Yocto.
I added this to conf/local.conf:
MACHINE ??= "raspberrypi3-64"
ENABLE_UART = "1"
My bblayers.conf file looks like this:
/opt/yocto/workspace/rpi3-64-build/conf[master]☢ ☠$ cat bblayers.conf
# POKY_BBLAYERS_CONF_VERSION is increased each time build/conf/bblayers.conf
# changes incompatibly
POKY_BBLAYERS_CONF_VERSION = "2"
BBPATH = "${TOPDIR}"
BBFILES ?= ""
BBLAYERS ?= " \
/opt/yocto/workspace/sources/poky/meta \
/opt/yocto/workspace/sources/poky/meta-poky \
/opt/yocto/workspace/sources/poky/meta-yocto-bsp \
/opt/yocto/workspace/sources/meta-openembedded/meta-oe \
/opt/yocto/workspace/sources/meta-openembedded/meta-multimedia \
/opt/yocto/workspace/sources/meta-openembedded/meta-networking \
/opt/yocto/workspace/sources/meta-openembedded/meta-python \
/opt/yocto/workspace/sources/meta-raspberrypi \
"
I added the 4 items from meta-openembedded as a work-around for "ERROR: Nothing RPROVIDES 'bigbuckbunny-480p'".
This enables the build to run but it exits with the following errors:
/opt/yocto/workspace/rpi3-64-build/conf[master]☢ ☠$ bitbake rpi-test-image
Parsing recipes: 100% |#########################################################################################################################################| Time: 0:02:49
Parsing of 2340 .bb files complete (0 cached, 2340 parsed). 3464 targets, 133 skipped, 0 masked, 0 errors.
NOTE: Resolving any missing task queue dependencies
Build Configuration:
BB_VERSION = "1.44.0"
BUILD_SYS = "x86_64-linux"
NATIVELSBSTRING = "ubuntu-16.04"
TARGET_SYS = "aarch64-poky-linux"
MACHINE = "raspberrypi3-64"
DISTRO = "poky"
DISTRO_VERSION = "3.0.2"
TUNE_FEATURES = "aarch64 cortexa53 crc"
TARGET_FPU = ""
meta
meta-poky
meta-yocto-bsp = "zeus:74f229160c7f4037107c1dad8f0d02128c080a7e"
meta-oe
meta-multimedia
meta-networking
meta-python = "zeus:9e60d30669a2ad0598e9abf0cd15ee06b523986b"
meta-raspberrypi = "zeus:0e05098853eea77032bff9cf81955679edd2f35d"
Initialising tasks: 100% |######################################################################################################################################| Time: 0:00:06
Sstate summary: Wanted 1338 Found 0 Missed 1338 Current 0 (0% match, 0% complete)
NOTE: Executing Tasks
NOTE: Setscene tasks completed
WARNING: icu-native-64.2-r0 do_fetch: Checksum mismatch for local file /opt/yocto/cache/downloads/icu4c-64_2-src.tgz
Cleaning and trying again.
WARNING: icu-native-64.2-r0 do_fetch: Renaming /opt/yocto/cache/downloads/icu4c-64_2-src.tgz to /opt/yocto/cache/downloads/icu4c-64_2-src.tgz_bad-checksum_abb12cb25a05198ad8f4c1e6f668fa05
WARNING: icu-native-64.2-r0 do_fetch: Checksum failure encountered with download of http://download.icu-project.org/files/icu4c/64.2/icu4c-64_2-src.tgz - will attempt other sources if available
ERROR: rpi-test-image-1.0-r0 do_rootfs: Could not invoke dnf. Command '/opt/yocto/workspace/rpi3-64-build/tmp/work/raspberrypi3_64-poky-linux/rpi-test-image/1.0-r0/recipe-sysroot-native/usr/bin/dnf -v --rpmverbosity=info -y -c /opt/yocto/workspace/rpi3-64-build/tmp/work/raspberrypi3_64-poky-linux/rpi-test-image/1.0-r0/rootfs/etc/dnf/dnf.conf --setopt=reposdir=/opt/yocto/workspace/rpi3-64-build/tmp/work/raspberrypi3_64-poky-linux/rpi-test-image/1.0-r0/rootfs/etc/yum.repos.d --installroot=/opt/yocto/workspace/rpi3-64-build/tmp/work/raspberrypi3_64-poky-linux/rpi-test-image/1.0-r0/rootfs --setopt=logdir=/opt/yocto/workspace/rpi3-64-build/tmp/work/raspberrypi3_64-poky-linux/rpi-test-image/1.0-r0/temp --repofrompath=oe-repo,/opt/yocto/workspace/rpi3-64-build/tmp/work/raspberrypi3_64-poky-linux/rpi-test-image/1.0-r0/oe-rootfs-repo --nogpgcheck install psplash-raspberrypi packagegroup-core-boot packagegroup-base-extended run-postinsts packagegroup-rpi-test locale-base-en-us locale-base-en-gb' returned 1:
DNF version: 4.2.2
cachedir: /opt/yocto/workspace/rpi3-64-build/tmp/work/raspberrypi3_64-poky-linux/rpi-test-image/1.0-r0/rootfs/var/cache/dnf
Added oe-repo repo from /opt/yocto/workspace/rpi3-64-build/tmp/work/raspberrypi3_64-poky-linux/rpi-test-image/1.0-r0/oe-rootfs-repo
repo: using cache for: oe-repo
not found other for:
not found modules for:
not found deltainfo for:
not found updateinfo for:
oe-repo: using metadata from Thu 30 Apr 2020 09:25:08 PM UTC.
Last metadata expiration check: 0:00:02 ago on Thu 30 Apr 2020 09:25:11 PM UTC.
No module defaults found
--> Starting dependency resolution
--> Finished dependency resolution
Error:
Problem: package packagegroup-base-wifi-1.0-r83.raspberrypi3_64 requires wireless-regdb-static, but none of the providers can be installed
- package wireless-regdb-2019.06.03-r0.noarch conflicts with wireless-regdb-static provided by wireless-regdb-static-2019.06.03-r0.noarch
- package packagegroup-base-1.0-r83.raspberrypi3_64 requires packagegroup-base-wifi, but none of the providers can be installed
- package packagegroup-rpi-test-1.0-r0.noarch requires wireless-regdb, but none of the providers can be installed
- package packagegroup-base-extended-1.0-r83.raspberrypi3_64 requires packagegroup-base, but none of the providers can be installed
- conflicting requests
(try to add '--allowerasing' to command line to replace conflicting packages or '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages)
ERROR: Logfile of failure stored in: /opt/yocto/workspace/rpi3-64-build/tmp/work/raspberrypi3_64-poky-linux/rpi-test-image/1.0-r0/temp/log.do_rootfs.23419
ERROR: Task (/opt/yocto/workspace/sources/meta-raspberrypi/recipes-core/images/rpi-test-image.bb:do_rootfs) failed with exit code '1'
NOTE: Tasks Summary: Attempted 3511 tasks of which 1 didn't need to be rerun and 1 failed.
Summary: 1 task failed:
/opt/yocto/workspace/sources/meta-raspberrypi/recipes-core/images/rpi-test-image.bb:do_rootfs
Summary: There were 3 WARNING messages shown.
Summary: There was 1 ERROR message shown, returning a non-zero exit code.
/opt/yocto/workspace/rpi3-64-build/conf[master]☢ ☠$
I'm not even sure how to go about debugging it.
I've tried googling the warnings and error messages but was not able to find anything that might be causing the problem.
2 questions:
1: What might the problem be in this particular instance?
2: What is a good general procedure for debugging failed builds like this?
UPDATE: "bitbake core-image-base" does successfully build the image, but it is still unclear to me why "bitbake rpi-test-image" failed.
I am experimenting with creating an EC2 instance to host a Perforce server. My instance is configured with the following user data:
#!/bin/bash
# Add a newline to the ec2-user prompt string
echo PS1=\"\\n\$PS1\" >> /home/ec2-user/.bashrc
# Update all packages
yum update –y
# Install Perforce packages
# The RHEL/7 part of the baseurl should be replaced with
# the latest RHEL version that both Amazon and Perforce support
rpm –import https://package.perforce.com/perforce.pubkey
cd /etc/yum.repos.d/
echo [perforce] > perforce.repo
echo name=Perforce >> perforce.repo
echo baseurl=http://package.perforce.com/yum/rhel/7/x86_64 >> perforce.repo
echo enabled=1 >> perforce.repo
echo gpgcheck=1 >> perforce.repo
yum install –y helix-p4d
# Make directories for the server, owned by new “perforce” user
cd /opt/perforce/servers/
mkdir danware
cd danware
mkdir danware-db danware-chkpts journal
chown –R perforce:perforce danware
I have tested each of the above commands, and know that they work when executed manually in this order. However, some aspect of Amazon's base64 encode/decode system seems to be getting in the way. When I go to "Actions > Instance Settings > View/Change User Data" from the EC2 Console after launching (and passing all system checks), I see the following user data. Note how almost every hyphen "-" has been replaced with some strange "a" character.
However, I'm not sure that this is the issue, because the log file at /var/log/cloud-init-output.log gives me the following output (I replaced some repetitive text with [...] to save space). Note the line that says Failed running /var/lib/cloud/instance/scripts/part-001. I have verified that this part-001 file actually does have the correctly displayed hyphen characters.
[...]
Cloud-init v. 0.7.6 running 'modules:final' at Fri, 09 Sep 2016 06:23:39 +0000. Up 86.66 seconds.
Loaded plugins: priorities, update-motd, upgrade-helper
No Match for argument: –y
No packages marked for update
RPM version 4.11.2
Copyright (C) 1998-2002 - Red Hat, Inc.
This program may be freely redistributed under the terms of the GNU GPL
Usage: rpm [-aKfgpqVcdLilsiv?] [-a|--all] [-f|--file] [-g|--group] [...]
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--> Running transaction check
---> [...]
Dependencies Resolved
================================================================================
Package Arch Version Repository Size
================================================================================
Installing:
helix-p4d x86_64 2016.1-1429894 perforce 24 k
Installing for dependencies:
helix-cli x86_64 2016.1-1429894 perforce 8.8 k
helix-cli-base x86_64 2016.1-1429894 perforce 1.4 M
helix-p4d-base x86_64 2016.1-1429894 perforce 3.1 k
helix-p4d-base-16.1 x86_64 2016.1-1429894 perforce 2.4 M
helix-p4dctl x86_64 2016.1-1429894 perforce 1.2 M
Transaction Summary
================================================================================
Install 1 Package (+5 Dependent packages)
Total download size: 5.0 M
Installed size: 13 M
Is this ok [y/d/N]: Exiting on user command
Your transaction was saved, rerun it with:
yum load-transaction /tmp/yum_save_tx.2016-09-09.06-23.dRP_r2.yumtx
/var/lib/cloud/instance/scripts/part-001: line 22: cd: /opt/perforce/servers/: No such file or directory
chown: invalid user: ‘–R’
Sep 09 06:23:41 cloud-init[2517]: util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-001 [1]
Sep 09 06:23:41 cloud-init[2517]: cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts in /var/lib/cloud/instance/scripts)
Sep 09 06:23:41 cloud-init[2517]: util.py[WARNING]: Running module scripts-user (<module 'cloudinit.config.cc_scripts_user' from '/usr/lib/python2.7/dist-packages/cloudinit/config/cc_scripts_user.pyc'>) failed
Cloud-init v. 0.7.6 finished at Fri, 09 Sep 2016 06:23:41 +0000. Datasource DataSourceEc2. Up 88.53 seconds
Even more annoying, I assumed that the early No Match for argument: –y line from the log file was referring to the yum update -y line from my user data. Sure enough, just running the example user data script from the EC2 documentation page, which also uses yum update -y, gives me this same error/warning! Amazon's own example script doesn't work!? So can anyone answer why A) AWS is not displaying the user data code correctly, and B) why my user data is yielding the errors shown above? The help is much appreciated!
For lines such as
yum update –y
The character you are using is an "EN DASH" (U+2013).
The usual character for a hyphen is "HYPHEN-MINUS" (U+002D).
Fix your user data source to use a hyphen-minus and have another go.
I checked the character codes by cutting and pasting into this online site: http://www.fileformat.info/info/unicode/char/search.htm?q=-&preview=entity
Don't know if you can see the difference, but this is your hyphen:
yum update –y
and this is a "hyphen minus"
yum update -y
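If you want a quick way to catch these before launching an instance, something like this (GNU grep with -P; the file name is just whatever you saved your user data as) will flag any line containing non-ASCII characters such as that en dash:
grep -nP '[^\x00-\x7F]' userdata.sh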
I am using the open-source Pentaho distribution from github.com (version 6.1-SNAPSHOT).
In Spoon some steps are missing (e.g. there is no MongoDB input/output step listed) and I can't add a data service to a step (there are no errors; the option just isn't in the list).
I have reinstalled everything (removed the .kettle and .pentaho directories as well as the whole source and distribution) but it didn't help.
This is what I get at Spoon startup:
16:05:50,304 INFO [KarafInstance]
* Karaf Instance Number: 1 at ~/pentaho-kettle/dist/./system/karaf//data1
Karaf Port:8801
OSGI Service Port:9050 *
Dec 23, 2015 4:05:51 PM org.apache.karaf.main.Main$KarafLockCallback lockAquired
INFO: Lock acquired. Setting startlevel to 100
2015/12/23 16:05:53 - cfgbuilder - Warning: The configuration parameter [org] is not supported by the default configuration builder for scheme: sftp
Dec 23, 2015 4:05:58 PM org.pentaho.caching.impl.PentahoCacheManagerFactory$RegistrationHandler$1 onSuccess
INFO: New Caching Service registered
16:06:04,009 ERROR [KarafLifecycleListener] The Kettle Karaf Lifycycle Listener failed to execute properly. Releasing lifecycle hold, but some services may be unavailable.
I suspect that
ERROR [KarafLifecycleListener] The Kettle Karaf Lifycycle Listener failed to execute properly. Releasing lifecycle hold, but some services may be unavailable.
has something to do with it, as the missing plugins reside somewhere under the karaf/ directory.
It was working just fine a week ago.
I am using Ubuntu 15.04.
I will be grateful for any hints.
Greetings.
You are using a non-stable release. This is the place where you can download the latest stable release: http://sourceforge.net/projects/pentaho/
Agreed with jipipayo. For the Pentaho CE version, download it from the official site http://community.pentaho.com/ instead of GitHub. It is highly possible that the code on GitHub is in an unstable state.
Also, in case you have a missing plugin, try the Pentaho Marketplace and download the required plugins.
Hope this helps :)
I need to know the TeamCity setting that prevents re-triggering outdated builds/jobs when newer builds are successful.
I am facing an issue where TeamCity jobs can be re-triggered even if later builds are successful. If the trigger event was fired earlier, TeamCity should not run that job once a later build has succeeded.
I have 2 jobs in TC for 1 branch: Build-Precheck and Build-compile.
I can see that Build-compile just picks the latest available successful build from Build-Precheck and then queues up the next one, which may be an outdated build.
Build-Precheck takes just 2 minutes to finish, so it quickly triggers the latest builds, I guess following the First In First Out principle.
Build-Precheck
06 Oct 14 14:33 - 14:35 (2m:01s) - 7.1.4345
06 Oct 14 14:41 - 14:43 (2m:16s) - 7.1.4346
06 Oct 14 14:45 - 14:47 (2m:10s) - 7.1.4347
Build-compile
06 Oct 14 14:35 - 15:00 - 7.1.0.4345
06 Oct 14 14:52 - 15:20 (28m:02s) - 7.1.4347
06 Oct 14 16:08 - 16:33 (24m:52s) - 7.1.4346
Is there any fix for this so that TC runs the most recent builds rather than outdated ones?
Sounds like you are looking for Configuring Build Triggers.
AFAIK, there isn't a way to cancel queued builds when a newer build passes. However, you can adjust the Build Triggers that queue those builds. Most likely, you'll need to set the Quiet Period on your VCS Build Trigger to longer than your build takes.
For example, if your full build takes 5 minutes, set the Quiet Period to 7 minutes. This way additional builds won't queue while a build is running.