BOOST ERROR: Failed to retrieve archive table name - boost

I have installed Cacti 1.2.17 on Ubuntu 20.04 using the Docker container, following the instructions, and have given Cacti some additional resources. I am running into the two problems below:
Loading any leaf section is very slow; it takes almost 20 seconds to load the graphs when clicked.
I am getting the boost error "BOOST ERROR: Failed to retrieve archive table name".
I have attached a screenshot of the error, along with my MySQL configuration for Cacti:
innodb_file_format=Barracuda
innodb_large_prefix=1
collation-server=utf8mb4_unicode_ci
character-set-server=utf8mb4
innodb_doublewrite=ON
max_heap_table_size=1G
tmp_table_size=1G
join_buffer_size=1G
innodb_buffer_pool_size=3G
innodb_flush_log_at_timeout=3
innodb_read_io_threads=32
innodb_write_io_threads=16
innodb_io_capacity=5000
innodb_io_capacity_max=10000
innodb_buffer_pool_instances=9
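In case it helps to narrow down the boost error: Cacti's boost plugin spools poller output into the poller_output_boost table and rotates it through archive tables, and this error appears when the archive table lookup fails. A first sanity check (just a sketch, assuming you can reach the Cacti database as its user):
-- List the boost tables Cacti expects; the _arch_ tables are created
-- by boost as it rotates data out of poller_output_boost
SHOW TABLES LIKE 'poller_output_boost%';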

Related

Running my Revel application on Windows 10 fails

I had a problem when running my Revel app on Windows. It creates fine, but it doesn't run when I try; I only get this. Any idea?
C:\Desarrollo\Web\webpro>revel run -a webpro
Revel executing: run a Revel application
WARN 05:53:33 harness.go:175: No http.addr specified in the app.conf listening on localhost interface only. This will not allow external access to your application
Changed detected, recompiling
Parsing packages, (may require download if not cached)... Completed
ERROR 05:53:38 build.go:406: Build errors errors="C:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\memcached.go:11:2: no required module provides package github.com/bradfitz/gomemcache/memcache; to add it:\n\tgo get github.com/bradfitz/gomemcache/memcache\nC:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\redis.go:10:2: no required module provides package github.com/garyburd/redigo/redis; to add it:\n\tgo get github.com/garyburd/redigo/redis\nC:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\inmemory.go:12:2: no required module provides package github.com/patrickmn/go-cache; to add it:\n\tgo get github.com/patrickmn/go-cache\n"
C:\Users\Mario\go\src\webpro\C:\Users\Mario\go\pkg\mod\github.com\revel\revel#v1.0.0\cache\memcached.go:11
WARN 05:53:38 build.go:420: Could not find in GO path file=C:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\memcached.go:11
ERROR 05:53:38 harness.go:239: Build detected an error error="Go Compilation Error (in C:\\Users\\Mario\\go\\pkg\\mod\\github.com\\revel\\revel#v1.0.0\\cache\\memcached.go:11:2): no required module provides package github.com/bradfitz/gomemcache/memcache; to add it:"
Error compiling code, to view error details see proxy running on http://:9000
Time to recompile 5.3684655s
I am new to this. Best regards.
Check your IPv4 address with the ipconfig command
Open webpro/conf/app.conf and paste the IPv4 address into the http.addr parameter
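Also note the build errors in your log: they are about missing modules, and the log itself suggests the fix, e.g.:
go get github.com/bradfitz/gomemcache/memcache
go get github.com/garyburd/redigo/redis
go get github.com/patrickmn/go-cache
For the http.addr change, a minimal sketch of webpro/conf/app.conf (192.168.0.10 is a placeholder; use the IPv4 address ipconfig reports for your machine):
# Listen on the machine's LAN address so the app is reachable externally
http.addr = 192.168.0.10
http.port = 9000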

I'm getting issues when running Nightwatch test cases with my local configuration. Each browser fails with a different issue, listed below:

Edge:
Error: An error occurred while retrieving a new session: "Unable to create new service: EdgeDriverService"
Chrome:
Error: An error occurred while retrieving a new session: "Unable to create new service: ChromeDriverService"
Firefox:
Error: An error occurred while retrieving a new session: "Expected browser binary location, but unable to find binary in default location, no 'moz:firefoxOptions.binary' capability provided, and no binale to find binary in default locatiory flag set on the command line"
This looks to me like a configuration issue. Assuming that you are using Nightwatch.js without Selenium, you need to ensure that the runner is able to find the chromedriver and geckodriver binaries. You can check my sample repo here. If you are using Selenium, then you can check here.
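For reference, a minimal nightwatch.conf.js sketch along those lines (not the poster's repo; it assumes the chromedriver and geckodriver npm packages are installed, e.g. via npm i chromedriver geckodriver):
// Point the Nightwatch-managed WebDriver at the driver binaries
// installed from npm, one environment per browser.
module.exports = {
  src_folders: ['tests'],
  webdriver: {
    start_process: true,
  },
  test_settings: {
    default: {
      desiredCapabilities: { browserName: 'chrome' },
      webdriver: { server_path: require('chromedriver').path },
    },
    firefox: {
      desiredCapabilities: { browserName: 'firefox' },
      webdriver: { server_path: require('geckodriver').path },
    },
  },
};
Run the Firefox environment with: npx nightwatch --env firefox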

ERROR: PrepareVirtualHardDisk: AttachVirtualDisk failed with error 0x80070522

I followed this tutorial to create a basic image of Windows IOT Core:
https://learn.microsoft.com/en-us/windows-hardware/manufacture/iot/create-a-basic-image
I ran IoTCorePShell.cmd as an administrator.
All the previous steps were successful, but I encountered an error while running this step:
New-IoTFFUImage ProductA Test
(or) buildimage ProductA Test
Here are the errors:
PS C:\MyWorkspace>New-IoTFFUImage ProductA Test
ADK_VERSION : 10.0.17763.1
IOTCORE_VER : 10.0.17763.253
BSP_VERSION : 10.0.0.0
ADDONKITVER : 6.0.190307.1402
HostOS Info : Microsoft Windows 10 Enterprise - 10.0.17763 - en-US fr-CA
Validating product feature ids
Reading feature ids in C:\MyWorkspace\Source-arm\BSP\Rpi2\Packages\RPi2FM.xml
Reading feature ids in C:\MyWorkspace\Common\Packages\OEMCommonFM.xml
Reading feature ids in C:\MyWorkspace\Source-arm\Packages\OEMFM.xml
Checking Microsoft features in OEMInput file..
Warning: IOT_APPLICATIONS is not defined
Warning: IOT_GENERIC_POP is not defined
Checking OEM features in OEMInput file..
Building product specific packages
Processing Registry.Version.wm.xml
Processing Custom.Cmd.wm.xml
Processing Provisioning.Auto.wm.xml
Processing Recovery.WinPE.wm.xml
Building FM files..
Exporting OEM FM files..
Processing OEMFMList..
Exporting RPi2 BSP FM files
Processing RPi2FMFileList.xml
Creating Image..
See C:\MyWorkspace\Build\arm\ProductA_Test.log for progress
This will take a while...
ThreadId2588 ERROR: PrepareVirtualHardDisk: AttachVirtualDisk failed with error 0x80070522
ThreadId2588 ERROR: CreateVirtualHardDisk: OpenDiskInternal failed with error 0x80070522
ThreadId2588 ERROR: Imaging!CreateImage: Failed to create FFU: CreateVirtualHardDisk failed with error code: 80070522
ThreadId2588 ERROR: CreateVirtualHardDisk failed with error code: 80070522
Error: Build failed
False
IoTCorePShell arm 10.0.0.0 Test
Any idea?
SOLUTION: There was an IT policy that no new storage media may be attached. Updating the group policy, or temporarily deactivating it, solved the problem.
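If you suspect the same kind of policy, a quick way to check and refresh it with standard Windows tools (the exact policy name varies by organization; removable/new storage restrictions are commonly pushed via group policy):
rem Produce an HTML report of the group policies applied to this machine
gpresult /h gp-report.html
rem After the policy has been relaxed, apply it immediately
gpupdate /force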

Not able to run Solr search example in Hue 3.9

I have installed Hue 3.9 and Solr 5.5.3 on Linux (RHEL), but I am not able to run the Solr search example in Hue.
When I click on "install search examples", it shows an error saying 'twitter_demo' is not available due to init failure.
Please help!
Full exception stack in Hue:
Error 500 {metadata={error-class=org.apache.solr.common.SolrException,root-error-class=java.lang.ClassNotFoundException},msg=SolrCore 'twitter_demo' is not available due to init failure:
Could not load conf for core twitter_demo: Can't load schema /usr/local/hue/desktop/libs/indexer/src/data/collections/twitter_demo/conf/schema.xml: Plugin init failure for [schema.xml] fieldType "pint": Error loading class 'solr.IntField',trace=org.apache.solr.common.SolrException: SolrCore 'twitter_demo' is not available due to init failure: Could not load conf for core twitter_demo: Can't load schema /usr/local/hue/desktop/libs/indexer/src/data/collections/twitter_demo/conf/schema.xml: Plugin init failure for [schema.xml]
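For what it's worth, the root error (Error loading class 'solr.IntField') points at the schema: solr.IntField is one of the legacy primitive field classes that were removed in Solr 5, so a schema.xml written for an older Solr fails to load exactly like this. A sketch of the change that would be needed in the demo collection's schema.xml (the extra attributes here are illustrative):
<!-- Old definition; solr.IntField was removed in Solr 5: -->
<!-- <fieldType name="pint" class="solr.IntField" omitNorms="true"/> -->
<!-- Trie-based replacement available in Solr 5.5: -->
<fieldType name="pint" class="solr.TrieIntField" precisionStep="0" omitNorms="true" positionIncrementGap="0"/>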

HPCC/HDFS Connector

Does anyone know about the HPCC/HDFS connector? We are using both HPCC and Hadoop. There is a utility (the HPCC/HDFS connector) developed by HPCC that allows an HPCC cluster to access HDFS data.
I have installed the connector, but when I run the program to access data from HDFS, it fails with an error saying libhdfs.so.0 doesn't exist.
I tried to build libhdfs.so using the command:
ant compile-libhdfs -Dlibhdfs=1
It gives me the error:
target "compile-libhdfs" does not exist in the project "hadoop"
I tried one more command:
ant compile-c++-libhdfs -Dlibhdfs=1
It gives the error:
ivy-download:
[get] Getting: http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
[get] To: /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
[get] Error getting http://repo2.maven.org/maven2/org/apache/ivy/ivy/2.1.0/ivy-2.1.0.jar
to /home/hadoop/hadoop-0.20.203.0/ivy/ivy-2.1.0.jar
BUILD FAILED java.net.ConnectException: Connection timed out
Any suggestion would be a great help.
Chhaya, you might not need to build libhdfs.so; depending on how you installed Hadoop, you might already have it.
Check HADOOP_LOCATION/c++/Linux-<arch>/lib/libhdfs.so, where HADOOP_LOCATION is your Hadoop install location and arch is the machine's architecture (i386-32 or amd64-64).
Once you locate the lib, make sure the H2H connector is configured correctly (see page 4 here).
It's just a matter of updating the HADOOP_LOCATION var in the config file:
/opt/HPCCSystems/hdfsconnector.conf
Good luck.
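A quick sketch of that check, assuming the install location from your build log above (the amd64-64 part is an example; substitute your architecture):
# Locate the prebuilt library shipped with the Hadoop tarball
ls /home/hadoop/hadoop-0.20.203.0/c++/Linux-amd64-64/lib/libhdfs.so
# Then make sure the connector config points at the same install
grep HADOOP_LOCATION /opt/HPCCSystems/hdfsconnector.conf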
