I'm currently having an issue with hyperlinks: throughout the day, hyperlinks stop working because "www" is not added to the URL, and then, seemingly at random, 20-30 minutes later the issue fixes itself.
My concrete5 Environment:
Core Version - 8.2.0
Version Installed - 8.2.0
Database Version - 20170711151953
concrete5 Cache Settings
Block Cache - On
Overrides Cache - On
Full Page Caching - On - If blocks on the particular page allow it.
Full Page Cache Lifetime - Every 6 hours (default setting).
Server Software
Microsoft-IIS/8.5
Server API
cgi-fcgi
PHP Version
7.1.8
PHP Extensions
bcmath, calendar, cgi-fcgi, Core, ctype, curl, date, dom, fileinfo, filter, gd, hash, iconv, intl, json, ldap, libxml, mbstring, mcrypt, mysqli, mysqlnd, odbc, openssl, pcre, PDO, pdo_mysql, Phar, readline, Reflection, session, SimpleXML, SPL, standard, tokenizer, wddx, wincache, xml, xmlreader, xmlwriter, zip, zlib
PHP Settings
max_execution_time - 10240
log_errors_max_len - 1024
max_file_uploads - 500
max_input_nesting_level - 64
max_input_time - 60
max_input_vars - 1000
memory_limit - 4096M
post_max_size - 1000M
sql.safe_mode - Off
upload_max_filesize - 1000M
ldap.max_links - Unlimited
mysqli.max_links - Unlimited
mysqli.max_persistent - Unlimited
odbc.max_links - Unlimited
odbc.max_persistent - Unlimited
pcre.backtrack_limit - 1000000
pcre.recursion_limit - 100000
session.cache_limiter - no value
session.gc_maxlifetime - 7200
wincache.maxfilesize - 2048
wincache.ttlmax - 1200
I'm getting into MinIO and investigating a few commands.
If I do mc ls alias/bucket then I get the expected output:
[2020-12-09 19:48:15 UTC] 10B Account-9.dta
[2020-12-09 19:48:22 UTC] 10B Account-90.dta
[2020-12-09 19:48:22 UTC] 11B Account-92.dta
So, I would expect some kind of output when I execute the following on the same connection:
mc sql --recursive --query "select * from s3object" alias/bucket
but instead, it just returns to the prompt with no results. I suspect my "from" clause is wrong, but I have no idea what values to use other than "s3object".
How do I properly perform SQL queries on a local MinIO instance?
MinIO Version:
VERSION
2019-08-14T20:37:41Z
MEMORY
Used: 4.4 MB | Allocated: 3.6 GB | Used-Heap: 4.4 MB | Allocated-Heap: 65 MB
PLATFORM
Host: minio-66c9cd74c9-7m6lx | OS: linux | Arch: amd64
RUNTIME
Version: go1.12.8 | CPUs: 12
MinIO Client Version:
mc version RELEASE.2020-11-25T23-04-07Z
When you specify multiple objects, mc filters objects by extension.
If you store files with a .csv extension they should be picked up; similarly for .json files and for gzip/bzip2-compressed ones.
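For example, assuming the .dta objects above actually contain CSV data, copying one of them to a key with a .csv extension should let mc detect the format, after which the same query should return rows (object names taken from the listing above):
mc cp alias/bucket/Account-9.dta alias/bucket/Account-9.csv
mc sql --recursive --query "select * from s3object" alias/bucket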
On my laptop with Fedora 30 I have the Performance Co-Pilot (PCP) daemons installed and running, and Prometheus installed from the package golang-github-prometheus-prometheus-1.8.0-4.fc30.x86_64. In the PCP collector's config I specified the following metric namespaces:
# Performance Metrics Domain Specifications
#
# This file is automatically generated during the build
# Name Id IPC IPC Params File/Cmd
#root 1 pipe binary /var/lib/pcp/pmdas/root/pmdaroot
#pmcd 2 dso pmcd_init /var/lib/pcp/pmdas/pmcd/pmda_pmcd.so
proc 3 pipe binary /var/lib/pcp/pmdas/proc/pmdaproc -d 3
#xfs 11 pipe binary /var/lib/pcp/pmdas/xfs/pmdaxfs -d 11
linux 60 pipe binary /var/lib/pcp/pmdas/linux/pmdalinux
#pmproxy 4 dso pmproxy_init /var/lib/pcp/pmdas/mmv/pmda_mmv.so
mmv 70 dso mmv_init /var/lib/pcp/pmdas/mmv/pmda_mmv.so
#jbd2 122 dso jbd2_init /var/lib/pcp/pmdas/jbd2/pmda_jbd2.so
#kvm 95 pipe binary /var/lib/pcp/pmdas/kvm/pmdakvm -d 95
[access]
disallow ".*" : store;
disallow ":*" : store;
allow "local:*" : all;
When I visit the URL localhost:44323/metrics the output is very rich and covers many namespaces, ie. mem, network, kernel, filesys, hotproc etc., but when I scrape it with Prometheus where the job is defined as:
scrape_configs:
  - job_name: 'pcp'
    scrape_interval: 10s
    sample_limit: 0
    static_configs:
      - targets: ['127.0.0.1:44323']
I see the target status UP, but in the console only two metric namespaces are available for querying: hinv and mem. I tried to copy other metric names from the /metrics page, but the queries result in the error 'No datapoints found.' Initially I thought the problem could be a limit on the number of samples per target, or a sampling interval that was too small (I originally set it to 1s), but hinv and mem are not next to each other, and there are other metrics (e.g. filesys, kernel) in between them that are omitted. What could be the reason for that?
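For reference, the exporter output can be spot-checked directly from the command line to confirm that namespaces beyond hinv and mem are exposed (a minimal sketch; the grep pattern is only an example):
curl -s http://127.0.0.1:44323/metrics | grep -E '^(filesys|kernel|network)' | head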
I have not found the exact cause of the problem, but it must have been version-specific, because after I downloaded and launched the latest version, 2.19, the problem was gone: with exactly the same config, Prometheus read all metrics from the target.
Adding another answer, because I have just seen this issue again in another environment, where Prometheus v2.19 was pulling metrics via PMAPI from PCP v5 on CentOS 7 servers. In the Prometheus config file the scrape was configured as a single job with multiple metric domains, i.e.:
- job_name: 'pcp'
  file_sd_configs:
    - files: [...]
  metrics_path: '/metrics'
  params:
    target: ['kernel', 'mem', 'disk', 'network', 'mounts', 'lustre', 'infiniband']
When there was a problem with one metric domain, usually lustre or infiniband due to the lack of corresponding hardware on the host, only kernel metrics were collected and no others.
The issue was fixed by splitting the scrape job into multiple jobs with only one target each, i.e.:
- job_name: 'pcp-kernel'
  file_sd_configs:
    - files: [...]
  metrics_path: '/metrics'
  params:
    target: ['kernel']
- job_name: 'pcp-mem'
  file_sd_configs:
    - files: [...]
  metrics_path: '/metrics'
  params:
    target: ['mem']
[...]
This way, metrics from the core domains were always scraped successfully even if one or all of the extra ones failed. Such a setup seems to be more robust; however, it makes the target status view busier, because there are more scrape jobs.
Microsoft states here:
The Microsoft Symbol Server provides compressed versions of the symbol
files. The files have an underscore at the end of the filename’s
extension to indicate that they are compressed. For example, the PDB
for ntdll.dll is available as ntdll.pd_.
I have 2 questions here:
The more general question: How can I cause WinDbg to prefer the compressed version of the symbols? This would result in massive bandwidth savings. (Compressing c:\Symbols on my computer resulted in a 68% reduction in size.)
Sniffing the traffic reveals that the uncompressed version is tried first and then the compressed version (underscore at end of name).
and specific to MS Public Symbols: Are the compressed versions currently available at all? Trying to manually download a compressed version of ntdll.pdb returns a 404 error.
>$ wget https://msdl.microsoft.com/download/symbols/ntdll.pdb/38A5841BD353770D9C800BF1AF6B17EB1/ntdll.pdb
...
ntdll.pdb 100%[=====================================================================================>] 1.46M 406KB/s in 4.4s
2018-11-11 01:16:56 (341 KB/s) - ‘ntdll.pdb’ saved [1534976/1534976]
>$ wget https://msdl.microsoft.com/download/symbols/ntdll.pdb/38A5841BD353770D9C800BF1AF6B17EB1/ntdll.pd_
....
HTTP request sent, awaiting response... 404 Not Found
2018-11-11 01:17:01 ERROR 404: Not Found.
Update:
I have now discovered that DbgHelp supports an option called SYMOPT_FAVOR_COMPRESSED which is explained as follows:
If there is both an uncompressed and a compressed file available, favor the compressed file. This option is good for slow connections.
The question now is: how do I enable this option in WinDbg?
This option isn't documented in the official WinDbg documentation, and setting it manually seems to only affect the UI level.
0:003> .symopt+ 0x800000
Symbol options are 0x830337:
0x00000001 - SYMOPT_CASE_INSENSITIVE
0x00000002 - SYMOPT_UNDNAME
0x00000004 - SYMOPT_DEFERRED_LOADS
0x00000010 - SYMOPT_LOAD_LINES
0x00000020 - SYMOPT_OMAP_FIND_NEAREST
0x00000100 - SYMOPT_NO_UNQUALIFIED_LOADS
0x00000200 - SYMOPT_FAIL_CRITICAL_ERRORS
0x00010000 - SYMOPT_AUTO_PUBLICS
0x00020000 - SYMOPT_NO_IMAGE_SEARCH
0x00800000 - SYMOPT_FAVOR_COMPRESSED
However, inspecting the traffic using Fiddler shows that the uncompressed version is still requested first.
I tried to use pynsist to make a Windows installer for git-cola from my Ubuntu 15.10 desktop.
I just git cloned the git-cola project and set up the installer according to the instructions.
Running pynsist pynsist.cfg seems OK.
Here is a snippet from the end of the output:
Output: "/home/wni/gitworkspace/git-cola/build/nsis/git-cola_2.7.exe"
Install: 6 pages (384 bytes), 4 sections (1 required) (4192 bytes), 903 instructions (25284 bytes), 609 strings (9986 bytes), 1 language table (334 bytes).
Uninstall: 2 pages (128 bytes),
1 section (1048 bytes), 11 instructions (308 bytes), 57 strings (896 bytes), 1 language table (194 bytes).
Datablock optimizer saved 38103 bytes (~0.0%).
Using lzma compression.
EXE header size: 88064 / 73216 bytes
Install code: 7522 / 40556 bytes
Install data: 39171671 / 87321329 bytes
Uninstall code+data: 9216 / 14775 bytes
CRC (0xCB9A7C26): 4 / 4 bytes
Total size: 39276477 / 87449880 bytes (44.9%)
Installer written to build/nsis/git-cola_2.7.exe
And here is pynsist.cfg, mostly left at its defaults:
[Application]
name=git-cola
version=2.7
entry_point=cola.main:shortcut_launch
icon=share/git-cola/icons/git-cola.ico
[Python]
version=2.7.10
bitness=32
[Include]
packages=cola
    PyQt4
    qtpy
    sip
files = share/
Then I copied build/nsis to a Windows 7 32-bit desktop and ran git-cola_2.7.exe to launch the installer.
Everything seemed OK through to the end, indicating git-cola was successfully installed on the machine.
However, there is no icon on the desktop (maybe something is already wrong there), so I went to the installation folder and double-clicked "git-cola.launch.pyw", but there was no response...
Here is the folder content for git-cola:
Here is the folder for pkgs:
I can see .dll and .exe files under each subfolder.
Update:
I see in the pynsist log: qtpy.PythonQtError: No Qt bindings could be found
So it seems pynsist didn't add Python to PATH, nor the pkgs folder under the installation to PYTHONPATH.
Then I added the Python executable to PATH and pkgs to PYTHONPATH; after that, the issue was still there.
I can successfully import PyQt4 within the Python interpreter.
However, importing Qt from PyQt4 fails...
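Roughly, the checks above look like this in a Command Prompt on the Windows machine (the install path below is only an example):
cd C:\git-cola
set PYTHONPATH=%CD%\pkgs
python -c "import PyQt4; print(PyQt4.__file__)"
python -c "from PyQt4 import Qt"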
Here is the content of pkgs/PyQt4:
So, what is wrong with my setup? Is there anything wrong with my configuration?
Thanks.
Wesley
My Sphinx configuration is:
================================ config/sphinx.yml
development:
  bin_path: "/usr/local/bin"
  searchd_binary_name: searchd
  indexer_binary_name: indexer
but every time I run rake ts:index I get:
Sphinx cannot be found on your system. You may need to configure the following
settings in your config/sphinx.yml file:
* bin_path
* searchd_binary_name
* indexer_binary_name
For more information, read the documentation:
http://freelancing-god.github.com/ts/en/advanced_config.html
Generating Configuration to config/development.sphinx.conf
Sphinx 2.0.1-beta (r2792)
Copyright (c) 2001-2011, Andrew Aksyonoff
Copyright (c) 2008-2011, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file 'config/development.sphinx.conf'...
indexing index 'post_core'...
collected 2 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 2 docs, 675 bytes
total 0.006 sec, 110510 bytes/sec, 327.43 docs/sec
skipping non-plain index 'post'...
total 6 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 12 writes, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg
rotating indices: succesfully sent SIGHUP to searchd (pid=19438).
Generating Configuration to config/development.sphinx.conf
Sphinx 2.0.1-beta (r2792)
Copyright (c) 2001-2011, Andrew Aksyonoff
Copyright (c) 2008-2011, Sphinx Technologies Inc (http://sphinxsearch.com)
using config file 'config/development.sphinx.conf'...
indexing index 'post_core'...
collected 2 docs, 0.0 MB
sorted 0.0 Mhits, 100.0% done
total 2 docs, 675 bytes
total 0.006 sec, 105567 bytes/sec, 312.79 docs/sec
skipping non-plain index 'post'...
total 6 reads, 0.000 sec, 0.0 kb/call avg, 0.0 msec/call avg
total 12 writes, 0.000 sec, 0.1 kb/call avg, 0.0 msec/call avg
rotating indices: succesfully sent SIGHUP to searchd (pid=19438).
So what's the problem? Why does the rake task say it can't find Sphinx even though it's installed?
The warning from Thinking Sphinx could definitely be clearer... the problem is very likely to be how old your version of Thinking Sphinx is. Older TS versions don't know about Sphinx 2.0.x - so I'd recommend updating to the latest version of Thinking Sphinx (either 1.4.6 for Rails 1.2 and 2.x, or 2.0.5 for Rails 3).
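For example, assuming Thinking Sphinx is installed as a gem (and using the versions mentioned above), the update could look like this:
gem install thinking-sphinx -v 1.4.6   # Rails 1.2 / 2.x
bundle update thinking-sphinx          # Rails 3, with the version pinned in the Gemfile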
There are two things that help to solve this problem. First, as Pat says, it is useful to update the Thinking Sphinx plugin or gem to the latest version (either 1.4.x for Rails 2, or 2.0.x for Rails 3). Second, it sometimes helps to specify the version of Sphinx in the configuration file (you can find it out by calling "indexer"), especially if Sphinx is running on a remote server and Thinking Sphinx does not have access to Sphinx locally:
production:
  ..
  version: 2.0.4 # <------- Version of Sphinx on remote server 192.168.1.10
  port: 9312
  address: 192.168.1.10
  ..
I was facing the same issue and looked everywhere for an answer without any resolution.
The trick that worked for me was to install an older version of Sphinx: v0.9 instead of the latest beta.
Using the latest Thinking Sphinx with this version of Sphinx resolved the issue.