Liferay DXP 7.3 Theme Creation: Error during gulp build (sass)

I'm getting this error when I use the gulp deploy command:
[15:18:15] 'build:compile-css' errored after 3.47 s
[15:18:15] Error in plugin "sass"
Message:
build/_css/compat/components/_dropdowns.scss
Error: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build/_css/compat/components/_dropdowns.scss 34:11 root stylesheet
Details:
formatted: Error: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build/_css/compat/components/_dropdowns.scss 34:11 root stylesheet
line: 34
column: 11
file: /Users/liferay/ibxcom-theme/build/_css/compat/components/_dropdowns.scss
status: 1
messageFormatted: build/_css/compat/components/_dropdowns.scss
Error: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build/_css/compat/components/_dropdowns.scss 34:11 root stylesheet
messageOriginal: compound selectors may no longer be extended.
Consider `@extend .dropdown-item, .disabled` instead.
╷
34 │ @extend .dropdown-item.disabled;
│ ^^^^^^^^^^^^^^^^^^^^^^^
╵
build/_css/compat/components/_dropdowns.scss 34:11 root stylesheet
relativePath: build/_css/compat/components/_dropdowns.scss
domainEmitter: [object Object]
domainThrown: false
[15:18:15] 'build' errored after 7.08 s
[15:18:15] 'deploy' errored after 7.08 s
I've tried the solution from a similar question here, but it did not work.
I copied the _dropdowns.scss file into src/css/compat/components/ and made the modification (roughly the change sketched below), but that only gives way to another error in _forms.scss, and once I correct that it throws another error in yet another Clay file. It's endless.
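For reference, the kind of edit the copied file needs follows the compiler's own suggestion; the enclosing selector below is purely illustrative (use whatever actually wraps line 34 of _dropdowns.scss):
// src/css/compat/components/_dropdowns.scss
.some-dropdown-rule { // hypothetical selector, for illustration only
  // Dart Sass no longer allows extending a compound selector:
  // @extend .dropdown-item.disabled;
  // Extend each simple selector instead, as the error message suggests:
  @extend .dropdown-item, .disabled;
}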
I am running
node v16.13.0 (npm v8.5.4)
Gulp
CLI version: 2.3.0
Local version: 4.0.2
and Sass 1.43.4 compiled with dart2js 2.14.4
Thanks

Related

Problem with warnings after compiling assets

When I compile assets with the command npm run prod, I'm receiving this message:
WARNING Compiled with 2 warnings
warning in ./resources/sass/app.scss
Module Warning (from ./node_modules/postcss-loader/src/index.js):
Warning
(1973:3) Error in parsing SVG: Non-whitespace before first tag. Line:
0 Column: 1 Char: d
# ./resources/sass/app.scss 2:14-253
warning in ./resources/sass/app.scss
Module Warning (from ./node_modules/postcss-loader/src/index.js):
Warning
(2084:3) Error in parsing SVG: Non-whitespace before first tag. Line:
0 Column: 1 Char: d
# ./resources/sass/app.scss 2:14-253
After this warning I get the list of compiled files, and after that the warnings again. This time they seem to be more precise:
WARNING in ./resources/sass/app.scss
(./node_modules/css-loader??ref--5-2!./node_modules/postcss-loader/src??postcss0!./node_modules/resolve-url-loader??ref--5-4!./node_modules/sass-loader/dist/cjs.js??ref--5-5!./resources/sass/app.scss)
Module Warning (from ./node_modules/postcss-loader/src/index.js):
Warning
(1973:3) Error in parsing SVG: Non-whitespace before first tag. Line:
0 Column: 1 Char: d # ./resources/sass/app.scss 2:14-253
WARNING in ./resources/sass/app.scss
(./node_modules/css-loader??ref--5-2!./node_modules/postcss-loader/src??postcss0!./node_modules/resolve-url-loader??ref--5-4!./node_modules/sass-loader/dist/cjs.js??ref--5-5!./resources/sass/app.scss)
Module Warning (from ./node_modules/postcss-loader/src/index.js):
Warning
(2084:3) Error in parsing SVG: Non-whitespace before first tag. Line:
0 Column: 1 Char: d # ./resources/sass/app.scss 2:14-253
When I compile assets with "npm run dev" there are no warnings after the process.
Does anyone have an idea what's causing this behavior?
app.scss content:
@use 'sass:math';
@use 'sass:list';
@import "compile/bootstrap";
@import "compile/bootstrap_limitless";
@import "compile/layout";
@import "compile/components";
@import "compile/colors";
@import "datatables";
@import "forms";
@import "daterangepicker";
I finally found the solution. The SVGO package from cssnano was the problem.
To fix this, just add the following lines to mix.options (in the webpack.mix.js file):
mix.options({
    cssNano: {
        svgo: false
    }
});
That's all.
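In context, the call sits next to the existing mix.sass call in webpack.mix.js; a minimal sketch, where the source and output paths are assumptions:
// webpack.mix.js — minimal sketch; the sass() paths are assumptions
const mix = require('laravel-mix');

mix.sass('resources/sass/app.scss', 'public/css')
   .options({
       cssNano: {
           svgo: false // skip cssnano's SVGO pass, which chokes on the embedded SVG data
       }
   });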

Check if a setting was applied with clickhouse-client

How can I check if a cluster setting is applied on server?
For example,
I run --query "SET allow_experimental_live_view = 1" on my cluster.
Which query should I use to check that this setting was changed? Is it possible to do with clickhouse-client command?
This question is similar but does not answer mine: How to check whether Clickhouse server-settings is really applied?
clickhouse-client has many options. Let's check them:
clickhouse-client --help
...
--allow_experimental_live_view arg Enable LIVE VIEW. Not mature enough.
--live_view_heartbeat_interval arg The heartbeat interval in seconds to indicate live query is alive.
--max_live_view_insert_blocks_before_refresh arg Limit maximum number of inserted blocks after which mergeable blocks are dropped and query is re-executed.
...
The setting needs to be passed this way:
clickhouse-client --allow_experimental_live_view 1
To check the current settings use:
SELECT *
FROM system.settings
WHERE name LIKE '%_live_%'
┌─name───────────────────────────────────────┬─value─┬─changed─┬─description────────────────────────────────────────────────────────────────────────────────────────────────┬─min──┬─max──┬─readonly─┬─type────┐
│ tcp_keep_alive_timeout │ 0 │ 0 │ The time in seconds the connection needs to remain idle before TCP starts sending keepalive probes │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 0 │ Seconds │
│ allow_experimental_live_view │ 1 │ 1 │ Enable LIVE VIEW. Not mature enough. │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 0 │ Bool │
│ max_live_view_insert_blocks_before_refresh │ 64 │ 0 │ Limit maximum number of inserted blocks after which mergeable blocks are dropped and query is re-executed. │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 0 │ UInt64 │
│ temporary_live_view_timeout │ 5 │ 0 │ Timeout after which temporary live view is deleted. │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 0 │ Seconds │
│ periodic_live_view_refresh │ 60 │ 0 │ Interval after which periodically refreshed live view is forced to refresh. │ ᴺᵁᴸᴸ │ ᴺᵁᴸᴸ │ 0 │ Seconds │
└────────────────────────────────────────────┴───────┴─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────┴──────┴──────────┴─────────┘
To run this query across all cluster nodes, see: is there a better way to query system tables across a clickhouse cluster?
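A sketch of that cluster-wide check using the clusterAllReplicas table function; the cluster name 'my_cluster' is an assumption (use one defined in your remote_servers configuration), and each row reflects the session/profile defaults seen on that replica:
-- check the setting on every replica of the cluster (cluster name is an assumption)
SELECT
    hostName() AS host,
    name,
    value,
    changed
FROM clusterAllReplicas('my_cluster', system.settings)
WHERE name = 'allow_experimental_live_view'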

Installing the Framework (Problem cURL error 6: Could not resolve host: cache-proxy)

I tried to install api-platform: https://api-platform.com/docs/distribution/
After starting, I see this in the log of "api-platform-242_cache-proxy_1":
│ Error: │
│ Message from VCC-compiler: │
│ Expected return action name. │
│ ('/usr/local/etc/varnish/default.vcl' Line 67 Pos 13) │
│ return (miss); │
│ ------------####-- │
│ Running VCC-compiler failed, exited with 2 │
│ VCL compilation failed
If I use the API (POST greeting), the response code is 500:
"hydra:description": "cURL error 6: Could not resolve host: cache-proxy (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)",
"trace": [
Nevertheless the entity is still inserted.
Furthermore, I tried api-platform without Docker (Apache).
I removed the line VARNISH_URL=http://cache-proxy in the .env file.
Then the return code is 500 with
"cURL error 3: malformed (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)"
Do you have any idea?
Kind regards
Ludi
Remove varnish from api_platform.yaml
I believe you should remove or comment out VARNISH_URL=http://cache-proxy from the .env file, not from api_platform.yaml, since your .env can change and is (or should be) host-dependent, while configuration (.yaml) should not be.
See: https://symfony.com/doc/current/configuration.html#the-env-file-environment-variables
There is also a .env file which is loaded and its contents become environment variables. This is useful during development, or if setting environment variables is difficult for your deployment.
In api_platform.yaml you SHOULD comment out the whole http_cache section (see the sketch after the error output below), or you will keep getting cURL "malformed" errors from Guzzle:
{
    "@context": "/api-platform/api/public/contexts/Error",
    "@type": "hydra:Error",
    "hydra:title": "An error occurred",
    "hydra:description": "cURL error 3: <url> malformed (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)",
    "trace": [
        {
            "namespace": "",
            "short_class": "",
            "class": "",
            "type": "",
            "function": "",
            "file": "...\\api-platform\\api\\vendor\\guzzlehttp\\guzzle\\src\\Handler\\CurlFactory.php",
            "line": 186,
            "args": []
        },
Effect is the same.
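A sketch of the commented-out block in config/packages/api_platform.yaml; the keys shown are the ones typically present in the distribution's default config and may differ in your version, so treat this as illustrative:
api_platform:
    mapping:
        paths: ['%kernel.project_dir%/src/Entity']
    # Comment out the whole http_cache block so API Platform stops
    # trying to purge the Varnish proxy:
    # http_cache:
    #     invalidation:
    #         enabled: true
    #         varnish_urls: ['%env(VARNISH_URL)%']
    #     public: true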
I had the same problem, and I resolved it!
As specified here: https://github.com/api-platform/api-platform/issues/777, the problem is the directory/file permissions, so instead of downloading the zip or tar.gz archive, I cloned the repo.
All the commands I ran (after installing Docker for Windows and enabling Shared Drives in the Docker for Windows settings):
cd my_parent_directory
git clone https://github.com/api-platform/api-platform.git
cd api-platform
docker-compose pull
docker-compose up -d
And when I go to https://localhost:8443, everything works!
I hope this helps you :)

chef-client local mode not able to create action on template resource on windows machine

I am executing the Chef cookbook recipe in local mode, and I have placed the template .erb file under the cookbook's templates folder.
It is giving an error: Chef::Exceptions::CookbookNotFound.
Attaching the execution log:
PS C:\chef-repo> chef-client -z -r "recipe[my_cookbook::test1]"
Starting Chef Client, version 12.18.31
resolving cookbooks for run list: ["my_cookbook::test1"]
Synchronizing Cookbooks:
- test (0.1.0)
Installing Cookbook Gems:
Compiling Cookbooks...
Converging 1 resources
Recipe: test::test1
* template[c:\test-template.txt] action create
================================================================================
Error executing action `create` on resource 'template[c:\test-template.txt]'
================================================================================
Chef::Exceptions::CookbookNotFound
----------------------------------
Cookbook test not found. If you're loading test from another cookbook, make sure you configure the dependency in your metadata
Resource Declaration:
---------------------
# In c:/chef-repo/.chef/local-mode-cache/cache/cookbooks/test/recipes/test1.rb
1: template "c:\\test-template.txt" do
2: source "test-template.txt.erb"
3: mode '0755'
4: variables({
5: test: node['cloud']['public_ipv4']
6: })
7: end
Compiled Resource:
------------------
# Declared in c:/chef-repo/.chef/local-mode-cache/cache/cookbooks/test/recipes/test1.rb:1:in `from_file'
template("c:\test-template.txt") do
action [:create]
retries 0
retry_delay 2
default_guard_interpreter :default
source "test-template.txt.erb"
variables {:test=>"1.1.1.1"}
declared_type :template
cookbook_name "test"
recipe_name "test1"
mode "0755"
path "c:\\test-template.txt"
end
Platform:
---------
x64-mingw32
Running handlers:
[2017-03-08T12:32:35+00:00] ERROR: Running exception handlers
Running handlers complete
[2017-03-08T12:32:35+00:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 05 seconds
[2017-03-08T12:32:35+00:00] FATAL: Stacktrace dumped to c:/chef-repo/.chef/local-mode-cache/cache/chef-stacktrace.out
[2017-03-08T12:32:35+00:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2017-03-08T12:32:35+00:00] FATAL: Chef::Exceptions::CookbookNotFound: template[c:\test-template.txt] (test::test1 line 1) had an error: Chef::Exceptions::CookbookNotFound: Cookbook test not found. If you're loading test from another cookbook, make sure you configure the dependency in your metadata
test1.rb
template "c:\\test-template.txt" do
source "test-template.txt.erb"
mode '0755'
variables({
test: node['cloud']['public_ipv4']
})
end
My chef-repo tree:
C:.
├───.chef
│ └───local-mode-cache
│ └───cache
│ └───cookbooks
│ └───test
│ ├───attributes
│ ├───recipes
│ └───templates
| |___test-template.txt.erb
├───cookbooks
│ └───my_cookbook
│ ├───attributes
│ ├───definitions
│ ├───files
│ │ └───default
│ ├───libraries
│ ├───providers
│ ├───recipes
│ ├───resources
│ └───templates
│ └───default
| |___test-template.txt.erb
├───data_bags
│ └───example
├───environments
├───nodes
└───roles
Just a guess, but here's what I think is wrong:
The template resource looks for a source file in c:/chef-repo/.chef/local-mode-cache/cache/cookbooks/test/templates/test-template.txt.erb.
Given these log lines:
resolving cookbooks for run list: ["my_cookbook::test1"]
...
Converging 1 resources
Recipe: test::test1
This makes me think that either:
Your actual cookbook template is at "c:/chef-repo/.chef/local-mode-cache/cache/cookbooks/my_cookbook/templates/test-template.txt.erb" and your metadata.rb uses the wrong name attribute (see the metadata.rb sketch after this list).
You have a typo somewhere in the template name or location while playing with a wrapper cookbook.
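For reference, a minimal metadata.rb whose name matches the run-list cookbook would look like this (the version and any other fields are illustrative):
# cookbooks/my_cookbook/metadata.rb
name    'my_cookbook'  # must match the cookbook name used in the run list (my_cookbook::test1)
version '0.1.0'        # illustrative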

Karaf 4.1 - How to add DynamicImport-Package tag within a third party osgi jar bundle?

I have a problem executing my own bundle within Karaf 4.1. I am using Shiro to persist user sessions, but when I recover the saved session, I get an exception like this:
Caused by: java.lang.ClassNotFoundException: io.twim.models.User
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:?]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:?]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:?]
at org.apache.felix.framework.BundleWiringImpl.doImplicitBootDelegation(BundleWiringImpl.java:1782) ~[?:?]
at org.apache.felix.framework.BundleWiringImpl.searchDynamicImports(BundleWiringImpl.java:1717) ~[?:?]
at org.apache.felix.framework.BundleWiringImpl.findClassOrResourceByDelegation(BundleWiringImpl.java:1552) ~[?:?]
at org.apache.felix.framework.BundleWiringImpl.access$400(BundleWiringImpl.java:79) ~[?:?]
I understand the problem: in my case, Shiro is deserializing (casting) the persisted session object, but my class io.twim.models.User is not visible from Shiro's ClassLoader. My Karaf instance has these bundles installed:
karaf#twim()> list
START LEVEL 100 , List Threshold: 50
ID │ State │ Lvl │ Version │ Name
───┼────────┼─────┼─────────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
51 │ Active │ 80 │ 3.1.0 │ DataStax Java Driver for Apache Cassandra - Core
52 │ Active │ 80 │ 19.0.0 │ Guava: Google Core Libraries for Java
73 │ Active │ 50 │ 2.16.1 │ camel-blueprint
83 │ Active │ 80 │ 1.3.0 │ Apache Shiro :: Core
86 │ Active │ 80 │ 1.0.0.SNAPSHOT │ twim-cache :: Distributed cache for TWIM
87 │ Active │ 80 │ 1.0.0.SNAPSHOT │ twim-cassandra :: Implementation Cassandra to TWIM
88 │ Active │ 80 │ 1.0.0.SNAPSHOT │ twim-common :: Bundle utility for all models, utilities, constants
89 │ Active │ 80 │ 1.0.0.SNAPSHOT │ twim-core-model :: Bundle utility for all models used in TWIM
90 │ Active │ 80 │ 1.0.0.SNAPSHOT │ twim-db :: Utilitaries to persitence into TWIM
91 │ Active │ 80 │ 1.0.0.SNAPSHOT │ twim-jaas :: JAAS authentication module for TWIM
I need to add the DynamicImport-Package header to bundle 83:
83 │ Active │ 80 │ 1.3.0 │ Apache Shiro :: Core
Executing the dynamic-import command within Karaf fixes the problem:
karaf#twim()> dynamic-import 83
But I would like to do this automatically in my feature installer, adding the header DynamicImport-Package: io.twim.models. Right now my features.xml looks like this:
<feature name="twim-auth" version="${project.version}">
<feature>twim-cassandra</feature>
<bundle>mvn:org.apache.shiro/shiro-core/1.3.0</bundle>
<bundle>mvn:io.twim/twim-core-model/${project.version}</bundle>
<bundle>mvn:io.twim/twim-jaas/${project.version}</bundle>
</feature>
How can I do this within my features.xml?
The wrap protocol can be used to build an OSGi bundle on the fly from a plain JAR. You can probably use it to add some instructions to an existing bundle, but I have never used it this way. Try something like this:
<bundle>wrap:mvn:org.apache.shiro/shiro-core/1.3.0$DynamicImport-Package=io.twim.models</bundle>
Finally! I have figured out the problem thanks to @alexandre-cartapanis; only one correction was necessary. With this, it works perfectly:
<bundle>wrap:mvn:org.apache.shiro/shiro-core/1.3.0/$DynamicImport-Package=io.twim.models&overwrite=merge</bundle>
Here in "Wrap deployer" section there is more explanation.
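Putting it together, the feature from the question would then look roughly like this; only the shiro-core line changes, and note that inside the XML the & has to be escaped as &amp;:
<feature name="twim-auth" version="${project.version}">
    <feature>twim-cassandra</feature>
    <bundle>wrap:mvn:org.apache.shiro/shiro-core/1.3.0/$DynamicImport-Package=io.twim.models&amp;overwrite=merge</bundle>
    <bundle>mvn:io.twim/twim-core-model/${project.version}</bundle>
    <bundle>mvn:io.twim/twim-jaas/${project.version}</bundle>
</feature>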
