This is somewhat related to my previous post, where I learned a bit more about actions.
I have been trying to figure out how to work with this nifty feature, but I've been stuck for the past few hours.
In my component I create an SVG element with a viewBox like so:
<svg id="pitch" viewBox={`0 0 ${width} ${height}`} use:foo>
</svg>
then foo is this function:
function foo(node) {
  // the node has been mounted in the DOM
  let g = node.append('h1');
  g.text("This is the text I'd like to render to check that it works");
  return {
    destroy() {
      // the node has been removed from the DOM
    }
  };
}
From what I've understood in the docs, use:foo passes the node it is applied to into foo, so I thought directly appending SVG elements to it would work.
Do I need to update it somehow?
Here is a repl with reproducible code.
I get the following error:
Missing "./types/runtime/internal/keyed_each.js" export in "svelte" package
Thank you!
I would expect the code in foo to start with d3.select(node), and everything to work based off that. Otherwise the DOM tree generated by d3 will not be connected to your document at all. Alternatively the resulting element (selection.node()) has to be appended to node at some point.
The error sounds highly unrelated and probably would require more context.
Note: you cannot add HTML elements directly inside an SVG; SVG is for canvas-like vector graphics, not document layouts. If you want to insert text, use the <text> element.
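Putting those two points together, here is a d3-free sketch of what the action could look like (the coordinates are placeholders; the d3 equivalent would start with d3.select(node).append('text')):

```javascript
// A minimal sketch of the action without d3: create an SVG <text>
// element (plain HTML like <h1> will not render inside an <svg>)
// and attach it to the node the action receives.
function foo(node) {
  const label = node.ownerDocument.createElementNS(
    'http://www.w3.org/2000/svg', 'text');
  label.setAttribute('x', '10');  // placeholder coordinates
  label.setAttribute('y', '20');
  label.textContent = "This is the text I'd like to render";
  node.appendChild(label);
  return {
    destroy() {
      // clean up when the node is removed from the DOM
      label.remove();
    }
  };
}
```

The key differences from the original are the SVG namespace (createElementNS) and the <text> element, and that the created element is actually appended to node so it ends up in the document.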
Related
Currently I am scraping news article sites. In the process of extracting their main content, I ran into the issue that a lot of them have embedded tweets, like these:
I use XPath expressions with XPath Helper (a Chrome add-on) to test whether I can get the content, then add the expression to my Scrapy (Python) spider. But elements inside a #shadow-root seem to be outside the scope of the DOM, and I am looking for a way to get content inside these elements, preferably with XPath.
Most web scrapers, including Scrapy, don't support the Shadow DOM, so you will not be able to access elements in shadow trees at all.
And even if a web scraper did support the Shadow DOM, XPath is not supported inside it at all. Only CSS selectors are supported to some extent, as documented in the CSS Scoping spec.
One way to scrape pages containing shadow DOMs with tools that don't work with the shadow DOM API is to recursively iterate over shadow DOM elements and replace them with their HTML code:
// Returns the HTML of a given shadow root.
const getShadowDomHtml = (shadowRoot) => {
  let shadowHTML = '';
  for (let el of shadowRoot.childNodes) {
    shadowHTML += el.nodeValue || el.outerHTML;
  }
  return shadowHTML;
};

// Recursively replaces shadow DOMs with their HTML.
const replaceShadowDomsWithHtml = (rootElement) => {
  for (let el of rootElement.querySelectorAll('*')) {
    if (el.shadowRoot) {
      replaceShadowDomsWithHtml(el.shadowRoot);
      el.innerHTML += getShadowDomHtml(el.shadowRoot);
    }
  }
};

replaceShadowDomsWithHtml(document.body);
If you are scraping using a full browser (Chrome with Puppeteer, PhantomJS, etc.), then just inject this script into the page. It is important to execute it only after the whole page has rendered, because it can break the JS code of shadow DOM components.
Check full article I wrote on this topic: https://kb.apify.com/tips-and-tricks/how-to-scrape-pages-with-shadow-dom
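As a sanity check, the first helper can be exercised outside a browser with plain objects standing in for DOM nodes (nodeValue is set on text nodes, outerHTML on element nodes):

```javascript
// Same helper as above, repeated here so the snippet is self-contained.
const getShadowDomHtml = (shadowRoot) => {
  let shadowHTML = '';
  for (let el of shadowRoot.childNodes) {
    shadowHTML += el.nodeValue || el.outerHTML;
  }
  return shadowHTML;
};

// A fake shadow root: one element node and one text node.
const fakeShadowRoot = {
  childNodes: [
    { nodeValue: null, outerHTML: '<p>Tweet text</p>' },
    { nodeValue: 'trailing text' },
  ],
};

console.log(getShadowDomHtml(fakeShadowRoot));
// -> <p>Tweet text</p>trailing text
```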
I've searched through the Corvid docs and Stack Overflow, and haven't found anything.
Is there a way to use appendChild() in Wix Corvid (Code)?
EDIT: Wix does not allow direct DOM access. I assumed that people answering this would know I was looking for an alternative to appendChild, and that this method could not be used as-is in Wix.
So, to clarify: is there a way to add a child to a parent element using Wix's APIs?
It depends on what you are trying to achieve. The only thing off the top of my head is adding more items to a repeater, which you can do by first getting the repeater's initial data, adding another item to the array, and reassigning the repeater's data property:
const initialData = $w('#repeater').data
const newItem = {
  _id: 'newItem1', // Must have an _id property
  content: 'some content'
}
const newData = [...initialData, newItem]
$w('#repeater').data = newData
https://www.wix.com/corvid/reference/$w.Repeater.html#data
In Corvid, you cannot use any function which accesses the DOM.
Coming from one of the developers of Corvid:
Accessing document elements such as div, span, button, etc is off-limits. The way to access elements on the page is only through $w. One small exception is the $w.HtmlComponent (which is based on an iFrame). This element was designed to contain vanilla HTML and it works just fine. You just can't try to trick it by using parent, window, top, etc.
Javascript files can be added to your site's Public folder, but the same limitations apply - no access to the DOM.
Read more here: https://www.wix.com/corvid/forum/main/comment/5afd2dd4f89ea1001300319e
I'm still relatively new to programming and I have a project I am working on: a staff efficiency dashboard for a fictional pizza company. I want to find the quickest pizza-making time and display that time and the staff member's name to the user.
With the data charts it has been easy: create a function, then use dc, e.g. dc.barChart("#idOfDivInHtmlPage").
I suspect I might be overcomplicating things, and that I've completely forgotten how to display the output of a JS function on an HTML page.
I've been using d3.js, dc.js and crossfilter to represent most of the data visually in an interactive way.
Snippet of the .csv
Name,Rank,YearsService,Course,PizzaTime
Scott,Instore,3,BMC,96
Mark,Instore,4,Intro,94
Wendy,Instore,3,Intro,76
This is what I've tried so far:
var timeDim = ndx.dimension(function(d) {
  return [d.PizzaTime, d.Name];
});
var minStaffPizzaTimeName = timeDim.bottom(1)[0].PizzaTime;
var maxStaffPizzaTimeName = timeDim.top(1)[0].PizzaTime;
then in the html
<p id="minStaffPizzaTimeName"></p>
<script type="text/javascript" src="static/js/graph.js">
document.write("minStaffPizzaTimeName");
</script>
You are surely on the right track, but in javascript you often have to consider the timing of when things will happen.
document.write() (or rather, anything at the top level of a script) will get executed while the page is getting loaded.
But I bet your data is loaded asynchronously (probably with d3.csv), so you won't have a crossfilter object until a bit later. You haven't shown these parts but that's the usual way to use crossfilter and dc.js.
So you will need to modify the page after it's loaded. D3 is great for this! (The straight javascript way to do this particular thing isn't much harder.)
You should be able to leave the <p> tag where it is, remove the extra <script> tag, and then, in the function which creates timeDim:
d3.select('#minStaffPizzaTimeName').text(minStaffPizzaTimeName);
This looks for the element with that ID and replaces its content with the value you have computed.
General problem solving tools
You can use the dev tools DOM inspector to make sure that the p tag exists with id minStaffPizzaTimeName.
You can also use
console.log(minStaffPizzaTimeName)
to see if you are fetching the data correctly.
It's hard to tell without a running example but I think you will want to define your dimension using the PizzaTime only, and convert it from a string to a number:
var timeDim = ndx.dimension(function(d) {
return +d.PizzaTime;
});
Then timeDim.bottom(1)[0] should give you the row of your original data with the lowest value of PizzaTime. Adding .Name to that expression should retrieve the name field from the row object.
But you might have to poke around using console.log or the interactive debugger to find the exact expression that works. It's pretty much impossible to use dc.js or D3 without these tools, so a little investment in learning them will pay off big time.
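For instance, with the three rows from the CSV snippet above you can check the expected answer in plain JavaScript, independent of crossfilter (the rows here are hand-copied from that snippet):

```javascript
// Rows copied from the CSV snippet above. PizzaTime arrives as a
// string, so it is coerced to a number with unary + before sorting.
const rows = [
  { Name: 'Scott', PizzaTime: '96' },
  { Name: 'Mark',  PizzaTime: '94' },
  { Name: 'Wendy', PizzaTime: '76' },
];

const byTime = rows.slice().sort((a, b) => +a.PizzaTime - +b.PizzaTime);
const fastest = byTime[0];                  // lowest PizzaTime
const slowest = byTime[byTime.length - 1];  // highest PizzaTime

console.log(fastest.Name, slowest.Name);
// -> Wendy Scott
```

If the dc.js code produces a different name than this hand check, the problem is in the dimension definition rather than in the data.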
Boom, finally figured it out.
function show_fastest_and_slowest_pizza_maker(ndx) {
  var timeDim = ndx.dimension(dc.pluck("PizzaTime"));
  var minPizzaTimeName = timeDim.bottom(1)[0].Name;
  var maxPizzaTimeName = timeDim.top(1)[0].Name;
  d3.select('#minPizzaTimeName')
    .text(minPizzaTimeName);
  d3.select('#maxPizzaTimeName')
    .text(maxPizzaTimeName);
}
Thanks very much Gordon, you sent me down the right path!
Several of the reusable charts examples, such as the histogram, include the following:
// select the svg element, if it exists
var svg = d3.select(this).selectAll("svg").data([data]);
// append the svg element, if it doesn't exist
svg.enter().append("svg") ...
...where this is the current DOM element and data is the data that's been bound to it. As I understand it, this idiom allows a chart to be created the first time the chart function is called, but not 'recreated', if you like, following subsequent calls. However, could anyone explain this idiom in detail? For example:
Why is .selectAll("svg") used and not .select("svg")?
Why isn't .empty() used to check for an empty selection?
Can any single-element array be passed to .data()? (I assume the purpose of this array is simply to return the enter selection.)
Thanks in advance for any help.
When this is called the first time, there's no SVG, and therefore the .enter() selection will contain a placeholder for the datum passed in. On subsequent calls, the .enter() selection will be empty, so nothing new is added.
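To make that concrete, here is a tiny plain-JavaScript model of the data join (not d3's actual implementation): elements and data are matched by index, leftover data become the enter selection, and leftover elements become the exit selection.

```javascript
// A simplified, index-based model of d3's data join.
function dataJoin(existingElements, data) {
  const update = [];
  const enter = [];
  data.forEach((d, i) => {
    if (i < existingElements.length) {
      update.push({ element: existingElements[i], datum: d });
    } else {
      enter.push({ datum: d }); // placeholder: no element yet
    }
  });
  const exit = existingElements.slice(data.length);
  return { update, enter, exit };
}

// First call: no <svg> yet, so the single datum lands in enter.
const first = dataJoin([], [{ values: [1, 2, 3] }]);
console.log(first.enter.length); // -> 1

// Subsequent calls: one <svg> exists, so enter is empty.
const second = dataJoin(['<svg>'], [{ values: [1, 2, 3] }]);
console.log(second.enter.length); // -> 0
```

This is why .data([data]) with a single-element array creates the SVG exactly once and leaves it alone afterwards.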
Concerning the specific questions:
.selectAll() returns a selection whose elements can be joined one-to-one against the array passed to .data(); .select() does not establish a data join in this way.
.empty() could be used, but it's not necessary -- if the selection is empty, nothing happens. Checking .empty() would add an if statement and have exactly the same effect.
Yes. Have a look at this tutorial, for example, for some more detail on selections.