Pine Editor - TradingView - CCI

The "CCI" in white appears to be separate from the chart. How can I
fix it?
+ I want to add "SMA" in addition to coding, can you help?
//@version=5
indicator("CCI")  // declaration required to compile; missing from the pasted snippet
// CCI
length = input.int(41, minval=1)
src = input(close, title="Source")
ma = ta.sma(src, length)
cci = (src - ma) / (0.015 * ta.dev(src, length))
plot(cci, "CCI", color.white, linewidth = 1)

Related

How do I select a line by clicking in Makie?

I want to be able to select a line on the plot by clicking on it. Ideally, when I click on any line, the number of that line will appear on the screen.
I wrote my code based on the tutorial, but there is no example with lines, so I improvised. https://docs.makie.org/v0.19/documentation/events/index.html#point_picking
At the moment I have no idea what these numbers are telling me, or why. They are not even the coordinates of the clicked points.
P.S. Actually it is just a starting point. I want to create event interaction on series and topoplots. But for now it would be great to find out the basics.
using GLMakie

f = Figure(backgroundcolor = RGBf(0.98, 0.98, 0.98), resolution = (1500, 700))
ax = Axis(f[1, 1], xlabel = "Time [s]", ylabel = "Voltage amplitude [µV]")
# N = 1:length(pos)   # pos is not defined in this snippet
positions = Observable(rand(Point2f, 10))
xs = 0:0.01:10
ys = 0.5 .* sin.(xs)
lines!(xs, ys)
lines!(xs, ys * 2)
hidedecorations!(ax, label = false, ticks = false, ticklabels = false)
hidespines!(ax, :t, :r)
hlines!(0, color = :gray, linewidth = 1)
vlines!(0, color = :gray, linewidth = 1)

i = Observable(0)
on(events(f).mousebutton, priority = 2) do event
    if event.button == Mouse.left && event.action == Mouse.press
        plt, i[] = pick(f)
        str = lift(i -> "$(i)", i)
        text!(ax, 1, -0.5, text = str, align = (:center, :center))
    end
end
f
Below are some examples of the interaction between clicking and the number displayed (the red dot is where I click).
Check out the mouseposition variable here:
https://docs.makie.org/stable/api/#Events
or the register_interaction! function here:
https://docs.makie.org/v0.19/examples/blocks/axis/index.html#registering_and_deregistering_interactions
You can use them both as below:
using GLMakie

f = Figure(backgroundcolor = RGBf(0.98, 0.98, 0.98), resolution = (1500, 700))
ax = Axis(f[1, 1], xlabel = "Time [s]", ylabel = "Voltage amplitude [µV]")
# N = 1:length(pos)
positions = Observable(rand(Point2f, 10))
xs = 0:0.01:10
ys = 0.5 .* sin.(xs)
lines!(xs, ys)
lines!(xs, ys * 2)
hidedecorations!(ax, label = false, ticks = false, ticklabels = false)
hidespines!(ax, :t, :r)
hlines!(0, color = :gray, linewidth = 1)
vlines!(0, color = :gray, linewidth = 1)

register_interaction!(ax, :my_interaction) do event, axis
    if event.type === MouseEventTypes.leftclick
        println("Graph axis position: $(event.data)")
    end
end

i = Observable(0)
on(events(f).mousebutton, priority = 2) do event
    if event.button == Mouse.left && event.action == Mouse.press
        plt, i[] = pick(f)
        str = lift(i -> "$(i)", i)
        text!(ax, 1, -0.5, text = str, align = (:center, :center))
        #show mouseposition(f)
    end
end
f
Note that, for some reason (perhaps it treats the first click as a selection?), Makie does not start registering the axis interaction until the first click inside the graph, whereas the clicks on the figure are all reported, including the first one.

PIL.ImageDraw.ImageDraw.text features argument in Pillow 7.0.0 doesn't seem to give any difference in results

This is the code
import numpy as np
import cv2
from PIL import Image, ImageDraw, ImageFont

# `captcha` (the text to draw) is assumed to be defined elsewhere
img = np.full(shape=(40, 225, 3), fill_value=211, dtype=np.uint8)
b, g, r, a = 0, 0, 0, 0
fontpath = "arial.ttf"
font = ImageFont.truetype(fontpath, 14)
img_pil = Image.fromarray(img)
draw = ImageDraw.Draw(img_pil)
draw.text((25, 10), captcha, font=font, features=['cpsp', 'dist'], fill=(b, g, r, a))
# w = img_pil.rotate(17.5, expand=1)
# img_pil = Image.paste(ImageOps.colorize(w, (0,0,0), (255,255,84)), (242,60), w)
img = np.array(img_pil)
noise_factor = np.random.uniform(low=0.4, high=0.8, size=1)
gauss = np.random.normal(0, noise_factor, img.size)
gauss = gauss.reshape(img.shape[0], img.shape[1], img.shape[2]).astype('uint8')
noise = img + img * gauss
## Display
gray = cv2.cvtColor(noise, cv2.COLOR_BGR2GRAY)
cv2.imwrite(captcha + ".png", gray)
The above code didn't alter the space between the characters; am I using it right?
Please include some examples in https://pillow.readthedocs.io/en/stable/reference/ImageDraw.html on how to use this.
Click Here to see the output for the above code

Three.js - Scaling a texture to fit a Plane (of any size) perfectly

In essence, I want to replicate the behaviour of the CSS background-size: cover property.
Looking here you can see the image being scaled while keeping its aspect ratio, but it's not really working correctly, as the image does not fill the Plane, leaving margins on either side - https://next.plnkr.co/edit/8650f9Ji6qWffTqE?preview
Code snippet (Lines 170 - 175) -
var geometryAspectRatio = 5/3;
var imageAspectRatio = 3264/2448;
textTile.wrapT = THREE.RepeatWrapping;
textTile.repeat.x = geometryAspectRatio / imageAspectRatio;
textTile.offset.x = 0.5 * ( 1 - textTile.repeat.x );
What I want to happen is for it to scale up and then reposition itself in the centre (much like how cover works).
var repeatX, repeatY;
repeatX = w * this.textureHeight / (h * this.textureWidth);
if (repeatX > 1) {
    // fill the width and adjust the height accordingly
    repeatX = 1;
    repeatY = h * this.textureWidth / (w * this.textureHeight);
    mat.map.repeat.set(repeatX, repeatY);
    mat.map.offset.y = (repeatY - 1) / 2 * -1;
} else {
    // fill the height and adjust the width accordingly
    repeatX = w * this.textureHeight / (h * this.textureWidth);
    repeatY = 1;
    mat.map.repeat.set(repeatX, repeatY);
    mat.map.offset.x = (repeatX - 1) / 2 * -1;
}
Updated https://next.plnkr.co/edit/LUk37xLG2yvv6hgg?preview
For anyone as confused by this as I was, the missing piece for me is that the .repeat.x and .repeat.y properties of any texture can be less than one, and a value under 1 scales the image up, as the inverse of the scale. Think about it: at scale 2 the image in a sense repeats 0.5 times, because you only see half of the image.
So...
Something common in some libraries but not supported by textures in THREE.js would be
.scaleX = 2; (not supported in THREE.js textures as of v1.30.1)
And the THREE.js texture equivalent would be
texture.repeat.x = .5;
To convert scale to "repeat", simply do the inverse of the scale
var desiredScaleX = 3;
var desiredRepeatX = 1 / desiredScaleX;
The repeat for scale 3 comes out to (1/3) = .3333; In other words a 3x image would be cropped and only show 1/3 of the image, so it repeats .3333 times.
As for scaling to fit to cover, generally choosing the larger scale of the two will do the trick, something like:
var fitScaleX = targetWidth / actualWidth;
var fitScaleY = targetHeight / actualHeight;
var fitCoverScale = Math.max(fitScaleX,fitScaleY);
var repeatX = 1 / fitCoverScale;
var repeatY = 1 / fitCoverScale;
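One thing the snippet above leaves implicit is that the repeat generally differs per axis and that the cropped axis still needs to be re-centred, as in the earlier snippet. Here is a minimal sketch of that combination (not from the original answer); it assumes a texture object such as mat.map and the same targetWidth/targetHeight and actualWidth/actualHeight as above:
// Cover fit: the more-constrained axis shows the whole texture (repeat = 1),
// the other axis is cropped (repeat < 1) and re-centred with offset = (1 - repeat) / 2,
// matching the offset formula used in the earlier snippets.
var fitScaleX = targetWidth / actualWidth;
var fitScaleY = targetHeight / actualHeight;
var fitCoverScale = Math.max(fitScaleX, fitScaleY);
var repeatX = fitScaleX / fitCoverScale;
var repeatY = fitScaleY / fitCoverScale;
mat.map.repeat.set(repeatX, repeatY);
mat.map.offset.set((1 - repeatX) / 2, (1 - repeatY) / 2);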

PixiJS very slow on mobile compared to CSS

I'm testing PixiJS for some simple 2D graphics: basically I'm sliding tiles with background color and border animations, plus I'm masking some parts of the layout.
While it works great on desktops, it's much slower than the same slides + animations done with pure CSS on mobile devices (where, by the way, I'm using Crosswalk + Cordova, so the browser is always the same).
For moving tiles and animating color I'm calling requestAnimationFrame for each tile and I've disabled PIXI's ticker:
ticker.autoStart = false;
ticker.stop();
Could this slowness be due to a weaker GPU on mobiles, or is it just about the way I use PIXI?
I'm not showing the full code because it is quite long (~800 lines).
The following is the routine I use for each tile once a slide is captured:
const animateTileBorderAndText = (tileObj, steps, _color, radius, textSize, strokeThickness, _config) => {
    let pixiTile = tileObj.tile;
    let s = 0;
    let graphicsData = pixiTile.graphicsData[0];
    let shape = graphicsData.shape;
    let textStyle = pixiTile.children[0].style;
    let textInc = (textSize - textStyle.fontSize) / steps;
    let strokeInc = (strokeThickness - textStyle.strokeThickness) / steps;
    let prevColor = graphicsData.fillColor;
    let color = _color !== null ? _color : prevColor;
    let alpha = pixiTile.alpha;
    let h = shape.height;
    let w = shape.width;
    let rad = shape.radius;
    let radiusInc = (radius - rad) / steps;
    let r = (prevColor & 0xFF0000) >> 16;
    let g = (prevColor & 0x00FF00) >> 8;
    let b = prevColor & 0x0000FF;
    let rc = (color & 0xFF0000) >> 16;
    let rg = (color & 0x00FF00) >> 8;
    let rb = color & 0x0000FF;
    let redStep = (rc - r) / steps;
    let greenStep = (rg - g) / steps;
    let blueStep = (rb - b) / steps;
    let paintColor = prevColor;
    let goPaint = color !== prevColor;

    let animate = (t) => {
        if (s === steps) {
            textStyle.fontSize = textSize;
            textStyle.strokeThickness = strokeThickness;
            //pixiTile.tint = color;
            if (!_config.SEMAPHORES.slide) {
                _config.SEMAPHORES.slide = true;
                PUBSUB.publish(_config.SLIDE_CODE, _config.torusModel.getData());
            }
            return true;
        }
        if (goPaint) {
            r += redStep;
            g += greenStep;
            b += blueStep;
            paintColor = (r << 16) + (g << 8) + b;
        }
        textStyle.fontSize += textInc;
        textStyle.strokeThickness += strokeInc;
        pixiTile.clear();
        pixiTile.beginFill(paintColor, alpha);
        pixiTile.drawRoundedRect(0, 0, h, w, rad + radiusInc * (s + 1));
        pixiTile.endFill();
        s++;
        return requestAnimationFrame(animate);
    };

    return animate();
};
The above function is called after the following one, which is called for each tile to make it slide.
const slideSingleTile = (tileObj, delta, axe, conf, SEM, tilesMap) => {
    let tile = tileObj.tile;
    let steps = conf.animationSteps;
    SEM.slide = false;
    let s = 0;
    let stepDelta = delta / steps;
    let endPos = tile[axe] + delta;

    let slide = (time) => {
        if (s === steps) {
            tile[axe] = endPos;
            tileObj.resetPosition();
            tilesMap[tileObj.row][tileObj.col] = tileObj;
            return tileObj.onSlideEnd(axe == 'x' ? 0 : 2);
        }
        tile[axe] += stepDelta;
        s++;
        return requestAnimationFrame(slide);
    };

    return slide();
};
For each finger gesture a single column/row (of an NxM matrix of tiles) is slid and animated using the above two functions.
It's the first time I've used canvas.
I read that canvas is way faster than DOM animations and I read very good reviews of PixiJS, so I believe I'm doing something wrong.
Can someone help?
In the end I'm a complete donk...
The issue is not with PixiJS.
Basically, I was forcing 60 FPS! The number of steps to complete the animation is set to 12, which implies a 200 ms animation at 60 FPS (using requestAnimationFrame), but on low-end devices it is obviously going to be slower.
CSS animations take a duration as a parameter, so they automatically adapt to the device's frame rate.
To solve the issue I'm adapting the number of steps during the animation: basically, if the animation takes longer than 200 ms, I reduce the number of steps proportionally.
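To illustrate that idea (this sketch is not the author's actual code; animateByTime and the tile in the usage comment are made up for the example), one way to get the same frame-rate independence is to drive progress from elapsed time instead of a fixed step count:
// Advance the animation by elapsed time rather than by frame count, so slower
// devices render fewer (larger) steps instead of stretching the animation.
const animateByTime = (duration, update, done) => {
    let start = null;
    const frame = (now) => {
        if (start === null) start = now;
        // progress in [0, 1], independent of the device frame rate
        const progress = Math.min((now - start) / duration, 1);
        update(progress);
        if (progress < 1) return requestAnimationFrame(frame);
        return done && done();
    };
    return requestAnimationFrame(frame);
};
// Usage: slide a tile 100px along x over 200 ms, whatever the FPS, e.g.
// animateByTime(200, p => { tile.x = startX + 100 * p; }, () => tile.onSlideEnd(0));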
I hope this helps other web developers who, like me, are used to CSS animations and have just started working with canvas.

How do you scale n images to fit a certain width?

I've got 3 images I want to fit on a web page horizontally side by side; they have various proportions and I want them to end up sharing a particular height (to be calculated). So let's say the width of my page is t and the current dimensions of the images are h1 x w1, h2 x w2, h3 x w3.
I worked out a formula for 2 images but I can't get my head around 3 or more:
(h1*h2*t) / (w1*h2 + h1*w2)
The condition you must respect is:
k1*w1 + k2*w2 + ... + kn*wn = t
where kn is the scaling constant applied to the width to keep the original proportion of the image with its new height.
We can say that
kn = h_new / hn
where h_new is the new height for all images. From there it's all substitution and isolation
h_new*w1/h1 + h_new*w2/h2 + ... + h_new*wn/hn = t
h_new * (w1/h1 + w2/h2 + ... + wn/hn) = t
h_new = t / (w1/h1 + w2/h2 + ... + wn/hn)
I think that should be it, reply if I'm completely wrong! :)
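As a quick sanity check (this example is not part of the original answer and the dimensions are made up), here is the formula as a small JavaScript helper:
// New shared height so that n images, scaled proportionally, fill width t:
//   h_new = t / (w1/h1 + w2/h2 + ... + wn/hn)
function fitHeight(images, t) {
    const sumAspect = images.reduce((acc, img) => acc + img.w / img.h, 0);
    return t / sumAspect;
}
// Three images fitted into a 900px-wide row:
const images = [{ w: 400, h: 300 }, { w: 640, h: 480 }, { w: 300, h: 600 }];
const hNew = fitHeight(images, 900);                     // ≈ 284.2
const widths = images.map(img => img.w * hNew / img.h);  // ≈ [379, 379, 142], sum ≈ 900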
I wrote a Photoshop CS5 script in JavaScript to resize and save out the open images based on @vache's formula. Hope someone finds it useful:
var outputFolder = Folder.selectDialog("Select a folder for the output files")
if (outputFolder != null) {
    var startRulerUnits = app.preferences.rulerUnits
    var startDisplayDialogs = app.displayDialogs
    // Set Adobe Photoshop CS5 to use pixels and display no dialogs
    app.preferences.rulerUnits = Units.PIXELS
    app.displayDialogs = DialogModes.NO

    do {
        var totalWidth = parseInt( prompt("How wide do they need to fit into?", 844) );
    } while (totalWidth <= 0 || isNaN(totalWidth));

    var DL = documents.length;
    var totalArea = 0;
    for (a = 0; a < DL; a++) {
        var cur = documents[a];
        totalArea += cur.width / cur.height;
    }
    var newHeight = totalWidth / totalArea;

    for (a = 1; a <= DL; a++) {
        activeDocument = documents[a - 1];
        var AD = activeDocument;
        // bring to front
        app.activeDocument = AD;
        AD.changeMode(ChangeMode.RGB);
        var imgName = AD.name.toLowerCase();
        imgName = imgName.substr(0, imgName.length - 4);
        AD.flatten();
        AD.resizeImage(null, UnitValue(newHeight, "px"), null, ResampleMethod.BICUBIC);
        //AD.resizeImage(UnitValue(newWidth,"px"),null,null,ResampleMethod.BICUBIC);
        saveForWeb(outputFolder, imgName, AD);
    }

    // Close all the open documents
    while (app.documents.length) {
        app.activeDocument.close(SaveOptions.DONOTSAVECHANGES)
    }

    // Reset the application preferences
    app.preferences.rulerUnits = startRulerUnits;
    app.displayDialogs = startDisplayDialogs;
}

function saveForWeb(outputFolderStr, filename, AD)
{
    var opts, file;
    opts = new ExportOptionsSaveForWeb();
    opts.format = SaveDocumentType.JPEG;
    opts.quality = 80;
    if (filename.length > 27) {
        file = new File(outputFolderStr + "/temp.jpg");
        AD.exportDocument(file, ExportType.SAVEFORWEB, opts);
        file.rename(filename + ".jpg");
    }
    else {
        file = new File(outputFolderStr + "/" + filename + ".jpg");
        AD.exportDocument(file, ExportType.SAVEFORWEB, opts);
    }
}
