Flashing cloud effect on LED strip / Raspberry Pi 3

I have an LED strip with 20 LEDs and a Raspberry Pi 3 B+.
I need to tweak the code below to flash the LEDs randomly at various intensities, with colours ranging from (80, 0, 150) to (200, 0, 255).
The hardware is all set up and working, using the Adafruit rpi_ws281x library, but I don't know how to code an infinite loop of flashing (not blinking).
I've found a project I really want to get close to: https://youtu.be/EoGVQl3SjGY
My code so far is:
from rpi_ws281x import *
import random
# LED strip configuration:
LED_COUNT = 20 # Number of LED pixels.
LED_PIN = 10 # GPIO pin connected to the pixels (18 uses PWM!).
#LED_PIN = 10 # GPIO pin connected to the pixels (10 uses SPI /dev/spidev0.0).
LED_FREQ_HZ = 800000 # LED signal frequency in hertz (usually 800khz)
LED_DMA = 10 # DMA channel to use for generating signal (try 10)
LED_BRIGHTNESS = 255 # Set to 0 for darkest and 255 for brightest
LED_INVERT = False # True to invert the signal (when using NPN transistor level shift)
LED_CHANNEL = 0 # set to '1' for GPIOs 13, 19, 41, 45 or 53
strip = Adafruit_NeoPixel(LED_COUNT, LED_PIN, LED_FREQ_HZ,LED_DMA,LED_INVERT,LED_BRIGHTNESS,LED_CHANNEL)
strip.begin()
for x in range(0,LED_COUNT):
    while True:
        R = random.randint(80,200)
        B = random.randint(150,255)
        strip.setPixelColor(x,Color(R,0,B))
        strip.show()
So my question: does anyone know how I could tweak the code to get such an effect?
Thanks in advance to anyone who can push me in the right direction.
Ereen
(I tried to find a similar effect on the web.)

from rpi_ws281x import *
import random
# LED strip configuration:
LED_COUNT = 20 # Number of LED pixels.
LED_PIN = 10 # GPIO pin connected to the pixels (18 uses PWM!).
#LED_PIN = 10 # GPIO pin connected to the pixels (10 uses SPI /dev/spidev0.0).
LED_FREQ_HZ = 800000 # LED signal frequency in hertz (usually 800khz)
LED_DMA = 10 # DMA channel to use for generating signal (try 10)
LED_BRIGHTNESS = 255 # Set to 0 for darkest and 255 for brightest
LED_INVERT = False # True to invert the signal (when using NPN transistor level shift)
LED_CHANNEL = 0 # set to '1' for GPIOs 13, 19, 41, 45 or 53
strip = Adafruit_NeoPixel(LED_COUNT, LED_PIN, LED_FREQ_HZ,LED_DMA,LED_INVERT,LED_BRIGHTNESS,LED_CHANNEL)
strip.begin()
while True: # <- Start the infinite loop, each iteration gets a new random color
    for x in range(LED_COUNT): # <- iterate through your LED count
        R = random.randint(80,200) # <- Random RED component
        B = random.randint(150,255) # <- Random BLUE component
        strip.setPixelColor(x,Color(R,0,B)) # <- Set the pixel color
        strip.show() # <- Show the pixel color for each LED index
    # Do something here, my guess is a timer to keep the LEDs on for a set time, then turn them off.
    # Next would be another timer to wait before restarting the loop (unless you don't want to wait)
I believe this should get you started. This creates an endless while loop that gets the random colors for red and blue, applies the colors to the strip, then shows the colors by turning on the strip.
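For a softer "cloud" look rather than a hard blink, here is a minimal sketch of one possible approach (my own illustration, not code from the question or the linked video): it changes only a random subset of pixels each frame and sleeps for a random interval so the flashes are irregular. The 0.3 probability and the sleep range are assumptions to tune by eye; the configuration constants are the same as above.

import time
import random
from rpi_ws281x import Adafruit_NeoPixel, Color

# Same strip configuration as above.
LED_COUNT = 20
LED_PIN = 10
LED_FREQ_HZ = 800000
LED_DMA = 10
LED_BRIGHTNESS = 255
LED_INVERT = False
LED_CHANNEL = 0

strip = Adafruit_NeoPixel(LED_COUNT, LED_PIN, LED_FREQ_HZ, LED_DMA, LED_INVERT, LED_BRIGHTNESS, LED_CHANNEL)
strip.begin()

def random_cloud_color():
    # A colour somewhere between (80, 0, 150) and (200, 0, 255).
    return Color(random.randint(80, 200), 0, random.randint(150, 255))

try:
    while True:                                  # infinite flashing loop
        for x in range(LED_COUNT):
            # Change only some pixels each frame so the strip flickers
            # like light inside a cloud instead of blinking as a whole.
            if random.random() < 0.3:
                strip.setPixelColor(x, random_cloud_color())
        strip.show()
        time.sleep(random.uniform(0.05, 0.25))   # irregular timing
except KeyboardInterrupt:
    for x in range(LED_COUNT):                   # blank the strip on Ctrl+C
        strip.setPixelColor(x, Color(0, 0, 0))
    strip.show()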

Related

What is an algorithm for displaying rpm on a set of LEDs?

So I have an Arduino that is reading the rpm from my car. I also have a 15-pixel NeoPixel strip connected to the Arduino.
What I want the Arduino to do is make the LED strip show the rpm. As the rpm increases, the number of LEDs that turn on increases from the left side of the strip.
What I am stuck on, and would love help with, is that I don't just want the LEDs to turn on or off based on the rpm, but also change brightness.
For example, let's say the rpm is at 2000, so pixel 4 (an arbitrary number that will be calculated by the equation) is the top pixel to turn on. Now, as the rpm increases, pixel 5 will increase in brightness from 0 to 255. Then pixel 6 will increase in brightness, and so on, creating a smooth transition between pixels.
So what I want help with is being able to input the rpm and output the top pixel and its brightness. From there I will be able to just fill in the LEDs below the top pixel.
I want the top rpm to be 8000.
Let me know if you need more info. Thanks!
Does this code help? It calculates the last pixel to light and the brightness of that pixel.
#define NUM_LEDS 15
#define NUM_BRIGHTNESS_LEVELS 256
#define MAX_REVS 8000
void setup()
{
    Serial.begin(9600);
}

void loop()
{
    for ( int revs = 0 ; revs <= MAX_REVS ; revs += 500 )
    {
        int totalBrightness = ((float)revs / MAX_REVS) * (NUM_LEDS * NUM_BRIGHTNESS_LEVELS);
        int lastPixel = (totalBrightness / NUM_BRIGHTNESS_LEVELS);
        int brightness = totalBrightness % NUM_BRIGHTNESS_LEVELS;
        if ( lastPixel >= NUM_LEDS )
        {
            lastPixel = NUM_LEDS - 1;
            brightness = NUM_BRIGHTNESS_LEVELS - 1;
        }
        Serial.print("revs = ");
        Serial.print(revs);
        Serial.print(", pixel = ");
        Serial.print(lastPixel);
        Serial.print(", brightness = ");
        Serial.println(brightness);
        delay(100);
    }
    delay(2000);
}
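For anyone doing the same thing from Python instead of Arduino, the identical mapping looks like the sketch below (my own illustration; the fill-in behaviour below the top pixel is my reading of the question, and driving an actual strip is left out):

NUM_LEDS = 15
NUM_BRIGHTNESS_LEVELS = 256
MAX_REVS = 8000

def rpm_to_pixels(revs):
    """Return (last_pixel, brightness_of_last_pixel) for a given rpm."""
    total_brightness = int(revs / MAX_REVS * (NUM_LEDS * NUM_BRIGHTNESS_LEVELS))
    last_pixel = total_brightness // NUM_BRIGHTNESS_LEVELS
    brightness = total_brightness % NUM_BRIGHTNESS_LEVELS
    if last_pixel >= NUM_LEDS:
        last_pixel = NUM_LEDS - 1
        brightness = NUM_BRIGHTNESS_LEVELS - 1
    return last_pixel, brightness

def pixel_levels(revs):
    """Per-pixel brightness: full below the top pixel, partial at the top."""
    last_pixel, brightness = rpm_to_pixels(revs)
    levels = [0] * NUM_LEDS
    for i in range(last_pixel):
        levels[i] = NUM_BRIGHTNESS_LEVELS - 1
    levels[last_pixel] = brightness
    return levels

for revs in range(0, MAX_REVS + 1, 500):
    print(revs, rpm_to_pixels(revs))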

LMS beamforming in Julia

I have been trying to implement a simple LMS adaptive beamforming code. Since I don't have a Matlab license, I decided to use Julia, since the two are quite similar. In order to get a basic code working, I implemented the MVDR beamforming example found on Matlab's website (I can't seem to find the link right now). Then I used the link https://teaandtechtime.com/adaptive-beamforming-with-lms/ to get LMS going.
My code at the moment is
using Plots
using LinearAlgebra
# Source: https://teaandtechtime.com/adaptive-beamforming-with-lms/
M = 20; # Number of Array Elements.
N = 200; # Number of Signal Samples.
n = 1:N; # Time Sample Index Vector.
c = 3*10^8; # Speed of light
f = 2.4*10^9; # Frequency [Hz]
lambda = c/f; # Incoming Signal Wavelength in [m].
d = lambda/2; # Interelement Distance in [m].
SNR = 20; # Target SNR in dBs.
phi_s = 0; # Target azimuth angle in degrees.
phi_i1 = 20; # Interference angle in degrees.
phi_i2 = -30; # Interference angle in degrees.
phi_i3 = 50; # Interference angle in degrees.
INR1 = 35; # Interference #1 INR in dBs.
INR2 = 70; # Interference #2 INR in dBs.
INR3 = 50; # Interference #3 INR in dBs.
u_s = (d/lambda)*sin(phi_s*pi/180); # Normalized Spatial Frequency of the Target signal.
u_int1 = (d/lambda)*sin(phi_i1*pi/180); # Normalized Spatial Frequency of the Interferer #1.
u_int2 = (d/lambda)*sin(phi_i2*pi/180); # Normalized Spatial Frequency of the Interferer #2.
u_int3 = (d/lambda)*sin(phi_i3*pi/180); # Normalized Spatial Frequency of the Interferer #3.
tau_s = (d/c)*sin(phi_s*pi/180); # Time delay of the Target signal.
tau1 = (d/c)*sin(phi_i1*pi/180); # Time delay of the Interferer #1.
tau2 = (d/c)*sin(phi_i2*pi/180); # Time delay of the Interferer #2.
tau3 = (d/c)*sin(phi_i3*pi/180); # Time delay of the Interferer #3.
# Target Signal definition.
s = zeros(ComplexF64,M,N)
v_s = exp.(-1im*2*pi*u_s*collect(0:M-1))/sqrt(M); # Target Steering Vector.
for k=1:N
s[:,k] = 10^(SNR/20)*v_s; # Amplitude of Target Signal Generation.
end
# The uncorrelated unit power thermal noise samples with a Gaussian
# distribution are generated by:
w = (randn(M,N)+1im*randn(M,N))/sqrt(2)
# The interference [jammer] vectors are generated by:
v_i1 = exp.(-1im*2*pi*u_int1*collect(0:M-1))/sqrt(M)
i_x1 = 10^(INR1/20)*v_i1*(randn(1,N)+1im*randn(1,N))/sqrt(2)
v_i2 = exp.(-1im*2*pi*u_int2*collect(0:M-1))/sqrt(M)
i_x2 = 10^(INR2/20)*v_i2*(randn(1,N)+1im*randn(1,N))/sqrt(2)
v_i3 = exp.(-1im*2*pi*u_int3*collect(0:M-1))/sqrt(M)
i_x3 = 10^(INR3/20)*v_i3*(randn(1,N)+1im*randn(1,N))/sqrt(2)
# The three signals are added to produce the overall array signal.
x = s + i_x1 + i_x2 + i_x3 + w
# Run LMS algorithm
mu = 0.001; # LMS step size
a = ones(ComplexF64,M); # Complex weights
S = zeros(ComplexF64,M); # Complex weights
ts = 1/(N*f); # sample time
for k = 1:N
d = cos(2*pi*f*k*ts); # Reference Signal
S = a.*x[:,k];
y = sum(S);
global e = conj(d) - y;
println(e)
global a += mu*x[:,k]*e; # next weight calculation
end
println(a)
# Array Response
Nsamples1 = 3*10^4
angle1 = -90:180/Nsamples1:90-180/Nsamples1
LMS_Beam_Pat = zeros(ComplexF64,Nsamples1)
for k = 1:Nsamples1
u = (d/lambda)*sin(angle1[k]*pi/180)
v = exp.(-1im*2*pi*u*collect(0:M-1))/sqrt(M); # Azimuth Scanning Steering Vector.
LMS_Beam_Pat[k] = a'*v;
end
# Plot the corresponding Beampatterns.
display(plot(angle1,10*log10.(abs.(LMS_Beam_Pat).^2),xlims=(-90,90),ylims=(-100,0)))
sleep(10)
# PolardB plot
display(plot(angle1*pi/180,10*log10.(abs.(LMS_Beam_Pat).^2), proj=:polar, lims=(-80,0)))
sleep(10)
The LMS code does not converge (rather, it diverges) and I don't know why. I also don't understand the reference signal part and how it is different from the target steering vector. Perhaps some clarification of the general concepts would be really helpful. I am new to beamforming and my background is in root solvers and such.
Below is the working Julia code that is rewritten from the Matlab example. It is almost identical to the code above but without the LMS section.
using Plots
using LinearAlgebra
M = 20; # Number of Array Elements.
N = 200; # Number of Signal Samples.
n = 1:N; # Time Sample Index Vector.
c = 3*10^8; # Speed of light
f = 2.4*10^9; # Frequency [Hz]
lambda = c/f; # Incoming Signal Wavelength in [m].
d = lambda/2; # Interelement Distance in [m].
SNR = 20; # Target SNR in dBs.
phi_s = 0; # Target azimuth angle in degrees.
phi_i1 = 20; # Interference angle in degrees.
phi_i2 = -30; # Interference angle in degrees.
phi_i3 = 50; # Interference angle in degrees.
INR1 = 35; # Interference #1 INR in dBs.
INR2 = 70; # Interference #2 INR in dBs.
INR3 = 50; # Interference #3 INR in dBs.
u_s = (d/lambda)*sin(phi_s*pi/180); # Normalized Spatial Frequency of the Target signal.
u_int1 = (d/lambda)*sin(phi_i1*pi/180); # Normalized Spatial Frequency of the Interferer #1.
u_int2 = (d/lambda)*sin(phi_i2*pi/180); # Normalized Spatial Frequency of the Interferer #2.
u_int3 = (d/lambda)*sin(phi_i3*pi/180); # Normalized Spatial Frequency of the Interferer #3.
tau_s = (d/c)*sin(phi_s*pi/180); # Time delay of the Target signal.
tau1 = (d/c)*sin(phi_i1*pi/180); # Time delay of the Interferer #1.
tau2 = (d/c)*sin(phi_i2*pi/180); # Time delay of the Interferer #2.
tau3 = (d/c)*sin(phi_i3*pi/180); # Time delay of the Interferer #3.
# Target Signal definition.
s = zeros(ComplexF64,M,N)
v_s = exp.(-1im*2*pi*u_s*collect(0:M-1))/sqrt(M); # Target Steering Vector.
for k=1:N
s[:,k] = 10^(SNR/20)*v_s; # Amplitude of Target Signal Generation.
end
# The uncorrelated unit power thermal noise samples with a Gaussian
# distribution are generated by:
w = (randn(M,N)+1im*randn(M,N))/sqrt(2)
# The interference [jammer] vectors are generated by:
v_i1 = exp.(-1im*2*pi*u_int1*collect(0:M-1))/sqrt(M)
i_x1 = 10^(INR1/20)*v_i1*(randn(1,N)+1im*randn(1,N))/sqrt(2)
v_i2 = exp.(-1im*2*pi*u_int2*collect(0:M-1))/sqrt(M)
i_x2 = 10^(INR2/20)*v_i2*(randn(1,N)+1im*randn(1,N))/sqrt(2)
v_i3 = exp.(-1im*2*pi*u_int3*collect(0:M-1))/sqrt(M)
i_x3 = 10^(INR3/20)*v_i3*(randn(1,N)+1im*randn(1,N))/sqrt(2)
#The three signals are added to produce the overall array signal.
x = s + i_x1 + i_x2 + i_x3 + w
iplusn = i_x1 + i_x2 + i_x3 + w
# Calculation of the i+n autocorrelation matrix.
R_ipn = 10^(INR1/10)*(v_i1*v_i1') + 10^(INR2/10)*(v_i2*v_i2') + 10^(INR3/10)*(v_i3*v_i3') + I
InvR = inv(R_ipn)
# Calculate the Beam Patterns.
# MVDR Optimum Beamformer computed for a phi_s = 0 deg.
c_opt = InvR*v_s/(v_s'*InvR*v_s);
# Spatial Matched Filter | Steering Vector Beamformer Eq. (11.2.16).
c_mf = v_s
Nsamples1 = 3*10^4
angle1 = -90:180/Nsamples1:90-180/Nsamples1
Opt_Beam_Pat = zeros(ComplexF64,Nsamples1)
Conv_Beam_Pat = zeros(ComplexF64,Nsamples1)
for k = 1:Nsamples1
u = (d/lambda)*sin(angle1[k]*pi/180)
v = exp.(-1im*2*pi*u*collect(0:M-1))/sqrt(M); # Azimuth Scanning Steering Vector.
Opt_Beam_Pat[k] = c_opt'*v
Conv_Beam_Pat[k] = c_mf'*v
end
# Plot the corresponding Beampatterns.
plot(angle1,10*log10.(abs.(Conv_Beam_Pat).^2))
display(plot!(angle1,10*log10.(abs.(Opt_Beam_Pat).^2),xlims=(-90,90),ylims=(-100,0)))
sleep(10)
# PolardB plot
display(plot(angle1*pi/180,10*log10.(abs.(Opt_Beam_Pat).^2), proj=:polar, lims=(-80,0)))
sleep(10)
# Calculate the SINR loss factor for the Optimum Beamformer:
Nsamples = 3*10^4; # The resolution must be very fine to reveal the true depth of the notches.
Lsinr_opt = zeros(ComplexF64,Nsamples,1);
Lsinr_mf = zeros(Nsamples,1);
SNR0 = M*10^(SNR/10);
angle = -90:180/Nsamples:90-180/Nsamples;
for k=1:Nsamples
v = exp.(-1im*pi*collect(0:M-1)*sin(angle[k]*pi/180))/sqrt(M); # Azimuth Scanning Steering Vector.
c_mf = v; # This is the spatial matched filter beamformer.
Lsinr_opt[k] = v'*InvR*v;
SINRout_mf = real(M*(10^(SNR/10))*(abs(c_mf'*v)^2)/(c_mf'*R_ipn*c_mf));
Lsinr_mf[k] = SINRout_mf/SNR0;
end
plot(angle,10*log10.(abs.(Lsinr_opt)),xlims=(-90,0));
display(plot!(angle,10*log10.(abs.(Lsinr_mf)),xlims=(-90,90),ylims=(-75,5)));
sleep(10)
A good practice would be to build the simulation iteratively.
One should be aware that the adaptive filter adapts to the reference signal while all other data acts as interference. The more correlated the interfering data is with the reference, the harder it is to adapt to it.
So the reference signal should be something you can filter out of the signal at the sensors. For example, it can be a non-delayed version of the target signal, or the signal that one of the sensors receives.
To make things simpler, I created an example with real signals which basically arrive with a different delay at each sensor.
The simplest way to add a delay to a real signal is by adjusting the phase of its analytic signal and taking the real part.
Then we basically have N snapshots of M samples each which go into the adaptive filter, which applies an inner product to each snapshot of this M x N data matrix.
I created a simple simulation both in MATLAB and Julia.
This is the result from the Julia code: the antenna gain for a target signal arriving at 30 [Deg] and interference at [20.0; -35.0; 50.0].
As can be seen, the array does adapt and has a gain of ~1 in the reference direction while rejecting all other directions.
The full code is available on my StackExchange Signal Processing Q81138 GitHub Repository (Look at the SignalProcessing\Q81138 folder).
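For readers who want something self-contained without cloning the repository, here is a minimal NumPy sketch of the same idea (my own illustration, not the repository code): the array data are built by phase-shifting a narrowband carrier per sensor (the analytic-signal delay mentioned above), the reference is the undelayed carrier, and the weights follow the complex LMS update w <- w + mu * x * conj(e). The angles, amplitudes, and step size are arbitrary choices.

import numpy as np

M, N = 8, 5000                          # sensors, time samples
d_over_lambda = 0.5                     # half-wavelength element spacing
theta_s = 30.0                          # target angle [deg]
thetas_i = [20.0, -35.0, 50.0]          # interferer angles [deg]

def steering(theta_deg):
    # Narrowband steering vector of a uniform linear array.
    u = d_over_lambda * np.sin(np.deg2rad(theta_deg))
    return np.exp(-2j * np.pi * u * np.arange(M))

rng = np.random.default_rng(0)
carrier = np.exp(2j * np.pi * 0.05 * np.arange(N))      # narrowband reference

# Snapshots: target + interferers + noise; each source is "delayed" per sensor
# by the phase of its steering vector (the analytic-signal trick above).
X = np.outer(steering(theta_s), carrier)
for th in thetas_i:
    X += 5.0 * np.outer(steering(th), rng.standard_normal(N) * carrier)
X += 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

# Complex LMS: make w^H x[n] track the undelayed reference carrier.
mu = 2e-4
w = np.zeros(M, dtype=complex)
for n in range(N):
    x = X[:, n]
    e = carrier[n] - np.vdot(w, x)      # error = reference - w^H x
    w += mu * x * np.conj(e)            # LMS weight update

# The adapted gain should be roughly 1 toward the target and much
# smaller toward the interferers.
for th in [theta_s] + thetas_i:
    print(f"angle {th:6.1f} deg -> |w^H v| = {abs(np.vdot(w, steering(th))):.3f}")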

How to change backlight color of 2x16 LCD display

I am using a 2x16 LCD display with 16 header pins together with a Raspberry Pi 3. To display messages I installed and configured the Adafruit Char LCD library, and it works fine.
Currently the default backlight color is yellow, and I want to change it to other colors like blue or red.
For this, I imported Adafruit_RGBCharLCD.
The class Adafruit_RGBCharLCD from the Adafruit Char LCD library is as follows:
class Adafruit_RGBCharLCD(Adafruit_CharLCD):
    """Class to represent and interact with an HD44780 character LCD display with
    an RGB backlight."""

    def __init__(self, rs, en, d4, d5, d6, d7, cols, lines, red, green, blue,
                 gpio=GPIO.get_platform_gpio(),
                 invert_polarity=True,
                 enable_pwm=False,
                 pwm=PWM.get_platform_pwm(),
                 initial_color=(1.0, 1.0, 1.0)):
        """Initialize the LCD with RGB backlight. RS, EN, and D4...D7 parameters
        should be the pins connected to the LCD RS, clock enable, and data line
        4 through 7 connections. The LCD will be used in its 4-bit mode so these
        6 lines are the only ones required to use the LCD. You must also pass in
        the number of columns and lines on the LCD.
        The red, green, and blue parameters define the pins which are connected
        to the appropriate backlight LEDs. The invert_polarity parameter is a
        boolean that controls if the LEDs are on with a LOW or HIGH signal. By
        default invert_polarity is True, i.e. the backlight LEDs are on with a
        low signal. If you want to enable PWM on the backlight LEDs (for finer
        control of colors) and the hardware supports PWM on the provided pins,
        set enable_pwm to True. Finally you can set an explicit initial backlight
        color with the initial_color parameter. The default initial color is
        white (all LEDs lit).
        You can optionally pass in an explicit GPIO class,
        for example if you want to use an MCP230xx GPIO extender. If you don't
        pass in an GPIO instance, the default GPIO for the running platform will
        be used.
        """
        super(Adafruit_RGBCharLCD, self).__init__(rs, en, d4, d5, d6, d7,
                                                  cols,
                                                  lines,
                                                  enable_pwm=enable_pwm,
                                                  backlight=None,
                                                  invert_polarity=invert_polarity,
                                                  gpio=gpio,
                                                  pwm=pwm)
        self._red = red
        self._green = green
        self._blue = blue
        # Setup backlight pins.
        if enable_pwm:
            # Determine initial backlight duty cycles.
            rdc, gdc, bdc = self._rgb_to_duty_cycle(initial_color)
            pwm.start(red, rdc)
            pwm.start(green, gdc)
            pwm.start(blue, bdc)
        else:
            gpio.setup(red, GPIO.OUT)
            gpio.setup(green, GPIO.OUT)
            gpio.setup(blue, GPIO.OUT)
            self._gpio.output_pins(self._rgb_to_pins(initial_color))

    def _rgb_to_duty_cycle(self, rgb):
        # Convert tuple of RGB 0-1 values to tuple of duty cycles (0-100).
        red, green, blue = rgb
        # Clamp colors between 0.0 and 1.0
        red = max(0.0, min(1.0, red))
        green = max(0.0, min(1.0, green))
        blue = max(0.0, min(1.0, blue))
        return (self._pwm_duty_cycle(red),
                self._pwm_duty_cycle(green),
                self._pwm_duty_cycle(blue))

    def _rgb_to_pins(self, rgb):
        # Convert tuple of RGB 0-1 values to dict of pin values.
        red, green, blue = rgb
        return { self._red: self._blpol if red else not self._blpol,
                 self._green: self._blpol if green else not self._blpol,
                 self._blue: self._blpol if blue else not self._blpol }

    def set_color(self, red, green, blue):
        """Set backlight color to provided red, green, and blue values. If PWM
        is enabled then color components can be values from 0.0 to 1.0, otherwise
        components should be zero for off and non-zero for on.
        """
        if self._pwm_enabled:
            # Set duty cycle of PWM pins.
            rdc, gdc, bdc = self._rgb_to_duty_cycle((red, green, blue))
            self._pwm.set_duty_cycle(self._red, rdc)
            self._pwm.set_duty_cycle(self._green, gdc)
            self._pwm.set_duty_cycle(self._blue, bdc)
        else:
            # Set appropriate backlight pins based on polarity and enabled colors.
            self._gpio.output_pins({self._red: self._blpol if red else not self._blpol,
                                    self._green: self._blpol if green else not self._blpol,
                                    self._blue: self._blpol if blue else not self._blpol })

    def set_backlight(self, backlight):
        """Enable or disable the backlight. If PWM is not enabled (default), a
        non-zero backlight value will turn on the backlight and a zero value will
        turn it off. If PWM is enabled, backlight can be any value from 0.0 to
        1.0, with 1.0 being full intensity backlight. On an RGB display this
        function will set the backlight to all white.
        """
        self.set_color(backlight, backlight, backlight)
and I am trying to use lcd.set_color() as follows, but it's not working:
import time
from Adafruit_CharLCD import Adafruit_RGBCharLCD

# instantiate lcd and specify pins
lcd = Adafruit_RGBCharLCD(rs=26, en=19,
                          d4=13, d5=6, d6=5, d7=11,
                          cols=16, lines=2, red=True, Green=True, Blue=True)
lcd.clear()
# setting backlight color as blue
lcd.set_color(0, 0, 100)
# display text on LCD display, \n = new line
lcd.message('2x16 CharLCD\n Raspberry Pi')
I am using 4-bit mode and attached the last two pins, which are the backlight pins, to the Raspberry Pi GPIO header as follows:
Backlight pins:
15 (LED+ or A) -> GPIO header pin 2 (5 V power)
16 (LED- or K) -> GPIO header pin 6 (GND)
Please help me set customized colors for the backlight, as I am new to all this.
If your display has only a single pair of backlight (LED) pins, you can only switch the backlight ON and OFF, unless there is some voltage regulation available, which I doubt.
Usually, displays based on the HD44780 chip do not support backlight color changes. You can only switch the backlight ON and OFF.
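Just to make the ON/OFF point concrete, here is a minimal sketch assuming you move LED+ (pin 15) off the 5 V rail and drive it from a spare GPIO through a suitable current-limiting resistor or transistor; the BCM pin number below is an assumption:

import time
import RPi.GPIO as GPIO

BACKLIGHT_PIN = 21          # assumed BCM pin driving LED+ (via resistor/transistor)

GPIO.setmode(GPIO.BCM)
GPIO.setup(BACKLIGHT_PIN, GPIO.OUT)

# Blink the backlight: this is all a single-colour HD44780 backlight can do.
for _ in range(5):
    GPIO.output(BACKLIGHT_PIN, GPIO.HIGH)   # backlight on
    time.sleep(1)
    GPIO.output(BACKLIGHT_PIN, GPIO.LOW)    # backlight off
    time.sleep(1)

GPIO.cleanup()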

How to create 2-D arrays in systemverilog and call the elements in this array later?

I am writing a little game on an FPGA in SystemVerilog and I want to show some small pictures through a VGA display. My picture size is 35px x 20px. I converted the picture into three separate arrays (R, G and B values) by using Matlab. I do not know how I can create similar arrays in SystemVerilog and then access their elements later. Below is my current code, where I have made white dots and red dots. I want to replace these dots with the pictures I mentioned above. Thanks!
begin:RGB_Display
    if (missile_On == 1'b1)
    begin
        Red = 8'h00;
        Green = 8'hff;
        Blue = 8'h3f;
    end
    else if (ourMissileOn == 1'b1)
    begin
        Red = 8'hff;
        Green = 8'hff;
        Blue = 8'h00;
    end
    else if ((ball_on == 1'b1))
    begin
        Red = 8'hff;
        Green = 8'hff;
        Blue = 8'hff;
    end
    else if ((enemyAppear == 1'b1))
    begin
        Red = 8'hff;
        Green = 8'h00;
        Blue = 8'h2f;
    end
    else
    begin
        Red = 8'h3f;
        Green = 8'h00;
        Blue = 8'h3f; //- DrawX[9:3];
    end
end
You can store the color information for one pixel in a 24-bit register as
24'hRRGGBB (RR - Red, GG - Green, BB - Blue).
35px x 20px = 700px, and 700 * 3 bytes = 2100 bytes.
The first 35 * 3 = 105 bytes of memory are the first row of the image.
The second 35 * 3 = 105 bytes of memory are the second row of the image.
....
You can use FPGA Block RAM for this.
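Since the picture was already converted with Matlab, the packing step can also be scripted anywhere. Below is a hedged Python sketch (the file names and the one-24-bit-word-per-pixel, row-major layout are my assumptions) that writes a hex file; on the SystemVerilog side such a file is typically loaded into a memory array with $readmemh and indexed with y*WIDTH + x, which synthesis tools can map onto Block RAM.

from PIL import Image   # pip install pillow

IMG_FILE = "sprite_35x20.png"   # assumed input picture, 35 x 20 pixels
MEM_FILE = "sprite_35x20.mem"   # output for $readmemh

img = Image.open(IMG_FILE).convert("RGB")
width, height = img.size        # expected: 35 x 20

with open(MEM_FILE, "w") as f:
    # Row-major order: pixel (x, y) ends up at address y*width + x.
    for y in range(height):
        for x in range(width):
            r, g, b = img.getpixel((x, y))
            f.write(f"{r:02x}{g:02x}{b:02x}\n")   # one 24-bit word per line

print(f"wrote {width * height} pixels to {MEM_FILE}")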

Transition shader

I have created a transition shader.
This is what it does:
On each update, the color that acts as the alpha threshold changes.
Then a check is performed for each pixel:
If the color of the pixel is greater than the 'alpha' value,
set this pixel to transparent.
Else, if the color of the pixel is greater than the 'alpha' value - 50,
set this pixel to partly transparent.
Else,
set the color to black.
EDIT (DELETED OLD PARTS):
I tried converting my GLSL into AGAL (using http://cmodule.org/glsl2agal):
Fragment shader:
const float alpha = 0.8;
varying vec2 TexCoord; //not used but required for converting
uniform sampler2D transition;//not used but required for converting
void main()
{
    vec4 color = texture2D(transition, TexCoord.st); //not used but required for converting
    color.a = float(color.r < alpha);
    if(color.r >= (alpha - 0.1)){
        color.a = 0.2 * (color.r - alpha - 0.1);
    }
    gl_FragColor = vec4(0, 0, 0, color.a);
}
And I've customized the output and added that to a (custom) Starling filter:
var fragmentShader:String =
"tex ft0, v0, fs0 <2d, clamp, linear, mipnone> \n" + // copy color to ft0
"slt ft0.w, ft0.x, fc0.x \n" + // alpha = red < inputAlpha
"mov ft0.xyz, fc1.xyzz \n" + // set color to black
"mov oc, ft0";
mShaderProgram = target.registerProgramFromSource(PROGRAM_NAME, vertexShader, fragmentShader);
It works, and when I set the filter's alpha, the effect updates. The only thing left is the partial transparency, but I have no idea how I can do that.
Swap the loops on the Y and X coordinates. By using X in the inner loop you make better use of the L1 cache and the prefetcher of the CPU.
Some minor hints:
Remove the zeros for cleaner code:
const c:uint = a << 24
Verify that 255/50 is collapsed into a single constant by the compiler.
Don't do it with BitmapData once you're using Starling.
I didn't get whether you're grayscaling it yourself or not. If not, just create a Starling filter for grayscale (the pixel shader below will do the trick):
tex ft0, v0, fs0 <2d,linear,clamp>
add ft1.x, ft0.x, ft0.y
add ft1.x, ft1.x, ft0.z
div ft1.x, ft1.x, fc0.x
mov ft0.xyz, ft1.xxx
mov oc ft0
And for the alpha transition, just extend the Image class, implement IAnimatable, and add it to the Juggler. In advanceTime just do a this.alpha -= VALUE;
Simple as that :)
Just going to elaborate a bit on Paxel's answer. I discussed the L1 caching with another developer, Jackson Dunstan: where the speed improvement comes from, and what other improvements can be made to code like this to see a performance gain.
Jackson afterwards posted a blog entry which can be read here: Take Advantage of CPU caching.
I'll post some of the relevant items. First, the bitmap data is stored in memory by rows. The row memory addresses might look something like this:
row 1: 0 1 2 3 4 5
row 2: 6 7 8 9 10 11
row 3: 12 13 14 15 16 17
Now running your inner loop through the rows will let you leverage the L1 cache advantage, since you read the memory in order. So, looping X first in the inner loop, you'll read the first row as:
0 1 2 3 4 5
But if you were to do it Y first you'd read it as:
0 6 12 1 7 13
As you can see, you are bouncing around memory addresses, which makes it a slower process.
As for optimizations that could be made, the suggestion is to cache your width and height getters, storing the properties in local variables. Also, Math.round() is pretty slow; replacing it would give a speed increase.
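As a rough illustration of the same row-major effect outside ActionScript, here is a small Python sketch (my own, not from the blog post). On CPython the interpreter overhead hides much of the gap; in compiled code like the loop above the difference is far larger.

import time
import numpy as np

HEIGHT, WIDTH = 2000, 2000
# Row-major buffer: element (y, x) lives at offset y*WIDTH + x.
pixels = np.arange(HEIGHT * WIDTH, dtype=np.int64).reshape(HEIGHT, WIDTH)

def sum_x_inner():
    # Inner loop over X: reads consecutive memory addresses (cache friendly).
    total = 0
    for y in range(HEIGHT):
        for x in range(WIDTH):
            total += pixels[y, x]
    return total

def sum_y_inner():
    # Inner loop over Y: every read jumps a whole row ahead in memory.
    total = 0
    for x in range(WIDTH):
        for y in range(HEIGHT):
            total += pixels[y, x]
    return total

for fn in (sum_x_inner, sum_y_inner):
    start = time.perf_counter()
    fn()
    print(fn.__name__, f"{time.perf_counter() - start:.2f} s")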
