I'm trying to draw an event-sourcing decider with mermaid.js, based on something I've seen in a YouTube video: https://youtu.be/kgYGMVDHQHs?t=1572.
My attempt so far:
graph LR
Command(Command) --> Decide[[Decide]]
Decide --> Events(NewEvents)
Events --> Evolve[[Evolve]]
PreviousState(Previous State) --> Evolve[[Evolve]]
Evolve --> State(New or Initial State)
State --> PreviousState
State --> Decide
Also available on mermaid.live
Which gives something like below:
It looks really clunky. I'm wondering how the different elements can be positioned better so that it looks more like the first picture.
The video mentions a loop several times; perhaps the diagram can be designed like this?
Try this sample code in Mermaid Live Editor:
graph LR
Command --> Decide
Evolve --> Event
subgraph A[loop]
direction LR
Evolve --> |New or Initial State|Decide
Decide --> |New events|Evolve
end
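If you also want to keep the node shapes and labels from your original attempt, a variation along these lines should work (an untested sketch of the same loop idea):

graph LR
    Command(Command) --> Decide
    Evolve --> Events(New Events)
    subgraph Loop[loop]
        direction LR
        Evolve[[Evolve]] --> |New or Initial State|Decide[[Decide]]
        Decide --> |New events|Evolve
    end

The direction LR inside the subgraph keeps the Decide/Evolve cycle laid out horizontally, which is what stops the loop from folding back on itself.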
I have the following predicates that model a PC. What I want is to print all PC components and sub-components.
I want to do something like this
find_set_of_attributes(X):-
    print(X)
    print(X[power])
    print(X[power][usb])
pc([power]).
power([usb, voltageregulator]).
usb([container, usbhead, pins]).
voltageregulator([barreljack, capacitor, regulator, leds]).
One way to describe the computer components and sub-components is to use Definite Clause Grammars (DCGs). For example:
computer --> monitor, cpu, keyboard, mouse.
...
power --> usb, voltageregulator.
usb --> container, usbhead, pins.
voltageregulator --> barreljack, capacitor, regulator, leds.
Components without sub-components can be defined as:
capacitor --> [capacitor].
leds --> [leds].
To list all the parts (e.g. for printing), you can then simply use the de facto standard phrase/2 predicate. For example:
| ?- phrase(computer, Parts).
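To make this concrete for the power part of your facts: once the remaining terminal rules are added (same idea as above, with names taken straight from your predicates), I would expect a query on power to enumerate every leaf component:

container --> [container].
usbhead --> [usbhead].
pins --> [pins].
barreljack --> [barreljack].
regulator --> [regulator].

| ?- phrase(power, Parts).
Parts = [container, usbhead, pins, barreljack, capacitor, regulator, leds]

Backtracking over phrase/2 would enumerate alternative decompositions if you later add more than one rule for the same non-terminal.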
I am attempting to create a PID controller for an air hose and valve system. The system consists of an Omega FC-22 flow computer, a motorized valve, and a downstream sensor that measures cubic feet per minute (CFM). The control loop goes as follows: the user punches in the desired CFM -> the valve turns until the flow computer reads the desired CFM -> the valve corrects itself when the measured flow deviates from the desired CFM by more than about +/- 1 CFM. The upstream airflow is not part of the control loop; once it is on, the valve is used to regulate it.
I understand how PID controllers work in Simulink, but am not sure how to go from a transfer function and PID controller to the scenario above. Also, another issue I face is figuring out how to interface the Omega FC-22 with Simulink, which is a critical portion. Any and all help is appreciated.
I have attached the user guide for the Omega FC-22:
https://www.omega.com/manuals/manualpdf/M2572.pdf
You'll have to make Simulink write the CV (controller output) of the PID to the valve. Typically this is done via a 4-20 mA signal from the controller (the PC in your case). You'll probably have to check what types of input signals your valve supports and wire accordingly.
The PV (process variable) of the PID will come from the flow computer's Modbus RS-485 interface. The manual doesn't show a Modbus TCP or RS-232 option, so you'll have to use an RS-232/RS-485 or a USB/RS-485 adapter. Then read the appropriate flow value from the Modbus registers listed in Appendix C of the Omega manual.
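As a rough sketch of that Modbus read in MATLAB (Instrument Control Toolbox), assuming the USB/RS-485 adapter shows up as a serial port; the COM port, serial settings, and register address below are placeholders you'd replace with the values from the FC-22 manual:

% Connect to the FC-22 over Modbus RTU through the RS-485 adapter (settings assumed).
m = modbus('serialrtu', 'COM3', 'BaudRate', 9600);

% Hypothetical register address -- take the real one from Appendix C of the manual.
flowRegister = 1;
flow = read(m, 'holdingregs', flowRegister, 1);

% 'flow' is the PV; pass it into the Simulink model each sample, e.g. from a
% MATLAB Function block or by writing it to the workspace the model reads from.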
The set point of the PID should be easy enough. Just have a way for the user to enter it on the PC and send that value to your Simulink code.
I am currently developing a subset of the 6502 in LogiSim and at the current stage I am determining which parts to implement and what can be cut out. One of my main resources is Hanson's Block Diagram.
I am currently trying to determine how exactly the output register and its data path work. In this diagram, it looks to me like the data output register goes back onto the bus through the Input Data Latch, but also back into the instruction register.
This confuses me because usually the Address lines to the right of the diagram are sent back into the program memory (not pictured) and not back onto the bus as pictured.
How exactly does this data path work? As a follow-up, is it possible to simplify this area to only take the output and send it to a display, instead of back into the processor as pictured?
This confuses me because usually the Address lines to the right of the diagram are sent back into the program memory (not pictured) and not back onto the bus as pictured.
The address bus works differently from the data bus. The address bus is always Output, but the data bus can be Input or Output. We say that the databus is tristate; it either reads, or writes, or does neither. Each pin d0 thru d7 has a simple circuit involving a couple of transistors that controls this. In the case of the 6502, each and every cycle the CPU is either reading something or writing something. In other words, from the 6502's point of view, every cycle is either a read or write cycle.
I am currently trying to determine how exactly the output register and its data path work.
Have a look: the Input Data Latch and Predecode Register are loaded on each φ2, but the Output Data Register is loaded on each φ1. φ1 and φ2 are the two phases of the CPU clock. This arrangement leaves enough time for, say, a value to pass from the Input Data Latch, through the ALU, and into the Output Data Register.
The Data Output Register's output goes to the Data Bus Tristate Buffers. As you can see, that is controlled by R/W and also by φ2. If it's a read cycle, nothing happens there. But if it's a write cycle, the value in the Data Output Register (which was loaded on the previous φ1) is put onto the data bus. It will also get loaded into the Predecode Register and into the Input Data Latch.
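In logic terms, that gating could be sketched like this (a Verilog-flavoured illustration; the signal names are mine, not taken from the Hanson diagram):

module data_bus_tristate (
    input  wire       phi2,            // clock phase 2
    input  wire       write_cycle,     // derived from R/W: high on a write cycle
    input  wire [7:0] data_output_reg, // value loaded on the previous phi1
    inout  wire [7:0] data_pins        // the external data bus D0..D7
);
    // Drive the pins only during a write cycle while phi2 is high;
    // otherwise leave them high-impedance so memory can drive the bus.
    assign data_pins = (write_cycle & phi2) ? data_output_reg : 8'bz;
endmodule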
In this diagram, it looks to me like the data output register goes back onto the bus through the Input Data Latch, but also back into the instruction register.
Absolutely. Anything that the CPU outputs could also get loaded into the Input Data Latch and the Predecode Register. But that doesn't matter, since an instruction will always start with a read cycle, which is the opcode fetch, so the Input Data Latch and the Predecode Register will get overwritten then with the proper value.
How can I use EnvGen in a loop in such a way that it won't restart at every iteration of the loop?
What I need it for: piecewise synthesis. I want, e.g., 50 ms of an xfade between the first and second Klang, then a 50 ms xfade between the second and third Klang, then a 50 ms xfade between the third and fourth Klang, and so on, and I want this concatenation as a whole to be modulated by an envelope.
Unfortunately the EnvGen seems to restart from the beginning on every iteration of the loop that plays the consecutive Klang pairs. I want a poiiiiinnnnnnnnnng, but no matter what I try all I get is popopopopopopopopo.
2019 EDIT:
OK, since nobody would answer the "how to achieve the goal" question, I am now downgrading this question to a mere "why doesn't this particular approach work", changing the title too.
Before I paste some code, a bit of an explanation: this is a very simplified example. While my original desire was to modulate a complicated, piecewise-generated sound with an envelope, this simplified example only "scissors" 100ms segments out of the output of a SinOsc, just to artificially create the "piecewise generation" situation.
What happens in this program is that the EnvGen seems to restart at every loop iteration: the envelope restarts from t=0. I expect to get one 1s long exponentially fading sound, like plucking a string. What I get is a series of 100ms "pings" due to the envelope restarting at the beginning of each loop iteration.
How do I prevent this from happening?
Here's the code:
//Exponential decay over 1 second
var envelope = {EnvGen.kr(Env.new([1,0.001],[1],curve: 'exp'), timeScale: 1, doneAction: 2)};
var myTask = Task({
    //A simple tone
    var oscillator = {SinOsc.ar(880,0,1);};
    var scissor;
    //Prepare a scissor that will cut 100ms of the oscillator signal
    scissor = {EnvGen.kr(Env.new([1,0],[1],'hold'),timeScale: 0.1)};
    10.do({
        var scissored,modulated;
        //Cut the signal with the scissor
        scissored = oscillator*scissor;
        //Try modulating with the envelope. The goal is to get a single 1s exponentially decaying ping.
        modulated = {scissored*envelope};
        //NASTY SURPRISE: envelope seems to restart here every iteration!!!
        //How do I prevent this and allow the envelope to live its whole
        //one-second life while the loop and the Task dance around it in 100ms steps?
        modulated.play;
        0.1.wait;
    });
});
myTask.play;
(This issue, with which I initially struggled for MONTHS without success, actually caused me to shelve my efforts at learning SuperCollider for TWO YEARS, and now I'm picking up where I left off.)
Your way of working here is kind of unusual.
With SuperCollider, the paradigm shift you're looking for is to create SynthDefs as discrete entities:
s.waitForBoot({
    b = Bus.new('control');

    SynthDef(\scissors, {arg bus;
        var env;
        env = EnvGen.kr(Env.linen);
        //EnvGen.kr(Env.new([1,0.001],[1],curve: 'exp'), timeScale: 1, doneAction: 2);
        Out.kr(bus, env);
    }).add;

    SynthDef(\oscillator, {arg bus, out=0, freq=440, amp = 0.1;
        var oscillator, scissored;
        oscillator = SinOsc.ar(freq,0,1);
        scissored = oscillator * In.kr(bus) * amp;
        Out.ar(out, scissored);
    }).add;

    s.sync;

    Task({
        Synth(\scissors, [\bus, b]);
        s.sync;
        10.do({|i|
            Synth(\oscillator, [\bus, b, \freq, 100 * (i+1)]);
            0.1.wait;
        });
    }).play
});
I've switched to a longer envelope and varied the pitch, so you can hear each of the oscillators start.
What I've done is I've defined two SynthDefs and a bus.
The first SynthDef has an envelope, which I've lengthened for purposes of audibility. It writes the value of that envelope out to a bus. This way, every other SynthDef that wants to use that shared envelope can get it by reading the bus.
The second SynthDef has a SinOsc. We multiply the output of that by the bus input. This uses the shared envelope to change the amplitude.
This "works", but if you run it a second time, you'll get another nasty surprise! The oscillator SynthDefs haven't ended and you'll hear them again. To solve this, you'll need to give them their own envelopes or something else with a doneAction. Otherwise, they'll live forever.Putting envelopes on each individual oscillator synth is also a good way to shape the onset of each one.
The other new thing you might notice in this example is the s.sync; lines. A major feature of SuperCollider is that the audio server and the language are separate processes. That line makes sure the server has caught up, so we don't try to use server-side resources before they're ready. This client/server split is also why it's best to define synthdefs before using them.
I hope that the long wait for an answer has not turned you off permanently. You may find it helpful to look at some tutorials and get started that way.
Each module can be considered to have the following capabilities:
[1] It can store data.
[2] It can operate on the data (arithmetic operations).
Some properties of the modules (listing just the ones I am concerned with right now):
[1] All register/memory elements in the modules are RISING-edge triggered.
Now this architecture can be used to create a model of a computer processor.
Real Deal:
Is it necessary for the "control unit next state register" to be FALLING-edge triggered?
(Below I explain why I think so.)
CLOCK:
        ______        ______        ______        ______
_______|      |______|      |______|      |______|      |____

      |----|
Data should be valid at least in this region (considering the setup/hold time).

   ______________
__|              |____________________________________________

So the write signal should be up (if the control unit wants to write) in this region.
These control signals are just the combinational result of the input and the CURRENT STATE.
So as the current state changes, the control signals change, which implies the state should change at the falling edge.
So a change of state is simply a change in the "control unit state register", which happens at the falling edge of the clock.
That's why I ask whether it is necessary for the "control unit next state register" to be FALLING-edge triggered. Am I thinking/considering things right?
If yes, then the same thing (a falling-edge-triggered control unit state register) should be happening in actual processors as well.
I am still learning, so please forgive and correct my mistakes.
A common way to handle this is to consider the rising edge of the clock to trigger the “fetch” cycle, and the falling edge to trigger the “execute” cycle.
During “fetch” the memory address is incremented and data from memory is allowed to stabilize and propagate to the control circuits (such as the ALU’s settings, demultiplexers to control things, multiplexers to sample states for conditional tests, shift-logic setup, etc.).
During “execute” the things being controlled by the control circuit outputs are triggered (i.e. the test state being read by a multiplexer would be tested, and if true a branch might be taken by loading the program counter with the branch address, so that during the next fetch cycle the system would load the next instruction from the branch address instead of simply incrementing to the next address in memory).
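A Verilog-flavoured sketch of that split, just to illustrate the idea (the signal names and widths are assumed, not taken from any particular processor):

module two_phase_example (
    input  wire       clk,
    input  wire [7:0] datapath_in,   // e.g. an ALU result
    input  wire [3:0] next_state,    // combinational function of inputs and current_state
    output reg  [7:0] datapath_reg,
    output reg  [3:0] current_state  // the "control unit state register"
);
    // Datapath registers capture on the rising edge.
    always @(posedge clk)
        datapath_reg <= datapath_in;

    // The control-unit state register updates on the falling edge, so the
    // control signals derived from current_state have half a cycle to settle
    // before the next rising edge clocks the datapath.
    always @(negedge clk)
        current_state <= next_state;
endmodule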
ANSWERED by: a generous man "BL" (name initials)