Proteus error: logic race conditions detected during transient analysis - divide

I'm trying to design a simple ALU that takes two 5-bit numbers and returns the result of adding, subtracting, multiplying, or dividing them.
Using ICs, it goes well; however, I want to design everything myself.
While designing the divider, I get the error:
logic race conditions detected during transient analysis
It happens when I try to implement a 4-bit magnitude comparator.
The module works properly on its own, but in this project it triggers the error.
I use this module multiple times, but only this part produces the error.
Please help me solve this error!

Related

OpenMDAO - information on cycles

In OpenMDAO, is there any way to get analytics about the execution of the nonlinear solvers within a coupled model (containing multiple cycles and subcycles), such as the number of iterations within each cycle and the execution time?
Though there is no specific functionality to get this exact data, you should be able to get the information you need from the case recorder data, which includes iteration counts and timestamps. You'd have to do a bit of analysis on the first/last case of a specific run of a solver to compute the run times. Iteration counts should be very straightforward.
This question seems closely related to another recently posted one, which did identify a bug in OpenMDAO (Issue #2453). Until that bug is fixed, you'll need to use the case names to separate out which cases belong to which cycles, since you can currently only add recorders to components/groups and not to the nested solvers themselves. But the naming of the cases should still allow you to pull out the data you need, along the lines of the sketch below.
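A minimal sketch (my own illustration, not from the answer) of pulling per-solver iteration counts and approximate run times out of recorded cases. The group name cycle1 is hypothetical, and attribute/argument names such as case.source and case.timestamp should be checked against your OpenMDAO version:

import openmdao.api as om
from collections import defaultdict

prob = om.Problem()
# ... build the coupled model with its cycles/subcycles here ...

# Attach a recorder to each group (cycle) of interest; recorders cannot
# currently be added to the nested solvers themselves.
recorder = om.SqliteRecorder('cases.sql')
prob.model.add_recorder(recorder)          # or e.g. prob.model.cycle1.add_recorder(recorder)

prob.setup()
prob.run_model()
prob.cleanup()

cr = om.CaseReader('cases.sql')

# Group recorded cases by the system that produced them (via the case source/name),
# then use the first/last timestamps to approximate the elapsed time per cycle.
by_source = defaultdict(list)
for case_id in cr.list_cases('root', recurse=True):
    case = cr.get_case(case_id)
    by_source[case.source].append(case)

for source, cases in by_source.items():
    elapsed = cases[-1].timestamp - cases[0].timestamp
    print(source, len(cases), 'iterations,', round(elapsed, 3), 's')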

How is cross validation implemented?

I'm currently trying to train a neural network using cross-validation, but I'm not sure I fully understand how cross-validation works. I understand the concept, but I can't yet see how the concept translates into a code implementation. The following is a description of what I've got implemented, which is more or less guesswork.
I split the entire data set into K folds, where one fold is the validation set, one fold is the testing set, and the data in the remaining folds are dumped into the training set.
Then, I loop K times, each time reassigning the validation and testing sets to other folds. Within each loop, I continuously train the network (update the weights) using only the training set until the error produced by the network meets some threshold. However, the error that is used to decide when to stop training is produced using the validation set, not the training set. After training is done, the error is once again produced, but this time using the testing set. This error from the testing set is recorded. Lastly, all the weights are re-initialized (using the same random number generator used to initialize them originally) or reset in some fashion to undo the learning that was done before moving on to the next set of validation, training, and testing sets.
Once all K loops finish, the errors recorded in each iteration of the K-loop are averaged.
I have bolded the parts I'm most confused about. Please let me know if I made any mistakes!
I believe your implementation of Cross Validation is generally correct. To answer your questions:
However, the error that is used to decide when to stop training is produced using the validation set, not the training set.
You want to use the error on the validation set because doing so reduces overfitting; this is the reason you always want to have a validation set. If you stopped on the training error instead, you could keep lowering the threshold and your algorithm would achieve a higher training accuracy than validation accuracy. However, this would generalize poorly to unseen examples in the real world, which are exactly what your validation set is supposed to model.
Lastly, all the weights are re-initialized (using the same random number generator used to initialize them originally) or reset in some fashion to undo the learning that was done before moving on to the next set of validation, training, and testing sets.
The idea behind cross validation is that each iteration is like training the algorithm from scratch. This is desirable since by averaging your validation score, you get a more robust value. It protects against the possibility of a biased validation set.
My only suggestion would be to not use a test set in your cross-validation scheme: since your validation set already models unseen examples, a separate test set during cross-validation is redundant. I would instead split the data into a training set and a test set before you start cross-validation, and then not touch the test set until you want an objective score for your algorithm.
You could use your cross-validation score as an indication of performance on unseen examples; however, I assume you will be choosing parameters based on this score, optimizing your model for your training set. Again, the possibility arises that this does not generalize well to unseen examples, which is why it is good practice to keep a separate, unseen test set that is only used after you have optimized your algorithm. A minimal sketch of this scheme follows.
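A small sketch of the suggested scheme (my own illustration, with scikit-learn's MLPClassifier standing in for "the network"; the dataset and hyperparameters are made up): hold out the test set first, cross-validate on the rest, and only touch the test set at the very end.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, KFold
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=500, random_state=0)

# 1) Split off the test set before cross-validation; don't touch it until the end.
X_trainval, X_test, y_trainval, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 2) K-fold CV on the remaining data: each fold plays the validation role once.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
val_scores = []
for train_idx, val_idx in kf.split(X_trainval):
    net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
    net.fit(X_trainval[train_idx], y_trainval[train_idx])   # fresh weights each fold
    val_scores.append(net.score(X_trainval[val_idx], y_trainval[val_idx]))

print('mean CV validation accuracy:', np.mean(val_scores))

# 3) Only after choosing hyperparameters: retrain on all train+validation data
#    and evaluate once on the held-out test set.
final_net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
final_net.fit(X_trainval, y_trainval)
print('test accuracy:', final_net.score(X_test, y_test))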

Determinism in tensorflow gradient updates?

So I have a very simple NN script written in TensorFlow, and I am having a hard time trying to trace down where some "randomness" is coming from.
I have recorded the
Weights,
Gradients,
Logits
of my network as I train, and for the first iteration, it is clear that everything starts off the same. I have a SEED value both for how data is read in, and a SEED value for initializing the weights of the net. Those I never change.
My problem is that on, say, the second iteration of every re-run I do, I start to see the gradients diverge (by a small amount, like 1e-6 or so). However, over time this of course leads to non-repeatable behaviour.
What might the cause of this be? I don't know where any possible source of randomness might be coming from...
Thanks
There's a good chance you could get deterministic results if you run your network on CPU (export CUDA_VISIBLE_DEVICES=), with a single thread in the Eigen thread pool (tf.Session(config=tf.ConfigProto(intra_op_parallelism_threads=1))), one Python thread (no multi-threaded queue runners, which you get from ops like tf.train.batch), and a single well-defined operation order. Also, using inter_op_parallelism_threads=1 may help in some scenarios.
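Putting those settings together (a sketch using the TF 1.x-era API referenced in the answer, not a guaranteed recipe):

import os
os.environ['CUDA_VISIBLE_DEVICES'] = ''   # hide GPUs so everything runs on CPU

import tensorflow as tf

config = tf.ConfigProto(
    intra_op_parallelism_threads=1,   # single thread inside each op (Eigen pool)
    inter_op_parallelism_threads=1)   # single thread scheduling ops
sess = tf.Session(config=config)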
One issue is that floating point addition/multiplication is non-associative, so one fool-proof way to get deterministic results is to use integer arithmetic or quantized values.
Barring that, you could isolate which operation is non-deterministic and try to avoid using that op. For instance, there's the tf.add_n op, which doesn't say anything about the order in which it sums its inputs, and different orders can produce different results.
Getting deterministic results is a bit of an uphill battle because determinism is in conflict with performance, and performance is usually the goal that gets more attention. An alternative to trying to get exactly the same numbers on reruns is to focus on numerical stability: if your algorithm is stable, then you will get reproducible results (i.e., the same number of misclassifications) even though the exact parameter values may be slightly different.
The TensorFlow reduce_sum op is specifically known to be non-deterministic. Furthermore, reduce_sum is used for calculating bias gradients.
This post discusses a workaround to avoid using reduce_sum (i.e., taking the dot product of any vector with a vector of all 1's gives the same result as reduce_sum); a sketch follows.
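A sketch of that workaround (my own illustration, TF 1.x-style placeholders; shapes are made up): summing along an axis via a matmul with a ones vector fixes the summation order.

import tensorflow as tf

x = tf.placeholder(tf.float32, shape=[None, 128])
ones = tf.ones([128, 1], dtype=tf.float32)

# Same values as tf.reduce_sum(x, axis=1, keepdims=True), but with a fixed order.
row_sums = tf.matmul(x, ones)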
I have faced the same problem.
The working solution for me was to:
1- Use tf.set_random_seed(1) so that all TF functions have the same seed on every new run.
2- Train the model using the CPU, not the GPU, to avoid non-deterministic GPU operations due to precision.

Racing/ S-R Circuits?

The following truth table resulted from the circuit below. An SR (NOR) latch is used. I have tried several times to trace through the circuit to see how the truth table values are produced, but it's not working. Can someone explain to me what is going on? This circuit was introduced in conjunction with racing, although I am not sure if it has anything to do with it.
NOTE: "CLOCK" appears as a straight line to show how its connected everything. It is a normal clock that oscillates between 1 and 0. (this is how my instructor drew it).
Strictly, this does belong on EE. The other questions you've found are likely to be old - before EE was established.
You should look at the 1-to-0 transitions of the clock. When that occurs and only when that occurs, the value currently on S is transferred to Q.
The race condition appears when the clock signal is delayed, even by the tiny amount of copper track between real components. The actual waveform is not an instantaneous 1-0 or 0-1; it ramps between the two values. A tiny variation between two components, one seeing the transition at say 2.7 V and the other at 2.5 V, would mean that the first component moves the value from S to Q fractionally before the second, so when the second component decides to transfer the value, it may see the value after the transfer has already occurred on the prior component. You therefore may have a race between the two. These delays can also be affected by supply-rail stability and temperature, so the whole arrangement can become unreliable if not carefully designed. The condition is often overcome by deliberately routing the clock so that it arrives at the last component in the chain first, giving that end of the chain a head start.
I've worked on systems where replacing a component with a faster version caused the circuit to stop working. The new component was working too fast for the remainder of the circuit - and you needed to deliberately select (or use factory-selected) slower versions.
On a related note: before hard drives became cheap, and floppy drives (you may need to Google that) before them, it was common to use cassette tapes (even more likely you'd need Google for those). Cheap and cheerful was best; if you used a professional-quality recorder/player, you'd often get unusable results.

Training and Validating Correctly With Encog

I think I'm doing something wrong with Encog. In all of the examples I've seen, they simply TRAIN until a certain training error is reached and then print the results. When is the gradient calculated and the weights of the hidden layers updated? Is this all contained within the training.iteration() function? This confuses me because, even though my TRAINING error keeps decreasing (which seems to imply that the weights are changing), I have not yet run a validation set through the network (which I broke off and separated from the training set when building the data at the beginning) to determine whether the validation error is still decreasing along with the training error.
I have also loaded the validation set into a trainer and run it through the network with compute(), but the validation error is always similar to the training error, so it's hard to tell whether it's the same error as in training. Meanwhile, the testing hit rate is less than 50% (expected if the network is not learning).
I know there are a lot of different types of backpropagation techniques, particularly the common one using gradient descent as well as resilient backpropagation. What part of the network are we expected to update manually ourselves?
In Encog, weights are updated during the Train.iteration method call. This includes all weights. If you are using a gradient-descent-type trainer (i.e. backprop, rprop, quickprop), then your neural network is updated at the end of each iteration call. If you are using a population-based trainer (i.e. a genetic algorithm, etc.), then you must call finishTraining so that the best population member can be copied back to the actual neural network that you passed to the trainer's constructor. Actually, it's always a good idea to call finishTraining after your iterations. Some trainers need it, others do not.
Another thing to keep in mind is that some trainers report the current error at the beginning of the call to iteration, and others at the end of the iteration (the improved error). This is for efficiency, to keep some of the trainers from having to iterate over the data twice. A sketch of the overall train-and-validate loop appears after the method list below.
Keeping a validation set to test your training is a good idea. A few methods that might be helpful to you:
BasicNetwork.dumpWeights - Displays the weights for your neural network. This allows you to see if they have changed.
BasicNetwork.calculateError - Pass a training set to this and it will give you the error.
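Encog itself is Java/.NET, so as a language-neutral illustration here is the same loop sketched in Python with scikit-learn standing in for the network and trainer (this is not the Encog API): each "iteration" updates all the weights, the validation error is computed from the same unchanged network, and training stops on the validation error.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=400, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.25, random_state=0)

net = MLPClassifier(hidden_layer_sizes=(16,), random_state=0)
for iteration in range(200):
    # analogous to train.iteration(): one pass that updates all the weights
    net.partial_fit(X_train, y_train, classes=np.unique(y))
    train_error = 1.0 - net.score(X_train, y_train)
    # analogous to computing the error on a separate validation set:
    # no weight updates happen here, it is evaluation only
    val_error = 1.0 - net.score(X_val, y_val)
    if val_error < 0.10:   # stop on the validation error, not the training error
        break

print('train error:', train_error, 'validation error:', val_error)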
