I'm making a smart home system using NodeMCU, and I need to store and retrieve data from the module. I used the following function:
function save_settings(name, value)
    file.remove(name)
    file.open(name, "w+")
    file.writeline(value)
    file.close()
end
It works, but it's slow, and the NodeMCU crashes if I trigger the function rapidly... sometimes requiring a filesystem format before it can be used again.
So my question is: is there any other way to make variables persistent between restarts?
I'm using the latest firmware, 0.9.6-dev_20150704, the float version (https://github.com/nodemcu/nodemcu-firmware/releases)
This code took 62-63 ms to complete at first and seems to add a few fractions of a millisecond with each successive run; after a few hundred executions it was up to almost 100 ms. It never crashed on me.
function save_setting(name, value)
    file.open(name, 'w') -- no file.remove() needed: opening with 'w' truncates the file
    file.writeline(value)
    file.close()
end

function read_setting(name)
    if file.open(name) ~= nil then
        local result = string.sub(file.readline(), 1, -2) -- strip the trailing newline
        file.close()
        return true, result
    else
        return false, nil
    end
end
startTime = tmr.now()
test1 = 1200
test2 = 15.7
test3 = 75
test4 = 15000001
save_setting('test1', test1)
save_setting('test2', test2)
save_setting('test3', test3)
save_setting('test4', test4)
exists1, test1 = read_setting('test1')
exists2, test2 = read_setting('test2')
exists3, test3 = read_setting('test3')
exists4, test4 = read_setting('test4')
completeTime = (tmr.now() - startTime) / 1000 -- tmr.now() is in microseconds
print('time to complete (ms):')
print(tostring(completeTime))
If you upgrade to the newer firmware (based on SDK 1.4.0), you can use the rtcmem memory slots:
local offset = 10                     -- which RTC user-memory slot to use
local val = rtcmem.read32(offset, 1)  -- read one 32-bit value from that slot
rtcmem.write32(offset, val + 1)       -- write it back, incremented
That memory is documented to persist through a deep-sleep cycle; I've found it also persists across both hardware and software resets (but not across power cycles).
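Note that the RTC slots hold undefined garbage after a power-up, so it's worth guarding the data with a magic value and only trusting it when the marker matches. A minimal sketch (the slot numbers, the magic constant, and the reset counter are my own illustrative choices, nothing the firmware mandates):

local MAGIC = 0x55AA55AA  -- arbitrary marker; any unlikely 32-bit value works
local MAGIC_SLOT, DATA_SLOT = 10, 11

local function load_counter()
    if rtcmem.read32(MAGIC_SLOT, 1) == MAGIC then
        return rtcmem.read32(DATA_SLOT, 1) -- RTC memory survived the reset
    end
    return 0 -- fresh power-up: RTC memory is undefined, start over
end

local function save_counter(n)
    rtcmem.write32(DATA_SLOT, n)
    rtcmem.write32(MAGIC_SLOT, MAGIC)
end

save_counter(load_counter() + 1) -- e.g. count resets since the last power-up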
I'm self-studying Python and this is my first real code.
I'm working on analyzing logs from our servers. Usually I need to analyze a full day of logs. I created a script (this is an example with simplified logic) just to check speed. With plain sequential code, analyzing 20 million rows takes about 12-13 minutes; I need 200 million rows in 5 minutes.
What I tried:
Multiprocessing (I hit an issue with shared memory and think I fixed it). But the result: 300K rows = 20 sec, no matter how many processes. (PS: I also need to control the process count in advance.)
Threading (I found it gives no speedup: 300K rows = 2 sec, but the plain code is the same, 300K = 2 sec).
asyncio (I thought the script was slow because it needs to read many files). Result is the same as threading: 300K = 2 sec.
In the end I think all three of my scripts are incorrect and don't work properly.
PS: I try to avoid specialized Python modules (like pandas), because they would make it harder to run on different servers; better to stick to common libraries.
Please help me check the first one, multiprocessing:
import csv
import os
from multiprocessing import Process, Queue, Value, Manager

file = {"hcs.log", "hcs1.log", "hcs2.log", "hcs3.log"}

def argument(m, a, n):
    proc_num = os.getpid()
    a_temp_m = a["vod_miss"]
    a_temp_h = a["vod_hit"]
    with open(os.getcwd() + '/' + m, newline='') as hcs_1:
        hcs_2 = csv.reader(hcs_1, delimiter=' ')
        for j in hcs_2:
            if j[3].find('MISS') != -1:
                a_temp_m[n] = a_temp_m[n] + 1
            elif j[3].find('HIT') != -1:
                a_temp_h[n] = a_temp_h[n] + 1
    a["vod_miss"][n] = a_temp_m[n]
    a["vod_hit"][n] = a_temp_h[n]

if __name__ == '__main__':
    procs = []
    manager = Manager()
    vod_live_cuts = manager.dict()
    i = "vod_hit"
    ii = "vod_miss"
    cpu = 1
    n = 1
    vod_live_cuts[i] = manager.list([0] * cpu)
    vod_live_cuts[ii] = manager.list([0] * cpu)
    for m in file:
        proc = Process(target=argument, args=(m, vod_live_cuts, (n - 1)))
        procs.append(proc)
        proc.start()
        if n >= cpu:
            n = 1
            proc.join()
        else:
            n += 1
    [proc.join() for proc in procs]
    [proc.close() for proc in procs]
I expect each file to be processed by an independent process via def argument, with all results finally saved in the dict vod_live_cuts. For each process I added an independent list to the dict; I thought that would avoid cross-process interference on this parameter. But maybe it's the wrong way :(
Using IPC is costly, so only use "shared objects" for saving the final result, not for intermediate results while parsing the file.
Limiting the number of processes is done with a multiprocessing.Pool; the following code uses one to reach the maximum hard-disk speed, so you only need to post-process the results.
You can only parse data as fast as your HDD can read it (typically 30-80 MB/s), so if you need to improve performance further you should use an SSD or RAID 0 for higher disk speed; you cannot get much faster than this without changing your hardware.
import csv
import os
from multiprocessing import Process, Queue, Value, Manager, Pool

file = {"hcs.log", "hcs1.log", "hcs2.log", "hcs3.log"}

def argument(m, a):
    proc_num = os.getpid()
    a_temp_m_n = 0  # make it local to the process,
    a_temp_h_n = 0  # as shared lists use IPC
    with open(os.getcwd() + '/' + m, newline='') as hcs_1:
        hcs_2 = csv.reader(hcs_1, delimiter=' ')
        for j in hcs_2:
            if j[3].find('MISS') != -1:
                a_temp_m_n = a_temp_m_n + 1
            elif j[3].find('HIT') != -1:
                a_temp_h_n = a_temp_h_n + 1
    a["vod_miss"].append(a_temp_m_n)
    a["vod_hit"].append(a_temp_h_n)

if __name__ == '__main__':
    manager = Manager()
    vod_live_cuts = manager.dict()
    i = "vod_hit"
    ii = "vod_miss"
    cpu = 1
    vod_live_cuts[i] = manager.list()
    vod_live_cuts[ii] = manager.list()
    with Pool(cpu) as pool:
        tasks = []
        for m in file:
            task = pool.apply_async(argument, args=(m, vod_live_cuts))
            tasks.append(task)
        for task in tasks:
            task.get()
    print(list(vod_live_cuts[i]))
    print(list(vod_live_cuts[ii]))
Code:
local DataStoreService = game:GetService("DataStoreService")
local InvDataStore = DataStoreService:GetDataStore("InvDataStore")

game.Players.PlayerAdded:Connect(function(player)
    local Id = player.UserId
    local Inventory = Instance.new("Folder")
    Inventory.Name = "Inventory"
    Inventory.Parent = player

    local Inv = InvDataStore:GetAsync(Id)
    print(Inv)
    print(table.concat(Inv, " "))
end)

game.Players.PlayerRemoving:Connect(function(player)
    local Id = player.UserId
    local InvTable = {}
    for i, v in pairs(game.Players:FindFirstChild(player.Name).Inventory:GetChildren()) do
        print("Repear")
        if v:IsA("NumberValue") then
            table.insert(InvTable, v)
            print(v)
        end
    end
    print(InvTable)
    print(table.concat(InvTable, " "))
    InvDataStore:SetAsync(Id, InvTable)
end)
Output:
13:25:35.288 - Untitled Game auto-recovery file was created
Realism Mod is currently running v2.09! (x2)
table: 0x08cb53598b2d3aa1
table: 0xd8ce847b521d4091
1
13:26:26.703 - Disconnect from ::ffff:127.0.0.1|60556
Explorer:
It seems to be skipping this loop:
for i, v in pairs(game.Players:FindFirstChild(player.Name).Inventory:GetChildren()) do
    print("Repear")
    if v:IsA("NumberValue") then
        table.insert(InvTable, v)
        print(v)
    end
end
as it doesn't seem to print "Repear" (repeat) OR v (the value). Anyone know what's up?
Note: the thing I don't understand is that it doesn't print the value either before or after the save, and it skips the for loop entirely. I can provide extra details too.
It is ignoring the loop because by the time it reaches game.Players:FindFirstChild(player.Name), the call returns nil, since that player has just left the server. What you can do instead is iterate directly over the player object you already have; there's no need to look the player up when you already hold a reference to it.
Try:
for i, v in pairs(player.Inventory:GetChildren()) do
    print("Repear")
    if v:IsA("NumberValue") then
        table.insert(InvTable, v)
        print(v)
    end
end
Also, it is good practice to maintain the data during the game rather than only when the player leaves: those objects are removed when the player leaves, so keep the table up to date during the game and, in PlayerRemoving, just push it to the data store.
It's also good practice to save each player's data roughly every 5 minutes, as sketched below.
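A minimal autosave sketch (the saveInventory helper, the 300-second interval, and storing v.Value rather than the Instance are my own illustrative choices, not anything the DataStore API prescribes):

local function saveInventory(player)
    -- same collection logic as in PlayerRemoving above
    local InvTable = {}
    for _, v in pairs(player.Inventory:GetChildren()) do
        if v:IsA("NumberValue") then
            table.insert(InvTable, v.Value) -- save the number itself; Instances can't be stored
        end
    end
    InvDataStore:SetAsync(player.UserId, InvTable)
end

-- periodically save everyone currently in the game
task.spawn(function()
    while true do
        task.wait(300) -- ~5 minutes
        for _, player in ipairs(game.Players:GetPlayers()) do
            pcall(saveInventory, player) -- one failed save shouldn't kill the loop
        end
    end
end)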
I am writing this question as a follow-up to this one. In his reply, Marco gave me an excellent answer, but unfortunately I am new to OpenModelica, so I need some further help.
I am actually using OpenModelica and not Dymola, so unfortunately I have to build the function that does this myself, and I am very new to the OpenModelica language.
So far, I have a model that simulates the physical behavior based on a system of DAEs. Now I am trying to build what was suggested here:
With getTime() you can build a function that: reads the system time as t_start, translates the model and simulates for 0 seconds, reads the system time again as t_stop, and computes the difference between t_start and t_stop.
Could you please give me more details? Which command can I use to read the system time at t_start, and how do I simulate for 0 seconds? Do I need two different functions to do this for both t_start and t_stop?
Once I have done this, do I have to call the function (or functions) inside the OpenModelica model whose simulation time I want to measure?
Thank you so much again for your precious help!
Very best regards, Gabriele
From the other question:
I noticed that Modelica has different flags for the simulation time, but the time I get is very small compared to the time that elapses between pressing the simulate button and the end of the simulation (measured approximately with the clock on my phone).
The time that is reported is correct. Most of the time taken is not initialisation or simulation, but compilation. If you use the re-simulate option in OMEdit (right-click a result file in the plot view of variables), you will notice the simulation itself is very fast.
$ cat e.mos
loadString("model M
  Real r(fixed=true, start=2.0);
equation
  der(r) = time;
end M;"); getErrorString();
simulate(M); getErrorString();
$ omc e.mos
true
""
record SimulationResult
    resultFile = "/mnt/data/#Mech/martin/tmp/M_res.mat",
    simulationOptions = "startTime = 0.0, stopTime = 1.0, numberOfIntervals = 500, tolerance = 1e-06, method = 'dassl', fileNamePrefix = 'M', options = '', outputFormat = 'mat', variableFilter = '.*', cflags = '', simflags = ''",
    messages = "LOG_SUCCESS | info | The initialization finished successfully without homotopy method.
LOG_SUCCESS | info | The simulation finished successfully.
",
    timeFrontend = 0.004114061,
    timeBackend = 0.00237546,
    timeSimCode = 0.0008126780000000001,
    timeTemplates = 0.062749837,
    timeCompile = 0.633754155,
    timeSimulation = 0.006627571000000001,
    timeTotal = 0.7106012479999999
end SimulationResult;
""
As far as I know, OMEdit does not report these other numbers (the time to translate and compile the model). On Windows, these times are quite large because linking takes longer.
I'd like to use Tarantool to store data. How can I store data with a TTL and simple logic (without dealing with spaces)?
Like this:
box:setx(key, value, ttl);
box:get(key)
Yes, you can expire data in Tarantool, and in a much more flexible way than in Redis, though you can't do it without spaces, because a space is the container for data in Tarantool (like a database or table in other database systems).
In order to expire data, you have to install the expirationd Tarantool rock with the command tarantoolctl rocks install expirationd. Full documentation on the expirationd daemon can be found here.
Feel free to use the sample code below:
#!/usr/bin/env tarantool
package.path = './.rocks/share/tarantool/?.lua;' .. package.path

local fiber = require('fiber')
local expirationd = require('expirationd')

-- set up the database
box.cfg{}
box.once('init', function()
    box.schema.create_space('test')
    box.space.test:create_index('primary', {parts = {1, 'unsigned'}})
end)

-- print all fields of all tuples in a given space
local print_all = function(space_id)
    for _, v in ipairs(box.space[space_id]:select()) do
        local s = ''
        for i = 1, #v do s = s .. tostring(v[i]) .. '\t' end
        print(s)
    end
end

-- return true if the tuple is more than 10 seconds old
local is_expired = function(args, tuple)
    return (fiber.time() - tuple[3]) > 10
end

-- simply delete a tuple from a space
local delete_tuple = function(space_id, args, tuple)
    box.space[space_id]:delete{tuple[1]}
end

local space = box.space.test
print('Inserting tuples...')
space:upsert({1, '0 seconds', fiber.time()}, {})
fiber.sleep(5)
space:upsert({2, '5 seconds', fiber.time()}, {})
fiber.sleep(5)
space:upsert({3, '10 seconds', fiber.time()}, {})
print('Tuples are ready:\n')
print_all('test')

print('\nStarting expiration daemon...\n')
-- start the expiration daemon
-- in production, full_scan_time should be bigger than 1 sec
expirationd.start('expire_old_tuples', space.id, is_expired, {
    process_expired_tuple = delete_tuple, args = nil,
    tuples_per_iteration = 50, full_scan_time = 1
})

fiber.sleep(5)
print('\n\n5 seconds passed...')
print_all('test')
fiber.sleep(5)
print('\n\n10 seconds passed...')
print_all('test')
fiber.sleep(5)
print('\n\n15 seconds passed...')
print_all('test')
os.exit()
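And if you want the setx/get shape from your question, you can hide the space behind two small helpers. A minimal sketch reusing the deadline-in-the-tuple idea from above (the kv space, its schema, and the helper names are my own illustration, not a built-in Tarantool API):

-- assumes a space created like:
--   box.schema.create_space('kv')
--   box.space.kv:create_index('primary', {parts = {1, 'string'}})
local fiber = require('fiber')

local function setx(key, value, ttl)
    -- keep the expiry deadline inside the tuple itself
    box.space.kv:replace{key, value, fiber.time() + ttl}
end

local function get(key)
    local t = box.space.kv:get(key)
    if t == nil or fiber.time() > t[3] then
        return nil -- missing or already expired
    end
    return t[2]
end

-- expirationd can then reap stale keys using the same field:
--   is_expired = function(args, tuple) return fiber.time() > tuple[3] end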
I've written a feature for my library Rubikon that displays a throbber (a spinning indicator, as you may have seen in other console apps) as long as some other code is running.
To test this feature I capture the throbber's output in a StringIO and compare it with the expected value. As the throbber is only displayed while the other code is running, the content of the IO grows the longer that code runs. In my tests I do a simple sleep 1 and should therefore get a constant one-second delay. This works most of the time, but sometimes (apparently due to external factors like heavy CPU load) it fails, because the code runs not for one second but a bit longer, so the throbber prints a few additional characters.
My question is: is there any possibility to test such time-critical features in Ruby?
From your GitHub repository, I found this test for the Throbber class:
should 'work correctly' do
  ostream = StringIO.new
  thread = Thread.new { sleep 1 }
  throbber = Throbber.new(ostream, thread)
  thread.join
  throbber.join
  assert_equal " \b-\b\\\b|\b/\b", ostream.string
end
I'll assume that a throbber iterates over ['-', '\', '|', '/'], backspacing before each write, once per second. Consider the following test:
should 'work correctly' do
  ostream = StringIO.new
  started_at = Time.now
  ended_at = nil
  thread = Thread.new { sleep 1; ended_at = Time.now }
  throbber = Throbber.new(ostream, thread)
  thread.join
  throbber.join
  duration = ended_at - started_at
  iterated_chars = " -\\|/"
  expected = ""
  if duration >= 1
    # After n seconds we should have n copies of " -\\|/", excluding \b for now
    expected << iterated_chars * duration.to_i
  end
  # Next append the characters we'd get from working for fractions of a second:
  remainder = duration - duration.to_i
  expected << iterated_chars[0..((iterated_chars.length * remainder).to_i)] if remainder > 0.0
  expected = expected.split('').join("\b") + "\b"
  assert_equal expected, ostream.string
end
The last assignment of expected is a bit unpleasant, but I made the assumption that the throbber would write character/backspace pairs atomically. If this is not true, you should be able to insert the \b escape sequence into the iterated_chars string and remove the last assignment entirely.
This question is similar (I think, although I'm not completely sure) to this one:
Only a real-time operating system can give you such precision. You can assume Thread.Sleep has a precision of about 20 ms, so you could, in theory, sleep until the desired time minus the actual time is about 20 ms and THEN spin for 20 ms, but you'll have to waste those 20 ms. And even that doesn't guarantee that you'll get real-time results; the scheduler might just take your thread out right when it was about to execute the RELEVANT part (just after spinning).
The problem is not Ruby (probably; I'm no expert in Ruby), the problem is the real-time capabilities of your operating system.