I'm currently learning how to use the Lua C API and while I've had success binding functions between C/C++ and Lua, I have a few questions:
Is it a good idea to load multiple scripts into a single lua_State? Is there a way to close specific chunks? If a script is no longer in use, how can I clear it from the lua_State while retaining everything else?
What is the best way to use scripts that may use the same name for functions/global variables? If I load all of them, the newer definitions override the older ones.
After reading online, I think I need to separate each loaded chunk into a different environment. The way I envision this working is that each time a chunk is loaded I assign it a unique environment name; when I need to work with it, I just use that name to fetch the environment from the LUA_REGISTRYINDEX and perform the operation. So far I haven't figured out how to do this. There are examples online, but they use Lua 5.1.
Is it a good idea to load multiple scripts into a single lua_State?
Yes, definitely - unless those scripts are unrelated and need to run in multiple parallel threads.
Is there a way to close specific chunks?
A chunk is just a value of type "function". When you no longer have that value stored anywhere, the chunk will be garbage-collected. Anything the chunk produced - globals, or locals that have references somewhere outside - will live on.
how to clear it from the lua_State while retaining everything else?
That depends on how you see that chunk: is it just a set of functions, or does it represent some entity with its own state? If you don't create global functions and variables, then everything defined in a separate script file will be local to the chunk and will be removed when there are no references to the chunk left.
What is the best way use scripts that may use the same name for functions/global variables?
Consider rewriting your code. Do not create any globals unless it's explicitly required for communicating with other parts of your program. Make variables local (owned by the chunk), or store them in a table/closure that the chunk returns as a new object - the chunk can be a factory producing new objects, not just a script. Also, Lua simply runs faster with local variables.
The way I envision this working is each time a chunk is loaded I assign it a unique environment name
You should do that if the scripts come from outside - written by users, or received from other external sources. Sandboxing is cool, but there's no need for sandboxing if the chunks are your own internal stuff. Consider rewriting the code without globals. Return some object (an API table, or a closure) if your chunk produces other objects - you can call that chunk many times without reloading it. Or save one global - the module interface - if the chunk represents a Lua-style module. If you don't organize your code well, you will be forced to use separate environments, and you'll have to prepare a new environment for every script, copy basic stuff like print/pairs/string/etc., hit breaks at run time until you figure out what else is missing from the new environment, and so on.
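Here is a rough C API sketch of that chunk-as-factory idea (not from the original post; the file name factory.lua and the assumption that it ends with a return of a table are made up for illustration):

#include <cstdio>
#include <lua.hpp>   // C++-friendly Lua header (Lua 5.3)

int main()
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    // Load the chunk once; it stays on the stack and can be called many times.
    if (luaL_loadfile(L, "factory.lua") != LUA_OK)
    {
        std::fprintf(stderr, "%s\n", lua_tostring(L, -1));
        lua_close(L);
        return 1;
    }

    int refs[2] = { LUA_NOREF, LUA_NOREF };
    for (int i = 0; i < 2; ++i)
    {
        lua_pushvalue(L, -1);                          // duplicate the chunk
        if (lua_pcall(L, 0, 1, 0) == LUA_OK)           // run it, expect one returned table
            refs[i] = luaL_ref(L, LUA_REGISTRYINDEX);  // keep the object; pops it
        else
        {
            std::fprintf(stderr, "%s\n", lua_tostring(L, -1));
            lua_pop(L, 1);
        }
    }
    lua_pop(L, 1);                                     // drop the chunk itself

    // Later: fetch an object back, use it, and release it when finished.
    lua_rawgeti(L, LUA_REGISTRYINDEX, refs[0]);
    // ... use the table on top of the stack ...
    lua_pop(L, 1);
    luaL_unref(L, LUA_REGISTRYINDEX, refs[0]);
    luaL_unref(L, LUA_REGISTRYINDEX, refs[1]);

    lua_close(L);
    return 0;
}

Each returned object keeps its own state in its table/closure, so nothing needs to live in the globals.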
After poking around some more I found what I think is the solution I was looking for. I'm not sure if this is the correct/best way to do it, but it works in my basic test case. jpjacobs's answer on this question helped a lot.
test1.lua
x = 1
function hi()
    print("hi1");
    print(x);
end
hi()
test2.lua
x = 2
function hi()
    print("hi2");
    print(x);
end
hi()
main.cpp
// Includes added so this builds standalone (Lua 5.3).
#include <lua.hpp>

int main(void)
{
    lua_State* L = luaL_newstate();
    luaL_openlibs(L);

    // String literals should be const char* in C++.
    const char* file1 = "Rooms/test1.lua";
    const char* file2 = "Rooms/test2.lua";

    // Load the file (return values of luaL_loadfile/lua_pcall are not checked here).
    luaL_loadfile(L, file1);
    // Create the _ENV table
    lua_newtable(L);
    // Create its metatable
    lua_newtable(L);
    // Get the global table
    lua_getglobal(L, "_G");
    lua_setfield(L, -2, "__index");
    // Set the metatable (with __index = _G) on the new environment
    lua_setmetatable(L, -2);
    // Push to registry with a unique name.
    // I feel like these 2 steps could be merged or replaced but I'm not sure how
    lua_setfield(L, LUA_REGISTRYINDEX, "test1");
    // Retrieve it.
    lua_getfield(L, LUA_REGISTRYINDEX, "test1");
    // Set the upvalue (_ENV)
    lua_setupvalue(L, 1, 1);
    // Run the chunk
    lua_pcall(L, 0, LUA_MULTRET, 0);

    // Repeat for the second file
    luaL_loadfile(L, file2);
    lua_newtable(L);
    lua_newtable(L);
    lua_getglobal(L, "_G");
    lua_setfield(L, -2, "__index");
    lua_setmetatable(L, -2);
    lua_setfield(L, LUA_REGISTRYINDEX, "test2");
    lua_getfield(L, LUA_REGISTRYINDEX, "test2");
    lua_setupvalue(L, 1, 1);
    lua_pcall(L, 0, LUA_MULTRET, 0);

    // Retrieve the table containing the functions of the first chunk
    lua_getfield(L, LUA_REGISTRYINDEX, "test1");
    // Get the function we want to call
    lua_getfield(L, -1, "hi");
    // Call it
    lua_call(L, 0, 0);
    // Repeat
    lua_getfield(L, LUA_REGISTRYINDEX, "test2");
    lua_getfield(L, -1, "hi");
    lua_call(L, 0, 0);
    lua_getfield(L, LUA_REGISTRYINDEX, "test2");
    lua_getfield(L, -1, "hi");
    lua_call(L, 0, 0);
    lua_getfield(L, LUA_REGISTRYINDEX, "test1");
    lua_getfield(L, -1, "hi");
    lua_call(L, 0, 0);

    lua_close(L);
    return 0;
}
Output:
hi1
1
hi2
2
hi1
1
hi2
2
hi2
2
hi1
1
I'm using Lua 5.3.2 with Visual Studio 2013, if that matters.
This basic test case works as needed. I'll continue testing to see if any issues/improvements come up. If anyone sees any way I could improve this code or any glaring mistakes, please leave a comment.
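Regarding the two registry steps I wasn't sure about: one way to merge them is to duplicate the environment table with lua_pushvalue before storing it, so a single lua_setfield both registers it and leaves a copy on the stack for lua_setupvalue. A rough (untested) helper sketch:

static bool loadChunkWithEnv(lua_State* L, const char* file, const char* name)
{
    if (luaL_loadfile(L, file) != LUA_OK)          // stack: chunk (or error message)
        return false;
    lua_newtable(L);                               // stack: chunk, env
    lua_newtable(L);                               // stack: chunk, env, mt
    lua_getglobal(L, "_G");                        // stack: chunk, env, mt, _G
    lua_setfield(L, -2, "__index");                // mt.__index = _G
    lua_setmetatable(L, -2);                       // setmetatable(env, mt)
    lua_pushvalue(L, -1);                          // stack: chunk, env, env
    lua_setfield(L, LUA_REGISTRYINDEX, name);      // registry[name] = env
    lua_setupvalue(L, -2, 1);                      // _ENV of chunk = env; stack: chunk
    return lua_pcall(L, 0, 0, 0) == LUA_OK;        // on failure an error message is left on the stack
}

With that, main just calls loadChunkWithEnv(L, file1, "test1") and loadChunkWithEnv(L, file2, "test2"), and the later lua_getfield(L, LUA_REGISTRYINDEX, "test1") lookups stay the same.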
You should treat each of your scripts as a different module,
just like you have more than one "require" in your code.
Your loaded chunk should return a table that will be stored globally.
It isn't a good idea to create a lot of global variables; this can cause problems once you add more modules.
Related
I want to know the time when a disk is made offline by the user. Is there a way to know this through WMI classes or some other way?
If you cannot find a way to do it through the Win32 API/WMI or similar, I do know of an alternative which you could look into as a last resort.
What about using NtQueryVolumeInformationFile with the FileFsVolumeInformation class? You can do this to retrieve the data about the volume and then access the data through the FILE_FS_VOLUME_INFORMATION structure. This includes the creation time.
At the end of the post, I've left some resource links for you to read more on this so you can finish it off the way you'd like to implement it. I do need to quickly address something important first, though: the documentation will lead you to an enum definition for _FSINFOCLASS, but if you just copy-paste it from MSDN, it probably won't work. You need to set the first entry of the enum definition to 1 manually, otherwise NtQueryVolumeInformationFile will return an error status of STATUS_INVALID_INFO_CLASS (because the first entry will be identified as 0 instead of 1, and every entry following it will be off by one, unless you manually add the = 1).
Here is the edited version, which should work.
typedef enum _FSINFOCLASS {
    FileFsVolumeInformation = 1,
    FileFsLabelInformation,
    FileFsSizeInformation,
    FileFsDeviceInformation,
    FileFsAttributeInformation,
    FileFsControlInformation,
    FileFsFullSizeInformation,
    FileFsObjectIdInformation,
    FileFsDriverPathInformation,
    FileFsVolumeFlagsInformation,
    FileFsSectorSizeInformation,
    FileFsDataCopyInformation,
    FileFsMetadataSizeInformation,
    FileFsMaximumInformation
} FS_INFORMATION_CLASS, *PFS_INFORMATION_CLASS;
Once you've opened a handle to the disk, you can call NtQueryVolumeInformationFile like this:
NTSTATUS NtStatus = 0;
HANDLE FileHandle = NULL;
IO_STATUS_BLOCK IoStatusBlock = { 0 };
FILE_FS_VOLUME_INFORMATION FsVolumeInformation = { 0 };
...
Open the handle to the disk here, and then check that you have a valid handle.
...
NtStatus = NtQueryVolumeInformationFile(FileHandle,
                                        &IoStatusBlock,
                                        &FsVolumeInformation,
                                        sizeof(FILE_FS_VOLUME_INFORMATION),
                                        FileFsVolumeInformation);
...
If NtStatus represents an NTSTATUS success code (e.g. STATUS_SUCCESS), then you can access the VolumeCreationTime (LARGE_INTEGER) field of the FILE_FS_VOLUME_INFORMATION structure through the FsVolumeInformation variable.
Your final task at this point will be using the LARGE_INTEGER field named VolumeCreationTime to gather proper time/date information. There are two links included at the end of the post which are focused on that topic; they should help you sort it out.
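As a rough sketch of that final step (not from the linked pages, just the usual conversion; VolumeCreationTime is in 100-nanosecond units since January 1, 1601 UTC, the same format as a FILETIME):

// FileTimeToSystemTime is declared in <windows.h>.
FILETIME FileTime = { 0 };
SYSTEMTIME SystemTime = { 0 };

// Copy the LARGE_INTEGER into a FILETIME and convert it.
FileTime.dwLowDateTime = FsVolumeInformation.VolumeCreationTime.LowPart;
FileTime.dwHighDateTime = FsVolumeInformation.VolumeCreationTime.HighPart;

if (FileTimeToSystemTime(&FileTime, &SystemTime))
{
    // SystemTime now holds the creation time in UTC; use
    // SystemTimeToTzSpecificLocalTime if you need local time.
    printf("Volume created: %04u-%02u-%02u %02u:%02u:%02u UTC\n",
           SystemTime.wYear, SystemTime.wMonth, SystemTime.wDay,
           SystemTime.wHour, SystemTime.wMinute, SystemTime.wSecond);
}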
See the following for more information.
https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/content/ntifs/nf-ntifs-ntqueryvolumeinformationfile
https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/content/wdm/ne-wdm-_fsinfoclass
https://learn.microsoft.com/en-us/windows-hardware/drivers/ddi/content/ntddk/ns-ntddk-_file_fs_volume_information
https://msdn.microsoft.com/en-us/library/windows/desktop/ms724280.aspx
https://blogs.msdn.microsoft.com/joshpoley/2007/12/19/datetime-formats-and-conversions/
I'm trying to load several symbol modules using the following code:
DWORD64 dwBaseDllSymLocal = 0;
SymInitializeW(GetCurrentProcess(), NULL, FALSE);
SymSetOptions(SYMOPT_DEBUG);
dwBaseDllSymLocal = SymLoadModuleExW(GetCurrentProcess(), NULL, L"C:\\module1.dll", NULL, 0, 0, NULL, 0);
if (0 == dwBaseDllSymLocal)
{
    __debugbreak();
}
dwBaseDllSymLocal is 10000000 now.
dwBaseDllSymLocal = SymLoadModuleExW(GetCurrentProcess(), NULL, L"C:\\module2.dll", NULL, 0, 0, NULL, 0);
if (0 == dwBaseDllSymLocal)
{
    __debugbreak();
}
Dbghelp gives the following message: module1 is already loaded at 10000000.
The same behavior happens when I try to load the same module twice (unlike what is written in the documentation of the function).
The last error is ERROR_INVALID_ADDRESS, though it doesn't seem relevant, because the last error already has this value after the first, successful call too.
Is it possible to load several modules with SymLoadModuleExW? What is the right way to do so?
You are loading these binaries outside of the context of a debugger session, right? In which case, the fifth parameter, BaseOfDll, might be causing a problem:
The load address of the module. If the value is zero, the library obtains the load address from the symbol file.
When loading a binary standalone, it might just use 10000000 for everything... in which case, the second module load would conflict with the first one. So try passing something different there.
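For example, something along these lines (a sketch based on the calls from the question; the 0x10000000/0x20000000 bases are arbitrary non-overlapping values chosen purely for illustration):

// Pass explicit, distinct BaseOfDll values so the two modules cannot
// end up at the same load address.
DWORD64 base1 = SymLoadModuleExW(GetCurrentProcess(), NULL, L"C:\\module1.dll",
                                 NULL, 0x10000000, 0, NULL, 0);
DWORD64 base2 = SymLoadModuleExW(GetCurrentProcess(), NULL, L"C:\\module2.dll",
                                 NULL, 0x20000000, 0, NULL, 0);
if (0 == base1 || 0 == base2)
{
    // GetLastError() is only meaningful here on failure.
    __debugbreak();
}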
Last error is [...] though it doesn't seem relevant, because last error has this value following the first successful function call too.
If the function succeeds, the last error is not applicable; it could contain anything, but you should ignore it unless the documentation explicitly says that it sets the last error in success cases.
I have a fairly simple program which needs user input in the form of a text string. I have a CLR form with an edit box, and I need to take that input and pass it into my class, which just copies it to a member variable.
In the Form.h code, the TextChanged event handler is...
int textLength = m_userDest->TextLength;
if (textLength > 2 && textLength < 5)
{
    // Could be an ICAO code in here
    char dest[5];
    String^ text = m_userDest->Text->ToUpper();
    sprintf_s(dest, 5, "%s", text);
    airTraffic.SetUserDest(dest);
}
My class (airTraffic) SetUserDest function is just
void CAirTraffic::SetUserDest(char* dest)
{
    strncpy_s(m_userDest, 5, dest, 5);
}
When this is run I get the debug assertion below; it doesn't stay on the screen and clears automatically after a few seconds.
Debug Assertion Failed!
Program: ...sual Studio 2010\Projects\FSAirTraffic\Debug\FSAirTraffic.exe
File: f:\dd\vctools\crt_bld\self_x86\crt\tcsncpy_s.inl
Line: 24
Expression: ((_Dst)) != NULL && ((_SizeInBytes)) > 0
I don't have an f:\ drive, so I'm guessing this is some internal Microsoft(?) code and I can't see the context of the assertion or exactly what its problem is. I don't have a file called tcsncpy_s.inl on my machine.
If I don't call my class function then there's no assertion, so I assumed that was the problem.
Curiously though, when stepping through the debugger the assertion occurs as I step out of the TextChanged event, with the rest of the functions operating as intended (as far as I can see).
Does anyone know what the problem is and how I can go about solving it?
I don't understand how your code works. You use m_userDest twice: first it appears to be a pointer to a structure of some sort, maybe a handle to a TextBox control:
int textLength = m_userDest->TextLength;
Later you pass it to strncpy_s, which needs a char*, not a pointer to some structure.
void CAirTraffic::SetUserDest(char* dest)
{
    strncpy_s(m_userDest, 5, dest, 5);
}
While it's possible for a structure to implicitly convert to a char*, it's not possible for a structure pointer to do so. Perhaps there's a smart pointer involved? Or are you using the same member variable name for completely different purposes in different classes? [1]
In any case, strncpy_s is inspecting the value of its first argument and not liking it.
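For comparison, here is a minimal sketch of what the receiving class presumably needs: its own plain char buffer as the strncpy_s destination. The fixed-size member below is an assumption, not your actual code; the assertion's _Dst != NULL check suggests the m_userDest inside CAirTraffic isn't a valid writable buffer.

#include <string.h>   // strncpy_s
#include <stdlib.h>   // _TRUNCATE (MSVC CRT)

// Sketch: give CAirTraffic its own native char buffer, separate from the
// form's m_userDest (the TextBox member), so strncpy_s has a valid destination.
class CAirTraffic
{
    char m_userDest[5];                      // owned, writable buffer
public:
    CAirTraffic() { m_userDest[0] = '\0'; }

    void SetUserDest(const char* dest)
    {
        // Copies at most 4 characters and always null-terminates.
        strncpy_s(m_userDest, sizeof(m_userDest), dest, _TRUNCATE);
    }
};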
[1] Note that the new "wisdom" saying not to use Hungarian notation has destroyed the ability to understand this code in textual form. We don't have an IDE providing mouseover information about the data type of variables. Applications Hungarian is still a good idea in the real world, despite how many "best practices" documents decry it. Amazing how many code style documents are written from a purely theoretical basis.
I'm trying to use boost::shared_ptr and boost::enable_shared_from_this to no avail. It looks as if shared_from_this() is returning the wrong shared_ptr. Here is what I see:
Task* task = new TaskSubClass();
boost::shared_ptr<Task> first = boost::shared_ptr<Task>(task); // use_count = 1, weak_count = 1
boost::shared_ptr<Task> second = first; // use_count = 2, weak_count = 1
boost::shared_ptr<Task> third = first->shared_from_this(); // use_count = 2, weak_count = 2
I also noticed that first.px = third.px but first.pn.pi != third.pn.pi. That is, they both share the same object but they use a different counter. How can I get the two to share the same counter?
It turns out that this was caused by the fact that TaskSubClass's constructor invoked some method that, in turn, invoked new boost::shared_ptr<Task>(this) instead of new boost::shared_ptr<Task>(shared_from_this()). As an added bonus, you're not supposed to invoke shared_from_this() from the constructor, and the documentation is far from obvious on this point: "There must exist at least one shared_ptr instance p that owns t." It makes sense in retrospect, but the documentation should really be more explicit :)
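For anyone landing here later, a minimal sketch of the pattern that avoids both problems (the init() method is hypothetical; the point is that shared_from_this() is only called once a shared_ptr already owns the object):

#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

class Task : public boost::enable_shared_from_this<Task>
{
public:
    virtual ~Task() {}

    // Do any self-registration here instead of in the constructor:
    // by the time init() runs, a shared_ptr already owns *this,
    // so shared_from_this() shares its control block (and counter).
    void init()
    {
        boost::shared_ptr<Task> self = shared_from_this();
        // ... hand `self` to whatever needs a shared_ptr<Task> ...
    }
};

class TaskSubClass : public Task {};

int main()
{
    boost::shared_ptr<Task> first(new TaskSubClass()); // use_count = 1
    first->init();                                     // safe: not called from the constructor
    boost::shared_ptr<Task> third = first->shared_from_this();
    // first and third now share the same counter (use_count = 2)
}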
Sorry for the misleading question.
I have a scenario where, at certain points in my program, a thread needs to update several shared data structures. Each data structure can be safely updated in parallel with any other data structure, but each data structure can only be updated by one thread at a time. The simple, naive way I've expressed this in my code is:
synchronized updateStructure1();
synchronized updateStructure2();
// ...
This seems inefficient because if multiple threads are trying to update structure 1, but no thread is trying to update structure 2, they'll all block waiting for the lock that protects structure 1, while the lock for structure 2 sits untaken.
Is there a "standard" way of remedying this? In other words, is there a standard threading primitive that tries to update all structures in a round-robin fashion, blocks only if all locks are taken, and returns when all structures are updated?
This is a somewhat language agnostic question, but in case it helps, the language I'm using is D.
If your language supports lightweight threads or actors, you could always have the updating thread spawn a new thread to change each object, where each child thread just locks, modifies, and unlocks one object. Then have your updating thread join all its child threads before returning. This punts the problem to the runtime's scheduler, which is free to schedule those child threads any way it can for best performance.
You could do this in languages with heavier threads, but the spawn and join might have too much overhead (though thread pooling might mitigate some of this).
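A rough sketch of that spawn-and-join idea, written in C++ since the question is largely language agnostic (the Shared wrapper and its update callback are placeholders for the real structures):

#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// One mutex per shared structure; each child thread locks, updates, and
// unlocks exactly one structure, and the caller simply joins them all.
struct Shared
{
    std::mutex lock;
    std::function<void()> update;   // placeholder for the real update work
};

void updateAll(std::vector<Shared*>& structures)
{
    std::vector<std::thread> children;
    children.reserve(structures.size());
    for (Shared* s : structures)
    {
        children.emplace_back([s] {
            std::lock_guard<std::mutex> guard(s->lock);
            s->update();
        });
    }
    for (std::thread& t : children)
        t.join();                   // return only once every structure is updated
}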
I don't know if there's a standard way to do this. However, I would implement it something like the following:
// Assumes mutexA/mutexB (core.sync.mutex) guard the two structures,
// and that the flags start out false.
bool updatedA, updatedB;
do
{
    if (!updatedA && mutexA.tryLock())
    {
        scope(exit) mutexA.unlock();
        updateA();
        updatedA = true;
    }
    if (!updatedB && mutexB.tryLock())
    {
        scope(exit) mutexB.unlock();
        updateB();
        updatedB = true;
    }
}
while (!(updatedA && updatedB));
Some clever metaprogramming could probably cut down the repetition, but I leave that as an exercise for you.
Sorry if I'm being naive, but do you not just synchronize on separate objects to make the concerns independent?
e.g.
public Object lock1 = new Object; // access to resource 1
public Object lock2 = new Object; // access to resource 2

updateStructure1() {
    synchronized( lock1 ) {
        ...
    }
}

updateStructure2() {
    synchronized( lock2 ) {
        ...
    }
}
To my knowledge, there is not a standard way to accomplish this, and you'll have to get your hands dirty.
To paraphrase your requirements, you have a set of data structures, and you need to do work on them, but not in any particular order. You only want to block waiting on a data structure if all other objects are blocked. Here's the pseudocode I would base my solution on:
work = unshared list of objects that need updating
while work is not empty:
    found = false
    for each obj in work:
        try locking obj
        if successful:
            remove obj from work
            found = true
            obj.update()
            unlock obj
    if !found:
        // Everything is locked, so we have to wait
        obj = randomly pick an object from work
        remove obj from work
        lock obj
        obj.update()
        unlock obj
An updating thread will only block if it finds that all objects it needs to use are locked. Then it must wait on something, so it just picks one and locks it. Ideally, it would pick the object that will be unlocked earliest, but there's no simple way of telling that.
Also, it's conceivable that an object might become free while the updater is in the try loop and so the updater would skip it. But if the amount of work you're doing is large enough, relative to the cost of iterating through that loop, the false conflict should be rare, and it would only matter in cases of extremely high contention.
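For what it's worth, here is a compact C++ rendering of that pseudocode (my own sketch; the Guarded wrapper and its update callback are stand-ins for the real structures):

#include <functional>
#include <mutex>
#include <vector>

// Stand-in for one shared data structure and the work to perform on it.
struct Guarded
{
    std::mutex lock;
    std::function<void()> update;
};

// Try every structure with a non-blocking try_lock; only if a full pass
// finds everything taken do we pick one and block on it.
void updateAll(std::vector<Guarded*> work)
{
    while (!work.empty())
    {
        bool found = false;
        for (auto it = work.begin(); it != work.end(); )
        {
            Guarded* obj = *it;
            if (obj->lock.try_lock())
            {
                std::lock_guard<std::mutex> guard(obj->lock, std::adopt_lock);
                obj->update();
                it = work.erase(it);
                found = true;
            }
            else
            {
                ++it;
            }
        }
        if (!found && !work.empty())
        {
            Guarded* obj = work.back();   // "pick one" - here simply the last
            work.pop_back();
            std::lock_guard<std::mutex> guard(obj->lock);
            obj->update();                // this is the only place we block
        }
    }
}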
I don't know of any "standard" way of doing this, sorry. So the code below is just a ThreadGroup, abstracted by a Swarm class, that "hacks" at a job list until all jobs are done, round-robin style, and makes sure that as many threads as possible are used. I don't know how to do this without a job list.
Disclaimer: I'm very new to D and to concurrency programming, so the code is rather amateurish. I saw this more as a fun exercise. (I'm dealing with some concurrency stuff too.) I also understand that this isn't quite what you're looking for. If anyone has any pointers I'd love to hear them!
import core.thread,
core.sync.mutex,
std.c.stdio,
std.stdio;
class Swarm{
    ThreadGroup group;
    Mutex mutex;
    auto numThreads = 1;
    void delegate ()[int] jobs;

    this(void delegate()[int] aJobs, int aNumThreads){
        jobs = aJobs;
        numThreads = aNumThreads;
        group = new ThreadGroup;
        mutex = new Mutex();
    }

    void runBlocking(){
        run();
        group.joinAll();
    }

    void run(){
        foreach(c;0..numThreads)
            group.create( &swarmJobs );
    }

    void swarmJobs(){
        void delegate () myJob;
        do{
            myJob = null;
            synchronized(mutex){
                if(jobs.length > 0)
                    foreach(i,job;jobs){
                        myJob = job;
                        jobs.remove(i);
                        break;
                    }
            }
            if(myJob)
                myJob();
        }while(myJob);   // do-while needs the trailing semicolon
    }
}

class Jobs{
    void job1(){
        foreach(c;0..1000){
            foreach(j;0..2_000_000){}
            writef("1");
            fflush(core.stdc.stdio.stdout);
        }
    }

    void job2(){
        foreach(c;0..1000){
            foreach(j;0..1_000_000){}
            writef("2");
            fflush(core.stdc.stdio.stdout);
        }
    }
}

void main(){
    auto jobs = new Jobs();
    void delegate ()[int] jobsList =
        [1:&jobs.job1, 2:&jobs.job2, 3:&jobs.job1, 4:&jobs.job2];
    int numThreads = 2;
    auto swarm = new Swarm(jobsList, numThreads);
    swarm.runBlocking();
    writefln("end");
}
There's no standard solution but rather a class of standard solutions depending on your needs.
http://en.wikipedia.org/wiki/Scheduling_algorithm