I’m using ColdFusion MX 8. I recently had a situation where variables seemed to be “swapping” between sessions. I found some information about entire sessions swapping, but that was not the case here: it was just one variable being swapped, not the entire session. My code snippets follow:
var idArray = ListToArray(arguments.event.getArg("itemIDs"));
var oItemDetail = 0;
var oItem = 0; // Inserting this line seems to have fixed the error.
var i = 0;
for (i = 1; i lte ArrayLen(idArray); i = i + 1) {
    // Log Spot #1 – cflog idArray[i] and arguments.event.getArg("statusNotes")
    oItem = getItemService().getItem(idArray[i]);
    oItemDetail = getItemService().getItemDetail();
    oItemDetail.setItemID(oItem.getItemID());
    oItemDetail.setStatusNotes(arguments.event.getArg("statusNotes"));
    getItemService().saveItem(oItem);
    getItemService().saveItemDetail(oItemDetail);
}
// getItem and getItemDetail just call getTransfer().get()
// saveItem and saveItemDetail just call getTransfer().save()
For example, at Log Spot #1, idArray[i] might have been “1”, and the StatusNotes event arg might be “abc”.
But if another person, in another session, using another login, in another place, another browser, etc., used the function at exactly the same time, with idArray[i] = “2” and statusNotes = “def”, then Item Detail “abc” might instead be attached to Item “2”, and Item Detail “def” to Item “1”.
The fact that, at Log Spot #1, the variables logged are correct, but in the database they are swapped, points to these lines of code as the suspects.
The problem went away once “var oItem” was declared at the top.
So I guess I’m a little shocked by this revelation. I would assume that not declaring my local variables would mean another variable, with the same name, in another function, but in the same session might get overwritten. But this seems to be some sort of internal memory issue: the variables are not even being overwritten, but swapped, between sessions!
I was wondering if anyone had any similar experiences and could shed some light on this?
Un-var'd variables effectively become private variables of the object (component) that contains them. This causes two problems:
They are shared (accessed and written to) by all functions within that same component
They live beyond the life of a single function call
When you var a variable, it becomes local to that function only: only that function can use it, and it lives only as long as that function call does.
In your case, this problem doesn't really have anything to do with sessions, other than that is the persistent scope where you happen to be storing the data from these functions.
You said
I would assume that not declaring my local variables would mean another variable, with the same name, in another function, but in the same session might get overwritten.
But it would be more accurate to say
I would assume that not declaring my local variables would mean another variable, with the same name, in another function, but in the same OBJECT might get overwritten.
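To illustrate (a minimal sketch, not your actual component; the component and function names here are invented), an un-var'd assignment inside a CFC function writes to the component's shared variables scope, while a var'd one stays local to the call:

<cfcomponent>
    <cffunction name="writeShared" access="public" returntype="string">
        <cfscript>
            // No "var": this writes to the component's shared variables scope,
            // visible to every function and every concurrent call on this instance
            sharedValue = "set by writeShared";
            return sharedValue;
        </cfscript>
    </cffunction>

    <cffunction name="readShared" access="public" returntype="string">
        <cfscript>
            // Sees (and can overwrite) whatever writeShared last left behind
            return sharedValue;
        </cfscript>
    </cffunction>

    <cffunction name="writeLocal" access="public" returntype="string">
        <cfscript>
            var localValue = "local to this call"; // var: dies with the function call
            return localValue;
        </cfscript>
    </cffunction>
</cfcomponent>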
I am trying to develop a webapp using Google Apps Script to be embedded into a Google Site which simply displays the contents of a Google Sheet and filters it using some simple parameters. For the time being, at least. I may add more features later.
I got a functional app, but found that filtering could often take a while as the client sometimes had to wait up to 5 seconds for a response from the server. I decided that this was most likely due to the fact that I was loading the spreadsheet by ID using the SpreadsheetApp class every time it was called.
I decided to cache the spreadsheet values in my doGet function using the CacheService and retrieve the data from the cache each time instead.
However, for some reason this means that what was a 2-dimensional array is now treated as a 1-dimensional one, so when displaying the data in an HTML table I end up with a single column, with each cell occupied by a single character.
This is how I have implemented the caching; as far as I can tell from the API reference I am not doing anything wrong:
function doGet() {
  CacheService.getScriptCache().put('data', SpreadsheetApp
      .openById('####')
      .getActiveSheet()
      .getDataRange()
      .getValues());
  return HtmlService
      .createTemplateFromFile('index')
      .evaluate()
      .setSandboxMode(HtmlService.SandboxMode.IFRAME);
}

function getData() {
  return CacheService.getScriptCache().get('data');
}
This is my first time developing a proper application using GAS (I have used it in Sheets before). Is there something very obvious I am missing? I didn't see any type restrictions on the CacheService reference page...
CacheService stores Strings, so objects such as your two-dimensional array will be coerced to Strings, which may not meet your needs.
Use the JSON utility to take control of the results.
myCache.put( 'tag', JSON.stringify( myObj ) );
...
var cachedObj = JSON.parse( myCache.get( 'tag' ) );
Cache entries expire. The put method without an expirationInSeconds parameter expires the entry in 10 minutes. If you need your data to stay alive for more than 10 minutes, you need to specify an expirationInSeconds, and the maximum is 6 hours. So, if you specifically do NOT need the data to expire, Cache might not be the best fit.
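Putting the serialization and expiration points together, a minimal sketch of how the doGet/getData pair from the question could look (the '####' spreadsheet ID placeholder is kept as-is):

function doGet() {
  var values = SpreadsheetApp
      .openById('####')
      .getActiveSheet()
      .getDataRange()
      .getValues();
  // Store the 2-D array as a JSON string; 21600 seconds (6 hours) is the maximum lifetime
  CacheService.getScriptCache().put('data', JSON.stringify(values), 21600);
  return HtmlService
      .createTemplateFromFile('index')
      .evaluate()
      .setSandboxMode(HtmlService.SandboxMode.IFRAME);
}

function getData() {
  // Parse the cached string back into a 2-D array before handing it to the template
  return JSON.parse(CacheService.getScriptCache().get('data'));
}

Note that getData will return null once the entry has expired, so the caller should be prepared to rebuild the cache in that case.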
You can use Cache for something like controlling how long a user can be logged in.
You could also try using a global variable, which some people would tell you to never use. To declare a global variable, define the variable outside of any function.
I am working on developing a debugger plugin for visual studio using VSIX. My problem is I have an array of addresses but I cannot set the IDebugMemoryBytes2 to a particular address. I use DEBUG_PROPERTY_INFO and get the array of addresses, and I also am able to set the context to the particular addresses in the array using the Add function in IDebugMemoryContext2. However, I need to use the ReadAt function to retrieve n bytes from a specified address (from IDebugMemoryBytes2).
Does anyone have any idea how to retrieve data from arbitrary addresses from memory?
I am adding more information on the same issue:
I am using the Microsoft Visual Studio Extensibility package to build my debugger plugin. In the application I am trying to debug with this plugin, there is a double pointer whose values I need to read so I can process them further in the plugin. There is no way to display all of the pointed-to variables in the watch window, and hence I am not able to get the DEBUG_PROPERTY_INFO for all of the arrays that the pointer points to. That is the problem I am trying to address: I have no way to read the memory pointed to by this double pointer.
Now as for the events in the debuggee process, since the plugin is for debugging variables, I put a breakpoint at a place where I know this pointer is populated and then come back to the plugin for further evaluation.
As a start, I was somehow able to get the starting address of each of the arrays, but I am still not able to read x bytes of memory from each of those starting addresses.
For example, if I have int **ptr = // pointing to something
then I have the addresses present in ptr[0], ptr[1], ptr[2], etc., but I need to go to each of those addresses and fetch the memory block they point to.
For this, after much search, I found this link: https://macropolygon.wordpress.com/2012/12/16/evaluating-debugged-process-memory-in-a-visual-studio-extension/ which seems to address exactly my issue.
So to use expression evaluator functions, I need an IDebugStackFrame2 object to get the ExpressionContext. To get this object, I need to register for debug events in the debuggee process, specifically the breakpoint event. As described in the post, I did:
public int Event(IDebugEngine2 engine, IDebugProcess2 process,
    IDebugProgram2 program, IDebugThread2 thread,
    IDebugEvent2 debugEvent, ref Guid riidEvent, uint attributes)
{
    if (debugEvent is IDebugBreakpointEvent2)
    {
        this.thread = thread;
    }
    return VSConstants.S_OK;
}
And my registration is like:
private void GetCurrentThread()
{
    uint cookie;
    DBGMODE[] modeArray = new DBGMODE[1];

    // Get the Debugger service.
    debugService = Package.GetGlobalService(typeof(SVsShellDebugger)) as IVsDebugger;
    if (debugService != null)
    {
        // Register for debug events.
        // Assumes the current class implements IDebugEventCallback2.
        debugService.AdviseDebuggerEvents(this, out cookie);
        debugService.AdviseDebugEventCallback(this);

        debugService.GetMode(modeArray);
        modeArray[0] = modeArray[0] & ~DBGMODE.DBGMODE_EncMask;
        if (modeArray[0] == DBGMODE.DBGMODE_Break)
        {
            GetCurrentStackFrame();
        }
    }
}
But this doesn't seem to invoke the Event function at all and hence, I am not sure how to get the IDebugThread2 object.
I also tried the other way suggested in the same post:
namespace Microsoft.VisualStudio.Debugger.Interop.Internal
{
    [InterfaceType(ComInterfaceType.InterfaceIsIUnknown), Guid("1DA40549-8CCC-48CF-B99B-FC22FE3AFEDF")]
    public interface IDebuggerInternal11
    {
        [DispId(0x6001001f)]
        IDebugThread2 CurrentThread
        {
            [return: MarshalAs(UnmanagedType.Interface)]
            [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
            get;
            [param: In, MarshalAs(UnmanagedType.Interface)]
            [MethodImpl(MethodImplOptions.InternalCall, MethodCodeType = MethodCodeType.Runtime)]
            set;
        }
    }
}

private void GetCurrentThread()
{
    debugService = Package.GetGlobalService(typeof(SVsShellDebugger)) as IVsDebugger;
    if (debugService != null)
    {
        IDebuggerInternal11 debuggerServiceInternal = (IDebuggerInternal11)debugService;
        thread = debuggerServiceInternal.CurrentThread;
        GetCurrentStackFrame();
    }
}
But with this method I think I am missing something, though I am not sure what, because after executing the line IDebuggerInternal11 debuggerServiceInternal = (IDebuggerInternal11)debugService; and inspecting the debuggerServiceInternal variable, I see a System.Security.SecurityException for CurrentThread and CurrentStackFrame (so the next line obviously causes a crash). I googled the error and found I was missing the ComImport attribute on the interface, so I added it; now I get a System.AccessViolationException: Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
I am new to C# programming as well, so it is a bit difficult to grasp many things in a short time. I am lost as to how to proceed further now.
Any help in the same or suggestions to try another way to achieve my objective will be greatly appreciated.
Thanks a lot,
Esash
After much searching, and since I am short of time and need a quick solution, for now it seems the quickest way to solve this problem is to hack the .natvis files so that they display all the elements of the pointer, and then use the same old IDebug* interface methods to access and retrieve the memory context for each pointer element. But after posting the same question in the MSDN forums, I think the proper answer to this problem is the one given by Greggs:
"For reading memory, if you want a fast way to do this, you just want the raw memory, and the debug engine of the target is the normal Visual Studio native engine (in other words, you aren't creating your own debug engine), I would recommend referencing Microsoft.VisualStudio.Debugger.Engine. You can then use DkmStackFrame.ExtractFromDTEObject to get the DkmStackFrame object. This will give you the DkmProcess object and you can call DkmProcess.ReadMemory to read memory from the target."
Now, after spending a lot of time trying to understand how to implement this, I found that you can accomplish it simply by using:
DkmProcess.GetProcesses() and doing a ReadMemory on the process returned.
There is one remaining question: what if more than one process is returned? I tried attaching several processes to the current debugging process, and attaching several processes to the debuggee process as well, but found that DkmProcess.GetProcesses() returns only the one from which I regained control, not the other processes I am attached to. I am not sure this will hold in all cases, but it worked this way for me, and it might work for anyone with similar requirements.
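As a rough illustration of that approach (a sketch only: it assumes a reference to Microsoft.VisualStudio.Debugger.Engine, the default Visual Studio native debug engine, and a single debugged process; the method name and variables are made up for illustration):

// DkmProcess and DkmReadMemoryFlags live in the Microsoft.VisualStudio.Debugger
// namespace (Microsoft.VisualStudio.Debugger.Engine assembly).
private byte[] ReadTargetMemory(ulong address, int byteCount)
{
    DkmProcess[] processes = DkmProcess.GetProcesses();
    if (processes.Length == 0)
    {
        return null; // nothing is currently being debugged
    }

    // Read byteCount raw bytes from the debuggee at the given address.
    byte[] buffer = new byte[byteCount];
    processes[0].ReadMemory(address, DkmReadMemoryFlags.None, buffer);
    return buffer;
}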
Using the .natvis files to accomplish this means using IndexListItems for VS2013 and earlier, and CustomListItems for VS2015 and later; to make the display prettier, use the "no-derived" attribute. There is no way to make the Synthetic tag display only the base address of each variable, so the above attribute is the best way to go, but it is not available in VS2013 and earlier. (The base address might get displayed, but for anyone who wants to go beyond just displaying contents and also access the memory context of each pointer element, the Synthetic tag is not the right tool.)
I hope this helps some developer who struggled, as I did, with the IDebug* interfaces. For reference, here is the link to the MSDN forum thread where my question was answered:
https://social.msdn.microsoft.com/Forums/en-US/030cef1c-ee79-46e9-8e40-bfc59f14cc34/how-can-i-send-a-custom-debug-event-to-my-idebugeventcallback2-handler?forum=vsdebug
Thanks.
I want to keep track of money in the game, like a bank account. The problem is that when I declare the variable "BankAccount" (an Integer) in the GameScene or the game view controller, only those classes know what "account" is. When I go to my secondScene, it doesn't know what account is. I want every scene to be able to access and know what "account" is, but if I declare account as a variable in every scene, then they are no longer the same account. I don't know what to do.
You have to make sure that your variable is declared outside of any function or class, unless you want to go through that function or class to access it.
Variables defined outside of any blocks will be available to the entire application.
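For example, a minimal sketch of that global approach (the names bankAccount, GameScene, and SecondScene are just illustrative, assuming a SpriteKit project):

import SpriteKit

// Declared at file scope, outside any class or function,
// so every scene in the module sees the same value.
var bankAccount: Int = 0

class GameScene: SKScene {
    func rewardPlayer() {
        bankAccount += 100                    // updates the shared value
    }
}

class SecondScene: SKScene {
    func showBalance() {
        print("Balance: \(bankAccount)")      // reads the value GameScene updated
    }
}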
If your var is defined inside a block then you need to use that block to call it.
struct getMoney {
    static var money: Int = 100   // static, so it can be accessed through the type itself
}
You could access it like...
var playerMoney = getMoney.money
I have some data being loaded from a server, but there's no guarantee that I'll have it all when the UI starts to display it to the user. Every frame there's a tick function. When new data is received a flag is set so I know that it's time to load it into my data structure. Which of the following ways is a more sane way to decide when to actually run the function?
AddNewStuffToList()
{
    // Clear the list and reload it with new data
}

Foo_Tick()
{
    if (updated)
        AddNewStuffToList();

    // Rest of tick function
}
Versus:
AddNewStuffToList()
{
    if (updated)
    {
        // Clear the list and reload it with new data
    }
}

Foo_Tick()
{
    AddNewStuffToList();

    // Rest of tick function
}
I've omitted a lot of the irrelevant details for the sake of the example.
IMHO the first one. This version separates:
when to update the data (Foo_Tick)
FROM
how to load the data (AddNewStuffToList()).
The second option just mixes everything together.
You should probably not run the function until the data has been updated; that way, the function can serve more purposes.
Let's say you have two callers that are both going to put data into the list. If the flag check lives inside the function, you can only handle that one way of being called. If instead you check the flag in the code that asks for the data, you can have as many input sources as you want without having to change the underlying function.
Functions should be precise about what they do, and should avoid depending on information created by another function unless it is passed in.
In the first version, the simple "updated" flag is checked each time, and AddNewStuffToList is called only if it is true.
With the second version, you call AddNewStuffToList, followed by a check of "updated", every time.
In this particular instance, given that function calls are generally expensive compared to a variable check, I personally prefer the first version.
However, there are situations when a check inside the function would be better.
e.g.
doSomething(Pointer *p){
    p->doSomethingElse();
}

FooTick(){
    Pointer *p = new Pointer();
    // do stuff ...
    // lets do something
    if (p){
        doSomething(p);
    }
}
This is clumsy because every time you call doSomething you should really check that you're not passing in a bad pointer. What if that is forgotten? We could get an access violation.
In this case, the following is better: you write the check in only one place, and there is no extra overhead, because we always want to ensure we're not passing in a bad pointer.
doSomething(Pointer *p){
    if (p){
        p->doSomethingElse();
    }
}
So in general, it depends on the situation. There are no right and wrong answers, just pros and cons here.