Can someone help me? I have a Lite plan in IBM Watson Studio and I want to create a new project, but when I try to add storage I get the error "[409, Conflict] LITE_PLAN_LIMIT". I am currently enrolled in a course and I can't continue because of this problem.
The Lite plan of Watson Studio includes 50 capacity unit-hours per month:
https://cloud.ibm.com/catalog/services/watson-studio
50 capacity unit-hours monthly limit
Environment = # of capacity units required per hour
• 1 vCPU + 4 GB RAM = 0.5
• 2 vCPU + 8 GB RAM = 1
• 4 vCPU + 16 GB RAM = 2
• Decision Optimization = Environment + 5
You can upgrade to the Standard plan (for a fee), create another IBM Cloud account and spin up another Watson Studio Lite instance, wait a month for the capacity to reset, or ask your instructor for a feature code that will give you more resources.
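As a rough sketch of how quickly those rates consume the Lite allowance (assuming the 50 capacity unit-hours figure and the per-environment rates listed above):

```javascript
// Capacity units consumed per hour for each Lite-plan environment
// (rates taken from the catalog table above).
const ratesPerHour = {
  "1 vCPU + 4 GB RAM": 0.5,
  "2 vCPU + 8 GB RAM": 1,
  "4 vCPU + 16 GB RAM": 2,
};

const monthlyAllowanceCUH = 50; // Lite plan capacity unit-hours per month

// Hours of runtime the monthly allowance buys for a given rate.
function hoursAvailable(rate) {
  return monthlyAllowanceCUH / rate;
}

for (const [env, rate] of Object.entries(ratesPerHour)) {
  console.log(`${env}: ${hoursAvailable(rate)} hours/month`);
}
```

So the smallest environment runs for 100 hours a month, while the 4 vCPU one is exhausted after 25.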
I was looking into a bug where PdhLookupPerfNameByIndex was giving me a buffer size of 0 and discovered that there are two counters for "% Processor Time"; we were using the last one in the list (indexes 6 and 4676). The Spanish language pack has this counter only once (index 6). I am curious why there would be two counters for the same thing in English and, if there is a valid reason, why the second one is not included in the Spanish language pack.
Using Windows Server 2012 R2
We have a small Greenplum (gpdb) cluster in which a few queries are failing.
System-related information:
TOTAL RAM = 30GB
SWAP = 15GB
gp_vmem_protect_limit = 2700MB
TOTAL SEGMENTS = 8 primary + 8 mirror = 16
SEGMENT HOSTS = 2
VM OVERCOMMIT RATIO = 72
Used this calculator: http://greenplum.org/calc/#
SYMPTOM
The query failed with the error message shown below:
ERROR: XX000: Canceling query because of high VMEM usage. Used: 2433MB, available 266MB, red zone: 2430MB (runaway_cleaner.c:135) (seg2 slice74 DATANODE01:40002 pid=11294) (cdbdisp.c:1320)
We tried changing the following parameters:
statement_mem from 125MB to 8GB
max_statement_mem from 200MB to 16GB
Not sure what exactly needs to change here; we are still trying to understand the root cause of the error.
Any help would be much appreciated.
gp_vmem_protect_limit is a per-segment limit. You have 16 segments, so based on your segment count and gp_vmem_protect_limit you need 2700MB × 16 of total memory.
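To make the arithmetic concrete, a quick sketch using the figures from the question (this assumes the worst case where every segment, mirrors included, reaches its protect limit at once; in practice mirrors reserve vmem only when promoted):

```javascript
// Figures from the question: per-segment vmem limit and segment counts.
const gpVmemProtectLimitMB = 2700;
const totalSegments = 16; // 8 primary + 8 mirror
const segmentHosts = 2;

// Worst case: every segment hits its protect limit at the same time.
const clusterWorstCaseMB = gpVmemProtectLimitMB * totalSegments;
const perHostWorstCaseMB = clusterWorstCaseMB / segmentHosts;

console.log(`Cluster-wide worst case: ${clusterWorstCaseMB} MB`);
console.log(`Per host (${totalSegments / segmentHosts} segments): ${perHostWorstCaseMB} MB`);
```

That is roughly 21.6GB per host against 30GB of RAM, which is why a single heavy query slice can push a segment into the red zone reported in the error.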
I want to change the memsize to 6GB in SAS 9.4. I have read in previous posts that I need to change the .cfg file.
I am following the instructions here, but they do not work:
http://www.ciser.cornell.edu/FAQ/SAS/MemoryAllocation.shtm
My memsize is unchanged.
proc options group = memory; run;
SAS (r) Proprietary Software Release 9.4 TS1M2
Group=MEMORY
SORTSIZE=1073741824
Specifies the amount of memory that is available to the
SORT procedure.
SUMSIZE=0 Specifies a limit on the amount of memory that is available
for data summarization procedures when class variables are
active.
MAXMEMQUERY=0 Specifies the maximum amount of memory that is allocated
for procedures.
MEMBLKSZ=16777216 Specifies the memory block size for Windows memory-based
libraries.
MEMMAXSZ=2147483648
Specifies the maximum amount of memory to allocate for
using memory-based libraries.
LOADMEMSIZE=0 Specifies a suggested amount of memory that is needed for
executable programs loaded by SAS.
MEMSIZE=2147483648
Specifies the limit on the amount of virtual memory that
can be used during a SAS session.
REALMEMSIZE=0 Specifies the amount of real memory SAS can expect to
allocate.
NOTE: PROCEDURE OPTIONS used (Total process time):
real time 0.01 seconds
cpu time 0.01 seconds
My SAS/OS information is below:
NOTE: Copyright (c) 2002-2012 by SAS Institute Inc., Cary, NC, USA.
NOTE: SAS (r) Proprietary Software 9.4 (TS1M2)
Licensed to UNIVERSITY OF CALIFORNIA SYSTEM-SFA-T&R, Site 70081229.
NOTE: This session is executing on the X64_7PRO platform.
NOTE: Updated analytical products:
SAS/STAT 13.2
SAS/ETS 13.2
SAS/OR 13.2
SAS/IML 13.2
SAS/QC 13.2
NOTE: Additional host information:
X64_7PRO WIN 6.1.7601 Service Pack 1 Workstation
NOTE: SAS initialization used:
real time 0.98 seconds
cpu time 0.63 seconds
Would really appreciate some suggestions.
Thanks!
edited: 04:06PM, 9/23/2015
I am following the steps here http://www.ciser.cornell.edu/FAQ/SAS/MemoryAllocation.shtm
Only some minor differences.
1. Go to C:\Program Files\SASHome\SASFoundation\9.4
2. Copy sasv9.cfg (SAS Configuration Information)
3. Go to u:\Documents\My SAS Files\9.4
4. Paste sasv9.cfg
5. Open sasv9.cfg
6. In the second line, type "-memsize 6g"
7. Go to Start→All Programs→SAS, then right-click SAS 9.4 (English)
8. Select Send to→Desktop to create a shortcut
9. Go to the Desktop, right-click the SAS shortcut, then open Properties
10. Modify the Target. Replace the segment that says:
-CONFIG "C:\Program Files\SASHome\SASFoundation\9.4\nls\en\sasv9.cfg"
with this one:
-CONFIG "U:\Documents\My SAS Files\9.4\sasv9.cfg"
11. Click OK, then OK.
12. Invoke SAS through the shortcut icon.
13. You will receive a warning message asking if you want to make modifications to your computer settings; click "Yes".
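After the Target edit, the shortcut's Target field would read something like the line below (a sketch; the sas.exe path is assumed to match the default installation):

```text
"C:\Program Files\SASHome\SASFoundation\9.4\sas.exe" -CONFIG "U:\Documents\My SAS Files\9.4\sasv9.cfg"
```

Once SAS restarts through the shortcut, rerunning proc options group=memory; run; should report MEMSIZE=6442450944 (6GB in bytes) instead of 2147483648.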
I'm using Windows HPC Pack 2008 R2 SP4 to run an MPI application. I'm having problems getting the Job Scheduler to run the app on all available cores. Here's my code:
using (IScheduler scheduler = new Scheduler())
{
    scheduler.Connect("MyCluster");

    var newJob = scheduler.CreateJob();
    newJob.Name = "My job";
    //newJob.IsExclusive = true;

    var singleTask = newJob.CreateTask();
    singleTask.WorkDirectory = @"C:\MpiWorkspace";
    singleTask.CommandLine = @"mpiexec MyMpiApp.exe";
    newJob.AddTask(singleTask);

    scheduler.SubmitJob(newJob, null, null);
}
Run as above, I get allocated 1 (measly) core out of the 16 available across the two compute nodes in the cluster. The best I can do is uncomment the line newJob.IsExclusive = true;, which allocates me all the cores on one of the compute nodes (8 cores).
If I were running from the command line I could use the mpiexec switch /np * to use all available cores, but this appears to be overridden by the Job Scheduler.
How do I get the same effect in code? How can I run on all available cores without explicitly declaring a minimum and maximum number of cores for the task?
Not sure if you have solved it; I had a similar problem.
One thing I can think of is that your HPC scheduler overrides your affinity. Go to "Configuration" -> "Configure job scheduler policies and settings" -> "Affinity" and set it to "No Jobs".
You also need to add parameters to mpiexec:
mpiexec -np * -al 2:L -c 8 MyMpiApp.exe
/al sets the affinity.
/c sets the number of cores per node.
Not sure how to do it when your nodes have different numbers of cores.
Hope this helps.
In many places on the web, you will see:
What is the memory limit on a node process?
and the answer:
Currently, by default V8 has a memory limit of 512mb on 32-bit systems, and 1gb on 64-bit systems. The limit can be raised by setting --max-old-space-size to a maximum of ~1gb (32-bit) and ~1.7gb (64-bit), but it is recommended that you split your single process into several workers if you are hitting memory limits.
Can somebody confirm this is the case as Node.js seems to update frequently?
And more importantly, will it be the case in the near future?
I want to write JavaScript code which might have to deal with 4gb of javascript objects (and speed might not be an issue).
If I can't do it in Node, I will end up doing in java (on a 64bit machine) but I would rather not.
This has been a big concern for some people using Node.js, and there is good news. The memory limit for V8 is now untested (effectively unknown) on 64-bit systems, and on 32-bit systems it has been raised to as much as the 32-bit address space allows.
Read more here: http://code.google.com/p/v8/issues/detail?id=847
Starting a Node.js app with a heap of 8 GB:
node --max-old-space-size=8192 app.js
See node command line options documentation or run:
node --help --v8-options
I'm running a process right now on Ubuntu Linux that has a definite memory leak, and node 0.6.0 is pushing 8GB. Think it's handled :).
Memory limit max value is 3049 for 32-bit users
If you are running Node.js where os.arch() === 'ia32' is true, the max value you can set is 3049 (tested with node v11.15.0 on Windows 10).
If you set it to 3050, it overflows and behaves as if set to 1.
If you set it to 4000, it behaves as if set to 951 (4000 - 3049).
Set Memory to Max for Node.js
node --max-old-space-size=3049
Set Memory to Max for Node.js with TypeScript
node -r ts-node/register --max-old-space-size=3049
See: https://github.com/TypeStrong/ts-node/issues/261#issuecomment-402093879
It looks like it's true. When I tried to allocate a 50 MB buffer:
var buf = new Buffer(50*1024*1024);
I got the error:
FATAL ERROR: CALL_AND_RETRY_2 Allocation failed - process out of memory
Meanwhile, Node.js was using about 457 MB of memory according to the process monitor.
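Rather than an external process monitor, you can also ask Node.js itself how much memory it is using, which makes it easier to see how close you are to the limit:

```javascript
// Report the process's current memory use from inside Node.js.
const { rss, heapTotal, heapUsed } = process.memoryUsage();
const mb = (n) => Math.round(n / (1024 * 1024));

console.log(`rss: ${mb(rss)} MB, heap total: ${mb(heapTotal)} MB, heap used: ${mb(heapUsed)} MB`);
```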