Google Cloud Build: How to increase RAM memory? - google-cloud-build

How do I increase RAM memory on Google Cloud Build?
I'm getting this error:
Step #1: FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
I'm using the REST API.
I'm trying to find the RAM memory config, but I only found a property called diskSizeInGb.
The default for diskSizeInGb is 100 GB, and it's just a React app I'm compiling, so I don't think that's the issue.
https://cloud.google.com/cloud-build/docs/api/reference/rest/v1/projects.builds#buildoptions

The RAM of the instance depends on the machine type you are using; if you need more RAM in your build, you will need to use a different value for machineType.
By default, Cloud Build runs builds on an "n1-standard-1" instance, which has 3.75 GB of memory. However, you can change it to an "n1-highcpu-8", which has 7.2 GB (roughly double). You can find the information regarding the instance types over here.
Keep in mind that Cloud Build only accepts "n1-standard-1", "n1-highcpu-8" and "n1-highcpu-32" machines, as mentioned in the documentation, and that each is billed at a different rate.
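Since you're using the REST API, a sketch of where machineType goes in the build request body might look like this (the npm build step is just a placeholder for your React build; the options.machineType field is the relevant part):

```json
{
  "steps": [
    {
      "name": "gcr.io/cloud-builders/npm",
      "args": ["run", "build"]
    }
  ],
  "options": {
    "machineType": "N1_HIGHCPU_8"
  }
}
```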
Hope you find this useful!

Related

Unexplained memory usage on Azure Windows App Service Plan - Drill down missing

We have a memory problem with our Azure Windows App Service Plan (service level is P1v3 with 1 instance – this means 8 GB memory).
We are running two small .NET 6 App Services on it (some web APIs), that use custom containers – without problems.
They’re not in production and receive a very low number of requests.
However, when looking at the service plan’s memory usage in Diagnose and Solve Problems / Memory Analysis, we see an unexplained 80% memory percent usage – in a stable way:
And the real problem occurs when we try to start a third App Service on the plan. We get this "out of memory" error in our log stream:
ERROR - Site: app-name-dev - Unable to start container.
Error message: Docker API responded with status code=InternalServerError,
response={"message":"hcsshim::CreateComputeSystem xxxx:
The paging file is too small for this operation to complete."}
So it looks like Docker doesn't have enough memory to start the container, maybe because of the 80% memory usage?
But our apps actually have very low memory needs. When running them locally on dev machines, we see about 50-150M memory usage (when no requests occur).
In Azure, the private bytes graph in “availability and performance” shows very moderate consumption for the biggest app of the two:
Unfortunately, the “Memory drill down” is unavailable:
(needless to say, waiting hours doesn’t change the message…)
Even more strangely, stopping all App Services on the App Service Plan still leaves the plan showing a Memory Percentage of 60%.
Obviously some memory is being retained by something...
So the questions are:
Is it normal to have 60% memory percentage in an App Service Plan with no App Services running?
If not, could this be due to a memory leak in our app? But App Services run in supposedly isolated containers, so I'm not sure this is possible. Any other explanation is of course welcome :-)
Why can't we access the memory drill down?
Any tips on the best way to fit "small" Docker containers with low memory usage in Azure App Service? (or maybe in another Azure resource type...) It's a bit frustrating to only be able to use 3 GB out of an 8 GB machine...
Further details:
First app is a .NET 6 based app, with its docker image based on aspnet:6.0-nanoserver-ltsc2022
Second app is also a .NET 6 based app, but has some windows DLL dependencies, and therefore is based on aspnet:6.0-windowsservercore-ltsc2022
Thanks in advance!
EDIT:
I added more details and changed the questions a bit since I was able to stop all app services tonight.

How to set memory limit while running golang application in local system

I want to set the heap size for a Go application on my Windows machine.
In Java we used to provide -Xms settings as VM arguments in IntelliJ, but how do I provide a similar setting in Go and set a memory limit for the application?
I tried
<env name="GOMEMLIMIT" value="2750MiB" />
but it's not working.
We are using Go version 1.6.2.
Go 1.19 adds support for a soft memory limit:
The runtime now includes support for a soft memory limit. This memory limit includes the Go heap and all other memory managed by the runtime, and excludes external memory sources such as mappings of the binary itself, memory managed in other languages, and memory held by the operating system on behalf of the Go program. This limit may be managed via runtime/debug.SetMemoryLimit or the equivalent GOMEMLIMIT environment variable.
You can't set a hard limit, as that would make your app malfunction if it needed more memory.
To set a soft limit from your app, simply use:
debug.SetMemoryLimit(2750 * 1 << 20) // 2750 MiB
To set a soft limit outside of your app, use the GOMEMLIMIT env var, e.g.:
GOMEMLIMIT=2750MiB
But please note that doing so may hurt your app's performance: the runtime may run garbage collection more frequently and return memory to the OS more aggressively, even if your app will need it again.

Limiting memory for varnish process

On Varnish 4.1, when I use RAM as the caching backend, the server's RAM gradually fills up as requests come in; once it is completely full, the server crashes and then starts caching in RAM again.
I assigned the following variables in the systemd service file for varnish.service, but it still behaves the same way and crashes again:
LimitMEMLOCK=14336
MemoryLimit=13G
MemoryHigh=13G
MemoryMax=13G
How can I limit Varnish to a specific amount of memory that it cannot exceed?
#Version used:
Varnish 4.1
#Operating System and version:
Ubuntu 16.04
#Source of binary packages used (if any)
Installed from official Ubuntu packages
You will have to limit both malloc and Transient,
i.e. as startup parameters: -s malloc,3GB -s Transient,1GB
In general, the RAM allocated for Varnish should not exceed 80% of the total RAM available on the system.
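A sketch of what that could look like in the systemd unit's ExecStart line (the listen address, admin port, and VCL path below are assumptions; the two -s arguments are the relevant part):

```
ExecStart=/usr/sbin/varnishd -a :6081 -T localhost:6082 \
    -f /etc/varnish/default.vcl \
    -s malloc,3GB \
    -s Transient,1GB
```

Note that Varnish's own per-object bookkeeping overhead comes on top of these store sizes, which is one reason for the 80% guideline above.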

RAM Usage in TeamCity

We have a large TeamCity server (10.0.3), with around 2,000 build configurations and around 50 build agents.
We frequently encounter performance issues related to garbage collection.
Inside teamcity-server.log, we found this:
[2017-11-28 12:30:54,339] WARN - jetbrains.buildServer.SERVER - GC usage exceeded 50% threshold and is now 60%. GC was fired 82987 times since server start and consumed total 18454595ms. Current memory usage: 1.09 GB.
We are unable to figure out the source of the issue.
According to the documentation, a 64-bit version of Java should be used with only 4 GB of RAM. We encountered some issues, and decided to use the -Xmx6g parameter instead.
Do you know where we can enable or find more traces in order to figure out the source of our memory over-consumption?
First, you can try disabling third-party plugins and see if that helps.
Then you can try benchmarking the server according to this blog post and see whether increased memory limits improve the situation.
But the best way to investigate the memory over-consumption is to capture a memory dump and investigate its contents using profiling tools. You can create a memory dump from the Administration | Server Administration | Diagnostics page of your TeamCity web UI using the Dump Memory Snapshot button.
You can investigate the dump on your own or send it to JetBrains for investigation.
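For reference, one common way to pass JVM memory options to the TeamCity server is the TEAMCITY_SERVER_MEM_OPTS environment variable, set before starting the server (the exact values below are illustrative, not a recommendation):

```shell
# Set the server's JVM heap to 6 GB before launching TeamCity
export TEAMCITY_SERVER_MEM_OPTS="-Xmx6g"
```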

JBOSS Configurations

Disclaimer: I am more of a programmer and have little knowledge of JBOSS.
When we deployed the system, it worked properly in the test environment. However, in production, with multiple users and a lot of data being updated and saved, some issues occurred: double updates are being created, and some functions don't work unless the server is restarted. I'm thinking this may be corrected by modifying whatever session or memory parameter JBoss has, so we can avoid restarting the server every time an error occurs.
Question: What parameter or JBoss configuration should we edit to accommodate multiple users and a large number of transactions?
You need to investigate the reason why your application is not behaving the way you want. Some points you can consider:
Log a request with JBoss support.
Try increasing the Java heap size (this can be done by editing an entry in standalone.conf).
Try enabling GC logs to see whether your garbage is being collected properly.
Check for memory leaks in the code.
Try analyzing thread dumps to check whether some of your threads are being blocked.
See whether your server's CPU and memory utilization is high.
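A sketch of the heap and GC-logging settings mentioned above, as they might appear in standalone.conf on Linux (the sizes and log path are illustrative, and the GC-logging flags shown are the pre-JDK-9 ones):

```shell
# Heap size: 1 GB minimum, 2 GB maximum (illustrative values)
JAVA_OPTS="$JAVA_OPTS -Xms1024m -Xmx2048m"
# GC logging, to check whether garbage is being collected properly
JAVA_OPTS="$JAVA_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:/var/log/jboss/gc.log"
```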
I'm not sure which version of JBoss you are using, but if you want to increase the JVM memory you can modify the following line in the run.bat file in your bin folder:
set JAVA_OPTS=%JAVA_OPTS% -Xms128m -Xmx512m
-Xms is the minimum heap size and -Xmx is the maximum. If you think the problem is due to a lack of resources, you may want to increase it to something like this:
set JAVA_OPTS=%JAVA_OPTS% -Xms512m -Xmx1024m
Does everything work normally with only one or very few users on the system? To me it seems more like a coding issue.
Check the JVM heap size and set it according to the RAM available on the machine.
You should do performance testing before you deploy to production. You can check the behaviour of your code during performance testing using a monitoring tool or JMX.
Tune parameters like the heap size and the GC algorithm; you might want to define a fixed size for the young generation.
Tune the thread pools as well.
https://developer.jboss.org/wiki/ThreadPoolConfiguration?_sscc=t