Using mikefarah/yq (4.25.3), I am trying to replace an empty map in a YAML file with a map stored in a string.
This is the map data:
RESOURCES=$(cat <<EOF
limits:
  cpu: 4000m
  memory: 3600Mi
requests:
  cpu: 500m
  memory: 900Mi
EOF
)
And this is what I am trying to execute:
yq -i ".cluster.resources = \"${RESOURCES}\"" values.yaml
As a result I get a multiline string (instead of a map):
resources: |-
  limits:
    cpu: 4000m
    memory: 3600Mi
  requests:
    cpu: 500m
    memory: 900Mi
How do I insert a map instead?
resources:
  limits:
    cpu: 4000m
    memory: 3600Mi
  requests:
    cpu: 500m
    memory: 900Mi
Use the env() operator to load the environment variable into yq:
RESOURCES=$(cat <<EOF
limits:
  cpu: 4000m
  memory: 3600Mi
requests:
  cpu: 500m
  memory: 900Mi
EOF
) yq --null-input ".cluster.resources = env(RESOURCES)"
Will produce:
cluster:
  resources:
    limits:
      cpu: 4000m
      memory: 3600Mi
    requests:
      cpu: 500m
      memory: 900Mi
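The difference is that env(RESOURCES) parses the variable's contents as YAML, which is why it yields a map, whereas interpolating the variable inside quotes (as in the question) passes it through as a single string scalar. To update values.yaml in place as originally intended, the same pattern should also work with -i instead of --null-input (a sketch, untested):
RESOURCES=$(cat <<EOF
limits:
  cpu: 4000m
  memory: 3600Mi
requests:
  cpu: 500m
  memory: 900Mi
EOF
) yq -i '.cluster.resources = env(RESOURCES)' values.yaml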
I wish to run k6 in a container with a simple JavaScript load test from the local file system, but it seems the commands below have some syntax error.
$ cat simple.js
import http from 'k6/http';
import { sleep } from 'k6';
export const options = {
  vus: 10,
  duration: '30s',
};
export default function () {
  http.get('http://100.96.1.79:8080');
  sleep(1);
}
$ kubectl run k6 --image=grafana/k6 -- run - <simple.js
# OR
$ kubectl run k6 --image=grafana/k6 run - <simple.js
In the k6 pod log, I got:
time="2023-02-16T12:12:05Z" level=error msg="could not initialize '-': could not load JS test 'file:///-': no exported functions in s…
I guess this means the simple.js is not really passed to k6 this way?
thank you!
I don't think you can pipe (host) files into Kubernetes containers this way: the < redirect feeds your local kubectl process, and kubectl run does not forward its stdin to the container unless you attach it (e.g. with -i).
One way that should work is to:
Create a ConfigMap to represent your file
Apply a Pod config that mounts the ConfigMap file
NAMESPACE="..." # Or default
kubectl create configmap simple \
--from-file=${PWD}/simple.js \
--namespace=${NAMESPACE}
kubectl get configmap/simple \
--output=yaml \
--namespace=${NAMESPACE}
Yields:
apiVersion: v1
kind: ConfigMap
metadata:
  name: simple
data:
  simple.js: |
    import http from 'k6/http';
    import { sleep } from 'k6';
    export default function () {
      http.get('http://test.k6.io');
      sleep(1);
    }
NOTE You could just create e.g. configmap.yaml with the above YAML content and apply it.
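For example (assuming you saved it as configmap.yaml):
kubectl apply \
  --filename=${PWD}/configmap.yaml \
  --namespace=${NAMESPACE}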
Then with pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: simple
spec:
  containers:
    - name: simple
      image: docker.io/grafana/k6
      args:
        - run
        - /m/simple.js
      volumeMounts:
        - name: simple
          mountPath: /m
  volumes:
    - name: simple
      configMap:
        name: simple
Apply it:
kubectl apply \
--filename=${PWD}/pod.yaml \
--namespace=${NAMESPACE}
Then, finally:
kubectl logs pod/simple \
--namespace=${NAMESPACE}
Yields:
[k6 ASCII-art banner]
execution: local
script: /m/simple.js
output: -
scenarios: (100.00%) 1 scenario, 1 max VUs, 10m30s max duration (incl. graceful stop):
* default: 1 iterations for each of 1 VUs (maxDuration: 10m0s, gracefulStop: 30s)
running (00m01.0s), 1/1 VUs, 0 complete and 0 interrupted iterations
default [ 0% ] 1 VUs 00m01.0s/10m0s 0/1 iters, 1 per VU
running (00m01.4s), 0/1 VUs, 1 complete and 0 interrupted iterations
default ✓ [ 100% ] 1 VUs 00m01.4s/10m0s 1/1 iters, 1 per VU
data_received..................: 17 kB 12 kB/s
data_sent......................: 542 B 378 B/s
http_req_blocked...............: avg=128.38ms min=81.34ms med=128.38ms max=175.42ms p(90)=166.01ms p(95)=170.72ms
http_req_connecting............: avg=83.12ms min=79.98ms med=83.12ms max=86.27ms p(90)=85.64ms p(95)=85.95ms
http_req_duration..............: avg=88.61ms min=81.28ms med=88.61ms max=95.94ms p(90)=94.47ms p(95)=95.2ms
{ expected_response:true }...: avg=88.61ms min=81.28ms med=88.61ms max=95.94ms p(90)=94.47ms p(95)=95.2ms
http_req_failed................: 0.00% ✓ 0 ✗ 2
http_req_receiving.............: avg=102.59µs min=67.99µs med=102.59µs max=137.19µs p(90)=130.27µs p(95)=133.73µs
http_req_sending...............: avg=67.76µs min=40.46µs med=67.76µs max=95.05µs p(90)=89.6µs p(95)=92.32µs
http_req_tls_handshaking.......: avg=44.54ms min=0s med=44.54ms max=89.08ms p(90)=80.17ms p(95)=84.62ms
http_req_waiting...............: avg=88.44ms min=81.05ms med=88.44ms max=95.83ms p(90)=94.35ms p(95)=95.09ms
http_reqs......................: 2 1.394078/s
iteration_duration.............: avg=1.43s min=1.43s med=1.43s max=1.43s p(90)=1.43s p(95)=1.43s
iterations.....................: 1 0.697039/s
vus............................: 1 min=1 max=1
vus_max........................: 1 min=1 max=1
Tidy:
kubectl delete \
--filename=${PWD}/pod.yaml \
--namespace=${NAMESPACE}
kubectl delete configmap/simple \
--namespace=${NAMESPACE}
kubectl delete namespace/${NAMESPACE}
I am creating a multi-stage image build with podman, and have the application.yml set to deploy to OpenShift. The image gets built locally but does not get deployed. I have tried with both GraalVM and Mandrel. Any pointers on what I am doing wrong?
podman build -f src/main/docker/Dockerfile.multistage .
#FROM quay.io/quarkus/ubi-quarkus-native-image:22.0-java11 AS build
#FROM quay.io/quarkus/ubi-quarkus-mandrel:22.0-java11 AS build
FROM quay.io/quarkus/ubi-quarkus-mandrel:22.0-java11 AS build
USER root
COPY --chown=quarkus:quarkus certs /tmp/certs
COPY --chown=quarkus:quarkus mvnw /code/mvnw
COPY --chown=quarkus:quarkus .mvn /code/.mvn
COPY --chown=quarkus:quarkus pom.xml /code/
COPY --chown=quarkus:quarkus .env /code/.env
RUN for f in /tmp/certs/* ; \
    do keytool -import -trustcacerts -keystore $JAVA_HOME/lib/security/cacerts -storepass changeit -alias "$(basename $f)" -noprompt -file $f ; \
    done;
USER quarkus
WORKDIR /code
RUN ./mvnw -B org.apache.maven.plugins:maven-dependency-plugin:3.1.2:go-offline
COPY src /code/src
RUN ./mvnw package -Pnative -Dquarkus.native.native-image-xmx=6g -Drelease.version=1.0
## Stage 2: create the final docker image
FROM quay.io/quarkus/quarkus-micro-image:1.0
WORKDIR /work/
COPY --from=build /code/target/*-runner /work/application
# set up permissions for user `1001`
RUN chmod 775 /work /work/application \
&& chown -R 1001 /work \
&& chmod -R "g+rwX" /work \
&& chown -R 1001:root /work
EXPOSE 9443
USER 1001
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
application.yml (the variables are defined in the .env file):
quarkus:
  kubernetes:
    deployment-target: openshift
    deploy: ${DEPLOY:false}
  kubernetes-client:
    master-url: ${MASTER_URL}
    trust-certs: true
    token: ${TOKEN}
  container-image:
    builder: docker
    push: ${PUSH_IMAGE:false}
#    docker.build-args:
#  native:
#    builder-image: mandrel
  datasource:
    db-kind: db2
    jdbc:
      url: jdbc:db2://${DB_URL:testing}:${DB_PORT:60000}/${DB_DATABASENAME:testing}
    username: ${DB_USERID:testing}
    password: ${DB_PWD:testing}
    metrics:
      enabled: ${DATASOURCE_METRICS_ENABLED:true}
#    devservices:
#      enabled: false
  ssl:
    native: true
#  prometheus:
#    port: ${HTTP_SSL_PORT:9443}
#    scheme: https
  http:
    port: ${HTTP_PORT:9080}
    test-port: 8080
    ssl-port: ${HTTP_SSL_PORT:9443}
    root-path: /
    non-application-root-path: ${quarkus.http.root-path}
#    insecure-requests: redirect/disabled
    ssl:
      certificate:
#        key-store-password: changeit
        files: /var/run/secrets/openshift.io/svc-certs/tls.crt
        key-files: /var/run/secrets/openshift.io/svc-certs/tls.key
  health:
    extensions:
      enabled: true
  smallrye-health:
    root-path: /health
    liveness-path: liveness
    readiness-path: readiness
  hibernate-orm:
    validate-in-dev-mode: false
    database:
      generation: none
  openshift:
#    jvm-dockerfile: src/main/docker/Dockerfile.multistage
    build-strategy: docker
    name: ${CONTAINER_NAME:}
#    namespace: ${NAME_SPACE}
    replicas: ${INITIAL_REPLICAS:1}
#    service-type: cluster-ip
    image-pull-policy: always
    env:
      secrets: ${SECRETS:}
      configmaps: ${CONFIGMAPS:}
    ports:
      tcp-9080:
        container-port: ${HTTP_PORT:9080}
        protocol: TCP
      9443-tcp:
        container-port: ${HTTP_SSL_PORT:9443}
        protocol: TCP
    deployment-kind: DeploymentConfig
    labels:
      app: es-mpsv7
      stack: quarkus
    resources:
      limits:
        cpu: ${RESOURCES_LIMIT_CPU:}
        memory: ${RESOURCES_LIMIT_MEMORY:}
      requests:
        cpu: ${RESOURCES_REQUEST_CPU:}
        memory: ${RESOURCES_REQUEST_MEMORY:}
    liveness-probe:
      http-action-path: ${LIVENESS_PATH}
      initial-delay: ${LIVENESS_INITIAL_DELAY}
      period: ${LIVENESS_PERIOD}
      timeout: ${LIVENESS_TIMEOUT}
      success-threshold: ${LIVENESS_SUCCESS_THRESHOLD}
      failure-threshold: ${LIVENESS_FAILURE_THRESHOLD}
    readiness-probe:
      http-action-path: ${READINESS_PATH}
      initial-delay: ${READINESS_INITIAL_DELAY}
      period: ${READINESS_PERIOD}
      timeout: ${READINESS_TIMEOUT}
      success-threshold: ${READINESS_SUCCESS_THRESHOLD}
      failure-threshold: ${READINESS_FAILURE_THRESHOLD}
    mounts:
      svc-certs:
        name: svc-certs
        path: /var/run/secrets/openshift.io/svc-certs
        read-only: true
    secret-volumes:
      svc-certs:
        secret-name: svc-certs
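For clarity: with this config, deploy and push stay off unless the corresponding variables are set when the Maven build runs (from the .env file or the environment), e.g. this hypothetical invocation of the same package step the Dockerfile runs:
DEPLOY=true PUSH_IMAGE=true ./mvnw package -Pnative -Dquarkus.native.native-image-xmx=6g -Drelease.version=1.0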
I manually pushed the image to OpenShift and deployed it with a DeploymentConfig, but only port 8080 is exposed.
Given a docker compose file:
version: '3'
services:
  service1:
    image: image:name
    environment:
      - TEMP_ID=1928
    volumes:
      - type: bind
        source: local/path/to
        target: container/path/to
    ports:
      - 8900:8580
  service2:
    image: image:name
    environment:
      - TEMP_ID=1451
    volumes:
      - type: bind
        source: local/path/to/1451
        target: container/path/to
    ports:
      - 8901:8580
I am limited to writing a bash script that adds a service to the above template based on its content. Some of the values are copied directly from the last service in the services map; some fields need modification. I managed to extract and prepare all the values I need to add, but I am stuck on creating the service object and adding it to the existing file.
The script I have so far is:
#!/bin/bash
SERVICE_NAME=$1
TEMP_ID_ARG=$2

# extract data to copy from last configured client.
SERVICES=($(yq eval '.services | keys' docker-compose.yml))
LAST_SERVICE=${SERVICES[${#SERVICES[@]}-1]}
echo "adding user based on last service: $LAST_SERVICE"
IMAGE=$(yq -P -C eval '.services["'$LAST_SERVICE'"].image' docker-compose.yml)
ENVIRONEMNT_ENTRY="TEMP_ID=${TEMP_ID_ARG}"
TARGET_PATH_IN_CONTAINER=$(yq -P -C eval '.services["'$LAST_SERVICE'"].volumes[0].target' docker-compose.yml)
VOLUME_TYPE=$(yq -P -C eval '.services["'$LAST_SERVICE'"].volumes[0].type' docker-compose.yml)
LOCAL_PATH_TO_SOURCES=$(yq -P -C eval '.services["'$LAST_SERVICE'"].volumes[0].source' docker-compose.yml)
PATH_AS_ARRAY=($(echo $LOCAL_PATH_TO_SOURCES | tr "\/" " "))
PATH_AS_ARRAY[${#PATH_AS_ARRAY[@]}-1]=$TEMP_ID_ARG
NEW_PATH_TO_RESOURCE=$(printf "/%s" "${PATH_AS_ARRAY[@]}")

# extract port mapping, take first argument (exposed port), increment its value by 1 (no upper limitation),
# join back together with : delimiter.
PORT_MAPING_AS_ARRAY=($(yq -P -C eval '.services["'$LAST_SERVICE'"].ports[0]' docker-compose.yml | tr ":" " "))
# NO UPPER LIMITATION FOR PORT!!!
PORT_MAPING_AS_ARRAY[0]=$(expr $PORT_MAPING_AS_ARRAY + 1)
NEW_PORT_MAPPING=$(printf ":%s" "${PORT_MAPING_AS_ARRAY[@]}")
NEW_PORT_MAPPING=${NEW_PORT_MAPPING:1}
VOLUME_ENTRY=$(yq -P -C eval --null-input '.type = "'$VOLUME_TYPE'" | .source = "'$NEW_PATH_TO_RESOURCE'" | .target = "'$TARGET_PATH_IN_CONTAINER'"')
test=$(yq -P -C -v eval --null-input ' .'$SERVICE_NAME'.image = "'$IMAGE'" | .'$SERVICE_NAME'.environement = ["'$ENVIRONEMNT_ENTRY'"] | (.'$SERVICE_NAME'.volumes = ["'$VOLUME_ENTRY'"] | .'$SERVICE_NAME'.ports = ["'NEW_PORT_MAPPING'"])')
echo $test
When I run what I thought was a working assembly of all the parts, it returns the following error:
Error: cannot pass files in when using null-input flag
The expected output when calling add_service.sh service3 1234 on the above input file:
version: '3'
services:
  service1:
    image: image:name
    environment:
      - TEMP_ID=1928
    volumes:
      - type: bind
        source: local/path/to
        target: container/path/to
    ports:
      - 8900:8580
  service2:
    image: image:name
    environment:
      - TEMP_ID=1451
    volumes:
      - type: bind
        source: local/path/to/1451
        target: container/path/to
    ports:
      - 8901:8580
  service3:
    image: image:name
    environment:
      - TEMP_ID=1234
    volumes:
      - type: bind
        source: local/path/to/1234
        target: container/path/to
    ports:
      - 8902:8580
As my bash skills are not so strong, I welcome any advice or a better solution to my problem.
You should do it all with one yq call, passing the external values as variables; that will make the code safer and faster:
service_name="service3" \
temp_id="1234" \
environment="TEMP_ID=1234" \
yq eval '
.services[ .services | keys | .[-1] ] as $last |
.services[ strenv(service_name) ] = {
"image": $last.image,
"environment": [ strenv(environment) ],
"volumes": [ {
"type": $last.volumes[0].type,
"source": (
$last.volumes[0].source |
split("/") |
.[-1] = strenv(temp_id) |
join("/")
),
"target": $last.volumes[0].target
} ],
"ports": [ (
"" + $last.ports[0] |
split(":") |
.[0] tag = "!!int" |
.[0] += 1 |
join(":") |
. tag = ""
) ]
}
' docker-compose.yml
Output:
version: '3'
services:
  service1:
    image: image:name
    environment:
      - TEMP_ID=1928
    volumes:
      - type: bind
        source: local/path/to
        target: container/path/to
    ports:
      - 8900:8580
  service2:
    image: image:name
    environment:
      - TEMP_ID=1451
    volumes:
      - type: bind
        source: local/path/to/1451
        target: container/path/to
    ports:
      - 8901:8580
  service3:
    image: image:name
    environment:
      - TEMP_ID=1234
    volumes:
      - type: bind
        source: local/path/to/1234
        target: container/path/to
    ports:
      - 8902:8580
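To make this a drop-in replacement for add_service.sh that edits the file in place, the same expression should work with yq's -i flag and the script's arguments passed through as variables (a sketch, untested):
#!/bin/bash
# add_service.sh <service_name> <temp_id> -- wires the positional args into
# the yq expression above; -i edits docker-compose.yml in place.
service_name="$1" \
temp_id="$2" \
environment="TEMP_ID=$2" \
yq eval -i '
  .services[ .services | keys | .[-1] ] as $last |
  .services[ strenv(service_name) ] = {
    "image": $last.image,
    "environment": [ strenv(environment) ],
    "volumes": [ {
      "type": $last.volumes[0].type,
      "source": ($last.volumes[0].source | split("/") | .[-1] = strenv(temp_id) | join("/")),
      "target": $last.volumes[0].target
    } ],
    "ports": [ ("" + $last.ports[0] | split(":") | .[0] tag = "!!int" | .[0] += 1 | join(":") | . tag = "") ]
  }
' docker-compose.yml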
I have log files that are broken down into between 1 and 4 "Tasks". In each "Task" there are sections for "WU name" and "estimated CPU time remaining". Ultimately, I want the bash script's output to look like this 3-Task example:
Task 1 Mini_Protein_binds_COVID-19_boinc_ 0d:7h:44m:28s
Task 2 shapeshift_pair6_msd4X_4_f_e0_161_ 0d:4h:14m:22s
Task 3 rep730_0078_symC_reordered_0002_pr 1d:1h:38m:41s
So far: I can count the Tasks in the log; I can isolate the characters I want from the "WU name"; I can convert the "estimated CPU time remaining" from seconds to days:hours:minutes:seconds; and I can output all of that in 'pretty' columns. The problem is that I can only process one Task, using:
# Initialize counter
counter=1
# Count how many iterations
cnt_wu=`grep -c "WU name:" /mnt/work/sec-conv/bnc-sample3.txt`
# Iterate the loop for cnt-wu times
while [ $counter -le ${cnt_wu} ]
do
core_cnt=$counter
wu=`cat /mnt/work/sec-conv/bnc-sample3.txt | grep -Po 'WU name: \K.*' | cut -c1-34`
sec=`cat /mnt/work/sec-conv/bnc-sample3.txt | grep -Po 'estimated CPU time remaining: \K.*' | cut -f1 -d"."`
dhms=`printf '%dd:%dh:%dm:%ds\n' $(($sec/86400)) $(($sec%86400/3600)) $(($sec%3600/60)) $(($sec%60))`
echo "Task ${core_cnt}" $'\t' $wu $'\t' $dhms | column -ts $'\t'
counter=$((counter + 1))
done
Note: /mnt/work/sec-conv/bnc-sample3.txt is a static one-Task sample only used for this script's development.
What I can't figure out is the next step: processing multiple Tasks. I can't work out how to leverage the while/counter combination properly, or how to increment through the occurrences of Tasks.
Adding bnc-sample.txt (contains 3 Tasks)
1) -----------
name: Rosetta#home
master URL: https://boinc.bakerlab.org/rosetta/
user_name: XXXXXXX
team_name:
resource share: 100.000000
user_total_credit: 10266.993660
user_expavg_credit: 512.420495
host_total_credit: 10266.993660
host_expavg_credit: 512.603669
nrpc_failures: 0
master_fetch_failures: 0
master fetch pending: no
scheduler RPC pending: no
trickle upload pending: no
attached via Account Manager: no
ended: no
suspended via GUI: no
don't request more work: no
disk usage: 0.000000
last RPC: Wed Jun 10 15:55:29 2020
project files downloaded: 0.000000
GUI URL:
name: Message boards
description: Correspond with other users on the Rosetta#home message boards
URL: https://boinc.bakerlab.org/rosetta/forum_index.php
GUI URL:
name: Your account
description: View your account information
URL: https://boinc.bakerlab.org/rosetta/home.php
GUI URL:
name: Your tasks
description: View the last week or so of computational work
URL: https://boinc.bakerlab.org/rosetta/results.php?userid=XXXXXXX
jobs succeeded: 117
jobs failed: 0
elapsed time: 2892439.609931
cross-project ID: 3538b98e5f16a958a6bdd2XXXXXXXXX
======== Tasks ========
1) -----------
name: shapeshift_pair6_msd4X_4_f_e0_161_X_0001_0001_fragments_abinitio_SAVE_ALL_OUT_946179_730_0
WU name: shapeshift_pair6_msd4X_4_f_e0_161_X_0001_0001_fragments_abinitio_SAVE_ALL_OUT_946179_730
project URL: https://boinc.bakerlab.org/rosetta/
received: Mon Jun 8 09:58:08 2020
report deadline: Thu Jun 11 09:58:08 2020
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 420
resources: 1 CPU
estimated CPU time remaining: 26882.771040
slot: 1
PID: 28434
CPU time at last checkpoint: 3925.896000
current CPU time: 4314.761000
fraction done: 0.066570
swap size: 431 MB
working set size: 310 MB
2) -----------
name: rep730_0078_symC_reordered_0002_propagated_0001_0001_0001_A_v9_fold_SAVE_ALL_OUT_946618_54_0
WU name: rep730_0078_symC_reordered_0002_propagated_0001_0001_0001_A_v9_fold_SAVE_ALL_OUT_946618_54
project URL: https://boinc.bakerlab.org/rosetta/
received: Mon Jun 8 09:58:08 2020
report deadline: Thu Jun 11 09:58:08 2020
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 420
resources: 1 CPU
estimated CPU time remaining: 26412.937920
slot: 2
PID: 28804
CPU time at last checkpoint: 3829.626000
current CPU time: 3879.975000
fraction done: 0.082884
swap size: 628 MB
working set size: 513 MB
3) -----------
name: Mini_Protein_binds_COVID-19_boinc_site3_2_SAVE_ALL_OUT_IGNORE_THE_REST_0aw6cb3u_944116_2_0
WU name: Mini_Protein_binds_COVID-19_boinc_site3_2_SAVE_ALL_OUT_IGNORE_THE_REST_0aw6cb3u_944116_2
project URL: https://boinc.bakerlab.org/rosetta/
received: Mon Jun 8 09:58:47 2020
report deadline: Thu Jun 11 09:58:46 2020
ready to report: no
state: downloaded
scheduler state: scheduled
active_task_state: EXECUTING
app version num: 420
resources: 1 CPU
estimated CPU time remaining: 27868.559616
slot: 0
PID: 30988
CPU time at last checkpoint: 1265.356000
current CPU time: 1327.603000
fraction done: 0.032342
swap size: 792 MB
working set size: 668 MB
Again, I appreciate any guidance!
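One way to process every Task (a sketch, assuming GNU grep and the log format shown above; untested) is to read the two field lists into parallel arrays up front, then loop over the indices:
#!/bin/bash
log=/mnt/work/sec-conv/bnc-sample.txt

# Read every "WU name" and every remaining-seconds value into parallel arrays,
# so each Task pairs up with its own time.
mapfile -t wus  < <(grep -Po 'WU name: \K.*' "$log" | cut -c1-34)
mapfile -t secs < <(grep -Po 'estimated CPU time remaining: \K.*' "$log" | cut -f1 -d".")

for i in "${!wus[@]}"; do
    sec=${secs[$i]}
    dhms=$(printf '%dd:%dh:%dm:%ds' $((sec/86400)) $((sec%86400/3600)) $((sec%3600/60)) $((sec%60)))
    printf 'Task %d\t%s\t%s\n' $((i+1)) "${wus[$i]}" "$dhms"
done | column -ts $'\t'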
In Go I can create goroutines like this (EDITED as reported by kelu-thatsall's answer):
// test.go
package main

import (
    "fmt"
    "os"
    "runtime"
    "strconv"
    "sync"
)

func main() {
    var wg sync.WaitGroup
    if len(os.Args) < 2 {
        os.Exit(1)
    }
    k, err := strconv.Atoi(os.Args[1])
    if err != nil {
        os.Exit(2)
    }
    wg.Add(k * 1000)
    for z := 0; z < k*1000; z++ {
        go func(x int) {
            defer wg.Done()
            fmt.Println(x)
        }(z)
        if z%k == k-1 {
            // #mattn: avoid busy loop, so Go can start processing like BEAM does
            runtime.Gosched()
        }
    }
    wg.Wait()
}
The result in Go 1.8.0 (64-bit):
# shell
$ go build test.go ; for k in 5 50 500 5000 50000 500000; do echo -n $k; time ./test $k > /dev/null; done
5
CPU: 0.00s Real: 0.00s RAM: 2080KB
50
CPU: 0.06s Real: 0.01s RAM: 3048KB
500
CPU: 0.61s Real: 0.12s RAM: 7760KB
5000
CPU: 6.02s Real: 1.23s RAM: 17712KB # 17 MB
50000
CPU: 62.30s Real: 12.53s RAM: 207720KB # 207 MB
500000
CPU: 649.47s Real: 131.53s RAM: 3008180KB # 3 GB
What's the equivalent code in Erlang or Elixir? (EDITED as reported by patrick-oscity's comment)
What I've tried so far is the following:
# test.exs
defmodule Recursion do
  def print_multiple_times(n) when n <= 1 do
    spawn fn -> IO.puts n end
  end

  def print_multiple_times(n) do
    spawn fn -> IO.puts n end
    print_multiple_times(n - 1)
  end
end

[x] = System.argv()
{k, _} = Integer.parse(x)
k = k * 1000
Recursion.print_multiple_times(k)
The result in elixir 1.4.2 (erts-8.2.2):
# shell
$ for k in 5 50 500 5000 50000 ; do echo -n $k; time elixir --erl "+P 90000000" test.exs $k > /dev/null; done
5
CPU: 0.53s Real: 0.50s RAM: 842384KB # 842 MB
50
CPU: 1.50s Real: 0.62s RAM: 934276KB # 934 MB
500
CPU: 11.92s Real: 2.53s RAM: 1675872KB # 1.6 GB
5000
CPU: 122.65s Real: 20.20s RAM: 4336116KB # 4.3 GB
50000
CPU: 1288.65s Real: 209.66s RAM: 6573560KB # 6.5 GB
But I'm not sure if the two are equivalent. Are they ?
EDIT: Shortened version, as per mudasobwa's comment; it does not give correct output
# test2.exs
[x] = System.argv()
{k, _} = Integer.parse(x)
k = k * 1000
1..k |> Enum.each(fn n -> spawn fn -> IO.puts n end end)
The result for k in 5 50 500 5000 50000 ; do echo -n $k; time elixir --erl "+P 90000000" test.exs $k | wc -l ; done:
5
CPU: 0.35s Real: 0.41s RAM: 1623344KB # 1.6 GB
2826 # does not complete, this should be 5000
50
CPU: 1.08s Real: 0.53s RAM: 1691060KB # 1.6 GB
35062
500
CPU: 8.69s Real: 1.70s RAM: 2340200KB # 2.3 GB
373193
5000
CPU: 109.95s Real: 18.49s RAM: 4980500KB # 4.9 GB
4487475
50000
erl_child_setup closed
Crash dump is being written to: erl_crash.dump...Command terminated by signal 9
CPU: 891.35s Real: 157.52s RAM: 24361288KB # 24.3 GB
Not testing 500M (k=500000) for Elixir because it took too long, and the +P 500000000 argument is rejected as a bad number of processes.
I'm sorry guys, but I'm not convinced that this code in Go is really working as expected. I'm not an expert, so please correct me if I'm wrong. First of all, it prints z, which (when captured by the closure rather than passed in as a parameter) is read at run time with its current value from the enclosing scope, usually already k*1000: https://play.golang.org/p/a4TJyjKBQh
// test.go
package main

import (
    "fmt"
    "time"
)

func main() {
    for z := 0; z < 1000; z++ {
        go func(x int) { // I'm passing z to the function with its current value now
            fmt.Println(x)
        }(z)
    }
    time.Sleep(1 * time.Nanosecond)
}
Also, if I comment out the Sleep, the program exits before even starting any goroutines (at least it doesn't print out the results). I would be happy to know if I'm doing something wrong, but from this simple example it seems the problem is not with Elixir but with the Go code provided. Any Go gurus out there?
I've also run some tests on my local machine:
go run test.go 500 | wc -l
72442 # expected 500000
go run test.go 5000 | wc -l
76274 # expected 5000000