I am using MLflow to log metrics and artefacts in the AzureML workspace. With autolog, TensorFlow training metrics are available in the experiment run in the AzureML workspace. Along with the auto-logged metrics, I want to log extra metrics and plots in the same experiment run. When I do this with MLflow, it creates a new experiment run.
Auto logging:
mlflow.autolog()
Manual logging:
mlflow.log_metric("label-A", random.randint(80, 90))
Expected:
Manually logged metrics are available in the same experiment run.
Instead of using the module-level call mlflow.log_metric to log the metrics, use the MlflowClient client, whose logging methods take a run_id parameter.
The following code logs the metric against the run_id passed as the parameter.
from mlflow.tracking import MlflowClient
from azureml.core import Run

# ID of the current AzureML run (the same run that autolog writes to)
run_id = Run.get_context(allow_offline=True).id

# log against that run explicitly instead of starting a new one
MlflowClient().log_metric(run_id, "precision", 0.91)
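If you also want the plots mentioned in the question in that same run, the client API covers that too. A minimal sketch, assuming matplotlib and an MLflow version that provides MlflowClient.log_figure (1.13+); it reuses the run_id obtained above:

import matplotlib.pyplot as plt

# the figure is stored as an artifact of the existing run, not a new one
fig, ax = plt.subplots()
ax.plot([0, 1, 2], [0.80, 0.85, 0.90])
MlflowClient().log_figure(run_id, fig, "plots/label-A.png")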
I have dockerized my DBT solution and I launch it in AWS Fargate (triggered from Airflow). However, Fargate requires about 1 minute to start running (image pull + resource provisioning + etc.), which is acceptable for long-running executions (hours), but not for short ones (1-5 minutes).
I'm trying to run my docker container in AWS Lambda instead of AWS Fargate for short executions, but I encountered several problems during this migration.
The one I cannot fix is related to the message below, which appears when running dbt deps --profiles-dir . && dbt run -t my_target --profiles-dir . --select my_model
Running with dbt=0.21.0
Encountered an error:
[Errno 38] Function not implemented
It says a function is not implemented, but I cannot see anywhere which function that is. Since it appears while installing the dbt packages (redshift and dbt_utils), I tried to download them and include them in the docker image (setting local paths in packages.yml), but nothing changed. Moreover, DBT writes no logs at this phase (I set the log-path to /tmp in dbt_project.yml so that it has write permissions within the Lambda), so I'm blind.
Digging into this problem, I've found that it can be related to multiprocessing issues within AWS Lambda (my docker image contains python scripts), as stated in https://github.com/dbt-labs/dbt-core/issues/2992. I run DBT from python using the subprocess library.
Since it may be a multiprocessing issue, I have also tried setting "threads": 1 in profiles.yml, but it did not solve the problem.
Has anyone succeeded in deploying DBT in AWS Lambda?
I've recently been trying to do this, and the summary of what I've found is that it seems to be possible, but isn't worth it.
You can pretty easily build a Lambda Layer that includes dbt & the provider you want to use, but you'll also need to patch the multiprocessing behavior and invoke dbt.main from within the Lambda code. Once you've jumped through all those hoops, you're left with a dbt instance that is limited to a relatively small upper bound on memory, a 15-minute maximum runtime, and is throttled to a single thread.
This discussion gives a rough example of what's needed to get it running in Lambda: https://github.com/dbt-labs/dbt-core/issues/2992#issuecomment-919288906
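For reference, a rough, hypothetical sketch of the kind of handler that approach ends up with (it assumes the multiprocessing patching from the linked issue is already in place, and uses the dbt.main entry point available in dbt 0.x/1.x; names like my_target, my_model and the project path are placeholders):

# handler.py - hypothetical Lambda entry point that runs dbt in-process
import os
from dbt.main import handle_and_check  # programmatic CLI entry point in older dbt versions

def lambda_handler(event, context):
    os.chdir("/var/task/dbt_project")  # dbt project baked into the container image
    args = [
        "run",
        "-t", "my_target",
        "--profiles-dir", ".",
        "--select", event.get("model", "my_model"),
        "--threads", "1",  # keep dbt single-threaded inside Lambda
    ]
    results, success = handle_and_check(args)
    return {"success": success}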
All that said, I'd love to put dbt on a Lambda and I hope dbt's multiprocessing will one day support it.
What makes Kibana not show docker container logs on the APM "Transactions" page under the "Logs" tab?
I verified the logs are successfully being generated with the "trace.id" associated for proper linking.
I have the exact same environment and configs (7.16.2) up via docker-compose and it works perfectly.
I could not figure out why this feature works locally but does not show up in the Elastic Cloud deployment.
UPDATE with Solution:
I just solved the problem.
It's related to the Filebeat version.
From 7.16.0 onward, the transaction/log linking stops working.
Reverted Filebeat back to version 7.15.2 and it started working again.
If you are not using Filebeat: for example, we rolled our own logging implementation to send logs from a queue in batches using the Bulk API.
We have our own "ElasticLog" class and then use attributes to match the logs-* schema for the log stream.
In particular, we had to make sure that trace.id was the same as the actual trace's trace.id property. Then the logs started to show up there (it does take a few minutes sometimes).
Some more info on how to get the IDs:
We use the OpenTelemetry exporter for traces and ILoggerProvider for logs. They fire off batches independently of each other.
We populate the trace IDs at the time the class is instantiated, as a default value. This way you are still in the context of the Activity. It also helps to set the timestamp at exactly the moment the log entry was created.
This LogEntry then gets passed into the ElasticLogger processor and mapped, as displayed above, to the ElasticLog entry with the attributes needed for ES.
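To illustrate the idea (a hypothetical Python sketch rather than our actual C# implementation; the endpoint, index name and message are placeholders), the essential part is that every log document carries the same trace.id as the APM transaction and is bulk-indexed into a logs-* data stream:

from datetime import datetime, timezone
from elasticsearch import Elasticsearch, helpers
from opentelemetry import trace

es = Elasticsearch("https://my-deployment.es.example.com:9243", api_key="...")

def make_log_entry(message, level="info"):
    # capture the id of the currently active span so the log links to the APM trace
    ctx = trace.get_current_span().get_span_context()
    return {
        "_index": "logs-myapp-default",
        "@timestamp": datetime.now(timezone.utc).isoformat(),
        "message": message,
        "log.level": level,
        "trace.id": format(ctx.trace_id, "032x"),  # must equal the transaction's trace.id
    }

# flushed from a queue in batches via the Bulk API, as described above
helpers.bulk(es, [make_log_entry("order processed")])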
I have an Apache Beam pipeline reading from an AWS Elasticsearch cluster. The pipeline code is as follows:
PCollection<String> output =
    pipeline.apply(
        ElasticsearchIO.read().withConnectionConfiguration(
            ElasticsearchIO.ConnectionConfiguration.create(
                hostName,
                options.getElasticSearchIndex(),
                options.getElasticSearchType()
            )
        )
    );

output.apply(TextIO.write().to("gs://<bucket-name>/test.txt")
    .withSuffix(".txt"));

pipeline.run();
The job is deployed without any errors. I set the maxNumWorkers to 3 just to test my code initially. But the pipeline stalls and does not process any of the data.
When I look at the Google Cloud Logs, I see the following log entries:
Proposing dynamic split of work unit <project>;<hash> at {"fractionConsumed":0.5}
Rejecting split request because custom reader returned null residual source.
I see these logs getting generated over and over again. I let the pipeline run for 20-30 mins but there seems to be no progress.
I'm not sure how to debug this issue. Any thoughts on how to proceed?
EDIT: Updated pipeline code.
I am trying to create multiple instances of an application in the same MarkLogic environment. I am able to create all of the configuration (users, roles, databases, forests, app servers, ...), but I am not able to schedule individual tasks for separate databases with the same module path.
When I try to run ml-gradle mlDeployApps, it fails at task creation.
My whole application configuration depends on a property file; for any APP-NAME, a separate instance needs to be created.
I tried deploying through ml-gradle.
mlDeployTasks fails because a task already exists for that module path. When I try to run a second deployment with a new database, it fails because it does not recognize the task database.
JSON:
{
  "task-enabled": true,
  "task-path": "/ext/schedules/monitor.xqy",
  "task-root": "/",
  "task-type": "daily",
  "task-period": 1,
  "task-start-time": "10:00:00",
  "task-database": "%%DATABASE%%",
  "task-modules": "%%MODULES_DATABASE%%",
  "task-user": "admin",
  "task-priority": "normal"
}
ERROR:
Logging HTTP response body to assist with debugging: {"errorResponse":{"statusCode":"500", "status":"Internal Server Error", "messageCode":"MANAGE-INVALID", "message":"MANAGE-INVALID (err:FOER0000): task-database"}}
Error occurred while sending PUT request to /manage/v2/tasks/5389046897270663947/properties?group-id=Default; logging request body to assist with debugging: {
Expectation:
Deploy and undeploy the whole application, including scheduled tasks, based on APPLICATION-NAME as a separate instance.
Actual:
With mlDeployTasks, each task is identified by its module path against the old existing database, and creating a new task fails.
Please suggest the right way to achieve this.
MarkLogic's Management API is seeing your request as an attempt to change the task-database, but it only allows one property for a scheduled task to change (task-enabled). I think what you'll need to do here is have different task-path values for your different databases. That's not ideal, but if the implementation logic is all in a library that's imported by the task, the different modules themselves will be very lightweight.
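For example, a hypothetical variation of the JSON above (not tested) that parameterizes the task-path per instance, assuming a custom %%APP_NAME%% token is defined for each deployment and each generated module just imports the shared implementation library:

{
  "task-enabled": true,
  "task-path": "/ext/schedules/%%APP_NAME%%/monitor.xqy",
  "task-root": "/",
  "task-type": "daily",
  "task-period": 1,
  "task-start-time": "10:00:00",
  "task-database": "%%DATABASE%%",
  "task-modules": "%%MODULES_DATABASE%%",
  "task-user": "admin",
  "task-priority": "normal"
}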
Try ml-gradle 3.10.0 - support for this now exists - see the release notes for ml-app-deployer 3.10.0 (which provides most of the functionality in ml-gradle) - https://github.com/marklogic-community/ml-app-deployer/releases/tag/3.10.0
I'm trying to display test result statistics in the Statistics tab in TeamCity, so I will be able to see, for example, whether a test always fails via the chart.
I'm not trying to have a dashboard of the last executed tests, so (LINK) can't help me.
I can't use Robot Framework listeners because I use a scheduled task on the remote machine to run the test cases.
Any suggestions?
Solved by using "Providing data using the teamcity-info.xml file" here.
After creating the "teamcity-info.xml" file in the root of the build, I created a custom chart with the keys chart1Key / chart2Key (which exist in the "teamcity-info.xml" file), so I am able to see the OK/KO test percentages.
Find the result here.
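For reference, a minimal teamcity-info.xml along those lines might look like this (the key names chart1Key / chart2Key and the values are just examples):

<build>
  <statisticValue key="chart1Key" value="85"/>
  <statisticValue key="chart2Key" value="15"/>
</build>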