I am receiving this error:
DataSetError: Failed while loading data from data set SQLQueryDataSet(load_args={}, sql=select * from table)
when I run (within kedro jupyter notebook):
%reload_kedro
c:\users\name.virtualenvs\pipenv_kedro\lib\site-packages\ipykernel\ipkernel.py:283:DeprecationWarning: should_run_async will not call transform_cell automatically in the future. Please pass the result to transformed_cell argument and any exception that happen during the transform in preprocessing_exc_tuple in IPython 7.17 and above.
and should_run_async(code)
2021-04-21 15:29:12,278 - kedro.framework.session.store - INFO - read() not implemented for BaseSessionStore. Assuming empty store.
2021-04-21 15:29:12,696 - root - INFO - ** Kedro project Project
2021-04-21 15:29:12,698 - root - INFO - Defined global variable context, session and catalog
2021-04-21 15:29:12,703 - root - INFO - Registered line magic run_viz
Then this:
catalog.list()
#['table', 'parameters']
catalog.load('table')
where my catalog.yml file contains:
table:
  type: pandas.SQLQueryDataSet
  credentials: secret
  sql: select * from table
  layer: raw
However, I am able to pull back the expected result when I run this (within the same kedro jupyter notebook):
from kedro.extras.datasets.pandas import SQLQueryDataSet
sql = "select * from table"
credentials = {
    "con": secret
}
data_set = SQLQueryDataSet(sql=sql,
                           credentials=credentials)
sql_data = data_set.load()
How can I fix this error?
I believe the discrepancy comes from the credentials. In your catalog you had
table:
  type: pandas.SQLQueryDataSet
  credentials: secret
but in the notebook you were testing with
credentials = {
    "con": secret
}
The value of the credentials key in catalog.yml should match the name of an entry in credentials.yml, so something like:
# in catalog.yml
table:
  type: pandas.SQLQueryDataSet
  credentials: db_creds

# in credentials.yml
db_creds:
  con: secret
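Once credentials.yml has the matching entry, you can verify the fix from the same notebook by reloading the Kedro context and loading the dataset through the catalog again (a quick sketch; db_creds is just the illustrative name used above):
%reload_kedro
# The catalog now resolves credentials: db_creds to {"con": "<connection string>"}
df = catalog.load("table")
df.head()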
My goal is to create a user with a given uid. I am trying to create a simple user with this very basic state:
Add Student:
  user.present:
    - name: Student
    - uid: 333123123123
    - allow_uid_change: True
333123123123 is just a dummy value; I'd like something more meaningful later, but this is what I use for testing.
This creates the user perfectly fine, but with a generated uid:
ID: Add Student
Function: user.present
Name: Student
Result: True
Comment: New user Student created
Started: 19:47:33.543457
Duration: 203.157 ms
Changes:
----------
account_disabled:
False
account_locked:
False
active:
True
comment:
description:
disallow_change_password:
False
expiration_date:
2106-02-07 07:28:15
expired:
True
failed_logon_attempts:
0
fullname:
Student
gid:
groups:
home:
homedrive:
last_logon:
Never
logonscript:
name:
Student
passwd:
None
password_changed:
2022-02-21 19:47:33
password_never_expires:
False
profile:
None
successful_logon_attempts:
0
uid:
S-1-5-21-3207633127-2685365797-3805984769-1043
Summary
------------
Succeeded: 1 (changed=1)
Failed: 0
------------
Total states run: 1
Total run time: 203.157 ms
Now, if I try running state.apply again, I get the following message:
ID: Add Student
Function: user.present
Name: Student
Result: False
Comment: Encountered error checking for needed changes. Additional info follows:
- Changing uid (S-1-5-21-3207633127-2685365797-3805984769-1043 -> 333123123123) not permitted, set allow_uid_change to True to force this change. Note that this will not change file ownership.
Started: 19:47:45.503643
Duration: 7000.025 ms
Changes:
Summary
------------
Succeeded: 0
Failed: 1
------------
Total states run: 1
Total run time: 7.000 s
So allow_uid_change IS being considered, checked, and verified on the second run, but it has no effect while the user is being created. The syntax seems to be correct. Why is the uid not applied when the user is created?
On Windows, the uid reported here is the account's SID. It is possible to change a user's SID, but it requires unsupported registry hacking, and creating a new user with a specific SID would be even harder. Salt won't do that.
If you need to know the SID of a Windows user, you have to create it first and then query it. If you need it in a following state in the same run, then you can use slots.
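For example, a minimal sketch of the slots approach, assuming the Windows user.getUserSid execution function; the target file path is just a placeholder:
Record Student SID:
  file.managed:
    - name: C:\salt\student_sid.txt
    # The slot is resolved on the minion just before this state runs,
    # so the require ensures the user already exists.
    - contents: __slot__:salt:user.getUserSid(Student)
    - require:
      - user: Add Student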
I'm trying to transfer data from one oracle_db1.table1 to another oracle_db2.table1. I've already installed the backport provider: https://pypi.org/project/apache-airflow-backport-providers-oracle/.
The import works fine now, but when I try the first task I get the error below. I think it's something about the connection.
Here is the error log:
[2020-08-18 12:30:15,485] {logging_mixin.py:112} INFO -
[2020-08-18 12:30:15,485] {base_hook.py:84}
INFO - Using connection to: id: DB1234.
Host: 192.168.50.123:1521/testserver, Port: 1521, Schema: blup, Login: blup, Password: xxXXX, extra: None
[2020-08-18 12:30:15,485] {logging_mixin.py:112} INFO -
[2020-08-18 12:30:15,485] {connection.py:342} ERROR - Expecting value: line 1 column 1 (char 0).
And here is my example DAG Task:
T3 = OracleToOracleOperator(
    task_id="insert_data_to_db",
    oracle_destination_conn_id="BCDEFG",
    destination_table="BCDEFG.TEST_BENUTZER3",
    oracle_source_conn_id="DESTINATION_DB",
    source_sql="""
        SELECT * FROM DESTINATION_DB.BENUTZER
    """,
    source_sql_params=None,
    rows_chunk=5000,
)
Thanx in advance
The problem was with the connection: there were entries in the connection's "Extra" field that could not be parsed as JSON (hence the "Expecting value: line 1 column 1 (char 0)" error). I deleted them, and then it worked.
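To illustrate the failure mode (a sketch of the parsing step only, not Airflow's actual code): the Extra field is stored as text and parsed as JSON, so a free-form note there produces exactly this error:
import json

# A free-form note in "Extra" fails to parse; a JSON object succeeds.
for extra in ("some free-form note", '{"encoding": "UTF-8"}'):
    try:
        print(json.loads(extra))
    except json.JSONDecodeError as exc:
        print("Invalid Extra {!r}: {}".format(extra, exc))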
I am trying to call a trained model from Google Colab with the example provided, but there is an error.
Does anyone know whether it is a beta issue, or whether I have not set something up properly?
Thanks in advance.
The code:
from google.cloud import automl_v1beta1 as automl

automl_client = automl.AutoMlClient()
# Create client for prediction service.
prediction_client = automl.PredictionServiceClient().from_service_account_json(
    'XXXXX.json')
# Get the full path of the model.
model_full_id = automl_client.model_path(
    project_id, compute_region, model_id
)
# Read the file content for prediction.
# with open(file_path, "rb") as content_file:
snippet = "fsfsf"  # content_file.read()
# Set the payload by giving the content and type of the file.
payload = {"text_snippet": {"content": snippet, "mime_type": "text/plain"}}
# params is additional domain-specific parameters.
# currently there is no additional parameters supported.
params = {}
response = prediction_client.predict(model_full_id, payload, params)
print("Prediction results:")
for result in response.payload:
    print("Predicted class name: {}".format(result.display_name))
    print("Predicted class score: {}".format(result.classification.score))
The error message:
InvalidArgument: 400 List of found errors: 1.Field: name; Message: The provided location ID is not valid.
You have to use a region that supports AutoML beta. This works for me:
create_dataset("myproj-123456", "us-central1", "my_dataset_id", "en", "de")
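Applied to the prediction code above, that means building the model path with a supported region. A minimal sketch with placeholder IDs (only us-central1 is taken from the answer):
# Placeholder project and model IDs; "us-central1" is a region that supports AutoML beta.
project_id = "myproj-123456"
compute_region = "us-central1"
model_id = "TCN0000000000000000000"

model_full_id = automl_client.model_path(project_id, compute_region, model_id)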
1. I cloned the repo "python-docs-samples":
$ git clone https://github.com/GoogleCloudPlatform/python-docs-samples.git
2. I navigated to the AutoML examples:
$ cd /home/MY_USER/python-docs-samples/language/automl/
3. I set the environment variables for [1]:
GOOGLE_APPLICATION_CREDENTIALS
PROJECT_ID
REGION_NAME
4. I typed:
$ python automl_natural_language_dataset.py create_dataset automltest1 False
5. I got this message:
Dataset name: projects/198768927566/locations/us-central1/datasets/TCN7889001684301386365
Dataset id: TCN7889001684301386365
Dataset display name: automltest1
Text classification dataset metadata:
classification_type: MULTICLASS
Dataset example count: 0
Dataset create time:
  seconds: 1569367227
  nanos: 873147000
6. I set the environment variable for:
DATASET_ID
Please note that I got its value from the output in step 5.
7. I typed:
python automl_natural_language_dataset.py import_data $DATASET_ID "gs://$PROJECT_ID-lcm/complaints_manual.csv"
8. I got this message:
Processing import...
Dataset imported.
When I run puppet agent --test there is no error output, but the user is not created.
My Puppet hiera.yaml configuration is:
---
version: 5
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
users.yaml is:
accounts::user:
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey@puppetlabs.com
I use the puppetlabs/accounts module.
Nothing in Hiera data itself causes anything to be applied to target nodes. Some kind of declaration is required in a manifest somewhere or in the output of an external node classifier script. Moreover, the puppetlabs/accounts module provides only defined types, not classes. You can store defined-type data in Hiera and read it back, but automated parameter binding via Hiera applies only to classes, not defined types.
In short, then, no user is created (and no error is reported) because no relevant resources are declared into the target node's catalog. You haven't given Puppet anything to do.
If you want to apply the stored user data presented to your nodes, you would want something along these lines:
$user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})
$user_data.each |$user,$props| {
  accounts::user { $user: * => $props }
}
That would go into the node block matched to your target node, or, better, into a class that is declared by that node block or an equivalent. It's fairly complicated for so few lines, but in brief:
- the lookup function looks up key 'accounts::user' in your Hiera data,
  - performing a hash merge of results appearing at different levels of the hierarchy,
  - expecting the result to be a hash with string keys and hash values,
  - and defaulting to an empty hash if no results are found;
- the mappings in the result hash are iterated, and for each one, an instance of the accounts::user defined type is declared,
  - using the (outer) hash key as the user name,
  - and the value associated with that key as a mapping from parameter names to parameter values.
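To tie it together, a minimal sketch of the "class declared by that node block" option; profile::local_users and the node name are placeholders, not anything from the original setup:
# Sketch: a profile class that reads the Hiera data and declares the users,
# plus a node block that declares the class.
class profile::local_users {
  $user_data = lookup('accounts::user', Hash[String,Hash], 'hash', {})
  $user_data.each |$user, $props| {
    accounts::user { $user: * => $props }
  }
}

node 'target.example.com' {
  include profile::local_users
}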
There are a few problems here.
You are missing a line in your hiera.yaml, namely the defaults key. It should be:
---
version: 5
defaults:           ## add this line
  datadir: "/etc/puppetlabs/code/environments"
  data_hash: yaml_data
hierarchy:
  - name: "Per-node data (yaml version)"
    path: "%{::environment}/nodes/%{::trusted.certname}.yaml"
  - name: "Common YAML hierarchy levels"
    paths:
      - "defaults/common.yaml"
      - "defaults/users.yaml"
I detected that using the puppet-syntax gem (included if you use PDK, which is recommended):
▶ bundle exec rake validate
Syntax OK
---> syntax:manifests
---> syntax:templates
---> syntax:hiera:yaml
ERROR: Failed to parse hiera.yaml: (hiera.yaml): mapping values are not allowed in this context at line 3 column 10
Also, in addition to what John mentioned, the simplest class to read in your data would be this:
class test (Hash[String,Hash] $users) {
  create_resources(accounts::user, $users)
}
Or if you want to avoid using create_resources*:
class test (Hash[String,Hash] $users) {
  $users.each |$user,$props| {
    accounts::user { $user: * => $props }
  }
}
Note that I have relied on the Automatic Parameter Lookup feature for that. See the link below.
Then, in your Hiera data, you would have a key named test::users to correspond (class name "test", key name "users"):
---
test::users:        ## Note that this line changed.
  joed:
    locked: false
    comment: System Operator
    uid: '1700'
    gid: '1700'
    groups:
      - admin
      - sudonopw
    sshkeys:
      - ssh-rsa ...Hw== sysop+moduledevkey@puppetlabs.com
Use of automatic parameter lookup is generally the more idiomatic way of writing Puppet code compared to calling the lookup function explicitly.
For more info:
PDK
Automatic Parameter Lookup
create_resources
(*Note that create_resources is "controversial". Many in the Puppet community prefer not to use it.)
I have started using Terraform to automate AWS resource provisioning for setting up a k8s cluster. I am facing an issue when trying to reference aws_instance.id from aws_eip. Here are the relevant details:
aditya#aditya-VirtualBox:~/Desktop/terraform-states$ terraform -v
Terraform v0.11.11
+ provider.aws v1.54.0
1) aws-eip.tf
resource "aws_eip" "nat" {
instance = "${aws_instance.xenial.id}"
vpc = true
depends_on = ["aws_internet_gateway.esya_igw"]
}
2) aws_inst.tf:
resource "aws_instance" "xenial" {
ami = "${var.aws_ami}"
instance_type = "t3.large"
ebs_optimized = true
monitoring = true
count = "8"
key_name = "${var.aws_key_name}"
tags{
Name = "KubeVMCluster${count.index + 1}"
}
}
Expected behavior: the AWS EIP should be able to reference the AWS instance.
Current behavior: I am getting this error:
aditya#aditya-VirtualBox:~/Desktop/terraform-states$ terraform plan
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
Error: Error running plan: 1 error(s) occurred:
* aws_eip.nat: 1 error(s) occurred:
* aws_eip.nat: Resource 'aws_instance.xenial' not found for variable 'aws_instance.xenial.id'
I have tried to find a solution by looking at similar issues on GitHub and elsewhere, but to no avail. As far as I can tell, there is nothing problematic with the declarative code.
I need help in resolving this issue.
Regards
Aditya
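For reference, aws_instance.xenial is declared with count = 8, so in Terraform 0.11 it can only be referenced with an index or a splat expression rather than as a plain aws_instance.xenial.id. A minimal sketch (one EIP per instance, reusing the names above; not a verified fix for this exact setup):
resource "aws_eip" "nat" {
  # Give the EIP a matching count and pick the corresponding instance ID.
  count      = 8
  instance   = "${element(aws_instance.xenial.*.id, count.index)}"
  vpc        = true
  depends_on = ["aws_internet_gateway.esya_igw"]
}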