I created an AWS Config rule and Lambda operating on resource type AWS::RDS::DBInstance with Trigger Type = 'Configuration changes'. CloudWatch Logs verify that the function's return value is ...
{ "ResultToken": "<Redacted>",
"Evaluations": [
{"ComplianceResourceId": "db-<Redacted>",
"ComplianceResourceType": "AWS::RDS::DBInstance",
"ComplianceType": "COMPLIANT",
"OrderingTimestamp": 1576676501.52}
]
}
And although the rule is successfully invoked, the AWS console claims that the compliance status of the rule is 'No results available'. Additionally, this bit of PowerShell script using the AWSPowerShell module ...
Get-CFGComplianceByConfigRule -configrulename security-group-of-rds | select -expandProperty Compliance
... returns ...
INSUFFICIENT_DATA
Why isn't the reported compliance status COMPLIANT?
My first thought is that I've got the schema for the return object wrong, but based on the example functions that AWS has supplied, it looks correct to me.
The short answer is:
Evaluation results need to be reported via a call to config:put_evaluations() rather than via the Lambda's return value.
The Lambda return can just be the evaluations list.
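A minimal sketch of that short answer (the resource values are illustrative; the full working version follows below). The handler builds the evaluation dictionary and reports it through put_evaluations(), passing back the resultToken from the event:

```python
import json

def build_evaluation(configuration_item, compliance_type, annotation):
    # The per-resource result shape that put_evaluations() expects.
    return {
        'ComplianceResourceType': configuration_item['resourceType'],
        'ComplianceResourceId': configuration_item['resourceId'],
        'ComplianceType': compliance_type,
        'Annotation': annotation,
        'OrderingTimestamp': configuration_item['configurationItemCaptureTime'],
    }

def lambda_handler(event, context):
    import boto3  # imported here so the helper above can be inspected without boto3
    invoking_event = json.loads(event['invokingEvent'])
    evaluation = build_evaluation(
        invoking_event['configurationItem'], 'COMPLIANT', 'Ok')
    # This call is what actually records the result; merely returning
    # the evaluations from the handler is not enough.
    boto3.client('config').put_evaluations(
        Evaluations=[evaluation],
        ResultToken=event['resultToken'],
    )
    return [evaluation]
```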
The long answer: here is my solution that works.
AWS Lambda function (Python 3.8) for the Config rule:
'''
#####################################
## Gherkin ##
#####################################
Rule Name:
security-group-of-rds
Description:
Checks that all Oracle databases are using the correct security group and only that group.
Trigger:
Configuration change on AWS::RDS::DBInstance. Scope of changes == Resources.
Reports on:
AWS::RDS::DBInstance
Parameters:
| ----------------------|-----------|-----------------------------------------------|
| Parameter Name | Type | Description |
| ----------------------|-----------|---------------------------------------------- |
| vpcSecurityGroupId | string | Id of the required vpc Security Group. |
| ----------------------|-----------|---------------------------------------------- |
| Assume-Rule-Role | boolean | If true, switch to the config role. |
| | | Defaults to false. |
|-----------------------|-----------|-----------------------------------------------|
| Mode                  | Enum      | Range: Fully-Operational-DeathStar,           |
|                       |           |        Put-Evaluations-Test,                  |
|                       |           |        Lambda-Console-Test.                   |
|                       |           | Defaults to Fully-Operational-DeathStar.      |
|                       |           | Meanings:                                     |
|                       |           | Fully-Operational-DeathStar:                  |
|                       |           |   Normal operation.                           |
|                       |           | Put-Evaluations-Test:                         |
|                       |           |   Set TestMode to True when invoking          |
|                       |           |   put_evaluations.                            |
|                       |           |   Refer: https://docs.aws.amazon.com/config/latest/APIReference/API_PutEvaluations.html
|                       |           | Lambda-Console-Test:                          |
|                       |           |   Do not call put_evaluations() at all.       |
|-----------------------|-----------|-----------------------------------------------|
Envars:
| ----------------------|-----------|-----------------------------------------------|
| Envar Name | Type | Description |
| ----------------------|-----------|---------------------------------------------- |
| PROXY | string | http(s) proxy. Default to no proxy. |
|-----------------------|-----------|-----------------------------------------------|
| NO_PROXY | comma- | list of exemptions to proxy. |
| | separated-| Defaults to no exemptions |
| | list | |
|-----------------------|-----------|-----------------------------------------------|
| TURN_OFF_SSL | boolean | Turns off SSL verification. Defaults to False |
|-----------------------|-----------|-----------------------------------------------|
| REGION | string | Region for config service. |
| | | Defaults to the lambda region |
|-----------------------|-----------|-----------------------------------------------|
| CONFIG_ENDPOINT | string | Customised end-point for config service |
| | | Defaults to the standard end-point. |
|-----------------------|-----------|-----------------------------------------------|
Feature:
In order to: protect the data confidentiality of oracle-ee RDS databases.
As: a Developer
I want: To ensure that all databases have the correct security group attached.
Scenarios:
Scenario 1:
Given: Wrong security group
And: The group is inactive
Then: No conclusion.
Scenario 2:
Given: Wrong security group
And: The group is active
And: type == oracle-ee
Then: return NON_COMPLIANT
Scenario 3:
Given: Right security group
And: The group is active
And: type == oracle-ee
Then: return COMPLIANT
Scenario 4:
Given: No security group
And: type == oracle-ee
Then: return NON_COMPLIANT
Scenario 5:
Given: type != oracle-ee
Then: return NOT_APPLICABLE
Required Role Policy Statements:
If you are not assuming the config rule role, then the lambda role needs all these
actions, except sts:AssumeRole.
If you ARE assuming the config rule role, then the lambda role needs the logs and sts
actions, and the config rule role needs the logs and config actions.
| ----------------------|-------------|-----------------------------------------------|
| Action | Resource | Condition | Why do we need it? |
| ----------------------|-------------|---------------------------------------------- |
| logs:CreateLogGroup | * | Always | For logging. |
| logs:CreateLogStream | | | |
| logs:PutLogEvents | | | |
| ----------------------|-------------|------------|----------------------------------|
| sts:AssumeRole | Your AWS | if Assume-Rule-Role == True | If you want the |
| | config role | | lambda to execute in the main |
| | | | config role. |
| ----------------------|-------------|------------|----------------------------------|
| config:PutEvaluations | * | Always | To put the actual results. |
| ----------------------|-------------|------------|----------------------------------|
Inline Constants Configuration:
| ----------------------|-----------|-----------------------------------------------|
| Identifier | Type | Description |
| ----------------------|-----------|---------------------------------------------- |
| defaultRegion | string | Default region, if we can't get it from the |
| | | Lambda environment. |
| ----------------------|-----------|---------------------------------------------- |
'''
import json
import datetime
import time
import boto3
import botocore
import os

proxy = None
noProxy = None
configClient = None
defaultRegion = 'ap-southeast-2'

def setEnvar( name, value):
    if os.environ.get( name, '') != value:
        if value != '':
            os.environ[ name] = value
        else:
            del os.environ[ name]

def setProxyEnvironment():
    # Sometimes Lambdas sit in VPCs which require proxy forwards
    # in order to access some or all internet services.
    global proxy
    global noProxy
    proxy   = os.environ.get( 'PROXY'   , None)
    noProxy = os.environ.get( 'NO_PROXY', None)
    if proxy is not None:
        setEnvar( 'http_proxy' , proxy)
        setEnvar( 'https_proxy', proxy)
    if noProxy is not None:
        setEnvar( 'no_proxy', noProxy)

def jpath( dict1, path, sep = '.', default = None):
    # Traverse a hierarchy of dictionaries, as described by a path, and find a value.
    ret = dict1
    if isinstance( path, str):
        particleList = path.split( sep)
    else:
        particleList = path
    for particle in particleList:
        if isinstance( ret, dict):
            ret = ret.get( particle, None)
        elif isinstance( ret, (list, tuple)) and particle.isdigit():
            idx = int( particle)
            if (idx >= 0) and (idx < len( ret)):
                ret = ret[ idx]
            else:
                ret = None
        else:
            ret = None
        if ret is None:
            break
    if ret is None:
        ret = default
    return ret

def coerceToList( val):
    # Make it into a list.
    if val is None:
        return list()
    else:
        return val

def coerceToBoolean( val):
    if isinstance( val, str):
        return val.lower() == 'true'
    else:
        return bool( val)

def get_region():
    # Find the region for AWS services.
    return os.environ.get( 'REGION', os.environ.get( 'AWS_REGION', defaultRegion))

def get_assume_role_credentials( role_arn):
    # Switch to a role. We need sts:AssumeRole for this.
    if coerceToBoolean( os.environ.get( 'TURN_OFF_SSL', False)):
        sts_client = boto3.client( 'sts', verify = False)
    else:
        sts_client = boto3.client( 'sts')
    try:
        assume_role_response = sts_client.assume_role( RoleArn = role_arn, RoleSessionName = "configLambdaExecution")
        print( 'Switched role to ' + role_arn)
        return assume_role_response['Credentials']
    except botocore.exceptions.ClientError as ex:
        # Scrub error message for any internal account info leaks.
        if 'AccessDenied' in ex.response['Error']['Code']:
            ex.response['Error']['Message'] = "AWS Config does not have permission to assume the IAM role."
        else:
            ex.response['Error']['Message'] = "InternalError"
            ex.response['Error']['Code'] = "InternalError"
        print( str( ex))
        raise ex

def get_client( service, event):
    # Get the AWS service client for the specified service.
    # If specified, switch roles and go through a custom service end-point.
    region = get_region()
    ruleRole = jpath( event, 'executionRoleArn')
    doAssumeRuleRole = coerceToBoolean( jpath( event, 'ruleParameters-parsed.Assume-Rule-Role', '.', False)) and (ruleRole is not None)
    parms = {}
    if coerceToBoolean( os.environ.get( 'TURN_OFF_SSL', False)):
        parms['verify'] = False
    if region is not None:
        parms['region_name'] = region
    if doAssumeRuleRole:
        credentials = get_assume_role_credentials( ruleRole)
        parms['aws_access_key_id'    ] = credentials['AccessKeyId'    ]
        parms['aws_secret_access_key'] = credentials['SecretAccessKey']
        parms['aws_session_token'    ] = credentials['SessionToken'   ]
    endPointEnvarName  = service.upper() + '_ENDPOINT'
    endPointEnvarValue = os.environ.get( endPointEnvarName, '')
    if endPointEnvarValue != '':
        parms['endpoint_url'] = endPointEnvarValue
    return boto3.client( service, **parms)

def get_configClient( event):
    # Get the AWS 'config' client, and store it in a global singleton.
    global configClient
    if configClient is None:
        configClient = get_client( 'config', event)
    return configClient

def initiate_Globals():
    # Mainly set up the proxy forward, if required.
    # Note the global declaration: without it, configClient would only be
    # reset locally and the cached client would leak between invocations.
    global configClient
    configClient = None
    setProxyEnvironment()

def evaluate_compliance( configuration_item, ruleParameters):
    # Evaluate the compliance of the given changed resource.
    # Return a dictionary in the standard 'evaluation' schema.
    referenceVpcSecurityGroupId = ruleParameters.get( 'vpcSecurityGroupId', '')
    annotation = 'Ok'
    if ((jpath( configuration_item, 'configuration.engine') == 'oracle-ee') and
        (configuration_item.get( 'resourceType', '') == 'AWS::RDS::DBInstance')):
        ok = False
        for vpcSecurityGroup in coerceToList( jpath( configuration_item, 'configuration.vpcSecurityGroups')):
            actualId = vpcSecurityGroup.get( 'vpcSecurityGroupId', '')
            ok = ((actualId == referenceVpcSecurityGroupId) or
                  (vpcSecurityGroup.get( 'status', 'inactive') != 'active'))
            if not ok:
                # The security group was active, but was not equal to the prescribed one.
                annotation = 'Wrong security group'
                break
        if ok:
            # All active security groups, and at least one, are the prescribed one.
            compliance_type = 'COMPLIANT'
        else:
            if referenceVpcSecurityGroupId == '':
                annotation = 'Malformed rule parameter configuration'
            if annotation == 'Ok':
                annotation = 'No security groups'
            compliance_type = 'NON_COMPLIANT'
    else:
        # This rule only deals with oracle-ee RDS databases.
        compliance_type = 'NOT_APPLICABLE'
    evaluation = dict()
    evaluation['ComplianceResourceType'] = configuration_item['resourceType']
    evaluation['ComplianceResourceId' ] = configuration_item['resourceId']
    evaluation['OrderingTimestamp'    ] = configuration_item['configurationItemCaptureTime']
    evaluation['ComplianceType'       ] = compliance_type
    evaluation['Annotation'           ] = annotation
    return evaluation

def printEnvars( envarList):
    for envarName in envarList.split( ','):
        envarValue = os.environ.get( envarName, None)
        if envarValue is not None:
            print( f'Envar {envarName} == {envarValue}')

def lambda_handler( event, context):
    global configClient
    # Phase 1: Setup and parsing input.
    # Uncomment this when debugging:
    # print( 'event == ' + json.dumps( event))
    printEnvars( 'PROXY,NO_PROXY,TURN_OFF_SSL,REGION,CONFIG_ENDPOINT')
    initiate_Globals()
    invokingEvent = json.loads( event.get( 'invokingEvent', '{}'))
    event['invokingEvent-parsed'] = invokingEvent
    ruleParameters = json.loads( event.get( 'ruleParameters', '{}'))
    event['ruleParameters-parsed'] = ruleParameters
    print( 'Config rule Arn == ' + event.get( 'configRuleArn', ''))
    print( 'Rule parameters == ' + json.dumps( ruleParameters))
    get_configClient( event)
    configuration_item = invokingEvent['configurationItem']
    # Phase 2: Evaluation.
    evaluation = evaluate_compliance( configuration_item, ruleParameters)
    # Phase 3: Reporting.
    evaluations = list()
    evaluations.append( evaluation)
    mode = ruleParameters.get( 'Mode', 'Fully-Operational-DeathStar')
    if mode == 'Fully-Operational-DeathStar':
        response = configClient.put_evaluations( Evaluations = evaluations, ResultToken = event['resultToken'])
    elif mode == 'Put-Evaluations-Test':
        response = configClient.put_evaluations( Evaluations = evaluations, ResultToken = event['resultToken'], TestMode = True)
    else:
        response = {'mode': mode}
    # Uncomment this when debugging:
    # print( 'response == ' + json.dumps( response))
    print( 'evaluations == ' + json.dumps( evaluations))
    return evaluations
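For reference, here is a console test event shaped like the ones the handler above parses (all identifiers and ARNs are invented). With Mode set to Lambda-Console-Test the handler skips put_evaluations() entirely, so an event like this can be pasted into the Lambda console test dialog:

```python
import json

# Inner payload that AWS Config would place, JSON-encoded, in 'invokingEvent'.
configuration_item = {
    'resourceType': 'AWS::RDS::DBInstance',
    'resourceId': 'db-EXAMPLE',
    'configurationItemCaptureTime': '2019-12-18T13:41:41.520Z',
    'configuration': {
        'engine': 'oracle-ee',
        'vpcSecurityGroups': [
            {'vpcSecurityGroupId': 'sg-0123456789abcdef0', 'status': 'active'},
        ],
    },
}

test_event = {
    'invokingEvent': json.dumps({'configurationItem': configuration_item}),
    'ruleParameters': json.dumps({
        'vpcSecurityGroupId': 'sg-0123456789abcdef0',
        'Mode': 'Lambda-Console-Test',   # do not call put_evaluations()
    }),
    'resultToken': 'TESTMODE',
    'configRuleArn': 'arn:aws:config:ap-southeast-2:123456789012:config-rule/config-rule-example',
}

print(json.dumps(test_event, indent=2))
```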
I have a problem with my route.
I see this error:
Route [utilizadores.editar] not defined
The error occurs on the page when I try to update the data in my DB.
My Route:
Route::put('Utilizadores/{item}', [FuncionarioController::class, 'editar'])->name('utilizadores.editar');
Route::get('Utilizadores/{item}/edit', [FuncionarioController::class, 'edit'])->name('utilizadores.edit');
My controller:
public function editar(Request $request, funcionario $item){
$item->nome = $request->nome;
$item->email = $request->email;
$item->telefone = $request->telefone;
$item->foto = $request->foto;
$item->data_nasc = $request->data_nasc;
$item->nacionalidade = $request->nacionalidade;
$item->n_cartao_cc = $request->n_cartao_cc;
$item->nif = $request->nif;
$item->morada = $request->morada;
$item->n_porta = $request->n_porta;
$item->localidade = $request->localidade;
$item->concelho = $request->concelho;
$item->distrito = $request->distrito;
$item->cp = $request->cp;
$item->data_entrada = $request->data_entrada;
$item->funcao = $request->funcao;
$item->estado = $request->estado;
// $item->n_ferias_disponiveis = $request->n_ferias_disponiveis;
// $item->data_registo = $now;
dd($item);
$item->save();
return redirect()->route('utilizadores.index');
}
My View:
<form class="needs-validation" method="POST" action="{{route('utilizadores.editar',$item->id)}}" enctype="multipart/form-data">
@csrf
@method('put')
Where am I wrong? I have other pages like this done and it works.
Thanks to anyone who can help me.
Edit: My php artisan route:list
| DELETE   | Utilizadores/{item}             | utilizadores.delete         | App\Http\Controllers\FuncionarioController@delete         | web |
| PUT      | Utilizadores/{item}             | utilizadores.editar_perfil  | App\Http\Controllers\FuncionarioController@editar_perfil  | web |
| GET|HEAD | Utilizadores/{item}/delete      | utilizadores.modal          | App\Http\Controllers\FuncionarioController@modal          | web |
| GET|HEAD | Utilizadores/{item}/edit        | utilizadores.edit           | App\Http\Controllers\FuncionarioController@edit           | web |
| GET|HEAD | Utilizadores/{item}/edit_perfil | utilizadores.edit_perfil    | App\Http\Controllers\FuncionarioController@edit_perfil    | web |
| PUT      | Utilizadores/{item}/editpass    | utilizadores.passwordeditar | App\Http\Controllers\FuncionarioController@passwordeditar | web |
Just swap the edit and editar routes, something like this:
Route::get('Utilizadores/{item}/edit', [FuncionarioController::class, 'edit'])->name('utilizadores.edit');
Route::put('Utilizadores/{item}', [FuncionarioController::class, 'editar'])->name('utilizadores.editar');
Or better, use a resource controller for a simpler route file:
Route::resource('utilizadores', FuncionarioController::class);
Keep in mind that you will need to tweak some method names and your route file.
Docs
You cannot have two identical routes for the same method/path. It is apparent in route:list that you already have a route registered for put('Utilizadores/{item}') with a different name (utilizadores.editar_perfil); therefore,
Route::put('Utilizadores/{item}',
[FuncionarioController::class, 'editar'])
->name('utilizadores.editar');
is not being registered, so you get that error when you try to insert {{ route('utilizadores.editar',$item->id) }} in your view.
First, look at your route list using the command below:
php artisan route:list
If the route exists, then run these commands:
php artisan optimize
php artisan optimize:clear
I am using Responsive FileManager 9.14.0 with TinyMCE 5.0.16 and Laravel 6, running on Nginx 1.16.1.
I have the following folder structure:
| public
| |- uploads
| |- thumbs
| |- filemanager
| |- js
| | |- tinymce
| | | |- plugins
| | | | |- responsivefilemanager
| | | | | |- plugin.min.js
I use Laravel authentication to protect a 'create' page where the user can add text using TinyMCE and upload images using RFM as a TinyMCE plugin.
But RFM is accessible directly via the following URL:
http://www.myhost.test/filemanager/dialog.php
How can I prevent this behavior? I want RFM to be accessible only from the TinyMCE editor.
I'm not familiar with Laravel, but ...
In Responsive File Manager 9.0 there is a folder called config that contains config.php:
| public
| |- uploads
| |- thumbs
| |- filemanager
| | |- config
| | | |- config.php
| |- js
| | |- tinymce
| | | |- plugins
| | | | |- responsivefilemanager
| | | | | |- plugin.min.js
Open config.php and change
define('USE_ACCESS_KEYS', false); // TRUE or FALSE -------- to ------> define('USE_ACCESS_KEYS', true); // TRUE or FALSE
This forces Responsive File Manager to use access keys, preventing any attempt to access your files and folders without one.
In the same file, at line 190, add an auth_key for each user who needs to use the file manager.
for example :
username: jim auth_key: a1s2d3f4g5h6j7k8l9mm
username: lisa auth_key: zqxwd3f4vrbth6j7btny
So line 190 should be rewritten like the line below:
'access_keys' => array( "a1s2d3f4g5h6j7k8l9mm", "zqxwd3f4vrbth6j7btny"),
Go to your form and add a button/link to access Responsive File Manager:
<a href="https://www.example.com/admin/responsive-filemanager/filemanager/dialog.php?akey=<?php echo {{{your authenticated user AUTH_KEY}}}; ?>">Open file manager</a>
If there is no {{{your authenticated user AUTH_KEY}}}, there are two ways:
1) Add an auth_key column to your users table and generate an auth_key for each user who should have access to Responsive File Manager; the value must match in both the database and the config.php file.
2) Use the username instead of an auth_key, so your config at line 190 will be:
'access_keys' => array( "jim", "lisa"),
and now your Responsive File Manager access link will look like this:
<a href="https://www.example.com/admin/responsive-filemanager/filemanager/dialog.php?akey=jim"></a>
jim is static here; you should make it dynamic by calling a function that returns the authenticated user's USERNAME and putting it after akey= in the link.
Now, if the akey value in the link is found in the access_keys array, the Responsive File Manager page will work; otherwise it will show ACCESS DENIED!
If it's still relevant, I can show you how I did it in Laravel 8.
I approached it from the opposite direction: if the user is logged in as admin, there is no need to check a key, so USE_ACCESS_KEYS becomes FALSE; otherwise, TRUE.
Therefore, if the user is NOT authenticated as an administrator, they will NOT get access to ResponsiveFileManager.
To do this, add the following function somewhere at the beginning of the responsive_filemanager/filemanager/config/config.php file
(specify your own paths to the files '/vendor/autoload.php' and '/bootstrap/app.php'):
function use_access_keys() {
require dirname(__DIR__, 4) . '/vendor/autoload.php';
$app = require_once dirname(__DIR__, 4) . '/bootstrap/app.php';
$request = Illuminate\Http\Request::capture();
$request->setMethod('GET');
$app->make('Illuminate\Contracts\Http\Kernel')->handle($request);
if (Auth::check() && Auth::user()->hasRole('admin')) {
return false;
}
return true;
}
and then this line:
define('USE_ACCESS_KEYS', false);
replace with this:
define('USE_ACCESS_KEYS', use_access_keys());
One more thing.
If, after that, the following error suddenly pops up when opening the FileManager: "Undefined variable: lang",
then open responsive_filemanager/filemanager/dialog.php,
find the array $get_params, and in it change the entry like this:
'lang' => 'en',
I'm new to Scala. I have a text file which I'm trying to read and load into a dataframe, and thereafter load into a database. While loading into the database I'm getting the error given below, yet when I use the same credentials in Toad I'm able to connect successfully. Any help will be appreciated.
text.txt
TYPE,CODE,SQ_CODE,RE_TYPE,VERY_ID,IN_DATE,DATE
"F","000544","2017002","OP","95032015062763298","20150610","20150529"
"F","000544","2017002","LD","95032015062763261","20150611","20150519"
"F","000544","2017002","AK","95037854336743246","20150611","20150429"
val sparkSession = SparkSession.builder().master("local").appName("IT_DATA").getOrCreate()
Driver=oracle.jdbc.driver.OracleDriver
Url=jdbc:oracle:thin:@xxx.com:1521/DATA00.WORLD
username=xxx
password=xxx
val dbProp = new java.util.Properties
dbProp.setProperty("driver", Driver)
dbProp.setProperty("user", username)
dbProp.setProperty("password", password)
//Create dataframe boject
val df = sparkSession.read
.format("com.databricks.spark.csv")
.option("header", "true")
.option("inferSchema", "true")
.option("location", "/xx/xx/xx/xx/test.csv")
.option("delimiter", ",")
.option("dateFormat", "yyyyMMdd")
.load().cache()
df.write.mode("append").jdbc(Url, TableTemp, dbProp)
df.show
+-----+-------+---------+---------+-------------------+---------+-------------+
| TYPE| CODE| SQ_CODE| RE_TYPE | VERY_ID| IN_DATE| DATE |
+-----+-------+---------+---------+-------------------+---------+-------------+
| F | 000544| 2017002| OP | 95032015062763298| 20150610| 2015-05-29|
| F | 000544| 2017002| LD | 95032015062763261| 20150611| 2015-05-19|
| F | 000544| 2017002| AK | 95037854336743246| 20150611| 2015-04-29|
+-----+-------+---------+---------+-------------------+---------+-------------+
Error
java.sql.SQLException: ORA-01017: invalid username/password; logon denied
I'm writing a program to test update scripts for Azure SQL.
The idea is to
- first clone a database (or fill a clone with the source schema and content)
- then run the update script on the clone
Locally I have this working, but for Azure I have the problem that I don't see any file names. If I restore one database to another on the same Azure "server", don't I have to rename the data files during the restore too?
For local restore I do this:
restore.Devices.AddDevice(settings.BackupFileName, DeviceType.File);
restore.RelocateFiles.Add(new RelocateFile("<db>", Path.Combine(settings.DataFileDirectory, settings.TestDatabaseName + ".mdf")));
restore.RelocateFiles.Add(new RelocateFile("<db>_log", Path.Combine(settings.DataFileDirectory, settings.TestDatabaseName + "_1.ldf")));
restore.SqlRestore(srv);
Is something similar required for cloning a database on azure?
Lots of Greetings!
Volker
You can create a database as a copy of an existing source:
CREATE DATABASE database_name [ COLLATE collation_name ]
| AS COPY OF [source_server_name].source_database_name
{
(<edition_options> [, ...n])
}
<edition_options> ::=
{
MAXSIZE = { 100 MB | 500 MB | 1 | 5 | 10 | 20 | 30 … 150…500 } GB
| EDITION = { 'web' | 'business' | 'basic' | 'standard' | 'premium' }
| SERVICE_OBJECTIVE =
{ 'basic' | 'S0' | 'S1' | 'S2' | 'S3'
| 'P1' | 'P2' | 'P3' | 'P4'| 'P6' | 'P11'
| { ELASTIC_POOL(name = <elastic_pool_name>) } }
}
[;]
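As a sketch of how this can be driven from code (server, database, and tier names below are made up), a small helper can compose the CREATE DATABASE ... AS COPY OF statement shown above; executing it against the master database, e.g. via pyodbc, is outlined in the trailing comments:

```python
def build_copy_statement(target_db, source_db, source_server=None,
                         service_objective=None):
    # Compose the T-SQL copy statement described above. The source may be
    # qualified with a server name when copying across logical servers.
    source = f'{source_server}.{source_db}' if source_server else source_db
    sql = f'CREATE DATABASE [{target_db}] AS COPY OF {source}'
    if service_objective:
        sql += f" (SERVICE_OBJECTIVE = '{service_objective}')"
    return sql + ';'

# Executing it (connection details are illustrative, not from the question):
#   import pyodbc
#   cn = pyodbc.connect('DRIVER={ODBC Driver 17 for SQL Server};'
#                       'SERVER=myserver.database.windows.net;'
#                       'DATABASE=master;UID=admin_user;PWD=...',
#                       autocommit=True)  # CREATE DATABASE cannot run in a transaction
#   cn.execute(build_copy_statement('MyDb_test', 'MyDb', service_objective='S2'))
#   # The copy is asynchronous; poll sys.dm_database_copies until it finishes.
```

Note that no file names or RelocateFile calls are involved: the copy operation manages storage itself, so nothing like the local restore's .mdf/.ldf renaming is required.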
I read the Error Handling documentation, but if I use log_message('debug', 'Hi I m in Cart Controller'); or log_message('info', 'Hi I m in Cart Controller'); it does not log any message; it works only for log_message('error', 'Hi I m in Cart Controller');.
Any idea what my mistake is?
You have to set the log threshold in application/config/config.php:
/*
|--------------------------------------------------------------------------
| Error Logging Threshold
|--------------------------------------------------------------------------
|
| If you have enabled error logging, you can set an error threshold to
| determine what gets logged. Threshold options are:
|
| 0 = Disables logging, Error logging TURNED OFF
| 1 = Error Messages (including PHP errors)
| 2 = Debug Messages
| 3 = Informational Messages
| 4 = All Messages
|
| For a live site you'll usually only enable Errors (1) to be logged otherwise
| your log files will fill up very fast.
|
*/
$config['log_threshold'] = 2;