Running XAPP1079 on a Zynq Board - fpga

I am trying to run XAPP1079 on a Zynq board (xc7z010clg400-1). Because the profile is not originally made for this specific board, I had to make some changes. I followed all the instructions for Vivado and the SDK, but in the end, after the board boots from the microSD card, I don't see anything in the terminal. I was wondering, first, whether the changes that I made are correct and whether any more are needed.
One specific issue I am having is that when the script creates an instance of the Zynq7 Processing System, it uses the configuration preset for the ZC702 board. Can/should I change that?
# Create instance: processing_system7_0, and set properties
set processing_system7_0 [ create_bd_cell -type ip -vlnv xilinx.com:ip:processing_system7:5.5 processing_system7_0 ]
set_property -dict [ list CONFIG.PCW_CORE1_IRQ_INTR {1} CONFIG.PCW_USE_FABRIC_INTERRUPT {1} CONFIG.preset {ZC702} ] $processing_system7_0
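For reference, a quick way to see what the ZC702 preset actually configured is to dump the PS7 properties from the Tcl console and compare them against the Zybo's hardware. This is only a diagnostic sketch; the wildcard patterns below are just examples I picked, since the DDR and UART settings are the ones most likely to differ between the two boards:
# Diagnostic sketch: list the DDR- and UART-related settings the preset applied,
# so they can be compared against what the Zybo actually needs.
report_property [get_bd_cells processing_system7_0] CONFIG.PCW_UIPARAM_DDR*
report_property [get_bd_cells processing_system7_0] CONFIG.PCW_UART*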
Also, while running the Tcl scripts, a warning appears:
[BD 41-1731] Type mismatch between connected pins: /ila_0/probe2(undef) and /irq_gen_0/IRQ(intr)
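As a diagnostic sketch (my own check, not from the app note), the two endpoints of the flagged net can be inspected to confirm that the mismatch is only the informational pin TYPE (undef vs. intr) rather than a width problem:
# If both pins are 1 bit wide and only the TYPE attribute differs,
# the [BD 41-1731] warning is normally harmless.
report_property [get_bd_pins /ila_0/probe2]
report_property [get_bd_pins /irq_gen_0/IRQ]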
You can see the files for this profile here.
Below are the two files that I have created after modifying the Xilinx ones. I should add that I'm using the 2017.1 version of Vivado.
create_bd_701.tcl
################################################################
# This is a generated script based on design: design_1
#
# Though there are limitations about the generated script,
# the main purpose of this utility is to make learning
# IP Integrator Tcl commands easier.
################################################################
################################################################
# Check if script is running in correct Vivado version.
################################################################
set scripts_vivado_version 2017.1
set current_vivado_version [version -short]
if { [string first $scripts_vivado_version $current_vivado_version] == -1 } {
puts ""
puts "ERROR: This script was generated using Vivado <$scripts_vivado_version> and is being run in <$current_vivado_version> of Vivado. Please run the script in Vivado <$scripts_vivado_version> then open the design in Vivado <$current_vivado_version>. Upgrade the design by running \"Tools => Report => Report IP Status...\", then run write_bd_tcl to create an updated script."
return 1
}
################################################################
# START
################################################################
# To test this script, run the following commands from Vivado Tcl console:
# source design_1_script.tcl
# If you do not already have a project created,
# you can create a project using the following command:
# create_project project_1 myproj -part xc7z010clg400-1
# set_property BOARD_PART digilentinc.com:zybo:part0:1.0 [current_project]
# CHECKING IF PROJECT EXISTS
if { [get_projects -quiet] eq "" } {
puts "ERROR: Please open or create a project!"
return 1
}
# CHANGE DESIGN NAME HERE
set design_name design_1
# If you do not already have an existing IP Integrator design open,
# you can create a design using the following command:
# create_bd_design $design_name
# Creating design if needed
set errMsg ""
set nRet 0
set cur_design [current_bd_design -quiet]
set list_cells [get_bd_cells -quiet]
if { ${design_name} eq "" } {
# USE CASES:
# 1) Design_name not set
set errMsg "ERROR: Please set the variable <design_name> to a non-empty value."
set nRet 1
} elseif { ${cur_design} ne "" && ${list_cells} eq "" } {
# USE CASES:
# 2): Current design opened AND is empty AND names same.
# 3): Current design opened AND is empty AND names diff; design_name NOT in project.
# 4): Current design opened AND is empty AND names diff; design_name exists in project.
if { $cur_design ne $design_name } {
puts "INFO: Changing value of <design_name> from <$design_name> to <$cur_design> since current design is empty."
set design_name [get_property NAME $cur_design]
}
puts "INFO: Constructing design in IPI design <$cur_design>..."
} elseif { ${cur_design} ne "" && $list_cells ne "" && $cur_design eq $design_name } {
# USE CASES:
# 5) Current design opened AND has components AND same names.
set errMsg "ERROR: Design <$design_name> already exists in your project, please set the variable <design_name> to another value."
set nRet 1
} elseif { [get_files -quiet ${design_name}.bd] ne "" } {
# USE CASES:
# 6) Current opened design, has components, but diff names, design_name exists in project.
# 7) No opened design, design_name exists in project.
set errMsg "ERROR: Design <$design_name> already exists in your project, please set the variable <design_name> to another value."
set nRet 2
} else {
# USE CASES:
# 8) No opened design, design_name not in project.
# 9) Current opened design, has components, but diff names, design_name not in project.
puts "INFO: Currently there is no design <$design_name> in project, so creating one..."
create_bd_design $design_name
puts "INFO: Making design <$design_name> as current_bd_design."
current_bd_design $design_name
}
puts "INFO: Currently the variable <design_name> is equal to \"$design_name\"."
if { $nRet != 0 } {
puts $errMsg
return $nRet
}
##################################################################
# DESIGN PROCs
##################################################################
# Procedure to create entire design; Provide argument to make
# procedure reusable. If parentCell is "", will use root.
proc create_root_design { parentCell } {
if { $parentCell eq "" } {
set parentCell [get_bd_cells /]
}
# Get object for parentCell
set parentObj [get_bd_cells $parentCell]
if { $parentObj == "" } {
puts "ERROR: Unable to find parent cell <$parentCell>!"
return
}
# Make sure parentObj is hier blk
set parentType [get_property TYPE $parentObj]
if { $parentType ne "hier" } {
puts "ERROR: Parent <$parentObj> has TYPE = <$parentType>. Expected to be <hier>."
return
}
# Save current instance; Restore later
set oldCurInst [current_bd_instance .]
# Set parent object as current
current_bd_instance $parentObj
# Create interface ports
set DDR [ create_bd_intf_port -mode Master -vlnv xilinx.com:interface:ddrx_rtl:1.0 DDR ]
set FIXED_IO [ create_bd_intf_port -mode Master -vlnv xilinx.com:display_processing_system7:fixedio_rtl:1.0 FIXED_IO ]
# Create ports
# Create instance: ila_0, and set properties
set ila_0 [ create_bd_cell -type ip -vlnv xilinx.com:ip:ila:6.2 ila_0 ]
set_property -dict [ list CONFIG.C_ENABLE_ILA_AXI_MON {false} CONFIG.C_MONITOR_TYPE {Native} CONFIG.C_NUM_OF_PROBES {4} ] $ila_0
# Create instance: irq_gen_0, and set properties
set irq_gen_0 [ create_bd_cell -type ip -vlnv xilinx.com:user:irq_gen:1.1 irq_gen_0 ]
# Create instance: processing_system7_0, and set properties
set processing_system7_0 [ create_bd_cell -type ip -vlnv xilinx.com:ip:processing_system7:5.5 processing_system7_0 ]
set_property -dict [ list CONFIG.PCW_CORE1_IRQ_INTR {1} CONFIG.PCW_USE_FABRIC_INTERRUPT {1} CONFIG.preset {ZC702} ] $processing_system7_0
# Create instance: processing_system7_0_axi_periph, and set properties
set processing_system7_0_axi_periph [ create_bd_cell -type ip -vlnv xilinx.com:ip:axi_interconnect:2.1 processing_system7_0_axi_periph ]
set_property -dict [ list CONFIG.NUM_MI {1} ] $processing_system7_0_axi_periph
# Create instance: rst_processing_system7_0_100M, and set properties
set rst_processing_system7_0_100M [ create_bd_cell -type ip -vlnv xilinx.com:ip:proc_sys_reset:5.0 rst_processing_system7_0_100M ]
# Create instance: vio_0, and set properties
set vio_0 [ create_bd_cell -type ip -vlnv xilinx.com:ip:vio:3.0 vio_0 ]
set_property -dict [ list CONFIG.C_NUM_PROBE_IN {0} ] $vio_0
# Create interface connections
connect_bd_intf_net -intf_net processing_system7_0_DDR [get_bd_intf_ports DDR] [get_bd_intf_pins processing_system7_0/DDR]
connect_bd_intf_net -intf_net processing_system7_0_FIXED_IO [get_bd_intf_ports FIXED_IO] [get_bd_intf_pins processing_system7_0/FIXED_IO]
connect_bd_intf_net -intf_net processing_system7_0_M_AXI_GP0 [get_bd_intf_pins processing_system7_0/M_AXI_GP0] [get_bd_intf_pins processing_system7_0_axi_periph/S00_AXI]
connect_bd_intf_net -intf_net processing_system7_0_axi_periph_M00_AXI [get_bd_intf_pins irq_gen_0/S_AXI] [get_bd_intf_pins processing_system7_0_axi_periph/M00_AXI]
# Create port connections
connect_bd_net -net irq_gen_0_IRQ [get_bd_pins ila_0/probe2] [get_bd_pins irq_gen_0/IRQ] [get_bd_pins processing_system7_0/Core1_nIRQ]
connect_bd_net -net irq_gen_0_slv_reg [get_bd_pins ila_0/probe1] [get_bd_pins irq_gen_0/slv_reg]
connect_bd_net -net irq_gen_0_vio_rise_edge [get_bd_pins ila_0/probe0] [get_bd_pins irq_gen_0/vio_rise_edge]
connect_bd_net -net processing_system7_0_FCLK_CLK0 [get_bd_pins ila_0/clk] [get_bd_pins irq_gen_0/S_AXI_ACLK] [get_bd_pins processing_system7_0/FCLK_CLK0] [get_bd_pins processing_system7_0/M_AXI_GP0_ACLK] [get_bd_pins processing_system7_0_axi_periph/ACLK] [get_bd_pins processing_system7_0_axi_periph/M00_ACLK] [get_bd_pins processing_system7_0_axi_periph/S00_ACLK] [get_bd_pins rst_processing_system7_0_100M/slowest_sync_clk] [get_bd_pins vio_0/clk]
connect_bd_net -net processing_system7_0_FCLK_RESET0_N [get_bd_pins processing_system7_0/FCLK_RESET0_N] [get_bd_pins rst_processing_system7_0_100M/ext_reset_in]
connect_bd_net -net rst_processing_system7_0_100M_interconnect_aresetn [get_bd_pins processing_system7_0_axi_periph/ARESETN] [get_bd_pins rst_processing_system7_0_100M/interconnect_aresetn]
connect_bd_net -net rst_processing_system7_0_100M_peripheral_aresetn [get_bd_pins irq_gen_0/S_AXI_ARESETN] [get_bd_pins processing_system7_0_axi_periph/M00_ARESETN] [get_bd_pins processing_system7_0_axi_periph/S00_ARESETN] [get_bd_pins rst_processing_system7_0_100M/peripheral_aresetn]
connect_bd_net -net vio_0_probe_out0 [get_bd_pins ila_0/probe3] [get_bd_pins irq_gen_0/VIO_IRQ_TICK] [get_bd_pins vio_0/probe_out0]
# Create address segments
create_bd_addr_seg -range 0x10000 -offset 0x78600000 [get_bd_addr_spaces processing_system7_0/Data] [get_bd_addr_segs irq_gen_0/S_AXI/reg0] SEG_irq_gen_0_reg0
# Restore current instance
current_bd_instance $oldCurInst
save_bd_design
}
# End of create_root_design()
##################################################################
# MAIN FLOW
##################################################################
create_root_design ""
create_proj_701.tcl
# Create project
set proj_name project_1
create_project -force $proj_name ./$proj_name
# Set project properties
set obj [get_projects $proj_name]
set_property "part" "xc7z010clg400-1" $obj
set_property "target_language" "Verilog" $obj
set_property board_part digilentinc.com:zybo:part0:1.0 $obj
# Set the directory path for the new project
set proj_dir [get_property directory $obj]
set_property ip_repo_paths $proj_dir/../../src/my_ip [current_fileset]
update_ip_catalog
#create BD
source $proj_dir/../../src/scripts/create_bd_701.tcl
validate_bd_design
save_bd_design
#Create top wrapper file
make_wrapper -files [get_files $proj_dir/$proj_name.srcs/sources_1/bd/design_1/design_1.bd] -top
import_files -force -norecurse $proj_dir/$proj_name.srcs/sources_1/bd/design_1/hdl/design_1_wrapper.v
#implement the design and create bit file
launch_runs impl_1 -to_step write_bitstream
wait_on_run -timeout 60 impl_1
#Export design to SDK
file mkdir $proj_dir/$proj_name.sdk
file copy -force $proj_dir/$proj_name.runs/impl_1/design_1_wrapper.sysdef $proj_dir/$proj_name.sdk/design_1_wrapper.hdf
launch_sdk -workspace $proj_dir/$proj_name.sdk -hwspec $proj_dir/$proj_name.sdk/design_1_wrapper.hdf
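As a side note, since the Digilent board files are not shipped with Vivado by default, a guard like the following could go just before the board_part line in create_proj_701.tcl. This is only a sketch; get_board_parts is a standard Vivado command, and the error handling simply mirrors the generated scripts above:
# Abort early if the Zybo board definition is not installed; otherwise the
# set_property board_part call fails.
if { [llength [get_board_parts -quiet digilentinc.com:zybo*]] == 0 } {
    puts "ERROR: Zybo board files not found. Install the Digilent board files first."
    return 1
}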

I have managed to make XAPP1079 run on the Zybo with changes in the Tcl script responsible for the block design and some changes in the C file that CPU0 runs.
As far as the Tcl script for the block design is concerned, I copied the configs from the Zybo configuration preset and pasted them into the property list of the processing_system7_0 instance in the code.
Here is the code snippet. It is abbreviated because it is too large to post.
# Create instance: processing_system7_0, and set properties
set processing_system7_0 [ create_bd_cell -type ip -vlnv xilinx.com:ip:processing_system7:5.5 processing_system7_0 ]
set_property -dict [ list CONFIG.PCW_DDR_RAM_BASEADDR {0x00100000} \
CONFIG.PCW_DDR_RAM_HIGHADDR {0x1FFFFFFF} \
.
.
.
CONFIG.PCW_PLL_BYPASSMODE_ENABLE {0} \
CONFIG.preset {Default*}\
] $processing_system7_0
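Here is a sketch of one way to harvest that full CONFIG list from Vivado itself instead of typing it by hand: create a throwaway project with the Zybo board part selected, let block automation apply the board preset to the PS7, and dump the design back to Tcl. The project and file names are placeholders, and the -config options mirror what the block-automation dialog typically generates.
# Throwaway project used only to harvest the Zybo PS7 preset values
create_project scratch ./scratch -part xc7z010clg400-1
set_property board_part digilentinc.com:zybo:part0:1.0 [current_project]
create_bd_design scratch_bd
create_bd_cell -type ip -vlnv xilinx.com:ip:processing_system7:5.5 processing_system7_0
# "Run Block Automation" applies the board preset (DDR, MIO, clocks) to the PS7
apply_bd_automation -rule xilinx.com:bd_rule:processing_system7 \
    -config {make_external "FIXED_IO, DDR" apply_board_preset "1" Master "Disable" Slave "Disable"} \
    [get_bd_cells processing_system7_0]
# The generated Tcl contains the complete CONFIG.PCW_* list for create_bd_701.tcl
write_bd_tcl ./zybo_ps7_preset.tcl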
As for the .c file for CPU0, I followed this post.
I changed the part that would reset CPU1,
{
/*
* Reset and start CPU1
*- Application for cpu1 exists at 0x00000000 per cpu1 linkerscript
...
...
...
}
with this code.
#define CPU1STARTADR 0xfffffff0
#define sev() __asm__("sev")
print("CPU0: writing startaddress for cpu1\n\r");
print("CPU0: writing startaddress for cpu1\n\r");
Xil_Out32(CPU1STARTADR, 0x00200000);
dmb(); //waits until write has finished
print("CPU0: sending the SEV to wake up CPU1\n\r");
sev();
Now when I debug the application projects in the SDK, I can see continuous Hello Worlds from both CPUs. Despite that, the interrupt part of the application note doesn't seem to work.

Related

How to programmatically create a PPTP VPN connection on macOS Sierra/High Sierra?

Apple removed high-level PPTP support in macOS Sierra from its network configuration system. However, the PPP internals are all still there, including /usr/sbin/pppd and /etc/ppp/.
How can I programmatically initiate a PPTP VPN connection on macOS Sierra / High Sierra using what's left?
Answer:
This method creates a PPTP connection that doesn't send all traffic and doesn't override other DNS providers, which means it works with multiple simultaneous VPN connections, each with different DNS search domains, and it closes the connection in an orderly fashion.
Not sending all traffic requires you to know the VPN subnet beforehand. If you don't, you must send all traffic (see below), since vanilla PPP/LCP has no means to tell the client its subnet (although theoretically the ip-up and ip-down scripts could guess it from the received IP address).
Save this Perl script as /usr/local/bin/pptp:
#!/usr/bin/env perl
if (@ARGV) {
    my $name = $ARGV[0];
    if (length $name && -e "/etc/ppp/peers/$name") {
        my $pid;
        $SIG{"INT"} = "IGNORE";
        die "fork: $!" unless defined ($pid = fork);
        if ($pid) { # parent
            $SIG{"INT"} = sub {
                kill HUP => $pid;
            };
            wait;
            exit;
        } else { # child
            $SIG{"INT"} = "DEFAULT";
            exec "pppd", "call", $name;
            exit;
        }
    } else {
        print "Error: PPTP name: $name\n";
    }
} else {
    opendir my $d, "/etc/ppp/peers" or die "Cannot read /etc/ppp/peers";
    while (readdir $d) {
        print "$_\n" if !($_ eq "." || $_ eq "..");
    }
    closedir $d;
}
Run it as sudo pptp AcmeOffice, where AcmeOffice is the PPP connection name, and close it with a single Control-C/SIGINT.
In /etc/ppp/peers, create the PPP connection file, in this example /etc/ppp/peers/AcmeOffice:
plugin /System/Library/SystemConfiguration/PPPController.bundle/Contents/PlugIns/PPPDialogs.ppp
plugin PPTP.ppp
noauth
# debug
redialcount 1
redialtimer 5
idle 1800
#mru 1320
mtu 1320
receive-all
novj 0:0
ipcp-accept-local
ipcp-accept-remote
refuse-pap
refuse-chap
#refuse-chap-md5
refuse-eap
hide-password
#noaskpassword
#mppe-stateless
mppe-128
mppe-stateful
require-mppe
passive
looplocal
nodetach
# defaultroute
#replacedefaultroute
# ms-dns 8.8.8.8
# usepeerdns
noipdefault
# logfile /tmp/ppp.AcmeOffice.log
ipparam AcmeOffice
remoteaddress office.acme.com
user misteracme
password acme1234
The last 4 options are connection-specific. Note that the password is stored in cleartext; chown root:wheel and chmod 600 are recommended. The nodetach, ipcp-accept-local, ipcp-accept-remote, and noipdefault options are critical.
Since we're not becoming/replacing the default route, you must manually change your routing table. Add an AcmeOffice entry to the /etc/ppp/ip-up script:
#!/bin/sh
#params: interface-name tty-device speed local-IP-address remote-IP-address ipparam
PATH=$PATH:/sbin:/usr/sbin
case "$6" in
AcmeOffice)
route -n add -net 192.168.1.0/24 -interface "$1"
;;
AcmeLab)
route -n add -net 192.168.2.0/24 -interface "$1"
;;
AcmeOffshore)
route -n add -net 192.168.3.0/24 -interface "$1"
;;
VPNBook)
;;
*)
;;
esac
and your /etc/ppp/ip-down script:
#!/bin/sh
#params: interface-name tty-device speed local-IP-address remote-IP-address ipparam
PATH=$PATH:/sbin:/usr/sbin
case "$6" in
AcmeOffice)
route -n delete -net 192.168.1.0/24 -interface "$1"
;;
AcmeLab)
route -n delete -net 192.168.2.0/24 -interface "$1"
;;
AcmeOffshore)
route -n delete -net 192.168.3.0/24 -interface "$1"
;;
VPNBook)
;;
*)
;;
esac
If the VPN has a DNS search domain (i.e. somehost.office.acme.com), create a file in /etc/resolver/ named after the DNS suffix, like /etc/resolver/office.acme.com, with contents like:
nameserver 192.168.1.1
domain office.acme.com
Note that this requires knowing the destination domain & nameserver beforehand. Theoretically ip-up & ip-down could create & delete this file on demand.
To send all traffic (& if you don't know the destination subnet), uncomment #defaultroute in the PPP connection file and leave the ip-up & ip-down entries blank (e.g. the VPNBook example). To override your DNS with the VPN's, uncomment usepeerdns.

ORA-12505, TNS: listener does not currently know of SID given in connect descriptor Vendor code 12505

I have just installed Oracle SQL Developer 11g, and when I try to make a new database connection it fails, so I'm configuring the .ora files (tnsnames.ora and listener.ora), but their contents are not the same as the .ora files shown in the tutorials I've been reading. So, is this the right content for a tnsnames.ora and listener.ora file? Thank you for answering.
this is my tnsnames.ora
# This file contains the syntax information for
# the entries to be put in any tnsnames.ora file
# The entries in this file are need based.
# There are no defaults for entries in this file
# that Sqlnet/Net3 use that need to be overridden
#
# Typically you could have two tnsnames.ora files
# in the system, one that is set for the entire system
# and is called the system tnsnames.ora file, and a
# second file that is used by each user locally so that
# he can override the definitions dictated by the system
# tnsnames.ora file.
# The entries in tnsnames.ora are an alternative to using
# the names server with the onames adapter.
# They are a collection of aliases for the addresses that
# the listener(s) is(are) listening for a database or
# several databases.
# The following is the general syntax for any entry in
# a tnsnames.ora file. There could be several such entries
# tailored to the user's needs.
<alias>= [ (DESCRIPTION_LIST = # Optional depending on whether u have
# one or more descriptions
# If there is just one description, unnecessary ]
(DESCRIPTION=
[ (SDU=2048) ] # Optional, defaults to 2048
# Can take values between 512 and 32K
[ (ADDRESS_LIST= # Optional depending on whether u have
# one or more addresses
# If there is just one address, unnecessary ]
(ADDRESS=
[ (COMMUNITY=<community_name>) ]
(PROTOCOL=tcp)
(HOST=<hostname>)
(PORT=<portnumber (1521 is a standard port used)>)
)
[ (ADDRESS=
(PROTOCOL=ipc)
(KEY=<ipckey (PNPKEY is a standard key used)>)
)
]
[ (ADDRESS=
[ (COMMUNITY=<community_name>) ]
(PROTOCOL=decnet)
(NODE=<nodename>)
(OBJECT=<objectname>)
)
]
... # More addresses
[ ) ] # Optional depending on whether ADDRESS_LIST is used or not
[ (CONNECT_DATA=
(SID=orcl)
[ (GLOBAL_NAME=<global_database_name>) ]
)
]
[ (SOURCE_ROUTE=yes) ]
)
(DESCRIPTION=
[ (SDU=2048) ] # Optional, defaults to 2048
# Can take values between 512 and 32K
[ (ADDRESS_LIST= ] # Optional depending on whether u have more
# than one address or not
# If there is just one address, unnecessary
(ADDRESS
[ (COMMUNITY=<community_name>) ]
(PROTOCOL=tcp)
(HOST=<hostname>)
(PORT=<portnumber (1521 is a standard port used)>)
)
[ (ADDRESS=
(PROTOCOL=ipc)
(KEY=<ipckey (PNPKEY is a standard key used)>)
)
]
... # More addresses
[ ) ] # Optional depending on whether ADDRESS_LIST
# is being used
[ (CONNECT_DATA=
(SID=orcl)
[ (GLOBAL_NAME=<global_database_name>) ]
)
]
[ (SOURCE_ROUTE=yes) ]
)
[ (CONNECT_DATA=
(SID=orcl)
[ (GLOBAL_NAME=<global_database_name>) ]
)
]
... # More descriptions
[ ) ] # Optional depending on whether DESCRIPTION_LIST is used or not
And this is my listener.ora:
# copyright (c) 1997 by the Oracle Corporation
#
# NAME
# listener.ora
# FUNCTION
# Network Listener startup parameter file example
# NOTES
# This file contains all the parameters for listener.ora,
# and could be used to configure the listener by uncommenting
# and changing values. Multiple listeners can be configured
# in one listener.ora, so listener.ora parameters take the form
# of SID_LIST_<lsnr>, where <lsnr> is the name of the listener
# this parameter refers to. All parameters and values are
# case-insensitive.
# <lsnr>
# This parameter specifies both the name of the listener, and
# it listening address(es). Other parameters for this listener
# us this name in place of <lsnr>. When not specified,
# the name for <lsnr> defaults to "LISTENER", with the default
# address value as shown below.
#
# LISTENER =
# (ADDRESS_LIST=
# (ADDRESS=(PROTOCOL=tcp)(HOST=localhost)(PORT=1521))
# (ADDRESS=(PROTOCOL=ipc)(KEY=PNPKEY)))
# SID_LIST_<lsnr>
# List of services the listener knows about and can connect
# clients to. There is no default. See the Net8 Administrator's
# Guide for more information.
#
# SID_LIST_LISTENER=
# (SID_LIST=
# (SID_DESC=
# #BEQUEATH CONFIG
# (GLOBAL_DBNAME=salesdb.mycompany)
# (SID_NAME=sid1)
# (ORACLE_HOME=/private/app/oracle/product/8.0.3)
# #PRESPAWN CONFIG
# (PRESPAWN_MAX=20)
# (PRESPAWN_LIST=
# (PRESPAWN_DESC=(PROTOCOL=tcp)(POOL_SIZE=2)(TIMEOUT=1))
# )
# )
# )
# PASSWORDS_<lsnr>
# Specifies a password to authenticate stopping the listener.
# Both encrypted and plain-text values can be set. Encrypted passwords
# can be set and stored using lsnrctl.
# LSNRCTL> change_password
# Will prompt for old and new passwords, and use encryption both
# to match the old password and to set the new one.
# LSNRCTL> set password
# Will prompt for the new password, for authentication with
# the listener. The password must be set before running the next
# command.
# LSNRCTL> save_config
# Will save the changed password to listener.ora. These last two
# steps are not necessary if SAVE_CONFIG_ON_STOP_<lsnr> is ON.
# See below.
#
# Default: NONE
#
# PASSWORDS_LISTENER = 20A22647832FB454 # "foobar"
# SAVE_CONFIG_ON_STOP_<lsnr>
# Tells the listener to save configuration changes to listener.ora when
# it shuts down. Changed parameter values will be written to the file,
# while preserving formatting and comments.
# Default: OFF
# Values: ON/OFF
#
# SAVE_CONFIG_ON_STOP_LISTENER = ON
# USE_PLUG_AND_PLAY_<lsnr>
# Tells the listener to contact an Onames server and register itself
# and its services with Onames.
# Values: ON/OFF
# Default: OFF
#
# USE_PLUG_AND_PLAY_LISTENER = ON
# LOG_FILE_<lsnr>
# Sets the name of the listener's log file. The .log extension
# is added automatically.
# Default=<lsnr>
#
# LOG_FILE_LISTENER = lsnr
# LOG_DIRECTORY_<lsnr>
# Sets the directory for the listener's log file.
# Default: <oracle_home>/network/log
#
# LOG_DIRECTORY_LISTENER = /private/app/oracle/product/8.0.3/network/log
# TRACE_LEVEL_<lsnr>
# Specifies desired tracing level.
# Default: OFF
# Values: OFF/USER/ADMIN/SUPPORT/0-16
#
# TRACE_LEVEL_LISTENER = SUPPORT
# TRACE_FILE_<lsnr>
# Sets the name of the listener's trace file. The .trc extension
# is added automatically.
# Default: <lsnr>
#
# TRACE_FILE_LISTENER = lsnr
# TRACE_DIRECTORY_<lsnr>
# Sets the directory for the listener's trace file.
# Default: <oracle_home>/network/trace
#
# TRACE_DIRECTORY_LISTENER=/private/app/oracle/product/8.0.3/network/trace
# CONNECT_TIMEOUT_<lsnr>
# Sets the number of seconds that the listener waits to get a
# valid database query after it has been started.
# Default: 10
#
# CONNECT_TIMEOUT_LISTENER=10
Ping the database IP to confirm it is reachable and that the listener is running, then confirm that your system hostname matches the one in your tnsnames.ora file.
After that, the connection depends on the connection details (host, port, and SID) being entered correctly.

"Item not writable" when trying to set a leaf from a bash script

This is a simplified version of the YANG model I'm working with:
module echo
{
namespace "http://namespace.com/ns/echo/1.0";
prefix echo;
import tailf-common
{
prefix tailf;
}
import ietf-inet-types
{
prefix inet;
}
...
container echo
{
...
container client
{
...
leaf ip
{
type inet:ipv4-address;
tailf:info "Destination IP of remote device";
tailf:snmp-name echoClientDestIp;
tailf:hidden debug;
}
}//container client
}//container echo
}//module echo
And this is a simplified bash script I'm running to change CDB:
#!/bin/sh
MAAPI=$CONFD_TOOLS_PATH/maapi
file_check $MAAPI
#...
$MAAPI --clicmd "unhide debug"
$MAAPI --set "echo:echo/client/ip" "$1"
#...
$MAAPI --clicmd "commit"
#...
$MAAPI --clicmd "hide debug"
exit 0
The rest of the model and the script are quite similar to what is here. What I get when trying to execute the script through the CLI (via a clispec file) is:
"Failed to set value: item is not writable - "
So I tried to literally make it writable (for which I had to make it operational and thus add a callpoint) so the leaf ended up looking like this:
leaf ip
{
type inet:ipv4-address;
tailf:info "Destination IP of remote device";
tailf:snmp-name echoClientDestIp;
tailf:hidden debug;
config false;
tailf:writable true;
tailf:callpoint useless-but-needed-callpoint-2;
}
Which yielded the same error. Any idea what is wrong?

Bash Script Not Working Bandwidth Shaping

I hope this is an easy answer
Problems:
I placed the following bash script called learn-address.sh in the following folder:
vi /etc/openvpn/netem/learn-address.sh
Added the following (2) lines to the .conf file:
script-security 3
learn-address /etc/openvpn/netem/learn-address.sh
And applied the following permission to the learn-address script:
chmod 755 /etc/openvpn/netem/learn-address.sh
However, while the script does update the files ($ip.classid and $ip.dev) in the tmp directory and passes the variables correctly, it does not execute the tc class and filter commands (there is no change to the qdisc).
What permissions would I need on the script for it to execute the tc class and filter commands when OpenVPN calls learn-address as a user connects, or is there something else that I missed?
Many thanks
Name of script: learn-address.sh
#!/bin/bash
statedir=/tmp/
function bwlimit-enable() {
    ip=$1
    user=$2
    dev=eth0
    # Disable if already enabled.
    bwlimit-disable $ip
    # Find unique classid.
    if [ -f $statedir/$ip.classid ]; then
        # Reuse this IP's classid
        classid=`cat $statedir/$ip.classid`
    else
        if [ -f $statedir/last_classid ]; then
            classid=`cat $statedir/last_classid`
            classid=$((classid+1))
        else
            classid=1
        fi
        echo $classid > $statedir/last_classid
    fi
    # Find this user's bandwidth limit
    # downrate: from VPN server to the client
    # uprate: from client to the VPN server
    if [ "$user" == "myuser" ]; then
        downrate=10mbit
        uprate=10mbit
    elif [ "$user" == "anotheruser" ]; then
        downrate=2mbit
        uprate=2mbit
    else
        downrate=5mbit
        uprate=5mbit
    fi
    # Limit traffic from VPN server to client
    tc class add dev $dev parent 1: classid 1:$classid htb rate $downrate
    tc filter add dev $dev protocol all parent 1:0 prio 1 u32 match ip dst $ip/32 flowid 1:$classid
    # Limit traffic from client to VPN server
    tc filter add dev $dev parent ffff: protocol all prio 1 u32 match ip src $ip/32 police rate $uprate burst 80k drop flowid :$classid
    # Store classid and dev for further use.
    echo $classid > $statedir/$ip.classid
    echo $dev > $statedir/$ip.dev
}
function bwlimit-disable() {
    ip=$1
    if [ ! -f $statedir/$ip.classid ]; then
        return
    fi
    if [ ! -f $statedir/$ip.dev ]; then
        return
    fi
    classid=`cat $statedir/$ip.classid`
    dev=`cat $statedir/$ip.dev`
    tc filter del dev $dev protocol all parent 1:0 prio 1 u32 match ip dst $ip/32
    tc class del dev $dev classid 1:$classid
    tc filter del dev $dev parent ffff: protocol all prio 1 u32 match ip src $ip/32
    # Remove .dev but keep .classid so it can be reused.
    rm $statedir/$ip.dev
}
# Make sure queueing discipline is enabled.
tc qdisc add dev $dev root handle 1: htb 2>/dev/null || /bin/true
tc qdisc add dev $dev handle ffff: ingress 2>/dev/null || /bin/true
case "$1" in
    add|update)
        bwlimit-enable $2 $3
        ;;
    delete)
        bwlimit-disable $2
        ;;
    *)
        echo "$0: unknown operation [$1]" >&2
        exit 1
        ;;
esac
exit 0
The calls to tc here are occurring before dev is defined, which happens later after you've parsed the function arguments and either called bwlimit-enable or bwlimit-disable. It looks like you want to move those two calls:
# Make sure queueing discipline is enabled.
tc qdisc add dev $dev root handle 1: htb 2>/dev/null || /bin/true
tc qdisc add dev $dev handle ffff: ingress 2>/dev/null || /bin/true
... to below the case statement.

How to Create Installation Script for Tcl Package

I have a Tcl package, consisting of a couple of modules, unit tests, and samples, which I am looking to distribute. I have searched but not found any easy/simple way to create an installation script for it. I looked at other packages such as Tclx, and it looks like they use autotools (untouched territory for me). So, is autotools the only way? My target platforms are mostly Macs and Linux, but they might expand to Windows in the future.
It seems I have to create my own setup script. Right now it is very crude: copy files to a location. It works for me for now, and I will continue to enhance the setup script to fix bugs and add features as the need arises. Here is my setup.tcl:
#!/usr/bin/tclsh
# Crude setup script for my package
# ======================================================================
# Configurable items
# ======================================================================
# Install dir: $destLib/$packagename
set destLib [file join ~ Library Tcl]
set packageName tcl_tools
# List of source/dest to install
set filesList {
argparser.tcl
dateutils.tcl
htmlutils.tcl
listutils.tcl
pkgIndex.tcl
samples/common.tcl
samples/dateutils_sample.tcl
samples/httputils_sample.tcl
samples/parse_by_example_demo_missing_parameter.tcl
samples/parse_by_example_demo.tcl
samples/parse_simple_demo.tcl
}
# ======================================================================
# Determine the destination lib dir
# ======================================================================
if {[llength $::argv] == 1} {
    set destLib [lindex $::argv 0]
}
if {[lsearch $auto_path $destLib] == -1} {
    puts "ERROR: Invalid directory to install. Must be one of these:"
    puts "[join $auto_path \n]"
    exit 1
}
set destLib [file join $destLib $packageName]
file mkdir $destLib
# ======================================================================
# Install
# ======================================================================
foreach source $filesList {
    set dest [file join $destLib $source]
    puts "Installing $dest"
    # Create destination dir if needed
    set destDir [file dirname $dest]
    if {![file isdirectory $destDir]} { file mkdir $destDir }
    # Copy
    file copy $source $dest
}
puts "Done"
