Extract all Change Packages for all the files in a PTC project - mks-integrity

I am trying to get a list of all the change packages used to update all the files in a PTC project. I used the following command:
si viewproject --recurse --fields=name,creationcpid,cpid,memberrev,indent --project=%Project% --hostname=%Host_name% --port=%port1% -Y
But I do not get all the CPs used, only the first one. I also tried the command:
si rlog --recurse --format="{membername},{memberrev},{revision},{cpid},{author}\n" --noHeaderFormat --project=%Project% --hostname=%Host_name% --port=%port1%

Using the following CLI command you will get all change packages used by the current user:
si viewcps
However, viewcps accepts --filter=, where you can specify the project:
si viewcps --hostname=%Host_name% --port=%port1% --filter=project:%Project%
This command needs to be called recursively for each subproject, because it returns only the change packages from the first level of the specified project.
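A minimal Bash sketch of that recursion (hedged: it assumes subproject entries end in ".pj" and that --fields=name prints full project paths; host, port, and root project are placeholders to replace):

#!/bin/bash
# Hedged sketch: recursively collect change packages for a project and all
# of its subprojects. Host, port, and root project path are placeholders.
HOST="myhost"                # replace with your server
PORT="7001"                  # replace with your port
ROOT="/path/to/project.pj"   # replace with your top-level project

collect_cps() {
    local project="$1"
    # Change packages referenced at this level of the project
    si viewcps --hostname="$HOST" --port="$PORT" --filter=project:"$project"
    # Find direct subprojects (assumption: their names end in ".pj") and recurse
    si viewproject --project="$project" --hostname="$HOST" --port="$PORT" --fields=name |
        grep '\.pj$' |
        while read -r sub; do
            collect_cps "$sub"
        done
}

collect_cps "$ROOT"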
Usage: si viewcps options... issue|issue:change package id...; options are:
  --fields=field1[:width1],field2[:width2]...
           where fieldn can be any of: closeddate, cptype, creationdate, deployrequestid, deployrequeststate, deploytarget, description, id, issue, propagated, propagatedby, siserver, stage, stagingsystem, state, summary, user
           The fields to be displayed
  --filter=user:name
           issueid:issue
           state[:closed|:open|:submitted|:accepted|:rejected|:discarded|:commitfailed]
           closeddate:<date>
           creationdate:<date>
           membertype[:member|:subproject]
           member:<expression>
           project:<expression>
           variant:<expression>
           mainline
           description:<expression>
           summary:<expression>
           typemodifier[:committed|:pending]
           type[:add|:addfromarchive|:drop|:import|:exclusivelock|:nonexclusivelock|:renamefrom|:renameto|:movememberfrom|:movememberto|:update|:updatearchive|:updaterevision|:createsubproject|:addsubproject|:addsharedsubproject|:configuresubprojectfrom|:configuresubprojectto|:movesubprojectfrom|:movesubprojectto|:dropsubproject]
           hasissue
           pendingreviewby:name
           acceptedby:name[;<date>]
           rejectedby:name[;<date>]
           cptype[:development|:propagation|:deploy|:staging|:resolution]
           stagingsystem:<expression>
           stage:<expression>
           deploytarget:<expression>
           deployrequeststate[|:cancelled|:cleanedup|:cleaningup|:cleanupfailed|:created|:deployed|:executed|:executing|:packageactionsfailed|:packagecontentfailed|:packagingactions|:packagingcontent|:prepared|:preparing|:queuedonsource|:queuedontarget|:readytodeploy|:readytotransfer|:rollbackfailed|:rolledback|:rollingback|:stopped|:transferfailed|:transferring]
           deployrequestid:<expression>
           The filter used to select change packages
  --height=value  The height in pixels of the windows
  --myReviews  Show the change packages awaiting review by current user
  --query=value  The query used to select change packages
  --width=value  The width in pixels of the windows
  -x value  The x location in pixels of the window
  -y value  The y location in pixels of the window
  -?  Shows the usage for a command
  --[no]batch  Control batch mode (no user interaction in batch mode)
  --cwd=value  Act as if command executed in specified directory
  -F value  Read the selection from a specified file
  --forceConfirm=[yes|no]  Specify an answer to all confirmation questions
  -g  User interaction should happen via the GUI
  --gui  User interaction should happen via the GUI
  --hostname=value  Hostname of server
  -N  Responds to all confirmations with "no"
  --no  Responds to all confirmations with "no"
  --password=value  Credentials (e.g., password) to login with
  --[no]persist  Control persistence of CLI views
  --port=value  TCP/IP port number of server
  --quiet  Control status display
  --selectionFile=value  Read the selection from a specified file
  --settingsUI=[gui|default]  Control UI for command options
  --status=[none|gui|default]  Control status display
  --usage  Shows the usage for a command
  --user=value  Username to login to server with
  -Y  Responds to all confirmations with "yes"
  --yes  Responds to all confirmations with "yes"

Where are files from the tap kept in Meltano?

I have the following combination of tap/target in Meltano: tap-marketo and target-s3-parquet.
I want to extract data with tap-marketo from date A to date B in the past.
I saw that we can only define start_date and max_export_days.
I have tried to start with start_date A and stop the run once I reach B, but this does not work:
the extractor only emits its state once its work is completely done, and the target is not called, so no load was done.
I can also see that the export itself is being performed:
{'run_id': '46ba5256-7019-48c7-890a-28746bb5272a', 'state_id': '2023-02-09T152428--tap-marketo--target-s3-parquet', 'stdio': 'stderr', 'cmd_type': 'extractor', 'name': 'tap-marketo', 'event': 'INFO GET: https://XXXXXXX/bulk/v1/activities/export/6636daf1-ad1e-41e1-b8d5-cdd31de5d4e0/file.json', 'level': 'info', 'timestamp': '2023-02-09T17:35:02.098016Z'}
But where do I find this file in my container?
I want to invoke the target separately but need to give the --input.
# meltano invoke target-s3-parquet --help
Environment 'dev' is active
Usage: target-s3-parquet [OPTIONS]
Execute the Singer target.
Options:
  --input FILENAME          A path to read messages from instead of from
                            standard in.
  --config TEXT             Configuration file location or 'ENV' to use
                            environment variables.
  --format [json|markdown]  Specify output style for --about
  --about                   Display package metadata and settings.
  --version                 Display the package version.
  --help                    Show this message and exit.
To invoke the tap and target separately:
meltano invoke tap-marketo > ./outfile.singer.jsonl
cat ./outfile.singer.jsonl | meltano invoke target-s3-parquet
Which is equivalent to:
meltano invoke tap-marketo > ./outfile.singer.jsonl
meltano invoke target-s3-parquet --input=./outfile.singer.jsonl
In both of the above cases, you can retry just the second step.
However, if you invoke both together, using meltano run tap-marketo target-s3-parquet or similar, the intermediate file will not be stored on disk, and you would not be able to replay just the target-side processing.
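If you do want a replayable copy while still running both in one pass, a standard shell tee can capture the intermediate stream (a sketch; keep the confidentiality caveat below in mind):
meltano invoke tap-marketo | tee ./outfile.singer.jsonl | meltano invoke target-s3-parquet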
Why these files aren't stored on disk by default
The stream of messages you'll see in the examples above will necessarily contain potentially secret or confidential data, and the volume contained within the stream can be extremely large, since it contains the records themselves as well as metadata used for coordinating between the tap and target. For this reason, this stream of messages from tap to target is not stored to disk during a normal sync operation.

Avoid mass e-mail notification in error analysis bash script

I am selecting error log details from a Docker container and deciding within a shell script how and when to alert about the issue via Discord and/or email.
Because I am receiving the email alerts too often with the same information in the email body, I want to implement the following two adjustments:
Fatal error log selection:
FATS="$(docker logs --since 24h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"
Email sent, in case FATS has some content:
swaks --from "$MAILFROM" --to "$MAILTO" --server "$MAILSERVER" --auth LOGIN --auth-user "$MAILUSER" --auth-password "$MAILPASS" --h-Subject "FATAL ERRORS FOUND" --body "$FATS" --silent "1"
How can I send the email only in the case that FATS has different content than on the previous run of the script? I have thought about hashing its content and storing the hash in a text file: if the hash is the same as on the previous script run, the email will be skipped.
Another option could be a local, temporary variable in the user's global Bash profile, so that no file has to be stored on the file system (to avoid reads/writes).
How can I do that?
When you are writing a script for your monitoring, add functions for additional functionality, like:
logging all the alerts that have been sent
making sure you don't send more than one alert each hour
considering sending warnings only during working hours
escalating a message when it fails N times without intermediate success
possibly sending an alert to different receivers (different email addresses, or also to SMS or Teams)
making an interface for an operator so they can look back at when something went wrong the first time
When you have control over which messages you send, it is easy to filter out duplicate messages (after changing --since).
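For the hash-compare idea from the question, a minimal Bash sketch (the state-file location and the use of sha256sum are assumptions; the swaks call is the one from the question):

# Only alert when $FATS is non-empty and differs from the previous run.
STATEFILE="/var/tmp/${NODENAME}_fatal.sha256"   # assumed state-file location
NEWHASH=$(printf '%s' "$FATS" | sha256sum | awk '{print $1}')
OLDHASH=$(cat "$STATEFILE" 2>/dev/null)
if [ -n "$FATS" ] && [ "$NEWHASH" != "$OLDHASH" ]; then
    swaks --from "$MAILFROM" --to "$MAILTO" --server "$MAILSERVER" \
          --auth LOGIN --auth-user "$MAILUSER" --auth-password "$MAILPASS" \
          --h-Subject "FATAL ERRORS FOUND" --body "$FATS" --silent "1"
    printf '%s' "$NEWHASH" > "$STATEFILE"
fi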
I've chosen the proposal of @ralf-dreager and reduced the selection to 1d and 1h. Consequently, I've changed my monitoring script to either go through the results of 1d or just 1h, without the need to reselect each time. Huge performance improvement, and no need to store anything else in a variable or on the file system.
FATS="$(docker logs --since 1h $NODENAME 2>&1 | grep 'FATAL' | grep -v 'INFO')"

ISDeploymentWizard.exe command (SSIS deployment ) in CMD doesn't print any indication for status

I'm running the below command in CMD for SSIS:
ISDeploymentWizard.exe /Silent /ModelType:Project /SourcePath:"C:\TEST\Integration Services.ispac" /DestinationServer:"TEST03,1111" /DestinationPath:"/TEST/DEVOPS"
and it finished successfully, but with no indication on the command line; I can only check with SSMS to make sure it was really deployed. Any idea why?
Solid observation here @areilma - the /Silent option eliminates all status info. I had always assumed that flag controlled whether the GUI was displayed or not.
If I run this command
isdeploymentwizard.exe /Silent /ModelType:Project /SourcePath:".\SO_66497856.ispac" /DestinationServer:".\dev2017" /DestinationPath:"/SSISDB/BatchSizeTester/SO_66497856"
My package is deployed to my local machine at the path specified. Removing the /Silent option causes the GUI to open up with the prepopulated values.
isdeploymentwizard.exe /ModelType:Project /SourcePath:".\SO_66497856.ispac" /DestinationServer:".\dev2017" /DestinationPath:"/SSISDB/BatchSizeTester/SO_66497856"
When the former command runs, nothing is printed to the command prompt. So that's the happy-path deployment; maybe if something is "wrong", I'd get an error message on the command line. And this is where things got "interesting".
I altered my destination path to a folder that doesn't exist. I know the tool doesn't create a path if it doesn't exist, and when I ran it, I didn't get an error back on the command line. What I did get was a pop-up windowed error of
TITLE: SQL Server Integration Services
The path does not exist. The folder 'cBatchSizeTester' was not found in catalog 'SSISDB'. (Microsoft.SqlServer.IntegrationServices.Wizard.Common)
BUTTONS:
OK
So the /Silent option removes the GUI to allow us to have an automated deploy, but if a bad value is passed, we return to having a GUI... I then repeated with a bad server name, which led to a second observation: the second I hit Enter, the command line returned, ready for the next command. 15 seconds later, however,
TITLE: SQL Server Integration Services
Failed to connect to server .\dev2017a. (Microsoft.SqlServer.ConnectionInfo)
ADDITIONAL INFORMATION:
A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified) (Microsoft SQL Server, Error: -1)
For help, click: http://go.microsoft.com/fwlink?ProdName=Microsoft%20SQL%20Server&EvtSrc=MSSQLServer&EvtID=-1&LinkId=20476
Well now, that tells me that the actual deployment is an independent spawned process, so it won't return any data back to the command line in any case.
Since I assume we're looking at this from a CI/CD perspective, what can we do? We could fire off a sqlcmd afterwards, looking for an entry in the SSISDB catalog views to see what happened. Something like this:
SELECT TOP 1
    O.end_time,
    SV.StatusValue,
    F.name AS FolderName,
    P.name AS ProjectName
FROM catalog.operations AS O
CROSS APPLY
(
    SELECT
        CASE O.status
            WHEN 1 THEN 'Created'
            WHEN 2 THEN 'Running'
            WHEN 3 THEN 'Canceled'
            WHEN 4 THEN 'Failed'
            WHEN 5 THEN 'Pending'
            WHEN 6 THEN 'Ended unexpectedly'
            WHEN 7 THEN 'Succeeded'
            WHEN 8 THEN 'Stopping'
            WHEN 9 THEN 'Completed'
        END AS StatusValue
) SV
INNER JOIN catalog.object_versions AS OV
    ON OV.object_id = O.object_id
INNER JOIN catalog.projects AS P
    ON P.object_version_lsn = OV.object_version_lsn
INNER JOIN catalog.folders AS F
    ON F.folder_id = P.folder_id
/*
INNER JOIN catalog.packages AS PKG
    ON PKG.project_id = P.project_id
*/
WHERE O.operation_type = 101 /* deploy project */
    AND P.name = 'SO_66497856' /* project name */
    AND F.name = 'BatchSizeTester'
ORDER BY O.created_time DESC;
Perhaps a filter against end_time within the past 10 seconds would be appropriate: if we have a result and the status is Succeeded, we got a deploy; no result means it failed. I presume something similar happens when the GUI runs, and despite all this testing, I'm not interested in firing up a trace to fully round out this answer and see what happens behind the scenes.
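In a CI/CD pipeline, that check could be wrapped around sqlcmd, for example (a sketch: the server name comes from the question, and it assumes the query above has been saved as check_deploy.sql):

# Fail the build when the latest deploy operation did not succeed.
sqlcmd -S "TEST03,1111" -d SSISDB -i check_deploy.sql -h -1 -W -b > result.txt
grep -q 'Succeeded' result.txt || { echo 'SSIS deployment not confirmed'; exit 1; }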
If you want to forgo the prebuilt tool, the other option would be to use the ManagedObjectModel/PowerShell approach to deploy, as you can get information back from there. The other deployment option is with the T-SQL commands; the second link in my documentation section outlines what that would look like.
Paltry documentation I could find
I could find no documentation as to the command-line switches for isdeploymentwizard.exe.
Deploy an SSIS project from the command prompt with ISDeploymentWizard.exe
Deploy Integration Services (SSIS) Projects and Packages
From @arielma's deleted answer, they found a more succinct answer saying "not possible".

Consuming function module with SAP Netweaver RFC SDK in Bash

I'm trying to make a request to a function in an SAP RFC server hosted at 10.123.231.123 with user myuser, password mypass, sysnr 00, client 076, language E. The name of the function is My_Function_Nm, with params: string Alternative, string Date, string Name.
I use the command line:
/usr/sap/nwrfcsdk/bin/startrfc -h 10.123.231.123 -s 00 -u myuser -p mypass -c 076 -l en -F My_Function_Nm
But it always shows me the help instructions.
I guess I'm not specifying the -E pathname=edifile, and that's because I don't know how to create an EDI file containing the parameter values for the specified function. Maybe someone can help me with how to create this file and how to correctly invoke startrfc to consume this function?
Thanks in advance.
If you actually check the help text that the program shows, you should find the following passages:
RFC connection options:
[...]
-2 SNA mode on.
You must set this if you want to connect to R/2.
[...]
-3 R/3 mode on.
You must set this if you want to connect to R/3.
Apparently you forgot to specify -3...
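So the invocation from the question, with the same parameters plus -3, would become:
/usr/sap/nwrfcsdk/bin/startrfc -3 -h 10.123.231.123 -s 00 -u myuser -p mypass -c 076 -l en -F My_Function_Nm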
You should use sapnwrfc.ini, which will store your connection parameters; it should be placed in the same directory as the client program.
A sample file for your app would be the following:
DEST=TST1
ASHOST=10.123.231.123
SYSNR=00
CLIENT=076
USER=myuser
PASSWD=mypass
RFC_TRACE=0
Documentation on using this file is available in the SAP NW RFC SDK documentation.
To call the function you must create a Bash script, but it would be better to use a Python script.

How to configure XEmacs to recognize the database specified in .sql-mode?

I am running XEmacs with a .sql-mode file containing the following:
(setq sql-association-alist
      '(
        ("XDBST (mis4) " ("XDBST" "xsius" "password"))
        ("dev " ("DEVTVAL1" "xsi" "password" "devbilling"))
        ))
When I log in to the database in XEmacs by selecting Utilities->Interactive Mode->Use Association, it logs me in, but it does not pick up the database parameter. For example, when I log in to "dev", it logs me in, but then when I do "select db_name()" it yields csdb instead of devbilling. It appears that it is picking up the default database associated with the user and ignoring the database parameter. How do you configure XEmacs so that it picks up the database parameter specified in .sql-mode when the option is selected?
Thanks,
Mike
I did some more research: XEmacs uses sql-mode.el (on my system, /usr/local/xemacs/lisp/sql-mode.el) to log in with SQL Mode. The code in that file does not use the database specified in .sql-mode in Interactive Mode. It does, however, use the database specified in .sql-mode in Batch Mode, so you can use Batch Mode as a workaround.
