I am trying to find the XmlAccess port number, along with the configuration file location, on AIX/UNIX boxes where WebSphere Portal 6.0 is installed. Can you help with a script I can use, as I have multiple boxes and instances to search?
Attempted the following without success:
I have tried using the find command, but because I do not have admin privileges I get tons of warnings like the ones shown below:
find: 0652-081 cannot change directory to :
: The file access permissions do not allow the specified action.
This is very tedious and time-consuming to wade through when you have scores of warnings coming through.
You can try the following and let me know how you get along:
$ grep -e "XmlAccessPort=" `find / -name wpconfig.properties 2>/dev/null`
Hint: You can fine-tune the find command above if you know the location of the Portal installation.
Sample output:
/WebSphere/PortalServer/config/wpconfig.properties:XmlAccessPort=60644
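Since the question mentions multiple boxes and instances, the one-liner above can be wrapped into a small reusable function; the install roots passed below are common defaults, not guaranteed locations, so adjust them per box (and run the function on each host, e.g. over ssh):

```shell
# Hypothetical wrapper around the find/grep one-liner: searches the given
# roots for wpconfig.properties and prints file:XmlAccessPort=... lines.
# The /dev/null argument forces grep to prefix each match with the filename
# (more portable than grep -H on older AIX greps).
find_xmlaccess_port() {
    for root in "$@"; do
        for f in $(find "$root" -name wpconfig.properties 2>/dev/null); do
            grep "XmlAccessPort=" "$f" /dev/null
        done
    done
}

# Common (assumed) install roots; narrow these if you know the real path.
find_xmlaccess_port /usr/WebSphere /opt/WebSphere
```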
In a typical installation it should be the same port that Portal runs on. For Portal 7, that has typically been 10039. I haven't worked with version 6, so I can't add much more than that.
I've installed the SnowSQL CLI tool (v1.2.16) and tried connecting to Snowflake using a command similar to snowsql -a <account details> -user datamonk3y@domain.com --authenticator externalbrowser.
For myself, and a few other colleagues, a pop up window appears which will allow us to authenticate. Unfortunately this isn't the case for some of my other colleagues...
I've not found anything obvious, but the authentication browser window simply isn't popping up for some users (Around half of us), therefore the connection is aborting after time out.
We're all using AWS workspaces with the same version of windows, same version of chrome and the same version of Snowsql. There's nothing I can see in the chrome settings that could be causing this. I'm also able to change the default browser to Firefox and I still authenticate fine.
Logging into the UI works for everyone too...
The logs don't really give much away; the failed attempts get a Failed to check OCSP response cache file message, but I think this is because the authentication isn't initiated with the server.
When I check my local machine (C:/Users/<datamonk3y>/AppData/Local/Snowflake/Caches/) I see a ocsp_response_cache.json file, but this isn't there for my colleagues who aren't able to log in.
As @SrinathMenon has mentioned in the comments below, adding -o insecure_mode=True to the login command will bypass this issue, but does anyone have any thoughts as to what could be causing this?
Thanks
Try turning off OCSP checking:
snowsql -a ACCOUNT -u USER -o insecure_mode=True
The only root cause I can see for this issue is that the request is not able to reach the OCSP URL and is failing.
Adding the debug flag in snowsql will give more detail. Use this to collect the debug logs:
snowsql -a <account details> -user datamonk3y@domain.com --authenticator externalbrowser -o log_level=debug -o log_file=<path>
In my case, what worked was including the region in account name. So instead of -a abc1234, you would do something like -a abc1234.us-east-1.
https://docs.snowflake.com/en/user-guide/admin-account-identifier.html#format-2-legacy-account-locator-in-a-region explains this a little, but basically you use the first part of the web console URL, eg: https://abc1234.us-east-1.snowflakecomputing.com/ (this only works with classic console)
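As a sketch of that mapping: the account argument is just the console URL with the scheme and the snowflakecomputing.com suffix stripped (the URL below is the example from above, not a real account):

```shell
# Derive the -a value from the classic console URL (example values only).
url="https://abc1234.us-east-1.snowflakecomputing.com/"
account=$(echo "$url" | sed -e 's|^https://||' -e 's|\.snowflakecomputing\.com.*$||')
echo "$account"   # abc1234.us-east-1
# Then connect with, e.g.:
# snowsql -a "$account" -u USER --authenticator externalbrowser
```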
I’m trying to install Hadoop 3.2.0 on Windows 10 using mostly the following tutorial:
https://wiki.apache.org/hadoop/Hadoop2OnWindows
I’ve found some relevant tutorial on the web even though they are principally related to Linux.
Every time I try to verify that the HDFS daemons are running with this command:
"%HADOOP_PREFIX%\bin\hdfs dfs -put myfile.txt /"
I constantly get the same error message: “Your endpoint configuration is wrong;”
I tried changing the port to 9000, switching to localhost, and also using hostname:8820.
I checked Stack Overflow and Super User but haven't found an answer yet.
What should I try?
Try backslashes instead of forward slashes for Windows paths.
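Separately from the path separators, the "endpoint configuration is wrong" message usually means the client and the NameNode disagree on fs.defaultFS. A minimal core-site.xml sketch to cross-check against (the hostname and port 9000 here are assumptions; they must match what the NameNode actually binds to):

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```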
I'm trying to install WebLogic Server on CentOS 7, following Oracle's instructions for console mode. Everything goes fine until the WebLogic file starts extracting on my machine, at which point I get a message about:
display environment variable failed
I googled it and found Xming as a solution, but is there any way to install WebLogic without Xming?
You need to do a silent install, as mentioned. You can find the documentation here.
Basically, you need two files:
A response file
Here, you will set some parameters like ORACLE_HOME, proxy information if needed and installation type, etc.
An oraInst.loc file
In this file, you need to do the following (from the documentation):
Replace oui_inventory_directory with the full path to the directory where you want the installer to create the inventory directory. Then, replace oui_install_group with the name of the group whose members have write permissions to this directory.
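For reference, the oraInst.loc template from the documentation looks like this, with both placeholder values to be substituted as described above:

```
inventory_loc=oui_inventory_directory
inst_group=oui_install_group
```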
After doing all of this, you can run the command as follows:
java -jar distribution_name.jar -silent -responseFile file [-options]
I uploaded my own oraInst.loc and response files here for demonstration. I strongly suggest you read the documentation, though. Good luck.
I am new to GWAS analysis and have been trying to run the PLINK tutorial sample datasets (hapmap, 80K loci) through gPLINK to do some exclusions. I am currently working on Mac OS X 10.10. I've applied the threshold settings (high missing rate, low MAF, etc.) to my file "hapmap1.ped" and prepared to execute the command through gPLINK, but it keeps giving me the error prompt "can not execute command locally".
Is there something wrong with my library or directory settings?
gPLINK runs in two modes, a remote mode and a local one. It seems you are running the local one. Please check that you are specifying the correct path to where PLINK is installed when configuring gPLINK. For more details, refer to the gPLINK configuration.
I'm trying to run a Neo4j database online backup. I am using a Windows 7 machine and Neo4j Enterprise 2.0.1.
I'm pretty new to all that database stuff, so I need pretty precise advice.
So far I have tried various steps to run the backup:
I created a clear directory for the backup (C:\Users\Tobi\Desktop\neo_backup)
I typed the following statement into the Neo4j command box: ./neo4j-backup -from single://localhost:7474 -to C:\Users\Tobi\Desktop\neo_backup.
But despite the red help box dropping down, nothing happens. I also tried some slightly different statements (e.g. using the IP address, etc.).
What am I doing wrong? Could someone give me some advice?
You have to run this from your Windows command line: execute cmd from the Start menu, navigate to the Neo4j installation directory (cd c:\your\path\to\neo), and run it from there.
Also, I think on Windows it's called Neo4jBackup.bat (but I'm not sure, as I don't have Windows at hand).
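Putting that together, a sketch of the Windows session (the install path is a placeholder, and 6362 is Neo4j's default online-backup port rather than the 7474 browser port; verify both against your setup):

```bat
cd C:\path\to\neo4j-enterprise-2.0.1
bin\Neo4jBackup.bat -from single://localhost:6362 -to C:\Users\Tobi\Desktop\neo_backup
```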