Can't start Deluge - Windows

I'm using Windows XP SP3 with the deluge-1.2.0_rc3-win32-setup.exe installer, but when I try to run \deluge\deluge-python\deluge.exe, an error happens:
[error] init:1982 DLL load failed: The specified module could not be found.
...
...
..
ImportError: DLL load failed: The specified module could not be found.
[error] xxxxxx ui:147 There was an error whilst launching the request UI: gtk
[error] xxxxxx ui:148 Look at the traceback above for more information
So, Deluge doesn't start :(

I had no problem installing. I chose the complete install rather than the recommended one, and made sure to select the proper directory for installing the DLL files (in my case it was the recommended /bin option). Try reinstalling.

Related

Define setup.py dependencies from a private PyPI

I'd like to install dependencies from my private PyPI by specifying them within a setup.py.
I've already tried to specify where to find the dependencies within dependency_links like this:
setup(
    ...
    install_requires=["foo==1.0"],
    dependency_links=["https://my.private.pypi/"],
    ...
)
I've also tried to define the entire URL within the dependency_links:
setup(
    ...
    install_requires=[],
    dependency_links=["https://my.private.pypi/foo/foo-1.0.tar.gz"],
    ...
)
but when I try to install with python setup.py install, neither of them works for me.
Can anybody help me?
EDITS:
With the first piece of code I got this error:
...
Installed .../test-1.0.0-py3.7.egg
Processing dependencies for test==1.0.0
Searching for foo==1.0
Reading https://my.private.pypi/
Reading https://pypi.org/simple/foo/
Couldn't find index page for 'foo' (maybe misspelled?)
Scanning index of all packages (this may take a while)
Reading https://pypi.org/simple/
No local packages or working download links found for foo==1.0
error: Could not find suitable distribution for Requirement.parse('foo==1.0')
while in the second case I didn't get any error, just the following:
...
Installed .../test-1.0.0-py3.7.egg
Processing dependencies for test==1.0.0
Finished processing dependencies for test==1.0.0
UPDATE 1:
I've tried to change the setup.py following sinoroc's instructions. Now my setup.py looks like this:
setup(
    ...
    install_requires=["foo==1.0"],
    dependency_links=["https://username:password#my.private.pypi/folder/foo/foo-1.0.tar.gz"],
    ...
)
I built the library test with python setup.py sdist and tried to install it with pip install /tmp/test/dist/test-1.0.0.tar.gz, but I still get this error:
Processing /tmp/test/dist/test-1.0.0.tar.gz
ERROR: Could not find a version that satisfies the requirement foo==1.0 (from test==1.0.0) (from versions: none)
ERROR: No matching distribution found for foo==1.0 (from test==1.0.0)
Regarding the private PyPI, I don't have any additional information because I'm not its administrator. As you can see, I just have the credentials (username and password) for that server.
Additionally, that PyPI is organised in sub-folders, https://my.private.pypi/folder/.., which is where the dependency I want to install lives.
UPDATE 2:
By running pip install --verbose /tmp/test/dist/test-1.0.0.tar.gz, it seems there is only one location searched for the library foo: the public server https://pypi.org/simple/foo/, not our private server https://my.private.pypi/folder/foo/.
Here is the output:
...
1 location(s) to search for versions of foo:
* https://pypi.org/simple/foo/
Getting page https://pypi.org/simple/foo/
Found index url https://pypi.org/simple
Looking up "https://pypi.org/simple/foo/" in the cache
Request header has "max_age" as 0, cache bypassed
Starting new HTTPS connection (1): pypi.org:443
https://pypi.org:443 "GET /simple/foo/ HTTP/1.1" 404 13
Status code 404 not in (200, 203, 300, 301)
Could not fetch URL https://pypi.org/simple/foo/: 404 Client Error: Not Found for url: https://pypi.org/simple/foo/ - skipping
Given no hashes to check 0 links for project 'foo': discarding no candidates
ERROR: Could not find a version that satisfies the requirement foo==1.0 (from test==1.0.0) (from versions: none)
Cleaning up...
Removing source in /private/var/...
Removed build tracker '/private/var/...'
ERROR: No matching distribution found for foo==1.0 (from test==1.0.0)
Exception information:
Traceback (most recent call last):
...
In your second attempt, I believe you should still have foo==1.0 in the install_requires.
Update
Be aware that pip does not support dependency_links (it used to, but does not anymore).
For pip, the alternative is to use command line options such as --index-url, --extra-index-url, or --find-links. These options cannot be enforced on the user of your project (contrary to the dependency links from setuptools), so they have to be properly documented. To facilitate this, a good idea is to provide an example requirements.txt file to the users of your project; this file can contain some of these pip options.
For example:
# requirements.txt
# ...
--find-links 'https://my.private.pypi/'
foo==1.0
# ...
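To tie it together, here is a minimal sketch of a setup.py that leaves index resolution entirely to pip; the project name test and the pinned foo==1.0 are just the placeholders from the question:
# setup.py -- only pin the requirement; the private index is supplied via pip options
from setuptools import setup, find_packages

setup(
    name="test",
    version="1.0.0",
    packages=find_packages(),
    install_requires=["foo==1.0"],
    # no dependency_links: modern pip ignores it anyway
)
A user (or CI job) would then install either with the requirements.txt shown above or with something like pip install --find-links 'https://my.private.pypi/' ., so the private index is passed on the pip side instead of being baked into setup.py.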

Running PySpark on PyCharm

On a Mac (v. 10.14.5), I am trying to run PySpark programs in PyCharm (professional edition, v. 19.2).
I know my simple PySpark program is fine, because when I run it with spark-submit outside PyCharm from the terminal, using the Spark I installed via brew, it works as expected. I have tried linking PyCharm to this version of Spark, but I am getting other issues.
I followed multiple instructions online (for example, this Stack Overflow thread) to install pyspark within PyCharm (Preferences -> Project Interpreter), and set the SPARK_HOME environment variable to the appropriate venv directory (Run -> Edit Configurations -> Environment Variables).
But I get an error message when I run the program:
Failed to find Spark jars directory (/Users/rahul/PycharmProjects/spark-demoII/venv/assembly/target/scala-2.12/jars).
You need to build Spark with the target "package" before running this program.
Traceback (most recent call last):
File "/Users/rahul/PycharmProjects/spark-demoII/run.py", line 6, in <module>
sc = SparkContext("local", "SimpleApp")
File "/Users/rahul/virtualenvs/pyspark/lib/python3.7/site-packages/pyspark/context.py", line 133, in __init__
SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
File "/Users/rahul/virtualenvs/pyspark/lib/python3.7/site-packages/pyspark/context.py", line 316, in _ensure_initialized
SparkContext._gateway = gateway or launch_gateway(conf)
File "/Users/rahul/virtualenvs/pyspark/lib/python3.7/site-packages/pyspark/java_gateway.py", line 46, in launch_gateway
return _launch_gateway(conf)
File "/Users/rahul/virtualenvs/pyspark/lib/python3.7/site-packages/pyspark/java_gateway.py", line 108, in _launch_gateway
raise Exception("Java gateway process exited before sending its port number")
Exception: Java gateway process exited before sending its port number
Process finished with exit code 1
Does anyone know how to get PyCharm to run PySpark programs on a similar machine?
In response to @pissal's suggestion:
I tried that previously, but that version of Spark doesn't work. I tried it again anyway: after switching to a virtual environment, I did a pip install pyspark. To check whether this version of Spark works, I ran spark-submit run.py (outside of PyCharm); here is the error message.
WARNING: An illegal reflective access operation has occurred
WARNING: Illegal reflective access by org.apache.spark.unsafe.Platform (file:/Users/rahul/.virtualenvs/test1/lib/python3.7/site-packages/pyspark/jars/spark-unsafe_2.11-2.4.4.jar) to method java.nio.Bits.unaligned()
WARNING: Please consider reporting this to the maintainers of org.apache.spark.unsafe.Platform
WARNING: Use --illegal-access=warn to enable warnings of further illegal reflective access operations
WARNING: All illegal access operations will be denied in a future release
Exception in thread "main" java.lang.ExceptionInInitializerError
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:611)
at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2422)
at org.apache.spark.util.Utils$$anonfun$getCurrentUserName$1.apply(Utils.scala:2422)
at scala.Option.getOrElse(Option.scala:121)
at org.apache.spark.util.Utils$.getCurrentUserName(Utils.scala:2422)
at org.apache.spark.SecurityManager.<init>(SecurityManager.scala:79)
at org.apache.spark.deploy.SparkSubmit.secMgr$lzycompute$1(SparkSubmit.scala:348)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$secMgr$1(SparkSubmit.scala:348)
at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:356)
at org.apache.spark.deploy.SparkSubmit$$anonfun$prepareSubmitEnvironment$7.apply(SparkSubmit.scala:356)
at scala.Option.map(Option.scala:146)
at org.apache.spark.deploy.SparkSubmit.prepareSubmitEnvironment(SparkSubmit.scala:355)
at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:774)
at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:161)
at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:184)
at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:86)
at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:920)
at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:929)
at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.StringIndexOutOfBoundsException: begin 0, end 3, length 2
at java.base/java.lang.String.checkBoundsBeginEnd(String.java:3720)
at java.base/java.lang.String.substring(String.java:1909)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:52)
... 25 more
So the reason this was happening is that PySpark has not been updated to work with the latest version of Java. After removing Java 13, I made sure my Homebrew installation of Spark uses Java 1.8. Then I added the following to the Environment Variables in Run -> Edit Configurations in PyCharm:
SPARK_HOME=/usr/local/Cellar/apache-spark/2.4.4/libexec
With these settings I can run PySpark jobs in PyCharm.
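For reference, here is a minimal sketch of the same fix applied inside the script itself, so the job also runs when no PyCharm run configuration is present; the SPARK_HOME and JAVA_HOME values below are example paths only and must be adjusted to the local Spark 2.4.x and Java 1.8 installations:
import os

# Example paths only -- point these at your own Spark 2.4.x and Java 1.8 installations.
os.environ.setdefault("SPARK_HOME", "/usr/local/Cellar/apache-spark/2.4.4/libexec")
os.environ.setdefault("JAVA_HOME", "/Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home")

# Import after the environment is set so the gateway launcher picks the variables up.
from pyspark import SparkContext

sc = SparkContext("local", "SimpleApp")
print(sc.parallelize(range(10)).sum())  # quick smoke test: should print 45
sc.stop()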

Getting a "file not found" error for ProcessCenter_CaseManagerConfig.properties while running the BPMGenerateUpgradeSchemaScripts.bat command

I am upgrading IBM BPM 8.6.0 to IBM Business Automation Workflow 18.0.0.2. After applying the fix pack for IBAW, I get an error when I run the command below.
BPMGenerateUpgradeSchemaScripts.bat -profileName Node1Profile -de ProcessCenter
Below is the error that comes up when running the above command:
Unable to find the response file
C:\IBM\BPM\v8.6\profiles\Node1Profile\config\cells\PCCell1\ProcessCenter_CaseManagerConfig.properties
Unable to find the file C:\IBM\BPM\v8.6\profiles\Node1Profile\config\cells\PCCell1\ProcessCenter_CaseManagerConfig.properties, please run the command 'BPMConfig -update -profile deployment_manager_profile -de deployment_environment_name -caseConfigure' to collect the configuration information for the content data sources, please read the knowledge center for details.
java.io.FileNotFoundException: C:\IBM\BPM\v8.6\profiles\Node1Profile\config\cells\PCCell1\ProcessCenter_CaseManagerConfig.properties (The system cannot find the file specified.)
CWMCO6007E: The BPMGenerateUpgradeSchemaScripts command could not complete successfully. The following exception occurred :
Faild to initialize the CommonInfo. java.io.FileNotFoundException: C:\IBM\BPM\v8.6\profiles\Node1Profile\config\cells\PCCell1\ProcessCenter_CaseManagerConfig.properties (The system cannot find the file specified.)
The command that the error asks to run first corresponds to step 11 in the upgrade guide. Can someone please suggest what's wrong here?

FreeSWITCH: getting an error while loading module mod_h323

I am configuring H.323 with FreeSWITCH, but while loading the mod_h323 module in FreeSWITCH I get the error below.
[CRIT] switch_loadable_module.c:1520 Error Loading module /usr/local/freeswitch/mod/mod_h323.so
/usr/local/freeswitch/lib/libh323_linux_x86_64_.so.1.26.5: undefined symbol: _ZN18H235Authenticators19GetEncryptionPolicyEv
Does anyone know how to fix this error?
This is a well-known problem when compiling some of the modules in FreeSWITCH. The reason can be one of these:
1. PTLib and H323Plus are not configured properly or not installed correctly.
2. pkg-config is not pointed at the right .pc files.
So, to fix it:
1. First check that the library required for H.323 is installed correctly, and note its location.
2. Find its .pc file.
3. Point the pkg-config path at that library (a small diagnostic sketch follows below).
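As a quick sanity check for steps 2 and 3, here is a small diagnostic sketch; the pkg-config package names ptlib and h323plus are assumptions and should be replaced with whatever names your PTLib/H323Plus installation actually registers:
import os
import subprocess

# Assumed package names -- substitute the ones your H.323 stack installs.
for pkg in ("ptlib", "h323plus"):
    found = subprocess.run(["pkg-config", "--exists", pkg]).returncode == 0
    print(pkg, "is visible to pkg-config" if found else "is NOT visible to pkg-config")

# If a package is missing, locate its .pc file and add that directory to PKG_CONFIG_PATH
# before re-running the FreeSWITCH configure step.
print("PKG_CONFIG_PATH =", os.environ.get("PKG_CONFIG_PATH", "<unset>"))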

Hadoop and Hive Homes in CDH4

I'm trying to configure RHive in the CDH4 environment.
When loading the package 'RHive' in R, the error below is returned.
I'm guessing that's due to wrong home directories.
If so, what would be the correct ones?
Or, if that's not the reason, what else is wrong?
Any help would be much appreciated.
Thanks.
> Sys.setenv(HIVE_HOME="/etc/hive")
> Sys.setenv(HADOOP_HOME="/etc/hadoop")
> library(RHive)
Loading required package: rJava
Loading required package: Rserve
This is RHive 0.0-7. For overview type '?RHive'.
HIVE_HOME=/etc/hive
[1] "there is no slaves file of HADOOP. so you should pass hosts argument when you call rhive.connect()."
Error : .onLoad failed in loadNamespace() for 'RHive', details:
call: .jnew("org/apache/hadoop/conf/Configuration")
error: java.lang.ClassNotFoundException
In addition: Warning message:
In file(file, "rt") :
cannot open file '/etc/hadoop/conf/slaves': No such file or directory
Error: package/namespace load failed for 'RHive'
I had the same problems but solved them. The downside is that I have to keep track of a bunch of symlinks.
After struggling to install RHive_0.0-7.tar.gz on CDH 4.7.x and getting:
Warning in file(file, "rt") :
cannot open file '/etc/hadoop/conf/slaves': No such file or directory
[1] "there is no slaves file of HADOOP. so you should pass hosts argument when you call rhive.connect()."
In /etc/hadoop/conf I added the following symlink:
ln -s /opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/etc/hadoop/conf.empty/slaves slaves
(Why Cloudera CDH 4.7 installs in /opt without creating the proper symlinks from /usr/lib is puzzling.)
I also defined the following in /usr/lib64/R/etc/Renviron:
## set hive paths
HIVE_HOME='/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hive'
HADOOP_HOME='/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop'
LD_LIBRARY_PATH='/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hadoop'
At a shell prompt I ran R CMD INSTALL RHive_0.0-7.tar.gz
Installation Happiness!!
++++++
Inside R-Studio (server)
>
> library(RHive)
Loading required package: rJava
Loading required package: Rserve
This is RHive 0.0-7. For overview type ‘?RHive’.
HIVE_HOME=/opt/cloudera/parcels/CDH-4.4.0-1.cdh4.4.0.p0.39/lib/hive
call rhive.init() because HIVE_HOME is set.
rhive.init()
>
+++++++
You should set the HADOOP_CONF_DIR separately.
Try export HADOOP_CONF_DIR=/etc/hadoop/conf/conf.pseudo
The conf.pseudo directory contains the slaves file.
Though I'd be curious to see if you can make RHive work with CDH4.

Resources