How to set up pydevd remote debugging with Heroku

According to this answer, I am required to copy the pycharm-debug.egg file to my server. How do I accomplish this with a Heroku app so that I can remotely debug it using PyCharm?

Heroku doesn't expose the file system it uses for running web dynos to users, which means you can't copy the file to the server via SSH.
So you can do this in one of two ways:
The best option is to add the egg file to your requirements, so that during deployment it gets installed into the environment and is therefore automatically added to the Python path. However, this requires the package to be available from a pip index (e.g. PyPI).
Or, commit the file to your code base, so that it reaches the server when you deploy.
Also, if you are using Django, add this file to the Python path in your project's settings file:
import sys
sys.path.append('relative/path/to/file')
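Once the egg is importable, the usual next step is to call pydevd.settrace so the dyno connects back to the debug server running in PyCharm. A minimal sketch, assuming the egg is committed alongside the code and that the host/port below point at a machine the dyno can actually reach (e.g. through a tunnel):

import sys
sys.path.append('relative/path/to/pycharm-debug.egg')  # assumption: egg committed alongside the code

import pydevd
# Host and port are assumptions; use the address of the machine running PyCharm's debug server.
pydevd.settrace('your-debug-host.example.com', port=21000,
                stdoutToServer=True, stderrToServer=True, suspend=False)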

Azure DevOps ThirdParty Tools for build / Deployment

pipelines:
  default:
    - step:
        name: Push changes to Commerce Cloud
        script:
          - dcu --putAll $OCCS_CODE_LOCATION --node $OCCS_ADMIN_URL --applicationKey $OCCS_APPLICATION_KEY
    - step:
        name: Publish changes Live Storefront
        image: Python 3.5.1
        script:
          - python publishDCUAuthoredChanges.py -u $OCCS_ADMIN_URL -k $OCCS_APPLICATION_KEY
Environment variables:
$OCCS_CODE_LOCATION: path to the location of all OCCS code
$OCCS_ADMIN_URL: URL for the administration interface on the target Commerce Cloud instance
$OCCS_APPLICATION_KEY: application key used to log in to the target Commerce Cloud administration interface
So I want to use an Azure DevOps repository for CI/CD.
In the code block above you can see that I have specified the dcu and Python code in two tasks.
dcu is a third-party Node.js tool from Oracle that has to be used to migrate code to the cloud system. I want to know how to use that tool in Azure DevOps.
Second, I want to invoke Python (or Node.js) to call the REST API that publishes the changes.
So where do I place those files, and how do I invoke them?
Update:
I set up a self-hosted agent pool and am able to access the system.
I started executing basic bash code, but ran into two issues:
1) When Git extracts files from the repository it goes to _work/1/s; I'm not sure how that path is decided. How can I change that location?
2) I ran 'pwd' and I am in the correct path, but the 'dcu' command fails. I tried npm and a few other commands and they fail too, yet things like mkdir and rmdir create and remove folders correctly in the desired path. When I run the 'dcu' command manually from a terminal on the system it works fine, as expected.
You can follow the steps below to use the DCU tool and Python in Azure Pipelines.
1, Create an Azure Git repo that includes the dcu zip file and your .py files. You can follow the steps in this thread to create an Azure Git repo and push local files to it.
2, Create an Azure build pipeline. Please check here to create a YAML pipeline. Here is a good tutorial to get you started.
To create a classic UI pipeline, choose Use the classic editor in the pipeline setup wizard, and choose Empty job to start with an empty pipeline and add your own steps. (I will use a classic UI pipeline in the example below.)
3, Click "+" and search for the Extract files task to unzip the DCU zip file. Click the ellipsis (...) on the Destination folder field to select a destination folder for the extracted dcu files, e.g. $(agent.builddirectory). Please check my answer in this thread for more information about predefined variables.
4, Click "+" to add a PowerShell task. Run the script below to install dcu and run the dcu command. For environment variables (like $OCCS_CODE_LOCATION), use the Variables tab to define them.
cd $(agent.builddirectory)   # the folder where the unzipped dcu files reside
npm install -g
.\dcu.cmd --putAll $(OCCS_CODE_LOCATION) --node $(OCCS_ADMIN_URL) --applicationKey $(OCCS_APPLICATION_KEY)
5, Add a Use Python version task to define the Python version used to execute your .py file.
6, Add a Python script task to run your .py file. Click the ellipsis (...) on the Script path field to locate your publishDCUAuthoredChanges.py file (this .py file and the dcu zip file were pushed to the Azure Git repo in step 1 above).
You should then be able to run the script from the question above in the Azure DevOps pipeline.
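As a side note, here is a minimal sketch of what publishDCUAuthoredChanges.py could look like on the Python side, just to show how the -u and -k arguments passed by the pipeline are consumed. The publish endpoint path and request body below are placeholders, not the real Commerce Cloud API, and requests is assumed to be installed in the pipeline environment:

import argparse
import requests  # assumption: installed via pip in the pipeline

parser = argparse.ArgumentParser()
parser.add_argument('-u', '--url', required=True, help='Admin URL of the target instance')
parser.add_argument('-k', '--key', required=True, help='Application key used to authenticate')
args = parser.parse_args()

# Placeholder endpoint and body; replace with the actual publish REST call.
response = requests.post(args.url.rstrip('/') + '/publish',
                         headers={'Authorization': 'Bearer ' + args.key},
                         json={})
response.raise_for_status()
print('Publish request returned', response.status_code)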
Update:
_work/1/s is the default working folder for the agent, and you cannot change it. Though there are ways to change the location where the source code is cloned from Git, the tasks' working directory still defaults to that folder.
However, you can change the working directory inside the tasks, and there are predefined variables you can use to refer to locations on the agent. For example:
$(Agent.BuildDirectory) is mapped to c:\agent\_work\1
$(Build.ArtifactStagingDirectory) is mapped to c:\agent\_work\1\a
$(Build.BinariesDirectory) is mapped to c:\agent\_work\1\b
$(Build.SourcesDirectory) is mapped to c:\agent\_work\1\s
The .sh scripts in the _temp folder are generated automatically by the agent; they contain the scripts from the bash task.
For the dcu "command not found" error above, you can try adding the dcu command's path to the system PATH variable in your local machine's environment variables. (A path set only in the user variables cannot be found by agent jobs, because the agent uses a different user account to connect to the local machine.)
Or you can use the physical path to the dcu command in the bash task. For example, say dcu.cmd is at c:\dcu\dcu.cmd on the local machine. Then in the bash task use the script below to run the dcu command.
c:/dcu/dcu.cmd --putAll ...

Google Cloud App Engine - How to edit code using SSH and debug mode

I am trying to debug an application I have deployed to Google Cloud App Engine. Reading the docs, I figured out that in order to do so I have to enter debug mode using
gcloud app --project [Project ID] instances enable-debug
afterwards I am able to SSH into my instance and access root. Now I would like to edit some of the files. However, trying to use vim or nano does not seem to work.
Is there a way to edit those files without re-deploying the entire app?
Once you SSH into the App Engine instance and open a shell into the Docker container, you'll need to download the package list before installing nano or vim:
apt-get update && apt-get install nano
Then you can edit your app's files (which are in /app):
nano composer.json
The deployed app runs live code, and it is not generally feasible to edit it. Moreover, changes made to the running container are not permanent; in fact they are lost at the first restart.
You may find some information on the Debugging an Instance page.
Unrelated to the above, an actual command-line editor is offered in Cloud Shell.

Where are files stored in Parse Server?

I'm implementing an instance of Parse Server, and I want to know where Parse Server stores the files.
According to the File Adapter documentation, the default file storage is GridFS in MongoDB.
It depends on the operating system and the type of installation you used.
If installed on Linux/Unix using the global install (npm install -g parse-server mongodb-runner), your parse-server files will normally be under /usr/lib/node_modules/parse-server (this may differ between Linux distributions).
Be careful when editing these files for hot hacks or modifications; if you later choose to upgrade parse-server, they will be overwritten.
Your cloud file directory is normally created by you, so it could be, for example, /home/parse/cloud/main.js. This can be in any location of your choice. To set a new location, you specify it in the index file or the JSON config (depending on your startup process):
cloud: '/home/myApp/cloud/main.js', // Absolute path to your Cloud Code
If you did not use the global install, then obviously you would need to cd to wherever you cloned the project.
Windows is similar. Clone parse-server (or download the zip) from the repo. Open a console window and "cd" to the folder where you have cloned/extracted the example server, e.g.:
cd "C:\parse-server"
This is where the files will sit for the parse-server. Hope this helps!

How to use Ambari service to deploy a jar on all hadoop nodes?

I have a requirement to deploy a jar file at a particular location on all Hadoop cluster nodes using the Ambari server. For that purpose I think I can use the custom service feature.
So I created a sample service and could deploy it as a client or slave on all nodes.
I added a new folder named Testservice inside /var/lib/ambari-server/resources/stacks/HDP/2.2/services/ and it has the following files/directories:
[machine]# cd /var/lib/ambari-server/resources/stacks/HDP/2.2/services/Testservice^C
[machine]#
[machine]# pwd
/var/lib/ambari-server/resources/stacks/HDP/2.2/services/Testservice
[machine]# ls
configuration metainfo.xml package
[machine]# ls package/*
package/archive.zip
package/files:
filesmaster.py test1.jar
package/scripts:
test_client.py
[machine]#
With this, my service is added and installed on all nodes. On each node, a corresponding directory "/var/lib/ambari-agent/cache/stacks/HDP/2.2/services/Testservice" is created with the same file structure as mentioned above. As of now the test_client.py script has no code at all, just dummy implementations of the install and configure functions.
So here I want to add code such that package/files/test1.jar gets copied on each host to a defined destination location, say the /lib folder.
I need help on this point. How can I make use of the test_client.py script? How can I write generic code to copy my jar file?
test_client.py has an install method as shown below:
class TestClient(Script):
    def install(self, env):
I need more details on how the env variable can be used to get all the required base paths for the Ambari service directory and the Hadoop install base paths.
You are correct in thinking that you can use a Custom Ambari Service to ensure a file is present on various nodes in your cluster. Your custom service should have a CLIENT component which handles laying down the files you need on various hosts in the cluster. It should be a client component because it has no running processes.
However, using the files folder is not the correct approach to distribute the file you have (test1.jar). All the Ambari services rely on linux packages to install the necessary files on the system. So what you should be doing is creating a software package that takes care of laying down that lib file to the correct location on disk. This could be an rpm and/or deb file depending on what OSs you are planning to support. Once you have the software package you can accomplish your goal by modifying two files you already have outlined above.
metainfo.xml - You will list the necessary software packages required for your service to function correctly. For example if you were planning on supporting RHEL6 and RHEL7 you would create an rpm package named my_package_name and include it with this code:
<osSpecifics>
  <osSpecific>
    <osFamily>redhat6,redhat7</osFamily>
    <packages>
      <package>
        <name>my_package_name</name>
      </package>
    </packages>
  </osSpecific>
</osSpecifics>
test_client.py - You will need to replace the starter code you have in your question with:
class TestClient(Script):
    def install(self, env):
        self.install_packages(env)
The self.install_packages(env) call will ensure that the packages you have listed in metainfo.xml file get installed when your custom service CLIENT component is installed.
Note: Your software package (rpm, deb, etc.) will have to be hosted in an online repository in order for Ambari to access it and install it. You could create a local repository on the node running Ambari Server using httpd and createrepo. This process can be gleaned from the HDP Documentation.
Alternative approach (Not Recommended)
Now that I have explained the way it SHOULD be done, let me tell you how you can achieve this using the package/files folder. Again, this is not the recommended approach to installing software on a Linux system; the package management system for your distribution should be handling this.
test_client.py - Update your starter file to include the content below. For this example we will copy your test1.jar to the /lib folder with file permissions 0644, owner 'guest', and group 'hadoop':
def configure(self, env):
    File("/lib/test1.jar",
         mode=0644,
         group="hadoop",
         owner="guest",
         content=StaticFile("test1.jar")
    )
Why is this approach not recommended? Because installing software on a Linux distribution should be managed so that it is easy to upgrade and remove said software. Ambari does not have full uninstall functionality when it comes to its services. The most you can do is remove a service from being managed in your Ambari cluster; after doing so, all those files will remain on the system and would have to be removed by writing a custom script or doing it manually. However, if you used package management to handle installing the files, you could easily remove the software by using the same package management system.
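For completeness, here is a self-contained sketch of what package/scripts/test_client.py could look like for this alternative approach. The wildcard import and the execute() hook follow the usual Ambari script conventions; the jar name, target path, owner and group are taken from the example above, and the rest should be treated as a starting point rather than a drop-in script:

from resource_management import *  # brings in Script, File, StaticFile, etc.

class TestClient(Script):
    def install(self, env):
        # Nothing goes through the package manager in this approach;
        # just lay the file down.
        self.configure(env)

    def configure(self, env):
        # Copies package/files/test1.jar from the agent cache to /lib on the host.
        File("/lib/test1.jar",
             mode=0644,
             group="hadoop",
             owner="guest",
             content=StaticFile("test1.jar"))

if __name__ == "__main__":
    TestClient().execute()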

How do I set up Mercurial and hgweb on IIS?

I've been looking all over for decent instructions on how to get hgweb working on IIS but I haven't found much of worth.
There's this "step by step" on the Mercurial wiki, but it's not very good.
There's also this and this, but again, I can't find good steps to lead up to where those get started.
I just had to install a fresh Mercurial instance yesterday; here are updated instructions for 1.7:
Install Mercurial (these instructions were tested with 1.7)
Install Python (for Mercurial 1.7, you must use the x86 version of Python 2.6.6)
You will need to download the hgweb.cgi file from the Mercurial source. You can download the source by running: hg clone https://www.mercurial-scm.org/repo/hg/
Create a folder that will be your web application folder. You will need to copy three things into this folder:
The hgweb.cgi file
The contents of the Library.zip from your "C:\Program Files\Mercurial" folder
The Templates folder from your "C:\Program Files\Mercurial"
You will need to make sure you have Python set up in IIS.
Enable CGI via the following: Control Panel -> Turn Windows Features On or Off -> Roles -> Web Server (IIS) -> Add Role Services -> Check CGI
Create a new Web Site in IIS and make sure the physical path is the folder you created above
In the Handler Mappings for the new website, select "Add Script Map". Enter *.cgi for the request path, c:\Python26\python.exe -u "%s" for the Executable, and Python for the Name.
You will also need to create a file named "hgweb.config" with contents similar to below. The path within the file needs to be the location on your drive where you want to store the Mercurial repositories:
[collections]
c:\Mercurial\repos = c:\Mercurial\repos
Edit the hgweb.cgi file and change the line where it sets the path to your hgweb.config to something like the following (wherever the hgweb.config file is):
config = "C:\Mercurial\hgweb.config"
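For reference, after that edit the core of hgweb.cgi for Mercurial 1.7 looks roughly like this (the drive path is an assumption):

config = "C:\Mercurial\hgweb.config"

from mercurial import demandimport; demandimport.enable()
from mercurial.hgweb import hgweb, wsgicgi
application = hgweb(config)
wsgicgi.launch(application)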
Now, open a browser and navigate to http://localhost/mercurial/hgweb.cgi (or whatever is the appropriate URL path you set up in IIS) and you should see the Mercurial Repositories page.
Also, check out Jeremy Skinner's blog post. It's a little outdated, but has some extra nice steps like setting up URL rewriting for cleaner URLs.
It seems since Mercurial 1.5.2 was released, these tutorials don't work exactly right. For one thing, hgwebdir.cgi has been removed, and is now replaced with hgweb.cgi.
The instructions that worked best for me are at eworldui.net:
http://www.eworldui.net/blog/post/2010/04/08/Setting-up-Mercurial-server-in-IIS7-using-a-ISAPI-module.aspx
Those instructions are meant for IIS 7 or greater. If you're setting this up on IIS 6, I wrote up similar instructions geared toward Win2k3 and IIS 6.0:
http://partialclass.blogspot.com/2010/05/setting-up-mercurial-server-on-win2k3.html
UPDATE: Shortly after getting this working I learned that BitBucket changed their pricing scheme to offer free, unlimited, private hosting: https://bitbucket.org/. I would've opted for that in a heartbeat when I was originally working on this project.
Below is what I did, after doing a fair amount of research, to get hgwebdir.cgi set up on IIS6. It is based on the following sites:
http://python.markrowsoft.com/iiswse.asp
http://www.jeremyskinner.co.uk/mercurial-on-iis7/
You'll need to install the following on the server:
Mercurial (I used version 1.5)
Python 2.6. The version of Python depends on the version of Mercurial installed.
Mercurial 1.5 uses Python 2.6. Install x86 even if you are running x64.
The steps for me were:
Create a directory for the website. I used c:\inetpub\wwwroot\hg.
In IIS, right click on the folder for hg, select properties, select the Home Directory tab.
Click on the Create application button. Set the execute permissions to "scripts".
Still in the Home Directory tab, click on the Configuration button. In the "Application Configuration" popup, click the Add button to add an application extension. The Executable is c:\Python26\python.exe -u "%s" "%s". The extension is .cgi. Set the "verbs" to "limit to: GET,HEAD,POST". Check both Script engine and Verify that file exists.
In the Directory Security tab, click on the Edit button in the Authentication and access control section. Uncheck all authentication methods, and check the "Basic authentication" method. If you like, set the Default domain to your Active Directory domain.
In IIS, click on the Web Service Extensions folder on the left panel. Click on "Add a new Web service extension" link. Extension name should be Python, the required file is c:\Python26\python.exe -u "%s" "%s". Make sure the new extension is "Allowed".
Now is a good time to test that Python is working. Create a file in your new Hg folder called test.cgi. Paste the following python code:
print 'Status: 200 OK'
print 'Content-type: text/html'
print
print '<html><head>'
print '</head><body>'
print '<h1>It works!</h1>'
print '</body></html>'
Open the browser to your site, for instance, http://localhost/hg/test.cgi
You should see "It works!" in the browser.
Next let's get the hgwebdir working.
Delete test.cgi
clone the hg repo to a new directory: https://www.mercurial-scm.org/repo/hg/
copy hgwebdir.cgi from the cloned hg repo to your web directory: c:\inetpub\wwwroot\hg\
Edit the file and change
application = hgwebdir('hgweb.config')
wsgicgi.launch(application)
to
application = hgwebdir('c:\inetpub\wwwroot\hg\hgweb.config')
wsgicgi.launch(application)
Unzip the Library.zip file in the Mercurial directory, c:\Program Files\Mercurial\, to your web directory, c:\inetpub\wwwroot\hg\
Copy the templates directory from c:\Program Files\Mercurial\templates\ to c:\inetpub\wwwroot\hg\templates\
Create a file called hgweb.config in your web directory.
Now is a good time to test it out. Go to the following URL in the browser, http://localhost/hg/hgwebdir.cgi
Edit hgweb.config, and paste the following:
[collections]
\\server\share$\Hg\ = \\server\share$\Hg\
[web]
allow_push = *
push_ssl = false
These are all my preferences, for instance we have our repos in subdirectories at \\server\share$\Hg. The web app will run under the permissions of the logged in user via the browser, so they'll need read/write permissions to the share.
The last step is to allow for long connections which can happen when you first clone a repo. Run the following command to increase the timeout to 50 minutes:
cd \inetpub\AdminScripts\
cscript adsutil.vbs GET /W3SVC/CGITimeout
cscript adsutil.vbs SET /W3SVC/CGITimeout 3000
Use mercurial to clone the mercurial repository:
hg clone https://www.mercurial-scm.org/repo/hg/
You will find hgwebdir.cgi at the top level. It should install like any other CGI script.
I've been fighting with this setup for Mercurial 1.7.2 for the past week or so; I had to do things slightly differently than the above articles do in order to get it working.
Posting here because google kept bringing me back here....
Full instructions posted here
I followed a combination of these instructions and these (in the source)
The main differences are that I had to do the "pure Python" install of Mercurial, otherwise it would complain about missing DLLs, and I found it was important to use the "Python installers" for pywin and isapi-wsgi. (Maybe this is obvious to experienced Python developers, but I'm a Python newbie so it was news to me.)
Hope this helps somebody, and that I'm not just making stuff up (I might be; like I said, I'm a Python newbie).
The hg red book contains some much better general instructions than I've seen in other places. They are not IIS specific, but they are quite good:
http://hgbook.red-bean.com/read/collaborating-with-other-people.html#sec:collab:cgi
I was running into a "...can not load module..." type error, and after some reading, the key for me was to ignore the Library.zip file in the Mercurial folder and instead use the one from the C:\Program Files (x86)\TortoiseHg folder.
That tip I found as #6 in this guide:
http://www.endswithsaurus.com/2010/05/setting-up-and-configuring-mercurial-in.html
Hope this helps someone...
I know this is an old question, but I really struggled getting Hg installed on Server 2019 and IIS 10.
Here is what I did to get it working:
Install Python 2.7 which in my case was python-2.7.18.amd64.msi. I will assume it's installed in C:\Python27. Make sure python is added to your path and that pip is installed.
Install Mercurial as a module using pip at the command line:
pip install mercurial
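Before touching IIS, a quick sanity check that Mercurial is importable by the same Python interpreter IIS will invoke can save time (run this from a console; it assumes C:\Python27\python.exe is the interpreter configured below):

# Prints the installed Mercurial version if the pip install succeeded.
from mercurial import util
print(util.version())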
Under Default Web Site add a new application called hg and point it to the directory you want to use.
Configure Python as a CGI handler in IIS 10.0 for this new website (or the entire web server if you wish). You can do this manually or create/add the following to your web.config file:
<system.webServer>
  <handlers accessPolicy="Read, Script">
    <add name="Python 2.7" path="*.cgi" verb="*" modules="CgiModule" scriptProcessor="C:\Python27\python.exe -u &quot;%s&quot;" resourceType="File" />
  </handlers>
</system.webServer>
In the 'hg' application folder create a hgweb.cgi that looks similar to the following:
#!/usr/bin/env python3
#
# An example hgweb CGI script, edit as necessary
# See also https://mercurial-scm.org/wiki/PublishingRepositories
# Path to repo or hgweb config to serve (see 'hg help hgweb')
config = "hgweb.config"
# Uncomment and adjust if Mercurial is not installed system-wide
# (consult "installed modules" path from 'hg debuginstall'):
# import sys; sys.path.insert(0, "/path/to/python/lib")
# Uncomment to send python tracebacks to the browser if an error occurs:
#import cgitb; cgitb.enable()
from mercurial import demandimport
demandimport.enable()
from mercurial.hgweb import hgweb, wsgicgi
application = hgweb(config)
wsgicgi.launch(application)
In the 'hg' application folder create the hgweb.config file and point it at your repos like the following:
[collections]
C:\Web\www\hg\repos\ = C:\Web\www\hg\repos\
Navigate to http://localhost/hg/hgweb.cgi and enjoy!
You can try HgLab. This isn't exactly hgwebdir; rather, it is a purely managed Mercurial implementation with a push and pull server and a repository browser.
