DocuSign API with Ruby: Is there a way to download a document with all the tab content populated? - ruby

I need to download an envelope from DocuSign with the tab data populated, in Ruby on Rails.
I have used get_combined_document_from_envelope but it does not seem to get all the data.
def method_name
  output_pdf = docusign.get_combined_document_from_envelope(
    envelope_id: document.external_key,
    local_save_path: "docusign_docs/file_name.pdf",
    return_stream: false
  )
end
I need the output file to have all the tabs populated.

The PDF would have the data inside the document as part of the completed doc (assuming the tabs are visible). If you want to get the data downloaded separately, you would need to use API calls to get the envelope and its tabs information. See https://developers.docusign.com/esign-rest-api/guides/features/tabs
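For instance, here is a rough sketch of reading the tab values per recipient with the docusign_esign gem, assuming an already-authenticated api_client (account_id and envelope_id are placeholders):

require 'docusign_esign'

# Sketch only: api_client must already be configured and authenticated.
envelopes_api = DocuSign_eSign::EnvelopesApi.new(api_client)

# Walk each signer and print their text tab labels and values.
recipients = envelopes_api.list_recipients(account_id, envelope_id)
recipients.signers.each do |signer|
  tabs = envelopes_api.list_tabs(account_id, envelope_id, signer.recipient_id)
  (tabs.text_tabs || []).each do |tab|
    puts "#{tab.tab_label}: #{tab.value}"
  end
end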

Related

Why does the Twitter API add a URL at the end of the text?

When getting tweet information using the Twitter API, the returned text or full_text field has a URL appended at the end. For example:
"full_text": "Just another Extended Tweet with more than 140 characters, generated as a documentation example, showing that [\"truncated\": true] and the presence of an \"extended_tweet\" object with complete text and \"entities\" #documentation #parsingJSON #GeoTagged https://twitter.com/FloodSocial/status/994633657141813248"
https://twitter.com/FloodSocial/status/994633657141813248 is appended at the end. (The appended URL is actually a shortened URL, but Stack Overflow does not allow shortened URLs in the body, so I just replaced it with the full URL.) Why does the API add this, and is there a way to get the text without the added URL?
Are you using the correct twitter gem? Using gem install twitter and setting up a client according to the docs, you should be able to just get the tweet/status by its ID. But whatever example you are using doesn't show how you got the full text:
text = client.status('994633657141813248').text
=>"Just another Extended Tweet with more than 140 characters, generated as a documentation example, showing that https://twitter.com/FloodSocial/status/994633657141813248"
The URL is truncated as a plain string, so I'm not sure what you even did to get the string you formulated.
But if you somehow have a long string with the URL embedded, you could do
text.split(/\shttps?/).first
That looks like a quote Tweet where the original Tweet URL is included?
[edit - I was wrong with the above statement]
I see what is happening. The original Tweet links to an image on Twitter (https://twitter.com/FloodSocial/status/994633657141813248/photo/1, via a shortened t.co link). Twitter hides the image URL in the rendered Tweet, but returns it in the body of the text. That's the expected behaviour in this case. You can also see the link parsed out in the extended_entities segment of the Tweet data, as well as the image data itself in the same area of the Tweet. If you want to omit the URL from the text data, you'll need to trim it yourself.
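If you do need to trim it, here is a minimal sketch using the twitter gem's parsed entities; it assumes the trailing URL in the text matches one of the tweet's own media or URL entities (and that those entities expose the t.co link via .url):

# Strip a trailing entity URL (e.g. a t.co media link) from the tweet text.
def text_without_trailing_url(tweet)
  text = tweet.text.dup
  (tweet.media + tweet.urls).map { |entity| entity.url.to_s }.each do |url|
    text = text.sub(/\s*#{Regexp.escape(url)}\z/, '')
  end
  text.strip
end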

Kademi - allow a front-end user to delete a file they have uploaded

I am trying to allow a front-end user to delete a file they have uploaded.
#docs() tells me that $page.lead.files has a method called .remove() that accepts either an Int or an Object.
I keep getting a response of "false" when using this method. I am trying to pass an ID or Object of a file within the $page.lead.files object.
Debugging...
User: https://spinsurance.admin.kademi.com.au/manageUsers/116783806/#summary-tab
Page: https://crm.spinsurance.co.nz/leads/148615383/
Source: https://spinsurance.admin.kademi.com.au/repositories/spcrm/version1/theme/apps/leadman/components/texteditor?fileName=leadDetailTabContentComponent.html
Under section on page called: Uploaded Files.
Click big red Delete button. (I don't mind if this file gets deleted)
Thanks for your help in advance.
The Lead.files property is a persisted list. It's not a good idea to try to modify the database using that approach.
Note that lead files are exposed as HTTP-addressable resources, which support the HTTP DELETE method.
So the simplest approach is to delete from the browser using AJAX.
E.g.
DELETE /leads/123/myfile.pdf
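In the browser you would issue that with an AJAX call using the DELETE verb. For illustration only, the same request issued from Ruby with Net::HTTP looks like this (the host, lead path, and file name are placeholders; in the browser the user's session supplies authentication):

require 'net/http'

# Sketch only: the URL is a placeholder for the lead's file resource.
uri = URI('https://crm.spinsurance.co.nz/leads/148615383/myfile.pdf')
request = Net::HTTP::Delete.new(uri)
response = Net::HTTP.start(uri.host, uri.port, use_ssl: true) do |http|
  http.request(request)
end
puts response.code  # expect a 2xx status if the delete succeeded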

How can I get my XPath provided by Chrome to pull proper text versus an empty string?

I am trying to scrape property data from "http://web6.seattle.gov/DPD/ParcelData/parcel.aspx?pin=9906000005".
I identified the element that I am interested in ("Base Zone" data in the table) and copied the XPath from the Chrome developer tools. When I run it through Scrapy I get an empty list.
I used the Scrapy shell to load the site and typed several response requests. The page loads and I can scrape the header, but nothing in the body of the page loads; it all comes up as empty lists.
My scrapy script is as follows:
class ZoneSpider(scrapy.Spider):
    name = 'zone'
    allowed_domains = ['web']
    start_urls = ['http://web6.seattle.gov/DPD/ParcelData/parcel.aspx?pin=9906000005']

    def parse(self, response):
        self.log("base_zone: %s" % response.xpath('//*[@id="ctl00_cph_p_i1_i0_vwZoning"]/tbody/tr/td/table/tbody/tr[1]/td[2]/span/text()').extract())
        self.log("use: %s" % response.xpath('//*[@id="ctl00_cph_p_i3_i0_vwKC"]/tbody/tr/td/table/tbody/tr[3]/td[2]/text()').extract())
You will see that the logs return an empty list. In the Scrapy shell, when I query the XPath for the header I get a valid response:
response.xpath('//*[@id="ctl00_headSection"]/title/text()').extract()
['\r\n\tSeattle Parcel Data\r\n']
But when I query anything in the body I get an empty list:
response.xpath('/body').extract()
[]
What I would like to see in my scrapy code is a response like the following:
base_zone: "SF 5000"
use: "Duplex"
If you remove tbody from your XPath it will work.
Since Developer Tools operate on a live browser DOM, what you'll actually see when inspecting the page source is not the original HTML, but a modified one after applying some browser clean up and executing Javascript code. Firefox, in particular, is known for adding <tbody> elements to tables. Scrapy, on the other hand, does not modify the original page HTML, so you won't be able to extract any data if you use <tbody> in your XPath expressions.
Source: https://docs.scrapy.org/en/latest/topics/developer-tools.html#caveats-with-inspecting-the-live-browser-dom
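For example, the first selector with the tbody steps removed (untested; based on the path Chrome produced) should then return the base-zone text:

response.xpath('//*[@id="ctl00_cph_p_i1_i0_vwZoning"]/tr/td/table/tr[1]/td[2]/span/text()').extract()
# expected: ['SF 5000']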

Check if a favorited tweet contains a URL within the body

Building a small app and would like to return all of a user's favorited tweets. This is easy enough to do with the twitter gem. I would then like to further filter the returned results by only displaying the favorited tweets that contain a URL within the body.
Now as I understand it, using the twitter gem, running Twitter.favorites will return JSON of all the favorited tweets. Within each of the individual tweets it returns an entities property that contains a URL hash; this will be empty if no URL is present in the tweet, and contain a URL if one is present.
How would I implement a check to say: if the URL is present, then display the tweet? I'm a noob with JSON and APIs.
I have never used the twitter gem, but since the question has remained unanswered for a couple of hours, I'll try to do my best.
First of all, the gem seems to return Array<Twitter::Tweet> rather than raw JSON. So you only need to filter the array:
favs_with_urls = Twitter.favorites.reject { |r| r.urls.empty? }
The latter should do the trick.
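A fuller sketch, assuming a current version of the twitter gem where favorites are fetched through a configured REST client (the credential env var names are placeholders):

require 'twitter'

client = Twitter::REST::Client.new do |config|
  config.consumer_key        = ENV['TWITTER_CONSUMER_KEY']
  config.consumer_secret     = ENV['TWITTER_CONSUMER_SECRET']
  config.access_token        = ENV['TWITTER_ACCESS_TOKEN']
  config.access_token_secret = ENV['TWITTER_ACCESS_TOKEN_SECRET']
end

# Keep only the favorites whose parsed entities include at least one URL.
favs_with_urls = client.favorites.select { |tweet| tweet.urls.any? }
favs_with_urls.each { |tweet| puts tweet.text }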
BTW, if you for some reason need to parse JSON into a hash:
require 'json'
c = JSON.load(raw_json)

How do I search then parse results on a webpage with Ruby?

How would you use Ruby to open a website, do a search in the search field, and then parse the results? For example, if I entered something into a search engine, how would I then parse the results page? I know how to use Nokogiri to find the webpage and open it. I am lost on how to input text into the search field and move forward to the results. Also, on the page that I am actually searching, I have to click the search button; I can't simply hit Enter to move forward. Thank you so much for your help.
Use Mechanize - a library used for automating interaction with websites.
Something like mechanize will work, but interacting with the front end UI code is always going to be slower and more problematic than making requests directly against the back end.
Your best bet would be to look at the request that is being made to the server (probably an HTTP GET or POST request with some associated params). You can do this with Firebug, or Fiddler 2 for Windows. Then, once you know the parameters that the server will accept, just make the request yourself.
For example, if you were doing this with the duckduckgo.com search engine, you could either get Mechanize to go to duckduckgo.com, input text into the search box, and click submit, or you could just create a GET request to http://www.duckduckgo.com/?q=search_term_here.
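A minimal sketch of both approaches with Mechanize, using DuckDuckGo's HTML endpoint as a stand-in (the endpoint and the field name "q" are assumptions):

require 'mechanize'

agent = Mechanize.new

# Approach 1: drive the search form like a browser would.
page = agent.get('https://duckduckgo.com/html/')
form = page.forms.first        # assumes the search form is the first form
form['q'] = 'ruby mechanize'   # assumes the text field is named "q"
results = agent.submit(form)

# Approach 2: skip the form and issue the GET request directly.
results = agent.get('https://duckduckgo.com/html/?q=ruby+mechanize')

# Either way you get a parsed page you can query with Nokogiri methods.
puts results.search('a').first(5).map(&:text)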
You can use Mechanize for something like this but it might be overkill. I would take a look at RestClient, especially if you don't need to manage cookies.
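For instance, a minimal RestClient sketch (the endpoint and parameter name are assumptions):

require 'rest-client'
require 'nokogiri'

# Fetch the results page directly, then hand it to Nokogiri for parsing.
html = RestClient.get('https://duckduckgo.com/html/', params: { q: 'ruby' })
doc = Nokogiri::HTML(html.body)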
Edit:
If you can determine the specific URL that the form submits to, say for example 'example.com/search', and you know the request is a POST (which it usually is if you are submitting a form), you could construct something like this with Mechanize:
agent = Mechanize.new
agent.post 'http://example.com/search', {
  "_id0:Number"       => string_to_search_for,
  "_id0:submitButton" => "Enter"
}
Notice how the 'name' attribute of a form element becomes a key for the post and the 'value' attribute becomes the value. The 'input' element gets its value directly from the text you would have entered. In a browser this gets transformed into a request and submitted to the server when you push the submit button (of course, in this case you are making the request directly). The result of the post should be some HTML that you can parse for the info you need.
