I am trying to get a random sample of an account's Twitter followers. I found the following code: https://gist.github.com/aparrish/2910772
However, when I run it, I inevitably get:
sname = sys.argv[1]
IndexError: list index out of range
What should I do? Any help would be greatly appreciated.
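For context, the gist reads the account's screen name from the command line, so sys.argv[1] only exists if you pass an argument (e.g. python followers.py aparrish). A minimal guard, as a sketch (the script filename here is hypothetical):
import sys

# The script expects the target screen name as its first argument;
# exit with a usage message instead of an IndexError when it's missing.
if len(sys.argv) < 2:
    sys.exit("usage: python followers.py <screen_name>")
sname = sys.argv[1]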
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

# driver setup is assumed here; the original post does not show it
driver = webdriver.Chrome()

def linkdin_login(company_name, username, password):
    driver.get('https://linkedin.com/')
    driver.find_element(By.XPATH, '//*[@id="session_key"]').send_keys(username)
    driver.find_element(By.XPATH, '//*[@id="session_password"]').send_keys(password)
    driver.find_element(By.XPATH, "//button[@class='sign-in-form__submit-button']").click()

    #def company_info(company_name):
    element = driver.find_element(By.CSS_SELECTOR, "#global-nav-typeahead > input")
    element.send_keys(company_name)
    element.send_keys(Keys.ENTER)
    driver.implicitly_wait(10)  # seconds
    driver.get(driver.find_element(By.CSS_SELECTOR, ".search-nec__hero-kcard-v2 > a:nth-child(1)").get_attribute("href"))
    driver.implicitly_wait(10)
    people()
With the above code I am logging into LinkedIn and fetching the LinkedIn page of some companies. After getting the page, I am trying to get the employee data by using the people function shown below.
def people():
    driver.implicitly_wait(10)
    driver.get(driver.find_element(By.XPATH, "/html/body/div[5]/div[3]/div/div[2]/div/div[2]/main/div[1]/section/div/div[2]/div[1]/div[2]/div/a").get_attribute("href"))
    driver.implicitly_wait(10)
    people = driver.find_element(By.XPATH, "/html/body/div[4]/div[3]/div[2]/div/div[1]/main/div/div/div[2]/div/ul")
    people_data = people.find_elements(By.TAG_NAME, "li")
    for i in people_data:
        print(i.text)
In this function I am trying to access the link to the employee data, and that is where the problem lies.
In the second line of the people function (the driver.get call) I am trying to get that link. The problem is that, for some reason, I sometimes get the link (not too frequently!), but most of the time I get an error saying the XPath was not found.
I didn't know how to attach an HTML page, so I am attaching the link instead: https://www.linkedin.com/company/google/
1. I tried an implicit wait, assuming that the program is trying to access the XPath while the page is still loading (see the explicit-wait sketch below).
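A possible alternative, as a sketch rather than a confirmed fix: implicit waits only apply while an element is being located and often don't help with content rendered after the initial page load, whereas an explicit wait polls until a condition is met. The locator below is an assumption for illustration; any stable locator for the "employees" link would do.
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Wait up to 20 seconds for the employees link to be present before
# reading its href; raises TimeoutException if it never appears.
wait = WebDriverWait(driver, 20)
link = wait.until(EC.presence_of_element_located(
    (By.PARTIAL_LINK_TEXT, "employees")))  # assumed locator, for illustration
driver.get(link.get_attribute("href"))
Deep absolute XPaths like /html/body/div[5]/... also break whenever LinkedIn changes its markup, so a shorter relative locator is generally more robust.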
I'm trying to retrieve a single sheet from a Google spreadsheet in Excel format. I have all the access set up correctly and can run various Google Sheets v4 API functions on it.
I wanted to use the Google::Apis::SheetsV4::SheetsService::copy_spreadsheet function to copy a single sheet, as mentioned in the Ruby example here: https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.sheets/copyTo
This is my code:
service = Google::Apis::SheetsV4::SheetsService.new
service.client_options.application_name = APPLICATION_NAME
service.authorization = authorize
spreadsheet_id = "<passing my spreadsheet id here>"
gid = "<setting this as my sheet id from the spreadsheet>"
request = Google::Apis::SheetsV4::CopySheetToAnotherSpreadsheetRequest.new(
  destination_spreadsheet_id: "0",
)
response1 = service.copy_spreadsheet(spreadsheet_id, gid, request)
puts response1.to_json
This always fails with the following error -
/usr/local/lib/ruby/gems/3.1.0/gems/google-apis-core-0.4.2/lib/google/apis/core/http_command.rb:229:in `check_status': badRequest: Invalid destinationSpreadsheetId [0] (Google::Apis::ClientError)
from /usr/local/lib/ruby/gems/3.1.0/gems/google-apis-core-0.4.2/lib/google/apis/core/api_command.rb:134:in `check_status'
It would be great if someone could help me with how to use this properly. Also, if there's a better way to download/export a single sheet from a spreadsheet in Ruby, let me know.
Answer for question 1
This always fails with the following error -
/usr/local/lib/ruby/gems/3.1.0/gems/google-apis-core-0.4.2/lib/google/apis/core/http_command.rb:229:in `check_status': badRequest: Invalid destinationSpreadsheetId [0] (Google::Apis::ClientError)
from /usr/local/lib/ruby/gems/3.1.0/gems/google-apis-core-0.4.2/lib/google/apis/core/api_command.rb:134:in `check_status'
It would be great if someone could help me with how to use this properly.
When I saw your error message and your script, I thought that destination_spreadsheet_id: "0" is not correct. In this case, please set the actual destination spreadsheet ID. When this is reflected in your script, it becomes as follows:
src_spreadsheet_id = "###" # Please set the source Spreadsheet ID.
src_sheet_id = "###" # Please set the sheet ID of the source Spreadsheet.
dst_spreadsheet_id = "###" # Please set the destination Spreadsheet ID.
request = Google::Apis::SheetsV4::CopySheetToAnotherSpreadsheetRequest.new(
  destination_spreadsheet_id: dst_spreadsheet_id,
)
response1 = service.copy_spreadsheet(src_spreadsheet_id, src_sheet_id, request)
puts response1.to_json
Answer for question 2
I'm trying to retrieve a single sheet from a Google spreadsheet in Excel format. I have all the access set up correctly and can run various Google Sheets v4 API functions on it.
Also, if there's a better way to download/export a single sheet from a spreadsheet in Ruby, let me know.
In this case, how about the following sample script? In this script, XLSX data including the specific sheet is downloaded using the endpoint https://docs.google.com/spreadsheets/d/{spreadsheetId}/export?format=xlsx&gid={sheetId}. So, please set your spreadsheet ID and sheet ID in the URL. The access token is retrieved from the service object you are already using.
require "open-uri"

url = 'https://docs.google.com/spreadsheets/d/{spreadsheetId}/export?format=xlsx&gid={sheetId}'
filename = 'sample.xlsx' # Please set the saved filename.
access_token = service.request_options.authorization.access_token
URI.open(
  url,
  "Authorization" => "Bearer " + access_token,
  :redirect => true
) do |file|
  File.open(filename, "w+b") do |out|
    out.write(file.read)
  end
end
When this script is run, the specific sheet (sheetId) of the spreadsheet (spreadsheetId) is downloaded and saved as an XLSX file.
This script uses open-uri. Note that on Ruby 3.x, URLs must be opened with URI.open; the bare Kernel#open no longer accepts them, which is why the script above calls URI.open.
Note:
If you get a scope-related error when downloading the spreadsheet as XLSX data, add https://www.googleapis.com/auth/drive.readonly to the scopes and reauthorize. After that, the script will work.
Reference:
Method: spreadsheets.sheets.copyTo
I'm trying to create a Discord bot that has the option to ban people forever, meaning that even if someone unbans them, they will be banned again.
I'm trying to do that with a file that will save the user ID, but the problem is that the user ID is not a string, so I can't save it in a file as-is. Still, if I can save it as a str and convert it back to an integer, that's not a problem.
My code is:
@client.command()
@commands.has_permissions(administrator=True)
async def testban(ctx):
    user = client.get_user(460688177846550528)
    # client.get_user returns a User object, which has no .ban method;
    # banning goes through the guild instead
    await ctx.guild.ban(user, reason='this is a test')
Can someone help me, please?
Discord's User.id is an int. To write it to a file you can simply convert it to a str:
str(userId)
When reading it, you can convert it back to an int:
int(userIdStr)
The User object can then be retrieved using Client.get_user().
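To get the "banned forever" behavior from the question, one approach (a sketch, not the only way; the filename is hypothetical) is to keep the saved IDs in a file and re-ban inside the on_member_unban event:
import discord
from discord.ext import commands

client = commands.Bot(command_prefix='!', intents=discord.Intents.all())
BAN_FILE = 'permabans.txt'  # hypothetical file, one user ID per line

def load_permabans():
    # Read the file back and convert each saved str ID to an int.
    try:
        with open(BAN_FILE) as f:
            return {int(line) for line in f if line.strip()}
    except FileNotFoundError:
        return set()

@client.event
async def on_member_unban(guild, user):
    # If someone lifts the ban on a permanently banned user, reapply it.
    if user.id in load_permabans():
        await guild.ban(user, reason='permanently banned')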
How can I get ALL records from Route53?
I'm referring to the code snippet here, which seemed to work for someone but isn't clear to me: https://github.com/aws/aws-sdk-ruby/issues/620
I'm trying to get all of them (I have roughly 7,000 records) via resource record sets, but I can't seem to get the pagination to work with list_resource_record_sets. Here's what I have:
route53 = Aws::Route53::Client.new
response = route53.list_resource_record_sets({
  start_record_name: fqdn(name),
  start_record_type: type,
  max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
})
response.last_page?
response = response.next_page until response.last_page?
I verified I'm hooked into the right region, and I can see the record I'm trying to get (so I can delete it later) in the AWS console, but I can't seem to get it through the API. I used this as a starting point: https://github.com/aws/aws-sdk-ruby/issues/620
Any ideas on what I'm doing wrong? Or is there an easier way, perhaps another method in the API I'm not finding, to get just the record I need given the hosted_zone_id, type, and name?
The issue you linked is for the Ruby AWS SDK v2, but the latest is v3. It also looks like things may have changed around a bit since 2014, as I'm not seeing the #next_page or #last_page? methods in the v2 API or the v3 API.
Consider using the #next_record_name and #next_record_type from the response when #is_truncated is true. That's more consistent with how other paginations work in the Ruby AWS SDK, such as with DynamoDB scans for example.
Something like the following should work (though I don't have an AWS account with records to test it out):
route53 = Aws::Route53::Client.new
hosted_zone = ? # Required field according to the API docs
next_name = fqdn(name)
next_type = type

loop do
  response = route53.list_resource_record_sets(
    hosted_zone_id: hosted_zone,
    start_record_name: next_name,
    start_record_type: next_type,
    max_items: 100, # fyi - aws api maximum is 100 so we'll need to page
  )
  records = response.resource_record_sets

  # Break here if you find the record you want

  # Also break if we've run out of pages
  break unless response.is_truncated

  next_name = response.next_record_name
  next_type = response.next_record_type
end
I use the tweetstream gem to get sample tweets from the Twitter Streaming API:
TweetStream.configure do |config|
  config.username = 'my_username'
  config.password = 'my_password'
  config.auth_method = :basic
end

@client = TweetStream::Client.new

@client.sample do |status|
  puts "#{status.text}"
end
However, this script will stop printing out tweets after about 100 tweets (the script continues to run). What could be the problem?
The Twitter Search API sets certain limits that seem arbitrary from the outside; from the docs:
GET statuses/:id/retweeted_by Show user objects of up to 100 members who retweeted the status.
From the gem, the code for the method is:
# Returns a random sample of all public statuses. The default access level
# provides a small proportion of the Firehose. The "Gardenhose" access
# level provides a proportion more suitable for data mining and
# research applications that desire a larger proportion to be statistically
# significant sample.
def sample(query_parameters = {}, &block)
  start('statuses/sample', query_parameters, &block)
end
I checked the API docs but don't see an entry for 'statuses/sample'; looking at the one above, I'm assuming you've reached 100 of whatever statuses/xxx is being accessed.
Also, correct me if I'm wrong, but I believe Twitter no longer accepts basic auth and you must use OAuth. If that is so, it means you're unauthenticated, and the search API will limit you in other ways too; see https://dev.twitter.com/docs/rate-limiting
Hope that helps.
OK, I made a mistake there: I was looking at the search API when I should have been looking at the streaming API (my apologies). It's still possible some of the things I mentioned are the cause of your problems, so I'll leave this up. Twitter has definitely moved away from basic auth, so I'd try resolving that first; see:
https://dev.twitter.com/docs/auth/oauth/faq
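For reference, here is a sketch of what OAuth configuration looks like with the tweetstream gem, replacing the basic-auth block from the question (key names per the gem's README; the credential values are placeholders you'd fill in from your Twitter app settings):
TweetStream.configure do |config|
  config.consumer_key       = 'YOUR_CONSUMER_KEY'       # placeholder
  config.consumer_secret    = 'YOUR_CONSUMER_SECRET'    # placeholder
  config.oauth_token        = 'YOUR_ACCESS_TOKEN'       # placeholder
  config.oauth_token_secret = 'YOUR_ACCESS_TOKEN_SECRET' # placeholder
  config.auth_method        = :oauth
end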