How to share SQL query results in Slack

I have a similar question to this one (for Pandas), in that I'd like to have the results of an SQL query appear nicely in a Slack message, as a table.
If, for example, I output the query results as Markdown and then paste this into Jira, a table appears exactly as I'd like, regardless of whether column names are in snake_case. However, if I choose the Markdown (raw) code snippet in Slack, underscores are interpreted as beginning italics, which is completely wrong.
Does anyone have a better suggestion for displaying tabular results? Or forcing Markdown (raw) to ignore underscores? I tried code blocks as well but to no avail.
For info, the database IDE I'm using is DataGrip.

Slack does not have support for tables like Jira does, so your only option is to choose from workarounds. I see three available approaches:
1) Display in external browser
Store your data in an external web app and just post a link to Slack. That works very well, e.g., with Google Sheets if you use the Google Apps integration in your Slack workspace.
2) Display as image
Another option is to generate an image (e.g. a GIF) from your table and then post the image to Slack. That way the data can be displayed within Slack. To save Slack storage space I would suggest storing the image file in an external image service (e.g. Imgur) and only posting the link. Imgur has an API which would allow this process to be fully automated.
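For illustration, a minimal sketch of the Imgur upload step in Python with requests (the client ID and file name are placeholders, and this assumes the table has already been rendered to an image):

import requests

# Upload a rendered table image to Imgur's anonymous image endpoint.
# "YOUR_CLIENT_ID" stands in for a registered application's client ID.
with open("table.png", "rb") as f:
    resp = requests.post(
        "https://api.imgur.com/3/image",
        headers={"Authorization": "Client-ID YOUR_CLIENT_ID"},
        files={"image": f},
    )
resp.raise_for_status()
print(resp.json()["data"]["link"])  # shareable URL to post in Slack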
3) Display as plain text
Building upon one of the answers from the question you linked, you can also convert your table into plain text using a tool like Tabulate and then upload it as a plain-text snippet to Slack. That way the table can also be viewed within Slack. Note that the max size for snippet uploads is 1 MB. Also, Slack will only show the first few lines by default.
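For example, a minimal sketch in Python using the tabulate library (the rows and headers are made up; any query result in list-of-tuples form works the same way):

from tabulate import tabulate  # pip install tabulate

# Illustrative query results; in practice these come from cursor.fetchall().
rows = [("alice", 42), ("bob", 17)]
headers = ["user_name", "order_count"]  # snake_case is harmless in plain text

# The "psql" format renders cleanly inside a Slack snippet or code block.
print(tabulate(rows, headers=headers, tablefmt="psql"))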

Doug here, creator of SQLBot.co. Tables are not supported in Slack, but you can get pretty close using ASCII tables and code formatting (three tick marks to begin and end).
Here's an example:
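For instance, an ASCII table pasted between the tick marks renders along these lines (illustrative data):

+-----------+-------------+
| user_name | order_count |
+-----------+-------------+
| alice     |          42 |
| bob       |          17 |
+-----------+-------------+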
The only issue is that if your table gets too wide, the text wraps.
Depending on how you're generating the output, there may be good helpers. In the Ruby world you can use the terminal-table gem or the text-table gem.

Since this comes up in a Google search, and the site mentioned previously requires signup, I thought I'd share what I found:
https://ozh.github.io/ascii-tables/
Just paste the output into Slack within a code block.


How to download the content of a web page to Google Sheets, using importxml

I am trying to import data from the following website to Google Sheets. I want to import all the matches for the day.
https://www.tournamentsoftware.com/tournament/b731fdcd-a0c8-4558-9344-2a14c267ee8b/Matches
I have tried importxml and importhtml, but it seems these do not work, as the website uses JavaScript. I have also tried to use Apipheny without any success.
When using Apipheny, the error message is
'Failed to fetch data - please verify your API Request: {DNS error'
TL;DR
Adapted from my answer to How to know if Google Sheets IMPORTDATA, IMPORTFEED, IMPORTHTML or IMPORTXML functions are able to get data from a resource hosted on a website? (also posted by me)
Please spend some time learning how to use the browser's developer tools so you will be able to identify
if the data is already included in the source code of the webpage as JSON / a literal JavaScript object, or in another form
if the webpage is doing GET or POST requests to retrieve the data, and when those requests are made (i.e. at some point during page parsing, or on an event)
if the requests require data from cookies
Brief guide on how to use the web browser to find useful details about the webpage / data to import
Open the source code and check whether the required data is included. Sometimes the data is included as JSON and added to the DOM using JavaScript. In this case it might be possible to retrieve the data by using the Google Sheets functions or the URL Fetch Service from Google Apps Script.
Let's say that you use Chrome. Open the Dev Tools, then look at the Elements tab. There you will see the DOM. It might be helpful to identify whether the data that you want to import, besides being in visible elements, is included in hidden / not-visible elements like <script> tags.
Look at the Sources tab; there you might be able to see the JavaScript code. It might include the data that you want to import as a JavaScript object (commonly referred to as JSON).
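If you do find such an endpoint, the data can often be fetched directly. Here is a hedged sketch in Python using requests (the URL and response shape are hypothetical, standing in for whatever request you spot in the Network tab; in Google Apps Script the equivalent call is UrlFetchApp.fetch):

import requests

# Hypothetical JSON endpoint discovered in the browser's Network tab.
url = "https://example.com/api/matches?date=2021-01-01"
response = requests.get(url, timeout=10)
response.raise_for_status()
for match in response.json().get("matches", []):  # assumed response shape
    print(match)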
There are a lot of questions about google-sheets + web-scraping that mention problems using importhtml and/or importxml and already have answers, and many even include code (JavaScript snippets, Google Apps Script functions, etc.) that might save you from having to use a specialized web-scraping tool with a steeper learning curve. At the bottom of this answer there is a list of questions about using the Google Sheets built-in functions, including annotations of the workarounds proposed.
Is there a way to get a single response from a text/event-stream without using event listeners? asks about using EventSource. While this can't be used in server-side code, the answer shows how to use the HtmlService to use it in client-side code and retrieve the result to Google Sheets.
As you already realized, the Google Sheets built-in functions importhtml(), importxml(), importdata() and importfeed() only work with static pages that do not require signing in or other forms of authentication.
When the content of a public page is created dynamically by using JavaScript, it cannot be accessed with those functions; on the other hand, the website's webmaster may also have purposefully prevented web scraping.
How to identify if content is added dynamically
To check if the content is added dynamically, using Chrome,
Open the URL of the source data.
Press F12 to open Chrome Developer Tools
Press Control+Shift+P to open the Command Menu.
Start typing javascript, select Disable JavaScript, and then press Enter to run the command. JavaScript is now disabled.
JavaScript will remain disabled in this tab so long as you have DevTools open.
Reload the page to see if the content that you want to import is shown. If it is, it can be imported by using the Google Sheets built-in functions; otherwise it's not possible with them, but it might be possible by using other means of web scraping.
According to Wikipedia,
Web scraping, web harvesting, or web data extraction is data scraping used for extracting data from websites.
Use of robots.txt to block Web crawlers
Webmasters can use a robots.txt file to block access to the website. In that case the result will be #N/A Could not fetch URL.
Use of User agent
The webpage could be designed to return a special custom message instead of the data.
Below are more details about how the Google Sheets built-in "web-scraping" functions work.
IMPORTDATA, IMPORTFEED, IMPORTHTML and IMPORTXML are able to get content from resources hosted on websites that are:
Publicly available. This means that the resource doesn't require authorization / being logged in to any service to access it.
The content is "static". This means that if you open the resource using the view-source option of modern web browsers it will be displayed as plain text.
NOTE: Chrome's Inspect tool shows the parsed DOM; in other words, the actual structure/content of the web page, which could be dynamically modified by JavaScript code or browser extensions/plugins.
The content has the appropriate structure (see the formula example after this list).
IMPORTDATA works with structured content such as CSV or TSV, regardless of the file extension of the resource.
IMPORTFEED works with marked-up content such as Atom/RSS.
IMPORTHTML works with marked-up content such as HTML that includes properly marked-up lists or tables.
IMPORTXML works with marked-up content such as XML or any of its variants, like XHTML.
The content doesn't exceed the maximum size. Google hasn't disclosed this limit, but the error below will be shown when the content exceeds it:
Resource at url contents exceeded maximum size.
Google servers are not blocked by means of robots.txt or the user agent.
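As an illustration of the structure requirement, a properly marked-up HTML table can usually be pulled with a formula like =IMPORTHTML("https://example.com/page", "table", 1), where the URL is a placeholder, the second argument selects "table" or "list", and the third is the 1-based index of that element on the page.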
On the W3C Markup Validator there are several tools to check whether resources have been properly marked up.
Regarding CSV, check out Are there known services to validate CSV files?
It's worth noting that the spreadsheet
should have enough room for the imported content; Google Sheets has a 10 million cell limit per spreadsheet, according to this post a column limit of 18,278, and a 50 thousand character limit on cell content, whether as a value or a formula.
doesn't handle large in-cell content well; the "limit" depends on the user's screen size and resolution, as it's now possible to zoom in/out.
References
https://developers.google.com/web/tools/chrome-devtools/javascript/disable
https://en.wikipedia.org/wiki/Web_scraping
Related
Using Google Apps Script to scrape Dynamic Web Pages
Scraping data from website using vba
Block Website Scraping by Google Docs
Is there a way to get a single response from a text/event-stream without using event listeners?
Software Recommendations
Web scraping tool/software available for free?
Recommendations for web scraping tools that require minimal installation
Web Applications
The following question is about a different result, #N/A Could not fetch URL
Inability to use IMPORTHTML in Google sheets
Similar questions
Some of these questions might be closed as duplicates of this one
Importing javascript table into Google Docs spreadsheet
Importxml Imported Content Empty
scrape table using google app scripts
One answer includes Google Apps Script code using the URL Fetch Service
Capture element using ImportXML with XPath
How to import Javascript tables into Google spreadsheet?
Scrape the current share price data from the ASX
One of the answers includes Google Apps Script code to get data from a JSON source
Guidance on webscraping using Google Sheets
How to Scrape data from Indiegogo.com in google sheets via IMPORTXML formula
Why importxml and importhtml not working here?
Google Sheet use Importxml error could not fetch url
One answer includes Google Apps Script code using the URL Fetch Service
Google Sheets - Pull Data for investment portfolio
Extracting value from API/Webpage
IMPORTXML shows an error while scraping data from website
One answer shows the xhr request found using browser developer tools
Replacing =ImportHTML with URLFetchApp
One answer includes Google Apps Script code using the URL Fetch Service
How to use IMPORTXML to import hidden div tag?
Google Sheet Web-scraping ImportXml Xpath on Yahoo Finance doesn't works with french stock
One of the answers includes Google Apps Script code to get data from a JSON source. As of January 4th, 2023, it's no longer working, very likely because Yahoo! Finance is now encrypting the JSON. See Tainake's answer to How to pull Yahoo Finance Historical Price Data from its Object with Google Apps Script? for a script using Crypto.js to handle this.
How to fetch data which is loaded by the ajax (asynchronous) method after the web page has already been loaded using apps script?
One answer suggests reading the data from the server instead of scraping it from a webpage.
Using ImportXML to pull data
Extracting data from web page using Cheerio Library
One answer suggests the use of an API and Google Apps Script
ImportXML is good for basic tasks, but it won't get you too far if you are serious about scraping:
The approach only works with the most basic websites (no SPAs rendered in browsers can be scraped this way; any basic web-scraping protection or connectivity issue breaks the process, and there isn't any control over HTTP request geolocation or the number of retries), and Yahoo Finance is not a simple website.
If the target website data requires some cleanup post-processing, it gets very complicated, since you are now "programming with Excel formulas", a rather painful process compared to writing regular code in a conventional programming language.
There isn't any proper launch and cache control, so the function can be triggered occasionally, and if the HTTP request fails, cells will be populated with ERR! values.
I recommend using proper tools (an automation framework and a scraping engine that can render JavaScript-powered websites) and using Google Sheets just for basic storage purposes:
https://youtu.be/uBC752CWTew (Pipedream for automation and ScrapeNinja engine for scraping)

Auto format #1234 string in Microsoft Teams channel

Is it possible to add some code or something else so that whenever I type a hashtag followed by a number, it will be replaced by a URL?
My requirement is: whenever a developer mentions a ticket number like #1234 in a chat post in a channel, I want to make it clickable so that it directly opens a URL like myticketsystem.com?id=1234.
If I understand correctly, you're looking to implement auto-linking similar to how GitHub handles things like Fixes issue #xxxx? It isn't possible to implement this in Teams today; you can't inject your own logic into the composition rendering pipeline.
What you could do, however, is build a Compose Extension. This wouldn't replicate the GitHub experience, but it would certainly make it easier to insert links to tickets into the compose editor. It could also be a more powerful tool, allowing users to search the ticketing system rather than having to know the number before writing the post.

Putting our logo in the email notifications that the system sends out

We want to be able to show our logo, but it always comes out with a "Download Image" prompt instead of the image showing in the body of the email.
Emails are built using the Oracle Mail_pkg... How do I add the image to the HTML body so that it shows without the user having to download the image like it's an attachment? (I also want it to show on BlackBerry.)
I know it's possible, since if you add an image to an Outlook signature, you can see those pictures in Outlook and on BlackBerry.
Along with the nice answer sleske provided, I can suggest that you create the message you want in Outlook or elsewhere, take a look at the message source, and try to reproduce it in Oracle.
Also bear in mind that using the utl_mail package you can only send messages with a body up to 4,000 characters long (the size of the varchar2 type in SQL). I suggest you try the utl_smtp package; though it requires a little more coding and accuracy, it gives you much more flexibility in creating fancy e-mails.
You need to include the image itself inside the email, usually as an attachment.
To have it show inside the email, you need to link to the attachment from inside your email. This is commonly done using the cid scheme.
See e.g. http://mailformat.dan.info/headers/mime.html for more information.
Or straight to the source:
Content-ID and Message-ID Uniform Resource Locators
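To make the cid mechanics concrete, here is a minimal sketch using Python's standard email library (only an illustration of the MIME structure, since the question is about Oracle's mail packages; the addresses and logo.png are placeholders):

from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
from email.mime.image import MIMEImage

# A multipart/related message: the HTML part references the image part
# by its Content-ID, so the image renders inline instead of as a download.
msg = MIMEMultipart("related")
msg["Subject"] = "Notification with inline logo"
msg["From"] = "noreply@example.com"   # placeholder addresses
msg["To"] = "user@example.com"

msg.attach(MIMEText(
    '<html><body><img src="cid:logo"><p>Hello!</p></body></html>', "html"))

with open("logo.png", "rb") as f:     # placeholder logo file
    img = MIMEImage(f.read())
img.add_header("Content-ID", "<logo>")  # matches the cid:logo reference above
img.add_header("Content-Disposition", "inline", filename="logo.png")
msg.attach(img)
# Sending (e.g. via smtplib) is omitted; the point here is the structure.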

HTML codes showing in viewpage HTML data

I'm new to CodeIgniter; I've been using it in my project for the last two months. I have a comment section in my project where anyone can leave comments. Everything is going perfectly, but whenever anyone posts HTML content (images/videos), the raw HTML code is shown on the comment page rather than the HTML content (images/videos).
For example, when I save an embedded YouTube video code in the comment box, the output comes out as the raw embed code rather than the YouTube video.
I feel like it must be a minor thing, but I really can't understand where the fault is occurring. Please, if anybody has a solution, reply as soon as possible.
Couldn't one devise a system where somebody just posts the YouTube link itself, and through a combination of regular expressions your own system generates the object/embed code, so there's no security risk possible?
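A rough sketch of that idea in Python (the regex and the iframe markup are illustrative, not hardened for production use):

import re

# Matches youtube.com/watch?v=<id> and youtu.be/<id> URLs (11-char video IDs).
YOUTUBE_RE = re.compile(
    r"https?://(?:www\.)?(?:youtube\.com/watch\?v=|youtu\.be/)([\w-]{11})"
)

def embed_youtube_links(text):
    # Replace bare YouTube URLs with an <iframe> embed; the rest of the
    # comment should still be escaped/sanitized separately.
    def repl(match):
        return ('<iframe width="560" height="315" '
                'src="https://www.youtube.com/embed/%s" '
                'frameborder="0" allowfullscreen></iframe>' % match.group(1))
    return YOUTUBE_RE.sub(repl, text)

print(embed_youtube_links("Check this out: https://youtu.be/dQw4w9WgXcQ"))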
I had a similar problem a while back - wanting to give end users the ability to post YouTube videos, but not allow them to just post anything without some sort of XSS protection.
I ended up using htmlpurifier - http://htmlpurifier.org/ to filter the contents being submitted in the form.
There is a modification that can be made to the whitelist that allows YouTube code through the purifier.
http://htmlpurifier.org/docs/enduser-youtube.html
So far, that's working well, but my system is still in development.
As a quick hack you can do htmlspecialchars_decode when displaying the comment in your view. This is very dangerous though without the use of sanitization when you receive the comment - search xss_clean on this page. You should also use strip_tags to remove all the HTML tags you don't need (everything except the video tags) prior to inserting the comment in the database.

Content Watermarking

We have members-only paid content that is frequently copied and republished without our permission.
We are trying to 'watermark' our content by including each customer's user id in a fake CSS class, for example <p class='userid_1234'> (except not so obvious, of course :), that would help us track the source of the copying; we then place that class somewhere in the article body.
The problem is, by including user-specific information into an article, it makes it so that the article content is ineligible for caching because it is now unique to each user.
This bumps the page load time from ~.8ms to ~2.5sec for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? (Ha, ha, that's just a tiny topic, I'm sure...)
We're using the CMS Expression Engine, but I'd like to hear about any strategies. They don't have to be EE-specific.
If you're talking about images then you could use PHP to add a watermark to the images.
How can I add an image onto an image in PHP like a watermark
It's a tool to help track down the lazy copiers who just copy the source code as-is. This is not preventative, nor is it a deterrent. – Ian
Going by your above comment, you are happy with users copying your content, just not without the formatting etc. So what you could do is provide the users an embed-type source code for that particular content, just like YouTube does with videos. Into that embed source code you could add your own links back to your site, utilize your own CSS, etc.
That way you can still allow the members to use the content but it will always come out the way you intended it with links back to your site.
Thanks
You could always cache a version that uses a special string, like #!username!#, and then later fill it in with PHP based on which user is viewing it.
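A sketch of that substitution in Python (the answer suggests PHP; the token and the class format are illustrative):

# The cached copy is identical for every user and contains a literal token.
CACHED_HTML = '<p class="#!username!#">Premium article body...</p>'

def render_for_user(cached_html, user_id):
    # Cheap string replacement at serve time: the expensive render stays
    # cacheable and shared, while each response is watermarked per user.
    return cached_html.replace("#!username!#", "userid_%s" % user_id)

print(render_for_user(CACHED_HTML, 1234))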
Another way, I believe, is to switch from caching on the server to letting the browser cache it locally for a little while. That way it is only cached per user, and it reduces the calls to your database. Because an article is pretty static, you could just let the local computer cache it, and pull in comments via JavaScript.
This last one is probably not one you are really looking for, but I'm gonna come out and say it anyway. You could not treat your users like thieves, and instead treat the thieves as thieves. Go to the person hosting the servers your content is on and send them an email telling them copyrighted premium content is being hosted on their servers without your permission. You can even automate that process.
How do you find out what sites are posting your content? Put a link in the body content to your site, and do a Google Search / Blog Search for articles linking to that site. To automate it, use Google Blog Search, because it offers RSS feeds. Anything that has a link back to your site could go into a database with a link to the page; someone could look at it, and if it is the entire article, do a Whois lookup and send them an email.
What makes you think adding CSS to something is going to stop people from copying it without that CSS? It's more likely that they are just copying the source of the content you are showing them and ignoring all the styling around it. For example, I use Tamper Data to look at all HTTP requests made by Firefox: if I can see it on the page, I can see it in the logs. Even with all the "protection" some sites try to put in place, it generally will never work. I can grab what I want without using any screen capture/recording.
If you were serving FLVs, for example, I would easily be able to grab the source of those even if you overlaid them with some CSS. I think the best approach would be to find the sites publishing your premium content and ask them to remove it. It's either that or watermark the actual content on the fly while sending it to the browser.
