Whenever I need to include a picture in a GitHub project's README file, I usually just stick it in a Screenshots folder and link to it with a relative path. However, this unnecessarily bloats the size of the repository, especially if I include an animated .gif of the project in action.
I've noticed in a couple of popular GitHub iOS projects (like MMDrawerController and JASidePanels) that the images are NOT relatively linked; instead they live on a domain I've never seen: "https://github-camo.global.ssl.fastly.net". Navigating to this site directly doesn't work, and Google searches turn up nothing. So my question: is this site affiliated with GitHub, and how does one get images uploaded there? Of course I could always use a generic image hosting service, but I'd prefer one with official ties to GitHub (if such a site exists).
Where is this?
GitHub itself has a "secret" feature to upload images.
I read about this in a comment by GitHub's own Phil Haack:
I edit (or create) an issue and drag it into there and copy the resulting markdown into my post. It's probably an abuse of GitHub issues.
If you do it like this, the image will be stored on some GitHub server, and it will have a URL like this one:
https://f.cloud.github.com/assets/19977/1656110/a3f8b280-5b6d-11e3-818d-c06ab05bd613.jpg
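Once you have a URL like that, you can embed it in a README with ordinary Markdown (the alt text below is just a placeholder):

![screenshot](https://f.cloud.github.com/assets/19977/1656110/a3f8b280-5b6d-11e3-818d-c06ab05bd613.jpg)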
Fastly is not an image host, it's a content delivery network. See their website and this CDN Planet entry.
If you peek at the source code of the README.md page in the MMDrawerController repository, you'll notice that the images aren't initially stored at Fastly.net.
Moreover, they're meant to be served over plain http (i.e. not https).
<p align="center" >
<img src="http://mutualmobile.github.io/MMDrawerController/ExampleImages/example1.png" width="266" height="500"/>
<img src="http://mutualmobile.github.io/MMDrawerController/ExampleImages/example2.png" width="266" height="500"/>
</p>
The links you refer to are dynamically rewritten by GitHub's Camo tool.
Camo routes images through an SSL host so that browsers don't warn users about insecure mixed content, since everything on GitHub.com is served over https.
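For illustration (the digest below is a made-up placeholder), Camo takes an insecure image URL such as

http://mutualmobile.github.io/MMDrawerController/ExampleImages/example1.png

and serves it as something like

https://github-camo.global.ssl.fastly.net/<digest>

where, as far as I understand Camo's design, the path encodes an HMAC digest of the original URL computed with a key shared between GitHub and the proxy, so the proxy can't be abused as an open relay.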
I built MMDrawerController. I host the images in a gh-pages repo and link to them from the README.
No, you don't need a host. Just put the images in the root of your own project and link to them in README.md, something like this:
![Preview1](./img1.PNG)
![Preview2](./img2.PNG)
...and so on.
Follow these steps to host the image on GitHub's official website.
Visit any repository on GitHub and click your way through to the issues.
Create a new issue by clicking the New Issue button. You'll now see the title and description fields.
Drag and drop an image onto the description field. This starts the upload; when it finishes, GitHub inserts a Markdown image link containing the hosted URL.
Copy that URL and use it in READMEs, issues, or pull requests however you like.
Simply open the image you wish to post, right-click it, choose Copy Image, then in the GitHub post hit Ctrl+V.
Related
I found out I can get the summoner icon image using this URL:
https://ddragon.leagueoflegends.com/cdn/11.14.1/img/profileicon/934.png
The basic form of this is:
https://ddragon.leagueoflegends.com/cdn/{version}/img/profileicon/{profileIconId}.png
I know I can get the {profileIconId} value through the Riot API, but how do I know when I should update the version value? I don't want my app to break when the version changes.
You should not be referencing ddragon for displaying icons or images. In fact, DataDragon specifically requests that you download the archive (.tgz) for each patch/version and host the assets locally or on your own CDN.
Websites like op.gg do this for all of the assets and host the images on their own CDN. They have to update their CDN every patch. You can automate updating the CDN using scripts, but for most small projects the work to automate this process may not be worth it.
Generally, it is considered rude to piggyback off of someone else's CDN without explicit permission to do so. Riot goes a step further and explicitly asks that you do not do this.
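As a rough sketch of what self-hosting involves (PHP, assuming allow_url_fopen is enabled; the dragontail archive URL pattern comes from Riot's developer docs, so verify it still holds):

<?php
// Download one patch's full asset archive (.tgz) for self-hosting.
$version = '11.14.1'; // the patch you want to mirror
copy("https://ddragon.leagueoflegends.com/cdn/dragontail-{$version}.tgz",
     "dragontail-{$version}.tgz");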
If you are using the Data Dragon (ddragon) CDN, you can find the latest version by looking at this JSON that they provide:
https://ddragon.leagueoflegends.com/api/versions.json
Just take the first element of the array and you are good to go without any scripting.
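A minimal PHP sketch of that (assuming allow_url_fopen is enabled; the profile icon id is a placeholder for a value you'd get from the Riot API):

<?php
// Fetch the version list; the first element is the most recent patch.
$versions = json_decode(
    file_get_contents('https://ddragon.leagueoflegends.com/api/versions.json'),
    true
);
$latest = $versions[0]; // e.g. "11.14.1"

$profileIconId = 934; // placeholder; would come from the Riot API
$iconUrl = "https://ddragon.leagueoflegends.com/cdn/{$latest}/img/profileicon/{$profileIconId}.png";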
I'm parsing data from another website, and I'm wondering whether it's better to download the images and serve them myself, or just link to the images on the site I parsed. Is linking to a remote image slower by default than serving the image from my own server?
I couldn't find an answer to this simple question. If the question is too discussion-based and doesn't belong here, please comment below so it can be deleted.
Some rules of thumb:
Don't display content on your page which you 'source' from another site without the other site's permission. ('Share this' embed links provided by YouTube are okay; directly linking to the .flv file of someone's video from another site to display on yours is not.)
Don't copy content from other domains onto your domain without their permission first (doing so would be a copyright violation).
So to answer your question: You should copy the content onto your domain/host, but only if they have given permission to allow this kind of use.
Edit: I am interpreting your question as "I am taking content from another website [and putting it on my own] and I am wondering if I should link directly to their content (<img> tags pointing to the other domain) or if I should download/copy the content to my website and have my server handle everything?"
The "technical" answer is "it depends on how good your host is compared to the other host when serving content to the average visitor". Compare a page run by Google vs. the same thing run on a home server behind a 56k modem. It matters if you have broadband, but if you're on a 33.3k modem it doesn't.
When using FB/sharer/sharer.php I'm running into problems with my TYPO3 installations: no pictures are found. Or rather, pictures are found, but safe_image.php does not use the correct URL when trying to download them.
In Firebug I noticed FB does the following:
safe_image.php?d=XYZ&url=http%3A%2F%2Fwww.my-domain.com%2Fen%2Ffileadmin%2FmyPicture.jpg
which fails because the real path is
safe_image.php?d=XYZ&url=http%3A%2F%2Fwww.my-domain.com%2Ffileadmin%2FmyPicture.jpg
Any idea why the /en gets added? The website has multiple languages (switched via domain.com/en or domain.com/de), but all images can be found under domain.com/fileadmin/...
When using og meta tags it works fine, but we want the user to choose the picture they would like to share.
Facebook is not able to resolve relative links against a base tag, which is bad.
The solution is to use absolute URLs (config.absRefPrefix), as sketched below.
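A minimal TypoScript sketch (the domain is the example one from the question; adjust for your site):

# Force generated URLs to be absolute so Facebook's safe_image.php
# fetches the real path instead of one relative to /en/
config.absRefPrefix = http://www.my-domain.com/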
Related bug tracker issue: http://bugs.typo3.org/view.php?id=18121
I'm doing an iPhone version of a desktop site that includes a blog. The blog often embeds images from other domains (the image URLs always start with http:// in this case, obviously), but because I'm using cache-manifest, these images don't load because they aren't declared in the manifest file.
I have a NETWORK: whitelist section that has all of my AJAX request files, etc. I've even whitelisted the Flickr farm domains because a lot of the images we add to the blog come from our Flickr page. The Flickr images show up just fine, but any other "random" image hotlinked from another domain shows up broken.
I tried adding a line like this:
http://
to the NETWORK: section, but it doesn't seem to like http:// as a whitelist.
Does anyone have any thoughts on this?
Thanks!
Alex
Just add the "online whitelist wildcard flag" to your manifest:
NETWORK:
*
That should do the trick! More info is on the WHATWG spec page.
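For context, a complete manifest using the wildcard might look like this (the cached file names are placeholders):

CACHE MANIFEST
# resources cached for offline use
index.html
style.css
app.js

NETWORK:
*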
Hope this helps!
I think I've got a workaround. What if you created a simple server-side file (remoteResource.php) that you could reference like this:
remoteResource.php?resource=http://somewhere.com/remote/image.jpg
The PHP (or whatever server-side language you're using) could just cURL in the remote resource and send it unmodified to the browser. Then, whitelist that file.
I haven't tested this because the environment I'm working with doesn't have cURL installed (ugh) but I don't see why it can't work.
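A rough sketch of what remoteResource.php could look like (untested, as noted above; in production you would also whitelist allowed hosts so this doesn't become an open proxy):

<?php
// Fetch a remote resource server-side and pass it through unmodified.
$url = isset($_GET['resource']) ? $_GET['resource'] : '';
if (!preg_match('#^https?://#i', $url)) {
    header('HTTP/1.1 400 Bad Request');
    exit;
}

$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, true);
$data = curl_exec($ch);
$contentType = curl_getinfo($ch, CURLINFO_CONTENT_TYPE);
curl_close($ch);

header('Content-Type: ' . ($contentType ? $contentType : 'application/octet-stream'));
echo $data;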
We have members-only paid content that is frequently copied and republished without our permission.
We are trying to 'watermark' our content by including each customer's user id in a fake CSS class, for example <p class='userid_1234'> (except not so obvious, of course :), which would help us track the source of the copying; we then place that class somewhere in the article body.
The problem is, by including user-specific information into an article, it makes it so that the article content is ineligible for caching because it is now unique to each user.
This bumps the page load time from ~0.8 ms to ~2.5 s for each article page view.
Does anyone know of any watermarking strategies that can still be used with caching?
Alternatively, what can be done to speed up database access? (Ha ha, that there's just a tiny topic, I'm sure...)
We're using the CMS Expression Engine, but I'd like to hear about any strategies. They don't have to be EE-specific.
If you're talking about images then you could use PHP to add a watermark to the images.
How can I add an image onto an image in PHP like a watermark
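A minimal GD sketch along those lines (file names are placeholders):

<?php
// Stamp watermark.png onto the bottom-right corner of photo.jpg.
$photo = imagecreatefromjpeg('photo.jpg');
$stamp = imagecreatefrompng('watermark.png');
imagecopy($photo, $stamp,
    imagesx($photo) - imagesx($stamp) - 10, // 10 px in from the right edge
    imagesy($photo) - imagesy($stamp) - 10, // 10 px up from the bottom edge
    0, 0, imagesx($stamp), imagesy($stamp));
header('Content-Type: image/jpeg');
imagejpeg($photo);
imagedestroy($photo);
imagedestroy($stamp);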
It's a tool to help track down the lazy copiers who just copy the source code as-is. This is not preventative, nor is it a deterrent. – Ian
Going by your comment above, you're okay with users copying your content, just not without the formatting, etc. So you could provide users with an embed-style snippet for that particular content, just like YouTube does with videos. Into that embed code you could add your own links back to your site, use your own CSS, etc.
That way you can still let members use the content, but it will always come out the way you intended, with links back to your site.
Thanks
You could always cache a version that uses a special placeholder string, like #!username!#, and then fill it in with PHP based on which user is viewing it.
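Something like this sketch (the cache path and the user id variable are placeholders):

<?php
// Serve the cached article, swapping the token for the current viewer's id.
$cachedHtml = file_get_contents('/path/to/cache/article-123.html');
echo str_replace('#!username!#', htmlspecialchars($currentUserId), $cachedHtml);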
Another option is to switch from caching on the server to letting the browser cache the page locally for a while. That way it is only cached per user, and it reduces the calls to your database. Because an article is fairly static, you could let the local machine cache it and pull in the comments via JavaScript.
This last one is probably not what you are really looking for, but I'm going to come out and say it anyway. You could stop treating your users like thieves and instead treat the thieves as thieves. Go to the person hosting the servers your content is on and send them an email telling them that copyrighted premium content is being hosted on their servers without your permission. You can even automate that process.
How do you find out which sites are reposting your content? Put a link to your site in the body content and do a Google Search/Blog Search for articles linking to it. To automate it, use Google Blog Search, because it offers RSS feeds. Anything with a link back to your site could go into a database with a link to the offending page; someone could review it, and if it is the entire article, do a Whois lookup and send them an email.
What makes you think adding CSS to something is going to stop people from copying it without that CSS? More likely they are just copying the source of the content you show them and ignoring all the styling around it. For example, I use Tamper Data to look at all HTTP requests made by Firefox; if I can see it on the page, I can see it in the logs. Even with all the "protection" some sites try to put in place, it generally never works. I can grab what I want without using any screen capture/recording.
If you were serving .flv files, for example, I would easily be able to grab the source even if you overlaid it with some CSS. I think the best approach would be to contact the sites publishing your premium content and ask them to remove it. It's either that or watermark the actual content on the fly while sending it to the browser.