- cross-posted to:
- opensource@programming.dev
- opensource@lemmy.ml
A site similar to 12ft.io, but self-hosted, and it works with websites that 12ft.io doesn’t.
How does it work?
It pretends to be Googlebot (Google’s web crawler) and gets the same content that Google would get. Sites serve Google the whole page so the article can be indexed properly, and this takes advantage of that.
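For a rough sketch of the idea (just an illustration with a placeholder URL, and assuming the site in question serves crawlers the full article), spoofing the crawler amounts to one extra request header:

```python
# Minimal sketch: fetch a page while presenting Googlebot's User-Agent.
# The URL is a placeholder; whether a given site serves the full article
# to crawlers varies.
import requests

GOOGLEBOT_UA = (
    "Mozilla/5.0 (compatible; Googlebot/2.1; "
    "+http://www.google.com/bot.html)"
)

def fetch_as_googlebot(url: str) -> str:
    """Request the URL with a Googlebot User-Agent and return the HTML."""
    resp = requests.get(url, headers={"User-Agent": GOOGLEBOT_UA}, timeout=30)
    resp.raise_for_status()
    return resp.text

if __name__ == "__main__":
    print(fetch_as_googlebot("https://example.com/some-article")[:500])
```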
If you’re on Firefox on desktop/laptop, check out Bypass Paywalls Clean [0]. It was removed from the Firefox add-on store due to a DMCA claim [1], but it can be manually installed (and auto-updates) from GitLab. The dev even provides instructions on how to add custom filters to uBlock Origin [2], so you don’t have to add another extension but still get some of the benefit.
[0] https://gitlab.com/magnolia1234/bypass-paywalls-firefox-clean
[1] https://winaero.com/mozilla-has-silently-removed-the-bypass-paywalls-clean-add-on-from-amo/
[2] https://gitlab.com/magnolia1234/bypass-paywalls-clean-filters
Your correct indexing is highly appreciated!
took the words right out my mouth
It must have been while he was kissing you.
That’s the dev who was butt hurt about something the maintainer of https://github.com/iamadamdev/bypass-paywalls-chrome did, so they forked it and arguably do a better job, lol.
Also, Bypass Paywalls Clean works on non-Firefox browsers too, like Chrome or Kiwi (Android).
Where are the metric versions? I want my 3 meter ladder.
Clone the repo and make it yourself.
Most often when I use it, it’s to avoid metrics.
It amazes me that all it takes is just changing user agent to
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
and it can bypass paywalls on many sites? I thought those sites would try harder (e.g. checking whether the IP address truly belongs to Google), but apparently not.

Checking IP ownership is a moving target and more likely to produce outcomes these sites don’t want (accidentally blocking Google’s bots and preventing results from appearing on Google).
Checking the user agent is cheap, easy, and unlikely to break (for this purpose, anyway), and the percentage of folks who know how to bypass this check is relatively slim, with a pretty small financial impact.
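Just to illustrate how cheap that check is (a toy sketch with made-up responses, not any real site’s code), the whole decision can be a single substring test on the server:

```python
# Toy Flask view: decide between the full article and a teaser based only
# on the User-Agent string. Illustrative only; content strings are made up.
from flask import Flask, request

app = Flask(__name__)

FULL_ARTICLE = "<html>...full article text...</html>"
TEASER = "<html>...first paragraph plus a subscribe prompt...</html>"

@app.route("/article")
def article():
    ua = request.headers.get("User-Agent", "")
    # One cheap substring check: crawlers get everything, everyone else a teaser.
    if "Googlebot" in ua:
        return FULL_ARTICLE
    return TEASER

if __name__ == "__main__":
    app.run(port=8000)
```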
It’s not necessarily a moving target when entire blocks can be associated with Google.
Unless they are permanently only using specific addresses or blocks and will never change that up, I’d consider it a moving target.
Google literally publishes an official list of IP ranges for its crawlers, complete with an API endpoint that returns the current ranges, which you can use to automate the check. Hardly a moving target, and even if it were, a moving target doesn’t matter when you know exactly where it is at all times.
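For reference, a sketch of the stricter verification being described (this is the documented reverse-DNS approach rather than a hand-maintained IP list; treat it as an illustration, not production code): reverse-resolve the connecting IP, check the hostname is under googlebot.com or google.com, then forward-resolve it and confirm it maps back to the same IP.

```python
# Sketch: verify that an IP claiming to be Googlebot actually belongs to
# Google via reverse DNS plus forward confirmation.
import socket

def is_verified_googlebot(ip: str) -> bool:
    try:
        # Reverse DNS: the PTR hostname should be under googlebot.com or google.com.
        host, _, _ = socket.gethostbyaddr(ip)
        if not host.endswith((".googlebot.com", ".google.com")):
            return False
        # Forward-confirm: the hostname must resolve back to the original IP.
        _, _, addrs = socket.gethostbyname_ex(host)
        return ip in addrs
    except (socket.herror, socket.gaierror):
        return False

if __name__ == "__main__":
    # 66.249.66.1 sits in a well-known Googlebot range (illustrative input).
    print(is_verified_googlebot("66.249.66.1"))
```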
Same. I thought there would be more happening in the background, but when I saw it’s just spoofing the Googlebot headers to get the HTML, I was a bit disappointed that it’s so stupidly easy.
https://1ft.io also seems to work, and judging by the branding it seems unrelated to 12ft.
There’s 4ft.io too. Oh nvm looks like it’s gone.
If you’re on Android and use Firefox, you can use the Disable JavaScript extension to disable JS on sites with paywalls, like NYtimes. While not perfect, it works remarkably well.
Also works great on Desktop.
I’ve been happy with https://github.com/everywall/ladder
I use this, too! It’s great but doesn’t always work.
Loaded the docker for fun on my NAS. I don’t need it, but other users in my home may appreciate this.
Love it! Deployed it this morning.
Seems like this can be done in the browser using a user agent switcher.
So is this an HTTP proxy? I don’t quite get it.