"Chromium-based browsers are being “infested” by Instart Logic tech which works around blockers and worst, around browser privacy settings (they may start “infecting” Firefox eventually, but that is not happening now)."
From his linked post:
"Instart Logic will detect when the developer console opens, and cleanup everything then to hide what it does"
Is this implemented via a CDN-delivered script? Why would Chromium-based browsers be more susceptible?
There's another one I can't find, but it writes out things like "user moved mouse to x,y; user has been idle for 10 seconds; page lost focus; page gained focus". Kinda creepy how much is available to the JavaScript engine.
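A minimal sketch of that kind of activity logger (hypothetical, not any particular vendor's code): a plain buffering helper, with the browser event wiring shown in comments since those APIs only exist on a page.

```javascript
// Buffers human-readable activity events; a real tracker would periodically
// ship the batch to a collection endpoint.
function createActivityLog() {
  const events = [];
  return {
    record(msg) { events.push(`${Date.now()}: ${msg}`); },
    // Returns everything recorded so far and empties the buffer.
    drain() { return events.splice(0, events.length); },
  };
}

// Browser wiring (sketch):
// const log = createActivityLog();
// document.addEventListener("mousemove", e =>
//   log.record(`user moved mouse to ${e.clientX},${e.clientY}`));
// window.addEventListener("blur",  () => log.record("page lost focus"));
// window.addEventListener("focus", () => log.record("page gained focus"));
// setInterval(() =>
//   navigator.sendBeacon("/collect", JSON.stringify(log.drain())), 10000);
```

All of those events are freely observable by any script on the page; no special permissions are involved.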
Elephant Games are incredible, and so is their creator. I highly recommend checking out his other games if you haven't already. http://www.jmtb02.com/
Firefox no longer reports installed plugins to websites. It causes some interesting behavior on websites that check to see if you have flash before letting you play games (sorry, you need to have flash installed). I had a user script for a while that 'fixed' the navigator.$whatever to show having flash installed so I could play some game; now I just use Shumway and it seems to work.
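A userscript along those lines might look like this sketch (the plugin fields and the detector snippet are assumptions; real Flash checks often also scan `navigator.mimeTypes`):

```javascript
// Shadows the plugins list with a fake "Shockwave Flash" entry, enough to
// satisfy naive "do you have Flash?" checks. Passing the target object in
// makes the helper testable outside a browser.
function spoofFlash(nav) {
  const fakePlugin = {
    name: "Shockwave Flash",
    description: "Fake Flash entry for naive detectors",
  };
  Object.defineProperty(nav, "plugins", {
    get: () => [fakePlugin],
    configurable: true,
  });
  return nav;
}

// In a userscript: spoofFlash(navigator);
// A naive detector such as
//   Array.from(navigator.plugins).some(p => p.name === "Shockwave Flash")
// now reports Flash as installed.
```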
The difference is that you run only trusted native applications with the ability to check their source code, while you are visiting untrusted sites which often have obfuscated javascript.
I think a better comparison would be between a PDF file and the web: PDF files can't do the same or more.
And I would argue it is harder for the average user to view the JavaScript inside PDFs: on the web just right click and choose Inspect Element right in the browser; for a PDF you'll have to use specialized tools to decode the myriad encoding schemes inside a PDF. The average JavaScript developer doesn't know how to extract JavaScript from a PDF.
While it works on Chromium it does not seem to work in xpdf, pdf.js nor any other pdf viewer that I tried. It seems that JS, flash, etc inside pdfs are proprietary, useless, and potentially dangerous adobe-only extensions that nobody else implements and nobody uses. It's just that Chromium happened to have a JS engine so they decided to let JS inside pdfs to be executed with a very constrained API (which can't do anything malicious probably - unless a bug is found in its JS engine).
My point still stands: I do not expect a pdf document to be able to do anything like djsumdog's link, and neither do most people. If a pdf document were able to do anything like that, I would consider that viewer broken.
> The average JavaScript developer doesn't know how to extract JavaScript from a PDF.
I would suspect it's sniffing for the window-size suddenly changing in certain ways, perhaps along with watching for the normal dev-tools keyboard shortcuts via keydown listeners. I can't actually remember off the top of my head if Chrome suppresses the event in that case, but it wouldn't be surprising if it didn't.
The easy check for this would be to see whether opening the devtools in detached mode via the menus makes it notice.
Note that this would actually be quite a pain to hide from the page, just because it's something the page needs to know to display some stuff and make rendering calculations. If it was hidden from the page, we'd suddenly be complaining about debugging floating footers which are hiding under the devtools.
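The window-size heuristic described above can be sketched as a pure check plus a polling loop (the 160px threshold is an arbitrary assumption; docked devtools shrink the inner viewport relative to the outer window):

```javascript
// Heuristic: when devtools are docked, window.innerWidth/innerHeight shrink
// while outerWidth/outerHeight stay the same, so a large gap on either axis
// suggests the panel is open. Detached devtools produce no gap, which is why
// the "open it in a separate window" test mentioned above defeats this.
function devtoolsLikelyOpen(outerW, innerW, outerH, innerH, threshold = 160) {
  return (outerW - innerW > threshold) || (outerH - innerH > threshold);
}

// Browser wiring (sketch):
// setInterval(() => {
//   if (devtoolsLikelyOpen(window.outerWidth, window.innerWidth,
//                          window.outerHeight, window.innerHeight)) {
//     cleanup(); // hide whatever the script was doing
//   }
// }, 500);
```

Note the false positives: sidebars, zoom levels, and scrollbar differences all shift these numbers, which is part of why this stays a heuristic.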
But there's different ways of opening dev tools. You can use the mouse to right click as well.
In fact, when using keyboard shortcuts in the browser these days, I have a habit of hitting Alt+D first to get to the address bar (Outlook likes to hijack browser shortcuts like Ctrl+N), in which case the website shouldn't know I'm pressing anything at all.
I think OP's point is you can listen for a bunch of signals: If you see the right keydown events, or if you see a contextmenu followed by a rapid change to innerWidth, they probably opened dev tools, and you should delete your evil cookies.
The person who figured this out probably opened them via menu and had dev tools in another window, so the evil folks couldn't detect the resize.
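A keydown-based detector along the lines discussed might look like this sketch (the shortcut list is the common Chrome/Firefox set; as noted above, opening devtools from the menu or in a detached window evades it entirely):

```javascript
// Matches the usual devtools shortcuts: F12, Ctrl+Shift+I/J/C on
// Windows/Linux, Cmd+Shift (via metaKey) variants on macOS.
function looksLikeDevtoolsShortcut(e) {
  if (e.key === "F12") return true;
  const mod = (e.ctrlKey || e.metaKey) && e.shiftKey;
  return mod && ["I", "J", "C"].includes(e.key.toUpperCase());
}

// Browser wiring (sketch):
// document.addEventListener("keydown", e => {
//   if (looksLikeDevtoolsShortcut(e)) cleanup();
// });
```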
If people would focus on content in HTML, and CSS for layout, and newfad devs weren't shoving 50 scripts per page down our throats, this wouldn't be as bad. I plan to make all my future sites LibreJS-compatible, and forgo JS altogether whenever possible.
I've been using uMatrix for over three years and very very few websites don't work with it.
By far the most annoying 'feature' (more of a consequence of the internet) is the way that you can't whitelist HTTPS sub-domains. (Makes AWS Console a pain.)
Does AWS Console have much that needs blocking? It seems a reasonable place to simply turn off uMatrix. You could also use a second browser with uBlock Origin for a limited number of sites - I tend to do that for a lot of checkout/payment flows, where the repeated reloads from whitelisting in uMatrix could be troublesome.
> I'm trying DuckDuckGo, but it's just not as good.
It took me a while of using DDG before I realized how much I had equated "good" with "looks like Google".
What finally hit me, hard, was that I switched the DDG theme to a color scheme designed to look like Google search results, and all of a sudden the results "felt better". Markedly better.
So, I started regularly comparing results every time I felt I didn't get what I expected. I would search, and then search again with !g. And almost every time I didn't like the DDG results, the Google results weren't actually any better. (The rare times they were, I reported it, and often it got fixed.)
I use "!s" in DDG to get to Startpage [1] search results. One letter shorter to access the same - no need to type "!sp" when one can just type "!s" instead. :)
This does not work for replacing "!spi" (image search on Startpage) though. Using "!si" does a search on a non-existent subdomain sportsillustrated.cnn.com.
My problem is not the looks, whenever I use !g I'm annoyed how bad it looks. It IS the results. Often when something doesn't have a ton of relevant results, DDG gives the wrong results while !g has what I'm looking for.
In my experience ddg is less contextual. For instance when searching for something that is trending, Google will often give the most expected results even if the search keywords are incredibly vague, while ddg will give more standard results respecting the keywords.
That’s the very reason I switched to ddg, but it bites me every time I try to catch up on stuff everybody has already known for days, and I need to fall back to !g.
Oh, I'm pretty sure that's why. Unless encrypted.google.com doesn't use that information? Anyway, I'd love to have some way to personalize my DDG results and tell them what to boost. Alas.
How often do you end up at Google, though? A few years back I tried DDG for a couple of months, but I ended up basically always doing !g and so went back to Google.
It depends on what I'm searching for. If it's an error code or something a bit esoteric around tech then google usually has better results, but for everyday topics the results are pretty decent, and they feel like they have improved over the years.
I probably do less searching than I used to, but the bang system really helps me search other sites much more effectively. The more common bangs I use are !w for wikipedia, !so for stackoverflow, !imdb, !gh, !tpb, !bm for bing maps, !bi bing images.
!sp for startpage is really nice to use if you want to have the Google results without the tracking.
Edit: looks like this was already mentioned below. Whoops.
I don't have any figures to back this up, but I feel like I use !g about half of the time (it could be 70% or 20% for all I know, since I have no data). I'm not free from Google then, but this reduced my dependency by about 50%, which is still worth taking!
I was in the same situation. Switched over again a few months back. I almost always only need to use !g for rare results (and even there it doesn't always help). On the other hand bangs like !mdn (mozilla developer network) and !w (wikipedia) make things much easier.
I use Firefox "awesomebar" keywords so "w" for Wikipedia, etc. (I think Chrome has this too), so I don't gain anything really with DDG's keywords on most searches.
I basically only use Google for movie listings, calculations, and restaurant hours now. The rare times I switch to Google for other searches DDG fails on, Google is no better.
I follow a three step model:
1. DDG
2. If that doesn't help, then use !s to search Startpage.
3. If that doesn't help, then use !g to search Google.
Other than the relevance and quality of search results, there are differences in features across these sites, like date range search, for example. Image search also seems to have more flexibility on Google. I haven't checked recently if reverse image search even exists on DDG or Startpage.
Same for me. I fired up Firefox for the first time in years and it works great. The hardest part to get used to is their super-weird hamburger menu layout.
The mobile menu? The mobile menu is a mess if you have extensions installed, because add-on developers don't put their stuff in the tools/page submenu, and there's no way to rearrange it.
> The purpose of Instart Logic technology is to disguise 3rd-party requests as 1st-party requests, thus bypassing content blockers, and even the ability of browsers to block 3rd-party cookies (because they are stored as 1st-party cookies):
And he has a list of sites that use WebRTC to get around blocking. Didn't Firefox or someone say they'd reconsider auto-enabling WebRTC if this practice became widespread?
"Web publishers make simple DNS changes to flow the network domains that carry their HTML through the Instart Logic system. This allows our system to inject a small piece of JavaScript that can detect the presence of ad blockers. When an ad blocker is detected, the JavaScript-based virtualization layer Nanovisor, together with our intelligent cloud-based, machine learning platform, encrypts and delivers all the elements of the page using the customer’s existing delivery services.
As a result, each resource on the page, and any signals and actions such as measurement beacons or user clicks, will have its URL encrypted and obscured. This renders ad blockers ineffective, as they can no longer search for patterns which would indicate a resource is related to advertising.
The result is simply the experience that the web publisher intended on delivering to the end user with no changes to the ad delivery or measurement systems; end users have no need to be aware the technology is even being used."
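The URL-obscuring step they describe could be sketched like this (base64url encoding here stands in for whatever real encryption their system uses, and the domains are made up). The point is that a filter-list pattern like `||ads.example^` no longer matches the rewritten URL, since every resource now lives under an opaque first-party path:

```javascript
// Rewrites a third-party resource URL into an opaque path on the publisher's
// own origin. A reverse proxy on that origin would decode the token and
// fetch the real resource server-side.
function obscureUrl(resourceUrl, firstPartyOrigin) {
  // Node's base64url; a browser-side equivalent would use btoa() plus
  // the usual URL-safe character substitutions.
  const token = Buffer.from(resourceUrl).toString("base64url");
  return `${firstPartyOrigin}/r/${token}`;
}

// obscureUrl("https://ads.example/tracker.js", "https://news.example")
// yields something like "https://news.example/r/aHR0cHM6Ly9hZHMu..."
// - nothing left for a blocklist pattern to latch onto.
```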
This is basically how viruses get around antivirus. I think the virus/antivirus arms race has been more or less won by viruses, so it looks like this is the end state of the adblock wars.
> Chromium-based browsers are being “infested” by Instart Logic tech which works around blockers and worst, around browser privacy settings (they may start “infecting” Firefox eventually, but that is not happening now).
Okay, so they have cookies on a subdomain of the main site's domain. But that in itself is not a problem, since it does not enable them to track me across websites. So I don't have a problem with that, unless they use other tricks. Are they using browser profiling or something similar to correlate across websites?
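For illustration, here's why such a cookie counts as first-party (all names here are hypothetical): a script served from a subdomain of the publisher's own domain can scope its cookie to the parent domain, so the browser treats it exactly like the site's own cookies.

```javascript
// Builds a cookie string for a tracker script served from e.g.
// cdn.news.example (a subdomain CNAME'd to the tracking vendor).
function buildTrackingCookie(visitorId) {
  // Domain=.news.example means every subdomain - including the publisher's
  // own pages - sends this cookie back, and third-party-cookie blocking
  // never applies because the cookie's domain matches the site being visited.
  return `_il_uid=${visitorId}; Domain=.news.example; Path=/; Max-Age=31536000; SameSite=Lax`;
}

// In the browser: document.cookie = buildTrackingCookie(crypto.randomUUID());
```

As the comment above notes, this by itself is scoped to one site; cross-site correlation would need something extra, like fingerprinting or server-side ID syncing.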
Hadn't heard about this, but this is why I still keep a long blacklist in my /etc/hosts file even though I also use uBlock Origin.
Out of habit I get mine from http://winhelp2002.mvps.org/hosts.htm. Not sure it's the best one, but I started using it years ago when I was still using Windows and have just stuck with it.
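For reference, entries in that style of hosts file look like this (hostnames here are placeholders): each unwanted host is mapped to an unroutable address so lookups resolve locally and the request never leaves the machine.

```
# /etc/hosts excerpt - 0.0.0.0 fails fast; some lists use 127.0.0.1 instead
0.0.0.0 ads.example.com
0.0.0.0 tracker.example.net
0.0.0.0 beacons.example.org
```

Note this blocks whole hostnames only, which is exactly why first-party disguising like Instart Logic's slips past it.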
I'm pretty sure there's a bug for this in the Chromium bug tracker that is hidden from the common folk, just like they hide their bug where websites can detect if you're in private mode(!).
"Chromium-based browsers are being “infested” by Instart Logic tech which works around blockers and worst, around browser privacy settings (they may start “infecting” Firefox eventually, but that is not happening now)."
From his linked post:
"Instart Logic will detect when the developer console opens, and cleanup everything then to hide what it does"
Is this implemented via a CDN-delivered script? Why would Chromium-based browsers be more susceptible?