My own browser extension

Creating your own browser extension can be a good way to take back control over your user agent, so that it works the way you want instead of the way they want. A lot can be accomplished with just basic JavaScript and DOM manipulation. In this post I will show some of the things I have added to my own extension, which I have been using for several years now.

Background

A lot of the modern web is quite user-hostile these days, and JavaScript is a big part of why. But JavaScript is a weapon that can be wielded both ways, and I realised some time ago that with a simple browser extension I could take back control over my browser and make sites behave the way I want.

Since I’m using Ungoogled Chromium, the code I present below has been tested on that browser, but it should work equally well in other flavors of Chrome. The code is also very simple, so it will almost certainly work in any other browser as well.

The manifest

A browser extension requires a manifest.json file. I won’t go into too many specifics here, but below is the basic version that I started with. The one I use is available in my extension repository on GitHub.

Google has a complete reference for manifest.json in the Chrome extension documentation.

{
  "manifest_version": 3,
  "name": "tedeh.net chrome extension",
  "description": "My personal selection of necessities",
  "version": "1.0",
  "permissions": [
    "webNavigation",
    "webRequest",
    "scripting",
    "storage",
    "tabs"
  ],
  "host_permissions": ["<all_urls>"],
  "content_scripts": [
    {
      "matches": ["https://*/*"],
      "js": [
        "scripts.js"
      ]
    }
  ]
}

Note that the most important thing is probably the permissions required: I want all permissions for all sites. In the example I’m just loading scripts.js on all matching sites, but in the real extension I take care to load only specific scripts on specific sites, mostly for tidiness and to prevent future conflicts.

The only thing required is to put manifest.json and scripts.js into a folder, and then load the extension “unpacked” into Chromium from the chrome://extensions page. Whenever you change something, the extension needs to be reloaded manually on that page.

My specific scripts

Jacking into the DOM is quite simple once you have got this far. Here I will go through the most interesting content scripts in my own extension and how they work.

Automatically redirecting to another site

Sometimes you just want to get away from wherever you have ended up on the web. A typical example is following a link to Reddit (the main site) or Twitter: you just want to be redirected to a less user-hostile place. For example, here is how I redirect to old.reddit.com whenever I end up on www.reddit.com:

// Rewrite the current location from www.reddit.com to old.reddit.com
const url = new URL(window.location);
if (url.host === 'www.reddit.com') {
  url.host = 'old.reddit.com';
  // replace() avoids adding the www. URL to the session history
  window.location.replace(url.toString());
}

I enable the above script only on the Reddit domain, so the content_scripts entry in manifest.json for this script looks like this:

"content_scripts": [
  {
    "matches": ["https://www.reddit.com/*"],
    "js": ["reddit.js"]
  }
]

I’ve also made a similar script for Twitter that redirects to https://nitter.privacydev.net/ automatically. Nitter is basically a front-end to Twitter without all the ads and tracking, and it does not require a login. Privacydev.net has a hosted instance of Nitter that has been working well for a long time, up until recently.
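A minimal sketch of what that Twitter script could look like, analogous to the Reddit one above (the exact hostname handling here is my assumption; the real script is in the repository):

// twitter.js – redirect to the Nitter instance, assuming Nitter mirrors Twitter's URL paths
const url = new URL(window.location);
if (url.host === 'twitter.com' || url.host === 'www.twitter.com') {
  url.host = 'nitter.privacydev.net';
  window.location.replace(url.toString());
}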

Unfortunately, Nitter is getting harder, if not impossible, to run: Twitter has more or less disabled third-party clients and front-ends nowadays. The idea behind that is to get people to log in to the main site so that they can be exposed to ads. It just made me use Twitter less, or not at all. I don’t even care that it is called X now. Good riddance!

Rewriting URLs

Plenty of garbage gets put into URLs that has no business being there, so I have made some scripts that rewrite URLs to remove it. For example, here is how I rewrite Google AMP links to strip the AMP part:

const sel = 'a[href*="google.com/amp/s/"]';
const anchors = Array.from(document.querySelectorAll(sel));
anchors.forEach(function (anchor) {
  // Everything after "google.com/amp/s/" is the real target URL, minus its scheme
  const realTargetUrl = anchor.href.split('google.com/amp/s/')[1];
  anchor.href = 'https://' + realTargetUrl;
});

Some more things that could probably be done: removing utm, gclid and other garbage from the query string. I just haven’t gotten around to implementing that yet, but a sketch of how it might look follows below.
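A minimal sketch, where the list of parameters is my own guess at the usual offenders:

// Strip common tracking parameters from every link on the page
const trackingParams = ['utm_source', 'utm_medium', 'utm_campaign', 'utm_term', 'utm_content', 'gclid', 'fbclid'];
document.querySelectorAll('a[href]').forEach(anchor => {
  const url = new URL(anchor.href, window.location.href);
  let changed = false;
  trackingParams.forEach(param => {
    if (url.searchParams.has(param)) {
      url.searchParams.delete(param);
      changed = true;
    }
  });
  if (changed) anchor.href = url.toString();
});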

Overriding specific CSS rules (preventing overflow: hidden)

An astute observer will notice that this method of rewriting links does not work for links that are dynamically written to the DOM after it has loaded. For links this has not been a problem that I’ve observed, hence I haven’t done anything about it (though a sketch of how it could be handled follows below). But having the ability to respond to DOM updates turned out to be crucial when I set out to disable the overflow: hidden; CSS rule on the <body> tag. Setting that rule is something usually done by cookie compliance pop-ups and other kinds of annoyances, and its purpose is of course to prevent me from scrolling the page.
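If dynamically added links ever do become a problem, a MutationObserver could re-run the rewrite whenever nodes are added. A minimal sketch, reusing the AMP example from above (not something my extension currently does):

// Re-run the AMP rewrite whenever new nodes are added anywhere in the document
function rewriteAmpLinks () {
  document.querySelectorAll('a[href*="google.com/amp/s/"]').forEach(anchor => {
    anchor.href = 'https://' + anchor.href.split('google.com/amp/s/')[1];
  });
}

const linkObserver = new MutationObserver(rewriteAmpLinks);
linkObserver.observe(document.body, { childList: true, subtree: true });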

Now the overflow example gets more involved, and we again have to use the MutationObserver class, this time to listen for any updates that could affect the CSS rules.

const selector = 'html,body';
const observeEls = Array.from(document.querySelectorAll(selector));

const observer = new MutationObserver(mutationList => {
  for (const mutation of mutationList) {
    const { target } = mutation;
    // computedStyleMap() is part of the CSS Typed OM and currently Chromium-only
    const styleMap = target.computedStyleMap();
    const overflowValue = styleMap.get('overflow')?.toString();
    if (overflowValue === 'hidden') {
      target.style.setProperty('overflow', 'auto', 'important');
    }
  }
});

observeEls.forEach(el => {
  observer.observe(el, {
    attributes: true,
    attributeFilter: ['style', 'class'],
  });
});

We use the attribute filter to listen for changes to style and class on <html> and <body>. We’re not too concerned with exactly how they mutate: when the observer callback runs, we just ensure that any overflow value of hidden immediately gets overridden to auto !important.

Prevent text from being added to what is copied

Some websites add stuff to the selected string when you are trying to copy text. Usually it is a link back to the source, and perhaps even a notice that the text is copyrighted. We don’t want anything to be added at all, for any reason. Letting that happen just makes the browser their agent instead of mine.

I don’t encounter this “feature” often, so I opted for the most basic implementation I could think of that worked on the sites I cared about: adding my own copy event handler on the <body> tag. The handler just stops the event from propagating further:

function listener (ev) {
  ev.stopPropagation();
  // simple as that.
}

document.body.addEventListener('copy', listener, {
  capture: true,
});

Note that the copy listener is added in the capture phase, which ensures that it is one of the first handlers to run when something is copied. Initially I expected to be able to get a list of all event handlers added to the body so that I could just remove them, but it turns out that is not actually possible, so I was forced to add my own handler and hope that it runs first. This has proved to work well, granted that I have tested it on very few websites.
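If stopping propagation ever turns out not to be enough, a more forceful variant could write the plain selection to the clipboard itself and cancel the default copy, overwriting whatever any earlier handler may have set. A sketch, which I have not tested widely:

function forcePlainCopy (ev) {
  // Put the raw selection in the clipboard and cancel the default copy,
  // overriding any text/plain data an earlier handler may have set
  const selection = document.getSelection().toString();
  ev.clipboardData.setData('text/plain', selection);
  ev.preventDefault();
}

document.addEventListener('copy', forcePlainCopy, { capture: true });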

Repository and further ideas

I have published the repository for my browser extension on GitHub: https://github.com/tedeh/chrome-extension. The README.md file lists some further ideas I’m looking to implement.