📚 javadocs-scraper

A TypeScript library to scrape Java objects information from a Javadocs website.


Specifically, it scrapes data (name, description, URL, etc.) about packages, classes, interfaces, enums, annotations, methods, and fields, and links them together.

Some extra data is also calculated post-scraping, such as method and field inheritance.

Caution

Tested with Javadocs generated from Java 7 to Java 21. I cannot guarantee this will work with older or newer versions.

  1. Install with your preferred package manager:
npm install javadocs-scraper
yarn add javadocs-scraper
pnpm add javadocs-scraper
  2. Instantiate a Scraper:
import { Scraper } from 'javadocs-scraper';

// From an online URL:
const urlScraper = Scraper.fromURL('https://...');

// From a local path:
const pathScraper = Scraper.fromPath('./path/to/javadocs/index.html');
Note

This package uses constructor dependency injection for every class.

You can also instantiate Scraper with the new keyword, but you'll need to specify every dependency manually.

The easier way is to use the static fromX methods, which will use the default implementations.

Tip

Alternatively, you can provide your own Fetcher to fetch data from the Javadocs:

import type { Fetcher } from 'javadocs-scraper';

class MyFetcher implements Fetcher {
  /** ... */
}

const myFetcher = new MyFetcher('https://...');
const scraper = Scraper.with({ fetcher: myFetcher });
  3. Use the Scraper to scrape, and the resulting Javadocs to access the data:
const javadocs: Javadocs = await scraper.scrape();

/** for example */
const myInterface = javadocs.getInterface('org.example.Interface');
console.log(myInterface);
/**
 * {
 *   qualifiedName: 'org.example.Interface',
 *   package: { name: 'org.example', ... },
 *   url: 'https://.../Interface.html',
 *   description: { text: 'An example interface', html: '<p>An example interface</p>' },
 *   methods: Collection {...},
 *   fields: Collection {...},
 *   typeParameters: Collection {...},
 *   // and more data, check the docs!
 * }
 */
Tip

The Javadocs object uses discord.js' Collection class to store all the scraped data. This is an extension of Map with utility methods, like find(), reduce(), etc.

These collections are also typed as mutable, so any modification will be reflected in the backing Javadocs. This is by design, since the library no longer uses this object once it's given to you, and doesn't care what you then do with it.

Check the discord.js guide or the Collection docs for more info.
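
For example, here's a minimal sketch of querying a scraped interface's methods with Collection utilities. It assumes the method entities expose name and description fields like the interface shown above; check the docs for the exact entity shapes.

// continues from the usage example above
const getters = myInterface.methods.filter((method) => method.name.startsWith('get'));
const described = myInterface.methods.find((method) => method.description?.text.includes('example'));
console.log(getters.size, described?.name);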

  • Make sure not to spam a Javadocs website. It's your responsibility not to abuse the library and to implement appropriate safeguards, like a cache (see the sketch after this list).
  • The scrape() method will take a while to scrape the entire website. Make sure to only run it when necessary, ideally only once in the entire program's lifecycle.
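
One possible way to honor both points is a "scrape once, reuse everywhere" helper. The getJavadocs function below is a hypothetical sketch, not part of the library; it only relies on the Scraper.fromURL and scrape() calls shown above, and assumes the Javadocs type is exported by the package.

import { Scraper, type Javadocs } from 'javadocs-scraper';

// hypothetical module-level cache: the website is scraped at most once
let cachedJavadocs: Promise<Javadocs> | undefined;

export function getJavadocs(): Promise<Javadocs> {
  // the first call kicks off the scrape; later calls reuse the same promise
  cachedJavadocs ??= Scraper.fromURL('https://...').scrape();
  return cachedJavadocs;
}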

There are distinct types of objects that hold the library together:

  • A Fetcher¹, which makes requests to the Javadocs website.
  • Entities², which represent a scraped object.
  • QueryStrategies¹, which query the website through cheerio. Needed since HTML classes and ids change between Javadoc versions.
  • Scrapers¹, which scrape information from a given URL or cheerio object into a partial object.
  • Partials², which represent a partially scraped object, that is, an object without circular references to other objects.
  • A ScraperCache, which caches partial objects in memory.
  • Patchers¹, which patch partials into full entities by linking them together.
  • Javadocs, which is the final result of the scraping process.

¹ - Replaceable via constructor injection.

² - Only a type, not available at runtime.

The scraping process occurs in the following steps:

  1. A QueryStrategy is chosen by the QueryStrategyBundleFactory.
  2. The RootScraper iterates through every package in the Javadocs root.
  3. Each package is then fetched and passed to the PackageScraper.
  4. The PackageScraper iterates through every class, interface, enum and annotation in the package and passes them to the appropriate Scraper.
  5. Each scraper creates a partial object, and caches it in the ScraperCache.
  6. Once everything is done, the Scraper uses the Patchers to patch the partial objects together, by passing the cache to each patcher.
  7. The Scraper returns the patched objects, in a Javadocs object.
Tip

You can provide your own QueryStrategyBundleFactory to change the way the QueryStrategy is chosen.

import { OnlineFetcher, Scraper } from 'javadocs-scraper';
import type { CheerioAPI } from 'cheerio';

const scraper = Scraper.with({
  fetcher: new OnlineFetcher('https://...'), // or any other Fetcher implementation
  strategyBundleFactory: ($root: CheerioAPI) => { /** ... */ },
});

Query strategies help fetch data across Java versions without lengthy conditional code. The strategies don't actually know at runtime which Java version they're running against; each is designed to support multiple versions at once.

In particular, the library provides two strategy "types", which you are free to extend:

The legacy strategy covers Javadocs 8 to 15. Some of its queries resemble those of the modern strategy because Javadocs 13-15 are a mix of legacy and modern, but from testing they mostly match legacy.

Legacy Javadocs don't have a consistent structure, so this strategy has a couple of workarounds, hacks and pre-compiled regexes to extract the data correctly.
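
As a rough illustration of that kind of pre-compiled regex, here's a hypothetical pattern for pulling a type parameter clause out of a raw signature string; it is an assumption for illustration only, not the library's actual regex.

// hypothetical pre-compiled pattern, not taken from the library
const TYPE_PARAMETER_PATTERN = /<([A-Z]\w*(?:\s+extends\s+[\w.<>,\s]+)?)>/;

const signature = 'Collection<T extends Comparable<T>>';
const match = TYPE_PARAMETER_PATTERN.exec(signature);
console.log(match?.[1]); // 'T extends Comparable<T>'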

The modern strategy covers Javadocs 16 to the last supported version (21 at the time of writing). Some of its queries resemble those of the legacy strategy because Javadocs 16 are a mix of legacy and modern, but from testing they mostly match modern.

Modern Javadocs have a more consistent structure, with HTML classes and ids that are easy to query directly.
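
For instance, here's a minimal cheerio sketch of that kind of direct query. The HTML snippet and the selector are placeholders for illustration, not the markup or strategy the library actually uses.

import { load } from 'cheerio';

// placeholder markup standing in for a fragment of a modern Javadoc page
const $ = load('<section class="description"><p>An example interface</p></section>');

// modern pages can be queried directly by class or id
const descriptionHtml = $('section.description').html();
console.log(descriptionHtml); // '<p>An example interface</p>'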