Update Site Statistics

(Interactive chart: select an update site, an optional comparison site (+ overlay or % ratio), a time window (daily, daily 7-day average, monthly, yearly, or cumulative), and a count type (unique IPs or total checks).)

Analyzing the data yourself

You can download the raw data directly from individual update sites. Each site publishes eight statistics files, and a server-wide sites.json index provides metadata for all sites:

Data Types:

  • stats-unique-daily.txt.gz - Unique IP addresses per day
  • stats-total-daily.txt.gz - Total update checks per day
  • stats-unique-monthly.txt.gz - Unique IP addresses per month
  • stats-total-monthly.txt.gz - Total update checks per month
  • stats-unique-yearly.txt.gz - Unique IP addresses per year
  • stats-total-yearly.txt.gz - Total update checks per year
  • stats-unique-ever.txt.gz - Cumulative unique IP addresses by day
  • stats-total-ever.txt.gz - Cumulative total checks by day
  • sites.json - Metadata index with site list and summary statistics

URL Format: https://sites.imagej.net/{SITE_NAME}/{STATS_FILE}

Sites Index: https://sites.imagej.net/sites.json

Example URLs (using the Java-8 site and the URL format above):

  • https://sites.imagej.net/Java-8/stats-unique-daily.txt.gz
  • https://sites.imagej.net/Java-8/stats-total-yearly.txt.gz

Data Format: Each line contains a datestamp and count value separated by a space:

20250723 458
20250724 672
20250725 543

Date formats:

  • Daily/Ever: YYYYMMDD (e.g., 20250723)
  • Monthly: YYYYMM (e.g., 202507)
  • Yearly: YYYY (e.g., 2025)
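The three datestamp formats can be told apart by length alone, so a single helper can parse any of them. The `parse_datestamp` function below is a sketch of our own (it is not part of any ImageJ tooling):

```python
from datetime import datetime

def parse_datestamp(stamp):
    """Parse a stats datestamp of any granularity into a datetime.

    Illustrative helper only. Missing fields default to the start of
    the period (month 1 and/or day 1).
    """
    formats = {
        8: '%Y%m%d',  # daily / ever: YYYYMMDD
        6: '%Y%m',    # monthly: YYYYMM
        4: '%Y',      # yearly: YYYY
    }
    return datetime.strptime(stamp, formats[len(stamp)])

print(parse_datestamp('20250723'))  # daily stamp
print(parse_datestamp('202507'))    # monthly stamp
print(parse_datestamp('2025'))      # yearly stamp
```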

Sites Metadata Format: The sites.json file contains metadata for each update site:

{
  "Java-8": {
    "date_range": { "start": "20151220", "end": "20250904" },
    "total_unique_ips": 12534,
    "total_requests": 89472,
    "days_with_data": 3546,
    "last_generated": "2025-09-05"
  }
}
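Once sites.json is loaded, aggregate figures fall out of a simple loop over its values. The `summarize_sites` helper below is a hypothetical sketch (the name and the aggregation are ours, not part of the service), shown against the Java-8 record from the documentation:

```python
def summarize_sites(sites):
    """Aggregate summary numbers across a loaded sites.json dict.

    Hypothetical helper: assumes each entry carries the
    total_unique_ips and total_requests fields documented above.
    """
    return {
        'site_count': len(sites),
        'total_unique_ips': sum(s['total_unique_ips'] for s in sites.values()),
        'total_requests': sum(s['total_requests'] for s in sites.values()),
    }

# Sample input mirroring the documented Java-8 record:
sample = {
    "Java-8": {
        "date_range": {"start": "20151220", "end": "20250904"},
        "total_unique_ips": 12534,
        "total_requests": 89472,
        "days_with_data": 3546,
        "last_generated": "2025-09-05",
    }
}
print(summarize_sites(sample))
```

Note that summing total_unique_ips across sites over-counts users who are active on more than one site, so the aggregate is a rough upper bound rather than a true distinct-user count.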

Here is an example Python script that fetches and summarizes yearly unique-IP statistics for a specific site:

import gzip
import urllib.request

def fetch_yearly_stats(site_name, stat_type='total'):
    """Fetch and parse yearly statistics for a site."""
    url = f'https://sites.imagej.net/{site_name}/stats-{stat_type}-yearly.txt.gz'
    with urllib.request.urlopen(url) as response:
        with gzip.open(response, 'rt') as f:
            data = {}
            for line in f:
                if line.strip():
                    year, count = line.strip().split()
                    data[int(year)] = int(count)
            return data

# Example: Get Java-8 yearly download statistics
site = 'Java-8'
yearly_stats = fetch_yearly_stats(site, 'unique')
print(f"Yearly unique IP statistics for {site}:")
for year in sorted(yearly_stats.keys()):
    print(f"  {year}: {yearly_stats[year]:,} unique IPs")

# Find the most popular year
best_year = max(yearly_stats.keys(), key=lambda y: yearly_stats[y])
print(f"\nBest year: {best_year} with {yearly_stats[best_year]:,} unique IPs")

And another example to list all sites sorted by total unique IPs:

import json
import urllib.request

with urllib.request.urlopen('https://sites.imagej.net/sites.json') as response:
    sites_data = json.load(response)

# Sort sites by total unique IPs
ranked_sites = sorted(
    sites_data.items(),
    key=lambda x: x[1]['total_unique_ips'],
    reverse=True
)

print("Sites ranked by total unique IPs:")
for site_name, metadata in ranked_sites:
    print(f"  {site_name}: {metadata['total_unique_ips']:,} unique IPs "
          f"({metadata['days_with_data']} days of data)")
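The "Daily (7-day avg)" view from the interactive chart can also be reproduced offline from a daily stats file. The `rolling_average` helper below is our own sketch; it assumes the parsed {datestamp: count} dict shape used in the scripts above and treats the datestamps as contiguous days:

```python
def rolling_average(daily_counts, window=7):
    """Trailing moving average over a {datestamp: count} dict, e.g. to
    smooth daily stats the way a 7-day-average view would.

    Illustrative sketch: sorts datestamps lexically (valid for the
    YYYYMMDD format) and averages over at most `window` trailing entries.
    """
    dates = sorted(daily_counts)
    averaged = {}
    for i, d in enumerate(dates):
        span = dates[max(0, i - window + 1):i + 1]
        averaged[d] = sum(daily_counts[x] for x in span) / len(span)
    return averaged

# Smoothing the sample lines from the data-format section:
counts = {'20250723': 458, '20250724': 672, '20250725': 543}
print(rolling_average(counts))
```

Early entries are averaged over however many days are available, so the first value equals the raw count; a real analysis might instead drop the first window-1 days.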