varun.surf πŸ„

kite spots database and weather forecast for kitesurfers on the web

see it online at: https://varun.surf

screenshot

tech stack overview

  • infra: Docker, Docker Compose, Nginx, GitHub, GitHub Actions, GHCR, Cloudflare, SeoHost, Mikrus
  • backend: Java, Spring Boot, Gradle
  • frontend (bundled with the backend app): Vanilla JS, HTML, CSS, Bun

building

./gradlew build

running

./gradlew bootRun

testing

unit testing:

./gradlew test

e2e testing:

./gradlew testE2e

e2e testing with visible browser:

./gradlew testE2eNoHeadless

docker

docker build -t varun-surf .
docker run -p 8080:8080 varun-surf

docker compose (local)

./deployment.sh dev

for prod setup, check continuous delivery and zero-downtime deployment sections.

docker container registry

the docker image is automatically published to the registry at ghcr.io via the docker.yml GitHub Action on every push to the master branch

  • configure a PAT (Personal Access Token) here: https://github.com/settings/tokens
  • set permissions: write:packages, read:packages
  • remember to regenerate the token once it expires
  • copy your access token to the clipboard

now, log in to the docker registry:

PAT=YOUR_ACCESS_TOKEN
echo $PAT | docker login ghcr.io -u pwittchen --password-stdin

pull image and run the container:

docker pull ghcr.io/pwittchen/varun.surf
docker run -p 8080:8080 ghcr.io/pwittchen/varun.surf:latest

continuous integration

After each push to master or to a PR, a new build with tests and a test coverage report is triggered automatically via the ci.yml and coverage.yml GitHub Actions.

continuous delivery

After each push of a tag with the v prefix, the cd.yml GitHub Action is triggered and deploys the latest version of the app to the VPS.
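Under that convention, triggering a deployment boils down to pushing a v-prefixed tag. A quick sanity check on the tag name before pushing can be sketched as follows (the semver regex is an assumption; cd.yml may accept any tag starting with v):

```shell
#!/bin/sh
# sanity-check a release tag before pushing it (the semver regex is an
# assumption; cd.yml may accept any tag starting with "v")
TAG="v1.2.3"
if echo "$TAG" | grep -Eq '^v[0-9]+\.[0-9]+\.[0-9]+$'; then
  echo "tag ok: $TAG"
  # git tag "$TAG" && git push origin "$TAG"  # this push triggers cd.yml
else
  echo "invalid tag: $TAG" >&2
  exit 1
fi
```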

zero-downtime deployment

Deployment of the app is configured with bash, Docker, and Docker Compose scripts. With these scripts, we can perform a zero-downtime (blue/green) deployment with an nginx server as a proxy. To do that, follow the instructions below.

  • Copy the deployment.sh, docker-compose.prod.yml, .env, and ./nginx/nginx.conf files to a single directory on the VPS.
  • In the deployment.sh and docker-compose.prod.yml files, adjust server paths if needed.
  • In the .env file, configure the environment variables based on the .env.example file.
  • Run the ./deployment.sh prod script to deploy the app with the nginx proxy.
  • Run the same command again to perform an update with zero downtime and the latest docker image.
  • If you want to test the deployment locally, run the ./deployment.sh dev script.
  • To stop everything, run: docker stop varun-app-blue-live varun-app-green-live varun-nginx
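The color flip at the heart of the blue/green switch can be sketched roughly like this (a simplified illustration of the idea; the actual logic lives in deployment.sh and may differ):

```shell
#!/bin/sh
# simplified sketch of a blue/green flip: determine which color is live,
# start the other one, then point nginx at it (illustration only; the
# real deployment.sh may differ)
current="blue"                     # in practice, detected e.g. via `docker ps`
if [ "$current" = "blue" ]; then
  next="green"
else
  next="blue"
fi
echo "starting varun-app-$next-live"
# docker compose -f docker-compose.prod.yml up -d "app-$next"
# ...wait for the health check, reload the nginx upstream, then:
echo "retiring varun-app-$current-live"
# docker stop "varun-app-$current-live"
```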

monitoring

We can view the system status by visiting the /status page.

actuator metrics

We can enable application and JVM metrics in the application.yml file and then view them via the /actuator/prometheus endpoint.
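A minimal application.yml fragment for exposing the Prometheus endpoint could look like this (property names follow standard Spring Boot Actuator conventions; the project's actual configuration keys may differ):

```yaml
# hypothetical fragment: standard Spring Boot Actuator keys, not
# necessarily the exact ones used by this project
management:
  endpoints:
    web:
      exposure:
        include: health,prometheus
```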

built-in metrics dashboard

The app includes a custom metrics dashboard at /metrics that displays:

  • Application gauges: total spots, countries, active live stations, cache sizes, last fetch timestamps
  • Fetch counters: forecast/conditions/AI fetch totals, successes, and failures
  • API request counters: spots and single spot endpoint request counts
  • Timers: forecast, conditions, and AI fetch durations (count, total time, mean, max)
  • JVM metrics: heap/non-heap memory usage, thread counts, GC pause stats, CPU usage, uptime
  • HTTP client metrics: active/total/success/failed requests, connection stats, DNS/connect durations
  • Wide/narrow view toggle: expand to full width for better readability

built-in logs dashboard

The app includes a logs dashboard at /logs that displays:

  • Real-time application logs with auto-refresh every 5 seconds
  • Level filtering: filter by ERROR, WARN, INFO, DEBUG, or TRACE
  • Text search: search through log messages, logger names, and thread names
  • Wide/narrow view toggle: expand to full width for better readability
  • In-memory buffer: stores the last 1000 log entries (oldest logs are evicted when buffer is full)

Note: Logs are stored in memory only and are lost on application restart.

Configuration:

Both metrics and logs dashboards share the same credentials. Set your password in the .env file:

ANALYTICS_PASSWORD=your-secure-password

ai forecast analysis

It's possible to enable AI/LLM features in the app, so that the forecast for each spot gets an AI-generated comment. If you want to use AI in the app, configure the OpenAI API key in application.yml.

An example docker command to run the app with AI analysis enabled:

docker run -p 8080:8080 varun-surf \
    --app.feature.ai.forecast.analysis.enabled=true \
    --spring.ai.openai.api-key=your-api-key-here

NOTE: I added this feature as an experiment, but it does not add much value to this particular project, so it is disabled by default. Interestingly, performing 74 calls to OpenAI with the gpt-4o-mini model used around 31k tokens and cost $0.01, so if I triggered AI analysis for my current configuration every six hours (4 times per day = 120 runs in 30 days = 8880 requests per month), I'd spend around $1.20 (~4.35 PLN) on monthly OpenAI usage, which is a reasonable price, since a coffee in my local coffee shop costs more. Nevertheless, more advanced analysis, more tokens, or a stronger model would increase the price.
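The cost estimate above can be reproduced with simple arithmetic (numbers taken from the note: roughly $0.01 per batch of 74 calls, one batch every six hours):

```shell
#!/bin/sh
# reproduce the monthly cost estimate from the note above
batches_per_day=4        # one AI analysis batch every 6 hours
days=30
cents_per_batch=1        # 74 gpt-4o-mini calls ~= $0.01
total_cents=$((batches_per_day * days * cents_per_batch))
printf 'runs per month: %d\n' $((batches_per_day * days))           # 120
printf 'requests per month: %d\n' $((batches_per_day * days * 74))  # 8880
printf 'monthly cost: $%d.%02d\n' $((total_cents / 100)) $((total_cents % 100))
```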

architecture

ai coding agents configuration

custom agent triggers

The project includes specialized Claude Code agents that can be triggered using shortcuts:

Trigger | Agent | Purpose
@new-kite-spot [location] | kite-spot-creator | Research and add a new kite spot to spots.json
@new-weather-station [url] | weather-station-strategy | Create a new weather station integration strategy
@debug-api [target] | api-debugger | Diagnose issues with external APIs (Windguru, weather stations, maps)
@e2e-test [feature] | e2e-test-writer | Write E2E tests for features using Playwright
@review [file/feature] | code-reviewer | General code review for quality, bugs, and best practices
@arch [topic] | arch-analyzer | System architecture analysis, dependencies, and design patterns
@security [target] | security-auditor | Security vulnerability assessment and OWASP compliance
@perf [target] | perf-analyzer | Performance analysis for speed, memory, and resource optimization
@async [target] | async-reviewer | WebFlux/Reactor patterns, Virtual Threads, and concurrency review

Examples:

@new-kite-spot Tarifa, Spain
@new-weather-station https://holfuy.com/en/weather/1234
@debug-api windguru spot 48009
@e2e-test favorites feature
@review AggregatorService
@arch data flow from API to caching
@security check input validation in controllers
@perf analyze caching efficiency
@async review StructuredTaskScope in AggregatorService

Agent definitions are located in .claude/agents/.

Remember that you can also trigger agents with natural language, according to Claude Code guidelines.

custom skills

The project includes Claude Code skills that can be invoked as slash commands. Skills are lightweight, focused tasks that run directly in the conversation.

How to use:

  1. Type the slash command in Claude Code (e.g., /check-spots)
  2. For skills with arguments, add them after the command (e.g., /explain caching flow)
  3. Skills run immediately and return a structured report

Examples:

/check-spots
/explain caching flow
Command | Purpose
/check-spots | Validate spots.json for missing fields, invalid URLs, duplicates, and data consistency
/check-live-stations | Analyze live weather station integrations, test data sources, identify spots without live data
/explain [topic] | Explain data flows, features, and code paths with visual diagrams and step-by-step breakdowns
/review [target] | Quick code review for files or git changes, checking for bugs and best practices
/audit-security | Security audit for secrets, SSRF, injection points, dependencies, and headers
/check-deps | Analyze Gradle dependencies for outdated versions, CVEs, conflicts, and bloat
/profile-blocking | Find blocking calls in reactive WebFlux code that cause thread starvation
/check-concurrency | Find race conditions, deadlocks, unsafe shared state, and synchronization issues
/arch-check | Verify architecture health: layer violations, circular deps, design patterns
/check-errors | Find error handling gaps: swallowed exceptions, missing handlers, resource leaks

Skill definitions are located in .claude/skills/.

features

  • showing all kite spots with forecasts and live conditions on a single page, without switching between tabs or windows
  • browsing forecasts for multiple kite spots
  • browsing all kite spots on the map (OpenStreetMap)
  • watching live wind conditions at the selected spots
  • refreshing live wind every minute on the backend (requires page refresh on the frontend)
  • refreshing forecasts every 3 hours in the backend (requires page refresh on the frontend)
  • browsing spot details such as the description, windguru, windfinder, and ICM forecast links, location, and webcam
  • filtering spots by country
  • searching spots
  • possibility to add spots to favorites
  • organizing spots in a custom order with a drag-and-drop mechanism
  • dark/light theme
  • possibility to switch between a list view and a grid view
  • mobile-friendly UI
  • kite and board size calculator
  • AI forecast analysis
  • single spot view with hourly forecast (in horizontal and vertical view)
  • additional TV-friendly view for the single spot
  • map of the spot (provided by OpenStreetMap and Windy)
  • link to the navigation app (Google Maps)
  • displaying a photo of the spot (if available)
  • dynamic weather forecast model selector (40+ Windguru models, auto-discovered per spot)
  • embeddable HTML widget with current conditions and forecast for the spot
  • session cookie authentication for API access (prevents direct API scraping without visiting the site)
  • hero section with random spot photo, name/location, and slogan in PL and EN
  • automatic language detection from browser settings
  • stale live conditions indicators (yellow for outdated data)
  • fallback weather station mechanism (automatic switch when primary returns stale data)
  • LLM-friendly Markdown endpoints at /llms/*.md (public, no session cookie) for AI crawlers and agents

llm-friendly markdown endpoints

The app exposes a set of public Markdown documents under /llms/*.md for LLMs, AI crawlers and agents. These endpoints are not gated by the session cookie (unlike /api/v1/**) and are linked from /llms.txt.

Endpoint | Description
GET /llms/spots.md | Index of all kite spots with links to per-spot documents and a list of countries
GET /llms/spots/{wgId}.md | Full spot document: overview, current conditions (when available), daily/hourly forecast, links
GET /llms/countries.md | Index of all countries with spot counts
GET /llms/countries/{slug}.md | Spots available in the given country

The country {slug} is the lowercased country name with spaces replaced by hyphens (e.g. poland, czech-republic). All responses are served as text/markdown; charset=UTF-8 and use the same in-memory caches as the JSON API.
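The slug rule can be sketched in shell (an illustration of the stated convention, not the server's actual implementation):

```shell
#!/bin/sh
# lowercase the country name and replace spaces with hyphens,
# per the slug convention described above
slugify() {
  printf '%s' "$1" | tr '[:upper:]' '[:lower:]' | tr ' ' '-'
}
slugify "Poland"; echo            # poland
slugify "Czech Republic"; echo    # czech-republic
```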
