kinami

Drone Tracker

Turning Sightings Into Positions
The Problem

In September 2025, unidentified drones flew over Copenhagen Airport, Aalborg Air Base, and other critical sites across Denmark. The police opened phone lines for the public to call in sightings. What they got was thousands of reports — many of which turned out to be stars, aircraft, or birds. No way to verify, no way to locate, no way to correlate.

The Idea

Drone Tracker is a mobile app. You see something in the sky, you point your phone at it and tap. The app captures a photo along with everything your phone already knows — GPS position, compass bearing, camera angle, field of view. That's one observation. When multiple people do this for the same object, the system triangulates a real position in 3D space.
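
Concretely, one tap could produce a record along these lines. This is a minimal sketch in Python with illustrative field names, not the app's actual schema:

    from dataclasses import dataclass

    @dataclass
    class Sighting:
        # Where the observer stands (GPS)
        lat: float        # degrees
        lon: float        # degrees
        alt: float        # metres above sea level
        # Where the phone points (compass + motion sensors)
        bearing: float    # degrees clockwise from north
        pitch: float      # degrees above the horizon
        # What the camera sees
        fov_deg: float    # horizontal field of view, degrees
        photo_path: str   # the captured photo
        timestamp: float  # unix time, seconds

Position plus bearing plus pitch is enough to cast a ray from the observer toward the object; the field of view bounds how precisely a single tap pins the direction.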

No phone calls. No guesswork. Just structured data that gets better with every observer.

[Diagram: observers A, B, and C triangulating an object's altitude]
How It Works
01
Point and Report
Aim your phone at the object, take a photo. The app records your location, the direction you're pointing, and the camera's angle — everything needed to draw a line from you to the object.
02
Triangulate
Two or more sightings of the same object from different positions give intersecting lines. Where they meet is a 3D coordinate: latitude, longitude, altitude (sketched in code after these steps).
03
Track
Successive triangulations over time produce a flight path. Authorities access live data through a REST API — positions, trajectories, confidence levels, and imagery.
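
Under the hood, each report defines a ray: start at the observer's position, head along the compass bearing, tilted up by the camera pitch. Below is a minimal sketch of the triangulation in Python, restating just the geometric fields of the sighting record above; the flat-earth local frame and the least-squares solve are illustrative assumptions, not the app's actual implementation.

    import numpy as np
    from dataclasses import dataclass

    EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

    @dataclass
    class Sighting:
        lat: float      # observer latitude, degrees
        lon: float      # observer longitude, degrees
        alt: float      # observer altitude, metres
        bearing: float  # compass bearing to object, degrees clockwise from north
        pitch: float    # camera tilt above the horizon, degrees

    def position_enu(s: Sighting, origin: Sighting) -> np.ndarray:
        """Observer position in a local east-north-up frame (flat-earth approximation)."""
        east = np.radians(s.lon - origin.lon) * EARTH_RADIUS_M * np.cos(np.radians(origin.lat))
        north = np.radians(s.lat - origin.lat) * EARTH_RADIUS_M
        return np.array([east, north, s.alt - origin.alt])

    def ray_direction(s: Sighting) -> np.ndarray:
        """Unit vector built from compass bearing and camera pitch."""
        b, p = np.radians(s.bearing), np.radians(s.pitch)
        return np.array([np.sin(b) * np.cos(p), np.cos(b) * np.cos(p), np.sin(p)])

    def triangulate(sightings: list[Sighting]) -> np.ndarray:
        """Least-squares point nearest to every sighting ray.

        Each ray is p + t*d with unit direction d. Minimising the summed
        squared distance from a point x to all rays gives the linear system
        sum_i (I - d_i d_i^T) x = sum_i (I - d_i d_i^T) p_i.
        """
        origin = sightings[0]
        A, b = np.zeros((3, 3)), np.zeros(3)
        for s in sightings:
            p, d = position_enu(s, origin), ray_direction(s)
            M = np.eye(3) - np.outer(d, d)  # projects onto the plane normal to the ray
            A += M
            b += M @ p
        return np.linalg.solve(A, b)  # east, north, up in metres from sightings[0]

With two observers a few hundred metres apart, the east-north-up result converts back to latitude, longitude, and altitude, which is the kind of position record a REST endpoint could then serve alongside trajectory, confidence, and imagery.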
App Store Beta

Audioreactive

Visuals That Listen
Concept

A visual instrument that listens to music and generates art in real time. The system picks up what's playing: the weight of a kick drum, the shape of a melody, the shimmer of a hi-hat, and turns it into a continuous stream of images that look the way the music feels.

Built for live settings. Plug it into a venue's sound system or a DJ mixer, and the visuals unfold on stage alongside the music. No pre-rendered loops, no manual triggering: the art responds to every moment as it happens.

How It Works
01
Listen
A microphone or audio feed captures what's playing. The system breaks the sound down into its musical components — rhythm, pitch, energy — thirty times per second.
02
Interpret
Those musical features steer a generative model through a vast space of possible images. Low frequencies move the big shapes, mids shift the composition, highs add texture and detail (see the sketch after these steps).
03
Generate
The model produces a new image every frame. The result is a fluid, never-repeating visual stream that's musically coherent — it doesn't just react to beats, it follows the music.
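
Here is a minimal sketch of the listen-and-interpret loop in Python, assuming a 44.1 kHz feed sliced into roughly thirty frames per second; the band splits, gains, and generator interface are illustrative stand-ins, not the actual pipeline.

    import numpy as np

    SAMPLE_RATE = 44_100
    FRAME = SAMPLE_RATE // 30  # ~30 analysis frames per second

    def band_energies(samples: np.ndarray) -> tuple[float, float, float]:
        """Split one audio frame into low/mid/high energy via an FFT."""
        spectrum = np.abs(np.fft.rfft(samples * np.hanning(len(samples))))
        freqs = np.fft.rfftfreq(len(samples), d=1 / SAMPLE_RATE)
        low = spectrum[freqs < 200].sum()                      # kick, bass
        mid = spectrum[(freqs >= 200) & (freqs < 2000)].sum()  # melody, voice
        high = spectrum[freqs >= 2000].sum()                   # hats, shimmer
        return low, mid, high

    def step_latent(z: np.ndarray, low: float, mid: float, high: float,
                    rng: np.random.Generator) -> np.ndarray:
        """Nudge the generator's latent vector: lows push the coarse dims,
        mids the middle dims, highs the fine-detail dims."""
        n = len(z)
        drift = rng.standard_normal(n)
        gains = np.concatenate([
            np.full(n // 3, low),              # coarse structure
            np.full(n // 3, mid),              # composition
            np.full(n - 2 * (n // 3), high),   # texture and detail
        ])
        z = z + 0.01 * gains * drift
        return z / max(np.linalg.norm(z) / np.sqrt(n), 1.0)  # keep z in range

    # Per-frame loop (audio capture and image model are hypothetical stand-ins):
    # samples = capture_frame(FRAME)        # hypothetical audio input
    # z = step_latent(z, *band_energies(samples), rng)
    # frame = generator(z)                  # hypothetical image model

Mapping low, mid, and high energy onto coarse, middle, and fine latent dimensions is one simple way to let bass move the big shapes while hi-hats only ripple the texture; a real pipeline would also smooth the energies over time so the image flows instead of flickering.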
Built For
Live performance
Real-time generation
Any-genre audio source
Project 001
Drone Tracker
Crowdsourced airspace observation
Project 002
Audioreactive
Live generative visuals from audio
Contact
Get in Touch
inquiries@kinami.studio