A 48-hour online hackathon for engineers building with the perception layer. Index any video, search everything, ship a working agent — in one weekend.
Use the VideoDB SDK to ingest media, index it with AI, and expose perception to your agent. Anything goes — surveillance assistants, video search, sports highlights, lecture recall, live event copilots.
Pipe in live streams, uploaded files, RTSP feeds, YouTube links — any continuous media source.
Spoken-word, scene, and visual indexes. Multimodal search across everything you've ingested.
Compose clips, fire events, and let your agent respond. Wire it into anything: Slack, the web, a phone.
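The ingest → index → search loop above can be sketched in a few lines. This is a hedged quickstart, not a reference: it assumes `pip install videodb` and an API key in `VIDEO_DB_API_KEY`, and the method names (`connect`, `upload`, `index_spoken_words`, `search`, `play`) follow the public VideoDB Python SDK — double-check the current docs before the weekend. The source URL and query are placeholders.

```python
import os

def build_perception_pipeline(source_url: str, query: str):
    """Ingest a video, index its spoken words, and return a stream
    link for the clips matching `query`. A minimal sketch of the
    hackathon flow; verify method names against the VideoDB docs."""
    import videodb  # imported lazily so the sketch reads without the SDK installed

    conn = videodb.connect(api_key=os.environ["VIDEO_DB_API_KEY"])
    video = conn.upload(url=source_url)  # ingest: file, YouTube link, etc.
    video.index_spoken_words()           # build the spoken-word index
    results = video.search(query)        # search over what you've indexed
    return results.play()                # playable stream of matching clips

if __name__ == "__main__":
    # Hypothetical source and query, for illustration only.
    print(build_perception_pipeline(
        "https://www.youtube.com/watch?v=example",
        "goal highlights",
    ))
```

From here, wiring the result into Slack, a web page, or a phone is just delivering the returned stream link wherever your agent lives.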
Keep it simple. We're judging on the idea and the build, not the deck. Submit before 10:00 IST on Monday, May 18.
A public repo with your source code and a README that explains what you built, how to run it, and which VideoDB primitives you used.
A 60–180 second walkthrough showing the project in action. YouTube, Loom, or a public video link. No edits required — keep it real.
Submissions are open from kickoff. You can re-submit any time before the deadline — only your latest entry counts.
We've logged your entry and sent a confirmation to your email. You can re-submit any time before 10:00 IST on Monday, May 18 — only your latest entry counts.
Receipt: