Overengineering a zero-user app: distributed media processing with Quarkus, Go, and FFmpeg
Published at Apr 12, 2026 · 4 min read
Back in 2024 I built tiny-img for a college project. The premise was simple: an image optimizer. I could have thrown a Node.js monolith at it and called it a day, but the requirements called for multiple services, and I saw a chance to mess around with distributed systems, message queues, and async architectures.
So, I built an “enterprise-grade” backend for an application that exactly zero users ever used. Here’s how that went.
Why split the architecture?
Running CPU- and IO-heavy tasks (like FFmpeg compression) on the same server that routes your HTTP requests is a bad idea. A big upload spike can starve the web server of resources and kill the API.
To fix this, I decoupled the processing:
- Edge API: Quarkus and Kotlin. Handles auth, validates uploads, saves the original file, and returns a 202 instantly.
- Message Queue: RabbitMQ sits in the middle.
- Workers: Asynchronous Go daemons subscribe to RabbitMQ, download the file, and run FFmpeg.
- Notification: pings the user when processing is done.
The flow I mapped out: client → Quarkus edge API → RabbitMQ → Go workers running FFmpeg → notification back to the user.
The Quarkus Edge
I went with Quarkus + Kotlin for the edge API. It boots extremely fast, and the DX staying close to Spring Boot is a nice plus.
Pushing jobs to RabbitMQ using MicroProfile Reactive Messaging looks like this:
```kotlin
package dev.mateux.adapters

import dev.mateux.application.dto.QueuePayload
import dev.mateux.ports.MessageQueue
import jakarta.enterprise.context.ApplicationScoped
import org.eclipse.microprofile.reactive.messaging.Channel
import org.eclipse.microprofile.reactive.messaging.Emitter

@ApplicationScoped
class MessageQueueImpl(
    @Channel("optimize") private val emitter: Emitter<QueuePayload>
) : MessageQueue {
    override fun sendImage(payload: QueuePayload): Boolean {
        // Hand the payload to the channel; no need to block the caller
        // waiting on the broker ack
        emitter.send(payload)
        return true
    }
}
```

It’s just fire and forget. The HTTP thread is freed up instantly.
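The `optimize` channel still has to be wired to RabbitMQ in `application.properties`. A minimal sketch, assuming the Quarkus RabbitMQ messaging extension and an exchange also named `optimize` (the real names in tiny-img may differ):

```properties
# Route the "optimize" channel through the SmallRye RabbitMQ connector
mp.messaging.outgoing.optimize.connector=smallrye-rabbitmq
# Exchange to publish to (hypothetical name; adjust to the real topology)
mp.messaging.outgoing.optimize.exchange.name=optimize
```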
Taming FFmpeg with Go
For the actual heavy lifting, I wrote the workers in Go. Goroutines are lightweight enough to process a huge backlog of queue messages without eating all my RAM.
Inside the worker, os/exec safely triggers FFmpeg as a subprocess.
```go
func processMessage(body []byte) {
	initTime := time.Now()
	payload := getPayload(body)
	if payload == nil {
		return
	}
	log.Printf("Processing image %s for user %s", payload.ImageID, payload.User)

	// Ensure the output directory exists
	outputFolder := getFolderFromPath(payload.OriginalImagePath)
	if err := os.MkdirAll(outputFolder, 0755); err != nil {
		log.Printf("Failed to create output folder: %v", err)
		return
	}

	// Build the ffmpeg argument list
	ffmpegArgs := buildFfmpegOptions(payload)
	cmd := exec.Command("ffmpeg", ffmpegArgs...)

	notifyQueue(fmt.Sprintf("FFmpeg processing image %s started", payload.ImageID), payload.User)

	// Run blocks until FFmpeg is entirely done writing the output file
	if err := cmd.Run(); err != nil {
		log.Printf("Failed to process image %s: %v", payload.ImageID, err)
		return // don't report success on a failed run
	}

	notifyQueue(fmt.Sprintf("Image %s processed", payload.ImageID), payload.User)
	log.Printf("Image %s processed in %v", payload.ImageID, time.Since(initTime))
}

func buildFfmpegOptions(payload *queuePayload.QueuePayload) []string {
	return []string{
		"-i", payload.OriginalImagePath,
		"-q:v", strconv.Itoa(payload.Quality),
		"-vf", fmt.Sprintf("scale=iw*%d/100:ih*%d/100", payload.Size, payload.Size),
		getNewFilePath(payload), // The final resized and optimized output file
	}
}
```

Since this runs inside a goroutine pulling from a RabbitMQ channel, scaling is trivial. If an actual user base showed up and started uploading huge batches of images, I’d just spin up more Go containers while Quarkus happily kept accepting HTTP traffic without dropping connections.
Worth it?
Building a microservice network with message queues and sub-processing for a 0-user app sounds a bit ridiculous. But honestly, it’s the best way to learn.
You don’t figure out distributed tracing, poison-pill messages, or subprocess bottlenecks just by reading docs. You have to overengineer your side projects and break them to really get how these tools work in the real world.