Affinity
By Sidharth Singh (https://x.com/sid10singh)
Affinity is a matchmaking and social platform backend written in Rust, built on the Axum web framework and backed by PostgreSQL via SeaORM. It handles user registration, profile management, matchmaking between users, game-based scoring, and a sandboxed code execution engine that evaluates submissions inside Docker containers.
The system is designed as a single Axum HTTP service that communicates with PostgreSQL for persistent state, Redis for caching, and AWS S3 for object storage. A broader ecosystem of services -- including a real-time chat gateway (PerOXO), RabbitMQ consumers, and ScyllaDB persistence -- can be composed alongside it via Docker Compose or Helm.
Tech Stack
| Layer | Technology |
|---|---|
| Language | Rust (2021 edition) |
| Runtime | Tokio |
| Framework | Axum 0.7 |
| ORM | SeaORM 1.0-rc.5 (sqlx-postgres) |
| Database | PostgreSQL |
| Cache | Redis |
| Object Storage | AWS S3 (aws-sdk-s3) |
| Auth | JWT (jsonwebtoken), bcrypt, TOTP, HMAC |
| Email | Lettre (SMTP), Handlebars templates |
| Code Execution | Docker-in-Docker (glot images) |
| Containerization | Docker, Docker Compose |
| Orchestration | Kubernetes, Helm |
| CI/CD | GitHub Actions, Jenkins |
Architecture
At the binary level, Affinity is a single Axum process listening on port 3001. The application is structured as a Cargo workspace with three crates:
| Crate | Role |
|---|---|
| rusty_backend | Binary: Axum HTTP API, handlers, middleware, utilities |
| entity | SeaORM entity definitions (generated models for all tables) |
| migration | SeaORM migrator with schema definitions |
The router is composed from nested sub-routers, each owning a domain of the application -- user management, matchmaking, scoring, authentication, diagnostics, and AWS integration. Shared state is injected via Axum Extension layers:
let app: Router<()> = Router::new()
    .nest("/user", user_routes())
    .nest("/matchmaking", matchmaking_routes())
    .nest("/score", score_routes())
    .nest("/auth", auth_routes())
    .nest("/diagnostics", diagnostics_routes())
    .nest("/aws", aws_routes())
    .layer(cors)
    .layer(Extension(db))
    .layer(Extension(redis_client));
Two pieces of shared state flow through the handler tree: a DatabaseConnection from SeaORM and an Arc<RedisClient>, which internally guards a single Mutex<Connection>. Every handler that needs database or cache access extracts these from the request extensions.
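A sketch of how that extraction looks in practice (the handler and cache key here are hypothetical; DatabaseConnection::ping is a real SeaORM method):

```rust
use std::sync::Arc;

use axum::Extension;
use sea_orm::DatabaseConnection;

// Hypothetical diagnostics handler: both Extension layers attached in main
// are pulled back out of the request extensions here.
async fn health(
    Extension(db): Extension<DatabaseConnection>,
    Extension(redis): Extension<Arc<RedisClient>>,
) -> &'static str {
    // DatabaseConnection::ping round-trips to PostgreSQL.
    if db.ping().await.is_err() {
        return "database unreachable";
    }
    // RedisClient::get_value is shown in the Redis Integration section below.
    if redis.get_value("healthcheck").await.is_err() {
        return "redis unreachable";
    }
    "ok"
}
```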
CORS is configured by iterating over an environment-defined list of allowed origins and applying AllowOrigin::exact for each one, supporting credentialed cross-origin requests with explicit method and header whitelisting.
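A minimal sketch of that setup, assuming the origins arrive in an ALLOWED_ORIGINS environment variable (the variable name is a guess; AllowOrigin::list is tower-http's way of whitelisting each origin exactly):

```rust
use axum::http::{header, HeaderValue, Method};
use tower_http::cors::{AllowOrigin, CorsLayer};

// Hypothetical env var name; the source reads an environment-defined list.
let origins: Vec<HeaderValue> = std::env::var("ALLOWED_ORIGINS")
    .unwrap_or_default()
    .split(',')
    .filter_map(|origin| origin.trim().parse().ok())
    .collect();

let cors = CorsLayer::new()
    // Wildcard origins are forbidden when credentials are allowed,
    // which is why each origin must be listed exactly.
    .allow_origin(AllowOrigin::list(origins))
    .allow_methods([Method::GET, Method::POST, Method::PUT, Method::DELETE])
    .allow_headers([header::CONTENT_TYPE, header::AUTHORIZATION])
    .allow_credentials(true);
```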
Database Design
A single SeaORM migration defines the entire schema. Six tables capture the core domain.
users is the identity table. Each user has a unique username and email, a bcrypt-hashed password, gender, age, and a creation timestamp. The auto-incrementing integer primary key is referenced by every other table in the schema.
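As a rough sketch, the generated SeaORM entity for users would look something like this (column names and types are inferred from the description above, not copied from the repo):

```rust
use sea_orm::entity::prelude::*;

#[derive(Clone, Debug, PartialEq, DeriveEntityModel)]
#[sea_orm(table_name = "users")]
pub struct Model {
    #[sea_orm(primary_key)]
    pub id: i32,
    #[sea_orm(unique)]
    pub username: String,
    #[sea_orm(unique)]
    pub email: String,
    pub password: String, // bcrypt hash, never the raw password
    pub gender: String,
    pub age: i32,
    pub created_at: DateTimeUtc,
}

#[derive(Copy, Clone, Debug, EnumIter, DeriveRelation)]
pub enum Relation {}

impl ActiveModelBehavior for ActiveModel {}
```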
user_details extends the user with profile and personality data, keyed by user_id as a foreign key to users. It stores attributes like location, openness, interests, relationship type, social habits, values, traits, commitment style, conflict resolution approach, a bio, an image URL, and a floating-point score. This separation keeps the core identity table lean while allowing the profile schema to evolve independently.
matches tracks matchmaking state between two users (male_id, female_id, both foreign keys to users) with a status field governing the match lifecycle -- pending, accepted, rejected, or contest. Both foreign keys cascade on delete, ensuring orphaned matches are cleaned up when a user is removed.
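A sketch of how that cascade is expressed in the SeaORM migration DSL, with identifiers assumed to match the column names above:

```rust
use sea_orm_migration::prelude::*;

#[derive(DeriveIden)]
enum Matches {
    Table,
    Id,
    MaleId,
    FemaleId,
    Status,
}

#[derive(DeriveIden)]
enum Users {
    Table,
    Id,
}

// Inside the migration's up(): both user references cascade on delete.
manager
    .create_table(
        Table::create()
            .table(Matches::Table)
            .col(
                ColumnDef::new(Matches::Id)
                    .integer()
                    .auto_increment()
                    .primary_key(),
            )
            .col(ColumnDef::new(Matches::MaleId).integer().not_null())
            .col(ColumnDef::new(Matches::FemaleId).integer().not_null())
            .col(ColumnDef::new(Matches::Status).string().not_null())
            .foreign_key(
                ForeignKey::create()
                    .from(Matches::Table, Matches::MaleId)
                    .to(Users::Table, Users::Id)
                    .on_delete(ForeignKeyAction::Cascade),
            )
            .foreign_key(
                ForeignKey::create()
                    .from(Matches::Table, Matches::FemaleId)
                    .to(Users::Table, Users::Id)
                    .on_delete(ForeignKeyAction::Cascade),
            )
            .to_owned(),
    )
    .await
```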
game_sessions records individual game rounds within a match. Each session links a male_id, female_id, match_id (FK to matches), a game_id, and a score. The triple foreign key constraint (both users and the match) enforces referential integrity at the database level.
avatar maps a user_id to an S3 object_key for profile images, decoupling storage location from the user record.
pass_reset uses a composite primary key (user_id, token) with a token_expiry timestamp. Tokens are stored as HMAC digests rather than plaintext -- the raw token is sent to the user via email, and verification recomputes the HMAC to compare against the stored digest. This means a database compromise does not expose usable reset tokens.
Authentication and Security
Authentication is built on multiple layers: bcrypt password hashing, JWT session tokens, TOTP-based email verification, and HMAC-secured password resets.
Transactional Signup
User registration is wrapped in a SeaORM database transaction. A single signup creates entries across three tables -- users, user_details, and optionally avatar. If any insert fails, the entire operation rolls back:
let txn = db.begin().await?;
let inserted_user = user_model.insert(&txn).await?;
user_details_model.insert(&txn).await?;
if let Some(image_url) = signup_info.image_url {
    if !image_url.is_empty() {
        avatar_model.insert(&txn).await?;
    }
}
txn.commit().await?;
This guarantees that a user never exists without their associated profile data, and a failed avatar upload does not leave a partially-created account.
JWT Session Management
Login authenticates against bcrypt-hashed passwords and issues a JWT with a 24-hour expiry. The token encodes the user ID (not email) as the subject claim. It is returned both as an HttpOnly cookie and in the Authorization header, supporting browser-based and API-based consumption patterns.
let claims = Claims {
    sub: user.id.to_string(),
    exp: (chrono::Utc::now() + chrono::Duration::days(1)).timestamp() as usize,
};
let token = encode(
    &Header::default(),
    &claims,
    &EncodingKey::from_secret(JWT_SECRET.as_ref()),
)?;
A JWT authorization middleware exists in the codebase for protecting routes, extracting and validating the Bearer token from incoming requests.
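A sketch of what such a middleware looks like in Axum 0.7 (the function name is hypothetical; JWT_SECRET is the same constant used at login, and Claims is the struct above, which must also derive Deserialize):

```rust
use axum::{
    extract::Request,
    http::{header::AUTHORIZATION, StatusCode},
    middleware::Next,
    response::Response,
};
use jsonwebtoken::{decode, DecodingKey, Validation};

// Hypothetical middleware; attach to protected routers with
// .route_layer(axum::middleware::from_fn(require_jwt)).
async fn require_jwt(req: Request, next: Next) -> Result<Response, StatusCode> {
    let token = req
        .headers()
        .get(AUTHORIZATION)
        .and_then(|value| value.to_str().ok())
        .and_then(|value| value.strip_prefix("Bearer "))
        .ok_or(StatusCode::UNAUTHORIZED)?;

    // Validation::default() also checks the exp claim set at login time.
    decode::<Claims>(
        token,
        &DecodingKey::from_secret(JWT_SECRET.as_ref()),
        &Validation::default(),
    )
    .map_err(|_| StatusCode::UNAUTHORIZED)?;

    Ok(next.run(req).await)
}
```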
HMAC Password Reset Flow
Password reset avoids storing raw tokens in the database. When a reset is requested, a cryptographically secure random token is generated. The raw token is embedded in the email link, while only the HMAC digest is persisted alongside a one-hour expiry timestamp.
When the user submits the reset, the system verifies the token by recomputing the HMAC of the submitted value and comparing the result against the stored digests. HMAC verification uses a constant-time comparison, preventing timing attacks, and even a full database dump yields no usable tokens.
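A minimal sketch of the digest scheme using the hmac and sha2 crates (the helper names and secret handling are assumptions; Mac::verify_slice is what provides the constant-time comparison):

```rust
use hmac::{Hmac, Mac};
use sha2::Sha256;

type HmacSha256 = Hmac<Sha256>;

// Compute the digest that gets persisted in pass_reset.
fn digest_token(secret: &[u8], raw_token: &str) -> Vec<u8> {
    let mut mac = HmacSha256::new_from_slice(secret).expect("HMAC accepts any key length");
    mac.update(raw_token.as_bytes());
    mac.finalize().into_bytes().to_vec()
}

// Verify a submitted token against a stored digest in constant time.
fn verify_token(secret: &[u8], submitted: &str, stored_digest: &[u8]) -> bool {
    let mut mac = HmacSha256::new_from_slice(secret).expect("HMAC accepts any key length");
    mac.update(submitted.as_bytes());
    mac.verify_slice(stored_digest).is_ok()
}
```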
TOTP Email Verification
Time-based one-time passwords are generated via the totp-rs crate and delivered through Lettre's SMTP transport with Handlebars-templated HTML emails. The OTP endpoint serves double duty: called without an otp parameter it generates and sends a code; called with one it verifies against the current time window.
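A sketch of both modes, assuming totp-rs without the otpauth feature (which would add issuer and account parameters to the constructor); the secret here is illustrative only:

```rust
use totp_rs::{Algorithm, TOTP};

// Illustrative 192-bit secret; the real application derives one per user.
let totp = TOTP::new(
    Algorithm::SHA1,
    6,  // digits in the code
    1,  // accepted clock skew, in steps
    30, // step length in seconds
    b"an-illustrative-secret!!".to_vec(),
)?;

// Called without an otp parameter: generate and email the current code.
let code = totp.generate_current()?;

// Called with one: verify against the current time window.
let valid: bool = totp.check_current(&submitted_code)?;
```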
Sandboxed Code Runner
The code runner is Affinity's most architecturally interesting subsystem. It powers competitive coding challenges used in the matchmaking flow -- users solve problems, and their scores influence match outcomes.
Execution Model
The handler accepts a multipart upload containing a source file. It determines the language from the file extension, constructs a JSON payload matching the glot runner protocol, and spawns a Docker container from the corresponding glot/<language>:latest image.
The container is launched with hardened security flags:
let mut docker_process = Command::new("docker")
    .arg("run")
    .arg("--rm")
    .arg("-i")
    .arg("--read-only")
    .arg("--tmpfs")
    .arg("/tmp:rw,noexec,nosuid,size=65536k")
    .arg("--tmpfs")
    .arg("/home/glot:rw,exec,nosuid,uid=1000,gid=1000,size=131072k")
    .arg("-u")
    .arg("glot")
    .arg("-w")
    .arg("/home/glot")
    .arg(runner_image)
    .stdin(Stdio::piped())
    .stdout(Stdio::piped())
    .stderr(Stdio::piped())
    .spawn()?;
Several constraints enforce isolation:
- Read-only root filesystem (--read-only) prevents persistent modification of the container image.
- Unprivileged user (-u glot) drops all root capabilities inside the container.
- Bounded tmpfs mounts provide non-persistent scratch space -- /tmp (64 MB, noexec) for temporary data, and /home/glot (128 MB, exec) for compilation and execution. Both are backed by RAM only.
- Automatic cleanup (--rm) destroys the container and its filesystem immediately on exit.
Testcase Evaluation
The JSON payload is piped to the container's stdin. The container's stdout is parsed as JSON to extract stdout, stderr, and error fields. The actual output is compared against an expected answer fetched from S3.
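A sketch of that exchange, assuming the std::process::Child spawned above and a payload string already built in the glot format (the expected_answer value is the one fetched from S3):

```rust
use std::io::Write;
use serde_json::Value;

// Pipe the glot payload into the container; wait_with_output closes
// stdin before waiting, so the runner sees EOF and executes.
docker_process
    .stdin
    .as_mut()
    .expect("stdin was piped")
    .write_all(payload.as_bytes())?;
let output = docker_process.wait_with_output()?;

// The runner prints a single JSON object on stdout.
let result: Value = serde_json::from_slice(&output.stdout)?;
let program_stdout = result["stdout"].as_str().unwrap_or_default();

// Compare against the expected answer for this problem.
let passed = program_stdout.trim() == expected_answer.trim();
```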
To avoid repeated S3 round-trips, both testcase inputs and expected answers are cached in Redis on first access. Subsequent code submissions for the same problem read from Redis directly. The cache key is derived from the filename stem, so problem_1.py and problem_1.rs resolve to the same testcase.
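A sketch of that cache-aside lookup (the bucket name, key layout, and helper are assumptions; get_value and set_value come from the RedisClient shown in the next section):

```rust
// Hypothetical helper: Redis first, S3 on a miss, then backfill the cache.
async fn fetch_expected_answer(
    redis: &RedisClient,
    s3: &aws_sdk_s3::Client,
    problem: &str, // filename stem, e.g. "problem_1"
) -> anyhow::Result<String> {
    let cache_key = format!("answer:{problem}");
    if let Ok(Some(cached)) = redis.get_value(&cache_key).await {
        return Ok(cached);
    }

    // Cache miss: fall back to the S3 source of truth.
    let object = s3
        .get_object()
        .bucket("affinity-testcases") // assumed bucket name
        .key(format!("{problem}/answer.txt")) // assumed key layout
        .send()
        .await?;
    let bytes = object.body.collect().await?.into_bytes();
    let answer = String::from_utf8(bytes.to_vec())?;

    // Backfill so subsequent submissions skip S3 entirely.
    let _ = redis.set_value(&cache_key, &answer).await;
    Ok(answer)
}
```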
Redis Integration
Redis serves two roles in Affinity:
Code runner caching -- Testcase inputs and expected answers are stored in S3 as the source of truth but cached in Redis on first access. The cache eliminates S3 latency from the hot path of code evaluation.
General key-value store -- The RedisClient struct wraps a Mutex<Connection> behind an Arc, exposing async set_value and get_value methods. It is injected as an Axum Extension and available to any handler in the tree.
use std::sync::Arc;

use redis::{Client, Commands, Connection, RedisResult};
use tokio::sync::Mutex;

pub struct RedisClient {
    connection: Arc<Mutex<Connection>>,
}

impl RedisClient {
    pub fn new() -> Self {
        let client = Client::open(REDIS_URL.to_string()).expect("Invalid Redis URL");
        let connection = client.get_connection().expect("Failed to connect to Redis");
        Self {
            connection: Arc::new(Mutex::new(connection)),
        }
    }

    pub async fn set_value(&self, key: &str, value: &str) -> RedisResult<()> {
        let mut con = self.connection.lock().await;
        con.set(key, value)
    }

    pub async fn get_value(&self, key: &str) -> RedisResult<Option<String>> {
        let mut con = self.connection.lock().await;
        con.get(key)
    }
}
The Mutex here is tokio::sync::Mutex, not std::sync::Mutex, so holding it across .await points is safe and will not block the Tokio runtime's worker threads.
Deployment
Dockerfile
The build uses a multi-stage Dockerfile. The builder stage compiles on rust:bookworm with pkg-config, clang, lld, and libssl-dev. The runtime stage uses debian:bookworm-slim with only ca-certificates, docker.io (for the code runner's Docker CLI), and jq. The DOCKER_HOST environment variable points to the Docker-in-Docker sidecar over TLS on port 2376.
Docker Compose
The base docker-compose.yml runs three services:
| Service | Image / Build | Purpose |
|---|---|---|
| rusty_backend | Built from . | Axum API (host 8000 -> container 3001) |
| docker | docker:dind | Docker-in-Docker for code execution |
| redis | redis:latest | Caching layer |
PostgreSQL is expected to be reachable externally at the DATABASE_URL configured in .env.
A merged compose file (docker-compose_merged.yaml) extends this with the full ecosystem: ScyllaDB, RabbitMQ, the PerOXO WebSocket gateway, chat-service (gRPC on port 50052), and rabbit-consumer -- all running on a dedicated chat_network bridge.
Kubernetes with Helm
The hell_charts/ directory contains a Helm chart (affinity-rust) that deploys the full platform. The chart includes Deployments for the backend, Redis, Docker-in-Docker, ScyllaDB, RabbitMQ, PerOXO, and chat-service, along with ConfigMaps, Secrets, PersistentVolumeClaims, and a NodePort service exposing port 30100.
helm install <app-name> ./hell_charts
CI/CD
GitHub Actions runs two workflows:
- Lint (rust.yml) -- runs cargo fmt --check and cargo clippy on every push and pull request, enforcing formatting and catching common mistakes before merge.
- Docker (docker-image.yml) -- builds and pushes sidharthsingh7/rusty_backend to Docker Hub on every push to main.
Jenkins (Jenkinsfile) provides an alternative pipeline: Docker build, push to Docker Hub, and a webhook trigger to deploy on an AWS EC2 instance with post-build image pruning.
Ecosystem
Affinity's backend is one piece of a larger platform. The broader ecosystem includes:
| Component | Repository | Role |
|---|---|---|
| Frontend | Affinity-Frontend | Web application |
| Chat Gateway | PerOXO | Actor-based real-time WebSocket server |
| Chatbot | DateHer | AI-powered matchmaking assistant |
| Discord Bot | affinity-bot | Community integration |
These services run as separate containers orchestrated through Docker Compose or Helm, communicating with the backend via gRPC and shared data stores.
Contributing
Contributions are welcome. See CONTRIBUTING.md for guidelines on workflow, code quality expectations, and project conventions.
License
This project is licensed under the MIT License. See the LICENSE file for details.