Selected Systems

These projects are presented as systems and architecture case studies rather than commercial offerings. The focus is not on deliverables, but on system design, control mechanisms, and architectural decisions under real-world constraints.

Across these systems, the common thread is architectural thinking: managing complexity, isolating risk, and designing stable systems that operate reliably over time.

Some of these systems are active trading architectures. Others are large-scale infrastructure and data systems built in different contexts. Together, they represent a body of work centered on system architecture, risk control, and operational design.

TRADING SYSTEMS

Global Risk Engine

Distributed risk and capital allocation architecture for multi-strategy trading

The Global Risk Engine is a specialized control system designed to coordinate multiple trading strategies within a shared capital environment. The architecture is inspired by institutional risk systems used in hedge funds and asset managers such as State Street, adapted for private trading environments.

The system is built as a distributed microservices architecture, coordinating multiple trading engines and risk modules across independent services. It integrates with Interactive Brokers through the TWS API to manage execution, exposure, and capital allocation.

The architecture focuses on maximizing capital efficiency while controlling correlations between underlyings, strategies, and exposure types. Strategy isolation, exposure limits, and centralized kill-switch logic ensure that individual trading layers cannot create uncontrolled portfolio-level risk.
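
The exposure-limit and kill-switch logic described above can be sketched as a central gatekeeper that every strategy layer must consult before committing capital. This is a minimal illustration; the class name, limit structure, and thresholds are assumptions for the example, not the production design.

```python
from dataclasses import dataclass, field

@dataclass
class RiskEngine:
    """Illustrative central limit checker shared by all strategy layers."""
    max_exposure_per_strategy: float   # notional cap per strategy layer
    max_portfolio_exposure: float      # notional cap across all layers
    exposures: dict = field(default_factory=dict)
    killed: bool = False

    def request_allocation(self, strategy: str, notional: float) -> bool:
        """Approve an allocation only if both limits stay intact."""
        if self.killed:
            return False
        new_strategy = self.exposures.get(strategy, 0.0) + notional
        new_portfolio = sum(self.exposures.values()) + notional
        if new_strategy > self.max_exposure_per_strategy:
            return False
        if new_portfolio > self.max_portfolio_exposure:
            return False
        self.exposures[strategy] = new_strategy
        return True

    def kill_switch(self) -> None:
        """Centralized stop: block all further allocations in every layer."""
        self.killed = True
```

Because every layer routes allocation requests through the same object, no individual strategy can push the portfolio past its aggregate limit, and a single flag halts all of them at once.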

The result is a multi-layer trading infrastructure bringing hedge-fund-style risk architecture to private trading environments.

Layer 1 Trading Engine

Fundamental-driven premium-selling architecture for undervalued equities

Layer 1 is built around identifying undervalued, high-quality equities using automated fundamental analysis. The system evaluates financial metrics such as cash flow, EBITDA, earnings per share, and gross margins across rolling windows of one, three, five, and ten years.

This fundamental screening process identifies quality companies trading below intrinsic value. The trading layer then systematically sells short puts on these underlyings, with assignment leading to long-term equity positions.
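
The screening step can be sketched as a filter that requires healthy fundamentals across every rolling window and a price below a discount to intrinsic value. The function names, the growth-rate representation, and the discount factor are illustrative assumptions, not the actual screening criteria.

```python
def passes_screen(metrics: dict, min_growth: float = 0.0) -> bool:
    """Require positive trends in each fundamental metric across every
    rolling window. `metrics` maps a metric name (e.g. "eps", "ebitda")
    to a list of annualized growth rates, one per window (1y/3y/5y/10y).
    """
    return all(
        growth > min_growth
        for window_growths in metrics.values()
        for growth in window_growths
    )

def is_candidate(price: float, intrinsic_value: float,
                 metrics: dict, discount: float = 0.8) -> bool:
    """Select quality companies trading below intrinsic value (sketch):
    fundamentals must pass, and price must sit below a discount threshold."""
    return price <= discount * intrinsic_value and passes_screen(metrics)
```

Underlyings that pass this kind of filter would then feed the put-selling layer, so an assignment converts into a long-term position in a company the screen already approved.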

Layer 1 is implemented as a system monolith optimized for reliability and deterministic execution. The system manages seven-figure capital and has processed seven-figure trading volumes.

Assignment handling, position lifecycle management, and covered call workflows are integrated into a structured portfolio management architecture. Layer 1 acts as the core yield layer within the broader multi-strategy trading system.

Layer 2 Trading Engine

Defined-risk credit spread architecture for yield optimization

Layer 2 builds on Layer 1 underlyings and introduces defined-risk credit spreads. This layer operates with bounded exposure and predefined risk limits.
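
The bounded-exposure property follows from the spread structure itself: on a put credit spread, the maximum loss is capped at the strike width minus the credit received. A minimal sketch of that risk profile (the function name and example values are illustrative):

```python
def put_credit_spread_risk(short_strike: float, long_strike: float,
                           credit: float, contracts: int = 1,
                           multiplier: int = 100) -> dict:
    """Risk profile of a short put vertical (defined-risk) - sketch.

    The long put bounds the downside, so max loss is known before entry,
    which is what makes the exposure of this layer predefined.
    """
    width = short_strike - long_strike            # short strike above long
    max_loss = (width - credit) * multiplier * contracts
    max_gain = credit * multiplier * contracts
    breakeven = short_strike - credit
    return {"max_loss": max_loss, "max_gain": max_gain, "breakeven": breakeven}
```

Because max loss is fixed at entry, position sizes can be derived directly from the portfolio-level risk limits rather than estimated from market moves.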

The architecture is designed to optimize capital efficiency and improve yield on selected underlyings. Like Layer 1, it is implemented as a deterministic monolith optimized for reliability.

Layer 2 operates on seven-figure capital and integrates directly into the global risk architecture. This layer acts as a yield optimization engine within the multi-strategy trading system.

SPX 0DTE Engine

Intraday options execution architecture for profit maximization

The SPX 0DTE engine is designed for intraday options trading with short decision cycles and strict risk constraints. The system operates within predefined time windows and rule-based execution logic.
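
A rule-based gate of this kind can be sketched as a pure function that checks the clock and the risk budget before any order is allowed. The window times and loss cap below are placeholders; the actual windows and rules are not disclosed in this write-up.

```python
from datetime import time

# Illustrative intraday windows (placeholder values, not the real ones).
TRADING_WINDOWS = [(time(10, 0), time(11, 30)), (time(13, 30), time(15, 45))]

def may_trade(now: time, daily_loss: float, max_daily_loss: float) -> bool:
    """Rule-based gate: trade only inside an allowed window and only
    while the realized daily loss stays under the cap."""
    in_window = any(start <= now <= end for start, end in TRADING_WINDOWS)
    return in_window and daily_loss < max_daily_loss
```

Keeping the gate a side-effect-free predicate makes the short decision cycle easy to test deterministically, which matters when execution logic runs many times per session.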

This layer acts as a profit maximization engine within the architecture. Integration into the global risk engine ensures that intraday exposure remains aligned with portfolio-level risk constraints.

DATA & INFRASTRUCTURE SYSTEMS

Crawling & Data Pipeline Architecture

Large-scale distributed crawling architecture for Financialbot AG

This system was designed as a multi-layer data acquisition architecture using distributed crawling infrastructure. The platform operated on bare-metal servers and processed large volumes of structured and unstructured data.

The architecture included distributed crawlers, ingestion pipelines, normalization layers, and Elasticsearch-based indexing systems. RabbitMQ was used for queue-based processing and distributed workload coordination.
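
The crawl-to-index flow can be sketched as a queue-coordinated pipeline. In this minimal stand-in, `queue.Queue` plays the role RabbitMQ played in production and a plain dict stands in for the Elasticsearch index; the field names in the normalization step are illustrative.

```python
import queue

def normalize(raw: dict) -> dict:
    """Normalization layer: map raw crawler output to an index-ready record."""
    return {
        "url": raw["url"].strip().lower(),
        "title": " ".join(raw.get("title", "").split()),
        "body_len": len(raw.get("body", "")),
    }

def run_pipeline(raw_docs, index: dict) -> None:
    """Queue-based coordination: crawlers publish raw documents, workers
    consume, normalize, and write to the index."""
    q = queue.Queue()                   # RabbitMQ stand-in for this sketch
    for doc in raw_docs:
        q.put(doc)                      # ingestion stage publishes
    while not q.empty():
        record = normalize(q.get())     # worker consumes and normalizes
        index[record["url"]] = record   # indexing stage (Elasticsearch in prod)
```

Decoupling stages through a queue is what lets crawlers, normalizers, and indexers scale independently under bursty load.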

The result was a scalable data platform capable of continuous data ingestion and structured retrieval.

Technology stack: Python, MySQL, Elasticsearch, RabbitMQ, Bare-metal infrastructure

Distributed Infrastructure Platform

Large-scale distributed infrastructure architecture for Living Internet / Yellowgrey

This project involved designing and operating a distributed infrastructure across a large number of nodes. The architecture included deployment automation, monitoring systems, and operational safeguards.

The infrastructure operated across distributed environments with orchestration and monitoring layers. The system focused on maintaining operational stability and visibility across distributed infrastructure.
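
One common building block for visibility across many nodes is a heartbeat check that classifies each node by the age of its last report. A minimal sketch (the function name and timeout are illustrative, not the platform's actual monitoring design):

```python
def fleet_health(heartbeats: dict, now: float, timeout: float = 60.0) -> dict:
    """Classify nodes by last-heartbeat age.

    `heartbeats` maps node name -> timestamp of last report (seconds);
    nodes silent for longer than `timeout` are flagged as down.
    """
    up = sorted(n for n, ts in heartbeats.items() if now - ts <= timeout)
    down = sorted(n for n, ts in heartbeats.items() if now - ts > timeout)
    return {"up": up, "down": down}
```

A summary like this is what an operational safeguard would act on, for example by withdrawing work from nodes in the "down" set before they affect the rest of the fleet.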

Technology stack: Python, MySQL, Elasticsearch, distributed node infrastructure

High-Volume Data Processing System

Large-scale ingestion and indexing architecture for Hoosh / NOVASOL

This system focused on high-volume data ingestion and large-scale indexing pipelines. The architecture handled sustained processing load while maintaining consistent indexing and retrieval.
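
A standard throughput lever in pipelines like this is bulk indexing: grouping the record stream into fixed-size batches so per-request overhead is amortized (Elasticsearch exposes this as its bulk API). A self-contained batching sketch, with the generator name chosen for this example:

```python
def batched(records, batch_size: int):
    """Group a record stream into fixed-size batches for bulk indexing.

    Yields full batches as they fill, plus a final partial batch, so the
    pipeline keeps a steady write rate without buffering the whole stream.
    """
    batch = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch
```

Because the generator is lazy, ingestion can keep pace with an unbounded input stream while the indexer consumes one batch at a time.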

The design emphasized throughput, reliability, and structured access to large datasets. This created a scalable data processing environment.

Technology stack: Python, MySQL, Elasticsearch, distributed processing pipelines

AUTOMATION & BUSINESS SYSTEMS

Distributed Engineering Architecture

Remote engineering and delivery architecture for LI Ukraine

This architecture focused on distributed engineering teams operating across locations. The system defined delivery workflows, engineering standards, and coordination structures.

The goal was consistent engineering output regardless of location, achieved through a structured, repeatable delivery architecture.

Pharmacy Digitalization Architecture

Operational workflow and communication architecture for Montanus Apotheke

This system focused on replacing manual operational workflows in a pharmacy environment. Previously fragmented communication and document handling were replaced with structured digital workflows.

The architecture centered on process design, validation logic, and integration between operational steps. This created a more predictable and scalable operational environment.

WHY THIS MATTERS

Architecture made visible through systems.

Most architecture work remains invisible. It sits behind execution, quietly shaping how systems behave under pressure.

These selected systems make that layer visible.

The point is not polished launches or product narratives. It is to show how architecture decisions translate into working systems: how constraints are handled, how complexity is reduced, and how control is maintained as systems evolve.

Each case reflects a different type of system: trading architecture, data infrastructure, distributed platforms, or operational workflows.

Together, they represent a consistent approach: designing systems that remain stable, understandable, and adaptable over time.