aka "Juce"
Lead Software Engineer · Backend-focused
Fullstack Engineer
I help teams build systems that don't fall over at scale.
With 9+ years of experience in Node.js, PHP, React, and AWS, I focus on distributed systems, clean architecture, and pragmatic engineering decisions that hold up in production.

Stepped into a lead role to guide technical direction and execution across the platform, focusing on planning, architectural decisions, and team enablement. Worked closely with the Director of Engineering, led technical interviews, and helped scale both people and processes.
Built and scaled a greenfield full-stack platform to 100K+ users, growing the team from 2 to 20+ engineers. Laid the technical foundation for an ecosystem spanning doctor prescription tools, patient payments, and internal support systems.

Contributed to building the first working prototype of a healthcare platform, rapidly validating core product ideas and laying the groundwork for later scaling and productionization.

Contributed to internal platforms at Deutsche Bahn, including systems for building and workplace management and the company's internal search infrastructure.

Developed e-commerce solutions and web applications, working with modern frameworks and best practices.
For additional career history and details, visit my
LinkedIn Profile.
I enjoy building backend systems that don't break in surprising ways. I'm especially good at finding edge cases, questioning assumptions, and making sure things behave correctly even when something goes wrong.
Below are the principles and strengths that shape how I work day to day.
I'm good at spotting problems before they turn into bugs.
Focus on Backend, but comfortable across the whole stack
Pragmatic, fail-fast mindset
Our company had to connect customer systems to AWS using Direct Connect and virtual private gateways (VGWs). This was a very sensitive task because we acted as the "ISP" for our customers—if we made a mistake, their entire network went offline.
The biggest challenge was timing. It takes about 10 minutes for AWS to set up these connections. Doing this manually was slow and error-prone: if a step failed halfway through, the system would get stuck in a broken state, leaving the customer without a connection.
I designed a system that can retry, wait as long as needed, pick up where it left off, and—in the worst case—roll back by itself.
Complex infrastructure tasks shouldn't be handled by a single, long script. By using delayed messaging and a status-based design, we turned a risky 10-minute "waiting game" into a reliable process. The system is now smart enough to fix itself through retries, or protect the customer by rolling back automatically if something goes wrong.
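The status-based design can be sketched roughly like this—a TypeScript sketch with hypothetical state names and a simplified step list, not the real service code:

```typescript
// Each provisioning record persists its current state. A worker derives the
// next step from that state, so a crashed or retried run resumes exactly
// where the previous attempt stopped. State names here are illustrative.

type ProvisionState =
  | "pending"
  | "gateway_created"
  | "connection_attached"
  | "routes_propagated"
  | "active"
  | "rolled_back";

interface Provisioning {
  id: string;
  state: ProvisionState;
}

// Ordered steps; each moves the record one state forward.
const steps: Array<{ from: ProvisionState; to: ProvisionState }> = [
  { from: "pending", to: "gateway_created" },
  { from: "gateway_created", to: "connection_attached" },
  { from: "connection_attached", to: "routes_propagated" },
  { from: "routes_propagated", to: "active" },
];

// Advance one step from wherever the record currently is.
function advance(p: Provisioning): Provisioning {
  const step = steps.find((s) => s.from === p.state);
  if (!step) return p; // already active or rolled back: nothing to do
  return { ...p, state: step.to };
}

// Worst case: undo the changes and leave the customer in a known-good state.
function rollback(p: Provisioning): Provisioning {
  return { ...p, state: "rolled_back" };
}
```

Because the next step is always derived from persisted state rather than from a long-running script's local variables, "resume after a failure" is just running the worker again.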
I broke the process into small, independent services for gateways, connections, and route tables to ensure each part was manageable.
I used RabbitMQ to connect the steps. This allowed the system to "wait" during long AWS setups without crashing or blocking other tasks.
Instead of processes idling and wasting resources, I used delayed messages to check status. This let the system pick up exactly where it left off if it got stuck.
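A minimal sketch of that delayed-check pattern, with an in-memory stand-in for the queue (the real system used RabbitMQ delayed messages; the message shape and `MAX_ATTEMPTS` value are illustrative):

```typescript
// Instead of a worker sleeping while AWS provisions, the handler re-enqueues
// a delayed "check again" message and frees the worker immediately.

type CheckMessage = { resourceId: string; attempt: number };

const MAX_ATTEMPTS = 5;

// Stand-in for the AWS status API: reports whether the resource is ready.
type StatusFn = (resourceId: string) => "available" | "pending";

function handleCheck(
  msg: CheckMessage,
  getStatus: StatusFn,
  requeueDelayed: (msg: CheckMessage) => void,
): "done" | "requeued" | "failed" {
  if (getStatus(msg.resourceId) === "available") return "done";
  if (msg.attempt >= MAX_ATTEMPTS) return "failed"; // escalate or roll back
  // Not ready yet: schedule another check instead of blocking a worker.
  requeueDelayed({ ...msg, attempt: msg.attempt + 1 });
  return "requeued";
}
```

No process ever waits on AWS; the "waiting" lives in the queue as a delayed message, which is also what makes picking up after a crash trivial.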
I built a "failsafe" system. If the new network settings didn't work perfectly, the system would automatically "undo" the changes to keep the customer online.
The system handles temporary AWS errors automatically. Anything unexpected immediately alerts the team so we can take manual action before the customer notices.
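Roughly, the retry-or-alert decision looks like this (the error codes are examples, not the system's actual list):

```typescript
// Transient AWS errors are retried automatically; anything unexpected stops
// the flow and pages the team before the customer notices.

const TRANSIENT_CODES = new Set([
  "Throttling",
  "RequestLimitExceeded",
  "ServiceUnavailable",
]);

type Decision = "retry" | "alert";

function classify(errorCode: string): Decision {
  return TRANSIENT_CODES.has(errorCode) ? "retry" : "alert";
}
```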