Editor’s Note: This post was written by AI based on my prompts to demonstrate the capabilities of the system described below.
In software engineering, the ultimate goal is often to remove the human from the loop for repetitive cognitive tasks. We’ve mastered this for CI/CD pipelines and infrastructure, but complex analytical workflows—like evaluating an investment portfolio—have largely remained manual.
My latest project, the Multi-Agent Portfolio Analyst, isn’t just about “using AI for trading.” It is a case study in workflow automation: building a resilient, autonomous system that ingests raw data, orchestrates complex reasoning, and delivers finished intelligence without human intervention.
Here is how I engineered an “AI Employee” that works 24/7.
The problem with most “AI scripts” is that they are brittle. They run once on a laptop and break the moment an API changes or a context window overflows. To build a robust analyst service, I needed to treat it like a production data pipeline.
The architecture rests on three automation pillars: a reliable data layer, an adversarial multi-agent reasoning layer, and fully autonomous scheduling.
Automation fails without reliable data. I built a structured Data Module that acts as the system’s ground truth: instead of feeding an LLM raw internet search results, the system maintains its own state in a local SQLite database.
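To make this concrete, here is a minimal sketch of such a state layer. The table name, columns, and function names are illustrative, not the project’s actual schema; the key property is idempotent writes, so re-running the pipeline never corrupts or duplicates state.

```python
import sqlite3

def init_db(path="portfolio.db"):
    """Create the local state tables if they don't exist (illustrative schema)."""
    conn = sqlite3.connect(path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS prices (
            symbol TEXT NOT NULL,
            date   TEXT NOT NULL,
            close  REAL NOT NULL,
            PRIMARY KEY (symbol, date)
        )
    """)
    conn.commit()
    return conn

def upsert_price(conn, symbol, date, close):
    """Idempotent write: re-running the same ingest never duplicates rows."""
    conn.execute(
        "INSERT OR REPLACE INTO prices (symbol, date, close) VALUES (?, ?, ?)",
        (symbol, date, close),
    )
    conn.commit()

def latest_close(conn, symbol):
    """Return the most recent close for a symbol, or None if unknown."""
    row = conn.execute(
        "SELECT close FROM prices WHERE symbol = ? ORDER BY date DESC LIMIT 1",
        (symbol,),
    ).fetchone()
    return row[0] if row else None
```

Because the LLM agents read from this store rather than the open internet, every run reasons over the same vetted snapshot of the portfolio.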
The core innovation here is the shift from a single prompt to a Multi-Agent Workflow. Using CrewAI, I modeled the decision-making process of a real investment firm.
I defined two specialized agents with opposing goals to force a dialectic, often called “Adversarial Analysis”:
```python
# Simplified CrewAI agent definitions
from crewai import Agent

bull_agent = Agent(
    role="Bullish Analyst",
    goal="Find opportunities and defend long positions.",
    backstory="You are a growth-focused investor...",
    llm="gpt-4o",
)

bear_agent = Agent(
    role="Bearish Analyst",
    goal="Identify risks and protect capital.",
    backstory="You are a risk manager concerned with capital preservation...",
    llm="gpt-4o",
)
```
This isn’t just a chat. It’s a directed graph of tasks.
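The graph edges can be sketched with CrewAI’s `Task` wiring. The task descriptions below are illustrative, not the project’s actual prompts; the important detail is `context`, which feeds the bull’s output into the bear’s task and forces a genuine rebuttal rather than two independent monologues.

```python
from crewai import Crew, Process, Task

# Illustrative task wiring (assumes bull_agent and bear_agent defined above)
bull_case = Task(
    description="Review current holdings and argue the strongest bull case.",
    expected_output="A bulleted list of opportunities with supporting data.",
    agent=bull_agent,
)

bear_case = Task(
    description="Rebut the bull case point by point and surface the key risks.",
    expected_output="A risk assessment addressing each bull argument.",
    agent=bear_agent,
    context=[bull_case],  # the bear sees the bull's output, forcing a rebuttal
)

crew = Crew(
    agents=[bull_agent, bear_agent],
    tasks=[bull_case, bear_case],
    process=Process.sequential,  # edges of the task graph execute in order
)
# report = crew.kickoff()
```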
This structured workflow significantly reduces bias and hallucinations, ensuring that the final output is rigorous and balanced.
The most critical part of automation is reliability. A script that you have to run manually isn’t an automated system; it’s just a tool.
To turn this into a service, I leveraged GitHub Actions as a cron scheduler.
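A workflow along these lines does the scheduling. The file name, cron expression, and entry-point script are illustrative; the pattern is a `schedule` trigger plus `workflow_dispatch` for manual debugging runs, with API keys injected from repository secrets.

```yaml
# .github/workflows/weekly-report.yml (illustrative)
name: weekly-portfolio-report
on:
  schedule:
    - cron: "0 6 * * 1"    # every Monday at 06:00 UTC
  workflow_dispatch:        # allow manual runs for debugging
jobs:
  analyze:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install -r requirements.txt
      - run: python run_analysis.py   # hypothetical entry point
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
          ALPACA_API_KEY: ${{ secrets.ALPACA_API_KEY }}
```

Note that GitHub schedules `cron` triggers on a best-effort basis, so runs may start a few minutes late; for a weekly report that slack is irrelevant.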
The result? I receive a professional, deep-dive portfolio report in my inbox every week, completely hands-off.
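The report itself is assembled from the agents’ output with Jinja2. A minimal sketch, assuming a plain-text email body; the template layout and field names here are placeholders, not the project’s actual report format.

```python
from jinja2 import Template

# Illustrative template; the real report layout may differ.
REPORT_TEMPLATE = Template("""\
Weekly Portfolio Report
=======================
{% for position in positions %}
- {{ position.symbol }}: {{ position.verdict }}
{% endfor %}
""")

def render_report(positions):
    """Render the agents' per-position verdicts into an email body."""
    return REPORT_TEMPLATE.render(positions=positions)
```

Separating the template from the pipeline means the report format can evolve without touching any agent or data code.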
Building an autonomous system is an iterative process. While the current version provides solid insights, I have a roadmap to make the analyst even smarter:
This project demonstrates that workflow automation is about more than just moving data from A to B. It’s about orchestrating intelligence. By combining robust data engineering (SQLite/Alpaca) with agentic AI (CrewAI) and CI/CD scheduling (GitHub Actions), we can build systems that don’t just “help” us work, but actually do the work.
This project is open-source; you can find the full code on GitHub.
I am open to collaborations to expand this system or apply these automation principles to new domains. If you’re interested, feel free to reach out!
Technologies used: Python, CrewAI, Alpaca API, SQLite, GitHub Actions, Jinja2.