Developing This Website With AI
A personal look at how this website came together over four months, and how my workflow evolved from hands-on coding with AI assistance to full agent-driven implementation.

At a glance
- Author: Rohan Barrett
- Category: Understanding AI
- Published: April 2, 2026
- Reading time: 8 min read
This website took me roughly four months to build, and in a lot of ways it became more than just a website project. It gave me a real environment to ship pages, routes, blogs, galleries, and games, but it also became the place where I learned how AI could actually fit into my day-to-day work. I started the project wanting to launch something real for the internet, and I finished it with a very different understanding of how I wanted to build software.
Summary
Key takeaways
- The website gave me a practical way to evaluate AI while also building something meaningful for my wife.
- My workflow changed in stages, from mostly manual coding with AI assistance to structured planning and eventually full agent-driven implementation.
- The biggest shift was not just speed. It was learning how to define work clearly, review outputs well, and collaborate with AI instead of treating it like autocomplete.
Why I chose a website as the test case
My background is more on the integration, backend, and API side, so launching a polished public website had been sitting on my bucket list for a long time. I had built features for websites before, and I had made local test sites, but I had never taken something all the way to a finished internet-ready experience.
At the same time, AI tools were advancing quickly enough that I felt I needed a serious way to evaluate them. I did not want to watch the space move forward without building enough hands-on experience to understand what was useful, what was hype, and what would actually change how I work.
That made this project a strong fit. Building a website let me accomplish several goals at once:
Checklist
Why this project made sense
- Create a real website for my wife so she could showcase her craft work and the beautiful items she creates.
- Use a real project to evaluate AI tools, workflows, and development approaches.
- Sharpen my own skills and start transitioning from traditional coding toward stronger AI prompting and engineering habits.
How the project started: mostly coding by hand
In the beginning, I was still using what people would now call a much lighter AI-assisted workflow. I was doing most of the implementation myself, probably around 80 percent of the coding, and using AI mainly to speed up specific parts of development.
That usually meant working in VS Code with Copilot, planning a feature myself, building the component or behavior directly, and leaning on AI suggestions when they helped reduce repetitive work. This stage still felt very close to the development style I already knew. The AI was useful, but it was mostly acting as an accelerator rather than a partner.
Early workflow
The first phase was still very developer-led
I was choosing the approach, writing most of the code, and using AI to move faster inside a plan I had already formed. That made the workflow feel familiar, which was useful at the start because I was also learning how to ship a public-facing website.
The shift into plan and agent collaboration
Over time, my process became more structured. Instead of jumping straight into implementation, I started spending more time defining the feature first. I would brainstorm, research, and get clearer about what I wanted before asking the AI to help build it.
That led me into a plan-and-agent style workflow. I would use Copilot in plan mode to flesh out a feature or component with me, then switch into agent mode and let it handle more of the implementation work. This changed the nature of the collaboration. Instead of using AI only for isolated code suggestions, I was starting to use it to reason about the work before coding began.
Even in that stage, I was still very involved in the review loop. I would inspect the code, analyze what the agent produced, fix issues, and work with it to refine anything that did not match my requirements. That review step stayed important throughout the project.
What changed when I moved to full agent coding
Eventually I moved into a much more agent-driven workflow, especially after switching over to ChatGPT's Codex. By that point, I was no longer trying to fully define every feature before handing it off. I would often start with rough ideas or abstract goals and let Codex do a larger share of the discovery and planning work.
The process became much more structured and much more scalable:
Checklist
What I started asking Codex to do
- Research the feature I wanted and bring back multiple implementation options with pros, cons, and time estimates.
- Generate a roadmap, backlog items, work specs, and a phased implementation plan before coding started.
- Track progress in markdown documents so the feature work stayed organized and reviewable.
- Implement features in sequence while I reviewed each phase and redirected where needed.
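The planning artifacts in the checklist above lived in plain markdown files. As a rough illustration only (the feature name, options, and estimates here are hypothetical, not taken from the actual project), a phase-tracking document in that workflow might look something like this:

```markdown
# Feature: Photo Gallery   <!-- hypothetical feature name -->

## Options considered
| Option                | Pros            | Cons                | Estimate |
| --------------------- | --------------- | ------------------- | -------- |
| CSS grid + lightbox   | No dependencies | Manual lazy loading | 2 days   |
| Gallery library       | Fast to ship    | Extra bundle weight | 1 day    |

## Phased plan
- [x] Phase 1: data model and image pipeline
- [x] Phase 2: grid layout and responsive breakpoints
- [ ] Phase 3: lightbox and keyboard navigation

## Review notes
- Phase 2: breakpoint mismatch at tablet widths fixed; re-reviewed and approved.
```

A document like this is what makes the review loop described later possible: each phase has a visible status, and corrections get recorded next to the plan instead of being lost in chat history.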
This was the point where the workflow stopped feeling like assisted coding and started feeling like directed execution. My job shifted toward choosing direction, evaluating options, reviewing output, and deciding when a feature was actually good enough to keep.
What the review loop looked like in practice
One of the most useful parts of this workflow was how flexible it became. I could review each phase as it finished, work with Codex to fix issues, and keep the implementation moving without losing track of the bigger plan.
Sometimes, if it was late and a feature was already well defined, I would let Codex continue implementing while I stepped away. Then I would come back the next morning, review the progress, fix anything that needed correction, and decide what still had to be improved. If work remained, I would send it back into the next cycle and repeat that pattern until the feature was complete.
What stayed constant
AI did not remove the need for judgment
Even when the workflow became heavily agent-driven, review never stopped mattering. The more work the agent could do independently, the more important it became for me to evaluate the output carefully, catch mismatches early, and keep the implementation aligned with the actual goal.
What I learned from building the site this way
This project taught me that AI is most useful when it sits inside a real workflow with clear goals, real constraints, and a consistent review loop. It also showed me that my role was changing. I was spending less time only thinking in terms of raw implementation and more time thinking about prompting, orchestration, evaluation, and iteration.
That shift matters to me because it feels closer to where software work is going. I still care about code quality and technical understanding, but I also care much more now about how to frame work well, how to guide an agent, and how to keep momentum without losing control of the result.
Building this website with AI did not just help me ship a project. It changed how I think about the work of building in the first place.