DevRel Insights on AI-Assisted Coding
Last week was a powerful reminder that learning never stops, especially when you join a team that's pioneering how developers handle code at massive scale. I’m Dylen, the new Developer Relations lead at Brokk, and I'm here to bridge the gap between our engineering team and the community building on our platform.
After diving into the Brokk platform over the past few weeks, I'm sharing some of my most impactful takeaways. These aren't just feature highlights; they're fundamental shifts in how we should think about AI-assisted development and the LLMs we work with.
What I Learned Last Week
The biggest lessons last week confirmed that focused tooling and deep context management are paramount for large-scale development, and that Gemini 3, while impressive in many ways, is still not quite S Tier.
Why Compiler-Level Context is Essential for LLMs
My first key takeaway confirms that accurate context is a core differentiator when working with AI dev tools. Most AI coding assistants fail because they treat your codebase as disconnected text files.
- The Insight: Brokk overcomes this by analyzing the entire repo, every dependency and relationship, to build a much deeper relational understanding of your codebase.
- Why It Matters: This semantic understanding is the difference between an AI that guesses and an AI that generates practical results, one you supervise rather than babysit. The sketch below shows the kind of relationship a purely textual view misses.
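To make that concrete, here is a minimal, hypothetical Java sketch of my own. The names are invented and this is not how Brokk models anything internally; the point is simply that some relationships in a codebase never appear as text you can grep for.

```java
// Hypothetical sketch: why treating the repo as disconnected text files falls short.
// Imagine RateLimiter and CheckoutService live in separate files across the repo.

interface RateLimiter {
    // If a branch changes this to acquire(int permits), what breaks elsewhere?
    boolean acquire();
}

class CheckoutService {
    private final RateLimiter limiter;

    CheckoutService(RateLimiter limiter) {
        this.limiter = limiter;
    }

    boolean placeOrder(String orderId) {
        // This call site is at least findable by grepping for "acquire"...
        return limiter.acquire();
    }
}

public class ContextDemo {
    public static void main(String[] args) {
        // ...but this lambda also implements RateLimiter, and it never mentions the
        // interface or the method by name. Only a semantic view of the repo (types,
        // implementors, call graph) connects it to a signature change upstream.
        CheckoutService service = new CheckoutService(() -> true);
        System.out.println("order placed: " + service.placeOrder("ORD-42"));
    }
}
```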
The End of Manual Merge Conflict Resolution
It is shocking to be reminded of how much time we still lose to manual Git conflict resolution.
- The Insight: The Brokk Merge Agent doesn't just patch markers; it analyzes the intent of the commits from both branches to synthesize a working resolution.
- The Result: We can finally stop hand-fixing conflicts that an intelligent system should be able to handle instantly; the toy example below shows the kind of conflict I mean.
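For anyone who hasn't stared at one lately, here is a hand-written, hypothetical conflict (invented branch names and code, not actual Merge Agent output), exactly as a textual merge would leave it:

```
<<<<<<< feature/validate-input
    OrderResult submit(Order order) {
        Objects.requireNonNull(order, "order");
        return gateway.send(order);
    }
=======
    OrderResult submit(Order order) {
        return retry(3, () -> gateway.send(order));
    }
>>>>>>> feature/add-retry
```

One branch added null-checking while the other wrapped the send in a retry, so a marker-level patch has to throw one of those intents away. A resolution that honors both would look more like this (again hand-written for illustration):

```
    OrderResult submit(Order order) {
        Objects.requireNonNull(order, "order");
        return retry(3, () -> gateway.send(order));
    }
```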
Mastering the Loop with the Context Window
While reviewing Lutz’s session on debugging, authoring a fix, and testing it against the Blender project, I grasped the real value of a persistent Context Area and of guiding the work through a Task List.
- The Insight: This isn't just a chat client; it’s a workspace where you can pull in any files, code snippets, diffs, or issue history.
- Why It Matters: It ensures you never lose the thread during a complex development cycle, maintaining state from the first issue report to a final fix.
Check out the latest video from Lutz with Brokk and Blender below.
The November Power Rankings Are In, and Gemini 3 Is Not Quite There Yet
The team at Brokk just published the November Power Rankings, and the findings paint a different picture from the current narrative about which LLM is on top. It is not the model you would assume, given the buzz around the net.
- The Insight: Haiku, with its crazy-fast response times, and GPT 5, with a solid cost-to-speed ratio, both give responses intelligent enough in our testing to still hold the top slots. The newer GPT 5.1 lands at A Tier, at least for API customers, and Gemini 3 comes in at C Tier. Learn more about why below. Don't believe the hype, I guess.
- Why It Matters: If you're chasing the trend line, it may lead you the wrong way. To get the best out of your models, you need a deeper understanding of where they shine and where they don't, plus a tool that can easily switch between them based on the nature of the task at hand. Check out the rankings below for more insights.
Gemini 3 Pro Preview: Not Quite Baked
Looking Ahead
It's clear that scaling development efficiently hinges on better context management and guided development work. These lessons demonstrate the Brokk engineering team's commitment not just to innovating, but to engaging directly with the open dev communities, something near and dear to my heart. I am dedicated to helping you master this platform and solve the challenges of your largest codebases.
Give Brokk a try, come join us on Discord, or hit me up on LinkedIn and tell me: what was the single most surprising thing you learned?