AnySoft Case Study
AnySoft: Scaling AI Development with a Quality Feedback Loop
AnySoft paired Devin with Propel to scale AI-driven development while keeping review quality, engineering standards, and release rigor high.

Challenge
As AnySoft's engineering velocity increased, powered by both their team and AI coding tools like Devin, the volume of changes moving through the development cycle grew rapidly.
With more code being produced, the team wanted to ensure that:
- review quality stayed consistently high,
- internal standards were applied uniformly across contributors,
- engineers spent less time on repetitive reviews, and
- the workflow could scale without adding bottlenecks.
AnySoft needed a system that could match Devin's speed with high-quality review.
Approach
AnySoft paired Devin for fast code generation with Propel for rigorous code review, forming a tight loop that delivered both velocity and quality.
Devin generates the initial implementation. Propel reviews the PR, catches issues, enforces standards, and provides high-signal suggestions.
This simple pairing created a scalable, dependable review cycle that fit naturally into their existing workflow.
Solution
AnySoft needed a reviewer that did more than point out issues: one that guided the rest of their AI workflow.
Propel's tailored comments surfaced the context that agents needed to improve or regenerate code, making the entire loop more reliable.
As a result, Propel became the quality layer that kept pace with AI-generated output.
Results
The quality loop gave AnySoft a measurable way to scale AI-generated output without losing engineering rigor.
- 66% of PRs improved: two-thirds of PRs had at least one Propel comment implemented.
- 76% implementation rate: more than three-quarters of review comments were addressed through code changes.