Why Expert Developers Dismiss AI Tools as “Vibe Coding”

Introduction: What the Research Discovered
Though numerous AI coding tools are now widely available, professional developers remain reluctant to embrace “vibe coding,” the practice of letting AI generate large sections of code with minimal human input. A new study finds that skilled programmers prefer to stay in control, carefully directing and reviewing the AI's work rather than handing over the whole process.
Research Behind the Findings
A research paper titled Professional Software Developers Don’t Vibe, They Control: AI Agent Use for Coding in 2025 was published on January 5, 2026, by a team of researchers from the University of California, San Diego, and Cornell University. Its main objective was to find out how experienced developers, defined as those with at least three years of professional coding experience, incorporate AI coding tools into their daily workflows.
The research combined 13 field observations with a survey of 99 developers, examining real production environments rather than speculating about future use cases.
Response to Public Debate
The study took note of a public debate ignited by computer science professor Pedro Domingos, who posted on X that “AI coding tools don’t work for business logic or with existing code.” His claim drew not only pushback but also examples of successful AI tool use shared by other software engineers. The controversy highlighted a gap between public perception of AI coding tools and how they are actually used in the field.
Developers Don’t “Vibe” — They Control
In contrast to the vibe coding narrative, the research revealed that professional developers:
- Maintain design control: No participant was observed handing an entire feature over to AI without planning or supervision.
- Decompose tasks: Developers rarely let AI perform more than two or three steps at a time before reviewing the result.
- Modify AI output: In most cases, developers edit AI-generated code rather than accepting it as is.
- Use comprehensive prompts: The most successful AI outputs came from context-rich prompts that included, for instance, filenames, specific functions, and the expected behavior.
Developers also reported that AI is most helpful for routine tasks such as writing tests, refactoring code, generating documentation, and everyday debugging. As tasks grew more complex, especially those involving business logic, AI tools were considered less trustworthy.
Conclusion: Oversight Still Matters
Experienced developers agreed, without exception, on one central point: AI assists but does not replace human decision making. No one in the study let AI operate entirely on its own on production code. Instead, they treat AI tools as partners that handle monotonous tasks faster, while the developers themselves remain accountable for the key design choices and the final quality check.