The study tracked around 800 developers, comparing their output with and without GitHub’s Copilot coding assistant over three-month periods. Surprisingly, when measuring key metrics like pull request cycle time and throughput, Uplevel found no meaningful improvements for those using Copilot.
It’s a glorified autocorrect. Using it for anything else and expecting magic is an interesting idea. I’m not sure what folks are expecting there.
But I don’t ask it to explain things or generate algorithms willy-nilly. I don’t expect or try to have it do anything more than simple auto-completion.
I honestly like it, even if I strongly dislike the use of AI elsewhere. It’s working in this area for me.
I’d not been too keen on Copilot, but then we got it at work, so I tried it. In my previous position, working on an ancient Java project with no rhyme or reason to it, a codebase which belongs in hell’s fires, it was mostly useless.
I switched to a modern web developer position where we do a lot of data manipulation, massaging it into common types to visualise in charts and tables, and there it excels. A lot of what we do uses the same datasets, which are then aggregated into one of a set of common types, so Copilot often “understands” what I intend and gives great 5-10 line suggestions.
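To give a feel for the kind of transform I mean, here’s a minimal sketch of aggregating raw rows into a common series shape for charts. All the names (RawRow, ChartSeries, toChartSeries) are made up for illustration, not our actual code; it’s the repetitive 5-10 line shape of it that Copilot tends to get right.

```typescript
// Hypothetical example of massaging raw rows into a common chart type.
interface RawRow {
  timestamp: string; // e.g. "2024-03-01"
  category: string;
  value: number;
}

interface ChartSeries {
  label: string;
  points: { x: string; y: number }[];
}

// Group rows by category and sum values per timestamp.
export function toChartSeries(rows: RawRow[]): ChartSeries[] {
  const byCategory = new Map<string, Map<string, number>>();
  for (const row of rows) {
    const series = byCategory.get(row.category) ?? new Map<string, number>();
    series.set(row.timestamp, (series.get(row.timestamp) ?? 0) + row.value);
    byCategory.set(row.category, series);
  }
  return [...byCategory.entries()].map(([label, points]) => ({
    label,
    points: [...points.entries()].map(([x, y]) => ({ x, y })),
  }));
}
```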
These last three weeks I’ve had the massive task of separating our data processing into separate files so we can finally add unit tests. The refactoring itself was easy with IntelliJ, and Copilot quickly wrote tests with 100% coverage, which allowed me to find a good number of previously undiscovered bugs.
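The tests themselves are mostly boilerplate once the transforms live in their own modules, which is exactly where Copilot shines. A sketch of what that looks like, assuming the hypothetical toChartSeries above and Vitest as the test runner (neither is our actual setup):

```typescript
// Hypothetical unit test for the extracted transform above.
import { describe, expect, it } from "vitest";
import { toChartSeries } from "./toChartSeries";

describe("toChartSeries", () => {
  it("aggregates rows by category and timestamp", () => {
    const rows = [
      { timestamp: "2024-03-01", category: "a", value: 1 },
      { timestamp: "2024-03-01", category: "a", value: 2 },
      { timestamp: "2024-03-02", category: "b", value: 5 },
    ];
    expect(toChartSeries(rows)).toEqual([
      { label: "a", points: [{ x: "2024-03-01", y: 3 }] },
      { label: "b", points: [{ x: "2024-03-02", y: 5 }] },
    ]);
  });

  it("returns an empty array for no input", () => {
    expect(toChartSeries([])).toEqual([]);
  });
});
```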