GitHub Copilot

We recently had staff from GitHub do a presentation on how Copilot can be useful to software developers. I found their answers a bit wishy-washy. It’s a really complex topic, and having what I think were essentially sales managers pitch something technical to us was a challenge: they didn’t have a full understanding of how the product actually works.

Someone asked whether Copilot only looks at your open documents, or whether it has permission to see all the other files in your repository. The answer was vague, along the lines of “it might do; could come down to chance”.

For it to be effective, it really does need to look at your codebase to see what your product does, what features are already developed and, for bonus points, your coding style.

When it suggests calling third-party code or installing additional libraries, does it understand that you may need to abide by a certain licence (pay a fee, or not use it in open-source projects, etc.)? Does it know that you may be limited to a certain version of a library because of your other dependencies? And when features and the API (required parameters and so on) can change drastically between versions, does Copilot understand that?
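To make the version problem concrete, here’s a minimal, self-contained sketch. Nothing here describes a real package; the “library” is made up and defined inline, with the two functions standing in for the 1.x and 2.x signatures of the same call:

```python
# A made-up library whose connect() signature changed between major versions.
# connect_v1 stands in for the pinned 1.x your project actually uses;
# connect_v2 stands in for the newer 2.x an assistant may have learned from.

def connect_v1(host, port):
    """1.x style: positional host and port."""
    return f"connected to {host}:{port}"

def connect_v2(url, *, timeout=10):
    """2.x style: single URL plus keyword-only timeout."""
    return f"connected to {url} (timeout={timeout}s)"

# A 2.x-style call reads perfectly plausibly...
print(connect_v2("db://db.example.com:5432", timeout=5))

# ...but the very same call fails against the 1.x signature you are pinned to:
try:
    connect_v1("db://db.example.com:5432", timeout=5)
except TypeError as e:
    print(f"1.x rejects the 2.x-style call: {e}")
```

Both calls look equally sensible on the page; only knowledge of which version your project pins tells you the second one will blow up.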

It’s probably the same scenario that the Wolfram Alpha people described when they came to our company to do a presentation on AI. They were emphasising how standard language models often produce text which reads like it makes sense but is actually nonsense. They gave an example where a model cited a real journal from a particular country, stated the title of a chart that genuinely exists, and quoted figures and years, but the figures were fictional.

I saw a news article about a lawyer who presented documentation about similar cases to a judge, but it turned out the lawyer had used ChatGPT, and it had made up the case numbers and years.

The way those models work is that they know some related words and they know sentence structure, but the likes of ChatGPT doesn’t understand that something like that needs to be accurate and that you can’t make stuff up. So Wolfram were saying their plugin can be combined with ChatGPT’s conversational structure, plugging actual figures into otherwise well-written text to make accurate essays. TEAMWORK.
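If I understood the pitch, the pattern is roughly “the model writes, the tool supplies the facts”. Here’s a minimal sketch of that idea with made-up function names and made-up numbers; it isn’t Wolfram’s actual API:

```python
# Sketch of the "LLM drafts, computation engine fills in facts" pattern.
# Both functions are placeholders, not any real service.

def draft_with_placeholders(question: str) -> str:
    """Stand-in for the language model: fluent prose, no trusted numbers."""
    return "GDP grew by {gdp_growth}% in {year}."

def lookup_facts(question: str) -> dict:
    """Stand-in for a computational engine returning verified figures
    (the values below are invented for illustration)."""
    return {"gdp_growth": 2.1, "year": 2022}

def answer(question: str) -> str:
    # The model handles structure and wording; the engine handles the facts.
    template = draft_with_placeholders(question)
    facts = lookup_facts(question)
    return template.format(**facts)

print(answer("How fast did GDP grow?"))
```

The division of labour is the whole point: the model never gets to invent the numbers, only the sentences around them.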

I would imagine there’s a good chance Copilot has exactly the same issue: it knows a bit of structure and writes in the correct programming language, but it has no idea that the code it’s suggesting belongs to a library version you aren’t actually using.

From what I have seen of Copilot, it is very impressive, but it often gives you code that doesn’t quite compile. Even so, it gives you a good template and inspiration for how to progress.
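To illustrate the kind of “almost right” suggestion I mean, here’s a made-up example (not an actual Copilot output): the shape of the code is a perfectly usable template, but the suggested version fails as soon as it runs.

```python
# Hypothetical "suggested" completion: plausible-looking, but iterating over a
# dict yields its keys (strings), so order["price"] raises a TypeError.
#
#     def total_price(orders):
#         total = 0
#         for order in orders:
#             total += order["price"]
#         return total

# Fixed version: iterate over the values, which are the order records.
def total_price(orders):
    total = 0
    for order in orders.values():
        total += order["price"]
    return total

print(total_price({"a1": {"price": 5}, "a2": {"price": 7}}))  # 12
```

The broken version still told you the right function name, loop shape and return pattern; you just had to actually read it before trusting it.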

In the past I have seen people blindly copy code from the internet, or just do what a colleague suggests without actually thinking about it. I think we are gonna be seeing this more from now on, but it’s gonna be the AI’s fault.

I am not against AI in programming; it can speed up development in certain tedious areas. But it always comes down to the idea that the best programmers are the ones with a quality-first mindset, and I think AI is gonna produce more developers with the wrong mindset, because it’s all about speed and cutting corners.

I’ve heard people suggest that the next wave of developers could be so dependent on AI that they’ll be unable to come up with a solution themselves when the AI doesn’t get it right.
