I think the surprising thing to me here is the high usage of ChatGPT (82%). Every time I try to use it, I find I can search for an answer quicker than it can give me one, especially when taking into account the time I have to spend figuring out whether what it's telling me is actually accurate, or whether it imagined functionality or features that don't actually exist.
For the more straightforward tasks it would probably do well at, Copilot seems like the better solution, since it's much more tightly integrated into my development environment.
I think some people's brains just don't grok what LLMs are good for. They either ask complex gotcha questions, which mostly get wrong answers, or use them as search-engine replacements.
I just happen to have a real world example of what I used Gemini (free version) for just yesterday open in a tab.
Me: "I want to write a Go program that connects to a running Plex instance and gets a list of unwatched episodes from a show called 'After Midnight'"
Gemini: gives me source code for what I want to do
Me: checks whether the GitHub URL for the main library works; it doesn't -> "Library <url> doesn't exist" (Gemini has a habit of suggesting old or defunct Go libraries, or hallucinating them)
Gemini: another attempt with a different library
Me: checks whether the GitHub URL for the main library works; it doesn't -> "Library <url2> doesn't exist"
Gemini: Admits that it doesn't have any more official Go Plex libraries it can suggest, and suggests considering either using an unofficial one or switching to a different language like Python, which has more options
Me: "Let's go with Python then instead of Go"
Gemini: Gives me 100% working code using the 'plexapi' library
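For a sense of what that looks like, here's a sketch in the shape of what `plexapi` makes possible (not the actual Gemini output; the server URL, token, and library-section name are placeholders you'd fill in for your own setup):

```python
PLEX_URL = "http://192.168.1.50:32400"  # placeholder: your server's address
PLEX_TOKEN = "YOUR-X-PLEX-TOKEN"        # placeholder: your personal token

def unwatched_titles(episodes):
    """Keep the titles of episodes that have never been played.

    plexapi episodes carry a viewCount attribute; 0 or None means unwatched.
    """
    return [ep.title for ep in episodes if not getattr(ep, "viewCount", 0)]

def main():
    # Imported here so the pure helper above works without plexapi installed.
    from plexapi.server import PlexServer

    plex = PlexServer(PLEX_URL, PLEX_TOKEN)
    # "TV Shows" is the default section name; yours may differ.
    show = plex.library.section("TV Shows").get("After Midnight")
    for title in unwatched_titles(show.episodes()):
        print(title)
```

Calling `main()` against a live server prints the unwatched episode titles; from there it's a short step to copying the underlying media files for offline use.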
At this point I had to spend more time getting my personal X-Plex-Token from the Plex server sources (personal project, no need to bring web auth into this), bootstrapping the project with Poetry etc. than it took me to get working code out of Gemini. After that I can start iterating towards what I want (a CLI tool to copy unwatched episodes from my Plex server locally for offline use).
All this would've taken me hours longer if I had to start the project from scratch and do the boring bits myself, manually digging through the API docs to find out how to connect to Plex and how the data is structured.
I've gone through a similar process using ChatGPT and Gemini multiple times. There's a simple-ish thing I want to do, and I use an LLM to get me to a point where I can start iterating towards a solution instead of having to start from scratch. Sometimes the first attempt Just Works; sometimes it's 90-95% there because the LLM used a wrong property or function somewhere.
You have to use the tools for what they are good at. As search-engine replacements, they are pretty lousy - that's not their strength. My three primary use cases:
- Correcting spelling and grammatical errors in a text I have written, especially in a language that is not my mother tongue. I still have to check the result, but this prevents a lot of silly mistakes.
- Getting examples of particular API calls, for things I don't do often. The AI (usually) delivers correct code examples, along with decent explanations. The result is generally better and more understandable than the official documentation.
- Just for fun, sometimes I ask about random obscure facts that have come up in conversation, or somehow aroused my curiosity. Ask a quick question, get a quick answer without wading through search results.
I kinda suspect it might be that ChatGPT is excellent at getting you to an "average" performance in any field.
My background is computational material science, but more on materials than the computational part. I have an ok broad knowledge of most CS topics but I'm always finding I'm playing catch up. My work also involves a lot of making research prototypes in areas I don't have time to get a proper background in.
For me GPT has had a transformative impact on my work.
For example, I had a lot of projects that needed Docker. I have an ok idea of what Docker is and what I want to do with it. But I don't have the time a real software developer has to learn the syntax, deal with subtle bugs, or figure out how to do basic things, e.g., "how do I ssh into my Docker container X?"
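For that last question, the answer an LLM typically gives is that no SSH is needed at all: `docker exec` opens a shell in a running container directly. A minimal Python wrapper as a sketch (the container name "X" is a hypothetical placeholder, and running it requires Docker on your PATH):

```python
import subprocess

def exec_cmd(container, shell="/bin/bash"):
    """Build the command list equivalent to: docker exec -it X /bin/bash"""
    return ["docker", "exec", "-it", container, shell]

def shell_into(container="X"):
    # Opens an interactive shell inside the (hypothetical) container "X".
    subprocess.run(exec_cmd(container), check=False)
```

In practice you'd just type `docker exec -it X /bin/bash` at a terminal; the wrapper only matters if the rest of your tooling is already in Python.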
I think I'm among the users best poised to make use of LLMs: I have a decent knowledge of what strategy I want to go for, but not the tactics. And I'm mediocre enough at programming that the LLM can usually beat me. Another example: I would just never write any unit tests, not enough time. With LLMs I can get simple, dirty tests done, and I know enough about testing to filter out the bad ones and tune the best ones.
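Those "simple, dirty tests" are usually of this shape; `slugify` here is a hypothetical stand-in for whatever function is actually under test:

```python
import re

def slugify(text):
    """Lowercase, replace runs of non-alphanumerics with single dashes."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def test_slugify_basic():
    assert slugify("Hello, World!") == "hello-world"

def test_slugify_collapses_separators():
    assert slugify("a  --  b") == "a-b"

# pytest would discover the test_* functions; here we just call them.
test_slugify_basic()
test_slugify_collapses_separators()
```

Crude, but filtering and tuning a handful of generated tests like these is far faster than writing them from a blank file.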
I see poor responders at the two extremes on either side of me. People who really don't know what they are doing and can't prompt-correct the LLM into doing anything better. And people who really know what they are doing, generally work on one tech stack/project, don't need help getting the dumb basics in place, and have more time to write things themselves.
I don't see how they're deriving these judgments from the question that was asked:
> Which programming, scripting, and markup languages have you done extensive development work in over the past year, and which do you want to work in over the next year? (If you both worked with the language and want to continue to do so, please check both boxes in that row.)
Desired is clear enough. If somebody said they wanted to work with XYZ next year, then it's desired. But how are they deriving "admired" from the fact that somebody had to work with XYZ this year?
The meaning is comically vague compared to just "used" / "want to use", and it's also frustrating that they never allow sorting by "admired" for some reason.
Edit: What do these words even mean in this context? It is so unnecessarily confusing.
Admire - to feel respect and approval for
Desire - an inclination to want things
Why pick these 2 similar words... that don't match up with the survey question:
"Which programming, scripting, and markup languages have you done extensive development work in over the past year" - so popularity? (what if you don't admire it?)
"Which do you want to use" - desire
The mouse-over seems to be sorted by the blue "Desire". But the list looks similar to a popularity list, except Go and Rust are above C#, C++, and Java.
What is going on?
Edit 2: The question does not map directly. A request to Stack Overflow: just label them:
"Used and want to keep using"
"Haven't used but want to"
As a programmer reading charts, precision and clarity are better than ambiguity.
Then what does desired mean? Is that haven't used it but want to? The cleverness of the alliteration is not even close to worth the confusion this causes every year.
From a Prolog perspective I think they're bad results.
It's more popular than languages like OCaml, COBOL or Nim. That's not too bad, I guess; I had somehow gotten the impression that there were more people using all those languages than Prolog.
However it is the second least desired language (after Zephyr), and the second least admired language (after MATLAB). That means people don't want to learn Prolog, and people working with it don't want to continue doing it.
But is it well paid? No: Ada and Prolog are the lowest-paid languages (though compared to 2023 there's a big change here, so there may be some noise).
I wonder what the combined percentage of Linux usage is for personal use. The results (distros) add up to 61%, but that likely overlaps (one dev using more than one distro).
- Operating systems. Linux is split across five different lines, but if you add these up, personal use of Linux is nearly as high as Windows, and professional use is much higher.
- IDEs: I was shocked at the huge dominance of VS-Code.
- AI tools: Disappointing representation of open-source tools, even though some of them are quite good.