AI, being human and the future of coding

Thoughts on AI in 2024

4 minute read
Photo by Alex Gruber on Unsplash: a man in a workshop carving wood

Artificial Intelligence (more specifically, large language models, or LLMs) has become a techno-cultural zeitgeist. It has grown rapidly, to the point of absurdity. AI lurks in every application we use, sits prominently in the blog posts we read, speaks through eerie YouTube videos and makes its emptiness felt as it permeates every facet of our digital lives. One question sits firmly at the forefront of my mind as I watch the world change: are we building AI as tools for humans to use, or are we building our replacements?

Software

Software, and the craft of writing it, is being reshaped by this new wave of AI. Being a software engineer means working in an ever-changing field. That’s what makes it exciting: you’re constantly learning, adapting and encountering new challenges. Until now, the evolution of software has felt natural. AI is demanding mindshare, and vast resources are being poured specifically into programming with AI.

Today, tools like GitHub Copilot sit firmly on the tool side, enhancing a programmer’s ability rather than superseding it. Copilot is often compared to a studious junior developer: it produces code that might not be great but can serve as a first draft, or at the very least save the tedium of typing. Given a well-scoped problem it can perform quite well, or it may hallucinate an imaginary API on a whim. Tools like Copilot are sold as productivity boosters, tools that need a programmer to supervise and shape them. But for how long will we remain in the driving seat?

We might not have to wait long before LLMs become capable enough to appear good at programming. It’s unclear to me whether non-programmers will use tools like this to justify downsizing engineering teams, and what that would mean for junior developers. Will companies use LLMs as an opportunity to cut costs by getting rid of junior engineers, and with them the path for career advancement? We’re entering an interesting period where we may see a shortage of experienced engineers as it becomes harder to get into the field. Time will tell, but it’s likely there will be disruption and far-reaching consequences that we can’t foresee.

Art, Music and Writing

LLMs appear to have some amount of intelligence because they are trained on vast quantities of data, overwhelmingly from the internet. Blog posts, technical documentation, scientific journals and Reddit posts are all fodder for LLMs. Will this very blog post be used to train an AI? Do I get a cut? Do I get attribution? Generating writing, images and music with AI devalues the medium it draws from. Not only that, but there have been instances of AI recreating art almost directly, robbing the original artist of credit for their creativity. This leads to less creativity in these areas and further contributes to a dying internet.

With AI, the idea that hard work and determination will prevail may be further disproven. A sufficiently intelligent machine can learn from the entirety of human history to distill answers, synthesise solutions and create anything in the digital world more effectively than a human can. No amount of hard work in the digital space can compete. Our only hope is that creativity and innovation will remain the realm of human thought rather than the machine’s, and that humans will continue to value the work of other humans.

Human thought

As it stands in 2024, LLMs are good at two particular tasks that make me wary of the future. Firstly, they are great at ingesting large amounts of text and summarising it. Secondly, they are adept at writing answers, padding out text and changing tone. We are months away from LLMs being in the hands of the average person without a subscription and without any “prompt engineering”. How will people use these tools in the real world?
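
To make that concrete, here is a minimal sketch of the kind of summarisation people already reach for. It assumes the OpenAI Node SDK and an API key in the environment; the model name, prompt and `summarise` helper are purely illustrative, not a recommendation or a description of any particular product.

```typescript
// Minimal sketch: condensing a long piece of text with an LLM.
// Assumes the OpenAI Node SDK (`npm install openai`) and an API key in the
// OPENAI_API_KEY environment variable. The model name is an assumption.
import OpenAI from "openai";

const client = new OpenAI();

// Hypothetical helper: boil an article down to a few bullet points.
async function summarise(article: string): Promise<string> {
  const response = await client.chat.completions.create({
    model: "gpt-4o-mini", // assumed model; swap for whatever you have access to
    messages: [
      { role: "system", content: "Summarise the following text in three bullet points." },
      { role: "user", content: article },
    ],
  });
  return response.choices[0].message.content ?? "";
}

// Usage: skim a blog post without reading it, which is exactly the habit I'm worried about.
const longBlogPost = "…the full text of an article goes here…";
summarise(longBlogPost).then((summary) => console.log(summary));
```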

My concern is that people will rely on AI to generate and publish content at an alarming rate. To cope with the flood, the people consuming that content will summarise it with AI rather than actually reading it. Will the appetite for long-form content diminish even further as we become accustomed to condensing every piece of text in sight? Then think of the torrent of content written by LLMs that will, in turn, be used as training data for LLMs. What value does anything have when it’s a copy of a copy of a copy? Maybe the rate of advancement in LLMs will plateau as we, as a civilisation, no longer have enough high-quality content to feed the all-consuming AI.

Coexistence

Can we coexist with AI? Other industries have been disrupted in the past; sometimes it was catastrophic for people and their livelihoods, other times a more diverse system rose to take the old one’s place. We can look back at our history for parallels.

When only a few could read, it was an expert skill that only some people could benefit from. Once reading became more widely taught, there was still a need for experts: authors, poets, journalists. The proliferation of reading created a need for specialists and wordsmiths. The mass production of books and news, ushered in by the invention of the printing press, created a new industry and made reading even more accessible to the masses. In modern times, digital technology has transformed the way we read yet again. Reading is more accessible than ever to people all over the world, yet books exist alongside their digital counterparts and authors exist alongside the avid reader. The balance has shifted over time, but it creates different opportunities as needs change and a new equilibrium is found.

When we look at a future where AI is more capable and applied to more industries, will it rebalance the status quo? Will it create new specialists, forcing us to rethink and change, or will AI absorb everything that can be done in the digital space, replacing what has for so long been the domain of humans?

If AI can learn from what we’ve already done and create derivative products, services and content, will we see a point where it is capable of running its own startups, making its own products to market and sell? The collective decisions of millions will inform the decisions of nascent LLMs. Entire industries could be commoditised and automated. Will this leave people disenfranchised, or will it empower us to stop creating derivative things and turn to innovation?

If AI can do the things humans once did, it has the potential to free people to be more creative, specialised and innovative. Equally, the biggest danger is that our capitalistic tendencies will prevail: a few companies will monopolise AI, extract the wealth and move on.

Conclusion

We are entering a time of uncertainty and disruption unlike anything we’ve seen before. The future of work isn’t as certain as it was a few years ago; the rate of change has accelerated and we’re left playing catch-up. What we do now shapes the future we will live in. AI could be a catalyst for great things, but it could also upset the balance we once knew, shift the values we hold and irreparably damage how we perceive what it means to be human.

Seth Corker

A Fullstack Software Engineer working with React and Django. My main focus is JavaScript, specialising in frontend UI with React. I like to explore different frameworks and technologies in my spare time. Learning languages (programming and real life) is a blast.