Upsides and Downsides

Don't forget about the humans.

Aidan Brotherhood

As with any new technology, AI is creating new opportunities and driving real change for many people, from automating mundane tasks in the workplace to transforming creative industries.

But what specifically is it changing?

The why behind AI is somewhat obvious—AGI, singularity, playing god. I’ll explore the deep ethical, moral, and philosophical implications of this technology another time and spare you the fluff.

Usually, we start with ‘why’ when we have these conversations, but at this very moment, I’d argue the ‘what’ is more important.

What is AI impacting, and what do we need to be aware of?

The buzz around OpenAI right now is akin to what Google was experiencing the better part of 25 years ago. While I might not have been alive to truly understand what that excitement was like, I can at least speculate.

So often, we bow ignorantly to technological innovations without properly considering the impact they have.

For Google, their upside was the democratisation of information (and of course, that was a good thing), but their downside includes the monopolisation of that information, potential bias in search algorithms, and an overwhelming influence on how information is consumed globally—a much more complex and nuanced issue.

If on the other hand, we look at Facebook (now Meta), it’s much more straightforward.

Their upside is connection—they created a space where people can socialise online.

Their downside? Dopamine hacking to increase screen time to serve more ads to people as a means of generating more ad revenue. This can only ever have a negative effect on an individual’s life, especially that of a young person. I wouldn’t exactly call it putting the customer first.

And even today, we still haven’t put the proper mechanisms in place to protect children, young people, and even adults from the addictive and soulless behaviours that platforms like Meta and TikTok drive.

But what about AI?

Is it too early to say?

The upsides are as deep as they are broad, but there is one key area I’ve been able to identify that is being negatively impacted by AI.

But before I continue, it’s important I make it crystal clear that the solution to this is not to scale back, hinder AI development, or prevent people from using it (we couldn’t even if we wanted to)—but instead to adapt to it.

There is no question that students are using AI to get through school at the lowest and highest levels.

For the student, you would expect this to increase their academic performance, which in turn would hypothetically lead to more positive outcomes when entering the workplace—this still might be true.

And for the institution, you would think increased academic performance across the board would lead to a better reputation and consequently better relationships with employers—this also still might be true.

But my fear is that in actuality, the opposite will occur.

I believe reliance on AI will cause universities to produce the most uneducated graduates we’ve ever seen. Reports already describe students using AI tools to complete assignments without understanding the material, leaving them with a superficial knowledge that doesn’t hold up under practical application.

Many students will enter the workforce looking great on paper but will likely fall considerably short of expectations when put under even minor stress tests.

If this continues to happen, the value of obtaining a degree will fall further, and its validity will come into question for employers.

Right now, many academic institutions treat AI like the elephant in the room that no one wants to talk about.

Instead of openly discussing the ethical implications of using AI for academic work, students are often shamed for using a technology that they should be embracing.

The responsibility lies not with the student for leveraging modern technology, but with the institution for failing to establish clear guidelines for fair AI use and ensuring students do not misrepresent AI-generated work as their own.

To accomplish this, the way universities test students has to evolve to put human performance first.

We've reached a point where institutions have to be willing to raise the bar.

Otherwise, we run the risk of humans depending on AI so much that they forget to invest in their own intellectual development.

But why am I saying all this?

I'm actively working on a platform to try and address this exact issue.

It's still conceptual, but the goal is to build a platform that uses AI to accelerate learning through constant feedback and analysis, helping students learn faster.

Humans can't be left behind as this technology progresses.

If implemented correctly, AI should make us superhuman in all the right ways.

