A.I.: Is The Ecological Cost & The Beige-ification of Information Worth It?

By Lauren Gaggioli

Truth be told, I'm a bit nervous about sharing this episode. I've learned that, when I feel this way, it usually points to 2 things for me:

  1. It's an indication that I care deeply about the issue.
  2. It's time to ship it so I can invite other voices into the conversation.

When I'm stewing on a topic, I can become a bit obsessive and introverted. I tend to curl myself around the issue at hand, as though my isolated study will provide the pressure and polishing needed to turn an amorphous chunk of mental carbon into a precisely chiseled diamond ready to be shared with the world.

But that's not really the way of things, is it?

Conducting research and sitting with new information is certainly one step. But it can only be the starting point.

A good old-fashioned think is all well and good, but we really need the foil of others to fully wrestle with the implications of our own line of thinking. I find my ideas become more nuanced when I engage in informed, thoughtful conversation about them with others.

So here's my request of you, friend.

Will you listen to this episode, take a look at the resources shared below, then consider joining the conversation?

This topic is far too important for us not to have a chat about it. I would love to hear your position on A.I. and how you're thoughtfully deploying it or why you're opting out.

I'm looking forward to hearing your thoughts in the comments below.

TL;DL - My Thoughts On A.I. Summarized

I'm greatly concerned about A.I. for 2 primary reasons. The first is the ecological toll that A.I. is taking on our world. The second is the quality of the information that A.I. is generating and the trickle-down effect that may have on critical thinking and creative license down the line.

1. The Ecological Impact of A.I. 

As I share in this episode, Google conducts up to 8.5 billion searches per day. Gen A.I. queries take up to 5 times the real-world resources to deliver those results.

And that's just for Google search queries, which is - frankly - the tip of the iceberg when we think about all the myriad companies that are training and leveraging A.I. to run their businesses behind the scenes or to provide an A.I. add-on to the SaaS product you're already using.

Training ChatGPT released an estimated "500 metric tons of greenhouse gas emissions—equivalent to about 1 million miles driven by a conventional gasoline-powered vehicle," according to the Brookings Institution. "Updated versions such as GPT-4 have much greater needs and generate higher carbon emissions, though a lack of accessible input and output information renders analyses difficult."

And that was just the training of LLMs (large language models). As futurist Amy Webb predicts in her 2024 SXSW talk about A.I. and the convergence of two additional general-purpose technologies, this super cycle will also lead to A.I. training on LAMs (large action models) fed by all the details from our wearable devices, meaning there will be considerably more data to crunch than language alone could ever provide.

I'm concerned both about the ecological cost that ingesting this ever-growing mountain of data to train A.I. will no doubt incur and about some of Webb's predictions for how this technology will negatively impact our society if stringent regulations are not put in place.

2. The Beige-ification of Information and Its Impact On Us

If you're a Disney World fan like me, you probably know the story of the "river of poo" that flows through Liberty Square.

If you're not, let me explain...

In colonial America, there were no indoor toilets. Instead, at the end of a long night, bedpans would be flung out windows, the sewage coalescing and flowing "right down the middle of main street". When Disney Imagineers created Liberty Square, they did not include bathrooms in the design and also put a lovely little strip of dark brown down the middle of the street.

To me, this is what A.I. is doing to our ability to gather quality information.

When we search, we're getting the most boring, uncited, unnuanced coagulation of information we could ever hope to read.

Now this doesn't matter much if, say, you want to know what time a movie is showing. So long as the information is correct and A.I. hasn't hallucinated, bully for you. Speedy, accurate, friction-free results. No second click needed.

But what if you want a movie review? In that case, you might want to know why somebody liked or disliked a movie. Or what that particular movie reviewer's taste in movies is to see if their preferences generally align with yours.

In the exact opposite way that social algorithms are polarizing us and trapping us in echo chambers of those who agree with us, A.I. is dragging information and opinions to the middle of the road, removing nuance and vibrancy in the process.

Setting aside additional concerns over I.P. and ownership for a moment: this prospect terrifies me.

We already have a crisis around critical thinking in our country. Lazily deployed A.I. can only make it worse.  

As ex-Googler Mo Gawdat shares on Steven Bartlett's Diary of a CEO podcast, if he could turn back the clock - push a button and make A.I. disappear - he would.

And so, for now, I stand to the side and let the river of A.I.-generated poo flow by.

Yes! I even scroll down and click through on Google! Blue links or bust, baby.

I trust that, if and when I feel the desire to engage with this tool, I will be able to pick it up and learn to wield it well.

But, for now, to me, the cost seems too great. I have no desire to use my time and energy to engage with a technology that is taking so much from us without offering anything other than the promise of getting more done without any commitment to elevating the quality of the work.

Frankly, as I write this on the cusp of the holiday season, I can't help but think we already have access to too much. 

What if we did less? What if we did less, better? What if we did it with our own brains, aided by peer-reviewed studies we could cite and the information housed in books that took more than 30 seconds to write, rather than letting a computer do it for us?

What if the answer was in being more human? What if we did as Ryan Levesque thinks we ought to and zigged when they zagged?

Creation itself is a primal human urge. And it's one I don't believe any underground server farm, heavily trained on what already exists, will ever be able to satisfy in quite the same way a human being can.

So I am committed to staying A.I.-free for the time being. That may change down the line but, for me, right now, the costs seem far too great.

But to discuss is human too. And I've had a chance to share my thoughts.

I hope you'll share yours here in the comments below. I'm looking forward to hearing where you stand on all things A.I.