Notes on Current AI (LinkedIn Post)

{ I am archiving these here, just because. Here’s the link to the LI Post, and then to the article itself. If you want to engage or amplify the post, please go there! }

{I plan to post occasional thoughts here, ranging from views on technology to leadership to organizational culture change. I hope they are interesting and even occasionally thought-provoking. We’ll start with some thoughts on generative AI.}

I am struck by the (alarmingly?) wide range of views on AI being articulated by people I respect. It’s clearly a very challenging topic to thread the needle on, and, as always, what we see is largely defined by where we stand. Critics gonna critique, consultants gonna sell, and if your livelihood is perceived as under threat from an emergent technology, that is certainly going to impact your position.

As an instinctive contrarian, I am attracted to people willing to say things that may, on the surface, seem to go against their interests. As such, I hold deep respect for what Devlin Liles is doing at Improving. His short videos about the challenges of successful AI implementation are very smart, very on point, and very useful as cautionary tales.

Clearly, he is implying that Improving is the best partner to overcome these issues (and indeed, they may be), but it is a decidedly different tone than the overselling of the ease and impacts of the usual sales pitch. And the specific obstacles Devlin points out–which boil down, once you move past the specific technical concerns, to scope, expectations, and stakeholder engagement–are well worth considering.

But Improving’s stance is very much focused on the enterprise. And I’m not sure that’s where AI is currently focused. It’s certainly not where the marketing efforts behind AI are aimed. My assumption is that’s because, in general terms, AI is still so, so, so … um … bad. My favorite recent example: I typed “Who played LF with Joe DiMaggio” into the search bar, and among the answers given were Lou Gehrig (a contemporary, but not a left fielder) and Paul O’Neill (a left fielder, but far from a contemporary). This is a pretty easy research question, solved in a few clicks.

The Copilot app and ChatGPT did pretty well on the question, but the results continue to vary pretty widely depending on which tool you ask and the specific phrasing of the question. This highlights the core of the challenge: the burden falls on the user, who cannot simply do the easiest thing and expect good results. This is, of course, an opportunity for expertise (that is, the better you are at crafting prompts, the better your results), but it’s an obstacle to common professional adoption.

A more potentially damning example is something I have seen quite often recently:

“Candidate is located in Houston, TX, which is relatively close to Bethesda, MD, allowing for feasible commuting or relocation.”

Eyebrow raised: Oh, really?

Another data point: I am served ads for Copilot where the enticing message is … make cartoon versions of me and my Dad, because that will bring us closer. I mean … far be it from me to downplay the generationally healing potential of laughter and joy. But I don’t know that AI art is the avenue to such reconciliation.

Once the AI slop spills out of the bottle, it’s not going back in. But that doesn’t mean we have to drink it. The Oatmeal, as it often does, explores this particular angle–AI and its relationship to artistic creation and art–far better than I could.

What I like about this is the nuance: there are legitimate uses of AI in the creation of art, but they must be approached with great care, and with our eyes open to the potential costs, not only in effectiveness and quality but also in the ethical price of contributing to the ongoing production of the slop.

As always, tante over at Smashing Frames is incredibly useful in his analysis of AI. His take on Agentic AI, which I am pretty sure Improving would protest strongly, is especially trenchant for those interested in figuring out the true economic (read: corporate; cartoon versions of the family will only ever generate but so much revenue) impact of AI.

Ultimately, tante sees AI, in its current place in the marketplace, as a hype engine: something that, due to its current limitations, can never actually deliver on its promise and, hence, is constantly forced to produce more promises. Some see this as the definition of a looming AI bubble, but tante intimates that perhaps bubble is the wrong vision. Instead, he envisions a massive hamster wheel: whenever one generation expires from sprinting at full speed, another hops on to take its place, because what is fueling the movement of money right now is the momentum of the wheel, without regard to its output. AI always needs to be promising the next great thing, because the current thing … as we’ve seen, not so great. See his full analysis; indeed, his full archive is worth a deep dive.

Obviously, that cannot last. But it may indeed last long enough for output that is genuinely useful and impactful to rise out of the slop. This is important. What is happening right now is not good. But that doesn’t mean it will never be so. Indeed, the way to avoid an AI-driven economic collapse may be for AI to start to deliver, ever so slightly, ever so impactfully, on its constantly evolving promises.

A final perspective. The editors at the magnificent n+1 Foundation recently posted a pro-Luddite screed against AI. It’s entertaining and fun to read and most certainly not wrong. And its takedown of the formulaic “I Tried AI and This Is What I Learned” article (which it terms “AI and I,” a coinage I just love) is borderline brilliant. But it’s also far from right. Waving one’s hands and asking people to Just. Stop. Using. It. is a reaction doomed to both mockery and failure. Thick skin can protect against the former, but the latter sticks. See the full post here (warning: it’s long; n+1 is a literary magazine, after all).

Still.

Michael Franti, way back in the Spearhead days, once sang, “Must everything in life have political ramifications, even taking kids on vacation?” And n+1’s point that the real concern over AI should be resource consumption, exploitation, and the continued service of the few against the many is clearly correct.

But the solution to that isn’t to ignore it. You can choose not to reward it (no, Copilot, I will not be burning resources to see anime versions of my family), but burying one’s head in the sand is a poor tactical choice.

So what to do?

As with all technology, the answer lies in judicious application, in steadfast skepticism of the hype, and in locating where and how the abilities of generative AI may enhance and add ease to life. Everyone’s calculus may differ, and if yours leaves you rejecting it in total on resource-consumption grounds, I think that’s an honorable position. But hold that calculus lightly: the resource cost will inevitably decrease, and AI’s presence will only increase.

#technology #ai #musings


One Response to Notes on Current AI (LinkedIn Post)

  1. I look forward to more of your thinking as it, and AI itself, evolve. I find ChatGPT immensely useful in my current writing project (for fetching data out of government publications), but I am resigned to later having to check everything for errors. My favorite AI slop encounter was a New York Post article about a grim discovery in a van. A man was found inside who was “unresponsive, unconscious, and badly decomposed.”
