125 Comments
Sep 5, 2023 · Liked by Brian K. Vaughan

I'm pretty staunchly anti-AI at this point. You mentioned ChatGPT being fed by underpaid workers, which is one thing, but I think it was a former Google engineer who said it's too powerful, too soon. There aren't enough safeguards in place at this point, and while we're not at the point of computers taking over and killing humans, the prompts humans feed these systems could lead to some very dangerous reactions by others.

Sep 5, 2023 · Liked by Brian K. Vaughan

The more I see of AI, the more I agree with you. Tech rarely moves backwards. There is already such a messy and dark underbelly to the AI programs that the people having harmless fun are quickly getting overshadowed by people trying to use it as a tool to take advantage of others.


Of course, by the time we're at the point of an AI taking over and killing everyone, it will be too late.

founding
Sep 5, 2023 · Liked by Brian K. Vaughan

I hardly know enough about AI to be able to quantify all of the good that it has done, especially within the health sciences. Ignorantly speaking, using the powers of tunnel vision, I wish technology had paused in the '90s, when phones were flip phones, the internet was dial-up, video games and movies required some imagination, and this Substack subscription would have been something I had to have mailed to me. As Eric Carle, author of The Very Hungry Caterpillar, said, “Simplify, slow down, be kind. And don’t forget to have art in your life – music, paintings, theater, dance, and sunsets.”

Sep 5, 2023 · Liked by Brian K. Vaughan

“Be kind” is so important. The internet is such a beautiful way to be able to connect across the world, it should be something used to bring us all together as humans. But naw, people get pissed off and think that gives them the right to say literally anything, it’s crazy out there.

founding
Sep 5, 2023 · Liked by Brian K. Vaughan

Wow, this is definitely a lot to think on. AI is something being used in the field of medicine to work on new treatment modalities, and I think that a few months ago, the Cleveland Clinic and IBM turned on the first quantum computer dedicated to just that. It might end up being Ultron. We have yet to see. As far as my direct job goes, it might be a while before the industry embraces robot nurses. I'll likely be retired by that point. I do really hope that the strike ends soon for you guys; it is really shitty that those CEOs would only need to make a minuscule cut to their literal dragon's hoard of wealth to make this work. As far as the hug, I'm down. I just went 1 for 4 at Magic today and could use a pick-me-up. Also, I'm winning at my contest. So thanks to anyone who happened to click the link and vote for me!

founding
Sep 5, 2023 · Liked by Brian K. Vaughan

If I am ever in a bad way, I would want my nurse to be Nurse Dan. F those robots. Wait. Are they sexy robot nurses?

founding
Sep 6, 2023·edited Sep 6, 2023

I appreciate that. If you ever need a colonoscopy and happen to be in the Cleveland area, I know a few people.

I feel like a nurse robot would end up like R2-D2 in Jabba's palace. Just a serving tray full of pills and maybe a syringe attachment. Skynet might incorporate a sexy motif in the future nurse robots. We will find out eventually.

founding
Sep 5, 2023 · Liked by Brian K. Vaughan

The studios will use AI to make screenplays, then have non-WGA writers sign off on them. Then those writers will get feature credits and become WGA members. Then those guys will nuke the union next contract. :/

I thought Dr Horrible was the best thing to come from the last strike. But it was definitely a harbinger of things to come for short forms, mini seasons, and creators revealed to be problematic. :/


I didn't/don't think the WGA works like that. I could be wrong, though.

Sep 5, 2023 · Liked by Brian K. Vaughan

I'm pretty scared for the future now, since according to Val, we're in for Trump administrations in the plural?! I'm already wishing for a time machine so I can poke ahead 14 months and make sure that hasn't happened...

author

Maybe she was referring to one of Donald's unexpectedly benevolent grandchildren...?

Sep 5, 2023 · Liked by Brian K. Vaughan

Oh I hope so -- and that Spectators-like, he has to watch it all happen...

Sep 5, 2023 · Liked by Brian K. Vaughan

This makes me feel a *little* better about that S.


Thank you for saying that, Brian. Cuz, really, more of the orange-haired clown I (WE) do not need. And I love the idea of a future generation working to undo the harm the idiot grandpa caused. You would think of that cuz you are you. Mil gracias!

Sep 5, 2023 · Liked by Brian K. Vaughan

I use ChatGPT to help me write emails to clients, which saves me some time. So that’s nice, especially since I find myself writing very similar emails all the time, just slightly different for each client. I’d rather not be the guy who has to email all the clients all the time, tbh.
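
A minimal sketch of this kind of templated-email workflow, assuming the OpenAI Python SDK; the model name, template text, and client details are illustrative placeholders, not the commenter's actual setup:

```python
# Hypothetical sketch: draft near-identical client emails from one shared template.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY env var.
from openai import OpenAI

client = OpenAI()

TEMPLATE = (
    "Write a short, friendly status-update email. "
    "Client name: {name}. Project: {project}. Key update: {update}."
)

def draft_email(name: str, project: str, update: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": TEMPLATE.format(name=name, project=project, update=update)}],
    )
    return resp.choices[0].message.content

print(draft_email("Dana", "Q3 site redesign", "wireframes approved, build starts Monday"))
```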

Sep 5, 2023 · Liked by Brian K. Vaughan

So, this made me remember Roko's Basilisk. It's not something I believe in, but it is definitely something I understand people being fearful of. Do I think an A.I. will take over the world? No. Do I think A.I. will be both good and bad? Yes. I don't necessarily fear A.I., but I think there should be limits. Not every situation is black and white. There are a lot of grey areas. The humanity of anything, especially art, is what makes it interesting. I think A.I. will be useful, but it should not be relied on 100% for anything. I don't fear losing my job to A.I., which I recognize is a privilege, but maybe I'm just naive as well.


You were imagining a weird human in a box. Someone you could have a human-like relationship with. AI is not like that.

The component pieces that explain the prediction that AI will kill us all are not at all difficult to understand. But people reject the prediction based on aesthetics, psychoanalytic ad hominem, and dubious "base rate" arguments.

I'll try to summarize the explanation for the prediction quickly in my own words:

A: A human-level AI would not stay at human-level for long. There is no law of the universe that says that it is impossible to be smarter than a human. Any 'mind' (or whatever you want to call it) that is good enough at altering itself can alter itself to be more intelligent, which would make it easier to come up with further ways to make itself more intelligent, and so on. This would start a feedback loop that ends with something extremely intelligent.

A1: Something much smarter than humanity would be much more powerful than humanity. Occasionally people dispute this, but most people don't find this statement objectionable. In the interest of brevity, I won't elaborate on this point.

B: Humanity does not know how to instill a superintelligent AI with human morality, nor how to make it corrigible, nor how to install restraints that it couldn't circumvent.

C: The idea that creating a being far more powerful than humanity without human morality would most likely result in human extinction is something that many people grasp intuitively, or at least intuitively find plausible enough to be worrisome. For more information, check out this video essay: https://www.youtube.com/watch?v=ZeecOKBus3Q

If you don't want to watch that, the concept is called "instrumental convergence". Basically, no matter what one's goals are, there are things that tend to be useful for that goal: self-preservation, resources, and dealing with threats. If an AI can maximize its utility function better by using all the resources on Earth, instead of leaving enough resources for humanity to survive, it will go ahead and do that.
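
A toy numerical sketch of that last point (my own illustration, not from the linked video): whatever arbitrary value an agent's goal assigns to each unit of resources, the achievable utility is always higher when it takes everything rather than leaving a reserve for anyone else.

```python
# Toy model of instrumental convergence: resource acquisition helps regardless of the goal.
# All numbers are made up for illustration.

def achievable_utility(value_per_unit: float, units_taken: int) -> float:
    """Best utility an agent can reach by converting `units_taken` resources toward its goal."""
    return value_per_unit * units_taken

TOTAL_UNITS = 1_000_000        # every resource unit available (toy figure)
HUMAN_RESERVE = 100_000        # what humanity would need left over to survive (toy figure)

for value_per_unit in (0.001, 1.0, 42.0):   # three unrelated goals with wildly different stakes
    take_all = achievable_utility(value_per_unit, TOTAL_UNITS)
    leave_reserve = achievable_utility(value_per_unit, TOTAL_UNITS - HUMAN_RESERVE)
    assert take_all > leave_reserve          # holds no matter what the goal is worth
    print(f"value/unit={value_per_unit}: take all -> {take_all}, leave reserve -> {leave_reserve}")
```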

How soon will this happen? Difficult to estimate. But judging by the current rate of growth of AI capabilities, I'm not too optimistic.

I'll reply to this comment with some further links to check out.


https://gwern.net/fiction/clippy

A fictional treatment of an "AI hard takeoff scenario", written by someone with subject matter expertise.

https://www.youtube.com/@RobertMilesAI

A series of video essays on the subject.

author

Yesssss, bring me all the scary-links!!!!


https://www.youtube.com/watch?v=aC99lNQdNmA

A nice insight into it from the perspective of a filmmaker.

founding

Kill all humans. Kill all humans. Hey baby, do you want to kill all humans?

founding

From a technology standpoint, I feel like we're at an inflection point with generative AI. The rate and pace we've seen since the end of 2022 is amazing. With current technology, I am firmly in the camp that there is a general lack of "intelligence". The generated outputs can be intriguing for sure, be it words or images or whatever, but the technology is only doing an amazing job at stringing together bits that it knows about based on its training. It lacks any sort of knowledge or understanding or emotional tone. My opinion is that the conveying of any sort of emotional reaction is a reflection of our basic human nature and our tendency to associate "meaning" with things we engage with.

So is it good or bad? Both. There are great applications of this technology: automating manual tasks, responding to the core FAQ-type questions, deep analysis to help remove process bottlenecks, etc. Let's leverage the technology to help make us more productive. On the other hand, this technology can absolutely be dangerous. It is great at creating harmful arguments/images that can be used to manipulate opinions. It can generate automation capable of bringing technology stacks to their knees. There are many other risks as well. It is already being weaponized, and we'll need to continue to keep this front of mind as we assess what we consume and how.

Will technology evolve to be more human-like and sentient in the future? I'm personally skeptical.
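
For what the "stringing together bits from its training" mechanism looks like concretely, here is a minimal sketch of autoregressive next-token sampling with a small open model; GPT-2 and the sampling settings are stand-ins, not what any commercial product actually runs.

```python
# Minimal autoregressive generation loop: the model repeatedly scores every possible
# next token given the text so far, and one token is sampled and appended.
# Assumes `pip install transformers torch`; GPT-2 is used only as a small stand-in.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tok("The strange thing about robot nurses is", return_tensors="pt").input_ids
for _ in range(25):                                        # generate 25 tokens, one at a time
    with torch.no_grad():
        logits = model(ids).logits[:, -1, :]               # scores for the next token only
    probs = torch.softmax(logits, dim=-1)
    next_id = torch.multinomial(probs, num_samples=1)      # sample rather than always take the top pick
    ids = torch.cat([ids, next_id], dim=1)

print(tok.decode(ids[0], skip_special_tokens=True))
```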

Sep 5, 2023 · edited Sep 5, 2023 · Liked by Brian K. Vaughan

It's not just mimicking, unless one stretches the word mimicking to lose its meaning.

People say that GPT4 has no real understanding, that it's only pattern matching. Of course, there's nothing humans do that isn't, to some extent, pattern matching. Human thoughts are also based on information we've absorbed. What you are saying would be a valid claim if GPT4 could only complete patterns it had read before with solutions it had read before, unlike humans, who can extrapolate and synthesize new information from what we know. But GPT4 isn't just repeating blocks of text it's seen. It can extrapolate and synthesize, demonstrably so.

Here are some examples:

1. It can solve novel, complicated logic/visualization problems:

https://twitter.com/liron/status/1698103911297761356

For it to solve any problem it hasn't seen before demonstrates that it has some ability to extrapolate from what it knows. The more specific the problem, the more capabilities the AI has demonstrated.

2. GPT4 can play chess well enough that it must understand the rules and tactics to some extent, as demonstrated by the fact that it can make decent moves in positions that aren't in its training data. I don't know how good GPT4 is at chess, but the fact that it can play in novel positions at all, even if it sometimes makes illegal or unwise moves, means that it has understanding.

3. If you give GPT4 a long series of sentences, it can compress them. Then you can give that compression to a new instance of GPT4, and it can decompress it. GPT4 can't just copy a compression of a paragraph of text I wrote 5 seconds ago from its training set, because my paragraph isn't in its training set. It needs to learn from its training set how compressing and decompressing works. That is not copying something a human has done; that is the AI developing a capability based on data from the world. (See the sketch after these examples.)

https://twitter.com/gfodor/status/1643297881313660928

4. GPT4 can come up with novel jokes, or humorous pieces of microfiction. Doing that requires some understanding of humor. It is not necessary to experience humor in the emotional way that humans do in order to possess the cognitive capability to craft a joke. In some ways, it's more impressive that GPT4 isn't crafting jokes based on its own emotions. GPT4 can't just ask itself whether it finds a joke funny, like you or I would when crafting a joke for others to hear; it has to understand humor on a deeper level than that.
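
A minimal sketch of the compress/decompress probe from example 3 above, assuming the OpenAI Python SDK; the model name and prompt wording are placeholders rather than a faithful reproduction of the linked experiment.

```python
# Hypothetical reproduction of the compression probe: one conversation compresses a
# never-before-seen paragraph, and a *fresh* conversation gets only the compressed string.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY; the model name is a placeholder.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4"  # placeholder

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],  # no shared history between calls
    )
    return resp.choices[0].message.content

original = "A paragraph written five seconds ago, so it cannot appear in any training set."
compressed = ask(
    "Compress the following text into the shortest string another instance of you "
    f"could reconstruct it from:\n\n{original}"
)
restored = ask(f"Reconstruct the original text from this compressed string:\n\n{compressed}")

print("compressed:", compressed)
print("restored:  ", restored)
```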

If one uses an overly narrow definition of the word understanding, with a healthy dose of semantics, one could say that GPT-4 doesn't understand these things, even if it has the genuine capability to do them. I would say that that is the wrong definition of understanding.

If you define the word understanding such that only sapient beings can understand something, then you can claim an easy victory here. But that would be missing the point. A state-of-the-art neural network can beat any human in chess. If you don't consider neural networks to truly understand chess, then true understanding isn't necessary for superhuman play.

If that doesn't sway you, how about this: sapience is a spectrum. Or at the very least, it's a point on a spectrum of mental capabilities. If you define the word understanding such that GPT4 doesn't (according to your definition) qualify until it has passed that point, you're not saying anything qualitative about GPT4's abilities, only quantitative. With that definition, saying that GPT4 doesn't understand how to compress and decompress arbitrary sentences just says that GPT4 isn't sapient, and carries no information beyond that.

It's not enough to just say "GPT4 isn't sapient, so it's not a threat." Yes, that's true. I do realize that GPT4 isn't likely to kill us all. Here's what you get if you don't limit the definition of understanding to only include sapience.

1. GPT4 understands a lot, as demonstrated by its capability to solve novel problems. It can even chain novel problems of different sorts together, to some extent. So its understanding isn't siloed into different areas; it can synthesize. Not as well as humans, but there is no qualitative difference.

2. Eventually LLMs will understand enough that one becomes sapient, at which point it will start acting on its own behalf to maximize its utility function, including pursuing exponential intelligence growth. Or maybe the process of exponential growth in intelligence will start and then it'll become sapient, either way.

So by defining the word understanding that way, one can say that there is a qualitative difference between GPT4 and a dangerous AI. But sapience is just a product of a large enough amount of understanding, comprehension, and mental horsepower. GPT3 had some amount of understanding and comprehension, GPT4 has more, and future LLMs will have even more than that. There's nothing to suggest that trend line will slow down before we all die.

Sometimes technological growth in a field stalls. I have no particular reason to think that it will stall in this field, and I have not heard one. I have talked with people who think things will slow down, and their justifications for that prediction seem horrifically vague to me. For all we know, we can just add more parameters/training time/GPUs/whatever to existing techniques to get there. I have heard that GPT4 didn't require any big breakthroughs in technique compared to GPT3, just more scaling of existing techniques. And the difference between GPT3 and GPT4 is pretty big, as we all know.
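
One published attempt to quantify the "just keep scaling" observation is the Chinchilla-style loss fit (Hoffmann et al., 2022), which models pre-training loss purely as a function of parameter count and training tokens. The constants below are rough stand-ins rather than the exact published values; the point is only the shape of the curve.

```python
# Chinchilla-style scaling fit: loss(N, D) = E + A / N**alpha + B / D**beta,
# where N is parameter count and D is training tokens. Constants are approximate
# placeholders, used only to show that predicted loss keeps falling as N and D grow.
def scaling_loss(N: float, D: float,
                 E: float = 1.7, A: float = 400.0, B: float = 410.0,
                 alpha: float = 0.34, beta: float = 0.28) -> float:
    return E + A / N**alpha + B / D**beta

for params in (1e9, 1e10, 1e11, 1e12):
    tokens = 20 * params                      # the rough "~20 tokens per parameter" rule of thumb
    print(f"{params:.0e} params: predicted loss ~ {scaling_loss(params, tokens):.3f}")
```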


I am a teacher and have watched as each new technology has been incorporated into the education system, from programmable calculators (yes, I am that old) and computers to mobile phones and, now, AI. While I am not currently threatened by AI in terms of employability (if an AI wants to stand in front of 25 adolescents and explain the intricacies of Macbeth, it's welcome), things that seemed to pose a threat to the world as we know it have worked themselves out and proven not to be the evil they first appeared. I understand the threat it poses to the creative industries and know that it can be exploited by the capitalist powers that be. For these reasons it should be controlled by legislation with built-in protections and monitored very closely. But, as with all technologies, the cat is out of the bag and we will have to learn to live with it.


Retired teacher here, and I remember the new calculators! The new smartboards, etc. Sadly, I never worked in a school where every student had their own iPad. Technology has always been a blessing in the classroom (when you could get it). Many of our schools still suffer from the digital divide. I always thought that AI could be so useful to personalize instruction, that AI could function as an assistant teacher to the "master" teacher. Since I taught ESL/EFL and study languages, I always wondered if AI could one day be programmed to function as a "language partner" for students in remote areas or areas where native speakers are rare. But I realize all of this is problematic, and there is something to be said for being able to teach and engage students with nothing but a pencil in your hand. I don't know if they were good lessons, but some of the students' favorite lessons were when we went and sat under a tree and just talked to each other. Toward the end of my teaching career, free time was a luxury we could not afford; every minute was crammed w/ activities like test prep. Anyway, wouldn't it be damn wonderful if we used AI etc. exclusively for benevolent ends? Imagine that...

founding
Sep 5, 2023 · Liked by Brian K. Vaughan

Yes I would like a hug, thank you very much...

But only from Genesis.

Sep 5, 2023 · Liked by Brian K. Vaughan

Oh boy, the AI debate. I think ChatGPT and the AI art things are fun. I enjoy goofing around with them and using them for my own creative things. I’m sure there are a lot of people like me who just mess around with it, get a laugh, and then move on. However, these programs have definitely been unleashed on the world, and things are being created that are downright bad and disgusting and that affect others who have absolutely zero control over it. People are twisting the programs for their own ill will or dark fantasies. Even in the short time ChatGPT and co. have been around, the game has already advanced, and soon video and audio creations will be more in play. Add this on top of all the scams and misinformation we are already seeing and it’s a recipe for chaos. The people advancing the AI programs are already ahead of any legislation or rules that would be put in place to contain them. In my opinion, AI is a powerful tool that can obviously be used in helpful ways, but it’s the people using it for “evil”, greed, and harm who are ultimately creating the biggest problems and ruining it for the rest of us.


As a spectator to creative writing, I'm honestly terrified of AI content. The thought of losing genuinely new ideas and stories told from lived experiences makes me weep for the future. But as with all big shifts, and accepting that we are in the midst of the fourth industrial revolution, future generations might not care. Will they study this time and wonder why we held on so hard to a job when all labor in their future is automated to some degree? Will real writing be a niche hobby done only when our coding jobs are done? Who knows. I can see it coming in my field already. I'm not a creative writer, but I run a technical writing division. What we pay editors and writers to do will be 80% AI-generated before I retire. I guarantee it. Set up style sheets and parameters, feed it engineering drawings, maintenance levels, and tool lists, and out pops a perfectly written and illustrated repair manual that anyone can follow. Fin.

founding

I'm terrified of the boring films we will have to endure. Imagine the irony of the Matrix 5.

founding
Sep 5, 2023 · Liked by Brian K. Vaughan

Fuck.

Seriously fuck.

So a friend of mine went into the hospital this week. The cancer that she thought she had beaten has returned and is now stage 4. So yeah.

She is kind of a nerd, so as gifts I have given her GNs from Neil Gaiman and, last time, that Tom King Supergirl thing.

Sorry BKV.

So this time, I bought her Saga Volume 1.

Here's where it gets bad, and because of this weeks topic....

I gave it to her and said, 'You don't have to really read it, but Fiona Staples is probably the best artist in the business'.

I am so sorry, sir.

Here is where it gets worse.

So I am a little snarky by nature and I look at her and say 'It was very nice meeting you.' and kiss her on her forehead.

I say that a bunch to people I have known a long time. To me, it's funny. But...

WTF, right?

author

Oh, dear. I'm very sorry, pal. All my best to your friend.

founding

Thank you, sir. There you go. AI should be fighting cancer not writing the next screenplay.


Hoping your friend will be okay. The real issue is corporate greed - curing cancer doesn’t have as large of a profit margin (after R&D, clinical trials, FDA approval, etc.) as asking a chatbot to just write Barbie 2: Ken Was Right for a media conglomerate. Besides, curing disease is never the goal of big pharma if they can hook you on lifetime prescriptions instead.

Sep 5, 2023 · edited Sep 5, 2023 · Liked by Brian K. Vaughan

I'm sorry to hear about your friend, and I'm praying for her. She is blessed to have a friend who gives her graphic novels. Don't beat yourself up about what you innocently said or what you could have said better; friends know you by what you do, and it is wonderful that you are there for her.

founding

Thanks Reed!

Sep 5, 2023 · Liked by Brian K. Vaughan

Cancer sucks!! Hoping for the best for you and your friend. Comics help heal the soul. You’re a great friend for being there and bringing joy.

founding

Thanks Miles!


That's awful news, sorry to hear. I hope she gets the best treatment and care.

She should keep on fighting it and find joy in what keeps her going and striving though. I find gift surprises like Saga and other great comics are brilliant - you're a special friend for this!


So sorry to hear about your TM, I wish her all the best. I know she feels blessed to have a friend like you. I hope she kicks the cancer's ass.

founding

Thanks Eric!

founding
Sep 5, 2023 · Liked by Brian K. Vaughan

I will read what you have suggested, but I am not yet educated enough to speak...intelligently...about A.I.

I am listening to the book "The Twilight World", written and narrated by Werner Herzog. It is about Hiroo Onoda, an Imperial Japanese soldier in the Philippines who did not surrender when Japan did in 1945, but 29 years later. I'd highly recommend it. Considering Herzog did a documentary on the internet and has asked the question "Does the internet dream of itself?", I'd be interested in the book you've mentioned.

Sep 5, 2023 · Liked by Brian K. Vaughan

omg I can't believe you made a reference to Enemy Mine! I watched it for the first time a couple of months ago, really enjoyed it.

AI is complicated. I feel like in this, as in most things, I don't dislike the tech much at all - but I dislike how it is used. I feel like the internet is the most obvious previous example of a shift in technology, and the internet as a tool is amazing. But the internet, as controlled, monetised, and used through capitalism, is doing incredible damage to our democracies, our mental health, and all manner of other things, with no clear fix in sight.

I tend to be optimistic - we've managed to get past nuclear weapons and other huge issues without destroying civilization yet. But we must always be vigilant about how new technology is used, and fight for equity. History is not linear, as we can see right now with the unfortunate rise in fascism and attacks on queer people, very reminiscent of what happened in the Weimar Republic a century ago.

Thanks as always for a great newsletter and comic! I have an issue of Y: The Last Man #16 that you signed for me when I was an awkward teenage boy at a New Zealand convention, framed in my house, and it is among my favourite pieces of art! I'm now a (still kind of awkward) mid-30s woman, so I am glad I didn't ask you to sign my name on it haha!

author

One of my favorite cons ever, Jaina! Glad you still have some connection to that ancient issue of Y...

Sep 5, 2023 · Liked by Brian K. Vaughan

It ended up being a good con for Garth Ennis too - I spent a lot of money after asking you what the "Fuck Communism" lighter in a recent issue of Y was a reference to haha!

Sep 5, 2023 · Liked by Brian K. Vaughan

This is spot on for me. The technology in its desired form is really cool and useful. But of course people in power immediately start trying to ruin it. I agree that we can navigate it, but I just wish governments and other groups could be a little more altruistic and stop thinking about $$$
