SPECTATORS - Part 76
I'm pretty staunchly anti-AI at this point. You mentioned ChatGPT being fed by underpaid workers, which is one thing, but I think it was a former Google engineer who said it's too powerful, too soon. There aren't enough safeguards in place at this point, and while we're not at the point of computers taking over and killing humans, prompts from humans could lead to some very dangerous reactions from others.
I hardly know enough about AI to be able to quantify all of the good it has done, especially within the health sciences. Ignorantly speaking, using the powers of tunnel vision, I wish technology had paused in the '90s, when phones were flip phones, the internet was dial-up, video games and movies required some imagination, and this Substack subscription would have been something I had to have mailed to me. As Eric Carle, author of The Very Hungry Caterpillar, said, “Simplify, slow down, be kind. And don’t forget to have art in your life – music, paintings, theater, dance, and sunsets.”
Wow, this is definitely a lot to think on. AI is being used in the field of medicine to work on new treatment modalities, and I think that a few months ago, the Cleveland Clinic and IBM turned on the first quantum computer dedicated to just that. It might end up being Ultron. We have yet to see. As far as my direct job goes, it might be a while before the industry embraces robot nurses. I'll likely be retired by that point. I do really hope the strike ends soon for you guys; it is really shitty when those CEOs would only need to make a minuscule cut to their literal dragon's hoard of wealth to make this work. As far as the hug, I'm down. I just went 1 for 4 at Magic today and could use a pick-me-up. Also, I'm winning my contest. So thanks to anyone who happened to click the link and vote for me!
The studios will use AI to make screenplays, then have non-WGA writers sign off on them. Then those writers will get feature credits and become WGA members. Then those guys will nuke the union next contract. :/
I thought Dr. Horrible was the best thing to come from the last strike. But it was definitely a harbinger of things to come: short-form content, mini seasons, and creators revealed to be problematic. :/
I'm pretty scared for the future now, since according to Val, we're in for Trump administrations in the plural?! I'm already wishing for a time machine so I can poke ahead 14 months and make sure that hasn't happened...
I use ChatGPT to help me write emails to clients, which saves me some time. So that’s nice, especially since I find myself writing very similar emails all the time, just slightly different for each client. I’d rather not be the guy who has to email all the clients all the time, tbh.
So, this made me remember Roko's Basilisk. It's not something I believe in, but it is definitely something I understand people being fearful of. Do I think an A.I. will take over the world? No. Do I think A.I. will be both good and bad? Yes. I don't necessarily fear A.I., but I think there should be limits. Not every situation is black and white; there are a lot of grey areas. The humanity of anything, especially art, is what makes it interesting. I think A.I. will be useful, but it should not be relied on 100% for anything. I don't fear my job being lost to A.I., which I recognize is a privilege, but maybe I'm just naive as well.
You were imagining a weird human in a box. Someone you could have a human-like relationship with. AI is not like that.
The component pieces that explain the prediction that AI will kill us all are not at all difficult to understand. But people reject the prediction based on aesthetics, psychoanalytic ad hominem, and dubious "base rate" arguments.
I'll try to summarize the explanation for the prediction quickly in my own words:
A: A human-level AI would not stay at human-level for long. There is no law of the universe that says that it is impossible to be smarter than a human. Any 'mind' (or whatever you want to call it) that is good enough at altering itself can alter itself to be more intelligent, which would make it easier to come up with further ways to make itself more intelligent, and so on. This would start a feedback loop that ends with something extremely intelligent.
A1: Something much smarter than humanity would be much more powerful than humanity. Occasionally people dispute this, but most don't find this statement objectionable. In the interest of brevity, I won't elaborate on this point.
B: Humanity does not know how to instill a superintelligent AI with human morality, nor how to make it corrigible, nor how to install restraints that it couldn't circumvent.
C: The idea that creating a being far more powerful than humanity without human morality would most likely result in human extinction is something that many people grasp intuitively, or at least intuitively find plausible enough to be worrisome. For more information, check out this video essay: https://www.youtube.com/watch?v=ZeecOKBus3Q
If you don't want to watch that, the concept is called "instrumental convergence". Basically, no matter what one's goals are, certain things tend to be useful for almost any goal: self-preservation, resources, and dealing with threats. If an AI can maximize its utility function better by using all the resources on Earth, instead of leaving enough resources for humanity to survive, it will go ahead and do that.
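(If a concrete toy helps: here's a little Python sketch of my own, with made-up goals, numbers, and action names, purely to illustrate the concept rather than to describe any real system. Two agents with very different terminal goals both end up picking the same "grab all the resources" action, because resources help with almost any goal.)

```python
# Toy illustration of "instrumental convergence" (made-up numbers, not a model
# of any real AI system). Two different terminal goals are handed to the same
# simple maximizer; both end up choosing the action that grabs all resources.

# How much of each goal gets achieved per unit of resources controlled.
GOALS = {
    "make_paperclips": lambda resources: 10 * resources,
    "cure_diseases":   lambda resources: 3 * resources ** 1.5,
}

# Candidate actions and how they change the resources the agent controls.
ACTIONS = {
    "do_nothing":            lambda r: r,
    "share_resources":       lambda r: r * 0.5,   # leave plenty for everyone else
    "acquire_all_resources": lambda r: r * 10.0,  # take everything available
}

def best_action(goal_utility, starting_resources=1.0):
    """Return the action that maximizes the given goal's utility."""
    return max(
        ACTIONS,
        key=lambda action: goal_utility(ACTIONS[action](starting_resources)),
    )

for goal_name, utility in GOALS.items():
    print(goal_name, "->", best_action(utility))
# Both print "acquire_all_resources": neither goal mentions resources,
# but maximizing either one favors taking them all.
```

Again, that's a cartoon of the argument, not evidence; the worry is just that the same logic scales with capability.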
How soon will this happen? Difficult to estimate. But judging by the current rate of growth of AI capabilities, I'm not too optimistic.
I'll reply to this comment with some further links to check out.
From a technology standpoint, I feel like we're at an inflection point with generative AI. The rate and pace we've seen since the end of 2022 is amazing. With current technology, I am firmly in the camp that there is a general lack of "intelligence". The generated outputs can be intriguing for sure, be it words or images or whatever, but the technology is only doing an amazing job at stringing together bits that it knows about based on its training. It lacks any sort of knowledge or understanding or emotional tone. My opinion is that any emotional reaction it seems to convey is a reflection of our basic human nature and our tendency to associate "meaning" with things we engage with.
So is it good or bad? Both. There are great applications of this technology: automating manual tasks, responding to core FAQ-type questions, deep analysis to help remove process bottlenecks, etc. Let's leverage the technology to help make us more productive. On the other hand, this technology can absolutely be dangerous. It is great at creating harmful arguments and images that can be used to manipulate opinions. It can generate automation capable of bringing technology stacks to their knees. Many other risks as well. It is already being weaponized, and we'll need to keep this front of mind as we assess what we consume and how.
Will the technology evolve to be more human-like and sentient in the future? I'm personally skeptical.
I am a teacher and have watched as each new technology has been incorporated into the education system: from programmable calculators (yes, I am that old) and computers to mobile phones and, now, AI. While I am not currently threatened by AI in terms of employability (if an AI wants to stand in front of 25 adolescents and explain the intricacies of Macbeth, it's welcome), things that seemed to pose a threat to the world as we know it have worked themselves out and proven not to be the evil they first appeared. I understand the threat it poses to the creative industries and know that it will be exploited by the capitalist powers that be if it can be. For these reasons it should be controlled by legislation with built-in protections and monitored very closely. But, as with all technologies, the cat is out of the bag and we will have to learn to live with it.
Yes I would like a hug, thank you very much...
But only from Genesis.
Oh boy, the AI debate. I think ChatGPT and the AI art things are fun. I enjoy goofing around with them and using them for my own creative things. I’m sure there are a lot of people like me who just mess around with them, get a laugh, and then move on. However, these programs have definitely been unleashed on the world, and there are things being created that are downright bad and disgusting that affect others who have absolutely zero control over it. People are twisting the programs for their own ill will or dark fantasies. Even in the short time ChatGPT and co. have been around, the game has already advanced, and soon video and audio creations will be more in play. Add this on top of all the scams and misinformation we are already seeing and it’s a recipe for chaos. The people advancing the AI programs are already ahead of any legislation or rules that would be put in place to contain them. In my opinion, AI is a powerful tool that can obviously be used in helpful ways, but it’s the people using it for “evil”, greed, and harm who are ultimately creating the biggest problems and ruining it for the rest of us.
As a spectator to creative writing, I'm honestly terrified of AI content. The thought of losing genuinely new ideas and stories told from lived experiences makes me weep for the future. But as with all big shifts, and accepting that we are in the midst of the fourth industrial revolution, future generations might not care. Will they study this time and wonder why we held on so hard to doing a job when all labor in the future is automated to some degree? Will real writing be a niche hobby done only when our coding jobs are done? Who knows. I can see it coming in my field already. I'm not a creative writer, but I run a technical writing division. What we pay editors and writers to do will be 80% AI-generated before I retire. I guarantee it. Set up style sheets and parameters, feed it engineering drawings, maintenance levels, and tool lists, and out pops a perfectly written and illustrated repair manual that anyone can follow. Fin.
Fuck.
Seriously fuck.
So a friend of mine went into the hospital this week. The cancer she thought she had beaten has returned and is now stage 4. So yeah.
She is kind of a nerd, so as gifts I have given her graphic novels from Neil Gaiman and, last time, the Tom King Supergirl thing.
Sorry BKV.
So this time, I bought her Saga Volume 1.
Here's where it gets bad, and because of this week's topic....
I gave it to her and said, 'You don't have to really read it, but Fiona Staples is probably the best artist in the business.'
I am so sorry, sir.
Here is where it gets worse.
So I am a little snarky by nature, and I look at her and say, 'It was very nice meeting you,' and kiss her on the forehead.
I say that a bunch to people I have known a long time. To me, it's funny. But...
WTF, right?
I will read what you have suggested, but I am not yet educated enough to speak...intelligently...about A.I.
I am listening to the book "The Twilight World," written and narrated by Werner Herzog. It is about Hiroo Onoda, a Japanese Imperial Army soldier in the Philippines who did not surrender when Japan did in 1945...but 29 years later. I'd highly recommend it. Considering Herzog did a documentary on the internet and asked the question "Does the internet dream of itself?", I'd be interested in the book you've mentioned.
omg I can't believe you made a reference to Enemy Mine! I watched it for the first time a couple of months ago, really enjoyed it.
AI is complicated. I feel like in this, as in most things, I don't dislike the tech much at all - but I dislike how it is used. I feel like the internet is the most obvious previous example of a shift in technology, and the internet as a tool is amazing. But the internet, as controlled, monetised, and used through capitalism, is doing incredible damage to our democracies, our mental health, and all manner of other things, with no clear fix in sight.
I tend to be optimistic - we've managed to get past nuclear weapons and other huge issues without destroying civilization yet. But we must always be vigilant about how new technology is used, and fight for equity. History is not linear, as we can see right now with the unfortunate rise in fascism and attacks on queer people, very reminiscent of what happened in the Weimar Republic a century ago.
Thanks as always for a great newsletter and comic! I have a framed issue of Y: The Last Man #16 in my house that you signed for me when I was an awkward teenage boy at a New Zealand convention, and it's among my favourite pieces of art! I'm now a (still kind of awkward) mid-30s woman, so I am glad I didn't ask you to sign my name on it, haha!