Large Language Models and Human Minds

Lyle Troxell

Geek Speak with Lyle Troxell


Well, it's been a while, and I am fascinated with what's going on with AI and large language models. Brian Young and I have been chatting back and forth, and we have a little bit of a disagreement about what they are and how they work, versus how the human brain works. So Brian Young and Ben Jaffe and I decided to record an episode of Geek Speak. I'm Lyle Troxell, and this is Geek Speak, in its 22nd, 23rd, 24th year, I don't even know.

If one of you would like to provide your definition of a large language model to get us going, that would be great. Who wants...

Yeah: a probabilistic word calculator.

Oh, that's so wordy. Okay, how about a word calculator?

What's it mean to calculate words?

A number calculator: you put in a bunch of numbers and you put in operations. A word calculator: you put in a bunch of words, and the operations are implicit. The operations are whatever it feels like doing, and it smooshes that against all of its internal workings, which were weighted based off of its training data, which is basically the internet. And then it predicts what comes next.

Brian, you want to add anything to that?

Oh yeah, it's a calculator all right, but it's just statistically predicting what the next word should be, based on its prompt. I mean, we can talk about it in simple terms that way.

Okay, can I ask a question? I hear people say that a lot. Is that the next word, or the next, maybe, set of words?

Word for word, kind of. Actually, fragments. It's actually syllables.

It can't possibly be just phonemes.

Phonemes, I never have to say that word out loud. It is fragments, because what you want to do is make sure that you understand a word that's spelled wrong as the word that was supposed to be spelled correctly. So it's really much more complicated: you're predicting the next word with the history of not just the word that comes before; you're predicting it with, like, all of the history of what's come before.

Yeah, because, like, a Markov chain is basically just: you have the word before, you choose the next. So, the thing that I want to...
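For contrast, a Markov-chain text generator really does choose the next word using only the current word, not the whole preceding context the way an LLM does. A minimal sketch, with a toy corpus made up for this example:

```python
# Minimal bigram Markov chain: the next word depends ONLY on the current
# word, unlike an LLM, which conditions on the entire preceding context.
import random
from collections import defaultdict

corpus = "the pills are in the bottle and the coffee is in the cup".split()

# Count which words follow which (wrapping around so the chain never dies).
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:] + corpus[:1]):
    follows[prev].append(nxt)

random.seed(1)
word, out = "the", ["the"]
for _ in range(6):
    word = random.choice(follows[word])  # chosen from one word of history
    out.append(word)
print(" ".join(out))
```

Because each step sees only one word of history, the output is locally plausible but wanders with no overall plan, which is exactly the behavior the larger models no longer exhibit.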
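To make the "fragments" point concrete, here is a toy greedy subword tokenizer. The tiny vocabulary and the longest-match rule are invented for illustration; real models learn subword vocabularies (for example, byte-pair encoding) from data, but the effect is similar: even a misspelled word breaks into known pieces rather than becoming one unknown token.

```python
# Toy subword tokenizer: greedily match the longest known fragment.
# This vocabulary is made up for illustration; real LLMs learn
# vocabularies of tens of thousands of pieces from their training data.
VOCAB = {"coff", "ee", "cofe", "c", "o", "f", "e", "cup", "pill", "s", "bottle"}

def tokenize(word):
    pieces, i = [], 0
    while i < len(word):
        # try the longest remaining substring first, fall back to shorter ones
        for j in range(len(word), i, -1):
            if word[i:j] in VOCAB:
                pieces.append(word[i:j])
                i = j
                break
        else:
            raise ValueError(f"cannot tokenize {word!r}")
    return pieces

print(tokenize("coffee"))  # ['coff', 'ee']
print(tokenize("cofee"))   # the misspelling still maps to known pieces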

Yeah, go ahead, Brian.

Well, I just want to make one important point. The words that it comes back with, what it's aiming for, is a stylistically consistent response that you would expect. In other words, the words that it produces don't have to be accurate; they have to sound accurate to the reader. That's what its aim is, and therefore that's how you get results like citing papers that don't exist, and citing sources that don't exist, things like that.

Okay, now let's poke at this a little bit. Translation is a really easy way to look at this. What do you need to be able to predict words? It's not just the frequency; there's a lot more complexity to it. And when you do these prediction sets based off of a small data set, you get not-very-good systems, and for 50 years we've gotten pretty shitty systems.

Small data set, meaning a small corpus that the model was trained on?

That's right. When you get to a certain size, though, something else happens, and my argument is that when you get to these trillions-of-nodes sizes, you have a fundamentally different quality than the simplistic idea of being able to predict.

And the reason I say that is: if you were to do translations of things, there are some... I really wish I had a good metaphor for this. I have got a bottle of pills in front of me, hear that? And I'm trying to build this argument at the same time. I could talk to you about it, saying: I have this bottle of pills, and I have this cup of coffee, and I can construct a sentence about these two things, and in the context of the sentence, you and I, we would all understand how it would work. I would say something like: I was drinking my cup of coffee and holding my container of pills, and I put it inside, and they got all wet. So there's a little bit of ambiguity. Did I pour the coffee into the container, or did I put the pills inside the coffee?

We don't really know. Put both inside yourself?

Yeah, I might have put both inside myself. There's a whole bunch of context there, and you need to know a lot to actually do it correctly, because it wouldn't make any sense to say that I put my coffee cup inside the pills. So if I structure the sentence in a kind of ambiguous way, one of the things we know is that the coffee cup itself didn't go inside the pill bottle, because we know how big coffee cups are, and we know how big pill bottles are.

Because we humans, you're talking about, have a mental model of the world around us, and we have an understanding of size and shape, and relative size and shape, and that's how we can come to these conclusions.

That's your argument?

No, that's why... that's a question. Yes.

Okay. So the argument is that the intelligence we have, and these language models don't have, is that we can understand what can go inside of other things, and a predicting algorithm that understands language wouldn't be able to do that. It would just say things that were nonsensical, like the cup went into the pills, because why would it, how would it know? But the thing is, ChatGPT isn't doing that. It does understand that the pills can't go in the coffee cup. When I say understand, I mean it won't construct sentences that are weird like that as commonly as random.

How is it doing that? It's trained on a lot of language, right?

Yep. And encoded in the language, when people talk about bottles of pills or coffee cups, is information, and maybe it's able to pick up on signal either about those things individually, or maybe from sentences where they're used together.

Yeah. You say ChatGPT does not produce that gibberish response. My understanding...

Produce gibberish? I'm not arguing that it doesn't produce gibberish. I'm saying that it can, statistically more often than not, understand that the pills can't receive the cup.

What I was going to say is, my understanding is that if you were to wander outside of English and use another language, it does return the gibberish sort of answer. Not that I can do that myself, because I don't speak another language.

And keeping in mind, if I speak French to you, I will pretty quickly have gibberish going on, even if I understand some fundamentals, because I just am not very good at it.

Yeah, but if you know the word for pills, and you know the word for coffee cup, and you know the word for into, you'll probably construct a sentence that makes sense, because you have an internal model of the world outside of your language.

Well, I mean, that's relative to what I understand about the language I'm speaking in. I could understand what cups were in French, but it's possible, in the languages I don't really know very well, that a cup is synonymous with a small vase, and a small vase could go inside the pillbox. I don't actually know the full context of it until I'm very good at the language.

So when you're not good at the language, you will not be able to understand it, and you will make stupid mistakes, because you don't know what the words are.

Totally. But let's not dive down that deep. My argument in this discussion is going to be that GPT-4, in this paper, is doing stuff that implies there is actually something larger going on than the way we dismiss these language models. And one of the reasons I think that makes sense is... well, actually, let's go ahead and go through some of the examples of why this is amazing to me.

Okay. My favorite one is the unicorn example. Have you both heard the unicorn example?

Yes, I have.

Let's pretend I haven't.

Okay, we'll pretend Ben doesn't remember it

Or hasn't heard it. So this language model has text in and text out; that's all it can do. It hasn't processed any binary images; it hasn't trained on modeling images. When you're modeling images with a large machine-learning system, you can do things like edge-detection pre-processing and then put in edges; there's a lot more stuff you can do. It doesn't have any of that going on in GPT-4.

So what the research paper did is say: hey, draw me a picture of a unicorn. Now, when I say draw me, it didn't actually say that. It said: please use LaTeX, which is a type of text language that can draw, you know, like SVG.

Not the material latex?

No, just spelled that way, pronounced differently. It spit out some text, and if you take that and render it in a system that can draw bitmap images, you'll get the picture, if you will, that was drawn. And so it said, well, draw me a unicorn, and the unicorn that it produced is, I don't know, something like 15 different circles: a big circle for the body of the horse, some rectangles for the legs with hoof kind of marks, maybe three colors, a mauve and a light pink and a dark pink, like a yellow unicorn horn, and then some circles representing the tail area, kind of, and some circles representing the head and the neck. And over time, as it was trained with more and more data, these unicorns would actually get more correct in shape.

So not only is that amazing, because how does it know that, how can it visualize it? It also can somehow process the image data in some kind of spatial way. So what they did is they took a fresh set of the data, they gave it a unicorn drawing in LaTeX without the unicorn horn, and said: put the unicorn horn back in. And it was able to do that. So the question is: if it's just a language-predicting system, and maybe it's been trained on a lot of LaTeX files, but probably not a lot of unicorn LaTeX files, how is it able to go from a sentence of English saying draw me a unicorn to an image being rendered that looks like a unicorn, without something other than translation and prediction happening?
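The trick of "drawing" with nothing but text output is easy to demystify in miniature: a drawing can be just a string of shape commands that some renderer turns into pixels. The crude stick-unicorn below is my own rough guess in the spirit of the description, not the model's actual output, and it uses SVG rather than the LaTeX/TikZ the paper used:

```python
# Assemble a crude "unicorn" as SVG text: an ellipse body, rectangle legs,
# a head circle, and a triangle horn. The drawing is only text until
# something renders it, which is all the model in the paper produced too.
shapes = [
    '<ellipse cx="100" cy="80" rx="40" ry="25" fill="pink"/>',   # body
    '<rect x="75" y="100" width="8" height="30" fill="pink"/>',  # leg
    '<rect x="115" y="100" width="8" height="30" fill="pink"/>', # leg
    '<circle cx="145" cy="55" r="15" fill="pink"/>',             # head
    '<polygon points="150,42 155,20 160,42" fill="gold"/>',      # horn
]
svg = '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="150">'
svg += "".join(shapes) + "</svg>"
print(svg)  # save as unicorn.svg and open in a browser to see the render
```

The interesting part is not the shapes themselves but that the model had to place them in spatially sensible relation to each other using text alone.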

Okay, I totally give you that. That's a mystery. Now the question is: what argument are you going to use that mystery to make?

Okay, we'll jump to the meat of it. Do you know how you're going to finish sentences?

Uh, no.

No. As I start speaking, I generally don't. I probably think a concept or two ahead. When I'm really good, I've got maybe 15 concepts laid out that I'm trying to get to, but I usually get sidetracked.

Okay, and where do those concepts live? They're queued up somewhere in your brain; some part of you is holding on to some concepts. You have kind of an inkling that there's more you want to talk about, another topic we haven't come to, but it's not like a clear list you could list out. It kind of comes from the engagement of the thought, and then the words come to you. It's not the words that are queued up; it's the concepts that are queued up. The words, it's almost magical that they come out, and, amazingly, most of the time grammatically correct.

And the word magic is doing a lot of work there. The way that that sentence just came out was very odd. I would re-edit it if I were writing it, but in speaking, it just kind of came out in that strange way, where I corrected the grammar over time.

Right. So we don't know. I don't know that we have a description of that that any of us could find. I'm definitely not planning ahead and looking at the possibilities and choosing them. I'm not aware of how it happens; my brain does it.

I've often thought, around programming, and also just basically speaking: man, would I ever be screwed if this broke, because I have no idea why I can talk. I have no idea why I can program. So yeah, I think a big part of my intelligence is being able to speak and be comprehensible, and be comprehended by my fellow humans.

It's so funny: whenever I talk about how language happens, I'm thinking about it as I go, and it's always bad. Yeah.

I don't think there's very much difference between us and large language models. Like, why? What's so amazingly different about my ability? I make mistakes all the time.

Right. So we are creatures; if you want to look at it as a brain, we're a brain that has all of these ways to get input and express itself, and to do many experiments to figure out whether how it expressed itself worked.

And yeah, that is the goal with these machine learning systems: to hook them up to more inputs and more feedback systems, to determine whether what they're doing is in some way correct or not.

Have either of you used a lathe in a machine shop?

Yeah.

I've seen it; I haven't done it.

You've seen it, Brian. Okay, what's a safety thing you know about using a lathe in a machine shop, Brian, since Ben and I have used them? What do you know about it from a safety perspective?

Uh, well, it spins very fast, and therefore things can fly off in a dangerous way, or the tool cutting it could possibly kick back at you.

Have you ever heard anything about touching versus not touching it with your body?

Uh, nope. But it's safer to touch it?

Are you serious?

No.

Right. So friction is going to create a lot of heat. So you know a whole bunch about using a lathe and how to be safe around it, but you've never used one; your body's never engaged in using one. How'd you learn that?

Um, yeah. I mean, I remember talking to a friend who was an airplane pilot, and I'd only ever studied the basic surfaces of an airplane. He was talking about maneuvers he was doing, and I said: well, if you're going to do that, don't you have to counteract, because this force will do something else? He's like: yeah, how'd you know that? It's the same way. I think I build a mental model. Actually, I have a very visceral feeling; I think I translate the thoughts into a physical feeling that is completely within my own head.

Okay. Does that get you anywhere?

Yeah. My point is that you mostly learned that through language. There are some embodiment qualities that you use, because your body engages in the world and does things that these language models are never doing, but a lot of it is from intellectual discussion.

I don't think... I don't produce mental models just from hearing from people. Yeah. I mean, I think one of the reasons that I have any intuition about the way an airplane works is a couple of things. One is learning about it on YouTube and whatever. One is learning about it when my dad talked about it when I was a kid. But I think my visceral understanding of how things would work came from a couple of things. One is, when I was a kid and my parents would be driving, I'd have my hand out the window, kind of going up and down in the wind as it went by, so I guess I kind of gained a visceral understanding of control surfaces and what wind and velocity do to those. Another was: Lyle, you and I worked at Netflix (you work at Netflix, I used to), and there's a plane that goes down to LA. I took that plane one time, and all of a sudden the experience of the plane, you know, blazing through the air was a lot more visceral. I felt the back...

Hold on a second, my cat's about to break something. Okay.

I felt the back of the plane and the front of the plane moving somewhat independently from each other, and, like, following each other in some turbulence. None of that was from language; that was from a visceral understanding, from my body experiencing it. And then also Microsoft Flight Simulator, and later X-Plane, which gave me a more accurate version.

There's also stuff that I kind of know about what, in the world of flight, can and can't be done because of math, because of physics. Like, I can take a piece of paper, a standard, like, eight-and-a-half-by-eleven sheet, twenty-pound weight or whatever, and run at full speed with it stretched out in front of me, and it will not break; I can probably hold on to it the entire time. But if I take that same thing and stick my hands out a car window that's going 200 miles an hour, I know it will just be ripped to heck.

You've never done that.

I've played with it a little bit. But it's just that I know that at that level, the amount of pressure that's happening is so much greater than the paper could take. And I just made up the number 200, right? It probably happens at 40 miles an hour; I don't know where the breaking point is.

Have you ever been in a car at 200 miles an hour?

No. But my point is that I also have these ideas that come about because I visualize it based off of mathematics and the paper I've used. Like, there's a lot more going on than me just testing things.
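That intuition about the paper and the car window can be put on rough mathematical footing: aerodynamic load on a flat sheet scales with the square of speed, via the dynamic pressure q = ½ρv². A deliberately crude back-of-the-envelope sketch (it ignores drag coefficients, angle of attack, and flutter, so treat it as order-of-magnitude only):

```python
# Rough force on a letter-size sheet held flat against the airflow:
# dynamic pressure q = 0.5 * rho * v^2, times the sheet's area.
# Crude estimate only: no drag coefficient, no angle of attack, no flutter.
RHO = 1.225            # air density at sea level, kg/m^3
AREA = 0.216 * 0.279   # 8.5 x 11 inch sheet in meters^2, about 0.06 m^2

def force_newtons(mph):
    v = mph * 0.44704            # mph -> m/s
    q = 0.5 * RHO * v * v        # dynamic pressure, Pa
    return q * AREA

for mph in (10, 40, 200):
    print(f"{mph:>3} mph -> {force_newtons(mph):6.1f} N")
```

Because the force is quadratic in speed, going from 10 mph to 200 mph multiplies the load by exactly 400 under these assumptions: well under a newton at a jog, but roughly 300 N (about 30 kg-force) at 200 mph, so the made-up number points in the right direction even if the real breaking point is much lower.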

So, what I think is interesting about all these examples we've talked about: a visceral, physical understanding, me and Ben; you're talking about a mathematical understanding, and somewhat of an experiential understanding from something related to wind and holding up paper. I look at these as all systems that are separate, but are feeding into a full understanding.

Yeah. Does that make sense?

Yeah.

So that visceral understanding we're talking about, muscle memory.

I have a lot of understanding of the world through the muscle memory and the actions that I do that are successful.

And I really get punished when I mess up.

I've got plenty of scars to prove that.

A lot of pain that fed into that system to let me know that was the wrong thing.

And language is a part of that, but it's one piece out of many, many other pieces.

Yeah, I agree.

So you're saying, Brian, that we're just reinforcement learning systems.

Okay, well.

That was sarcasm, by the way.

Yeah, yeah.

I definitely think we are.

I don't think there's anything.

No, no.

I said just very emphatically.

Do you think we're just reinforcement learning systems?

Yeah, of course.

I mean, that comes back down to, you know, do you believe there's God?

I don't.

So, do I have free will? I don't believe I have free will. I believe I am completely an environmental representation of my genetics and my place in the world as I was raised.

That's it.

And there's nothing else going on.

There's no spirit or anything in there.

There's just the reality of the world and the brain.

Now, you could say, well, there's some quantum effects, but that doesn't matter.

It doesn't.

It's random or it's not random; either way, it's still not something that's controllable. We're just meat puppets engaging in the world.

And that's magic.

I mean, that's amazing.

I'm not saying that that's not a great, great thing.

I'm just saying that.

So we do the exact same thing as language models do; we just happen to have more data inputs.

You lost me at the last sentence, especially.

Also, if we were just meat puppets, wouldn't that imply a creator and controller?

No.

Yeah.

Puppets are a bad example of that. We're not puppets; we're much more complicated rocks.

An emergent biological system.

Yeah.

So I tend to agree with you, Lyle. At the same time, I leave open completely that I don't have the answers when you get far enough down that road.

Yes, I agree too.

The other thing I wanted to say is that, um, a molecule of water is very, very different from an ocean.

The qualities of an ocean in a system on a planet with wind and creatures inside it, that thing is not understandable by the small aspects of it.

It is an emergent thing.

That's totally different.

I think intelligence is more like that than, than anything else.

It's, it's big enough that it's complicated enough and that's how it is.

And I know that's kind of a hand-wavy aspect of it.

Playing with GPT, and looking at that paper, and thinking about this, the way I feel about it right now is: anytime you have a large enough system with feedback, it will come up with qualities that are very much like us humans, because that's what we are as well.

Yeah.

Is there something interesting in the fact that these systems are the way they are because they are trained on us?

The worst parts of us, to be clear.

Um, the internet.

And the best parts of us, right?

Well, you know, there's some really great stuff in there.

Yeah, a lot of scientific papers were processed, most likely.

The, the trolls are overrepresented.

Because, like, if you take a neuron in one of these systems, and then you make a whole bunch of them and connect them, you don't necessarily get intelligence out of that. You need your training. You need your backpropagation. You need all of that stuff to feed into the system in order for it to mimic what it is generated from.
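The point that connected neurons do nothing useful until training shapes their weights can be sketched with a single artificial neuron trained by gradient descent. This is a minimal toy, nothing like how real LLMs are trained; the task, learning rate, and step count are made up for illustration:

```python
# One artificial neuron starts with random weights and is useless;
# repeated weight updates (the backpropagation step, via the chain rule)
# shape it to fit its tiny training data. A toy sketch, not LLM training.
import math
import random

random.seed(0)

def neuron(w, b, x):
    # weighted sum followed by a sigmoid squashing function
    return 1 / (1 + math.exp(-(w * x + b)))

# Tiny training set: map x to "is x positive?" (1 or 0).
data = [(-2, 0), (-1, 0), (1, 1), (2, 1)]

w, b, lr = random.uniform(-1, 1), 0.0, 0.5

def loss():
    return sum((neuron(w, b, x) - y) ** 2 for x, y in data)

before = loss()
for _ in range(500):                      # gradient-descent steps
    for x, y in data:
        p = neuron(w, b, x)
        grad = 2 * (p - y) * p * (1 - p)  # chain rule: d(loss)/d(pre-activation)
        w -= lr * grad * x                # propagate the error into the weight
        b -= lr * grad                    # ...and into the bias
after = loss()
print(before > after)  # training reduced the error
```

The same untrained neuron, copied a billion times and wired together, would still be noise; it's the error signal fed back through the system that makes it mimic its data.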

And you can make some kind of argument about... yeah, I mean, that's kind of what these things are doing, right?

Okay. I mimic.

Right. If they mimic, I mimic.

Yeah.

But there's a difference. There is at least one difference, which is: when you came out of the womb, you were not a completely blank slate. Right? Like, you started mimicking.

Absolutely. We mimic faces pretty quickly. We spend our first 25 years especially learning to mimic, and for the last 10 of those 25, we think that we're done, but we're not baked yet.

But, like, there is something we are starting from.

Right.

Brian, you were going to assist me.

Well, you'll notice I'm not coming back and arguing any of this. What you're saying, I don't think, is that controversial.

The real question, when we start looking at the tool that we have now in the various ChatGPTs, is what it is useful for and where it's going, and how we can imagine it becoming more useful and more powerful in certain ways. That's where I wonder whether we're disagreeing or whether we're going to be agreeing.

Okay.

Well, the reason why I want to talk really intimately about what I think our brains are doing is that GPT's predictive qualities are not just stitching together words that are comprehensible. It's actually more than that, because it has some concept that, like, coffee cups can't go in pill bottles. It somehow understands that.

Okay.

You're saying it has a concept, and I think that's a fallacy.

I'm saying that it does things that look like it knows that concept exists.

Yeah.

You're saying you don't think that's different than us.

You're saying it feels like it has a concept of...

No, I'm saying that I don't think we have concepts of that. When we're speaking, it just works, in the same way that that system works. When I'm forming a sentence, and you're talking about one thing being put inside the other, and we were talking about a pill box and a cup, you instantly think the pill box went into the cup. You're not going: mathematically, does this make sense? What's the visual?

Now, if you're a visual person, you might have imagery of that.

Some people don't have imagery of that.

Both those humans, the one that has the visual concept of a pill bottle and the cup and therefore sees how they go together, and a person like me that doesn't think of it visually first and just kind of intuits that the cup was holding the pill bottle: those are not embodiment questions at that time. There is something that I've done with the embodiment experience to make a subsystem that responds in that way.

I don't have a, like, yeah, we say we have a concept, but what do you mean when you say you have a concept?

I don't visualize it.

You do.

We have two different ways of conceptualizing that.

So that's kind of my point: I don't think that we have a concept of that, but we can produce thought from which that quality emerges. And when I say thought, I mean communicating stuff to ourselves or out to other people.

And GPT can do the same.

It can do an aspect of the many things that we're doing.

Absolutely.

It's only a part of it.

So the output is the same.

You're saying that what's inside is also the same.

That's the tricky part about intelligence and modeling.

Everybody always talks about, well, we have a mental model of the world.

Well, yeah, you can slowly kind of talk about it.

Some people are better at it in some areas.

And other people, if you ask somebody, like, really, how does a house work?

A two-story house?

How much space is between the levels of the floor?

Some of them will be right.

The people that are carpenters will know a lot.

Those mental models are completely formed on how we've trained ourselves.

They're not something that we just have.

All of us are different in all these qualities.

But you just said those mental models are.

So that's the thing.

Is it a mental model or is it the thing that we've trained over time to understand in some way?

Like, what is a mental model?

I guess I don't understand what you're saying.

If we're talking about, if we're saying that we are basically a large language model, right?

And that whatever we would think of as us having a concept of how many feet of space are between floors or any of these things, what things fit inside of what other things.

Are you saying that those things are implicitly converted to language in our minds?

No.

Are you saying that?

Because, like, what I'm hearing you saying, but I don't think you are, is that we have a mental model which is developed through language.

But I'm also hearing you say that we don't have a mental model.

So I'm not really sure.

Yeah.

Let me try to clarify that. I think that the concept of a mental model, when you really poke at it, is elusive. You cannot just list out all the things you know. It's not possible. Our brains don't work that way.

Right.

So if you want to introspect a human, you can't. And you also can't introspect GPT.

Okay.

Those two qualities are very similar. It's all this data compressed into our neural nets, representing flows of data and coming up with conclusions that are kind of predictive. The way those things are trained is not as important. Like, I think of a human being that's been blind their entire life as still a human being, though a lot of my training has been visual.

We have concepts of the world that are non-visual and visual, and we're still humans, and we still have thoughts, and we can still communicate and have some way of sharing that we have knowledge about the world.

Okay.

GPT is doing the same thing.

It just was trained a totally different way than us.

And I don't see a big difference between them.

Except for all the parts it's missing.

Yeah.

I mean, you can have a human who has never spoken a word in their life that has no useful language other than, I suppose, well, they could be totally socially isolated.

I think we probably have examples of this from what I remember.

But obviously, that person hasn't used the language qualities of their brain and therefore hasn't been trained.

They're not trained in a large language model aspect.

But that's just one aspect of what makes us thinking emergent creatures.

But I think so.

The reason why I want to talk with you guys about this is that there are two kind of large camps right now about GPT. One is that there's going to be artificial intelligence really soon, it's going to be like a godlike quality, and we're going to change the world from it, which I think is totally crap.

The singularity side.

And the other is: this thing is just a predictive system. It doesn't do anything interesting; it just makes gibberish.

These are two of many.

They're just the two loud ones.

They're two loud ones right now.

And to my mind, the more interesting... I mean, I think we're going to get into the actual debate, the thing that I think is really interesting about this stuff. But the point that I'm making is that I lean much more toward these things actually having qualities like humans. There is thought going on.

Now, let me talk about two different ways that we think. When we say thoughts and thinking, it's very strange: when I'm speaking, the way the language is constructed so that it makes sense at the end is not something that I'm planning, and it's not something that I have my frontal cortex focused on. When I'm doing a task I've done a hundred times, I am not using my brain; I'm using my body, and there are systems in place that are taking care of it for me, right? I can do the dishes without having a lot of thought.

But if I instead, say, don't do puzzles a lot and I have to do a puzzle, or I'm pulling a splinter out of a foot, I am very focused, and I'm using a different part of my brain: that focused part of the brain, the logical part, the thing that we maybe use for programming sometimes, the thing where you're really focusing and can't do it secondhand. GPT does not have that. Right. But there are a lot of other things going on that represent my brain working, and GPT has some of those.

Yeah.

I don't argue that.

I mean, I see that in machine learning in general.

So the question is why.

So, Brian, when you were talking about, I don't think these things are intelligent.

What do you mean by that?

So.

I think first getting back to the two camps you were referring to.

Yeah.

Um.

I think the important thing to keep in mind.

Is that we are using language to talk about these things.

And our language is built around human learning concepts, understanding.

So the words we use imbue an agency or an intention or a thought process that does not exist yet in these systems.

So I think everybody needs to be really careful when we're talking about this.

To separate what is a human incarnation of these words versus the machine that we have at the moment.

Because it's very, very easy to fall into that trap.

Okay.

Yeah.

So you're asking me what I think intelligence is.

I think back to the whole thing that the Inuits.

I may have the wrong term there, but those native peoples of the great north have 40 words for snow.

They, they have taken what we often think of as one or two or three things, and they have defined it down to 40 different things.

Uh, approximately, I may have the number wrong.

Um, I think 40 to 50 quick fact check.

Thank you.

Um, I think intelligence is similar, in that there are many, many, many aspects of it, and what we know of as human intelligence may have some additional aspects beyond some animal intelligence. But that boundary keeps getting redefined over the years, as we learn of more and more qualities in animals that meet those human aspects of intelligence.

So I look at this tool, uh, these large language model machine learning tools as containing an aspect of, uh, intelligence.

But just like you see emergent qualities in a large language model, I think there are emergent qualities to combining more and more aspects of intelligence.

Ben, you have anything to add to that?

It's a lot of words for snow.

I feel like I've been very pushy and like argumentative in my stance.

And so I'm trying to stop it.

No, you haven't.

To both of you.

Touché.

Actually, it's pronounced touché.

So.

I.

Are you saying that the, um, the 50 words for snow show that there's a lot more cultural thought going on in that part of the language?

That that culture has expanded out to have more meaning for those things and dove in deeper and understands them more because they have more words about it.

Is that kind of why you're using it as an example?

They can talk about snow in a way that is more precise and, um, creates more understanding than we can talk about intelligence. I think our language around

intelligence is minimal. Thank you. Yes. Thank you, now I connected back. Okay. Thank you.

So we are deficient in being able to talk about the experience of intellect because we just are

not very, we don't have a lot of words for it. Yeah. I think we have more words and concepts

when we start talking about what makes a person good at work. Um, and when we talk about software

engineers and what makes them good at work, often we come into a discussion that has nothing to do

with programming. It's about social qualities, those soft skills people talk about. So, um,

but there we have a language that we've built up around it. What language do we have about

intelligence? I, I, I feel like at least outside of an academic environment, we don't.

I guess, I guess the thing that has shocked me is that when people are dismissing these systems,

they just talk about them as if it's unimportant that they can

make coherent, um, sentences that, and by the way, a lot of the time they do make a lot more sense

than a random system, right? They are normally making sense. There's examples of them being

really crappy. And, and there's examples that are amazing. Like there's one that I have here

that somebody was able to diagnose an illness in their dog that saved the dog's life that the two

other vets missed. Yeah. So we, uh, I just feel like,

obligatory to say, do not use ChatGPT or GPT-4 for medical advice, because it very likely will

give you incorrect information. And the human involved in that story is a key part of that

process. Right. I mean, just as much, Ben, just as much as using WebMD and using.

No, I disagree because, because my, oh, sorry. Yeah.

No, go ahead. Go ahead.

My experience of WebMD is this website is telling me this thing and it's being really cagey about

what exactly it's saying because they don't want to get sued. And it's got a bunch of weird Viagra

ads on the side. And like, that's my experience of, I don't know if WebMD has Viagra ads, but you

get the idea. Um, GPT, whatever, I ask it a question and it authoritatively answers me.

And if it gives me the wrong information,

it will give me the wrong information with authority and confidence. And I will believe it.

Even if I go in knowing intellectually, because like we don't have any experience with

entities that can talk to us in our language with authority that don't have brains attached.

Like that's not true. Well, first of all, um, let's not talk about brains attached or not,

but that give false information authoritatively. No, I'm talking about,

uh, brains attached. I'm saying like, why does that matter? Anyway, I don't want to actually

derail. I just wanted to, I like that statement. I have people in my life that tell me bullshit

really authoritatively. Well, you also have a history with these people probably. And you

know that they tell you bullshit. I mean, a great example of that is the entire industry of

astrological reading. Yeah, we have it already. It's total crap.

Yeah. Yeah. So what's different. But anyway, I won't argue that point. My point is that

we do have a lot of BS. Now, how do we successfully figure out the dog has a disease or not right now?

Well, without GPT, what we would do is a Google search.

And I have like 25 years' experience doing Google searches and reading web pages and getting

feelings for certain things and seeing if there's a .org and seeing if a Wikipedia article

correlates with it. Finding someone that, oh, I kind of know them from before, they seem to

give good advice. All of that stuff that I would do is a very trained way of engaging with the web

as we've had it. We do not have those skills yet for these GPT type systems, but we will grow them.
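That trained way of engaging with the web can be thought of as an informal scoring function over results. A toy sketch in Python, where the signals, weights, and example domains are all invented for illustration, not a real ranking algorithm:

```python
# Toy sketch of the informal "do I trust this search result?" heuristics
# described above. The signals and weights are illustrative assumptions.

def credibility_score(result):
    """Score a search result dict on a few hand-built trust signals."""
    score = 0
    domain = result.get("domain", "")
    if domain.endswith(".org") or domain.endswith(".edu"):
        score += 2                      # institutional-looking domain
    if result.get("matches_wikipedia"):
        score += 2                      # an independent source agrees
    if result.get("author_known"):
        score += 3                      # prior experience with the author
    if result.get("heavy_ads"):
        score -= 2                      # ad-driven pages get a penalty
    return score

results = [
    {"domain": "vethealth.org", "matches_wikipedia": True},
    {"domain": "miracle-cures.biz", "heavy_ads": True},
]
best = max(results, key=credibility_score)
print(best["domain"])  # the .org result scores higher here
```

The point isn't the specific weights; it's that years of searching builds something like this in your head, and nothing equivalent exists yet for evaluating LLM output.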

I don't know if we will, because I mean, you know that you can't now do a Google search for an exact

string anymore. Oh, no. Like they stopped that. Yeah. If you, if you put something in quotes.

Yeah. If you put something in quotes, it'll give you a result.

Which might have one of the words switched or maybe one of the words pluralized. But like

when I'm searching for an error message, like SH is not a file or folder or something like

the SH is really important and is not a file or folder is really important. If I put it in quotes,

I expect to be able to get that string. And the reason that I do that is because I have

25 years of experience searching Google, but like these systems change and Google changes

and Google might change more slowly than some of these, uh, large language models or AIs that we

interact with. So I think that it's actually misleading to say, um, I know how to search

the web. Like I, I definitely am someone who would have said that, but I am starting,

this is a recent thought of mine. I'm starting to now doubt that I know how to search the web

because things change.

And I know that I'm going to be overconfident because I have experience of things that have

worked for me in the past. But you're all, you're just kind of reinforcing my argument that

these systems are like, anytime we engage in the world, we have to use our skills and

qualities of intellect to try to figure out what's real and what's not real.

And these systems are going to be very difficult because we haven't worked on them yet,

but over time we'll probably get better at it or they'll completely

destroy our economy very, very quickly. And that's more likely the case that's in my mind.

That's an orthogonal point.

Okay. So I want to talk about one more.

But, but I don't think that my point reinforces your point because.

Oh, really?

No, no, because these systems change. So the, the intuition, the skills, the confidence in

those skills that we develop for systems that we think are static because we engage with them the

same way every day, but they're actually changing.

Under the hood. Like this is already a problem that we have. I think it's going to be an

exacerbated problem with AI. And so I think that we're going to develop, we're going to develop

skills that are actually less useful than we think they are. And we're going to be over dependent or

over reliant or overconfident in the accuracy of these, of these models, which is why I interrupted

the entire conversation to say, don't use ChatGPT for medical advice, because they're already overconfident, and there are already

a lot of stories of things that have gone wrong.

I, I, I, I fundamentally disagree with your statement, Ben.

Uh, I hope it's not the don't-use-them part. You should totally use these tools for research.

Absolutely. They're amazing. And you think that you have to use other things. You have to use it

as part of a tool just so you can do a Google search. Do you think the average human can

understand WebMD? No.

That, what?

Yeah, go ahead. Sorry.

My point is that the average human being is screwed anyway.

Oof.

Like what you can do on the internet, Ben, by figuring something out,

most humans on the planet cannot do that.

So why give them an even better foot gun?

Well, why is not the question I have. Like the why is about capitalism.

And I want to get into that discussion about capitalism. But the, my point is,

is that there are amazing tools for us to, to get information. And this is a new one.

And just like you can get, uh, pulled into a morass of crazy conspiracy theory stuff online.

You could also be led astray by GPT. They're not very different in the way a human might

follow the wrong path. I see a lot of garbage out there and I've trained myself to have a filter for

it.

But you have.

Yes. And a lot of people haven't, and they get sucked in by it. And I don't think this.

Yeah. And I think that WebMD is a foot gun, but they're worried about litigation and it's really

easy to, like, hedge everything on WebMD. Cause you just add a hedge

statement for each one of them. I think that AI is a much more powerful foot gun that people can

accidentally use to get themselves in much more trouble.

I found it very useful for doing research.

I would believe that.

Um, I want to change the tone of the, the, the place of the conversation and talk about,

um,

Wait, can, can I ask Brian, did you have anything else to add to this? Cause

I just wanted to change this topic entirely and talk about this paper that shows that,

um, developers that used an AI assistant to write code.

Um, produced code that was less secure, but they had more confidence in the security of that code

by using an AI.

Oh,

anyways, I just thought it backed up everything I said. It's a, it's a, it's pretty interesting

paper.

Oh, that is interesting. Send that to me.

I will, I will definitely, I want to read that. Definitely share it with us so I can read it

after this. The thing I want to talk about is the dangers of these systems and how screwed up this

whole thing is going to get.

Wait, that's a topic change?

Yeah.

So what they can do and what they can't do and how we're going to use them is orthogonal to how

they're going to be used. Um, yeah, that's, that's kind of the point I was trying to make before.

Yeah.

Yes. The problem space is that it used to be that creative endeavor. When I say creative endeavor,

things that GPT seems to be able to do versus computers couldn't do before. I'm calling that

creativity. It's a bad definition.

Once again, we're back to bad language for what our concepts are.

Totally bad language. But it used to be that if you,

wanted a short story that would entertain people for a few minutes,

um, you had to have a human being write it. Okay. Now you can pay, um, you can pay for compute time

to write it. So what these systems are doing is taking the, it used to be that if you had a lot

of capital, you would hire human beings, labor force to do things for you. Um, and so therefore

some of that capital would go down to the labor market would go down to humans like us.

In this new world, that does not need to happen as much. You can have capital and produce a lot

of creativity without humans being involved as much. That is really scary because we don't know

when I say scary, I mean, that's problematic because of how powerful these tools would be

and humans aren't even involved in it anymore. And that's the, it feels like a little bit of

a Marxist quality to this. And I'm kind of doing that slightly intentionally, even though I haven't

really, I'm not really versed in that, in that language.

What do you guys think about that? Well, yeah, I mean, you were mentioning capitalism earlier.

Um, so there, there are people that are getting work out of this that are pushed down even lower

on the socioeconomic scale. Those are the people having to label the data that's being

used for training. So we are, through this system, further continuing the pattern, uh,

that we've seen of, what do you call it, externalizing the problems and exploiting

people's labor to create a more efficient system.

Even though it makes terribly big mistakes, factual mistakes, it has so many problems

in its use, it still fills a niche for what you call creativity, whether we're talking

about images or words, that is very useful and in a capitalist society is going to be

used.

I mean, my wife has people that she knows in the film industry that have used these

image creation tools and it made things faster and they got the project done under budget

or at least within the budget that they had.

But once again, the core problem here is that we are not building the incentives and the system around our society. We're allowing people to exploit

aspects of our society.

So I think a just way to do this would be if it's being trained on material, people

who have created that material should have some economic benefit from that.

And that is discussed, but no one's near making that happen.

There's no...

There's no legal framework that's falling into place to do that.

And we already have examples through Google scraping the web for decades now and being

able to do that for free and some laws being built around that concept.

We, you know, you've never paid for the experience of consuming information, except for when

you bought a ticket to a museum or you bought books or you paid for the access to the stuff.

But in general, like right now...

Yeah.

You could spend 700 lifetimes just reading what is in the open source domain online.

So and we've never had a methodology for making sure people got paid except for original access

to those things.

Well, open, open, what's it called?

Public domain feels very different from like, I made this corpus of work and then a bunch

of people asked for...

Yes.

Art in the style of Ben Jaffe.

I agree.

I'm still alive.

I think there's a difference there.

But if River got highly, my daughter got highly interested in the way you produce art, visual

art, Ben.

I don't think she would, but she could relatively quickly be able to emulate you perfectly.

Anytime I wanted to get a Ben Jaffe image, I could just ask River to do that.

We would never stop her from doing that, in the sense of having that ability.

We might stop her.

We might decide that it's not appropriate

for someone to copy your style and make money on it um but to be able to do that we wouldn't

stop her from doing that so there is this like this high this high thing we do in society where

we say human beings can learn whatever they want to learn that's a good thing you know i know how

to draw mickey mouse i can't make money by drawing mickey mouse but i can draw mickey mouse why can't

you make money because anytime you draw mickey mouse it's under copyright and disney will sue

you so because they've changed the laws over time to make it protected and because disney has money

to sue you yeah because disney has money so i'm curious if chat or if gpt whatever draws a bunch

of mickey mouses and disney gets pissed off like i wonder if this is how this moves along

i think it'll probably be in the in the making money off of it whether you make money off the

output of these systems or not but i guess

my

point is that um openai could pay everybody that it's gotten data from some amount and it could

do that for 10 years and everybody be happy but in reality it wouldn't affect at all what's how

powerful and potentially bad this thing could be to society like the the disenfranchising and

taking advantage of creative people has been done it's not going

to get undone there might be some reparation you know might be some way of putting

money back there might be some laws in place that kind of protect that idea of ownership of ideas

but it's not i don't know if it's a primary concern i have because i think the system's

it's much worse than that here's the example of the thing i've been thinking about you know

highly targeted ads individual ads to a person as good as the algorithms are right now that

catch you on tiktok like the video feeds where you get caught and caught and caught

right now those are all actually generated by human beings and so they're going to partially

match what's interesting to you but over time they'll just be created by these systems

currently tiktok's stuff is their playlist no no not the playlist the actual content oh the content

itself that you're swiping between so the content selection is is ai but the content itself is i see

right the content selection is ai it's really really good at predictive qualities and now it

can start injecting new ones that are manufactured and it can tune the system to get better and

these i mean what we're going to start seeing in the creative in the artificially created videos

on tiktok will be even more captivating than the current feeds and that is scary to me because

we don't even know how to deal with them now i i i believe that's a big risk but as a side note i

did hear a description of the um of the tiktok algorithm and it actually uh is less um ai driven

than you might think it actually

involves more randomness than uh like youtube or something um because they they just worked out

that they would get more engagement by throwing more randomness in there anyways but your point

is not that yeah your point still stands and it's the ads that are the the tricky part because

that's where the money is right if you can make a perfect ad for me to get electric truck

i'll buy the electric truck like i know i'm i'm malleable in that way and we're just not ready

for systems like that like right now if i took 50 human beings and i said try to convince

ben to do something by controlling all the media around him make new art call him on the phone

engage with him 50 human beings doing that to ben could make ben do anything unfortunately

it's a horrible thought i think there's limits but i i do agree that we're highly

influenced by what we surround ourselves with the messages the the input that we take in it's a huge

influence there's no doubt about that it shapes so much of our our mental conception of the world

so yes these these would be you know ai generated content that is tuned to the engagement algorithm

is a scary concept we actually have a societal mechanism to deal with this in some way which

is called government and making laws to try to rein in the worst parts of stuff the

question is can we figure that out quickly enough before

it all falls down

sorry to use you as an example ben it's terrifying to put all of my hope in washington

i don't think we i mean i mean that's u.s centric but you know that the equivalent

i i don't know

i mean you know it

is as much as humans engaging in corporations

have been a deterrent from corporations becoming more and more dominant

is pretty insignificant it will now be less significant

can you say that one more time humans are how corporations do things

and a lot of the stuff that corporations do human beings decide to do them you know i

do login experiences for netflix without me the login experience for netflix is not going to work

as well it's going to improve over time

because i spend time at it and this is a bad example programming is a bad example

necessary but all those things that corporations do with human beings they no longer will need

as many human beings to do and that means that they're less influenced by humanity

is that true because they're optimizing against humanity

well they're optimizing for capital that's what corporations do right who do they get the capital

from peter thiel from humanity yeah yeah yeah i mean there are you know some people say there's

the argument to say that we already live in a place with super intelligences those super

intelligences are corporations corporations are super intelligences right they can do things

above and beyond what a human being is they have more intellect than a human being because they're

made up of a lot of them and they have a motivation and a you know they have a motivation

for themselves so if you think of them as entities they are super intelligences like

this is a

pretty valid way to think about them super intelligences as defined as

can do more than a human being can do intellectually

okay i think of i think of a super intelligence as something that can that would like uh be able

to exponentially improve its intelligence that's its type as opposed to like just

more intelligent than humans which i think we're super intelligence compared to ants

even though we can't really modify how smart we are like it's just a scaling factor in some

ways i guess what i'm saying is like uh google right now has thousands of employees making

software right the amount of software it produces as an entity is more than any of us will ever do

in our entire life so it's a super software developer compared to us if you think of it as

an individual which we do legally and it has motivations and they're destroying the world

i mean we already in we're already in that space

and now they have tools that are allowed to remove humans from creativity that's that's just another

step of them being powerful now now i i want to i want to say yeah i want to inject something here

what you're describing

is the need for less humans to get a more and more optimized result

now i i've seen a similar mechanic in play in my old career in computer graphics and we we used to

talk about how appliances would give us so much free time so in both cases there was this idea

that as the appliances got more powerful or as the computers got more powerful we would need less

artists or uh we would need less labor

in our life but what actually happened is we continued to include as much labor in our life

and artists continued to make more and more complex images with the computing power available

and and it was an equilibrium that that didn't work out the way people were were expecting

maybe in the beginning so the question is yes we'll have these tools to be more

productive and efficient does that mean we just make a whole lot more software

does that mean we make a whole lot more creativity um does that mean more

images and stuff i mean you'll devalue each individual image or writing

that is a concern i don't see anything countering that it'll also redistribute

um the people who are involved like uh like for

example so

an image that i that that came to me in in thinking about this like a metaphor that came

to me and thinking about this is if you think about if you think about each career path as a

tree in this field right and there's some trees that are kind of close to each other um because

maybe you can hop from one career to another and there are some trees that are quite far from each

other um

tools like gpt whatever number are effectively sawing the bottom branches off of many of these

trees so it's very hard to get into these trees now because it's hard to make a living doing

you know basic or not as good uh programming or whatever it may be right it also means that we

have less of a safety net for people who fall out of whatever tree they're in

because they can't go to another tree that's easier to climb and climb it right um and it's true that

ai is affecting different industries in different ways and some industries it's not going to affect

as much right like ai ai is not the same as robotics as much as the movies like us to think

that they are right and generally speaking robots are still mainly used for moving very heavy

things

more quickly than humans could right so like if your job can't be done 100% on a computer then

you're okay uh you're safer you're safer um you'll still have value yeah i mean i wouldn't

put those words in my mouth but um but yeah that's that's kind of the way that i'm thinking

about this shift i don't want to sound too like gloom and doom because i also think there's going

to be amazing things that come out of this shift one for example

is um the embodiment of you being human the quality of actually being

in a space with other humans um talking breathing the same air hopefully without covid um and

listening to the environment you're in and seeing nature all of those things will be as important as

they've ever been to human beings and i mean i i have an instinct that as we make all stuff coming

out of computers uh extremely

prolific to the point that's potentially overwhelming it won't have as much meaning

in some ways there'll be some lack of meaning the more there is and i think we've already kind

of felt that like how many streaming services are you connected to how much content can you

actually watch there is so much content right now it's a bit overwhelming the the system we're in

the capitalist system we're in we need to be able to make a living to climb the ladder as has been

described um so is

the problem automating that away with ai or is the problem the society and the economic structure

we're in the interesting thing is with all of the generative ai images text if we got to the point

where 99% of our content was generated through it it's it's all derivative and we're not

adding more to the pie from which it will derive

and therefore i the way i see it tell me if you think i'm wrong it would just become uh we wouldn't

advance creatively in our images in our text we would be stuck in this pool that we're not adding

to because nobody can get enough value out of adding to it i unfortunately disagree with you

yeah two things one is attribution i think is going to be very important and i think we're

going to fail at it so

uh these systems these systems are uh only as good as their training data right

if their output becomes the next system's training data then i don't know i mean i

wouldn't necessarily call it low quality but it definitely has a lack of variability right
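This output-becomes-the-next-training-data worry is often called model collapse, and it can be sketched numerically. A toy caricature in Python, not an actual training loop: the temperature-style sharpening below is an assumed stand-in for a model over-favoring its most common inputs. Each "generation" learns token frequencies from the previous generation's output and regenerates with slightly sharpened probabilities:

```python
from collections import Counter
import random

random.seed(1)

def next_generation(counts, temperature=0.7):
    """'Train' on the previous corpus, then generate a new corpus by
    sampling tokens with sharpened (temperature < 1) frequencies."""
    tokens = list(counts)
    weights = [counts[t] ** (1 / temperature) for t in tokens]
    return Counter(random.choices(tokens, weights=weights, k=1000))

# Generation 0: 100 distinct, equally common words ("human" data).
corpus = Counter({f"word{i}": 10 for i in range(100)})
for _ in range(10):
    corpus = next_generation(corpus)  # each model trains on the last one's output

print(f"distinct words left after 10 generations: {len(corpus)} of 100")
```

Small sampling flukes get amplified every generation, so the variety narrows even though no single step looks destructive.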

um so if you if you decide to put a bunch of

spam junk out there on the internet right just like articles or or whatever and you spend

i don't know a hundred thousand dollars doing that for whatever ends you're trying to accomplish

well now if it's not able to be attributed to your algorithm or to an algorithm that is now

part of the training data for the next thing and so i don't know like the the training data for

the next algorithms is going to be

overrepresented based off of what the financial incentives are for the production of that right

that's one i think we're saying the same thing wait which is nothing different that's been going

on for a hundred years how so nothing's different about that statement because human beings are

creative based on the environment they live in and what they experience yeah but and most of us

experience a lot of the same media and so we produce similar stuff we're not snowflakes we

just aren't but don't you think there's a difference in the environment that we're in with the

volume have you gone and seen a show a movie that's like kind of predictable and dumb but

everybody watched yeah like most movies there you go no that that's what we do but that's not my

argument but that's what i'm saying is that we already are making content that's kind of crap

that trains people to make content that's kind of crap we do that that's human beings' experience

a lot of what we produce is uninteresting drivel and then sometimes things are like wow that's

really neat and then the zeitgeist goes around it to then make that be the baseline for the

next phase like i don't see like i guess where i come from is what they're doing what the systems

are going to do compared to what humans are going to do i don't see a difference the difference is

that if i have a million dollars and i discover how to make a hundred million dollars by spamming

everyone on the internet i can do that with ai oh yeah and with compute but 50 years ago i couldn't

50 years ago the outsized impact that i can make as a crappy human

with a lot of money it's like yeah it's so it's orders of magnitude larger today

fully agree and we would probably be saying the exact same thing about spam email uh you know

back when we were recording geek speak in 2005 or 2010 but the difference is that spam email

was not the training data for the next systems that we rely upon the current way we're going

to be doing these back propagation training runs on large data sets

versus how we're going to do it in the future where a lot of the output is from these

systems is definitely going to be different we will get to the point where we train models to

be better to produce new content not just on the data set like there'll be there'll be um what is

it called adversarial systems that make each other better like we're we're just seeing the very

beginning of this capability so that idea like i used to be in that space of like well without

humans being creative the system won't be able to get better

so if all the creativity starts happening with systems then people won't be

putting creative stuff online because it'll be dwarfed and therefore the system will never get

better but then as i thought about i'm like well i don't know how i get a creative thought i get a

creative thought by experiencing a lot of things and i see something i like and something else i

like and i kind of mix it together kind of like my creativity is like puns they're just a mishmash

of how i involve myself but it really becomes uh dangerous when you have a quick um

feedback cycle of something like you were describing with tiktok

that's the problem i said is that we're really good at it like the current way maybe tiktok's is

simplistic but the way facebook for example runs uh showing things to different groups of people

and getting people to stay longer online is using the content that is generated by the people and

tuning that system we're really good when we have content and a lot of eyeballs to train

the content better and better now we can actually recreate the content and specialize the content

on your area all the ads you'll see will be about where you live and about the kind of music you

like and it'll be very hard to not get wooed by it because just imagine five artists making content

directly for you and what it would feel like and we're going to see that yeah there's gonna be a

plethora of that kind of content and the shame of it is the monetary gain out of this is by appealing

to our if that's the right way to say it baser instincts the the the base pleasures as opposed

to a system built

around creating knowledge and helping us to be better thinkers i mean all of that is possible

it's just there's not a great uh incentive for somebody to put the effort into building that

system i was thinking about injection molding what's injection molding yeah you know if you

pick up a power tool like a drill or something the case of that thing's an injection molded case

it's everywhere all all stuff that's made of plastic basically pretty much all stuff that's

made of plastic besides

plastic bags and things anything that's firm you make a mold of metal or something like

that and then you just shoot a bunch of plastic goo into the mold and then you let it dry really

quickly and you have an object you have a thing yeah and these things are highly

efficient at making cheap stuff and that's why we have such amazing cheap stuff like plastic things

on amazon or just everywhere chairs and furniture everything you can possibly imagine is made by this

technique and it's completely removed the you know somebody with a hand plane

in a workshop making a chair like it's just those aren't there anymore it's all this

mechanized process but the reason i'm thinking about that is that how that changed our world

wasn't really thought about when injection molding started getting cheap and cheap and cheap right it

just kind of slowly happened we got really good quality cheap things they weren't like amazing

but over time the price point of the system got to the point where like let's not make stuff that

will last more than five years and then you can always make stuff like that that can pop

that composting quality i mean i don't know if those things are necessarily connected though

i don't think they're necessarily connected you could definitely use injection molding to make

amazingly strong durable products and you could also do planned obsolescence before injection

molding absolutely and people do even with non-plastic but i think we hadn't realized how

much money was in it but a petrol-based plastic material injection molded with big machines

only makes sense at a large scale

once again which is what we're talking about with automating with ai and that's kind of why i make

the connection is that we have a plethora of plastic cheap stuff we're about to have a plethora

of creative endeavor and that's very different in my mind to how our culture works then but maybe

not maybe it's very similar maybe it's just like that you know you'll just have a lot of

cheap content too like we have cheap furniture

the point i was making earlier is not a point that i'm making very strongly but it is

definitely a concern about attribution because if you don't know what is made by these

systems then the systems eventually could just feedback loop themselves into an interesting

place and if we rely on those systems then that's really probably not a good thing um but

to answer the other thing that you were saying brian

uh we can get novelty if we just have ai systems that are trained off of let's say the

music of the last i don't know 250 years or something like that right um because if you inject

randomness into it and then your system that's evaluating whether it's any good is humans and

you know spotify or whoever or environment or environment as in

as in like what um the way darwinistic selection works oh i see you

call it environment i see our culture is like that too right the environment is

the not just the humans that like it but the amount of humans that like it that will pay for

it in some way so that you can then produce more of it right yeah and so there may be an

evolution of basically through uh i guess you could call it a natural selection and random

mutation or semi-random mutation um also

i feel like capitalistically directed musical fashion is circular just like fashion

in general i've just noticed a lot of 80s tropes uh being introduced into pop music as if

they're brand new um so yeah i also kind of wonder about that

we kind of glossed over something i think is really um important and i think

brian you're mentioning what melanie um is kind of concerned about

i think all of us are concerned about is these systems stole our culture and these corporations

like open ai basically stole our culture and put it into a system that can reproduce our culture

stole or like by stole you mean like monetized took we can say took we can say pirated we can

say has a copy of it doesn't matter how it got it it's just it could even be legal it's just that

it used some large aspect of human culture as a data

set so that it can reproduce that culture and when i say culture i also mean creative

endeavor there's all these loose terms like brian said we don't have good language around

any of this stuff so i'm just using the terms hoping the conversation will bring across these

meanings better than any of these phrases we're actually using but um the point is that i also

am doing that all the time everything i watch and look at engages and changes how i engage in the

world i don't think we have a soul i don't think we're just experiencing things

in the world without changing uh our brain gets changed and who we are gets changed by what we

engage with so if i spend a long time reading french literature from the 1600s then i will

be more like that you'll be better and i'll probably be better at french or old french

but i guess my point is that no one's gonna say it's problematic for me to look at all of pixar's

work and then create derivative work of pixar's work there's nothing wrong with that because i'm human

human beings can do what they want in fact companies have in some cases embraced that because it

makes their own product more valuable as it becomes more part of the culture and yeah yeah

absolutely and you know disney's famously not done a good job of that until you know more recently

so the reason it's problematic to train and create derivative work is because it's not

embodied by a human is that why

no it's because it can do it at a scale no single human can do

if i grabbed a hundred artists let's say i was a billionaire and i just was able to

hire like 700 artists to completely produce content that's very similar to pixar and actually

even has like you would be hard pressed to say this came out of pixar studios versus it came

out of this new billion dollar you know industry thing um would that be wrong for that

company to do that i mean

you would be producing four orders of magnitude less content using four orders of magnitude more

money i'm not talking about how much money i put into it i'm talking about

if i were to create an entity that really produced content that was really really good that was 3d

animation had that kind of zeitgeist of pixar's content i'm using that as an example because

it's successful and people like going to those movies and also there's another company that's

doing those kind of movies and they spend a lot of money to do it is it problematic for

them to do it

to kind of emulate that style and produce many you know because yeah like this is a thing

it's totally doing no no but you take a look at the two corpuses of work they are

definitely distinct but um they're playing on the same ground rules they

have the same mechanisms at their disposal and the artists are the same they

swap back and forth right people work at both companies like

so the reason why this is problematic is probably not about emulating culture and reproducing it

that's not the reason it's a problem it's because there's something different about a computer doing

it i disagree i mean i don't know if i disagree with the full statement but i definitely disagree

that because it feels like you put a statement in

our mouth or my mouth at least it feels like you turned it into a value judgment

well that's what i think it is i think it's a value judgment because all of us are always

copying from the culture that we're in that's how creative work happens all artists look at other art

and make stuff that's unique and they're building it from all the experience they've had in their

culture and when you buy a painting or go to a pixar or dreamworks movie and you give them money

are you doing it just to receive the benefit of viewing it or being a part of the culture what

what's the other alternative

buying a painting from a gallery you might be supporting the painter you might be supporting

the studios that made the movie it may be part of your motivation is to make sure that continues

to exist and grow right um and my point is there is more at play than just wanting

content and when you have a machine that can produce even medium quality content at a massive scale

it changes an equation that's basic to our culture yep and so the value judgment you seem to be

creating was that we're you know biased against machines or racist against uh yeah are you a

machine but but but i i think that misses the point because the point is not what our bias

against the machine is the point is what it will do to our culture and our society and uh the way

we live yeah i think i would agree with that that was more eloquently stated than i was about

to yeah i feel like the reason i'm poking at this is i feel like too many people are

saying you can't just look at other people's work and produce new work like that that's not okay

yeah it is if it's a human doing it it's totally fine so it's not about the theft of the idea and

the intellect it's about a machine that can do it at scale that's the problem it's because if i'm

emulating some artist and i do really good work i'm not going to be able to do it at scale

people will buy paintings from me they'll know they're not the originals but they might like

them because i'm copying kandinsky or something um in my own style slightly different right i could

get really good at it people might really like them and there's nothing really wrong with that

from a human experience the issue is the scale problem and that's that is a problem but i guess

what i'm saying is it's not really about the training data i don't really know if that's

the right way to think about it there may be a better way to think about how

we can fit this into our culture

in an equitable and beneficial way um it's interesting that we all know instinctually

that it's wrong i'm just trying to figure out why we know that because we are not transactional

creatures we are motivated by more than just getting something we we want our culture to

grow we want to be a part of the culture i think it's interesting that like when we're talking

about the scale being the problem here this is

kind of like saying all right you've got a group of four kids in front of you and you give them

rules for how you want them to play and then you turn around and there's a group of four thousand kids

you can't just give the same rules and expect the same outcome right like there is some sort

of a fundamental difference when when you multiply something when you scale something by

several orders of magnitude and the thing that's fascinating

to me about that is that those are not different conceptually but there is some sort

of a phase change that happens somewhere somewhere along the line right um so when you're talking

about an individual who's making derivative work or you're talking about a millionaire who is having

uh a thousandaire who's having 10 people make derivative work or you're thinking about a

millionaire who's making a thousand people make derivative work or you're talking about someone

who works at open ai who's having

10 million ais make derivative work like somewhere along there it gets wrong to me and i don't know

if i can exactly draw a clear line in the sand i might be able to if i think about it a

little bit harder but i think it's interesting that it's very clearly wrong at the far end of

the scale yeah and it's very clearly okay at the other far end of the scale i think this has to do

with how our society

thinks about power dynamics and value and i think we all know kind of instinctually that

we don't do it the right way we have a very rich 10 percent of the planet people that control these

large entities that have a lot of power and we all know that's kind of wrong and now we're going

to face it at a very different level and it feels like it could be if this goes well for

humanity and my wishes if you will if you want to go into that space it's like i'm

hoping it is a large backlash to what we value in the world and that we map it back down to

human beings like i i don't want corporate entities to have power like that i want human

beings to be the primary value the health and flourishing of individuals that should be somehow

our metric for a successful society if this somehow migrates to that methodology

i think we're set i don't see any way it's going to but i would hope that that's going to

occur um and then you get into you know i don't want to sound like it's going to be a star

trek universe kind of thing right where everybody's needs are taken care of um at a base level and

people can just be as creative as they want to and the systems serve them post scarcity utopia

yeah i think it's a pipe dream but it's also a tool we've never had before coming back to your

point though brian like

we've had these transformations with like dishwashers or whatever uh and

we have not seen the changes we were hoping for we are not sitting around working five hours

a week we're not seeing more leisure in fact we're seeing less leisure and you know who is

squeezing that out of us as a population

and who has control of the

large language models you know like i so wanted to end on a happy note a possible hopeful

note well do you have any thoughts on that part of the problem is who's sitting around dreaming

how to use these tools to make that better world you were talking about research lyle how it could

benefit research i mean i've gotten a lot of benefit out of my little experiment so far and

i can see taking it further we just need to change the rules of our society or of our economy to

encourage more of that right

you know we haven't done a good job on compensating some of the most important human activity

teaching is a great example of this right we don't map our capitalistic power structure

around teachers getting benefit that's not how we do it we don't map it around people

that raise children like we don't say that has a lot of value in our capitalistic society

right and we also don't do it in government you know most of the time government employees

are paid kind of below market

for the kind of work that they do

it is possible that

one of the reasons why the system like why maybe our local government doesn't work as

efficiently as it could is because we underpay

uh... people

and they're all stretched trying to get things done and they just get things done at a bare

minimum

and maybe these tools will allow

uh... you know local governments to be more efficient more functional

that could benefit all of us

that's what government's really doing

so it's really very important no matter what the infrastructure is

it's a very important piece of the role

