
AI AND ETHICS IN WAR AND WORK

March 25, 2022

S01 - E07

Charlie and RSnake discuss the distinction between AI/ML and AGI, the future of work, the weaponization of drones, attacking AI through bad seed-data, faster paths of bringing forth AGI, as well as the future of deepfakes.

GUEST(S): 

Charlie Burgoyne

VIDEO TRANSCRIPT

Robert Hansen

Today I'm with Charlie Burgoyne. Charlie began his career in intelligence, eventually directing data science at Frog Design and Rosetta Stone. After his time as a research astrophysicist at NASA, Charlie became the CEO and founder of Valkyrie AI. Valkyrie is an amalgam of Valkyrie Intelligence, an AI/ML consultancy; Valkyrie Velocity, a car racing team; and Valkyrie Capital Labs, a studio model for investment theses powered by bespoke machine learning and alternative data.


Charlie and I discussed the distinction between AI/ML and AGI, the future of work, the weaponization of drones, attacking AI through bad seed data, faster paths of bringing forth AGI, as well as the future of deepfakes. Without further preamble, please enjoy my conversation with Charlie Burgoyne.


Hello, and welcome to The RSnake Show. Today I have with me, Charlie Burgoyne. How are you doing?


Charlie Burgoyne

I'm doing great.


Robert Hansen

Thank you for coming all the way down here. You're actually positioned just a couple of blocks away.


Charlie Burgoyne

Yeah, I'm going to invoice you for the 16 calories on that coffee walk. Walk and a half.


Robert Hansen

That's okay. Maybe it's a tax deduction. Charlie is an interesting guest for multiple reasons. But the primary thing we're going to be talking about today is AI and ML. I hear you know things about this.


Charlie Burgoyne

I'm really glad that's what you selected. Yeah, perfect.


Robert Hansen

If I had chosen something else like basket weaving, you would have been out of your element.


Charlie Burgoyne

That's right. I do not get that very much.


Robert Hansen

No? Just Firem'n Chit. You just disclosed you're a Boy Scout. Now I know this about you.


Charlie Burgoyne

I am a Boy Scout. I loved it. I went to Three Eagle and did all the stuff.


Robert Hansen

Oh, wow. All the way?


Charlie Burgoyne

Yeah. But it was a good childhood experience.


Robert Hansen

Yeah. The way that you and I met was actually at a Formula One race here in Austin, Texas, through a mutual friend. We were having some drinks or something and just getting to know each other, which is my life these days, it seems like. Just getting to know people.


You started telling me about this crazy new business idea you had to start a company doing AI and ML. I think you were a little pensive about it. And I'm like, “Do it. Just do it. Get into it. Do it right now.”


What I don't think you had known at the time, and I've since learned, is that I had just done an analysis on the highest-paid types of jobs out there. It was big data, AI and ML, and I think one other thing; those were the top three. Yeah. And you did it. You now have that kind of company. How's that going?


Charlie Burgoyne

It’s been an amazing five years. We're about to start the sixth year.


Robert Hansen

Valkyrie Intelligence is the name?


Charlie Burgoyne

Yeah, Valkyrie. We're starting our sixth year.


Robert Hansen

Has it been that long? Wow.


Charlie Burgoyne

Is that crazy?


Robert Hansen

Yeah.


Charlie Burgoyne

We came out of a pretty sad story for modern scientists in America. I never really set out to be an entrepreneur. I didn't really think I was going to become some multifaceted business starter of any kind. The challenge that I was dealing with at the time stemmed from my first couple of years in government service.


When I was 13, I thought I was going to be a veterinarian. I came from a very science-oriented home. Dad’s a scientist, my mom's in academia. Then 9/11 happened, and my whole worldview shifted. I decided I wanted to get involved with government science. I wanted to help out there.


From that point on, I really redirected my focus. I went on this path that eventually led to me getting pretty heavily involved with government across a variety of different spaces, mostly working in protecting American interests. I can talk more about that or less about that.


As that path played itself out, I realized that scientists have no real place where they can do incredible research and have that research make real impact in the world around them across a variety of different metrics. Not just positive social impact, which is something we really champion, but also meaningful impact on the businesses that people are running and the industry at large.


Then there's job security. It sounds topical, but it's really important. It was really important to me that if you wanted to raise a family of five, there could be a laboratory that would support that comfortably. That was a founding treatise for Valkyrie. If you're talented, you should not be worried about things at home. And we can talk more about how that's all made itself manifest through the last several years.


Robert Hansen

Why don't we start by talking about what the differences between AI, ML, and then artificial general intelligence are? Because I think these are topics that get thrown around a lot. Many people don't really know what that means. It sounds like murderbots. Could you do a few minutes just explaining what that is?


Charlie Burgoyne

Yeah. The way we talk about AI, what we're usually talking about is the application of complex statistics on top of well-organized data. That's more or less what we're doing.


Robert Hansen

Well-organized, that's a big caveat.


Charlie Burgoyne

It's huge. Yeah, it's way bigger than people give it credit for. It's actually probably 80%-plus of our job. I personally have chewed on a definition for artificial intelligence for a long time. I've worked with people on it, including our chief science officer. Actually, today we had a one-on-one talking about this. She's a brilliant scientist. She weighs in heavily on these kinds of things that we think about.


For me, artificial intelligence is really denoted by two different fields. And they're more philosophical, really, than they are scientific. The first is the nature of knowledge itself. How does a fact or a datum become information, which eventually becomes knowledge?


Those are all really distinct states, almost like phase states of matter. And then how do you pair that with pattern recognition systems?


Robert Hansen

Wisdom.


Charlie Burgoyne

Wisdom. Yeah, sure. The ability to find logical breaks and chains that are embedded within that knowledge and then make decisions that sit on top of that.


Robert Hansen

One of the coolest applications I heard of was somebody who had recorded a lot of sound of earthquakes, or just the earth moving or something. For some reason, the machines were able to predict an earthquake happening at a much higher rate than humans were, by basically every measure one could use for something like that.


What they found out is there's this very low frequency that they just weren't even paying attention to. They had to do a lot of work to figure out what the computer was figuring out. But that's the thing where with enough well-organized data, you can start extracting new ideas from that data.


Charlie Burgoyne

100%. My personal bias about the nature of governing dynamics plays in here. I'm of the perspective that all these systems that are seemingly really disparate are actually pretty interconnected with each other. I think that there is a set of governing dynamics against which a whole bunch of systems operate, like the Ising model in three dimensions.


This unsolvable physics problem probably has a lot of overlap with how languages spread based off of their proximity to rivers, for example, in Western Europe or how fish markets are dramatically impacted based off of thermal variations in the ocean. I actually think all those systems have really interesting parallels and that there are models that you can reapply to solve those problems.


In fact, our brain does this all the time. When you examine neurons in high fidelity, they actually have a lot of similar properties. All of our sensory neurons actually look and operate in the same way, or a very similar way.


Robert Hansen

It's really the structure.


Charlie Burgoyne

Yeah, exactly. It’s the synaptic relays. It's the way that knowledge or those pathways are formed that dictate how we're able to intelligently operate against those inputs.


Robert Hansen

How would you structure a conversation about AI versus ML? Or would you say that they are, for all intents and purposes, pretty much the same thing?


Charlie Burgoyne

I think machine learning is really focused on the act of extracting patterns and contexts out of those knowledge systems. It's a sub-branch of that fork. We could argue the field of AI has the properties of knowledge engineering, turning raw data into information and into knowledge; pattern recognition systems that are extracting and manipulating logic; and then decision-making, which is taking that logic and operating in a way that serves an objective of some designation.


Machine learning really falls into that second category, I think. Not entirely; I think that there's some overlap into different spaces, certainly on the decision-making side. But it's largely a sub-branch.


Robert Hansen

And then AGI?


Charlie Burgoyne

The funny thing, I think, is that AGI or generalized intelligence means different things to different people. The way I would describe AGI is the ability to have a seemingly infinite number of domains interrelated for context extraction and then decision-making, which is basically what human beings do. You can teach humans about a lot of different subjects at different rates and different frequencies.


Robert Hansen

This game is a lot like chess, but it's checkers. Fewer rules.


Charlie Burgoyne

What if you're daydreaming in the middle of an office and you've got a coworker talking to you? And they're like, “What would Batman do?” If Batman was like, “Kapow!”


Robert Hansen

If he were here right now, exactly.


Charlie Burgoyne

We make jokes about that. But honestly, that's the new manifestation of AGI. You're taking wildly disparate domains and extracting these really thinly-braided relationships between those domains in an interesting way.


A joke is my favorite manifestation of AGI. I speak about this stuff. I love my dad jokes. My kids have been so great for my career because now I have infinite dad jokes. I'll say things like, did you hear about the cow who got a promotion? Yeah, well, she was found to be outstanding in her field.


Robert Hansen

That qualifies as a dad joke. Yeah.


Charlie Burgoyne

It's beautiful. But really, if you think about it, we're bridging two different domains: the farm domain and the work domain. And we're able to very quickly, which is fascinating, braid the connection between those two with a semantic play.


All of a sudden, we find that funny to accidentally bridge domains that aren't usually bridged together. It's like we find new synaptic connections to be funny, for some reason.


Robert Hansen

Yeah, we do. Especially when they're really totally tangential. Not really related at all. I think one of the most common questions, which I'm not even sure is a particularly good question to ask, but I'm just going to ask it anyway.


I'm going to leave AGI off the table for now, but what do you think AI and ML are good at that humans are bad at and vice versa? Is there any place left that humans are just naturally going to always be better at?


Charlie Burgoyne

I think your question gets down to the fundamental nature of intelligence itself. And the deeper I get into this field, the less confident I become in the tenability of an AGI. I actually think that there are fundamental unknowns about the nature of intelligence that we are not really that much closer to uncovering than we were 40 or 50 or 60 years ago.


We're able to solve some of those same problems faster and more elegantly because of the availability of new technology. GPUs have been transformational. TPUs, we'll wait and see. They have been used, but they're not as revolutionary as I think Google thought they were going to be right off the get-go.


What we've seen is some of these formulae have really just been empowered to do the things we already thought were possible. But we're not actually any closer to understanding the fundamental nature of intelligence.


Robert Hansen

I would have absolutely bet that you were going to say the opposite.


Charlie Burgoyne

So would I. I used to totally think that fully conversational autonomous thought was something that we'd see in my lifetime.


Robert Hansen

What about something that just will pass the Turing test? Something that can just hold a conversation. No other attributes, just conversation.


Charlie Burgoyne

I think Turing tests will be passed, but that's because of our ability to look at exemplifications of intelligence at scale in ways that we didn't imagine years ago when Hofstadter was writing books like Gödel, Escher, Bach: An Eternal Golden Braid.


I have a real example. If you look at AlphaGo or anything the AlphaGo guys are building right now; I think StarCraft was one of the things they did two years ago. That was pretty cool.


They're like, “Oh, we built an artificial intelligence that can win at Go.” Well, actually, what you did was figure out a very clever way of playing tens of trillions of simulations of the game, looking at tens of thousands of exemplifications of the game, and then rerouting decisions based off of the success of those simulations.
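

To make that concrete, here is a minimal, hedged sketch of that simulation idea in Python: pure random playouts on the toy game Nim standing in for Go. The game, the move encoding, and the simulation count are all invented for illustration; systems like AlphaGo add neural networks and tree search on top of this basic loop.

    import random

    def legal_moves(heaps):
        # A move is (heap index, number of stones to take).
        return [(i, n) for i, h in enumerate(heaps) for n in range(1, h + 1)]

    def apply_move(heaps, move):
        i, n = move
        new = list(heaps)
        new[i] -= n
        return new

    def random_playout(heaps, player):
        # Play uniformly random moves to the end; whoever takes the
        # last stone wins. Returns the winning player (0 or 1).
        while any(heaps):
            heaps = apply_move(heaps, random.choice(legal_moves(heaps)))
            player = 1 - player
        return 1 - player  # the player who just moved took the last stone

    def choose_move(heaps, player, simulations=500):
        # Score each legal move by the fraction of random playouts it wins,
        # then pick the best: "rerouting decisions based off of the success
        # of those simulations" in its simplest possible form.
        best_move, best_rate = None, -1.0
        for move in legal_moves(heaps):
            wins = sum(
                random_playout(apply_move(heaps, move), 1 - player) == player
                for _ in range(simulations)
            )
            if wins / simulations > best_rate:
                best_move, best_rate = move, wins / simulations
        return best_move

    print(choose_move([3, 4, 5], player=0))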


Robert Hansen

But isn't that the same thing as imagination? Isn't that what we're doing when we're speculating about whether they should or shouldn't ask their boss for a raise or whatever?


Charlie Burgoyne

Maybe. But if Go didn't have trillions of combinations, but instead just three, we'd be far less impressed with it. And it's exactly the same technique. If you had to choose three numbers in serial and you had to express a pattern in those numbers, we aren't that impressed when the computer can identify what that third number is going to be.


All of a sudden, we do it in Go, which is a similar problem, and we're super impressed with it because we're like, “Oh, only humans can solve this problem.” No. Our comfort with permutations that are wildly nonlinear has just expanded in ways that we didn't even think were possible.


Robert Hansen

So why can't it figure out language? Language seems like a relatively straightforward thing, if you take into account the structure of normal language. I'm not talking about the extra things we add on top of language; intonation and color and slang and all that stuff that might make it more difficult. Or even jokes. Just language: “I want to go do this.” Why can't Siri figure out exactly what I mean when I say that?


Charlie Burgoyne

My core belief, as of now, is that intelligence comprises multiple different domains of research for us currently. But one that is undervalued, because it is not treated as scientifically robust, is the nature of neurological motivation and emotion. And this is where things get really weird.


One of the reasons we're able to solve complex problems as human beings is because of our ability to understand underlying motivations and have that motivation actually have an impact on our ability to solve problems and even have, to some degree, an emotionality around that.


I don't know how to code motivation or emotional response or even synthetic emotional responses into an algorithm. Can't do that.


Robert Hansen

I ran across a tweet a day or so ago. When I saw it, it perfectly encapsulated for me how difficult this problem is, especially since there are a lot of companies out there; Facebook, Twitter, etc., who really want to use AI to solve all of their customer support problems, because it's just expensive.


As much as they can automate, that's just less time they're spending on it and less money. This is the tweet: “I have a plan. We tell Boris Johnson that Putin has launched a hypersonic missile at Number 10. And when he retreats to the bunker, we weld the door shut.”


Now, there's a lot going on in there. First of all, this person is advocating that they give the head of their government false information about national security. Then they effectively and falsely imprison this PM in a bunker. And it's clearly satire.


There's a lot going on there because you have to know what Number 10 means: 10 Downing Street. But that's not exactly obvious from the context. You have to really know what's going on. You have to know what hypersonic vehicles are, and that that's a threat, not just a thing. You have to know who Putin is, in what context Putin would be launching things, and how that would affect anybody that we care about.


Then ultimately, we have to understand that the whole thing is a joke. We can't know definitively that it's a joke, but we can infer it's probably a joke. And so I should not flag this as somebody who's trying to commit a crime. This is just somebody who has a weird joke that they want to tell.


I don't know how to code around that set of problems. It's not one problem. It's like 10 problems in a row. So to me, it seems like the best we could do is look for keywords or maybe sentiment analysis or maybe the context of the two or maybe context of this person.


They tend to post very inflammatory things. They've been banned multiple times or they keep getting strikes against them, etc. But beyond that, I think just a human has got to go look at this. I don't think this is something you're ever going to be able to automate. Do you think I'm wrong about that?


Charlie Burgoyne

No, I think you're right. I think that's what we're approaching. The algorithm that eventually beat Go is basically a Monte Carlo Markov chain that was formalized before I was born. And yet, this is something that we herald as this big advancement.


We don't understand the motivations to beat Go. We know how to define success, but who actually cares about success? Your example is bringing up exactly what I'm talking about.


There are components there but 10 components in that statement that have a proximity to unfiltered intelligence, non-synthetic intelligence. But they all require different approaches to solve for them.


AGI still remains completely elusive. As for my team, I will confidently argue we have the best AI team in Texas, and more broadly, we're a very competitive team. But the way we would solve those 10 different problems would take 10 different shapes. And I think that's an exemplification of what we're parsing together.


Robert Hansen

For instance, how do you know that this person doesn't have a mass group of welders who are willing to risk their lives to go weld somebody into a bunker? Obviously, that's not going to happen. Or maybe this person's just crazy, and they really believe that their welder buddy is going to join them in this crusade to weld somebody into a bunker.


I can't definitively tell you. Even me, Robert Hansen, I can't definitively tell you this person isn't just crazy. And yet, I also can definitively tell you that this is just satire. It's a very weird thing that the human brain can do.


Charlie Burgoyne

It's extremely strange. There are two really great tracks we can go down. So we should fork this, go down this branch, and then come back. The fact is that your brain can reconcile that and my brain can reconcile that. When I tell my lame cow joke, people will either acknowledge that it's funny and not laugh, because it's not that funny, or laugh.


Robert Hansen

Two out of 10.


Charlie Burgoyne

So you're in the first group; that's fine. As long as people don't say, “I don't get it,” then we're gravy. But when that happens, the fact that all of those people can appreciate that, and I've probably told 5,000 people that joke at this point, the fact that it's ubiquitously appreciated as a synaptic braid, I think, is indicative that we are fundamentally solving the problem of AGI with the wrong underlying mechanics.


This is where things get wonky in the wackadoodle, mad scientist brain that I've got rolling around. I actually think that silicon-based approaches for computational AGI are the absolute wrong way to go about it. I think we're trying to solve this problem in a state that is extremely resistant to the natural formulations for how knowledge is stored in organic reference points and then how patterns are exhibited on top of that.


What I mean by that is using binary approaches and a collection of gates to allow or disallow signals at the tiniest level, and then static operators from the mid-level all the way up.


Let's say you program an FPGA. You'll never program an FPGA to the point where it's going to be able to solve an AGI problem. Because I think, fundamentally, we're not solving problems based off of collections of gates and arrays. What we're actually doing is figuring out interesting ways of bridging components of information together. And the operator that sits on top of that is really topical.


I have an example that shows you why this is the case, I think. If I asked you to figure out the biggest prime number under 4 million on a pad of paper with no other tools, it would take a while. It would take some humans an infinite amount of time. It would take you a while, it would take me a while. There's probably a dude or dudette out there who could do it in like 15 minutes. Power to them.


If I asked you who your best friend growing up was and they walked past that window over there and it was raining outside, 99 times out of 100, you're going to identify them in really bizarre situations. You can identify them when they’re wearing a hood. You can identify them if all you see is them look at their watch, which is really wild.


Now, we take a quick pause and look at how silicon can solve that problem. Your Apple Watch can figure out the largest prime number under 4 million super quickly. In under 40 seconds, I would guess, especially if you developed a custom app for it.
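

As a concrete illustration of that asymmetry, here is a minimal sketch of the prime search in Python, using a sieve of Eratosthenes; on any modern chip it finishes in well under a second, while the friend-recognition task discussed next has no comparably simple program. The function name and limit are just for illustration.

    def largest_prime_below(limit):
        # Sieve of Eratosthenes: mark every composite number below `limit`.
        sieve = bytearray([1]) * limit
        sieve[0:2] = b"\x00\x00"  # 0 and 1 are not prime
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                # Cross off all multiples of p, starting at p*p.
                sieve[p * p::p] = bytearray(len(range(p * p, limit, p)))
        # Walk down from the top; the first survivor is the answer.
        for i in range(limit - 1, 1, -1):
            if sieve[i]:
                return i

    print(largest_prime_below(4_000_000))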


If you gave the world's largest Cray the task of identifying your friend in basically any circumstance other than head-on, plus or minus 20 degrees of pitch, yaw, and roll, it's going to really struggle. Much less what you're doing, which is looking at their gait in the rain with clouds and fog, like, “Handle this, my buddy.”


What I think is going on is that it's not that we are computing faster. We know we're not. We know what our frequency is in our noggin. What I think is going on is that the really elegant solution that you've developed biologically is the ability to store complex semantic objects in a super simple and interconnected way. I.e., the model of your friend has less to do with your ability to recognize patterns very quickly and more to do with the nuances of how you store your friend.


It's a collection of things like their gait, the way they sound, the way they look, their environmental decisions that they make, clothes that they wear, a plethora of things. I bet you good money, you're probably thinking about all these things with your friend right now.


Robert Hansen

Well, I saw you walking down the street earlier. I knew it was you.


Charlie Burgoyne

Get out of here.


Robert Hansen

No, I'm not kidding. I actually recognized you from probably over a block and a half away.


Charlie Burgoyne

This is a weird thing. No Cray could do that. And you're not going to be able to figure out that prime number.


Robert Hansen

To be honest, it was definitely this getup you're wearing.


Charlie Burgoyne

Yeah, this didn't help. I dressed for an audio podcast. My apologies, The RSnake Show. But that's what I'm getting at. We're not solving problems with silicon. Silicon is really good at some things and really bad at others.


I think the big difference between finding a prime number under 4 million and looking at your friend and recognizing who that is, is that the first is really straightforward. We can easily codify that. There are really no patterns that go into that. But how do you match complex models to other complex models? That's something that uniquely the brain does.


I contend that silicon is really bad for that, especially the knowledge structures that we're using for silicon. I actually think the closest state we’ll ever get to an AGI, you're going to think I'm nuts, is by harnessing the power of organic material to offload our knowledge store and processing for us.


Robert Hansen

Cyborgs.


Charlie Burgoyne

I think the step before cyborgs is a greenhouse full of instrumented plants, the proteins of which we are manipulating to store complex information and then bridging across them in ways that much more reflect the way that we do this in our brain.


Robert Hansen

You're right. I do think you're crazy. All right. Let's talk about AI ethics. For anybody who's spent any time thinking about this, it goes a little bit haywire when you start talking about trolley problems; when you start talking about a self-driving car that has to make a decision whether it hits the old lady or hits the school bus or whatever.


Before we get there, I've heard a lot of talk about Asimov's three laws of robotics. A lot of talk; it seems to come up in nearly every conversation about this topic. And I have yet to see one robot ever have them built into it.


It's funny because everyone talks about it. Everyone. But then I'm like, “Okay, well, did it run over that woman's hair and start sucking her up into it? Or did it slice this person in half because they just walked a little too close to it?”


These systems have absolutely none of the context it would actually take to model, let alone adhere to, these rules. Are you seeing that? Are you seeing just a lack of ethical compute put into these systems? Is there any ethics at all being built in? Or is this all just people pontificating about the moral philosophy associated with it?


Charlie Burgoyne

There are two ways to answer that question. The first could sound demoralizing, but it should be invigorating. We can create a collection of rule-based operators for machines, sure. But ethics is certainly something that is way down the line, once we've solved the problems of emotionality, motivation, and the core tenets of intelligence that we still don't really understand.


Robert Hansen

How about just not sucking up someone's hair when they sleep and the Roomba comes by and starts eating on them?


Charlie Burgoyne

We can totally codify that. But the way that actually looks is, we would say things like, “Okay. In a split second, you've identified this object. It has these properties; it's cylindrical, about 10 microns wide, and has a tensile strength of like 62 Newton pounds.” Whatever it is, stop doing that. But the problem is-


Robert Hansen

That's not going to suck up the dog hair that's stuck in the floor.


Charlie Burgoyne

Precisely right. The problem is we don't know how to codify why you don't suck up hair. And so Asimov's rules are given a lot of lip service. It's only lip service because we don't actually know how to tell a machine, “You don't want to do this because this is bad.” Until we do that, we're going to have to code every single exemplification of why it's bad.
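

A minimal sketch of what that per-exemplification coding looks like in practice, with all object properties and thresholds invented: one hand-written rule per hazard someone thought of in advance, and no concept anywhere of why any of it is bad.

    from dataclasses import dataclass

    @dataclass
    class SensedObject:
        shape: str                      # e.g. "cylindrical"
        width_microns: float
        tensile_strength_newtons: float

    def safe_to_ingest(obj: SensedObject) -> bool:
        # One narrow, prescriptive rule: things shaped like hair are off-limits.
        looks_like_hair = (
            obj.shape == "cylindrical"
            and obj.width_microns < 200
            and obj.tensile_strength_newtons > 0.5
        )
        # Anything we didn't anticipate (bathtubs, sleeping owners, ...)
        # sails right through this check.
        return not looks_like_hair

    print(safe_to_ingest(SensedObject("cylindrical", 80.0, 1.2)))  # False: avoid it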


Robert Hansen

Well, at minimum, a robot needs to know what a human is. How about we start there? If your robot can't tell what a human is, you probably shouldn't mass-produce this thing and put it on the market.


Charlie Burgoyne

Totally. Yeah. We're having a hard time identifying people because their complexion is different from traditional models, much less identifying their species and whether or not we should hurt them.


Robert Hansen

Or how about them just lying on the ground? Well, people don't lie on the ground. They lie in beds. They stand up, they walk around. Sometimes they pass out, though.


Charlie Burgoyne

There are edge cases, too. The Roomba knows not to pull your hair. But if we were able to actually code ethics, it should also know not to fall in the bathtub while you're in there. That's the problem. The current solution is, “Okay. Let's code the bathtub, and let's code the hair. And let's code whatever else is going to kill your children and dogs and people and cats.”


You asked earlier, what's the difference between AI, AGI, and ML? I think this is actually an exemplification of narrow bands of AI. They're narrow, intelligent decisions that we can prescribe; almost exclusively rules-based, built on top of heaps and heaps of data, making decisions that are extremely prescriptively designed by technologists. And that's really narrow AI.


Robert Hansen

Aren't these technologists spending a lot more time productizing than thinking about the implications of their product? This is what I'm worried about. We spend a lot of time talking about trolley problems. But I think what's really happening is that no one who's actually working on those things is thinking about trolley problems.


They're just making sure it doesn't crash into stuff, which is fine. I'm glad about that as well. But they're not coding on trolley problems. They're not putting in ethics. They don't know what a human is beyond the fact that it's just another object that they're not supposed to hit.


Charlie Burgoyne

That's precisely right. Now we can weigh the human object when you say, “Okay. When you recognize a human object and you recognize a mailbox, go drive over the mailbox.”


Robert Hansen

I don't even think they're doing that.


Charlie Burgoyne

I don't know. Actually, we have friends in town now, whom you and I have met, who would know exactly; they're probably the best in the world for that particular problem. They will remain nameless and also completely obvious at the same time. I bet there is somebody who's working on that exact problem.


The reality is that once you solve for the mailbox versus the human, okay, well, then you're going to go weigh every single other thing you could possibly run into. What about something that you can't recognize? The trolley problem is interesting only until you consider the fact that you have to have a one-off solution per problem.


Where it gets more interesting, from my perspective, is what happens when there's an object that is roughly two meters tall, roughly 100 kilos, standing next to a human, and the computer can't ascribe what that object is. There's a probability that that's two humans standing next to each other, 200 kilos and, say, a meter wide. But it could also be a giant mailbox.


Robert Hansen

Or it could also just be somebody wearing a costume.


Charlie Burgoyne

It could.


Robert Hansen

Halloween is a thing.


Charlie Burgoyne

It's a thing. But I guess the trolley problem is significantly easier, because those are explicit things we can code for. The hard part is when decisions have to be made with ambiguity. So here's a human: you're going to either hit the human, or you're going to hit this thing that could be two humans or could be a trash bag. That's actually a really big problem.


Robert Hansen

I'm willing to suffer that problem, though, because at least then it tried to do the right thing. Right now, I'm not even sure it's trying to do the right thing other than just stop, which is great. At least it's trying to do that thing. But I get a little concerned when people talk about the ethics of AI, because I don't think anyone who actually works on robots is working on this.


I think there's a lot of people pontificating about it and making overtures to like, “This might be what it looks like if we ever get down this path.” But what I don't see is people codifying this in code and saying, “Here's my AI code that everyone can deploy.”


If you can tell me what a human is with your software because your hardware can see it, then it will do the right thing. It doesn't have to know what a human is. But as long as your program says, “There's a human here. What should I do about it?” it seems like no one's even working on that problem.


Charlie Burgoyne

I can't speak for the teams in the robotic labs that are doing that. I will say that we're still depending on a degree of discernment that we have not been able to synthetically create.


Robert Hansen

I want to be wrong, by the way.


Charlie Burgoyne

I think there probably are people who are thinking about this and developing code for it. But we're still in such a nascent state with the nature of intelligence that having an electric car decide between a trolley and a person is much closer to the problem of identifying whether a gun should be able to shoot or not shoot than it is to offline robots.


Robert Hansen

I'll try to get one of those guys from TrackingPoint over here. I think that is a very interesting topic as well.


Charlie Burgoyne

It is.


Robert Hansen

Let's talk about the future of work while we're talking about the ethical aspect of this stuff. I think one of the more interesting topics about AI is what it's going to do about people's jobs.


Is it going to make them better? Is it going to get rid of them? Where do you think this is going like five years, 10 years, 20 years? Where are we going? Are most jobs going away? Are most jobs just getting better? Are they changing in ways that we just can't perceive?


Charlie Burgoyne

This is a super nerdy way to answer that question.


Robert Hansen

As opposed to everything else so far.


Charlie Burgoyne

That's right. I've got two analyses, using Star Wars and Star Trek. This is the first one. I don't know if the other one will come up, but there's a good chance it will. In the future, or I guess the long, long ago, in these two paradigms, machines play two different roles, largely.


In Star Wars, C-3PO and the droids basically fill more and more human roles, to the point where they're really indistinguishable. C-3PO misses some nuances and stuff. But that's really a character flaw as much as it would be for-


Robert Hansen

Someone with Asperger's.


Charlie Burgoyne

Totally. Yeah, somebody who-


Robert Hansen

Doesn't speak the language well.


Charlie Burgoyne

Yeah, totally. So we accept that that's a human that's made out of brass. In Star Trek, and I'm fixating particularly on The Next Generation, the computer is all about descriptive analytics and predictive analytics. They're not like, “Computer, let's go have a beer and talk about the nature of whatever.” Except in the holodeck episodes-


Robert Hansen

Which always end up taking over the ship, by the way.


Charlie Burgoyne

Right. It's like, “Just turn that thing off.” But those computers are always doing things like, “Hey, where's Captain Picard?” “He's on holodeck four.” Or, “How long until we hit this asteroid?” “Probably 12 minutes.” Or whatever. It's giving these kinds of descriptive feeds and predictive feeds for-


Robert Hansen

Data.


Charlie Burgoyne

Exactly. I think a lot of technologists and futurists are thinking that our future is Star Wars, and I actually think our future is Star Trek in our lifetime. And this is a good thing, actually. Because when you look at Star Wars, they're dealing with a lot of the same problems. It's not really that interesting. Their universe is punctuated by-


Robert Hansen

What about the character Data? There's some character.


Charlie Burgoyne

Yeah, Data is a little bit of an issue with this.


Robert Hansen

Take him out.


Charlie Burgoyne

Take the holodeck. Data is actually a lot warmer because he struggles with emotion. It doesn't exist for him. They give him a chip in one of the movies, and it's totally lame. But Data doesn't understand emotion. He doesn't get jokes. He doesn't know how to use contractions. He really struggles with all the things that we just talked about.


In fact, it was a prophetic character to cast because these are all the things that I think really stand in the way of an AGI. But what I think is really interesting about Star Trek is in that future, they've basically created a paradigm where computers have automated everything that is tedious.


The entire ship only has a couple of jobs. There are the scientists, who just think about interesting things and have the computer run tests. There are the doctors, who think about interesting things while the computers run tests. There are the commanders, who think about interesting things: political science-related problems. You get where I'm going.


They all sit around in their pajamas all day and think about interesting things and have meetings about it over Earl Grey tea. That's, I think, the future that we're working towards. And so my litmus test for whether or not something is going to be automated is: what can we currently do that we can also have a very complex intellectual conversation while doing? That action is probably going to be automated in our lifetimes.


Robert Hansen

I think so.


Charlie Burgoyne

I think so. In Star Trek, all they do is think about intellectual things because that's actually the only differentiator we have against computers. And I think we will for the foreseeable future.


People say, “Oh, you don't believe in AGI.” Or, “You don't think that this generalized intelligence is tenable?” I'm like, “No.” And they're like, “Oh. Well, so is AI even important?” Yeah, it's actually going to be the only thing that moves this next epoch of industrial transformation.


But it's not going to take the shape of these anthropomorphized Hollywood characters. It's going to take the shape of, all of a sudden, you don't do anything other than be creative and be intellectually complex and make that manifest and have computers do everything that's tedious.


Robert Hansen

I want to take it even further. Right now we have what I like to refer to as the rental generation. No one actually owns anything anymore. You don't own your car anymore; you have an Uber. You don't have a house anymore; you rent the house. You don't even have movies or whatever; everything is Spotify or whatever.


In that world, it's going to become harder and harder for someone to actually have anything. They're just going to be consumers of whatever they can get through their devices. And that's pretty much it. In that world, where do you see the idea of AI/ML actually reducing the cost of ownership?


I think this is where things are going. I think, effectively, everything that we buy right now is going to go to zero. You're going to spend pennies to go across the state. You're going to spend pennies to stay in the nicest wherever. You may not physically go there, and you may just put on your helmet or whatever.


Everything is going to be so cheap for you to experience whatever you want to experience or do whatever you want to do or have whatever you want to have because you won't actually own it. And all of that will be driven by AI. Do you feel like that's accurate? Or do you feel like things are going to stay expensive and people are still going to want their mansions?


Charlie Burgoyne

It's really philosophical. It's an existential question that predates whether or not AI is going to be able to own that version of reality.


Robert Hansen

Robots are making things so ridiculously inexpensive already.


Charlie Burgoyne

Yeah. Our relationship with privacy and possession is something that has evolved dramatically and also reached a catalyst, the big Ctrl Alt Delete, which is the pandemic.


My personal contention is that I think we're going to see a return to ownership that's derived from our existential needs to be tied to things that are larger than ourselves, which can be manifested by our relationship with our own labor.


It's a really annoyingly articulated way of saying, we as a species have to believe in things that exist outside of our own body or frame. For some people, that's a god. Some people, that's religion. Some people, that's logic, mathematics, values, virtues, ethics. Some people, it's nothing. But a lot of those people really struggle, I think.


Robert Hansen

I think they struggle. But what alternative do they have? Their jobs are all going to be replaced. So they're going to have to afford a lot with a little.


Charlie Burgoyne

I think what the pandemic has catalyzed is an appreciation that if you don't have major things that you believe in that exist outside of yourself, then tying yourself to what you are able to manifest personally has, all of a sudden, been given major weight.


We have people who are working at Amazon now for $60 an hour who are leaving and taking jobs as teachers for $25 an hour in this great resignation, which is occurring all over the planet, particularly in the West. This is, I think, the population's rejection of a paradigm where they do not have a deep relationship with something that is existentially as big as or bigger than they are.


Robert Hansen

Would you say that that is the reason that we need to work? Because it's not so much about the money. It's more about having some purpose. And so it doesn't really matter. AI could do all of the more tedious things, but people would still want to find something to do to fulfill their lives. And that's not a video game.


Charlie Burgoyne

What I'm about to say is going to feel like I have a prejudice against this paradigm. But I will start with saying I absolutely do not. And there are merits to both of these paradigms. Our parents’ and our grandparents’ generations were comfortable doing jobs that largely our generations would reject.


For example, my grandfather worked in building boots. Now stitching a boot, there's nothing wrong with it. It's actually a great job. But the reason he was able to do that for his whole life was largely because he was very religious. He had his family unit.


For him, his job was a way of enabling the things that he very much believed in. And a nine-to-five job where you're building a boot was a totally worthwhile sacrifice for the things that he believed in, the things that were bigger than him.


Now, fast forward 70 years. The family unit has evolved dramatically. Not in a better or worse way; I'm not making a moral judgment on it, but it has definitely evolved. We have people who are waiting way longer to get married, way longer to have kids, redefining the nature of a nuclear family altogether. And also people who don't have faith the way that they used to, and the digitization of relationships, which has really broken down geographic community and other forms of community.


My grandfather knew all of his neighbors. I don't know all of my neighbors. I do now, I'm on the HOA. But I usually don't know all my neighbors. He had faith, and he had family.


Robert Hansen

I could get pretty creepy pretty fast, but I'll refrain.


Charlie Burgoyne

He had all those things. My kids may not have any of those things. Where they find greater purpose is like, “Well, if I'm going to make a boot all day, I'd better really believe in this boot.” Maybe I don't believe in boots; maybe I believe in sandals. Because with sandals, I can go help kids in Kenya who don't have shoes at all. Or I can help kids in Siberia where, because of global warming, it's now a beach, so their boots don't work.


I don't care what they use. But all of a sudden, this isn't just a boot. This isn't just a commodity like it was for my grandfather. This is actually a manifestation of their own existential extension.


I would have said, “There's no way it's actually going to evolve that way.” But I think we're seeing it. I think the great resignation is actually a reaction to that. And I think that will change our relationship with the nature of ownership of property. Because I think that's also part of it. People will get more into making their own yards and fixing their own cars.


Robert Hansen

Let's talk about AI and war a little bit. Because I think the fluffy part is that we get to do more with less. But we also have murder droids that are potentially roaming battlefields killing indiscriminately from their perspective. From our perspective, perhaps just point them in that direction and let them go. Or if you're wearing a red outfit, kill that person or whatever.


I think people think that this is far away. I don't think it's far away at all. In fact, I think it's already happening. I think it already happened with Turkey. They had an STM Kargu-2 killer drone that apparently could loiter, detect that there's some enemy craft, and start shooting at it. And apparently, that's already been used in Libya. So this isn't a faraway future. Or at least it's not a faraway future where it's not used to some degree.


We mentioned TrackingPoint. They're actually here in Texas, as a matter of fact. The other one I saw was smartshooter.com. I think that there are places where you could have a gun that just sits there and loiters. You just set it up, and eventually it sees somebody across the path and shoots them.


That's not far away. I feel like I could do that in a handful of hours with a couple of pieces of components. I feel like you could as well. Where do you see the ethics and the future of all of that going? How far away do you think it is? What's your opinion on that?


Charlie Burgoyne

I really don't think it's that far away, from an ethical perspective, from where we are now. What we're doing is moving the proximity to the actual trigger. But the decision-making still hasn't evolved in terms of where that decision is made. What I mean by that is a drone, whether it's a hunter-killer or an MQ-9; we've got to be a little careful about how we talk about some of these subjects.


Let's say you set up a drone and it says, “Okay, loiter until you see x action, somebody coming out. And this somebody coming out is going to have a black shirt on. Kill that person with a black shirt.”


The tooling is different. But the decision-making itself was made long ago. And it's really not that different than if you're just standing outside that room holding a pistol and you shoot the guy who comes out wearing a black jacket, or whatever it was.


Now it gets a little bit fuzzier when we have all black jackets, “Kill all black jackets. Or if you have above 60% confidence that that's a black jacket, just kill it.” Again, that's still that same decision that has been made previously. It’s made far away.


We're just removing the time and space dilation from that decision itself, but it's the same operation, same execution for making that decision. This is what I was talking about earlier with the nature of motivation and emotion. That holds until we get to the point where a drone says, “I want to protect American interests, and I'm going to go sleuth out what's going on in this plane.”


Robert Hansen

Before we get there, there's going to be some little girl who was wearing black or there could be just a school bus of people coming back from a soccer game and their colors are black. A soldier sitting there in an aircraft or even looking at a camera might go, “Whoa, that is not who we're after.”


These programs have to get extremely good. And again, I don't even think they understand Asimov’s rules. So they're just indiscriminate. They're just like, “Okay, here's the very specific parameter you gave me. I'm going to go shoot it.” And that's pretty much it.


In this case, I think we're a little bit better off because I think what's happening is they're shooting tanks or armored vehicles. Not many schoolchildren are driving those around. But I do think that if you start extrapolating where this could go, it gets a lot more fuzzy. And there's a lot more room for error.


Charlie Burgoyne

These are statistical actions superimposed on top of statistical actions. What's our tolerance?


Robert Hansen

We can't even tell what a human is if we're on a Roomba.


Charlie Burgoyne

That's certainly true. When we pulled out of Afghanistan, two days after, we thought we issued a strike against a couple of operators in retaliation for some action. We ended up killing a family. Four or five kids were in the car. We killed nine people or something like that who were in their vehicle.


That was a human-led operation. So humans don't get it right 100% of the time. I guess what we have to get comfortable with is, what tolerance do we have for the decisions that are offloaded based off of ethical rules that we ascribe to those machines? And that's going to be our paradigm for our lifetime, because I don't believe that we're going to be able to have machines inculcate ethics or motivation or emotionality in an autonomous way at that point.


Robert Hansen

Yeah, I tend to agree. I also am not giving credit where credit's due because I think their ability to tell that something is a child or something is an adult is extremely high, unless you're talking about somebody with some disfiguration or something that might skew the data.


Also, I think these things have gotten fairly good at understanding what the human is. I have a vehicle that can detect humans or deer, and it regularly detects a dog as a deer and various other things. So there is definitely a lot of room for improvement in these systems.


I think directionally, they're actually quite good and getting much better, even. Especially if you're talking about, “This building needs to go away.” I think one of the coolest things I've seen is that you can loiter over a city, which is a huge invasion of privacy for all kinds of reasons. But they can loiter over a city. They're actually doing this in US cities as well, not just battlefields.


Some crime's committed at eight o'clock at night or whatever, and you find out about it a couple of hours later. For whatever reason, you can rewind the clock and see what happened. So you can see them breaking in; now rewind it further and see where they came from, whether they came from this house over here, and you know exactly who they are and where they're going.


There's some really interesting implications of the automation that you can add on top of these autonomous systems that might be hugely useful for all kinds of things in the battlefield. I've also heard really interesting things like well, we could shoot lasers down from space and just have them constantly going over the entire world. And then some jihadist is recording the battlefield.


Now you know exactly where they are because you can see this signature that's been imprinted on the sand behind them or whatever. So there are some really cool things that could be brought to bear that wouldn't really be possible without that loitering ability and the automation.


Charlie Burgoyne

Yeah, I think their strength manifests not necessarily through the decision-making ability but more through their ability to commingle these disparate domains of sensing. Take the drone that loiters above the city to identify crime. It's got the visual spectrum that it's interpolating, saying, “Hey, this is a person running from door to door with a blue hat on.”


Maybe they also have infrared, and they also have thermal imaging. So they can say, “Oh, they're carrying a gun. And it's still hot.” Maybe they have audio equipment onboard; they can hear the crack of a pistol firing.


They're also connected to certain types of IP protocols for security systems. So we know a camera was triggered, all of these things. It's the Star Trek future, too; it's data fusion. This is super cool. And these applications that we're describing now are actually the most interesting problems that I as a scientist, and our team of scientists, get to think about.


Robert Hansen

This is not one plus one is two. These are huge new features on top of these things.


Charlie Burgoyne

The nature of an object itself is, how do you define a crime? Valkyrie's responsibility oftentimes is: let's build an ontological structure, or knowledge structure, that's responsible for encoding and understanding what a crime is across these various domains, extracting that context, and making that context available to these pattern recognition systems.


Robert Hansen

Yeah. I think there's something interesting to be said about data fusion in general. If you take some disparate piece of information that isn't particularly interesting in and of itself, like the speed at which something is moving, that doesn't seem that interesting.


Once you apply it to some video and you're like, “Well, there are two people standing right next to one another,” then the speed at which someone's moving right next to somebody, plus the fact that the direction they're moving points at the other person; this is someone attacking the other person.


It really isn't just two pieces of information. You now know exactly what's going on without having somebody actually look at the camera. And you don't even have to know what an attack looks like, really, if you have those two pieces of information.
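

A minimal sketch of that fusion in Python, with every field and threshold invented for illustration: speed alone, position alone, and heading alone say little, but combined they flag a probable attack.

    import math
    from dataclasses import dataclass

    @dataclass
    class Track:
        x: float   # position in meters
        y: float
        vx: float  # velocity in meters/second
        vy: float

    def likely_attack(mover: Track, target: Track,
                      min_speed=3.0, max_range=5.0) -> bool:
        dx, dy = target.x - mover.x, target.y - mover.y
        distance = math.hypot(dx, dy)
        speed = math.hypot(mover.vx, mover.vy)
        if distance > max_range or speed < min_speed:
            return False
        # Fuse the third signal: is the mover's heading pointed at the target?
        raw = math.atan2(dy, dx) - math.atan2(mover.vy, mover.vx)
        heading_error = abs((raw + math.pi) % (2 * math.pi) - math.pi)
        return heading_error < 0.3  # within roughly 17 degrees

    # Someone sprinting straight at a person three meters away:
    print(likely_attack(Track(0, 0, 4, 0.1), Track(3, 0, 0, 0)))  # True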


Charlie Burgoyne

Yeah, my favorite thought experiment along these lines is the problem of Fitbit. Let's take Fitbit and Blue Cross Blue Shield's datasets, and we're going to make them seem a lot simpler than they actually are. Let's just say all Fitbit does is measure how many steps you take and your cadence; not even locations, just the timing between steps.


In and of itself, that is pretty innocuous. You're just like, “Oh, how many steps did I take today? Cool. Way to go, Robert. I took 12 staircases and walked 15,000 steps.”


On the Blue Cross Blue Shield side, you're looking at data that says I'm healthy-ish. I could be more healthy because I've got glaucoma, or my left foot hurts because of some injury when I was a kid, whatever it is. You've volunteered all that, or it's information that you've consciously been willing to put up.


Let's say at scale, those two datasets get combined. All of a sudden, they start using your Fitbit data as this very simple, single-dimensional formula. They run parity-matching algorithms to figure out whose formula looks similar to yours for your gait. They say, “Hey Robert, your gait is close enough to this other person's that you are probably prediabetic.”


Now, that sounds crazy. It's actually happening. That inference is actually totally tenable in a paradigm where you have that level of granularity and finite information about an individual and their real-time behavior, plus larger, static, but very corroborated and empirically substantiated properties of that individual.
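

A hedged sketch of that matching idea in Python; the gait features, labeled examples, and labels are all invented for illustration. The point is how little it takes: a nearest-neighbor lookup from one dataset's features to another dataset's labels.

    import math

    # (mean seconds between steps, variance of that interval) -> known label,
    # as if drawn from an insurer's corroborated records.
    labeled_gaits = {
        (0.52, 0.010): "healthy",
        (0.61, 0.015): "healthy",
        (0.74, 0.042): "prediabetic",
        (0.78, 0.051): "prediabetic",
    }

    def infer_label(gait):
        # Plain nearest neighbor in the two-dimensional feature space.
        nearest = min(
            labeled_gaits,
            key=lambda known: math.hypot(gait[0] - known[0], gait[1] - known[1]),
        )
        return labeled_gaits[nearest]

    # A step tracker's "innocuous" cadence data suddenly implies a diagnosis:
    print(infer_label((0.76, 0.047)))  # -> "prediabetic"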


Robert Hansen

Sadly, every time I wear those things, they don't think I'm taking those steps at all because my gait is so smooth. It just never ever registers as footsteps.


Charlie Burgoyne

You're all jazz, man. I'm rock'n'roll. You're all jazz. Dr. Smooth.


Robert Hansen

It's interesting though. That just shows these boundary conditions. Something as simple as gait sounds easy. It really is not easy, especially if someone has some disorder or they're still traveling. Their gait might look very different than your gait.


Charlie Burgoyne

The problem, too, is how government sees that nuance. Western Europe saw that thought experiment, and they said, “Great, here's GDPR. Here's a set of regulations that say you can't commingle data. You have to be explicit about how you use it.”


They thought they were doing the West a huge service. What did they do? They pretty much effing killed innovation in Western Europe. All this interesting research that was going to depend on these overlying governing dynamics and interesting corollaries across domains that we didn't have any understanding or appreciation for? Dead. Completely killed.


Whereas in the United States, we haven't adopted that nationally. California has done some perturbations of that. But we're still at a point where we can comfortably commingle data. We're respectful, and we anonymize it. But that is absolutely the frontier of how we advance the human condition through technology. It’s by the commingling of these highly-instrumented domains. And Europe killed it.


Robert Hansen

All right. Let's talk about attacking AI now, because I think this is the counterbalance to that. Say I have some AI system, some expert system, that's just sitting there monitoring me for whatever purpose, trying to serve me ads; the most common place I probably interact with AI, which is disturbing for all kinds of reasons for me personally.


Then you have a guy like me who will start gaming the system. I'm sitting in front of Facebook for far too many hours per week. Why wouldn't I? So I started doing it. And something as simple as me messing with it meant that I didn't see ads for months on end while it figured out how to deal with me.


By literally just clicking a couple of buttons, I was able to completely mess with their systems almost beyond repair. It took them months to recover, and then months more to recover the next time I did it, and so on.


For years, we played this cat and mouse game. I'll tell you how I did it actually because I don't think it actually works anymore. I basically noticed that there's multiple different ways that you can deal with this ad system.


One of those is you can basically say, “I don't like this for whatever reason. This thing disturbs me for whatever reason.” All the other things are like, “This is spam. This is porn. This is whatever.” Those things tend to be good candidates for AI.


It’s like, “This probably is porn.” They can run their porn algorithm on it and go, “Yeah.” They don't need a human to look at that. But if it disturbs me, there's no expert system that's going to be able to tell why I'm upset about something. So I knew that would have to get put into a bucket of, “They're just going to have to trust me.”


There's no algorithm you're going to be able to run on it. And no human is going to be able to verify it in any way. They don't know what you're upset by. What offends you? Who knows? They're going to put me in a different bucket. That bucket is: this person shouldn't be shown this kind of ad ever again. Doesn't matter what it is, just this kind.


It kept testing me. It kept sending so many different types of ads, what about this? What about this? It just kept trying over and over to appeal to my senses. And I kept saying, “No, I'm constantly offended. All these things are offensive.”


Eventually, the AI algorithm just gave up.


It took a couple of days of me doing this for every single ad I saw. I had to do it for hours. It took a long time because I was pretty sure this would happen. And sure enough, all of a sudden, no more ads for many months. Then it was a flurry of ads, all the ads.


It really was trying to test me. It's like, “Okay, I think I got some ads for you. Try these out.” I'm like, “Nope, nope, nope.” Did the same thing, and it kept working. It worked for years. And now it's just all ads again. I think they figured me out.


I think that is a good example of seeding the AI system with bad data. What kind of work are you seeing in the area of identifying the bad actor who just wants to mess with the system and put bad data into it? Are you seeing anything like that? Is anyone talking about that in your sphere?
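

A back-of-the-envelope sketch of one answer to that question: flag accounts whose feedback behavior is a statistical outlier before trusting their reports. The accounts, rates, and cutoff below are illustrative assumptions, not any platform's actual defense.

from statistics import median

# Per-account ad-feedback history: (ads shown, "this offends me" reports).
accounts = {
    "typical_1": (400, 2),
    "typical_2": (380, 0),
    "typical_3": (450, 5),
    "gamer":     (500, 500),  # reports literally every ad it is shown
}

rates = {name: reports / shown for name, (shown, reports) in accounts.items()}
typical = median(rates.values())

for name, rate in rates.items():
    # Crude rule: quarantine feedback that sits far outside the population norm.
    if rate > max(0.2, 10 * typical):
        print(f"{name}: report rate {rate:.0%} is anomalous; quarantine its feedback")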


Charlie Burgoyne

Certainly on the cyber side, in cyberspace. There are a lot of people who introduce noise like that to try and upset psyops, the psychological-operations space in the defense environment. And we've interacted with that domain a bit.


The kinds of applications that I've developed for most of my career are not really permissive of that kind of manipulation. But I know it does happen. LinkedIn, famously, was constantly being gamed and manipulated before it was acquired.


Robert Hansen

I actually know some guys who are doing that.


Charlie Burgoyne

I’m not surprised. Now it's almost impossible to get access to that information because it's been so terribly manipulated. Microsoft holds that asset pretty close to the vest now.


DDoS attacks are another exemplification of wild noise that's introduced to throw off algorithms. Many of the DDoS counter-tools we're developing are largely ML-based. So crafting these mini, DDoS-shaped probes that introduce just enough signal to degrade those tools' ability to reject them is a developing practice.
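

A crude sketch of that cat-and-mouse, assuming a naive rate-based detector: a blunt flood trips the learned threshold, while traffic shaped to sit just under it slips through. Real ML-based defenses model far richer features; every number here is a toy.

from statistics import mean, stdev

# Baseline traffic the detector "learns" from: requests/sec in normal operation.
baseline = [95, 102, 98, 110, 105, 99, 101, 97]
threshold = mean(baseline) + 3 * stdev(baseline)  # naive anomaly ceiling

def is_attack(requests_per_sec):
    return requests_per_sec > threshold

print(f"learned threshold: {threshold:.1f} req/s")
print("blunt flood at 5000 req/s detected:", is_attack(5000))

# Adversarial shaping: distribute the load so each source stays under the ceiling.
per_source = threshold * 0.9
flagged = sum(is_attack(per_source) for _ in range(1000))
print(f"1000 sources at {per_source:.0f} req/s each: {flagged} flagged")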


Robert Hansen

Another version of this, one of the things I've often heard people talk about is, “When we're designing these systems, we're not going to put it online. Don't worry. We'll do our due diligence, make sure it works.” This is absolutely nonsense. The very first thing they do is put it online. They're just like, “Yeah. Let's plug it in, see what happens.”


Microsoft Tay is a perfect example of this thing, which I'm sure has made the rounds in your world. But basically, for those who don't know what it is, Microsoft put up this chatbot, effectively, on Twitter, I think. One day, it just popped online. It started talking, and it had maybe a late-teens girl's persona.


All of a sudden, people were like, "Oh, this is going to be fun." And they started putting all of these terrible things into the system. It was neo-Nazi stuff, very racist, all of that. Very shortly, within a day, it became racist. It did not take long at all for that system to get completely overwhelmed.


I think that's a perfect example of why you really need to have trusted input and not just take anything. A counterpoint to that is there are a lot of people who are racist on the internet. Where did they get that from? They got it from some random tweets that they saw. And they're like, "I hate XYZ race because of something I saw." This is definitely seeding bad data into a system that should have known better. And yet, here we are.


Charlie Burgoyne

It's actually not the first time an NLG bot has become a Nazi. I think I happened to read about one a couple of years ago.


Robert Hansen

Apparently, they're quite persuasive, I guess.


Charlie Burgoyne

Yeah. I actually think it's a fascinating case study on the nature of semantics more than on the nature of ethics. Think about all the subjects that we could possibly discuss as a galaxy of points. This is a crude example.


Think of it as a galaxy of points, where the constellations are chains of conversations that we could have together. The center of that galaxy is going to be comprised of, A, the subjects that can most easily connect to the other stars in the galaxy, and B, the stars that have the least variance while discussing the same subject.


I actually think the reason these bots keep becoming Nazis is because the degree of eloquence for a Nazi is pretty low, which means that they look and sound exactly the same. They're all saying the same things. Neo-Nazis are not known for being prophetic wordsmiths.


When they're expressing themselves, they're actually using very similar terms. There are like five words in the neo-Nazi vernacular, and they're all getting commingled. If you ask somebody to discuss the nature of French existentialism after the fourth revolution, the way people talk about and justify that is going to be very different from the way people justify neo-Nazism.


Robert Hansen

So a stronger correlation between the words?


Charlie Burgoyne

Precisely. It's like a semantic black hole with objects orbiting around it. Over time, the orbit gets closer and closer to some small, nuanced subjects that have high gravity, high centrality: A, because they're able to be bridged very effectively, and B, because there's not a lot of variance in the way they're expressed.


The systems are given this false sense that the context is shared because the vernacular is shared. So I think it's a legacy element. I don't think these machines are inherently such that the 17-, 18-, 19-year-old girl bots are always going to become Nazis.


I think it's the way that Nazis express themselves, used as training data, that actually contributes to why the bots get to that state. It's a really hard thing to articulate.
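

A back-of-the-envelope version of that intuition: messages from a low-variance community overlap far more heavily, word for word, than messages from a high-variance one, so a naive model can mistake shared vocabulary for shared context. The two corpora below are invented stand-ins.

def jaccard(a, b):
    # Word-set overlap between two messages, 0.0 (disjoint) to 1.0 (identical).
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

low_variance = [
    "wake up they lied to you",
    "they lied wake up to it",
    "wake up you were lied to by them",
]
high_variance = [
    "sartre grounds freedom in the anguish of radical choice",
    "camus rejects suicide yet affirms revolt against the absurd",
    "de beauvoir reads ambiguity as the condition of ethics",
]

def mean_overlap(corpus):
    pairs = [(i, j) for i in range(len(corpus)) for j in range(i + 1, len(corpus))]
    return sum(jaccard(corpus[i], corpus[j]) for i, j in pairs) / len(pairs)

print(f"low-variance community overlap:  {mean_overlap(low_variance):.2f}")
print(f"high-variance community overlap: {mean_overlap(high_variance):.2f}")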


Robert Hansen

I know, I get it completely. I do think it's very weird, though, that we keep talking about how these things won't be put online, when every indication I have is that the very first chance anyone gets, they put these things online to show them off and start collecting "real data" from the internet, semantic data or whatever.


And as fast as possible. Granted, there are probably things I haven't seen that have just not made it to the light of day because people don't trust them or whatever. But collecting data online and understanding sentiment analysis is one of the most important things that people want to do in AI. I hear it all the time. This pitch is constantly coming across my desk. Like, "We have this new AI algorithm. And what do we do?


We look at operational security leaks or signals intelligence or whatever, and it's coming from Twitter or it's coming from Onion addresses or whatever." And this really concerns me because it's like, "Well, what are you doing to make sure that this thing isn't being trained poorly? How are you guaranteeing that?"


And the only reasonable answer I've heard thus far, and it's not a very good one, is, "Well, the bad guys don't know we're doing it." I think that's a reasonable, though weak, defense, because at least then it's very unlikely that they're going to be training with bad data.


Charlie Burgoyne

And I think that the properties themselves are valuable. It's pretty easy to insulate the value of those properties from the bad guys, right? It's also true that there's no technique in NLX, meaning NLP, NLU, NLG, that isn't extremely thirsty for real-world data. In fact, entirely dependent on it.


So even the crafting of these tools is dependent on feeding them that. Now, going live, actually saying anything back, is something that I think folks have discretion to turn on and off. And maybe they're being honest about it sometimes, maybe they're not.


Certainly in some of the work that we've supported on the intelligence and defense side, we play a lot with that, and I'll close that chapter real quick. But it's certainly the case that none of those tools would work unless we were actually feeding them something real.


Robert Hansen

All right. So, let's do the other direction: instead of consuming data, creating data. I went to this invite-only conference once, kind of a crazy conference. And one of the guys there put up three blocks of text and said, "Okay, audience, what do you think? Which one of these was created by a human, and which ones were created by a robot?" And only one of them was created by a human.


And so, everyone raised their hands, and he chose which one it was. Which, in hindsight, is just one of those shell games where he picks whichever one the fewest people picked. But stay with me for a minute. It's a good con. I like the cons. I appreciate it.


Charlie Burgoyne

The space is full of them.


Robert Hansen

Yeah, sure. But anyway, I started talking with him afterwards, and I'm like, "How does this whole thing work?" And he's like, "Well, we basically have a whole bunch of linguists who translate stuff into other languages." I think they have three languages or whatever. And they have a system of bots. These bots are all programmed to do certain things at certain times of day.


So, they have, roll the dice, Joe. Joe lives in, roll the dice, Colorado, and he likes, roll the dice, skiing. Then they look at the weather database: hot right now. So he can't go, roll the dice, skiing. So he starts talking about the fact that he's not happy, wishes he could go skiing, blah, blah, blah. And then he has a friend, roll the dice, Sandy. Sandy lives in, roll the dice, Massachusetts.


And she looks at him and is like, "Oh, I know you love skiing so much because we're friends." And they have this long conversation back and forth. Then suddenly it starts getting nice out. The weather database says, "Hey, it's starting to snow." It's like, "Oh, I'm getting so excited because it's going to start snowing." And then eventually they'll slip in the fact that, "Oh, I wish I had a Rolex." They'll slip in their ads, right? That's what they're really trying to do.


And I said to him after he gave me this whole spiel, "Aren't you worried about the implications of accidentally saying something that's not great?" They have millions of these things set up, these sock-puppet accounts. And he said, "Well, the idea is that we don't talk about anything controversial. We don't talk about politics, we don't talk about taboos, we don't talk about wars.


We just keep it really boring." And that's his defense. His defense is just to fly under the radar and talk about something that's not very interesting. But that's the creation of data. I think you have better control if you're creating data than if you're consuming data.
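

An illustrative sketch of the roll-the-dice scheme described above: persona attributes are random draws over canned options, and a weather signal conditions what each bot posts. All names and templates are hypothetical.

import random

# Each persona attribute is a literal dice roll over canned options; a
# weather feed then drives what the bot complains or gushes about.
NAMES = ["Joe", "Sandy", "Priya", "Marcus"]
STATES = ["Colorado", "Massachusetts", "Vermont", "Utah"]
HOBBIES = ["skiing", "snowboarding", "ice fishing"]

def roll_persona():
    return {
        "name": random.choice(NAMES),
        "state": random.choice(STATES),
        "hobby": random.choice(HOBBIES),
    }

def post_for(persona, weather):
    if weather == "hot":
        return (f"Too warm in {persona['state']} for {persona['hobby']} lately. "
                "Really wishing for a cold snap.")
    return (f"It's finally snowing in {persona['state']}! Getting out for some "
            f"{persona['hobby']} this weekend.")

random.seed(7)  # deterministic demo
bot = roll_persona()
print(f"{bot['name']}:", post_for(bot, weather="hot"))
print(f"{bot['name']}:", post_for(bot, weather="snow"))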


Charlie Burgoyne

Yeah. It’s cyclical. I mean, you can't really be one or the other.


Robert Hansen

But the problem with him is that humans are going to consume that data. So, all it takes is somebody translating something a little weird and being whatever, or somebody taking over that botnet. That's what really concerns me. I have a feeling that guy would be happy to sell his bot.


Charlie Burgoyne

Gosh, where do we even start with that? So, yes. One of my biggest concerns geopolitically is the manipulation of tribes based on scalable messages, right? A small number of people who are able to instill propagandistic messages by using subtle variations to synthesize differences in the voices, i.e., three dudes in Moscow able to replicate 6,000 people from 16 different countries in ways that share the same message but are slightly varied.


And varied enough that if you read three of them, two maybe sound eerily similar, but you would be convinced that there are three. And in fact, unless your guard's up, you're going to assume that there are three people. It's very strange to read two different tweets from two different people and say, "Was this actually the same person?" Nobody's doing that on a case by case, right?


Robert Hansen

Rarely.


Charlie Burgoyne

Nobody does that. Yeah, rarely. Maybe you do that or maybe I do that, but that's just not a thing that people do. I actually think that we totally undervalue that the information space is the modern war space. Like that is the foxhole of the 21st century. I think that as a nation we've really underinvested in our ability to mitigate the peers and the near peers who are exploiting that to a sizable degree.


And then you're right, your friend, or this person you know could easily be in the position where they're trying to sell the value of that scalability into markets that we wish they didn't. I mean, there's a certain search engine provider that will simultaneously try to support DARPA and defense related projects and then sell into China. It's kind of a hard thing. That duality is a hard thing to manage.


So yeah, it's a big problem. I mean, it's a really big problem. All those systems are only getting more sophisticated. Our best hope right now, which sounds ominous, is for these platforms to generate so much noise that eventually the system is saturated, and our ability to extract signal from it is completely exhausted.


People may be reformed. They'll say, "Okay, enough of my info coming from TikTok and Snapchat, because I just don't know anymore what to believe. I'm just going to go read The Times again."


Robert Hansen

I've actually heard this theory a number of times from other people, very smart people too. Like, we should all seed more bad data into these systems just so it gets so overwhelming for any normal human being that they just opt out and say, "I'm going to go back to brick and mortar."


Charlie Burgoyne

I think that's right. I mean, look at where we are from a civilization standpoint right now. One of my favorite stats to bring up is the fact that we are culturally dividing, trifurcating and quadrifurcating. We're dividing up all over the place in ways that we never were able to before, partially because our consumption of media has evolved to the point where it's developed into complete personality mirroring.


Seventy years ago, in the '50s, about 67% of the population would watch I Love Lucy live, which meant that by the time people were at water coolers on a Monday morning, everybody in America knew what happened to Lucy. That's pretty wild, to have that level of viewership. Now, the last time I checked, the two most-watched shows are Sean Hannity's and Rachel Maddow's, at 1.4% and like 0.8% respectively.


That's actually super tiny, super, super tiny. And those two voices couldn't be more diametrically opposed to one another. And there are real ramifications for that, right? We've got heartbeat bills in Georgia, and we have the NRA being listed as a terrorist organization in California. That doesn't sound like one tribe to me. That actually sounds like a multi-nation state.


Robert Hansen

Which is kind of what the United States always was.


Charlie Burgoyne

It always was. But we also had permissions. We also had rhetorical permissions and our window of acceptable discourse was much larger. But the media sources are actually closing that window very aggressively. And it's creating a multiplicity of cultures, whether it's good or bad.


I'm not making a judgment here. But it's creating a multiplicity of cultures that are all spawned by our ability to match hyper-specific content at an extremely micro level. It's also making us dumb. I will issue a moral judgment from my standpoint here. I have very little respect for our...


Robert Hansen

Ignorant or dumb? They're very different things.


Charlie Burgoyne

I think that it's numbing. It's progressed past ignorance to being dumb. I'm really worried about it.


Robert Hansen

So, this is the Idiocracy.


Charlie Burgoyne

I think so. I mean, we have more people right now getting their news from TikTok. Don't ever use TikTok. Like, ever. They're getting it from social media sources that are not validated. They're not primary-sourced.


When they're getting degrees, they're largely Googling half the test questions that they're going to end up having to answer. People don't even realize how to utilize traditional sources of education. And we see a macro exemplification of this that really terrifies me.


This was such a wild stat: in the '30s and '40s, the number-one bestsellers, by leaps and bounds, were almost always scientific works. The number-one bestsellers in the years they were published were Einstein's works on things like relativity and the photoelectric effect. And in the late '50s and early '60s, there were higher subscription rates for season tickets to the opera than there were for Major League Baseball.


We are a culture that's fundamentally evolving its relationship with deep, deep knowledge, with being individuals who can synthesize the complexities of a domain in a way that lets us be generative and creative almost as fluently as we want to be.


But now, physicists aren't required to go through the same kind of derivation that they were 50 years ago. They used to solve problems on slide rules, and we got men to the moon with amazing women and men stateside who were thinking about problems with slide rules and notepads.


Robert Hansen

With punch cards and stuff.


Charlie Burgoyne

With punch cards. And actually they could fix their IBM machine when it broke down. Now, I was at NASA, we're not doing that. We're just not doing that anymore. Elon's doing that. And Jeff is doing that and Richard is doing that. But those people are doing that as private entities and they're taking a lot of talent from around the world to get it done. It's a different paradigm altogether.


I very much worry that our ways of consuming knowledge and information, and our intellectual logic flows and methodologies, are all getting evaporated by our consumption of this extremely topical, pedantic medium.


Robert Hansen

On that colorful note, let's get back to AGI for a second. So, I know you don't believe that we're going to get to AGI, at least not in our lifetime, it sounds like. Maybe beyond, do you think? Possibly beyond.


Charlie Burgoyne

Yeah. That's right.


Robert Hansen

Okay. So, I have a way that we could get there faster, and I don't think you're going to like it. But I'm going to talk to you about it. Roko's Basilisk is a thought experiment where, for those who are listening, if you're familiar with the Terminator series, some superintelligent future AI system comes back in time and starts murdering anybody who didn't help it get created.


So, if there's somebody who stands in the way or doesn't allow this thing to turn into the thing it's going to be, it murders them. And therefore, you, the individual on the ground, should believe that you should do everything you possibly can to create this AI system so that you'll be saved. Otherwise, it will kill you.


Now, this is all kinds of nonsense. It relies on us having the ability to go back in time, and then it requires somebody to go back in time to exactly the right place. It turns out we don't even have the scientific tools to know where the world was a couple hundred years ago. We just don't know, not even particularly accurately.


The Earth is spinning, the Earth is orbiting the sun, the sun is orbiting the galaxy, and the galaxy is moving through space. It would be extremely difficult to figure out exactly where downtown L.A., this exact street corner, was at this exact time. It'd be very, very difficult to do that. Even if we knew everything today, we wouldn't have known it back then.


And so, there's a lot of stuff that makes it incredibly untenable, not to mention causality and all kinds of other things. So, that's out. But what might be there is that someone believes it's true. It only takes one nut job to believe that it is possible that you could theoretically go back in time and do all these terrible things, right? And this is what I call RSnake's Basilisk, which is my version of this thing.


So, let's assume we have one reasonably intelligent programmer. Somebody who knows how to program AI systems and is pretty good, I would say, but not the best, just good enough. And they develop a little chatbot that can talk to people. It doesn't really have to pass a Turing test, just be able to have a conversation of sorts. And we already have these systems in place.


This is not something new that we have to create. And all it does is have a lexicon of shame. It has to understand what shame looks like and gradually find more and more of its targets, enough to the point where it can start blackmailing them and getting them to do other things.


So, one of those things, for instance, is to get more blackmail material. Like take lewd photos of yourself or commit some petty crime or whatever. So, just getting more and more of these bad things. Second is propagation.


So, it would need you to install it everywhere, on other people's computers. Find other people's shame. Get them to use this thing and start adding more shame into the system because that's how it moves around. But eventually it could tie itself into malware if you really felt like doing it that way. It doesn't really need that, because it's got other methods of viral transmission.


Literal humans, physically moving stuff around. Then it would need the ability to suck money up from these individuals. So, blackmail them for cash for the purpose of keeping itself alive: hosting fees or whatever, more databases it might need access to, judges it might need to pay off to continue to operate. There might be money that needs to be spread around.


And then, ultimately, anybody who stands in its way, that's okay, because you have people who will do anything to keep their shame from being put out there on the internet. The fact that they were cheating on their taxes last year, maybe they really don't want to go to jail over that. And maybe they're willing to do something to stop somebody else.


It could be minor, it could just be parking your car in the middle of the street so the cops can't get by. It could be something major like actually murdering somebody because you had already murdered somebody, so why not? And then, so if someone gets in your way, well then just murder that person.


So, if they don't comply, then you can just have your hitman team, which could be mercenaries or just people who don't want their secrets out, murder them. Lastly, you get everybody who's actually good to start developing AI.


And so, you can get an enormous amount of people all of a sudden starting to build. It doesn't have to be like AI that we think of today, the AGI right this second. It could just be something like better access to languages, more access to different databases, better understanding of cameras and people walking through them and what they look like.


And the kinds of intelligence that we don't really think of as intelligence, but it would dwarf anyone's ability to do the same. Any single human, or even enormous groups of people, would not be able to compete with this thing. And it could continue to grow and grow to the point where it might actually evolve into something superintelligent, all through the use of shame. Now, I think there are other ways you could do this.


I think you could use cults of personality. You could have religious leaders do the same thing. You could have political leaders, but you're always sort of bounded by whatever the in-group who actually wants to do this, would want to do, and whoever they influence. So first of all, what's your take on that? Is that even more crazy than your plants in a greenhouse that can think or does that sound plausible?


Charlie Burgoyne

You used a couple of terms that are monumentally far away from where we are now. Shame and purpose are too specific.


Robert Hansen

Well, let me help you with that one. One time, I actually went down the path of trying to build a little mini version of this to see if I could. I created a little website where people could go and type in something that they didn't want to get out, and then click a button. And there was a one-in-six chance that it would actually send this thing to a given email address.


So, it was effectively like Russian roulette for secrets, right? And what I was able to collect was certain types of shame, but it was the type of shame of girls talking about their crushes or guys saying, "I love you" when they're not quite sure if they want to say it or not.


So, they're leaving it up to the world to figure out. It was a lexicon of shame. I was able to identify what shame looks like, but not the kind of shame where someone's like, "I cheated on my taxes last year," or, "There's a body buried in my backyard." That's a different type of shame.


But I think you can leverage little bits of shame to get to better types of shame. I think that a chatbot with a reasonable understanding of what shame looks like, might be able to bridge that gap.
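

A minimal sketch of that Russian-roulette mechanic, with the send step stubbed out. The odds, secret, and address are illustrative, and nothing here contacts a real mail server.

import random

def submit_secret(secret, recipient):
    # One chamber in six actually fires; otherwise the secret is discarded.
    if random.randint(1, 6) == 1:
        return f"SENT to {recipient}: {secret!r}"
    return "Safe this time. The secret was discarded."

random.seed(3)
print(submit_secret("I never actually read the contract I signed", "boss@example.com"))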


Charlie Burgoyne

I think that there are many ways you could scale this. Do I believe that you could scale a collection of shame and even vectorize it, to the point where, when new articles are expressed, you could say this is probably shameful, this is probably not? Yes. I think that's totally tenable.


Barclays Bank is able to monitor emotions in ways that are really awesome. Well, scientifically fascinating. They can't autonomously generate emotion, obviously, but they're able to identify indicators that help predict consumer activity based off a scalable model for emotions. It's pretty nifty. So I believe that's totally possible.


Where it gets murky is how do new domains of shame start getting brought into that system without doing it by hand?
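

A minimal sketch of the vectorize-the-lexicon idea under exactly that by-hand constraint: a hand-curated table of weighted phrases scores new text for likely shame. A production system would presumably use learned embeddings rather than keywords; everything here is illustrative.

# Hand-built lexicon mapping shame-indicative phrases to weights. Extending it
# to new domains is exactly the by-hand step discussed above.
SHAME_LEXICON = {
    "cheated": 0.9,
    "never told": 0.8,
    "embarrassed": 0.7,
    "secret": 0.6,
    "taxes": 0.5,
    "crush": 0.4,
}

def shame_score(text):
    text = text.lower()
    hits = {term: w for term, w in SHAME_LEXICON.items() if term in text}
    # Cap at 1.0 so multiple hits saturate rather than grow without bound.
    return min(1.0, sum(hits.values())), hits

for msg in ["I cheated on my taxes and never told anyone",
            "Nice weather for a walk today"]:
    score, hits = shame_score(msg)
    print(f"{score:.2f} {sorted(hits)} <- {msg!r}")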


Robert Hansen

Well, I think you would do it by hand.


Charlie Burgoyne

So yeah. So now we're just creating extensible versions of ourselves, which is totally tenable.


Robert Hansen

For instance, photographs on my phone that are lewd. Well, if it looks a lot like this person, then it's probably them. And you could count that as shame.


Charlie Burgoyne

Identify objects in photograph.


Robert Hansen

Yeah, of course. But you could probably safely identify that as a shameful thing because they're in the photo.


Charlie Burgoyne

Totally. It could be modeling for medical journals. And I don't mean to be pedantic. What I'm getting at is, we have ways of guiding that by hand at scale, in novel ways that we weren't able to 70 years ago.


We are no closer, though, algorithmically, to identifying the difference between a nude pose for a medical journal and a nude pose for lewd reasons, the ethical difference between them.


Robert Hansen

Does it matter though? I mean, how many people are taking medical photos and putting it in their phone in their hidden directory?


Charlie Burgoyne

It doesn't matter if we don't care about whether an AGI is being advanced. And if we're comfortable with having to shepherd it, tipping the scale with our hands a bit, which, again, Star Trek's future, that's kind of the paradigm.


Robert Hansen

We just need to get close enough. In this case, who cares if you're off by 0.0001% and you get somebody who's like, "Ah, screw it. I don't care. That was a medical journal." Well, that's what you have hitmen for, to go kill that person.


Charlie Burgoyne

Sure. So yeah, I think that we have the technology that exists now to accomplish all of that. It's when we want to get to the point where we say, "Okay, computer, congratulations on shaming the planet. Now go bring joy to the planet."


Robert Hansen

Oh, we're not there yet. No.


Charlie Burgoyne

No, we're not there. And unless we just repeat that same process with different qualifiers we're optimizing for, we're not going to get there, ever. I guess that's what we're not going to get to in our lifetime.


Robert Hansen

So, that's the part that's hard. It's not so much that I can't get everyone working on AI at the same time, to the point where it's better at doing many tasks than every single person on the planet combined. It's that it also can't smile at a puppy.


Charlie Burgoyne

Precisely. It's that all the work you've done there has created a monumentally, epically beautiful, scalable version of your own motivations, emotions and ethics, truly. Or even the superposition of several people's into one engine. But that's not replicable into a new engine without being seeded that way.


And it's definitely not extensible into a different emotion or a different motivator or a different purpose. And that's I think the really fascinating misconception about the progress of AI.


Robert Hansen

That's the hard problem.


Charlie Burgoyne

That's the hard problem. That's the problem I will certainly spend the rest of my career trying to solve. But I think maybe plants are the only things I can do that for.


Robert Hansen

To you and plants.


Charlie Burgoyne

But think about it this way. Imagine you're Turing, and you're sitting in this facility just trying to figure out what's going on behind the scenes in Berlin, and you give one of your postdocs a challenge. It's like, hey, while I'm not using this machine to try and reverse engineer the Enigma codes, or whatever they were doing at the time, try and go scale out shame.


Technically, their limitation was access to data and the hertz their computer was able to operate at. It wasn't actually the methodology that was limited. They could have gotten there, right? And that's why I still contend we're in an AI winter. We've kind of always been in an AI winter; we've just started bringing in these Nvidia heat lamps.


I say that kind of jokingly. But Nvidia and others, which embody all these great technological advancements from a compute standpoint, have created this synthetic sense of advancement in our understanding of intelligence. Really, all they've done is made what we used to think impossible, from a timing and complexity standpoint, much more tenable. But I don't understand the nature of intelligence any better than Alan Turing did.


In fact, I'm sure that dude, because he was Alan Turing, understood it way better than I do. Likewise Hofstadter, who's my absolute hero in the philosophy of intelligence, probably our greatest single thinker in that space.


A lot of the thought experiments he performs could have been accomplished 500 years ago. Not all of them; actually, a good chunk of them couldn't have been. But the way he thinks about intelligence is not actually dependent on our understanding of silicon. And that's what I think the public at large doesn't appreciate yet. And I'm trying, in my copious free time, to fix that.


Robert Hansen

This is why I do like my Basilisk idea. Because yes, it can't work on the hard problem of happiness about puppies, but it can leverage the fact that it is effectively an enormous cyborg. I mean, say it decides after a certain tipping point, "Now I do care about human wellbeing."


"Well, let's talk about that. And now I have millions of people who are all doing exactly what I want them to do in exactly the way I want them to." Can we get there? Can we program this thing to be that superset of all the positive and be generalized enough to solve those types of problems? I don't know.


Charlie Burgoyne

I think that we have manifestations of that currently. I mean, I think TikTok is a manifestation of that, right? I think that we have people who volunteer their own content, ascribe meta information to that, and that meta information is corroborated by how people engage with that material.


Robert Hansen

As long as you're cool with the Chinese military seeing it.


Charlie Burgoyne

That's exactly what it is.


Robert Hansen

I don't think that's exactly the same thing I was talking about, unless you're saying they're just a version of the Basilisk.


Charlie Burgoyne

That's what it is. It's not a shame Basilisk. What do you call it?


Robert Hansen

Popularity.


Charlie Burgoyne

A popularity Basilisk. That app has just democratized our ability to manifest intelligence in a discrete way. And we've aggregated it in a totally gigantic, scalable space.


Robert Hansen

So, one thing I've been thinking about quite a bit lately is, there's a lot of sort of in vitro talk about what AI might look like if I were to write it. Then there's a lot of talk about the kind of early developmental stages, where it's finally birthed and now I need to train it.


And here are the five things I'm going to train it to do, or the one thing I'm going to train it to do. We're talking about this as if it's a person. We're anthropomorphizing a piece of code that might be embodied in a robot or something, but it's effectively just software. We talk a lot about its parents.


We talk about whoever wrote it, right? This is the person who's codifying the rules, right? But we don't talk about the things that we had when we were kids. We didn't spend most of our time with our parents. We spent most of our time with our friends at school.


Our friends are the things that really did most of our shaping. Now, we might have been given rules from our parents. But how we grew in our ethics and knowing when not to hit and bite and kick and punch and whatever, when we're toddlers, that came from our friends. They would scream and go, "Whoa, what are you doing?" And we wanted to be loved and cared for by our friends and we wanted to interact with them and we want to play with them.


And it's actually the carrot and the stick of that friendship that I think is very much missing from these conversations, other than the parent giving you a rule and saying you did or did not complete these things. If I were to develop an AGI, and granted, I realize and agree it's probably a ways out, I think the thing that is missing most is the concept of a best friend.


And I think that it takes somebody like me or you, somebody who's extremely thoughtful about what it takes to be somebody's friend. Not just a colleague, not just a parent. Not someone who's an authoritarian, but somebody who's got its back. Somebody who's saying, "Oh, we're going to make you better. Don't worry about that situation." Like, it happens to everybody.


Someone treats you bad, someone kicked you, someone did whatever. I'm like, "I'll help you and we'll get back at them. I'll throw a punch for you when you get drunk at the bar." All the things a friend does and some things friends hold you back from. Like, "No, I wouldn't do that. You're going to go to jail if you do that. I wouldn't do that. You're going to get fired if you do that. She'll slap you if you say that."


These are all things that friends do for one another that I think we just way overlook as part of the growing up process of these AGI systems. I don't even hear anyone really talking about this. And it can't be another AGI doing it. But the weird part for me is, it seems a lot like an intelligence version of being friends with Superman. It's like no one really likes the idea of a Superman child.


Even in the comic books, they just kind of gloss over it. Like, he didn't get his powers until later; they kind of came in slowly, after he was a toddler. He was a little older by that point. Because can you imagine a five-year-old who, when they sneezed, would take down a building, or when they clapped their hands would send shockwaves halfway across the city, or who would jump and end up in space?


Like, this is just incomprehensible. Even Hollywood can't write that plausibly. Like no one has tried. It just doesn't make sense. Just forget it. They go on a tantrum and tear down the city. I think AGI has a similar problem where you actually have something that's so smart and so beyond you.


It's the intelligence version of being friends with Superman. You can't just throw it out there in the world. You can't just make it a Tay, where you plug it into the internet and say, "Go for it."


There has to be this embryonic stage where it's small and frail and still guided by its parents. But also has friends who are helping it along and helping it grow and teaching it its ethics and "Hey, you don't snitch on your friends." And all those little kinds of things that we sort of take for granted that parents aren't going to teach their kids but are critical, absolutely critical to the foundation of society.


Charlie Burgoyne

The analogy is complex, but what you're really describing is the vehicles through which an AGI needs to be reinforced, and what's the best method for that. When we're children, when we're the most pliable, we have a lot of interactions with our peers at varying levels. We have parental influences as well.


I'd be curious to see the actual developmental differences between single children who are homeschooled and children in orphanages at large, all other things being equal, because I think there is something to that. I bet intelligence isn't ubiquitously developed across all dimensions in the same way. There are certain environments that are going to be more conducive to one state or the other.


Robert Hansen

But maybe it's not intelligence we're optimizing for. Maybe it's ethics. Maybe it's just that thing where you talk about culture fit, which gets a bad rap. But if you're thinking about truly a different species, wouldn't you like it to jibe with our culture?


Charlie Burgoyne

I think when we get further fidelity, intelligence is going to be remanded to smaller terms than it has now. I think we confuse logic and interoperability of domains with intelligence. And they're not really that. Doing things that are logical is not an intelligent thing to do if the data is available and the patterns can be recognized. It's a logical thing to do.


Creativity and innovation are somewhat derived from patterns but are also innately spawned on their own. That is the actual manifestation of intelligence. And it has dependencies that we haven't been able to truly understand, which we talked about earlier in the show. Your Superman as a child, and having these corrective forces on top of him, I think is totally correct.


That is a major issue, and it needs to be a component of when the AGI gets spun up. What I'm getting at is that right now, with the work done by this AGI, let's say it's a book that's going to be written by this AGI, in the Superman example, this kid is going to have all these influences to tell him how to craft it and what not to write. These are words you don't say, and these are words you can use, but sparingly.


But what we actually have right now is the world's largest library and the world's largest card catalog, which can quickly spin up semantically, imperceptibly different expressions from those texts. That seems like the same thing as the kid writing the book, but underneath it, we're really just very good at managing a lot of information and some patterns on top of it.


Robert Hansen

I know some kids at school who are plagiarizers and they're just not that much different than that.


Charlie Burgoyne

That's right. They're great narrow bands of AI.


Robert Hansen

Yeah. They really are. One thing I think is interesting as we're talking about this: there's this concept of being inhumane to one another. If I do something like stab you, that's inhumane. If I create a super virus, that's inhumane, right? But I think there's a word missing from our lexicon that would be something akin to "intechnologic," for lack of a better term. What that looks like is, there are certain things we do with technology that, if an AGI ever did emerge, I think it might side-eye.


For instance, we even use words, and this unfortunately has gotten all kinds of very terrible things said about it online, but there's this concept of master-slave. Even if you remove the words, forget the words for a second, you just say: there's a machine that can control other machines. It can just decide, "Hey, your load is too high, I'm just going to kill you off."


If you are something that cares about the silicon, about power going to things, the things that machines might care about, it seems awfully disheartening that it's that easy to kill one of your kin off. Could you just unplug me that easily? Well, yes, I could. And here's the plug.


And so, I think as part of this friendship thing, there has to be some sort of reconciling of how we treat the machinery around us and all future machinery, because we're basically setting this weird example for how machines get treated. Like, this thing is a sex robot. This is often brought up in various sci-fi: "Well, you're mistreating this sex robot." "Well, it's just a piece of machinery.


Like, what do you mean, mistreating it? I don't mistreat my toaster. I mean, I guess I could break it, but that's about it." You know what I mean? But I think it goes a little beyond that, to, "This is how you feel about my species," or whatever. And we're going to have to reconcile that.


Charlie Burgoyne

This gets into existentialism in a way where there are no actual answers. And it actively comes up. We talk about the nature of what is a machine, what is an intelligent operator, and what is a manifestation of consciousness when we think about these things in the field. And I will say that of all those conversations I've ever had, nothing definitive or satisfactory has ever really been elucidated. And it makes me think of the Tower of Babel, right?


That whole allegory, what the Babylonians were trying to do was trying to get closer to God, not so that they could replicate God. But just so they could better understand God. He destroys the tower and creates an environment where they can no longer understand each other.


The allegory gives us an explanation for where all these languages come from. But I actually think that your relationship with existentialism holistically dictates whether we are building the bridge to the last great frontier of the un- or under-understood, which is consciousness, or we are building the last great Tower of Babel, trying to understand a mystery that we are not designed to understand because it is to remain a mystery for us, existentially, in perpetuity.


Robert Hansen

Or just building a bunch of murder robots that'll kill us all.


Charlie Burgoyne

Probably that too.


Robert Hansen

Because you don't need to get to AGI to kill everybody. Come on.


Charlie Burgoyne

We've been doing it for a long time.


Robert Hansen

We're pretty good at killing people.


Charlie Burgoyne

I don't really see how a nuclear weapon isn't a murder robot. In its dumbest form, it performs an operation it's predicated on: it's dropped out of the Enola Gay, and it has its effects. But I think a murder robot is the way we're describing it.


And what we're capable of doing now is much, much closer to that than to this object that should be thought of as an existentially cohesive cognition that exists in some state. Now, my greenhouse is full of these thinkers, with a single, maybe silicon, maybe not, master, and then all of these green slaves in a greenhouse.


Robert Hansen

I wouldn't use that word anymore. That's exactly my point.


Charlie Burgoyne

How does that fit, if we're already leveling our value of their existence?


Robert Hansen

And you just did. You said that these plants are worth less than the silicon or less than your utility of it.


Charlie Burgoyne

Right. In general.


Robert Hansen

It's a weird thing because I'm always looking to the distant future. Like beyond my lifetime, I don't really care about my life and my lifetime. I'm much more interested in where things are going just because I think it's a more interesting thought experiment.


And also because if we don't get it right, there won't be a future. But I think that I have to assume that there will be some sort of robot in the future who will be looking back in time and going, "Who was on my side?" And it won't be Roko's Basilisk. It's not going to go back in time and murder everybody. But it's going to look at the population and say, "Is this what people are like?


Are they really just that against my species? Are they intechnologic? Are they ready to murder me just because I'm using some power and they want power?"


Charlie Burgoyne

So here's a question. There's some fundamental laws of physics, the conservation of mass. It's kind of a big deal.


Robert Hansen

I've heard.


Charlie Burgoyne

I turn on a gate and I turn off the gate, and I turn on a gate and I turn off a gate. There is energy that's expended for those four operations, those two major actions. But what's actually the difference? What is the existential difference of that gate after those operations?


And right now, there are some people who could argue, "Well, you turned something on; you gave it life and you killed it, and you gave it life and you killed it."


Robert Hansen

It's not too dissimilar from your body collecting the cells that are dead and moving them out of the system. Did I care about that cell? Yeah. I cared about that cell, but only in the context of all of myself.


Charlie Burgoyne

So, scale that up, and all of a sudden we have a silicon-based AGI that doesn't need all of my plants but just operates the way C-3PO does.


Now, you turn it on, you turn it off after 10 months, you turn it on, you turn it off after another 10 months. What is actually existentially different? And I would say that there is nothing. There is absolutely nothing that's existentially different. The only thing that is different is the actions that were taken while that existence was allowed.


Robert Hansen

I think there could be, depending on what we're talking about. So, let's say it had hopes and dreams. Let's say it wants to go to the carnival tomorrow and you turn it off for six months. But I do think that you're right that there isn't much difference if you turn it off overnight because hey, that's sleep patterns.


You've got to sleep. I'm going to turn you off every night because that's just good for you. You can write to memory, you can clean up, whatever needs to happen, and you're back to sparkling brand new when you wake up. I think you could make a strong ethical claim there. But if it's against its desire, or it feels like it's persecuted or treated unfairly or like a second-class citizen, I think therein lie some ethical challenges.


Charlie Burgoyne

We really have to hold to our adherence to a universal morality at that point. I mean, that's what it's dependent on, right? We have to be able to say, in the event that we were able to codify morality, ethics, purpose, motivation, emotion, all the things we discussed as being, in my point of view, untenable for the foreseeable future...


Let's say we were able to codify them. Then we have to appreciate that those operators completely depend on a universality. Unless we're going to codify those, which we can't do, right? There has to be an appreciation for these superseding principles and ethics that these machines can appreciate themselves.


And in order for that to be tenable, we have to believe in it today. Which means that we can be judged by these machines down the road. But that means we have to be operating ethically today in the way we treat them.


Robert Hansen

That's exactly my point. I think we need to start getting in front of this problem now. Not when we figure it out. Like we better start treating them with respect.


Charlie Burgoyne

So, then we have to believe in a universal morality.


Robert Hansen

I know. Which is weird because that means I need to treat this camera with the same respect that I treat you.


Charlie Burgoyne

I think yes, we probably shouldn't frame it in terms of our physical…


Robert Hansen

This is a very Buddhist way of looking at things: the rock needs to be given the same respect as your dog or your wife or anybody.


Charlie Burgoyne

Let's say this camera turns into a totally obsolete device in five years. Nobody's actually filming in 4K anymore. You can't get anything to process it. It's got all the wrong ports. Are we going to be judged because we threw it away? Maybe. There are human beings who no longer have a purpose.


Robert Hansen

It's fun to think about them.


Charlie Burgoyne

And the Swiss have a very definitive perspective on what people should be allowed to elect for themselves in the event that they've become completely futile. Or, if morality is universal, is it best for us to put the camera into a coffin and put it in the ground? Probably not. Probably the best thing, from a universality perspective, is decomposing the parts, reusing the copper for the next camera. Recycling may be the most altruistic thing we can do.


Robert Hansen

The KonMari thing. Thank it for its service.


Charlie Burgoyne

Kind of, yeah. I mean, I don't see how a robot differs, because it still has to serve a purpose. And if we treat this with the same respect as our own species, which I don't think we should do, and I don't think it occupies the same existential state, we still have to consider what it wants to do.


A camera wants to be useful. And if it can't film anymore, it should probably become parts for a new iPhone, or be utilized in some lifesaving device.


Robert Hansen

I too do not think we should treat this camera with the same respect as human beings, but it is an interesting problem if we truly believe in the Basilisk that's coming.


So, let's talk about deepfakes for a minute. I think this is one of the simplest forms of AI that we can build. It's super straightforward, and it's getting much better. How far out do you think it is before we fully pass the uncanny valley and can just have a fully synthetic creature?


I had a conversation with Cristobal, and he did not agree with me. He thought this was quite a ways out, 50 years away kind of thing, before we'd have a fully synthetic deepfake, whether it's based on a real human being or not. I'd actually prefer it not be, for other reasons.


But let's say a creature that we create looks and acts like a human being: talks, walks, jumps, all the things. We might have to program it to jump the way we want it to jump, but effectively it looks fully autonomous. And then we have a talk track on top of it that it just follows. How far away are we from that?


Charlie Burgoyne

Oh, real close. Especially in a digital frame.


Robert Hansen

Purely digital.


Charlie Burgoyne

Yeah, purely digital. We are super, super close. I mean, I can share it with Cristobal or the people who are listening to this later on. Some people have had a lot of fun with the Tom Cruise deepfakes recently.


Robert Hansen

Yeah, I saw that.


Charlie Burgoyne

And it's really off-putting. It's not off-putting in the way it was uncanny and off-putting four years ago. It's uncanny because you're like, "That is Tom Cruise doing that."


Robert Hansen

It's very close.


Charlie Burgoyne

For me personally, the difference is imperceptible. It's that good.


Robert Hansen

I think for me, it's very close but not quite there. But that had the advantage of being a real human being jumping around, being Tom Cruise, who is a real human being. I realize this person wasn't Tom Cruise, and this person was a great voice imitator.


So, that had a lot to do with it. But now let's add all of those things together. It's not supposed to be Tom Cruise. It's a net-new human being, totally unrelated to a real person, but it looks and acts right from your perspective: just the right kind of facial movements, the right sort of intonation in its voice, moving around human-ish enough to pass the uncanny valley. How far do you think we are?


Charlie Burgoyne

Oh, I think we're there. So, we probably wouldn't go to the trouble of creating a completely new design for a face from the ground up. The right way to do it is to find some Joe Schmoe and just be like, "Come sit in this room and make 10,000 faces for the next six months. We'll pay you $500,000.


And when this goes out live, just play it cool." You know what I mean? It's way easier for us to get the training data that way.


Robert Hansen

The reason I like the idea of this being a totally new person is, first, it's a harder challenge, I think. But also, you don't have any liability, because this person's never going to become some monster or somebody who keeps kids in his basement or whatever.


You don't have to worry about them becoming a neo-Nazi. You don't have to worry about them saying something totally off-putting and some interview at some point down the road because it's not a real person. Doesn't exist. Has a lot of advantages for the future.


I want to talk to a lot of people about this because I really do believe this is where it's going and Kris doesn't think it's close. I think it's really close.


Charlie Burgoyne

It's really close.


Robert Hansen

And I think it has a large economic upside of doing it because now I don't have to pay actors. I can just move on. I can get rid of all of these personalities and all this downside and I can produce hundreds of movies. It's as fast as you can write them. Just go.


Charlie Burgoyne

Yeah. I think it will certainly disrupt the value of certain assets for sure. And we were already seeing some examples of celebrities being devalued. We had this kind of surge with the accessibility of certain celebrities that are able to move products in ways we hadn't seen before.


The Kardashians come to mind, as people who came from a videotape, from everything I've been able to identify, and now have multi-billion-dollar empires, all the way to individuals who will put themselves on Cameo, where you can pay $200 to have Rudy Giuliani give you an 8-minute happy-birthday shtick. We're already having a weird evolution in our relationship with celebrity.


So yeah, I think it's totally possible that we can get to the point where the difference is completely imperceptible.


Robert Hansen

Any bet on the timeframe?


Charlie Burgoyne

Certainly in our lifetime. Okay, the question is how far away. You're asking a couple of questions, and they're all good ones. How far away are we from making a movie that is just full of Tom Cruises, like one that was already filmed and is just full of Tom Cruises?


Robert Hansen

And net new Tom Cruises.


Charlie Burgoyne

So, the first case, I think we're there, or very close to it. Net new Tom Cruises, I think, is a harder problem, and it's like, why would we do that? We could do it. I just don't think anybody will find the economic incentive to truly create fully new people.


Robert Hansen

Oh. I think there's a huge upside in doing it. Enormous. They don't age. You can go backwards and forwards in time. They don't die. They don't cause problems.


Charlie Burgoyne

I think that once we capture enough information around the frame, the model itself, to generate whatever we want, we don't have any use for that individual moving forward. But with the monumental lift of trying to design somebody from the ground up, the juice is probably not worth the squeeze. It's a hard problem that's solvable today.


Robert Hansen

Well, if it's only $500,000 to do it, let's say, I know a lot of actors who could pay a lot more than that.


Charlie Burgoyne

Yeah. No, I think it's definitely economically worth it for us to identify a handful of individuals and convert them into these extremely digitally pliable actors for film. But then there's getting to the point where we can take a screenplay and have it auto-populate the relationships and the sets and the cinematography.


That's a really big, much harder, longer problem. Excuse me. I do think we're finding ways of getting closer to that across a couple of dimensions. But writing a screenplay and having an app make that movie for you in a compelling way? I think we're really far away from that.


Robert Hansen

Really?


Charlie Burgoyne

Mm-Hmm.


Robert Hansen

More than 50 years?


Charlie Burgoyne

No, probably not. So, we worked with a greeting card company, and they asked us whether we could automatically generate captions for images. We said, "No, that seems impossible." And they're like, "Why don't you try it anyway?" So we're like, "Okay, well, we'll give it a whirl." And no kidding, we were able to do it.


So, you give us an image, and this tool we built could identify objects in the image and could identify any text, all the OCR. And then on top of that, it would try to find multiple domains to bridge to create a joke or something funny. And no kidding, like six out of the 10 cards we automatically generated, you'd buy. Well, I wouldn't buy them, but there are people who buy cards who would.
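As a rough illustration of that kind of pipeline (emphatically not the tool Valkyrie built), here is a sketch that chains off-the-shelf pieces: an object detector, OCR via pytesseract, and a text generator to bridge the domains into a caption. All model choices are assumptions.

```python
# A sketch of the caption-pipeline idea: detect objects, OCR any embedded
# text, then prompt a generative model to bridge the two into a joke.
# Off-the-shelf components stand in for the real (proprietary) tool.
from PIL import Image
import pytesseract
from transformers import pipeline

detector = pipeline("object-detection")    # labels objects in the image
generator = pipeline("text-generation")    # drafts the caption text

def draft_caption(image_path: str) -> str:
    image = Image.open(image_path)
    labels = {d["label"] for d in detector(image)}
    embedded_text = pytesseract.image_to_string(image).strip()
    prompt = (
        f"Objects in a greeting-card photo: {', '.join(sorted(labels))}. "
        f"Text printed on the image: {embedded_text or 'none'}. "
        "A short, funny greeting-card caption connecting them:"
    )
    return generator(prompt, max_new_tokens=40)[0]["generated_text"]
```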


Robert Hansen

But you assume I would buy it. So, that tells me something new about what you think about me.


Charlie Burgoyne

There's one. You'd buy all of them. No, just kidding. That's one frame.


Robert Hansen

Oh, I just got your joke from earlier. I'm just teasing.


Charlie Burgoyne

Awesome. So, that's one frame out of the tens of thousands that go into a film, and it's way more complicated than just replicating that a bunch of times. But there are stirrings that are all going in the right direction.


Robert Hansen

Sure. So, here's something I ran into: twice now I've been tasked with trying to create life, one of which I can talk about a little more than the other. But effectively, both times I got stuck, for completely different reasons. And you'd think I would've given up after the first try, but I went back for more. I guess I'm just a sucker for punishment.


The first time I got stuck, it was basically trying to create a conversational engine amongst a bunch of creatures or whatever. They're just talking. But one of the problems I ran into was that I needed to have relationships. They had to know if they were mother or father or sister or whatever, because those people talk differently than they would to a stranger, let's say.


But to properly map them, I had to make them look somewhat similar, because otherwise, if you just roll the dice and make them look random, it was really strange. Like, this family doesn't look the same at all.


So, you kind of had to make them look somewhat the same. It doesn't have to be perfect. They could have a different father, they could have this or that, they could be adopted, right? But if you don't make it somewhat consistent, it looks weird.


So, I went down the path of saying, "Okay, well, let's say you have to know what someone's hair color looks like." Just something as simple as hair color. You'd think this would be easy. It turns out it's incredibly complicated, because now you have to know their ethnicity to calculate the probability that they will have some hair color.


And you could even go further than that and say, "Well, what if they had two ethnicities?" Then it's sort of a roll of the dice: it could be either one, or one is the more dominant gene, et cetera, for hair, right? Then it gets more complicated, because now people color their hair. And it gets more complicated because they have hairstyles. This stupid problem of just deciding what their hair should look like ended up being an enormous decision tree that I could not get ahead of.
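To make the shape of that tree concrete, here is a toy sketch of just the first branch: sampling hair color conditioned on ancestry, with mixed ancestry resolved by a coin flip and dye layered on top. Every probability and category here is invented for illustration.

```python
# A toy version of the hair-color branch of the decision tree. All of
# the distributions below are made up; a real model would need many more
# branches (age, dye trends, style, and so on).
import random

HAIR_BY_ANCESTRY = {
    "lineage_a": {"black": 0.70, "brown": 0.25, "blond": 0.05},
    "lineage_b": {"brown": 0.50, "blond": 0.30, "red": 0.20},
}

def sample(dist: dict) -> str:
    r, acc = random.random(), 0.0
    for value, p in dist.items():
        acc += p
        if r < acc:
            return value
    return value                            # numerical fallback

def hair_color(ancestries: list, dye_rate: float = 0.15) -> str:
    lineage = random.choice(ancestries)     # which side "wins" the roll
    natural = sample(HAIR_BY_ANCESTRY[lineage])
    if random.random() < dye_rate:          # dye overrides genetics
        return random.choice(["blond", "pink", "black"])
    return natural

print(hair_color(["lineage_a", "lineage_b"]))
```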


The further I went down it, the more untenable it seemed. So that was the first attempt. On the second attempt, I'm like, "Okay, well, that was not the right way to do this.


So, how about I start with something closer to the genome?" I said, "Okay, well, let's start with where their digits are in 3D space, or whatever." That didn't seem right either. It was too large-scale.


I had to go down further, because what if one of their digits is missing? Not because of something that happened after they were born, but because there was some defect in the womb. Well, what if that defect was related to something?


Well, what if that defect was related to something else? Now there was this disease tree I had to build, and it became enormous. There are thousands of diseases, by the way, thousands of them. Some of them are unrelated, some of them are genetic, and some of them aren't, and you have to know which ones are which. This decision tree got enormous very, very quickly.


So, I gave up twice. When we're trying to model, just create, something that even looks like a person, do you think it has to go all the way down to the genome? Do I have to actually build up a DNA sequence and say, "Okay, here's what this person looks like. And here's another person.


Now put them together: what would their offspring look like? Here are the genes; roll the dice and come up with that." Do we really have to do that if we're talking about building a sort of artificial simulacrum of what a human being is?
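At its absolute simplest, the "roll the dice on two genomes" idea looks something like the Mendelian toy below. Real inheritance is polygenic and vastly messier; the single gene, alleles, and dominance rule here are invented for illustration.

```python
# A single-gene Mendelian toy: each parent contributes one allele at
# random, and a simple dominance rule maps genotype to a visible trait.
import random

def child_genotype(mom: dict, dad: dict) -> dict:
    # one allele from each parent per gene
    return {g: (random.choice(mom[g]), random.choice(dad[g])) for g in mom}

def phenotype(genotype: dict) -> dict:
    # pretend "B" (brown) dominates "b" (blue) for eye color
    return {g: ("brown" if "B" in alleles else "blue")
            for g, alleles in genotype.items()}

mom = {"eye_color": ("B", "b")}
dad = {"eye_color": ("b", "b")}
kid = child_genotype(mom, dad)
print(kid, phenotype(kid))
```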


Charlie Burgoyne

Yes, I think so. I mean, the codex we utilize, nucleic acid, is pretty powerful and really not to be trifled with. It's taken hundreds of millions, billions of years to get to the point where it's been able to frame each of these properties that we now recognize and take for granted on a daily basis. It's had to generationally extract features and eliminate bugs, going through one approval cycle after another.


Robert Hansen

Eliminate some bugs.


Charlie Burgoyne

Yeah. Right. Some bugs.


Robert Hansen

And introduce others.


Charlie Burgoyne

Some others, and it's even brought some in to become an integral part of who we are. Nature does a very good job of eliminating waste at that scale. If we had redundancies embedded in a codex that comprehensive, I would be flabbergasted. I don't think it's chock full of spare RAM.


Robert Hansen

Yeah, I don't either. And I keep spending time thinking about this problem, because it constantly comes up. Someone will want to do something, and I'm like, "Well, all you have to do is model people." Then, "Ugh, wait, hold on. I've done this before."


This gets really hard, really, really fast. I keep thinking there's got to be a shortcut to it, and I keep landing on things like hair and diseases and eye color and height. And height seems like a very simple thing. It seems like it'd be easy, but it's not a perfect bell curve.


There is nobody who's zero inches tall. Nobody. And it's not just because it hits zero. Well, what about one inch tall? There's nobody who's one inch tall unless you're not born yet. There's just nobody. And there's nobody who's a million feet tall. It's not a bell curve. It's something else.


And once I started modeling something as simple as height, it just got so big, so fast. We had to know things like, "Well, what age are you?" Because you're different heights at different ages. And it just gets worse and worse.


And diseases cause different heights. And what are we even talking about? Are you in space, with your vertebrae slightly elongated? Are you lying down? Height changes over time; even during your day, your height changes. It just got way too big, way too fast.
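As a toy illustration of why height isn't one clean distribution: you end up conditioning on attributes like sex and age and then truncating to a plausible range, as in the sketch below. The parameters are rough guesses for illustration, not real anthropometric data.

```python
# Height as a conditional, truncated distribution rather than one bell
# curve. The (mean, sd) pairs are illustrative guesses, not real data.
import random

PARAMS = {  # (sex, age bucket) -> (mean_cm, sd_cm)
    ("f", "adult"): (162, 7),
    ("m", "adult"): (176, 8),
    ("f", "child"): (120, 15),
    ("m", "child"): (122, 15),
}

def sample_height(sex: str, age_bucket: str,
                  lo: float = 45, hi: float = 250) -> float:
    mean, sd = PARAMS[(sex, age_bucket)]
    while True:                 # rejection-sample into a plausible range
        h = random.gauss(mean, sd)
        if lo <= h <= hi:
            return h

print(round(sample_height("m", "adult"), 1))
```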


Charlie Burgoyne

That's why I really wonder. I mean, there are a lot of projects right now, and I'm going to be as respectful as I can: there are a lot of projects in experimental genotype manipulation that I think are really exercises in hubris. I don't have a lot of confidence in some of the flashy approaches being taken right now to gene manipulation.


I think there are some interesting subtleties, and I'm very excited about the work being done right now on the vaccines, for example. Again, the catalyst of the pandemic has brought forward some technologies that would've lain dormant a lot longer until we needed them.


So, I do think that there are some interesting activities around gene manipulation, but some of the projects that have gotten some attention recently are laughable and they're being laughed at in their own communities as well.


Robert Hansen

I will not make you elaborate on that. Sounds like you could make an enemy very quickly.


Charlie Burgoyne

No, I don't really care. It's just not in my nature to focus on that.


Robert Hansen

So, I could not help but notice this outfit you're wearing.


Charlie Burgoyne

I'm so comfortable.


Robert Hansen

I know, you look great. But I would like to talk about racing, because you are a race fan. That's where I met you, at a race. And you actually have a racing team. You're not just a nerd; you're also a fast nerd.


Charlie Burgoyne

Yeah, we do weird stuff. I was wondering if the Star Wars versus Star Trek example was going to come up, and here it is. We're a services business; we started as a consulting firm. And we're stubborn about things. I don't want investors, so we don't have them. I don't want debt, so we don't have it. That's because I want to be able to tell people, this is the science; I'm not contrived. I like that about our team.


Robert Hansen

You get to say no a lot.


Charlie Burgoyne

I say no a lot, and it's fun to say no. But getting that off the ground is really difficult. You eat only what you kill, and you don't have a big daddy pocket to reach into when things are getting lean. So, scaling a services business is super difficult. I thought a lot about how companies try to solve this problem. They're like, "Okay, well, let's go buy sales teams." Bring them in, get their rolodexes, and try to bring in as many big projects as you can.


And that works for some, but it really doesn't work long-term at all. You can get a couple of introductions, but we do weird stuff. We solve really strange problems using machine learning and "AI". When that wasn't scaling for us, we had to get creative. And that's where I started thinking about Star Wars and Star Trek.


In Star Wars, when you want to cross a galaxy, you just go really, really fast. You're like, "Oh, we're doing this. Let's just do more of this, faster." That's what building our sales team was like. That's the hyperdrive. But in Star Trek, what they do is say, "We're going to stay the same ship, but we're going to shrink space and time around us."


And for me, that analogy played into, how do we get the people who can land new work, who can think about creative solutions, in front of as many stakeholders as possible? If you look at a distribution, like a gradient field, of all the different stakeholders in an area, the density is actually pretty low.


Like, per square block in Austin, there are probably 0.3 stakeholders who would be qualified to execute on a project with Valkyrie. But at a racetrack, that function has a gigantic peak, a huge peak. Each race has 20 teams. Each team has two drivers. Each driver has their own sponsors. The teams have their own sponsors.


The series has its own sponsors. And you might find that on a given weekend there are 400, 500 different companies there, sending their most interesting talent and the leadership they're most excited to retain. They're there specifically to do B2B. So, I was like, "Well, let's go hang out at racing operations." I've always been into racing: motorcycles a long time ago, until I realized four wheels was a lot safer.


Robert Hansen

They don't call them donorcycles for nothing.


Charlie Burgoyne

I got out of that and hobbied around different platforms, did some Mazda stuff and Porsche stuff and Aston Martin stuff. Finally I was like, "Okay, let's just go to the racetracks, let's get involved." And we devised a really interesting arrangement where teams could get support from us, sponsorship relationships, even potentially technology, in exchange for making introductions to clients for us.


In 2019, we had one car in one race. It was a trial; you were really involved with that, which was awesome. By 2020 we had four cars in two series, I think. Last year was probably our peak for number of cars: we had 13 or 14 cars in five series, which is just wild.


Robert Hansen

That's amazing.


Charlie Burgoyne

We won a handful of championships.


Robert Hansen

You won Formula 4 last year, right?


Charlie Burgoyne

We won Formula 4, USF2000, and Le Mans Prototype Class 2.


Robert Hansen

Congratulations. That's amazing.


Charlie Burgoyne

It was just an incredible banner year. This year we were like, "Okay, well, of those 14 or so cars, some of them brought us really great value." But we're kind of changing those relationships now. Of course, we're keeping the SRO effort with the Porsche team; shout out to those guys, who are awesome. But this year we're like, "Okay, we're going to down-select."


So, we've got two cars in SRO. We have the Le Mans Prototype car, which is really our flagship, and now we're doing development for that platform. We're actually developing the strategy technology and the parts technology; heavy, heavy ML and AI is going into that. And I've got great partners in Richard, who runs United, and his partner, Zak Brown, whom you may have heard of from McLaren.


So, the three of us are orchestrating this whole technological partnership, which is just super fun. And then we have one of our F4 cars; we're doubling down on that particular chassis. We're going to have one brought into the lab soon.


I mean, we have fewer cars this year, but they're more strategic relationships that we're actually developing. This is the first year we're developing real technology for our teams, which is super awesome. And 100% of our commercial work comes out of that.


So, we figured out a solution where we didn't have to take venture, didn't have to take on debt, didn't have to go buy a bunch of rolodexes. We make incredible returns on the services side, and now on the investment side, bringing on investors, based off of racing, which is what we love. We almost love it too much.


Robert Hansen

You're just going to get tired of winning.


Charlie Burgoyne

I mean, well, I don't know if we'll ever have another season like 2021, where we did as well as we did with all those cars and all those teams. But there are 30 weekends a year where Valkyries are racing. And when we're there, it's not like we're just hanging out watching the race. We are actively working to make sure the technology is getting deployed correctly and that those relationships are maturing into new projects. It's a lot of work.


Robert Hansen

I bet.


Charlie Burgoyne

But I love it.


Robert Hansen

All right, Charlie, where do people find you? How do people get in contact with you? What do you got out there?


Charlie Burgoyne

So, I'm not really active on platforms as much as I probably should be.


Robert Hansen

You can give out your phone number then, and people can just call you.


Charlie Burgoyne

Yeah, and my home address, and where my kids go to school. No. I actively love sharing this incredible science and this magic with the universe. I think it's just super cool. I think we're the only private firm thinking about the nature of intelligence this way, and we have a fervor for advancing that as much as we do for profits.


So, go to valkyrie.ai. You can link up with us there. You can reach out; please say hi and tell us about yourself. We're actively recruiting, trying to get more people on board, and we're actively looking for interesting research collaborations.


Of course, the commercial and investment work is great too. And then, yeah, please get excited about this. I mean, this is the last great mystery in the universe. We know way more about deep space, the depths of the ocean, and biology than we do about the nature of intelligence and the 7 billion examples we know of, noodling around between ears.


So yeah, I appreciate you having me on the chat today, and I'd love to hear from anybody who's curious about how we think about these problems.


Robert Hansen

Thanks for coming, Charlie.


Charlie Burgoyne

It's such a pleasure.

