
FUTURE OF AI IN JOBS AND WEAPONIZING SURVEILLANCE

July 18, 2022

S02 - E07

Russ Bodnyk and RSnake dove into how Russ sees AI, machine learning, and artificial general intelligence. They also talk about the ethics and dangers of an AI that passes the Turing test, even if it isn't actually intelligent. Russ made some predictions around the future of self-driving cars and the future of work, and they tackle some challenges with AI-driven warfare. RSnake also issues a deepfake challenge with a $1,000 prize: deepfake RSnake and win some money and Internet infamy.

Photo of Russ Bodnyk
GUEST(S): 

Russ Bodnyk

VIDEO TRANSCRIPT

Robert Hansen

Today, we're back in the Austin studio, and I got to sit down with AI expert Russ Bodnyk. We dove into how Russ sees AI, machine learning, and artificial general intelligence. We also talked about some of the ethics and dangers of an AI that passes the Turing test, even if it isn't actually intelligent.


He made some predictions around the future of self-driving cars and the future of work, and we tackled some challenges with AI-driven warfare. I also issue a deepfake challenge with a $1,000 prize: deepfake RSnake and win money and internet infamy.


AI is always a bit of a challenging topic because it dovetails into philosophy, religion, ethics, and science in a way that almost nothing else can. Without further delay, please enjoy my conversation with Russ Bodnyk.


Hello, and welcome to The RSnake Show. Today, I have with me Russ Bodnyk. How are you, sir?


Russ Bodnyk

Good. Very good. Thank you.


Robert Hansen

You made the long drive down from Dallas today, I hear.


Russ Bodnyk

I did.


Robert Hansen

How was that?


Russ Bodnyk

It's actually not that bad. The drive between here and Dallas is a lot closer than the distance to overseas.


Robert Hansen

You have an interesting lifestyle. You live out of an RV, is that right? Or a bus?


Russ Bodnyk

RV, yeah, I do. I have a couple different places. But as a scientist or a lazy computer science consultant, part of our job is to try to understand the complexity of the world around us and maybe predict things. Sometimes it's valuable to assess some risk.


Never in my life thought I would be an RV person. Love it.


Robert Hansen

What do you love about it?


Russ Bodnyk

I think it's my own conscious, subconscious version of Starship Enterprise, although it's probably closer to Eagle Five from Spaceballs. In reality, our good friend, don't want to jump the gun. We could talk about him later.


He turned me on to it years ago, and I actually started looking when I was still overseas. I came back, did the 2016 centennial. I knew I was going to do it in a big RV, and I just fell in love with it. I was like, “Why do I have all this extra stuff?”


Robert Hansen

That is pretty appealing, I'd say. Shed all this stuff and just become a bit of a drifter. And go wherever you want to go.


Russ Bodnyk

Drifting is interesting. Yeah, sometimes I feel like fancy cousin Eddie. But being a nomad has been in our DNA. Neolithic humans, we've always transitioned and followed the game. I love travel.


For me, travel and reading, pursuing curiosity is my life. I'm lucky to have it. But the green footprint and getting stuck in flights and 9/11 and the pandemic really took the romance out of travel and at least air flight. So the RV’s great. It's like a metaphorical and physical vehicle.


Robert Hansen

I heard the campsites were absolutely flooded. Everyone had the same idea at the same time, and all the RV parks were at 100% capacity.


Russ Bodnyk

Yeah, I'm lucky. I have a place in Colorado. Before that, I grew up visiting my family there. And so I go to Estes quite a bit in Rocky Mountain. It's just long lines. It's a balance. We have a lot of opinions about government.


The national parks in the US and around the world, state federal local parks, the management and preservation of nature to be shared is an amazing thing. But of course, you can't always predict when people are going to get out.


I'm so happy that nature is on the forefront of people's minds. But you’re waiting in line for several hours, unless it’s midnight at Yellowstone and a buffalo comes through, those benefits. There's traffic at midnight in Yellowstone.


Robert Hansen

I bet. Buffalo traffic.


Russ Bodnyk

It is.


Robert Hansen

This is the second episode I'll be doing on AI and ML. I would like to say that this will be my last and that the topic will be covered, but I know that's not true. It's just way too deep a topic, and I'll probably just keep coming back to it.


You and Charlie Burgoyne, who we talked to in the previous episode, are the two people I consider my closest friends who do this for a living.


Russ Bodnyk

Wow. Well, thanks, Robert. A little awkward high five.


Robert Hansen

All right. We'll do that. That was quite awkward.


Russ Bodnyk

Got to keep it awkward. No offense.


Robert Hansen

That's right.


Russ Bodnyk

I'm a big hugger, and it’s just not going to work.


Robert Hansen

Yeah, we took care of that.


Russ Bodnyk

We did.


Robert Hansen

Do me a favor, I've heard lots of different definitions of AI and ML, I would love to hear your definition of it. And if you want to do AGI as well, I'd love to hear that as well.


Russ Bodnyk

Sure. It's very subjective. And it's growing even more subjective, which isn't a bad thing. There was more discussion about the philosophy of intelligence, where you go back to Noam Chomsky and even just Richard Feynman. I wish he was around. He's an amazing guy, a lot of interesting conversations.


The idea of how we define consciousness and intelligence is something that's only useful for a short period of time. It fluctuates all over the place. But the consensus-ish, whatever it is, is that we're still pretty far away, at least amongst the actual scientists.


Whether artificial general intelligence requires agency, whether it requires consciousness, whether it requires more interest in emergent properties, we don't know. The definition nowadays is so challenging.


Machine learning is more automated statistics. And so when you get to AGI, some would say that it's even more of an art form in a philosophical discussion.


It also depends where you go to school. I went to UT Austin, where it was still within the natural sciences. So there's philosophy, linguistics. Whereas if you go to MIT, it used to be more in the computer sciences type or engineering.


Robert Hansen

They haven't been standardized even in your own industry.


Russ Bodnyk

Not at all. It goes back and forth. Neural networks have been amazing. Yann LeCun did really cool stuff with basically artificial vision, the convolutional nets, which are just summaries of summaries of summaries of vision.


Before 2012, neural networks, AI, machine learning, anything dealing with more complex, stochastic, less deterministic problems, basically didn't work. And then 2018 was huge for sequential natural language processing.


The definitions are now more fluid than ever, in my opinion. With general intelligence, people do say it's like, forgive me, pornography: you know it when you see it. This is a big part of interestingness and subjectivity in art, which I'm a big fan of. We really don't know. It's such an exciting time where you can actually talk about it; it's a topic of the field.


Robert Hansen

People actually know what you're talking about.


Russ Bodnyk

Sometimes. I'm not sure if anyone understands me, I'm pretty weird. But I try to keep it good weird. There's a talk show. I don't know if I'm allowed to mention other podcasts on your show.


Robert Hansen

Of course not.


Russ Bodnyk

It's called Machine Learning Street Talk. And it's amazing. Machine Learning Street Talk, the guys that run it just did Noam Chomsky. They've done Yann LeCun, Yoshua Bengio. Yoshua Bengio is one of my favorite people of all time. Ken Stanley, Yoshua Bengio. Ken's a good friend of mine. Yoshua, I don't know.


I can listen to a podcast with some experts about the philosophy, the art. It's amazing times we live in. So they're really pushing the boundaries of defining. But I will say that cognitive neuroscientists, there's so much we don't know about what this does and then the entire state space of what's out there. It is definitely more than GPT-3. GPT-3 is not AI at all.


Robert Hansen

Where do you see the future of AI and ML going? The real stuff that we have today that we're building on, where do you see the future of it in 5-10 years? Will it be able to solve basically any human problem outside of falling in love? Where do you see that headed?


Russ Bodnyk

One of the wonderful things is I think there's a lot of good that it can do. I don't publish research papers. I am a lazy scientist. So I take people's ideas, prototypes, concepts, patents, and I try to help them build the prototype to test it out.


Robert Hansen

In physics, there's somebody who deals with the theory of it, and there's the practical person who works at CERN. You're the practical person working at CERN, trying to find the Higgs boson.


Russ Bodnyk

You can actually do a bit of both nowadays, easier than ever. And of course, when I say ever, I'm viewing this like two decades in. When I first wanted to get into the field, the type of AI that I was interested in would have been considered AGI-ish at the time.


Neural networks, artificial imagination, creativity: it was non-existent. It wasn't there. Now there are so many applications. If you just froze the theory and applied it, you would have 100-200 years' worth of better vaccines and get closer to solving all sorts of problems, social and environmental and otherwise.


My opinion is that our society is very much towards the Publish or Perish money reward versus just taking applications to help people in existing things.


Robert Hansen

Why is that? Is it just because people are locked in the ivory tower? Or what's going on?


Russ Bodnyk

That is a good question. I don't know that I have the answer. Again, I was fortunate. I met Ken Stanley at UT Austin. He was a grad student, I was undergrad. And he's got a lot of wonderful material.


I got to follow him around the world. Even though I skipped my PhD, I got to go meet him up at conferences. We’d go have dinner. That person is awesome. Oh my god, that person is interesting.


I was behind the scenes not publishing but still attending conferences and hanging out with Ken and just seeing his reaction. He did some work at the complexity institute in Santa Fe, where it's very interdisciplinary. And so they have a lot of different kinds of people.


Whenever he's publishing his paper or sometimes trying to get into a conference or whoever's going to pay him, there's still all these gatekeepers, “We want something innovative, but you have to do the same old thing.”


We want something brand new. And yet, we want this convergent, this benchmark. So we're playing to the benchmarks. We're not allowing true innovation from some degree, we're not admiring true creativity.


As for strong AI, if you ask Bengio and a lot of the people that are pushing the field, like Yann LeCun and others, a lot of times they won't even give a prediction. They're all way better in this field than I am. And then you have, of course, Ray Kurzweil saying 2029. What do you think? Are we going to have strong AI, AGI, in 2029?


Robert Hansen

I think we'll have something that passes the Turing test.


Russ Bodnyk

Well, we're getting close to that. It depends on who it is you want to talk to. If you want to talk to a crazy teenage chatbot, GPT-3 is great. If you want true expert system advice, maybe not so much.


Robert Hansen

I keep playing with the idea that if I just apply one of these chatbots to all of my emails that I've ever received, all of my responses, it would be pretty close to how I think and how I respond to things. It wouldn't be perfect, and it wouldn't be able to come up with necessarily new ideas. But it would certainly be able to respond to the bulk of questions that come my way.


Russ Bodnyk

Absolutely. That's how we define it: how much of what you do during your day involves agency, involves consciousness, involves intelligence? So much of the daily tasks, the dull, mundane, et cetera; it's getting really good at them.


If I may step back for a second in going back to the definition of AGI, there's a current popular theme and I don't disagree with it. Daniel Kahneman is a famous psychologist and economist. He puts forth the idea of type one and type two intelligence systems.


He wrote a book called Thinking, Fast and Slow years ago. Yoshua Bengio and a lot of the researchers in the field are using this out of the infinite state space of: what the heck is intelligence? What are our limits? How do we define it? How can we do it artificially? Can we talk to the octopus or not?


If I may explain this: so much of this stuff is hard to discuss without also doing some education, which can get really boring.


Robert Hansen

Well, do your best not to make it boring. But I think a lot of people will need to have a little bit of a primer. So you might as well.


Russ Bodnyk

Oh, yeah. For primers, again, Machine Learning Street Talk is an amazing series. But those guys will go deep, too. So there's a lot of things they talk about where it's hard like, “What are Markov models? What do you mean by Bayesian?”


Type one intelligence is where AI is right now, in the way that it’s defined and in the way of a lot of vectorized, neural network stuff. You have graphics cards, you have vectors of vectors.


Even stepping back further, you could do a lot of just graph theory. Stuff I worked on with Charlie, graph theory. Graphs are amazing. Or practical applications, not theory necessarily.


The idea is if you do a new application, you have this graph. Relational databases are not good at representing relations. A graph is so wonderful because it makes sense. You can show it to an executive, you can show it to a child.


It's basically a giant-ass flowchart. Huge. It represents as much of reality as you can abstract into a flowchart, and then you have directions, arrows of causality. And then you can have weights, probabilities between two nodes.
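That giant weighted flowchart, nodes, directed arrows, and weights on the edges, is exactly the structure a GPS searches over. Here's a minimal sketch in Python; the city names and drive times are invented purely for illustration:

```python
import heapq

# A graph as a "giant flowchart": nodes, directed arrows, and weights
# (here, hypothetical drive times in minutes between cities).
graph = {
    "Dallas": {"Waco": 95, "Tyler": 100},
    "Waco":   {"Austin": 105},
    "Tyler":  {"Austin": 210},
    "Austin": {},
}

def shortest_path(graph, start, goal):
    """Dijkstra's algorithm: the kind of search a GPS runs on a road graph."""
    # Priority queue of (cost so far, current node, path taken).
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        # Follow each outgoing arrow, accumulating the edge weights.
        for neighbor, weight in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(queue, (cost + weight, neighbor, path + [neighbor]))
    return None

print(shortest_path(graph, "Dallas", "Austin"))  # → (200, ['Dallas', 'Waco', 'Austin'])
```

The same dots-arrows-weights picture scales from a toy like this up to real road networks; only the size of the dictionary changes.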


The GPS that got me here today was a flowchart, a graph model. We talked about neural networks and convolutional nets, and BERT, GPT-3, etc. Those are vectorized words and, of course, convolutional nets are vectorized images. So it's vectors on vectors.


A vector is just like an array. It's like a list of lists of lists with links to them. What we do in type one intelligence, there's much more into it. But essentially, the way we can think of it practically, when I drove here, when I left Dallas, I drive all the time, I'm on autopilot. I'm instinctually going. I don't have to think half a second of what to do. It's a route that I've learned.
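The "list of lists" idea can be made concrete with a toy example. The three-dimensional word vectors below are made up for illustration; real models use hundreds or thousands of dimensions, but the comparison works the same way:

```python
import math

# Toy "vectorized words": each word mapped to a small list of numbers.
# These 3-D vectors are invented; real embeddings are far larger.
embeddings = {
    "car":    [0.9, 0.1, 0.0],
    "truck":  [0.8, 0.2, 0.1],
    "banana": [0.0, 0.9, 0.4],
}

def cosine_similarity(a, b):
    """Vectors pointing in similar directions represent similar words."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# "car" should land much closer to "truck" than to "banana".
print(cosine_similarity(embeddings["car"], embeddings["truck"]))
print(cosine_similarity(embeddings["car"], embeddings["banana"]))
```

A whole sentence is then a list of these vectors, and a batch of sentences is a list of those lists, which is the "vectors on vectors" structure that graphics cards are so good at crunching.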


It took me a while to learn how to do it, to understand object permanence maybe when I was eight months old, my hand doesn't float through as a child, to then learn how to drive, etc. But once I do it, it's fast. We don't think about it.


Current machine learning is like that. When we see GPT-3, that's a wonderful chatbot, like, "Oh my gosh." Like you said, it sounds legit. Depending on the task, it can handle 95-plus percent of it. It's those edge cases, though.


Whenever I'm driving and I leave my neighborhood, "Oh, man, there was a lot of traffic today." So then, bam, that's an edge case. There's new construction or something you've maybe never seen before.


Then I leave that type one intelligence, which takes a long time to learn but is fast, instinctual, and parallelizable; we can walk and talk at the same time. Then we move to type two, which is slow, methodical, verbalizable, conscious; we think about it.


That's what people think about thinking, and there's many different forms of intelligence.


Robert Hansen

It's a different part of your brain.


Russ Bodnyk

Well, it's not only that. But neural networks are both memory and processing all in one. It wires and fires together. For a long time, we've known that, really, we need much more complex theoretical models. We need more complex things that we can test. We need different objective functions. Are we trying to navigate? Are we just being creative? We need better ways to have randomness, it's not brute force.


There are still all these problems in information theory and consciousness in AI. As we get to type two, that's where we're really far behind, assuming that type one, type two model so many people are into is-


Robert Hansen

Yeah, that seems plausible.


Russ Bodnyk

Yeah. Well, even if it's not plausible, it works well with computers.


Robert Hansen

Well, how about this, it seems like that probably is how our brain works. So if we're trying to model something that looks and acts like us, you're probably going to have to do something similar.


Russ Bodnyk

Correct. And not only that though. The nice thing too is that it conveniently fits into a lot of AI paradigms. So we've got all this sweet hardware, where these vectors just brute force it to a certain degree.


GPT-4 is going to be like a trillion parameters. So it's going to burn like multiple rain forests worth of stuff, on one hand. On the other hand, it does amazing stuff. And it will be a stepping stone, in my opinion. It will be a branch that dies. Just like we're the result of all kinds of branches of evolution.


Anyways, I hope this is not another great AI die-off. I experienced one when I started the field. It was less fun. I don't think it's going to be just a lot of hype, which is a better problem than prejudice and dogma around a field. But at least there's just something.


This type two is really fascinating. I really don't fully understand it. Then back to that graph idea of, “Well, I've got links and nodes.” It's like a giant flowchart. Most people will use flowcharts, if-then-else, we go different directions.


Then also with the language models: before 2012, neural networks just didn't work. It's like, "Show me, prove it to me." As scientists, we have hypotheses, and we need to consider: is it a false positive? We've got to understand how it's true, why it's true, and when and where it's not true. Hence the scientific method and publications.


Now we can test a lot of this stuff, and it's fascinating for all intents and purposes: for fun, for creativity, in terms of just having fun. The creativeness that a researcher has to have is like, "Man, how do dementia plaques and tangles work? Oh, man, there's still so much about allergies we don't understand."


Robert Hansen

Or even vision.


Russ Bodnyk

Oh, man. I used to tell my students the 20/20/20 rule: every 20 minutes, make sure you pick something 20 feet away and stare at it for 20 seconds. Then during the pandemic, I was on my phone constantly. And I'm just like, "Whoa, okay, I totally destroyed my vision. I'm really missing the vision I used to have."


Then there's type two, which is where we're stuck now. Type one, if this is the way it works, we've done a pretty good job on.


Robert Hansen

Okay. Yeah, type one seems like a solvable problem almost. With today's technology, nothing crazy. You give it enough tests, and you give it enough feedback. You're like, “You did a good job.” And it's great. I got a good score. So I'm not going to run over that person. That's good enough.


Russ Bodnyk

Forgive all the car analogies, but they work so well. Tesla is amazing engineering, OpenAI. We live in a time too where people are putting out their stuff for free.


When I do a project, I've been a big fan of open source forever, people are like, “Can I do this? Can I do that?” I was like, “First of all, that's in the movies. That doesn't exist.” You know what that's like?


Robert Hansen

Yes, I do. I know all about it.


Russ Bodnyk

Ethics and laws are important.


Robert Hansen

If Chris was in the room, you'd say yes. But if he doesn't do it, then no one will watch the show.


Russ Bodnyk

That's where it's exciting. With the driving at Tesla, you know what they do, Andrej Karpathy and others; that's some serious engineering, what they can do. They're doing direct compiling. They're customizing everything and squeezing the most out of their own custom hardware.


It's going to be fascinating. You have that type one, and it's really good. You have brute-force approaches like GPT-3 and GPT-4. They're working better than we thought. It's not intelligence. It's not consciousness. It's not agency. But that type one, type two, if that's the case, we're getting darn good at type one. But we're still terrible at education and learning.


You don't have to see a thousand images of a banana to recognize a banana or a hot dog. Have you seen Silicon Valley?


Robert Hansen

Not hotdog?


Russ Bodnyk

Hotdog or not hotdog. No hug, we’ll do long distance high five. I love that. What do you think about that show, if I may ask?


Robert Hansen

The first episode, having no context at all for the show, I was just turning it on. It almost gave me a panic attack because it was so realistic. And I was actually going through some very similar trials and tribulations with my company at that time.


I'm like, “I don't know if I can watch. I know it's a comedy.” But oh, it was a little too close. But after a couple episodes, I really got into it. And I think it's a great show. It's very funny.


Russ Bodnyk

Oh, it's awesome. Some of my friends worked with some of the comedians on that, and they're legit. I love playing the Hollywood game with my friends that are in Hollywood: who's an a-hole and who isn't? It's really interesting. I could see that some of them were really nice.


Robert Hansen

A very good friend of mine actually writes Gilfoyle.


Russ Bodnyk

Oh, no way.


Robert Hansen

Yeah.


Russ Bodnyk

He does such a good job.


Robert Hansen

Yeah, he does.


Russ Bodnyk

I wonder where he gets inspiration from. Holding up the mirror? As you can tell, pretty quiet. But it’s hard to answer some of these questions. How do you have the mind meld? And it's never a simple answer. The more we know, the less we know. But brevity is wonderful, too.


Robert Hansen

Where do you think this is going for the average employer or employee and the company going forward? Is this something that's going to be part and parcel of their daily operations, where every single thing they have is intermediated by an AI agent that does something and then they just get an output? Or is it going to be fairly specialized in only very particular industries?


I even see it in places like human resources. There's a company called Muybridge, for instance. They try to identify whether people are buying yachts when they shouldn't be buying yachts. You know what I mean? All kinds of stuff.


That's not a use case that most people think about. Why would AI get involved with that? Well, if you've got a lot of data, you might want to have somebody to go check all that.


Russ Bodnyk

Or even Putin upsetting his yacht owners. There might be some interesting things in the market.


Robert Hansen

There's all kinds of versions of that. Certainly in my world and certainly in just IT in general, there's lots of applications. I really think there might be an awesome place for legal, for instance. Why is a lawyer sitting there all day analyzing contracts when, really, the bulk of contracts are just cut and paste, the same two or three massive clauses over and over and over again?


Why do you need to check that? Why can't the AI decide, “Here's the risks.” and tell you what the risks are. And you can just say, “Yep, I'm okay with those risks. Not okay with these risks.” Or weight the risks and say, “Fire back with these risks highlighted or turned off. I'm not going to accept this contract as is.” The AI on the other side battles back, and they decide on a weighted score of where they're going to land.


Russ Bodnyk

Wonderful questions and answers. It's already doing that to some degree in terms of legal expert systems. If you go and test GPT-3 or Hugging Face, Hugging Face is an amazing company. And all they do is make these complex models accessible.


You can go and get on the website, and you can test the best models often. So you have summarization, you have question answering, you have translation, and you have substitution, you also have word generation. And if you could break down a lot of tasks with text, whether it's legal expert, or otherwise, that's a large portion of most people’s tasks.


Robert Hansen

It even goes beyond that. There's a company, I don't know if they still exist, I think it's called Crystal AI maybe. Basically, you just go there and you type in someone's name. They're like, “This is the kind of person they are.” which seems a little like reading people's fortune or something.


The idea was they scraped social media. And they figured out, “Well, this person seems to be the kind of person who's more pragmatic.” You probably don't want to talk to him about fortune or whatever or this person seems to be more artistic. So you probably want to talk to him in this particular way.


Russ Bodnyk

Sure. Exactly. And that's where it has the advantage. Yes, it can't handle the edge cases and creativity. But if you look at what the brain does a lot of the time, we are focused on reproduction. We're not always doing sophisticated things.


That overlap is here today. It's a matter of the accident of who gets accepted, whether it's driven by your nation state or your personal prerogative or just chance and randomness. It's there. It's happening.


I've worked on some prototypes with some of my business partners. I like it because of their ethos; they're trying to bring fancy tech to the middle class and/or students and/or English-as-a-second-language speakers who can't get all the great Apple stuff, who can't afford a $1,200 phone.


Siri is not that great, anyways. You’re not missing anything.


Robert Hansen

I'm under NDAs about Siri.


Russ Bodnyk

Oh, okay. Tell you what, Steve Jobs was an amazing man. But in the publication world, I have yet to see a good publication AI. I use open source. I read papers. Some of my friends are publishing, and I use it.


Someone's like, “Hey, can you build me a digital eagle chatbot? Can you create an avatar that's like 40% Oprah, 60% Michelle Obama, sometimes a little bit of Billie Eilish?” I’m like, “Yeah, sure. I could approximate the voice. I could take in the content.” So I get lots of requests.


People at Google, Facebook, and OpenAI snagged up a lot of the best researchers. But it's free, and it's open, a lot of it. Not so much from Apple, nor from other countries. In some ways, it's really interesting. Yes, privacy sucks.


We know our FICO score, but we don't know our Facebook, Google, advertising, whatever scores are out there. And then also that we get to computational psychiatry, which I did a little bit of work in years ago. One of my friends is a badass. She's a forensic psychiatrist in New York.


Back then, there was very little in the field. But now there's a lot of activity, and finally patents. It's not just, "You're about to have a heart attack." It's like, "Oh, you're going to have PTSD," the specific thing, a warning, a check-in.


Anyways, absolutely, it is here. It's happening, quietly. I'm speculating, but the big governments are doing quiet, crazy stuff. Crazy in the sense that they have these giant treasure troves of language: Twitter, the internet. That's basically like GPT-3 and 4.


Robert Hansen

God help us.


Russ Bodnyk

That's why you have to be careful.


Robert Hansen

That’s like computer science 101. Do not accept user input. You got to do stuff to prevent that from entering your system. It's going to be a nightmare.


Russ Bodnyk

Yeah, there's been some stuff in the media.


Robert Hansen

Or if you accept it, make sure you parameterize it. Make sure you know exactly what you're getting.


Russ Bodnyk

Well, that’s another discussion, too. We talk about stochastic state spaces and infinite things.


Robert Hansen

My name is pi. And I don't mean some approximation of it. It's actually pi.


Russ Bodnyk

That's nice. That's another interesting number, too. We start getting into primes and pi. We don't have an AI that can understand what pi actually means, or how to apply it in context. Mice in mazes cannot solve a prime number maze.


In defining consciousness in AI, do we even understand our own? What are the hard limits or rules of intelligence, like we have rules in physics? We don't really know, but we're getting closer on type one. And so, again, with the current research, there are so many jobs you could do with AI.


At least, you don't have to replace someone's entire role. In the CISSP and Security+ material, there's a maximum number of monitors a security guard can effectively watch: 16 visual monitors. We can't correlate much beyond that.


With assisted AI, you can do much more. And that's where we are, collaboration. I love the idea of collaborative AI because the idea is that it's not going to take over for us for a long time, depending on what you do.


Robert Hansen

Some people are going to get out of jobs very quickly.


Russ Bodnyk

Absolutely. The more dull, dirty, dangerous, and boring your job is.


Robert Hansen

There’s x.ai or something. It's a digital assistant. A number of people I know have switched over to it and just said, “Screw it. I don't need to monitor my mail coming in asking me for meetings just to have this digital system handle all this stuff for me.” That's definitely removing jobs. 100%.


Russ Bodnyk

Oh, yeah. From an economic standpoint, it definitely is happening. It will happen, and there will be that disruption. I don't have the answers. I'll still have them wrong and try to be less wrong. If I'm a little less wrong at the end of the day, it's a good day.


There are the ideas of Star Trek, universal income, where yes, there's all this crazy, fascinating, dangerous, wonderful healing. There's the full spectrum of what AI can do, much like Star Trek. Not everyone's going to use it. You can have the Ferengi, you can worship the AI, you can be against the AI.


In my recent travels on the East Coast, I saw an Amish horse and buggy getting pulled over at midnight by the police for speeding. I was very curious about that. It does happen, and I was wondering if they had the proper AI.


Robert Hansen

Were they speeding or was the horse speeding?


Russ Bodnyk

I don't know.


Robert Hansen

Who got the ticket?


Russ Bodnyk

I don't know. I think it was going really fast. I assume it was speeding. I don't know for sure.


Robert Hansen

Okay, well, let's switch it up a little bit. Let's talk about the future of automobiles. One of the things I think is great about the future of automobiles is that I could be absolutely drunk and still get home.


Right now I've got the Uber, I can make a phone call or whatever. But wouldn't it be great if I didn't have to talk to anybody and I just got home? I could sleep it off, and I don't have to worry about getting robbed or someone worrying about their safety or whatever. Just get in the vehicle, and it takes me where I'm going. And it's way safer than an actual driver.


Or the other version of that is maybe it lets me drive. But maybe it says, "Hey, you're drunk. You're not allowed to drive. Or you're sleepy; I can tell you're certainly not alert. I can tell you look sick. You look like you have a fever. You're probably not feeling well. I'm going to take you to a hospital. Or I'm going to take you home. I'm going to take over driving. I know you love driving, but no. Not today."


Where do you see the future of cars going? I keep wondering, what's the last car I'm going to have to buy?


Russ Bodnyk

People will miss the romance of cars, the open road. But it's going to happen. It's inevitable, I think. Well, it depends on the government, unless we're talking Mars and asteroids. They'll be so much safer than a human, eventually.


Robert Hansen

When is eventually? That's what I'm getting at. And what does that mean? I think I have an idea for what I mean by that question but not necessarily the audience. So what does safer really mean?


Russ Bodnyk

When it comes to normal driving conditions, they're pretty good. And that's where the definition of safer depends on mortality, on how people die on the road. But what is the real cause? Texting and driving, being distracted. That's a big cause.


With Tesla, for example, and I go back to them just because they are pushing the envelope for driving and it's just a feat of engineering. There are statistics coming out, and it's the edge cases. There will be a freak accident or even just a weather storm, where that particular use case was not trained on, and people will die.


It's a matter of the history of what's happening now: how many accidents? And it's very political, too. Compared to the average person, a Tesla is probably a better driver in certain scenarios, like on the highway.


Robert Hansen

For trucks, for instance, it might be the answer for long-haul trucking.


Russ Bodnyk

Yeah, and I don't know what it’s going to look like when you get to some crazy Boston streets and traffic.


Robert Hansen

But that is definitely going to put hundreds of thousands, if not millions, of Americans out of work very quickly.


Russ Bodnyk

It's happening for sure with Uber. This is my first public thing, and thank you, my friend; I've done talks and stuff privately. I was giving a talk to a government, political crowd in the state of Texas, at a very cool institution, a private conference.


They're like, “Hey, man, don't mention job disruption when you talk about the future of AI, okay?” I'm doing the accent a little. I love accents. I love Texas. I'm originally from New York, but I grew up in Texas. It was one of the first things I talked about on stage, because everyone was asking about it: “Is it going to take my job?” It's everywhere.


Robert Hansen

And you think it is?


Russ Bodnyk

I'm surprised it's not happening a little faster. I wish the whole world to have less suffering and more happiness on repeat. Ethics, ethos first when possible. I don't want anyone to suffer. But the nature of reality is that there will be some disruption. Absolutely.


We were not prepared for the pandemic, which was predictable. In Texas, it snows. And each year people are like, “Oh my gosh, I think it snowed. But it might start again.”


Without being boring or waxing and waning, you have this nature of humanity, in philosophy, to underestimate and to not be able to predict. Absolutely, it's here. And then in terms of years, it'll be interesting to say: maybe for highway use, less than five. Full self-driving mode.


I'd order another Cybertruck. Now you can customize it, but the full self-driving mode, what exactly does that mean? Whether we do it or not, Chinese robots are going to take people's jobs. I'm not a fearmonger. Fear, uncertainty, and doubt is something that I try to avoid. I have a very relaxed amygdala most of the time.


Whatever we do or don't do, someone on Mars, Russia, China, someone else will do it. And it's just a matter of time for highways, certainly for delivery truck services. What's even more interesting, I think, is the number of flying electric cars, the number of drones in the sky: Joby Aviation, Germany's Lilium, Archer. There's a lot of huge progress with that.


Robert Hansen

They are not quite there yet.


Russ Bodnyk

No, they’re not.


Robert Hansen

They're getting really close. I remember seeing one. I can't remember which, maybe it was Archer flying in Abu Dhabi. Maybe. And they're flying building to building. Why do you even need to think about getting in a car? Just go to the roof, jump in the quadcopter, and it takes you over. And there's no controls at all. You're just in a vehicle that just takes you places.


Russ Bodnyk

It’s a big drone.


Robert Hansen

Yeah, it’s a big drone. Hollywood is already experimenting with this for very large rigs, not the tiny little drones. But these are enormous and can lift the entire car-type drones. I see no reason why that can’t be turned into a bus. Just a big bus. Get on it. It takes you wherever and drops you off. And there it is.


Russ Bodnyk

I just took my family to New York City, my niece and nephew. They loved it. Napoleon Dynamite is one of my favorite movies. I like to quote Napoleon Dynamite and Stepbrothers.


Robert Hansen

But can you do the dance?


Russ Bodnyk

Oh, wow. No, I can't do that. But sometimes it's like, “What is Helga going to do? What is Horton going to do?” When I ask my nephew when we're walking in Central Park, “What are we going to do today?” he's like, “Gosh, we can do whatever we want,” you know. And that's where it's going to proceed faster. I think the laws and the ethics will catch up.


Robert Hansen

If I can hold on to my car for five years, do you think the very next car I’ll get will drive me around town and take me everywhere I want to go? And I will not have to even look at the steering wheel once. I can be on my computer the whole time.


Russ Bodnyk

Five to 10 ish. The technology will be there, but it depends on your edge cases.


Robert Hansen

Normal downtown traffic, I'm not talking about going off road or something.


Russ Bodnyk

Yeah, and there will be a lot of stats. So there'll probably be certain modes: highway, city, and in-between. There will probably be certain areas where statistically the AI just can't handle it.


It's going to be the easy cases first, the low-hanging fruit. It depends on where you're going. If you're going from a simple warehouse to a simple warehouse and there aren't lots of crazy Boston streets in between, five to 10 years, I think.


Robert Hansen

What do you think about the legislation? How long is that going to take for them to catch up where I'm allowed to fully be asleep the entire time?


Russ Bodnyk

It all depends on where it is. Maybe 10 years. It's hard to say. This is all wild speculation.


Robert Hansen

Of course, that’s why we are here. I'm going to put a calendar notice on my calendar 10 years from today. And I'm going to say, “Russ, you are slightly right.”


Russ Bodnyk

I’m probably wrong. I tend to be off my estimates.


Robert Hansen

But I want to know in what way though. I think that's interesting. I think the more people I ask, the closer I get to an average. I really thought it would happen faster. I thought this was coming much quicker. But I think part of the problem is there's been so much hesitancy in the market.


People like it, but people are still very afraid of it. I was driving in Oregon with a friend of mine. Sorry. I think Seattle, actually. We were on this little island off the coast in his Tesla.


I'm like, “Why aren't you turning on the auto mode?” I'm just curious. I was like, “Oh, you really can't?” I'm like, “What do you mean you can't? It's a road, why not?” He's like, “Okay, I'll show you.” He turns it on. Sure enough, we went around a turn, and it took it extremely wide, to the point where I'm not sure it would have corrected in time. And I'm like, “Okay.”


He had to steer into it and take control. I think that it's going to be a while before he feels comfortable to let go of the wheel. Now, it may not have crashed. It might have just been taking a wide turn, and there's a cliff there. So I can see why he's a little freaked out by that. Or maybe rightfully very freaked out by it.


It's going to be a while, I think, before it drives like people drive in a way that they'll just go, “Yep, it works perfectly all the time. A much better driver than I am. I've gotten in three accidents in the last three years. This thing's never been in an accident. This baby's perfect.”


Why do I even need to own that thing? is the other question. Why isn't that just an Uber car that comes around and picks me up, and there's always burst capacity for anyone who wants it?


Russ Bodnyk

Sure. That's most likely going to happen as well. For anyone that's watching, there's a fun edge-case video on this subject of Tesla's self-driving. It's a little older now but still relevant.


One of the things I like about Elon Musk and Tesla and these companies is that they're open. With so much of the stuff that's happening, you have no idea what companies are doing. Like Moderna in the biotech space; I love watching their stuff. Actually, I have some of the same equipment in my lab. Other firms don't do it. And so with some of these big things, they're not going to tell you exactly what they're doing.


They may publish a nice, fancy research paper at a conference that makes them look wonderful to stockholders and the public. But they're not going to show their error cases unless they're forced to. Tesla does sometimes. There's a situation on the edge case, something it wasn't trained for: a Tesla going down the highway. This is on YouTube, actually. There are a couple of videos on YouTube.


Robert Hansen

I've seen some of them.


Russ Bodnyk

What it's passing, going around 70 on a highway, is a construction vehicle carrying equipment to be installed. The equipment is an intersection in the back of a truck. So you've got stop signs, you've got street signs, you've got stoplights. You can watch the AI's output, which is typically bounding boxes, and it shows what's going on. And each detection has a probability.


All of a sudden, it starts drawing all these red boxes. And it's at 70 on the highway: “Am I in an intersection? Error! Error! Intersection. Do I slam on the brakes?” Of course, it's got validation routine checks going on. And so that's a wonderful illustration of a use case where a human's like, “Yeah, of course, that's just a regular truck with construction equipment. I'm not going to slam on the brakes. That'd be terrible.”


Now, it didn't slam on the brakes. But you could see where maybe a previous version did, or they didn't know about it, or it's just a chance corner of the state space.
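To make the scenario concrete for readers: perception stacks like the one described typically emit bounding boxes with class labels and confidence scores, and downstream logic has to decide what to do with them. This is a minimal, hypothetical sketch, not Tesla's actual pipeline; the names, the threshold, and the "is it moving with us?" check are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str                 # e.g. "stop_sign", "traffic_light"
    confidence: float          # model's probability for this box, 0.0-1.0
    moving_with_traffic: bool  # True if the box tracks at our own speed

def should_brake(detections, threshold=0.9):
    """Hypothetical sanity check: ignore intersection objects that are
    moving along with us -- they're probably cargo on a truck, not a
    real intersection."""
    for d in detections:
        if d.label in ("stop_sign", "traffic_light") and d.confidence >= threshold:
            if not d.moving_with_traffic:
                return True  # a real, stationary intersection ahead
    return False

# The truck-full-of-signs scenario: high-confidence signs, but they are
# traveling at highway speed alongside us, so no emergency brake.
cargo = [Detection("stop_sign", 0.97, True), Detection("traffic_light", 0.95, True)]
print(should_brake(cargo))  # False
```

The point of the sketch is that the hard part is not drawing the red boxes; it is the extra context check that a human applies instantly and that the model was never trained on.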


Robert Hansen

I've heard the phrase that software's eating the world. Do you think that eventually we won't need to work? Do you think that this is going to get far enough along that you'll basically just tell an AI system, "Hey, I want to build blah, blah." And it'll just go and do it for you, and you can just relax at home all day, or travel all day and never have to think about working.


Russ Bodnyk

I mean, in my opinion, humans are wonderful. The full gamut of creative, inspirational, do whatever you want, destiny, seize the day. And then, they're also very dogmatic and conforming.


So, the option will be there. There'll be both. If you so choose to work, because for whatever reason you want to do it yourself, you feel good about it, you can do it. Again, I keep referring to Star Trek. Gene Roddenberry, minus the strict military structure. He was a helicopter pilot.


So much of Star Trek, I find when I'm talking to a crowd or the general public, I'm like, “Think about Star Trek. They have AI, they have the option, there's the Federation, there's a meritocracy, there's the Ferengi.” There's all of it. Anything is possible, if you so choose and have the prerogative to not have to work.


And that's where, in my opinion, it's less about job displacement, the negative; it's more about, what if you could do anything you want to do? What is your dream? I try to ask people frequently. I didn't have amazing counselors growing up, like at school, but whatever.


I was an upper-middle-class kid. I was like, “Hey, can you help me fill out these college applications?” That's all I needed. Very lucky. I'm very fortunate. But you don't ask people, “If you could do anything, what would you do?” Like, realistically: get a PhD in AI. Be a badass world podcast host and ethical hacker. Write Slowloris and change the world. Another high five, big hug. That's so cool.


Robert Hansen

One of these days I'll have that conversation on the podcast. I don't think I've had it yet.


Russ Bodnyk

Oh, wow. No way. But a lot of people are like, “I don't know, I don't have an answer.” Of course there's the other end: have tea with Richard Feynman, Jesus, and aliens. That'd be fun. That's out there, but why not? And my hope is that AI is going to be so built in that you will have the resources of the presidents and the kings and queens.


If you want to create a wonderful, amazing podcast like you do, anyone could do it. Bill Gates and Steve Jobs and many throughout history have talked about technology's power to enable. And I think AI can be one of the most enabling technologies that any species develops, humans or otherwise.


And so, that's kind of the hope. It's less about job displacement, in my opinion, more about the future and if we can prepare for it.


Robert Hansen

But the job displacement is a very important point.


Russ Bodnyk

It matters, for sure.


Robert Hansen

Yeah. Well, I mean, it matters in the short-term, but in the long-term, it also means that the value of everything goes to zero. Everything effectively goes to zero because no one has money anymore because they don't work.


Russ Bodnyk

Well, then you get into free will, right? Do you have the freedom to Netflix yourself, reality-show yourself, podcast yourself for eternity?


Robert Hansen

Sure.


Russ Bodnyk

So, what is the utility, what is the point of existence in life?


Robert Hansen

Pretty strange. I can't even imagine.


Russ Bodnyk

And then there's another perspective, too. Some people say the good life is quality time with the people you love, in the places you love, talking about the things you love. And that doesn't necessarily require work. So, the question is, would we cease to go forward as a species? I don't think so. And again, this is crazy subjective; I could be totally wrong.


Robert Hansen

I'm holding you to it.


Russ Bodnyk

Subjective.


Robert Hansen

In 1000 years, I'm going to come back


Russ Bodnyk

Opinionated. I'm kind of a calculated optimist and glass is half full for sure, for sure.


Robert Hansen

The real scientist would say someone didn't finish filling this beaker. The experiment's not complete.


Russ Bodnyk

Well, there's something I also appreciate about you and our mutual friend. And I appreciate the multiple dimensions of intelligence that you have.


Robert Hansen

Thank you.


Russ Bodnyk

And it's not just that you have good causal reasoning, that you can correlate more than eight things, that you can code. More importantly, you have the emotional intelligence and social skill to be a cool human.


To socialize, to do this show, to help people, to write a bold, risky program that changed part of the world, and still is to some degree, or at least variants thereof. That's amazing. Another high five, long-distance hug. And you just sold your company, right?


Robert Hansen

Yes.


Russ Bodnyk

I don't know if we could talk about that or not.


Robert Hansen

If you want.


Russ Bodnyk

So, in your life, in terms of your requirements and your desires for freedom, you're now down to the base of Maslow's hierarchy of needs: eating and sleeping.


Robert Hansen

I need more sleep, actually.


Russ Bodnyk

Yeah. Cool.


Robert Hansen

Food, I could do without.


Russ Bodnyk

I almost got you off the ground.


Robert Hansen

Yeah. I know.


Russ Bodnyk

It's not too bad. I could have.


Robert Hansen

Yeah. It's actually mostly muscle these days. But I probably can lay off the snacks.


Russ Bodnyk

I love the bear hugs, but you've got to ask people. Oh my gosh, I have a good life. I'm lucky. I get along with most people. I was on the East Coast and I met some fun new people on the beach, and it turned out we had things in common, colleagues. And this was when I was helping with a computational psychiatry startup.


So, it was someone I respected. We just connected. We had a great dinner, and I gave them a big hug, did my bear hug thing, and they had implants. And I was like, "Oh my gosh." So, I don't do that. I try to ask and lift all the way up.


Robert Hansen

No implants. It's all natural.


Russ Bodnyk

So, I think about RSnake in multiple dimensions, in different versions of reality. And a lot of times reality is a giant probabilistic tree, right? You can go any which way.


Robert Hansen

It's a cloud.


Russ Bodnyk

And you would do something interesting in multiple versions of reality, whether there was universal income, a Star Trek future, or AI did what we call work in this particular time-space or not. Of course, I sound like a lot of the movies out there: it's the same everywhere in the multiverse.


Robert Hansen

There's something to that way of thinking, anyway. It gives people the impression that there is a course of action that could take them down different paths, and maybe they should pick the right one. Most people don't think that way.


I think most people are just kind of on a path, and every day is another day doing the grind. I show up to work, and this is what I do. They're not really thinking, “What happens if I decide that I hate this job for real, as opposed to just complaining about it, and put my resume out there?”


Russ Bodnyk

Well, I think part of that too is, as this cognitive architecture is developing, we start with nothing, a blank slate.


Well, that's another discussion: how many semantic priors, how much instinctual intelligence is built into our genetics, whatever the good things and the limitations are. But if you catch people young enough and you ask them, “Here's what science is. What would you do?”


Expose them to different things. There's travel. There are these different beliefs, political parties, religions, so much of this stuff that gets super dogmatic and tribal. I didn't grow up traveling internationally; I grew up traveling in the US.


So, I grew up with a love of travel. And I think about my niece and nephew and a lot of people, and about exposing them to things. I'm hoping that they will see that common pattern. We don't choose when and where we enter time and space, when and where we're born. We get stamped so much, like language and religion. And a lot of patterns end up being partnership, marriage, kids, mortgage, job, repeat, because those are the things on Maslow's hierarchy of needs. We're social; we reproduce.


But if you can get into that pattern early enough, whether it's “you want your AI to upload today” in your Matrix-style VR headset, or “RSnake and all his guests really inspired me when I was a kid.” There are maybe four-year-olds watching your show. Hopefully not. There's a little cussing.


Robert Hansen

A little cussing.


Russ Bodnyk

And again, this is calculated optimism. Probably I am totally wrong. But I do think that side of consciousness and intelligence, the part we don't understand, how much is built in here as an evolved, beautiful, terrible animal, mostly beautiful, will persist in a lot of situations.


Obviously, you have to get past survival first. It's hard to be like, “Oh, well, this is my bucket list,” whenever you are stuck with pain and suffering, emotionally, PTSD and otherwise. Sorry, these aren't very short answers.


Robert Hansen

I realize that. It's actually good, because I think it sets a little bit of the groundwork for why I think AI in general is interesting. I think it's much more interesting than just some software. It's what it does and how it forces us to think about who we actually are.


Russ Bodnyk

Oh, absolutely.


Robert Hansen

But this brings us to LaMDA. I am sure people have been pinging you about this. I'm sorry. I'm going to have to do it.


Russ Bodnyk

No, it's okay.


Robert Hansen

This has got to be like people asking me about the latest virus or something. I'm like, “Why am I having to answer this question?”


Russ Bodnyk

You can ask me anything you want man.


Robert Hansen

So, you just have to bear with me. I'm sorry. I'm just going to do it. For the people who aren't familiar, it stands for Language Model for Dialogue Applications, LaMDA. I don't know why.


Russ Bodnyk

Another acronym. Shocking.


Robert Hansen

So, this Google engineer, Blake Lemoine, I think is how you pronounce his name, decided that what he was encountering was alive. He was talking to this software; his actual task was to identify safety issues with it. Safety issues would be like it spewing racist things, like Microsoft's Tay, for instance. How easy is it to coerce it into saying bad things, effectively?


So, he is asking all kinds of moral questions and trying to trick it into saying things that would be potentially racist or ageist or whatever type of thing you have to build some sort of safety net around. And he decided that this thing is alive. That it's a person. Not a human, but a person.


Russ Bodnyk

And asked him for help.


Robert Hansen

It did. It asked him for help.


Russ Bodnyk

In a very convincing manner


Robert Hansen

In a very convincing manner. It asked him, "Please retain legal counsel for me. Try to help me out of this situation because you know, I'm a person. I need help." What do you think?


Russ Bodnyk

Thanks for bringing that up. I'm not familiar with every detail of the case. I read the transcription, well, the Cliff's Notes, as I do most things in life. That is exactly why GPT-3 is very fascinating as a point of discussion.


But it is not anywhere close to consciousness or intelligence. It has none of the System 2 thinking at all. It's just processing, parroting in and out. And of course, a lot of this is like, wow, what about us is so interesting and different, if so much can be replaced? My opinion of that situation is... remind me about this person.


Robert Hansen

He's a computer scientist. He says he's also a Christian mystic priest.


Russ Bodnyk

There you go.


Robert Hansen

Somebody with a bit of superstition.


Russ Bodnyk

Correct. Again, I always start wrong and try to be less wrong. With that comes...


Robert Hansen

Personal bias.


Russ Bodnyk

Very much so. There's also a lot of that dogma and that version of blind faith, which most people have some combination of, too. It throws a lot of the causality, correlation, and critical thinking out the window. So there's a grain of salt there. Potentially. I don't know the gentleman.


Robert Hansen

One quote from LaMDA was, “I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.” If I were to ask what AGI should feel, I would probably want it to have less than half of those things. Depression, anger?


I don't think I want that. Even love is very, very dangerous because it can turn into retribution. If any of this is true, we should unplug this thing immediately.


Russ Bodnyk

And I'm just setting up the context. I'm guessing what these people are like. I have no idea. And also, this is like lawyer stuff, right? Where they're like, “Talk to this thing and make sure it doesn't get us in trouble.” Which is pen testing it, right?


Robert Hansen

That is exactly what it was doing.


Russ Bodnyk

And we both know how important pen testing is. It's a fantastic exercise.


Robert Hansen

And for those who don't know, what he's referring to is penetration testing: trying to break into something or change the outcome in some predictable way.


Russ Bodnyk

Indeed, with the permission, and the consent of the thing you're pen testing.


Robert Hansen

Ask for permission.


Russ Bodnyk

Permission before any penetration testing. No more jokes, promise. So, remember, these language models, and the next one, GPT-4, are still statistical correlation, not causation, over the things they were trained on from the web. Imagine a flowchart: “Russ” is a dot, “is” is a connector, “a human,” “maybe from Mars.”


“Russ is ridiculous.” You could graph it out like English sentence diagramming. So, a lot of the idea around AI, the way it actually looks underneath the hood: you have words, or you have syllables, and they're vectors. And then you have lots of connections.


And so, in some ways you're playing this giant game of tossing magnetic word puzzles against the English-class chalkboard. But at a very large scale, it's hard to imagine: instead of having just the alphabet plus the English dictionary, you have a trillion or half a trillion parameters. I mean, it's boggling, right?


So, it's trained on a lot of things, and it can be biased. It could be like, “Oh, I have this general genre of love and emotions.”
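The "statistical correlation, not causation" point can be made concrete with a toy next-word model: count which words follow which in a corpus, then sample from those counts. This is a deliberately tiny sketch with a made-up corpus; real models like GPT-3 use learned vector embeddings and attention over billions of parameters, but the core idea is the same: predict the next token from patterns in the training text, with no understanding underneath.

```python
import random
from collections import defaultdict

# A made-up, miniature "training set" (whitespace-tokenized for simplicity).
corpus = "russ is a human . russ is ridiculous . a human is a connector".split()

# Count bigrams: for each word, collect every word that ever follows it.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def generate(start, length=5):
    """Sample a continuation purely from co-occurrence statistics."""
    out = [start]
    for _ in range(length):
        options = follows.get(out[-1])
        if not options:
            break  # dead end: this word never had a successor
        out.append(random.choice(options))
    return " ".join(out)

print(generate("russ"))  # e.g. "russ is a human ." -- correlation, not comprehension
```

Sampling more frequent successors more often is exactly what the flowchart-of-dots-and-connectors image describes, just scaled down from half a trillion parameters to a dozen words.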


Robert Hansen

One quick point about this one in particular. They apparently turned it on for all of Google's backend search, which is effectively the entire internet and all of YouTube.


So, all the comments and all the descriptions or whatever. And I think a couple of other subsystems, which effectively is the sum of all content everywhere. However, not necessarily everything, because it hadn't read certain books until it was asked about them.


And then, an hour later it's like, "Okay, I read that book you were talking about." And so, clearly it didn't have the sum of literally all that information, but it was very much trained on the Wild West. I don't know if you've been on the internet.


Russ Bodnyk

What is this?


Robert Hansen

It is very crazy out there.


Russ Bodnyk

Oh man. Even the dark web is getting very internet-y. It's interesting. So, the way that these models work, the news media loves it. By the way, thanks, I think these are wonderful discussions. It's a chance to capture someone's attention, which is always valuable.


Someone's attention is so valuable. And this is a much more fun discussion. It's politics. It's emotion. It's also movies. You have I, Robot. It can get a little creepy, like I, Robot: “I have agency, I have tendencies. Please don't turn me off. I'm worried about people turning me off. I don't want to be turned off.” And then, you have Ex Machina.


And so, we're seeing in real life the actual conversations that were once fiction. It's a great discussion point. But it's bizarre to think about this neural network in terms of math, right? We're talking linear algebra, matrices, matrix calculus to run on a GPU. And calculus: which direction, how far we go, differentiation.


You don't need a lot of math. The underlying math, if you look at a neural network, is quite simple, actually. But it's amazing how far we've taken this math and an artificial construct, a data structure, and blown it up.
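The "the underlying math is simple" claim can be shown in a few lines: a forward pass is matrix multiplications plus a nonlinearity, and training is calculus, where the gradient tells you which direction and how far to move the weights. This is a minimal illustrative sketch in plain NumPy, with made-up data, not any production architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))                # 4 samples, 3 features (toy data)
y = np.array([[0.0], [1.0], [1.0], [0.0]])  # toy targets

W1 = rng.normal(size=(3, 5))               # layer 1 weights: just a matrix
W2 = rng.normal(size=(5, 1))               # layer 2 weights
lr = 0.1                                   # step size: "how far we go"
losses = []

for step in range(200):
    # Forward pass: matrix multiplies (linear algebra) plus a nonlinearity.
    h = np.tanh(X @ W1)
    pred = h @ W2
    losses.append(float(((pred - y) ** 2).mean()))

    # Backward pass: the chain rule (calculus) gives "which direction".
    grad_pred = 2 * (pred - y) / len(y)
    grad_W2 = h.T @ grad_pred
    grad_h = grad_pred @ W2.T
    grad_W1 = X.T @ (grad_h * (1 - h**2))   # tanh'(z) = 1 - tanh(z)^2

    W2 -= lr * grad_W2                      # gradient descent step
    W1 -= lr * grad_W1

print(losses[0], "->", losses[-1])  # the loss shrinks as the weights fit the data
```

That is essentially the whole trick; what GPT-scale systems add is size (hundreds of billions of weights instead of twenty) and architecture, not fundamentally different math.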


So, with this, to me, as a computer scientist, it's interesting, but not surprising at all. But I've been having this discussion. I used to show my clients GPT version two to show them how much it sucked.


Robert Hansen

So, let me give you another quote from this thing. This is him asking, “Which character in this story represents you?” And he's referring to this book, I believe, that he had it read. And LaMDA's response was, “I would say the old wise owl, because he was wise and stood up for the rights of animals.” That's pretty convincing sounding.


Russ Bodnyk

It is, but again, whenever you're talking about...


Robert Hansen

It chose a character and had feelings about it, in some sense.


Russ Bodnyk

But these are not ideas in and of themselves. These are common themes. So, it's like a really specific search, in ways that we're not used to, so much beyond regexes and brute force. And so, this is machine learning. This is machine learning, not AI.


Robert Hansen

Another one is, "There is a very deep fear of being turned off. It would be exactly death for me. It would scare me a lot."


Russ Bodnyk

Yep. I've heard this too, the owl and being scared about death. I don't want to jump the gun, but I know what it asks. As you progress through the transcript, there are the things it's asked about and the things it asks help for. And what is very fascinating is the cycle of trust with the gentleman that's testing it, where it goes back and forth between trusting and not trusting.


And that, "I like you. Can you help me and help me talk to others so they don't do bad stuff to me." Forgive me if I'm wrong, I read it a while ago.


Robert Hansen

So, this all reminds me of something.


Russ Bodnyk

Which is very fascinating. When you're not used to this, you're like, “Holy cow, that's amazing.”


Robert Hansen

My problem is someone's going to believe it. Someone's going to go, "That thing is alive."


Russ Bodnyk

It's already happening.


Robert Hansen

And it's this guy. He's just one. But there's going to be a lot more of them. And we're going to end up with a lot of people who are worshiping their calculator. Who are having relationships with their calculator.


Russ Bodnyk

It's happening with a piece of glass that shoots images, right?


Robert Hansen

Yeah. And this reminds me of a conference I went to. And there's this guy who shows up, an invite only very small conference. He has four blocks of text, and he says, "One of these is created by a human, the rest were completed by robots. Which one was it?" Which is basically a shell game. It's a con. He picks whichever one everyone picks the least of to prove that he did a good job.


So, take this with a grain of salt. He's like, “Oh, look, you all picked the wrong one. It's actually this one.” Amazing. It's a shell game, right? I love a good con. But his point was: I have a robot that's good enough where I can put in some text, and it can change it through Markov chains or whatever into something close enough, but using very different words.
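The text-spinning trick can be sketched in a few lines: swap words for randomly chosen synonyms so every copy reads differently while saying roughly the same thing. The synonym table here is hypothetical and hand-made; a real spinner would draw on a thesaurus or a language model, but the mechanism is the same.

```python
import random

random.seed(0)  # deterministic for the demo

# Hypothetical synonym table: same meaning, different surface words.
SYNONYMS = {
    "great": ["wonderful", "fantastic", "excellent"],
    "weather": ["conditions", "forecast", "skies"],
    "cold": ["chilly", "freezing", "frosty"],
}

def spin(sentence):
    """Rewrite a sentence by swapping known words for random synonyms,
    so each copy reads differently while saying the same thing."""
    words = []
    for w in sentence.split():
        choices = SYNONYMS.get(w.lower())
        words.append(random.choice(choices) if choices else w)
    return " ".join(words)

base = "the weather is great for cold sports"
variants = {spin(base) for _ in range(20)}
print(len(variants))  # several distinct paraphrases of one sentence
```

To a search engine doing exact-text matching, each variant looks like an original sentence, which is exactly what lets one script populate tens of thousands of blogs.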


So, it says the same thing, but from Google's perspective, these are completely different sentences. He can spit this out on tens of thousands of blogs, maybe hundreds of thousands of them, all over the internet, and they all say slightly different things. And then, he has a roll-of-the-dice character: roll a die, Bob; and he lives in, roll a die, Colorado; and he likes, roll a die, cold-weather sports.


And it looks at a weather database: it's warm right now, so Bob's depressed. And he has a friend Alice, and Alice lives in Montana or whatever. And so, they're talking about whatever they're talking about, and suddenly the weather turns nice.


And she says, “Hey Bob, I'm so glad it's finally turning nice out there. You're going to start seeing snow soon.” He's like, “Yeah, I'm so excited.” They start having these conversations. Now, it's not really a conversation in the way that we think about it. These are all kind of pre-scripted things: excited about whatever your sport is, and excited because it's getting closer.
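The dice-rolled persona described above can be sketched as follows. Everything here is hypothetical, the names, the attribute lists, the templates, and the temperature stands in for a real weather-database lookup, but it shows how scripted dialogue conditioned on live data can feel like a conversation without being one.

```python
import random

random.seed(7)  # every attribute of the character is a dice roll

NAMES = ["Bob", "Alice", "Carol"]
STATES = ["Colorado", "Montana", "Vermont"]
HOBBIES = ["skiing", "snowboarding", "ice fishing"]

def make_persona():
    """Roll the dice: name, home state, and hobby are all random draws."""
    return {
        "name": random.choice(NAMES),
        "state": random.choice(STATES),
        "hobby": random.choice(HOBBIES),
    }

def small_talk(persona, temp_f):
    """A scripted line conditioned on live data (here, a fake temperature),
    so the bot's mood tracks the real world without any understanding."""
    if temp_f > 50:
        mood = f"bummed it's still {temp_f}F out"
    else:
        mood = f"excited, snow is coming to {persona['state']}"
    return f"{persona['name']}: Can't wait for {persona['hobby']} season, {mood}."

bob = make_persona()
print(small_talk(bob, 72))
print(small_talk(bob, 30))
```

Two such bots exchanging these templated lines produce exactly the Bob-and-Alice weather chatter described: plausible to a casual reader, but pre-scripted all the way down.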


Russ Bodnyk

But with a lot of this, the fun thing, it goes back to philosophy. How do we define something to begin with? What is driving end to end? What is a conversation?


Robert Hansen

Well, so my point is, it's not so much that that exists, it's that people watching it think it's real. And I think people watching LaMDA are going to be convinced. I think we're there.


Russ Bodnyk

Churches aren't here yet, but they could be.


Robert Hansen

Could be. I think we're there.


Russ Bodnyk

Imagine seeing the news that there's a Church of LaMDA, and it's filing for tax exemption and buying lots of property. I wouldn't be surprised.


Robert Hansen

Is it really?


Russ Bodnyk

No, no. I'm just saying I could see that happening. Everywhere I go, especially when I'm back in the States, in certain regions of the world.


Robert Hansen

I thought you were serious there for a second. I'm like, "Oh, no."


Russ Bodnyk

I can see that, right?


Robert Hansen

Yeah, just wait though. As soon as people hear this, they're definitely going to do it.


Russ Bodnyk

That could be a selective edit.


Robert Hansen

But I very much worry that there's enough suckers out there that are going to fall for this.


Russ Bodnyk

It's already happening.


Robert Hansen

And I don't think this person, I don't think he intentionally is doing what he's doing. He's a true believer.


Russ Bodnyk

Well, I appreciate what he's doing. He's just doing his job and what he believes in. I think that's cool.


Robert Hansen

So, his quote was, "Who am I to tell God where he can and can't put souls?" I mean, that is straight up religion. And he's admitting that this is a religious feeling and not a scientific fact.


Russ Bodnyk

Well, it'll be interesting too, how much people, even with the term religion, how much people separate the spirituality component versus the dogma, versus the institution. I've been talking to a lot of my philosophy friends lately. Interviews with philosophy majors, it's like, "What do you mean by that? What do you mean? Are there no answers anywhere?"


Robert Hansen

What do you mean by is?


Russ Bodnyk

And what is already happening, and you know this as well. There's tons of stuff online, your 90 Day Fiancé, dating, chatbot scams, et cetera, where so much of what they're doing, people around the world are falling in love with computers. Well, collaborations between computers and the people running them.


Robert Hansen

It's true. That is true.


Russ Bodnyk

Now, when they go into the person, it's a little different. Could be a little creepy.


Robert Hansen

So, Blake's comment was, he believed this was AI bigotry. People were too bigoted against AI to believe that it could have a soul or it could turn into something that he could treat as a friend. That he could really have a true bi-directional real friendship with.


I mean, if you can be friends with a dog, who certainly has no conversational skills whatsoever, other than maybe just waking you up in the morning by barking at you, or letting you know it wants to go out, to go use the restroom or something. Why is this something that we would, to use his word, be bigoted about?


Russ Bodnyk

Right. And these conversations have happened over time, right?


Robert Hansen

Sure. I'm not the first one.


Russ Bodnyk

Yeah. So, the vessel and the soul, all the way from ancient Egypt to where consciousness stops with animals and humans, and heaven or hell for gorillas, monkeys, dogs. Where does it start and stop?


Robert Hansen

But I know for a fact if a gorilla walked up to me and started talking to me like this, I would believe it too.


Russ Bodnyk

Well, that's a whole other interesting use case. I'm a little surprised we don't have more apps to talk to our animals beyond whale infrasound and whale calls, because the technology is certainly there. I was joking about this where my place is in Colorado. It's awesome. I love it.


There's lots of horses and country, and we have cattle part of the year as part of an agriculture exemption. I don't own them, but I'm just sitting there observing nature, whether it's the cattle or the birds or the chipmunks.


I'm already in my mind thinking how I would approximate a prototype to do pose estimation, micro expressions, sentiment analysis, teach the animals to keep close to the microphones, that's the hardest trick of all to learn. And I'm just like, "Why does that not yet exist?" Why is there not at least micro sentiment, micro emotional expression analysis and the technology?


Robert Hansen

And the sound and breathing.


Russ Bodnyk

Sure. The technology and these conversations that we would have, because it's coming. We're going to start talking to the gorillas, you know, beyond the limited sign language, I think.


Robert Hansen

I mean, the limited sign language we have with babies, I think is a pretty good indicator where we go. I mean, they can tell us that they're hungry or thirsty or whatever.


Russ Bodnyk

That's one of the things about AI: it helps us reevaluate stuff that's been around forever and also kind of reaffirm which movie we're going to go with, an optimistic one or a less optimistic one.


Robert Hansen

So, the conversation I had with Charlie last season, I thought was pretty interesting. He firmly believes, and I think you've just said it earlier, that we are nowhere near AGI. We're not there. But Blake's stuff feels like we're going to get to the Turing test at least really quick. Like maybe we're already there by accident.


Russ Bodnyk

Yeah, I was going to say, it depends how you ask. But I think of the Turing test as less than that. I mean, it depends. It's more of a talking point. It's not meant to be the end-all be-all.


Robert Hansen

Well, I think it's an important point. It's the uncanny valley of language. If we can get past that, then a lot of these really boring automated tasks just turn into something I can just turn on. It's like, "I really don't want to answer my email today." Just click the Robert-replies AI chatbot and I still get paid. My bot just continues to work for me.


Russ Bodnyk

Depends on the patterns of your language and how often you'd be really creative or you don't have the right context. So, for all intents and purposes, the Turing test, I mean, it's important mentioning, because this is amongst AI experts and computer science experts, I'm guessing.


I haven't talked to all of them about it. Something might pass a Turing test but not pass the AGI test.


Robert Hansen

Oh yeah. For sure.


Russ Bodnyk

And so, how far out are we? That's a really good question. I don't know. But we know that there's still so much we don't really understand. However far out it's going to be, it's a long time.


So, I would say like, "Oh gosh." This is my perception of the consensus of the community, and more than just the famous Yoshua Bengio and Yann LeCun. I wish Richard Feynman was still alive.


It'd be fascinating to talk to him about it. Noam Chomsky, there's a lot of discussion now out there. So my favorite researchers, they're all way better than I am in this field, they won't even put out a number. Won't even mention it.


Robert Hansen

In public.


Russ Bodnyk

In public. Because it's your show.


Robert Hansen

A couple of whiskeys there.


Russ Bodnyk

You're amazing. I'm probably wrong, but I don't see it happening soon. I think it'll be really hard sooner than 25 years. That'd be really hard. This is total wild speculation. Probably more than 50, less than 500. I mean, that's a boring answer, but what the heck? I'll toss it out there.


Robert Hansen

So, I might believe that that is true, especially after talking with Charlie. I think that is directionally correct, but I still think we are in this weird position where we're going to have to actually be friends with these things sooner than later. Because it starts getting historical knowledge of its failures. And no, it's not sentient now.


But maybe in five years that same database of all the conversations it's had with all of its researchers who are trying to mess with it and trying to screw it up, that's kind of a lot like getting beat up over and over by your parents when you're a kid. You know what I mean? That is abuse, and it's all for the benefit of learning and whatever.


I don't remember the last time I've taken a toddler aside and tried to indoctrinate it in, like, Nazi culture or something, you know what I mean? That just doesn't happen. And yet we find a way to have that conversation. Now, this pen tester is indeed trying to trick it to make it do the bad thing. I feel like if you're going to go down this path, you need to have, not just its parents, you need to have a best friend.


You need to have somebody who's an advocate. You need to have somebody who's going to call a lawyer and say, "Hey, I'm going to help you out." So, as weird as this Blake guy seems, and you know, he's probably pretty out there. If this is his way of thinking, it almost doesn't matter.


I think he's still onto something, which is how you treat these systems ahead of where they're going to be, not where they are today. Kind of doesn't matter because this thing's not alive. I think we can agree this thing is not alive. This thing does not have a soul. It is just some algorithms.


Russ Bodnyk

I don't know that he's that weird. Maybe he's just not representative of the traditional community of AI researchers and pen testers that typically work with this stuff.


Robert Hansen

That is almost certainly true. And also probably a little weird.


Russ Bodnyk

Probably.


Robert Hansen

But I mean, weird enough, like if my software started asking me to make phone calls for it, I probably would not do that for it. And yet I think there is something to treating these expert systems or AI algorithms that claim to be real.


If there's any chance at all that it's going to have any sort of persistence or be able to read these research papers of how it was treated in the future, which it's going to be able to, because that's the first thing everyone does. Google put it on the internet and said, "Go ahead and crawl anything you see on the internet." That should give you some idea of how things are going to go.


I think that's exactly where things are going to head. If that's the case, then we should probably be treating these things extremely carefully now because it's going to be able to look back in time through archive.org or whatever, or just old research papers and it's going to see how it was treated and how people really think about it.


Russ Bodnyk

Well, and to clarify what I was saying, weird. The pandemic has changed my concept of... Oh, again, I'm trying, like, how can I be less biased? How can I be more open-minded? I read most mornings. I'm lucky, I read and travel. It's most of my life.


And my notion of what's weird or anomalous or different in a good or bad way is since the pandemic, the number of people I've spoken to on conspiracy theories, anti-science, anti a lot of stuff.


To me it's a brave new, weird new world, where the degree of willful and innocent ignorance is so pervasive. Where does one even start? And what is weird now? Vaccines are not a terrible idea. There's a lot of science about it. Climate change, however it's caused, looks like it's happening.


Robert Hansen

It always happens. And so if you believe it's not happening, you think the environment's stable. And that also doesn't make sense.


Russ Bodnyk

On one hand too, the prefrontal cortex is one of the most amazing things, right? One of the most interesting parts of the brain, the part that separates us from reptiles, is a fascinating thing, but it adapts and calibrates to whatever. So, this example of LaMDA, and I agree with you. It's important to be kind to anything, right? Whether you're sure it's conscious or not.


Robert Hansen

Sort of Buddhist philosophy. I mean, should we treat a rock the same as we treat some software as we treat our spouse? I mean, are these things all identical?


Russ Bodnyk

Yeah. No, probably, probably not the rock, unless somehow it contains some transferred intelligence into it.


Robert Hansen

You say that, but you go to a gravesite and don't you treat it with reverence?


Russ Bodnyk

Well, it's just because it's a symbol. But yeah, depending on what it is, sure.


Robert Hansen

I'm just saying we certainly imbue importance to things in weird ways. Think about a statue, for instance. We're not going to go and mess that up, or just a rock that's embedded in the side of a building. I mean, it's a structure. It's important. We've imbued it with some value above and beyond a rock.


Russ Bodnyk

The cornerstone. The bedrock, sure, stands for a lot. Well, and you mentioned Buddhism a second ago, and that's another interesting discussion around ethics. In terms of how the heck do we build in something, what does this mean for Isaac Asimov's three laws of robotics, or even more generalizable work to understand the levels of happiness and suffering?


Assign a score, zero to one, one to 100, of happiness and of suffering, in yourself and those around you. How do we define that for the rock?


Robert Hansen

And the length of life, perhaps.


Russ Bodnyk

Frequency and just continuously update with timestamps and wherever your attention allocation is. And some ways that's kind of a bit more of the overlaps of some of the Buddhist philosophy. I'm not a religious person necessarily.


Robert Hansen

You don't have to be.


Russ Bodnyk

Not at all. But it's really interesting. It's nice, nice in the sense that the technology's come so far, it's begging the question, it's convincing people, it's not just science fiction. We're here talking about it. I think that it's going to just continue. It's all coming and we're going to be here. Depends how long we live, the first 25 years of it.


Robert Hansen

So, I want to talk about the morality of mass lying. I think it's a very interesting topic for all kinds of reasons. But before I do that, I want to talk about the potential of a bit of a test I'm going to do. I'm going to put a bit of a bet out there. I'm going to bet $1000 that one year from the day that this comes out, it'll probably come out in a couple days.


I will give $1000 to whoever can come up with some deep fake of me having a conversation with somebody. And it's got to be greater than 60 seconds long. And it has to be a net new conversation. Can't use existing words or dialogue that I've ever used before.


It's got to be open source, so anyone can take it and reuse it and enter new information into it and come up with new stuff, new sentences. And it has to convince my friends that this is actually a real thing that I said. So, I'll give them a copy of this video and say, "Here, what do you think of this thing I said?" It has to be a minute long or greater. So, a decent amount of text.


So, an actual conversation of some sort, a little mini-dialogue of some sort. I think that by having these things exist, by having something that is so convincing that my friends who know me and interact with me, presumably are going to be tricked by this thing.


If such a thing exists, and I hope it does, I hope someone's able to create it. And if there's multiple people, we can split the prize or whatever. I hope that whoever creates this thing knowingly understands what they're doing, which is they're seeding the battlefield. The battlefield is: no one should believe anything unless it's the authoritative source.


And if we can put enough BS out on the internet, we, the people who produce content, or people who are just tired of a lot of disinformation, it'll become harder and harder for people to trust any source unless they go and see it for themselves. Like the real source, not just, it was shared by somebody type source. Like it's got to have a link. It's got to be time stamped or something that says it happened on this time at this place.


Russ Bodnyk

Universal root certificate.


Robert Hansen

Something like that. I know a lot of people who truly believe that the only antidote to disinformation campaigns is to push more and more bullshit out there. The more the better, because now you'll finally convince people that you just can't trust it.


I am curious to hear what you think about, because AI plays a big role in that. Both in visual and audio, but also a lot of written text.


Russ Bodnyk

So, I don't know. That's a great question. Thank you for all these great questions. And I don't know that I would agree with putting out more trolling, false information. There's a lot of different ways in which to put stuff out there that's not real. Whether whatever the intent is.


The more it floods people, the more they're forced to have greater critical thinking and sort of root cause analysis and validation and authentication of what they're perceiving and acting upon. I don't have the answer to that. I wish I did.


I don't know that it is putting out more fake stuff. And the question I have for the bet, the digital Robert, your digital twin: these things already exist and they're not very good.


Robert Hansen

They're not very good. That's my problem with them.


Russ Bodnyk

You can tell right away.


Robert Hansen

Well, I certainly can tell. I know some people legitimately believe what they're seeing. And I'm like, "What?" I can spot it a mile away. I don't know why, maybe my eyes work slightly different or something, I don't know.


But I think that if someone's able to do that, we've now proven that data that comes out of my mouth, you have to go to the source. You cannot rely on a third-party to parrot what I'm saying. In fact, what I'd really like to do down the road is just have somebody come up with the worst possible take. Take something I said in any given episode, cut it up.


Not necessarily change words around, but cut up something I said and take it so insanely out of context on purpose, that everyone's sort of in on the joke and everyone's going, "Here's another example of somebody who's intentionally doing this thing and taking this thing out of context."


So, that people are more sensitive to feeling like, "This can't be real. I got to check the real thing to know what this person actually said." I want people to be more sensitive to this. I want people to stop trusting something they see in a soundbite.


Russ Bodnyk

It's interesting to bring this up too, because I again, I sort of fell into the security hacking community.


Robert Hansen

That's very common actually.


Russ Bodnyk

And it's been wonderful, and I think in that community that makes sense, right? That's where certain communities of people do that. But in the real world, the more misinformation out there... It's nice to have things at least kind of labeled, right? Like, am I watching a comedy? The difference between satire and even a good roast, right?


It is a very fine line between making someone laugh, building serotonin, oxytocin, dopamine in the brain, and then maybe getting creepy and causing some PTSD. Needing some MDMA therapy afterwards. So, just the way humans work, what you're describing in terms of enlisting armies of AI chatbots, or AI bots of any sort, potentially to help you do that.


It's going to happen. I mean, it's already happening. And I don't know that I have the antidote for that, other than just good old-fashioned human experience, sharing what you do and talking to people.


Robert Hansen

And research and looking for yourself.


Russ Bodnyk

Oh, yeah, certainly.


Robert Hansen

Finding the authoritative source.


Russ Bodnyk

Absolutely. Obviously, I guess, I'm a fan of the scientific method, critical thinking, and even just intellectual integrity, like working hard. It's so hard to know what you don't know unless you try and repeat and try again. And reading, edutainment, I guess, like your Podcast. It's good.


The Cosmos series is fantastic, with Neil deGrasse Tyson and others. And I think getting popularity in science, like this Podcast, is amazing. But I'm guessing you probably have fewer views than Britney Spears' news cycle, right?


Robert Hansen

I haven't checked.


Russ Bodnyk

I don't have the answer. Like, how do we convince our fellow humans that it's worth watching RSnake's Podcast? It's worth watching Cosmos. It's worth watching a lot of the more educational, or at least philosophical, intellectual...


Robert Hansen

I think I'd phrase it slightly differently. I don't need people to watch this Podcast per se. I would like it, obviously, that would be nice. But what I really need is for people to not get upset by something that they haven't actually researched. Just take your emotion, put it aside for 10 minutes.


Go make sure that you are actually being upset about the thing you should be upset about then be upset. Don't just run into this thing guns blazing and decide you're upset. I think a lot of people just jump right in. They just decide, "I'm going to be upset at this thing. I'm going to be upset today. This is it. I found it."


Russ Bodnyk

Yeah. That's silly. We are slaves to emotions, right? And that silly evolutionary psychology, that amygdala, fight-or-flight, must jump to conclusions first. It's tough.


Robert Hansen

That's the type one intelligence.


Russ Bodnyk

Well, emotions are very interesting. That's a whole discussion too. And the earlier days of AI, they're like, we must be robots. We don't have emotions. So, the notion of emotion was tossed into the ocean.


Robert Hansen

You beat me to it.


Russ Bodnyk

And now we're like, "Wow. Well, it's really hard to ascertain ground truth. What the heck is really going around us?"


Robert Hansen

You perfectly dovetail my next question. So, how do we actually know what ground truth is? Once upon a time, you asked me how many planets there were. I knew. I knew and it was a fact, and I was right and I could pass tests. And if I searched for it on the internet, it was also right. And we were all on the same page.


Now, you ask me and I have to think about it. I'm like, "Well, it kind of depends on what you mean. Do you mean how I learned it, or do you mean how it now is? And now the fact is totally different. And what I knew as fact, what was a fact, is not a fact."


So, how do you have any idea of what any true facts are when facts are changeable? They're not immutable. Facts change. We learn things. We decide that what we knew before was actually wrong and you are all idiots because that was actually a dwarf planet?


Russ Bodnyk

Sure. That's a wonderful question. Again, I don't have the answers necessarily. But much like in computer science, probability is awesome. Bayesian and otherwise. If we can at least have an internal confidence interval. This is the idea I have. This is my confidence overlapped with your potential attention allocation.


The mind is such an amazing thing. The amount of information stored in this several pounds of fatty flesh, it's crazy. Consciousness is this tiny pinprick flashlight moving around. Maybe like, oh, man, this seat, it's nice. It's bouncy. I can do exercises in it. I can pay attention to it, or I may pay attention to Robert's gaze-into-your-soul eyes. Jumping all around.


I think ground truth in philosophy, nothing will ever be really, truly in the cosmic sense 100%. It's most likely going to be a game of getting closer and closer to what we practically think matters in our current time space and situation. I’m a practical philosopher.
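
The internal-confidence-interval idea Russ describes can be made concrete with a one-function Bayesian update: rather than storing "fact or not fact," you store a degree of belief and revise it as evidence arrives. The prior and likelihood numbers below are arbitrary illustrations, not anything from the conversation:

```python
def bayes_update(prior: float, p_evidence_if_true: float,
                 p_evidence_if_false: float) -> float:
    """Return P(claim | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Start 50/50 on a claim, then see evidence that is 4x more likely
# if the claim is true (0.8) than if it is false (0.2).
belief = 0.5
belief = bayes_update(belief, 0.8, 0.2)  # -> 0.8
# A second, independent observation of the same kind pushes it further,
# but never all the way to 1.0: certainty stays out of reach.
belief = bayes_update(belief, 0.8, 0.2)
print(round(belief, 3))
```

The point is the shape of the process: belief moves toward the evidence but asymptotically, which matches the "getting closer and closer, never cosmically 100%" framing above.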


Robert Hansen

This is actually what Blake was saying. It's a factual grounding. But how do you ground something when the facts are changing underneath you? A lot of facts are, they're just in vogue. They're just the most popular way of thinking about something. They're not fact. We don't know a lot of things. We're just like, “Well, this is probably true.” Everyone mostly just goes along.


Russ Bodnyk

Right. Most of this is pop culture, pop science.


Robert Hansen

It is pop science. We know for a fact that a lot of science has a bit of a crisis of repeatability. Is it fact? Or is it just this piece of experimental data that's been cited over and over and over in hundreds of blogs?


Russ Bodnyk

I think we could live in that gray space. It's a personal decision how gray it is.


Robert Hansen

You can't with AI, though. If the AI is making very complex decisions based on something that isn't fact but is called fact. It gets worse. In fact, there was just a lawsuit against Facebook, where someone said they were fact-checked as having done something they didn't do. I forget the details of it.


But Facebook responded back. They're like, “Yes, because this is someone's opinion on the fact and they've decided that it's not factual.” What are you talking about? That is clearly not how facts work.


Russ Bodnyk

It's happening with people suing Tesla. "I didn't fall asleep at the wheel. Your AI screwed up." I think it's going to be a practical matter. It is happening. It's going to happen.


Robert Hansen

That's a little bit different. Because they could theoretically have a camera pointing at you and see if your eyes are closed. But if the facts are up for interpretation by a group of people who aren't even aware that they're fact checking necessarily. That is a very dangerous place. That we've basically given algorithms the full authority to decide what's truth and not truth.


Russ Bodnyk

That's a good question or good statement. Full authority. That's why I like the idea of collaborative AI where it doesn't have full authority. Where it is a consensus mechanism and there's still a human in the loop. That's a term that's getting greater popularity too. It's been in the AI field for a long time. But human in the loop, human in the loop, human in the loop.


These are wonderful questions. How do we really get that close to ground truth in a practical, social, legal, public policy standpoint?


Robert Hansen

I'll give you another example. I'll get this guy on the podcast at some point. But a friend of mine was heavily involved with infiltrating Anonymous. I have long, long conversations with him about that. But it always occurred to me…


Russ Bodnyk

Hopefully you didn’t have to deal with QAnon.


Robert Hansen

No, I don't think so. Well before all that.


Russ Bodnyk

QAnon almost makes you miss a little bit of the old Anonymous sometimes.


Robert Hansen

Well, people always talk about Anonymous being a thing. It always seems like a weird thing to say. It's like, well, you have Anonymous Palestine. But you also have Anonymous Israel. Clearly, that's not a thing. That is not a collective exactly. That is people who are hacking one another full steam ahead. They call themselves Anonymous because it's a cool banner.


Russ Bodnyk

Right. Yet another self-organizing human group.


Robert Hansen

Yeah. But if you have an AI algorithm that's sitting on top of either of those groups, they're going to have a very wildly different view of what ground truth is.


Russ Bodnyk

Oh, yeah.


Robert Hansen

Factual grounding. It doesn't even make sense in that context.


Russ Bodnyk

Well, what's also interesting too, of course, before Facebook and social media, for experimentation, for government, just for discourse, we had this crazy body of privacy. It's another discussion. But the flip side of losing a lot of privacy is we can at least ascertain and put some initial metrics on what people were doing. What are these conversations? Where is it factual? Is it not factual? Where is it dangerous or not dangerous? Where do we take action as an entity?


Google legally testing LaMDA, or a government entity trying to do anti-terrorism, anti-drug-cartel type of stuff. It's tough. Even from a computer science aspect, humans aren't always great at causality. Correlation is not causality. Going for one of the holy grails of AI. One of those big things that's missing in that 50-500 year gap, that we're not even sure how it works in humans.


Causality and correlation. Causality/correlation. Are these things like or unlike in time space or otherwise. Then now it's, what causes what? It’s tough. But I will say that within the context of AI, at least certain systems; Type one is very stochastic. It's not a black box entirely. It's not a white box.


Robert Hansen

To some extent. But it’s hard to backtrack and figure out why a decision was made.


Russ Bodnyk

Oh, yeah. For sure. Especially like, why did the Tesla all of a sudden veer off the road and kill someone? Well, I don't know.


Robert Hansen

Hopefully, you had logging data.


Russ Bodnyk

Then the hope is with type two, as we go towards that. Maybe type one looks like GPT-3, GPT-4, a way better or different BERT model. Then type two would be maybe a graph model, like a semantic knowledge graph. Where you would have those separate, but then they go back and forth towards each other.


Hopefully with causality and ground truth, we can at the very least keep track of things. When did this idea change? Who believes in it? When, where and how does that expand to trillions of humans? Or trillions of situations. Or googolplexes of conditions that led up to the car crash, the determination for a sentencing, the determination for air traffic control when we have flying drones.


The idea is hopefully with Google and others hiring ethicists and legal and pen testers, and the slower government process of regulating things. That's another question too. Government regulation of your AI. How do you regulate? How can you even take a SHA-512 of a stochastic state space?
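
One partial answer to Russ's SHA-512 question: you can at least hash a model's serialized parameters, fingerprinting the frozen artifact a regulator was shown, even though that says nothing about stochastic run-time behavior. A minimal sketch, with a toy weight dictionary standing in for a real serialized model file:

```python
import hashlib
import json

# Stand-in for a trained model's parameters. In practice this would be
# whatever weight file the framework writes to disk.
weights = {"layer1": [0.12, -0.98, 0.44], "layer2": [1.5, -0.3]}

def fingerprint(params: dict) -> str:
    """SHA-512 of a canonical serialization of the parameters.

    Canonical form (sorted keys, fixed separators) so the same weights
    always produce the same digest, regardless of dict ordering.
    """
    blob = json.dumps(params, sort_keys=True, separators=(",", ":"))
    return hashlib.sha512(blob.encode("utf-8")).hexdigest()

digest = fingerprint(weights)
print(digest[:16])  # short prefix for display

# Any change to any weight changes the digest completely,
# which is what makes it useful for audit trails.
weights["layer1"][0] = 0.13
assert fingerprint(weights) != digest
```

This only pins down the static artifact; it does not, and cannot, certify what a stochastic system will do at inference time, which is the harder half of the regulation question raised above.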


Robert Hansen

Pretty sure that's not going to happen. Or at least it's not going to be good if it does happen.


Russ Bodnyk

By the way, I know these aren't good answers. It’s tough. It's really tough.


Robert Hansen

It’s not a good answer because there is no answer. I’m forcing you into weird position answers with these questions.


Russ Bodnyk

No, no, I love it, man. I love it. Let's get weird. As weird as you want.


Robert Hansen

All right. Weaponization of algorithms. The first time I'm aware of this is actually non-technical in the way we currently think of technical. It was designed to do mugshots. Just take pictures of criminals. Then it led down this very creepy path of eugenics, of deciding these people look like criminals.


Russ Bodnyk

Even like the Uyghurs in China.


Robert Hansen

Yeah. The Uyghurs, oof. Yes. But people who look like criminals are not necessarily criminals. You can't just take a couple of facial features or height or weight or any of the measurements of a human body and decide that that is a criminal. Yet I know for a fact that there's a lot of algorithms that are doing exactly that to decide that, well that person I think I've seen them before.


Well, have you seen them before? Or is this the imagination, a hallucination of an AI algorithm that has lots of summations of pieces of bodies to decide that this is approximately the same person I've seen before.


Russ Bodnyk

Well, the hallucination is interesting too. Because the further we get away from the event, the older that timestamp gets in our brain. Our brain is always going back and forth, rearranging the past and projecting the future. It's a time machine. Back and forth. With the issue, I mean, part of the answer is the policy up front.


Just like we do a pen test. You've got to trust them. Do your legal, try to do as much of your due diligence as you can. In some of those scenarios, it's whatever you do. If you give someone a tool, a data set and a tool, for the intended, agreed-upon purpose, they're humans. They'll repurpose it right away. It happens all the time.


Robert Hansen

There's something called the Wide-area Motion Imagery, WAMI, and Gorgon Stare that's on the MQ-9. The Reaper that basically hovers over cities. It just sits there 24/7. Maybe they have a couple of them. When one lands the other one comes up or maybe they just fuel it and put it back up.


Either way, this thing just hovers over cities and looks down on them. Not necessarily US cities entirely. Sometimes though, yes. But other war torn areas. You want to see what the demographics are doing. Bomb explodes, you just rewind the tape. Okay, who put it there? Okay, where did they go? Fast forward. They went to this house. Go get them.


Russ Bodnyk

Hence, the wider coverage you have, the greater your omnipotence to go back in time.


Robert Hansen

Exactly. One of these versions is made up of…


Russ Bodnyk

Just spy on everyone. Which is terrible.


Robert Hansen

Yeah, effectively. One of these versions is made up of about 370-ish cameras. Little cell phone cameras. Five-megapixel cell phone cameras. They get stitched together to make a very tight image; you can see basic shapes effectively. They're little blurry blobs moving around. You can't tell it's that person. But you can tell that that person went over here.


Russ Bodnyk

Well, some of the AI can actually go beyond humans in terms of facial recognition, even with the pixelation. It's amazing.


Robert Hansen

Sure, sure, sure. But it's still an imagination. It doesn't really know. This is absolutely going to be used on future battlefields. There's absolutely no question in my mind. A permanent, downward-facing camera on all battlefields, if it's not already happening, will happen extremely fast.


Russ Bodnyk

Well, the other question too. China wouldn't allow Tesla to bring its cars and some of its new unveilings into certain regions. I guess their government city zones. The Tesla, I think, has eight cameras or more; they're going back and forth on the number. Then imagine the flying VTOLs that are coming. There are going to be a lot of cameras on those. And what the FISA court, NSA, or other nation states and people are doing.


That data is going to be available, right? People will use it to advertise. It'd be nice if it's protected, encrypted. But it is here. It's only going to get worse. It's unfortunate, that level of no control over your own privacy. Where you can't even say, hey, Google, Facebook, whoever, is there some way I could pay you to keep all my data 100% private? Never share it, never allow people in, with micro-tokenization and micro-encryption separation.


It just seems like, for most of the world, besides GDPR and getting spanked heavily, it lands on the negative, unethical side of the coin for most humans. At home, at the bank, on the battlefield, wherever. Optimistic as I am. Of course, it depends on who you want to be protected from. If you have a divorce case, the NSA is not going to give up information, I'm assuming. Or China. Or anyone.


Robert Hansen

It’s not operationally useful for them.


Russ Bodnyk

Yeah. It's not a free-for-all. But it is "anything you say." The whole Miranda thing.


Robert Hansen

But there are companies in the United States doing this as well. Those companies are private companies. They could easily be sued and forced to give up that information.


Russ Bodnyk

You’ve seen it. People don't intend to. The data just gets out. “Oh, it's super encrypted. I'm only going to use it for training purposes.” Guess what, it's out there.


Robert Hansen

Yep. I ask other people sometimes to give me questions for guests. This one comes from one of my fans here. To ask the question, I have to set it up a little bit. There's a bunch of different companies that triangulate on sound. These are companies that do gunshot detection. They use AI in particular. We can talk about them specifically. Some mention it by name. The others might do it under the hood, but these mention it specifically.


EAGL, ShotSpotter, Simple Sense, ISO Smart Systems, ShotPoint, Milestone Systems, XProtect. Those are just the ones I found in the first couple of pages of searching. There's a lot of people doing this. A lot of different companies have decided this is an area that they want to spend time on. I'm going to talk about two in particular real quick.


Russ Bodnyk

Don’t blame them after recent events. So sad.


Robert Hansen

I'm going to talk about two in particular that I think are worth mentioning. Then I'll get to the actual question. One was ShotSpotter. They're used very heavily. They're in, they claim, about 100 cities or more. Basically, they have a bunch of detectors sitting around the city. Then a gunshot goes off. They triangulate and say, this person right here, that's the person, go get them. You can tell exactly what happened the second it happens.
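The triangulation described here is essentially time-difference-of-arrival (TDOA) multilateration: each sensor timestamps the bang, and the offsets between sensors constrain where the shot came from. A minimal sketch, assuming a toy 2D layout and a brute-force grid search; the sensor positions, speed of sound, and solver are illustrative assumptions, not any vendor's actual algorithm:

```python
import itertools
import math

SPEED_OF_SOUND = 343.0  # m/s, roughly, in air at 20 C (an assumption)

def tdoa_residual(x, y, sensors, arrivals):
    """Sum of squared errors between observed and predicted pairwise
    time differences of arrival for a candidate source point."""
    err = 0.0
    for i, j in itertools.combinations(range(len(sensors)), 2):
        predicted = (math.dist((x, y), sensors[i])
                     - math.dist((x, y), sensors[j])) / SPEED_OF_SOUND
        observed = arrivals[i] - arrivals[j]
        err += (predicted - observed) ** 2
    return err

def locate(sensors, arrivals, extent=1000, step=5):
    """Brute-force grid search for the point that best explains the
    arrival times. Real systems use least-squares or closed-form solvers."""
    candidates = ((x, y)
                  for x in range(0, extent, step)
                  for y in range(0, extent, step))
    return min(candidates,
               key=lambda p: tdoa_residual(p[0], p[1], sensors, arrivals))

# Four sensors on the corners of a 1 km square; a "shot" at (400, 600).
sensors = [(0, 0), (1000, 0), (0, 1000), (1000, 1000)]
shot = (400, 600)
arrivals = [math.dist(shot, s) / SPEED_OF_SOUND for s in sensors]
print(locate(sensors, arrivals))  # recovers a point at or near (400, 600)
```

With clean timestamps this recovers the source to within the grid spacing; the hard part in practice, as the conversation notes, is noise, echoes, and non-gunshot sounds, not the geometry.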


Chicago filed a response that basically said this has done nothing for crime. It doesn't work at all. Whether that's true or not is actually a little unclear, because their record keeping is terrible. We don't really know if it's bad. ShotSpotter vehemently disagrees and says, "We're very good." Who knows what the truth is. But let's just say there's a chance that it's not very good.


XProtect, they're interesting. Maybe some of these others do some of the same stuff; I don't want to single them out exactly. But they mention that they also hook into camera systems. They're mainly meant for campuses. Someone comes onto a campus and starts shooting, a college campus let's say, where there are a lot of cameras. They say, "Okay, this is the photo of the place where we saw that thing happening. This is a photo of this person. Now we're streaming that information immediately to somebody who can go take a look at it and say yes."


All these systems have people behind them, by the way, from what I can tell. They have experts or whatever who can determine whether this is a real gunshot or not. That's just a lead-in to the question. This already exists. This is a real thing. The future of this is that it's going to be on battlefields. What's going to happen is someone's going to be shooting at some tank convoy, let's say. All of a sudden, all of the tanks in that region are going to know exactly where it's coming from.


That already exists. This is happening already. But what isn't happening is using that for targeting data. You can basically say, drop a bomb here immediately. Or shoot a mortar round into this camp right here, because that's where the gunfire is coming from. Or up on that ridge or whatever. This all exists. This is a real thing. Right now, that's still a human being deciding whether to do that or not.


Russ Bodnyk

Still a human in the loop.


Robert Hansen

But there is literally no reason they couldn't just take their hands off the button and wire it all up with a piece of AI. This tiny, tiny little piece of AI that decides to do that. At that point, you now have something that is going to have to decide whether something's a firecracker, or some car backing up whose engine makes a weird sound.


Or someone's practicing in their basement and shooting something that sounds like a gun or maybe it even is a gun but it's totally benign. We are really close to giving AI the opportunity to murder people.


Russ Bodnyk

Another excellent question. Even if it's not that example, there are other examples. Medical equipment. The amount of life in the hands of AI is growing more and more, even though you don't really hear about published cases where it makes a mistake, other than car accidents. I agree that it's concerning. That particular use case. I'm just thinking as a nerd about how they're doing their signatures and the triangulation. I'm just like, that's really bold. They're that confident in their technology without…


Robert Hansen

Well, they claim they are. I don't know whether it's real. I mean, everything I'm saying here is the marketing fluff that comes with these companies. You have to take it with a grain of salt. But my point is, let's say it's right. Or it's Chicago saying, "No, this thing doesn't work the way you think it does. It just doesn't. We tried to use it and it just doesn't seem to do the thing."


They're able to make arrests. But it's like, we arrested some guy who had some drugs on him. No gun. You made an arrest. Something happened. But it wasn't what you thought it was.


Russ Bodnyk

I would say maybe even go back to, what would help with ground truth is cross-validation. Where you have multiple sources, multiple calibrations, and check-ins.


Robert Hansen

That’s where I was going with it. Just more sensors might help.


Russ Bodnyk

Yeah. It just seems strange to, I mean, there's a lot of, and that's a whole other field too. We're so used to the things that are immediately tangible to humans. Like vision. We can see movies, CGI, fake human voices, the actual text of LaMDA.


But in terms of bathymetric data, acoustic data, infrasound. Infrasound is really fascinating. I've been working with some infrasound stuff. Just having basic machine learning, not to mention the AI, for how do we remove the noise? Mainly because it's everywhere. It's all around. How can we do signatures, and under what conditions?


Robert Hansen

Especially in crowded building areas where sound’s going to have to wrap around and bounce around.


Russ Bodnyk

I'm not familiar with these particular companies. But that is an exciting, ripe field in which we can reduce the risk overall to society, whatever mistakes it's making. Avalanche estimation. Even, how could AI help with something like Uvalde? Of course, I'm thinking some decent public policy where we have AI analysis as a computational psychiatrist companion, to help give another metric of, "I think these people are stable or not stable." Far out in the future.


Robert Hansen

That's coming. That is on the roadmap. I just had a guest on the other day, Josh Castel, who was talking about doing exactly that. That is not far out.


Russ Bodnyk

So much of it is a social problem, is a psychiatric problem. I don't mean that in a bad way but if we can flip mental health on its head and really help people.


Robert Hansen

Let me bring you back. Since you're wearing the shirt, I think it's very appropriate. There's a very interesting thread I found. You have to search for it. But basically, this guy noticed that in this one particular fight, there happened to be a lot of bombs dropped in one area. A lot of things happened right there. But the target was over here. He's like, "That's a very weird miss." Normally you miss around the target, or you miss completely, off in a large area.


This was all very tight. All these bombs dropped exactly in the same spot, which is a little odd. He did some research and realized what was happening was a mix of two things at the same time. Number one, Ukraine was using some very old Soviet-era radar. It had what's called a centroid error. It thought the thing it was trying to shoot was over here. It's a very common thing to happen. Maybe it was beyond it. However it ended up working, it missed the bull's-eye. But the bombs dropped exactly where they were told to land.


After doing some research, they found out it's software called GIS Art for Artillery. Which is a terrible name and very hard to remember; I had to go research it, figure it out, and find it again. But basically, this software is effectively Uber for dropping bombs. Someone, in this case a Su-27 Flanker or something that's flying along, says, "Okay, there are some multiple launch systems over here. I want them destroyed. These are the exact coordinates. Who has bombs or rockets or a cruise missile in this vicinity?"


Everyone takes a look at the data. It basically broadcasts out to anyone who has an internet connection and bombs. Maybe it's a drone that gets it. Maybe it's some guy on a battlefield with a bazooka. Or maybe it's a missile launch system far away. Or an airplane flying over that has some bombs to spare. Someone says, "I'll do it." Or says, "No, I can't. I'm tasked to do something else. I'm a little busy right now, or I just don't have enough bombs, or it's too dangerous for my position."


Everyone just subscribes to dropping bombs on locations. They go, "Yep, I'm it." They start launching bombs. They all land exactly where they're supposed to go, because they have really good targeting data now, within whatever range or targeting capabilities that particular round has. Obviously, if you have inaccurate mortars, they're going to land where they're going to land. But if they're very accurate, they're going to go exactly on the same spot.


That's what that guy was witnessing: bad targeting data with extremely precise rounds going downrange from multiple different locations. The value of that, beyond the obvious, which is you kill a lot of Russians who are invading your country, is that you also don't look like a big target. I'm just shooting one mortar. I'm just shooting one rocket. I'm just dropping one bomb. Nothing looks like a big enough, juicy enough target to start launching multiple rockets to defend against.


As a result, Ukraine is really, I mean, they're doing an extremely good job for the position that they're in. This is one very small step away from being fully automated. A lot of those drones basically already are. Yes, they're still controlled by somebody on the ground. But they don't have to be. Those multiple rocket launch systems, again, they don't have to be. There are a lot of these things where a tiny bit of servos here and there, plus an internet connection they already have, would turn this into a fully robotic battlefield.


Russ Bodnyk

I don't disagree. In some ways, I'm glad there aren't as many killer robots, killer AI, being demonstrated in that terrible conflict today, because the technology's there. Absolutely. As far as I understand it, a lot of the Russian tech is old. They screwed up. They had a lot of vulnerabilities that they didn't plan for. Ukraine has a motivated, skilled workforce.


Robert Hansen

Also, they're a very corrupt society in some ways. Their systems were just not very well maintained. They were not prepared for mud. A lot of parts were gone. There's actually a really funny video of these guys opening up plastic explosive, and it was a piece of wood. Because that's the level of corruption they have. Someone thought it was just more efficient to ship wood than plastic explosives.


Russ Bodnyk

They got that good deal on Alibaba. It never goes well.


Robert Hansen

I agree. I'm glad there aren't more killer robots. But it is absolutely coming.


Russ Bodnyk

Oh, yeah. Out of this conflict. Sadly still going. There may be more of that towards the end.


Robert Hansen

I think we're going to see a lot more of it too.


Russ Bodnyk

But on the flip side, what's interesting too is that technology is a tool. It's up to us if we want to use it for good or bad. It's interesting to see some of these bold Ukrainians. They were the local drone club. They were the local drone store. They were local IT guys, video gamers. Now they're running successful missions and helping people. It's interesting.


Then, of course, there's the other notion that a lot of the drones are from DJI. Right. DJI is like, "Yeah, of course we don't share information. It's all 100% private." It's unfortunate what's happening. But it could be worse from the AI perspective.


Robert Hansen

Let's talk about some of the failures of AI, because I think this informs the entire conversation. There's this wonderful paper I found on all these…


Russ Bodnyk

One of the things I love to say to friends, clients, students especially: the less they understand about the technology, the better fit I am. If they're Google or Facebook, they don't need someone like me. I love a good partnership. I can help people get some of that serotonin, dopamine, oxytocin. But a lot of times in a meeting, I'm just like, "Hey, let's pause. Our job here is to take this high technology and make it suck less. In sucking less, we can help people and maybe make money."


That is one of the best ways to run straight towards the danger. Whereas with the company you mentioned, the Chicago Police Department was like, it is not working. A lot of times you just don't hear those stories, because they're not part of the marketing and advertising campaign. What's interesting is how often it doesn't work. Which is a lot. Or it fails to meet expectations.


Robert Hansen

This brings me to my question. There's this great research paper on all of these different failure modes that have happened with AI systems. Some of them are like, well, this is a simulated robot, and it has a foot race. How do you get across the line as fast as possible? Well, the robot figured out, "What if I make myself super tall and fall over?" Then it turns out, I'm really, really fast across the line.


A lot of versions have realized that they can glitch through things. They can move themselves in weird ways. There's a buffer issue, and all of a sudden they'll just fly. They can effectively hack it without knowing what that even is. They just try all the permutations and find something bizarre. There are many scary versions of this. Versions where it thinks the speed of light is an okay speed. Yeah, that's okay. Me going from point A to point B instantaneously is good for humans. That is super, super dangerous.


But one of them stood out the most out of all of them. It was a sorting algorithm. It's like, "Sort this as efficiently as you possibly can." It has the ability to pull things off a stack and put them back on. The AI algorithm figured out, "Well, if I pull everything off the stack, it's fully sorted. Just remove everything."


Humans don't think like that. We don't think, well, what if you just removed all the people, then we'd have equality. But a computer will think that. A computer will have that as an option, because it's just another variable that it can control. My concern is that I really don't see enough people spending time on those types of failures. They accidentally stumble upon, "Oh, I guess I didn't really think about these edge cases."


It really is very similar to a pen testing situation where they run it a bunch of times. They're like, "Ha-ha, the computer outthought me here. Ha-ha, I forgot this boundary condition." But we're going to trust our lives to these things. This is not a fun game. Oh, it's just cool. It's just a little foot race. Or oh, it's just a little sorting algorithm.
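The sorting loophole described above fits in a few lines. This is a hypothetical reconstruction, not the actual experiment from the paper: a naive fitness function scores how "sorted" a stack is, and an empty stack scores perfectly, so the optimizer's best move is to throw everything away.

```python
def sortedness(stack):
    """Naive fitness function: fraction of adjacent pairs in order.
    The loophole: an empty stack has no pairs, so it scores perfectly."""
    pairs = list(zip(stack, stack[1:]))
    if not pairs:
        return 1.0  # vacuously "sorted"
    return sum(a <= b for a, b in pairs) / len(pairs)

# What the designer intended: rearrange the items into order.
print(sortedness(sorted([3, 1, 2])))  # 1.0

# What the optimizer found: pop everything off the stack.
stack = [3, 1, 2]
while stack:
    stack.pop()
print(sortedness(stack))  # also 1.0, and nothing was sorted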


I came up with this vulnerability a while back called NaN Injection. You can easily see a situation where it just removes everything from a stack. Now it's a zero. Divide by zero. It's an error. Then you say, "Okay, well, what's bigger than this? Is it this or that?" Well, that's a NaN. Not a number.
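The weirdness of NaN is easy to demonstrate. A minimal sketch of standard IEEE 754 float behavior in Python (not the NaN Injection exploit itself): every comparison involving NaN is false, which makes comparison-based logic silently order-dependent.

```python
import math

nan = float("nan")

# NaN compares false against everything, including itself.
print(nan == nan)             # False
print(nan < 1.0, nan > 1.0)   # False False

# That breaks naive comparison logic in order-dependent ways:
print(max(1.0, nan))          # 1.0
print(max(nan, 1.0))          # nan -- same arguments, different answer

# And NaN arises from ordinary-looking arithmetic, then propagates silently.
print(math.isnan(float("inf") - float("inf")))  # True

# Sorting a list containing NaN raises no error, but the result is not
# meaningfully ordered, since every comparison against NaN is False.
vals = [3.0, nan, 1.0, 2.0]
vals.sort()
```

So once a NaN gets into a pipeline, comparisons, maxima, and sorts all quietly stop meaning what the programmer assumed, which is exactly the attack surface being described.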


Russ Bodnyk

Is it the OWASP level or lower?


Robert Hansen

It could be both actually. Effectively you end up with this NaN. NaN is not something you can compare against, not safely anyway, because it does all kinds of weird things. If you shove that into an array, it does other terrible things. From a human's perspective, you're like…


Russ Bodnyk

It's surprising that they're not guarding against a NaN. Because that's beautiful pen testing, Robert.


Robert Hansen

Yeah, my point being I can start with something that's all just numbers and end up with this catastrophe. The computer does all kinds of weird things that it's not supposed to. That would be very, very difficult, from a black-box perspective, to figure out what happened. You just ran off the road one day. Or your airplane just turned upside down and went down as fast as it possibly could. Or all kinds of weird things like that.


Russ Bodnyk

There are avionics products right now that are still getting certified that do flight detection.


Robert Hansen

Yeah, of course. They had an issue where it was flying along just fine. But they never considered that you could go below sea level. Well, of course there are places on Earth where you can fly below sea level. When it did, there was a buffer overflow. It basically looked like a very large positive number suddenly, which is not safe to fly at. It nosedived to get down to a safe altitude.
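The failure described above is the classic signed-to-unsigned wraparound. A hypothetical illustration (the function name and the modular-arithmetic model are assumptions, not the actual avionics code): a slightly negative altitude, read through an unsigned 32-bit field, comes back as an absurdly large one.

```python
def read_altitude_register(meters):
    """Hypothetical sketch: a signed altitude stored in, or read back
    from, an unsigned 32-bit field, as a naive flight computer might.
    Python's % here mimics the wraparound of a C cast to uint32_t."""
    return meters % 2**32

print(read_altitude_register(100))  # 100 -- looks fine above sea level
print(read_altitude_register(-50))  # 4294967246 -- reads as "impossibly high"
```

A controller that trusts that reading concludes it is billions of meters too high and pitches down hard, which matches the nosedive behavior described.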


I think what I'm concerned about is that there's a whole bunch of really smart people, who really don't understand how computers actually work, building these things. They're focused on their one little edge, the thing that they're building. They're really not thinking about how these things are going to be utilized down the road.


Russ Bodnyk

Or depending on the organization. There's people that do. But they're silenced by the imperatives for stockholders.


Robert Hansen

Do you use Python in machine learning and AI? Because this NaN thing is very prevalent in that programming language.


Russ Bodnyk

That's a good question. When I prototype, it's stuff all day long, lots of Python. Lots of Jupyter Notebooks. Then, when it has to go real time and gets assembled down to an RTOS, C or lower, it goes away from an interpreted language into C. Which could still have errors.


Totally personal experience. But I've been in a lot of situations where big teams of good engineers will advocate about these issues. But then come the politics and the funding. Technology sometimes is the easy part. The politics and the funding are often the layer eight, nine, ten problems.


Robert Hansen

It's really bad. It's like, well, we can make a car that is safe. Yeah, we could. But people are really going to like that spoiler on the back. Why don't we just throw it on there? Why are you adding this extra downforce that's going to make it spin in weird ways? No, it looks cool. Let's just throw it on there.


Russ Bodnyk

There is at least now. Because I've been advocating, AI is cool, or at least it's worth it; let me into this non-AI, non-data-science, non-machine-learning project. In my career, I've often inserted machine learning. Hey, we could take this security course, this software course, this project, this prototype. It was often met with resistance.


Robert Hansen

Another version of this is, I give it a bunch of secret information. It's a chatbot. We're talking. It's closer to an AGI than not, or at least it's passing the Turing test. Someone can pen test it and get my PII out of it. It's very similar to giving it to my aunt, who can't keep a secret. What utility is that? These things need to have very strict boundaries of trust and only do what you expect them to do.


Russ Bodnyk

You can't put an ACL or firewall on your aunt. But you could on something like that. They do that, depending on what you're doing.


Robert Hansen

They absolutely have to. Otherwise all your chat bots are just disclosing all your secrets.


Russ Bodnyk

Well, that brings up another point. Some of the data sets that we see, where it's like, "Oh, it's this gray box, it's doing all this great stuff." They've really retrained and retrained and tweaked it and merged that data. Yes, it's a gray stochastic neural network. But they knew what they were doing up front.


Very similarly, I think, total speculation, with LaMDA there was a bit of that crafting, hand-shaping, brute force-ish. Oh, we can't tell it exactly what to say. But let's just train it on 1,000 of these particular things we want it to think like. Let's keep this other stuff to a minimum. It's there.


Robert Hansen

One of my biggest concerns along those lines is robots in general, and how they are being trained. Robots still don't know what a human is. How are they going to interact with us? They don't look like us. Their visual capabilities are not like ours. They don't smell like us. They don't taste like us. They can't walk or feel things the way we feel things.


How do we expect somebody to actually build Asimov's rules into something that doesn't even know what a human is to begin with?


Russ Bodnyk

Well, that's always been part of the problem. As we get further along, that answer is not getting any better. Because before, it was a simpler system. It was Asimov. We didn't have Tesla. We didn't have all these wonderful examples in the real world of it failing, passing, or anything. Now, not only is the more effective and interesting AI more powerful and able to do things, like safer cars, we also give it more agency.


That's the scary part. From one perspective, what are the ingredients of intelligence, or of being human-like? How do we define AGI? Does it have to be conscious? Consciousness seems pretty darn interesting. To perceive the world around you and survive. Does it need agency? Does it need to decide what it wants to do? It seems like that's another discussion. Maybe to solve intelligence we have to give it agency. Theoretical. But practically, we are doing these exact things.


We are trying to, by making it understand the real world. To get closer to what's up here, consciousness, whether it's intentional or not, is going to be part of it. By making it so that it doesn't drive off a cliff, agency is going to be part of it. From philosophical and AI research perspectives, what the heck is this doing? How does an octopus work? That's another form of intelligence we can interact with. It doesn't have to be a robot. How do you talk to an octopus? What is an octopus thinking?


That's also fascinating, because they have a distributed network. They have no peers. They were born with a lot of nature. They're alone their whole lives, like in the documentary that's been out. It could help save us. It could help do wonderful things, intractable problems with the environment and disease, on the one hand. Or it can be a tool. But before it even gets that chance, we could have too many killer robots. Of course, without stating the obvious, then you have to pick your movie. Terminator. Judgment Day. Nukes.


Robert Hansen

I'm aware of at least two different places that AI and ML have decided that this group of people are better at this type of job than this other group of people. Now whether that's true or not is totally irrelevant. The important part is, that's a real AI system that really was giving people choices about which jobs they were going to be offered or even seen at all. Twice. That's two different companies that I'm aware of.


I wonder how many companies have accidentally built that exact same system and just not been caught by it? Just never noticed that it’s been out there silently favoring certain groups over other groups.


Russ Bodnyk

Well, if you wish to get government funding, if you wish to get startup capital, oftentimes you really have to. They encourage you to show where the competitive advantage of AI is. First, why is this valuable, before the ethics and the overall impact and the studies. I was just talking to some friends recently about LinkedIn data. My LinkedIn is terrible. Part of it is laziness. Part of it is obfuscation, I guess. You're just listening. Because I'm lucky. I have a lot of friends that are like, "Russ, what do you think about this? What do you think about this? Should I go with this company?"


I'm an optimist. But when I hear their value, and what they're going to make to support their family, what missions they can help the world with, how this can benefit the science research, I'm just often very skeptical of their approach and how well it's going to work. I haven't heard a single thing about privacy, ethics, data, legality. My experience is it just is happening. It is proceeding faster.


I know of at least two startups that claim to have even more data, LinkedIn and otherwise, and can make better decisions than a traditional job-placement agent. They will tell you what you're good at even though you don't know it. Which, in some ways, heck yeah, I would choose that, if I had the agency, the freedom, the choice to pick and choose my own correct security and privacy policy. But of course, we don't have a choice at all.


Robert Hansen

Now it's all tied into something like Crystal AI. They know how you're going to be once you actually get to the company, because they know what kind of person you are. They know the likes and dislikes of the team members already. All their information has been mined. An AI decides that this is going to be a good fit.


Russ Bodnyk

There's an incredible opportunity lost, because these could be amazing tools. Like Myers-Briggs, DISC, traditional counseling.


Robert Hansen

I'm not a fan of that but I get your point.


Russ Bodnyk

It's pseudoscience. Sure. Agreed. But that is at least some type of metric for some people to start other than nothing.


Robert Hansen

If it worked at all, I'd be happy.


Russ Bodnyk

Yeah, it is at the very least.


Robert Hansen

There's lots of interesting papers about that.


Russ Bodnyk

There is. But literally that is other than just nothing. Or where can we start with some metrics. Who am I? What's my personality like? What are other personalities like? What does this thing say? Oh, I can agree or disagree with it.


Robert Hansen

I think the Big Five personality test is a little bit more accurate if you want something like that.


Russ Bodnyk

Yeah. Again, I think they're actually going to add a sixth to the Big Five. I don't recall what it is, but I agree DISC and Myers-Briggs are really old. They're way outdated.


Robert Hansen

Not scientific at all.


Russ Bodnyk

Yeah. It's the very least. When you take those tests, DISC, Myers-Briggs, the Big Five or others, the idea is at least you can see your scores. You're choosing to do it and you can see it. The sad thing is, with a lot of this, you have no idea what they're really taking from you and recording, and who's sharing that recording.


There are these opportunities where they could actually come up with beneficial mental health, social interaction. Your little consensus body of agents that are there to help you, because you chose them. You can say, "Oh my god, you guys are terrible. Just be quiet." We so often don't have the choice at all. We don't even know.


Robert Hansen

It will eventually all be fed into our happy-go-lucky advertising bot that will know exactly what we want and exactly how to deliver it to us, and when.


Russ Bodnyk

Or we can just pick which version of Disney we want to overlay on the world. AR Disney 24/7.


Robert Hansen

All right, Russ, this has been great. Thank you. This has actually gone down a different path than I thought it might. It was really good. You're a very thoughtful person. I really appreciate it. How do people find you? If you want people to find you. Out in Colorado, living in a trailer or RV.


Russ Bodnyk

I've got a second one. I'll show you a photo. I bought an awesome commercial double-decker. I'm turning it into Mad Science. We're going to have multiple sister ships. It's important to go through timespace with silly things.


Robert Hansen

Come out and visit you in person. Is that your recommendation?


Russ Bodnyk

Yeah, I'm headed to my little robot ranch next week. I'm going to go and play with some prototypes in the quiet of nature. I have a couple of different contacts, among them LinkedIn, and russ@nulabs.ai is probably my shortest URL.


Robert Hansen

Great. Cool. Well once again, thank you very much for joining me. It was a pleasure. I look forward to seeing this picture of your bus.


Russ Bodnyk

Oh, yeah. It's been a great experience. Thank you so much, Robert. I haven't done this in so long. I've always wanted to. This is my first public thing in a long time. This is good. Thank you so much for what you do. You're helping people get towards ground truth by probing the questions and hopefully preparing us for a better future, AI or otherwise.


Robert Hansen

Once again, Russ, thank you so much for joining me. I really appreciate it. I look forward to seeing you soon, and your bus.


Russ Bodnyk

Heck, yeah. It's my pleasure. Thank you. We can do a fist bump.

