Discerning The Unknown with Ryan Peterson

Debunking Digital Misinformation with Expert Insights

September 05, 2024 Ryan Peterson Season 1 Episode 8

Send us a text

Can you tell the difference between real and fake news? Join me, Ryan Peterson, as we unravel the complex web of misinformation in our latest episode of "Discerning the Unknown." From manipulated images of Serena Williams to satirical articles from The Onion that fooled countless readers, we’re diving deep into the dangers and impacts of spreading false information. You'll learn how to verify sources and think critically before hitting that share button, ultimately helping us combat the scourge of fake news together.

Discover how platforms like OtherWeb are stepping up to filter out the junk and bring you high-quality content you can trust. By aggregating news from over 900 sources and using AI-generated summaries and ‘nutrition labels,’ OtherWeb simplifies the process of finding reliable information. We explore the potential for these tools to revolutionize news consumption, making it easier for audiences to stay well-informed and for advertisers to prioritize higher-quality content.

In this episode, we also tackle the darker sides of AI and the psychological underpinnings of misinformation. With insights from experts like Dr. Zahi Hawass, we delve into how advanced AI technologies make detecting fake media increasingly difficult and the critical importance of discerning news consumption. We wrap up with practical tips on leveraging platforms like OtherWeb to stay informed without being overwhelmed, ensuring that you can navigate today’s media landscape with confidence and clarity.

Support the show

Website: www.DiscerningTheUnknown.com
Email: ryan@DiscerningTheUnknown.com
Facebook: www.facebook.com/DiscerningTheUnknownPodcast.com
Watch the Video of this episode on YouTube: www.youtube.com/@DiscerningTheUnknown

And Always Remember....MEN should NOT wear Flip-Flops!

Speaker 1:

The Discerning the Unknown Podcast: critical thinking in the age of misinformation. You're gonna like this. Here's Ryan Peterson.

Speaker 2:

Hello, welcome to you. I am Ryan Peterson. This is Discerning the Unknown. Primarily, what we do here is try our best to debunk conspiracy theories, to get the story right, and to correct myths, whether recent news or historical stories, where untrue accounts have stood the test of time and people have misconceptions about them. That's what we do. I'll talk about some fake news stories that have been in the news lately, and I've got a great source of real news for you. In fact, I've got a couple that I've been recommending, but today the show is about just one of them. It's called OtherWeb, and if you're looking for a real, reputable news source, OtherWeb is it. We're going to talk with the CEO, Alex Fink, about OtherWeb, so stick around for that.

Speaker 2:

But because we debunk fake news and conspiracy theories, I wanted to present a couple to you today, just to show how widespread fake news is. I think it's overtaking society, and it's dangerous, it really is. I'm from a small town, and I'm sure many of you are, or are familiar with, how small-town rumors spread quickly and can really cause stress in people's lives. Well, now we're on a worldwide scale, where small-town news and gossip spreads like wildfire, and frankly, it can destroy lives, it can destroy reputations, and it needs to come to a stop. Maybe we'll never completely stop it, but we want to be responsible enough with what we spread, or the memes we share, to make sure the information we're giving out is correct, because fake news is just dangerous. So we're trying to put a stop to that, and we're going to talk about OtherWeb.com, and I think you will enjoy it.

Speaker 2:

One thing I wanted to mention here is a news story I found the other day. Serena Williams, of course, is always in the news. The Olympics are going on, but recently a meme was shared online claiming that Serena Williams had been stripped of her titles after it was revealed that she'd been playing with two rackets. People commenting on it were appalled: how did nobody notice this? How did nobody notice she's been playing with two rackets, and how can that be legal? This is absurd. If you're listening to this podcast, let me assure you there is a picture on the screen right now of Serena Williams with two rackets. It is clearly a manipulated picture. She has not been playing with two rackets while none of us noticed for the past several years. It's manipulated, artificially generated, and maybe you saw it, maybe you didn't, but it spread like wildfire. So no, she is not playing with two rackets. Some people said, I don't know, I have a hard time believing this. Well, that is what we're looking for. You need to see some of this stuff and be discerning, which is the title of the show, Discerning the Unknown. But you also need to be a little more realistic.

Speaker 2:

There was also another one recently, rehashing an old story. Fox Nation actually reported it and spread it again: a fake Barack Obama email that was supposedly 75,000 words long, rambling on and on. People were again appalled, with comments like, ha ha, Obama unraveled, epic failure, impeach him, get him out of office. Well, the story that Fox Nation posted was from The Onion.

Speaker 2:

Does nobody know yet that The Onion is fake? It's satire, which is fine. I enjoy good comedy, but The Onion is obviously satire; it is not news. There's another one: a tiny publication reposted an Onion story saying Kim Jong-un had been named the sexiest man alive. No, that was directly from The Onion. Another one: a Bangladesh newspaper was fooled by a story saying that Neil Armstrong was duped and never knew until recently that the moon landing was a hoax. Again, it's from The Onion, and a Bangladesh newspaper bought it and reposted it.

Speaker 2:

These are funny, yes, but they're dangerous, because somebody will believe them. Just like I've said before: here in America, no matter how ridiculous a product is, chances are somebody will buy it. Same thing with news. No matter how ridiculous a story is, somebody's going to buy it. So we have to be discerning, look at things with a dose of reality, look things up, find the correct sources of information, and confirm these memes before we repost them, to make sure they're true.

Speaker 2:

There are many of these. In fact, one just popped into my head: I saw an Einstein meme recently that said Albert Einstein, in his first year of college, got into an argument with his professor about whether or not God was real. Whatever your thoughts on that, okay. In the meme, Einstein insisted that God was real, the professor asked philosophical questions challenging him, and Einstein, the story goes, made a good point explaining why he believed God was real. Throughout the story, it just said "a student, a student, a student," and only at the very end did it reveal that the student was Albert Einstein.

Speaker 2:

Well, I saw that and I thought, wait a minute: in Albert Einstein's first year of college, I don't think anybody there was saying, oh, this guy's going to be a genius someday who discovers E equals MC squared, so I'd better write down what he's saying. That didn't happen. Einstein never said that, and it took me all of about 42 seconds to confirm it on Snopes and other fact-checking websites, which give the sources of where these memes came from and where they've been spread. It's a nice, enlightening little story that makes you think, but it wasn't Albert Einstein. They could have left that name out, but they added Albert Einstein to enhance the story and make you believe it was true. That's not necessary, and it's fake news. So that's what we're talking about today.

Speaker 2:

And Alex Fink is the founder and the CEO of OtherWeb. I wasn't familiar with this until recently, and I want you to be familiar with it, so go there right now if you can. It's OtherWeb.com. It's a new platform that lets everybody read news and commentary, listen to podcasts, and search the web. No paywalls, no clickbait, no ads, no auto-playing videos, no affiliate links, or any other kind of junk. I'm glad to have Alex Fink, founder and CEO of OtherWeb, on the show. Alex, how are you? I'm very good. How are you? Doing well, thanks. Now my first question is just what I read: no paywalls, no clickbait, no ads on OtherWeb.com. How do you make money? How do you keep it

Speaker 3:

going? Well, so early on, the answer was: investors give us money, we use it to create products, and we don't charge our customers at all. Right now, we're coming out with several suites of products for the people creating content: for bloggers, for journalists, etc. They use them, and they can trust us because they see that 12 million people are using the platform; therefore our writing is good, our taste is good, our judgment is good. Hopefully we can maintain that, so that the B2B side of the business subsidizes the free product for consumers.

Speaker 2:

Now, I'll just mention right off the bat that I don't have an affiliation with you, and I'm not an investor, although I saw a story on OtherWeb just today about how, if you'd invested a couple thousand dollars in Google early on, you'd almost be a billionaire today. That's something people might want to consider, because this might be a very wise investment; I think you're going to be very successful. We'll address that, but let's start at the beginning. What is OtherWeb, and why did you think it was necessary?

Speaker 3:

So first of all, going through my personal history: for the 15 years before working on OtherWeb, I was building perception systems. I lived in California for a while, and in Japan before that. I worked on cameras, computer vision systems, etc. At some point it just dawned on me that (a) the world doesn't need more cameras; I don't think we're making it better by adding more of these things everywhere; and (b) I'm not even sure the world needs that much more information. We are failing to make sense of the information we're gathering right now. So I wanted to focus on what seems a lot more important, which is learning how to make sense of stuff, how to filter things, how to help people consume the best of what is available, as opposed to the worst of what is available. And I was also looking at the other side of this equation: how to raise the price for the people creating the sort of junk you talked about early on. Part of the reason these fake stories are reaching you is that the cost of creating a story is fairly low, but the payoff for disseminating it is fairly high. So if we help people filter that stuff out, these stories pay less. And if we raise the price to create such a story, the equation changes and you'll just see a lot less of that stuff.

Speaker 3:

So I wanted to look at both sides of it. We decided to start from the consumer side because, well, that makes it easier to show credibility to everybody else. If we have people who show with their behavior that they care about high-quality content, we can take that to advertisers and say, hey, you should use our algorithms as well, to figure out whether you want to advertise on low-quality content. But if we don't have the credibility of users showing that they care, why would the advertisers care? The analogy I use quite a bit is that once Whole Foods showed that organic food sells, suddenly Walmart had organic food. Somebody needs to show that this is something consumers care about; otherwise the producers, the distributors, nobody else will care.

Speaker 2:

Have you ever found an instance... now, let me go back. This is AI-generated, correct? What exactly is OtherWeb? The stories are AI?

Speaker 3:

They're not quite generated.

Speaker 2:

I'm sorry, not generated.

Speaker 3:

We gather content from all over the web. We have more than 900 different sources that we gather content from, and they're all sources that did not choose to opt out, did not have a paywall, et cetera. So we respect everybody's wishes if they don't want to share, but for everybody who did not ask not to be there, we take their content. We create an AI-generated short bullet-point summary that basically tells you what it's going to be about. We create an AI-generated nutrition label that shows you everything our AI models have determined about this content: is the headline attention-grabbing or not? Is the language more objective or more subjective? Everything that we could define relatively objectively, or at least in a way most people would agree with. Then we provide all of that data to people and allow them to use it to filter, to set their preferences, to decide what they're going to consume and in what order. So essentially, it's just helping people decide whether they want to consume something before they consume it.
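As a purely illustrative sketch (this is not OtherWeb's actual system; every heuristic and cue list here is a stand-in for the trained models he describes), a content "nutrition label" plus a reader preference filter could look like:

```python
# Toy "nutrition label" for an article: flags a clickbait-style headline
# and estimates how subjective the body text is. Purely illustrative
# heuristics -- a real system would use trained language models.

CLICKBAIT_CUES = ("you won't believe", "shocking", "stop what you're doing")
SUBJECTIVE_WORDS = {"amazing", "terrible", "awesome", "horrible", "epic"}

def nutrition_label(headline: str, body: str) -> dict:
    words = body.lower().split()
    subjective = sum(w.strip(".,!?") in SUBJECTIVE_WORDS for w in words)
    return {
        "attention_grabbing_headline":
            any(c in headline.lower() for c in CLICKBAIT_CUES)
            or headline.endswith("!"),
        "subjectivity": round(subjective / max(len(words), 1), 2),
    }

def keep(article: dict, max_subjectivity: float = 0.1) -> bool:
    """Apply a reader's preferences before the article is ever shown."""
    label = nutrition_label(article["headline"], article["body"])
    return (not label["attention_grabbing_headline"]
            and label["subjectivity"] <= max_subjectivity)

article = {"headline": "Stop what you're doing and watch this!",
           "body": "This amazing, epic clip is awesome."}
print(keep(article))  # clickbait headline plus subjective body -> False
```

The point of the sketch is the ordering: the label is computed and the filtering decision made before consumption, which is the behavior he attributes to the platform.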

Speaker 2:

Do you think, then, OtherWeb is something we should be taking a look at on a daily basis, on a regular basis? I mean, should that be where we're getting most of our news from?

Speaker 3:

I mean, it's certainly my daily newsreader. Now, as a person who works in this industry, I have to read all the competition as well, but most people just need one good news source, and we've tried to make OtherWeb into the news source people go to every morning over their coffee, the way they used to read a newspaper. Newspapers used to be great, but right now every single newspaper has its biases or weak spots, and they are all somewhat inconsistent. I can't say which are the best ones and which are the worst ones, but even among the best, I think we've come to a point where consuming one news source is just not enough. You have to cross-reference, you have to consume multiple news sources, you have to consume from all sides, because every news source is developing its own position on every single issue.

Speaker 3:

It's not even just left and right, although, yes, obviously you should read some people from the left and some from the right. But take an issue like Israel-Gaza. It's not really aligned to left versus right anymore. The BBC is pretty clearly on the Palestinian side; the Wall Street Journal is pretty clearly on the Israeli side. They are not left-wing or right-wing organizations; they're just choosing a position on this particular issue. So unless you read both, you don't really understand what goes on in the world. Even if you were on the ground and saw what actually happened, you will not be able to predict people's behavior unless you read what they read. You need to consume the BBC and Al Jazeera side and the Wall Street Journal and Jerusalem Post side to really be able to predict what people are going to do based on the information they consumed.

Speaker 2:

Now, we know why sensationalized stories trigger our minds so much, and I want to look into that. But is that the reason fake news spreads so much? Because it's sensationalized, or because the headlines just click more in our heads?

Speaker 3:

I think there are multiple layers to this. There's an undercurrent that has existed for a very long time of intentional, carefully crafted propaganda: you'll have 150 FSB agents in a dungeon somewhere in St. Petersburg who work for months to create a story and then plant it at just the right time, and it will have all the references, all the accoutrements of a real story, and then you'll need someone like Snopes to debunk it, because a regular person just doesn't have the resources to do so. That has always existed, and it still exists, but I don't think it's increasing. The people doing this careful fakery have a fixed amount of resources; it's not growing.

Speaker 3:

What you see growing a lot in the past few years is what we call junk. It's not even fake news; it's not even news to begin with. It's not that it's false; it's that they didn't care whether it's true or false. That is not part of the equation. These are basically stories that are A/B-tested to be more clickable and more viewable, because ads pay per click or per view. There is no pay-per-truth or pay-per-quality. A lot of them happen to be false, and the ones that happen to be false also tend to get shared more, but that's a side effect. It's not really the goal of the article itself.

Speaker 3:

If you look at 2016, we had an election with a lot of fake news. The most-shared story that year on Facebook was "Pope Francis endorses Donald Trump." It was published by an outlet called Ending the Fed, which, once you dig underneath, was basically three kids in Macedonia. It was not done to make you believe something fake; they don't care what you believe. They got 800,000 shares with a single story, and if you look at the ten most-shared stories that year, I think they had five or six of them. That was just their best one, but they had a lot of them. So this is the entire economy: that's what gets shared, that's what makes people money, and ultimately, that's how evolution works. If you have a selective pressure and something bad reproduces more, you get more of it.

Speaker 2:

Now, there's a lot about the internet that I don't understand, and one of those things is all the ways people can make money. So if I'm sitting in my basement somewhere thinking I need to put out a fake story, or I need to create clickbait to get more clicks and attention, how does putting out fake news improve my situation? How does that benefit

Speaker 3:

me? Again, if your goal is to make money, it could even be that you start by trying to write real news, but then you accidentally include something that is somewhat of a stretch, something you don't actually have evidence for, and you suddenly notice people click on it more. Now you have two options. Either you decide, okay, I'm going to make less money but I will maintain my standards, or you go wherever the incentives lead you, and then gradually you become slightly more fake, and then slightly more fake. And again, fake might not even be the right axis to consider here. If you go to the news section of CNN.com and you see breaking news: "Stop what you're doing and watch this elephant play with bubbles." That happened. That's an actual CNN story. They even tweeted a link to the story because they were so proud of that headline, and they put it in the RSS feed of CNN.com, which is how I found it on Google News, which scrapes other news outlets' RSS feeds. Is that fake news? Is it true or false? "Stop what you're doing and watch this elephant play with bubbles" is just not news, right? But it got clicks. And you're seeing this all over the internet, on both the good outlets and the bad outlets. Now, the good outlets still have some internal standards. They force their journalists to publish corrections from time to time, even though those tend to be buried pretty deep and most people don't see them. They still try to go through some sort of motions; they have ethics they all pledge allegiance to, etc. But they're all drifting in that direction.

Speaker 3:

And consider another thing that is not on the fake-versus-not-fake axis: auto-playing videos. Almost every news website has them right now. Do you know any user who likes them? Do you know any editor who likes them? This is a pure example of an arms race. Nobody wants to do it, but they can't not do it if their neighbor does it. And just because the industry couldn't get together and agree not to use the stupid thing, everybody does it, because the truth of the matter is, if you put an auto-playing video on your page, engagement increases. It works. Nobody likes it. Consumers don't like it, editors don't like it, but they have to put it there, because dwell time goes up from three minutes to three and a half minutes if they do.

Speaker 2:

Yeah, that's kind of human nature, I guess. Everybody's doing it because everybody's doing it, and because it happens to work, right?

Speaker 3:

It hits on some weakness: when there's an image moving in front of your eyes, you pay more attention to what's going on there, because there's movement, and when you pay more attention, you might notice something. There are even weirder examples all over the place. Quite a few studies show that if you blur the text on a page a little bit, people will pay more attention and spend more time on the page. I haven't seen any news outlets use that yet, but it might be coming, because the academic literature shows that if something is slightly hard to read, but not very hard to read, it actually makes you pay more attention, and recall is higher.

Speaker 3:

I've seen copywriters use that technique. I've talked to copywriters who intentionally change sizes or fonts a lot, or use more bold, more italics, more things like that, because they say it increases what people remember from the message. Readers are forced to read more attentively because it's harder to read that way. Again, I haven't seen many news editors use that yet, but the direction typically is that if it works to increase retention, it will happen on every news website out there.

Speaker 2:

It's important to know some of these tricks. I was just thinking recently: I frequent Facebook, and I follow an old baseball picture page there. I just like looking at pictures of old baseball players from the '20s and '30s, and I've noticed a couple of times that they'll put up a headline saying, this is Babe Ruth in 1922, and the picture is clearly somebody else, or a recent player or something. And I've done it myself, put the wrong picture in there. Only recently I thought, oh my, maybe I've fallen for this. I made a comment, I clicked on it, and I probably bumped up some kind of algorithm somewhere, because now I've fallen for this. Is that a common tactic? Are they using that to gain clicks?

Speaker 3:

Well, I think again, if posting pictures of Babe Ruth gets you a lot of clicks, then some people will post real pictures of Babe Ruth and some people will post fake pictures of Babe Ruth. A minority of them might do so unintentionally; the majority might just do whatever gets clicks, and so you are going to see a lot of fake pictures. Now, just posting a picture of the wrong player, I'm not sure that's that nefarious. That looks a lot like a mistake.

Speaker 3:

But you had some examples of intentionally faked pictures, like the one of Serena Williams in your intro, and those are going to happen more and more, just because we've become so much better at faking images and video very recently. Not all of them are done with AI. I know there's a lot of brouhaha about AI everywhere, and, oh my god, there was this AI video of Kamala Harris from last week on Twitter that Musk reposted; there wasn't that much AI in that video, to be honest.

Speaker 3:

So a lot of things are ascribed to AI because it's the bogeyman of the hour. But still, we are getting to the point where things have changed. In the past, if I wanted to fake an image of something that just didn't happen, I would need to take an image of, let's say, another person, then transplant the head of my target to replace the original head in that image. When I did that, the edges would be somewhat wrong, the light on the face would be somewhat wrong, maybe the shadow behind the face would be slightly wrong. If the image was compressed, and I uncompressed it to do the editing and compressed it again, I might have overcompression artifacts right around the face. There were these things that image experts, or people who worked on cameras for 15 years like me, would immediately notice. What happens with AI is that none of these things happen anymore, because the way you fake an image with AI is you create two models: one that does the editing, and another that reviews it and finds all the artifacts somebody would spot and say, this is edited. And they just iterate between these two. This is called a generative adversarial network, or GAN, where, after enough iterations, all the artifacts a human could have noticed as a sign that something was edited are just gone. It's similar to what a human does, but imagine two humans, the editor and the critic, working in iterations over and over and over. Humans never did this because it consumes too much time, but for AI it's just a few cents. It's not time anymore.
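The editor/critic loop he describes can be caricatured in a few lines. This is a toy simulation of the idea only, not a real GAN (in a real GAN, both roles are neural networks trained jointly on image data); here the "image" is just a set of named flaws that the editor removes one at a time:

```python
# Toy simulation of the adversarial loop behind a GAN: an "editor" keeps
# revising a fake until a "critic" can no longer find telltale artifacts.

def critic(artifacts):
    """Return one artifact a careful human might spot, or None if clean."""
    return min(artifacts) if artifacts else None

def editor(artifacts, flaw):
    """Revise the fake to fix the specific flaw the critic pointed out."""
    return artifacts - {flaw}

fake = {"mismatched edges", "wrong lighting", "odd shadow", "compression halo"}
rounds = 0
while (flaw := critic(fake)) is not None:
    fake = editor(fake, flaw)
    rounds += 1

print(rounds, fake)  # 4 set() -- every iteration removes one detectable artifact
```

The loop terminates exactly when the critic can no longer point at anything, which is the state he describes: no human-noticeable sign of editing left.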

Speaker 3:

So you're seeing fakes that are much better. You're seeing fakes not just in a single image but in whole videos, and they're also pretty hard to spot, and some editing techniques are even simpler than replacing an entire head. You could generate today a voice that sounds just like Ryan Peterson. You can take a real video of Ryan Peterson talking on a past podcast, and there is software that will actually change the way your lips are moving to match the newly generated voice, and it will be very hard to notice that this is not Ryan Peterson saying this thing, that it was actually generated at ElevenLabs.com and not recorded anywhere. So we're in a world where it's much harder to trust what you hear or see online unless you have a full chain of custody. You almost have to start looking at videos the way police look at fingerprints: if you don't know exactly who held the evidence bag at every single stage from the moment it was collected, you can't trust that those fingerprints were actually on the murder weapon from the very beginning. Sure, sure.

Speaker 2:

And yeah, you brought up a good example with what I said about the Serena Williams picture. I've got AI in my mind because, like you said, it's the bogeyman of the day. Obviously that could have been Photoshop or something. We used to say Photoshopped; now we're replacing that with AI, that's the term.

Speaker 3:

Yes, so yeah, you got me there. No, I mean, it could have been AI; I didn't look at that particular image up close. But if you remember the image of Princess Kate from about a month or two ago, that one was clearly Photoshopped; there was no AI there. In fact, the reason you could see it was edited is that there was a normal, very human error in the editing process: if you looked at the left sleeve, there was a discontinuity in the edge of the sleeve. There was a hole there, which cannot possibly be real, and any good AI system would have immediately corrected that. So the mere fact that it was visible means it was a bad human editor and not AI.

Speaker 2:

Okay, okay, excellent. As I said, I really believe fake news is dangerous. Is AI as a whole dangerous? I would think it can definitely be used for good.

Speaker 3:

Yeah, so it's an instrument, just like a kitchen knife. I can use it to make a salad, or I can use it to stab someone. So it is dangerous in the sense that it's very potent, and if it's used for bad reasons, then obviously bad things will happen. There is another danger with it, which is that it's going to be a big economic transition, and humans are just not very good with those. Think about the 1920s, when we mechanized agriculture; that was a big economic transition.

Speaker 3:

In 1920, 30% of the US population worked in agriculture. By 1930, 3% of the US population worked in agriculture, so 27% of the US lost their jobs in a single decade. If you trace what happened after that, you had the huge migration north, and a lot of the people who migrated back then, their descendants, even to this day, are kind of an underclass in the places they live in. You would think that three or four generations is enough time to handle a transition, but clearly the US did not do a very good job with that particular one. We're going to have a displacement of similar proportions right now, even if AI is used for good. Even if we just use AI to do good things and make humans more efficient, there are a lot of people who are going to need to find something else to do. I know because I'm in journalism. Think of journalists: you probably don't need as many journalists ten years from now as you have today.

Speaker 2:

I was just going to ask that, exactly. We won't see anybody like a Walter Cronkite anymore. That's how much journalism has changed.

Speaker 3:

No, I think a Walter Cronkite you might actually see. But the question is... I don't know.

Speaker 3:

Well, let's take an example: Gannett. They own 300 or 400 newspapers; they're the biggest newspaper owner in the US. They spend something like $3 billion right now, mostly on labor. Do you think they'll need to spend that much to produce the amount of content they're producing ten years from now? Or can they cut that cost by a factor of ten and still produce roughly the same number of articles and the same number of newspapers at the same level of quality? One person can start producing 10x as much if you give them AI models that automate all the tedious parts of their work, and eventually you get to the point where you give them AI models that do the writing once the journalist has discovered all the facts.

Speaker 3:

You don't want to trust fact-finding to AI, because AI is not going to go to a garage and interview Deep Throat somewhere. It's just not going to work. AI is not going to badger a politician until that politician accidentally tells the truth. So let's say humans still do that. But once a human has a list of bullet points, here's what the article needs to cover.

Speaker 3:

Do you still need the human to write it? Probably not, and that means one human can now do ten times as much. But are people going to consume ten times more content? Probably not, and so there's going to be a huge displacement. So even in the best-case scenario, where AI is only used for good, we need to adjust, because big transitions are coming. Now, add to that the fact that some bad guy could decide, hey, ChatGPT, help me create a better version of the plague, and that might happen. Unless we have guardrails around these models, we might end up with a plague more difficult than the original one. So we have to be careful with this technology, because it is fairly potent, and we need some guardrails around it.

Speaker 2:

You mentioned the fact finders. Of course they're important. Snopes is probably the most popular. There was the controversy not too long ago about Snopes, that it's one particular married couple or something like that. Just as we said, these guys from Bangladesh or wherever, in their basement, making the fake news. Is Snopes something we should be looking at? What are some other fact finders that you yourself would trust?

Speaker 3:

So first, to be precise on the terminology: by fact finders I mean sort of the private-investigator part of the journalist's job, finding new facts that nobody has known before. Snopes would be a fact checker, and I think those are obviously useful, but they have a few problems. The first problem is that they don't scale. As I mentioned before, creating fake things is fairly cheap. Debunking fake things, as you probably know from your line of work, is very expensive; it takes a lot of time. And so if you look at the number of claims that Snopes and PolitiFact and all these other fact checkers are able to check in a given month, it's nothing, it's negligible. It does almost nothing to stem the tide of misinformation coming at us, because it's one millionth of what it would need to be to address all the different claims.

Speaker 3:

Now, the second problem with them is that you can't really peek inside the brain of the person doing the fact-checking. So how can you trust them, right? And this is one of the benefits of AI: if you have an AI model where the source is open and the datasets it was trained on are open, it is somewhat better than a human editor in that you can trust it completely. Nothing is hidden. And so part of what we're going to do, and this is something that we have in our suite of tools for journalists, is try to do rudimentary fact-checking with an open-source model that people can verify.

Speaker 3:

Obviously it's not going to be as good as Snopes, at least not in the next two or three years, but it will be able to debunk the vast majority of things that are obviously false. And we're giving that to journalists because our assumption is that it's not quite good enough as a final product for the consumer, because sometimes it does make mistakes, but it saves the journalist so much time. Now, instead of going and searching for all this information, they already have all the possible comparison points and citations and things that are relevant, and they just have to go through it and see: is this one of the 97% of cases where I can use it as is, or one of the 3% of cases where it has made a mistake and I need to do the work of Snopes and go do my own search? So fact checkers are useful, but I think they need to be automated, because they just don't scale as fast as the junk is coming out.
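The assisted workflow described here, where a model surfaces candidate evidence and the journalist makes the final call, can be illustrated with a toy retrieval step. Everything below (the corpus, the claim, the Jaccard scoring) is invented for illustration; it is not OtherWeb's actual tooling.

```python
# Toy illustration of the assisted fact-checking workflow described above:
# given a claim, surface the most relevant statements from a reference corpus
# so a journalist can review them, instead of trusting a model's verdict.

def tokenize(text: str) -> set[str]:
    """Lowercase word set, with basic edge punctuation stripped."""
    return {w.strip(".,!?\"'").lower() for w in text.split()}

def rank_evidence(claim: str, corpus: list[str], top_k: int = 3) -> list[str]:
    """Rank corpus statements by Jaccard overlap with the claim's tokens."""
    claim_tokens = tokenize(claim)
    scored = []
    for statement in corpus:
        tokens = tokenize(statement)
        overlap = len(claim_tokens & tokens) / len(claim_tokens | tokens)
        scored.append((overlap, statement))
    scored.sort(reverse=True)
    # Keep only statements with at least some lexical overlap.
    return [s for score, s in scored[:top_k] if score > 0]

corpus = [
    "Serena Williams won the US Open singles title several times.",
    "Tennis players may not use two rackets at the same time.",
    "The marathon world record was set in Berlin.",
]
evidence = rank_evidence("Serena Williams played with two rackets", corpus)
# 'evidence' now holds the two tennis-related statements for human review.
```

A real system would replace the token overlap with an open-weight retrieval model, but the division of labor is the point: the machine gathers the comparison points, the journalist decides.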

Speaker 2:

So that means I need to use the correct terminology myself. Once we're done, if you've got a cheat sheet or something with the definitions of the terms, would you send it to me? I would appreciate it.

Speaker 3:

I'm not sure I do. I just have the benefit of being extremely pedantic.

Speaker 2:

But how about the difference between misinformation and disinformation? I think those are used interchangeably.

Speaker 3:

You need to add malinformation. Now there's a third one.

Speaker 3:

So here's how I understand them, and if somebody listening to this thinks that I am wrong, please correct me. Disinformation is something that was created intentionally to cause you to believe something that isn't true. It's the FSB agents in a dungeon in St. Petersburg; they are doing disinformation. Misinformation is something that you read that isn't true and causes you to believe something that isn't true, but it wasn't created with that purpose. So it could be the clickbait article about Serena Williams using two rackets: you end up believing that she's a cheat. You were misinformed, but chances are this was not disseminated by somebody who really doesn't like Serena Williams; it was disseminated by somebody who just wants a lot of clicks. And now there's a new term, as of last year I think: malinformation, which is something that is true but ends up misinforming you. And that's a tricky one.

Speaker 3:

But if I tell you a few true factoids that were carefully selected, I might cause you to have the wrong impression about something. Like, I could tell you that during the COVID vaccine rollout there was a big spike in side effects reported in the VAERS system, and if I say nothing else, you might decide that the COVID vaccine is uniquely dangerous. Why would you ever take that, right? The biggest spike in side effects ever recorded. But I didn't mention how many deaths were prevented by that same vaccine. I didn't mention that there were organized campaigns by anti-vax activists to report things to VAERS. So I didn't mention a lot of things.

Speaker 3:

I told you something that is true, but it's carefully selected. And if you read works on propaganda from Soviet masters of the art like Posner, they talk a lot about this: the best propaganda uses quite a bit of truth, but that truth is carefully selected and sometimes sprinkled with untruths to cause you to eventually believe the wrong things. And so those are the three: intentionally false and makes you believe something false; unintentionally false and makes you believe something false; true and still makes you believe something false. Those are the three categories.
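The three categories laid out here reduce to two yes/no questions: is the content true, and was it crafted to mislead? The mapping below simply restates the definitions from this conversation; it is not an official or standard classification scheme.

```python
# Restating the taxonomy above as two axes: truth of the content, and
# whether it was deliberately crafted to mislead. Illustrative only.

def classify(content_is_true: bool, crafted_to_mislead: bool) -> str:
    if not content_is_true and crafted_to_mislead:
        return "disinformation"  # false, created to deceive (the troll farm)
    if not content_is_true:
        return "misinformation"  # false, but spread for clicks, not deception
    if crafted_to_mislead:
        return "malinformation"  # true facts, cherry-picked to mislead
    return "information"         # true and honestly presented

# The clickbait Serena Williams story: false, but made for clicks, not malice.
label = classify(content_is_true=False, crafted_to_mislead=False)
```

The cherry-picked VAERS example would land in the third branch: every individual fact is true, but the selection is designed to mislead.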

Speaker 2:

Okay, and that's the psychology of it. You reminded me of an old joke I used to tell, and I think people who have ever studied comedy or even told a joke would understand this. The joke itself isn't important, but when I was telling it I would draw people in: it was a story about an ambulance. "Do you know where this area is? I saw an ambulance in this area. Remember, we were there? You know that area?" "Oh yeah, yeah, I know that area." And then the punchline has nothing to do with what you just said, but you draw somebody in, and by that time they're hooked and you've pretty much got them.

Speaker 3:

It's the whole psychology, just in a new form for the age of technology. Yeah, and there are a lot of other things that don't even fall into any of these categories, but I've been looking at them a lot lately and I'm not even quite sure how to address them yet. But I think that we should be talking about them as a society quite a bit more. There's almost a terminology war happening on almost every front. If you look at "gender transition" versus "gender-affirming care," or if you look at just the choice of words for how things are described: "illegal immigrant" versus "economic migrant" versus "undocumented migrant." You have different camps trying to inflict their preferred verbiage on society, because within those terms it's hard to argue with their point of view. So they think if they win the labeling war, they will win the opinion war eventually. And it's kind of insidious, especially when that substitution is done without acknowledging that that's what you have done.

Speaker 3:

I've seen one case of that very first scenario I described, where Republican lawmakers in Oregon passed a law, and the headline of the law says "restricting gender transition surgeries for minors" or something like that. And then the article, I think in UPI, which is one of the big agencies like AP (an older one than AP, I guess), said "banning gender-affirming care." And I'm thinking, okay, I don't know which one of these terms is more correct. I'm not qualified to tell you whether one term or the other is correct, but you've substituted the term that is in the bill you're writing about without telling your readers that you've substituted it. Is that ethical? Or should you at least acknowledge that in the original bill a different terminology was used? It seems like what you're doing with that is trying to convince rather than inform. But I'm not sure, I could be wrong.

Speaker 2:

Yeah, I love that even though you're committed to telling the truth, somewhere on your website you say, "Eh, could be wrong. We don't know absolutely for sure." That's my philosophy. If I could find a religion that at the end of their book says, "Eh, we could be wrong," that might be a religion I'd follow. But that's science, I think. Deism, to an extent, is probably as close as you can get.

Speaker 3:

We think it's more likely that God exists than not, but we're not quite sure and we definitely don't believe anything people say about him. That's the basic tenet of deism, right, yeah?

Speaker 2:

In fact, my wife and I got into a discussion the other day about just what you were talking about, the terminology war, and the gender issues. The more I talk about it, the more I realize I don't understand a lot about it. What brought it up was the boxer in the Olympics: is it a male, is it a female, is it too much testosterone? And at the end of our conversation, she brought up points and I brought up points, and I realized I just don't know, and I'm not even sure what I think. The more we think about it, the more we don't know.

Speaker 3:

Well, you are a rare breed if you're able to say these days, "I'm not sure what I think." It seems like everybody, especially online, is trained to have an opinion on everything, even if they are completely uninformed on the subject. Now, on that particular subject, it's actually slightly worse than what you said, because the final in boxing right now, in the female division, is between two boxers who have XY chromosomes.

Speaker 3:

Right. One from Algeria, one from Taiwan. So basically all the boxers with XX chromosomes have been knocked out, and now it's just the two people with XY chromosomes fighting it out. And so, yeah, it's a difficult one, especially since, right before the Olympics, the WBO banned both of these boxers for failing what they called the gender test and wrote an open letter to the Olympic committee warning them that "we don't think these athletes should participate in the Olympics." The Olympic committee ignored them, didn't even respond to the letter, and now you have this kind of brouhaha. Now, that said, let me give you the other side of the story: should this be the top leading story on Fox News on the day on which we had the hostage exchange with Russia, probably the biggest one in history?

Speaker 2:

Probably not. Excellent point, yes.

Speaker 3:

But it was, right? Yeah, yeah.

Speaker 2:

And that reminds me of the old... I love the movie Broadcast News; it's even part of the reason I wanted to be a broadcaster. Early on she was giving a lecture, and she ended it with a nice story about falling dominoes, and she said, "Yeah, funny story, but it's not news." Everybody thought her lecture was over because she ended it on a high, funny note, and as people were leaving, she said, "But it just isn't news." So yeah, that seems to be what we're consumed with lately: the fluff stuff.

Speaker 3:

Or even if it is news, it's probably not page-one news. I'm sure that it is a big deal for the women participating in the Olympics, right? Or the Italian boxer who lost to the Algerian in 46 seconds and was crying afterwards because her dreams were crushed. I realize that is news for some people; it is probably not that important to the average American news viewer, is my guess. So maybe if they want to spend two hours consuming news that day, they'll eventually get to that story, and that makes sense. But if they only spent 10 minutes, don't you think they should know about the hostage exchange instead?

Speaker 2:

Yeah, yeah. You know, this is a conversation... you've got me thinking a lot more about this than even I thought. I thought I was dedicated to thinking this through, how dangerous this could be and how we need to know the truth about things. But yeah, we need to be even more discerning than I thought. You've really got me thinking about what we need to know and what we don't necessarily need to know right away. It's nice to know some of that stuff to do well on trivia night at your local tavern, but some of it just isn't the most important news of the day.

Speaker 3:

Let me complicate it even more for you. If you remember...

Speaker 3:

the Boston Marathon bombing: there were people who were present at the bombing and obviously observed the atrocities up close, heard the booms, etc. And then there was news coverage of the Boston Marathon bombing. There's a study from that year that compared people who were present at the bombing to people who watched at least four hours of TV news coverage of it, and they checked which of the two groups was more likely to have PTSD. The latter did. So four hours of news coverage of atrocities is worse than actually experiencing the atrocities.

Speaker 2:

Wow, that's amazing.

Speaker 3:

So that's what I meant when I said let me complicate it a little bit: even if what you're watching is true, you should moderate it. Don't overdo it. Yeah.

Speaker 2:

And that brings up an interesting point too, because I think a lot of people say, "I don't watch the news anymore. It's all fluff, or it's all fake," or whatever they believe it is. Not long ago I even saw a billboard at a gas station, right under their price, that said "turn off the news and turn on your life," or something like that. I think that's dangerous too. I mean, we need to be informed; we just need to get the right news.

Speaker 3:

Well, let me give you a few personal examples, just to kind of illustrate why I don't think that's the right approach, at least not for me. I was born in the Soviet Union, and I still remember, back when I was four or five years old, my parents would wake up at 4am, lock themselves in the closet, and turn on the radio to listen to Voice of America. Obviously they had to do it in secret so the neighbors wouldn't hear, because that was criminal; Voice of America was banned. But fast forward about a year and a half: we decided bad stuff is about to happen where we live, and so my parents packed everything and we left for Israel, because this was after 1989 and you could leave.

Speaker 3:

Three months later there were tanks in my hometown. The Moldovans invaded, the Russians tried to fight back against them, and you had something similar to what is going on in eastern Ukraine right now, on a much smaller scale perhaps. It ended relatively quickly, but still, reading the news, or listening to the radio in this particular case, led to better actions. People who didn't listen to the news like my parents did were caught in that. So if somebody tells me, "I just don't listen to the news at all," I say, okay, but that means if something's going down where you live, you'll be the last one to know. So there's a real benefit to listening to the news. Now, does that mean you need to know everything about every single boxer in the Olympics who failed a certain test? Probably not. So there's kind of the right measure here for knowing what you need to know.

Speaker 3:

And another example, just not to harp on this too much: I have a marketing guy who managed to leave Ukraine about an hour before the border was locked down and men were not allowed to leave, in February 2022. In fact, he tried to drive out, got stuck in traffic, left his car, and walked across the border to Poland. Why did he know that it was time to leave when others didn't? Because he had a good source of information.

Speaker 3:

So I think knowing things is important, but you need to be very discerning, which is in the name of your show, and consume the things that are actually likely to affect your life in some way. And if it's an earthquake in Indonesia, maybe you should know it happened, just for water-cooler talk. But knowing the details, like the day-by-day casualty counts, or which country sent aid and which one didn't: are you sure you need that level of detail just to get by? Probably not.

Speaker 2:

What do you have for statistics? You must know the statistics: how much of the news that we're reading or seeing or spreading could be some sort of disinformation, misinformation, or malinformation?

Speaker 3:

I don't think it's possible to calculate, because of the way the internet operates: it's turtles all the way down. It doesn't matter what you point to; I will find something even less credible. There's just no bottom to this. And so you can think of this pyramid where, as you go down, the base gets wider and wider, and most of the content on the internet is not even created for a human to read. It's mostly just created to be discovered by search engines, and therefore to get some SEO juice with Google, and then to send backlinks to some other website. It's possible that no human ever looks at it, and if they do, they click away. But it doesn't matter, because the job has been done. There's a whole bunch of content out there that you can buy.

Speaker 3:

In fact, again, funny personal story: I used to be the VP of engineering at a company called Orah, which manufactured VR cameras for broadcasting, and the website was orah.co, O-R-A-H dot C-O. And about a year ago, for some reason, I guess I opened an old computer, and that site was still in the history in my browser. So I clicked on it and went to the website, and I saw, well, the company is no longer in existence; it went bankrupt about a year after I left. And if you go to that website right now, you see hundreds of articles written by the same person, published in the same minute.

Speaker 3:

So I said, okay, something's funny here; one person doesn't publish this many. It is still mostly about photography, so they kind of kept the theme. But I got curious, so I went to all the different marketplaces where you can buy backlinks, because there are only like five or six of them, and I started searching for orah.co. And I found it on one of them, the "No BS Marketplace" it's called, very ironically, where for something like 80 bucks I can buy an article on orah.co that will link back to me. I just pay them 80 bucks, they will write an article on any topic that I want, any number of words that I want, with their own editorial guidelines, probably using AI, and somewhere in that article, on the phrase that I tell them is my preferred anchor text, they will link to my website, and I will get some SEO juice with Google. And that is the majority of the internet. So when we talk about bad news outlets, they are not the worst websites out there.

Speaker 2:

It's turtles all the way down. Wow, and that brings up a whole discussion then about the ethics of AI. We touched on it a little bit, but are just the ones who don't play by the rules going to be the most successful, whether it's news outlets or authors? You can even write a whole book with AI. What are the ethics of it?

Speaker 3:

Well, we'll have to decide what the rules are, right?

Speaker 3:

So ethics are one element, right? But is it necessarily unethical to write a book using AI if you have decided what you want to write, nobody else has decided to write about that, and your book is still informative and high quality, et cetera? So our position at OtherWeb is actually that we don't try to figure out whether something was written by AI or not, just like we don't evaluate an article based on which outlet published it. We want to just look at the text, and if the text is good, then it's good. I don't care if it happened to come from Breitbart or Mother Jones; they occasionally get a good article. And if something came from CNBC, that doesn't mean it's good; it might be bad.

Speaker 3:

So we have to look at the substance of what was published. And my personal belief is that 10 years from now, almost everything will be written by AI. So saying that writing by AI is unethical, maybe it is still true today, just because mostly bad guys have done it, but it will not be the case for very long. So I think eventually we'll need to get to something similar to what we did with science in the 17th century: we came up with rules for what your article needs to look like.

Speaker 3:

How should an experiment be structured? The headline should match the abstract, which should match the body. What data should you provide? You should cite any work that you're actually referring to or basing this on. There are these rules that every scientific journal editor and every peer reviewer actually evaluates your article by, and if you don't abide by them, you don't get published in a good journal. And I presume that most people who want to know what's new in science only read good journals; they don't read the science section of the Daily Mail, right?

Speaker 3:

So as long as that is true, these form-based filtering mechanisms actually work pretty well. The latest stat I've seen is that about 14% of all scientific papers include fake data. Well, 14% is pretty good. I wish we could get down to 14% in the news, because in the news it's substantially worse. So form-based checking is not perfect; let's say one-seventh of everything ends up being fake, but that's a pretty good signal-to-noise ratio. So if we apply this kind of form-based filtering to news content and to other things on the internet, I think we'll have a lot less junk, or at least junk will be a lot less lucrative.
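A minimal sketch of what "form-based" checks might look like in this spirit: does the headline's vocabulary actually appear in the body, and does the piece cite anything? The specific rules and the 0.5 threshold below are invented for illustration; they are not OtherWeb's real criteria.

```python
# Two crude form checks on an article: headline/body consistency and the
# presence of any citation-like signal. Illustrative thresholds only.
import re

def form_check(headline: str, body: str) -> dict:
    head_words = {w.lower() for w in re.findall(r"[A-Za-z]{4,}", headline)}
    body_words = {w.lower() for w in re.findall(r"[A-Za-z]{4,}", body)}
    # Fraction of the headline's substantive words the body actually covers.
    coverage = len(head_words & body_words) / max(len(head_words), 1)
    # Crude citation detection: a URL, an attribution phrase, or a (year).
    has_citation = bool(re.search(r"https?://|according to|\(\d{4}\)", body))
    return {"headline_matches_body": coverage >= 0.5,
            "cites_sources": has_citation}

solid = form_check(
    "Vaccine study shows reduced hospitalizations",
    "A vaccine study at https://example.org shows hospitalizations were reduced.",
)
clickbait = form_check(
    "Serena caught cheating with two rackets",
    "You won't believe what happened at the match yesterday.",
)
```

The point of checks like these is that they inspect the form of the text itself, not the reputation of whoever published it, which matches the outlet-agnostic approach described here.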

Speaker 2:

Mm-hmm. Yeah, I'm glad you said what you said, and the way you said it there, because I just did a book myself. I just published it, and yeah, I used AI. I mean, I wrote the story, I spent a long time writing, but then I plugged it into AI and it recommended sentence structure and flow, and I moved some things around, and of course spell check and all of that sort of thing. So I'm certainly not saying that AI wrote it, but it helped me along to improve it. I think, like we said with anything else, even computer hackers and so forth: if we use it for good, that'll be much better for society. If the people who are thinking and working at how to manipulate it for bad reasons, who are probably geniuses, would do it for good, society would be better as a whole.

Speaker 2:

Sometimes the criminals are just a step ahead, yeah, you know.

Speaker 3:

But that's a job for society to figure out. We can't design complex systems as humans; we're really bad at it. We can't design a language that people will actually speak; all the languages we speak evolved organically. We can't design an economy; I grew up in a country that tried to do that, and it didn't work out so well. We suck at complex systems. But we are fairly good at creating incentives, basically making sure that good behavior or progress pays and bad behavior doesn't. Or at least sometimes we are and sometimes we're not.

Speaker 3:

So you can think of the 26 words that created the internet. That worked out pretty well, right? It's one small section, and the internet is now trillions of dollars. On the other hand, you can think of ISP regulation in the US in the 90s. That didn't work out so well. We went from over 3,000 ISPs to six, I think, and we have the slowest bandwidth anywhere in the Western world, so clearly that did not work out so well. So we need to be careful about how we structure the incentives, so that it pays more to be a productive member of society than to be a criminal.

Speaker 2:

Sure, sure. Well, I don't want to forget about OtherWeb. We've been talking about all the reasons for it, but remind everybody again about OtherWeb.com. Why do we need to be going to OtherWeb? How do we get to it? How do we use it?

Speaker 3:

So yeah, OtherWeb.com is basically a news aggregator with a few additional cool features. You just go there. You either log in or you don't even have to; logging in is just so that it remembers your preferences for next time. You just get in and you start reading news.

Speaker 3:

There are a lot of things you can tweak, a lot more than in any other reader. The content comes from hundreds of different outlets. You can say which topics interest you, which categories interest you, how many articles you want in each category, so you can configure exactly the breakdown of what you're reading. And it now has a chatbot, kind of like ChatGPT, that we call NewsGPT, where you can just ask it to fetch news for you and create your own news brief. So you can tell it: "Tell me what's new in AI research over the past two months, but don't mention OpenAI."

Speaker 3:

"I read about them yesterday." And it will create an article for you summarizing the answers to everything you just asked, with references, so you can follow and read the original articles as well. That's on the website. You can also download the Android and iOS apps; on the website itself there's a link and a QR code, so it's more convenient to get it from there, and then you can consume the same thing in the app on your mobile device. And that's kind of it. Now, as I mentioned, we are working quite a bit on the generative side of things, so you should expect news that was created by us pretty soon, but for the time being we're more focused on aggregating and summarizing than on creating stuff from scratch.

Speaker 2:

I like that. That's actually where I found the Serena Williams thing. I just typed in, you know, "give me some fake news of the last couple weeks" or something, and yeah, it was great. I typed in some other things, baseball stats, and boom, it was right there, exactly what I asked for. I'm going to be hooked very soon. What do you say to people who say "I did my research"? This just popped into my head.

Speaker 3:

Well, it really depends on who's the person saying it. If I'm making a claim about physics and a theoretical physicist says "I did my research," I'm going to find it very difficult to argue with that.

Speaker 2:

Yeah, that's why I have people like you on the show, the experts.

Speaker 3:

Yeah, but generally speaking, I think it is true that you have to be able to do your own research and come up with your own heuristics for how to determine what to believe and what to follow. At the same time, you probably don't have the time to be the expert on everything, and so you have to pick the areas in which you are the expert. In those areas you can have contrarian opinions, and those opinions would make a lot of sense, and maybe they will even give you an advantage over people who follow the common wisdom, because you're the expert at this. But chances are you cannot be the expert at everything. And if you're a genius in one thing, it doesn't actually make you more likely to be right in some other contrarian opinion on a completely unrelated topic. You see this a lot with entrepreneurs and VCs from Silicon Valley, or with actors from Hollywood. They tend to have opinions about a lot of things that aren't what made them very successful, and those opinions tend to be pretty stupid, because you cannot be the expert at everything, by definition. You've invested most of your time into two or three things that you know really well, chances are, and for everything else...

Speaker 3:

You should pick somebody to follow who has spent more time on this than you, and just follow that person. And how do you pick them? Well, you pick based on mainstream credibility criteria, for lack of a better term, because contrarianism is sometimes correct but usually wrong. And you have to understand that it needs to exist: if you suppress it on a society-wide basis, nothing new gets done, because every new scientific theory is contrarian in the beginning. Every change of paradigm is contrarian. Everything that we consider to be fact today started off as contrarian initially. But if I just write down all the contrarian stuff around me, less than 20% of it is true. So what will make me function better in the world? Probably to be fairly conformist in all the things I am not an expert in, and to try to innovate or be contrarian in the things that I am an expert in.

Speaker 2:

Excellent answer, yes, excellent answer. Basically, when somebody says to me "I did the research," you know, I roll my eyes. Yeah, sure you did. It's your Google research; you think you found some corner of the internet that all the experts never knew about, but you found it. Again, you have to be...

Speaker 3:

I'll give you some counterexamples. Some things are fairly, I'm not going to say easy, but they're not that difficult for a regular civilian to figure out. You just need to pick your source of data. So, again, to use COVID as an example, because it was just a big fight between contrarians and conformists, for lack of a better term: my parents were asking me, because they were reading all these fake news sources, "Should I take the vaccine, or is it dangerous? Is it going to kill me?"

Speaker 3:

And I could just trust the CDC, but I wanted to dig a little deeper. So I looked at sources in the US, and I saw they're really contradictory: there are scientists saying this and scientists saying that; it's hard to figure out what is true. So what does somebody who wants to dig a little deeper do? I went and read sources from Israel, which was already distributing the vaccine at the time, and all the stats were available already.

Speaker 3:

And so why would you have this political argument in the US when there's data on an experiment of eight million people? You can just look at the data; you see that obviously all the bad graphs started going down. I don't even understand why there was an argument there. And there are a lot of examples like this where you can do your research. All you need to do is step out of the zone in which people are just throwing feces at one another and go to some other place where they just followed the data and did things correctly. Throughout COVID, you could look at Norway, you could look at Denmark, you could look at Israel. You could look at a whole bunch of countries where there was no controversy. In Sweden, controversial things were done; people are still arguing about whether they worked or not. Okay, ignore Sweden; look at other countries. There are places where the data is fairly clean and there's no argument, so why not just look at what they did? And I think it would actually benefit people a lot in political arguments if they defaulted to this answer of "let's just copy-paste from somebody else" a lot more.

Speaker 3:

That is my answer to a lot of questions. I've been asked in podcasts what my opinion on abortion is. I say I don't know anything about abortion, but let me tell you, there are 30 countries in the OECD, so it's not as if the US is the only one that has to figure this out. Pick one of the other 29; I don't care which one, I'm okay with their law. That's an easy solution that is generally going well. They don't have arguments, right? So clearly their law is better than what we have, because we have arguments. So I think that could be the answer to many things.

Speaker 2:

Yeah, I was a police chief for a while. My philosophy is whatever works. You know, we've got to go with whatever works, and I've got that somewhere on my website. Usually in these interviews I will ask: does the data support that theory?

Speaker 2:

You know, I think that's how we can be discerning. That's how we know we're going to get it right.

Speaker 3:

At the same time, you have to recognize that in some cases you aren't able to figure out the data yourself, because it's complicated, right? If you're trying to look at nutrition, for example, and you're not a PhD in nutrition science, or at least in some other discipline where you understand how experiments tend to be structured, and what the difference is between a meta-analysis, a double-blind clinical study, and a plain observational study, chances are you're going to look at some data that is very convincing, but it will be some sort of low-quality experiment, or something that is contradicted by 17 other studies.

Speaker 3:

So at that point you're going to have to go one level up and say: maybe I'm not an expert in this thing. Who is an expert I can trust? Okay, let's just listen to Layne Norton or somebody else, right, and follow what they say on nutrition. Is Layne Norton going to be right 100% of the time? No, but he's going to be better than me trying to read the same study. So if a new study comes out and I don't know if it's true or not, and I tend to follow everything in nutrition science, I just wait for Layne Norton to do a video about it. That's my approach, and I think it works pretty well.

Speaker 2:

Sure, sure. It seems like with anything we discuss, there's an exception to everything, isn't there?

Speaker 3:

That's what life is like, right.

Speaker 2:

Everything is probabilistic.

Speaker 3:

You said that OtherWeb is all about the truth. OtherWeb is all about increasing the chance that what you see is high quality. Okay, I can't guarantee there isn't going to be some nonsense in there somewhere. My hope is that if there's nonsense and you keep scrolling, you'll see something that debunks the nonsense three articles later.

Speaker 2:

Okay. Well, Alex Fink, this has been fascinating. You raised a lot more questions than I thought I would have, and I really think we could go another hour with just those questions. Normally I'm writing down questions as the guest is speaking, but you've had me so enthralled that I'm thinking in my mind and forgetting to write the questions down.

Speaker 3:

I just talk a lot.

Speaker 2:

No, no. You raise good issues. You raise good points that get me thinking: well, I was thinking this, and now I need to think this too. So again, remind everybody who maybe isn't watching on video but listening: where do we go to find the information? What will we find there, and what can we do at otherweb.com?

Speaker 3:

Otherweb.com is the website. The app, called OtherWeb, on Android or iOS, is how you consume us as an app. You can also sign up for the newsletter there if you just want 10 good articles in your inbox every morning. If you like what you read there, write to me at alex@otherweb.com and tell me you like it. If you don't like it, then definitely write to me at alex@otherweb.com and tell me why you don't like it. We'll fix it.

Speaker 2:

That's the way you improve, isn't it? You get recommendations both ways. Yep.

Speaker 2:

Okay, Alex Fink, the CEO of otherweb.com. Thank you so much. I accidentally turned you off there too quickly. Thank you very much. It's been great. We're going to have to do it again down the road. I'm going to regularly visit OtherWeb, for sure. Sounds good. Actually, before I let you go, real quick, I thought of one other thing. I recently discovered NewsGuard. It's an extension that you can add to your browser. What do you think of NewsGuard?

Speaker 3:

Well, they are trying to solve a similar problem by other means. Their general approach is what I call spreadsheet-based filtering. They review every outlet, and in fact they send questionnaires to the outlets, sometimes with provocative questions: why did you do it this way, why did you not include the source here, et cetera. In the end they rank the outlet, and every article that comes out of that outlet is ranked the same way. So if NewsGuard decides that a website is not credible, their assumption is that everything that comes from that website is not credible. I think that's a relatively low-resolution approach, especially if it's done once. Maybe the website changes over time, it gets better or worse, but the NewsGuard rating is from last year. Also, can you trust the specific people who did this? It's not open-source AI; it's humans. So there are some downsides to it. But is the world better because NewsGuard exists? Yes, so even though there are downsides.

Speaker 3:

If that's the one tool you can afford to use and everything else just consumes too much time, please use them.
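
The contrast Alex draws here, an outlet-level rating that every article inherits versus judging each article on its own, can be sketched in a few lines of Python. Everything below is invented for illustration: the site names, the scores, and the toy heuristic bear no resemblance to how NewsGuard or OtherWeb actually score content.

```python
# Outlet-level ("spreadsheet-based") filtering: one rating per outlet,
# assigned at review time and possibly stale, inherited by every article.
OUTLET_RATINGS = {
    "example-news.com": 0.9,   # hypothetical: rated credible last year
    "junk-site.net": 0.2,      # hypothetical: rated not credible last year
}

def outlet_level_score(article_url: str, outlet: str) -> float:
    """Every article gets its outlet's rating, good or bad."""
    return OUTLET_RATINGS.get(outlet, 0.5)  # unknown outlets get a neutral score

# Article-level filtering: score each piece of content on its own merits.
def article_level_score(text: str) -> float:
    """Toy per-article heuristic: penalize shouting and clickbait markers."""
    score = 1.0
    if text.isupper():
        score -= 0.4
    for marker in ("you won't believe", "shocking", "!!!"):
        if marker in text.lower():
            score -= 0.2
    return max(score, 0.0)

# A solid article from a low-rated outlet is still penalized outlet-wide...
print(outlet_level_score("junk-site.net/good-report", "junk-site.net"))  # 0.2
# ...while per-article scoring judges the text itself.
print(article_level_score("A sober, sourced report on vaccine data."))   # 1.0
print(article_level_score("SHOCKING!!! You won't believe this!"))        # low score
```

The point of the sketch is the "low resolution" Alex describes: in the first approach, the outlet's year-old rating is the only signal, so a good article on a bad site (or a bad article on a good site) is scored wrong by construction.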

Speaker 2:

Excellent, okay. And they're not really a competitor of yours, would you say? I think they can be used in conjunction.

Speaker 3:

It's really more like J.D. Power, or like the Standard & Poor's rating company, right? They're focused just on evaluating and rating things, whereas we are more focused on how we can affect the entire chain, starting with consumption and going through distribution and monetization all the way to production. So not exactly a competitor, but we did start in a similar way. OtherWeb actually started with the nutrition label, and the product was called ValueRank, which sounds a lot like what NewsGuard is doing now. We just gradually evolved into something else.

Speaker 2:

Great.

Speaker 2:

Thank you, Alex Fink. This was great stuff. We're going to have to do it again. Thank you so much, Ryan. Okay, thank you, Alex Fink.

Speaker 2:

CEO of OtherWeb, folks. Great stuff. Like I said, I think it's another arrow in the quiver, for sure. It's a site you can go to every day to get your news. It'll take you other places, and it'll encourage you to be discerning about the news we read and spread. But you have to be discerning. You have to know what we're spreading out there. Before you spread a meme to somebody else, make sure it's something that's passing along the truth, because, just like those small-town rumors, once you put it out there into the universe, it can be dangerous. Fake news can be dangerous.

Speaker 2:

If you look on YouTube, you'll see my interview with Dr. Zahi Hawass. I was so excited to talk to him yesterday. He is the most famous archaeologist and Egyptologist in the world, but I had some audio problems, so I am working on it. I wanted to get it out there, though. His information is wonderful and clear, and his mic was working fine; mine is a little over-modulated and a little staticky. So I'm working on it, and I'll get it fixed and cleared up for sure. But you're going to want to look for that on YouTube, and, of course, for Alex Fink with otherweb.com. I'll have his interview up wherever you get your podcasts. So thank you very much. I am Ryan Peterson. This is Discerning the Unknown. Take a look at the website, it's discerningtheunknown.com, and, as I say at the end of every episode, join us next time, and men should not wear flip-flops. Thank you very much. I'm Ryan Peterson. This is Discerning the Unknown.

People on this episode