Archive.fm

Parallel Mike Podcast

#76- Why AI Will Never Become Conscious with Jobst Landgrebe

In episode 76 we are joined by consultant, professor and AI expert Jobst Landgrebe to discuss the AI deception. By which I mean the narrative, heavily pushed by mainstream figures in Silicon Valley and by the transhumanists, that AI is one day going to become conscious and rule over humanity. Jobst explains why this is not only incorrect, but a blatant lie and misrepresentation of what AI actually is. As such, Jobst explains very clearly what AI is and how it functions in a very literal sense, showing us that there is nothing to fear from AI itself… only the people who program it to do nefarious things. We also discuss the underlying occultism that is inherent within the mainstream narrative around AI and transhumanism, as well as its links to satanism and the 'do what thou wilt' ideology. Of course, as an expert and proponent of AI himself, Jobst also tells us how AI could in fact be used for the betterment of mankind. But first and foremost, people must understand just what exactly it is and where its limitations lie. Hopefully this episode will ensure you are all completely aware of that, so that you can then make your own informed decision on whether to embrace or shun AI technologies.

Enjoy The Show?

Part 2 for Members - www.parallelmike.com
Mike's Investing Community and Financial Newsletter - www.patreon.com/parallelsystems
Consult with Mike 1-2-1 - www.parallelmike.com/consultation

Guest Links

X (Twitter) - https://x.com/JobstLandgrebe

Duration:
51m
Broadcast on:
28 Aug 2024
Audio Format:
mp3





[Music] What you are basically. [Music] Deep deep down, far far in, is simply the fabric and structure of existence itself. [Music] The fabric and structure of existence. Hi everybody, welcome to the Parallel Mike podcast. I'm your host Mike and thank you for joining me for tonight's episode. Tonight we've got a new guest on the show. His name is Jobst Landgrebe. Now Jobst is an AI expert. He is also author of the book "Why Machines Will Never Rule the World: Artificial Intelligence Without Fear". Now for those who are regular listeners to the show, you'll know that I'm very skeptical of the AI narrative. I don't think that AI can ever become conscious. I don't think it will ever have anything akin to a soul. In fact, I think it's ludicrous. I think there's a lot of smoke and mirrors going on here to make human beings feel less than. I think they want us to feel like we are incapable, we are useless, and the only way to remake the world is through technology, and I think this is all a part of the transhuman agenda. But I think it's a scam. I think it's all a lie. And I was very happy to find that Jobst, who is an AI expert, absolutely 100% agrees with me. And that is why I wanted to get him on this show. I wanted to have a very frank and open conversation about artificial intelligence. Because I think for many of us, we just simply don't know enough about it. We haven't been taught about it. We've just been given these mainstream Hollywood narratives, and we've been told by people like Elon Musk what it's capable of, or that we need to be scared of AI, that it's going to take over the world. Yeah, I don't think so. I think it's controlled by humans. So Jobst is here to tell us what AI is actually capable of, how it functions, and also what its limitations are. And then we start to talk about how it's being used and weaponized against us, and we get Jobst's take on that. And in part two, we talk about how it's interlinked with the transhuman agenda, with Satanism.
And we also talk, interestingly, about how AI could actually be used to help us. So I think this one is going to be a really important episode. It's going to enlighten people as to how we are being lied to around this subject. As always, the first hour that you are about to listen to now is for free. But if you do enjoy it, please consider becoming a member over on parallelmike.com, where you can listen to part two. That's where we continue the conversation, and oftentimes where we discuss some of the things that might get us banned from mainstream platforms if we discussed them in part one. Members, of course, please head over to parallelmike.com to sign in and listen to the full episode. And like always, I hope you are well, healthy, and reasonably happy. And I'll see you all back here next week. Hi, everyone. Welcome to the Parallel Mike podcast. We are joined today by a new guest. His name is Jobst Landgrebe. He is the head of research at a biotech company and also a visiting professor of theory of science at a university in Switzerland. Jobst has also written a book, which is one of the reasons I wanted to speak to him. The book is called Why Machines Will Never Rule the World: Artificial Intelligence Without Fear. And this is a topic that I'm really interested in. I'm one of the only people I know out there who doesn't think AI is going to one day take over everything and become human-like and then god-like and kill us all. I don't believe that. I have never believed that. And I think that goes back to my Christian faith, in that I don't believe a machine could ever have something akin to a human capacity of knowledge and creativity. So I was really excited to read this book, Jobst, but before we get started, please just introduce yourself to listeners and let us know a little bit more about your past and how you came to be the person who wrote this book. So first of all, thanks a lot for inviting me to your podcast.
Now, as to how I got into this field: after studying medicine and biochemistry, I studied mathematics and have worked since 1998 in the field of artificial intelligence and applied artificial intelligence. So I witnessed several of the waves of AI hype. The latest hype wave of AI started 12 years ago, in 2012, when Google created a new clustering algorithm that became quite famous because they claimed that it could recognize pictures, the semantics of pictures, which it couldn't, but they claimed it. They said it could recognize the face of a cat and had somehow become aware of what was going on in the pictures. One year later, I started an AI company that was doing language processing. And many customers asked me why I didn't do chatbots, and I said, because they don't work, because they will never work, because they will never be able to lead a meaningful conversation with human beings. And they didn't believe me. And of course I saw the success of the sequential models like ChatGPT coming long before ChatGPT was published, but still, what I predicted about them was correct. And so at some point, I got so annoyed with the customers always asking why I wasn't doing this that I wrote this book. And I wrote it together with Barry Smith, a good friend of mine who is a professor of philosophy and with whom I also teach theory of science in Lugano. And so that's how I came to write the book. I think it's a very important book, because I think most of us don't even have a basic understanding of what it is and how it came to be. So maybe you could fill us in on that part, Jobst: what actually is AI, how does it work, what are its parameters, and what is it not as well? Because I think that's the bit that most people miss, what it is and what it is not. First of all, the name "artificial intelligence" is a marketing term.
It has nothing to do with real intelligence, so I'd like to start with what real intelligence is, and then show what so-called artificial intelligence is. Real intelligence is the ability of animals and humans to find a solution for a new problem that they encounter suddenly, for which they haven't been prepared or trained, and to find a meaningful solution spontaneously for this problem. This is real intelligence. Many higher animals can do this, parrots for example, but also dolphins and other mammals. And human beings also, but humans, in addition to this ability, can not only find solutions spontaneously, they can also utilize language to describe the problems, to do long-term planning, and to then build societies that can in the end create towns and technology and build something like the Cathedral of Cologne, or compose works of music, draw works of art, and all of this. And this has to do with the ability of humans to verbalize their thoughts, to picture things abstractly, and this animals don't have. However, all of this has to do, which I have to repeat again, with the ability to find a solution to a new problem spontaneously. And now, what is so-called machine intelligence? So-called artificial or machine intelligence is algorithms which try to mimic some of the aspects of human thinking. For example, a chess computer tries to mimic the ability of a human being to play chess, a go computer or go software tries to mimic the ability to play go, and so on and so on. So basically artificial intelligence algorithms are algorithms that try to mimic some of the intellectual behavior of human beings. And these algorithms can of course become better than humans, such as the chess or go software that can beat all human players, because they act in a closed world of predefined situations.
And in such a closed world, you don't have to find anything new; you just have to find an optimal solution to a problem that you can define mathematically in a perfect way. And so these are mostly solution-finding algorithms that are very elegant, and being able to program them and run them on modern computers is a great achievement of technology and science. But it has nothing to do with the ability to find new solutions to unseen problems. Computers always totally fail when they get into a situation for which they have not been prepared with a lot of examples before. Then they totally fail. And that's, for example, why we don't have the safe self-driving car: because self-driving cars fail in situations that are novel. And because so many constellations that happen while you're driving a car are different from the situations that are seen in the training of the algorithms, they fail. So this is just a very, very short summary of what AI is and what natural intelligence is. And in your career, Jobst, as you were going through your career and learning about algorithms and AI, would you say that where we are right now, when we look at AI, it's more akin to mathematics than it is to something like artistry or creative dance or something that a human being can do with spontaneity and passion? I mean, this is purely computed mathematics. Every AI algorithm is a mathematical model. And there are many different kinds of models. You can combine them and by this also mimic human creativity. For example, you can now ask a computer program to draw a picture in the style of Picasso, and it will do it. And the pictures will look quite good, and you can even indicate from which phase of Picasso's life the computer should draw the picture. So mimicking human behavior works quite well. But of course, the machine is not creative; it's just applying patterns that have been mathematically formulated in the machine previously.
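The "closed world of predefined situations" point can be sketched with a toy example. The game below (a simple take-1-or-2 Nim variant) is my own invention for illustration, not anything discussed in the episode: because the rules fully define every possible situation, perfect play falls out of exhaustive search, with nothing resembling intelligence involved.

```python
# Toy illustration (invented example): a Nim-like game where players
# alternately take 1 or 2 stones and whoever takes the last stone wins.
# Because the "world" is closed and fully defined, exhaustive search
# finds perfect play; no understanding or creativity is involved.

from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win from this position."""
    if stones == 0:
        return False  # no stones left: the previous player already won
    # a position is winning if some legal move leaves the opponent
    # in a losing position
    return any(not wins(stones - take) for take in (1, 2) if take <= stones)

print([wins(n) for n in range(1, 7)])  # [True, True, False, True, True, False]
```

Faced with a rule the search was never given (say, a third kind of move), the program simply has no answer; that is the "novel situation" failure described above.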
And there are basically two ways to do it. There's deterministic AI, which is explicit programming of situations and, basically, logical procedures. This is, for example, the type of AI that is built into chess-playing computers or chess-playing software. And then there's stochastic AI, which is AI where millions, billions, or trillions of situations are used as examples for the machine. And from these examples, the algorithm derives a very big mathematical equation. This big mathematical equation can then take a certain input, like, for example, a prompt, and then create a picture based on the prompt. But it cannot understand the prompt, it cannot interpret the prompt, and it cannot use the prompt as instructions. What it does is translate the prompt into a sequence of numbers, and the sequence of numbers into another sequence of numbers which represents a picture, out of which you can then make a picture. And this is called sequential stochastic learning, deep learning. And it is very remarkable from a mathematical and technological point of view that we can do this now, but it doesn't have the slightest similarity to intelligence, because it can only reproduce what it has been told to do before. And this is essentially the limitation that delimits what machines do against real intelligence. So when people talk about AI learning, Jobst, when they talk about it becoming self-aware and having new thoughts and ideas and giving insights that a human being wouldn't be able to give, what are they actually talking about? Like, when we hear this in the media, is that actually happening or is this a kind of illusion? People who say this don't know the mathematics of AI, and so they don't know what they really talk about. And this is of course a very frequent phenomenon in media. You know, I mean, journalists usually don't really know what they talk about.
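The "numbers in, numbers out" pipeline described above can be made concrete with a deliberately trivial sketch. Everything here (the character encoding, the two-parameter "equation") is invented for illustration and bears no resemblance to a real model's scale, only to its character:

```python
# Toy sketch (invented): a prompt becomes a sequence of numbers, and a
# fixed equation maps that sequence to another sequence of numbers.
# At no point is there any understanding of what the prompt "means".

def encode(prompt: str) -> list[int]:
    """Stand-in for tokenization: map each character to an integer code."""
    return [ord(c) for c in prompt]

def model(tokens: list[int]) -> list[int]:
    """Stand-in for the 'very big mathematical equation': a trivial
    affine map. Real models have billions of parameters, but the
    principle is identical: numbers in, numbers out."""
    w, b = 2, 1  # parameters that would have been fixed during training
    return [w * t + b for t in tokens]

print(encode("cat"))         # [99, 97, 116]
print(model(encode("cat")))  # [199, 195, 233]
```

The output sequence would, in a real system, be decoded into pixels or text, but the machine only ever manipulated the numbers.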
Unless some of them are very specialized, and then they know what they talk about, but very often they don't, or even if they do, they often believe in ideologies rather than in science. Just a side remark: if you look at journalists specialized in the economy, right, they know a lot about business and the economy, but they will often look at the economy in a way that is basically not aware of the essential problems of the system. For example, they don't understand the concept of fiat money, something you talk about a lot in your podcast. They don't understand it. And very often this is the same in reporting. So what do they mean when they talk about AI learning? Let's start with learning. This is what we as mathematicians call the configuration of an equation. When you give what is called training material to the computer, an optimization algorithm is run, and this optimization algorithm parameterizes a very long equation. It takes the result that should come out of the optimization process, subtracts the output of the equation from the real output, and calculates the difference. Then, to drive this difference towards zero, derivatives are calculated, partial derivatives, and these partial derivatives are used to find a minimal error. So basically the error that the computer prediction model makes is calculated by subtracting the output of the computer from the real output. For example, if you have a billion emails and a billion indicators of whether each email was spam or not, then of course the software should calculate whether an email is spam or not, and its output is subtracted from the real given data, whether the email actually was spam or not. Now, minimizing this difference is an optimization problem, and the function that is minimized is called the loss function.
This loss function computation is how AI models are trained. And when they are trained, basically derivatives are calculated and a local minimum is found: the parameters of the equation that give the smallest error. These parameters are then the so-called trained parameters, and that's the learning. So what lay people call learning of AI is nothing else but finding parameters that minimize the difference. That's all. And of course this is great, because if you now get input, emails that are very similar to the emails that were used in the training, the algorithm will correctly predict what's going on. But if you get an email that is in any way totally different from the previous emails, the algorithm will fail, because it is only geared towards solving the situations it has already learned, and it cannot adapt to a new situation. That's the essential problem of machine learning, and that's why it's not really learning but just parameterization. The machine can only reproduce what was present in the data that were used to optimize the algorithm. So it can't learn; it can only be parameterized. And if you have an evolution in the data, so if the data change, like spammers changing their spamming tricks, then you have to use these new data and retrain the algorithms so that the algorithm can also recognize these new spamming tricks. And this is, of course, the same with all AI algorithms that you can imagine. That's, for example, why ChatGPT cannot, quote unquote, talk about facts that are newer than its latest training date. Because they were not present in the training data, right? So it cannot know anything that happened after the deadline of the last training run. Now, back to the problem of consciousness or awareness.
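The training procedure described above (subtract the model's output from the real output, then use derivatives to minimize the difference) can be sketched as a one-parameter gradient descent. The data and the single-weight "spam score" model are invented toys, chosen only so the mechanics are visible:

```python
# Toy sketch (invented data): "learning" as loss minimization.
# Each "email" is one feature x (say, a count of suspicious words),
# labeled y = 1 for spam, y = 0 for not spam.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 0.0, 1.0, 1.0]

w = 0.0    # the single parameter to be "learned"
lr = 0.05  # learning rate (step size)

for _ in range(500):
    # loss = sum((w*x - y)^2); its derivative w.r.t. w is sum(2*x*(w*x - y))
    grad = sum(2 * x * (w * x - y) for x, y in zip(xs, ys))
    w -= lr * grad  # step downhill on the loss surface

# w now minimizes the error on THESE examples, and that is all the
# "learning" amounts to: parameters fitted to the training data.
print(round(w, 3))  # 0.357, i.e. 5/14, the least-squares optimum
```

If the spammers change tactics (new data), nothing in `w` adapts by itself; the loop has to be run again on the new data, which is exactly the retraining described above.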
Of course, what the computer does is take an input, which it transforms into a long series of zeros and ones, and put this input into the equation, which then creates another series of zeros and ones. This has nothing to do with consciousness or awareness. It is just a computation machine. In the end, it only performs elementary computations to yield the result, given the equation that was created using the process I just described. So calling this awareness or consciousness is just silly and naive at best, or at worst, it is propaganda. For example, if people then try to say the machine can one day have a will and be evil, it's propaganda. And I will come back to why I think it's propaganda in a minute, but essentially, it's just laughable from a mathematical point of view. Wow, there's so much to get into in this one, Jobst, because I think it's an even more interesting conversation to have if you accept as the foundational truth that it's not going to be human-like, and it can't be conscious. If you can convince the audience of that, then the question becomes, well, why are they trying to convince us of that? And where are they taking us? So maybe we'll put that to one side for now. Let's talk about this later. It's very interesting. I have a theory about this, but now go ahead, please. Yeah. Awesome. I'd love to hear your theory on that one. But just before we get there, Jobst, can you tell me, and I don't want to derail you because I want us to pick up where we are now, but can you tell us how you evolved in your education around AI? Where did it start? And did you have any prior knowledge of things like cybernetics or any technocratic solutions? Was that your past? And was AI something that you yourself were convinced was something different until you got into it? Can you just tell me that evolution of thought for yourself? Very cool question. I've never had this question before.
And listening to other podcasts, I expected something like this, but of course, not exactly this. So, I started mathematics as a biologist to be able to apply mathematics to biological questions. And when I started this, I really thought that using mathematics to solve biological questions would help me as a biologist. That's why I started mathematics. And then, when I applied it more and more to biology, I recognized the problem that it is very, very hard to describe and model biological systems using mathematics. So that was my starting point. Then I asked myself, why is this the case? And I realized that natural systems are complex systems, that living systems have properties that are very fundamentally different from the properties of inanimate systems. For example, the solar system is inanimate. It is dominated by one force, the gravitational force, and can be described almost perfectly using mathematics. And living systems cannot, because they are complex, and what that means I will explain a bit more in a few minutes. But then, to finish answering your question, I realized that if you want to apply mathematics using computers, which I have tried since 1998, to biological problems, you have to restrict yourself to very simple problems. There you can apply it, but only very few problems in biology and medicine are amenable to mathematical modeling. Most of the knowledge we have is not based on mathematical models, but some on statistical inference, and a lot also on other ways of acquiring knowledge. Mathematics doesn't play a big role in medicine and biology. I mean, some statistical evidence, and some other models as well, but very little. So then I decided to also apply AI to other fields. That's why I went into language processing in 2013.
But essentially what I discovered in every field into which I went is that the more complex a problem is, the more it is a problem that occurs in the natural world in which we live, either biology or language, which is in the end also biology because it is produced by our biology, right? The harder such a problem is, the harder it is to apply AI, because there's a mismatch between the way the world is and how mathematics describes it. And Immanuel Kant, one of my top three favorite philosophers, said in 1790 in his book Critique of Judgment that it is impossible to mathematically model animate systems in the way we can model inanimate systems. He didn't give a real reason why this is so, but he hypothesized that it is the case. And 150 years later, we found out why. And I can talk about this as well if you want. Yeah, that's interesting. I'm a really big fan of Kant as well. But yeah, please continue, Jobst. I'd like to hear more about that, because you kind of come across like a pariah to me, because nobody else is giving me this message. And I want to get to the bottom of why that is as well. Is this something that other people in your profession actually agree with you on privately, or is this truly you going against what they believe? So I think it's true that I'm a pariah, in the sense that I have an instinct of questioning everything, which has to do with my Jewish background, you know, I have a Jewish father, and I think this necessity to question everything is very Jewish. It has to do with how Jewish theological culture evolved over thousands of years of debates about how to interpret the Bible, and the secular form of this is what you can see, in a small way, in people like me, and in people like Einstein, right: to question how things can be the way they are and then come up with totally different points of view, while the rest prefer to go with the consensus.
This is hard, but in the mid-term, usually these outsiders make it into the mainstream; this is how science progresses. And so when I started with my AI skepticism, I was quite alone. But now I see more and more people, also from the field, saying what I say, though they don't use the same depth and breadth that I use for the arguments, in terms of using philosophy, psychology, mathematics, biology, physics. They haven't worked it out at the same level of detail that I have, because I have now dealt with the problem for 10 years. I mean, I started realizing the limitations of mathematics in its application to complex systems from 2001, 2002 onwards, but in the last 10 years these ideas matured very much and I worked a lot on them, and so I now have a very detailed position on this. But I see more and more AI practitioners who come around and think the same, and I also see more people who are not mathematicians or physicists who also start to understand this. Now, the reason why they don't talk so much about it, and why this position is a minority position, is that AI is part of the transhumanist ideology, an ideology that is very powerful in the West, but also globally powerful, because though countries like Russia or China have different political ideologies than the West, different political narratives, they also use technocracy to rule, and transhumanism is an ideology that belongs to the technocratic style of rule. And I think saying that the emperor has no clothes, that machines basically cannot become intelligent, that it's ridiculous to talk about machine consciousness or the will of machines, is not fashionable, because it contradicts the transhumanist technocratic narrative.
Yeah, this is the part of it that I find very alarming, and it feels to me like human beings have already been altered to a degree in terms of the way that we think, in that we accept almost without question and hesitation that machines have some kind of superiority to humanity. We've lost this sense of awe that we should have for ourselves, as to how brilliant we are, that we're here, that I'm speaking to you now and we can have this very flexible conversation, and I've got questions in front of me and I've used maybe two and then completely gone off track. That's something that AI would not do; it would go through its structure. It seems to me like that's what you're saying. With human beings, it's almost like there's an interaction happening, in that they're trying to convince us of AI's great capacity whilst also diminishing the capacities of humans, making us believe that we are less than, so that we accept AI as a superior. Yeah, I mean, before we get to the question of why this is happening, whether there's a purpose to this, let's review briefly the cultural phenomenon. So culturally, what I call postmodern collectivism is the ideology that is now ruling the West. And in there, there is a huge overestimation of mankind on the one hand. A huge praise, if you read Harari's Homo Deus: a huge overestimation of what can be achieved, of what is feasible. But this, as already in the work of the two Huxley brothers, Aldous and Julian Huxley, is only reserved for an elite of brilliant people. They are seen as superior. Aldous Huxley said, "Only geniuses are true human beings," right? So these are the rulers, and then there are the people who work for them, the clerics, as Joel Kotkin calls them, the clerics of the rulers, who basically preach this kind of gospel, and they are also transhumanists.
They believe that humans can be combined with machines to create superhumans and all of this. And this is also a lack of humility towards the creator, towards God. They are, of course, nihilists that don't believe in God, so they feel that they can themselves be God. This is one side of the coin. The other side is, of course, that Harari writes that the majority of human beings will become useless, that they won't be needed anymore because they are not geniuses, they are not superior, and so they will not be needed anymore. So this is the ideology that is behind this. This is what is culturally happening, and it is a result of the Enlightenment and of the negation of faith, the negation of classical metaphysics, and the replacement of this by a technocratic faith, which is also positivist and devoid of the classical, traditional values of the West. But before we look at why they are doing this, let's understand why this is so wrong. It is so wrong because they don't understand the nature of complex systems, and I would really like to spend a few minutes on this. Complex systems are systems which have certain properties that make it impossible to model them mathematically. For example, every animate system, every living system, is a complex system. It has evolutionary behavior, which means it can not only spontaneously develop new elements, but also new types of elements: in language, new grammatical structures can arise that change the expressivity of the language. These systems have deterministic chaos, and they are also driven systems, which means that they produce and consume energy, and therefore they have phenomena of energy transformation.
All these phenomena are very, very hard to model mathematically, or impossible to model mathematically in a complete way, and this is why the mathematical models that would be required to do all the things the transhumanists want to do are scientifically impossible. And this is what they don't understand. They believe in the simple physics that we have in Newtonian mechanics, but also quantum mechanics. Quantum mechanics is, if you look at it from an advanced mathematical point of view, relatively simple: a linear wave equation, right? And so if you look at these mathematical models, they are rather simple. Their scope is very limited. Every physicist knows this. All physicists, and also mathematicians looking at physics, know that we can only model a tiny part of the universe. But they are so drunk on the success of applying physics in technology that they don't understand these limitations anymore, and they now extrapolate the success that was possible, especially using electromagnetism, to build the communication devices that we are just using, for example, and microchips, and have become so self-confident that they now believe this can be extrapolated to nature, and that animate systems can also be programmed and improved in the same way we can build technology. And this shows that they don't understand biology, they don't understand physics, they don't understand mathematics, and they are not able to apply this to understand the limitations of these sciences. It kind of feels like we're in between, on the one hand, a great deception, and on the other hand, massive hubris, and I'm not sure which one it is, and I'd love to get your take on this one. Or maybe it's a combination of both, because I do think there is a sort of alchemy to life, in that they would have us believe in a new technology as if it already is what they claim.
Even if those capacities don't exist, because they understand that our collective consciousness can almost help will it into being. Or even if they fail, and AI never does have these capacities, if we believe it has the capacities, Jobst, we'll still behave in very systemic ways that automatically make the AI look like it's successful. So, I'm not sure which one it is, I think it's probably a combination of the two, but I'd love to get your take on that. Yeah, so first of all, there is of course hubris, which has many, many reasons. If you think about some of the programs that the West has rolled out globally in the last decade, some of them were extremely successful; just think about the number of people that got the COVID vaccinations. So this was very successful from their perspective. And so now they feel that they are right and that they are going in the right direction. Now, why are they hyping AI so much? I think there are two reasons for this. One reason is economic: those who build the currently leading AI algorithms invest literally billions into this. OpenAI has consumed tens of billions to create the models that they now have. And similar things are true for many of the huge models by Google, by Facebook. So there are huge investments, and they of course want to protect those investments and erect barriers to market entry, which is a very typical behavior of companies. And so, by saying that AI can become conscious and dangerous, they are basically getting politicians to believe this and to create legislation that guarantees the monopoly of the existing tech giants and makes it very hard for small companies to enter the market and become competitors. So it's about erecting market entry barriers, preventing competition, and maintaining oligopolistic structures. This is, I think, the main reason why people like Elon Musk warn about AI.
Certainly Elon Musk is much too clever to believe that AI is really dangerous. I believe he is just trying to erect barriers to market entry. And so are the others. So this is one aspect of it. The other aspect is of course that all digital technology, since it was invented in the 1920s, has been used as a mechanism of rule. You know, the Nazis bought a lot of IBM machines and started using them to register people they wanted to put into concentration camps, and so on. And this is, on the one hand, quite traditional. It goes back to the Salian rulers in the 11th century, who built up the roots of modern bureaucracy in Europe. And of course the Romans also had a bureaucracy. All modern states want to have bureaucracies, and if you have digital technology, you can make them more powerful. And I think putting AI and digitization together creates quite powerful mechanisms of mass surveillance, and the attempt to control populations can be realized using these technologies. So I think a lot of this talk is also based on the fact that they want to use it for these purposes. Now, will it really be successful? Partially, yes, in some areas. But in other areas, it's mainly digitization that is a threat to the autonomy of the individual, and not so much AI, because AI is so limited in what it can do. And I don't believe that making people believe AI is really something actually changes humans. It's just a fashion. Like, you know, in the Middle Ages, people believed they had to buy letters of indulgence to shorten the time they would have to stay in, not hell, but, how do you call it, purgatory. And then this fashion went away. So I think it's just a fashion. And you always have to ask yourself, what is the real effect? And the real effect is mainly digitization. AI is just, you know, the icing on the cake.
If you look at the whole phenomenon, AI is much less important than digitization, which is really the big thing that totally changes the way our societies evolve.

Yeah, I think a big part of this AI narrative is going to be humans being psychologically controlled by the narrative around AI, rather than by AI itself. And I think we're already there. Like, when I speak to people, even people in the alternative media who have read a lot of the same things I've read, they've looked back at the history of eugenics, the Huxley brothers, the Fabian Society, and they understand this big meta-narrative to take us towards a godless society and then maybe create a new god, a man-made god, that AI could actually feed into. And I'd love to speak to you about that as well. But they still believe that AI has this great capacity to actually supersede us and make us redundant as creatures. I've just always rebelled against that. And I think a part of it is having a spiritual belief, and believing that we were created with something completely unique that is impossible for man to recreate. So, I guess the question from that is: do you have to be an atheist at heart? Do you have to have some doubt in the Creator, or completely reject the Creator, to believe that AI could be something like they're telling us?

So, as for that argument: of course, I also believe what you say about man, because I'm a Christian as well, right? And so we agree totally on this view of the greatness of the creation, and that basically we are created in the image of God and so on. But in a scientific context, I never use this argument, because science has of course, since the 17th century, split away from questions of faith in our tradition. So you can't argue with God in a scientific context anymore.
I mean, the real split was made in the 18th century, but it was prepared already before that. So, no, I don't think you have to be an atheist to believe all of this. You can believe in God and still be afraid of AI if you don't understand what it is on a mathematical level. But that fear is not realistic, for the reasons I think I sketched out. So I don't think you have to be an atheist; I just think you need to gain an understanding of how it works. Now, in a population of one million, like the city I live in, Hamburg, which has more than a million people, there are maybe 10 or 15 people who understand how it works. A tiny minority. The vast majority, including the politicians, but also many, many users of AI, even scientific users of AI, don't understand at all how it works. And so they can be brought to believe that it can actually become more intelligent than humans, because, first of all, they don't understand human intelligence. They lack the education to understand what this property really is. They wouldn't be able to give the definition I gave at the beginning of this conversation. And then they wouldn't be able to show, like I did, why you cannot achieve this with mathematics. And that has to do with the essential limitations of mathematics and physics. Now, to be aware of these limitations is the privilege of a very, very tiny minority, because first of all you need to understand mathematics and physics, which is hard enough, and then you need to think very hard about the limitations. Now, the great geniuses of physics, like Richard Feynman or Einstein, they totally understood the limitations. They were fully aware of them, and it's just striking, when you read the original texts, how sure and certain they are about the limitations.
Many of them, like Heisenberg, also believed in God, you know. It is just the problem that the vast majority of people cannot see this. Now, the really interesting question is: what does this mean, and what is the effect? And indeed, I think you can create a cultural phenomenon of awe and fear that is similar to what I described for 15th-century Christianity, where they were all afraid of purgatory, and then you can influence people. And I think that is really going on. Basically, in postmodern collectivism there is an irrational trend, which is very strong, and this irrational trend has very interesting cultural sources, namely in medieval esotericism, like the Corpus Hermeticum. You can trace it back to the Corpus Hermeticum, and you find it in many other esoteric writings. There is a flavor of this in postmodernism, quite a strong one actually in French postmodernism, and it has made it into Anglo-Saxon postmodernism. And yes, this irrationalism that is now used by the mainstream media and also by our ruling class is, interestingly, also mirrored in many dissidents. Many dissidents are also, you know, polluted by this irrationalism without even noticing it. They basically mirror the irrationalism of the ruling ideologies, just with a negative sign, you see. And this is what upsets me very often. People like James Delingpole, right? He's just lost it. I mean, he's just panicking and mirroring all the horror narratives of the mainstream, just putting them into a different perspective, but still at the same level of irrationalism.
So what we always need to do, and what has always made the West culturally successful, is to stay rational: to take what you observe and what you experience seriously, to reason through it, and so on. And this also needs to be done with AI. And then you can very quickly see that my view of AI is simply true, and that the mainstream view is just silly, and really idiotic, I must say.

Yeah, I really love what you just said there, Jobst, about how people are mirroring these narratives, but, like you said, coming at them from a different perspective. It's actually something that I find really troubling about some of the people I see out there producing content. They're supposedly trying to incite in people some kind of pushback, some passion for humanity and for life, and for the Christian virtues of love and charity, and yet they just seem to hold up that black mirror and tell us all how bad it's going to be, how we're going to be subjugated. And every day they've got a new message about how my life's going to suck and be awful. That's not an alternative. You're actually just giving me the same thing they're giving me, in a different package, and it's not helpful. And I actually don't believe it either. But I do think that if we lock ourselves into these narratives, and believe narratives that are patently false, then we will create that reality. Whether it's real or not doesn't matter; we can create it for ourselves in such a way that our lives become very dark and depressive, and we willingly, like many people did during COVID, give up our liberties, because we believe that that is the only solution. And that's what I'm worried about, Jobst.

Yeah, so first of all, with regard to political and cultural phenomena, that is of course true.
The COVID time is a very good example, but the climate hysteria is another, or this new trend to actually abolish agriculture, or many, many other topics that are fear narratives brought forward by our Western culture. Also the hatred of China and Russia; you name it, there are so many of these phenomena. Now, this you can certainly achieve using propaganda, though there are limits to it as well. Bertrand Russell's book Power, I don't like Bertrand Russell, but his book Power is fantastic, says that if you create propaganda that negates the laws of nature, then at some point people will realize it, and that's the limit of propaganda: you cannot say that if you put a glass on this table, it will fall up. But now we have achieved a level of propaganda, in some of the things we've discussed, that gets close to it. And this won't work with people, because we have survived thanks to our primary theory. Robin Horton distinguishes primary theory and secondary theory. Primary theory is the view of reality that all humans have in common, even the most primitive hunter-gatherers: that you cannot walk on water, that you should not drink stinking water, that the sun rises every morning, that when you get wet, you will freeze, and so on and so on. This is a coping mechanism of humans that replaces the role that instincts play in animals. And then you have secondary theory, which basically allows you to think about why all of this is happening. Hunter-gatherers usually have polytheistic religions to do this, or even pre-polytheistic religions, religions where animals are gods and so on. And then we have modern physics as secondary theory, and other modern scientific content. And basically, with propaganda in the secondary theory domain, you can change the thinking of people almost at will.
But the primary theory you cannot change. And so if you try, people at some point realize that they are miserable and question why, because the common-sense appreciation of reality cannot be subdued. And this is why people in the end rebelled against the Soviet Union everywhere in the Eastern Bloc, why Mao's China had to be transformed, because the people had just had enough. Why even now Vietnam is changing, right? Why Iran is not the Iran of 1980 anymore. All of this is happening because, no matter how much propaganda you use, you cannot change the primary theory of human beings, because that is probably genetically encoded. And so the ultimate protection is the primary theory and our biological nature; that protects us against this. But I agree that in the meantime you can create quite some damage in the secondary theory domain, and that's what we're witnessing. But I'm still hopeful, because I see more and more, in the broad population of the non-privileged, a change of attitude towards all of this, including even AI, you know? Because if you look at a lot of what is happening with AI, like Google Gemini, you know, which couldn't portray white people anymore, or the longer dialogues that you could have with ChatGPT that were totally nonsensical and ridiculous. People see that this machine cannot think, right? And so in the end, you know, I very much trust in the primary theory capability of people.

Well, I think that's a fantastic place to leave part one, Jobst. I think you've laid it out really well for listeners, and I direct listeners towards your book, "Why Machines Will Never Rule the World: Artificial Intelligence Without Fear". Are there any last words you'd like to say before we go on to part two, Jobst, or anywhere listeners can find you online?
I mean, I don't have a strong online presence. I sometimes use Twitter, or X, to post something on AI, and I regularly publish on the theory of science. Sometimes I also write essays. Basically, the simplest thing is just to Google my name, and then you will find the material I'm producing. Yeah, for listeners, that's, or should I say in writing, J-O-B-S-T L-A-N-D-G-R-E-B-E.

Okay, everyone, that's it for part number one of my interview with Jobst. A fantastic part one. Jobst is a very smart man, and he's also a very kind man. He gave us lots of time to really go through the intricacies of AI, but in part two we continue the conversation. We talk about how AI is being used to usher in transhumanism, how it fits into the satanic agenda, but, as always, we also talk solutions, and I get Jobst's take on how AI could actually be used for good. So lots to get into for members over on parallelmike.com. If you're not a member yet, please consider coming across and joining us. In closing, hope you are well, healthy, and reasonably happy, and, like always, I'll see you in the next one.

What you are, basically, deep, deep down, far, far in, is simply the fabric and structure of existence itself. Peace for all men and women, not merely peace in our time, peace in all time. Honesty in this place, in this life.