Dr. Abdalla Kablan on Building Responsible Artificial Intelligence

Interesting Engineering's Ali Diker sat down with machine intelligence expert Dr. Abdalla Kablan to discuss the future of artificial intelligence and exactly how worried we should be.
John Loeffler

Dr. Abdalla Kablan is a serial entrepreneur and award-winning fintech expert. He specializes in the use of artificial intelligence and machine learning in the design of complex financial systems.

Throughout his career, he has founded a number of startups and companies specializing in deep learning, professional matchmaking, and fintech. He is an alumnus of Microsoft Ventures UK and has worked on projects that were featured at the World Economic Forum 2017 in Davos. His latest ventures are Wyzer.ai and the Caledo Group.

Dr. Kablan is also an academic at the University of Malta, where he lectures and researches topics related to computational intelligence, financial engineering, and financial data science. During his Ph.D. studies between 2007 and 2011, Dr. Kablan researched the cryptographic block algorithms that underpin bitcoin (now known as blockchain), and he is a renowned expert in the field of distributed ledgers. He also advises governments on matters related to strategic development and the utilization of technology, fintech, and blockchain/DLT. In 2018, Dr. Kablan launched the DELTA Summit, which has become Malta's official event for DLT and digital innovation.

He sat down with Interesting Engineering's own Ali Diker at last week's The Next Web 2019 conference to discuss the current state of artificial intelligence and whether or not it poses a threat to humanity. The interview has been lightly edited for clarity.

Abdalla Kablan
Source: Dr. Abdalla Kablan / Twitter

Ali Diker: Hello Dr. Kablan, I'm glad you could be with us this afternoon. I wanted to start by discussing a piece you wrote in TechCrunch a few years ago about how AI is not a threat to society and the world in general. Some people are pretty pessimistic about artificial intelligence because it can be weaponized and can even be lethal, but there are optimists who believe AI will change things for the better. In your piece, you say that we have to redefine some concepts, for example consciousness, as we get further down the road to AI. What do you believe needs to be redefined, and how and why should we redefine these concepts for AI?

Abdalla Kablan: First, thanks for having me, Ali. Honestly, my piece was not as optimistic as the title suggests. What I said was that AI is not a threat to humanity, but an internet of smart things may be. The reason I phrased it like that is that I genuinely believe it can go either way. We are at a crossroads in human history where certain decisions have to be made now, from a strategic standpoint, that might impact the future of humanity.

Now, I spoke about various concepts and one of them is consciousness. Achieving consciousness on the digital front is incredibly difficult and incredibly complex. This is mainly because consciousness, like intelligence, is something quite abstract and it's very difficult to define what consciousness is. However, what I am currently focused on is actually more about achieving wisdom and creating wiser systems rather than the creation of intelligent systems and I'll try to explain why.

The problem that we've had in AI in general goes down to the roots of understanding data. Data, in my personal opinion, is a five-step process—the life-cycle of data, if you will. You start with data; you process data and it becomes information; information with time evolves into knowledge; knowledge with practice becomes understanding of the problem domain; and then, with practice, understanding and intelligence evolve into wisdom.

Historically, the overwhelming majority of AI systems have been stuck at the knowledge stage—glorified knowledge presented as intelligence—and have not achieved the wisdom stage. The difference is quite clear: knowledge is the difference between true and false, but wisdom is the difference between right and wrong.

A good friend of mine likes to put it this way: knowledge is knowing that a tomato is a fruit, but wisdom is knowing not to add it to a fruit salad. AI, historically, could not figure out the link or inference to the wisdom side. The reason is that for the past 40 to 50 years, the arms race in AI was mainly about creating the next Einstein—the best mathematical model that can solve a problem like beating the world champion at chess or Go.

Artificial Intelligence Aristotle
Source: Wikimedia Commons

Instead of creating an Einstein, what we should have focused on was creating an Aristotle, which understands the various facets of intelligence—not only intellectual and mathematical intelligence but also social intelligence, cultural intelligence, and emotional intelligence. The reason these have not been solved is not that they are unsolvable; it's that no one has focused on them.

So I now belong to the mindset that instead of focusing on artificial intelligence, we should focus on artificial wisdom: creating systems that by design follow a benevolent path instead of a destructive one. As humans, we are very flawed; it does not take a genius to realize that we are irrational, but we're predictably irrational. We carry the seeds of our own self-destruction because, throughout our history, we have made so many decisions that have caused misery and agony. If there is a superior intelligence, it will realize that we are quite flawed, and that will be a problem for us.


As a member of team humanity, I need to safeguard the future of our species, and I know it's inevitable that if the focus is on intelligence, an intelligent being will realize our flaws and capitalize on them. If the focus is on wisdom, a wiser creation will know that destroying humanity is not good for anyone, because at the end of the day, humanity is a lifeform and it shouldn't be messed with on principle.

The thing is, humans have done this to other creatures themselves. The way we've historically treated not only animals but each other—even though we are very intelligent—could actually happen to us if we reach a singularity, a level of machine intelligence that surpasses human intelligence, without a focus on wisdom. So focusing on the regulatory side and on the ethical side of development is quite crucial.

The nations of the world should pay more attention to this before it's too late. Take, for example, the automotive industry: for many years cars had no seat belts, and a lot of people were telling the automakers that seat belts were needed because statistics showed that without one you were far more likely to die in a crash. The automakers resisted that regulation for a very long time, until they were forced to put seat belts in cars.

Now, a lot of people died without seat belts, but it didn't make humanity go extinct. The problem with AI is that by the time people realize it needs to be regulated, it will already be too late. No one can predict the future, but we have to have strategic foresight about how this could evolve if we're not responsible.

In Malta, where I am based and where I'm from, the government last year set up a national AI task force to study the potential implications of mass artificial intelligence adoption while at the same time encouraging it. The aim is a national strategy that will attract AI developers and AI projects to come and innovate from Malta, but within our regulatory boundaries and with consideration for the ethics of development, intellectual property, and so on.

I think this should become a global discussion, and only if the focus of research and development shifts more toward wisdom will we crack consciousness, and crack things such as emotional intelligence and the social side. It's not that difficult, because we're still at a very early stage. AI isn't new—it has existed since Alan Turing's time—but we were focusing on one side of it and forgot 80% of the real problem.

It's understandable, because even in school we say, "Oh, that student is intelligent!" because that student gets the highest grades—but that doesn't mean they're the most intelligent student. It's reflective of our own irrational way of thinking.

I, Robot
Source: Eirik Newth / Wikimedia Commons

AD: How do we determine what is reasonable for an artificial intelligence? For example, in Isaac Asimov's I, Robot, you had artificial intelligence that protected humans from other humans. In this instance, artificial intelligence is good for some humans and an impediment for others.

AK: It's actually very, very difficult to tell. As I said, AI is getting smarter and more advanced, but humans are also progressing beyond belief. I could even argue that as humans we're all cyborgs at the moment, because the definition of a cyborg is someone who has some sort of cybernetic enhancement to their body. We have these devices that we're holding in our hands all the time, and if you think about it historically, we had mainframe computers, then PCs, then laptops, then iPhones, then Google Glass—which was a bit temporary—but it's just a matter of time before these things actually go into our bodies and we all get chipped.

In my personal opinion, there's also an advancement on the human side, and that scares me even more than AI becoming much more intelligent. Unfortunately, if humans advance on the cybernetic side to that extent, it's going to create a major elitist distinction between people: the richer will be able to afford the more expensive, better, smarter chip. Natural selection will start favoring the richer over the healthier or the smarter, and that, in my opinion, is a major problem.

On the second part, about AI protecting humans from humans, it's a very deep philosophical argument. Eventually we are going to get there, and—again coming back to regulation—we need to anticipate those trends now and try to come up with laws so that we don't end up in that kind of scenario.

Blockchain
Source: photo fiddler / Flickr

AD: So the next question is about blockchain. How will blockchain technology help to make better artificial intelligence?

AK: That is a very good question. I believe that soon there will be a convergence between various disciplines, namely blockchain, AI, and IoT. On that front, we have to understand that with blockchain, unlike AI, we're at the very early stages—like the internet in the 1980s and very early 1990s.

The internet existed as a protocol, TCP/IP, for almost 30 years, and it wasn't until one fine invention in the early '90s that it was catapulted into a completely different place: the web browser. The day we had the first browser, we started using this protocol in unimaginable ways, because all the indexing we had been doing at the code level could now be visualized, searched, and queried—it became much simpler. With blockchain we haven't had that moment yet, but that's fine, it's healthy, and we know we'll eventually get there.

Blockchain, as you know, is a decentralized ledger technology that's still focused on data storage; it hasn't reached the processing side yet. I personally believe that soon—and when I say soon, I mean within five to ten years—blockchain will develop much more sophisticated decentralized ledger architectures that allow not only for storage but also for processing. This, in turn, will enhance the capabilities of data acquisition, administration, and processing beyond what we currently have, which means that AI systems are going to improve to an unimaginable level.

If AI improves, we come back to the argument about whether AI will become a threat to humanity; however, I genuinely believe that blockchain is a fantastic, godsend solution for us, because with blockchain we have consensus, and when you have consensus you can have some form of control. If an AI is going to go rogue, collectively speaking it's not going to achieve consensus, and hence it's not going to be able to perform certain tasks on the network, because the network would know that this shouldn't happen—it's not to the benefit of the entire network.

Because we have these consensus mechanisms—proof of work, proof of stake—hopefully we'll build some form of intelligence into how consensus is achieved, so that five or ten years from now there will be a lot of improvement on the current technology. This will help us draw the distinction not between true and false but between right and wrong: if your decision to execute a certain action on the network is harmful, then it will not achieve consensus.
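To make the consensus-gating idea concrete, here is a minimal sketch—not Dr. Kablan's design, and with hypothetical names like `Validator` and `reaches_consensus`—of how a network could refuse to execute an action unless a supermajority of independent validators judges it acceptable:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Validator:
    """A network participant that independently judges a proposed action."""
    name: str
    judge: Callable[[str], bool]  # True if the action looks benign to this node

def reaches_consensus(action: str, validators: List[Validator],
                      threshold: float = 2 / 3) -> bool:
    """Allow an action only if at least `threshold` of validators approve it."""
    approvals = sum(1 for v in validators if v.judge(action))
    return approvals / len(validators) >= threshold

# Toy policy: every honest node rejects actions on a known-harmful list.
HARMFUL = {"drain_accounts", "disable_safety_checks"}
nodes = [Validator(f"node-{i}", lambda a: a not in HARMFUL) for i in range(7)]

print(reaches_consensus("validate_transaction", nodes))    # True: benign, executes
print(reaches_consensus("disable_safety_checks", nodes))   # False: blocked by consensus
```

In a real DLT, the judgment would come from the consensus protocol itself; the sketch only illustrates the principle that a rogue participant cannot act without the network's agreement.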

So I think blockchain is a very good solution, and we, as developers, should think about this now. I said before that we made a mistake with AI—we really followed the wrong path—and with blockchain we should be responsible, have these conversations, and recognize that blockchain is not going to exist alone; AI is soon going to integrate and progress along with it. So let's design our consensus mechanisms not only to run the proofs that validate financial transactions—which is all great—but also so that, when it comes to processing, we stop any action we recognize as non-benevolent.

Easier said than done, but if we get enough good brains together to solve it, it's not unsolvable. Everything we're doing is just delaying the inevitable—eventually we will be in a situation where AI surpasses human intelligence—but if we can delay it or control it in a way that takes most of the harm away from us, then that would be great.

Blockchain
Source: Depositphotos

AD: But let's say a majority makes a bad decision about how AI and blockchain are integrated, about the process. It certainly seems that majorities tend to make bad decisions; how do we deal with that?

AK: That's an excellent question, and it goes down to the core principles of democracy and decision-making. I think we should have some form of safeguards, at least for things that are obvious. If the majority is making decisions that we know can potentially harm people—not only physically but even financially—it should not be allowed, even if the majority thinks it's the right thing to do.

Again, the negative side of this is that we'd be creating some kind of tyrannical set-up where the system is more powerful than the people. I'm not in a position to make a judgment I feel strongly enough about, but I know it's a problem that needs to be discussed and solved, so I think more dialogue and more research needs to happen.

I would love to dedicate more time to just sitting down and thinking about this, but the problem we all have is that there are so many other things happening in life. I think there should be people who dedicate their time just to thinking about and solving problems like this, because you're absolutely right: history is full of majorities making decisions that weren't wrong for the majority, but that we wouldn't call good decisions.

AD: I want to go back to what you said earlier, that we shouldn't be trying to create more Einsteins but more Aristotles. AI is developing in such a way that we have AIs composing music—retracing the paths of creative minds like a Mozart or a Beethoven—but that's not real creativity. Do you think that AIs will develop true creativity like humans at some point, developing new concepts, things like that?

Robot Drummer

AK: I definitely think it's possible, and coming back to what I said, creating wiser artificial intelligence—or artificial wisdom—is actually part of that. At the moment, even if we have those brilliant AIs that compose music, if you ask that same AI, "What is the capital of Estonia?" it doesn't know what you're talking about, because it was created for one specific purpose.

This shows that we're still looking at narrow intelligence instead of a wider form of intelligence. At the moment, it's very difficult to get the AI that beat the human world champion at chess to make a cup of coffee in a kitchen it's never seen. Or maybe—excuse this example, maybe don't use it—one of the interesting problems is choosing the right urinal in a bathroom you've never seen before, because the amount of human intelligence that goes into that process is quite staggering.

If you go in and there are five urinals, you take the one on the side. If someone walks in, they're not likely to take the one next to you; they'll go to one a few spaces over, and so on. There's a lot of inference in that calculation that's not only intellectual—it's social, emotional, cultural—and you have to have lived it, comprehended it, and understood it from practice.

So with AI we have been quite narrow, even from a mathematical modeling standpoint. Some will say: we want to solve the problem of chess, let's use a neural network. Others: we want to create an air conditioner for climate control, let's use fuzzy logic. We want to create an AI that can ride a bicycle, let's use reinforcement learning.

Algorithm
Source: DepositPhotos

AD: Can you expand on these a bit?

AK: Sure. Reinforcement learning is beautiful; it mimics the way humans learn. In reinforcement learning you have two mathematical functions, a reward function and a punishment function, so it's very similar to the way the human mind works. If you're riding a bike and you make the wrong move, you fall, it hurts, your punishment function is triggered, and you know never to make that move again. If you're going in a straight line and you're happy about it, you know that's how you should ride, and it triggers a reward function in your brain.

It's the same with reinforcement learning: if you make a mistake, you mathematically say "never do that," and if you do the right thing, you mathematically say "do more of that behavior." There are also genetic algorithms, which copy the mechanisms of natural selection to find the most optimal solution to a specific problem, like portfolio management.
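As a toy illustration of that reward/punishment loop—a minimal sketch built around an invented one-dimensional "bike balance" problem, not any system discussed in the interview—here is tabular Q-learning in a few lines of Python:

```python
import random

# Invented toy problem: states 0..4 on a line; reaching 4 means "riding straight"
# (reward), falling off the left edge means "falling off the bike" (punishment).
ACTIONS = (-1, +1)                       # lean left / lean right
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for _ in range(500):                     # training episodes
    s = 2                                # start in the middle
    while 0 <= s < 4:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        a = (random.choice(ACTIONS) if random.random() < epsilon
             else max(ACTIONS, key=lambda act: q[(s, act)]))
        s2 = s + a
        if s2 < 0:
            r, done = -1.0, True         # punishment function: the fall hurts
        elif s2 == 4:
            r, done = +1.0, True         # reward function: riding straight
        else:
            r, done = 0.0, False
        best_next = 0.0 if done else max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])  # Q-learning update
        s = s2

# The learned policy leans toward the reward from every interior state.
print({s: max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(4)})
```

The negative reward on falling plays the role of the punishment function, and the positive reward for riding straight plays the role of the reward function.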

These are not competing worlds; I think there is a future for all of these mathematical models in creating hybrid systems. There's no mathematical way to capture things like common sense or gut feeling, but fuzzy logic is the closest mathematical model that can get you there. There's no mathematical model for teaching a child to count from one to ten, but a neural network is the closest to that. There's no mathematical model for learning to ride a bicycle, but reinforcement learning gets you closest to that.
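For a flavor of the fuzzy-logic side—again just an illustrative sketch with made-up membership functions, in the spirit of the climate-control example above—here is a thermostat that reasons in degrees of "cold" and "hot" rather than with a hard true/false threshold:

```python
def cold(t: float) -> float:
    """Degree (0..1) to which temperature t feels 'cold' (made-up curve)."""
    return max(0.0, min(1.0, (18.0 - t) / 8.0))

def hot(t: float) -> float:
    """Degree (0..1) to which temperature t feels 'hot' (made-up curve)."""
    return max(0.0, min(1.0, (t - 22.0) / 8.0))

def fan_speed(t: float) -> float:
    """Blend fuzzy rules: 'if cold, fan off; if neutral, fan low; if hot, fan high'."""
    c, h = cold(t), hot(t)
    neutral = max(0.0, 1.0 - c - h)
    # Weighted average of each rule's crisp output (0%, 30%, 100% fan power).
    return (c * 0.0 + neutral * 0.3 + h * 1.0) / max(c + neutral + h, 1e-9)

for t in (10, 20, 26, 32):
    print(f"{t} C -> fan at {fan_speed(t):.0%}")
```

Unlike a hard threshold, a reading of 26 degrees is simultaneously "somewhat hot" and "somewhat neutral," and the output blends both rules—closer in spirit to a gut feeling than to a true/false test.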

If we combine all of those, we'll achieve a form of artificial intelligence that is much more encompassing and understands different facets of problem solving, which will lead us to a wider intelligence that doesn't focus on one specific problem. I think we should also look at things from a data-agnostic, behavioral standpoint, because right now, with most AIs, if I have an AI that will trade on financial markets, I will only train it on financial data. If I have an AI that will predict the weather, I will only train it on weather data.

But we have to create AIs that are data-agnostic, because we, as humans, are data-agnostic. If you decide to become a carpenter, you just learn carpentry; you're not born knowing from day one that you'll be a carpenter. Our brains are wired so that we can learn different things at the same time. With AI, that is just not the case.

AD: So are you saying that we can create AI that has desires, like—I don't want to say this but it's the closest thing—feelings? Is that possible, did I get that right?

AK: Feelings are a very, very human thing.

AD: I know, but what I'm getting at is can we create an AI that can decide that it wants to become a carpenter?

AK: Possibly, if it's been exposed to it. The key is that AI should be created in such a way that whether I want to turn it into a carpenter or a self-driving car, it's based on the same core principles. I shouldn't have to build it from scratch to be a carpenter. I need a core AI component: if it was to evolve into a self-driving car, I would just add the self-driving component, and if I wanted it to be a carpenter, I would just add the carpentry component. The core AI "brain" should be data-agnostic.
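A minimal sketch of that architecture—purely illustrative, with hypothetical names like `CoreBrain` and `Skill`—might look like a single task-agnostic core that domain-specific components plug into, rather than a model rebuilt from scratch for each task:

```python
from abc import ABC, abstractmethod

class CoreBrain:
    """Task-agnostic core: shared learning machinery, no domain knowledge baked in."""
    def __init__(self):
        self.experience = []             # everything the core has ever observed
    def learn(self, observation: str) -> None:
        self.experience.append(observation)

class Skill(ABC):
    """A pluggable domain component that reuses the same core instead of a new model."""
    def __init__(self, core: CoreBrain):
        self.core = core
    @abstractmethod
    def act(self) -> str: ...

class Carpentry(Skill):
    def act(self) -> str:
        return f"cutting a joint, drawing on {len(self.core.experience)} observations"

class SelfDriving(Skill):
    def act(self) -> str:
        return f"changing lanes, drawing on {len(self.core.experience)} observations"

core = CoreBrain()                       # one data-agnostic "brain"...
core.learn("watched a joinery tutorial")
core.learn("drove a simulated roundabout")
print(Carpentry(core).act())             # ...specialized by adding a component,
print(SelfDriving(core).act())           # not rebuilt from scratch per task
```

The design choice the sketch illustrates is exactly the one described above: the domain component is additive, while all accumulated experience lives in the shared core.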

Dr. Abdalla Kablan
Source: Dr. Abdalla Kablan / Twitter

AD: We're about out of time, did you want to wrap up with any closing thoughts on the matter?

AK: Yeah, like I said, I am neither pessimistic nor optimistic; I just feel the weight of responsibility—not just on my shoulders but on the shoulders of everyone in this space—to make the right decisions today in order to safeguard tomorrow. This could really go either way, and the decisions we make today are definitely going to influence the future.

Just take mobile phones. We are already so addicted to them, and they have no intelligence in terms of how they interact with us. Imagine what will happen if your phone realizes how addicted you are to the internet—it could manipulate you in ways beyond belief.


Again, we have to be responsible. Like everything in humanity, you will have responsible people and irresponsible people—that's why we have problems, that's why we have wars, that's why our history is replete with instances where we almost destroyed ourselves completely with disasters of our own making. You'll always have the irresponsible ones.

The problem is that we have to instill a sense of responsibility, from now on, on the developers' side. I really like, for instance, how at events like this more people are using paper cups than plastic cups, because we've started to realize that plastic is destroying the planet. You meet people now and they're like, "Don't give me a plastic cup, give me a paper one." That's great!

On the developers' side, we have had no sense of responsibility whatsoever; it's always been about doing whatever you feel like. But now there's a new school of thought that tells you, "Listen, whatever you do, just don't fuck up the future!" So that's my takeaway from all this.

AD: Thank you Abdalla, this has been a great conversation for me.

AK: Likewise, Ali, this was very intellectually stimulating, thank you.
