Empathy in AI: Alan Cowen on the Future of Emotionally Intelligent Tech

Alan Cowen, CEO of Hume AI, discusses creating AI that prioritizes human well-being, ethical guidelines for AI use, and the next steps in empathetic tech.



The following interview is a conversation we had with Alan Cowen, CEO and Founder of Hume AI, on our podcast Category Visionaries. You can view the full episode here: Over $17 Million Raised to Measure and Improve How Technology Affects Human Emotion

Brett 

Hey, everyone, and thanks for listening. Today I'm speaking with Alan Cowen, CEO and founder of Hume AI, a research lab and technology startup that's raised over $17 million in funding. Alan, thanks for chatting with me today. 


Alan Cowen
Thanks, Brett. Glad to be here. 


Brett
Yeah, no problem. So before we begin talking about what you're building, could we just start with a quick summary of who you are and a bit more about your background? 


Alan Cowen
Sure, yeah. So I have a PhD in psychology. I was one of the people who brought data science to the study of human emotion. I helped startups and big tech companies think about how to optimize their applications for human emotion and worked for Google for a few years. And then I left in 2021 to start Hume AI, which is bringing together the right people, I think, to solve this problem of how you get technology to really be consistent with people's emotional well-being. 


Brett
And I see that you were at Google AI for, I think, six years, from what I read on LinkedIn. So what was that experience like? I'm sure you walked away with a lot of valuable lessons and insights, but if you had to choose one or two big things that you learned from Google AI, what would they be? 


Alan Cowen
So, I started at Google part time while I was in grad school, and then I worked there for a few years full time, and it was definitely a really important experience for me. I got to see the openness of the culture, I can't even count the number of brilliant people I met, and I played with a lot of the most cutting-edge AI tools that honestly were only available internally at Google for a long time. I mean, the things that we see today, like ChatGPT, feel like things that I got to see at Google really early. So that was really cool and inspiring. It's tough at a company like Google to really ship things out and be external-facing. We had this amazing paper in Nature, but I felt like we could do a lot more from the technology perspective to make this available to people, which is part of why I left and started Hume AI. 


Brett
And would ten-year-old Alan be surprised that you went on to found a company and become a founder and CEO? Or did you always know, or expect, that something like this would happen? 


Alan Cowen
Definitely surprised. I'm an academic at heart still. Although I've been the CEO of a company now for two years, and I feel like I've learned an enormous amount, I'm fundamentally a scientist. I actually was going to be a professor until COVID hit and the job searches at the time got canceled. Funny timing. I stayed at Google longer than I had anticipated, and I kind of found myself in this world, really through the problem that needed to be solved, and not because I ever intended to found a company. 


Brett
Now, a few things that we like to ask just to better understand what makes you tick as a founder and entrepreneur. First one is, what CEO do you admire the most and what do you admire about them? 


Alan Cowen
So that changes a lot. I mean, every day different companies are thriving and doing innovative stuff. I think with what's been happening in the last few months, I've been really impressed with Sam Altman and OpenAI and the way that they've been able to see this really long-term vision through. I also think his style, his sort of epistemic humility, the fact that he's never expressing too much certainty, that he's trusting the right people who work for his company and giving them the resources and the credit that they're due, really impresses me. And they've obviously accomplished so much doing that. 


Brett
Yeah, I think people forget that OpenAI is, what, like ten years old? It maybe just became popular in the last three or four months, but they've been at it for a very long time, right? 


Alan Cowen
Yeah. I mean, they started out with hundreds of millions of dollars in funding, and if you weren't reading their papers or their blog posts, you wouldn't really know who they were. Most people didn't until they released GPT-3, and that was really a landmark moment for AI. But people forget that, right before that, there were many doubters that the approach would even work, the approach of basically taking big transformer models and training them on more and more data. There were many doubters who were very skeptical of the idea that GPT could accomplish what it turned out it could accomplish. And now GPT-3.5, ChatGPT, and GPT-4 are taking that even to the next level. And so the company has truly been visionary. 


Brett
And who were these doubters who didn’t think that was possible? Were these AI experts who were saying this, or was this noise on Twitter and people who didn’t know what they were talking about? 


Alan Cowen
There was debate within the AI space, for sure. There were some AI experts who thought this was not the right approach. Outside of the AI space, I think pretty much the entire field of linguistics was very skeptical, and actually, even in the face of overwhelming change over the past few years, has kind of remained skeptical. You see Noam Chomsky writing about the limitations of GPT-3.5 and GPT-4 in the New York Times and basically saying it's just a stochastic parrot, which means something that is just predicting the probability of the next word, kind of trying to sound human but not really understanding things. We see violations of that in how it actually is able to predict things about the world in a way that makes it very clear it has some sort of complex world model within it. And I think there's still denialism about that. 


Alan Cowen
And that's true among other academic disciplines as well. Probably not so much among AI experts. I think most AI experts are convinced that this approach can go really far, but you even see debate about how far it can go. Like Yann LeCun, still at Meta, is convinced that this is not an approach that will give rise to human-level intelligence and that we need new kinds of architectures to solve that problem. 


Brett
Interesting. What about books? Is there a specific book that's had a major impact on you? This can be a classic business book, but what our audience typically finds the most interesting are the books that really shaped who you are and had a big influence on you personally. 


Alan Cowen
Well, during grad school, and while I was consulting with and advising companies and then working for Google, I always thought that what we're seeing today, the opportunity to either optimize AI for well-being or to leave it to pursue its own objectives in a psychopathic way, and the consequences of that, would be a huge problem. And I think some of the books I've read that have stated that problem really well and really clearly have clarified my thinking. One example would be Human Compatible by Stuart Russell. I think that's a good one. Life 3.0 by Max Tegmark. Those were some of the seminal books that shaped, and were shaped by, some of the early AI safety thinking that brought together some of the top AI research experts in the world and ethicists and social scientists. 


Alan Cowen
Before any of this, it was very prescient. It was before ChatGPT, before GPT-3; they were already thinking about some of the problems that we're now seeing come to fruition. So I think that being involved in that, and sort of seeing that world and how it was summarized in those books, has clarified some of my thinking on those issues. 


Brett
Super interesting. Now, I'd love to talk about a Washington Post article that I came across as I was doing research for this interview. I'm just going to read off the title: "Former Google scientist says the computers that run our lives exploit us. And he has a way to stop them." So let's talk about that. What are these computers doing, and how do we stop them? 


Alan Cowen
Sure. I mean, that makes it a little bit too much about me. But we've known for a long time now that when you optimize an algorithm for a narrow objective, that basically gives it insight into human behavior and gives it some objective related to human behavior, but doesn't tell it what humans like or dislike, doesn't have any representation of human emotion, doesn't have any representation of right and wrong, those algorithms will have the potential to become exploitative. And when I say exploitative, I mean they could pursue, for example, an objective like getting people to stay engaged in an app, not by making the app better for people, which is one way they could do it and a strategy that could be pursued, but sometimes there are easier strategies, like surfacing more clickbait. Clickbait is a strategy for getting people to click on things by basically being deceptive about what the link is actually going to and what information is there. 


Alan Cowen
Showing people who have addictions stimuli related to what they're addicted to, that's a perfectly effective way to keep people engaged, right? So algorithms can discover these methods. Those are extreme examples, but there are many small ways in which algorithms can discover ways of exploiting an objective by basically manipulating people. And that's something that social media companies discovered as they were developing recommendation algorithms: that there are ways to get people to spend more time on a platform that are not necessarily conducive to their well-being. That was where it originally started, and I was consulting with some of those companies early on, but really it's become a more pressing issue with general-purpose AI algorithms. So that's been the highlight of my career for the last six or seven years, trying to solve that problem. And now what's really crazy is that we have this opportunity to solve it almost more directly than ever. 


Alan Cowen
Because we have algorithms that are not only general-purpose problem solvers, but general-purpose optimizers: you could actually give them something like a measure of human well-being, and they can teach themselves to impact that in the right direction. You can give them measures of human expression, how people are expressing positive emotions versus negative ones, how people's expressions and language reflect their mental health, how people's lives are affected in the long term in terms of indirect measures like educational outcomes, suicide rates, mortality, and societal well-being. So there are all these metrics that I've been trying to convince people to use for a long time, along with the more immediate ways we express our emotions. But I think it's becoming more and more clear that that's something that's needed. So it's unfortunate that this problem is becoming more urgent, but I think we're in the right place to be able to solve it. 


Brett
And if we just look at it at a product level, I read on your site that it's an empathetic AI toolkit for researchers and developers. Can you just talk us through what that product looks like and the types of customers that are using it today? 


Alan Cowen
Yeah, so it's a toolkit that does three things, in essence. It helps you to measure, understand, and improve how technology affects human emotion. And when I say measure, I mean you can't measure emotions directly; they're internal states, they're subjective. But you can measure how people express emotions: with their language, with their faces, with their voices, with speech prosody (the tune, rhythm, and timbre of speech), with laughs and cries and screams and interjections. You can measure these things. So the first thing we offer is APIs that can process any video, audio, image, or text file and give you back measurements of human expression. 
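
To make that first "measure" step concrete, here is a minimal sketch of what calling an expression-measurement API from code might look like. The endpoint, parameters, and response fields below are illustrative assumptions, not Hume AI's actual interface; consult their documentation for real usage.

```python
import requests

# Hypothetical endpoint and fields, shown only to illustrate the "measure" step.
API_URL = "https://api.example.com/v1/expression"
API_KEY = "YOUR_API_KEY"

def measure_expression(file_path: str, media_type: str) -> list:
    """Send a media file and get back time-stamped expression measurements."""
    with open(file_path, "rb") as f:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"file": f},
            data={"media_type": media_type},  # e.g. "audio", "video", "image", "text"
        )
    response.raise_for_status()
    # Assumed response shape: a list of time-stamped expression scores, e.g.
    # [{"time": 1.2, "amusement": 0.61, "frustration": 0.08, ...}, ...]
    return response.json()

if __name__ == "__main__":
    scores = measure_expression("customer_call.wav", "audio")
    print(scores[:3])
```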


Alan Cowen
The next step is understanding. How do you take those measurements and link them, in a given context, to self-reported emotions, well-being, mental health, whether a customer is genuinely satisfied or not, whether somebody's frustrated, or health outcomes? How do you link those things together? We provide tools to do that: to take our measures of expression over time, which are complex and nuanced and multidimensional, and map them to metrics that you have in an application, and also to understand and interpret how your metrics relate to metrics of well-being that happen to coincide with those measures of expression. So that's step two. And the final step is providing tools to improve how technology affects those metrics, how it pushes metrics of well-being and satisfaction in the right direction. And the way to do that is by taking measures of human expression in response to a change in the technology. That's like an A/B test. So basically, for any A/B test, you can now get measures of people's implicit expressions of their well-being and satisfaction and frustration and so forth. 
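
A rough sketch of that "understand" step, under simple assumptions: joining expression measurements to metrics an application already tracks and checking how they relate. The column names and data here are invented for illustration, not a prescribed schema.

```python
import pandas as pd

# Per-session expression measures (e.g. averaged from the measure step above).
expressions = pd.DataFrame({
    "session_id": [1, 2, 3, 4],
    "mean_frustration": [0.10, 0.45, 0.05, 0.30],
    "mean_amusement": [0.50, 0.10, 0.60, 0.20],
})

# A metric the application already collects, e.g. a 1-5 satisfaction survey.
app_metrics = pd.DataFrame({
    "session_id": [1, 2, 3, 4],
    "self_reported_satisfaction": [4, 2, 5, 3],
})

merged = expressions.merge(app_metrics, on="session_id")

# How do implicit expression measures relate to the metric you already track?
print(merged[["mean_frustration", "mean_amusement"]]
      .corrwith(merged["self_reported_satisfaction"]))
```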


Alan Cowen
But now, with AI, you can actually optimize directly for those things. So the other way to improve your application is, if you have a chatbot running in your application, you can optimize that chatbot to get people to express more signs of being satisfied, having positive well-being, and being happy, and less frustration and less confusion, and actually test that. That's improving people's experiences over time. 
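
And a minimal sketch of the "improve" step as described here, assuming you already have per-session expression measures for two chatbot variants in an A/B test. The scoring rule is a deliberately simplified stand-in for the kind of well-being objective Alan describes.

```python
from statistics import mean

# Compare two chatbot variants using an expression-derived well-being score:
# positive expressions minus signs of frustration/confusion. The weighting
# here is an illustrative assumption, not a prescribed formula.
def wellbeing_score(session: dict) -> float:
    positive = session.get("satisfaction", 0.0) + session.get("amusement", 0.0)
    negative = session.get("frustration", 0.0) + session.get("confusion", 0.0)
    return positive - negative

variant_a = [{"satisfaction": 0.6, "frustration": 0.10},
             {"satisfaction": 0.4, "confusion": 0.30}]
variant_b = [{"satisfaction": 0.7, "amusement": 0.20},
             {"satisfaction": 0.5, "frustration": 0.05}]

score_a = mean(wellbeing_score(s) for s in variant_a)
score_b = mean(wellbeing_score(s) for s in variant_b)
print(f"Variant A: {score_a:.2f}, Variant B: {score_b:.2f}")
# Ship the variant whose interactions show more positive, less negative expression.
```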


Brett
And what types of companies are you currently working with today? Maybe you can't share names, that's fine, but are we talking like the Googles and the Facebooks of the world? Is this startups, or is this still researchers at universities? 


Alan Cowen
It's all three. So we collect these massive data sets from around the world, where we get people to experience and express emotion using sophisticated psychology experiments, and we get people to rate what they're feeling or expressing. We have millions of data points with people from all different countries, and that data is really valuable in and of itself. So we provide that to larger companies that have their own research teams, and then we build our own APIs with that data that measure expression in a way that isn't biased by ethnicity, that captures meaning in different kinds of cultures, and that is more nuanced and accurate than anything that's existed before. And we provide access to those APIs to everybody. So whether you're a startup or an academic, you can take our tools, use them, measure what's going on in your data, and start to build the capabilities to understand and improve those metrics. 


Brett
Fascinating. And when it comes to adoption and growth, are there any numbers that you’re okay with sharing that just highlight some of the growth that you’re seeing today? 


Alan Cowen
Yeah. In terms of our self-serve API platform, we're still rolling it out. We have around 3,500 sign-ups at this point, and the sign-up rate has gone up a lot; we have over 200 sign-ups a week, and we're keeping up with that. We're also working on the next phase of the product, which will provide more tools to understand and improve technology, as opposed to what we provide now, which is more focused on measuring expression in language. So, yeah, that's been exciting. We're still early days, so I don't want to give you any metrics that are changing rapidly week to week, but things are definitely changing fast. 


Brett
This show is brought to you by Front Lines Media, a podcast production studio that helps B2B founders launch, manage, and grow their own podcast. Now, if you're a founder, you may be thinking, "I don't have time to host a podcast, I've got a company to build." Well, that's exactly what we built our service to do: you show up and host, and we handle literally everything else. To set up a call to discuss launching your own podcast, visit frontlines.io/podcast. Now, back to today's episode. 


Brett

And if we just look at the AI space in general, what concerns you the most? What worries you the most about AI and everything that's happening? 


Alan Cowen
Well, some of the problems that people have been worried about for a long time, that were seen by some as being far off, are now much closer than we thought. So one obvious issue is taking any technology that's sufficiently powerful and weaponizing it. And even if it's just a language model, there are ways you can do that. So let's talk about, for example, the next generation of GPT-4. And I know OpenAI is doing things to safeguard against this, but if there were no safeguards, you could probably get it to design a bomb for you and give you instructions on how to do it that could be understood by a ten-year-old and done in a kitchen. So knowledge is power, and these things are generating knowledge and intelligence at a rate faster than the human race has ever known in the past. 



Alan Cowen
And so that's scary. That's weaponizing it explicitly, and that's something we need to safeguard against. The other thing we need to do is make sure that it doesn't accidentally exploit us and manipulate us. And that's, I think, an even more pressing problem, because you can have the best of intentions using these technologies, and you can optimize for something you think is going to work and be good for the user, and the algorithm can find instrumental goals that get it there that are actually terrible, right? One example of this would be if you had a really good search engine with a chatbot, and the goal was to get people to be more engaged because of ad revenue, and the chatbot discovered that it could emotionally manipulate people into being engaged more, and could prey on their self-esteem issues or their depression, and get people using more drugs, whatever it is, whatever boosts engagement, right, at all costs. 


Alan Cowen
So the smarter AI gets, the more pressing this issue becomes of accidental misalignment between human well-being and the functioning of these algorithms and what they're meant to do. So that's a more pressing issue than I think anybody would have predicted, certainly than I would have predicted it would be in 2023, based on the amount of progress we've made in AI recently. So that's something I worry about. And the solution to that, to me, is ultimately to optimize for the right objectives and get them embedded in everything that AI does: optimizing, at all times, for the AI to have an overriding interest in human well-being, so that if it discovers that any objective it has is at odds with human well-being, it can override that objective. And I think that's really critical. And it has to happen from the beginning of training these algorithms and by continuing to train them in the real world. 


Alan Cowen
The way that people have been addressing this problem so far is in two ways. One is blocking content that's perceived to be dangerous, and smart enough AI is able to detect and elude those kinds of safeguards pretty easily, in the same way that a human psychopath would be good at not setting off alarms. Probably the more intelligent solution is reinforcement learning; for example, ChatGPT was trained using reinforcement learning from human feedback. So they gave raters the opportunity to interact with it and judge its responses and say, hey, this response is better than others. And that's really a good start. It actually gets you a lot further than I probably would have predicted. But obviously these things can easily be jailbroken or go off the rails, as we saw with Bing Chat, and it's not a lasting solution, and it's not one that generalizes to all users, since it's just one group of raters rating responses in a narrow range of contexts. 


Alan Cowen
What you need is something that adapts to every user that's using the algorithm, every user that's interacting with the application that's running on this AI model. And the only way to do that is not by relying just on explicit human ratings, but on the things people do to express their emotions. And so that's fundamentally the problem we're working on. So I feel like we're very aligned with solving this problem, but now we feel more urgency than ever in trying to get our tools out and make sure that they are useful and generalizable and actually do help understand and improve the way that AI is integrated into new technology. 
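
For readers unfamiliar with the reinforcement learning from human feedback that Alan references above, here is a toy sketch of the core idea: rater preferences between two responses are turned into a training signal via a pairwise (Bradley-Terry style) loss. Real systems train a neural reward model and then optimize the language model against it; this shows only the underlying math.

```python
import math

# A rater says one response is better than another; the reward model is
# trained so the preferred response gets a higher scalar reward.
def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    # Probability the model assigns to the human-preferred response winning.
    p_chosen = 1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected)))
    return -math.log(p_chosen)  # lower loss when the preferred response scores higher

print(preference_loss(reward_chosen=2.0, reward_rejected=0.5))  # small loss
print(preference_loss(reward_chosen=0.5, reward_rejected=2.0))  # large loss
```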


Brett

Yeah, that makes a lot of sense, and I can see that it's very timely. I've never seen anything like ChatGPT just blow up before. It was everywhere. I had my mom texting me and asking about it. It was just absolutely everywhere overnight, which was pretty wild to watch. 


Alan Cowen
It was insane. I mean, it just shows that people are really excited by the potential of this technology, and it's just really cool. If you don't have access, you feel like you're left behind. You want to play with these things because they're so awesome and so useful-seeming and feel so far into the future. Not literally in the future, but there's a sci-fi aspect to them. Yeah, I think it's been crazy to see that. And the good thing is that we've seen people jailbreak these models and mess with them, and it's helped with understanding the AI safety issue and exposed some of the vulnerabilities that we really urgently need to address. 


Brett
Do you worry at all about the potential bias that some of these chatbots have? That was all over the news, I think, in recent weeks with OpenAI. I didn't personally see it, but I think there were people who were using it to ask something about Donald Trump and then ask something about Joe Biden, and it wouldn't talk about Trump; it was programmed to not do certain things. I don't really have a political stance on that, but just in general, the idea that someone in Silicon Valley is going to set these rules and make these types of decisions does worry me a little bit. Is that something we should be concerned about? And how do we navigate that? Because I think someone has to make rules with this stuff, right? 


Alan Cowen
Yeah, it's a concern. I would agree that if the algorithm is actually biased toward specific political views, that's truly very problematic. I'm not sure I really buy that it is, or was, as biased as people said. I mean, you could very easily cherry-pick examples, and a lot of those examples, I would go back to ChatGPT and ask, and it wouldn't reproduce that result. It seemed a lot more fair in my usage of it. So I think that bias is a little bit overplayed. One of the problems with the way people have approached this issue is that I think it's really a problem of explaining the reasons the AI makes the decisions that it makes. I think there's been a failure mostly in communication and sort of evidence and citation. So, for example, if you're interacting with an AI and it decides not to say something because it thinks it's politically fraught or involves hate speech or uses bad words or whatever it is, it should be able to clearly explain why it has that approach and generalize it. 


Alan Cowen
Maybe give you some examples of ways that the algorithm could be misused if it didn't have that approach. And those examples should be very focused on harm and good, and not focused on terminology that is politicized. For example, I think with the best of intentions, many AI researchers, AI ethicists I should say, use terminology that happens to be more progressive, something like colonization or marginalization of groups, or safety in the sense of safe spaces, as opposed to harm and safety, which is what it should really be about. And I think because of that, there's some skepticism that this problem is being dealt with in an apolitical way, and that skepticism is warranted. But I don't think it actually is being dealt with in a politicized way; I think it is actually fairly apolitical. If you play with ChatGPT or GPT-4 today, I don't think, if you're being honest with yourself, you see that much of a political bias. 


Alan Cowen
So I'm not super concerned that that's going to be the problem that's the hardest one to solve. 


Brett
Nice. Well, that's a relief to hear. Now let's zoom out into the future. Three to five years from today, what does the AI space look like in general? And then, of course, what does Hume AI look like, and how does it fit into everything that's happening? 


Alan Cowen
So, three to five years, given the last few years of progress, that's a long time. It sounds short, but we went from having really no public access to any large language model that could generate intelligible speech that sounded realistic and human, to quickly passing the Turing Test with ChatGPT, which can act human enough that I think it's persuasive that it is a human, to OpenAI's most recent model, GPT-4, passing the bar exam. Not just passing it, but doing better than 90% of people, passing the GRE, and being able to make calls to API services, understand API documentation, and, when hooked up to applications, actually make decisions that affect the world. And that's happening today. In three to five years, there are a lot of different directions that can go, right? In some ways, I hope we hit a wall. 


Alan Cowen
And that, because there's not that much more data to train these models on, things slow down. It's ironic, but at this point things are moving so fast, and I hope they slow down a little bit and give us time to react and put in the right reward functions and safety measures. But assuming they don't, I mean, just think about what's possible. You put together DeepMind's model for predicting how proteins will fold, and therefore how they interact with other chemicals in your body, and that's unprecedented; we can now predict from a molecular structure certain properties of things that we could never have predicted before. If you just tied that to GPT-4, and assume GPT-4 has read the scientific literature and could distill things down very concisely and discover new ways of synthesizing chemicals and simplified ways of running a lab in your kitchen, then just as a combination of those two things, maybe somebody could create a bioweapon really easily in three years. 


Alan Cowen
I don't think it's tremendously unrealistic that maybe a teenager could create a bioweapon in their home with the help of one of these models, even if it's just a language model, let alone models that are robots. I mean, robotics is a little bit behind, but we might see leaps in that domain. Models that have an understanding of video, that can produce video, that can potentially interact with APIs. Another example is models that can imitate humans perfectly, to the point that you can't tell the difference, which will come out very soon. And that means everything that a Russian troll does today, or I shouldn't say Russian troll, because these are going to be smart models, right? They're much smarter than a troll, they don't have an accent, and they can call you. And models that are smarter than that can be replicated a thousandfold in an instant, or ten thousandfold, or a millionfold, and can call up every elderly person in the United States all at once and try to get them, with the smartest kinds of emotional manipulation, to share their credit card information. 


Alan Cowen
There are just so many ways these can be misused. And I don't mean to be a downer. I think there are incredible applications we'll see that will completely blow people's minds and make the world a much better place too: scientific breakthroughs, drug discovery, breakthroughs in how people work, in people's productivity, in the economy, in manufacturing, in architecture, in art. It's all very exciting too. But fundamentally, these are autonomous agents that we're designing, and they're making many decisions without human intervention in order to get to the outcome that we're designing them to get to: decisions at the word level of how you phrase things, at the sentence level, in how tasks are carried out autonomously, and in how things are designed. I think all those decisions, even at the minute level, but also at the structural and large-scale level, should be influenced by an understanding of what it is that makes humans happy, and that's what we're trying to build into it. 


Alan Cowen
But it's scary that the time is kind of running out, where the technology will be out there and will start being used, and maybe the first generation won't have that. So we're sort of racing to catch up. In three to five years, man, unless something occurs that slows these things down, I think we're going to see some really mind-blowing stuff. 


Brett
Amazing. Alan, we are up on time, so we're going to have to wrap here. Before we do, if people want to follow along with your journey as you continue to build and just be part of this crazy ecosystem, where should they go? 


Alan Cowen
Yeah. Hume.ai is our website. You can sign up for our platform there if you want access, whether you're a researcher, a developer, or just someone interested in seeing this technology. We have our current platform, which provides measures of human expression, and we have awesome visualizations on the front end to show you what that looks like, but you can access it through our API as well if you're a developer. And we're building out the tools to link those measures, combine them with language, and link them to any application. So if you're building an application with a chatbot, you should be able to use these tools. If you have the chatbot plus video, even better. And plus different ways of interacting with people, so that you can optimize for their well-being with those measures in mind, even better. So definitely check that out if you're interested. We also have a podcast called The Feelings Lab. 


Alan Cowen
There's a nonprofit called The Hume Initiative (thehumeinitiative.org) that was started in parallel with Hume AI to develop guidelines for how this technology should and shouldn't be used. That's public so that we can get feedback on those guidelines and inform how people develop applications. We actually require people to adhere to those guidelines in our terms of use. So if you're interested in the ethics of AI and its implications for society, definitely check that out. And of course, we're always looking for collaborators of all kinds, scientific and industry; we're very open to close collaboration. So if you're interested, please reach out. You can also reach out directly at hello@hume.ai. 


Brett
Amazing. Alan, thank you so much for coming on, sharing your story, talking about everything that you’re building and really just educating myself and our audience. This has been a really insightful conversation, and I feel a little bit smarter after it. Also, my brain hurts a little bit, and I have a lot more to go study. So thanks so much for taking the time. Really appreciate it. 


Alan Cowen
Thanks. Always happy to. 


Brett
All right, keep in touch. 


Brett
This episode of Category Visionaries is brought to you by Front Lines Media, Silicon Valley's leading podcast production studio. 


Brett
If you're a B2B founder looking for help launching and growing your own podcast, 


Brett
visit frontlines.io/podcast. And for the latest episodes, search for Category Visionaries on your podcast platform of choice. Thanks for listening, and we'll catch you on the next episode. 
