CEO of AI Truth, Author – What You Don't Know
Did you know that AI is embedded into nearly every element of your life, from your smartphone to Alexa to streaming services? In this episode, Cortnie Abercrombie, CEO & Founder of AI Truth and author of What You Don’t Know: AI’s Unseen Influence on Your Life and How to Take Back Control, talks about what AI really is, what it does well, and how it affects everything from hiring biases to basic daily decisions about where to buy coffee.
Nobody wants to stop and smell the roses. Everybody wants to stop and see a train wreck. Nobody wants to see the good stuff that goes on, right? We only wanna see the bad stuff that goes on. And if that's the way we look at life, and that's the way we look at things online, the minute we click on that thing, more of it starts to come, and we start to go down rabbit holes that we shouldn't be going down.
This is Horizons, stories about what's next in the world. Powered by Compass Data Centers.
The biggest thing you interact with daily that has AI in it is your smartphone. So when we talk about what artificial intelligence is, think of all the things your smartphone can do daily. Things like popping up your route home right when you've just been to the grocery store, and you're thinking, why on Earth is it telling me I'm "seven minutes from home"? It's picked up on your patterns. And that's part of what artificial intelligence is: it's there to pick up on patterns, just massive, massive patterns all over the place, and connect them all together.
The thing AI does really well is machine learning: taking in tons and tons of massive feeds of data. And when I say that, think about all the different social media transactions going on daily. As a human, I can barely scroll fast enough to keep up with the feeds that come through, but artificial intelligence is actually parsing through those, and not only looking at them but analyzing them, analyzing them for all kinds of things, like sentiments. As a matter of fact, I worked with a group that was developing an AI social media command center, where you look at the patterns across certain sentiments about your brand. So when we think about all of that ability, that data is massive. It's streaming constantly, and it's changing constantly. And that's what AI is just really good at.
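The sentiment-tallying idea described above can be sketched in a few lines of Python. This is a deliberately naive illustration: the word lists and posts are invented, and a real social media command center would use trained models rather than keyword matching.

```python
import re

# Invented word lists for illustration; real systems learn these from data.
POSITIVE = {"love", "great", "amazing"}
NEGATIVE = {"hate", "broken", "awful"}

def post_sentiment(text):
    """Score one post: +1 per distinct positive word, -1 per negative word."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return len(words & POSITIVE) - len(words & NEGATIVE)

# A tiny stand-in for the constantly streaming feed.
feed = [
    "I love this brand, amazing service",
    "my order arrived broken, awful experience",
    "shipping was great",
]

scores = [post_sentiment(p) for p in feed]
print("net brand sentiment:", sum(scores))
```

Summing per-post scores over a rolling window is the simplest way to watch brand sentiment trend up or down as the stream changes.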
Let's say you go to your daughter's soccer game, or you have a soccer game yourself, and you're going there every Tuesday and Thursday, or let's say it's a pattern of Tuesdays and Thursdays. Then it's gonna automatically want to send you some kind of coupon, such as, "Hey, we noticed you're on your way back home, and on your path home is this retail store, so we're gonna geofence you." And when you're in that vicinity, based on your phone's location settings, you'll get a coupon that will drive your behavior to come in for a Coke or a hot dog or whatever. So, I mean, it's pretty brilliant, and it does drive our behaviors more than we even realize, because we think, well, we're thirsty, why not? It is driving our behaviors. And it is collecting data for every single song we're listening to, for every single app we're looking at, for each retailer we're looking at, whether that's Kohl's or Neiman Marcus or something like that. Or maybe we're looking at Foot Locker and Adidas, so we're sports people, or we're rich people, or whatever we are; it's taking in all that information, all that data. And now we even have our payment methods on our phones, so it can put all of those different elements of data together. So it's super powerful to say, you went to this retailer site this many times, or that app this many times, and cross-reference that with the location data that comes off the location settings of almost any app on your phone. It also looks at where you go to eat and what kind of lifestyle you lead, whether it's healthy or mostly sitting in traffic all day with cheeseburgers purchased on your MasterCard, which you might have done on your phone. All of that stuff is coming across on your phone as you use it.
And then the vendors who are behind all of those apps and all of the smartphone capabilities themselves, um, you know, all the ones who have the access to the data for all of that can then put that together in whatever ways they want. And so can any of the apps that are on your phone, especially if you don't disable location settings.
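A geofenced coupon trigger like the one described above can be sketched as a simple distance check. Everything here is an assumption for illustration: the coordinates, the 200-meter radius, and the coupon message are invented, and production systems rely on the phone OS's geofencing APIs rather than hand-rolled math.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in meters."""
    r = 6371000  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def inside_geofence(user_lat, user_lon, store_lat, store_lon, radius_m=200):
    """True when the user's reported location falls within the store's fence."""
    return haversine_m(user_lat, user_lon, store_lat, store_lon) <= radius_m

# Hypothetical: a user passing a store on the route home from the soccer field.
if inside_geofence(41.8790, -87.6298, 41.8795, -87.6300, radius_m=200):
    print("Push coupon: discount Coke and hot dog at the store on your route")
```

The behavioral nudge comes from timing: the check fires while you are already nearby and plausibly thirsty, which is exactly why it works so well.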
I think we always start scary with every technology. It always starts with no rules. We had no seat belts for a long time, other than, you know, your mom's arm or something. So everything, I think, has started scary. And unfortunately we as humans just have to learn the hard way sometimes before we start enacting regulations. Those are starting to come around, but AI as a whole is such a new field in the way that it is now. It's been around since the fifties, since Alan Turing really, but it just never had the ability to do what it has today. We've never had nearly as much data available as we do today, data massive enough that it actually makes sense to use artificial intelligence capabilities. Prior to this, though, it was used a lot in physics and other hypothetical sciences where you're trying to come up with probabilities for things that might happen: will this meteor hit Earth? Now that we're using it for such high-stakes capabilities, like driving us around and taking care of our health, I think people are listening. And so I think it is gonna become a positive thing quickly. I think where we are right now is just temporary.
We had the Great Resignation, right? Everybody called it the Great Resignation, but now we're actually seeing a great rehire too. We're also seeing people who resigned but just wanted a different type of job; they wanted to do something more meaningful for them, and that could be anything. And during the great rehire, there are thousands of people job hunting. So for any big job that would potentially bring in a lot of applicants, let's say it's someplace everybody wants to work, and they put out a job opening and 10,000 people suddenly swarm to it in one day. The recruiters have no way of dealing with this. So they're just thinking, "Okay, we gotta get through as many of these as possible and weed out only the best candidates." So they're gonna put AI to it. Unfortunately, they're gonna do exactly what I just said: they're gonna base it on their best employee, or two or three or four, or however many have always been there. If you've seen the tech industry stats about minorities and women, it's horrible. So when they do things like that, it's just gonna re-propagate these roles with more of what they already have.
And so there has to be a way to open this AI back up, or else they're just gonna keep recreating history, which is not what they want to do. When you break the data down to minorities and other underrepresented groups, that slice gets smaller and smaller by the minute. And so if we don't have those people at the table, telling us and contributing data and making sure that we're getting data that represents them, we're building AI that's going to unfairly classify them, unfairly deny them claims for their insurance, for example, and unfairly deprioritize them for care.
When we talk about emergency situations in hospitals, like COVID, the results are just catastrophic. I think there are so many good things that are going to come from AI as well that I don't wanna scare people too much, because I don't wanna see it go into a winter again, where we don't have investment in it and so forth. But we do have to set down some ground rules and some norms to take care of all of us, not just the majority.
What I want people to know is that AI is not nearly as complicated as people make it out to be. Yes, there are lots of ways to actually do what they're trying to accomplish, which is optimizing your outcomes in some way, shape, or form: optimizing the way your car drives, or driving for you. All of those things are producing some kind of outcome for you. This is not nearly as complicated as it seems, and you can understand it enough to take back some control of your life. You are in control at all moments of how you react to these escalating recommendation engines online. You are in control.
How does AI help and hurt predictive policing? Listen in as Cortnie Abercrombie discusses the controversies surrounding this new landscape of crime monitoring and law enforcement.
Predictive policing. This one is a controversial subject, because on one hand, if you're the victim of a crime, you definitely want that person to be caught. You're just filled with emotions, and hopefully you've come out of it okay. But that's one side of the coin. Then there's the other side, where we know that certain artificial intelligence capabilities aren't quite there yet, and so we start using proxies of information. What I mean by that is, there was this group that had a sound feature that would predict where crime was happening and, in their case, who the criminals would be. They did the "where" part, and they based it on sounds that sounded like gunshots. That's where the criminal activity was, in their minds and in the minds of the data scientists who were coding the algorithms, which are the sets of instructions that become the equations that get into the code in the systems. And they would code basically anything that sounds like gunfire. The problem with that is, if you think about all the different groups and how they celebrate, for example, and I'm just giving you one example of how to refute this thinking: the 4th of July, firecrackers going off everywhere. We have sets of people who are exactly alike coding these things, not thinking about what the traditions might be, or even that we do the 4th of July, because sometimes these things are outsourced to other countries that aren't even familiar with U.S. customs. And they may build that algorithm and think that gunshot sounds are a perfectly fine way of trying to figure out where criminal activity is occurring. But we know there are other ways of thinking, and sometimes that doesn't always get into the algorithms, especially not predictive policing ones.
And the other part of the equation of predictive policing is trying to figure out who is gonna commit the crime. This scares me to no end, not because of the intent. The intent is good; we always wanna know before a crime is gonna happen, and that'd be the best thing possible. The problem is how that gets executed in terms of the data that's available, and how you would know enough about a single individual to target them as a person who might commit a crime. Even if they have a past criminal history, it's not necessarily indicative of whether they're gonna commit another crime in the future. And so the data available, and the way that we look at criminal activity, I think, need to be much, much further researched before we start trying to rely on predictive policing outcomes. There is a horrible case in Tampa where they're literally focusing on 15-year-olds and an autistic person. These are people who don't fit the norm for criminal activity. And so that is the question: how do we handle artificial intelligence and data? We know how to handle the majority, not the minority. And so when someone falls outside of that majority, we don't know from a data standpoint how to deal with it. We don't deal with it very well. It's an outlier at that point.
The AI your employer uses could be taken too far, infringing on your private, personal life. In this video, Cortnie Abercrombie discusses how employers are taking monitoring too far in our post-pandemic world.
We do not have rights when it comes to being monitored in our work environment. And that is a huge concern, because how far can employers take that? Right now, there's no legal recourse for how far employers can go. And when we talk about the pandemic, we're talking about a time when everybody's been digital, so you're in your own home. And it's kind of creepy to think about a company, whether you have your own home computer that you've been using for that company, or whether you're using their computer and the software is on it already, or they have you download software that will track your keystrokes and where you spend time. There's certain software out there already that is being used to understand productivity; it'll actually put out reports on how long you're on LinkedIn, how long you're on Facebook, Twitter, Instagram, or whatever other social media outlets while you're at work. There's software that will understand what kinds of sites you're going to and whether they're related to work or not, such as, are you looking for another job, for example. And that's where I think a lot of misunderstanding can also come in. In sales applications, some companies even have location tracking that they ask their employees to download to their phones. In one known case, a woman downloaded such an app to her phone at her company's request, and her boss took it to a creepy level. He was like, "I even know how fast you were going in your car." So then, as employees, where's our recourse for that? "Hey, you've crossed the line now; you're sitting here telling me how fast I'm going, whose houses I'm visiting among my own friends, and who I'm romantically involved with, based on where you're seeing my dot on a map." That's the concern. I mean, there are a lot of things going on right now where I think we need policies and regulations, especially as we see that we have fewer and fewer rights on the data privacy front, and especially when it comes to employers, because that's your livelihood.
Are we steps away from automation taking away our jobs? Maybe. Maybe not. Listen in as we discuss where automation will have a larger role in some industries than in others.
You could actually be automated out of a job. Anybody can be automated out of a job today. Let's say I'm an accountant, and I open a particular system, and then I open another thing, and then I compare the two, and then I close the thing and enter the results, and then I do this other thing. Each one of those is what they call a little bot, with just a little RPA, robotic process automation, capability. When you start putting AI into that, those little bots are now learning your patterns and what you're doing as an accountant. So it's stringing together all those little activities you just did, in little bits of code, that are now gonna become a house. The Legos are becoming a house, or whatever it's meant to be. That's how I would describe how we are replacing jobs. There are certain parts of everybody's job that can be replaced, probably the pretty boring parts where you're doing repetitive things. When you think about it, there have gotta be some elements of your job that stay the same in order for it to even be a candidate for automation in the first place, and there have gotta be super repetitive parts of your job. The more creative, the more strategic a job is, and the more you really have to have intuition and care to do it, the less automation. So like nursing, for example, where it's not just about the activities but also about showing people that they're loved and cared for. You've gotta have a certain amount of nurturing or intuition to understand how people got to the health situations they're in, or whether they'll even take their medicine; things like those are not so easily replaced. So I would say it really depends on the nature of the job, and whether any data can be collected about your job to help an AI actually do it.
And I would say also, in the ways that I've seen intelligent process automation done, you're typically looking for elements that are repetitive. Then you're gonna take those parts out of someone's job, and the person is typically gonna have more time to focus on more strategic aspects.
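The "little bots become a house" idea above can be sketched as a chain of tiny automated steps. The ledgers, balances, and function names below are entirely hypothetical, not from any real RPA product; the point is only that each bot does one repetitive step and the pipeline strings them together.

```python
# Each "bot" automates one repetitive step an accountant might do by hand.

def fetch_ledger_a():
    """Bot 1: pull balances from the first system (stubbed with fake data)."""
    return {"acct-42": 120.00, "acct-99": 75.50}

def fetch_ledger_b():
    """Bot 2: pull balances from the second system (stubbed with fake data)."""
    return {"acct-42": 120.00, "acct-99": 70.50}

def compare_ledgers(a, b):
    """Bot 3: keep only accounts whose balances disagree between systems."""
    return {k: (a[k], b[k]) for k in a if a.get(k) != b.get(k)}

def log_discrepancies(diffs):
    """Bot 4: format the exceptions for a human to review."""
    return [f"{acct}: {x} vs {y}" for acct, (x, y) in diffs.items()]

# Stringing the little bots together into one automated workflow:
report = log_discrepancies(compare_ledgers(fetch_ledger_a(), fetch_ledger_b()))
print(report)
```

The repetitive open-compare-record loop is fully automated; the human only reviews the exceptions, which is the "more strategic aspects" part of the job that remains.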
AI provides machine learning capabilities and collects massive amounts of data, from social media interactions to online searches, analyzing them for sentiments, patterns, and changes. AI is unarguably quicker, and sometimes more accurate, at measuring those patterns and sentiments than researchers or groups of people. The ability of this tech to quickly analyze and predict actions is critical to meeting the fast-paced consumer expectations of today.
From your phone’s ability to track your location patterns and then give you route and drive-through suggestions, to helping you pick your next bingeable series on your favorite streaming service, AI drives our behavior.
But, similar to humans, AI is not good at checking its bias at the door. Why? It was created by the majority. Being developed by and learning from those majority groups means it doesn’t have the full picture behind patterns. This leads to hiring biases, inaccurate predictive policing, and more.
We saw the Great Resignation come alongside the cultural shifts from COVID-19. Now, we are seeing the opposite happen – a mass influx of rehires. There are thousands of people currently hunting and applying for jobs. And AI is here to help.
Companies (such as global tech firms) that tend to bring in large numbers of job applicants at once use AI to sort through thousands of resumes, selecting only the ones that meet the qualifications of a "good" fit for the role. While this is making the lives of hiring managers and HR employees easier worldwide, this solution isn't perfect.
Hiring managers and companies base recruitment on their "top performing" employees, looking for keywords pulled from those employees' resumes and other demographics. However, if you've seen the statistics about the tech industry's issues with hiring and retaining minority and women employees, you know that representation is lacking in that field. In this case, we are seeing roles re-propagated with the same type of employees, reducing diversity in the workforce.
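The re-propagation effect can be shown with a toy screener. The keyword list and candidate resumes below are invented, and real screening systems are far more sophisticated, but the failure mode is the same: scoring against terms pulled from incumbents rewards resumes that resemble the incumbents.

```python
# Invented terms harvested from hypothetical "top performer" resumes; note
# how non-job-related signals (school, hobbies) sneak in alongside skills.
TOP_PERFORMER_KEYWORDS = {"stanford", "fraternity", "lacrosse", "python"}

def screen_score(resume_terms):
    """Score a resume by overlap with the incumbent-derived keyword set."""
    return len(set(resume_terms) & TOP_PERFORMER_KEYWORDS)

candidates = {
    "looks-like-incumbents": ["stanford", "lacrosse", "python"],
    "different-background":  ["state-school", "robotics-club", "python"],
}

ranked = sorted(candidates, key=lambda c: screen_score(candidates[c]),
                reverse=True)
print(ranked)  # the candidate resembling current staff ranks first
```

Both candidates have the relevant skill (python), but the screener rewards biographical similarity, so the workforce's existing profile is reproduced in each hiring round.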
There must be a way to program better systems to prevent the US from recreating history when it comes to hiring biases.
Predictive policing is “the usage of mathematics, predictive analytics, and other analytical techniques in law enforcement to identify potential criminal activity.” Today, AI is taking a larger and larger role in helping monitor and predict crimes.
The negative side of this story is that predictive systems notate sounds, such as gunshots, as proof of high crime. While, at the top level, hearing gunshots could be a sign of high crime, it's not a one-size-fits-all solution, and it can lead to racial and cultural biases. For example, the system may not take into consideration the celebration sounds of cultural holidays, such as Diwali in a predominantly Indian neighborhood. Instead of fireworks, that machine is hearing "gunshots," in turn labeling that community as a high-crime area.
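The false-positive problem can be seen in a deliberately naive sketch: if the only feature is a short, loud bang, fireworks and gunshots are indistinguishable. The thresholds and events below are invented for illustration; real acoustic detection systems use many more features, yet the proxy problem described above still applies.

```python
def naive_gunshot_flag(peak_db, duration_ms):
    """Flag any short, loud impulse as a possible gunshot.
    Thresholds are invented for this illustration."""
    return peak_db > 140 and duration_ms < 50

# Hypothetical acoustic events: (label, peak loudness in dB, duration in ms).
events = [
    ("gunshot",     155, 5),
    ("firecracker", 150, 8),   # a Diwali or July 4th celebration
    ("car door",     95, 30),
]

for name, db, ms in events:
    status = "FLAGGED" if naive_gunshot_flag(db, ms) else "ignored"
    print(f"{name}: {status}")
```

The firecracker gets flagged right alongside the gunshot, which is exactly how a celebrating neighborhood ends up mislabeled as a high-crime area.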
But, a solution might already be in the works. A group of scientists recently developed a machine learning tool to uncover bias in policing, instead of reinforcing it.
Developed by a team at the University of Chicago, the new tool actually forecasts crime by spotting patterns amid reliable data on property crimes and crimes of violence, continually learning as it tracks.
“Rather than simply increasing the power of states by predicting the when and where of anticipated crime, our tools allow us to audit them for enforcement biases, and garner deep insight into the nature of the (intertwined) processes through which policing and crime co-evolve in urban spaces,” their report said [Source].
The team works with community members to solve problems together and create a better AI system to help increase community safety.
That rectangular object in your pocket – connecting you to family and friends and providing you with news and information – is a traveling AI system.
When you think about artificial intelligence, think of all the things that your smartphone does daily. From automatically providing your route home, to showing ads about very specific products you feel like you were just talking about with a friend. Or maybe playing a song similar to the album you’ve had on repeat all day long.
Your phone picks up on your habits to suggest goods and services you might need or want based on those patterns. Turning off your location settings can mitigate the location-based suggestions provided if it feels too invasive, but overall AI is meant to make your life easier every day.