SDO 015 - The Unexpected Data Challenges Faced by AI Researchers - Jazmia Henry
Interview: Jazmia Henry, Senior Applied AI Engineer at Microsoft
What are your thoughts on AI?
The major shift in AI is surprisingly not technical… it’s cultural. ChatGPT, for better or worse, has brought AI to the masses in that it is 1) tangible, 2) accessible to anyone, and 3) a 10x improvement over standard processes. Everyday people outside of academia and tech circles are now wrestling with the externalities of AI:
"What happens when AI takes over white-collar jobs?"
"What is the role of copyright when content is AI generated?"
"How do we account for racist and misogynistic training data influencing the AI being used by the masses?"
Questions like these were always present, but ChatGPT now forces society to confront them... for better or worse.
This shift became apparent to me when I attended the TransformX conference last October, where I heard leaders such as Greg Brockman (Co-Founder of OpenAI) and Eric Schmidt (Former CEO of Google) speak about the future of AI and foundation models. In summary…
We have “opened Pandora’s box” and must face both the positive and negative consequences of this powerful technology.
Similar to how the iPhone and App Store created new business models and markets, foundation models will enable innovative businesses to build off them.
There is a technical arms race between international powers to produce the most efficient chips and thus take the lead on AI.
You can check out the recordings from this conference via this link.
— Mark

Hear from Jazmia Henry, Senior Applied AI Engineer at Microsoft:
Note: Jazmia informed me that the recent Microsoft layoffs unfortunately impacted many of the AI researcher colleagues she worked with directly. Please reach out to her if you are looking to hire talented AI researchers, as she will be happy to connect you. You can learn more from her recent post on LinkedIn.
Hear from "XYZ" highlights real-world use cases so all of us can learn best practices and upcoming trends within the DataOps space. When I think of people defining what AI technology will look like in the future, my friend Jazmia quickly comes to mind. She has led ML teams within finance, contributed to AI research at top universities such as Stanford, built products exploring edge computing on the blockchain, and is now an AI researcher at Microsoft. Her insatiable curiosity about technology, its future, and its social impact is a driving force for her work and what makes me so excited about the interview below. Enjoy!
What are the unique data challenges you face as an AI researcher working within reinforcement learning?
Jazmia: “So reinforcement learning requires you to know two things. One, you have to have an overarching goal, because you want to ensure your simulated environment is as close as possible to wherever you're going to deploy the model. What would traditionally happen before I went into reinforcement learning is, let's say, I would have a bunch of unlabeled data. I'm trying to figure out what direction to take this unlabeled data. You can sometimes do something where you go, okay, I'm trying to, for example, cluster my customers into different groups. You can just kind of figure out where the data's taking you, follow it down wherever it takes you, and then deploy it once it's good enough.
In reinforcement learning, you are training an agent to understand a physical space using a simulated environment. So whatever environment you create has to be as close as possible to the space you're deploying it in. So that requires way more time and effort collecting appropriate data.
And many times there is no appropriate data… a problem comes to you, and you're like, we want to create this model that's going to help us create the best Cheetos, which is a real example at my company, as we work with PepsiCo. I wanna create the best Cheeto possible. Well, where am I gonna find data on how to make the proper Cheetos?
I have to make that data. I have to create that simulated environment. The environment has to be as close as possible to what the actual lab is like, right? So I'm gonna spend six months collecting data and finding experts and talking to these experts so I can create that environment.
You're replicating it in a way that acknowledges just how messy humans are. It's really like when you're building a machine learning model, you want the data to be clean. But when you're building a reinforcement learning model, you don't want the data to be clean, like you want it to be almost as messy as real life is.
Because let's say I collect a bunch of data at some lab making Cheetos in Texas. It has a certain level of humidity in there. Maybe it has people who are more likely to be walking around in a certain way that might be different from how somebody might walk around in Boston. Maybe in Boston, it's colder and less humid. People are less likely to run up and down the halls, not for any reason other than maybe just cultural differences. The type of people you have in the office might just be different. So how do I have a machine that's working with a human being who's going to behave differently than expected?
Because if I know that Joe, who works in a factory in Houston, is more likely to be clumsy, I gotta make different adaptations than for an office in Boston, where maybe I might not have Joe who's clumsy. But maybe I might have Dave, who's likely to move the machine outta the way and do something himself, right?
I'm creating different spaces. And I want my data to train my agent in a way that's adapting to the difference in space. So something that might be like, “oh, this is crazy. This is a weird observation. I'm gonna just drop it from my data analysis.” You're not doing that in reinforcement learning.
You're like, “this is a weird thing I'm seeing. Let's train on it because that might be something that, later on, will be important to help us continue with our process.” Maybe their machine's running hotter in the South. I'm gonna need to keep that in my data analysis.
So that's the biggest challenge: having a goal and keeping that goal as close as possible to the actual problem, especially if you're gonna deploy it. And then the second thing is finding that darn data and making it as close as possible to the actual world so your agents actually learn something of value and don't fail.”
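To make Jazmia's point concrete, here is a minimal sketch of what "building the messiness in" can look like, using the open-source Gymnasium API. Everything here is an illustrative assumption on my part, not her team's actual simulator: the environment name, the humidity and operator-noise knobs, and the reward are all stand-ins. The key ideas are randomizing site conditions per episode and keeping the "clumsy operator" perturbations in the training data instead of cleaning them out.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces


class SnackLineEnv(gym.Env):
    """Toy production-line environment (illustrative only)."""

    def __init__(self, humidity_range=(0.2, 0.9), operator_noise=0.1):
        super().__init__()
        # One control knob (think: an oven temperature setpoint).
        self.action_space = spaces.Box(low=-1.0, high=1.0, shape=(1,), dtype=np.float32)
        # Observation: [product quality, site humidity].
        self.observation_space = spaces.Box(
            low=-np.inf, high=np.inf, shape=(2,), dtype=np.float32
        )
        self.humidity_range = humidity_range
        self.operator_noise = operator_noise

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # Domain randomization: each episode samples new site conditions
        # (a humid Texas line vs. a drier Boston one), so the agent learns
        # a policy that transfers across sites.
        self.humidity = float(self.np_random.uniform(*self.humidity_range))
        self.quality = 0.5
        return self._obs(), {}

    def step(self, action):
        # "Messy humans": random operator perturbations stay in the data.
        # In supervised work you might drop these outliers; here the agent
        # needs to see them, because the deployed system will too.
        bump = float(self.np_random.normal(0.0, self.operator_noise))
        self.quality += 0.1 * float(action[0]) - 0.05 * self.humidity + bump
        reward = -abs(self.quality - 1.0)  # closer to target quality is better
        terminated = bool(abs(self.quality - 1.0) < 0.01)
        return self._obs(), reward, terminated, False, {}

    def _obs(self):
        return np.array([self.quality, self.humidity], dtype=np.float32)


# Random-policy rollout, just to show the loop runs end to end.
env = SnackLineEnv()
obs, _ = env.reset(seed=42)
for _ in range(5):
    obs, reward, terminated, truncated, _ = env.step(env.action_space.sample())
    if terminated:
        break
```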
Through our previous conversations, you have described to me your deep interest in deploying AI on distributed edge devices. Can you elaborate on what excites you the most about this future of AI and why more people should pay attention?
Jazmia: “So much of the AI that has been coming out in the past few years is what I would consider to be disembodied AI. I don't think that's a proper term. That's just a term that has made it make sense to me in my head.
And what that essentially means is your ChatGPT, for example. You go, and you put in some information into your computer, and then that API, there's a call to the actual machine and then back and forth, right? It's not making any actionable decisions in physical space at all. I cannot say to ChatGPT, "Hey, can you teach me how to create a great cake?" and have it reach out and begin grabbing ingredients. Instead, it's just going to give me a list of ingredients. Oh, you might wanna have some flour. You wanna have X, Y, and Z. Those things are cool, and they're interesting things, and they're important things, and they are steps that eventually could lead to what I would consider to be embodied AI.
Which is AI that has a physical space. Where I'm able to say, "Hey, machine, can you teach me how to make a cake?" And it's actually teaching you how to make a cake. It's showing you the decisions it would make in a way that you would have to do... and those machines would be distributed. Those machines would be devices with some type of internet of things capability so that the API call can be within some type of edge device. I think things like that would be really cool for a couple of reasons. But the biggest reason, it's gonna sound so silly, is because this is a type of AI that not only I but also most people are used to, even though it's not commercially popular.
So when you were a kid, and you were watching a movie, and they were talking about some type of machine that was really smart and intelligent, it was a bot, it was an android, it had a body, it was doing things, it was reaching for things. Whether you were watching The Jetsons with Rosie the robot, or I, Robot with Sonny, or Ultron, right? They were able to do things and have conversations with you and make adaptations and be funny and things like that.
We haven't really gotten to that space yet when it comes to commercial AI. But naturally, to me, we would have to progress into these spaces because when you talk to people about AI, that's what we think about. Especially if we don't work in this space, we're thinking about what we saw on TV, which had some type of physical component. And so, to me, that is a natural progression: being able to create AI that's not only intelligent, smart, and able to adapt, but also able to share some type of physical space with us and work with us for a better, more equitable future.”
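The distinction Jazmia draws between a round trip to a remote API and a model living on the device itself is easy to sketch in code. Below is a hedged example of on-device inference using ONNX Runtime: the whole sense-infer-act loop stays local, which is what makes edge deployment possible. The file name policy.onnx and the observation shape are my own assumptions for illustration; any exported ONNX model would work the same way.

```python
import numpy as np
import onnxruntime as ort  # pip install onnxruntime

# Hypothetical model file exported ahead of time; in an edge setting this
# lives on the device's local storage, not behind a web API.
session = ort.InferenceSession("policy.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
observation = np.random.rand(1, 2).astype(np.float32)  # stand-in sensor reading

# The decision loop runs entirely on the device: sense -> infer -> act,
# with no round trip to a remote server.
action = session.run(None, {input_name: observation})[0]
print("local action:", action)
```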
You have experience both as an exceptional individual contributor and as a leader of exceptional data teams. What advice can you give leaders on fostering teams where technical ICs can thrive?
Jazmia: “First, have a vision for your team and have your vision be compatible with the team that you currently have. I sometimes think what managers do, especially when they're new to management, is they come in and they don't have a vision other than, “oh, we are gonna be the best team,” but they have like no steps of what that looks like.
And then if they do get to the point of having an idea of, okay, in five years, what are we doing? And this is gonna sound so horrible, but I usually found many of them will describe their teams… and I'm like, "that's not your team." You're describing to me a team that's able to build reinforcement learning models and can deploy them to some type of edge device, but then I look at their team, and it's full of analytics folks… Nope, not gonna do that with that team.
And so, the problem with not marrying your vision to your team is that a) you're never going to get to that vision, and b) you kind of give your team the idea that they have to do extra stuff to be part of that vision. Now they're trying to fit into your vision versus you having a vision that's appropriate for them to grow.
And so, if you're managing a team, what you wanna do is look around at the skill sets that your team has. You want to have conversations with them about what makes them interested, what gets them up in the morning pretty excited to go to work. And you want to make sure that whatever projects and proposals you're putting forward are ones that are compatible with their hopes and dreams.
In my last company, when I first started leading, I had a vision of leading a bunch of quant-based data scientists. That's the space that I came from. And then we started hiring people, and most of them were machine learning engineers who didn't have quant backgrounds, so they had a way of doing things that was different from how I did things. So now I'm learning from them what type of work they find interesting and what gets them up in the morning. And it's different than what I thought we would be doing. And so we just made adaptations, and we changed our vision to be compatible. So if I had my leader say, “oh, you're good with natural language processing, let's build a chatbot.” We could do that. But my team, no, we don't do that. So instead, let's build some recommendation engines that can do X, Y, and Z things because that's what my team enjoys doing. That's the type of work that's good for them. So that's what you wanna do as a leader.
And then this one is a little bit gratuitous, but I think it's a good add-on. If you want to have your IC thrive, you have to trust your team. There is no reason, especially in the land of data and data engineering of any type. There is no reason for me to be upset because it's 10:05, and I look on Slack, and my IC isn't signed on. If there's no meeting at 10 o'clock and they're not signed on at 10:05, let it go, right?
You don't have to have them working eight to five or whatever. There's no point in that other than micromanagement. If there are no meetings that they're missing, leave them alone. People have kids, people have lives. Most engineers work from home. So acknowledge that.
And when you do that, you're actually gonna get your best work from your engineer, cuz they're not gonna feel like, "I gotta create some BS bug to stay online long enough for my manager to feel satisfied." Instead, they're just doing the work that they have to do. And you know they'll give you ten times what you asked for because they know that you trust them, and they end up trusting you back in turn.
Many leaders who are very well-meaning have no idea what it's like to be a data person. Data is very new. And so most of the time, people who come from data land haven't yet made it up to the executive level. And so, if you're going to manage your team, advocate for them. Don't be afraid to go into leadership and say, "Hey, I understand that we have this structure where we're telling people they have to do X, Y, and Z. That's great for the rest of the company. Let me tell you why this might look different on my team."
A lot of times, especially when you're talking about startups, that's what makes people stay more so than simply giving them a bunch of perks and ping-pong tables and stock options. Those things are nice, but them knowing that their manager has their back, it's going to do wonders for them.”
Person Profile:
Jazmia Henry is a Senior Applied AI Engineer at Microsoft. Feel free to connect with her on LinkedIn to learn more about her work.
What are others saying in the data industry?
Below are some select articles written by Jazmia.
MLOps: A Primer for Policymakers on a New Frontier in Machine Learning
What: “An explainer of tools for bias mitigation in the MLOps lifecycle.”
Why: It’s a deep dive into how ML models are deployed and their impacts on their outputs.
Who: You are interested in how we can make the future of AI equitable.
What: Jazmia shares the parallels between AI and the way humans think.
Why: It’s an approachable read about how AI “thinks.”
Who: You are interested in the ways in which we can use AI to reason through various problems.
Model Rollbacks Through Versioning
What: A technical deep dive on how organizations can save a substantial amount of money on their ML deployments.
Why: A clear explanation of versioning ML models and their impact on deployment costs.
Who: You are a leader looking for ways to reduce your spend on ML initiatives.
About On the Mark Data:
On the Mark Data helps brands connect to data professionals through captivating content, such as this newsletter and other featured content! Please feel free to check out my website to learn how I can support your data brand via influencer marketing or content and go-to-market strategy consulting.