[00:00:00] Antony: Hey everyone. Welcome back to our podcast. Hope everyone is doing well. Lots of things are happening right now, and we keep tuning and changing what we're focusing on so we can be better at how we interact with our clients and how we operate as a company.
[00:00:30] Antony: And so, Robb, do you wanna tell us more about our mobile strategy?
[00:00:35] Robb: Sure. There are so many things going on across the platform that it's hard to pick one thing and not talk about them all. But I think it's important to take a little bit of time to talk about mobile, because it's one of those hidden gems. I think we all know it's there, but it doesn't get used very often. I'm sure a lot of you have seen that Apps has shown up in the menu, then disappeared, and now it's coming back. I think that's all in an effort to raise awareness of mobile.
What I'm so excited about with mobile is the fact that it gets away from a lot of the issues with telephony and SMS. It's an alternative. It's something that would allow somebody to speak to another person or to the system anywhere in the world, as long as they have an internet connection, with no telephony cost. As well as that, it can utilize notifications, particularly interactive notifications like buttons, without having to go through all the SMS regulation. So to me that's an exciting area and a big part of the future for how we'd want to deliver, particularly on internal use cases. I think we're gonna start using it internally. We're hopefully going to be launching our OneReach core bot soon, and the primary channel will be mobile. That will mean that everybody within OneReach will have the opportunity to download the mobile app and get the browser plug-in, and that will make all of our bots ever present in a separate channel besides Slack. At the moment, Slack is sort of the main place to access bots, and we'd like to shift that over to mobile.
So, yeah, that's coming, and we'll be sharing details about that over the next 30 days. Probably Ivan will be doing that.
[00:03:19] Antony: Cool. Yeah. Finally we...
[00:03:21] Daisy: Is this,
[00:03:22] Antony: oh, sorry.
[00:03:23] Daisy: I was just gonna ask a question. Sorry, Antony, I didn't mean to talk at the same time. Is this where we would wanna mention Barb and how we're kind of getting a jump start on that, or not, Robb?
[00:03:37] Robb: Yeah. Yeah.
Yeah, you can say how this will affect Barb in a positive way.
[00:03:49] Daisy: Yeah. So that's where we were going with the Barb bot that's already been deployed within the organization: it follows that core bot methodology of having one place for everybody to go for what they need out of the different areas of our business. Right now, what a lot of you have already been using is things like recruitment requests, hiring requests, requesting training, or requesting help of some sort. It will really help Barb a lot in getting into that core bot family, so that we can continue to add skills quickly and successfully and really grow out our automation for internal usage throughout our organization. So I'm really excited for that to continue to move forward.
[00:04:55] Robb: Awesome. And for those who are not familiar with core bot, core bot is this concept that ultimately everybody's gonna have their own personal assistant. So when you come to OneReach, you get a OneReach account and you get your own personal assistant. Each of us can create skills within our account and share them externally, so we're all working on our bot automation together. Similar to how Alexa works, you can create a skill, submit it, and it gets incorporated into a larger ecosystem of skills that anyone in the company can access, given that they have the permission. And that starts to change things. You'll start to see changes in the platform, because it starts to change how we think a little bit about what a bot is.
We're going to eliminate this concept of bot as a term, just because it's not a very good explanation of what these interfaces are and do. It means a lot of different things to a lot of different people, especially in the industry. So we've been calling them intelligent digital workers, and that gives us the opportunity to invent our own definition of what it is. I'd rather people say, "Hey, what's that?" than have us say "bot" and have them think they know what we mean when we're clearly talking about two different definitions. Now we'll say IDW, or intelligent digital worker, and they'll say, "What's that?" and we get our chance to explain it.
From the platform standpoint, you may not have noticed, but we're slowly removing this concept that you come in and you have a bunch of bots, and you click on a bot and you have a bunch of flows. We've been removing the word bot from that. Ultimately, what they are is what we'll call them, which is just folders for flows. And we're going to introduce this concept of intelligent digital workers and list them and their identifiers in one place. So think about apps: apps is just a channel, phone is a channel, SMS is a channel, WhatsApp is a channel, Slack is a channel. And if you've created identifiers in any of those channels,
there's no place you can go to see, "Show me all of my identifiers and which bot or IDW they're assigned to." So the thought is that now, when you create a gateway step and you assign an identifier or multiple identifiers to that flow, you will also assign it to an IDW. That would allow an interface to show you all of your IDWs, which channels they're accessible on, and then, ideally, grayed-out channels that you could click to create flows that add channels to those IDWs. Think of that dashboard as becoming a place where you can see all your IDWs and all of your identifiers, with those identifiers categorized under those IDWs. So you know that these five phone numbers might be accessed by this IDW. That's instead of today, where you would go into the designer, click on the bot, open up a flow, find the gateway step, see what phone number it is, hope you labeled the phone number with the bot name, maybe go into the identifiers for that bot name, and then know that the phone number is associated with it. But you wouldn't necessarily know for sure, because you can easily mislabel it, or you could reuse it for something else and it has the wrong name, and no one would ever know.
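As a rough illustration of that model, here is a minimal sketch in Python with entirely hypothetical names (the platform's real objects and APIs may look nothing like this) of identifiers being assigned to IDWs so a dashboard can list each IDW, the channels it's reachable on, and the channels still grayed out:

```python
from dataclasses import dataclass, field
from typing import List

ALL_CHANNELS = ["phone", "sms", "whatsapp", "slack", "apps"]

@dataclass
class Identifier:
    value: str      # e.g. a phone number or a Slack handle
    channel: str    # one of ALL_CHANNELS
    flow_id: str    # flow whose gateway step claimed this identifier

@dataclass
class IDW:
    name: str
    identifiers: List[Identifier] = field(default_factory=list)

    def channels(self) -> List[str]:
        return sorted({i.channel for i in self.identifiers})

    def missing_channels(self) -> List[str]:
        """Channels the dashboard would show grayed out."""
        return [c for c in ALL_CHANNELS if c not in self.channels()]

def dashboard(idws: List[IDW]) -> None:
    for idw in idws:
        print(f"{idw.name}: active on {idw.channels()}, not yet on {idw.missing_channels()}")
        for ident in idw.identifiers:
            print(f"  {ident.channel}: {ident.value} (flow {ident.flow_id})")

barb = IDW("Barb", [
    Identifier("+1-303-555-0100", "sms", "flow-recruiting"),
    Identifier("@barb", "slack", "flow-helpdesk"),
])
dashboard([barb])
```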
So, yeah, changes in just how we think about it. It's not actually a lot of technology change; it's just a shift in how we think, aligning more with what we've learned over the years about this kind of stuff. And Apps will sort of trigger the shift in how we think about that.
[00:09:41] Jordan: Yeah, and Robb, as we're building out these ecosystems of bots, our IDWs, do you wanna talk about the differences between the Alexa ecosystem and the type of ecosystems that we are building?
[00:09:53] Robb: Sure. Yeah. The Alexa ecosystem has a couple of major components to it. One is, you can create a skill for yourself and access it through your own Alexa. So you can add a private skill, or you can submit that skill as a public skill, and then it goes through a moderation process. You're gonna have to have a wake word, and it's gonna be this specific, unique word, almost like a domain name, so you've gotta come up with one. Then it gets submitted to a library where end users have the option to add that skill to their Alexa device.
What we are doing is extending that concept. So yes, the first idea is you can create a private skill and use it just for yourself. You could download a skill from the library and activate it, and it ends up becoming a private skill. Or you can download from the library, make adjustments, and submit that as a public skill, in which case other IDWs would have access to that skill as soon as we implement the cross-account capability of moving conversations easily between a subflow in one account and a subflow in a different account. And so anybody can submit these. What will be different is that instead of a wake word, we'll be using NLU and making sure it's more of a fuzzy match, not a remember-this-wake-word approach. If you wanna book a vacation day or submit a ticket, we're gonna utilize natural language, and we're just gonna have to manage the interaction between skills, making sure they don't step on each other.
And that will be an ongoing challenge, but I think a worthwhile one, because discovery is so, so important. And then we'll also have this concept of permissions. That's an additional element: you may have permission to use a skill, or you may not. So when people submit skills, permissions will be a part of that: who can have access to this, and who cannot. So very granular access as well as group access. And that's, I think, a critical component.
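To make that concrete, here is a minimal sketch, with hypothetical skill names and a crude similarity score standing in for real NLU, of fuzzy-matched skill discovery plus a permission check. It is not the platform's actual routing logic, just an illustration of the idea:

```python
from difflib import SequenceMatcher
from typing import List, Optional

class Skill:
    def __init__(self, name: str, utterances: List[str], allowed_groups: List[str]):
        self.name = name
        self.utterances = utterances          # example phrases, not unique wake words
        self.allowed_groups = allowed_groups  # granular / group access

def score(utterance: str, example: str) -> float:
    # Stand-in for real NLU intent matching: fuzzy string similarity.
    return SequenceMatcher(None, utterance.lower(), example.lower()).ratio()

def route(utterance: str, user_groups: List[str], skills: List[Skill],
          threshold: float = 0.6) -> Optional[Skill]:
    best, best_score = None, threshold
    for skill in skills:
        s = max(score(utterance, ex) for ex in skill.utterances)
        if s > best_score:
            best, best_score = skill, s
    if best and not set(user_groups) & set(best.allowed_groups):
        return None  # matched a skill, but the user lacks permission for it
    return best

skills = [
    Skill("vacation", ["book a vacation day", "request time off"], ["employees"]),
    Skill("ticket", ["submit a ticket", "report an issue"], ["employees", "contractors"]),
]
match = route("I'd like to book a vacation day", ["employees"], skills)
print(match.name if match else "no accessible skill matched")
```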
And one of the things I'm really encouraging people to do, and will continue to encourage, is that when we create a skill that maybe one group has access to and another doesn't, we think in more gray areas than that. We say, look, maybe there are just different experiences for different access or permission levels. In other words, one person might have access to data, whereas if you're in a different role with different permissions, it requires approval to get access to that data. So at the top of the flow, it would basically look at your permissions, and you might get a different experience based on what access you have.

And it might include, for one person, "If you want this information, I've gotta get approval for that," while another person will just get it immediately. So we're encouraging that kind of behaviour within the flows as they get submitted. Anyway, this will be an iterative, ongoing process, I guess, where we'll be unraveling it very slowly. But I'm pretty excited about it, cuz it really represents the ultimate vision this company started with, or at least the ultimate short-term vision, and we're just a couple features away from seeing it come to fruition. The important thing will be circling back on training, Daisy, and that kind of stuff. Once we add these things in, it changes how to think about the platform and how to think about architecting IDWs, and sometimes that gets missed. We release the feature, but the training still teaches the old way, which almost feels like a workaround once you fix it. So we'll have to do that.
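Here is another minimal sketch of the permission-graded experience just described, assuming hypothetical access levels and a made-up approval helper rather than any real platform step: the top of the flow checks the caller's permission and routes to a different experience instead of a flat yes or no.

```python
from enum import Enum

class Access(Enum):
    FULL = "full"          # gets the data immediately
    APPROVAL = "approval"  # gets the data after a sign-off
    NONE = "none"

def fetch_report() -> str:
    return "Q3 summary report: ..."   # placeholder data source

def request_approval(requester: str) -> None:
    print(f"(approval request queued for {requester})")

def handle_data_request(user_access: Access, requester: str) -> str:
    # Same skill, different experiences depending on permission level.
    if user_access is Access.FULL:
        return fetch_report()
    if user_access is Access.APPROVAL:
        request_approval(requester)
        return "I've requested approval to share that report and will follow up once it's granted."
    return "That report isn't available for your role, but I can share the public summary."

print(handle_data_request(Access.APPROVAL, "jane@example.com"))
```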
[00:14:39] Daisy: Yeah. I think that while it's an easier fix on the platform side, it will incur a lot of changes on the training side, not that that's a bad thing at all. We're already in the process of starting updates on the training that we have, because there have been enough changes already that a lot of those things are a little bit on the older side. So we'll definitely have that as part of the effort in seeing this to fruition, for sure.

And again, content creation is one of the hardest things, because everybody is so busy doing other things that creating content is always kind of the last priority on everyone's list. But what's most helpful is when the training comes from our internal teams. So for anybody who's interested in helping out with creating those videos, we still have pages and pages of videos that we've identified need to be created. Anybody that's willing to help out, please ping myself or Tony and let us know. We'd love people's assistance in getting higher-quality content created for our LMS.
[00:15:59] Robb: Great. What's next on our agenda there, Antony?
[00:16:03] Antony: Yeah. So next on our agenda is how we think about design in broader terms, not just graphical design, and how we wanna lead in this direction. So, talking about design operations: how we can improve how we do that, and how we can teach others, and ourselves included, right? How we can make it all better.
[00:16:33] Robb: Yeah. So there has been this desire from the beginning to push customers in a particular direction, not just get on calls and take orders from customers. It's very difficult, because customers are used to just telling vendors what they want and then vendors doing what the customer asks. There's this concept that the customer's always right, so pleasing the customer is about giving them what they ask for. What we have found over and over and over again is that the customer tells us what they want, they actually aren't right, and we do it anyway. Then they blame us for it not working and complain about us not taking a leadership role. Then we make suggestions to the customer and they ignore us. And this cycle continues over and over again.
And so the worst-case scenario is when we're suddenly trying to make the platform accommodate a customer who's implemented something in a way we would never recommend, and now the platform team is adding features to the system to accommodate an improper use of the platform because our team wasn't able to push back. And it doesn't come from a lack of wanting to, or trying to, or saying something. It comes from a lack of momentum. If you don't start leading your customer from the beginning with a vision, then you get into this mode of them driving, and when they drive you in a certain direction and you push back, they see it as pushback and then kind of start arguing.
So what's better is to step back and influence it holistically from the beginning, at the top level of the company and at the top level of the customer. Make sure that we have a plan that we get buy-off on from those executives up front. Then that plan becomes the working plan that we're all working off of in those individual meetings. Our team members, our pods, and our CSMs, when they're on those calls, aren't trying to tell the customer what should be done on the fly. They are just reminding them that we all agreed to a certain way, and that if we change that, we're going to need to get a change request in place, and we're gonna need those executives to sign off on that change.
And that gives us an opportunity, in that change sign-off process, to make our recommendation against that change to the executives without it feeling like we just escalated over someone's head because they wouldn't listen to us. Now we have a change request. That change request has to get signed above. It's got disclaimers in it: if you do this, here are the consequences, and you're willing to accept those consequences. Those people above will likely not agree to that. And then we will end up in a place where we start following these plans, instead of just having ad hoc meetings on what we wanna do and what we want to do next. There are really good ways, which a lot of us have learned in our past from working in consulting, of getting buy-in up front from customers on these plans and the details of these plans. And it really starts with having a vision, and I think a design-led vision is the way to go.
[00:20:57] Robb: So we're putting together, and Joel is spearheading this, a design ops team. The entire objective of this team is to look at each customer and work with the pods and the CSMs to identify opportunities before the customer has asked for things; to present plans and package them in a way that's highly effective; and to prototype it, or build it, in advance and then demo it to the customer. So we are defining what needs to get built next before they have a chance to ask for it. And that's gonna really work well in analytics and reporting. I think that's where we're gonna gain a lot of traction.
[00:21:54] Robb: If we can get ahead of them on analytics and reporting, showing them things that they should wanna know and information and data that they want to know, that will probably get them in the habit of appreciating what we're doing. With a lot of customers, we've all seen in the past that customers buy OneReach for the automation opportunity, but they tend to keep us for the analytics. But we've gotta drive that appetite for analytics; we're going to have to build it in without asking them what they want to know, or asking them for permission to tell them what we wanna tell them.
Because every time we go down that road, first, they don't ask for it. They're so used to not knowing this stuff, or not being able to get it, that they don't ask for it. And if they do ask for it, it's usually the wrong thing. And secondly, if we go and ask for their permission to give it to them, they usually say no, often because they're so used to not having it, and they don't want to go through a procurement process or pay for it.
So if we can just iterate and build analytics into everything we do and push them, they'll grow an appetite for it, and then we can start to think about ways to monetize it. So analytics, particularly around analyzing conversations, kind of like the Ditto demo for those who saw it: this idea of summarizing and analyzing conversations between the bot and people, and between people and other people like agents, and trying to draw out interesting information and insights that can roll up to the top of the organization.
[00:23:50] Robb: I think what I'd like to say is: imagine that you're the CEO of a company, but you had the ability to sit in on every conversation between employees, in meetings or with customers. How much better at your job would you be if that information was synthesized for you in some way? If we can do that for them, what we'll understand is that most of the data that exists within a company is not accessible in things like Salesforce or their databases. 90% of it is in those conversations. And if we can extract that data for them out of those conversations, it's mind-blowing.
We did that demo at the latest conference, where we created a bio sketch from a conversation and showed them the information we extracted just from a dialogue they had. And it was a total hit.
So this is kind of an overarching way to begin. Where does this start? We start with design ops. Design ops is going to think through the experience that it should be, but also the analytics: the experience of the executives and what they get. Then that definition gets worked on alongside the pods, who prototype it out, either as a mock bot or a real build, and then we show the customer what we created, and the customer has a choice to just keep it or not. So I'm super excited about design ops. I think it's gonna be very, very useful.
[00:25:43] Antony: Okay, so what you just told us about is more, let's call it strategy, right? But when it comes to day-to-day tactics, what do you see as the next steps that will happen? How will things change in how we design the solutions we're building for customers or internally? What are those changes that will make us go toward this strategy?
[00:26:12] Robb: I think one of the main things is we're gonna have a collaborative team where each person brings a unique perspective: people that are focused highly on analytics, people who are focused on linguistics, people that are focused on visual design, and on user research. So we're gonna represent each of these areas as an expertise within the design ops team, and each person is gonna look at this through their own lens.
So imagine this was like healthcare and we have a series of patients, let's say 20 patients. We're gonna bring a bunch of specialists, heart surgeons, et cetera, together as a team. Each time they meet, they're gonna look at, let's say, one to five different patients, go over each one and what they know of them, and think of the pod as the general practitioner for that patient. So that general practitioner, the pod lead and pod team, will come in and explain what they know about the customer, as if it was a patient. And then each one of those experts will make recommendations on what they should do to maintain good health.
So if a good healthcare system, which I think we all know, is one that's proactive about keeping people healthy, and a bad one is about fixing people after they get sick, waiting till they get sick to fix them, then a good system for us as a company is keeping our customers and their systems healthy proactively, instead of waiting for them to open tickets, or to complain, or for things to go wrong, and then going and fixing them.
It's hard to flip that switch. It's hard to move from reactive to proactive, but it's absolutely critical in our case to do that. And I think you do need that expertise. You need multiple people looking at it, and you need to bring up your customer when they're not complaining, before they complain, and you need to think about it.
So I've always had this idea that there's a DevOps expert as well; there's a technology side to this too. These people would go look at their PDEs once a month or once a quarter and make sure that they have all the latest upgrades, that there aren't a lot of errors, and that they're not hitting any soft caps or getting close to hitting any soft caps.
And they deliver a report to the customer that says: here's what I found in your PDE, all these things look healthy, here are some recommendations, you should update this, you should update that. And that gets delivered and signed by the customer.
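As a loose illustration of what such a check could produce, here is a minimal sketch with hypothetical field names, thresholds, and report format; the real PDE metrics and how they're collected would obviously differ:

```python
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class PDESnapshot:
    name: str
    version: str
    latest_version: str
    errors_last_30_days: int
    soft_caps: Dict[str, Tuple[int, int]]  # metric -> (current usage, cap)

def health_report(pde: PDESnapshot) -> str:
    lines = [f"PDE health report: {pde.name}"]
    if pde.version != pde.latest_version:
        lines.append(f"- Upgrade recommended: {pde.version} -> {pde.latest_version}")
    else:
        lines.append("- Platform version is current")
    lines.append(f"- Errors in the last 30 days: {pde.errors_last_30_days}")
    for metric, (used, cap) in pde.soft_caps.items():
        status = "approaching soft cap" if used / cap > 0.8 else "healthy"
        lines.append(f"- {metric}: {used}/{cap} ({status})")
    return "\n".join(lines)

print(health_report(PDESnapshot(
    name="Acme production",
    version="2.4.1", latest_version="2.5.0",
    errors_last_30_days=12,
    soft_caps={"monthly sessions": (41_000, 50_000)},
)))
```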
That will flip it around. And what we'll end up with is not just super happy customers, but also customers that are now getting used to recommendations from us and responding to those recommendations, versus us waiting for them to come in and say, hey, I have a problem, can you go fix it?
[00:29:57] Antony: Yeah, makes sense.
[00:29:58] Antony: So this leads us then to how we are changing as an organization as well, to become a more pod-centric organization, right?
[00:30:11] Robb: Yeah, totally. Jordan, you wanna hit this one?
[00:30:15] Jordan: Sure, happy to. So, as Antony said, our company is a pod-centric company. What does that mean for us, and what does that mean for all of you? The first thing to keep in mind is that when we say we're pod-centric, what we mean is that the delivery pods, the famous delivery pods, you know, like the Sardines, the Miscommunicators, and all the rest, are the heart and soul of OneReach.
If they are successful, then all of us are successful. So the positioning and the mentality of everyone at OneReach should be: how can I help the delivery pods succeed? And when people ask you what your job is, your answer should be to support the delivery pods. That will mean different things for each individual at the company, depending on how you can support them.
If you are on the customer team and you're a CSM, the way to support the delivery pods would be to collect requirements from customers in the right way, to make sure you maintain a good relationship so that when the pod needs something you can get it for them, to coordinate resources, and to identify when you need a design expert and help pull one in, or a technical member from the solutions pod, when it's relevant for the delivery pods.
And so when you're approaching your day-to-day work, think in terms of how to prioritize your work and responsibilities towards supporting the delivery pods in the best way that you can.
[00:31:49] Antony: Okay, makes sense. And then, it might be a little bit confusing, since now we've started calling almost any team a pod, right? We have the delivery pods, then we have the solutions pod, we have the design pod, and everything is a pod. So I guess that's our internal buzzword right now.
But how do you see the structure of the pod, what it consists of, and what the responsibilities of the major roles in the pod are, at least as we see it right now?
[00:32:35] Robb: For me, I would say, if I use the healthcare analogy, which I really like, I actually think it's a good model for us: the pods represent the doctors and nurses in the clinics, and the clinics are spread around geographically. So everything we do is to support those clinics. If our company is growing, it's because we're opening new clinics, and those clinics are busy and successfully able to help their patients.
Same for us. If we're growing, it means we're creating new pods, and those pods are successfully delivering solutions to customers and helping. Increasingly, I think we're finding ourselves in a place where we can begin to pick and choose our customers, the ones that we think make the most difference in the world and have the most positive effect on the world. We're starting to be able to curate who we wanna work with, which is a huge luxury, and we hope to get there.
All of this branding and all of this work that we're doing to try to elevate our name, writing the book, et cetera, is all in an effort to get us to a place where, financially, we don't have to take the customer that wants us, but we can pick whatever customers we want. We can go say, you know what, we think Tesla, for example, is doing an important job in getting us off of fossil fuels, so we want to go help them.
So if that's a pod or two pods that we assign to Tesla as an account, then, for example, the design ops team, or the solutions pod, is there to provide support and to offer help to make that pod successful.
But the center of our universe is that pod. They tell us what they need, they tell us how we can help, and then we figure out ways to create solutions that will not only help that pod, but hopefully help all the pods.
And that's the trick on our end, for all the support teams like the solutions pod, et cetera. If the pod is at the top and they're the most important, and everybody else is underneath supporting those pods, the trick is to make sure that while we're supporting one, we're supporting them all, and that we're not supporting one while hurting another. So it's always gonna be about figuring out ways to help all the pods collectively, and where to prioritize our help. But at the end of the day, if you're in a pod, you're in the most important part of the company, and the rest of us are trying to figure out how to help you do your job and take care of your customer.
Because if you create awesome solutions for that customer, whether that customer does awesome things or not, it will have an effect on the world. It will either represent an example of conversational AI done right that other people will want, or, ideally, you are doing work for a company that is already setting the bar and doing something super important, and so the quality of your work is in essence helping them do their work. So pod centricity is, I think, a key part of our business and the structure.
[00:37:03] Antony: And so then the question is more from another angle. As we are becoming more and more pod-centric, we're saying that we have lots of experience in how to create these intelligent digital workers, and we have a vision for how we think people should apply hyperautomation in their organizations, right? And there are still a lot of people in the world who are, I don't wanna say scared, but wary about all this automation, and Skynet and Terminators coming from the future, and stuff like that. There is still this notion, I guess, that having simpler, dumber bots is better than the smart ones we are trying to create, the ones that better understand context and that are not stateless but stateful, meaning they remember what you were talking about or asking before, and so on.

So if you had to explain it, why do you think this approach is less scary than a simple, utilitarian approach to bots as dumber machines?
[00:38:51] Jordan: And Antony, maybe I can try to clarify that and see if I understand the position there. When you refer to scissors as an example, that's sort of a dumb machine: the user can choose what they're cutting. They can choose to cut through a piece of paper, and the scissor has no say in whether or not it cuts the piece of paper.
And so what you're describing is a world where that scissor actually does have a say in whether or not it cuts that piece of paper. You go and try to cut the piece of paper, and the scissor is aware of that and says, nope, that's not gonna work for me, and some trap door opens and stops the scissor from functioning.
And I think the question you're posing is whether the world we're envisioning is that latter one, and you're asking if people are afraid of that latter world, or should be more afraid of the current one, where the scissor doesn't have any oversight or intelligence.
[00:40:01] Antony: Yeah.
[00:40:01] Robb: Yeah, exactly.
[00:40:02] Antony: better.
[00:40:04] Robb: Yep. I got a couple comments from people who I don't think had read the book yet, but they were talking about IDWs, intelligent digital workers, and somebody wrote a post that was like, "This scares me to death." And I responded saying the invention of the machine has happened and it's behind us. We've invented machines already, and they're already killing people. The atomic bomb is a machine. Cars are machines, and factory equipment and construction equipment are machines that are killing people every day. Well, not the atomic bomb, thank God, but it has the capacity, and it has killed.
This idea that making machines smarter somehow makes them more dangerous doesn't make a lot of sense. I can see how, even smarter, they could still be dangerous; there's no question that machines can be dangerous. Like Jordan pointed out, scissors can be very dangerous, but usually scissors are gonna be more dangerous in the hands of a human than in the hands of a smarter machine. And what you realize is that smarter scissors probably protect us from other humans more than they endanger us by wanting to do us harm themselves.
And so there is no perfect answer. There is no world where danger doesn't exist. There is no machine that doesn't have some form of danger associated with it. But we should be very scared of the dumb machines we have today, like the atomic bomb and the button that's under Putin's finger. If I ever think about those machines being smarter, all I can do is wish that button was smarter than it is. I wish that button had more capacity to stop him. So I think there's a false sense of safety just in the fact that we're complacent, because we're all familiar with and know the machines around us, and somehow that makes us feel safe cuz it's known. Smarter cars mean machines that stop killing us, and they already are. I think that's something that people just haven't contemplated somehow.
They're talking as if artificial intelligence is not just a smarter machine; they see it as something else, some sort of conscious being that's going to have self-preservation at its core, and not just a machine that is now smarter. And yes, I do want my scissors to know whether they're cutting a piece of paper or a finger. I would much prefer those scissors.
[00:43:59] Jordan: Yeah, an interesting question there, Robb, is if you consider texting with your friends: everyone has autocorrect enabled on their phones, or in email, or wherever they're typing or conversing. So if you imagine a dumb-machine version of that text message, there's no post-processing, there's no autocorrect, it's being sent exactly as you type it.
And if you were to choose to turn that off or keep it on, I'm willing to bet that most people would say, yeah, I wanna keep autocorrect enabled, even though autocorrect is not a hundred percent accurate and does sometimes take a word you're typing and autocorrect it to something else that you didn't mean.
And that does happen. But even with that flaw, and not being perfectly accurate, I'm willing to bet that people would still say it's a better experience to have that machine helping them type the correct words and correct their spelling and grammar and the rest, and they're willing to accept the drawback of the wrong word every once in a while, because the assistance of that smart machine is a significantly better experience than going without it.
And so that's kind of akin to what I would expect to happen in this world with intelligent digital workers. When they start appearing, there will be issues, but people will start to say, well, I know there are issues, but I much prefer a world with them in it than without.
[00:45:36] Robb: I agree.
[00:45:37] Daisy: I think it's funny how we talk about this as better or worse, because we also aren't measuring it on the same scale. If a human makes a mistake, it's forgivable; if a machine makes the same mistake, it's not, right? So going back to the car example, there's something like an 11% death rate in car accidents every year with humans, and it's something like 0.9% with automated cars, but for some reason we're saying the 0.9% is not low enough, because they're still crashing. It's almost like machines have to be a hundred percent perfect, where we would never expect that from a human; we know we can't get there from a human perspective.
So I also think it's interesting that it's not just better or worse, it's a different scale of measurement. And I think we have to keep that in mind as we're building these out, because while they're definitely going to have issues and make mistakes, are they better than what we have today, doing it manually? The answer is almost always yes. So even if it's not perfect, it's better than the manual process. And I think that as we iterate and as we grow, we just have to give ourselves some grace in getting there, knowing that we're not achieving perfection; we're just improving on what we currently have today.
And I think that if we keep that kind of train of thought in mind, it'll be a little bit less overwhelming to get started with things, to get us to the point where we wanna be in the future.
[00:47:28] Robb: Yeah, it's funny, it almost reveals a bias, you know? If people think they don't have unconscious or conscious biases, this shows that we have a bias towards humans versus machines; it shows us that we're definitely biased toward the things that most resemble ourselves, and it's a double standard.
The more you resemble me, the more forgiveness I provide, the more I give.
[00:48:02] Antony: Yeah. No one expects human beings to be perfect; it's almost stupid to think that someone could be perfect. But when people think about machines, about automation, about anything artificially created, it almost implies that it has to do its function, because it was created to do that function. So I guess partially that's where it comes from. I completely agree: thinking logically, having some automation rather than not having it might save lives, but at the same time, if that risk is not zero, it's a risk factor that many people would not be willing to take. I don't know how to put it better.
[00:48:59] Robb: Right.
[00:49:00] Antony: I guess in order for hyperautomation to be adopted more and more by the companies of this world, by the people of this world, there definitely has to be this fundamental shift in perception that something can be good enough even though it is artificially created. And I guess it comes from this idea of AI as a general AI, as an almost self-aware machine, where at some moment you ask the question: is there a difference between a human being and an artificial thing that is kind of self-aware, right? And the funny thing would be, if that self-aware machine makes a mistake, would it be more easily forgiven than a dumb one?
[00:50:02] Robb: Yeah.
[00:50:02] Antony: it's
[00:50:03] Robb: Dumb machines versus smart ones is what we're really talking about here, not humans versus machines. It's dumb machines versus smart ones, and I think that's where everybody gets kind of lost: we anthropomorphize it and talk about it like it's artificial human intelligence, as if that's even a worthy goal. We don't need our machines to be human. We just need them to do the tasks and the work that humans typically do. Many jobs are designed for humans, so the first step is gonna be designing machines to do human jobs, in other words, some form of synthetic human behaviour. But over time, jobs, as we can see, get redesigned for machines, and as that happens, they don't need to operate like humans anymore. They don't need to behave like humans, because we redesigned the job for a machine. So machines aren't going to become like humans. Machines are going to have their own place in the world, their own job descriptions.
And it's not going to be similar to humans'. It's gonna be something completely different. In other words, call center agent is not going to be a job description for a machine; that's for humans. But we start with mimicking the human job description for the machine, cuz that's just what we do.
Cuz that's what we've designed all the jobs around: the human being and the human brain. Very quickly that will change, and we'll see that there will be no reason to design a machine to operate like and simulate a human, unless it's for entertainment or some other reason. From a practical standpoint, having a nanobot clean your window is gonna make more sense than a humanoid robot.
[00:52:31] Daisy: Well, it takes it from emotional to logical, going back to what you said earlier about us providing data to our customers. Hopefully that's what takes it from an emotional to a logical decision process. If I told you that you had less than a 1% chance of getting killed in a car accident in a self-driving vehicle, versus an 11% chance driving yourself, it would be a fairly logical decision to put you and your family in a self-driving car, right? You have a one in a million chance versus an eleven in a million chance. So it's those kinds of things that take it from this emotional place.
[00:53:18] Robb: If you think about it, you're driving a car 75 miles an hour, right? You're speeding down a highway and you're trusting those brakes; you're trusting the brakes in those cars that they're gonna work. What's the difference? We already trust it.
[00:53:40] Daisy: It's the emotional side, that you have control. You think you have control, right?
[00:53:48] Robb: And then you look at antilock braking and all of that stuff, and you're like, it's an illusion. Yep.
[00:53:54] Daisy: So it kind of wraps back around to data being king. The more we can provide that information to our customers, the more apt I think they're gonna be to make logical decisions.
[00:54:07] Robb: If we don't report on the death rate of humans driving versus cars driving themselves, automation will never see the light of day, because people will see one accident and emotionally blow it out of proportion. So yeah, it kind of shows the importance. It's a full circle back to why analytics has to be at the heart of every solution that our pods are creating.
[00:54:40] Jordan: And I guess with that we can jump over to the pod spotlight of the month, unless anyone has some closing thoughts on smart versus dumb machines.
All right. So for this month we wanna call out the First Borscht team, led by the incredibly impressive Bogdana, who led the team to deploy their first production solution, which was for one of our clients, ICF International. They went live with a SNAP solution, a Maryland SNAP solution, which helps people get money to pay for food.
They're handling about 15,000 text messages a day to help people get their resources to pay for food and groceries and all the rest. And it went off without a hitch. It was probably the quietest production delivery that we've had so far. The team did an absolutely fantastic job with it.
And in fact, when we're talking about data and insights into performance, this is one of the examples of highlighting that data with our deployments. The client came back, sifted through all of the different information they were able to collect in the deployment, and was ecstatic to find that they had a 75% success rate on the initial launch, which was massive adoption.
And they were already starting to find insights for the next wave of production improvements to make. So big, big congratulations to the First Borscht team for their first live deployment, and to Bogdana for leading the charge.
[00:56:31] Robb: Yeah. This is a...
[00:56:34] Daisy: Yeah.
[00:56:35] Robb: This is a big deal. I particularly love these kinds of projects because, at least the way I understand it and the way I heard it, some of the people that are most susceptible to malnutrition and not having proper access to food are elderly people. And there's an issue with elderly folks, or with some people, whether they're elderly or not.
There's a particular group that are very proud. They're used to being able to take care of themselves, and they're very embarrassed about needing help. So they don't seek out help; they go without eating instead of going and getting help. And this is just a particular type of person I can really identify with and connect with. They're not where they are because they want to be, and they're clearly, in a lot of cases, somebody who's used to taking care of themselves, and so they'd rather their bellies hurt than go and ask for a handout.
And so offering them a text message option makes it easier for them to see if they qualify, and it allows them to do it in a more private, less embarrassing way. So it's a way to get help to that kind of person, which to me is huge. It's using conversational AI for what it does best, which is helping in the world. Man, I just love those use cases. And yeah, to your note, I'm hearing awesome things about Bogdana.
She's coming up in many conversations in a positive way. Shout out to you, Bogdana.
[00:58:49] Daisy: Yeah, that's amazing. I echo loving the use case. I know that when we had the interns going through, this was one of the projects they highlighted at the end. Was there anything from the interns' learnings that we used in the project?
[00:59:08] Jordan: Yeah, it all actually started with the interns and then was handed off to First Borscht when the interns had left. So it was a continuation on top of their great work.
[00:59:19] Daisy: That's amazing. I think that's fantastic. And now it feels like this is an opportunity to get it into the hands of other states and other counties, so that our impact in the world can really start taking effect. So that's fantastic. Congratulations to everybody on that.
[00:59:41] Robb: There are so many other use cases too...
[00:59:43] Daisy: Absolutely.
[00:59:44] Robb: Medicine and, you know...
[00:59:46] Daisy: Exactly.
That's where I was going. Prescriptions, healthcare. Yeah, help if they've fallen or if they can't... there's just so much that can be done in that realm. So that's really, really exciting, guys.
[01:00:03] Robb: Awesome. Yeah. And First Borscht is the first pod, the first five. So they're the original first pod, is that correct, Jordan?
[01:00:13] Jordan: That's exactly right. They were the first pod, originally helmed by Bailey. Then, a couple weeks later, Kyle jumped in and led that team for a little while. And Bogdana took over a couple months ago and has been doing an excellent job, as everyone has said repeatedly here and elsewhere, to get that team up into their first production deployment.
[01:00:39] Robb: ...of a legend, the pod legend of First Borscht.
[01:00:43] Jordan: That's right. That's right. The OGs.
[01:00:47] Robb: Yep. The OGs, go First Borscht.
[01:00:51] Jordan: Woo.
[01:00:52] Daisy: Great, guys. You're really, really cool.
[01:00:55] Antony: All right, I guess we are out of time for today. So unless someone has anything else to discuss, I guess we can wrap it up.
[01:01:08] Jordan: Excellent. Thanks all. Thanks for listening.
[01:01:11] Antony: Yeah. Thanks everyone.
[01:01:13] Daisy: Bye.
[01:01:13] Antony: Bye guys.
[01:01:14] Robb: Bye-bye.