AI Expo Africa 2021 ONLINE – Al Lindsay – Elerian AI – Creating AWS Alexa

Machine generated transcript…

Hello, Al. Alright, so with me today I have Al Lindsay. He's an engineer by trade who built and led the team that created Amazon Alexa and Echo, and, of course, Amazon Prime back in the day, 2008 to 2011. So welcome, and tell us a little bit about yourself and what you've been up to. Hey, thanks for having me, Finn. Yeah, just to expand on a few of

the things you mentioned there: engineer by trade, studied computer science at the University of Waterloo in Canada. Worked in telecom for 10 or 12 years in the Canadian industry, then joined Amazon in 2004 and worked roughly 15 years there, the last eight of them dedicated to Alexa

and that whole space. Yeah, that must have been an incredible journey of growth. There are very few companies that have accelerated at that rate, to that volume, growth in every dimension you can imagine. For a while it was the fastest-growing consumer electronics product, beating even the iPhone in the number of units and devices built

and sold, but also organizationally, from a team of literally zero to now over 15,000 people working on Alexa at Amazon. Going from a small number of cloud-based hosts supporting all of those services to tens of thousands of nodes handling massive volumes in the cloud. Scaling services, scaling teams, scaling device manufacturing, shipping, customer bases. A lot of

really interesting, fun lessons learned there. So those fifteen thousand, those weren't people delivering parcels; those were people working on the Alexa project? Yeah, on Alexa itself. So a large team of scientists and engineers and product managers, but also transcription and annotation data experts, UX experts, across the board. You need it all; it

takes a village to ship a product, and we had large teams across all those domains. Just to give you an idea, even the music aspects of Alexa alone require their own organization, focused on making the best music experience they possibly can in the music domain. Then you can expand that across all the various capabilities. And then there's an app that goes

with it as well, the companion app. So a lot of moving parts had to come together to ship that product. Yeah, incredible. And so when you were in the midst of all this, where were the big challenges? I mean, this was NLP back in, you know, this was sort of

five, ten years ago, back when NLP didn't have what it has now. What were the big hurdles you feel you overcame? Yeah, the stone ages of NLP. That was the phrase I was looking for. I think speech recognition was probably our first, largest challenge, specifically because we had elected

to tackle the challenge of what I call ambient computing: the agent that's just there. You speak to the ambient air. You're not looking for the device, you're not picking up your phone as you would with Siri, pressing a button and holding it close to your face, but rather being able to call a command into the

room: "Hey, play music by Sting," and have it work. That far-field speech recognition challenge was largely talked about as an intractable problem because of the distance and the signal-to-noise ratio. There was a general belief that you just can't get good enough far-field speech recognition to build a product like that, and so tackling that one head-on was

probably one of the largest challenges we had to overcome to build the device, and we were the first team to deliver a product in the far field that worked as well as it did. You can then layer on top of that the NLU challenges, because, as you noted, it was the early days of NLP, or what I call NLU, natural language understanding. There weren't a lot of great existence proofs of this multi-domain natural language understanding. Sure, Siri came out sometime after we had started this

project. Siri came out in 2011, maybe the first instance of it, but I think even now we've come to learn that a lot of what Siri did in the early days, and what everyone has done to date, is a mix of rules and statistical methods, where the rules generally carry a lot of

the core business logic. So figuring out how to go from a cold start on an idea like Alexa to really advanced, sophisticated, highly functional natural language understanding algorithms that make the experience great for customers, when they might be asking for any of dozens, or now thousands, of intents, was, and continues to be, an interesting challenge for that team today, I'm sure. Right, because this is one of almost the best-kept

secrets in NLU or NLP, right? A lot of these systems are not some secret algorithm that does the work, but incredibly long, complex sets of rule-based systems that try to take into account everything a human could possibly do. Yeah, and I don't want to imply that it's only a rules-based system. Obviously, rules play an important role in bootstrapping systems from a cold-

start perspective, so you can gather data, and then data starts to play a bigger and bigger role over time as you get more and more interactions with live customers. But I think a lot of smaller businesses, customers, and providers today really do have to rely on those rules to get their systems up and running in the absence of the right kind of data that's
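The bootstrap pattern described here, ship hand-written rules first, log live traffic, then let statistical models take over as data accumulates, can be sketched roughly as follows. This is an illustrative toy, not Alexa's or anyone's actual implementation; all names and rules are invented.

```python
# Illustrative sketch of rule-first NLU bootstrapping (hypothetical, not any real system).
# Hand-written pattern rules handle cold-start traffic; every matched utterance is
# logged so a statistical classifier can later be trained on real usage data.
import re

RULES = [
    (re.compile(r"^play (?:music by |songs by )?(?P<artist>.+)$"), "PlayMusicIntent"),
    (re.compile(r"^turn (?P<state>on|off) the (?P<device>.+)$"), "SmartHomeIntent"),
]

training_log = []  # accumulates (utterance, intent) pairs for a future ML model


def classify(utterance: str):
    """Return (intent, slots) via rules; log each hit for later model training."""
    text = utterance.lower().strip()
    for pattern, intent in RULES:
        m = pattern.match(text)
        if m:
            training_log.append((text, intent))
            return intent, m.groupdict()
    return "UnknownIntent", {}


print(classify("Play music by Sting"))  # ('PlayMusicIntent', {'artist': 'sting'})
```

Once `training_log` holds enough live examples, the rules' coverage gaps show up directly as `UnknownIntent` entries, which is exactly the data a statistical model needs next.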

matched and collected in their production environment. Yeah, and that maybe takes me on to the question of how you've seen the field evolve, say in the last five years. I think around 2016 there was a real rush of startups and money into this problem of human speech and

recognition, and you've got a wider arc of seeing how the field has evolved, both at the academic and the very applied levels. How would you describe that arc, and what are the notable developments? I mean, the first one to look at, I think, would be ASR, the automatic speech recognition space. That technology has been around for a long time, since the 60s, and really didn't make a lot of

massive advancements after the early deployments of IVR systems in the 90s. Then I think the greatest advances that took place, and one of the ones we helped push (the science team did), were around applying deep neural networks to replace the traditional hidden Markov model approaches that were being used in ASR. We saw great gains there, but a number of other things came into play too. I think moving

to the cloud, and the immense amount of compute power in the cloud, leveraging GPUs for massive datasets at scale, and being able to do that at an affordable price point, was definitely a sea change in what you could do in the AI space more broadly: leveraging massive amounts of compute power in the cloud, where your data and compute are very close together, and combining that with a lot of these more academic and scientific

advances, things like DNNs and CNNs. There really are a thousand smaller things that all came together, rather than any one big aha breakthrough, but I think the track of ASR really accelerated through the tens, if you will, the 2010s. Everybody's ASR technology is now far more accurate, and we all find it a lot more tenable as something

that we're willing to interact with now than it was in the aughts. So that's one area. In Alexa land, obviously, voice technology, the spoken voice, the text-to-speech voice, is another area where we saw massive improvements, again by applying deep neural networks and large data, and moving away from the classic unit-selection approaches, where you record all the

little snippets and stitch them together and try to smooth out the transitions, toward more of a machine-learned, big-data approach to making voices. And now I think it's just commonplace: you expect a TTS voice to sound almost human, and you wouldn't tolerate the voice of Stephen Hawking being generated by some of the old

techniques. It's just moved so fast. So the ability to understand the words being said, from ASR, and to speak back in a very natural way, have both made great advances. I think the NLU side of the house has been a little harder. We're all seeing it with the companies doing generalized AI: if you look at

your Siri, your Samsung Bixby, your Google, and your Alexas, because the breadth of things they attempt to understand is so wide, the confusability goes up dramatically, and it's just really hard to have that highly accurate understanding of what the customer actually wants to do. I'll give you an example. The word "play" is immensely overloaded. Do you want to play, you

know, a video on your Fire TV, music, YouTube, a song, an audiobook? So just relying on an action word like "play" isn't nearly sufficient to give you a solid answer, whereas in the beginning, for some of the simpler or narrower agents, you could get by on those sorts of ambiguous samples. So I feel a lot of headway has been made in the NLU space, but I know
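The "play" overload above is typically resolved by conditioning on extra signals: which catalogs contain the entity, what device the request came from, and so on. Here is a deliberately tiny, hypothetical sketch of that idea; real assistants use statistical rankers over many features, and every catalog and rule below is invented for illustration.

```python
# Hypothetical sketch of disambiguating the overloaded verb "play" with context.
# Catalog contents and fallback rules are made up; real systems rank statistically.

CATALOGS = {
    "music":     {"sting", "abbey road"},
    "audiobook": {"dune"},
    "video":     {"the crown"},
}


def resolve_play(entity: str, context: dict) -> str:
    """Pick a domain for 'play <entity>' using simple contextual priors."""
    name = entity.lower().strip()
    # 1) Exact catalog membership beats everything else.
    for domain, names in CATALOGS.items():
        if name in names:
            return domain
    # 2) Fall back on device context: a screenless speaker can't play video.
    if not context.get("has_screen", False):
        return "music"
    return "video"


print(resolve_play("Sting", {"has_screen": False}))     # music
print(resolve_play("The Crown", {"has_screen": True}))  # video
```

Even this toy shows why narrow agents had it easier: with one catalog and one device type, step 2 never fires and "play" is unambiguous by construction.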

that the real challenge is getting to more of a conversational understanding of natural language, more like how a human operates, bringing to bear all of the context, and maybe visual signals and body-language reading and the history of conversations, and even the contextual history of the world and pop culture that comes to bear when we have conversations, so that

you're able to infer so much from a small number of words. Sure, yeah, absolutely. And I sometimes like to separate these problems into maybe the engineering issues, and then the more fundamental mathematical ones. You mentioned ASR being around since the 60s, and it's not until

we get the latency down that the system starts to work. So all of these things coming together to make a speaking computer raises this whole question of latency, even once the problem has been mathematically solved. And then there are these gnarly questions of exactly what the rules are, or what the algorithms are, behind which we do this. Could you sort of put certain parts of that into different camps? You know, one of the

engineering ones, with a bit more power and speed, we can crack, and where's the big unknown? I will say that both philosophically and practically it's hard to separate the science from the engineering, and the more you can combine those ways of thinking, the more successful you'll be in this space. Because if you do your algorithms and your math in an experimental bubble, with non-production coding techniques, it's very hard to then take that and

turn it into a real runtime system that's fast, you know, as you said, low latency. So I find there's always a delicate balance in combining your engineering thinking with your science. You're trading one off against the other: almost always you're trading off the cost of the underlying infrastructure, what your latency is, and what your accuracy is, and they're intimately tied. You can

always make it faster by being less accurate, you can always be more accurate by being slower, and you can take forever if you want to explore every possible outcome in a brute-force way and come back days later with an answer. So you really have to look at these in tandem. But absolutely, latency, from an engineering-challenge perspective, was one of the most important things we focused on from the very beginning with
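That speed/accuracy dial shows up very concretely in speech decoders as a beam width: a wider beam keeps more hypotheses alive (slower, usually more accurate), while beam width 1 is greedy decoding (fastest, least accurate). The toy below sketches the mechanism only; the per-step scores are made up, and since they don't depend on the prefix here, all beam widths happen to agree on this input.

```python
# Toy beam search illustrating the latency/accuracy dial in a decoder.
# Scores are invented per-step log-probabilities, not real acoustic model output.
import math


def beam_search(step_scores, beam_width):
    """step_scores: list of {token: log_prob} dicts, one per time step."""
    beams = [("", 0.0)]  # (partial transcript, cumulative log-prob)
    for scores in step_scores:
        candidates = [
            (prefix + tok, lp + tok_lp)
            for prefix, lp in beams
            for tok, tok_lp in scores.items()
        ]
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = candidates[:beam_width]  # the latency/accuracy dial
    return beams[0]


steps = [
    {"a": math.log(0.6), "b": math.log(0.4)},
    {"x": math.log(0.3), "y": math.log(0.7)},
]
print(beam_search(steps, beam_width=1))  # greedy
print(beam_search(steps, beam_width=4))  # exhaustive for this tiny example
```

The work per step grows with the beam width times the vocabulary size, which is exactly the "more accurate by being slower" trade described above.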

Alexa: understanding every component of it and trying to crush it down into the smallest window you possibly can. I think a little bit of that muscle came from being a web, an e-commerce, company, where a slow web page meant lost sales, and if you don't respond within a

second, you've lost your customer. And so you took a similarly aggressive approach with Alexa, trying to be super low latency right out of the gate? Yeah. Interesting. And so you've moved on from Alexa now, and you've got some other projects going. Could you tell us a little bit about what the latest game is? Sure. I've been enjoying a

return to small companies and startups, which is sort of where my career began in the telecom space back in the 90s, with a series of startups. So I've taken on advisory roles with companies working in the AI space, particularly with speech, and I'm also advising some large multinational companies in that space as

well, on their efforts related to speech and NLU and how they apply to their businesses. Yeah, there seems to be a lot of interest in this space right now. Everybody's trying to do it, but it's not yet mature to the point where, if you want to go out and write an iOS app, you can find all the resources you

need on well-trodden ground and figure out how to do it, hire the right people, train them, and get it done. It's still a fairly nascent space. Everybody wants to get in and leverage the technology in their products, but it's hard for them to find solutions that don't require them to go out and staff scientists, or, you know, highly specialized people. We're not at the stage where we can copy the code from GitLab and build a startup quickly. Right.

I think the idea of democratizing speech as an interface in technology still has some ways to go. Yeah, and is it coming soon? I think, as is always the case with all of these things, it just takes time. It's coming, and it's night and day compared to what it was. I can tell you, when we were

getting started with Alexa, even finding someone with expertise in voice user interface design was hard. There was really only one company you could go and hire those people from, and that company is no longer independent as of a couple of weeks ago. If you look around now, you'll find all kinds of design firms and engineering shops and platforms and startups offering tools and training and expertise and solutions in this space that didn't

exist even five years ago. It's definitely growing quickly, but it has a long way to go. Okay, so tell me a little bit about Elerian and what these guys are doing. I think what's drawn me to Elerian, what I find interesting, is the idea that they're taking a voice-forward approach. They're operating in the

call center space and building solutions for call centers, but they've approached it from: how do we solve the virtual agent's speech problems first? In particular, how do we build a really high-quality natural language understanding component that's able to make sense of what's going on in the conversation with the end consumer, and allows their

partners to provide a more human-like experience without necessarily having to staff a massive human workforce to do it. So I really was drawn to the "solve the hard problems first" approach; that's close to my heart. I mean, it'd be easier to go and build another call-center-in-a-box solution that integrates with all of the existing products out there. There are plenty of

people who have done that, and that's a straight engineering problem. It's much harder to go after the novel speech challenges first, nail those, and then use that to drive the more straightforward engineering challenges that come with the call center. Right, right, because they've got a couple of tricks up their sleeve. So they've got this

compression thing they can do, and then they've strung together this stack: they can understand the voice, they've put that together with some rules, but there's an information model which decides what's going on and the flow of it. Can you elaborate at all on that?

Yeah, I think, again, another thing that struck me was just the maturity and the breadth of considerations that went into that NLU platform. So things like context, carrying context from turn to turn, bringing in the customer-specific data, knowledge about their customers, whether it's incorporating their email addresses, which would not

be an in-vocabulary thing for an off-the-shelf ASR or entity recognition solution; just the ability to tailor the models. Because, I mean, I can still call up my bank, and when it asks "When's your birthday?" I say "first of April 1978," and the thing says "Please say it again," and it doesn't get it. And you think, this is a solution

deployed at a fairly large multinational's behest, and it's still not picking up these basic things, because it's this big long string of rules. So it's really refreshing to see a company like Elerian come to the space and start to address this. Yes, and there are just so few functional NLU solutions today, as you
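The bank-IVR birthday failure above is, at its core, a spoken-date normalization problem: mapping "first of April 1978" to a structured date. As a hypothetical illustration of the parsing step only (real systems also bias the ASR itself toward dates; the vocabulary and grammar below are simplified inventions):

```python
# Hypothetical sketch of normalizing a spoken date like "first of April 1978".
# Only a small ordinal vocabulary and one grammar shape are covered, to show the idea.
import re

ORDINALS = {"first": 1, "second": 2, "third": 3, "fourth": 4, "fifth": 5,
            "twentieth": 20, "thirtieth": 30, "thirty first": 31}
MONTHS = {m: i + 1 for i, m in enumerate(
    ["january", "february", "march", "april", "may", "june", "july",
     "august", "september", "october", "november", "december"])}


def parse_spoken_date(utterance: str):
    """Return (year, month, day) for '<ordinal> of <month> <year>', else None."""
    m = re.match(r"(?P<day>[a-z ]+?) of (?P<month>[a-z]+) (?P<year>\d{4})",
                 utterance.lower().strip())
    if not m:
        return None
    day = ORDINALS.get(m["day"].strip())
    month = MONTHS.get(m["month"])
    if day is None or month is None:
        return None
    return int(m["year"]), month, day


print(parse_spoken_date("first of April 1978"))  # (1978, 4, 1)
```

The brittleness Al complains about is visible even here: any phrasing outside the one grammar ("April the first, '78") returns None, which is exactly why tailoring models to real customer utterances matters.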

know, that can handle more complex utterances like that. Right, which brings us on to the big players. It's not as if the Googles and the Amazons and the Watsons of this world are ignoring the area. What are they up to, and where's the difference? Is there space for a startup like Elerian and other players, or

are they going to get squashed? What's happening there? I mean, like I said earlier, I think it's such a nascent and rapidly emerging space that there's room for lots of players. And if you take a look at the big guys you mentioned, your Googles and your Samsungs, your Apples and your Amazons, I see a couple of different things

emerging from the large companies. One is platforms: AI platforms, tools to let speech scientists, or just data modelers and AI scientists in general, or engineers, take data and turn it into models, deploy them, run them in the cloud, and do recognition, whether it's video or images or voice, or build NLU solutions. So you have a rich

set of tools emerging from a lot of the large players. Those tools have some challenges, in that you need a certain level of expertise in your engineering and science staff to truly leverage them, and then in a lot of cases you come up against the "you can take it so far, but then" wall: when

you want to supplement that language model with your own language model, or constrain it, or maybe tweak the algorithms to favor your use cases deep inside the ASR or the NLU engine, you just can't do it. You can only open up so many of the interior boxes, and then you hit a wall. Yeah. The other thing I see them doing, I mean,

you see the general agents, like we built with Alexa, doing a massive breadth of things, really approaching the largest breadth of capabilities we see in virtual agents, which is interesting, but a completely different set of challenges from what I think Elerian is facing in going after, initially, things like call centers, where the problem space is much more

constrained. Yeah, I mean, they don't want to have general conversation. They want to have a conversation that is as tightly controlled as possible, because you don't want agents going off script. Right. Well, you want to get to the quick resolution that solves the customer's problem and provides the best customer experience, without

frustrating them with errors and inaccuracies. Yeah, and then the other part of that, I suppose, is regionality. As far as I understand, Elerian is working with a South African call center, and that has unique aspects that you don't get out of the box, because you

have all these very specific regional accents. That's something we take for granted if we're in London or the US: we take it for granted that the big tech players are going to deliver solutions for us. As soon as you step outside that, there are issues, specifically with NLP and NLU, because these algorithms need to be trained on the right data. Yeah, I think that's true broadly

of all AI, and of speech recognition and NLU: you need to train with the right data, spoken in the local dialect, by local people, in the environment where they would actually be speaking it. There's no real substitute for data collected in the actual use-case scenario, from real customers doing real activities, to train your models. It's just the best data for getting the highest accuracy. Yeah, and so for NLP and NLU, do you

think that, I mean, are call centers where the NLP and NLU breakthroughs of the future are going to be born, or is that just one market driving force? Or, I mean, there are probably a few, right? So I think what we're seeing is this technology, speech, spoken language understanding, and artificial voices like

text-to-speech, being applied to almost every facet of business. So I think call centers are one important element of it, but we're seeing it in almost all areas of business. I think the advances are going to come across the board and be pushed from a lot of different places, which is one of the

interesting things about taking a voice-first approach, if you're Elerian, and then applying that to a vertical like the call center vertical: all of the things that you learn and feed back into your systems can be leveraged to apply that same tech to other business cases. Right, yeah. And then, again, this is technology that's been wished for, and people have had various goes at it, because it's such a big thing, such a natural thing to

have: a computer that can talk, a system that can talk back to you. It's been written into science fiction from the year dot, and yet, time and time again, an advance has happened and we haven't had it. We haven't got that talking computer. Another advance has happened; we still haven't got there. Is there a tipping point where, with this stuff, it all gels, and suddenly we leave our desktops behind and just wander

around the street talking? Well, I do think that voice as a user interface is amazing and natural. You don't have to be trained on how to use it. It's accessible to everyone, including people who grew up without technology, seniors. But I don't think it's the only interface for all scenarios. So, you mentioned walking around

on the street; imagine instead you're sitting on an airplane getting ready to take off, responding to your email. I think there are situations where text-based chat might actually be a better interface, just from social norms. But voice is definitely that natural interface that lets you leverage what you've been learning since you were an infant, without having to think: what does that box mean? Does that chevron mean there's more information if I

touch it? Without having to be trained to use a totally different user interface. Even though the phone, with its touch interface, felt so natural, you just see it and you touch it, you still have to understand the hints and the cues of the user interface design. With voice, the holy grail would be that it's like talking to another human, and even if you have to fumble your way around a bit,

you're able to figure out how to get by, by asking, or rephrasing, or approaching it in a verbal way. Right, all the auxiliary words we use: "What?", "Please say that again," or interrupting someone's sentence, that kind of stuff. And I think there are also visual cues, so I think vision science is

going to play a big role as this moves forward as well. Because imagine you're at a cocktail party, pre-COVID I guess, and you're talking to someone, but it's really loud and you've got all these different sounds. If you closed your eyes and tried to carry on that conversation, you probably couldn't.

Though you you’re focused on that persons face you’re able to pick up visual cues um, you might miss words or phrases but you’re able to infer based on the context of what youve been saying and where you are and how you know each other yeah. I had this, i had this swedish swedish roommate, who was partially deaf and her subjective

experience when she saw someone talking was that the volume went up, and if you put a book in front of someone's mouth, she still partially heard them, but the volume was quieter. I thought that was fascinating. Well, I think a lot of those human strategies for understanding language will come to bear in the technology world: the ability to process video and pick up those subtle gestures. Certainly things like

context and history and current affairs, zeitgeist, things going on in the world today. It might be something that was in the news this morning, or that happened halfway around the world, or was a local event. Those types of things are already, I think, being utilized in a lot of the more general agents. I think that area of advancement will be an interesting way in which we get to something more conversational than

what we have today. I mean, imagine something like Alexa, where you have to have a live knowledge of all of popular culture, you know, to play the right track when someone says something. How did you guys approach that question? Yeah, by having people and teams focused on ingesting the data. We literally had, have, a zeitgeist team, a team focused on: how do we

figure out the most important, significant events going on in the world right now, and incorporate them into what we know, so we can use them as context? Say a goal is scored in the World Cup: everybody just instantly knows, and we need to know that too, because it could then be part of a question coming into our question-answering engines. So, being focused on that. There are obviously thousands of sources for that data that

you need to ingest from, and it takes time to build those out, but we put a lot of effort into our knowledge, getting live data as well as all the historical, factual data. Right, that's interesting. And I guess you have that distribution of what people ask for as well, and if the World Cup goal has a particular anthem, then 90% of the time, when a certain sound is close, it's going to be this anthem

Thats that’s um that’s played you have that the interesting thing is the long tail you never fully traverse it there’s, always something new that comes up in and its and its looking at the long tail of of um missed utterances missed questions that you couldnt answer and Figuring out how to solve, for them that that keeps you busy and keeps

you up at night. Right, yeah. So I heard an interesting stat recently, which was that there's been this tipping point where, in India, over 50% of Google searches are now done by voice. And it struck me that maybe there's this legacy, because we're used to using Internet Explorer, we're used to, well, whatever browser you use (right, delete that from the video),

but that this generation of kids in India, who have grown up with phones but without the desktops, have started to adopt voice in a way that perhaps we haven't, and that maybe this revolution, or transition, from desktop to, or from visual to

sound, is going to start in the developing countries. Maybe. I think, as you noted, there are some technological and cultural elements that may have driven that: the fact that everybody is operating on a mobile, and that it is actually easier to use your voice on a mobile to enter a string of words into a search

box than it is to type it out. So that definitely catapults it. I don't find a lot of people talking to their laptops; it's a little more common to find people talking to their phones. But I also think, with the more than 100 million endpoints out there that have Alexa, and billions of Android endpoints and Apple endpoints,

the agents are available wherever you are now. You always have access to a voice agent, and I'm sure it just continues to grow. I don't have access to that data anymore, but the number of people using it, and the number of times per day they use it, continues to grow. Right, and of course it's spread across these different categories. So I think we touched on this

earlier voice and chat and the sort of differences between them: they both require NLU, but one has this extra part sandwiched on the end to turn the sound into words. I would expect that the majority of those utterances and interactions are also command and control today. So, whether it's music, or in-car "send a text" or "read a message", or a smart home interaction: "turn on the lights", "make sure I closed the garage door".
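As a toy illustration of the difference just described, the sketch below stubs out the ASR stage (the "extra part" that voice adds on top of chat) and uses a minimal keyword-based classifier for the command-and-control intents mentioned: music, messaging, smart home. All function names and intent labels here are invented for illustration; this is not how Alexa, or any production assistant, actually implements ASR or NLU.

```python
# Toy sketch of a voice vs. chat pipeline: voice = ASR + NLU, chat = NLU only.
# The ASR stage is stubbed; the NLU stage is a minimal keyword matcher.

def asr(audio_bytes: bytes) -> str:
    """Stub: a real system would decode audio into text here."""
    return audio_bytes.decode("utf-8")  # pretend the 'audio' is already text

# Hypothetical command-and-control intents and their trigger keywords.
INTENT_KEYWORDS = {
    "PlayMusic": ["play", "music", "song"],
    "SendText": ["send", "text", "message"],
    "SmartHome": ["lights", "garage", "thermostat"],
}

def nlu(text: str) -> str:
    """Minimal NLU: map an utterance to the intent with the most keyword hits."""
    words = text.lower().split()
    scores = {intent: sum(w in words for w in kws)
              for intent, kws in INTENT_KEYWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "Unknown"

def handle_voice(audio_bytes: bytes) -> str:
    return nlu(asr(audio_bytes))   # voice: ASR first, then NLU

def handle_chat(text: str) -> str:
    return nlu(text)               # chat: straight to NLU

print(handle_voice(b"turn on the lights"))    # SmartHome
print(handle_chat("play my favourite song"))  # PlayMusic
```

Real systems replace both stages with learned models, but the layering is the point: the two modalities share the understanding step and differ only in the transcription step in front of it.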

I don't think there's a lot of conversation going on with agents, whether it's an Alexa device or other end devices, and I think that's really where the future development will take us. And I guess that's partly what makes the contact center space really interesting, because that's far more conversational. Yeah, yeah, there's a whole range of things that people are

discussing in there. Yeah, it's a compound goal: there are multiple things that both the virtual agent and the human are trying to accomplish in that conversation, and they might get from where they start to the end of that set of goals. Yeah. And then the drivers for the contact centers to adopt this technology are, I suppose, pretty self-evident. One is the cost, being able to automate some of this stuff, and two, it's

not just cost: they're really aiming for customer experience. And I guess there are ways for an automated conversational agent to not just reach the human level but potentially surpass it. I think one of the things is that, in the call center space, the workforce turnover is much higher than in other industries. I'm not sure, I don't have

the stats handy, but if 10 or 20 or 30 percent of your workforce is fresh off training and inexperienced, the possibility for errors is higher, whereas with your virtual agent you always have the expert, and probably your top expert out of all of your human

agents, in terms of what they know or what they can know. So always being able to reach an expert, I think, is going to improve the customer experience on its own. Right, right. Yes, and that handoff problem, where you make sure you get to the right person: you almost solve that out of the box with an automated system. And you're not waiting, you're not

waiting to reach that person, you're not pressing two or four or whatever; you're just saying "I want help with this" and boom, you're there. Yeah, I think that's a noble goal for any company: if we can get that waiting time down, then I think we've diminished the frustration in the world. Right, yeah. So what does the competitive

landscape look like? I think, again, we talked about it a little bit before, but for a company like Elerian, where are the threats, and what's the landscape? It's interesting. I think there's lots of space, so I don't think there are existential threats around for companies that have developed novel spoken language understanding solutions. It's an interesting market space where, as I said before, everybody is trying to

gain the expertise, and a lot of the big companies with deep pockets are bringing it in-house. So if a competitor starts to emerge that looks highly capable, we're seeing a lot of them get snatched up by large companies that want that expertise in-house, because you just can't find the talent. So what it results in is this

constantly changing landscape of who the players are. There are some established players, obviously, that have been around for a number of years and likely won't go away, because I think they have really viable, large growth opportunities and plans if they stay independent. But there just aren't that many, and again, when a bright one does pop up, I'm seeing them get snatched up a lot of the time. So it is hard to find

companies in this space with this expertise. The second part of it is really the available talent with expertise in this space. I think a lot of people are rushing into this field, but the folks that really have the expertise are those who already have their nine-year PhD plus many years after that, and it's really hard to accelerate that just because there's higher demand. So Elerian is lucky in that they have some of this

talent on their bench, which goes a long way. I think it's hard to just start a company and find the talent you need to really have that deep expertise in big data, AI, ASR, NLU; there's a limited set of people in the world with that expertise today. Right, right, yeah. Interesting. And does that talent get poached by the big players, or

does Elerian have people knocking on the door saying "please, please come over to us"? I guess in that regard it looks like the rest of Silicon Valley. Right, right. So keep your engineers happy; it's highly competitive, just as it is in the engineering space. Yeah, you should take care of your people. Right, okay. So I've got just a couple of last questions, and they're sort of the big

ones, which are always fun to ask at the end, potentially after we've had a few glasses of whiskey. So, first one: are we going to see this transformation happening soon? How ubiquitous is it going to be? Is it potentially a real revolution? Yeah, I think massive investments are being made to push those advancements forward and to try to find those breakthrough

technologies, algorithms, approaches, engineering strategies, whatever it is. Obviously, having worked closely with folks on those challenges at Alexa, you really have a lot of really smart people focused on solving these problems. Yes, we're going to see advances, and where they're going to come from is probably all over, since there are so many different organizations focused on it, with so much talent. But I'm a little cautious about whether you'll see massive changes. I think people often expect these things to move in massive leaps and

bounds, kind of like the whole time-travel thing: you slip in the bathtub, bang your head, and have a vision for the flux capacitor, thus making time travel possible. It usually doesn't work that way; it's more perspiration. There's a lot of invention going on, but you might try 300 different small and medium inventions, and 10 of them accrete towards your goal, and once in a while

you get one that moves the needle 20 percent. But it's not one big thing that's going to change it; it's all of this effort in action that, I think, will result in a more incremental improvement over time. There is a point where those increments reach a certain aggregation that starts to create a tipping point in the consumer's mind, where it feels more natural and conversational and they're more open to talking to their technology like they

would a human. I'd love to see that in the next three to five years; I think a lot has to come together for that to happen, and I don't know what that is, or I'd be... yeah. And is that thing AGI? I guess that's a philosophical question, right: is a computer that can talk and speak like a human the same thing as

an intelligent being? You know, the singularity, or whatever you want to call it; there are so many versions of it that Hollywood has latched onto over the last five years. Yeah. And I think a lot of people are worried about the advance of AI. Do we see it being born in a call center which handles your

bank details? I'm skeptical that we'll see it in our lifetime at a level that causes people to pause and be concerned about AI really being an existential threat, as you see in the movies. Rather, what I see is the advances in technology getting to the point where it really is quite conversational and useful, in a utilitarian way, or even in an emotional support way, or for medical or mental health purposes; there are so many great applications of talking to a computer that feels just

like talking to a human that don't involve arriving at the singularity and being turned into coppertops, to steal a quote from The Matrix. Yeah, brilliant. Well, I think that's us coming up to our five o'clock, Al. It's been a pleasure, and I hope we get a chance to talk again soon. Great, thank you. Thank you.

AI Expo Africa 2021 ONLINE – Overview

AI Expo Africa ONLINE is the largest business-focused AI, RPA & 4IR trade event in Africa. Our 2021 conference and expo will run 7th-9th September 2021, followed by a 30-day on-demand archive show, to a regional and global audience, building upon the phenomenal success of the 2018 / 2019 / 2020 events that cemented it as the largest gathering of its kind in Africa.

Due to the ongoing COVID-19 situation we are running an online event to ensure safety and surety for delegates, sponsors and speakers alike. This format also affords us a range of great new opportunities to engage not only local, regional and national buyers and suppliers but also a wider global audience – showcasing the African 4IR market to the world. No travel is needed; just sit back and join our community from the comfort of your home office or workplace anywhere in the world.

The online programme includes:

– 4-track speaking programme with 80+ speakers covering business deployment case studies, innovation, demos & platforms with live Q&A
– Expo hall – Housing vendor e-Booths with vendors showcasing their 4IR products & services with live demos / Q&A
– Innovation Wall – Housing e-Posters showcasing applied R&D applicable to industry or investment ready
– Women, Youth & AI4Good zones – Fostering greater engagement with female professionals, young engineers and social applications of AI.
– Networking Zone – Public and private live meeting spaces to meet specific people you want to talk to / trade with
– Help Desk – Just like a real world event, AI Expo Africa 2021 online has a help desk / support function manned by real people
– Expected footfall – Estimating 2000-3000+ registered decision makers, 4IR tech buyers, suppliers, innovators, SMBs, investors and global brands

Our business audience comprises Enterprise decision makers / CxOs, allied to AI Cloud platform providers, Tier 1 / 2 deployment & service providers, AI start-ups / innovators, investors, educators, government and AI ecosystem community builders.

You will learn about real Enterprise case studies and the application of AI, RPA and Data Science in business TODAY, available technology and cloud platforms, deployment challenges, and ethical considerations, allied to the vibrant innovation and start-up ecosystem driving the industry in Africa.

About AI Media

AI Media are the curators of Africa’s largest business-focused AI & Data Science community. We believe the tech media industry is changing. We are leveraging new formats and platforms to communicate the business opportunity in the African AI & Data Science landscape for entrepreneurs, investors, business leaders and corporates. We make our content accessible across the whole of Africa by removing price and knowledge barriers, creating new event formats, and sharing the fastest-growing business opportunity across the continent with our community.

We are frequently asked to help organisations understand the local and regional detail of the African AI & Data Science market. It's one of the most rapidly developing sectors, and we provide consulting support ranging from Ambassadorial Delegations and Investors to Start-Ups and Corporates seeking clarity on the B2B / B2C and Investment climate across the continent. We also provide analysis reports and introductions on a consulting basis.

Website: http://aimediagroup.co.za
Magazine: https://issuu.com/aimediasynapse
Instagram: https://www.instagram.com/ai_expo_afr…
Facebook: https://www.facebook.com/AIExpoAfrica/
Twitter: https://twitter.com/aiexpoafrica
LinkedIn: https://www.linkedin.com/groups/13572…

#AI
#AIExpoAfrica
#ArtificialIntelligence
#DataScience
#Africa
#SouthAfrica
#CapeTown
#Conference
#Event
#AIMedia
#RPA
#RoboticProcessAutomation
#4IR

