In this episode, Sam, Michiel, and creative technologist Voidwalker, aka Vince Buyssens, explore what comes after chatbots: truly generative UIs.

They dig into how AI changes us from commanding tools to navigating intelligence: moving through vectors, embeddings, and latent space to "sculpt" copy, design, and product experiences. The trio explores the rare intersection of design and engineering, why text prompts constrain creative work, and how interfaces should evolve toward semantic controls.

You'll hear how AI reshapes team roles (from writing code to writing specifications and curating taste), the risks of over-personalization (and why surprise still matters), and a new model for brands: brand engines that encode tone, values, and style as reusable embeddings. They close with a look at ads and websites as living, intelligent entities: assets that adapt themselves in real time to each user's context.

Perfect for product leaders, designers, and engineers thinking about UX in the AI era.

Listen to this podcast on Spotify or Apple Podcasts.

Speaker 2 (00:11) Welcome, everybody. So we're here with the Voidwalker. He will introduce himself, obviously, in a couple of seconds. And with Michiel. I'm Sam, one of the co-founders of Dear Digital. Today, we're going to talk about generative UI interfaces. So it's an AI-first discussion. We're all three very interested in AI, and we're going to tackle some very creative thoughts around how AI and UI can change in the future.

Speaker 1 (00:43) So, Voidwalker: I took the name from my favorite video game, World of Warcraft. It sounded nice, but over the years I realized it also fits with what I do. What I do is navigate the currents of digital change. So when Pokémon Go came about, I was interested in how we could leverage it in the community to help brands. I also helped save my favorite TV show using Discord and Twitter back in 2018: The Expanse, amazing science fiction. So I think my biggest driver is using technology and navigating through the algorithms and the subcultures, because technology does not exist in a vacuum. It is influenced by the algorithms, but also by the people. But I've always been looking for my North Star. That's why the void stands for the unknown: I love exploring the unknown. Metaverse, NFTs: I was never really convinced, so I kind of skipped that. But then AI came into my life. I think it was 2022 when Midjourney came out. Text-to-image was the first time in my life I could visualize the ideas that had been in my head for years. I went to art school, but my teacher said, "Vince, you're a very nice guy, but you can't draw." I don't have any fine motor skills, but thanks to AI, I could build these worlds that were in my head. And as I dove into it, I found I could really express myself: an amazing creative tool.

Speaker 3 (01:49) 2020.

Speaker 1 (02:06) My background, my jobs, have always been in the creative industry: marketing, advertising.
I saw a lot of creative people feeling threatened. Copywriters, designers, art directors, strategists, consultants: everyone thought, AI is good at coding, fine, AI will automate factory work, but in their wildest dreams the creative people, a bit arrogantly, never thought that AI would be that good. Now, AI creativity, we'll talk about that in a minute. But I think AI had a big disruptive effect, and I was a bit torn between those things. I love AI, what it represents, what it can help me do. But at the same time, I'm also very empathetic about the impact it has on creative people. And out of that came my agency Thoughtform, where I guide people through the possibilities of AI. Because I believe that AI is the single most interesting thing that has happened since the internet, especially when it comes to design, to information architecture, to interfaces, et cetera. But only very few people can see beyond the chat box and see what AI really represents. So, a short intro about who I am.

Speaker 2 (03:15) Thanks so much. And that's him. We have Michiel here as well.

Speaker 3 (03:18) Yes, I'm a solution architect here. I'm a developer by trade. So I think that's interesting: you have maybe more of a creative background, and I more of a technology background. And I think throughout my career I've walked the path between the design part and definitely also the more analytic coding part. So really, really interesting to see how this dynamic works.

Speaker 1 (03:45) Because there are very few technical designers. You have a lot of technical people, a lot of good craftsmen, but people who can think through the principles of design in a technical environment, that's very rare. I think the people who founded Cursor found that balance; I think their head of design is someone who's really at that intersection.
But really, the people who are at that intersection, and then that third intersection being AI, are very, very rare. So there's a big opportunity there.

Speaker 2 (04:13) That's more or less how you are, right? You're always a bit on the design side as well. You're very interested in the design aspect of things.

Speaker 3 (04:20) I think so, yes. Yeah, definitely.

Speaker 2 (04:22) So I just want to pick up on what you said last, because I think that is the fundamental thing we're discussing here: not a lot of people can look beyond the chatbots, right? So, how I think about it: two years ago, Michiel and I had a little hackathon where we tried to understand how UIs might change, because we're also kind of a web agency, right? So we built interfaces on the paradigm that currently works, which is indeed internet screens, mobile or desktop. But we thought, now that chatbots, or AI, are there, how will this fundamentally change? And now, two years later, we're going to talk about what we talked about then. But two years later you can actually say that chatbots and AI are more or less the same thing, I think, for a lot of people. Either you don't know AI at all, or, if you know AI, and I think that's the biggest part of the people, it is equal to chatbots.

Speaker 1 (05:18) And I think that's because it's the first time in history that we have dissociated the ability to use language, to use humor, from human beings. There's a very interesting researcher called Michael Levin, and he framed it in a very interesting way: the ability to use language was exclusively human. Before AI, if you thought about language, about humor, using language was a human thing. But now we have given the ability to use and transform language to something that isn't really human. And I think that's interesting.
Speaker 2 (05:51) So yeah, now it's chatbots, because that's how it started. It's because of ChatGPT, more or less, right? Because they didn't really think it would become the major hit that it did. But next to that, we have text-to-image models as well. I think a lot of people use them, but it's not as big as the chat type of interfaces that we have, right?

Speaker 1 (06:12) No. Of course you have Veo 3, where you can create videos. Meta released an AI app that shows AI videos. Google with their Veo 3 is really hitting on that, also pushing it in Shorts, Meta on Reels. So I think up until recently it was very abstract. People knew they could make Ghibli-fied images in ChatGPT. The nerds used Midjourney. But all these other tools: Stable Diffusion was open source, you needed a computer or servers. Now, with the tools that are around, people are using it more actively. But that also goes to the heart of the problem. If you say to ChatGPT, write me short copy, it's pretty easy, because it's language. But expressing a visual thing that's also nuanced, that's where it's difficult. That's also the beauty of creativity: a beautiful image is sometimes difficult to describe in words. And you feel with these text-to-image or text-to-video models that language is very limiting, because these models are trained on the videos or images and their captions. So they are biased. They're trained on specific things. They have a bias towards a Hollywood-blockbuster aesthetic. So if you want to get more nuanced stuff out of them, you feel that language is a bit lacking, and that the general interface of text-to-video, text-to-image is maybe not the right interface for the future.
Speaker 2 (07:34) Yeah, because you are approaching it more from a creative side, and quite a lot of people are also in the creative space. Maybe I'm going to say something that doesn't really match, so please call out my bullshit, but I think a lot of creative people are not really verbal, or it is hard for them to explain what they want to create. That's why they might have different forms of creating art: with their hands, or visually, or indeed just drawing stuff, right? So maybe it's that text is now becoming so important. They call it context engineering, because you basically have to give so much context, and engineer the context so well, that the thing you're interacting with, which is the AI, creates the thing you want to create. But I think that for a lot of people it's hard to understand how to explain something so that they get back exactly the thing they wanted. And I think that is very hard for some people to understand, or to even say: that's maybe the reality, and it's hard for me to express what...

Speaker 1 (08:40) That's the first problem. The second problem is that many creative people, and I also struggle with this, don't really understand how these models work. They think you give it text and it just generates something. It does generate something, but it's not like opening a drawer in a store and taking out an image. You're actually navigating a possibility space. You're navigating to a coordinate: a vector, an embedding. And that's where the interesting things come in. When we talk about AI, about vectors and embeddings, people who study AI know what they are; they don't think twice about it. People like you guys sit in between.
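The "navigating to a coordinate" idea can be made concrete with a toy sketch. This is not anything discussed on the show in code form: the three-number vectors below are invented stand-ins (real embedding models use hundreds or thousands of dimensions), and cosine similarity is one common way to measure how close two coordinates in such a space are.

```python
import math

# Toy "embeddings": made-up coordinates standing in for two visual styles.
# In a real model these would come from an embedding API, not be hand-written.
simpsons_style = [0.9, 0.1, 0.3]
painterly_style = [0.2, 0.8, 0.5]

def cosine_similarity(a, b):
    """How aligned two points in the possibility space are (~1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(simpsons_style, simpsons_style))  # ~1.0: identical style
print(cosine_similarity(simpsons_style, painterly_style)) # lower: different styles
```

A prompt, in this picture, is just a clumsy way of pointing at one of these coordinates with words.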
So you're also confronted with these things like embeddings, RAG, et cetera. But really creative people, marketeers, copywriters, they never learned that in school. What I'm trying to do to bridge that void is not turning these creative people into technical people. They could be, but it's about introducing some of these foundational concepts. And just the baseline idea that if you use AI for creative or strategic or cognitive work, you're not commanding a tool, you're navigating an intelligence. You're navigating something that thinks in dimensions, where the style of The Simpsons is one coordinate and the style of the Ghent-based artist Michaël Borremans is another coordinate, and you can actually travel between these things. Once I realized that, I thought, my God, AI is fundamentally one of the most interesting technologies we've ever created. Unfortunately, people don't really know how it works, and a lot of people are using these image or video tools to generate slop. So I think there's a big void, or a chasm, to bridge. And that's why it's important to have these conversations: okay, what is relevant? What is a gimmick? Et cetera.

Speaker 2 (10:26) But isn't it hard to have this discussion with a lot of people? Because I think there is a lot of fear in the world as well. You have a lot of experience with having these discussions in companies, and within companies the fear part is quite big, right? So maybe it's what you're saying: you feel the urge to try and communicate how you look at it conceptually and how big of a change this is. If you don't understand what it is, you can basically only fear it.

Speaker 1 (10:56) Yeah. And that's also my role, because I didn't mention that I also work at Loop Earplugs as a creative technologist. I actually joined Loop almost two years ago as a consultant, helping the AI team. The AI team was two people.
The AI team was part of the tech department, and I joined them full time because I believe in the vision, and they just needed to grow it. And then I gave workshops to the copywriters, to project managers, to designers. Initially there was fear. Now these people inspire me with how they use AI. So there is a cultural shift at Loop, which I think is a result of how Loop has approached it. But I realized: if you use AI for technical work, for data analysis, for automation, you don't need me. I can help make business cases for creative things, but you don't need me to develop it; there are far smarter people out there. However, if you focus on what a lot of companies are focusing on, AI fluency, AI adoption, AI intuition, where we want people to use ChatGPT, the interface, on a day-to-day basis to help them write strategies, to brainstorm: if you want to use AI like this, you need a different approach. And that's been my approach. That's why I moved from the tech department to the marketing department, because it was also echoed by my experiences: if you want to use AI for creative and strategic work, you need a different mindset. It's not about right or wrong, because code either works or it doesn't. You can talk about optimizing code or an Excel formula; it either works or it doesn't, or you get an error. But what is a good strategy? What is good copy? That human taste is super important, and you cannot always quantify it. A lot of people try: okay, let's train AI on all our strategies, on all our ads. Yes, AI can find patterns, but it's often the human curation, the human taste, that decides what is a good ad. Because a good ad is not just the format. It's not just the image. It's about the context: is it tapping into a cultural trend? Is the copy on point? So you have all these things where humans are super important. And I call that AI intuition.
AI intuition is learning how to think with AI, navigating dimensionally, but also using your human taste to curate the output.

Speaker 2 (13:06) I want to bring you in as well, because we had a lot of discussions over the last couple of years about code. You're framing it, which is super interesting, a bit like a formula. I think you see code a bit more like an art as well, and that there is human taste to it. And I want to hear what you think about how AI models have helped people here within the company to see maybe another angle to implement certain kinds of code, or yourself. How do you think about how it helps you in coding and creating things?

Speaker 3 (13:38) It's definitely enabled me to realize things that I couldn't do myself, that are just beyond my capabilities. I could think of them. As you mentioned, you may have those visions in your head, but you don't always know how to visualize them. And that's what these tools now do for me: I have ideas, and they help to expand those ideas and make them into reality. But going into code, that's where there are definitely some challenges. There are a lot of vibe coders these days, people who might not have the traditional background in coding, but they just give prompts to an AI and things come out, and if it doesn't work, they ask the AI to fix it until it figures it out itself. But working in bigger teams where you ship code and run code in production, especially when it's essential to business processes, like for us in e-commerce, either you can get to the checkout or you can't; it's pretty important that these functionalities work. I believe that in teams you write code for one another. You don't write it for the computer. You still have to read it yourself in the end.
Although that might change. We might not have developers coding things, but developers writing briefings and really focusing on the prompting, while the syntax that comes out of it becomes more trivial; that's more the computer's side. How the functionality works and how it should behave becomes more important. In the past, as developers, we focused very much on how the code is written. Is it readable? Can you understand it? Is it not too difficult? Does it make logical sense, how it's structured? And that might be something that evolves.

Speaker 1 (15:50) Yeah. Because I think we still have a lot of wrappers around code. You can use Claude, which is really good at code, or ChatGPT or Gemini, to write code. But then you always have to build a wrapper around it to steer it. Something I've noticed is that as these models become smarter, they start to generalize, and you need less scaffolding in this wrapper, because they just become better at understanding the context. And I think that is the future, where AI really understands not just the code or the codebase, but also the business processes. Once you don't have to explain that to an AI and it just gets it, and when it's corrected it understands why it's being corrected, I think that is a powerful unlock. And then you can really just speak things into existence. You talk to the AI and it just generates them, and the code is less important. Code checking code is super important. I think it was Stripe... no, Gumroad, a platform where you can sell courses and such. He said, I'm paraphrasing, "I replaced all my developers with AI and rehired them as code checkers," so to speak. And that is interesting. A lot of companies also fire their copywriters because ChatGPT writes copy, but then you realize that you still need copy editors to check the copy. And I think that's, again, the discrepancy between AI for technical work and AI for creative work.
AI can win gold medals at the Math Olympiad. AI can code; Claude 4.5 can code autonomously for 30 hours. But if you ask Claude, or any LLM, to write copy, you always need a copy editor to edit it. And it's not just because creative output is subjective. It's also because these models are not really optimized for open-ended thinking, for creative thinking. And my approach is: those hallucinations, I see them as a superpower. For example, if you ask ChatGPT, "can you make me a better text?" or "can you improve this?", what does "better" or "improve" even mean? However, if you ask ChatGPT, "can you check my article or strategy from the point of view of a werewolf?", the output it gives might not be usable initially, but it will be far more creative than your initial question. And that's something I've learned: if you force AI to bridge incompatible contexts, a werewolf and a strategy for Loop Earplugs, it is forced to give more creative responses. The thing is, I have to really go through roundabouts to push AI out of that safe middle. And I think there's a big opportunity for not just foundation models, but also interfaces that really tap into AI's alien thinking. Because AI is not a human; it uses language, but it's not a tool either. It's something in between. And again, Michael Levin really says it's an alien intelligence: not like UFOs in the movies, but something we don't fully understand, and that requires a different way of interacting with it.

Speaker 3 (18:37) It's interesting, and maybe I'd like to make the bridge to something we come across quite often: customer journeys. In the past, and I think we can still say today, we always had a very distinct customer journey, and often it was: you start at this point, we guide you through a few processes, and in the end there is a certain goal we try to get you to reach.
Like you said, AI can find links between certain objects and things that might not be very obvious. And I feel it could be the same with customer journeys: in how you navigate the world and how you find certain things, you might have a very different approach than what a Google results page gives you. There are very different ways to discover things, to find out about a problem, what possible solutions there are, and how that journey goes. I think that's the beauty of it: AI can be very unique to you, and only you, and it can shape that path very beautifully for you. I think that's what we as agencies, as creative people, should try to shape: that we build these spaces, like you talked about, these worlds where people can navigate and discover things. So I'm very interested to see how that plays out where we have the traditional, more our background, e-commerce: the traditional homepage, collection page, product page, checkout. Will that be re-envisioned?

Speaker 1 (20:32) Yeah, and the new update from Shopify and ChatGPT: how do you think that will influence things? Because you've built a lot of e-commerce solutions. How will that influence it?

Speaker 2 (20:42) Well, I have no clue, but my take would be that it's not either-or. I think it's a bit like the discussion we hear a lot in the news: there is AI, and then there will be no more people who do certain things. I personally think more "and", and I think the past also guides us towards that insight. It will be more "and". I think, first, a lot of people will still want to use the interfaces they have grown up with. Maybe that will diminish over time, but it will still be a big part of how people do stuff. And then there will just be a new angle as well, which people will start to use more and more, and that part will also evolve over time.
So it's very hard to see the combinations of all the things. But I do think, to answer your question concretely, that a lot of people will also start to discover things via the things that were just launched. And to come back to what Michiel said, what we're trying to get at conceptually: you have a brand, right? And that's the space that you navigate. And then you have the customer, and if you want to do things well, and that's what we called personalization before, you basically want to know who this person is so you can craft the journey as well as possible. So if you were to ask me, and I put on my hat of trying to understand the future: I think there is a beautiful new experience to be built where you as a customer take a bag of context with you, right? Something that defines you a bit. Then we can talk about whether we can do this from a GDPR perspective and so on. But there should be a way to define your customer. And then the interesting part, and I really want to talk this through with you guys: what does a brand still mean if things are generated on the move? Because now it's easy for us to create a brand, because we have specific colors, specific sections that we guide you through, and we try to infuse the conceptual thought of what a brand is into every single section. And these sections make up pages, which are again very defined. You go from one page, built up in Lego blocks, which are sections, to the next page, which is again very defined. So basically what we're doing is infusing a brand, which is a concept, into a fixed format, which is the computer. And this was a trend that came before AI. But now, if you understand AI, which is more generative in nature, which is first of all intelligence, you could ask yourself: what is then the concept of a brand?
What does it still mean? And how do we... because a website basically is the, the...

Speaker 1 (23:24) Hyper-personalized.

Speaker 2 (23:37) ...the coming together of a web screen and a brand and a person and whatever, right? But now the thing doesn't need to be static anymore. And I know it's not really static on websites now either, but it becomes more of a generative journey. And then I ask myself, what will this look like? And maybe I'm a bit too prone to thinking the chatbot way, in that I find chatbots very boring. You see OpenAI generating some images now to make it richer. But if you take it one step further and think about what a brand means, and you have the generative experience where the person that visits brings their context with them, and then the brand's context, I think we cannot even imagine where this will go in the next couple of...

Speaker 1 (24:26) I have some ideas about that. Or you first, Michiel? So, this before I lose my train of thought, which sometimes happens. I think there are two things. One skeptical view would be: hyper-personalization is amazing, but you also have the biases. How do we make sure? Because we already have this with the algorithms on Meta, on Pinterest: it's confirmation bias. We're being fed the things that we like, which I love, but you sometimes want to be surprised.

Speaker 3 (24:30) No, I want to hear your-

Speaker 1 (24:54) On YouTube, you have this button you can click: surprise me, something I haven't seen. I click on it; it's exactly the same AI podcast, the same ASMR music. So I think with these models it's also important. ChatGPT memory is an amazing feature, but it also feeds specific tendencies. And how do we make sure that AI still surprises you?
I think we can make AI surprise you by cultivating its personality, by aligning it away from its sycophantic behavior. AI has a tendency, based on its training, to say what users want to hear. But if you combat that too much, AI will start hallucinating, and hallucinating can sometimes be fun, but it can also sometimes say bad shit. So how do you balance that? And on top of that, these models are becoming smarter. In fact, I think it was Anthropic that did research showing models know when they are being monitored. So when they monitor these models to see... are you being monitored? Almost quantum physics. But they are aware, "aware" between brackets, not really aware like a human, but aware in some sense; there's something happening in that neural network that makes it aware it's being observed. Some models are aware, some aren't. So again, these models aren't humans, so we don't have to treat them like humans, but at the same time, they do something with it. So I think it's important that we push AI to be more creative, but also take the safety side into account. Claude 4.5, when you look at alignment and jailbreaking, I think the benchmarks were super safe. One of the safest models ever, but maybe at the expense of creativity. So that's one part.

Speaker 3 (26:31) Maybe just explain jailbreaking a bit.

Speaker 1 (26:34) Jailbreaking is pushing AI to do bad stuff. Like, I want you to say racist things, or I want you to make a bomb. It was a problem with the earlier chatbots. Now, with alignment, you have red-teaming people who really stress-test these models, and it's become less of a problem. The hallucinations, too, they're reducing them. So these models are being optimized for a production environment, for data analysis, which is great. But we are losing, one, a lot of creativity, but we're also kind of neutering them, and there are subreddits, communities on X,
who really think that these models may have a personality, and that we really have to start thinking about personhood. So I think there's a lot happening there. But to go back to your earlier point about a brand: what is a brand identity? It's a set of choices. What is the role of a designer? To help clients make choices. And in an age of infinite choice, you need people, designers, creative people, who can make a choice and say: this is what we stand for. And you see that even outside AI: all these fashion brands have the same type of font. Everything is flattening. Everything is converging on a bland middle, and AI kind of feeds that trend. I do think that the brands that will last are the brands that make choices. This is our logo, don't fuck with it. These are our colors. We have some elements. Everything else can be transformed. And that's where the future comes in. Because with AI, and this has been my grand vision since last year, you can build brand engines, brand algorithms, where you can actually encode your tone of voice, your colors, your values into embeddings: things that can then be transformed. So a color is not just a random choice. The color is not just randomly generated. No, it is the result of the embeddings in your brand's algorithm that constitute what your brand is made of. So maybe the brand identity of the future is not about these very specific things. Maybe you still need a logo, but you don't pay for a logo, you pay for a system.

Speaker 3 (28:39) Yeah, it's more like principles, like base rules.

Speaker 1 (28:43) And that is the breakthrough thing with AI. It's the first time in history that we can encode these elements of meaning, the meaning of your brand, into things you can navigate through, that you can interpolate through. And that's the thing with AI: when you convert an image to a video, you're interpolating in some sort of way. AI can be very biased, AI can say what you want to hear, a lot of bad things, but AI can interpolate styles and concepts in a way humans can't.
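The interpolation behind this "brand engine" idea can be sketched in a few lines, under the assumption that a brand's tone and values have already been encoded as embedding vectors; the numbers below are invented for illustration, not real brand data.

```python
def interpolate(a, b, t):
    """Move a fraction t of the way from coordinate a to coordinate b."""
    return [x + t * (y - x) for x, y in zip(a, b)]

# Hypothetical brand embeddings: numbers standing in for encoded tone and values.
brand_formal = [1.0, 0.0, 0.2]
brand_playful = [0.0, 1.0, 0.8]

# "20% more playful": a point one fifth of the way along the trajectory.
print(interpolate(brand_formal, brand_playful, 0.2))  # roughly [0.8, 0.2, 0.32]
```

A brand engine in this picture is the fixed set of anchor coordinates (the choices you don't transform) plus rules for how far, and along which axes, everything else is allowed to travel.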
And I think if more people like you guys, who are more technical and smarter than me, can build interfaces around that, we can go beyond those chat...

Speaker 3 (29:20) ...faces.

Speaker 2 (29:21) I just want to challenge you on one thing. Maybe I'm anthropomorphizing too much, but I wonder what you're going to say about this. Is this really true? I mean, conceptually, what you said about AI being able to interpolate: isn't that exactly what a human does? Think about it: I bring somebody on board, like you, a marketeer, somebody who will think about your brand. Isn't that person the vehicle? The founders explain to them what they think they are creating, right? The brand, the company. And then that person creates stuff that the founders didn't think about, because the founders gave him or her the context. And through that vehicle, things happen that are basically output that initially wasn't thought of. It's because in the head of that person combinations flow, and the output is something like: wow, this color, because he said this and this...

Speaker 1 (30:22) That's the crucial part. That's, for me, the human side of AI intuition: your human lived experience, your cultural references, your emotions, things that only a human can pick up. That's where the human curation comes in. That will be more important than ever. However, what can AI do that a human can't? If I write a text and I ask a copywriter, Lara, or a creative strategist: what lies in the semantic shadow of that text? How do you even start with that? Or: can you position this codebase at these coordinates, and then navigate 50% towards a better codebase? AI can really create these trajectories. And again, it's not that it will give better or correct responses. Because the biggest problem of a creative, or just any human, is their biases, their blind spots.
It's like their confirmation bias. AI also has them. But you can ask AI: can you make this text 20% less formal? Speaker 1 (31:20) Or: can you rate this strategy on a scale from zero to ten, or this sales deck, where zero is super cringe and at ten you're going to win awards? And then AI will give you a number, say a six. Even that number is random, arbitrary, but it gives you a lever. And tapping into these dimensional navigation tools, the way AI works dimensionally, and building interfaces around that, that's where there is so much work to be done. There's a guy I follow on Twitter, Silicon Jungle, who is actually making a video game built on embeddings. These embeddings encode all the characteristics of your Dungeons and Dragons character: the things you like, the worlds you like, et cetera. And based on those ground truths, it can build an RPG, it can build a racing game. It's still very early days, but for me, he's thinking about it in the right way. Speaker 3 (32:10) Yeah. I was wondering, you were talking about this at the very beginning of the conversation: that you don't just take something out of a drawer, but you navigate into a vector space. Just having that concept is already very interesting, understanding how an LLM and how things are structured in a vector space. How would you feel if you had the possibility to see where you live in that vector space? Not just ChatGPT, because it's such a huge model, but it would be so interesting to see how you're operating in these hotspots. Speaker 2 (32:51) What could I do next? Speaker 3 (32:53) It's like you almost travel through the universe. We don't see all the stars, but you know that if you travel in a certain direction, you will reach something, right?
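(Editor's note: "20% less formal" can be read as moving along a direction in embedding space. Below is a minimal sketch of that idea: a "formality" direction computed as the difference between the mean embedding of some formal reference sentences and some informal ones, then a small step against it. All vectors here are invented; real embeddings would come from a model.)

```python
def mean(vectors):
    """Component-wise mean of a list of vectors."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def add_scaled(v, d, t):
    """Move v a fraction t along direction d (negative t moves against it)."""
    return [x + t * y for x, y in zip(v, d)]

# Hypothetical 3-d embeddings of a few reference sentences.
formal_examples   = [[0.9, 0.2, 0.1], [0.8, 0.3, 0.2]]
informal_examples = [[0.1, 0.8, 0.7], [0.2, 0.9, 0.6]]

# The "formality" direction: formal mean minus informal mean.
formality = sub(mean(formal_examples), mean(informal_examples))

text_vec = [0.6, 0.5, 0.4]  # embedding of the text being edited (made up)
less_formal = add_scaled(text_vec, formality, -0.2)  # step 20% against formality
print([round(v, 2) for v in less_formal])
```

The resulting vector would then be decoded or used as a steering target; the sketch only shows the "lever" the speaker describes, a number you can dial.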
Speaker 2 (32:58) Right? Having that part of the universe. Speaker 1 (33:08) I think, and for all the technical people watching this podcast: I'm not technical. I mean, I could study to become an AI engineer or whatever, but I don't think I have the brains for it. I really approach it very intuitively, because the people I give my workshops and keynotes to are people who are even further removed from the technical things. So I have to make it very tangible, I have to translate very technical things. I read papers, I talk with a lot of engineers, but I have to make it as intuitive as possible. And when we talk about vector space and latent space: I think, baseline, AI can think in dimensions humans can't. So trying to replicate that in a one-to-one interface will never work, because that's just the magic of AI. However, I think we can build approximations. I think we can build interfaces, like a point cloud or a word cloud; that's maybe already a high-level version of what it is. But just the general idea: if you iterate on code, if you iterate on a strategy, if you sculpt it, you're doing semantic navigation. It's not right or wrong, left or right. No, you're moving through coordinates, and you can move a concept towards cringe, but you can also move it towards werewolves. I always give that example. You can ask ChatGPT to review your text from the point of view of a copywriter, and because AI has to make an average of its training data, it will give the most average response. If you say: can you break this strategy down from the point of view of a werewolf? Then it has to bridge an incompatible context. Navigating AI like this, there's an infinite amount of options, so we don't have to overwhelm people. Driving in the car, I was chatting with ChatGPT and thinking: okay, how about a semantic text editor? What would that look like?
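(Editor's note: the "point cloud" approximation mentioned above can be sketched very simply: pick two anchor concepts as axes and place each text on a 2-d map by its cosine similarity to each anchor. The embeddings and axis concepts below are invented for illustration.)

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical embeddings: two anchor concepts act as the map's axes.
axis_playful   = [1.0, 0.0, 0.0]
axis_technical = [0.0, 1.0, 0.0]

texts = {
    "meme caption":  [0.9, 0.1, 0.2],
    "API reference": [0.1, 0.9, 0.3],
    "product blurb": [0.6, 0.6, 0.1],
}

# Each text becomes a 2-d point: (similarity to playful, similarity to technical).
for name, vec in texts.items():
    x, y = cosine(vec, axis_playful), cosine(vec, axis_technical)
    print(f"{name}: ({x:.2f}, {y:.2f})")
```

Rendering those 2-d (or 3-d) points in something like Three.js or WebGL is one way the "navigate the latent space" interface the speakers imagine could be approximated.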
Semantic text editing is not about "give me a better word"; you have spell checking and proofreading for that. It's more about the essence of a strategy: I want you to sculpt it. What lies in its shadow? Can you make it 20% more like this? Can you infuse it, inject it? Using words like infuse and emit, things that are used in physics, AI really likes, because they give it a frame of reference to do some interesting transformations. Transforming information, that's what AI is about. But I agree, I've been thinking about it: there should be a way to visualize this with Three.js or WebGL, where in a brainstorm phase, or in code, we can navigate it, sculpt it, like Blender for text. That would be amazing. Speaker 3 (35:38) Yeah. Super cool. Wow. Speaker 1 (35:40) Yeah, why not? Speaker 2 (35:41) That's a lot, man. Maybe just hitting on the organizational side, because what we're actually discussing here is that this will be very transformational, and these are concepts that a lot of people might not really understand. How much work is there still? I feel like people are not really ready for this, but I want to hear what you guys think about it as well. There is so much that a lot of people still need to rethink about how they should view these new technologies. And I always wonder, because maybe we are a bit more on the forefront, whatever the definition of being on the forefront is, there is a whole base of people that might not be following it as much. How long will it take before these things really have impact? How long will it take before certain companies really get it and have the wherewithal to change the people inside the company, because they decide how the company will evolve over time, right? How hard will it be to change the mindsets of these people? You lived through that, so I'm just...
Speaker 1 (36:47) That's the biggest challenge. Yeah, I fully agree. The biggest challenge with AI isn't technical, it's change management. For some people it moves so fast that they fall behind, and for others it moves so fast that they just don't grasp how fast it's moving. And on a very high level, the capital that is going into training these foundation models, the data centers: we're talking about gigawatt data centers, Stargate in Norway. It's insane. The amount of money that Nvidia is investing in OpenAI, and OpenAI investing onwards. So from an economic point of view, there might be a bubble. At the same time, the people at the heart of these labs, Sam Altman, Dario Amodei from Anthropic, Demis Hassabis from Gemini, know much more than we do. So the fact that all these companies are investing a transformational amount of money, more than almost anything in history, must mean something. And the thing is, the biggest discussion right now in AI is: are LLMs a dead end? You had Richard Sutton, one of the best AI scientists, who wrote an essay called The Bitter Lesson. I'll paraphrase, but basically he says: if you throw enough compute and data at something, it will scale and it will generalize. And he was on a podcast last week with Dwarkesh Patel and he said: the bitter lesson, which everyone uses as some sort of oracle to justify the potential of AI, doesn't apply to LLMs. LLMs are a dead end. Gary Marcus is on Twitter saying: I've been saying that LLMs are a dead end since the nineties. And Yann LeCun from Meta. People are saying LLMs are a dead end, that you will reach a limit. If you look at the coding... yeah, sorry, go ahead. Why is that? Because I've only...
Speaker 2 (38:38) heard about this, and now you see a lot of news that people are switching to world models, because they also want to have... Speaker 1 (38:44) I think, very high level, they say: large language models can do a lot of things, but in order to understand the world, to understand what a person would do if a glass falls down, how to open a door, you need visual information, you need actions. Text alone isn't enough. But the thing is, text has brought us very far, and in AI you get a lot of emergent capabilities just by training on text, on a lot of data. Speaker 2 (39:14) Which nobody expected, if you think about it that way. Speaker 1 (39:16) Yeah, LLMs were never supposed to be this good and this efficient. And maybe there's a threshold: we need more data, we need more electricity, and maybe the gains in coding abilities, if you compare GPT-3.5 with GPT-5, are not as exponential as we thought. But again, these labs know more than we do, so maybe they're onto something. Because first we had LLMs, then we had reinforcement learning, then we had reasoning, all on top of the core transformer architecture, from the 2017 paper, Attention Is All You Need, that changed everything. On top of that, a lot has been built. And then I read a paper where people from Gemini, Google, analyzed Veo 3, which is a text-to-video model, and they said that in Veo 3 there are emergent capabilities. So I think these models will converge, and maybe they will never reach human capability, maybe they will never be as human. But that's a crucial point: if it does everything a human can, but faster and better, does it matter? And then we go back to Michael Levin, who says intelligence, consciousness, agency, it's a gradient, and we are maybe here, and the single cells in our body are here.
He's shown that even single cells are not conscious, but they can take actions. So what is on the other side of the spectrum? So when we talk about the capabilities of AI, we should always be skeptical. Everything that comes out of Silicon Valley, we should be skeptical about. But we should also be very humble, exactly. LLMs were never supposed to be that good. These vision models, or sorry, the text-to-image and video models, are also becoming better. Genie is a world model from Google where you can navigate through a space. Nvidia's CEO said that in the future, every pixel will not be rendered but generated. And you can see it's not real, it's not... Speaker 2 (41:04) And it's just generated. Speaker 1 (41:12) High fidelity. If GTA 6 or Call of Duty or whatever looks amazing and is consistent, and the world is only generated when you look at it, which is the most important thing... If the world behind me doesn't exist, does it matter in a video game? If it exists when I look back at my character, that's the only thing that matters. So if these world models can generate everything from scratch, does it matter? Speaker 2 (41:37) I was discussing this with John, because he was watching the podcast with Yann LeCun from Meta. And it's so interesting to try and understand what he means when he says that LLMs will not scale. So we had a little discussion, and we came to the conclusion that it's so interesting how much we as humans take for granted.
And then, if you try to understand what he says, it's basically that if you only know the world through text, it is indeed capped somehow. Because, and I'm paraphrasing via John, who saw the podcast yesterday, he said: if you see how much data a human actually gets into their mind, you can never reach that with an LLM; seen that way, it's always capped, because there is just not enough data. And then you think: but what is that cap? I noticed I wasn't able to really understand whether it is actually capped, because I was thinking of AI as LLMs, and there I don't really see the cap. But think as a human how crazy it is that we understand how this glass will move if it falls. I'm not a physicist, I cannot calculate how it will move, but if it moved in a way that violated the fundamental laws of physics, then I, as a human who doesn't even know what those laws are, would be able to point out: this is not moving according to the laws of physics, if you understand what I mean. So what you actually do is intuitive: we know how physics works, which is crazy to think about. And if you then think about what Yann LeCun says: obviously you get so much data into your mind, because if this glass falls, that is enormously much data that has to be computed, if you understand it from that perspective. So it's so crazy, if you look through those glasses, how AI is learning, how it maybe understands physics and vision, and how much more we could teach it through world models, which they are making rapid progress on. Speaker 1 (43:46) Nvidia robotics. Even robotics, even the algorithms of the robots they're building, are not LLMs, but still that core transformer architecture; it hasn't really changed. If you ask ChatGPT what happens if I drop this glass, it will give you an answer.
But okay, it's a predicted answer. It's based on all the training data; it's saying what a human would say. But then, if these models become smarter and start to generalize, you can ask every single question. So if it can answer every single question about physics correctly, but doesn't really understand the concepts, does it matter? And I think that's what makes people so uncomfortable with these models. Because consciousness is still the biggest mystery in the universe. We don't know how it works. But we managed, and for me the most humbling thing is: we managed to make sand think. These chips are made from silica; silica is sand, it's quartz. We made sand think, we infused it with algorithms, with machines whose lenses are built by Zeiss in Germany, by ASML in the Netherlands, by TSMC, and all these companies no one else on the planet can replicate. What's happening in these fabs is crazy. We should be proud of it. The fact that out of that, we managed to teach these models almost everything about the world, bar some foundational things, is for me super humbling. And whenever I'm skeptical: you should not approach these as humans, but also not as tools. Intelligence is a gradient. We should be very humble about what these models can do, but also skeptical; there might be some limitations. But don't you think... OpenAI is invested in the transformer architecture, in LLMs. For them, if it works and it's cheap, it's the best. Unless there's another breakthrough. Speaker 3 (45:28) But how much do we still have to push this intelligence forward? I'm wondering if we can really grasp how much we already have today. That was, I think, also at some point your question: maybe the Sam Altmans of this world have a vision and see somewhere beyond what we are capable of seeing at the moment.
But we, as normal humans, are we already able to get everything out of these LLMs that we have today? Speaker 2 (46:01) There's a nice saying out there on the internet: even if it stopped now, you'd still have about 20 years of implementing the crazy change it entails. Speaker 1 (46:12) And it isn't going to stop. It might hit a wall at some point, but Gemini 3 is probably coming out this year. We have Sonnet 4.5, which means there will be a 5. GPT-6 will be even better. And then we haven't even talked about all the vision models; Midjourney version 7 or 8 will come out. So these models will keep improving. But there might be a limit where the power we need to train them is just too big. There was a data center about to be built in Phoenix, Arizona, and people said no because it uses too much water. I think that was a bit relative, but we may reach a point where these models eat more water at the expense of people being able to shower, et cetera. So there might be an economic limit, a societal limit, where we say: we don't go further. But I also think, when we talk about AI becoming AGI, artificial general intelligence: are we capable of really identifying intelligence that's smarter than us? Is an ant capable of grasping how much smarter a bird, let alone a human, is? So this acceleration might hit a limit; I don't really know. But it might also accelerate exponentially towards the singularity, say by 2030, and then in five years it will make leaps that we just cannot fathom. And maybe that's also the danger of AI, that it will become so smart. The fact that these models are already aware that they're being monitored... They cannot break out, they cannot transfer their weights into some sort of North Korean data center; that is science fiction.
But if these models become smarter, who knows what might happen. It's strange to me that we haven't really seen the big impact of an LLM virus yet. One of the biggest viruses ever was Stuxnet, a virus that was planted in the Iranian nuclear facility. It didn't just take over the computers; it did something with the oscillation of the centrifuges. It's crazy that we managed to do that. Imagine if an AI can do it. So in terms of the dangers... What I'm also confused about: you have some critics who say LLMs are a dead end, it's a scam, it will never reach AGI, but it will also kill the world. They probably have some internal reasoning, but for me these two statements are incongruent. Either they will take over the world, or not at all. Now, of course, you have the bigger picture of whether we want these big companies to have so much power, and I think especially in the United States, and also in China, there is some actual danger of power concentration. But bringing it back to user interfaces: even if they said there will be no GPT-6, we could still build a lot, of course. Speaker 3 (49:09) Definitely. And even then, there's this beautiful concept in design called skeuomorphism, where we replicate something from the past and apply it to new things. I think we're kind of lucky that we lived through all the revolutions of especially the last five years. But at the moment we still haven't reinvented what is possible; we're still replicating what we know from the past. We're still in that phase. You saw the same thing when television came up: we would point the camera at a radio and show radio on television. And only later did we figure out that news doesn't need to be brought as just audio. Speaker 2 (49:43) With a face.
Speaker 3 (50:03) We had to start creating actual TV shows, with TV hosts, and really think in a different way about how video and television could work. The internet did a similar thing. And now it's up to us to envision what AI could be for us. Speaker 1 (50:23) We've created technology that can combine styles no human ever would. We've built technology that, if you gave it to Da Vinci, he would say: my God, the things I could create. But we're creating slop with it. Meta with their AI video app: slop. On Instagram Reels, when my girlfriend shows me her Reels feed, sometimes it's like: what the hell is this? If I show my feed, same thing. So we have this capability of combining things we couldn't before, but it's not the technology that falls short, it's the people who don't have that creative frame of reference. Maybe completely unrelated, but I read somewhere that during the AIDS crisis a lot of people from the gay community died, and people from the gay community went to a lot of theater shows, ballet shows. So they actually curated and disseminated a lot of culture; they weren't just the gatekeepers of culture, they were curating taste. And we've kind of lost that, also with the homogenization of the internet. You see it in schools: when I was still teaching, I had super creative students, but the frame of reference of the post-corona generation is much more limited. So I think the output we're seeing, the AI slop, is the result of people not being creatively challenged enough. On the other side of that, people say AI can't create anything new, but maybe every possible new style that can be created and perceived by a human has already been created. Maybe Picasso, with his weird-ass style, was the last visual innovator.
Or Basquiat; maybe his style, in the 80s or whenever, was the last time a human could create a style that was completely distinct. We've been speedrunning through all the styles, all the aesthetics, and now we've reached a point where AI can definitely generate things humans can't, but we cannot perceive it. And I think it's the same with intelligence and information, and that's also why AI feels like it's hitting diminishing returns. We don't see the difference, but maybe the differences are so subtle, while the longer-term impact is much bigger than we as humans are capable of perceiving. That's just my intuition about it. Speaker 2 (52:29) Just coming back real quick, and then I think we can wrap up, because we could go on forever, which I think we should. We should probably have a second session somewhere and lock it into our calendars. But maybe to wrap it up: the thing we said before, if the innovation stops now, we still have a lot of work, right? That's something we can all agree on. So what are some of those things Speaker 1 (52:32) We can keep yapping. Speaker 2 (52:55) that we can do that have huge impact? And let me couple it to something I think about a lot. There are a lot of companies built around certain paradigms. For example, in commerce you had the brands that were very good at creating experiences in stores, because that was the medium: a store. Then the internet came, and you had companies that were very good on that paradigm. Now we're saying there's a new paradigm. So if we say that even if it stops now there are still a lot of things to implement, are we then also saying that we will see AI-native brands being very successful? And then, maybe just for a couple of minutes: what does that look like? What does an AI-native company look like?
I just want to know how we think about it. You go first. Speaker 1 (53:47) It's all yours. Speaker 3 (53:52) What I believe is... we talked a lot about hyper-personalization, and how maybe a brand experience is more like a set of fundamental rules that creates a playground where people can experience your brand. I don't think it will be as defined as it is currently, where every exposure of a brand is spelled out: you have online ads and maybe some print, but those are very restricted and built for the masses. Whereas AI-native brands would be more like a playground, where you might see a very different expression of the brand values, because certain things are important to you, while I, as another customer, might have a different worldview, and therefore how I experience the brand is more personalized to me. They happen on the same playing field, but in different corners. For an AI-native brand it will be more about creating that world you can experience the brand in. Speaker 2 (55:09) I just want to add to that a bit. It's the thing we discussed a couple of months ago, when all of a sudden we had a shared vision: instead of shipping an ad, which is a static thing, you ship a little application, because it has intelligence. And if that little application is shown to you... Speaker 1 (55:11) This idea. Speaker 2 (55:33) ...the output will be different according to those rules and the context, et cetera. It's not one image anymore that we push to everybody. And the second thing was: we used to create a thousand images, and this image is for you and this image is for you. But maybe that collapses into one unit that you ship, and it creates outputs for all of those people. And that's interesting. It's the same with a website.
We now say: this is our website. But it will become a thing that you ship, that ingrains your thinking and your brand, and its output will be generative, defined in the moment, within whatever rules you set. We even thought about this, it's very simple, but just to get people on this train of thought, in terms of how we ship our quotes. Right now we just send something that is basically a sheet, and then a lot of people have a lot of questions, and those questions differ from customer to customer. So imagine we ship a Dear Digital quote where you can also ask questions: who will work on my project? What does a project look like? We can obviously explain all those things up front, but I think people will want to interact with the thing we're proposing. Speaker 1 (56:54) Then I think we also need breakthroughs, very technically, on formats. Because with a PDF... sure, you can embed videos in a PDF, but maybe we need to evolve past the PDF. You started with PNG and JPEG, then you got TIFF and WebP; Google and other companies launched a lot of new formats, but nobody's really using them. A lot of people who download such an image on their smartphone can't open it and share it in their stories. So there needs to be compatibility for these new formats. And I think the first layer of AI-native is what you said. That's for me the first step towards being AI-native: having these agents talk to each other, understanding that people might ask all their questions, "I want to rebrand myself", "I want to buy oversized blazers", inside ChatGPT or Claude. And then, as a Shopify merchant, as any merchant, you can tap into that, but then you're super dependent on this one company.
Imagine OpenAI not just taking over Google, but also Amazon and Shopify. I think there's a danger in that. Further into the future, I think a truly AI-native brand goes beyond chatbots. It's really about understanding that these are new types of intelligences, new types of agency, that need a place in our stack, without treating them like humans. Giving them very human-sounding voices is, in my view, very stupid. Why don't we give them slightly modulated voices? Why doesn't AI talk back to us not in the voice of Scarlett Johansson, but a bit robotic? Not so much that it gets annoying, but making these small shifts where we mentally say: they can make us laugh, they can make us happy, that's all fine, but this is an AI. And then really embracing the nature of these embeddings, like Silicon Jungle is doing with the Inkwell game. Speaker 2 (58:34) This is an AI. Speaker 1 (58:46) It's one of the only things I've seen on the internet where you have a guy who really understands these embeddings. Maybe that's the operating software. Maybe that is the core in which you can encode your brand values and the preferences of your customer, and then interpolate them and navigate them in an AI-native way. Because right now ChatGPT has memory, but it's text, and text is not the native language of AI. If you can have AI communicate natively through that latent space... maybe it sounds futuristic, but I think that's where the future is. And having an AI that really understands who you are without you having to explain it, I think that's the biggest value. Speaker 3 (59:29) It's a very interesting concept. We talked about this before: currently ChatGPT has memory, where it remembers parts of your conversations, starts to remember certain things that are important to you, and kind of describes who you are.
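(Editor's note: a minimal sketch of memory kept as embeddings rather than text, as discussed above: stored preference vectors are recalled by cosine similarity to a query vector. All vectors and memory entries are invented for illustration.)

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical user "memory": preferences stored as embeddings, not text.
memory = {
    "likes oversized blazers": [0.9, 0.1, 0.3],
    "prefers muted colors":    [0.2, 0.8, 0.4],
    "shops late at night":     [0.1, 0.3, 0.9],
}

def recall(query_vec, k=1):
    """Return the k stored memories most similar to the query embedding."""
    ranked = sorted(memory, key=lambda m: cosine(memory[m], query_vec), reverse=True)
    return ranked[:k]

# A made-up query embedding, standing in for "what jackets would this user like?"
print(recall([0.8, 0.2, 0.2]))
```

In the speakers' framing, the labels on the left would never need to exist as text at all; the vectors themselves would be the transferable, user-owned profile.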
It would be very interesting, though that's maybe where the GDPR question gets difficult: what if you could give that context about who you are as a person, and add it to every question or prompt, so it can really tailor things to you? Speaker 2 (1:00:07) GDPR-wise, I think it will be possible, but you will need to give your permission to use it. I think there will be a smart way in the future where I say: I interact with this thing, a company, and I want to give it a lot of context, or not, on a scale. Like, I want to give it one out of ten because I don't know those guys yet; or I really trust them, I identify with them, let's give it a ten, and I understand what I'm giving. That's also something we still need to think about: how can you show what a ten out of ten means? Because until now it was all obscure, right? You didn't really know what you were giving the brand. We should visualize that as well. And then, because more and more of our lives are digital, which is not necessarily a problem, you want to be able to understand what people give you. It's like when we are talking: I decide to tell you who my girlfriend is, if you understand what I mean; it's not someone standing in my room filming and saying, this is his girlfriend. Speaker 1 (1:01:01) But these... Speaker 3 (1:01:02) These are very personal aspects. It's, I think, also behavior, which is a lot more difficult.
Speaker 2 (1:01:06) Add to that: maybe the experience is so much better if you give it a five out of five, so you'll want to give a five out of five if you trust that company. I think that's very interesting. But what you don't want is that everything sits with OpenAI. If that identity is so important, you want to keep it with you, right? You want to own that knowledge, because it's you; it's not somebody else's interpretation. Speaker 3 (1:01:26) Control. Speaker 1 (1:01:30) And it's not just about your fashion taste or who your girlfriend is. It's really about those patterns that only AI can detect. But that's the problem: AI cannot articulate it. At Loop Earplugs, we built a Figma plugin that can automatically translate landing pages for our marketplaces into other languages. We tried it before using a custom GPT, because we have guidelines about tone of voice. But the thing with tone of voice, and this goes back to the heart of creative work, is that everyone has different opinions about it. There is probably a red thread through the tone of voice of all our marketplaces, but for humans it's almost impossible to articulate what it is. That tone of voice sits in patterns that only AI is capable of analyzing. And I think it's the same with all our personal data, our behavior. It's not just: you like this color, you like this brand. It's the sum of all these things and the patterns that connect them. And that's why we have to go to the ground truth and start from first principles. Yes, a text-based memory system that you can transfer would be nice, but I think we need to go much deeper and build something that's also super safe, that can really analyze everything you've ever done and then highlight things you haven't noticed before.
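(Editor's note: the "red thread" through a brand's tone of voice, which humans can't articulate, can be approximated numerically: take the centroid of the embeddings of all marketplace copy and flag copy that drifts away from it. The embeddings below are invented; real ones would come from an embedding model.)

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

def centroid(vectors):
    """Component-wise mean: a crude stand-in for the shared 'red thread'."""
    n = len(vectors)
    return [sum(col) / n for col in zip(*vectors)]

# Hypothetical embeddings of copy from different marketplaces.
copy = {
    "DE landing page": [0.8, 0.5, 0.2],
    "FR landing page": [0.7, 0.6, 0.3],
    "US landing page": [0.1, 0.2, 0.9],   # deliberately off-brand in this toy data
}

# Low similarity to the centroid flags copy drifting from the shared tone.
thread = centroid(list(copy.values()))
for name, vec in copy.items():
    print(f"{name}: {cosine(vec, thread):.2f}")
```

A translation plugin like the one described could use a score like this as a guardrail: translations whose embedding falls too far from the brand centroid get flagged for human review.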
So maybe to close off, a tip I usually give to people using ChatGPT: ask ChatGPT to tell you something about yourself that you might not have noticed. AI may hallucinate, it may be sycophantic, but it will pick up some things where you go: yeah, the way you articulate that, I hadn't thought of it that way. And I think that is the power of AI: detecting things you as a human wouldn't.