Orientation Session 1

  • [00:00 - 00:12] This is an orientation: it covers the lectures, the community side, and the resources available to you. This is not a normal course.

    [00:13 - 00:26] It's a course plus coaching, and in this cohort in particular we're adding a one-on-one component where we coach you through your project.

    [00:27 - 00:49] The other thing we added in this cohort is four lectures at the very beginning, in the AI coaching calls, where we talk very specifically about AI projects and AI modalities: we show you the different types of AI projects that are emerging and classify them into types.

    [00:50 - 01:07] You've all used ChatGPT, but with the emergence of large language models you now have many different modalities, not just text and voice. One example is an AI receptionist.

    [01:08 - 01:18] Others are digital clones, digital coaches, and enterprise applications, such as a stack for law firms.

    [01:19 - 01:31] In the last cohort we went into project coaching a little early, so instead we're going to do a series of lectures first that really demonstrate these things.

    [01:32 - 01:45] So the purpose of this session is just an introduction, because the course has a lot of resources and different people have different goals.

    [01:46 - 02:02] We have tech business owners, multiple FAANG engineers, enterprise engineers, people from government tech, and all sorts of other backgrounds. That's the general idea.

    [02:03 - 02:35] My goal for today's session is to address how the course is structured, what the resources are, how you engage with the community, and how to best use all of it to accomplish your goals. At a high level you all want to learn AI, but your specific goals are a bit different.

    [02:36 - 03:04] Some people want to build a startup, some want to run an AI consultancy, some are more career-oriented, and so forth. The benefit of a cohort is that you can ask questions as we go; we'll stop to address questions once in a while, and at the very end we'll have a live queue.

    [03:05 - 03:19] You can also post your introductions on our new community platform at community.newline.co. There aren't too many people in each lecture.

    [03:20 - 03:49] The idea is to be interactive so that you get to know other people as well. The agenda: we'll introduce the people behind the course, provide a course introduction, cover the community resources, and talk about learning how to learn, deep work and distractions, and the different tools you'll be able to use.

    [03:50 - 04:04] You all know my background. Part of the journey here is that we started building agents internally at Newline.

    [04:05 - 04:14] We built a sales agent and a programmatic SEO agent, and in the process we were searching for the latest techniques on how to do different things.

    [04:15 - 04:30] In that context it was actually a little difficult to find everything in one place. We created this course to have 30% foundational model content and 70% adaptation content.

    [04:31 - 04:55] We'll break that down into more specifics in the following slides. This is Dippin: he's full time with Newline, an AI/ML researcher with 150 citations, and he co-created a lot of the lectures as well as the code.

    [04:56 - 05:03] And this is Alvin. This course initially came from a collaboration between Alvin and me.

    [05:04 - 05:17] He was doing a course with us, but it was taking too long because he was so busy at Apple working on Apple Intelligence. So I suggested we turn it into a workshop.

    [05:18 - 05:23] That's the workshop that you could buy; hopefully you've all watched it.

    [05:24 - 05:31] It's four hours, and it provided the initial impetus for this course.

    [05:32 - 05:56] He'll be hosting weekly office hours, though not this week, because this week will be a bunch of introductory material. And Marianne handles everything operations-related: if you have a problem with your profile on community.newline.co, or any other issues, reach out to her.

    [05:57 - 06:09] She'll also be reaching out to you to get your partner preferences, on who you want to be matched with. We have a number of community-oriented

    [06:10 - 06:21] ways of collaborating. You've already provided a brief intro on yourselves.

    [06:22 - 06:29] This is what I added to the webinar a little later; some of you have seen it, many of you haven't.

    [06:30 - 06:50] These are projects from the last cohort. Some involve fine-tuning, some are prompt-based, some are multimodal, some are agents.

    [06:51 - 06:55] There are really different ways of implementing these things.

    [06:56 - 07:04] These span personal projects, professional work, and startups. Some people brought their projects from work.

    [07:05 - 07:13] Some built theirs to become a startup. Some had an existing business that needed certain things.

    [07:14 - 07:21] The techniques people applied varied. Obviously, everyone starts with prompts and evaluation.

    [07:22 - 07:37] Then they evolve into RAG, fine-tuning, and agent techniques. In terms of an overall goal, it really depends on where your goals are.

    [07:38 - 07:48] For example, the text-to-guitar-tab generation project was by someone who was between jobs. He created it during the course.

    [07:49 - 07:54] He went to Capital One in generative AI, and he's working there right now.

    [07:55 - 08:01] Some of these are side projects. One person had a commercial real estate side project.

    [08:02 - 08:12] One person was working for the US government, so he created a legal legislative aid system.

    [08:13 - 08:21] Some are specific to people's personas: calorie accounting for, I think, a particular cuisine, or English processing, and so forth.

    [08:22 - 08:35] So you see a variety of things here. The reason we do the onboarding call is that we try to adapt the lectures partially based on your needs.

    [08:36 - 09:11] As you go through, you'll see that we don't have traditional ML-class projects, because we actually adapted a lot of the exercises and projects based on people's actual projects. So someone can get a multimodal fine-tuning exercise for, say, insurance classification, and we have a web-flavored module for different things.

    [09:12 - 09:32] This is why we want to engage with you: depending on what you're interested in, we can adapt. At the end of the three and a half months, the idea is that you should be able to build a generalized agent.

    [09:33 - 10:00] It could be a single agent or a multi-agent system. "Agent" is a catchy buzzword, but what an agent uses on the underlying layer is prompt engineering, RAG, fine-tuning, and the ability to use tools.
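
The components listed above (prompt engineering, RAG, fine-tuning, tool use) come together in a loop. As a minimal sketch, with a hypothetical rule-based `decide()` standing in for a real LLM call:

```python
# Minimal sketch of a single-agent loop: a "model" decides which tool to
# call, the tool result is fed back as an observation, and the loop stops
# when the model answers. decide() is a hypothetical stand-in for an LLM.

def calculator(expression: str) -> str:
    """A toy tool: evaluate a simple arithmetic expression."""
    return str(eval(expression, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def decide(question: str, observations: list[str]) -> dict:
    """Stand-in policy: call the calculator once, then answer."""
    if not observations:
        return {"action": "calculator", "input": question}
    return {"action": "answer", "input": observations[-1]}

def run_agent(question: str, max_steps: int = 5) -> str:
    observations: list[str] = []
    for _ in range(max_steps):
        step = decide(question, observations)
        if step["action"] == "answer":
            return step["input"]
        tool = TOOLS[step["action"]]              # tool use
        observations.append(tool(step["input"]))  # feed result back
    return "gave up"

print(run_agent("2 + 3 * 4"))  # -> 14
```

In a real agent, `decide()` would be an LLM call that returns the next tool and its input, and the loop would carry the full conversation history.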

    [10:01 - 10:18] More specifically, with this cohort we added a lot of evaluations. We put in evaluations, synthetic data, and data creation, and we now emphasize them very much across everything.

    [10:19 - 11:10] It's very important: we want to make sure that as you improve your applications you don't have issues with data drift, domain drift, application drift, or model drift; all of these happen as you build agents. The ROI depends on your specific area, but most people don't know that in a number of the AI engineering stacks you see out in the wild, people fine-tune their own models. Most people just assume that everyone is using OpenAI or Anthropic and is unprofitable because all the money goes to them, and that's simply not true.
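
The drift issues mentioned above can be caught with simple monitoring. A minimal sketch of one such check, flagging when a live feature's mean has moved away from the training baseline (the z-score threshold and the toy numbers are illustrative assumptions, not the course's method):

```python
# Minimal drift check: flag when a live feature's distribution has moved
# away from the training baseline. The z-score threshold is illustrative.
import statistics

def drifted(baseline: list[float], live: list[float], z_threshold: float = 3.0) -> bool:
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    live_mu = statistics.mean(live)
    # standard error of the live sample mean under the baseline distribution
    se = sigma / (len(live) ** 0.5)
    return abs(live_mu - mu) / se > z_threshold

baseline = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8, 10.1, 10.4]
same = [10.0, 10.3, 9.9, 10.2]       # drawn from the same range
shifted = [14.0, 15.1, 14.6, 15.3]   # clearly moved

print(drifted(baseline, same))     # -> False
print(drifted(baseline, shifted))  # -> True
```

Real monitoring would track many features and use proper distribution tests, but the principle is the same: compare live inputs against the data the system was built on.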

    [11:11 - 11:38] What we're doing here is engineering the context around the models. You may have seen this meme among AI researchers: a good model with bad context is worse than a bad model with good context.

    [11:39 - 11:56] And good context is simply AI engineering: it's RAG, it's fine-tuning, it's prompting with evaluations and everything else. Keep this in mind in whatever you're doing.

    [11:57 - 12:23] What we're doing here, and why the project is the center of everything, is that by the end of the cohort we want you to be able to demonstrate and prove your skills with a working artifact: a working artifact with metrics, so you can demonstrate scarce and verifiable capabilities.

    [12:24 - 12:38] Who you demonstrate it to, and your distribution, depends on your personal and professional situation. We've had multiple people demonstrate it to their engineering manager.

    [12:39 - 13:10] For example, I caught up with someone last week who said that because of the course he's now in charge of three people doing RAG on customer service logs at his company; he got promoted, with a higher salary and so forth. What you want to do at the very end, once you have the portfolio piece and the skill set, is to be able to articulate very precisely what you're able to do.

    [13:11 - 13:25] For example: "I'm able to engineer multimodal experiences," or "I'm able to do document processing and link it to voice," whatever it is. It's unique to each of you specifically.

    [13:26 - 13:37] AI engineering is different from foundational model engineering, but we do go over the concepts of foundational model engineering.

    [13:38 - 13:52] If you're interested, we'll be going over some of those things and you can go as deep as you want. The course is quite intensive.

    [13:53 - 14:11] You really have to allocate time in your calendar and pick your battles on how to approach your project. We provide a lot of resources, and in some sense that's a double-edged sword.

    [14:12 - 14:31] I want to make sure you're not fully overwhelmed, but the idea is that we cover everything from the basics to the state of the art in foundational model engineering. When we started this in March, the state of the art was primarily mixture-of-experts models.

    [14:32 - 14:46] Now we're going into hybrid reasoning models and different dense versus sparse architectures. We'll be adding those in as well.

    [14:47 - 15:03] Then AI engineering will go over all of the RAG, multimodal, and data plumbing work, and so forth. Book 2 is more around state-of-the-art techniques, particularly state-of-the-art agents.

    [15:04 - 15:31] Then enterprise AI engineering will cover red teaming and security, including how people hack LLMs, though this is not a hacking course. We want to cover some security and cybersecurity so you understand that these are the things people are going to do to your models.

    [15:32 - 15:47] For example, if you read Anthropic's RLHF dataset on Hugging Face, they simulate a lot of conversations the model is not supposed to have.

    [15:48 - 16:04] If you read through it, you see a lot of profane and lewd behavior modeled inside, people trying to hack it, trying to get it to do all sorts of random things. We'll be going over that.

    [16:05 - 16:27] That's relevant broadly, but it's more sensitive for the enterprise. And as an AI startup or AI delivery shop, if you're aiming to be an AI consultancy, you'll have to utilize these techniques to focus on whatever you're going after.

    [16:28 - 16:45] A lot of AI startups use this entire stack. The products we really deconstruct are Windsurf and Cursor: we go into their architecture.

    [16:46 - 16:54] We introduce you to all the different concepts, and then we go into the architecture of some of these products.

    [16:55 - 17:10] One of the things we do is that Marianne will be reaching out to you for your partner preferences. You're all adults:

    [17:11 - 17:16] you can opt in or opt out; no one's forcing you to do anything.

    [17:17 - 17:24] I've taken a lot of classes online, and one of the things I've always found lacking is the community aspect.

    [17:25 - 17:46] In the last cohort, because of the accountability partners, some people actually started their own AI agency, among other things. The idea is to have someone to check in with on a regular basis, to talk about the project and share examples and conceptual items.

    [17:47 - 17:55] You can do a quick check-in or not. Some people do check-ins on Friday.

    [17:56 - 18:01] They share things like, "I tried this model; this model's not working."

    [18:02 - 18:09] "How did you deal with your dataset?" and so forth. People find it valuable from that standpoint.

    [18:10 - 18:22] In particular, we have four mini projects. These mini projects are designed to be like work.

    [18:23 - 18:42] If you're doing this for a client or your boss, someone's just going to say, "Here's the Salesforce CRM, go and create a chatbot on it," or "Here's the customer service log from Zendesk, go and build on it."

    [18:43 - 18:56] So we'll have different mini projects that you can do in groups, as a cooperative, competitive system.

    [18:57 - 19:13] The idea with these mini projects is that you're trying to drive an evaluation metric up or down, depending on the task. And what you'll find is that there are a lot of AI engineering techniques.

    [19:14 - 19:28] For example, we introduce you to 20 RAG techniques. One group might use techniques one, two, and five; another two, five, and six; another six, seven, and eight; and so on.

    [19:29 - 19:41] At the very end, people compare results, and we show the notebooks of the groups that got to the top of the leaderboard.
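
A leaderboard of the kind described above can be sketched with a toy harness that scores retrieval variants on the same eval set. The two retrievers and the hit-rate metric here are illustrative stand-ins, not the course's actual 20 techniques:

```python
# Toy harness for comparing retrieval techniques on one shared eval set,
# in the spirit of the mini-project leaderboard. Retrievers and data are
# illustrative stand-ins for real RAG techniques.

DOCS = [
    "RAG augments a model with retrieved documents.",
    "Fine-tuning adapts model weights to a domain.",
    "Evaluations measure whether changes actually help.",
]

EVAL_SET = [
    {"query": "what does RAG do", "relevant": 0},
    {"query": "how does fine-tuning work", "relevant": 1},
    {"query": "why run evaluations", "relevant": 2},
]

def keyword_retriever(query: str) -> int:
    """Rank documents by word overlap with the query."""
    q = set(query.lower().split())
    scores = [len(q & set(d.lower().split())) for d in DOCS]
    return scores.index(max(scores))

def first_doc_retriever(query: str) -> int:
    """A deliberately weak baseline: always return document 0."""
    return 0

def hit_rate(retriever) -> float:
    hits = sum(retriever(ex["query"]) == ex["relevant"] for ex in EVAL_SET)
    return hits / len(EVAL_SET)

leaderboard = sorted(
    [("keyword", hit_rate(keyword_retriever)),
     ("first_doc", hit_rate(first_doc_retriever))],
    key=lambda pair: pair[1], reverse=True,
)
print(leaderboard)
```

Each group's technique plugs into the same `hit_rate()` harness, which is what makes the comparison at the end meaningful.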

    [19:42 - 19:55] This is to really show the value, because we go through a lot of academic techniques. Last time we did that, it went over people's heads.

    [19:56 - 20:04] But we want to show that these are actual, practical techniques, the kind used in actual jobs.

    [20:05 - 20:13] And this type of cooperative competition is friendly; it's our way of doing it.

    [20:14 - 20:23] Then we'll have a Miami event over a weekend, to accommodate people's schedules: share techniques, collaborate on mini projects, collaborate on your main projects.

    [20:24 - 20:40] It's a way to meet people, share techniques, and network in general. Then there's the coaching part of the program.

    [20:41 - 20:49] We had group coaching in the past, and we found it was a little less effective than we liked.

    [20:50 - 21:15] So what we're going to do now is introduce you to how to build an AI project, including techniques, project types, and distribution: how do you find your first 100 customers?

    [21:16 - 21:27] And then we'll work with you one-on-one to document your progress. The one-on-one aspect continues until you get to a certain point.

    [21:28 - 21:42] Then we'll let the group coaching calls take it from there. A lot of you have very specific concerns about your projects and other things.

    [21:43 - 21:53] So please tell us your needs. Some of you mentioned that you wanted us to include the reference papers, the research papers.

    [21:54 - 22:03] In general, a lot of the things we introduce actually come from research papers, so we can do that as well.

    [22:04 - 22:49] As I mentioned, the initial project coaching this week will be lectures, and it will be like that for the first four weeks. We'll go into project modalities and introduce you to AI startups and templates to give you a sense of the patterns. For example, you've all used Uber, so you can get an idea like "Uber for X," using either analogy-based or first-principles thinking.

    [22:50 - 22:58] You can use that to create your project. This is one of the ways we designed this course.

    [22:59 - 23:14] The reason the academic system takes so long is that it's designed to fit into your working memory. What is working memory? Think of it as RAM.

    [23:15 - 23:30] Nowadays people have 32 or 64 gigs of RAM. Your working memory is about seven items, not seven gigabytes.

    [23:31 - 23:55] So what we're trying to do is manage that bandwidth so that concepts can go in and get fixed into your long-term memory. One of the things we're trying to do throughout is manage your cognitive load, specifically your germane cognitive load.

    [23:56 - 24:04] These are academic concepts from learning-how-to-learn and neurobiology; you can look them up if you want to.

    [24:05 - 24:13] But you will feel cognitively overloaded with this course; many people have told us that.

    [24:14 - 24:24] We're trying to structure the lectures so that things fit, and if something doesn't make sense, you have to ask questions.

    [24:25 - 24:51] Asking questions is a way of synthesizing what's in your working memory into your existing mental models, fitting it within your existing mental scaffold. As much as possible we try to manage your cognitive load, but you also have to manage it on your side so you don't feel overwhelmed by the course.

    [24:52 - 25:11] As I mentioned, this is not just lectures: it's coaching, it's community, you have a mini project, and then you have a project of your own. So what I want you to do is schedule a time

    [25:12 - 25:23] with Dippin or me every two weeks to talk about your project. At the very beginning the project can be "I have no idea what it is."

    [25:24 - 25:45] "I'm thinking about these three domains." And we can say, okay, you could build a prompt-engineering chat associated with, say, investments; or maybe you say your hobby is investing and your interest at work is insurance, or whatever it is.

    [25:46 - 26:02] We'll go through that process with you, and as you get further, you'll get more of a sense of the techniques and how they map to the projects and modalities.

    [26:03 - 26:17] As I mentioned, the course is built around managing your cognitive load. You may have noticed in the webinar that I refer to things in terms of analogies.

    [26:18 - 26:26] I refer to these models as F1 cars; in some sense we're like F1 car mechanics.

    [26:27 - 26:42] I use analogies like that to manage your cognitive load. Once you're able to natively process the concepts, you can use them as first-order objects.

    [26:43 - 27:00] Specifically with this cohort, we created a lot more gamified exercises to help you clarify concepts. And the thing people really liked is the notebooks with examples.

    [27:01 - 27:18] The reason people are able to produce their projects is that we created the different example projects; depending on your project, you can go through them and take pieces to build your own.

    [27:19 - 27:33] So what is AI engineering, exactly? I want to go into this a little, just to set the context.

    [27:34 - 27:50] What we're learning here is transformer-based large language models and multimodal large language models. The reason I want to specify this very precisely is that you also hear "generative AI."

    [27:51 - 27:57] People say "there's a generative AI course," but generative AI actually means different things.

    [27:58 - 28:03] It can mean diffusion-based technology; it can mean different things.

    [28:04 - 28:18] What we're learning here is transformer-based and multimodal large language models, and applying them through agents to different modalities.

    [28:19 - 28:41] The overall context is that right now we're going through a moment I prefer to call the React moment for AI engineering. In the dot-com boom you had HTML, CSS, and JavaScript, but no framework.

    [28:42 - 28:58] There was no virtual DOM in the web browser to unify everything. But with React and Angular, these frameworks codified a lot of best practices, patterns, and tools.

    [28:59 - 29:08] And they abstracted things into libraries that let you use other people's work as well. We're in that phase right now with AI engineering.

    [29:09 - 29:36] This is why I mentioned that this is less math-heavy and less probability-heavy than what you would traditionally have to get through: a lot of the math, statistics, and calculus is abstracted away in libraries. The other thing that happened is that you now have a pre-built software brain.

    [29:37 - 29:44] The pre-built software brain has a lot of the internet's data associated with it. Why is this different?

    [29:45 - 30:13] If you were doing traditional machine learning at a FAANG company with their large data, you would have to do a ton of data processing and data engineering; you might be part of a large ML team doing data cleaning, feature engineering, and other things. With these software brains, you're able to work one level of abstraction higher than in traditional machine learning.

    [30:14 - 30:38] Even though traditional machine learning was invented in the 1970s, up until relatively recently a lot of people used it for recommendations, internet applications, and what's classically known as data science. Then in 2016 you had deep neural networks.

    [30:39 - 30:56] GPUs were finally fast enough and sufficiently performant. In the software brain analogy, deep neural networks were the first sensors.

    [30:57 - 31:11] You had a convolutional neural network that could be eyes; you had ears and noses, and then, for the first time, the ability to talk.

    [31:12 - 31:26] The thing that's different about transformer-based large language models is that language became infused with all these different organs. You have a software organ that can infuse language into different modalities.

    [31:27 - 31:48] So what we're going through is a shift. Traditional software is layers of abstraction on top of assembly languages and the processor's instruction set.

    [31:49 - 32:14] The web was fundamentally different in that it allowed people to coordinate with each other and search; pull is different from push, which is social media. Now we're going through a new paradigm where we have intelligence on top of databases.

    [32:15 - 32:36] If you think about the web revolution, what happened over that 30-year period is that every single person got a database. You don't think of it as a database; you think of it as Notion, or a CRM, but you use databases all the time, every single day, for all your different things.

    [32:37 - 33:00] Now we're going into the era of providing intelligence, or digital clones, on top of those systems to facilitate and augment them. We have prompts in natural language, and they're associated with different types of data.

    [33:01 - 33:09] It could be text, it could be images, we have voice, and then we have video. Then we interact with APIs.

    [33:10 - 33:21] At a very simplistic level, this is all agents are. Of course the devil is in the details, and you have to utilize AI to be able to do this.

    [33:22 - 33:54] There's not a good name for this class of software just yet, but one of the things people are calling it is vibe-based software, where you can type in a general direction. Whereas it used to be that you had to formally specify every single thing, with a team of 30 developers translating specs to code.

    [33:55 - 34:02] And this is vibe coding: you use Cursor, Augment, and everything else.

    [34:03 - 34:16] It's starting to transform all aspects of society. My expectation is that this is one of the biggest waves, and I think it will last the next 30 years,

    [34:17 - 34:36] just like the internet boom lasted 30 years and is still rolling. So what we're focused on here is that React moment.

    [34:37 - 35:05] What we will do is introduce you to a concept and then give you specific drills and exercises that let you internalize it. You're not going to be coding probability and statistics all day, but you need an intuition for certain statistics and probability concepts so that you can use these libraries well.

    [35:06 - 35:22] The other thing is that, depending on your goal, some of you mentioned in the onboarding call that you want to be able to read research papers yourself. We'll be attaching those to the different lectures.

    [35:23 - 35:36] So you can know where the lectures come from in the research papers, pair the concepts with the papers, and be able to fully read research papers in the future.

    [35:37 - 35:53] One of the key things we emphasize is datasets. Traditional ML is very focused on data engineering: processing data and then engineering features on the data.

    [35:54 - 36:12] Here, we emphasize synthetic data, your own dataset, and LLM as a judge. We'll get into those concepts later, but synthetic data is really important, and so is getting it ready for your dataset and your specific persona.
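
The LLM-as-a-judge pattern mentioned above can be sketched as follows. The `judge()` function here is a deterministic stand-in; in practice it would be an LLM call with the rubric in its prompt:

```python
# Sketch of the LLM-as-a-judge pattern: a judge grades each answer
# against a rubric, and the scores are aggregated. judge() here is a
# deterministic stand-in for an actual LLM call.

RUBRIC = "Score 1 if the answer mentions every required fact, else 0."

def judge(answer: str, required_facts: list[str]) -> int:
    """Stand-in judge: checks the rubric mechanically."""
    return int(all(fact.lower() in answer.lower() for fact in required_facts))

eval_set = [
    {"answer": "RAG retrieves documents and adds them to the prompt.",
     "required_facts": ["retrieves", "prompt"]},
    {"answer": "Fine-tuning is when you buy more GPUs.",
     "required_facts": ["weights"]},
]

scores = [judge(ex["answer"], ex["required_facts"]) for ex in eval_set]
print(scores, sum(scores) / len(scores))  # -> [1, 0] 0.5
```

The point of the pattern is the same either way: every answer gets a score against an explicit rubric, so changes to your prompts or data can be compared on one number.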

    [36:13 - 36:30] And then we want to make sure you have proper evaluation. People in the last cohort left evaluation as an afterthought, because as software engineers we want to build it, it looks good, and then we put the unit tests in afterwards.

    [36:31 - 36:39] But with AI engineering, you don't even know whether you're improving if you don't have proper evaluation.

    [36:40 - 37:06] So we're going to go into evaluation metrics later as well. The fundamental thing we're going for is your ability to understand the following concepts: pre-training, fine-tuning, DPO, PPO, evaluation, data curation.

    [37:07 - 37:26] Not necessarily for you to do all of these yourself, but for you to understand the underlying concepts. By the end of the course, you should be able to both understand them and do them.

    [37:27 - 37:50] So, on a skill basis, you should be able to utilize and execute these things, and you should be able to build a portfolio piece: "I'm able to build this agent with a true positive rate of X, with precision and recall of Y." Distribution then depends on your specific situation, whether that's open source, internal champions, or building a startup.
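
To make those portfolio metrics concrete, here is a minimal pure-Python sketch; the labels and predictions are made-up illustration data, not results from any real agent.

```python
# Precision and recall (recall is the true positive rate) from binary labels.

def precision_recall(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Made-up labels: 1 = the agent answered correctly, 0 = it did not.
y_true = [1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 0]
p, r = precision_recall(y_true, y_pred)
print(round(p, 3), round(r, 3))  # 0.667 0.667
```

The point of the exercise framing is that numbers like these, tracked over time, are what tell you whether a change to your agent actually helped.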

    [37:51 - 38:17] Your story also depends on your specific situation. As mentioned, we provide a lot of homework exercises and quizzes; do as much of the homework and activities as you prefer. It really depends on your overall goal.

    [38:18 - 38:31] It's important that you think about an exercise and try it before you use ChatGPT. And don't feel guilty if you don't finish all of it.

    [38:32 - 38:40] So now I'm going to start going into some of the resources.

    [38:41 - 38:57] One of the resources we have is the Notion. Hopefully you've all gotten an invite for it; if you haven't, let me know. I'm going to run through some of the resources we have there.

    [38:58 - 39:40] So I'm going to stop sharing this and start sharing a different document. I'm not sure if you've seen it quite yet, but you should all have access to it.

    [39:41 - 39:52] If you don't, message Marianne or me to get access. Now, you might be wondering: "I already know all the open source models.

    [39:53 - 40:04] There's DeepSeek, there's Qwen. What's the point of showing this?" Well, there are a bunch of people that have taken the open source models and done surgery inside them.

    [40:05 - 40:10] They've sliced up the different layers. They've added their own data to them.

    [40:11 - 40:24] So this page is designed to point you to certain open source models that have certain capabilities: multimodal, chat, image, audio, video, code.

    [40:25 - 40:40] You can take a look at it, with the appropriate links. So we have the open source models, and then we have different data sources.

    [40:41 - 40:48] You all come from a lot of different areas, but we have software development:

    [40:49 - 41:13] debugging and code generation, bug fixing, a resume dataset, a cover letter dataset, a web UI dataset, a Stack Overflow dataset, a code dataset, a UI reasoning dataset. Then cybersecurity: botnet detection, network intrusion.

    [41:14 - 41:57] Retail and e-commerce: Amazon product reviews, online retail. Finance and banking: a financial fraud dataset, financial statements, S&P 500 stock data. Healthcare: an X-ray dataset, healthcare data. Transportation and logistics: airline on-time performance data. So we have a bunch of datasets here, and I want to mention two things: you have to use ChatGPT, and more specifically the web search tool or the deep research tool.

    [41:58 - 42:20] Many people in the last cohort were not able to find a dataset on their own, and then they were able to find one using the deep research tool. The deep research tool is not like the traditional web search you normally use.

    [42:21 - 42:37] It's much more fine-grained than that. Our second camp is focused on techniques like multi-hop RL, which is what's behind deep research.

    [42:38 - 42:47] Then this section talks about cloud services. You might be wondering: there's Amazon, there are all these providers,

    [42:48 - 42:55] why do we need additional ones? But these are services designed across the AI engineering lifecycle.

    [42:56 - 43:11] So when we talk about model running and hosting, there are specific services for reinforcement learning, specific ones for evaluations, specific ones for fine-tuning. And they might have a cheaper cost for fine-tuning.

    [43:12 - 43:19] Some of these are... yes, question?

    [43:20 - 43:27] - The screen is a little bit blurred. The text is not clear.

    [43:28 - 43:36] Can you increase the size of it? Is that possible?

    [43:37 - 43:40] If not, it's fine. Yeah, thank you, thank you.

    [43:41 - 43:42] Thanks. - Yeah.

    [43:43 - 43:54] Like, for example, here we have different solutions.

    [43:55 - 44:02] So Groq, for instance, built their own semiconductor, so on an inference basis it's actually cheaper than a lot of alternatives.

    [44:03 - 44:27] There are different serverless platforms that allow you to boot up serverless instances specifically designed for GPUs, and certain ones are designed for diffusion models or other things. So there are different examples of model and GPU hosting.

    [44:28 - 44:46] And even on the GPU side, as I'm sure you know, you can use centralized infrastructure, but you can also use decentralized infrastructures like Prime Intellect. There are also AI captcha solvers.

    [44:47 - 44:58] There are different sources for AI datasets. There are specific tools around web scraping: large-language-model web scraping systems.

    [44:59 - 45:05] I'm sure you've built scrapers; it's a pain to deal with the DOM.

    [45:06 - 45:15] And then there are even tools around data labeling and more. So it's all shared there.

    [45:16 - 45:27] And then there's an emerging set of open source frameworks. There are different agent frameworks.

    [45:28 - 45:36] There's multi-agent orchestration, and there are plenty of agent systems.

    [45:37 - 45:56] There are things designed around prototyping and things around deployment. On inference, for example, one of the things we're going to teach is vLLM, which implements a lot of state-of-the-art inference serving techniques.

    [45:57 - 46:07] Then there are different ways to benchmark and to orchestrate things. Also tools for fine-tuning:

    [46:08 - 46:11] traditional fine-tuning, quantized LoRA (QLoRA) fine-tuning,

    [46:12 - 46:15] RLHF. You'll get to know a lot of these

    [46:16 - 46:20] as we go through the course, so you'll be able to find them

    [46:21 - 46:31] across the different categories. Then we've put a series of articles here

    [46:32 - 46:37] on different AI concepts, if you want to go a little bit further.

    [46:38 - 46:48] And then we have a series of project ideas here: YC project ideas,

    [46:49 - 46:54] and we have Indie Hackers. I think there are 200 to 300 here.

    [46:55 - 47:01] And we're in the middle of adding more as well. So, let me...

    [47:02 - 47:17] Okay, Bashir asked: when we mentioned text plus image, would that be transformers plus diffusion, or something in between?

    [47:18 - 47:30] So, we're going to go over transformer-based text plus image, which is based on CLIP embeddings. We're not going to go too much into diffusion.

    [47:31 - 47:42] I think, Bashir, you mentioned that you were interested in text plus video, so we're in the middle of potentially exploring text-plus-video fine-tuning as well.

    [47:43 - 47:57] Diffusion is a little bit different, but diffusion models still utilize a lot of the same concepts for fine-tuning, like quantized LoRA fine-tuning, for example.

    [47:58 - 48:16] Sasha asked: will we be learning about inference for visual transformers? We'll be learning about the inference side, primarily at the library level.

    [48:17 - 48:34] So we'll introduce you to vLLM and how to build your own inference, and a lot of these libraries support multimedia.

    [48:35 - 48:47] So it will essentially be the same as learning about inference for LLMs. Generally speaking, that covers inference with vision transformers.

    [48:48 - 49:01] Though yes, they're not called visual transformers; it's vision transformers. Okay.

    [49:02 - 49:13] And then we have different newsletters and Twitter accounts. We're going to add more here:

    [49:14 - 49:28] YouTube channels, different subreddits, and different Substacks. So that's the open resources in general. Oh, and then there are the automation templates.

    [49:29 - 49:49] The automation templates are primarily designed so that you can utilize them as you do the course. You'll see a lot of people building AI agencies.

    [49:50 - 50:07] The problem with AI agencies is that if you don't know anything about AI techniques, you have to plug things in along the workflow, and you won't actually know what to plug in. We have these AI automation templates here that you can utilize.

    [50:08 - 50:28] So as you go through the course, you can go in and plug different things into these workflow automation systems as well. Before I go on to the next thing, did anyone have any questions?

    [50:29 - 50:39] Okay. - Yeah, I do have a question.

    [50:40 - 50:48] What's the best way for us to reach out to you or to Marianne? And I don't have access to this resource page that you just shared.

    [50:49 - 51:00] - Oh, yeah. Okay, thank you. I'll get you access; because you signed up pretty recently, I think I didn't give you access yet.

    [51:01 - 51:11] So, generally speaking, I'll show you the community website; most of the interaction will be on the community website.

    [51:12 - 51:19] And then we'll be adding a DM feature, so you can DM us as well.

    [51:20 - 51:29] - Will you post the presentations and the recordings of the meetings? For example, if we can't make it that day, will there be a place for us to go?

    [51:30 - 52:07] - Yeah, that's right. (mouse clicking) So, this is the community website; you'll be getting an invitation from us for it. If you've seen or used Circle, it's very similar to a course community website.

    [52:08 - 52:25] The idea of this is that it's an integrated discussion and chat alongside the course, with the ability to see the members, all in one place. And you might be wondering: why did we build our own? What's the point of this?

    [52:26 - 52:33] We're going to be integrating AI chat, so you can ask questions against the different videos.

    [52:34 - 52:49] As we go through the course, there will be, I think, over 40 hours of lectures, plus maybe 20 to 30 more hours. I think last cohort we ended up producing around 100 hours of content.

    [52:50 - 53:01] So it's going to be very hard for you to go through it and reference things. Even we don't remember exactly what we said when.

    [53:02 - 53:12] So we're in the middle of building AI chat into it; it doesn't quite work right now. And then we're going to be building a digital clone of ourselves.

    [53:13 - 53:24] The digital clone will have all our questions and answers from all our transcripts and interactions; it will be our digital clone.

    [53:25 - 53:35] And the third thing we're going to do is AI-based summaries, to be able to prioritize different discussions and cut through the noise. So anyway, that's the high level.

    [53:36 - 53:41] You'll see these features; they're going to come in while we're in the course.

    [53:42 - 53:53] The very first thing is going to be "Getting Started." There are different onboarding steps here:

    [53:54 - 54:01] how to access your course and other things. We're in the middle of finalizing all of this.

    [54:02 - 54:18] But once you click through everything, you'll be able to find instructions on what to do. And then you'll want to say hello here:

    [54:19 - 54:23] some information about you, what your backgrounds are,

    [54:24 - 54:33] what your superpowers are. Marianne has an initial introduction here.

    [54:34 - 54:46] We'll have announcements here; in the last cohort, we posted different things in that space.

    [54:47 - 54:56] And then general questions: any general, onboarding, or orientation questions.

    [54:57 - 55:10] Then we'll have very specific spaces around lectures. So if you have questions about very specific lectures, just post them here:

    [55:11 - 55:25] questions about lecture one, lecture two, the coaching calls, the Q&A, the happy hour, as well as the guest lectures and so forth. And then this is the course itself.

    [55:26 - 55:51] In the initial part of the course, we have a series of introduction tutorials for you: getting familiar with the course overview, getting started with the course platform,

    [55:52 - 56:04] Python and tooling essentials, setting up virtual environments, a basic Python introduction, Jupyter notebooks, a unified Python setup,

    [56:05 - 56:26] restricted versus unrestricted open source AI libraries, hardware for AI, advanced AI concepts, brainstorming with prompting, and using ChatGPT as a research paper assistant. There are 18 different items.

    [56:27 - 56:51] You should be able to go through them relatively quickly. And then we'll have the different modules here: unit one, unit two, unit three, unit four, unit five.

    [56:52 - 56:58] And then here we'll have the events. You'll be able to see the events.

    [56:59 - 57:03] We already sent you calendar invites; that's why you're all here.

    [57:04 - 57:14] But you'll be able to see all the events here. And then these spaces are specific to the lectures.

    [57:15 - 57:38] So if you have questions about week one, lecture one, post them here; week one, lecture two, there; and so on for the next series of weeks. Now, these are an example of the exercises you'll be getting.

    [57:39 - 57:53] In the next lecture, we're going to go through a technical orientation: initially around 2D and 3D, matrix multiplication, and linear algebra.

    [57:54 - 58:10] The idea is that we provide three forms of notebooks. One is notebook examples, numbered notebook one onwards.

    [58:11 - 58:16] Then there's another set which are exercises, and another set which are mini projects.

    [58:17 - 58:35] These are all different types; the exercises are designed to be comprehensive, so that you get an idea of the underlying concepts and then are able to reproduce the exercise.

    [58:36 - 58:51] For example, you can see this one is relatively verbose in terms of comments: why we use Facebook's OPT 125-million-parameter model versus the 1.3-billion one.

    [58:52 - 59:07] And then it shows how to manage your RAM and VRAM as you use the Colab system, and how to do memory cleanup.
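
As a sketch of the kind of cleanup that notebook walks through; this is a generic Colab pattern, not the notebook's exact code, and the `torch` calls are guarded so it also runs where no GPU (or no torch install) is present.

```python
import gc

def cleanup_memory():
    # Run Python's garbage collector so dropped models/tensors are released...
    collected = gc.collect()
    # ...then return cached GPU blocks to the driver, if torch and CUDA exist.
    try:
        import torch
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
    except ImportError:
        pass
    return collected

# Typical use after an experiment: del model; del tokenizer; cleanup_memory()
print(cleanup_memory() >= 0)  # True
```

Without the `empty_cache()` step, VRAM freed by Python can still appear "used" in `nvidia-smi`, which is a common source of confusion in Colab.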

    [59:08 - 59:28] Then we introduce you to LLM inference, the basic pipeline. In this cohort in particular, we spent resources cleaning up the exercises and making them much more specific:

    [59:29 - 59:34] why are you running an LLM? How do you build an inference pipeline?

    [59:35 - 59:43] Then, as you go through, you'll have different exercises. These are somewhat gamified, with a specific theme.

    [59:44 - 59:54] Last cohort, we introduced embeddings and multimodal embeddings, and we had people do open-ended exploration of the different pieces.

    [59:55 - 01:00:06] What we found is that it was not as effective, so now we provide much more structured ways of exploring the different parts of the models.

    [01:00:07 - 01:00:25] So, for example, this is your first LLM inference pipeline: do step one, do step two, load the tokenizer and model, write your prompt here. And then this is where it creates the prediction and you generate from the final layer.
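
The steps described above can be sketched roughly like this, assuming the Hugging Face `transformers` library and the `facebook/opt-125m` checkpoint mentioned earlier; the prompt string is just a placeholder, and the actual exercise notebook may differ in its details.

```python
# Rough sketch of a first LLM inference pipeline (not the exact notebook code).
from transformers import AutoModelForCausalLM, AutoTokenizer

# Step 1: load the tokenizer and model (OPT-125M keeps Colab RAM usage modest).
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

# Step 2: write your prompt and tokenize it into input IDs.
prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

# Step 3: the model predicts token by token from the final layer's logits.
output_ids = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

The exercise then builds on exactly these three stages: swap the model, change the prompt, and inspect what comes out of the final layer.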

    [01:00:26 - 01:00:40] Then you're able to explore model variety: size and number of parameters versus architecture (OPT versus Mistral) versus training data. And what does that mean

    [01:00:41 - 01:00:45] in practice? This is the exercise.

    [01:00:46 - 01:01:05] A lot of your exercises will be structured like this, so that they're full of comments and very specific about what to do. So that's the idea of the exercises.

    [01:01:06 - 01:01:15] Yeah. I'm not going to go over all of this, but this is another example:

    [01:01:16 - 01:01:27] how do you deal with temperature versus top-k versus top-p versus max new tokens? How do you adjust these parameters? Last cohort, we just gave these to people as concepts.

    [01:01:28 - 01:01:37] We did have the exercise, but this one is a lot more specific on how you actually do it. It has an LLM chef motif.

    [01:01:38 - 01:01:43] So we've made it theme-based and very specific.
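
To build intuition for those knobs, here is a toy, pure-Python sketch over made-up logits (no real model involved): temperature rescales the logits before the softmax, top-k keeps only the k most likely tokens, and top-p keeps the smallest set of tokens whose probabilities sum to at least p. `max_new_tokens` simply caps how many such sampling steps the model takes.

```python
import math

def softmax_with_temperature(logits, temperature=1.0):
    # Higher temperature flattens the distribution; lower sharpens it.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def top_k_filter(probs, k):
    # Keep the k highest-probability tokens, zero the rest, renormalize.
    cutoff = sorted(probs, reverse=True)[k - 1]
    kept = [p if p >= cutoff else 0.0 for p in probs]
    total = sum(kept)
    return [p / total for p in kept]

def top_p_filter(probs, p):
    # Keep the smallest high-probability set whose cumulative mass >= p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cum = [0.0] * len(probs), 0.0
    for i in order:
        kept[i] = probs[i]
        cum += probs[i]
        if cum >= p:
            break
    total = sum(kept)
    return [x / total for x in kept]

logits = [2.0, 1.0, 0.5, -1.0]  # made-up logits for a 4-token vocabulary
cold = softmax_with_temperature(logits, temperature=0.5)
hot = softmax_with_temperature(logits, temperature=2.0)
print(max(cold) > max(hot))   # low temperature concentrates probability: True
print(top_k_filter(hot, k=2)) # only the two most likely tokens survive
```

Real inference libraries apply the same ideas to tensors of logits at every generation step; the exercise has you turn each knob and observe how the output text changes.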

    [01:01:44 - 01:01:52] So that's the general idea of the lectures. We'll provide an introduction on the projects when we get to them.

    [01:01:53 - 01:02:02] Let's see. - So, will we be able to log into that now, or do we need to wait?

    [01:02:03 - 01:02:11] - Just wait a little bit. It should be ready either later today or, most likely, early tomorrow.

    [01:02:12 - 01:02:19] Yeah, I have to check in with the team.

    [01:02:20 - 01:02:37] We've been rushing to get everything working. - About onboarding, when are we expected to go through all of it?

    [01:02:38 - 01:02:44] Should we go through it on our own before the first unit? - Yeah.

    [01:02:45 - 01:02:49] So, I would schedule an onboarding call with me.

    [01:02:50 - 01:03:04] We had an initial conversation, but the onboarding will be more specific around your particular project. Then schedule one every two weeks or so.

    [01:03:05 - 01:03:19] In the very beginning, what we want to do is get your projects to be much more specific, so you get an idea of what category, what project templates, and then what general techniques to look at.

    [01:03:20 - 01:03:28] - Thank you for bringing that up, because I have a question about that too. But I was referring to the one in the community.

    [01:03:29 - 01:03:38] When you were going through the community, there was an onboarding tab. - Yeah, so...

    [01:03:39 - 01:04:12] Yeah. So I just renamed it.

    [01:04:13 - 01:04:19] Instead of "Start Here," it's now "Onboarding." We're going to refine this series of steps,

    [01:04:20 - 01:04:31] but the idea is to go through these links to be able to onboard. - Okay, like in Q&A, down at general questions, onboarding and orientation?

    [01:04:32 - 01:04:41] Could you click on that? Yeah, I saw something where there were like 18 items or something before unit one.

    [01:04:42 - 01:04:51] - Yeah, so that part, yeah. This is basically the onboarding

    [01:04:52 - 01:04:56] tutorials; you'll be able to get access to this. - Okay.

    [01:04:57 - 01:05:05] All right, thank you. - Yeah.

    [01:05:06 - 01:05:22] Okay. Part of what we're doing with this course is migrating you guys up.

    [01:05:23 - 01:05:31] A lot of people think of school as being able to memorize and remember things.

    [01:05:32 - 01:05:38] That's actually the lowest level of learning. The highest levels are being able to evaluate and being able to create.

    [01:05:39 - 01:05:45] So at the end of the course, you should be able to say: this technique is good,

    [01:05:46 - 01:06:03] this technique is not; be able to create your own things; and be able to apply the right tool to the right problem. So what we're trying to do, with the project-based coaching, is migrate you from the bottom all the way up that path.

    [01:06:04 - 01:06:12] Make sure you use analogies as scaffolding. You can put it into your system prompt, into your chat:

    [01:06:13 - 01:06:17] in your system prompt, ask to have things explained to you via analogies.

    [01:06:18 - 01:06:28] And then, yeah, that's it for the lectures. In the last cohort,

    [01:06:29 - 01:06:34] we had up to one- and two-hour lectures, and people's brains got fried.

    [01:06:35 - 01:06:43] This time, we're going to limit the conceptual side to about an hour.

    [01:06:44 - 01:07:02] Then we're going to spend about 10 minutes going through the exercise, the notebook, and the code, and then leave a little bit of time for Q&A as well. So that's going to be the general idea for the lectures.

    [01:07:03 - 01:07:13] So, did you guys have any questions? - I have a question.

    [01:07:14 - 01:07:20] Yeah, so we have another course session, I assume, at another time.

    [01:07:21 - 01:07:27] - That's the same one; it's for people in different time zones.

    [01:07:28 - 01:07:37] - Okay, so is it fine if, on some days when I cannot join this one, I join the other one?

    [01:07:38 - 01:07:46] - Yeah, you can do either. There was one person in the previous cohort that ended up joining

    [01:07:47 - 01:07:55] both calls, because he was very enthusiastic, but it ended up taking a lot of his time.

    [01:07:56 - 01:08:00] So he stopped doing that partway through the cohort. But you can switch between the two.

    [01:08:01 - 01:08:09] - That's my intention, to join whichever one I can. And one more question:

    [01:08:10 - 01:08:17] you mentioned Alvin's workshop at the beginning of this lecture. Do you have the URL for it?

    [01:08:18 - 01:08:22] How can I watch it? - Yeah, sure.

    [01:08:23 - 01:08:33] Because you onboarded relatively recently, just message Marianne at fullstack.io.

    [01:08:34 - 01:08:43] She'll get you access to it. - Okay, thanks.

    [01:08:44 - 01:09:00] Any other questions? - I have a question.

    [01:09:01 - 01:09:10] What should we expect to do tomorrow in the AI coaching call? Project coaching, or not?

    [01:09:11 - 01:09:14] - In the first four

    [01:09:15 - 01:09:27] sessions, it will primarily be lectures, in a way, like today. We entered into the project coaching too quickly in the previous cohort,

    [01:09:28 - 01:09:40] and we found that people didn't have a fully integrated view of what AI projects and AI startups are; people were just not fully aware of the different types of modalities,

    [01:09:41 - 01:09:45] even though we provided it in the project types. So the first sessions

    [01:09:46 - 01:09:56] will be lectures, so that you get a sense of the project types and project modalities.

    [01:09:57 - 01:10:07] And so, of course, you'll be able to say: "Oh, okay, I didn't know that I could do a citation engine.

    [01:10:08 - 01:10:12] I didn't know I could do a better search." That's a project modality.

    [01:10:13 - 01:10:25] Or: "I could do, I don't know, an AI receptionist." That's another modality. Or text-to-video, text-to-avatar, whatever it is.

    [01:10:26 - 01:10:29] Yeah. OK, OK.

    [01:10:30 - 01:10:51] Yeah, and then, Bashir, transformers versus diffusion is something that we will go into. We will mention diffusion lightly,

    [01:10:52 - 01:11:04] and then we will go into a lot of the video architectures, like hybrid diffusion versus transformers.

    [01:11:05 - 01:11:12] So we may actually go into it a little bit more, but we're not going to go deeply into diffusion.

    [01:11:13 - 01:11:16] - Sounds good. Thank you.

    [01:11:17 - 01:11:21] I'm happy to explore everything that's possible, but yeah, my interests lean that way for now.

    [01:11:22 - 01:11:30] Yeah. All right, any more questions?

    [01:11:31 - 01:11:43] All right. Yeah, that's it.

    [01:11:44 - 01:11:57] Then, once operations are a little bit smoother, you should expect the video recordings, the lecture notes, everything to be in there.

    [01:11:58 - 01:12:16] We will try to get the lecture notes out ahead of time, but sometimes what will happen is that we'll be editing them up to the day of, and you'll get a slightly different version; you'll have to re-download it.

    [01:12:17 - 01:12:34] But in order to manage your cognitive load, we will try to provide the lecture notes even ahead of time, so you're effectively repeating, looking at the material multiple times. This is why we suggested going through Alvin's workshop:

    [01:12:35 - 01:12:48] you're seeing the transformer material one time there, and then you're seeing it another time going through the course. That spaced repetition is something that we

    [01:12:49 - 01:13:00] think is important and emphasize. All right, so I guess that's it,

    [01:13:01 - 01:13:15] if you guys don't have any more questions. The next lecture will be around probability and statistics, but the probability and statistics related to large language models.

    [01:13:16 - 01:13:18] And yeah. Thank you.

    [01:13:19 - 01:13:20] All right. Thank you.

    [01:13:21 - 01:13:23] Thank you very much.

    [01:13:24 - 01:13:28] I will see you again. Bye.