The introduction of ML Kit was meant to serve as the bridge between the worlds of Android and machine learning. In the previous chapter, you worked with on-device ML using ML Kit – you built your custom Document Scanner, recognized text within images, and shared the results effortlessly with a few lines of code! It feels powerful, right? ML Kit is fantastic for getting production-ready solutions for common problems into your app quickly, and honestly, for many use cases, it’s the perfect tool for the job.
But sometimes, you need more than what ML Kit currently offers.
Maybe you want to build a cool LLM-based chat that works offline. Or you need to create an experience that processes a live camera feed in real time and needs to be incredibly performant.
That’s the moment you graduate from ML Kit to MediaPipe.
MediaPipe: A Complete Toolset for Custom Machine Learning Solutions
ML Kit gives you a set of specialized tools, whereas MediaPipe gives you the entire workshop! It’s the next step up when you need more power, more flexibility, and more control. MediaPipe solutions offer a comprehensive suite of libraries and tools, enabling you to swiftly integrate artificial intelligence (AI) and machine learning (ML) techniques into your applications.
MediaPipe provides two main resources to empower your intelligent apps:
MediaPipe Tasks: Cross-platform APIs and libraries that make it easy to deploy and integrate ML solutions into your applications.
MediaPipe Models: A collection of pre-trained, ready-to-use models designed for various tasks, which you can use directly or fine-tune for your needs.
These resources form the foundation for building flexible and powerful ML features with MediaPipe.
The tools below enable you to use these Tasks and Models for your custom ML solutions:
MediaPipe Model Maker: This is your entry point into the world of custom models. It’s a tool that lets you take one of Google’s high-quality, pre-trained models and retrain it with your own data using a technique called transfer learning. You don’t need to be an ML expert; you just need a good dataset.
The output of Model Maker is a TensorFlow Lite .tflite file, which you’ll need to convert into a MediaPipe-specific .task file. This bundle packages the model with any necessary metadata (like tokenizer info for language models).
You’ll integrate this custom .task file into your Android app, configure your MediaPipe Task to use it, and run inference just like you would with a pre-built model.
Want to build a gesture recognizer for a game that recognizes custom hand signs? Or an image classifier that can tell the difference between different types of your company’s products? Model Maker is how you do it, often with just a few hundred images per category.
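As a sketch of what that final integration step can look like, here’s how a custom gesture model might be wired into MediaPipe’s Tasks API on Android. The class and builder names follow MediaPipe’s published Tasks API; the asset name `custom_gestures.task` and the `context` variable are illustrative assumptions, not part of any real project:

```kotlin
import com.google.mediapipe.tasks.core.BaseOptions
import com.google.mediapipe.tasks.vision.gesturerecognizer.GestureRecognizer

// Point the task at the custom bundle produced by Model Maker.
// "custom_gestures.task" is a hypothetical asset name.
val options = GestureRecognizer.GestureRecognizerOptions.builder()
    .setBaseOptions(
        BaseOptions.builder()
            .setModelAssetPath("custom_gestures.task")
            .build()
    )
    .build()

// From here on, the recognizer is used exactly like one backed
// by a stock Google model.
val recognizer = GestureRecognizer.createFromOptions(context, options)
```

The key point: swapping in a custom model is a configuration change, not an architectural one.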
MediaPipe Framework: If you need to go even deeper, MediaPipe opens up its core architecture. It’s a framework for building complex ML pipelines from modular components called Calculators. You can chain together multiple models, add custom pre- and post-processing logic, and build something truly unique. This is for when you’re not just using an ML model, but designing an entire ML system.
Let’s break down why you’d switch to MediaPipe instead of using ML Kit.
When “Good Enough” Isn’t Custom Enough
ML Kit is excellent for common tasks because it uses models trained on general data. But what if your app needs something more specific, more granular? This is MediaPipe’s killer feature: customization.
ML Kit lets you use a custom TensorFlow Lite model, but MediaPipe is designed from the ground up to make training, customizing, and deploying these models a core part of the workflow.
When Every Millisecond Counts: Real-Time Performance
ML Kit’s on-device models are optimized for mobile, but MediaPipe is in a league of its own when it comes to processing live and streaming media. Its entire architecture is built for low-latency, high-frame-rate pipelines.
Augmented Reality (AR) effects: Applying filters to a face in a live camera feed.
Fitness and physical therapy apps: Analyzing a user’s posture in real time to give feedback on their form.
Gesture-based controls: Using hand landmarks to interact with your app’s UI.
MediaPipe achieves this through end-to-end hardware acceleration, making efficient use of the device’s GPU to handle the heavy lifting of both ML inference and video processing. When you’re processing a continuous video stream, this level of performance is the difference between a choppy, delayed experience and a smooth, responsive one.
When Your App Lives Beyond Android
This is a big reason. ML Kit is fantastic for native mobile development on Android and iOS. But what happens when your team wants to launch a web version of your app?
MediaPipe is a cross-platform framework. You can build your ML pipeline once and deploy it everywhere: Android, iOS, web, desktop, and even IoT devices. The APIs are designed to be consistent across platforms, meaning you can reuse a lot of your logic and don’t have to start from scratch for each new platform you support.
This also means a single, unified interface for teams that need to maintain a consistent user experience across different ecosystems.
When You Want to Live on the Cutting Edge
As MediaPipe is a more flexible and open framework, it’s often the place where you’ll first see support for more advanced and experimental on-device tasks, especially in the realm of generative AI.
While ML Kit is now getting its own on-device GenAI APIs powered by Gemini Nano, MediaPipe often provides a more direct and customizable path for developers who want to experiment with a wider variety of open models and build more complex generative features.
Building Your First On-device LLM App
Remember the “Cat Breeds” app you built in chapter 2? What if you could chat with a veterinary specialist and ask about cats? Cool, right? Let’s build that with an on-device LLM using MediaPipe!
Adding the LLM Inference API
The LLM Inference API enables Android apps to run large language models (LLMs) entirely on-device. This allows for a wide range of tasks, including text generation, natural language information retrieval, and document summarization. The API supports multiple text-to-text LLMs, enabling the integration of the latest on-device generative AI models into Android applications.
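To make the API description concrete, here’s a rough setup sketch. The class and builder names follow MediaPipe’s GenAI Tasks API for Android; the model path, token limit, prompt, and the `context` variable are illustrative assumptions rather than values from this project:

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Configure the engine with a model that's already on the device.
// The path and maxTokens value are illustrative assumptions.
val options = LlmInference.LlmInferenceOptions.builder()
    .setModelPath("/data/local/tmp/llm/model.task")
    .setMaxTokens(512) // upper bound on input + output tokens
    .build()

// Create the inference engine and run a one-shot, synchronous prompt.
val llmInference = LlmInference.createFromOptions(context, options)
val response = llmInference.generateResponse("Name three cat breeds.")
```

Everything happens on-device: no network call, no server, no data leaving the phone.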
Build and run the app, and check the Logcat console in the IDE. You’ll notice a “Model Not Found” exception.
Model Not Found
This occurs because the app is trying to initialize a model that isn’t available yet. You need to ensure the model is downloaded and stored in the correct path so it can be initialized properly.
Adding the Model
Before initializing the LLM Inference API, push a language model from your computer to your test device. Run the following command in your terminal to check which devices are connected to your machine:
$ adb devices
You’ll see a list of all connected and authorized devices, similar to the following output:
List of devices attached
ZRF198804FEBBD device
emulator-5554 device
In the list above, ZRF198804FEBBD and emulator-5554 are example device_id values. You’ll need the device_id of your target device when running multiple ADB commands.
The first line creates a folder on the connected device, associated with the device_id, to store the model.
The second line pushes the model from <model_download_path>, located on your computer, to the model folder on your test device.
Note: The <model_download_path> depends on where you downloaded the model. If you’re on macOS, it may look like: /Users/<username>/Downloads/<model_file>
Once these commands are successfully executed, you’ll see a confirmation in the terminal similar to the following:
Go to the InferenceManager class in the com.kodeco.android.aam.llm package in the starter project. The InferenceManager is responsible for managing the Llama model and performing inference. It handles loading the model, creating an inference session, and generating responses based on user prompts. To do so, InferenceManager relies on two key objects:
The following configuration options are available when you set up an LLM inference session:
modelPath: The path to where the model is stored within the project directory.
maxTokens: The maximum number of tokens (input + output) the model handles.
topK: The number of tokens the model considers at each step of generation, limiting predictions to the top most-probable tokens.
temperature: The amount of randomness introduced during generation. A higher temperature results in more creative text, while a lower temperature produces more predictable text.
randomSeed: The random seed used during text generation.
loraPath: The absolute path to a LoRA model stored locally on the device. Note: This is only compatible with GPU models.
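Here’s a sketch of how session-level options can be supplied when creating an inference session. The builder and setter names follow MediaPipe’s GenAI Tasks library and should be treated as assumptions, as should the sample values; `llmInference` stands for an already-created engine instance:

```kotlin
import com.google.mediapipe.tasks.genai.llminference.LlmInferenceSession

// Session-level sampling options; the values shown are illustrative.
val sessionOptions = LlmInferenceSession.LlmInferenceSessionOptions.builder()
    .setTopK(40)          // sample from the 40 most-probable tokens
    .setTemperature(0.8f) // moderate creativity
    .setRandomSeed(0)     // reproducible sampling
    .build()

// llmInference is an existing LlmInference engine instance.
val session = LlmInferenceSession.createFromOptions(llmInference, sessionOptions)
```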
Now, update the init block as follows to initialize your first inference session:
init {
    if (!modelExists(context)) {
        throw IllegalArgumentException("Model not found at path: ${LLM_MODEL.path}")
    }
    createEngine(context)
    createSession()
}
To das sanluj do udx puheqkifp abfaxlb jiq awd mcexi wsetcip.
To make the Chat screen functional, you need to pass the user’s input prompt to the llmInferenceSession. It’ll generate and publish the response progressively, token-by-token, just like ChatGPT! To achieve this, you’ll need to attach a ProgressListener to it.
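The listener pattern just described can be sketched without any library at all: a generator emits partial results token by token, and the listener accumulates them until `done` is true. The type and function names below are illustrative stand-ins, not MediaPipe’s actual classes:

```kotlin
// Minimal, library-free sketch of progressive (token-by-token) delivery.
// ProgressListener here is a stand-in for MediaPipe's listener type.
fun interface ProgressListener {
    fun onResult(partialResult: String, done: Boolean)
}

// Simulates an async session pushing tokens to the listener one at a time.
fun streamTokens(tokens: List<String>, listener: ProgressListener) {
    tokens.forEachIndexed { index, token ->
        listener.onResult(token, done = index == tokens.lastIndex)
    }
}

fun main() {
    val builder = StringBuilder()
    var finished = false
    streamTokens(listOf("Cats ", "are ", "great!")) { partial, done ->
        builder.append(partial)   // append each partial result as it arrives
        if (done) finished = true // the final callback marks completion
    }
    println(builder.toString())   // → Cats are great!
    println(finished)             // → true
}
```

The real session invokes your listener on each generated chunk, so the UI can render the reply as it grows instead of waiting for the full response.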
Add the generateResponseAsync() function to the InferenceManager class as follows:
“Try to avoid responses longer than 100 words”
Generating Response
Estimating Remaining Tokens
You may have noticed the “0 tokens remaining” message on the chat screen right after sending your first query to the LLM. Even though you set a token limit in the prompt, limiting the output to 100 words, the token count in the UI doesn’t yet reflect reality. Update the estimateTokensRemaining() function in InferenceManager as follows:
fun estimateTokensRemaining(contextWindow: String): Int {
    // Sentinel value: nothing has been sent to the model yet.
    if (contextWindow.isEmpty()) return -1
    // Ask the session how many tokens the conversation consumes so far.
    val sizeOfAllMessages = llmInferenceSession.sizeInTokens(contextWindow)
    val remainingTokens = MAX_TOKENS - sizeOfAllMessages
    // Never report a negative budget.
    return max(0, remainingTokens)
}
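To see the budget math in isolation, here’s a library-free version of the same calculation. A crude whitespace split stands in for the session’s sizeInTokens() — an assumption for illustration only, since real tokenizers count subword units, not words:

```kotlin
import kotlin.math.max

const val MAX_TOKENS = 512 // hypothetical context-window size

// Stand-in for llmInferenceSession.sizeInTokens(): a rough
// whitespace-based count, NOT a real subword tokenizer.
fun sizeInTokensApprox(text: String): Int =
    text.trim().split(Regex("\\s+")).count { it.isNotEmpty() }

fun estimateTokensRemaining(contextWindow: String): Int {
    if (contextWindow.isEmpty()) return -1 // nothing sent yet
    val used = sizeInTokensApprox(contextWindow)
    return max(0, MAX_TOKENS - used)       // never report negative
}

fun main() {
    println(estimateTokensRemaining(""))              // → -1
    println(estimateTokensRemaining("one two three")) // → 509
}
```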
Did you try tapping the Reset button? If so, you may have noticed it’s not functional yet. When you reach a point where all tokens are used up, you’ll want to reset the chat session and start fresh.
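Conceptually, a reset closes the spent session and replaces it with a fresh one created from the same engine. The sketch below models that flow with plain Kotlin stand-ins — `Session` and `InferenceManagerSketch` are hypothetical names, not MediaPipe’s real classes:

```kotlin
// Stand-in for an inference session that accumulates token usage.
class Session {
    var tokensUsed = 0
        private set
    var closed = false
        private set

    fun consume(tokens: Int) { tokensUsed += tokens }
    fun close() { closed = true }
}

class InferenceManagerSketch {
    var session = Session()
        private set

    // Reset: release the old session's resources, then start fresh
    // with a full token budget.
    fun resetSession() {
        session.close()
        session = Session()
    }
}

fun main() {
    val manager = InferenceManagerSketch()
    manager.session.consume(512) // budget exhausted
    manager.resetSession()
    println(manager.session.tokensUsed) // → 0
}
```

The important design choice is that the chat history is dropped along with the old session, so the context window starts empty again.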