[[["容易理解","easyToUnderstand","thumb-up"],["確實解決了我的問題","solvedMyProblem","thumb-up"],["其他","otherUp","thumb-up"]],[["缺少我需要的資訊","missingTheInformationINeed","thumb-down"],["過於複雜/步驟過多","tooComplicatedTooManySteps","thumb-down"],["過時","outOfDate","thumb-down"],["翻譯問題","translationIssue","thumb-down"],["示例/程式碼問題","samplesCodeIssue","thumb-down"],["其他","otherDown","thumb-down"]],[],[],[],null,["# Google AI Edge\n\n### Deploy AI across mobile, web, and embedded applications\n\n - \n\n #### On device\n\n Reduce latency. Work offline. Keep your data local \\& private.\n- \n - \n\n #### Cross-platform\n\n Run the same model across Android, iOS, web, and embedded.\n- \n - \n\n #### Multi-framework\n\n Compatible with JAX, Keras, PyTorch, and TensorFlow models.\n- \n - \n\n #### Full AI edge stack\n\n Flexible frameworks, turnkey solutions, hardware accelerators\n\nReady-made solutions and flexible frameworks\n--------------------------------------------\n\n### Low-code APIs for common AI tasks\n\nCross-platform APIs to tackle common generative AI, vision, text, and audio tasks.\n[Get started with MediaPipe tasks](https://ai.google.dev/edge/mediapipe/solutions/guide) \n\n### Deploy custom models cross-platform\n\nPerformantly run JAX, Keras, PyTorch, and TensorFlow models on Android, iOS, web, and embedded devices, optimized for traditional ML and generative AI.\n[Get started with LiteRT](https://ai.google.dev/edge/litert) \n\n### Shorten development cycles with visualization\n\nVisualize your model's transformation through conversion and quantization. Debug hotspots by\noverlaying benchmarks results.\n[Get started with Model Explorer](https://ai.google.dev/edge/model-explorer) \n\n### Build custom pipelines for complex ML features\n\nBuild your own task by performantly chaining multiple ML models along with pre and post processing\nlogic. Run accelerated (GPU \\& NPU) pipelines without blocking on the CPU.\n[Get started with MediaPipe Framework](https://ai.google.dev/edge/mediapipe/framework) \n\nThe tools and frameworks that power Google's apps\n-------------------------------------------------\n\nExplore the full AI edge stack, with products at every level --- from low-code APIs down to hardware specific acceleration libraries. \n\nMediaPipe Tasks\n---------------\n\nQuickly build AI features into mobile and web apps using low-code APIs for common tasks spanning generative AI, computer vision, text, and audio. \nGenerative AI\n\nIntegrate generative language and image models directly into your apps with ready-to-use APIs. \nVision\n\nExplore a large range of vision tasks spanning segmentation, classification, detection, recognition, and body landmarks. \nText \\& audio\n\nClassify text and audio across many categories including language, sentiment, and your own custom categories. 
#### Get started

- [Tasks documentation](https://ai.google.dev/edge/mediapipe/solutions/guide): Find all of our ready-made low-code MediaPipe Tasks with documentation and code samples.
- [Generative AI tasks](https://ai.google.dev/edge/mediapipe/solutions/genai/llm_inference): Run LLMs and diffusion models on the edge with our MediaPipe generative AI tasks.
- [Try demos](https://goo.gle/mediapipe-studio): Explore our library of MediaPipe Tasks and try them yourself.
- [Model Maker documentation](https://ai.google.dev/edge/mediapipe/solutions/model_maker): Customize the models in our MediaPipe Tasks with your own data.

MediaPipe Framework
-------------------

A low-level framework for building high-performance, accelerated ML pipelines, often combining multiple ML models with pre- and post-processing logic (a minimal hello-world sketch appears at the end of this page).

[Get started](https://ai.google.dev/edge/mediapipe/framework)

LiteRT
------

Deploy AI models authored in any framework across mobile, web, and microcontrollers with optimized hardware-specific acceleration.

#### Multi-framework

Convert models from JAX, Keras, PyTorch, and TensorFlow to run on the edge.

#### Cross-platform

Run the exact same model on Android, iOS, web, and microcontrollers with native SDKs.

#### Lightweight & fast

LiteRT's efficient runtime takes up only a few megabytes and enables model acceleration across CPU, GPU, and NPU.

#### Get started

- [Pick a model](https://ai.google.dev/edge/litert/models/trained): Pick a new model, retrain an existing one, or bring your own.
- [Convert](https://ai.google.dev/edge/litert/models/convert_to_flatbuffer): Convert your JAX, Keras, PyTorch, or TensorFlow model into an optimized LiteRT model.
- [Deploy](https://ai.google.dev/edge/litert#integrate-model): Run a LiteRT model on Android, iOS, web, and microcontrollers.
- [Quantize](https://ai.google.dev/edge/litert/models/model_optimization): Compress your model to reduce latency, size, and peak memory.

These steps are illustrated end to end in the sketch at the end of this page.

Model Explorer
--------------

Visually explore, debug, and compare your models. Overlay performance benchmarks and numerics to pinpoint troublesome hotspots. A one-line launch sketch appears at the end of this page.

[Get started](https://ai.google.dev/edge/model-explorer)

Gemini Nano in Android & Chrome
-------------------------------

Build generative AI experiences using Google's most powerful on-device model.

[Learn more about Android AICore](https://developer.android.com/ai/aicore) | [Learn more about Chrome Built-In AI](https://developer.chrome.com/docs/ai)

Recent videos and blog posts
----------------------------

- [A walkthrough for Android's on-device GenAI solutions](https://www.youtube.com/watch?v=EpKghZYqVW4) (1 October 2024)
- [How to bring your AI Model to Android devices](https://android-developers.googleblog.com/2024/10/bring-your-ai-model-to-android-devices.html) (2 October 2024)
- [Gemini Nano is now available on Android via experimental access](https://android-developers.googleblog.com/2024/10/gemini-nano-experimental-access-available-on-android.html) (1 October 2024)
- [TensorFlow Lite is now LiteRT](https://developers.googleblog.com/en/tensorflow-lite-is-now-litert) (4 September 2024)
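Minimal code sketches
---------------------

The sketches below are illustrative, not official quick-starts: they assume the pip packages named in the comments and use placeholder model paths throughout. First, MediaPipe Framework's Python bindings, mirroring the hello-world graph from the framework documentation: a two-stream graph with a single pass-through node. Real pipelines swap the pass-through node for model-inference and image-processing calculators.

```python
# pip install mediapipe
import mediapipe as mp

# Canonical hello-world graph config: one node that copies its
# input stream to its output stream unchanged.
config_text = """
input_stream: 'in_stream'
output_stream: 'out_stream'
node {
  calculator: 'PassThroughCalculator'
  input_stream: 'in_stream'
  output_stream: 'out_stream'
}
"""

graph = mp.CalculatorGraph(graph_config=config_text)
output_packets = []
graph.observe_output_stream(
    'out_stream',
    lambda name, packet: output_packets.append(mp.packet_getter.get_str(packet)))

graph.start_run()
# Feed string packets at increasing timestamps.
for t, text in enumerate(['hello', 'world']):
    graph.add_packet_to_input_stream(
        'in_stream', mp.packet_creator.create_string(text).at(t))
graph.close()

print(output_packets)  # ['hello', 'world']
```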
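Next, the LiteRT pick, convert, quantize, and deploy steps as one Python sketch, using the `tf.lite` converter and interpreter APIs that LiteRT inherits from TensorFlow Lite (LiteRT also publishes a standalone `ai-edge-litert` package with an equivalent `Interpreter`). The SavedModel path is a placeholder, and the zeroed input stands in for real data.

```python
# pip install tensorflow
import numpy as np
import tensorflow as tf

# Convert a SavedModel (placeholder path) to the LiteRT flatbuffer format.
converter = tf.lite.TFLiteConverter.from_saved_model("path/to/saved_model")
# Optional: default post-training quantization to shrink the model.
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)

# Run inference with the interpreter.
interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Zeroed dummy input matching the model's expected shape and dtype.
dummy_input = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy_input)
interpreter.invoke()
result = interpreter.get_tensor(output_details[0]["index"])
print(result.shape)
```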
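Finally, launching Model Explorer on the converted model, assuming the `ai-edge-model-explorer` pip package; check the Model Explorer docs for the current package and API names.

```python
# pip install ai-edge-model-explorer
import model_explorer

# Starts a local server and opens a browser tab visualizing the model graph.
model_explorer.visualize("model.tflite")
```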