This section describes how to deploy PaddlePaddle models to mobile devices, covering deployment optimization methods and some benchmarks.
## How to build PaddlePaddle for mobile
- Build PaddlePaddle for Android [Chinese] [English]
- Build PaddlePaddle for iOS [Chinese]
- Build PaddlePaddle for Raspberry Pi 3 [Chinese] [English]
- Build PaddlePaddle for PX2
- How to build the PaddlePaddle mobile inference library with minimal size.
## Deployment optimization methods
- Merge batch normalization layers into the preceding layers before deploying the model to mobile.
- Compress the model before deploying it to mobile.
- Merge the model config and parameter files into a single file.
- How to deploy an int8 model for mobile inference with PaddlePaddle.
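The batch-normalization merge listed above works because, at inference time, a batch-norm layer is an affine transform that can be folded into the weights and bias of the preceding layer, saving one pass over the activations. A minimal NumPy sketch of the idea for a fully-connected layer (the function name and shapes are illustrative, not part of any PaddlePaddle API):

```python
import numpy as np

def fold_batch_norm(W, b, gamma, beta, mean, var, eps=1e-5):
    """Fold an inference-time batch-norm layer into the preceding FC layer.

    BN(x @ W + b) == x @ W_folded + b_folded, where
        scale    = gamma / sqrt(var + eps)
        W_folded = W * scale              # scale each output column
        b_folded = (b - mean) * scale + beta
    """
    scale = gamma / np.sqrt(var + eps)
    return W * scale, (b - mean) * scale + beta

# Check that the folded layer matches FC followed by batch norm.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))
W = rng.normal(size=(3, 2))
b = rng.normal(size=2)
gamma = rng.normal(size=2)
beta = rng.normal(size=2)
mean = rng.normal(size=2)
var = rng.uniform(0.5, 2.0, size=2)

y_ref = gamma * ((x @ W + b) - mean) / np.sqrt(var + 1e-5) + beta
W_folded, b_folded = fold_batch_norm(W, b, gamma, beta, mean, var)
assert np.allclose(x @ W_folded + b_folded, y_ref)
```

The same algebra applies per output channel of a convolution; deployment tools apply it automatically once the model's running mean and variance are frozen.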