Using Docker and Ngrok to deploy machine learning inference

Salman Chen
4 min read · Dec 15, 2020

Hello everyone,

I want to celebrate that I finally have a part-time dream job as a coffee barista, and I am on my way through training (well, not that kind of training, of course). If you ever met me in person, you probably would not guess from my look that I work as an AI product researcher. I drink a lot of coffee and hang out with fellow coffee enthusiasts.

Photo by Nathan Dumlao on Unsplash

Following my previous post on building an end-to-end coffee picture classifier with Docker (which you can later deploy on Render), I have good news: there is a free, if temporary, alternative for deploying your machine learning or data science project!

I suggest reading that post first, although you can jump straight to this section if you care more about the deployment side.

Yes, we will use Ngrok to expose our localhost service. Ngrok is a simple tunnel: you can share your localhost with the outside world for as long as your process is running.

Get Started!

Since I already provided the repository and explanation in the previous post, I will just walk through the deployment process here. Fortunately, the Fast.ai tutorial already provides scripts for the frontend.

git clone https://github.com/salmanhiro/coffee-classifier.git

To deploy it with Docker, follow these steps:

docker build -t coffee .
docker run --rm -it -p 5000:5000 coffee

Then you can try it in your browser at the localhost address, http://localhost:5000.
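Before moving on, it helps to confirm the container is actually serving. A minimal check from another terminal, assuming you kept the 5000:5000 port mapping from the `docker run` command above:

```shell
# Probe the app from a second terminal.
# Assumes the container from `docker run` above is still up on port 5000.
URL="http://localhost:5000"
# -f makes curl fail on HTTP errors; -s silences progress output.
if curl -fs --max-time 5 "$URL" > /dev/null; then
  STATUS="up"
else
  STATUS="down"
fi
echo "app is $STATUS at $URL"
```

If it reports "down", check `docker ps` to make sure the container is running and the port mapping is correct.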

Into Ngrok

Ngrok provides a real-time web UI where you can inspect all HTTP traffic running over your tunnels. In this example, I bind my Docker container's HTTP port to a local port, and ngrok in turn binds that local port to <some_keys>.ngrok.io. As simple as that.
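That web UI lives on your own machine: by default ngrok serves it, along with a small JSON API, on port 4040 while a tunnel is running. A quick sketch of querying it from the command line, assuming default ngrok settings and that `curl` is available:

```shell
# ngrok's local web UI and API listen on 127.0.0.1:4040 by default.
# /api/tunnels returns a JSON list of active tunnels and their public URLs.
INSPECT="http://127.0.0.1:4040/api/tunnels"
curl -fs --max-time 5 "$INSPECT" || echo "ngrok is not running (no inspection UI at $INSPECT)"
```

Opening http://127.0.0.1:4040 in a browser gives you the same information plus a per-request traffic log.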

You do not need to actually install ngrok, since it ships as a ready-to-run binary.

  1. Download the ngrok zip file. Click on Get started for free, sign up, and download the file. You can follow the getting-started instructions there, but I will write them down anyway.
  2. Unzip the file:
unzip /path/to/ngrok.zip

3. Now it is unzipped! Put it in a convenient directory, since you may use it frequently.

4. Recall that your Docker container is still running from earlier. If your bound host address is something like localhost:5000, you can point ngrok's HTTP tunnel at it:

./ngrok http 5000

Then you will get a forwarding address. Copy it and share it with anyone; as requests come in, their HTTP status codes will show up in your ngrok terminal.
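Since a screenshot cannot be reproduced here, this is roughly what a successful session prints; the subdomain and region are made-up placeholders, and yours will differ:

```
Session Status                online
Region                        United States (us)
Web Interface                 http://127.0.0.1:4040
Forwarding                    http://abcd1234.ngrok.io -> http://localhost:5000
Forwarding                    https://abcd1234.ngrok.io -> http://localhost:5000
```

The two Forwarding lines are the public addresses; either the http or https one will reach your container.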

Try accessing the page from a different network to test the ngrok address. Remember that if you shut down the ngrok process in that terminal, you will be assigned a different address when you start it again later. Maybe you want to try hosting it on a Raspberry Pi?
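You do not even need a second device: hitting the public address with curl works too. A sketch, where the subdomain below is a made-up placeholder that you should replace with the forwarding address ngrok printed for you:

```shell
# Substitute your own forwarding address -- this subdomain is a placeholder.
NGROK_URL="https://abcd1234.ngrok.io"
# -I asks for headers only; --max-time keeps curl from hanging if the tunnel is down.
if curl -fsI --max-time 5 "$NGROK_URL" > /dev/null; then
  echo "tunnel reachable at $NGROK_URL"
else
  echo "tunnel not reachable at $NGROK_URL"
fi
```

Each request made this way should also appear in the ngrok terminal and in the web UI.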

References

  1. Fast.ai course, https://course19.fast.ai/.
  2. Repository for Fast.ai model deployment, https://github.com/render-examples/fastai-v3.
  3. Ngrok, https://ngrok.com/.

About the Author

Salman is a cyberpunk enthusiast who loves coffee, lo-fi music, extragalactic exploration, robots, and self-driving cars. He works on Asis and Mata, AI personal assistant products, as an applied researcher in natural language understanding and reinforcement-learning-driven self-learning agents. Previously, he was a computer vision applied researcher at the same parent company, Zapps AI; a research assistant at the Department of Astronomy, Institut Teknologi Bandung; and a robotics computer vision engineer at Dago Hoogeshool.

He majored in Astronomy and Astrophysics at Institut Teknologi Bandung, spent a physics summer school at Princeton University, and attended the Machine Learning Summer School (MLSS) 2020, where he was proudly taught by Max Welling's AMLab and researchers from DeepMind.

Check his profile on LinkedIn.
