
| Technical Writer: Poornachandra Sarang | Technical Review: ABCOM Team | Copy Editors: Pooja Gramopadhye, Anushka Devasthale | Last Updated: July 23, 2020 | Level: Beginner | Banner Image Source: Internet |


Consider that you have recently developed a movie recommendation system. You would now like to demonstrate your work to your colleagues and friends and, for that matter, invite everybody else on the planet to use your model. Machine learning models typically run on a local machine or, at most, in the cloud. However, providing a web interface to such a model is not a trivial task. In this tutorial, I will show you how to integrate your ML model into a web application.

For this tutorial, I will use a trivial image classification model. Have a look at a small demonstration of what you will create by the end of this tutorial.

If you would like to try the above application on your own images, here is the application URL for you to try it out. This is the cloud URL where I have deployed the application. By the end of this tutorial, you will be able to deploy your application on the cloud and make it publicly available.

Initially, I will show you how to deploy the pre-trained ML model on a local web server, and then I will show you how to deploy the same on the cloud. For the local web server, you will use Flask, a lightweight micro web framework written in Python. I assume that you know how to develop and train a machine learning model and save it to a file, for example in the .hdf5 format. I have trained an image classification model based on the CIFAR-10 dataset that classifies a given image into 10 different categories. After training, the model is saved to an .hdf5 file. You will use this model for your web application. I also assume that you have basic knowledge of web application development and understand HTML and the use of CSS.
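
In case you are curious how such an .hdf5 file is produced, here is a minimal sketch: an untrained stand-in model saved in the HDF5 format. The architecture below is illustrative only, not the actual tutorial model, and CIFAR-10 training is omitted.

```python
# A minimal, untrained stand-in model saved in the .hdf5 format.
# The real tutorial model is trained on CIFAR-10 before saving.
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])

# The .hdf5 extension makes Keras save the model in the HDF5 format
model.save('ModelWebApp.hdf5')
```

A model saved this way can later be restored with load_model, exactly as the web application does.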

While explaining the technique, I will take a top-down approach. You have already seen the result in the above demonstration. You will download the entire source from the repository and run it on your local machine. Then I will dissect the code and explain the important portions so that you understand the entire process. I took this teaching approach because the project involves many files; focusing on the important code makes learning easier.

I used Windows for this tutorial. The instructions for running the tutorial on macOS or Linux remain the same, except for OS-dependent features.

Project Setup

Create a new folder called mlWeb on your PC and open your command line to create the project and set up a virtual environment. I used Anaconda (for the Anaconda setup, refer to the Anaconda site) to create a virtual environment by running these commands:

cd <your project folder>
conda create -n mlWeb python=3.7.6


Activate the virtual environment with this command:

conda activate mlWeb

Now install the following packages:

pip install flask
pip install tensorflow==2.0
pip install gunicorn
pip install pillow

Gunicorn is a production-grade web server that will be used for deploying the web app on Heroku. The demo application that you saw earlier is running on Heroku. I have specifically installed TensorFlow 2.0 so that you can run the application even if your PC does not have a GPU. After all, a GPU is mainly needed for quicker training of your ML models, not necessarily for model inference. As we are not going to do any model training here, a GPU is not required to run this tutorial.

Trying It Out

Download the entire project source from our GitHub and copy it to the mlWeb folder that you created in the previous step. At this point, your folder structure will be like this:
[Image: project folder structure]
Now run the application using the following command:

python webapp.py

The command starts the Flask web server, and you will see the following message on your console:
[Image: console output showing the server URL http://127.0.0.1:5000/]
Open the browser and type the above URL, and the application will start running on your desktop just the way you saw it running on the production server in the demo above. Play around with the application to understand its functionality, and have a look at the HTML code to check out the two web pages (index.html and success.html) that the application uses. After you understand the flow of the web application, it will be easier to understand how the ML model is integrated into it.

I will now explain the code behind this application. The most important file is webapp.py, which is shown below in its entirety. Just have a look at it; I will explain the entire code after the listing.

import os
import uuid
import flask
import urllib
from PIL import Image
from tensorflow.keras.models import load_model
from flask import Flask, render_template, request, send_file
from tensorflow.keras.preprocessing.image import load_img, img_to_array

app = Flask(__name__)
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
model = load_model(os.path.join(BASE_DIR, 'ModelWebApp.hdf5'))

ALLOWED_EXT = set(['jpg', 'jpeg', 'png', 'jfif'])

def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1] in ALLOWED_EXT

classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']

def predict(filename, model):
    img = load_img(filename, target_size=(32, 32))
    img = img_to_array(img)
    img = img.reshape(1, 32, 32, 3)

    img = img.astype('float32')
    img = img / 255.0
    result = model.predict(img)

    dict_result = {}
    for i in range(10):
        dict_result[result[0][i]] = classes[i]

    res = result[0]
    res.sort()
    res = res[::-1]
    prob = res[:3]

    prob_result = []
    class_result = []
    for i in range(3):
        prob_result.append((prob[i] * 100).round(2))
        class_result.append(dict_result[prob[i]])

    return class_result, prob_result

@app.route('/')
def home():
    return render_template("index.html")

@app.route('/success', methods=['GET', 'POST'])
def success():
    error = ''
    target_img = os.path.join(os.getcwd(), 'static/images')
    if request.method == 'POST':
        if request.form:
            link = request.form.get('link')
            try:
                resource = urllib.request.urlopen(link)
                unique_filename = str(uuid.uuid4())
                filename = unique_filename + ".jpg"
                img_path = os.path.join(target_img, filename)
                output = open(img_path, "wb")
                output.write(resource.read())
                output.close()
                img = filename

                class_result, prob_result = predict(img_path, model)

                predictions = {
                    "class1": class_result[0],
                    "class2": class_result[1],
                    "class3": class_result[2],
                    "prob1": prob_result[0],
                    "prob2": prob_result[1],
                    "prob3": prob_result[2],
                }
            except Exception as e:
                print(str(e))
                error = 'This image from this site is not accessible or inappropriate input'

            if len(error) == 0:
                return render_template('success.html', img=img, predictions=predictions)
            else:
                return render_template('index.html', error=error)

        elif request.files:
            file = request.files['file']
            if file and allowed_file(file.filename):
                file.save(os.path.join(target_img, file.filename))
                img_path = os.path.join(target_img, file.filename)
                img = file.filename

                class_result, prob_result = predict(img_path, model)

                predictions = {
                    "class1": class_result[0],
                    "class2": class_result[1],
                    "class3": class_result[2],
                    "prob1": prob_result[0],
                    "prob2": prob_result[1],
                    "prob3": prob_result[2],
                }
            else:
                error = "Please upload images of jpg, jpeg and png extension only"

            if len(error) == 0:
                return render_template('success.html', img=img, predictions=predictions)
            else:
                return render_template('index.html', error=error)

    else:
        return render_template('index.html')

if __name__ == "__main__":
    app.run(debug=True)

Code Explanation

We import the Flask libraries in our Python code using the following import statement:

from flask import Flask, render_template, request, send_file

We create an instance of the Flask class and call it app using the following statement:

app = Flask(__name__)

When the Flask app runs, it creates a web server listening on a predefined port. You have already tested this by opening the URL http://127.0.0.1:5000 in your browser. The requests coming to this URL must be directed to your application’s home page, which in our case is index.html. This routing is done by the following code segment in the webapp.py file listed above.

@ app.route('/')
def home():
    return render_template("index.html")

Flask maps various HTTP requests to Python functions. Here the URL path (‘/’) is mapped to the function home. So when we run webapp.py, Flask starts a local server on port 5000. When we type localhost:5000/ or 127.0.0.1:5000/ in the browser, an HTTP request is sent to the server, which routes the request to the predefined function, in our case home. The home function, in turn, renders index.html in the browser. For the Flask server to access the HTML pages, you need to save all your HTML pages in the “templates” folder and the CSS files in the “static” folder. So your folder structure should look like this:
[Image: folder structure with the templates and static folders]
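
To see this mapping in isolation, here is a minimal, self-contained Flask app with a hypothetical /hello route (unrelated to the tutorial app):

```python
from flask import Flask

app = Flask(__name__)

# Requests to the '/hello' path are routed to the hello() function,
# whose return value becomes the HTTP response body
@app.route('/hello')
def hello():
    return 'Hello from Flask!'
```

Saved as, say, hello.py and started with app.run(), visiting http://127.0.0.1:5000/hello in a browser returns the greeting.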
To start running the created app instance, use the following code:

if __name__ == "__main__":
    app.run(debug = True)

Setting debug to True makes the runtime errors visible in the browser window. It also restarts the server whenever it notices a change in code.

Now comes the most important part of this tutorial: loading the pre-trained ML model.

Loading the ML Model file

The pre-trained model for our application is stored in the mlWeb folder under the name ModelWebApp.hdf5. Just in case you are curious about how to create this .hdf5 file, here is a link to my Colab project.

To load this model, you use the following code:

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
model = load_model(os.path.join(BASE_DIR, 'ModelWebApp.hdf5'))

BASE_DIR is the path of the directory in which the Python application resides. Once the model is loaded, you use it for inference.
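
To see why the absolute path matters: if the server is started from a different working directory, a relative 'ModelWebApp.hdf5' would not be found, whereas the BASE_DIR-anchored path always points next to the script. A small sketch (the script location below is hypothetical; webapp.py uses __file__ instead):

```python
import os

# Hypothetical script location; in webapp.py this is
# os.path.dirname(os.path.abspath(__file__))
script_path = os.path.abspath(os.path.join('home', 'user', 'mlWeb', 'webapp.py'))
BASE_DIR = os.path.dirname(script_path)

# The model path no longer depends on the current working directory
model_path = os.path.join(BASE_DIR, 'ModelWebApp.hdf5')
```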

Creating Inference Function

For inference, we define a function called predict that takes two parameters: the filename of the image to be classified and the model to be used for classification. We first define the ten output classes for classification:

classes = ['airplane', 'automobile', 'bird', 'cat', 
          'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

The full definition of the predict function is given below:

def predict(filename, model):
    img = load_img(filename, target_size=(32, 32))
    img = img_to_array(img)
    img = img.reshape(1, 32, 32, 3)

    img = img.astype('float32')
    img = img / 255.0
    result = model.predict(img)

    dict_result = {}
    for i in range(10):
        dict_result[result[0][i]] = classes[i]

    res = result[0]
    res.sort()
    res = res[::-1]
    prob = res[:3]

    prob_result = []
    class_result = []
    for i in range(3):
        prob_result.append((prob[i] * 100).round(2))
        class_result.append(dict_result[prob[i]])

    return class_result, prob_result

The model was trained on the CIFAR-10 dataset, where each image is 32x32 pixels. Thus, we need to preprocess each test image and resize it to a tensor of 32x32x3 (3 for the RGB channels). We also scale the pixel values to the range 0 through 1 by dividing them by 255. The preprocessed image is then passed to the model's predict function. Since the last layer of the model is a softmax layer with 10 output neurons, the predict function returns 10 output probabilities, one for each class. We store these probabilities and their classes and return the top three predictions and their respective classes to the caller through the prob_result and class_result variables, respectively.

for i in range(3):
    prob_result.append((prob[i] * 100).round(2))
    class_result.append(dict_result[prob[i]])
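
To make the top-three extraction concrete, here is the same logic run on a made-up softmax output (the probabilities below are invented for illustration):

```python
import numpy as np

classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']

# Made-up probabilities standing in for result = model.predict(img)
result = np.array([[0.01, 0.02, 0.05, 0.60, 0.03,
                    0.20, 0.04, 0.02, 0.02, 0.01]])

# Map each probability to its class name
dict_result = {}
for i in range(10):
    dict_result[result[0][i]] = classes[i]

res = result[0].copy()   # copy so the fake model output stays untouched
res.sort()
res = res[::-1]          # descending order
prob = res[:3]

prob_result = []
class_result = []
for i in range(3):
    prob_result.append(round(float(prob[i]) * 100, 2))
    class_result.append(dict_result[prob[i]])

print(class_result)  # ['cat', 'dog', 'bird']
print(prob_result)   # [60.0, 20.0, 5.0]
```

Note one caveat of this dictionary-based approach: if two classes tie on the exact same probability, the later one overwrites the earlier in dict_result; np.argsort on the original array would avoid that.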

Now, we need to call this predict function somewhere in our web code. Where do we do this? The inference function is called when you successfully upload the image to the server. Thus, in index.html, in the form action, we call this function. The form action code from index.html is shown below:

<form action="/success" method="post" enctype="multipart/form-data">

We use the post method to send the image URL to the server. The action calls the “/success” URL, which means the server receives a request at http://localhost:5000/success. Now, we need to map this URL to the desired Python function, which will be invoked when you open this URL in your browser window. For this, we will create a success route just the way we created the home route.

Creating Success Route

The success route displays a web page with the results of the model's predictions. Before running the prediction, we need to verify that the uploaded image has an allowed format. We do this by defining the following function:

ALLOWED_EXT = set(['jpg', 'jpeg', 'png', 'jfif'])

def allowed_file(filename):
    return '.' in filename and \
           filename.rsplit('.', 1)[1] in ALLOWED_EXT
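
A quick sanity check of this helper on a few hypothetical filenames:

```python
ALLOWED_EXT = set(['jpg', 'jpeg', 'png', 'jfif'])

def allowed_file(filename):
    # True only if the name contains a dot and its last extension is allowed
    return '.' in filename and \
           filename.rsplit('.', 1)[1] in ALLOWED_EXT

print(allowed_file('cat.png'))      # True
print(allowed_file('cat.gif'))      # False
print(allowed_file('noextension'))  # False
```

Note that the check is case-sensitive, so 'photo.PNG' would be rejected; lower-casing the extension before the lookup would fix that.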

Now let’s create our second route:

@app.route('/success', methods=['GET', 'POST'])
def success():
    error = ''
    target_img = os.path.join(os.getcwd(), 'static/images')

The above success function runs whenever a request for the ‘/success’ route is made. It first sets the path for storing the image: static/images in your project directory. Using the following code, we read the image data posted in the URL and create an img variable for later use.

    if request.form:
        link = request.form.get('link')
        try:
            resource = urllib.request.urlopen(link)
            unique_filename = str(uuid.uuid4())
            filename = unique_filename + ".jpg"
            img_path = os.path.join(target_img, filename)
            output = open(img_path, "wb")
            output.write(resource.read())
            output.close()
            img = filename

In case of an exception, we set the error message.

        except Exception as e:
            print(str(e))
            error = 'This image from this site is not accessible or inappropriate input'

If there is no error, we render success.html or else index.html where we print the error message.

if len(error) == 0:
    return render_template('success.html', img=img, predictions=predictions)
else:
    return render_template('index.html', error=error)

If the code reads the image data without errors, we call our predict function:

class_result, prob_result = predict(img_path, model)

The top three predictions are copied to the predictions dictionary.

predictions = {
    "class1": class_result[0],
    "class2": class_result[1],
    "class3": class_result[2],
    "prob1": prob_result[0],
    "prob2": prob_result[1],
    "prob3": prob_result[2],
}

In the above code, the server received the image through an image URL. Our user interface also allows the user to upload an image from their local drive to the server. For such uploads, we use the following code:

elif request.files:
    file = request.files['file']
    if file and allowed_file(file.filename):
        file.save(os.path.join(target_img, file.filename))
        img_path = os.path.join(target_img, file.filename)
        img = file.filename

After we store the image filename in the img variable, the rest of the process is the same as in the URL-based method above.
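
The naming-and-saving step of the URL flow can be exercised in isolation. Here dummy bytes stand in for the downloaded resource.read() data, so no network access is involved:

```python
import os
import uuid

target_img = os.path.join(os.getcwd(), 'static/images')
os.makedirs(target_img, exist_ok=True)  # webapp.py assumes this folder exists

# uuid4() gives a random unique name, so two users' uploads never clash
filename = str(uuid.uuid4()) + ".jpg"
img_path = os.path.join(target_img, filename)

# Dummy bytes standing in for resource.read()
with open(img_path, "wb") as output:
    output.write(b"not really an image")
```

Using a with block here (rather than the open/close pair in the listing) guarantees the file is closed even if a write fails.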

On successful prediction, we display the results in an HTML table defined in the success.html file.

<table class="table-bordered text-light table-custom">
    <tr>
        <th>Rank</th>
        <th>Class</th>
        <th>Probability</th>
    </tr>
    <tr>
        <td>1st</td>
        <td>{{ predictions.class1 }}</td>
        <td>{{ predictions.prob1 }} %</td>
    </tr>
    <tr>
        <td>2nd</td>
        <td>{{ predictions.class2 }}</td>
        <td>{{ predictions.prob2 }} %</td>
    </tr>
    <tr>
        <td>3rd</td>
        <td>{{ predictions.class3 }}</td>
        <td>{{ predictions.prob3 }} %</td>
    </tr>
</table>

Here we display the information received from the Flask app by using the double curly braces {{ }} of Flask’s Jinja templating syntax.
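
You can try this substitution without any HTML file by using Flask's render_template_string with a made-up predictions dictionary (the /demo route below is hypothetical):

```python
from flask import Flask, render_template_string

app = Flask(__name__)

# A one-line stand-in for the success.html table
TEMPLATE = "{{ predictions.class1 }}: {{ predictions.prob1 }} %"

@app.route('/demo')
def demo():
    # Invented values; in webapp.py these come from predict()
    predictions = {"class1": "cat", "prob1": 93.5}
    return render_template_string(TEMPLATE, predictions=predictions)
```

Jinja’s dotted lookup (predictions.class1) falls back to dictionary access, which is why passing a plain dict works.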

Deploying Another Model

What you have learned so far is how to load a pre-trained image classifier on a web server so that it classifies images received from a web browser. The same application runs equally well from a browser on a user's mobile phone. The gist of this entire application can be summarized as follows:

  • Develop and train an ML model to meet your purpose.

  • Save the model to the .hdf5 file format.

  • Write a web application with the desired interface.

  • Write a Python application to define routes to your various web pages.

  • Run your Python application in the Flask environment. This starts the web server and provides you a URL to the homepage of your application.

  • Open the home page in your browser and run your application.

  • Look at the inference made by your model running on the web.


Well, so far you have succeeded in providing a web interface to your ML model. But how do you make this web interface publicly available? I will now show you how to deploy the local web application that you have created to a production platform called Heroku.

Deployment on Heroku

Heroku is a cloud Platform as a Service (PaaS) supporting several programming languages. One of the first cloud platforms, Heroku launched in June 2007 and is one of the largest PaaS offerings today. You can deploy your web application on Heroku through the command prompt or from your GitHub repository. I will describe the deployment process through the command line. Follow the steps listed below.

Heroku account

Create a new Heroku account from here:

Git Bash/Git Command line

Download the Git command line from this link:
Add it to the PATH in your environment variables.

Heroku command line

Install the Heroku CLI from here.

Creating Deployment Files

After the above setup is done, you need to create two files: requirements.txt and a Procfile.

requirements.txt file

To install the specific packages for our web application on the Heroku platform, you need to create a requirements.txt file. In your virtual environment, type the following command in your terminal:

pip freeze

This shows the packages currently installed in the mlWeb virtual environment. To save this list of packages to the requirements.txt file, type this command in your terminal:

pip freeze > requirements.txt
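
A trimmed requirements.txt for this project might look like the following. The version numbers are illustrative only; your pip freeze output will list every package in the environment with the exact versions you installed.

```
Flask==1.1.2
gunicorn==20.0.4
Pillow==7.2.0
tensorflow==2.0.0
```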

Procfile

Heroku apps include a Procfile that specifies the commands executed by the app on startup. It tells Heroku how the web application should start and run. The Procfile is a simple text file named Procfile, with no file extension, and it must be located in the app folder. Type the following contents into this file:

web: gunicorn webapp:app --timeout 30

This tells Heroku to start the Gunicorn production server.

Deployment process

Now open a separate command line and initialize a Git repository in your directory with this command:

git init

Now log in to your Heroku account through the command line using this command:

heroku login

Now create a Heroku app with the following command:

heroku create imagemodel --buildpack https://github.com/heroku/heroku-buildpack-python.git

We add the Python buildpack to let the server know that our application is Python-based.

Note: imagemodel is the name of your application on the Heroku server. You may specify a name of your choice.
Your screen should look like this:

[Image: heroku create output]

Now run the following commands from the command prompt to complete the deployment process:

git add .
git commit -m "heroku app"
git push heroku master

You should see a message Verifying deploy... done in your terminal.
[Image: deployment log]

At the command prompt, type the following command:

heroku open

This starts the default browser and opens the deployed URL, https://imagemodel.herokuapp.com/.

Once the deployment is over, our web application is live in the cloud. Anyone typing the following URL in their browser will be able to run your application.

https://imagemodel.herokuapp.com/

The screenshot is shown below:

[Image: the deployed application running in a browser]

Conclusions

In this tutorial, you learned how to provide a web interface to a pre-trained ML model. You learned to deploy it on a local web server and also on Heroku, a production platform. Integrating an ML model into a web application is straightforward: you only need to load the .hdf5 file to restore the trained model. Once the model is loaded, you call its predict method for inference. The results are then copied to Python variables and passed to the HTML page for rendering. With this knowledge, you will be able to deploy your next ML model on a web server of your choice, with the assumption that you know how to create a web application that runs on your specific web server.

Source: Download project source from our repository.
