In this example, we try to predict a continuous value (the dependent variable) from a set of independent variables. Specifically, we try to predict Boston house prices given 13 features, including crime rate, property tax rate, etc.
from keras.datasets import boston_housing
(tr_data, tr_labels), (ts_data, ts_labels) = boston_housing.load_data()
Preparing the Data
The training and test data consist of arrays of decimal numbers. The ranges and distributions of these numbers vary widely, so to make learning easier we normalize them: for each feature, we subtract the mean (pulling it to 0) and divide by the standard deviation, so each value becomes the number of standard deviations from that mean.
mean = tr_data.mean(axis=0)
std = tr_data.std(axis=0)
tr_data -= mean
tr_data /= std
ts_data -= mean
ts_data /= std
Notice that the test data uses the mean and standard deviation from training (not from the test set, because that would be cheating).
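As a sanity check, here is a minimal sketch of that standardization on a tiny made-up array (the numbers are illustrative, not from the Boston data). After the transform, every feature has mean ~0 and standard deviation ~1:

```python
import numpy as np

# Tiny synthetic stand-in for tr_data: 4 samples, 2 features on
# very different scales.
tr_demo = np.array([[1.0, 100.0],
                    [2.0, 200.0],
                    [3.0, 300.0],
                    [4.0, 400.0]])

mean = tr_demo.mean(axis=0)  # per-feature mean
std = tr_demo.std(axis=0)    # per-feature standard deviation
tr_norm = (tr_demo - mean) / std

print(tr_norm.mean(axis=0))  # ~[0. 0.]
print(tr_norm.std(axis=0))   # ~[1. 1.]
```

The same `mean` and `std` would then be reused to transform the test set.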
Building the Model
Now we build our model or deep learning architecture for regression.
from keras import models
from keras import layers
model = models.Sequential()
model.add(layers.Dense(128, activation='relu', input_shape=(tr_data.shape[1],)))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])
4 things to note in the above model-building code:
- The last layer has 1 node because we’re trying to predict a single number, the housing price.
- The last layer uses no activation function. Applying an activation function would squeeze that number into some range (e.g. 0..1), and that’s not what we want here. We want the raw number.
- In the model compilation, our loss function is “mse”, which stands for mean-squared error: the mean of the squared differences between the predictions and the targets. It is a common loss function for regression tasks.
- In the model compilation, the metric is “mae”, which stands for mean-absolute error: the mean of the absolute differences between the predictions and the targets.
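To make the two definitions concrete, here is a tiny worked example with made-up predictions and targets (house prices in $1000s; the numbers are illustrative only):

```python
import numpy as np

preds = np.array([22.0, 30.0, 15.0])    # hypothetical model predictions
targets = np.array([20.0, 33.0, 15.0])  # hypothetical true prices

mse = np.mean((preds - targets) ** 2)   # (4 + 9 + 0) / 3
mae = np.mean(np.abs(preds - targets))  # (2 + 3 + 0) / 3

print(mse)  # 4.333...
print(mae)  # 1.666...
```

Note that squaring makes “mse” punish the 3-unit miss much harder than the 2-unit miss, while “mae” stays in the same units as the prices, which makes it easier to interpret.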
Train and Test Model
Finally, train/fit the model and evaluate over test data and labels.
model.fit(tr_data, tr_labels, epochs=100, batch_size=1)
ts_mse, ts_mae = model.evaluate(ts_data, ts_labels)
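If you want to try the same fit/evaluate pattern without downloading the dataset, here is a minimal sketch on synthetic data. The random linear target, sample count, and layer sizes are assumptions for illustration, not the Boston data:

```python
import numpy as np
from keras import models
from keras import layers

# Synthetic regression problem: 13 features (like Boston housing),
# target is a noisy linear combination of them.
rng = np.random.default_rng(0)
x = rng.normal(size=(200, 13)).astype("float32")
w = rng.normal(size=13).astype("float32")
y = (x @ w + rng.normal(scale=0.1, size=200)).astype("float32")

model = models.Sequential()
model.add(layers.Dense(64, activation='relu', input_shape=(13,)))
model.add(layers.Dense(1))
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])

model.fit(x, y, epochs=5, batch_size=16, verbose=0)
loss, mae = model.evaluate(x, y, verbose=0)
preds = model.predict(x[:3], verbose=0)  # shape (3, 1): one price per sample
```

Once trained, `model.predict` is what you would call on new, normalized feature vectors to get price estimates.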