Take away: You should minimize the time between your experiments (that's why you should start with smaller models). The more experiments you run, the more you figure out what doesn't work.
Parent (intermediate) annotation
…MSE (tf.keras.losses.MSE(), tf.metrics.mean_squared_error()): when larger errors are more significant than smaller errors. Huber (tf.keras.losses.Huber()): a combination of MSE and MAE, less sensitive to outliers than MSE. Take away: You should minimize the time between your experiments (that's why you should start with smaller models). The more experiments you run, the more you figure out what doesn't work.
Original toplevel document
TfC 01 regression
…test_labels, c="green", label="Testing data") plt.scatter(test_data, predictions, c="red", label="Predictions") plt.legend();
Common regression evaluation metrics
For regression problems:
- MAE (tf.keras.losses.MAE(), tf.metrics.mean_absolute_error()): a great starter metric for any regression problem.
- MSE (tf.keras.losses.MSE(), tf.metrics.mean_squared_error()): when larger errors are more significant than smaller errors.
- Huber (tf.keras.losses.Huber()): a combination of MSE and MAE, less sensitive to outliers than MSE.
Take away: You should minimize the time between your experiments (that's why you should start with smaller models). The more experiments you run, the more you figure out what doesn't work.
Tracking your experiments: One really good habit is to track the results of your experiments. There are tools to help us! Resource: Try TensorBoard, a component of the TensorFlow library …
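As a concrete illustration of the three losses above, here is a minimal sketch (the y_true / y_pred values are made up for this example) computing MAE, MSE and Huber with tf.keras:

import tensorflow as tf

# Hypothetical ground-truth values and model predictions, purely for illustration.
y_true = tf.constant([3.0, 5.0, 2.5, 7.0])
y_pred = tf.constant([2.9, 5.4, 2.0, 9.0])

# MAE: a great starter metric for any regression problem.
mae = tf.keras.losses.MAE(y_true, y_pred)

# MSE: squares the errors, so larger errors weigh more than smaller ones.
mse = tf.keras.losses.MSE(y_true, y_pred)

# Huber: combines MSE and MAE behaviour, less sensitive to outliers than MSE.
huber = tf.keras.losses.Huber()(y_true, y_pred)

print(f"MAE: {mae.numpy():.3f}  MSE: {mse.numpy():.3f}  Huber: {huber.numpy():.3f}")

The same losses can be passed straight to model.compile() (e.g. loss=tf.keras.losses.Huber()), and adding a tf.keras.callbacks.TensorBoard callback to model.fit() is one way to log each experiment so runs can be compared in TensorBoard.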