Edited, memorised or added to reading queue on 08-Jul-2025 (Tue)
#tensorflow #tensorflow-certificate

Finding the best learning rate

# model
import tensorflow as tf
import matplotlib.pyplot as plt

tf.random.set_seed(42)
model_7 = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model_7.compile(optimizer=tf.keras.optimizers.Adam(),
                loss='binary_crossentropy',
                metrics=['accuracy'])

# callback: grow the learning rate by 10x every 20 epochs
# (sweeps from 1e-4 to 1e+1 over 100 epochs)
lr_scheduler = tf.keras.callbacks.LearningRateScheduler(lambda epoch: 1e-4 * 10**(epoch/20))
history_7 = model_7.fit(X_train, y_train, epochs=100, callbacks=[lr_scheduler])

# plot learning rate vs the loss
plt.figure(figsize=(10, 7))
plt.semilogx(history_7.history['lr'], history_7.history['loss'])
plt.xlabel('Learning Rate')
plt.ylabel('Loss')
plt.title('Learning Rate vs Loss');
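To turn the plot into a concrete choice, a common heuristic (an addition here, not from the source) is to pick a learning rate roughly 10x smaller than the one where the loss bottoms out, since the minimum itself sits right at the edge of divergence. A minimal sketch, using a simulated loss curve in place of `history_7.history`:

```python
import numpy as np

# same schedule as the callback above: lr sweeps 1e-4 -> 1e+1 over 100 epochs
lrs = 1e-4 * 10**(np.arange(100) / 20)

# hypothetical loss curve for illustration: decreases, bottoms out near lr=0.1,
# then blows up as the learning rate gets too large
losses = (np.log10(lrs) + 1.0)**2 + 0.1

# heuristic: suggested lr ~ 10x lower than the lr at the loss minimum
best_idx = int(np.argmin(losses))
ideal_lr = lrs[best_idx] / 10
print(f"loss minimum at lr={lrs[best_idx]:.2e}, suggested lr={ideal_lr:.2e}")
```

With a real run you would pass `history_7.history['lr']` and `history_7.history['loss']` instead of the simulated arrays; the suggested value can then be fed to `tf.keras.optimizers.Adam(learning_rate=ideal_lr)` for a fresh training run.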

TfC_02_classification-PART_1
Flashcard 7714071252236

Tags
#tensorflow #tensorflow-certificate
Question

Finding the best learning rate

# model
tf.random.set_seed(42)
model_7 = tf.keras.Sequential([
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(4, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid')
])

model_7.compile(optimizer=tf.keras.optimizers.Adam(),
                loss='binary_crossentropy',
                metrics=['accuracy'])

# callback
lr_scheduler = tf.keras.callbacks.[...](lambda epoch: 1e-4 * 10**(epoch/20))
history_7 = model_7.fit(X_train, y_train, epochs=100, callbacks=[lr_scheduler])

# plot learning rate vs the loss
plt.figure(figsize=(10, 7))
plt.semilogx(history_7.history['lr'], history_7.history['loss'])
plt.xlabel('Learning Rate')
plt.ylabel('Loss')
plt.title('Learning Rate vs Loss');
Answer
LearningRateScheduler

