First, we begin by importing the relevant packages and libraries.
import os
import zipfile
import numpy as np
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
!pip install nibabel
import nibabel as nib
from scipy import ndimage
import random
from ads.common.model_export_util import prepare_generic_model
from ads.common.model import ADSModel
from ads.catalog.model import ModelSummaryList, ModelCatalog
from ads.catalog.project import ProjectSummaryList, ProjectCatalog
from ads.catalog.summary import SummaryList
from ads.common.model_artifact import ModelArtifact
Requirement already satisfied: nibabel in ./conda/generalmachinelearningforgpusvy/lib/python3.6/site-packages (3.2.1)
Requirement already satisfied: packaging>=14.3 in ./conda/generalmachinelearningforgpusvy/lib/python3.6/site-packages (from nibabel) (20.7)
Requirement already satisfied: numpy>=1.14 in ./conda/generalmachinelearningforgpusvy/lib/python3.6/site-packages (from nibabel) (1.18.5)
Requirement already satisfied: pyparsing>=2.0.2 in ./conda/generalmachinelearningforgpusvy/lib/python3.6/site-packages (from packaging>=14.3->nibabel) (2.4.7)
The CT dataset zips (CT-0 and CT-23) were uploaded to the notebook session and unzipped.
# Make a directory to store the data.
os.makedirs("MosMedData", exist_ok=True)
# Unzip the data into the newly created directory; later cells read from
# "MosMedData/CT-0" and "MosMedData/CT-23".
with zipfile.ZipFile("CT-0.zip", "r") as z_fp:
    z_fp.extractall("./MosMedData/")
with zipfile.ZipFile("CT-23.zip", "r") as z_fp:
    z_fp.extractall("./MosMedData/")
Functions for processing the CT images are below. The images are read in, normalized, and resized.
Normalizing chest CT data reduces variation between scans. Here we use the min/max technique to normalize the data. The mathematical formulation is: volume_scaled = (volume - min(volume)) / (max(volume) - min(volume)).
One important caveat of min/max scaling is that it is highly influenced by the minimum and maximum values in the data, so if the data contains outliers the result will be biased. Min/max scaling rescales the dataset so that all feature values lie in the range [0, 1]; this is done feature-wise and independently.
For resizing, the volumes use spline-interpolated zoom (SIZ): we take each volume, compute its depth D, and zoom it along the z-axis by a factor of desired_depth / D using spline interpolation (the code below uses order 1, i.e., linear). Because spline interpolation is used to squeeze or expand the z-axis to the desired depth, it retains a substantial amount of information from the original 3D volume.
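As a quick illustration of how spline-interpolated zoom changes the depth axis, here is a minimal sketch with a hypothetical 8 × 8 × 40 volume (not the real CT data):

```python
import numpy as np
from scipy import ndimage

# Toy volume: 8 x 8 in-plane, 40 slices deep (hypothetical stand-in for a CT scan).
volume = np.random.rand(8, 8, 40)

# To reach a desired depth of 64, zoom the z-axis by 64 / 40 = 1.6.
depth_factor = 64 / volume.shape[-1]

# order=1 selects linear spline interpolation along each axis.
resized = ndimage.zoom(volume, (1, 1, depth_factor), order=1)
print(resized.shape)  # (8, 8, 64)
```

The output shape along each axis is the input size times the zoom factor (rounded), which is exactly how `resize_volume` below reaches 128 × 128 × 64.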
def read_nifti_file(filepath):
    """Read and load a volume."""
    # Read file
    scan = nib.load(filepath)
    # Get raw data
    scan = scan.get_fdata()
    return scan


def normalize(volume):
    """Clip to a Hounsfield-unit window, then scale the volume to [0, 1]."""
    min_hu = -1000  # renamed from `min` to avoid shadowing the built-in
    max_hu = 400    # renamed from `max` to avoid shadowing the built-in
    volume[volume < min_hu] = min_hu
    volume[volume > max_hu] = max_hu
    volume = (volume - min_hu) / (max_hu - min_hu)
    volume = volume.astype("float32")
    return volume
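As a quick sanity check of the clip-and-scale logic, here is a self-contained NumPy replica of `normalize` (using `np.clip` instead of boolean indexing; `normalize_hu` is an illustrative name, not part of the notebook):

```python
import numpy as np

def normalize_hu(volume, min_hu=-1000, max_hu=400):
    """Clip to the HU window [min_hu, max_hu], then rescale to [0, 1]."""
    volume = np.clip(volume, min_hu, max_hu)
    return ((volume - min_hu) / (max_hu - min_hu)).astype("float32")

sample = np.array([-2000.0, -1000.0, 0.0, 400.0, 3000.0])
# Values at or below -1000 map to 0, values at or above 400 map to 1.
print(normalize_hu(sample))
```

For example, 0 HU maps to (0 + 1000) / 1400 ≈ 0.714.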
def resize_volume(img):
    """Resize across the z-axis."""
    # Set the desired dimensions.
    desired_depth = 64
    desired_width = 128
    desired_height = 128
    # Get the current dimensions.
    current_depth = img.shape[-1]
    current_width = img.shape[0]
    current_height = img.shape[1]
    # Compute the scaling factors.
    depth = current_depth / desired_depth
    width = current_width / desired_width
    height = current_height / desired_height
    depth_factor = 1 / depth
    width_factor = 1 / width
    height_factor = 1 / height
    # Rotate
    img = ndimage.rotate(img, 90, reshape=False)
    # Resize across z-axis
    img = ndimage.zoom(img, (width_factor, height_factor, depth_factor), order=1)
    return img
def process_scan(path):
    """Read, normalize, and resize a volume."""
    # Read scan
    volume = read_nifti_file(path)
    # Normalize
    volume = normalize(volume)
    # Resize width, height and depth
    volume = resize_volume(volume)
    return volume
The paths of all images are stored in a list. Images in CT-0 are scans of normal lungs, and images in CT-23 are scans of abnormal lungs.
# Folder "CT-0" contains CT scans with normal lung tissue,
# no CT signs of viral pneumonia.
normal_scan_paths = [
    os.path.join(os.getcwd(), "MosMedData/CT-0", x)
    for x in os.listdir("MosMedData/CT-0")
]
# Folder "CT-23" contains CT scans with several ground-glass opacifications
# and involvement of the lung parenchyma.
abnormal_scan_paths = [
    os.path.join(os.getcwd(), "MosMedData/CT-23", x)
    for x in os.listdir("MosMedData/CT-23")
]
print("CT scans with normal lung tissue: " + str(len(normal_scan_paths)))
print("CT scans with abnormal lung tissue: " + str(len(abnormal_scan_paths)))
CT scans with normal lung tissue: 100
CT scans with abnormal lung tissue: 100
With the stored paths, images are loaded iteratively using the process_scan function. The loaded scans are converted and stored in 3D arrays, and labels are made for abnormal and normal scans. The data is then split into training and validation sets.
# Read and process the scans.
# Each scan is resized across height, width, and depth and rescaled.
abnormal_scans = np.array([process_scan(path) for path in abnormal_scan_paths])
normal_scans = np.array([process_scan(path) for path in normal_scan_paths])
# For the CT scans having presence of viral pneumonia
# assign 1, for the normal ones assign 0.
abnormal_labels = np.array([1 for _ in range(len(abnormal_scans))])
normal_labels = np.array([0 for _ in range(len(normal_scans))])
# Split data in the ratio 70-30 for training and validation.
x_train = np.concatenate((abnormal_scans[:70], normal_scans[:70]), axis=0)
y_train = np.concatenate((abnormal_labels[:70], normal_labels[:70]), axis=0)
x_val = np.concatenate((abnormal_scans[70:], normal_scans[70:]), axis=0)
y_val = np.concatenate((abnormal_labels[70:], normal_labels[70:]), axis=0)
print(
    "Number of samples in train and validation are %d and %d."
    % (x_train.shape[0], x_val.shape[0])
)
Number of samples in train and validation are 140 and 60.
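Because each class is sliced separately before concatenation, both splits stay class-balanced. A quick check with dummy label arrays (hypothetical stand-ins for the real ones):

```python
import numpy as np

abnormal_labels = np.ones(100, dtype=int)
normal_labels = np.zeros(100, dtype=int)

# Same slicing as above: first 70 of each class for training, rest for validation.
y_train = np.concatenate((abnormal_labels[:70], normal_labels[:70]))
y_val = np.concatenate((abnormal_labels[70:], normal_labels[70:]))

print(len(y_train), len(y_val))      # 140 60
print(y_train.mean(), y_val.mean())  # 0.5 0.5 -- both splits are half abnormal
```

Note that because the scan lists are not shuffled before slicing, the split is deterministic; shuffling the paths first would randomize which scans land in each split.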
Functions for preprocessing the training and validation data for the CNN are below. The training data is rotated and given an additional channel; the validation data only has the additional channel added.
@tf.function
def rotate(volume):
    """Rotate the volume by a few degrees."""

    def scipy_rotate(volume):
        # Define some rotation angles.
        angles = [-20, -10, -5, 5, 10, 20]
        # Pick an angle at random.
        angle = random.choice(angles)
        # Rotate the volume and clip back to [0, 1].
        volume = ndimage.rotate(volume, angle, reshape=False)
        volume[volume < 0] = 0
        volume[volume > 1] = 1
        return volume

    augmented_volume = tf.numpy_function(scipy_rotate, [volume], tf.float32)
    return augmented_volume


def train_preprocessing(volume, label):
    """Process training data by rotating and adding a channel."""
    # Rotate volume
    volume = rotate(volume)
    volume = tf.expand_dims(volume, axis=3)
    return volume, label


def validation_preprocessing(volume, label):
    """Process validation data by only adding a channel."""
    volume = tf.expand_dims(volume, axis=3)
    return volume, label
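The expand_dims step appends the channel axis that Conv3D layers expect; for this case `np.expand_dims` behaves the same way as `tf.expand_dims`, so a minimal NumPy sketch illustrates the shape change:

```python
import numpy as np

# A single resized CT volume: width x height x depth, no channel axis yet.
volume = np.zeros((128, 128, 64), dtype="float32")

# Appending axis 3 yields the (width, height, depth, channels) layout
# that the model's input layer expects.
with_channel = np.expand_dims(volume, axis=3)
print(with_channel.shape)  # (128, 128, 64, 1)
```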
TensorFlow pipelines are built below. This makes it easier to train the CNN without running out of resources (e.g., memory).
# Create tensorflow input pipelines
train_input_pipeline = tf.data.Dataset.from_tensor_slices((x_train, y_train))
validation_input_pipeline = tf.data.Dataset.from_tensor_slices((x_val, y_val))
batch_size = 2
# transform and rotate the data using the map function
train_dataset = train_input_pipeline.shuffle(len(x_train)).map(train_preprocessing).batch(batch_size).prefetch(2)
# transform the data using the map function
validation_dataset = validation_input_pipeline.shuffle(len(x_val)).map(validation_preprocessing).batch(batch_size).prefetch(2)
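With 140 training samples and a batch size of 2, the pipeline yields 70 batches per epoch, which is where the 70/70 progress counter in the training log comes from. The arithmetic (using the sample counts printed above):

```python
import math

num_train, num_val, batch_size = 140, 60, 2

# Keras reports ceil(samples / batch_size) steps per epoch.
steps_per_epoch = math.ceil(num_train / batch_size)
validation_steps = math.ceil(num_val / batch_size)
print(steps_per_epoch, validation_steps)  # 70 30
```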
# Showing shape of inputs into the CNN
data = train_dataset.take(10)
type(train_dataset)
images, label = list(data)[0]
print(type(images))
print(type(label))
images_np = images.numpy()
label_np = label.numpy()
print(images_np.shape)
print(label_np.shape)
label_np_1 = label_np[0]
images_np_1 = images_np[0]
print(images_np_1.shape)
<class 'tensorflow.python.framework.ops.EagerTensor'>
<class 'tensorflow.python.framework.ops.EagerTensor'>
(2, 128, 128, 64, 1)
(2,)
(128, 128, 64, 1)
import matplotlib.pyplot as plt
data = train_dataset.take(1)
images, labels = list(data)[0]
images = images.numpy()
image = images[0]
print("Dimension of the CT scan is:", image.shape)
plt.imshow(np.squeeze(image[:, :, 30]), cmap="gray")
Dimension of the CT scan is: (128, 128, 64, 1)
<matplotlib.image.AxesImage at 0x7f7bcee1c978>
def plot_slices(num_rows, num_columns, width, height, data):
    """Plot a montage of CT slices."""
    data = np.rot90(np.array(data))
    data = np.transpose(data)
    data = np.reshape(data, (num_rows, num_columns, width, height))
    rows_data, columns_data = data.shape[0], data.shape[1]
    heights = [slc[0].shape[0] for slc in data]
    widths = [slc.shape[1] for slc in data[0]]
    fig_width = 12.0
    fig_height = fig_width * sum(heights) / sum(widths)
    f, axarr = plt.subplots(
        rows_data,
        columns_data,
        figsize=(fig_width, fig_height),
        gridspec_kw={"height_ratios": heights},
    )
    for i in range(rows_data):
        for j in range(columns_data):
            axarr[i, j].imshow(data[i][j], cmap="gray")
            axarr[i, j].axis("off")
    plt.subplots_adjust(wspace=0, hspace=0, left=0, right=1, bottom=0, top=1)
    plt.show()

# Visualize a montage of slices:
# 4 rows and 10 columns for 40 slices of the CT scan.
plot_slices(4, 10, 128, 128, image[:, :, :40])
def get_model(width=128, height=128, depth=64):
    """Build a 3D convolutional neural network model."""
    inputs = keras.Input((width, height, depth, 1))
    x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(inputs)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv3D(filters=64, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv3D(filters=128, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    x = layers.Conv3D(filters=256, kernel_size=3, activation="relu")(x)
    x = layers.MaxPool3D(pool_size=2)(x)
    x = layers.BatchNormalization()(x)
    x = layers.GlobalAveragePooling3D()(x)
    x = layers.Dense(units=512, activation="relu")(x)
    x = layers.Dropout(0.3)(x)
    outputs = layers.Dense(units=1, activation="sigmoid")(x)
    # Define the model.
    model = keras.Model(inputs, outputs, name="3dcnn")
    return model
# Build model.
model = get_model(width=128, height=128, depth=64)
model.summary()
Model: "3dcnn"
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
input_1 (InputLayer)         [(None, 128, 128, 64, 1)] 0
_________________________________________________________________
conv3d (Conv3D)              (None, 126, 126, 62, 64)  1792
_________________________________________________________________
max_pooling3d (MaxPooling3D) (None, 63, 63, 31, 64)    0
_________________________________________________________________
batch_normalization (BatchNo (None, 63, 63, 31, 64)    256
_________________________________________________________________
conv3d_1 (Conv3D)            (None, 61, 61, 29, 64)    110656
_________________________________________________________________
max_pooling3d_1 (MaxPooling3 (None, 30, 30, 14, 64)    0
_________________________________________________________________
batch_normalization_1 (Batch (None, 30, 30, 14, 64)    256
_________________________________________________________________
conv3d_2 (Conv3D)            (None, 28, 28, 12, 128)   221312
_________________________________________________________________
max_pooling3d_2 (MaxPooling3 (None, 14, 14, 6, 128)    0
_________________________________________________________________
batch_normalization_2 (Batch (None, 14, 14, 6, 128)    512
_________________________________________________________________
conv3d_3 (Conv3D)            (None, 12, 12, 4, 256)    884992
_________________________________________________________________
max_pooling3d_3 (MaxPooling3 (None, 6, 6, 2, 256)      0
_________________________________________________________________
batch_normalization_3 (Batch (None, 6, 6, 2, 256)      1024
_________________________________________________________________
global_average_pooling3d (Gl (None, 256)               0
_________________________________________________________________
dense (Dense)                (None, 512)               131584
_________________________________________________________________
dropout (Dropout)            (None, 512)               0
_________________________________________________________________
dense_1 (Dense)              (None, 1)                 513
=================================================================
Total params: 1,352,897
Trainable params: 1,351,873
Non-trainable params: 1,024
_________________________________________________________________
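The parameter counts in the summary can be verified by hand; for a Conv3D layer the count is kernel volume × input channels × filters, plus one bias per filter:

```python
# First Conv3D: 3x3x3 kernel, 1 input channel, 64 filters (+ 64 biases).
conv1_params = 3 * 3 * 3 * 1 * 64 + 64
print(conv1_params)  # 1792

# Second Conv3D: 3x3x3 kernel, 64 input channels, 64 filters (+ 64 biases).
conv2_params = 3 * 3 * 3 * 64 * 64 + 64
print(conv2_params)  # 110656

# Dense layer after global average pooling: 256 inputs, 512 units (+ 512 biases).
dense_params = 256 * 512 + 512
print(dense_params)  # 131584
```

All three match the `Param #` column above.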
# Compile model.
initial_learning_rate = 0.0001
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate, decay_steps=100000, decay_rate=0.96, staircase=True
)
model.compile(
    loss="binary_crossentropy",
    optimizer=keras.optimizers.Adam(learning_rate=lr_schedule),
    metrics=["acc"],
)
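With staircase=True, the schedule multiplies the learning rate by decay_rate once every decay_steps optimizer steps; the decayed value can be computed by hand:

```python
initial_learning_rate = 0.0001
decay_steps, decay_rate = 100_000, 0.96

def decayed_lr(step):
    """Staircase exponential decay: drop by 4% every decay_steps steps."""
    return initial_learning_rate * decay_rate ** (step // decay_steps)

print(decayed_lr(0))        # 0.0001
print(decayed_lr(100_000))  # 9.6e-05
```

Note that at 70 batches per epoch and at most 100 epochs, training here runs at most 7,000 steps and never reaches the first decay boundary, so the learning rate stays effectively constant.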
# Define callbacks.
checkpoint_cb = keras.callbacks.ModelCheckpoint(
    "3d_image_classification.h5", save_best_only=True, verbose=True
)
early_stopping_cb = keras.callbacks.EarlyStopping(monitor="val_acc", patience=15, verbose=True)
# Train the model, validating at the end of each epoch.
epochs = 100
model.fit(
    train_dataset,
    validation_data=validation_dataset,
    epochs=epochs,
    shuffle=True,
    verbose=2,
    callbacks=[checkpoint_cb, early_stopping_cb],
)
Epoch 1/100
WARNING:tensorflow:Callbacks method `on_train_batch_end` is slow compared to the batch time (batch time: 0.0292s vs `on_train_batch_end` time: 0.0802s). Check your callbacks.
Epoch 00001: val_loss improved from inf to 0.71942, saving model to 3d_image_classification.h5
70/70 - 28s - loss: 0.7138 - acc: 0.5286 - val_loss: 0.7194 - val_acc: 0.5000
Epoch 2/100
Epoch 00002: val_loss did not improve from 0.71942
70/70 - 28s - loss: 0.6587 - acc: 0.6000 - val_loss: 1.3255 - val_acc: 0.5000
Epoch 3/100
Epoch 00003: val_loss did not improve from 0.71942
70/70 - 28s - loss: 0.6800 - acc: 0.5714 - val_loss: 1.1438 - val_acc: 0.5000
Epoch 4/100
Epoch 00004: val_loss did not improve from 0.71942
70/70 - 28s - loss: 0.6322 - acc: 0.6143 - val_loss: 1.0400 - val_acc: 0.5000
Epoch 5/100
Epoch 00005: val_loss did not improve from 0.71942
70/70 - 28s - loss: 0.6047 - acc: 0.6429 - val_loss: 2.3341 - val_acc: 0.5000
Epoch 6/100
Epoch 00006: val_loss did not improve from 0.71942
70/70 - 28s - loss: 0.6224 - acc: 0.6500 - val_loss: 1.6982 - val_acc: 0.5000
Epoch 7/100
Epoch 00007: val_loss did not improve from 0.71942
70/70 - 28s - loss: 0.6185 - acc: 0.6857 - val_loss: 1.3129 - val_acc: 0.5000
Epoch 8/100
Epoch 00008: val_loss did not improve from 0.71942
70/70 - 28s - loss: 0.6080 - acc: 0.6571 - val_loss: 1.5393 - val_acc: 0.5000
Epoch 9/100
Epoch 00009: val_loss did not improve from 0.71942
70/70 - 28s - loss: 0.5853 - acc: 0.7071 - val_loss: 1.3470 - val_acc: 0.5000
Epoch 10/100
Epoch 00010: val_loss did not improve from 0.71942
70/70 - 28s - loss: 0.5809 - acc: 0.6786 - val_loss: 0.7470 - val_acc: 0.5833
Epoch 11/100
Epoch 00011: val_loss improved from 0.71942 to 0.66128, saving model to 3d_image_classification.h5
70/70 - 29s - loss: 0.5885 - acc: 0.7000 - val_loss: 0.6613 - val_acc: 0.6000
Epoch 12/100
Epoch 00012: val_loss did not improve from 0.66128
70/70 - 29s - loss: 0.5900 - acc: 0.6786 - val_loss: 0.8998 - val_acc: 0.6333
Epoch 13/100
Epoch 00013: val_loss did not improve from 0.66128
70/70 - 28s - loss: 0.6191 - acc: 0.6500 - val_loss: 0.7327 - val_acc: 0.4833
Epoch 14/100
Epoch 00014: val_loss improved from 0.66128 to 0.57524, saving model to 3d_image_classification.h5
70/70 - 28s - loss: 0.5686 - acc: 0.7286 - val_loss: 0.5752 - val_acc: 0.7000
Epoch 15/100
Epoch 00015: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.5628 - acc: 0.7071 - val_loss: 0.6703 - val_acc: 0.7000
Epoch 16/100
Epoch 00016: val_loss did not improve from 0.57524
70/70 - 29s - loss: 0.5668 - acc: 0.7143 - val_loss: 0.5840 - val_acc: 0.7000
Epoch 17/100
Epoch 00017: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.5311 - acc: 0.7357 - val_loss: 0.5768 - val_acc: 0.7667
Epoch 18/100
Epoch 00018: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.5373 - acc: 0.7143 - val_loss: 0.5896 - val_acc: 0.6667
Epoch 19/100
Epoch 00019: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.5092 - acc: 0.7643 - val_loss: 0.8822 - val_acc: 0.6833
Epoch 20/100
Epoch 00020: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.4594 - acc: 0.7857 - val_loss: 0.9411 - val_acc: 0.6500
Epoch 21/100
Epoch 00021: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.5418 - acc: 0.7357 - val_loss: 0.6013 - val_acc: 0.7333
Epoch 22/100
Epoch 00022: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.4604 - acc: 0.7786 - val_loss: 1.2515 - val_acc: 0.6000
Epoch 23/100
Epoch 00023: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.4746 - acc: 0.7714 - val_loss: 0.5893 - val_acc: 0.7500
Epoch 24/100
Epoch 00024: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.4794 - acc: 0.7643 - val_loss: 0.5873 - val_acc: 0.7500
Epoch 25/100
Epoch 00025: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.4618 - acc: 0.7857 - val_loss: 0.6909 - val_acc: 0.6500
Epoch 26/100
Epoch 00026: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.4277 - acc: 0.8000 - val_loss: 0.6746 - val_acc: 0.7667
Epoch 27/100
Epoch 00027: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.5003 - acc: 0.7357 - val_loss: 0.9802 - val_acc: 0.6000
Epoch 28/100
Epoch 00028: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.4472 - acc: 0.8071 - val_loss: 0.6473 - val_acc: 0.7333
Epoch 29/100
Epoch 00029: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.4041 - acc: 0.8357 - val_loss: 0.8238 - val_acc: 0.6167
Epoch 30/100
Epoch 00030: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.3709 - acc: 0.8571 - val_loss: 0.5922 - val_acc: 0.7333
Epoch 31/100
Epoch 00031: val_loss did not improve from 0.57524
70/70 - 28s - loss: 0.4466 - acc: 0.7786 - val_loss: 1.0057 - val_acc: 0.5667
Epoch 32/100
Epoch 00032: val_loss improved from 0.57524 to 0.56161, saving model to 3d_image_classification.h5
70/70 - 29s - loss: 0.3908 - acc: 0.8214 - val_loss: 0.5616 - val_acc: 0.7333
Epoch 00032: early stopping
<tensorflow.python.keras.callbacks.History at 0x7f7b680d0208>
Here the model accuracy and loss for the training and the validation sets are plotted. Since the validation set is class-balanced, accuracy provides an unbiased representation of the model's performance.
fig, ax = plt.subplots(1, 2, figsize=(20, 3))
ax = ax.ravel()
for i, metric in enumerate(["acc", "loss"]):
    ax[i].plot(model.history.history[metric])
    ax[i].plot(model.history.history["val_" + metric])
    ax[i].set_title("Model {}".format(metric))
    ax[i].set_xlabel("epochs")
    ax[i].set_ylabel(metric)
    ax[i].legend(["train", "val"])
# Load best weights.
model.load_weights("3d_image_classification.h5")
prediction = model.predict(np.expand_dims(x_val[0], axis=0))[0]
print("This model is %.2f percent confident that CT scan is abnormal" % (100 * prediction))
This model is 0.17 percent confident that CT scan is abnormal
# Load best weights.
model.load_weights("3d_image_classification.h5")
predictions = [0] * len(y_val)
num = 0
for index in range(len(x_val)):
    predicted_percent = model.predict(np.expand_dims(x_val[index], axis=0))[0]
    if predicted_percent >= 0.5:
        predictions[index] = 1
    else:
        predictions[index] = 0
    if predictions[index] == y_val[index]:
        num += 1
accuracy = num / len(y_val)
print("The model is %.2f percent accurate on validation data" % (100 * accuracy))
The model is 73.33 percent accurate on validation data
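The per-sample loop above can be replaced by a single vectorized pass. A sketch with hypothetical sigmoid outputs standing in for `model.predict(x_val)`:

```python
import numpy as np

# Hypothetical sigmoid outputs and ground-truth labels (stand-ins for
# model.predict(x_val) and y_val).
probs = np.array([0.1, 0.8, 0.4, 0.9, 0.3, 0.6])
y_true = np.array([0, 1, 1, 1, 0, 1])

# Threshold at 0.5, then compare against the labels in one step.
preds = (probs >= 0.5).astype(int)
accuracy = (preds == y_true).mean()
print(accuracy)  # 5 of 6 correct
```

Predicting on the whole validation array in one `model.predict` call is also faster and avoids the tf.function retracing warnings that appear when predict is called repeatedly on single samples.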
from ads.common.model_export_util import prepare_generic_model
# Create a directory to store your model artifact.
os.makedirs("model_artifact", exist_ok=True)
# Use ADS to create the artifact.
model_path = "/home/datascience/model_artifact"
model_artifact = prepare_generic_model(model_path, force_overwrite=True, data_science_env=True)
INFO:ADS:We give you the option to specify a different inference conda environment for model deployment purposes. By default it is assumed to be the same as the conda environment used to train the model. If you wish to specify a different environment for inference purposes, please assign the path of a published or data science conda environment to the optional parameter `inference_conda_env`.
conv_model = tf.keras.models.load_model("3d_image_classification.h5")
predictions = conv_model.predict(np.expand_dims(x_val[0], axis=0))[0]
print("This model is %.2f percent confident that CT scan is abnormal" % (100 * predictions))
WARNING:tensorflow:6 out of the last 66 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7f7bb50171e0> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
This model is 0.17 percent confident that CT scan is abnormal
# Changes were made in the score.py file; after reviewing them, reload the artifact.
model_artifact.reload()
a = model_artifact.model.predict(np.expand_dims(x_val[2], axis=0))
WARNING:tensorflow:7 out of the last 67 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7f7bb4ff6ae8> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/tutorials/customization/performance#python_or_tensor_args and https://www.tensorflow.org/api_docs/python/tf/function for more details.
a[0][0]
0.84771705
compartment_id = os.environ['NB_SESSION_COMPARTMENT_OCID']
project_id = os.environ["PROJECT_OCID"]
# Saving the model artifact to the model catalog:
mc_model = model_artifact.save(
    project_id=project_id,
    compartment_id=compartment_id,
    display_name="3D CNN on CT data",
    description="3D image classification from CT scans",
    training_script_path="CT-project.ipynb",
    ignore_pending_changes=True,
)
INFO:ADS:{
"git_branch": "None",
"git_commit": "None",
"repository_url": "None",
"script_dir": "/home/datascience/model_artifact",
"training_script": "/home/datascience/CT-project.ipynb"
}