
TensorFlow Deployment & Usage Guide

1. Prerequisites

System Requirements

  • OS: Linux (Ubuntu 20.04+), Windows 10/11, or macOS 12+
  • Python: 3.9–3.12 (64-bit)
  • Memory: Minimum 8GB RAM (16GB+ recommended for training)
  • Disk: 10GB+ free space (50GB+ for building from source)

For GPU Support (Optional)

  • NVIDIA GPU with CUDA Compute Capability 3.5+
  • CUDA: 12.3 and cuDNN: 8.9 (for TF 2.16+)
  • NVIDIA drivers: 525.60.13 or newer

For Building from Source

  • Bazel: the exact version pinned in the repo's .bazelversion file (6.5.0 for recent releases)
  • GCC: 9.3.1+ (Linux) or MSVC 2019 (Windows)
  • Git: 2.17+

2. Installation

Method A: pip Install (Recommended)

Standard Installation:

# Create virtual environment
python -m venv tf_env
source tf_env/bin/activate  # Linux/macOS
# or: tf_env\Scripts\activate  # Windows

# Install TensorFlow (CPU; GPU kernels are included on Linux)
pip install tensorflow

# On Linux, also pull the matching CUDA/cuDNN libraries from pip:
pip install tensorflow[and-cuda]

CPU-Only (Smaller Package):

pip install tensorflow-cpu

Nightly Builds (Latest Features):

pip install tf-nightly  # Includes GPU support
# or
pip install tf-nightly-cpu

Method B: Docker Deployment

# Download latest GPU image
docker pull tensorflow/tensorflow:latest-gpu

# Run container with GPU support
docker run --gpus all -it -p 8888:8888 tensorflow/tensorflow:latest-gpu-jupyter

# CPU-only
docker run -it -p 8888:8888 tensorflow/tensorflow:latest-jupyter

Method C: Build from Source

# Clone repository
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow
git checkout r2.15  # Switch to desired release branch

# Configure build
./configure

# Build pip package (takes 2-3 hours)
bazel build //tensorflow/tools/pip_package:build_pip_package

# Create package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg

# Install
pip install /tmp/tensorflow_pkg/tensorflow-*.whl

3. Configuration

GPU Memory Configuration

Add this near the top of your Python script, before any TensorFlow ops run:

import tensorflow as tf

# Prevent TF from allocating all GPU memory
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    try:
        for gpu in gpus:
            tf.config.experimental.set_memory_growth(gpu, True)
    except RuntimeError as e:
        print(e)

Environment Variables

# Suppress TF C++ logging (0 = all messages, 1 = filter INFO, 2 = filter INFO+WARNING, 3 = filter INFO+WARNING+ERROR)
export TF_CPP_MIN_LOG_LEVEL=2

# Enable XLA auto-clustering for performance
export TF_XLA_FLAGS=--tf_xla_auto_jit=2

# Configure thread pools
export OMP_NUM_THREADS=4
export TF_NUM_INTEROP_THREADS=4
export TF_NUM_INTRAOP_THREADS=4
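
The same thread-pool sizes can also be set from Python (they must be set before TensorFlow executes its first op), and XLA can be enabled per-function rather than globally. A minimal sketch; `scaled_sum` is an illustrative function, not part of any API:

```python
import tensorflow as tf

# Thread pools: configure before any TF op runs.
tf.config.threading.set_inter_op_parallelism_threads(4)
tf.config.threading.set_intra_op_parallelism_threads(4)

# Opt a single function into XLA JIT compilation.
@tf.function(jit_compile=True)
def scaled_sum(x):
    return tf.reduce_sum(x * 2.0)

print(scaled_sum(tf.ones([8])).numpy())  # 16.0
```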

Device Plugins (macOS Metal, Windows DirectML)

# macOS Metal (Apple Silicon/AMD)
pip install tensorflow-metal

# Windows DirectML
pip install tensorflow-directml-plugin

4. Build & Run

Verification

import tensorflow as tf
print(f"TensorFlow version: {tf.__version__}")
print(f"GPU available: {tf.config.list_physical_devices('GPU')}")

# Test computation
print(tf.reduce_sum(tf.random.normal([1000, 1000])))

Local Development Server (TensorFlow Serving)

# Install TensorFlow Serving via Docker
docker pull tensorflow/serving

# Serve a SavedModel
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/model,target=/models/my_model \
  -e MODEL_NAME=my_model \
  -t tensorflow/serving
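
The docker run above mounts /path/to/model into the container; TensorFlow Serving expects a SavedModel inside a numeric version subdirectory. A minimal export sketch (the model and /tmp path here are placeholders):

```python
import tensorflow as tf

# A trivial stand-in model so the sketch is self-contained.
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])

# Serving scans for numeric version subdirectories: .../my_model/1, /2, ...
tf.saved_model.save(model, "/tmp/my_model/1")
```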

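With the container running, predictions go through the REST API on port 8501. A minimal client sketch using only the standard library; the `instances` payload below is illustrative and must match your model's input signature:

```python
import json
import urllib.request

def predict_url(model, host="http://localhost:8501"):
    # TensorFlow Serving's REST predict endpoint
    return f"{host}/v1/models/{model}:predict"

def predict(instances, model="my_model"):
    payload = json.dumps({"instances": instances}).encode("utf-8")
    req = urllib.request.Request(
        predict_url(model), data=payload,
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["predictions"]

# Requires the serving container above to be running:
# predict([[1.0, 2.0, 3.0, 4.0]])
```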
Production Build Optimization

When building from source for production:

# Optimized for your CPU architecture
bazel build --config=opt --config=mkl //tensorflow/tools/pip_package:build_pip_package

# With CUDA support
bazel build --config=opt --config=cuda //tensorflow/tools/pip_package:build_pip_package

5. Deployment

Cloud Deployment

Google Cloud Platform (Vertex AI):

from google.cloud import aiplatform

model = aiplatform.Model.upload(
    display_name="tf-model",
    artifact_uri="gs://bucket/model/",
    serving_container_image_uri="us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest"
)

AWS SageMaker:

import sagemaker
from sagemaker.tensorflow import TensorFlowModel

model = TensorFlowModel(
    model_data='s3://bucket/model.tar.gz',
    role=role,
    framework_version='2.13'
)
predictor = model.deploy(initial_instance_count=1, instance_type='ml.m5.xlarge')

Kubernetes (TensorFlow Serving):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: tensorflow-serving
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tensorflow-serving
  template:
    metadata:
      labels:
        app: tensorflow-serving
    spec:
      containers:
      - name: serving
        image: tensorflow/serving:latest
        ports:
        - containerPort: 8501
        env:
        - name: MODEL_NAME
          value: "my_model"
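
The Deployment above is only reachable inside the cluster until a Service exposes it. A minimal sketch whose selector matches the Deployment's labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: tensorflow-serving
spec:
  selector:
    app: tensorflow-serving
  ports:
  - port: 8501
    targetPort: 8501
  type: LoadBalancer
```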

Edge Deployment (TensorFlow Lite)

Convert models for mobile/edge devices:

import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('path/to/saved_model')
tflite_model = converter.convert()

# Save the model
with open('model.tflite', 'wb') as f:
    f.write(tflite_model)
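
The converted model runs on-device through the TFLite interpreter. A self-contained sketch that builds a throwaway Keras model in place of a real SavedModel, converts it with dynamic-range quantization, and runs one inference:

```python
import numpy as np
import tensorflow as tf

# Stand-in model so the example is self-contained (use your own model in practice).
model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tf.keras.layers.Dense(1)])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # dynamic-range quantization
tflite_model = converter.convert()

# Run the converted model with the TFLite interpreter.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

interpreter.set_tensor(inp["index"], np.ones((1, 4), dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(out["index"]).shape)  # (1, 1)
```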

6. Troubleshooting

GPU Not Detected

Issue: tf.config.list_physical_devices('GPU') returns an empty list

Solutions:

  1. Verify NVIDIA drivers: nvidia-smi
  2. Check CUDA/cuDNN compatibility with your TF version at tensorflow.org/install/source
  3. For Docker, install the NVIDIA Container Toolkit, then verify GPU passthrough:
    docker run --gpus all --rm nvidia/cuda:12.3.1-base-ubuntu22.04 nvidia-smi

ImportError: DLL load failed (Windows)

Solution:

  • Install Microsoft Visual C++ Redistributable 2015-2022
  • Ensure Python 64-bit: python -c "import struct; print(struct.calcsize('P') * 8)"

Out of Memory (OOM) Errors

# Cap GPU memory with a fixed-size logical device (limit is in MB)
gpus = tf.config.list_physical_devices('GPU')
if gpus:
    tf.config.set_logical_device_configuration(
        gpus[0],
        [tf.config.LogicalDeviceConfiguration(memory_limit=4096)]
    )

Build Failures (Source)

Issue: Bazel version mismatch

Solution:

# Install Bazelisk, which reads .bazelversion and fetches the exact Bazel release
npm install -g @bazel/bazelisk

# Use bazelisk in place of bazel for the build
bazelisk build //tensorflow/tools/pip_package:build_pip_package

Version Conflicts

Issue: AttributeError: module 'tensorflow' has no attribute 'Session'

Solution: You're running TF 1.x code on TF 2.x. Enable compatibility mode:

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()

Slow Performance on CPU

Solution: Recent TensorFlow releases enable Intel oneDNN optimizations by default on x86. Make sure they are on, and tune thread affinity:

export TF_ENABLE_ONEDNN_OPTS=1
export KMP_AFFINITY=granularity=fine,compact,1,0
export KMP_BLOCKTIME=0

# Alternatively, Intel's optimized pip build (tensorflow-mkl is the conda package name):
pip install intel-tensorflow

Support Resources:

  • Documentation: tensorflow.org
  • Issue tracker: github.com/tensorflow/tensorflow/issues
  • TensorFlow Forum: discuss.tensorflow.org
  • Stack Overflow: questions tagged tensorflow