Linear Regression from Scratch

Learn how to implement a linear regression model using TensorPlay's low-level API and autograd engine.

1. Prepare Synthetic Data

We generate 100 points along the line y = 3x + 2, perturbed by Gaussian noise with standard deviation 0.1.

```python
import tensorplay as tp

# Generate random X data
X = tp.randn(100, 1)

# Generate y data with noise
y = 3 * X + 2 + tp.randn(100, 1) * 0.1
```
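As a sanity check on this setup, the same synthetic data can be built with plain NumPy and solved in closed form via the normal equations; with noise of 0.1 over 100 points, the least-squares estimates land very close to the true w = 3 and b = 2. This is a stand-alone sketch (the NumPy names and the seed are mine, not part of TensorPlay):

```python
import numpy as np

rng = np.random.default_rng(0)

# Same synthetic setup: 100 points on y = 3x + 2 plus Gaussian noise
X = rng.standard_normal((100, 1))
y = 3 * X + 2 + rng.standard_normal((100, 1)) * 0.1

# Augment X with a column of ones so the bias is a second coefficient,
# then solve the least-squares problem A @ theta ≈ y for theta = [w, b]
A = np.hstack([X, np.ones((100, 1))])
theta, *_ = np.linalg.lstsq(A, y, rcond=None)
w_hat, b_hat = theta.ravel()
print(f"w ≈ {w_hat:.2f}, b ≈ {b_hat:.2f}")
```

Gradient descent in the training loop below should converge toward this same closed-form solution.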

2. Initialize Parameters

Next we define the weight w (randomly initialized) and the bias b (initialized to zero). Setting requires_grad=True tells TensorPlay to track operations on these tensors so gradients can be computed during backpropagation.

```python
w = tp.randn(1, 1, requires_grad=True)
b = tp.zeros(1, requires_grad=True)
```

3. Define the Training Loop

We'll use manual gradient descent to update our parameters.

```python
learning_rate = 0.01

for i in range(100):
    # 1. Forward pass: compute predicted y
    y_pred = X @ w + b

    # 2. Compute loss (mean squared error)
    loss = ((y_pred - y)**2).mean()

    # 3. Backward pass: compute gradients
    loss.backward()

    # 4. Update parameters
    # We use tp.no_grad() to perform updates without tracking them in the graph
    with tp.no_grad():
        w -= learning_rate * w.grad
        b -= learning_rate * b.grad

        # Manually zero the gradients after updating
        w.grad.zero_()
        b.grad.zero_()

    if i % 10 == 0:
        print(f"Iteration {i}, Loss: {loss.item():.4f}")

print(f"Result -> w: {w.item():.2f}, b: {b.item():.2f}")
```
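To make explicit what loss.backward() is computing here, the same loop can be written in plain NumPy with the MSE gradients derived by hand: dL/dw = (2/N) Xᵀ(Xw + b − y) and dL/db = (2/N) Σ(Xw + b − y). This is a sketch under my own choices of seed, learning rate, and iteration count, not TensorPlay code:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1))
y = 3 * X + 2 + rng.standard_normal((100, 1)) * 0.1

w = rng.standard_normal((1, 1))
b = np.zeros(1)
learning_rate = 0.1
N = len(X)

for i in range(200):
    y_pred = X @ w + b           # forward pass
    err = y_pred - y
    grad_w = 2 / N * (X.T @ err)  # dL/dw for MSE, derived by hand
    grad_b = 2 / N * err.sum()    # dL/db for MSE
    w -= learning_rate * grad_w   # gradient descent step
    b -= learning_rate * grad_b

print(f"w ≈ {w.item():.2f}, b ≈ {b.item():.2f}")
```

The recovered parameters approach the true values w = 3 and b = 2; autograd spares you from deriving and maintaining these gradient formulas as the model grows.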

Summary

In this tutorial, you learned:

  • How to create tensors with requires_grad=True.
  • How to perform a forward pass and compute loss.
  • How to trigger backpropagation with loss.backward().
  • How to manually update weights and clear gradients using tp.no_grad().
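The backward-pass step summarized above can be verified numerically: for the MSE loss, the analytic gradient should agree with a centered finite-difference estimate. A minimal NumPy sketch (no TensorPlay involved; all names here are mine):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 1))
y = 3 * X + 2 + rng.standard_normal((100, 1)) * 0.1

def mse(w, b):
    """Mean squared error of the linear model X @ w + b against y."""
    return ((X @ w + b - y) ** 2).mean()

w = np.array([[0.5]])
b = np.array([0.1])

# Analytic gradient of the MSE w.r.t. w (what backpropagation computes)
grad_w = (2 / len(X) * (X.T @ (X @ w + b - y))).item()

# Centered finite-difference approximation of the same derivative
eps = 1e-6
dw = np.array([[eps]])
fd_w = (mse(w + dw, b) - mse(w - dw, b)) / (2 * eps)

print(abs(grad_w - fd_w))  # the difference should be tiny
```

This kind of gradient check is a standard way to validate an autograd engine or hand-written backward pass.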

Released under the Apache 2.0 License.